# A probabilistic proof of Cooper & Frieze’s
“First Visit Time Lemma”
Francesco Manzo∗ ∗ Dipartimento di Matematica e Fisica, Università di Roma Tre,
Largo San Leonardo Murialdo 1, 00146 Roma, Italy<EMAIL_ADDRESS>,
Matteo Quattropani† † Dipartimento di Economia e Finanza, LUISS, Viale Romania
32, 00197 Roma, Italy<EMAIL_ADDRESS>and Elisabetta Scoppola# #
Dipartimento di Matematica e Fisica, Università di Roma Tre, Largo San Leonardo
Murialdo 1, 00146 Roma, Italy<EMAIL_ADDRESS>
###### Abstract.
In this short note we present an alternative proof of the so-called _First
Visit Time Lemma_ (FVTL), originally presented by Cooper and Frieze in [21],
and then used and refined in a series of papers by Cooper, Frieze and
coauthors. We work in the original setting, considering a
growing sequence of irreducible Markov chains on $n$ states. We assume that
the chain is rapidly mixing and that its stationary measure has no entry
which is too small or too large. Under these assumptions, the FVTL shows the
exponential decay of the distribution of the hitting time of a given state
$x$—for the chain started at stationarity—up to a small multiplicative
correction. While the proof of the FVTL presented by Cooper and Frieze is
based on tools from complex analysis, and it requires an additional assumption
on a generating function, we present a completely probabilistic proof, relying
on the theory of quasi-stationary distributions and on strong-stationary times
arguments. In addition, under the same set of assumptions, we provide some
quantitative control on the Doob transform of the chain on the complement of
the state $x$.
## 1\. Introduction
In the early 00’s, Cooper and Frieze started a series of papers in which they
compute the first order asymptotics of the cover time of random walks on
different random graphs, see [22, 2, 17, 16, 15, 20, 18]. Given an arbitrary
(possibly directed) graph structure, the cover time is the expected time
needed by a simple random walk to visit every vertex of the graph, maximized
over all possible starting positions. One of the key ingredients of Cooper
and Frieze’s analysis is the so-called _First Visit Time Lemma (FVTL)_ , as
named by the authors in [21]. The same lemma has also been used to prove
different kinds of results, e.g., to estimate the expected meeting time of
multiple random walks on random graphs, see [19]. The lemma deals with the tail
probability of the stopping time $\tau_{x}$, i.e., the time of the first visit
to the state $x$. Consider a sequence of Markov chains on a growing state
space of size $n$. We assume that for every sufficiently large $n$ the chain
is irreducible, admitting a unique invariant measure $\pi=\pi_{n}$. The
framework of the lemma is based on two additional crucial assumptions relating
mixing time and spread of the stationary measure, namely, we assume the
existence of a time $T=T_{n}$ such that
$\max_{x,y}\left|P^{T}(x,y)-\pi(y)\right|=O\left(\frac{1}{n^{3}}\right),$
(1.1)
and
$T\>\max_{x}\pi(x)=o(1),\qquad\min_{x}\pi(x)=\omega(n^{-2}).$ (1.2)
Under the latter assumptions and adding a technical requirement on the
generating function of the recurrences to a fixed state $x$, the authors show
that starting from any state $y$ and for all $t>T$:
$\qquad\mathbb{P}_{y}\left(\text{the process does not visit $x$ in the
interval $[T,t]$ }\right)\sim\left(1-\frac{\pi(x)}{R_{T}(x)}\right)^{t},$
(1.3)
where $R_{T}(x)\geq 1$ is the expected number of returns to $x$ within the
mixing time $T$. The proof of the latter result, as well as the underlying
technical assumptions, evolved with its uses since the first formulation in
[21] to the last (to the best of our knowledge) formulation and proof in [17].
We remark that the assumptions in Eqs. 1.1 and 1.2 are typically satisfied by
random walks on many models of random graphs, e.g., Erdős-Rényi graphs or
configuration models.
The techniques used in the proof by Cooper and Frieze rely on probability
arguments but also on tools from complex analysis and an analytical expansion
of some probability generating functions. In this paper we aim at finding a
probabilistic proof of the FVTL, trying to shed some light on the underlying
phenomenology. On the technical side, the arguments in our proof are
elementary and do not need the additional assumption on the generating
function required in the original Cooper and Frieze’s proof. We refer to
Section 2.2 for a direct comparison of our result with the original one.
The exponential law of hitting times is a classical and widely studied topic in
probability. We just recall here the pioneering book by Keilson [29] and the
beautiful papers by Aldous (see [7] and also [8, 9]). In [7], Aldous
recognizes two regimes in which the latter phenomenon takes place:
1. (1)
A single state $m$ is frequently visited before $\tau_{x}$. When starting from
$m$, the path to $x$ consists of a geometric number of excursions (with mean
$\left(\mathbb{P}_{m}(\tau_{m}>\tau_{x})\right)^{-1}$) from $m$ to $m$ without
touching $x$, before the final journey to $x$. The hitting time is dominated
by the sum of many i.i.d. excursion times and therefore it is almost
exponential [29].
2. (2)
When the chain is rapidly mixing, the distribution at time $t$ is close to
the stationary distribution even when conditioned on $\tau_{x}>t$. This case
is analyzed in [7], where Aldous shows that
$\sup_{t\geq
0}\left|\mathbb{P}_{\pi}(\tau_{x}>t)-e^{-\frac{t}{\mathbb{E}_{\pi}[\tau_{x}]}}\right|\leq\delta,$
where $\delta$ is a function of the mixing time of the chain and of the
expectation $\mathbb{E}_{\pi}[\tau_{x}]$. Aldous shows that, if the hitting of
$x$ is a rare event, i.e., the expectation of $\tau_{x}$ is much larger than
the mixing time of the chain, then $\delta$ is small.
In the early years, these two regimes were considered complementary. One of
the main applications of the scenario in (1) has been the study of
metastability, namely the behavior of processes that are trapped for a long
time in a part of their state space. Before exiting the trap, the process
visits a “metastable state” many times, reaching an apparent, local
equilibrium. In such systems the exit from the trap triggers the relaxation to
equilibrium, so that relaxation to equilibrium can be described as the first
hitting of the complement of the trap. We refer to [35, 12] for a general
introduction to metastability and to [11, 10, 27, 28, 31] for a discussion of
the extension of metastability methods to other regimes.
The FVTL falls within scenario (2), but it was proved by means of a different
set of techniques. Aldous’ result mentioned in (2) has an additive error term
and therefore it cannot provide first-order asymptotics of the exponential
approximation when $t$ is large, in contrast to the FVTL, where a
multiplicative bound is proved.
More recently, these two regimes have begun to be understood in a common
framework, by generalizing the idea of recurrence from points to measures.
The quasi-stationary measure, introduced in the pioneering paper by Darroch
and Seneta [23] (see also [14], and [37] for a more recent bibliography on the
subject), plays the role of a recurrent measure before the hitting. The
hitting to the measure can be studied by extending the theory of strong
stationary times [3, 4, 30] to quasi-stationarity, see [25, 31]. In
particular, the notion of _conditional strong quasi-stationary time_
introduced in [31] has proved useful in providing exact formulas for the
distribution of the first hitting time $\tau_{x}$ starting from an arbitrary
distribution. An introduction to these tools is given in the following
subsection, where a rough estimate on the tail of $\tau_{x}$ is given. Under
the strong hypotheses considered in this paper we can follow a simpler route,
involving the quasi-stationary measure but not requiring the use of
conditional strong quasi-stationary times. Indeed, in our case the stationary
measure and the quasi-stationary one are very close to each other. The more
general results obtained in [31] could be useful in considering more general
regimes with different starting measures. The final part of this paper is
devoted to the discussion of our proof in this perspective.
### 1.1. A first discussion
For any $x\in\mathcal{X}$, let $\tau_{x}$ denote the hitting time of $x$,
namely
$\tau_{x}=\inf\\{t\geq 0\>|\>X_{t}=x\\}.$ (1.4)
We will call $[P]_{x}$ the sub-Markovian probability kernel obtained by
removing the $x$-th row and column from the matrix $P$. We will assume that
$[P]_{x}$ is a primitive sub-Markovian kernel, i.e., all entries of
$([P]_{x})^{m}$ are positive for some $m\in\mathbb{N}$. By Perron-Frobenius
theory (see, e.g., [14]) there exists a unique probability
distribution $\mu^{\star}_{x}$ and a real $\lambda_{x}<1$ such that
$\mu^{\star}_{x}[P]_{x}=\lambda_{x}\mu^{\star}_{x}.$ (1.5)
Moreover, we denote by $\gamma_{x}$ the corresponding right eigenvector, i.e.,
$[P]_{x}\gamma_{x}=\lambda_{x}\gamma_{x},$ (1.6)
normalized by $\left\langle\gamma_{x},\mu_{x}^{\star}\right\rangle=1$.
The probability distribution $\mu^{\star}_{x}$ is called the quasi-stationary
measure, and it is closely related to the exponential behavior of the tail
probability $\mathbb{P}(\tau_{x}>t)$. Indeed, when looking at the evolution of
the process starting from $\mu^{\star}_{x}$, by Eq. 1.5 we deduce
$\mathbb{P}_{\mu^{\star}_{x}}(\tau_{x}>t)=\sum_{z}\mu^{\star}_{x}(z)\mathbb{P}_{z}(\tau_{x}>t)=\sum_{z\not=x}{\mu^{\star}_{x}(z)\sum_{y\not=x}\big{(}[P]_{x}\big{)}^{t}(z,y)}=\lambda_{x}^{t}\sum_{y\not=x}\mu^{\star}_{x}(y)=\lambda_{x}^{t}.$
(1.7)
For more details see [25, 26, 32, 34]; the applications to the metastability
regime are discussed in [27, 28, 31].
The right eigenvector $\gamma_{x}$ defined in Eq. 1.6 controls the dependence
of the probability of the event $\tau_{x}>t$ on the initial distribution.
Indeed this eigenvector is related to the asymptotic ratios of the right tail
probabilities, see [14, Eq. (3.5)]
$\lim_{t\to\infty}\frac{\mathbb{P}_{y}(\tau_{x}>t)}{\mathbb{P}_{z}(\tau_{x}>t)}=\frac{\gamma_{x}(y)}{\gamma_{x}(z)}\qquad
y,z\not=x.$ (1.8)
With this right eigenvector we can construct a _Local Chain_ on
$\mathcal{X}\setminus\\{x\\}$, which is usually referred to as _Doob’s
transform of $X$_. For any $y,z\not=x$, define the stochastic matrix
$\widetilde{P}(z,y):=\frac{\gamma_{x}(y)}{\gamma_{x}(z)}\frac{P(z,y)}{\lambda_{x}}.$
(1.9)
More generally
$\widetilde{P}^{t}(z,y)=\frac{\gamma_{x}(y)}{\gamma_{x}(z)}\frac{\big{(}[P]_{x}\big{)}^{t}(z,y)}{\lambda_{x}^{t}}\qquad\forall
t\geq 0.$ (1.10)
It is immediate to show that $\widetilde{P}$ is a primitive matrix. The
invariant measure of the latter chain is
$\nu(y):=\gamma_{x}(y)\mu^{\star}_{x}(y).$
For the chain $\widetilde{X}$ we define
$\tilde{s}^{z}(t,y):=1-\frac{\widetilde{P}^{t}(z,y)}{\nu(y)}$ (1.11)
and we will call separation distance at time $t$ the quantity $\tilde{s}(t)$
defined as
$\tilde{s}(t):=\sup_{z\not=x}\tilde{s}^{z}(t)\qquad\text{where}\qquad\tilde{s}^{z}(t):=\sup_{y\not=x}\tilde{s}^{z}(t,y).$
(1.12)
Note that $\tilde{s}^{z}(t)\in[0,1]$ and recall that $\tilde{s}(t)$ has the
sub-multiplicative property
$\tilde{s}(t+u)\leq\tilde{s}(t)\tilde{s}(u),$
which in particular implies an exponential decay in time of $\tilde{s}$, see
[30].
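The construction above can be checked numerically. The following sketch (invented toy chain, not from the paper) builds the Doob transform $\widetilde{P}$ of Eq. 1.9 and verifies that it is stochastic with invariant measure $\nu(y)=\gamma_{x}(y)\mu^{\star}_{x}(y)$:

```python
# Sketch (invented toy chain, not from the paper) of the Doob transform
# of Eq. (1.9): a stochastic matrix with invariant measure nu = gamma * mu*.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
x = 0
Q = np.delete(np.delete(P, x, axis=0), x, axis=1)   # [P]_x

def perron(M):
    """Leading eigenvalue and entrywise-positive eigenvector of M."""
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)
    return vals[i].real, np.abs(vecs[:, i].real)

lam, gamma = perron(Q)      # right eigenvector, Eq. (1.6)
_, mu = perron(Q.T)         # left eigenvector, Eq. (1.5)
mu /= mu.sum()              # mu*_x is a probability distribution
gamma /= mu @ gamma         # normalization <gamma_x, mu*_x> = 1

Ptilde = Q * gamma[None, :] / gamma[:, None] / lam  # Eq. (1.9)
nu = gamma * mu

print(np.allclose(Ptilde.sum(axis=1), 1.0), np.allclose(nu @ Ptilde, nu))
```

Stochasticity follows from $[P]_{x}\gamma_{x}=\lambda_{x}\gamma_{x}$, and the invariance of $\nu$ from $\mu^{\star}_{x}[P]_{x}=\lambda_{x}\mu^{\star}_{x}$.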
Consider any initial measure $\alpha$ on $\mathcal{X}\setminus\\{x\\}$ and
define the transformation
$\tilde{\alpha}(y):=\frac{\alpha(y)\gamma_{x}(y)}{\left\langle\alpha,\gamma_{x}\right\rangle},\qquad\forall
y\neq x.$ (1.13)
Then, as shown in [31],
$\displaystyle\mathbb{P}_{\alpha}(\tau_{x}>t)=$ $\displaystyle\sum_{y\neq
x}\sum_{z\neq x}\alpha(z){\big{(}[P]_{x}\big{)}^{t}(z,y)}$ (1.14)
$\displaystyle=$
$\displaystyle\sum_{y\not=x}\sum_{z\not=x}\alpha(z){\gamma_{x}(z)\lambda_{x}^{t}\mu_{x}^{\star}(y)\frac{\widetilde{P}^{t}(z,y)}{\nu(y)}}$
(1.15) $\displaystyle=$
$\displaystyle\lambda_{x}^{t}\sum_{z\not=x}\alpha(z)\gamma_{x}(z)\sum_{y\not=x}\mu^{\star}_{x}(y)(1-\tilde{s}^{z}(t,y))$
(1.16) $\displaystyle=$
$\displaystyle\lambda_{x}^{t}\left\langle\alpha,\gamma_{x}\right\rangle\Big{(}1-\sum_{y\not=x}\mu^{\star}_{x}(y)\tilde{s}^{\tilde{\alpha}}(t,y)\Big{)}$
(1.17)
where we call
$\tilde{s}^{\tilde{\alpha}}(t,y):=\sum_{z\not=x}\tilde{\alpha}(z)\tilde{s}^{z}(t,y)\qquad\hbox{
and}\qquad\tilde{s}^{\tilde{\alpha}}(t):=\sup_{y\not=x}\tilde{s}^{\tilde{\alpha}}(t,y).$
(1.18)
Moreover, again by [31], we know that Eq. 1.17 can be estimated from above and
below by
$\lambda_{x}^{t}\left\langle\alpha,\gamma_{x}\right\rangle\Big{(}1-\tilde{s}^{\tilde{\alpha}}(t)\Big{)}\leq\mathbb{P}_{\alpha}(\tau_{x}>t)\leq\lambda_{x}^{t}\left\langle\alpha,\gamma_{x}\right\rangle\left(1+\tilde{s}^{\tilde{\alpha}}(t)\left(\frac{1}{\min_{y}\gamma_{x}(y)}-1\right)\right).$
(1.19)
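The identity in Eq. 1.17 and the lower bound in Eq. 1.19 can be verified numerically. The following sketch (invented toy chain and arbitrary initial law, not from the paper) computes both sides exactly:

```python
# Sketch (invented toy chain, not from the paper): the exact identity of
# Eq. (1.17) and the lower bound of Eq. (1.19) for P_alpha(tau_x > t).
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
x = 0
Q = np.delete(np.delete(P, x, axis=0), x, axis=1)   # [P]_x

def perron(M):
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)
    return vals[i].real, np.abs(vecs[:, i].real)

lam, gamma = perron(Q)          # lambda_x and right eigenvector gamma_x
_, mu = perron(Q.T)             # left eigenvector mu*_x
mu /= mu.sum()
gamma /= mu @ gamma             # <gamma_x, mu*_x> = 1

Ptilde = Q * gamma[None, :] / gamma[:, None] / lam  # Doob transform, Eq. (1.9)
nu = gamma * mu

alpha = np.array([0.7, 0.3])    # arbitrary starting law on X \ {x}
t = 4
tail = (alpha @ np.linalg.matrix_power(Q, t)).sum()       # P_alpha(tau_x > t)

atilde = alpha * gamma / (alpha @ gamma)                  # Eq. (1.13)
s_ty = 1.0 - (atilde @ np.linalg.matrix_power(Ptilde, t)) / nu
rhs = lam**t * (alpha @ gamma) * (1.0 - mu @ s_ty)        # Eq. (1.17)
lower = lam**t * (alpha @ gamma) * (1.0 - np.max(s_ty))   # Eq. (1.19), lower

print(np.isclose(tail, rhs), lower <= tail + 1e-12)
```

The equality with Eq. 1.17 is algebraic (it holds for every chain), while the lower bound follows because $\sum_{y}\mu^{\star}_{x}(y)\tilde{s}^{\tilde{\alpha}}(t,y)\leq\tilde{s}^{\tilde{\alpha}}(t)$.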
Eq. 1.19 suggests that, in the regime in which $|\mathcal{X}|\to\infty$, the
first order geometric approximation of the tail probability
$\mathbb{P}_{\alpha}(\tau_{x}>t)$ can be obtained. In particular, the
exponentiality immediately follows from Eq. 1.19 for all those Markov chains
$P$, target states $x$, initial distributions $\alpha$ and time $t$ for which
all of the following assumptions hold:
1. (i)
$\tilde{s}^{\tilde{\alpha}}(t)=o(1)$, i.e., $t$ is sufficiently large for the
Doob transform started at $\tilde{\alpha}$ to be well mixed by time $t$;
2. (ii)
$\left\langle\alpha,\gamma_{x}\right\rangle\sim 1$, which occurs in particular
if $\gamma_{x}$ approximates the constant vector;
3. (iii)
$\min_{y}\gamma_{x}(y)=\Omega(1)$, which can be thought of as an additional
uniformity requirement to the one in Item ii.
Despite the intuitions based on Eq. 1.19, we are not going to follow exactly
the heuristic recipe explained in Items i, ii and iii. In fact our focus is on
the special case in which $\alpha=\pi$, which led us along a different
path toward proving exponentiality. Nevertheless, as a byproduct of our proof
of the FVTL we provide uniform upper and lower bounds on the right eigenvector
$\gamma_{x}$. We think those bounds can be of independent interest, since they
can be turned into quantitative information on the structure of the Doob
transform of the process $X$. In particular, for a given model, our bounds
could be useful in verifying the conditions in Items i, ii and iii, and
therefore in finding—for every fixed choice of the initial distribution
$\alpha$—the right first order approximation of the decay of
$\mathbb{P}_{\alpha}(\tau_{x}>t)$.
## 2\. Notation and results
We start by presenting the notation and briefly recalling the basic quantities
introduced in Section 1. We consider a sequence of Markov chains on a growing
state space. Formally:
* •
$\mathcal{X}^{(n)}$ is a state space of size $n$.
* •
$(X^{(n)})_{t\geq 0}$ is a discrete time Markov chain on $\mathcal{X}^{(n)}$.
* •
$\mathbb{P}^{(n)}$ is the probability law of the Markov chain
$(X^{(n)})_{t\geq 0}$, and $\mathbb{E}^{(n)}$ the corresponding expectation.
* •
$P^{(n)}$ is the transition matrix of $(X^{(n)})_{t\geq 0}$, which is assumed
to be ergodic.
* •
$\pi^{(n)}$ is the stationary distribution of $P^{(n)}$.
* •
For any probability distribution $\alpha$ on $\mathcal{X}^{(n)}$ and every
integer $t\geq 0$, we denote by $\mu^{\alpha}_{t}$ the probability distribution
of the chain $X^{(n)}$ starting at $\alpha$ and evolved for $t$ steps, i.e.,
$\mu^{\alpha}_{t}(y):=\sum_{x\in\mathcal{X}^{(n)}}\alpha(x)\big{(}P^{(n)}\big{)}^{t}(x,y),\qquad\forall
y\in\mathcal{X}^{(n)}.$
* •
For all $x\in\mathcal{X}^{(n)}$, $\tau_{x}$ represents the hitting time of
vertex $x$, defined as in Eq. 1.4.
* •
For all $t\geq 0$ and $x\in\mathcal{X}^{(n)}$, we let the symbol
$\zeta_{t}(x)$ denote the random time spent by the process $X^{(n)}$ in the
state $x$ within time $t$, i.e.,
$\zeta_{t}(x):=\sum_{s=0}^{t-1}\mathds{1}_{X^{(n)}_{s}=x}.$ (2.1)
* •
For all $x\in\mathcal{X}^{(n)}$ we denote by $[P^{(n)}]_{x}$ the sub-Markovian
kernel obtained by removing the $x$-th row and column of $P^{(n)}$. The kernel
$[P^{(n)}]_{x}$ is assumed to be irreducible.
* •
For all $x\in\mathcal{X}^{(n)}$, $\lambda_{x}$ denotes the leading
eigenvalue of $[P^{(n)}]_{x}$ and $\mu^{\star}_{x}$ the corresponding left
eigenvector, normalized so that $\mu^{\star}_{x}$ is a probability
distribution over $\mathcal{X}^{(n)}\setminus\\{x\\}$. See Eq. 1.5. We remark
that it follows from the definitions that
$\mathbb{P}_{\mu^{\star}_{x}}(\tau_{x}>t)=\lambda_{x}^{t},\qquad\forall t\geq
0,$ (2.2)
see Eq. 1.7.
* •
For all $x\in\mathcal{X}^{(n)}$, $\gamma_{x}$ denotes the right eigenvector of
$[P^{(n)}]_{x}$ associated to the eigenvalue $\lambda_{x}$. We consider
$\gamma_{x}$ to be normalized so that
$\left\langle\mu_{x}^{\star},\gamma_{x}\right\rangle=1$.
Since we are interested in asymptotic results when $n\to\infty$, the
asymptotic notation will refer to this limit and the explicit dependence on
$n$ will be usually dropped.
We will adopt the usual asymptotic notation $(o,O,\Theta,\omega,\Omega)$ and,
given two functions $f,g:\mathbb{N}\to\mathbb{R}_{+}$, we will use the symbols
$\sim$ and $\lesssim$ with the meaning
$f(n)\sim g(n)\qquad\iff\qquad\lim_{n\to\infty}\frac{f(n)}{g(n)}=1,$
and
$f(n)\lesssim g(n)\qquad\iff\qquad\limsup_{n\to\infty}\frac{f(n)}{g(n)}\leq
1,$
respectively.
### 2.1. Results
We will work under the following asymptotic assumptions for the sequence of
Markov chains: there exist
* •
a real number $c>2$,
* •
a diverging sequence $T=T(n)$,
such that
* (HP 1)
Fast mixing:
$\max_{x,y\in\mathcal{X}}\left|\mu_{T}^{x}(y)-\pi(y)\right|=o(n^{-c}).$
* (HP 2)
Small $\pi_{\max}$:
$T\max_{x\in\mathcal{X}}\pi(x)=o(1).$
* (HP 3)
Large $\pi_{\min}$:
$\min_{x\in\mathcal{X}}\pi(x)=\omega(n^{-2}).$
For any fixed $x\in\mathcal{X}$, we let $R_{T}(x)$ denote the expected number
of returns to $x$ for the Markov chain starting at $x$ within time $T$. More
precisely,
$R_{T}(x)=\sum_{t=0}^{T}\mu_{t}^{x}(x)\geq 1.$ (2.3)
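A small sketch (invented toy chain, not from the paper) of the quantity in Eq. 2.3, computed directly as $R_{T}(x)=\sum_{t=0}^{T}P^{t}(x,x)$:

```python
# Sketch (invented toy chain, not from the paper) of R_T(x), Eq. (2.3):
# the expected number of returns to x within time T, starting at x.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

def expected_returns(P, x, T):
    Pt = np.eye(P.shape[0])        # P^0
    total = 0.0
    for _ in range(T + 1):
        total += Pt[x, x]          # mu_t^x(x) = P^t(x, x)
        Pt = Pt @ P
    return total

R = expected_returns(P, x=0, T=50)
print(R >= 1.0)  # True: the t = 0 term alone contributes 1
```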
The precise statement that we prove is the following
###### Theorem 2.1 (First Visit Time Lemma).
Under the assumptions (HP1), (HP2) and (HP3) for all $x\in\mathcal{X}$, it
holds
$\sup_{t\geq
0}\left|\frac{\mathbb{P}_{\pi}(\tau_{x}>t)}{\lambda_{x}^{t}}-1\right|\longrightarrow
0,$ (2.4)
and
$\left|\frac{\lambda_{x}}{\left(1-\frac{\pi(x)}{R_{T}(x)}\right)}-1\right|\longrightarrow
0.$ (2.5)
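Both conclusions of Theorem 2.1 can be illustrated numerically on a fast-mixing chain. The sketch below (a dense random toy chain, not from the paper; the size $n$, horizon $T$ and tolerances are ad hoc choices) checks that $\mathbb{P}_{\pi}(\tau_{x}>t)$ tracks $\lambda_{x}^{t}$ uniformly in $t$, and that $\lambda_{x}$ is close to $1-\pi(x)/R_{T}(x)$:

```python
# Ad hoc illustration (random toy chain, NOT from the paper) of Theorem 2.1.
import numpy as np

rng = np.random.default_rng(0)
n, x, T = 200, 0, 5
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)      # dense random chain: mixes in O(1) steps

vals, vecs = np.linalg.eig(P.T)        # stationary distribution pi
pi = np.abs(vecs[:, np.argmax(vals.real)].real)
pi /= pi.sum()

Q = np.delete(np.delete(P, x, axis=0), x, axis=1)   # [P]_x
lam = np.max(np.linalg.eigvals(Q).real)             # lambda_x

R_T = sum(np.linalg.matrix_power(P, t)[x, x] for t in range(T + 1))  # Eq. (2.3)

# sup over t of |P_pi(tau_x > t) / lambda_x^t - 1|, as in Eq. (2.4)
v = np.delete(pi, x)
errs = []
for t in range(1, 400):
    v = v @ Q                          # v.sum() = P_pi(tau_x > t)
    errs.append(abs(v.sum() / lam**t - 1.0))

print(max(errs), abs(lam / (1.0 - pi[x] / R_T) - 1.0))  # both small
```

For this chain both errors come out orders of magnitude below the ad hoc tolerance $0.1$; of course the theorem itself is an asymptotic statement as $n\to\infty$.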
We will see in Section 4 that it follows as an easy consequence of Theorem 2.1
that the right-eigenvector $\gamma_{x}$ asymptotically has mean 1 with respect
to the stationary distribution. In other words, the following corollary holds.
###### Corollary 2.2.
Under the same assumptions as in Theorem 2.1: for all $x\in\mathcal{X}$
$\sum_{y\in\mathcal{X}\setminus\\{x\\}}\pi(y)\gamma_{x}(y)\to 1.$ (2.6)
Moreover, we provide some entry-wise upper and lower bounds for the
eigenvector $\gamma_{x}$.
###### Theorem 2.3.
Under the same set of assumptions, for every $x\in\mathcal{X}$:
1. (i)
For all $y\in\mathcal{X}\setminus\\{x\\}$ it holds
$\gamma_{x}(y)\lesssim 1.$
2. (ii)
For all $y\in\mathcal{X}\setminus\\{x\\}$ it holds
$\gamma_{x}(y)\gtrsim\big{[}1-\mathbb{E}_{y}\left[\zeta_{T}(x)\right]\big{]}_{+}.$
###### Remark 2.4.
We remark that the asymptotic lower bound in Theorem 2.3 is in fact not
vacuous for most of the models of random graphs which are known to satisfy the
assumptions of the FVTL. As an example, if $X$ is the simple random walk on a
random regular directed graph of in/out-degree $r$, then—with high probability
with respect to the construction of the environment—for every
$x\in\mathcal{X}$ the quantity $\mathbb{E}_{y}[\zeta_{T}(x)]$ is strictly
smaller than $1$ uniformly in $y\in\mathcal{X}\setminus\\{x\\}$; moreover,
$\mathbb{E}_{y}[\zeta_{T}(x)]=0$ for most $y\in\mathcal{X}\setminus\\{x\\}$.
To see the validity of the latter statement, we refer the reader to [13,
Propositions 4.3 and 4.4].
### 2.2. Comparison with Cooper & Frieze’s lemma
In order to facilitate a direct comparison, we write here—using our
notation—the claim proved by Cooper and Frieze, stressing the differences with
Theorem 2.1.
###### Theorem 2.5 (See Lemma 6 and Corollary 7 in [17].).
Consider a sequence of Markov chains satisfying the assumptions (HP1), (HP2)
and (HP3) with $c=3$. Moreover, let
$a=\frac{1}{KT}$
for a suitably large constant $K$. Fix $x\in\mathcal{X}$ and assume further
that the truncated probability generating function
$\mathbf{R}(z)=\sum_{t=0}^{T-1}P^{t}(x,x)z^{t},\qquad\forall z\in\mathbb{C}$
satisfies
$\min_{|z|\leq 1+a}\mathbf{R}(z)\geq\theta$ (2.7)
for some constant $\theta>0$. Then, for all $y\in\mathcal{X}$ and $t\geq 0$
$\mathbb{P}_{\mu^{y}_{T}}\left(\tau_{x}>t\right)=\big{(}1+O(T\pi(x))\big{)}\tilde{\lambda}_{x}^{t}+o\left(e^{-at/2}\right),$
(2.8)
where
$\tilde{\lambda}_{x}=\left(1+\frac{\pi(x)}{R_{T}(x)(1+O\left(T\pi(x)\right))}\right)^{-1}.$
Even at a first sight, there are three main differences between Theorem 2.5
and Theorem 2.1:
1. (1)
First, our proof does not require the technical assumption in Eq. 2.7.
Indeed, we remark once again that we are not going to use any tool from
complex analysis: our proof is elementary and completely probabilistic in
nature.
2. (2)
Second, the estimate in Eq. 2.8 concerns the tail probability of the hitting
time when the initial measure is the $T$-step evolution starting at any fixed
vertex $y$. This is in fact a minor difference. In Lemma 3.3 we will
show that our estimate in Eq. 2.4 holds even when replacing $\pi$ by
$\mu^{y}_{T}$, for any choice of $y$.
3. (3)
Finally, our result does not take into account the precise magnitude of the
second order corrections. This is because we would like to place the emphasis
of this paper on the underlying phenomenology, trying to keep the paper as
simple and readable as possible. We stress that more precise bounds could be
obtained through the same set of arguments.
### 2.3. Overview of the paper
Section 3 is devoted to the proof of Theorem 2.1. The proof is divided into
several steps. We start by showing a first order approximation for the
expected hitting time of $x$ starting at stationarity, i.e.
$\mathbb{E}_{\pi}[\tau_{x}]\sim R_{T}(x)/\pi(x)$. See Proposition 3.1. In
order to show that the latter expectation coincides at first order with
$\mathbb{E}_{\mu^{\star}_{x}}[\tau_{x}]$ we prove that the tail probability
$\mathbb{P}_{\pi}(\tau_{x}>t)$ is asymptotically larger than or equal to the
tail of the same probability starting at any other measure. This is the
content of Proposition 3.4. To conclude the validity of
$\mathbb{E}_{\pi}[\tau_{x}]\sim\mathbb{E}_{\mu^{\star}_{x}}[\tau_{x}]=(1-\lambda_{x})^{-1},$
(2.9)
we then use a bootstrap argument: we first show in Lemma 3.9 that
$\lambda_{x}^{T}\sim 1$, then—in Proposition 3.8—we show that the latter bound
can be translated into the sharper estimate in Eq. 2.9. Once Eq. 2.9 is
established, the exponential approximation can be obtained by using the
properties of quasi-stationary distributions.
In Section 4 we use the understanding developed in Section 3 to show the
validity of Corollary 2.2 and Theorem 2.3. Namely, we see how the FVTL
reflects on the properties of the first right-eigenvector $\gamma_{x}$.
Finally, in Section 5, we aim at framing the FVTL and its setting in the
language of _conditional strong quasi-stationary times_ introduced in [31].
## 3\. Proof of the FVTL
As mentioned in Section 2.3, our proof of Theorem 2.1 is divided into
several small steps. The first proposition is devoted to the computation of
the average hitting time of $x$ starting at stationarity. The credit for this
result goes to Abdullah, who presented it in his PhD thesis, [1, Lemma 58]. We
repeat the proof here for the reader’s convenience.
###### Proposition 3.1 (see [1]).
For all $x\in\mathcal{X}$
$\mathbb{E}_{\pi}[\tau_{x}]\sim\frac{R_{T}(x)}{\pi(x)}.$ (3.1)
###### Proof.
By [6, Lemma 2.1] we have
$\mathbb{E}_{\pi}[\tau_{x}]=\frac{Z(x,x)}{\pi(x)},$
where $Z$ is the so-called _fundamental matrix_, defined by
$Z(x,x):=\sum_{t=0}^{\infty}\big{(}\mu_{t}^{x}(x)-\pi(x)\big{)}.$ (3.2)
By the submultiplicativity of the sequence
$D(t):=\max_{x,y}\left|\mu_{t}^{x}(y)-\pi(y)\right|,$ (3.3)
i.e.,
$D(t+s)\leq 2D(t)D(s),\qquad\forall t,s>0,$ (3.4)
and thanks to (HP 1), we have
$\max_{x,y}\left|\mu_{kT}^{x}(y)-\pi(y)\right|\leq\left(\frac{2}{n^{c}}\right)^{k},\qquad\forall
k\in\mathbb{N}.$ (3.5)
Hence,
$\displaystyle Z(x,x)=$ $\displaystyle\sum_{t\leq
T}\big{(}\mu_{t}^{x}(x)-\pi(x)\big{)}+O\Big{(}T\sum_{k\geq
1}\left(\frac{2}{n^{c}}\right)^{k}\Big{)}$ $\displaystyle=$ $\displaystyle
R_{T}(x)+O(T\pi(x))+O(Tn^{-c})$ $\displaystyle=$ $\displaystyle
R_{T}(x)(1+o(1)),$
where in the latter asymptotic equality we used $Tn^{-c}\leq T\pi_{\max}$, (HP
2), and the fact that $R_{T}(x)\geq 1$. ∎
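The identity $\mathbb{E}_{\pi}[\tau_{x}]=Z(x,x)/\pi(x)$ used in the proof can be checked numerically. The sketch below (invented toy chain, not from the paper) computes $Z$ through the closed form $Z=(I-P+\mathbf{1}\pi^{\top})^{-1}-\mathbf{1}\pi^{\top}$, which sums the series in Eq. 3.2, and compares with the exact hitting time:

```python
# Sketch (invented toy chain, not from the paper) of the identity
# E_pi[tau_x] = Z(x,x)/pi(x) from [6, Lemma 2.1], with Z as in Eq. (3.2).
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
n, x = P.shape[0], 0

vals, vecs = np.linalg.eig(P.T)
pi = np.abs(vecs[:, np.argmax(vals.real)].real)
pi /= pi.sum()

Pi = np.outer(np.ones(n), pi)               # rank-one projector 1 pi^T
Z = np.linalg.inv(np.eye(n) - P + Pi) - Pi  # Z(x,y) = sum_t (P^t(x,y) - pi(y))

# Exact hitting times: h(y) = E_y[tau_x] solves (I - [P]_x) h = 1, h(x) = 0.
keep = np.arange(n) != x
h = np.linalg.solve(np.eye(n - 1) - P[np.ix_(keep, keep)], np.ones(n - 1))
E_pi_tau = pi[keep] @ h

print(np.isclose(E_pi_tau, Z[x, x] / pi[x]))  # True
```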
###### Remark 3.2.
We remark that, by the _eigentime identity_ (see [6, 36, 33]), the trace of the
fundamental matrix of an irreducible chain coincides with the sum of the
inverse non-null eigenvalues of the generator, which in turn coincides with the
expected hitting time of a state sampled according to the stationary
distribution. Namely, for all $y\in\mathcal{X}$,
$\sum_{x\in\mathcal{X}}\pi(x)\mathbb{E}_{y}\tau_{x}=\sum_{x\in\mathcal{X}}Z(x,x)=\sum_{i=2}^{n}\frac{1}{1-\theta_{i}}$
(3.6)
where
$1=\theta_{1}>\Re(\theta_{2})\geq\dots\geq\Re(\theta_{n})\geq-1$
are the eigenvalues of $P$. By Proposition 3.1 we get that, for all
$y\in\mathcal{X}$,
$\sum_{x\in\mathcal{X}}Z(x,x)\sim\sum_{x\in\mathcal{X}}R_{T}(x).$ (3.7)
In other words, under the assumptions in (HP 1), (HP 2) and (HP 3), the sum of
the inverse eigenvalues of the generator can be well approximated by the sum
of the expected returns within the mixing time.
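The two equalities in Eq. 3.6 can be checked numerically. The sketch below (invented toy chain, not from the paper) verifies that $\sum_{x}\pi(x)\mathbb{E}_{y}[\tau_{x}]$ does not depend on $y$ and equals $\sum_{i\geq 2}1/(1-\theta_{i})$:

```python
# Sketch (invented toy chain, not from the paper) of the eigentime identity
# and random target lemma, Eq. (3.6).
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
n = P.shape[0]

vals, vecs = np.linalg.eig(P.T)
pi = np.abs(vecs[:, np.argmax(vals.real)].real)
pi /= pi.sum()

def hitting_times(P, x):
    """E_y[tau_x] for every y (zero at y = x)."""
    n = P.shape[0]
    keep = np.arange(n) != x
    h = np.zeros(n)
    h[keep] = np.linalg.solve(np.eye(n - 1) - P[np.ix_(keep, keep)],
                              np.ones(n - 1))
    return h

targets = [sum(pi[x] * hitting_times(P, x)[y] for x in range(n))
           for y in range(n)]                       # one value per start y

theta = np.linalg.eigvals(P)
j = int(np.argmin(np.abs(theta - 1.0)))             # drop theta_1 = 1
rhs = sum(1.0 / (1.0 - theta[k]) for k in range(n) if k != j).real

print(np.allclose(targets, rhs))  # True
```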
A crucial fact that will be used repeatedly in what follows is that under the
assumptions in Section 2.1, the tails of $\tau_{x}$ starting at $\mu_{T}^{y}$
and starting at $\pi$ coincide at first order.
###### Lemma 3.3.
For all $x,y\in\mathcal{X}$ and $t>0$ it holds
$\mathbb{P}_{\mu_{T}^{y}}(\tau_{x}>t)\sim\mathbb{P}_{\pi}(\tau_{x}>t).$ (3.8)
###### Proof.
By the assumptions we have that
$\displaystyle\max_{x,y\in\mathcal{X}}\left|\frac{\mu_{T}^{x}(y)}{\pi(y)}-1\right|=$
$\displaystyle\max_{x,y\in\mathcal{X}}\frac{1}{\pi(y)}\left|\mu_{T}^{x}(y)-\pi(y)\right|$
(3.9) $\displaystyle\leq$
$\displaystyle\frac{1}{\min_{y\in\mathcal{X}}\pi(y)}\max_{x,y\in\mathcal{X}}\left|\mu_{T}^{x}(y)-\pi(y)\right|$
(3.10) $\displaystyle\text{By {(HP 1)}}\Longrightarrow\quad\leq$
$\displaystyle\frac{n^{-c}}{\min_{y\in\mathcal{X}}\pi(y)}$ (3.11)
$\displaystyle\text{By {(HP 3)}}\Longrightarrow\quad=$ $\displaystyle
o(n^{-c+2}),$ (3.12)
from which the claim follows. In fact,
$\mathbb{P}_{\mu^{y}_{T}}(\tau_{x}>t)=\sum_{z}\mu^{y}_{T}(z)\mathbb{P}_{z}(\tau_{x}>t)=(1+o(1))\sum_{z}\pi(z)\mathbb{P}_{z}(\tau_{x}>t)=(1+o(1))\mathbb{P}_{\pi}(\tau_{x}>t).\qed$
The next proposition shows that under the assumptions in Section 2.1 the tail
of the hitting time $\tau_{x}$ starting at $\pi$ coincides—asymptotically—with
the tail of $\tau_{x}$ starting at the “furthest” vertex.
###### Proposition 3.4.
For all $x\in\mathcal{X}$ and for all $t>T$ it holds
$\max_{y\in\mathcal{X}}\mathbb{P}_{y}(\tau_{x}>t)\sim\mathbb{P}_{\pi}(\tau_{x}>t).$
(3.13)
We start by proving a preliminary version of Proposition 3.4, which is
expressed by the following lemma.
###### Lemma 3.5.
For all $x\in\mathcal{X}$ and for all $t>T$ it holds
$\max_{y\in\mathcal{X}}\mathbb{P}_{y}(\tau_{x}>t)\lesssim\mathbb{P}_{\pi}(\tau_{x}>t-T).$
(3.14)
###### Proof.
For all $x,y\in\mathcal{X}$ it holds
$\displaystyle\mathbb{P}_{y}(\tau_{x}>t)=$
$\displaystyle\sum_{z\in\mathcal{X}}\mathbb{P}_{y}(X_{T}=z;\>\tau_{x}>T)\mathbb{P}_{z}(\tau_{x}>t-T)$
(3.15) $\displaystyle\leq$
$\displaystyle\sum_{z\in\mathcal{X}}\mathbb{P}_{y}(X_{T}=z)\mathbb{P}_{z}(\tau_{x}>t-T)$
(3.16) $\displaystyle=$
$\displaystyle(1+o(1))\sum_{z\in\mathcal{X}}\pi(z)\mathbb{P}_{z}(\tau_{x}>t-T)$
(3.17) $\displaystyle\sim$ $\displaystyle\mathbb{P}_{\pi}(\tau_{x}>t-T).\qed$
(3.18)
Roughly, given Lemma 3.5, the proof of Proposition 3.4 follows by showing that
the $-T$ term in the right hand side of Eq. 3.14 does not affect the
asymptotic relation. This fact is made rigorous by Lemma 3.6 and the
forthcoming Corollary 3.7. The proof of Lemma 3.6 is based on strong
stationary times techniques (see [5, 24, 30]) and it is inspired by the
recursion in the proof of [28, Lemma 5.4]. Before proceeding with the proof,
we need to recall some definitions and properties of strong stationary times.
A randomized stopping time $\tau^{\alpha}_{\pi}$ is a _Strong Stationary Time
(SST)_ for the Markov chain $X_{t}$ with starting distribution $\alpha$ and
stationary measure $\pi$, if for any $t\geq 0$ and $y\in\mathcal{X}$
$\mathbb{P}_{\alpha}\left(X_{t}=y,\tau^{\alpha}_{\pi}=t\right)=\pi(y)\mathbb{P}_{\alpha}\left(\tau^{\alpha}_{\pi}=t\right).$
This is equivalent to saying that
$\mathbb{P}_{\alpha}\left(X_{t}=y\big{|}\tau^{\alpha}_{\pi}\leq
t\right)=\pi(y).$ (3.19)
If $\tau^{\alpha}_{\pi}$ is a SST then
$\mathbb{P}_{\alpha}(\tau^{\alpha}_{\pi}>t)\geq\text{sep}(\mu^{\alpha}_{t},\pi):=\max_{y\in\mathcal{X}}\Big{[}1-\frac{\mu^{\alpha}_{t}(y)}{\pi(y)}\Big{]},\qquad\forall
t\geq 0,$ (3.20)
and when Eq. 3.20 holds with equality for every $t$, the SST is minimal.
Moreover, a minimal SST always exists, see [30, Prop. 6.14].
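The separation distance on the right hand side of Eq. 3.20 is easy to compute. The sketch below (invented toy chain, not from the paper) shows that $\max_{z}\mathrm{sep}(\mu^{z}_{t},\pi)$ starts at $1$, is non-increasing, and vanishes as $t$ grows, which is the quantity the minimal SST realizes exactly:

```python
# Sketch (invented toy chain, not from the paper) of the separation distance
# sep(mu_t^z, pi) of Eq. (3.20): starts at 1, non-increasing, vanishes.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
n = P.shape[0]

vals, vecs = np.linalg.eig(P.T)
pi = np.abs(vecs[:, np.argmax(vals.real)].real)
pi /= pi.sum()

def sep(t):
    """max_z sep(mu_t^z, pi) = max_{z,y} [1 - P^t(z,y)/pi(y)]."""
    Pt = np.linalg.matrix_power(P, t)
    return float(np.max(1.0 - Pt / pi[None, :]))

s = [sep(t) for t in (0, 5, 10, 20)]
print(s[0] == 1.0, all(a >= b for a, b in zip(s, s[1:])), s[-1] < 1e-5)
```

Monotonicity follows from the submultiplicativity of the separation distance together with $\mathrm{sep}\leq 1$.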
###### Lemma 3.6.
For any $t>0$ it holds
$\frac{\mathbb{P}_{\pi}(\tau_{x}>t+T)}{\mathbb{P}_{\pi}(\tau_{x}>t)}\geq
1-o(1).$ (3.21)
###### Proof.
We first prove the following inequality
$\frac{\mathbb{P}_{\pi}(\tau_{x}>t+T)}{\mathbb{P}_{\pi}(\tau_{x}>t)}\geq
1-\varepsilon\,\cdot\,\frac{\mathbb{P}_{\pi}(\tau_{x}>t-T)}{\mathbb{P}_{\pi}(\tau_{x}>t)},$
(3.22)
with $\varepsilon=o(1)$. We start by rewriting
$\mathbb{P}_{\pi}(\tau_{x}>t+T)=\mathbb{P}_{\pi}(\tau_{x}>t)-\mathbb{P}_{\pi}(\tau_{x}\in[t,t+T]).$
(3.23)
Let $\tau^{z}_{\pi}$ be the minimal SST of the process started at $z$, so
that the last term in Eq. 3.23 can be written as
$\displaystyle\mathbb{P}_{\pi}(\tau_{x}\in[t,t+T])=$
$\displaystyle\sum_{z\in\mathcal{X}}\mathbb{P}_{\pi}(\tau_{x}>t-T,X_{t-T}=z)\mathbb{P}_{z}(\tau_{x}\in[T,2T])$
(3.24) $\displaystyle\leq$
$\displaystyle\sum_{z\in\mathcal{X}}\mathbb{P}_{\pi}(\tau_{x}>t-T,X_{t-T}=z)\Big{[}\mathbb{P}_{z}(\tau_{x}\in[T,2T],\tau_{\pi}^{z}\leq
T)+\mathbb{P}_{z}(\tau_{x}\leq 2T,\tau_{\pi}^{z}>T)\Big{]}$
$\displaystyle\leq$
$\displaystyle\mathbb{P}_{\pi}(\tau_{x}>t-T)\Big{[}\mathbb{P}_{\pi}(\tau_{x}\leq
2T)+\max_{z\in\mathcal{X}}\mathbb{P}_{z}(\tau_{\pi}^{z}>T)\Big{]}.$ (3.25)
Moreover,
$\mathbb{P}_{\pi}(\tau_{x}\leq 2T)=\mathbb{P}_{\pi}\left(\exists s\leq
2T\text{ s.t. }X_{s}=x\right)\leq(2T+1)\pi(x)=:\varepsilon_{1}=o(1),$ (3.26)
where we used the assumption (HP 2). On the other hand, thanks to Lemma 3.3,
we have
$\max_{z\in\mathcal{X}}\mathbb{P}_{z}(\tau_{\pi}^{z}>T)=\max_{z\in\mathcal{X}}\text{sep}(\mu^{z}_{T},\pi)\leq\max_{z\in\mathcal{X}}\left\|\frac{\mu^{z}_{T}}{\pi}-1\right\|_{\infty}=:\varepsilon_{2}=o(1).$
(3.27)
By plugging Eqs. 3.26 and 3.27 into Eq. 3.23 we get
$\mathbb{P}_{\pi}(\tau_{x}>t+T)\geq\mathbb{P}_{\pi}(\tau_{x}>t)-\mathbb{P}_{\pi}(\tau_{x}>t-T)(\varepsilon_{1}+\varepsilon_{2}),$
(3.28)
and so Eq. 3.22 follows with $\varepsilon:=\varepsilon_{1}+\varepsilon_{2}$.
We are now going to exploit Eq. 3.22 to prove Eq. 3.21. Consider the sequence
$(y_{i})_{i\geq 1}$
$y_{i}:=\frac{\mathbb{P}_{\pi}(\tau_{x}>(i+1)T)}{\mathbb{P}_{\pi}(\tau_{x}>iT)}.$
(3.29)
Thanks to Eq. 3.22 we deduce
$\quad y_{i+1}\geq 1-\frac{\varepsilon}{y_{i}}.$ (3.30)
Since eventually $\varepsilon<1/4$, we can define
$\bar{\varepsilon}:=\frac{1}{2}-\sqrt{\frac{1}{4}-\varepsilon}$
and get by induction
$y_{i}\geq 1-\bar{\varepsilon},\qquad\forall i\geq 1.$ (3.31)
Indeed, note that
$\varepsilon=\bar{\varepsilon}(1-\bar{\varepsilon})<\bar{\varepsilon},$
so that
$y_{1}=\frac{\mathbb{P}_{\pi}(\tau_{x}>2T)}{\mathbb{P}_{\pi}(\tau_{x}>T)}=1-\frac{\mathbb{P}_{\pi}(\tau_{x}\in[T,2T])}{\mathbb{P}_{\pi}(\tau_{x}>T)}\geq
1-\frac{(T+1)\pi(x)}{1-(T+1)\pi(x)}\geq
1-\frac{\varepsilon}{1-\varepsilon}\geq 1-\bar{\varepsilon}$
and
$y_{i+1}\geq 1-\frac{\varepsilon}{y_{i}}\geq
1-\frac{\varepsilon}{1-\bar{\varepsilon}}\geq 1-\bar{\varepsilon}.$
The result of the induction in Eq. 3.31 can be immediately extended from times
$iT$ to general times $t=iT+t_{0}$ with $t_{0}<T$ by noting that again we get
$1-\frac{(T+t_{0})\pi(x)}{1-t_{0}\pi(x)}\geq
1-\frac{\varepsilon}{1-\varepsilon}.\qed$
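The fixed-point argument above is elementary and can be checked numerically. The following sketch (purely illustrative, not part of the proof; the value $\varepsilon=0.2<1/4$ is an arbitrary choice) iterates the worst-case recursion $y_{i+1}=1-\varepsilon/y_{i}$ started at the boundary value $1-\bar{\varepsilon}$:

```python
import math

# Illustrative check: if eps < 1/4 and y_{i+1} >= 1 - eps / y_i with
# y_1 >= 1 - eps_bar, then y_i >= 1 - eps_bar for all i, where
# eps_bar = 1/2 - sqrt(1/4 - eps) satisfies eps = eps_bar * (1 - eps_bar).
eps = 0.2
eps_bar = 0.5 - math.sqrt(0.25 - eps)

# Worst case: take equality in the recursion, starting from the boundary.
y = 1.0 - eps_bar
trajectory = [y]
for _ in range(1000):
    y = 1.0 - eps / y
    trajectory.append(y)

# 1 - eps_bar is a fixed point of y -> 1 - eps/y, and the iterates never
# drop below it (up to floating-point noise).
assert all(v >= 1.0 - eps_bar - 1e-12 for v in trajectory)
```

The point $1-\bar{\varepsilon}$ is the larger root of $y^{2}-y+\varepsilon=0$, i.e. a fixed point of the increasing map $y\mapsto 1-\varepsilon/y$, which is what makes the induction work.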
###### Corollary 3.7.
For all $x\in\mathcal{X}$ and for all $t>T$ it holds
$\mathbb{P}_{\pi}(\tau_{x}>t-T)\sim\mathbb{P}_{\pi}(\tau_{x}>t).$ (3.32)
###### Proof.
Notice that it is sufficient to show that
$\frac{\mathbb{P}_{\pi}(\tau_{x}>t)}{\mathbb{P}_{\pi}(\tau_{x}>t-T)}\geq
1-o(1),$ (3.33)
which follows immediately by Lemma 3.6. ∎
###### Proof of Proposition 3.4.
It follows immediately by Lemmas 3.5 and 3.7. ∎
The next proposition relates the expected hitting time of $x$ starting at
stationarity to the same expectation starting at quasi-stationarity.
###### Proposition 3.8.
For all $x\in\mathcal{X}$
$\mathbb{E}_{\pi}[\tau_{x}]\sim\mathbb{E}_{\mu^{\star}_{x}}[\tau_{x}]=\frac{1}{1-\lambda_{x}}.$
Hence, by Proposition 3.1,
$1-\lambda_{x}\sim\frac{\pi(x)}{R_{T}(x)}.$
In order to prove Proposition 3.8, a key ingredient is the following lemma,
which states that $1-\lambda_{x}$ must be much smaller than $T^{-1}$. We will
later see that such a rough bound is sufficient to recover the precise first
order asymptotic of $\lambda_{x}$ by comparing
$\mathbb{E}_{\mu_{x}^{\star}}[\tau_{x}]$ to $\mathbb{E}_{\pi}[\tau_{x}]$.
###### Lemma 3.9.
For all $x\in\mathcal{X}$, it holds
$\lambda_{x}^{T}\sim 1.$ (3.34)
###### Proof.
Start by noting that
$\displaystyle\lambda_{x}^{2T}=$
$\displaystyle\mathbb{P}_{\mu^{\star}_{x}}(\tau_{x}>2T)$ (3.35)
$\displaystyle=$ $\displaystyle\sum_{z\neq
x}\mathbb{P}_{\mu^{\star}_{x}}\left(X_{T}=z,\>\tau_{x}>T\right)\mathbb{P}_{z}\left(\tau_{x}>T\right)$
(3.36) $\displaystyle=$ $\displaystyle\sum_{z\neq
x}\left[\mathbb{P}_{\mu^{\star}_{x}}\left(X_{T}=z\right)-\mathbb{P}_{\mu^{\star}_{x}}\left(\>X_{T}=z,\>\tau_{x}\leq
T\right)\right]\mathbb{P}_{z}\left(\tau_{x}>T\right)$ (3.37)
$\displaystyle\text{Lemma 3.3 }\Longrightarrow\quad\sim$
$\displaystyle\mathbb{P}_{\pi}(\tau_{x}>T)-\sum_{z\neq
x}\mathbb{P}_{\mu^{\star}_{x}}\left(X_{T}=z,\>\tau_{x}\leq
T\right)\mathbb{P}_{z}\left(\tau_{x}>T\right)$ (3.38) $\displaystyle\geq$
$\displaystyle\mathbb{P}_{\pi}(\tau_{x}>T)-\max_{z}\mathbb{P}_{z}\left(\tau_{x}>T\right)\mathbb{P}_{\mu^{\star}_{x}}\left(\tau_{x}\leq
T\right)$ (3.39) $\displaystyle\text{Proposition 3.4}\Longrightarrow\quad\sim$
$\displaystyle\mathbb{P}_{\pi}(\tau_{x}>T)\left(1-\mathbb{P}_{\mu^{\star}_{x}}\left(\tau_{x}\leq
T\right)\right)$ (3.40) $\displaystyle=$
$\displaystyle\mathbb{P}_{\pi}(\tau_{x}>T)\left(1-(1-\lambda_{x}^{T})\right).$
(3.41)
Hence
$\displaystyle\lambda_{x}^{T}\ \gtrsim\mathbb{P}_{\pi}(\tau_{x}>T)\geq
1-(T+1)\pi(x),$ (3.42)
so, by (HP 2) we can conclude that $\lambda_{x}^{T}\sim 1.$ ∎
###### Proof of Proposition 3.8.
We start with the trivial bounds
$\sum_{t=T}^{\infty}\mathbb{P}_{\mu^{\star}_{x}}(\tau_{x}>t)\leq\mathbb{E}_{\mu^{\star}_{x}}[\tau_{x}]\leq
T+\sum_{t=T}^{\infty}\mathbb{P}_{\mu^{\star}_{x}}(\tau_{x}>t).$ (3.43)
We further notice that
$\displaystyle\sum_{t=T}^{\infty}\mathbb{P}_{\mu^{\star}_{x}}(\tau_{x}>t)=$
$\displaystyle\sum_{z}\mathbb{P}_{\mu^{\star}_{x}}(X_{T}=z,\tau_{x}>T)\sum_{t=0}^{\infty}\mathbb{P}_{z}(\tau_{x}>t)$
(3.44) $\displaystyle=$
$\displaystyle\sum_{z}\left[\mathbb{P}_{\mu^{\star}_{x}}(X_{T}=z)-\mathbb{P}_{\mu^{\star}_{x}}(X_{T}=z,\tau_{x}\leq
T)\right]\sum_{t=0}^{\infty}\mathbb{P}_{z}(\tau_{x}>t)$ (3.45)
$\displaystyle=$
$\displaystyle\sum_{z}\left[\pi(z)(1+o(1))-\mathbb{P}_{\mu^{\star}_{x}}(X_{T}=z,\tau_{x}\leq
T)\right]\sum_{t=0}^{\infty}\mathbb{P}_{z}(\tau_{x}>t)$ (3.46)
$\displaystyle=$
$\displaystyle(1+o(1))\mathbb{E}_{\pi}[\tau_{x}]-\sum_{z}\mathbb{P}_{\mu^{\star}_{x}}(X_{T}=z,\tau_{x}\leq
T)\sum_{t=0}^{\infty}\mathbb{P}_{z}(\tau_{x}>t).$ (3.47)
It follows immediately by Eq. 3.47 that
$\sum_{t=T}^{\infty}\mathbb{P}_{\mu^{\star}_{x}}(\tau_{x}>t)\leq(1+o(1))\mathbb{E}_{\pi}[\tau_{x}].$
(3.48)
On the other hand,
$\sum_{z}\mathbb{P}_{\mu^{\star}_{x}}(X_{T}=z,\tau_{x}\leq
T)\sum_{t=0}^{\infty}\mathbb{P}_{z}(\tau_{x}>t)\leq\mathbb{P}_{\mu_{x}^{\star}}(\tau_{x}\leq
T)\cdot\sum_{t=0}^{\infty}\max_{z}\mathbb{P}_{z}(\tau_{x}>t)$ (3.49)
and thanks to Proposition 3.4 we get
$\displaystyle\sum_{t=0}^{\infty}\max_{z}\mathbb{P}_{z}(\tau_{x}>t)\leq$
$\displaystyle
T+\sum_{t=T}^{\infty}\mathbb{P}_{\pi}(\tau_{x}>t)=(1+o(1))\mathbb{E}_{\pi}[\tau_{x}].$
(3.50)
At this point, the proof is complete since
$\mathbb{P}_{\mu_{x}^{\star}}(\tau_{x}\leq T)=1-\lambda_{x}^{T}=o(1),$ (3.51)
where the latter asymptotics follows from Lemma 3.9. ∎
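The identity $\mathbb{E}_{\mu^{\star}_{x}}[\tau_{x}]=(1-\lambda_{x})^{-1}$ rests on the fact that, started from the quasi-stationary distribution, $\tau_{x}$ is geometric of parameter $1-\lambda_{x}$. The following numerical sketch (illustrative only; the $5$-state random kernel is an arbitrary choice) verifies this on a small chain:

```python
import numpy as np

# Illustrative check: started from the quasi-stationary distribution mu*_x,
# P_{mu*}(tau_x > t) = lambda_x^t, hence E_{mu*}[tau_x] = 1/(1 - lambda_x).
rng = np.random.default_rng(0)
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)                    # a random irreducible kernel
x = 0
Px = np.delete(np.delete(P, x, axis=0), x, axis=1)   # sub-Markovian [P]_x

eigvals, eigvecs = np.linalg.eig(Px.T)
k = np.argmax(eigvals.real)
lam = eigvals[k].real                                # Perron eigenvalue lambda_x
mu_star = np.abs(eigvecs[:, k].real)
mu_star /= mu_star.sum()                             # quasi-stationary distribution

for t in range(1, 20):
    survival = mu_star @ np.linalg.matrix_power(Px, t) @ np.ones(4)
    assert abs(survival - lam**t) < 1e-10            # geometric tail
```

Summing the geometric tail then gives $\mathbb{E}_{\mu^{\star}_{x}}[\tau_{x}]=\sum_{t\geq 0}\lambda_{x}^{t}=(1-\lambda_{x})^{-1}$.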
We are now in shape to prove the main result.
###### Proof of Theorem 2.1.
We start by bounding each entry of the $T$-step evolution of the quasi-stationary measure. From below, we have the trivial bound: for all
$x,y\in\mathcal{X}$
$\mu_{T}^{\mu^{\star}_{x}}(y)\geq\lambda^{T}_{x}\mu_{x}^{\star}(y).$ (3.52)
The latter immediately implies that for all $x\in\mathcal{X}$ and $t>0$ it
holds
$\mathbb{P}_{\pi}(\tau_{x}>t)\gtrsim\lambda_{x}^{t+T}\sim\lambda_{x}^{t}.$
(3.53)
In fact, by Lemma 3.3,
$\mathbb{P}_{\pi}(\tau_{x}>t)\sim\mathbb{P}_{\mu_{T}^{\mu_{x}^{\star}}}(\tau_{x}>t)\geq\lambda_{x}^{T}\mathbb{P}_{\mu_{x}^{\star}}(\tau_{x}>t)=\lambda_{x}^{t+T}.$
(3.54)
To conclude the proof, we show a matching upper bound. Component-wise, we can
upper bound
$\displaystyle\mu_{T}^{\mu_{x}^{\star}}(y)=$
$\displaystyle\lambda_{x}^{T}\mu_{x}^{\star}(y)+(1-\lambda_{x})\sum_{s=1}^{T}\lambda_{x}^{s}\mu_{T-s}^{x}(y)$
(3.55) $\displaystyle\leq$
$\displaystyle\lambda_{x}^{T}\mu_{x}^{\star}(y)+(1-\lambda_{x})\mathbb{E}_{x}[\zeta_{T}(y)],$
(3.56)
where $\zeta_{T}(y)$ denotes the local time spent by the chain in the state
$y$ within time $T$, i.e.
$\zeta_{T}(y):=\sum_{s=1}^{T}\mathds{1}_{X_{s}=y}.$ (3.57)
Notice that for all $x\in\mathcal{X}$ it holds
$\sum_{y\in\mathcal{X}}\mathbb{E}_{x}[\zeta_{T}(y)]=T.$ (3.58)
Hence
$\displaystyle\mathbb{P}_{\pi}(\tau_{x}>t)\sim$
$\displaystyle\mathbb{P}_{\mu_{T}^{\mu_{x}^{\star}}}(\tau_{x}>t)$ (3.59)
$\displaystyle\leq$
$\displaystyle\sum_{y\in\mathcal{X}}\lambda_{x}^{T}\mu_{x}^{\star}(y)\mathbb{P}_{y}(\tau_{x}>t)+(1-\lambda_{x})\sum_{y\in\mathcal{X}}\mathbb{E}_{x}[\zeta_{T}(y)]\mathbb{P}_{y}(\tau_{x}>t)$
(3.60) $\displaystyle\leq$
$\displaystyle\lambda_{x}^{t+T}+(1-\lambda_{x})T\max_{y}\mathbb{P}_{y}(\tau_{x}>t)$
(3.61) $\displaystyle=$
$\displaystyle\lambda_{x}^{t+T}+o\left(\mathbb{P}_{\pi}(\tau_{x}>t)\right)$
(3.62)
where in the latter asymptotic equality we used Lemma 3.9 and Proposition 3.4. We then
conclude that for all $x\in\mathcal{X}$ and $t>T$ it holds
$\mathbb{P}_{\pi}(\tau_{x}>t)\lesssim\lambda_{x}^{t}.\qed$
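The statement of Theorem 2.1 can be visualized numerically. In the sketch below (illustrative only), a random dense kernel on $n=100$ states plays the role of a rapidly mixing chain with $\pi(x)=O(1/n)$, and the ratio $\mathbb{P}_{\pi}(\tau_{x}>t)/\lambda_{x}^{t}$ stays close to $1$:

```python
import numpy as np

# Illustrative check of Theorem 2.1: on a fast-mixing chain with small pi(x),
# P_pi(tau_x > t) is close to lambda_x^t up to a small multiplicative error.
n, x = 100, 0
rng = np.random.default_rng(1)
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)

# stationary distribution pi (left Perron eigenvector of P)
w, V = np.linalg.eig(P.T)
pi = np.abs(V[:, np.argmax(w.real)].real)
pi /= pi.sum()

Px = np.delete(np.delete(P, x, axis=0), x, axis=1)   # sub-Markovian [P]_x
lam = np.max(np.linalg.eigvals(Px).real)             # lambda_x
pi_restricted = np.delete(pi, x)

# P_pi(tau_x > t) = sum_{y != x} pi(y) * P_y(X_s != x for all s <= t)
ratios = []
Pt = np.eye(n - 1)
for t in range(1, 60):
    Pt = Pt @ Px
    ratios.append((pi_restricted @ Pt @ np.ones(n - 1)) / lam**t)
# the ratios should hover near 1, with a deviation of order pi(x)
```

Here the random dense kernel is a stand-in for the paper's assumptions (fast mixing, no abnormally small or large stationary entries); the deviation of the ratio from $1$ is of order $\pi(x)\approx 1/n$.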
## 4\. Controlling the Doob’s transform
We start the section by showing that the unique vector $\gamma_{x}$ defined by
the requirements
$\lambda_{x}\gamma_{x}=[P]_{x}\gamma_{x},\qquad\left\langle\mu^{\star}_{x},\gamma_{x}\right\rangle=1,$
(4.1)
can be equivalently characterized by the limits
$\gamma_{x}(y)=\lim_{t\to\infty}\frac{\mathbb{P}_{y}(\tau_{x}>t)}{\lambda_{x}^{t}},\qquad\forall
y\neq x.$ (4.2)
In fact, it is an immediate consequence of Eq. 1.8 and $|\mathcal{X}|<\infty$
that for every pair of measures $\alpha,\alpha^{\prime}$ on $\mathcal{X}$, defining
$\gamma_{x}(x)=0$ and assuming $\alpha\neq\delta_{x}$, it holds
$\frac{\left\langle\gamma_{x},\alpha^{\prime}\right\rangle}{\left\langle\gamma_{x},\alpha\right\rangle}=\lim_{t\to\infty}\frac{\mathbb{P}_{\alpha^{\prime}}(\tau_{x}>t)}{\mathbb{P}_{\alpha}(\tau_{x}>t)}.$
(4.3)
Hence, choosing $\alpha=\mu_{x}^{\star}$ and $\alpha^{\prime}=\delta_{y}$ in
the latter display we get Eq. 4.2. Moreover, choosing $\alpha=\mu_{x}^{\star}$
and $\alpha^{\prime}=\pi$ and making use of Theorem 2.1 we get indeed the
claim in Corollary 2.2.
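The limit characterization in Eq. 4.2 can be tested numerically via the spectral decomposition $[P]_{x}^{t}\approx\lambda_{x}^{t}\,\gamma_{x}\,\mu_{x}^{\star\,\top}$, valid for large $t$ under the normalization $\langle\mu^{\star}_{x},\gamma_{x}\rangle=1$. The following sketch (illustrative only, on a small random chain) does so:

```python
import numpy as np

# Illustrative check of Eq. 4.2: gamma_x(y) = lim_t P_y(tau_x > t) / lambda_x^t.
rng = np.random.default_rng(2)
P = rng.random((6, 6))
P /= P.sum(axis=1, keepdims=True)
x = 0
Px = np.delete(np.delete(P, x, axis=0), x, axis=1)

# left Perron data: lambda_x and the quasi-stationary distribution mu*_x
wl, Vl = np.linalg.eig(Px.T)
k = np.argmax(wl.real)
lam = wl[k].real
mu_star = np.abs(Vl[:, k].real)
mu_star /= mu_star.sum()

# right Perron eigenvector gamma_x, normalized by <mu*_x, gamma_x> = 1
wr, Vr = np.linalg.eig(Px)
gamma = np.abs(Vr[:, np.argmax(wr.real)].real)
gamma /= mu_star @ gamma

t = 200                                               # proxy for t -> infinity
surv = np.linalg.matrix_power(Px, t) @ np.ones(5)     # P_y(tau_x > t), y != x
assert np.allclose(surv / lam**t, gamma, rtol=1e-5)
```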
We now aim at proving Theorem 2.3. We discuss the upper and the lower bound
separately. In order to ease the reading, in what follows we consider the
target vertex, $x$, to be fixed.
###### Lemma 4.1.
For all $\varepsilon>0$ and $x\in\mathcal{X}$ it holds
$\max_{y\in\mathcal{X}\setminus\\{x\\}}\gamma_{x}(y)\leq 1+\varepsilon$ (4.4)
###### Proof.
Rewrite
$\displaystyle\max_{y\in\mathcal{X}\setminus\\{x\\}}\mathbb{P}_{y}(\tau_{x}>t)\leq$
$\displaystyle\max_{y\in\mathcal{X}\setminus\\{x\\}}\mathbb{P}_{y}\left(\tau_{x}>t;\>\tau_{\pi}^{y}\leq
T\right)+\max_{y\in\mathcal{X}\setminus\\{x\\}}\mathbb{P}_{y}\left(\tau_{x}>t;\>\tau_{\pi}^{y}>T\right)$
(4.5) $\displaystyle\leq$
$\displaystyle\mathbb{P}_{\pi}\left(\tau_{x}>t-T\right)+\max_{y\in\mathcal{X}\setminus\\{x\\}}\mathbb{P}_{y}\left(\tau_{x}>t;\>\tau_{\pi}^{y}>T\right).$
(4.6)
We aim at showing that
$\max_{y\in\mathcal{X}\setminus\\{x\\}}\mathbb{P}_{y}\left(\tau_{x}>t;\>\tau_{\pi}^{y}>T\right)=o\big{(}\mathbb{P}_{\pi}\left(\tau_{x}>t-T\right)\big{)}.$
(4.7)
We decompose the latter according to the position of the chain at time $T$, i.e.,
$\displaystyle\max_{y\in\mathcal{X}\setminus\\{x\\}}\mathbb{P}_{y}\left(\tau_{x}>t;\>\tau_{\pi}^{y}>T\right)=$
$\displaystyle\max_{y\in\mathcal{X}\setminus\\{x\\}}\sum_{z\in\mathcal{X}\setminus\\{x\\}}\mathbb{P}_{y}\left(\tau_{x}>T;\>X_{T}=z;\>\tau_{\pi}^{y}>T\right)\>\mathbb{P}_{z}\left(\tau_{x}>t-T\right)$
(4.8) $\displaystyle\leq$
$\displaystyle\bigg{(}\max_{z\in\mathcal{X}\setminus\\{x\\}}\mathbb{P}_{z}\left(\tau_{x}>t-T\right)\bigg{)}\cdot\bigg{(}\max_{y\in\mathcal{X}\setminus\\{x\\}}\mathbb{P}_{y}(\tau_{\pi}^{y}>T)\bigg{)}$
(4.9) $\displaystyle\sim$
$\displaystyle\>\mathbb{P}_{\pi}\left(\tau_{x}>t-T\right)\cdot\max_{y\in\mathcal{X}}\mathbb{P}_{y}(\tau_{\pi}^{y}>T)$
(4.10) $\displaystyle=$
$\displaystyle\>o\big{(}\mathbb{P}_{\pi}\left(\tau_{x}>t-T\right)\big{)}.$
(4.11)
By inserting the bounds in Eqs. 4.6 and 4.7 into Eq. 4.2 we deduce that
$\displaystyle\gamma_{x}(y)=\lim_{t\to\infty}\frac{\mathbb{P}_{y}(\tau_{x}>t)}{\lambda_{x}^{t}}\lesssim$
$\displaystyle\lim_{t\to\infty}\frac{\mathbb{P}_{\pi}(\tau_{x}>t-T)}{\lambda_{x}^{t}}=1+o(1).\qed$
(4.12)
###### Lemma 4.2.
For all $\varepsilon>0$ and $x,y\in\mathcal{X}$ with $x\neq y$ it holds
$\gamma_{x}(y)\geq 1-\varepsilon-\mathbb{E}_{y}[\zeta_{T}(x)].$ (4.13)
###### Proof.
By the same argument of the proof of Lemma 4.1 it is sufficient to show that
for all $\varepsilon>0$
$\mathbb{P}_{y}(\tau_{x}>t)\geq(1-\varepsilon-\mathbb{E}_{y}[\zeta_{T}(x)])\mathbb{P}_{\pi}\left(\tau_{x}>t\right).$
(4.14)
Rewrite
$\displaystyle\mathbb{P}_{y}(\tau_{x}>t)\geq$
$\displaystyle\mathbb{P}_{y}\left(\tau_{x}>t;\>\tau_{\pi}^{y}\leq T\right)$
(4.15) $\displaystyle=$ $\displaystyle\sum_{s\leq
T}\mathbb{P}_{y}\left(\tau_{x}>s;\>\tau_{\pi}^{y}=s\right)\mathbb{P}_{\pi}(\tau_{x}>t-s)$
(4.16) $\displaystyle\geq$
$\displaystyle\mathbb{P}_{\pi}(\tau_{x}>t)\sum_{s\leq
T}\mathbb{P}_{y}\left(\tau_{x}>s;\>\tau_{\pi}^{y}=s\right)$ (4.17)
$\displaystyle=$
$\displaystyle\mathbb{P}_{\pi}(\tau_{x}>t)\mathbb{P}_{y}(\tau_{x}>\tau_{\pi}^{y};\tau_{\pi}^{y}\leq
T)$ (4.18)
We are left with showing that
$\displaystyle\mathbb{P}_{y}(\tau_{x}>\tau_{\pi}^{y};\tau_{\pi}^{y}\leq
T)\geq$
$\displaystyle\mathbb{P}_{y}(\tau_{x}>T)-\mathbb{P}_{y}(\tau_{\pi}^{y}>T)$
(4.19) $\displaystyle\geq$ $\displaystyle 1-\mathbb{P}_{y}(\tau_{x}\leq
T)-\varepsilon$ (4.20) $\displaystyle=$ $\displaystyle
1-\varepsilon-\sum_{s\leq T}\mathbb{P}_{y}(\tau_{x}=s)$ (4.21)
$\displaystyle\geq$ $\displaystyle 1-\varepsilon-\sum_{s\leq
T}\mathbb{P}_{y}(X_{s}=x)$ (4.22) $\displaystyle=$ $\displaystyle
1-\varepsilon-\mathbb{E}_{y}[\zeta_{T}(x)].\qed$ (4.23)
## 5\. A random time perspective on the FVTL
Besides the rough bounds in Eq. 1.19, it is possible to derive a probabilistic
identity that characterizes the tail probability of the event $\tau_{x}>t$ when the
Markov chain starts at $\alpha$. In order to provide such a representation, the
notion of _conditional strong quasi-stationary time_ was introduced in [31] as an
extension of the idea of _strong stationary time_ introduced in [5], see
also [24, 30]. In this last section, we aim at showing how the assumptions
leading to the validity of the FVTL reflect on the theory of CSQST and on the
mixing behavior of the Doob's transform.
Consider an irreducible Markovian kernel $P$ and a state $x\in\mathcal{X}$
such that $[P]_{x}$ is irreducible and sub-Markovian. A randomized stopping
time $\tau^{\alpha}_{\star}$ is a _Conditional Strong Quasi Stationary Time
(CSQST)_ if for any $y\in\mathcal{X}\setminus\\{x\\}$, and $t\geq 0$
$\mathbb{P}_{\alpha}(X_{t}=y,\,\tau^{\alpha}_{\star}=t)=\mu^{\star}_{x}(y)\mathbb{P}_{\alpha}(\tau^{\alpha}_{\star}=t<\tau_{x}).$
(5.1)
In other words, $\tau^{\alpha}_{\star}$ is a CSQST if for any
$y\in\mathcal{X}\setminus\\{x\\}$, and $t\geq 0$
$\mathbb{P}_{\alpha}\left(X^{\alpha}_{t}=y,\tau^{\alpha}_{\star}=t\ |\
t<\tau_{x}\right)=\mu^{\star}_{x}(y)\mathbb{P}_{\alpha}\left(\tau^{\alpha}_{\star}=t\
|\ t<\tau_{x}\right)$ (5.2)
which is equivalent to
$\mathbb{P}_{\alpha}\left(X_{\tau^{\alpha}_{\star}}=y\mid\tau^{\alpha}_{\star}<\tau_{x}\right)=\mu^{\star}_{x}(y).$
(5.3)
By Eq. 1.19 we deduce that for any initial distribution $\alpha$ on
$\mathcal{X}\setminus\\{x\\}$ and for any CSQST $\tau^{\alpha}_{\star}$ we
have for any $t\geq 0$:
$\mathbb{P}_{\alpha}(\tau^{\alpha}_{\star}\leq t<\tau_{x})=\sum_{u\leq
t}\lambda_{x}^{t-u}\mathbb{P}_{\alpha}(\tau^{\alpha}_{\star}=u<\tau_{x})\leq\lambda_{x}^{t}\left\langle\alpha,\gamma_{x}\right\rangle(1-\tilde{s}^{\tilde{\alpha}}(t)).$
This suggests a new notion of minimality: a conditional strong quasi
stationary time $\tau^{\alpha}_{\star}$ is _minimal_ if for any $t\geq 0$
$\mathbb{P}_{\alpha}(\tau^{\alpha}_{\star}\leq
t<\tau_{x})=\lambda_{x}^{t}\left\langle\alpha,\gamma_{x}\right\rangle(1-\tilde{s}^{\tilde{\alpha}}(t)).$
The existence of minimal CSQSTs is proved in [31], where the validity of the
following representation formula is also shown: for any minimal CSQST
$\tau_{\star}^{\alpha}$ and for any $t\geq 0$:
$\mathbb{P}_{\alpha}\Big{(}\tau_{x}>t\Big{)}=\lambda_{x}^{t}\left\langle\alpha,\gamma_{x}\right\rangle(1-\tilde{s}^{\tilde{\alpha}}(t))+\mathbb{P}_{\alpha}\Big{(}\tau^{\alpha}_{\star,x}>t\Big{)},$
(5.4)
where
$\tau^{\alpha}_{\star,x}:={\tau_{x}\wedge\tau^{\alpha}_{\star}}.$
As a byproduct of the FVTL and of Eq. 5.4 it is possible to show the following
result.
###### Proposition 5.1.
Under the assumptions of the FVTL there exists a minimal CSQST
$\tau_{\star,x}^{\pi}$ such that
$\mathbb{P}_{\pi}(\tau_{\star,x}^{\pi}=0)\to 1.$ (5.5)
In physical terms, Proposition 5.1 confirms once again the idea that, under
the assumptions of the FVTL, the stationary and the quasi-stationary
distributions coincide in the thermodynamic limit.
###### Proof.
We start by rewriting the representation formula in Eq. 5.4 in the case
$\alpha=\pi$,
$\mathbb{P}_{\pi}\Big{(}\tau_{x}>t\Big{)}=\lambda_{x}^{t}\left\langle\pi,\gamma_{x}\right\rangle(1-\tilde{s}^{\tilde{\pi}}(t))+\mathbb{P}_{\pi}\Big{(}\tau^{\pi}_{\star,x}>t\Big{)}.$
(5.6)
By the FVTL in Theorem 2.1 we know that Eq. 5.6 implies that, uniformly in
$t\geq 0$,
$\lambda_{x}^{t}\sim\lambda_{x}^{t}\left\langle\pi,\gamma_{x}\right\rangle(1-\tilde{s}^{\tilde{\pi}}(t))+\mathbb{P}_{\pi}\Big{(}\tau^{\pi}_{\star,x}>t\Big{)}.$
(5.7)
Thanks to Corollary 2.2 we can simplify the latter Eq. 5.7 and get
$\sup_{t\geq
0}\left|\frac{\mathbb{P}_{\pi}\Big{(}\tau^{\pi}_{\star,x}>t\Big{)}}{\lambda^{t}_{x}}-\tilde{s}^{\tilde{\pi}}(t)\right|=o(1).$
(5.8)
We now show that the second term on the left-hand side of Eq. 5.8 is $o(1)$
uniformly in $t\geq 0$, which implies that the same holds for the first term.
In fact, by the monotonicity of the separation distance, the estimate
$\sup_{t\geq 0}\tilde{s}^{\tilde{\pi}}(t)=o(1),$ (5.9)
is an immediate consequence of
$\tilde{s}^{\tilde{\pi}}(0)=o(1).$ (5.10)
In order to prove Eq. 5.10, start by noting that the stationary distribution
of the Doob’s transform is given by
$\nu_{x}(y)=\mu_{x}^{\star}(y)\gamma_{x}(y),$ (5.11)
while its starting distribution is, by Eq. 1.13,
$\tilde{\pi}(y)=\frac{\pi(y)\gamma_{x}(y)}{\left\langle\pi,\gamma_{x}\right\rangle}\sim\pi(y)\gamma_{x}(y),$
(5.12)
where in the latter approximation we used again Corollary 2.2. Hence,
$\text{sep}(\nu_{x},\tilde{\pi})=\max_{y\in\mathcal{X}\setminus\\{x\\}}\Big{[}1-\frac{\nu_{x}(y)}{\tilde{\pi}(y)}\Big{]}\sim\max_{y\in\mathcal{X}\setminus\\{x\\}}\Big{[}1-\frac{\mu^{\star}_{x}(y)}{\pi(y)}\Big{]}.$
(5.13)
Therefore, to prove Eq. 5.10, it suffices to show that
$\displaystyle\max_{y\in\mathcal{X}\setminus\\{x\\}}\Big{[}1-\frac{\mu^{\star}_{x}(y)}{\pi(y)}\Big{]}=o(1).$
(5.14)
Notice that for all $y\in\mathcal{X}\setminus\\{x\\}$ it holds
$\displaystyle\mu_{x}^{\star}(y)=$ $\displaystyle\lambda_{x}^{-T}\sum_{z\neq
x}\mu_{x}^{\star}(z)\big{(}[P]_{x}\big{)}^{T}(z,y)$ (5.15) $\displaystyle\leq$
$\displaystyle\lambda_{x}^{-T}\sum_{z\neq x}\mu_{x}^{\star}(z)P^{T}(z,y)$
(5.16)
$\displaystyle\text{Lemma 3.3}\Longrightarrow\quad=$
$\displaystyle\lambda_{x}^{-T}\sum_{z\neq x}\mu_{x}^{\star}(z)\pi(y)(1+o(1))$
(5.17) $\displaystyle\sim$ $\displaystyle\lambda_{x}^{-T}\pi(y)(1+o(1))$
(5.18) $\displaystyle\text{Lemma 3.9}\Longrightarrow\quad\sim$ $\displaystyle\pi(y).$ (5.19)
The latter chain of asymptotic equalities shows that Eq. 5.14 holds, which in
turn implies Eq. 5.9. Therefore, thanks to Eq. 5.8, we conclude that for every
minimal CSQST $\tau_{\star,x}^{\pi}$
$\sup_{t\geq 0}\mathbb{P}_{\pi}(\tau_{\star,x}^{\pi}>t)=o(1).$ (5.20)
∎
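For completeness, the identity in Eq. 5.11 can be checked numerically: the sketch below (illustrative only, on a small random chain) builds the Doob's transform $\tilde{P}(y,z)=[P]_{x}(y,z)\,\gamma_{x}(z)/(\lambda_{x}\gamma_{x}(y))$ and verifies that it is a Markov kernel with stationary distribution $\nu_{x}=\mu_{x}^{\star}\gamma_{x}$:

```python
import numpy as np

# Illustrative check of Eq. 5.11: the Doob transform of [P]_x is a genuine
# Markov kernel whose stationary distribution is nu_x(y) = mu*_x(y) gamma_x(y).
rng = np.random.default_rng(3)
P = rng.random((6, 6))
P /= P.sum(axis=1, keepdims=True)
x = 0
Px = np.delete(np.delete(P, x, axis=0), x, axis=1)

wl, Vl = np.linalg.eig(Px.T)
lam = np.max(wl.real)
mu_star = np.abs(Vl[:, np.argmax(wl.real)].real)
mu_star /= mu_star.sum()
wr, Vr = np.linalg.eig(Px)
gamma = np.abs(Vr[:, np.argmax(wr.real)].real)
gamma /= mu_star @ gamma                      # <mu*_x, gamma_x> = 1

P_doob = Px * gamma[None, :] / (lam * gamma[:, None])
nu = mu_star * gamma

assert np.allclose(P_doob.sum(axis=1), 1.0)   # rows sum to 1: Px @ gamma = lam * gamma
assert np.allclose(nu @ P_doob, nu)           # nu_x is stationary for the Doob transform
```

Stationarity follows from $\mu^{\star}_{x}[P]_{x}=\lambda_{x}\mu^{\star}_{x}$, and $\nu_{x}$ is a probability measure precisely because of the normalization $\langle\mu^{\star}_{x},\gamma_{x}\rangle=1$.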
## Acknowledgments
M.Q. was partially supported by the GNAMPA-INdAM Project 2020 “Random walks on
random games” and PRIN 2017 project ALGADIMAR.
## References
* [1] Mohammed Abdullah. The cover time of random walks on graphs. PhD thesis, arXiv:1202.5569, 2012.
* [2] Mohammed Abdullah, Colin Cooper, and Alan M. Frieze. Cover time of a random graph with given degree sequence. Discrete Mathematics, 312(21):3146–3163, 2012.
* [3] David Aldous and Persi Diaconis. Shuffling cards and stopping times. The American Mathematical Monthly, 93(5):333–348, 1986.
* [4] David Aldous and Persi Diaconis. Strong uniform times and finite random walks. Advances in Applied Mathematics, 8(1):69–97, 1987.
* [5] David Aldous and Persi Diaconis. Strong uniform times and finite random walks. Advances in Applied Mathematics, 8(1):69 – 97, 1987.
* [6] David Aldous and James Allen Fill. Reversible Markov chains and random walks on graphs. Unfinished monograph, recompiled 2014, available at http://www.stat.berkeley.edu/~aldous/RWG/book.html, 2002.
* [7] David J. Aldous. Markov chains with almost exponential hitting times. Stochastic Processes and their Applications, 13(3):305–310, 1982.
* [8] David J. Aldous and Mark Brown. Inequalities for rare events in time-reversible Markov chains I. Lecture Notes-Monograph Series, pages 1–16, 1992.
* [9] David J. Aldous and Mark Brown. Inequalities for rare events in time-reversible Markov chains II. Stochastic Processes and their Applications, 44(1):15–25, 1993.
* [10] A Bianchi, A Gaudillière, and P Milanesi. On soft capacities, quasi-stationary distributions and the pathwise approach to metastability. Journal of Statistical Physics, 181(3):1052–1086, 2020.
* [11] Alessandra Bianchi and Alexandre Gaudillière. Metastable states, quasi-stationary distributions and soft measures. Stochastic Processes and their Applications, 126(6):1622 – 1680, 2016.
* [12] Anton Bovier and Frank Den Hollander. Metastability: a potential-theoretic approach, volume 351. Springer, 2016.
* [13] Pietro Caputo and Matteo Quattropani. Stationary distribution and cover time of sparse directed configuration models. arXiv preprint arXiv:1909.05752, 2019.
* [14] Pierre Collet, Servet Martínez, and Jaime San Martín. Quasi-stationary distributions: Markov chains, diffusions and dynamical systems. Springer Science & Business Media, 2012.
* [15] Colin Cooper and Alan Frieze. The cover time of sparse random graphs. Random Structures & Algorithms, 30(1-2):1–16, 2007.
* [16] Colin Cooper and Alan Frieze. The cover time of the preferential attachment graph. Journal of Combinatorial Theory, Series B, 97(2):269–290, 2007.
* [17] Colin Cooper and Alan Frieze. The cover time of the giant component of a random graph. Random Structures & Algorithms, 32(4):401–439, 2008.
* [18] Colin Cooper, Alan Frieze, and Eyal Lubetzky. Cover time of a random graph with a degree sequence ii: Allowing vertices of degree two. Random Structures & Algorithms, 45(4):627–674, 2014.
* [19] Colin Cooper, Alan Frieze, and Tomasz Radzik. Multiple random walks in random regular graphs. SIAM Journal on Discrete Mathematics, 23(4):1738–1761, 2010.
* [20] Colin Cooper, Alan Frieze, and Tomasz Radzik. The cover times of random walks on random uniform hypergraphs. Theoretical Computer Science, 509:51–69, 2013.
* [21] Colin Cooper and Alan M. Frieze. The cover time of random regular graphs. SIAM J. Discrete Math., 18(4):728–740, 2005.
* [22] Colin Cooper and Alan M. Frieze. Stationary distribution and cover time of random walks on random digraphs. J. Comb. Theory, Ser. B, 102(2):329–362, 2012.
* [23] John N. Darroch and Eugene Seneta. On quasi-stationary distributions in absorbing discrete-time finite Markov chains. Journal of Applied Probability, 2(1):88–100, 1965.
* [24] Persi Diaconis and James Allen Fill. Strong stationary times via a new form of duality. Ann. Probab., 18(4):1483–1522, 10 1990.
* [25] Persi Diaconis and Laurent Miclo. On times to quasi-stationarity for birth and death processes. Journal of Theoretical Probability, 22(3):558–586, 2009.
* [26] Persi Diaconis and Laurent Miclo. On quantitative convergence to quasi-stationarity. In Annales de la Faculté des sciences de Toulouse: Mathématiques, volume 24, pages 973–1016, 2015.
* [27] Roberto Fernandez, Francesco Manzo, Francesca Nardi, and Elisabetta Scoppola. Asymptotically exponential hitting times and metastability: a pathwise approach without reversibility. Electronic Journal of Probability, 20, 2015.
* [28] Roberto Fernandez, Francesco Manzo, Francesca Romana Nardi, Elisabetta Scoppola, and Julien Sohier. Conditioned, quasi-stationary, restricted measures and escape from metastable states. The Annals of Applied Probability, 26(2):760–793, 2016.
* [29] Julian Keilson. Rarity and exponentiality. In Markov Chain Models: Rarity and Exponentiality, pages 130–163. Springer, 1979.
* [30] David A. Levin and Yuval Peres. Markov Chains and Mixing Times. American Mathematical Society, Providence, RI, 2017. Second edition. With contributions by Elizabeth L. Wilmer, With a chapter on “Coupling from the past” by James G. Propp and David B. Wilson.
* [31] Francesco Manzo and Elisabetta Scoppola. Exact results on the first hitting via conditional strong quasi-stationary times and applications to metastability. Journal of Statistical Physics, 174(6):1239–1262, 2019.
* [32] Laurent Miclo. On absorption times and Dirichlet eigenvalues. ESAIM: Probability and Statistics, 14:117–150, 2010.
* [33] Laurent Miclo. An absorbing eigentime identity. Markov Processes and Related Fields, 21, 2014.
* [34] Laurent Miclo. On metastability. 2020.
* [35] Enzo Olivieri and Maria Eulália Vares. Large deviations and metastability, volume 100. Cambridge University Press, 2005.
* [36] Jim Pitman and Wenpin Tang. Tree formulas, mean first passage times and Kemeny's constant of a Markov chain. Bernoulli, 24(3):1942–1972, 2018.
* [37] Phil K Pollett. Quasi-stationary distributions: a bibliography. http://www.maths.uq.edu.au/~pkp/papers/qsds/qsds.pdf, 2008.
# Weak Kaon Production off the nucleon and Watson’s theorem
E. Saúl-Sala Departamento de Física Teórica and IFIC, Centro Mixto
Universidad de Valencia-CSIC, Institutos de Investigación de Paterna, E-46071
Valencia, Spain J. E. Sobczyk Institut für Kernphysik and PRISMA+ Cluster of
Excellence,
Johannes Gutenberg-Universität, 55128 Mainz, Germany M. Rafi Alam Department
of Physics, Aligarh Muslim University, Aligarh-202 002, India L. Alvarez-Ruso
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-
CSIC, Institutos de Investigación de Paterna, E-46071 Valencia, Spain J.
Nieves Departamento de Física Teórica and IFIC, Centro Mixto Universidad de
Valencia-CSIC, Institutos de Investigación de Paterna, E-46071 Valencia, Spain
###### Abstract
We have improved the tree-level model of Ref. Rafi Alam _et al._ (2010) for
weak production of kaons off nucleons by partially restoring unitarity. This
is achieved by imposing Watson’s theorem to the dominant vector and axial-
vector contributions in appropriate angular momentum and isospin quantum
number sectors. The observable consequences of this procedure are
investigated.
## I Introduction
A good understanding and realistic modeling of neutrino cross sections is
important to reduce systematic uncertainties in oscillation experiments Mahn
_et al._ (2018); Alvarez-Ruso _et al._ (2018); Katori and Martini (2018);
Alvarez-Ruso _et al._ (2014); Formaggio and Zeller (2012). Much attention has
been paid to quasi-elastic scattering and weak pion production, which give a
large contribution in the few-GeV neutrino energy region probed in most
accelerator experiments. On the other hand, with better statistics and higher
precision goals, other, largely unexplored, processes with smaller cross
sections may play a significant role. Kaon, and strangeness production in
general, belongs to this category.
The charged-kaon production ($\nu_{\mu}\mathrm{CH}\rightarrow\mu^{-}K^{+}X$)
measurement at the MINERvA experiment Marshall _et al._ (2016) opens a new
window to study the weak strangeness production mechanisms in detail.
processes that could lead to kaons in the final state are either initiated by
strangeness conserving ($\Delta S=0$) or strangeness changing ($\Delta S=1$)
mechanisms. Although the $\Delta S=1$ reactions ($1K$) are Cabibbo suppressed
compared to $\Delta S=0$ ones ($YK$), the latter involve the production of
massive strange hyperons ($Y$), which pushes the reaction thresholds higher in
neutrino energies. Therefore, below 2 GeV of incoming neutrino energies, the
1$K$ reaction is favoured Marshall _et al._ (2016); Rafi Alam _et al._
(2010). In nuclei, final state interactions of the produced kaon are not very
strong because of the absence of baryon resonances. However, kaons can also be
produced in secondary collisions, rendering the extraction of information
about the elementary 1$K$-production amplitudes in experiments with nuclear
targets rather difficult Lalakulich _et al._ (2012). As for several other
processes, progress in our understanding of weak kaon production would greatly
benefit from modern cross section measurements on hydrogen and/or deuterium
Alvarez-Ruso _et al._ (2018).
Theoretical work on weak production of meson-baryon pairs with open and hidden
strangeness was performed in the early days of neutrino physics Shrock (1975);
Mecklenburg (1978); Amer (1978); Dewan (1981) and resumed only recently with
studies in the $\Delta S=0$ Adera _et al._ (2010); Nakamura _et al._ (2015),
$\Delta S=-1$ Alam _et al._ (2012); Ren _et al._ (2015) and $\Delta S=1$
Rafi Alam _et al._ (2010) sectors. The first calculation of the
$\nu_{l}N\rightarrow l^{-}KN^{\prime}$ amplitudes using leading-order SU(3)
chiral perturbation theory was performed by Alam et al. Rafi Alam _et al._
(2010). The threshold cross section was predicted in a model-independent way
in terms of only three precisely known quantities, $f_{\pi}$, $D$ and $F$,
where $F$ and $D$ are the couplings that appear in the SU(3) Wigner–Eckart
theorem of the axial-vector current. To extend the validity of the study to
higher energies, the hadronic currents were multiplied by a phenomenological
global dipole form factor. However, as it is based on tree-level diagrams,
this model neither respects the unitarity of the $S$ matrix nor satisfies the
related Watson's theorem Watson (1952) (a consequence of the unitarity of the
$S$-matrix and time-reversal symmetry), according to which the phase of the
amplitude is determined by the strong meson-baryon interaction ($KN$ in this
case).
In the present work, we address this issue and partially restore unitarity by
imposing Watson’s theorem. This is achieved by introducing relative phases in
the amplitudes derived in Ref. Rafi Alam _et al._ (2010), as suggested by
Olsson in Olsson (1974) for pion photoproduction. In Refs. Alvarez-Ruso _et
al._ (2016); Hernández and Nieves (2017), the same strategy has been
successfully applied to the weak pion production model of Ref. Hernandez _et
al._ (2007a). In the following we briefly present the model for $\Delta S=1$
$K$-production and the Watson’s prescription to approximately restore
unitarity, followed by a discussion on the impact of this improvement on
observable quantities.
## II Formalism
The allowed neutrino-induced $\Delta S=1$ single-kaon production reaction
channels on nucleons are
$\displaystyle\nu_{l}+p$ $\displaystyle\rightarrow l^{-}+p+K^{+}$
$\displaystyle\nu_{l}+n$ $\displaystyle\rightarrow l^{-}+p+K^{0}$ (1)
$\displaystyle\nu_{l}+n$ $\displaystyle\rightarrow l^{-}+n+K^{+}.$
The differential cross section for the processes of Eq. (1) is given by
$\frac{d^{4}\sigma}{dW\,dQ^{2}d\Omega_{K}^{*}}=\frac{|\vec{p}_{K}^{\,*}|}{64(2\pi)^{4}E_{\nu}^{2}M_{N}^{2}}|\overline{\mathcal{M}}|^{2}$
(2)
with
$|\overline{\mathcal{M}}|^{2}=\frac{1}{4}G_{F}^{2}|V_{us}|^{2}L^{\mu\nu}J_{\mu\nu}\,,$
(3)
where $L^{\mu\nu}$ ($J_{\mu\nu}$) is the leptonic (hadronic) tensor; $W$ is
the invariant mass of the outgoing kaon-nucleon pair while $Q^{2}=-q^{2}$
stands for minus the square of the four momentum transfer $q=k-k^{\prime}$,
with $k$ and $k^{\prime}$ the four momenta of the incoming neutrino and
outgoing lepton respectively. We fix the lepton kinematics and target nucleon
in the Laboratory frame, in which $E_{\nu}$ denotes the incoming neutrino
energy $(=k^{0})$. The outgoing $KN$ system is treated in the rest frame of
the pair, referred to as the hadronic center-of-mass (HCM) frame. We represent
HCM quantities with a ‘$\ast$’ superscript. In Eq. (2), the kaon momentum
$(\vec{p}_{K}^{\,*})$ and solid-angle ($\Omega_{K}^{\ast}$) are indeed in the
HCM frame. The Fermi coupling constant ($G_{F}$) and the Cabibbo-Kobayashi-
Maskawa (CKM) matrix element, $|V_{us}|$, have numerical values of
$1.166\times 10^{-5}$ GeV${}^{-2}$ and $0.2243$, respectively Tanabashi _et al._
(2018).
The leptonic tensor may be written as,
$\displaystyle L_{\mu\nu}$
$\displaystyle=8\left[k^{\prime}_{\mu}\,k_{\nu}+k^{\prime}_{\nu}\,k_{\mu}-g_{\mu\nu}(k^{\prime}\cdot
k)+i\epsilon^{\mu\nu\sigma\rho}k^{\prime}_{\sigma}k_{\rho}\right]\,,$ (4)
where we follow the convention $\epsilon^{0123}=+1$ for the 4-dimensional
Levi-Civita tensor. Finally, the tensor $J^{\mu\nu}$ can be expressed in terms
of the $W^{+}N\rightarrow KN^{\prime}$ hadronic current $j^{\mu}$ as
$J^{\mu\nu}=\sum_{\mathrm{spins}}j^{\mu}\left(j^{\nu}\right)^{\dagger}\,,$ (5)
where the sum is performed over the spin projections of the incoming and
outgoing nucleons; $W^{+}$ denotes the virtual weak gauge boson. This hadronic
current, obtained from the expansion of the SU(3) chiral Lagrangian at its
lowest order, plus next-to-leading contributions to weak magnetism, was
derived in Ref. Rafi Alam _et al._ (2010). The complete set of diagrams that
contribute to Eq. (1) is shown in Fig. 1. The corresponding expressions that
add to $j^{\mu}$ are given in Eq. (15) of Ref. Rafi Alam _et al._ (2010). The
parameters that enter the current are well known: the pion decay
constant ($f_{\pi}$), the couplings $D$ and $F$, fixed from nucleon and hyperon
semileptonic decays, and measured values of nucleon magnetic moments. We refer
the reader to Ref. Rafi Alam _et al._ (2010) for details. Finally, to extend
the kinematic range of the calculation, a global dipole form factor has been
introduced, with a dipole mass of $1\pm 0.1$ GeV, accounting for higher-order
hadronic structure and its uncertainty.
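As a numerical cross-check of Eq. (4), the leptonic tensor can be built explicitly; the sketch below (illustrative only, with arbitrary kinematics; no code accompanies the paper) verifies that $L_{\mu\nu}$ is Hermitian and that its contraction with the metric equals $-16\,k^{\prime}\cdot k$:

```python
import itertools
import numpy as np

# Illustrative construction of the leptonic tensor of Eq. (4),
# with metric g = diag(1,-1,-1,-1) and the convention eps^{0123} = +1.
g = np.diag([1.0, -1.0, -1.0, -1.0])

# 4-dimensional Levi-Civita tensor: sign = parity of the permutation
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    sign, p = 1, list(perm)
    for i in range(4):
        while p[i] != i:           # cycle-sort; each swap flips the parity
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    eps[perm] = sign

def leptonic_tensor(k, kp):
    """L_{mu nu} for contravariant four-momenta k (neutrino) and kp (lepton)."""
    k_lo, kp_lo = g @ k, g @ kp    # lower the indices
    sym = np.outer(kp_lo, k_lo) + np.outer(k_lo, kp_lo)
    kdot = kp_lo @ k               # k' . k
    asym = 1j * np.einsum('mnsr,s,r->mn', eps, kp_lo, k_lo)
    return 8.0 * (sym - g * kdot + asym)

# arbitrary example kinematics (massless neutrino, muon-like lepton, in GeV)
k = np.array([1.0, 0.0, 0.0, 1.0])
kp = np.array([0.8, 0.1, 0.0, 0.75])
L = leptonic_tensor(k, kp)
```

The symmetric part is real and the $\epsilon$-term is purely imaginary and antisymmetric, so $L_{\mu\nu}$ is Hermitian; contracting with $g^{\mu\nu}$ kills the $\epsilon$-term and gives $8(2k^{\prime}\cdot k-4k^{\prime}\cdot k)=-16\,k^{\prime}\cdot k$.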
Figure 1: Feynman diagrams for the hadronic current $W^{+}N\to KN^{\prime}$.
From the upper left corner in clockwise order: contact (CT), kaon pole (KP),
$\pi$ and $\eta$ in flight ($\pi$P, $\eta$P) and $u-$channel hyperon exchange
(Cr$\Sigma$, Cr$\Lambda$) terms.
### II.1 Watson’s theorem for weak $K$-production
Let us consider matrix elements of the transition ($T$) scattering operator
between two-body states with well defined total angular momentum $J$ and
particle helicities ($\lambda$) in the HCM frame. (We warn the reader that, although the HCM frame is used throughout Sec. II.1, we have dropped the ’*’ superscript to maintain the readability of the equations.) Following the derivation
of Sec. II.A of Ref. Alvarez-Ruso _et al._ (2016) for weak pion production,
the $S-$matrix unitarity and time reversal symmetry imply that
$\displaystyle\sum_{\lambda_{K^{\prime\prime}}\lambda_{N^{\prime\prime}}}\langle
J,M;\lambda_{K^{\prime\prime}},\lambda_{N^{\prime\prime}}\,|T(s)|J,M;\lambda_{K},\lambda_{N^{\prime}}\,\rangle^{*}$
$\displaystyle\times\langle
J,M;\lambda_{K^{\prime\prime}},\lambda_{N^{\prime\prime}}\,|T(s)|J,M;\lambda_{W},\lambda_{N}\,\rangle\in\mathbb{R}\,,$ (6)
for the $W^{+}N\rightarrow KN^{\prime}$ transition. In the present study, the
center-of-mass energy of the kaon-nucleon system, $\sqrt{s}=W$, is limited to
the range in which the only relevant intermediate states in Eq. (6) are
$K^{\prime\prime}N^{\prime\prime}$ pairs. Therefore, this equation, Watson’s
theorem, relates the phases of the strong
$K^{\prime\prime}N^{\prime\prime}\rightarrow KN^{\prime}$ amplitudes with the
electroweak $WN\rightarrow K^{\prime\prime}N^{\prime\prime}$ ones. The latter are given, up to a real normalization constant, by
$\langle K^{\prime\prime}N^{\prime\prime}|T|WN\rangle\propto-
ij_{\mu}\epsilon^{\mu}\,,$ (7)
in terms of the hadronic current $j^{\mu}$ introduced above and the
polarization vector of the $W$ boson. (Notice that the gauge coupling has been factored out and absorbed in the Fermi constant of Eq. (3).) The $W$-boson
offshellness does not affect the present argument Alvarez-Ruso _et al._
(2016). As stated above, we consider only $KN$ intermediate states in Eq. (6),
restricting the validity of the approach to invariant masses of the $KN$ pair
below the $KKY$ threshold. We further neglect the influence of $K\pi N$
intermediate states. This assumption relies on the observation that, in the $KN$ partial waves under consideration (details are given below), the inelasticities are either equal or very close to one for invariant masses below 2.1 GeV SAI .
To be more specific, in Eq. (6) after setting the kaon helicities to zero, we
denote as $r$ the helicity of the $W$ gauge boson, and as
$\lambda,\lambda^{\prime},\rho$ the corresponding ones of the initial, final
and intermediate nucleons. Furthermore, assigning the $z$ direction
($\theta=\varphi=0$) to the $WN$ incoming pair, one can write
$\ket{\theta=0,\varphi=0;r,\lambda}=\sum_{J}\sqrt{\frac{2J+1}{4\pi}}\ket{J,M=r-\lambda;r\,\lambda}$
(8)
which follows from Eq. (A1) of Appendix A. By taking into account that $T$ is
a scalar and therefore diagonal in $J$, Eq. (6) can be cast as
$\displaystyle\sum_{\rho}$ $\displaystyle\langle
J,M;\underbrace{0,\rho}_{KN}|T(s)|J,M;\underbrace{0,\lambda^{\prime}}_{KN}\rangle^{*}$
$\displaystyle\times\langle
J,M;\underbrace{0,\rho}_{KN}|T(s)|\theta,\varphi=0;\underbrace{r,\lambda}_{WN}\rangle\in\mathbb{R}\,,$ (9)
with $M=r-\lambda$. Introducing states with well-defined orbital angular
momentum $L$ and spin $S$, and using their transformation properties given in
Appendix A, one finds
$\displaystyle\sum_{L}\sum_{\rho}\frac{2L+1}{2J+1}(L,1/2,J|0,-\lambda^{\prime},-\lambda^{\prime})(L,1/2,J|0,-\rho,-\rho)$
$\displaystyle\times\underbrace{\langle
J,M;{L,1/2}|T(s)|J,M;{L,1/2}\rangle^{*}}_{KN\to KN}$
$\displaystyle\times\underbrace{\langle
J,M;{0,\rho}|T(s)|\theta,\varphi=0;{r,\lambda}\rangle}_{WN\to
KN}\in\mathbb{R}\,,$ (10)
given that parity is conserved by the strong $KN\rightarrow KN$ amplitudes.
Here $(L,S,J|M_{L},M_{S},M_{J})$ are Clebsch-Gordan coefficients.
Based on the behavior of weak kaon production amplitudes close to threshold,
it is reasonable to assume that the process under study is dominated by the
$s-$partial wave ($L=0$). This implies that $S=J=1/2$, the nucleon spin. Equation (10) then takes the form
$\chi_{r,\lambda}(s)\langle
1/2,r-\lambda;{0,1/2}|T(s)|1/2,r-\lambda;{0,1/2}\rangle^{*}\in\mathbb{R}$ (11)
where the shorthand notation
$\displaystyle\chi_{r,\lambda}(s)=\sum_{\rho}\,\langle
1/2,r-\lambda;{0,\rho}|T(s)|\theta,\varphi=0;{r,\lambda}\rangle$ (12)
has been introduced. Up to an irrelevant constant, these functions can be
written as
$\chi_{r,\lambda}(s)=\sum_{\rho}\int d\Omega\ {\cal D}^{(1/2)}_{M\
-\rho}(\varphi,\theta,-\varphi)\braket{\theta,\varphi;0,\rho}{T(s)}{\theta,\varphi=0;r,\lambda}$
(13)
where ${\cal D}^{(1/2)}_{M\ -\rho}$ are Wigner D-matrices [see Eq. (A1) in
Appendix A]. The integral is performed over the solid angle of the outgoing
kaon in the HCM frame.
Owing to the $V-A$ nature of the weak interaction, $T$ in Eq. (12) can be
expressed as $T_{V}-T_{A}$, $T_{V(A)}$ being even (odd) under parity
inversion. Therefore, it is convenient to write
$\chi_{r,\lambda}=\chi_{r,\lambda}^{V}-\chi_{r,\lambda}^{A}$. We then explore
the transformation properties of $\chi_{r,\lambda}(s)$ under parity from which
the following relations are deduced (see Appendix B):
$\displaystyle\chi_{r,\lambda}^{V}$
$\displaystyle=\frac{1}{2}\left(\chi_{r,\lambda}-\chi_{-r,-\lambda}\right)\,,$
(14) $\displaystyle\chi_{r,\lambda}^{A}$
$\displaystyle=-\frac{1}{2}\left(\chi_{r,\lambda}+\chi_{-r,-\lambda}\right)\,.$
They make it possible to reduce the number of independent vector (axial) functions from four to two Alvarez-Ruso _et al._ (2016) for each of the reaction channels listed in Eq. (II). (Combinations with $|r-\lambda|=3/2$ are excluded because $J=1/2$.)
Finally, we project onto states with well defined isospin ($I$), introducing
isospin amplitudes, and the corresponding $\chi^{(I=0,1)}$ functions
$\displaystyle\chi^{(1)}$ $\displaystyle=\chi(W^{+}\,p\rightarrow
K^{+}\,p)\,,$ (15) $\displaystyle\chi^{(0)}$
$\displaystyle=\chi(W^{+}\,n\rightarrow K^{+}\,n)-\chi(W^{+}\,n\rightarrow
K^{0}\,p)\,.$
Other indices have been dropped for simplicity. These identities allow us to
write the $\chi$ functions for all three processes in terms of only two with
$I=0,1$.
Figure 2: Absolute value squared of the CT contribution to
$\chi^{V,A}_{r,\lambda}$, defined using Eqs. (12), (14) and (15), as a
function of the $KN$ invariant mass ($W$) for a fixed $Q^{2}=0.1$ GeV2. Left
and right panels stand for isospin $I=0$ and $I=1$ channels, respectively.
Figure 3: Olsson’s phases $\Psi_{V,A}$ obtained by solving Eqs. (17) and (18)
as a function of $W$ for a fixed $Q^{2}=0.1$ GeV2.
Figure 4: Total cross section $\sigma(E_{\nu})$ as a function of the muon-
neutrino energy ($E_{\nu}$) for the processes of Eq. (II). Blue dashed lines
stand for the original results of Ref. Rafi Alam _et al._ (2010), while the
predictions obtained after implementing Watson’s corrections, for the chosen
solution 1, are shown by the solid black lines.
From the analysis of Ref. Rafi Alam _et al._ (2010) we know that the contact term (CT) is the largest one for all processes in Eq. (II). We therefore find it convenient to split the $T$ matrix as $T=T_{CT}+T_{B}$, where $T_{CT}$ denotes the CT term, while the rest of the diagrams of Fig. 1 are included in $T_{B}$. Next, we compute from the CT Feynman diagram all eight independent $\chi^{V,A\,(I=0,1)}_{r,1/2}$ with $r=0,1$. As illustrated in Fig. 2 for a fixed $Q^{2}$, we identify $\chi^{A(0)}_{0,1/2}$ and $\chi^{V(1)}_{0,1/2}$ as dominant among the CT contributions, and select them to determine the Olsson’s phases introduced next.
In order to implement Watson’s theorem to partially restore unitarity, we
follow the prescription given by Olsson Olsson (1974). Namely, we introduce
phases $\Psi_{V,A}$ in both vector and axial CT terms, such that the modified
amplitude reads as
$\displaystyle\braket{\theta,\varphi;0,\rho}{T(s)}{\theta,\varphi=0;r,\lambda}=\epsilon_{r\mu}T^{V\mu}_{\text{B}\lambda\rho}(\theta,\varphi)-\epsilon_{r\mu}T^{A\mu}_{\text{B}\lambda\rho}(\theta,\varphi)+\epsilon_{r\mu}T^{V\mu}_{\text{CT}\lambda\rho}(\theta,\varphi)\,e^{i\Psi_{V}}-\epsilon_{r\mu}T^{A\mu}_{\text{CT}\lambda\rho}(\theta,\varphi)\,e^{i\Psi_{A}}\,,$
(16)
where $\epsilon_{r\mu},\,r=0,\pm 1$, is the $W-$boson
polarization vector. Thanks to Watson’s theorem these unknown phases can be
determined using the available experimental information about $KN$ scattering
phase shifts. We impose that
$\displaystyle\text{Im}{\left\\{\chi^{V(1)}_{0,1/2}(s)\,e^{-i\delta_{S_{11}}}\right\\}}$
$\displaystyle=$ $\displaystyle 0\,,$ (17)
$\displaystyle\text{Im}{\left\\{\chi^{A(0)}_{0,1/2}(s)\,e^{-i\delta_{S_{01}}}\right\\}}$
$\displaystyle=$ $\displaystyle 0\,,$ (18)
where the $KN$ phase shifts $\delta_{L_{I,2J}}$ are taken from the SAID database (Scattering Analyses Interactive Dial-in) of the INS Data Analysis Center SAI . Equations (17) and (18) can be used to determine Olsson’s phases
$\Psi_{V,A}$, which are functions of $W$ and $Q^{2}$.
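To make the procedure concrete, suppose the relevant amplitude is split as $\chi=\chi_{B}+\chi_{CT}\,e^{i\Psi}$, with a real tree-level CT amplitude $\chi_{CT}$ (an assumption made here purely for illustration; the numerical inputs below are placeholders, not model values). The condition $\text{Im}\{\chi\,e^{-i\delta}\}=0$ then reduces to $\sin(\Psi-\delta)=-\text{Im}\{\chi_{B}e^{-i\delta}\}/\chi_{CT}$, which indeed admits two branches:

```python
import numpy as np

def olsson_phases(chi_B, chi_CT, delta):
    """Solve Im{(chi_B + chi_CT * e^{i Psi}) e^{-i delta}} = 0 for Psi.

    chi_B  : complex 'background' amplitude (non-CT diagrams)
    chi_CT : real tree-level contact-term amplitude
    delta  : KN phase shift delta_{L_{I,2J}} in radians
    Returns the two branches of the solution.
    """
    s = -np.imag(chi_B * np.exp(-1j * delta)) / chi_CT
    psi_1 = delta + np.arcsin(s)            # small-phase branch
    psi_2 = delta + np.pi - np.arcsin(s)    # second branch, shifted by ~pi
    return psi_1, psi_2

# Placeholder inputs, for illustration only
p1, p2 = olsson_phases(0.2 + 0.1j, 1.0, 0.3)
```

The two branches differ by roughly $\pi$, which anticipates the two solutions discussed in the next section.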
## III Results and discussion
The $\Psi_{V,A}(W,Q^{2})$ solutions of Eqs. (17), (18) plugged in Eq. (16)
correct the relative phase between the CT term and the rest of mechanisms. It
should be noted, however, that these equations generally have two solutions, denoted here as solutions 1 and 2. (As discussed in Ref. Alvarez-Ruso _et al._ (2016) for pion production, these two solutions lead to $\chi^{V(1)}_{0,1/2}$ ($\chi^{A(0)}_{0,1/2}$) with phases $\delta_{S_{11}(S_{01})}$ and $\delta_{S_{11}(S_{01})}+\pi$; $KN$ phase shifts are defined up to a summand of $\pi$.) The $W$ dependence of these phases is shown in Fig. 3 for the same fixed $Q^{2}$ used in Fig. 2. The plots show the general tendency of solution 1 (2) to yield small (large) phases in the range of $KN$ invariant masses under consideration. The four combinations of
Olsson’s phases $\Psi_{V,A}(W,Q^{2})$ that can be assembled with these two
solutions lead to different values for observable quantities. In Ref. Alvarez-
Ruso _et al._ (2016), where a similar approach was undertaken for weak pion
production, the preference for small Olsson’s phases was clearly validated by
pion photoproduction data (see Fig. 2 of that paper). In the present case,
there are no equivalent electromagnetic single kaon production data that could
serve for validation purposes. However, as illustrated in Fig. 3, at low $W$
and $Q^{2}$, i.e. close to threshold, $\Psi_{V,A}\sim\pi$ for solution 2. Such
a behavior implies a relative sign between $T_{CT}$ and $T_{B}$ which is
inconsistent with the predictions of chiral symmetry encoded in the leading-
order Lagrangian. We thus rely on this observation to discard solution 2 in
our predictions.
The integrated cross sections obtained with solution 1 are shown in Fig. 4,
together with the reference calculation of Ref. Rafi Alam _et al._ (2010),
which did not include the Olsson’s phases. One immediately notices that the
partial unitarization causes a small variation in the cross section. The
largest change, observed in $\nu_{\mu}n\rightarrow\mu^{-}pK^{0}$, amounts to
about an 18% increase with respect to the reference predictions of Ref. Rafi
Alam _et al._ (2010) at $E_{\nu}=2$ GeV. This small effect is plausibly a
consequence of the weakness (for strong forces) of the $KN$ interactions. One
can therefore expect that, in the energy region in which the present model is
applicable, the size of the unitarity corrections is within the model uncertainties (effectively accounted for by the 10% uncertainty assumed for the dipole mass), at least for the total cross section. Future high-statistics data on weak single kaon production at low energies, obtained for example with the Short-Baseline Near Detector (SBND) Antonello _et al._ (2015) at Fermilab, or in a future neutrino experiment on hydrogen and/or deuterium, could be compared to our predictions, shedding light on this interesting process.
In order to perform a more detailed analysis of the impact of unitarity
corrections we rely on the following representation of the differential cross
section, Eq. (2),
$\displaystyle\frac{d^{4}\sigma}{dW\,dQ^{2}d\Omega^{*}_{K}}=$
$\displaystyle\frac{G_{F}^{2}W}{4\pi
M_{N}|\vec{k}|^{2}}\left(A+B\cos\phi^{*}_{K}+C\cos 2\phi^{*}_{K}\right.$ (19)
$\displaystyle\left.+D\sin\phi^{*}_{K}+E\sin 2\phi^{*}_{K}\right)\,,$
where the dependence on the HCM kaon azimuthal angle has been singled out
Sobczyk _et al._ (2018); Hernandez _et al._ (2007a, b). The incoming
neutrino momentum $\vec{k}$ is in the Laboratory frame while kaon angles
(carrying the ‘*’ superscript) are in the HCM frame. The structure functions
$A-E$ are real and depend on the scalars $Q^{2}$, $p\cdot q$, $p_{K}\cdot q$
and $p_{K}\cdot p$. We have obtained these structure functions for weak kaon
production for the first time. They are displayed in Fig. 5 as a function of
$\cos{\theta^{*}_{K}}$ for fixed $E_{\nu}$, $W$ and $Q^{2}$. Results obtained
with solution 1 are close to the uncorrected ones as expected. Remarkably, the
$D$ and $E$ structure functions, responsible for parity violation in kaon
production (and weak meson production in general Hernandez _et al._ (2007b)),
which are zero in the tree-level model with real amplitudes, acquire nonzero
although small values due to unitarization.
Figure 5: $A,B,C,D,E$ structure functions for
$\nu_{\mu}+N\rightarrow\mu^{-}+N^{\prime}+K$ as a function of the cosine of
the polar kaon angle in the HCM frame ($\theta^{*}_{K}$) for fixed $E_{\nu}=2$
GeV, $W=1.5$ GeV and $Q^{2}=0.2$ GeV2.
## IV Conclusion
We have improved the theoretical description of single kaon production in
neutrino-nucleon collisions below the $KKY$ threshold by partially accounting
for unitarity. For this purpose we have introduced Olsson’s phases for the
contact term of the amplitude in its largest vector and axial multipoles.
These phases take the values required to fulfill Watson’s theorem. In the
absence of experimental data, we have relied on chiral symmetry to discard
some of the found mathematical solutions. The remaining solution leads to
small corrections in the cross section, as expected because of the absence of
baryon resonances. These corrections are actually within the uncertainties of
the model. This would validate the reference tree-level model, built upon the
leading-order chiral Lagrangian, in the kinematic region under consideration.
Finally, we have investigated the behavior of the structure functions that
characterize the cross-section dependence on the kaon azimuthal angle. The
impact of unitarization is visible in the fact that the parity-violating
structure functions depart from zero.
## Acknowledgements
We thank E. Hernández for useful feedback. MRA is thankful to IFIC, Valencia
for the hospitality during his stay. This research has been partially
supported by Spanish Ministerio de Ciencia e Innovación and the European
Regional Development Fund (ERDF) under contract FIS2017-84038-C2-1-P, the EU
STRONG-2020 project under the program H2020-INFRAIA-2018-1, grant agreement
no. 824093, by Generalitat Valenciana under contract PROMETEO/2020/023, and by
the Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research
Center [The Low-Energy Frontier of the Standard Model (SFB 1044)] and through
the Cluster of Excellence “Precision Physics, Fundamental Interactions, and
Structure of Matter” (PRISMA+ EXC 2118/1) funded by the DFG within the German
Excellence Strategy (Project ID 39083149).
## Appendices
### A Basis transformations
The states with well defined total angular momentum and the two-particle
helicity states are related by the transformation relation:
$\displaystyle\ket{J,M_{J};\lambda_{1},\lambda_{2}}$ (A1)
$\displaystyle=\sqrt{\frac{2J+1}{4\pi}}\int
d\Omega\,\mathcal{D}_{M_{J}\lambda}^{(J)*}\left(\phi_{K},\theta_{K},-\phi_{K}\right)\ket{\theta_{K},\phi_{K};\lambda_{1},\lambda_{2}}$
with $\lambda=\lambda_{1}-\lambda_{2}$.
$\mathcal{D}^{(J)}_{M_{J}\lambda}\left(\alpha,\beta,\gamma\right)$ is the
Wigner rotation matrix.
In the $L$-$S$ scheme, where we use the basis $\ket{J,M_{J};L,S}$ with $L$ the
orbital angular momentum and $S$ the total spin of the two particles, the
following relations hold
$\displaystyle\ket{J,M_{J};\lambda_{1},\lambda_{2}}=\sum_{L,S}\sqrt{\frac{2L+1}{2J+1}}\left(L,S,J|0,\lambda,\lambda\right)$
(A2)
$\displaystyle\times\left(j_{1},j_{2},S|\lambda_{1},-\lambda_{2},\lambda\right)\ket{J,M_{J};L,S}\,,$
$\displaystyle\ket{J,M_{J};L,S}=\sum_{\lambda_{1},\lambda_{2}}\sqrt{\frac{2L+1}{2J+1}}\left(L,S,J|0,\lambda,\lambda\right)$
$\displaystyle\times\left(j_{1},j_{2},S|\lambda_{1},-\lambda_{2},\lambda\right)\ket{J,M_{J};\lambda_{1},\lambda_{2}}\,,$
where $j_{i}$ is the total angular momentum of each particle and
$\left(j_{1},j_{2},J|m_{1},m_{2},M\right)$ are Clebsch-Gordan coefficients.
### B Properties of $\chi^{V,A}_{r,\lambda}$ functions under helicity
inversion
In terms of two-particle helicity states with well defined angular momentum
$J$ ($=1/2$ in our case)
$\chi^{V,A}_{r,\lambda}=\sum_{\rho}\bra{1/2,M;0,\rho}T^{V,A}\ket{1/2,M;r,\lambda}\,.$
(A3)
Under parity inversion, these states are transformed as (Eq. (5.28) of Ref.
Martin and Spearman (1970))
$P\ket{J,M;\mu_{1},\mu_{2}}=\eta_{1}\eta_{2}(-1)^{J-s_{1}-s_{2}}\ket{J,M;-\mu_{1},-\mu_{2}}$
in terms of the two particles’ intrinsic parities $\eta_{1,2}$ and spins
$s_{1,2}$. Therefore
$\displaystyle P\ket{1/2,M;r,\lambda}$ $\displaystyle=$
$\displaystyle\eta_{N}\eta_{W}(-1)^{1/2-1/2-1}\ket{1/2,M;-r,-\lambda}\,,$
$\displaystyle P\ket{1/2,M;0,\rho}$ $\displaystyle=$
$\displaystyle\eta_{N}\eta_{K}(-1)^{1/2-1/2-0}\ket{1/2,M;-r,-\lambda}\,.$
Consequently
$\chi^{V,A}_{-r,-\lambda}=-\sum_{\rho}\bra{1/2,M;0,\rho}P^{-1}T^{V,A}P\ket{1/2,M;r,\lambda}\,,$
where we have taken into account that these matrix elements do not depend on
$M$ because $T$ is a scalar under rotations. Once $P^{-1}T^{V,A}P=\pm T^{V,A}$
$\chi^{V,A}_{-r,-\lambda}=\mp\chi^{V,A}_{r,\lambda}\,,$ (A4)
from where Eq. (14) immediately follows.
## References
* Rafi Alam _et al._ (2010) M. Rafi Alam, I. Ruiz Simo, M. Sajjad Athar, and M. J. Vicente Vacas, Phys. Rev. D82, 033001 (2010), arXiv:1004.5484 [hep-ph] .
* Mahn _et al._ (2018) K. Mahn, C. Marshall, and C. Wilkinson, Ann. Rev. Nucl. Part. Sci. 68, 105 (2018), arXiv:1803.08848 [hep-ex] .
* Alvarez-Ruso _et al._ (2018) L. Alvarez-Ruso _et al._ , Prog. Part. Nucl. Phys. 100, 1 (2018), arXiv:1706.03621 [hep-ph] .
* Katori and Martini (2018) T. Katori and M. Martini, J. Phys. G45, 013001 (2018), arXiv:1611.07770 [hep-ph] .
* Alvarez-Ruso _et al._ (2014) L. Alvarez-Ruso, Y. Hayato, and J. Nieves, New J. Phys. 16, 075015 (2014), arXiv:1403.2673 [hep-ph] .
* Formaggio and Zeller (2012) J. A. Formaggio and G. P. Zeller, Rev. Mod. Phys. 84, 1307 (2012), arXiv:1305.7513 [hep-ex] .
* Marshall _et al._ (2016) C. M. Marshall _et al._ (MINERvA), Phys. Rev. D94, 012002 (2016), arXiv:1604.03920 [hep-ex] .
* Lalakulich _et al._ (2012) O. Lalakulich, K. Gallmeister, and U. Mosel, Phys. Rev. C86, 014607 (2012), arXiv:1205.1061 [nucl-th] .
* Shrock (1975) R. E. Shrock, Phys. Rev. D12, 2049 (1975).
* Mecklenburg (1978) W. Mecklenburg, Acta Phys. Austriaca 48, 293 (1978).
* Amer (1978) A. A. Amer, Phys. Rev. D18, 2290 (1978).
* Dewan (1981) H. K. Dewan, Phys. Rev. D24, 2369 (1981).
* Adera _et al._ (2010) G. B. Adera, B. I. S. Van Der Ventel, D. D. van Niekerk, and T. Mart, Phys. Rev. C82, 025501 (2010), arXiv:1112.5748 [nucl-th] .
* Nakamura _et al._ (2015) S. X. Nakamura, H. Kamano, and T. Sato, Phys. Rev. D92, 074024 (2015), arXiv:1506.03403 [hep-ph] .
* Alam _et al._ (2012) M. R. Alam, I. R. Simo, M. S. Athar, and M. J. Vicente Vacas, Phys. Rev. D85, 013014 (2012), arXiv:1111.0863 [hep-ph] .
* Ren _et al._ (2015) X.-L. Ren, E. Oset, L. Alvarez-Ruso, and M. J. Vicente Vacas, Phys. Rev. C91, 045201 (2015), arXiv:1501.04073 [hep-ph] .
* Watson (1952) K. M. Watson, Phys. Rev. 88, 1163 (1952), [Riv. Nuovo Cim.31,1(2008)].
* Olsson (1974) M. G. Olsson, Nucl. Phys. B78, 55 (1974).
* Alvarez-Ruso _et al._ (2016) L. Alvarez-Ruso, E. Hernández, J. Nieves, and M. J. Vicente Vacas, Phys. Rev. D93, 014016 (2016), arXiv:1510.06266 [hep-ph] .
* Hernández and Nieves (2017) E. Hernández and J. Nieves, Phys. Rev. D 95, 053007 (2017), arXiv:1612.02343 [hep-ph] .
* Hernandez _et al._ (2007a) E. Hernandez, J. Nieves, and M. Valverde, Phys. Rev. D76, 033005 (2007a), arXiv:hep-ph/0701149 [hep-ph] .
* Tanabashi _et al._ (2018) M. Tanabashi _et al._ (Particle Data Group), Phys. Rev. D98, 030001 (2018).
* (23) “Institute for Nuclear Studies. The George Washington University Virginia Science and Technology Campus,” http://gwdac.phys.gwu.edu/, [Online; accessed 22-February-2019].
* Antonello _et al._ (2015) M. Antonello _et al._ (MicroBooNE, LAr1-ND, ICARUS-WA104), (2015), arXiv:1503.01520 [physics.ins-det] .
* Sobczyk _et al._ (2018) J. Sobczyk, E. Hernández, S. Nakamura, J. Nieves, and T. Sato, Phys. Rev. D98, 073001 (2018), arXiv:1807.11281 [hep-ph] .
* Hernandez _et al._ (2007b) E. Hernandez, J. Nieves, and M. Valverde, Phys. Lett. B647, 452 (2007b), arXiv:hep-ph/0608119 [hep-ph] .
* Martin and Spearman (1970) A. D. Martin and T. D. Spearman, _Elementary-particle theory_ (North-Holland, Amsterdam, 1970).
|
††thanks: Present address: CAS Key Laboratory of Nanosystem and Hierarchical
Fabrication, CAS Center for Excellence in Nanoscience, National Center for
Nanoscience and Technology, Beijing 100190, China††thanks: Present address:
Physics Department, University of Bath, North Rd, Claverton Down, Bath BA2
7AY, UK
# Positive Seebeck Coefficient in Highly doped La2-xSrxCuO4 ($x$=0.33); Its
Origin and Implication
Hao Jin1 Alessandro Narduzzo2 Minoru Nohara3 Hidenori Takagi4,5 N. E.
Hussey2,6 Kamran Behnia1<EMAIL_ADDRESS>(1) LPEM (ESPCI - CNRS -
Sorbonne University), PSL University, 75005 Paris, France
(2) H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue,
Bristol, BS8 1TL, UK
(3) Research Institute for Interdisciplinary Science, Okayama University,
Okayama 700-8530, Japan
(4) Max Planck Institute for Solid State Research, 70569 Stuttgart, Germany
(5)Department of Physics, University of Tokyo
Tokyo 113-0033, Japan
(6) High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and
Materials, Radboud University, 6525 ED Nijmegen, The Netherlands
###### Abstract
We present a study of the thermoelectric (Seebeck and Nernst) response in
heavily overdoped, non-superconducting La1.67Sr0.33CuO4. In spite of the
electron-like curvature of the Fermi surface, the Seebeck coefficient is
positive at low temperatures. Such a feature, previously observed in copper,
silver, gold and lithium, is caused by a non-trivial energy dependence of the
scattering time. We argue that this feature implies a strong asymmetry between
the lifetime of occupied and unoccupied states along the zone diagonals and
such an electron-hole asymmetry impedes formation of Cooper pairs along the
nodal direction in the superconducting ground state emerging at lower doping
levels.
Cuprates are a family of layered materials each with a Mott insulating parent
which turns superconducting with a sizeable critical temperature upon doping
Lee et al. (2006). Their normal state transport properties exhibit highly
anomalous behaviour and a remarkable evolution with doping, temperature and
magnetic field (for recent reviews, see Hussey et al. (2018); Proust and
Taillefer (2019)). In hole-doped cuprates, superconductivity fades gradually
away beyond optimal doping $p\sim$ 0.16 before vanishing above a critical
threshold $p_{SC}\sim 0.30$, though only in La2-xSrxCuO4 (LSCO) has this
threshold ever been exceeded Nakamae et al. (2003); Cooper et al. (2009).
Figure 1: (Color online) Seebeck and Nernst coefficients: a) Temperature
dependence of the in-plane Seebeck coefficient $S$, in La1.67Sr0.33CuO4. Inset
shows the low temperature $S/T$ showing the uncertainty in the zero-
temperature slope. b) a) Temperature dependence of the in-plane Nernst
coefficient $\nu$ measured at $\mu_{0}H$ = 1 T.
In heavily overdoped La1.67Sr0.33CuO4 (LSCO33), the in-plane resistivity
$\rho_{ab}(T)$ follows the expected $T^{2}$ dependence of a correlated Fermi
liquid below 50 K Nakamae et al. (2003). Removing carriers from this metal
leads to the emergence of superconductivity as well as a ’strange’ metal
regime with a robust $T$-linear component in $\rho_{ab}(T)$ at low temperature
Cooper et al. (2009). Scrutinizing the non-superconducting metal above
$p_{SC}$ may provide clues for the origin of both.
In this paper, we present data on the thermoelectric response of LSCO33. In
the zero temperature limit, both the Seebeck and Nernst coefficients are
$T$-linear, as expected for a Fermi liquid. The sign and the slope of the
Seebeck coefficient, however, point to a remarkably large lifetime asymmetry
between occupied and unoccupied states. We argue that this impedes the
formation of Cooper pairs along the zone diagonal ($\pi$, $\pi$) and thus
provides the starting point for the emergence of a superconducting state with
a $d_{x^{2}-y^{2}}$ symmetry.
Figure 2: (Color online) The Fermi surface: a) Tight binding Fermi surface of
La2-xSrxCuO4 for $x$ = 0.22 and $x$ = 0.32. The Fermi surface is electron-
like. Note the large contrast between the doping evolution along nodal and
anti-nodal directions. b) The relative change in Fermi velocity $v_{F}$ at
azimuthal angle $\phi$ as one passes from $x$ = 0.30 to $x$ = 0.32. The change
is arrested along the nodal direction and is maximal around the anti-nodes.
The resistivity of the single crystal measured in the present work showed no
trace of superconductivity down to 95 mK (see Ref. Nakamae et al. (2003, 2009)
for more details about the crystal growth, annealing conditions and
characterization). The Seebeck and Nernst coefficients were subsequently
measured (in 2004) using a standard one-heater-two-thermometers technique.
Fig. 1 shows their $T$-dependence between 0.4 K and 45 K. Both coefficients
show a quasi $T$-linear dependence at low $T$. The asymptotic zero-temperature
slope of the Seebeck coefficient is $S/T$ = +0.21 $\mu$V.K-2, while for the
Nernst coefficient, the slope is $\nu/T$ = -1.5 nV.T-1.K-2.
The slope of the Nernst coefficient obtained in this measurement was first
discussed in Ref. Behnia (2009). There, it was argued that in the semi-
classical picture, the amplitude of the slope is given by a set of fundamental
constants ($\pi^{2}k_{B}/3e$) multiplied by the ratio of the mobility
$\mu_{\rm H}$ to the Fermi energy $E_{F}$. This picture is backed by available
experimental data on numerous metals Behnia (2009); Behnia and Aubin (2016).
In the specific case of LSCO33, the low-$T$ $\nu/T$ is in fair agreement with
estimates of $\mu_{\rm H}\approx 100$ (from magnetoresistance Nakamae et al.
(2003)) and $E_{F}\approx$ 5900 K (from specific heat Nakamae et al. (2003)).
In this report, we will focus on the Seebeck response and discuss the
significance and implications of both its sign and amplitude. The Seebeck
coefficient in cuprates has been the subject of numerous studies (See chapter
8 in Ref. Behnia (2015) for a review). In the case of LSCO, it was previously
studied in single crystals (up to $x$ = 0.3 by Nakamura and Uchida Nakamura
and Uchida (1993)) and in polycrystalline powders (up to $x$ = 0.45 by Cooper
and Loram Cooper and Loram (1996) and up to $x$ = 0.35 by Elizarova and
Gasumyants Elizarova and Gasumyants (2000)). Our LSCO33 data, which extends
down to sub-kelvin temperature, agrees well with these earlier studies in the
overlapping temperature ranges. Our data also smoothly connects to what was
recently reported by Collignon et al. in Eu-substituted LSCO for $x<0.26$
Collignon et al. (2020).
The first remarkable fact about the Seebeck coefficient of LSCO33 is its
positive sign. Angle resolved photoemission spectroscopy (ARPES) Yoshida et
al. (2007, 2006); Horio et al. (2018) studies have extensively documented the
emergence of an electron-like Fermi surface in LSCO for $x>0.2$, i.e. closed
around the $\Gamma$ point in the Brillouin zone. Fig. 2 shows the Fermi
surface derived from a tight binding model with nearest-neighbor hopping
parameters chosen to fit the ARPES-resolved Fermi surface Yoshida et al.
(2006); Horio et al. (2018). Thus, given the electron-like character of the
Fermi surface, one would naively expect the thermopower of LSCO33 to be
negative.
Interestingly, the Hall coefficient of LSCO33 is also positive Narduzzo et al.
(2008). This observation was explained by taking into account both the
curvature of the Fermi surface and the strong angle dependence of the
scattering time $\tau$ and mean-free-path of the mobile carriers at the Fermi
level Narduzzo et al. (2008). For the Seebeck coefficient, as we will see
below, it is the energy dependence of $\tau$ which matters.
In numerous Fermi liquids, there is an empirical correlation between the slope
of the diffusive Seebeck coefficient $S/T$ and the magnitude of the electronic
specific heat $\gamma$ Behnia et al. (2004). A dimensionless ratio of these
two quantities can be defined using Avogadro’s number $N_{Av}$ and the charge
of an electron, $e$:
$q=\frac{SN_{Av}e}{T\gamma}$ (1)
In dense Fermi liquids (i.e. those with roughly one mobile electron per
formula unit), $q$ is of order unity. This observation, first reported in 2004
Behnia et al. (2004) has been confirmed in numerous cases. The strength of
correlation among the conduction electrons tunes $\gamma$ over several orders
of magnitude ($\approx$ 1-1000 mJ.K-2.mol-1). Concomitantly, it modifies the
absolute value of $S/T$. Since both these quantities track the entropy
accumulated by electrons, such a correlation may not be surprising, though its
persistence in many multi-band metals with a Fermi surface consisting of
pockets of both signs remains a puzzle.
In LSCO33, where $S/T$ = +0.21 $\mu$V.K-2 and $\gamma$ = 6.9 mJ.K-2.mol-1
Nakamae et al. (2003), one finds $q=+2.8$. In dilute Fermi liquids, $q$ can be
significantly larger than unity, because the entropy per volume lags behind
entropy per carrier. In URu2Si2, for example, $|q|\simeq 11$ Zhu et al.
(2009). LSCO33, on the other hand, is a dense Fermi-liquid with 1.3 carriers
per formula unit. Hence, not only the sign, but also the enhanced value of $q$
demand an explanation.
As it turns out, this is not the first example of a system with a large simple
Fermi surface – occupying more than half the Brillouin zone – exhibiting an
anomalous Seebeck coefficient. In noble metals (Cu, Ag, Au), the Seebeck
coefficient is positive and $T$-linear well above the phonon drag peak
MacDonald (2006), despite their Fermi surfaces being electron-like Shoenberg
(2009). In 1967, Robinson called this puzzle ‘a nagging embarrassment to the
theory of the ordinary electronic transport properties of solids’ Robinson
(1967). Interestingly, the $q$ ratio is +0.75 in Cu, +0.81 in Ag and +0.86 in
Au Behnia (2015), i.e. $S/T$ has the right magnitude, just the wrong sign.
This ‘reversed sign thermopower’ puzzle in Cu, Ag, Au and Li (the alkali metal
also showing an unexpected positive Seebeck response) has been addressed by
Robinson Robinson (1967) and more recently by Xu, Di Gennaro and Verstraete Xu
and Verstraete (2014); Xu et al. (2020). Robinson argued that a mean-free-path
rapidly decreasing with increasing energy would provide a solution to the
puzzle and this can arise due to the structure of the electron-ion
pseudopotential. Xu et al. carried out first-principle calculations and found
that in Li, the sign reversal is driven by density and lifetime asymmetries
between states above and below the Fermi level, $E_{F}$ Xu and Verstraete
(2014). Similar calculations for noble metals also produced a positive sign
due to a non-trivial asymmetry in electron-phonon coupling for electronic
states at the two sides of the chemical potential Xu et al. (2020). These
results motivate us to search for a similar solution for the puzzle of the
‘wrong’ sign in LSCO33.
Figure 3: (Color online) Inverting the sign of the Seebeck response: The
Seebeck coefficient is the result of the integration of three components over
the Fermi surface. These three ingredients, sketched as a function of energy
normalized to the chemical potential $\mu$, are: a) the pondering factor; b)
the product of the density of states and the square of the velocity of a gas of
free electrons (blue) and free holes (red); c) the scattering time
$\tau(\epsilon)$ in three distinct scenarios. When $\tau(\epsilon)$ is constant
(black) or e-h symmetric (red), the sign of the Seebeck coefficient remains
positive for holes and negative for electrons. When the energy dependence is
such that the unoccupied states are significantly more scattered than the
occupied states (blue), the sign is inverted.
The Seebeck coefficient is defined as the ratio of thermoelectric conductivity
$\alpha$ to the electric conductivity $\sigma$. The Boltzmann equation links
both coefficients to $\tau$, the density of states $N(\epsilon)$ and the
velocity $v$ Ziman (1972); Behnia (2015):
$\alpha=-e\int\tau(\epsilon_{k})\,v(\epsilon_{k})\cdot v(\epsilon_{k})\,N(\epsilon_{k})\,\frac{(\epsilon_{k}-\mu)}{T}\,\frac{\partial f}{\partial\epsilon_{k}}\,d\epsilon_{k}$ (2)
$\sigma=-e^{2}\int\tau(\epsilon_{k})\,v(\epsilon_{k})\cdot v(\epsilon_{k})\,N(\epsilon_{k})\,\frac{\partial f}{\partial\epsilon_{k}}\,d\epsilon_{k}$ (3)
Here, the integrals are over the whole Fermi surface, $f$ is the Fermi-Dirac
distribution and $\mu$ is the chemical potential. The expression for $\alpha$
contains a material-independent pondering factor together with material-dependent
parameters. As seen in Fig. 3a, the pondering factor is antisymmetric
about the chemical potential. In the absence of electron-hole
asymmetry near the chemical potential, $\alpha$ would be zero. However, even
in a free electron gas, such asymmetry exists; both the velocity
($v(\epsilon_{k})\propto\sqrt{\epsilon_{k}}$) and the density of states
($N(\epsilon_{k})\propto\sqrt{\epsilon_{k}}$) grow with energy (See Fig. 3b).
As a result, $\alpha$ of a free electron (hole) gas is negative (positive).
Such a correspondence between the sign of the Seebeck coefficient and the sign
of carriers survives even in more complex Fermi surface geometries provided
that the energy dependence of the scattering time (or the mean-free-path) does
not alter the result. Note that Fermi’s golden rule, by linking the scattering
rate and the density of states, implies that features in $N(\epsilon_{k})$
will have counterparts in $\tau(\epsilon_{k})$. Note that $v$, $\tau$ and
$N(\epsilon_{k})$ can all have significant momentum dependence too. As we
shall see below, this $k$-space anisotropy also plays a prominent role here.
Fig. 3c shows three possible scenarios for the energy dependence of the
scattering time. In the first two cases, $\tau$ is constant or its energy
dependence is symmetric as one moves off the chemical potential and there is
no effect on the sign of $S$. In the third case, however, $\tau$ decreases
sufficiently rapidly with increasing energy that it inverts the overall
balance of the responses of the occupied and unoccupied states, as originally
shown by Robinson Robinson (1967).
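The sign inversion described above can be checked with a minimal numerical sketch. Assuming an illustrative free-electron-like band ($N\propto\sqrt{\epsilon}$, $v^{2}\propto\epsilon$) and a hypothetical power-law scattering time $\tau\propto\epsilon^{-\delta}$ (function name, energy window, and parameter values below are assumptions, not fitted to LSCO), the electron-hole-asymmetric weighted integral $\int\tau v^{2}N\,(\epsilon-\mu)(-\partial f/\partial\epsilon)\,d\epsilon$ is positive when the transport kernel grows with energy and turns negative once $\tau$ falls steeply enough, mirroring the Robinson mechanism:

```python
import numpy as np

def transport_integral(delta, mu=1.0, kT=0.05):
    """Energy-asymmetric transport integral
        I = integral of tau*v^2*N * (eps - mu) * (-df/deps)
    for an illustrative free-electron-like band: N ~ sqrt(eps),
    v^2 ~ eps, tau ~ eps^(-delta), so the kernel is eps^(1.5 - delta).
    All parameter values are arbitrary illustrative choices.
    """
    eps = np.linspace(mu - 0.5, mu + 0.5, 100_000)  # window around mu
    x = (eps - mu) / kT
    minus_df = np.exp(x) / (kT * (1.0 + np.exp(x)) ** 2)  # -df/deps
    kernel = eps ** (1.5 - delta)
    integrand = kernel * (eps - mu) * minus_df
    return float(np.sum(integrand) * (eps[1] - eps[0]))
```

Near $\delta=1.5$ the kernel $\epsilon^{1.5-\delta}$ is flat and the integral nearly vanishes, marking the crossover between the two signs.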
Such an energy dependence can arise for different reasons. According to first
principle calculations on Li Xu and Verstraete (2014); Xu et al. (2020), a
feature in density of states just below the chemical potential skews the
available phase space for scattering Xu and Verstraete (2014). In copper, on
the other hand, the density of states is flat near the chemical potential Xu
et al. (2020), and it is the electron-phonon coupling that is energy
dependent. Robinson’s phenomenological model – invoking a screened electron-
ion pseudopotential – leads to a similar conclusion Robinson (1967).
Coming back to LSCO33, an energy-dependent scattering time would provide a
natural explanation for the positive sign of the Seebeck coefficient, though
the large magnitude of $q$ implies that the hole-particle asymmetry may be
even more pronounced than in noble metals. If one assumes a conventional
energy dispersion for a free electron gas but with an energy dependence of the
mean-free-path that follows a power law: $\ell\propto\epsilon^{-\delta}$, then
$\delta$ would be set by $3q/2+1$. In copper, for example, an inverted
$q=+0.75$ would then require $\delta\approx 2.1$ Behnia (2015), while an
inverted $q=+2.8$ would require $\delta$ exceeding 5. This may suggest that in
our present case, the energy dependence of the mean-free-path is stronger than
a simple power law.
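The relation $\delta=3q/2+1$ quoted above can be checked directly; `delta_required` is a hypothetical helper name and the inputs are the $q$ values quoted in the text:

```python
def delta_required(q):
    """Hypothetical helper: exponent delta in the power law
    l ~ eps^(-delta) needed for a given q, via delta = 3q/2 + 1."""
    return 1.5 * q + 1.0

delta_cu = delta_required(0.75)     # copper
delta_lsco33 = delta_required(2.8)  # LSCO33
```

For $q=+0.75$ this gives $\delta=2.125\approx 2.1$, and for $q=+2.8$ it gives $\delta=5.2$, exceeding 5 as stated.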
The evolution of the Fermi surface with doping shown in Fig. 2 indicates a
very plausible nexus for a strong particle-hole asymmetry. The introduction of
additional holes between $x$ = 0.22 and $x$ =0.32, shifts the Fermi surface
almost exclusively along the anti-nodal direction and not at all along the
zone diagonal. This implies that along the nodal orientation the density of
states does not smoothly grow as a function of the chemical potential and
therefore, the phase space for scattering to unoccupied states above the
chemical potential is extremely reduced. As seen in Fig. 2b, the doping-
induced change of the Fermi velocity along the zone diagonal is negligible in
comparison with the anti-nodal direction. This indicates that the Seebeck
coefficient is dominated by the contribution of nodal quasi-particles with
strong asymmetry in the lifetime between occupied and unoccupied states.
The dichotomy between the nodal and anti-nodal contributions to transport has
been demonstrated by angle-dependent magnetoresistance (ADMR) studies, first
in Tl-2201 Hussey et al. (2003); Abdel-Jawad et al. (2006) and more recently
in Eu-LSCO Grissonnanche et al. (2020). However, the focus of attention of
both studies was the $T$-dependence of the anisotropic $\tau$ and not the
energy dependence of $\tau$ and its anisotropy. What causes the scattering
time to be strongly energy dependent along the nodes remains a question to be
addressed by microscopic theory. At this stage, let us point out that this
identification has a possible link with the superconducting gap structure and
pairing symmetry.
LSCO33 becomes a superconductor upon removal of mobile carriers. At the same
time, a $T$-linear scattering rate emerges below $p_{SC}=0.3$. The finite
positive $S/T$, on the other hand, does not appear to be affected by the
emergence of superconductivity. Indeed, the magnitude of $S/T$ in non-
superconducting LSCO33 is almost identical with what has been recently found
by Collignon et al. in superconducting Eu-LSCO for $0.23<x<0.26$
($\approx +0.2\ \mu$V K$^{-2}$) Collignon et al. (2020).
In a BCS superconductor, Cooper pairs are formed by the superposition of
states above and below the Fermi level within an energy window of the size of
the gap Tinkham (1996). Strong asymmetry between quasiparticle lifetimes
across the occupation boundary would impede the formation of Cooper pairs. Our
analysis finds that in the nodal orientation, the density of states and the
quasi-particle lifetime do not evolve smoothly across the Fermi energy. Such
an electron-hole asymmetry along ($\pi$, $\pi$) would obliterate the
superconducting gap along the nodes, even if the attractive interaction
leading to the formation of Cooper pairs were isotropic. It remains to be seen
how the $d_{x^{2}-y^{2}}$ pairing symmetry of the superconductor Tsuei and
Kirtley (2000) and the anisotropic energy-dependent scattering phase space of
the strange metal connect to each other.
KB is supported by the Agence Nationale de la Recherche (ANR-18-CE92-0020-01;
ANR-19-CE30-0014-04). NEH is supported by the Netherlands Organisation for
Scientific Research (NWO) (Grant No. 16METL01)—“Strange Metals” and by the
European Research Council (ERC) under the European Union’s Horizon 2020
research and innovation programme (Grant Agreement No. 835279-Catch-22).
## References
* Lee et al. (2006) P. A. Lee, N. Nagaosa, and X.-G. Wen, Rev. Mod. Phys. 78, 17 (2006).
* Hussey et al. (2018) N. E. Hussey, J. Buhot, and S. Licciardello, Rep. Prog. Phys. 81, 052501 (2018).
* Proust and Taillefer (2019) C. Proust and L. Taillefer, Ann. Rev. Condens. Matt. Phys. 10, 409 (2019).
* Nakamae et al. (2003) S. Nakamae, K. Behnia, N. Mangkorntong, M. Nohara, H. Takagi, S. J. C. Yates, and N. E. Hussey, Phys. Rev. B 68, 100502 (2003).
* Cooper et al. (2009) R. A. Cooper, Y. Wang, B. Vignolle, O. J. Lipscombe, S. M. Hayden, Y. Tanabe, T. Adachi, Y. Koike, M. Nohara, H. Takagi, et al., Science 323, 603 (2009).
* Nakamae et al. (2009) S. Nakamae, K. Behnia, N. Mangkorntong, M. Nohara, H. Takagi, S. J. C. Yates, and N. E. Hussey, Phys. Rev. B 79, 219904 (2009).
* Behnia (2009) K. Behnia, J. Phys.: Condens. Matter 21, 113101 (2009).
* Behnia and Aubin (2016) K. Behnia and H. Aubin, Rep. Prog. Phys. 79, 046502 (2016).
* Behnia (2015) K. Behnia, _Fundamentals of Thermoelectricity_ (Oxford University Press, 2015).
* Nakamura and Uchida (1993) Y. Nakamura and S. Uchida, Phys. Rev. B 47, 8369 (1993).
* Cooper and Loram (1996) J. R. Cooper and J. W. Loram, J. Phys. I France 6, 2237 (1996).
* Elizarova and Gasumyants (2000) M. V. Elizarova and V. E. Gasumyants, Phys. Rev. B 62, 5989 (2000).
* Collignon et al. (2020) C. Collignon, A. Ataei, A. Gourgout, S. Badoux, M. Lizaire, A. Legros, S. Licciardello, S. Wiedmann, J. Q. Yan, J. S. Zhou, et al., arXiv e-prints arXiv:2011.14927 (2020).
* Yoshida et al. (2007) T. Yoshida, X. J. Zhou, D. H. Lu, S. Komiya, Y. Ando, H. Eisaki, T. Kakeshita, S. Uchida, Z. Hussain, Z.-X. Shen, et al., Journal of Physics: Condensed Matter 19, 125209 (2007).
* Yoshida et al. (2006) T. Yoshida, X. J. Zhou, K. Tanaka, W. L. Yang, Z. Hussain, Z.-X. Shen, A. Fujimori, S. Sahrakorpi, M. Lindroos, R. S. Markiewicz, et al., Phys. Rev. B 74, 224510 (2006).
* Horio et al. (2018) M. Horio, K. Hauser, Y. Sassa, Z. Mingazheva, D. Sutter, K. Kramer, A. Cook, E. Nocerino, O. K. Forslund, O. Tjernberg, et al., Phys. Rev. Lett. 121, 077004 (2018).
* Narduzzo et al. (2008) A. Narduzzo, G. Albert, M. M. J. French, N. Mangkorntong, M. Nohara, H. Takagi, and N. E. Hussey, Phys. Rev. B 77, 220502 (2008).
* Behnia et al. (2004) K. Behnia, D. Jaccard, and J. Flouquet, J. Phys.: Condens. Matter 16, 5187 (2004).
* Zhu et al. (2009) Z. Zhu, E. Hassinger, Z. Xu, D. Aoki, J. Flouquet, and K. Behnia, Phys. Rev. B 80, 172501 (2009).
* MacDonald (2006) D. K. C. MacDonald, _Thermoelectricity : an introduction to the principles_ (Dover Publications, 2006).
* Shoenberg (2009) D. Shoenberg, _Magnetic Oscillations in Metals_ (Cambridge University Press, 2009).
* Robinson (1967) J. E. Robinson, Phys. Rev. 161, 533 (1967).
* Xu and Verstraete (2014) B. Xu and M. J. Verstraete, Phys. Rev. Lett. 112, 196603 (2014).
* Xu et al. (2020) B. Xu, M. Di Gennaro, and M. J. Verstraete, Phys. Rev. B 102, 155128 (2020).
* Ziman (1972) J. M. Ziman, _Principles of the Theory of Solids_ (Cambridge University Press, 1972), 2nd ed.
* Hussey et al. (2003) N. E. Hussey, M. Abdel-Jawad, A. Carrington, A. P. Mackenzie, and L. Balicas, Nature 425, 814 (2003).
* Abdel-Jawad et al. (2006) M. Abdel-Jawad, M. P. Kennett, L. Balicas, A. Carrington, A. P. Mackenzie, R. H. McKenzie, and N. E. Hussey, Nature Phys. 2, 821 (2006), ISSN 1745-2481.
* Grissonnanche et al. (2020) G. Grissonnanche, Y. Fang, A. Legros, S. Verret, F. Laliberté, C. Collignon, J. Zhou, D. Graf, P. Goddard, L. Taillefer, et al., arXiv e-prints arXiv:2011.13054 (2020).
* Tinkham (1996) M. Tinkham, _Introduction to superconductivity_ (Courier Corporation, 1996).
* Tsuei and Kirtley (2000) C. C. Tsuei and J. R. Kirtley, Rev. Mod. Phys. 72, 969 (2000).
# EDGF: Empirical dataset generation framework for wireless sensor networks
Dinesh Kumar Sah, Praveen Kumar Donta, Tarachand Amgoth
Department of Computer Science and Engineering
Indian Institute of Technology (Indian School of Mines), Dhanbad
Jharkhand, India-826004
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
In wireless sensor networks (WSNs), simulation practices, system models,
algorithms, and protocols are published worldwide based on assumptions of
randomness. The applied statistics used for randomness in WSNs are broad in
nature, e.g., random deployment, activity tracking, packet generation, etc.
Even when authors provide adequate formal and informal information, validating
a proposal remains a challenging issue: minute alterations between
implementation and validation can have an enormous effect on the eventual
results. In this work, we show how results are affected by generalized
assumptions about randomness. For sensor node deployment, ambiguity arises
from the node error value ($\epsilon$), and we estimate its upper bound on the
relative position to understand the sensitivity to diminutive changes.
Moreover, the effects of traffic uniformity and of the scheduling position of
nodes are also examined. We propose an algorithm to generate unified datasets
for general, and some specific, application system models in WSNs. The
datasets produced by our algorithm are pseudo-random and can be efficiently
regenerated from a seed value for validation.
###### keywords:
Dataset generation, random deployment, wireless networks, clustering, traffic
data.
## 1 Introduction
In wireless sensor network (WSN) research, the correctness of simulation
practices and results is important because of the limitations of
implementations and testbeds. Many pseudo-random number generators (PRNGs) are
available, such as the Mersenne Twister [1], Xorshift [2, 3], linear
congruential generators [4, 5, 6, 7, 8], and others [9, 10]. A PRNG produces
32-bit or 64-bit words, which can be further decoded as integers based on the
requirement. Most simulation and programming tools use the Mersenne Twister
[1] to generate random numbers (RNs). The problem is that, across tools,
reproducing the same sequence even from the same seed value is almost
impossible. Several existing WSN proposals use random deployment to validate
connectivity, packet generation, and much more, and use such functions
regularly. These practices raise a concern about the validation of results. A
formal description is often missing from the proposal, and the functions
provided are not enough to reproduce the results. Moreover, exploring the
random processes provided by simulators and programming tools is tedious,
creating further hurdles for authors.
This work proposes a new dataset generation framework for WSNs, referred to as
the empirical dataset generation framework (EDGF). The objective is to provide
unified datasets that are reproducible for validation purposes. Our function's
main contribution is that it uses a modified version of the linear congruential
function with an internal seed value derived from the user input. We also
observe some useful instances through experiments: certain seed constants
automatically produce string-like deployments, which we report in the result
analysis. Moreover, our proposal is independent of the limitations imposed on
the congruential generator [11], namely the selection of a prime $m$ and the
selection of $a$ as a primitive element modulo $m$.
### 1.1 Contributions
The contributions of the work are summarized as follows.
* 1.
We propose a new deterministic algorithm to generate datasets for sensor node
deployment and packet generation. The objective is to provide the dataset of
2-D deployment coordinates and the traffic matrix corresponding to the
deployed nodes.
* 2.
Deployment data generation, covering the critical aspects of the function and
the seed value.
* 3.
Traffic data generation for uniform and exponential traffic matrices. The
significance of exponential traffic is that it is needed for event-driven
applications.
* 4.
Validation of the randomness properties of EDGF data with the KS-test, the
$\chi^{2}$-test, and the autocorrelation test, to ensure uniformity and
independence.
* 5.
Illustration of the datasets produced by EDGF, highlighting important issues.
## 2 Effect of random datasets in WSNs
In this section, we will cover the significance of the randomness and its
effect on the validation. Here, randomness on deployment and data traffic are
considered because these are standard requirements of almost all the WSNs
models. However, we are in the initial stages of EDGF; therefore, we are
discussing datasets for deployment and packet generation only in this work. We
also observe that our framework’s true potential is limitless, and many other
datasets can also be generated using the EDGF for various applications in
WSNs.
Most of the time, linear congruential generators and the middle-square method
are used to generate RNs for deployment. In computing and programming, an
algorithm that can produce an extensive RN sequence fully determined by its
initial input is called a seed-based generator. By knowing the seed value, the
entire RN sequence can be regenerated, which is often a requirement of
computer protocols. This type of RN generator is called a pseudo-random number
generator (PRNG). In practice, PRNGs commonly take the system clock as the
seed value. In our example, we take 2-D random values on the X-Y plane,
generated with Python (NetworkX package), to depict the points as node
locations in a deployment. To examine the effect, we run the K-means
clustering algorithm to form clusters on these points. In many WSN practices,
clustering is used because of its benefits for energy, lifetime, congestion,
and more. We then estimate the Euclidean distance between each cluster head
and the sink. It is easy to observe that these values often differ
significantly across random instances. Note that since the distance between
nodes is directly proportional to energy consumption, the estimate of the
transmission range affects the network performance.
Figure 1: Clusters generated through K-means for four random instances
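The seed sensitivity described above can be reproduced with a small self-contained sketch (hand-rolled K-means on uniformly random points; all names and parameter values are illustrative, not the paper's exact setup):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Tiny K-means: random initial centers, then Lloyd iterations."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def mean_head_to_sink(seed, n=100, area=100.0, k=4):
    """Deploy n nodes uniformly at random and measure the mean
    Euclidean distance from cluster heads (centroids) to a central sink."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, area, size=(n, 2))
    centers = kmeans(pts, k, seed=seed)
    sink = np.array([area / 2, area / 2])
    return float(np.linalg.norm(centers - sink, axis=1).mean())

d1 = mean_head_to_sink(seed=1)
d2 = mean_head_to_sink(seed=2)  # a different random instance gives a different distance
```

Since energy consumption scales with these distances, two runs of the same protocol under two unrecorded seeds are not directly comparable.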
To analyze the effect of randomness in data traffic generation, we consider
CSMA and TDMA MAC protocols. In [12], CSMA/$p^{*}$ has been proposed based on
an optimal probability distribution of the traffic produced by the nodes. The
constraint is that the number of nodes and their channel access rates must be
known for this protocol to enhance channel utilization. Moreover, as an
extension of [12], another proposal named Sift [13], also based on
CSMA/$p^{*}$, provides better utilization when data are available at the
nodes. If data arrival at the nodes is random, it is almost impossible to
sense transmissions more than two hops away, and performance starts degrading
purely because of the randomness.
In Z-MAC [14], the distributed algorithm DRAND [15], an extension of the
centralized protocol RAND [16], is used to assign time slots to each node for
synchronization and subsequent node scheduling. Since synchronization is one
of the important aspects of node scheduling, most synchronization algorithms
consider hop localization when assigning slot numbers. If the node positions
at deployment time are taken randomly, the slots will vary throughout the
validation. Further, an interesting mechanism has been given to enhance the
usability of unused slots. Suppose we have ten nodes $a,b,\ldots,j$, to be
scheduled in the order $a,b,c,\ldots,j$. Each node owns its time slot and is
free to transmit its packets through TDMA. If an owner node does not use its
slot, that slot becomes eligible for CSMA data transmission. Under uniform
random packet generation, suppose the $2^{nd}$ and the $9^{th}$ slots are not
used by their owners. Then the congestion on the $2^{nd}$ slot will be
$\sum_{\text{leftover}}\text{packets}(2\rightarrow\text{onwards})$, which must
be higher than
$\sum_{\text{leftover}}\text{packets}(9\rightarrow\text{onwards})$ in a
uniform traffic scenario. These instances can have ripple effects in both
directions, but one thing is certain: the packet transmission rate will differ
between the two instances. Consequently, at validation time, the same
algorithm's packet delivery rate will differ under congestion, even for the
same random function. This section has shown observable evidence of how a
random function can cause severe differences in result validation, even when
the same procedure is used for deployment and packet generation.
## 3 Empirical dataset generation framework
In this section, we develop EDGF for WSN node deployment and packet
generation. The properties of randomness in the dataset, such as period,
uniformity, and independence, are carefully covered for both instances. For
deployment, besides randomness, other issues such as localization,
connectivity, outlier-handling mechanisms, and coverage are considered to make
the framework as general as possible. To determine the packet generation of
individual nodes, the proportionality of the packet generation needs to be
identified first. In [17], the interval of the packet generation rate and its
distribution throughout the network have been shown.
### 3.1 Topological significance of datasets
In WSNs, a deployment of nodes is represented as a graph $G$ with nodes
$n_{1},n_{2},\ldots,n_{i}$ and coordinates in the 2-D plane
$n_{1,x,y},n_{2,x,y},\ldots,n_{i,x,y}$. The topological significance of the
network follows the same properties as the graph. A graph is said to be
connected if there exists a path between every pair of nodes; for a graph with
$i$ nodes to have a direct edge between every pair, $i(i-1)/2$ edges are
needed. In WSNs, there is little certainty that each node falls within the
range of every other node. Although network connectivity is a separate
problem, many random generation functions make no attempt to provide points
that form a connected graph within the sensing range $R$. One simple solution
is to increase the sensing range of each node, but then energy consumption
will vary at validation time. EDGF identifies these issues and reports the
number of isolated nodes for individual sensing-range values. Moreover, EDGF
provides coordinates in a non-hierarchical manner suitable for mesh topology.
We observe that critical issues like connectivity, coverage, and localization
are not fulfilled by values generated through a random function, because the
user has little control over it. Therefore, we merely highlight these issues
and leave them as open problems. The output and positions of the nodes are
provided, and they can be further incorporated as input to handle
connectivity, coverage maximization, and localization in networks. In [6],
many random generators have been explored, out of which we have opted for
$x_{n}:=(a\,x_{n-1}+c)\bmod m$, though simply changing the constants does not
change the vector. With this selection, only two possibilities are left:
either change the constants following the suggestions given in [11], or
combine the generator with another one, such as the combined multiple-recursive
generator (CMRG) in [18]. Another option is to let the values of $a$
and $c$ be real numbers. The possibilities for real $a$ and $c$ are limitless,
although floating-point operations are also expensive. With a limited number
of test instances and universal mathematical constant values [19], the results
are impressive. Although vast exploration and validation are required to check
the limits of the function, it passes the initial tests performed for
deployment. Our function's advantage is its low memory requirement, as it does
not need to store constants on the order of $2^{32}$ or more to maintain the
period of randomness. Moreover, our deployment's complexity is $O(n)$, which
is enough to proceed.
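A minimal Python sketch of this real-constant recurrence (the `MCV` table below is an illustrative assumption standing in for the paper's internal mcV list, which is not reproduced here) shows that the sequence stays in range and is exactly reproducible from its seed:

```python
import math

# Illustrative stand-in for the paper's mcV table of
# "universal mathematical constant values" (assumed, not the actual table):
MCV = [math.pi, math.e, math.sqrt(2.0), (1 + math.sqrt(5.0)) / 2,
       math.log(2.0), 0.57721566490153286]  # last: Euler-Mascheroni constant

def edgf_sequence(seed, n, m):
    """x_k = (a*x_{k-1} + c) mod m with real-valued a, c picked via
    a = mcV[X[0] % |mcV|] and c = mcV[(X[0] + |mcV|/2) % |mcV|]."""
    a = MCV[seed % len(MCV)]
    c = MCV[(seed + len(MCV) // 2) % len(MCV)]
    x = float(seed)
    seq = []
    for _ in range(n):
        x = (a * x + c) % m  # real-valued modulus keeps x in [0, m)
        seq.append(x)
    return seq
```

Calling `edgf_sequence` twice with the same seed returns an identical sequence, which is the reproducibility property the framework relies on.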
In Algorithms 1 to 4, the input set is defined as follows: $n$ is the number
of sensor nodes; the deployment area, in unit$^{2}$, is $m\times n$, where $m$
and $n$ can be the width and length; $X[0]$ is the seed value for the $X$ and
$Y$ coordinates; and $mcV$ is the list of mathematical constant values defined
internally based on the input seed value. The constants $a$ and $c$ are
generated using $a=mcV[X[0]\%|mcV|]$ and
$c=mcV\left[\left(X[0]+\frac{|mcV|}{2}\right)\%|mcV|\right]$, with $a\neq c$.
For packet generation, the number of time slots needed is denoted $t$, and the
number of packets generated varies between $P_{1}$ and $P_{2}$.
### 3.2 Grid deployment
Here, the objective is to generate an empirical dataset in two-dimensional
space that obeys simple rules of randomness and networking. The first
requirement is that the dataset must be genuinely random within the area of
interest, while the WSN properties of connectivity and coverage are
maintained. In some instances, outliers exist in the network, so there must be
a proper outlier-handling mechanism. To deal with outliers, a relaxation in
terms of an error value ($\epsilon$-value) is made to accommodate outlier
nodes and maintain connectivity. This relaxation is optional in the algorithm,
since in some settings, such as with a mobile sink, outliers and hidden nodes
can be handled directly. Through experimental observation, the uniformity of
node deployment is enhanced by adopting grid deployment. In our work, the grid
is defined by plotting axes through the coordinate $(m/2,n/2)$; the entire
grid area $G$ is thereby divided into four sub-grids
$G=\{g_{1},g_{2},g_{3},g_{4}\}$. The advantage of this construction is that it
increases the connectivity and coverage of the network without any separate
algorithm.
Data: (n, m, X[0])
Result: Deployment coordinates (X, Y) in range R
begin
  // $a$ and $c$ are constants, $a\neq c$
  $a=mcV[X[0]\%|mcV|]$
  $c=mcV\left[\left(X[0]+\frac{|mcV|}{2}\right)\%|mcV|\right]$
  Y[0] = X[0] // $y$-coordinate seed value
  // Deployment of sensors
  for _i = 1 to n_ do
    X[i] = (a*X[i-1] + c) % m
    Y[i] = (a*Y[i-1] + a) % m
  // Make graph
  for _i = 1 to n_ do
    G.add_node(Point(X[i], Y[i]))
Algorithm 1 Deployment data generation without grid
Data: (n, m, X[0])
Result: Deployment coordinates (X, Y) in an area $m\times n$
begin
  $a=mcV[X[0]\%|mcV|]$
  $c=mcV\left[\left(X[0]+\frac{|mcV|}{2}\right)\%|mcV|\right]$
  Y[0] = X[0] // $y$-coordinate seed value
  m1 = $\frac{m}{2}$
  n1 = $\frac{n}{4}$ // n/4 nodes per sub-grid; the area is divided into 4 grids
  // Deployment of sensors in grid 1
  for _i = 1 to n1_ do
    X[i] = (a*X[i-1] + c) % m1
    Y[i] = (a*Y[i-1] + a) % m1
  // Deployment of sensors in grid 2
  for _j = 1 to n1_ do
    X[i+j] = X[j] + m1
    Y[i+j] = Y[j] + m1
  // Deployment of sensors in grid 3
  for _k = 1 to n1_ do
    X[i+j+k] = X[k] + m1
    Y[i+j+k] = Y[k]
  // Deployment of sensors in grid 4
  for _l = 1 to n1_ do
    X[i+j+k+l] = X[l]
    Y[i+j+k+l] = Y[l] + m1
  // Make graph
  for _i = 1 to n_ do
    G.add_node(Point(X[i], Y[i]))
Algorithm 2 Grid-based deployment data generation
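Algorithm 2 can be sketched in a few lines of Python (the constants `a` and `c` below are illustrative stand-ins for the mcV-derived values, and the function name is hypothetical):

```python
def grid_deployment(n, m, seed, a=3.141592653589793, c=2.718281828459045):
    """Sketch of Algorithm 2: generate n/4 base points with the
    real-constant LCG inside one (m/2 x m/2) sub-grid, then translate
    them into the other three sub-grids."""
    m1 = m / 2.0
    x = y = float(seed)
    base = []
    for _ in range(n // 4):
        x = (a * x + c) % m1
        y = (a * y + a) % m1   # Algorithms 1-2 use +a in the y recurrence
        base.append((x, y))
    pts = list(base)                                # grid 1
    pts += [(px + m1, py + m1) for px, py in base]  # grid 2
    pts += [(px + m1, py) for px, py in base]       # grid 3
    pts += [(px, py + m1) for px, py in base]       # grid 4
    return pts
```

Because each sub-grid receives the same number of points, every quadrant of the area is populated, which is the connectivity/coverage advantage claimed for the grid construction.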
## 4 Packet generation
There are two classes of data or packet traffic in WSN applications:
continuous monitoring and event-driven monitoring. The packet-traffic
requirements may vary with the application.
### 4.1 Uniform packet generation
In continuous monitoring applications, the sensors send packets in every time
interval. This activity can be modeled as a uniform distribution of packets
throughout the network within a specific range.
#### 4.1.1 lemma
In a continuous monitoring system, the packet rate of the sensors can be
modeled as uniformly distributed traffic.
#### 4.1.2 proof
If $x_{i}$ is a value drawn from the theoretical uniform distribution (TUD) on
$[0,1]$, then the value $a+(b-a)x_{i}$ follows the TUD constrained by $a$ and
$b$. The probability that a TUD random variable falls in an interval of fixed
length is independent of the position of the interval itself (it depends only
on the interval size). To see this, if $X$ is uniformly distributed over
$U(a,b)$ and $[x,x+d]$ is a subinterval of $[a,b]$ with fixed $d>0$, then
$P(X\in[x,x+d])=\int_{x}^{x+d}\frac{dy}{b-a}=\frac{d}{b-a}$, which is
independent of $x$.
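The interval-probability argument above is easy to verify empirically with a Monte Carlo check (plain Python PRNG; the positions and interval length are arbitrary illustrative choices):

```python
import random

def interval_frequency(x, d, a=0.0, b=1.0, n=200_000, seed=42):
    """Empirical estimate of P(X in [x, x+d]) for X ~ U(a, b)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if x <= a + (b - a) * rng.random() <= x + d)
    return hits / n

# The frequency ~ d/(b-a) regardless of where the interval sits:
f_low = interval_frequency(0.10, 0.10)
f_high = interval_frequency(0.75, 0.10)
```

Both estimates land near $d/(b-a)=0.1$, illustrating that only the interval's length, not its position, matters.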
Data: (n, t, $P_{1}$, $P_{2}$)
Result: Uniform packet generation
begin
  // $P_{1}$ = minimum packet size, $P_{2}$ = maximum packet size
  // $a$ and $c$ are constants, $a\neq c$
  X[0] = mcV[$P_{2}$ % $|mcV|$]
  $a=mcV[X[0]\%|mcV|]$
  $c=mcV\left[\left(X[0]+\frac{|mcV|}{2}\right)\%|mcV|\right]$
  for _i = 1 to n_ do
    for _j = 1 to t_ do
      $X[j\%t]=[(a\,X[j-1]+c)\%(P_{2}-P_{1})]+P_{1}$
      $Y[i][j]=\left[\frac{-1}{\lambda}\log\left(1-\frac{X[j\%t]}{P_{2}}\right)\%(P_{2}-P_{1})\right]+P_{1}$
Algorithm 3 Uniform packet generation $r_{1,t},r_{2,t},\ldots,r_{n,t}$ for nodes $1,2,\ldots,n$ and time slots $1,\ldots,t$
### 4.2 Exponential data generation for event-driven monitoring (EDM)
In EDM, a sensor is activated only when some activity occurs within its
sensing range. These are the scenarios in which sensors can exploit
correlation to improve network efficiency by adapting the data-acquisition
mechanism. In [20], network management based on the spatial-temporal (ST)
relation of sensors has been suggested. The advantage of adopting ST is that
the effect of any event-driven activity can be captured and distributed among
the nodes exponentially. Assume that at some time instant $t$ an event $e_{t}$
occurs in grid $g_{i}$, in locality $l_{2,{g_{i}}}$, whose effect is monitored
by the sensors in that locality. As the distance between the point of activity
and an individual sensor $s_{i}$ increases, the effect starts fading. It is
possible that the corner-most sensors of other grids, in localities such as
$l_{1,{g_{i+1}\%4}},l_{2,{g_{i+2}\%4}},l_{3,{g_{i+3}\%4}},l_{4,{g_{i+3}\%4}}$,
will not even be able to sense the event because of the limited sensing range.
In this case, uniform generation of data packets in simulation practices might
lead to ambiguous results. Therefore, we give an algorithm to generate data
packets with an exponential distribution. Let $F(x)$ be the exponential
distribution function over a random variable $x$ with rate $\lambda$, defined
as:
$F(x)=\begin{cases}1-e^{-\lambda x},&x\geq 0\\ 0,&x<0\end{cases}$ (1)
Here $\lambda>0$ is the rate parameter of the exponential distribution. We
have already generated $r_{1,t},r_{2,t},\ldots,r_{n,t}$ through Algorithm 3,
where each value satisfies $r_{i}=1-e^{-\lambda x_{i}}$ for $\lambda=1$. Hence
$-\lambda x_{i}=\ln(1-r_{i})$, which yields the individual
$x_{i}=-\frac{1}{\lambda}\ln(1-r_{i})$ (generally, $x_{i}$ w.r.t. $r_{i}$).
The minimal or null effect for nodes far away from the event can also be
shown to be random (see Appendix A.1).
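The inverse-transform step $x_{i}=-\frac{1}{\lambda}\ln(1-r_{i})$ can be sketched directly; feeding uniform variates through it yields exponentially distributed values whose sample mean approaches $1/\lambda$ (the rate and seed below are arbitrary illustrative choices):

```python
import math
import random

def exponential_from_uniform(r, lam):
    """Inverse-transform step from the text: x = -(1/lambda) * ln(1 - r)."""
    return -math.log(1.0 - r) / lam

rng = random.Random(7)
lam = 2.0
samples = [exponential_from_uniform(rng.random(), lam)
           for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should approach 1/lam = 0.5
```

This is the standard inverse-transform construction: any uniform source, including the EDGF recurrence, can be converted into exponential packet traffic this way.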
#### 4.2.1 lemma
In a correlation-aware monitoring system, the packet rate of sensors that are
considerably far from the event can also be modeled as exponentially
distributed traffic.
#### 4.2.2 proof
In Appendix A.1, a bound on the minimum $\lambda$ is derived, which
corresponds to nodes far from the activity point. Such $\lambda$ will be
small, but the data generated with respect to that point still follow an
exponential distribution.
Data: (n, t, $P_{1}$, $P_{2}$)
Result: Exponential packet generation (sample results available in Appendix
A.4, Table 2)
1 begin
2 // $a$ and $c$ are constants, $a\neq c$
3 X[0][0] = mcV[$P_{2}\%|mcV|$]
4 a = mcV[$P_{1}\%|mcV|$]
5 $c=mcV\left[\left(X[0][0]+\frac{|mcV|}{2}\right)\%|mcV|\right]$
6 for _i=1 to n_ do
7 for _j=1 to t_ do
8 $X[i\%t][j\%t]=(a*X[(i-1)\%t][(j-1)\%t]+c)\%(P_{2}-P_{1})+P_{1}$
Algorithm 4 Exponential Packet Generation Algorithm
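The recurrence of Algorithm 4 combined with the exponential inversion of Eq. (1) can be sketched as follows. This is a sketch under stated assumptions: `MCV` below is a hypothetical stand-in for the paper's table of mathematical constants mcV, and we assume each raw recurrence value is normalized into $[0,1)$ before the inverse-CDF map.

```python
import math

# Hypothetical stand-in for the paper's table of mathematical constants (mcV);
# the real table comes from the earlier deployment algorithms.
MCV = [4.669202, 3.141593, 2.718282, 2.584982, 1.618034, 3.359886]

def exponential_packets(n, t, p1, p2, rate=1.0):
    """Sketch of Algorithm 4: an LCG-style recurrence seeded from MCV,
    normalized to [0, 1) and mapped through the exponential inverse CDF."""
    a = MCV[p1 % len(MCV)]
    x = MCV[p2 % len(MCV)]
    c = MCV[(int(x) + len(MCV) // 2) % len(MCV)]
    assert a != c, "the constants a and c must differ"
    values = []
    for _ in range(n * t):
        x = (a * x + c) % (p2 - p1) + p1        # recurrence of line 8
        r = (x - p1) / (p2 - p1)                 # normalize into [0, 1)
        values.append(-math.log(1.0 - r) / rate) # exponential inverse CDF
    return values

packets = exponential_packets(n=3, t=5, p1=2, p2=97)
print(len(packets), min(packets) >= 0.0)  # 15 True
```

Because the recurrence is deterministic, the same $(n, t, P_1, P_2)$ always reproduces the same packet trace, which is convenient for repeatable simulation runs.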
## 5 Analysis and results
To establish the degree of agreement between the distribution of a sample of
generated RNs and the theoretical uniform distribution (TUD), we perform three
well-known tests: two for uniformity and one for independence.
* 1.
Kolmogorov-Smirnov test (KS-test) (uniformity test)
* 2.
$\chi^{2}$ test ($\chi^{2}$Test) (uniformity test)
* 3.
Autocorrelation test (independence test)
To analyze the generated RNs, two hypotheses are made. The first supports that
the RN generator is indeed uniformly distributed and is denoted $H_{0}$, the
null hypothesis. The other supports that the RN generator is not uniformly
distributed and is denoted $H_{1}$, the alternative hypothesis.
Based on the null hypothesis of no major difference between the sample
distribution and the TUD, we conclude whether the numbers generated
by the function are truly random at the given significance level ($\alpha$). The
experimental results, for different constants $a,c$ and with the
KS-test ($\alpha=0.01$), $\chi^{2}$-test ($\alpha=0.001$), and autocorrelation
test ($\alpha=0.01$), are tabulated in Appendix A.3, Table 1.
A hypothesis-based test evaluates two mutually exclusive statements about a
sample to determine which statement is better supported by the data drawn from
the population. The test result is significant only if the sample evidence
against $H_{0}$ is strong enough to reject $H_{0}$ for the overall population.
Some points require special care in hypothesis testing: the assumption that
$H_{0}$ is true, the choice of the $\alpha$-value, and the nature of the sample
data (values drawn from the critical region). It must be understood that a test
based on $\alpha$ does not establish randomness with $100\%$ accuracy; it only
fails to reject the hypothesized nature (randomness, in our case) at the given
$\alpha$. Changing $\alpha$ or the sample can increase confidence in the data,
but the conclusion will never be $100\%$ certain.
The most popular $\alpha$ values lie in the interval $[0.01,0.05]$ and are also
used in our experiments, with the single exception of $\alpha=0.001$. If, say,
$\alpha=0.05$, we expect sample means to fall in the critical region $5\%$ of
the time when $H_{0}$ is true. We can therefore never determine that the null
hypothesis is true, but if the statistic falls in the critical region, $H_{0}$
is rejected. For this reason $\alpha$ is sometimes regarded as an error rate.
In our test results, if a test passes after an $\epsilon$-value relaxation of
$\alpha$, the result can be accepted; the true nature of this
$\epsilon$-relaxation still needs to be explored.
### 5.1 Kolmogorov-Smirnov test (KS-test) of generated datasets
Suppose the sample from our RN generator is
$r_{1},r_{2},\dots,r_{n}$, where $n$ is the total number of RNs generated.
In the KS-test, a sub-sample of size at most $n$ is selected from the full
sample. If the sub-sample passes the test for a given $\alpha$
value, the numbers pass the uniformity test, and the test can then be repeated
on different sub-samples. The empirical distribution $s_{n}(x)$
depends on the sample size taken for the test and is defined as
$s_{n}(x)=\frac{\left|\{r_{k}\in\{r_{i},r_{i+1},\dots,r_{i+j}\}:r_{k}\leq x\}\right|}{N},\quad\text{where }N=\frac{n}{4}\text{ and the sub-sample is drawn from the full sample}$ (2)
The KS statistic is defined as the greatest absolute difference between $f(x)$
and $s_{n}(x)$ over the range of the random variable; this difference in the
parametric test (DPT) is defined as
$DPT=\max_{x}|f(x)-s_{n}(x)|$ (3)
(See Appendix A.2 for details.)
### 5.2 $\chi^{2}$-test (See Appendix A.2 for details)
The Chi-square goodness-of-fit test determines whether sample data are similar
to the population. Once the $\chi^{2}$ value is calculated, it is checked
against the critical value for the given degrees of freedom and $\alpha$; if
the computed value exceeds the critical value, the hypothesis is rejected,
otherwise it is not rejected.
### 5.3 Autocorrelation test (See Appendix A.2 for details)
This test is concerned with dependencies between numbers in a sequence, i.e.,
whether the numbers depend on one another. Once the $Z_{0}$ value is computed,
it is compared with the cumulative standardized normal distribution table; if
$|Z_{0}|$ does not exceed the tabulated critical value for the chosen $\alpha$,
we conclude that there is no significant autocorrelation and the null
hypothesis is not rejected. Beyond plain autocorrelation, if the $x$ and $y$
values are in the same dimension (which is true in the case of deployment),
circular correlation has been suggested for a periodic sequence in [21].
Figure 2: Deployment of the network in grid and non-grid modes with X[0]=43,
a=3.359885666, c=1.902160583. (a) Transmission range=10 $(Grid\text{-}based)$ (b)
Transmission range=15 $(Grid\text{-}based)$ (c) Transmission range=10
$(NonGrid\text{-}based)$ (d) Transmission range=15 $(NonGrid\text{-}based)$
(e) a=c $(Grid\text{-}based)$ (f) a=c $(NonGrid\text{-}based)$
Test results for the sample data-set of the experiment are available in
Appendix A.3, Table 1.
### 5.4 Results
Fig. 2 shows both grid-based and non-grid-based random deployments of sensor
nodes in a 100 sq. m area with seed value 43 and different transmission
ranges. In grid-based deployment, we generate random positions in one quarter
and repeat the same positions in the remaining quarters. Grid-based deployment
makes it easier to test the data set, and it satisfies the connectivity of the
nodes and the coverage of the area. Fig. 2a and 2b show the grid-based
deployment with transmission ranges 10 and 15, respectively. We observe that
the sensors become well connected as the transmission range increases. The
non-grid-based random sensor deployment is shown in Fig. 2c and 2d with
transmission ranges 10 and 15. In non-grid-based random deployment we find
that the sensor placements vary and the probability of placing a larger number
of nodes in a particular portion is high. The constant values $a$ and $c$ have
a strong impact on the random deployment, and they must differ from each
other. If $a$ and $c$ are made equal, the result looks like Fig. 2e and
Fig. 2f, where Fig. 2e shows the grid-based deployment and Fig. 2f the
non-grid-based deployment.
Figure 3: Illustration of the Kolmogorov-Smirnov statistic for sensor
deployment and packet generation (a) Grid-based (b) Non-grid based (c) Uniform
distribution (d) Exponential approach
Fig. 3 shows the K-S test statistics of the performed simulations, for the
EDGF and the cumulative distribution function (CDF), on the deployment and
packet-generation data. The $DPT$ of Eq. 3 can be observed directly in Fig. 3:
in Fig. 3a and Fig. 3b, the blue line shows the CDF-based reference and the
orange line replicates the uniformity in both grid and non-grid deployments.
Fig. 3c and Fig. 3d depict the packet generation for the different time slots
$t_{1},t_{2},\dots,t_{5}$. The packet data do not appear to pass the
uniformity test in Fig. 3d for some intervals, which shows the bias of the
function. Even though the reference CDF (blue line) for the uniform
distribution in Fig. 3d shows partial failure, the exponential
packet-generation values in Fig. 3c show a promising difference.
## Appendix A Appendices
### A.1 Nature of the distribution as $\lambda\rightarrow 0$ in the exponential
distribution
Let $X_{1},X_{2},\dots,X_{n}$ be independent exponentially distributed random
variables with rates $\lambda_{1},\dots,\lambda_{n}$. Then
$\min\{X_{1},\dots,X_{n}\}$ is also exponentially distributed, with rate
$\lambda=\lambda_{1}+\dots+\lambda_{n}$:
$P(\min\{X_{1},\dots,X_{n}\}>x)=P(X_{1}>x,\dots,X_{n}>x)=\prod_{i=1}^{n}P(X_{i}>x)$ (4)
$P(\min\{X_{1},\dots,X_{n}\}>x)=\prod_{i=1}^{n}e^{-\lambda_{i}x}=\exp\left(-x\sum_{i=1}^{n}\lambda_{i}\right)$
(5)
To see which variable attains the minimum (in particular a variable whose rate
$\lambda_{k}\rightarrow 0$), the probability that index $k$ achieves it is
$P(X_{k}=\min\{X_{1},\dots,X_{n}\})=\frac{\lambda_{k}}{\lambda_{1}+\lambda_{2}+\dots+\lambda_{n}}$
(6)
### A.2 Validation Test Description
Suppose $X$ is uniformly distributed over the unit interval $[0,1]$;
then the cumulative distribution function (CDF) of $X$ can be defined as:
$f(x)=\begin{cases}0,&x<0\\ x,&0\leq x<1\\ 1,&x\geq 1\end{cases}$ (7)
#### A.2.1 Kolmogorov-Smirnov test:
The following steps are required (for the sorted sample $r_{1}\leq\dots\leq r_{n}$):
* step.1
Compute $D^{+}$ and $D^{-}$
$D^{+}=\max\limits_{1\leq i\leq n}\Bigg\{\frac{i}{n}-r_{i}\Bigg\}$ (8)
$D^{-}=\max\limits_{1\leq i\leq n}\Bigg\{r_{i}-\frac{i-1}{n}\Bigg\}$ (9)
* step.2
Compute the maximum of $D^{+}$ and $D^{-}$ as
$D=\max\left[D^{+},D^{-}\right]$ (10)
* step.3
Locate in the KS-table the critical value $D_{\alpha}$ for the specified
$\alpha$ and sample size.
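The three steps can be carried out directly (a minimal sketch; the critical value shown is the common large-sample approximation $D_{\alpha}\approx 1.36/\sqrt{n}$ for $\alpha=0.05$ rather than an exact table lookup):

```python
def ks_statistic(sample):
    """Compute D = max(D+, D-) for a sample tested against Uniform[0, 1]."""
    r = sorted(sample)
    n = len(r)
    d_plus = max((i + 1) / n - r[i] for i in range(n))   # Eq. (8)
    d_minus = max(r[i] - i / n for i in range(n))        # Eq. (9)
    return max(d_plus, d_minus)                          # Eq. (10)

sample = [0.05, 0.14, 0.44, 0.81, 0.93]
d = ks_statistic(sample)
# Large-sample critical value for alpha = 0.05, in place of a KS-table lookup.
d_crit = 1.36 / len(sample) ** 0.5
print(round(d, 3), "reject" if d > d_crit else "fail to reject")  # 0.26 fail to reject
```

Sorting the sample first is what lets the index $i$ play the role of the empirical CDF in Eqs. (8)–(9).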
#### A.2.2 $\chi^{2}$-test
To perform the $\chi^{2}$ test, the following steps are applied to the
sample RVs for validation.
* step.1
Compute $\chi^{2}$
$\chi^{2}=\sum_{i=1}^{n}\Bigg\{\frac{(F_{i}-AE_{i})^{2}}{AE_{i}}\Bigg\}$
(11)
where $F_{i}$ is the observed frequency of numbers in the $i^{th}$ class, and
$AE_{i}$ is the absolute expected number in each class, equal to $N/n$ for
equally spaced classes.
* step.2
Compute $\nu$
$\nu=n-1$ (12)
The sampling distribution of $\chi^{2}$ is approximately the $\chi^{2}$
distribution with $(n-1)$ degrees of freedom.
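For equally spaced classes the two steps reduce to a few lines (a minimal sketch; the critical value for $\nu=n-1$ and the chosen $\alpha$ would still come from a $\chi^{2}$ table):

```python
def chi_square_statistic(sample, n_classes=10):
    """Chi-square goodness-of-fit against Uniform[0, 1) with equal-width classes.
    AE = N / n is the expected count per class for N observations (Eq. 11)."""
    big_n = len(sample)
    expected = big_n / n_classes
    observed = [0] * n_classes
    for r in sample:
        observed[min(int(r * n_classes), n_classes - 1)] += 1
    chi2 = sum((f - expected) ** 2 / expected for f in observed)
    return chi2, n_classes - 1  # statistic and degrees of freedom (Eq. 12)
```

A perfectly uniform sample gives $\chi^{2}=0$; larger values indicate larger deviation of the observed class counts from the expected ones.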
#### A.2.3 Autocorrelation and circular autocorrelation test
To compute the autocorrelation between every $m$-th number ($m$ is known as the
lag), we start from the $i^{th}$ number and step with interval $m$, i.e., over
indices $i, i+m, i+2m, i+3m,\dots$. The autocorrelation $\rho_{im}$ is computed
between the numbers
$R_{i},R_{i+m},R_{i+2m},\dots,R_{i+(M+1)m}$, where $M$ is the largest
integer such that $i+(M+1)m\leq N$.
A non-zero autocorrelation implies a lack of independence, so the following
two-sided test is appropriate.
$H_{0}:\rho_{im}=0$ (no correlation)
$H_{1}:\rho_{im}\neq 0$ (correlation)
* step.1
Compute $Z_{0}$
$Z_{0}=\frac{\widehat{\rho}_{im}}{\delta_{\widehat{\rho}_{im}}}$ (13)
* step.2
Compute $\widehat{\rho}_{im}$
$\widehat{\rho}_{im}=\frac{1}{M+1}\sum_{k=0}^{M}\left[R_{i+km}R_{i+(k+1)m}\right]-0.25$
(14)
* step.3
${\delta_{\widehat{\rho}_{im}}}=\frac{\sqrt{13M+7}}{12(M+1)}$ (15)
If two sequences of RNs are periodic with the same period $N$, the circular
correlation can be defined as
$\widehat{\rho}_{im}=\frac{1}{N}\sum_{k=0}^{N-1}{x(k)\,y(k-m)}$ (16)
${\delta_{\widehat{\rho}_{im}}}=\frac{\sqrt{13N+7}}{12(N+1)}$ (17)
where $m=0,1,\dots,(N-1)$. Note: only $m=0$ is
considered in the experiment, with $\alpha=0.001$, for the circular correlation.
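The steps above can be combined into one routine (a minimal sketch; 1-based indexing is used for the sequence, matching the formulas, and the standard error follows Eq. (15)):

```python
def autocorrelation_z(seq, i=1, m=1):
    """Z0 statistic for lag-m autocorrelation starting at the i-th number
    (1-based). M is the largest integer with i + (M + 1) * m <= N."""
    n = len(seq)
    big_m = (n - i) // m - 1
    if big_m < 0:
        raise ValueError("sequence too short for this i and m")
    # Eq. (14): mean of lagged products, minus 0.25 (the uniform expectation).
    rho = sum(seq[i - 1 + k * m] * seq[i - 1 + (k + 1) * m]
              for k in range(big_m + 1)) / (big_m + 1) - 0.25
    sigma = (13 * big_m + 7) ** 0.5 / (12 * (big_m + 1))  # Eq. (15)
    return rho / sigma                                    # Eq. (13)

# A constant sequence of 0.5 has lagged products of exactly 0.25, so Z0 = 0.
print(round(autocorrelation_z([0.5] * 20), 3))  # 0.0
```

An alternating sequence such as `[0.1, 0.9] * 10` produces lagged products below 0.25 and hence a negative $Z_{0}$, signalling (negative) dependence.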
### A.3 Test case results (Table 1)
Table 1: Test case results (columns 4–9: isolated nodes in non-grid based deployment and testing; columns 10–15: isolated nodes in grid based deployment and testing)
X[0] | a value | c value | TR=10 | TR=15 | TR=20 | KS-Test | Chi2Test | Autocorrelation Test | TR=10 | TR=15 | TR=20 | KS-Test | Chi2Test | Autocorrelation Test
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 4.669202 | 2.295587 | 10 | 1 | 0 | Satisfied | Satisfied | Satisfied | 15 | 0 | 0 | Satisfied | Satisfied | Satisfied
2 | 3.275823 | 1.705211 | 11 | 0 | 0 | Satisfied | Satisfied | Satisfied | 8 | 4 | 0 | Rejected | Satisfied | Satisfied
3 | 2.80777 | 1.324718 | 8 | 2 | 0 | Rejected | Rejected | Satisfied | 13 | 2 | 0 | Satisfied | Satisfied | Satisfied
5 | 2.584982 | 3.141593 | 11 | 1 | 1 | Satisfied | Satisfied | Satisfied | 12 | 0 | 0 | Satisfied | Satisfied | Satisfied
7 | 2.295587 | 4.669202 | 1 | 0 | 0 | Satisfied | Satisfied | Satisfied | 11 | 0 | 0 | Satisfied | Satisfied | Satisfied
12 | 3.141593 | 2.584982 | 5 | 2 | 1 | Rejected | Satisfied | Satisfied | 9 | 0 | 0 | Satisfied | Satisfied | Satisfied
14 | 4.669202 | 2.295587 | 8 | 2 | 0 | Rejected | Rejected | Satisfied | 14 | 2 | 0 | Satisfied | Rejected | Satisfied
24 | 1.324718 | 2.80777 | 11 | 2 | 2 | Satisfied | Satisfied | Satisfied | 11 | 0 | 0 | Satisfied | Rejected | Satisfied
43 | 3.359886 | 1.902161 | 8 | 0 | 0 | Satisfied | Satisfied | Satisfied | 17 | 1 | 1 | Satisfied | Satisfied | Satisfied
59 | 2.80777 | 1.324718 | 8 | 1 | 1 | Satisfied | Satisfied | Satisfied | 12 | 0 | 0 | Satisfied | Satisfied | Satisfied
65 | 1.705211 | 3.275823 | 6 | 0 | 0 | Satisfied | Rejected | Satisfied | 8 | 0 | 0 | Satisfied | Satisfied | Satisfied
70 | 4.669202 | 2.295587 | 11 | 3 | 0 | Satisfied | Satisfied | Satisfied | 7 | 0 | 0 | Rejected | Satisfied | Satisfied
76 | 2.502908 | 2.718282 | 5 | 0 | 0 | Satisfied | Satisfied | Satisfied | 9 | 3 | 0 | Satisfied | Rejected | Satisfied
87 | 2.80777 | 1.324718 | 9 | 2 | 0 | Rejected | Satisfied | Satisfied | 8 | 2 | 0 | Rejected | Satisfied | Satisfied
144 | 2.685452 | 1.618034 | 8 | 1 | 0 | Satisfied | Rejected | Satisfied | 6 | 0 | 0 | Satisfied | Satisfied | Satisfied
147 | 2.295587 | 4.669202 | 8 | 1 | 0 | Satisfied | Satisfied | Satisfied | 21 | 6 | 0 | Satisfied | Rejected | Satisfied
192 | 1.324718 | 2.80777 | 13 | 3 | 1 | Satisfied | Satisfied | Satisfied | 8 | 0 | 0 | Satisfied | Satisfied | Satisfied
251 | 2.718282 | 2.502908 | 9 | 1 | 0 | Rejected | Satisfied | Satisfied | 4 | 0 | 0 | Rejected | Satisfied | Satisfied
365 | 3.359886 | 1.902161 | 4 | 1 | 0 | Rejected | Rejected | Satisfied | 16 | 1 | 0 | Satisfied | Satisfied | Satisfied
1111 | 2.584982 | 3.141593 | 9 | 3 | 0 | Satisfied | Satisfied | Satisfied | 8 | 0 | 0 | Satisfied | Satisfied | Satisfied
### A.4 Packet generation data for exponential distribution (Table 2)
Table 2: Exponential packet generation data Node ID | Exponential distribution | Uniform distribution
---|---|---
| t=1 | t=2 | t=3 | t=4 | t=5 | t=1 | t=2 | t=3 | t=4 | t=5
1 | 3.63 | 2.93 | 3.41 | 2.59 | 2.26 | 6.06 | 7.55 | 4.44 | 2.26 | 3.1
2 | 2.37 | 2.88 | 3.18 | 2.27 | 2.42 | 5.86 | 6.91 | 2.36 | 3.43 | 6.93
3 | 3.18 | 2.28 | 2.45 | 3.38 | 2.54 | 2.41 | 3.59 | 7.47 | 4.18 | 9.41
4 | 4.84 | 2.29 | 2.52 | 4.21 | 4.18 | 2.54 | 4.03 | 8.9 | 8.86 | 8.74
5 | 4.07 | 3.8 | 3.22 | 2.33 | 2.66 | 8.35 | 7.04 | 2.78 | 4.81 | 3.45
6 | 2.42 | 3.2 | 2.31 | 2.57 | 7.69 | 7 | 2.64 | 4.35 | 9.97 | 4.35
7 | 2.57 | 7.61 | 2.57 | 7 | 2.55 | 9.96 | 4.34 | 9.93 | 4.24 | 9.61
8 | 5.23 | 2.38 | 2.94 | 3.46 | 2.66 | 3.17 | 6.1 | 7.67 | 4.84 | 3.56
9 | 2.44 | 3.34 | 2.49 | 3.79 | 3.19 | 7.37 | 3.85 | 8.32 | 6.97 | 2.53
10 | 2.29 | 2.51 | 4.11 | 3.9 | 3.41 | 3.99 | 8.79 | 8.5 | 7.56 | 4.47
11 | 2.59 | 2.27 | 2.41 | 3.14 | 7.91 | 2.34 | 3.39 | 6.8 | 9.97 | 4.37
12 | 2.58 | 2.23 | 2.27 | 2.43 | 3.25 | 2.04 | 2.38 | 3.49 | 7.13 | 3.07
13 | 2.37 | 2.86 | 3.09 | 4.83 | 2.29 | 5.78 | 6.63 | 9.41 | 2.54 | 4.01
14 | 2.51 | 4.17 | 4.05 | 3.75 | 3.13 | 8.86 | 8.72 | 8.26 | 6.76 | 9.86
15 | 6.27 | 2.51 | 4.14 | 3.98 | 3.57 | 4.01 | 8.83 | 8.62 | 7.93 | 5.67
16 | 2.84 | 2.99 | 3.78 | 3.19 | 2.29 | 6.29 | 8.32 | 6.96 | 2.5 | 3.9
17 | 2.49 | 3.88 | 3.37 | 2.53 | 4.45 | 8.47 | 7.45 | 4.1 | 9.14 | 9.65
18 | 5.34 | 2.4 | 3.05 | 4.36 | 4.76 | 3.3 | 6.52 | 9.06 | 9.37 | 2.4
19 | 2.27 | 2.44 | 3.33 | 2.48 | 3.76 | 3.56 | 7.37 | 3.84 | 8.28 | 6.84
20 | 3.15 | 2.24 | 2.31 | 2.58 | 2.24 | 2.12 | 2.66 | 4.41 | 2.17 | 2.8
21 | 2.33 | 2.67 | 2.46 | 3.5 | 2.73 | 4.88 | 3.68 | 7.77 | 5.17 | 4.65
22 | 2.62 | 2.35 | 2.75 | 2.7 | 2.54 | 2.92 | 5.29 | 5.02 | 4.15 | 9.31
23 | 4.68 | 2.25 | 2.35 | 2.76 | 2.72 | 2.21 | 2.94 | 5.32 | 5.14 | 4.56
24 | 2.61 | 2.31 | 2.57 | 6.63 | 2.54 | 2.63 | 4.33 | 9.9 | 4.14 | 9.28
25 | 4.63 | 2.24 | 2.3 | 2.56 | 5.97 | 2.11 | 2.63 | 4.31 | 9.81 | 3.85
26 | 2.49 | 3.78 | 3.18 | 2.27 | 2.42 | 8.31 | 6.92 | 2.37 | 3.46 | 7.04
27 | 3.22 | 2.33 | 2.66 | 2.43 | 3.27 | 2.78 | 4.82 | 3.5 | 7.18 | 3.22
28 | 2.39 | 2.98 | 3.7 | 3.04 | 4.22 | 6.25 | 8.17 | 6.47 | 8.91 | 8.89
29 | 4.2 | 4.15 | 4.01 | 3.65 | 2.96 | 8.84 | 8.66 | 8.08 | 6.17 | 7.9
30 | 3.56 | 2.82 | 2.93 | 3.39 | 2.56 | 5.6 | 6.04 | 7.5 | 4.27 | 9.69
31 | 5.48 | 2.42 | 3.21 | 2.31 | 2.59 | 3.45 | 7.01 | 2.67 | 4.46 | 2.32
32 | 2.26 | 2.4 | 3.07 | 4.49 | 5.7 | 3.31 | 6.55 | 9.17 | 9.75 | 3.65
33 | 2.45 | 3.46 | 2.66 | 2.43 | 3.28 | 7.67 | 4.83 | 3.51 | 7.22 | 3.34
34 | 2.41 | 3.1 | 5.04 | 2.34 | 2.73 | 6.66 | 9.52 | 2.9 | 5.2 | 4.74
35 | 2.64 | 2.39 | 2.99 | 3.75 | 3.13 | 3.23 | 6.27 | 8.26 | 6.76 | 9.86
36 | 6.3 | 2.51 | 4.18 | 4.09 | 3.85 | 4.02 | 8.87 | 8.77 | 8.42 | 7.29
37 | 3.3 | 2.44 | 3.36 | 2.52 | 4.27 | 3.58 | 7.43 | 4.05 | 8.97 | 9.07
38 | 4.38 | 4.86 | 2.3 | 2.55 | 4.99 | 9.43 | 2.6 | 4.21 | 9.5 | 2.81
39 | 2.33 | 2.68 | 2.48 | 3.66 | 2.98 | 4.91 | 3.79 | 8.11 | 6.26 | 8.22
40 | 3.73 | 3.09 | 4.86 | 2.3 | 2.54 | 6.63 | 9.43 | 2.58 | 4.17 | 9.36
41 | 4.75 | 2.27 | 2.42 | 3.21 | 2.31 | 2.36 | 3.45 | 7.01 | 2.68 | 4.47
42 | 2.59 | 2.27 | 2.42 | 3.2 | 2.3 | 2.36 | 3.44 | 6.98 | 2.57 | 4.13
43 | 2.53 | 4.55 | 6.79 | 2.54 | 4.87 | 9.22 | 9.92 | 4.19 | 9.43 | 2.61
44 | 2.3 | 2.55 | 5.21 | 2.38 | 2.92 | 4.24 | 9.6 | 3.15 | 6.01 | 7.41
45 | 3.35 | 2.51 | 4.04 | 3.71 | 3.05 | 3.96 | 8.69 | 8.18 | 6.52 | 9.05
46 | 4.35 | 4.74 | 2.27 | 2.41 | 3.12 | 9.35 | 2.34 | 3.37 | 6.74 | 9.78
47 | 5.8 | 2.47 | 3.57 | 2.84 | 2.99 | 3.73 | 7.93 | 5.67 | 6.27 | 8.25
48 | 3.74 | 3.12 | 5.71 | 2.46 | 3.47 | 6.73 | 9.75 | 3.66 | 7.69 | 4.9
49 | 2.67 | 2.47 | 3.62 | 2.91 | 3.31 | 3.76 | 8.02 | 5.98 | 7.31 | 3.64
50 | 2.45 | 3.44 | 2.63 | 2.36 | 2.8 | 7.62 | 4.67 | 2.99 | 5.51 | 5.76
51 | 2.86 | 3.07 | 4.56 | 7.02 | 2.55 | 6.57 | 9.23 | 9.93 | 4.25 | 9.61
52 | 5.26 | 2.39 | 2.97 | 3.6 | 2.88 | 3.2 | 6.19 | 7.98 | 5.86 | 6.91
53 | 3.17 | 2.26 | 2.41 | 3.08 | 4.77 | 2.33 | 3.33 | 6.62 | 9.38 | 2.42
54 | 2.28 | 2.45 | 3.42 | 2.61 | 2.3 | 3.63 | 7.58 | 4.55 | 2.62 | 4.27
55 | 2.56 | 5.51 | 2.43 | 3.24 | 2.36 | 9.7 | 3.48 | 7.11 | 3.01 | 5.56
56 | 2.81 | 2.9 | 3.24 | 2.36 | 2.8 | 5.92 | 7.11 | 2.99 | 5.51 | 5.76
57 | 2.86 | 3.07 | 4.6 | 2.23 | 2.27 | 6.58 | 9.26 | 2.04 | 2.38 | 3.5
58 | 2.43 | 3.27 | 2.39 | 2.99 | 3.76 | 7.18 | 3.23 | 6.28 | 8.28 | 6.84
59 | 3.15 | 2.24 | 2.3 | 2.54 | 4.7 | 2.1 | 2.58 | 4.16 | 9.33 | 2.26
60 | 2.26 | 2.37 | 2.89 | 3.21 | 2.32 | 3.11 | 5.9 | 7.03 | 2.74 | 4.67
61 | 2.63 | 2.36 | 2.82 | 2.92 | 3.36 | 3.02 | 5.59 | 6.02 | 7.43 | 4.03
62 | 2.52 | 4.21 | 4.19 | 4.13 | 3.94 | 8.91 | 8.88 | 8.81 | 8.57 | 7.76
63 | 3.5 | 2.72 | 2.6 | 2.29 | 2.52 | 5.14 | 4.53 | 2.54 | 4.03 | 8.9
64 | 4.21 | 4.18 | 4.1 | 3.85 | 3.32 | 8.87 | 8.77 | 8.43 | 7.33 | 3.71
65 | 2.46 | 3.55 | 2.8 | 2.85 | 3.03 | 7.87 | 5.5 | 5.71 | 6.42 | 8.74
66 | 4.07 | 3.8 | 3.21 | 2.32 | 2.62 | 8.34 | 7.03 | 2.72 | 4.61 | 2.8
67 | 2.33 | 2.67 | 2.47 | 3.57 | 2.82 | 4.89 | 3.73 | 7.91 | 5.62 | 6.1
68 | 2.94 | 3.47 | 2.67 | 2.47 | 3.64 | 7.69 | 4.91 | 3.77 | 8.07 | 6.13
69 | 2.95 | 3.51 | 2.74 | 2.65 | 2.4 | 7.78 | 5.21 | 4.76 | 3.29 | 6.47
70 | 3.04 | 4.22 | 4.2 | 4.16 | 4.02 | 8.91 | 8.9 | 8.84 | 8.68 | 8.14
71 | 3.68 | 3.01 | 3.94 | 3.49 | 2.71 | 6.37 | 8.56 | 7.75 | 5.08 | 4.34
72 | 2.57 | 6.69 | 2.54 | 4.72 | 2.26 | 9.91 | 4.16 | 9.34 | 2.31 | 3.28
73 | 2.4 | 3.03 | 4.13 | 3.93 | 3.48 | 6.44 | 8.81 | 8.56 | 7.73 | 5.03
74 | 2.7 | 2.54 | 4.89 | 2.31 | 2.58 | 4.19 | 9.45 | 2.65 | 4.38 | 2.05
75 | 2.23 | 2.28 | 2.46 | 3.5 | 2.73 | 2.44 | 3.68 | 7.78 | 5.18 | 4.67
76 | 2.63 | 2.36 | 2.81 | 2.88 | 3.15 | 3 | 5.53 | 5.84 | 6.83 | 2.06
77 | 2.23 | 2.28 | 2.48 | 3.68 | 3.01 | 2.47 | 3.8 | 8.14 | 6.36 | 8.55
78 | 3.93 | 3.48 | 2.7 | 2.53 | 4.62 | 7.73 | 5.02 | 4.14 | 9.27 | 2.08
79 | 2.23 | 2.29 | 2.51 | 4.1 | 3.85 | 2.53 | 3.99 | 8.77 | 8.43 | 7.33
80 | 3.32 | 2.46 | 3.53 | 2.78 | 2.78 | 3.71 | 7.84 | 5.4 | 5.4 | 5.4
## References
* [1] M. Matsumoto, T. Nishimura, Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator, ACM Transactions on Modeling and Computer Simulation (TOMACS) 8 (1) (1998) 3–30.
* [2] A. B. Owen, Latin supercube sampling for very high-dimensional simulations, ACM Transactions on Modeling and Computer Simulation (TOMACS) 8 (1) (1998) 71–102.
* [3] G. Marsaglia, et al., Xorshift rngs, Journal of Statistical Software 8 (14) (2003) 1–6.
* [4] A. De Matteis, S. Pagnutti, Parallelization of random number generators and long-range correlations, Numerische Mathematik 53 (5) (1988) 595–608.
* [5] G. Fishman, Monte Carlo: concepts, algorithms, and applications, Springer Science & Business Media, 2013.
* [6] P. L’Ecuyer, Random numbers for simulation, Communications of the ACM 33 (10) (1990) 85–97.
* [7] P. L’ecuyer, Tables of linear congruential generators of different sizes and good lattice structure, Mathematics of Computation of the American Mathematical Society 68 (225) (1999) 249–260.
* [8] P. L’Ecuyer, F. Blouin, R. Couture, A search for good multiple recursive random number generators, ACM Transactions on Modeling and Computer Simulation (TOMACS) 3 (2) (1993) 87–98.
* [9] P. L’Ecuyer, History of uniform random number generation, in: WSC 2017-Winter Simulation Conference, 2017.
* [10] P. L’Ecuyer, Random number generation, in: Handbook of Computational Statistics, Springer, 2012, pp. 35–71.
* [11] F. James, A review of pseudorandom number generators, Computer physics communications 60 (3) (1990) 329–344.
* [12] Y. Tay, K. Jamieson, H. Balakrishnan, Collision-minimizing csma and its applications to wireless sensor networks, IEEE Journal on selected areas in Communications 22 (6) (2004) 1048–1057.
* [13] K. Jamieson, H. Balakrishnan, Y. Tay, Sift: A mac protocol for event-driven wireless sensor networks, in: European workshop on wireless sensor networks, Springer, 2006, pp. 260–275.
* [14] I. Rhee, A. Warrier, M. Aia, J. Min, M. L. Sichitiu, Z-mac: a hybrid mac for wireless sensor networks, IEEE/ACM Transactions on Networking (TON) 16 (3) (2008) 511–524.
* [15] I. Rhee, A. Warrier, J. Min, L. Xu, Drand: Distributed randomized tdma scheduling for wireless ad hoc networks, IEEE Transactions on Mobile Computing 8 (10) (2009) 1384–1396.
* [16] S. Ramanathan, A unified framework and algorithm for channel assignment in wireless networks, Wireless Networks 5 (2) (1999) 81–94.
* [17] D. K. Sah, T. Amgoth, Parametric survey on cross-layer designs for wireless sensor networks, Computer Science Review 27 (2018) 112–134.
* [18] P. L’ecuyer, R. Simard, E. J. Chen, W. D. Kelton, An object-oriented random-number package with many long streams and substreams, Operations research 50 (6) (2002) 1073–1075.
* [19] S. R. Finch, Mathematical constants, Vol. 93, Cambridge university press, 2003.
* [20] S. N. Das, S. Misra, Correlation-aware cross-layer design for network management of wireless sensor networks, IET Wireless Sensor Systems 5 (6) (2015) 263–270.
* [21] S. Salivahanan, A. Vallavaraj, C. Gnanapriya, Digital Signal Processing, McGraw-Hill (2001).
# Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video
Denoising
Xiangyu Xu X. Xu is with Robotics Institute, Carnegie Mellon University,
Pittsburgh, PA 15213, USA (email: xuxiangyu2014@gmail.com). Muchen Li M. Li is
with University of British Columbia, Canada (email: muchenli1997@gmail.com).
Wenxiu Sun W. Sun is with SenseTime Research, Hong Kong, 999077 (email:
irene.wenxiu.sun@gmail.com). Ming-Hsuan Yang M.-H. Yang is with School of
Engineering, University of California, Merced, CA 95343, USA (e-mail:
mhyang@ucmerced.edu).
###### Abstract
Existing denoising methods typically restore clear results by aggregating
pixels from the noisy input. Instead of relying on hand-crafted aggregation
schemes, we propose to explicitly learn this process with deep neural
networks. We present a spatial pixel aggregation network and learn the pixel
sampling and averaging strategies for image denoising. The proposed model
naturally adapts to image structures and can effectively improve the denoised
results. Furthermore, we develop a spatio-temporal pixel aggregation network
for video denoising to efficiently sample pixels across the spatio-temporal
space. Our method is able to solve the misalignment issues caused by large
motion in dynamic scenes. In addition, we introduce a new regularization term
for effectively training the proposed video denoising model. We present
extensive analysis of the proposed method and demonstrate that our model
performs favorably against the state-of-the-art image and video denoising
approaches on both synthetic and real-world data.
###### Index Terms:
Image denoising, video denoising, pixel aggregation, neural network
## I Introduction
Image and video capturing systems are often degraded by noise including shot
noise of photons and read noise from sensors [1]. This problem is exacerbated
for the images and videos captured in low-light scenarios or by cellphone
cameras with small-apertures. To address the problem, different image
denoising algorithms have been proposed for generating high-quality images and
video frames from the noisy inputs [2, 3, 4, 5, 6, 7, 8, 9].
The success of most denoising methods stems from the fact that averaging
multiple independent observations of the same signal leads to lower variance
than the original observations. Mathematically, this is formulated as:
$\displaystyle Var(\frac{1}{N}\sum_{i=1}^{N}x_{(i)})=\frac{1}{N}Var(x),$ (1)
where $Var$ denotes the variance operation, $x$ is a noisy pixel, and
$\{x_{(i)},i=1,2,\dots,N\}$ are $N$ i.i.d. observations of it. Since it is
difficult to obtain multiple observations of the same pixel, existing
denoising algorithms [3, 4, 6, 7, 2] usually sample similar pixels from the
input image and aggregate them with weighted averaging. The sampling grid
$\mathcal{N}$ and averaging weights $\mathcal{F}$ are usually data-dependent
and spatially-variant as the distribution of similar pixels depends on local
image structures. The strategies to decide $\mathcal{N}$ and $\mathcal{F}$ are
the key factors distinguishing different denoising approaches. As a typical
example, the bilateral smoothing model [2] samples pixels in a local square
region and computes the weights with Gaussian functions. In addition, the BM3D
[4] and VBM4D [7] methods search relevant pixels by block matching, and the
averaging weights are decided using empirical Wiener filter. However, these
approaches usually use hand-crafted schemes for sampling and weighting pixels,
which do not always perform well in complex scenarios as shown in Figure 1(c)
and (h).
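The variance identity in Eq. (1) is easy to verify empirically (a self-contained sketch; the Gaussian noise model and the constants are illustrative only):

```python
import random
import statistics

random.seed(0)
N = 16          # observations averaged per "pixel"
TRIALS = 20000  # number of simulated pixels

# Each observation has variance 1; averaging N i.i.d. observations
# should shrink the variance by a factor of N, as in Eq. (1).
singles = [random.gauss(0.0, 1.0) for _ in range(TRIALS)]
means = [statistics.fmean(random.gauss(0.0, 1.0) for _ in range(N))
         for _ in range(TRIALS)]

print(round(statistics.variance(singles), 2))  # close to 1
print(round(statistics.variance(means), 3))    # close to 1/N = 0.0625
```

The empirical variance of the averaged observations drops by roughly the factor $N$, which is the premise behind all aggregation-based denoising.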
Figure 1: Denoising results on image and video sequence captured by a
cellphone. Compared to the existing classical (BM3D [4], VBM4D [7]) and deep
learning based (DnCNN [10], KPN [9]) methods, the proposed algorithm achieves
better denoising results with fewer artifacts on both single image (top) and
multi-frame input (bottom). (g) is a cropped region from the reference frame
of the input sequence.
Recently, numerous denoising methods based on deep convolutional neural
networks (CNNs) [10, 11, 12] have been proposed. These models exploit a large
amount of training data to learn the mapping function from the noisy image to
the desired clear output. However, the CNNs usually use spatially-invariant
and data-independent convolution kernels whereas the denoising process is
spatially-variant and data-dependent [4]. Thus, very deep structures are
needed for these methods to achieve high non-linearities to implicitly
approximate the spatially-variant and data-dependent process, which is not as
efficient and concise as the aggregation-based formulation. In addition, the
CNN-based approaches do not explicitly manipulate the input pixels to
constrain the output space and may generate corrupted image textures and over-
smoothing artifacts as shown in Figure 1(d).
To address the aforementioned problems, we propose a pixel aggregation network
to explicitly integrate the pixel aggregation process with data-driven methods
for image denoising. Specifically, we use CNNs to estimate a spatial sampling
grid $\mathcal{N}$ for each location in the noisy image. To aggregate the
sampled pixels, we predict the averaging weights $\mathcal{F}$ for each
sample. Note that both $\mathcal{N}$ and $\mathcal{F}$ are content-aware and
can adapt to image structures. Finally, the denoised output can be obtained by
combining $\mathcal{F}$ and $\mathcal{N}$ with weighted averaging in an end-
to-end network. The advantages of the proposed denoising method are as
follows. First, we improve the pixel aggregation process by learning from data
instead of relying on hand-crafted schemes. Second, compared to other deep
learning based methods, the proposed model can better adapt to image
structures and preserve details with the spatially-variant and data-dependent
sampling and averaging strategies. In addition, our algorithm directly filters
the noisy input, which thereby constrains the output space. As shown in Figure
1(e), the proposed network generates clearer result with fewer artifacts. Note
that while one can simply sample pixels from a rigid grid similar to the
kernel prediction network (KPN) [9], it often leads to limited receptive field
and cannot efficiently exploit the structure information in the images.
Moreover, the irrelevant sampling locations of the rigid sampling scheme may
negatively affect the denoising performance. In contrast, the proposed method
can sample pixels in a dynamic manner to better adapt to image structures and
increase the receptive field without sampling more locations.
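The aggregation the network learns, a per-pixel sampling grid $\mathcal{N}$ of offsets plus averaging weights $\mathcal{F}$, can be written down in a few lines of plain Python. This is a schematic of the output formulation only: in the paper both quantities are predicted by a CNN, whereas here they are fixed inputs.

```python
def aggregate_pixel(image, y, x, offsets, weights):
    """Weighted average of sampled pixels: out(y, x) = sum_k F_k * I(y+dy_k, x+dx_k).
    `offsets` plays the role of the sampling grid N, `weights` the role of F;
    out-of-bounds samples are clamped to the image border."""
    h, w = len(image), len(image[0])
    total = sum(weights)
    out = 0.0
    for (dy, dx), f in zip(offsets, weights):
        sy = min(max(y + dy, 0), h - 1)
        sx = min(max(x + dx, 0), w - 1)
        out += f * image[sy][sx]
    return out / total  # normalized so the weights sum to one

# A flat 3x3 patch averaged with a rigid 3-sample grid stays flat.
img = [[4.0, 4.0, 4.0], [4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]
grid = [(0, -1), (0, 0), (0, 1)]
print(aggregate_pixel(img, 1, 1, grid, [0.25, 0.5, 0.25]))  # 4.0
```

Making `offsets` and `weights` functions of the local image content, rather than constants as above, is exactly what distinguishes the learned aggregation from a fixed convolution.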
In addition to single images, we can also use the proposed method in video
denoising. A straightforward approach for this is to apply 2D pixel
aggregation on each frame separately and then fuse the results by pixel-wise
summation. However, this simple strategy is not effective in handling videos
with large motion, where few reliable sampling locations can be found in
neighboring regions for pixel aggregation. To address this issue, we need to
allocate more sampling locations on the frames with higher reliability and
discard the frames with drastic motion. As such, this requires our algorithm
to be able to search pixels across the spatio-temporal space of the input
video. Instead of predicting 2D sampling grid and averaging weights, we
develop a spatio-temporal pixel aggregation network for each location in the
desired output to adaptively select the most informative pixels in the
neighboring video frames. The proposed spatio-temporal method naturally solves
the large motion issues by capturing dependencies between 3D locations and
sampling on more reliable frames. Our method can effectively deal with the
misalignment caused by dynamic scenes and reduce the artifacts of existing
video denoising approaches [7, 9] as shown in Figure 1(j).
In this paper, we make the following contributions. First, we exploit the
strength of both aggregation-based methods and the deep neural networks, and
propose a new algorithm to explicitly learn the pixel aggregation process for
image denoising. Second, we extend the spatial pixel aggregation to the
spatial-temporal domain to better deal with large motion in video denoising,
which further reduces artifacts and improves performance. In addition, we
introduce a regularization term to facilitate training the video denoising
model. Extensive experiments on benchmark datasets demonstrate that our method
compares favorably against state-of-the-arts on both single image and video
inputs.
## II Related Work
We discuss the state-of-the-art denoising algorithms as well as recent methods
on learning dynamic operations for image filtering, and put the proposed
algorithm in proper context.
### II-A Image and Video Denoising
Existing denoising methods are developed based on explicit or implicit pixel
aggregations [13, 2, 3, 4, 6, 7, 8]. Gaussian [13] and bilateral [2] smoothing
models sample pixels from a local window and compute averaging weights using
Gaussian functions. The non-local means (NLM) [3] aggregates image pixels
globally and decides the weights with patch similarities. On the other hand,
the BM3D [4] method searches pixels with block matching and uses transform
domain collaborative filtering for weighted averaging.
As videos contain temporal information and more pixel observations, the VBM3D
[6] and VBM4D [7] algorithms extend the BM3D scheme by grouping more similar
patches in the spatio-temporal domain. In addition, optical flow has also been
exploited in video denoising [14, 8] to aggregate pixels. However, existing
video denoising methods are less effective for videos with fast and complex
motion. In contrast to the above approaches with hand-crafted strategies for
sampling and averaging pixels, our method explicitly learns the pixel
aggregation process from data for denoising. Furthermore, the proposed spatio-
temporal model handles large motion in video denoising without optical flow
estimation.
Recently, numerous image and video denoising [10, 11, 15, 16, 9, 17, 18, 19,
12, 20, 21] methods based on deep learning have been developed. In particular,
CNNs with residual connections have been used to directly learn a mapping
function from the noisy input to the denoised output [10, 11, 12]. On the
other hand, recurrent neural networks (RNNs) [15, 16] have also been used for
exploiting the temporal structure of videos to learn the mapping function for
multiple frame input. While these networks are effective in removing noise,
the adopted activation functions may lead to information loss [22]. In
addition, directly synthesizing images with deep neural networks does not
enforce a constrained output space and thereby tends to generate oversmoothing
artifacts. In contrast, the proposed algorithm can directly manipulate the
input frames and adaptively aggregate pixels across the spatial and spatio-
temporal space, which effectively addresses the above issues.
### II-B Burst Denoising
This work is also related to the burst denoising algorithms [16, 9, 23, 8, 24]
in that they rely on the same basic idea of averaging multiple independent
observations as in (1). Specifically, burst denoising exploits multiple short
frames captured in a burst to approximate multiple independent observations of
the same pixel. As a typical example, Hasinoff et al. [24] propose a
computational photography pipeline to merge a burst of frames to reduce noise
and increase dynamic range. Recently, Kokkinos et al. [23] use deep learning
to solve this problem and propose iterative residual CNNs for further
improving the performance. While these methods have achieved impressive
results for image denoising, they usually assume small motion between
different frames in the burst and thus do not always work well for videos. To
solve this problem, Mildenhall et al. [9] propose to predict convolution
kernels for burst denoising which can also work well for video input. However,
these kernels use rigid sampling grids which cannot exploit local image
structures well. Furthermore, these kernels are less effective in handling
large misalignment caused by camera shakes or dynamic scenes. In contrast, we
propose spatially-variant and data-dependent sampling grids as well as a
spatio-temporal model to address these issues.
### II-C Learning Dynamic Filtering
In recent years, deep neural networks have been used to learn dynamic
operations for image filtering [25, 26, 27, 28]. In [25], a dynamic network is
proposed to learn filtering weights for video and stereo prediction. Similar
approaches have been developed for video interpolation [29] and view synthesis
[30]. However, these methods only consider pixels from a fixed region, which
often leads to limited receptive field and can be easily affected by
irrelevant sampling locations. On the other hand, Jaderberg et al. [26]
develop spatial transformer networks for more flexible feature
extractions in image classification. While this scheme enables data-dependent
sampling strategies, the filtering process is still spatially-invariant as it
only learns a global affine transformation. In contrast, Dai et al. [28]
propose spatial deformable kernels for object detection, which considers local
geometric transformations. As this method uses fixed convolution weights for
different input images, it is only effective for high-level vision tasks. The
approaches using rigid weights are likely to generate oversmoothing artifacts
in image restoration similar to those based on Gaussian filters. While [31]
learns both the convolution locations and weights, it explains the learned
weights as modulation for adjusting the signal amplitude of the input
features, and thereby is not suitable for the denoising problem. In addition,
these methods [28, 31] cannot sample pixels from the spatio-temporal space,
and thus do not perform well for video inputs.
The proposed pixel aggregation network can be seen as a novel dynamic
operation for image filtering. Our model learns both the data-dependent and
spatially-variant sampling and weighting schemes, and thus solves the problems
of the aforementioned algorithms. More importantly, our method enables
adaptive sampling in the spatio-temporal space for effective video processing.
---
Figure 2: Overview of the proposed algorithm. We first learn a deep CNN (i.e.
the offset network) to estimate the offsets $V$ of the sampling grid. We then
sample pixels from the noisy input $X$ according to the predicted offsets.
Furthermore, we concatenate the sampled pixels, the input and the features of
the offset network to estimate the averaging weights $\mathcal{F}$. Finally,
we can generate the denoised output frame $Y$ by aggregating the sampled
pixels with the learned weights $\mathcal{F}$. Note that the proposed system
can deal with both single image and video sequence inputs.
## III Proposed Algorithm
We propose a neural network to learn pixel aggregations for image and video
denoising. Compared to most CNN-based denoising methods based on data-
independent and spatially-invariant kernels, the proposed model can better
adapt to image structures and preserve details with data-dependent and
spatially-variant sampling as well as averaging strategies. Specifically, we
use a neural network to predict the sampling locations $\mathcal{N}$ and
averaging weights $\mathcal{F}$ for each pixel of the noisy input. These two
components are integrated for both spatial and spatio-temporal pixel
aggregation. Instead of directly regressing the spatial coordinates of
$\mathcal{N}$, we learn the offsets $V$ for a rigid sampling grid and deform
the rigid grid accordingly.
An overview of the proposed algorithm is shown in Figure 2. We first train a
deep CNN for estimating the offsets of the sampling grid. Next, we sample
pixels from the noisy input according to the predicted offsets, and estimate
the weights by concatenating the sampled pixels, the noisy input and the
features of the offset network. Finally, we generate the denoised output by
averaging the sampled pixels with the learned weights.
TABLE I: Number of feature maps for each layer of our network. We show the structure of the offset network in Figure 3(b). The “conv layers” are presented in Figure 2. “Ck” represents the k-th convolution layer in each part of our model. $n$ is the number of the sampled pixels of the adaptive sampling grid.
Part | Layers | Feature maps
---|---|---
offset network | C1-3 | 64
offset network | C4-6 | 128
offset network | C7-9 | 256
offset network | C10-12 | 512
offset network | C13-15 | 512
offset network | C16-18 | 512
offset network | C19-21 | 256
offset network | C22-24 | 128
offset network | C25-26 | 128
offset network | C27 | $n\times$3
conv layers | C1-2 | 64
conv layers | C3 | $n$
### III-A Learning to Aggregate Pixels
For a noisy image $X\in\mathbb{R}^{h\times w}$ where $h$ and $w$ represent the
height and width, the spatial pixel aggregation for denoising can be
formulated as:
$\displaystyle Y(u,v)=\sum_{i=1}^{n}X(u+u_{i},v+v_{i})\mathcal{F}(u,v,i),$ (2)
where $(u,v)$ is a pixel on the denoised output $Y\in\mathbb{R}^{h\times w}$.
In addition, $\mathcal{N}(u,v)=\\{(u_{i},v_{i})|i=1,2,\dots,n\\}$ represents
the sampling grid with $n$ sampling locations, and
$\mathcal{F}\in\mathbb{R}^{h\times w\times n}$ represents the weights for
averaging pixels. For example,
$\displaystyle\\{(\hat{u}_{i},\hat{v}_{i})\\}=\\{(-1,-1),\dots,(0,0),\dots,(1,1)\\},$ (3)
defines a rigid sampling grid with $n=9$ and size $3\times 3$.
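As a concrete illustration, the aggregation in (2) with the rigid grid of (3) can be sketched in NumPy. This is a minimal sketch: the real model predicts $\mathcal{F}$ with a network, whereas here we plug in uniform weights, which reduces (2) to a box filter; edge padding at the border is our assumption.

```python
import numpy as np

def rigid_grid(k=3):
    """Rigid k x k sampling grid as in (3): offsets (-1,-1), ..., (1,1) for k=3."""
    r = k // 2
    return [(du, dv) for du in range(-r, r + 1) for dv in range(-r, r + 1)]

def aggregate(X, F, grid):
    """Spatial pixel aggregation of (2): Y(u,v) = sum_i X(u+u_i, v+v_i) * F(u,v,i).
    X: (h, w) noisy image, F: (h, w, n) per-pixel weights, grid: n offsets."""
    h, w = X.shape
    Xp = np.pad(X, 1, mode="edge")   # clamp out-of-range samples at the border
    Y = np.zeros_like(X)
    for i, (du, dv) in enumerate(grid):
        Y += Xp[1 + du:1 + du + h, 1 + dv:1 + dv + w] * F[:, :, i]
    return Y

# Uniform weights reduce (2) to a 3x3 box filter.
X = np.random.rand(8, 8)
g = rigid_grid(3)
F = np.full((8, 8, len(g)), 1.0 / len(g))
Y = aggregate(X, F, g)
```

With learned, spatially-variant $\mathcal{F}$, each output pixel gets its own averaging weights instead of the single shared kernel of an ordinary convolution.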
In the proposed pixel aggregation network (PAN), the adaptive sampling grid
can be generated by combining the predicted offsets $V\in\mathbb{R}^{h\times
w\times n\times 2}$ and the rigid grid:
$\displaystyle u_{i}=\hat{u}_{i}+V(u,v,i,1),$ (4)
$\displaystyle v_{i}=\hat{v}_{i}+V(u,v,i,2).$ (5)
Note that both $u_{i}$ and $v_{i}$ are functions of $(u,v)$, which indicates
that our denoising process is spatially-variant. Since the offsets in $V$ are
usually fractional, we use bilinear interpolation to sample the pixels
$X(u+{u}_{i},v+{v}_{i})$ in a way similar to [32].
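The fractional sampling step can be sketched with plain bilinear interpolation, a simplified stand-in for the sampler of [32]; the coordinate clamping at the image border is our assumption.

```python
import numpy as np

def bilinear_sample(X, u, v):
    """Sample X at a fractional location (u, v) by bilinear interpolation,
    the 2D analogue of the trilinear weighting in (7)."""
    h, w = X.shape
    u = min(max(u, 0.0), h - 1.0)    # clamp coordinates to the image
    v = min(max(v, 0.0), w - 1.0)
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, h - 1), min(v0 + 1, w - 1)
    a, b = u - u0, v - v0
    return ((1 - a) * (1 - b) * X[u0, v0] + (1 - a) * b * X[u0, v1]
            + a * (1 - b) * X[u1, v0] + a * b * X[u1, v1])

img = np.arange(16, dtype=float).reshape(4, 4)
# Halfway between four pixels: average of X[1,1], X[1,2], X[2,1], X[2,2].
val = bilinear_sample(img, 1.5, 1.5)  # 7.5
```

Because the interpolation weights are piecewise-linear in the offsets, gradients flow back to $V$ and the offset network can be trained end-to-end.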
After the adaptive sampling, we can recover the clear output $Y$ by combining
the sampled pixels with the learned weights $\mathcal{F}$ as in (2). The
weights of $\mathcal{F}$ are also spatially-variant and content-aware, which
is different from the typical CNNs with fixed uniform convolution kernels.
Note that while we can simply use a rigid sampling grid (Figure 4(b)) and only
learn the averaging weights, it often leads to a limited receptive field and
cannot efficiently exploit the structure information in the images.
Furthermore, irrelevant sampling locations in the rigid grid may negatively
affect the denoising results. In contrast, our adaptive grid naturally adapts
to the image structures and increases the receptive field without sampling
more pixels.
---
Figure 3: Architecture of the offset network. A U-Net with skip links is used
to fuse low-level and high-level features for estimating the offsets.
Spatio-temporal pixel aggregation.
The proposed method can be easily extended for video denoising. Suppose that
we have a noisy video sequence
$\\{X_{t-\tau},\dots,X_{t},\dots,X_{t+\tau}\\}$, where $X_{t}$ is the
reference frame. A straightforward approach to process this input is to apply
the PAN model to each frame separately and then fuse the outputs with weighted
sum, as shown in Figure 4(c). However, this simple 2D strategy is not
effective in handling videos with large motion, where few reliable pixels can
be found in the regions of neighboring frames (e.g. the center regions of
frame $X_{t-1}$ and $X_{t+1}$ in Figure 4). To address this issue, we need to
distribute more sampling locations on the frames with higher reliability (e.g.
the reference frame $X_{t}$) and avoid the frames with severe motion. An
effective solution should be able to search pixels across the spatio-temporal
space of the input videos.
---
Figure 4: Illustration of the video denoising process of the ST-PAN model. (a)
a noisy video sequence $\\{X_{t-1},X_{t},X_{t+1}\\}$. The patches in the
following rows are cropped from the yellow box in the corresponding frames.
The center blue point of patch $X_{t}$ in (b)-(d) indicates the reference
pixel to be denoised. (b) The rigid sampling method has limited receptive
field and cannot exploit the structure information. Furthermore, it does not
handle misalignment issues. (c) The proposed PAN model can adapt to image
structures in $X_{t}$ and increase the receptive field without sampling more
pixels. However, it does not perform well on large motion where there are few
reliable pixels available in frame $X_{t-1}$ and $X_{t+1}$. (d) The proposed
ST-PAN model aggregates pixels across the spatio-temporal space, and
distributes more sampling locations on more reliable frames.
In this work, we develop a spatio-temporal pixel aggregation network (ST-PAN)
for video denoising which adaptively selects the most informative pixels in
the spatio-temporal space. The ST-PAN directly takes the concatenated video
frames $X\in\mathbb{R}^{h\times w\times(2\tau+1)}$ as input, and the denoising
process can be formulated as:
$Y(u,v,t)=\sum_{i=1}^{n}X(u+u_{i},v+v_{i},t+t_{i})\mathcal{F}(u,v,i),$ (6)
where $t+t_{i}$ denotes the sampling coordinate in the temporal dimension, and
$n$ is the number of pixels of the 3D sampling grid. Similar to (4)-(5), we
generate the sampling grid by predicting 3D offsets $V\in\mathbb{R}^{h\times
w\times n\times 3}$. To sample pixels across the video frames, we introduce
the trilinear interpolation in which $X(u+u_{i},v+v_{i},t+t_{i})$ can be
computed by:
$\displaystyle\sum_{p=1}^{h}\sum_{q=1}^{w}\sum_{j=t-\tau}^{t+\tau}X(p,q,j)\cdot\max(0,1-|u+u_{i}-p|)$
$\displaystyle\cdot\max(0,1-|v+v_{i}-q|)\cdot\max(0,1-|t+t_{i}-j|),$ (7)
where only the pixels closest to $(u+u_{i},v+v_{i},t+t_{i})$ in the 3D space
of $X$ contribute to the interpolated result. Since the trilinear sampling
mechanism is differentiable, we can learn the ST-PAN model in an end-to-end
manner.
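A direct, unoptimized transcription of the trilinear weighting in (7) follows; frames are indexed from $0$ here rather than from $t-\tau$ to $t+\tau$, and a practical implementation would visit only the (up to) eight nearest voxels instead of the full triple sum.

```python
import numpy as np

def trilinear_sample(X, u, v, t):
    """Trilinear interpolation of (7): only voxels within distance 1 of
    (u, v, t) in each axis receive a nonzero max(0, 1-|.|) weight.
    X has shape (h, w, T) with the last axis indexing frames."""
    h, w, T = X.shape
    out = 0.0
    for p in range(h):
        for q in range(w):
            for j in range(T):
                wgt = (max(0.0, 1 - abs(u - p)) * max(0.0, 1 - abs(v - q))
                       * max(0.0, 1 - abs(t - j)))
                out += X[p, q, j] * wgt
    return out

vol = np.arange(27, dtype=float).reshape(3, 3, 3)
center = trilinear_sample(vol, 1, 1, 1)      # integer coordinates recover X[1,1,1]
between = trilinear_sample(vol, 1, 1, 1.5)   # midway between frames 1 and 2
```

At integer coordinates the weights collapse to a single voxel, so the sampler degrades gracefully to ordinary lookup when the predicted offsets are whole numbers.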
The proposed ST-PAN model naturally solves the large motion issues by
capturing dependencies between 3D locations and sampling on more reliable
frames, as illustrated in Figure 4(d). Furthermore, our method can effectively
deal with the misalignment caused by dynamic scenes and reduce cluttered
boundaries and ghosting artifacts generated by existing video denoising
approaches [7, 9] as shown in Section IV-C.
Gamma correction.
As the noise is nonlinear in the sRGB space [33, 34], we train the denoising
model in the linear raw space. With the linear output $Y$, we conduct Gamma
correction to generate the final result for better perceptual quality:
$\displaystyle\phi(Y)={\begin{cases}12.92Y,&Y\leq 0.0031308,\\ (1+\alpha)Y^{1/2.4}-\alpha,&Y>0.0031308,\end{cases}}$ (8)
where $\phi$ is the sRGB transformation function for Gamma correction, and
$\alpha=0.055$. The hyper-parameters of $\phi$ are directly obtained from
[34], and more detailed explanations can be found in [33, 34].
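Equation (8) is straightforward to implement with the stated constants; clipping the input to $[0,1]$ is our assumption.

```python
import numpy as np

def srgb_gamma(Y, alpha=0.055):
    """sRGB transform of (8): linear segment below 0.0031308, power law above."""
    Y = np.clip(np.asarray(Y, dtype=float), 0.0, 1.0)
    return np.where(Y <= 0.0031308, 12.92 * Y,
                    (1 + alpha) * Y ** (1 / 2.4) - alpha)
```

The two branches agree at the threshold ($12.92\times 0.0031308\approx 0.04045$), so $\phi$ is continuous across the whole range.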
### III-B Network Architecture
The offset network in Figure 2 takes a single frame as input for image
denoising, and a sequence of $2\tau+1$ neighboring frames for video denoising.
As shown in Figure 3(b), we adopt a U-Net architecture [35] which has been
widely used in pixel-wise estimation tasks [36, 37]. The U-Net is an encoder-
decoder network where the encoder sequentially transforms the input frames
into lower-resolution feature embeddings, and the decoder correspondingly
expands the features back to full resolution estimates. We perform pixel-wise
summation with skip connections between the same-resolution layers in the
encoder and decoder to jointly use low-level and high-level features for the
estimation task. Since the predicted weights are to be applied to the sampled
pixels, it is beneficial to feed these pixels to the weight estimation branch
such that the weights can better adapt to the sampled pixels. Thus, we
concatenate the sampled pixels, noisy input and features from the last layer
of the offset network, and feed them to three convolution layers to estimate
the averaging weights (Figure 2).
All convolution layers use $3\times 3$ kernels with stride $1$. The feature
map number for each layer of our network is shown in Table I. We use ReLU [38]
as the activation function for the convolution layers except for the last one
which is followed by a Tanh function to output normalized offsets. As the
proposed estimation network is fully convolutional, it is able to handle
arbitrary spatial size during inference.
### III-C Loss Function
With the predicted result $Y$ and ground truth image $Y_{gt}$ in the linear
space, we can use an $L_{1}$ loss to train our network for single image
denoising:
$\displaystyle l(Y,Y_{gt})=\|\phi(Y)-\phi(Y_{gt})\|_{1},$ (9)
where Gamma correction is performed to emphasize errors in darker regions and
generate more perceptually pleasant results. We do not use $L_{2}$ loss in
this work as it often leads to oversmoothing artifacts [39, 40].
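A sketch of the loss in (9), using the mean rather than the raw $L_1$ sum (a common normalization; the constants of $\phi$ follow (8)):

```python
import numpy as np

def l1_gamma_loss(Y, Y_gt):
    """L1 loss of (9) computed after Gamma correction, which magnifies
    errors in dark regions relative to a linear-space L1 loss."""
    phi = lambda x: np.where(x <= 0.0031308, 12.92 * x,
                             1.055 * np.power(x, 1 / 2.4) - 0.055)
    return np.abs(phi(Y) - phi(Y_gt)).mean()

a = np.array([0.25, 0.5])
zero = l1_gamma_loss(a, a)   # identical inputs give zero loss
```

For a dark-region pair such as $(0.01, 0.02)$ the gamma-space error is roughly five times the linear-space error, which is exactly the emphasis on dark regions described above.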
Regularization term for video denoising. Since the ST-PAN model samples pixels
across the video frames, the training process may get stuck in a local
minimum where all the sampling locations lie around the reference
frame. To alleviate this problem and encourage the network to exploit more
temporal information, we introduce a regularization term to have subsets of
the sampled pixels individually learn the 3D aggregation process.
We split the $n$ sampling locations in the spatio-temporal grid
$\\{(u_{j},v_{j},t_{j})\\}$ into $s$ groups:
$\mathcal{N}_{1},...,\mathcal{N}_{s}$, where each group consists of $n/s$
points. Similar to (6), the filtered result of the $i$-th pixel group can be
computed by:
$\displaystyle
Y_{i}(u,v,t)=s\sum_{j\in\mathcal{N}_{i}}X(u+u_{j},v+v_{j},t+t_{j})F(u,v,j),$
(10)
where $i\in\\{1,2,...,s\\}$, and the multiplier $s$ is used to match the scale
of $Y$. With $Y_{i}$ for regularization, the final loss function for video
denoising is:
$\displaystyle l(Y,Y_{gt})+\eta\gamma^{m}\sum_{i=1}^{s}l(Y_{i},Y_{gt}).$ (11)
The regularization process for each $Y_{i}$ is slowly reduced during training,
where the hyperparameters $\eta$ and $\gamma$ are used to control the
annealing process. $m$ is the iteration number. At the beginning of the
network optimization, $\eta\gamma^{m}\gg 1$ and the second term is prominent,
which encourages the network to find the most informative pixels for each
subset of the sampled pixels. This constraint is lessened as $m$ increases,
and the whole sampling grid learns to rearrange the sampling locations such
that all the pixel groups, i.e. different parts of the learned pixel
aggregation model, can perform collaboratively. Note that the temporal
consistency [41, 42, 43] is implicitly enforced in this per-frame loss
function as the ground truth image $Y_{gt}$ changes smoothly across the
sequence.
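A sketch of the annealed objective in (10)–(11), assuming the per-term losses are already taken in gamma-corrected space and using the paper's defaults $\eta=100$, $\gamma=0.9998$:

```python
import numpy as np

def video_loss(Y, Y_sub, Y_gt, m, eta=100.0, gamma=0.9998):
    """Total loss of (11): the per-frame L1 term plus an annealed regularizer
    asking each of the s group outputs Y_i from (10) to match the ground
    truth. Y_sub is the list of group outputs; m is the iteration number."""
    l1 = lambda a, b: np.abs(a - b).mean()
    reg = sum(l1(Yi, Y_gt) for Yi in Y_sub)
    return l1(Y, Y_gt) + eta * gamma ** m * reg

ref = np.zeros((4, 4))
subs = [np.ones((4, 4))] * 3
early = video_loss(ref, subs, ref, m=0)        # regularizer dominates
late = video_loss(ref, subs, ref, m=200000)    # regularizer annealed away
```

With these defaults the regularizer weight is $100$ at the start and of order $10^{-16}$ after the $2\times 10^{5}$ training iterations, matching the described annealing behavior.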
---
Figure 5: Results from the synthetic dataset for single image (first row) and video denoising (second row). First row: (a) whole output, (b) ground truth, (c) input, (d) BM3D [4], (e) DnCNN [10], (f) PAN. Second row: (g) whole output, (h) ground truth, (i) input, (j) VBM4D [7], (k) KPN [9], (l) ST-PAN. The proposed method generates clearer results with fewer artifacts.
TABLE II: Quantitative evaluation of single image denoising on the synthetic dataset. #1-4 are the 4 testing subsets. “LOW” and “HIGH” represent different noise levels, which respectively correspond to $\sigma_{s}=2.5\times 10^{-3},\sigma_{r}=10^{-2}$ and $\sigma_{s}=6.4\times 10^{-3},\sigma_{r}=2\times 10^{-2}$. Red and blue indicate the first and second best performance for each noise level.
Noise | Algorithms | #1 PSNR | #1 SSIM | #2 PSNR | #2 SSIM | #3 PSNR | #3 SSIM | #4 PSNR | #4 SSIM | Avg. PSNR | Avg. SSIM
---|---|---|---|---|---|---|---|---|---|---|---
| Reference frame | 26.75 | 0.6891 | 28.08 | 0.7333 | 27.37 | 0.5843 | 27.96 | 0.7064 | 27.54 | 0.6782
| NLM [3] | 31.04 | 0.8838 | 31.51 | 0.9025 | 33.35 | 0.8687 | 31.71 | 0.8663 | 31.90 | 0.8803
| BM3D [4] | 33.00 | 0.9196 | 32.63 | 0.9245 | 35.16 | 0.9172 | 33.09 | 0.9028 | 33.47 | 0.9160
| DnCNN [10] | 35.30 | 0.9499 | 34.54 | 0.9498 | 37.45 | 0.9436 | 36.22 | 0.9494 | 35.88 | 0.9482
| KPN [9] | 35.23 | 0.9526 | 34.38 | 0.9493 | 37.50 | 0.9451 | 36.18 | 0.9526 | 35.82 | 0.9499
| PAN | 35.40 | 0.9535 | 34.57 | 0.9507 | 37.64 | 0.9465 | 36.41 | 0.9538 | 36.01 | 0.9511
| KPN [9], $\sigma$ blind | 35.18 | 0.9492 | 34.20 | 0.9484 | 37.39 | 0.9438 | 36.05 | 0.9508 | 35.71 | 0.9480
| DnCNN [10], $\sigma$ blind | 35.19 | 0.9500 | 34.38 | 0.9479 | 37.28 | 0.9417 | 36.06 | 0.9491 | 35.73 | 0.9472
LOW | PAN, $\sigma$ blind | 35.33 | 0.9531 | 34.55 | 0.9508 | 37.57 | 0.9458 | 36.35 | 0.9538 | 35.95 | 0.9509
| Reference frame | 22.83 | 0.5403 | 23.94 | 0.5730 | 23.00 | 0.3746 | 23.97 | 0.5598 | 23.43 | 0.5119
| NLM [3] | 28.21 | 0.8236 | 28.57 | 0.8443 | 30.62 | 0.8076 | 28.73 | 0.8040 | 29.03 | 0.8199
| BM3D [4] | 29.96 | 0.8793 | 29.81 | 0.8836 | 32.30 | 0.8766 | 30.27 | 0.8609 | 30.59 | 0.8751
| DnCNN [10] | 32.30 | 0.9163 | 31.54 | 0.9124 | 34.55 | 0.9048 | 33.26 | 0.9148 | 32.91 | 0.9121
| KPN [9] | 32.32 | 0.9198 | 31.44 | 0.9120 | 34.74 | 0.9085 | 33.28 | 0.9200 | 32.94 | 0.9151
| PAN | 32.49 | 0.9226 | 31.62 | 0.9153 | 34.89 | 0.9121 | 33.51 | 0.9232 | 33.13 | 0.9183
| KPN [9], $\sigma$ blind | 32.23 | 0.9182 | 31.37 | 0.9107 | 34.63 | 0.9073 | 33.17 | 0.9183 | 32.85 | 0.9136
| DnCNN [10], $\sigma$ blind | 32.19 | 0.9158 | 31.42 | 0.9105 | 34.40 | 0.9023 | 33.08 | 0.9135 | 32.77 | 0.9105
HIGH | PAN, $\sigma$ blind | 32.44 | 0.9224 | 31.62 | 0.9152 | 34.81 | 0.9109 | 33.46 | 0.9215 | 33.08 | 0.9175
TABLE III: Quantitative evaluation of video denoising on the synthetic dataset. #1-4 are the 4 testing subsets. “PAN-sep” represents the simple 2D strategy of using PAN for video input. “LOW” and “HIGH” denote different noise levels, which respectively correspond to $\sigma_{s}=2.5\times 10^{-3},\sigma_{r}=10^{-2}$ and $\sigma_{s}=6.4\times 10^{-3},\sigma_{r}=2\times 10^{-2}$. Red and blue indicate the first and second best performance for each noise level.
Noise | Algorithms | #1 PSNR | #1 SSIM | #2 PSNR | #2 SSIM | #3 PSNR | #3 SSIM | #4 PSNR | #4 SSIM | Avg. PSNR | Avg. SSIM
---|---|---|---|---|---|---|---|---|---|---|---
| Direct average | 22.75 | 0.6880 | 25.70 | 0.7777 | 25.15 | 0.6701 | 23.47 | 0.6842 | 25.27 | 0.7050
| VBM4D [7] | 33.26 | 0.9326 | 34.00 | 0.9469 | 35.83 | 0.9347 | 34.01 | 0.9327 | 34.27 | 0.9367
| KPN [9] | 35.61 | 0.9597 | 35.25 | 0.9637 | 38.18 | 0.9529 | 36.45 | 0.9604 | 36.37 | 0.9592
| PAN-sep | 35.66 | 0.9576 | 35.82 | 0.9656 | 38.19 | 0.9518 | 36.80 | 0.9609 | 36.62 | 0.9590
| ST-PAN | 36.02 | 0.9618 | 35.80 | 0.9666 | 38.78 | 0.9580 | 37.04 | 0.9624 | 36.91 | 0.9622
| KPN [9], $\sigma$ blind | 35.44 | 0.9577 | 35.03 | 0.9605 | 38.03 | 0.9506 | 36.30 | 0.9586 | 36.20 | 0.9569
LOW | ST-PAN, $\sigma$ blind | 35.70 | 0.9590 | 35.47 | 0.9633 | 38.35 | 0.9538 | 36.67 | 0.9615 | 36.55 | 0.9594
| Direct average | 21.96 | 0.6071 | 24.78 | 0.6934 | 24.34 | 0.5466 | 22.81 | 0.6055 | 23.47 | 0.6132
| VBM4D [7] | 30.34 | 0.8894 | 31.28 | 0.9089 | 32.66 | 0.8881 | 31.33 | 0.8925 | 31.40 | 0.8947
| KPN [9] | 32.92 | 0.9344 | 32.56 | 0.9358 | 35.59 | 0.9223 | 33.80 | 0.9355 | 33.72 | 0.9320
| PAN-sep | 32.94 | 0.9309 | 33.09 | 0.9380 | 35.59 | 0.9208 | 34.15 | 0.9365 | 33.94 | 0.9315
| ST-PAN | 33.29 | 0.9372 | 33.05 | 0.9400 | 36.17 | 0.9301 | 34.40 | 0.9390 | 34.23 | 0.9366
| KPN [9], $\sigma$ blind | 32.73 | 0.9302 | 32.36 | 0.9312 | 35.39 | 0.9185 | 33.61 | 0.9309 | 33.52 | 0.9277
HIGH | ST-PAN, $\sigma$ blind | 33.02 | 0.9327 | 32.79 | 0.9348 | 35.78 | 0.9239 | 34.09 | 0.9361 | 33.92 | 0.9319
## IV Experimental Results
We first describe the datasets and implementation details, and then evaluate
the proposed algorithm for image and video denoising quantitatively and
qualitatively.
### IV-A Datasets
For video denoising, we collect 27 high-quality long videos from the Internet,
where each has a resolution of $1080\times 1920$ or $720\times 1280$ pixels
and a frame rate of $20$, $25$, or $30$ fps. We use 23 long videos for
training and the other 4 for testing, which are split into 205 and 65 non-
overlapped scenes, respectively. With the videos containing different scenes,
we extract 20K sequences for training where each sequence consists of
$2\tau+1$ consecutive frames. Our test dataset is composed of 4 subsets where
each has approximately 30 sequences sampled from the 4 testing videos. There
is no overlap between training and testing videos. In addition, we use the
center frame of each sequence from the video datasets for both training and
testing in single image denoising.
Similar to [9], we generate the noisy input for our models by performing
inverse Gamma correction and adding signal-dependent Gaussian noise,
$\mathcal{N}(0,\sigma_{s}q+\sigma_{r}^{2})$, where $q$ represents the
intensity of the pixel, and the noise parameters $\sigma_{s}$ and $\sigma_{r}$
are randomly sampled from $[10^{-4},10^{-2}]$ and $[10^{-3},10^{-1.5}]$,
respectively. When dealing with homoscedastic Gaussian noise, we set the shot
noise as $0$ and use the target noise level for the read noise during training
similar to [10]. In our experiments, we train the networks in both blind and
non-blind manners. For the non-blind model, the parameters $\sigma_{s}$ and
$\sigma_{r}$ are assumed to be known, and the noise level is fed into the
network as an additional channel of the input. Similar to [9], we estimate the
noise level as: $\sqrt{\sigma_{r}^{2}+\sigma_{s}q_{ref}}$, where $q_{ref}$
represents the intensity value of the reference frame $X_{t}$ in video
denoising or the input image in single frame denoising. Similar to [9], we use
grayscale inputs for fair comparisons with other methods [4, 7, 10, 9].
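The noise synthesis can be sketched as follows; the inverse Gamma correction inverts (8), and the fixed `rng` seed is our addition for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(clean_srgb, sigma_s, sigma_r):
    """Synthesize a noisy input as described: inverse Gamma correction to the
    linear raw space, then signal-dependent Gaussian noise with variance
    sigma_s * q + sigma_r**2, where q is the linear pixel intensity."""
    y = np.asarray(clean_srgb, dtype=float)
    # Inverse of the sRGB transform (8).
    q = np.where(y <= 0.04045, y / 12.92, ((y + 0.055) / 1.055) ** 2.4)
    std = np.sqrt(sigma_s * q + sigma_r ** 2)
    return q + rng.normal(0.0, 1.0, q.shape) * std

noisy = add_noise(np.full((16, 16), 0.5), sigma_s=2.5e-3, sigma_r=1e-2)
```

The non-blind models additionally receive the estimated level $\sqrt{\sigma_{r}^{2}+\sigma_{s}q_{ref}}$ as an extra input channel, which is not shown in this sketch.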
### IV-B Training and Parameter Settings
We learn sampling grids with size $5\times 5$ for single image denoising. For
video input, we use size $3\times 3\times 3$ for the spatio-temporal pixel
aggregation to reduce GPU memory requirement. We set $\eta$ and $\gamma$ as
$100$ and $0.9998$, respectively. In addition, we set $s=3$ for the
regularization term by default. During training, we use the Adam optimizer
[44] with the initial learning rate of $2\times 10^{-4}$. We decrease the
learning rate by a factor of $0.999991$ per epoch, until it reaches $1\times
10^{-4}$. The batch size is set to be $32$. We randomly crop $128\times 128$
patches from the original input for training the single image model. In video
denoising, we crop at the same place of all the input frames and set $\tau=2$,
such that each training sample has a size of $128\times 128\times 5$. In our
experiments, we multiply the output of the offset network by $128$ where we
assume the largest spatial offset of the sampling grid is smaller than the
size of the training patch. We train the denoising networks for $2\times
10^{5}$ iterations, and the process takes about 50 hours.
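The stated learning-rate schedule (initial rate $2\times 10^{-4}$, decay factor $0.999991$, floor $10^{-4}$) admits a simple closed form; here `step` counts decay applications, matching the paper's "per epoch" wording.

```python
def lr_at(step, lr0=2e-4, decay=0.999991, lr_min=1e-4):
    """Exponential learning-rate decay with a floor, as in Section IV-B."""
    return max(lr0 * decay ** step, lr_min)
```

Under this schedule the rate halves (and hits the floor) after roughly $\ln 2 / (1 - 0.999991) \approx 7.7\times 10^{4}$ decay steps.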
TABLE IV: Quantitative evaluation of single image denoising on homoscedastic Gaussian noise. We directly obtain the PSNRs and SSIMs of the baselines from the original papers, and indicate the results that are not available with “-”. Each entry reports PSNR / SSIM.
Dataset | $\sigma$ | NLNet [45] | N3Net [46] | DnCNN [10] | NLRN [47] | SGN [48] | DDFN-x5W [49] | FOCNet [50] | Ours
---|---|---|---|---|---|---|---|---|---
Set12 | 15 | \- / - | \- / - | 32.86 / 0.9031 | 33.16 / 0.9070 | 32.85 / 0.9031 | 32.98 / 0.9052 | 33.07 / - | 33.24 / 0.9110
25 | 30.31 / - | 30.55 / - | 30.44 / 0.8622 | 30.80 / 0.8689 | 30.41 / 0.8639 | 30.60 / 0.8668 | 30.73 / - | 30.97 / 0.8735
50 | 27.04 / - | 27.43 / - | 27.18 / 0.7829 | 27.64 / 0.7980 | 26.77 / 0.7784 | 27.46 / 0.7960 | 27.68 / - | 27.84 / 0.8047
BSD68 | 15 | 31.52 / - | \- / - | 31.73 / 0.8907 | 31.88 / 0.8932 | 31.67 / 0.8897 | 31.83 / 0.8935 | 31.83 / - | 31.91 / 0.8980
25 | 29.03 / - | 29.30 / - | 29.23 / 0.8278 | 29.41 / 0.8331 | 29.03 / 0.8251 | 29.35 / 0.8331 | 29.38 / - | 29.52 / 0.8410
50 | 26.07 / - | 26.39 / - | 26.23 / 0.7189 | 26.47 / 0.7298 | 25.42 / 0.7020 | 26.42 / 0.7302 | 26.50 / - | 26.63 / 0.7433
---
Figure 6: Video denoising results of a real captured sequence. (d) is
generated by directly averaging the input frames. Note the ghosting artifacts
around the glowing tubes by the KPN method in (f).
### IV-C Evaluation on the Proposed Dataset
We evaluate the proposed algorithm against the state-of-the-art image and
video denoising methods [9, 7, 4, 3, 10] on the synthetic dataset at different
noise levels. We conduct exhaustive hyper-parameter finetuning for the NLM
[3], BM3D [4] and VBM4D [7] methods including both blind and non-blind models,
and choose the best results. For fair comparisons, we train the KPN [9] and
DnCNN [10] methods on our datasets with the same settings. While the KPN [9]
scheme is originally designed for multi-frame input, we adapt it to
single-image input for a more comprehensive evaluation by changing the network input.
As shown in Table II and III, the proposed algorithm achieves consistently
better results on both single image and video denoising in terms of both PSNR
and structural similarity (SSIM) in all the subsets with different noise
levels. Even our blind model achieves competitive results, whereas
other methods rely on oracle noise parameters to perform well. Note that
the KPN [9] learns convolution kernels for image and video denoising, where
the irrelevant input pixels can negatively affect the filtering process and
lead to inferior denoising results. Table III also shows the results by
applying the PAN model on each frame separately and then fusing the outputs
with weighted sum for video denoising (denoted as PAN-sep). The proposed ST-
PAN model can generate better results than PAN-sep owing to its capability of
handling large motion.
TABLE V: Quantitative evaluation of video denoising on homoscedastic Gaussian noise. “Tennis”, “Old Town Cross”, “Park Run”, and “Stefan” represent the 4 subsets of the video dataset [12]. Since [12] does not provide SSIMs, we likewise compare only PSNRs in this table for clarity.
 | Tennis | | | Old Town Cross | | | Park Run | | | Stefan | |
---|---|---|---|---|---|---|---|---|---|---|---|---
$\sigma$ | 5 | 25 | 40 | 15 | 25 | 40 | 15 | 25 | 40 | 15 | 25 | 55
DnCNN [10] | 35.49 | 27.47 | 25.43 | 31.47 | 30.10 | 28.35 | 30.66 | 27.87 | 25.20 | 32.20 | 29.29 | 24.51
VBM4D [7] | 34.64 | 29.72 | 27.49 | 32.40 | 31.21 | 29.57 | 29.99 | 27.90 | 25.84 | 29.90 | 27.87 | 23.83
ViDeNN [12] | 35.51 | 29.97 | 28.00 | 32.15 | 30.91 | 29.41 | 31.04 | 28.44 | 25.97 | 32.06 | 29.23 | 24.63
ViDeNN-G [12] | 37.81 | 30.36 | 28.44 | 32.39 | 31.29 | 29.97 | 31.25 | 28.72 | 26.36 | 32.37 | 29.59 | 25.06
TOF [51] | 34.83 | 29.31 | 27.51 | 32.24 | 31.20 | 29.56 | 29.45 | 27.19 | 25.18 | 29.84 | 27.83 | 23.28
INN [23] | 37.63 | 29.76 | 28.17 | 32.52 | 31.38 | 27.14 | 30.86 | 27.93 | 24.64 | 32.14 | 28.72 | 24.00
DVDnet [52] | 37.27 | 30.47 | 28.25 | 32.54 | 31.72 | 29.93 | 31.17 | 28.70 | 26.12 | 31.73 | 29.04 | 24.09
VNLnet [42] | 38.25 | 30.58 | 28.09 | 32.27 | 31.37 | 30.35 | 31.21 | 28.76 | 26.15 | 32.39 | 29.55 | 24.55
KPN [9] | 38.55 | 30.45 | 28.43 | 32.40 | 31.52 | 30.34 | 31.41 | 28.84 | 26.48 | 32.36 | 29.61 | 25.10
Ours | 39.25 | 30.73 | 28.55 | 33.19 | 32.02 | 30.62 | 32.17 | 29.47 | 26.90 | 32.59 | 29.71 | 25.22
---
(a) Reference frame of input | (b) Input | (c) VBM4D [7] | (d) KPN [9] | (e) ST-PAN | (f) GT
Figure 7: Temporal consistency of the proposed video denoising method. We
collect 1D samples over 60 frames from the red dashed line shown in (a), and
concatenate these 1D samples into a 2D image to represent the temporal
profiles of the videos. Specifically, (b)-(f) show the temporal profiles of
the input, VBM4D [7], KPN [9], our model, and the ground truth, where the proposed ST-PAN model
achieves better temporal consistency.
Figure 5 shows several image and video denoising results from the synthetic
dataset. Conventional methods [4, 3, 7] with hand-crafted sampling and
weighting strategies do not perform well and generate severe artifacts. In
particular, the VBM4D [7] method selects pixels using $L_{2}$ norm to measure
patch similarities, which tends to generate oversmoothing results, as shown in
Figure 5(j). On the other hand, directly synthesizing the results with deep
CNNs [10] can lead to denoised results with corrupted structures and fewer
details (Figure 5(e)). Furthermore, the KPN [9] learns rigid kernels for video
denoising, which do not deal with misalignments larger than $2$ pixels due to
the limitation of rigid sampling. When the misalignment is beyond this limit,
the KPN model is likely to generate oversmoothed results (Figure 5(k)) or
ghosting artifacts around high-contrast boundaries, as shown in Figure 6. In
contrast, the proposed method learns the pixel aggregation process in a data-
driven manner and achieves clearer results with fewer artifacts (Figure 5(f),
(l) and Figure 6(g)).
### IV-D Evaluation on Homoscedastic Gaussian Noise
While noise in the real world is mostly signal-dependent and heteroscedastic
[1, 13, 53], existing methods often evaluate their denoising algorithms on
homoscedastic Gaussian noise [45, 10, 46, 47, 48, 49, 12, 42, 52]. For a more
comprehensive study, we evaluate the proposed PAN and ST-PAN models on image
and video denoising datasets with homoscedastic Gaussian noise. As shown in
Table IV, our single image denoising model performs favorably against the
baseline methods [17, 18, 10, 19, 48, 49, 50] on the Set12 and BSD68 datasets
[10]. Since the original models of SGN [48] are not available, we train the
SGN with the code provided by the authors following the settings of the
original paper [48]. Furthermore, the ST-PAN method achieves consistently
better results than the state-of-the-art burst and video denoising approaches
[7, 12, 23, 52, 42, 51, 9] on the dataset of [12] under different noise levels
(Table V).
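The signal-dependent, heteroscedastic noise discussed above is commonly modeled as shot noise whose variance grows with intensity plus constant read noise. The following is a minimal NumPy sketch of that standard model, not the paper's exact synthesis pipeline (which also involves inverse Gamma correction); the function name and defaults are illustrative.

```python
import numpy as np

def add_signal_dependent_noise(img, sigma_s=2.5e-3, sigma_r=1e-2, seed=0):
    """Add heteroscedastic shot + read noise to an image in [0, 1].

    Per-pixel noise variance grows linearly with intensity:
        var(x) = sigma_s * x + sigma_r**2
    so bright regions are noisier than dark ones, unlike homoscedastic
    Gaussian noise with a single global sigma.
    """
    rng = np.random.default_rng(seed)
    variance = sigma_s * img + sigma_r ** 2
    noisy = img + rng.normal(0.0, 1.0, size=img.shape) * np.sqrt(variance)
    return np.clip(noisy, 0.0, 1.0)

img = np.full((64, 64), 0.5)        # a flat mid-gray test image
noisy = add_signal_dependent_noise(img)
```

The parameter pairs in Table VII, e.g. (2.5e-3, 1e-2), correspond to $(\sigma_{s},\sigma_{r})$ in this parameterization.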
### IV-E Temporal Consistency
It is often desirable for the video denoising algorithms to generate
temporally-coherent video frames. In Figure 7, we show some video denoising
results for evaluating the temporal consistency of the proposed model.
Specifically, we collect 1D samples highlighted by the red dashed line (as
shown in Figure 7(a)) through 60 consecutive frames, and concatenate these 1D
samples into a 2D image to represent the temporal profiles of the denoised
videos. Compared to the results of the baseline methods (Figure 7(c) and (d)),
the temporal profile of the proposed ST-PAN model (Figure 7(e)) has smoother
structures and fewer jittering artifacts, which indicates better temporal
consistency of our model.
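The temporal-profile construction described above is a simple slice-and-stack operation; a minimal NumPy sketch (array shapes and the function name are illustrative):

```python
import numpy as np

def temporal_profile(video, col):
    """Stack one vertical line of pixels from every frame into a 2D image.

    video: array of shape (T, H, W); col: horizontal position of the line.
    Returns an (H, T) image whose columns are the sampled lines; jagged
    horizontal structures in this image indicate temporal flickering.
    """
    return np.stack([frame[:, col] for frame in video], axis=1)

video = np.zeros((60, 32, 32))       # 60 frames of 32x32
video[:, 10, :] = 1.0                # a static bright row across all frames
profile = temporal_profile(video, col=16)
```

A temporally consistent result yields smooth horizontal structures in `profile`, as the static bright row does here.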
### IV-F Generalization to Real Inputs
We evaluate our method with state-of-the-art denoising approaches [4, 10, 9,
7] on real images and video sequences captured by cellphones in Figure 1 and
8. While trained on synthetic data, our model is able to recover subtle edges
from the real-captured noisy input and well handle misalignment from large
motions.
---
Figure 8: Results on real noisy image (first row) and video frame sequence
(second row) captured by cellphones. “ref.” denotes the reference frame.
---
Figure 9: Denoised results by variants of the proposed model. Pixel
aggregation with learned sampling grid and weighting strategy achieves higher-
quality result with fewer visual artifacts.
## V Discussion and Analysis
### V-A Ablation Study
In this section, we present ablation studies on different components of our
algorithm for better analysis. We show the PSNR and SSIM for six variants of
the proposed model in Table VI, where “our full model $3\times 3\times 3$” is
the default setting. First, the model “direct” uses the offset network in
Figure 3 to directly synthesize the denoised output, which cannot produce
high-quality results. This demonstrates the effectiveness of learning the
pixel aggregation process for denoising. Second, to learn the spatially-
variant weighting strategies, we use dynamic weights for the proposed pixel
aggregation networks. As shown in the second row of Table VI, learning the
model without dynamic weights significantly degrades the denoising
performance. On the third and fourth rows, we show the denoising results using
rigid sampling grids with different sizes. The result shows that learning the
pixel sampling strategies is important for the denoising process and
significantly improves the performance. Furthermore, we concatenate the
features of the offset network to predict the aggregation weights in Figure 2.
The model without concatenating these features cannot exploit the deep offset
network and only relies on the shallow structure of three convolution layers
for weight prediction, which results in decreased performance as shown by the
fifth row of Table VI. In addition, the results on the sixth row show that the
annealing term is important for training our model, and all components of our
method are essential for denoising. Note that our learned sampling grid with
size $3\times 3\times 3$ can sample pixels from a large receptive field (up to
$\pm$15 pixels in our experiment), and further increasing the grid size of the
ST-PAN only marginally improves the performance. Thus, we choose a smaller
sampling size as our default setting in this work.
TABLE VI: Ablation study on the synthetic dataset.
Algorithms | Low PSNR | Low SSIM | High PSNR | High SSIM
---|---|---|---|---
direct | 35.45 | 0.9518 | 32.71 | 0.9200
fixed averaging weights | 35.50 | 0.9449 | 32.60 | 0.9058
rigid sampling grid $3\times 3\times 3$ | 36.02 | 0.9555 | 33.33 | 0.9256
rigid sampling grid $5\times 5\times 5$ | 36.37 | 0.9592 | 33.73 | 0.9320
w/o concatenating offset features | 36.10 | 0.9590 | 33.46 | 0.9348
w/o regularization term | 36.16 | 0.9601 | 33.48 | 0.9341
our full model $3\times 3\times 3$ | 36.91 | 0.9622 | 34.23 | 0.9366
our full model $5\times 5\times 5$ | 36.88 | 0.9631 | 34.25 | 0.9379
Beyond the quantitative comparisons shown above, we present a more detailed
analysis as follows to illustrate why the different variants of our model do
not perform well.
Direct synthesis.
As introduced in Section I, the aggregation-based denoising process, including
both pixel sampling and averaging, is usually spatially-variant and data-
dependent. However, most CNNs use spatially-invariant and data-independent
convolution kernels, and often require very deep structures to implicitly
approximate the denoising process. Thus, directly synthesizing denoised outputs
with CNNs is likely to result in local-minimum solutions with over-smoothed
results. Similar findings have also been shown in [29] for video frame
interpolation. In contrast, the proposed pixel aggregation model explicitly
learns this spatially-variant filtering process, which can effectively exploit
image structures to alleviate the aforementioned issues of direct synthesis.
In addition, our model directly aggregates input pixels, which constrains the
output space and thus generates fewer artifacts in the denoised results. A
visual comparison between direct synthesis and our method is shown in Figure
9(d) and (g).
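The contrast with direct synthesis can be made concrete with a minimal sketch of the aggregation step for a single pixel. This is a simplified nearest-neighbor version: the actual model predicts continuous offsets and samples them bilinearly, with offsets and weights produced by the networks of Figures 2 and 3, so the names and integer offsets below are illustrative only.

```python
import numpy as np

def aggregate_pixel(img, y, x, offsets, logits):
    """Denoise one pixel as a weighted average of sampled neighbors.

    offsets: (N, 2) predicted integer displacements (dy, dx); the full
    model predicts continuous offsets and samples bilinearly, but
    nearest-neighbor gathering keeps this sketch short.
    logits: (N,) predicted scores turned into weights by a softmax.
    """
    h, w = img.shape
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    out = 0.0
    for (dy, dx), wgt in zip(offsets, weights):
        yy = np.clip(y + dy, 0, h - 1)   # clamp samples to the image
        xx = np.clip(x + dx, 0, w - 1)
        out += wgt * img[yy, xx]
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
offsets = np.array([[0, 0], [0, 1], [1, 0]])
logits = np.zeros(3)                      # uniform weights for the demo
val = aggregate_pixel(img, 2, 2, offsets, logits)   # mean of pixels 12, 13, 17
```

Because the output is a convex combination of input pixels, it stays inside the input's value range, which is the output-space constraint mentioned above.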
Fixed averaging weights.
As shown in (1), the principle of image and video denoising is to sample
similar pixels around the one to be denoised and then take them as multiple
observations of the input pixel for averaging. Thus, a straightforward
solution for image denoising is to only predict the sampling locations and
average the predicted observations (i.e. sampled pixels) with the same weights
for different pixel locations. This is conceptually similar to the deformable
convolution network [28], which uses kernels with adaptive spatial shape and
fixed parameters for object detection. However, the sampled pixels from the
noisy input usually do not obey the exact same distribution and thus should be
adaptively weighted in the denoising process. For example, aggregation-based
algorithms [2, 3] exploit patch similarity to design weighting strategies for
effective image denoising. Similar to these methods [2, 3], we learn content-
aware averaging weights for pixel aggregation, which significantly improves
the performance over fixed averaging weights in Table VI.
---
Figure 10: Comparing our method to the rigid sampling scheme with different
grid sizes. The denoised results are obtained using the BSD68 dataset with
$\sigma=25$.
---
Figure 11: Pixel aggregation process of the ST-PAN model for video input. The
patch sequence $\{X_{-2},X_{-1},X_{0},X_{1},X_{2}\}$ in (a) is cropped from
the same spatial location of a sequence of consecutive video frames. We show
the cropping location in the original reference frame (b) with an orange
curve. The blue points in the bottom row of (a) denote _five_ rigid sampling
grids with size $3\times 3$, while the red points in the top row of (a)
represent _one_ adaptive grid with size $3\times 3\times 3$. The center blue
point in $X_{0}$ is the reference pixel for denoising. As the window in (a) is
moving vertically, the sampling locations also move vertically to trace the
boundaries and search for more reliable pixels, which helps solve the
misalignment issue. For better geometrical understanding, we show the 3D grids
in the spatio-temporal space as the red points in (e). In addition, we
respectively project the sampled pixels to different 2D planes as shown in (c)
and (d). Note that a higher coordinate in (d) corresponds to a lower point in
(a). As the frame index increases, the sampling locations shift directionally
along the vertical axis of (d) while varying randomly along the
horizontal axis of (c), which is consistent with the motion trajectory of the
window in (a).
---
Figure 12: Experimental results by the PAN-sep and ST-PAN models on different
motion levels. A smaller frame rate on the fps-axis in (a) indicates larger
motion. We visualize the motion difference by averaging the $120$fps and
$24$fps input sequences in (b) and (c).
---
Figure 13: Distributions of the sampling locations on the time dimension in
the test dataset. (a) and (b) represent our models with and without using the
annealing term. $x$\- and $y$-axis denote the frame index and the percentage
of pixels, respectively.
Rigid sampling grid.
Another straightforward alternative of the proposed method is to aggregate
pixels from a rigid sampling grid. The influence of irrelevant sampling
locations in the rigid grid can be reduced by the adaptive weighting model,
which learns to give higher weights to more similar pixels. However, the rigid
strategy can only sample pixels from a restricted receptive field, which
hinders the model from utilizing more valid observations for denoising. This
can be understood from (1), where a smaller $N$ leads to higher-variance
estimates and thus worse denoised output (Figure 9(e)). In contrast, the proposed
algorithm can adapt to the image structures (as shown in Section V-D), and
increase the receptive field without sampling more pixels. As such, it can
exploit more useful observations (larger $N$ in (1)) and reduce the variance
of the estimated results, thereby leading to better denoising performance
(Figure 9(g)).
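The variance argument can be checked numerically: averaging $N$ independent observations of the same intensity with noise level $\sigma$ reduces the estimator variance to roughly $\sigma^{2}/N$. A small Monte-Carlo sketch (all names and parameters are illustrative):

```python
import numpy as np

def empirical_variance(n, sigma=0.1, trials=20000, seed=0):
    """Variance of the mean of n noisy observations of a pixel at 0.5."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(0.5, sigma, size=(trials, n))
    return samples.mean(axis=1).var()

# Variance shrinks roughly as sigma^2 / n: ~0.01, ~0.0025, ~0.000625.
v1, v4, v16 = (empirical_variance(n) for n in (1, 4, 16))
```

This is why enlarging the set of valid samples (larger $N$ in (1)) helps, provided the extra samples really observe the same underlying intensity.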
While our method addresses the issues of rigid sampling, one potential
question is whether the trivial solution, i.e., simply enlarging the sampling
grid to cover larger areas, can achieve similar results. We evaluate the image
denoising models using rigid sampling strategy with different grid sizes on
the BSD68 dataset. As shown in Figure 10, enlarging the sampling grid is only
effective for smaller grid sizes. In addition, the SSIM values decrease when
the grid size becomes larger than $7$. This is mainly due to the large number
of irrelevant sampling locations in the large rigid grids, and we empirically
show that it is difficult to address this issue solely by the learned adaptive
weights. We also present one denoised output using the large-size rigid
sampling strategy in Figure 9(f), which demonstrates the significance of the
proposed pixel aggregation network.
### V-B Effectiveness of the Spatio-Temporal Pixel Aggregation
As illustrated in Figure 4, the proposed ST-PAN model samples pixels across
the spatial-temporal space for video denoising, and thus better handles large
motion videos. To further verify the effectiveness of the spatio-temporal
sampling on large motion, we evaluate the PAN-sep and ST-PAN models under
different motion levels. Specifically, we sample $240$fps video clips with
large motion from the Adobe240 dataset [43]. We temporally downsample the high
frame rate videos and obtain $7$ test subsets of different frame rates: $120$,
$80$, $60$, $48$, $40$, $30$, and $24$fps, where each contains 180 input
sequences. Note that the sequences with different frame rates correspond to
videos with different motion levels, and all the subsets use the same
reference frames. As shown in Figure 12, the performance gap between the 2D
and 3D strategies becomes larger as the frame rate decreases, which
demonstrates the effectiveness of the spatial-temporal sampling on large
motion. We also notice that both methods achieve better results (smaller MSE)
on videos with higher frame rates, which shows the importance of exploiting
temporal information in video denoising.
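The temporal downsampling above amounts to keeping every $(240/\mathrm{fps})$-th frame, so all subsets share the same reference frames while the apparent inter-frame motion grows. A minimal sketch (function name is ours):

```python
def downsample_indices(num_frames, src_fps=240, dst_fps=30):
    """Frame indices that turn a high-frame-rate clip into a lower-rate one.

    Keeping every (src_fps // dst_fps)-th frame enlarges the apparent
    motion between consecutive kept frames, while every subset still
    contains frame 0 and the other shared reference frames.
    """
    stride = src_fps // dst_fps
    return list(range(0, num_frames, stride))

idx = downsample_indices(240, dst_fps=30)   # stride 8: frames 0, 8, 16, ...
```

All seven target rates used here (120, 80, 60, 48, 40, 30, 24 fps) divide 240 evenly, so each subset is an exact temporal subsampling.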
### V-C Effectiveness of the Regularization Term in (11)
Figure 13 shows the distributions of the sampling locations on the time
dimension in the test dataset. Directly optimizing the $L_{1}$ loss without
the annealing term in video denoising often leads to undesirable local minima
where most of the sampling locations are around the reference frame as shown in
Figure 13(b). By adding the regularization term in the training process, the
network is forced to search more informative pixels across a larger temporal
range, which helps alleviate the local minima issues (Figure 13(a)).
### V-D Visualization of the Pixel Aggregation Process
For a more intuitive understanding of the proposed algorithm, we visualize the
denoising process of the PAN model in Figure 14. Our network exploits the
structure information by sampling pixels along edges (Figure 14(b) and (c)),
and thereby reduces the interference of inappropriate samples for better image
denoising performance. Note that the predicted sampling locations in Figure
14(b) are not always perfect mainly due to the noise of the input image.
However, the influence of the irrelevant sampling locations can be further
reduced by the learned aggregation weights as shown in Figure 14(c), where the
out-of-edge pixels are given substantially smaller weights for denoising. We also
show a much larger rigid grid (size $11\times 11$) in Figure 14(d) to
demonstrate why the straightforward strategy of increasing grid size does not
work as well as our solution.
For video denoising, we show an example in Figure 11 to visualize the 3D
sampling grid of the ST-PAN model. As shown in Figure 11(a), the proposed ST-
PAN model can trace the moving boundary along the motion direction and
aggregate similar pixels from all the input frames. The ability to sample both
spatially and temporally is crucial for our method to deal with large motion
and recover clean structures and details.
---
Figure 14: Visualization of the pixel aggregation process of PAN for single
image input. (a) is the noisy input. (b) represents the sampled pixels of PAN
with grid size $5\times 5$, and (c) shows the averaging weights of these
pixels. We also show the weights of a large rigid sampling grid with size
$11\times 11$ in (d) for better understanding. Note that the PAN model
achieves a large receptive field without increasing the grid size and reduces
the influence of irrelevant pixels.
### V-E Learned Location Shift for Different Noise Levels
To provide further analysis of the learned sampling locations, we apply the
proposed network to the same images with different noise levels and compute
the average receptive field of the learned sampling grids in the images. Table
VII shows that the network tends to search pixels from a wider area as the
noise increases, which demonstrates that our model can automatically adjust
its sampling strategies to incorporate information from a larger receptive
field to deal with more challenging inputs.
TABLE VII: Average receptive field of the learned sampling grids under different noise levels on the proposed test dataset. The noise parameter ($\sigma_{s},\sigma_{r}$) denotes the intensity of the shot noise and read noise. The proposed network tends to search pixels from a wider area as the noise intensity increases.
Noise parameter | (2.5e-3, 1e-2) | (6.4e-3, 2e-2) | (1e-2, 5e-2)
---|---|---|---
Average receptive field | 6.8326 | 7.294 | 7.7152
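One way to compute such an average receptive field from predicted offsets can be sketched as follows. The text does not spell out the exact measure behind Table VII, so the definition below (mean over pixels of the largest per-pixel sampling extent) is our assumption, for illustration only.

```python
import numpy as np

def average_receptive_field(offsets):
    """Average spatial extent of learned sampling grids.

    offsets: (H, W, N, 2) predicted (dy, dx) displacements for the N
    sampled locations of every pixel. We measure, for each pixel, the
    largest Chebyshev distance from the reference pixel to any sampled
    location, and average this extent over the image.
    """
    extent = np.abs(offsets).max(axis=(2, 3))   # (H, W) per-pixel extent
    return extent.mean()

offsets = np.zeros((4, 4, 9, 2))
offsets[..., 0, :] = 3.5              # one far-away sample per pixel
rf = average_receptive_field(offsets)
```

Under this measure, wider learned grids directly translate into a larger reported receptive field, matching the trend in Table VII.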
### V-F Running Speed
The proposed method can process 0.27 megapixels in one second on a desktop
with an Intel i5 CPU and a GTX 1060 GPU, which is close to the running speed
of KPN (0.30 megapixels per second). Note that our model has a similar amount
of parameters to KPN, and the additional time cost mainly comes from the
trilinear sampler which can be potentially accelerated with parallel
implementation.
## VI Conclusions
In this work, we propose to learn the pixel aggregation process for image and
video denoising with deep neural networks. The proposed method adaptively
samples pixels from the 2D or 3D input, handles misalignment caused by dynamic
scenes, and enables large receptive fields while preserving details. In
addition, we present a regularization term for effectively training the
proposed video denoising model. Extensive experimental results demonstrate
that our algorithm performs favorably against the state-of-the-art methods on
both synthetic and real inputs.
While we use the inverse Gamma correction for synthesizing the training data,
recent works have studied more realistic data generation in the raw domain
[54, 55, 53]. Adapting the proposed network for raw data and exploring more
realistic data generation pipelines will be interesting directions for future
work.
## References
* [1] G. E. Healey and R. Kondepudy, “Radiometric ccd camera calibration and noise estimation,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 16, pp. 267–276, 1994.
* [2] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in _IEEE International Conference on Computer Vision_ , 1998.
* [3] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2005.
* [4] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” _IEEE Transactions on Image Processing_ , vol. 16, pp. 2080–2095, 2007.
* [5] T. Plotz and S. Roth, “Benchmarking denoising algorithms with real photographs,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2017.
* [6] K. Dabov, A. Foi, and K. Egiazarian, “Video denoising by sparse 3d transform-domain collaborative filtering,” in _European Signal Processing Conference_ , 2007.
* [7] M. Maggioni, G. Boracchi, A. Foi, and K. Egiazarian, “Video denoising, deblocking, and enhancement through separable 4-d nonlocal spatiotemporal transforms,” _IEEE Transactions on Image Processing_ , vol. 21, pp. 3952–3966, 2012.
* [8] Z. Liu, L. Yuan, X. Tang, M. Uyttendaele, and J. Sun, “Fast burst images denoising,” _ACM Transactions on Graphics (TOG)_ , vol. 33, p. 232, 2014.
* [9] B. Mildenhall, J. T. Barron, J. Chen, D. Sharlet, R. Ng, and R. Carroll, “Burst denoising with kernel prediction networks,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2018.
* [10] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” _IEEE Transactions on Image Processing_ , vol. 26, pp. 3142–3155, 2017.
* [11] T. Remez, O. Litany, R. Giryes, and A. M. Bronstein, “Deep class-aware image denoising,” in _International Conference on Sampling Theory and Applications_ , 2017.
* [12] M. Claus and J. van Gemert, “Videnn: Deep blind video denoising,” in _IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , 2019.
* [13] R. C. Gonzalez and R. E. Woods, _Digital image processing_. Prentice hall New Jersey, 2002.
* [14] C. Liu and W. T. Freeman, “A high-quality video denoising algorithm based on reliable motion estimation,” in _European Conference on Computer Vision_ , 2010.
* [15] X. Chen, L. Song, and X. Yang, “Deep rnns for video denoising,” in _Applications of Digital Image Processing XXXIX_ , 2016.
* [16] C. Godard, K. Matzen, and M. Uyttendaele, “Deep burst denoising,” in _European Conference on Computer Vision_ , 2018.
* [17] S. Lefkimmiatis, “Non-local color image denoising with convolutional neural networks,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2017.
* [18] T. Plötz and S. Roth, “Neural nearest neighbors networks,” in _Advances in Neural Information Processing Systems_ , 2018.
* [19] D. Liu, B. Wen, Y. Fan, C. C. Loy, and T. S. Huang, “Non-local recurrent network for image restoration,” in _Advances in Neural Information Processing Systems_ , 2018.
* [20] W. Zuo, K. Zhang, and L. Zhang, “Convolutional neural networks for image denoising and restoration,” in _Denoising of Photographic Images and Video_ , 2018, pp. 93–123.
* [21] K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn-based image denoising,” _IEEE Transactions on Image Processing_ , vol. 27, pp. 4608–4622, 2018.
* [22] A. Mahendran and A. Vedaldi, “Understanding deep image representations by inverting them,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2015.
* [23] F. Kokkinos and S. Lefkimmiatis, “Iterative residual cnns for burst photography applications,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2019.
* [24] S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” _ACM Transactions on Graphics (TOG)_ , vol. 35, no. 6, pp. 1–12, 2016.
* [25] B. D. Brabandere, X. Jia, T. Tuytelaars, and L. V. Gool, “Dynamic filter networks,” in _Advances in Neural Information Processing Systems_ , 2016\.
* [26] M. Jaderberg, K. Simonyan, A. Zisserman _et al._ , “Spatial transformer networks,” in _Advances in Neural Information Processing Systems_ , 2015\.
* [27] T. Hyun Kim, M. S. Sajjadi, M. Hirsch, and B. Scholkopf, “Spatio-temporal transformer network for video restoration,” in _European Conference on Computer Vision_ , 2018.
* [28] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, “Deformable convolutional networks,” in _IEEE International Conference on Computer Vision_ , 2017.
* [29] S. Niklaus, L. Mai, and F. Liu, “Video frame interpolation via adaptive convolution,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2017.
* [30] E. Park, J. Yang, E. Yumer, D. Ceylan, and A. C. Berg, “Transformation-grounded image generation network for novel 3d view synthesis,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2017.
* [31] X. Zhu, H. Hu, S. Lin, and J. Dai, “Deformable convnets v2: More deformable, better results,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019.
* [32] H. Jiang, D. Sun, V. Jampani, M. Yang, E. G. Learned-Miller, and J. Kautz, “Super slomo: High quality estimation of multiple intermediate frames for video interpolation,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2018.
* [33] M. Anderson, R. Motta, S. Chandrasekar, and M. Stokes, “Proposal for a standard default color space for the internet—srgb,” in _Color and imaging conference_ , 1996.
* [34] International Electrotechnical Commission, “IEC 61966-2-1:1999,” _Multimedia systems and equipment-Colour measurement and management-Part_ , pp. 2–1, 1999.
* [35] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in _International Conference on Medical image computing and computer-assisted intervention_ , 2015.
* [36] C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2018.
* [37] X. Xu, D. Sun, S. Liu, W. Ren, Y.-J. Zhang, M.-H. Yang, and J. Sun, “Rendering portraitures from monocular camera and beyond,” in _European Conference on Computer Vision_ , 2018.
* [38] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in _International Conference on Machine Learning_ , 2010.
* [39] B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee, “Enhanced deep residual networks for single image super-resolution,” in _IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , 2017.
* [40] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2018.
* [41] W. Ren, J. Zhang, X. Xu, L. Ma, X. Cao, G. Meng, and W. Liu, “Deep video dehazing with semantic segmentation,” _IEEE Transactions on Image Processing_ , vol. 28, no. 4, pp. 1895–1908, 2018.
* [42] A. Davy, T. Ehret, J.-M. Morel, P. Arias, and G. Facciolo, “A non-local cnn for video denoising,” in _IEEE International Conference on Image Processing_ , 2019.
* [43] S. Su, M. Delbracio, J. Wang, G. Sapiro, W. Heidrich, and O. Wang, “Deep video deblurring for hand-held cameras,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2017.
* [44] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _International Conference on Learning Representations_ , 2014.
* [45] S. Lefkimmiatis, “Non-local color image denoising with convolutional neural networks,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2017.
* [46] T. Plötz and S. Roth, “Neural nearest neighbors networks,” in _Advances in Neural Information Processing Systems_ , 2018.
* [47] D. Liu, B. Wen, Y. Fan, C. C. Loy, and T. S. Huang, “Non-local recurrent network for image restoration,” in _Advances in Neural Information Processing Systems_ , 2018.
* [48] S. Gu, Y. Li, L. V. Gool, and R. Timofte, “Self-guided network for fast image denoising,” in _IEEE International Conference on Computer Vision_ , 2019.
* [49] C. Chen, Z. Xiong, X. Tian, Z.-J. Zha, and F. Wu, “Real-world image denoising with deep boosting,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2019.
* [50] X. Jia, S. Liu, X. Feng, and L. Zhang, “Focnet: A fractional optimal control network for image denoising,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019.
* [51] T. Xue, B. Chen, J. Wu, D. Wei, and W. T. Freeman, “Video enhancement with task-oriented flow,” _International Journal of Computer Vision_ , vol. 127, no. 8, pp. 1106–1125, 2019.
* [52] M. Tassano, J. Delon, and T. Veit, “Dvdnet: A fast network for deep video denoising,” in _IEEE International Conference on Image Processing_ , 2019.
* [53] X. Xu, Y. Ma, and W. Sun, “Towards real scene super-resolution with raw images,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2019.
* [54] C. Chen, Q. Chen, M. N. Do, and V. Koltun, “Seeing motion in the dark,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019.
* [55] H. Yue, C. Cao, L. Liao, R. Chu, and J. Yang, “Supervised raw video denoising with a benchmark dataset on dynamic scenes,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2020.
# Benchmarking Invertible Architectures on Inverse Problems
Jakob Kruse Lynton Ardizzone Carsten Rother Ullrich Köthe
###### Abstract
Recent work demonstrated that flow-based invertible neural networks are
promising tools for solving ambiguous inverse problems. Following up on this,
we investigate how ten invertible architectures and related models fare on two
intuitive, low-dimensional benchmark problems, obtaining the best results with
coupling layers and simple autoencoders. We hope that our initial efforts
inspire other researchers to evaluate their invertible architectures in the
same setting and put forth additional benchmarks, so our evaluation may
eventually grow into an official community challenge.
## 1 Introduction
Both in science and in everyday life, we often encounter phenomena that depend
on hidden properties $\mathbf{x}$, which we would like to determine from
observable quantities $\mathbf{y}$. A common problem is that many different
configurations of these properties would result in the same observable state,
especially when there are far more hidden than observable variables. We will
call the mapping $f$ from hidden variables $\mathbf{x}$ to observable
variables $\mathbf{y}=f(\mathbf{x})$ the forward process. It can usually be
modelled accurately by domain experts. The opposite direction, the inverse
process $\mathbf{y}\rightarrow\mathbf{x}$, is much more difficult to deal
with. Since $f^{-1}(\mathbf{y})$ does not have a single unambiguous answer, a
proper inverse model should instead estimate the full posterior probability
distribution $p(\mathbf{x}\mid\mathbf{y})$ of hidden variables
$\mathbf{x}$ given the observation $\mathbf{y}$.
Recent work (Ardizzone et al., 2019) has shown that flow-based invertible
neural networks such as RealNVP (Dinh et al., 2016) can be trained with data
from the forward process, and then used in inverse mode to sample from
$p(\mathbf{x}\mid\mathbf{y})$ for any $\mathbf{y}$. This is made
possible by introducing additional latent variables $\mathbf{z}$ that encode
any information about $\mathbf{x}$ not contained in $\mathbf{y}$. Assuming a
perfectly representative training set and a fully converged model, they prove
that the generated distribution is equal to the true posterior.
Figure 1: Prior distributions of the parameters $\mathbf{x}$ in either
benchmark. Left: An articulated arm with three segments is mounted on a rail.
$x_{1}$ determines the vertical position on the rail and $x_{2\dots 4}$
determine the angles at the three joints. $97\%$ of end points of the
resulting arms fall within the contour labelled $\mathbf{y}$. Right: An object
is thrown upwards and to the right from a starting position $(x_{1},x_{2})$,
at an angle $x_{3}$ and initial velocity $x_{4}$. We observe the locations of
impact $y$ where each trajectory hits the ground, i.e. the $x$ axis. A green
curve shows the density of these positions.
Interestingly, this proof carries over to all models offering an exact inverse
upon convergence. This poses a natural question: How well can various network
types approximate this ideal behavior in practice? Fundamentally, we can
distinguish between hard invertibility, where the architecture ensures that
forward and backward processing are exact inverses of each other (e.g.
RealNVP), and soft invertibility, where encoder and decoder only become
inverses upon convergence (e.g. autoencoders). The former pays for guaranteed
invertibility with architectural restrictions that may harm expressive power
and training dynamics, whereas the latter is more flexible but only
approximately invertible.
We propose two simple inverse problems, one geometric and one physical, for
systematic investigation of the resulting trade-offs. Common toy problems for
invertible networks are constrained to two dimensions for visualization
purposes (Behrmann et al., 2018; Grathwohl et al., 2018). The 4D problems
shown here are more challenging, facilitating more meaningful variance in the
results of different models. However, they still have an intuitive 2D
representation (Fig. 1) and are small enough to allow computation of ground
truth posteriors via rejection sampling, which is crucial for proper
evaluation. We test ten popular network variants on our two problems to
address the following questions: (i) Is soft invertibility sufficient for
solving inverse problems? (ii) Do architectural restrictions needed for hard
invertibility harm performance? (iii) Which architectures and losses give the
most accurate results?
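The ground-truth posteriors mentioned above can be obtained by rejection sampling because the benchmarks are low-dimensional: draw parameters from the prior, simulate the forward process, and keep the draws whose simulated observation lands near the observed value. A minimal sketch (the forward model below is a toy stand-in, not one of the two benchmarks; all names and tolerances are illustrative):

```python
import numpy as np

def rejection_sample_posterior(forward, y_obs, prior_sampler, eps=0.05,
                               n_proposals=200000, seed=0):
    """Approximate p(x | y_obs) by keeping prior samples whose simulated
    observation falls within eps of y_obs. Feasible only in low
    dimensions, since the acceptance rate shrinks quickly with eps."""
    rng = np.random.default_rng(seed)
    x = prior_sampler(rng, n_proposals)
    y = forward(x)
    return x[np.abs(y - y_obs) < eps]

# Toy 2D stand-in: we only observe the sum of both hidden parameters,
# so the posterior concentrates on the line x1 + x2 = y_obs.
forward = lambda x: x.sum(axis=1)
prior = lambda rng, n: rng.normal(0.0, 1.0, size=(n, 2))
posterior = rejection_sample_posterior(forward, 0.0, prior)
```

Comparing samples from a trained inverse model against such rejection-sampled posteriors is what makes a quantitative evaluation possible.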
## 2 Methods
Invertible Neural Networks (INNs). Our starting point is the model from
(Ardizzone et al., 2019), which is based on RealNVP, i.e. affine coupling
layers. They propose to use a standard L2 loss for fitting the network’s
$\mathbf{y}$-predictions to the training data,
$\mathrm{L2}(\mathbf{y})=(\mathbf{y}-\mathbf{y}_{\mathrm{gt}})^{2},$ (1)
and an MMD loss (Gretton et al., 2012) for fitting the latent distribution
$p(\mathbf{z})$ to $\mathcal{N}(\mathbf{0},\mathbf{I})$, given samples:
$\mathrm{MMD}(\mathbf{z})=\mathbf{E}_{i,j}[\kappa(\mathbf{z}^{(i)},\mathbf{z}^{(j)})]-2\,\mathbf{E}_{i,j}[\kappa(\mathbf{z}^{(i)},\mathbf{z}_{\mathrm{gt}}^{(j)})]+\mathbf{E}_{i,j}[\kappa(\mathbf{z}_{\mathrm{gt}}^{(i)},\mathbf{z}_{\mathrm{gt}}^{(j)})]$ (2)
With a weighting factor $\alpha$, their training loss becomes
$\mathcal{L}(\mathbf{y},\mathbf{z})=\mathrm{L2}(\mathbf{y})+\alpha\cdot\mathrm{MMD}(\mathbf{z}).$ (3)
(Diagram: $\mathbf{x}\leftrightarrow$ Invertible Neural Net $\leftrightarrow[\mathbf{y},\mathbf{z}]$; L2 to match training data on $\mathbf{y}$, MMD to match $\mathcal{N}(\mathbf{0},\mathbf{I})$ on $\mathbf{z}$, optional MMD to match the prior on $\mathbf{x}$.)
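The MMD estimate of Eq. 2 translates directly into code. A minimal NumPy sketch, assuming a Gaussian kernel $\kappa$ with fixed bandwidth (the equation itself does not fix the kernel choice or bandwidth):

```python
import numpy as np

def mmd(z, z_gt, sigma=1.0):
    """Biased estimate of Eq. 2 with a Gaussian kernel
    kappa(a, b) = exp(-||a - b||^2 / (2 sigma^2)).
    z: model samples (n, d); z_gt: reference samples (m, d)."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return (kernel(z, z).mean()
            - 2.0 * kernel(z, z_gt).mean()
            + kernel(z_gt, z_gt).mean())
```

The estimate vanishes when both sample sets coincide and grows as the two distributions separate.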
We find that it is also possible to train the network with just a maximum
likelihood loss (Dinh et al., 2016) by assuming $\mathbf{y}$ to be normally
distributed around the ground truth values $\mathbf{y}_{\mathrm{gt}}$ with
very low variance $\sigma^{2}$,
$\displaystyle\mathcal{L}(\mathbf{y},\mathbf{z})=\,$
$\displaystyle\tfrac{1}{2}\cdot\left(\tfrac{1}{\sigma^{2}}\cdot(\mathbf{y}-\mathbf{y}_{\mathrm{gt}})^{2}+\mathbf{z}^{2}\right)-$
$\displaystyle\log\left|\det
J_{\mathbf{x}\;\mapsto\,[\mathbf{y},\,\mathbf{z}]}\right|,$ (4)
and we compare both approaches in our experiments.
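A minimal sketch of how Eq. 4 can be evaluated for a batch, assuming the network outputs $\mathbf{y}$, $\mathbf{z}$ and the per-sample log-determinants are already available (the batch-mean reduction and the default $\sigma$ are illustrative choices, not values from the paper):

```python
import numpy as np

def inn_ml_loss(y, y_gt, z, log_det_J, sigma=0.01):
    """Eq. 4: negative log-likelihood of x under a change of variables,
    with y assumed Gaussian around y_gt with small variance sigma^2.
    y, y_gt, z: (batch, dim); log_det_J: (batch,) values of
    log |det J_{x -> [y, z]}|."""
    gauss = 0.5 * (((y - y_gt) ** 2).sum(-1) / sigma ** 2 + (z ** 2).sum(-1))
    return (gauss - log_det_J).mean()
```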
Conditional INNs. Instead of training INNs to predict $\mathbf{y}$ from
$\mathbf{x}$ while transforming the lost information into a latent
distribution, we can train them to transform $\mathbf{x}$ directly to a latent
representation $\mathbf{z}$ given the observation $\mathbf{y}$. This is done
by providing $\mathbf{y}$ as an additional input to each affine coupling
layer, both during the forward and the inverse network passes. cINNs work with
larger latent spaces than INNs and are also suited for maximum likelihood
training:
$\displaystyle\mathcal{L}(\mathbf{z})=\,$
$\displaystyle\tfrac{1}{2}\cdot\mathbf{z}^{2}-\log\left|\det
J_{\mathbf{x}\;\mapsto\,\mathbf{z}}\right|,$ (5)
(Diagram: $\mathbf{x}\leftrightarrow$ Conditional INN $\leftrightarrow\mathbf{z}$, conditioned on $\mathbf{y}$; maximum likelihood loss on $\mathbf{z}$, optional MMD to match the prior on $\mathbf{x}$.)
Autoregressive flows. Masked autoregressive flows (MAF) decompose multi-
variate distributions into products of 1-dimensional Gaussian conditionals
using the chain rule of probability (Papamakarios et al., 2017). Inverse
autoregressive flows (IAF) similarly decompose the latent distribution (Kingma
et al., 2016). To obtain asymptotically invertible architectures, we add
standard feed-forward networks for the opposite direction in the manner of
Parallel WaveNets (Oord et al., 2018) and train with Eq. 4 and a cycle loss:
$\displaystyle\mathcal{L}(\mathbf{y},\mathbf{z},\hat{\mathbf{x}})=\,$
$\displaystyle\tfrac{1}{2}\cdot\left(\tfrac{1}{\sigma^{2}}\cdot(\mathbf{y}-\mathbf{y}_{\mathrm{gt}})^{2}+\mathbf{z}^{2}\right)-$
$\displaystyle\log\left|\det
J_{\mathbf{x}\;\mapsto\,[\mathbf{y},\,\mathbf{z}]}\right|+\alpha\cdot(\mathbf{x}-\hat{\mathbf{x}})^{2}$
(6)
(Diagram: $\mathbf{x}\rightarrow$ Autoregressive flow $\rightarrow[\mathbf{y},\mathbf{z}]$, with a separate Decoder $\rightarrow\hat{\mathbf{x}}$; maximum likelihood loss plus cycle loss.)
Invertible Residual Networks. A more flexible approach is the i-ResNet
(Behrmann et al., 2018), which replaces the heavy architectural constraints
imposed by coupling layers and autoregressive models with a mild Lipschitz-
constraint on its residual branches. With this constraint, the model’s inverse
and its Jacobian determinant can be estimated iteratively with a runtime vs.
accuracy trade-off. Finding that the gradients of the estimated Jacobian
determinants are too noisy (accurate determinants could be computed
numerically for toy problems, but this would not scale and is thus of limited
interest), we train with the loss from Eq. 3 instead.
(Diagram: $\mathbf{x}\leftrightarrow$ Invertible Residual Net $\leftrightarrow[\mathbf{y},\mathbf{z}]$; feed-forward with Lipschitz correction, inverted layer by layer via fixed-point iteration; L2 to match training data on $\mathbf{y}$, MMD to match $\mathcal{N}(\mathbf{0},\mathbf{I})$ on $\mathbf{z}$.)
Invertible Autoencoders. This model proposed by (Teng et al., 2018) uses
invertible nonlinearities and orthogonal weight matrices to achieve efficient
invertibility. The weight matrices start with random initialization, but
converge to orthogonal matrices during training via a cycle loss:
$\displaystyle\mathcal{L}(\mathbf{y},\mathbf{z},\hat{\mathbf{x}})$
$\displaystyle=\mathrm{L2}(\mathbf{y})+\alpha\cdot\mathrm{MMD}(\mathbf{z})+\beta\cdot(\mathbf{x}-\hat{\mathbf{x}})^{2}$
(7)
(Diagram: $\mathbf{x}\leftrightarrow$ Invertible Autoencoder $\leftrightarrow[\mathbf{y},\mathbf{z}]$, using weights $\mathbf{W}$ and LeakyReLU forward, $\mathbf{W}^{\top}$ and inverse LeakyReLU backward; cycle loss on $\hat{\mathbf{x}}$, L2 on $\mathbf{y}$, MMD on $\mathbf{z}$, optional MMD to match the prior.)
Standard Autoencoders. In the limit of zero reconstruction loss, the decoder
of a standard autoencoder becomes the exact inverse of its encoder. While this
approach uses two networks instead of one, it is not subject to any
architectural constraints. In contrast to standard practice, our autoencoders
do not have a bottleneck but use encodings with the same dimension as the
input (exactly like INNs). The loss function is the same as Eq. 7.
(Diagram: Encoder $\mathbf{x}\rightarrow[\mathbf{y},\mathbf{z}]$, Decoder $[\mathbf{y},\mathbf{z}]\rightarrow\hat{\mathbf{x}}$; cycle loss, L2 on $\mathbf{y}$, MMD on $\mathbf{z}$.)
Conditional Variational Autoencoders. Variational autoencoders (Kingma &
Welling, 2013) take a Bayesian approach and thus should be well suited for
predicting distributions. Since we are interested in conditional distributions
and it simplifies training in this case, we focus on the conditional VAE
proposed by (Sohn et al., 2015), with loss
$\displaystyle\mathcal{L}(\boldsymbol{\mu}_{z},\boldsymbol{\sigma}_{z},\hat{\mathbf{x}})$
$\displaystyle=(\mathbf{x}-\hat{\mathbf{x}})^{2}-\tfrac{1}{2}\alpha\cdot(1+\log\boldsymbol{\sigma}_{z}-\boldsymbol{\mu}_{z}^{2}-\boldsymbol{\sigma}_{z}).$
(8)
(Diagram: Encoder $\mathbf{x}\rightarrow[\boldsymbol{\mu}_{z},\boldsymbol{\sigma}_{z}]$, Decoder $\rightarrow\hat{\mathbf{x}}$, both conditioned on $\mathbf{y}$; cycle loss plus ELBO loss.)
Mixture Density Networks (MDNs). MDNs (Bishop, 1994; Kruse, 2020) are not
invertible at all, but model the inverse problem directly. To this end, the
network takes $\mathbf{y}$ as an input and predicts the parameters
$\boldsymbol{\mu}_{x},\boldsymbol{\Sigma}_{x}^{-1}$ of a Gaussian mixture
model that characterizes $p(\mathbf{x}\mid\mathbf{y})$. It is trained by
maximizing the likelihood of the training data under the predicted mixture
models, leading to a loss of the form
$\displaystyle\mathcal{L}(\boldsymbol{\mu}_{x},\boldsymbol{\Sigma}_{x}^{-1})$
$\displaystyle=\tfrac{1}{2}\cdot(\mathbf{x}-\boldsymbol{\mu}_{x})^{\top}\cdot\boldsymbol{\Sigma}_{x}^{-1}\cdot(\mathbf{x}-\boldsymbol{\mu}_{x})-\log\lvert\boldsymbol{\Sigma}_{x}^{-1}\rvert^{\tfrac{1}{2}}.$
(9)
We include it in this work as a non-invertible baseline.
(Diagram: Mixture Density Network $\mathbf{y}\rightarrow[\boldsymbol{\mu}_{x},\boldsymbol{\Sigma}_{x}^{-1}]$; maximum likelihood loss.)
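The per-component negative log-likelihood of Eq. 9 can be sketched as follows. Parameterizing the predicted precision as $\boldsymbol{\Sigma}_{x}^{-1}=LL^{\top}$ with $L$ lower-triangular is a common way to keep it positive definite; this parameterization is our illustrative assumption, not necessarily the implementation used here:

```python
import numpy as np

def gaussian_nll(x, mu, L):
    """Eq. 9 for one Gaussian component, up to an additive constant.
    L: lower-triangular factor of the precision, Sigma^{-1} = L @ L.T,
    so that log|Sigma^{-1}|^{1/2} = sum(log|diag(L)|)."""
    d = x - mu
    maha = 0.5 * d @ (L @ L.T) @ d
    log_det_half = np.log(np.abs(np.diag(L))).sum()
    return maha - log_det_half
```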
## 3 Benchmark Problems
We propose two low-dimensional inverse problems as test cases, as they allow
quick training, intuitive visualizations and ground truth estimates via
rejection sampling.
Table 1: Quantitative results for the inverse kinematics benchmark, see Section 4. The first three columns are averaged over $1000$ different observations $\mathbf{y}^{*}$. dim$(\mathbf{z})$ denotes the dimensionality of the latent space. ML Loss marks models that were trained with a maximum likelihood loss, while $\mathbf{y}$-Supervision marks models that were trained with an explicit supervised loss on the forward process $\mathbf{x}\rightarrow\mathbf{y}$.

Method | $Err_{\mathrm{post}}$ (10) | $Err_{\mathrm{resim}}$ (11) | Inference in ms | dim($\mathbf{z}$) | ML Loss | $\mathbf{y}$-Supervision
---|---|---|---|---|---|---
INN | 0.025 | 0.015 | 10 | ${\bullet}{\bullet}$ | $\checkmark$ | $\checkmark$
INN (L2 + MMD) | 0.017 | 0.086 | 9 | ${\bullet}{\bullet}$ | | $\checkmark$
cINN | 0.015 | 0.008 | 11 | ${\bullet}{\bullet}{\bullet}{\bullet}$ | $\checkmark$ |
IAF + Decoder | 0.419 | 0.222 | 0 | ${\bullet}{\bullet}{\bullet}{\bullet}$ | $\checkmark$ | $\checkmark$
MAF + Decoder | 0.074 | 0.034 | 0 | ${\bullet}{\bullet}{\bullet}{\bullet}$ | $\checkmark$ | $\checkmark$
iResNet | 0.713 | 0.311 | 763 | ${\bullet}{\bullet}$ | | $\checkmark$
InvAuto | 0.062 | 0.022 | 1 | ${\bullet}{\bullet}$ | | $\checkmark$
Autoencoder | 0.037 | 0.016 | 0 | ${\bullet}{\bullet}$ | | $\checkmark$
cVAE | 0.042 | 0.019 | 0 | ${\bullet}{\bullet}$ | |
MDN | 0.007 | 0.012 | 601 | ${\bullet}{\bullet}{\bullet}{\bullet}$ | $\checkmark$ |
Figure 2: Qualitative results for the inverse kinematics benchmark. The faint
lines are arm configurations sampled from each model’s predicted posterior
$\hat{p}(\mathbf{x}\,|\,\mathbf{y}^{*})$, the target point
$\mathbf{y}^{*}=[1.5,0]$ is indicated by a gray cross. We emphasize the most
likely arm (determined by mean shift) as a bold line. The contour around the
target marks the area containing $97\%$ of the sampled arms’ end points.
### 3.1 Inverse Kinematics
First is the geometrical example used by (Ardizzone et al., 2019), which asks
about configurations of a multi-jointed 2D arm that end in a given position,
see Fig. 1 left. The forward process takes a starting height $x_{1}$ and the
three joint angles $x_{2},x_{3},x_{4}$, and returns the coordinate of the
arm’s end point $\mathbf{y}=[y_{1},y_{2}]$ as
$\displaystyle y_{1}$
$\displaystyle=l_{1}\sin(x_{2})+l_{2}\sin(x_{2}+x_{3})+l_{3}\sin(x_{2}+x_{3}+x_{4})+x_{1}$
$\displaystyle y_{2}$
$\displaystyle=l_{1}\cos(x_{2})+l_{2}\cos(x_{2}+x_{3})+l_{3}\cos(x_{2}+x_{3}+x_{4})$
with segment lengths $l_{1}=\tfrac{1}{2},\;l_{2}=\tfrac{1}{2}$ and $l_{3}=1$.
Parameters $\mathbf{x}$ follow a Gaussian prior
$\mathbf{x}\sim\mathcal{N}(\mathbf{0},\;\boldsymbol{\sigma}^{2}\cdot\mathbf{I})$
with
$\boldsymbol{\sigma}^{2}=[\tfrac{1}{16},\tfrac{1}{4},\tfrac{1}{4},\tfrac{1}{4}]$.
The inverse problem is to find the distribution
$p(\mathbf{x}\,|\,\mathbf{y}^{*})$ of all arm configurations $\mathbf{x}$ that
end at some observed 2D position $\mathbf{y}^{*}$.
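The forward process above translates directly into code; a minimal sketch:

```python
import numpy as np

def forward_kinematics(x, l=(0.5, 0.5, 1.0)):
    """Forward process of the inverse kinematics benchmark:
    x = [height x1, joint angles x2, x3, x4] -> end point [y1, y2],
    with the segment lengths l1 = l2 = 1/2 and l3 = 1 from the text."""
    x1, x2, x3, x4 = x
    l1, l2, l3 = l
    y1 = l1 * np.sin(x2) + l2 * np.sin(x2 + x3) + l3 * np.sin(x2 + x3 + x4) + x1
    y2 = l1 * np.cos(x2) + l2 * np.cos(x2 + x3) + l3 * np.cos(x2 + x3 + x4)
    return np.array([y1, y2])
```

For all angles zero the arm points straight up, so `forward_kinematics(np.zeros(4))` gives $[0,2]$.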
### 3.2 Inverse Ballistics
A similar, more physically motivated problem in the 2D plane arises when an
object is thrown from a starting position $(x_{1},x_{2})$ with angle $x_{3}$
and initial velocity $x_{4}$. This setup is illustrated in Fig. 1, right. For
given gravity $g$, object mass $m$ and air resistance $k$, the object’s
trajectory $\mathbf{T}(t)$ can be computed as
$\displaystyle T_{1}(t)$
$\displaystyle=x_{1}-\frac{v_{1}m}{k}\cdot\left(e^{-\tfrac{kt}{m}}-1\right)$
$\displaystyle T_{2}(t)$
$\displaystyle=x_{2}-\frac{m}{k^{2}}\cdot\left(\big{(}\,gm+v_{2}k\,\big{)}\cdot\left(e^{-\tfrac{kt}{m}}-1\right)+gtk\right)$
with $v_{1}=x_{4}\cdot\cos{x_{3}}$ and $v_{2}=x_{4}\cdot\sin{x_{3}}$. We
define the location of impact as $y=T_{1}(t^{*})$, where $t^{*}$ is the
solution of $T_{2}(t^{*})=0$, i.e. the trajectory’s intersection with the
$x_{1}$-axis of the coordinate system (if there are two such points we take
the rightmost one, and we only consider trajectories that do cross the
$x_{1}$-axis). Note that here, $y$ is one-dimensional.
We choose the parameters’ priors as
$x_{1}\sim\mathcal{N}(0,\;\tfrac{1}{4}),\;x_{2}\sim\mathcal{N}(\tfrac{3}{2},\;\tfrac{1}{4}),\;x_{3}\sim\mathcal{U}(9^{\circ},\;72^{\circ})$
and $x_{4}\sim\textrm{Poisson}(15)$.
The inverse problem here is to find the distribution $p(\mathbf{x}\,|\,y^{*})$
of all throwing parameters $\mathbf{x}$ that share the same observed impact
location $y^{*}$.
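A sketch of the forward process $\mathbf{x}\rightarrow y$. The constants $g$, $m$, $k$ and the search horizon are hypothetical values for illustration (the paper does not state them), and the rightmost root of $T_{2}$ is found with a grid scan plus bisection:

```python
import numpy as np

def impact_location(x, g=9.81, m=0.2, k=0.25, t_max=20.0):
    """Forward process of the inverse ballistics benchmark: throwing
    parameters x = [x1, x2, angle, velocity] -> impact coordinate y.
    The constants g, m, k and the horizon t_max are illustrative guesses."""
    x1, x2, x3, x4 = x
    v1, v2 = x4 * np.cos(x3), x4 * np.sin(x3)
    T1 = lambda t: x1 - v1 * m / k * (np.exp(-k * t / m) - 1)
    T2 = lambda t: x2 - m / k**2 * ((g*m + v2*k) * (np.exp(-k*t/m) - 1) + g*t*k)
    # locate the rightmost sign change of T2 on a grid, then refine by bisection
    ts = np.linspace(0.0, t_max, 2001)
    vals = T2(ts)
    i = np.where((vals[:-1] > 0) & (vals[1:] <= 0))[0][-1]
    lo, hi = ts[i], ts[i + 1]
    for _ in range(60):  # bisection for t* with T2(t*) = 0
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if T2(mid) > 0 else (lo, mid)
    return T1(0.5 * (lo + hi))
```

The inverse problem then amounts to inverting this map in distribution.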
## 4 Experiments
To compare all approaches in a fair setting, we use the same training data,
train for the same number of batches and epochs and choose layer sizes such
that all models have roughly the same number of trainable parameters
(${\sim}3\,\textrm{M}$).
We quantify the correctness of the generated posteriors in two ways, using
$1000$ unseen conditions $\mathbf{y}^{*}$ obtained via prior and forward
process. Firstly, we use MMD (Eq. 2, (Gretton et al., 2012)) to compute the
posterior mismatch between the distribution
$\hat{p}(\mathbf{x}\,|\,\mathbf{y}^{*})$ generated by a model and a ground
truth estimate $p_{\mathrm{gt}}(\mathbf{x}\,|\,\mathbf{y}^{*})$ obtained via
rejection sampling:
$\displaystyle Err_{\mathrm{post}}$
$\displaystyle=\mathrm{MMD}\bigl{(}\hat{p}(\mathbf{x}\,|\,\mathbf{y}^{*}),\,p_{\mathrm{gt}}(\mathbf{x}\,|\,\mathbf{y}^{*})\bigr{)}$
(10)
Secondly, we apply the true forward process $f$ to the generated samples
$\mathbf{x}$ and measure the re-simulation error as the mean squared distance
to the target $\mathbf{y}^{*}$:
$\displaystyle Err_{\mathrm{resim}}$
$\displaystyle=\mathbb{E}_{\,\mathbf{x}\sim\hat{p}(\mathbf{x}\,|\,\mathbf{y}^{*})}\left\lVert
f(\mathbf{x})-\mathbf{y}^{*}\right\rVert_{2}^{2}$ (11)
Finally, we report the inference time for each implementation using one _GTX
1080 Ti_.
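The rejection-sampling scheme used for the ground-truth posteriors can be sketched generically. The acceptance tolerance `eps` is an assumption on our part, as the exact acceptance rule is not stated above:

```python
import numpy as np

def rejection_sample_posterior(f, sample_prior, y_star, eps, n=100_000, rng=None):
    """Draw x from the prior, push it through the forward process f,
    and keep only samples landing within eps of the observation y*."""
    rng = rng or np.random.default_rng()
    x = sample_prior(n, rng)                  # (n, d) prior samples
    y = np.atleast_2d(f(x))                   # (n, k) forward outputs
    dist = np.linalg.norm(y - y_star, axis=-1)
    return x[dist < eps]
```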
### 4.1 Inverse Kinematics
Quantitative results for the kinematics benchmark are shown in Table 1 (extra
detail in Fig. 4), while qualitative results for one challenging end point
$\mathbf{y}^{*}$ are plotted in Fig. 2.
Architectures based on coupling layers (INN, cINN) achieve the best scores on
average, followed by the simple autoencoder. The invertible ResNet exhibits
some mode collapse, as seen in Fig. 2, bottom left. Note that we were unable
to train our iResNet-implementation with the estimated Jacobian determinants,
which were too inaccurate, and resorted to the loss from Eq. 3. Similarly we
would expect the autoregressive models, in particular IAF, to converge much
better with more careful tuning.
MDN on the other hand performs very well for both error measures. Note however
that a full precision matrix $\boldsymbol{\Sigma}_{x}^{-1}$ is needed for
this, as a purely diagonal
$\boldsymbol{\Sigma}_{x}=\mathbf{I}\boldsymbol{\sigma}_{x}$ fails to model the
potentially strong covariance among variables $x_{i}$. Since
$\boldsymbol{\Sigma}_{x}^{-1}$ grows quadratically with the size of
$\mathbf{x}$ and a matrix inverse is needed during inference, the method is
very slow and does not scale to higher dimensions.
Table 2: Quantitative results for the inverse ballistics benchmark. Rows and columns have the same meaning as in Table 1.

Method | $Err_{\mathrm{post}}$ (10) | $Err_{\mathrm{resim}}$ (11) | Inference in ms | dim($\mathbf{z}$) | ML Loss | $y$-Supervision
---|---|---|---|---|---|---
INN | 0.047 | 0.019 | 21 | ${\bullet}{\bullet}{\bullet}$ | $\checkmark$ | $\checkmark$
INN (L2 + MMD) | 0.060 | 3.668 | 21 | ${\bullet}{\bullet}{\bullet}$ | | $\checkmark$
cINN | 0.047 | 0.437 | 22 | ${\bullet}{\bullet}{\bullet}{\bullet}$ | $\checkmark$ |
IAF + Decoder | 0.323 | 3.457 | 0 | ${\bullet}{\bullet}{\bullet}{\bullet}$ | $\checkmark$ | $\checkmark$
MAF + Decoder | 0.213 | 1.010 | 0 | ${\bullet}{\bullet}{\bullet}{\bullet}$ | $\checkmark$ | $\checkmark$
iResNet | 0.084 | 0.091 | 307 | ${\bullet}{\bullet}{\bullet}$ | | $\checkmark$
InvAuto | 0.156 | 0.315 | 1 | ${\bullet}{\bullet}{\bullet}$ | | $\checkmark$
Autoencoder | 0.049 | 0.052 | 1 | ${\bullet}{\bullet}{\bullet}$ | | $\checkmark$
cVAE | 4.359 | 0.812 | 0 | ${\bullet}{\bullet}{\bullet}$ | |
MDN | 0.048 | 0.184 | 175 | ${\bullet}{\bullet}{\bullet}{\bullet}$ | $\checkmark$ |
Figure 3: Qualitative results for the inverse ballistics benchmark. Faint
lines show the trajectories of sampled throwing parameters and as above, bold
is the most likely one. A vertical line marks the target coordinate $y^{*}=5$,
the distribution of actual impacts is shown in green.
### 4.2 Inverse Ballistics
Quantitative results for the ballistics benchmark are shown in Table 2 (extra
detail in Fig. 5), while qualitative results for one representative impact
location $y^{*}$ are plotted in Fig. 3.
Again we see INN, cINN and the simple autoencoder perform best. Notably, we
could not get the conditional VAE to predict proper distributions on this
task; instead it collapses to some average trajectory with very high posterior
mismatch. The invertible ResNet does better here, perhaps due to the more
unimodal posteriors, but IAF and MAF again fail to capture the distributions
properly.
Due to the presence of extreme outliers for the error measures in this task,
the averages in Table 2 are computed with clamped values and thus somewhat
distorted. Fig. 5 gives a better impression of the distribution of errors.
There the INN trained with Eq. 4 appears the most robust model (smallest
maximal errors), followed by the autoencoder. cINN and iResNet come close in
performance if outliers are ignored.
## 5 Discussion and Outlook
In both our benchmarks, models based on RealNVP (Dinh et al., 2016) and the
standard autoencoder take the lead, while other invertible architectures seem
to struggle in various ways. Success in our experiments was neither tied to
maximum likelihood training, nor to the use of a supervised loss on the
forward process.
We are aware that training of some models can probably be improved, and
welcome input from experts to do so. In the future, the comparison should also
include ODE-based methods like Chen et al. (2018); Grathwohl et al. (2018),
variants of Parallel WaveNet (Oord et al., 2018) and classical approaches to
Bayesian estimation such as MCMC. Ideally, this paper will encourage the
community to join our evaluation efforts and possibly set up an open challenge
with additional benchmarks and official leader boards.
Code for the benchmarks introduced here can be found at
https://github.com/VLL-HD/inn_toy_data.
## Acknowledgements
J. Kruse, C. Rother and U. Köthe received financial support from the European
Research Council (ERC) under the European Union's Horizon 2020 research and
innovation program (grant agreement No 647769). J. Kruse was additionally
supported by Informatics for Life funded by the Klaus Tschira Foundation. L.
Ardizzone received funding by the Federal Ministry of Education and Research
of Germany project ‘High Performance Deep Learning Framework’ (No 01IH17002).
## References
* Ardizzone et al. (2019) Ardizzone, L., Kruse, J., Rother, C., and Köthe, U. Analyzing inverse problems with invertible neural networks. In _Intl. Conf. on Learning Representations_ , 2019.
* Behrmann et al. (2018) Behrmann, J., Duvenaud, D., and Jacobsen, J.-H. Invertible residual networks. _arXiv:1811.00995_ , 2018.
* Bishop (1994) Bishop, C. M. Mixture density networks. Technical report, Citeseer, 1994.
* Chen et al. (2018) Chen, T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K. Neural ordinary differential equations. In _Advances in Neural Information Processing Systems_ , pp. 6571–6583, 2018.
* Dinh et al. (2016) Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. _arXiv:1605.08803_ , 2016.
* Grathwohl et al. (2018) Grathwohl, W., Chen, R. T., Betterncourt, J., Sutskever, I., and Duvenaud, D. Ffjord: Free-form continuous dynamics for scalable reversible generative models. _arXiv:1810.01367_ , 2018.
* Gretton et al. (2012) Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., and Smola, A. A kernel two-sample test. _Journal of Machine Learning Research_ , 13(Mar):723–773, 2012.
* Kingma & Welling (2013) Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. _arXiv:1312.6114_ , 2013.
* Kingma et al. (2016) Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. Improved variational inference with inverse autoregressive flow. In _Advances in Neural Information Processing Systems_ , pp. 4743–4751, 2016.
* Kruse (2020) Kruse, J. Technical report: Training mixture density networks with full covariance matrices. _arXiv:2003.05739_ , 2020.
* Oord et al. (2018) Oord, A., Li, Y., Babuschkin, I., Simonyan, K., Vinyals, O., Kavukcuoglu, K., Driessche, G., Lockhart, E., Cobo, L., Stimberg, F., et al. Parallel WaveNet: fast high-fidelity speech synthesis. In _International Conference on Machine Learning_ , pp. 3915–3923, 2018.
* Papamakarios et al. (2017) Papamakarios, G., Murray, I., and Pavlakou, T. Masked autoregressive flow for density estimation. In _Advances in Neural Information Processing Systems_ , pp. 2335–2344, 2017.
* Sohn et al. (2015) Sohn, K., Lee, H., and Yan, X. Learning structured output representation using deep conditional generative models. In _Advances in Neural Information Processing Systems 28_ , pp. 3483–3491, 2015.
* Teng et al. (2018) Teng, Y., Choromanska, A., and Bojarski, M. Invertible autoencoder for domain adaptation. _arXiv preprint arXiv:1802.06869_ , 2018.
## Appendix
Figure 4: Boxplot of inverse kinematics results from Table 1. The posterior
mismatch Eq. 10 is shown in blue and the re-simulation error Eq. 11 in red.
Boxes extend from the lower to upper quartile values of the data and a white
line marks the respective median. The dotted lines show the full range of
results, including outliers. We use log-scale to accommodate extreme values.
Figure 5: Boxplot of inverse ballistics results from Table 2. The plot follows
the same layout as Fig. 4.
# A broadband view on microquasar MAXI J$1820+070$ during the 2018 outburst
J. Rodi INAF - Istituto di Astrofisica e Planetologia Spaziali, via Fosso del
Cavaliere 100, 00133 Roma, Italy A. Tramacere Department of Astronomy,
University of Geneva, Ch. d’Ecogia 16, 1290, Versoix, Switzerland F. Onori
INAF - Istituto di Astrofisica e Planetologia Spaziali, via Fosso del
Cavaliere 100, 00133 Roma, Italy INAF - Osservatorio Astronomico d’Abruzzo,
via M. Maggini snc, I-64100 Teramo, Italy G. Bruni INAF - Istituto di
Astrofisica e Planetologia Spaziali, via Fosso del Cavaliere 100, 00133 Roma,
Italy C. Sánchez-Fernández European Space Astronomy Centre (ESA/ESAC),
Science Operations Department, 28691 Villanueva dela Cañada, Madrid, Spain M.
Fiocchi INAF - Istituto di Astrofisica e Planetologia Spaziali, via Fosso del
Cavaliere 100, 00133 Roma, Italy L. Natalucci INAF - Istituto di Astrofisica
e Planetologia Spaziali, via Fosso del Cavaliere 100, 00133 Roma, Italy P.
Ubertini INAF - Istituto di Astrofisica e Planetologia Spaziali, via Fosso
del Cavaliere 100, 00133 Roma, Italy
(Received -; Revised -; Accepted -)
###### Abstract
The microquasar MAXI J$1820+070$ went into outburst from mid-March until mid-
July 2018, with several faint rebrightenings afterwards. With a peak flux of
approximately 4 Crab in the $20-50$ keV energy range, the source was monitored
across the electromagnetic spectrum, with detections from radio to hard X-ray
frequencies. Using these multi-wavelength observations, we analyzed quasi-
simultaneous observations from 12 April, near the peak of the outburst ($\sim
23$ March). Spectral analysis of the hard X-rays found $kT_{e}\sim 30$ keV
and $\tau\sim 2$ with a CompTT model, indicative of an accreting black hole
binary in the hard state. The flat/inverted radio spectrum and the accretion
disk winds seen at optical wavelengths are also consistent with the hard
state. Then we constructed a spectral energy distribution spanning $\sim 12$
orders of magnitude using modelling in JetSeT. The model is composed of an
irradiated disk with a Compton hump and a leptonic jet with an acceleration
region and a synchrotron-dominated cooling region. JetSeT finds the spectrum
is dominated by jet emission up to approximately $10^{14}$ Hz after which disk
and coronal emission dominate. The acceleration region has a magnetic field of
$B\sim 1.6\times 10^{4}$ G, a cross section of $R\sim 2.8\times 10^{9}$ cm,
and a flat radio spectral shape naturally obtained from the synchroton cooling
of the accelerated electrons. The jet luminosity of $>8\times 10^{37}$ erg/s
($>0.15L_{Edd}$) compared to an accretion luminosity of $\sim 6\times 10^{37}$
erg/s, assuming a distance of 3 kpc. Because these two values are comparable,
it is possible the jet is powered predominately via accretion with only a
small contribution needed from the Blanford-Znajek mechanism from the
reportedly slowly spinning black hole.
Black Holes: individual (MAXI J$1820+070$) — X-rays: binaries — radiation
mechanisms: non-thermal
Journal: ApJ
## 1 Introduction
The term "microquasar" was first applied to the persistent black hole
candidate (BHC) 1E $1740.7-2942$ after the detection of radio jets from the
known hard X-ray source (Mirabel et al., 1992) that were similar to those of
radio-loud active galactic nuclei (AGNs). Jets were later found to be common features in
accreting BH systems in the hard state. Multi-wavelength studies showed
correlations between radio and X-ray luminosities (Gallo et al., 2003),
indicating a relationship between the emission mechanisms despite a large
physical separation between the two. Additionally, this correlation also holds
for supermassive BHs in AGN when accounting for mass (Merloni et al., 2003),
thus linking the mechanisms in stellar-mass and supermassive BHs. Therefore,
understanding the jet and X-ray components in microquasars can shed light on
AGN.
The low-mass X-ray binary MAXI J$1820+070$ (=ASASSN-18ey) was first detected
on 6.59 March 2018 (http://www.astronomy.ohio-state.edu/asassn/transients.html)
with the All-Sky Automated Survey for SuperNovae (Shappee et al., 2014) and
was detected $\sim 6$ days later by the MAXI/GSC on 11 March 2018 at 19:48 UTC
(Kawamuro et al., 2018). With a peak flux
of $\sim 4$ Crab in the $20-50$ keV energy band (Roques & Jourdain, 2019) and
a long decay, the source was a good candidate for numerous observing campaigns
across the electromagnetic (EM) spectrum (e.g. Muñoz-Darias et al. 2019;
Tucker et al. 2018; Stiele & Kong 2020; Bright et al. 2020) to explore various
aspects of the source. Combining observations from various campaigns enables
studying the various emission processes together.
Therefore we compiled quasi-simultaneous observations from public archives,
Astronomer’s Telegrams, and Gamma-ray Coordination Network Circulars to
construct the widest possible frequency coverage. We were able to find
detections covering nearly 12 orders of magnitude, from meter wavelengths to
hard X-rays, on 12 April 2018 (MJD 58220). With this spectral energy
distribution (SED), we studied the spectral components independently before
investigating them jointly by constructing a model consisting of a leptonic
jet, an irradiated disk, and a corona, using the JetSeT
software (https://jetset.readthedocs.io/en/latest/).
Figure 1: (Top) The light curves for Swift/BAT $15-50$ keV (black diamonds),
$0.3-10$ keV Swift/XRT (green squares), and 4.7 GHz RATAN (blue triangles) for
the initial phase of the outburst. The time span of the observations analyzed
in this work is denoted in red.
## 2 Observations
Figure 1 shows the initial period of the MAXI J$1820+070$ outburst at several
wavelengths across the spectrum, using data from Swift/BAT (black diamonds),
Swift/XRT (green squares) (Stiele & Kong, 2020), and 4.7 GHz RATAN (blue
triangles) (Trushkin et al., 2018). The XRT and RATAN data have been
normalized to the scale of the BAT data. The period of the observations used
in this work is bracketed in red.
In the following, we give information about the different simultaneous
observations collected from archives, covering different bands from radio to
gamma-ray on April 12, 2018. Further details can be found in Table 1.
### 2.1 JVLA
We retrieved calibrated Karl G. Jansky Very Large Array (VLA) data for
experiment VLA/18A-470 from the National Radio Astronomy Observatory (NRAO)
online archive. The JVLA antennas, in A configuration, were split into 3
subarrays in order to obtain simultaneous observations at 6 different central
frequencies (4.7 GHz, 7.5 GHz, 8.5 GHz, 11 GHz, 20.7 GHz, 25.5 GHz). Data were
imaged using CASA (Common Astronomy Software Applications package) version
5.6.2 (https://casa.nrao.edu) following standard procedures.
### 2.2 ALMA
Atacama Large Millimeter/submillimeter Array (ALMA) data for project
2017.1.01103.T were retrieved from the ESO archive and pipelined at the
Italian node of the European ALMA Regional Centre (INAF-Istituto di
Radioastronomia, Bologna). Imaging was performed with CASA version 5.1.1,
separately for each one of the 4 spectral windows (spw) present in the data
(Band 7, spw 5, 7, 9, 11) corresponding to the following central frequencies:
336 GHz, 338 GHz, 348 GHz, 350 GHz (Bonato et al., 2018).
### 2.3 VLT/X-shooter
A number of observations of MAXI J$1820+070$ were performed with the X-shooter
spectrograph (Vernet et al., 2011) in the framework of the ESO program
0101.D-0356(A). We retrieved the processed spectra obtained during the 2018
outburst on 12 April from the European Southern Observatory (ESO) archive
science portal. These data have been reduced using the ESO X-shooter pipeline
V2.7.0 and cover the 3000-25000 Å wavelength range. The observations
were conducted in nodding configuration with the slit oriented at the
parallactic angle, using slit widths of 1.3$\arcsec\times$11$\arcsec$,
1.2$\arcsec\times$11$\arcsec$ and 1.2$\arcsec\times$11$\arcsec$ for the UVB,
VIS and NIR arms, respectively. This configuration yields a spectral
resolution R=$\lambda$/$\Delta\lambda$ of 4100, 6500 and 4300 for the UVB, VIS
and NIR arms, respectively. The observing conditions were good, with a seeing
of 0.47$\arcsec$ and an average airmass of the source during the acquisition
of 1.3. The total exposure times are 1640 s, 1300 s and 1520 s for the UVB,
VIS, and NIR arms, respectively. The reduced spectra have been corrected
for foreground extinction using the Cardelli extinction law (Cardelli et al.,
1989) with R(V)=3.1 and $A_{V}$=0.627 mag (Schlafly & Finkbeiner, 2011, via
the NASA/IPAC Extragalactic Database, NED).
In order to estimate the slit loss effect in the X-shooter spectra, we first
applied standard aperture photometry on the $i^{\prime}$ acquisition image
using the IRAF task phot. The zero point was calibrated using the stars in the
Panoramic Survey Telescope and Rapid Response System (Pan-STARRS1 Flewelling
et al., 2016) catalog.
From the aperture photometry we obtain an $i^{\prime}$-band apparent magnitude
of $m_{AB}$ = (12.20 $\pm$ 0.11) mag, corrected for foreground extinction. The
derived flux at the filter central wavelength is $\lambda F_{i}$ =
(2.01$\pm$0.2)$\times 10^{-10}$ erg s$^{-1}$ cm$^{-2}$, which is in agreement
with the average flux measured from the spectrum in the 7300-7600 Å wavelength
range: $\lambda F_{i}$ = (1.8$\pm$0.7)$\times 10^{-10}$ erg s$^{-1}$ cm$^{-2}$.
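As a quick cross-check of the quoted flux, $\lambda F_{\lambda}=\nu f_{\nu}$ follows from the AB magnitude via $f_{\nu}=10^{-(m_{AB}+48.60)/2.5}$; the $i^{\prime}$ pivot wavelength of $\approx 7520$ Å used below is an assumed value:

```python
def ab_mag_to_nu_fnu(m_ab, lam_angstrom):
    """nu * f_nu (= lambda * F_lambda) in erg s^-1 cm^-2 for an AB
    magnitude at the given wavelength."""
    f_nu = 10.0 ** (-(m_ab + 48.60) / 2.5)   # erg s^-1 cm^-2 Hz^-1
    nu = 2.998e10 / (lam_angstrom * 1e-8)    # Hz (c in cm/s)
    return nu * f_nu
```

With $m_{AB}=12.20$ this gives $\approx 1.9\times 10^{-10}$ erg s$^{-1}$ cm$^{-2}$, consistent with the quoted value within the uncertainties.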
### 2.4 XMM-Newton/EPIC-pn
XMM-Newton ToO observations were carried out from 2018-04-12 07:27:58 to
09:39:28 UTC (obsid 0820880501) using burst mode. The European Photon Imaging
Camera (EPIC)-pn data were analyzed using standard procedures with the Science
Analysis System (SAS) software version xmmsas_20190531_1155-18.0.0
(https://www.cosmos.esa.int/web/xmm-newton/sas-threads).
### 2.5 INTEGRAL
The INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) observed MAXI
J$1820+070$ every 2–3 days between March 16, and May 8, via a series of Target
of Opportunity (ToO) observations. For this work, we selected the data
covering the interval (UTC) 11 April 2018 23:41:01 to 12 April 2018 14:00:21
(INTEGRAL revolution 1941). Here we focus on the analysis of data
provided by the Integral Soft Gamma-Ray Imager (ISGRI; $18-1000$ keV) placed
on the upper layer of the detector plane of the Imager on Board the INTEGRAL
Satellite (IBIS) telescope (Ubertini et al. 2003) and by the Optical
Monitoring Camera (OMC) ($500-600$ nm) instruments. The data were analyzed
using the Offline Science Analysis software (OSA) v11.0 available at the
INTEGRAL Science Data Center (ISDC;
https://www.isdc.unige.ch/integral/analysis). We followed standard analysis
procedures.
Table 1: MAXI J$1820+070$ observations log.

Instrument | Start Time (UTC) | Stop Time (UTC)
---|---|---
VLITE | 07:25:00 12-04-2018 | 13:07:00 12-04-2018
JVLA | 07:15:00 12-04-2018 | 13:14:50 12-04-2018
ALMA | 08:13:18 12-04-2018 | 09:20:16 12-04-2018
X-shooter | 07:41:08 12-04-2018 | 08:15:48 12-04-2018
EPIC-pn | 07:27:58 12-04-2018 | 09:39:28 12-04-2018
OMC | 23:41:01 11-04-2018 | 14:00:21 12-04-2018
ISGRI | 23:41:01 11-04-2018 | 14:00:21 12-04-2018
Table 2: Collected flux densities for the jet modelling.

Instrument | Frequency | Flux density
---|---|---
| (Hz) | (Jy)
VLITE | 3.39E+08 | 0.033$\pm$0.0053
JVLA | 4.70E+09 | 0.0469$\pm$0.0047
| 7.50E+09 | 0.0488$\pm$0.0029
| 8.50E+09 | 0.0479$\pm$0.0048
| 1.10E+10 | 0.0483$\pm$0.0048
| 2.07E+10 | 0.0525$\pm$0.0054
| 2.55E+10 | 0.0530$\pm$0.0054
ALMA | 3.36E+11 | 0.116$\pm$0.006
| 3.38E+11 | 0.114$\pm$0.006
| 3.48E+11 | 0.115$\pm$0.006
| 3.50E+11 | 0.110$\pm$0.006
X-shooter | 1.37E+14 | 0.0920$\pm$0.0002
| 1.81E+14 | 0.0757$\pm$0.0012
| 2.40E+14 | 0.0652$\pm$0.0003
| 2.86E+14 | 0.0634$\pm$0.0006
| 3.79E+14 | 0.0689$\pm$0.0007
| 4.00E+14 | 0.0709$\pm$0.0002
| 4.42E+14 | 0.0743$\pm$0.0003
| 4.87E+14 | 0.0775$\pm$0.0005
| 5.15E+14 | 0.0794$\pm$0.0002
| 5.32E+14 | 0.0803$\pm$0.0003
| 5.76E+14 | 0.0831$\pm$0.0005
| 6.27E+14 | 0.0868$\pm$0.0003
| 7.00E+14 | 0.0952$\pm$0.0004
OMC | 5.66E+14 | 0.0813$\pm$0.0071
Figure 2: Jet emission between radio and optical bands, as reconstructed from
VLITE, JVLA, ALMA, and X-shooter observations. The blue dashed line is a
broken power law, used to identify the synchrotron peak frequency and flux
density. The red line is a power-law fit of the most expanded region of the
jet, considered as a physically separated component. The optical flux from OMC
is also shown, although not considered for the fit.
## 3 Results and discussion
### 3.1 The compact jet emission
With the collected flux densities between radio and UV bands, we built the jet
SED. In addition to the JVLA and ALMA data mentioned above, we considered Very
Large Array Low-band Ionosphere and Transient Experiment (VLITE; Clarke et al.
2016) data, also collected on 12 April 2018 (Polisensky et al., 2018). We show in
Figure 2 data from JVLA, ALMA, X-shooter, and OMC. For X-shooter, we
considered only the part of the spectrum not affected by absorption/emission
features, and averaged values for each of these intervals to obtain continuum
flux density values. In this way, we calculated 12 photometric measurements
(Table 2) covering the interval from NIR to UV. The overall shape of the
X-shooter spectrum shows a break, resulting in a change of the spectral index,
at about $2\times 10^{14}$ Hz. This is most probably the frequency at which the
broadband SED is no longer dominated by the jet synchrotron emission, while
the accretion disk thermal emission increases (see Sec. 4). The single value
from OMC is in good agreement with the X-shooter photometry.
We fitted the radio to optical data set with a broken-power law to identify
the synchrotron peak frequency and the spectral slopes of the optically thin
and thick regions (Russell et al., 2013). We used the Astropy BrokenPowerLaw1D
function (see Astropy Collaboration et al. 2013; Price-Whelan et al. 2018)
adopting the LevMarLSQFitter routine to perform a Levenberg-Marquardt least-squares
fit. We excluded from the fit the X-shooter points above the
third one (optical and UV ranges), since they show a turn-up of the flux
density likely due to accretion disk emission. Similarly, we did not consider
data points below 20 GHz, since they show a different slope, probably
belonging to a physically distinct (and more expanded) radio jet component. We
obtained a synchrotron peak frequency of 1.6$\pm$0.2$\times 10^{13}$ Hz. The
estimated slope for the optically thick part of the spectrum is
$\alpha_{thick}=0.28\pm 0.02$ and $\alpha_{thin}=-0.61\pm 0.01$ for the
optically thin one (adopting the convention $S\propto\nu^{\alpha}$, where $S$
is the flux density, $\nu$ the frequency, and $\alpha$ the spectral index).
The slope of the lower frequency radio SED (0.3$-$10 GHz), fitted with the
PowerLaw1D Astropy function, is $\alpha=0.11\pm 0.02$. In Sec. 4, a detailed
physical model of this source is presented and provides a more precise
estimate of the jet parameters.
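The break-finding step above can be sketched numerically. The following illustrative snippet (not the Astropy-based fit used in the analysis) recovers the two spectral slopes and the break frequency from a synthetic broken power-law SED by fitting each branch as a straight line in log-log space; the split frequency `nu_guess` and the synthetic data are assumptions for the demonstration.

```python
import numpy as np

def fit_broken_power_law(nu, flux, nu_guess):
    """Fit the optically thick (well below nu_guess) and optically thin
    (well above) branches as straight lines in log-log space, then
    intersect them to locate the break (convention S ~ nu^alpha)."""
    thick = nu < nu_guess / 3.0
    thin = nu > nu_guess * 3.0
    a1, b1 = np.polyfit(np.log10(nu[thick]), np.log10(flux[thick]), 1)
    a2, b2 = np.polyfit(np.log10(nu[thin]), np.log10(flux[thin]), 1)
    log_nu_break = (b2 - b1) / (a1 - a2)  # intersection of the two lines
    return a1, a2, 10.0**log_nu_break

# Synthetic SED built with the slopes and break reported in the text
nu = np.logspace(9.5, 14.2, 40)
nu_b = 1.6e13
flux = np.where(nu < nu_b, (nu / nu_b)**0.28, (nu / nu_b)**-0.61)
alpha_thick, alpha_thin, nu_break = fit_broken_power_law(nu, flux, 1e13)
```

In the actual analysis, the Astropy BrokenPowerLaw1D model fitted with LevMarLSQFitter performs the equivalent estimate with proper error weighting.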
Figure 3: Accretion disk wind absorption features in the VLT/X-shooter optical
spectrum. Normalised flux errors are shown as a cyan shaded area. Troughs
deeper than 3$\times$RMS are highlighted in purple. The RMS value is 0.0027
for the He I $\lambda$5876 region, while 0.0023 for the H$\alpha$ and He I
$\lambda$6678 regions. The grey dashed lines indicate the 1$\pm$RMS intervals.
### 3.2 Accretion disk winds
Muñoz-Darias et al. (2019) discovered accretion disk winds in MAXI J$1820+070$
during both the 2018 hard state rise and decay. Absorption wind signatures
were detected in the blue wings of He I $\lambda$5876 and $\lambda$6678
emission lines, reaching a maximum terminal velocity ($v_{t}$) of 1200 km/s in
one of the epochs. For H$\alpha$, both a blue-wing broadened emission line
profile, implying a wind component of 1800 km/s, and a superimposed absorption
trough with a $v_{t}$=1200 km/s were found. Those authors collected several
epochs from 15 March to 4 November 2018, allowing them to follow the evolution
of the winds from the hard state to the disappearance during soft state, and
back.
The VLT/X-shooter data presented in this work add an epoch to the monitoring of
Muñoz-Darias et al. (2019), falling in an otherwise uncovered one-month window
between 26 March and 23 April. We normalized these spectra by dividing them
by the continuum emission, fitted with a cubic spline (spline3 function) by
using the IRAF task continuum. These spectra are rich in emission lines from
the UVB to the NIR arms. Following the Muñoz-Darias et al. (2019) analysis, we
explored the presence of wind signatures linked to the mentioned emission
lines (He I and H$\alpha$), and found a significant absorption on the left
wing of He I $\lambda$5876. In Fig. 3 (left panel) we show the relevant
portion of the spectrum, with the absorption features highlighted in purple.
We consider as bona-fide absorption troughs those with a dip of at least
three times the continuum RMS. A prominent He I absorption feature is visible
between -700 and -900 km/s, showing the same profile as the corresponding
emission line, with a $v_{t}$ of 880 km/s. Further blue-ward
absorption features are visible, but since they are narrower and not connected
to the previous ones, we consider those as not related to the accretion disk
wind. The same is true for the narrow absorption features detected blue-wards
of the H$\alpha$ emission line (Fig. 3, central panel). For the He I
$\lambda$6678 line, a single absorption trough is detected between -800 and
-850 km/s (Fig. 3, right panel), with a $v_{t}$ of 825 km/s.
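The terminal velocities quoted above follow from the non-relativistic Doppler formula; a minimal sketch, where the blue-edge wavelength of the He I $\lambda$5876 trough is an illustrative value chosen to correspond to a $v_{t}$ near -880 km/s:

```python
C_KMS = 299792.458  # speed of light in km/s

def los_velocity(lam_obs, lam_rest):
    """Non-relativistic line-of-sight velocity of a spectral feature;
    negative values mean blueshift, i.e. outflow towards the observer."""
    return C_KMS * (lam_obs - lam_rest) / lam_rest

# Illustrative (assumed) blue-edge wavelength of the He I 5876 trough,
# measured against the He I D3 rest wavelength of 5875.62 Angstrom
v_t = los_velocity(5858.4, 5875.62)
```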
During this period, we observe strong asymmetries in the emission lines
commonly produced in the disc, particularly in He I $\lambda$6678 and
H$\alpha$. Therefore, we explored line profile
properties by applying multi-component Gaussian fits using the Python routines
curve_fit and leastsq. In Figure 4 we show the result of this analysis for
H$\alpha$ (left panel) and He I $\lambda$6678 (right panel). The H$\alpha$
line analysis has been performed in the wavelength region 644$-$670 nm, which
includes the feature of interest and the local continuum. In this case we
excluded from the fit the He I emission line, which falls at the end of the
analyzed wavelength range. The H$\alpha$ profile is well modelled by two
narrow Gaussian components, and only one broad Gaussian component is needed to
fit the red wing of the emission line. We note the absence of a blue-shifted
broad wing, which was observed in Muñoz-Darias et al. (2019), as well as of
P Cygni profile signatures. However, a forest of narrow absorption lines is
clearly visible in the blue region of the H$\alpha$ line. The two narrow components
are characterized by a central wavelength ($\lambda_{c}$) and a full width at
half maximum (FWHM) of $\lambda_{c}$ = (6574.4$\pm$0.4) Å, FWHM = (389$\pm$20)
km/s and $\lambda_{c}$ = (6559.8$\pm$0.3) Å, FWHM = (914$\pm$18) km/s,
respectively, while the broad red wing is centered at $\lambda_{c}$ =
(6583.0$\pm$3.0) Å with FWHM = (2982$\pm$210) km/s. From the redshift of the
broad wing with respect to the H$\alpha$ rest-frame wavelength we derive an
outflow velocity of $v$ = (923$\pm$14) km/s, while the separation between the
two narrow components is $\sim$667 km/s.
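The quoted outflow velocity and narrow-component separation can be reproduced directly from the fitted central wavelengths:

```python
C_KMS = 299792.458  # speed of light in km/s
H_ALPHA = 6562.8    # H-alpha rest wavelength (Angstrom)

def doppler_velocity(lam_c, lam_rest):
    """Velocity shift of a fitted line centroid (non-relativistic)."""
    return C_KMS * (lam_c - lam_rest) / lam_rest

# Redshift of the broad wing (centroid 6583.0 A) -> outflow velocity
v_broad = doppler_velocity(6583.0, H_ALPHA)
# Velocity separation of the two narrow components (6574.4 and 6559.8 A)
dv_narrow = doppler_velocity(6574.4, H_ALPHA) - doppler_velocity(6559.8, H_ALPHA)
```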
For the He I $\lambda$6678 line analysis we used the wavelength region
$664-672$ nm, which includes also the local continuum but excludes the
wavelength range in which the H$\alpha$ falls. The He I $\lambda$6678 line
profile is well modelled by three Gaussian components. The first one is well
centered on the rest-frame He I wavelength with a $\lambda_{c}$= (6679.2 $\pm$
0.4) Å and a FWHM = (536$\pm$54) km/s. The two remaining Gaussians are
blueshifted and redshifted by $\sim$450 km/s with respect to the first
component, and are characterized by $\lambda_{c}$ = (6669.4$\pm$0.2) Å, FWHM
= (307$\pm$12) km/s and $\lambda_{c}$ = (6689.7$\pm$0.3) Å, FWHM =
(295$\pm$29) km/s, respectively.
As a whole, the detected optical disk wind features show properties in
between those found in the hard and soft state epochs collected by Muñoz-
Darias et al. (2019), confirming the decreasing trend of the optical wind
between the two states of the source.
Figure 4: Fits of the emission line profiles for H$\alpha$ (left panel) and He
I $\lambda$6678 (right panel). The residuals of the fits are shown at the
bottom of each panel. The line profiles are modelled with Gaussian components
(colored dashed lines). The total fitting model is represented by the magenta
solid line.
### 3.3 Soft X-ray
We fitted the XMM-Newton/EPIC-pn spectrum in the $0.5-12$ keV energy range in
XSPEC using an absorbed disk-blackbody plus power-law model,
Tbabs*(diskbb+powerlaw). With this model we derive the following parameters:
$n_{H}=(0.13\pm 0.04)\times 10^{22}\textrm{ cm}^{-2}$, $kT_{bb}=0.24\pm
0.03$ keV, and $\Gamma=1.65\pm 0.08$, with $\chi^{2}/\nu=1.00$.
### 3.4 Hard X-ray
We fitted the INTEGRAL/IBIS/ISGRI spectrum in the $30-400$ keV energy range. A
systematic 1.5% error was added to the data, following OSA 11 standard
recommendations (https://www.isdc.unige.ch/integral/analysis). A power-law fit
to the data in XSPEC found a photon index of $\Gamma=2.41\pm 0.01$ and a
$\chi^{2}/\nu=71.80$. The spectrum deviates from a simple power-law model,
especially at high energies, with residuals suggesting a Comptonized spectrum.
Fitting the data with a CompTT model, with the seed photon temperature fixed
to the $kT_{bb}$ value of 0.24 keV from XMM-Newton, yields a better fit, with
$kT=36.4\pm 0.9$ keV, $\tau=1.27\pm 0.05$, and $\chi^{2}/\nu=6.21$. When
including a reflecting component (Reflect) with the reflection fraction fixed
to 1, the fit improves to $\chi^{2}/\nu=3.45$, with $kT=38\pm 1$ keV and
$\tau=1.44\pm 0.06$. Following Roques & Jourdain (2019), a
cutoff power law was added, Reflect*(CompTT)+cutoffpl, with $\Gamma=1.6$ and a
cutoff energy of 200 keV, which improved the fit to $\chi^{2}/\nu=0.71$, with
parameters $kT=27\pm 4$ keV and $\tau=2.2$.
To characterize the X-ray spectrum, a joint fit was performed between the two
instruments, spanning $0.5-400$ keV, using the model
Tbabs*Reflect*(diskbb+CompTT)+Tbabs*cutoffpl, which found best-fit parameters
of $kT_{bb}=0.27\pm 0.01$ keV, $kT=27\pm 1$ keV, and $\tau=2.2\pm 0.1$, with
$\chi^{2}/\nu=0.95$.
Using this joint spectrum, we calculated the accretion luminosity in the
$1-200$ keV energy range and found a value of $\sim 6\times 10^{37}$ erg/s for
a distance of 3 kpc.
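The luminosity estimate amounts to $L=4\pi d^{2}F$; a minimal sketch, where the unabsorbed $1-200$ keV flux value is an assumption chosen to be consistent with the quoted luminosity:

```python
import math

KPC_CM = 3.0857e21  # one kiloparsec in cm

def isotropic_luminosity(flux_cgs, d_kpc):
    """L = 4 pi d^2 F, with F in erg cm^-2 s^-1 and d in kpc."""
    d_cm = d_kpc * KPC_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# Assumed unabsorbed 1-200 keV flux (erg cm^-2 s^-1), for illustration only
L_acc = isotropic_luminosity(5.6e-8, 3.0)  # ~6e37 erg/s at 3 kpc
```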
Table 3: Irradiated disk fit parameters.

 | diskir | diskir+po
---|---|---
$kT_{disk}$(keV) | $0.116\pm 0.007$ | $0.122\pm 0.007$
$\Gamma$ | $1.78\pm 0.02$ | $1.70\pm 0.04$
$kT_{e}$ (keV) | $58\pm 4$ | $37\pm 4$
$L_{C}/L_{D}$ | $4.7\pm 0.5$ | $4.7\pm 0.6$
$f_{out}$ | $(1\pm 40)\times 10^{-7}$ | $(4\pm 15)\times 10^{-2}$
$\log(r_{out})$ | $3.45\pm 0.04$ | $3\pm 1$
$\Gamma_{po}$ | $-$ | $1.6\pm 0.3$
Normpo | $-$ | $1.0\pm 1.7$
$\chi^{2}/\nu$ | $1.30$ | $0.97$
#### 3.4.1 IR $-$ Hard X-ray Spectrum
Subsequently, we fit our data from the near-IR to hard X-rays using an
irradiated disk model, to compare with Shidatsu et al. (2018), who analyzed a
similar energy range using observations from 24 March. The irradiated disk
model accounts for the effects of the Comptonized emission on the accretion
disk and the soft-excess that is seen in the hard state (Gierliński et al.,
2009). Figure 5 shows the spectrum from $0.001-400$ keV with the diskir model
shown as a solid red line and the power-law component of the diskir+po model
as a dashed black line. The power law is used to model the high-energy cutoff
power-law component in the previous section. Table 3 contains the fit
parameters using a diskir model with and without an additional power-law
component. We found that including a power-law component improved the fit at
high energies and reduced the $\chi^{2}/\nu$ from 1.30 to $\chi^{2}/\nu=0.97$
with an f-test probability of $3.7\times 10^{-8}$.
The origin of the power-law component is unclear. As shown below in Figure 7,
the expected jet flux is too low for the component to be jet emission at those
energies. However, the emission could possibly be from Comptonization of non-
thermal electrons, as in the case of GRS $1716-249$ (Bassi et al. 2020).
Figure 5: MAXI J$1820+070$ spectrum from $0.001-400$ keV. The diskir model is
shown with a solid red line and the po component is shown as a black dashed
line.
## 4 Broadband SED modelling
We modelled the broadband SED of MAXI J1820+070 using a combination of jet
leptonic models and an irradiated disk and corona model implemented in the Jets
SED modeler and fitting Tool (JetSeT, https://jetset.readthedocs.io/en/latest/;
Tramacere, 2020; Tramacere et al., 2011, 2009). A more accurate description
of the model is discussed in Tramacere (in prep.). We assume that the
optical/UV up to keV energies is dominated by disc irradiation and coronal
emission. The emission in the mm to optical region is dominated by the non-
thermal emission of leptons accelerated in the jet by shock and/or stochastic
acceleration, and we assume that the break at $\approx 1.5\times 10^{13}$ Hz
is due to the transition from optically thick to optically thin
synchrotron emission. The radio emission is dominated by the terminal part of
the jet that starts beyond the acceleration region and extends up to a
distance of $\approx 1\times 10^{15}$ cm, according to Bright et al. (2020). A
schematic view of the model is provided in Fig. 6.
### 4.1 Individual Model Components Description
In the following we describe the implementation of each model component.
#### 4.1.1 Irradiated Disk and Hot Corona
To model the UV to hard-X-ray emission we have used the disk Comptonization
plus disk irradiation model, DiskIrrComp implemented in JetSeT. The
DiskIrrComp is based on the diskir model (Gierliński et al., 2009) and the
Comptonization model of Zdziarski et al. (2009). In detail, we assume a
classical multi-temperature disk with an inner temperature $T_{Disk}$,
extending from $R_{in}=3R_{S}$ to $R_{out}$, expressed via the dimensionless
parameters $r_{in}=R_{in}/R_{S}$ and $r_{out}=R_{out}/R_{in}$. The disk spectrum is
modified due to the reprocessing of irradiated emission from the disk itself
and from the corona Comptonization tail. The corona emission is described by
a power law with an exponential cutoff with a photon index $\Gamma_{Comp}$ and
a cut-off energy $E_{Comp}=kT_{e}$ where $kT_{e}$ is the corresponding
electron temperature. The Compton hump component is described by a power-law
with exponential cut-off with a photon index $\Gamma_{hump}$ and a cut-off
energy $E_{hump}$. We refer to this model as Comp. hump. The normalization of
the Compton tail component is parameterized as a fraction of the disk
luminosity $L_{Disk}$ according to $L_{Comp}^{ratio}=L_{C}/L_{Disk}$. The
total bolometric flux will be $L_{bol}=L_{Disk}+L_{rep}+L_{C}$, where
$L_{rep}$ represents the fraction $f_{in}$ of $L_{C}$ thermalized between
$r_{in}$ and $r_{irr}=R_{irr}/R_{in}$, where $R_{irr}$ is the radius of
the inner disk irradiated by the Compton tail. A fraction $f_{out}$ of the
bolometric luminosity will irradiate the outer disk. The irradiation creates a
shoulder with a spectral trend $\propto f_{out}L_{bol}\nu^{-1}$ that extends
between $\nu_{1}=3kT(r_{out})$ and $\nu_{2}=3kT(r_{t})$, where $r_{t}$ is the
transitional radius between gravitational and irradiation energy release. This
effect depends strongly on $r_{out}$ and $f_{out}$, and it is present even
without corona Comptonization, because it represents the disk self-
irradiation. The presence of a Comptonization component will provide a further
heating of the disk in the inner part modifying the pure gravitational
temperature profile.
#### 4.1.2 Pre-acceleration and Acceleration Region
We assume that electrons in the pre-acceleration region close to the base of
the jet are described by a thermal plasma with cooling dominated by adiabatic
losses. Once the particles approach the acceleration region they are
accelerated under the effect of diffusive shock acceleration and/or stochastic
acceleration and the corresponding energy distribution can be modeled by a
power-law with a high-energy cutoff
$N_{e,acc}(\gamma)=N\gamma^{-s}\exp(-\gamma/\gamma_{cut})$ (1)
where the value of $\gamma_{cut}$ takes into account the balance between
cooling and acceleration terms. The index $s$ is dictated by the competition
between the acceleration and escape time scales (Tramacere et al.,
2011). We assume that the acceleration region extends from $z_{acc}^{start}$
to $z_{acc}^{end}$, with cross section $R_{acc}$ equal to the average cross
section of the jet at $z=(z_{acc}^{start}+z_{acc}^{end})/2$, with
$z_{acc}^{end}-z_{acc}^{start}=2R_{acc}$. The emission from the acceleration
region is reproduced using the jet leptonic model Jet implemented in JetSeT,
and we refer to it as JetAcc (Tramacere, in prep.).
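Eq. (1) can be sketched as follows, using the values $s\approx 2.1$ and $\gamma_{cut}=60$ adopted later in the setup; the normalization $N$ (`n0`) is arbitrary here:

```python
import numpy as np

def n_e_acc(gamma, n0=1.0, s=2.1, gamma_cut=60.0):
    """Accelerated-electron distribution of Eq. (1): power law with an
    exponential high-energy cutoff (n0 normalization is arbitrary here)."""
    return n0 * gamma**(-s) * np.exp(-gamma / gamma_cut)

gamma = np.logspace(0.0, 3.0, 200)
n = n_e_acc(gamma)
# Effective local index d ln N / d ln gamma: ~ -s well below the cutoff,
# steepening as -s - gamma/gamma_cut above it
slope = np.gradient(np.log(n), np.log(gamma))
```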
#### 4.1.3 Radio Jet
To model the radio jet emission we have used the JetSeT multi-zone radio jet
model RadioJet. This model implements a continuous jet as a sum of $N_{c}$
single zones, following the approach of Kaiser (2006), where for each zone the
values of $R$ and $B$ are governed by Eq. 3, and the particle density scales as
$N_{s,i}=N_{s,0}(z_{s,0}/z_{i})^{m_{N}}$ (2)
where $N_{s,0}$ is the initial density of emitters at the starting point of
the radio jet $z_{s,0}$, $z_{i}$ is the average position of the $i_{th}$
component, and $m_{N}$ is the index of the particle density law fixed to 2.
The initial particle density is a fraction $N_{frac}$ of that present in the
acceleration region and we fix it to 1. The radio jet extends from
$z_{radio}^{start}=(z_{acc}+R_{acc})K_{R}^{start}$ to
$z_{radio}^{end}=(z_{acc}+R_{acc})K_{R}^{end}$, where $K_{R}^{start}$ and
$K_{R}^{end}$ are free parameters. In the present analysis we fix
$K_{R}^{start}=1$ , and $K_{R}^{end}$ is fixed in order to match the value of
$1\times 10^{15}$ cm according to the analysis presented in Bright et al.
(2020). The particle distribution in each region has the same spectral law as
in the acceleration region, but we decrease the value of $\gamma_{cut}$ to
take into account the effect of the cooling when the particles leave the
acceleration region. In our analysis we take into account only synchrotron
cooling and we evolve $\gamma_{cut}$ according to Eq. 27 in Kaiser (2006).
More details about the connection between the acceleration region and the
radio jet are discussed in Sec. 4.3.
Figure 6: A schematic view of the jet model setup. The purple region
identifies the pre-acceleration region, the cyan region identifies the
acceleration region, and the green region identifies the radio jet. The $z$
axis on the bottom shows the starting and end point of each region. The
acceleration region is assumed to be spherical with a radius equal to the jet
cross section. The vertical black lines in the radio jet region mark
qualitatively the division of the region into slices.
### 4.2 Phenomenological Model Setup
As a first step we set the geometrical properties of the jet, i.e. we define
the extent of the pre-acceleration, acceleration, and radio emission sites,
and the values of the magnetic field. We assume that the jet is launched at
distance $z_{0}$ from the BH, with an initial cross section $R_{0}$, and that
the bulk Lorentz factor ($\Gamma_{jet}$) of the jet is constant over the full
jet extent. The acceleration region starts at a distance $z_{acc}^{start}$,
with a width equal to the jet cross-section diameter, $R_{acc}=2R(z_{acc})$,
and we treat it as a spherical region. The radio region starts at
$z^{start}_{radio}=K_{R}^{start}(z_{acc}+R_{acc})$ and ends at a distance
$z^{end}_{radio}=K_{R}^{end}(z_{acc}+R_{acc})$ (a scheme of the model is
presented in Fig. 6). According to Bright et al. (2020), we fix the distance
of the source from the observer to $d=3$ kpc, the termination of the radio
jet to the value of $z_{end}\approx 1\times 10^{15}$ cm, and the value of the
beaming factor to
$\delta=[\Gamma_{jet}(1-\beta_{jet}\cos\theta_{obs})]^{-1}\approx 2.2$, using
the values of $\beta_{jet}=0.89$ and $\theta_{obs}=63^{\circ}$ reported in
Bright et al. (2020). We assume a ballistic jet model (Kaiser, 2006; Begelman
et al., 1984) characterized by
$\displaystyle B(z)\propto B_{0}(z_{0}/z)^{m_{B}},\qquad R(z)\propto
R_{0}(z/z_{0})^{m_{R}},\qquad N(z)\propto N_{0}(z_{0}/z)^{m_{N}}$ (3)
with $m_{B}\approx 1$, $m_{R}=1$, and $m_{N}=2.0$. This choice assumes that
the jet is very close to the ballistic regime, with a magnetic field dominated
by the toroidal component and justifies the assumption that the bulk Lorentz
factor is constant along the jet.
We use a black hole mass of $M_{BH}=8M_{\sun}$. The jet luminosity $L_{jet}$
is linked to the Eddington luminosity ($L_{Edd}$) according to
$L_{jet}=\frac{1}{2}q_{jet}L_{Edd}$ (4)
where $L_{Edd}\approx 1.3\times 10^{38}(M_{BH}/M_{\sun})$ erg s-1 (Rybicki &
Lightman, 1986). It is worth noting that our $q_{jet}$ parameter is not linked
directly to the accretion efficiency process, because the jet powering could,
in principle, be supported also by other mechanisms such as the
Blandford–Znajek mechanism (Blandford & Znajek, 1977), which predicts
electromagnetic extraction of energy and angular momentum from a magnetized
accretion disc surrounding a black hole. Hence, our $q_{jet}$ parameter should
not be used to infer or constrain the accretion efficiency, and will be
discussed in a more accurate physical context in Tramacere (in prep.).
We assume that the jet is launched at a distance $z=z_{0}$ from the BH with
$z_{0}=50R_{S}\approx 1.2\times 10^{8}$ cm, where $R_{S}=(2GM_{BH})/{c^{2}}$.
The launching position $z=z_{0}$ is assumed constant in the current analysis,
to reduce the model complexity, and is chosen according to reference values
published in previous analyses (Vila & Romero, 2010). The
initial radius of the jet is set to $R(z_{0})=0.1z_{0}$, resulting in an
opening angle of $\theta_{open}\approx 5.7^{\circ}$. We impose that in the
launching region the entire jet power is in the form of magnetic energy
$L_{jet}=L_{B}(z_{0})=\pi
U_{B}(z_{0})R(z_{0})^{2}\Gamma_{jet}^{2}\beta_{jet}c$ (5)
where $U_{B}=B^{2}/(8\pi)$, and setting $q_{jet}=0.2$ we obtain $B_{0}\approx
6.8\times 10^{6}$ G.
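The chain from Eq. (4) to $B_{0}$ via Eq. (5) can be verified numerically; a sketch with the values quoted in the text:

```python
import math

C_CGS = 2.998e10  # speed of light, cm/s

def jet_base_field(q_jet, m_bh_msun, r0_cm, gamma_jet, beta_jet):
    """Eqs. (4)-(5): L_jet = 0.5 q_jet L_Edd and, assuming the whole jet
    power at the base is magnetic, L_jet = pi U_B R0^2 Gamma^2 beta c
    with U_B = B0^2 / (8 pi); solve for B0."""
    l_edd = 1.3e38 * m_bh_msun  # erg/s
    l_jet = 0.5 * q_jet * l_edd
    b0 = math.sqrt(8.0 * l_jet / (r0_cm**2 * gamma_jet**2 * beta_jet * C_CGS))
    return l_jet, b0

# Values quoted in the text: q_jet = 0.2, M_BH = 8 Msun, R0 = 0.1 z0 = 1.2e7 cm
L_jet, B0 = jet_base_field(0.2, 8.0, 1.2e7, 2.19, 0.89)
```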
The value of $m_{B}$ can be constrained from the spectral index of radio jet
emission, $\alpha_{R}\approx 0.15$, according to the Eq. 39 in Kaiser (2006)
that refers to the case of strong radiative cooling and an almost constant
value of the electron distribution high-energy cutoff. Since this scenario is
very similar to what we expect in our case, we can rearrange Eq. 39 in
Kaiser (2006) as:
$m_{B}=\frac{1+m_{R}}{2-\alpha_{R}}$ (6)
that is similar to the trend of the thick radio spectrum discussed in Pe’er &
Casella (2009a), and we obtain a value of $m_{B}\approx 1.1$. We stress that
this is an initial guess done assuming that the jet is not changing after the
acceleration region. As we will discuss in the next section, during the model
fit we need to take into account that jet expansion might change above the
acceleration region, hence we will relax the constraint on $m_{B}$ and $m_{R}$
considered in the RadioJet emission.
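Eq. (6) with $m_{R}=1$ and $\alpha_{R}\approx 0.15$ gives the quoted index directly:

```python
def m_b_index(m_r, alpha_r):
    """Eq. (6): magnetic-field index from the radio spectral slope,
    valid in the strong-cooling regime of Kaiser (2006), Eq. 39."""
    return (1.0 + m_r) / (2.0 - alpha_r)

mb = m_b_index(1.0, 0.15)  # m_R = 1, alpha_R ~ 0.15
```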
To constrain the value of $z_{acc}$ we impose that $R_{acc}=R(z_{acc})$,
$B_{acc}=B(z_{acc})$ and $N_{e,acc}$ correspond to a synchrotron self-
absorption frequency of $\nu_{t}\approx 1.5\times 10^{13}$ Hz. This value of
$\nu_{t}$ is obtained from the phenomenological fit of the optically-thin to
optically-thick synchrotron emission between mm and optical data shown in Fig.
2. In order to solve this problem we combine the analytical expression of the
synchrotron self-absorption frequency ($\nu_{t}$) (Rybicki & Lightman, 1986),
evaluated at the peak i.e. $\alpha_{\nu}=0$
$\nu_{t}=\nu_{L}\Big{[}\frac{\pi\sqrt{\pi}}{4}\frac{qR_{acc}N_{e,acc}}{B_{acc}}f_{k}(s)\Big{]}^{\frac{2}{s+4}},$
(7)
and that of the synchrotron emissivity (Rybicki & Lightman, 1986),
$\epsilon_{s}(\nu)$:
$\epsilon_{s}(\nu)=\frac{F_{\nu}d_{L}^{2}}{V}=\frac{3\sigma_{T}cN_{e,acc}U_{B}^{acc}}{16\pi\sqrt{\pi}\nu_{L}}f_{\epsilon}(s),$
(8)
where $q$ is the electron charge, $U_{B}^{acc}$ is the magnetic energy
density, $V$ is the volume of a sphere of radius $R_{acc}$, $s$ is the slope
of the electron distribution power-law, and $\nu_{L}=\frac{qB}{2\pi m_{e}c}$
is the Larmor frequency, and where the
functions $f_{k}(s)$ and $f_{\epsilon}(s)$ are approximated to percent
accuracy as reported in Ghisellini (2013). The value of $s$ is obtained using
the optically thin spectral index $\approx 0.6$ from the phenomenological fit
in Fig. 2, according to the relation $s=2\alpha+1\approx 2.2$ (Rybicki &
Lightman, 1986). We solve Eq. 8 with respect to $N_{e,acc}$ and then
substitute in Eq. 7, and we insert the functional form of $B=B(z_{acc})$ and
$R=R(z_{acc})$ according to Eq. 3. The final equation solved with respect to
$z_{acc}$ reads:
$z_{acc}=\Big{[}\Big{(}\frac{\nu_{t}2\pi
m_{e}c}{qB_{0}^{2}z_{0}^{m_{B}}}\Big{)}^{\frac{s+4}{2}}\frac{3\sigma_{T}(B_{0}R_{0})^{2}f_{\epsilon}(s)z_{0}^{2\Delta_{m}}}{16r_{e}^{2}\pi^{3}f_{k}(s)F_{\nu}d_{L}^{2}}\Big{]}^{\psi}$
(9)
where $r_{e}=q^{2}/(m_{e}c^{2})$ is the classical electron radius,
$\Delta_{m}=m_{B}-m_{R}$, and $\psi=\frac{2}{4\Delta_{m}-m_{B}(s+4)}$.
Consequently, the starting position of the radio jet is set to
$z_{radio}^{start}=z_{acc}^{end}=z_{acc}+R_{acc}\approx 3.1\times 10^{10}$ cm,
with an extent derived from Bright et al. (2020) of $z_{end}\gtrsim
30000\,z_{radio}^{start}$.
The value of the cut-off of the electron distribution is set to
$\gamma_{cut}=60$, in order to place the peak of the synchrotron emission
above the IR frequencies for a magnetic field $B_{acc}\approx 1.8\times
10^{4}$ G, with a power-law slope $s\approx 2.1$ that is slightly lower than
the value derived from the optically thin spectral index.
The constrained value of $z_{acc}$ can be used to derive the hadronic content
of the jet energetics in the form of cold protons. Following Vila & Romero (2010)
we impose that in the acceleration region of the jet the magnetic energy of
the jet is in subequipartition with the bulk kinetic energy of the cold
protons, a condition that is mandatory to allow the mechanical compressibility
of the plasma (Komissarov et al., 2007). We define the parameter
$\rho^{acc}_{p,B}=U_{p}(z_{acc})/U_{B}(z_{acc})$, where
$U_{p}(z)=n_{p}(z)m_{p}c^{2}$, and we require that $\rho_{p,B}>1$. This choice
sets a value of the cold proton luminosity in the acceleration region of
$L_{p}(z_{acc})>3.6\times 10^{37}$ erg s$^{-1}$.
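The quoted proton-luminosity floor can be reproduced by evaluating $U_{B}$ at the acceleration region and inserting it in the same kinetic-power form as Eq. (5); the exact luminosity expression is our assumption, chosen for illustration:

```python
import math

C_CGS = 2.998e10  # speed of light, cm/s

def proton_luminosity_floor(b_acc, r_acc, gamma_jet, beta_jet):
    """Minimum cold-proton luminosity implied by U_p(z_acc) >= U_B(z_acc),
    inserting U_p = U_B = B^2/(8 pi) into the same kinetic-power form as
    Eq. (5): L_p = pi R_acc^2 Gamma^2 beta c U_p."""
    u_b = b_acc**2 / (8.0 * math.pi)
    return math.pi * r_acc**2 * gamma_jet**2 * beta_jet * C_CGS * u_b

# B_acc ~ 1.8e4 G, R_acc ~ 2.6e9 cm, Gamma_jet = 2.19, beta_jet = 0.89
L_p_min = proton_luminosity_floor(1.8e4, 2.6e9, 2.19, 0.89)
```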
Table 4: Phenomenological setup parameters.

Input Parameters
par. name | units | input value
---|---|---
$z_{0}$ | cm | $1.12\times 10^{8}$
$r_{0}$ | cm | $1.12\times 10^{7}$
$M_{BH}$ | $M_{\sun}$ | 8
$q_{jet}$ | | 0.20
$F_{\nu}^{t}$ | Jy | $0.5$
$\nu_{t}$ | Hz | $1.5\times 10^{13}$
$s$ | | 2.1
$\rho^{acc}_{p,B}$ | | $>1$
$m_{B}$ | | 1.1
$m_{R}$ | | 1.0
Output Parameters
par. name | units | output value
---|---|---
$B_{0}$ | G | $6.8\times 10^{6}$
$B_{acc}$ | G | $1.8\times 10^{4}$
$L_{p}^{acc}$ | erg s-1 | $>3.6\times 10^{37}$
$z_{acc}^{start}$ | cm | $2.4\times 10^{10}$
$z_{acc}^{end}$ | cm | $2.9\times 10^{10}$
$z_{acc}$ | cm | $2.6\times 10^{10}$
$R_{acc}$ | cm | $2.6\times 10^{9}$
$z_{radio}^{start}$ | cm | $2.9\times 10^{10}$
$z_{radio}^{end}$ | cm | $\gtrsim 1\times 10^{15}$
Table 5: JetSeT best-fit model parameters.

model name | par. name | units | best fit value | error | starting value | fit boundaries | frozen
---|---|---|---|---|---|---|---
CompHump | $E_{hump}$ | keV | 26 | 14 | 20 | [ 15 ; 35] | False
” | $\Gamma_{hump}$ | | -0.5 | 2 | -1.2 | [ -2 ; 2] | False
DiskIrrComp | $T_{Disk}$ | K | | | $1.55\times 10^{6}$ | | True
” | $L_{Disk}$ | erg s-1 | $1.09\times 10^{37}$ | $1.0\times 10^{32}$ | $1\times 10^{37}$ | [ $1\times 10^{36}$ ; $1\times 10^{39}$] | False
” | $r_{out}$ | | $3.58\times 10^{3}$ | $0.21\times 10^{3}$ | $5\times 10^{3}$ | [ 1 ; – ] | False
” | $r_{irr}$ | | | | 1.1 | | True
” | $\Gamma_{Comp}$ | | 1.64 | 0.12 | 1.65 | [ 1.3 ; 1.9 ] | False
” | $E_{Comp}$ | keV | 150 | 100 | 140 | [ 20 ; 200 ] | False
” | $L_{Comp}^{ratio}$ | | 4.1 | 0.6 | 4.5 | [ 0 ; – ] | False
” | $f_{in}$ | | | | 0.1 | | True
” | $f_{out}$ | | $1\times 10^{-2}$ | $40\times 10^{-2}$ | 0.01 | [ 0 ; – ] | False
DiskIrrComp | $r_{out}$ | | $3.4\times 10^{3}$ | $0.5\times 10^{3}$ | $3.58\times 10^{3}$ | [ 1 ; – ] | False
” | $f_{out}$ | | $7.33\times 10^{-3}$ | $0.15\times 10^{-3}$ | $1\times 10^{-2}$ | [ 0 ; – ] | False
” | $L_{Comp}^{ratio}$ | | 4.270 | 0.016 | 4.1 | [ 0 ; – ] | False
JetAcc | $N_{e,acc}$ | cm-3 | $9.998\times 10^{11}$ | $0.001\times 10^{11}$ | $1.0\times 10^{12}$ | [0 ; – ] | False
” | $s$ | | 2.082 | 0.007 | 2.1 | [- ; - ] | False
” | $\gamma_{cut}$ | | $65.4$ | $1.7$ | 60 | [1 ; – ] | False
” | $R_{acc}$ | cm | $2.6\times 10^{9}$ | $1.0\times 10^{1}$ | $2.6\times 10^{9}$ | [ $1.32\times 10^{9}$ ; $3.96\times 10^{9}$] | False
” | $z_{acc}$ | cm | | | $2.8\times 10^{10}$ | | True
” | $B_{acc}$ | G | 17986 | $1.0\times 10^{-3}$ | 17986 | [ 8993 ; 26980 ] | False
” | $\theta_{jet}$ | deg | | | 63 | | True
” | $\Gamma_{jet}$ | | | | 2.19 | | True
RadioJet | $z_{inj}$ | | | | $2.5\times 10^{10}$ | | True
” | $N_{frac}$ | | | | 1 | | True
” | $K_{R}^{start}$ | | | | 1 | | True
” | $K_{R}^{end}$ | | | | 30000 | | True
” | $m_{jet}$ | | 1.203 | 0.001 | 1.1 | [ 0.5 ; 1.5 ] | False
### 4.3 Model Fit and Results
#### 4.3.1 Initial model setup
To optimize the model we use the composite model interface FitModel provided
by JetSeT, which allows combining different models into a global model. This
global model can be optimized by passing it to the JetSeT ModelMinimizer
plugin. In the current analysis we follow a frequentist approach and use the
Minuit ModelMinimizer option. We have used the Data and ObsData JetSeT tools to
import the observed data, and we have added a 5% systematic error in the range
$[1\times 10^{8},1\times 10^{16}]$ Hz, to prevent the large inhomogeneity in
the fractional errors between the radio and optical/UV data from biasing the
fit convergence. For the error estimate we provide only errors derived from
the MIGRAD module of Minuit; a more reliable estimate based on Markov chain
Monte Carlo (MCMC) sampling will be presented in Tramacere (in prep.).
The DiskIrrComp model, the Comp. hump model, and the JetAcc model are
independent; in contrast, the JetAcc and RadioJet models are bound.
Figure 7: The best-fit JetSeT model of the broadband SED. Top panel: the
$F_{\nu}$ representation of the global model fit. Bottom panel: the $\nu
F_{\nu}$ representation. The red line represents the global model, the dashed
lines correspond to the single components, and the color coding is reported in
the legend. The best-fit parameters are reported in Tab. 5. The residuals plot is
evaluated with respect to the $\nu F_{\nu}$ representation.
The initial values of the parameters for the DiskIrrComp model are chosen
according to the analysis presented in Sec. 3.3 and Sec. 3.4. In detail, we set
the initial values $L_{disk}=1\times 10^{37}$ erg s-1, $r_{out}=5000$,
$f_{out}=0.01$, and $L_{Comp}^{ratio}=4.5$, and we fix the inner disk
temperature to $T_{Disk}=1.55\times 10^{6}$ K and the parameters $r_{irr}=1.1$
and $f_{in}=0.1$, the choice adopted in Gierliński et al. (2009) when the
Comptonization of the outer disk is included in the irradiated disk.
For the JetAcc model, we fix $\theta_{jet}=63^{\circ}$ and $\Gamma_{jet}=2.19$,
we put a relative bound of $\pm 0.5$ centered on the parameter values derived
in the previous section, $R_{acc}=2.6\times 10^{9}$ cm and $B_{acc}\approx
1.8\times 10^{4}$ G, we freeze the initial value of $z_{acc}=2.6\times
10^{10}$ cm, and we leave free the parameters of the electron distribution.
The initial setup of the parameters of the RadioJet is more complex, and we
need to take into account the physical connection with the acceleration region
and the cooling process. This effect plays a crucial role: indeed, as already
discussed in Kaiser (2006) and Pe’er & Casella (2009a), the combination of
synchrotron cooling and jet expansion (assuming a negligible contribution from
adiabatic cooling) will result in an asymptotic value of $\gamma_{cut}(t)$
that can naturally explain the flat radio spectrum without the need to
introduce significant particle re-acceleration in the radio jet. We follow the
approach reported in Pe’er & Casella (2009a) (in the case of negligible
adiabatic cooling) and we set $m_{B}^{radio}=m_{R}^{radio}=m_{jet}$. The
particle cut-off in the radio jet evolves according to (Kaiser,
2006):
$\gamma_{cut}(t)=\frac{\gamma_{cut}}{1+\frac{\sigma_{T}B_{0}^{2}}{6\pi m_{e}c\,f}\gamma_{cut}t_{0}^{1-f}(t^{f}-t_{inj}^{f})}$
(10)
where $f=1-2m_{jet}$, and $t_{0}=z_{0}/\beta_{jet}c\Gamma_{jet}$,
$t_{inj}=z_{inj}/\beta_{jet}c\Gamma_{jet}$, and
$t=z/\beta_{jet}c\Gamma_{jet}$ are the comoving time scales. We freeze a
starting value of $z_{inj}=z_{acc}^{start}\approx 2.5\times 10^{10}$ cm.
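Equation (10) can be evaluated numerically to see the asymptotic behaviour explicitly: for $m_{jet}>0.5$ one has $f<0$, so $t^{f}\rightarrow 0$ and $\gamma_{cut}(t)$ flattens onto a constant. The sketch below uses purely illustrative values for $\gamma_{cut}$, $B_{0}$ and the time scales, not the best-fit ones:

```python
import math

SIGMA_T = 6.6524587e-25  # Thomson cross-section [cm^2]
M_E = 9.1093837e-28      # electron mass [g]
C = 2.99792458e10        # speed of light [cm/s]

def gamma_cut_t(t, gamma0, B0, t0, t_inj, m_jet):
    """Comoving cut-off Lorentz factor along the radio jet, Eq. (10)."""
    f = 1.0 - 2.0 * m_jet  # < 0 for m_jet > 0.5, so t**f -> 0 at large t
    cool = (SIGMA_T * B0 ** 2 / (6.0 * math.pi * M_E * C * f)
            * gamma0 * t0 ** (1.0 - f) * (t ** f - t_inj ** f))
    return gamma0 / (1.0 + cool)

# Illustrative values only: gamma0 = 1e3, B0 = 1.8e4 G, t0 = t_inj = 1 s.
g = [gamma_cut_t(t, 1e3, 1.8e4, 1.0, 1.0, 1.2) for t in (1.0, 10.0, 1e4, 1e8)]
# gamma_cut decreases from gamma0 and flattens onto its asymptotic value.
```

The monotone decrease followed by a plateau is the behaviour invoked in the text to explain the flat radio spectrum.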
Another effect to take into account is that for $z>z_{acc}$ the structure of
the jet could change; for this reason we leave the parameter $m_{jet}$ free,
with a fit boundary of [0.5,1.5] and an initial value of 1.18, which is
slightly larger than the value used for the phenomenological constraining but
gives a better agreement with the radio-to-optical data.
The density of emitters at the base of the RadioJet is bound to be equal to
the density of emitters in the acceleration region, $N_{e}$, calculated
according to Eq. 3 at $z=z_{radio}^{start}$, by fixing $N_{frac}=1.0$. We fix
the values of $K_{R}^{end}=3000$ and $K_{R}^{start}=1$.
A list of the free and frozen parameters and of the bounds is reported in
Table 5 in the columns ‘starting values’ and ‘fit boundaries’, respectively.
#### 4.3.2 Model fit results for the Disk and Corona emission
We first fit the DiskIrrComp and Comp. hump components, restricting the fit
range to $\nu$= $[5\times 10^{14},10^{20}]$ Hz, and we get $\chi^{2}=152$ for
98 degrees of freedom ($N_{dof}$), corresponding to a reduced
$\chi^{2}_{red}=1.55$. The parameter values are reported in the upper part of
Table 5.
The best-fit parameters resulting from JetSeT are similar to those obtained
from the XSPEC analysis for the diskir model, in particular the $r_{out}$
values ($3.58\times 10^{3}R_{in}$ and $3.45\times 10^{3}R_{in}$) and
$L_{C}/L_{D}$ (4.1 and 4.6), for JetSeT and XSPEC respectively. The $f_{out}$
parameter is unconstrained for both XSPEC and JetSeT. However, a well
constrained value is obtained when the jet component is added as shown in
section 4.3.3. Because the JetSeT model for the Comptonized emission is
phenomenological, the high-energy range of the irradiated disk is fit as a
cutoff power-law and thus is not directly comparable to the diskir parameters.
For that portion of the spectrum, JetSeT found $\Gamma=1.64$ (compared to 1.78
from diskir) and $E_{C}=150$ keV (compared to $kT_{e}=58$ keV from diskir). We
do note that $E_{C}/kT_{e}\approx 2.6$, which falls within the predicted range
of ratios between cutoff energy and electron temperature (Petrucci et al.,
2000, 2001), suggesting that the values are in agreement, even though the
uncertainty on the JetSeT value is quite large.
#### 4.3.3 Model fit results for the jet emission
To fit the full band SED we freeze all the parameters in the CompHump and
DiskIrrComp components, except for $r_{out}$, $f_{out}$, and $L_{C}/L_{D}$,
and we fit the global model over the full SED band in the range $\nu$=
$[5\times 10^{8},10^{20}]$ Hz .
The model fit converged with a final $\chi^{2}=181$ for 122 degrees of freedom
($N_{dof}$), corresponding to a $\chi^{2}_{red}=1.48$. The parameter values
are reported in the bottom part of Table 5, and the parameters derived from
the best-fit model are reported in Table 6. Regarding the DiskIrrComp, we note
that adding the jet component results in a better constraint on the value of
$f_{out}$=$(7.33\pm 0.15)\times 10^{-3}$, that is in the expected range of
other black hole binaries in the hard state (Gierliński et al., 2009).
Moreover, restricting the fit statistics to the same interval used in the
XSPEC analysis, we get a $\chi^{2}=157$ with $N_{dof}=107$, corresponding to a
$\chi^{2}_{red}=1.6$. Regarding the jet component, we note that final best-fit
model parameters did not change significantly from the input values,
suggesting that the phenomenological setup was able to find a configuration
very close to the optimal one, even though the fit might be biased by the
degeneracy among some parameters. We will investigate this problem in a
forthcoming work Tramacere (in prep.), by means of Bayesian approach based on
the MCMC technique. In general, we find that our assumption based on the
connection between a compact acceleration region feeding the extended radio
jet is able to model self-consistently the UV-to-optical emission, reproducing
the observed flat radio spectrum. In particular, we find that, according to
our best-fit model, the particles in the radio region reach the asymptotic
value of $\gamma_{cut}\approx 8$ and keep it almost constant as a result of
the decrease in the synchrotron cooling rate due to the jet expansion.
This behaviour is in agreement with the results of Pe’er & Casella (2009b) and
Kaiser (2006) for the case of synchrotron cooling dominating over the
adiabatic one. It is worth discussing some specific parameters in detail:
Table 6: Model parameters evaluated from the best-fit model
par. name | units | value | setup value
---|---|---|---
$q_{jet}$ | | $>0.15$ | 0.20
$U_{e}/U_{B}$ | | 0.18 | –
$N^{acc}_{e}/N_{p}^{cold}$ | | $<94$ | –
$L_{jet}^{acc}$ | erg s-1 | $>8.0\times 10^{37}$ | –
$L_{rad}^{acc}$ | erg s-1 | $1.1\times 10^{36}$ | –
$L_{B}^{acc}$ | erg s-1 | $3.6\times 10^{37}$ | $3.6\times 10^{37}$
$L_{e}^{acc}$ | erg s-1 | $6.6\times 10^{36}$ | –
$L_{p}^{acc}$ | erg s-1 | $>3.6\times 10^{37}$ | $>3.6\times 10^{37}$
* •
$q_{jet}>0.15$. This value is compatible with the input value $q_{jet}=0.2$.
As already discussed in the previous section, the $q_{jet}$ parameter is not
linked directly to the accretion efficiency, because the jet powering could,
in principle, also be supported by other mechanisms such as the
Blandford-Znajek mechanism (Blandford & Znajek, 1977), which takes into
account advection of magnetic flux from an accretion disk surrounding the
black hole. Hence, our $q_{jet}$ parameter cannot be used to infer or
constrain the accretion efficiency.
* •
$U_{e}/U_{B}=0.18$. The $U_{e}/U_{B}$ ratio is not far from equipartition, and
it is obtained without imposing any constraint. This shows that the
combination of the phenomenological model setup and the minimization of the
global model converged naturally toward a configuration close to the physical
equipartition of $U_{e}$ and $U_{B}$, giving further support to the choice of
a compact acceleration region connecting the pre-acceleration region to the
radio jet.
* •
$N^{acc}_{e}/N_{p}^{cold}<94$. Since our model is leptonic, the content of
cold protons can be derived from ancillary conditions, such as the condition
that the magnetic energy of the jet has to be in sub-equipartition with the
bulk kinetic energy of the cold protons, in order to allow the mechanical
compressibility of the plasma (Komissarov et al., 2007) and the formation of
shocks/turbulent acceleration sites in the acceleration region. From the
best-fit model we find that, to respect the condition $\rho^{acc}_{p,B}>1.0$,
we need to impose an upper limit on the ratio of relativistic electrons to
cold protons, $N_{e}/N_{p}^{cold}<112$. This value is compatible with the
usual value of
$N_{e}/N_{p}^{cold}=10$ (Celotti & Ghisellini, 2008) used in the case of
relativistic jets with a leptonic radiative domination. Moreover, we note that
the value of $B_{acc}$ obtained from the best fit did not require a
significant change in the value of $L_{B}$ as derived from the
phenomenological model setup, demonstrating that constraining $z_{acc}$ based
on the value of $\nu_{t}$ is naturally in agreement with the formation of
mechanical compression in the jet when $U_{p}>U_{B}$.
* •
$m_{jet}=1.2$. The value of $m_{jet}$ is one of the most critical: indeed, it
dictates the topology and intensity of the magnetic field beyond the
acceleration region, and it is interesting to compare it to the value of
$m_{B}$ used to model the jet below the acceleration region. The initial guess
based on the value of $\alpha_{R}$ required a small modification in order to
reproduce the observed radio spectrum, and the final model naturally explains
the almost flat radio spectrum as emission of the cooled electrons leaving the
acceleration region.
## 5 Discussion and Conclusions
As MAXI J$1820+070$ was observed numerous times across the EM spectrum during
its outburst, there are multiple works relevant to portions of our
multi-wavelength analysis, though to date none studies the source in as
complete a picture as is presented with our model from JetSeT.
Shidatsu et al. (2018) provides the most direct comparison to the analysis in
this work, though the source behavior is different before and after 26 March
(MJD 58206). They found that the optical and near-IR emission is not entirely
from the disk and thus added a power-law component to their diskir model with
a spectrum described by fixed parameters $kT_{disk}=0.35$ keV, $L_{C}/L_{D}=70$,
$f_{out}=5\times 10^{-3}$, and $R_{out}=10^{5}R_{in}$. Our fit found a
considerably lower $kT_{disk}$ (0.12 keV), $L_{C}/L_{D}$ (4.7), and $R_{out}$
($10^{3}R_{in}$), but a higher $f_{out}$ ($4\times 10^{-2}$). We note that the
source behavior between the two observations is different with changes in the
spectral hardness in hard X-rays (Roques & Jourdain, 2019), the development of
type-C QPOs (Stiele & Kong, 2020), and a reduction in the size of the corona
(Kara et al., 2019) that can possibly explain the differences.
Following the work of Muñoz-Darias et al. (2019), we explored the presence of
disk wind signatures in our VLT/X-shooter optical spectrum, as this data set
falls between epochs 11 and 12 of the Muñoz-Darias et al. (2019) campaign and
adds an epoch in their uncovered time window (between 26 March and 23 April).
We focus our spectral analysis on the He I $\lambda$ 5876, He I $\lambda$ 6678
and H$\alpha$ wavelength regions. We found shallow P Cygni profiles and strong
line asymmetries in all three mentioned lines, while a broad outflow component
is detected only in the red wing of the H$\alpha$. Among the observed
absorption troughs, the one detected in the blue wing of He I $\lambda$ 5876
is the most prominent, and it results in a terminal wind velocity $v_{t}$=880
km/s, which is consistent with the outflow velocity of $v\sim$900 km/s derived
from the H$\alpha$ redshifted broad component. These properties indicate that
at this epoch optical disk winds are still present, although with slower
velocities with respect to those found in Muñoz-Darias et al. (2019). The
authors also report an evolution of the line profiles during their monitoring
campaign, and our observation confirms this trend. In
particular the observed H$\alpha$ profile can be interpreted as a continuation
in the evolving pattern of the line between the epochs 9 and 12 shown in
Figure 2 of Muñoz-Darias et al. (2019). Similar spectral variations were
previously reported by Tucker et al. (2018) and ascribed to the orbital motion
of the system. Interestingly, some of the most conspicuous optical wind
detections in Muñoz-Darias et al. (2019) occur in epochs corresponding to the
hard state of the source, when radio emission and strong jet activity are
present (Bright et al., 2020) and the peak of the optical outburst of the
source is reported. This led the authors to the conclusion that the optical
wind detected in MAXI J$1820+070$ is simultaneous with the jet. Our wind
signatures detection, together with the results from our broad band spectral
analysis are consistent with this scenario.
Our phenomenological analysis of the compact jet found the data could be
modelled by a broken power-law with $\alpha_{thick}=0.28\pm 0.02$ and
$\alpha_{thin}=-0.61\pm 0.01$. Combining observations from late March and
early April, Russell et al. (2018) performed a similar analysis and found
spectral indices of $\alpha_{thick}\sim 0.3$ and $\alpha_{thin}\sim-0.7$.
Building on Russell et al. (2018), Shidatsu et al. (2018) estimated a
transition frequency of $\sim 3\times 10^{13}$ Hz and a corresponding flux
density of $\sim 0.4$ Jy. From these values they determined $B\sim 1\times
10^{4}$ G and $R\sim 2\times 10^{9}$ cm using equations from Shidatsu et al.
(2011). Our model peaks at 1.6$\pm$0.2$\times 10^{13}$ Hz with a flux density
of $\sim 0.35$ Jy, thus resulting in similar values.
These values are in agreement with the phenomenological setup and with the
best-fit model from JetSeT. In particular the JetSeT best-fit model gives a
magnetic field in the acceleration region of $\approx 1.8\times 10^{4}$ G, and
a region radius of $\approx 2.6\times 10^{9}$ cm.
The corresponding energy density of the magnetic field is $\approx 1.3\times
10^{7}\textrm{ erg/cm}^{3}$, compared to the value of $8\times
10^{6}\textrm{ erg/cm}^{3}$ from Shidatsu et al. (2018).
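The quoted value follows from the standard expression $U_{B}=B^{2}/8\pi$ applied to the best-fit field; a one-line check:

```python
import math

B_acc = 1.8e4                       # best-fit magnetic field [G]
U_B = B_acc ** 2 / (8 * math.pi)    # magnetic energy density [erg/cm^3]
# U_B comes out at about 1.3e7 erg/cm^3, matching the value quoted in the text.
```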
Additionally, we identify a separate radio spectral component at frequencies
below $\sim$10 GHz, showing an inverted power-law spectrum with slope
$\alpha=0.11\pm 0.02$. Bright et al. (2020), collecting data from different
epochs of VLA, Multi-Element Radio Linked Interferometer Network (eMERLIN),
and Meer Karoo Array Telescope (MeerKAT) observations, could identify at least
one ejected component during the transition from the hard to the soft state
(mid-June to mid-September 2018). Though the source is unresolved down to a
sub-arcsec resolution in the VLA observations considered here (collected in a
previous epoch), the presence of an additional low-frequency spectral
component could suggest that the ejecta later detected by Bright et al. (2020)
were already present at a sub-pc scale during the April 12th 2018 epoch
considered here.
This component is represented in the JetSeT broadband model by the RadioJet
component, and stems naturally from the cooling of the accelerated particles
leaving the acceleration region. Interestingly, we find that the best-fit
index $m_{jet}\approx 1.2$ predicts a radio spectral index of
$\alpha=1-1/m_{jet}\approx 0.166$ that is close to the value found in the
power-law fit. We note that the small difference between the two values is due
to the fact that the JetSeT RadioJet model takes into account the data range
from radio to mm frequencies, differently from the power-law fit, whose range
extends only up to $\approx 10^{10}$ Hz.
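For reference, the quoted prediction follows directly from the relation above, and the relation can also be inverted to see what the fitted slope alone would imply for $m_{jet}$:

```python
m_jet = 1.2
alpha_pred = 1.0 - 1.0 / m_jet      # predicted radio spectral index, ~0.167
m_from_fit = 1.0 / (1.0 - 0.11)     # m_jet implied by the fitted slope alone, ~1.12
```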
In conclusion, our broadband analyses of MAXI J$1820+070$ found the source in
a hard state with parameters similar to those reported by Shidatsu et al.
(2018). The JetSeT broadband model was able to reproduce the full SED taking
into account both the disk/corona emission and the leptonic, radiatively
dominated relativistic jet contribution. We found that the relativistic jet
requires a total energy of $L_{jet}\geq 8.0\times 10^{37}$ erg/s,
corresponding to 0.15 $L_{Edd}$. This value represents a lower limit, since we
assume that the hadronic content of the jet is only in terms of cold protons,
without a significant radiative contribution. The flat radio spectral shape
stems naturally from the synchrotron cooling of the electrons in the
acceleration region, in agreement with previous analyses (Kaiser, 2006; Pe’er
& Casella, 2009a). In comparison, the accretion luminosity ($6\times 10^{37}$
erg/s) is comparable to the lower limit of the jet luminosity. Thus, in MAXI
J$1820+070$ it is possible for the jet to be powered predominantly via
accretion, with only a small contribution from the Blandford-Znajek mechanism,
which in this case cannot provide much power since the black hole spin is
reported to be low (Bassi et al., 2020; Zhao et al., 2020).
We thank the Italian node of the European ALMA Regional Centre (ARC) for the
support. JR and GB acknowledge financial support under the INTEGRAL ASI-INAF
agreement 2019-35-HH.0 and ASI/INAF n. 2017-14-H.0. The research
leading to these results has received funding from the European Union’s
Horizon 2020 Programme under the AHEAD2020 project (grant agreement n. 871158)
F.O. acknowledges the support of the H2020 European Hemera program, grant
agreement No 730970, and the support of the GRAWITA/PRIN-MIUR project: ”The
new frontier of the Multi-Messenger Astrophysics: follow-up of electromagnetic
transient counterparts of gravitational wave sources”. Based on observations
with INTEGRAL, an ESA project with instruments and science data centre funded
by ESA member states (especially the PI countries: Denmark, France, Germany,
Italy, Switzerland, Spain) and with the participation of Russia and the USA.
This research has made use of the services of the ESO Science Archive
Facility. Based on observations collected at the European Southern Observatory
under ESO programmes 2017.1.01103.T (ALMA) and 0101.D-0356(A) (VLT). ALMA is a
partnership of ESO (representing its member states), NSF (USA), and NINS
(Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in co-
operation with the Republic of Chile. The Joint ALMA Observatory is operated
by ESO, AUI/NRAO, and NAOJ. The National Radio Astronomy Observatory is a
facility of the National Science Foundation operated under cooperative
agreement by Associated Universities, Inc.
## References
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Bassi et al. (2020) Bassi, T., Malzac, J., Del Santo, M. et al. 2020, MNRAS, 494, 571 doi: 10.1093/mnras/staa739
* Begelman et al. (1984) Begelman, M. C., Blandford, R. D., & Rees, M. J. 1984, Rev. Mod. Phys., 56, 255, doi: 10.1103/RevModPhys.56.255
* Blandford & Znajek (1977) Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433, doi: 10.1093/mnras/179.3.433
* Bonato et al. (2018) Bonato, M., Liuzzo, E., Giannetti, A., et al. 2018, MNRAS, 478, 1512, doi: 10.1093/mnras/sty1173
* Bright et al. (2020) Bright, J. S., Fender, R. P., Motta, S. E., et al. 2020, Nature Astronomy, doi: 10.1038/s41550-020-1023-5
* Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245, doi: 10.1086/167900
* Celotti & Ghisellini (2008) Celotti, A., & Ghisellini, G. 2008, MNRAS, 385, 283, doi: 10.1111/j.1365-2966.2007.12758.x
* Clarke et al. (2016) Clarke, T. E., Kassim, N. E., Brisken, W., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9906, Proc. SPIE, 99065B, doi: 10.1117/12.2233036
* Flewelling et al. (2016) Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2016, arXiv e-prints, arXiv:1612.05243. https://arxiv.org/abs/1612.05243
* Gallo et al. (2003) Gallo, E., Fender, R. P., & Pooley, G. G. 2003, MNRAS, 344, 60, doi: 10.1046/j.1365-8711.2003.06791.x
* Ghisellini (2013) Ghisellini, G. 2013, Radiative Processes in High Energy Astrophysics, Vol. 873, doi: 10.1007/978-3-319-00612-3
* Gierliński et al. (2009) Gierliński, M., Done, C., & Page, K. 2009, Monthly Notices of the Royal Astronomical Society, 392, 1106
* Kaiser (2006) Kaiser, C. R. 2006, Monthly Notices of the Royal Astronomical Society, 367, 1083
* Kara et al. (2019) Kara, E., Steiner, J. F., Fabian, A. C., et al. 2019, Nature, 565, 198, doi: 10.1038/s41586-018-0803-x
* Kawamuro et al. (2018) Kawamuro, T., Negoro, H., Yoneyama, T., et al. 2018, The Astronomer’s Telegram, 11399, 1
* Komissarov et al. (2007) Komissarov, S. S., Barkov, M. V., Vlahakis, N., & Königl, A. 2007, MNRAS, 380, 51, doi: 10.1111/j.1365-2966.2007.12050.x
* Merloni et al. (2003) Merloni, A., Heinz, S., & di Matteo, T. 2003, MNRAS, 345, 1057, doi: 10.1046/j.1365-2966.2003.07017.x
* Mirabel et al. (1992) Mirabel, I. F., Rodriguez, L. F., Cordier, B., Paul, J., & Lebrun, F. 1992, Nature, 358, 215, doi: 10.1038/358215a0
* Muñoz-Darias et al. (2019) Muñoz-Darias, T., Jiménez-Ibarra, F., Panizo-Espinar, G., et al. 2019, ApJ, 879, L4, doi: 10.3847/2041-8213/ab2768
* Pe’er & Casella (2009a) Pe’er, A., & Casella, P. 2009a, Astrophysical Journal, 699, 1919
* Pe’er & Casella (2009b) —. 2009b, Astrophysical Journal, 699, 1919
* Petrucci et al. (2000) Petrucci, P. O., Haardt, F., Maraschi, L., et al. 2000, ApJ, 540, 131, doi: 10.1086/309319
* Petrucci et al. (2001) —. 2001, ApJ, 556, 716, doi: 10.1086/321629
* Polisensky et al. (2018) Polisensky, E., Giacintucci, S., Peters, W. M., Clarke, T. E., & Kassim, N. E. 2018, The Astronomer’s Telegram, 11540, 1
* Price-Whelan et al. (2018) Price-Whelan, A. M., Sipőcz, B. M., Günther, H. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Roques & Jourdain (2019) Roques, J.-P., & Jourdain, E. 2019, ApJ, 870, 92, doi: 10.3847/1538-4357/aaf1c9
* Russell et al. (2013) Russell, D. M., Markoff, S., Casella, P., et al. 2013, MNRAS, 429, 815, doi: 10.1093/mnras/sts377
* Russell et al. (2018) Russell, D. M., Baglio, M. C., Bright, J., et al. 2018, The Astronomer’s Telegram, 11533, 1
* Rybicki & Lightman (1986) Rybicki, G. B., & Lightman, A. P. 1986, Radiative Processes in Astrophysics
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103
* Shappee et al. (2014) Shappee, B. J., Prieto, J. L., Grupe, D., et al. 2014, ApJ, 788, 48, doi: 10.1088/0004-637X/788/1/48
* Shidatsu et al. (2011) Shidatsu, M., Ueda, Y., Tazaki, F., et al. 2011, PASJ, 63, S785, doi: 10.1093/pasj/63.sp3.S785
* Shidatsu et al. (2018) Shidatsu, M., Nakahira, S., Yamada, S., et al. 2018, ApJ, 868, 54, doi: 10.3847/1538-4357/aae929
* Stiele & Kong (2020) Stiele, H., & Kong, A. K. H. 2020, ApJ, 889, 142, doi: 10.3847/1538-4357/ab64ef
* Tramacere (2020) Tramacere, A. 2020, JetSeT: Numerical modeling and SED fitting tool for relativistic jets. http://ascl.net/2009.001
* Tramacere (in prep.) Tramacere, A. in prep.
* Tramacere et al. (2009) Tramacere, A., Giommi, P., Perri, M., Verrecchia, F., & Tosti, G. 2009, A&A, 501, 879, doi: 10.1051/0004-6361/200810865
* Tramacere et al. (2011) Tramacere, A., Massaro, E., & Taylor, A. M. 2011, ApJ, 739, 66, doi: 10.1088/0004-637X/739/2/66
* Trushkin et al. (2018) Trushkin, S. A., Nizhelskij, N. A., Tsybulev, P. G., & Erkenov, A. 2018, The Astronomer’s Telegram, 11539, 1
* Tucker et al. (2018) Tucker, M. A., Shappee, B. J., Holoien, T. W. S., et al. 2018, ApJ, 867, L9, doi: 10.3847/2041-8213/aae88a
* Ubertini et al. (2003) Ubertini, P., Lebrun, F., Di Cocco, G., et al. 2003, A&A, 411, L131, doi: 10.1051/0004-6361:20031224
* Vernet et al. (2011) Vernet, J., Dekker, H., D’Odorico, S., et al. 2011, A&A, 536, A105, doi: 10.1051/0004-6361/201117752
* Vila & Romero (2010) Vila, G. S., & Romero, G. E. 2010, Monthly Notices of the Royal Astronomical Society, 403, 1457
* Zdziarski et al. (2009) Zdziarski, A. A., Malzac, J., & Bednarek, W. 2009, MNRAS, 394, L41, doi: 10.1111/j.1745-3933.2008.00605.x
* Zhao et al. (2020) Zhao, X., Gou, L., Dong, Y. et al. 2020, arXiv e-prints, arXiv:2012.05544
# New fixed-circle results related to $F_{c}$-contractive and
$F_{c}$-expanding mappings on metric spaces
Nabil MLAIKI1, NİHAL ÖZGÜR2 and NİHAL TAŞ2 1Department of Mathematics and
General Sciences, Prince Sultan University, Riyadh, Saudi Arabia.
<EMAIL_ADDRESS><EMAIL_ADDRESS>2Balıkesir University, Department of
Mathematics, 10145 Balıkesir, TURKEY<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
The fixed-circle problem is a recent problem about the study of geometric
properties of the fixed point set of a self-mapping on metric (resp.
generalized metric) spaces. The fixed-disc problem occurs as a natural
consequence of this problem. Our aim in this paper is to investigate new
classes of self-mappings which satisfy a new specific type of contraction on a
metric space. We see that the fixed point set of any member of these classes
contains a circle (or a disc) called the fixed circle (resp. fixed disc) of
the corresponding self-mapping. For this purpose, we introduce the notions of
an $F_{c}$-contractive mapping and an $F_{c}$-expanding mapping. Activation
functions with fixed circles (resp. fixed discs) are often seen in the study
of neural networks. This shows the effectiveness of our fixed-circle (resp.
fixed-disc) results. In this context, our theoretical results contribute to
future studies on neural networks.
###### Key words and phrases:
Fixed point, fixed circle, fixed disc.
###### 2010 Mathematics Subject Classification:
Primary 47H10; Secondary 54H25.
## 1\. Introduction
In the last few decades, the Banach contraction principle has been generalized
and studied by different approaches such as to generalize the used contractive
condition (see [1], [4], [5], [6], [7], [8], [11], [18], [19] and [26] for
more details) and to generalize the used metric space (see [2], [3], [9],
[12], [14], [17], [25], [27] and [28] for more details). Recently, some fixed-
circle theorems have been introduced as a geometrical direction of
generalization of the fixed-point theorems (see [20], [21], [22] and [23] for
more details).
Let $(X,d)$ be a metric space and $f$ be a self-mapping on $X$. First, we
recall that the circle $C_{u_{0},\rho}=\left\\{u\in
X:d(u,u_{0})=\rho\right\\}$ is a fixed circle of $f$ if $fu=u$ for all $u\in
C_{u_{0},\rho}$ (see [20]). Similarly, the disc $D_{u_{0},\rho}=\left\\{u\in
X:d(u,u_{0})\leq\rho\right\\}$ is called a fixed disc of $f$ if $fu=u$ for all
$u\in D_{u_{0},\rho}$. There are some examples of self-mappings such that the
fixed point set of the self-mapping contains a circle (or a disc). For
example, let us consider the metric space $\left(\mathbb{C},d\right)$ with the
metric
$d\left(z_{1},z_{2}\right)=\left|x_{1}-x_{2}\right|+\left|y_{1}-y_{2}\right|+\left|x_{1}-x_{2}+y_{1}-y_{2}\right|,$
(1.1)
defined for the complex numbers $z_{1}=x_{1}+iy_{1}$ and $z_{2}=x_{2}+iy_{2}$.
We note that the metric defined in (1.1) is the metric induced by the norm
function
$\left\|z\right\|=\left\|x+iy\right\|=\left|x\right|+\left|y\right|+\left|x+y\right|\text{,}$
(see Example 2.4 in [24]). The circle $C_{0,1}$ is shown in Figure 1, which is
drawn using Mathematica [31]. Define the self-mapping $f_{1}$ on $\mathbb{C}$
as follows:
$f_{1}z=\left\\{\begin{array}[]{ccc}z&;&x\leq 0,y\geq 0\text{ or }x\geq
0,y\leq 0\\\ -y+\frac{1}{2}+i\left(-x+\frac{1}{2}\right)&;&x>0,y>0\\\
-y-\frac{1}{2}+i\left(-x-\frac{1}{2}\right)&;&x<0,y<0\end{array}\right.\text{,}$
for each $z=x+iy\in\mathbb{C}$. Then, clearly, the fixed point set of $f_{1}$
contains the circle $C_{0,1}$, that is, $C_{0,1}$ is a fixed circle of
$f_{1}$. Therefore, the study of geometric properties of the fixed point set
of a self-mapping seems to be an interesting problem in the case where the
fixed point is non-unique.
Figure 1. The graph of the circle $C_{0,1}$.
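The claim that $f_{1}$ fixes $C_{0,1}$ can also be checked numerically. In the metric (1.1), the arc of $C_{0,1}$ lying in the open first quadrant is the segment $x+y=1/2$ (and $x+y=-1/2$ in the third quadrant); the following sketch samples such points and verifies $f_{1}u=u$:

```python
def d(z1, z2):
    # The metric (1.1): |x1-x2| + |y1-y2| + |x1-x2 + y1-y2|
    dx, dy = z1.real - z2.real, z1.imag - z2.imag
    return abs(dx) + abs(dy) + abs(dx + dy)

def f1(z):
    x, y = z.real, z.imag
    if (x <= 0 and y >= 0) or (x >= 0 and y <= 0):
        return z
    if x > 0 and y > 0:
        return complex(-y + 0.5, -x + 0.5)
    return complex(-y - 0.5, -x - 0.5)

pts = [complex(t, 0.5 - t) for t in (0.1, 0.2, 0.3, 0.4)]    # x, y > 0
pts += [complex(-t, t - 0.5) for t in (0.1, 0.2, 0.3, 0.4)]  # x, y < 0
pts += [complex(-0.5, 0.5), complex(0.5, -0.5)]              # identity branch
for z in pts:
    assert abs(d(z, 0) - 1.0) < 1e-12   # z lies on C_{0,1}
    assert d(f1(z), z) < 1e-12          # f1 fixes z
```

Indeed, on $x+y=1/2$ one has $f_{1}z=(1/2-y)+i(1/2-x)=x+iy$, which is what the loop confirms point by point.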
On the other hand, fixed points of self-mappings play an important role in the
study of neural networks. For example, in [16], it was pointed out that fixed
points of a neural network can be determined by fixed points of the employed
activation function. If the global input-output relationship in a neural
network can be considered in the framework of Möbius transformations, then the
existence of one or two fixed points of the neural network is guaranteed (see
[10] for basic algebraic and geometric properties of Möbius transformations).
Some possible applications of theoretical fixed-circle results to neural
networks have been investigated in the recent studies [20] and [23].
Next, we remind the reader of the following theorems on a fixed circle.
###### Theorem 1.1.
[20] Let $(X,d)$ be a metric space and consider the map
$\varphi:X\rightarrow\left[0,\infty\right)\text{,
}\varphi(u)=d(u,u_{0})\text{,}$ (1.2)
for all $u\in X$. If there exists a self-mapping $f:X\rightarrow X$ satisfying
$(C1)$ $d(u,fu)\leq\varphi(u)-\varphi(fu)$
and
$(C2)$ $d(fu,u_{0})\geq\rho$,
for each $u\in C_{u_{0},\rho}$, then the circle $C_{u_{0},\rho}$ is a fixed
circle of $f$.
###### Theorem 1.2.
[20] Let $(X,d)$ be a metric space and consider the map $\varphi$ as defined
in $($1.2$)$. Also, assume that $f:X\rightarrow X$ satisfies the following
conditions:
$(C1)^{\ast}$ $d(u,fu)\leq\varphi(u)+\varphi(fu)-2\rho$
and
$(C2)^{\ast}$ $d(fu,u_{0})\leq\rho$,
for each $u\in C_{u_{0},\rho}$, then the circle $C_{u_{0},\rho}$ is a fixed
circle of $f$.
###### Theorem 1.3.
[20] Let $(X,d)$ be a metric space and consider the map $\varphi$ as defined
in $($1.2$)$. Also, assume that $f:X\rightarrow X$ satisfies the following
conditions:
$(C1)^{\ast\ast}$ $d(u,fu)\leq\varphi(u)-\varphi(fu)$
and
$(C2)^{\ast\ast}$ $hd(u,fu)+d(fu,u_{0})\geq\rho$,
for each $u\in C_{u_{0},\rho}$ and some $h\in\left[0,1\right)$, then the
circle $C_{u_{0},\rho}$ is a fixed circle of $f$.
###### Theorem 1.4.
[23] Let $(X,d)$ be a metric space and let the mapping
$\varphi_{\rho}:\mathbb{R}^{+}\cup\left\\{0\right\\}\rightarrow\mathbb{R}$ be
defined by
$\varphi_{\rho}(x)=\left\\{\begin{array}[]{ccc}x-\rho&;&x>0\\\
0&;&x=0\end{array}\right.\text{,}$ (1.3)
for all $x\in\mathbb{R}^{+}\cup\left\\{0\right\\}$. If there exists a self-
mapping $f:X\rightarrow X$ satisfying
1. (1)
$d(fu,u_{0})=\rho$ for each $u\in C_{u_{0},\rho}$,
2. (2)
$d(fu,fv)>\rho$ for each $u,v\in C_{u_{0},\rho}$ and $u\neq v$ ,
3. (3)
$d(fu,fv)\leq d(u,v)-\varphi_{\rho}(d(u,fu))$ for each $u,v\in
C_{u_{0},\rho}$,
then the circle $C_{u_{0},\rho}$ is a fixed circle of $f$.
This manuscript is structured as follows: in Section 2, we give some
generalizations of Theorems 1.1, 1.2 and 1.3. In Section 3, we present the
definitions of an “$F_{c}$-contraction” and an “$F_{c}$-expanding map” and we
prove new theorems on a fixed circle. In Section 4, we consider the fixed
point sets of some activation functions frequently used in the study of neural
networks from a geometric viewpoint. This shows the effectiveness of our
fixed-circle results. In Section 5, we present some open problems for future
works. Our results show the importance of the geometry of the fixed point set
of a self-mapping when the fixed point is non-unique.
## 2\. New Fixed-Circle Theorems for Some Contractive Mappings
First, we give a fixed-circle theorem using an auxiliary function.
###### Theorem 2.1.
Let $(X,d)$ be a metric space, $f$ be a self-mapping on $X$ and the mapping
$\theta_{\rho}:\mathbb{R}\rightarrow\mathbb{R}$ be defined by
$\theta_{\rho}(x)=\left\\{\begin{array}[]{ccc}\rho&;&x=\rho\\\
x+\rho&;&x\neq\rho\end{array}\right.\text{,}$
for all $x\in\mathbb{R}$ and $\rho\geq 0$. Suppose that
1. (1)
$d(fu,u_{0})\leq\theta_{\rho}(d(u,u_{0}))+Ld(u,fu)$ for some
$L\in\left(-\infty,0\right]$ and each $u\in X$,
2. (2)
$\rho\leq d(fu,u_{0})$ for each $u\in C_{u_{0},\rho}$,
3. (3)
$d(fu,fv)\geq 2\rho$ for each $u,v\in C_{u_{0},\rho}$ and $u\neq v$,
4. (4)
$d(fu,fv)<\rho+d(v,fu)$ for each $u,v\in C_{u_{0},\rho}$ and $u\neq v$,
then $f$ fixes the circle $C_{u_{0},\rho}$.
###### Proof.
Let $u\in C_{u_{0},\rho}$ be an arbitrary point. By the conditions (1) and
(2), we have
$d(fu,u_{0})\leq\theta_{\rho}(d(u,u_{0}))+Ld(u,fu)=\rho+Ld(u,fu)$
and so
$\rho\leq d(fu,u_{0})\leq\rho+Ld(u,fu)\text{. }$ (2.1)
We have two cases.
Case 1. Let $L=0$. Then we find $d(fu,u_{0})=\rho$ by (2.1), that is, we have
$fu\in C_{u_{0},\rho}$. Then $d(u,fu)=0$ or $d(u,fu)\neq 0$. Assume
$d(u,fu)\neq 0$ for $u\in C_{u_{0},\rho}$. Since $u\neq fu$, from the
condition (3), we obtain
$d(fu,f^{2}u)\geq 2\rho\text{.}$ (2.2)
Also using the condition (4), we get
$d(fu,f^{2}u)<\rho+d(fu,fu)$
and hence
$d(fu,f^{2}u)<\rho\text{,}$
which contradicts the inequality (2.2). Therefore, we must have $d(u,fu)=0$,
which implies $fu=u$.
Case 2. Let $L\in\left(-\infty,0\right)$. If $d(u,fu)\neq 0$, then
$Ld(u,fu)<0$ and (2.1) gives $\rho\leq\rho+Ld(u,fu)<\rho$, a contradiction.
Hence we must have $d(u,fu)=0$.
Thereby, we obtain $fu=u$ for all $u\in C_{u_{0},\rho}$, that is,
$C_{u_{0},\rho}$ is a fixed circle of $f$. In other words, the fixed point set
of $f$ contains the circle $C_{u_{0},\rho}$. ∎
###### Remark 2.2.
Notice that, if we consider the case $L\in\left(-\infty,0\right)$ in the
condition $(1)$ of Theorem 2.1 for $u\in C_{u_{0},\rho}$, then we get
$-Ld(u,fu)\leq\theta_{\rho}(d(u,u_{0}))-d(fu,u_{0})=d(u,u_{0})-d(fu,u_{0})=\varphi(u)-\varphi(fu)\text{
}$
and hence
$-Ld(u,fu)\leq\varphi(u)-\varphi(fu)\text{.}$
For $L=-1$, we obtain
$d(u,fu)\leq\varphi(u)-\varphi(fu)\text{.}$
This means that the condition $(C1)$ $($resp. the condition $(C1)^{\ast\ast})$
is satisfied for this case.
Clearly, the condition $(2)$ of Theorem 2.1 is the same as condition $(C2)$.
On the other hand, if the condition $(2)$ of Theorem 2.1 is satisfied then the
condition $(C2)^{\ast\ast}$ is satisfied. Consequently, Theorem 2.1 is a
generalization of Theorem 1.1 and Theorem 1.3 for the cases
$L\in\left(-\infty,0\right)\setminus\left\\{-1\right\\}$. For the case $L=-1$,
Theorem 2.1 coincides with Theorem 1.1 and it is a special case of Theorem
1.3.
Next, we present some illustrative examples.
###### Example 2.3.
Let $\left(\mathbb{R},d\right)$ be the metric space with the usual metric
$d(x_{1},x_{2})=\left|x_{1}-x_{2}\right|$ and consider the circle
$C_{0,1}=\left\\{-1,1\right\\}$. If we define the self-mapping
$f_{1}:\mathbb{R}\rightarrow\mathbb{R}$ as
$f_{1}x=\left\\{\begin{array}[]{ccc}3x^{2}+x-3&;&x\in\left\\{-1,1\right\\}\\\
0&;&\text{otherwise}\end{array}\right.\text{,}$
for each $x\in\mathbb{R}$, then it is not difficult to see that $f_{1}$
satisfies the hypothesis of Theorem 2.1 for the circle $C_{0,1}$ and
$L=\frac{-1}{2}$. Clearly, $C_{0,1}$ is the fixed circle of $f_{1}$.
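The claims in Example 2.3 can also be verified mechanically. The following Python snippet is an illustrative sketch, not part of the paper (the names `f1`, `theta` and `d` are ad hoc); it checks that $f_{1}$ fixes $C_{0,1}$ and that the four conditions of Theorem 2.1 hold on a sample of points, with $\rho=1$, $u_{0}=0$ and $L=-\frac{1}{2}$:

```python
# Numerical sanity check for Example 2.3 (an illustration, not from the paper).
from itertools import permutations

def f1(x):
    # the self-mapping f1 of Example 2.3
    return 3 * x ** 2 + x - 3 if x in (-1, 1) else 0

def theta(rho, x):
    # the auxiliary function theta_rho of Theorem 2.1
    return rho if x == rho else x + rho

rho, u0, L = 1, 0, -0.5
d = lambda a, b: abs(a - b)        # the usual metric on R
C = (-1, 1)                        # the circle C_{0,1}

# f1 fixes every point of the circle
assert all(f1(u) == u for u in C)

# condition (1) on a sample of points of the whole space
for u in (-2, -1, -0.5, 0, 0.5, 1, 2):
    assert d(f1(u), u0) <= theta(rho, d(u, u0)) + L * d(u, f1(u))

# conditions (2)-(4) on the circle
assert all(rho <= d(f1(u), u0) for u in C)
assert all(d(f1(u), f1(v)) >= 2 * rho for u, v in permutations(C, 2))
assert all(d(f1(u), f1(v)) < rho + d(v, f1(u)) for u, v in permutations(C, 2))
```

Of course, checking condition (1) on finitely many sample points does not replace the (elementary) proof for all $u\in\mathbb{R}$.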
###### Example 2.4.
Consider $\left(\mathbb{R},d\right)$ to be the usual metric space and the
circle $C_{0,2}=\left\\{-2,2\right\\}$. Define
$f_{2}:\mathbb{R}\rightarrow\mathbb{R}$ by
$f_{2}x=\left\\{\begin{array}[]{ccc}2&;&x=-2\\\ -2&;&x=2\\\
0&;&\text{otherwise}\end{array}\right.\text{,}$
for each $x\in\mathbb{R}$. Then $f_{2}$ does not satisfy the condition $(1)$
of Theorem 2.1 for each $x\in C_{0,2}$ and for any
$L\in\left(-\infty,0\right)$. Also, $f_{2}$ does not satisfy the condition
$(4)$ for each $x\in C_{0,2}$ and for any $L\in\left(-\infty,0\right]$.
Clearly, $f_{2}$ does not fix $C_{0,2}$ and this example shows that the
condition $(4)$ is crucial in Theorem 2.1.
###### Example 2.5.
Consider $\left(\mathbb{R},d\right)$ to be the usual metric space and the
circles $C_{0,1}=\left\\{-1,1\right\\}$ and $C_{0,2}=\left\\{-2,2\right\\}$.
If we define $f_{3}:\mathbb{R}\rightarrow\mathbb{R}$ as
$f_{3}x=\left\\{\begin{array}[]{ccc}x&;&x\in C_{0,1}\cup C_{0,2}\\\
0&;&\text{otherwise}\end{array}\right.\text{,}$
for each $x\in\mathbb{R}$, then $f_{3}$ satisfies the hypothesis of Theorem
2.1 for each of the circles $C_{0,1}$ and $C_{0,2}$ and for any
$L\in\left[-1,0\right]$. Clearly, $C_{0,1}$ and $C_{0,2}$ are the fixed
circles of $f_{3}$.
We give another fixed-circle result.
###### Theorem 2.6.
Let $(X,d)$ be a metric space, $f$ be a self-mapping on $X$ and the mapping
$\theta_{\rho}:\mathbb{R}\rightarrow\mathbb{R}$ be defined by
$\theta_{\rho}(x)=\left\\{\begin{array}[]{ccc}\rho&;&x=\rho\\\
x+\rho&;&x\neq\rho\end{array}\right.\text{,}$
for all $x\in\mathbb{R}$ and $\rho\geq 0$. Suppose that
1. (1)
$2d(u,u_{0})-d(fu,u_{0})\leq\theta_{\rho}(d(u,u_{0}))+Ld(u,fu)$ for some
$L\in\left(-\infty,0\right]$ and each $u\in X$,
2. (2)
$d(fu,u_{0})\leq\rho$ for each $u\in C_{u_{0},\rho}$,
3. (3)
$d(fu,fv)\geq 2\rho$ for each $u,v\in C_{u_{0},\rho}$ and $u\neq v$,
4. (4)
$d(fu,fv)<\rho+d(v,fu)$ for each $u,v\in C_{u_{0},\rho}$ and $u\neq v$,
then the self-mapping $f$ fixes the circle $C_{u_{0},\rho}.$
###### Proof.
Consider $u\in C_{u_{0},\rho}$ to be an arbitrary point. Using the conditions
(1) and (2), we get
$2d(u,u_{0})-d(fu,u_{0})\leq d(u,u_{0})+Ld(u,fu)\text{,}$
$2\rho-d(fu,u_{0})\leq\rho+Ld(u,fu)$
and
$\rho\leq d(fu,u_{0})+Ld(u,fu)\leq\rho+Ld(u,fu)\text{.}$ (2.3)
Arguing as in the proof of Theorem 2.1, a direct computation shows that the
circle $C_{u_{0},\rho}$ is fixed by $f$. ∎
###### Remark 2.7.
Notice that, if we consider the case $L=-1$ in the condition $(1)$ of Theorem
2.6 for $u\in C_{u_{0},\rho}$ then we get
$d(u,fu)\leq\theta_{\rho}(d(u,u_{0}))+d(fu,u_{0})-2d(u,u_{0})=\rho+d(fu,u_{0})-2\rho=\varphi(u)+\varphi(fu)-2\rho\text{.}$
Hence the condition $(C1)^{\ast}$ is satisfied. Also, the condition $(2)$ of
Theorem 2.6 is contained in the condition $(C2)^{\ast}$. Therefore, Theorem
2.6 is a special case of Theorem 1.2 in this case. For the cases
$L\in\left(-\infty,0\right)$, Theorem 2.6 is a generalization of Theorem 1.2.
Now, we give some illustrative examples.
###### Example 2.8.
Consider the usual metric space $\left(\mathbb{R},d\right)$ and the circle
$C_{0,1}=\left\\{-1,1\right\\}$. Define the map
$f_{4}:\mathbb{R}\rightarrow\mathbb{R}$ as
$f_{4}x=\left\\{\begin{array}[]{ccc}\frac{1}{x}&;&x\in\left\\{-1,1\right\\}\\\
2x&;&\text{otherwise}\end{array}\right.\text{,}$
for each $x\in\mathbb{R}$, then $f_{4}$ satisfies the hypothesis of Theorem
2.6 for $L=-\frac{1}{2}$. Clearly, $C_{0,1}$ is the fixed circle of $f_{4}$.
It is easy to check that $f_{4}$ does not satisfy the condition $(1)$ of
Theorem 2.1 for any $L\in\left(-\infty,0\right]$.
###### Example 2.9.
Consider the usual metric space $\left(\mathbb{R},d\right)$ and the circles
$C_{0,1}=\left\\{-1,1\right\\}$ and $C_{1,2}=\left\\{-1,3\right\\}$. Define
the self-mapping $f_{5}:\mathbb{R}\rightarrow\mathbb{R}$ as
$f_{5}x=\left\\{\begin{array}[]{ccc}x&;&x\in C_{0,1}\cup C_{1,2}\\\ \alpha
x&;&\text{otherwise}\end{array}\right.\text{,}$
for each $x\in\mathbb{R}$ and $\alpha\geq 2$, then $f_{5}$ satisfies the
hypothesis of Theorem 2.6 for $L=0$ and for each of the circles $C_{0,1}$ and
$C_{1,2}$. Clearly, $C_{0,1}$ and $C_{1,2}$ are the fixed circles of $f_{5}$.
Notice that the fixed circles $C_{0,1}$ and $C_{1,2}$ are not disjoint.
Considering Example 2.5 and Example 2.9, we deduce that a fixed circle need
not be unique in Theorem 2.1 and Theorem 2.6. When the fixed circle is not
unique, two fixed circles of a self-mapping may or may not be disjoint. Next,
we prove a theorem in which $f$ fixes a unique circle.
###### Theorem 2.10.
Let $(X,d)$ be a metric space and $f:X\rightarrow X$ be a self-mapping which
fixes the circle $C_{u_{0},\rho}$. If the condition
$d(fu,fv)<\max\left\\{d(v,fu),d(v,fv)\right\\}\text{,}$ (2.4)
is satisfied by $f$ for all $u\in C_{u_{0},\rho}$ and $v\in X\setminus
C_{u_{0},\rho}$, then $C_{u_{0},\rho}$ is the unique fixed circle of $f$.
###### Proof.
Let $C_{u_{1},\mu}$ be another fixed circle of $f$. If we take $u\in
C_{u_{0},\rho}$ and $v\in C_{u_{1},\mu}$ with $u\neq v$, then using the
inequality (2.4), we obtain
$\displaystyle d(u,v)$ $\displaystyle=$ $\displaystyle d(fu,fv)$
$\displaystyle<$
$\displaystyle\max\left\\{d(v,fu),d(v,fv)\right\\}=d(u,v)\text{,}$
a contradiction. We have $u=v$ for all $u\in C_{u_{0},\rho}$, $v\in
C_{u_{1},\mu}$ hence $f$ only fixes the circle $C_{u_{0},\rho}.$ ∎
In the following example, we show that the converse of Theorem 2.10 is not
true in general.
###### Example 2.11.
Consider the usual metric space $\left(\mathbb{C},d\right)$ and the circle
$C_{0,\frac{1}{4}}.$ Define $f_{6}$ on $\mathbb{C}$ as follows$:$
$f_{6}z=\left\\{\begin{array}[]{ccc}\frac{1}{16\overline{z}}&\text{if}&z\neq
0\\\ 0&\text{if}&z=0\end{array}\right.\text{,}$
for $z\in\mathbb{C}$, where $\overline{z}$ denotes the complex conjugate of
$z$. It is not difficult to see that $C_{0,\frac{1}{4}}$ is the unique fixed
circle of $f_{6}$ where $f_{6}$ does not satisfy the hypothesis of Theorem
2.10.
Now, we give the following example as an illustration of Theorem 2.10.
###### Example 2.12.
Let $Y=\left\\{-1,0,1\right\\}$ and the metric $d:Y\times
Y\rightarrow\left[0,\infty\right)$ be defined by
$d(u,v)=\left\\{\begin{array}[]{ccc}0&;&u=v\\\
\left|u\right|+\left|v\right|&;&u\neq v\end{array}\right.\text{,}$
for all $u,v\in Y$. If we consider the self-mapping $f_{7}:Y\rightarrow Y$
defined by
$f_{7}u=0\text{,}$
for any $u\in Y$, then $C_{1,1}=\left\\{0\right\\}$ is the unique fixed circle
of $f_{7}.$
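As a quick illustration (not part of the paper), the computation behind Example 2.12 can be replayed in Python; the helper names below are ad hoc:

```python
# Numerical check for Example 2.12: with d(u,v) = |u| + |v| for u != v,
# the circle C_{1,1} reduces to the single point {0}, which the constant
# map f7 fixes.

Y = (-1, 0, 1)

def d(u, v):
    # the metric of Example 2.12
    return 0 if u == v else abs(u) + abs(v)

circle = [u for u in Y if d(u, 1) == 1]   # points at distance 1 from center 1
assert circle == [0]

f7 = lambda u: 0
assert all(f7(u) == u for u in circle)    # f7 fixes C_{1,1} = {0}
```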
Next, we present the following interesting theorem that involves the identity
map $I_{X}:X\rightarrow X$ defined by $I_{X}(u)=u$ for all $u\in X.$
###### Theorem 2.13.
Let $(X,d)$ be a metric space. Consider the map $f$ from $X$ to itself with
the fixed circle $C_{u_{0},\rho}$. The self-mapping $f$ satisfies the
condition
$d(u,fu)\leq\alpha\left[\max\left\\{d(u,fu),d(u_{0},fu)\right\\}-d(u_{0},fu)\right]\text{,}$
(2.5)
for all $u\in X$ and some $\alpha\in\left(0,1\right)$ if and only if
$f=I_{X}$.
###### Proof.
Let $u\in X$ with $fu\neq u$. By inequality (2.5), if $d(u,fu)\geq
d(u_{0},fu)$, then we get
$d(u,fu)\leq\alpha\left[d(u,fu)-d(u_{0},fu)\right]\leq\alpha d(u,fu)\text{,}$
which leads us to a contradiction due to the fact that
$\alpha\in\left(0,1\right)$. If $d(u,fu)\leq d(u_{0},fu)$, then we find
$d(u,fu)\leq\alpha\left[d(u_{0},fu)-d(u_{0},fu)\right]=0.$
Hence, $fu=u$, that is, $f=I_{X}$ since $u$ is arbitrary in $X$.
Conversely, $I_{X}$ clearly satisfies the condition (2.5). ∎
###### Corollary 2.14.
Let $(X,d)$ be a metric space and $f:X\rightarrow X$ be a self-mapping. If $f$
satisfies the hypothesis of Theorem 2.1 $($resp. Theorem 2.6$)$ but the
condition $($2.5$)$ is not satisfied, then $f\neq I_{X}$.
Now, we recall the following theorem given in [20].
###### Theorem 2.15.
[20] Let $(X,d)$ be a metric space. Consider the map $f$ from $X$ to itself
which has a fixed circle $C_{u_{0},\rho}$ and $\varphi$ as in $($1.2$)$. Then
$f$ satisfies the condition
$d(u,fu)\leq\frac{\varphi(u)-\varphi(fu)}{h}\text{,}$ (2.6)
for every $u\in X$ and $h>1$ if and only if $f=I_{X}$.
###### Theorem 2.16.
Let $(X,d)$ be a metric space. Consider the map $f$ from $X$ to itself which
has a fixed circle $C_{u_{0},\rho}$ and $\varphi$ as in $($1.2$)$. Then $f$
satisfies $($2.5$)$ if and only if $f$ satisfies $($2.6$)$.
###### Proof.
The proof follows easily. ∎
## 3\. $F_{c}$-contractive and $F_{c}$-expanding mappings in metric spaces
In this section, we use a different approach to obtain new fixed-circle
results. First, we recall the definition of the following family of functions
which was introduced by Wardowski in [30].
###### Definition 3.1.
[30] Let $\mathbb{F}$ be the family of all functions
$F:(0,\infty)\rightarrow\mathbb{R}$ such that
$(F_{1})$ $F$ is strictly increasing,
$(F_{2})$ For each sequence $\left\\{\alpha_{n}\right\\}$ in
$\left(0,\infty\right)$ the following holds
$\underset{n\rightarrow\infty}{\lim}\alpha_{n}=0\text{ if and only if
}\underset{n\rightarrow\infty}{\lim}F(\alpha_{n})=-\infty\text{,}$
$(F_{3})$ There exists $k\in(0,1)$ such that $\underset{\alpha\rightarrow
0^{+}}{\lim}\alpha^{k}F(\alpha)=0$.
Some examples of functions satisfying the conditions $(F_{1})$, $(F_{2})$
and $(F_{3})$ of Definition 3.1 are $F(u)=\ln(u)$, $F(u)=\ln(u)+u$,
$F(u)=-\frac{1}{\sqrt{u}}$ and $F(u)=\ln(u^{2}+u)$ (see [30] for more
details).
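As an aside (not part of the paper), the behaviour required by $(F_{1})$, $(F_{2})$ and $(F_{3})$ can be observed numerically for $F(u)=\ln(u)$; the snippet below is only a spot check along sample sequences, not a proof of the limit statements:

```python
import math

# Illustrative numerical check that F(u) = ln(u) behaves as required by
# Definition 3.1 along a sample of points.

F = math.log

# (F1): strictly increasing on a sample grid
xs = [0.01, 0.1, 0.5, 1.0, 2.0, 10.0]
assert all(F(a) < F(b) for a, b in zip(xs, xs[1:]))

# (F2): F(alpha_n) decreases without bound as alpha_n -> 0+
seq = [10.0 ** (-n) for n in range(1, 8)]
vals = [F(a) for a in seq]
assert all(v2 < v1 for v1, v2 in zip(vals, vals[1:]))
assert vals[-1] < -15            # ln(1e-7) is about -16.1

# (F3): alpha^k * F(alpha) -> 0 for k = 1/2
k = 0.5
assert abs(seq[-1] ** k * vals[-1]) < 0.01
```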
At this point, we introduce the following new contraction type.
###### Definition 3.2.
Let $(X,d)$ be a metric space and $f$ be a self-mapping on $X$. If there exist
$t>0$, $F\in\mathbb{F}$ and $u_{0}\in X$ such that
$d(u,fu)>0\Rightarrow t+F(d(u,fu))\leq F(d(u_{0},u))\text{,}$
for all $u\in X$, then $f$ is called an $F_{c}$-contraction.
We note that the point $u_{0}$ mentioned in Definition 3.2 must be a fixed
point of the mapping $f$. Indeed, if $u_{0}$ is not a fixed point of $f$, then
we have $d(u_{0},fu_{0})>0$ and hence
$d(u_{0},fu_{0})>0\Rightarrow t+F(d(u_{0},fu_{0}))\leq
F(d(u_{0},u_{0}))\text{.}$
This is a contradiction, since $d(u_{0},u_{0})=0$ does not belong to the
domain $(0,\infty)$ of $F$. Consequently,
we obtain the following proposition as an immediate consequence of Definition
3.2.
###### Proposition 3.3.
Let $(X,d)$ be a metric space. If $f$ is an $F_{c}$-contraction with
$u_{0}\in$ $X$ then we have $fu_{0}=u_{0}.$
Using this new type of contraction, we give the following fixed-circle theorem.
###### Theorem 3.4.
Let $(X,d)$ be a metric space and $f$ be an $F_{c}$-contraction with
$u_{0}\in$ $X$. Define the number $\sigma$ by
$\sigma=\inf\left\\{d(u,fu):u\neq fu,u\in X\right\\}\text{.}$
Then $C_{u_{0},\sigma}$ is a fixed circle of $f$. In particular, $f$ fixes
every circle $C_{u_{0},r}$ where $r<\sigma$.
###### Proof.
If $\sigma=0$ then clearly $C_{u_{0},\sigma}=\left\\{u_{0}\right\\}$ and by
Proposition 3.3, we see that $C_{u_{0},\sigma}$ is a fixed circle of $f$.
Assume $\sigma>0$ and let $u\in C_{u_{0},\sigma}$. If $fu\neq u$, then by the
definition of $\sigma$ we have $d(u,fu)\geq\sigma$. Hence using the
$F_{c}$-contractive property and the fact that $F$ is increasing, we obtain
$F(\sigma)\leq F(d(u,fu))\leq F(d(u_{0},u))-t<F(d(u_{0},u))=F(\sigma)\text{,}$
which leads to a contradiction. Therefore, we have $d(u,fu)=0$, that is,
$fu=u$. Consequently, $C_{u_{0},\sigma}$ is a fixed circle of $f$.
Now we show that $f$ also fixes any circle $C_{u_{0},r}$ with $r<\sigma$. Let
$u\in C_{u_{0},r}$ and assume that $d(u,fu)>0$. By the $F_{c}$-contractive
property, we have
$F(d(u,fu))\leq F(d(u_{0},u))-t<F(r)\text{.}$
Since $F$ is increasing, then we find
$d(u,fu)<r<\sigma.$
But then $d(u,fu)$ is a positive value smaller than
$\sigma=\inf\left\\{d(u,fu):u\neq fu,u\in X\right\\}$, a contradiction. Thus,
$d(u,fu)=0$ and $fu=u$. Hence, $C_{u_{0},r}$ is a fixed circle of $f$. ∎
###### Remark 3.5.
$1)$ Notice that, in Theorem 3.4, the $F_{c}$-contraction $f$ fixes the disc
$D_{u_{0},\sigma}$. Therefore, the center of any fixed circle is also fixed by
$f$. In Theorem 1.4, the self-mapping $f$ maps $C_{u_{0},\rho}$ into $($or
onto$)$ itself, but the center of the fixed circle need not be fixed by
$f$.
$2)$ Depending on the number of elements of the set $X$, the number of fixed
circles of an $F_{c}$-contractive self-mapping $f$ can be infinite $($see
Example 3.8$)$.
We give some illustrative examples.
###### Example 3.6.
Let $X=\left\\{0,1,e^{2},-e^{2},e^{2}-1,e^{2}+1\right\\}$ be the metric space
with the usual metric. Define the self-mapping $f_{8}:X\rightarrow X$ as
$f_{8}u=\left\\{\begin{array}[]{ccc}1&;&u=0\\\
u&;&\text{otherwise}\end{array}\right.\text{,}$
for all $u\in X$. Then the self-mapping $f_{8}$ is an $F_{c}$-contractive
self-mapping with $F(u)=\ln u$, $t=1$ and $u_{0}=e^{2}$. Using Theorem 3.4, we
obtain $\sigma=1$ and $f_{8}$ fixes the circle
$C_{e^{2},1}=\left\\{e^{2}-1,e^{2}+1\right\\}$. Clearly, $f_{8}$
fixes the disc $D_{e^{2},1}=\left\\{u\in X:d(u,e^{2})\leq
1\right\\}=\left\\{e^{2},e^{2}-1,e^{2}+1\right\\}$. Notice that $f_{8}$ also
fixes the circle $C_{0,e^{2}}=\left\\{-e^{2},e^{2}\right\\}.$
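The computations behind Example 3.6 can be checked numerically. The Python snippet below is an illustration only (the names `f8` and `d` are ad hoc), and it uses a small tolerance when selecting the circle because $e^{2}$ is represented in floating point:

```python
import math

# Numerical check for Example 3.6: f8 is an Fc-contraction with F = ln,
# t = 1, u0 = e^2, and it fixes the circle C_{e^2,1} = {e^2 - 1, e^2 + 1}.

e2 = math.e ** 2
X = [0, 1, e2, -e2, e2 - 1, e2 + 1]
f8 = lambda u: 1 if u == 0 else u
d = lambda a, b: abs(a - b)
F, t, u0 = math.log, 1, e2

# Fc-contraction condition at every point that actually moves
for u in X:
    if d(u, f8(u)) > 0:
        assert t + F(d(u, f8(u))) <= F(d(u0, u))

sigma = min(d(u, f8(u)) for u in X if u != f8(u))
assert sigma == 1

# the circle C_{e^2,1} (selected with a floating-point tolerance) is fixed
circle = [u for u in X if abs(d(u, u0) - sigma) < 1e-9]
assert len(circle) == 2 and all(f8(u) == u for u in circle)
```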
The converse statement of Theorem 3.4 is not always true as seen in the
following example.
###### Example 3.7.
Let $(X,d)$ be a metric space, $u_{0}\in X$ any point and the self-mapping
$f_{9}:X\rightarrow X$ defined as
$f_{9}u=\left\\{\begin{array}[]{ccc}u&;&d(u,u_{0})\leq\mu\\\
u_{0}&;&d(u,u_{0})>\mu\end{array}\right.\text{,}$
for all $u\in X$ with any $\mu>0$. Then it can be easily seen that $f_{9}$ is
not an $F_{c}$-contractive self-mapping for the point $u_{0}$ but $f_{9}$
fixes every circle $C_{u_{0},r}$ where $r\leq\mu$.
###### Example 3.8.
Let $\left(\mathbb{C},d\right)$ be the usual metric space and define the self-
mapping $f_{10}:\mathbb{C}\rightarrow\mathbb{C}$ as
$f_{10}u=\left\\{\begin{array}[]{ccc}u&;&\left|u\right|<2\\\
u+1&;&\left|u\right|\geq 2\end{array}\right.\text{,}$
for all $u\in\mathbb{C}$. We have $\sigma=\min\left\\{d(u,f_{10}u):u\neq
f_{10}u\right\\}=1$. Then $f_{10}$ is an $F_{c}$-contractive self-mapping with
$F(u)=\ln u$, $t=\ln 2$ and $u_{0}=0\in\mathbb{C}$. Evidently, the number of
fixed circles of $f_{10}$ is infinite.
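Similarly, the properties claimed in Example 3.8 can be spot-checked on the complex plane. The snippet below (an illustration, not part of the paper) verifies the $F_{c}$-contraction inequality at a few moving points and the pointwise fixing of sample circles $C_{0,r}$ with $r<2$:

```python
import math, cmath

# Illustrative check for Example 3.8: f10 is an Fc-contraction on C with
# F = ln, t = ln 2, u0 = 0, and every circle C_{0,r} with r < 2 is fixed
# pointwise.

def f10(u):
    return u if abs(u) < 2 else u + 1

F, t, u0 = math.log, math.log(2), 0

# Fc-contraction condition on a sample of moving points (|u| >= 2)
for u in (2, -3, 2j, 3 - 4j):
    assert abs(u - f10(u)) > 0
    assert t + F(abs(u - f10(u))) <= F(abs(u - u0))

# every point of a sampled circle C_{0,r}, r < 2, is fixed
for r in (0.5, 1.0, 1.9):
    for k in range(8):
        u = r * cmath.exp(2j * math.pi * k / 8)
        assert f10(u) == u
```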
Now, to obtain a new fixed-circle theorem, we use the well-known fact that if
a self-mapping $f$ on $X$ is surjective, then there exists a self-mapping
$f^{\ast}:X\rightarrow X$ such that the map $(f\circ f^{\ast})$ is the
identity map on $X$.
###### Definition 3.9.
A self-mapping $f$ on a metric space $X$ is called an $F_{c}$-expanding map
if there exist $t<0$, $F\in\mathbb{F}$ and $u_{0}\in X$ such that
$d(u,fu)>0\Rightarrow F(d(u,fu))\leq F(d(u_{0},fu))+t\text{,}$
for all $u\in X$.
###### Theorem 3.10.
Let $(X,d)$ be a metric space. If $f:X\rightarrow X$ is a surjective
$F_{c}$-expanding map with $u_{0}\in X$, then $f$ has a fixed circle in $X.$
###### Proof.
Since $f$ is surjective, we know that there exists a self-mapping
$f^{\ast}:X\rightarrow X,$ such that the map $(f\circ f^{\ast})$ is the
identity map on $X$. Let $u\in X$ be such that $d(u,f^{\ast}u)>0$ and set
$z=f^{\ast}u$. First, notice that
$fz=f(f^{\ast}u)=(f\circ f^{\ast})u=u\text{.}$
Since
$d(z,fz)=d(fz,z)>0\text{,}$
by applying the $F_{c}$-expanding property of $f$ we get
$F\left(d(z,fz)\right)\leq F(d(u_{0},fz))+t$
and
$F\left(d(f^{\ast}u,u)\right)\leq F(d(u_{0},u))+t\text{.}$
Therefore, we obtain
$-t+F\left(d(f^{\ast}u,u)\right)\leq F(d(u_{0},u))\text{.}$
Consequently, $f^{\ast}$ is an $F_{c}$-contraction on $X$ with $u_{0}$, since
$-t>0$. Then by Theorem 3.4, $f^{\ast}$ has a fixed circle $C_{u_{0},\sigma}$.
Let $v\in C_{u_{0},\sigma}$ be any point. Since $v$ is fixed by $f^{\ast}$,
we get
$fv=f(f^{\ast}v)=v\text{,}$
that is, $v$ is a fixed point of $f$. This implies that $f$ also fixes
$C_{u_{0},\sigma}$, as required. ∎
###### Example 3.11.
Let $X=\left\\{1,2,3,4,5\right\\}$ with the usual metric. Define the self-
mapping $f_{11}:X\rightarrow X$ by
$f_{11}u=\left\\{\begin{array}[]{ccc}2&;&u=1\\\ 1&;&u=2\\\
u&;&u\in\left\\{3,4,5\right\\}\end{array}\right.\text{.}$
$f_{11}$ is a surjective $F_{c}$-expanding map with $u_{0}=4$, $F(u)=\ln u$
and $t=-\ln 2$. We have
$\sigma=\min\left\\{d\left(u,fu\right):u\neq fu,u\in X\right\\}=1$
and the circle $C_{4,1}=\left\\{3,5\right\\}$ is the fixed circle of $f_{11}$.
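Since the space in Example 3.11 is finite, its conditions can be checked exhaustively; the following snippet (an illustration, not part of the paper; `f11` is encoded as a dictionary) does so:

```python
import math

# Exhaustive check for Example 3.11: f11 is a surjective Fc-expanding map
# with u0 = 4, F = ln, t = -ln 2, and it fixes C_{4,1} = {3, 5}.

X = [1, 2, 3, 4, 5]
f11 = {1: 2, 2: 1, 3: 3, 4: 4, 5: 5}
d = lambda a, b: abs(a - b)
F, t, u0 = math.log, -math.log(2), 4

assert sorted(f11.values()) == X                      # surjective
for u in X:
    if d(u, f11[u]) > 0:
        assert F(d(u, f11[u])) <= F(d(u0, f11[u])) + t

assert all(f11[u] == u for u in X if d(u, u0) == 1)   # C_{4,1} is fixed
```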
###### Remark 3.12.
If $f$ is not a surjective map, then the result in Theorem 3.10 need not
hold. For example, let $X=\left\\{1,2,3,4\right\\}$ with the usual metric
$d.$ Define the self-mapping $f_{12}:X\rightarrow X$ by
$f_{12}u=\left\\{\begin{array}[]{ccc}2&;&u\in\left\\{1,3\right\\}\\\
1&;&u=2\\\ 4&;&u=4\end{array}\right.\text{.}$
Then, it is easy to check that $f_{12}$ satisfies the condition
$d(u,fu)>0\Rightarrow F(d(u,fu))\leq F(d(u_{0},fu))+t$
for all $u\in X$, with $F\left(u\right)=\ln u$, $u_{0}=4$ and $t=-\ln 2$.
Therefore, $f_{12}$ satisfies all the conditions of Theorem 3.10, except that
$f_{12}$ is not surjective. Notice that $\sigma=1$ and $f_{12}$ does not fix
the circle $C_{4,1}$.
## 4\. Fixed point sets of activation functions
Activation functions are the primary decision-making units in a neural
network, and hence it is critical to choose the most appropriate
activation function for neural network analysis [29]. Characteristic
properties of activation functions play an important role in learning and
stability issues of a neural network. A comprehensive analysis of different
activation functions with individual real-world applications was given in
[29]. We note that the fixed point sets of commonly used activation functions
(e.g. Ramp function, ReLU function, Leaky ReLU function) contain some fixed
discs and fixed circles. For example, let us consider the Leaky ReLU function
defined by
$f(x)=\max(kx,x)=\left\\{\begin{array}[]{ccc}kx&;&x\leq 0\\\
x&;&x>0\end{array}\right.\text{,}$
where $k\in\left[0,1\right]$. In [32], the Leaky-Reluplex algorithm was
proposed to verify Deep Neural Networks (DNNs) with Leaky ReLU activation
function (see [32] for more details). Now we consider the fixed point set of
the Leaky ReLU activation function from a geometric viewpoint. Let
$\rho=u_{0}\in\left(0,\infty\right)$ be any positive number and consider the
circle $C_{u_{0},\rho}=\left\\{0,2u_{0}\right\\}$. Then it is easy to check
that the function $f(x)$ satisfies the conditions of Theorem 2.1 for the
circle $C_{u_{0},\rho}$ with $L=0$. Clearly, the circle $C_{u_{0},\rho}$ is a
fixed circle of $f(x)$ and the center of the fixed circle is also fixed by
$f(x)$.
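This observation about the Leaky ReLU is easy to test numerically. The snippet below (an illustration, not part of the paper) checks that both points of $C_{u_{0},u_{0}}=\left\\{0,2u_{0}\right\\}$ are fixed for several values of $k$ and $u_{0}$:

```python
# Illustrative check: the Leaky ReLU f(x) = max(kx, x), k in [0, 1], fixes
# every circle C_{u0, u0} = {0, 2*u0} with u0 > 0, since both 0 and 2*u0
# are fixed points of f (and so is the center u0).

def leaky_relu(x, k):
    return max(k * x, x)

for k in (0.0, 0.01, 0.5, 1.0):
    for u0 in (0.5, 1.0, 3.0):
        circle = (0.0, 2 * u0)          # points at distance u0 from u0
        assert all(leaky_relu(x, k) == x for x in circle)
        assert leaky_relu(u0, k) == u0  # the center is fixed as well
```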
On the other hand, fixed point theorems have been extensively used in the
study of neural networks. For example, in [15], the existence of a fixed point
for every recurrent neural network was shown, and a geometric approach was
used to locate where the fixed points are; Brouwer’s Fixed Point Theorem was
used to ensure the existence of a fixed point. This study shows the importance
of the geometric viewpoint and of theoretical fixed point results in
applications. We therefore expect our fixed-circle and fixed-disc results to
be relevant for future research on neural networks.
## 5\. Conclusion and future works
In this section, we bring to the reader’s attention some open questions.
Concerning the geometry of the non-unique fixed points of a self-mapping on a
metric space, we have obtained new geometric (fixed-circle or fixed-disc)
results. To do this, we use two
different approaches. One of them is to measure whether a given circle is
fixed or not by a self-mapping. Another approach is to find which circle is
fixed by a self-mapping under some contractive or expanding conditions. The
investigation of new conditions which ensure that a circle or a disc is fixed
by a self-mapping can be considered as a future problem. For a self-mapping
whose fixed point set contains a circle or a disc, new contractive or
expanding conditions can also be investigated.
On the other hand, there are some examples of self-mappings which have a
common fixed circle. For example, let $\left(\mathbb{R},d\right)$ be the usual
metric space and consider the circle $C_{0,1}=\left\\{-1,1\right\\}$. We
define the self-mappings $f_{13}:\mathbb{R}\rightarrow\mathbb{R}$ and
$f_{14}:\mathbb{R}\rightarrow\mathbb{R}$ as
$f_{13}x=\left\\{\begin{array}[]{ccc}\frac{1}{x}&;&x\in\left\\{-1,1\right\\}\\\
0&;&\text{otherwise}\end{array}\right.\text{ and
}f_{14}x=\frac{5x+3}{3x+5}\text{,}$
for each $x\in\mathbb{R}$, respectively. Then both the self-mappings $f_{13}$
and $f_{14}$ fix the circle $C_{0,1}=\left\\{-1,1\right\\}$, that is, the
circle $C_{0,1}=\left\\{-1,1\right\\}$ is a common fixed circle of the self-
mappings $f_{13}$ and $f_{14}$. At this point, the following question can be
left as a future study.
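For instance (as an illustration, not part of the paper), the common fixed circle claim for $f_{13}$ and $f_{14}$ can be confirmed by direct evaluation:

```python
# Quick check: both f13 and f14 fix the circle C_{0,1} = {-1, 1}, so C_{0,1}
# is a common fixed circle of the two maps.

def f13(x):
    return 1 / x if x in (-1, 1) else 0

def f14(x):
    return (5 * x + 3) / (3 * x + 5)   # a Moebius-type map fixing -1 and 1

for x in (-1, 1):
    assert f13(x) == x
    assert f14(x) == x
```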
###### Question 5.1.
What condition(s) ensure that a circle $C_{u_{0},\rho}$ is a common fixed
circle of two (or more) self-mappings?
Finally, the problems considered in this paper can also be studied on some
generalized metric spaces. For example, the notion of an $M_{s}$-metric space
was introduced in [13].
###### Notation 5.2.
We use the following notations.
1\. $m_{{s}_{u,v,z}}:=\min\\{m_{s}(u,u,u),m_{s}(v,v,v),m_{s}(z,z,z)\\}$
2\. $M_{{s}_{u,v,z}}:=\max\\{m_{s}(u,u,u),m_{s}(v,v,v),m_{s}(z,z,z)\\}$
###### Definition 5.3.
An $M_{s}$-metric on a nonempty set $Y$ is a function
$m_{s}:Y^{3}\rightarrow\mathbb{R}^{+}$ such that for all $u,v,z,t\in Y$ we have
1. (1)
$m_{s}(u,u,u)=m_{s}(v,v,v)=m_{s}(z,z,z)=m_{s}(u,v,z)\Longleftrightarrow
u=v=z,$
2. (2)
$m_{{s}_{u,v,z}}\leq m_{s}(u,v,z),$
3. (3)
$m_{s}(u,u,v)=m_{s}(v,v,u),$
4. (4)
$\displaystyle(m_{s}(u,v,z)-m_{{s}_{u,v,z}})\leq$
$\displaystyle(m_{s}(u,u,t)-m_{{s}_{u,u,t}})$
$\displaystyle+(m_{s}(v,v,t)-m_{{s}_{v,v,t}})+(m_{s}(z,z,t)-m_{{s}_{z,z,t}}).$
Then the pair $(Y,m_{s})$ is called an $M_{s}$-metric space.
One can consult [13] for some examples and basic notions of an $M_{s}$-metric
space.
In $M_{s}$-metric spaces we define a circle as follows:
$C_{u_{0},\rho}=\\{u\in Y\mid m_{s}(u_{0},u,u)-m_{{s}_{{u_{0},u,u}}}=\rho\\}.$
###### Question 5.4.
Let $(Y,m_{s})$ be an $M_{s}$-metric space, $k>1$ and $f$ be a surjective
self-mapping on $Y$. Suppose that
$m_{s}(u,fu,f^{2}u)\leq km_{s}(u_{0},u,fu),$
for every $u\in Y$ and some $u_{0}\in Y$. Does $f$ have a fixed circle on $Y$?
###### Question 5.5.
Let $(Y,m_{s})$ be an $M_{s}$-metric space, $t>0$, $F\in\mathbb{F}$ and $f$ be
a surjective self-mapping on $Y$. Suppose that
$m_{s}(u,fu,f^{2}u)>0\Rightarrow F(m_{s}(u,fu,f^{2}u))\geq
F(m_{s}(u_{0},u,fu))+t,$
for every $u\in Y$ and some $u_{0}\in Y$. Does $f$ have a fixed circle on $Y?$
## Acknowledgements
The first author would like to thank Prince Sultan University for funding this
work through research group Nonlinear Analysis Methods in Applied Mathematics
(NAMAM) group number RG-DES-2017-01-17.
## References
* [1] I. Altun, M. Aslantaş and H. Şahin, Best proximity point results for p-proximal contractions, Acta Math. Hungar. (2020). https://doi.org/10.1007/s10474-020-01036-3
* [2] T. V. An, N. V. Dung, Z. Kadelburg and S. Radenović, Various generalizations of metric spaces and fixed point theorems, Rev. R. Acad. Cienc. Exactas Fis. Nat. Ser. A Math. RACSAM 109 (2015), no. 1, 175-198.
* [3] I. A. Bakhtin, The contraction mapping principle in almost metric spaces, Funct. Anal., Gos. Ped. Inst. Unianowsk 30 (1989), 26-37.
* [4] J. Caristi, Fixed point theorems for mappings satisfying inwardness conditions, Trans. Amer. Math. Soc. 215 (1976), 241-251.
* [5] S. K. Chatterjea, Fixed point theorem, C. R. Acad. Bulgare Sci. (25) 1972, 727-730.
* [6] L. B. Ćirić, A generalization of Banach’s contraction principle, Proc. Amer. Math. Soc. 45 (1974), 267-273.
* [7] B. Djafari Rouhani and S. Moradi, On the existence and approximation of fixed points for Ćirić type contractive mappings, Quaest. Math. 37 (2014), no. 2, 179-189.
* [8] M. Edelstein, On fixed and periodic points under contractive mappings, J. Lond. Math. Soc. 37 (1962), 74-79.
* [9] S. Gähler, 2-metrische Räume und ihre topologische Struktur, Math. Nachr. 26 (1963), 115-148.
* [10] G. A. Jones and D. Singerman, Complex functions: an algebraic and geometric viewpoint. Cambridge university press, 1987\.
* [11] R. Kannan, Some results on fixed points II, Am. Math. Mon. 76 (1969), 405-408.
* [12] H. Karayılan, M. Telci, Caristi type fixed point theorems in fuzzy metric spaces, Hacet. J. Math. Stat. 48 (2019), no. 1, 75-86.
* [13] N. Mlaiki, N. Souayah, K. Abodayeh, T. Abdeljawad, Contraction principles in $M_{s}-$metric spaces, J. Nonlinear Sci. Appl., 10 (2017), 575-582.
* [14] Z. Mustafa and B. Sims, A new approach to generalized metric spaces, J. Nonlinear Convex Anal. 7 (2006), no. 2, 289-297.
* [15] L. K. Li, Fixed point analysis for discrete-time recurrent neural networks, In: Proceedings 1992 IJCNN International Joint Conference on Neural Networks, Baltimore, MD, USA, 1992, pp. 134-139 vol.4, doi: 10.1109/IJCNN.1992.227277.
* [16] D. P Mandic, The use of Möbius transformations in neural networks and signal processing. In Neural Networks for Signal Processing X. Proceedings of the 2000 IEEE Signal Processing Society Workshop (Cat. No. 00TH8501) (Vol. 1, pp. 185-194). IEEE.
* [17] Z. D. Mitrović and S. Radenović, A common fixed point theorem of Jungck in rectangular _b_ -metric spaces, Acta Math. Hungar. 153 (2017), no.2, 401-407.
* [18] V. V. Nemytskii, The fixed point method in analysis, Usp. Mat. Nauk 1 (1936), 141-174 (in Russian).
* [19] M. Olgun, Ö. Biçer T. Alyıldız and İ. Altun, A related fixed point theorem for $F$-contractions on two metric spaces, Hacet. J. Math. Stat. 48 (2019), no. 1, 150-156.
* [20] N. Y. Özgür and N. Taş, Some fixed-circle theorems on metric spaces, Bull. Malays. Math. Sci. Soc. 42 (4) (2019), 1433-1449.
* [21] N. Y. Özgür, N. Taş and U. Çelik, New fixed-circle results on $S$-metric spaces, Bull. Math. Anal. Appl. 9 (2017), no. 2, 10-23.
* [22] N. Y. Özgür and N. Taş, Fixed-circle problem on $S$-metric spaces with a geometric viewpoint, Facta Universitatis. Series: Mathematics and Informatics 34 (3) (2019), 459-472.
* [23] N. Y. Özgür and N. Taş, Some fixed-circle theorems and discontinuity at fixed circle, AIP Conference Proceedings 1926, 020048 (2018).
* [24] N. Y. Özgür, Ellipses and similarity transformations with norm functions, Turkish Journal of Mathematics, 42 (6) (2018), 3204-3210.
* [25] L. Pasicki, A strong fixed point theorem, Topology Appl. 282 (2020), 107300, 18 pp.
* [26] B. E. Rhoades, A comparison of various definitions of contractive mappings, Trans. Amer. Math. Soc. 226 (1977), 257-290.
* [27] S. Sedghi, N. Shobe and A. Aliouche, A generalization of fixed point theorems in $S$-metric spaces, Mat. Vesnik 64 (2012), no. 3, 258-266.
* [28] V. Sihag, R. K. Vats and C. Vetro, A fixed point theorem in $G$-metric spaces via $\alpha$-series, Quaest. Math. 37 (2014), no. 3, 429-434.
* [29] T. Szandała, Review and Comparison of Commonly Used Activation Functions for Deep Neural Networks, arXiv:2010.09458
* [30] D. Wardowski, Fixed points of a new type of contractive mappings in complete metric spaces, Fixed Point Theory and Applications 2012, 2012:94.
* [31] Wolfram Research, Inc., Mathematica, Version 12.0, Champaign, IL (2019).
* [32] J. Xu, Z. Li, B. Du, M. Zhang and J. Liu, Reluplex made more practical: Leaky ReLU. In 2020 IEEE Symposium on Computers and Communications (ISCC) (pp. 1-7). IEEE, (2020, July).
* [33] L. Zhang, Implementation of fixed-point neuron models with threshold, ramp and sigmoid activation functions, In: IOP Conference Series: Materials Science and Engineering Vol. 224, No. 1, p. 012054, (2017).
# A Behavioural Analysis of Credulous Twitter Users
Alessandro Balestrucci Gran Sasso Science Institute, via M. Iacobucci 2,
67100 L’Aquila, Italy<EMAIL_ADDRESS>Rocco De Nicola IMT School for
Advanced Studies Lucca, Piazza San Francesco 19, 55100 Lucca, Italy CINI
Cybersecurity Lab, Via Ariosto, 25, 00185 Roma, Italy
<EMAIL_ADDRESS>Marinella Petrocchi Istituto di Informatica e
Telematica - CNR, Via G. Moruzzi 1, 56124 Pisa, Italy<EMAIL_ADDRESS>Catia Trubiani<EMAIL_ADDRESS>
###### Abstract
Thanks to platforms such as Twitter and Facebook, people can know facts and
events that otherwise would have been silenced. However, social media
significantly contribute also to fast spreading biased and false news while
targeting specific segments of the population. We have seen how false
information can be spread using automated accounts, known as bots. Using
Twitter as a benchmark, we investigate behavioural attitudes of so called
‘credulous’ users, i.e., genuine accounts following many bots. Leveraging our
previous work, where supervised learning is successfully applied to single out
credulous users, we improve the classification task with a detailed features’
analysis and provide evidence that simple and lightweight features are crucial
to detect such users. Furthermore, we study the differences in the way
credulous and not credulous users interact with bots and discover that
credulous users tend to amplify the content posted by bots more. We argue that
their detection can be instrumental in obtaining useful information on the
possible dissemination of spam content, propaganda, and, in general, little or
no reliable information.
###### keywords:
Online Behavioral Analysis, Features Analysis, Disinformation Spreading,
Credulous Users, Twitter.
††journal: Online Social Networks and Media
## 1 Introduction
Disruptive innovation: two words that sum up the impact of social networks and
media on people’s everyday life. Crucial information can be disseminated to
millions of people in a flash: critical data, such as real-time updates on
basic events. Unfortunately, new technologies have not only revolutionized
traditional sectors such as retail and advertising. As noticed by the
nonprofit organization National Endowment for Democracy, they have been
fertile and groundbreaking even on a much more slippery ground: that of
misinformation, hoaxes and propaganda [1]. According to the 2019 report
‘Weapons of Mass Distraction’ [2], strategists of false news can exploit at
least three significant vulnerabilities of the online information ecosystem:
i) the medium: the platforms on which fake news creep in and expand; ii) the
message: the information one wants to convey; iii) the audience: the readers
who consume (and contribute to diffusing) the information.
This work focuses on the last aspect, i.e., the audience. Online Social Media
convey information quickly and diffusely, and they are ‘optimised’ for posting
and sharing catchy and sensationalist news. False messages range from
deliberate lies meant to mislead users to biased information aimed at
influencing communities and agendas. Whatever the strategy adopted for
spreading false news (like supporting automated accounts or using trolls to
inflame crowds [3, 4, 5, 6]), it would not be effective if there were no
audience willing to believe the news (online: ‘Americans may appreciate
knowing when a news story is suspect, but more than a third will share that
story anyway’, source https://www.stopfake.org/; all URLs in this manuscript
were last accessed on January 3, 2021). The quest for belonging to a community
and getting reassuring answers, the adherence to one’s viewpoint, and a native
reluctance to change opinion [7, 8] are key factors leading people to
contribute to the success of disinformation spreading [9, 10].
Information spreading on Social Media is often corroborated by automated
accounts, called bots, which are totally or partially controlled by computer
algorithms. Although bots are designed to mimic human behaviour online, a
dominant and worrisome use of automated accounts is far from benign: they have
often been used to amplify narratives or drown out political dissent, see
Ferrara et al. in [11]. Recent studies, such as the one by Shao et al. [5],
demonstrate that
bots are particularly active in spreading low credibility content. Moreover,
the Oxford Internet Institute has monitored the global organization of social
media manipulation by governments and political parties and analysed the
trends of computational propaganda in different countries. The report [12]
provides evidence of organized social media manipulation campaigns which have
taken place in 70 countries, up from 48 countries in 2018 and 28 countries in
2017. In each country, there is at least one political party or government
agency using social media to shape public attitudes domestically.
In our previous work [13], starting from the consideration that human-operated
accounts are exposed to manipulation and contribute to misinformation
spreading by, e.g., retweeting or liking low-reputable content [5], we
concentrated on Twitter and developed a classification framework for
automatically detecting genuine accounts with a significant number of bots as
friends, without exploiting this last datum. Hereafter, we define those users
following many bots as ‘credulous’ users. This manuscript extends our previous
work [13] by analyzing the behavioural aspects (i.e., ratio on pure tweets,
retweets, and replies) of credulous users, with the goal of understanding
their distinguishing features.
The main goals of our research are summarized in the following: (i)
automatically identify genuine online users who may be prey of disinformation;
(ii) reduce misleading activities (e.g., spreading of fake news) performed by
malicious entities like social bots; (iii) stimulate users to verify the
source of an information and fact-check the information itself, to pave the
way to awareness.
To achieve these goals, we apply automated techniques to discriminate
potentially susceptible audiences and the accounts they interact with. Thus,
the following three research objectives have been defined.
1. Assessing users’ gullibility level: We propose a technique to automatically
rank human-operated accounts by their gullibility, relying on aspects exposed
in their social profiles, for instance, the number of bot accounts among their
friends, or the number of bot-authored posts liked by the genuine user.
2. Detecting credulous users: We design and develop a supervised-learning-based
classification framework to recognize those human-operated accounts following
a high number of bots.
3. Profiling credulous users: We study the behavioral characteristics typical
of credulous users, by analysing the interactions with their social contacts
and assessing the behavioral differences between credulous and not credulous
users.
The novel contributions of this manuscript, with respect to our previous work
[13], are:
1. A deeper study of the features of the credulous classifier, with a specific
analysis assessing the relevance of each single feature.
2. An investigation of the online behavior of users, in terms of tweets,
retweets and replies, to better discriminate between credulous and
non-credulous ones.
3. A study of the actual influence of bots on credulous users, considering
which and how many of the activities of credulous users are linked to tweets
produced by bots.
We can safely state that there exists a clear connection between the fact that
a user has many bots among her/his friends and her/his actual contribution to
amplifying the bots’ messages. In particular, in this study we show that:
1. Lightweight features, such as the number of followers, tweets, and friends,
are statistically significant to single out users with a high number of bots
among their friends;
2. The ‘social activity reactions’ of credulous Twitter users to content
originating from bots are higher than those of not credulous users.
The experimental results are supported by statistical tests. We think that our
methodology, which classifies credulous users with easy-to-extract features,
is a promising new tool for finding content originated by automated accounts
and, thus, for detecting spam, misleading information, and propaganda news.
The remainder of the paper is organized as follows. Section 2 briefly sums up
our previous work. Section 3 provides a study of the relevance of the features
exploited for the credulous users’ classifier. Section 4 discusses the
behavior of credulous vs non-credulous users in terms of retweets and replies
and provides a fine-grained analysis of the extent to which retweets and
replies refer to tweets originated by bots. In Section 5, we discuss the main
findings and implications of this investigation. Section 6 discusses recent
related work, positioning our study among relevant state-of-the-art papers.
Finally, Section 7 draws conclusions and highlights promising directions for
future research and experimentation. The data used in this study are publicly
available for the sake of reproducibility (https://tinyurl.com/y6p7n38x).
## 2 Background
In the following, we introduce some background notions reported in our
previous work [13, 14], and present some of the performed experiments and the
main findings. The main aim of this section is to provide a connection between
what we have previously achieved and the analyses/experiments described in the
following sections. Specifically, Section 2.1 introduces our datasets. Section
2.2 shows an excerpt of the experimental results related to the training of
some bot detectors that we use to obtain the data used for the subsequent
analyses. Section 2.3 briefly describes the methodology applied for the
identification of the credulous users and an excerpt of the experimental
results, related to the training of credulous users detectors.
### 2.1 Datasets
We considered three publicly available datasets: CR15 [15], CR17 [4] and VR17
[16], where Twitter accounts are labelled according to their nature, either
bots or not (Bot Repository datasets: https://goo.gl/87Kzcr).
* CR15:
introduced in [15], it consists of three smaller subsets. The first has been
collected over a period of twelve days in December 2012 and contains 469
Twitter accounts certified as human-operated. The second was collected between
2013 and 2015 and contains 1,488 genuine (human) users. The third is composed
of 833 fake accounts, bought from three different online markets of Twitter
accounts.
* CR17:
first presented in [4], was obtained by following a hybrid crowd-sensing
approach [17]. The authors randomly contacted Twitter users by asking simple
questions. All the replies were manually verified and 3,474 Twitter accounts
were certified as humans. The dataset contains also 6,609 social spambots
(e.g., spammers of job offers, products on sale at Amazon).
* VR17:
introduced in [16], contains 2,573 Twitter accounts. A manual annotation was
performed by inspecting the profile details and the produced content. Overall,
1,747 Twitter accounts were annotated as human-operated and 826 as bots.
From the merging of these three datasets, we obtain a unique labelled dataset
(human-operated accounts/bots) of 12,961 accounts - 7,165 bots and 5,796
human-operated ones.
### 2.2 Bot detection
The merged dataset was used to train a bot detector. To this end, we used the
Java Twitter API (https://goo.gl/njcjr1) and, for each account in the dataset,
we collected: tweets (up to 3,200), mentions (up to 100) and IDs of friends
and followers (up to 5,000).
In [13], we considered two feature sets, derived from [16]
(https://botometer.iuni.iu.edu/) and [15]. In particular, we relied on what
Cresci et al. in [15] called ClassA features, which conveniently require only
information available in the profile of the account.
Table 1: Classification results for the bot detection task with ClassA’s features.

| alg | accuracy | precision | recall | F1 | AUC |
|---|---|---|---|---|---|
| HMM | 55.28 | 0.55 | 1.00 | 0.71 | 0.50
| IBk | 91.03 | 0.91 | 0.93 | 0.92 | 0.91
| BN | 87.15 | 0.93 | 0.83 | 0.88 | 0.94
| NB | 64.37 | 0.89 | 0.42 | 0.54 | 0.77
| VP | 80.07 | 0.82 | 0.82 | 0.82 | 0.80
| MLP | 85.01 | 0.89 | 0.84 | 0.86 | 0.91
| SMO | 68.58 | 0.76 | 0.63 | 0.69 | 0.69
| JRip | 94.38 | 0.96 | 0.94 | 0.95 | 0.96
| 1R | 84.51 | 0.88 | 0.84 | 0.86 | 0.85
| 0R | 55.28 | 0.55 | 1.00 | 0.71 | 0.50
| J48 | 94.30 | 0.96 | 0.94 | 0.95 | 0.96
| HT | 84.48 | 0.90 | 0.81 | 0.85 | 0.88
| RT | 92.48 | 0.93 | 0.94 | 0.93 | 0.92
| J48c | 94.36 | 0.96 | 0.93 | 0.95 | 0.96
| J48g | 94.41 | 0.96 | 0.94 | 0.95 | 0.96
| LAD | 89.19 | 0.93 | 0.87 | 0.90 | 0.94
| REP | 93.96 | 0.96 | 0.93 | 0.94 | 0.97
| LMT | 94.33 | 0.96 | 0.94 | 0.95 | 0.97
| RF | 95.84 | 0.98 | 0.95 | 0.96 | 0.99
In Table 1 we report the results of the 19 learning algorithms adopted in [13]
to train the bot detector (with a 10-fold cross validation). There are three
reasons behind the decision to consider the classifier trained with the ClassA
features: (i) the performance results were very similar to those achieved
considering the Botometer features; (ii) the feature engineering phase relies
on users’ profile data only (data from a user profile:
https://tinyurl.com/y5s5kpuw); and (iii) unlike the Botometer features, whose
calculation requires a connection to a web service (Botometer web service:
https://tinyurl.com/yytf282s), ClassA’s features can be computed in an
autonomous fashion. The training was also executed considering the Botometer
features and the union of ClassA’s and Botometer’s features. Experiments were
performed with Weka [18], and the complete experimentation results are
publicly available: https://tinyurl.com/y4l632g5.
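The training and evaluation above were done in Weka. Purely as an illustration of the 10-fold cross-validation protocol, the following stdlib-only Python sketch evaluates the trivial 0R baseline (always predict the majority class), which also appears in Table 1; all function names and the toy model are ours, not the paper’s.

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Shuffle indices 0..n-1 and split them into k disjoint, near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(X, y, fit, predict, k=10):
    """k-fold cross-validation: train on k-1 folds, test on the held-out one,
    and return the mean accuracy over the k test folds."""
    folds = kfold_indices(len(X), k)
    accs = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        model = fit([X[j] for j in train_idx], [y[j] for j in train_idx])
        hits = sum(predict(model, X[j]) == y[j] for j in test_idx)
        accs.append(hits / len(test_idx))
    return sum(accs) / len(accs)

# Weka's 0R baseline: ignore the features, always predict the majority class.
fit_0r = lambda X, y: max(set(y), key=y.count)
predict_0r = lambda model, x: model
```

On a toy 55/45 class split, 0R scores an accuracy of about 0.55, i.e., the majority-class frequency, matching its role as a floor for the other classifiers in Table 1.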
### 2.3 Classification of Credulous Twitter Users
In [13], we built a decision model to automatically classify Twitter accounts
as credulous or not. As ground truth to train the learning model, we
considered 316 accounts belonging to the initial set of 5,796 human-operated
ones introduced above (Section 2.1). Due to the rate limits of the Twitter
APIs and to the potentially huge number of friends of the almost 6,000 genuine
accounts, we considered only those accounts with at most 400 friends [14].
This leads to a dataset of 2,838 human-operated accounts, among which 316
users have been identified as credulous, according to the approach in [14].
Table 2: Classification results for credulous users detection with ClassA’s features.

| alg | accuracy | precision | recall | F1 | AUC |
|---|---|---|---|---|---|
| HMM | 50.06 | 0.50 | 1.00 | 0.67 | 0.50
| IBk | 92.59 | 0.74 | 0.73 | 0.92 | 0.97
| BN | 82.77 | 0.98 | 0.88 | 0.79 | 0.93
| NB | 73.00 | 0.97 | 0.69 | 0.73 | 0.73
| VP | 68.68 | 0.72 | 0.63 | 0.67 | 0.70
| SMO | 75.32 | 0.74 | 0.80 | 0.77 | 0.75
| MLP | 80.08 | 0.81 | 0.81 | 0.80 | 0.87
| JRip | 93.05 | 0.99 | 0.87 | 0.93 | 0.94
| 1R | 93.27 | 0.99 | 0.88 | 0.93 | 0.93
| 0R | 49.51 | 0.49 | 0.65 | 0.66 | 0.50
| J48 | 92.58 | 0.97 | 0.88 | 0.92 | 0.94
| HT | 83.28 | 0.96 | 0.71 | 0.80 | 0.93
| RT | 88.88 | 0.89 | 0.89 | 0.89 | 0.89
| J48C | 92.68 | 0.97 | 0.88 | 0.92 | 0.94
| J48g | 92.64 | 0.97 | 0.88 | 0.92 | 0.94
| LAD | 92.38 | 0.96 | 0.89 | 0.92 | 0.97
| LMT | 92.66 | 0.98 | 0.88 | 0.92 | 0.96
| REP | 93.09 | 0.98 | 0.88 | 0.93 | 0.95
| RF | 92.71 | 0.97 | 0.89 | 0.92 | 0.97
We experimented with the same learning algorithms and the same feature sets
considered in Section 2.2, with a 10-fold cross validation. It is worth noting
that, for the credulous users classification, the learning algorithms took as
input a very unbalanced dataset: 2,838 human-operated accounts, of which only
316 had been identified as credulous users. To avoid working with unbalanced
data, we split the set of not credulous users into smaller portions, each of
the same size as the set of credulous users: we randomly selected a number of
not credulous users equal to the number of credulous ones and unified these
instances into a new dataset (hereinafter referred to as a fold); we then
repeated this process on the previously unselected instances, until no more
not credulous instances remained. This procedure is inspired by the iterative
under-sampling methodology for strongly unbalanced datasets [19].
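The fold construction just described can be sketched as follows (a stdlib-only illustration of the idea, not the code used for the experiments; if the majority-class size is not an exact multiple of the minority-class size, the last fold remains slightly unbalanced):

```python
import random

def undersample_folds(minority, majority, seed=0):
    """Pair the full minority class with same-sized, disjoint chunks of the
    shuffled majority class, yielding (nearly) balanced training folds."""
    pool = list(majority)
    random.Random(seed).shuffle(pool)
    n = len(minority)
    folds = []
    while pool:
        chunk, pool = pool[:n], pool[n:]
        folds.append(list(minority) + chunk)
    return folds
```

Every fold contains all minority instances, and each majority instance appears in exactly one fold, so the whole majority class is eventually covered.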
Each learning algorithm was trained on each fold. To evaluate the
classification performance on the whole dataset, and not just on individual
folds, we computed the average of the single performance values for each
evaluation metric. Table 2 reports the classification performance of the
credulous users classifiers, obtained with the 19 learning algorithms. Also in
this case, we used Weka to perform the experiments; further details are
available at https://tinyurl.com/y4l632g5.
## 3 Features’ Evaluation
The original contribution of this manuscript starts here. We extend the
credulous classification analysis by assigning each ClassA feature an ‘index
of ability’ to distinguish C from NC instances (in the following, we adopt the
notation C and NC to indicate, respectively, credulous and not credulous
accounts). Table 3 presents the ClassA features with their type and
description.
Table 3: Type and description of ClassA’s features.

| Label | Feature Name | Description |
|---|---|---|
F1 | #friends/#followers$^2$ | The ratio between the number of friends and the squared number of followers
F2 | age (in months) | The number of months since the creation of the account
F3 | #tweets | The number of tweets, retweets, replies and quotes of the account
F4 | has a Name | True if a name is specified in the account’s profile
F5 | #friends | (Alias #followees): The number of accounts a user is following
F6 | URL in profile | True if a URL is specified in the account’s profile
F7 | following rate | The number of followees over the sum of followees and followers
F8 | default image after 2m | True if the account did not change the default image provided by Twitter in the account’s profile after 2 months of its creation
F9 | belong to a list | True if the account is member of, at least, one list
F10 | profile has image | True if the account has an image in its profile
F11 | #friends/#followers $\geq$ 50 | True if the ratio between the number of friends and followers is greater than or equal 50
F12 | ‘bot’ in bio | True if there is a clear declaration of being a bot in the account’s profile
F13 | duplicate profile pictures | True if the profile’s image is the same of that of other accounts (We do not consider this feature in the current work)
F14 | 2 x #followers $\geq$ #friends | True if twice the followers is greater than or equal the number of followees
F15 | #friends/#followers $\simeq$ 100 | True if an account is following a number of accounts that is about 100 order of magnitude the number of accounts that follows it
F16 | profile has address | True if a location is specified in the account’s profile
F17 | no bio, no location, #friends $\geq$ 100 | True if: the account has no description in the bio and location fields of its profile and the number of friends is greater than or equal 100
F18 | has biography | True if the biography is specified in the account’s profile
F19 | #followers | The number of the account’s followers
### 3.1 Ranking of ClassA features
Weka’s tools allow to assess the discriminatory importance of a feature in a
features’ set through the so called attribute selection. For the sake of
reliability, we consider three attribute selector algorithms that evaluate the
value (in terms of importance) of each attribute with different methodologies:
(i) OneRAttributeEval999OneRAttributeEval: https://tinyurl.com/qtl3nox uses
the OneR classifier, (ii)
SymmetricalUncertAttributeEval101010SymmetricalUncertAttributeEval:
https://tinyurl.com/wcgccoz measures the symmetric uncertainty with respect to
the class and (iii) InfoGainAttributeEval111111InfoGainAttributeEval:
https://tinyurl.com/ve99qt8 considers the information gain [20] against the
class.
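As a reference point for the third evaluator, InfoGainAttributeEval scores a feature by the information gain of the class given that feature; a minimal stdlib sketch of this criterion (ours, for illustration only) is:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """H(class) - H(class | feature): the criterion behind InfoGainAttributeEval."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond
```

A feature perfectly aligned with the class attains the full class entropy as gain, while a feature independent of the class scores 0, which is the intuition behind the ranking values reported below.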
Rank | OneR | SymmetricalUncert | InfoGain
---|---|---|---
1 | F1 (1.000) | F1 (1.000) | F1 (1.000)
2 | F14 (0.977) | F14 (0.896) | F14 (0.894)
3 | F19 (0.889) | F19 (0.509) | F19 (0.620)
4 | F3 (0.768) | F5 (0.299) | F3 (0.323)
5 | F5 (0.720) | F7 (0.235) | F5 (0.273)
6 | F7 (0.712) | F3 (0.218) | F7 (0.255)
Table 4: Ranking of relevance of ClassA features, according to the three
attribute evaluators
Table 4 shows the ranking of the six most important features, according to the
three evaluating algorithms. The remaining features were estimated to have
lower relevance: for each of them, at least one of the evaluators estimated a
value lower than 0.1. This already happens for the seventh feature in the rank
(F9), estimated as follows: 0.631 (OneRAttributeEval), 0.101
(SymmetricalUncertAttributeEval) and 0.085 (InfoGainAttributeEval). From Table
4, we can see that all the attribute evaluators agree on the relevance of the
same features in the first six positions.
### 3.2 Analysis on three specific features
Here, we carry out a further analysis on three specific features, F3
(#tweets), F5 (#friends) and F19 (#followers). The rationale behind this
feature selection is due to the following reasons: (i) these features are
direct and simple indicators of the level of the account’s activity (F3) and
friendliness (F5 and F19), (ii), they are not related between each other
(like, for example, F1 and F7), and (iii) we think they are more easily
understandable rather than a combination of the same, see F1 and F14.
Furthermore, given the specific nature of the statistical tests carried on in
the following, we do not consider boolean features.
The statistical tests are carried out to determine whether the values of the
three features are statistically significant in discriminating between C and
NC users. Precisely, the paired t-test [21] (with $\alpha=0.05$) is a
well-known parametric statistical test in which observations of some quantity
in two populations are compared; the goal is to verify whether the average
values of the two distributions deviate significantly from each other.
Furthermore, the Pearson Correlation Coefficient (PCC) has been calculated to
single out any correlation between the feature values. The PCC is an index
expressing a possible linear relationship between two statistical variables.
PCC values lie between +1 and -1, where +1 corresponds to perfect positive
linear correlation, 0 to an absence of linear correlation, and -1 to perfect
negative linear correlation.
| | F3 | F5 | F19 |
|---|---|---|---|
P-value | 6.211$\times 10^{-24}$ | 1.166$\times 10^{-34}$ | 5.005$\times 10^{-12}$
PCC | -0.019 | 0.061 | 0.001
Table 5: Statistical significance test (T-test with $\alpha=0.05$) on F3, F5
and F19.
Tests are carried out over 316 C users and 316 NC users. The 316 NC users have
been randomly selected (without replacement) from the full set of 2,522 NC
users. Table 5 shows the p-values (derived from the t-test) and the PCCs.
Results have been obtained with the Apache Commons Math library
(https://tinyurl.com/lt7zeud). Looking at the values in the table, we argue
that the two populations (C and NC users) differ with respect to the
considered features, and that this difference is statistically significant,
since the p-values are practically equal to 0. Moreover, the fact that the PCC
is very close to 0 for all three features implies that there is no linear
correlation between the values, per feature, of the two populations.
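The two statistics used above can be sketched with the Python standard library alone (the experiments used Apache Commons Math; the reported p-values then come from the t distribution with $n-1$ degrees of freedom, e.g., via statistical tables or a library routine):

```python
from math import sqrt
from statistics import mean, stdev

def pearson(x, y):
    """Pearson Correlation Coefficient between two equally long samples."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def paired_t(x, y):
    """t statistic of the paired t-test: mean pairwise difference over its
    standard error; compare against t with n-1 d.o.f. to get the p-value."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))
```

Perfectly proportional samples yield a PCC of +1 (or -1 when anti-proportional), while a large |t| corresponds to a small p-value, as in Table 5.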
## 4 Behavioral Analysis
Figure 1: Activities of credulous users (vs not): (a) pure tweets ratio, (b)
retweets ratio, (c) replies ratio. Each plot expresses the ratio between the
tweets in the user’s timeline and (a) content produced by the user, (b)
retweets, and (c) replies.
In this section, we analyse the activities of credulous accounts in terms of
tweets (Figure 1(a)), retweets (Figure 1(b)), and replies (Figure 1(c)).
Quoted tweets are counted as retweets (on Twitter, a quoted tweet is a retweet
with extra text inserted by the retweeter). Results are shown in Figure 1. For
each type of content, each subfigure reports statistics about users’
activities for: the 316 C users (blue), the 2,522 NC users (red), and a random
sample of 316 NC users, matching the number of C users (green).
Figure 1(a) reports the information related to pure tweets. Relative to the
overall amount of tweets, C users (blue points) produced, on average, 56.44%
of pure tweets (horizontal blue dashed line), with a standard deviation
(dashed blue rhombus) of 26.4%. The full set of NC users (red points) features
an average pure-tweet production that is lower than that of C users, precisely
46.49% ($\sigma$=25.45%). When considering the sample of NC users (green
points), we notice an even lower average (31.13%, $\sigma$=24.85%). The
analysis of this first graph suggests that the accounts classified as
credulous tweet original content more than the others.
Figure 1(b) reports the information related to retweets and quotes (w.r.t. the
overall amount of tweets). In this case, the difference between C and NC users
is less marked. C users (blue points) show a retweets-tweets ratio equal to
0.2882 ($\sigma$=0.2432), while NC users (red points) ratio is 0.3182
($\sigma$=0.2591). Very similar scores are obtained if the NC users’ sample
(green points) is considered, with average ratio =0.311 ($\sigma$=0.2485).
Similar findings have been obtained for the replies, see Figure 1(c). The
replies-tweets ratio is equal to 0.14 ($\sigma$=0.124) for C users (blue
points), on average. The same ratio for the NC population (red points) is
higher, with a value equal to 0.19 ($\sigma$=0.164). For the NC users’ sample,
we obtain 0.18 ($\sigma$=0.16).
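The per-user ratios plotted in Figure 1 can be sketched as follows; the field names on the timeline items (`is_retweet`, `is_quote`, `in_reply_to`) are hypothetical placeholders, not the actual schema used in the experiments:

```python
def activity_ratios(timeline):
    """Share of pure tweets, retweets (incl. quotes) and replies in a user's
    timeline. Items are dicts with hypothetical boolean-ish fields
    'is_retweet', 'is_quote' and 'in_reply_to'."""
    n = len(timeline)
    rts = sum(1 for t in timeline if t.get("is_retweet") or t.get("is_quote"))
    reps = sum(1 for t in timeline
               if not (t.get("is_retweet") or t.get("is_quote"))
               and t.get("in_reply_to"))
    pure = n - rts - reps  # the three ratios partition the timeline
    return pure / n, rts / n, reps / n
```

The three ratios sum to 1 per user, so the per-population averages in Figure 1 are directly comparable across the three panels.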
Although in the last two cases (retweets and replies) the average activity of
C users is slightly lower than that of NC users, we will further investigate
the issue with a fine-grained analysis, with the aim of finding more
discriminative results in terms of C and NC users’ behavior. Precisely, we
will analyse the nature of the accounts that originated the retweets and
replies of C and NC users. For each of the 2,838 human-operated accounts in
our dataset, and for the two types of action – retweeting and replying – we
will calculate the percentage of content originated by bots. Considering, for
example, the case of retweets, it is possible to retrieve the ID of the
original tweet and, from the tweet ID, its author; we can then evaluate
whether that author is classified as a bot or not. A similar reasoning holds
for replies (considering the nature of the author of the tweet which the user
replies to) and for quotes.
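The per-user computation just described can be sketched as follows; the field name `original_author_id` and the `is_bot` predicate (standing in for the Section 2.2 classifier) are hypothetical placeholders:

```python
def bybot_percentage(retweets, is_bot):
    """Percentage of a user's retweets whose original author is classified as
    a bot; returns None for users who never retweet (the outliers)."""
    if not retweets:
        return None
    hits = sum(1 for rt in retweets if is_bot(rt["original_author_id"]))
    return 100.0 * hits / len(retweets)
```

A user scoring 0 retweets only human-originated content, while users who never retweet get no score at all, matching the distinction drawn in the plots below.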
For the bot classification task, we adopt the classifier presented in Section
2.2. The authors of the original tweets retweeted and quoted by our
human-operated accounts, or to which they replied, are 1.22 million users.
Among them, 104k have been classified as bots (8.5%).
### 4.1 Retweets
Figure 2: Comparative analysis between credulous and not credulous users with
respect to the retweets whose related tweets have been originated by bots: (a)
percentage of retweets originated by bots; (b) percentage of the populations
w.r.t. the percentage of retweets originated by bots.
Figure 2 gives two different views of the same phenomenon. In both subfigures,
C users are represented in purple and NC users in green. Figure 2(a) gives, on
the y-axis, the percentage of retweets whose original tweets have been
produced by bots (for the sake of brevity, hereafter we denote such retweets
as ‘byBot-retweets’). Numbers on the x-axis are per-user indexes, used instead
of the Twitter IDs; such values are useful to count the number of users with a
percentage of byBot-retweets greater than a certain threshold.
It is worth remembering that the original NC set is composed of 2,522 users;
hence, for the sake of a fair comparison, in the figure we consider a sample
of 316 NC users. To obtain a representative sample, we first built 20 samples
of 316 NC users, each obtained by randomly selecting instances from the
original set, without replacement. Then, for each sample, we computed the
average and standard deviation of the percentage of byBot-retweets. Finally,
we computed the Euclidean distance between the average and standard deviation
of each sample and those calculated over the entire NC population, and we
identified the sample with the smallest distance as the most representative of
the entire NC population.
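The sample-selection procedure can be sketched as follows (a stdlib-only illustration under the stated (mean, standard deviation) distance criterion; names are ours):

```python
import random
from math import sqrt
from statistics import mean, pstdev

def representative_sample(population, size, trials=20, seed=0):
    """Among `trials` random samples, keep the one whose (mean, std dev) pair
    is closest, in Euclidean distance, to that of the full population."""
    rng = random.Random(seed)
    target_m, target_s = mean(population), pstdev(population)
    best, best_d = None, float("inf")
    for _ in range(trials):
        s = rng.sample(population, size)  # sampling without replacement
        d = sqrt((mean(s) - target_m) ** 2 + (pstdev(s) - target_s) ** 2)
        if d < best_d:
            best, best_d = s, d
    return best
```

Taking the best of 20 draws keeps the selection simple while making a badly skewed sample very unlikely.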
Looking at Figure 2(a), we can notice that the purple points (C users) are
above the green ones (sample of NC users). The average percentage of tweets
originated by bots retweeted by C users is 16.45 ($\sigma=11.84$%), while the
average percentage for NC users is lower, 13.21 (with $\sigma=12.1$%).
The percentage of byBot-retweets has been calculated over the total amount of
retweets. Some of the human-operated accounts in our dataset do not retweet at
all; we call such accounts outliers. In Figure 2(a), the outliers are shown
below the zero on the y-axis: 10 C users and 12 NC users are outliers.
Moreover, the users lying exactly at zero are those who retweet only tweets
originated by human-operated accounts.
Figure 2(b) compares the whole C and NC populations. The values on the x-axis
are the same as those on the y-axis of Figure 2(a). On the y-axis, instead, we
report the percentage of the population whose byBot-retweets percentage is
greater than or equal to (for C users – purple dots) or lower than (for NC
users – green dots) the value on the x-axis.
The aim of the graphs in Figure 2(b) is to convey a measure of population
coverage: fixing a percentage of byBot-retweets, we know the percentage of C
users whose byBot-retweets percentage is $\geq$ that value and the percentage
of NC users for whom it is lower. In Figure 2(b), the data related to NC users
refer to all of them (2,522).
The green and purple curves reach the maximum population coverage at the
abscissa 15.59 (%). Specifically, 43.75% of C users have a percentage of
byBot-retweets $\geq$ 15.59 (coordinates (15.59, 43.75) – purple dots), while
70.04% of NC users have a percentage of byBot-retweets < 15.59 (coordinates
(15.59, 70.04) – green dots).
Going further with the analysis, Figure 3 provides two aggregation
perspectives, by grouping the C and NC users according to the number of their
byBot-retweets.
Figure 3: Comparative analysis between credulous and not credulous users with
respect to byBot-retweets: (a) deciles of Figure 2(a); (b) deciles of C and
all NC users.
In Figure 3(a), the x-axis reports intervals (deciles) of byBot-retweets
percentages and the y-axis reports the number of users falling in each
interval. Since the two sets (C and NC) have the same number of users (316),
we report the actual number of users rather than percentages, which are
however easily computed. The sample of NC users is the same used for the
results shown in Figure 2(a).
Figure 3(b) considers all 2,522 NC users and therefore reports percentages on
the y-axis. When considering the whole population of NC users, we notice very
similar results: the differences between C and NC users already observed in
Figure 3(a) are preserved here. This can be interpreted as a close
correspondence between the subset of 316 NC users considered in Figure 3(a)
and the full set of NC users.
Finally, in both subfigures of Figure 3, the users in the last group, i.e.,
the outliers, do not retweet any tweet; the users in the 0 group are users
retweeting tweets originated by human-operated accounts only.
#### Findings
From Figure 2, we can appreciate a difference in behaviour between C and NC
users: on average, C users feature a higher percentage of retweets whose
original tweets have been produced by bots. The difference between the
standard deviations of the two populations is negligible, indicating a
comparable dispersion within the two groups (Figure 2(a)).
Regarding the analyses shown in Figure 3, both subfigures show a greater
presence of C users in almost all the deciles; the only relevant exception is
the [10,0[ group, in which NC users outnumber C users.
### 4.2 Replies
Figure 4: Comparative analysis between C and NC users with respect to replies
to bots’ tweets: (a) percentage of replies to bots’ tweets; (b) percentage of
the populations w.r.t. the percentage of replies to bots’ tweets.
Figure 5: Comparative analysis between credulous and not credulous users with
respect to the replies to tweets originated by bots: (a) deciles of Figure
4(a); (b) deciles of C and all NC users.
Figures 4 and 5 report the analysis related to the replies. Figure 4(a) shows
a quite clear difference between C users and (a sample of) NC users: C users
have an average percentage of replies to bots’ tweets equal to 13.77
($\sigma=15.10$%), while NC users show a mean value of 10.81
($\sigma=14.03$%). As for the retweets, the number of outliers is quite low (9
and 12 accounts for C and NC users, resp.).
Figure 4(b) shows that the maximum percentage of covered populations is
achieved at a replies percentage equal to 27.96 (x-axis): 11.40% of C users
reply to bots’ tweets more than this value, while 91.56% of NC users reply
less. Considering the average percentage of replies for C users in Figure
4(a), the population percentages are 35% for C users and 75% for NC users.
The behavioral analysis concludes with the bar plots in Figures 5(a) and 5(b).
The outcomes are very similar to those of the retweets analysis: for most of
the groups, there is no clear distinction between the counts in Figure 5(a)
and the percentages in Figure 5(b) of C and NC users, at least up to the group
[20, 10[.
#### Findings
Similarly to what emerged in the previous subsection, the replies analysis
confirms that, on average, C users feature a higher percentage of replies to
bots. However, looking in more detail at the amount of replies (the 'group
analysis' in Figure 5), there is no common trend showing, for each group, a
majority of C replies w.r.t. NC replies. To further explore the behavioural
difference between C and NC users, we carry out statistical tests, aiming at
assessing whether the diversities found up to this point can be considered
statistically significant.
### 4.3 Statistical significance of the behavioral differences between C and
NC users
In Sections 4.1 and 4.2, the analyses showed behavioural differences between C
and NC users in terms of retweets and replies. Here, we will try to assess
whether these differences can be considered statistically significant. For
this purpose, we rely on hypothesis testing. It is worth noticing that the
users involved in the statistical tests, representative of both C and NC
users, are those considered in Figure 2(a) for retweets and Figure 4(a) for
replies.
| Kolmogorov-Smirnov (Test of Normality) | C (Res.) | NC (Res.) |
|---|---|---|
| Replies | $\times$ | $\times$ |
| Retweets | $\times$ | $\times$ |

Table 6: Test of Normality.

| Type of tweets | T-Test ($\alpha$=0.05) Res. | t-value | p-value | ANOVA ($\alpha$=0.05) Res. | f-ratio | p-value |
|---|---|---|---|---|---|---|
| Replies | $\checkmark$ | 3.001 | 0.001 | $\checkmark$ | 9.04942 | 0.002738 |
| Retweets | $\checkmark$ | 3.190 | 0.001 | $\checkmark$ | 10.17804 | 0.001496 |

Table 7: Parametric statistical tests: T-test and one-way ANOVA.
In Table 6, for the two types of post (first column), we show the results of the
Kolmogorov-Smirnov test [22] (Test of Normality). This is a non-parametric
test that, given a set of observations (in our case, the percentages of
retweets and replies originated by bots in the two sample populations),
checks whether they are normally distributed. Had the test been passed, we
could have determined whether the means of the two data sets differ
significantly by relying on parametric statistical tests on C and NC users'
data, such as the T-test [23] and the one-way Analysis of Variance
(ANOVA) [24]. Unfortunately (see Table 6), the two populations did not pass
the test ($\times$ symbol); therefore, the information obtained from those
parametric tests is unreliable in our situation. For the sake of completeness,
Table 7 reports the outcomes obtained by conducting both parametric tests on
retweets and replies; however, we will not consider them further.
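For reference, the t-values in Table 7 correspond to the classical two-sample Student's t statistic with pooled variance. A bare-bones, standard-library-only sketch follows (in practice one would use a statistics package such as scipy.stats.ttest_ind); the sample data below are illustrative, not the study's:

```python
import math

def pooled_t_statistic(x, y):
    """Two-sample Student's t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # unbiased sample variances
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # pooled variance estimate, assuming equal population variances
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

t = pooled_t_statistic([1, 2, 3, 4], [3, 4, 5, 6])
```

The p-value is then obtained from the t distribution with nx + ny - 2 degrees of freedom, which is where a statistics library becomes convenient.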
| Type of tweets | Mann-Whitney ($\alpha$=0.05) Res. | z-score | p-value | Kruskal–Wallis ($\alpha$=0.05) Res. | H-value | p-value |
|---|---|---|---|---|---|---|
| Replies | $\checkmark$ | 3.37056 | 0.00038 | $\checkmark$ | 11.36 | 0.00075 |
| Retweets | $\checkmark$ | 3.3 | 0.00048 | $\checkmark$ | 10.89 | 0.00097 |

Table 8: Mann-Whitney and Kruskal-Wallis (non-parametric) tests.
We thus rely on non-parametric statistical tests. Table 8 shows the
outcomes of two well-known ones: the Mann-Whitney test [25], the
non-parametric counterpart of the T-test, and the Kruskal–Wallis test [26].
For both, the test is passed if there are sufficient grounds to reject the
null hypothesis, which, roughly, states that "there is no difference in
means" (of 'byBot' content) between the considered populations (in our case,
C and NC users). As Table 8 shows, both types of tweets (i.e., replies and
retweets) pass the two tests ($\checkmark$ symbol). These results suggest
that, when replies and retweets are considered, C users interact more with
bots than NC users, and this behavioural difference is not due to chance.
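Both statistics in Table 8 are computed from ranks alone. The sketch below shows the core of each computation, without tie correction and using the normal approximation for the Mann-Whitney z-score; in practice one would rely on a statistics package such as scipy.stats.mannwhitneyu and scipy.stats.kruskal:

```python
import math

def ranks(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of equal values (a tie group)
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """U statistic and z-score under the normal approximation."""
    n1, n2 = len(x), len(y)
    r = ranks(list(x) + list(y))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, (u - mu) / sigma

def kruskal_h(groups):
    """Kruskal-Wallis H statistic (no tie correction)."""
    data = [v for g in groups for v in g]
    r = ranks(data)
    n = len(data)
    h, start = 0.0, 0
    for g in groups:
        rsum = sum(r[start:start + len(g)])
        h += rsum ** 2 / len(g)
        start += len(g)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)
```

Because both tests use only ranks, they make no normality assumption, which is what makes them applicable after the failed Kolmogorov-Smirnov check.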
## 5 Discussion
In Section 2, we presented one of our previous works [13], where we:
1. assessed the capability of a supervised learning-based approach to classify human-operated Twitter users following many bots;
2. tested 19 learning algorithms to train a credulous users classifier;
3. evaluated the effectiveness of three sets of features to determine which one obtained the best classification performances.
Encouraged by promising results (e.g., an accuracy of 93%) and, therefore, by the
ability to automatically identify those users following a large number of bots,
in this work we extend our studies on C users in depth.
Specifically, to single out information useful to distinguish C from NC users,
we:
1. conducted a detailed study on the classification features, by focusing on those used to train our best performing credulous detector (i.e., ClassA-features);
2. analysed genuine users’ tweeting activities and compared those of credulous users with those of not credulous users (a coarse-grained analysis not linked to interactions with bots);
3. conducted a fine-grained analysis to check our intuition about the higher engagement of credulous users in spreading content originated by bots.
Regarding features’ analysis, we considered three different and well-known
feature ranking algorithms and compared them. There are small differences in
the order in which the features appear in the three rankings. However, since
the same features appear in the highest positions, we can infer that they are
the most effective ones. Some of these high-ranked features are not ‘Class A’
features (i.e., they are not directly accessible from the user profile);
indeed, combinations of other features (for example, by division or
multiplication) are required to compute them. To avoid correlation factors
between features, we selected three among the highest-ranked ones that, in our
opinion, express the essence of our investigations, namely the number of
tweets (a measure of the activity level on the platform), of friends and of
followers (a measure of the level of social relationships). For each of these
features, we carried out a T-test to assess whether the values associated with
C and NC users differ in a statistically significant way. The test succeeded,
revealing that these three features unveil a difference between these two
populations of human-operated accounts.
Since both C and NC users are human-operated accounts, it is possible that,
among the data used to perform the significance tests (on each feature), there
exist some correlations in terms of linear dependency. The statistical
tests performed on the features (namely F3, F5, F7), although successfully
passed, do not take this factor into account. For this reason, we calculated
the PCC and found that there is indeed no correlation.
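The PCC check mentioned above is the standard Pearson correlation coefficient between pairs of feature vectors. A stdlib-only sketch, applied to hypothetical feature vectors rather than the actual F3/F5/F7 values:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# perfectly linearly dependent feature vectors give r = 1.0;
# values near 0 indicate no linear correlation
r = pearson([1, 2, 3, 4], [2, 4, 6, 8])
```

Computing the coefficient for each of the three feature pairs (F3-F5, F3-F7, F5-F7) and checking that all values stay close to zero is the kind of linear-dependency check described above.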
Table 9 recaps in a numerical format the statistics of tweeting, retweeting
and replying activities of the populations investigated in the previous
sections (see Figure 1).
| | Pure Tweets $\mu$ | Pure Tweets $\sigma$ | Retweets $\mu$ | Retweets $\sigma$ | Replies $\mu$ | Replies $\sigma$ |
|---|---|---|---|---|---|---|
| Credulous (C) | 0.56 | 0.26 | 0.29 | 0.24 | 0.14 | 0.12 |
| Not Credulous (NC) | 0.46 | 0.25 | 0.32 | 0.26 | 0.19 | 0.16 |
| NC (sample) | 0.31 | 0.25 | 0.31 | 0.25 | 0.18 | 0.16 |

Table 9: Tweeting activity (stats) of credulous vs not credulous users.
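The means and standard deviations in Table 9 are population statistics over each user's split of activity into pure tweets, retweets, and replies. A sketch of that aggregation on toy counts (not the study's data):

```python
import math

def activity_shares(tweets, retweets, replies):
    """Per-user shares of each activity type; the three shares sum to 1."""
    total = tweets + retweets + replies
    return (tweets / total, retweets / total, replies / total)

def mean_std(values):
    """Population mean and standard deviation."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, math.sqrt(var)

# hypothetical (pure tweets, retweets, replies) counts per user
users = [(50, 30, 20), (70, 20, 10)]
pure_shares = [activity_shares(*u)[0] for u in users]
mu, sigma = mean_std(pure_shares)  # mean pure-tweet share across users
```

Repeating the aggregation per population (C, NC, NC sample) and per activity type fills in one cell pair of Table 9 at a time.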
On average, C users tweet more than NC users; nevertheless, their average
retweeting and replying activities are lower than those of NC users. At first
sight, credulous users seem more talkative in terms of self-produced content,
whereas the scenario seems the opposite for retweets and replies. On closer
inspection, however, the differences in retweets and replies are not so marked,
and we can indeed notice similar behaviours of C and NC users. This
'behavioural equivalence' is exploited in a second, fine-grained behavioural
analysis (Sections 4.1 and 4.2): since the coarse-grained analysis does not
evidence significant differences between C and NC users, we assume similar
behaviour in terms of high-level activities (i.e., replies and retweets). The
fine-grained analysis enables us to assess the difference in terms of replies
to bots' tweets and retweets of bots' tweets.
This additional analysis has been conducted on both retweets and replies and
has revealed the tendency of C users to bounce more content originated by
bots with respect to NC users. To ensure that this behavioural variation does
not happen by chance, we performed further non-parametric statistical tests
(hypothesis tests), which confirm the statistical significance of the different
attitudes featured by the two categories of users. We argue that these results
provide initial, but relevant, evidence of the actual involvement of
specific categories of human-operated accounts in supporting, albeit
unknowingly, potentially malicious activities.
## 6 Related work
In the following, we consider both works that address the topic
of gullibility and approaches that, more generally, study the behaviour of
users on social media. We restrict the related work to those papers we
consider most relevant to our approach; thus, this review is not
intended to be exhaustive.
For the sake of schematization, Table 10 reports a brief recap of the selected
papers that are discussed hereafter. Our interest is focused on studying
users’ behavioral patterns, aiming to derive the main characteristics of
specific humans, i.e., those more exposed to malicious automated accounts’
activities on Twitter, and thus to a higher risk of being influenced by them.
In a recent study about detection of fake news and mitigation of their harmful
effect, Shu and others in [27] give a clear definition of different kinds of
social media users: 1) the 'persuaders', who spread false narratives with
supporting opinions to influence others to believe them; 2) the 'clarifiers',
who propose opposite views to debunk such narratives; and 3) the 'gullible
users', those prone to believe them. We have investigated the possibility that
gullible users are characterized by more interactions with entities such as
automated accounts, when compared to non-gullible users. The measure that
defines the gullibility of a user is the number of automated accounts that the
user has among her/his friends.
Individual behaviour in relation to actions and thinking by other people has
been studied in the social sciences for many years. The studies have led to
the definition of characteristic aptitudes of the individual, such as the
confirmation bias [28], i.e., the tendency ‘to trust information that confirms
personal preexisting beliefs or hypotheses’, the cognitive closure [29], i.e.,
the need of obtaining ‘certainty in an uncertain world’, and the selective
exposure [30], i.e., the preference for ‘information that confirms preexisting
attitudes’, to mention a few of them. With the advent of the Internet and social
media, the analysis of individual behaviour w.r.t. communities and their
beliefs has been projected by data scientists onto the virtual behaviour of
users on the net. In order to understand who and what influences users the
most, and to what extent they can be influenced, in the recent survey Zhou and
Zafarani [31] devote a small section to what they call ‘vulnerable normal
users’. This designation identifies ‘users that are susceptible and vulnerable
to fake news’. Social theories attest that a reason why a user engages in
spreading fake news in a naive way (i.e., without any malicious intent) is
that spreading bears a greater social influence [32].
Table 10: Summary of the most relevant related work.

| Ref. | Brief summary |
|---|---|
| [27] | taxonomy of social users according to susceptibility, persuasion, and aptitude to clarification levels |
| [33] | inclination of susceptible users to listen to fake news regarding financial markets |
| [34] | study on the perceived trust of social users towards massive retweet campaigns |
| [35] | social users’ aptitude to share unverified rumours |
| [36] | persistence of social users to share rumours even if debunked or verified |
| [2] | reasoned motivations that lead social users to believe and spread unverified news |
| [37] | propaganda and how to spot it in online news |
| [38] | users’ behaviour on social networks is influenced by connections and interactions |
| [16] | characterisation of Twitter accounts with features discerning humans from social bots |
| [39] | strategic formation of bots squads to amplify political messages on Twitter |
| [40] | susceptibility of human users quantified in terms of interactions, i.e., mentions, replies, retweets, etc. |
| [41] | studying the characteristics of users replying to or following bots |
| [42] | investigation of users’ retweeting to understand the features of susceptible users attracted by election topics |
| [43] | tracking susceptibility in social media by considering the temporal dynamics of the behavioural factors |
| [44] | building a dataset of Twitter fake news followers by selecting all the accounts replying to known fake news |
| [45] | identifying polarised content on social media (based on users behaviour) and predicting future fake news topics |
| [46] | fake news spreaders in Twitter during the US presidential election influenced Trump supporters |
Probably, the work most similar to ours is the one by Wagner et al. [40],
dated 2012. In that work, the accounts that here we call credulous are
referred to as susceptible. Even in that work susceptible users are
successfully recognized by a classifier, but the premises, and the aims of
[40] are profoundly different from ours. The two works have in common that
they do not focus on detecting social bots but on detecting users who are
susceptible to their attacks. However, there is a substantial difference in
the definition of our credulous users and the susceptible users of [40]. A
susceptible user is a human that has been ‘infected’ by a bot, i.e., has
interacted at least once with a bot, either by mentioning it, retweeting it,
or replying to it. For us, the credulous user is a user with a large number of
bots among her friends. The construction of the datasets is also very
different: [40] inherits accounts and their interactions from the Social Bot
Challenge 2011, a competition organised by the WebEcologyProject. Thus,
Wagner et al. started with a ground truth of genuine bots and accounts, plus a
list of their interactions. We also started with datasets of accounts known a
priori as genuine, but we then ran a bot detector on their friends to see how
many bots they had as friends [13]. Here we study whether C users
interact with bots differently than NC ones. Finally, Wagner et al. had the
goal of discriminating the susceptibility level of the susceptible accounts, a
goal that is out of scope here. Moreover, the results of the analysis of the
susceptibility level were somewhat inconclusive, in the sense that the
granularity with which the level of susceptibility was discriminated was very
coarse. In light of this, it would be very interesting to understand to what
extent the level of credulity of our credulous users can be quantified.
A concrete example of the greater exposure of gullible users to deceptive news
is given in the recent work by Florendo et al. [33], which highlights how
gullibility is, along with demographic factors, one of the features that have
led social media users to believe false news about financial markets. Thus, we
think that automatically recognizing gullible users and understanding their
intrinsic characteristics is one of the cornerstones to build defences to the
spread of false news.
Human reactions are obviously manifold: we do not know a priori whether C users
approve the content they consume and possibly retweet. For instance,
Lin et al. in [34] tested the perceived trust of a set of users towards one
fictitious organization that varied the number of retweets concerning an
invented story about contaminated food in grocery stores. In this study, a
‘snob effect’ was demonstrated, that is, the more the story was retweeted, the
more people tended not to trust the truth of the tweets. Other studies show
different reactions. For example, Zubiaga et al. found that online users are
more active in sharing unverified rumors than they are in later sharing that
these rumors were either debunked or verified [35]. Furthermore, even a bit in
disagreement with the previous result, Owen has shown that even after knowing
that a story is false, a third of the users continue to spread it anyway [36].
Overall, it seems that ‘the veracity of information therefore appears to
matter little’, as observed by Nemr and Gangware in their report on Weapons of
Mass ‘Distraction’ [2]. Nevertheless, even for a scrupulous reader, it would
be very difficult to assess the truthfulness of a news item just by
using critical sense. The literature has made progress with the use of
automatic tools that exploit the automatic processing of natural language, as
demonstrated - for example - in a recent work by Barrón-Cedeño et al. on the
detection of propaganda articles [37].
To understand users’ behavior on social networks, some crucial points have
been identified by Jin et al. in [38]. Among others, a key aspect is
represented by connection and interaction, i.e., the representation of the
relationships among users through different types of social graphs, e.g.,
friendship, following, etc. Inspired by this point, our work aims to
investigate the behaviour of users related by the Twitter followees
relationship, since there might be users that are more exposed to malicious
activities.
A framework for the detection, estimation, and characterisation of Twitter
accounts is presented by Varol et al. in [16], where more than a thousand
features are used to discern humans from social bots. When characterising
friendship ties and information flow between users, two main findings hold on
average: (i) reciprocity of friendship ties is higher for humans, and
(ii) humans were found to interact more with human-like accounts than with
bots. In contrast, in this paper we are interested in spotting those humans
that, maybe unknowingly, diffuse content generated by bots.
The central role of bot accounts in contributing to retweet news and
amplifying the hubs’ messages has been recently observed in Caldarelli et al.
[39]. Given the prominent role of social bots, and the harms they may cause,
testified by a decade-long literature, it becomes of utmost importance to find
out automatic methods to unveil who listens to them, and to what extent.
Hence, we firmly believe that new approaches should be explored to
automatically detect those who heavily interact with the bots.
To the best of our knowledge, most of the literature on social network
analysis deals with detecting bots or assessing the impact of their malicious
activities. The role of humans, instead, has received less attention,
especially when studying misinformation diffusion. Only few attempts have been
made to identify those social media human users that are susceptible to
disinformation attacks by social bots. Users that are most vulnerable to
social bots were considered in [41], where Wald et al. conducted some
experiments to derive the characteristics of users replying to bots or
following them. From their experiments emerged that the Klout score151515Klout
is a private company collecting information on users acting in different
social media (Facebook, Twitter, G+, LinkedIn), to determine their overall
social influence., the number of friends and followers are the best indicators
(among a set of 13 features) to predict whether a human will interact with a
bot. Our work can be considered complementary to [41]; in fact, we also
consider the total number of bot followees for spotting credulous users.
Users’ retweeting is investigated by Lim and Hoang in [42], and it is
associated to three behavioral factors: (i) topic virality, i.e., the ability
of a topic to attract retweets, (ii) user virality, i.e., the ability of a
user to get retweeted for a specific topic, and (iii) user susceptibility,
i.e., the propensity of a user to retweet on a specific topic. In this paper we
are mainly interested in retweets induced by user susceptibility, and from
[42] we learnt that a small group of users is extremely susceptible to
election-related influences.
Virality and susceptibility in social media is tackled by Hoang and Lim in
[43], the focus being on the temporal dynamics of the behavioral factors that
were neglected by the same authors in [42]. Time models are proposed to assign
higher/lower susceptibility score to users on the basis of retweeting
activities during specific time steps. Our work also does not consider the
temporal aspect to lighten the computational cost. However, as future work we
plan to study how the behavior of credulous users change over time.
More recently, there has been some research effort devoted to detecting users
susceptible to fake news. In [44], Shen et al. start from a dataset of fake
news, and all the Twitter users replying to such news are labelled as
vulnerable to disinformation. A supervised classification is later adopted to
train a model that classifies gullible users, according to content-, user-,
and network-based features. Results show the capability to differentiate users
with different susceptibility levels, achieving 0.82 in AUC-ROC as best
performance value. In this paper, too, we analyse the content originated by
bots and disseminated by human users. In particular, we study how potentially
fake content (because originated by bots) is disseminated by credulous users
who, although unknowingly, can actively contribute to the dissemination of
fake news.
A framework to identify polarised content on social media and to predict
future fake news topics is proposed by Del Vicario et al. [45], who use a
number of characteristics related to user behaviour (e.g., number of likes,
comments, and shares) for the classification task. It would be interesting to
design ad-hoc experiments to exploit these characteristics by leveraging those values
that are associated to potential targets for hoaxes and fake news. This way,
we can detect users that are susceptible to and potential contributors of
misinformation spreading.
The influence of fake news in Twitter has been examined in [46] where Bovet
and Makse analyze the information related to the 2016 US presidential
election. Results of this study demonstrate that Clinton supporters were
largely influenced by the spreading of center and left leaning news, whereas
Trump supporters were heavily influenced by the dynamics of the top fake news
spreaders. Similarly to approaches on fake news [44, 45, 46], our interest is
in verifying whether users contributing to the spreading of fake content are
among our credulous users.
## 7 Conclusion
Disinformation spreading on social media is a worrisome phenomenon at which
researchers, platform administrators, and even governments are looking with
concern. The role of bot accounts in this business is unquestionable, but it
would not be effective if nobody paid attention to them. The work
presented in this paper aimed precisely to test the attitude of human-operated
accounts towards reacting to the actions of bots. To this purpose, we have
considered Twitter accounts that have a high number of bots among
their friends; we have named them credulous users. Leveraging the
classification carried out in our previous work, we have analysed the
statistical value of the features considered in that classification phase.
This analysis has enabled us to conclude that some features, such as the
number of tweets, friends, and followers, which can be easily extracted
from the account’s profile, are statistically relevant to discriminate between
credulous and not credulous users.
Besides, by considering the retweets and the replies of the accounts in our
datasets, we have shown, through two statistical tests, that, on average, C
users amplify the content posted by bots more than NC users do. Even before
conducting further experimental analysis on larger samples of C users, we
consider this result very promising. Indeed, it shows that it is possible:
1. to automatically identify credulous users’ accounts by leveraging discriminating features that are very easy to extract;
2. to get useful information on the possible dissemination of spam content, propaganda, and, in general, unreliable information, by focusing on the source of the content credulous users bounce.
#### Future Work
Despite these encouraging results, we argue that scholars and platform admins
should put more effort into making users aware of the pitfalls that can be
hidden in interacting with accounts, be they automated or not, whose purposes
are not honest. Hereafter, we propose some possible future investigations:
- observe the variations of credulous users’ followees and check, over an observation time frame, the nature (genuine vs bots) of those who have started to be followed, those who have stopped being followed, and those who stay longer on the followees lists. This study could help in understanding the constancy of a C user in being susceptible to possibly disreputable content.
- develop approaches for C user detection also for human-operated accounts with more than 400 followees. Investigations in this direction would further contribute to understanding whether the proportion of suspicious users that a C user follows is proportional to the number of followees.
- adapt the approach to other social platforms. The concept of C users strongly depends on the specific relationships between users on each social platform; thus, the notion of being interested in published content deserves platform-specific attention.
## Acknowledgements
This work was partially supported by the European Union’s Horizon 2020
programme (grant agreement No. 830892, SPARTA) and by IMT Scuola Alti Studi
Lucca: Integrated Activity Project TOFFEe ‘TOols for Fighting FakEs’. It has
also benefited from the computing resources (ULITE) provided by the IT
division of LNGS in L’Aquila.
## References
* [1] D. Jackson, Distinguishing Disinformation from Propaganda, Misinformation and Fake News (2017).
URL https://tinyurl.com/yb6cy63p
* [2] C. Gangware, W. Nemr, Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age, Park Advisors, 2019.
URL https://tinyurl.com/y2lpnahe
* [3] E. Ferrara, O. Varol, C. A. Davis, F. Menczer, A. Flammini, The rise of social bots, Commun. ACM 59 (7) (2016) 96–104.
* [4] S. Cresci, R. D. Pietro, M. Petrocchi, A. Spognardi, M. Tesconi, The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race, in: WWW (Companion Volume), 2017, pp. 963–972.
* [5] C. Shao, G. L. Ciampaglia, O. Varol, K.-C. Yang, A. Flammini, F. Menczer, The spread of low-credibility content by social bots, Nat. Commun. 9 (1) (2018).
* [6] L. Luceri, S. Giordano, E. Ferrara, Detecting troll behavior via inverse reinforcement learning: A case study of Russian trolls in the 2016 US election, in: M. D. Choudhury, R. Chunara, A. Culotta, B. F. Welles (Eds.), Proceedings of the Fourteenth International AAAI Conference on Web and Social Media, ICWSM 2020, AAAI Press, 2020, pp. 417–427.
* [7] G. Walton, G. Cohen, A question of belonging: Race, social fit, and achievement, J. Pers. Soc. Psychol. (2007) 82–96.
* [8] D. Webster, A. Kruglanski, Cognitive and social consequences of the need for cognitive closure, European Review of Social Psychology 8 (1) (1997) 133–173. doi:10.1080/14792779643000100.
* [9] A. Waytz, The Psychology Behind Fake News, Kellogg School of Management (847) (2017) 60208.
URL https://tinyurl.com/y65hzqez
* [10] J. De keersmaecker, A. Roets, Fake news: Incorrect, but hard to correct. The role of cognitive ability on the impact of false information on social impressions, Intelligence 65 (2017) 107 – 110. doi:https://doi.org/10.1016/j.intell.2017.10.005.
* [11] K. C. Yang, O. Varol, C. A. Davis, E. Ferrara, A. Flammini, F. Menczer, Arming the public with artificial intelligence to counter social bots, Human Behavior and Emerging Technologies 1 (1) (2019) 48–61. doi:10.1002/hbe2.115.
* [12] S. Bradshaw, P. N. Howard, The Global Disinformation Order 2019 Global Inventory of Organised Social Media Manipulation, Tech. rep. (2019).
URL https://tinyurl.com/y4yxoz7f
* [13] A. Balestrucci, R. De Nicola, M. Petrocchi, C. Trubiani, Do you really follow them? Automatic detection of credulous twitter users, in: IDEAL (1), Vol. 11871 of Lecture Notes in Computer Science, 2019, pp. 402–410. doi:10.1007/978-3-030-33607-3\\_44.
* [14] A. Balestrucci, R. D. Nicola, O. Inverso, C. Trubiani, Identification of credulous users on Twitter, in: SAC, ACM, 2019, pp. 2096–2103.
* [15] S. Cresci, R. D. Pietro, M. Petrocchi, A. Spognardi, M. Tesconi, Fame for sale: Efficient detection of fake Twitter followers, Decis. Support Syst. 80 (2015) 56–71.
* [16] O. Varol, E. Ferrara, C. A. Davis, F. Menczer, A. Flammini, Online human-bot interactions: Detection, estimation, and characterization, in: ICWSM, AAAI Press, 2017, pp. 280–289.
* [17] M. Avvenuti, et al., Hybrid crowdsensing: A novel paradigm to combine the strengths of opportunistic and participatory crowdsensing, in: 26th WWW, 2017, 2017, pp. 1413–1421.
* [18] I. H. Witten, E. Frank, M. A. Hall, Data mining: practical machine learning tools and techniques, 3rd Edition, Morgan Kaufmann, Elsevier, 2011.
* [19] J. B. Lee, J. Lee, An iterative undersampling of extremely imbalanced data using CSVM, in: ICMV, Vol. 9445 of SPIE Proceedings, SPIE, 2014, p. 94452B.
* [20] J. T. Kent, Information gain and a general measure of correlation, Biometrika 70 (1) (1983) 163–173.
* [21] H. Hsu, P. A. Lachenbruch, Paired t test, Encyclopedia of Biostatistics 6 (2005).
* [22] H. W. Lilliefors, On the Kolmogorov-Smirnov test for normality with mean and variance unknown, J. Am. Stat. Assoc. 62 (318) (1967) 399–402.
* [23] W. Sealy Gosset, The probable error of a mean, Biometrika 6 (1) (1908) 1–25.
* [24] D. C. Howell, Statistical methods for psychology, Cengage Learning, 2009.
* [25] H. B. Mann, D. R. Whitney, On a test of whether one of two random variables is stochastically larger than the other, The Annals of Mathematical Statistics (1947) 50–60.
* [26] W. H. Kruskal, W. A. Wallis, Use of ranks in one-criterion variance analysis, J. Am. Stat. Assoc. 47 (260) (1952) 583–621.
* [27] K. Shu, H. R. Bernard, H. Liu, Studying fake news via network analysis: Detection and mitigation, in: Emerging Research Challenges and Opportunities in Computational Social Network Analysis and Mining, Springer, 2019.
* [28] R. Nickerson, Confirmation bias: A ubiquitous phenomenon in many guises, Review of General Psychology 2 (2) (1998).
* [29] A. W. Kruglanski, D. M. Webster, Motivated closing of the mind: ‘seizing’ and ‘freezing’, Psychological Review 103 (1996) 263–283.
* [30] J. Freedman, D. Sears, Selective exposure, Vol. 2 of Advances in Experimental Social Psychology, Academic Press, 1965, pp. 57 – 97.
* [31] X. Zhou, R. Zafarani, A survey of fake news: Fundamental theories, detection methods, and opportunities, ACM Comput. Surv. 53 (5) (Sep. 2020).
* [32] B. Ashforth, F. Mael, Social identity theory and the organization, Academy of management review 14 (1) (1989) 20–39.
* [33] E. H. Florendo J., The role of cognitive style, gullibility, and demographics on the use of social media for financial decision making, J. Financial Services Marketing 24 (2019) 1–10.
* [34] X. Lin, P. R. Spence, Others share this message, so we can trust it? an examination of bandwagon cues on organizational trust in risk, Inf. Process. Manag. 56 (4) (2019) 1559–1564.
* [35] A. Zubiaga, M. Liakata, R. Procter, G. Wong Sak Hoi, P. Tolmie, Analysing how people orient to and spread rumours in social media by looking at conversational threads, PLOS ONE 11 (3) (2016) 1–29.
* [36] L. Hazard Owen, Americans may appreciate knowing when a news story is suspect, but more than a third will share that story anyway (2018).
URL https://tinyurl.com/y5guo8qr
* [37] A. Barrón-Cedeño, I. Jaradat, G. D. S. Martino, P. Nakov, Proppy: Organizing the news based on their propagandistic content, Inf. Process. Manag. 56 (5) (2019) 1849–1864.
* [38] L. Jin, Y. Chen, T. Wang, P. Hui, A. V. Vasilakos, Understanding user behavior in online social networks: a survey, IEEE Commun. Mag. 51 (9) (2013).
* [39] G. Caldarelli, R. De Nicola, F. Del Vigna, M. Petrocchi, F. Saracco, The role of bot squads in the political propaganda on Twitter, Communications Physics 3 (2020) 1–15.
* [40] C. Wagner, S. Mitter, C. Körner, M. Strohmaier, When social bots attack: Modeling susceptibility of users in online social networks, in: #MSM, Vol. 838 of CEUR Workshop Proceedings, CEUR-WS.org, 2012, pp. 41–48.
* [41] R. Wald, T. M. Khoshgoftaar, A. Napolitano, C. Sumner, Predicting susceptibility to social bots on Twitter, in: IRI, IEEE Computer Society, 2013, pp. 6–13.
* [42] E. Lim, T. Hoang, Retweeting: An act of viral users, susceptible users, or viral topics?, in: SDM, SIAM, 2013, pp. 569–577.
* [43] T. Hoang, E. Lim, Tracking virality and susceptibility in social media, in: CIKM, ACM, 2016, pp. 1059–1068.
* [44] T. J. Shen, et al., How gullible are you?: Predicting susceptibility to fake news, in: Web Science, 2019, pp. 287–288.
* [45] M. Del Vicario, W. Quattrociocchi, A. Scala, F. Zollo, Polarization and fake news: Early warning of potential misinformation targets, ACM Transactions on the Web (TWEB) 13 (2) (2019) 1–22.
* [46] A. Bovet, H. A. Makse, Influence of fake news in Twitter during the 2016 US presidential election, Nat. Commun. 10 (1) (2019) 7.
|
# Data-Driven Set-Based Estimation using Matrix Zonotopes with Set Containment Guarantees

Amr Alanwar∗,1, Alexander Berndt∗,2, Karl Henrik Johansson2, and Henrik Sandberg2

∗Authors are with equal contributions. 1The author is with Jacobs University, Bremen.<EMAIL_ADDRESS>2The authors are with the Division of Decision and Control Systems at KTH Royal Institute of Technology. {alberndt, hsan,<EMAIL_ADDRESS>
###### Abstract
We propose a method to perform set-based state estimation of an unknown
dynamical linear system using a data-driven set propagation function. Our
method comes with set-containment guarantees, making it applicable to safety-
critical systems. The method consists of two phases: (1) an offline learning
phase where we collect noisy input-output data to determine a function to
propagate the state-set ahead in time; and (2) an online estimation phase
consisting of a time update and a measurement update. It is assumed that known
finite sets bound measurement noise and disturbances, but we assume no
knowledge of their statistical properties. These sets are described using
zonotopes, allowing efficient propagation and intersection operations. We
propose a new approach to compute a set of models consistent with the data and
noise-bound, given input-output data in the offline phase. The set of models
is utilized in replacing the unknown dynamics in the data-driven set
propagation function in the online phase. Then, we propose two approaches to
perform the measurement update. Simulations show that the proposed estimator
yields state sets comparable in volume to the $3\sigma$ confidence bounds
obtained by a Kalman filter approach, but with the addition of state set-
containment guarantees. We observe that using constrained zonotopes yields
smaller sets but with higher computational costs than unconstrained ones.
## I Introduction
Set-based estimation involves the computation of a set, which is guaranteed to
contain the system’s true state at each time step given bounded uncertainties
[1]. Existing set-based observers require a system model to propagate the
state set at each time step [2, 3]. We address the problem of propagating the
state set using only noisy offline input-output data and merging this with
online measurements to obtain a time-varying state set which is guaranteed to
contain the true system’s state at each time-step. This problem is essential
in safety-critical applications [4].
Two popular set-based estimators are interval observers and set-membership
observers. Interval-based observers generally generate state estimates by
utilizing an observer gain to fuse a model-based time update of the state with
current measurements. For example, the authors in [5] propose an exponentially
stable interval-based observer for time-invariant linear systems. Set-
membership observers generally follow a geometrical approach by intersecting
the state-space regions consistent with the model with those from the
measurements to obtain the current state set [6]. This approach has been
extended to sensor networks with event-based communication in [7] and multi-
rate systems in [8]. Various set representations have been used for set-
membership observers such as ellipsoids [9], polytopes [10] and zonotopes
[11]. Zonotopes are a special class of polytopes for which one can efficiently
compute linear maps, and Minkowski sums – both frequent operations performed
by set-based observers.
All the aforementioned observers use a model of the underlying system to
propagate the state set. However, identifying a system model is often time-
consuming, and the identified model is not necessarily well-suited for
estimation or control. Recent works based on Willems’ fundamental lemma [12]
have shown that system trajectories can be used directly to synthesize
controllers. The authors in [13] present an extended Kalman filter and model
predictive control (MPC) scheme computed directly from system trajectories.
Stability and robustness guarantees for such a data-driven control scheme are
presented in [14], and for an MPC scheme in [15]. An alternative approach is
to find a set of models that is consistent with data and use this set of
models to propagate a state set [16].
Our contribution is a novel method to perform set-based state estimation with
set-containment guarantees given bounded, noisy measurements and known inputs.
The algorithm, summarized in Fig. 1, consists of an offline learning phase to
determine a state-propagation function $f(\cdot)$ directly from data, and an
online estimation phase to perform a time update using $f(\cdot)$ and
measurements iteratively to track the system state. Unlike [16, 17], which
rely on input-state data, we propose a new approach to compute the set of
models consistent with the data and noise bound from input-output data alone.
Then, we present
two approaches to perform the measurement update utilizing either the singular
value decomposition (SVD) of the observation matrix or an optimization
formulation. We compare the approaches in simulation. Our method is shown to
yield set-based state estimates similar in size to $3\sigma$ confidence bounds
of an approach based on system identification and a Kalman filter, but with
the addition of set-containment guarantees. The code to recreate our findings
is publicly available111https://github.com/alexberndt/data-driven-set-based-
estimation-zonotopes.
The rest of this paper is outlined as follows. Sec. II introduces the
preliminaries and problem statement. We present our method in Sec. III and
evaluate it in Sec. IV. Finally, Sec. V concludes the paper.
Figure 1: The proposed method showing the offline learning phase yielding
$f(\cdot)$ and the online estimation phase which utilizes $f(\cdot)$ to
perform the time update, followed by a measurement update yielding the set
$\hat{\mathscr{R}}_{k}$ at time-step $k$.
## II Preliminaries and Problem Statement
We denote the $i$-th element of a vector or list $A$ by $A^{(i)}$. We first
introduce some set representations.
###### Definition 1.
(Zonotope [18]) Given a center $c\in\mathbb{R}^{n}$ and a number
$\xi\in\mathbb{N}$ of generator vectors in a generator matrix
$G=[g^{(1)},...,g^{(\xi)}]\in\mathbb{R}^{n\times\xi}$, a zonotope is a set
$\mathscr{Z}=\Big{\\{}x\in\mathbb{R}^{n}\;\Big{|}\;x=c+\sum_{i=1}^{\xi}\beta^{(i)}\,g^{(i)}\,,-1\leq\beta^{(i)}\leq
1\Big{\\}}.$ (1)
We use the shorthand notation $\mathscr{Z}=\langle c,G\rangle$.
Given two zonotopes $\mathscr{Z}_{1}$ and $\mathscr{Z}_{2}$, we use the
notation $+$ for the Minkowski sum, and $\mathscr{Z}_{1}-\mathscr{Z}_{2}$ to
denote $\mathscr{Z}_{1}+(-\mathscr{Z}_{2})$, not the Minkowski difference.
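Definition 1 and the two operations just mentioned are straightforward to prototype. The sketch below (our own helper class, not from the paper's code) represents a zonotope as the pair $\langle c,G\rangle$ and implements the linear map $A\mathscr{Z}=\langle Ac,AG\rangle$ and the Minkowski sum $\mathscr{Z}_{1}+\mathscr{Z}_{2}=\langle c_{1}+c_{2},[G_{1}\;G_{2}]\rangle$:

```python
import numpy as np

class Zonotope:
    """Zonotope <c, G>: {c + G @ beta : ||beta||_inf <= 1} (Definition 1)."""
    def __init__(self, c, G):
        self.c = np.asarray(c, dtype=float)
        self.G = np.asarray(G, dtype=float)

    def __add__(self, other):
        # Minkowski sum: <c1 + c2, [G1 G2]> -- exact for zonotopes
        return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

    def linear_map(self, A):
        # A * <c, G> = <A c, A G> -- exact for zonotopes
        return Zonotope(A @ self.c, A @ self.G)

    def sample(self, n=1, rng=None):
        # random points inside the zonotope (useful for empirical checks)
        rng = np.random.default_rng(rng)
        beta = rng.uniform(-1.0, 1.0, size=(self.G.shape[1], n))
        return (self.c[:, None] + self.G @ beta).T

Z1 = Zonotope([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])   # unit box
Z2 = Zonotope([1.0, 0.0], [[0.5], [0.5]])
```

Both operations are exact for zonotopes, which is why they are the workhorse operations of set-based observers.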
###### Definition 2.
(Matrix zonotope [4, p.52]) Given a center matrix $C\in\mathbb{R}^{n\times k}$
and $\xi\in\mathbb{N}$ generator matrices ${G}^{(i)}\in\mathbb{R}^{n\times k}$
where $i\in\\{1,\dots,\xi\\}$, a matrix zonotope is the set
$\mathscr{M}=\Big{\\{}X\in\mathbb{R}^{n\times
k}\;\Big{|}\;X=C+\sum_{i=1}^{\xi}{\beta}^{(i)}\,{G}^{(i)}\,,-1\leq{\beta}^{(i)}\leq
1\Big{\\}}.$
We use the notation $\mathscr{M}=\langle C,{G}^{(1:\xi)}\rangle$, where
${G}^{(1:\xi)}=[{G}^{(1)},\dots,{G}^{(\xi)}]$.
###### Definition 3.
(Interval matrix [4, p. 42]) An interval matrix $\mathscr{I}$ specifies the
interval of all possible values for each matrix element between the left limit
$\underline{I}$ and right limit $\bar{I}$:
$\displaystyle\mathscr{I}=\begin{bmatrix}\underline{I},\bar{I}\end{bmatrix},\quad\underline{I},\bar{I}\in\mathbb{R}^{r\times
c}$ (2)
We consider estimating the set of all possible system states using an array of
$q$ sensors. Our system is described as
$\displaystyle x(k+1)$
$\displaystyle=A_{\text{tr}}x(k)+B_{\text{tr}}u(k)+w(k),$ (3a) $\displaystyle
y^{i}(k)$ $\displaystyle=C^{i}x(k)+v^{i}(k),\;\;i\in\\{1,\dots,q\\},$ (3b)
where $x(k)\in\mathbb{R}^{n}$ is the system state, $u(k)\in\mathbb{R}^{m}$ the
input, $y^{i}(k)\in\mathbb{R}^{p_{i}}$ the measurement of sensor $i$,
$x(0)\in\mathscr{X}_{0}$ the initial condition where $\mathscr{X}_{0}$ is the
initial bounding zonotope. Furthermore, the system matrices
$A_{\text{tr}}\in\mathbb{R}^{n\times n}$ and
$B_{\text{tr}}\in\mathbb{R}^{n\times m}$ are unknown whereas
$C^{i}\in\mathbb{R}^{p_{i}\times n}$ is known for all $i\in\\{1,\dots,q\\}$.
The noise $w(k)\in\mathscr{Z}_{w}$ and $v^{i}(k)\in\mathscr{Z}_{v,i}$ are
assumed to belong to the bounding zonotopes $\mathscr{Z}_{w}=\langle
c_{w},G_{w}\rangle\subset\mathbb{R}^{n}$ and $\mathscr{Z}_{v,i}=\langle
c_{v,i},G_{v,i}\rangle\subset\mathbb{R}^{p_{i}}$ for $i\in\\{1,\dots,q\\}$,
respectively. We denote the Frobenius norm by $\|.\|_{F}$ and the null space
of a matrix $A$ by $\texttt{ker}(A)$. The pseudoinverse of an interval
matrix, denoted by $\dagger$, is computed by adapting [19, Thm 2.40].
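To make the setup concrete, the following sketch simulates (3) for hypothetical matrices $A_{\text{tr}}$, $B_{\text{tr}}$, $C^{1}$ (illustrative values only; the estimator never sees $A_{\text{tr}}$, $B_{\text{tr}}$) with noise drawn from the bounding zonotopes, and assembles data matrices in the spirit of (5):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_zonotope(c, G, rng):
    """Draw a point from <c, G> by sampling beta uniformly in [-1, 1]."""
    beta = rng.uniform(-1.0, 1.0, G.shape[1])
    return c + G @ beta

# Hypothetical system matrices; A_tr, B_tr are unknown to the estimator.
A_tr = np.array([[0.9, 0.1], [0.0, 0.8]])
B_tr = np.array([[0.0], [1.0]])
C1   = np.array([[1.0, 0.0]])                 # known output matrix of sensor 1

c_w, G_w = np.zeros(2), 0.01 * np.eye(2)      # process-noise zonotope Z_w
c_v, G_v = np.zeros(1), 0.05 * np.eye(1)      # measurement-noise zonotope Z_{v,1}

T = 20
x = np.zeros(2)
X, U, Y = [x.copy()], [], []
for k in range(T):
    u = rng.uniform(-1.0, 1.0, 1)
    x = A_tr @ x + B_tr @ u + sample_zonotope(c_w, G_w, rng)
    y = C1 @ x + sample_zonotope(c_v, G_v, rng)
    U.append(u); X.append(x.copy()); Y.append(y)

# Data matrices as in (5): columns are consecutive samples.
U_minus = np.column_stack(U)                           # u(0) ... u(T-1)
X_plus, X_minus = np.column_stack(X[1:]), np.column_stack(X[:-1])
```

In the paper only the noisy outputs $z^{i}(k)$ (not the states) are available offline; the state sequences above are kept here only to check that the generated data respects the noise bounds.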
Let ${\mathscr{R}}_{k}$ denote a set containing $x(k)$ given the exact system
model and bounded, but unknown, process and measurement noise. The problem
addressed in this paper is to develop an algorithm that returns a set
$\hat{\mathscr{R}}_{k}\supseteq{\mathscr{R}}_{k}$, which is guaranteed to
contain the true state $x(k)$ at each time instance $k$, i.e.,
$x(k)\in\hat{\mathscr{R}}_{k}$ for all $k$, given input-output data and bounds
for model uncertainties and measurement noise without knowledge of the model
$\begin{bmatrix}A_{\text{tr}}&B_{\text{tr}}\end{bmatrix}$.
## III Data-driven Set-based Estimation
Our proposed data-driven set estimator consists of two phases: an offline
learning phase and an online estimation phase. In the offline phase, we
compute the function to perform the time update. The online phase consists of
iteratively performing a time update and a measurement update. We denote the
time and measurement updated sets at $k$ by
$\tilde{\mathscr{R}}_{k}\subset\mathbb{R}^{n}$ and
$\hat{\mathscr{R}}_{k}\subset\mathbb{R}^{n}$, respectively.
### III-A Offline Learning Phase
The objective of this phase is to compute a function
$f:\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}$, such that
$\tilde{\mathscr{R}}_{k+1}=f(\hat{\mathscr{R}}_{k},\mathscr{U}_{k})$, i.e.,
$f$ returns $\tilde{\mathscr{R}}_{k+1}$ given a known input zonotope
$\mathscr{U}_{k}$ and the measurement updated set $\hat{\mathscr{R}}_{k}$ at
time-step $k$ such that we can guarantee $x(k+1)\in\tilde{\mathscr{R}}_{k+1}$
for all $k$. During this phase, we assume offline access to an input sequence
$u(k)$ and noisy outputs $z^{i}(k)$ such that
$\displaystyle z^{i}(k)$ $\displaystyle=C^{i}x(k)+\gamma^{i}(k),$ (4)
where the noise $\gamma^{i}(k)$ is bounded by the zonotope
$\mathscr{Z}_{\gamma,i}=\langle c_{\gamma,i},G_{\gamma,i}\rangle$, i.e.,
$\gamma^{i}(k)\in\mathscr{Z}_{\gamma,i},\forall k$. Stacking all sensors
vertically gives the combined noisy output
$z(k)=\begin{bmatrix}z^{1^{T}}(k)&...&z^{q^{T}}(k)\end{bmatrix}^{T}$ and
similarly for $\gamma$ and $C$. For the sake of clarity, we differentiate the
notation of the offline noisy output $z^{i}(k)$ from the online noisy output
$y^{i}(k)$ and similarly for the measurement noise. Given an experiment
yielding a sequence of noisy data of length $T$, we can construct the
following sequences
$\displaystyle\begin{split}Z^{+}&=\begin{bmatrix}z(1)&\dots&z(T)\end{bmatrix},\\\
Z^{-}&=\begin{bmatrix}z(0)&\dots&z(T-1)\end{bmatrix},\\\
U^{-}&=\begin{bmatrix}u(0)&\dots&u(T-1)\end{bmatrix}.\end{split}$ (5)
We further construct
$\displaystyle Z$ $\displaystyle=\begin{bmatrix}z(0)&\dots&z(T)\end{bmatrix},$
and similarly for other signals. The data
$D=\begin{bmatrix}U^{-}&Z\end{bmatrix}$ can be from one sensor or multiple
sensors. Furthermore, we denote the sequence of unknown process noise $w(k)$
as ${W}^{-}=\begin{bmatrix}{w}(0)&\dots&{w}(T{-}1)\end{bmatrix}$. Here,
${W}^{-}\in\mathscr{M}_{w}$ where $\mathscr{M}_{w}=\langle
C_{\mathscr{M},w},G^{(1:\xi T)}_{\mathscr{M},w}\rangle$ is the matrix zonotope
resulting from the concatenation of multiple noise zonotopes
$\mathscr{Z}_{w}=\langle c_{w},[g_{w}^{(1)},\dots,g_{w}^{(\xi)}]\rangle$ as
$\begin{split}C_{\mathscr{M},w}&=\begin{bmatrix}c_{w}&\dots&c_{w}\end{bmatrix},\\\
G^{(1+(i-1)T)}_{\mathscr{M},w}&=\begin{bmatrix}g_{w}^{(i)}&0_{n\times(T-1)}\end{bmatrix},\\\
G^{(j+(i-1)T)}_{\mathscr{M},w}&=\begin{bmatrix}0_{n\times(j-1)}&g_{w}^{(i)}&0_{n\times(T-j)}\end{bmatrix},\\\
G^{(T+(i-1)T)}_{\mathscr{M},w}&=\begin{bmatrix}0_{n\times(T-1)}&g_{w}^{(i)}\end{bmatrix},\end{split}$
for all $i=\\{1,\dots,\xi\\}$, $j=\\{2,\dots,T-1\\}$ [16]. In a similar
fashion, we describe the unknown noise and matrix zonotope of $\gamma(k)$ as
$\Gamma^{+},\Gamma^{-}\in\mathscr{M}_{\gamma}=\langle
C_{\mathscr{M},{\gamma}},G^{(1:\xi T)}_{\mathscr{M},{\gamma}}\rangle$. We
denote by $\mathscr{N}_{\Sigma}$ the set of all system matrices
$\begin{bmatrix}A&B\end{bmatrix}$ that are consistent with the data:
$\displaystyle\mathscr{N}_{\Sigma}=\\{$
$\displaystyle\begin{bmatrix}A&B\end{bmatrix}|\;X^{+}=AX^{-}+BU^{-}+W^{-},$
$\displaystyle Z^{-}=CX^{-}+\Gamma^{-},\,Z^{+}=CX^{+}+\Gamma^{+},$
$\displaystyle W^{-}\in\mathscr{M}_{w},\,\Gamma^{+}\in\mathscr{M}_{\gamma},\,\Gamma^{-}\in\mathscr{M}_{\gamma}\\}.$
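The stacking of $\mathscr{Z}_{w}$ into the matrix zonotope $\mathscr{M}_{w}$ described above can be sketched as follows (the function name is ours); each generator matrix is nonzero in exactly one column:

```python
import numpy as np

def noise_matrix_zonotope(c_w, G_w, T):
    """Stack the noise zonotope Z_w = <c_w, [g^(1) ... g^(xi)]> over T time
    steps into the matrix zonotope M_w = <C_Mw, G_Mw^(1:xi*T)> (see text)."""
    n, xi = G_w.shape
    C_Mw = np.tile(c_w.reshape(-1, 1), (1, T))        # center [c_w ... c_w]
    gens = []
    for i in range(xi):                               # one generator per pair (i, j)
        for j in range(T):
            Gij = np.zeros((n, T))
            Gij[:, j] = G_w[:, i]                     # g_w^(i) placed in column j
            gens.append(Gij)
    return C_Mw, gens

c_w = np.zeros(2)
G_w = 0.01 * np.eye(2)
C_Mw, gens = noise_matrix_zonotope(c_w, G_w, T=5)
```

By construction, every matrix in $\mathscr{M}_{w}$ has each of its columns inside $\mathscr{Z}_{w}$, which is exactly what is needed for the stacked noise sequence $W^{-}$.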
By definition,
$\begin{bmatrix}A_{\text{tr}}&B_{\text{tr}}\end{bmatrix}\in\mathscr{N}_{\Sigma}$
as $\begin{bmatrix}A_{\text{tr}}&B_{\text{tr}}\end{bmatrix}$ is one of the
systems that are consistent with the data. The following lemma finds a set
of models $\mathscr{M}_{\Sigma}$ that over-approximates
$\mathscr{N}_{\Sigma}$, i.e.,
$\mathscr{N}_{\Sigma}\subseteq\mathscr{M}_{\Sigma}$, which defines $f(\cdot)$
introduced above. For this, we aim to determine the mapping of the observation
$Z^{+}$ and $Z^{-}$ to the corresponding state-space region. Specifically, we
construct a zonotope $\mathscr{Z}_{x|z^{i}(k)}\subset\mathbb{R}^{n}$ that
contains all possible $x\in\mathbb{R}^{n}$ given $z^{i}(k)$, $C^{i}$ and
bounded noise $\gamma^{i}(k)\in\mathscr{Z}_{\gamma,i}$ satisfying (4), for
each $i$. This can be written as
$\displaystyle\mathscr{Z}_{x|z^{i}(k)}=\Big{\\{}x\in\mathbb{R}^{n}\;\Big{|}\;C^{i}x=z^{i}(k)-\mathscr{Z}_{\gamma,i}\Big{\\}}.$
(6)
Extending (6) to a matrix zonotope allows us to find the mapping of $Z^{+}$
and $Z^{-}$ to the state space, which is utilized to compute
$\mathscr{M}_{\Sigma}$. We omit the time index $k$ and sensor index $i$ when
possible for simplicity. We assume an a priori known upper bound $M$ on the
state trajectory, i.e., $M\geq\lVert x\rVert_{2}$ for all $k$.
###### Lemma 1.
Given input-output trajectories $D=\begin{bmatrix}U^{-}&Z\end{bmatrix}$ of the
system (3), the matrix zonotope
$\displaystyle\mathscr{M}_{\Sigma}=(\mathscr{M}^{+}_{x|z}-\mathscr{M}_{w})\begin{bmatrix}\mathscr{M}^{-}_{x|z}\\\
U^{-}\end{bmatrix}^{\dagger}$ (7)
contains all matrices $\begin{bmatrix}A&B\end{bmatrix}$ that are consistent
with the data $D$ and the noise bounds, i.e.,
$\mathscr{N}_{\Sigma}\subseteq\mathscr{M}_{\Sigma}$, with
$\mathscr{M}^{+}_{x|z}=\langle
C^{+}_{\mathscr{M},x|z},G_{\mathscr{M},x|z}^{(1:\xi T+1)}\rangle$ and
$\mathscr{M}^{-}_{x|z}=\langle
C^{-}_{\mathscr{M},x|z},G_{\mathscr{M},x|z}^{(1:\xi T+1)}\rangle$ where
$\displaystyle C^{+}_{\mathscr{M},x|z}$ $\displaystyle=V_{1}\Sigma_{r\times
r}^{-1}P_{1}^{\top}\big{(}Z^{+}-C_{\mathscr{M},{\gamma}}\big{)},$ (8)
$\displaystyle C^{-}_{\mathscr{M},x|z}$ $\displaystyle=V_{1}\Sigma_{r\times
r}^{-1}P_{1}^{\top}\big{(}Z^{-}-C_{\mathscr{M},{\gamma}}\big{)},$ (9)
$\displaystyle G_{\mathscr{M},x|z}^{(i)}$ $\displaystyle=V_{1}\Sigma_{r\times
r}^{-1}P_{1}^{\top}G^{(i)}_{\mathscr{M},{\gamma}},\quad i=\\{1,\dots,\xi
T\\},$ (10) $\displaystyle G_{\mathscr{M},x|z}^{(\xi T+1)}$
$\displaystyle=MV_{2}1_{(n-r)\times T},$ (11)
for all $M\geq\lVert x\rVert_{2}$, with $P_{1}$, $V_{1}$, $\Sigma$ and $V_{2}$
obtained from the SVD of $C$. Assuming $C$ has rank $r$, then
$\displaystyle
C=\begin{bmatrix}P_{1}&P_{2}\end{bmatrix}\begin{bmatrix}\Sigma_{r\times
r}&0_{r\times(n-r)}\\\ 0_{(p-r)\times
r}&0_{(p-r)\times(n-r)}\end{bmatrix}\begin{bmatrix}V_{1}^{\top}\\\
V_{2}^{\top}\end{bmatrix},$ (12)
where blocks with a non-positive dimension are empty matrices.
###### Proof.
From (12), we rewrite (4) as ${P_{1}\Sigma V_{1}^{\top}x=z-\gamma}$, so
$x=V_{1}\Sigma^{-1}P_{1}^{\top}(z-\gamma)$. Since $\gamma$ is bounded by
${\mathscr{Z}_{\gamma}=\langle c_{\gamma},G_{\gamma}\rangle}$, we can write
$\displaystyle
x=\underbrace{V_{1}\Sigma^{-1}P_{1}^{\top}\big{(}z-c_{\gamma}\big{)}}_{c_{x|z}}-\underbrace{V_{1}\Sigma^{-1}P_{1}^{\top}G_{\gamma}}_{G_{x|z}^{\prime}}\beta,\;\;|\beta|\leq
1.$
This set corresponds to all possible $x$ values within the range space of $C$
satisfying (4). By definition, if $r=n$, then ${V_{2}=\emptyset}$, $V_{1}$
spans the domain of $x$, and $\langle c_{x|z},G_{x|z}^{\prime}\rangle$
sufficiently defines all possible $x$ satisfying (4). However, if $r<n$,
$V_{1}$ only spans a subset of the domain of $x$. To ensure
$\mathscr{Z}_{x|z}$ contains all possible $x$ we include a basis for
$\texttt{ker}(C)$ in $G_{x|z}$ by appending the generator $V_{2}M$ to
$G_{x|z}$, and ensuring $M\geq\|x\|_{2}$ such that $V_{2}M$ includes all $x$
values in the directions of $V_{2}$. In both cases for $r$, the generator
matrix can be written as
$\displaystyle
G_{x|z}=\begin{bmatrix}G_{x|z}^{\prime}&V_{2}M\end{bmatrix}=\begin{bmatrix}V_{1}\Sigma^{-1}P_{1}^{\top}G_{\gamma}&V_{2}M\end{bmatrix},$
and the set $\mathscr{Z}_{x|z}=\langle c_{x|z},G_{x|z}\rangle$. This result
extends to the case when $r<p$ using similar argumentation in the respective
cases $r=n$ and $r<n$. Considering the matrix version of $\mathscr{Z}_{x|z}$
results in proving $\mathscr{M}^{+}_{x|z}$ and $\mathscr{M}^{-}_{x|z}$. Then,
we extend the proof of [17, Lem.1] for input-output data: For any
$\begin{bmatrix}A&B\end{bmatrix}\in\mathscr{N}_{\Sigma}$, we know that there
exists a $W^{-}\in\mathscr{M}_{w}$ such that
$\displaystyle AX^{-}+BU^{-}=X^{+}-W^{-}.$ (13)
Every $W^{-}\in\mathscr{M}_{w}$ can be represented by a specific choice
$\hat{\beta}^{(i)}_{\mathscr{M},w}$,
$-1\leq\hat{\beta}^{(i)}_{\mathscr{M},w}\leq 1$,
$i=1,\dots,\xi_{\mathscr{M},w}$, that results in a matrix inside the matrix
zonotope $\mathscr{M}_{w}$:
$\displaystyle W^{-}$
$\displaystyle=C_{\mathscr{M},w}+\sum_{i=1}^{\xi_{\mathscr{M},w}}\hat{\beta}^{(i)}_{\mathscr{M},w}G_{\mathscr{M},w}^{(i)}.$
Rearranging (13) and considering $\mathscr{M}^{+}_{x|z}$ and
$\mathscr{M}^{-}_{x|z}$ as an over-approximation of $X^{+}$ and $X^{-}$,
respectively, yields
$\displaystyle\begin{bmatrix}A&B\end{bmatrix}=\left(\mathscr{M}^{+}_{x|z}-C_{\mathscr{M},w}-\sum_{i=1}^{\xi_{\mathscr{M},w}}\hat{\beta}^{(i)}_{\mathscr{M},w}G_{\mathscr{M},w}^{(i)}\right)\begin{bmatrix}\mathscr{M}^{-}_{x|z}\\\
U^{-}\end{bmatrix}^{\dagger}$ (14)
Hence, for all $\begin{bmatrix}A&B\end{bmatrix}\in\mathscr{N}_{\Sigma}$, there
exists $\hat{\beta}^{(i)}_{\mathscr{M},w}$,
${-1\leq\hat{\beta}^{(i)}_{\mathscr{M},w}\leq 1}$,
$i=1,\dots,\xi_{\mathscr{M},w}$, such that (14) holds. Therefore, for all
$\begin{bmatrix}A&B\end{bmatrix}\in\mathscr{N}_{\Sigma}$, it also holds that
$\begin{bmatrix}A&B\end{bmatrix}\in\mathscr{M}_{\Sigma}$ as defined in (7),
which concludes the proof. ∎
Given that we have found a matrix zonotope $\mathscr{M}_{\Sigma}$ that
contains the true system dynamics
$\begin{bmatrix}A_{\text{tr}}&B_{\text{tr}}\end{bmatrix}{\in}\mathscr{M}_{\Sigma}$,
we can utilize it in computing the time update reachable set
$\tilde{\mathscr{R}}_{k}$ in the following theorem.
###### Theorem 1.
The set $\tilde{\mathscr{R}}_{k}$ over-approximates the exact reachable set,
i.e., $\tilde{\mathscr{R}}_{k}\supseteq\mathscr{R}_{k}$, where
$\displaystyle\tilde{\mathscr{R}}_{k+1}=\mathscr{M}_{\Sigma}(\tilde{\mathscr{R}}_{k}\times\mathscr{U}_{k})+\mathscr{Z}_{w},$
(15)
and $\tilde{\mathscr{R}}_{0}=\mathscr{X}_{0}$.
###### Proof.
As
$\begin{bmatrix}A_{\text{tr}}&B_{\text{tr}}\end{bmatrix}{\in}\mathscr{M}_{\Sigma}$
according to Lemma 1, every reachable state
$x(k+1)=A_{\text{tr}}x(k)+B_{\text{tr}}u(k)+w(k)$ is contained in the
right-hand side of (15) whenever $x(k)\in\tilde{\mathscr{R}}_{k}$. Starting
from the same initial set $\mathscr{X}_{0}$, it follows by induction that
${\tilde{\mathscr{R}}_{k}{\supseteq}\mathscr{R}_{k}}$ for all $k$. ∎
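The product $\mathscr{M}_{\Sigma}(\tilde{\mathscr{R}}_{k}\times\mathscr{U}_{k})$ in (15) can be over-approximated by a zonotope: writing $M=C+\sum_{i}\beta_{i}G^{(i)}$ and $x=c+\sum_{j}\alpha_{j}g_{j}$, every coefficient $\alpha_{j}$, $\beta_{i}$, $\beta_{i}\alpha_{j}$ lies in $[-1,1]$, so treating them as independent generator coefficients yields a conservative but valid enclosure. A sketch (our helper, not the paper's implementation):

```python
import numpy as np

def matzono_times_zono(C, Gmats, c, G):
    """Enclose {M x : M in <C, Gmats>, x in <c, G>} in a zonotope.
    Cross coefficients beta_i * alpha_j lie in [-1, 1]; treating them as
    independent generators gives a conservative but valid enclosure."""
    gens = [C @ G]                              # alpha-part mapped by the center
    for Gi in Gmats:
        gens.append((Gi @ c).reshape(-1, 1))    # beta_i * G^(i) c
        gens.append(Gi @ G)                     # beta_i alpha_j * G^(i) g_j
    return C @ c, np.hstack(gens)
```

The time update (15) then appends the Minkowski sum with $\mathscr{Z}_{w}$, i.e., adds $c_{w}$ to the center and $G_{w}$ to the generator matrix.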
### III-B Online Estimation Phase using Zonotopes
In this subsection, we present the online estimation phase. We are now
considering the system (3a) with observations (3b). This phase consists of a
time update and a measurement update. In Sec. III-A, we derived the function
$f(\cdot)$ for the time update. We next present two approaches to perform the
measurement update.
#### III-B1 Approach 1 - Reverse-Mapping
For this approach, we aim to determine the mapping of an observation
$y^{i}(k)$ to the corresponding state-space region. Similar to Lemma 1, we
construct a zonotope $\mathscr{Z}_{x|y^{i}(k)}\subset\mathbb{R}^{n}$ that
contains all possible $x\in\mathbb{R}^{n}$ given $y^{i}(k)$, $C^{i}$ and
bounded noise $v^{i}(k)\in\mathscr{Z}_{v,i}$ satisfying (3b), for each $i$.
###### Proposition 1.
Assume $\|x\|_{2}\leq K$. Given a measurement $y^{i}(k)$ with noise
$v^{i}(k)\in\mathscr{Z}_{v,i}=\langle c_{v,i},G_{v,i}\rangle$ satisfying (3b),
the possible states $x$ that correspond to this measurement are contained
within the zonotope $\mathscr{Z}_{x|y^{i}}=\langle
c_{x|y^{i}},G_{x|y^{i}}\rangle,$ where
$\begin{split}c_{x|y^{i}}&=V_{1}\Sigma_{r^{i}\times
r^{i}}^{-1}P_{1}^{\top}\big{(}y^{i}(k)-c_{v,i}\big{)},\\\
G_{x|y^{i}}&=\begin{bmatrix}V_{1}\Sigma_{r^{i}\times
r^{i}}^{-1}P_{1}^{\top}G_{v,i}&V_{2}M\end{bmatrix},\end{split}$ (16)
for all $M\geq K$, with $P_{1}$, $V_{1}$, $\Sigma$ and $V_{2}$ obtained from
the SVD of $C^{i}$ as in (12).
###### Proof.
The proof follows immediately from Lemma 1. ∎
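Proposition 1, i.e., the measZon() step, can be sketched directly from the SVD in (12); the numerical rank tolerance and function name below are our choices:

```python
import numpy as np

def meas_zonotope(y, c_v, G_v, C, M):
    """Reverse-map a measurement y = C x + v, v in <c_v, G_v>, into the
    state-space zonotope <c_xy, G_xy> of (16); M bounds ||x||_2 so that the
    appended M * V2 generator covers ker(C)."""
    P, s, Vt = np.linalg.svd(C)
    r = int(np.sum(s > 1e-10))                   # numerical rank of C
    P1, V1, V2 = P[:, :r], Vt[:r, :].T, Vt[r:, :].T
    pinv = V1 @ np.diag(1.0 / s[:r]) @ P1.T      # V1 Sigma_r^{-1} P1^T
    c_xy = pinv @ (y - c_v)
    G_xy = np.hstack([pinv @ G_v, M * V2]) if V2.size else pinv @ G_v
    return c_xy, G_xy
```

In practice, $M$ can be tightened as discussed in Remark 1 before intersecting with $\tilde{\mathscr{R}}_{k}$.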
###### Remark 1.
In our case, $\mathscr{Z}_{x|y^{i}(k)}$ will eventually be intersected with
$\tilde{\mathscr{R}}_{k}=\langle\tilde{c}_{k},\tilde{G}_{k}\rangle$. It is
therefore sufficient to set
$M\geq\texttt{radius}(\tilde{\mathscr{R}}_{k})+\|V_{2}^{\top}\tilde{c}_{k}\|_{2}$
instead of the more conservative $M\geq\lVert x\rVert_{2}$, where
$\texttt{radius}(\tilde{\mathscr{R}}_{k})$ returns the radius of a minimal
hyper-sphere containing $\tilde{\mathscr{R}}_{k}$ [20].
Having determined the sets $\mathscr{Z}_{x|y^{i}(k)}$ for all
$i\in\\{1,\dots,q\\}$, we can compute the measurement updated set
$\hat{\mathscr{R}}_{k}$ given the predicted set $\tilde{\mathscr{R}}_{k}$ and
each measurement set $\mathscr{Z}_{x|y^{i}(k)}$ as
$\displaystyle\hat{\mathscr{R}}_{k}=\tilde{\mathscr{R}}_{k}\cap_{i=1}^{q}\mathscr{Z}_{x|y^{i}(k)},$
(17)
which can be performed using the standard intersection operations presented in
[20, 11].
#### III-B2 Approach 2 - Implicit Intersection
Contrary to Approach 1, here, we do not explicitly determine the sets
$\mathscr{Z}_{x|y^{i}(k)}$. Instead, $\hat{\mathscr{R}}_{k}$ is determined
directly from the set $\tilde{\mathscr{R}}_{k}$, the measurements $y^{i}(k)$
and some weights $\lambda_{k}^{i}$ for $i\in\\{1,\dots,q\\}$. We then optimize
over the weights to minimize the volume of $\hat{\mathscr{R}}_{k}$.
###### Proposition 2.
The intersection of
$\tilde{\mathscr{R}}_{k}=\langle\tilde{c}_{k},\tilde{G}_{k}\rangle$ and the
$q$ regions for $x$ corresponding to $y^{i}(k)$ with noise
$v^{i}(k)\in\mathscr{Z}_{v,i}=\langle c_{v,i},G_{v,i}\rangle$ satisfying (3b)
can be over-approximated by the zonotope
$\hat{\mathscr{R}}_{k}=\langle\hat{c}_{k},\hat{G}_{k}\rangle$ with
$\displaystyle\hat{c}_{k}$
$\displaystyle=\tilde{c}_{k}+\sum\limits_{i=1}^{q}\lambda_{k}^{i}\Big{(}y^{i}(k)-C^{i}\tilde{c}_{k}-c_{v,i}\Big{)},$
(18) $\displaystyle\hat{G}_{k}$
$\displaystyle=\begin{bmatrix}(I-\sum\limits_{i=1}^{q}\lambda_{k}^{i}C^{i})\tilde{G}_{k}&-\lambda_{k}^{1}G_{v,1}&\dots&-\lambda_{k}^{q}G_{v,q}\end{bmatrix},$
(19)
where $\lambda_{k}^{i}\in{\mathbb{R}}^{n\times p_{i}}$ for
$i\in\\{1,\dots,q\\}$ are weights.
###### Proof.
The proof is based on [21, Prop.1] but with zonotopes as measurements instead
of strips. Let
$x\in\tilde{\mathscr{R}}_{k}\cap\mathscr{Z}_{x|y^{1}}\cap\dots\cap\mathscr{Z}_{x|y^{q}}$.
Then there exists a $z$ such that $x=\tilde{c}_{k}+\tilde{G}_{k}z$. Adding and
subtracting $\sum_{i=1}^{q}\lambda_{k}^{i}C^{i}\tilde{G}_{k}z$ yields
$x=\tilde{c}_{k}+\sum\limits_{i=1}^{q}\lambda_{k}^{i}C^{i}\tilde{G}_{k}z+(I-\sum\limits_{i=1}^{q}\lambda_{k}^{i}C^{i})\tilde{G}_{k}z.$
(20)
From (3b), we obtain $C^{i}x=y^{i}-c_{v,i}-G_{v,i}d^{i}.$ Using
$x=\tilde{c}_{k}+\tilde{G}_{k}z$ yields
$C^{i}\tilde{G}_{k}z=y^{i}(k)-C^{i}\tilde{c}_{k}-c_{v,i}-G_{v,i}d^{i}$, which
we insert into (20) to obtain
$\displaystyle x$
$\displaystyle=\tilde{c}_{k}+\sum\limits_{i=1}^{q}\lambda_{k}^{i}\Big{(}y^{i}(k)-C^{i}\tilde{c}_{k}-c_{v,i}-G_{v,i}d^{i}\Big{)}$
$\displaystyle\;\;\;+\Big{(}I-\sum\limits_{i=1}^{q}\lambda_{k}^{i}C^{i}\Big{)}\tilde{G}_{k}z,$
$\displaystyle=\underbrace{\begin{bmatrix}(I-\sum\limits_{i=1}^{q}\lambda_{k}^{i}C^{i})\tilde{G}_{k}&-\lambda_{k}^{1}G_{v,1}&\dots&-\lambda_{k}^{q}G_{v,q}\end{bmatrix}}_{\hat{G}_{k}}\underbrace{\begin{bmatrix}z\\\
d^{1}\\\ \vdots\\\ d^{q}\end{bmatrix}}_{z^{b}}$
$\displaystyle\;\;\;+\underbrace{\tilde{c}_{k}+\sum\limits_{i=1}^{q}\lambda_{k}^{i}(y^{i}(k)-C^{i}\tilde{c}_{k}-c_{v,i})}_{\hat{c}_{k}}=\hat{G}_{k}z^{b}+\hat{c}_{k}.$
Note that $\lVert z^{b}\rVert_{\infty}\leq 1$ since $d^{i}\in[-1,1]$ and
$z\in[-1,1]$ elementwise. Thus $\hat{\mathscr{R}}_{k}$ adheres to Definition 1
with center $\hat{c}_{k}$ and generator matrix $\hat{G}_{k}$. ∎
As in [11], we find the optimal weights
$\lambda_{k}^{i}\in{\mathbb{R}}^{n\times p_{i}}$ from
$\displaystyle\bar{\lambda}^{*}_{k}=\textrm{arg}\min_{\bar{\lambda}_{k}}\lVert\hat{G}_{k}\rVert^{2}_{F},$
(21)
where $\bar{\lambda}_{k}=[\lambda_{k}^{1}\dots\lambda_{k}^{q}]$.
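For a single sensor, (21) is an unconstrained quadratic in $\lambda_{k}^{1}$, and setting the gradient of $\lVert\hat{G}_{k}\rVert_{F}^{2}$ to zero gives a Kalman-gain-like closed form $\lambda^{*}=\tilde{G}_{k}\tilde{G}_{k}^{\top}C^{\top}(C\tilde{G}_{k}\tilde{G}_{k}^{\top}C^{\top}+G_{v}G_{v}^{\top})^{-1}$. A sketch of optZon() under this observation (our derivation; (21) can equally be solved numerically):

```python
import numpy as np

def implicit_update(c_t, G_t, y, C, c_v, G_v):
    """Measurement update (18)-(19) for one sensor, with the weight that
    minimizes ||G_hat||_F^2 obtained from the stationarity condition
    d/dlambda ||G_hat||_F^2 = 0 (a Kalman-gain-like expression)."""
    S = C @ G_t @ G_t.T @ C.T + G_v @ G_v.T
    lam = G_t @ G_t.T @ C.T @ np.linalg.inv(S)   # argmin of (21), single sensor
    c_hat = c_t + lam @ (y - C @ c_t - c_v)
    G_hat = np.hstack([(np.eye(len(c_t)) - lam @ C) @ G_t, -lam @ G_v])
    return c_hat, G_hat, lam
```

The objective is convex in $\lambda$, so the stationary point is the global minimizer; perturbing $\lambda$ can only increase $\lVert\hat{G}_{k}\rVert_{F}$.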
The online estimation phase is illustrated in the block diagram of Fig. 1. The
detailed estimation phase is presented in Algorithm 1. The functions
measZon() and optZon() implement Propositions 1 and 2, respectively. The function
reduce$(\tilde{\mathscr{R}}_{k+1})$ reduces the order of
$\tilde{\mathscr{R}}_{k+1}$ using the method proposed in [22], which ensures
the number of generators in $\tilde{\mathscr{R}}_{k+1}$ remains relatively
low, avoiding potential tractability issues after multiple iterations.
${\hat{\mathscr{R}}}_{0}=\mathscr{X}_{0}$
$k=1$
while _True_ do
$\tilde{\mathscr{R}}_{k}=f(\hat{\mathscr{R}}_{k-1},\langle u(k-1),0\rangle)$
using (15)
if _Approach 1_ then
foreach _$i\in\\{1,\dots,q\\}$_ do
$\mathscr{Z}_{x|y^{i}(k)}=\textit{measZon}\big{(}y^{i}(k),\mathscr{Z}_{v,i},C^{i}\big{)}$
using (16)
end foreach
$\hat{\mathscr{R}}_{k}=\tilde{\mathscr{R}}_{k}\bigcap_{i=1}^{q}\mathscr{Z}_{x|y^{i}(k)}$
end if
if _Approach 2_ then
$\langle\hat{c}_{k},\hat{G}_{k}\rangle=\textit{optZon}(\tilde{\mathscr{R}}_{k},y(k),C,\mathscr{Z}_{v})$
$\hat{G}_{k}^{*},\;\bar{\lambda}^{*}\leftarrow$ Solve (21)
$\hat{\mathscr{R}}_{k}=\langle\hat{c}_{k},\hat{G}_{k}^{*}\rangle$
end if
$\hat{\mathscr{R}}_{k}=\textit{reduce}(\hat{\mathscr{R}}_{k})$ using [22]
$k\leftarrow k+1$
end while
Algorithm 1 Online Estimation Phase
### III-C Online Estimation Phase using Constrained Zonotopes
When intersecting zonotopes, the result is an over-approximation of the true
intersection. However, it is possible to determine the exact intersection of
constrained zonotopes.
###### Definition 4.
(Constrained zonotope [23]) An $n$-dimensional constrained zonotope is
$\mathscr{C}=\left\\{x\in\mathbb{R}^{n}\hskip 2.84544pt\middle|\hskip
2.84544ptx=c_{\mathscr{C}}+G_{\mathscr{C}}\beta,\
A_{\mathscr{C}}\beta=b_{\mathscr{C}},\,\lVert\beta\rVert_{\infty}\leq
1\right\\},$ (22)
where $c_{\mathscr{C}}\in{\mathbb{R}}^{n}$ is the center, $G_{\mathscr{C}}$
$\in$ ${\mathbb{R}}^{n\times n_{g}}$ the generator matrix and
$A_{\mathscr{C}}\in$ ${\mathbb{R}}^{n_{c}\times n_{g}}$ and
$b_{\mathscr{C}}\in{\mathbb{R}}^{n_{c}}$ the constraints. In short, we write
$\mathscr{C}=\langle
c_{\mathscr{C}},G_{\mathscr{C}},A_{\mathscr{C}},b_{\mathscr{C}}\rangle$.
When using constrained zonotopes, we replace the time and measurement updated
sets $\tilde{\mathscr{R}}_{k}$ and $\hat{\mathscr{R}}_{k}$ by the constrained
zonotopes $\tilde{\mathscr{C}}_{k}$ and $\hat{\mathscr{C}}_{k}$, respectively.
#### III-C1 Approach 1 - Reverse-Mapping
This approach works directly with constrained zonotopes. The sets
$\mathscr{Z}_{x|y^{i}(k)}$ of Proposition 1 are constrained zonotopes with no
$A_{\mathscr{C}},b_{\mathscr{C}}$ constraints. The intersection in (17)
becomes
${\hat{\mathscr{C}}_{k}=\tilde{\mathscr{C}}_{k}\cap_{i=1}^{q}\mathscr{Z}_{x|y^{i}(k)}}$
which can be performed as described in [23].
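The generalized intersection of two constrained zonotopes from [23] keeps the first set's center and generators, stacks both constraint systems, and appends the coupling constraint $G_{1}\beta_{1}-G_{2}\beta_{2}=c_{2}-c_{1}$; membership can then be checked with a feasibility LP. A sketch (helper names are ours):

```python
import numpy as np
from scipy.optimize import linprog

def czono_intersect(c1, G1, A1, b1, c2, G2, A2, b2):
    """Exact intersection of two constrained zonotopes (cf. [23])."""
    n1, n2 = G1.shape[1], G2.shape[1]
    c = c1
    G = np.hstack([G1, np.zeros((G1.shape[0], n2))])
    A = np.vstack([
        np.hstack([A1, np.zeros((A1.shape[0], n2))]),
        np.hstack([np.zeros((A2.shape[0], n1)), A2]),
        np.hstack([G1, -G2]),          # couples beta1, beta2: same point x
    ])
    b = np.concatenate([b1, b2, c2 - c1])
    return c, G, A, b

def czono_contains(c, G, A, b, x):
    """Feasibility LP: is there beta, ||beta||_inf <= 1, with
    G beta = x - c and A beta = b?"""
    ng = G.shape[1]
    Aeq = np.vstack([G, A])
    beq = np.concatenate([x - c, b])
    res = linprog(np.zeros(ng), A_eq=Aeq, b_eq=beq,
                  bounds=[(-1, 1)] * ng, method="highs")
    return res.status == 0
```

Unlike the zonotope intersection in (17), this representation is exact, at the cost of a growing constraint system that must eventually be reduced.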
#### III-C2 Approach 2 - Implicit Intersection
We adapt Proposition 2 to use constrained zonotopes.
###### Proposition 3.
The intersection of
$\tilde{\mathscr{C}}_{k}=\langle\tilde{c}_{k},\tilde{G}_{k},\tilde{A}_{k},\tilde{b}_{k}\rangle$
and $q$ regions for $x$ corresponding to $y^{i}(k)$ as in (3b) can be
described by the constrained zonotope
$\hat{\mathscr{C}}_{k}=\langle\hat{c}_{k},\hat{G}_{k},\hat{A}_{k},\hat{b}_{k}\rangle$
with weights $\lambda_{k}^{i}\in\mathbb{R}^{n\times p_{i}}$ for
$i\in\\{1,\dots,q\\}$ where
$\displaystyle\hat{c}_{k}$
$\displaystyle=\tilde{c}_{k}+\sum\limits_{i=1}^{q}\lambda_{k}^{i}\big{(}y^{i}(k)-C^{i}\tilde{c}_{k}-c_{v,i}\big{)},$
$\displaystyle\hat{G}_{k}$
$\displaystyle=\begin{bmatrix}(I-\sum\limits_{i=1}^{q}\lambda_{k}^{i}C^{i})\tilde{G}_{k}&-\lambda_{k}^{1}G_{v,1}&\dots&-\lambda_{k}^{q}G_{v,q}\end{bmatrix},$
(23) $\displaystyle\hat{A}_{k}$
$\displaystyle=\begin{bmatrix}\tilde{A}_{k}&0&\dots&0\\\
C^{1}\tilde{G}_{k}&G_{v,1}&\dots&0\\\
\vdots&&\ddots&\\\
C^{q}\tilde{G}_{k}&0&\dots&G_{v,q}\end{bmatrix},$
(24) $\displaystyle\hat{b}_{k}$ $\displaystyle=\begin{bmatrix}\tilde{b}_{k}\\\
y^{1}(k)-C^{1}\tilde{c}_{k}-c_{v,1}\\\ \vdots\\\
y^{q}(k)-C^{q}\tilde{c}_{k}-c_{v,q}\end{bmatrix}.$ (25)
###### Proof.
We follow a similar approach to [24, Thm. 6.3] and [23], but extend the proof
by defining measurement sets as zonotopes instead of strips.
$\mathscr{Z}_{x|y^{i}}$ refers to $\mathscr{Z}_{x|y^{i}(k)}$ unless specified
otherwise. Let
$x_{k}\in\tilde{\mathscr{C}}_{k}\cap\mathscr{Z}_{x|y^{1}}\cap\dots\cap\mathscr{Z}_{x|y^{q}}$,
then there exists a $z_{k}\in\left[-1,1\right]$ such that
$\displaystyle x_{k}=\tilde{c}_{k}+\tilde{G}_{k}z_{k},\hskip
14.22636pt\tilde{A}_{k}z_{k}=\tilde{b}_{k}.$ (26)
Using (3b) and the measurement noise $\langle c_{v,i},G_{v,i}\rangle$, we
write
$\displaystyle C^{i}x=y^{i}(k)-c_{v,i}-G_{v,i}d^{i},$ (27)
where $d^{i}\in[-1,1]$. Inserting (26) into (27) yields
$\displaystyle
C^{i}\tilde{G}_{k}z_{k}=y^{i}(k)-C^{i}\tilde{c}_{k}-c_{v,i}-G_{v,i}d^{i},$
(28)
which, combined with (26), yields
$\displaystyle\underbrace{\begin{bmatrix}\tilde{A}_{k}&0&\dots&0\\\
C^{1}\tilde{G}_{k}&G_{v,1}&\dots&0\\\ \vdots&&\ddots&\\\
C^{q}\tilde{G}_{k}&0&\dots&G_{v,q}\end{bmatrix}}_{\hat{A}_{k}}$
$\displaystyle\underbrace{\begin{bmatrix}z_{k}\\\ d^{1}\\\ \vdots\\\
d^{q}\end{bmatrix}}_{z_{b}}=\underbrace{\begin{bmatrix}\tilde{b}_{k}\\\
y^{1}(k)-C^{1}\tilde{c}_{k}-c_{v,1}\\\ \vdots\\\
y^{q}(k)-C^{q}\tilde{c}_{k}-c_{v,q}\end{bmatrix}}_{\hat{b}_{k}}.$ (29)
Adding and subtracting $\sum_{i=1}^{q}\lambda_{k}^{i}C^{i}\tilde{G}_{k}z_{k}$ to
(26) yields
$x_{k}=\tilde{c}_{k}+\sum_{i=1}^{q}\lambda^{i}_{k}C^{i}\tilde{G}_{k}z_{k}+(I-\sum_{i=1}^{q}\lambda^{i}_{k}C^{i})\tilde{G}_{k}z_{k}.$
(30)
If we now insert (28) into (30), we obtain
$\displaystyle x_{k}$
$\displaystyle=\underbrace{\begin{bmatrix}(I-\sum\limits_{i=1}^{q}\lambda_{k}^{i}C^{i})\tilde{G}_{k}&-\lambda_{k}^{1}G_{v,1}&\dots&-\lambda_{k}^{q}G_{v,q}\end{bmatrix}}_{\hat{G}_{k}}z_{b}$
$\displaystyle\;\;\;\;+\underbrace{\tilde{c}_{k}+\sum\limits_{i=1}^{q}\lambda_{k}^{i}\big{(}y^{i}(k)-C^{i}\tilde{c}_{k}-c_{v,i}\big{)}}_{\hat{c}_{k}}=\hat{G}_{k}z_{b}+\hat{c}_{k}.$
Hence, $x(k)\in\hat{\mathscr{C}}_{k}$ and
$(\tilde{\mathscr{C}}_{k}\cap\mathscr{Z}_{x|y^{1}}\cap\dots\cap\mathscr{Z}_{x|y^{q}})\subseteq\hat{\mathscr{C}}_{k}$.
Conversely, let $x(k)\in\hat{\mathscr{C}}_{k}$. Then, there exists a $z_{b}$
such that (22) in Definition 4 is satisfied. Partition $z_{b}$ into
$z_{b}=[z_{k},d^{1},\dots,d^{q}]^{T}$. The first block row of
$\hat{A}_{k}z_{b}=\hat{b}_{k}$ gives $\tilde{A}_{k}z_{k}=\tilde{b}_{k}$ with
$\|z_{k}\|_{\infty}\leq 1$, so $x(k)\in\tilde{\mathscr{C}}_{k}$. The remaining
block rows recover (28) and hence, via (26), the measurement constraints in
(27), so that $x(k)\in\mathscr{Z}_{x|y^{i}(k)}$ for all
$i\in\\{1,\dots,q\\}$. Thus,
$x(k)\in(\tilde{\mathscr{C}}_{k}\cap\mathscr{Z}_{x|y^{1}}\cap\dots\cap\mathscr{Z}_{x|y^{q}})$
and
$\hat{\mathscr{C}}_{k}\subseteq(\tilde{\mathscr{C}}_{k}\cap\mathscr{Z}_{x|y^{1}}\cap\dots\cap\mathscr{Z}_{x|y^{q}})$,
which concludes the proof. ∎
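As an illustration, the block formulas (23)–(25) translate directly into code. The NumPy sketch below uses function and variable names of our own choosing; it is an illustrative assembly of the proposition, not the authors' implementation.

```python
import numpy as np

def cz_measurement_update(c_t, G_t, A_t, b_t, measurements):
    """Assemble the constrained zonotope of Proposition 3 from (23)-(25).

    `measurements` is a list of tuples (y, C, c_v, G_v, lam):
    measurement y^i(k), output matrix C^i, noise center c_{v,i},
    noise generators G_{v,i}, and weight lambda_k^i."""
    n, ng = G_t.shape
    nc = A_t.shape[0]
    c_hat = c_t.copy()
    left = np.eye(n)                  # accumulates I - sum_i lam^i C^i
    extra_cols = []
    for y, C, c_v, G_v, lam in measurements:
        c_hat = c_hat + lam @ (y - C @ c_t - c_v)
        left = left - lam @ C
        extra_cols.append(-lam @ G_v)
    G_hat = np.hstack([left @ G_t] + extra_cols)

    p = [m[1].shape[0] for m in measurements]    # output dimensions p_i
    w = [m[3].shape[1] for m in measurements]    # noise-generator counts
    A_hat = np.zeros((nc + sum(p), ng + sum(w)))
    b_hat = np.zeros(nc + sum(p))
    A_hat[:nc, :ng], b_hat[:nc] = A_t, b_t
    r, c = nc, ng
    for (y, C, c_v, G_v, lam), pi, wi in zip(measurements, p, w):
        A_hat[r:r + pi, :ng] = C @ G_t           # block rows of (24)
        A_hat[r:r + pi, c:c + wi] = G_v
        b_hat[r:r + pi] = y - C @ c_t - c_v      # rows of (25)
        r, c = r + pi, c + wi
    return c_hat, G_hat, A_hat, b_hat
```

A quick consistency check mirrors the forward direction of the proof: for any admissible $z_{k}$, together with the noise factors $d^{i}$ solving (28), the point $\tilde{c}_{k}+\tilde{G}_{k}z_{k}$ is reproduced as $\hat{c}_{k}+\hat{G}_{k}z_{b}$ with $\hat{A}_{k}z_{b}=\hat{b}_{k}$.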
## IV Evaluation
We evaluate our method by considering an input-driven variant of the rotating
target described in [11]. We set
$\displaystyle A_{\text{tr}}=\begin{bmatrix}0.9455&-0.2426\\\
0.2486&0.9455\end{bmatrix},\hskip
14.22636ptB_{\text{tr}}=\begin{bmatrix}0.1\\\ 0\end{bmatrix}$ (31)
with $q=3$ measurements parameterized as follows
$\displaystyle
C^{1}=\begin{bmatrix}1&0.4\end{bmatrix},C^{2}=\begin{bmatrix}0.9&-1.2\end{bmatrix},C^{3}=\begin{bmatrix}-0.8&0.2\\\
0&0.7\end{bmatrix},$ $\displaystyle\mathscr{Z}_{v,1}=\langle
0,1\rangle,\mathscr{Z}_{v,2}=\langle
0,1\rangle,\mathscr{Z}_{v,3}=\langle[0\;\;0]^{\top},I_{2}\rangle.$
The noise signals are characterized by the zonotopes
${\mathscr{Z}_{\gamma}=\langle[0\;\;0]^{\top},0.02I_{2}\rangle}$ and
$\mathscr{Z}_{w}=\langle[0\;\;0]^{\top},0.02I_{2}\rangle$. We run the offline
learning phase with $T=500$ and inputs sampled uniformly from the set
$\mathscr{U}=\langle 0,10\rangle$. The noise signals $v^{i}(k)$, $w(k)$ and
$\gamma(k)$ are sampled uniformly from their respective zonotope sets using
the command randPoint$(\mathscr{Z})$ as described in [20].
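For reference, the sampling step can be mimicked by drawing the generator coefficients uniformly from $[-1,1]$; this is a simple stand-in of our own (the exact distribution used by randPoint in [20] may differ), but any such sample is guaranteed to lie inside the zonotope.

```python
import numpy as np

def rand_point(c, G, rng=None):
    """Draw a point from the zonotope <c, G>: the generator coefficients
    are sampled uniformly from [-1, 1], so the returned point is always
    contained in the set (the distribution over the set is, however,
    not uniform)."""
    rng = np.random.default_rng() if rng is None else rng
    beta = rng.uniform(-1.0, 1.0, size=G.shape[1])
    return c + G @ beta
```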
After learning $f(\cdot)$, we run the online estimation phase. The initial
state set is $\mathscr{X}_{0}=\langle[0\;\;0]^{\top},15I_{2}\rangle$ and the
true initial state is $x(0)=\begin{bmatrix}-10&10\end{bmatrix}^{\top}$. Once
again, we sample the inputs uniformly from $\mathscr{U}$. We evaluate both the
zonotope and constrained zonotope methods, each time using either of the two
proposed measurement update approaches. Fig. 2(a) shows the bounds of
$\hat{\mathscr{R}}_{k}$ in the $x_{1}$ state dimension for both approaches.
Fig. 2(b) shows the equivalent results when our method uses constrained
zonotopes. As expected, $x(k)$ is always contained within
$\hat{\mathscr{R}}_{k}$ (or $\hat{\mathscr{C}}_{k}$) at each time step.
Although both measurement update approaches yield similar set sizes on
average, the set evolution of Approach 2 is comparatively smoother.
Furthermore, we compare our results with N4SID subspace identification [25]
combined with a Kalman filter (KF). In Fig. 3, we show the sets
$\hat{\mathscr{R}}_{k}$ and $\hat{\mathscr{C}}_{k}$ for either measurement
update approach, using zonotopes or constrained zonotopes. We also show the
ellipse corresponding to the $3\sigma$ uncertainty bound of the KF estimate,
indicating that our estimator provides state sets comparable in size to that
of the KF. We should mention that KF bounds come without any guarantees.
(a) Using zonotopes showing bounds of $\hat{\mathscr{R}}_{k}$ in $x_{1}$
(b) Using constrained zonotopes showing bounds of $\hat{\mathscr{C}}_{k}$ in
$x_{1}$
Figure 2: Bounds of the set $\hat{\mathscr{R}}_{k}$ in (a), and
$\hat{\mathscr{C}}_{k}$ in (b), projected onto the first state dimension
$x_{1}$ of $x(k)$ using measurement update approaches 1 and 2. Figure 3: Sets
$\hat{\mathscr{R}}_{k}$ using measurement update approaches 1 and 2, and the
equivalent sets $\hat{\mathscr{C}}_{k}$ using constrained zonotopes (CZ),
compared to the KF’s $3\sigma$ confidence bounds.
Referring to both Fig. 2 and Fig. 3, it is clear that the constrained
zonotopes yield smaller state sets at each time step. However, this comes at
the cost of increased computational load. Running our simulations on a Dell
laptop with an 8-core i5-8365U processor at 1.6 GHz, the average computation
time per iteration for Approach 1 increased from $0.656$ s to $1.267$ s
when using constrained zonotopes; for Approach 2, the corresponding times were
$0.221$ s and $0.971$ s, respectively. For all our approaches, we observed
that reducing the order of the sets to $5$, which reduces the number of
generators in $\hat{\mathscr{R}}$ (or $\hat{\mathscr{C}}$), was critical to
keep the computational load low.
## V Conclusions and Recommendations
In this paper, we introduced a novel zonotope-based method to perform set-
based state estimation with set containment guarantees using a data-driven set
propagation function. We presented an approach to compute the set of models
that is consistent with the given input–output data and noise bounds.
Then, we presented two approaches to perform the measurement update, which
merges the time-updated state set with the observed measurements. We extended
our method to use constrained zonotopes, which yielded smaller state sets at
the cost of increased computational load. Our results show state sets
comparable in size to the $3\sigma$ uncertainty bounds obtained when running
N4SID subspace identification and a Kalman filter, but with the added feature
of set-containment guarantees and without requiring any knowledge of the
statistical properties of the noise.
Future work includes evaluating our proposed estimator on real-world examples
as well as gaining more insight into the limitations of our method when
applied to more complex dynamical systems. Additionally, improving the
zonotope intersection operation to lessen the degree of over-approximation of
the resultant state set would yield tighter state set estimates at each time
step.
## Acknowledgement
This work was supported by the Swedish Research Council, the Knut and Alice
Wallenberg Foundation, the Democritus project on Decision-making in Critical
Societal Infrastructures by Digital Futures, and the European Union's Horizon
2020 Research and Innovation program under the CONCORDIA cyber security
project (GA No. 830927).
## References
* [1] D. Bertsekas and I. Rhodes, “Recursive state estimation for a set-membership description of uncertainty,” IEEE Transactions on Automatic Control, vol. 16, no. 2, pp. 117–128, 1971.
* [2] C. Ierardi, L. Orihuela, and I. Jurado, “A distributed set-membership estimator for linear systems with reduced computational requirements,” Automatica, vol. 132, p. 109802, 2021.
* [3] C. Ierardi, Distributed estimation techniques for cyber-physical systems. PhD thesis, Departamento de Ingeniería, Universidad Loyola, 2021.
* [4] M. Althoff, Reachability analysis and its application to the safety assessment of autonomous cars. PhD thesis, Technische Universität München, 2010.
* [5] F. Mazenc and O. Bernard, “Interval observers for linear time-invariant systems with disturbances,” Automatica, vol. 47, no. 1, pp. 140–147, 2011.
* [6] G. Belforte, B. Bona, and V. Cerone, “Parameter estimation algorithms for a set-membership description of uncertainty,” Automatica, vol. 26, no. 5, pp. 887–898, 1990.
* [7] L. Ma, Z. Wang, H.-K. Lam, and N. Kyriakoulis, “Distributed event-based set-membership filtering for a class of nonlinear systems with sensor saturations over sensor networks,” IEEE Transactions on Cybernetics, vol. 47, no. 11, pp. 3772–3783, 2016.
* [8] L. Orihuela, S. Roshany-Yamchi, R. A. García, and P. Millán, “Distributed set-membership observers for interconnected multi-rate systems,” Automatica, vol. 85, pp. 221–226, 2017.
* [9] C. Durieu, E. Walter, and B. Polyak, “Multi-input multi-output ellipsoidal state bounding,” Journal of Optimization Theory and Applications, vol. 111, no. 2, pp. 273–303, 2001.
* [10] J. Blesa, V. Puig, and J. Saludes, “Robust fault detection using polytope-based set-membership consistency test,” IET Control Theory & Applications, vol. 6, no. 12, pp. 1767–1777, 2012.
* [11] A. Alanwar, J. J. Rath, H. Said, and M. Althoff, “Distributed set-based observers using diffusion strategy,” arXiv:2003.10347, 2020.
* [12] J. C. Willems, P. Rapisarda, I. Markovsky, and B. L. De Moor, “A note on persistency of excitation,” Systems & Control Letters, vol. 54, no. 4, pp. 325–329, 2005.
* [13] D. Alpago, F. Dörfler, and J. Lygeros, “An Extended Kalman Filter for Data-Enabled Predictive Control,” IEEE Control Systems Letters, vol. 4, no. 4, pp. 994–999, 2020.
* [14] C. De Persis and P. Tesi, “Formulas for data-driven control: Stabilization, optimality, and robustness,” IEEE Transactions on Automatic Control, vol. 65, no. 3, pp. 909–924, 2019.
* [15] J. Berberich, J. Köhler, M. A. Müller, and F. Allgöwer, “Data-driven model predictive control with stability and robustness guarantees,” IEEE Transactions on Automatic Control, 2020.
* [16] A. Alanwar, A. Koch, F. Allgöwer, and K. H. Johansson, “Data-driven reachability analysis using matrix zonotopes,” in Proceedings of the 3rd Conference on Learning for Dynamics and Control, vol. 144, pp. 163–175, 2021.
* [17] A. Alanwar, A. Koch, F. Allgöwer, and K. H. Johansson, “Data-driven reachability analysis from noisy data,” arXiv preprint arXiv:2105.07229, 2021.
* [18] W. Kühn, “Rigorously computed orbits of dynamical systems without the wrapping effect,” Computing, vol. 61, no. 1, pp. 47–67, 1998.
* [19] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, and K. Zimmermann, Linear optimization problems with inexact data. Springer Science & Business Media, 2006.
* [20] M. Althoff, “An introduction to CORA 2015,” in Proceedings of the Workshop on Applied Verification for Continuous and Hybrid Systems, 2015.
* [21] V. T. H. Le, C. Stoica, T. Alamo, E. F. Camacho, and D. Dumur, “Zonotope-based set-membership estimation for multi-output uncertain systems,” in IEEE International Symposium on Intelligent Control, pp. 212–217, 2013.
* [22] A. Girard, “Reachability of uncertain linear systems using zonotopes,” in Hybrid Systems: Computation and Control, pp. 291–305, 2005.
* [23] J. K. Scott, D. M. Raimondo, G. R. Marseglia, and R. D. Braatz, “Constrained zonotopes: A new tool for set-based estimation and fault detection,” Automatica, vol. 69, pp. 126–136, 2016.
* [24] A. Alanwar, V. Gassmann, X. He, H. Said, H. Sandberg, K. H. Johansson, and M. Althoff, “Privacy preserving set-based estimation using partially homomorphic encryption,” arXiv:2010.11097, 2020.
* [25] P. Van Overschee and B. De Moor, “N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems,” Automatica, vol. 30, no. 1, pp. 75 – 93, 1994.
# Hydrodynamical study of Terahertz emission in magnetized graphene field-
effect transistors
Pedro Cosme<EMAIL_ADDRESS>Instituto Superior
Técnico, 1049-001 Lisboa, Portugal Instituto de Plasmas e Fusão Nuclear,
1049-001 Lisboa, Portugal Hugo Terças<EMAIL_ADDRESS>Instituto Superior Técnico, 1049-001 Lisboa, Portugal Instituto de Plasmas e
Fusão Nuclear, 1049-001 Lisboa, Portugal
###### Abstract
Several hydrodynamic descriptions of charge transport in graphene have been
presented in recent years. We discuss a general hydrodynamic model governing
the dynamics of a two-dimensional electron gas in a magnetized field-effect
transistor in the slow drift regime. The Dyakonov–Shur instability is
investigated, including the effect of weak magnetic fields (i.e. away from
Landau levels). We show that the gap on the dispersion relation prevents the
instability from reaching the lower frequencies, thus imposing a limit on the
Mach number of the electronic flow. Furthermore, we discuss that the presence
of the external magnetic field decreases the growth rate of the instability,
as well as the saturation amplitude. The numerical results from our
simulations and the presented higher order dynamic mode decomposition support
such reasoning.
Graphene hydrodynamics; Dyakonov–Shur instability; Magnetic field; Graphene
field-effect transistor
## I Introduction
In recent years, the scientific community has witnessed the emergence of
integrated-circuit technology with two-dimensional (2D) materials. In this
context, graphene is undoubtedly one of the most prominent materials. Among the
many applications of graphene, the possibility of resorting to plasmonic
instabilities to trigger the emission, or conversely, the detection, of THz
radiation has been an active field of study Hosseininejad _et al._ (2018);
Mendl, Polini, and Lucas (2021); Suessmeier _et al._ (2017); Otsuji, Popov,
and Ryzhii (2014). The explored mechanisms for the creation and control of
plasmons in graphene commonly rely on graphene field-effect transistors
(GFETs), which allow control of the Fermi level while being easily combined
in integrated circuitry.
One of the defining characteristics of graphene is its high electron mobility,
as a consequence of the weak scattering between electrons and phonons,
defects, or impurities, which leads to large electron–impurity mean free path
$\ell_{\text{imp}}$. Indeed, ultra-clean samples of graphene encapsulated by
hexagonal boron nitride (hBN) Son _et al._ (2018) or hBN–graphene–WSe2
structures Banszerus _et al._ (2019) exhibit a mobility $\mu>3.5\times
10^{5}\,\mathrm{cm^{2}V^{-1}s^{-1}}$. Yet, the electron–electron scattering is
significant, resulting in a short mean free path $\ell_{ee}$ at room
temperature. Thereby, it is possible to design a system of size $L$ under the
condition $\ell_{ee}\ll L\ll\ell_{\text{imp}}$. In such a regime, the
collective behavior of carriers can be accurately described hydrodynamically
Chaves _et al._ (2017); Narozhny _et al._ (2017); Svintsov _et al._ (2013);
Mendl, Polini, and Lucas (2021); Lucas and Fong (2018); Müller, Schmalian, and
Fritz (2009), with some recent experimental results validating this approach
Sulpizio _et al._ (2019); Mayzel, Steinberg, and Varshney (2019); Berdyugin
_et al._ (2019).
Given the massless nature of graphene electrons, a relativistic description is
required for velocities near the Fermi velocity $v_{F}$. However, for the
usual operation conditions of GFETs, the velocity of the carriers is expected
to saturate far below $v_{F}$ Wilmart _et al._ (2020); Yamoah _et al._
(2017); Dorgan, Bae, and Pop (2010). As such, we here model graphene plasmons
making use of a hydrodynamic set of equations valid in the regime $v\ll
v_{F}$. Moreover, we operate at room temperature, such that the Fermi level is
large enough to prevent interband transitions, $E_{F}\gg k_{B}T$.
The Dyakonov–Shur (DS) instability has been extensively studied for high-
mobility semi-conductors as a mechanism for emission/detection of THz
radiation Dyakonov and Shur (1993); Crowne (2000) and has recently been
considered in graphene devices Cosme and Terças (2020); Lucas and Das Sarma
(2018). However, few works have approached the issue under the influence of
magnetic fields Dyakonova _et al._ (2005); Kushwaha and Vasilopoulos (2001);
Zhang and Xue (2013). In this work, we investigate the DS instability taking
place in GFETs in the regime of weak magnetic fields, i.e. away from the
Landau levels. Due to the appearance of a gap, the difference of frequency
between the forward and backward plasmon modes is decreased, leading to an
attenuation of the DS frequency and growth rate. We also show that the
emergence of a transverse (Hall) current in the channel in the nonlinear
regime is responsible for the decrease of the electron saturation amplitude.
## II Hydrodynamic Model for Graphene Electrons
Figure 1: Schematic representation of a graphene channel field-effect
transistor with a top gate (G). The presented setup also shows the
Dyakonov–Shur impedance realization at source (S) and drain (D). The magnetic
field is perpendicular to the channel.
The fact that the electrons in graphene behave as massless Dirac fermions
poses the major difficulty for the development of hydrodynamic models: not
only do carriers have zero mass, but also the effective inertial mass tensor
diverges Ashcroft and Mermin (1976). A naive approach is to define
an effective mass as
$m^{\star}=\frac{\hbar k_{F}}{v_{F}}=\frac{\hbar\sqrt{\pi n}}{v_{F}},$ (1)
where $\hbar k_{F}$ is the Fermi momentum and $n$ is the electron 2D number
density. This definition is extensively used in the literature Chaves _et
al._ (2017); Lucas and Fong (2018); Svintsov _et al._ (2013), and recent
developments based on quantum kinetic theory propose corrections to it
Figueiredo, Bizarro, and Terças (2020). Since the electronic fluid is
compressible, the effective mass is not a conserved quantity, contrary to
customary fluids. For typical conditions in GFETs, the effective mass is
expected in the range
$2.7\,\mathrm{keV/c^{2}}\ll m^{\star}\ll 270\,\mathrm{keV/c^{2}},$ (2)
lying fairly below the free electron mass.
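For concreteness, evaluating Eq. (1) at a representative density places $m^{\star}$ well inside the range (2); the numbers below, including $v_{F}\approx 10^{6}\,\mathrm{m/s}$, are our own illustrative choices.

```python
import numpy as np

# Effective mass m* = hbar*sqrt(pi*n)/v_F of Eq. (1) at a representative
# GFET carrier density; v_F and n are illustrative values.
hbar = 1.054571817e-34        # J s
e = 1.602176634e-19           # C
m_e = 9.1093837015e-31        # kg
v_F = 1.0e6                   # m/s, graphene Fermi velocity
n = 1.0e16                    # carrier density: 1e12 cm^-2 in m^-2

m_star = hbar * np.sqrt(np.pi * n) / v_F        # ~1.9e-32 kg
m_star_keV = m_star / m_e * 510.998950          # ~10.5 keV/c^2
omega_c_per_B = e / m_star                      # ~8.6e12 rad s^-1 T^-1
```

The resulting cyclotron frequency per unit field, $\omega_{c}/B=e/m^{\star}$, is consistent with the value quoted for this density in Sec. II.2.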
Starting from the Boltzmann equation for the distribution function
$f=f(\bm{r},\bm{p},t)$
$\frac{\partial}{\partial
t}f+v_{F}\mathbf{\frac{\bm{p}}{|\bm{p}|}}\cdot\bm{\nabla}_{\bm{r}}f+\mathbf{F}\cdot\bm{\nabla}_{\bm{p}}f=\widehat{\mathcal{C}}[f],$
(3)
one can derive the hydrodynamic model for electronic transport in graphene.
Here, the collision operator can be taken in the Bhatnagar–Gross–Krook
approximation Rieutord (2015); Haas (2011),
$\widehat{\mathcal{C}}[f]=(f_{\text{Equilibrium}}-f)/\tau$. However, since we
are interested in mesoscopic effects with small Knudsen number, $v\tau/L\ll
1$, and time scales much longer than the collision time, we can safely set
$\widehat{\mathcal{C}}[f]\approx 0$. This does not imply the absence of
electron-electron collisions in the electronic fluid, but rather that they
occur fast enough to maintain the local equilibrium.
Integrating the zeroth-order moment of Eq. (3) yields the continuity
equation
$\frac{\partial n}{\partial t}+\bm{\nabla}\cdot\left(n\mathbf{v}\right)=0.$
(4)
Furthermore, the first moment of Eq. (3) leads to
$\frac{\partial\mathbf{v}}{\partial
t}+\frac{(\mathbf{v}\cdot\bm{\nabla})\mathbf{v}}{2}+\frac{1}{nm^{\star}}\bm{\nabla}\cdot\mathbb{P}-\frac{\mathbf{F}}{m^{\star}}=0,$
(5)
where $\mathbb{P}$ is the pressure stress tensor and $\mathbf{F}$ the
resultant external force. As we can see, the variation of the effective mass
introduces a $1/2$ factor to the convective term. Such correction breaks the
Galilean invariance of the system, leading to an unusual expression for the
dispersion relation in the presence of a Doppler shift Cosme and Terças
(2020).
The _hydrostatic_ diagonal part of the pressure tensor,
$\mathbb{P}_{ij}=P\delta_{ij}$, is given by the 2D Fermi-Dirac pressure Landau
_et al._ (1980); Giuliani and Vignale (2005); Chaves _et al._ (2017)
$P=\frac{2(k_{B}T)^{3}}{\pi\hbar^{2}v_{F}^{2}}\,\mathfrak{F}_{2}\left(\frac{E_{F}}{k_{B}T}\right),$
(6)
where $\mathfrak{F}_{2}$ is the complete Fermi–Dirac integral, which at room
temperature, $E_{F}\gg k_{B}T$, gives
$P=\frac{E_{F}^{3}}{3\pi(\hbar
v_{F})^{2}}+\mathcal{O}\left(\frac{k_{B}T}{E_{F}}\right)^{2}\simeq\frac{\hbar
v_{F}}{3\pi}\big{(}\pi n\big{)}^{\frac{3}{2}}.$ (7)
As such, the pressure term in (5) reduces to
$\frac{1}{nm^{\star}}\bm{\nabla}P=\frac{v_{F}^{2}}{2n}\bm{\nabla}n.$ (8)
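A quick numerical check, our own sanity test in dimensionless units with $\hbar=v_{F}=1$, confirms that Eq. (8) follows from Eqs. (1) and (7):

```python
import numpy as np

# Check (1/(n m*)) dP/dn = v_F^2/(2n), with P from Eq. (7) and m* from
# Eq. (1), in units where hbar = v_F = 1.
P = lambda n: np.sqrt(np.pi) * n ** 1.5 / 3.0   # Eq. (7), simplified
m_star = lambda n: np.sqrt(np.pi * n)           # Eq. (1)

n0, h = 2.0, 1e-6
dP_dn = (P(n0 + h) - P(n0 - h)) / (2.0 * h)     # central difference
lhs = dP_dn / (n0 * m_star(n0))
rhs = 1.0 / (2.0 * n0)                          # v_F^2/(2n) with v_F = 1
```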
The off-diagonal elements of the pressure in Eq. (5) describe the viscous
terms of the fluid. The kinematic viscosity near the Dirac point is $\nu\simeq
v_{F}\ell_{ee}/4\sim 2.5\\!\times\\!10^{-3}\,\mathrm{m^{2}s^{-1}}$; however,
at room temperature $T\ll T_{F}$ this value increases to $\nu\sim
0.1\,\mathrm{m^{2}s^{-1}}$ Narozhny and Schütt (2019); Lucas and Fong (2018);
Müller, Schmalian, and Fritz (2009); Torre _et al._ (2015); Levitov and
Falkovich (2016), and the corresponding Reynolds number of the electron fluid
is
$\mathrm{Re}\sim\frac{Lv_{0}}{0.1\mathrm{m^{2}s^{-1}}}.$ (9)
A suitable choice of the system parameters can be made such that ${\rm Re}\gg
1$, rendering the viscous effects negligible. As a matter of fact, our
simulations performed for moderate values of the Reynolds number have not
shown any significant difference from the inviscid case, apart from the
expected suppression of higher frequency content and subsequent smoothing of
the waveforms.
For a magnetized graphene electron gas in the field-effect transistor
configuration, as depicted in Fig. 1, the force term results from the combined
effect of the gate and the cyclotron (Lorentz) force,
$\mathbf{F}=-\bm{\nabla}U_{\rm
gate}-e\,\mathbf{v}\times\mathbf{B},$ (10)
where $U_{\rm gate}$ is the gate voltage,
$U_{\text{gate}}=en\left(\frac{1}{C_{g}}+\frac{1}{C_{q}}\right),$ (11)
with $C_{g}$ and $C_{q}$ denoting the geometric and the quantum capacitances
Zhu _et al._ (2009); Das Sarma _et al._ (2011). For typical carrier
densities $n\gtrsim 10^{12}\,\mathrm{cm}^{-2}$, the quantum capacity
dominates, $C_{q}\gg C_{g}$, and $U_{\rm gate}\simeq
en/C_{g}=end_{0}/\epsilon$.
### II.1 Enhanced diamagnetic drift
In the presence of a magnetic field, the system is subject to the Lorentz
force, and the steady state of Eq. (5) leads to
$\frac{v_{F}^{2}}{2n}\bm{\nabla}n+\frac{s^{2}}{\sqrt{n_{0}n}}\bm{\nabla}n+\frac{e\mathbf{v}\times\mathbf{B}}{m^{\star}}=0,$
(12)
where
$s=\left(e^{2}dv_{F}\sqrt{n_{0}}/\varepsilon\hbar\sqrt{\pi}\right)^{1/2}$ is
the screened plasmon sound velocity. The drift velocity perpendicular to
$\mathbf{B}$ can be retrieved as
$\mathbf{v}_{\perp}=\frac{\underline{S}^{2}m^{\star}}{n_{0}e}\frac{\bm{\nabla}n\times\mathbf{B}}{\mathbf{B}^{2}},$
(13)
with $\underline{S}^{2}=s^{2}+v_{F}^{2}/2$, which is analogous to a diamagnetic
drift Chen (2016) in plasmas. Here, however, the drift is not only due to the
pressure gradient Chen (2016) but has the added contribution of the force
drift, since $\mathbf{F}\sim\bm{\nabla}n$ as well. Thus, the fluid has a larger
diamagnetic drift than would be expected from the pressure alone.
In the case of wave or shock propagation along the GFET channel, the
density gradient will be mostly in the $x$ direction and, therefore, the
diamagnetic drift will give rise to a transverse Hall current.
### II.2 Magneto-plasmons in graphene FETs
Considering a uniform field $\mathbf{B}=B_{0}\bm{\hat{z}}$ perpendicular to
the graphene layer and looking for propagation
along $x$, $\mathbf{k}=k\bm{\hat{x}}$, linearization of Eqs. (4) and (5), with
$\mathbf{v}=(v_{0}+v_{x})\bm{\hat{x}}+v_{y}\bm{\hat{y}}$ and $n=n_{0}+n_{1}$,
reads in Fourier space
$\displaystyle\left(\omega-kv_{0}\right)\tilde{n}_{1}=kn_{0}\tilde{v}_{x},$
(14a)
$\displaystyle\left(\omega-\frac{kv_{0}}{2}\right)\tilde{v}_{x}=k\frac{\underline{S}^{2}}{n_{0}}\tilde{n}_{1}-i\omega_{c}\tilde{v}_{y},$
(14b)
$\displaystyle\left(\omega-\frac{kv_{0}}{2}\right)\tilde{v}_{y}=i\omega_{c}\tilde{v}_{x},$
(14c)
where $\omega_{c}=eB/m^{\star}$ is the cyclotron frequency. Note that, as the
effective mass is much smaller than the electron mass, $m^{\star}\ll m_{e}$,
it is possible to access high cyclotron frequencies with modest fields; for a
typical excess density of $10^{12}\,\mathrm{cm}^{-2}$,
$\omega_{c}/B\approx 9\,\mathrm{THz\,T^{-1}}$. Furthermore, combining (14) yields the
relation
$\left(\omega-
kv_{0}\right)\left[\left(\omega-\frac{kv_{0}}{2}\right)^{2}\\!\\!-\omega_{c}^{2}\right]\\!\\!=\underline{S}^{2}k^{2}\left(\omega-\frac{kv_{0}}{2}\right).$
(15)
With this dispersion relation, the propagating solutions $\omega_{\pm}(k)$
coalesce to $\pm\omega_{c}$ as $k\rightarrow 0$, opening a gap at the
origin, as is apparent in Fig. 2, whereas for large $k$ we recover the unperturbed
solutions $\omega\simeq(3v_{0}/4\pm\underline{S})k$. Moreover, a third
solution $\omega_{0}(k)\simeq kv_{0}/2$ is also present.
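The three branches discussed above can be reproduced by expanding Eq. (15) as a cubic in $\omega$ and finding its roots; the short sketch below (parameter values are our own illustrative choices, with `S` standing for $\underline{S}$) recovers the gap at $k\to 0$ and the unperturbed slopes at large $k$.

```python
import numpy as np

def magnetoplasmon_modes(k, v0=1.0, S=10.0, wc=5.0):
    """Real roots omega of the dispersion relation (15),
    (w - k v0)[(w - k v0/2)^2 - wc^2] = S^2 k^2 (w - k v0/2),
    expanded as a cubic polynomial in omega."""
    a, b, s2k2 = k * v0, k * v0 / 2.0, (S * k) ** 2
    coeffs = [1.0,
              -(2.0 * b + a),
              b * b + 2.0 * a * b - wc ** 2 - s2k2,
              a * wc ** 2 - a * b * b + s2k2 * b]
    return np.sort(np.roots(coeffs).real)
```

As $k\to 0$ the two propagating roots approach $\pm\omega_{c}$, while at large $k$ they follow $(3v_{0}/4\pm\underline{S})k$ and the third root tracks $kv_{0}/2$.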
Figure 2: Magneto-plasmon dispersion in graphene FETs. Solutions of the
dispersion relation in Eq. (15) with $\underline{S}/v_{0}=10$ (solid lines)
alongside the solutions in the absence of magnetic field (dashed lines).
## III Dyakonov–Shur Instability
Figure 3: Numerical solutions for frequency and growth rate (in units of
$v_{0}/L$) of Dyakonov–Shur instability for several cyclotron frequencies
$\omega_{c}$ (coloured dots) and analytical solution (18) corresponding to
$B=0$ (dashed black line). Although there is no significant change in the real
part of the frequency, the growth rate diminishes slightly.
The hydrodynamic model in Eqs. (4) and (5) contains an instability under the
boundary conditions of fixed density at the source $n(x=0)=n_{0}$ and fixed
current density at the drain $n(x=L)v(x=L)=n_{0}v_{0}$, dubbed in the
literature as the Dyakonov–Shur (DS) instability Dyakonov and Shur (1993);
Dyakonov (2010). The latter arises from the multiple reflections of the plasma
waves at the boundaries, which provide positive feedback for the incoming
waves driven by the current at the drain. From an electronic point of view,
the peculiar boundary conditions correspond to an AC short circuit at the
source, forcing the voltage (and so the carriers density) to remain constant,
and an AC open circuit at the drain setting the current constant Crowne
(1997); Barut _et al._ (2019). Thus, these conditions can be implemented with
a low-reactance capacitor on the source and a high-reactance inductor on the
drain Fay, Jena, and Maki (2020), as outlined in Figure 1.
The asymmetric boundary conditions described above imply that the
counterpropagating wave vectors need to comply with the relation
$\frac{k_{+}}{k_{-}}=e^{i(k_{+}-k_{-})L},$ (16)
where
$k_{\pm}=\frac{\frac{3}{4}\omega\mp\text{Sgn}(\omega)\sqrt{\underline{S}^{2}\left(\omega^{2}-\omega_{c}^{2}\right)+\left(\frac{3}{4}\omega_{c}\right)^{2}}}{\left(\frac{3}{4}\right)^{2}-\underline{S}^{2}}.$
(17)
This condition leads to complex solutions, $\omega=\omega_{r}+i\gamma$, where
$\omega_{r}$ is the electron oscillation frequency and $\gamma$ is the
instability growth rate Dyakonov and Shur (1993); Dmitriev _et al._ (1997);
Crowne (1997). Numerical inspection of Eq. (16) provides the results depicted
in Fig. 3. In the unmagnetized case, the instability condition can be
solved analytically:
$\begin{gathered}\omega_{r}=\frac{|\underline{S}^{2}-\left(\frac{3}{4}v_{0}\right)^{2}|}{2L\underline{S}}\pi,\\\
\gamma=\frac{\underline{S}^{2}-\left(\frac{3}{4}v_{0}\right)^{2}}{2L\underline{S}}\log\left|\frac{\underline{S}+\frac{3}{4}v_{0}}{\underline{S}-\frac{3}{4}v_{0}}\right|.\end{gathered}$
(18)
Plasmonic dynamical instability takes place for $S/v_{0}>3/4$, i.e. in the
_subsonic_ regime. The fact that the instability develops in such a regime is
advantageous from the technological point of view, as it allows the operation
of the GFET far from the velocity saturation Schwierz (2010); Wilmart _et al._
(2020). Moreover, when $S\gg v_{0}$ the frequency is dominated by the $S/L$
ratio as $\omega_{r}\sim\pi\underline{S}/2L$ while $\gamma\sim 3v_{0}/4L$.
Then, given the dependence of $S$ on the gate voltage, and as $v_{0}n_{0}\sim
I_{\rm DS}/We$, with $I_{\rm DS}$ representing the source-to-drain current and
$W$ the transverse width of the sheet, the frequency can be tuned by the gate
voltage and injected drain current, not being solely restricted to the
geometric factors of the GFET.
In the presence of the magnetic field, the solutions of (16) reveal that the
growth rate of the instability decreases slightly, which is more evident
around the transonic regime, while at the subsonic case the influence of the
magnetic field on the growth rate is less noticeable (Fig. 3). This
observation contradicts what has been previously reported in Ref. Zhang and
Xue (2013). Regarding the frequency, the magnetic field introduces a small
shift from the unmagnetized scenario.
The reason for our results to differ from those presented in Zhang and Xue
(2013) lies in the treatment of the wave vector solutions. In the cited work
the cyclotron frequency $\omega_{c}$ is a priori normalized to
$\underline{S}/L$. Such an approach simplifies the problem, as it artificially
linearises (17). However, this obscures the analysis since, in an $\omega$ vs.
$\underline{S}$ plot, the cyclotron frequency would also be varying. Moreover,
the gap of the dispersion relation opened by the magnetic field suppresses
frequencies below $\omega_{c}$; hence, as one approaches the sonic regime
$\underline{S}\sim v_{0}$, the real part of the frequency drops and reaches
the cut-off, leaving the solutions in Fig. 3 with an endpoint.
## IV Numerical Simulation
Figure 4: Evolution of drain-to-source and Hall currents across the graphene
channel for distinct values of cyclotron frequency. The presence of the
magnetic field diverts part of the current to the transverse direction and
diminishes the growth rate of the instability. All three simulations performed
with $S=20v_{0}$ and $v_{F}=10v_{0}$. Figure 5: Hall current response with the
applied magnetic field. All simulations performed with $S=20v_{0}$ and
$v_{F}=10v_{0}$.
In order to perform the simulations revealing the late-stage (nonlinear)
evolution of the plasmon wave in the FET channel, the hydrodynamical equations
have been recast into a conservation form plus a magnetic source term.
Resorting to the mass flux density $\mathbf{p}=m^{\star}n\mathbf{v}$, the
continuity and momentum equation can be written in the equivalent form
$\frac{\partial n}{\partial
t}+\bm{\nabla}\\!\cdot\\!\frac{\mathbf{p}}{\sqrt{n}}=0,$ (19a)
$\frac{\partial\mathbf{p}}{\partial
t}+\bm{\nabla}\\!\cdot\\!\left(\frac{\mathbf{p}\otimes\mathbf{p}}{{n}^{3/2}}+\frac{v_{F}^{2}}{v_{0}^{2}}\frac{n^{3/2}}{3}\mathds{1}+\frac{S^{2}}{v_{0}^{2}}\frac{{n}^{2}}{2}\mathds{1}\right)+\\\
+\frac{\omega_{c}}{\omega_{0}}\frac{\mathbf{p}\times\mathbf{\hat{z}}}{\sqrt{n}}=0.$
(19b)
This hyperbolic system of differential equations has been solved with a finite
volume Lax-Wendroff method Hirsch (2007); LeVeque (1992), the two-step
Richtmyer scheme for nonlinear systems LeVeque (1992). The simulation of
system (19b), as well as the computation of the observable electronic
quantities of the GFET, has been carried with a software specifically
developed for the task Cosme and Santos (2020). Our simulations confirm that
the magnetic field reduces the instability growth rate, as expected for the
subsonic regime (Fig.3). The average value and oscillation amplitude of the
quantities along the channel are also reduced (Tab.1), as the diamagnetic
current removes a fraction of the electrons participating in the longitudinal
oscillation. A typical situation for the current density at source can be seen
in Fig.4. The latter reveals that the magnetic drift is responsible for a
transverse current, which could be exploited for a directional coupler
operating in the THz regime He _et al._ (2014). In the present case, we are dealing with plasmons, but the effect may also be applicable to surface-plasmon polaritons Hwang and Yang (2019). Indeed, the applied magnetic field can not only control the average $I_{\text{Hall}}$ value but also amplify the amplitude of its oscillation, as is apparent in Fig. 5.
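The two-step Richtmyer scheme mentioned above can be sketched for a one-dimensional scalar conservation law $u_t + f(u)_x = 0$. This is a minimal illustration only, not the solver used for the full 2D system (19) with its magnetic source term; grid sizes and the Burgers flux are chosen for the example.

```python
import numpy as np

def richtmyer_step(u, dt, dx, f):
    """One two-step Richtmyer (Lax-Wendroff) update for u_t + f(u)_x = 0
    with periodic boundaries. 1D scalar sketch only; the paper's solver
    treats the full 2D system (19) plus the magnetic source term."""
    up = np.roll(u, -1)                                 # u_{i+1}
    # Predictor: half-step values at cell interfaces i+1/2
    u_half = 0.5 * (u + up) - 0.5 * dt / dx * (f(up) - f(u))
    # Corrector: conservative update with interface fluxes
    return u - dt / dx * (f(u_half) - f(np.roll(u_half, 1)))

# Usage: smooth pulse with the Burgers flux f(u) = u^2/2, CFL ~ 0.2
N = 200
dx, dt = 1.0 / N, 0.001
x = np.arange(N) * dx
u = 1.0 + 0.1 * np.sin(2 * np.pi * x)
for _ in range(100):
    u = richtmyer_step(u, dt, dx, lambda v: 0.5 * v**2)
```

With periodic boundaries the interface fluxes telescope, so the spatial mean of $u$ is preserved to machine precision, which is a quick sanity check for any conservative scheme.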
Table 1: Average values and extrema of the drain-to-source and Hall currents (in units of $en_{0}v_{0}L$) in the nonlinear regime, for several imposed cyclotron frequencies $\omega_{c}$ (in units of $v_{0}/L$). All simulations were performed with $S=20v_{0}$ and $v_{F}=10v_{0}$.
$\omega_{c}$ | $\langle I_{\text{Hall}}\rangle$ | $\min I_{\text{Hall}}$ | $\max I_{\text{Hall}}$ | $\langle I_{DS}\rangle$ | $\min I_{DS}$ | $\max I_{DS}$
---|---|---|---|---|---|---
$0$ | — | — | — | $2.053$ | $-22.884$ | $21.584$
$1$ | $\phantom{1}1.017$ | $\phantom{1}0.701$ | $\phantom{1}1.322$ | $2.051$ | $-22.851$ | $21.557$
$5$ | $\phantom{1}5.039$ | $\phantom{1}3.486$ | $\phantom{1}6.539$ | $2.042$ | $-22.183$ | $21.037$
$10$ | $\phantom{1}9.796$ | $\phantom{1}6.979$ | $12.507$ | $1.971$ | $-19.053$ | $18.835$
$15$ | $13.979$ | $10.703$ | $17.134$ | $1.734$ | $-13.356$ | $13.029$
To further analyze and quantify the impact of $\omega_{c}$ on the electronic
fluid, the numerical results were evaluated with higher order dynamic mode
decomposition (HODMD) Le Clainche and Vega (2017) resorting to PyDMD software
Demo, Tezzele, and Rozza (2018). The direct outputs of the fluid equations were first integrated to obtain the average drain-to-source current; this enables the analysis to be performed on a lower-dimensional quantity that retains the dynamics of the system. Then, the HODMD algorithm was applied to the linear-growth portion of the signal, i.e. before the nonlinear saturation effects, which corresponds to $t\lesssim 1.5L/v_{0}$. Although
HODMD can perfectly deal with the transition to the saturation regime, the
eigenmodes and complex frequencies thus retrieved do not necessarily reflect
the values predicted by linear theory. Figure 6 shows an example of such
results where the overall decrease of growth rate is evident, with the growth
rates from the $\omega_{c}=0$ case exceeding the subsequent results with
magnetic field. Moreover, the predicted slight drift of the main frequency
towards higher values can also be observed.
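The frequency and growth-rate extraction can be illustrated with a bare-bones delay-embedded DMD on a synthetic signal. This shows only the core idea behind HODMD (DMD on a Hankel matrix of the scalar signal), not PyDMD's implementation, and the signal parameters are made up for the example.

```python
import numpy as np

def dmd_frequencies(signal, dt, d=10, r=2):
    """Estimate complex frequencies mu = sigma + i*omega of a scalar signal
    via DMD on a delay-embedded (Hankel) matrix -- the core idea behind
    HODMD, shown here as a bare-bones sketch, not PyDMD's implementation."""
    n = len(signal) - d + 1
    H = np.array([signal[i:i + n] for i in range(d)])   # d x n Hankel matrix
    X, Y = H[:, :-1], H[:, 1:]
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], S[:r], Vt[:r].conj().T
    A = Ur.conj().T @ Y @ Vr / Sr                       # reduced operator
    lam = np.linalg.eigvals(A)
    return np.log(lam) / dt                             # continuous frequencies

# Usage: a growing oscillation exp(0.3 t) cos(5 t) -> mu = 0.3 +/- 5i
dt = 0.01
time = np.arange(0, 3, dt)
mu = dmd_frequencies(np.exp(0.3 * time) * np.cos(5.0 * time), dt)
```

The real part of each recovered `mu` is the growth rate and the imaginary part the angular frequency, matching the $\Re(\omega_{m})$, $\Im(\omega_{m})$ convention of Fig. 6.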
Figure 6: Higher order dynamic mode decomposition frequencies, $\Re(\omega_{m})$, and growth rates, $\Im(\omega_{m})$ (in units of $v_{0}/L$); modes with higher amplitude are displayed in stronger color. The dashed line marks the theoretical growth rate from (18). The decomposition was obtained from the linear regime ($t\lesssim 2\,L/v_{0}$) of the average drain-to-source current for different values of the cyclotron frequency $\omega_{c}$, with $S=20v_{0}$ and $v_{F}=10v_{0}$.
## V Conclusions
The theoretical study of electronic transport in graphene is a challenging
task, covering several regimes and interactions, and resorting to complex
techniques. Nonetheless, the hydrodynamic models provide a semi-classical
description capable of recovering the behavior and properties of such quantum
fluids while also allowing numerical simulation with well-established methods.
However, it is vital to stress that conventional fluid equations, for instance the Cauchy momentum equation, cannot be bluntly applied, and that the variation of the effective mass with the number density introduces a correction in the nonlinear convective term, breaking the symmetry of the dispersion relation in the presence of a base drift of the fluid.
The presented model evinces that the presence of a weak transverse magnetic field dramatically changes the nature of the plasmons at small $k$, opening a gap in the dispersion relation and imposing a cut-off on the feasible frequencies of such systems. Furthermore, our numerical results point out that the magnetic field impairs the growth of the DS instability, a result that, to our knowledge, has not yet been reported in this context. This reduction of the growth rate is practically unnoticeable for the deep subsonic flows at which technological applications are bound to operate. Yet, the frequency itself can be increased for moderate values of the Mach number before reaching the gap cut-off. Moreover, our results suggest that the DS configuration in a magnetized FET has the potential to function as a directional coupler operating in the THz regime He _et al._ (2014). In future studies, other magnetic effects could be addressed, either within the DS mechanism or by exploring other instability processes, namely drift instabilities driven by the enhanced diamagnetic drift arising from the gated scenario. Lastly, the presence of a magnetic field would also lead to the emergence of an odd-viscosity Avron (1998) contribution with potentially interesting effects, such as topologically protected edge states and new exotic dynamics.
###### Acknowledgements.
The authors acknowledge the funding provided by Fundação para a Ciência e a
Tecnologia (FCT-Portugal) through the Grant No. PD/BD/150415/2019 and the
Contract No. CEECIND/00401/2018.
## AIP Publishing Data Sharing Policy
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Hosseininejad _et al._ (2018) S. Hosseininejad, R. Faraji-Dana, S. Abadal, M. Lemme, P. Haring Bolívar, E. Alarcón, M. Neshat, and A. Cabellos-Aparicio, Nanomaterials 8, 577 (2018).
* Mendl, Polini, and Lucas (2021) C. B. Mendl, M. Polini, and A. Lucas, Applied Physics Letters 118, 013105 (2021).
* Suessmeier _et al._ (2017) C. Suessmeier, S. Schaeffer, S. Abadal, E. Alarcón, S. E. Hosseininejad, A. K. Wigger, D. Stock, S. Wagner, A. Cabellos-Aparicio, M. Lemme, and P. H. Bolívar, in _Conference on Lasers and Electro-Optics_ (OSA, Washington, D.C., 2017) p. JTh2A.37.
* Otsuji, Popov, and Ryzhii (2014) T. Otsuji, V. Popov, and V. Ryzhii, Journal of Physics D: Applied Physics 47 (2014).
* Son _et al._ (2018) J. Son, J. Kwon, S. Kim, Y. Lv, J. Yu, J.-Y. Lee, H. Ryu, K. Watanabe, T. Taniguchi, R. Garrido-Menacho, N. Mason, E. Ertekin, P. Y. Huang, G.-H. Lee, and A. M. van der Zande, Nature Communications 9, 3988 (2018).
* Banszerus _et al._ (2019) L. Banszerus, T. Sohier, A. Epping, F. Winkler, F. Libisch, F. Haupt, K. Watanabe, T. Taniguchi, K. Müller-Caspary, N. Marzari, F. Mauri, B. Beschoten, and C. Stampfer, arXiv preprint , 1 (2019).
* Chaves _et al._ (2017) A. J. Chaves, N. M. R. Peres, G. Smirnov, and N. A. Mortensen, Physical Review B 96, 195438 (2017).
* Narozhny _et al._ (2017) B. N. Narozhny, I. V. Gornyi, A. D. Mirlin, and J. Schmalian, Annalen der Physik 529, 1700043 (2017).
* Svintsov _et al._ (2013) D. Svintsov, V. Vyurkov, V. Ryzhii, and T. Otsuji, Physical Review B 88, 245444 (2013).
* Lucas and Fong (2018) A. Lucas and K. C. Fong, Journal of Physics Condensed Matter 30, 053001 (2018).
* Müller, Schmalian, and Fritz (2009) M. Müller, J. Schmalian, and L. Fritz, Physical Review Letters 103, 025301 (2009).
* Sulpizio _et al._ (2019) J. A. Sulpizio, L. Ella, A. Rozen, J. Birkbeck, D. J. Perello, D. Dutta, M. Ben-Shalom, T. Taniguchi, K. Watanabe, T. Holder, R. Queiroz, A. Principi, A. Stern, T. Scaffidi, A. K. Geim, and S. Ilani, Nature 576, 75 (2019).
* Mayzel, Steinberg, and Varshney (2019) J. Mayzel, V. Steinberg, and A. Varshney, Nature Communications 10 (2019).
* Berdyugin _et al._ (2019) A. I. Berdyugin, S. G. Xu, F. M. Pellegrino, R. Krishna Kumar, A. Principi, I. Torre, M. Ben Shalom, T. Taniguchi, K. Watanabe, I. V. Grigorieva, M. Polini, A. K. Geim, and D. A. Bandurin, Science 364, 162 (2019).
* Wilmart _et al._ (2020) Q. Wilmart, M. Boukhicha, H. Graef, D. Mele, J. Palomo, M. Rosticher, T. Taniguchi, K. Watanabe, V. Bouchiat, E. Baudin, J. M. Berroir, E. Bocquillon, G. Fève, E. Pallecchi, and B. Plaçais, Applied Sciences (Switzerland) 10 (2020).
* Yamoah _et al._ (2017) M. A. Yamoah, W. Yang, E. Pop, and D. Goldhaber-Gordon, ACS Nano 11, 9914 (2017).
* Dorgan, Bae, and Pop (2010) V. E. Dorgan, M.-H. Bae, and E. Pop, Applied Physics Letters 97, 082112 (2010).
* Dyakonov and Shur (1993) M. Dyakonov and M. Shur, Physical Review Letters 71, 2465 (1993).
* Crowne (2000) F. J. Crowne, Journal of Applied Physics 87, 8056 (2000).
* Cosme and Terças (2020) P. Cosme and H. Terças, ACS Photonics 7, 1375 (2020).
* Lucas and Das Sarma (2018) A. Lucas and S. Das Sarma, Physical Review B 97, 245128 (2018).
* Dyakonova _et al._ (2005) N. Dyakonova, F. Teppe, J. Łusakowski, W. Knap, M. Levinshtein, A. P. Dmitriev, M. S. Shur, S. Bollaert, and A. Cappy, Journal of Applied Physics 97, 114313 (2005).
* Kushwaha and Vasilopoulos (2001) M. S. Kushwaha and P. Vasilopoulos, Physical Review B - Condensed Matter and Materials Physics 64, 1253201 (2001).
* Zhang and Xue (2013) L.-P. Zhang and J.-K. Xue, Physics of Plasmas 20, 082118 (2013).
* Ashcroft and Mermin (1976) N. W. Ashcroft and N. D. Mermin, _Solid state physics_ (Holt, Rinehart and Winston, 1976) p. 826.
* Figueiredo, Bizarro, and Terças (2020) J. L. Figueiredo, J. P. S. Bizarro, and H. Terças, arXiv Mesoscale and Nanoscale Physics , 1 (2020).
* Rieutord (2015) M. Rieutord, _Fluid Dynamics_, Graduate Texts in Physics (Springer International Publishing, 2015).
* Haas (2011) F. Haas, _Quantum Plasmas_, Springer Series on Atomic, Optical, and Plasma Physics, Vol. 65 (Springer New York, New York, NY, 2011).
* Landau _et al._ (1980) L. D. Landau, E. M. Lifshitz, L. P. Pitaevskii, J. B. Sykes, and M. J. Kearsley, _Statistical Physics Part 1_ , 3rd ed. (Pergamon Press, 1980).
* Giuliani and Vignale (2005) G. Giuliani and G. Vignale, _Quantum Theory of the Electron Liquid_ (Cambridge University Press, 2005).
* Narozhny and Schütt (2019) B. N. Narozhny and M. Schütt, Physical Review B 100, 035125 (2019).
* Torre _et al._ (2015) I. Torre, A. Tomadin, A. K. Geim, and M. Polini, Physical Review B - Condensed Matter and Materials Physics 92, 1 (2015).
* Levitov and Falkovich (2016) L. Levitov and G. Falkovich, Nature Physics 12, 672 (2016).
* Zhu _et al._ (2009) W. Zhu, V. Perebeinos, M. Freitag, and P. Avouris, Physical Review B 80, 235402 (2009).
* Das Sarma _et al._ (2011) S. Das Sarma, S. Adam, E. H. Hwang, and E. Rossi, Reviews of Modern Physics 83, 407 (2011).
* Chen (2016) F. F. Chen, _Introduction to Plasma Physics and Controlled Fusion_ (Springer International Publishing, Cham, 2016).
* Dyakonov (2010) M. I. Dyakonov, Comptes Rendus Physique 11, 413 (2010).
* Crowne (1997) F. J. Crowne, Journal of Applied Physics 82, 1242 (1997).
* Barut _et al._ (2019) B. Barut, G. R. Aizin, E. Einarsson, J. M. Jornet, T. Sugaya, and J. P. Bird, in _2019 44th International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz)_, Vol. 2019-Septe (IEEE, 2019) pp. 1–1.
* Fay, Jena, and Maki (2020) P. Fay, D. Jena, and P. Maki, _High-Frequency GaN Electronic Devices_, edited by P. Fay, D. Jena, and P. Maki (Springer International Publishing, Cham, 2020).
* Dmitriev _et al._ (1997) A. P. Dmitriev, A. S. Furman, V. Y. Kachorovskii, G. G. Samsonidze, and G. G. Samsonidze, Physical Review B 55, 10319 (1997).
* Schwierz (2010) F. Schwierz, Nature Nanotechnology 5, 487 (2010).
* Hirsch (2007) C. Hirsch, _Fundamentals of Computational Fluid Dynamics_ , 2nd ed. (Elsevier Inc., 2007).
* LeVeque (1992) R. J. LeVeque, _Numerical Methods for Conservation Laws_, 2nd ed. (Birkhäuser Basel, Basel, 1992).
* Cosme and Santos (2020) P. Cosme and J. Santos, “TETHYS \- Two-dimensional Emitter of THz, Hydrodynamic Simulation.” (2020), Version 2.3.1-beta.
* He _et al._ (2014) M.-D. He, K.-J. Wang, L. Wang, J.-B. Li, J.-Q. Liu, Z.-R. Huang, L. Wang, L. Wang, W.-D. Hu, and X. Chen, Applied Physics Letters 105, 081903 (2014).
* Hwang and Yang (2019) Y. Hwang and J.-K. Yang, Scientific Reports 9, 7348 (2019).
* Le Clainche and Vega (2017) S. Le Clainche and J. M. Vega, SIAM Journal on Applied Dynamical Systems 16, 882 (2017).
* Demo, Tezzele, and Rozza (2018) N. Demo, M. Tezzele, and G. Rozza, The Journal of Open Source Software 3, 530 (2018).
* Avron (1998) J. E. Avron, Journal of Statistical Physics 92, 543 (1998).
# Notes on the Superstatistical approach to UK Airport Arrival Delays Statistics
Evangelos Mitsokapas, Benjamin Schäfer, Rosemary J. Harris, and Christian Beck
School of Mathematical Sciences, Queen Mary University of London, London E1 4NS, United Kingdom
Correspondence to<EMAIL_ADDRESS>
###### Abstract
The aviation industry is of great importance for a globally connected economy.
Customer satisfaction with airlines and airport performance is considerably
influenced by how much flights are delayed. But how should the delay be
quantified with thousands of flights for each airport and airline? Here, we
present a statistical analysis of arrival delays at several UK airports
between 2018 and 2020. We establish a procedure to compare both mean delay and
extreme events among airlines and airports, identifying a power-law decay of
large delays. Furthermore, we note drastic changes in plane delay statistics
during the COVID-19 pandemic. Finally, we find that delays are described by a
superposition of simple distributions, leading to a superstatistics.
## I Introduction
The aviation industry was a rapidly growing sector until recently, prior to
the current COVID-19 pandemic. Economic growth led to higher average yearly
distances travelled, as well as higher air traffic volumes, robustly observed
among several regions worldwide until 2019 [1, 2]. But both the ongoing
pandemic [3] and also the push towards more renewable options in aviation [4]
may induce a considerable change in the industry in the future. This makes the
industry a very interesting object to study as it transforms.
As a passenger, an important benchmark for evaluating travel options, e.g. in
terms of airports, airlines or even modes of transportation (train vs plane)
is the punctuality of each option. In particular, flight delays severely
decrease customer satisfaction [5] and might lead to customers choosing a
different airport or airline, in the long term. Generally, it is important to
quantitatively understand delay-risks both in terms of the expectation values
but also in terms of the extreme events, i.e. quantifying how likely a very
early or very late arrival is.
The study of delays in aviation is already an active field of research.
Previous simple frameworks to classify and categorize delays
have been proposed [6] but mostly rely on mean values. In other cases,
stochastic models of plane delays [7] were developed either without
considering the corresponding probability distributions or assuming simple
Normal or Poisson distributions [8]. More recent work also includes the
application of machine learning techniques to aviation data, e.g. via
recurrent neural networks [9]. One problem of any data-driven approach is that
many articles on aviation research solely rely on proprietary data: In a
recent review investigating 200 research articles, $68\%$ were based on
proprietary data [10]. Hence, to enable the broader applicability of machine
learning applications, more publicly available data are still required.
To quantify delay statistics, we will go beyond the often-used averages of
delays [6] and instead investigate the entire probability density function of
delays at a given airport. Thereby, we consider all possible delay values,
from highly negative delays (i.e. flights arriving significantly earlier than
their scheduled arrival time) to severely positively delayed flights. These
delay distributions are influenced by many different aspects, including random
events, congestion, delay propagation between airports [11, 12] and (for long-
haul flights on large scales) the topological structure of the worldwide air
transportation network [13, 14]. To explain the emergence of heavy tails in a
local distribution, i.e. extreme deviations from the mean, we will utilize
superstatistical modelling [15]. Such an approach has been successfully
applied in transport before, for modelling train delays [16]; it has also
attracted recent interest when describing fluctuations in the energy system
[17] and air pollutant concentrations [18] and it has been extended to the
general framework of diffusing diffusivities in nonequilibrium statistical
physics and biologically inspired physics [19, 20, 21].
In this article, we present new data collected from 2018 to 2020 at several UK
airports, with a particular focus on Heathrow, the most important
international hub in the UK. The data were publicly available from the arrival
information of each airport, given out on their websites each day but had to
be collected and processed for further usage. While the past arrival data can
no longer be accessed via the airport websites, all collected data have been
uploaded in a repository, see Methods. We analyse the full probability density
of delay distributions and introduce certain performance indices to describe
these distributions, such as the mean delay, the exponential decay rate of
negative delays, and the power-law exponent of large positive delays. These
indices are then compared for the different UK airports and the different
airlines operating at these airports, to understand the main features of the
delay statistics (such as frequency of extreme delays, average delay per
airport or per airline, etc) in a more systematic way. Finally, we deal with a
theoretical model to explain features of the delay statistics. We show that
the power law of large positive delays can be linked to a superposition of
exponential delays with a varying decay parameter, in a superstatistical
approach. Conversely, negative delays (early arrivals) do not exhibit any
power laws but simply behave in an exponential way, with extremely early
arrivals exponentially unlikely. Throughout this article, we assume that
passengers prefer to arrive as early as possible, i.e. with as little positive
and as much negative delay as possible.
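As a toy illustration of such a superstatistical mechanism (with made-up parameters, not fitted to any data): mixing exponential delays over a Gamma-distributed decay rate produces exactly this kind of power-law tail, with the tail exponent set by the fluctuations of the rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy superstatistical model: each delay is exponentially distributed with
# its own decay rate lam, and lam fluctuates from sample to sample, here
# Gamma-distributed.  Integrating the exponential over the Gamma weight
# gives a heavy-tailed (q-exponential / Lomax) marginal whose survival
# function decays like t**(-shape).
shape, N = 3.0, 200_000
lam = rng.gamma(shape, 1.0, size=N)          # fluctuating decay rates
delays = rng.exponential(1.0 / lam)          # conditionally exponential delays

# Hill estimator of the tail exponent from the largest order statistics;
# it should sit near `shape`, the power law inherited from the fluctuations.
tail = np.sort(delays)[-500:]
hill = 1.0 / np.mean(np.log(tail / tail[0]))
```

For this Gamma mixture the survival function is $(1+t)^{-3}$ exactly, so the empirical tail fraction and the Hill estimate both reflect a power law even though every individual sample was drawn from a simple exponential.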
## II New data
We collected flight details from a number of different airports. For the
purposes of this article, we have taken into consideration the top five UK
airports, in order of passenger traffic [22], namely: London Heathrow Airport
(LHR), London Gatwick Airport (LGW), London Luton Airport (LTN), London
Stansted Airport (STN) and Manchester Airport (MAN). Over a period lasting from Autumn 2018 to Spring 2019, we collected a combined total of
approximately two-hundred and twenty thousand ($2.2\times 10^{5}$) flight-
arrivals from all five airports mentioned above. Furthermore, we continued
collecting flight-information from London Heathrow during the 2020 COVID-19
pandemic, to illustrate the effect the lockdown had on the delay distribution.
For each flight, we recorded the airline company operating the flight along
with the corresponding flight number, departure and arrival airports, as well
as scheduled and actual landing times. The delay is then computed simply as
the difference between an aircraft’s scheduled arrival time and its actual
arrival time. Note that airlines and airports presumably have some freedom in
setting the scheduled arrival time, potentially influencing the average
“delay” (average difference between scheduled and actual arrival). We made all
collected data publicly available. For details of the data processing and
availability, see Methods.
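The sign convention (actual minus scheduled, so late flights are positive and early ones negative) can be sketched as follows; the timestamp format is a simplifying assumption for illustration, since the recorded field layout is described only in Methods.

```python
from datetime import datetime

def arrival_delay_minutes(scheduled: str, actual: str) -> float:
    """Arrival delay in minutes as actual minus scheduled landing time, so
    late flights are positive and early ones negative.  The
    "YYYY-MM-DD HH:MM" format is an assumed stand-in for the recorded
    fields; carrying the date lets flights landing past midnight work."""
    fmt = "%Y-%m-%d %H:%M"
    diff = datetime.strptime(actual, fmt) - datetime.strptime(scheduled, fmt)
    return diff.total_seconds() / 60.0

# A flight scheduled for 14:30 that lands at 14:18 has a delay of -12 min
delay = arrival_delay_minutes("2019-03-01 14:30", "2019-03-01 14:18")
```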
The main body of our data (about $85\%$) is sourced from London Heathrow,
making it the chief focus of our analysis simply due to its size. London
Heathrow is an international airport operating flights of 80 different
airlines in total, which fly to 84 different countries around the world, as of
2019 [22]. Of course, in addition there are domestic flights within the UK.
The passenger nationalities are $48\%$ European and UK and $52\%$ from the
rest of the world. It is the busiest airport in Europe by passenger traffic
[22].
The empirical probability density function (PDF) of all delays is a key
characteristic to monitor, see Fig. 1 for all Heathrow delays. There, we
compare the data collected from 2018 to 2019 with more recent data collected
during the 2020 COVID-19 pandemic (during the first lockdown in Spring to
Summer 2020), which led to a drastic reduction in air transport [23, 24].
There are two interesting observations: Firstly, the delay statistics under
COVID-19 are shifted to the left, indicating overall smaller delays (including
more negative delays); secondly, the general shape of the distribution does
not change drastically. In particular, we observe a fast decay of the PDF of
negative delays on the left side and a much slower decay of the PDF on the
right side for positive delays. In the following sections, we will analyse
this behaviour in much more detail.
Figure 1: Flight delays follow a broad distribution with large negative and
positive delays. We display LHR delay histograms prior to and during the
COVID-19 pandemic, both normalized. As the COVID-19 LHR data set is
significantly smaller in size, compared to the regular LHR data set, it
contains many gaps, where no data were recorded. The COVID-19 data set is
significantly shifted towards the left (smaller delays) as compared to the
pre-pandemic time.
## III Quantifying delay statistics
Starting from a histogram of the flight delays, we derive three indices to quantify flight delay distributions: the mean delay, the decay exponent of the left-flank exponential, and the power-law exponent of the right-flank $q$-exponential, as explained below in detail. We will use the LHR data prior to any COVID-19 influence as our main example.
As a first step, we split the full histogram at its peak value into two
histograms, a left flank of predominantly negative delays and a right flank of
predominantly positive delays, see Fig. 2. Based on the shape of the empirical
distributions, we use exponentials and $q$-exponentials as fitting functions,
see also Methods for details. Splitting the histogram has two advantages:
Firstly, the analysis of each flank is much simpler than the analysis of the
full aggregated data. Secondly, a given stakeholder might be particularly
interested in positive rather than negative delays, or vice versa.
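The splitting step can be sketched as follows on synthetic data; the exact binning and rescaling used for the real delays are described in Methods.

```python
import numpy as np

rng = np.random.default_rng(3)

def split_at_peak(delays, bins=100):
    """Split a delay sample at its histogram mode into a left flank
    (magnitudes of delays below the peak) and a right flank (delays above
    the peak).  Sketch of the splitting step only; the paper's exact
    binning and rescaling are described in its Methods."""
    counts, edges = np.histogram(delays, bins=bins)
    i = np.argmax(counts)
    peak = 0.5 * (edges[i] + edges[i + 1])   # centre of the modal bin
    left = peak - delays[delays < peak]      # rescaled early-arrival flank
    right = delays[delays >= peak] - peak    # rescaled late-arrival flank
    return peak, left, right

# Usage on synthetic two-sided data peaked near -8 minutes
sample = -8.0 + np.concatenate([-rng.exponential(8.0, 5000),
                                rng.exponential(15.0, 5000)])
peak, left, right = split_at_peak(sample)
```

Both flanks come out as non-negative magnitudes measured from the peak, which is the form assumed by the fits of Eqs. (1) and (2).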
The left flank is observed to be well approximated by an exponential function
of the form
$p(t_{L};\lambda)=\lambda e^{-\lambda t_{L}},\lambda>0,$ (1)
where $t_{L}$ are the rescaled arrival delays on the left flank, see Methods
for details. The exponent $\lambda$ here quantifies the exponential decay of
the probability of early arrivals. Therefore, a large $\lambda$ implies that
few flights arrive very early while a small $\lambda$ indicates that very
large negative delays are observed. Since we assume that passengers prefer to
arrive as early as possible, a small $\lambda$ indicates good performance.
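For model (1), the maximum-likelihood estimate of $\lambda$ is simply the reciprocal of the sample mean. A minimal sketch on synthetic data, using for illustration the LHR left-flank value $\lambda=0.131$ as the true parameter:

```python
import numpy as np

rng = np.random.default_rng(1)

# MLE for the left-flank exponential model (1): lambda_hat = 1 / mean(t_L).
# Synthetic rescaled early-arrival magnitudes stand in for real data here;
# 0.131 is the LHR value quoted in the text.
lam_true = 0.131
t_left = rng.exponential(1.0 / lam_true, size=50_000)
lam_hat = 1.0 / t_left.mean()
```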
The right flank of the delay distribution obeys a power law, i.e. a slow decay
of $p\sim t^{\nu}$, with $\nu$ negative. To quantitatively describe the right
flank, we use a $q$-exponential function [25] of the form
$p(t_{R};q,\lambda_{q})=(2-q)\lambda_{q}\left[1+(q-1)\lambda_{q}t_{R}\right]^{\frac{1}{1-q}},$
(2)
where $t_{R}$ are the rescaled arrival delays on the right flank, see Methods
for details. The power-law exponent, i.e. the rate at which the probability
density decays for high (positive) delay values, is given by
$\nu:=1/(1-q),1<q<2$. Note that the scale parameter $\lambda_{q}>0$ is
relevant for the precise fit but does not impact the power-law exponent $\nu$.
Since the power-law decay is controlled by the value $q$, we utilize $q$ to
characterize the right flank. Contrary to the left-flank exponential decay,
good performance is indicated by the absolute value of the right-flank power
law exponent $\nu$ being large. The reason is that large (absolute) values of
$\nu$ imply a rapid decay of the probability density of positive delays, i.e.
fewer extreme events of very delayed arrivals.
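A numpy-only sketch of such a $q$-exponential fit, using a grid search over $(q,\lambda_{q})$ as an illustrative stand-in for the MLE routine described in Methods; synthetic samples are drawn by inverting the CDF of (2), $F(t)=1-[1+(q-1)\lambda_{q}t]^{(2-q)/(1-q)}$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw samples from the q-exponential (2) by inverting its CDF, then
# recover (q, lam) by maximum likelihood on a simple parameter grid.
# Illustrative numpy-only stand-in for the paper's MLE fits.
q_true, lam_true = 1.2, 0.5                  # i.e. nu = 1/(1-q) = -5
u = rng.random(20_000)
t = ((1.0 - u) ** ((1 - q_true) / (2 - q_true)) - 1.0) / ((q_true - 1) * lam_true)

def nll(q, lam, t):
    """Negative log-likelihood of model (2)."""
    return -np.sum(np.log((2 - q) * lam) + np.log1p((q - 1) * lam * t) / (1 - q))

qs = np.arange(1.05, 1.60, 0.01)
lams = np.arange(0.10, 1.50, 0.02)
grid = np.array([[nll(q, l, t) for l in lams] for q in qs])
iq, il = np.unravel_index(grid.argmin(), grid.shape)
q_hat, lam_hat = qs[iq], lams[il]
```

The recovered $\hat{q}$ then gives the power-law index via $\nu = 1/(1-\hat{q})$, mirroring how the right-flank exponent is obtained in the text.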
Finally, we note that the two flanks describe the tails of the distribution
well, but overestimate the height of the peak, i.e. the most likely value, see
Fig. 3. To include more information on the most frequent delays, we complement
the two previous fits by using the mean delay $\mu$ as a third index. Here we
interpret a small positive $\mu$, or a negative $\mu$ (indicating early
arrival), as desirable for passengers. In the case of LHR, the three delay
indices that we introduced are $\lambda=0.131$, $\mu=-5.06$ and $\nu=-5.371$.
We also introduce a continuous smooth fitting function for the full range in the “Connecting the flanks” section.
Note that the mean value $\mu$ can be easily manipulated by airline companies
by scheduling flight arrival times later than actually needed, hence always
causing a negative mean delay, which may artificially improve their
performance. On the contrary, the tail behavior truthfully represents the
extreme event statistics for both positive and negative delays and cannot be
easily manipulated by the operators.
Figure 2: Splitting the full distribution at the peak leads to two easier-to-
fit flanks. Left: Negative delays decay approximately linearly in the log-
scale and thereby suggest an exponential fit (1). Right: Positive delays
display substantial heavy tails and thereby suggest the usage of a
$q$-exponential function (2). Figure 3: Exponential (green) and
$q$-exponential (blue) theoretical distributions capture the empirical
distribution. The fits are obtained via the MLE method, see Methods for
fitting details. Since the two fits overestimate the “peak” (tent-like shape), we complement them with the mean delay index $\mu$.
## IV Comparison of airports and airlines
We here use the previously developed framework to quantify and compare delay
statistics for different airlines and airports. Intuitively, we expect that
long-distance flights would, on average, yield more extreme early or late
arrivals, compared to the corresponding short-distance ones. Thus, we
distinguish between short-distance airlines, covering mostly domestic and
European destinations, and airlines that include long-distance, international
destinations, as well as destinations within Europe. We first compute the
three indices $\lambda,\mu,\nu$ for each of those airline groups and then
compare full airport statistics, aggregating all airlines.
There are several factors impacting the delay distribution for each airport or
airline: Airline policies, flight routes, technical defects or issues with
documentation contribute to $27\%$ of all delays [26]. Specifically, overseas
flights are more sensitive to wind (head wind or tail wind), as well as
unstable weather conditions (storms, fog) and military exercises. Airlines
operating international flights, as illustrated in Fig. 4, exhibit
considerable variations in their flight delay indices. Note that a low left
exponent $\lambda$ may be regarded as a desirable property (flights often
arrive very early) while good performance is definitely indicated by low mean
$\mu$ and right exponent $\nu$ (low mean delay and few very late arrivals).
Since the latter two quantities tend to be negative, their absolute values
should be large. Comparing the airlines, we observe a “grouping” behaviour for
some of the carriers. On the one hand, airlines having a blend between short-
distance (e.g. domestic or EU) and overseas destinations, such as Iberia,
British Airways (BA), Aer Lingus and Finnair, appear to follow a similar trend
for each index. On the other hand, airlines that do not possess such a spread
of destinations tend to perform well only in some of the indices. As an
illustrative example, we choose Air Canada and United Airlines: Although both
their left and right exponents are in a similar range to the other airlines,
their mean delays are substantially less negative than those of their
competitors.
Figure 4: International airlines appear to differ substantially in their three
delay indices. We plot the left-side (negative) delay exponential decay,
right-side (positive) delay power-law decay and the mean delay. Arrows
indicate whether a small or large value is desirable. Figure 5: Delay indices
for low-cost airlines not covering long-distance flights. Wizz Air, easyJet,
Ryanair and Vueling share the largest $\lambda$ index (early arrivals). Jet2
has the lowest mean delay $\mu$ and Vueling is characterized by the lowest
$\nu$ index (late arrivals).
Characterization of short-distance flights shows a strong grouping of the
delay behavior for some airlines. As seen in Fig. 5, a comparison of five of the largest low-cost domestic and European providers reveals a systematic
similarity between Wizz Air, easyJet and Ryanair. All three airlines manage to
perform well in the left exponent metric, maximizing early arrivals, while
they maintain an acceptable negative average delay (with easyJet obtaining the
lowest value here). Again, they are characterized by similar right-exponents,
translating to a certain share of overall late arrivals. Furthermore, Jet2
outperforms all other short-distance airlines in $\lambda$ left-exponents and
mean delays. Finally, Vueling resembles Wizz Air and Ryanair values in the
$\lambda$ and $\mu$ metrics but seems to have less late arrivals as per its
high right exponent $\nu$.
Comparing the long distance airlines with the short-distance ones, we notice
some differences: Airlines covering long distances tend to display lower (more
desirable) left exponents as well as more negative mean delays. Meanwhile, the
right exponent behavior is similar between the two groups, with Vueling and
Qatar Airways as the “outliers” in their respective categories. Whether this
behavior is due to company policies or flight distance remains a question for
future research.
Studying the indices for individual airports yields interesting insights as
well. Airports populated by airlines flying mainly to domestic and EU
destinations, such as LTN and STN, have a mixed score in both early and late
arrivals, with an approximately net zero mean delay, see Fig. 6. On the one
hand, STN is characterized by the minimum $\lambda$ value, showing the best
performance in early arrivals in the group of airports, while LTN attains the
maximum value. On the other hand, it can be seen that LTN scores the best
$\nu$ value while STN lies very slightly above the group median $\nu$.
Interestingly, mean delays at MAN airport are net zero, contrary to LHR and
LGW where arrivals are scheduled in such a way that the mean delay is
negative. Furthermore, MAN performs similarly to LGW in the early-arrivals
index, with a slightly worse score, but attains the second-best value with
respect to extreme positive delays. International airports LHR and LGW (with
the exception of LHR
COVID-19) tend to cluster around similar values for all delay indices.
LHR during the COVID-19 pandemic outperforms all airports on the mean delay
index by a large margin. Indeed, focusing on LHR, we see a clear difference
between the period prior to the pandemic ($\mu_{\text{LHR}}\approx-5$ min) and
during the pandemic ($\mu_{\text{LHR COVID19}}\approx-25$ min). The reason
behind this is that the dramatic reduction of flight traffic worldwide saw
many flights arriving too early. Interestingly, the left exponent, i.e. the
decay of early arrivals, did not change substantially, compared to LHR under
business-as-usual conditions since the shape of the delay distribution on the
left did not change much but was only shifted to more negative values. The
right flank behaves quite differently: both under business-as-usual conditions
and during the COVID-19 pandemic, LHR recorded relatively heavily delayed
flights, which arrived more than 3 hours late (see also Fig. 1). The right
index reveals the likelihood of these extreme events. In the case of LHR under
COVID-19, the low mean delay suggests early arrivals, but extreme events are
still present, and hence the right exponent reveals this poor performance.
Notice that we cannot fully exclude a sampling bias of the airline analysis
due to the different number of flights recorded for each airport: For a given
airline, e.g. BA, we use all flights at all airports in our data set. However,
since we recorded more total flights in LHR, the BA distribution is influenced
more by the LHR data than by other airports.
Figure 6: Airports appear to differ substantially in the three delay metrics.
Airports that serve mostly domestic and European destinations, such as LTN and
STN, behave differently from international airports such as LHR, LGW and MAN.
## V Superstatistical modelling of delays
As we have seen previously, the right flank of the delay statistics exhibits
heavy tails and is well-described by a $q$-exponential. Let us now explore a
potential explanation for this particular distribution by employing the
framework of superstatistics [27, 28, 15]. Superstatistics is relevant when an
aggregated system (e.g. a long time series) displays heavy tails, but the
system may then be disentangled into many smaller sub-parts (e.g. short time
periods of the trajectory). These sub-parts then are no longer heavy-tailed
but follow a simple local distribution, for example an exponential or a
Gaussian. This idea has been successfully applied, for example, to train
delays [16], electric power systems [17] and intermittent wind statistics
[29].
Assuming for now that the right-flank delays are indeed $q$-exponentially
distributed and follow a superstatistics, we should be able to observe “local”
exponential densities, with a decay parameter $\lambda$. Superimposing all
these $\lambda$, we get a $q$-exponential if the $\lambda$ themselves follow a
$\chi^{2}$-distribution:
$f(\lambda)=\frac{1}{\Gamma\left(\frac{n}{2}\right)}\left(\frac{n}{2\lambda_{0}}\right)^{\frac{n}{2}}\lambda^{\frac{n}{2}-1}e^{-\frac{n\lambda}{2\lambda_{0}}}.$
(3)
Here $n$ denotes the number of degrees of freedom characterizing the
fluctuations in $\lambda$ and $\lambda_{0}$ is the sample mean of $\lambda$.
Indeed, choosing an appropriate time scale to separate the trajectory (see
next paragraph), the heavy tails of the delay distributions vanish and instead
the distributions are well described by simple exponential functions, see Fig.
7.
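This mechanism can be verified numerically. Integrating an exponential density $\lambda e^{-\lambda t}$ against the $\lambda$-density of Eq. (3) yields, via a standard Gamma integral, exactly a $q$-exponential with $q = 1 + 2/(n+2)$ and $\lambda_{q} = \lambda_{0}(n+2)/n$. The following sketch (function names are ours, not from the paper's code) checks this relation using NumPy only:

```python
import numpy as np
from math import gamma as gamma_fn

def chi2_lambda_pdf(lam, n, lam0):
    """Eq. (3): chi^2-type density of the fluctuating rate lambda,
    i.e. a Gamma(n/2, rate = n/(2*lam0)) density with mean lam0."""
    k = n / 2.0
    return (k / lam0) ** k / gamma_fn(k) * lam ** (k - 1) * np.exp(-k * lam / lam0)

def qexp_pdf(t, q, lam_q):
    """q-exponential density as in Eq. (9)."""
    return (2 - q) * lam_q * (1 + (q - 1) * lam_q * t) ** (1.0 / (1 - q))

def mixture_pdf(t, n, lam0, lam_max=None, n_grid=40000):
    """Marginalize Exp(lambda) over Eq. (3) by trapezoidal integration."""
    lam_max = lam_max if lam_max is not None else 50 * lam0
    lam = np.linspace(1e-9, lam_max, n_grid)
    w = chi2_lambda_pdf(lam, n, lam0)
    integrand = lam[None, :] * np.exp(-np.outer(t, lam)) * w[None, :]
    dlam = np.diff(lam)
    # manual trapezoid rule (avoids NumPy-version differences)
    return np.sum((integrand[:, 1:] + integrand[:, :-1]) * 0.5 * dlam, axis=1)
```

For example, with $n=4$ and $\lambda_{0}=0.2$ the mixture equals $200/(10+t)^{3}$, which is exactly the $q$-exponential with $q=4/3$ and $\lambda_{q}=0.3$.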
Figure 7: We analyse the full time series of plane delays and extract a time
window during which we observe locally exponential distributions. These local
distributions can decay slowly or fast, i.e. the rate $\lambda$ is
fluctuating.
Let us explain how to extract the relevant time scale $T$ on which we locally
observe exponential distributions. Since we know that an exponential
distribution has a kurtosis of $\kappa_{\text{exponential}}=9$, we test time
windows of different size $\Delta\tau$ and compute the local average kurtosis
[15] as
$\bar{\kappa}\left(\Delta\tau\right)=\frac{1}{\tau_{\text{max}}-\Delta\tau}\int_{0}^{\tau_{\text{max}}-\Delta\tau}d\tau_{0}\frac{\langle\left(u-\bar{u}\right)^{4}\rangle_{\tau_{0},\Delta\tau}}{\langle\left(u-\bar{u}\right)^{2}\rangle_{\tau_{0},\Delta\tau}^{2}},$
(4)
where $\tau_{\text{max}}$ is the length of the time series $u$ and $\bar{u}$
is the mean of the time series. We denote by
$\langle\dots\rangle_{\tau_{0},\Delta\tau}$ the expectation formed for a time
slice of length $\Delta\tau$ starting at $\tau_{0}$. For the LHR data, we
compute the local kurtosis and thereby determine the long time scale:
$\bar{\kappa}\left(T\right)=9$, for $T\approx 1.55h$, see Fig. 8.
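A discrete version of the local-kurtosis scan in Eq. (4) can be sketched as follows (a simplified sliding-window estimator of our own, not the authors' exact implementation):

```python
import numpy as np

def local_avg_kurtosis(u, window):
    """Discrete analogue of Eq. (4): average the (non-excess) sample
    kurtosis over all windows of length `window` sliding along u."""
    u = np.asarray(u, dtype=float)
    ks = []
    for start in range(len(u) - window + 1):
        seg = u[start:start + window]
        d = seg - seg.mean()
        m2 = np.mean(d ** 2)
        if m2 > 0:  # skip degenerate constant windows
            ks.append(np.mean(d ** 4) / m2 ** 2)
    return float(np.mean(ks))
```

Sweeping `window` over a range of sizes and locating where the returned value crosses 9 (the kurtosis of an exponential distribution) then gives the time scale $T$.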
Next, let us carry out an important consistency check: As explained above, the
mixing of numerous local exponential distributions with exponents following a
$\chi^{2}$-distribution leads to a $q$-exponential. Now, we can make a
histogram of the $\lambda$-distribution and fit it with a $\chi^{2}$- and an
inverse $\chi^{2}$-distribution. Then, we derive the $q$-exponential from the
fitted $\chi^{2}$-distribution and compare it with the direct fit of the
$q$-exponential and the original data. This is illustrated in Fig. 9.
We note that the empirical $\lambda$-distribution is slightly better fitted by
an inverse $\chi^{2}$- than by a $\chi^{2}$-distribution, as also observed in
other application areas [30, 18]. Overall, the superstatistical description
seems consistent, given the short time series of flight delays under
consideration. The $q$-exponential derived from the $\chi^{2}$ tends to
overestimate the PDF at low values, which is understandable as we also exclude
them for the fitting of the $q$-exponential via MLE (see Methods). Still, the
tail behavior of the $q$-exponential based on the $\chi^{2}$ matches the real
data and the MLE fit nicely. This means the observed power laws of the right
flanks are essentially explained by a suitable superstatistics which describes
changes in the microvariables on a time scale of $T\approx 1.5$ hours.
Figure 8: The average kurtosis $\bar{\kappa}$ of the data set is plotted as a
function of the time window $\Delta\tau$ in hours (yellow). The intersection
between the horizontal line at $\bar{\kappa}=9$ (the kurtosis of an
exponential distribution) and the $\bar{\kappa}$ vs $\Delta\tau$ curve gives the
optimal value for $\Delta\tau$; we find $T\approx 1.55$ hours.
Figure 9: Applying superstatistics leads to consistent results. Left: We
extract the distribution of local exponents and compare them to a $\chi^{2}$
and inverse $\chi^{2}$ fit (based on the method of least squares). Right:
Using the previously derived $\chi^{2}$ distribution, we again derive a
$q$-exponential with right exponent $\nu_{\chi^{2}}\approx-5.296$, compared to
the fitted one of $\nu_{\text{MLE}}\approx-5.371$. We note that the power-law
decay of the data is well captured by the $q$-exponential induced by the
$\chi^{2}$-distribution. The blue curve is scaled to the same amplitude as the
data for visual guidance.
## VI Connecting the flanks
So far, we focused our attention on describing and fitting the tail aspects of
the distribution, namely the left, approximately exponential, flank and the
right, approximately $q$-exponential, flank. Both these functions combined
overestimate the peak of the distribution and hence, we also included the mean
delay as the final metric in our framework. Now, let us consider how the two
tail distributions could be merged in one smooth-fitting function.
First, we note that the so far mostly ignored central part of the delay
distribution can be approximated by a Gaussian distribution, based on the
parabola shape in the log-scale plots. We use this insight to propose the
following continuous fitting function
$\displaystyle p(t)=\begin{cases}A_{e}\exp\left(-\lambda\sqrt{C+(t-t_{\text{peak}})^{2}}\right),&t<t_{\text{peak}}\\ A_{q}\exp_{q}\left(-\lambda_{q}\sqrt{C+(t-t_{\text{peak}})^{2}}\right),&t\geq t_{\text{peak}}\end{cases}$ (5)
with
$\exp_{q}(t)=(2-q)\lambda_{q}\left[1+(q-1)\lambda_{q}t\right]^{\frac{1}{1-q}}$
being the $q$-exponential function. Here, $A_{e}$ and $A_{q}$ are amplitudes,
$C$ is a curvature parameter describing the approximately Gaussian part in the
center, $t_{\text{peak}}$ is the delay at the peak of the delay distribution
(where we split into left and right flanks), and $t$ is the delay value; see
Methods for fitting details and code.
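As an illustration, Eq. (5) can be evaluated as a piecewise NumPy function (a sketch with our own parameter ordering; continuity at $t_{\text{peak}}$ still has to be enforced through the amplitudes, as done by the _symfit_ constraints mentioned in Methods):

```python
import numpy as np

def q_exp(x, q, lam_q):
    """q-exponential factor as used in Eq. (5)."""
    return (2 - q) * lam_q * (1 + (q - 1) * lam_q * x) ** (1.0 / (1 - q))

def combined_pdf(t, A_e, A_q, lam, lam_q, q, C, t_peak):
    """Piecewise fit function of Eq. (5): exponential left flank and
    q-exponential right flank, joined through sqrt(C + (t - t_peak)^2),
    which flattens to an approximately Gaussian shape near the peak."""
    t = np.asarray(t, dtype=float)
    arg = np.sqrt(C + (t - t_peak) ** 2)
    left = A_e * np.exp(-lam * arg)
    right = A_q * q_exp(arg, q, lam_q)
    return np.where(t < t_peak, left, right)
```

Since the argument $\sqrt{C+(t-t_{\text{peak}})^{2}}$ is the same on both sides of the peak, fixing one amplitude in terms of the other makes the curve continuous at $t_{\text{peak}}$.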
The resulting fit is a smooth function, covering the full delay range, see
Fig. 10. Since the new curvature parameter $C$ also influences the general
shape, the new values for $q$ and $\lambda$, now named $\tilde{q}$ and
$\tilde{\lambda}$, are slightly different from the ones solely focusing on the
tails (empirically we tend to observe a slight reduction in $\lambda$ and
increase in $q$). Still, the general observations using the delay indices and
comparing airlines, such as in Figs. 4-6, remain mostly unchanged. Equation
(5) provides an alternative approach to the three delay indices introduced so
far. If one is interested in describing the full distribution as accurately as
possible, we recommend using equation (5). Meanwhile, to compare performance
of individual airlines or to obtain a general impression of the delay
distribution, the three delay indices are a simplified framework, allowing
easy and robust estimation and comparison. Finally, note that the full curve
is not strictly a probability density function, as we did not enforce that its
integral equals one. While enforcing normalization would in principle simplify
the model by reducing the number of free parameters, it would make the fitting
more difficult in practice, since the integrals cannot be evaluated
analytically and would impose additional constraints during the fitting. Also
note that our observed flight delays are constrained to the finite interval
$[-100,210]$, whereas the fitting function is defined on $(-\infty,\infty)$,
which makes the normalization outside the interval ambiguous.
Figure 10: Using the approximately Gaussian shape in the center, we smoothly
combine left and right flank fits into one coherent fit of the full delay data
set. To emphasize the quality of the fit, we display both a linear (left) and
logarithmic (right) scale of the PDF for LHR (top) and LGW (bottom), the two
airports with the most flights in our data set.
## VII Discussion and Conclusions
In summary, we have analysed a newly obtained data set of plane delays for
various British airports, which contains tens of thousands of flights,
aggregated over multiple months. We believe this is a substantial improvement
on some earlier studies which, to the best of our knowledge, only investigated
a few days of measurements and a couple of thousand flights, thereby greatly
underestimating the contribution of the tails to the probability distribution
[31]. Interestingly, we find that all investigated airports and even
individual airlines at each airport follow a qualitatively similar
distribution, namely an approximately exponential decay on the left flank (of
negative delays) and a slowly decaying power law on the right flank (of
positive delays). To characterize these distributions and systematically
compare airlines and airports, we have developed a framework to quantify delay
performance. Critically, we do not merely use the mean delay but also consider
extreme events of both positive and negative delays via their respective
flanks in the empirical probability distribution. Applying this newly
developed framework, we find substantial differences between airlines serving
short and long-distance routes.
We offer an explanation for the emerging power law on the right flank via
superstatistics: The local $q$-exponential distribution with its heavy tails
seems to arise from many superimposed exponential distributions. In
particular, we identify the long time scale $T$ as approximately 1.5 hours,
during which delays fall off exponentially. Comparing to other
superstatistical results [28, 27], we note the relevance of both
$\chi^{2}$-distributions and inverse-$\chi^{2}$-distributions for the scale
parameter, similar to the ones observed in air pollution or cancer [30, 18],
stressing again the universality of superstatistics. Finally, we propose a
continuous function to capture the full delay statistics. While this
introduces additional parameters and the superstatistical theory mentioned
previously can no longer be used to rigorously derive the fitting function,
this fit does describe the full distribution with high accuracy.
Our framework of three delay indices to characterize flight delay
distributions can be applied quite generally to measure the punctuality of
flights, going beyond an analysis based on just the mean. Crucially, while
airlines or airports might be able to “game” the system of mean delays, this
is not possible with the left and right exponents. Companies could shift their
flight schedule, i.e. intentionally announce that flights will take longer
than they do in practice, and thereby systematically record early arrivals,
pushing their mean delay to negative values. However, such a procedure would
still leave the remaining two indices (left and right exponent) untouched so
that they provide a stable way of measuring performance.
One remarkable result is the impact of the global pandemic of COVID-19 on the
delay statistics. Heathrow (LHR) under COVID-19 conditions (travel
restrictions, quarantine upon arrival, etc) displays an impressively low mean
delay, while the left flank decay was mostly unchanged. Interestingly, LHR
still experienced some relatively heavily delayed flights during the COVID-19
pandemic, which leads to pronounced heavy tails towards the right and thereby
a poor performance in the right exponent. These observations indicate that in
different (COVID-19) situations and given fewer flights, airports can perform
better in some aspects (e.g. mean delay) than under business-as-usual
conditions, while other observables (extreme delays) can still be improved.
Aside from the upsides of COVID-19-related lockdown measures on air quality
[32, 33] or $CO_{2}$ emissions [34], we find that having fewer flights also
improves delay statistics.
We have assumed throughout this article that negative delays are preferred by
all passengers. However, some passengers might value arrival at exactly the
predicted time more highly than arriving early. This would change the
interpretation of the left index slightly: Instead of desiring low exponents,
airlines and airports should aim for high exponents. Similarly, the absolute
value of the delay should be zero, i.e. arrival on time should be the default.
Regardless of preference, the indices, as introduced, provide a sufficient
framework to measure the delay performance.
In the future, we would like to apply our framework to delay statistics at
other airports in different countries, and investigate how delays are related
to geographical distance of the flights. In particular it would be interesting
to see how our three indices differ between years, countries and so on. From a
more fundamental perspective, we aim to further understand correlations in the
flight delays. Preliminary indications from the British data are that on
“typical” days correlations decay quickly but on some “exceptional” days
(perhaps those where external factors affect many flights) the autocorrelation
function can settle on a non-zero value for some time and many flights have
long delays which contribute to the tail of the probability density function.
Long-range temporal correlations and memory effects have been studied in many
other physical and non-physical systems [35, 36]; modelling such effects here
is challenging, since the build-up of delays at one airport may be influenced
by earlier flights to and from completely different airports, but practically
important since controlling the “cascading” of delays would lead to a
significantly improved passenger experience. In this way, future
investigations could take into account spatio-temporal information from the
entire worldwide air transportation network. More concretely, our data set
could be expanded in type of information as well as volume. First, it would be
interesting to also study departure delays, in addition to the arrival delays
studied here. Furthermore, we could explicitly include flight duration and
distance and investigate correlations between delays and flight
distance/duration for many different airports in the world.
### Acknowledgments
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under the Marie Sklodowska-Curie grant
agreement No 840825.
### Author contributions
E.M., B.S., contributed equally. E.M., B.S., and C.B. conceived and designed
the research. E.M. collected the data, E.M. and B.S. analysed the data and
produced the figures. R.J.H. and all other authors contributed to discussing
and interpreting the results and writing the manuscript.
### Competing interests
The authors declare no competing interests.
## Methods
### Data processing
As we mentioned in the main text, for each flight, we recorded the airline
company operating the flight, the flight number, the departure and arrival
airports as well as the scheduled and actual landing times, as provided on the
airport web page. The data was cleaned and organized according to the delay,
computed as the difference between scheduled arrival time and actual arrival
time for each flight. We kept data for each arrival airport as well as a
summary of the overall delays, independent of the arrival airport. A
“negative” delay occurs when the actual aircraft arrival is earlier than the
expected one, according to the scheduled timetable. After examining the data
it became evident that a reasonable cut-off point as to how early or late an
aircraft can arrive at the designated airport should be implemented. This
prevents over-representation of individual extreme events in the resulting
probability distributions. We decided that the delays (in minutes) would have
to be contained in the interval $[-100,210]$.
### Theoretical distribution fitting
Here we explain the fitting procedure in more detail. We approximate the
empirical distribution of the left flank, where negative delays are dominant,
with an exponential distribution of the form
$p(t_{L};\lambda)=\lambda e^{-\lambda t_{L}},\lambda>0.$ (6)
As we have seen in the main text, the observed distribution curves towards a
Gaussian distribution around the peak value and thereby deviates from an
exponential distribution. Hence, we restrict our fitting to values deviating
from the central area as follows. Let $t_{\text{peak}}$ be the delay at which
the distribution reaches its highest PDF value and $t_{\text{min}}$ the
smallest delay we observe. Then, we restrict our exponential fit to any delay
falling in the interval
$[t_{\text{min}},t_{\text{peak}}-0.3|t_{\text{min}}-t_{\text{peak}}|]$, where
$|...|$ indicates the absolute value. Following this restriction, we define
the left flank delay values as
$t_{L}=-t+t_{\text{peak}}-0.3|t_{\text{min}}-t_{\text{peak}}|,t\in[t_{\text{min}},t_{\text{peak}}-0.3|t_{\text{min}}-t_{\text{peak}}|].$
(7)
We now turn to the right flank of the empirical distribution, i.e. the portion
of the data set that constitutes the majority of the positive delays. The
$q$-exponential is much better at incorporating parts of the Gaussian central
distribution on the right-hand side than the exponential distribution is on
the left flank. Hence, we only exclude the smallest $10\%$ of the data, i.e.
we consider delays $t$ in the interval
$[t_{\text{peak}}+0.1|t_{\text{max}}-t_{\text{peak}}|,t_{\text{max}}]$, where
$t_{\text{max}}$ is the highest delay observed. Hence the right-flank delays
to be fitted are defined as
$t_{R}=t-t_{\text{peak}}-0.1|t_{\text{max}}-t_{\text{peak}}|,t\in\left[t_{\text{peak}}+0.1|t_{\text{max}}-t_{\text{peak}}|,t_{\text{max}}\right].$
(8)
Our theoretical distribution choice is now a $q$-exponential
$p(t_{R};q,\lambda_{q})=(2-q)\lambda_{q}\left[1+(q-1)\lambda_{q}t_{R}\right]^{\frac{1}{1-q}},$
(9)
with parameters $\lambda_{q}$ and $q$. It has been shown that $q$-exponentials
and $q$-Gaussians arise from maximizing Tsallis entropy [25].
Note that both $t_{L}$ and $t_{R}$ are defined such that they start at 0 and
continue towards positive values, which keeps the fitting functions simple.
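The flank selections and shifts of Eqs. (7) and (8) amount to the following index-and-shift operations on an array of delays (helper names are ours):

```python
import numpy as np

def left_flank_values(t, t_peak, t_min, frac=0.3):
    """Eq. (7): keep delays in [t_min, t_peak - frac*|t_min - t_peak|]
    and flip/shift them so the transformed values start at 0."""
    cut = t_peak - frac * abs(t_min - t_peak)
    sel = t[(t >= t_min) & (t <= cut)]
    return cut - sel

def right_flank_values(t, t_peak, t_max, frac=0.1):
    """Eq. (8): keep delays in [t_peak + frac*|t_max - t_peak|, t_max]
    and shift them so the transformed values start at 0."""
    cut = t_peak + frac * abs(t_max - t_peak)
    sel = t[(t >= cut) & (t <= t_max)]
    return sel - cut
```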
These two functions (exponential and $q$-exponential) are fitted to the data
using a maximum likelihood estimate (MLE), i.e. maximizing the Likelihood
$L(\mathbf{\theta},\mathbf{x})$. Here, $\mathbf{x}$ indicates the data we wish
to fit and $\mathbf{\theta}$ the set of parameters that are being optimized.
The likelihood of a parameter setting $\mathbf{\theta}$ on a given one-
dimensional data set $\mathbf{x}=\left(x_{1},x_{2},...,x_{N}\right)$ is
computed as
$L(\mathbf{\theta},\mathbf{x})=\prod_{i=1}^{N}p(x_{i},\mathbf{\theta}),$ (10)
with probability density function $p(x_{i},\mathbf{\theta})$, dependent on the
parameters $\mathbf{\theta}$. Technically, we carry out the MLE using the
_scipy.stats_ module in Python with custom PDFs; see also Code availability
(below) for a link to the code.
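For the exponential flank the MLE has the closed form $\hat{\lambda}=1/\bar{t}_{L}$; for the $q$-exponential, the likelihood must be maximized numerically. The sketch below replaces the paper's _scipy.stats_ custom-PDF fit with a plain NumPy grid search (our own simplification, not the authors' code):

```python
import numpy as np

def fit_exponential_mle(t):
    """Closed-form MLE for Eq. (6): lambda_hat = 1 / mean(t)."""
    return 1.0 / np.mean(t)

def fit_qexp_grid(t, q_grid, lam_grid):
    """Maximize the q-exponential likelihood of Eq. (9), Eq. (10),
    over a coarse (q, lambda_q) parameter grid."""
    best, best_nll = None, np.inf
    for q in q_grid:
        for lam in lam_grid:
            arg = 1 + (q - 1) * lam * t
            if np.any(arg <= 0):  # density undefined here
                continue
            # log p = log((2-q)*lam) + log(arg)/(1-q)
            nll = -np.sum(np.log((2 - q) * lam) + np.log(arg) / (1 - q))
            if nll < best_nll:
                best, best_nll = (q, lam), nll
    return best
```

In practice a continuous optimizer (as in _scipy.stats_' `fit`) replaces the grid, but the likelihood being maximized is the same.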
### Fitting the smooth combined function
To obtain a smooth fit, combining both flanks, we employ the following
procedure. We first estimate the exponential decay rate $\lambda$ based on the
lowest 70% of negative delays, then estimate $q$ and the $q$-exponential decay
rate $\lambda_{q}$ based on almost the full right-hand side of the histogram.
This is identical to the procedure for the individual flanking fits. Next, we
estimate the central curvature $C$, which we assume to be identical for both
intervals, and the amplitudes $A_{e}$ and $A_{q}$, as well as $\lambda_{q}$
using least-squares fitting. While carrying out this least-squares fit, we
also allow the parameters $q$ and $\lambda$ to vary slightly from the
MLE-optimal values determined earlier, while all other parameters are
unbounded. The reason for allowing this small variation is to ensure a
continuous fit while keeping the change from the optimal MLE parameters small.
Empirically, we find that
restricting $0.95\ q_{\text{MLE}}\leq\tilde{q}\leq 1.15\ q_{\text{MLE}}$ and
$0.95\ \lambda_{\text{MLE}}\leq\tilde{\lambda}\leq 1.05\ \lambda_{\text{MLE}}$
yields the best results. Technically, we use the _scipy.stats_ module to
perform the MLE fits and the least-square fit; continuity is ensured using
constraints in the _symfit_ package.
### Airline data
In Figs. 4 and 5 we compared several airlines. Let us briefly list how many
flights we analysed to derive our delay indices: For the short-distance
airlines “Wizz Air”: 2428, “easyJet”: 15449, “Ryanair”: 13488, “Vueling”:
1034, “Jet2”: 1215; for the other airlines we have “Iberia”: 12892, “British
Airways”: 38257, “Aer Lingus”: 7331, “Finnair”: 8560, “American Airlines”:
23119, “Air Canada”: 7247, “United Airlines”: 6797, “Japan Airlines”: 5966,
“Qatar Airways”: 5935. For all airlines we have at least 1000 flights and
often several thousand flights.
### Data availability
The original data of airport arrivals has been uploaded to an open repository:
https://osf.io/snav9. All data that support the results presented in the
figures of this study are available from the authors upon reasonable request.
### Code availability
Python code to reproduce figures, perform the fits and extract the delay
indices, is also uploaded here: https://osf.io/snav9/.
## References
* Hakim and Merkert [2016] M. M. Hakim and R. Merkert, The causal relationship between air transport and economic growth: Empirical evidence from south asia, Journal of Transport Geography 56, 120 (2016).
* Brida _et al._ [2018] J. G. Brida, P. D. Monterubbianesi, and S. Zapata-Aguirre, Exploring causality between economic growth and air transport demand for argentina and uruguay, World Review of Intermodal Transportation Research 7, 310 (2018).
* Suau-Sanchez _et al._ [2020] P. Suau-Sanchez, A. Voltes-Dorta, and N. Cugueró-Escofet, An early assessment of the impact of covid-19 on air transport: Just another crisis or the end of aviation as we know it?, Journal of Transport Geography (2020).
* Kuhn _et al._ [2011] H. Kuhn, C. Falter, and A. Sizmann, Renewable energy perspectives for aviation, in _Proceedings of the 3rd CEAS Air &Space Conference and 21st AIDAA Congress, Venice, Italy_ (2011) pp. 1249–1259.
* Efthymiou _et al._ [2019] M. Efthymiou, E. T. Njoya, P. L. Lo, A. Papatheodorou, and D. Randall, The Impact of Delays on Customers’ Satisfaction: an Empirical Analysis of the British Airways On-Time Performance at Heathrow Airport, Journal of Aerospace Technology and Management 11 (2019).
* Rebollo and Balakrishnan [2014] J. J. Rebollo and H. Balakrishnan, Characterization and prediction of air traffic delays, Transportation Research Part C: Emerging Technologies 44, 231 (2014).
* Rosenberger _et al._ [2002] J. M. Rosenberger, A. J. Schaefer, D. Goldsman, E. L. Johnson, A. J. Kleywegt, and G. L. Nemhauser, A stochastic model of airline operations, Transportation Science 36, 357 (2002).
* Mueller and Chatterji [2002] E. Mueller and G. Chatterji, Analysis of aircraft arrival and departure delay characteristics, in _AIAA’s Aircraft Technology, Integration, and Operations (ATIO) 2002 Technical Forum_ (2002) p. 5866.
* Gui _et al._ [2019] G. Gui, F. Liu, J. Sun, J. Yang, Z. Zhou, and D. Zhao, Flight delay prediction based on aviation big data and machine learning, IEEE Transactions on Vehicular Technology 69, 140 (2019).
* Li and Ryerson [2019] M. Z. Li and M. S. Ryerson, Reviewing the DATAS of aviation research data: Diversity, availability, tractability, applicability, and sources, Journal of Air Transport Management 75, 111 (2019).
* Fleurquin _et al._ [2013] P. Fleurquin, J. J. Ramasco, and V. M. Eguiluz, Systemic delay propagation in the US airport network, Scientific Reports 3, 1159 (2013).
* Pyrgiotis _et al._ [2013] N. Pyrgiotis, K. M. Malone, and A. Odoni, Modelling delay propagation within an airport network, Transportation Research Part C: Emerging Technologies 27, 60 (2013).
* Verma _et al._ [2014] T. Verma, N. A. Araújo, and H. J. Herrmann, Revealing the structure of the world airline network, Scientific Reports 4, 1 (2014).
* Guimera _et al._ [2005] R. Guimera, S. Mossa, A. Turtschi, and L. N. Amaral, The worldwide air transportation network: Anomalous centrality, community structure, and cities’ global roles, Proceedings of the National Academy of Sciences 102, 7794 (2005).
* Beck _et al._ [2005] C. Beck, E. G. D. Cohen, and H. L. Swinney, From time series to superstatistics, Physical Review E 72, 056133 (2005).
* Briggs and Beck [2007] K. Briggs and C. Beck, Modelling train delays with q-exponential functions, Physica A: Statistical Mechanics and its Applications 378, 498 (2007).
* Schäfer _et al._ [2018] B. Schäfer, C. Beck, K. Aihara, D. Witthaut, and M. Timme, Non-gaussian power grid frequency fluctuations characterized by lévy-stable laws and superstatistics, Nature Energy 3, 119 (2018).
* Williams _et al._ [2020] G. Williams, B. Schäfer, and C. Beck, Superstatistical approach to air pollution statistics, Physical Review Research 2, 013019 (2020).
* Metzler [2020] R. Metzler, Superstatistics and non-Gaussian diffusion, The European Physical Journal Special Topics 229, 711 (2020).
* Chubynsky and Slater [2014] M. V. Chubynsky and G. W. Slater, Diffusing diffusivity: a model for anomalous, yet brownian, diffusion, Physical Review Letters 113, 098302 (2014).
* Itto and Beck [2021] Y. Itto and C. Beck, Superstatistical modelling of protein diffusion dynamics in bacteria, Journal Royal Society Interface 18, 20200927 (2021).
* UK Civil Aviation Authority [2019] UK Civil Aviation Authority, UK Airports - Annual Statements of Movements, Passengers and Cargo -Table 09 (2019), [Online; accessed 10-September-2020].
* Kalyeena Makortoff [2020] Kalyeena Makortoff, Heathrow cargo flights rise 500% as airport restyles itself as ‘vital airbridge’ (2020), [Online; accessed 19-August-2020].
* Nižetić [2020] S. Nižetić, Impact of coronavirus (covid-19) pandemic on air transport mobility, energy, and environment: A case study, International Journal of Energy Research 44, 10953 (2020).
* Tsallis [1988] C. Tsallis, Possible generalization of boltzmann-gibbs statistics, Journal of Statistical Physics 52, 479 (1988).
* EUROCONTROL [2018] EUROCONTROL, Delays – three questions and many answers (2018), [Online; accessed 10-September-2020].
* Beck [2001] C. Beck, Dynamical foundations of nonextensive statistical mechanics, Physical Review Letters 87, 180601 (2001).
* Beck and Cohen [2003] C. Beck and E. G. D. Cohen, Superstatistics, Physica A: Statistical Mechanics and its Applications 322, 267 (2003).
* Weber _et al._ [2019] J. Weber, M. Reyers, C. Beck, M. Timme, J. G. Pinto, D. Witthaut, and B. Schäfer, Wind power persistence characterized by superstatistics, Scientific Reports 9, 1 (2019).
* Chen and Beck [2008] L. L. Chen and C. Beck, A superstatistical model of metastasis and cancer survival, Physica A: Statistical Mechanics and its Applications 387, 3162 (2008).
* Caccavale _et al._ [2014] M. V. Caccavale, A. Iovanella, C. Lancia, G. Lulli, and B. Scoppola, A model of inbound air traffic: The application to Heathrow airport, Journal of Air Transport Management 34, 116 (2014).
* Shrestha _et al._ [2020] A. M. Shrestha, U. B. Shrestha, R. Sharma, S. Bhattarai, H. N. T. Tran, and M. Rupakheti, Lockdown caused by covid-19 pandemic reduces air pollution in cities worldwide, EarthArxiv (2020).
* Schäfer _et al._ [2020] B. Schäfer, R. Verma, A. Giri, H. He, S. Nagendra, M. Khare, and C. Beck, Covid-19 impact on air quality in megacities, arXiv preprint arXiv:2007.00755 (2020).
* Le Quéré _et al._ [2020] C. Le Quéré, R. B. Jackson, M. W. Jones, A. J. Smith, S. Abernethy, R. M. Andrew, A. J. De-Gol, D. R. Willis, Y. Shan, J. G. Canadell, _et al._ , Temporary reduction in daily global CO2 emissions during the covid-19 forced confinement, Nature Climate Change , 1 (2020).
* Rangarajan and Ding [2003] G. Rangarajan and M. Ding, eds., _Processes with Long-Range Correlations: Theory and Applications_ , Lecture Notes in Physics, Vol. 621 (Springer-Verlag, Berlin, Heidelberg, 2003).
* Beran _et al._ [2013] J. Beran, Y. Feng, S. Ghosh, and R. Kulik, _Long-Memory Processes: Probabilistic Properties and Statistical Methods_ , berlin heidelberg ed. (Springer-Verlag, 2013).
|
ATL-PHYS-PROC-2021-004 August 29, 2024
Measurement of the inclusive and differential cross section of a top quark
pair in association with a $Z$ boson at $13\,\text{TeV}$ with the ATLAS
detector
Florian Fischer, on behalf of the ATLAS Collaboration
Fakultät für Physik, Ludwig-Maximilians-Universität München, 85748 Garching, Germany
Work supported by BMBF, Germany (FSP-103). Copyright 2024 CERN for the benefit of the ATLAS Collaboration. CC-BY-4.0 license.
> The inclusive as well as differential cross section of the associated
> production of top-antitop quark pairs and a $Z$ boson ($t\overline{t}Z$) is
> measured in final states with exactly three or four isolated leptons
> (electrons or muons). For this purpose, the full LHC Run 2 dataset of
> proton-proton collisions recorded by the ATLAS detector from $2015$ to
> $2018$, which corresponds to an integrated luminosity of
> $139\text{\,}{\mathrm{fb}}^{-1}$, is used. The inclusive production cross
> section is measured to be $\sigma_{t\overline{t}Z}=1.05\pm
> 0.05\,(\text{stat.})\,\pm 0.09\,(\text{syst.})\,\text{pb}$, which is in
> agreement with the most precise Standard Model theoretical prediction.
> Absolute and normalised differential cross section measurements are
> performed as a function of various kinematic variables in order to probe the
> kinematics of the $t\overline{t}Z$ system within both parton- and particle-
> level phase spaces.
> PRESENTED AT
>
> $13^{\mathrm{th}}$ International Workshop on Top Quark Physics
> Durham, UK (videoconference), 14–18 September, 2020
## 1 Introduction
The coupling of the top quark to the $Z$ boson is precisely predicted within
the Standard Model (SM) of particle physics by the theory of the electroweak
interaction. However, experimentally it is not yet well constrained and its
value can significantly vary in many models including physics beyond the
Standard Model (BSM). A process that is particularly sensitive to this
coupling is the associated production of a top-antitop quark pair with a $Z$
boson ($t\overline{t}Z$). The large centre-of-mass energy of the Large Hadron
Collider (LHC) [1] at CERN and the tremendous amount of data collected in
recent years have opened up the possibility to study this rare process which
was previously inaccessible due to its small production cross section. As
$t\overline{t}Z$ production contributes to the background processes in many
searches at the LHC for both SM and BSM physics, a better understanding of the
$t\overline{t}Z$ process can further enhance the experimental reach in such
analyses.
The results of previous inclusive measurements by the ATLAS [2] and CMS [3]
collaborations agree very well with the SM prediction [4, 5, 6]. A first
measurement of differential $t\overline{t}Z$ cross sections was conducted by
CMS only recently [7]. The first analysis using the full LHC Run 2 dataset was performed by ATLAS using $139\text{\,}{\mathrm{fb}}^{-1}$ of proton-proton ($pp$) collision data [8], and is presented in the following.
## 2 Analysis channels
The most sensitive decay channels in which to perform measurements of the
$t\overline{t}Z$ process feature a multi-lepton final state with exactly three
or four isolated electrons or muons. Based on these signatures, different
signal regions are defined and optimised, referred to as trilepton ($3\ell$)
and tetralepton ($4\ell$) signal regions, depending on the respective lepton
multiplicity.
Three signal regions are defined for the trilepton decay channel, and four
signal regions are defined for the tetralepton decay channel. Of all lepton
pairs with opposite sign of the charge and of the same flavour (OSSF), the one
with the value of its invariant mass closest to the $Z$ boson mass is
considered to originate from the $Z$ boson decay. Furthermore, the difference
between its invariant mass and the $Z$ boson mass must not be greater than
$10\text{\,}\mathrm{GeV}$. Contributions from events featuring low-mass
resonances are suppressed by requiring all OSSF lepton combinations to have a
mass greater than $10\text{\,}\mathrm{GeV}$. Additionally, the sum of the
lepton charges is required to equal $\pm 1$ and $0$ in the $3\ell$ and in
the $4\ell$ case, respectively. The trilepton signal regions differ from each
other by the number of selected jets and $b$-jets, where the latter are tagged
with different efficiency working points depending on the required $b$-jet
multiplicity. Similarly, the tetralepton signal regions are categorised into
same-flavour and different-flavour regimes of the two non-$Z$ leptons, and
each case is again subdivided into a regime with either exactly one or at
least two $b$-jets. In addition, depending on the flavour composition of the
non-$Z$ lepton pair and $b$-jet multiplicity, different thresholds on the
missing transverse energy are required.
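The $Z$-candidate selection described above can be illustrated with a short Python sketch. This is a toy implementation with an invented event format, not the ATLAS analysis code; the pair-finding logic (OSSF pairing, $10\text{\,}\mathrm{GeV}$ mass window around the $Z$, low-mass veto) follows the selection as stated in the text.

```python
from itertools import combinations
import math

M_Z = 91.1876  # GeV (PDG value of the Z boson mass)

def inv_mass(p1, p2):
    """Invariant mass of two four-momenta (E, px, py, pz), in GeV."""
    E, px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))

def z_candidate(leptons, window=10.0, low_mass_cut=10.0):
    """Return (pair, mass) for the OSSF pair closest to M_Z, or None
    if the event fails the selection.  Each lepton is a dict with
    keys 'flav', 'charge', 'p4' (hypothetical format for this sketch)."""
    pairs = [(a, b) for a, b in combinations(leptons, 2)
             if a['flav'] == b['flav'] and a['charge'] == -b['charge']]
    if not pairs:
        return None
    masses = [inv_mass(a['p4'], b['p4']) for a, b in pairs]
    # low-mass veto: every OSSF combination must exceed the cut
    if any(m < low_mass_cut for m in masses):
        return None
    best = min(zip(pairs, masses), key=lambda pm: abs(pm[1] - M_Z))
    return best if abs(best[1] - M_Z) < window else None
```

For a trilepton event with a back-to-back $e^{+}e^{-}$ pair of mass $91.2\text{\,}\mathrm{GeV}$ plus a muon, the electron pair is returned as the $Z$ candidate.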
## 3 Background estimation
Background processes – physics processes described by the Standard Model other
than $t\overline{t}Z$ – are subdivided into prompt and non-prompt
contributions.
The dominant prompt background processes are $WZ\text{\,+\,jets}$ and $ZZ\text{\,+\,jets}$ production, which feature three or four isolated leptons in the final state, respectively. Dedicated control regions are used to estimate the light-flavour components of these backgrounds during the fit employed for the
inclusive cross-section measurement. These regions are defined such that they
are orthogonal to the respective signal regions and are predominantly
populated with events featuring $WZ/ZZ\text{\,+\,jets}$ light-flavour
components. In contrast, the charm- and bottom-flavour components are constrained in the fit, with corresponding uncertainties assigned that are related to the simulation of heavy-flavour components. Further SM background
processes considered such as the associated production of single top quarks or
top-antitop quark pairs with heavy vector bosons are estimated directly from
simulated Monte Carlo (MC) samples.
Background contributions from leptons from secondary decays (“non-prompt”) or
so-called fake leptons (objects misidentified as leptons), however, are
estimated employing a data-driven method, referred to as matrix method.
Details about this method can be found in the reference documents [9] and
[10].
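The matrix method itself is documented in Refs. [9] and [10]; as a purely illustrative sketch (a minimal single-lepton version, not the ATLAS implementation), its core idea reduces to inverting a $2\times 2$ system relating loose/tight event counts to real/fake lepton yields:

```python
def fake_yield_in_tight(n_loose, n_tight, eps_real, eps_fake):
    """Single-lepton matrix method (toy version, for illustration only).

    A real (fake) lepton passing the loose selection also passes the
    tight one with probability eps_real (eps_fake):
        n_loose = n_real + n_fake
        n_tight = eps_real * n_real + eps_fake * n_fake
    Inverting this 2x2 system gives the fake-lepton yield in the
    tight (signal) selection."""
    det = eps_fake - eps_real
    n_fake = (n_tight - eps_real * n_loose) / det
    return eps_fake * n_fake
```

For example, with $\varepsilon_{\rm real}=0.9$, $\varepsilon_{\rm fake}=0.2$, 150 loose and 100 tight events, the estimated fake yield in the tight selection is 10 events. The full analysis generalises this to multi-lepton final states and propagates the associated uncertainties.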
## 4 Results
The inclusive $t\overline{t}Z$ production cross section is extracted by
performing a simultaneous maximum-likelihood fit to the number of events in
the trilepton and tetralepton signal regions, as well as the
$WZ/ZZ\text{\,+\,jets}$ control regions. The fit has three free parameters: the ratio between the measured value of the inclusive $t\overline{t}Z$ production cross section and its corresponding Standard Model
prediction, referred to as signal strength, as well as the normalisation
factors of the $WZ/ZZ\text{\,+\,jets}$ backgrounds used to extrapolate the
corresponding event yields into the signal regions. The inclusive
$3\ell+4\ell$ cross section of $t\overline{t}Z$ production in $pp$-collision
data at a centre-of-mass energy of $13\text{\,}\mathrm{TeV}$ is measured to
be:
$\sigma(pp\to t\overline{t}Z)=1.05\pm 0.05\,(\text{stat.})\,\pm
0.09\,(\text{syst.})\,\text{pb}=\left(1.05\pm 0.10\right)\,\text{pb}$ (1)
This result agrees with the dedicated theoretical prediction [11] of
$\sigma_{t\overline{t}Z}^{\mathrm{NLO+NNLL}}=0.863^{+0.07}_{-0.09}\,(\mathrm{scale})\pm
0.03\,(\mathrm{PDF+\alpha_{s}})\,\mathrm{pb}\quad.$ (2)
The uncertainties on this result are dominated by the systematic uncertainties
of which the most important ones are related to the modelling of the parton
shower in the signal Monte Carlo, the modelling of various background
processes, and the $b$-tagging procedure.
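The quoted total uncertainty in Eq. (1) follows from combining the statistical and systematic components in quadrature, which can be checked in one line:

```python
import math

stat, syst = 0.05, 0.09               # uncertainties from Eq. (1), in pb
total = math.sqrt(stat**2 + syst**2)  # quadrature sum, ~0.103 pb
print(round(total, 2))                # rounds to the quoted +-0.10 pb
```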
In addition to the inclusive result, the $t\overline{t}Z$ cross section is
measured as a function of different variables sensitive to the kinematics and
the production of the $t\overline{t}Z$ system. For this purpose, a total of
nine such variables are unfolded to parton and particle level, employing the
Iterative Bayesian Unfolding method [12]. On parton level, the (anti-)top
quark and the $Z$ boson can be directly accessed before the decay within Monte
Carlo simulation, whereas on particle level these have to be reconstructed
from simulated stable particles without any modelling of their interaction
with the detector material or pile-up. In this way, events are corrected for
detector effects and results can be directly compared to theoretical
calculations.
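As a hedged sketch of the Iterative Bayesian Unfolding of Ref. [12] (a minimal version of the d'Agostini iteration, without the uncertainty propagation or analysis-specific regularisation choices used by ATLAS):

```python
import numpy as np

def dagostini_unfold(R, data, n_iter=4, prior=None):
    """Minimal d'Agostini iterative Bayesian unfolding.

    R[i, j] = P(reco bin i | truth bin j); column sums below one encode
    reconstruction inefficiency.  Returns the unfolded truth spectrum."""
    n_truth = R.shape[1]
    t = (np.full(n_truth, data.sum() / n_truth) if prior is None
         else np.asarray(prior, float))
    eff = R.sum(axis=0)                  # efficiency per truth bin
    for _ in range(n_iter):
        folded = R @ t                   # expected reco spectrum
        M = (R * t) / folded[:, None]    # Bayes: P(truth j | reco i)
        t = (M.T @ data) / eff           # updated truth estimate
    return t
```

The true spectrum is a fixed point of the iteration, and starting from a flat prior the estimate converges toward it; in practice the number of iterations acts as a regularisation parameter.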
Figure 1: Absolute (left) and normalised (right) cross section measured at
parton (left) and particle (right) level as a function of the transverse
momentum (left) and of the absolute rapidity (right) of the $Z$ boson. The
predictions of various MC generators are represented by dashed and dotted
coloured lines whereas the data are depicted as black dots. In addition,
custom differential [15] predictions are shown by a black solid line within a
grey-shaded area. In the ratio panels, the relative contributions from both
the statistical and systematic uncertainties are shown. A branching fraction
of $\mathcal{B}(t\overline{t}Z_{3\ell+4\ell})=0.0223$ is applied for the
parton-level result [8].
This analysis determined both the absolute and normalised differential cross
section for the $3\ell$ and $4\ell$ scenarios separately as well as for the
combination. In Figure 1, two examples for the differential cross section
measurements in the combined $3\ell+4\ell$ channel are depicted. Different
simulated samples generated with different sets of MC generators, including
the nominal MG5_aMC@NLO+Pythia 8 [13, 14], as well as a set of additional
differential predictions, calculated at parton level as described in [15], are
compared to the unfolded data. In general, a good agreement between the
unfolded data and the various predictions can be observed.
ACKNOWLEDGEMENTS
The author would like to thank the BMBF, Germany (FSP-103), for the support of his work.
## References
* [1] L. Evans and P. Bryant (editors), JINST 3, S08001 (2008).
* [2] ATLAS Collaboration, JINST 3, S08003 (2008).
* [3] CMS Collaboration, JINST 3, S08004 (2008).
* [4] ATLAS Collaboration, Eur. Phys. J. C 77, no. 1, 40 (2017).
* [5] ATLAS Collaboration, Phys. Rev. D 99, no. 7, 072009 (2019).
* [6] CMS Collaboration, JHEP 1808, 011 (2018).
* [7] CMS Collaboration, JHEP 2003, 056 (2020).
* [8] ATLAS Collaboration, ATLAS-CONF-2020-028,
https://cds.cern.ch/record/2725734.
* [9] ATLAS Collaboration, Eur. Phys. J. C 71, 1577 (2011).
* [10] ATLAS Collaboration, JHEP 1406, 035 (2014).
* [11] A. Kulesza et al., Eur. Phys. J. C 79, no. 3, 249 (2019).
* [12] G. D’Agostini, Nucl. Instrum. Meth. A 362, 487 (1995).
* [13] J. Alwall et al., JHEP 1407, 079 (2014).
* [14] T. Sjöstrand et al., Comput. Phys. Commun. 191, 159 (2015).
* [15] A. Broggio et al., JHEP 1908, 039 (2019).
# Mirror Chern numbers in the hybrid Wannier representation
Tomáš Rauch, Friedrich-Schiller-University Jena, 07743 Jena, Germany
Thomas Olsen, Computational Atomic-scale Materials Design, Department of Physics, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
David Vanderbilt, Department of Physics and Astronomy, Rutgers University, Piscataway, New Jersey 08854-8019, USA
Ivo Souza, Centro de Física de Materiales, Universidad del País Vasco, 20018 San Sebastián, Spain; Ikerbasque Foundation, 48013 Bilbao, Spain
###### Abstract
The topology of electronic states in band insulators with mirror symmetry can
be classified in two different ways. One is in terms of the mirror Chern
number, an integer that counts the number of protected Dirac cones in the
Brillouin zone of high-symmetry surfaces. The other is via a $\mathbbm{Z}_{2}$
index that distinguishes between systems that have a nonzero quantized orbital
magnetoelectric coupling (“axion-odd”), and those that do not (“axion-even”);
this classification can also be induced by other symmetries in the magnetic
point group, including time reversal and inversion. A systematic
characterization of the axion $\mathbbm{Z}_{2}$ topology has previously been
obtained by representing the valence states in terms of hybrid Wannier
functions localized along one chosen crystallographic direction, and
inspecting the associated Wannier band structure. Here we focus on mirror
symmetry, and extend that characterization to the mirror Chern number. We
choose the direction orthogonal to the mirror plane as the Wannierization
direction, and show that the mirror Chern number can be determined from the
winding numbers of the touching points between Wannier bands on mirror-
invariant planes, and from the Chern numbers of flat bands pinned to those
planes. In this representation, the relation between the mirror Chern number
and the axion $\mathbbm{Z}_{2}$ index is readily established. The formalism is
illustrated by means of ab initio calculations for SnTe in the monolayer and
bulk forms, complemented by tight-binding calculations for a toy model.
## I Introduction
The band theory of solids has been enriched in recent years by a vigorous
study of its topological aspects. That effort resulted in a systematic
topological classification of insulators on the basis of symmetry, and in the
identification of a large number of topological materials. After an initial
focus on the role of time-reversal symmetry, it was realized that
crystallographic symmetries could also protect topological behaviors, leading
to the notion of “topological crystalline insulators.”
The assignment of an insulator to a particular topological class can be made
by evaluating the corresponding topological invariant. Depending on the
protecting symmetry, that invariant may assume one of two values
($\mathbbm{Z}_{2}$ classification), or it may assume any integer value
($\mathbbm{Z}$ classification). Other types of classifications such as
$\mathbbm{Z}_{4}$ also occur, but they do not concern us here. When the
invariant vanishes the system is classified as trivial, and otherwise it is
classified as nontrivial or topological. Topological insulators typically
display robust gapless states at the boundary, which provide an experimental
signature of topological behavior.
In some cases, the same symmetry may induce two different topological
classifications. This happens for example with mirror symmetry, where a
$\mathbbm{Z}$ classification in terms of the mirror Chern number (MCN) Teo
_et al._ (2008); Ando and Fu (2015) coexists with a $\mathbbm{Z}_{2}$
classification based on the quantized axion angle. The two classifications are
not independent, and elucidating the relation between them is one goal of the
present work.
The axion $\mathbbm{Z}_{2}$ classification of three-dimensional (3D)
insulators is based on the orbital magnetoelectric effect. In brief, the
isotropic part of the linear orbital magnetoelectric tensor is conveniently
expressed in terms of the axion angle $\theta$, which is only defined modulo
$2\pi$ as a bulk property. In the presence of “axion-odd” symmetries that flip
its sign, the axion angle can only assume two values: $\theta=0$ (trivial),
and $\theta=\pi$ (topological) Qi _et al._ (2008); Essin _et al._ (2009);
Vanderbilt (2018); Armitage and Wu (2019); Nenno _et al._ (2020); Sekine and
Nomura (2021).
The axion $\mathbbm{Z}_{2}$ index was originally introduced for time-reversal
invariant insulators, where it was shown to be equivalent to the “strong”
$\mathbbm{Z}_{2}$ index $\nu_{0}=0$ or $1$, that is, $\theta=\pi\nu_{0}$. More
generally, axion-odd symmetries can be classified as proper rotations combined
with time reversal (including time reversal itself), and improper rotations
(including inversion and reflection) not combined with time reversal; in both
cases, the associated symmetry operation in the magnetic space group may
include a fractional translation. This results in a large number of magnetic
space groups that can host axion-odd topological insulators. A recent
realization is the MnBi$_{2}$Te$_{4}$ family of antiferromagnetic materials Otrokov _et
al._ (2019); Nenno _et al._ (2020); Sekine and Nomura (2021), whose axion
topology is protected by the time reversal operation combined with a half-
lattice translation as envisioned in Ref. Mong _et al._ (2010).
To aid the computational search for axionic topological insulators, it is
useful to devise simple procedures for determining the (quantized) axion angle
$\theta$. Unfortunately, subtle gauge issues make its direct evaluation from
the valence Bloch states rather challenging in general Vanderbilt (2018).
Notable exceptions are centrosymmetric insulators, both nonmagnetic and
magnetic. For such systems, the axion $\mathbbm{Z}_{2}$ index can be obtained
by counting the number of odd-parity states at high-symmetry points in the
Brillouin zone (BZ) Fu and Kane (2007); Turner _et al._ (2012).
Recently, an alternative procedure was introduced based on representing the
valence states in terms of hybrid Wannier (HW) functions that are maximally
localized along a chosen crystallographic direction $z$. The HW centers along
$z$, also known as “Wilson-loop eigenvalues,” form a band structure when
plotted as a function of $k_{x}$ and $k_{y}$; in the presence of one or more
axion-odd symmetries, the quantized $\theta$ value can be determined from this
“Wannier band structure,” often by mere visual inspection Varnava _et al._
(2020).
In the HW representation, axion-odd symmetries are naturally classified as
“$z$-preserving” or “$z$-reversing,” and the rules for deducing the axion
$\mathbbm{Z}_{2}$ index are different in each case (they also depend on
whether or not the symmetry operation involves a fractional translation along
$z$) Varnava _et al._ (2020). Time reversal is an example of a $z$-preserving
operation, while inversion is $z$ reversing. Mirror operations may be placed
in one group or the other, depending on whether the Wannierization direction
$z$ lies in the reflection plane (vertical mirror) or is orthogonal to it
(horizontal mirror). In this work we make the latter choice, so that the
mirror operation of interest becomes
$M_{z}:z\rightarrow-z\,,$ (1)
which is manifestly $z$ reversing.
A simple mirror symmetry without a glide component protects not only the axion
$\mathbbm{Z}_{2}$ classification, but also a $\mathbbm{Z}$ or
$\mathbbm{Z}\times\mathbbm{Z}$ classification based on one or two MCNs,
depending on the type of mirror. This raises the question of whether the HW
representation might also be convenient for determining the MCNs, and for
illuminating their relationship to the quantized axion angle.
In this work, we address the above questions by investigating in detail the
Wannier bands in the presence of $M_{z}$ symmetry. We clarify the generic
behaviors that are expected, and discuss the rules for deducing the MCNs. By
comparing those rules with the ones obtained in Ref. Varnava _et al._ (2020)
for the axion $\mathbbm{Z}_{2}$ index, we establish the relation between the
two classifications.
The paper is organized as follows. In Sec. II we first distinguish between
“type-1” and “type-2” crystallographic mirror operations; we then review the
definitions of Chern invariants and MCNs in terms of the Bloch states in the
filled bands; finally, we introduce maximally localized HW functions spanning
the valence states, and assign Chern numbers to isolated groups of Wannier
bands. This background material sets the stage for the developments in the
remainder of the paper. In Sec. III we discuss the generic features of the
Wannier band structure in the presence of $M_{z}$ symmetry, and obtain a
relation between Chern numbers and winding numbers in groups of bands touching
on a mirror plane. The rules for deducing the MCNs from the Chern numbers and
winding numbers on the mirror planes are given in Sec. IV, where their
relation to the quantized axion angle is also established. In Sec. V we
describe the numerical methods that are used in Sec. VI to apply the formalism
to several prototypical systems. We summarize and conclude in Sec. VII, and
present in three Appendices some derivations that were left out of the main
text.
## II Preliminaries
### II.1 Two types of crystallographic mirrors
Figure 1: The upper panel shows schematically a pair of 2D crystals lying on
the $(x,z)$ plane; each has one atom per primitive cell (black dots), and
lattice constant $c$ along $z$. The crystal on the left has a rectangular
lattice and a type-1 horizontal mirror, with inequivalent mirror lines
$z=0\text{ mod $c$}$ (A) and $z=c/2\text{ mod $c$}$ (B), shown as dashed
lines; the one on the right has a centered rectangular lattice and a type-2
mirror, with equivalent mirror lines A and B. The lattice vectors ${\bf
a}_{3}$ and $\widetilde{{\bf a}}_{3}$ are defined in the main text. The lower
panel shows the reciprocal lattices, with a separation of $2\pi/c$ between
horizontal lattice lines G. On the left the periodicity along $k_{z}$ is
$2\pi/c$, and hence both $k_{z}=0\text{ mod $2\pi/c$}$ (G) and
$k_{z}=\pi/c\text{ mod $2\pi/c$}$ (X) are pointwise-invariant mirror lines, as
indicated by the dashed lines. On the right, where the periodicity along
$k_{z}$ is $4\pi/c$, G is a mirror-invariant line but X is not. The associated
Brillouin zones are indicated by the shaded green areas.
We begin by observing that if a crystal is left invariant under an $M_{z}$
reflection operation, then its Bravais lattice must contain vectors pointing
along $z$. To construct the shortest such vector ${\bf a}_{3}=c\hat{\bf z}$,
we pick the shortest vector $\widetilde{{\bf a}}_{3}$ connecting lattice
points on adjacent horizontal lattice planes. If $\widetilde{{\bf a}}_{3}$
points along $z$ then we take it as ${\bf a}_{3}$, and we say that the mirror
is of type 1. Otherwise we choose the vector ${\bf a}_{3}=\widetilde{{\bf
a}}_{3}-M_{z}\widetilde{{\bf a}}_{3}$ connecting second-neighbor lattice
planes, and the mirror is of type 2.
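The construction just described is algorithmic; the following Python sketch (a toy illustration, assuming Cartesian vectors with the mirror plane at $z=0$) classifies the mirror and returns ${\bf a}_{3}$ from the input vector $\widetilde{{\bf a}}_{3}$:

```python
import numpy as np

def classify_mirror(a3_tilde):
    """Classify an M_z mirror as type 1 or 2 from the shortest lattice
    vector connecting adjacent horizontal lattice planes.

    Returns (mirror_type, a3), where a3 = c*z_hat is the shortest
    lattice vector pointing along z.  Illustrative sketch only."""
    a3t = np.asarray(a3_tilde, float)
    if np.allclose(a3t[:2], 0.0):       # already points along z: type 1
        return 1, a3t
    Mz = np.diag([1.0, 1.0, -1.0])      # reflection z -> -z
    return 2, a3t - Mz @ a3t            # connects 2nd-neighbour planes
```

For the rectangular lattice of Fig. 1 one has $\widetilde{{\bf a}}_{3}=(0,0,c)$ (type 1), while for the centered rectangular lattice $\widetilde{{\bf a}}_{3}=(a/2,0,c/2)$ gives a type-2 mirror with ${\bf a}_{3}=(0,0,c)$.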
The two types of crystallographic mirrors are exemplified in 2D in Fig. 1,
where the mirror lines $z=0$ and $c/2$ are labeled A and B, and the
reciprocal-space lines $k_{z}=0$ and $k_{z}=\pi/c$ are labeled G and X. The
same notation will be used in 3D, where A and B (G and X) become planes in
real (reciprocal) space.
The distinction between mirror operations that leave pointwise invariant two
inequivalent planes in the BZ, and those that leave invariant only one BZ
plane, was made in Refs. Varjas _et al._ (2015); Fulga _et al._ (2016).
Since MCNs are defined on such planes Teo _et al._ (2008); Ando and Fu
(2015), a 3D insulator with a type-1 mirror is characterized by two separate
MCNs $\mu_{\rm G}$ and $\mu_{\rm X}$, while for a type-2 mirror there is a
single MCN $\mu_{\rm G}$. If the crystallographic space group contains
additional mirror operations, there will be additional MCNs associated with
them.
### II.2 Chern invariants in band insulators
#### II.2.1 Generic insulators
Before introducing MCNs for insulators with reflection symmetry, let us define
Chern invariants for generic 2D and 3D band insulators in terms of the ${\bf
k}$-space Berry curvature of the valence states Vanderbilt (2018).
In 2D, the Berry curvature of a Bloch state $|\psi_{n{\bf k}}\rangle$ with
cell-periodic part $|u_{n{\bf k}}\rangle$ is a scalar defined as
$\Omega_{n{\bf k}}=-2{\rm Im\,}\langle\partial_{k_{x}}u_{n{\bf
k}}|\partial_{k_{y}}u_{n{\bf k}}\rangle$ (2)
where ${\bf k}=(k_{x},k_{y})$, and the Chern number is given by
$C=\frac{1}{2\pi}\int_{\rm 2DBZ}\sum_{n=1}^{J}\,\Omega_{n{\bf k}}\,d^{2}k$ (3)
where the summation is over the $J$ filled energy bands. Since the Berry
curvature has units of length squared, $C$ is a dimensionless number, and for
topological reasons it must be an integer. The Chern number is a global
property of the manifold of occupied states, remaining invariant under
multiband gauge transformations described by $J\times J$ unitary matrices at
each ${\bf k}$, and it vanishes when the crystal has time-reversal symmetry.
If a 2D magnetic crystal has a nonzero Chern number $C$, when that crystal is
terminated at an edge there will be $|C|$ edge modes crossing the bulk gap,
whose chirality will depend on the sign of $C$.
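Eq. (3) can be evaluated numerically on a discrete $k$-mesh. The sketch below (a standard technique, not part of this paper's formalism) uses the Fukui-Hatsugai-Suzuki link-variable method on the two-band Qi-Wu-Zhang toy model, which is gapped with $|C|=1$ for $0<|m|<2$ and $C=0$ for $|m|>2$:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], complex)

def qwz(kx, ky, m):
    """Qi-Wu-Zhang two-band Chern insulator (toy model)."""
    return np.sin(kx)*SX + np.sin(ky)*SY + (m + np.cos(kx) + np.cos(ky))*SZ

def chern_number(m, N=24):
    """Evaluate Eq. (3) for the lower band on an N x N k-mesh.  The Berry
    flux through each plaquette is the phase of the product of the four
    link overlaps, and the fluxes sum to 2*pi*C exactly (up to
    floating-point error), independently of the gauge chosen by eigh."""
    ks = 2*np.pi*np.arange(N)/N
    u = np.empty((N, N, 2), complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(qwz(kx, ky, m))[1][:, 0]
    C = 0.0
    for i in range(N):
        for j in range(N):
            u00, u10 = u[i, j], u[(i+1) % N, j]
            u11, u01 = u[(i+1) % N, (j+1) % N], u[i, (j+1) % N]
            C += np.angle(np.vdot(u00, u10) * np.vdot(u10, u11)
                          * np.vdot(u11, u01) * np.vdot(u01, u00))
    return C / (2*np.pi)
```

Because each single-band eigenvector appears once as a bra and once as a ket around every plaquette, arbitrary phase choices cancel, illustrating the gauge invariance of $C$ noted above.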
3D insulators are characterized by a Chern vector
${\bf K}=\frac{1}{2\pi}\int_{\rm 3DBZ}\sum_{n=1}^{J}\,{\bm{\Omega}}_{n{\bf
k}}\,d^{3}k\,,$ (4)
where now ${\bf k}=(k_{x},k_{y},k_{z})$ and the Berry curvature has become a
vector field, ${\bm{\Omega}}_{n{\bf k}}=-{\rm Im\,}\langle\partial_{\bf
k}u_{n{\bf k}}|\times|\partial_{\bf k}u_{n{\bf k}}\rangle$. The Chern vector
has units of inverse length, and is quantized to be a reciprocal-lattice
vector. Like the Chern number in 2D, the Chern vector always vanishes in
nonmagnetic crystals.
Given a set of lattice vectors ${\bf a}_{j}$ and dual reciprocal-lattice
vectors ${\bf b}_{j}$, the expansion ${\bf K}=\sum_{j}\,C_{j}{\bf b}_{j}$
defines a triad of integer Chern indices $C_{j}$. Let us orient the Cartesian
axes such that ${\bf a}_{3}=c\hat{\bf z}$. The vectors ${\bf b}_{1}$ and ${\bf
b}_{2}$ then lie on the $(x,y)$ plane, and the third Chern index can be
expressed as
$C_{3}=\frac{c}{2\pi}\int_{0}^{2\pi/c}\,C(k_{z})\,dk_{z}\,,$ (5)
where
$C(k_{z})=\frac{1}{2\pi}\int_{\rm
2DBZ}\sum_{n=1}^{J}\,\Omega^{z}_{n}(k_{x},k_{y},k_{z})\,dk_{x}dk_{y}\,.$ (6)
The integral in Eq. (6) is over a slice of the 3D BZ spanned by ${\bf b}_{1}$
and ${\bf b}_{2}$ at fixed $k_{z}$. By viewing it as an effective 2D BZ and
comparing with Eq. (3), it becomes clear that $C(k_{z})$ is a Chern number;
and since in a gapped system its integer value cannot change with the
continuous parameter $k_{z}$, Eq. (5) reduces to $C_{3}=C(k_{z})$ evaluated at
any $k_{z}$. The Chern indices of 3D insulators can therefore be evaluated as
Chern numbers defined over individual BZ slices.
#### II.2.2 Mirror-symmetric insulators
We now consider a 3D crystalline insulator with mirror symmetry $M_{z}$, and
assume that its Chern vector ${\bf K}$ vanishes. A new integer-valued
topological index, the MCN, can be defined for such a system as follows Teo
_et al._ (2008); Ando and Fu (2015).
On the mirror-invariant BZ planes, G and possibly X, the energy eigenstates
are also eigenstates of $M_{z}$. The eigenvalues are $i^{F}p$, where $p=\pm 1$
is the “mirror parity” and $F=0$ or $1$ when the electrons are treated as
spinless or spinful particles, respectively. The occupied Bloch states on
those planes can therefore be grouped into “even” ($p=+1$) and “odd” ($p=-1$)
sectors under reflection about the A plane $z=0$, each carrying its own Chern
number. The Chern numbers of the two sectors on the G plane $k_{z}=0$ are
given by
$C_{\rm G}^{\pm}=\frac{1}{2\pi}\int_{\rm 2DBZ}\sum_{n=1}^{J}\,f_{n{\bf
k}}^{\pm}\Omega^{z}_{n}(k_{x},k_{y},k_{z}=0)\,dk_{x}dk_{y}\,,$ (7)
where $f_{n{\bf k}}^{+}=1-f_{n{\bf k}}^{-}$ equals one or zero for a state
with $p=\pm 1$, respectively. The MCN is defined as
$\mu_{\rm G}=\frac{1}{2}\left(C_{\rm G}^{+}-C_{\rm G}^{-}\right)\,,$ (8)
and it is guaranteed to be an integer since $C_{\rm G}^{+}+C_{\rm
G}^{-}=C_{3}$ vanishes by assumption. If the mirror is of type 1, the plane X
carries a second MCN
$\mu_{\rm X}=\frac{1}{2}\left(C_{\rm X}^{+}-C_{\rm X}^{-}\right)\,,$ (9)
where $C_{\rm X}^{\pm}$ is obtained by replacing $k_{z}=0$ with $k_{z}=\pi/c$
in Eq. (7). The MCNs remain invariant under multiband gauge transformations
that do not mix the two mirror-parity sectors. When they are nonzero,
protected gapless modes appear on surfaces that retain the mirror symmetry
$M_{z}$, with $|\mu_{\rm G}|$ and $|\mu_{\rm X}|$ counting the number of Dirac
cones on the two $M_{z}$-invariant lines in the surface BZ Hsieh _et al._
(2012).
In the case of a 2D or quasi-2D insulator with reflection symmetry $M_{z}$
about its own plane, the entire 2D BZ is left invariant under $M_{z}$. Such a
system has a unique MCN
$\mu_{\rm 2D}=\frac{1}{2}\left(C_{+}-C_{-}\right)\,,$ (10)
where $C_{+}$ and $C_{-}$ are obtained by inserting the 2D Berry curvature of
Eq. (2) in Eq. (7). When the net Chern number $C=C_{+}+C_{-}$ vanishes,
$|\mu_{\rm 2D}|$ becomes an integer that counts the number of pairs of
counterpropagating chiral edge modes Liu _et al._ (2014).
We note in passing that spin-orbit coupling is required to obtain non-
vanishing MCNs in systems that are either non-magnetic or whose magnetic order
is collinear.
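To make Eqs. (7)-(10) concrete, the following hedged Python sketch builds a toy 2D mirror-symmetric model (our own construction for illustration, not from the main text) that is block-diagonal in the mirror parity: the $p=-1$ block is the complex conjugate of the $p=+1$ Qi-Wu-Zhang block, so the sector Chern numbers are opposite, the net Chern number vanishes, and $|\mu_{\rm 2D}|=1$:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], complex)

def qwz(kx, ky, m=1.0):
    return np.sin(kx)*SX + np.sin(ky)*SY + (m + np.cos(kx) + np.cos(ky))*SZ

def lowest_band_chern(h_of_k, N=24):
    """Chern number of the lowest band of a 2x2 Bloch Hamiltonian
    (Fukui-Hatsugai-Suzuki link method on an N x N mesh)."""
    ks = 2*np.pi*np.arange(N)/N
    u = np.array([[np.linalg.eigh(h_of_k(kx, ky))[1][:, 0] for ky in ks]
                  for kx in ks])
    C = 0.0
    for i in range(N):
        for j in range(N):
            a, b = u[i, j], u[(i+1) % N, j]
            c, d = u[(i+1) % N, (j+1) % N], u[i, (j+1) % N]
            C += np.angle(np.vdot(a, b)*np.vdot(b, c)
                          * np.vdot(c, d)*np.vdot(d, a))
    return C/(2*np.pi)

# Conjugating a Bloch Hamiltonian flips the sign of the Berry curvature,
# so the p = -1 sector carries the opposite Chern number of the p = +1 one.
C_plus = lowest_band_chern(lambda kx, ky: qwz(kx, ky))
C_minus = lowest_band_chern(lambda kx, ky: qwz(kx, ky).conj())
mu_2d = 0.5*(C_plus - C_minus)   # Eq. (10)
net_C = C_plus + C_minus         # vanishes, as required for integer mu_2d
```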
### II.3 The hybrid Wannier representation
#### II.3.1 Hybrid Wannier functions and Wannier bands
HW functions are obtained from the valence Bloch states of a 2D or 3D
crystalline insulator by carrying out the Wannier construction along a chosen
reciprocal-lattice direction. They are therefore localized along one direction
only, in contrast to ordinary Wannier functions which are localized in all
spatial directions.
Let us momentarily return to a generic 3D insulating crystal, not necessarily
mirror-symmetric. We denote by $z$ the chosen localization direction and let
${\bm{\kappa}}=(k_{x},k_{y})$, so that the wavevector in the 3D BZ becomes
${\bf k}=({\bm{\kappa}},k_{z}$). Given a gauge for the Bloch states that is
periodic in $k_{z}$,
$|\psi_{n{\bm{\kappa}},k_{z}+2\pi/c}\rangle=|\psi_{n{\bm{\kappa}}k_{z}}\rangle$,
the corresponding HW functions are defined as
$|h_{ln{\bm{\kappa}}}\rangle=\frac{1}{2\pi}\int_{-\pi/c}^{\pi/c}\,e^{-ik_{z}lc}e^{-i{\bm{\kappa}}\cdot{\bf
r}}|\psi_{n{\bm{\kappa}}k_{z}}\rangle\,dk_{z},$ (11)
where the index $l$ runs over unit cells along $z$, and $n$ runs over the $J$
HW functions in one unit cell. By factoring out $e^{-i{\bm{\kappa}}\cdot{\bf
r}}$, we have made the HW functions cell periodic in the in-plane directions,
$h_{ln{\bm{\kappa}}}({\bf r}+{\bf R})=h_{ln{\bm{\kappa}}}({\bf r})$ for any
in-plane lattice vector ${\bf R}$. This will be convenient later on when we
define Berry curvatures and Chern numbers in the HW representation.
For each ${\bm{\kappa}}$ in the projected 2D BZ, we choose the multiband gauge
for the Bloch states in such a way that the HW functions have the smallest
possible quadratic spread along $z$. Such maximally-localized HW functions
satisfy the eigenvalue equation Marzari and Vanderbilt (1997)
$P_{\bm{\kappa}}zP_{\bm{\kappa}}|h_{ln{\bm{\kappa}}}\rangle=z_{ln{\bm{\kappa}}}|h_{ln{\bm{\kappa}}}\rangle,$
(12)
where $P_{\bm{\kappa}}$ is the projection operator onto the space of valence
states with in-plane wave vector ${\bm{\kappa}}$. The eigenvalues in Eq. (12)
are the HW centers
$z_{ln{\bm{\kappa}}}=\langle
h_{ln{\bm{\kappa}}}|z|h_{ln{\bm{\kappa}}}\rangle\,,$ (13)
which form Wannier bands. These are periodic in real space along $z$, as well
as in the in-plane reciprocal space,
$z_{ln{\bm{\kappa}}}=z_{0n{\bm{\kappa}}}+lc\,,\quad z_{ln,{\bm{\kappa}}+{\bf
G}}=z_{ln{\bm{\kappa}}}\,,$ (14)
where ${\bf G}$ is an in-plane reciprocal lattice vector.
A Wannier band structure is said to be gapped if it contains at least one
Wannier band per vertical cell that is separated from the band below by a
finite gap at all ${\bm{\kappa}}$. When that is the case, we choose the cell
contents in such a way that the first band, $n=1$, has a gap below it.
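The HW centers of Eq. (13) can be computed as the eigenphases of the Wilson loop along $k_{z}$, i.e. the "Wilson-loop eigenvalues" mentioned in the Introduction. Below is a hedged minimal sketch, assuming a tight-binding Bloch Hamiltonian with the convention $H({\bf k}+{\bf G})=H({\bf k})$, half filling, $c=1$, and one particular sign/branch convention for the centers:

```python
import numpy as np

def hw_centers(h_of_k, kappa, Nz=64, c=1.0):
    """Hybrid Wannier centers z_{0n}(kappa) mod c of the occupied
    (lower-half) bands, as eigenphases of the discretized Wilson loop
    along k_z.  Sign and branch conventions vary; this is one choice."""
    kx, ky = kappa
    occ = []
    for l in range(Nz):
        kz = 2*np.pi*l/(Nz*c)
        _, V = np.linalg.eigh(h_of_k(kx, ky, kz))
        occ.append(V[:, :V.shape[1]//2])     # occupied eigenvectors
    W = np.eye(occ[0].shape[1], dtype=complex)
    for l in range(Nz):                       # closed loop in k_z
        W = W @ (occ[l].conj().T @ occ[(l+1) % Nz])
    phases = np.angle(np.linalg.eigvals(W))
    return np.sort((-phases*c/(2*np.pi)) % c)

# Example: fully dimerized SSH-like chain (intercell hopping only, c = 1);
# the occupied hybrid Wannier center sits on the bond, at z = 1/2.
def ssh(kx, ky, kz):
    return np.array([[0, np.exp(-1j*kz)],
                     [np.exp(1j*kz), 0]])
```

Since every occupied-space frame appears once as a bra and once as a ket around the closed loop, the eigenphases of $W$ are gauge invariant, consistent with the centers being physical observables.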
#### II.3.2 Chern numbers of Wannier bands
The Berry curvature of a HW state is defined as
$\Omega_{ln}=-2\,{\rm
Im\,}\langle\partial_{k_{x}}h_{ln}|\partial_{k_{y}}h_{ln}\rangle\,,$ (15)
and periodicity along $z$ implies that $\Omega_{ln}=\Omega_{0n}$. (Here and in
the following, we will frequently drop the index ${\bm{\kappa}}$.) When the
Wannier spectrum is gapped, it becomes possible to associate a Chern number
with each isolated group $a$ of bands within a vertical cell,
$C_{la}=\frac{1}{2\pi}\int_{\rm 2DBZ}\sum_{n\in
a}\,\Omega_{ln}\,d^{2}k=C_{0a}\,.$ (16)
From the HW states in a given group, one can construct Bloch-like states at
any ${\bf k}=(k_{x},k_{y},k_{z})$ by inverting Eq. (11). In general these are
not energy eigenstates, and their band indices label Wannier bands rather than
energy bands. Their Berry curvatures along $z$ are given by
$\Omega^{z}_{n}(k_{x},k_{y},k_{z})=\sum_{l}\,e^{ik_{z}lc}\Omega_{0n,ln}(k_{x},k_{y})\,,$ (17)
where
$\Omega_{0n,ln}=i\langle\partial_{k_{x}}h_{0n}|\partial_{k_{y}}h_{ln}\rangle-i\langle\partial_{k_{y}}h_{0n}|\partial_{k_{x}}h_{ln}\rangle$ (18)
is a matrix generalization of Eq. (15) Taherinejad and Vanderbilt (2015). To
evaluate the net Chern number $C_{a}(k_{z})$ of that group of Bloch-like
states on a slice of the 3D BZ, we insert Eq. (17) in Eq. (6) and restrict the
summation over $n$ to $n\in a$. The contributions from the $l\not=0$ terms
drop out, yielding
$C_{a}(k_{z})=C_{0a}\,.$ (19)
(The expression for $C_{a}(k_{z})$ involves
$\int_{0}^{2\pi/a}\partial_{k_{x}}Y_{0n,ln}(k_{x})\,dk_{x}$, where
$Y_{0n,ln}(k_{x})=\int_{0}^{2\pi/b}A^{y}_{0n,ln}(k_{x},k_{y})\,dk_{y}$, and
another similar integral
$\int_{0}^{2\pi/b}\partial_{k_{y}}X_{0n,ln}(k_{y})\,dk_{y}$. When $l\not=0$,
the quantity $Y_{0n,ln}(k_{x})$ becomes fully invariant under band-diagonal
gauge transformations of the HW states; hence its value at $k_{x}=2\pi/a$
must be the same as at $k_{x}=0$, and the integral vanishes.)
Hence the Chern numbers are the same in the Bloch-like and HW representations,
as expected since the two representations are related by a unitary
transformation. When the group $a$ comprises all $J$ Wannier bands in one
vertical cell, its Chern number becomes equal to the Chern index $C_{3}$ of
Eq. (5), which vanishes by assumption.
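Chern numbers such as those in Eq. (16) can be evaluated numerically with the standard lattice (Fukui-Hatsugai-Suzuki) discretization of the Berry flux. The sketch below applies it to the Qi-Wu-Zhang two-band model, which is unrelated to the systems studied in this work and serves only as a self-contained test case.

```python
import numpy as np

def chern_fhs(hk, nk=30):
    """Lattice (Fukui-Hatsugai-Suzuki) Chern number of the lowest band of a
    two-band Bloch Hamiltonian hk(kx, ky)."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, V = np.linalg.eigh(hk(kx, ky))
            u[i, j] = V[:, 0]                       # lowest-band eigenvector
    F = 0.0
    for i in range(nk):                             # Berry flux per plaquette
        for j in range(nk):
            U = (np.vdot(u[i, j], u[(i + 1) % nk, j])
                 * np.vdot(u[(i + 1) % nk, j], u[(i + 1) % nk, (j + 1) % nk])
                 * np.vdot(u[(i + 1) % nk, (j + 1) % nk], u[i, (j + 1) % nk])
                 * np.vdot(u[i, (j + 1) % nk], u[i, j]))
            F += np.angle(U)                        # gauge-invariant plaquette flux
    return F / (2.0 * np.pi)

# Qi-Wu-Zhang model as a test case (not from this work):
# |u| < 2 gives a Chern band, |u| > 2 is trivial.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz(u):
    return lambda kx, ky: (np.sin(kx) * sx + np.sin(ky) * sy
                           + (u + np.cos(kx) + np.cos(ky)) * sz)
```

The plaquette construction guarantees an exactly integer result once the grid is fine enough that each plaquette flux is below $\pi$.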
## III Mirror-symmetric Wannier bands
With the above background material in hand, we now return to our system of
interest – a 3D insulator with $M_{z}$ symmetry – and construct HW functions
localized along the direction $z$ orthogonal to the mirror plane. We begin
this section by discussing the generic features of Wannier band structures
with $M_{z}$ symmetry.
### III.1 Flat vs dispersive bands, and the uniform parity assumption
If $M_{z}$ is a symmetry of the system, the operator $PzP$ anticommutes with
$M_{z}$. It follows that if a HW function $|h_{ln}\rangle$ satisfies Eq. (12)
with eigenvalue $z_{ln}$, $M_{z}|h_{ln}\rangle$ satisfies it with eigenvalue
$-z_{ln}$. Since $z_{ln}$ is only defined modulo $c$, two situations may
occur. (i) $|h_{ln}\rangle$ and $M_{z}|h_{ln}\rangle$ are orthogonal, in which
case a pair of dispersive bands appear at $\pm z_{ln}$. (ii) $|h_{ln}\rangle$
and $M_{z}|h_{ln}\rangle$ are the same up to a phase, in which case
$|h_{ln}\rangle$ is an eigenstate of $M_{z}$, and a single flat band appears
at either $z=0$ (A plane) or $z=c/2$ (B plane). The Wannier bands of the
system can therefore be classified into flat bands of even or odd mirror
parity at A; flat bands of even or odd mirror parity at B; and dispersive
pairs appearing at $\pm z$.
If there are several flat bands on a given mirror plane and not all of them
have the same parity, those of opposite parity will generally have a nonzero
$PzP$ matrix element between them, and will tend to hybridize and split to
form dispersive pairs. Thus, all flat bands pinned at A are expected to have
the same parity $p_{\rm A}$, and all flat bands pinned at B are expected to
have the same parity $p_{\rm B}$. Following Ref. Varnava _et al._ (2020), we
call this the “uniform parity” assumption. As discussed in Ref. Varnava _et
al._ (2020), this assumption is closely related to a well-known theorem on the
minimum number of zero-energy modes in bipartite lattices Sutherland (1986);
Lieb (1989); Ramachandran _et al._ (2017).
Under the uniform parity assumption, the numbers $\overline{N}_{\rm A}$ and
$\overline{N}_{\rm B}$ of flat bands at A and B can be expressed in terms of
the imbalance between even- and odd-parity valence Bloch states at the mirror-
invariant plane(s) in the BZ. For a type-1 mirror we have
$\overline{N}_{\rm A}=\frac{1}{2}\left|\Delta N_{\rm G}+\Delta N_{\rm X}\right|$ (20)
and
$\overline{N}_{\rm B}=\frac{1}{2}\left|\Delta N_{\rm G}-\Delta N_{\rm X}\right|\,,$ (21)
where $\Delta N_{\rm G}$ and $\Delta N_{\rm X}$ denote the excess of even over
odd valence states at G and X, respectively. Hence if the mirror-parity
content is balanced at both G and X, flat Wannier bands are absent from both A
and B; if it is balanced only at G but not at X or vice-versa, the same number
of flat bands is present at A and at B; and if it is unbalanced at both G and
X, the number of flat bands at B can differ from the number at A. The
corresponding relation for a type-2 mirror is
$\overline{N}_{\rm A}=\overline{N}_{\rm B}=\frac{1}{2}|\Delta N_{\rm G}|\,.$ (22)
Equations (20-22) are derived in Appendix A.
### III.2 Types of generic degeneracies
In this section, we consider the types of degeneracies that are typical of the
Wannier spectra of insulators with $M_{z}$ symmetry. We call a degeneracy
generic when it occurs without the assistance of any symmetries other than
$M_{z}$. If in addition the degeneracy is codimension protected, we call it
accidental.
Accidental degeneracies away from the A and B planes have codimension three,
and hence they require fine tuning. On the mirror planes, there are two types
of generic degeneracies: multiple flat bands pinned to the same plane, and
accidental touchings, at isolated points in the 2D BZ, between one or more
pairs of dispersive bands. Other possibilities such as nodal lines are non-
generic and will not be considered further. In the following we focus on the A
plane $z=0$, but the discussion would be identical for the B plane $z=c/2$.
#### III.2.1 Point nodes between pairs of dispersive bands
If there are no flat bands pinned at $z=0$, any bands near $z=0$ must come in
dispersive pairs at $\pm z$. If there is a single pair, we construct from the
two HW functions at each ${\bm{\kappa}}$ a pair of orthogonal states with
opposite parities about $z=0$. In this basis, the $z$ operator is represented
by a matrix of the form
$\begin{pmatrix}0&f_{\bm{\kappa}}\\ f^{*}_{\bm{\kappa}}&0\end{pmatrix}\,,$ (23)
with eigenvalues $z_{\bm{\kappa}}=\pm|f_{\bm{\kappa}}|$. The two bands touch
at $z=0$ when $|f_{\bm{\kappa}}|=0$, and for that to happen both the real and
imaginary parts of $f_{\bm{\kappa}}$ must vanish; this means that such
degeneracies have codimension two, and hence they occur at isolated points in
the 2D BZ. (When the bands disperse linearly close to the nodal point, the
degeneracy is called a “Dirac node.”) If more than one dispersive band pair is
involved, $f_{\bm{\kappa}}$ becomes a matrix. The degeneracy condition
$\det(f_{\bm{\kappa}})=0$ again leads to point nodes on the $z=0$ plane.
Generically, these are simple nodes where only two bands meet. However, with
additional symmetries or fine tuning, more than one pair of bands may become
degenerate at a given node.
In summary, pairs of dispersive Wannier bands can touch accidentally at
isolated points on a mirror plane free of flat bands. We note that the same
happens, and for the same mathematical reasons, with the energy bands of
models with sublattice symmetry Ramachandran _et al._ (2017).
#### III.2.2 Flat bands repel point nodes
When one or more flat bands are present at $z=0$, they gap out the point
nodes. Let us show this for the simplest case of one flat band surrounded by a
dispersive pair. Choosing a basis of $M_{z}$ eigenstates within this three-
band space, the matrix representation of the $z$ operator takes the form
$\begin{pmatrix}0&f_{\bm{\kappa}}&g_{\bm{\kappa}}\\ f^{*}_{\bm{\kappa}}&0&0\\ g^{*}_{\bm{\kappa}}&0&0\end{pmatrix},$ (24)
where we have chosen the first basis state to have the opposite mirror parity
from the other two. The eigenvalues are $z_{\bm{\kappa}}=0$ (flat band) and
$z_{\bm{\kappa}}=\pm\sqrt{|f_{\bm{\kappa}}|^{2}+|g_{\bm{\kappa}}|^{2}}$
(dispersive pair). An accidental degeneracy between the pair requires the real
and imaginary parts of both $f_{\bm{\kappa}}$ and $g_{\bm{\kappa}}$ to vanish
(codimension four). In general this cannot be achieved by adjusting
${\bm{\kappa}}$ alone; it also requires fine tuning the parameters
$f_{\bm{\kappa}}$ and $g_{\bm{\kappa}}$.
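A quick numerical check of Eq. (24): for arbitrary (hypothetical) values of $f$ and $g$, the eigenvalues are $0$ and $\pm\sqrt{|f|^{2}+|g|^{2}}$, so the dispersive pair stays gapped unless $f$ and $g$ vanish simultaneously.

```python
import numpy as np

# Eq. (24) with arbitrary (hypothetical) couplings f and g: eigenvalues are
# 0 (flat band) and +/- sqrt(|f|^2 + |g|^2) (dispersive pair), so the pair
# cannot reach z = 0 unless f and g vanish simultaneously.
f, g = 0.3 + 0.4j, -0.2 + 0.1j
M = np.array([[0, f, g],
              [np.conj(f), 0, 0],
              [np.conj(g), 0, 0]])
w = np.linalg.eigvalsh(M)                    # ascending order
gap = np.sqrt(abs(f) ** 2 + abs(g) ** 2)
```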
In conclusion, flat bands and point nodes do not generally coexist on a mirror
plane. Although we have only shown this for the case of one flat band plus one
dispersive pair, the same result is expected to hold when several flat bands
and/or dispersive pairs are present. That scenario has in fact been considered
for the analogous problem of energy bands in models with sublattice symmetry
Ramachandran _et al._ (2017).
#### III.2.3 Spinful time-reversal symmetry excludes flat bands
The presence of flat bands on the mirror planes can sometimes be ruled out on
the basis of symmetry. This is the case for a crystal that has both $M_{z}$
symmetry and spinful time-reversal symmetry $\mathcal{T}$. Since
$[P_{\bm{\kappa}}zP_{\bm{\kappa}},\mathcal{T}]=0$, the standard Kramers-
degeneracy argument applies to the Wannier bands: if $|h_{\bm{\kappa}}\rangle$
is an eigenstate of $P_{\bm{\kappa}}zP_{\bm{\kappa}}$ with eigenvalue
$z_{\bm{\kappa}}$, then
$|h^{\prime}_{-{\bm{\kappa}}}\rangle=\mathcal{T}|h_{\bm{\kappa}}\rangle$ is an
orthogonal eigenstate with the same eigenvalue. Now suppose that
$|h_{\bm{\kappa}}\rangle$ is a flat-band state at A, with $M_{z}$ eigenvalue
$\lambda=\pm i$. Then $|h^{\prime}_{-{\bm{\kappa}}}\rangle$ is also a flat-
band state, and using $[M_{z},\mathcal{T}]=0$ we find that its mirror
eigenvalue is $\lambda^{*}=-\lambda$. Since the two flat bands have opposite
mirror eigenvalues, they will generally hybridize to form a dispersive pair.
Another example is a crystal that has both $M_{z}$ symmetry, and spinful
$\mathcal{T}$ combined with inversion $\mathcal{I}$. The combined symmetry
$\mathcal{I}*\mathcal{T}$ renders the energy bands Kramers-degenerate at every
${\bf k}$, and since $[M_{z},\mathcal{I}*\mathcal{T}]=0$ and $M_{z}$ has
purely imaginary eigenvalues, Kramers pairs of Hamiltonian eigenstates on the
invariant BZ planes have opposite $M_{z}$ eigenvalues. The mirror-parity
content is therefore balanced on those planes, and from Eqs. (20-22) we
conclude that both $\overline{N}_{\rm A}$ and $\overline{N}_{\rm B}$ vanish.
(Note that while the energy bands are Kramers degenerate in the presence of
$\mathcal{I}*\mathcal{T}$ symmetry, the Wannier bands are not. The difference
is that $\mathcal{I}*\mathcal{T}$ commutes with the Hamiltonian, but it
anticommutes with $PzP$.)
In summary, spinful time-reversal symmetry, either by itself or in combination
with inversion, rules out the presence of flat Wannier bands on the mirror
planes (under the uniform parity assumption).
### III.3 Chern numbers in gapped band structures
When an $M_{z}$-symmetric Wannier band structure is gapped, the $J$ bands per
cell can be grouped into three internally connected collections Varnava _et
al._ (2020): one containing bands that are pinned at A (over the entire 2D BZ
or at isolated ${\bm{\kappa}}$ points), another containing bands that are
pinned at B, and a third containing “unpinned” bands, in the sense that they
do not touch the mirror planes anywhere in the 2D BZ. In Ref. Varnava _et
al._ (2020) these three collections were called origin-centered, boundary-
centered, and uncentered, respectively.
Letting $\alpha=\text{A or B}$, in each vertical cell $l$ there are in general
• $\overline{N}_{\alpha^{+}}$ flat bands at $\alpha$ of even parity,
• $\overline{N}_{\alpha^{-}}$ flat bands at $\alpha$ of odd parity,
• $\widetilde{N}_{\alpha}$ dispersive bands touching at $\alpha$
in the $\alpha$-pinned collection, and $\widetilde{N}_{\rm UC}$ dispersive
bands in the unpinned collection. (At this stage we do not yet assume uniform
parity for the flat bands, nor do we invoke the fact that flat bands repel
point nodes.) In the home cell $l=0$, the dispersive bands in the A-pinned
collection come in pairs at $\pm z$, and those in the B-pinned collection come
in pairs at $z$ and $c-z$. In the case of the unpinned collection we have a
choice, since the mirror-symmetric partners never become degenerate; for
definiteness, we choose the contents of the home cell so that the bands in the
unpinned collection come in pairs at $\pm z$.
For each of the seven groups listed above, we can add up the Chern numbers in
that group to get $\overline{C}_{\alpha^{\pm}}$, $\widetilde{C}_{\alpha}$, and
$\widetilde{C}_{\rm UC}$, keeping in mind that their sum $C_{3}$ vanishes by
assumption,
$C_{\rm A}+C_{\rm B}+\widetilde{C}_{\rm UC}=0\,,$ (25)
where
$C_{\alpha}=\overline{C}_{\alpha^{+}}+\overline{C}_{\alpha^{-}}+\widetilde{C}_{\alpha}$
is the net Chern number of the $\alpha$-pinned collection. We further
decompose each of the three dispersive band subspaces into even and odd
sectors under reflection about their centers, and assign separate Chern
numbers to them,
$\widetilde{C}_{\alpha}=\widetilde{C}_{\alpha^{+}}+\widetilde{C}_{\alpha^{-}}\,,$ (26a)
$\widetilde{C}_{\rm UC}=\widetilde{C}_{{\rm UC}^{+}}+\widetilde{C}_{{\rm UC}^{-}}\,.$ (26b)
In Appendix B we show that
$\widetilde{C}_{\alpha^{+}}-\widetilde{C}_{\alpha^{-}}=W_{\alpha}\,,$ (27)
where $W_{\alpha}$ is the sum of the winding numbers of all the nodal points
in the projected 2D BZ on the $\alpha$ mirror plane.
The winding number of a nodal point ${\bm{\kappa}}_{j}$ is defined as Asbóth
_et al._ (2016)
$W_{j}=\frac{1}{2\pi}\oint_{c_{j}}\partial_{\bm{\kappa}}\gamma_{\bm{\kappa}}\cdot
d{\bm{\kappa}}\,,$ (28)
where the integral is over a small circle $c_{j}$ around the node. $W_{j}$ is
an integer, typically taking values $\pm 1$, according to how the phase
$\gamma_{\bm{\kappa}}$ changes going around the node. In the simplest case,
where a single pair of bands meets at the node, $\gamma_{\bm{\kappa}}$ is the
phase angle of the complex matrix element $f_{\bm{\kappa}}$ appearing in Eq.
(23). If two or more pairs of bands meet at a node, $f_{\bm{\kappa}}$ becomes
a matrix and $\gamma_{\bm{\kappa}}$ becomes the phase angle of its
determinant (see Sec. V.3).
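The winding number of Eq. (28) can be evaluated numerically by accumulating the wrapped phase differences of $f_{\bm{\kappa}}$ along a small circle around the node. The sketch below does this for the illustrative (hypothetical) node functions $\kappa_{x}\pm i\kappa_{y}$ and $(\kappa_{x}+i\kappa_{y})^{2}$.

```python
import numpy as np

def winding(f, kappa0=(0.0, 0.0), radius=0.1, npts=400):
    """Winding number of Eq. (28): accumulate the wrapped phase differences
    of f along a small circle around the node at kappa0."""
    t = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    kx = kappa0[0] + radius * np.cos(t)
    ky = kappa0[1] + radius * np.sin(t)
    phases = np.angle(f(kx, ky))
    # wrap each phase step into (-pi, pi] and close the loop
    dphi = np.angle(np.exp(1j * np.diff(phases, append=phases[0])))
    return int(round(dphi.sum() / (2.0 * np.pi)))
```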
Combining Eqs. (26a) and (27) we obtain
$W_{\alpha}=\widetilde{C}_{\alpha}-2\widetilde{C}_{\alpha^{-}}\,,$ (29)
which shows that $\widetilde{C}_{\alpha}$ has the same even or odd parity as
$W_{\alpha}$. Since band pairs in the unpinned collection do not touch on the
special planes, by applying the same argument in Appendix B that leads to Eq.
(27) we obtain
$\widetilde{C}_{{\rm UC}^{+}}=\widetilde{C}_{{\rm UC}^{-}}\,,$ (30)
which implies that their sum $\widetilde{C}_{\rm UC}$ is always an even
number. (The fact that $\widetilde{C}_{\rm UC}$ is even can also be seen as
follows Varnava _et al._ (2020): the unpinned collection is formed by two
disconnected groups of bands related by $M_{z}$ symmetry, which imposes the
same Berry curvature at every ${\bm{\kappa}}$ in the two groups, and hence
the same Chern number.)
## IV Mirror Chern numbers in the hybrid Wannier representation
We are finally ready to evaluate the MCNs in the HW representation, and then
relate them to the axion $\mathbbm{Z}_{2}$ index. In Sec. IV.1 we consider the
case of a gapped Wannier spectrum, and in Sec. IV.2 we treat the gapless case.
### IV.1 Gapped Wannier band structure
To recap, a generic gapped Wannier band structure with $M_{z}$ symmetry
consists of seven band collections per cell. The four that are flat have well-
defined mirror parities, and the three that are dispersive can be decomposed
into even and odd sectors. This yields a total of ten HW groups with well-
defined parities, each carrying its own Chern number.
#### IV.1.1 Type-1 mirrors
Table 1: Parities under a type-1 mirror $M_{z}$ of Bloch-like states
constructed from HW functions that are maximally localized along $z$. For
spinful electrons, the parity is said to be “even” or “odd” when the $M_{z}$
eigenvalue is $+i$ or $-i$.

Bloch representation
---
${\rm G}^{+}$ = even about A (and even about B)
${\rm G}^{-}$ = odd about A (and odd about B)
${\rm X}^{+}$ = even about A (and odd about B)
${\rm X}^{-}$ = odd about A (and even about B)

Hybrid Wannier representation
---
${\rm A}^{+}$ = even about A, generates ${\rm G}^{+}$ and ${\rm X}^{+}$
${\rm A}^{-}$ = odd about A, generates ${\rm G}^{-}$ and ${\rm X}^{-}$
${\rm B}^{+}$ = even about B, generates ${\rm G}^{+}$ and ${\rm X}^{-}$
${\rm B}^{-}$ = odd about B, generates ${\rm G}^{-}$ and ${\rm X}^{+}$
Pairs ${\rm C}$ and ${\rm C}^{\prime}$ generate ${\rm G}^{+}{\rm G}^{-}$ and ${\rm X}^{+}{\rm X}^{-}$
To evaluate the MCNs $\mu_{\rm G}$ and $\mu_{\rm X}$, we construct from each
of the ten HW groups a group of Bloch-like states by performing Bloch sums
along $z$, and recall from Eq. (19) that their Chern numbers on any
constant-$k_{z}$ BZ slice (and, in particular, at G and X) are the same as the
Chern numbers of the parent HW groups. The final needed ingredient is Table 1,
which tells the mirror parities at G and X of the Bloch groups coming from
each of the HW groups. That table is valid for both spinless and spinful
mirror symmetry $M_{z}$, and it agrees with the parity rules for inversion
symmetry ${\cal I}$ in 1D Varnava _et al._ (2020); this is consistent with
the fact that $M_{z}={\cal I}*C_{2}^{z}$ acts along $z$ in the same way as
${\cal I}$.
To evaluate $\mu_{\rm G}$, we need to split the occupied Bloch space at G into
even- and odd-parity sectors about A. According to Table 1, their Chern
numbers are
$C_{\rm G}^{\pm}=\left(\overline{C}_{{\rm A}^{\pm}}+\widetilde{C}_{{\rm
A}^{\pm}}+\widetilde{C}_{\rm UC}^{\pm}\right)+\left(\overline{C}_{{\rm
B}^{\pm}}+\widetilde{C}_{{\rm B}^{\pm}}\right)\,,$ (31)
where the first and second groups of terms correspond to Wannier groups that
are even or odd about A and B, respectively. Inserting this expression into
Eq. (8) for $\mu_{\rm G}$ and then using Eqs. (27) and (30), we find
$2\mu_{\rm G}=\left(\overline{C}_{{\rm A}^{+}}-\overline{C}_{{\rm
A}^{-}}\right)+\left(\overline{C}_{{\rm B}^{+}}-\overline{C}_{{\rm
B}^{-}}\right)+W_{\rm A}+W_{\rm B}\,.$ (32)
Under the uniform parity assumption the first group of terms becomes $p_{\rm
A}\overline{C}_{\rm A}$, where $\overline{C}_{\rm A}$ is the total Chern
number of the flat bands at A, all of the same parity $p_{\rm A}=\pm 1$;
similarly, the second group becomes $p_{\rm B}\overline{C}_{\rm B}$. Thus we
arrive at
$\mu_{\rm G}=\frac{1}{2}\left(p_{\rm A}\overline{C}_{\rm A}+W_{\rm A}\right)+\frac{1}{2}\left(p_{\rm B}\overline{C}_{\rm B}+W_{\rm B}\right)\,,$ (33)
and via similar steps Eq. (9) for $\mu_{\rm X}$ turns into
$\mu_{\rm X}=\frac{1}{2}\left(p_{\rm A}\overline{C}_{\rm A}+W_{\rm A}\right)-\frac{1}{2}\left(p_{\rm B}\overline{C}_{\rm B}+W_{\rm B}\right)\,.$ (34)
Out of the three collections in a type-1 disconnected band structure, the
uncentered collection does not contribute to the MCNs, while the A-centered
and B-centered ones contribute as in Eqs. (33) and (34).
Equations (33) and (34) are a central result of this work, and in the
following sections we will extract several conclusions from them. In practical
applications, those equations can often be simplified: since flat bands and
point nodes do not generically coexist on the mirror planes, at least one of
the two terms inside each parenthesis will typically vanish.
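As an arithmetic illustration of Eqs. (33), (34), and (37), the snippet below evaluates the MCNs for a hypothetical set of integer inputs: flat bands only at A, of parity $p_{\rm A}=+1$ and total Chern number $\overline{C}_{\rm A}=2$, with no point nodes.

```python
def mirror_chern_numbers(pA, CbarA, WA, pB, CbarB, WB):
    """Eqs. (33) and (34) for a type-1 mirror, with hypothetical integer
    inputs: parities p_alpha, flat-band Chern numbers Cbar_alpha, and
    node winding sums W_alpha."""
    muG = 0.5 * (pA * CbarA + WA) + 0.5 * (pB * CbarB + WB)
    muX = 0.5 * (pA * CbarA + WA) - 0.5 * (pB * CbarB + WB)
    return muG, muX

# Hypothetical example: flat bands only at A, p_A = +1, Cbar_A = 2, no nodes.
muG, muX = mirror_chern_numbers(+1, 2, 0, +1, 0, 0)
theta_over_pi = (muG + muX) % 2        # Eq. (37): axion Z2 index
```

In this example both MCNs equal 1, and their sum is even, so the system is axion-even.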
Before proceeding, let us verify that Eq. (33) correctly yields an integer
value for $\mu_{\rm G}$ when $C_{3}=0$. First we eliminate the winding numbers
from Eq. (33) with the help of Eq. (29), and then we take mod 2 on both sides
of the resulting equation to find
$2\mu_{\rm G}\text{ mod 2}=\left(\overline{C}_{\rm A}+\widetilde{C}_{\rm A}+\overline{C}_{\rm B}+\widetilde{C}_{\rm B}\right)\text{ mod 2}=-\widetilde{C}_{\rm UC}\text{ mod 2}\,,$ (35)
where Eq. (25) was used in the second equality. Given that
$\widetilde{C}_{\rm UC}$ is an even number, we conclude that $\mu_{\rm G}$ is
an integer. The proof is identical for Eq. (34).
We emphasize that the separate contributions from the A- and B-centered
collection to Eqs. (33) and (34) are not always integer-valued. As can be seen
from Eq. (37) below, those contributions assume half-integer values when the
axion angle is quantized to $\theta=\pi$ by mirror symmetry; a concrete
example where this happens will be given in Sec. VI.3.
#### IV.1.2 Relation to the quantized axion coupling
As mentioned in the Introduction, mirror symmetry belongs to the group of
“axion-odd” symmetries that reverse the sign of the axion angle $\theta$. When
one or more such symmetries are present in a 3D insulator with a vanishing
Chern vector, $\theta$ is restricted to be zero or $\pi$ mod $2\pi$, becoming
a $\mathbbm{Z}_{2}$ topological index.
In the case of mirror symmetry, where the band topology is already
characterized by the MCNs, there should be a relation between them and the
quantized $\theta$ value. Below we derive that relation for an insulator with
a type-1 mirror and a gapped Wannier spectrum. To that end, we make use of the
formalism of Ref. Varnava _et al._ (2020) for expressing $\theta$ in the HW
representation.
First we write $\mu_{\rm G}+\mu_{\rm X}$ by combining Eqs. (33) and (34), and
eliminate the winding numbers using Eq. (29). Then we take mod 2 on both sides
to find
$\left(\mu_{\rm G}+\mu_{\rm X}\right)\text{ mod 2}=C_{\rm A}\text{ mod 2}\,.$ (36)
Comparing with the relation $\theta/\pi=C_{\rm A}\text{ mod 2}$ Varnava _et
al._ (2020), valid for a gapped spectrum in the presence of a $z$-reversing
axion-odd symmetry such as $M_{z}$, we conclude that
$\frac{\theta}{\pi}=\left(\mu_{\rm G}+\mu_{\rm X}\right)\text{ mod 2}\,.$ (37)
Thus, the system is axion-even ($\theta=0$) or axion-odd ($\theta=\pi$)
depending on whether the sum of the two MCNs associated with $M_{z}$ is even
or odd. Previously, this result had been inferred from an argument based on
counting Dirac cones in the surface BZ Varjas _et al._ (2015); Fulga _et
al._ (2016). Here, we have obtained it directly as a formal relation between
bulk quantities expressed in the HW representation. As we will see shortly,
the same relation holds when the Wannier spectrum is gapless.
#### IV.1.3 Type-2 mirrors
In a crystal with a type-2 mirror, where the planes A and B are equivalent and
G is the only mirror-invariant plane in reciprocal space, the unique MCN
$\mu_{\rm G}$ is obtained by setting $p_{\rm B}=p_{\rm A}$, $\overline{C}_{\rm
B}=\overline{C}_{\rm A}$, and $W_{\rm B}=W_{\rm A}$ in Eq. (33),
$\mu_{\rm G}=p_{\rm A}\overline{C}_{\rm A}+W_{\rm A}\,.$ (38)
If flat bands are present at A, they repel the point nodes. Hence $W_{\rm
A}=0$, and therefore $|\mu_{\rm G}|=|\overline{C}_{\rm A}|$. Interestingly, in
this case the magnitude of the MCN does not depend on the parity of the flat-
band states; this simplifies considerably its numerical evaluation, since one
does not need to know how the basis orbitals transform under $M_{z}$. Given
that only the magnitude (not the sign) of the MCN is needed to establish the
bulk-boundary correspondence, this is a potentially useful result.
Inserting Eq. (29) for $W_{\rm A}$ in Eq. (38), taking mod 2 on both sides,
and again comparing with $\theta/\pi=C_{\rm A}\text{ mod 2}$, we conclude that
in this case the relation between the axion $\mathbbm{Z}_{2}$ index and the
MCN reads
$\frac{\theta}{\pi}=\mu_{\rm G}\text{ mod 2}\,,$ (39)
as stated in Ref. Fulga _et al._ (2016).
#### IV.1.4 Weakly coupled layered crystals
Consider a crystal composed of weakly coupled identical layers that remain
invariant under reflection about their own planes. Following Ref. Kim _et
al._ (2015), we assume that the layers are stacked exactly vertically. In this
case the reflection symmetry about the individual layers becomes a type-1
mirror of the 3D structure, with two separate MCNs $\mu_{\rm G}$ and $\mu_{\rm
X}$. In the fully decoupled limit, where there is no $k_{z}$ dependence, the
G and X reciprocal planes become equivalent, so that $\mu_{\rm X}=\mu_{\rm
G}\equiv\mu_{\rm 2D}$, where $\mu_{\rm 2D}$ is the MCN of an isolated layer
[Eq. (10)]. But since the MCNs are integers, they cannot change if a weak
interlayer coupling is introduced, and from Eqs. (33) and (34) we obtain
$\mu_{\rm 2D}=\frac{1}{2}\left(p_{\rm A}\overline{C}_{\rm A}+W_{\rm A}\right)$ (40)
for the unique MCN of a weakly-coupled layered crystal.
If flat bands are present at A (the plane of a layer), then $W_{\rm A}=0$ and
the net Chern number of the valence bands becomes $\overline{C}_{\rm
A}+\widetilde{C}_{\rm UC}$; since the net Chern number vanishes by assumption
and $\widetilde{C}_{\rm UC}$ is even, $\mu_{\rm 2D}=p_{\rm A}\overline{C}_{\rm
A}/2$ is clearly an integer. In this case $|\mu_{\rm 2D}|$ can be determined
without knowing the parity of the flat-band states, as in the case of a type-2
mirror with flat bands.
Let us now evaluate the axion $\mathbbm{Z}_{2}$ index. Since $\mu_{\rm
G}+\mu_{\rm X}=2\mu_{\rm 2D}$ is an even number, Eq. (37) yields
$\theta=0\text{ mod $2\pi$}\,.$ (41)
This is consistent with the assertion made in Ref. Kim _et al._ (2015) that
weakly-coupled layered topological crystalline insulators are analogous to
“weak topological insulators” with a vanishing strong $\mathbbm{Z}_{2}$
invariant $\nu_{0}$.
### IV.2 Gapless Wannier band structure
Let us now apply our formalism to an $M_{z}$-symmetric system with a gapless
Wannier spectrum. We start out by noting that such a spectrum must have
degeneracies at both A and B. On those special planes the codimension is two,
so point nodes are allowed. Flat bands can be ruled out since they would repel
any nodes and generate a gap, and we assume that nodal lines are absent as
well.
We are left with a scenario where there are point nodes at both A and B, and
these are connected by Wannier bands. The only way this can happen without the
assistance of other symmetries is if there are only two Wannier bands, one in
each half unit cell, since otherwise there is generically a gap somewhere in
each half cell (accidental degeneracies away from A and B are not protected,
since the codimension is three). With the assistance of other symmetries, the
gapless spectrum may contain more than two bands per cell.
To treat the above scenario, we temporarily add a symmetric pair of occupied
orbitals at degeneracy-free planes $\pm z_{0}$, and initially do not let them
hop at all (completely isolated). This will introduce flat bands on those
planes. Now let the added orbitals hybridize with other orbitals. Since
accidental degeneracies away from the mirror planes are not protected, gaps
will generally open up between the new and the old Wannier bands (the only
exceptions to this rule are treated in the next paragraph). And since the
added orbitals are topologically trivial, they have no effect on the MCNs,
which can now be evaluated using the formalism of Sec. IV.1 for gapped
spectra. Setting $\overline{C}_{\rm A}=\overline{C}_{\rm B}=0$ in Eqs. (33)
and (34) therein, we obtain
$\mu_{\rm G}=\frac{1}{2}\left(W_{\rm A}+W_{\rm B}\right)$ (42)
and
$\mu_{\rm X}=\frac{1}{2}\left(W_{\rm A}-W_{\rm B}\right)\,.$ (43)
But since $W_{\rm A}$ and $W_{\rm B}$ cannot be affected by orbitals inserted
far from the A and B planes, we conclude that Eqs. (42) and (43) can be
directly applied to the original system with a gapless Wannier spectrum.
The above argument needs to be refined if the system is an axion-odd insulator
that has, in addition to $M_{z}$ symmetry, one or more axion-odd symmetries
that are $z$ preserving and symmorphic (e.g., spinful time reversal or
vertical mirrors). The Wannier spectrum is then guaranteed to be gapless, with
adjacent bands touching at an odd number of Dirac nodes Varnava _et al._
(2020). The solution is to weakly break all such symmetries via some low-
symmetry perturbation; the band connectivity then becomes “fragile,” allowing
gaps to open up once the added orbitals hybridize with the original ones
Wieder and Bernevig (2018); Varnava _et al._ (2020). The rest of the argument
proceeds as before, again with the conclusion that Eqs. (42) and (43) can be
directly applied to the original system with a gapless spectrum. This scenario
is illustrated in Sec. VI.3.2, where the orbital insertion itself acts as the
symmetry-lowering perturbation.
To conclude, let us show that the relation (37) between the MCNs and the axion
angle remains valid for gapless spectra. Equations (42) and (43) give
$\mu_{\rm G}+\mu_{\rm X}=W_{\rm A}$, while $\theta$ is equal to the sum of
Berry phases of vanishingly small loops around the nodes at A Varnava _et
al._ (2020). Since those Berry phases divided by $\pi$ are equal to the node
winding numbers modulo 2 Park and Marzari (2011), Eq. (37) is immediately
recovered.
## V Methods
### V.1 Tight-binding, ab initio, and Wannier methods
In this work, the formalism for evaluating MCNs in the HW representation is
implemented in the tight-binding (TB) framework, using a modified version of
the PythTB code. Illustrative calculations are carried out for 2D and 3D
models with mirror symmetry; some are simple toy models, while others are
obtained from ab initio calculations as described below. Each model is
specified by providing the on-site energies, the hopping amplitudes, and the
matrix elements of the position and mirror operators.
In the TB literature, it is common to assume that the position operator is
represented by a diagonal matrix in the TB basis,
$\langle\varphi_{{\bf R}i}|{\bf r}|\varphi_{{\bf R}^{\prime}j}\rangle=({\bf
R}+\bm{\tau}_{i})\delta_{{\bf R},{\bf R}^{\prime}}\delta_{ij}$ (44)
where $\bm{\tau}_{i}$ is the location of the $i$th basis orbital in the home
cell ${\bf R}=\mathbf{0}$. This approximation is problematic for calculating
the Wannier bands of unbuckled monolayers, since it forces all bands to lie
flat on the $z=0$ plane: when all basis orbitals lie on the $z=0$ plane and
all off-diagonal matrix elements $\langle\varphi_{{\bf R}i}|z|\varphi_{{\bf
R}^{\prime}j}\rangle$ vanish, the matrix $Z_{\bm{\kappa}}$ that is
diagonalized to obtain the HW centers [see Eqs. (45) and (46)] is the null
matrix.
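The issue can be reproduced in a few lines of linear algebra: with a strictly diagonal $z$ matrix and all orbitals at $z=0$, the projected matrix is null and every Wannier band is pinned flat, while a single (hypothetical) off-diagonal element $\langle\varphi_{1}|z|\varphi_{2}\rangle$ splits them into a dispersive pair.

```python
import numpy as np

# With all orbitals on the z = 0 plane and a strictly diagonal z matrix
# [Eq. (44)], the projected matrix is null and all Wannier bands are flat.
# A single hypothetical off-diagonal element <phi_1|z|phi_2> = 0.1 splits
# them into a dispersive pair at +/- 0.1.
z_diag = np.zeros((2, 2))
z_full = np.array([[0.0, 0.1],
                   [0.1, 0.0]])
flat = np.linalg.eigvalsh(z_diag)     # both bands pinned at z = 0
split = np.linalg.eigvalsh(z_full)    # dispersive pair
```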
For our formalism to apply to flat monolayers, any flat Wannier bands that may be
present must be robust and satisfy the uniform parity assumption, while all
other bands must be dispersive. To ensure that this is so, one should retain
some off-diagonal $z$ matrix elements. For models based on ab initio Wannier
functions this occurs naturally, since the position matrix elements between
the Wannier functions are explicitly calculated, and they are generally
nonzero for nearby Wannier functions. In the case of toy models, one needs to
assign nonzero values to some of the off-diagonal $z$ matrix elements under
reasonable assumptions.
The material chosen for the ab initio calculations is SnTe, which we study as
a flat monolayer in Sec. VI.1 and as a bulk phase in Sec. VI.2. We first
calculate the electronic structure from density-functional theory (DFT) using
the GPAW code Enkovaara _et al._ (2010), and then use the Wannier90 code
Mostofi _et al._ (2014) to construct well-localized Wannier functions.
Lastly, TB models are generated by tabulating the matrix elements of the Kohn-
Sham Hamiltonian and of the position operator between those Wannier functions.
The self-consistent DFT calculations are performed without including spin-
orbit coupling, which is added afterwards non-selfconsistently Olsen (2016).
We use the Perdew-Burke-Ernzerhof exchange-correlation functional Perdew _et
al._ (1996, 1997), and describe the valence-core interaction via the projector
augmented wave method Blöchl (1994). The valence states are expanded in a
plane-wave basis with an energy cutoff of 600 eV, and the BZ is sampled on
$\Gamma$-centered uniform grids containing $6\times 6\times 1$ and $6\times
6\times 6$ points for monolayer and bulk SnTe, respectively. The projector
augmented wave setup includes the 4$d$ semicore states of Sn in addition to
the 5$s$ and 5$p$ states of Sn and Te, yielding a total of 20 valence
electrons for each SnTe formula unit (one per cell for the monolayer, and two
for the bulk).
For each formula unit, we construct 16 spinor Wannier functions of $s$ and $p$
character spanning the upper-valence and low-lying conduction band states. The
Sn 4$d$ states, which give rise to flat bands lying 22 eV below the Fermi
level, are excluded from the Wannier construction.
As a first step towards obtaining well-localized Wannier functions, we extract
from the space of ab initio Bloch eigenstates at each grid point ${\bf k}$ an
$N$-dimensional subspace with the desired orbital character ($N=16$ for the
monolayer, and $N=32$ for the bulk). This is achieved via the “band
disentanglement” procedure of Ref. Souza _et al._ (2001), which involves
specifying two energy windows, known as the inner and the outer window, and a
set of trial orbitals. The outer window encloses all the valence bands except
for the 4$d$ semicore states, as well as all the low-lying conduction states
of $5s$ and $5p$ character. To ensure that the valence states are exactly
preserved in the disentangled subspace, we “freeze” them inside an inner
window. An initial guess for the target subspace is obtained by projecting
atom-centered $s$ and $p$ trial orbitals onto the outer-window states. This is
followed by an iterative procedure that yields an optimally-smooth
disentangled subspace across the BZ Souza _et al._ (2001).
Having extracted a suitable Bloch subspace, we proceed to construct well-
localized $s$- and $p$-like Wannier functions spanning that subspace. This is
done by projecting onto it the same $s$ and $p$ trial orbitals that were used
in the disentanglement step, and then orthogonalizing the resulting orbitals
via the Löwdin scheme Marzari and Vanderbilt (1997). This one-shot procedure,
without additional maximal-localization steps Marzari and Vanderbilt (1997),
ensures that the Wannier functions retain the orbital character of the trial
orbitals.
To assess the quality of the Wannier basis we calculate the energy bands from
the Hamiltonian matrix elements in that basis Souza _et al._ (2001), and find
that they are in excellent agreement with the ab initio bands obtained using
the GPAW code Olsen _et al._ (2019).
In addition to the Hamiltonian and position matrix elements, we also require
the matrix elements of the mirror operator $M_{z}$ in the Wannier basis. These
are needed to determine the winding numbers of the nodal touchings between
Wannier bands on the mirror planes (see Sec. V.3), as well as the mirror
parities $p_{\rm A}$ and $p_{\rm B}$ of the flat-band states. To set up the
matrix representation of $M_{z}$, we assume that the Wannier functions
transform under $M_{z}$ in the same way as pure $s$ and $p$ orbitals. We find
that the eigenstates of the Wannier Hamiltonian on the mirror-invariant BZ
planes are, to a good approximation, eigenstates of this approximate $M_{z}$
operator, which validates that assumption.
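For concreteness, such an approximate $M_{z}$ matrix can be assembled directly from the orbital parities. The following minimal sketch (an illustration, not our production code) assumes one atom carrying spinful $s$, $p_{x}$, $p_{y}$, $p_{z}$ orbitals centered on the mirror plane: the spatial parts carry parities $(+1,+1,+1,-1)$ under $z\to-z$, while the spin-$1/2$ part contributes the rotation $e^{-i\pi\sigma_{z}/2}=-i\sigma_{z}$.

```python
import numpy as np

# Illustrative sketch: approximate M_z matrix for one atom with spinful
# s, px, py, pz orbitals centered on the mirror plane.  The spatial parts
# transform with parities (+1, +1, +1, -1) under z -> -z, and the spin-1/2
# part contributes the rotation exp(-i*pi*sigma_z/2) = -i*sigma_z.
orbital_parity = np.diag([+1, +1, +1, -1])      # s, px, py, pz
spin_rotation = -1j * np.diag([+1, -1])         # -i * sigma_z
Mz = np.kron(orbital_parity, spin_rotation)     # 8x8 block for one atom

# For spin-1/2 systems M_z^2 = -1, so the mirror eigenvalues are +i and -i.
assert np.allclose(Mz @ Mz, -np.eye(8))
```

For a multi-atom basis the full matrix is block diagonal in the atoms, up to the translations that map mirror-related Wannier centers onto each other.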
### V.2 Construction of hybrid Wannier functions and Wannier bands
Formally, maximally-localized HW functions satisfy the eigenvalue equation
(12). For a 2D or quasi-2D system extended along $x$ and $y$, the matrix
elements of the $z$ operator appearing in that equation are well defined. It
is therefore straightforward to set up the matrix
$Z_{mn{\bf k}}=\langle\psi_{m{\bf k}}|z|\psi_{n{\bf k}}\rangle\,,$ (45)
where ${\bf k}=(k_{x},k_{y})$ and $m$ and $n$ run over the $J$ occupied energy
bands, and to diagonalize it,
$\left[U^{\dagger}_{\bf k}Z_{\bf k}U_{\bf k}\right]_{mn}=z_{m{\bf
k}}\delta_{mn}\,.$ (46)
The eigenvalues are the HW centers, and from the eigenvectors (the columns of
the $U_{\bf k}$ matrix) we can construct the maximally-localized HW functions
according to
$|h_{n{\bf k}}\rangle=\sum_{m}\,e^{-i{\bf k}\cdot{\bf r}}|\psi_{m{\bf
k}}\rangle U_{mn{\bf k}}\,,$ (47)
where the phase factor has been included to render them in-plane periodic.
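Equations (45)-(47) amount to one dense Hermitian diagonalization per ${\bf k}$ point. A minimal sketch (illustrative only; `Z_k` stands for the matrix of Eq. (45) at one grid point):

```python
import numpy as np

# Minimal sketch of Eqs. (45)-(47): at fixed (kx, ky), diagonalize the z
# position matrix in the occupied subspace.  The eigenvalues are the HW
# centers, and the eigenvector matrix U_k rotates the Bloch states into
# the maximally-localized HW functions of Eq. (47).
def hybrid_wannier(Z_k):
    """Z_k: Hermitian J x J matrix <psi_mk|z|psi_nk> at one k point."""
    z_centers, U_k = np.linalg.eigh(Z_k)    # Eq. (46)
    return z_centers, U_k

# Toy example with a random Hermitian Z matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Z = (A + A.conj().T) / 2
centers, U = hybrid_wannier(Z)
assert np.allclose(U.conj().T @ Z @ U, np.diag(centers))
```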
For bulk systems, which are extended in all directions including the
wannierization direction $z$, the above procedure fails because the matrix
elements in Eq. (45) become ill defined. In such cases, it is still possible
to construct maximally-localized HW functions by working in reciprocal space.
We now write ${\bf k}=({\bm{\kappa}},k_{z})$, and choose a uniform grid; for
each point ${\bm{\kappa}}$ in the projected 2D BZ, the problem reduces to the
construction of 1D maximally-localized Wannier functions along $z$. The
procedure is detailed in Refs. Vanderbilt (2018); Marzari and Vanderbilt
(1997). Briefly, the first step is to establish a “twisted parallel transport
gauge” for the valence Bloch states along the string of $k_{z}$ points at each
${\bm{\kappa}}$, obtaining as a byproduct the HW centers
$z_{ln{\bm{\kappa}}}$. The maximally-localized HW functions
$|h_{ln{\bm{\kappa}}}\rangle$ are then constructed in this gauge using Eq.
(11), with the integral over $k_{z}$ replaced by a summation over the string
of $k_{z}$ points.
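As an illustration of the reciprocal-space route, the sketch below obtains the HW centers at fixed ${\bm{\kappa}}$ from the eigenvalues of the discrete Wilson loop along $k_{z}$, which is gauge independent and yields the same centers as the parallel-transport construction. The helper and its inputs are hypothetical, and the states are assumed to be supplied in a periodic gauge so that the string closes on itself.

```python
import numpy as np

# Sketch (not the twisted-parallel-transport code itself): at fixed in-plane
# kappa, the HW centers along z follow from the eigenvalues of the discrete
# Wilson loop over the string of k_z points.
def hw_centers_from_string(u_list, c):
    """u_list: occupied eigenvector matrices (N_orb x J) along the string,
    in a periodic gauge; c: lattice constant along z.
    Returns the J HW centers in [0, c)."""
    J = u_list[0].shape[1]
    W = np.eye(J, dtype=complex)
    for u0, u1 in zip(u_list, u_list[1:] + u_list[:1]):
        M = u0.conj().T @ u1                 # neighbor overlap matrix
        V, _, Wd = np.linalg.svd(M)
        W = W @ (V @ Wd)                     # keep only the unitary part
    phases = np.angle(np.linalg.eigvals(W))  # Berry phases of the loop
    return np.sort(phases % (2 * np.pi)) * c / (2 * np.pi)

# One occupied state rotating by pi over the string: Berry phase pi,
# so the single HW center sits at c/2.
u_list = [np.array([[np.cos(np.pi * j / 8)], [np.sin(np.pi * j / 8)]])
          for j in range(8)]
assert np.allclose(hw_centers_from_string(u_list, c=1.0), [0.5])
```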
### V.3 Winding number of a point node of order $N$
#### V.3.1 Definition
Earlier, we defined the winding number of a point node where two Wannier bands
meet on a mirror plane. Since there are situations where $N>1$ pairs of bands
meet at a node, we need to generalize that definition to handle such “higher-
order” nodes.
Given a point node ${\bm{\kappa}}_{j}$ of order $N\geq 1$, we introduce the
$2N\times 2N$ matrix representation of $M_{z}$ at a nearby point
${\bm{\kappa}}$,
${\cal M}^{z}_{mn{\bm{\kappa}}}=\langle
h_{m{\bm{\kappa}}}|M_{z}|h_{n{\bm{\kappa}}}\rangle\,.$ (48)
Here, $m$ and $n$ run over the $2N$ Wannier bands that meet at
${\bm{\kappa}}_{j}$. By diagonalizing ${\cal M}^{z}_{\bm{\kappa}}$ and then
transforming the $|h_{n{\bm{\kappa}}}\rangle$ states accordingly [see Eqs.
(46) and (47)], we obtain a new set of $2N$ states
$|\tilde{h}_{n{\bm{\kappa}}}\rangle$. Like the original ones they are cell-
periodic in plane and localized along $z$, but they have definite mirror
parities. We choose the first $N$ to be even under $M_{z}$, and denote them as
$|\tilde{h}^{+}_{l{\bm{\kappa}}}\rangle$; the remaining $N$ are odd under
$M_{z}$, and we denote them as $|\tilde{h}^{-}_{l{\bm{\kappa}}}\rangle$. In
both cases, $l$ goes from 1 to $N$. The matrix representation of $z$ in the
new basis takes the form of Eq. (23), where $f_{\bm{\kappa}}$ is the $N\times
N$ matrix with elements
$f_{ll^{\prime}{\bm{\kappa}}}=\langle\tilde{h}^{+}_{l{\bm{\kappa}}}|z|\tilde{h}^{-}_{l^{\prime}{\bm{\kappa}}}\rangle\,.$
(49)
Letting
$\gamma_{\bm{\kappa}}=\arg(\det f_{\bm{\kappa}})\,,$ (50)
the winding number can be evaluated from Eq. (28) irrespective of the order
$N$ of the node.
#### V.3.2 Numerical evaluation
Suppose a single pair of Wannier bands meets at a point node
${\bm{\kappa}}_{j}$. To evaluate the winding number (28), the phase
$\gamma_{\bm{\kappa}}$ must be smooth on $c_{j}$. In practice, we establish a
smooth gauge for the states $|\tilde{h}^{\pm}_{\bm{\kappa}}\rangle$ as
follows. We pick a representation of the two states at a reference point
${\bm{\kappa}}^{\prime}_{j}$ in the vicinity of the node. Then at any point
${\bm{\kappa}}^{\prime}_{j}+\Delta{\bm{\kappa}}$ on the circle $c_{j}$ we
choose the gauge by enforcing maximal phase alignment with the states at
${\bm{\kappa}}^{\prime}_{j}$, i.e., by requiring that the overlaps
$\langle\tilde{h}^{+}_{{\bm{\kappa}}^{\prime}_{j}}|\tilde{h}^{+}_{{\bm{\kappa}}^{\prime}_{j}+\Delta{\bm{\kappa}}}\rangle$
and
$\langle\tilde{h}^{-}_{{\bm{\kappa}}^{\prime}_{j}}|\tilde{h}^{-}_{{\bm{\kappa}}^{\prime}_{j}+\Delta{\bm{\kappa}}}\rangle$
are real and positive. In other words, we carry out a one-step parallel
transport from ${\bm{\kappa}}^{\prime}_{j}$ to each circumference point.
If several pairs of bands meet at a node, the strategy is basically the same.
The only difference is that one must now use the multiband version of the
parallel-transport procedure Vanderbilt (2018); Marzari and Vanderbilt (1997).
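Once a smooth gauge is in place, evaluating the winding number reduces to unwrapping the phase of $\det f_{\bm{\kappa}}$ around the circle. A minimal sketch follows; the analytic form $\det f\propto(\kappa_{x}+i\kappa_{y})^{W}$ used in the check is only a test model, not a statement about any particular material.

```python
import numpy as np

# Sample det(f_kappa) on the circle c_j, take a smooth branch of
# gamma = arg(det f), and count full turns.  Assumes the sampling is
# dense enough that successive phase jumps stay below pi.
def winding_number(det_f_on_circle):
    gamma = np.unwrap(np.angle(det_f_on_circle))
    return int(np.rint((gamma[-1] - gamma[0]) / (2 * np.pi)))

# Analytic test model: det f ~ (kx + i*ky)^W around the node
theta = np.linspace(0.0, 2.0 * np.pi, 201)   # closed loop, endpoint included
for W in (-3, -1, +1):
    assert winding_number(np.exp(1j * W * theta)) == W
```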
## VI Numerical results
In this section, we use our formalism to calculate the MCNs of three different
systems. The first is an unbuckled monolayer of SnTe, a topological
crystalline insulator protected by reflection symmetry about its plane. The
second is rocksalt SnTe, a 3D topological crystalline insulator protected by a
type-2 mirror. Our last example is a 3D toy model based on a modified Dirac
equation. It is both a strong topological insulator protected by time-reversal
symmetry, and a topological crystalline insulator with a type-1 mirror. In the
first example the Wannier spectrum is trivially gapped, while in the other two
it is gapless.
### VI.1 Unbuckled monolayer of SnTe
Figure 2: (a) Atomic structure of monolayer SnTe. The black square is the
conventional unit cell with lattice constant $a$, and the red square is the
primitive cell with lattice constant $a^{\prime}=a/\sqrt{2}$. (b) Brillouin
zone and high-symmetry points.
Figure 3: (a) Energy bands of monolayer SnTe, with the $s$-type lower valence
bands that are excluded from the Wannierization shown in grey. All bands are
doubly degenerate, and the Fermi level is indicated by the dashed line. (b)
Wannier bands obtained from the Bloch states in the six $p$-type upper valence
bands. (c) Heatmap plot of the gap function of Eq. (52) for the central pair
of Wannier bands, where zero-gap points (nodal points) appear as dark spots.
Those with winding numbers $W_{j}=\pm 1$ are indicated by red or blue circles,
while the one with $W_{j}=-3$ at the $\Gamma$ point is indicated by a blue
triangle. Dashed circles denote pairs of nearby nodes with equal and opposite
winding numbers. When a node falls on the BZ boundary, only one of the
periodic images is shown.
The structure we consider is shown in Fig. 2(a). It consists of a single
unbuckled layer of Sn and Te atoms arranged in a checkerboard pattern, which
can be viewed as a single (001) layer of the bulk rocksalt structure.
DFT calculations reveal that the system with an optimized lattice constant of
$a=6.16$ Å is situated 0.4 eV above the convex hull and is dynamically
unstable Haastrup _et al._ (2018), and that a buckled structure that breaks
mirror symmetry is energetically favored Kobayashi (2015). These results imply
that a flat SnTe monolayer is not likely to be experimentally relevant. This
system is nevertheless ideally suited for illustrating our methodology, since
it has reflection symmetry about its own plane and the associated MCN is
nonzero Liu _et al._ (2015).
We carry out calculations using the primitive cell containing one formula
unit. The Wannier-interpolated energy bands are shown in Fig. 3(a), where all
bands are doubly degenerate due to time-reversal and inversion symmetry. There
is a robust inverted gap ($0.3$ eV) at the X point, and a tiny indirect gap
($0.17$ meV) around the X point; when the lattice expands the indirect gap
increases, and when it shrinks the system turns into a band overlap semimetal
Liu _et al._ (2015); Kobayashi (2015). The lowest four valence bands are
predominantly $s$-type, and the remaining six (plotted in red) are
predominantly $p$-type.
Figure 3(b) shows the Wannier bands calculated from the Bloch states in the
$p$-type upper valence bands. The spectrum consists of three mirror-symmetric
band pairs that touch on the A plane $z=0$ at isolated points in the 2D BZ.
There are no flat bands on that plane, as expected from the presence of time-
reversal symmetry (Sec. III.2.3). Equation (40) therefore reduces to
$\mu_{\rm 2D}=\frac{1}{2}W_{\rm A}\,,$ (51)
and the MCN can be determined by evaluating the winding numbers of the nodal
points on the A plane.
To locate those nodal points, we plot in Fig. 3(c) the “gap function”
$g_{\bf k}=-\log(\Delta z_{\bf k}/c)\,,$ (52)
where $\Delta z_{\bf k}$ is the separation between the central pair of bands.
Regions with a small gap appear in dark gray, and nodal points as dark spots.
The positions and winding numbers of all the nodal points are indicated in the
figure, where we have included only one of the periodic images when a node
falls on the BZ boundary. At $\Gamma$ and M there are nodes where three pairs
of Wannier bands touch, with winding numbers $W_{j}=-3$ and $W_{j}=+1$,
respectively. All other nodes on the $z=0$ plane are simple Dirac nodes where
only the two central bands meet, and they have $W_{j}=\pm 1$. Adding up the
winding numbers of the 36 nodal points in the BZ we obtain $W_{\rm A}=-4$, and
from Eq. (51) we conclude that the group of six $p$-type valence bands has a
MCN of $-2$.
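For reference, the gap-function plot of Fig. 3(c) requires nothing beyond Eq. (52) evaluated on a grid. The sketch below, using a made-up separation field, shows the basic bookkeeping, including a small floor that keeps the logarithm finite at the nodes.

```python
import numpy as np

# Illustrative bookkeeping for Eq. (52): evaluate the gap function from the
# separation dz of the central pair of Wannier bands, and flag candidate
# nodal points where the separation (nearly) vanishes.
def gap_function(dz_grid, c, tol=1e-6):
    g = -np.log(np.maximum(dz_grid, tol) / c)   # large g = small gap
    nodes = np.argwhere(dz_grid < tol)          # candidate nodal points
    return g, nodes

# Made-up separation field with a single zero at the grid center
k = np.linspace(-np.pi, np.pi, 101)
KX, KY = np.meshgrid(k, k, indexing="ij")
dz = 0.1 * np.abs(KX + 1j * KY)                 # vanishes only at (0, 0)
g, nodes = gap_function(dz, c=1.0)
assert nodes.tolist() == [[50, 50]]
```

In practice the candidate nodes found this way would then be classified by the winding-number procedure of Sec. V.3.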
We repeat the calculation for the four $s$-type lower valence bands, and find
that their net winding number vanishes. The net MCN of the occupied states is
therefore $\mu_{\rm 2D}=-2$, with the nontrivial topology coming from the $p$
states. This result agrees with the value $|\mu_{\rm 2D}|=2$ inferred from a
$k\cdot p$ analysis of the simultaneous band inversions at the two X points in
the BZ Liu _et al._ (2014, 2015).
### VI.2 Bulk SnTe
Bulk SnTe, which crystallizes in the rocksalt structure, is known both from
theory Hsieh _et al._ (2012) and experiment Tanaka _et al._ (2012) to be a
topological crystalline insulator. The symmetry protecting its nontrivial band
topology is reflection about the $\{110\}$ family of planes. (Instead, the
(001) mirror symmetry responsible for the topological state of the monolayer
is topologically trivial in the bulk crystal.)
The lattice is face-centered cubic, so that the shortest lattice
vector perpendicular to the (110) planes is ${\bf a}_{3}=a\hat{\bf
x}/2+a\hat{\bf y}/2$. Since its length is twice the separation between
adjacent planes, the (110) mirror operation is of type 2, as is typical of
centered lattices (see Fig. 1).
For our simulations we pick a tetragonal cell subtended by ${\bf
a}_{1}=-a\hat{\bf x}/2+a\hat{\bf y}/2$, ${\bf a}_{2}=a\hat{\bf z}$, and ${\bf
a}_{3}$, and reorient the axes such that those vectors point along $\hat{\bf
x}$, $\hat{\bf y}$, and $\hat{\bf z}$, respectively. In this new frame, the
(110) mirror operation of interest becomes $M_{z}$. The simulation cell with
two formula units is shown in Fig. 4(a), and the associated BZ in Fig. 4(b).
Figure 4: (a) Rocksalt structure of bulk SnTe in a tetragonal conventional
cell. $a$ is the lattice constant of the conventional cubic cell, and
$b=c=a/\sqrt{2}$. Green planes are equivalent mirror planes. (b) Brillouin
zone associated with the tetragonal cell, with its high-symmetry points
indicated in red and the unique $M_{z}$-invariant plane in green. The
projected 2D Brillouin zone with its high-symmetry points is shown on top.
Figure 5: (a) Energy bands of bulk SnTe along high-symmetry lines of the
folded tetragonal BZ. The Fermi level is indicated by the dashed line. (b)
Wannier band structure obtained from the full set of valence states. (c)
Detail of the Wannier bands around the $z=0$ mirror plane. (d) Heatmap plot of
the gap function of Eq. (52) for the central pair of Wannier bands around
$z=0$, with the nodal points color-coded as in Fig. 3(c).
In Fig. 5(a) we present the energy bands calculated along the high-symmetry
lines of the folded BZ. The nontrivial topology arises from simultaneous band
inversions at the two L points in the unfolded BZ Hsieh _et al._ (2012),
which map onto the two R points in Fig. 4(b). The inverted band gap at R and
the global indirect band gap amount to $0.3$ and $0.1$ eV, respectively.
From the full set of valence band states, we construct HW functions localized
along $z$. The Wannier spectrum is shown in Fig. 5(b). Its periodicity is
$c/2$ because the cell is doubled along $z$, and only one period is shown. The spectrum is gapless: between $\overline{\rm X}$ and $\overline{\Gamma}$, two pairs of bands cross in opposite directions across the gap centered at $z=c/4$ (only one of the two crossings is shown). This spectral flow arises
from the nonzero MCN associated with $M_{y}$ symmetry (equivalent to $M_{z}$),
which leaves invariant the BZ plane containing the $\Gamma$, X, ${\rm R}_{2}$,
and ${\rm Y}_{2}$ points. For a discussion of such “in-plane” Wannier flow
associated with a nonzero MCN, see Ref. Gresch _et al._ (2017).
Since $M_{z}$ is a type-2 mirror, we evaluate its unique MCN using Eq. (38).
Because the Wannier spectrum is gapless, and hence devoid of flat bands, we
set $\overline{C}_{\rm A}=0$ in that equation to obtain
$\mu_{\rm G}=W_{\rm A}\,,$ (53)
which says that the MCN equals the sum of the winding numbers of all the point
nodes on the $z=0$ plane.
As indicated in Fig. 5(d), there are 16 independent point nodes in total on
that plane, all of them simple nodes where only two bands meet. Seven have
winding numbers $+1$ and the other nine have winding numbers $-1$, yielding
$\mu_{\rm G}=-2$ for the MCN. This value is in agreement with that originally
obtained in Ref. Hsieh _et al._ (2012) from a $k\cdot p$ analysis of the band
inversions. Using Eq. (39), we confirm that the system is axion-trivial.
### VI.3 Modified Dirac model on a cubic lattice
In this section we study a 3D toy model constructed by first modifying the
free Dirac equation to enable topological phases for certain parameter values,
and then placing it on a cubic lattice. The 4$\times$4 Hamiltonian matrix in
reciprocal space reads Shen _et al._ (2011); Rauch _et al._ (2017)
$H({\bf k})=\left(\begin{matrix}m-2MK(\mathbf{k})&0&c\sin k_{z}&c(\sin k_{x}-i\sin k_{y})\\ 0&m-2MK(\mathbf{k})&c(\sin k_{x}+i\sin k_{y})&-c\sin k_{z}\\ c\sin k_{z}&c(\sin k_{x}-i\sin k_{y})&-m+2MK(\mathbf{k})&0\\ c(\sin k_{x}+i\sin k_{y})&-c\sin k_{z}&0&-m+2MK(\mathbf{k})\end{matrix}\right)\,,$ (54)
where $K(\mathbf{k})=3-\cos k_{x}-\cos k_{y}-\cos k_{z}$, and $c$, $m$, and
$M$ are dimensionless parameters inherited from the original isotropic
modified Dirac equation Shen _et al._ (2011) by setting the rest mass
$m_{0}c^{2}$ to be the energy scale of the model Rauch _et al._ (2017).
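Equation (54) is straightforward to transcribe for numerical experiments. Since $H^{2}=(d^{2}+s_{x}^{2}+s_{y}^{2}+s_{z}^{2})\,\mathbb{1}$ with $d=m-2MK({\bf k})$ and $s_{i}=c\sin k_{i}$, the spectrum is $\pm E$ with each branch doubly degenerate, which the sketch below verifies.

```python
import numpy as np

# Direct transcription of the Hamiltonian of Eq. (54).
def H(k, c=1.0, m=1.0, M=0.5):
    kx, ky, kz = k
    d = m - 2.0 * M * (3.0 - np.cos(kx) - np.cos(ky) - np.cos(kz))
    sx, sy, sz = c * np.sin(kx), c * np.sin(ky), c * np.sin(kz)
    return np.array([
        [d, 0, sz, sx - 1j * sy],
        [0, d, sx + 1j * sy, -sz],
        [sz, sx - 1j * sy, -d, 0],
        [sx + 1j * sy, -sz, 0, -d],
    ])

# H^2 = (d^2 + sx^2 + sy^2 + sz^2) * identity, so the spectrum is +/-E
# with each branch doubly degenerate.
E = np.linalg.eigvalsh(H((0.3, -0.7, 1.1)))
assert np.allclose(E[0], E[1]) and np.allclose(E[2], E[3])
assert np.allclose(E[0], -E[3])
```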
Figure 6: Topological phase diagram of the model of Eq. (54) for $c=1.0$.
Orange and blue regions denote axion-even ($\theta=0$) and axion-odd
($\theta=\pi$) phases, respectively.
The topological phase diagram of the half-filled model is shown in Fig. 6 for
$c=1.0$. The system is gapped except on the $m=0,4M,8M,12M$ lines, where the
gap closes at $\Gamma=(0,0,0)$, ${\rm X}=(\pi,0,0)$, ${\rm M}=(\pi,\pi,0)$,
and ${\rm A}=(\pi,\pi,\pi)$, respectively. As shown in Appendix C, those
metallic lines separate axion-trivial from axion-odd insulating phases.
The axion angle is quantized by several axion-odd symmetries. Some are
$z$-reversing (inversion and horizontal mirror $M_{z}$), and others are
$z$-preserving (spinful time reversal and vertical mirrors). As $M_{z}$ is a
type-1 mirror, it protects two MCNs that are related to the axion angle by Eq.
(37).
#### VI.3.1 Axion-odd phase with protected Wannier flow
For our numerical tests we set $c=m=1.0$ and $M=0.5$ to put the model in the
axion-odd phase. The energy band structure is shown in Fig. 7(a). The bands
are pairwise degenerate due to the presence of time-reversal and inversion
symmetry, with a finite gap between the two pairs over the entire BZ. The
Fermi level is placed at midgap.
Figure 7: (a) Energy bands of the model described by Eq. (54) with $c=m=1.0$
and $M=0.5$. The bands are doubly degenerate, and the Fermi level (dashed
line) has been placed at midgap. (b) Wannier band structure obtained from the
valence states. (c) and (d) Heatmap plots of the gap function of Eq. (52)
about the $z=0$ and $z=c/2$ planes, respectively, with the nodal points color-
coded as in Fig. 3(c).
Since the system is axion-odd and has $z$-preserving axion-odd symmetries, the
connectivity (or “flow”) of the Wannier bands is topologically protected
Varnava _et al._ (2020). In particular, spinful time reversal symmetry
requires that the two bands per vertical cell are glued together as follows:
one band touches the band above at one of the four time-reversal invariant
momenta (TRIM), and it touches the periodic image below at the other three. As
for the $z$-reversing axion-odd symmetries, the effect of $M_{z}$ is to pin
the up-touching to one of the mirror planes and the three down-touchings to
the other, while inversion further constrains the four touchings to occur at
TRIM on those planes, as already mandated by time reversal.
The pattern of band touchings described above is confirmed by Fig. 7(b), where
we plot the Wannier bands. They were obtained by placing at the origin the
four basis orbitals that belong to the home unit cell, and making the diagonal
approximation of Eq. (44) for the position matrix. There is one band touching
at $\overline{\Gamma}$ on the B plane, and three more on the A plane: one at
$\overline{\rm M}$, and the others at the two $\overline{\rm X}$ points.
Since the Wannier spectrum is gapless, the MCNs $\mu_{\rm G}$ and $\mu_{\rm
X}$ are given respectively by the half-sum and the half-difference of the net
winding numbers on the A and B planes [Eqs. (42) and (43)]. As indicated in
the gap-function plots of Figs. 7(c,d), the three nodes at A give $W_{\rm
A}=-1$ and the single node at B gives $W_{\rm B}=-1$, so that $\mu_{\rm G}=-1$
and $\mu_{\rm X}=0$. Note that $\mu_{\rm G}+\mu_{\rm X}$ is an odd number, as
required by Eq. (37) for an axion-odd system.
#### VI.3.2 Axion-odd phase with fragile Wannier flow
If the $z$-preserving axion-odd symmetries of the model (time reversal and
vertical mirrors) are weakly broken, the system will remain in an axion-odd
phase protected by $M_{z}$ and inversion. But since these are $z$-reversing
operations, the Wannier spectrum is no longer topologically required to be
gapless. The Wannier flow is only protected in a “fragile” sense, and it can
be destroyed, while preserving $M_{z}$, by adding some weakly-coupled trivial
bands to the valence manifold Varnava _et al._ (2020); Wieder and Bernevig
(2018). Below we carry out this procedure in two different ways, and confirm
that the MCNs remain the same as in the original model.
##### Insertion of a symmetric pair of occupied orbitals
Figure 8: (a) Energy bands of the same model as in Fig. 7, after adding an
extra pair of occupied orbitals with $E=-4.0$ at $z=\pm 0.2c$ and coupling
them to the other orbitals. The bands are doubly degenerate, and the Fermi
level (dashed line) has been placed at midgap. (b) Wannier band structure
obtained from the valence states, with small gaps around $z=\pm 0.2c$ due to
the added orbitals.
Here we implement the strategy outlined in Sec. IV.2. We insert in the unit
cell two more orbitals, denoted as $|5\rangle$ and $|6\rangle$, that have
opposite spins and the same on-site energy $E=-4.0$. To break time reversal
and the vertical mirrors while preserving $M_{z}$ and inversion, we place the
spin-up orbital $|5\rangle$ at $(x,y,z)=(0.0,0.0,0.2c)$, and the spin-down
orbital $|6\rangle$ at $(x,y,z)=(0.0,0.0,-0.2c)$, keeping the original
orbitals $|1\rangle$ to $|4\rangle$ at the origin. Finally, we couple the new
orbitals to the old via the matrix elements $\langle 5|H|1\rangle=\langle
6|H|2\rangle=0.5$. The resulting model retains the $M_{z}$ and inversion
symmetries of the original model, and it breaks the time-reversal and vertical
mirror symmetries in the $Z$ matrix of Eq. (45) (but not in the Hamiltonian).
The energy and Wannier band structures are plotted in Figs. 8(a,b). Because
the Hamiltonian has both inversion and time-reversal symmetry, the energy bands
remain doubly degenerate as in Fig. 7(a). The breaking of the $z$-preserving
symmetries in the $Z$ matrix is reflected in the Wannier spectrum which is no
longer connected as in Fig. 7(b), with small gaps opening up near $z=\pm
0.2c$. The node at $\overline{\Gamma}$ on the B plane and those at
$\overline{\rm X}_{1}$, $\overline{\rm X}_{2}$, and $\overline{\rm M}$ on the
A plane remain intact, protected by $M_{z}$ and inversion. Their winding
numbers are also unchanged, leading to the same MCNs as in the original model.
##### Insertion of a single occupied orbital at $z=0$.
Figure 9: (a) Energy bands of the same model as in Fig. 7, after adding an
extra occupied orbital at $z=0$ and coupling it to the other orbitals. The
Fermi level (dashed line) has been placed in the gap. (b) Wannier band
structure obtained from the valence states. The added orbital generates a flat
band at $z=0$, which repels the nodal points on that plane (lower panel).
An alternative way of opening up a gap in the Wannier spectrum is to insert a
flat band on a mirror plane. To illustrate this procedure, we add at the
origin a single spin-up orbital $|5\rangle$ with on-site energy $E=-4.0$ and
odd parity about that plane, and couple it to the model via $\langle
5|H|1\rangle=\langle 5|H|4\rangle=2.0$. Because the orbital is spin-polarized,
it breaks time reversal; and because the spin points in the vertical
direction, it also breaks all vertical mirrors while preserving $M_{z}$. In
addition, the coupling terms break inversion symmetry, leaving $M_{z}$ as the
only axion-odd symmetry. The energy bands of the modified model are shown in
Fig. 9(a). A new band has appeared below the other four, so that there are now
three valence bands in total, leading to three Wannier bands.
The added orbital, which belongs to the ${\rm A}^{+}$ class in Table 1,
generates an extra even-parity state at both G and X. This creates an
imbalance $\Delta N_{\rm G}=\Delta N_{\rm X}=1$ between even- and odd-parity
states on the two mirror-invariant BZ planes, which according to Eq. (20)
results in a flat band at A. We emphasize that this extra band remains flat
even after the added orbital is coupled to the model, as long as the coupling
terms respect $M_{z}$ symmetry. As already mentioned, those terms are chosen
to break inversion symmetry. This is needed to ensure that the three point
nodes on the A plane are repelled by the flat band in the manner described in
Sec. III.2.2, since inversion symmetry would otherwise protect them.
The resulting Wannier bands are displayed in the upper panel of Fig. 9(b);
because of the lowered symmetry, the node at $z=c/2$ is no longer pinned to
$\overline{\Gamma}$ as in Fig. 7(b). The lower panel reveals a perfectly flat
band at $z=0$, well separated from a pair of dispersive bands whose three
touchings on the $z=0$ plane in Fig. 7(c) have been gapped out. Under these
circumstances, Eqs. (33) and (34) for the MCNs reduce to
$\mu_{\rm G}=\tfrac{1}{2}(p_{\rm A}\overline{C}_{\rm A}+W_{\rm B})$ (55)
and
$\mu_{\rm X}=\tfrac{1}{2}(p_{\rm A}\overline{C}_{\rm A}-W_{\rm B})\,.$ (56)
The single node at B has the same winding number $W_{\rm B}=-1$ as in the
original model, while the net winding number $W_{\rm A}=-1$ of the gapped-out
nodes at A has been transferred to the index $p_{\rm A}\overline{C}_{\rm A}$
of the flat band ($p_{\rm A}=-1$, and $\overline{C}_{\rm A}=+1$). Overall, the
MCNs remain unchanged.
## VII Summary
In summary, we have investigated the topological properties of mirror-
symmetric insulating crystals from the viewpoint of HW functions localized
along the direction orthogonal to the mirror plane. We first clarified the
generic behaviors of the associated Wannier bands, and then derived a set of
rules for deducing the MCNs. To validate and illustrate the formalism, we
applied it to SnTe in the monolayer and bulk forms, and to a toy model of an
axion-odd insulator.
In the HW representation, the MCNs are expressed in terms of a set of integer-
valued properties of the Wannier bands on the mirror planes: the Chern numbers
and mirror parities of flat bands lying on those planes, and the winding
numbers of the touching points on those planes between symmetric pairs of
dispersive bands. One advantage of this representation is that it reveals the
relation between the MCNs and the axion $\mathbbm{Z}_{2}$ index from purely
bulk considerations. That relation is far from obvious in the standard Bloch
representation, and previously it had only been obtained via an indirect
argument involving surface states.
In some cases the axion $\mathbbm{Z}_{2}$ index can be determined by visual
inspection of the Wannier band structure, e.g., by counting the number of
nodal points between certain bands Varnava _et al._ (2020). We have found
that mere visual inspection does not suffice for obtaining the MCNs since it
does not reveal, for example, the relative signs of the winding numbers of
different nodes.
Interestingly, in certain cases where flat Wannier bands are present the
magnitudes of the MCNs can be determined without having to divide the occupied
manifold into two mirror sectors. This follows from the uniform-parity
assumption for the flat bands, which has no counterpart in the Bloch
representation. Since the determination of the mirror parities is the most
cumbersome step in the calculation of MCNs, this feature of the HW formalism
could lead to a more automated algorithm for computing MCNs. Even without such
further developments, the formalism has already proven useful for discussing
the topological classification of mirror-symmetric insulators.
###### Acknowledgements.
Work by T.R. was supported by Deutsche Forschungsgemeinschaft Grant No. Ra
3025/1-1. Work by D.V. was supported
by National Science Foundation Grant DMR-1954856. Work by I.S. was supported
by Grant No. FIS2016-77188-P from the Spanish Ministerio de Economía y
Competitividad.
## Appendix A Derivation of Eqs. (20-22)
According to Table 1, the numbers of occupied states with each mirror parity
at G and X are
$N_{{\rm G}^{\pm}}=\overline{N}_{{\rm A}^{\pm}}+\overline{N}_{{\rm B}^{\pm}}+\frac{1}{2}\widetilde{N}\,,$ (57a)
$N_{{\rm X}^{\pm}}=\overline{N}_{{\rm A}^{\pm}}+\overline{N}_{{\rm B}^{\mp}}+\frac{1}{2}\widetilde{N}\,,$ (57b)
where $\widetilde{N}=\widetilde{N}_{\rm A}+\widetilde{N}_{\rm
B}+\widetilde{N}_{\rm UC}$ is the total number of dispersive Wannier bands per
cell. Letting $\Delta N_{{\rm G}}=N_{{\rm G}^{+}}-N_{{\rm G}^{-}}$ and
$\Delta\overline{N}_{{\rm A}}=\overline{N}_{{\rm A}^{+}}-\overline{N}_{{\rm
A}^{-}}$, and defining $\Delta N_{{\rm X}}$ and $\Delta N_{{\rm B}}$ in the
same way, we find
$\Delta\overline{N}_{{\rm A}}=\frac{1}{2}\left(\Delta N_{{\rm G}}+\Delta N_{{\rm X}}\right)\,,$ (58a)
$\Delta\overline{N}_{{\rm B}}=\frac{1}{2}\left(\Delta N_{{\rm G}}-\Delta N_{{\rm X}}\right)\,.$ (58b)
Under the uniform parity assumption, $|\Delta\overline{N}_{{\rm A}}|=\overline{N}_{\rm A}$ and $|\Delta\overline{N}_{{\rm B}}|=\overline{N}_{\rm B}$, which results in Eqs. (20) and (21). In the case of a
type-2 mirror A and B are equivalent, and from Eq. (57a)
$\Delta\overline{N}_{\rm A}+\Delta\overline{N}_{\rm B}=\Delta N_{\rm G}$.
Hence $\Delta\overline{N}_{\rm A}=\Delta\overline{N}_{\rm B}=\Delta N_{\rm
G}/2$, yielding Eq. (22) under the same assumption.
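The cancellation of the $\widetilde{N}/2$ terms and the inversion of the $2\times 2$ system can be spot-checked numerically; the following is an independent sanity check, not part of the derivation.

```python
import numpy as np

# Spot check of Eqs. (57)-(58): the N-tilde/2 terms drop out of the parity
# differences, and the 2x2 linear system inverts as in Eq. (58).
rng = np.random.default_rng(1)
for _ in range(100):
    NAp, NAm, NBp, NBm = rng.integers(0, 10, size=4)
    dNG = (NAp + NBp) - (NAm + NBm)       # Delta N_G from Eq. (57a)
    dNX = (NAp + NBm) - (NAm + NBp)       # Delta N_X from Eq. (57b)
    assert (dNG + dNX) / 2 == NAp - NAm   # Eq. (58a)
    assert (dNG - dNX) / 2 == NBp - NBm   # Eq. (58b)
```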
## Appendix B Derivation of Eq. (27)
Let us prove Eq. (27) for the case of a single pair of dispersive Wannier
bands connected by point nodes on the A plane. In this case the matrix
$f_{\bm{\kappa}}$ of Eq. (49) reduces to the scalar
$f_{\bm{\kappa}}\equiv\langle\widetilde{h}_{\bm{\kappa}}^{+}|z|\widetilde{h}_{\bm{\kappa}}^{-}\rangle=|f_{\bm{\kappa}}|e^{i\gamma_{\bm{\kappa}}}\,,$
(59)
where $|\widetilde{h}_{\bm{\kappa}}^{\pm}\rangle$ are states of even or odd
mirror parity constructed from the pair of HW functions as described in Sec.
V.3.1. These states are cell-periodic in plane and localized along $z$, and we
also define new states
$|\psi_{\bm{\kappa}}^{\pm}\rangle=e^{i{\bm{\kappa}}\cdot{\bf
r}}|\widetilde{h}_{\bm{\kappa}}^{\pm}\rangle$ that are Wannier-like along $z$
and Bloch-like in plane.
When the Chern numbers $\widetilde{C}_{{\rm A}^{\pm}}$ are nonzero, it becomes
impossible to choose a gauge for the states $|\psi_{\bm{\kappa}}^{\pm}\rangle$
that is both smooth and periodic in the projected 2D BZ Vanderbilt (2018). We
assume a square BZ with $k_{x},k_{y}\in[0,2\pi]$, and choose a smooth but
nonperiodic gauge for the $|\psi_{\bm{\kappa}}^{-}\rangle$ states. To
characterize the lack of periodicity, let the phase relations between the
edges of the BZ be
$|\psi^{-}_{\rm R}\rangle=e^{-i\mu}|\psi^{-}_{\rm
L}\rangle\,,\quad|\psi^{-}_{\rm T}\rangle=e^{-i\nu}|\psi^{-}_{\rm
B}\rangle\,,$ (60)
where $\\{\text{L,R,T,B}\\}=\\{\text{left,right,top,bottom}\\}$,
$\mu=\mu(k_{y})$, and $\nu=\nu(k_{x})$. Also let
$\Delta\mu=\mu(2\pi)-\mu(0)\,,\quad\Delta\nu=\nu(2\pi)-\nu(0)\,.$ (61)
When computing the Berry phase around the BZ boundary as an integral of the
connection ${\bf
A}_{\bm{\kappa}}^{-}=i\langle\widetilde{h}_{\bm{\kappa}}^{-}|\partial_{\bm{\kappa}}\widetilde{h}_{\bm{\kappa}}^{-}\rangle$,
$\phi_{-}=\oint_{\partial\text{BZ}}{\bf A}_{\bm{\kappa}}^{-}\cdot
d{\bm{\kappa}}\,,$ (62)
the contributions from the L and R segments cancel except for terms coming from
$\mu$, and similarly for the top and bottom segments. It follows that
$\phi_{-}=\Delta\mu-\Delta\nu\,.$ (63)
We assume a smooth but nonperiodic gauge for the
$|\psi^{+}_{\bm{\kappa}}\rangle$ states as well, so that the phase
$\gamma_{\bm{\kappa}}$ in Eq. (59) becomes a smooth function of
${\bm{\kappa}}$ (except at the nodes, where $f_{\bm{\kappa}}$ vanishes and
$\gamma_{\bm{\kappa}}$ becomes ill defined). Now we phase-align
$|\psi^{+}_{\bm{\kappa}}\rangle$ with $|\psi^{-}_{\bm{\kappa}}\rangle$ by re-
gauging as follows,
$|\psi^{+}_{\bm{\kappa}}\rangle^{\prime}=e^{i\gamma_{\bm{\kappa}}}|\psi^{+}_{\bm{\kappa}}\rangle\,.$
(64)
(In this new gauge $f^{\prime}_{\bm{\kappa}}$ is real, and
$\gamma^{\prime}_{\bm{\kappa}}$ is zero everywhere.) The resulting gauge
for $|\psi^{+}_{\bm{\kappa}}\rangle^{\prime}$ is also nonperiodic. For
the moment we only assume that this gauge is smooth in a neighborhood
extending some small distance inside the boundary; we ignore what is going on
deeper inside. It is not hard to see that the same relations as in Eq. (60),
with the same functions $\mu$ and $\nu$, apply to the
$|\psi^{+}_{\bm{\kappa}}\rangle^{\prime}$ states, and it follows that
$\phi^{\prime}_{+}=\phi_{-}\quad\text{(call it $\phi$)\,.}$ (65)
Now, in the case of the $|\psi^{-}_{\bm{\kappa}}\rangle$ states the interior
was smooth, so by applying Stokes’ theorem to
$2\pi\widetilde{C}_{{\rm A}^{-}}=\int_{\rm
BZ}\Omega_{\bm{\kappa}}^{-}\,d^{2}k$ (66)
where
$\Omega^{-}_{\bm{\kappa}}=\partial_{k_{x}}A^{-}_{{\bm{\kappa}},y}-\partial_{k_{y}}A^{-}_{{\bm{\kappa}},x}$
is the Berry curvature of state $|u^{-}_{\bm{\kappa}}\rangle$, we get
$2\pi\widetilde{C}_{{\rm A}^{-}}=\phi\,.$ (67)
If the interior of $|\psi^{+}_{\bm{\kappa}}\rangle^{\prime}$ were also smooth,
we would conclude that $\widetilde{C}_{{\rm A}^{+}}=\widetilde{C}_{{\rm
A}^{-}}$. Conversely, when the MCN is nonzero there must exist nonanalytic
points where the phase of $|u^{+}_{\bm{\kappa}}\rangle^{\prime}$ changes
discontinuously. Those points are precisely the nodes of $f_{\bm{\kappa}}$,
which we label by $j$; they act as vortex singularities of the Berry
connection
$\left({\bf A}_{\bm{\kappa}}^{+}\right)^{\prime}={\bf
A}_{\bm{\kappa}}^{+}-\partial_{\bm{\kappa}}\gamma_{\bm{\kappa}}\,,$ (68)
and we extract their winding numbers $W_{j}$ using Eq. (28). Let $S$ be the
interior of the projected BZ with a small circle $c_{j}$ cut around each node,
and apply Stokes’ theorem over the region $S$ to find
$\int_{S}\Omega_{\bm{\kappa}}^{+}\,d^{2}k=\int_{\partial\text{BZ}}\left({\bf
A}_{\bm{\kappa}}^{+}\right)^{\prime}\cdot
d{\bm{\kappa}}-\sum_{j}\oint_{c_{j}}\left({\bf
A}_{\bm{\kappa}}^{+}\right)^{\prime}\cdot d{\bm{\kappa}}\,.$ (69)
The first term on the right-hand side is equal to
$\phi^{\prime}_{+}=\phi=2\pi\widetilde{C}_{{\rm A}^{-}}$. In the limit of
small circles the left-hand side becomes $2\pi\widetilde{C}_{{\rm A}^{+}}$,
and the second term on the right-hand side reduces to $2\pi\sum_{j}\,W_{j}$
(this follows from Eq. (68) by noting that ${\bf A}^{+}_{\bm{\kappa}}$ is
smooth everywhere). Thus $\widetilde{C}_{{\rm A}^{+}}-\widetilde{C}_{{\rm
A}^{-}}$ equals $W_{\rm A}=\sum_{j\in{\rm A}}\,W_{j}$, which is what we set
out to prove. The same result holds if more than one pair of bands meet at
some of the point nodes, in which case $\gamma_{\bm{\kappa}}$ is given by the
more general expression in Eq. (50).
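The extraction of a winding number from a phase vortex, as invoked above via Eq. (28), can be illustrated numerically. The following sketch (ours, not part of the paper) sums wrapped increments of $\gamma_{\bm{\kappa}}=\arg f_{\bm{\kappa}}$ along a small discretized circle around a node; for the model vortex $f=k_{x}+ik_{y}$ it recovers $W=+1$.

```python
import cmath
import math

def winding_number(f, center, radius=1e-2, steps=400):
    """Accumulate wrapped phase differences of gamma = arg f(kx, ky)
    counterclockwise around a small circle about `center`."""
    total = 0.0
    prev = cmath.phase(f(center[0] + radius, center[1]))
    for n in range(1, steps + 1):
        t = 2.0 * math.pi * n / steps
        kx = center[0] + radius * math.cos(t)
        ky = center[1] + radius * math.sin(t)
        cur = cmath.phase(f(kx, ky))
        # wrap each increment to (-pi, pi] so the sum follows the branch
        d = (cur - prev + math.pi) % (2.0 * math.pi) - math.pi
        total += d
        prev = cur
    return round(total / (2.0 * math.pi))

# A simple vortex f = kx + i*ky winds once; its conjugate winds backwards.
w_plus = winding_number(lambda kx, ky: complex(kx, ky), (0.0, 0.0))
w_minus = winding_number(lambda kx, ky: complex(kx, -ky), (0.0, 0.0))
```

The same routine, applied to each node of $f_{\bm{\kappa}}$, yields the integers $W_{j}$ entering $W_{\rm A}$.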
## Appendix C Phase diagram of the modified Dirac model on a cubic lattice
Figure 10: Wannier bands of the modified Dirac model on a cubic lattice [Eq.
(54)], for $m=1.0$ and varying $M$.
In this Appendix, we map out the topological phase diagram of the model of Eq.
(54) as a function of the parameters $m$ and $M$, for $c=1.0$. The band gap
closes for $m=0,4M,8M,12M$ at the points $\Gamma$, $\rm X$, $\rm M$, and $\rm
A$, respectively Shen (2012). Those lines in the phase diagram mark the
topological phase transitions between axion-even and axion-odd phases.
To decide which phases are trivial and which are topological, it is sufficient
to inspect the Wannier band structures in Fig. 10, obtained for representative
states in each of the four phases along the $m=1.0$ line. Since the model has
several axion-odd symmetries (time reversal, inversion, and multiple mirrors),
we can base our analysis on either of them, applying in each case the rules
given in Ref. Varnava _et al._ (2020) to determine the axion
$\mathbbm{Z}_{2}$ index. In the following, we choose to focus on time-reversal
symmetry.
The Wannier spectrum of an axion-odd phase with spinful time-reversal symmetry
must be gapless, with each band touching the band above at one of the four
TRIM and the band below at the other three (or vice versa). From this
criterion we conclude that Figs. 10(a,c) correspond to axion-trivial phases,
and Figs. 10(b,d) to axion-odd topological phases. Hence the system is
topological for $0<m/M<4$ and $8<m/M<12$, producing the phase diagram in Fig.
6. This is in agreement with Ref. Shen (2012), where the strong topological
index $\nu_{0}=\theta/\pi$ of each phase was determined from the parity
eigenvalues of the Bloch states at the eight TRIM in the 3D BZ Fu and Kane
(2007).
## References
* Teo _et al._ (2008) J. C. Y. Teo, L. Fu, and C. L. Kane, “Surface states and topological invariants in three-dimensional topological insulators: Application to ${\text{Bi}}_{1-x}{\text{Sb}}_{x}$,” Phys. Rev. B 78, 045426 (2008).
* Ando and Fu (2015) Y. Ando and L. Fu, “Topological Crystalline Insulators and Topological Superconductors: From Concepts to Materials,” Annu. Rev. Condens. Matter Phys. 6, 361 (2015).
* Qi _et al._ (2008) X.-L. Qi, T. L. Hughes, and S.-C. Zhang, “Topological field theory of time-reversal invariant insulators,” Phys. Rev. B 78, 195424 (2008).
* Essin _et al._ (2009) A. M. Essin, J. E. Moore, and D. Vanderbilt, “Magnetoelectric Polarizability and Axion Electrodynamics in Crystalline Insulators,” Phys. Rev. Lett. 102, 146805 (2009).
* Vanderbilt (2018) D. Vanderbilt, _Berry Phases in Electronic Structure Theory: Electric Polarization, Orbital Magnetization and Topological Insulators_ (Cambridge University Press, Cambridge (United Kingdom), 2018).
* Armitage and Wu (2019) N. P. Armitage and L. Wu, “On the matter of topological insulators as magnetoelectrics,” SciPost Phys. 6, 46 (2019).
* Nenno _et al._ (2020) D. M. Nenno, C. A. C. Garcia, J. Gooth, C. Felser, and P. Narang, “Axion physics in condensed-matter systems,” Nat. Rev. Phys. 2, 682 (2020).
* Sekine and Nomura (2021) A. Sekine and K. Nomura, “Axion electrodynamics in topological materials,” J. Appl. Phys. 129, 141101 (2021).
* Otrokov _et al._ (2019) M. M. Otrokov _et al._ , “Prediction and observation of an antiferromagnetic topological insulator,” Nature 576, 416 (2019).
* Mong _et al._ (2010) R. S. K. Mong, A. M. Essin, and J. E. Moore, “Antiferromagnetic topological insulators,” Phys. Rev. B 81, 245209 (2010).
* Fu and Kane (2007) L. Fu and C. L. Kane, “Topological insulators with inversion symmetry,” Phys. Rev. B 76, 045302 (2007).
* Turner _et al._ (2012) A. M. Turner, Y. Zhang, R. S. K. Mong, and A. Vishwanath, “Quantized response and topology of magnetic insulators with inversion symmetry,” Phys. Rev. B 85, 165120 (2012).
* Varnava _et al._ (2020) N. Varnava, I. Souza, and D. Vanderbilt, “Axion coupling in the hybrid Wannier representation,” Phys. Rev. B 101, 155130 (2020).
* Varjas _et al._ (2015) D. Varjas, F. de Juan, and Y.-M. Lu, “Bulk invariants and topological response in insulators and superconductors with nonsymmorphic symmetries,” Phys. Rev. B 92, 195116 (2015).
* Fulga _et al._ (2016) I. C. Fulga, N. Avraham, H. Beidenkopf, and A. Stern, “Coupled-layer description of topological crystalline insulators,” Phys. Rev. B 94, 125405 (2016).
* Hsieh _et al._ (2012) T. H. Hsieh, H. Lin, J. Liu, W. Duan, A. Bansil, and L. Fu, “Topological crystalline insulators in the SnTe material class,” Nat. Commun. 3, 982 (2012).
* Liu _et al._ (2014) J. Liu, T. H. Hsieh, P. Wei, W. Duan, J. Moodera, and L. Fu, “Spin-filtered edge states with an electrically tunable gap in a two-dimensional topological crystalline insulator,” Nature Mater. 13, 178 (2014).
* Marzari and Vanderbilt (1997) N. Marzari and D. Vanderbilt, “Maximally localized generalized Wannier functions for composite energy bands,” Phys. Rev. B 56, 12847 (1997).
* Taherinejad and Vanderbilt (2015) M. Taherinejad and D. Vanderbilt, “Adiabatic Pumping of Chern-Simons Axion Coupling,” Phys. Rev. Lett. 114, 096401 (2015).
* Sutherland (1986) B. Sutherland, “Localization of electronic wave functions due to local topology,” Phys. Rev. B 34, 5208 (1986).
* Lieb (1989) E. H. Lieb, “Two theorems on the Hubbard model,” Phys. Rev. Lett. 62, 1201 (1989).
* Ramachandran _et al._ (2017) A. Ramachandran, A. Andreanov, and S. Flach, “Chiral flat bands: Existence, engineering, and stability,” Phys. Rev. B 96, 161104(R) (2017).
* Asbóth _et al._ (2016) J. A. Asbóth, A. Pályi, and L. Oroszlány, _A Short Course on Topological Insulators_ (Springer, Cham, 2016).
* Kim _et al._ (2015) Y. Kim, C. L. Kane, E. J. Mele, and A. M. Rappe, “Layered Topological Crystalline Insulators,” Phys. Rev. Lett. 115, 086802 (2015).
* Wieder and Bernevig (2018) B. J. Wieder and B. A. Bernevig, “The Axion Insulator as a Pump of Fragile Topology,” (2018), arXiv:1810.02373 .
* Park and Marzari (2011) C.-H. Park and N. Marzari, “Berry phase and pseudospin winding number in bilayer graphene,” Phys. Rev. B 84, 205440 (2011).
* (27) The PythTB code package is available at http://www.physics.rutgers.edu/pythtb/about.html.
* Enkovaara _et al._ (2010) J. Enkovaara, C. Rostgaard, J. J. Mortensen, J. Chen, M. Dułak, L. Ferrighi, J. Gavnholt, C. Glinsvad, V. Haikola, H. A. Hansen, H. H. Kristoffersen, M. Kuisma, A. H. Larsen, L. Lehtovaara, M. Ljungberg, O. Lopez-Acevedo, P. G. Moses, J. Ojanen, T. Olsen, V. Petzold, N. A. Romero, J. Stausholm-Møller, M. Strange, G. A. Tritsaris, M. Vanin, M. Walter, B. Hammer, H. Häkkinen, G. K. H. Madsen, R. M. Nieminen, J. K. Nørskov, M. Puska, T. T. Rantala, J. Schiøtz, K. S. Thygesen, and K. W. Jacobsen, “Electronic structure calculations with GPAW: a real-space implementation of the projector augmented-wave method,” J. Phys. Condens. Matter 22, 253202 (2010).
* Mostofi _et al._ (2014) A. A. Mostofi, J. R. Yates, G. Pizzi, Y.-S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, “An updated version of wannier90: A tool for obtaining maximally-localised Wannier functions,” Comput. Phys. Commun. 185, 2309 (2014).
* Olsen (2016) T. Olsen, “Designing in-plane heterostructures of quantum spin Hall insulators from first principles: $1\text{T}^{\prime}$-MoS2 with adsorbates,” Phys. Rev. B 94, 235106 (2016).
* Perdew _et al._ (1996) J. P. Perdew, K. Burke, and M. Ernzerhof, “Generalized Gradient Approximation Made Simple,” Phys. Rev. Lett. 77, 3865 (1996).
* Perdew _et al._ (1997) J. P. Perdew, K. Burke, and M. Ernzerhof, “Generalized Gradient Approximation Made Simple [Phys. Rev. Lett. 77, 3865 (1996)],” Phys. Rev. Lett. 78, 1396(E) (1997).
* Blöchl (1994) P. E. Blöchl, “Projector augmented-wave method,” Phys. Rev. B 50, 17953 (1994).
* Souza _et al._ (2001) I. Souza, N. Marzari, and D. Vanderbilt, “Maximally localized Wannier functions for entangled energy bands,” Phys. Rev. B 65, 035109 (2001).
* Olsen _et al._ (2019) T. Olsen, E. Andersen, T. Okugawa, D. Torelli, T. Deilmann, and K. S. Thygesen, “Discovering two-dimensional topological insulators from high-throughput computations,” Phys. Rev. Mater. 3, 024005 (2019).
* Haastrup _et al._ (2018) S. Haastrup, M. Strange, M. Pandey, T. Deilmann, P. S. Schmidt, N. F. Hinsche, M. N. Gjerding, D. Torelli, P. M. Larsen, A. C. Riis-Jensen, J. Gath, K. W. Jacobsen, J. J. Mortensen, T. Olsen, and K. S. Thygesen, “The Computational 2D Materials Database: High-throughput modeling and discovery of atomically thin crystals,” 2D Mater. 5, 042002 (2018).
* Kobayashi (2015) K. Kobayashi, “Electronic states of SnTe and PbTe (001) monolayers with supports,” Surf. Sci. 639, 54 (2015).
* Liu _et al._ (2015) J. Liu, X. Qian, and L. Fu, “Crystal Field Effect Induced Topological Crystalline Insulators In Monolayer IV–VI Semiconductors,” Nano Lett. 15, 2657 (2015).
* Tanaka _et al._ (2012) Y. Tanaka, Z. Ren, T. Sato, K. Nakayama, S. Souma, T. Takahashi, K. Segawa, and Y. Ando, “Experimental realization of a topological crystalline insulator in SnTe,” Nat. Phys. 8, 800 (2012).
* Gresch _et al._ (2017) D. Gresch, G. Autès, O. V. Yazyev, M. Troyer, D. Vanderbilt, B. A. Bernevig, and A. A. Soluyanov, “Z2Pack: Numerical implementation of hybrid Wannier centers for identifying topological materials,” Phys. Rev. B 95, 075146 (2017).
* Shen _et al._ (2011) S.-Q. Shen, W.-Y. Shan, and H.-Z. Lu, “Topological Insulator and the Dirac Equation,” SPIN 01, 33 (2011).
* Rauch _et al._ (2017) T. Rauch, H. Nguyen Minh, J. Henk, and I. Mertig, “Model for ferromagnetic Weyl and nodal line semimetals: Topological invariants, surface states, anomalous and spin Hall effect,” Phys. Rev. B 96, 235103 (2017).
* Shen (2012) S.-Q. Shen, _Topological Insulators – Dirac Equation in Condensed Matters_ (Springer, Berlin, Heidelberg, 2012).
# ACAV100M: Automatic Curation of Large-Scale Datasets for
Audio-Visual Video Representation Learning
Sangho Lee*, Jiwan Chung*, Youngjae Yu, Gunhee Kim (Seoul National University; *equal contribution)
Thomas Breuel, Gal Chechik (NVIDIA Research)
Yale Song (Microsoft Research)
https://acav100m.github.io
###### Abstract
The natural association between visual observations and their corresponding
sound provides powerful self-supervisory signals for learning video
representations, which makes the ever-growing amount of online videos an
attractive source of training data. However, large portions of online videos
contain irrelevant audio-visual signals because of edited/overdubbed audio,
and models trained on such uncurated videos have been shown to learn suboptimal
representations. Therefore, existing approaches rely almost exclusively on
datasets with predetermined taxonomies of semantic concepts, where there is a
high chance of audio-visual correspondence. Unfortunately, constructing such
datasets requires labor-intensive manual annotation and/or verification, which
severely limits the utility of online videos for large-scale learning. In this
work, we present an automatic dataset curation approach based on subset
optimization where the objective is to maximize the mutual information between
audio and visual channels in videos. We demonstrate that our approach finds
videos with high audio-visual correspondence and show that self-supervised
models trained on our data achieve competitive performances compared to models
trained on existing manually curated datasets. The most significant benefit of
our approach is scalability: We release ACAV100M that contains 100 million
videos with high audio-visual correspondence, ideal for self-supervised video
representation learning.
## 1 Introduction
Our long-term objective is learning to recognize objects, actions, and sound
in videos without the need for manual ground-truth labels. This is not only a
theoretically interesting problem, since it mimics the development of auditory
and visual perception by infants [22], it is also of immense practical
importance, since accurate manual labeling of audio-visual data is
impractical. Compared to self-supervised learning on static images [55, 30,
26, 13], audio-visual inputs pose additional challenges: large portions of a
video may contain no relevant information, and auditory and visual inputs may
not always be in correspondence. Consequently, existing self-supervised
methods on audio-visual data either start with datasets for which there is a
high probability of audio-visual correspondence, or they learn audio-visual
properties corresponding only to short-term statistical regularities. The
necessary datasets are usually manually created or rely on domain-specific
properties (e.g., [9, 21] and below). If we want to carry out self-supervised
learning on full length (minutes, hours) of video without manually generating
and/or selecting video clips, we need automated ways of curating such
collections of audio/video clips from diverse collections of full length
video.
Figure 1: We address the challenge of constructing a large-scale audio-visual
dataset from uncurated Internet videos without relying on manual annotation or
verification. We solve a constrained optimization problem that finds a subset
maximizing the mutual information between audio and visual signals in videos.
The result is a new 100M video dataset with high audio-visual correspondence,
ideal for self-supervised video representation learning.
We consider self-supervised learning from unlabeled videos as a two-step
process: (1) an automatic dataset curation process that generates short,
relevant clips with useful self-supervisory signals, e.g., audio-visual
correspondence, and (2) a self-supervised learning approach that operates on
the collection of short clips. This paper focuses on step (1) and not on step
(2), providing an automated way of taking a collection of general or domain-
specific videos of arbitrary length and reducing it to a collection of shorter
clips containing a high portion of relevant audio-video correspondences. The
output of this step is a dataset, which can be used as input to existing self-
supervised algorithms on audio-visual data [37, 3, 59], as well as the
development of novel self-supervised techniques.
To achieve step (1), we assume access to a large collection of unconstrained
videos and solve a subset selection problem with an information-theoretic
measure of audio-visual correspondence as a selection criterion. Specifically,
we find a subset that maximizes mutual information (MI) between audio and
visual channels of videos. This is a necessary condition for self-supervised
learning approaches that rely on audio-visual correspondence [18]. The main
technical challenge we address is how to efficiently measure the audio-visual
MI and find a subset that maximizes the MI in a scalable manner. Given that
video processing is notoriously compute and storage intensive, we put a
particular emphasis on scalability, i.e., we want an approach that can easily
handle hundreds of millions of video clips.
MI estimation has a long history of research [58, 38], including the recent
self-supervised approaches [55, 30, 13] that use noise contrastive estimation
[24] as the learning objective. While it is tempting to use such approaches to
estimate MI in our work, we quickly encounter the “chicken-and-egg” problem:
to obtain such models for estimating audio-visual MI, we need a training
dataset where we can reliably construct positive pairs with a high probability
of audio-visual correspondence; but that is what we are set out to find in the
first place! One might think that randomly chosen videos from the Internet
could be sufficient, but this has shown to produce suboptimal representations
[3]; our empirical results also show that self-supervised models indeed suffer
from noisy real-world audio-visual correspondences.
In this work, we turn to a clustering-based solution that estimates the MI by
measuring the agreement between two partitions of data [46, 72]. To circumvent
the “chicken-and-egg” issue, we use off-the-shelf models as feature extractors
and obtain multiple audio and visual clusters to estimate the MI. The use of
off-the-shelf models is a standard practice in video dataset generation.
Unlike existing approaches that use them as concept classifiers [8, 1, 47, 51,
12], here we use them as generic feature extractors. To avoid estimating the
MI based on a restricted set of concepts the off-the-shelf models are trained
on, we perform clustering over features computed across multiple layers
(instead of just the penultimate layers), which has been shown to provide
general feature descriptors not tied to specific concepts [81].
To make our approach scalable, we avoid using memory-heavy components such as
Lloyd’s algorithm [57] and instead use SGD [7] to perform K-means
clustering. Further, we approximately solve the subset maximization objective
with a mini-batch greedy method [14]. Through controlled experiments with
ground-truth and noisy real-world correspondences, we show that our
clustering-based approach is more robust to the real-world correspondence
patterns, leading to superior empirical performances than the contrastive MI
estimation approaches.
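As a hedged illustration of the mini-batch greedy idea (our toy sketch, not the authors' implementation from [14]): repeatedly sample a candidate batch, score each candidate by its marginal gain under the set function $F$, and move the top candidates into the selection until the target size is reached.

```python
import random

def minibatch_greedy(items, F, M, batch_size, select_size, seed=0):
    """Approximately maximize F(X) over subsets X of size M by greedy
    selection within random candidate batches (mini-batch greedy sketch)."""
    rng = random.Random(seed)
    pool = list(items)
    selected = []
    while len(selected) < M and pool:
        batch = rng.sample(pool, min(batch_size, len(pool)))
        # rank candidates by marginal gain w.r.t. the current selection
        ranked = sorted(batch,
                        key=lambda x: F(selected + [x]) - F(selected),
                        reverse=True)
        for x in ranked[:select_size]:
            if len(selected) < M:
                selected.append(x)
                pool.remove(x)
    return selected

# Toy objective: number of distinct labels covered by the subset.
data = [0, 0, 1, 1, 2, 2, 3, 3]
cover = lambda X: len(set(X))
subset = minibatch_greedy(data, cover, M=4, batch_size=8, select_size=1)
```

In the real pipeline, $F$ would be the clustering-based MI estimator and `items` the pool of candidate clips.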
We demonstrate our approach on a large collection of videos at an
unprecedented scale: We process 140 million full-length videos (total duration
1,030 years) and produce a dataset of 100 million 10-second clips (31 years)
with high audio-visual correspondence. We call this dataset ACAV100M (short
for automatically curated audio-visual dataset of 100M videos). It is two
orders of magnitude larger than the current largest video dataset used in the
audio-visual learning literature, i.e., AudioSet [21] (8 months), and twice as
large as the largest video dataset in the literature, i.e., HowTo100M [48] (15
years).
To evaluate the utility of our approach in self-supervised audio-visual
representation learning, we produce datasets at varying scales and compare
them with existing datasets of similar sizes that are frequently used in the
audio-visual learning literature, i.e., Kinetics-Sounds [4] at 20K-scale, VGG-
Sound [12] at 200K-scale, and AudioSet [21] at 2M-scale. Under the linear
evaluation protocol with three downstream datasets, UCF101 [67], ESC-50 [61],
and Kinetics-Sounds [4], we demonstrate that models pretrained on our datasets
perform competitively or better than the ones pretrained on the baseline
datasets, which were constructed with careful annotation or manual
verification.
To summarize, our main contributions are: 1) We propose an information-
theoretic subset optimization approach to finding a large-scale video dataset
with a high portion of relevant audio-visual correspondences. 2) We evaluate
different components of our pipeline via controlled experiments using both the
ground-truth and the noisy real-world correspondence patterns. 3) We release
ACAV100M, a large-scale open-domain dataset of 100M videos for future research
in audio-visual representation learning.
## 2 Related Work
Large-Scale Data Curation. Several different types of audio-visual video
datasets have been collected: (1) manually labeled, e.g., AudioSet [21], AVE
[70], (2) domain specific, e.g., AVA ActiveSpeaker [63], AVA Speech [11],
Greatest Hits [56], FAIR-Play [20], YouTube-ASMR-300K [80], and (3) unlabeled,
unrestricted collections from consumer video sites, e.g., Flickr-SoundNet [5,
4].
AudioSet [21] contains about 2M clips corresponding to audio events retrieved
from YouTube by keyword search; human raters verified the presence of audio
events in the candidate videos. Moments in Time [50] contains over one million
clips of diverse visual and auditory events; video clips were selected using
keywords (verbs) and manually reviewed for high correspondence between the
clips and the keywords. HowTo100M [48] contains 136M clips segmented from
1.22M narrated instructional web videos retrieved by text search from YouTube,
with an additional filtering step based on metadata. Web Videos and Text (WVT)
[69] contains 70M clips obtained by searching the web with keywords based on
the Kinetics-700 [9] categories and retaining both the video and the
associated text. Chen _et al_. [12] created a dataset of 200K clips for audio-
visual research; clips were originally obtained by keyword search on YouTube
and frames were classified with pretrained visual classifiers. Since keywords
and visual classes do not perfectly correspond, such correspondences needed to
be manually reviewed and corrected on randomly sampled clips in an iterative
and interactive process.
We are building systems for learning audio-visual correspondence on diverse,
unrestricted inputs. This requires large amounts of training data, making
manual collection and labeling costly and impractical. Unlike previous dataset
curation processes that involve costly human intervention, we introduce an
automatic and scalable data curation pipeline for large-scale audio-visual
datasets.
Subset Selection. Our work focuses on data subset selection; extensive prior
work exists in supervised [71, 77, 66, 76], unsupervised [25, 78], and active
learning settings [42, 65]. Different criteria for subset selection have been
explored in the literature. Submodular functions naturally model notions of
information, diversity and coverage [75], and can be optimized efficiently
using greedy algorithms [49, 53]. Geometric criteria like the coreset [2] aim
to approximate geometric extent measures over a large dataset with a
relatively small subset.
Mutual-information (MI) between input feature values and/or labels has been
used successfully [23, 43, 68] as a probabilistically motivated criterion. We
propose to use MI as an objective function for subset selection and make the
following two unique contributions: First, we use MI to measure audio-visual
correspondence within videos by formulating MI between the audio and visual
features. Second, we apply MI for the large-scale video dataset curation
problem. In case of clustering-based MI estimation, we demonstrate that
optimizing MI objective with a greedy algorithm is a practical solution for
building a large-scale pipeline.
## 3 Data Collection Pipeline
Our pipeline consists of four steps: (i) acquiring raw videos from the web and
filtering them based on metadata, (ii) segmenting the videos into clips and
extracting features with pretrained extractors, (iii) estimating mutual
information (MI) between audio and visual representations, and (iv) selecting
a subset of clips that maximizes the MI.
### 3.1 Obtaining Candidate Videos
We crawl YouTube to download videos with a wide variety of topics. Unlike
previous work that uses a carefully curated set of keywords [12], which could
inadvertently introduce bias, we aim to capture the natural distribution of
topics present on the website. To ensure diversity in topics, cultures, and
languages, we create combinations of search queries with diverse sets of
keywords, locations, events, categories, etc., to obtain an initial video
list.
Before downloading videos, we process the search results using metadata
(provided by YouTube API) to filter out potentially low quality / low audio-
visual correspondence videos. We use the duration to exclude videos shorter
than 30 seconds (to avoid low quality videos) and longer than 600 seconds (to
avoid large storage costs). We also exclude videos that contain selected
keywords (in either title or description) or from certain categories – i.e.,
gaming, animation, screencast, and music videos – because most videos exhibit
non-natural scenes (computer graphics) and/or low audio-visual correspondence.
Finally, we detect the language of each video from its title and description
using fastText [33, 34] and keep the most frequent languages that together
account for a cumulative ratio of $0.9$ of the collection, resulting in eight
languages (English, Spanish, Portuguese, Russian, Japanese, French, German,
and Korean).
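The pre-download metadata filter amounts to a simple predicate. A minimal sketch (ours; the field names are hypothetical, and the real pipeline reads metadata from the YouTube API and detects language with fastText; the title/description keyword exclusion is omitted for brevity):

```python
EXCLUDED_CATEGORIES = {"gaming", "animation", "screencast", "music"}
KEPT_LANGUAGES = {"en", "es", "pt", "ru", "ja", "fr", "de", "ko"}

def keep_video(meta):
    """Return True if a video's metadata passes the filter:
    duration in [30, 600] seconds, no excluded category, kept language."""
    if not 30 <= meta["duration"] <= 600:
        return False
    if meta["category"] in EXCLUDED_CATEGORIES:
        return False
    return meta["language"] in KEPT_LANGUAGES

ok = keep_video({"duration": 198, "category": "travel", "language": "en"})
too_short = keep_video({"duration": 10, "category": "travel", "language": "en"})
```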
The result is 140 million full-length videos with a total duration of 1,030
years (median: 198 seconds). To minimize the storage cost we download 360p
resolution videos; this still consumes 1.8 petabytes of storage. Handling such
large-scale data requires a carefully designed data pipeline. We discuss our
modularized pipeline below.
### 3.2 Segmentation & Feature Extraction
Clip Segmentation. To avoid redundant clips, we extract up to three 10-second
clips from each full-length video. We do this by detecting shot boundaries
(using the scdet filter in FFmpeg) and computing pairwise clip similarities
based on the MPEG-7 video signatures (using the signature filter in FFmpeg).
We then select up to 3 clips that give the minimum total pairwise scores using
local search [32]. This gives us about 300M clips.
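The per-video clip choice can be sketched as follows (a simplified brute-force stand-in for the local search of [32]; with only a handful of candidate clips per video, exhaustive enumeration is cheap):

```python
from itertools import combinations

def pick_clips(similarity, k=3):
    """Choose k clip indices minimizing the total pairwise similarity.
    `similarity[i][j]` is a symmetric clip-to-clip similarity score."""
    n = len(similarity)
    if n <= k:
        return list(range(n))
    best = min(combinations(range(n), k),
               key=lambda c: sum(similarity[i][j]
                                 for i, j in combinations(c, 2)))
    return list(best)

# Clips 0 and 1 are near-duplicates, so the best triple avoids that pair.
sim = [
    [0.0, 0.9, 0.1, 0.2],
    [0.9, 0.0, 0.1, 0.1],
    [0.1, 0.1, 0.0, 0.1],
    [0.2, 0.1, 0.1, 0.0],
]
chosen = pick_clips(sim, k=3)  # -> [1, 2, 3]
```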
Feature Extraction. To measure correspondence between audio and visual
channels of the 300M clips, we need good feature representations. An ideal
representation would capture a variety of important aspects from low-level
details (e.g., texture and flow) to high-level concepts (e.g., semantic
categories). However, such an oracle extractor is hard to obtain, and the
sheer scale of data makes it impractical to learn optimal feature extractors
end-to-end. Therefore, we use the “off-the-shelf” pretrained models to extract
features, i.e., SlowFast [16] pretrained on Kinetics-400 [35] and VGGish [28]
pretrained on YouTube-8M [1] for visual and audio features, respectively.
### 3.3 Subset Selection via MI Maximization
Next, we select clips that exhibit strong correspondence between visual and
audio channels. To this end, we estimate the mutual information (MI) between
audio and visual signals. Computing the exact MI is infeasible because it
requires estimating the joint distribution of high dimensional variables, but
several approximate solutions do exist [73]. Here we implement and compare two
approaches: a noise-contrastive estimator (NCE) [24], which measures MI in a
continuous feature space, and a clustering-based estimator that computes MI in
a discrete space via vector quantization. The former estimates MI for each
video clip, while the latter estimates MI for a set of video clips. As we show
later in our experiments, we find the clustering-based MI estimator to be more
robust to real-world noise.
#### 3.3.1 NCE-based MI Estimation
Contrastive approaches have become a popular way of estimating MI between
different views of the data [55, 30]. We add linear projection heads over the
precomputed audio/visual features and train them using the contrastive loss
[13]. From a mini-batch $\\{(v_{i},a_{i})\\}_{i=1}^{N_{b}}$ where $v_{i}$ and
$a_{i}$ are visual and audio features, respectively, we minimize
$l(v_{i},a_{i})=-\log\frac{\exp(S(\mathbf{z}_{i}^{v},\mathbf{z}_{i}^{a})/\tau)}{\sum_{j=1}^{N_{b}}\exp(S(\mathbf{z}_{i}^{v},\mathbf{z}_{j}^{a})/\tau)},$
(1)
where $\mathbf{z}_{i}^{v}$ and $\mathbf{z}_{i}^{a}$ are embeddings from the
linear projection heads, $S(\cdot,\cdot)$ measures the cosine similarity, and
$\tau$ is a temperature term (we set $\tau=0.1$). For each mini-batch we
compute $l(v_{i},a_{i})$ and $l(a_{i},v_{i})$ to make the loss symmetric.
Once trained, we can directly use $S(\mathbf{z}^{v},\mathbf{z}^{a})$ to
estimate audio-visual MI and find a subset by taking the top $N$ candidates
from a ranked list of video clips.
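The symmetric loss of Eq. (1) can be sketched in plain Python (a toy re-implementation of the objective, not the training code; in practice $\mathbf{z}^{v},\mathbf{z}^{a}$ come from learned projection heads and gradients flow through them):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nce_loss(zv, za, tau=0.1):
    """Symmetric contrastive loss of Eq. (1): for each index i, the
    matching audio/visual embeddings are the positive pair and the
    rest of the mini-batch serves as negatives."""
    n = len(zv)
    loss = 0.0
    for i in range(n):
        # visual -> audio direction: l(v_i, a_i)
        logits_va = [cosine(zv[i], za[j]) / tau for j in range(n)]
        loss += -logits_va[i] + math.log(sum(math.exp(x) for x in logits_va))
        # audio -> visual direction: l(a_i, v_i)
        logits_av = [cosine(za[i], zv[j]) / tau for j in range(n)]
        loss += -logits_av[i] + math.log(sum(math.exp(x) for x in logits_av))
    return loss / (2 * n)

aligned = [[1.0, 0.0], [0.0, 1.0]]
shuffled = [[0.0, 1.0], [1.0, 0.0]]
loss_pos = nce_loss(aligned, aligned)   # positives on the diagonal: low loss
loss_neg = nce_loss(aligned, shuffled)  # positives mismatched: high loss
```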
#### 3.3.2 Clustering-based MI Estimation
MI Estimation. Clustering is one of the classical ways of estimating MI [46,
72]. Given two partitions of a dataset $\mathbf{X}$ w.r.t. audio and visual
features, $\mathcal{A}=\\{\mathbf{A}_{1},\cdots,\mathbf{A}_{|\mathcal{A}|}\\}$
and $\mathcal{V}=\\{\mathbf{V}_{1},\cdots,\mathbf{V}_{|\mathcal{V}|}\\}$, we
estimate their MI as:
$\mbox{MI}(\mathcal{A},\mathcal{V})=\sum_{i=1}^{|\mathcal{A}|}\sum_{j=1}^{|\mathcal{V}|}\frac{|\mathbf{A}_{i}\cap\mathbf{V}_{j}|}{|\mathbf{X}|}\log\frac{|\mathbf{X}||\mathbf{A}_{i}\cap\mathbf{V}_{j}|}{|\mathbf{A}_{i}||\mathbf{V}_{j}|}.$
(2)
This formulation estimates MI in a discrete (vector-quantized) space induced
by clustering, and thus the quality of clustering affects the quality of the
estimator. A straightforward approach to obtaining $\mathcal{A}$ and
$\mathcal{V}$ is to cluster videos using the output from the penultimate
layers of the pretrained networks. However, this can introduce distributional
bias specific to the datasets on which the networks are pretrained [81, 74].
To address this issue, we cluster samples over each output space induced by
different layers of the networks. This allows the MI estimator to consider a
wide range of abstract concepts, from low-level (such as textures) to high-
level (such as object parts) [6].
Specifically, we use the feature spaces induced by the five convolutional
blocks from each of the SlowFast and VGGish feature extractors. We then
compute the average MI between all pairs of clusterings as our MI estimator.
Let
$\mathcal{CV}_{\mathbf{X}}^{(i)}=\\{\mathbf{V}_{1}^{(i)},\cdots,\mathbf{V}_{n_{i}}^{(i)}\\}$
and
$\mathcal{CA}_{\mathbf{X}}^{(i)}=\\{\mathbf{A}_{1}^{(i)},\cdots,\mathbf{A}_{m_{i}}^{(i)}\\}$
denote the clustering results induced by the $i$-th convolutional block of the
visual and audio feature extractors, respectively. We compute:
$F(\mathbf{X})=\sum_{(\mathcal{X},\mathcal{Y})\in\mathcal{C}_{\mathbf{X}}}\frac{\mbox{MI}(\mathcal{X},\mathcal{Y})}{{{}_{10}C_{2}}},$
(3)
where $\mathcal{C}_{\mathbf{X}}$ denotes the combination of two elements from
$\\{\mathcal{CV}_{\mathbf{X}}^{(i)}\\}_{i=1}^{5}\cup\\{\mathcal{CA}_{\mathbf{X}}^{(j)}\\}_{j=1}^{5}$
and ${}_{10}C_{2}$ denotes the number of 2-combinations of 10 elements, which equals 45. This computes MI between pairs of layers both within and across the extractors of the two modalities (referred to as the combination pairing scheme in Section 4.2).
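Under these definitions, Eqs. (2) and (3) reduce to short computations over cluster-label vectors. A minimal sketch, with illustrative helper names (`mi`, `avg_mi`):

```python
import math
from collections import Counter
from itertools import combinations

def mi(x, y):
    # Eq. (2): MI between two clusterings of the same clips,
    # given as per-clip integer label vectors x and y.
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    # c = |A_i ∩ V_j|, px[a] = |A_i|, py[b] = |V_j|, n = |X|.
    return sum(c / n * math.log(n * c / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def avg_mi(clusterings):
    # Eq. (3): average MI over all 2-combinations of the 2L = 10
    # clusterings (five audio and five visual blocks), i.e. 45 pairs.
    pairs = list(combinations(clusterings, 2))
    return sum(mi(x, y) for x, y in pairs) / len(pairs)
```

For identical partitions Eq. (2) recovers the partition entropy, and it is zero for independent partitions, matching the usual discrete MI.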
Input: initial dataset $\mathbf{D}$, MI estimator $F$, target subset size $M$, batch size $b$, selection size $s$
Output: $\mathbf{X}\subseteq\mathbf{D}$ with $|\mathbf{X}|=M$
$\mathbf{X}_{0}\leftarrow\emptyset$, $i\leftarrow 0$
while $|\mathbf{X}_{i}|<M$ do
    Randomly sample $\mathbf{B}\subseteq\mathbf{D}\backslash\mathbf{X}_{i}$ with $|\mathbf{B}|=b$
    $\mathbf{Y}_{0}\leftarrow\emptyset$, $j\leftarrow 0$
    while $j<s$ do
        $x\leftarrow\operatornamewithlimits{argmax}_{x\in\mathbf{B}\backslash\mathbf{Y}_{j}}F(\mathbf{X}_{i}\cup\mathbf{Y}_{j}\cup\\{x\\})$
        $\mathbf{Y}_{j+1}\leftarrow\mathbf{Y}_{j}\cup\\{x\\}$, $j\leftarrow j+1$
        if $|\mathbf{X}_{i}\cup\mathbf{Y}_{j}|=M$ then break
    end while
    $\mathbf{X}_{i+1}\leftarrow\mathbf{X}_{i}\cup\mathbf{Y}_{j}$, $i\leftarrow i+1$
end while
$\mathbf{X}\leftarrow\mathbf{X}_{i}$
Return $\mathbf{X}$
Algorithm 1 Batch Greedy Subset Selection
Batch Greedy Subset Selection. Since the MI estimator $F(\cdot)$ is a function
of $\mathbf{X}$, we can formulate an optimization problem where the goal is to
find a subset $\mathbf{X}$ that maximizes $F(\mathbf{X})$. In general, finding
a global solution to problems such as ours is NP-hard and thus greedy
heuristic solutions are used instead [54]. However, they typically select one
sample in each iteration and re-evaluate the goodness function, e.g.,
$F(\cdot)$, on all the remaining candidates. This is problematic in our setting because the time complexity is quadratic in the size of the candidate pool; this is clearly not scalable to 300 million instances.
Therefore, we approximate the typical greedy solution using the batch greedy
algorithm [14], as shown in Algorithm 1. It randomly samples a batch
$\mathbf{B}$ from the remaining pool of candidates, and searches for the next
element to be included in the active solution set only within $\mathbf{B}$.
This batch trick reduces the time complexity to linear in the dataset size, i.e., $O(N\times|\mathbf{B}|)$, where $N$ is the size of the input dataset. We
demonstrate the efficacy of the algorithm in Section 4.
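A minimal Python sketch of Algorithm 1 follows; the function name and the scalar set-function `F` are illustrative (in our pipeline `F` is the clustering-based MI estimator of Eq. (3)):

```python
import random

def batch_greedy_select(D, F, M, b, s, seed=0):
    """Batch greedy subset selection (Algorithm 1), illustrative sketch.

    D: list of candidate items; F: set function scored on a list of items;
    M: target subset size; b: batch size; s: selections per batch.
    """
    rng = random.Random(seed)
    X = []
    while len(X) < M:
        chosen = set(X)
        pool = [d for d in D if d not in chosen]
        if not pool:
            break  # fewer than M candidates exist
        # Sample a batch B from the remaining candidates.
        B = rng.sample(pool, min(b, len(pool)))
        Y = []
        for _ in range(min(s, len(B))):
            # Greedy argmax restricted to the sampled batch B.
            x = max((c for c in B if c not in Y),
                    key=lambda c: F(X + Y + [c]))
            Y.append(x)
            if len(X) + len(Y) == M:
                break
        X += Y
    return X
```

With `b` equal to the full pool size and `s = 1` this reduces to the standard greedy algorithm, which is the comparison made in Figure 2.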
Stochastic Clustering. One missing piece in this pipeline is an efficient
clustering algorithm scalable to hundreds of millions of instances. The most
popular choice among various clustering methods is K-means clustering [79],
which is a special case of mixture density estimation for isotropic normal and
other densities. Typically, an expectation-maximization (EM) algorithm, such
as Lloyd’s [57], is used to find the cluster centers. Such algorithms require
repeated computation of the distances of all samples from all $k$ cluster
centers, followed by cluster assignment, until convergence. Lloyd’s algorithm
updates cluster centers only after each pass through the entire dataset. But
for very large datasets (like ours), a small subset usually contains enough
information to obtain good estimates of the cluster centers, meaning that EM-
style algorithms tend to take (perhaps too) many epochs to converge.
There are different strategies for addressing this issue, including random sampling and subsetting, but a straightforward approach is to replace the EM algorithm with stochastic gradient descent (SGD) [45, 7, 64]. In such an approach, for large datasets,
convergence rate and final accuracy of the cluster centers are determined not
by the total dataset size, but by the learning rate schedule. A
straightforward SGD update rule is to compute the nearest cluster centers for
each sample in a batch and then update the cluster centers using a convex
combination of the cluster centers and their nearest samples, weighting the
samples with a learning rate $\lambda$ and the cluster centers with
$(1-\lambda)$. However, mixture density estimators in general suffer from the
problem that adding mixture components with zero probability does not change
the mixture density; in practice, this means EM and SGD-based algorithms may
end up with cluster centers that stop receiving updates at some point during
the optimization.
We address this problem by estimating each mixture component's utilization rate as the number of updates to its cluster center divided by the total number of estimation steps, and reinitializing a cluster center when this rate falls below $(1/k)^{2}$. In Section 4.2, we demonstrate that
our mini-batch SGD update shows comparable accuracy to batch update in
correspondence retrieval tasks.
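The convex-combination update and the dead-centroid reinitialization described above can be sketched as a single mini-batch step; the function and variable names here are our own, and the reinitialization-by-random-sample choice is one simple option:

```python
import numpy as np

def sgd_kmeans_step(centers, batch, updates, steps, lr=0.05, rng=None):
    """One mini-batch SGD k-means update with dead-centroid reinit.

    centers: (k, d) cluster centers; batch: (m, d) samples;
    updates: per-center update counts; steps: total assignment steps so far.
    """
    rng = rng or np.random.default_rng()
    k = len(centers)
    # Assign each sample in the batch to its nearest center.
    d2 = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    for j in range(k):
        sel = batch[nearest == j]
        if len(sel):
            # Convex combination: (1 - lr) * center + lr * batch mean.
            centers[j] = (1 - lr) * centers[j] + lr * sel.mean(axis=0)
            updates[j] += len(sel)
    steps += len(batch)
    # Reinitialize centers whose utilization rate drops below (1/k)^2.
    for j in range(k):
        if updates[j] / steps < (1.0 / k) ** 2:
            centers[j] = batch[rng.integers(len(batch))]
    return centers, updates, steps
```

Repeatedly calling this on fresh batches replaces the full-dataset passes of Lloyd's algorithm, so convergence is governed by the learning-rate schedule rather than the dataset size.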
| Natural Class Correspondence | Arbitrary Class Correspondence | Audio-Visual
---|---|---|---
Method | CIFAR10-Rotation | CIFAR10-Flip | MNIST-CIFAR10 | MNIST-FSDD | Kinetics-Sounds
Ranking-inner | 87.872 $\pm$ 0.002 | 87.044 $\pm$ 0.001 | 63.076 $\pm$ 0.001 | 64.453 $\pm$ 0.003 | 52.558 $\pm$ 0.002
Ranking-cos | 87.872 $\pm$ 0.002 | 87.044 $\pm$ 0.001 | 67.600 $\pm$ 0.002 | 61.893 $\pm$ 0.004 | 60.108 $\pm$ 0.001
Ranking-$l_{2}$ | 87.872 $\pm$ 0.002 | 87.044 $\pm$ 0.001 | 66.796 $\pm$ 0.001 | 62.933 $\pm$ 0.003 | 51.236 $\pm$ 0.001
Ours-Contrastive | 99.395 $\pm$ 0.000 | 99.480 $\pm$ 0.001 | 73.252 $\pm$ 0.040 | 73.733 $\pm$ 0.027 | 73.066 $\pm$ 0.036
Ours-Clustering | 87.292 $\pm$ 0.014 | 87.248 $\pm$ 0.010 | 77.224 $\pm$ 0.009 | 69.440 $\pm$ 0.049 | 88.705 $\pm$ 0.004
Table 1: Correspondence retrieval results. We conduct a total of five runs and
report the precision with the 99% confidence interval. We use the clustering pairing scheme which gives the highest score in each configuration: combination, except diagonal for Ranking-inner, Ranking-cos and Ranking-$l_{2}$ on CIFAR10-Rotation and CIFAR10-Flip.
## 4 Evaluation on Correspondence Retrieval
We systematically evaluate different components of our pipeline with synthetic
correspondence-retrieval tasks, where we generate corresponding and non-
corresponding pairs using CIFAR-10 [39], MNIST [41] and FSDD [31]. In each
correspondence retrieval task, the goal is to discover the known corresponding
samples among the non-corresponding pairs. To show the generality of our findings, we also experiment with Kinetics-Sounds [4], which exhibits real-world audio-visual correspondence.
### 4.1 Experimental Setting
##### Datasets
We construct five datasets where each instance is a pair of samples with
different correspondence types.
1/2) CIFAR10-Rotation/Flip. We use images from five randomly selected
categories to construct a “positive pair” set, and use the rest for a
“negative pair” set. For the positive set, we create pairs of images by
sampling two different images from the same category (e.g., two images of a
bird), and apply a geometric transformation to one of them; we apply either a
90° CCW rotation (CIFAR10-Rotation) or a horizontal flip (CIFAR10-Flip). The
negative set follows the same process but each pair contains images from
different categories. We categorize this type of correspondence as “Natural
Class Correspondence” because pairings are made over natural semantic
categories.
3/4) MNIST-CIFAR10/FSDD. We use images from five digit categories to construct
a positive set and use the rest for a negative set. Different from above,
correspondence is defined via an arbitrary class-level mapping, e.g., “digit
0” images map to the “car” images in CIFAR-10 or “digit 0” audio samples in
FSDD. We take samples from the same categories to construct the positive set
and samples from different categories for the negative set. We call these
“Arbitrary Class Correspondence” to differentiate from above.
5) Kinetics-Sounds. Unlike the above datasets where the correspondence is
defined over class categories, here the correspondence is defined at the
sample level, i.e., a positive set contains pairs of audio and visual channels
of the same video, and a negative set contains randomly permuted pairs. We do
not utilize class labels to construct the dataset.
##### Methods
We compare our pipeline (both contrastive-based and clustering-based) to three
ranking-based approaches. All the methods use the same precomputed features.
For images, we use ResNet-50 [27] pretrained on ImageNet [15]. For videos, we
use SlowFast [16] pretrained on Kinetics-400 [35] and VGGish [28] pretrained
on YouTube-8M [1] for visual and audio features, respectively. For the ranking
baselines, we apply PCA [60] to reduce the feature dimensionality to 64 and
rank the instances based on three similarity metrics: inner product, cosine
similarity, and (negative) $l_{2}$ distance. Because all our datasets have an
equal number of positive and negative instances, we simply select the top 50%
instances as the retrieval result.
##### Protocol
We split each dataset into train and test partitions of the same size. We
conduct a total of five runs for each of the five datasets and report results
on the test splits. We use train sets only for the contrastive estimator to
train the projection heads. When constructing each dataset, we sample at most
$n=1000$ instances from each category of the source datasets. For the noise
contrastive estimator, we train the linear projection heads for 100 epochs
using the AMSGrad variant of the Adam optimizer [62] with a learning rate of 2e-4. We randomly take one sample from each class to build a mini-batch for the class-level correspondence datasets, and randomly sample $N_{b}=10$ clips to build a mini-batch for the sample-level correspondence dataset. When applying our
clustering-based method, we perform SGD K-means clustering with the number of centroids set to the "ground-truth" number of classes in each source dataset; we use the batch greedy algorithm with a batch size $b=100$ and a selection size $s=25$.
### 4.2 Ablation Results & Discussion
Table 1 shows that the two variants of our approach – contrastive and
clustering – achieve overall higher precision rates than the ranking
baselines. The contrastive approach performs well on the two datasets with the
“natural class correspondence,” conforming to the previous results that shows
contrastive learning is robust to geometric transformations [13]. The
clustering approach excels on Kinetics-Sounds that contains natural audio-
visual correspondence, which is closer to our intended scenario. Therefore, we
conduct various ablation studies on Kinetics-Sounds to validate different
components of our clustering-based approach.
Layers | Method | Precision
---|---|---
Single | Layer1 | 50.820 $\pm$ 0.014
Layer2 | 51.412 $\pm$ 0.011
Layer3 | 52.659 $\pm$ 0.012
Layer4 | 54.422 $\pm$ 0.012
Layer5 | 58.418 $\pm$ 0.030
Multiple | Diagonal | 71.450 $\pm$ 0.005
Bipartite | 76.969 $\pm$ 0.005
Combination | 88.705 $\pm$ 0.004
Table 2: Correspondence retrieval results on Kinetics-Sounds with different
clustering pairing schemes. We conduct a total of five runs and report the
precision with the 99% confidence interval.
Multi-Layer Clustering. All the feature extractors that we use consist of five
convolutional blocks. As discussed in Section 3.3.2, we cluster samples over
each of the five output spaces to capture a wide range of abstract concepts.
This raises a question: How should we combine audio-visual clusters for MI
estimation? Table 2 compares the single-layer approaches to multi-layer
approaches. Each single-layer approach estimates the audio-visual MI from a single pair of clustering results. We can see that the precision
increases as we use clustering results from higher layers. However, all
single-layer methods perform significantly worse than multi-layer variants.
We explore three options to select pairs of clusterings for MI estimation.
Diagonal computes an average MI across all five single-layer scores (with $L$
layers, this computes MI $L$ times), Bipartite computes an average MI between
all possible combinations of audio-visual clustering results ($L^{2}$ times),
and Combination (ours) computes an average MI between all possible
combinations of clustering results, regardless of modalities (${}_{2L}C_{2}$
times). We observe that the performance increases with the number of
connections as shown in the bottom rows of Table 2. This positive relationship
suggests that consensus between layers of the same extractor, as well as across extractors, contributes to the clarity of the correspondence signal.
We further experimented with different layer weights for the Combination
approach and found it to be robust to different weight distributions; we
provide the results in the supplementary material.
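The three pairing schemes differ only in which pairs of clusterings enter the average. A small sketch enumerating them (the `'a'`/`'v'` tags and function name are illustrative):

```python
from itertools import combinations, product

def pairing(scheme, L=5):
    """Pairs of clusterings compared under each pairing scheme.

    Audio-layer clusterings are tagged ('a', i) and visual ones ('v', i),
    for layers i = 1..L of the respective feature extractors.
    """
    A = [('a', i) for i in range(1, L + 1)]
    V = [('v', i) for i in range(1, L + 1)]
    if scheme == 'diagonal':      # L pairs: layer i audio vs. layer i visual
        return list(zip(A, V))
    if scheme == 'bipartite':     # L^2 pairs: all audio-visual combinations
        return list(product(A, V))
    if scheme == 'combination':   # C(2L, 2) pairs: any two clusterings
        return list(combinations(A + V, 2))
    raise ValueError(scheme)
```

With $L=5$ this gives 5, 25, and 45 pairs respectively, matching the schemes compared in Table 2.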
Figure 2: Greedy vs. batch greedy algorithms with varying selection-to-batch size ratios, $s/b$. The shaded regions show 99% confidence intervals obtained by five runs on Kinetics-Sounds. The batch greedy algorithm is robust when the ratio is $\leqslant 25\%$.
Figure 3: Sensitivity analysis on the number of centroids. We determine under/over-clustering based on the ground-truth number of class categories in Kinetics-Sounds ($c=32$). The shaded regions show 99% confidence intervals over five runs.
Mini-Batch SGD K-means Clustering. We compared mini-batch SGD K-means [7] to
the standard EM (Lloyd’s) approach [57] and obtained very similar results on
Kinetics-Sounds: 88.705 $\pm$ 0.004 (SGD) versus 88.732 $\pm$ 0.005 (EM). This
shows that our SGD solution incurs negligible performance degradation while requiring significantly less memory than the standard EM approach.
Batch Greedy Subset Selection. We explore how the use of mini-batches affects
the quality of the selected subsets. We compare the greedy algorithm and the
batch greedy algorithm with a batch size $b=160$ and varying selection sizes
$s=\\{5,10,20,40,80\\}$. As shown in Figure 2, the performance gap between the
greedy algorithm and the batch greedy algorithm is marginal (greedy: 98.970
vs. batch greedy with $(b,s)=(160,5)$: 98.020), which validates our use of the
batch greedy algorithm. While the batch size itself does not have a large
impact on the subset quality, the ratio of selection size to batch size ($s/b$) strongly affects the retrieval performance; the performance drops sharply as the ratio exceeds 0.25 in several ($b$, $s$) configurations. This
is mainly dataset-dependent: by construction, there is a 50% chance that a
sample will be a positive. We believe that the constructed dataset contains
roughly 25% easy positives, i.e., videos with very high correspondence. When
the selection ratio $s/b$ does not exceed the easy positive ratio, the batch
greedy algorithm finds those videos without introducing false positives,
providing robustness. We found similar patterns with other ratios of
$s/b>25\%$.
Number of Centroids. We vary the number of centroids
$k\in\\{8,16,32,64,128\\}$ to see how sensitive our approach is to the
parameter. We apply the batch greedy algorithm with a batch size $b=100$ and a
selection size $s=25$ on Kinetics-Sounds. Figure 3 shows that, although the final performance is similar across different numbers of centroids, the trends differ: underclustering ($k=\\{8,16\\}$) yields high precision in early iterations, while overclustering ($k=\\{64,128\\}$) shows a slower drop in the later stages.
Figure 4: Linear evaluation on downstream tasks. The top-1/5 accuracy (%) of
video classification on UCF101 [67], audio classification on ESC-50 [61] and
audio-visual classification on Kinetics-Sounds (KS) [4]. We group the results
by the downstream tasks and by the scale of the pretrain datasets. Baselines
are Kinetics-Sounds [4] (20K), VGG-Sound [12] (200K), and AudioSet [21] (2M).
## 5 Large-Scale Evaluation
We construct datasets at varying scales (20K, 200K, 2M) and compare them to
existing datasets often used in the audio-visual learning literature:
Kinetics-Sounds [4] (20K), VGG-Sound [12] (200K), and AudioSet [21] (2M). Note
that all three datasets involve either human annotation [4, 21] or manual
verification [12]. To demonstrate the scalable nature of our approach, we also
generate datasets with 10M and 100M videos and evaluate their performance.
For the contrastive approach, we train linear projection heads on a batch size
of 1024 from a randomly drawn set of 100M videos. Note that these additional
videos are only used to train projection heads for MI estimation (Sec. 3.3.1), which are discarded once dataset curation is finished; all approaches use the
same number of videos under the same evaluation protocol on all downstream
tasks. We train the model for three epochs and rank the entire video set
(300M) based on the cosine similarity [13]. We then take top
$N\in\\{20\text{K},200\text{K},2\text{M}\\}$ ranked videos for the final
dataset. For the clustering-based variant, we vary the number of clusters
$C\in\\{100,200,500,1000,2000\\}$ for each size of the datasets.
### 5.1 Linear Evaluation on Downstream Tasks
To assess the quality of the datasets, we pretrain identical models on
different datasets and evaluate their performance on downstream tasks. The
idea is that if a model performs notably better than the others, the dataset used to train it must be superior to the other datasets. We
pretrain audio-visual CNNs from scratch using the self-supervised objective of
SimCLR [13]; we use 3D ResNet-50 [17] and ResNet-50 [27] as the visual and
audio CNNs, respectively. We follow the linear evaluation protocol [13] by
adding a linear classifier on top of the learned and frozen models. We test on
three downstream tasks: visual action recognition on UCF101 [67], sound
classification on ESC-50 [61], and audio-visual action recognition on
Kinetics-Sounds [4] (we concatenate audio-visual features for the linear
classifier). Note that the training procedures are identical for all the
models except for the datasets used to train them. We report mean accuracy
across the official splits of UCF101 and ESC-50. We provide details of these
experimental settings in the supplementary material.
Figure 4 shows that models pretrained on our dataset (green bars) achieve
similar, or even slightly better, performance compared to the baseline datasets (pink bars) at the 20K, 200K, and 2M scales. The significant gap between ours and the random set (yellow bars) shows that the improvement does not come from the initial pool we crawl (the 300M set) but rather from the higher proportion of audio-visual correspondence in the resulting dataset. Our clustering approach
to MI estimation (green bars) generally outperforms the contrastive approach
(blue bars), suggesting its robustness to noisy real-world audio-visual
correspondences. Finally, we report the results obtained from 10M and 100M
datasets produced with our clustering-based MI estimation module (we omit the
baseline results at these scales due to computational reasons). The
significant performance boost from the 10M and 100M models reaffirms the
importance of large-scale training. Considering that our data curation process does not involve human intervention (i.e., no manual annotation or verification), this is a promising result showing the potential for large-scale self-supervised learning: one can obtain datasets of arbitrary scale and develop self-supervised models by leveraging the high proportion of audio-visual correspondence provided in the curated datasets.
### 5.2 Human Evaluation
We conduct a user study to assess the perceived presence/absence of audio-
visual correspondence in video clips. We compare clips from four datasets:
AudioSet [21], VGG-Sound [12], ours with clustering (2M scale, 1K clusters),
and random (drawn from the 300M set). We prepare 100 randomly sampled clips
from each of these datasets, for a total of 400 clips. We recruit 12
participants and present each with 100 clips (25 clips per dataset), asking them whether the audio and visual streams correspond. This provides us with
3 votes per video (we provide the details of the questionnaire in the
supplementary material).
Table 3 shows the majority voting accuracy and inter-rater agreement (measured
by Fleiss’ Kappa [19]). Every dataset has Fleiss’ Kappa greater than 0.4,
verifying the reliability of the accuracy statistics [40]. Ours significantly
improves audio-visual correspondence over a random subset (69% vs. 44%), and
is even rated slightly higher than AudioSet. The annotation process for AudioSet focused on audio events, so we suspect that several of its videos do not contain visible sound sources. There is still a significant gap between
ours and VGG-Sound; we note that our process finds audio-visual correspondence
without relying on manual verification as was done in VGG-Sound.
Dataset | Majority Vote (%) | Fleiss’ Kappa
---|---|---
AudioSet | 65.66 | 0.4385
VGG-Sound | 84.00 | 0.4634
Ours (2M) | 69.00 | 0.5110
Random | 44.00 | 0.6112
Table 3: Human evaluation results assessing the perceived audio-visual
correspondence in videos from different datasets.
## 6 Conclusion
This work complements the existing line of research on self-supervised
representation learning with three main contributions: i) proposing an
automatic and scalable data collection pipeline for audio-visual
representation learning, ii) demonstrating that the MI-based subset selection
can retrieve correspondence in both artificial and practical settings, and
iii) releasing a large-scale open-domain video dataset consisting of 100M
clips curated with our pipeline.
Acknowledgements. Authors in Seoul National University are supported by
Institute of Information & communications Technology Planning & Evaluation
(IITP) grant funded by the Korea government (MSIT) (No.2017-0-01772, Video
Turing Test, No.2019-0-01082, SW StarLab).
Figure 5: Histograms of cluster IDs from our curated subsets and randomly
sampled subsets (with 100 cluster centroids). The blue histograms represent the case where samples are drawn uniformly at random and thus give an unbiased representation of the concepts naturally appearing in the entire population.
## Appendix A On the Diversity of Concepts in Sampled Clips
### A.1 Histogram of Cluster IDs
To analyze the diversity of concepts contained in our curated dataset, we
examine the histograms of cluster IDs from the chosen videos. Figure 5 shows
audio and visual histograms obtained from either our curated subsets or
randomly sampled subsets at varying scales (20K, 200K, and 2M). To obtain
these, we cluster the features from the last layer of audio and visual feature
extractors, respectively, and plot the histograms of cluster IDs. For the
purpose of visualization we sort the cluster indices by the cluster size in a
decreasing order (and thus the cluster IDs do not match between “Random” and
“Ours” in each of the plots). The histograms from random subsets represent the
natural distribution of the entire video population.
In the visual domain, the curated datasets (green histograms) mostly follow
the original cluster distributions (which is reflected in the blue histogram
in each subplot). This indicates that the visual concept distribution largely
follows the natural distribution in the entire population, suggesting that our
subset contains visual concepts that are as diverse as the entire set.
On the other hand, the audio clusters show noticeable concentration in
distribution after subset selection. Upon close inspection of videos from the
largest audio clusters, we observe that our curated datasets tend to choose
videos from clusters with high audio-visual correspondence (e.g., videos of a
single person speaking with no other sound in the background) while random sampling tends to choose videos from clusters with no apparent audio-visual correspondence (e.g., videos of multiple people talking over background music/noise). This shows that the concentration in the audio histograms is
caused by filtering out videos of low audio-visual correspondence, which is a
highly desirable artifact in the curated subset.
### A.2 Qualitative Analysis of Audio-Visual Clustering Results
To further investigate the diversity of concepts appearing in our subsets, we
manually inspect audio and visual clustering results in the 2M dataset and
compare the concepts appearing in the largest clusters to those in the
smallest ones. Figure 6 and Figure 7 show representative videos from the five
largest and five smallest clusters obtained from audio and visual clustering
results, respectively. Figure 6 (from audio clusters) suggests that our
curated dataset contains diverse concepts including general sound categories
(e.g., voice and objects sounds) as well as specific topics (e.g., outdoor
interview and cooking). Similarly, Figure 7 (from visual clusters) also suggests that our dataset contains diverse concepts, including both natural scenes (e.g., animals and fire) and human activities (e.g., makeup and playing guitar).
Clips from larger clusters (depicted in the left column of Figure 6 and Figure
7) contain clear and isolated sound sources, while sounds of smaller clusters
(the right column) are less distinguishable due to multiple sound sources or
background noise. Our dataset also captures several audio-visual concepts that
existing datasets (such as VGG-Sound [12] and AudioSet [21]) do not offer. For
instance, in Figure 6, the 77th cluster contains videos recorded from a front-
facing camera with voice recordings from a phone mic, and the 46th cluster
contains videos of comedians performing exaggerated body actions with the
sound of crowd (cheering and laughter). The 88th cluster in Figure 7 contains
shoes unboxing videos.
Figure 6: Representative samples and concepts derived from a manual inspection of 100 audio clusters of the 2M subset. We show samples from the five largest clusters on the left column and those from the five smallest clusters on the right. Each cluster captures distinctive audio-visual concepts, indicating that our curated subset contains various concepts with high audio-visual correspondence.
Figure 7: Representative samples and concepts derived from a manual inspection of 100 visual clusters of the 2M subset. We show samples from the five largest clusters on the left column and those from the five smallest clusters on the right. Each cluster captures distinctive audio-visual concepts, indicating that our curated subset contains various concepts with high audio-visual correspondence.
## Appendix B Weighted Summation of Layer Scores (Section 4.2)
Table 4 compares different layer weighting schemes in clustering-based MI
estimation, which shows that our multi-layer approach is generally robust to
weight distributions. We explored two alternative weighting schemes: a linear($k$) function with slope $k$ and an exp($k$) function with per-layer growth factor $e^{k}$; we used uniform weights in the main paper. We can see that precision
is stable under a linear weighting scheme; the robustness comes from the
Combination pairing approach which computes an average MI between all possible
combinations across layers. However, precision drops significantly when the
weights have a steep slope (e.g., exp(-10)), which is a degenerate case
similar to the single-layer approach reported in Table 2 of the main paper.
Method | Layer Weights | Precision
---|---|---
1 | 2 | 3 | 4 | 5
exp(-10) | 5e+10 | 2e+04 | 1 | 5e-05 | 2e-09 | 50.791
exp(-1) | 7.4 | 2.7 | 1 | 0.4 | 0.1 | 65.374
exp(1) | 0.1 | 0.4 | 1 | 2.7 | 7.4 | 79.858
exp(10) | 2e-09 | 5e-05 | 1 | 2e+04 | 5e+10 | 57.880
linear(-0.50) | 1.9 | 1.5 | 1 | 0.5 | 0.1 | 88.018
linear(-0.25) | 1.5 | 1.2 | 1 | 0.8 | 0.5 | 88.673
linear(0.25) | 0.5 | 0.8 | 1 | 1.2 | 1.5 | 88.777
linear(0.50) | 0.1 | 0.5 | 1 | 1.5 | 1.9 | 87.997
Uniform (Ours) | 1 | 1 | 1 | 1 | 1 | 88.705
Table 4: Different layer weighting schemes in clustering-based MI estimation
using Kinetics-Sounds with Combination pairing.
## Appendix C Details of Linear Evaluation on Downstream Tasks (Section 5.1)
Size | Pretrain | UCF101 | ESC-50 | Kinetics-Sounds
---|---|---|---|---
top-1 | top-5 | top-1 | top-5 | top-1 | top-5
- | Random Init | 11.48 | 29.21 | 8.35 | 34.85 | 20.31 | 47.03
20K | Kinetics-Sounds | 33.51 | 64.47 | 49.40 | 81.85 | 49.98 | 82.15
Random Set | 36.34 | 66.59 | 46.95 | 79.30 | 45.19 | 77.25
Clustering (Ours) | 46.28 | 75.24 | 50.55 | 81.30 | 55.78 | 85.15
200K | VGG-Sound | 49.55 | 78.60 | 65.55 | 90.95 | 55.59 | 86.46
Random Set | 34.33 | 63.92 | 45.80 | 78.45 | 44.15 | 76.88
Contrastive | 45.10 | 76.46 | 56.90 | 85.00 | 53.80 | 85.26
Clustering (Ours) | 50.19 | 78.89 | 62.80 | 89.50 | 56.12 | 84.10
2M | AudioSet | 55.54 | 83.94 | 65.05 | 90.70 | 57.46 | 86.72
Random Set | 41.12 | 72.24 | 52.75 | 83.55 | 48.30 | 79.54
Contrastive | 45.87 | 75.80 | 58.85 | 87.10 | 53.68 | 83.05
Clustering (Ours) | 55.63 | 83.92 | 65.10 | 90.50 | 57.48 | 87.19
10M | Clustering (Ours) | 74.21 | 93.82 | 74.20 | 93.40 | 67.71 | 92.14
100M | Clustering (Ours) | 86.10 | 97.94 | 86.95 | 97.45 | 75.42 | 95.88
Table 5: Linear evaluation of representations pretrained on different
datasets. We report the top-1/5 accuracies (%) of video classification on
UCF101 [67], audio classification on ESC-50 [61] and audio-visual
classification on Kinetics-Sounds [4]. We average the accuracies across the
official splits of UCF101 (three splits) and ESC-50 (five splits).
Table 5 shows the results of linear evaluation on downstream tasks, which were also shown in the bar chart of the main paper (Figure 4); we reproduce them here to compensate for the limited readability of the bar chart.
### C.1 Experimental Settings
We pretrain audio-visual models in a contrastive manner [13] on different
datasets. Specifically, we attach MLP projection heads on top of audio and
visual feature extractors, respectively, and train the whole model end-to-end
using the noise-contrastive loss (see Eqn. 1 of the main paper). As for the
visual and audio backbone feature extractors, we use 3D ResNet-50 [10] and
ResNet-50 [27], respectively. Each MLP projection head is composed of two fully-connected layers with ReLU [52] activation and produces embeddings of dimension 128. We pretrain the model for 50 epochs with a batch
size 64. We use the AMSGrad variant [62] of AdamW [44] optimizer with a
learning rate 1e-3, $\beta_{1}=0.9$, $\beta_{2}=0.999$ and an L2 weight decay
of 1e-5. We apply learning rate warm-up for the first 20,000 iterations
followed by a linear decay of learning rate.
For linear evaluation on downstream tasks, we attach a linear classifier on
top of the pretrained feature extractors and train it from scratch while
fixing the parameters of the feature extractors. We use only the visual CNN
for action recognition on UCF101 [67] and only the audio CNN for sound
classification on ESC50 [61]. For audio-visual action recognition on Kinetics-
Sounds [4], we concatenate audio-visual features before feeding them as input
to the linear classifier. We apply dropout [29] with a 50% rate before the
linear classifier. We train the model for 30 epochs with a batch size of 1024
on ESC-50 [61], for 10 epochs with a batch size of 64 on UCF101 [67] and for 5
epochs with a batch size of 64 on Kinetics-Sounds. We use the Adam [36]
optimizer with a learning rate 1e-2, $\beta_{1}=0.9$, $\beta_{2}=0.999$ and an
L2 weight decay of 5e-6.
### C.2 Impact of the Number of Centroids
To visualize the impact of the number of clusters in our clustering-based
approach, we group the results by the number of clusters as shown in Figure 8.
Notice that the number of clusters is not positively correlated with
downstream task performance. Instead, clustering with about 500 clusters seems
to yield the best performance. Also, experiments using the largest number of
centroids ($C=2000$) show low accuracy consistently across all datasets and
subset sizes. This confirms our findings in Section 4.2 of the main paper:
over-clustering tends to have a negative impact on the quality of the selected
subset. We believe that this happens because, as the number of clusters
increases, samples with homogeneous concepts in large clusters are scattered
into small clusters sharing similar concepts. When we do not have many
references to compare as in the early stage of subset selection, this
fragmentation effect inhibits sample count sharing between conceptually
similar small clusters, complicating the clustering-based MI estimation.
Figure 8: Linear evaluation of representations pretrained on the datasets that
are constructed by our clustering-based approach. We report the top-1 accuracy
(%) on UCF101 [67], ESC-50 [61], and Kinetics-Sounds [4], grouped by the
number of cluster centroids. The shaded regions show 99% confidence intervals
obtained by runs over the official splits of UCF101 (3 splits) and ESC-50 (5
splits).
## Appendix D More Discussion on Subset Selection (Section 3.3.2)
### D.1 Greedy Algorithm
We provide the details of the greedy algorithm [54] that is approximated using
the batch greedy algorithm [14]. As shown in Algorithm 2, the greedy algorithm
needs to re-evaluate the clustering-based MI estimator $F$ on all the
remaining candidates in each iteration. Thus, the time complexity is
$O(N^{2})$ where $N$ is the size of the initial dataset $\mathbf{D}$.
On the other hand, the batch greedy algorithm approximates this by selecting
the next element to be included in the solution within only a randomly chosen
batch, not the entire candidate set. This is shown in Algorithm 3 below (same
as Algorithm 1 of the main paper; reproduced here for easy comparison).
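As a sketch, the batch greedy procedure of Algorithm 3 can be transcribed into Python as below; the toy set-coverage objective is a stand-in for the clustering-based MI estimator $F$, which is not reproduced here:

```python
import random

def batch_greedy(candidates, F, M, b, s, seed=0):
    """Batch greedy subset selection (Algorithm 3).

    F: set function to maximize; M: target subset size;
    b: batch size; s: number of selections per batch.
    """
    rng = random.Random(seed)
    X = set()
    while len(X) < M:
        pool = list(set(candidates) - X)
        B = rng.sample(pool, min(b, len(pool)))   # random candidate batch
        Y = set()
        for _ in range(s):
            remaining = [x for x in B if x not in Y]
            if not remaining:
                break
            # pick the element with the largest marginal gain within the batch
            x = max(remaining, key=lambda e: F(X | Y | {e}))
            Y.add(x)
            if len(X | Y) == M:
                break
        X |= Y
    return X

# Toy objective: coverage of a ground set (monotone submodular)
items = {i: set(range(i, i + 3)) for i in range(20)}
F = lambda S: len(set().union(*(items[i] for i in S)) if S else set())
subset = batch_greedy(list(items), F, M=5, b=8, s=2)
```

Each outer iteration evaluates $F$ only on the sampled batch, which is what reduces the $O(N^{2})$ cost of the plain greedy algorithm.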
### D.2 Batch Greedy Subset Selection
When using the batch greedy algorithm for subset selection, the batch size $b$
and the selection size $s$ affect the quality of the selected subsets. We
explore various $(b,s)$ configurations on Kinetics-Sounds [4], as shown in
Figure 9. Note that the performance gap between different batch sizes is
small: precisions of 93.9%, 94.3% and 94.6% are obtained when using batch
sizes $b=40,80,160$, respectively, with the same ratio of selection size to
batch size $s/b=12.5\%$. On the contrary, the value of $s/b$ highly affects the
retrieval performance across all the batch sizes examined; the performance
drops sharply as the ratio exceeds 25% regardless of the batch size. As stated
in Section 4.2 of the main paper, we construct the dataset to have an equal
number of positive and negative pairs and the drop in robustness manifests
itself when the selection ratio $s/b$ exceeds the easy positive ratio of 25%.
## Appendix E Details of Automatic Dataset Curation
Here, we describe the details of subset selection via (i) NCE-based MI
estimation and (ii) clustering-based MI estimation. To construct datasets, we
vary the scale over 20K, 200K and 2M. Based on the results at the three scales, we
also generate a version with 10M videos using the clustering-based approach.
Input: initial dataset $\mathbf{D}$, clustering-based MI estimator $F$, target
subset size $M$
Output: $\mathbf{X}\subseteq\mathbf{D},|\mathbf{X}|=M$
$\mathbf{X}_{0}\leftarrow\emptyset$
for _$i=0$ to $M-1$_ do
$x\leftarrow\operatornamewithlimits{argmax}_{x\in\mathbf{D}\backslash\mathbf{X}_{i}}F(\mathbf{X}_{i}\cup\\{x\\})$
$\mathbf{X}_{i+1}\leftarrow\mathbf{X}_{i}\cup\\{x\\}$
end for
$\mathbf{X}\leftarrow\mathbf{X}_{M}$
Return $\mathbf{X}$
Algorithm 2 Greedy Algorithm
Input: initial dataset $\mathbf{D}$, clustering-based MI estimator $F$, target
subset size $M$, batch size $b$, selection size $s$
Output: $\mathbf{X}\subseteq\mathbf{D},|\mathbf{X}|=M$
$\mathbf{X}_{0}\leftarrow\emptyset,i\leftarrow 0$
while _$|\mathbf{X}_{i}| <M$_ do
Randomly sample
$\mathbf{B}\subseteq\mathbf{D}\backslash\mathbf{X}_{i},|\mathbf{B}|=b$
$\mathbf{Y}_{0}\leftarrow\emptyset,j\leftarrow 0$
while _$j <s$_ do
$x\leftarrow\operatornamewithlimits{argmax}_{x\in\mathbf{B}\backslash\mathbf{Y}_{j}}F(\mathbf{X}_{i}\cup\mathbf{Y}_{j}\cup\\{x\\})$
$\mathbf{Y}_{j+1}\leftarrow\mathbf{Y}_{j}\cup\\{x\\},j\leftarrow j+1$
if _$|\mathbf{X}_{i}\cup\mathbf{Y}_{j}|=M$_ then break
end while
$\mathbf{X}_{i+1}\leftarrow\mathbf{X}_{i}\cup\mathbf{Y}_{j},i\leftarrow i+1$
end while
$\mathbf{X}\leftarrow\mathbf{X}_{i}$
Return $\mathbf{X}$
Algorithm 3 Batch Greedy Algorithm (reproduced from the main paper for easy
comparison)
Figure 9: Precisions of the batch greedy algorithm with varying ratios of
selection size to batch size, $s/b$ (x axis: iterations, y axis: precision).
We group the plots by batch size: $b=40,80,160$ from left to right. The shaded
regions show 99% confidence intervals obtained by five runs on
Kinetics-Sounds. The batch greedy algorithm is robust when the ratio is
$\leqslant$ 25%, regardless of the batch size.
### E.1 NCE-Based MI Estimation
We use the linear projection heads that transform audio and visual features
into 128-dimension embeddings. We randomly sample a subset of 100M clips from
the initial 300M set that we crawl, and train on the subset for three epochs
with a batch size $N_{b}=1,024$. We use the AMSGrad variant of Adam optimizer
[62] with a learning rate 2e-4, $\beta_{1}=0.9$ and $\beta_{2}=0.999$. We
apply learning rate warm-up for the first 3 epochs followed by a linear decay
of learning rate.
### E.2 Clustering-Based MI Estimation
For SGD K-Means clustering, we train the cluster centroids with a mini-batch
of size 100K for 100 epochs using a learning rate $\lambda=\textrm{1e-2}$.
When applying the batch greedy algorithm, we use the fixed batch size
$b=10,000$ and the selection size $s=500$ (with a ratio of $s/b=0.05$), but
vary the number of clusters $C\in\\{100,200,500,1000,2000\\}$ for each size of
the datasets, except the dataset of 10M scale (we generate the dataset only
with $C=500$ for computational reasons).
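For illustration, one mini-batch SGD K-Means update step can be sketched in numpy as follows; the toy 2-D data and the learning rate of 0.1 are our assumptions for a quick-converging demo (the paper trains on 100K-sized mini-batches of embeddings with $\lambda=\textrm{1e-2}$):

```python
import numpy as np

def sgd_kmeans_step(X_batch, centroids, lr=1e-2):
    """One mini-batch SGD K-Means update with a fixed learning rate lr."""
    # assign every point in the batch to its nearest centroid
    d = np.linalg.norm(X_batch[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    new_c = centroids.copy()
    for k in range(len(centroids)):
        pts = X_batch[assign == k]
        if len(pts):
            # move the centroid toward the batch mean of its assigned points
            new_c[k] += lr * (pts.mean(axis=0) - centroids[k])
    return new_c

# two well-separated Gaussian clusters around (-5,-5) and (5,5)
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-5, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
c = data[[0, 50]].copy()                 # initialize centroids from data points
for _ in range(200):
    batch = data[rng.choice(len(data), 20, replace=False)]
    c = sgd_kmeans_step(batch, c, lr=0.1)
```

Because each step only touches a mini-batch, this online variant scales to datasets far larger than what a full Lloyd iteration [57, 64] could process per pass.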
## Appendix F Human Evaluation Interface (Section 5.2)
Figure 10: Screenshots of the human evaluation interface. The introduction
page (top) provides instructions to the annotators, and the test page (bottom)
shows clips to the raters and receives the corresponding Yes/No responses.
Figure 10 shows the user interface we developed for human evaluation. We
provide guidelines on how to assess audio-visual correspondence:
> You will watch a video clip for 10 seconds. Please determine whether there
> is audio-visual correspondence in the video. In other words, decide whether
> the sound source is visible or can be inferred from visual context.
After a pilot study we gathered feedback from experts and added additional
guidelines to help disambiguate common edge scenarios (shown in Figure 10).
Annotators are given one 10-second clip at a time and asked to provide a
Yes/No answer judging whether or not there is audio-visual correspondence in
the given clip. We do not provide a replay interface, so as to collect
intuitive responses from the raters.
## References
* [1] Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. Youtube-8M: A Large-Scale Video Classification Benchmark. arXiv preprint arXiv:1609.08675, 2016.
* [2] Pankaj K Agarwal, Sariel Har-Peled, and Kasturi R Varadarajan. Geometric Approximation via Coresets. Combinatorial and Computational Geometry, 52:1–30, 2005.
* [3] Humam Alwassel, Dhruv Mahajan, Lorenzo Torresani, Bernard Ghanem, and Du Tran. Self-Supervised Learning by Cross-Modal Audio-Video Clustering. arXiv preprint arXiv:1911.12667, 2019.
* [4] Relja Arandjelovic and Andrew Zisserman. Look, Listen and Learn. In ICCV, 2017.
* [5] Yusuf Aytar, Carl Vondrick, and Antonio Torralba. SoundNet: Learning Sound Representations from Unlabeled Video. In NeurIPS, 2016.
* [6] David Bau, Bolei Zhou, Aude Oliva, and Antonio Torralba. Interpreting Deep Visual Representations via Network Dissection. PAMI, 41(9):2131–2145, 2019.
* [7] Leon Bottou and Yoshua Bengio. Convergence Properties of the K-Means Algorithms. In NeurIPS, 1995.
* [8] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding. In CVPR, 2015.
* [9] João Carreira, Eric Noland, Chloe Hillier, and Andrew Zisserman. A Short Note on the Kinetics-700 Human Action Dataset. arXiv preprint arXiv:1907.06987, 2019.
* [10] Joao Carreira and Andrew Zisserman. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In CVPR, 2017.
* [11] Sourish Chaudhuri, Joseph Roth, Daniel P. W. Ellis, Andrew Gallagher, Liat Kaver, Radhika Marvin, Caroline Pantofaru, Nathan Reale, Loretta Guarino Reid, Kevin Wilson, and Zhonghua Xi. AVA-Speech: A Densely Labeled Dataset of Speech Activity in Movies. In Interspeech, 2018.
* [12] Honglie Chen, Weidi Xie, Andrea Vedaldi, and Andrew Zisserman. VGGSound: A Large-Scale Audio-Visual Dataset. In ICASSP, 2020.
* [13] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A Simple Framework for Contrastive Learning of Visual Representations. In ICML, 2020.
* [14] Yuxin Chen and Andreas Krause. Near-optimal Batch Mode Active Learning and Adaptive Submodular Optimization. ICML, 2013.
* [15] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
* [16] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast Networks for Video Recognition. In ICCV, 2019.
* [17] Christoph Feichtenhofer, Axel Pinz, and Richard Wildes. Spatiotemporal Residual Networks for Video Action Recognition. In NeurIPS, 2016.
* [18] John W Fisher and Trevor Darrell. Probabalistic Models and Informative Subspaces for Audiovisual Correspondence. In ECCV, 2002.
* [19] Joseph L Fleiss. Measuring Nominal Scale Agreement Among Many Raters. Psychological Bulletin, 76(5):378, 1971.
* [20] Ruohan Gao and Kristen Grauman. 2.5D Visual Sound. In CVPR, 2019.
* [21] Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. Audio Set: An Ontology and Human-Labeled Dataset for Audio Events. In ICASSP, 2017.
* [22] Eleanor Jack Gibson. Principles of Perceptual Learning and Development. Appleton-Century-Crofts, 1969.
* [23] Yuhong Guo. Active Instance Sampling via Matrix Partition. In NeurIPS, 2010.
* [24] Michael Gutmann and Aapo Hyvärinen. Noise-Contrastive Estimation: A New Estimation Principle for Unnormalized Statistical Models. In AISTATS, 2010.
* [25] Sariel Har-Peled and Soham Mazumdar. On Coresets for K-Means and K-Median Clustering. In STOC, 2004.
* [26] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum Contrast for Unsupervised Visual Representation Learning. In CVPR, 2020.
* [27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.
* [28] Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. CNN Architectures for Large-Scale Audio Classification. In ICASSP, 2017.
* [29] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors. arXiv preprint arXiv:1207.0580, 2012.
* [30] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning Deep Representations by Mutual Information Estimation and Maximization. In ICLR, 2019.
* [31] Zohar Jackson, César Souza, Jason Flaks, Yuxin Pan, Hereman Nicolas, and Adhish Thite. Free Spoken Digit Dataset: v1.0.8, Aug. 2018.
* [32] David S Johnson, Christos H Papadimitriou, and Mihalis Yannakakis. How Easy Is Local Search? Journal of computer and system sciences, 37(1):79–100, 1988.
* [33] Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. FastText.zip: Compressing Text Classification Models. arXiv preprint arXiv:1612.03651, 2016.
* [34] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of Tricks for Efficient Text Classification. In EACL, 2017.
* [35] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The Kinetics Human Action Video Dataset. arXiv preprint arXiv:1705.06950, 2017.
* [36] Diederik P Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015.
* [37] Bruno Korbar, Du Tran, and Lorenzo Torresani. Cooperative Learning of Audio and Video Models from Self-Supervised Synchronization. In NeurIPS, 2018.
* [38] Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating Mutual Information. Physical review E, 69(6), 2004.
* [39] Alex Krizhevsky and Geoffrey Hinton. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto, 2009.
* [40] J Richard Landis and Gary G Koch. The Measurement of Observer Agreement for Categorical Data. Biometrics, pages 159–174, 1977.
* [41] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
* [42] David D Lewis and William A Gale. A Sequential Algorithm for Training Text Classifiers. In SIGIR, 1994.
* [43] Xin Li and Yuhong Guo. Adaptive Active Learning for Image Classification. In CVPR, 2013.
* [44] Ilya Loshchilov and Frank Hutter. Decoupled Weight Decay Regularization. In ICLR, 2019.
* [45] Thomas Martinetz and Klaus Schulten. A “Neural-Gas” Network Learns Topologies. In ICANN, 1991.
* [46] Marina Meilă. Comparing Clusterings—An Information Based Distance. Journal of multivariate analysis, 98(5), 2007.
* [47] N Michele Merler, Khoi-Nguyen C. Mac, Dhiraj Joshi, Quoc-Bao Nguyen, Stephen Hammer, John Kent, Jinjun Xiong, Minh N. Do, John R. Smith, and Rogerio Schmidt Feris. Automatic Curation of Sports Highlights Using Multimodal Excitement Features. IEEE Trans Multimedia, 21(5), 2019.
* [48] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips. In ICCV, 2019.
* [49] Michel Minoux. Accelerated Greedy Algorithms for Maximizing Submodular Set Functions. In Optimization techniques, pages 234–243. Springer, 1978.
* [50] Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Tom Yan, Lisa Brown, Quanfu Fan, Dan Gutfruend, and Carl Vondrick. Moments in Time Dataset: One Million Videos for Event Understanding. PAMI, pages 1–8, 2019.
* [51] Arsha Nagrani, Joon Son Chung, Weidi Xie, and Andrew Zisserman. Voxceleb: Large-scale speaker verification in the wild. Computer Science and Language, 2019.
* [52] Vinod Nair and Geoffrey E Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. In ICML, 2010.
* [53] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An Analysis of Approximations for Maximizing Submodular Set Functions–I. Mathematical Programming, 14(1):265–294, 1978.
* [54] George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. An Analysis of Approximations for Maximizing Submodular Set Functions—I. Mathematical programming, 14(1):265–294, 1978.
* [55] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation Learning with Contrastive Predictive Coding. arXiv preprint arXiv:1807.03748, 2018.
* [56] Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H Adelson, and William T Freeman. Visually Indicated Sounds. In CVPR, 2016.
* [57] Stuart P. Lloyd. Least Squares Quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
* [58] Liam Paninski. Estimation of Entropy and Mutual Information. Neural computation, 15(6):1191–1253, 2003.
* [59] Mandela Patrick, Yuki M Asano, Ruth Fong, João F Henriques, Geoffrey Zweig, and Andrea Vedaldi. Multi-modal Self-Supervision from Generalized Data Transformations. arXiv preprint arXiv:2003.04298, 2020.
* [60] Karl Pearson. LIII. On Lines and Planes of Closest Fit to Systems of Points in Space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559–572, 1901.
* [61] Karol J. Piczak. ESC: Dataset for Environmental Sound Classification. In ACM-MM, 2015.
* [62] Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the Convergence of Adam and Beyond. In ICLR, 2018.
* [63] Joseph Roth, Sourish Chaudhuri, Ondrej Klejch, Radhika Marvin, Andrew Gallagher, Liat Kaver, Sharadh Ramaswamy, Arkadiusz Stopczynski, Cordelia Schmid, Zhonghua Xi, and Caroline Pantofaru. AVA-ActiveSpeaker: An Audio-Visual Dataset for Active Speaker Detection. arXiv preprint arXiv:1901.01342, 2019.
* [64] David Sculley. Web-Scale K-Means Clustering. In WWW, 2010.
* [65] Burr Settles. Active Learning Literature Survey. Science, 10(3):237–304, 1995.
* [66] Yusuke Shinohara. A Submodular Optimization Approach to Sentence Set Selection. In ICASSP, 2014.
* [67] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild. arXiv preprint arXiv:1212.0402, 2012.
* [68] Jamshid Sourati, Murat Akcakaya, Jennifer G Dy, Todd K Leen, and Deniz Erdogmus. Classification Active Learning Based on Mutual Information. Entropy, 18(2):51, 2016.
* [69] Jonathan C Stroud, David A Ross, Chen Sun, Jia Deng, Rahul Sukthankar, and Cordelia Schmid. Learning Video Representations from Textual Web Supervision. arXiv preprint arXiv:2007.14937, 2020.
* [70] Yapeng Tian, Jing Shi, Bochen Li, Zhiyao Duan, and Chenliang Xu. Audio-Visual Event Localization in Unconstrained Videos. In ECCV, 2018.
* [71] Ivor W Tsang, James T Kwok, and Pak-Ming Cheung. Core Vector Machines: Fast SVM Training on Very Large Data Sets. Journal of Machine Learning Research, 6(Apr):363–392, 2005.
* [72] Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information Theoretic Measures for Clusterings Comparison: Variants, Properties, Normalization and Correction for Chance. Journal of Machine Learning Research, 11, 2010.
* [73] Janett Walters-Williams and Yan Li. Estimation of Mutual Information: A Survey. In RSKT, 2009.
* [74] Mei Wang and Weihong Deng. Deep Visual Domain Adaptation: A Survey. Neurocomputing, 312:135–153, 2018.
* [75] Kai Wei, Rishabh Iyer, and Jeff Bilmes. Submodularity in Data Subset Selection and Active Learning. In ICML, 2015.
* [76] Kai Wei, Yuzong Liu, Katrin Kirchhoff, Chris Bartels, and Jeff Bilmes. Submodular Subset Selection for Large-Scale Speech Training Data. In ICASSP, 2014.
* [77] Kai Wei, Yuzong Liu, Katrin Kirchhoff, and Jeff Bilmes. Using Document Summarization Techniques for Speech Data Subset Selection. In NAACL, 2013.
* [78] Kai Wei, Yuzong Liu, Katrin Kirchhoff, and Jeff Bilmes. Unsupervised Submodular Subset Selection for Speech Data. In ICASSP, 2014.
* [79] Xindong Wu, Vipin Kumar, J Ross Quinlan, Joydeep Ghosh, Qiang Yang, Hiroshi Motoda, Geoffrey J McLachlan, Angus Ng, Bing Liu, S Yu Philip, et al. Top 10 Algorithms in Data Mining. Knowledge and Information Systems, 14(1):1–37, 2008.
* [80] Karren Yang, Bryan Russell, and Justin Salamon. Telling Left From Right: Learning Spatial Correspondence of Sight and Sound. In CVPR, 2020.
* [81] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How Transferable are Features in Deep Neural Networks? In NeurIPS, 2014.
# Exploring Transfer Learning on Face Recognition of Dark Skinned, Low Quality
and Low Resource Face Data
Nuredin Ali
Department of Information Systems
Mekelle University
<EMAIL_ADDRESS>
###### Abstract
There is a big difference in skin tone between dark- and light-skinned people.
Despite this fact, almost all classical state-of-the-art face recognition
models are trained on datasets containing an overwhelming majority of
light-skinned face images. It is tedious to collect a huge amount of data for
dark-skinned faces and train a model from scratch. In this paper, we apply
transfer learning on VGGFace to check how well it recognises dark-skinned,
mainly Ethiopian, faces. The dataset is of low quality and low resource. Our
experimental results show above 95% accuracy, which indicates that transfer
learning works in such settings.
## 1 Introduction
Face recognition (FR) is a technology capable of identifying or verifying a
person from a digital image or a video frame [1]. Face recognition has been a
prominent bio-metric technique for identity authentication and has been widely
used in many areas such as the military, finance, public security, and everyday
life. Most of the classical state-of-the-art models are trained on very large
datasets of mostly light-skinned faces. Most people in African countries have
dark-skinned faces, and there are currently no readily available datasets for
researchers to make such experiments. It is tedious to collect a huge amount
of data and train a model from scratch. The most efficient technique in a
low-resource setting is to transfer the knowledge a model has learned on other
data [2]. Transfer learning is a machine learning technique whereby a model
trained and developed for one task is re-used on a second, related task. In
this work, we evaluate how transfer learning from a model pre-trained on
mostly light-skinned faces works for recognizing a very low-quality and
low-resource dataset of dark-skinned faces.
## 2 Background and related work
Research in computer vision has included work on issues that have direct
social impact, such as security and privacy. However, research on the related
issue of diversity and inclusion in vision is surprisingly lacking [3]. The
work by [3] focused on gender classification and face detection. While in this
paper we focus on recognition of individuals by applying transfer learning.
The ChaLearn “Looking at People” challenge from [4] provides the Faces of the
World (FotW) dataset, which annotates gender and the presence of smiling on
faces. [5] won first place in this challenge, utilizing multi-task learning
(MTL) and fine-tuning on top of a model trained for face recognition [6]. [7]
later published an out-performing result for the same task on FotW utilizing
MTL and transfer learning from a face recognition model. In this case, we use
transfer learning to recognize dark skinned faces from a model pre-trained on
mostly light skinned faces.
## 3 Data and methodology
To develop the dataset for this experiment, 15 students coming from diverse
parts of Ethiopia participated. A total of 1,500 images were used (100 per
individual). Figure 2 shows example images from our dataset. 70%
of the data is used for training the model and the remaining 30% is used to
validate the trained model. The images are collected using a very low-quality
0.98MP (megapixel) camera. The data was collected in a controlled environment,
a setting applicable, for instance, to an electronic gate.
First, we trained a model from scratch using only the structure of some of the
classical models such as LeNet and AlexNet; the results (stated below) were
not satisfactory. We therefore used VGGFace, a model pre-trained on a huge
dataset of mostly light-skinned faces: the VGGFace dataset, a very large-scale
dataset of 2.6M images over 2.6K people [6]. Figure 1 shows example images
from this dataset. When applying transfer learning, feeding the extracted
features as input to a fully connected layer with softmax activation provides
better results [8].
Our experimental settings are as follows. The extracted features are fed into
a fully connected layer. In our experiments, fine-tuning deeper layers reduces
accuracy, as there is limited data to train on. To learn some extra features,
max-pooling, average-pooling, dense and dropout layers are added. A very low
learning rate of 0.001, a batch size of 32, softmax activation, the
categorical cross-entropy loss function and the Adam optimizer were used to
train the face recognition model.
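As an illustrative sketch of this frozen-backbone setup (not the authors' exact implementation), the following numpy code trains only a softmax head with categorical cross-entropy on precomputed features; the synthetic "features" stand in for VGGFace activations:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax_head(feats, labels, n_classes, lr=0.001, epochs=300):
    """Train only a linear + softmax head on frozen (precomputed) features."""
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.01, (feats.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]              # one-hot targets
    for _ in range(epochs):
        P = softmax(feats @ W + b)
        # gradient of categorical cross-entropy w.r.t. the logits is (P - Y)
        G = (P - Y) / len(feats)
        W -= lr * feats.T @ G                  # gradient descent on W, b only
        b -= lr * G.sum(axis=0)
    return W, b

# stand-in "extracted features": 3 identities, 30 samples each
rng = np.random.default_rng(1)
centers = rng.normal(0, 3, (3, 16))
feats = np.concatenate([c + rng.normal(0, 0.5, (30, 16)) for c in centers])
labels = np.repeat(np.arange(3), 30)
W, b = train_softmax_head(feats, labels, 3)
acc = ((feats @ W + b).argmax(axis=1) == labels).mean()
```

Only the head parameters are updated, which is why this works even with limited data: the backbone's parameters stay fixed and only a low-capacity classifier is fit.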
Figure 1: Sample of the VGGFace dataset
Figure 2: Sample of the dataset used to develop our model
## 4 Results
The evaluation metric used in this experiment is accuracy. For each image, we
check if the correct label is found. VGGFace achieved 98.95% accuracy when it
was first developed [6]. Using our dataset the architecture of LeNet achieved
68% and AlexNet 82%. The model developed using the transfer learning achieved
more than 95% accuracy. This indicates that it is possible to develop a model
by transfer learning from the state-of-the-art VGGFace model.
## 5 Conclusion
In this work, we showed experimentally that transfer learning on VGGFace to
recognize low-quality and low-resource dark-skinned face data works. This is
very promising, as it is very tedious to collect a huge amount of data for
dark-skinned faces and develop a high-accuracy model from scratch. For future
work, we encourage vision researchers to explore such techniques further and
to investigate how to make such methods more efficient.
## References
* [1] Wang Mei and Weihong Deng. Deep face recognition: A survey. arXiv preprint arXiv: 1804.06655, 2018.
* [2] Mahbub Hussain, Jordan Bird, and Diego Faria. A study on cnn transfer learning for image classification. 06 2018.
* [3] Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, pages 77–91, 2018.
* [4] Sergio Escalera, Mercedes Torres Torres, Brais Martinez, Xavier Baró, Hugo Jair Escalante, Isabelle Guyon, Georgios Tzimiropoulos, Ciprian Corneou, Marc Oliu, Mohammad Ali Bagheri, et al. Chalearn looking at people and faces of the world: Face analysis workshop and challenge 2016. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1–8, 2016.
* [5] Kaipeng Zhang, Lianzhi Tan, Zhifeng Li, and Yu Qiao. Gender and smile classification using deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 34–38, 2016.
* [6] Omkar M Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep face recognition. 2015\.
* [7] Rajeev Ranjan, Swami Sankaranarayanan, Carlos D Castillo, and Rama Chellappa. An all-in-one convolutional neural network for face analysis. In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pages 17–24. IEEE, 2017.
* [8] R. M. Prakash, N. Thenmoezhi, and M. Gayathri. Face recognition with convolutional neural network and transfer learning. In 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT), pages 861–864, 2019.
# Spread and defend infection in graphs
Arya Tanmay Gupta<EMAIL_ADDRESS>
(Computer Science and Engineering, Michigan State University)
###### Abstract
The spread of an infection, a contagion, a meme, an emotion, a message and
various other spreadable objects has been discussed in several works. Burning
and firefighting have been discussed in particular on static graphs. Graph
burning simulates the spread of “fire” throughout a graph (plus one unburned
node burned at each time-step); graph firefighting simulates the defence of
nodes by placing firefighters on nodes which have not already been burned
while the fire spreads (started from only a single fire source).
This article studies a combination of firefighting and burning on a graph
class which is a variation (generalization) of temporal graphs. Nodes can be
infected from “outside” a network. We present a notion of both upgrading (of
unburned nodes, similar to firefighting) and repairing (of infected nodes).
The nodes which are burned, firefighted, or repaired are chosen
probabilistically, so a variable number of nodes are allowed to be infected,
upgraded and repaired in each time-step.
In the model presented in this article, both burning and firefighting proceed
concurrently; we introduce such a system to enable the community to study the
spread of an infection and the notion of upgrade/repair against each other.
The graph class that we study (on which these processes are simulated) is a
variation of the temporal graph class in which, at each time-step, a
communication takes place probabilistically (iff an edge exists in that
time-step). In addition, a node can be “worn out” and thus removed from the
network, and a new healthy node can be added as well. This class of graphs
enables highly complex systems to be simulated and studied.
Keywords: variable burning, variable firefighting, temporal graphs, variable
nodes, variable edges
## 1 Introduction
Several models based on discrete mathematics, probability and complex calculus
have been used to demonstrate the spread of an infection or a contagion in a
network of hosts, a human social network, or other biological network.
Graph burning is a process introduced on static graphs [6].
###### Definition 1.
Graph Burning. Initially, all nodes are marked as “unburned”. Then, in each
time-step, (any) one unburned node is burned from “outside”, and the fire
spreads to the neighbouring nodes of the nodes which were burned up to the
previous time-step. This process continues until all the nodes are burned.
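Definition 1 admits a direct simulation; below is a Python sketch on an adjacency-list graph, where the node burned from “outside” at each time-step is simply the first unburned node (any selection rule is consistent with the definition):

```python
def burn(adj):
    """Simulate graph burning; return the number of time-steps needed."""
    burned = set()
    nodes = list(adj)
    steps = 0
    while len(burned) < len(nodes):
        steps += 1
        # burn any one unburned node from "outside"
        source = next(v for v in nodes if v not in burned)
        # fire spreads from the nodes burned up to the previous time-step
        spread = {u for v in burned for u in adj[v]}
        burned |= spread | {source}
    return steps

# star graph: center 0 adjacent to leaves 1..4
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
steps = burn(star)
```

On the star graph, burning the center at step 1 lets the fire cover every leaf at step 2, so the simulation finishes in two time-steps; smarter source-selection rules give the (NP-hard to optimize) burning number.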
Firefighting is another process which was introduced on static graphs [24].
###### Definition 2.
Graph Firefighting. Fire is initiated at (any) one node in the first
time-step. From the second time-step onward, at each time-step, a firefighter
is placed on an unburned node, and then the fire spreads to all nodes
neighbouring the nodes which were burned up to the previous time-step, except
that a firefighted node cannot be burned. This process stops when the fire
cannot spread to any new nodes.
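Definition 2 can likewise be simulated; in the Python sketch below the firefighter is placed on the smallest-labelled unburned, undefended neighbour of the fire, which is one valid (not necessarily optimal) strategy:

```python
def firefight(adj, fire_source):
    """Simulate firefighting; return (burned, defended) once fire stops."""
    burned, defended = {fire_source}, set()
    while True:
        # place a firefighter on an unburned, undefended neighbour of the fire
        front = sorted(u for v in burned for u in adj[v]
                       if u not in burned and u not in defended)
        if front:
            defended.add(front[0])
        # fire spreads to undefended neighbours of the burned nodes
        spread = {u for v in burned for u in adj[v]
                  if u not in burned and u not in defended}
        if not spread:
            return burned, defended
        burned |= spread

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
burned, defended = firefight(path, 2)   # fire starts in the middle
```

On the path 0-1-2-3-4 with the fire at node 2, this strategy defends nodes 1 and 4, so only nodes 2 and 3 ever burn.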
Both firefighting and graph burning have been shown to be NP-hard.
In this article, we extend our work in [22, 23] and study a model in which
graph burning and firefighting are used against each other. We study more
sophisticated versions of burning and firefighting in which an arbitrary
number of nodes is burned from outside and an arbitrary number of nodes is
firefighted in each time-step, both chosen probabilistically, as are the
spread of the infection and the choice of nodes to upgrade/repair. Further,
neither burning nor firefighting is permanent on the nodes. In addition, the
graph that we study is not static. These modifications to the contemporary
definitions of burning and firefighting have been made so that the model
presented in this article can efficiently represent several real-world
systems.
In the literature, a temporal graph $G$ is defined as follows.
###### Definition 3.
Temporal Graph. A temporal graph $G=(V,E_{1},E_{2},\dots,E_{\ell})$ is defined
by a static set of nodes $V(G)$ and a sequence $G_{1},G_{2},\dots,G_{\ell}$ of
graphs which have the same node set as $G$, where for each graph $G_{i}$ $(1\leq
i\leq\ell)$, $G_{i}=(V,E_{i})$.
As per the above definition, $i$ corresponds to the $i^{th}$ time-step and, as
the name suggests, temporal graphs were initially introduced to model
graphs that change with time. A temporal graph $G$ can be viewed as a graph
which has a constant number of nodes but a sequence of (not necessarily)
distinct sets of edges on $V(G)$. We call $G_{i}$ an instance of $G$. We study
a model which presents a fusion of probability-based graph burning and
firefighting on a variation of temporal graphs: in addition to how temporal
graphs have been defined, we allow the number of nodes to be modified in each
time-step.
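A temporal graph in the sense of Definition 3 can be represented directly as a static node set plus one edge set per time-step (a minimal sketch; `TemporalGraph` and `instance` are illustrative names of our own):

```python
class TemporalGraph:
    """A fixed node set V with edge sets E_1, ..., E_l (Definition 3)."""
    def __init__(self, nodes, edge_sets):
        self.nodes = set(nodes)           # static node set V(G)
        self.edge_sets = list(edge_sets)  # one edge set per time-step

    def instance(self, i):
        """Return the instance G_i = (V, E_i), for 1 <= i <= l."""
        return self.nodes, self.edge_sets[i - 1]

G = TemporalGraph({0, 1, 2}, [{(0, 1)}, {(0, 2), (1, 2)}])
V, E1 = G.instance(1)  # first instance contains only the edge (0, 1)
```

The variant studied in this article additionally lets `nodes` itself change between time-steps.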
The structure of the article is as follows. The structure of the subject class
of graphs that we study is discussed in Section 2. Section 3 contains the
preliminaries, and Section 4 outlines a general spread-and-defend framework.
In Section 5, we study variable burning; in Section 6 we study variable
burning in combination with variable repair and upgrade; and in Section 7 we
study variable burning, repair and upgrade while also allowing nodes to be
inserted and deleted. In Section 8, we introduce variable edge probability in
nodes. In Section 9 we discuss some interesting modifications that can be made
while simulating certain real-time systems. In Section 10, we discuss the
related work in the literature, and we conclude in Section 11.
## 2 Structure of the subject class of networks
The model described in this article studies a network of nodes where we
emphasize the spreading of an infection and the defence against it. In this
article, we restrict our discussion to an arbitrary graph $G$.
In the model that we are going to present, the input is a graph $G$ with a
defined set of nodes. Associated with the graph $G$, there are some variables
that we define in Table 1. Values for all the variables in Table 1 are
provided as part of the input.
Variable (associated with $G$) | What it represents
---|---
$\rho_{del}$ | probability of deletion of an infected node
$\rho_{ins}$ | probability of insertion of a new node to $G$; denotes the number of nodes being added in each time-step as the ratio to the contemporary number of nodes
Table 1: Variables associated with $G$
This is different than the traditional temporal graphs because we allow
insertion and deletion of nodes as well.
Each node $v$ in the graph has some associated variables which we describe in
Table 2. Values for the variables in Table 2 are not provided as part of the
input, except for $v.\rho_{c}$ and $v.type$. The default initial values for
the rest of the variables are discussed in Section 3.
Variable | What it represents
---|---
$v.\rho_{c}$ | the probability that $v$ is connected to any other node at any time-step
$v.i_{s}$ | $true$ iff $v$ is infected (infection status)
$v.type$ | denotes the type of $v$, just for book-keeping
$v.e_{s}$ | $true$ if $v.i_{s}$ is $true$ and the infection in $v$ is evident and has been reported (infection evidence status)
$v.t_{e}$ | denotes the time-step in which $v.e_{s}$ was last flipped to $true$ (from $false$)
$v.t_{r}$ | the time-step when $v$ was repaired (after getting infected, and then getting reported)
$v.t_{u}$ | the time-step when $v$ was upgraded
Table 2: Variables associated with each node $v$ in $V(G)$
We also use some variables that are globally accessible and commonly
applicable to all the nodes, but are constant for each node in $G$; we do not
associate them with any node and assume that a single copy of these variables
is used by all the nodes. We define these variables in Table 3. Such variables
define some statistical characteristics of all the nodes. These variables
could also be defined as node-specific, depending on the type of nodes in the
network, but we assume in our model that all the nodes are “probabilistically”
similar. Values for all the variables in Table 3 are provided as part of the
input.
Variable | What it represents
---|---
$\mathcal{N}_{s}$ | probability of a node getting infected from another node (spread), given that they communicate
$\mathcal{N}_{e}$ | denotes the average fraction of infected nodes in which the infection gets evident
$\mathcal{N}_{r}$ | denotes the average fraction of infected and “evident” nodes which get repaired after their infection is reported
$\mathcal{N}_{u}$ | denotes the average fraction of nodes which are upgraded in each time-step, as the ratio to healthy nodes which were not “recently” repaired or upgraded
$\tau_{r}$ | time(-steps) of immunity from infection after getting repaired
$\tau_{u}$ | time(-steps) of immunity from infection after getting upgraded
$\mathcal{N}_{o}$ | denotes the fraction of healthy nodes (which were not “recently” repaired or upgraded) that can be infected from outside
Table 3: Variables which are globally common for all nodes - applicable to all
nodes, but are constant for each node in $G$.
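One possible encoding of the per-node variables of Table 2 and the global parameters of Tables 1 and 3 (the field names mirror the article's symbols, the defaults follow Section 3, and the class names are our own):

```python
from dataclasses import dataclass

@dataclass
class Node:
    rho_c: float           # v.rho_c: per-step connection probability
    type: str = "generic"  # v.type: book-keeping only
    i_s: bool = False      # infected?
    e_s: bool = False      # infection evident / reported?
    t_e: int = -1          # step at which e_s last flipped to True
    t_r: int = -1          # step of the last repair
    t_u: int = -1          # step of the last upgrade

@dataclass
class Params:
    N_s: float    # spread probability per infected contact
    N_e: float    # reporting probability
    N_r: float    # repair probability
    N_u: float    # upgrade probability
    N_o: float    # outside-infection probability
    tau_r: int    # immunity window after a repair
    tau_u: int    # immunity window after an upgrade
    rho_del: float = 0.0  # deletion probability (Table 1)
    rho_ins: float = 0.0  # insertion ratio (Table 1)

v = Node(rho_c=0.05)
p = Params(N_s=0.2, N_e=0.5, N_r=0.5, N_u=0.1, N_o=0.05, tau_r=3, tau_u=5)
```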
Let $G^{\prime}$ be the graph instance which is manifested at a time-step $t$.
Based on the variables that we have defined, in each discrete time-step $t$
our model proceeds as follows.
1. In the edge set $E(G^{\prime})$ of an instance $G^{\prime}$ of $G$, the edge $\\{u,v\\}$ exists with a probability which is defined by $u.\rho_{c}$ and $v.\rho_{c}$ (we discuss this in detail as we describe the algorithm in Section 5). We consider a vertex $v$ to be in $V(G^{\prime})$ iff $v$ is part of an edge in $E(G^{\prime})$; all other nodes are inactive for $G^{\prime}$ and hence not part of $V(G^{\prime})$.
2. Any healthy node is infected from “outside” the network with probability $\mathcal{N}_{o}$.
3. From each node $u$ which was infected until time-step $t-1$, any healthy node $v$ which is adjacent to $u$ in $G^{\prime}$ gets infected with probability $\mathcal{N}_{s}$. This happens only when $v$ was never repaired or was repaired at least $\tau_{r}+1$ steps before $t$, and $v$ was never upgraded or was upgraded at least $\tau_{u}+1$ steps before $t$.
4. The infection in each infected node gets reported with probability $\mathcal{N}_{e}$.
5. Each infected node for which an infection is reported is repaired with probability $\mathcal{N}_{r}$.
6. A healthy node which was repaired at least $\tau_{r}+1$ time-steps before $t$ and was upgraded at least $\tau_{u}+1$ time-steps before $t$ is upgraded with probability $\mathcal{N}_{u}$.
Once the infection initiates in $G$, we start monitoring it. After that, we
terminate when:
1. all the nodes are infected, or
2. none of the nodes is infected.
If all the nodes get infected, we consider that the repair/upgrade strategy
was not “good enough”, and vice versa.
## 3 Preliminaries
$V(G)$ is the set of nodes in a graph $G$. $E(G)$ is the set of edges in $G$.
The potential edge set is $E(G)=\\{\\{u,v\\}\ |\ u,v\in V(G),u\neq
v\\}$, so $|E(G)|=\binom{|V(G)|}{2}$; for each instance $G^{\prime}$ of
$G$, there is a specific probability $p_{uv}$ which decides the existence of
an edge $\\{u,v\\}$ in $G^{\prime}$. Before the algorithm starts to process
$G$, each node is initialized with the values $v.i_{s}=$ $false$, $v.e_{s}=$
$false$, $v.t_{e}=-1$, $v.t_{r}=-1$, and $v.t_{u}=-1$. $v.\rho_{c}$, as
discussed in Section 2 for each node $v\in V(G)$, is provided with the input
to the algorithm.
A state of a node at a particular time-step is defined by the value that each
of its variables contains. The state of $G$, the global state, is the set of
values of all variables of each node. A trace [1] with respect to a node $v\in
V(G)$ is defined by the sequence of states that $v$ goes through in each time-
step, starting from time-step 0. The trace of $G$, the global trace, is the
sequence of states of $G$. A fault is a contiguous subsequence of the trace of
$v$ that is not desirable. In our model, we consider the invariant to be that
each node present in $G$ is uninfected, that is, for each node $v$ we desire
$\lnot v.i_{s}$. Otherwise, if $v.i_{s}$, then we consider that $v$ is in a
faulty state. If any node of $G$ is in a faulty state, then $G$ is outside
the invariant. The transfer of infection can happen from within the network
$G$ (which $v$ is a part of) or from outside of $G$.
From the perspective of the network as a whole, we define a state and trace as
follows. While the algorithm proceeds, the global state is defined by the
values in the variables $v.i_{s}$, $v.e_{s}$, $v.t_{e}$, $v.t_{r}$ and
$v.t_{u}$ at a time-step for each node $v$ in $V(G)$; these are the only
variables that are possibly modified throughout the execution of the
algorithm. A trace is a sequence of such states, that is, a sequence of sets
$\\{v.i_{s}$, $v.e_{s}$, $v.t_{e}$, $v.t_{r}$ and $v.t_{u}\ |\ v\in V(G)\\}$
at each time-step.
The infection spreads between nodes only as a result of a communication. A
pair of nodes $u$ and $v$ communicate at a time-step $t$ if and only if
$\\{u,v\\}\in E(G^{\prime})$, where $G^{\prime}$ is the instance of $G$ at
that time-step. Along with the original communication, a node may also
transfer an infection to the destination node. A node may or may not execute a
fault if it is already infected. If it executes a fault, we assume that the
fault is visible (throughout and outside the network) and is immediately
reported, in which case the node will be repaired and does not take part in
any communication until repaired. In our model, it is desired that the
repairing and (random) upgrading strategy is able to eventually result in a
state of the network $G$ where none of the nodes is infected, despite the
spread of the infection.
## 4 General firefight burning or burn firefighting
The graph class that we study allows vertices to be added or removed as
required. Let $G$ be such a graph.
A general algorithm which simulates the spread of and defence against
infection in graphs can have the components described in Table 4 and Table 5.
The working of each function depends on the time-step number stored in the
variable $time$. The variables that are discussed in a row are the only
variables that are affected by the respective function; no other variable is
modified. In most of the cases, we make copies of the vertex sets from the
input graph instance $G^{\prime}$ to the output graph instance
$G^{\prime\prime}$. In these cases, for a vertex $v$, $G^{\prime}.v$ stands
for a vertex in the graph $G^{\prime}$, and $G^{\prime\prime}.v$ stands for
the same vertex (as copied) in the output graph $G^{\prime\prime}$. Table 4
describes the list of functions that an arbitrary spread-and-defend algorithm
might use. It may not be necessary that such an algorithm uses all and only
these methods explicitly, but the underlying functionalities can be divided
into these methods following the predicates as described. These functions
will be explained further, with respect to their significance, in the
following sections.
Each method takes the time-step number $time$ as an argument. An algorithm can
choose to mark some changes by the time-step number. Some changes may depend
on the occurrence time-step of certain events. For example, Outside-Infect()
and Spread-Infection() may depend on the values of $\tau_{r}$ or $\tau_{u}$ in
the vertices. Such dependencies are discussed in the following sections in
this article; we are going to utilize the functions from Table 4.
Function name | Output | Logical properties
---|---|---
Instance($G$, $time$) | $G^{\prime}$ | $V\subseteq V(G)\land G^{\prime}=(V,E)\land E\subseteq V\times V$.
Outside-Infect($G^{\prime}$, $time$) | $V^{\prime}$ | $V^{\prime}\subseteq V(G^{\prime})~{}\land\forall~{}v\in V^{\prime}$, $\lnot G^{\prime}.v.i_{s}$.
Spread-Infection($G^{\prime}$, $time$) | $V^{\prime}$ | $V^{\prime}\subseteq V(G^{\prime})\land\forall~{}v\in V^{\prime}$, $(G^{\prime}.v.i_{s}~{}\lor(\exists u\in V(G^{\prime}):\\{u,v\\}\in E(G^{\prime})\land G^{\prime}.u.i_{s}))$.
Report-Infection($G^{\prime}$, $time$) | $G^{\prime\prime}$ | $V(G^{\prime\prime})=V(G^{\prime})\land E(G^{\prime\prime})=E(G^{\prime})\land\forall~{}v\in V(G^{\prime})$, $(G^{\prime\prime}.v.e_{s}\implies(G^{\prime}.v.i_{s}~{}\lor G^{\prime}.v.e_{s})\land(G^{\prime}.v.e_{s}\implies G^{\prime\prime}.v.e_{s}))$.
Repair-Instance($G^{\prime}$, $time$) | $G^{\prime\prime}$ | $V(G^{\prime\prime})=V(G^{\prime})\land E(G^{\prime\prime})=E(G^{\prime})\land\forall~{}v\in V(G^{\prime})$, $((\lnot G^{\prime\prime}.v.i_{s}\land G^{\prime\prime}.v.t_{r}=time)\implies G^{\prime}.v.i_{s})$.
Upgrade-Instance($G^{\prime}$, $time$) | $G^{\prime\prime}$ | $V(G^{\prime\prime})=V(G^{\prime})\land E(G^{\prime\prime})=E(G^{\prime})\land\forall~{}v\in V(G^{\prime})$, $((\lnot G^{\prime\prime}.v.i_{s}\land G^{\prime\prime}.v.t_{u}=time)\implies\lnot G^{\prime}.v.i_{s})$.
Delete-Infected($G^{\prime}$, $time$) | $G^{\prime\prime}$ | $V(G^{\prime\prime})\subseteq V(G^{\prime})\land\forall~{}v\in V(G^{\prime})\setminus V(G^{\prime\prime})$, $G^{\prime}.v.i_{s}$.
Insert-New($G^{\prime}$, $time$) | $G^{\prime\prime}$ | $V(G^{\prime})\subseteq V(G^{\prime\prime})\land\forall~{}v\in V(G^{\prime\prime})\setminus V(G^{\prime})$, $\lnot G^{\prime\prime}.v.i_{s}$.
Table 4: List of functions that an arbitrary spread-and-defend simulation algorithm might use. For each row, column 1: function name, column 2: return value symbol, column 3: predicates followed by the function.

Function name | Functionality
---|---
Infect($v$) | infect $v$
Repair($v$) | repair $v$
Upgrade($v$) | upgrade $v$
Table 5: List of functions that may be invoked by the functions in Table 4.
### 4.1 Logic of algorithms: burning and firefighting
Any algorithm simulating the spread of and defence against infection in
graphs can be broken into the following modules, as demonstrated by the
steps in Algorithm 1.
###### Algorithm 1.
Given the input initial set of nodes $V$, perform the following steps.
Generalized-Burning($G$)
Initialize $time=0$. Run the following steps iteratively.
1. $time=time+1$.
2. $G^{\prime}=$ Instance($G$, $time$).
3. $I_{out}=$ Outside-Infect($G^{\prime}$, $time$).
4. $S_{in}=$ Spread-Infection($G^{\prime}$, $time$).
5. $\forall~{}v:v\in S_{in}\cup I_{out},$ Infect($v$).
6. $G^{\prime}=$ Report-Infection($G^{\prime}$, $time$).
7. $G^{\prime}=$ Repair-Instance($G^{\prime}$, $time$).
8. $G^{\prime}=$ Upgrade-Instance($G^{\prime}$, $time$).
9. $G^{\prime}=$ Delete-Infected($G^{\prime}$, $time$).
10. $G^{\prime}=$ Insert-New($G^{\prime}$, $time$).
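The skeleton of Algorithm 1 can be sketched in Python with the ten functions passed in as parameters (an illustrative sketch with our own names; `max_steps` is an added safety bound not present in the article):

```python
# Skeleton of Algorithm 1: the concrete functions (Instance,
# Outside-Infect, ...) vary per variant, so they are parameters here.
def generalized_burning(G, instance, outside_infect, spread_infection,
                        infect, report, repair, upgrade,
                        delete_infected, insert_new, max_steps=100):
    time = 0
    while time < max_steps:
        time += 1                               # step 1
        Gp = instance(G, time)                  # step 2
        I_out = outside_infect(Gp, time)        # step 3
        S_in = spread_infection(Gp, time)       # step 4
        for v in S_in | I_out:                  # step 5
            infect(v)
        Gp = report(Gp, time)                   # step 6
        Gp = repair(Gp, time)                   # step 7
        Gp = upgrade(Gp, time)                  # step 8
        Gp = delete_infected(Gp, time)          # step 9
        G = insert_new(Gp, time)                # step 10
    return G

# No-op stubs: the graph passes through unchanged.
ident = lambda g, t: g
empty = lambda g, t: set()
final = generalized_burning({"a", "b"}, ident, empty, empty,
                            lambda v: None, ident, ident, ident,
                            ident, ident, max_steps=3)
```

The concrete instantiations in Sections 5 and 6 replace the stubs with the listed functions.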
## 5 Only burning
In this section, we are going to study the spread of contagion through a
network.
The functions that we are going to utilize are as follows. $\epsilon$ stands
for a null character.
A. Instance($G$)
1. $V^{\prime}\leftarrow V(G)$. $E^{\prime}\leftarrow\emptyset$.
2. for every set $\\{u,v\\}:u,v\in V^{\prime}\land u\neq v$,
3. $e_{uv}\leftarrow\epsilon$. $e_{vu}\leftarrow\epsilon$.
4. With probability $u.\rho_{c}$, execute: $e_{uv}\leftarrow(u,v)$.
5. With probability $v.\rho_{c}$, execute: $e_{vu}\leftarrow(v,u)$.
6. if $e_{uv}=(u,v)\ \land\ e_{vu}=(v,u)$, then
7. $E^{\prime}\leftarrow E^{\prime}\cup\\{\\{u,v\\}\\}$.
8. Return $G^{\prime}=(V^{\prime}$, $E^{\prime})$.
B. Outside-Infect($G,time$)
1. $I_{out}\leftarrow\emptyset$.
2. $\forall\ v\in V(G)$:
3. if $\lnot$ Is-Infected($v$),
4. if ($v.t_{r}=-1$ $\lor$ $time-v.t_{r}\geq\tau_{r}$) $\land$ ($v.t_{u}=-1$ $\lor$ $time-v.t_{u}\geq\tau_{u}$)
5. With probability $\mathcal{N}_{o}$, execute:
6. $I_{out}\leftarrow I_{out}\cup\\{v\\}$.
7. Return $I_{out}$.
C. Spread-Infection($G,time$)
1. for each set $\\{u,v\\}:u,v\in V(G)$:
2. if $\\{u,v\\}\in E(G)$
3. if XOR($u.i_{s}$, $v.i_{s}$):
4. if $(u.i_{s}\land u.e_{s})\lor(v.i_{s}\land v.e_{s})$, then continue
5. With probability $\mathcal{N}_{s}$ execute:
6. if ($u.t_{r}=-1$ $\lor$ $time-u.t_{r}\geq\tau_{r}$) $\land$ ($u.t_{u}=-1$ $\lor$ $time-u.t_{u}\geq\tau_{u}$), then Infect($u$)
7. if ($v.t_{r}=-1$ $\lor$ $time-v.t_{r}\geq\tau_{r}$) $\land$ ($v.t_{u}=-1$ $\lor$ $time-v.t_{u}\geq\tau_{u}$), then Infect($v$)
D. Infect($v$)
1. $v.i_{s}\leftarrow true$
2. $v.t_{u}\leftarrow-1$
3. $v.t_{r}\leftarrow-1$
The algorithm simulating a burning process is described as follows.
###### Algorithm 2.
Given the input graph $G=(V,E)$, where the edge set $E(G)$ is initially
empty, along with the variables discussed in Table 1, Table 2 and Table 3
provided as part of the input, perform the following steps.
Variable-Burning($G$)
Initialize $time=0$. Repeat the following steps until the algorithm stops.
1. $G^{\prime}=$ Instance($G$).
2. if $\forall\ v\in V(G^{\prime})$, Is-Infected($v$), then Stop.
3. $time\leftarrow time+1$
4. $I_{out}=$ Outside-Infect($G^{\prime}$, $time$).
5. $S_{in}=$ Spread-Infection($G^{\prime}$, $time$).
6. $\forall~{}v:v\in S_{in}\cup I_{out},$ Infect($v$).
7. $V(G)\leftarrow V(G^{\prime})$.
We describe Algorithm 2 in the following few paragraphs. We initiate with an
instance $G^{\prime}$ of $G$ (line 1). $G^{\prime}$ has the same node set as
that of $G$. The edges in $G^{\prime}$ are decided based on the value of
$v.\rho_{c}$ in every node $v$: a node $v$ may decide an arc $(v,u)$ to exist
based on $v.\rho_{c}$, but the edge $e=\\{u,v\\}$ will be inserted in
$G^{\prime}.E$ only if $u$ also decides the arc $(u,v)$ to exist based on
$u.\rho_{c}$. We stop if every node is infected (line 2).
Now we simulate the infection that nodes get from outside of the network $G$.
We first compute the set of nodes $I_{out}$ which are infected from outside
(line 4); each node gets the infection from outside the network with
probability $\mathcal{N}_{o}$. Then we determine the set of nodes $S_{in}$
which are infected as a result of the spread of infection from within the
network (line 5). $I_{out}$ and $S_{in}$ are computed independently of each
other; at any time-step, both depend on the status of the nodes at the
beginning of that time-step. Then we actually infect the nodes in $I_{out}$
and $S_{in}$ (line 6). This is similar to the graph burning procedure [6]. We
only spread infection to the healthy nodes in line 5, which simulates the
notion that a node can get the infection, from both inside and outside of its
local network, only if it is healthy. In line 5, we spread the infection from
within the network such that an infected node $u$ can infect an uninfected
node $v$ with probability $\mathcal{N}_{s}$ if $\\{u,v\\}\in
E(G)$. When a node $v$ gets infected, $v.i_{s}$ is set to $true$.
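The loop above can be sketched as runnable Python (an illustrative transcription with our own names; the mutual-agreement edge sampling follows the description of Instance(), and the fixed seed makes the run reproducible):

```python
import random

def instance(nodes, rho_c, rng):
    """Sample an edge set: {u, v} exists iff both endpoints agree,
    each agreeing independently with probability rho_c."""
    edges = set()
    ns = sorted(nodes)
    for i, u in enumerate(ns):
        for v in ns[i + 1:]:
            if rng.random() < rho_c and rng.random() < rho_c:
                edges.add((u, v))
    return edges

def variable_burning(n, rho_c, N_o, N_s, seed=0, max_steps=10_000):
    """Run burning-only Variable-Burning until every node is infected
    (or the added safety bound is hit); returns the step count."""
    rng = random.Random(seed)
    infected = {v: False for v in range(n)}
    t = 0
    while not all(infected.values()) and t < max_steps:
        t += 1
        edges = instance(infected, rho_c, rng)   # line 1
        # lines 4-5: both sets are computed against the state at the
        # start of the step, independently of each other
        I_out = {v for v, s in infected.items()
                 if not s and rng.random() < N_o}
        S_in = {w for u, v in edges for w in (u, v)
                if infected[u] != infected[v]
                and not infected[w] and rng.random() < N_s}
        for v in I_out | S_in:                   # line 6: infect
            infected[v] = True
    return t

steps = variable_burning(n=20, rho_c=0.3, N_o=0.1, N_s=0.5)
```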
In the above set of functions, (1) the if condition at line 4 of Outside-
Infect(), and (2) the if conditions at lines 4, 6 and 7 of Spread-Infection()
are not needed for our purposes right now, but they will become useful later
(in Section 6). For now, they can be safely ignored, as under Algorithm 2
they never alter the behaviour.
Algorithm 2 follows Algorithm 1 as per the constraints listed in Table 4:
each function used explicitly satisfies the predicates of the corresponding
function. Algorithm 2 also trivially satisfies the constraints of those
functions of Algorithm 1 which it does not use.
###### Observation 1.
If at the beginning of some time-step the fraction of healthy nodes is $h$,
and only outside infection is considered, then at the end of that time-step
the expected fraction of healthy nodes will be $h(1-\mathcal{N}_{o})$.
###### Lemma 1.
If at the beginning of some time-step the fraction of healthy nodes is $h$,
and the edge probability for each vertex is $\rho_{c}$, then the fraction of
nodes that remain healthy at the end of that time-step is, in expectation,
$h-\mathcal{N}_{s}nh(1-h)(\rho_{c})^{2}$.
###### Proof.
In the beginning, the fraction of infected nodes is $1-h$. Let the total
number of nodes in the subject graph be $n$. If each healthy node were
connected to all the unhealthy nodes, then the number of communications that
any healthy node would take part in is $n(1-h)$.
The edge probability of each vertex is $\rho_{c}$, so the probability that a
pair of nodes agrees to communicate is $(\rho_{c})^{2}$, and the expected
number of communications that a healthy node has with the unhealthy nodes is
$(\rho_{c})^{2}n(1-h)$.
If a healthy node is connected to one unhealthy node, it gets infected with
probability $\mathcal{N}_{s}$. Since each healthy node communicates with
$(\rho_{c})^{2}n(1-h)$ unhealthy nodes in expectation, the probability (to
first order) with which any healthy node gets infected is
$\mathcal{N}_{s}(\rho_{c})^{2}n(1-h)$.
There are $nh$ healthy nodes. The fraction of nodes which get infected in this
step is $\dfrac{\mathcal{N}_{s}n(1-h)(\rho_{c})^{2}\times
nh}{n}=\mathcal{N}_{s}nh(1-h)(\rho_{c})^{2}$, so the fraction of nodes
remaining healthy after one time-step is
$h-\mathcal{N}_{s}nh(1-h)(\rho_{c})^{2}$.
∎
###### Theorem 1.
If at the beginning of some time-step the fraction of healthy nodes is $h$,
then by the end of that time-step the fraction of nodes that are healthy is
$h-h\mathcal{N}_{o}-h\mathcal{N}_{s}n(1-h)(\rho_{c})^{2}+h^{2}\mathcal{N}_{o}\mathcal{N}_{s}n(1-h)(\rho_{c})^{2}$.
###### Proof.
According to the description of the algorithm, first (1) the nodes to be
infected from outside are chosen, then (2) the nodes which get infected from
within the network are chosen, and then (3) the nodes chosen at (1) and (2)
are declared infected.
So the infection from outside and the spread of infection within the network
happen independently of each other, and both depend only on the vertices that
were already infected at the end of the previous time-step. The fraction of
newly infected nodes is the union of the fraction of nodes infected from
outside and the fraction of nodes infected due to the spread of infection
from within the network. This fraction is
$h\mathcal{N}_{o}+h\mathcal{N}_{s}n(1-h)(\rho_{c})^{2}-h\mathcal{N}_{o}\times
h\mathcal{N}_{s}n(1-h)(\rho_{c})^{2}$. The fraction of healthy vertices that
remains after one time-step is therefore
$h-h\mathcal{N}_{o}-h\mathcal{N}_{s}n(1-h)(\rho_{c})^{2}+h^{2}\mathcal{N}_{o}\mathcal{N}_{s}n(1-h)(\rho_{c})^{2}$.
∎
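As a quick numeric sanity check (with the experimental values $n=100$, $\mathcal{N}_{s}=0.2$, $\mathcal{N}_{o}=0.05$, $\rho_{c}=0.05$ used below, and an arbitrary example fraction $h=0.9$; the inside-spread term is the one from Lemma 1, and the function name is our own):

```python
def healthy_after_step(h, n, N_o, N_s, rho_c):
    """Expected healthy fraction after one step: subtract the union
    of the outside-infection and inside-spread fractions from h."""
    spread = h * N_s * n * (1 - h) * rho_c ** 2   # Lemma 1 term
    return h - h * N_o - spread + h * N_o * spread

h1 = healthy_after_step(h=0.9, n=100, N_o=0.05, N_s=0.2, rho_c=0.05)
# h1 = 0.9 - 0.045 - 0.0045 + 0.0002025 = 0.8507025
```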
The experimental results are as follows. We took the average over 10 runs,
with the following values.

$n$ | 100
---|---
$\mathcal{N}_{s}$ | 0.2
$\mathcal{N}_{o}$ | 0.05
$\rho_{c}$ | 0.05

Table 6: Initial values of the variables in the experiment.

Figure 1: The experimental results agree with the theoretical results of
Theorem 1. The plot shows the time-step number $i$ against the mean number of
healthy vertices at the $i^{th}$ time-step (over all runs of the algorithm).
The longest run took 84 time-steps.
From Theorem 1, it can be observed that the derived fraction depends on $n$,
the number of vertices in the graph: it is proportional to the number of
vertices, and it depends on the edge probability $\rho_{c}$ as well. Recall
that in this section we have assumed that the probability of communication is
the same for all the vertices.
## 6 Introducing repair and upgrade on nodes
In this section, we introduce the notion of repair and upgrade of the nodes.
After the repair or upgrade of a node, there is a certain number of
time-steps during which that node remains immune to infection, that is, until
a certain amount of time has passed, it will not catch the infection even
through an infected communication.
The additional functions that we utilize are as follows.
A. Report-Infection($G$, $time$)
1. $\forall\ v\in V(G)$
2. if $v.i_{s}\land\lnot v.e_{s}$
3. With probability $\mathcal{N}_{e}$ execute
4. $v.e_{s}$ $\leftarrow true$. $v.t_{e}\leftarrow time$.
B. Repair-Instance($G$, $time$)
1. $\forall\ v\in V(G)$
2. if $v.i_{s}\land v.e_{s}$, then
3. With probability $\mathcal{N}_{r}$, execute: Repair-node($v$, $time$).
C. Upgrade-Instance($G$, $time$)
1. $\forall\ v\in V(G)$
2. if $\lnot\ v.i_{s}$, then
3. With probability $\mathcal{N}_{u}$, execute: Upgrade-node($v$, $time$).
D. Repair-node($v$, $time$)
1. $v.i_{s}\leftarrow false$
2. $v.t_{r}\leftarrow time$
3. $v.t_{e}\leftarrow-1$
4. $v.e_{s}\leftarrow false$
E. Upgrade-node($v$, $time$)
1. $v.t_{u}\leftarrow time$
2. $v.t_{e}\leftarrow-1$
3. $v.e_{s}\leftarrow false$
4. $v.t_{r}\leftarrow-1$
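The functions above translate directly into Python (an illustrative dict-based transcription; the field names follow Table 2 and the function names mirror the listing, but the representation is our own):

```python
import random

def report_infection(nodes, N_e, time, rng):
    for v in nodes.values():
        if v["i_s"] and not v["e_s"] and rng.random() < N_e:
            v["e_s"], v["t_e"] = True, time  # infection becomes evident

def repair_instance(nodes, N_r, time, rng):
    for v in nodes.values():
        if v["i_s"] and v["e_s"] and rng.random() < N_r:
            # Repair-node: clear the infection, record the repair time
            v.update(i_s=False, t_r=time, t_e=-1, e_s=False)

def upgrade_instance(nodes, N_u, time, rng):
    for v in nodes.values():
        if not v["i_s"] and rng.random() < N_u:
            # Upgrade-node: record the upgrade time, reset repair state
            v.update(t_u=time, t_e=-1, e_s=False, t_r=-1)

rng = random.Random(0)
nodes = {0: dict(i_s=True, e_s=False, t_e=-1, t_r=-1, t_u=-1)}
report_infection(nodes, N_e=1.0, time=4, rng=rng)  # becomes evident
repair_instance(nodes, N_r=1.0, time=5, rng=rng)   # then repaired
```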
We study the behaviour of a random temporal graph when we introduce the
repair of infected vertices and the upgrade of healthy vertices. The
upgrading of healthy nodes that we introduce here is similar to firefighting,
with the difference that we impose a minimum time $\tau_{u}$ (only) until
which the upgraded node remains immune to the infection.
The algorithm that we use here is as follows.
###### Algorithm 3.
Given the input graph $G=(V,E)$, where the edge set $E(G)$ is initially
empty, along with the variables discussed in Table 1, Table 2 and Table 3
provided as part of the input, perform the following steps.
Variable-Burning($G$)
Initialize $time=0$ and $infection\\_started=$ $false$. Repeat the following
steps until the algorithm stops.
1. $G^{\prime}=$ Instance($G$).
2. if not $infection\\_started$
3. if $\exists\ v\in G^{\prime}:$ Is-Infected($v$), then $infection\\_started=$ $true$
4. if $infection\\_started$:
5. if $\forall\ v\in V(G^{\prime})$, Is-Infected($v$), then Stop.
6. if $\forall\ v\in V(G^{\prime})$, $\lnot$Is-Infected($v$), then Stop.
7. $time\leftarrow time+1$
8. $I_{out}=$ Outside-Infect($G^{\prime}$, $time$).
9. $S_{in}=$ Spread-Infection($G^{\prime}$, $time$).
10. $\forall~{}v:v\in S_{in}\cup I_{out},$ Infect($v$).
11. Report-Infection($G^{\prime}$, $time$).
12. Repair-Instance($G^{\prime}$, $time$).
13. Upgrade-Instance($G^{\prime}$, $time$).
14. $V(G)\leftarrow V(G^{\prime})$.
Algorithm 3 is more complex than Algorithm 2. We are going to discuss the
differences between Algorithm 3 and Algorithm 2 and the new insertions in
Algorithm 3. In this section, we make use of the lines that we asked the
reader to ignore when we described some functions in Section 5. After
initializing a functional instance of $G$ (line 1), we determine if the
infection has started (lines 2, 3).
Once the infection has started in $G$, we stop if every node is infected
(lines 4, 5). Similarly, once the infection has started in $G$, we stop if no
node is infected (lines 4, 6), which implies that all the nodes in the system
have been “cured”. The current time-step number is stored in the variable
$time$ (line 7).
We compute the set $I_{out}$ of vertices which are infected as a result of
outside infection (line 8) and the set $S_{in}$ of vertices which are infected
as a result of the spread of infection from within the network (line 9). Then
we infect the vertices in $I_{out}$ and $S_{in}$ (line 10). A healthy node is
infected from outside with probability $\mathcal{N}_{o}$, and is infected as
a result of the spread of infection from within the network with probability
$\mathcal{N_{s}}$. In addition to these constraints, a node can only be
infected if it was repaired at least $\tau_{r}$ steps before $time$ and was
upgraded at least $\tau_{u}$ time-steps before $time$. That is, whether a node
$v$ is infected from outside or from within the network, it can be infected
only if (($v.t_{r}=-1$ $\lor$ $time-v.t_{r}\geq\tau_{r}$) $\land$
($v.t_{u}=-1$ $\lor$ $time-v.t_{u}\geq\tau_{u}$)) holds $true$. In addition,
if the infection of a node has become evident, that is, if $v.e_{s}$ is set to
$true$, then it cannot take part in the spread, as we assume that that node is
under scrutiny and will not take part in communications, or will take part in
screened communications only. When a node $v$ is declared infected, $v.i_{s}$
is set to $true$.
The infection of an infected node may be reported, its infection becoming
evident, with probability $\mathcal{N}_{e}$ (line 11), and the time of its
being reported is recorded; that is, for each node $v$ in $G$, $v.e_{s}$ is
set to $true$ and $v.t_{e}$ is set to $time$ based on $\mathcal{N}_{e}$. This
simulates the fact that the infection in a node may not get reported
immediately when it gets infected: it is not necessary for a node to execute
a fault state as soon as it gets infected. When the node executes a fault
state, we assume that its infection is evident throughout and outside the
network, and its infection gets reported. A node whose infection gets
reported shall not take part in any communication in $G$, or will take part
in screened communications only, until it is repaired, so it does not spread
the infection to any other nodes.
Any node whose infection has been reported is repaired (line 12) with
probability $\mathcal{N}_{r}$; for each node $v$ in $G$ for which $v.i_{s}$
and $v.e_{s}$ are $true$, $v.i_{s}$ is set to $false$ and $v.t_{r}$ is set to
$time$ based on $\mathcal{N}_{r}$. This simulates the notion that a node may
take time to get repaired and cannot necessarily be repaired in the same
time-step in which its infection was reported; the probability here creates a
delay before a node is finally repaired. Next, any uninfected node is
upgraded (line 13) with probability $\mathcal{N}_{u}$; for each node $v$ in
$G$ for which $v.i_{s}$ is $false$, $v.t_{u}$ is set to $time$ based on
$\mathcal{N}_{u}$. This simulates the notion that not every node is upgraded
at each time-step: a node may be chosen randomly by the system administrator
to be upgraded, and the probability here models that randomness of choice.
After a node is repaired (respectively, upgraded), it is immune to the
infection for the next $\tau_{r}$ (respectively, $\tau_{u}$) time-steps and
thus cannot be infected. Also, note that we upgrade a node irrespective of
when it was last upgraded; this may be less coherent with real-world human
social networks, in the sense that a healthy human who has been treated
against some disease would typically not receive two doses of a vaccine
within less than a certain period of time. It is, however, consistent with a
network of computers.
We now study the behaviour of the instructions that we have inserted at
lines 11 - 13 of Algorithm 3, that is, we study the behaviour of lines 11 - 13
once the spread and outside infection of the nodes have already taken place.
Let, at the beginning of some time-step,
1. $h$ be the fraction of healthy nodes with neither repaired nor upgraded status,
2. $u$ be the fraction of healthy nodes with upgraded status,
3. $r$ be the fraction of healthy nodes with repaired status,
4. $e_{y}$ be the fraction of (infected) nodes that have already shown evidence of infection, and
5. $e_{n}$ be the fraction of (infected) nodes that have not shown evidence of infection.
We have that $h+u+r+e_{n}+e_{y}=1$. Now (by Theorem 1), by the end of line 9
(Algorithm 3), the fraction of infected nodes will increase by
$h\mathcal{N}_{o}+h\mathcal{N}_{s}ne_{n}(\rho_{c})^{2}-h^{2}\mathcal{N}_{o}\mathcal{N}_{s}ne_{n}(\rho_{c})^{2}.$
The following values will be affected.
1. The final value of $h$ will be
$h=h-(h\mathcal{N}_{o}+h\mathcal{N}_{s}ne_{n}(\rho_{c})^{2}-h^{2}\mathcal{N}_{o}\mathcal{N}_{s}ne_{n}(\rho_{c})^{2}).$
2. The final value of $e_{n}$ will be
$e_{n}=e_{n}+h\mathcal{N}_{o}+h\mathcal{N}_{s}ne_{n}(\rho_{c})^{2}-h^{2}\mathcal{N}_{o}\mathcal{N}_{s}ne_{n}(\rho_{c})^{2}.$
###### Lemma 2.
A fraction $e_{y}+e_{n}\mathcal{N}_{e}$ of the nodes in total show infection
evidence in this step.
###### Proof.
The fraction of unhealthy nodes which have not yet shown evidence of infection
is $e_{n}$. A fraction $\mathcal{N}_{e}$ (cf. Table 3) of them show evidence of
infection in this step. So a fraction $e_{y}+e_{n}\mathcal{N}_{e}$ of the
vertices in total are infected and show evidence of infection by this time-
step. ∎
At the end of line 10,
1. 1.
The final value of $e_{y}$ will be
$e_{y}=e_{y}+e_{n}\mathcal{N}_{e}.$
2. 2.
The final value of $e_{n}$ will be
$e_{n}=e_{n}-e_{n}\mathcal{N}_{e}.$
###### Lemma 3.
The fraction of vertices repaired is $e_{y}\mathcal{N}_{r}$ and the fraction of
vertices upgraded is $h\mathcal{N}_{u}.$
###### Proof.
The fraction of nodes which have shown infection evidence is $e_{y}$. So it
trivially follows that the fraction of nodes that are newly repaired is
$e_{y}\mathcal{N}_{r}$.
Only those nodes are considered for an upgrade which are not infected, and were
upgraded more than $\tau_{u}$ time-steps ago or were repaired more than
$\tau_{r}$ time-steps ago. So the fraction of nodes which are upgraded is
$h\mathcal{N}_{u}$. ∎
At the end of line 11,
1. 1.
The final value of $r$ will be
$r=r+e_{y}\mathcal{N}_{r}.$
2. 2.
The final value of $e_{y}$ will be
$e_{y}=e_{y}-e_{y}\mathcal{N}_{r}.$
At the end of line 12,
1. 1.
The final values of $h$ and $u$ will be
$h=h-h\mathcal{N}_{u},\qquad u=u+h\mathcal{N}_{u}.$
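The updates of lines 9 - 12 above can be collected into one mean-field step, sketched below (the names and the exact sequencing are our assumptions; Python's tuple assignment evaluates every right-hand side on the pre-update values, which matches the convention of the formulas above):

```python
# A compact mean-field sketch of one time-step (lines 9-12 above). Variable
# and function names are ours, not from the paper.

def step(h, u, r, e_n, e_y, n, N_o, N_s, N_e, N_r, N_u, rho_c):
    # Line 9: outside infection + spread, combined by inclusion-exclusion.
    inc = (h * N_o + N_s * n * e_n * rho_c ** 2
           - h * N_o * N_s * n * e_n * rho_c ** 2)
    h, e_n = h - inc, e_n + inc
    # Line 10: a fraction N_e of unreported infected nodes show evidence.
    e_n, e_y = e_n - e_n * N_e, e_y + e_n * N_e
    # Line 11: a fraction N_r of evident nodes are repaired.
    e_y, r = e_y - e_y * N_r, r + e_y * N_r
    # Line 12: a fraction N_u of eligible healthy nodes are upgraded.
    h, u = h - h * N_u, u + h * N_u
    return h, u, r, e_n, e_y
```

Since each update only moves mass between compartments, the five fractions still sum to one after the step.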
###### Observation 2.
During an iteration of the algorithm, let $r_{time-\tau_{r}}$ be the
fraction of infected vertices with evident status that were repaired at the
time-step $(time-\tau_{r})$. If $time\geq\tau_{r}+1$, then the final
fraction of nodes with repair status at the end of that time-step is
$r-r_{time-\tau_{r}}$.
At the end of a time step,
1. 1.
The final value of $h$ will be
$h=h+r_{time-\tau_{r}}.$
2. 2.
The final value of $r$ will be
$r=r-r_{time-\tau_{r}}.$
###### Observation 3.
During an iteration of the algorithm, let $u_{time-\tau_{u}}$ be the
fraction of healthy vertices that were upgraded at the
time-step $(time-\tau_{u})$. If $time\geq\tau_{u}+1$, then the final
fraction of nodes with upgrade status at the end of that time-step is
$u-u_{time-\tau_{u}}$.
At the end of a time step,
1. 1.
The final value of $h$ will be
$h=h+u_{time-\tau_{u}}.$
2. 2.
The final value of $u$ will be
$u=u-u_{time-\tau_{u}}.$
For Observations 2 and 3, while computing the statistical values, instead of
deriving a complex formula to predict the values of $u$ or $r$ as they were
$\tau_{u}$ or $\tau_{r}$ time-steps ago, we use the standard dynamic
programming trick to retrieve those values.
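The bookkeeping trick mentioned above can be sketched as follows (a toy example with an invented constant repair rate; only the storing and look-up of past values is the point):

```python
# A toy sketch (the constants are ours) of the dynamic-programming
# bookkeeping: record the newly repaired fraction at every time-step so that,
# tau_r steps later, exactly that mass can be moved back to the healthy pool.

repaired_at = []          # repaired_at[t]: fraction newly repaired at step t
h, r, tau_r = 0.8, 0.0, 2

for t in range(6):
    newly = 0.05          # toy: a constant fraction is repaired each step
    r += newly
    repaired_at.append(newly)
    if t >= tau_r + 1:    # repairs from step (t - tau_r) expire now
        expired = repaired_at[t - tau_r]
        r -= expired
        h += expired
```

The same list-based look-up applies verbatim to upgrades, with `tau_u` and the upgraded fractions in place of the repaired ones.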
The experimental results are as follows (Figure 2). We averaged over 10
runs. We initiated the experiment with the values in Table 7. In all
the runs, all the nodes were cured.
$n$ | 100
---|---
$\mathcal{N}_{s}$ | .2
$\mathcal{N}_{e}$ | .3
$\tau_{r}$ | 25
$\tau_{u}$ | 25
$\mathcal{N}_{o}$ | .05
$\mathcal{N}_{r}$ | .5
$\mathcal{N}_{u}$ | .15
$\rho_{c}$ | 0.05
Table 7: Initial values with which the experiment started: burn with repair
and upgrade. Figure 2: Experimental versus theoretical results: burning with
repair and upgrade.
## 7 Adding and removing nodes
In this section, we are going to add more complexity to Algorithm 3 where we
allow addition and removal of nodes.
The additional functions that we utilize are as follows.
1. A.
Delete-Infected($G$)
1. 1.
for each node $v$ in $V(G)$,
2. 2.
if $v.i_{s}$, then
3. 3.
With probability $\rho_{del}$, execute Delete-node($G^{\prime}$, $v$).
2. B.
Insert-New($G$)
1. 1.
for each node $v\in V(G)$,
2. 2.
With probability $\rho_{ins}$, execute:
3. 3.
$v\leftarrow$ a new node.
4. 4.
$v.i_{s}\leftarrow false$. $v.e_{s}\leftarrow false$., $v.t_{e}\leftarrow-1$.
$v.t_{r}\leftarrow-1$. $v.t_{u}\leftarrow-1$.
5. 5.
$v.\rho_{c}\leftarrow$ a random or fixed number between 0 and 1 (say, some
number between $\min\limits_{u\in V(G)}\\{u.\rho_{c}\\}$ and
$\max\limits_{u\in V(G)}\\{u.\rho_{c}\\}$).
6. 6.
$V(G)=V(G)\cup\\{v\\}$.
3. C.
Delete-node($G$, $v$)
1. 1.
$V(G)=V(G)\setminus\\{v\\}$
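A runnable sketch of Delete-Infected and Insert-New, assuming a minimal node record mirroring the pseudocode above (the Python class and helper names are ours):

```python
# A sketch (names are ours) of the node-removal and node-insertion functions.
import random

class Node:
    def __init__(self, rho_c):
        self.i_s = False      # infected status
        self.e_s = False      # evidence status
        self.t_e = self.t_r = self.t_u = -1
        self.rho_c = rho_c    # edge probability of this node

def delete_infected(V, rho_del):
    # Each infected node is removed independently with probability rho_del.
    return [v for v in V if not (v.i_s and random.random() < rho_del)]

def insert_new(V, rho_ins):
    # For each existing node, a fresh healthy node is added w.p. rho_ins,
    # with rho_c drawn between the current minimum and maximum.
    lo = min(v.rho_c for v in V)
    hi = max(v.rho_c for v in V)
    new = [Node(random.uniform(lo, hi)) for v in V if random.random() < rho_ins]
    return V + new
```

With `rho_del = 1` every infected node is removed; with `rho_ins = 0` no node is added, recovering the fixed-size setting of the earlier sections.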
We have $G$ as initial graph with the properties as described in Table 1 and
Table 3. We have a list of 15 functions described as follows, which we utilize
in the main algorithm.
###### Algorithm 4.
Given the input graph $G=(V,E)$, where essentially the edge set $E(G)$ is
empty, along with the variables discussed in Table 1, Table 2 and Table 3
provided as part of the input, perform the following steps.
Variable-Burning($G$)
Initialize $time=0$ and $infection\\_started=$ $false$. Repeat the following
steps until the algorithm stops.
1. 1.
$G^{\prime}=$ Instance($G$).
2. 2.
if not $infection\\_started$
3. 3.
if $\exists\ v\in G^{\prime}:$ Is-Infected($v$), then $infection\\_started=$
$true$
4. 4.
if $infection\\_started$:
5. 5.
if $\forall\ v\in V(G^{\prime})$, not Is-Infected($v$), then Stop.
6. 6.
$time\leftarrow time+1$
7. 7.
$I_{out}=$ Outside-Infect($G^{\prime}$, $time$).
8. 8.
$S_{in}=$ Spread-Infection($G^{\prime}$, $time$).
9. 9.
$\forall~{}v:v\in S_{in}\cup I_{out},$ Infect($v$).
10. 10.
Report-Infection($G^{\prime}$, $time$).
11. 11.
Repair-Instance($G^{\prime}$, $time$).
12. 12.
Upgrade-Instance($G^{\prime}$, $time$).
13. 13.
Delete-Infected($G^{\prime}$).
14. 14.
Insert-New($G^{\prime}$).
15. 15.
$V(G)\leftarrow V(G^{\prime})$.
We explain Algorithm 4 as follows. Most of the functionality is similar to
Algorithm 3, except for lines 13 and 14.
An infected node is deleted from $G$ (line 13) with a probability
$\rho_{del}$, such that for each node $v$ in $V(G)$, $v$ is deleted with
probability $\rho_{del}$ only if $v.i_{s}$ is $true$. This is based on the
notion that a node can become unusable, and once a fault makes a node unusable,
it can be discarded completely. About $\rho_{ins}\times|V(G)|$ new nodes can
be inserted into $G$ (line 14), such that for each $v$ in $V(G)$, a new node
can be inserted with probability $\rho_{ins}$. This denotes the potential
efforts made by the system administrators to maintain the usability and
efficiency of the network, which are also based on the number of nodes in
it. Note that $V(G)$ itself is variable.
For the following two lemmas, we assume that at the beginning of
line 13,
1. 1.
$h$ is the fraction of healthy nodes with not repaired or upgraded status,
2. 2.
$u$ be the fraction of healthy nodes with upgraded status,
3. 3.
$r$ be the fraction of healthy nodes with repaired status,
4. 4.
$e_{y}$ be the fraction of nodes that have already shown evidence of
infection, and
5. 5.
$e_{n}$ be the fraction of nodes that have not shown evidence of
infection.
###### Lemma 4.
At the end of line 13, the final fraction of healthy vertices is
$\dfrac{h+u+r}{1-(e_{y}+e_{n})\rho_{del}}$.
###### Proof.
The expected number of vertices removed is $n(e_{y}+e_{n})\rho_{del}$, so the
number of vertices remaining is $n-n(e_{y}+e_{n})\rho_{del}$.
The final fraction of healthy vertices is
$\dfrac{n(h+u+r)}{n-n(e_{y}+e_{n})\rho_{del}}$ $=$
$\dfrac{h+u+r}{1-(e_{y}+e_{n})\rho_{del}}$. ∎
###### Lemma 5.
At the end of line 14, the final fraction of healthy vertices is
$\dfrac{h+u+r}{(1+\rho_{ins})(1-(e_{y}+e_{n})\rho_{del})}$.
###### Proof.
The number of nodes remaining at the end of line 13 is
$n-n(e_{y}+e_{n})\rho_{del}$. The number of vertices after line 14 is
$(n-n(e_{y}+e_{n})\rho_{del})+\rho_{ins}(n-n(e_{y}+e_{n})\rho_{del})$.
The final fraction of healthy vertices is
$\dfrac{h+u+r}{(1-(e_{y}+e_{n})\rho_{del})+\rho_{ins}(1-(e_{y}+e_{n})\rho_{del})}=\dfrac{h+u+r}{(1+\rho_{ins})(1-(e_{y}+e_{n})\rho_{del})}$.
∎
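Lemma 4's renormalisation can be checked numerically (the helper name and the toy values below are ours):

```python
# A numeric check of Lemma 4 (names and values are ours): deleting each
# infected node with probability rho_del removes, in expectation, a fraction
# (e_y + e_n) * rho_del of all nodes, so the healthy fraction rescales.

def healthy_after_delete(h, u, r, e_y, e_n, rho_del):
    return (h + u + r) / (1.0 - (e_y + e_n) * rho_del)

# Toy numbers: with n = 1000, healthy fraction 0.8, infected fraction 0.2 and
# rho_del = 0.5, in expectation 100 infected nodes are removed, leaving 800
# healthy nodes out of 900.
frac = healthy_after_delete(0.5, 0.2, 0.1, 0.1, 0.1, 0.5)
```

Dividing the result by $(1+\rho_{ins})$ gives the corresponding check for Lemma 5.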
The experimental results are as follows. We averaged over 10 runs.
We took the values listed in Table 8.
$n$ | 100
---|---
$\mathcal{N}_{s}$ | .2
$\mathcal{N}_{e}$ | .3
$\tau_{r}$ | 25
$\tau_{u}$ | 25
$\mathcal{N}_{o}$ | .05
$\mathcal{N}_{r}$ | .5
$\mathcal{N}_{u}$ | .15
$\rho_{c}$ | 0.05
$\rho_{del}$ | 0.002
$\rho_{ins}$ | 0.005
Table 8: Initial values with which the experiment started: burn with repair,
upgrade, add and remove.
In all the runs, all the nodes were cured.
Figure 3: Experimental versus theoretical results: burning with repair,
upgrade, add and remove.
## 8 Variable edge probability
Let that at some time-step $time$,
1. 1.
$f_{1},f_{2},...,f_{k}$ be the fraction of nodes
2. 2.
$\rho_{1},\rho_{2},...,\rho_{k}$ respectively be the edge probabilities of the
nodes,
3. 3.
$h_{1},h_{2},...,h_{k}$ be the fraction of healthy nodes not having the
upgrade or repair status,
4. 4.
$r_{1},r_{2},...,r_{k}$ be the fraction of healthy nodes having the repair
status,
5. 5.
$u_{1},u_{2},...,u_{k}$ be the fraction of healthy nodes having the upgrade
status,
6. 6.
$e_{n_{1}},e_{n_{2}},...,e_{n_{k}}$ be the fraction of infected nodes which
have not shown infection evidence, and
7. 7.
$e_{y_{1}},e_{y_{2}},...,e_{y_{k}}$ be the fraction of infected nodes which
have shown infection evidence.
The expected fraction of edges that the healthy nodes of group $j$ make with
the unhealthy and unreported nodes will be
$E_{j}=\sum\limits_{i=1}^{k}(h_{j}e_{n_{i}})(\rho_{j}\rho_{i}).$
The fraction of nodes infected by spread and outside infection will be
$I_{j}=nE_{j}\mathcal{N}_{s}+h_{j}\mathcal{N}_{o}-(nE_{j}\mathcal{N}_{s})\times(h_{j}\mathcal{N}_{o}).$
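The two formulas above can be sketched as follows (function and argument names are ours; groups are indexed $1,\dots,k$ as in the list above, which becomes 0-based indexing in the code):

```python
# A sketch (names are ours) of the two formulas above: E[j] is the expected
# edge mass between the healthy nodes of group j and the unreported infected
# nodes of every group i, and I[j] is the resulting newly infected fraction.

def infected_fractions(h, e_n, rho, n, N_s, N_o):
    k = len(rho)
    E = [sum(h[j] * e_n[i] * rho[j] * rho[i] for i in range(k))
         for j in range(k)]
    I = [n * E[j] * N_s + h[j] * N_o - (n * E[j] * N_s) * (h[j] * N_o)
         for j in range(k)]
    return E, I
```

With a single group ($k=1$) this reduces to the single-edge-probability formula used in the previous sections.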
At the end of line 9, the following values are modified.
1. 1.
The final fraction of infected nodes will be (we show this by reassigning the
value to $e_{n_{j}}$ so that we can reuse it later)
$e_{n_{j}}=e_{n_{j}}+I_{j}.$
2. 2.
The final fraction of healthy nodes not having the upgrade or repair status is
$h_{j}=h_{j}-I_{j}.$
At the end of line 10, $\mathcal{N}_{e}e_{n_{j}}$ of the total nodes, which
were not evident earlier, become evident. At the end of line 10,
1. 1.
The final fraction of infected nodes not having evident status will be
$e_{n_{j}}=e_{n_{j}}-\mathcal{N}_{e}e_{n_{j}}.$
2. 2.
The final fraction of infected nodes with evident status will be
$e_{y_{j}}=e_{y_{j}}+\mathcal{N}_{e}e_{n_{j}}.$
At the end of line 11, $\sum\limits_{j}\mathcal{N}_{r}e_{y_{j}}$ of the nodes
are repaired. At the end of line 11,
1. 1.
The final fraction of repaired nodes will be
$r_{j}=r_{j}+\mathcal{N}_{r}e_{y_{j}}.$
2. 2.
The final fraction of infected nodes with evident status will be
$e_{y_{j}}=e_{y_{j}}-\mathcal{N}_{r}e_{y_{j}}.$
At the end of line 12, $\sum\limits_{j}(h_{j}+r_{j})\mathcal{N}_{u}$
more of the vertices are upgraded. At the end of line 12,
1. 1.
The final fraction of repaired nodes will be
$r_{j}=r_{j}-\mathcal{N}_{u}r_{j}.$
2. 2.
The final fraction of healthy nodes not having repair or upgrade status will
be
$h_{j}=h_{j}-\mathcal{N}_{u}h_{j}.$
3. 3.
The final fraction of upgraded nodes will be
$u_{j}=u_{j}+(h_{j}+r_{j})\mathcal{N}_{u}.$
At the end of line 13, $\sum\limits_{j}\rho_{del}(e_{n_{j}}+e_{y_{j}})$ of the
nodes are removed. At the end of line 13,
1. 1.
The total number of nodes is
$n=n-\rho_{del}n(e_{n_{j}}+e_{y_{j}})$
.
2. 2.
The final fraction of nodes of group $j$ will be
$f_{j}=\dfrac{nf_{j}}{n-\rho_{del}n(e_{n_{j}}+e_{y_{j}})}=\dfrac{f_{j}}{1-\rho_{del}(e_{n_{j}}+e_{y_{j}})}$
3. 3.
The final fraction of healthy nodes not having the upgrade or repair status
will be
$h_{j}=\dfrac{nh_{j}}{n-\rho_{del}n(e_{n_{j}}+e_{y_{j}})}=\dfrac{h_{j}}{1-\rho_{del}(e_{n_{j}}+e_{y_{j}})}$
4. 4.
The final fraction of healthy nodes having the repair status will be
$r_{j}=\dfrac{nr_{j}}{n-\rho_{del}n(e_{n_{j}}+e_{y_{j}})}=\dfrac{r_{j}}{1-\rho_{del}(e_{n_{j}}+e_{y_{j}})}$
5. 5.
The final fraction of healthy nodes having the upgrade status will be
$u_{j}=\dfrac{nu_{j}}{n-\rho_{del}n(e_{n_{j}}+e_{y_{j}})}=\dfrac{u_{j}}{1-\rho_{del}(e_{n_{j}}+e_{y_{j}})}$
6. 6.
The final fraction of infected nodes which have not shown infection evidence
will be
$e_{n_{j}}=\dfrac{ne_{n_{j}}-n\rho_{del}e_{n_{j}}}{n-\rho_{del}n(e_{n_{j}}+e_{y_{j}})}=\dfrac{e_{n_{j}}-\rho_{del}e_{n_{j}}}{1-\rho_{del}(e_{n_{j}}+e_{y_{j}})},\text{
and}$
7. 7.
The final fraction of infected nodes which have shown infection evidence will
be
$e_{y_{j}}=\dfrac{ne_{y_{j}}-n\rho_{del}e_{y_{j}}}{n-\rho_{del}n(e_{n_{j}}+e_{y_{j}})}=\dfrac{e_{y_{j}}-\rho_{del}e_{y_{j}}}{1-\rho_{del}(e_{n_{j}}+e_{y_{j}})}.$
At the end of line 14, $n\rho_{ins}$ nodes are added. At the end of line 14,
1. 1.
The total number of nodes is
$n=n+n\rho_{ins}$
.
2. 2.
The final fraction of nodes of group $j$ will be
$f_{j}=\dfrac{nf_{j}}{n+n\rho_{ins}}=\dfrac{f_{j}}{1+\rho_{ins}}$
3. 3.
The final fraction of healthy nodes not having the upgrade or repair status
will be
$h_{j}=\dfrac{nh_{j}}{n+n\rho_{ins}}=\dfrac{h_{j}}{1+\rho_{ins}}$
4. 4.
The final fraction of healthy nodes having the repair status will be
$r_{j}=\dfrac{nr_{j}}{n+n\rho_{ins}}=\dfrac{r_{j}}{1+\rho_{ins}}$
5. 5.
The final fraction of healthy nodes having the upgrade status will be
$u_{j}=\dfrac{nu_{j}}{n+n\rho_{ins}}=\dfrac{u_{j}}{1+\rho_{ins}}$
6. 6.
The final fraction of infected nodes which have not shown infection evidence
will be
$e_{n_{j}}=\dfrac{ne_{n_{j}}}{n+n\rho_{ins}}=\dfrac{e_{n_{j}}}{1+\rho_{ins}},\text{
and}$
7. 7.
The final fraction of infected nodes which have shown infection evidence will
be
$e_{y_{j}}=\dfrac{ne_{y_{j}}}{n+n\rho_{ins}}=\dfrac{e_{y_{j}}}{1+\rho_{ins}}.$
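The renormalisations at the end of lines 13 and 14 can be collected into a single sketch (the function name and the dictionary layout are ours; the per-group formulas follow the text above):

```python
# A sketch (names are ours) of the per-group renormalisation: deletion divides
# every fraction by the same factor (with the infected fractions also losing
# the deleted mass from the numerator), and insertion divides everything by
# (1 + rho_ins).

def renormalise(fracs, rho_del, rho_ins):
    """fracs: per-group fractions with keys h, r, u, e_n, e_y."""
    e_n, e_y = fracs["e_n"], fracs["e_y"]
    del_factor = 1.0 - rho_del * (e_n + e_y)           # end of line 13
    out = {key: val / del_factor for key, val in fracs.items()}
    # the infected fractions also lose the deleted mass in the numerator
    out["e_n"] = (e_n - rho_del * e_n) / del_factor
    out["e_y"] = (e_y - rho_del * e_y) / del_factor
    # end of line 14: insertion rescales every fraction by 1/(1 + rho_ins)
    return {key: val / (1.0 + rho_ins) for key, val in out.items()}
```

With $\rho_{ins}=0$, the rescaled fractions still sum to one, since the deleted infected mass and the shrunken denominator cancel exactly.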
Now we discuss what happens when the nodes leave their upgrade or repair
status. If $time\geq\tau_{r}+1$, then
1. 1.
The final fraction of healthy nodes not having repair or upgrade status will
be
$h_{j}=h_{j}+r_{{time-\tau_{r}}_{j}}.$
2. 2.
The final fraction of nodes with repair status will be
$r_{j}=r_{j}-r_{{time-\tau_{r}}_{j}}.$
If $time\geq\tau_{u}+1$, then
1. 1.
The final fraction of healthy nodes not having repair or upgrade status will
be
$h_{j}=h_{j}+u_{{time-\tau_{u}}_{j}}.$
2. 2.
The final fraction of nodes with upgrade status will be
$u_{j}=u_{j}-u_{{time-\tau_{u}}_{j}}.$
### 8.1 Test case
We demonstrate the working of Algorithm 4 on a small network of 10 initial
nodes. After that, in Section 9, we discuss the possible variations by which
this model can be used to study the several networks, with more focus on the
human and biological networks. We initialized the global variables to the
following values, as described in Table 9.
$\mathcal{N}_{s}$ | .2
---|---
$\mathcal{N}_{e}$ | .1
$\tau_{r}$ | 60
$\tau_{u}$ | 60
$\mathcal{N}_{o}$ | .05
$\mathcal{N}_{r}$ | .08
$\mathcal{N}_{u}$ | .15
$\rho_{del}$ | 0.02
$\rho_{ins}$ | 0.005
Table 9: Input values of the variables of sample.
In the graph $G$ of order 10, in which $v.\rho_{c}$ was 0.1 for 4 nodes and
0.2 for 6 nodes, we ran the algorithm 10,000 times. Each
iteration was run until all the nodes were reported not infected after the
onset of the initial infection in the network. We received the output
described in Table 10. Here, we only focus on the average time to
disinfect, taken over all iterations.
###### Definition 4.
Time to Disinfect. Given an input graph and the infection and disinfection
processes running on it, the time to disinfect is the difference between the
time-step number in which $infection\\_started$ was set to $true$ (line 3,
Algorithm 4) and the time-step number in which IsInfected($v$) is $false$ for
each node $v$, that is, when all nodes in the network are disinfected after
the onset of infection (line 5, Algorithm 4).
Average total time-steps | 34.9148
---|---
Average time-steps to disinfect | 28.403
New nodes added (average) | 1.9348
Infected nodes removed (average) | 1.0633
Infection start time (average) | 6.5118
Table 10: Output of sample.
## 9 Discussion
Several or all of the procedures of this algorithm can be transformed from
probabilistic to deterministic algorithmic procedures (for example, changing
how we spread infection, or how we upgrade a node) and used to study the
systems under those modified constraints.
In Table 1, we discussed some variables which decide the removal of “worn-out”
nodes from, or the insertion of new nodes into, a network. This may represent
the removal of an infected node from the network, or the insertion of a new
node into it. This applies to more general and real-time systems, as nodes can
be removed from a network, or new nodes can be inserted into it, for several
administrative or cost-related reasons. If no modification to the number of
nodes is desired, then both $\rho_{ins}$ and $\rho_{del}$ can be set to zero.
Clearly, temporal graphs are a subclass of this graph class where both
$\rho_{ins}$ and $\rho_{del}$ are zero.
As discussed in Table 3, the notion associated with $\tau_{r}$
(respectively, $\tau_{u}$) can be changed to a probability of a node being
immune to infection after a repair (respectively, upgrade). In our model, when
a node is repaired (respectively, upgraded), it becomes vulnerable again to
infection, spread from within the network or introduced from outside it, after
$\tau_{r}$ (respectively, $\tau_{u}$) time-steps. The notion of the
probability of an infection in a node getting reported is based on the fact
that a node does not necessarily reach a faulty state immediately when a fault
is inserted. The notion of the probability of repair reflects the average
number of time-steps that a node takes to get repaired. The notion that a
non-infected node is upgraded with some probability reflects the fraction of
nodes that are upgraded on average in a single time-step.
Several modifications of this model can be studied. For modelling human social
networks, a group of, say, $k$ people may meet frequently, so the edge
probability per node can be high within that cluster of $k$ nodes; on
the other hand, if two nodes belong to different clusters, this probability
reduces. If the clusters are located far apart, then this probability can
reduce further, with the edge probability being in some inverse proportion to
some exponent of the distance. This is similar for social networks in other
biological ecosystems. In the currently proposed model, the edge probability of
a node $v$ has uniform impact on all possible edges containing $v$ in the
network $G$. Further, the probability of repair may be increased in a
supercluster of nodes (a cluster of clusters residing locally with respect to
each other) where the percentage of infection is greater; we have, on the
other hand, used a single value over all the nodes. Another very obvious
and desirable modification is studying several algorithmic strategies for a
combination of burning and firefighting on several graph classes, instead of
choosing nodes based on a probability. Such variations can be used to study
the nature of the spread of infection along with an optimal vaccination
strategy from the perspective of a human social network, or other complex
biological or artificial networks.
More generally, the class of graphs that we study is an implementation of the
graph class defined as follows.
###### Definition 5.
Altered Temporal Graphs with Generalization. To formally define the class of
graphs that we study in this article, we first define a set of nodes $V$ and a
set of edges $E$ such that $\forall~{}u,v\in V,\\{u,v\\}\in E$. A graph $G$
falling in this graph class is defined as follows: $G=(V_{1}$, $V_{2}$, $...$,
$V_{\ell}$, $E_{1}$, $E_{2}$, $...$, $E_{\ell})$ such that $\forall~{}i:1\leq
i\leq\ell,V_{i}\subseteq V\land E_{i}\subseteq E\land G_{i}=(V_{i},E_{i})$ is
an instance of $G$.
The class of graphs defined in Definition 5 is an extension of the class of
temporal graphs as defined in the literature. We can better understand this
class of graphs as follows. Let $G$ be a graph falling in this class. $G$ may
or may not have a well-defined set of vertices $V(G)$, where $V(G)=V_{1}\cup
V_{2}\cup...\cup V_{\ell}$. But by the time an algorithm such as Algorithm 4
terminates, we obtain a well-defined set of vertices $V(G)$ such that each
$v\in G$ is also an element of some $V_{i}$ such that
$G_{i}=(V_{i},E_{i})$ is an instance of $G$. For further formal
considerations, we assume that we have a defined set $V(G)$. The sets
$V_{i}:1\leq i\leq\ell$ may be decided by an algorithm (Algorithm 4 in our
case), or may be provided as part of the input. Similarly, the sets
$E_{i}:1\leq i\leq\ell$ may be decided by an algorithm (in our case,
Instance($G$) assigns edges to $V_{i}$ for Algorithm 4), or may be
provided as part of the input. At a time-step $i$, a set
$E_{i}\subseteq V(G)\times V(G)$ is defined such that no vertex in
$V(G)\setminus V_{i}$ takes part in forming an edge at that time-step. Such a
definition has practical applications, as it allows the system the flexibility
to restrict some or none of the nodes in $V(G)$ from taking part in any
communication at some time-step; we have already demonstrated a few related
examples earlier in this section.
## 10 Related work
In the literature, the trend of the spread of a virus through biological
systems is studied using several efficient models. In addition to this, spread
of a virus through hosts (computational systems), spread of a meme, or other
contagion in networks is studied or opined [4].
Graph burning was first studied in [6] with the notion that only one new fire
source is initiated in each time-step, and in each time-step the fire spreads
as well. Graph burning has been shown to be NP-Complete [4, 22, 23] and has
been studied several times [5, 6, 7, 17, 21, 26, 27, 29, 35]. Graph burning
where more than one (but a constant number of) nodes are burned in each
time-step has been studied in [33].
The notion of the firefighter problem was first described in [24]. This
problem was also discussed on static graphs and with a model in which one
firefighter is placed in each time-step. The firefighting problem is NP-
Complete [18, 20, 30] and has been studied several times in the literature
[2, 3, 8, 11, 12, 13, 14, 19, 25].
In addition, the concept of temporal graphs is significant for this
article, and has been studied intensively. Temporal graphs were first
discussed in [28]. Since then, several works have been done on temporal graphs
[9, 10, 15, 16, 31, 32, 34].
This article introduces the notion of firefighting with multiple firefighters
being placed on nodes in a single time-step. This article introduces the
notions of variable firefighting and variable burning. Also, this article
introduces a model in which burning and firefighting are analyzed together,
being used against each other. We allow firefighting to save infected
nodes, which represents “repair” of an infected node, along with firefighting
of uninfected nodes, which represents “upgrade” of healthy nodes. This makes
graph burning [6] and $w$-burning [33] special cases of this model where a
constant number of nodes are burned from outside. This article also introduces
a graph class, as defined in Definition 5, which is an extension of temporal
graphs in which the number of nodes is also variable: nodes can be both added
and removed. The model that we presented in this article is an implementation
of this graph class. Temporal graphs are a special case of this model, where
(a) $\rho_{del}=0$, and (b) $\rho_{ins}=0$. This also makes static graphs a
special case of this model, where (a) $\rho_{del}=0$, (b) $\rho_{ins}=0$, and
(c) the probability of existence of each edge is either 1 or 0. From the
perspective of the class of graphs defined in Definition 5, we can obtain
temporal graphs by setting $\forall~{}i,j:1\leq i,j\leq\ell,V_{i}=V_{j}$, and
we can obtain static graphs by setting $\forall~{}i,j:1\leq
i,j\leq\ell,V_{i}=V_{j}\land E_{i}=E_{j}$.
## 11 Conclusion
The model that we present in this article may be viewed as a complex fusion of
graph burning (introduced in [6]) and firefighting (introduced in [24]) on a
variation of temporal graphs (introduced in [28]), where each
instance $G^{\prime}$ of the underlying graph $G$ is a probabilistic graph:
in addition to having probabilities on every edge, we introduce a probability
on the insertion of new nodes into the underlying graph $G$, as well as on the
deletion of nodes once they get infected. This is one of
the variations of the graph class defined in Definition 5.
The enforcement of the probabilities in our model, however, remains simple and
is based on the uniform probability distribution. Several other, more complex
probability distributions can be enforced based on the nature of the system
being simulated.
This model can be further used to study different (heuristic) algorithmic
strategies for quantifying the repairs and upgrades (firefighting)
required, as well as to test strategies for the spread of infection among the
nodes (burning), independently, or (burning and firefighting) against each
other. In this article, we presented a system in which the notions of burning
and firefighting have been modelled against each other. In the application
system on which we have demonstrated our model, we prefer firefighting to “win”
over burning; that is, the desired trace property in this example is
“eventually, for each node $v$ in the network $G$, $\lnot\ v.i_{s}$ holds
$true$”. Such preferences may change based on the system being studied.
This model can also be applied to larger systems such as cellular networks,
molecular networks, human social networks or other biological or ecological
systems, to study several aspects of the model, including its reliability and
robustness on general networks.
We have also established an introductory theory of a more diverse class of
graphs. The class of graphs defined in Definition 5 is an advancement of
the existing definition of the class of temporal graphs; it allows a larger
set of systems to be represented formally and modelled theoretically. The
model that we have presented in this article works on these graphs: static
graphs or temporal graphs would be insufficient for it. We call this
model bvfa, which stands for variable burning versus firefighting on Altered
Temporal Graphs with Generalization.
## References
* [1] Bowen Alpern and Fred B. Schneider. Defining liveness. Information Processing Letters, 21(4):181 – 185, 1985.
* [2] Cristina Bazgan, Morgan Chopin, Marek Cygan, Michael R. Fellows, Fedor V. Fomin, and Erik Jan van Leeuwen. Parameterized complexity of firefighting. Journal of Computer and System Sciences, 80(7):1285 – 1297, 2014.
* [3] Cristina Bazgan, Morgan Chopin, and Michael R. Fellows. Parameterized complexity of the firefighter problem. In Takao Asano, Shin-ichi Nakano, Yoshio Okamoto, and Osamu Watanabe, editors, Algorithms and Computation, pages 643–652, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg.
* [4] Stéphane Bessy, Anthony Bonato, Jeannette Janssen, Dieter Rautenbach, and Elham Roshanbin. Burning a graph is hard. Discrete Applied Mathematics, 232:73 – 87, 2017.
* [5] Anthony Bonato. A survey of graph burning, 2020.
* [6] Anthony Bonato, Jeannette Janssen, and Elham Roshanbin. Burning a graph as a model of social contagion. In In: Bonato A., Graham F., Prałat P. (eds) Algorithms and Models for the Web Graph. WAW 2014. Lecture Notes in Computer Science., pages 13–22. Springer International Publishing, 2014.
* [7] Anthony Bonato and Thomas Lidbetter. Bounds on the burning numbers of spiders and path-forests. Theoretical Computer Science, 794:12 – 19, 2019. Special Issue on Theory and Applications of Graph Searching.
* [8] Leizhen Cai, Yongxi Cheng, Elad Verbin, and Yuan Zhou. Surviving rates of graphs with bounded treewidth for the firefighter problem. SIAM Journal on Discrete Mathematics, 24(4):1322–1335, 2010.
* [9] Arnaud Casteigts, Ralf Klasing, Yessin M Neggaz, and Joseph Peters. Computing Parameters of Sequence-Based Dynamic Graphs. Theory of Computing Systems, 63(3):394–417, April 2019.
* [10] Jiehua Chen, Hendrik Molter, Manuel Sorge, and Ondrej Suchý. Cluster Editing in Multi-Layer and Temporal Graphs. In Wen-Lian Hsu, Der-Tsai Lee, and Chung-Shou Liao, editors, 29th International Symposium on Algorithms and Computation (ISAAC 2018), volume 123 of Leibniz International Proceedings in Informatics (LIPIcs), pages 24:1–24:13, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* [11] Xujin Chen, Xiaodong Hu, Changjun Wang, and Ying Zhang. Continuous firefighting on infinite square grids. In T.V. Gopal, Gerhard Jäger, and Silvia Steila, editors, Theory and Applications of Models of Computation, pages 158–171, Cham, 2017\. Springer International Publishing.
* [12] Pierre Coupechoux, Marc Demange, David Ellison, and Bertrand Jouve. Firefighting on trees. Theoretical Computer Science, 794:69 – 84, 2019. Special Issue on Theory and Applications of Graph Searching.
* [13] Marek Cygan, Fedor V. Fomin, and Erik Jan van Leeuwen. Parameterized complexity of firefighting revisited. In Dániel Marx and Peter Rossmanith, editors, Parameterized and Exact Computation, pages 13–26, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg.
* [14] Bireswar Das, Murali Krishna Enduri, Masashi Kiyomi, Neeldhara Misra, Yota Otachi, I. Vinod Reddy, and Shunya Yoshimura. On structural parameterizations of firefighting. Theoretical Computer Science, 782:79 – 90, 2019.
* [15] Argyrios Deligkas and Igor Potapov. Optimizing reachability sets in temporal graphs by delaying. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06):9810–9817, Apr. 2020.
* [16] Thomas Erlebach, Michael Hoffmann, and Frank Kammer. On temporal graph exploration. In Magnús M. Halldórsson, Kazuo Iwama, Naoki Kobayashi, and Bettina Speckmann, editors, Automata, Languages, and Programming, pages 444–455, Berlin, Heidelberg, 2015. Springer Berlin Heidelberg.
* [17] Zahra Rezai Farokh, Maryam Tahmasbi, Zahra Haj Rajab Ali Tehrani, and Yousof Buali. New heuristics for burning graphs, 2020.
* [18] Stephen Finbow, Andrew King, Gary MacGillivray, and Romeo Rizzi. The firefighter problem for graphs of maximum degree three. Discrete Mathematics, 307(16):2094 – 2105, 2007. EuroComb ’03 - Graphs and Algorithms.
* [19] Stephen Finbow and Gary Macgillivray. The firefighter problem: A survey of results, directions and questions. The Australasian Journal of Combinatorics [electronic only], 43, 02 2009.
* [20] Fedor V. Fomin, Pinar Heggernes, and Erik Jan van Leeuwen. The firefighter problem on graph classes. Theoretical Computer Science, 613:38 – 50, 2016.
* [21] Rahul Kumar Gautam, Anjeneya Swami Kare, and S. Durga Bhavani. Faster heuristics for graph burning, 2020.
* [22] Arya Tanmay Gupta. Burning geometric graphs. Master's thesis, Indian Institute of Information Technology Vadodara, India, July 2020.
* [23] Arya Tanmay Gupta, Swapnil A. Lokhande, and Kaushik Mondal. Burning grids and intervals. In Apurva Mudgal and C. R. Subramanian, editors, Algorithms and Discrete Applied Mathematics, pages 66–79, Cham, 2021. Springer International Publishing.
* [24] B. Hartnell. Firefighter! an application of domination. The 24th Manitoba Conference on Combinatorial Mathematics and Computing, University of Manitoba, Winnipeg, Canada, 1995.
* [25] Bert Hartnell and Qiyan Li. Firefighting on trees: How bad is the greedy algorithm? Congressus Numerantium, 145, 01 2000.
* [26] Shahin Kamali, Avery Miller, and Kenny Zhang. Burning two worlds. In Alexander Chatzigeorgiou, Riccardo Dondi, Herodotos Herodotou, Christos Kapoutsis, Yannis Manolopoulos, George A. Papadopoulos, and Florian Sikora, editors, SOFSEM 2020: Theory and Practice of Computer Science, pages 113–124, Cham, 2020. Springer International Publishing.
* [27] Anjeneya Swami Kare and I. Vinod Reddy. Parameterized algorithms for graph burning problem. In Charles J. Colbourn, Roberto Grossi, and Nadia Pisanti, editors, Combinatorial Algorithms, pages 304–314, Cham, 2019. Springer International Publishing.
* [28] David Kempe, Jon Kleinberg, and Amit Kumar. Connectivity and inference problems for temporal networks. Journal of Computer and System Sciences, 64(4):820 – 842, 2002.
* [29] Huiqing Liu, Ruiting Zhang, and Xiaolan Hu. Burning number of theta graphs. Applied Mathematics and Computation, 361:246 – 257, 2019.
* [30] Gary MacGillivray and Ping Wang. On the firefighter problem. JCMCC. The Journal of Combinatorial Mathematics and Combinatorial Computing, 47, 01 2003.
* [31] Othon Michail. An introduction to temporal graphs: An algorithmic perspective. Internet Mathematics, 12(4):239–280, 2016.
* [32] Othon Michail and Paul G. Spirakis. Traveling salesman problems in temporal graphs. Theoretical Computer Science, 634:1 – 23, 2016.
* [33] Debajyoti Mondal, N. Parthiabn, V. Kavitha, and Indra Rajasingh. Apx-hardness and approximation for the k-burning number problem, 2020\.
* [34] Philipp Zschoche, Till Fluschnik, Hendrik Molter, and Rolf Niedermeier. The Complexity of Finding Small Separators in Temporal Graphs. In Igor Potapov, Paul Spirakis, and James Worrell, editors, 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018), volume 117 of Leibniz International Proceedings in Informatics (LIPIcs), pages 45:1–45:17, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* [35] Marek Šimon, Ladislav Huraj, Iveta Dirgová Luptáková, and Jiří Pospíchal. Heuristics for spreading alarm throughout a network. Applied Sciences, 9(16), 2019.
|
# The Golden Angle is not Constructible
Pedro J. Freitas
(December 14, 2020)
The golden mean is usually defined in relation to a line segment. A line
segment is said to be divided according to the golden ratio if it is
decomposed into two segments, with lengths $a>b$, satisfying
$\frac{a+b}{a}=\frac{a}{b}$ (1)
If this happens, the value of these two ratios is the golden number,
$\varphi=(1+\sqrt{5})/2\approx 1.618034$. So,
$a=\frac{1}{\varphi}(a+b)\qquad
b=(a+b)-a=\left(1-\frac{1}{\varphi}\right)(a+b)$
The same can be done with a circle, instead of a line segment. A circle is
divided into two arcs $\alpha$ and $\beta$ according to the golden ratio if
they satisfy equation (1), which leads to
$\alpha=\frac{1}{\varphi}2\pi\qquad\beta=\left(1-\frac{1}{\varphi}\right)2\pi$ (2)
The smaller angle $\beta$ is called the golden angle and has some connections
to plant growth and phyllotaxis, see [Th, ch. 14]. Its measure in degrees,
rounded to two decimal places, is $137.51^{\rm o}$.
Figure 1. The golden angle ($137.51^{\rm o}$)
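The value above is easy to verify numerically; a minimal Python check of equation (2) (variable names are ours):

```python
import math

# Golden number: phi = (1 + sqrt(5)) / 2
phi = (1 + math.sqrt(5)) / 2

# Arcs of a circle divided in the golden ratio, equation (2)
alpha = 2 * math.pi / phi              # larger arc
beta = (1 - 1 / phi) * 2 * math.pi     # smaller arc: the golden angle

# The golden angle in degrees, rounded to two decimal places
beta_deg = math.degrees(beta)
print(round(beta_deg, 2))  # 137.51
```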
In this note we prove that the golden angle is not constructible with
straightedge and compass, by proving that its sine and cosine are
transcendental numbers. Since all constructible numbers have to be algebraic,
this is enough to prove what we want.
We recall that the algebraic numbers form a subfield of
$\mathord{\mathbb{C}}$, which is closed for taking $n$-th roots.
###### Lemma 1.
Given $x\in\mathord{\mathbb{R}}$, we have that $\sin x$ and $\cos x$ are
either both algebraic or both transcendental.
Moreover, the number $e^{ix}$ is transcendental iff either $\cos x$ or $\sin
x$ is transcendental (in which case, both are).
###### Proof.
If both $\sin x$ and $\cos x$ are transcendental, then the first statement is
true. If one of them is algebraic, say $\sin x$, then $\cos
x=\pm\sqrt{1-\sin^{2}x}$ is also algebraic.
For the second statement, if $z=e^{ix}$ is algebraic, then $\cos
x=(z+\bar{z})/2$ is also algebraic, and similarly for $\sin x$. Conversely, if
both $\cos x$ and $\sin x$ are algebraic, then $e^{ix}=\cos x+i\sin x$ is
algebraic. ∎
We now make use of the Gelfond-Schneider theorem, which is a very powerful
tool to generate transcendental numbers (one could also use the
Lindemann–Weierstrass theorem).
###### Theorem 2 (Gelfond-Schneider).
Let $a$ and $b$ be algebraic numbers, such that $a\notin\{0,1\}$ and
$b\in\mathord{\mathbb{C}}\setminus\mathord{\mathbb{Q}}$. Then $a^{b}$ is a
transcendental number.
See [La, p. 868] as a reference. This theorem solved part of Hilbert’s seventh
problem, on the irrationality and transcendence of certain numbers.
Now consider the angles $\alpha$ and $\beta$ in equation (2). We wish to prove
that the golden angle has transcendental sine and cosine.
###### Proposition 3.
The golden angle has transcendental sine and cosine, and therefore it is not
constructible with straightedge and compass.
###### Proof.
Since $\beta=2\pi-\alpha$ has the same cosine as $\alpha$ and opposite sine,
it suffices to prove that $\alpha=2\pi/\varphi$ has transcendental sine and
cosine. For this we prove that $z=e^{i\alpha}=e^{2i\pi/\varphi}$ is
transcendental, which is equivalent, according to the lemma.
If $z$ were algebraic, then, since $z\notin\{0,1\}$ and $\varphi$ is an
algebraic irrational, the Gelfond-Schneider theorem would make
$z^{\varphi}=e^{2\pi i}=1$ transcendental, which is false. Therefore,
$e^{2i\pi/\varphi}$ is transcendental. ∎
This proves the non-constructibility of the golden angle. Nevertheless, it is
possible to achieve very good approximations, using straightedge and compass.
Portuguese artist Almada Negreiros (1893–1970) devoted several years to
finding geometric constructions which related to his own analysis of artistic
artefacts (see [FC] for more information). One of his discoveries was
precisely an approximate construction for the golden angle, presented in
figure 2, based on the regular pentagram, which is constructible.
Figure 2. An approximate construction for the golden angle (points $A$, $B$, $C$ and segments $a$, $b$ on the regular pentagram)
In this drawing, point $C$ is obtained through an arc of circle centred at
$A$. The golden angle is approximated by circle arc $BC$. To compute its exact
measure, one only needs to notice three facts—the two first ones are known
properties of the regular pentagon.
* •
The segments marked $a$ and $b$ are proportioned according to the golden
number: $a/b=\varphi$.
* •
Length $a$ coincides with the side of the pentagon, that is, it is the chord
of $2\pi/5$.
* •
Arc $AC$ has chord $b$.
To compute the value of arc $BC$, it is useful to use the chord as a
trigonometric function of the angle.
Figure 3. The chord function of an angle $x$
Figure 3 helps to deduce its expression, as well as that of its inverse
function:
$\mathop{\text{chr}}\nolimits(x)=2\sin\frac{x}{2}\qquad\mathop{\text{arcchr}}\nolimits(x)=2\arcsin\frac{x}{2}$
Using the three facts mentioned above, we get:
$a=2\sin\frac{\pi}{5}\qquad b=\frac{2}{\varphi}\sin\frac{\pi}{5}$
$AC=\mathop{\text{arcchr}}\nolimits
b=2\arcsin\left(\frac{1}{\varphi}\sin\frac{\pi}{5}\right)$
From this, we get that the measure of arc $BC$, in degrees, rounded to two
decimal places, is $137.40^{\rm o}$. This represents an error of 0.08% with
respect to the golden angle.
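The computation of arc $BC$ can also be checked numerically. A small Python sketch; note that the relation arc $BC=\pi-{}$arc $AC$ is our reading of figure 2 (it is not stated explicitly in the text), and the helper names are ours:

```python
import math

phi = (1 + math.sqrt(5)) / 2

def chord(x):
    """chr(x) = 2 sin(x/2)."""
    return 2 * math.sin(x / 2)

def arcchord(x):
    """arcchr(x) = 2 arcsin(x/2), the inverse of chord."""
    return 2 * math.asin(x / 2)

a = chord(2 * math.pi / 5)       # side of the regular pentagon
b = a / phi                      # a/b = phi
AC = arcchord(b)                 # arc whose chord is b

# Assumed relation from figure 2: arc BC = pi - arc AC
BC_deg = math.degrees(math.pi - AC)
golden_deg = math.degrees((1 - 1 / phi) * 2 * math.pi)

print(round(BC_deg, 2))                                     # 137.4
print(round(100 * (golden_deg - BC_deg) / golden_deg, 2))   # 0.08 (percent)
```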
## References
* [FC] Pedro J. Freitas and Simão Palmeirim Costa, “Almada Negreiros and the Geometric Canon,” Journal of Mathematics and the Arts, vol. 9, nos. 1–2 (2015).
* [La] Serge Lang, Algebra – Revised Third Edition, vol. 1, Springer, 2002.
* [Th] D’Arcy Wentworth Thompson, On Growth and Form, Cambridge University Press, 1942.
# T-Quadratic Forms and Spectral Analysis of T-Symmetric Tensors
Liqun Qi and Xinzhen Zhang Department of Applied Mathematics, The Hong Kong
Polytechnic University, Hung Hom, Kowloon, Hong Kong, China;
(liqun.qi@polyu.edu.hk).School of Mathematics, Tianjin University, Tianjin
300354 China; (xzzhang@tju.edu.cn). This author’s work was supported by NSFC
(Grant No. 11871369).
###### Abstract
An $n\times n\times p$ tensor is called a T-square tensor. It arises from many
applications, such as the image feature extraction problem and the multi-view
clustering problem. We may symmetrize a T-square tensor to a T-symmetric
tensor. For each T-square tensor, we define a T-quadratic form, whose variable
is an $n\times p$ matrix, and whose value is a $p$-dimensional vector. We
define eigentuples and eigenmatrices for T-square tensors. We show that a
T-symmetric tensor has unique largest and smallest eigentuples, and a
T-quadratic form is positive semi-definite (definite) if and only if its
smallest eigentuple is nonnegative (positive). The relation between the eigen-
decomposition of T-symmetric tensors, and the TSVD of general third order
tensors are also studied.
Key words. T-square tensors, T-symmetric tensors, T-quadratic forms,
eigentuples.
AMS subject classifications. 15A69, 15A18
## 1 Introduction
We call a third order tensor ${\mathcal{A}}\in\Re^{n\times n\times p}$ a
T-square tensor. It was called an f-square tensor in [6]. The representation
tensor $\mathcal{Z}\in\Re^{n\times n\times p}$ arising in the multi-view
clustering problem [2] and the multi-view image feature extraction problem [9,
10] is a T-square tensor. Here $n$ is the number of the samples in the
database, $p$ is the number of the views.
Suppose that ${\mathcal{A}}\in\Re^{n\times n\times p}$ is a T-square tensor.
Let $X\in\Re^{n\times p}$. We may regard $X$ as a tensor
${\mathcal{X}}\in\Re^{n\times 1\times p}$. Define
$F_{\mathcal{A}}(X):={\mathcal{X}}^{\top}*{\mathcal{A}}*{\mathcal{X}},$ (1.1)
where $*$ is the T-product operation introduced in [1, 3, 4], and $\top$ is
the transpose operation in the T-product sense. In the next section, we will
review the definition of T-product and its transpose concept. We call
$F_{\mathcal{A}}$ the T-quadratic form defined by ${\mathcal{A}}$. Then for
any $X\in\Re^{n\times p}$, $F_{\mathcal{A}}(X)\in\Re^{p}$. If
$F_{\mathcal{A}}(X)\geq{\bf 0}$ for any $X\in\Re^{n\times p}$, then we say
that the T-quadratic form $F_{\mathcal{A}}$ is T-positive semi-definite. If
$F_{\mathcal{A}}(X)>{\bf 0}$ for any $X\in\Re^{n\times p}$, then we say that
the T-quadratic form $F_{\mathcal{A}}$ is T-positive definite.
The T-positive semidefiniteness (definiteness) concept here is different from
the T-positive semidefiniteness (definiteness) concept discussed in [12]. The
T-positive semidefiniteness (definiteness) concept in [12] is in the sense of
nonnegative (positive) scalars. Here, the concept is in the sense of
nonnegative (positive) vectors. Thus, the T-positive semidefiniteness
(definiteness) concept here is stronger and may reflect more correlative
properties of T-square tensors.
The T-product operation, TSVD decomposition and tubal ranks were introduced by
Kilmer and her collaborators in [1, 3, 4]. It is now widely used in
engineering [2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. In [1], Bradman
defined real eigentuples and eigenmatrices for third order tensors in
$\Re^{n\times n\times n}$. Viewing the wide applications of T-product, TSVD
decomposition and tubal ranks, the theory of eigentuples and eigenmatrices
deserves to be further studied.
In this paper, we extend the concepts of eigentuples and eigenmatrices to
T-square tensors and allow complex eigentuples and eigenmatrices. We show that
an $n\times n\times p$ T-symmetric tensor has unique largest eigentuple ${\bf
s}_{1}\in\Re^{p}$ and unique smallest eigentuple ${\bf s}_{n}\in\Re^{p}$ such
that any real eigentuple ${\bf s}$ of ${\mathcal{A}}$ satisfies ${\bf
s}_{1}\geq{\bf s}\geq{\bf s}_{n}$. We further show that a T-quadratic form is
positive semidefinite (definite) if and only if the smallest eigentuple of the
corresponding T-symmetric tensor is nonnegative (positive).
The T-quadratic function $F_{\mathcal{A}}$ maps $\Re^{n\times p}$ to
$\Re^{p}$. Its positive semidefiniteness (definiteness) requires $p$ quadratic
polynomials of $np$ variables to be nonnegative (positive) simultaneously. We
present its spectral conditions. This theory is novel.
We then further study the relation between the eigen-decomposition of
T-symmetric tensors, and the TSVD of general third order tensors.
The rest of this paper is organized as follows. We deliver some preliminary
knowledge of T-product operations in the next section. In Section 3, we define
eigentuples and eigenmatrices for a T-square tensor, and show the existence of
the largest and the smallest eigentuples of a T-symmetric tensor. In Section
4, we prove that a T-symmetric tensor is positive semidefinite (definite) if
and only if its smallest eigentuple is nonnegative (positive). We study the
relation between the eigen-decomposition of T-symmetric tensors, and the TSVD
of general third order tensors in Section 5.
## 2 Preliminaries
Let ${\bf a}=(a_{1},a_{2},\cdots,a_{p})^{\top}\in\mathbb{C}^{p}$. Then
${\rm circ}({\bf a}):=\left(\begin{matrix}a_{1}&a_{p}&a_{p-1}&\cdots&a_{2}\\ a_{2}&a_{1}&a_{p}&\cdots&a_{3}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ a_{p}&a_{p-1}&a_{p-2}&\cdots&a_{1}\end{matrix}\right),$
and ${\rm circ}^{-1}({\rm circ}({\bf a})):={\bf a}$.
Suppose that ${\bf a},{\bf b}\in\mathbb{C}^{p}$. Define
${\bf a}\odot{\bf b}={\rm circ}({\bf a}){\bf b}.$
In [3], ${\bf a},{\bf b}\in\Re^{p}$ are called tubal scalars. Here, we extend
them to $\mathbb{C}^{p}$.
Since ${\rm circ}({\bf a}){\bf b}$ is the circular convolution of ${\bf a}$
and ${\bf b}$, the operation $\odot$ is commutative: ${\bf a}\odot{\bf b}={\bf b}\odot{\bf a}$
(see also Proposition 2.1). We denote
${\bf a}^{\odot 2}:={\bf a}\odot{\bf a}.$
If ${\bf a}\in\Re^{p}$ is nonnegative, then ${\bf a}^{\odot 2}$ is also
nonnegative. However, if ${\bf b}\in\Re^{p}$ is nonnegative, there may be no
${\bf a}\in\Re^{p}$ such that ${\bf a}^{\odot 2}={\bf b}$. For example, let
$p=2$, ${\bf a}=(a_{1},a_{2})^{\top}$, ${\bf b}=(b_{1},b_{2})^{\top}$ and
${\bf a}^{\odot 2}={\bf b}$. Then we have $b_{1}=a_{1}^{2}+a_{2}^{2}$ and
$b_{2}=2a_{1}a_{2}$. To satisfy these two equations, we must have $b_{1}\geq
b_{2}$. We say that ${\bf b}\in\Re^{p}$ is a square tubal scalar if it is
nonnegative and there is an ${\bf a}\in\Re^{p}$, such that ${\bf a}$ is
nonnegative and ${\bf a}^{\odot 2}={\bf b}$.
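These tubal-scalar operations are easy to experiment with; a minimal Python sketch (pure lists, the helper names are ours), reproducing the $p=2$ example above:

```python
def circ(a):
    """Circulant matrix of a: entry (i, j) is a[(i - j) mod p]."""
    p = len(a)
    return [[a[(i - j) % p] for j in range(p)] for i in range(p)]

def odot(a, b):
    """Tubal product a ⊙ b = circ(a) b, i.e. circular convolution."""
    C = circ(a)
    p = len(a)
    return [sum(C[i][j] * b[j] for j in range(p)) for i in range(p)]

a = [1.0, 2.0]               # a = (a1, a2)
b = odot(a, a)               # a^{⊙2} = (a1^2 + a2^2, 2 a1 a2)
print(b)                     # [5.0, 4.0] -> b1 >= b2, as required
print(odot(a, [3.0, 7.0]) == odot([3.0, 7.0], a))  # True: ⊙ is commutative
```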
For ${\bf a}=(a_{1},\cdots,a_{p})^{\top}\in\Re^{p}$, denote $|{\bf
a}|:=(|a_{1}|,\cdots,|a_{p}|)^{\top}$.
Question 1 Suppose that ${\bf b}\in\Re^{p}$ is a square tubal scalar. Is there
a unique ${\bf a}\in\Re^{p}$, such that ${\bf a}$ is nonnegative and ${\bf
a}^{\odot 2}={\bf b}$?
###### Proposition 2.1
$(\mathbb{C}^{p},+,\odot)$ is a commutative ring with unity ${\bf
e}=(1,0,\cdots,0)^{\top}\in\mathbb{C}^{p}$, where $+$ is the vector addition.
Proposition 2.1 extends Theorem 3.2 of [1] from $\Re^{p}$ to $\mathbb{C}^{p}$,
as we need to consider complex eigentuples for third order real tensors. The
proof is almost the same. Hence, we omit the proof.
Note that the operation $\odot$ is different from vector convolution. For
${\bf a},{\bf b}\in\mathbb{C}^{p}$, the vector convolution of ${\bf a}$ and
${\bf b}$ is in $\mathbb{C}^{2p-1}$.
For $X\in\mathbb{C}^{n\times p}$ and ${\bf a}\in\mathbb{C}^{p}$, define
${\bf a}\circ X=X{\rm circ}({\bf a}).$
###### Proposition 2.2
Let ${\bf a},{\bf b}\in\mathbb{C}^{p}$, and $X,Y\in\mathbb{C}^{n\times p}$.
Then
1\. ${\bf a}\circ(X+Y)={\bf a}\circ X+{\bf a}\circ Y$;
2\. $({\bf a}+{\bf b})\circ X={\bf a}\circ X+{\bf b}\circ X$;
3\. ${\bf a}\circ({\bf b}\circ X)=({\bf a}\odot{\bf b})\circ X$;
4\. Let ${\bf e}=(1,0,\cdots,0)^{\top}\in\mathbb{C}^{p}$ as in Proposition
2.1. Then ${\bf e}\circ X=X$ for all $X\in\mathbb{C}^{n\times p}$.
Furthermore, ${\bf e}$ is the unique element in $\mathbb{C}^{p}$ with this
property.
Proof This proposition extends Theorem 3.5 of [1] from $\mathbb{C}^{p\times
p}$ to $\mathbb{C}^{n\times p}$, except the second half of item 4 is
additional. The proof of the other part except the second half of item 4 is
almost the same as the proof of Theorem 3.5 of [1]. We omit this part and now
prove the second half of item 4. Suppose ${\bf a}\circ X=X$ for all
$X\in\mathbb{C}^{n\times p}$. Then $X{\rm circ}({\bf a})=X$ for all
$X\in\mathbb{C}^{n\times p}$. This implies ${\rm circ}({\bf a})=I_{p}$, the
identity matrix of $\Re^{p\times p}$. Thus, ${\bf a}={\rm circ}^{-1}(I_{p})={\bf e}$. $\Box$
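Item 3 of Proposition 2.2 can be spot-checked numerically; a self-contained Python sketch (pure lists, helper names are ours):

```python
def circ(a):
    """Circulant matrix of a: entry (i, j) is a[(i - j) mod p]."""
    p = len(a)
    return [[a[(i - j) % p] for j in range(p)] for i in range(p)]

def matmul(X, M):
    """Plain matrix product of X (n x p) and M (p x p)."""
    return [[sum(X[i][k] * M[k][j] for k in range(len(M))) for j in range(len(M[0]))]
            for i in range(len(X))]

def odot(a, b):
    """a ⊙ b = circ(a) b."""
    C = circ(a)
    return [sum(C[i][j] * b[j] for j in range(len(b))) for i in range(len(a))]

def tubal_mult(a, X):
    """a ∘ X = X circ(a)."""
    return matmul(X, circ(a))

a, b = [1.0, 2.0, 0.0], [0.0, 1.0, 3.0]
X = [[1.0, 0.0, 2.0], [4.0, 1.0, 0.0]]            # n = 2, p = 3

lhs = tubal_mult(a, tubal_mult(b, X))             # a ∘ (b ∘ X)
rhs = tubal_mult(odot(a, b), X)                   # (a ⊙ b) ∘ X
print(lhs == rhs)  # True: item 3

e = [1.0, 0.0, 0.0]
print(tubal_mult(e, X) == X)  # True: item 4
```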
For a third order tensor ${\mathcal{A}}\in\Re^{m\times n\times p}$, its
frontal slices are denoted as $A^{(1)},\cdots,A^{(p)}\in\Re^{m\times n}$. As
in [1, 3, 4], define
${\rm bcirc}({\mathcal{A}}):=\left(\begin{matrix}A^{(1)}&A^{(p)}&A^{(p-1)}&\cdots&A^{(2)}\\ A^{(2)}&A^{(1)}&A^{(p)}&\cdots&A^{(3)}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ A^{(p)}&A^{(p-1)}&A^{(p-2)}&\cdots&A^{(1)}\end{matrix}\right),$
and ${\rm bcirc}^{-1}({\rm bcirc}({\mathcal{A}})):={\mathcal{A}}$.
Various T-product structured properties of third order tensors are based upon
their block circulant matrix versions. For a third order tensor
${\mathcal{A}}\in\Re^{m\times n\times p}$, its transpose can be defined as
${\mathcal{A}}^{\top}={\rm bcirc}^{-1}[({\rm bcirc}({\mathcal{A}}))^{\top}].$
This will be the same as the definition in [1, 3, 4]. The identity tensor
$\mathcal{I}_{nnp}$ may also be defined as
$\mathcal{I}_{nnp}={\rm bcirc}^{-1}(I_{np}),$
where $I_{np}$ is the identity matrix in $\Re^{np\times np}$.
However, a third order tensor $\mathcal{S}$ in $\Re^{m\times n\times p}$ is
f-diagonal in the sense of [1, 3, 4] if all of its frontal slices
$S^{(1)},\cdots,S^{(p)}$ are diagonal. In this case, bcirc$(\mathcal{S})$ may
not be diagonal.
For a third order tensor ${\mathcal{A}}\in\Re^{m\times n\times p}$, it is
defined [1, 4] that
${\rm unfold}({\mathcal{A}}):=\left(\begin{matrix}A^{(1)}\\ A^{(2)}\\ \vdots\\ A^{(p)}\end{matrix}\right)\in\Re^{mp\times n},$
and ${\rm fold}({\rm unfold}({\mathcal{A}})):={\mathcal{A}}$. For
${\mathcal{A}}\in\Re^{m\times s\times p}$ and $\mathcal{B}\in\Re^{s\times
n\times p}$, the T-product of ${\mathcal{A}}$ and $\mathcal{B}$ is defined as
${\mathcal{A}}*\mathcal{B}:={\rm fold}({\rm bcirc}({\mathcal{A}})\,{\rm unfold}(\mathcal{B}))\in\Re^{m\times n\times p}$.
Then, we see that
${\mathcal{A}}*\mathcal{B}={\rm bcirc}^{-1}({\rm bcirc}({\mathcal{A}}){\rm
bcirc}(\mathcal{B})).$
Thus, the bcirc and ${\rm bcirc}^{-1}$ operations not only form a one-to-one
correspondence between third order tensors and block circulant matrices, but
the product operation is also preserved.
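The identity ${\mathcal{A}}*\mathcal{B}={\rm bcirc}^{-1}({\rm bcirc}({\mathcal{A}}){\rm bcirc}(\mathcal{B}))$ can be checked on small random tensors. A pure-Python sketch (helper names are ours), storing a tensor by its $p$ frontal slices:

```python
import random

def bcirc(A):
    """Block circulant matrix of a tensor given as p frontal slices (each m x n)."""
    p, m, n = len(A), len(A[0]), len(A[0][0])
    return [[A[(I - J) % p][i][j] for J in range(p) for j in range(n)]
            for I in range(p) for i in range(m)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def t_product(A, B):
    """A * B = fold(bcirc(A) unfold(B)), computed slice-wise as a block circular convolution."""
    p, m, s, n = len(A), len(A[0]), len(B[0]), len(B[0][0])
    return [[[sum(A[(k - l) % p][i][r] * B[l][r][j] for l in range(p) for r in range(s))
              for j in range(n)] for i in range(m)] for k in range(p)]

random.seed(0)
p, m, s, n = 3, 2, 2, 2
A = [[[random.randint(-3, 3) for _ in range(s)] for _ in range(m)] for _ in range(p)]
B = [[[random.randint(-3, 3) for _ in range(n)] for _ in range(s)] for _ in range(p)]

# bcirc(A * B) must equal bcirc(A) bcirc(B)
print(bcirc(t_product(A, B)) == matmul(bcirc(A), bcirc(B)))  # True
```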
The Standard Form of a Real f-Diagonal Tensor. Let
$\mathcal{S}=(s_{ijk})\in\Re^{m\times n\times p}$ be an f-diagonal tensor. Let
${\bf s}_{j}=(s_{jj1},s_{jj2},\cdots,s_{jjp})^{\top}$ be the $jj$th tube
of $\mathcal{S}$ for $j=1,\cdots,\min\\{m,n\\}$. We say that $\mathcal{S}$ is
in its standard form if ${\bf s}_{1}\geq{\bf s}_{2}\geq\cdots\geq{\bf
s}_{\min\\{m,n\\}}$.
## 3 Eigentuples and Eigenmatrices of T-Square Tensors
For a matrix $X\in\mathbb{C}^{n\times p}$, let its column vectors be ${\bf
x}^{(1)},\cdots,{\bf x}^{(p)}$. Define
${\rm unfold}(X):=\left(\begin{matrix}{\bf x}^{(1)}\\ {\bf x}^{(2)}\\ \vdots\\ {\bf x}^{(p)}\end{matrix}\right)\in\mathbb{C}^{np},$
and fold$($unfold$(X)):=X$. Then we define the T-product of ${\mathcal{A}}$
and $X$ as
${\mathcal{A}}*X={\rm fold}({\rm bcirc}({\mathcal{A}}){\rm unfold}(X)).$
Thus, ${\mathcal{A}}*X\in\mathbb{C}^{m\times p}$.
We now define eigentuples and eigenmatrices of T-square tensors. Suppose that
${\mathcal{A}}\in\Re^{n\times n\times p}$ is a T-square tensor,
$X\in\mathbb{C}^{n\times p}$ and $X\not=O$, ${\bf d}\in\mathbb{C}^{p}$, and
${\mathcal{A}}*X={\bf d}\circ X.$ (3.2)
Then we call ${\bf d}$ an eigentuple of ${\mathcal{A}}$, and $X$ an
eigenmatrix of ${\mathcal{A}}$, corresponding to the eigentuple ${\bf d}$.
The eigentuple and eigenmatrix concepts extend the eigentuple and eigenmatrix
concepts of [1] from $\Re^{p\times p\times p}$ to $\Re^{n\times n\times p}$
and allow complex eigentuples and eigenmatrices.
We aim to study T-positive semi-definiteness and T-positive definiteness of
the T-quadratic form $F_{\mathcal{A}}$, defined in (1.1). This would not be
easy by using the eigentuples of ${\mathcal{A}}$, as even for real square
matrices, their eigenvalues may not be real. Thus, as in the matrix case, we
symmetrize the T-square tensor ${\mathcal{A}}$.
Let ${\mathcal{A}}\in\Re^{n\times n\times p}$ be a T-square tensor. We say
that ${\mathcal{A}}$ is T-symmetric if ${\mathcal{A}}={\mathcal{A}}^{\top}$.
T-symmetric tensors have been studied in [12]. We have the following
propositions.
###### Proposition 3.1
Suppose that ${\mathcal{A}}\in\Re^{n\times n\times p}$. Then
${\mathcal{A}}+{\mathcal{A}}^{\top}$ is a T-symmetric tensor. Moreover,
${\mathcal{A}}$ is positive semidefinite (definite) if and only if the
T-symmetric tensor ${\mathcal{A}}+{\mathcal{A}}^{\top}$ is positive
semidefinite (definite).
Proof Since
$\left({\mathcal{A}}+{\mathcal{A}}^{\top}\right)^{\top}={\mathcal{A}}^{\top}+\left({\mathcal{A}}^{\top}\right)^{\top}={\mathcal{A}}+{\mathcal{A}}^{\top},$
${\mathcal{A}}+{\mathcal{A}}^{\top}$ is T-symmetric.
For $X\in\Re^{n\times p}$, regard it as a tensor ${\mathcal{X}}\in\Re^{n\times
1\times p}$. We have
$F_{\mathcal{A}}(X)={\mathcal{X}}^{\top}*{\mathcal{A}}*{\mathcal{X}}=({\mathcal{X}}^{\top}*{\mathcal{A}}*{\mathcal{X}})^{\top}={\mathcal{X}}^{\top}*{\mathcal{A}}^{\top}*{\mathcal{X}}={1\over
2}{\mathcal{X}}^{\top}*({\mathcal{A}}+{\mathcal{A}}^{\top})*{\mathcal{X}}.$
Thus, ${\mathcal{A}}$ is positive semidefinite (definite) if and only if the
T-symmetric tensor ${\mathcal{A}}+{\mathcal{A}}^{\top}$ is positive
semidefinite (definite). $\Box$
We thus study the eigentuples of T-symmetric tensors, and use them to analyze
positive semidefiniteness (definiteness) of these tensors.
The following proposition holds obviously.
###### Proposition 3.2
A T-square tensor ${\mathcal{A}}\in\Re^{n\times n\times p}$ is T-symmetric if
and only if bcirc$({\mathcal{A}})$ is symmetric. A T-square tensor
${\mathcal{A}}\in\Re^{n\times n\times p}$ is invertible if and only if
bcirc$({\mathcal{A}})$ is invertible. In this case, we have
${\mathcal{A}}^{-1}={\rm bcirc}^{-1}\left(({\rm bcirc}({\mathcal{A}}))^{-1}\right).$
Furthermore, ${\mathcal{A}}$ is orthogonal in the sense of [1, 3, 4] if and
only if bcirc$({\mathcal{A}})$ is orthogonal.
We have the following theorem.
###### Theorem 3.3
Suppose that ${\mathcal{A}}\in\Re^{n\times n\times p}$ is a T-symmetric
tensor. Then there are orthogonal tensor $\mathcal{U}\in\Re^{n\times n\times
p}$ and T-symmetric f-diagonal tensor $\mathcal{D}\in\Re^{n\times n\times p}$
such that
${\mathcal{A}}=\mathcal{U}*\mathcal{D}*\mathcal{U}^{\top}.$ (3.3)
Let the frontal slices of $\mathcal{D}$ be $D^{(1)},\cdots,D^{(p)}$. If
$\hat{\mathcal{D}}\in\Re^{n\times n\times p}$ is another T-symmetric
f-diagonal tensor, whose frontal slices $\hat{D}^{(1)},\cdots,\hat{D}^{(p)}$
are resulted from switching some diagonal elements of
$D^{(1)},\cdots,D^{(p)}$, then there is an orthogonal tensor
$\hat{\mathcal{U}}\in\Re^{n\times n\times p}$, such that
${\mathcal{A}}=\hat{\mathcal{U}}*\hat{\mathcal{D}}*\hat{\mathcal{U}}^{\top}.$
(3.4)
Proof Block circulant matrices can be block-diagonalized by the normalized
discrete Fourier transform (DFT) matrix, which is unitary. Then, as in
(3.1) of [4], we have
$(F_{p}\otimes I_{n})\cdot{\rm bcirc}({\mathcal{A}})\cdot(F_{p}^{*}\otimes
I_{n})={\rm diag}(D_{1},\cdots,D_{p}),$ (3.5)
where $F_{p}$ is the $p\times p$ DFT matrix, $F_{p}^{*}$ is its conjugate
transpose, $\cdot$ is the standard matrix multiplication, $\otimes$ denotes
the Kronecker product. Since bcirc$({\mathcal{A}})$ is symmetric, by taking
conjugate transpose of (3.5), we see that $D_{1},\cdots,D_{p}$ in (3.1) of [4]
are all Hermitian. Applying the eigen-decomposition of
$D_{i}=U_{i}\Sigma_{i}U_{i}^{\top}$ for $i=1,\cdots,p$, we have
${\rm diag}(D_{1},\cdots,D_{p})={\rm diag}(U_{1},\cdots,U_{p}){\rm
diag}(\Sigma_{1},\cdots,\Sigma_{p}){\rm
diag}(U_{1}^{\top},\cdots,U_{p}^{\top}).$ (3.6)
Apply $(F_{p}^{*}\otimes I_{n})$ on the left and $(F_{p}\otimes I_{n})$ on the
right of each of the block diagonal matrices in (3.6). In each of the three
cases, the triple product is a block circulant matrix. We
have
${\rm bcirc}({\mathcal{A}})={\rm bcirc}(\mathcal{U}){\rm
bcirc}(\mathcal{D}){\rm bcirc}(\mathcal{U}^{\top}).$
This implies (3.3). Then we have
$\mathcal{D}=\mathcal{U}^{\top}*{\mathcal{A}}*\mathcal{U},$
and
$\mathcal{D}^{\top}=\left(\mathcal{U}^{\top}*{\mathcal{A}}*\mathcal{U}\right)^{\top}=\mathcal{U}^{\top}*{\mathcal{A}}^{\top}*\mathcal{U}=\mathcal{U}^{\top}*{\mathcal{A}}*\mathcal{U}=\mathcal{D},$
as ${\mathcal{A}}$ is T-symmetric. Thus, $\mathcal{D}$ is also T-symmetric.
Switching the order of eigenvalues in the eigen-decomposition
$D_{i}=U_{i}\Sigma_{i}U_{i}^{\top}$ for $i=1,\cdots,p$, we have (3.4). $\Box$
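The block diagonalization (3.5) can be illustrated numerically. A minimal Python sketch for the scalar case $n=1$, where bcirc reduces to circ and (3.5) says that the normalized DFT matrix diagonalizes any circulant (helper names are ours):

```python
import cmath

def circ(a):
    """Circulant matrix of a: entry (i, j) is a[(i - j) mod p]."""
    p = len(a)
    return [[a[(i - j) % p] for j in range(p)] for i in range(p)]

def dft_matrix(p):
    """Normalized DFT matrix F_p: F_p[j][k] = exp(-2*pi*i*j*k/p) / sqrt(p)."""
    w = cmath.exp(-2j * cmath.pi / p)
    s = p ** 0.5
    return [[w ** (j * k) / s for k in range(p)] for j in range(p)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def conj_transpose(X):
    return [[X[j][i].conjugate() for j in range(len(X))] for i in range(len(X[0]))]

p = 4
a = [1.0, 2.0, 0.0, -1.0]
F = dft_matrix(p)
M = matmul(matmul(F, circ(a)), conj_transpose(F))   # F_p circ(a) F_p^*

# Off-diagonal entries vanish: circ(a) is diagonalized by the DFT
off = max(abs(M[i][j]) for i in range(p) for j in range(p) if i != j)
print(off < 1e-12)  # True
```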
We call (3.3) a T-eigen-decomposition (TED) of ${\mathcal{A}}$.
###### Corollary 3.4
Suppose that ${\mathcal{A}}\in\Re^{n\times n\times p}$ is a T-symmetric tensor
and (3.3) holds. Denote ${\mathcal{A}}^{*2}={\mathcal{A}}*{\mathcal{A}}$ and
${\mathcal{A}}^{*k}={\mathcal{A}}^{*(k-1)}*{\mathcal{A}}$ for any integer
$k\geq 3$. Then for any positive integer $k$, ${\mathcal{A}}^{*k}$ is still
T-symmetric, and we have
${\mathcal{A}}^{*k}=\mathcal{U}*\mathcal{D}^{*k}*\mathcal{U}^{\top}.$
###### Corollary 3.5
Suppose that ${\mathcal{A}}\in\Re^{n\times n\times p}$ is a T-symmetric tensor
and (3.3) holds. Then ${\mathcal{A}}^{-1}$ exists if and only if
$\mathcal{D}^{-1}$ exists. If they exist, then they are T-symmetric and
${\mathcal{A}}^{-1}=\mathcal{U}*\mathcal{D}^{-1}*\mathcal{U}^{\top}.$
We may rewrite (3.3) as
${\mathcal{A}}*\mathcal{U}=\mathcal{U}*\mathcal{D},$ (3.7)
or
${\rm bcirc}({\mathcal{A}}){\rm bcirc}(\mathcal{U})={\rm
bcirc}(\mathcal{U}){\rm bcirc}(\mathcal{D}).$ (3.8)
Denote the $j$th lateral slice of $\mathcal{U}$ by $U_{j}\in\Re^{n\times p}$
for $j=1,\cdots,n$. Consider the $j$th column of (3.8) for $j=1,\cdots,n$. Let
$\mathcal{D}=(d_{ijk})$. Then $d_{ijk}=0$ if $i\not=j$. Let $d_{11k}\geq
d_{22k}\geq\cdots\geq d_{nnk}$ for $k=1,\cdots,p$. We have
${\mathcal{A}}*U_{j}={\bf d}_{j}\circ U_{j},$ (3.9)
where ${\bf d}_{j}=(d_{jj1},d_{jjp},d_{jj(p-1)},\cdots,d_{jj2})^{\top}$. Since
$\mathcal{U}$ is orthogonal, $U_{j}\not=O$. Thus, ${\bf d}_{j}$ is an
eigentuple of ${\mathcal{A}}$ with an eigenmatrix $U_{j}$.
For a matrix $U\in\Re^{n\times p}$, let its column vectors be ${\bf
u}^{(1)},\cdots,{\bf u}^{(p)}$. Then
$U=({\bf u}^{(1)},{\bf u}^{(2)},\cdots,{\bf u}^{(p)}).$
Denote $U^{[0]}=U$,
$U^{[1]}=({\bf u}^{(p)},{\bf u}^{(1)},\cdots,{\bf u}^{(p-1)}),\qquad U^{[2]}=({\bf u}^{(p-1)},{\bf u}^{(p)},{\bf u}^{(1)},\cdots,{\bf u}^{(p-2)}),\qquad\cdots,\qquad U^{[p-1]}=({\bf u}^{(2)},{\bf u}^{(3)},\cdots,{\bf u}^{(1)}),$
so that $U^{[k]}$ is obtained from $U$ by cyclically shifting its columns to the right $k$ times.
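The matrices $U^{[k]}$ are cyclic right-shifts of the columns of $U$; a tiny Python illustration of this column shift (function name is ours):

```python
def col_shift(U, k):
    """U^[k]: cyclically shift the columns of U (given as a list of rows) right by k."""
    p = len(U[0])
    return [[row[(j - k) % p] for j in range(p)] for row in U]

U = [["u1", "u2", "u3", "u4"]]          # one row, p = 4 columns
print(col_shift(U, 1))  # [['u4', 'u1', 'u2', 'u3']]  = U^[1]
print(col_shift(U, 3))  # [['u2', 'u3', 'u4', 'u1']]  = U^[p-1]
```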
Consider the $(n+j)$th column of (3.8). We have
${\mathcal{A}}*U_{j}^{[1]}={\bf d}_{j}\circ U_{j}^{[1]}.$ (3.10)
Thus, $U_{j}^{[1]}$ is also an eigenmatrix of ${\mathcal{A}}$, associated with
the eigentuple ${\bf d}_{j}$. Similarly, $U_{j}^{[2]},\cdots,U_{j}^{[p-1]}$
are also eigenmatrices of ${\mathcal{A}}$, associated with the eigentuple
${\bf d}_{j}$.
Consider the set of eigenmatrices
$T=\left\{U_{j}^{[k]}:j=1,\cdots,n,\ k=0,\cdots,p-1\right\}.$
Then $T$ forms an orthonormal basis of $\Re^{n\times p}$. For any two distinct
members $W,V\in T$, let $\mathcal{W}$ and $\mathcal{V}$ be the corresponding
$n\times 1\times p$ tensors. Then we have
$\mathcal{W}^{\top}*\mathcal{W}=\mathcal{I}_{11p},$ (3.11)
and
$\mathcal{W}^{\top}*\mathcal{V}={\mathcal{O}}_{11p}.$ (3.12)
Viewing (3.4), we may switch the order in
$\{d_{11k},d_{22k},\cdots,d_{nnk}\}$ for any $k=1,\cdots,p$. The resulting
$\hat{\bf d}_{j}$, $j=1,\cdots,n$ are still eigentuples of ${\mathcal{A}}$.
Hence, the number of eigentuples of ${\mathcal{A}}$ is large. But we may
always take ${\bf d}_{1},\cdots,{\bf d}_{n}$ in its standard form.
Combining the orthogonality of $\mathcal{U}$, we have the following theorem.
###### Theorem 3.6
Suppose that ${\mathcal{A}}\in\Re^{n\times n\times p}$ is a T-symmetric
tensor. Then ${\mathcal{A}}$ has real eigentuples ${\bf d}_{1},\cdots,{\bf
d}_{n}$, such that ${\bf d}_{1}\geq{\bf d}_{2}\geq\cdots\geq{\bf d}_{n}$. For each
$j$, $j=1,\cdots,n$, there are real eigenmatrices
$U_{j}^{[0]},\cdots,U_{j}^{[p-1]}$, of ${\mathcal{A}}$, associated with the
eigentuple ${\bf d}_{j}$. These $np$ eigenmatrices form an orthonormal basis
of $\Re^{n\times p}$.
We call the eigentuples $\\{{\bf d}_{1},\cdots,{\bf d}_{n}\\}$, satisfying
${\bf d}_{1}\geq{\bf d}_{2}\geq\cdots\geq{\bf d}_{n}$, in Theorem 3.6 the set of
the principal eigentuples of ${\mathcal{A}}$.
If ${\mathcal{A}}=\mathcal{I}_{nnp}$, then
$\mathcal{U}=\mathcal{D}=\mathcal{I}_{nnp}$. Therefore, ${\bf
d}_{1}=\cdots={\bf d}_{n}=(1,0,\cdots,0)^{\top}={\bf e}$. If ${\mathcal{A}}$ has a set
of principal eigentuples ${\bf d}_{j}=(d_{j1},\cdots,d_{jp})^{\top}$ for
$j=1,\cdots,n$, then ${\mathcal{A}}+\lambda\mathcal{I}_{nnp}$ has a set of
principal eigentuples ${\bf
d}_{j}+\lambda{\bf e}=(d_{j1}+\lambda,d_{j2},\cdots,d_{jp})^{\top}$ for $j=1,\cdots,n$.
We are not sure whether all eigentuples of a T-symmetric tensor are real, or
whether two eigenmatrices associated with two distinct eigentuples of a
T-symmetric tensor are orthogonal to each other. However, we can prove the
following theorem.
###### Theorem 3.7
Suppose that ${\mathcal{A}}\in\Re^{n\times n\times p}$ is a T-symmetric
tensor, and $\\{{\bf d}_{1},\cdots,{\bf d}_{n}\\}$ is a set of principal
eigentuples of ${\mathcal{A}}$ such that ${\bf d}_{1}\geq\cdots\geq{\bf
d}_{n}$. Then for any real eigentuple ${\bf d}_{0}$ of ${\mathcal{A}}$, we
have
${\bf d}_{1}\geq{\bf d}_{0}\geq{\bf d}_{n}.$ (3.13)
Proof Assume that there is an eigenmatrix $V\in\mathbb{C}^{n\times p}$ such
that
${\mathcal{A}}*V={\bf d}_{0}\circ V.$
Taking conjugate, we have
${\mathcal{A}}*\bar{V}={\bf d}_{0}\circ\bar{V}.$
Let $W=V+\bar{V}$. Then $W$ is real and
${\mathcal{A}}*W={\bf d}_{0}\circ W.$
If $W$ is nonzero, then $W$ is a real eigenmatrix of ${\mathcal{A}}$,
associated with ${\bf d}_{0}$. Otherwise, $V$ is purely imaginary. Letting
$\hat{W}=\sqrt{-1}V$, we still have a real eigenmatrix of ${\mathcal{A}}$,
associated with ${\bf d}_{0}$. Without loss of generality, assume $W$ is such
a real eigenmatrix.
Let $U_{j}^{[0]},\cdots,U_{j}^{[p-1]}$ be the eigenmatrices of ${\mathcal{A}}$
in Theorem 3.6. Then we have real coefficients
$\alpha_{j}^{[0]},\cdots,\alpha_{j}^{[p-1]}$, for $j=1,\cdots,n$, such that
$W=\sum_{j=1}^{n}\sum_{k=1}^{p}\alpha_{j}^{[k-1]}U_{j}^{[k-1]}.$
Let $\mathcal{U}_{j}^{[k]}$ be the $n\times 1\times p$ tensors corresponding
to $U_{j}^{[k]}$ for $j=1,\cdots,n$ and $k=0,\cdots,p-1$. Let $\mathcal{W}$ be
the $n\times 1\times p$ tensor corresponding to $W$. Let $\mathcal{D}_{j}$ be
the $1\times 1\times p$ tensors corresponding to ${\bf d}_{j}$ for
$j=0,\cdots,n$. Then
$\begin{aligned}
\mathcal{W}^{\top}*{\mathcal{A}}*\mathcal{W}
&=\mathcal{W}^{\top}*\mathcal{W}*\mathcal{D}_{0}\\
&=\left(\sum_{j=1}^{n}\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\mathcal{U}_{j}^{[k-1]}\right)^{\top}*\left(\sum_{j=1}^{n}\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\mathcal{U}_{j}^{[k-1]}\right)*\mathcal{D}_{0}\\
&=\sum_{j=1}^{n}\left(\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\right)^{2}\mathcal{I}_{11p}*\mathcal{D}_{0}
=\sum_{j=1}^{n}\left(\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\right)^{2}\mathcal{D}_{0}.
\end{aligned}$
On the other hand,
$\begin{aligned}
\mathcal{W}^{\top}*{\mathcal{A}}*\mathcal{W}
&=\mathcal{W}^{\top}*{\mathcal{A}}*\left(\sum_{j=1}^{n}\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\mathcal{U}_{j}^{[k-1]}\right)\\
&=\left(\sum_{j=1}^{n}\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\mathcal{U}_{j}^{[k-1]}\right)^{\top}*\left(\sum_{j=1}^{n}\sum_{k=1}^{p}\alpha_{j}^{[k-1]}{\mathcal{A}}*\mathcal{U}_{j}^{[k-1]}\right)\\
&=\left(\sum_{j=1}^{n}\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\mathcal{U}_{j}^{[k-1]}\right)^{\top}*\left(\sum_{j=1}^{n}\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\mathcal{U}_{j}^{[k-1]}*\mathcal{D}_{j}\right)\\
&=\sum_{j=1}^{n}\left(\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\right)^{2}\mathcal{I}_{11p}*\mathcal{D}_{j}
=\sum_{j=1}^{n}\left(\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\right)^{2}\mathcal{D}_{j}.
\end{aligned}$
From this, we have
$\sum_{j=1}^{n}\left(\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\right)^{2}\mathcal{D}_{0}=\sum_{j=1}^{n}\left(\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\right)^{2}\mathcal{D}_{j},$
i.e.,
$\sum_{j=1}^{n}\left(\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\right)^{2}{\bf
d}_{0}=\sum_{j=1}^{n}\left(\sum_{k=1}^{p}\alpha_{j}^{[k-1]}\right)^{2}{\bf
d}_{j}.$
The inequality (3.13) is obtained. $\Box$
###### Corollary 3.8
The eigentuples ${\bf d}_{1}$ and ${\bf d}_{n}$ are uniquely determined by ${\mathcal{A}}$.
We call ${\bf d}_{1}$ the largest eigentuple of ${\mathcal{A}}$, and ${\bf
d}_{n}$ the smallest eigentuple of ${\mathcal{A}}$.
## 4 T-Symmetric Positive Semidefinite Tensors
Suppose that ${\mathcal{A}}\in\Re^{n\times n\times p}$ is a T-square tensor.
Then by Proposition 3.1, the T-quadratic form is positive semidefinite
(definite) if and only if the T-symmetric tensor
${\mathcal{A}}+{\mathcal{A}}^{\top}$ is positive semidefinite (definite). This
motivates us to study positive semidefiniteness (definiteness) of a
T-symmetric tensor.
###### Theorem 4.1
Suppose that ${\mathcal{A}}\in\Re^{n\times n\times p}$ is a T-symmetric tensor
and it has a set of principal eigentuples ${\bf d}_{1},\cdots,{\bf d}_{n}$,
such that ${\bf d}_{1}\geq{\bf d}_{2}\geq\cdots\geq{\bf d}_{n}$. Then
${\mathcal{A}}$ is positive semidefinite (definite) if and only if the
smallest eigentuple satisfies ${\bf d}_{n}\geq(>)\,{\bf 0}$.
Proof By Theorem 3.6, ${\bf d}_{1}\geq{\bf d}_{2}\geq\cdots\geq{\bf d}_{n}$, and
for each $j$, $j=1,\cdots,n$, there are real eigenmatrices
$U_{j}^{[0]},\cdots,U_{j}^{[p-1]}$, of ${\mathcal{A}}$, associated with the
eigentuple ${\bf d}_{j}$, such that these $np$ eigenmatrices form an
orthonormal basis of $\Re^{n\times p}$.
If ${\bf d}_{n}$ is not nonnegative, let $U=U_{n}^{[0]}$ and
$\mathcal{U}_{n}^{[0]}$ be the corresponding $n\times 1\times p$ tensor. Let
$\Lambda$ be the $1\times 1\times p$ tensor corresponding to ${\bf d}_{n}$. Then
$F_{\mathcal{A}}(U)=\left(\mathcal{U}_{n}^{[0]}\right)^{\top}*{\mathcal{A}}*\mathcal{U}_{n}^{[0]}=\left(\mathcal{U}_{n}^{[0]}\right)^{\top}*\mathcal{U}_{n}^{[0]}*\Lambda=\mathcal{I}_{11p}*\Lambda\not\geq{\bf 0},$
which implies that $F_{\mathcal{A}}$ is not positive semidefinite. Similarly,
if ${\bf d}_{n}$ is not positive, then $F_{\mathcal{A}}$ is not positive
definite.
On the other hand, suppose that ${\bf d}_{n}\geq{\bf 0}$. Let
$\mathcal{U}_{j}^{[k]}$ be the $n\times 1\times p$ tensor corresponding to
$U_{j}^{[k]}$ for $j=1,\cdots,n$ and $k=0,\cdots,p-1$. Let $X\in\Re^{n\times
p}$. Then there are real coefficients $\alpha_{j}^{[k]}$ for $j=1,\cdots,n$
and $k=0,\cdots,p-1$, such that
$X=\sum_{j=1}^{n}\sum_{k=0}^{p-1}\alpha_{j}^{[k]}U_{j}^{[k]}.$
Let $\Lambda_{j}$ be the $1\times 1\times p$ tensor corresponding to ${\bf d}_{j}$
for $j=1,\cdots,n$. We have
\begin{align*}
F_{\mathcal{A}}(X) &= \left(\sum_{i=1}^{n}\sum_{l=0}^{p-1}\alpha_{i}^{[l]}\mathcal{U}_{i}^{[l]}\right)^{\top}*{\mathcal{A}}*\left(\sum_{j=1}^{n}\sum_{k=0}^{p-1}\alpha_{j}^{[k]}\mathcal{U}_{j}^{[k]}\right)\\
&= \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{l=0}^{p-1}\sum_{k=0}^{p-1}\alpha_{i}^{[l]}\alpha_{j}^{[k]}\left(\mathcal{U}_{i}^{[l]}\right)^{\top}*{\mathcal{A}}*\mathcal{U}_{j}^{[k]}\\
&= \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{l=0}^{p-1}\sum_{k=0}^{p-1}\alpha_{i}^{[l]}\alpha_{j}^{[k]}\left(\mathcal{U}_{i}^{[l]}\right)^{\top}*\mathcal{U}_{j}^{[k]}*\Lambda_{j}\\
&= \sum_{j=1}^{n}\sum_{k=0}^{p-1}\left(\alpha_{j}^{[k]}\right)^{2}{\bf d}_{j}\ \geq\ {\bf 0}.
\end{align*}
Thus, $F_{\mathcal{A}}$ is positive semidefinite. Similarly, if ${\bf d}_{n}>{\bf 0}$, then $F_{\mathcal{A}}$ is positive definite. $\Box$
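Numerically, the eigentuple condition of Theorem 4.1 is conveniently checked in the Fourier domain, where the t-product reduces to facewise matrix products (cf. [3, 4]) and T-positive semidefiniteness corresponds to every Fourier-domain frontal face being Hermitian positive semidefinite (cf. [12]). The following sketch is an illustration, not part of the paper: it builds a random T-symmetric tensor and performs this facewise check.

```python
import numpy as np

def t_transpose(A):
    """t-transpose: transpose each frontal slice and reverse slices 2..p."""
    At = np.transpose(A, (1, 0, 2)).copy()
    At[:, :, 1:] = At[:, :, 1:][:, :, ::-1]
    return At

def t_product(A, B):
    """t-product: facewise matrix products in the Fourier domain."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.fft.ifft(Ch, axis=2).real

rng = np.random.default_rng(0)
n, p = 4, 3
B = rng.standard_normal((n, n, p))
A = B + t_transpose(B)                      # T-symmetric by construction

Ah = np.fft.fft(A, axis=2)
for k in range(p):                          # every Fourier face is Hermitian
    assert np.allclose(Ah[:, :, k], Ah[:, :, k].conj().T)

# Facewise eigenvalues are therefore real; their nonnegativity over all faces
# is the Fourier-domain counterpart of the condition d_n >= 0 (cf. [12]).
face_min = min(np.linalg.eigvalsh(Ah[:, :, k]).min() for k in range(p))
is_t_psd = face_min >= 0.0
```

A random symmetrized tensor is generally indefinite; a Gram tensor such as ${\mathcal{A}}^{\top}*{\mathcal{A}}$ always passes the check, which is the content of Section 5.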
## 5 Relation with TSVD of General Third Order Tensors
Suppose that ${\mathcal{A}}\in\Re^{m\times n\times p}$. By [4],
${\mathcal{A}}$ has a T-singular value decomposition (TSVD):
${\mathcal{A}}=\mathcal{U}*\mathcal{S}*\mathcal{V}^{\top},$ (5.14)
where $\mathcal{U}\in\Re^{m\times m\times p}$ and $\mathcal{V}\in\Re^{n\times
n\times p}$ are orthogonal tensors, and $\mathcal{S}\in\Re^{m\times n\times p}$ is
an f-diagonal tensor.
###### Theorem 5.1
Suppose that ${\mathcal{A}}\in\Re^{m\times n\times p}$ with TSVD (5.14). Then
${\mathcal{A}}*{\mathcal{A}}^{\top}\in\Re^{m\times m\times p}$ and
${\mathcal{A}}^{\top}*{\mathcal{A}}\in\Re^{n\times n\times p}$ are T-symmetric
positive semi-definite tensors with TED
${\mathcal{A}}*{\mathcal{A}}^{\top}=\mathcal{U}*(\mathcal{S}*\mathcal{S}^{\top})*\mathcal{U}^{\top},$
and
${\mathcal{A}}^{\top}*{\mathcal{A}}=\mathcal{V}*(\mathcal{S}^{\top}*\mathcal{S})*\mathcal{V}^{\top},$
respectively.
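Theorem 5.1 can be verified numerically by computing the TSVD facewise in the Fourier domain, following the factorization strategy of [4]. The sketch below is illustrative only (random data, not from the paper): it reconstructs ${\mathcal{A}}$ from facewise SVDs and checks the TED of ${\mathcal{A}}*{\mathcal{A}}^{\top}$ face by face.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 5, 4, 3
A = rng.standard_normal((m, n, p))

# facewise SVDs in the Fourier domain (cf. [4])
Ah = np.fft.fft(A, axis=2)
Uh = np.zeros((m, m, p), dtype=complex)
Sh = np.zeros((m, n, p), dtype=complex)   # f-diagonal in each face
Vh = np.zeros((n, n, p), dtype=complex)
for k in range(p):
    u, s, vh = np.linalg.svd(Ah[:, :, k])
    Uh[:, :, k] = u
    for i, si in enumerate(s):
        Sh[i, i, k] = si
    Vh[:, :, k] = vh.conj().T

# reconstruction: each face satisfies Ah_k = Uh_k Sh_k Vh_k^H exactly
T1 = np.einsum('ijk,jlk->ilk', Uh, Sh)
Ah_rec = np.einsum('ilk,mlk->imk', T1, Vh.conj())
A_rec = np.fft.ifft(Ah_rec, axis=2).real

# TED of A * A^T (Theorem 5.1), checked face by face
for k in range(p):
    lhs = Ah[:, :, k] @ Ah[:, :, k].conj().T          # face of A * A^T
    rhs = Uh[:, :, k] @ (Sh[:, :, k] @ Sh[:, :, k].conj().T) @ Uh[:, :, k].conj().T
    assert np.allclose(lhs, rhs)
    assert np.linalg.eigvalsh(lhs).min() > -1e-8      # positive semidefinite face
```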
We now define singular tuples and singular matrices of general third order
tensors. Suppose that ${\mathcal{A}}\in\Re^{m\times n\times p}$ is a third
order tensor, $X\in\Re^{n\times p}$, $X\not=O$, $Y\in\Re^{m\times p}$,
$Y\not=O$, ${\bf s}\in\Re^{p}$, and
${\mathcal{A}}*X={\bf s}\circ Y$ (5.15)
and
${\mathcal{A}}^{\top}*Y={\bf s}\circ X.$ (5.16)
Then we call ${\bf s}$ a singular tuple of ${\mathcal{A}}$, $X$ a right
singular matrix of ${\mathcal{A}}$, and $Y$ a left singular matrix of
${\mathcal{A}}$, corresponding to the singular tuple ${\bf s}$.
###### Theorem 5.2
Suppose that ${\mathcal{A}}\in\Re^{m\times n\times p}$ is a third order
tensor. Without loss of generality, assume that $n\leq m$. Then
${\mathcal{A}}$ has singular tuples ${\bf s}_{1}\geq{\bf
s}_{2}\geq\cdots\geq{\bf s}_{n}\geq{\bf 0}$. For each $j$, $j=1,\cdots,n$,
there are right singular matrices $U_{j}^{[0]},\cdots,U_{j}^{[p-1]}$, and left
singular matrices $V_{j}^{[0]},\cdots,V_{j}^{[p-1]}$, of ${\mathcal{A}}$,
associated with the singular tuple ${\bf s}_{j}$. The $np$ singular matrices
$U_{j}^{[0]},\cdots,U_{j}^{[p-1]}$ form an orthonormal basis of $\Re^{n\times
p}$, and the $np$ singular matrices $V_{j}^{[0]},\cdots,V_{j}^{[p-1]}$ form a
part of an orthonormal basis of $\Re^{m\times p}$, respectively.
Furthermore, ${\mathcal{A}}^{\top}*{\mathcal{A}}\in\Re^{n\times n\times p}$
and ${\mathcal{A}}*{\mathcal{A}}^{\top}\in\Re^{m\times m\times p}$ are two
T-symmetric positive semidefinite tensors. The tensor
${\mathcal{A}}^{\top}*{\mathcal{A}}$ has $n$ nonnegative eigentuples ${\bf
s}_{1}^{\odot 2},\cdots,{\bf s}_{n}^{\odot 2}$. For each $j$, $j=1,\cdots,n$,
there are real eigenmatrices $U_{j}^{[0]},\cdots,U_{j}^{[p-1]}$, of
${\mathcal{A}}^{\top}*{\mathcal{A}}$, associated with the eigentuple ${\bf s}_{j}^{\odot 2}$. The
tensor ${\mathcal{A}}*{\mathcal{A}}^{\top}$ has $n$ nonnegative eigentuples
${\bf s}_{1}^{\odot 2},\cdots,{\bf s}_{n}^{\odot 2}$. For each $j$,
$j=1,\cdots,n$, there are real eigenmatrices
$V_{j}^{[0]},\cdots,V_{j}^{[p-1]}$, of ${\mathcal{A}}*{\mathcal{A}}^{\top}$, associated with the
eigentuples ${\bf s}_{j}^{\odot 2}$. If $n<m$, for each $j$, $j=n+1,\cdots,m$,
there are real eigenmatrices $V_{j}^{[0]},\cdots,V_{j}^{[p-1]}$, of
${\mathcal{A}}*{\mathcal{A}}^{\top}$, associated with the zero eigentuple ${\bf
0}\in\mathbb{R}^{p}$. The $mp$ singular matrices
$V_{j}^{[0]},\cdots,V_{j}^{[p-1]}$ form an orthonormal basis of $\Re^{m\times
p}$.
Acknowledgment We are thankful to Prof. Yicong Zhou and Dr. Dongdong Chen for
the discussion on the multi-view clustering problem and the image feature
extraction problem.
## References
* [1] K. Braman, “Third-order tensors as linear operators on a space of matrices”, Linear Algebra and Its Applications 433 (2010) 1241-1253.
* [2] Y. Chen, X. Xiao and Y. Zhou, “Multi-view subspace clustering via simultaneously learning the representation tensor and affinity matrix”, Pattern Recognition 106 (2020) 107441.
* [3] M. Kilmer, K. Braman, N. Hao and R. Hoover, “Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging”, SIAM Journal on Matrix Analysis and Applications 34 (2013) 148-172.
* [4] M. Kilmer and C.D. Martin, “Factorization strategies for third-order tensors”, Linear Algebra and Its Applications 435 (2011) 641-658.
* [5] Y. Miao, L. Qi and Y. Wei, “Generalized tensor function via the tensor singular value decomposition based on the T-product”, Linear Algebra and Its Applications 590 (2020) 258-303.
* [6] Y. Miao, L. Qi and Y. Wei, “T-Jordan canonical form and T-Drazin inverse based on the T-product”, Communications on Applied Mathematics and Computation 3 (2021) doi.org/10.1007/s42967-019-00055-4.
* [7] O. Semerci, N. Hao, M.E. Kilmer and E.L. Miller, “Tensor-based formulation and nuclear norm regularization for multienergy computed tomography”, IEEE Transactions on Image Processing 23 (2014) 1678-1693.
* [8] G. Song, M.K. Ng and X. Zhang, “Robust Tensor Completion Using Transformed Tensor SVD”, Numerical Linear Algebra with Applications doi.org/10.1002/nla.2299.
* [9] X. Xiao, Y. Chen, Y.J. Gong and Y. Zhou, “Low-rank preserving t-linear projection for robust image feature extraction”, IEEE Transactions on Image Processing 30 (2021) 108-120.
* [10] X. Xiao, Y. Chen, Y.J. Gong and Y. Zhou, “Prior knowledge regularized multiview self-representation and its applications”, IEEE Transactions on Neural Networks and Learning Systems, in press.
* [11] L. Yang, Z.H. Huang, S. Hu and J. Han, “An iterative algorithm for third-order tensor multi-rank minimization”, Computational Optimization and Applications 63 (2016) 169-202.
* [12] M. Zheng, Z. Huang and Y. Wang, “T-positive semidefiniteness of third-order symmetric tensors and T-semidefinite programming”, Computational Optimization and Applications doi.org/10.1007/s10589-020-00231-w.
* [13] J. Zhang, A.K. Saibaba, M.E. Kilmer and S. Aeron, “A randomized tensor singular value decomposition based on the t-product”, Numerical Linear Algebra with Applications 25 (2018) e2179.
* [14] Z. Zhang and S. Aeron, “Exact tensor completion using t-SVD”, IEEE Transactions on Signal Processing 65 (2017) 1511-1526.
* [15] Z. Zhang, G. Ely, S. Aeron, N. Hao and M. Kilmer, “Novel methods for multilinear data completion and de-noising based on tensor-SVD”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, ser. CVPR ’14 (2014) 3842-3849.
* [16] P. Zhou, C. Lu, Z. Lin and C. Zhang, “Tensor factorization for low-rank tensor completion”, IEEE Transactions on Image Processing 27 (2018) 1152-1163.
# Understanding Magnetism in Double Double Perovskites: A Complex Multiple
Magnetic Sublattice System
Anita Halder1,2 Shreya Das1 Prabuddha Sanyal3 Tanusri Saha-Dasgupta1
<EMAIL_ADDRESS>1Department of Condensed Matter Physics and Material
Sciences, S.N. Bose National Centre for Basic Sciences, JD Block, Sector-III,
Salt Lake City, Kolkata 700 106, India 2 School of Physics, Trinity College
Dublin, Dublin, Ireland. 3 Maulana Abul Kalam Azad University of Technology,
Kolkata, India.
###### pacs:
75.50.-y,71.20.-b,75.10.Dg
Understanding magnetism in a multiple magnetic sublattice system, driven by the
interplay of magnetic exchanges of varied nature, is at once challenging
and intriguing. Motivated by the recent synthesis of
AA${}^{{}^{\prime}}$BB${}^{{}^{\prime}}$O6 double double perovskites with
multiple magnetic ions at both the A- and B-sites, we investigate the mechanism of
magnetic behavior in this interesting class of compounds. We find that the
magnetism in such multiple sublattice compounds is governed by the interplay
and delicate balance between two distinct mechanisms, a) kinetic energy-driven
multiple sublattice double exchange mechanism and b) the conventional super-
exchange mechanism. The derived spin Hamiltonian based on first-principles
calculations is solved by classical Monte Carlo technique which reproduces the
observed magnetic properties. Finally, the influence of off-stoichiometry, as
in experimental samples, is discussed. Some of these double double perovskite
compounds are found to possess large total magnetic moment and also are found
to be half-metallic, which raises the hope of future applications of these
large magnetic moment half-metallic oxides in spintronics and memory devices.
Perovskite-structured ABO3 transition metal oxides have remained the holy grail of
condensed matter physics due to the wide range of fascinating properties they
exhibit, including high-temperature superconductivity,
colossal magneto-resistance, and half-metallicity.C. N. R Rao(1989) ; AS
Bhalla(2000) With the aim of tailoring properties further, one common
route is cation substitution. Substitution and 1:1 ordering of cations in the B
sublattices in rock-salt arrangement give rise to A2BB${}^{{}^{\prime}}$O6
double perovskites.King (2010) ; Tanusri(2013) ; Sami Vasala(2015) ;
Tanusri(2020) The topic of magnetism in transition metal oxides with two
magnetic ions as in double perovskite structure has received significant
attention.D.D. Sarma(2000) ; K.-I. Kobayashi(1998) ; Hena Das(2008) ; DP1 ;
DP2 ; DP3 ; DP4 ; Prabuddha Sanyal(2009) ; kato(2007) ; Krockenberger(2007) ;
Nyrissa S. Rogado(2005) ; Hena Das(2009) ; Prabuddha Sanyal(2017) ; Hena
Das(2011) ; Anita(2019) ; kartik(2015)
In this backdrop, it is interesting to ask what happens if the
compounds involve an even larger number of magnetic ions, i.e., more than two,
as in double double perovskites. Owing to their structural and
compositional flexibility, perovskites can accommodate almost
all elements of the periodic table and can support various
possible coordinations. Recently, double double perovskites of general
formula
AA${}^{{}^{\prime}}_{0.5}$A${}^{{}^{\prime\prime}}_{0.5}$BB${}^{{}^{\prime}}$O6
have been synthesized using high pressure and temperature,E. Solana-
Madruga(2016) ; McNally (2017) ; E. Solana-Madruga(2018) combining columnar
ordering at A sublattice and rock-salt ordering at B sublattice with five
independent cation sites, A, A${}^{{}^{\prime}}$, A${}^{{}^{\prime\prime}}$, B
and B${}^{{}^{\prime}}$, hosting rare-earth or alkaline-earth ion at A site,
3d transition metals at the A${}^{{}^{\prime}}$, A${}^{{}^{\prime\prime}}$ and B sites, and 5d
transition metal at B${}^{{}^{\prime}}$ site. Use of high pressure is able to
stabilize small magnetic transition-metal ions such as Mn2+ at the A-sites of
perovskites in place of large, nonmagnetic cations like Ca2+ and Sr2+, with
reduced tetrahedral (4) and square-planar (4) coordination instead of the usual
dodecahedral (12) coordination of the A site.A. J. Dos santos-Garca(2015) This
introduces a source of magnetism at A-site, which in turn drives the interplay
of magnetism between multiple sublattices, resulting in highly enriched
magnetic properties.
One would naively expect that the presence of multiple magnetic ions with multiple
magnetic exchanges would lead to frustration and a spin-glass-like
ground state. Contrary to this expectation, the recently synthesized double double
perovskites CaMnMReO6 (M=Ni,Co) are found to be magnetically ordered, Elena
Solana-Madruga(2019) showing ferromagnetic ordering in CaMnNiReO6 with
parallel alignment of spins, which is found to change to ferrimagnetic
ordering when Ni is replaced by Co, its neighboring element in the periodic table.
We note that the net moment of such a multi-component ferromagnetic system is very
high, paving the way to the design of large-moment magnetic oxides. The situation
is thus rather curious: what makes the three or more magnetic
sublattice system CaMnNiReO6 ferromagnetic, and why does replacing Ni with Co,
its neighboring element in the periodic table, make it ferrimagnetic? What is the
driving mechanism of magnetism in such a multi-sublattice magnetic system?
Understanding of such a complex, multiple magnetic sublattice system is
expected to bring out rich physics, which would aid the future design of such
oxides.
Motivated by these developments, we present here a first-principles density
functional theory (DFT) based study of these compounds which takes into
account the structural and chemical details in an accurate way, followed by
construction of DFT-derived spin Hamiltonian, which is solved with Monte Carlo
(MC) simulation. Our study uncovers a novel exchange mechanism to be operative
in these compounds, which turns out to be a combination of a multi-sublattice
hybridization-driven (kinetic-energy-driven) double-exchange mechanism and the more
conventional super-exchange mechanism, with the nature of the ground state magnetic
order being decided by the competition of these two. While for CaMnNiReO6, the
multi-sublattice hybridization-driven double-exchange mechanism wins over the
super-exchange mechanism, stabilizing the long-range ferromagnetic state, the
replacement of Ni by Co increases the core spin value at the B sublattice from
S=1 to S=3/2, thus toggling the balance between the two exchange mechanisms and
favoring the long-range ferrimagnetic behavior. The spin Hamiltonian with
parameters derived in a first-principles manner provides a good description of the
measured magnetic properties.Elena Solana-Madruga(2019) Introduction of off-
stoichiometry is found to maintain the magnetic ground state, encouraging
exploration of many more candidates in such multiple magnetic sublattice
double perovskites. Interestingly, the large magnetic moment compounds,
arising due to long range ferromagnetic ordering between multiple magnetic
sublattices, also turn out to be half-metallic, having important implications
for spintronic applications.
## I Results
### I.1 Crystal Structure
Fig. 1 shows the four formula unit tetragonal, P42/n crystal structure of
stoichiometric CaMnNiReO6 (CMNRO).Elena Solana-Madruga(2019) CaMnCoReO6
(CMCRO) compound is isostructural to CaMnNiReO6. The structure consists of
four magnetic sublattices, tetrahedrally coordinated 3d transition metal (TM)
Mn1 at A${}^{{}^{\prime}}$ site, square planar coordinated 3d transition metal
Mn2 at the A${}^{{}^{\prime\prime}}$ site, octahedrally coordinated 3d transition metal Ni/Co at the B site
and octahedrally coordinated 5d transition metal Re at B${}^{{}^{\prime}}$
site, making it a 3d-5d TM magnetic system. Mn1 is connected to two nearest
neighbour (NN) Mn2 sites through Mn1-O-O-Mn2 superexchange paths, while it is
connected to 4 NN Ni(Co)/Re through Mn1-O-Ni(Co)/Re super-exchange paths.
Ni(Co) and Re are connected to each other through corner-shared Ni(Co)-O-Re
bonds, with bond angles of 141–152°.
### I.2 Electronic Structure
We analyze the electronic structure of the studied compounds in terms of spin-
polarized density of states, and its projection to orbital characters which
provide us the information on charge and spin states of the transition metal
ions. The GGA+$U$ density of states (DOS), with choice of $U$ = 5 (2) eV and
$J_{H}$ = 0.9 (0.4) eV at 3d TM (Re) sites, for CMNRO and CMCRO are shown in
top and bottom panels of Fig. 2, respectively. In agreement with experimental
findings, the ground state of CMNRO is found to be ferromagnetic, with moments
at three 3d TM sublattices Mn1, Mn2 and Ni sites aligned in parallel
direction, while the moment at Re site is found to be aligned opposite to the
moments at Mn1, Mn2 and Ni sites. The calculated magnetic moments at Mn (Mn1
and Mn2), Ni and Re are found to be 4.5 $\mu_{B}$, 1.6 $\mu_{B}$ and 0.5
$\mu_{B}$ respectively, with a large total moment of 24 $\mu_{B}$ in the unit
cell. On the contrary the ground state of CMCRO is found to be ferrimagnetic,
as observed experimentally, with moments of Mn1, and Mn2 aligned in
antiparallel direction, and Co moment pointing in the direction of Mn1. The Re
moment is found to be antiparallel to Mn1 and Co, with calculated moments
values of 4.5 $\mu_{B}$ (Mn1 and Mn2), 2.6 $\mu_{B}$ (Co) and 0.5 $\mu_{B}$
(Re) and total moment of 8 $\mu_{B}$ in the unit cell. The calculated moments
are in conformity with nominal 2+ valence of Mn1 and Mn2 with high spin d5
occupancy, 2+ valence of Ni/Co with high spin (HS) d8/d7 occupancy and 6+
valence of Re with d1 occupancy. Following this, in DOS of CMNRO we find Mn1
and Mn2 states are filled in the majority spin channel and empty in the
minority channel. Ni eg DOS in CMNRO get filled in majority spin channel and
empty in minority, while Ni t2g states are filled in both spin channels. The
partially filled Re t2g states in CMNRO, with one electron in the minority
spin channel and strongly hybridized with Mn1/Mn2 d and Ni eg states, crosses
the Fermi level, making the solution metallic in the minority spin channel and
gapped in the majority spin channel. This half-metallic solution persists in
CMCRO, though Mn1 and Mn2 d states now become filled and empty, respectively
in two opposite spin channels and Co t2g becomes partly empty. This points to
the possibility of achieving spin-dependent carrier scattering in these
compounds, with a large spin value, which would allow the resistance of these
large-moment compounds to be strongly influenced by low magnetic fields.
Effect of spin-orbit coupling (SOC) was checked, which is expected to be
appreciable for 5d TM element, Re. The qualitative results are found to remain
unchanged upon inclusion of SOC, apart from an appreciable orbital moment of
$\sim$ 0.15 $\mu_{B}$ that develops at Re site, antiparallel to its spin
moment.
### I.3 Mechanism of Magnetism
In order to shed light on the mechanism of magnetism in this interesting class
of compounds, we derive the low energy spin Hamiltonian out of DFT inputs. For
this purpose, we perform muffin tin orbital based downfolding calculationsO.
K. Andersen(2000) that integrate out degrees of freedom which are not of
interest in an energy selective manner. Wannier representation of the
downfolded Hamiltonian provides the estimates of the onsite energies and the
hopping interactions between the orbitals retained in the basis during the
process of downfolding. In the first step of downfolding calculations, we
retain the Mn1, Mn2 d states, Ni eg/ Co d states and Re t2g in the basis and
integrate out the rest. In second step, Mn1, Mn2 and Ni/Co degrees of freedom
are downfolded retaining only the Re t2g degrees of freedom in the basis. The
latter massive downfolding provides the estimates of the Re t2g onsite
energies renormalized by the hybridization from Mn1, Mn2 and Ni/Co states.
Thus the onsite matrix elements of the real space Hamiltonian defined in the
first and second step of downfolding calculations, give the energy level
positions before and after switching on the hybridization between
Mn1/Mn2/Ni(Co) and Re states, respectively. Results of two step downfolding
calculations for CMNRO and CMCRO are presented in top and bottom panels of
Fig. 3, respectively. Mn1-d, Mn2-d, Ni eg (Co-d) and Re t2g states are both
crystal field split and exchange split. In distorted tetrahedral coordination,
Mn1 d states are split into 1-1-1-2 fold degeneracies, while Mn2 d states in
square planar coordination are split into 2-2-1 fold degeneracies. The
trigonal distortion in ReO6 octahedra splits its t2g states in 1-2 fold
degeneracies.
Examination of Fig. 3 reveals several interesting aspects that are key to
constructing the low-energy spin Hamiltonian. First of all, in the Mn1-Mn2-Ni(Co)-Re basis, Re
t2g states are essentially non-magnetic with negligible exchange splitting.
Secondly, the Re t2g states lie within the exchange split states of Mn1 d, Mn2
d and Ni eg/Co d. Third and most importantly, upon switching on the
hybridization between Mn1 d/Mn2 d/Ni eg(Co d) and Re t2g, captured through
massive downfolding procedure, an exchange splitting of 0.6-0.8 eV is induced
among Re t2g states, with the direction of spin splitting opposite to that of
Mn1 or Mn2 or Ni/Co. This essentially establishes the hybridization-driven
multi sublattice double-exchange process to be operative, in which a negative
spin splitting in the essentially non-magnetic site is induced through
hybridization between the localized spin and itinerant electrons.D.D.
Sarma(2000) ; K.-I. Kobayashi(1998) In the present context, this can be
captured in terms of a 3+1 sublattice Kondo Lattice model, consisting of a) a
large core spin at the Mn1, Mn2 and Ni(Co) sites, b) strong coupling on the
Mn1/Mn2/ Ni (Co) site between the core spin and the itinerant electron,
strongly preferring one spin polarization of the itinerant electron, and c)
delocalization of the itinerant electron on the Mn1-Mn2-Ni(Co)-Re network, in
the similar spirit as in Prabuddha Sanyal(2009) ,
\begin{align*}
H_{DE} &= \epsilon_{B}\sum_{i\sigma}b_{i\sigma}^{\dagger}b_{i\sigma}
+\epsilon_{Mn1}\sum_{i\sigma}m^{1\dagger}_{i\sigma}m^{1}_{i\sigma}
+\epsilon_{Mn2}\sum_{i\sigma}m^{2\dagger}_{i\sigma}m^{2}_{i\sigma}
+\epsilon_{Re}\sum_{i\sigma}r^{\dagger}_{i\sigma}r_{i\sigma}\\
&+t_{B-Re}\sum_{<ij>\sigma}(b_{i\sigma}^{\dagger}r_{j\sigma}+h.c.)
+t_{Mn1-Re}\sum_{<ij>\sigma}(m_{i\sigma}^{1\dagger}r_{j\sigma}+h.c.)
+t_{Mn2-Re}\sum_{<ij>\sigma}(m_{i\sigma}^{2\dagger}r_{j\sigma}+h.c.)\\
&+J_{B}\sum_{i\in B}\vec{S}_{i}^{B}\cdot b_{i\alpha}^{\dagger}\vec{\sigma}_{\alpha\beta}b_{i\beta}
+J_{Mn1}\sum_{i\in A^{\prime}}\vec{S}_{i}^{Mn1}\cdot m_{i\alpha}^{1\dagger}\vec{\sigma}_{\alpha\beta}m_{i\beta}^{1}
+J_{Mn2}\sum_{i\in A^{\prime\prime}}\vec{S}_{i}^{Mn2}\cdot m_{i\alpha}^{2\dagger}\vec{\sigma}_{\alpha\beta}m_{i\beta}^{2}
\end{align*}
The $m$’s refer to the Mn sites and the $b$’s to the B (Ni/Co) sites.
$t_{B-Re}$, $t_{Mn1-Re}$, $t_{Mn2-Re}$ represent the nearest neighbor B-Re,
Mn1-Re, Mn2-Re hoppings respectively, with onsite elements $\epsilon_{B}$,
$\epsilon_{Mn1}$, $\epsilon_{Mn2}$ and $\epsilon_{Re}$. The ${\bf S}_{i}$ are
‘classical’ (large $S$) core spins at the Mn1/Mn2/B sites, coupled to the
itinerant Re electrons through a coupling $J$, when the Re electron hops onto
the respective sublattice.
It is to be noted that in $H_{DE}$, the Kondo coupling parameter $J$ is
present only at the magnetic Mn1, Mn2 and B sites, which possess a large core
spin (S=5/2 at Mn1 and Mn2, and S=1 for Ni and S=3/2 for Co), with which the
itinerant Re electron interacts when it hops onto these magnetic sites. The
$J/W$ ratio between the Kondo exchange coupling, $J$ and the bandwidth, $W$ is
thus relevant only on the magnetic Mn1, Mn2 and B (Ni/Co) sites. Following the
DFT inputs, the calculated $J/W$ ratios for the Mn1, Mn2 and Ni sites in CMNRO
are found to be 3.77, 2.06 and 2.7 respectively, while the ratios for the Mn1,
Mn2 and Co sites in CMCRO are given by 3.529, 1.87 and 3.5 respectively. Thus
the exchange coupling $J$ is appreciably larger than the bandwidth $W$ for all
the magnetic sites. Moreover, since the bandwidth, $W\approx z\times t$ where
$t$’s are the hopping parameters appearing in the Hamiltonian, and $z$ is the
number of neighbors, the $J/t$ ratios are even larger, of the order of 8 or
10, justifying use of $J\rightarrow$ $\infty$ model. The $J\rightarrow$
$\infty$ limit of double exchange models was used by Anderson and
HasegawaAnderson-hasegawa and later studied by P. De GennesGennes in the
context of perovskites. This limit was studied in the context of double
perovskites in Ref.DP3, . $J\rightarrow$ $\infty$ approximation has been used
for other magnetic perovskite and double perovskite compounds in the
literature for similar $J/t$ ratios.manganites ; millis ; Prabuddha
Sanyal(2009) ; DP3
Invoking the $J\rightarrow$ $\infty$ approximation, one can derive the
effective spin Hamiltonian for CMNRO in terms of the core spins at Mn1
(S=5/2), Mn2 (S=5/2) and Ni (S=1) site as given in the following. The details
of the derivation can be found in the supplementary information (SI).
\begin{align*}
H^{\prime}_{DE} &= 4D_{Mn1-Mn2}\sum_{\substack{<ij>\\ i\in A^{\prime},\,j\in A^{\prime\prime}}}\sqrt{\frac{1+\mathbf{S}^{Mn1}_{i}\cdot\mathbf{S}^{Mn2}_{j}}{2}}
+8D_{Mn1-Ni}\sum_{\substack{<ij>\\ i\in A^{\prime},\,j\in B}}\sqrt{\frac{1+\mathbf{S}^{Mn1}_{i}\cdot\mathbf{S}^{Ni}_{j}}{2}}\\
&+8D_{Mn2-Ni}\sum_{\substack{<ij>\\ i\in A^{\prime\prime},\,j\in B}}\sqrt{\frac{1+\mathbf{S}^{Mn2}_{i}\cdot\mathbf{S}^{Ni}_{j}}{2}}
\qquad\text{(2)}
\end{align*}
A similar Hamiltonian can be written for the Co compound, with
$\mathbf{S}^{Ni}$ (S=1) replaced by $\mathbf{S}^{Co}$ (S=3/2), where the
coupling constants are $D_{Mn1-Mn2}$, $D_{Mn1-Co}$ and $D_{Mn2-Co}$.
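Each bond term in Eq. (2) depends on the relative angle $\theta_{ij}$ between neighboring classical unit spins only through $\sqrt{(1+\cos\theta_{ij})/2}=|\cos(\theta_{ij}/2)|$, so a bond with $D<0$ gains maximal kinetic energy for parallel alignment. A quick numerical check of this half-angle identity (an illustration, not from the paper; unit spins assumed):

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 7)            # relative angle between two unit spins
dot = np.cos(theta)                            # S_i . S_j for unit spins
de_factor = np.sqrt((1.0 + dot) / 2.0)         # the factor entering Eq. (2)

# half-angle identity: sqrt((1 + cos t)/2) = |cos(t/2)|
assert np.allclose(de_factor, np.abs(np.cos(theta / 2.0)))
# the energy D * de_factor is most negative (for D < 0) at theta = 0,
# i.e. the double-exchange term favors ferromagnetic alignment
```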
The multi-sublattice double-exchange Hamiltonian described above, although
capable of describing the ferromagnetic state of CMNRO, does not account for
the fact that the replacement of Ni by Co in CMCRO changes the ferromagnetic
state to a ferrimagnetic one. This suggests that, together with $H_{DE}$,
another source of magnetism needs to be considered. Indeed, there exists another source of
magnetism, namely the super-exchange between the half-filled Mn1-d, Mn2-d, Ni
eg, and high spin Co (t2g+eg) states. The Goodenough-Kanamori ruleGoodenough
states that superexchange interactions are antiferromagnetic where the virtual
electron transfer is between overlapping orbitals that are each half-filled,
but they are ferromagnetic where the virtual electron transfer is from a half-
filled to an empty orbital or from a filled to a half-filled orbital.
Following this, the super-exchange contributions are all antiferromagnetic in
nature (cf. Fig. 4), with strength set by the hopping integrals ($t$) and
onsite energy differences ($\Delta$):
$J\propto\sum_{m,m^{\prime}}t_{m,m^{\prime}}^{2}/(U+\Delta_{m,m^{\prime}})$,
where $m$ and $m^{\prime}$ are the
orbitals at sites $i$ (Mn1/Mn2/Ni(Co)) and $j$ (Mn1/Mn2/Ni(Co)).
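As a worked example of the super-exchange expression above, the following evaluates $J=\sum_{m,m^{\prime}}2\,t_{m,m^{\prime}}^{2}/(U+\Delta_{m,m^{\prime}})$ for a few orbital pairs; the hoppings and energy differences are invented placeholders, not the downfolding results of this work.

```python
# Worked evaluation of the super-exchange estimate J = sum 2 t^2 / (U + Delta).
# All numbers below are assumed placeholder values, not the downfolded
# hoppings or onsite energies of CMNRO/CMCRO.
hoppings_eV = [0.20, 0.15, 0.10]   # t_{m,m'} for three hypothetical orbital pairs
deltas_eV = [1.0, 1.5, 2.0]        # onsite energy differences Delta_{m,m'}
U_eV = 5.0                          # Hubbard U at the 3d site, as used in the text

J_meV = 1000.0 * sum(2.0 * t ** 2 / (U_eV + d)
                     for t, d in zip(hoppings_eV, deltas_eV))
# J > 0 by construction: the super-exchange contribution is antiferromagnetic,
# consistent with the Goodenough-Kanamori rule for half-filled orbitals
```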
The net Hamiltonian for CMNRO adding the contribution of double exchange and
super-exchange is thus given by,
\begin{align*}
H &= 4D_{Mn1-Mn2}\sum_{\substack{<ij>\\ i\in A^{\prime},\,j\in A^{\prime\prime}}}\sqrt{\frac{1+\mathbf{S}^{Mn1}_{i}\cdot\mathbf{S}^{Mn2}_{j}}{2}}
+8D_{Mn1-Ni}\sum_{\substack{<ij>\\ i\in A^{\prime},\,j\in B}}\sqrt{\frac{1+\mathbf{S}^{Mn1}_{i}\cdot\mathbf{S}^{Ni}_{j}}{2}}\\
&+8D_{Mn2-Ni}\sum_{\substack{<ij>\\ i\in A^{\prime\prime},\,j\in B}}\sqrt{\frac{1+\mathbf{S}^{Mn2}_{i}\cdot\mathbf{S}^{Ni}_{j}}{2}}
+4J_{Mn1-Mn2}\sum_{\substack{<ij>\\ i\in A^{\prime},\,j\in A^{\prime\prime}}}\mathbf{S}^{Mn1}_{i}\cdot\mathbf{S}^{Mn2}_{j}\\
&+8J_{Mn1-Ni}\sum_{\substack{<ij>\\ i\in A^{\prime},\,j\in B}}\mathbf{S}^{Mn1}_{i}\cdot\mathbf{S}^{Ni}_{j}
+8J_{Mn2-Ni}\sum_{\substack{<ij>\\ i\in A^{\prime\prime},\,j\in B}}\mathbf{S}^{Mn2}_{i}\cdot\mathbf{S}^{Ni}_{j}
\qquad\text{(3)}
\end{align*}
and a similar one for CMCRO.
In order to estimate the various coupling constants, $D_{Mn1-Mn2}$,
$D_{Mn1-Ni/Co}$, $D_{Mn2-Ni/Co}$ and $J_{Mn1-Mn2}$, $J_{Mn1-Ni/Co}$ and
$J_{Mn2-Ni/Co}$, we apply a two step process. In the first step, we apply
downfolding procedure O. K. Andersen(2000) to construct a spin unpolarized
Mn1-Mn2-Ni(Co) Hamiltonian defined in effective Mn1-d, Mn2-d, Ni eg (Co d)
basis. The real space representative of this Hamiltonian provides the estimate
of onsite matrix elements of Mn1-d, Mn2-d, Ni eg (Co d) and the hopping
interactions (assumed to be nearest neighbour) between them. Following this,
$J_{Mn1-Mn2}$, $J_{Mn1-Ni/Co}$ and $J_{Mn2-Ni/Co}$ were estimated using the
super-exchange formula $J=\sum_{m,m^{\prime}}2\,t_{m,m^{\prime}}^{2}/(U+\Delta_{m,m^{\prime}})$. In the second
step, the total energies for different possible spin configurations at Mn1,
Mn2 and Ni(Co) sites are calculated, and mapped on to the spin Hamiltonian
given in Eqn. 2. Putting the values of $J_{Mn1-Mn2}$, $J_{Mn1-Ni/Co}$ and
$J_{Mn2-Ni/Co}$ obtained from super-exchange formula, the estimates of
$D_{Mn1-Mn2}$, $D_{Mn1-Ni/Co}$ and $D_{Mn2-Ni/Co}$ are obtained. The estimated
values of $D$’s, and $J$’s, for the two compounds are given in Table I.
For CMNRO, we find that effective Mn1-Mn2, Mn1-Ni and Mn2-Ni interactions are
all negatively signed, i.e., ferromagnetic, in conformity with the ferromagnetic
ground state found in the experiment, as well as in DFT total energy
calculations. Similarly for CMCRO, we find Mn1-Mn2 and Mn2-Co effective
interactions are positively signed, i.e., antiferromagnetic, while the Mn1-Co
interaction is ferromagnetic in conformity with its ferrimagnetic ground
state.
Inspecting Table I, we further find that, while the strength of the Mn1-Mn2
super-exchange ($J_{Mn1-Mn2}$) remains similar between the two compounds, the
strength of Mn1/Mn2 - B super-exchange is greatly enhanced in CMCRO compared
to CMNRO, $J_{Mn1-Ni/Co}$ being enhanced by a factor of 1.6 and
$J_{Mn2-Ni/Co}$ being enhanced by a factor of 3.2. This is expected due to the
fact that while for Ni two unpaired eg electrons participate in the super-
exchange process, for Co, three unpaired electrons, two belonging to eg
manifold and one belonging to t2g contribute. At the same time, we find a
significant weakening of the hybridization-driven exchange between Mn1-Mn2,
reduced by two orders of magnitude compared to Ni compound. These important
changes turn the net interaction to be antiferromagnetic between Mn1 and Mn2,
and that between Mn2 and Co, compared to all ferro interaction for Ni
compound.
### I.4 Monte Carlo Study of the Spin Hamiltonian
In order to evaluate the finite-temperature properties of the spin Hamiltonian
of Eqn. 3, we perform Monte Carlo simulations. The total energy of a given
spin configuration is obtained from the spin Hamiltonian by plugging in the
input parameters $D$'s and $J$'s listed in Table I. The spin configurations at
the Mn1, Mn2 and Ni/Co sites are generated through the Metropolis algorithm in
a 3$\times$3$\times$3-unit-cell simulation box with periodic boundary
conditions. Starting from an initial temperature of 400 K (1000 K) for CMNRO
(CMCRO), the simulation temperature is stepped down to T = 1 K in intervals of
2 K. One hundred thousand Monte Carlo steps are employed to ensure a large
sample space, while physical quantities such as the magnetization are
calculated by averaging over the last 10,000 Monte Carlo steps. The
magnetization as a function of temperature for CMNRO and CMCRO is shown in
Fig. 5.
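The Metropolis procedure can be illustrated with a minimal classical-spin sketch. This toy version assumes a simple-cubic lattice with a single effective nearest-neighbour coupling (the effective Mn1-Mn2 value of Table I), far fewer sweeps than the $10^{5}$ used in the paper, and not the actual multi-sublattice connectivity of Eqn. 3:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spins(n):
    """n uniformly distributed unit vectors (classical Heisenberg spins)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

L = 3                      # 3x3x3 simulation box with periodic boundaries
N = L ** 3
J_EFF = -44.2e-3           # effective coupling in eV; negative = FM (Table I sign convention)
KB = 8.617e-5              # Boltzmann constant, eV/K

spins = random_spins(N)
# Nearest neighbours on a periodic simple-cubic lattice (toy geometry).
idx = np.arange(N).reshape(L, L, L)
neigh = np.stack([np.roll(idx, s, axis=a).ravel()
                  for a in range(3) for s in (1, -1)], axis=1)

def local_energy(i, s):
    """Energy of spin s at site i: J times the sum over neighbours of s . s_j."""
    return J_EFF * float(np.sum(spins[neigh[i]] @ s))

def sweep(T):
    """One Metropolis sweep: N single-spin update attempts at temperature T."""
    for i in rng.integers(0, N, size=N):
        trial = random_spins(1)[0]
        dE = local_energy(i, trial) - local_energy(i, spins[i])
        if dE < 0.0 or rng.random() < np.exp(-dE / (KB * T)):
            spins[i] = trial

for T in np.arange(400.0, 0.0, -50.0):   # coarse anneal, 400 K down to 50 K
    for _ in range(200):
        sweep(T)
    M = float(np.linalg.norm(spins.sum(axis=0))) / N
    print(f"T = {T:5.1f} K   |M|/site = {M:.3f}")
```

With the ferromagnetic sign convention of Table I ($J_{\rm eff}<0$), the annealed configuration orders, and the per-site magnetization approaches 1 at the lowest temperature.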
For the CMNRO compound, our Monte Carlo simulation correctly reproduces the
ground state, in which the spins of Mn1, Mn2 and Ni are all found to be
aligned parallel to each other (cf. top left panel, Fig. 5). We note that the
total moment at low temperature is found to be 28 $\mu_{B}$/unit cell,
corresponding to the sum of the nominal moment of 5 $\mu_{B}$ for the 2 Mn1
and 2 Mn2 ions and the nominal moment of 2 $\mu_{B}$ for the 4 Ni sites. The
transition temperature (Tc) may be obtained from the minimum of the derivative
of the magnetization-versus-temperature curve, as shown in the top panel of
Fig. 5. The Tc is found to be 142 K, close to the experimentally reported
value of 158 K [27]. For the CMCRO compound, the ferrimagnetic ground state is
also correctly captured. In this case, Mn2 is found to be antiparallel to Mn1
and to the Co moment, giving rise to a total moment of 12 $\mu_{B}$/unit cell,
arising from the magnetic moment of 3 $\mu_{B}$ at the 4 Co sites and the
cancellation of the moments at the Mn1 and Mn2 sites. The transition in CMNRO
is noticeably sharper than in CMCRO, as reflected in the narrower width of the
inverse peak in the dM/dT curve. Additionally, in CMCRO a shoulder is observed
to the left of the inverse peak which is completely absent in CMNRO.
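The Tc extraction just described (the minimum of dM/dT) can be reproduced on a synthetic magnetization curve; the tanh profile below is an illustrative stand-in for the simulated M(T), not the actual Monte Carlo data:

```python
import numpy as np

# Synthetic M(T) standing in for a simulated curve (illustration only):
# a smooth step centred at an assumed transition temperature.
T = np.arange(1.0, 401.0, 2.0)           # the 2 K grid used in the simulation
TC_TRUE = 142.0                          # assumed Tc for the synthetic curve
M = 28.0 * 0.5 * (1.0 - np.tanh((T - TC_TRUE) / 15.0))   # muB / unit cell

dMdT = np.gradient(M, T)                 # finite-difference derivative
Tc_est = float(T[np.argmin(dMdT)])       # minimum of dM/dT marks Tc
print(f"Estimated Tc = {Tc_est:.0f} K")
```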
By repeating the calculation with a larger simulation cell of
4$\times$4$\times$4 (see SI), we find that the peak-and-shoulder structure in
CMCRO is robust; it arises from the competition between the effective FM
Mn1-Co interaction and the two effective AFM Mn1-Mn2 and Mn2-Co interactions.
To demonstrate the effect of this competition on the dM/dT curve of the Co
compound, we further present dM/dT curves for varying $D_{Mn1-Co}$ in the
inset of the bottom right panel of Fig. 5. Upon reducing $D_{Mn1-Co}$ from the
DFT-estimated value of -143.9 meV to -135.9 meV, i.e. weakening the ferro
interaction, the high-temperature peak shifts to lower temperature, along with
a redistribution of weight between the peak and the shoulder that converts the
shoulder into a peak. The shoulder feature is thus reminiscent of a second
peak which is not resolved when the strengths of the ferro and antiferro
interactions are comparable. This implies that the high-temperature peak
arises from the ferro interaction, while the lower-temperature feature arises
from the antiferro interaction. The experimental study [27] on CaMnReCoO6
reports only the magnetic susceptibility and not dM/dT. However, reported
experimental dM/dT data for the double perovskite Nd2NiMnO6, which also
contains multiple magnetic ions, do exhibit such a two-feature structure [34].
A similar two-feature dM/dT curve is also seen for the ferrimagnetic compound
NiCr2O4 [35]. We note that with the DFT-estimated value of $D_{Mn1-Co}$, the
second feature appears around 200 K, very close to the experimentally reported
TC of 188 K [27].
### I.5 Effect of Off-stoichiometry
The discussion above concerns the stoichiometric compounds, while the
experimental samples of CMNRO and CMCRO are reported to be
off-stoichiometric [27]. The experimental samples show a high degree of B-site
cation ordering, with a nominal antisite disorder of 3.4 and 2.5$\%$ for the
Co and Ni compounds, respectively. This has been attributed to the high charge
contrast between (Mn/Co/Ni)2+ and Re6+. However, for CMCRO, while there is
96$\%$ Co at the octahedral B site, 30-40$\%$ of Co was reported to substitute
for Mn at the A sites, leading to an overall Co-rich composition of
CaMn0.7Co1.3ReO6 as opposed to the stoichiometric formula CaMnCoReO6.
Similarly, for CMNRO, the experimental sample has an overall Ni-poor
composition of CaMn1.2Ni0.8ReO6, with some of the Mn atoms occupying Ni sites
in the B sublattice. We therefore need to check whether the above theoretical
understanding also holds for the off-stoichiometric compounds.
In order to mimic these off-stoichiometric situations, we replace one of the
four Ni atoms in the unit cell by Mn, giving rise to the Ni-poor composition
CaMn1.25Ni0.75ReO6, close to the experimental composition. Since all four Ni
sites are equivalent in the unit cell, any one of the four possible choices
gives rise to the same result. Similarly, for CMCRO, an extra Co atom
replacing one of the four Mn atoms in the unit cell is introduced, giving rise
to the composition CaMn0.75Co1.25ReO6. Total-energy calculations show that Co
prefers to occupy the square-planar Mn site (Mn2) over Mn1. Interestingly, it
is found that for CMNRO, even in the presence of off-stoichiometry, the ground
state remains ferromagnetic, with the Mn1, Mn2, Mn@Ni and Ni spins aligned in
parallel. This highlights the dominant role of hybridization-driven magnetism,
as opposed to the super-exchange-driven mechanism, since the former depends
primarily on the positioning of the energy levels and not on the exchange
pathways. Similarly, for CMCRO, even in the presence of off-stoichiometry, the
Mn1 and Mn2 spins remain antiparallel, while the Co spins (both at the
A$^{\prime\prime}$ site and the B site) are found to be oppositely aligned to
Mn2. The magnetic ground states are thus found to be robust, remaining
unaltered even in the presence of off-stoichiometry, as also found
experimentally.
The computed Mn1-Mn2, Mn1-Ni(Co) and Mn2-Ni(Co) exchanges for
CaMn1.25Ni0.75ReO6 and CaMn0.75Co1.25ReO6 are found not to change
significantly ($\sim$ 3-10$\%$) compared to their stoichiometric counterparts
(see Table I), although the off-stoichiometry introduces a few additional
interactions: in CMNRO, Mn@Ni-Mn1 and Mn@Ni-Mn2 replace some of the Ni-Mn1 and
Ni-Mn2 interactions, respectively, and in CMCRO, Co@Mn2-Mn1 and Co@Mn2-Co
replace some of the Mn2-Mn1 and Mn2-Co interactions, respectively. Computation
of these additional interactions shows that the signs of the corresponding
effective interactions are the same as those of the interactions they replace,
with values differing by only 5-7$\%$.
This suggests that the magnetic transition temperature is not drastically
altered by the off-stoichiometry. To check this explicitly, we carried out
Monte Carlo studies of the Ni-poor and Co-rich compounds as well. Within the
3$\times$3$\times$3-unit-cell simulation box, the off-stoichiometric compounds
admit different possible atomic configurations of Mn@Ni and Co@Mn2.
Total-energy calculations show that the extra Mn (Co) atoms at the Ni (Mn2)
sites prefer to be uniformly distributed rather than clustered. Considering a
uniform distribution of the extra Mn (Co) atoms in the 3$\times$3$\times$3
simulation cell, this leads to 152 different configurations. To take this into
account, the MC results are averaged over the atomic configurations.
In conformity with the DFT results, the ground state is found to be
ferromagnetic for CMNRO and ferrimagnetic for CMCRO. The variation of the
moment with temperature for the two cases is shown in Fig. 6. In the presence
of off-stoichiometry, the saturation moments for CMNRO and CMCRO become 31
$\mu_{B}$/unit cell and 20 $\mu_{B}$/unit cell, respectively. The dM/dT
curves, presented in the insets, are similar to those found for the
stoichiometric compounds, confirming that the transition temperature is not
significantly affected by the off-stoichiometry.
## II Summary and Outlook
In this communication, we study the magnetism of systems containing multiple
magnetic sublattices. The study has been motivated by the synthesis of double
double perovskite compounds of general formula
AA$^{\prime}_{0.5}$A$^{\prime\prime}_{0.5}$BB$^{\prime}$O6, having
transition-metal magnetic ions at both A and B sites. The key findings of our
study are summarized in the following.
$\bullet$ Our theoretical analysis, combining first-principles and
model-Hamiltonian approaches, uncovers the microscopic origin of the
counter-intuitive long-range-ordered magnetism in double double perovskite
compounds containing 3d magnetic ions at the A and B sites and 5d magnetic
ions at the B$^{\prime}$ sites, which turns out to be an interplay of the
hybridization-driven multi-sublattice double-exchange and super-exchange
mechanisms of magnetism.
$\bullet$ This interplay relies on the positioning of the $d$ energy levels as
well as on their filling. It triggers ferromagnetic long-range order in CMNRO,
which contains two different Mn ions at the A sites and Ni and Re ions at the
B and B$^{\prime}$ sites, while the replacement of Ni by Co at the B site,
which decreases the filling by one, stabilizes ferrimagnetism in CMCRO,
accounting for the experimental observations [27].
$\bullet$ The spin Hamiltonian capturing the interplay of the
hybridization-driven multi-sublattice double-exchange and super-exchange
mechanisms is parameterized in terms of three exchange constants
$D_{Mn1-Mn2}$, $D_{Mn1-Ni/Co}$ and $D_{Mn2-Ni/Co}$ for the
hybridization-driven multi-sublattice double exchange and another three
exchange constants $J_{Mn1-Mn2}$, $J_{Mn1-Ni/Co}$ and $J_{Mn2-Ni/Co}$ for the
super-exchange, with the parameters derived from first-principles estimates of
the hopping interactions and on-site energies and from total energies of
different spin configurations.
$\bullet$ The temperature-dependent magnetization computed by Monte Carlo
reproduces the measured magnetic transition temperature of CMNRO with
reasonable accuracy. For CMCRO, the competition between the ferro and
antiferro effective interactions manifests as a two-hump structure in dM/dT,
which should be probed further experimentally.
$\bullet$ The calculations are further extended to the off-stoichiometric,
Ni-poor and Co-rich compositions, in order to mimic the experimental
situation. The magnetic properties are found to be retained even in the
presence of off-stoichiometry, since the hybridization-driven multi-sublattice
double exchange, the dominant contributor to the exchange mechanism in CMNRO
and CMCRO, relies on energy-level positioning rather than on exchange
pathways.
The proposed theory of magnetism, being general in nature, should be
applicable to multi-sublattice mixed 3d-4d/5d transition-metal systems in
which one of the transition-metal elements is a large-bandwidth 4d or 5d
element with an exchange splitting significantly smaller than the bandwidth.
With an appropriate choice of 3d and 4d/5d elements, this opens up the
possibility of stabilizing a large-moment ferromagnetic state. Our
first-principles calculations show that this ferromagnetic state with a high
magnetization is furthermore half-metallic, which should be an attractive
possibility for spintronics applications.
## III Methods
The first-principles DFT calculations are carried out using the plane-wave
pseudo-potential method implemented within the Vienna Ab-initio Simulation
Package (VASP) [36]. The exchange-correlation functional is treated within the
generalized gradient approximation (GGA) [37]. The projector-augmented wave
(PAW) potentials [38] are used and the wave functions are expanded in the
plane-wave basis with a kinetic-energy cut-off of 600 eV. Reciprocal-space
integration is carried out with a $k$-space mesh of 6 $\times$ 6 $\times$ 6.
The exchange-correlation beyond GGA is treated within the GGA+$U$ approach,
with the local Coulomb interaction parameterized in terms of the Hubbard $U$
and Hund's coupling $J_{H}$ within the multi-band formulation [39]. For the
double-counting correction, the fully localized limit (FLL) is considered,
since the around-mean-field (AMF) double counting is found to penalize
magnetic states significantly more than its FLL counterpart [40]. The
parameters of the GGA+$U$ calculations are chosen as $U$ = 5 eV and $J_{H}$ =
0.9 eV for Ni/Co, as appropriate for 3d TM atoms, and $U$ = 2 eV and $J_{H}$ =
0.4 eV for Re, as appropriate for 5d TM atoms [41]. The $U$ values were varied
over 1-2 eV and the qualitative results were found to remain unchanged.
In order to extract a few-band tight-binding (TB) Hamiltonian from the full
DFT calculation, $N$-th-order muffin-tin orbital ($N$MTO) calculations are
carried out [28]. A prominent feature of this method is its downfolding
scheme: starting from a full DFT calculation, it defines a few-orbital
Hamiltonian in an energy-selected, effective Wannier-function basis by
integrating out the degrees of freedom that are not of interest. The $N$MTO
technique relies on the self-consistent potential parameters obtained from
linear muffin-tin orbital (LMTO) calculations [42].
The magnetization data obtained from the Monte Carlo simulation of the spin
Hamiltonian are calculated on an N$\times$N$\times$N unit cell of Mn and Ni/Co
atoms; finite-size effects have been checked. The results presented in the
manuscript are obtained from 3$\times$3$\times$3 lattice simulations with
periodic boundary conditions. The magnetic transition temperatures are
estimated from these calculations.
## IV Acknowledgments
The authors acknowledge the support of the DST Nano-mission for the
computational facility used in this study. T.S.-D. acknowledges the J.C. Bose
Fellowship (grant no. JCB/2020/000004) for research support.
## V Author contributions statement
A.H., S.D. and P.S. did the theoretical calculations. A.H. and S.D.
contributed equally. The figures were made by A.H. and S.D. The results were
analyzed by T.S.-D., A.H., S.D. and P.S. T.S.-D. wrote the manuscript. A.H.,
S.D., P.S. and T.S.-D. finalized the manuscript.
## VI Competing financial interests
The authors declare no competing financial interests.
## References
* (1) C. N. R. Rao Annu. Rev. Phys. Chem., 40, 291 (1998).
* (2) A. S. Bhalla, R. Guo, R. Roy, Mater. Res. Innov. 4, 3 (2000).
* (3) G. King, P. M. Woodward, J. Mater. Chem., 20, 5785 (2010).
* (4) T. Saha-Dasgupta, J. Superconductivity and Novel Mag., 26, 1991 (2013).
* (5) S. Vasala, M. Karppinen, Progress in Solid State Chemistry, 43, 1 (2015).
* (6) T. Saha-Dasgupta, Materials Research Express, 7, 1 (2020).
* (7) D. D. Sarma, P. Mahadevan, T. Saha Dasgupta, S. Ray, A. Kumar, Phys. Rev. Lett., 85, 2549 (2000).
* (8) K.-I. Kobayashi, T. Kimura, H. Sawada, K. Terakura, Y. Tokura, Nature (London), 395, 677 (1998).
* (9) H. Das, U. V. Waghmare, T. Saha-Dasgupta, D. D. Sarma, Phys. Rev. Lett., 100, 186402 (2008).
* (10) A. Chattopadhyay and A. J. Millis, Phys. Rev. B 64, 024424 (2001).
* (11) L. Brey, M. J. Calderon, S. Das Sarma and F. Guinea, Phys. Rev. B, 74, 094429 (2006).
* (12) J.L.Alonso,L.A. Fernandez, F. Guinea, F. Lesmes, and V Martin-Mayor, Phys. Rev. B, 67, 214423 (2003).
* (13) J.B. Phillip, P. Majewski, L. Alff, A. Erb, R. Gross, T. Graf, M.S.Brandt, J. Simon, T. Walther, W. Mader, D. Topwal, and D.D. Sarma, Phys. Rev.B, 68, 144431 (2003).
* (14) P. Sanyal, P. Majumdar, Phys. Rev. B, 80, 054411 (2009).
* (15) H. Kato et al., Appl. Phys. Lett. 81, 328 (2002).
* (16) Y. Krockenberger et al., Phys. Rev. B, 75, 020404 (2007).
* (17) J. Li, A. W. Sleight, M. A. Subramanian, Adv. Mater., 17, 2225 (2005).
* (18) H. Das, U. V. Waghmare, T. Saha-Dasgupta, D. D. Sarma, Phys. Rev. B, 79, 144403 (2009).
* (19) P. Sanyal, Phys. Rev. B, 96, 214407 (2017).
* (20) H. Das, P. Sanyal, T. Saha-Dasgupta, D. D. Sarma, Phys. Rev. B, 83, 104418 (2011).
* (21) A. Halder, P. Sanyal, T. Saha-Dasgupta, Phys. Rev. B, 99, 020402 (R) (2019).
* (22) K Samanta, P Sanyal, T Saha-Dasgupta Sci. Rep. 5, 15010 (2015).
* (23) E. Solana-Madruga, A. M. Arévalo-López, A. J. Dos Santos-García, E. Urones-Garrote, D. Avila-Brande, R. Sáez-Puche, J. P. Attfield, Angew. Chem., Int. Ed., 55, 9340 (2016).
* (24) G. M. McNally, A. M. Arévalo-López, P. Kearins, F. Orlandi, P. Manuel, J. P. Attfield, Chem. Mater., 29, 8870 (2017).
* (25) E. Solana-Madruga, A. M. Arévalo-López, A. J. Dos Santos-García, C. Ritter, C. Cascales, R. Sáez-Puche, J. P. Attfield, Phys. Rev. B, 97, 134408 (2018).
* (26) E. Solana-Madruga, A. J. Dos Santos-García, A. M. Arévalo-López, D. Avila-Brande, C. Ritter, J. P. Attfield, R. Sáez-Puche, Dalton Trans., 44, 20441 (2015).
* (27) E. Solana-Madruga, Y. Sun, A. M. Arévalo-López, J. P. Attfield, Chem. Commun., 55, 2605 (2019).
* (28) O. K. Andersen, T. Saha-Dasgupta, Phys. Rev. B, 62, R16219 (2000).
* (29) A. J. Millis, B. I. Shraiman and R. Mueller, Phys. Rev. Lett, 77, 175 (1996); T. V. Ramakrishnan, H. R. Krishnamurthy, S. R. Hassan, and G. Venketeswara Pai, Phys. Rev. Lett. 92, 157203 (2004).
* (30) A. Chattopadhyay and A. J. Millis, Phys. Rev. B, 64, 024424, (2001).
* (31) P.W. Anderson and H. Hasegawa, Phys. Rev. 100, 675 (1955).
* (32) P. -G de Gennes, Phys. Rev. 118, 141 (1960).
* (33) P. W. Anderson, Phys. Rev., 79 350 (1950); J. B. Goodenough, Phys. Rev. 100 564 (1955); J. Kanamori, J. Phys. Chem. Solids. 10 87 (1959).
* (34) R. Das, P. Yanda, A.Sundaresan and D.D. Sarma, Mater. Res. Express6 116122 (2019).
* (35) A. Ali, G. Sharma, Y. Singh, arXiv:1811.07836.
* (36) G. Kresse, J. Furthmüller, Computational Materials Science, 6(1), 15 (1996).
* (37) J. P. Perdew, K. Burke, M. Ernzerhof, Phys. Rev. Lett., 77, 3865 (1996); ibid, 78, 1396(E) (1997).
* (38) P. E. Blöchl, Phys. Rev. B, 50, 17953 (1994).
* (39) S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, A. P. Sutton, Phys. Rev. B, 57(3), 1505 (1998).
* (40) E. R. Ylvisaker, W. E. Pickett, and K. Koepernik, Phys. Rev. B 79, 035103 (2009).
* (41) I. V. Solovyev, P. H. Dederichs, V. I. Anisimov, Phys. Rev. B, 50, 16861 (1994).
* (42) O. K. Andersen, O. Jepsen, Phys. Rev. Lett., 53, 2571 (1984).
Table 1: Estimates of the coupling constants connecting the core spins for the double-exchange and super-exchange mechanisms operative in CMNRO and CMCRO, as estimated employing the super-exchange formula and total-energy calculations of different spin configurations. Negative (positive) signs indicate ferro- (antiferro-) magnetic interactions.

CMNRO | $D$ (meV) | $JS^{2}$ (meV) | Effective
---|---|---|---
Mn1-Mn2 | -91.7 (-94.7) | 47.5 (48.8) | -44.2 (-45.9)
Mn1-Ni | -123.5 (-117.3) | 54.8 (56.1) | -68.7 (-61.2)
Mn2-Ni | -28.6 (-29.6) | 10.8 (9.7) | -17.8 (-19.9)
Mn@Ni-Mn1 | -100.9 | 37.9 | -63.0
Mn@Ni-Mn2 | -50.3 | 30.1 | -20.2
CMCRO | $D$ (meV) | $JS^{2}$ (meV) | Effective
Mn1-Mn2 | 1.9 (2.1) | 48.5 (44.6) | 50.4 ( 46.7)
Mn1-Co | -143.9 (-147.4) | 87.9 (86.1) | -56.0 (-61.3)
Mn2-Co | -18.7 (-19.8) | 41.0 (43.1) | 22.3 (23.3)
Co@Mn2-Mn1 | 2.4 | 47.7 | 50.1
Co@Mn2-Co | -150.3 | 88.1 | -62.2
Figure 1: (Color online) Crystal structure of the stoichiometric CMNRO
compound; the CMCRO compound is isostructural to CMNRO. The left panel shows
the three-dimensional network of the four magnetic ions in the structure, with
Mn at the tetrahedral site (Mn1), Mn at the square-planar site (Mn2), Ni and
Re atoms marked as red, blue, yellow and green balls, respectively. The right
panel shows the oxygen coordination of the four magnetic ions: tetrahedral for
Mn1, square-planar for Mn2, and octahedral for Ni and Re.

Figure 2: (Color online) The GGA+$U$ density of states of CMNRO (top) and
CMCRO (bottom) projected onto the Mn1 d (black, solid), Mn2 d (black, dashed),
Ni eg/Co d (red, solid) and Re t2g (green, solid) states. The zero of the
energy is fixed at the Fermi energy.

Figure 3: (Color online) The energy-level diagram for CMNRO (top) and CMCRO
(bottom) considering Mn1-d/Mn2-d/Ni eg (Co d)/Re t2g in the basis
(hybridization off) and in the massively downfolded Re t2g-only basis
(hybridization on). See text for details.

Figure 4: Super-exchange interactions in CMNRO and CMCRO between the
half-filled $d$ states of Mn1 and Mn2, the half-filled $e_{g}$ states of Ni,
and the half-filled $e_{g}$ and one of the $t_{2g}$ states of Co. The fully
filled levels, which do not contribute to super-exchange, are not shown.

Figure 5: (Color online) Magnetic properties of CMNRO (left) and CMCRO
(right) obtained from Monte Carlo simulation. The top left panels show the
ground-state magnetic structures, while the top right panels show the
derivative of the magnetization as a function of temperature, the minimum
corresponding to the transition temperature of the corresponding compound.
Mn1, Mn2 and Ni (Co) atoms are represented by red, blue and light-pink
(yellow) balls, respectively. The lower panels show the magnetization plotted
as a function of temperature. The inset in the lower right panel shows the
shift of the transition temperature (the minima of the curves) for a monotonic
decrease of $D_{Mn1-Co}$. For details see text.

Figure 6: (Color online) Magnetic properties of CaMn1.25Ni0.75ReO6 (left) and
CaMn0.75Co1.25ReO6 (right) obtained from Monte Carlo simulation. Insets show
the derivative of the magnetization as a function of temperature.
# Elastic $dcs$ of $ep$-scattering fitted via the $dcs$
of $eq$-scatterings with cloud-covering effects
Jingle B. Magallanes (also a researcher at the Premier Research Institute of
Science and Mathematics (PRISM)) and Jinky B. Bornales, Department of Physics,
Mindanao State University - Iligan Institute of Technology, Iligan City 9200,
Philippines; René Luna-García, Centro de Investigación en Computación,
Instituto Politécnico Nacional, México City 07738, México
###### Abstract
The angular-averaged differential cross section ($dcs$) of elastic
electron-proton ($ep$) scattering, covering $Q^{2}<1.0\,GeV^{2}$, was fitted
via combined, modified $eq$-scatterings, where $q$ is a point particle. The
modifications represent cloud-covering effects on $q$. An energy-decaying
ratio ($edr$) was derived by comparing the $dcs_{ep}$ generated from the form
factor data gathered at the Mainz Microtron (A1 Collaboration) and the
Continuous Electron Beam Accelerator Facility (Jefferson Laboratory) with the
$dcs_{eq}$ computed with a modified relativistic recoil factor. The
diminishing cloud layer, $edr$, has a decay rate of $-2.8$ for the data sets
under investigation. The formulated SBM and SEM fitting models use the bare
and effective $u$- and $d$-quark masses, respectively, while SCBM and SCEM
integrate additional considerations. Three comparison methods were used, and
all of them favor the models with the additional considerations; SCEM was the
most favored model overall.
## I Introduction
Electron-nucleon scattering has been used extensively to measure the nucleon's
electromagnetic form factors in order to study its charge and magnetization
distributions [1]. For this, it is important to measure the scattering's
differential cross section ($dcs$), since it is proportional to the
probability for any given reaction or process to occur. The objective of this
study is to demonstrate a fitting model for the angular-averaged $dcs$ of
elastic electron-proton ($ep$) scattering, $dcs_{ep}$, generated from
different form factor data sets covering transfer momenta $Q<1\,GeV$.
Initially, it was thought that fitting the $dcs_{ep}$ through
electron-point-particle ($eq$) scatterings would be impossible, since the
proton is definitely not a point particle, as characterized by its form
factors. However, it becomes possible once cloud-covering effects are put on
the point particle $q$. This is because in low-energy Quantum Chromodynamics
(QCD), where neither perturbation theory nor asymptotic freedom applies, there
are significant collective interactions between the valence and sea quarks,
and their effects take the form of cloud coverings. The valence quarks get
surrounded by a dense concentration of virtual quarks and gluons. When probed
at low energy, this cloud is the high-energy barrier to the core of the
proton.
For the range of transfer momenta under consideration, the $eq$-scattering has
to be masked by modifications mantling the particle. These include a
modification of the recoil factor in $dcs_{eq}$ (a fixed cloud layer) and an
energy-dependent ratio (a diminishing cloud layer) between $dcs_{ep}$ and
$dcs_{eq}$.
## II The Electron-Proton ($ep$) Scattering
Elastic $ep$-scattering is one of the fundamental interactions used in
understanding the structure and build-up of hadronic physics [2]. It is called
Mott, or no-structure ($ns$), scattering when the electron is scattered by a
point-particle nucleus. Electrons are very light; with high energies, they can
penetrate deep into the nucleus. Because they have nonzero spin, they also
couple to the nuclear magnetic field, an effect carried by the final term of
the $dcs$ given in Equation 1. This equation also contains the ratio between
the final ($E^{\prime}$) and initial ($E$) energies of the electron, called
the relativistic recoil factor of the nucleus. The Mott cross section is
denoted by $\sigma_{ns}$:
$\displaystyle{\frac{d\sigma}{d\Omega}}_{Mott}$ $\displaystyle=\sigma_{ns}$
(1)
$\displaystyle=\frac{(Z_{1}Z_{2}\alpha)^{2}}{4k^{2}sin^{4}\left(\frac{\theta}{2}\right)}\left(\frac{E^{\prime}}{E}\right)\left\\{1-v^{2}sin^{2}\left(\frac{\theta}{2}\right)\right\\}$
where
$\frac{E^{\prime}}{E}=\frac{1}{1+\frac{2E}{M}sin^{2}\frac{\theta}{2}}$ (2)
and
$E-E^{\prime}=\frac{-\overline{q}^{2}}{2M}=\frac{Q^{2}}{2M}$ (3)
with $M$ being the mass of the nucleon. In order to probe the proton, the
electron has to release a virtual photon with an energy equal to the
difference between the electron's initial and final energies, given by
Equation 3, where $\overline{q}^{2}$ is the square of the transfer momentum
and $-\overline{q}^{2}=Q^{2}$.
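Equations 2 and 3 fix the elastic kinematics completely; a small numerical sketch, assuming the standard proton mass and an MAMI-like beam energy of 855 MeV:

```python
import math

M_P = 0.9383   # proton mass in GeV (assumed standard value)

def recoil_factor(E, theta):
    """E'/E = 1 / (1 + (2E/M) sin^2(theta/2)), Eq. 2."""
    return 1.0 / (1.0 + (2.0 * E / M_P) * math.sin(theta / 2.0) ** 2)

def Q2(E, theta):
    """Q^2 = 2 M (E - E'), Eq. 3, with E' eliminated via Eq. 2."""
    Ep = E * recoil_factor(E, theta)
    return 2.0 * M_P * (E - Ep)

E_BEAM = 0.855   # GeV, an MAMI-like beam energy
for deg in (30, 60, 90, 120):
    th = math.radians(deg)
    print(f"theta = {deg:3d} deg   E'/E = {recoil_factor(E_BEAM, th):.3f}   "
          f"Q^2 = {Q2(E_BEAM, th):.3f} GeV^2")
```

As a consistency check, the same $Q^{2}$ follows from the equivalent form $Q^{2}=4EE^{\prime}\sin^{2}(\theta/2)$.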
Electron scattering has been studied in depth over the years, and there are
two cases: elastic scattering, characterized by the electromagnetic form
factors, and deep inelastic scattering, characterized by the structure
functions. The electromagnetic form factors of the proton provided some of the
first information on its size and on its distributions of charge and
magnetization. Moreover, the observation of unexpected behavior in form
factors and structure functions has also brought new understanding of the
strong interaction.
The electron being a point-particle has the simple vertex, $\gamma_{\mu}$, and
its current takes the form $j_{\mu}=-e\bar{u}(k^{\prime})\gamma_{\mu}u(k)$
while the proton has a vertex, $\Gamma^{\mu}$, with a current expressed using
form factors parameterizing its internal structure. Also, the proton current
must be a Lorentz-invariant four-vector that satisfies the parity and current
conservation of the electromagnetic interaction. Hence, for a single-photon
exchange, two form factors are allowed in the vertex and the current is given
by
$\displaystyle J^{\mu}$ $\displaystyle=e\bar{v}(p^{\prime})\Gamma^{\mu}v(p)$
(4)
$\displaystyle=e\bar{v}(p^{\prime})\left[F_{1}(q^{2})\gamma^{\mu}+\frac{i\kappa}{2M}F_{2}(q^{2})\sigma^{\mu\nu}q_{\nu}\right]v(p)$
where $F_{1}(q^{2})$ is the Dirac form factor corresponding to the
helicity-conserving current; $F_{2}(q^{2})$ is the Pauli form factor
corresponding to the helicity-flip current; $\kappa=1.793\mu_{N}$ is the
proton anomalous magnetic moment; $M$ is the proton mass; and
$\sigma_{\mu\nu}=\frac{i}{2}\left[\gamma_{\mu},\gamma_{\nu}\right]$. For
$q^{2}\rightarrow 0$, $F_{1}(0)=F_{2}(0)=1.0$ in the non-relativistic limit
and the proton is treated as a point particle, the virtual photon being
insensitive to the proton's internal structure.
The $dcs$ becomes
$\displaystyle\frac{d\sigma}{d\Omega}$ $\displaystyle=$
$\displaystyle\frac{|j_{\mu}\frac{1}{q^{2}}J_{\mu}|^{2}}{4\left((k\cdot p)^{2}-m^{2}M^{2}\right)}(2\pi)^{4}\delta^{4}(k^{\prime}-k+p-p^{\prime})$
(5)
$\displaystyle\times\frac{d^{3}k^{\prime}d^{3}p^{\prime}}{(2\pi)^{3}2E^{\prime}(2\pi)^{3}2(M+\omega)}.$
where conservation of momentum is ensured by the delta function.
Integrating over the relevant variables, averaging over initial spin states
and summing over final ones, the $dcs$ as a function of the scattering angle
$\theta$ becomes
$\displaystyle\frac{d\sigma}{d\Omega}=\frac{(Z_{1}Z_{2}\alpha)^{2}}{4k^{2}sin^{4}\left(\frac{\theta}{2}\right)}\left(\frac{E^{\prime}}{E}\right)cos^{2}\frac{\theta}{2}$
$\displaystyle\times\left[\left(F_{1}^{2}+\frac{\kappa^{2}Q^{2}}{4M^{2}}F_{2}^{2}\right)+\frac{Q^{2}}{2M^{2}}\left(F_{1}+\kappa
F_{2}\right)^{2}tan^{2}\frac{\theta}{2}\right].$ (6)
This simplifies to the structureless Mott cross section multiplied by the form
factor term, where $(1-v^{2}sin^{2}\frac{\theta}{2})\rightarrow
cos^{2}\frac{\theta}{2}$ for relativistic electrons. If the proton were a
point charge, its $dcs$ would simply have been
$\frac{d\sigma}{d\Omega}=\frac{(Z_{1}Z_{2}\alpha)^{2}}{4k^{2}sin^{4}\left(\frac{\theta}{2}\right)}\left(\frac{E^{\prime}}{E}\right)\left[cos^{2}\frac{\theta}{2}+\frac{Q^{2}}{2M^{2}}sin^{2}\frac{\theta}{2}\right].$
(7)
To avoid the interference between $F_{1}$ and $F_{2}$ in Equation 6, the
structure-dependent part of the cross section can be rewritten in terms of the
electric and magnetic form factors $G_{E}(Q^{2})$ and $G_{M}(Q^{2})$ [3] where
$G_{E}(Q^{2})=F_{1}(Q^{2})-\kappa\tau F_{2}(Q^{2})$ and
$G_{M}(Q^{2})=F_{1}(Q^{2})+\kappa F_{2}(Q^{2})$. Then, with
$\tau=Q^{2}/4M^{2}$, the $dcs$ becomes
$\displaystyle\frac{d\sigma}{d\Omega}=$
$\displaystyle\frac{(Z_{1}Z_{2}\alpha)^{2}}{4k^{2}sin^{4}\left(\frac{\theta}{2}\right)}\left(\frac{E^{\prime}}{E}\right)cos^{2}\frac{\theta}{2}$
(8) $\displaystyle\times\left[\frac{G_{E}^{2}+\tau G_{M}^{2}}{1+\tau}+2\tau
G_{M}^{2}tan^{2}\frac{\theta}{2}\right]$
which can be further simplified to
$\frac{d\sigma}{d\Omega}=\sigma_{ns}\frac{1}{1+\tau}\left[G_{E}^{2}+\frac{\tau}{\epsilon}G_{M}^{2}\right]$
(9)
where
$\displaystyle 1/\epsilon$ $\displaystyle=$
$\displaystyle[1+2(1+Q^{2}/4M^{2})tan^{2}(\theta/2)]$ (10) $\displaystyle=$
$\displaystyle[1+2(1+\tau)tan^{2}(\theta/2)];$
$\epsilon$ is an angular variable. In the non-relativistic limit,
$Q\rightarrow 0$, these form factors are just the Fourier transforms of the
charge and magnetization distributions [4],
$F_{nr}(Q^{2})=\int\rho(\overrightarrow{r})e^{i\overrightarrow{Q}\cdot\overrightarrow{r}}d^{3}\overrightarrow{r}.$
(11)
The dipole form factor,
$G_{D}(Q^{2})=\frac{1}{(1+a^{2}Q^{2})^{2}}$ (12)
results if the charge distribution is exponential,
$\rho(r)=\rho_{0}e^{-r/a}$, where $a$ sets the scale of the proton radius,
with $a^{2}=(0.71\,GeV^{2})^{-1}$. If the charge and magnetic-moment
distributions are the same, then their transforms will be as well; and in
general the form factor ratio will be
$\frac{\mu G_{E}(Q^{2})}{G_{M}(Q^{2})}=1.0,$ (13)
which is known as form factor scaling. At low $Q^{2}$, where the
electric and magnetic root-mean-square (rms) radii can be determined [5], the
form factors can be expanded as
$\frac{G(Q^{2})}{G(0)}=1-\frac{1}{6}\left<r^{2}\right>Q^{2}+\frac{1}{120}\left<r^{4}\right>Q^{4}-...\>.$
(14)
The rms radius can be determined from the slope of the form factors at
$Q^{2}=0$ via
$\left<r^{2}\right>=-\frac{6}{G(0)}\frac{dG(Q^{2})}{dQ^{2}}|_{Q^{2}=0}.$ (15)
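Applied to the dipole form of Equation 12, the slope formula of Equation 15 gives $\left<r^{2}\right>=12a^{2}$. A short sketch (illustrative only; it assumes the conversion constant $\hbar c\approx 0.197\,GeV\,fm$) recovers the familiar $\sim 0.81\,fm$ dipole charge radius:

```python
import math

HBARC = 0.19733  # hbar*c in GeV*fm, used to convert GeV^-1 to fm

def G_dipole(Q2, a2=1 / 0.71):
    """Dipole form factor of Equation 12; a^2 is in GeV^-2."""
    return 1.0 / (1.0 + a2 * Q2) ** 2

def rms_radius_dipole(a2=1 / 0.71):
    """Equation 15 for the dipole: dG/dQ^2 at Q^2=0 is -2a^2, so <r^2> = 12 a^2."""
    r2_gev = 12.0 * a2                # <r^2> in GeV^-2
    return math.sqrt(r2_gev) * HBARC  # rms radius in fm

r = rms_radius_dipole()  # ~0.81 fm
```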
## III Low energy form factor data
Form factors can be extracted via the Rosenbluth Extraction Method [6, 7, 8]. For
the world data at lower energies [7, 9, 10, 11, 12], the form factor ratio
$\frac{\mu G_{E}}{G_{M}}$ is $\sim 1.0$, consistent with form factor scaling.
Other extraction methods are the Polarization Transfer Method [13] and the
Super-Rosenbluth Method [14]. Previous Rosenbluth data used the Bosted global
fit [15], valid for $0<Q^{2}<7\,GeV^{2}$; more recently, a Global Fitting
Procedure [1, 16] was applied to the world data, valid for $Q^{2}$ up to
$\sim 30\,GeV^{2}$. An earlier attempt at separating the quark flavor
contributions to the elastic form factors at low energy is detailed in [17].
The low momentum transfer data presented in [18, 19] were determined from
measurements at the Mainz Microtron (MAMI) using the 3-spectrometer facility
of the A1-Collaboration, taken in three periods between 2006 and 2007 with
beam energies of $180$, $315$, $450$, $585$, $720$ and $855\,MeV$. The
experiment covers $0.004\,GeV^{2}<Q^{2}<1.0\,GeV^{2}$ with counting rate
uncertainties below $0.2\%$ for most of the data points [5]. The form factors
were separated by fitting a wide selection of models directly to the measured
cross sections, and extensive simulations were done to test the validity of
this method; the standard Rosenbluth extraction technique was used to
cross-check the results. Form factors determined via the Rosenbluth Separation
Method, the Friedrich-Walcher Model, the Polynomial Model and the Spline Model
were used in this study. The details pertaining to the measurements and
analyses can be found in [19].
For the experiment presented in [8], high-precision proton Rosenbluth
extractions using beam energies from $849\,MeV$ to $5.157\,GeV$ were
performed, covering a large range of transfer momenta,
$0.40\,GeV^{2}<Q^{2}<5.76\,GeV^{2}$, and focusing on the extremes of
$\epsilon$ where two-photon exchange (TPE) effects are most pronounced. The
experiment reached higher momentum transfers than previous proton Rosenbluth
experiments and provided higher precision at low momentum transfer. To
reconcile the discrepancy with polarization data, the missing TPE corrections,
which are difficult to calculate and are not expected to be valid at low
$Q^{2}$, were considered alongside the results from other experiments. For
this study, however, only $Q^{2}<1\,GeV^{2}$ was considered, a region where
TPE contributions are small and the corresponding corrections are less
critical. For the purposes of comparing the models with the available data,
only some of the Rosenbluth extracted values were included. The details
pertaining to the experiment and data analyses are found in [8].
## IV Implementations
The averaged multiple-angle $dcs_{ep}$ was fitted by the modified $dcs$ of
$eq$-scattering at transfer momenta below $1\,GeV^{2}$, where $q$ is a point
particle. Since the proton is an extended particle, cloud-covering effects
have to be applied to $q$; this also requires $m_{q}<m_{p}$ in terms of
particle masses. The quark flavor composition of the proton ($uud$) was the
basis for the choice of masses of the $q$'s in the fitting models, taking the
quark masses and their corresponding fractional charges. Accordingly,
effective (low-energy) quark masses [20, 21] were assigned to $q$ for the
transfer momenta in consideration, but bare quark masses [20, 22] could also
be assigned, since the cloud effect is already represented by the
modifications. The relativistic recoil factor of the angle- and spin-averaged
$dcs$ of $eq$-scattering was modified using the proton mass as a parameter.
Overlapping of the electron wave functions, spin-spin interactions, and color
interactions were also considered in constructing the fitting models, although
only in an arbitrary, non-quantitative way so far. The form factors derived
from experiments at the Mainz Microtron (MAMI) [18, 19] and the Continuous
Electron Beam Accelerator Facility (CEBAF, JLab) [8] were used to generate the
data for $dcs_{ep}$.
The angle-averaged $dcs_{ep}$ were generated via Equation 8 in the ROOT Data
Analysis Framework [23]. Raw $dcs$ of $eq$-scattering with $q$ carrying the
mass of the $u$-quark ($dcs_{eu}$) and of the $d$-quark ($dcs_{ed}$) were
simultaneously generated using the same random numbers via Equation 7. A total
of $2000$ data points each for $dcs_{ep}$, $dcs_{eu}$ and $dcs_{ed}$ were
gathered at various random scattering angles from $0^{\circ}$ to $180^{\circ}$
for each transfer momentum in the experimental data considered. The
energy-decaying ratios between $dcs_{ep}$ and $dcs_{eq}$, which decrease as
the photon energy increases, were then determined and incorporated back into
the $dcs_{eq}$, modifying them further. New data points were then generated
and re-analyzed.
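A minimal sketch of this generation step, assuming the standard elastic-recoil kinematics $E'=E/(1+\frac{2E}{M}\sin^{2}\frac{\theta}{2})$, $Q^{2}=4EE'\sin^{2}\frac{\theta}{2}$, and $k\approx E$ for ultrarelativistic electrons (names, the angular cut away from $0^{\circ}$, and the beam energy are illustrative, not the exact ROOT implementation):

```python
import numpy as np

ALPHA = 1 / 137.036  # fine-structure constant

def dcs(E, theta, M, Z1=1, Z2=1):
    """Angle- and spin-averaged elastic dcs of Equation 7, natural units (GeV)."""
    s2 = np.sin(theta / 2) ** 2
    Eprime = E / (1 + 2 * E / M * s2)  # elastic recoil kinematics
    Q2 = 4 * E * Eprime * s2           # squared four-momentum transfer
    mott = (Z1 * Z2 * ALPHA) ** 2 / (4 * E ** 2 * s2 ** 2) * (Eprime / E)
    return mott * (1 - s2 + Q2 / (2 * M ** 2) * s2)  # cos^2 = 1 - sin^2

# draw random scattering angles, avoiding the forward divergence at theta = 0
rng = np.random.default_rng(0)
thetas = rng.uniform(np.deg2rad(1), np.deg2rad(179), 2000)
avg = dcs(0.72, thetas, M=0.938).mean()  # e.g. a 720 MeV beam on a proton-mass target
```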
Equation 2 is the relativistic recoil factor, which accounts for the recoil of
the target particle during the interaction [4, 24]. Its modification changes
the $dcs$ significantly, acting like a fixed layer of cloud: it shifts the
$dcs_{eq}$ distribution vertically, closer to $dcs_{ep}$, when the mass used
is similar to that of the proton. At a particular $Q^{2}$, and considering an
angle-averaged $dcs$, the recoil factor is a constant. This introduces the
proton mass as a parameter of the fitting model.
Correspondingly, the averaged points at the same transfer momentum for
$dcs_{ep}$ and $dcs_{eq}$ were compared. The ratio between $dcs_{ep}$ and
$dcs_{eu}$, and even more so between $dcs_{ep}$ and $dcs_{ed}$, decreases
exponentially with $Q^{2}$. A dimensionless energy-decaying ratio ($edr$) of
the form $Ae^{-rQ^{2}}$ was found for the investigated Rosenbluth form factor
data sets, with $A$ the amplitude and $r$ the decay rate; see TABLE 1. The
amplitudes of $edr_{d**}$ and $edr_{u**}$ differ, but the decay rate within
each data set is the same. It should be noted that the data set from [19] has
27 selected data points while that from [8] has only 6; the experiments from
which the data sets were taken have different considerations.
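The $Ae^{-rQ^{2}}$ form can be fitted, for example, by linear least squares on the logarithm of the ratio. A sketch follows; the synthetic check simply reuses the $edr_{ubs}$ parameters from TABLE 1 and is not the original fit code:

```python
import numpy as np

def fit_edr(Q2, ratio):
    """Fit ratio ~ A * exp(-r * Q2) by a linear least-squares fit to log(ratio)."""
    slope, intercept = np.polyfit(Q2, np.log(ratio), 1)
    return np.exp(intercept), -slope  # (A, r)

# synthetic check: an exact exponential with A = 3.50, r = 2.8 is recovered
Q2 = np.linspace(0.05, 1.0, 27)
A, r = fit_edr(Q2, 3.50 * np.exp(-2.8 * Q2))
```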
Combined fitting models, with contributions from both $dcs_{eu}$ and
$dcs_{ed}$, underpin the quark flavor composition of the proton. Additionally,
the weights of the modified $dcs_{eu}$ and $dcs_{ed}$ contributions can be
affected by the overlapping of the electron's initial and final wave
functions, the spin-spin interactions of the electron and proton, the color
interactions of the quarks inside the proton, and other considerations. For
instance, the contributions can be arbitrarily set to $80\%$ instead of $2/3$
for $dcs_{eu}$ and $20\%$ instead of $1/3$ for $dcs_{ed}$.
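The resulting combination can be sketched as a weighted sum. The helper below is hypothetical; the default amplitudes and decay rate are the $edr_{ubs}$/$edr_{dbs}$ values of TABLE 1, and `w_u` is the $u$-quark weight ($2/3$ from flavor counting, or $0.8$ with the extra criteria):

```python
import numpy as np

def model_dcs(Q2, dcs_eu, dcs_ed, w_u=2 / 3, A_u=3.50, A_d=14.0, r=2.8):
    """Weighted edr-modified combination of u- and d-quark dcs (cf. TABLE 5).

    The d-quark weight is 1 - w_u."""
    edr_u = A_u * np.exp(-r * Q2)
    edr_d = A_d * np.exp(-r * Q2)
    return w_u * edr_u * dcs_eu + (1 - w_u) * edr_d * dcs_ed
```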
Table 1: Energy-Decaying Ratio: The $edr$ was derived from the comparison of the data gathered by the known method of extracting form factors at low transfer momentum (Rosenbluth Extraction Method) to the $dcs_{eu}$ and $dcs_{ed}$ at fixed transfer momentum.

| Form Factor Data Set | $A$ | $r$ | Form | Notation |
|---|---|---|---|---|
| Rosenbluth Separation Data [19], $ep$-$eu$ with bare mass | 3.50 | 2.8 | $3.50e^{-2.8Q^{2}}$ | $edr_{ubs}$ |
| Rosenbluth Separation Data [19], $ep$-$ed$ with bare mass | 14.0 | 2.8 | $14.0e^{-2.8Q^{2}}$ | $edr_{dbs}$ |
| Rosenbluth Separation Data [19], $ep$-$eu$ with effective mass | 2.40 | 2.8 | $2.40e^{-2.8Q^{2}}$ | $edr_{ues}$ |
| Rosenbluth Separation Data [19], $ep$-$ed$ with effective mass | 9.60 | 2.8 | $9.60e^{-2.8Q^{2}}$ | $edr_{des}$ |
| Rosenbluth Extraction Data [8], $ep$-$eu$ with bare mass | 1.85 | 1.8 | $1.85e^{-1.8Q^{2}}$ | $edr_{ube}$ |
| Rosenbluth Extraction Data [8], $ep$-$ed$ with bare mass | 7.40 | 1.8 | $7.40e^{-1.8Q^{2}}$ | $edr_{dbe}$ |
| Rosenbluth Extraction Data [8], $ep$-$eu$ with effective mass | 1.45 | 1.8 | $1.45e^{-1.8Q^{2}}$ | $edr_{uee}$ |
| Rosenbluth Extraction Data [8], $ep$-$ed$ with effective mass | 5.80 | 1.8 | $5.80e^{-1.8Q^{2}}$ | $edr_{dee}$ |
## V Results
When probed with very low energy, most if not all hadrons appear as point
particles; a gradual increase in the probe energy reveals that they are
actually extended particles. At low energy, the valence quarks are
cloud-covered constituent quarks and the proton is a lump of clouds with an
extended size, which is difficult to describe without increasing the energy of
the photon probe. The cloud, however, can be treated as an energy barrier
through the core of the proton that is diminished as the probe energy
increases.
TABLE 1 tabulates the $edr$ for the Rosenbluth data sets [8, 19]. The
amplitudes of the $edr$ were derived by separately comparing $dcs_{eu}$ and
$dcs_{ed}$ to $dcs_{ep}$. Reconciling the results of point-to-point
comparisons at different transfer momenta led to a consensus amplitude ratio
of $\sim 4$. One critical reason is that, at very low transfer momenta, the
ratio between $dcs_{eu}$ and $dcs_{ed}$ is predominantly set by the ratio of
the squares of their respective charges; thus, to close in on $dcs_{ep}$,
$dcs_{ed}$ has to be intensified by about four times as much as $dcs_{eu}$.
However, as the transfer momentum increases, it also eventually affects the
$dcs$ ratio, in addition to the effects contributed by the masses assigned to
the point particles; this aspect is open for further investigation. Moreover,
the amplitudes for $edr_{*e*}$ are smaller than those for $edr_{*b*}$ since,
in the range of transfer momenta considered, the particles with effective
masses presumably have thinner clouds than those carrying their bare masses.
The decay rate of the diminishing cloud-effect layer, $edr$, is constant
within each data set. It can be seen, however, that the decay rate for the
Rosenbluth form factors in [19] is greater than that in [8]; this is
speculated to be caused by the experimental set-up considerations, the
statistical data size, or both.
The $dcs_{ep}$ generated from the investigated Rosenbluth form factor data
sets are compared to $edr_{u**}dcs_{eu}$ and $edr_{d**}dcs_{ed}$, and three
comparisons were performed: the Ratio Test (averaging the ratios between the
corresponding generated experimental data and fitting data), in TABLE 2; the
Absolute Difference (averaging the absolute differences between the
corresponding data), in TABLE 3; and the Chi Test (the square root of the
average of the squared differences between the corresponding data), in
TABLE 4. Other form factor data sets were also used for comparison, such as
those determined by the Friedrich-Walcher, Polynomial and Spline models at the
$68.3\%$ confidence level. The description of the other form factor models and
the parameters for their best fits are found in chapter 7 and appendix J of
[19].
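The three comparison tests, as described, amount to the following (a minimal sketch; `exp` and `fit` stand for the generated experimental and fitting $dcs$ arrays):

```python
import numpy as np

def ratio_test(exp, fit):
    """Average of point-by-point ratios (the TABLE 2-style comparison)."""
    return np.mean(np.asarray(exp) / np.asarray(fit))

def abs_difference(exp, fit):
    """Average of point-by-point absolute differences (TABLE 3-style)."""
    return np.mean(np.abs(np.asarray(exp) - np.asarray(fit)))

def chi_test(exp, fit):
    """Root-mean-square of point-by-point differences (TABLE 4-style)."""
    d = np.asarray(exp) - np.asarray(fit)
    return np.sqrt(np.mean(d ** 2))
```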
For the Ratio Test in TABLE 2, the $dcs_{ep}$ generated from the form factors
of the models in [19] were closer to the modified $dcs_{eu}$ than to the
modified $dcs_{ed}$ when the $q$'s assume bare masses, with the exception of
the Rosenbluth Extraction Data of [8]. As expected, the $dcs_{ep}$ generated
via the Rosenbluth extraction method are the ones closest to
$edr_{u**}dcs_{eu}$ and $edr_{d**}dcs_{ed}$, compared with those generated
from the other data sets. However, the corresponding numbers in TABLE 2 are
not in agreement among themselves, which could be attributed to the
differences between the experimental set-ups from which the two data sets were
taken: the data in [19] come from a set-up intended for measurements at low
beam energies, while those in [8] were measured with a set-up intended for
higher beam energies.
Table 2: Ratio Test: The average ratio between the $dcs_{ep}$ generated from the different data sets to their corresponding $dcs_{eu}$ and $dcs_{ed}$ with $edr$ where bare quark masses (BM) are used and, separately, for effective quark masses (EM).

| Form Factor Data Set | $ep$-$eu$ (BM) | $ep$-$ed$ (BM) | $ep$-$eu$ (EM) | $ep$-$ed$ (EM) |
|---|---|---|---|---|
| Rosenbluth Extraction [8] | 0.96964 | 0.96971 | 1.0003 | 0.99732 |
| Rosenbluth Separation [19] | 1.0167 | 1.0173 | 0.98951 | 0.98742 |
| Friedrich-Walcher Model [19] | 1.0504 | 1.0507 | 1.1520 | 1.1490 |
| Polynomial Model [19] | 1.2053 | 1.2056 | 1.3455 | 1.3420 |
| Spline Model [19] | 1.2134 | 1.2138 | 1.3558 | 1.3521 |
For the Absolute Difference in TABLE 3, the $dcs_{ep}$ generated from the
different data sets agree better with $edr_{*e*}dcs_{e*}$ than with
$edr_{*b*}dcs_{e*}$, since the differences are much smaller for the $dcs$ in
which the quarks assume effective masses. It can also be seen that all the
$dcs_{ep}$ agree better with $edr_{u**}dcs_{eu}$ than with
$edr_{d**}dcs_{ed}$, except for the Rosenbluth Extraction Data [8]. Among the
data sets from [19], the $dcs_{ep}$ generated from the Friedrich-Walcher model
has the lowest average absolute difference.
Table 3: Absolute Difference: The average absolute difference between the $dcs_{ep}$ generated from the different data sets and their corresponding $dcs_{eu}$ and $dcs_{ed}$ with $edr$ where bare quark masses (BM) are used and, separately, for the effective quark masses (EM).

| Form Factor Data Set | $ep$-$eu$ (BM) $\times 10^{-6}$ | $ep$-$ed$ (BM) $\times 10^{-6}$ | $ep$-$eu$ (EM) $\times 10^{-6}$ | $ep$-$ed$ (EM) $\times 10^{-6}$ |
|---|---|---|---|---|
| Rosenbluth Extraction [8] | 6.6196 | 6.6253 | 2.3954 | 2.1956 |
| Rosenbluth Separation [19] | 832.92 | 841.93 | 225.39 | 238.93 |
| Friedrich-Walcher Model [19] | 737.26 | 744.84 | 197.24 | 204.42 |
| Polynomial Model [19] | 738.73 | 746.31 | 204.99 | 212.17 |
| Spline Model [19] | 739.24 | 746.80 | 204.82 | 211.86 |
For the Chi Test in TABLE 4, the $dcs_{ep}$ generated from the different data
sets agree better with $edr_{*e*}dcs_{e*}$ than with $edr_{*b*}dcs_{e*}$,
since the deviations are much smaller for the $dcs$ in which the point
particles assume effective quark masses. Again, all the $dcs_{ep}$ agree
better with $edr_{u**}dcs_{eu}$ than with $edr_{d**}dcs_{ed}$, except for the
Rosenbluth Extraction Data [8]. Expectedly, among the data sets from [19], the
$dcs_{ep}$ generated from the Rosenbluth Separation Data is the most favored
by the Chi Test.
Table 4: Chi Test: The Chi Test between the $dcs_{ep}$ generated from the different data sets from their corresponding $dcs_{eu}$ and $dcs_{ed}$ with $edr$ where bare quark masses (BM) are used and, separately, for effective quark masses (EM).

| Form Factor Data Set | $ep$-$eu$ (BM) $\times 10^{-6}$ | $ep$-$ed$ (BM) $\times 10^{-6}$ | $ep$-$eu$ (EM) $\times 10^{-6}$ | $ep$-$ed$ (EM) $\times 10^{-6}$ |
|---|---|---|---|---|
| Rosenbluth Extraction [8] | 8.9374 | 8.9499 | 4.2973 | 3.8496 |
| Rosenbluth Separation [19] | 1647.3 | 1666.2 | 375.85 | 394.87 |
| Friedrich-Walcher Model [19] | 2565.1 | 2591.5 | 603.56 | 619.44 |
| Polynomial Model [19] | 2557.6 | 2584.0 | 611.84 | 627.73 |
| Spline Model [19] | 2558.0 | 2584.1 | 612.10 | 627.25 |
Considering Equation 7, the $edr$, the weight contribution from the quark
flavor composition and, additionally, other criteria, four fitting models were
formulated (see TABLE 5). The first is the Spin Bare Mass (SBM) model, which
takes into account the respective contributions of $edr_{*bs}$ and
$dcs_{e*}$. The second, Spin with other Criteria Bare Mass (SCBM), is the SBM
model with the other considerations included. The third, Spin Effective Mass
(SEM), has lower amplitudes than SBM and uses the effective quark masses. The
fourth, Spin with other Criteria Effective Mass (SCEM), is the SEM model with
the same other criteria included in SCBM.
Table 5: The $dcs_{eq}$ Models: The four models include SBM, SCBM, SEM and SCEM and their forms.

| Model | Form |
|---|---|
| Spin Bare Mass (SBM) | $(2/3)3.50e^{-2.8Q^{2}}dcs_{eu}+(1/3)14.0e^{-2.8Q^{2}}dcs_{ed}$ |
| Spin with other Criteria Bare Mass (SCBM) | $(4/5)3.50e^{-2.8Q^{2}}dcs_{eu}+(1/5)14.0e^{-2.8Q^{2}}dcs_{ed}$ |
| Spin Effective Mass (SEM) | $(2/3)2.40e^{-2.8Q^{2}}dcs_{eu}+(1/3)9.60e^{-2.8Q^{2}}dcs_{ed}$ |
| Spin with other Criteria Effective Mass (SCEM) | $(4/5)2.40e^{-2.8Q^{2}}dcs_{eu}+(1/5)9.60e^{-2.8Q^{2}}dcs_{ed}$ |
The Ratio Test in TABLE 6, the Absolute Difference in TABLE 7 and the Chi Test
in TABLE 8 compare the four fitting models with the corresponding $dcs_{ep}$
generated from the different form factor data sets listed. The plots of the
$dcs_{ep}$ from the Rosenbluth data sets and of all four models almost
coincide. It can be seen in TABLE 6 that, in general, the $dcs_{ep}$ are in
agreement with SCBM for the Ratio Test; on the other hand, both $dcs_{ep}$
from the Rosenbluth form factor data sets are in close agreement with SCEM.
Table 6: Ratio Test: The average ratio between the $dcs_{ep}$ of the different data sets to their corresponding $dcs_{eq}$ of the different models.

| Form Factor Data Set | SBM | SCBM | SEM | SCEM |
|---|---|---|---|---|
| Rosenbluth Extraction [8] | 0.96967 | 0.96966 | 0.99933 | 0.99973 |
| Rosenbluth Separation [19] | 1.0169 | 1.0168 | 0.98881 | 0.98909 |
| Friedrich-Walcher Model [19] | 1.0505 | 1.0505 | 1.1510 | 1.1514 |
| Polynomial Model [19] | 1.2054 | 1.2053 | 1.3443 | 1.3448 |
| Spline Model [19] | 1.2135 | 1.2135 | 1.3546 | 1.3550 |
For the comparison using the Absolute Difference in TABLE 7, SCBM is favored
over SBM by all the $dcs_{ep}$ generated from the different form factor data
sets. In general, SCEM is also favored by the generated $dcs_{ep}$, except
those generated from the Rosenbluth Extraction Data of [8], which could be due
to the experimental parameters under consideration. With the numbers given in
this table, SCEM is the most favored model, since its average absolute
difference is smaller than that of SCBM; both fitting models feature the other
additional criteria.
Table 7: Absolute Difference: The average absolute difference between the $dcs_{ep}$ of the different data sets and their corresponding $dcs_{eq}$ of the different models—Spin Bare Mass (SBM), Spin with other Criteria Bare Mass (SCBM), Spin Effective Mass (SEM) and Spin with other Criteria Effective Mass (SCEM).

| Form Factor Data Set | SBM $\times 10^{-6}$ | SCBM $\times 10^{-6}$ | SEM $\times 10^{-6}$ | SCEM $\times 10^{-6}$ |
|---|---|---|---|---|
| Rosenbluth Extraction [8] | 6.6215 | 6.6207 | 2.3035 | 2.3403 |
| Rosenbluth Separation [19] | 835.92 | 834.72 | 229.90 | 228.09 |
| Friedrich-Walcher Model [19] | 739.79 | 738.78 | 199.64 | 198.68 |
| Polynomial Model [19] | 741.26 | 740.25 | 207.38 | 206.42 |
| Spline Model [19] | 741.76 | 740.76 | 207.16 | 206.22 |
The plots in FIG. 1, FIG. 2 and FIG. 3 show the $dcs_{ep}$ of the form factors
derived from the Friedrich-Walcher, Spline and Polynomial models,
respectively, together with the formulated fitting models. Of these three data
sets, the generated $dcs_{ep}$ from the Friedrich-Walcher form factors has the
smallest average absolute difference. Looking at FIG. 2 and FIG. 3, the last
two data points of the data sets diverge markedly from the models, which could
be attributed to the limitations of the experimental set-up and of the fitting
parameters when the form factors were derived. It is also only up to this
region that the formulated fitting models are expected to be valid.
Figure 1: The plot shows the generated $dcs_{ep}$ (black $\bullet$) using the
form factors from Friedrich-Walcher model [19] versus $Q^{2}$. They were
compared to (a) $dcs_{SBM}$ ($\blacksquare$), (b) $dcs_{SCBM}$
($\blacktriangle$), (c) $dcs_{SEM}$ ($\blacktriangledown$) and (d)
$dcs_{SCEM}$ ($\blacklozenge$), showing a pronounced agreement. Figure 2: The
plot shows the generated $dcs_{ep}$ (black $\bullet$) using the form factors
from Spline model [19] versus $Q^{2}$. They were compared to (a) $dcs_{SBM}$
($\blacksquare$), (b) $dcs_{SCBM}$ ($\blacktriangle$), (c) $dcs_{SEM}$
($\blacktriangledown$) and (d) $dcs_{SCEM}$ ($\blacklozenge$), showing good
agreement except with the two points at the tail. Figure 3: The plot shows the
generated $dcs_{ep}$ (black $\bullet$) using the form factors from Polynomial
model [19] versus $Q^{2}$. They were compared to (a) $dcs_{SBM}$
($\blacksquare$), (b) $dcs_{SCBM}$ ($\blacktriangle$), (c) $dcs_{SEM}$
($\blacktriangledown$) and (d) $dcs_{SCEM}$ ($\blacklozenge$), showing good
agreement except with the two points at the tail.
For the comparison using the Chi Test in TABLE 8, the Rosenbluth form factor
data sets are, as expected, favorable to all four models, since their Chi Test
values are smaller than those of the other data sets, especially for SCBM and
SCEM. Overall, SCEM is the most favored model in this comparison test, with
SEM coming next.
Table 8: Chi Test: The Chi Test between the $dcs_{ep}$ of the different data sets and their corresponding $dcs_{eq}$ of the different models—Spin Bare Mass (SBM), Spin with other Criteria Bare Mass (SCBM), Spin Effective Mass (SEM) and Spin with other Criteria Effective Mass (SCEM).

| Form Factor Data Set | SBM $\times 10^{-6}$ | SCBM $\times 10^{-6}$ | SEM $\times 10^{-6}$ | SCEM $\times 10^{-6}$ |
|---|---|---|---|---|
| Rosenbluth Extraction [8] | 8.9416 | 8.9399 | 4.1442 | 4.2050 |
| Rosenbluth Separation [19] | 1653.6 | 1651.1 | 382.19 | 379.65 |
| Friedrich-Walcher Model [19] | 2573.9 | 2570.4 | 608.85 | 606.73 |
| Polynomial Model [19] | 2566.4 | 2562.9 | 617.13 | 615.01 |
| Spline Model [19] | 2566.7 | 2563.2 | 617.14 | 615.13 |
## VI Conclusions and Recommendations
Several experiments, such as those of the A1-Collaboration and at JLab, have
measured the proton electromagnetic form factors with precision and accuracy
for relativistic systems through elastic scattering. These measurements,
especially for $Q^{2}<1\,GeV^{2}$, are important since they give the electric
and magnetic form factors that determine the charge and magnetization
distributions of the proton, or its charge and magnetic rms radii.
The $dcs_{ep}$ generated from the different sets of form factor data were
compared to the raw $dcs_{eq}$, where $q$ is a point particle assigned the
bare or effective mass of the $u$ and $d$ quarks. The $edr$'s determined from
this comparison are listed in TABLE 1; the $edr$ that best suits the generated
data is the one derived from the Rosenbluth Separation Data in [19]. The
amplitude of $edr_{d**}$ is greater than that of $edr_{u**}$, owing to their
differences in charge and, as the transfer momentum increases, in mass; it is
recommended that this be studied further, especially the behavior of the ratio
$edr_{d**}/edr_{u**}$. Also, $edr_{*e*}<edr_{*b*}$, which could be due to the
dominance of the constituent or effective mass in the range of transfer
momenta studied; it is also quite logical that a point particle with a
(smaller) bare mass would need a thicker cloud to compensate for its mass than
one with a (bigger) effective mass. The decay rate in the $edr$ is constant;
however, this could change depending on the number of data points considered
in the formulation of the fitting model or if different form factor data sets
are used, in addition to the speculation that the variation could also be due
to differences in the set-ups and parameters of the experiments; as can be
seen, $edr_{**s}>edr_{**e}$. By averaging the $dcs$ of 2000 events, each taken
at a different, randomly selected scattering angle from $0^{\circ}$ to
$180^{\circ}$, the recoil factor can be treated as a constant. Moreover, it
was necessary to modify the recoil factor of the $eq$-scattering using the
proton mass, shifting the distribution of $dcs_{eq}$ closer to $dcs_{ep}$;
this introduces the proton mass as a parameter of the fitting model. The
existence of the $edr$ and the modification of the recoil factors act,
foremost, as the cloud layers that cover the point particle $q$ at low energy.
Furthermore, TABLE 2, TABLE 3 and TABLE 4 imply, in general, that the
generated $dcs_{ep}$ favor $edr_{u**}dcs_{eu}$ over $edr_{d**}dcs_{ed}$.
Four models were formulated (see TABLE 5) considering the assignment of bare
and effective quark masses: SBM and SEM consider the $edr$ and the
contributions based on the quark flavor composition of the proton, while SCBM
and SCEM incorporate other considerations, albeit arbitrarily, such as the
overlapping of the electron wave functions, spin-spin interactions, and color
interactions. For the Ratio Test, SCBM is the most favored model, while SCEM
is favored by both the Absolute Difference and the Chi Test. That SCEM and SEM
have the more favorable comparison numbers implies that, in this range of
transfer momenta, the models are consistent with the point particles being
likely to assume effective rather than bare masses. It should be noted that
the fitting models are not meant to prove the quark composition of the proton
but, rather, to show that previous and known results of $eq$-scattering, with
modifications, can be used to build models for $ep$-scattering at low transfer
momenta.
Although the additional arbitrary considerations have an effect on elastic
$ep$-scattering, this effect is assumed to be very small in magnitude for
$Q^{2}<1\,GeV^{2}$; their presence in the models has nevertheless been very
helpful in optimizing the comparison tests, as manifested by both the SCBM and
SCEM models. It is recommended, for example, to re-assess the geometrical
arrangement preferences of the quarks and gluons and the configuration
counting in order to obtain a more optimized fitting model. Variations in the
results are also expected when more events, and thus more scattering angles,
are considered. The cloud covering is also affected by the overlapping of the
electron's initial and final wave functions and by the overall spin-spin
interactions between the electron and the proton. These effects will be
investigated further.
## Acknowledgement
The Mindanao State University - Iligan Institute of Technology (MSU-IIT) and
its Department of Physics and the Premier Research Institute for Science and
Mathematics (PRISM) of Iligan City, Philippines; Research Center for
Theoretical Physics (RCTP) of Jagna, Philippines; and Centro de Investigacion
en Computacion - Instituto Politecnico Nacional (CIC-IPN) of CDMX, Mexico are
acknowledged for their conducive venues in making this research possible.
Gratitude is extended to the Department of Science and Technology (DOST) of
the Philippines and MSU-IIT for their financial support. The inspiration and
encouragements from Prof. Christopher Bernido, Prof. Maria Victoria Bernido,
Prof. Ludwig Streit, and Prof. Roland Winkler are highly appreciated.
## References
* [1] Z. Ye et al., Proton and neutron electromagnetic form factors and uncertainties. Phys. Lett. B 777, 8 15 (2018)
* [2] J. Dainton, The structure of hadronic physics. Physikalische Blätter 55-7/8 (1999)
* [3] R. G. Sachs, High-energy behavior of nucleon electromagnetic form factors. Phys. Rev. 126, 2256 (1962)
* [4] F. Halzen and A. D. Martin, Quarks and leptons: An introductory course in modern particle physics. John Wiley and Sons, Incorporated, New York (1984)
* [5] J. C. Bernauer et al., High-precision determination of the electric and magnetic form factors of the proton. Phys. Rev. Lett. 105, 242001 (2010)
* [6] M. N. Rosenbluth, High energy elastic scattering of electrons on protons. Phys. Rev. 79, 615 (1950)
* [7] C. Berger et al., Electromagnetic form factors of the proton at squared four-momentum transfers between $10$ and $50fm^{-2}$. Phys. Lett. B 35, 87 (1971)
* [8] M. J. Johnson, Two-photon exchange effects in elastic electron-proton scattering, PhD Dissertation. Northwestern University, Illinois, USA. DOI:10.2172/1093450 (2013)
* [9] L. Andivahis et al., Measurements of the electric and magnetic form factors of the proton from $Q^{2}=1.75$ to $8.83(GeV/c)^{2}$. Phys. Rev. D 50, 5491 (1994)
* [10] R. C. Walker et al., Measurements of the proton elastic form factors for $1\leq Q^{2}\leq 3(GeV/c)^{2}$ at SLAC. Phys. Rev. D 49, 5671 (1994)
* [11] T. Janssens et al., Proton form factors from elastic electron-proton scattering. Phys. Rev. 142, 922 (1966)
* [12] J. Litt et al., Measurement of the ratio of the proton form factors, $G_{E}/G_{M}$, at high momentum transfers and the question of scaling. Phys. Lett. B 31, 40 (1970)
* [13] M. Jones et al., $G_{E_{p}}/G_{M_{p}}$ ratio by polarization transfer in $ep\rightarrow ep$. Phys. Rev. Lett. 84, 1398 (2000)
* [14] I. A. Qattan et al., Precision rosenbluth measurement of the proton elastic form factors. Phys. Rev. Lett. 94, 142301 (2005)
* [15] P. E. Bosted, Empirical fit to the nucleon electromagnetic form factors. Phys. Rev. C 51, 409 (1995)
* [16] G. Lee et al., Extraction of the proton radius from electron-proton scattering data. Phys. Rev. D 92, 013013 (2015)
* [17] G. D. Cates et al., Flavor decomposition of the elastic nucleon electromagnetic form factors. Phys. Rev. Lett. 106 252003 (2011)
* [18] J. C. Bernauer, Precise form factors from elastic electron scattering. Journal of Physics: Conference Series 381-012006. IOP (2012)
* [19] J. C. Bernauer, Measurement of the elastic electron-proton cross section and separation of the electric and magnetic form factor in the $Q^{2}$ range from 0.004 to 1$(GeV/c)^{2}$. (United States Department of Energy, Office of Scientific and Technical Information: 21403504). PhD Dissertation, Mainz University, Germany (2010)
* [20] W. M. Yao et al., Particle Physics Booklet from Review of Particle Physics. Journal of Physics G 33-1 (2006)
* [21] D. J. Griffiths, Introduction to elementary particles. WILEY-VCH (2008)
* [22] C. Patrignani et al., Particle Data Group. Review of Particle Physics, Chinese Physics C 40-10 100001 (2016)
* [23] R. Brun et al., ROOT - An object oriented data analysis framework. Proceedings AIHENP'96 Workshop, Lausanne, September 1996. Nuclear Instruments and Methods in Physics Research A 389, 81-86. See also http://root.cern.ch (1997)
* [24] V. J. Martin, Lectures on particle physics, https://www2.ph.edu.ac.uk/$\sim$vjm. University of Edinburgh (2012)
SNSN-323-63
Using associated top quark production to probe for new physics within the
framework of effective field theory
Brent R. Yates
Department of Physics
The Ohio State University
191 West Woodruff Ave
Columbus, OH 43210, USA
> Signs of new physics are probed in the context of an Effective Field Theory
> using events containing one or more top quarks in association with
> additional leptons. Data consisting of proton-proton collisions at a center-
> of-mass energy of $\sqrt{s}=$13 TeV was collected at the LHC by the CMS
> experiment in 2017. We apply a novel technique to parameterize 16 dimension-
> six EFT operators in terms of the respective Wilson coefficients (WCs). A
> simultaneous fit is performed to the data in order to extract the two
> standard deviation confidence intervals (CIs) of the 16 WCs. The Standard
> Model value of zero is completely contained in most CIs, and is not excluded
> by a statistically significant amount in any interval.
> PRESENTED AT
>
>
>
>
> $13^{\mathrm{th}}$ International Workshop on Top Quark Physics
> Durham, UK (videoconference), 14–18 September, 2020
## 1 Introduction
The Standard Model (SM) of particle physics is one of the most complete and
precise models to date, but it accounts for only 5% of the known universe. The
SM currently provides no explanation for, among other things, dark matter and
dark energy, the hierarchy problem, and the baryon asymmetry. The Large Hadron
Collider (LHC) located at CERN can currently probe a center-of-mass energy of
$\sqrt{s}=$13 TeV. A natural question therefore arises: what if new physics
beyond the SM occurs at an energy scale above what the LHC can probe directly?
The formalism of Effective Field Theory (EFT) allows us to approximate new
physics above this scale purely in terms of SM fields. The strength of each
new-physics operator ($\mathcal{O}$) is controlled by its so-called Wilson
coefficient (WC), and each contribution is suppressed by powers of the energy
scale $\Lambda$. The effective Lagrangian may be written as
$\mathcal{L}_{\mathrm{EFT}}=\mathcal{L}_{\mathrm{SM}}+\sum_{d,i}\frac{c_{i}^{(d)}}{\Lambda^{d-4}}\mathcal{O}_{i}^{(d)},$
(1)
where $\mathcal{L}_{\mathrm{SM}}$ is the SM Lagrangian, $c_{i}$ is the
$i^{\mathrm{th}}$ WC, and $d$ is the dimension of the operator. It is
important to note that operators of odd-numbered dimension violate lepton
and/or baryon number conservation. This analysis focuses on dimension-six
operators; higher dimensions are suppressed by additional powers of $\Lambda$,
making them unimportant at this level of precision.
The analysis described in this proceeding uses a novel technique to examine
data collected by the CMS experiment in 2017, corresponding to an integrated
luminosity of $41.5\,\mathrm{fb^{-1}}$. It performs a global fit across all
processes, including signal and background. We specifically probe EFT effects
using multilepton final states. The procedure helps constrain the systematic
uncertainties, and correlations are taken solely from the data, with no
additional assumptions. The production channels examined are:
$\mathrm{\mathrm{t\overline{t}}l\nu}$,
$\mathrm{\mathrm{t\overline{t}}l\overline{l}}$, $\mathrm{tl\overline{l}q}$,
and $\mathrm{tHq}$, where $\mathrm{H\to\mathrm{b\overline{b}}}$ is
specifically removed. The complete details of this analysis may be found in
[1].
## 2 Parameterization of the EFT
The EFT may be parameterized in simulations by splitting the matrix elements
($\mathcal{M}$) into SM and EFT terms
$\mathcal{M}=\mathcal{M}_{\mathrm{SM}}+\sum_{j}\frac{c_{j}}{\Lambda^{2}}\mathcal{M}_{j}.$
(2)
The cross section is proportional to $|\mathcal{M}|^{2}$, and each simulated
event may be viewed as a differential piece of the cross section with an event
weight $w$. Therefore, we may parameterize these weights using
$w_{i}\left(\frac{\vec{c}}{\Lambda^{2}}\right)=s_{0i}+\sum_{j}s_{1ij}\frac{c_{j}}{\Lambda^{2}}+\sum_{j}s_{2ij}\frac{c_{j}^{2}}{\Lambda^{4}}+\sum_{j,k}s_{3ijk}\frac{c_{j}}{\Lambda^{2}}\frac{c_{k}}{\Lambda^{2}},$
(3)
where the structure constants ($s$) correspond to: the SM term ($s_{0}$),
interference between the SM and EFT ($s_{1}$), pure EFT terms ($s_{2}$), and
interference between EFT terms ($s_{3}$). These weights may be summed to
produce the predicted event yields as a function of the WCs.
Simulations are generated with non-zero WC values at leading order, and extra
partons are included when possible to improve our sensitivity. Initial WC
values are chosen to cover all relevant phase space and to optimize the
statistical power, $\sigma^{2}_{\mathrm{stat}}=\sum_{i} w^{2}_{i}(\vec{c})$.
The weight of each event accounts for variations in the yield due to EFT
effects and is used to solve for the structure constants in the quadratic
parameterization. These quadratic functions are then used to fit the data.
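As an illustration, the per-event quadratic parameterization of Eq. (3) can be recovered from weights evaluated at a handful of WC starting points. The following sketch, with invented structure constants (not the analysis values), solves the resulting linear system for a single WC:

```python
import numpy as np

# Hypothetical illustration: recover the structure constants of Eq. (3)
# for a single Wilson coefficient from event weights evaluated at a few
# starting points. True per-event weight: w(c) = s0 + s1*c + s2*c^2.
s_true = np.array([1.0, 0.3, 0.05])          # (s0, s1, s2), assumed values

c_points = np.array([-2.0, 0.0, 1.0, 3.0])   # WC values sampled at generation
design = np.stack([np.ones_like(c_points), c_points, c_points**2], axis=1)
weights = design @ s_true                    # weights the generator would report

# Least-squares solve for the structure constants (exact with >= 3 points)
s_fit, *_ = np.linalg.lstsq(design, weights, rcond=None)

def yield_at(c, s):
    """Predicted event weight as a quadratic function of the WC."""
    return s[0] + s[1] * c + s[2] * c**2

print(yield_at(2.0, s_fit))   # SM term + interference + pure-EFT pieces
```

Summing such per-event functions over a category gives the predicted yield as an analytic function of the WCs, which is what enters the fit.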
The simulations are made using the dim6TopEFT model [2]. Due to limitations in
the model, only tree-level simulations are possible. The 16 operators that
have the largest impact on the signal processes, and a relatively small impact
on the $\mathrm{t\overline{t}}$ background, are considered. Only the real
components of the WCs are considered, since imaginary coefficients lead to CP
violation and are already well constrained by EDM experiments and
$\mathrm{B}\to X_{s}\gamma$ decays.
## 3 Event selection and signal extraction
The analysis is split into 35 sub-categories based on lepton ($\ell$)
multiplicity, the sum of the lepton charges, jet multiplicity, and b-tagged
jet multiplicity. A boosted decision tree (BDT) is applied to help separate
the prompt leptons from the non-prompt leptons. All final-state observables
are an admixture of the processes; the method does not require that they be
separated. Each analysis sub-category stores the sum of the quadratic
coefficients, and therefore the event yields are fully parameterized by the
WCs. Table 1 lists all the categories used.
Table 1: Requirements for the different event categories. Requirements
separated by commas indicate a division into subcategories. The b jet
requirement on individual jets varies based on the lepton category, as
described in the text.
Selection | 2$\ell$ss | 3$\ell$ (off-Z) | 3$\ell$ (on-Z) | $\geq$4$\ell$
---|---|---|---|---
Leptons | Exactly 2 leptons | Exactly 3 leptons | Exactly 3 leptons | $\geq$4 leptons
Charge requirements | $\sum_{\ell}q<0,\sum_{\ell}q>0$ | $\sum_{\ell}q<0,\sum_{\ell}q>0$ | - | -
Jet multiplicity | 4, 5, 6, $\geq$7 jets | 2, 3, 4, $\geq$5 jets | 2, 3, 4, $\geq$5 jets | 2, 3, $\geq$4 jets
Number of b jets | $\geq$2 b jets | 1, $\geq$2 b jets | 1, $\geq$2 b jets | $\geq$2 b jets
Dilepton mass | - | $|m_{\ell\ell}-m_{\mathrm{Z}}|>10$ GeV | $|m_{\ell\ell}-m_{\mathrm{Z}}|\leq 10$ GeV | -
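To make the category structure concrete, a simplified sketch of the selection logic in Table 1 might look as follows. The function and label names are ours, not those of the analysis code, and the charge splitting is reduced to the 2$\ell$ss case for brevity:

```python
def event_category(n_lep, charge_sum, n_jet, n_bjet, on_z=None):
    """Illustrative, simplified sketch of the Table 1 categorization.
    Returns a category label, or None if the event fails the selection."""
    if n_lep == 2:
        # 2lss: at least 4 jets and 2 b jets, split by summed lepton charge
        if n_jet < 4 or n_bjet < 2:
            return None
        sign = "p" if charge_sum > 0 else "m"
        return f"2lss_{sign}_{min(n_jet, 7)}j"
    if n_lep == 3:
        # 3l: split by on-Z / off-Z dilepton mass, jets, and b jets
        if n_jet < 2 or n_bjet < 1:
            return None
        z_tag = "onZ" if on_z else "offZ"
        return f"3l_{z_tag}_{min(n_jet, 5)}j_{min(n_bjet, 2)}b"
    if n_lep >= 4:
        if n_jet < 2 or n_bjet < 2:
            return None
        return f"4l_{min(n_jet, 4)}j"
    return None

print(event_category(3, -1, 4, 2, on_z=False))  # -> 3l_offZ_4j_2b
```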
Each category listed in Table 1 is treated as a Poisson experiment with a
probability of obtaining the observed data. A profiled likelihood is used to
simultaneously fit all categories and to extract the two standard deviation
($\sigma$) confidence intervals (CIs). Two fitting procedures are
used: one where a single WC is fit while the other 15 are treated as
unconstrained nuisance parameters, and another where a single WC is fit while
the other 15 WCs are fixed to their SM value of zero. The first fitting
procedure is the more physical of the two, as there is no reason for new
physics to only favor one WC. The second procedure is an extreme scenario
where nature has a single WC. The ability to fit this single WC is limited by
the lack of knowledge of the other 15.
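The fit machinery can be illustrated with a toy one-dimensional version: a Poisson likelihood per category, a quadratic yield model, and a likelihood-ratio scan for the 2$\sigma$ interval. All numbers below are invented, and systematic nuisance parameters are omitted for brevity:

```python
import numpy as np
from scipy.stats import poisson

# Toy sketch of a likelihood scan in one WC (all numbers invented):
obs = np.array([12, 30, 7])              # observed counts per category
s = np.array([[10.0, 1.0, 0.8],          # per-category structure constants
              [28.0, -2.0, 1.5],         # (s0, s1, s2) as in Eq. (3)
              [6.0, 0.5, 0.3]])

def expected(c):
    """Quadratic yield model per category."""
    return s[:, 0] + s[:, 1] * c + s[:, 2] * c**2

def nll(c):
    """Negative log-likelihood, each category a Poisson experiment."""
    mu = np.clip(expected(c), 1e-9, None)
    return -poisson.logpmf(obs, mu).sum()

# Scan the WC and build the likelihood-ratio curve used for the CIs
grid = np.linspace(-3, 3, 121)
curve = np.array([nll(c) for c in grid])
q = 2 * (curve - curve.min())            # ~chi^2 with 1 degree of freedom
ci = grid[q < 4.0]                       # 2 sigma interval: q < 2^2
print(ci.min(), ci.max())
```

In the real analysis the other 15 WCs and the systematic uncertainties are re-minimized (profiled) at each scan point rather than held fixed.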
Systematic uncertainties are treated as nuisance parameters in the profiled
fit. The most important systematic uncertainties in this analysis are: the
misidentified lepton rate estimate, and simulation modeling including matrix-
element parton-shower matching, missing parton uncertainties, and scale
uncertainties.
### Misidentified lepton rate estimate
Contamination from non-prompt leptons entering the analysis region is to be
expected. It is estimated by examining a multijet-enriched background region
and comparing it to a $\mathrm{t\overline{t}}+\gamma$-enriched one. The
limited statistics of the $\mathrm{t\overline{t}}+\gamma$ sample are taken
into account and treated as an additional source of uncertainty.
### Simulation modeling uncertainties
Uncertainties in the process of matching matrix element simulations to those
produced via parton shower models must be accounted for. The leading term in
this uncertainty is from matching the extra partons added to the final-state
jets. An additional missing parton uncertainty must be applied to any samples
which could not be generated with extra partons. This involves comparing
leading order EFT effects without extra partons to next-to-leading order SM
simulations, and assigning an uncertainty to cover any discrepancies. Finally,
the scale uncertainties due to initial- and final-state radiation are taken
into account.
## 4 Results
Figure 1: Observed WC 1$\sigma$ (thick line) and 2$\sigma$ (thin line)
confidence intervals (CIs). Solid lines correspond to the other WCs profiled,
while dashed lines correspond to the other WCs fixed to the SM value of zero.
In order to make the figure more readable, the $c_{\mathrm{\varphi t}}$
interval is scaled by $1/2$, the $c_{\mathrm{tG}}$ interval is scaled by 2,
the $c^{-}_{\mathrm{\varphi Q}}$ interval is scaled by $1/2$, and the
$c_{\mathrm{t\varphi}}$ interval is scaled by $1/5$.
The 1$\sigma$ and 2$\sigma$ CIs are visualized in Figure 1. When the other 15
WCs are fixed to zero, $c_{\mathrm{tW}}$, $c_{\mathrm{t\varphi}}$, and
$c_{\mathrm{\varphi t}}$ obtain broader, disjoint 1$\sigma$ CIs. This is due
to the quadratic nature of the parameterization, which broadens the
likelihood curves. None of the WCs exclude the SM value of zero by a
statistically significant amount. Figure 2 contains the event yields for the
SM (left) and the postfit values (right).
Figure 2: Expected yields prefit (left) and postfit (right). The postfit
values of the WCs are obtained from performing the fit over all WCs
simultaneously. “Conv.” refers to the photon conversion background, “Charge
misid.” is the lepton charge mismeasurement background, and “Misid. leptons”
is the background from misidentified leptons. The jet multiplicity bins have
been combined here; however, the fit is performed using all 35 event
categories. The lower panel is the ratio of the observation over the
prediction.
ACKNOWLEDGEMENTS
We would like to acknowledge the CMS Collaboration for their work in
maintaining the CMS experiment and collecting all relevant data for this
analysis. We also thank Adam Martin and Jeong Han Kim for their theoretical
guidance in configuring and debugging the EFT model used to generate the
signal samples in this analysis.
## References
* [1] A. M. Sirunyan et al., “Search for new physics in top quark production with additional leptons in proton-proton collisions at $\sqrt{s}=$ 13 TeV using effective field theory,” 12 2020.
* [2] D. Barducci et al., “Interpreting top-quark LHC measurements in the standard-model effective field theory,” 2018.
Analysis and evaluation of deep learning based super-resolution algorithms to
improve performance in low-resolution face recognition
Angelo Garangau Menezes Carlos Alberto Estombelo-Montesco Computer Science
Firstly, I would like to say that I am thankful to God for creating this
perfect simulation that we live, and for supporting me to get this far in this
incredible adventure that we call life.
To my family, Roberto, Adilma, and Apolo, for all the support, love, and
consideration that I have had all my life. You guys are the reason why I want
to become a better version of myself every day.
To my advisor Prof. Dr. Carlos Estombelo, for believing and guiding me through
the course of my master’s program while also being an incredibly understanding
person. Thanks for your friendship and for being hard on me when I needed it.
To the best people in the world that have inspired me, stayed by my side in
the hardest moments, and have helped this thesis come to life directly with
their support and love, Fernando Melo, Gracieth Cavalcanti, Barbara Sena, and
Rita Macedo.
To Prof. Dr. André Carvalho and all the friends that I made in São Paulo while
working at USP, for their amazing friendship, research insights, and support
while doing some “balbúrdia” inside and outside the lab.
To Prof. Dr. Vijay Mago, for having introduced me to the field of data science
and made me believe that with the right amount of effort, I would be able to
learn anything and produce valuable research.
To Prof. Dr. Wilson Wang and Dr. Peter Luong, for having shaped me as a
researcher and taught me how to overcome all the difficulties that academia
could possibly have.
To all the great friends who understood my absence in certain moments and that
I hope will always be with me Grace Kelly, Natalia Rosa, Thiago Charles, Raul
Rodrigo, Duda Maia, Renan Albuquerque, Felipe Torres, Vinicius Araujo, Ronny
Almeida, Davi Santana, Eriana Pinto, Manu Magno, and all the incredible others
that I will probably have to pay a beer since their names are not here.
“When you say yes to others, make sure
you are not saying no to yourself.”
(Paulo Coelho)
Resumo
Surveillance and monitoring scenarios are prone to several problems, since
there is no control over the distance of potential suspects from the camera,
and the tasks usually involve evaluating low-resolution images. In such
situations, applying super-resolution algorithms can be a suitable alternative
to recover the discriminant properties of the faces of the suspects involved.
Although general super-resolution approaches have been proposed to enhance
image quality for human-level perception, biometric super-resolution methods
seek the best version of the image for computer “perception”, since their
focus is on improving automatic recognition performance. Convolutional neural
networks and deep learning algorithms in general have been applied to computer
vision tasks and are now the state of the art in several of their subdomains,
including image classification, restoration, and super-resolution. However,
few works have evaluated the effects that the most recently proposed
super-resolution methods may have on the accuracy and performance of face
verification in real-world low-resolution images.
This project aimed to evaluate and adapt different deep neural network
architectures for the task of face super-resolution, driven by face
recognition performance in real-world low-resolution images. The experimental
results on a surveillance dataset and a university classroom attendance
dataset showed that general super-resolution architectures can improve face
verification performance when a deep neural network trained on
high-resolution faces is used for feature extraction. Furthermore, since
neural networks are function approximators and can be trained with specific
objective functions, the use of a customized loss function optimized for face
feature extraction showed promising results for recovering discriminant
attributes in low-resolution face images.
Keywords: Low-Resolution Face Recognition; Super-Resolution; Deep Learning;
Convolutional Neural Networks.
Surveillance scenarios are prone to several problems since they usually
involve low-resolution footage, and there is no control of how far the
subjects may be from the camera in the first place. This situation is suitable
for the application of upsampling (super-resolution) algorithms since they may
be able to recover the discriminant properties of the subjects involved.
While general super-resolution approaches were proposed to enhance image
quality for human-level perception, biometrics super-resolution methods seek
the best “computer perception” version of the image since their focus is on
improving automatic recognition performance. Convolutional neural networks and
deep learning algorithms, in general, have been applied to computer vision
tasks and are now state-of-the-art for several sub-domains, including image
classification, restoration, and super-resolution. However, no work has
evaluated the effects that the latest proposed super-resolution methods may
have upon the accuracy and face verification performance in low-resolution
“in-the-wild” data.
This project aimed at evaluating and adapting different deep neural network
architectures for the task of face super-resolution driven by face recognition
performance in real-world low-resolution images. The experimental results on
real-world surveillance and attendance datasets showed that general
super-resolution architectures might enhance the face verification performance of deep
neural networks trained on high-resolution faces. Also, since neural networks
are function approximators and can be trained based on specific objective
functions, the use of a customized loss function optimized for feature
extraction showed promising results for recovering discriminant features in
low-resolution face images.
Keywords: Low-Resolution Face Recognition; Super-Resolution; Deep Learning;
Convolutional Neural Networks.
###### List of Figures
1. 1 CNN Architecture Exemplified [Deshpande 2017]
2. 2 Example of Inception Block. (Source: Author’s own)
3. 3 GoogleNet architecture. [Szegedy et al. 2015]
4. 4 Single Residual Block. (Source: Author’s own)
5. 5 Architecture of a GAN. (Source: Author’s own)
6. 6 Differences between normal and coordinate convolutions. [Liu et al. 2018]
7. 7 General classes of SR algorithms. [Huang and Liu 2015]
8. 8 Visual comparison of general interpolation methods: (8(a)) Nearest Neighbor (8(b)) Bilinear (8(c)) Bicubic (8(d)) Original HD image. - (Source: Author’s own)
13. 9 DL for SR algorithms related topics. (Adapted from wang2019deep)
14. 10 Example of a generic pipeline for face recognition. (Source: Author’s own)
15. 1 Example of face images in the VGGFace2 dataset. (Adapted from cao2018vggface2)
16. 2 Example of face images in the CelebA dataset. (Adapted from liu2015faceattributes)
17. 3 Example of gallery images in the Quis-Campi dataset. (Adapted from neves2017quis)
18. 4 Example of probe images in the Quis-Campi dataset. (Adapted from neves2017quis)
19. 5 Example of gallery images for the UFS-Classroom Attendance dataset. [Sá 2019]
20. 6 Example of a probe image for the UFS-Classroom Attendance dataset. [Sá 2019]
21. 7 Example of a probe face image from the UFS-Classroom Attendance dataset saved in the three settings. (Adapted from joao2019Automatic)
22. 8 Example of a probe face image from the Quis-Campi dataset saved in the three settings. (Adapted from neves2017quis)
23. 9 Pipeline for face verification in the ICB-RW. (Source: Author’s own)
24. 1 SRCNN architecture. [Dong, Loy and Tang 2016]
25. 2 Subpixel CNN architecture. (Adapted from dong2016accelerating and shi2016real)
26. 3 FSRCNN architecture. [Dong, Loy and Tang 2016]
27. 4 SRGAN architecture. [Jiao and Zhao 2019]
28. 5 Watch-List setting for the ICB-RW. [Neves and Proença 2016]
29. 6 Performance results for accuracy on ICB-RW and UFS Classroom 1 data. Proposed architectures have an asterisk in their names. (Source: Author’s own)
30. 7 Performance results for accuracy on UFS Classroom 2 and UFS Classroom 3 data. Proposed architectures have an asterisk in their names. (Source: Author’s own)
*
###### List of Tables
1. 1 Number of students for each class
2. 2 PSNR and SSIM validation results (2000 images from CelebA)
3. 3 Rank-1 Accuracy (in %) for Face Recognition task (1xN) on 90 subjects from the Quis-Campi dataset (ICB-RW)
4. 4 Accuracy (in %) for Face Recognition task in Classroom 1 (1xN)
5. 5 Accuracy (in %) for Face Recognition task in Classroom 2 (1xN)
6. 6 Accuracy (in %) for Face Recognition task in Classroom 3 (1xN)
7. 7 Spearman Correlation PSNR/SSIM vs. Accuracy
*
###### List of Abbreviations and Symbols
CoordConv: Coordinate Convolution
DL: Deep Learning
FPS: Frames per second
FSRCNN: Fast Super-Resolution Convolutional Neural Network
GAN: Generative Adversarial Network
GPU: Graphical Processor Unit
HR: High-Resolution
LFW: Labeled Faces in the Wild
LR: Low-Resolution
MSE: Mean Squared Error
PSNR: Peak signal-to-noise ratio
SR: Super-Resolution
SRCNN: Super-Resolution Convolutional Neural Network
Subpixel CNN: Subpixel Convolutional Neural Network
SRGAN: Super-Resolution Generative Adversarial Network
SOTA: State-of-the-art
SSIM: Structural Similarity
UFS: Federal University of Sergipe
$\beta$: Greek letter beta
$\varphi$: Greek letter phi
$\mathbb{R}$: Real space
$\theta$: Greek letter theta
###### Contents
1. 0 Introduction
1. 1 Hypotheses
2. 2 Objectives
3. 3 Thesis Structure
2. 1 Technical Background
1. 1 Convolutional Neural Networks and Deep Learning
1. 1 Residual Networks
2. 2 Generative Adversarial Networks
3. 3 Coordinate Convolutions
2. 2 Super-Resolution
1. 1 Operating Channels
2. 2 Super-Resolution Benchmarking
3. 3 Deep Learning for Image Super-Resolution
3. 3 Face Recognition
1. 1 Face Detection
2. 2 Feature Extraction and Face Verification
4. 4 Final Considerations
3. 2 Related Work
1. 1 Super-Resolution
2. 2 Low-Resolution Face Recognition
3. 3 Final Considerations
4. 3 Methodology
1. 1 Datasets
1. 1 VGGFace2
2. 2 CelebA
3. 3 Quis-Campi Dataset (ICB-RW)
4. 4 Federal University of Sergipe Classroom Attendance Dataset
2. 2 Data Pre-Processing
3. 3 Transfer Learning
1. 1 Face Feature Extraction
2. 2 Face Verification
3. 3 Face Loss
4. 4 Final Considerations
5. 4 Experimental Results
1. 1 Experiments
1. 1 Task 1 - Face Super-Resolution
2. 2 Task 2 - Watch-List ICB-RW (1x5 Problem)
3. 3 Task 3 - Attendance Evaluation (1xN Problem)
2. 2 Results Evaluation
1. 1 Task 1 - Face Super-Resolution
2. 2 Task 2 - Watch-List ICB-RW (1x5 Problem)
3. 3 Task 3 - Attendance Evaluation (1xN Problem)
4. 4 Hypotheses Discussion
3. 3 Final Considerations
6. 5 Conclusions
7. 6 Perceptual Results of SR Algorithms for 4x Upscaling
8. 7 Average Training Losses for the SR Algorithms
*
## Chapter 0 Introduction
An essential ability that marks human beings as social animals is face
perception. Infants tend to prefer to look at faces at a very early age, and
across the lifespan, most people spend more time looking at faces than at any
other type of object [Johnson et al. 1991].
Faces provide a wealth of information that facilitates social communication
since humans are able to recognize the identity of other people and interpret
their emotional state by analyzing the facial expression and pose. More
specifically, regarding identity recognition, there is behavioral and neural
evidence that such a feature has its basis on the perception of aspects of
facial structure that are invariant across changes [Gobbini and Haxby 2007,
Haxby, Hoffman and Gobbini 2000].
Face perception is also related to a high-level visual and memory process that
involves retrieving faces and the identity information stored in memory
(i.e., person semantic knowledge). This process is so robust in human brains
that some people are able to recognize others in situations where only a few
resembling features of a person are present, such as in caricature drawings
and low-resolution photos [Chang et al. 2017]. The field of research that
describes and evaluates reliable
methods for automatic identification of subjects based on their physiological
and behavioral characteristics is usually called biometrics [Nguyen et al.
2018].
As an example of how face biometrics has become an important matter in modern
society, verifying a watch-list of subjects through CCTV footage has become
standard security practice in airports, malls, and other crowded places.
However, since these checks often involve no automation, they can become a
weak spot, as they require an impressive amount of manual work to inspect the
live feed or stored data of several cameras [Rasti et al. 2016]. This is one
of the reasons why countries are spending large amounts of resources to
rapidly grow their surveillance-related technology markets, in order to have
intelligent solutions specifically designed for their needs [Feldstein 2019].
Even though computers have shown a great ability to also deal with image and
face recognition in the last decade, in situations where low-resolution (LR)
inputs are employed, they tend to fail as much as humans when trying to
identify an individual or reconstruct a higher-resolution representation of
the same subject [Nguyen et al. 2018]. These occurrences are the majority in
surveillance scenarios since the cheapest and most commonly used cameras can
only provide low-quality video footage, and there is no control for the
distance between the subjects of interest and the device [Rasti et al. 2016].
These recognition faults mainly occur because when the resolution drops, the
amount of information available for identifying or verifying a subject
decreases as well. That leads to a severe degradation for both human
perception and machine interpretation. Since there is no standard resolution
that can be set for making recognition available [Nguyen et al. 2018], the
development of image upscaling algorithms, commonly known as super-resolution
(SR) algorithms, has become an intensive area of research. Indeed, the
pioneering work of this group of algorithms dates back to
1974, when Gerchberg [1974] showed that the resolution of a data object
could be significantly improved through error energy reduction. Thenceforth,
researchers have put a massive effort into investigating SR and its possible
range of applications, even knowing that it is fundamentally an ill-posed
problem since the details presented in the LR samples are usually not enough
to provide a robust reconstruction of the original high-resolution (HR) image
[Tian and Ma 2011].
Deep Learning (DL) algorithms started to be used to solve tasks regarding
image classification and reconstruction because their computational cost is
now manageable thanks to advances in hardware and parallel processing
[Krizhevsky, Sutskever and Hinton 2012]. This group of techniques has rapidly
become the state of the art (SOTA) in a great variety of image-related tasks,
both in accuracy and applicability [LeCun, Bengio and Hinton 2015]. Also, they have
shown excellent performance in image restoration tasks that are related to
biometrics such as iris, fingerprint, and face super-resolution for improving
recognition performance [Ribeiro and Uhl 2017, Li, Feng and Kuo 2018, Kim et
al. 2019].
Most of the SR solutions for LR face recognition have relied on the use of
convolutional neural networks (CNNs) optimized by a pixel loss [Nguyen et al.
2018]. Nevertheless, there exists nowadays a large pool of network designs and
learning strategies that are applied to solve similar computer vision problems
[Haris, Shakhnarovich and Ukita 2018, Liu et al. 2018]. Since the goal of SR
for face biometrics is to optimize face recognition performance while keeping
reasonable perceptual quality, replicating successful strategies from similar
computer vision tasks can be a worthwhile research direction. One example of a
different strategy that some similar works have applied is the use of
different types of convolution operators and customized loss functions to
increase performance [Wang, She and Ward 2019, Wang, Chen and Hoi 2019].
One of the current issues with SR solutions to the LR face recognition problem
is that researchers often train their SR deep learning models and report
accuracy results only on downsampled versions of the same or other HR
frontal-image datasets [Ouyang et al. 2018, Abello and Jr. 2019]. However,
the task is known to become more challenging when faces are captured in an
unconstrained environment, where they can be subject to blurring, motion,
non-frontal pose, and other conditions that hinder recognition.
The analysis presented in this thesis is motivated by the lack of recent
studies on whether and how state-of-the-art deep learning SR techniques may
assist face biometrics in real-world low-resolution scenarios, taking into
consideration different network architectures, learning strategies, and their
practical applicability and scalability.
### 1 Hypotheses
For the development of this thesis and the proposal of experiments, the
following specific hypotheses were elaborated:
1. 1.
The relationship between image quality metrics and accuracy performance is not
significant.
2. 2.
The use of a specific convolution operator that takes into account position
information (CoordConv) can effectively improve metric performance over normal
convolution operators when dealing with super-resolution.
3. 3.
Application of a loss function based on face identity for an upscaling network
(FaceLoss) can influence the verification results positively in a face
recognition pipeline using DL models.
### 2 Objectives
Taking into account all the possible challenges regarding the discussed
topics, the general objective of this thesis is to evaluate the efficiency of
a face recognition pipeline in real-world low-resolution scenarios and check
whether the recently developed SR algorithms and their variants are capable of
enhancing recognition performance in these situations.
The specific objectives are listed below:
* •
Evaluation of the possibility of a correlation between image quality metrics
and face verification accuracy in a LR recognition pipeline as considered by
hypothesis 1.
* •
Evaluation of different SOTA neural network architectures, also involving
different convolution operators as proposed in hypothesis 2, for the super-
resolution task driven by face biometrics performance involving faces in real-
world LR datasets.
* •
Evaluation of an adapted loss function that optimizes the DL model for better
face feature extraction while keeping the SR upsampling characteristic as
suggested by hypothesis 3.
### 3 Thesis Structure
To make for an easier read, this thesis presents the technical background
before the related work chapter since the discussed topics are from recent
research, and a prior overview can be useful for a better comprehension of the
concepts. Therefore, this manuscript was structured with the following
chapters:
* •
Chapter 1 - Introduction
* •
Chapter 2 - Technical Background
* •
Chapter 3 - Related Work
* •
Chapter 4 - Methodology
* •
Chapter 5 - Experiments
* •
Chapter 6 - Results
* •
Chapter 7 - Final Considerations
## Chapter 1 Technical Background
This chapter gives a brief technical background overview for the topics
discussed in this thesis in order to provide the basics that validate the
proposed experiments and hypotheses.
### 1 Convolutional Neural Networks and Deep Learning
Deep learning (DL) is a branch of machine learning that is capable of learning
the data representation through the use of a structure of hierarchical layers,
similar to the way the brain handles new information. Its concept is mainly
applied to supervised learning problems (e.g., where there is a need for
mapping an input vector to an output vector), and its core is based on the
math behind Artificial Neural Networks [LeCun, Bengio and Hinton 2015].
Deep Neural Networks can have different architectures based on the nature of
the data that is used as input. When image data needs to be processed, CNNs
have been widely applied by academia and industry because their internal
architecture is well suited to handling high-dimensional data and extracting
its most discriminating features [LeCun, Bengio and Hinton 2015, Shi
et al. 2016].
A typical structure of a CNN can be seen in Figure 1, where an image is used
as input and the network must predict a label for it. The first operation
inside the network happens in the convolutional layer, where a moving window
is applied to a small pixel grid of the image. This moving window, commonly
called a kernel, works as a “filter”: it multiplies its weight values by the
underlying pixel values. These products are summed into a single number,
which is placed in the matrix used as input to the following layer.
Figure 1: CNN Architecture Exemplified [Deshpande 2017]
The CNN per se consists of several stacked convolutional layers mixed with
nonlinear and pooling layers that work as feature extractors. Usually, the
nonlinear layer is added after each convolution operation, which brings a
nonlinear property characteristic to the network through the use of an
activation function. The pooling layer will then be placed after the nonlinear
layer working directly with the width and height of the image in order to
perform a downsampling operation. This step reduces the image data to a more
compressed version containing only details that were processed and identified
by the previous filter (convolutional) layer. After a series of “feature
extraction” layers, a fully connected layer is generally stacked upon them in
order to map the extracted features to a fixed output.
The learning phase of a CNN consists of updating the weights of every
convolutional layer and of the fully connected one. The former often allow the
network to identify edges, contours, and shapes that characterize the image,
while the latter are accountable for the classification or regression step.
Training is usually performed with variants of gradient-based optimization via
backpropagation [Krizhevsky, Sutskever and Hinton 2012, LeCun, Bengio and
Hinton 2015].
#### 1 Residual Networks
When training large image classifiers, usually there is a considerable
variation in the location and size of the object of interest. In order to have
a robust feature extractor that identifies features that are globally or
locally distributed on the image, the use of different kernel sizes may be
needed. With this in mind, Szegedy et al. [2015] proposed GoogLeNet, built
from large blocks that contain different convolution operators with several
kernel sizes. One such block is shown in Figure 2.
Figure 2: Example of Inception Block. (Source: Author’s own)
Using several stacks of blocks in a very deep network, they were able to
achieve 93.3% top-5 accuracy in the ImageNet competition with much less
computation than the state-of-the-art (SOTA) at the time, VGG16. The final
architecture of GoogLeNet can be seen in Figure 3.
Figure 3: GoogLeNet architecture. [Szegedy et al. 2015]
Nevertheless, as researchers started to implement and test deeper
architectures, the problem of vanishing gradients became prominent. This issue
appears because certain activation functions squash a large input space into
the narrow range between 0 and 1. As a result, even a large change in the
input may produce only a minor change in the output, and consequently the
gradients become too small to update the weights when backpropagated [LeCun,
Bengio and Hinton 2015].
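A back-of-the-envelope NumPy illustration of the effect: the sigmoid's derivative is at most 0.25, so backpropagating through many such layers multiplies the gradient by a factor of at most 0.25 per layer (the 20-layer depth below is an arbitrary illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)   # at most 0.25, attained at z = 0

# Backpropagation multiplies one such factor per layer; even in the
# best case (z = 0 everywhere) the product shrinks geometrically.
grad = 1.0
for layer in range(20):
    grad *= sigmoid_grad(0.0)
print(grad)  # 0.25**20, about 9.1e-13: almost no signal reaches early layers
```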
One solution that researchers found for this problem was the use of skip
connections. These connections, shown in Figure 4, feed later layers the same
input that earlier layers received, which lets the network skip the training
of a few layers and learn only the residual between the input and the output
[He et al. 2016].
Figure 4: Single Residual Block. (Source: Author’s own)
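The residual computation of Figure 4 can be sketched as follows (a toy fully connected version for illustration only; He et al. [2016] use convolutional layers with batch normalization):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + F(x)): the layers only have to learn the residual F,
    and the identity path lets gradients flow straight through."""
    f = relu(x @ w1) @ w2   # F(x): two toy fully connected layers
    return relu(x + f)      # skip connection adds the original input back

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.01
w2 = rng.standard_normal((8, 8)) * 0.01
y = residual_block(x, w1, w2)
# With near-zero weights F(x) is tiny, so the block starts close to identity.
```

This is why very deep ResNets remain trainable: an untrained block behaves almost like the identity instead of destroying the signal.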
This structure gave its name to the group of residual networks, commonly known
as ResNets, and encouraged researchers to go even deeper, since networks could
now have more layers and still train in reasonable time. One example of such a
structure in SOTA applications is the work of Szegedy et al. [2017], where
inception and residual blocks are combined to create robust feature
extractors.
#### 2 Generative Adversarial Networks
Generative Adversarial Networks (GANs) were proposed by Goodfellow et al.
[2014] in order to sidestep the common difficulties of deep generative models,
such as approximating the intractable probabilistic computations that arise in
maximum likelihood estimation, while leveraging the benefits of piecewise
linear units in the generative context.
In this architecture, a discriminator network $D(x)$, where $x$ is an image,
is optimized to distinguish whether the given input is fake or not, while a
generator network $G(x)$, where $x$ can be random noise or even another image,
is optimized to generate fake samples that follow the same distribution as the
real images and prevent the discriminator from discerning which one is real
[Wang, She and Ward 2019]. Therefore, in this context, the output of the
discriminator network is always a label (real $\rightarrow$ 1, fake
$\rightarrow$ 0), and the output of the generator is always an image. The
general idea of the learning process is shown in Figure 5.
Figure 5: Architecture of a GAN. (Source: Author’s own)
In other words, $D$ is trained to maximize the probability of assigning the
correct label to both generated and real images, while simultaneously $G$ is
trained to minimize $\log(1-D(G(z)))$. Goodfellow et al. [2014] described this
optimization, also known as adversarial training, as the play of a minimax
game with value function $V(D,G)$:
$\min_{G}\,\max_{D}\,V(D,G)=E_{x\sim p_{data}(x)}[\log D(x)]+E_{z\sim p_{z}(z)}[\log(1-D(G(z)))]$ (1)
given $p_{z}(z)$ as the input noise distribution, with $E_{x}$ and $E_{z}$
denoting expectations over the data and noise distributions, respectively.
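The two sides of the minimax game in Equation 1 can be sketched as minibatch losses (a NumPy sketch; the non-saturating generator loss shown for $G$ is the variant commonly used in practice, and the batch values are illustrative):

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Minibatch estimate of the value function of Eq. (1).
    d_real = D(x) on real samples, d_fake = D(G(z)) on generated ones."""
    eps = 1e-12  # numerical guard against log(0)
    # Discriminator maximizes V, i.e. minimizes its negation:
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Non-saturating generator loss used in place of log(1 - D(G(z))):
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

# A perfect discriminator (D(x)=1 on real, D(G(z))=0 on fake) has near-zero
# discriminator loss, while the fooled-nobody generator is heavily penalized:
d_loss, g_loss = gan_losses(np.array([1.0, 1.0]), np.array([0.0, 0.0]))
```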
Since then, GANs have attracted growing interest in the research community due
to their applicability and versatility. They have been applied to various domains
such as natural language processing, time-series synthesis, and computer
vision [Yang et al. 2017, Donahue, McAuley and Puckette 2018, Bao et al.
2017]. In the latter area, they have become the SOTA for several applications
such as image-to-image translation, image inpainting, and image SR [Ma et al.
2018, Yu et al. 2018, Ledig et al. 2017].
However, since the generator and discriminator need to reach a Nash
equilibrium during training, where neither can become too specialized in its
task, GANs suffer from major training challenges such as non-convergence, mode
collapse, and diminished gradients [Wang, She and Ward 2019]. Consequently,
they are highly sensitive to hyperparameters. In addition, to obtain good
results with them, their loss functions need to represent well the real
optimization problem involved in the task [Johnson, Alahi and Fei-Fei 2016].
#### 3 Coordinate Convolutions
The convolution operator is widely used in image processing because, after
learning the ideal filter weights, it can extract features that may not appear
at the same angle or place every time. Such a learned characteristic is called
translation invariance. However, Liu et al. [2018] noted that, precisely
because of this property, regular convolutions in CNNs can perform poorly in
tasks that involve coordinate transforms. One example of this problem is the
mapping between coordinates in the $(x,y)$ Cartesian space and coordinates in
the pixel space, where even state-of-the-art architectures were not able to
obtain more than 90% testing accuracy.
To deal with problems that require varying degrees of translation dependence
or complete translation invariance, Liu et al. [2018] proposed an operator
called CoordConv, which gives the normal convolution operator access to its
own input coordinates through the use of extra coordinate channels. This
allows the network to check and work with the exact location of pixels inside
its grid, learning either complete translation invariance or the varying
degrees of translation dependence required by position regression tasks. Their
results on the same position regression task showed perfect generalization
while being 150 times faster and using 10–100 times fewer parameters. The
difference between a standard convolution operator and a CoordConv can be
visualized in Figure 6.
Figure 6: Differences between normal and coordinate convolutions. [Liu et al.
2018]
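The coordinate-channel augmentation at the heart of CoordConv can be sketched as follows (a NumPy sketch assuming NCHW tensors and coordinates normalized to [-1, 1]; an ordinary convolution applied to the augmented tensor then realizes the operator):

```python
import numpy as np

def add_coord_channels(batch):
    """Append two channels holding each pixel's row and column coordinates,
    normalized to [-1, 1], before the convolution is applied."""
    n, c, h, w = batch.shape
    i = np.linspace(-1.0, 1.0, h).reshape(1, 1, h, 1)
    j = np.linspace(-1.0, 1.0, w).reshape(1, 1, 1, w)
    i_chan = np.broadcast_to(i, (n, 1, h, w))
    j_chan = np.broadcast_to(j, (n, 1, h, w))
    return np.concatenate([batch, i_chan, j_chan], axis=1)

x = np.zeros((2, 3, 8, 8))          # NCHW batch of RGB images
x_coord = add_coord_channels(x)     # shape becomes (2, 5, 8, 8)
```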
Since its introduction, researchers have explored different applications and
scenarios where normal convolutions can be switched to CoordConv to improve
performance [Upadhyay, Singhal and Singh 2019, Xu, Chen and Jia 2019].
Nonetheless, so far only Zafeirouli et al. [2019] have reported the
improvements that CoordConv may provide over regular convolutions for SR,
which makes it an interesting research direction.
### 2 Super-Resolution
Super-resolution can be described as the task of generating a
higher-resolution image from a lower-resolution input. Throughout this domain,
researchers have applied different strategies to reconstruct the HR image,
which culminated in different classes of SR algorithms being developed
depending on a variety of conditions [Huang and Liu 2015]. Some of the
categories involving SR are shown in Figure 7.
Figure 7: General classes of SR algorithms. [Huang and Liu 2015]
The general principle of supervised SR is that a LR image $I_{LR}$ is the
result of a degradation process that was applied to its HR version $I_{HR}$ as
in:
$I_{LR}=D(I_{HR})$ (2)
The degradation function $D$ is naturally unknown, but researchers usually
associate it with blur, motion, warp, and noise [Nguyen et al. 2018].
Therefore, the goal of the SR algorithm is to learn the inverse mapping in
such a way that, from a LR input, its HR can be achieved as in:
$I_{HR}=F_{SR}(I_{LR};\theta)$ (3)
where $F_{SR}$ is the SR function and $\theta$ its parameters.
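Since $D$ in Equation 2 is unknown, experiments typically substitute a synthetic degradation; a common stand-in (blur followed by subsampling, an assumption rather than the true camera pipeline) can be sketched as:

```python
import numpy as np

def degrade(hr, scale=2):
    """Synthetic stand-in for the unknown degradation D of Eq. (2):
    blur with a small averaging kernel, then subsample by `scale`."""
    k = 3
    pad = np.pad(hr, k // 2, mode="edge")
    blurred = np.zeros_like(hr)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            blurred[i, j] = pad[i:i + k, j:j + k].mean()
    return blurred[::scale, ::scale]   # downsampling step

hr = np.random.default_rng(0).random((8, 8))
lr = degrade(hr)          # 4x4 low-resolution observation of hr
```

Supervised SR methods generate (hr, lr) training pairs exactly this way, then learn the inverse mapping $F_{SR}$ of Equation 3.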
The most commonly used techniques for upscaling images are interpolation-based
ones such as bicubic, bilinear, or nearest neighbor, since their time cost is
low, which makes them ideal for real-time applications. An illustration of the
results of zooming an image (4x) with each technique is presented in Figure 8.
Although bicubic interpolation has a higher time complexity, it is the default
method for upscaling images in software such as MATLAB and Photoshop [Purkait,
Pal and Chanda 2014, Vedadi and Shirani 2014].
(a)
(b)
(c)
(d)
Figure 8: Visual comparison of general interpolation methods: (8(a)) Nearest
Neighbor (8(b)) Bilinear (8(c)) Bicubic (8(d)) Original HD image. - (Source:
Author’s own)
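Nearest-neighbor interpolation, the cheapest of these methods, amounts to repeating each pixel; bilinear and bicubic replace the repetition with linear and cubic weighting of neighboring pixels:

```python
import numpy as np

def nearest_neighbor_upscale(img, scale):
    """Each low-resolution pixel is simply repeated scale x scale times."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

lr = np.array([[0, 255],
               [128, 64]], dtype=np.uint8)
hr = nearest_neighbor_upscale(lr, 4)   # 2x2 -> 8x8, blocky but instant
```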
#### 1 Operating Channels
The human evaluation of the degree of degradation in a LR image is based on
the perception of the RGB channels. However, when applying SR methods, some
researchers instead use the YCbCr color space representation. In this space,
images are depicted in the Y, Cb, and Cr channels, denoting the luminance,
blue-difference, and red-difference chroma components, respectively [Wang,
Chen and Hoi 2019]. Some works report that using only the Y channel may bring
better results than adding the Cb and Cr channels, since the latter are
blurrier by nature and therefore less affected by the downsampling process
[Dong et al. 2015]. There is no consensus
in academia for which channels are better for training and evaluating SR;
nevertheless, the most recent architectures tend to operate on RGB channels
[Ledig et al. 2017, Chen et al. 2018].
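Extracting the Y channel from an RGB image uses the luminance row of the YCbCr transform; a sketch with the BT.601 coefficients (other standards use slightly different constants):

```python
import numpy as np

def rgb_to_y(rgb):
    """Luminance (Y) channel of the ITU-R BT.601 YCbCr transform,
    the channel several SR works train and evaluate on."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

pixel = np.array([[[255.0, 255.0, 255.0]]])   # pure white RGB pixel
y = rgb_to_y(pixel)                           # full luminance, 255.0
```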
#### 2 Super-Resolution Benchmarking
Even though different works have presented several ways of benchmarking and
measuring their image SR results regarding their specific field of
application, the most common objective measurements of image quality are Peak
Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) [Tian, Suzuki
and Koike 2010]. PSNR is an estimation of quality based on the mean squared
error (MSE) of the pixels of every channel between the generated HR image and
the ground truth, as shown in Equations 4 and 5.
$PSNR=10\log_{10}(\frac{S^{2}}{MSE})$ (4)
$MSE=\frac{\sum_{n,m}(x_{mn}-y_{mn})^{2}}{m\,n}$ (5)
where: $S$ is the maximum value of the input image data type; $n$ is the
number of pixels; $m$ is the number of channels; $x_{mn}$ and $y_{mn}$ are the
values of pixel $n$ in channel $m$ of the generated and original images,
respectively.
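Equations 4 and 5 translate directly into code; a sketch assuming 8-bit images ($S=255$):

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """PSNR of Eq. (4) via the per-pixel MSE of Eq. (5)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")        # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

original = np.full((4, 4), 100.0)
restored = original + 10.0         # constant error of 10 -> MSE = 100
print(psnr(original, restored))    # 10*log10(255**2 / 100) ≈ 28.13 dB
```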
SSIM is a measurement that gives greater weight to visual degradation in
quality, through the analysis of the homogeneity and phase coherence of the
gradient magnitude in the original and reconstructed images. This similarity
is based on the structure, brightness, and contrast of the images [Begin and
Ferrie 2006, Reibman, Bell and Gray 2006]. Its mathematical formulation can be
seen in Equation 6.
$SSIM=\frac{(2\mu_{x}\mu_{y}+c_{1})(2\sigma_{xy}+c_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})}$
(6)
where: $\mu_{x}$ and $\mu_{y}$ represent the average intensity values of
aligned windows of the original and reconstructed images; $c_{1}$ and $c_{2}$
are small constants that stabilize the division; $\sigma_{x}^{2}$ and
$\sigma_{y}^{2}$ are the variances of the two sets of intensities; and
$\sigma_{xy}$ is the covariance between these two sets.
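Equation 6 can be sketched as a single global computation (practical implementations average it over small sliding windows; the constants $c_{1}=(0.01S)^{2}$ and $c_{2}=(0.03S)^{2}$ are the customary defaults, an assumption here):

```python
import numpy as np

def ssim_global(x, y, max_val=255.0):
    """Eq. (6) computed once over the whole image, instead of averaged
    over sliding windows as full implementations do."""
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

img = np.random.default_rng(0).random((16, 16)) * 255
print(ssim_global(img, img))   # identical images score exactly 1.0
```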
#### 3 Deep Learning for Image Super-Resolution
Deep learning solutions for SR fit into the “learning-based” category shown in
Figure 7. In the last few years, DL methods have become the most explored
approach for SR tasks since they quickly achieved SOTA performance on various
benchmarks and competitions [Agustsson and Timofte 2017, Timofte et al. 2018].
In particular, the single image super-resolution (SISR) problem has been the
most fundamentally tackled problem within SR, since researchers can make use
of already available large datasets scraped from the internet to train their
models [Liu et al. 2015, Chen et al. 2018].
A variety of methods have been used and incorporated for solving the SR
problem, ranging from simpler approaches involving only convolutional layers,
to more sophisticated ones with the use of residual blocks, recursive learning
and different losses [Wang, Chen and Hoi 2019]. An overview of the most
related directions that researchers have taken when considering working with
DL in SR can be analyzed in Figure 9.
Figure 9: DL for SR algorithms related topics. (Adapted from Wang, Chen and Hoi 2019)
To propose the new architectures assessed in this thesis, different network
designs and learning strategies presented in Figure 9 were considered, such as
the use of “Residual Learning” and “Content Loss”. A deeper review of the
works that influenced the directions taken in this manuscript is presented in
Chapter 2.
### 3 Face Recognition
The basic steps that involve a general face recognition pipeline are defined
in Figure 10 and described in sequence.
Figure 10: Example of a generic pipeline for face recognition. (Source:
Author’s own)
#### 1 Face Detection
Face detection in an image is the first step in a recognition pipeline because
it eliminates unnecessary information from the image. In this way, if the
algorithm finds one or more faces, they are extracted from the original image
so that they can be analyzed separately [Muttu and Virani 2015].
The training phase of these algorithms happens with the use of several images
containing faces and others without them. Even though this problem presents
itself as a simple binary classification, several face detection algorithms
need to be trained exhaustively so that they can give good results [Zhang et
al. 2016]. Two measures are responsible for evaluating the quality of face
detection algorithms [Vezhnevets 2002]:
* •
False positive: Represents the number of objects that were detected wrongly as
faces.
* •
False negative: Represents the number of faces that were not detected.
Face detection algorithms are usually divided into four different groups:
knowledge, feature, template, and appearance-based models [Zafeiriou, Zhang
and Zhang 2015]. However, as the amount of available data has increased over
the years for training such algorithms, the appearance-based methods have
overcome the other solutions since they generalize face models from a set of
representative samples. A common core of the SOTA algorithms in this group of
techniques is the use of CNNs, since they automatically derive
problem-specific feature extractors from the training examples, without making
any assumptions about which features to extract or which areas of the face
patterns to analyze, thanks to their spatially invariant characteristic [Zhang
and Zhang 2010].
#### 2 Feature Extraction and Face Verification
The “real” recognition step in a face recognition pipeline consists of the
representation and extraction of facial features of an image. These features
are then input into a mathematical model, which is meant to specify whether
the presented face matches one or any previously stored face [Crosswhite et
al. 2018].
The implementation of recognition systems can range from low-throughput to
process-intensive methods where, for example, GPUs are required. Some more
straightforward methods can make use of metric learning approaches or
principal component analysis for dimensionality reduction. On the other hand,
the most sophisticated ones are usually based on analysis of probability
densities, manifold learning, and deep neural networks, among other methods
with a higher computational cost [Wang and Deng 2018].
For extracting discriminative features of an image that only contains a face
(after the pre-processing step), models based on CNN have been the ones most
used by SOTA approaches. This architecture is suitable for feature extraction
because it takes advantage of local connections to extract the spatial
information effectively. Also, their shared weights significantly reduce the
number of parameters for training the network, which consequently reduces its
size [Chen et al. 2016]. An effective way to create accurate face recognition
models is through the application of Transfer Learning [LeCun, Bengio and
Hinton 2015] using available pre-trained models. These models are often
trained on datasets with millions of faces and, through their internal
representations, it is possible to directly extract discriminative features
from an input face [Cao et al. 2018].
Once the features of all involved subjects are extracted, the system needs to
decide whether the person is who he/she claims to be. This step is called face
verification, and different machine learning approaches can be employed to
perform it, depending on how many dimensions the obtained feature space may
have [Faceli et al. 2011]. These approaches are differentiated by how their
functions create decision boundaries in the feature space. However, in the
context of face recognition, when only one or a few training samples are
provided, methods based on distance metrics have shown the best results
regarding computational complexity and accuracy [Nguyen and Bai 2010, Schroff,
Kalenichenko and Philbin 2015].
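A minimal sketch of distance-metric verification (the 128-dimensional embeddings, the cosine distance, and the 0.5 threshold are illustrative assumptions; real systems calibrate the threshold on validation data):

```python
import numpy as np

def verify(feat_probe, feat_gallery, threshold=0.5):
    """Accept the claimed identity if the cosine distance between the
    probe and gallery feature vectors is small enough."""
    a = feat_probe / np.linalg.norm(feat_probe)
    b = feat_gallery / np.linalg.norm(feat_gallery)
    distance = 1.0 - float(a @ b)
    return distance <= threshold, distance

rng = np.random.default_rng(0)
gallery = rng.standard_normal(128)                  # stored embedding
same = gallery + 0.05 * rng.standard_normal(128)    # near-duplicate probe
other = rng.standard_normal(128)                    # unrelated probe
accepted_same, _ = verify(same, gallery)
accepted_other, _ = verify(other, gallery)
```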
### 4 Final Considerations
In this chapter, an overview of the main topics discussed in this thesis was
provided. It is important to reinforce that most of the trends regarding DL in
the fields of face recognition and super-resolution have only emerged in the
past five years through empirical experimentation with different
architectures. This indicates that most of the theory behind why these models
perform better than others is still under development and will probably lead
to further exploration and changes in the following years.
In the next chapter, different SOTA works with respect to SR and LR face
recognition are discussed. Their evaluation was essential to extract
meaningful insights for proposing the hypotheses and objectives of this
thesis.
## Chapter 2 Related Work
This chapter presents some of the related works evaluated during the
development of this thesis.
### 1 Super-Resolution
Baker and Kanade [2000] proposed the first SR work applied to faces, in 2000.
They created an algorithm that learns priors on the spatial distribution of
the image gradient for frontal images of faces. At the time, they stated that
the high-frequency details inferred by the probabilistic models were
“hallucinated” by the model.
The work of Tian et al. [2010] presented objective and subjective measures for
evaluating how SR impacts different image processing and computer vision
tasks. Their findings reflected the conflict between objective and subjective
measures, since the former tend to penalize models that enhance the image
according to computer vision standards, while the latter favor changes that
improve image quality according to the human visual system.
Dong et al. [2015] were the first to propose the use of CNNs for the SR
problem. Their architecture was called “Super-Resolution Convolutional Neural
Network” (SRCNN) and provided superior accuracy compared with other SOTA
example-based methods at the time. In their work, LR images are pre-upsampled
with traditional methods (e.g., bicubic interpolation) to the desired size,
and then a deep CNN with three layers is applied to the coarse image to
reconstruct the high-frequency details. This work later became the baseline
for all works involving DL-based algorithms in SR. One advantage of this
method (and of all pre-upsampling methods) is that it can take input images of
arbitrary size. The downsides are the possible introduction of noise and
blurring and, since most operations are performed on images in a
high-dimensional space, higher time and memory costs than other frameworks.
Dong et al. [2016] designed a compact hourglass-shaped CNN structure on top of
the basic SRCNN for faster inference and improved accuracy in SR, called
FSRCNN. They proposed an architecture with a deconvolution layer at the end of
the network for mapping the original LR image directly to the super-resolved
output, an iterative up-and-down sampling scheme in the mapping layers, and
the use of smaller filter sizes with more mapping layers. The results pointed
to an inference speed-up of over 40x while presenting superior restoration
quality when compared against the naive SRCNN architecture.
In the work of Shi et al. [2016], the authors presented a strategy to avoid
the need to upscale the LR image with interpolation methods or a single filter
before feature extraction and mapping. They presented a modified CNN
architecture with an efficient sub-pixel convolutional layer for
“post-upsampling”, where feature extraction happens in the LR space before the
image is upscaled. This architecture was capable of performing real-time SR on
1080p videos on a single K2 GPU.
In the work of Ledig et al. [2017], a deep generative adversarial network
using residual convolutional blocks was applied to image SR. Their approach
achieved SOTA results in upscaling photo-realistic natural images by a factor
of 4. To accomplish this, instead of only optimizing the network for image
similarity in pixel space, the authors proposed a perceptual loss function
consisting of an adversarial loss and a content loss. The adversarial loss
pushes the upscaled solution towards the natural image manifold using a
discriminator network trained to differentiate between super-resolved images
and the original photo-realistic ones. In addition, they proposed a content
loss motivated by perceptual similarity, computed by comparing semantic
features extracted from an ImageNet pre-trained network. They also evaluated
the impact of combining several image losses, such as adversarial, content,
mean squared error, and total variation, which inspired this thesis to
investigate a different task-specific learning strategy.
The work of Haris et al. [2018] presented an approach to detect objects in LR
images using an end-to-end strategy, training a CNN to perform the SR steps
and also aid detection. In this approach, a specific multi-objective loss
function was developed for CNN training, where individual weights for each
part of the loss were used to optimize the learning process for each desired
task. The goal of the work was to assess how much an improvement in resolution
would assist a recognition/detection task on the input image.
Chen et al. [2018] presented an end-to-end approach to perform SR on face
images using geometric face features as prior information. The authors divided
the training process into several stages, in which different encoder-decoder
network architectures were applied to extract geometric features from the
faces to aid the SR task. The deep network produced in this work, FSRNet,
presented results that are today the SOTA for SR on face images. However, when
dealing with real-world SR of LR face images in the wild, obtaining priors is
a hard and computationally expensive task that hinders its implementation in
the face biometric context.
Zafeirouli et al. [2019] proposed an efficient, lightweight model that
leverages the benefits of a recursive progressive upsampling architecture to
tackle the SR problem. This work recognized that SR tasks involve spatial
representations and transformations, and exploited pixel position information
to reinforce the reconstruction task using the CoordConv operator. They
obtained results comparable with SOTA implementations on four SR benchmarks.
More importantly, their results also showed accuracy improvements with the use
of the coordinate convolutional layer for the SR task while keeping
computational complexity low, which motivated the application of this operator
in the new architectures described in Chapter 4.
### 2 Low-Resolution Face Recognition
Hennings-Yeomans et al. [2008] presented an approach for simultaneous SR and
face feature extraction for recognition of LR faces, treating face features
(e.g., Eigenfaces, Fisherfaces) as prior information in the SR method. They
evaluated their approach against matching gallery and probe images in the LR
domain and against applying a pure SR approach and matching in the
high-dimensional domain. They concluded that their approach could produce
better recognition performance, since the focus of the SR shifted to
recognition instead of reconstruction. This particular work inspired this
thesis to use features extracted from the face as an optimization strategy for
the SR models.
The work of Rasti et al. [2016] proposed a system that super-resolves a face
image before the face feature extraction and recognition phases. They used a
deep CNN to upscale the image, followed by a face recognition model based on a
Hidden Markov Model and Singular Value Decomposition. They experimented on two
general and one small surveillance database and pointed out that such an
upscaling phase can result in a 6 to 10% increase in face recognition
performance. The increase in accuracy reported in this paper influenced the
elaboration of the hypothesis that DL-based SR could positively assist face
recognition on real-world LR data.
Berger et al. [2016] proposed a two-step neural approach for face SR focused
on improving face recognition. They employed a generic SR CNN based on the
work of Peyrard et al. [2015], trained on the Labeled Faces in the Wild (LFW)
dataset [Huang et al. 2008], and then refined the HR output with localized SR
steps using autoencoders trained on patches of the LFW images. The localized
SR step focused on locally reconstructing image patches at crucial face
landmark points (e.g., eyes, nose, mouth) via dictionary learning. They
claimed a +2.80 dB improvement in image reconstruction, while recognition
performance also increased by 3.94% compared to the same results with 4x
bicubic interpolation. However, they lacked tests on real-world LR datasets to
check whether their model would be able to keep its high performance on
in-the-wild data. Also, since having two networks in cascade is
computationally expensive, this CNN architecture may not be ideal for
real-world surveillance situations.
Wang et al. [2016] presented an attempt to deal with the problem of very
low-resolution recognition, where the region of interest can be smaller than
16x16 pixels. Their approach achieves feature enhancement and recognition
simultaneously through the use of a deep SR network pre-trained with a
carefully selected loss function for matching LR and HR face images. The
recognition step employed a deep classification network trained on a different
dataset that shares similar features with the one used for evaluation. They
report a rise of 1.71% in top-1 accuracy on the well-known UCCS surveillance
dataset, which is no longer publicly available.
Aghdam et al. [2019] explored factors for improving LR face recognition using
DL classifiers on two real-world surveillance datasets. Instead of focusing on
SR approaches, they proposed two strategies to overcome the lack of fine
information in the face images: increasing the crop around a detected face
before upsampling the image to match the input size of the classifier, and
matching the resolution between gallery and probe images. For classification,
they evaluated several ResNet-50 and SENet-50 architectures for feature
extraction, trained on the VGGFace2 and MS-Celeb-1M datasets. Together with a
nearest neighbor classifier, they were able to achieve SOTA results in Rank-1
verification on the ICB-RW and SCFace surveillance datasets. Their work
inspired this thesis to also investigate different face crop sizes and to use
the ICB-RW database as a benchmark for LR face recognition.
Elsayed et al. [2018] evaluated the effects that SR and face alignment may
have on accuracy for LR face recognition using an unsupervised approach. They
proposed experiments where a LR version of the LFW dataset was frontalized and
fed to a simple SR network based on SRCNN. They then used an unsupervised
recognition model based on speeded-up robust features and local binary
features. They tested only on the LFW data and reported that SR and face
alignment increased recognition performance.
Ataer-Cansizoglu et al. [2019] proposed a two-stage architecture for
simultaneous feature extraction and super-resolution. They trained a VGG-based
deep face recognition network to be used as a feature extractor, and trained
an SR network to decrease the L1 distance between the features extracted by
the VGG network for real and generated images. The evaluation procedure
covered two DL-based SR networks and showed that this setup increases
recognition performance. However, they only evaluated the results on LR
frontal images obtained by downsampling four HR datasets.
Li et al. [2019] presented results for LR face recognition in the wild,
evaluating different deep learning SR network architectures on two originally
LR datasets. They trained a VGG network on LR and HR versions of images from a
HR dataset to extract features, and then applied different classifiers for
validation. Their best results were obtained by pre-training a GAN-based SR
architecture on LR images of datasets with features similar to those used for
evaluation. This trick helped their models reach close to SOTA recognition
performance on the used datasets.
Abello et al. [2019] explored the use of a loss function defined as the L2
error between face features from a super-resolved face and the ground truth
for improving LR face recognition. For feature extraction, they used the
pre-trained Inception-based network proposed by Schroff et al. [2015]. They
reported that this loss improved both image quality and recognition
performance. Nevertheless, they only tested their system on LR versions of the
HR dataset used for training.
### 3 Final Considerations
In this chapter, several works that represent the SOTA in SR and LR face
recognition were described along with their results. After analyzing the
trends present in the SOTA, several SR architectures were chosen for
evaluation on real-world LR images. The discussed related works also led this
thesis towards a research direction that included assessing the use of
different convolution operators and loss functions for better image quality
and recognition accuracy.
In the next chapter, the methodology behind the experiments proposed in this
thesis for assessing the established hypotheses is presented.
## Chapter 3 Methodology
This chapter describes the materials and methods used for experimenting with
different architectures for super-resolving images and improving face
recognition.
### 1 Datasets
#### 1 VGGFace2
The VGGFace2 is an in the wild dataset of faces that contains 3.31 million
images from 9131 celebrities downloaded from Google Image Search and shows
significant variations in pose, age, lighting, and background [Cao et al.
2018]. One advantage of using this dataset for training a robust image
classifier/feature extractor is the fact that approximately 20% of its images
have pixel resolution lower than 50 pixels, which leads the model to have a
better feature representation for low-resolution face images [Aghdam et al.
2019]. A set of five images from this dataset can be seen in Figure 1.
Figure 1: Example of face images in the VGGFace2 dataset. (Adapted from Cao et
al. 2018)
#### 2 CelebA
The CelebFaces Attributes Dataset (CelebA) is a large-scale face attributes
dataset with over 200,000 images of 10,117 celebrities around the world [Liu
et al. 2015]. It presents a vast diversity of poses and background clutter.
Also, it is commonly used for training SOTA SR networks because of its rich
features and size [Chen et al. 2018, Yu et al. 2018, Kim et al. 2019]. The
first 18,000 images of this dataset were used for training, and the following
2,000 for validating the results of the SR networks according to the specified
image quality metrics. A set of 5 samples from this dataset is shown in Figure
2.
Figure 2: Example of face images in the CelebA dataset. (Adapted from [Liu et
al. 2015])
#### 3 Quis-Campi Dataset (ICB-RW)
The Quis-Campi dataset is a growing biometric database comprising 3000 images
from 320 subjects, automatically acquired by an outdoor visual surveillance
system with subjects on the move and at a distance (up to 50 m). The system
used for image acquisition has a master wide-angle camera for subject detection
and tracking, and a slave pan-tilt-zoom (PTZ) camera, acting as the foveal
sensor, for capturing the facial region at high magnification.
In the context of face verification, the dataset supplies three high-quality
images of each subject in a controlled environment to be used as gallery data,
and several images of the same subject on the move inside a university campus
to be used as probe data. One strong feature of this dataset is that all probe
images present variation in illumination, pose, focus, expression, motion blur,
and occlusion [Neves, Moreno and Proença 2017].
Part of this dataset was published to promote the International Challenge on
Biometric Recognition in the Wild (ICB-RW) competition, and that is why most
of the results present the same benchmarking setup used in the competition.
The ICB-RW challenge provided three face images to be used as gallery and five
probe images for each of 90 subjects.
As the goal was to evaluate the performance of the proposed network
architectures against other works in a real-world LR scenario, the Quis-Campi
dataset was adapted to resemble the ICB-RW challenge as closely as possible,
since the latter was not available. Because not all subjects in the dataset had
enough images, the first 90 of the 320 subjects with the three gallery and five
probe images available were picked. This approach to building an equivalent
representation of ICB-RW was preferred over taking 90 random samples out of the
320 because, at the time of the 2016 competition, the Quis-Campi dataset did
not yet have all of its subjects registered; according to Neves and Proença
[2016], new samples were registered and added automatically to the database.
The images chosen for one of the subjects can be seen in Figures 3 and 4.
Figure 3: Example of gallery images in the Quis-Campi dataset. (Adapted from
[Neves, Moreno and Proença 2017])
Figure 4: Example of probe images in the Quis-Campi dataset. (Adapted from
[Neves, Moreno and Proença 2017])
#### 4 Federal University of Sergipe Classroom Attendance Dataset
This dataset was formulated with the goal of creating an automated attendance
system for classes within the computer science department of the Federal
University of Sergipe (UFS) [Sá 2019]. The dataset is composed of one
high-resolution frontal image of each student, referred to as a gallery image,
and three probe images of a whole class taken from slightly different angles
with a 1.2 MP webcam. For this thesis, three classes with different numbers of
students were used for the evaluation, since this dataset presents a
challenging LR uncontrolled environment, ideal for testing real-world face
recognition pipelines. An example of the gallery and probe images present in
the dataset can be seen in Figures 5 and 6, respectively.
Figure 5: Example of gallery images for the UFS-Classroom Attendance dataset.
[Sá 2019] Figure 6: Example of a probe image for the UFS-Classroom Attendance
dataset. [Sá 2019]
### 2 Data Pre-Processing
For detecting and extracting the faces from the presented datasets, a
pre-trained Multi-task Cascaded Convolutional Neural Network (MTCNN) was used,
since it has shown SOTA results on a variety of benchmarks for face detection
and face alignment while keeping real-time performance [Zhang et al. 2016].
For training and evaluating the SR networks, every image from the CelebA
dataset was scaled to the [0,1] range and then underwent a process of
“crappification” to create the LR counterpart required by the adopted paired
supervised learning approach. Each cropped face was resized to 160x160 and
saved as the HR sample. Then, to obtain a “crappy” version of it, the same
cropped face was resized to 40x40 pixels and saved using JPEG compression with
a quality factor that varied randomly from 10 to 70 (where 1 is the minimum, 75
the standard, and 95 the maximum quality), as advised in the FastAI course
[Howard et al. 2018]. This compression approach helps to simulate the data
distribution that may be present in real-world LR surveillance footage. The
resolution was chosen according to the input size of the deep learning model
used for feature extraction (160x160x3), and, due to limited computational
resources, only the 4x SR upscaling setting was evaluated.
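The degradation step above can be sketched as follows; a minimal illustration
with Pillow, where the function name `crappify` is hypothetical and the
solid-color image merely stands in for a cropped face:

```python
import io
import random
from PIL import Image

def crappify(hr_image: Image.Image) -> Image.Image:
    """Create the LR counterpart of an HR face crop: downscale to
    40x40, then round-trip through lossy JPEG with a random quality
    factor between 10 and 70."""
    lr = hr_image.resize((40, 40), Image.BICUBIC)
    buffer = io.BytesIO()
    lr.save(buffer, format="JPEG", quality=random.randint(10, 70))
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")

hr = Image.new("RGB", (160, 160), color=(128, 64, 32))  # stand-in for a face crop
lr = crappify(hr)
```

In the thesis pipeline, the 160x160 crop is saved as the HR target and the
JPEG-degraded 40x40 version as its paired LR input.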
The primary task was to evaluate how SR may influence verification performance
in real-world LR in-the-wild scenarios, and the faces detected in the
Quis-Campi and UFS-Classroom Attendance datasets varied widely in size due to
differences in data acquisition equipment. Therefore, every detected face
(gallery and probe) was saved in three different settings: without any change
to the size, with bicubic resizing to 40x40 pixels, and with bicubic resizing
to 40x40 pixels with a 1.3 crop margin to the borders. This last setup
increases the amount of information within the image and, combined with the
employed resolution-matching step, can increase recognition performance for LR
samples, as validated by Abdollahi et al. [2019]. An example of the three cases
for each dataset can be seen in Figures 7 and 8.
Figure 7: Example of a probe face image from the UFS-Classroom Attendance
dataset saved in the three settings. (Adapted from [Sá 2019])
Figure 8: Example of a probe face image from the Quis-Campi dataset saved in
the three settings. (Adapted from [Neves, Moreno and Proença 2017])
### 3 Transfer Learning
#### 1 Face Feature Extraction
Inspired by the work of Schroff et al. [2015], a pre-trained network was
employed to extract feature embeddings of the faces for further comparison. The
chosen deep network architecture was the Inception-ResNet-V1 [Szegedy et al.
2017] trained on the VGGFace2 dataset. This network was able to achieve the
SOTA accuracy of 0.9965 on the LFW benchmark. Compared to the Inception network
architecture employed similarly in the FaceNet paper, the Inception-ResNet-V1
network achieves faster convergence without adding computational complexity,
thanks to its residual connections. This network was trained to map a 160x160x3
image ($\mathbb{R}^{H\times W\times C}$) to a vector ($\phi(Image)$) in a
512-dimensional feature space ($\mathbb{R}^{512}$).
#### 2 Face Verification
Verification is performed by applying the nearest-neighbor algorithm to the
distances among the embeddings of the selected probe and gallery images. The
metric employed to verify closeness is the cosine distance (one minus the
cosine similarity), used previously by the winner of the ICB-RW competition
[Neves and Proença 2016], and also by Abdollahi et al. [2019]. A description of
the metric can be seen in Equation 1.
$Cosine\,Distance=1-\frac{\phi(I_{Face1})\,\cdot\,\phi(I_{Face2})}{\left\|\phi(I_{Face1})\right\|_{2}\,\left\|\phi(I_{Face2})\right\|_{2}}$
(1)
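Equation 1 amounts to one minus the cosine similarity between two embeddings,
with the gallery entry at the smallest distance taken as the match. A minimal
NumPy sketch, where the random vectors merely stand in for the 512-dimensional
face embeddings:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cos(a, b), as in Equation 1."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_gallery(probe: np.ndarray, gallery: np.ndarray) -> int:
    """Index of the gallery embedding closest to the probe."""
    distances = [cosine_distance(probe, g) for g in gallery]
    return int(np.argmin(distances))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 512))                # 5 enrolled identities
probe = gallery[3] + 0.01 * rng.normal(size=512)   # noisy view of identity 3
match = nearest_gallery(probe, gallery)
```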
Most approaches that deal with the LR face recognition problem, as reviewed in
Chapter 2, try to train from scratch a robust network that may be able to
overcome the difficulties of such a task. This study, however, approaches the
problem from a different point of view: it takes advantage of large, robust
pre-trained classifiers with the addition of a specially designed upscaling
step, which includes an SR network, prior to the feature extraction and
verification steps.
A general description of the whole pipeline for verification after the
upsampling task, using the ICB-RW challenge as an example, can be seen in
Figure 9.
Figure 9: Pipeline for face verification in the ICB-RW. (Source: Author’s
own)
#### 3 Face Loss
Mean-squared-error loss optimizes the response of an SR network toward
generating images with better quality, but it does not take into account
whether the reconstructed person has kept the unique features that
differentiate that person from others. As a task-driven approach for SR was
meant to be developed, the face identity loss commonly used for face
normalization [Cole et al. 2017] and 3D face reconstruction [Gecer et al. 2019]
was adopted to guide the SR process toward better face recognition accuracy.
The SOTA face feature extractor (Inception-ResNet-V1) was used to check whether
the distance between the embeddings of the real and the super-resolved image
was decreasing within each epoch. To accomplish that, the cosine distance
(Equation 1) of the face embeddings extracted from the image pairs was added to
the standard image loss used for training each SR network. This customized loss
encourages the reconstruction made by the SR network to resemble the target
identity, under various conditions, in the feature space. The definition of
this loss can be seen in Equation 2.
$Face\,Loss=1-\frac{\phi(I^{SR})\,\cdot\,\phi(I^{HR})}{\left\|\phi(I^{SR})\right\|_{2}\,\left\|\phi(I^{HR})\right\|_{2}}$
(2)
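A sketch of how this identity term can enter the training objective (PyTorch;
`extractor` is a toy linear map standing in for the frozen
Inception-ResNet-V1, and the weighting `lam` is a hypothetical knob, not a
value from the thesis):

```python
import torch
import torch.nn.functional as F

def face_loss(emb_sr: torch.Tensor, emb_hr: torch.Tensor) -> torch.Tensor:
    """Equation 2: 1 - cosine similarity between SR and HR embeddings."""
    return (1.0 - F.cosine_similarity(emb_sr, emb_hr, dim=1)).mean()

def total_loss(sr, hr, feature_extractor, lam=1.0):
    """Standard image loss (MSE) plus the identity-preserving term."""
    pixel = F.mse_loss(sr, hr)
    identity = face_loss(feature_extractor(sr.flatten(1)),
                         feature_extractor(hr.flatten(1)))
    return pixel + lam * identity

torch.manual_seed(0)
# Toy stand-in for the frozen 512-d embedding network.
extractor = torch.nn.Linear(3 * 8 * 8, 512)
hr = torch.rand(2, 3, 8, 8)
loss_same = total_loss(hr.clone(), hr, extractor)       # identical images
loss_diff = total_loss(torch.rand(2, 3, 8, 8), hr, extractor)
```

With identical images both terms vanish, while mismatched content or identity
drives the loss up, which is exactly the gradient signal the SR network
receives.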
This approach adopts the same concept as the task-driven loss presented by
Haris et al. [2018], but keeps the focus on a more robust and general task
(face recognition). In addition, the Face Loss presented here differs from the
recent work of Abello et al. [2019], who applied the L2 error, which is more
susceptible to outliers, to the feature vectors when optimizing their SR
network. Moreover, they neither provided an ablation study nor validated their
approach for accuracy improvement in real-world LR data.
### 4 Final Considerations
This chapter described the datasets and the steps taken towards finding the
best DL architecture for super-resolving images with the goal of improving
face recognition. To ensure the models work in real-world LR data, two
challenging LR datasets (surveillance and attendance assessment) were used for
evaluation. Also, for making sure the super-resolved face images have
discriminant features regarding identity, a custom loss function was proposed.
In the next chapter, the experiments of this thesis are presented and
explained. Then, their results are discussed in depth according to the
hypotheses established in Chapter .
## Chapter 4 Experimental Results
### 1 Experiments
This section provides an overview of the performed experiments with their
respective parameterization.
#### 1 Task 1 - Face Super-Resolution
For performing this task, 4 of the CNN architectures discussed in Chapter 2
were modified and implemented following the implementation details in their
papers, in order to evaluate the best choice for a face verification pipeline.
These models were chosen based on previously reported results and computational
complexity. In this thesis, 7 new architecture variants were proposed for
evaluation. The variants whose name includes “Coord” kept the original
architecture but had the first “Conv2d” layer switched to a “CoordConv”. The
variants with “FaceLoss” were trained with the addition of the customized loss
function presented in Section 3. The implemented SR models described below can
be checked at
https://github.com/angelomenezes/Pytorch_Face_SR.
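As a point of reference, a CoordConv layer of the kind swapped into the “Coord”
variants concatenates two normalized coordinate channels to its input before an
ordinary convolution, letting the filters condition on position; a minimal
PyTorch sketch of the idea, not the thesis implementation:

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d preceded by concatenation of x/y coordinate channels
    normalized to [-1, 1]."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # Two extra input channels carry the coordinates.
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

layer = CoordConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.rand(2, 3, 40, 40))
```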
Network architectures from literature that were evaluated:
1. SRCNN [Dong et al. 2015], shown in Figure 1.
Figure 1: SRCNN architecture. [Dong, Loy and Tang 2016]
2. Subpixel CNN [Shi et al. 2016], shown in Figure 2.
Figure 2: Subpixel CNN architecture. (Adapted from [Dong, Loy and Tang 2016]
and [Shi et al. 2016])
3. FSRCNN [Dong, Loy and Tang 2016], shown in Figure 3.
Figure 3: FSRCNN architecture. [Dong, Loy and Tang 2016]
4. SRGAN [Ledig et al. 2017], shown in Figure 4.
Figure 4: SRGAN architecture. [Jiao and Zhao 2019]
Network architectures with modifications proposed in this thesis for
evaluation:
1. SRCNN Coord
2. Subpixel CNN Coord
3. FSRCNN Coord
4. FSRCNN Coord FaceLoss
5. SRGAN Coord
6. SRGAN FaceLoss
7. SRGAN Coord FaceLoss
* Hyperparameters $\rightarrow$ SRCNN and SRCNN Coord
  * Batch size: 64
  * Number of epochs: 50
  * Loss: MSE
  * Optimizer: Adam
  * Learning rate: 0.01, decayed to 10% of the current value every 15 steps
* Hyperparameters $\rightarrow$ Subpixel CNN and Subpixel CNN Coord
  * Batch size: 32
  * Number of epochs: 50
  * Loss: MSE
  * Optimizer: Adam
  * Learning rate: 0.01, decayed to 20% of the current value every 15 steps
* Hyperparameters $\rightarrow$ FSRCNN, FSRCNN Coord and FSRCNN Coord FaceLoss
  * Batch size: 32
  * Number of epochs: 50 (30 for FSRCNN Coord FaceLoss)
  * Loss: MSE (plus FaceLoss for the FaceLoss variant)
  * Optimizer: Adam
  * Learning rate: 0.001, decayed to 20% of the current value every 15 steps
* Hyperparameters $\rightarrow$ SRGAN, SRGAN FaceLoss, SRGAN Coord and SRGAN
Coord FaceLoss
  * Batch size: 32
  * Number of epochs: 30
  * Loss: MSE + Adversarial Loss + Perceptual Loss (plus FaceLoss for the
FaceLoss variants)
  * Optimizer: Adam with $\beta_{1}=0.5$ and $\beta_{2}=0.999$
  * Learning rate: 0.001, decayed to 20% of the current value every 15 steps
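The step decays above (“decay to 10% or 20% of the current learning rate every
15 steps”) map directly onto PyTorch's `StepLR` scheduler; a sketch for the
SRCNN setting (initial rate 0.01, gamma 0.1, step size 15), with a placeholder
model in place of the SR network:

```python
import torch

model = torch.nn.Linear(4, 4)  # placeholder for the SR network
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# gamma=0.1 multiplies the current lr by 10% every 15 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)

lrs = []
for epoch in range(50):
    # ... forward/backward pass would go here ...
    optimizer.step()      # scheduler.step() must follow optimizer.step()
    scheduler.step()
    lrs.append(optimizer.param_groups[0]["lr"])
```

After epochs 15, 30, and 45 the rate drops to 0.001, 1e-4, and 1e-5
respectively; the 20% variants would use `gamma=0.2` instead.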
#### 2 Task 2 - Watch-List ICB-RW (1x5 Problem)
As proposed by Neves and Proença [2016], the ICB-RW challenge used part of the
Quis-Campi dataset to evaluate the average Rank-1 identification of a suspect
against the “watch-list” subjects. For each probe image, the model had to
output a similarity score for each of five possible suspects. An example of
this setup is presented in Figure 5.
Figure 5: Watch-List setting for the ICB-RW. [Neves and Proença 2016]
For this experiment, each individual's frontal gallery image and a random probe
image were selected, along with four random probe images of different subjects.
The challenge is to obtain the highest number of matches according to the
smallest distance given by the nearest-neighbor algorithm.
#### 3 Task 3 - Attendance Evaluation (1xN Problem)
For the task of evaluating the attendance inside a classroom, the identity of
every student needs to be checked against all entries on the attendance list.
The number of students in each class can be seen in Table 1.
Table 1: Number of students for each class
| Class | N° of Students |
|---|---|
| Class 1 | 15 |
| Class 2 | 16 |
| Class 3 | 12 |
This experiment followed the same verification principle as Task 2.
Nevertheless, it is a more challenging situation, since this recognition task
takes into account all the subjects in the classroom (1 vs. all problem).
Each experiment described in this chapter was run either on a personal computer
with an Intel i7-6500U, 16 GB of memory, and a GeForce GTX 950M (4 GB), or on a
Google Cloud instance with a Skylake processor (8 vCPUs and 52 GB of memory)
and an NVIDIA Tesla P100 (16 GB). All models were implemented and evaluated
using Python and the PyTorch library.
### 2 Results Evaluation
When training the SRGAN and its variants, if the discriminator network had its
weights updated at the same frequency as the generator, its loss would quickly
converge to zero, and neither network would have any gradients for learning
along the epochs. Therefore, in order to make the learning happen, two extra
steps were followed according to the tips given in the FastAI course [Howard et
al. 2018]:
* Generator pre-train: the generator network was pre-trained for 5 epochs using
only the MSE loss, in order to start with some advantage against the
discriminator.
* Smart update for the discriminator: across the epochs, the discriminator
network was only trained (had its weights updated) when its loss was above a
threshold (0.5). This step ensures that the network learns gradually to assess
the output of the generator, since the discriminator's weights were only
updated when it was making more “mistakes” with respect to the right labels for
each input.
All the training losses can be evaluated in Appendix 7.
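The second stabilization step reduces to a simple gating rule on the
discriminator loss; a minimal sketch, where the 0.5 threshold comes from the
text and the loss trace is purely illustrative:

```python
def should_update_discriminator(disc_loss: float, threshold: float = 0.5) -> bool:
    """Only update the discriminator while it is still making enough
    'mistakes'; below the threshold it is frozen so the generator
    can catch up."""
    return disc_loss > threshold

# Illustrative trace of discriminator losses across iterations:
losses = [1.2, 0.9, 0.6, 0.4, 0.7, 0.3]
updates = [should_update_discriminator(loss) for loss in losses]
```

In the full loop, `optimizer_d.step()` would only be called on the iterations
where the gate returns `True`, while the generator updates every iteration.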
The results for each proposed task are shown and discussed in the following
subsections.
#### 1 Task 1 - Face Super-Resolution
The validation results regarding image quality metrics and inference times for
the SR architectures are presented in Table 2. The PSNR was calculated on the
RGB channels, and the average inference time was computed over 10 runs of each
algorithm. The perceptual results can be evaluated in Appendix 6.
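For reference, PSNR computed jointly over the RGB channels, as used in Table 2,
can be sketched with NumPy for 8-bit images (the noisy input below is
synthetic):

```python
import numpy as np

def psnr_rgb(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio computed jointly over all RGB channels."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(160, 160, 3))
noisy = np.clip(hr + rng.normal(0, 5, size=hr.shape), 0, 255)
value = psnr_rgb(hr, noisy)
```

Gaussian noise of standard deviation 5 yields roughly 34 dB here, in the same
range as the scores reported in Table 2.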
As can be seen in Table 2, except for the “SRGAN Coord FaceLoss”, all the
presented architectures achieve real-time performance on a small GPU (GeForce
GTX 950M). The best algorithm regarding quality metrics was the FSRCNN with the
coordinate convolution operator. Nonetheless, the SRGAN and its variants, even
with a low PSNR, had the best human perceptual quality, as can be seen from the
clarity of the outputs in Appendix 6. Their better performance may be related
to the fact that they use additional losses that optimize for a less blurred
and more textured output (Adversarial and Perceptual Loss).
Table 2: PSNR and SSIM validation results (2,000 images from CelebA) for 4x
upscaling
| Model | PSNR | SSIM | Avg. Inference Time (s) | Inference FPS on GPU | Trained-on Channel |
|---|---|---|---|---|---|
| SRCNN | 27.95 | 0.7973 | 0.0100 | 100.00 | Y |
| SRCNN Coord | 27.98 | 0.7966 | 0.0153 | 65.36 | Y |
| SubCNN | 28.08 | 0.8003 | 0.0035 | 285.71 | Y |
| SubCNN Coord | 28.13 | 0.8022 | 0.0048 | 208.33 | Y |
| FSRCNN | 28.45 | 0.8104 | 0.0046 | 217.39 | RGB |
| FSRCNN Coord | 28.88 | 0.8175 | 0.0048 | 208.33 | RGB |
| FSRCNN Coord Face Loss | 28.78 | 0.8151 | 0.0047 | 212.77 | RGB |
| SRGAN | 27.02 | 0.8077 | 0.0336 | 29.76 | RGB |
| SRGAN Coord | 27.28 | 0.8078 | 0.0382 | 26.18 | RGB |
| SRGAN FaceLoss | 26.63 | 0.8083 | 0.0384 | 26.04 | RGB |
| SRGAN Coord FaceLoss | 26.69 | 0.7984 | 0.0404 | 24.75 | RGB |
| Bicubic | 27.93 | 0.7881 | 0.0012 | 833.33 | - |
$\rightarrow$ Red color highlights the architectures proposed in this thesis.
Architectures modified with the CoordConv operator improved their PSNR in 100%
of cases, with an average increase of 0.16 dB. Their SSIM presented
fluctuations, and no significant gains could be measured. This may be explained
by the fact that all architectures were optimized to decrease the pixel-wise
difference according to the MSE loss, which directly improves the PSNR of the
image but often smooths and blurs the output. Such blur decreases the
perceptual quality of the image and, consequently, perceptual quality metrics
such as the SSIM.
The SR models that were optimized using the customized loss function (FaceLoss)
presented, in general, lower image quality metrics than their counterparts
trained only on their standard objective. They also presented a longer
inference time, which may indicate that fewer weights were zero, since more
computation was measured. Despite that, every evaluated architecture was able
to reach real-time inference on the small GPU used for training and testing the
models. The interpolation method (Bicubic) was still around three times faster
than the fastest SR model, which indicates that, in situations where processing
time weighs more than accuracy, interpolation methods can still be a reasonable
choice.
#### 2 Task 2 - Watch-List ICB-RW (1x5 Problem)
The results for the task of recognizing which suspect was correctly identified
by the surveillance camera can be seen in Table 3. Fine-tuning was not
performed; therefore, the recognition pipeline had no direct samples resembling
the gallery/probe data of this experiment.
The best results came from SRGAN and its variants. In the setting where the
face image was not resized beforehand, only these algorithms were able to
overcome the baseline (Bicubic). When the cropped face had its resolution
reduced to simulate lower-resolution scenarios, they were still the best
performing group, but other networks were also able to beat the interpolation
method.
Table 3: Rank-1 accuracy (in %) for the face recognition task (1x5) on 90
subjects from the Quis-Campi dataset (ICB-RW), N = 5 suspects
| Model | No Resize, No Margin | Size 40, No Margin | Size 40, Margin 1.3 |
|---|---|---|---|
| SRCNN | 74.89 | 61.78 | 64.67 |
| SRCNN Coord | 78.22 | 62.89 | 67.11 |
| SubCNN | 64.67 | 55.33 | 57.56 |
| SubCNN Coord | 67.33 | 58.44 | 62.22 |
| FSRCNN | 72.00 | 60.22 | 65.56 |
| FSRCNN Coord | 78.22 | 64.00 | 69.11 |
| FSRCNN Coord Face Loss | 78.44 | 63.33 | 67.33 |
| SRGAN | 85.78 | 77.33 | 72.00 |
| SRGAN Coord | 83.80 | 78.40 | 68.90 |
| SRGAN FaceLoss | 85.11 | 78.22 | 71.78 |
| SRGAN Coord FaceLoss | 84.89 | 76.00 | 71.11 |
| Bicubic | 83.11 | 62.00 | 62.44 |
$\rightarrow$ Red color highlights the architectures proposed in this thesis.
As the faces in the ICB-RW probe data naturally had almost the same resolution
as those in the gallery (around 200x200), the 1.3 margin did not have much
effect on the accuracy results, which was also noticed in the work of Abdollahi
et al. [2019]. Since in this experiment the sizes of the gallery and probe data
were always matched before upsampling, the 40x40 resizing may have caused a
drastic loss of high-frequency details and discriminative features, which can
be noticed in the decreased accuracy ratings.
Every accuracy result on this task, except for the SubCNN and its variant,
overcame the results of Ghaleb et al. [2018], the best performing system in the
ICB-RW challenge at that time. That system achieved a Rank-1 IR rate of 71.7%,
which differs from the proposed SRGAN and SRGAN FaceLoss by margins of 17.08%
and 16.41%, respectively. These two architectures would also beat the results
of Abdollahi et al. [2019], who achieved a Rank-1 rate of 84.22%, the highest
registered so far.
#### 3 Task 3 - Attendance Evaluation (1xN Problem)
The results for the task of evaluating which students are present in each
classroom can be seen in Tables 4, 5, and 6. As in the previous task, neither
training nor fine-tuning was performed using gallery/probe data for this
experiment.
For Classroom 1, the best performing algorithms across settings were the SRGAN
and its variants. The FSRCNN-based models reached the highest single accuracy
(89.58%), but did not perform consistently. In the setting where no resizing
was employed, all architectures were able to beat the baseline. Yet, when the
margin was applied to increase the amount of information within the image,
bicubic interpolation overcame even the SRGAN and two of its variants.
Table 4: Accuracy (in %) for the face recognition task in Classroom 1 (1xN), N
= 15 students
| Model | No Resize, No Margin | Size 40, No Margin | Size 40, Margin 1.3 |
|---|---|---|---|
| SRCNN | 62.50 | 66.67 | 83.33 |
| SRCNN Coord | 64.58 | 70.83 | 85.42 |
| SubCNN | 68.75 | 64.58 | 66.67 |
| SubCNN Coord | 70.83 | 66.67 | 68.75 |
| FSRCNN | 66.67 | 72.92 | 83.33 |
| FSRCNN Coord | 66.67 | 66.67 | 89.58 |
| FSRCNN Coord Face Loss | 64.58 | 68.75 | 89.58 |
| SRGAN | 85.42 | 81.25 | 75.00 |
| SRGAN Coord | 81.25 | 79.17 | 81.25 |
| SRGAN FaceLoss | 85.42 | 83.33 | 83.33 |
| SRGAN Coord FaceLoss | 81.25 | 81.25 | 79.17 |
| Bicubic | 58.33 | 66.67 | 83.33 |
$\rightarrow$ Red color highlights the architectures proposed in this thesis.
For Classroom 2, it was possible to conclude that the images had a high degree
of degradation, since the highest accuracy was around 73%, and the setting
without resizing and margin adjustment peaked at around 55%. The best results
came from the SRGAN with coordinate convolution in the setting where the margin
was adjusted. All the other SRGAN-related models happened to hit the same
accuracy there, which hints that they produced very similar predictions.
For Classroom 3, the probe images might have had a higher resolution than in
the previous classroom experiments, since the setting without margin adjustment
presented the highest rates, similar to the results on the simulated ICB-RW
benchmark. The most consistent models were the proposed architectures based on
SRGAN, with results around 80%, which overcame the baseline in every setting.
However, when the size and margin were adjusted, FSRCNN presented the best
results, and the outcomes for the other models became similar.
Table 5: Accuracy (in %) for the face recognition task in Classroom 2 (1xN), N
= 16 students
| Model | No Resize, No Margin | Size 40, No Margin | Size 40, Margin 1.3 |
|---|---|---|---|
| SRCNN | 26.56 | 23.99 | 58.06 |
| SRCNN Coord | 26.56 | 31.32 | 50.92 |
| SubCNN | 23.99 | 28.94 | 41.39 |
| SubCNN Coord | 21.61 | 16.85 | 43.77 |
| FSRCNN | 33.88 | 26.56 | 51.10 |
| FSRCNN Coord | 31.50 | 26.56 | 55.86 |
| FSRCNN Coord Face Loss | 33.88 | 26.56 | 55.86 |
| SRGAN | 39.01 | 45.97 | 67.95 |
| SRGAN Coord | 38.83 | 36.45 | 72.71 |
| SRGAN FaceLoss | 48.72 | 48.72 | 67.95 |
| SRGAN Coord FaceLoss | 53.66 | 38.64 | 67.95 |
| Bicubic | 26.74 | 26.56 | 55.49 |
$\rightarrow$ Red color highlights the architectures proposed in this thesis.
Table 6: Accuracy (in %) for the face recognition task in Classroom 3 (1xN), N
= 12 students
| Model | No Resize, No Margin | Size 40, No Margin | Size 40, Margin 1.3 |
|---|---|---|---|
| SRCNN | 40.00 | 40.00 | 66.67 |
| SRCNN Coord | 36.67 | 56.67 | 56.67 |
| SubCNN | 40.00 | 46.67 | 60.00 |
| SubCNN Coord | 40.00 | 43.33 | 60.00 |
| FSRCNN | 46.67 | 43.33 | 80.00 |
| FSRCNN Coord | 46.67 | 43.33 | 73.33 |
| FSRCNN Coord Face Loss | 40.00 | 33.33 | 80.00 |
| SRGAN | 70.00 | 63.33 | 70.00 |
| SRGAN Coord | 76.67 | 76.67 | 73.33 |
| SRGAN FaceLoss | 83.33 | 76.67 | 70.00 |
| SRGAN Coord FaceLoss | 86.67 | 80.00 | 70.00 |
| Bicubic | 46.67 | 40.00 | 66.67 |
$\rightarrow$ Red color highlights the architectures proposed in this thesis.
For all experiments in this task, increasing the amount of information within
the image with a 1.3 margin resulted in an accuracy increase for most
algorithms. This did help the SR networks provide more discriminative face
images for the feature extractor, since the average accuracy results were
generally higher in that setting.
Figure 6: Performance results for accuracy on ICB-RW and UFS Classroom 1 data.
Proposed architectures have an asterisk in their names. (Source: Author’s own)
Figure 7: Performance results for accuracy on UFS Classroom 2 and UFS Classroom
3 data. Proposed architectures have an asterisk in their names. (Source:
Author’s own)
#### 4 Hypotheses Discussion
For checking if there is correlation between image quality metrics and
accuracy performance, the application of a correlation test was necessary.
Since it was not possible to confirm if the original image data distribution
approached normality, the Spearman Correlation Coefficient was calculated
since it is specific for nonparametric data. Its results can be seen in Table
7.
Table 7: Spearman correlation of PSNR/SSIM vs. accuracy
| | PSNR | SSIM |
|---|---|---|
| Spearman Correlation Coefficient | -0.3625 | 0.1159 |
| p-value | 5.671e-07 | 0.121 |
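The test in Table 7 can be reproduced with SciPy's `spearmanr`; the toy data
below is deliberately anti-correlated to mirror the PSNR case, and is not the
thesis data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
psnr = rng.normal(28, 1, size=50)                      # synthetic PSNR scores
accuracy = -0.5 * psnr + rng.normal(0, 0.4, size=50)   # anti-correlated accuracy

# rho < 0 with a small p-value mirrors the PSNR column of Table 7.
rho, p_value = spearmanr(psnr, accuracy)
```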
As can be seen in Table 7, the null hypothesis (there is no dependency) can be
rejected in the case of PSNR, since its p-value was well below 0.05, but not in
the case of SSIM. The rejected hypothesis for PSNR, however, comes with a
negative correlation coefficient. This outcome can be explained by the fact
that the best performing models for accuracy (SRGAN and its variants) had the
worst PSNR results, while models with higher PSNR occasionally performed poorly
in the face verification step.
Regarding accuracy, all architectures that took advantage of coordinate
convolutions kept or increased their accuracy 72% of the time across all
experiments. However, even with these positive results, applying the Wilcoxon
Signed-Rank Test to check whether CoordConvs brought substantial gains yielded
a p-value of 0.083. This was not sufficient to reject the null hypothesis,
which may indicate either that the result distributions were the same or that
there was not enough data to evidence a difference. Since the evaluation of the
different architectures targeted real-world LR data, which in this case is
limited, the CoordConv operator still needs to be explored further to obtain a
real measure of its potential for the SR task. Even so, it has already shown
promising results for general SR in face biometrics and can be employed in
different architectures performing similar tasks.
Architectures optimized with the “FaceLoss”, specially designed for feature
extraction, kept or increased their accuracy 77% of the time across all
experiments. They also presented the highest average accuracy across all
experiments and most of the best results, as shown in Figures 6 and 7. The
Wilcoxon Signed-Rank Test presented a p-value of 0.03, which means that the
null hypothesis (results have the same distribution) can be rejected.
Therefore, architectures trained with this loss produced results from a
different distribution than the same models trained with their standard losses.
According to Figures 6 and 7, the verification accuracy increased when the 1.3
margin was applied to naturally LR images (UFS Classroom data) and decreased
when probe images were in HR and already matched the gallery resolution (ICB-RW
data). This suggests that a simple system deciding what to apply to detected
faces depending on their size could be elaborated to take advantage of this
characteristic and provide significant improvements to an LR face recognition
pipeline.
In general, the deeper architectures (from the literature and adapted)
performed better than the shallower ones and the baseline in the executed
experiments. Even the simplest SR models (SRCNN, SubCNN, FSRCNN and their
variants) were able to beat bicubic interpolation, if only by a small margin,
as shown in Figures 6 and 7. Nevertheless, this small margin may not justify
the use of a simple DL model for upscaling images prior to recognition in
real-world situations, since the loss of FPS is still substantial and needs to
be taken into consideration.
### 3 Final Considerations
In this chapter, the experiments elaborated to validate the initial hypotheses
were presented, and the results regarding the proposed architectures were
discussed. The use of deep SR models for enhancing image features before
verification proved to be a beneficial step in the low-resolution face
recognition pipeline. The models that made use of adversarial training (GANs)
with different loss functions presented not only the best results regarding
visual perception, as seen in Appendix 6, but also the best recognition
performance, since they were able to produce clearer images for feature
extraction.
The final chapter presents the final considerations taking into account the
whole thesis and the conclusions.
## Chapter 5 Conclusions
Super-resolution has widely shown in recent works, and reaffirmed in this
thesis, both its ability to enhance the clarity and visual quality of images
and its potential to improve the accuracy of face recognition systems.
In this work, several state-of-the-art deep learning architectures for SR were
implemented, modified, and evaluated with the objective of enhancing face
recognition performance on two naturally LR datasets. SR models with the
proposed change, an operator that takes position information into account,
achieved equal or better accuracy 72% of the time compared to the same
architectures without adaptation. Meanwhile, SR architectures optimized with
the loss that prioritizes better feature extraction obtained comparable or
improved accuracy 77% of the time.
The deeper networks (SRGAN and its variants) presented results that were good
both for human visual perception and for the recognition evaluation criteria.
The Inception ResNet V1 feature extractor with the proposed SRGAN
FaceLoss architecture, the best performing SR model, had an average accuracy
in all experiments of 73.54%, which overcame the bicubic interpolation
baseline by 17%. Also, this same setup achieved 85.11% on the simulated
ICB-RW dataset, an indication that such a strategy might surpass the SOTA
model for the original benchmark, whose accuracy was around 84%.
The study performed in this thesis also confirmed that most of the other
recently proposed SR deep learning architectures, even the not so deep ones,
could be effective for recovering discriminant features of LR face images in
real-world settings. In addition, the deeper networks presented real-time
capabilities when using a small GPU, which can facilitate their implementation
in real-world surveillance systems.
Regarding the specific objectives elaborated for this thesis, some points are
important to be highlighted:
* •
Even though SR algorithms are usually optimized for upscaling an image and
obtaining good PSNR values, the super-resolved image will not always present
the most discriminative features for face recognition.
* •
Coordinated Convolutions presented gains for both image quality metrics and
verification accuracy in the pipeline when applied to SR network
architectures. However, they still need further study, since the result
distributions of networks with and without them presented high similarity, as
confirmed by the Wilcoxon Signed-Rank Test.
* •
The use of a custom loss function to enhance discriminative face features in
images brings solid gains to the accuracy of an LR face recognition pipeline.
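As an illustration of the coordinate-augmentation idea in the second point above, a minimal numpy sketch of appending CoordConv-style coordinate channels to a batch of feature maps, in the spirit of [Liu et al. 2018], could look as follows (the shapes and the helper name are illustrative, not taken from the thesis code):

```python
import numpy as np

def add_coord_channels(x):
    """Append two normalized coordinate channels (rows and columns in [-1, 1])
    to a batch of feature maps x of shape (batch, channels, H, W)."""
    b, _, h, w = x.shape
    rows = np.tile(np.linspace(-1.0, 1.0, h)[:, None], (1, w))   # varies along H
    cols = np.tile(np.linspace(-1.0, 1.0, w)[None, :], (h, 1))   # varies along W
    coords = np.broadcast_to(np.stack([rows, cols]), (b, 2, h, w))
    # A regular convolution applied after this concatenation becomes a
    # "coordinated" convolution: it can condition on absolute position.
    return np.concatenate([x, coords], axis=1)

features = np.random.randn(4, 64, 40, 40)     # e.g. 40x40 LR face feature maps
augmented = add_coord_channels(features)      # shape (4, 66, 40, 40)
```

The two extra channels are all a convolution needs to break translation invariance where absolute position matters, such as the fixed layout of facial landmarks in aligned crops.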
As future work, a simple system could be developed to apply the best margin to
the crop size of a detected face, ensuring there is enough information
available for feature extraction. A comparative study would nevertheless be
necessary to evaluate at which size the recognition accuracy starts to drop.
Also, different strategies may be employed to tackle the deficiencies
presented in an SR pipeline for LR face recognition. One of the problems is
the need for a fixed upscale factor when training the SR network. This
challenge may be tackled by the use of a meta-upscaling strategy based on the
recent work of [Hu et al. 2019] where, for example, the weights of the
upscaling network may be predicted based on knowledge acquired from
meta-features extracted from similar face datasets.
Another challenge to be solved is that some deep generative SR architectures
may not be able to achieve real-time performance in CPU or mobile devices due
to higher computational complexity. To overcome that, the use of distillation
methods to prune these networks may improve inference time at the cost of some
performance loss [Zhang et al. 2018].
## References
* [Abello and Jr. 2019] ABELLO, A. A.; JR., R. H. Optimizing super resolution for face recognition. In: SBC. _SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)_. [S.l.], 2019.
* [Aghdam et al. 2019] AGHDAM, O. A. et al. Exploring factors for improving low resolution face recognition. In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_. [S.l.: s.n.], 2019. p. 0–0.
* [Agustsson and Timofte 2017] AGUSTSSON, E.; TIMOFTE, R. Ntire 2017 challenge on single image super-resolution: Dataset and study. In: _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_. [S.l.: s.n.], 2017.
* [Ataer-Cansizoglu et al. 2019] ATAER-CANSIZOGLU, E. et al. Verification of very low-resolution faces using an identity-preserving deep face super-resolution network. _arXiv preprint arXiv:1903.10974_ , 2019.
* [Baker and Kanade 2000] BAKER, S.; KANADE, T. Hallucinating faces. _fg_ , Citeseer, v. 2000, p. 83–88, 2000.
* [Bao et al. 2017] BAO, J. et al. Cvae-gan: fine-grained image generation through asymmetric training. In: _Proceedings of the IEEE International Conference on Computer Vision_. [S.l.: s.n.], 2017. p. 2745–2754.
* [Begin and Ferrie 2006] BEGIN, I.; FERRIE, F. P. Comparison of super-resolution algorithms using image quality measures. In: IEEE. _The 3rd Canadian Conference on Computer and Robot Vision (CRV’06)_. [S.l.], 2006. p. 72–72.
* [Berger, Peyrard and Baccouche 2016] BERGER, G.; PEYRARD, C.; BACCOUCHE, M. Boosting face recognition via neural super-resolution. In: _ESANN_. [S.l.: s.n.], 2016.
* [Cao et al. 2018] CAO, Q. et al. Vggface2: A dataset for recognising faces across pose and age. In: IEEE. _2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)_. [S.l.], 2018. p. 67–74.
* [Chang et al. 2017] CHANG, C.-H. et al. Memory and perception-based facial image reconstruction. _Scientific reports_ , Nature Publishing Group, v. 7, n. 1, p. 6499, 2017.
* [Chen et al. 2016] CHEN, Y. et al. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. _IEEE Transactions on Geoscience and Remote Sensing_ , IEEE, v. 54, n. 10, p. 6232–6251, 2016.
* [Chen et al. 2018] CHEN, Y. et al. Fsrnet: End-to-end learning face super-resolution with facial priors. _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , p. 2492–2501, 2018.
* [Cole et al. 2017] COLE, F. et al. Synthesizing normalized faces from facial identity features. In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. [S.l.: s.n.], 2017. p. 3703–3712.
* [Crosswhite et al. 2018] CROSSWHITE, N. et al. Template adaptation for face verification and identification. _Image and Vision Computing_ , Elsevier, v. 79, p. 35–48, 2018.
* [Deshpande 2017] DESHPANDE, A. _A Beginner‘s Guide To Understanding Convolutional Neural Networks_. 2017. https://adeshpande3.github.io/A-Beginner’s-Guide-To-Understanding-Convolutional-Neural-Networks/. Accessed: 2019-11-25.
* [Donahue, McAuley and Puckette 2018] DONAHUE, C.; MCAULEY, J.; PUCKETTE, M. Synthesizing audio with generative adversarial networks. _arXiv preprint arXiv:1802.04208_ , 2018.
* [Dong et al. 2015] DONG, C. et al. Image super-resolution using deep convolutional networks. _IEEE transactions on pattern analysis and machine intelligence_ , IEEE, v. 38, n. 2, p. 295–307, 2015.
* [Dong, Loy and Tang 2016] DONG, C.; LOY, C. C.; TANG, X. Accelerating the super-resolution convolutional neural network. In: SPRINGER. _European conference on computer vision_. [S.l.], 2016. p. 391–407.
* [ElSayed et al. 2018] ELSAYED, A. et al. Unsupervised face recognition in the wild using high-dimensional features under super-resolution and 3d alignment effect. _Signal, Image and Video Processing_ , Springer, v. 12, n. 7, p. 1353–1360, 2018.
* [Faceli et al. 2011] FACELI, K. et al. Inteligência artificial: Uma abordagem de aprendizado de máquina. 2011.
* [Feldstein 2019] FELDSTEIN, S. The global expansion of ai surveillance. _Carnegie Endowment. https://carnegieendowment. org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847_ , 2019.
* [Gecer et al. 2019] GECER, B. et al. Ganfit: Generative adversarial network fitting for high fidelity 3d face reconstruction. In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. [S.l.: s.n.], 2019. p. 1155–1164.
* [Gerchberg 1974] GERCHBERG, R. Super-resolution through error energy reduction. _Optica Acta: International Journal of Optics_ , Taylor & Francis, v. 21, n. 9, p. 709–720, 1974.
* [Ghaleb et al. 2018] GHALEB, E. et al. Deep representation and score normalization for face recognition under mismatched conditions. _Ieee Intelligent Systems_ , IEEE, v. 33, n. 3, p. 43–46, 2018.
* [Gobbini and Haxby 2007] GOBBINI, M. I.; HAXBY, J. V. Neural systems for recognition of familiar faces. _Neuropsychologia_ , Elsevier, v. 45, n. 1, p. 32–41, 2007.
* [Goodfellow et al. 2014] GOODFELLOW, I. et al. Generative adversarial nets. In: _Advances in neural information processing systems_. [S.l.: s.n.], 2014. p. 2672–2680.
* [Haris, Shakhnarovich and Ukita 2018] HARIS, M.; SHAKHNAROVICH, G.; UKITA, N. Task-driven super resolution: Object detection in low-resolution images. _arXiv preprint arXiv:1803.11316_ , 2018.
* [Haxby, Hoffman and Gobbini 2000] HAXBY, J. V.; HOFFMAN, E. A.; GOBBINI, M. I. The distributed human neural system for face perception. _Trends in cognitive sciences_ , Elsevier, v. 4, n. 6, p. 223–233, 2000.
* [He et al. 2016] HE, K. et al. Deep residual learning for image recognition. In: _Proceedings of the IEEE conference on computer vision and pattern recognition_. [S.l.: s.n.], 2016. p. 770–778.
* [Hennings-Yeomans, Baker and Kumar 2008] HENNINGS-YEOMANS, P. H.; BAKER, S.; KUMAR, B. V. Simultaneous super-resolution and feature extraction for recognition of low-resolution faces. In: IEEE. _2008 IEEE Conference on Computer Vision and Pattern Recognition_. [S.l.], 2008. p. 1–8.
* [Howard et al. 2018] HOWARD, J. et al. _fastai_. [S.l.]: GitHub, 2018. https://github.com/fastai/fastai.
* [Hu et al. 2019] HU, X. et al. Meta-sr: A magnification-arbitrary network for super-resolution. In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. [S.l.: s.n.], 2019. p. 1575–1584.
* [Huang and Liu 2015] HUANG, D.; LIU, H. A short survey of image super resolution algorithms. _Journal of Computer Science Technology Updates_ , v. 2, n. 2, p. 19–29, 2015.
* [Huang et al. 2008] HUANG, G. B. et al. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: . [S.l.: s.n.], 2008.
* [Jiao and Zhao 2019] JIAO, L.; ZHAO, J. A survey on the new generation of deep learning in image processing. _IEEE Access_ , IEEE, 2019.
* [Johnson, Alahi and Fei-Fei 2016] JOHNSON, J.; ALAHI, A.; FEI-FEI, L. Perceptual losses for real-time style transfer and super-resolution. In: SPRINGER. _European conference on computer vision_. [S.l.], 2016. p. 694–711.
* [Johnson et al. 1991] JOHNSON, M. H. et al. Newborns’ preferential tracking of face-like stimuli and its subsequent decline. _Cognition_ , Elsevier, v. 40, n. 1-2, p. 1–19, 1991.
* [Kim et al. 2019] KIM, D. et al. Progressive face super-resolution via attention to facial landmark. _arXiv preprint arXiv:1908.08239_ , 2019.
* [Krizhevsky, Sutskever and Hinton 2012] KRIZHEVSKY, A.; SUTSKEVER, I.; HINTON, G. E. Imagenet classification with deep convolutional neural networks. In: _Advances in neural information processing systems_. [S.l.: s.n.], 2012. p. 1097–1105.
* [LeCun, Bengio and Hinton 2015] LECUN, Y.; BENGIO, Y.; HINTON, G. Deep learning. _nature_ , Nature Publishing Group, v. 521, n. 7553, p. 436, 2015.
* [Ledig et al. 2017] LEDIG, C. et al. Photo-realistic single image super-resolution using a generative adversarial network. In: _Proceedings of the IEEE conference on computer vision and pattern recognition_. [S.l.: s.n.], 2017. p. 4681–4690.
* [Li, Feng and Kuo 2018] LI, J.; FENG, J.; KUO, C.-C. J. Deep convolutional neural network for latent fingerprint enhancement. _Signal Processing: Image Communication_ , Elsevier, v. 60, p. 52–63, 2018\.
* [Li et al. 2019] LI, P. et al. On low-resolution face recognition in the wild: Comparisons and new techniques. _IEEE Transactions on Information Forensics and Security_ , IEEE, v. 14, n. 8, p. 2000–2012, 2019.
* [Liu et al. 2018] LIU, R. et al. An intriguing failing of convolutional neural networks and the coordconv solution. In: _Advances in Neural Information Processing Systems_. [S.l.: s.n.], 2018. p. 9605–9616.
* [Liu et al. 2015] LIU, Z. et al. Deep learning face attributes in the wild. In: _Proceedings of International Conference on Computer Vision (ICCV)_. [S.l.: s.n.], 2015.
* [Ma et al. 2018] MA, S. et al. Da-gan: Instance-level image translation by deep attention generative adversarial networks. In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. [S.l.: s.n.], 2018. p. 5657–5666.
* [Muttu and Virani 2015] MUTTU, Y.; VIRANI, H. Effective face detection, feature extraction & neural network based approaches for facial expression recognition. In: IEEE. _2015 International Conference on Information Processing (ICIP)_. [S.l.], 2015. p. 102–107.
* [Neves, Moreno and Proença 2017] NEVES, J.; MORENO, J.; PROENÇA, H. Quis-campi: an annotated multi-biometrics data feed from surveillance scenarios. _IET Biometrics_ , IET, v. 7, n. 4, p. 371–379, 2017.
* [Neves and Proença 2016] NEVES, J.; PROENÇA, H. Icb-rw 2016: International challenge on biometric recognition in the wild. In: IEEE. _2016 International Conference on Biometrics (ICB)_. [S.l.], 2016. p. 1–6.
* [Nguyen and Bai 2010] NGUYEN, H. V.; BAI, L. Cosine similarity metric learning for face verification. In: SPRINGER. _Asian conference on computer vision_. [S.l.], 2010. p. 709–720.
* [Nguyen et al. 2018] NGUYEN, K. et al. Super-resolution for biometrics: A comprehensive survey. _Pattern Recognition_ , Elsevier, v. 78, p. 23–42, 2018.
* [Ouyang et al. 2018] OUYANG, N. et al. Deep joint super-resolution and feature mapping for low resolution face recognition. In: IEEE. _2018 IEEE International Conference of Safety Produce Informatization (IICSPI)_. [S.l.], 2018. p. 849–852.
* [Peyrard, Mamalet and Garcia 2015] PEYRARD, C.; MAMALET, F.; GARCIA, C. A comparison between multi-layer perceptrons and convolutional neural networks for text image super-resolution. In: _VISAPP (1)_. [S.l.: s.n.], 2015. p. 84–91.
* [Purkait, Pal and Chanda 2014] PURKAIT, P.; PAL, N. R.; CHANDA, B. A fuzzy-rule-based approach for single frame super resolution. _IEEE Transactions on Image processing_ , IEEE, v. 23, n. 5, p. 2277–2290, 2014.
* [Rasti et al. 2016] RASTI, P. et al. Convolutional neural network super resolution for face recognition in surveillance monitoring. In: SPRINGER. _International conference on articulated motion and deformable objects_. [S.l.], 2016. p. 175–184.
* [Reibman, Bell and Gray 2006] REIBMAN, A. R.; BELL, R. M.; GRAY, S. Quality assessment for super-resolution image enhancement. In: IEEE. _2006 International Conference on Image Processing_. [S.l.], 2006. p. 2017–2020.
* [Ribeiro and Uhl 2017] RIBEIRO, E.; UHL, A. Exploring texture transfer learning via convolutional neural networks for iris super resolution. In: IEEE. _2017 International Conference of the Biometrics Special Interest Group (BIOSIG)_. [S.l.], 2017. p. 1–5.
* [Sá 2019] SÁ, J. M. D. d. C. _Registro de Classe Automatizado Utilizando Reconhecimento Facial_. 74 p. Bachelor’s Thesis — Universidade Federal de Sergipe, 2019.
* [Schroff, Kalenichenko and Philbin 2015] SCHROFF, F.; KALENICHENKO, D.; PHILBIN, J. Facenet: A unified embedding for face recognition and clustering. In: _Proceedings of the IEEE conference on computer vision and pattern recognition_. [S.l.: s.n.], 2015. p. 815–823.
* [Shi et al. 2016] SHI, W. et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: _Proceedings of the IEEE conference on computer vision and pattern recognition_. [S.l.: s.n.], 2016. p. 1874–1883.
* [Szegedy et al. 2017] SZEGEDY, C. et al. Inception-v4, inception-resnet and the impact of residual connections on learning. In: _Thirty-First AAAI Conference on Artificial Intelligence_. [S.l.: s.n.], 2017.
* [Szegedy et al. 2015] SZEGEDY, C. et al. Going deeper with convolutions. In: _Proceedings of the IEEE conference on computer vision and pattern recognition_. [S.l.: s.n.], 2015. p. 1–9.
* [Tian and Ma 2011] TIAN, J.; MA, K.-K. A survey on super-resolution imaging. _Signal, Image and Video Processing_ , Springer, v. 5, n. 3, p. 329–342, 2011.
* [Tian, Suzuki and Koike 2010] TIAN, L.; SUZUKI, A.; KOIKE, H. Task-oriented evaluation of super-resolution techniques. In: IEEE. _2010 20th International Conference on Pattern Recognition_. [S.l.], 2010. p. 493–498.
* [Timofte et al. 2018] TIMOFTE, R. et al. Ntire 2018 challenge on single image super-resolution: Methods and results. In: _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_. [S.l.: s.n.], 2018.
* [Upadhyay, Singhal and Singh 2019] UPADHYAY, U.; SINGHAL, B.; SINGH, M. Spinal stenosis detection in mri using modular coordinate convolutional attention networks. In: IEEE. _2019 International Joint Conference on Neural Networks (IJCNN)_. [S.l.], 2019. p. 1–8.
* [Vedadi and Shirani 2014] VEDADI, F.; SHIRANI, S. A map-based image interpolation method via viterbi decoding of markov chains of interpolation functions. _IEEE Transactions on Image Processing_ , IEEE, v. 23, n. 1, p. 424–438, 2014.
* [Vezhnevets 2002] VEZHNEVETS, V. Face and facial feature tracking for natural human-computer interface. In: . [S.l.: s.n.], 2002.
* [Wang and Deng 2018] WANG, M.; DENG, W. Deep face recognition: A survey. _arXiv preprint arXiv:1804.06655_ , 2018.
* [Wang et al. 2016] WANG, Z. et al. Studying very low resolution recognition using deep networks. In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. [S.l.: s.n.], 2016. p. 4792–4800.
* [Wang, Chen and Hoi 2019] WANG, Z.; CHEN, J.; HOI, S. C. Deep learning for image super-resolution: A survey. _arXiv preprint arXiv:1902.06068_ , 2019.
* [Wang, She and Ward 2019] WANG, Z.; SHE, Q.; WARD, T. E. Generative adversarial networks: A survey and taxonomy. _arXiv preprint arXiv:1906.01529_ , 2019.
* [Xu, Chen and Jia 2019] XU, X.; CHEN, Y.-C.; JIA, J. View independent generative adversarial network for novel view synthesis. In: _Proceedings of the IEEE International Conference on Computer Vision_. [S.l.: s.n.], 2019. p. 7791–7800.
* [Yang et al. 2017] YANG, Z. et al. Semi-supervised qa with generative domain-adaptive nets. _arXiv preprint arXiv:1702.02206_ , 2017.
* [Yu et al. 2018] YU, J. et al. Generative image inpainting with contextual attention. In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. [S.l.: s.n.], 2018. p. 5505–5514.
* [Yu et al. 2018] YU, X. et al. Face super-resolution guided by facial component heatmaps. In: _Proceedings of the European Conference on Computer Vision (ECCV)_. [S.l.: s.n.], 2018. p. 217–233.
* [Zafeiriou, Zhang and Zhang 2015] ZAFEIRIOU, S.; ZHANG, C.; ZHANG, Z. A survey on face detection in the wild: past, present and future. _Computer Vision and Image Understanding_ , Elsevier, v. 138, p. 1–24, 2015.
* [Zafeirouli et al. 2019] ZAFEIROULI, K. et al. Efficient, lightweight, coordinate-based network for image super resolution. In: IEEE. _2019 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC)_. [S.l.], 2019. p. 1–9.
* [Zhang and Zhang 2010] ZHANG, C.; ZHANG, Z. A survey of recent advances in face detection. 2010.
* [Zhang et al. 2016] ZHANG, K. et al. Joint face detection and alignment using multitask cascaded convolutional networks. _IEEE Signal Processing Letters_ , IEEE, v. 23, n. 10, p. 1499–1503, 2016.
* [Zhang et al. 2018] ZHANG, L. et al. Adaptive importance learning for improving lightweight image super-resolution network. _arXiv preprint arXiv:1806.01576_ , 2018.
## Chapter 6 Perceptual Results of SR Algorithms for 4x Upscaling
This chapter presents the perceptual results for the SR experiment described
in Section 1. All the images were upscaled from 40x40 to 160x160 pixels.
## Chapter 7 Average Training Losses for the SR Algorithms
This chapter presents the behavior of the training losses of each network,
obtained during the experiment described in Section 1.
1 University of Cassino and Southern Latium, Cassino, FR 03043, Italy
2 University of Salerno, Fisciano, SA 84084, Italy
# Sinc-based convolutional neural networks for EEG-BCI-based motor imagery classification
_This work was supported by MIUR (Ministry of Education, University and Research, Law 232/2016, Department of Excellence)._
Alessandro Bria 11 0000-0002-2895-6544 Claudio Marrocco 11 0000-0003-0840-7350
Francesco Tortorella 22 0000-0002-5033-9323
###### Abstract
Brain-Computer Interfaces (BCI) based on motor imagery translate mental motor
images recognized from the electroencephalogram (EEG) to control commands. EEG
patterns of different imagination tasks, e.g. hand and foot movements, are
effectively classified with machine learning techniques using band power
features. Recently, also Convolutional Neural Networks (CNNs) that learn both
effective features and classifiers simultaneously from raw EEG data have been
applied. However, CNNs have two major drawbacks: (i) they have a very large
number of parameters, which thus requires a very large number of training
examples; and (ii) they are not designed to explicitly learn features in the
frequency domain. To overcome these limitations, in this work we introduce
Sinc-EEGNet, a lightweight CNN architecture that combines learnable band-pass
and depthwise convolutional filters. Experimental results obtained on the
publicly available BCI Competition IV Dataset 2a show that our approach
outperforms reference methods in terms of classification accuracy.
###### Keywords:
Motor imagery, Brain-computer interface, Convolutional neural networks.
## 1 Introduction
A Brain-Computer Interface (BCI) translates brain signals into messages or
commands for an interactive task. This enables a wide range of applications
from clinic to industry for both patients and healthy users, such as
rehabilitation devices for stroke patients [22], controllable wheelchairs and
prostheses [35], new gaming input devices [8], to name a few. Among different
brain activity monitoring modalities, noninvasive approaches based on
electroencephalography (EEG) use multiple electrodes placed on the skull
surface to record the activity of cerebral cortical neurons [5] and are widely
used in many BCI studies thanks to their ease of implementation, reduced costs
and high availability [20]. The most popular EEG signals used to control BCI
systems are P300 evoked potentials, steady-state visual evoked potentials
(SSVEP) and motor imagery (MI) which is the focus of our work. Specifically,
MI refers to the imagination of moving certain body parts without actual
movement [28]. Different MI tasks result into discriminable patterns observed
from the oscillatory activities in the sensorimotor cortex region of the brain
[21]. Imagination of left hand, right hand, foot and tongue movements are the
most investigated MI tasks in the BCI literature [15].
Handcrafted feature extraction methods coupled with conventional classifiers
like Linear Discriminant Analysis (LDA), Support Vector Machines (SVM),
Bayesian classifiers, and Nearest Neighbor classifiers have been used in a
number of studies for MI task recognition [15]. A widely used approach is to
extract and combine band power features from different channel (electrode)
signals to capture connectivity patterns among different regions of the
sensorimotor cortex and, ultimately, their interaction and engagement with
each other. This is thought to play a fundamental role in accomplishing
movement imaginations [14]. Common spatial patterns (_CSP_) were introduced to
this end in [23] and received a large share of research in the field [4, 16,
25, 26, 34], but their effectiveness depended on subject-specific frequency
bands. This problem was alleviated by the popular filter bank CSP (_FBCSP_)
[1] that decomposes the EEG into multiple frequency pass bands prior to
spatial filtering, feature selection and classification. This method also won
the BCI Competition IV [33] for 4-class motor imagery recognition (Dataset 2a)
and was since used as a reference method for comparison.
Given their effectiveness in other fields [9, 29], deep learning methods, and
in particular Convolutional Neural Networks (CNNs) [13], have the potential to
learn both effective features and classifiers simultaneously from raw EEG
data. Several studies have recently explored deep learning for MI
classification [17, 27, 31, 32, 12]. Notably, [27] showed that their _Shallow
ConvNet_ (one temporal convolution, one spatial convolution, squaring and mean
pooling) could outperform their _Deep ConvNet_ (temporal convolution, spatial
convolution, then three layers of standard convolution) as well as _FBCSP_. A
similar result was achieved by [12] with _EEGNet_ , a compact lightweight
network (one temporal convolution, one depthwise convolution, one separable
convolution, and a fully connected layer) that compared favorably with _Deep
ConvNet_ and performed on par with _Shallow ConvNet_. These results indicate
that shallow networks having a small number of parameters are beneficial for
MI applications that are characterized by very small numbers of training
examples because of the difficulty in performing millions or even thousands of
mental commands during training sessions.
In this paper we propose _Sinc-EEGNet_ , a 4-layer CNN architecture that
combines the benefits of both EEG frequency band decomposition of classical
methods, such as _FBCSP_ , and automatic feature learning and extraction of
lightweight CNN models, such as _EEGNet_. In particular, the first
convolutional layer of our network is restricted to use parameterized sinc
functions that implement band pass filters. The subsequent depthwise and
separable convolution layers learn a spatial filter and combine the features
from the different frequency bands previously selected, which are then
inputted to the final classification layer. An overview of the proposed
architecture is shown in Fig. 1.
Figure 1: An overview of the proposed _Sinc-EEGNet_ architecture.
## 2 Sinc layer
A standard CNN convolution layer applied on a one-dimensional discrete time-
domain signal $s[t]$ performs convolutions with $F$ one-dimensional filters
$h_{1},...,h_{F}$ each having $K$ learnable weights. Conversely, the Sinc
layer performs convolutions with $F$ predefined functions $g_{1},...,g_{F}$
each implementing a learnable bandpass filter $G$ as the difference between
two low-pass filters in the frequency domain:
$G[f]=rect\left(\frac{f}{2f_{2}}\right)-rect\left(\frac{f}{2f_{1}}\right)$ (1)
where $f_{1}$ and $f_{2}>f_{1}$ are the learnable low and high cutoff
frequencies. Using the inverse Fourier transform, the time-domain filter $g$
is obtained as:
$g[t]=2f_{2}sinc(2\pi f_{2}t)-2f_{1}sinc(2\pi f_{1}t)$ (2)
where the sinc function is defined as $sinc(x)=sin(x)/x$. The cutoff
frequencies are initialized by sampling from a Gaussian distribution with mean
and variance equal to $f_{s}/4$, where $f_{s}$ represents the sampling
frequency of the input signal. The constraint $f_{2}>f_{1}$ is implemented by
using in Eq. 2 the following cutoff frequencies $f_{1}^{abs}$ and
$f_{2}^{abs}$:
$f_{1}^{abs}=|f_{1}|$ (3) $f_{2}^{abs}=f_{1}+|f_{2}-f_{1}|.$ (4)
Because of the discrete approximation of $g$, the resulting bandpass filter is
nonideal and may present ripples in the passband and limited attenuation in
the stopband. To alleviate this problem, we multiply $g$ with the popular
Hamming window $w$ [18] defined as:
$w[t]=0.54-0.46\cdot\cos\left(\frac{2\pi t}{L}\right)$ (5)
where $L$ is the number of discrete samples used to approximate $g$. The sinc
convolutional layer transforming the input signal $s[t]$ into the band-
decomposed output signal $o_{1},...,o_{F}$ is then defined by:
$o_{i}[t]=s[t]*\left(g_{i}[t]\cdot w[t]\right).$ (6)
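To make Eqs. (2), (5) and (6) concrete, a minimal numpy sketch of one windowed sinc band-pass filter applied to a toy single-channel signal is given below. The cutoff values, filter length, and sampling rate are illustrative choices; in the actual Sinc layer, $f_{1}$ and $f_{2}$ are trainable parameters updated by backpropagation, not fixed constants as here.

```python
import numpy as np

def sinc(x):
    # sinc(x) = sin(x)/x, with the removable singularity sinc(0) = 1
    return np.where(x == 0, 1.0, np.sin(x) / np.where(x == 0, 1.0, x))

def sinc_bandpass(f1_hz, f2_hz, L, fs):
    """Time-domain band-pass g[t] (Eq. 2) smoothed by a Hamming window (Eq. 5).
    Cutoffs are given in Hz and normalized by the sampling frequency fs."""
    f1, f2 = f1_hz / fs, f2_hz / fs
    t = np.arange(L + 1) - L // 2                       # symmetric support
    g = 2 * f2 * sinc(2 * np.pi * f2 * t) - 2 * f1 * sinc(2 * np.pi * f1 * t)
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(L + 1) / L)  # Hamming window
    return g * w

fs, L = 128, 64
h = sinc_bandpass(8.0, 30.0, L, fs)                     # illustrative mu/beta band
s = np.random.randn(512)                                # toy EEG trace
o = np.convolve(s, h, mode="same")                      # Eq. 6: o = s * (g . w)
```

Inspecting the magnitude response of `h` confirms the band-pass behavior: strong attenuation at DC and in the stopband, near-unity gain inside the 8–30 Hz band.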
## 3 The Sinc-EEGNet architecture
The proposed Sinc-EEGNet is a combination and adaptation of the Sinc
convolution layer originally proposed by [24] for speech recognition with
_SincNet_ , and of _EEGNet_ [12] as regards the spatial filtering implemented
with depthwise convolution. Specifically, the architecture of Sinc-EEGNet (see
Fig. 1 and Table 1) consists of four blocks described as follows:
1. 1.
_Sinc Convolution_. The first block takes in input a signal having $C$
channels and $T$ time samples, and performs convolution with $F_{1}$ sinc
filters having $L$ time samples. Compared to the first standard convolution
layer used in other CNN architectures such as _EEGNet_ , here the sinc filters
are explicitly designed to learn the optimal band decomposition for the MI
classification task and, when the CNN is trained with data from a single BCI
user, this will reflect the peculiarities of the EEG oscillatory activity of
that user. Another advantage is the reduced number of parameters, from
$K\times F_{1}$ of the standard convolution to $2\times F_{1}$ of the sinc
convolution. This also implies faster convergence and better generalization
capabilities especially when using small training sets as in the case of MI
applications. Computational efficiency is also improved since the filters are
symmetric: the convolution can be computed on one half of the filter and the
result mirrored to the other half.
2. 2.
_Depthwise Convolution_. Similarly to _EEGNet_ [12], we use a Depthwise
Convolution layer [6] of size $(C,1)$ to learn $D$ spatial filters for each of
the $F_{1}$ inputted feature maps across the channel dimension, for a total of
$F_{2}=D\times F_{1}$ filters. Combined with the first layer that performs
optimal band decomposition, this two-step sequence can be considered a
‘learnable’ version of the well known _FBCSP_ [1] approach.
3. 3.
_Separable Convolution_. Similarly to _EEGNet_ , we summarize each feature map
individually using a Depthwise Convolution of size $(1,16)$, and then merge
the outputs using $F_{2}$ $(1,1)$ Pointwise Convolutions. This allows optimal
combination of the information within and across feature maps.
4. 4.
_Classification_. The last layer is a fully connected layer that receives the
flattened features from the previous layer and maps them to 4 decision classes
(left hand, right hand, foot, tongue).
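The parameter saving claimed in block 1 can be checked in a couple of lines; the filter length and filter count below are illustrative values, not those used in the experiments:

```python
# Standard temporal convolution: K learnable taps per filter.
# Sinc convolution: only 2 learnable cutoff frequencies per filter.
K, F1 = 65, 8                        # illustrative filter length and count
standard_params = K * F1             # K x F1 weights
sinc_params = 2 * F1                 # 2 x F1 cutoffs (f1, f2 per filter)
reduction = standard_params / sinc_params
```

With these values, the sinc parameterization uses 16 parameters where a standard first layer would use 520, and the gap widens with longer filters.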
At the end of blocks 1-3 we apply Average Pooling of size $(1,4)$ for
dimensionality reduction, Layer Normalization [2], Dropout regularization
[30], and CELU activation [3]. Layer Normalization, as opposed to Batch
Normalization [10] used in other architectures (_EEGNet_ , _Deep ConvNet_ ,
_Shallow ConvNet_), calculates the mean and variance across channels instead
of batches. This is especially useful for BCI datasets characterized by a
high number of channels (electrodes) and small batch sizes resulting from the
scarcity of training data. As to the CELU activation, it is an improvement
over the ELU activation [7] used in other architectures (_EEGNet_ , _Deep
ConvNet_ , _Shallow ConvNet_) since its derivative does not diverge and it
contains both the linear transfer function and ReLU [19] activation as special
cases.
Table (tab:architecture): Sinc-EEGNet architecture, where $C$ = number of
channels, $T$ = number of time points, $L$ = number of sinc samples, $F_{1}$ =
number of temporal filters, $D$ = number of spatial filters, $F_{2}$ = number
of pointwise filters, and $N$ = number of classes.
Block | Layer | # filters | Size | # params | Output | Activation
---|---|---|---|---|---|---
1 | Input | | | | $(C,T)$ |
| Reshape | | | | $(1,C,T)$ |
| Sinc Convolution | $F_{1}$ | $(1,L)$ | $2\times F_{1}$ | $(F_{1},C,T)$ |
| Average Pooling | | $(1,4)$ | | $(F_{1},C,\frac{T}{4})$ |
| Layer Normalization | | | $2\times F_{1}$ | $(F_{1},C,\frac{T}{4})$ | CELU
| Dropout | | | | $(F_{1},C,\frac{T}{4})$ |
2 | Depthwise Convolution | $D\times F_{1}$ | $(C,1)$ | $C\times D\times F_{1}$ | $(D\times F_{1},1,\frac{T}{4})$ |
| Average Pooling | | $(1,4)$ | | $(D\times F_{1},1,\frac{T}{16})$ |
| Layer Normalization | | | $2\times D\times F_{1}$ | $(D\times F_{1},1,\frac{T}{16})$ | CELU
| Dropout | | | | $(D\times F_{1},1,\frac{T}{16})$ |
| Depthwise Convolution | $D\times F_{1}$ | $(1,16)$ | $16\times D\times F_{1}$ | $(D\times F_{1},1,\frac{T}{16})$ |
| Layer Normalization | | | $2\times D\times F_{1}$ | $(D\times F_{1},1,\frac{T}{16})$ | CELU
| Dropout | | | | $(D\times F_{1},1,\frac{T}{16})$ |
3 | Pointwise Convolution | $F_{2}$ | $(1,1)$ | $F_{2}\times(D\times F_{1})$ | $(F_{2},1,\frac{T}{16})$ |
| Average Pooling | | $(1,4)$ | | $(F_{2},1,\frac{T}{64})$ |
| Layer Normalization | | | $2\times F_{2}$ | $(F_{2},1,\frac{T}{64})$ | CELU
| Dropout | | | | $(F_{2},1,\frac{T}{64})$ |
4 | Flatten | | | | $F_{2}\times\frac{T}{64}$ |
| Fully Connected | | | $N\times F_{2}\times\frac{T}{64}$ | $N$ | Softmax
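The output shapes in the architecture table can be traced programmatically. The helper below is an illustrative sketch; the concrete values used in the test (e.g. $F_{2}=64$, $T=512$) are assumptions for illustration, not values fixed by the paper:

```python
def sinc_eegnet_shapes(C, T, F1, D, F2, N):
    """Trace the tensor shape through the four blocks of the
    Sinc-EEGNet architecture table; T must be divisible by 64."""
    assert T % 64 == 0
    shapes = [("input", (1, C, T))]
    shapes.append(("sinc conv + pool", (F1, C, T // 4)))            # block 1
    shapes.append(("depthwise conv + pool", (D * F1, 1, T // 16)))  # block 2
    shapes.append(("pointwise conv + pool", (F2, 1, T // 64)))      # block 3
    shapes.append(("flatten", (F2 * (T // 64),)))                   # block 4
    shapes.append(("fully connected", (N,)))
    return shapes
```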
## 4 Experiments
The EEG data used in this study come from the BCI Competition IV Dataset 2A
[33]. The data consist of four classes of imagined movements of the left and
right hands, feet and tongue, recorded from 9 subjects during two separate
sessions, each composed of 288 trials. The EEG data were originally recorded
using $C=22$ Ag/AgCl electrodes (channels), sampled at 250 Hz and bandpass
filtered between 0.5 and 100 Hz. We applied a further bandpass filter to
suppress frequencies above 64 Hz and resampled the time series to 128 Hz, as
in [12]. Z-score standardization was used to normalize the signals within each
trial.
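A minimal preprocessing pipeline for one trial might look as follows. This is a sketch using SciPy; the filter order and design (a Butterworth low-pass applied forward-backward) are assumptions, since the paper does not specify them:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess_trial(x, fs_in=250, fs_out=128, cutoff=64.0, order=4):
    """Low-pass filter a (channels, samples) trial below `cutoff` Hz,
    resample it to `fs_out` Hz, and z-score standardize the trial."""
    sos = butter(order, cutoff, btype="low", fs=fs_in, output="sos")
    x = sosfiltfilt(sos, x, axis=-1)               # zero-phase filtering
    x = resample_poly(x, up=fs_out, down=fs_in, axis=-1)
    # z-score standardization within the trial
    return (x - x.mean()) / x.std()
```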
EEG data were split into training and test sets according to three different
paradigms:
1. _Competition-based_. The training and test sets were the same as indicated in
the BCI Competition. This allowed us to compare our method with reference
methods from the literature that reported their results using the same data
split, namely _FBCSP_ [1], _Deep ConvNet_ [27], and _Shallow ConvNet_ [27], as
well as with all other participants in the original challenge.
2. _Within-subject_. For each subject, a dedicated experiment was performed
using only data from that subject, taken from the BCI Competition training and
test sets.
3. _Cross-subject_. For each subject, a dedicated experiment was performed
using only data from the other subjects (from the BCI Competition training
set) for training, and only data from that subject (from the BCI Competition
test set) for testing.
In all the experiments, we performed four-class classification using accuracy
as the summary measure. In the within- and cross-subject experiments, we also
trained and tested an _EEGNet_ with $F_{1}=8$ and $D=2$, which was the best
performing CNN reported in [12]. As to our _Sinc-EEGNet_ , we chose $D=2$ for
a fair comparison with _EEGNet_ , but we set $F_{1}=32$, since our Sinc layer
is specifically designed for frequency band decomposition and can therefore
benefit from learning a wide variety of bandpass filters. This can be seen in
Fig. 2, which shows the 32 distinct filters learnt by _Sinc-EEGNet_ in the
competition-based experiment. The number of samples $L$ used to discretize the
sinc functions was set to $64$, a trade-off between approximation precision
and computational complexity.
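For reference, a SincNet-style bandpass kernel [24] is the difference of two sinc low-pass kernels. The sketch below shows one way to discretize it; the Hamming taper and the normalization are assumptions, as the paper does not detail them:

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, L=64, fs=128):
    """Discretize an ideal bandpass filter with band [f_low, f_high] Hz
    as the difference of two sinc low-pass kernels of length L, tapered
    by a Hamming window. Only f_low and f_high are learnable, which is
    why the Sinc Convolution layer has only 2 * F1 parameters."""
    n = np.arange(L) - (L - 1) / 2            # sample index centred on 0
    fl, fh = f_low / fs, f_high / fs          # normalized frequencies
    kernel = 2 * fh * np.sinc(2 * fh * n) - 2 * fl * np.sinc(2 * fl * n)
    return kernel * np.hamming(L)
```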
All the CNNs were trained using backpropagation and the Adam optimizer [11], with
weight updates that proceeded in batches of $20$ samples for $100$ epochs. The
base learning rate was set to $10^{-3}$. Momentum and weight decay were set
respectively to $0.9$ and $2\times 10^{-2}$. Following [12], for the Dropout
layers we chose $p=0.5$ for within-subject experiments, and $p=0.25$ for
competition-based and cross-subject experiments that used more training data
and thus required less regularization. The loss function was categorical
cross-entropy.
Table (tab:results): Comparison of classification accuracies between our
method and reference methods on the BCI Competition IV-2A.
Method | Accuracy
---|---
_FBCSP_ | $68.0\%$
_Deep ConvNet_ | $70.9\%$
_Shallow ConvNet_ | $73.7\%$
_Sinc-EEGNet_ | $75.39\%$
## 5 Results
The comparison between _Sinc-EEGNet_ and the reference methods from the
literature on the competition-based data split is reported in Table
(tab:results). Remarkably, _Sinc-EEGNet_ outperforms all other methods in
terms of accuracy and sets a new state of the art on the BCI Competition IV-2A,
with an accuracy of $75.39\%$ that improves on _FBCSP_ by $7.39\%$. As to the
within- and cross-subject experiments, _EEGNet_ yielded average accuracies of
$60.99\%$ and $58.75\%$, respectively, and _Sinc-EEGNet_ of $70.56\%$ and
$58.98\%$. Also in this case, our method exhibited superior performance, with
an improvement of almost $10\%$ accuracy in the more practically relevant
within-subject classification.
## 6 Conclusions
In this work we proposed _Sinc-EEGNet_ , a lightweight convolutional neural
network for EEG-BCI-based motor imagery classification that learns optimal
band decomposition and spatial filtering, mimicking the behavior of the well-
known _FBCSP_ but learning the filters directly from the raw EEG data. Our
method outperformed reference methods from the literature, including _FBCSP_
and _EEGNet_ , on the publicly available BCI Competition IV-2A dataset. To the
best of our knowledge, this is the first work that validated the use of
learnable bandpass filters in the first layer of a CNN for EEG signal
classification. Future work will investigate alternative frequency filters,
such as the Difference of Gaussians (DoG) filter, which are less subject to
discrete approximation issues, and architecture variants that explore
different spatial filtering and feature map combination approaches.
Figure 2: The 32 sinc filters learnt by _Sinc-EEGNet_ on the BCI Competition
IV Dataset 2A.
## References
* [1] Ang, K.K., Chin, Z.Y., Wang, C., Guan, C., Zhang, H.: Filter bank common spatial pattern algorithm on bci competition iv datasets 2a and 2b. Frontiers in neuroscience 6, 39 (2012)
* [2] Ba, J.L., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)
* [3] Barron, J.T.: Continuously differentiable exponential linear units. arXiv preprint arXiv:1704.07483 (2017)
* [4] Blankertz, B., Tomioka, R., Lemm, S., Kawanabe, M., Muller, K.R.: Optimizing spatial filters for robust eeg single-trial analysis. IEEE Signal processing magazine 25(1), 41–56 (2007)
* [5] Britton, J.W., Frey, L.C., Hopp, J.L., Korb, P., Koubeissi, M.Z., Lievens, W.E., Pestana-Knight, E.M., St, E.L.: Electroencephalography (EEG): An introductory text and atlas of normal and abnormal findings in adults, children, and infants. American Epilepsy Society, Chicago (2016)
* [6] Chollet, F.: Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1251–1258 (2017)
* [7] Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289 (2015)
* [8] Coyle, D., Principe, J., Lotte, F., Nijholt, A.: Guest editorial: Brain/neuronal-computer game interfaces and interaction. IEEE Transactions on Computational Intelligence and AI in games 5(2), 77–81 (2013)
* [9] He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1026–1034 (2015)
* [10] Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)
* [11] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
* [12] Lawhern, V.J., Solon, A.J., Waytowich, N.R., Gordon, S.M., Hung, C.P., Lance, B.J.: Eegnet: a compact convolutional neural network for eeg-based brain–computer interfaces. Journal of neural engineering 15(5), 056013 (2018)
* [13] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
* [14] Liu, T., Li, F., Jiang, Y., Zhang, T., Wang, F., Gong, D., Li, P., Ma, T., Qiu, K., Li, H., et al.: Cortical dynamic causality network for auditory-motor tasks. IEEE Transactions on Neural Systems and Rehabilitation Engineering 25(8), 1092–1099 (2016)
* [15] Lotte, F., Bougrain, L., Cichocki, A., Clerc, M., Congedo, M., Rakotomamonjy, A., Yger, F.: A review of classification algorithms for eeg-based brain–computer interfaces: a 10 year update. Journal of neural engineering 15(3), 031005 (2018)
* [16] Lotte, F., Guan, C.: Regularizing common spatial patterns to improve bci designs: unified theory and new algorithms. IEEE Transactions on biomedical Engineering 58(2), 355–362 (2010)
* [17] Lu, N., Li, T., Ren, X., Miao, H.: A deep learning scheme for motor imagery classification based on restricted boltzmann machines. IEEE transactions on neural systems and rehabilitation engineering 25(6), 566–576 (2016)
* [18] Mitra, S.K.: Digital Signal Processing. McGraw-Hill Science/Engineering/Math (2005)
* [19] Nair, V., Hinton, G.E.: Rectified linear units improve restricted boltzmann machines. In: ICML (2010)
* [20] Nicolas-Alonso, L.F., Gomez-Gil, J.: Brain computer interfaces, a review. sensors 12(2), 1211–1279 (2012)
* [21] Pfurtscheller, G., Da Silva, F.L.: Event-related eeg/meg synchronization and desynchronization: basic principles. Clinical neurophysiology 110(11), 1842–1857 (1999)
* [22] Pichiorri, F., Morone, G., Petti, M., Toppi, J., Pisotta, I., Molinari, M., Paolucci, S., Inghilleri, M., Astolfi, L., Cincotti, F., et al.: Brain–computer interface boosts motor imagery practice during stroke recovery. Annals of neurology 77(5), 851–865 (2015)
* [23] Ramoser, H., Muller-Gerking, J., Pfurtscheller, G.: Optimal spatial filtering of single trial eeg during imagined hand movement. IEEE transactions on rehabilitation engineering 8(4), 441–446 (2000)
* [24] Ravanelli, M., Bengio, Y.: Interpretable convolutional filters with sincnet. arXiv preprint arXiv:1811.09725 (2018)
* [25] Rivet, B., Cecotti, H., Phlypo, R., Bertrand, O., Maby, E., Mattout, J.: Eeg sensor selection by sparse spatial filtering in p300 speller brain-computer interface. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology. pp. 5379–5382. IEEE (2010)
* [26] Samek, W., Kawanabe, M., Müller, K.R.: Divergence-based framework for common spatial patterns algorithms. IEEE Reviews in Biomedical Engineering 7, 50–72 (2013)
* [27] Schirrmeister, R.T., Springenberg, J.T., Fiederer, L.D.J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F., Burgard, W., Ball, T.: Deep learning with convolutional neural networks for eeg decoding and visualization. Human brain mapping 38(11), 5391–5420 (2017)
* [28] Schuster, C., Hilfiker, R., Amft, O., Scheidhauer, A., Andrews, B., Butler, J., Kischka, U., Ettlin, T.: Best practice for motor imagery: a systematic literature review on motor imagery training elements in five different disciplines. BMC medicine 9(1), 75 (2011)
* [29] Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al.: Mastering the game of Go with deep neural networks and tree search. nature 529(7587), 484 (2016)
* [30] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15(1), 1929–1958 (2014)
* [31] Sturm, I., Lapuschkin, S., Samek, W., Müller, K.R.: Interpretable deep neural networks for single-trial eeg classification. Journal of neuroscience methods 274, 141–145 (2016)
* [32] Tabar, Y.R., Halici, U.: A novel deep learning approach for classification of eeg motor imagery signals. Journal of neural engineering 14(1), 016003 (2016)
* [33] Tangermann, M., Müller, K.R., Aertsen, A., Birbaumer, N., Braun, C., Brunner, C., Leeb, R., Mehring, C., Miller, K.J., Mueller-Putz, G., et al.: Review of the bci competition iv. Frontiers in neuroscience 6, 55 (2012)
* [34] Yger, F., Lotte, F., Sugiyama, M.: Averaging covariance matrices for eeg signal classification based on the csp: an empirical study. In: 2015 23rd European Signal Processing Conference (EUSIPCO). pp. 2721–2725. IEEE (2015)
* [35] Zhang, R., Li, Y., Yan, Y., Zhang, H., Wu, S., Yu, T., Gu, Z.: Control of a wheelchair in an indoor environment based on a brain–computer interface and automated navigation. IEEE transactions on neural systems and rehabilitation engineering 24(1), 128–139 (2015)
# Soft pions and transport near the chiral critical point
Eduardo Grossi<EMAIL_ADDRESS>Center for Nuclear Theory,
Department of Physics and Astronomy, Stony Brook University, Stony Brook, New
York 11794, USA Alexander Soloviev<EMAIL_ADDRESS>Center
for Nuclear Theory, Department of Physics and Astronomy, Stony Brook
University, Stony Brook, New York 11794, USA Derek Teaney
<EMAIL_ADDRESS>Center for Nuclear Theory, Department of Physics
and Astronomy, Stony Brook University, Stony Brook, New York 11794, USA
Fanglida Yan<EMAIL_ADDRESS>Center for Nuclear Theory,
Department of Physics and Astronomy, Stony Brook University, Stony Brook, New
York 11794, USA
###### Abstract
Background: During the expansion of a heavy ion collision, the system passes
close to the $O(4)$ critical point of QCD, and thus the fluctuations of the
order parameter $(\sigma,\vec{\pi})$ are expected to be enhanced. Purpose: Our
goal is to compute how these enhanced fluctuations modify the transport
coefficients of QCD near the pseudo-critical point. We also make a
phenomenological estimate for how chiral fluctuations could affect the
momentum spectrum of soft pions. Method: We first formulate the appropriate
stochastic hydrodynamic equations close to the $O(4)$ critical point. Then,
working in mean field, we determine the correlation functions of the stress
tensor and the currents which result from this stochastic real time theory,
and use these correlation functions to determine the scaling behavior of the
transport coefficients. The hydrodynamic theory also describes the propagation
of pion waves, fixing the scaling behavior of the dispersion curve of soft
pions. Results: We present scaling functions for the shear viscosity and the
charge conductivities near the pseudo-critical point, and estimate the
absolute magnitude of the critical contributions to these parameters and the
bulk viscosity. Using the calculated pion dispersion curve, we estimate the
expected critical enhancement of soft pion yields, and this estimate provides
a plausible explanation for the excess seen in experiment relative to ordinary
hydrodynamic computations. Conclusions: Our results motivate further
phenomenological and numerical work on the implications of chiral symmetry on
real time properties of thermal QCD near the pseudo-critical point.
###### pacs:
## I Introduction
Measurements on heavy ion collisions at the Relativistic Heavy Ion Collider
(RHIC) and Large Hadron Collider (LHC) are remarkably well described by
viscous hydrodynamics, which predicts the measured flow harmonics and their
correlations in exquisite detail Jeon and Heinz (2015); Heinz and Snellings
(2013). These hydrodynamic simulations are based on a theory of ordinary
hydrodynamics, which ignores chiral symmetry breaking at low temperature and
the associated chiral phase transition. This is reasonable at finite quark
mass, where chiral symmetry is always explicitly broken. Nevertheless, if the
quark mass is small enough, one would expect that the pattern of chiral
symmetry breaking would provide a useful organizing principle for
hydrodynamics, increasing its predictive power.
As a starting point for this reorganization, let us describe the appropriate
hydrodynamic theory in the limit of two exactly massless quark flavors. In
this limit the symmetry group of the microscopic theory is $U(1)\times
SU_{L}(2)\times SU_{R}(2)$. At high temperatures where the symmetry of the
Lagrangian is reflected in the symmetry of the thermal state, the hydrodynamic
variables are simply the conserved charges $Q$, i.e. the energy and momentum,
the iso-vector and iso-axial-vector charges, and the baryon number. At low
temperatures the symmetry of the thermal state is spontaneously broken to
$U(1)\times SU_{V}(2)$, and the three massless Goldstone modes associated with
broken symmetry (the pions) must be added to the original list of
hydrodynamic variables, $\\{Q,\pi\\}$ Son (2000). The theory in this case is
akin to a non-abelian superfluid. The hydrodynamic theories at high and low
temperatures are separated by the chiral critical point, which is somewhat
analogous to the critical point separating the normal and superfluid phases of
helium Rajagopal and Wilczek (1993); Son and Stephanov (2002a). At the
critical point the hydrodynamic variables consist of the conserved charges $Q$
and a four component order parameter field
$\Sigma\sim\langle\bar{q}_{R}q_{L}\rangle$. For $T\gg T_{c}$, $\Sigma$ can be
consistently integrated out, leaving an ordinary fluid state with only the
conserved charges, while for $T\ll T_{c}$ the phase of $\Sigma$ fluctuates,
reducing the hydrodynamics to a superfluid theory consisting of conserved
charges and the Goldstone modes, $\\{Q,\pi\\}$.
In the presence of a small but finite quark mass the theory is only
approximately invariant under $SU_{L}(2)\times SU_{R}(2)$. The iso-axial
vector charge is only approximately conserved, and the $\pi$ fluctuations are
only approximately massless. In addition, the system never passes directly
through the chiral phase transition, and the correlation length remains
finite, but large. Thus at large enough distances, the theory asymptotes to
ordinary hydrodynamics, and the usual approach based on ordinary hydrodynamics
is fine. However, at shorter distances (but still macroscopic) the
fluctuations of the order parameter $\Sigma$ need to be taken into account to
accurately model the system with hydrodynamics. The thermal fluctuations of
the $\Sigma$ field are incorporated into the equation of state and the
transport coefficients of the ordinary fluid theory. By writing down the
hydrodynamics theory including the $\Sigma$, and then integrating out these
modes, one can precisely determine how the critical modes affect the equation
of state, and modify the transport coefficients of the ordinary theory, such
as the shear and bulk viscosities. This computation will determine the
behavior of these parameters in the vicinity of the chiral critical point. Our
goal in this paper is to perform this computation, albeit in a mean-field
approximation.
The validity of the approach relies on the smallness of quark mass and the
proximity of the $O(4)$ critical point in real world QCD. We are encouraged by
Euclidean lattice QCD simulations Ding _et al._ (2019); Kaczmarek _et al._
(2020) at the physical pion mass and smaller, which show that aspects of QCD
thermodynamics, such as the chiral susceptibility, can be qualitatively, and
even quantitatively, understood using $O(4)$ scaling functions. These scaling
functions dictate the behavior of the singular part of the temperature
dependence (at fixed quark mass) of the equation of state near the pseudo-
critical point. It seems reasonable to expect that the real time $O(4)$
scaling functions can be used to prescribe the temperature dependence of the
transport parameters in the critical region with similar precision.
The singular parts of the equation of state can be determined by simulating an
appropriate $O(4)$ symmetric Landau-Ginzburg field theory on a 3D lattice. In
effect, this means that the singular part is captured by a classical effective
field theory (EFT) describing the equilibrium fluctuations of a classical
order parameter field. In practice, the classical EFT is replaced by a spin
model and lattice techniques are used to determine the scaling functions with
high precision Engels and Karsch (2014, 2012); Engels and Vogt (2010). For
dynamical quantities the appropriate classical real time EFT is stochastic
hydrodynamics Hohenberg and Halperin (1977). The hydrodynamic equations of
motion were written down many years ago in an insightful paper by Wilczek and
Rajagopal Rajagopal and Wilczek (1993). We will present a somewhat different
derivation of their equations of motion in Sect. III. A useful
phenomenological model which tracks the amplitude of the chiral condensate
(but not the phase) within hydrodynamics was presented in Nahrgang _et al._
(2012).
A numerical simulation of the critical theory could be used to find the two
point functions of the conserved currents, which in turn determine the scaling
functions for the transport coefficients near the critical point. In the
current paper we will work in a mean field approximation, in order to get a
qualitative understanding for the expected scaling functions from such
simulations, and to estimate the absolute magnitude of critical contributions
from the $\Sigma$ field to the transport coefficients. We will reserve a
numerical simulation for future work.
Currently, there is no experimental evidence for the long wavelength
fluctuations of the chiral condensate, which are the hallmark of the chiral
phase transition111Note, however, that there is an observed enhancement of
thermal dileptons in a specific mass range, which can be taken as evidence
that the vector and axial-vector correlation functions are becoming
degenerate, as expected when chiral symmetry is partially restored
(for a review, see Rapp (2009)). As a first attempt to remedy the situation,
the current paper will point out an enhancement of soft pions seen in the
experimental data, and recall that such an enhancement is an expected
signature of the $O(4)$ critical point. An estimate for the magnitude of the
enhancement expected from critical fluctuations encourages us to explore this
explanation for the observed excess in future work. In addition, the proposed
upgrade to the ALICE detector Colella (2019) will be more sensitive to low
$p_{T}$ pions, and this new experimental thrust provides us with additional
encouragement.
This paper builds upon our earlier work Grossi _et al._ (2020), which
computed the contributions of soft pions to the transport coefficients of QCD
in the broken phase, and then estimated how these contributions would evolve
as one approaches the critical point from below. We will recover these earlier
results as a low temperature limit of the more general expressions presented
here. However, while the current paper works with mean field theory, the
previous results are more general and are expected to match the full numerical
simulations of stochastic hydrodynamics.
An outline of the paper is as follows: to set notation we will first describe
the thermodynamics of the $O(4)$ scaling theory, and compare results from
previous numerical simulations with the mean field expectations used in this
work. Then in Sect. III, we will provide a general formulation of the
hydrodynamic equations of motion, and compute the linearized propagators for
the theory. These propagators will then be used in Sect. IV to compute the
scaling behavior of the transport coefficients in a mean field approximation,
and the results are analyzed. Finally, in Sect. V we estimate the enhanced
yield of soft pions near the chiral critical point and outline future
directions.
## II Thermodynamic preliminaries
### II.1 The magnetic equation of state at mean field
The order parameter of the chiral phase transition is a four component field
$\phi_{a}$ transforming in the defining representation of $O(4)$, and reflects
the fluctuations of the chiral condensate,
$\Sigma(x)\equiv-\bar{q}_{R}q_{L}(x)/F^{2}_{0}$ where $F_{0}$ is the vacuum
pion decay constant. $\Sigma$ is expanded in terms of the four component
field222 Roman indices at the beginning of the alphabet $a,b,c\ldots$ are
$O(4)$ indices. Isospin indices are denoted as
$s,s^{\prime},s^{\prime\prime},\ldots$ etc, and are notated with a vector
$\vec{\pi}$. Minkowski indices are $\mu,\nu,\rho,\ldots$ etc, while spatial
indices are $i,j,k,\ldots$. To lighten the notation, contraction of flavor
indices are denoted by a dot, e.g. $H\cdot\phi=H_{a}\phi_{a}$ and $\mu\cdot
n=\mu_{ab}\cdot n_{ab}$. More explicitly, the chiral condensate is
$\left[\Sigma\right]^{\ell_{1}}_{\;\ell_{2}}=-\bar{q}_{R\ell_{2}}\,q_{L}^{\ell_{1}}(x)/F_{0}^{2}$,
where $q^{\ell}=(u,d)$, and $\Sigma$ transforms as $\Sigma\rightarrow
g_{L}\Sigma g_{R}^{\dagger}$ under a chiral rotation.
$\Sigma\equiv\phi_{a}\tau_{a}=\sigma\,\mathbb{I}+i\vec{\pi}\cdot\vec{\lambda}\,,$
(1)
where the matrices of the Clifford algebra
$\tau_{a}=(\mathbb{I},-i\vec{\lambda})$ are an amalgamation of the unit matrix
and the Pauli matrices, $\vec{\lambda}$, transforming together as a vector
under $O(4)$. The components of $\phi_{a}$ are the sigma and pion fields
$(\phi_{a})\equiv(\sigma,-\vec{\pi})\,,$ (2)
where the minus sign appearing in (2) is a slightly inconvenient convention.
Given the approximate $O(4)$ symmetry of the microscopic theory, there are
approximately conserved charge densities, $n_{ab}$, transforming as an
antisymmetric tensor under $O(4)$. $n_{ij}$ is the conserved iso-vector
charge, while $n_{0i}$ is the partially conserved iso-axial-vector charge. The
associated chemical potential is $\mu_{ab}$, and we also adopt the notation
$\mu^{2}=\mu_{ab}\mu_{ab}$.
Close to the critical point, the Euclidean action that determines the
fluctuations in the order parameter $\phi_{a}$ at fixed temperature $T$ and
chemical potential $\mu_{ab}$ is
$\displaystyle{S}_{E}=$ $\displaystyle\beta\int
d^{3}x\,\left(p_{0}(T)+\frac{1}{2}\chi_{0}\mu^{2}-\frac{1}{2}\partial_{i}\phi_{a}\,\partial^{i}\phi_{a}-V({\Phi})+H_{a}\phi_{a}\right)\,,$
(3)
where the scalar potential is of Landau-Ginzburg form
$V({\Phi})=\frac{1}{2}m_{0}^{2}(T){\Phi}^{2}+\frac{\lambda}{4}{\Phi}^{4}\,.$
(4)
Here we have defined
${\Phi}\equiv\sqrt{\phi_{a}\phi_{a}}\,,$ (5)
and
$m_{0}^{2}(T)\equiv{\mathfrak{m}}^{2}\,\frac{(T-T_{c})}{T_{c}}\equiv{\mathfrak{m}}^{2}t,$
(6)
where $t$ is the reduced temperature, and ${\mathfrak{m}}$ is of order the
vacuum sigma mass or higher and is a constant. $H_{a}\equiv(H,0,0,0)$ is the
applied magnetic field or quark mass. At this point $T$ and $\mu$ are simply
constants but have been brought inside the integral in eq. (3) to motivate the
hydrodynamic analysis of Sect. III, where $T,\mu$ depend slowly on space and
time.
The full partition function takes the form
$Z=\int D\phi\,e^{{S}_{E}[\phi,H]}\,,$ (7)
and reproduces the critical behavior of the equation of state. In spite of its
well known shortcomings, we will work in a mean field approximation. The mean
field takes the form
$\left\langle\phi_{a}\right\rangle=(\bar{\sigma},0)\,,$ (8)
where $\bar{\sigma}$ is the real solution to
$m_{0}^{2}(T)\,\bar{\sigma}+\lambda\,\bar{\sigma}^{3}-H=0\,.$ (9)
It is straightforward to show that the solution to (9) takes the scaling form
$\bar{\sigma}=\frac{{\mathfrak{m}}}{\sqrt{\lambda}}\,h^{1/3}f_{G}(z)\,,\qquad\text{with}\quad
z=th^{-2/3}\,,$ (10)
where we have defined the reduced field
$\displaystyle h\equiv$
$\displaystyle\frac{H\sqrt{\lambda}}{{\mathfrak{m}}^{3}}.$ (11)
$z$ is the mean field scaling variable, and $f_{G}(z)$ is the (mean field)
scaling function for the magnetic equation of state. As we will see in the
next section, the pion screening mass on the critical line, $z=0$, is given
by333Here and below the subscript $c$, such as $m_{c}$ and $m_{\sigma c}$,
indicates that the quantity is being evaluated on the critical line $z=0$.
Later we will introduce $v_{c}^{2}$ and $u_{c}^{2}$ (in eqs. (24) and (73)).
$m_{c}^{2}={\mathfrak{m}}^{2}\,h^{2/3}\,,$ (12)
and is a temperature independent constant which parametrizes $h$. It is
convenient to express all lengths in terms of $m_{c}$. The scaling variable
and equation of state take the form
$\bar{\sigma}=\frac{m_{c}}{\sqrt{\lambda}}f_{G}(z)\,,\qquad
z=\frac{m_{0}^{2}(T)}{m_{c}^{2}}=\frac{{\mathfrak{m}}^{2}}{m_{c}^{2}}\frac{(T-T_{c})}{T_{c}}\,.$
(13)
Parametrically, ${\mathfrak{m}}^{2}/m_{c}^{2}=h^{-2/3}$ is a large parameter,
and thus $T$ must be close to $T_{c}$ in order to have an order one scaling
variable, $z\sim 1$.
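The scaling form (10) can be checked numerically: the sketch below (with arbitrary illustrative values of $\lambda$, ${\mathfrak{m}}^2$ and $H$, which are assumptions) solves the cubic (9) and verifies that $f_G(0)=1$ in the normalization used here, and that $f_G(z)\approx\sqrt{-z}$ deep in the broken phase:

```python
import numpy as np

def sigma_bar(t, lam, m2frak, H):
    """Physical real solution of m0^2(T) sigma + lam sigma^3 - H = 0,
    eq. (9), with m0^2 = m2frak * t."""
    roots = np.roots([lam, 0.0, m2frak * t, -H])
    real = roots[np.abs(roots.imag) < 1e-8].real
    return real.max()  # the largest real root is the physical branch for H > 0

def f_G(z, lam=1.0, m2frak=1.0, H=1e-3):
    """Mean-field magnetic scaling function of eq. (10):
    sigma_bar = (mfrak / sqrt(lam)) h^(1/3) f_G(z), with z = t h^(-2/3)."""
    mfrak = np.sqrt(m2frak)
    h = H * np.sqrt(lam) / mfrak**3        # reduced field, eq. (11)
    t = z * h**(2.0 / 3.0)                 # invert z = t h^(-2/3)
    return sigma_bar(t, lam, m2frak, H) * np.sqrt(lam) / (mfrak * h**(1.0 / 3.0))
```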
Outside of the mean field approximation, the expectation value of the order
parameter also takes the scaling form
$\bar{\sigma}=B\,h^{1/\delta}f_{G}(z)\,,\qquad z=th^{-1/\Delta}\,,$ (14)
with $B$ a non-universal constant. $\delta$ and $\Delta$ are known critical
exponents, and $f_{G}(z)$ is a known universal function Engels and Karsch
(2014, 2012); Engels and Vogt (2010). Table 1 compares the mean field
expectations for the critical exponents to the $O(4)$ scaling theory, and Fig.
1(a) compares the mean field $f_{G}(z)$ to the scaling theory.
Exponent or ratio | Mean field | $O(4)$ scaling theory
---|---|---
$\beta$ | 1/2 | 0.38
$\delta$ | 3 | 4.8
$\Delta=\beta\delta$ | 3/2 | 1.83
$\nu_{c}=\nu/\beta\delta$ | 1/3 | 0.40
$m_{\sigma c}^{2}/m_{c}^{2}$ | 3 | 4.0
Table 1: A comparison of mean field theory and the $O(4)$ scaling theory (see
for example Parisen Toldin _et al._ (2003); Engels _et al._ (2003) for
current estimates of the $O(4)$ exponents). $m_{c}$ and $m_{\sigma c}$ are the
pion and sigma screening masses on the critical line, $z=0$, and this ratio
was taken from Engels _et al._ (2003).
Figure 1: (a) A comparison of the mean field magnetic EOS to numerical results
from lattice methods taken from Engels and Karsch (2014, 2012). (b) The sigma
and pion screening masses (inverse correlation lengths), $m_{\sigma}$ and $m$,
compared to results from lattice methods. The lattice curves were obtained by
digitizing the numerical data from Fig. 8 and Fig. 9 of Engels _et al._
(2003), which was subsequently fit with a parametrized form.
### II.2 Static correlation functions in mean field
Given the mean value $\bar{\sigma}$, we can evaluate the action (3) to
quadratic order
${S}_{E}=\beta\int
d^{3}x\,\left[p_{\sigma}(T)+\frac{1}{2}\chi_{0}\mu^{2}-\frac{1}{2}\left(\partial_{i}\delta\sigma\,\partial^{i}\delta\sigma+m_{\sigma}^{2}\delta\sigma^{2}\right)-\frac{1}{2}\left(\partial_{i}\vec{\pi}\cdot\partial^{i}\vec{\pi}+m^{2}\vec{\pi}^{2}\right)\right],$
(15)
where
$p_{\sigma}(T)=p_{0}(T)-\left(\frac{1}{2}m_{0}^{2}(T)\bar{\sigma}^{2}(T)+\frac{\lambda}{4}\bar{\sigma}(T)^{4}-H\bar{\sigma}(T)\right)\,.$
(16)
The sigma and pion screening masses are
$\displaystyle m_{\sigma}^{2}$ $\displaystyle\equiv
m_{c}^{2}\left(z+3f_{G}^{2}(z)\right)\,,$ (17a) $\displaystyle m^{2}$
$\displaystyle\equiv\frac{H}{\bar{\sigma}(T)}=\frac{m_{c}^{2}}{f_{G}(z)}\,.$
(17b)
As in the previous section, the screening masses (or inverse correlation
lengths) are also defined outside of mean field theory. These are expected to
scale as
$\displaystyle m_{\sigma}=$
$\displaystyle{\mathfrak{m}}_{L}\,h^{\nu_{c}}\,g_{L}(z)\,,$ (18)
$\displaystyle m=$ $\displaystyle{\mathfrak{m}}_{T}\,h^{\nu_{c}}\,g_{T}(z)\,,$
(19)
where ${\mathfrak{m}}_{L}$ and ${\mathfrak{m}}_{T}$ are non-universal
constants, and $g_{L}(z)$ and $g_{T}(z)$ are universal scaling functions. As
before, $g_{L}$ and $g_{T}$ are normalized to unity for $z=0$. The ratio
between $m_{\sigma}$ and $m$ is also universal, and can be parameterized by
$m_{\sigma}/m$ on the critical line, i.e. $m_{\sigma c}^{2}/m_{c}^{2}$. This
universal ratio is compared to the mean field prediction of three in Table 1.
$m(z)$ and $m_{\sigma}(z)$ are extracted from the numerical work of Ref.
Engels _et al._ (2003) and compared to mean field theory in Fig. 1(b).
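The mean-field relations (17) and the universal ratio of Table 1 admit a short numerical check. Using the reduced variables of eq. (13), the cubic (9) becomes $f_G^{3}+z\,f_G=1$, and the screening masses can be evaluated in units of $m_c$ (an illustrative sketch):

```python
import numpy as np

def f_G(z):
    """Mean-field scaling function: the positive real root of
    f^3 + z f - 1 = 0, i.e. eq. (9) in reduced variables."""
    roots = np.roots([1.0, 0.0, z, -1.0])
    return roots[np.abs(roots.imag) < 1e-8].real.max()

def screening_masses(z, m_c=1.0):
    """Sigma and pion screening masses of eqs. (17a) and (17b)."""
    f = f_G(z)
    m_sigma2 = m_c**2 * (z + 3.0 * f**2)   # eq. (17a)
    m_pi2 = m_c**2 / f                     # eq. (17b)
    return np.sqrt(m_sigma2), np.sqrt(m_pi2)
```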
The quadratic action predicts the equal time correlation functions
$\displaystyle\frac{1}{V}\left\langle\delta\sigma({\bm{k}})\delta\sigma(-{\bm{k}})\right\rangle$
$\displaystyle=\frac{T}{k^{2}+m_{\sigma}^{2}}\,,$ (20a)
$\displaystyle\frac{1}{V}\left\langle\varphi_{s}({\bm{k}})\varphi_{s^{\prime}}(-{\bm{k}})\right\rangle$
$\displaystyle=\frac{T}{\bar{\sigma}^{2}(k^{2}+m^{2})}\delta_{ss^{\prime}}\,.$
(20b)
Finally, we can use the general theory of thermodynamic fluctuations to
recognize that
$\displaystyle\frac{1}{V}\left\langle
n_{ab}({\bm{k}})n_{cd}(-{\bm{k}})\right\rangle$
$\displaystyle=T\chi_{0}\,(\delta_{ac}\delta_{bd}-\delta_{ad}\delta_{bc})\,.$
(22)
Footnote 4: The easiest way to see this in the current framework is to
recognize that the thermodynamic fluctuations in $n_{ab}$ are Gaussian and
summed over in the grand canonical ensemble. The factor
$\tfrac{1}{4}\chi_{0}\mu^{2}$ reflects the integration over $n$ with the
Lagrange multiplier $\mu$:
$e^{\beta\int{\rm d}^{3}x\,\tfrac{1}{4}\chi_{0}\mu^{2}}=\int[Dn]\,\exp\left(-\beta\int{\rm d}^{3}x\left(\tfrac{1}{4\chi_{0}}n^{2}-\frac{1}{2}n\cdot\mu\right)\right)\,.$ (21)
This Gaussian integral implies eq. (22).
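The Gaussian identity behind footnote 4 amounts to completing the square in the exponent of eq. (21). As an illustrative check (not part of the paper; the values of $\beta$, $\chi_{0}$, and $\mu$ below are arbitrary), it can be verified numerically for a single one-dimensional mode:

```python
import math

beta, chi0, mu = 1.3, 0.7, 0.4                  # arbitrary illustrative values

# Right-hand side of (21) for a single mode: integrate over n (Simpson's rule).
def integrand(n):
    return math.exp(-beta * (n * n / (4.0 * chi0) - 0.5 * n * mu))

a, b, N = -40.0, 40.0, 40000
h = (b - a) / N
total = integrand(a) + integrand(b)
for i in range(1, N):
    total += (4 if i % 2 else 2) * integrand(a + i * h)
rhs = total * h / 3

# Completing the square: n^2/(4 chi0) - n mu/2 = (n - chi0 mu)^2/(4 chi0) - chi0 mu^2/4,
# so the integral is a Gaussian normalization times exp(beta chi0 mu^2 / 4).
lhs = math.sqrt(4.0 * math.pi * chi0 / beta) * math.exp(beta * chi0 * mu**2 / 4.0)

assert abs(rhs - lhs) / lhs < 1e-8
```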
Well below $T_{c}$, the pion mass is small and soft pions are long-lived
quasi-particles Son (2000); Son and Stephanov (2002a, b). In this context we
introduce a number of definitions following Son and Stephanov Son and
Stephanov (2002b). The phase of the condensate is
$\varphi_{s}\equiv\frac{\pi_{s}}{\bar{\sigma}}\,,$ (23)
while the associated pion velocity squared is
$v^{2}(T)\equiv\frac{\bar{\sigma}^{2}(T)}{\chi_{0}}.$ (24)
The pole mass is defined as $m_{p}^{2}\equiv v^{2}m^{2}$, and the soft pion
dispersion curve takes the form
$\omega^{2}_{q}=v^{2}q^{2}+m_{p}^{2},$ (25)
which is parameterized by two Euclidean quantities $v^{2}(T)$ and $m^{2}(T)$.
In the next section we will develop the hydrodynamic theory for the $O(4)$
model. The real time correlation functions constructed from this theory will
reproduce (20) and (22) after integrating over frequency.
## III Hydrodynamics
Having discussed the thermodynamics, we are ready to derive the corresponding
hydrodynamic theory. The resulting equations of motion are equivalent to those
derived previously by Rajagopal and Wilczek using Poisson bracket methods
Rajagopal and Wilczek (1993). Well below $T_{c}$, the equations of motion
resemble those of a non-abelian superfluid and have also been analyzed in Son
(2000); Son and Stephanov (2002b); Jain (2017); Grossi _et al._ (2020). The
methodology here follows closely our previous work Grossi _et al._ (2020).
### III.1 Ideal hydrodynamics and the Josephson constraint
To derive the ideal hydrodynamic expressions we follow an expedient procedure
outlined in Jensen _et al._ (2012) and take as hydrodynamic action
${S}[g_{\mu\nu},A_{\mu},H]=\int
d^{4}x\sqrt{-g}\,p_{\Sigma}(T,\mu,(\partial_{\perp}\phi)^{2},\phi^{2},H\cdot\phi)\,,$
(26)
where the redefined pressure takes the same form as its Euclidean counterpart
$p_{\Sigma}(T,\mu,(\partial_{\perp}\phi)^{2},\phi^{2},H\cdot\phi)\equiv
p(T)+\frac{1}{4}\chi_{0}\,\mu\cdot\mu-\frac{1}{2}\Delta^{\mu\nu}D_{\mu}\phi\cdot
D_{\nu}\phi-V({\Phi})+H\cdot\phi\,,$ (27)
but we have replaced the integration over the thermal circle with an
integration over time
$\beta=\int_{0}^{\beta}d\tau\rightarrow\int dt\,.$ (28)
We also have added external gauge and gravitational fields, $(A_{\mu})_{ab}$
and $g_{\mu\nu}$, for the purpose of deriving the stress tensor and currents,
and ultimately these sources will be set to zero. In these expressions
$T\equiv(-\beta^{\mu}g_{\mu\nu}\beta^{\nu})^{-1/2}$, and then we define
$u^{\mu}\equiv T\beta^{\mu}$ and $\Delta^{\mu\nu}\equiv
g^{\mu\nu}+u^{\mu}u^{\nu}$. The chemical potential can be written
$\mu_{ab}=\left(u^{\rho}\tilde{\mu}_{\rho}+u^{\rho}A_{\rho}\right)_{ab}\,,$
(29)
where $(\tilde{\mu}_{\rho})_{ab}$ is the contact chemical potential and is
independent of $A_{\rho}$. The covariant derivative is
$(D_{\mu}\phi)_{a}=\partial_{\mu}\phi_{a}-\tfrac{i}{2}(A_{\mu}\cdot\mathcal{J})_{ab}\phi_{b}\,,$
(30)
where $\mathcal{J}_{cd}$ are the generators of the $O(4)$ rotation group
$(i\mathcal{J}_{cd})_{ab}=\delta_{ca}\delta_{db}-\delta_{cb}\delta_{da}\,.$
(31)
Setting the external fields to zero for simplicity, the differential of
pressure at fixed $H$ follows from the form of $p_{\Sigma}$
$dp_{\Sigma}=s_{\Sigma}\,dT+\frac{1}{2}n_{ab}\,d\mu_{ab}-\frac{1}{2}d(\partial_{\perp}^{\mu}\phi)^{2}+\left(-\frac{\partial
V}{\partial\phi_{a}}+H_{a}\right)d\phi_{a}\,,$ (32)
which defines the entropy density, $s_{\Sigma}\equiv\partial
p_{\Sigma}/\partial T$, and the number densities, $n_{ab}\equiv 2\,\partial
p_{\Sigma}/\partial\mu_{ab}$, respectively.
Footnote 5: More explicitly,
$s_{\Sigma}(T)=s(T)-\frac{1}{2T_{c}}{\mathfrak{m}}^{2}\,\Phi^{2}$. The factor
of two in the definition of $n_{ab}$, leading to $n_{ab}=\chi_{0}\mu_{ab}$, is
a symmetry factor, i.e. $n_{ab}\equiv\partial
p_{\Sigma}/\partial\mu_{ab}-\partial p_{\Sigma}/\partial\mu_{ba}$ with
$\mu_{12}$ and $\mu_{21}$ treated as independent variables. Similar symmetry
factors for symmetric and antisymmetric tensors are present in the definitions
of $T^{\mu\nu}$ and $J^{\mu}_{ab}$.
Here and below $d\equiv u^{\mu}\partial_{\mu}$ and
$\partial_{\perp}^{\mu}=\Delta^{\mu\nu}\partial_{\nu}$.
Varying the action with respect to the metric yields the conserved stress
tensor $\partial_{\mu}T^{\mu\nu}=0$. Recognizing that both the temperature and
the chemical potential depend implicitly on the metric after they are written
in terms of $\beta^{\mu}$, the variation of the action gives
$\displaystyle T^{\mu\nu}=\left.\frac{2}{\sqrt{-g}}\frac{\delta{S}}{\delta
g_{\mu\nu}}\right|_{g=A=0}=(\varepsilon_{\Sigma}+p_{\Sigma})\,u^{\mu}u^{\nu}+p_{\Sigma}g^{\mu\nu}+\partial^{\mu}\phi\cdot\partial^{\nu}\phi-u^{\mu}u^{\nu}(u^{\sigma}\partial_{\sigma}\phi)\cdot(u^{\rho}\partial_{\rho}\phi),$
(33)
where in this expression the energy density has been defined through the
Gibbs-Duhem relation
$\varepsilon_{\Sigma}=-p_{\Sigma}+Ts_{\Sigma}+\tfrac{1}{2}\mu_{ab}\cdot
n_{ab}\,.$ (34)
We can find the partial current conservation equation by requiring that the
action in (26) be invariant under gauge transformations. We will limit the
discussion to weak fields and switch off the gravitational field for this
purpose. Under an infinitesimal $O(4)$ rotation with parameters
$\omega_{cd}(x)$, the gauge fields and magnetic field transform as
$\displaystyle A_{\mu,cd}\rightarrow$ $\displaystyle
A_{\mu,cd}+\partial_{\mu}\omega_{cd}\,,$ (35a) $\displaystyle
H_{a}\rightarrow$ $\displaystyle
H_{a}+\tfrac{i}{2}(\omega\cdot{\mathcal{J}})_{ab}H_{b}\,.$ (35b)
Then, requiring invariance of the action under the rotation,
$\displaystyle\delta{S}=$ $\displaystyle\int d^{4}x\,\frac{\delta{S}}{\delta
A_{\mu,ab}}\,\delta A_{\mu,ab}+\frac{\delta{S}}{\delta H_{a}}\delta H_{a}=0,$
(36)
and inserting the transformation rules (35), we find partial current
conservation
$\partial_{\mu}J^{\mu}_{cd}=\phi_{c}H_{d}-\phi_{d}H_{c}\,.$ (37)
Here the currents are given by
$\displaystyle J^{\mu}_{ab}\equiv 2\frac{\delta{S}}{\delta
A_{\mu,ab}}=\chi_{0}\mu_{ab}u^{\mu}+(J^{\mu}_{\perp})_{ab}\,,$ (38)
where the first term is the normal component and the second term is the
superfluid component, given by
$(J_{\perp})_{ab}^{\mu}=\Delta^{\mu\nu}(\partial_{\nu}\phi_{a}\phi_{b}-\partial_{\nu}\phi_{b}\phi_{a}).$
(39)
To complete the equations of motion of ideal hydrodynamics, we need to specify
a relationship between the phase of the condensate and the chemical potential
known as the Josephson constraint. The Josephson constraint is the requirement
that the field $\phi_{a}$ is stationary under the evolution generated by the
grand potential, $\Omega=H-\frac{1}{2}\mu_{bc}N_{bc}$, i.e. the stability of
the thermal state. This reasoning leads to a requirement on the classical
Poisson bracket between $\phi$ and $\Omega$
$\displaystyle\{\phi_{a},\Omega\}=\{\phi_{a},-u^{\mu}P_{\mu}-{\textstyle\frac{1}{2}}\mu_{bc}N_{bc}\}=0\,.$
(40)
Recalling that $P_{\mu}$ and $N_{ab}$ generate translations and rotations
respectively (which determines their Poisson brackets with $\phi$), we find
$u^{\mu}\partial_{\mu}\phi_{a}+{\textstyle\frac{1}{2}}(\mu_{ab}\phi_{b}-\phi_{b}\mu_{ba})=0\,.$
(41)
Alternatively, but ultimately equivalently, the Josephson constraint can be
derived by requiring entropy conservation at ideal order Jain (2017)
$\partial_{\mu}(s_{\Sigma}u^{\mu})=0.$ (42)
Appendix A uses the conservation laws together with the Gibbs-Duhem relation
(34) and the pressure differential (32) to show that entropy is only conserved
if the Josephson constraint is satisfied. When viscous corrections are
included at subsequent orders in the gradient expansion, the Josephson
constraint will need to be modified.
Finally, it is useful to express the Josephson constraint in terms of the
amplitude, $\Phi$, and $SU(2)$ phase, $U$. Writing the chiral condensate as
$\Sigma=\phi_{a}\tau_{a}=\Phi U\,,$ (43)
the Josephson constraint can be written
$\displaystyle u^{\mu}\partial_{\mu}\Phi$ $\displaystyle=0\,,$ (44a)
$\displaystyle iu^{\mu}\partial_{\mu}UU^{-1}$
$\displaystyle=\mu_{L}-U\mu_{R}U^{\dagger}\,,$ (44b)
where $\mu_{L}=\tfrac{1}{2}\mu_{ab}\tau_{ab}$ and
$\mu_{R}=\tfrac{1}{2}\mu_{ab}\bar{\tau}_{ab}$ are the left and right chemical
potentials.
Footnote 6: The Clifford algebra of $O(4)$ is generated by
$\tau_{a}=(\mathbb{I},-i\vec{\lambda})$ and
$\bar{\tau}_{a}=(\mathbb{I},i\vec{\lambda})$. The generators of the $(1/2,0)$
and $(0,1/2)$ representations of $O(4)$ are
$\tau_{ab}=-i[\tau_{a},\bar{\tau}_{b}]/4$ and
$\bar{\tau}_{ab}=-i[\bar{\tau}_{a},\tau_{b}]/4$, respectively.
The last relation between the phase and the chemical potentials is familiar
from non-abelian superfluids Son (2000); Grossi _et al._ (2020).
### III.2 Viscous corrections
So far we have considered only the ideal equations of motion. In the
dissipative case the energy-momentum tensor, the charge current, and the
Josephson constraint will acquire new terms that describe dissipation in the
system. The energy-momentum tensor and the conserved currents are
modified due to dissipative effects in the usual way:
$\displaystyle T^{\mu\nu}$
$\displaystyle=T^{\mu\nu}_{\text{ideal}}+\Pi^{\mu\nu},$ (45) $\displaystyle
J_{ab}^{\mu}$ $\displaystyle=J_{ab,\text{ideal}}^{\mu}+q^{\mu}_{ab}.$ (46)
We will work in the Landau frame, where the dissipative contributions to the
stress tensor and the diffusion current are taken to be orthogonal to the four
velocity $u^{\mu}$, i.e.
$\Pi^{\mu\nu}u_{\mu}=0,\quad q^{\mu}_{ab}u_{\mu}=0.$ (47)
The stress tensor can be further decomposed into a symmetric-traceless and
transverse part, $\pi^{\mu\nu}$, and a trace part, $\Pi$,
$\Pi^{\mu\nu}=\pi^{\mu\nu}+\Pi\Delta^{\mu\nu}.$ (48)
In addition to the dissipative corrections to the energy-momentum tensor and
the current, the evolution equation of the chiral condensate, $\phi_{a}$, gets
modified by dissipative effects. Therefore it is useful to define
$u^{\mu}\partial_{\mu}\phi_{a}+\mu_{ab}\phi_{b}=\Xi_{a},$ (49)
where $\Xi_{a}$ is a Lorentz scalar that encodes the dissipative contribution
to the scalar field equation of motion.
Using the conservation of the energy-momentum tensor and the partial
conservation of the charge, the Gibbs-Duhem relation in (34), and the pressure
differential in (32) we can derive the entropy production as
$\displaystyle\partial_{\mu}(s_{\Sigma}u^{\mu}-\frac{\mu_{ab}}{2T}q^{\mu}_{ab})=\frac{\Xi_{a}}{T}\Theta_{a}-\partial_{\mu}\left(\frac{u_{\nu}}{T}\right)\Pi^{\mu\nu}-\partial_{\mu}\left(\frac{\mu_{ab}}{2T}\right)q^{\mu}_{ab}\,,$
(50)
where we have defined the scalar quantity
$\Theta_{a}=\partial^{2}_{\perp}\phi_{a}-\frac{\partial
V}{\partial\phi_{a}}+H_{a},$ (51)
with
$\partial_{\perp}^{2}\phi_{a}\equiv\partial_{\mu}\partial^{\mu}_{\perp}\phi_{a}$.
Up to now we have not specified an expansion scheme; the equations are just a
consequence of the definition of entropy, the conservation of energy and
momentum, and the partial conservation of the charge. The positivity of
entropy production in the tensor sector can be enforced with
$\pi^{\mu\nu}=-\eta_{\Sigma}\sigma^{\mu\nu},\quad\text{with}\quad\eta_{\Sigma}\geq
0,$ (52)
where $\eta_{\Sigma}$ is the shear viscosity of the $O(4)$ theory. In the
vector sector we have
$q_{ab}^{\mu}=-T\sigma_{\Sigma}\partial^{\mu}\left(\frac{\mu_{ab}}{T}\right),\quad\text{with}\quad\sigma_{\Sigma}\geq
0\,,$ (53)
where $\sigma_{\Sigma}$ is the $O(4)$ conductivity.
The scalar sector requires a bit more care, as there are two Lorentz scalars,
$\Xi_{a}\Theta_{a}$ and $\Pi\,\partial_{\mu}u^{\mu}$. In general, the
constitutive relations for $\Pi$ and $\Xi_{a}$ are
$\displaystyle\Pi$
$\displaystyle=-\zeta_{\Sigma}\,\partial_{\mu}u^{\mu}-\zeta^{(1)}_{\Sigma}\,\phi_{a}\Theta_{a},$
(54) $\displaystyle\Xi_{a}$
$\displaystyle=\zeta^{(1)}_{\Sigma}\,\phi_{a}\partial_{\mu}u^{\mu}+\Gamma\,\Theta_{a},$
(55)
where $\zeta_{\Sigma}$ is the bulk viscosity, while $\zeta^{(1)}_{\Sigma}$ and
$\Gamma$ are transport coefficients regulating the dissipative effects of the
scalar field dynamics. The positivity of the associated quadratic form is
enforced if
$\zeta_{\Sigma}\geq 0,\quad\Gamma\geq 0,\quad\text{ and
}\quad\zeta_{\Sigma}\,\Gamma-(\zeta^{(1)}_{\Sigma})^{2}\,\phi^{2}\geq 0.$ (56)
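The conditions in (56) simply state that the symmetric matrix of dissipative coefficients governing entropy production in the scalar sector is positive semi-definite. A minimal numerical sketch (the function name and all coefficient values are illustrative, not from the text):

```python
import numpy as np

def entropy_form_psd(zeta, gamma, zeta1, phi2):
    """True iff the quadratic form zeta*x^2 + 2*zeta1*|phi|*x*y + gamma*y^2,
    which controls the scalar-sector entropy production, is positive semi-definite."""
    M = np.array([[zeta, zeta1 * np.sqrt(phi2)],
                  [zeta1 * np.sqrt(phi2), gamma]])
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-12))

# Coefficients obeying (56): zeta, gamma >= 0 and zeta*gamma - zeta1^2 phi^2 >= 0
assert entropy_form_psd(zeta=2.0, gamma=1.0, zeta1=0.5, phi2=4.0)
# Violating the determinant condition (2*1 < 1.5^2 * 4) loses positivity
assert not entropy_form_psd(zeta=2.0, gamma=1.0, zeta1=1.5, phi2=4.0)
```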
Having specified the dissipative fluxes, it is possible to write down the
energy-momentum tensor, the current, and the scalar field equation, including
the first gradient corrections. The scalar field obeys a relaxation-type
equation where the ideal part is the Josephson constraint
$\displaystyle
u^{\mu}\partial_{\mu}\phi_{a}+\mu_{ab}\phi_{b}=\Gamma\left[\partial^{2}_{\perp}\phi_{a}-\frac{\partial
V}{\partial\phi_{a}}+H_{a}\right]+\zeta^{(1)}_{\Sigma}\phi_{a}\,\partial_{\mu}u^{\mu}.$
(57)
The energy momentum tensor now includes dissipative contributions due to
chiral condensate
$\displaystyle\Pi^{\mu\nu}=-\eta_{\Sigma}\sigma^{\mu\nu}-\Delta^{\mu\nu}\left[\zeta_{\Sigma}\partial_{\lambda}u^{\lambda}+\zeta^{(1)}_{\Sigma}\phi_{a}\left(\partial^{2}_{\perp}\phi_{a}-\frac{\partial
V}{\partial\phi_{a}}+H_{a}\right)\right].$ (58)
Finally the current has the form
$\displaystyle(J^{\mu})_{ab}=n_{ab}u^{\mu}+(J^{\mu}_{\perp})_{ab}-T\sigma_{\Sigma}\,\Delta^{\mu\nu}\partial_{\nu}\left(\frac{\mu_{ab}}{T}\right)\,,$
(59)
and is partially conserved as in (37). The coefficient $\zeta^{(1)}_{\Sigma}$
is an independent transport coefficient that couples the expansion rate
$\partial_{\mu}u^{\mu}$ to the Josephson constraint and vice versa. Near the
phase transition $\phi_{a}$ is approximately zero, which means this term
is subdominant and can be neglected.
Let us compare these equations to a number of results in the literature. The
equations are equivalent to those of Rajagopal and Wilczek written down almost
thirty years ago Rajagopal and Wilczek (1993); our notation for
$\sigma_{\Sigma}$ and $\Gamma$ follows theirs. The current reformulation is
covariant and includes the coupling to the background flow. In the low
temperature limit the equations match those of our previous paper Grossi _et
al._ (2020) (which includes a discussion of earlier work Son (2000); Son and
Stephanov (2002b)), provided one identifies some of the coefficients.
Footnote 7: Specifically, we have $\Gamma\rightarrow D_{m}$ and
$\Gamma+D\rightarrow D_{A}$, where $D=\sigma_{\Sigma}/\chi$.
### III.3 Linear response
Knowing the equations of motion we can determine the hydrodynamic predictions
for the retarded Green functions of the system. In the axial channel we can
consider the coupled equations of motion for $\varphi_{s}$ and the axial
chemical potential $\mu_{0s}$, when $\phi_{a}$ is linearized around
equilibrium
$\displaystyle\phi_{a}=$
$\displaystyle(\bar{\sigma},-\bar{\sigma}\varphi_{s})\,.$ (60)
To find the response function for $(\omega_{k}\varphi_{s},\mu_{0s})$, we
introduce a pseudoscalar source $H_{a}=(H,\delta H_{s}(x))$ and a gauge field
$(A_{0}(x))_{0s}$ which are conjugate to $-\sigma\varphi_{s}$ and
$\chi_{0}\mu_{0s}$ respectively. Due to the $O(4)$ symmetry, the external
gauge field can appear in the time derivative of $\varphi_{s}$ and spatial
gradient of the chemical potential
$\displaystyle\partial_{t}\phi_{s}$ $\displaystyle\to
D_{t}\phi_{s}=\partial_{t}\phi_{s}-(A_{0})_{s0}\bar{\sigma},$ (61a)
$\displaystyle\partial_{i}\mu_{0s}$
$\displaystyle\rightarrow\partial_{i}\mu_{0s}-(E_{i})_{0s},$ (61b)
where $(E_{i})_{0s}=(\partial_{i}A_{0})_{0s}$. Applying these transformations
and Fourier transforming leads us to the linearized equations in matrix form
$\displaystyle\begin{pmatrix}-i\omega+\Gamma(k^{2}+m^{2})&\omega_{k}\\ -\omega_{k}&-i\omega+Dk^{2}\end{pmatrix}\begin{pmatrix}\omega_{k}\varphi_{s}\\ \mu_{0s}\end{pmatrix}=\frac{1}{\chi_{0}}\begin{pmatrix}\Gamma(k^{2}+m^{2})&\omega_{k}\\ -\omega_{k}&Dk^{2}\end{pmatrix}\begin{pmatrix}-\bar{\sigma}\delta H_{s}/\omega_{k}\\ \chi_{0}(A_{0})_{0s}\end{pmatrix},$ (62)
where we have defined the diffusion coefficient $D=\sigma_{\Sigma}/\chi_{0}$
of the $O(4)$ symmetric theory. The linearized equations can be solved to find
the retarded propagator
$\begin{pmatrix}\omega_{k}\varphi_{s}\\ \mu_{0s}\end{pmatrix}=\frac{1}{\chi_{0}}\frac{1}{(-\omega^{2}+\omega_{k}^{2}+g_{1}g_{2})-i\omega\Gamma_{k}}\begin{pmatrix}g_{1}(g_{2}-i\omega)+\omega_{k}^{2}&-i\omega\omega_{k}\\ i\omega\omega_{k}&g_{2}(g_{1}-i\omega)+\omega_{k}^{2}\end{pmatrix}\begin{pmatrix}-\bar{\sigma}\delta H_{s}/\omega_{k}\\ \chi_{0}\,(A_{0})_{0s}\end{pmatrix}\,,$ (63)
where, for compactness, we define the following shorthand for the dissipative
rates:
$\displaystyle g_{1}$ $\displaystyle\equiv\Gamma(k^{2}+m^{2})\,,$ (64a)
$\displaystyle g_{2}$ $\displaystyle\equiv Dk^{2}\,,$ (64b)
$\displaystyle\Gamma_{k}$
$\displaystyle\equiv\Gamma(k^{2}+m^{2})+Dk^{2}=g_{1}+g_{2}\,.$ (64c)
$\Gamma_{k}$ determines the damping rate of soft pions in the broken phase Son
and Stephanov (2002b).
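As a consistency check (not part of the original text), one can verify numerically that inverting the matrix on the left of (62) reproduces the closed-form propagator in (63); all parameter and source values below are arbitrary illustrative numbers.

```python
import numpy as np

# Arbitrary illustrative numbers (not from the paper)
w, wk, g1, g2 = 0.7, 1.3, 0.4, 0.2
chi0, Gk = 1.0, g1 + g2

# Left-hand-side matrix and right-hand-side source matrix of (62)
M = np.array([[-1j * w + g1, wk],
              [-wk, -1j * w + g2]])
S = np.array([[g1, wk],
              [-wk, g2]]) / chi0
src = np.array([0.25, -0.6])        # stands in for (-sigma dH/w_k, chi0 A_0)

direct = np.linalg.solve(M, S @ src)

# Closed-form retarded propagator of (63) applied to the same source
den = (-w**2 + wk**2 + g1 * g2) - 1j * w * Gk
GR = np.array([[g1 * (g2 - 1j * w) + wk**2, -1j * w * wk],
               [1j * w * wk, g2 * (g1 - 1j * w) + wk**2]]) / (chi0 * den)

assert np.allclose(direct, GR @ src)
```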
To compute the hydrodynamic loops in the next section, it will be necessary to
use the symmetrized propagator
$[G_{\rm
sym}(\omega)]=\frac{T}{\omega}\frac{[G_{R}(\omega)]-[G_{A}(\omega)]}{i}\equiv\frac{T}{\omega}[\rho(\omega,k)],$
(65)
where the advanced propagator is
$\displaystyle[G_{A}(\omega)]=[G_{R}(\omega)]^{\dagger},$ (66)
and $\rho$ denotes the spectral density. Thus, the symmetrized propagator is
$[G_{\rm sym}(\omega)]=\frac{2T}{\chi_{0}}\frac{1}{(-\omega^{2}+\omega_{k}^{2}+g_{1}g_{2})^{2}+(\omega\Gamma_{k})^{2}}\begin{pmatrix}g_{1}(\omega^{2}+g_{2}^{2})+g_{2}\omega_{k}^{2}&-i\omega_{k}\omega\Gamma_{k}\\ i\omega_{k}\omega\Gamma_{k}&g_{2}(\omega^{2}+g_{1}^{2})+g_{1}\omega_{k}^{2}\end{pmatrix}.$ (67)
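The matrix in (67) can be checked against the definition (65): for arbitrary illustrative parameter values (not from the text), the combination $(T/\omega)([G_{R}]-[G_{A}])/i$ built from (63) and (66) reproduces it.

```python
import numpy as np

T, chi0 = 1.0, 1.0
w, wk, g1, g2 = 0.9, 1.1, 0.35, 0.15    # arbitrary illustrative numbers
Gk = g1 + g2
den = (-w**2 + wk**2 + g1 * g2) - 1j * w * Gk

# Retarded matrix propagator read off from (63)
GR = np.array([[g1 * (g2 - 1j * w) + wk**2, -1j * w * wk],
               [1j * w * wk, g2 * (g1 - 1j * w) + wk**2]]) / (chi0 * den)
GA = GR.conj().T                         # advanced propagator, eq. (66)
Gsym_def = (T / w) * (GR - GA) / 1j      # definition (65)

# Closed form (67)
pref = 2 * T / chi0 / ((-w**2 + wk**2 + g1 * g2)**2 + (w * Gk)**2)
Gsym = pref * np.array([[g1 * (w**2 + g2**2) + g2 * wk**2, -1j * wk * w * Gk],
                        [1j * wk * w * Gk, g2 * (w**2 + g1**2) + g1 * wk**2]])

assert np.allclose(Gsym_def, Gsym)
```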
In Fig. 2 we exhibit the spectral density for several values of the scaling
variable $z$ and a specific choice of parameters discussed below. It is
instructive to analyze the spectral density in the pion-axial charge channel
in two different limits, the broken phase, $z\to-\infty$, and the symmetric
phase, $z\to\infty$.
In the broken phase $z\to-\infty$, the field expectation value $\bar{\sigma}$
is large and $\omega_{k}\gg g_{1,2}$. In this limit the spectral density
approaches a Breit-Wigner form with the peaks given by the quasi-particle
dispersion relation $\omega=\pm\omega_{k}$ and a width given by $\Gamma_{k}$
Son and Stephanov (2002b). Then the denominator in the spectral density can be
approximated as
$\frac{\Gamma_{k}}{(-\omega^{2}+\omega_{k}^{2}+g_{1}g_{2})^{2}+(\omega\Gamma_{k})^{2}}\sim\frac{1}{4\omega_{k}^{2}}\left[\rho(\omega,\omega_{k})+\rho(\omega,-\omega_{k})\right],$
(68)
where $\rho(\omega,\omega_{k})$ denotes the Breit-Wigner form
$\rho(\omega,\omega_{k})=\frac{\Gamma_{k}}{(-\omega+\omega_{k})^{2}+(\Gamma_{k}/2)^{2}}.$
(69)
In this limit, we can simplify the expression of the spectral density, leading
to
$[\rho(\omega)]=\frac{\omega}{2\chi_{0}}\left[\rho(\omega,\omega_{k})+\rho(\omega,-\omega_{k})\right]\begin{pmatrix}1&-i\\ i&1\end{pmatrix}+\mathcal{O}\left(\frac{\Gamma_{k}}{\omega_{k}}\right).$ (70)
Thus, there is a relation between the spectral density of pions and the axial
charge:
Footnote 8: The response function in (63) gives the retarded function and
spectral density of the chemical potential, $\rho_{\mu_{A}\mu_{A}}$. Since
$n_{A}=\chi_{0}\mu_{A}$, the density-density spectral function is obtained by
including the appropriate powers of $\chi_{0}$, e.g.
$\rho_{AA}=\chi_{0}^{2}\rho_{\mu_{A}\mu_{A}}$.
$\displaystyle\rho_{AA}(\omega,k)=i\chi_{0}\omega_{k}\,\rho_{\varphi
A}(\omega,k)=(\chi_{0}\omega_{k})^{2}\,\rho_{\varphi\varphi}\,,$ (71)
which is a manifestation of the PCAC relations. These relations are the direct
consequences of the Josephson equation, $-\partial_{t}\varphi=n_{A}/\chi_{0}$.
Indeed, due to the ideal equation of motion for the field $\varphi_{s}$, we
have
$\displaystyle\rho_{AA}(\omega,k)=-\chi_{0}\,\rho_{\partial_{t}\varphi
A}(\omega,k)=\chi_{0}^{2}\,\rho_{\partial_{t}\varphi\partial_{t}\varphi}(\omega,k)\,,$
(72)
which highlights that the axial charge and the time derivative of the pion
field are two interchangeable concepts.
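The narrow-width approximation in (68)-(70) can also be checked numerically: for $\Gamma_{k}\ll\omega_{k}$ the exact Lorentzian factor agrees with the sum of the two Breit-Wigner peaks up to $\mathcal{O}(\Gamma_{k}/\omega_{k})$ corrections. A quick sketch (parameter values arbitrary):

```python
import math

wk = 1.0
Gk = 0.01 * wk               # narrow width: Gamma_k << omega_k
g1 = g2 = Gk / 2.0           # any split with g1 + g2 = Gamma_k

def exact(w):                # Lorentzian factor on the lhs of (68)
    return Gk / ((-w**2 + wk**2 + g1 * g2)**2 + (w * Gk)**2)

def bw(w, w0):               # Breit-Wigner form (69)
    return Gk / ((-w + w0)**2 + (Gk / 2)**2)

def approx(w):               # rhs of (68)
    return (bw(w, wk) + bw(w, -wk)) / (4 * wk**2)

# pointwise agreement near the peak, up to O(Gamma_k/omega_k) corrections
for w in (wk - 2 * Gk, wk, wk + 2 * Gk):
    assert abs(exact(w) - approx(w)) / exact(w) < 0.05
```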
As the temperature increases, the real and imaginary parts of the propagator
become of the same order of magnitude, $\omega_{k}^{2}\sim g_{1}g_{2}$, and
the parameter that governs their relative importance can be taken as
$u^{2}=\frac{\omega^{2}_{k}}{g_{1}g_{2}}\Big{|}_{k=m}=\frac{v^{2}}{\Gamma
Dm^{2}}.$ (73)
At the phase transition, $z=0$, where $m=m_{c}$ and $v=v_{c}$, it is natural
to assume that the real and imaginary parts are of the same order of magnitude
Son and Stephanov (2002b), and therefore here we will consider the case where
$u^{2}_{c}=\frac{v_{c}^{2}}{\Gamma Dm_{c}^{2}}=1.$ (74)
The propagator also depends on another dimensionless parameter
$r^{2}=\frac{\Gamma}{\Gamma+D},$ (75)
which expresses the relative strengths of the axial diffusion and the order
parameter relaxation. Calculations from chiral perturbation theory found the
value $r^{2}=3/4$ Teaney _et al._, and we will adopt this number as an
estimate for this order one constant.
In the symmetric phase $z\to\infty$, the field expectation value is very small
$\bar{\sigma}\sim 0$ and $v^{2}\sim 0$. Thus, the spectral density matrix
becomes diagonal
$[\rho(\omega)]_{z\to\infty}=\frac{2\omega}{\chi_{0}}\frac{1}{(\omega^{2}+g_{2}^{2})(\omega^{2}+g_{1}^{2})}\begin{pmatrix}g_{1}(\omega^{2}+g_{2}^{2})&0\\ 0&g_{2}(\omega^{2}+g_{1}^{2})\end{pmatrix}\,,$ (76)
and the pion and axial charge are completely decoupled. The pion field simply
relaxes to zero and the axial charge is purely diffusive. The spectral density
of axial charge is therefore
$\rho_{AA}(\omega,k)=\frac{2\omega\chi_{0}Dk^{2}}{(\omega^{2}+(Dk^{2})^{2})},$
(77)
while in the pion channel we have
$\bar{\sigma}^{2}\rho_{\varphi\varphi}(\omega,k)=\frac{2\omega\Gamma}{\omega^{2}+\Gamma^{2}(k^{2}+m^{2})^{2}},$
(78)
which exhibits a simple relaxation pole with relaxation rate
$\Gamma(k^{2}+m^{2})$. In the symmetric phase the axial charge and the pions
are completely disentangled, and their dissipative dynamics is controlled by
two distinct transport coefficients.
Footnote 9: In the symmetric phase there are no Goldstone bosons; the “pions”
are the pseudo-scalar fluctuations of the chiral condensate and have a very
large mass.
Figure 2: The spectral density $\rho_{AA}(\omega,q)$ for the axial charge
density-density correlator with the scaling variable $z\equiv th^{-2/3}$
taking values $z=-16,-4,-1,0,1,4,16$. For large positive $z$ the distribution
asymptotes to the simple diffusive pole, $\rho_{AA}/\omega\propto
Dk^{2}/(\omega^{2}+(Dk^{2})^{2})$, reflecting the diffusion of quarks. For
large negative $z$ the pair of peaks reflects the propagating pions. We have
rescaled the axes, defining $\bar{\omega}\equiv\omega/\Gamma m_{c}^{2}$ and
$\bar{\rho}_{AA}(\omega)=\rho_{AA}(\omega,q)/2\chi_{0}$, and chosen
$q/m_{c}=1$ for illustration. For definiteness, we have set $D/\Gamma=1/3$,
and $v^{2}_{c}/\Gamma Dm^{2}_{c}=1$, and the motivation for these constants is
given in the text surrounding eq. (73).
Moving to the $\sigma$ contribution, we see that the linearized equation of
motion is
$\partial_{t}\delta\sigma=\Gamma(\nabla^{2}-m_{\sigma}^{2})\,\delta\sigma+\Gamma\delta
H\,,$ (79)
where we have added an external source to the scalar field, $H\rightarrow
H+\delta H$. Solving in Fourier space, we see that the retarded Green’s
function is
$G_{R}^{\sigma\sigma}(\omega,k)=\frac{\Gamma}{-i\omega+\Gamma(k^{2}+m_{\sigma}^{2})}\,,$
(80)
and the symmetrized propagator is
$\displaystyle G_{\rm
sym}^{\sigma\sigma}=\frac{2T\Gamma}{\omega^{2}+\Gamma^{2}(k^{2}+m_{\sigma}^{2})^{2}}.$
(81)
In the symmetric case ($z\to\infty$) the propagators of $\delta\sigma$ and
$\varphi$ become degenerate and $O(4)$ symmetric
$\rho_{\sigma\sigma}(\omega,k)=\bar{\sigma}^{2}\rho_{\varphi\varphi}(\omega,k),$
(82)
with $m^{2}=m_{\sigma}^{2}$.
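As anticipated at the end of Sect. II, the equal-time correlators are recovered by integrating the symmetrized propagators over frequency: for the $\sigma$ channel, $\int\frac{d\omega}{2\pi}\,G_{\rm sym}^{\sigma\sigma}=T/(k^{2}+m_{\sigma}^{2})$, reproducing (20a). A numerical sketch with arbitrary illustrative values:

```python
import math

T, Gamma, k, m_sigma = 1.0, 0.5, 0.8, 1.2      # arbitrary illustrative values
a = Gamma * (k**2 + m_sigma**2)

def g_sym(w):                                   # eq. (81)
    return 2.0 * T * Gamma / (w**2 + a**2)

# Integrate over all frequencies with Simpson's rule after w = tan(x),
# which maps the real line to (-pi/2, pi/2); dw = sec^2(x) dx.
N = 20000                                       # even number of panels
h = math.pi / N
total = 0.0
for i in range(N + 1):
    x = -math.pi / 2 + i * h
    w = math.tan(x)
    val = g_sym(w) / math.cos(x)**2
    total += (1 if i in (0, N) else (4 if i % 2 else 2)) * val
integral = total * h / 3 / (2.0 * math.pi)

# Frequency integral reproduces the equal-time correlator (20a): T/(k^2 + m_sigma^2)
assert abs(integral - T / (k**2 + m_sigma**2)) < 1e-6
```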
## IV Transport coefficients
In this section we will use the response functions calculated in the previous
section to determine the current-current and stress-stress correlation
functions. This will determine the critical behavior of the transport
coefficients, which is analyzed and estimated in Sect. IV.2.
### IV.1 Hydrodynamic loops
In the critical hydrodynamic theory we outlined in Sect. III, we have
integrated out modes of order $k\sim T$. These modes are incorporated into the
transport coefficients such as $\eta_{\Sigma}$ and its associated noise,
$\xi_{\eta_{\Sigma}}^{\mu\nu}$. Modes of order $k\sim m_{\sigma}$ are
explicitly propagated in the theory, and the critical hydrodynamic theory is
defined with a cutoff $\Lambda_{T}$
$k\sim m_{\sigma}\ll\Lambda_{T}\ll T.$ (83)
In normal hydrodynamics, modes with $k\sim m_{\sigma}$ are integrated out and
incorporated into the transport coefficients of the normal theory, such as
$\eta$ and its noise. The only modes which are explicitly propagated are the
conserved charges, and the theory is defined with a cutoff $\Lambda_{\sigma}$
$k\ll\Lambda_{\sigma}\ll m_{\sigma}\,.$ (84)
The two transport coefficients $\eta_{\Sigma}$ and $\eta$ may be related by
integrating out modes between $k\in[\Lambda_{\sigma},\Lambda_{T}]$.
The $xy$ component of the stress tensor in the critical hydrodynamic theory
is
$\displaystyle T^{xy}=$
$\displaystyle\partial^{x}\delta\sigma\,\partial^{y}\delta\sigma+\bar{\sigma}^{2}\partial^{x}\varphi_{s}\partial^{y}\varphi_{s}+\xi^{xy}_{\eta_{\Sigma}},$
(85)
where the noise satisfies
$\left\langle\xi_{\eta_{\Sigma}}^{xy}(x_{1})\xi_{\eta_{\Sigma}}^{xy}(x_{2})\right\rangle=2T\eta_{\Sigma}\delta^{4}(x_{1}-x_{2})\,.$
(86)
It is understood that the noise in the critical hydrodynamic theory is only
local on scales with $k\ll\Lambda_{T}$, i.e. the $\delta$-function in (86)
should be cut off at the scale $\Lambda_{T}$ and associated with a scale-
dependent parameter, $\eta_{\Sigma}(\Lambda_{T})$. The stress tensor in the
normal hydrodynamic theory is simply the noise (in the absence of external
flow)
$T^{xy}_{\rm hydro}=\xi^{xy}_{\eta}(x)\,,$ (87)
and satisfies
$\left\langle\xi_{\eta}^{xy}(x_{1})\xi_{\eta}^{xy}(x_{2})\right\rangle=2T\eta\delta^{4}(x_{1}-x_{2})\,.$
(88)
Matching the two effective theories yields Kubo formulas, which require that
the integrated variances of the fluctuations are equal in the two theories
Forster (1995):
$2T\eta=\int d^{4}x\left\langle T^{xy}_{\rm hydro}(t,\bm{x})T^{xy}_{\rm
hydro}(0,{\bm{0}})\right\rangle=\int d^{4}x\left\langle
T^{xy}(t,\bm{x})T^{xy}(0,{\bm{0}})\right\rangle\,.$ (89)
Incorporating the fluctuations of $\sigma$ and $\varphi$ at one loop, this
evaluates to
Footnote 10: Here and below $d_{A}=3$ and $T_{A}=2$ denote the dimension and
trace of the adjoint representation of the unbroken $SU(2)$ iso-vector
subgroup. The “extra” factor of $1/\omega_{k}^{4}$ multiplying $G_{\rm
sym}^{\varphi\varphi}$ arises because $G_{\rm sym}^{\varphi\varphi}$ was
defined in (67) as the symmetric correlator of $\omega_{k}\varphi$.
$\displaystyle 2T\eta$
$\displaystyle=2T\eta_{\Sigma}(\Lambda_{T})+2\int^{\Lambda_{T}}\frac{d^{3}k}{(2\pi)^{3}}\frac{d\omega}{2\pi}\left[(k^{x}k^{y}G_{\rm
sym}^{\sigma\sigma})^{2}+d_{A}\frac{\bar{\sigma}^{4}}{\omega_{k}^{4}}(k^{x}k^{y}G_{\rm
sym}^{\varphi\varphi})^{2}\right],$ (90)
where we have anticipated a divergence which is regulated at the scale
$\Lambda_{T}$, as is appropriate for the critical hydrodynamic theory.
The other transport coefficients of interest here are expressed similarly as
$\displaystyle 2T\sigma_{I}=$ $\displaystyle\int
d^{4}x\frac{1}{d_{A}}\left\langle{\textstyle\frac{1}{2}}\{J_{V,s}^{x}(t,{\bm{x}}),J_{V,s}^{x}(0,{\bm{0}})\}\right\rangle,$
(91) $\displaystyle 2T\zeta=$ $\displaystyle\int
d^{4}x\left\langle{\textstyle\frac{1}{2}}\left\{{\mathcal{O}}_{\rm
bulk}(t,{\bm{x}}),{\mathcal{O}}_{\rm bulk}(0,{\bm{0}})\right\}\right\rangle\,,$
(92)
where ${\mathcal{O}}_{\rm bulk}=\tfrac{1}{3}T^{i}_{i}+c_{s}^{2}T^{0}_{0}$. The
bulk viscosity is significantly more complicated, and quite susceptible to
physics which goes beyond the mean field approach adopted here. Therefore we
will evaluate the bulk viscosity only in the high temperature symmetric
regime. The relevant operators appearing in the conductivity computation and
the bulk viscosity are
$\displaystyle J^{x}_{V,s}$
$\displaystyle=\bar{\sigma}^{2}\epsilon_{ss^{\prime}s^{\prime\prime}}\varphi_{s^{\prime}}\partial^{x}\varphi_{s^{\prime\prime}},$
(93) $\displaystyle O_{\rm bulk,\infty}$
$\displaystyle=\frac{1}{2}c_{s}^{2}{\mathfrak{m}}^{2}(\delta\sigma^{2}+\pi^{2}).$
(94)
Here and below we use the $\infty$ subscript to indicate that we have made
approximations of $O_{\rm bulk}$ appropriate only in the symmetric phase where
$z$ is large. We have also recognized that near $T_{c}$ the terms stemming
from $c_{s}^{2}T^{0}_{0}$ are parametrically large compared to $T^{i}_{i}$.
Evaluating the relevant Feynman diagrams leads to
$\displaystyle 2T\sigma_{I}$
$\displaystyle=2T\sigma_{\Sigma}+2T_{A}\bar{\sigma}^{4}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{d\omega}{2\pi}\frac{1}{\omega_{k}^{4}}(k^{x}G_{\rm
sym}^{\varphi\varphi})^{2}\,,$ (95) $\displaystyle 2T\zeta_{\infty}$
$\displaystyle\approx
2T\zeta_{\Sigma}+2\left(\tfrac{1}{2}c_{s}^{2}{\mathfrak{m}}^{2}\right)^{2}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{d\omega}{2\pi}\left[(G_{\rm
sym}^{\sigma\sigma})^{2}+d_{A}(G_{\rm sym}^{\pi\pi})^{2}\right].$ (96)
The propagators in these expressions can be read from (67) and (81). The
$\sigma$ and $\pi$ propagators at large $z$ which are used in (96) are the
same and are given in (81). To make the results of the above integrations more
explicit we recall the dimensionless variables of Sect. III.3
$r^{2}=\frac{\Gamma}{\Gamma+D},\quad\text{and}\quad u^{2}=\frac{v^{2}}{\Gamma
Dm^{2}},$
and introduce the symmetric, dimensionless function $f_{n}(r,u)=f_{n}(u,r)$,
defined by
$\displaystyle f_{n}(r,u)$
$\displaystyle=\frac{16}{15\pi}\int_{0}^{\infty}\frac{dk}{m}\frac{k^{2n}}{(k^{2}+m^{2})^{3}}\frac{m^{8-2n}k^{2}}{(k^{2}+r^{2}m^{2})(k^{2}+u^{2}m^{2})}.$
(97)
For the transport coefficients in question, we will need only the following
explicit expressions
$\displaystyle f_{3}(r,u)$
$\displaystyle=\frac{1}{15\left(r^{2}-u^{2}\right)}\left[\frac{r^{2}\left(8r^{2}+9r+3\right)}{(r+1)^{3}}-\frac{u^{2}\left(8u^{2}+9u+3\right)}{(u+1)^{3}}\right],$
(98) $\displaystyle f_{2}(r,u)$
$\displaystyle=\frac{1}{15\left(r^{2}-u^{2}\right)}\left[\frac{r^{2}(3r+1)}{(r+1)^{3}}-\frac{u^{2}(3u+1)}{(u+1)^{3}}\right].$
(99)
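The closed forms (98) and (99) can be cross-checked by direct numerical integration of (97) (here with $m=1$; the test values of $r$ and $u$ are arbitrary):

```python
import math

def f_n_numeric(n, r, u, N=20000):
    """Numerically integrate eq. (97) with m = 1, using Simpson's rule
    after the substitution k = tan(x), which maps [0, inf) to [0, pi/2)."""
    h = (math.pi / 2) / N
    total = 0.0
    for i in range(N + 1):
        x = i * h
        k = math.tan(x)
        sec2 = 1.0 / math.cos(x)**2            # dk = sec^2(x) dx
        val = (k**(2 * n) * k**2
               / ((k**2 + 1.0)**3 * (k**2 + r**2) * (k**2 + u**2))) * sec2
        total += (1 if i in (0, N) else (4 if i % 2 else 2)) * val
    return 16.0 / (15.0 * math.pi) * total * h / 3

def f3(r, u):                                   # closed form (98)
    return (r**2 * (8 * r**2 + 9 * r + 3) / (r + 1)**3
            - u**2 * (8 * u**2 + 9 * u + 3) / (u + 1)**3) / (15 * (r**2 - u**2))

def f2(r, u):                                   # closed form (99)
    return (r**2 * (3 * r + 1) / (r + 1)**3
            - u**2 * (3 * u + 1) / (u + 1)**3) / (15 * (r**2 - u**2))

r, u = 0.8, 0.3                                 # arbitrary test point, r != u
assert abs(f_n_numeric(3, r, u) - f3(r, u)) < 1e-6
assert abs(f_n_numeric(2, r, u) - f2(r, u)) < 1e-6
```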
More details can be found in Appendix B. Then the conductivity, shear
viscosity, and asymptotic bulk viscosity are given by
$\displaystyle\sigma_{I}(z)$
$\displaystyle=\sigma_{\Sigma}+\frac{TT_{A}}{32\pi
m\Gamma}\left(1-5u^{2}(1-r^{2})f_{2}(r,u)\right)\,,$ (100a)
$\displaystyle\eta(z)$
$\displaystyle=\eta_{\Sigma}-\frac{T}{32\pi\Gamma}(m_{\sigma}+md_{A}+md_{A}u^{2}(1-r^{2})f_{3}(r,u)),$
(100b) $\displaystyle\zeta_{\infty}(z)$
$\displaystyle=\zeta_{\Sigma}+\frac{T}{8\pi\Gamma
m_{\sigma}^{3}}\left(\tfrac{1}{2}c_{s}^{2}{\mathfrak{m}}^{2}\right)^{2}.$
(100c)
In these expressions the shear viscosity has been renormalized
$\displaystyle\eta_{\Sigma}$
$\displaystyle=\eta_{\Sigma}(\Lambda)+\delta_{aa}\,\frac{T\Lambda}{30\pi^{2}\Gamma},$
(101a)
and the parameters $m(z)$, $m_{\sigma}(z)$, and $u^{2}(z)$, depend on the
scaling variable $z$.
### IV.2 Discussion
Figure 3: The critical contribution to the hydrodynamic transport
coefficients, $\Delta\eta$ and $\Delta\sigma_{I}$, as a function of the
scaling variable $z=th^{-2/3}$. The asymptotic forms at large $z$ and small
$z$, e.g. $\Delta\eta_{\infty}(z)$ and “pion kinetics” respectively, are
discussed in the text surrounding eq. (103). For the viscosity we have
normalized the curve by
a positive constant, $\eta^{\rm pc}_{\infty}\equiv|\Delta\eta_{\infty}(z_{\rm
pc})|$, so that at the pseudo-critical point, $z_{\rm pc}=1.19$, the orange
dashed asymptotic curve passes through minus one. We have defined
$\sigma_{I\infty}^{\rm pc}$ with an analogous notation. The absolute
magnitudes of these normalization constants are discussed in the text
surrounding eq. (109). The curves depend weakly on two order one parameters,
which we take to be $r^{2}$ and $u_{c}^{2}$ (see Fig. 2).
To gain an appreciation for the results of the previous section, in Fig. 3 we
plot the critical contribution to the transport coefficients $\Delta\eta$ and
$\Delta\sigma$ as a function of the scaling variable, $z$. The normalization
of the curves and the asymptotics at large and small $z$ will be discussed
shortly. We emphasize that Fig. 3 contains just the contribution from critical
modes, e.g. the full shear viscosity takes the form
$\eta(z)=\eta_{\Sigma}+\Delta\eta(z)\,,$ (102)
where $\eta_{\Sigma}$ is a $z$ independent constant (the regular contribution
to the shear viscosity).
At large positive $z$ the propagators for the $\sigma$ and $\pi$ fields become
degenerate and take a simple form (see Sect. III.3). This greatly simplifies
the computation of the hydrodynamic loop, leading to some simple forms for the
critical transport corrections. Expanding our results in (100) for large $z$,
or $u\rightarrow 0$, we find
$\displaystyle\Delta\sigma_{\infty}(z)$ $\displaystyle\equiv\frac{T}{16\pi
m\Gamma},$ (103a) $\displaystyle\Delta\eta_{\infty}(z)$
$\displaystyle\equiv-\frac{Tm_{\sigma}}{8\pi\Gamma},$ (103b)
$\displaystyle\Delta\zeta_{\infty}(z)$ $\displaystyle\equiv\frac{T}{8\pi\Gamma
m_{\sigma}^{3}}\left(\tfrac{1}{2}c_{s}^{2}{\mathfrak{m}}^{2}\right)^{2}.$
(103c)
These large $z$ asymptotics are presented as the (orange) dashed curves in
Fig. 3. In these expressions $T$, ${\mathfrak{m}}^{2}$, $c_{s}$, and $\Gamma$
are constants near $T_{c}$, while the remaining functions, $m(z)$,
$m_{\sigma}(z)$, are scaling functions which are determined by the equilibrium
magnetic equation of state. Outside of the mean field approximation used here,
$\Gamma$ is not a constant, but is expected to grow (fairly weakly) near the
critical point as $\Gamma\sim m_{\sigma}^{d/2-2}\sim m_{\sigma}^{-1/2}$
Rajagopal and Wilczek (1993); Son and Stephanov (2002b). Treating $\Gamma$ and
$D$ as constants is known in the literature as the van Hove approximation
Hohenberg and Halperin (1977).
The asymptotic form of the transport coefficients sets the overall scale for
our results. Thus in Fig. 3 we have divided each transport coefficient by a
$z$-independent constant, the magnitude of the asymptotic result at the
pseudo-critical point
$\displaystyle\sigma_{I\infty}^{\rm pc}\equiv$
$\displaystyle\Delta\sigma_{\infty}(z_{\rm pc})\,,$ (104a)
$\displaystyle\eta_{\infty}^{\rm pc}\equiv$
$\displaystyle|\Delta\eta_{\infty}(z_{\rm pc})|\,,$ (104b)
$\displaystyle\zeta_{\infty}^{\rm pc}\equiv$
$\displaystyle\Delta\zeta_{\infty}(z_{\rm pc})\,.$ (104c)
Estimates for these scale coefficients in absolute units are given below. We
also find that the simple asymptotic forms in (103) provide a useful order of
magnitude estimate over the whole range in $z$, and in Fig. 4 we present the
ratio between the full result and these forms. We expect that our asymptotic
expression for the critical bulk viscosity in (103c) can provide a similarly
good estimate over the whole range in $z$.
Figure 4: Ratio of the singular part of the transport coefficients to the
asymptotic formulas (103) over the full range in $z$.
For large negative $z$, the $\sigma$ is significantly heavier than the pions,
$m_{\sigma}\gg m$. We can integrate out the heavy sigma modes and pions with
$p\sim m_{\sigma}$, leaving a local effective theory for soft pions with
$p\sim m$. A hydrodynamic theory can be worked out for these soft pion modes
coupled to the background stress Grossi _et al._ (2020). The (stochastic)
hydrodynamic equations for the soft pions are equivalent to a Boltzmann
equation in a “fluid metric”, which describes how the soft pions propagate in
the background fluid Grossi _et al._ (2020). The collision terms of the
kinetic equation are determined by the transport coefficients of the
hydrodynamic theory. The computation of $\sigma$, $\eta$ and $\zeta$ at large
negative $z$ could thus be done in two steps: first one would match the
hydrodynamic equations at the critical point given in Sect. III to the soft-
pion hydro-kinetic theory, and then one would use the soft-pion kinetic theory
to determine the transport coefficients as a function of the temperature. For
large negative $z$, the results predicted by the (matched) pion hydro-kinetic
theory are shown by the black dotted curves in Fig. 3. The pion kinetic theory
gives a reasonable description of the results of the full theory up to its
boundary of applicability, $z\sim 0$. We will use the pion kinetic theory to
estimate soft pion yields in Sect. V. Further details about the pion kinetic
theory are given in Appendix C.
Now we will make several estimates for the absolute scales of the critical
contribution to the transport coefficients, i.e. we wish to estimate
$\eta_{\infty}^{\rm pc}$, $\zeta_{\infty}^{\rm pc}$, and
$\sigma_{I\infty}^{\rm pc}$ defined in (104). These formulas have a number of
physical quantities that need to be estimated, which we will do in the next
paragraphs.
First, we consider the thermodynamic quantities, which are precisely
determined by lattice measurements. To present each transport coefficient, we
will first divide by the corresponding susceptibilities: $sT$ in the shear and
bulk cases (the momentum susceptibility), and $T\chi_{Q}$ for the conductivity
(the charge susceptibility). The pseudo-critical point is at $T_{\rm pc}\simeq
155\,{\rm MeV}$ Borsanyi _et al._ (2020). From lattice measurements of QCD
thermodynamics at $T=155$ we have Borsanyi _et al._ (2010, 2014); Bazavov
_et al._ (2012, 2014):
$sT^{-3}=5.4\,,\qquad\chi_{Q}T^{-2}=0.4\,.$ (105)
We will also need to estimate the screening masses, $m_{\sigma}$ and $m$, at
these temperatures. At a temperature of $T_{\rm pc}=155$, we take from Table X
of Ref. Bazavov _et al._ (2019)
$m_{\sigma}(T_{\rm pc})=0.271\,{\rm GeV},\,\quad\text{and}\quad m(T_{\rm
pc})=0.198\,{\rm GeV}\,.$ (106)
The mean field predictions for $m_{\sigma}$ and $m$ are described in Sect. II;
the one free mass parameter is adjusted so that the pion screening mass at the
mean field pseudo-critical point at $z_{\rm pc}=1.19$ matches the lattice. The
corresponding mean field $\sigma$ mass at $z_{\rm pc}=1.19$ is
$m_{\sigma}=0.24\,{\rm GeV}$, which is slightly lower than the lattice
results. To summarize, in our estimates below we take
$\displaystyle m_{\sigma}/T=1.56,\quad\text{and}\quad m/T=1.28\,.$ (107)
Finally, in order to evaluate the bulk viscosity we need to estimate
${\mathfrak{m}}^{2}$. In mean field theory we have (all of these relations
follow with minor algebra from (9) and (17))
$\frac{{\mathfrak{m}}^{2}}{T^{2}}=\frac{m_{\sigma}^{2}}{T^{2}}\left(-\frac{d\log\bar{\sigma}}{d\log
T}\right)=\frac{m_{\sigma}^{2}}{T^{2}}\left(\frac{T_{c}}{T-T_{c}}\right)\left(-\frac{d\log
f_{G}}{d\log z}\right)\simeq 7.0.$ (108)
In making this estimate we have taken $T\simeq 155\,{\rm MeV}$ and
$T_{c}\simeq 132\,{\rm MeV}$ Ding _et al._ (2019); Kaczmarek _et al._
(2020), and used the mean field equation of state. In absolute units
${\mathfrak{m}}\simeq 0.410\,{\rm GeV}$, which seems somewhat too low for a
cutoff scale. Indeed $O(4)$ fits to lattice data suggest a somewhat higher
value Kaczmarek _et al._ (2020).
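The arithmetic behind this estimate is a one-liner; the following sketch, with the values taken directly from the text, reproduces the quoted ${\mathfrak{m}}\simeq 0.410\,{\rm GeV}$:

```python
import math

T = 0.155                # pseudo-critical temperature in GeV
mfrak2_over_T2 = 7.0     # mean field estimate, eq. (108)

# mathfrak m in absolute units: sqrt(7.0) * T
mfrak = math.sqrt(mfrak2_over_T2) * T
print(f"mfrak = {mfrak:.3f} GeV")   # prints mfrak = 0.410 GeV
```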
The real time quantities in the transport coefficients are comparatively
poorly determined. The two real time parameters are the order parameter
relaxation coefficient $\Gamma$ and the diffusion coefficient $D$, which set
the critical relaxation rates, $\Gamma m_{\sigma}^{2}\sim Dm_{\sigma}^{2}$.
$D$ is regular near $T_{c}$ and determines the charge diffusion coefficient
well above $T_{c}$. We will therefore adopt the strong coupling estimate,
$D=1/2\pi T$ Son and Starinets (2007); Schäfer and Teaney (2009); Heinz and
Snellings (2013), and we take $r^{2}=\Gamma/(\Gamma+D)=3/4$ and
$v_{c}^{2}/\Gamma Dm_{c}^{2}=1$ as in Fig. 2 (see Sect. III.3 for further
discussion).
Figure 5: The yields for soft pions due to a critical modification of the
dispersion curve, relative to an expectation based on the vacuum dispersion
curve (see eq. (121)). The results are shown for two different values of the
cutoff $\Lambda$.
With these preliminaries we can estimate the scale factors for each transport
coefficient.
$\displaystyle\frac{\sigma_{I\infty}^{\rm pc}}{\chi_{Q}}$
$\displaystyle=\frac{0.50}{2\pi T}\,\left[\left(\frac{1.5}{\pi
T\Gamma}\right)\left(\frac{1.27}{m/T}\right)\left(\frac{0.4}{\chi_{Q}
T^{-2}}\right)\right],$ (109a) $\displaystyle\frac{\eta_{\infty}^{\rm
pc}}{sT}$ $\displaystyle=\frac{0.3}{4\pi T}\,\left[\left(\frac{1.5}{\pi
T\Gamma}\right)\left(\frac{5.4}{s/T^{3}}\right)\left(\frac{m_{\sigma}/T}{1.56}\right)\right],$
(109b) $\displaystyle\frac{\zeta_{\infty}^{\rm pc}}{sT}$
$\displaystyle=\frac{0.025}{4\pi T}\,\left[\left(\frac{1.5}{\pi
T\Gamma}\right)\left(\frac{c_{s}^{2}}{0.2}\right)^{2}\left(\frac{{\mathfrak{m}}^{2}}{7.0\,T^{2}}\right)^{2}\left(\frac{5.4}{s/T^{3}}\right)\left(\frac{1.56}{m_{\sigma}/T}\right)^{3}\right].$
(109c)
We have rescaled each transport coefficient by a value which is typical of
strongly coupled plasmas Son and Starinets (2007); Schäfer and Teaney (2009).
Thus, we see that the correction to the shear viscosity is small even in units
of $1/4\pi$. The correction to the charge diffusion coefficient
$D_{Q}=\sigma_{I}/\chi_{Q}$ is also modest, which is surprising given that
this parameter diverges in the chiral limit. Evidently this parametrically
large enhancement does not compensate for the overall kinematics of the loop
integral. Similarly, the bulk viscosity is also parametrically enhanced by
$m_{\sigma}^{-3}$, but in practice this also does not compensate for other
kinematic factors. A similar observation about the bulk viscosity has been
made previously in a somewhat different context Martinez _et al._ (2019).
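As a rough cross-check of the prefactors in (109), one can evaluate the asymptotic formulas (103) with the reference values quoted above ($m/T=1.28$, $m_{\sigma}/T=1.56$, $s/T^{3}=5.4$, $\chi_{Q}/T^{2}=0.4$, and $\Gamma=3D=3/2\pi T$, so that $\pi T\Gamma=1.5$ and each bracket is close to one). The sketch below uses (103a) as printed, without the group factor $T_{A}$; the small residual mismatch reflects the rounding of the quoted prefactors.

```python
import math

# Reference values quoted in the text, in units of T = 1
m_over_T, msig_over_T = 1.28, 1.56
s_over_T3, chiQ_over_T2 = 5.4, 0.4
D = 1.0 / (2 * math.pi)       # strong-coupling estimate D = 1/(2 pi T)
Gamma = 3 * D                 # from r^2 = Gamma/(Gamma + D) = 3/4

# Asymptotic critical contributions at the pseudo-critical point, eq. (103)
d_sigma = 1.0 / (16 * math.pi * m_over_T * Gamma)   # Delta sigma_infinity
d_eta = msig_over_T / (8 * math.pi * Gamma)         # |Delta eta_infinity|

# Compare with the prefactors quoted in eq. (109), brackets set to one
print(d_sigma / chiQ_over_T2, 0.50 / (2 * math.pi))  # both ~ 0.08 (times 1/T)
print(d_eta / s_over_T3, 0.3 / (4 * math.pi))        # both ~ 0.024 (times 1/T)
```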
## V Outlook: chiral critical dynamics in heavy ion data?
The previous section estimated the influence of critical chiral modes on the
transport coefficients of QCD. Given the rather large pion mass, these
corrections are modest, and probably can not be observed in practice. However,
it may be possible to observe the critical chiral fluctuations by directly
measuring soft pions, rather than indirectly through their influence on the
kinetics of the system. The approach in this section bears some similarities
with Bluhm _et al._ (2020), which investigated how a reduced chiral
condensate could influence the thermal fits over a wide range of collision
energies.
Current hydrodynamic codes underestimate the yield of pions at small
transverse momenta, see for example Fig. 3 of Devetak _et al._ (2020) where
the data to model ratios are approximately $1.5$ for $p_{T}\lesssim\pi T$.
Comparable discrepancies are also found in Mazeliauskas and Vislavicius
(2020); Acharya _et al._ (2020); Guillen and Ollitrault (2020). In the broken
phase, the critical dynamics is characterized by the formation of light
Goldstone bosons, which is reflected in the spectral density of axial charge
by the formation of two quasiparticle peaks (see Sect. III.3). The dynamics of
the heavy scalar field can be neglected well below $T_{c}$. With this in mind,
it is reasonable to search for effects of the chiral crossover in soft pions.
Well below $T_{c}$, we have previously shown that the phase-space density of
pions with momentum $q\ll\pi T$ is approximately governed by a simple kinetic
equation Grossi _et al._ (2020). We will use this kinetic equation right at
its boundary of applicability (the pseudo-critical point) to make an estimate
for the critical soft pion yield.
The kinetic equation in the rest frame of the fluid is (for simplicity we
limit the discussion to the rest frame of the fluid, leaving the more general
case to the references Grossi _et al._ (2020))
$\displaystyle\frac{\partial f_{\pi}}{\partial
t}+\frac{\partial\omega_{0}(q)}{\partial q_{i}}\frac{\partial
f_{\pi}}{\partial x^{i}}-\frac{\partial\omega_{0}(q)}{\partial
x^{i}}\frac{\partial f_{\pi}}{\partial
q_{i}}=-\Gamma_{q}\left(f_{\pi}-\frac{T}{\omega_{0}(q)}\right)\,,$ (110)
where $f_{\pi}(t,{\bm{x}},{\bm{q}})$ is the phase-space density of pions, and
the soft pion dispersion curve is Son and Stephanov (2002b)
$\omega^{2}_{0}(q)=v^{2}_{0}\,q^{2}+m^{2}_{0p}\,.$ (111)
Here and in the remainder of this section we have attached the “zero”
subscript to $v^{2}_{0}(T)$ and $m^{2}_{0p}(T)$ as a reminder that (111) holds
only for nearly zero momenta, $q\ll\pi T$. The equilibrium phase-space
distribution in this limit is the classical part of the Bose-Einstein
distribution
$f_{\pi}\big{|}_{\rm eq}=\frac{T}{\omega_{0}(q)}\,.$ (112)
Since $v^{2}_{0}(T)$ and $m_{0p}^{2}(T)$ both drop near $T_{c}$, it is natural
to expect an enhancement of soft pions Son and Stephanov (2002a). Here we give
an estimate of this enhancement by estimating the critical modifications of
(111).
The dispersion curve in (111) is valid only for soft pions $q\ll\pi T$, and at
higher momenta one expects higher derivative corrections, i.e.
$\omega^{2}_{0}(q)=v^{2}_{0}q^{2}+m_{0p}^{2}+\mathcal{O}\left(\frac{q^{4}}{\Lambda^{2}},\frac{m_{0p}^{2}q^{2}}{\Lambda^{2}}\right)\,,$
(113)
with $\Lambda\sim\pi T$. At large momentum the dispersion curve should
approach its vacuum form (in this formula and in (116), $c=1$ is the speed of
light and $m_{\rm vac}\simeq 140\,{\rm MeV}$ is the vacuum pion mass).
$\omega^{2}_{\rm vac}(q)=c^{2}q^{2}+m_{\rm vac}^{2}\,.$ (114)
In the future, it might be possible to constrain the dispersion curve at
fourth order in momenta using second order chiral hydrodynamics and lattice
QCD measurements Grossi _et al._ (2020); Son and Stephanov (2002b). For now,
we will adopt an ansatz for the pion dispersion curve at all momenta which
interpolates between the low and high momentum limits, by writing
$\omega^{2}(q)=v^{2}(q)q^{2}+m_{p}^{2}(q)\,,$ (115)
with $v^{2}(q)$ and $m_{p}^{2}(q)$ taking the rough form
$\displaystyle v^{2}(p)=$ $\displaystyle
c^{2}\,(1-F(p/\Lambda))+v^{2}_{0}\,F(p/\Lambda)\,,$ (116) $\displaystyle
m^{2}_{p}(p)=$ $\displaystyle m^{2}_{\rm
vac}(1-F(p/\Lambda))+m^{2}_{0p}\,F(p/\Lambda)\,.$ (117)
Here $F(p/\Lambda)$ is any cutoff function which has a Taylor series,
$F(y)\simeq 1-y^{2}/2$, at small $y$ and approaches zero for $y\sim 1$. In
Fig. 5, we take
$F(y)=\frac{1}{1+y^{2}/2+y^{4}}\,,$ (118)
although qualitatively similar results were found with a simple cutoff,
$F(y)={\rm max}(1-y^{2}/2,0)$.
In order to have a prediction for the dispersion curve, we still need to
specify $v_{0}^{2}$ and $m^{2}_{0p}=v_{0}^{2}m_{0}^{2}$. These choices should
be approximately consistent with lattice data on screening masses. The lattice
finds that the pion screening mass is approximately its vacuum value for a
temperature of $135\,{\rm MeV}$, and approximately $198\,{\rm MeV}$ at the
pseudo-critical point Bazavov _et al._ (2019). The temperature of $135\,{\rm
MeV}$ is when the chiral susceptibility has reached approximately 60% of its
maximum and defines $z_{60}$. In mean-field theory $z_{60}=-0.79$ and the
pseudo critical point is at $z_{\rm pc}=1.19$, which is determined from the
maximum of the susceptibility. At $z_{60}=-0.79$, we will choose the pion’s
pole and screening masses to be equal to the vacuum pion mass, and the
velocity to be $c$. The mean-field scaling curves then dictate the
screening mass at $z_{\rm pc}=1.19$, yielding:
$m_{0}(z_{\rm pc})\simeq 0.197\,{\rm GeV}\,,$ (119)
which is nicely consistent with lattice measurements on the pion screening
mass at $T=155\,{\rm MeV}$. The same mean field scaling curves then give the
values of the pole mass and the pion velocity at the pseudo-critical point:
$\displaystyle m_{0p}(z_{\rm pc})$ $\displaystyle\simeq 0.1\,{\rm GeV}\,,$
(120a) $\displaystyle v_{0}^{2}(z_{\rm pc})$ $\displaystyle\simeq 0.25\,.$
(120b)
In the future it would be nice to measure $v_{0}$ and $m$ very precisely on
the lattice (they are Euclidean quantities) and to verify their critical
scaling behavior in the chiral limit.
We have now fully specified the dispersion curve $\omega^{2}(q)$ with eqs.
(115), (116), (117) and (120). Given the dispersion curve we can estimate the
expected enhancement of yields
$\frac{\frac{dN^{\rm crit}}{d^{3}p}}{\frac{dN_{\rm
vac}}{d^{3}p}}=\frac{\omega_{\rm vac}(p)}{\omega(p)}\,.$ (121)
This prediction is shown in Fig. 5 for two different choices of $\Lambda$. We
note that using the full Bose-Einstein distribution instead of its classical
limit $T/\omega$ produces only minor differences, slightly increasing the
ratio shown in Fig. 5.
The ratio estimated in Fig. 5 is roughly in line with the observed
enhancement, although strong conclusions about the chiral critical point
cannot be made at
this time. Nevertheless, we find the result encouraging and it strongly
motivates further research. The most obvious deficiency in our estimate is the
lack of resonance decays at a naive level. Resonances are a way of encoding
interactions, and these interactions are already incorporated into the
dispersion curve. It is therefore difficult to “include” resonances without
double counting. From a phenomenological perspective, it would be good to know
if the fluctuations in the soft pion yield are correlated with the rest of the
pion $p_{T}$ spectrum, or if the variance of the soft yield has an independent
component. This correlation measurement certainly can be done, and is ideally
suited to the proposed ITS3 detector by the ALICE collaboration ALI (2018).
Additional clarifying measurements could include a direct measurement of the
correlations between two soft pions. It should be possible to provide good
theoretical predictions for these correlations using $O(4)$ scaling ideas.
These predictions can be contrasted with the (presumably) rather different
predictions of the hadron resonance gas. Finally, it would be interesting to
see if the velocity of the soft pions could be measured directly with non-
identical particle correlations. We hope to address these and other topics in
the future.
###### Acknowledgements.
We thank Anirban Lahiri and Rob Pisarski for discussions. This work is
supported by the U.S. Department of Energy, Office of Science, Office of
Nuclear Physics, grant No. DE-FG-02-08ER41450. AS is supported by the
Austrian Science Fund (FWF), project no. J4406.
## Appendix A Entropy production
In this appendix we compute entropy production with guidance from Bhattacharya
_et al._ (2011) and the insightful eightfold way classification scheme Haehl
_et al._ (2015). Repeating eq. (34) and eq. (32) for convenience, the entropy
is given by the Gibbs-Duhem relation
$s_{\Sigma}=\frac{1}{T}(e_{\Sigma}+p_{\Sigma}-{\textstyle\frac{1}{2}}\mu_{ab}n_{ab}),$
(122)
and the pressure differential follows from the action
$dp_{\Sigma}=s_{\Sigma}dT+\frac{1}{2}n_{ab}d\mu_{ab}-\frac{1}{2}d(\partial_{\perp}\phi)^{2}+\left(-\frac{\partial
V}{\partial\phi_{a}}+H_{a}\right)\,d\phi_{a}\,.$ (123)
Here $d\equiv u^{\mu}\partial_{\mu}$, and below we define $\partial
u\equiv\partial_{\mu}u^{\mu}$.
Differentiating (122) and using (123), the differential of the entropy density
$ds_{\Sigma}$ can be written as
$\displaystyle Tds_{\Sigma}$
$\displaystyle=de_{\Sigma}-\frac{1}{2}\mu_{ab}dn_{ab}-\frac{1}{2}d(\partial_{\perp}\phi)^{2}+\left(-\frac{\partial
V}{\partial\phi_{a}}+H_{a}\right)d\phi_{a}\,.$ (124)
The divergence of the entropy current is then:
$\displaystyle\partial_{\mu}(s_{\Sigma}u^{\mu})$
$\displaystyle=ds_{\Sigma}+s_{\Sigma}\,\partial u$ (125)
$\displaystyle=\frac{1}{T}[de_{\Sigma}+(e_{\Sigma}+p_{\Sigma})\partial
u]-\frac{\mu_{ab}}{2T}[dn_{ab}+n_{ab}\partial
u]-\frac{1}{2T}d(\partial_{\perp}\phi)^{2}+\left(-\frac{\partial
V}{\partial\phi_{a}}+H_{a}\right)\,\frac{d\phi_{a}}{T}.$ (126)
We will now evaluate the first two terms in square brackets using energy-
momentum and charge conservation respectively.
### A.1 Energy conservation
Energy conservation follows from the timelike projection of the conservation
law, $u_{\nu}\partial_{\mu}T^{\mu\nu}=0$, and yields
$\displaystyle-
de_{\Sigma}-(e_{\Sigma}+p_{\Sigma})\partial_{\mu}u^{\mu}=-u_{\nu}\partial_{\mu}[\partial^{\mu}\phi\cdot\partial^{\nu}\phi]+u_{\nu}\partial_{\mu}[u^{\mu}u^{\sigma}u^{\nu}u^{\rho}\partial_{\rho}\phi\cdot\partial_{\sigma}\phi]\,.$
(127)
To simplify the notation, we introduce the shorthand
$\xi^{\mu}_{a}=\partial^{\mu}\phi_{a},\quad\xi^{\mu}_{a}=-d\phi_{a}\,u^{\mu}+\partial_{\perp}^{\mu}\phi_{a},$
(128)
and then the rhs of (127) can be rewritten as
$\displaystyle
u_{\nu}\partial_{\mu}(\xi^{\mu}\cdot\xi^{\nu}-u^{\mu}u^{\nu}(d\phi)^{2})$
$\displaystyle=d\phi\cdot\partial_{\mu}\xi^{\mu}+\frac{1}{2}d\xi^{2}+u_{\nu}\xi^{\mu}\cdot(\partial_{\mu}\xi^{\nu}-\partial^{\nu}\xi_{\mu})-u_{\nu}\partial_{\mu}(u^{\mu}u^{\nu}(d\phi)^{2}).$
(129)
The curl vanishes due to the definition of $\xi$, and then using
$d\xi^{2}=d(d\phi)^{2}+d(\partial_{\perp}\phi)^{2}$ this evaluates to
$\displaystyle
u_{\nu}\partial_{\mu}(\xi^{\mu}\cdot\xi^{\nu}-u^{\mu}u^{\nu}(d\phi)^{2})=d\phi\cdot\partial_{\mu}\partial_{\perp}^{\mu}\phi+\frac{1}{2}d\,(\partial_{\perp}\phi)^{2}\,.$
(130)
Including the dissipative part of the energy-momentum tensor, energy
conservation yields finally
$\displaystyle de_{\Sigma}+(e_{\Sigma}+p_{\Sigma})\,\partial
u=d\phi\,\cdot\partial_{\mu}\partial^{\mu}_{\perp}\phi+\frac{1}{2}d\,(\partial_{\perp}\phi)^{2}+u_{\nu}\,\partial_{\mu}\Pi^{\mu\nu}\,.$
(131)
### A.2 Charge Conservation
The equation of (partial) current conservation reads
$\displaystyle\partial_{\mu}J^{\mu}_{ab}=\phi_{a}H_{b}-\phi_{b}H_{a},$ (132)
where the current is defined as
$J^{\mu}_{ab}=n_{ab}u^{\mu}+J^{\mu}_{\perp ab}+q^{\mu}_{ab}.$ (133)
Here $n_{ab}$ is the charge, $J^{\mu}_{\perp ab}$ is the superfluid current in
(39), and $q^{\mu}_{ab}$ is the dissipative part of the current,
$q^{\mu}_{ab}u_{\mu}=0$. We then contract the eom with the antisymmetric
tensor $\mu_{ab}$ and find
$\displaystyle-\frac{1}{2}\mu_{ab}\,(dn_{ab}+n_{ab}\,\partial
u)=\frac{1}{2}\mu_{ab}\,\partial_{\mu}q^{\mu}_{ab}+\frac{1}{2}\mu_{ab}\,\partial_{\mu}J^{\mu}_{\perp
ab}+\mu_{ab}\,\phi_{b}H_{a}\,.$ (134)
Using the superfluid current in (39), we find finally
$\displaystyle-\frac{1}{2}\mu_{ab}\,(dn_{ab}+n_{ab}\,\partial
u)=\frac{1}{2}\mu_{ab}\,\partial_{\mu}q^{\mu}_{ab}+\mu_{ab}\phi_{b}\left(\partial_{\mu}\partial_{\perp}^{\mu}\phi_{a}-\frac{\partial
V}{\partial\phi_{a}}+H_{a}\right)\,,$ (135)
where we have inserted $\phi_{b}\,\partial
V/\partial\phi_{a}-\phi_{a}\,\partial V/\partial\phi_{b}$, which vanishes due
to the $O(4)$ symmetry of the potential.
### A.3 Synthesis
After substitutions using (131) and (135), we find the final expression for
the entropy production quoted in the text
$\displaystyle\partial_{\mu}(s_{\Sigma}u^{\mu}-\frac{\mu}{2T}\cdot q^{\mu})=$
$\displaystyle\frac{1}{T}\left(d\phi_{a}+\mu_{ab}\phi_{b}\right)\,[\partial_{\mu}\partial^{\mu}_{\perp}\phi_{a}-\frac{\partial
V}{\partial\phi_{a}}+H_{a}]-\Pi^{\mu\nu}\partial_{\mu}\beta_{\nu}-q^{\mu}\cdot\partial_{\mu}\left(\frac{\mu}{2T}\right)\,.$
(136)
## Appendix B Computing the transport coefficients near the critical point
In this appendix, we gather the details of the computation of the transport
coefficients. First, we note that the dimensionless function introduced in
(97) can be integrated exactly
$\displaystyle f_{n}(r,u)$
$\displaystyle=\frac{16}{15\pi}\frac{m^{7-2n}}{r^{2}-u^{2}}\int_{0}^{\infty}dk\frac{k^{2n}}{(k^{2}+m^{2})^{3}}\left[\frac{r^{2}}{k^{2}+r^{2}m^{2}}-\frac{u^{2}}{k^{2}+u^{2}m^{2}}\right],$
(137) $\displaystyle=\frac{\sec(\pi
n)}{15\left(r^{2}-u^{2}\right)}\Big{[}\frac{4n^{2}\left(r^{2}-1\right)^{2}-8r^{2n+1}+8n\left(r^{2}-1\right)-r^{4}+6r^{2}+3}{\left(r^{2}-1\right)^{3}}$
$\displaystyle\qquad\qquad\qquad-\frac{4n^{2}\left(u^{2}-1\right)^{2}-8u^{2n+1}+8n\left(u^{2}-1\right)-u^{4}+6u^{2}+3}{\left(u^{2}-1\right)^{3}}\Big{]}.$
(138)
Next, we take a closer look at the shear viscosity computation. We see from
(90) that the shear viscosity will have a contribution from the $\sigma$ and
$\varphi$ propagators:
$\displaystyle\langle T^{xy}(x)T^{xy}(z)\rangle$
$\displaystyle=\langle\partial^{x}\delta\sigma(x)\partial^{y}\delta\sigma(x)\partial^{x}\delta\sigma(z)\partial^{y}\delta\sigma(z)\rangle+\sigma_{0}^{4}\langle\partial^{x}\varphi_{a}(x)\partial^{y}\varphi_{a}(x)\partial^{x}\varphi_{b}(z)\partial^{y}\varphi_{b}(z)\rangle,$
$\displaystyle\equiv I_{\sigma\sigma}^{xy}+I_{\varphi\varphi}^{xy}\,.$ (139)
The contribution from the $\sigma\sigma$ propagator reads
$\displaystyle I_{\sigma\sigma}^{xy}$
$\displaystyle=\frac{2T^{2}}{(30\pi^{2}\Gamma)}\int_{0}^{\Lambda}\frac{k^{6}dk}{(k^{2}+m^{2}_{\sigma})^{3}}=\frac{T^{2}\Lambda}{15\pi^{2}\Gamma}-\frac{T^{2}m_{\rm\sigma}}{16\pi\Gamma}.$
(140)
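The closed form in (140) can be verified numerically; assuming a plain trapezoidal quadrature, the finite-$\Lambda$ integral agrees with $\Lambda-\tfrac{15\pi}{16}m_{\sigma}$ up to a small $\mathcal{O}(m_{\sigma}^{2}/\Lambda)$ tail:

```python
import math

m, Lam, N = 1.0, 200.0, 400_000  # mass (units of m_sigma), cutoff, grid points
h = Lam / N

# Trapezoidal rule for int_0^Lambda k^6/(k^2+m^2)^3 dk (integrand vanishes at k = 0)
total = sum((i * h)**6 / ((i * h)**2 + m**2)**3 for i in range(1, N)) * h
total += 0.5 * h * Lam**6 / (Lam**2 + m**2)**3

closed = Lam - 15 * math.pi * m / 16   # closed form implied by eq. (140)
print(total, closed)                   # agree at the 1e-4 relative level
```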
Similarly, we need to evaluate the contribution to the shear viscosity due to
the $\varphi\varphi$ propagator:
$\displaystyle
I^{xy}_{\varphi\varphi}=2d_{A}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{d\omega}{(2\pi)}\frac{\bar{\sigma}^{4}}{\omega_{k}^{4}}(k^{x}k^{y}G^{\varphi\varphi}_{\rm
sym})^{2}=2T^{2}d_{A}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{(k^{x}k^{y})^{2}}{(k^{2}+m^{2})^{2}}\frac{g_{2}^{2}+(g_{1}g_{2}+\omega_{k}^{2})}{(g_{1}+g_{2})(g_{1}g_{2}+\omega_{k}^{2})}\,.$
(141)
We can evaluate the expression neatly by adding and subtracting the leading
divergent piece
$\displaystyle
I^{xy}_{\varphi\varphi}=\frac{2T^{2}d_{A}}{30\pi^{2}\Gamma}\int_{0}^{\Lambda}\frac{k^{6}\,dk}{(k^{2}+m^{2})^{3}}+\frac{2T^{2}d_{A}}{30\pi^{2}}\int_{0}^{\infty}dk\,\frac{k^{6}}{(k^{2}+m^{2})^{2}}\left(\frac{g_{2}^{2}+(g_{1}g_{2}+\omega_{k}^{2})}{(g_{1}+g_{2})(g_{1}g_{2}+\omega_{k}^{2})}-\frac{1}{g_{1}}\right),$
(142)
and by using (73) and (75), we can evaluate the above expression to find
$\displaystyle
I^{xy}_{\varphi\varphi}=\frac{2T^{2}d_{A}\Lambda}{30\pi^{2}\Gamma}-\frac{2T^{2}md_{A}}{32\pi\Gamma}\left(1+u^{2}(1-r^{2})f_{3}(r,u)\right).$
(143)
Combining the ingredients, we find that the shear viscosity is given by
(100b).
## Appendix C Comparison with pion kinetics
Our purpose in this appendix is to explain the (black dotted) “$\pi$-kinetics”
curves in Fig. 3. As discussed in Sect. IV, when writing down the hydrodynamic
theory with the $\Sigma$ field we have integrated out modes with $k\sim T$,
which are then incorporated into the dissipative transport coefficients of the
hydrodynamic theory such as $\eta_{\Sigma}$. Modes with $k\sim m_{\sigma}$ are
explicitly propagated in the theory.
At large negative $z$ (well in the broken phase), the $\sigma$ is heavy
compared to the pions, and can be consistently integrated out by exploiting
the mass hierarchy
$m\ll m_{\sigma}\ll T\,.$ (144)
The resulting hydrodynamic effective theory consists of energy, momentum, and
light pions, which are parameterized by the unitary matrix, $U=e^{i2\varphi}$
Grossi _et al._ (2020). Modes with $k\sim m_{\sigma}$ are now incorporated
into the new transport coefficients of this theory, such as $\eta_{U}$;
$\eta_{U}$ differs from $\eta_{\Sigma}$ due to the contribution of these
modes.
At the longest distances with $k\ll m$, the pion hydrodynamic theory reduces
to ordinary hydrodynamics with the familiar transport coefficients $\eta$,
$\zeta$ and $\sigma_{I}$. Matching the pion effective theory to normal
hydrodynamics determines the contribution of soft pions to these normal
coefficients. This computation gives Grossi _et al._ (2020)
$\displaystyle\eta=$
$\displaystyle\eta_{U}-\frac{d_{A}Tm}{120\pi(\Gamma+D_{0})}\left[\frac{2r^{3}+4r^{2}+6r+3}{(1+r)^{2}}\right],$
(145a) $\displaystyle\sigma_{I}=$
$\displaystyle(\sigma_{I})_{U}+\frac{T_{A}T}{24\pi
m(\Gamma+D_{0})}\left[\frac{1+2r}{(1+r)^{2}}\right],$ (145b)
$\displaystyle\zeta=$
$\displaystyle\zeta_{U}-\frac{d_{A}Tm}{8\pi(\Gamma+D_{0})}\left(\frac{\beta
c_{s0}^{2}}{t}\right)^{2}\left[\frac{8r^{3}+16r^{2}+16r+7}{4(1+r)^{2}}\right].$
(145c)
Here $r=\Gamma/(\Gamma+D_{0})$, and $\eta_{U}$, $\zeta_{U}$, and
$(\sigma_{I})_{U}$ are the dissipative parameters of the soft-pion effective
theory (in Grossi _et al._ (2020), the renormalized dissipative parameters
$\eta_{U},\zeta_{U},(\sigma_{I})_{U}$ were called $\eta^{(0)}_{\rm phys}$,
$\zeta^{(0)}_{\rm phys}$, and $(\sigma_{I})^{(0)}_{\rm phys}$; the transport
coefficients $\Gamma$ and $D_{0}$ in this work were called $D_{m}$ and
$D_{A}-D_{m}$ there).
Expanding our results in eq. (100) for $\eta$ and $\sigma_{I}$ at large
negative $z$ (where the parameter $u$ tends to infinity), we find that our
expressions match with the pion EFT results in (145), provided we identify
$\displaystyle(\sigma_{I})_{U}$
$\displaystyle=\sigma_{\Sigma}+\frac{T_{A}T}{12\pi\Gamma
m_{\sigma}}\left(\frac{\Gamma+D}{D}\right)\left(\frac{\sqrt{\Gamma
Dm_{\sigma}^{2}}}{v}\right)_{-\infty},$ (146a) $\displaystyle\eta_{U}$
$\displaystyle=\eta_{\Sigma}-\frac{Tm_{\sigma}}{32\pi\Gamma}-\frac{Td_{A}m_{\sigma}}{60\pi\Gamma}\left(\frac{D}{\Gamma+D}\right)\left(\frac{v}{\sqrt{\Gamma
Dm_{\sigma}^{2}}}\right)_{-\infty}.$ (146b)
Here we have defined the constant
$\left(\frac{v}{\sqrt{\Gamma
Dm_{\sigma}^{2}}}\right)_{-\infty}\equiv\lim_{z\to-\infty}\frac{v}{\sqrt{\Gamma
Dm_{\sigma}^{2}}}=\frac{u_{c}}{\sqrt{2}}\,,$ (147)
where $u_{c}^{2}\equiv v^{2}_{c}/\Gamma Dm_{c}^{2}$ is a dimensionless
combination of parameters evaluated on the critical line (see Sect. III.3 for
a physical explanation). Throughout the paper we have taken $u_{c}=1$ as in
Fig. 2. As discussed above, the difference between $\eta_{U}$ and
$\eta_{\Sigma}$ (and similarly for the conductivity) comes from integrating
out modes with $k\sim m_{\sigma}$. Thus, for instance, the second term in
(146b) stems from integrating out the $\sigma$ field, while the third term
stems from integrating out hard pions with $k\sim m_{\sigma}$. The
“$\pi$-kinetics” curves in Fig. 3 are the asymptotics given in (145) with
parameters identified in (146).
## References
* Jeon and Heinz (2015) Sangyong Jeon and Ulrich Heinz, “Introduction to Hydrodynamics,” Int. J. Mod. Phys. E 24, 1530010 (2015), arXiv:1503.03931 [hep-ph] .
* Heinz and Snellings (2013) Ulrich Heinz and Raimond Snellings, “Collective flow and viscosity in relativistic heavy-ion collisions,” Ann. Rev. Nucl. Part. Sci. 63, 123–151 (2013), arXiv:1301.2826 [nucl-th] .
# Spark NLP: Natural Language Understanding at Scale
Veysel Kocaman, David Talby
John Snow Labs Inc.
16192 Coastal Highway
Lewes, DE , USA 19958
{veysel<EMAIL_ADDRESS>
###### Abstract
Spark NLP is a Natural Language Processing (NLP) library built on top of
Apache Spark ML. It provides simple, performant, and accurate NLP annotations
for machine learning pipelines that scale easily in a distributed environment.
Spark NLP comes with 1100+ pretrained pipelines and models in more than 192
languages. It supports nearly all common NLP tasks, and its modules can be used
seamlessly in a cluster. Downloaded more than 2.7 million times and
experiencing 9x growth since January 2020, Spark NLP is used by 54% of
healthcare organizations, making it the world's most widely used NLP library
in the enterprise.
###### keywords:
spark, natural language processing, deep learning, tensorflow, cluster
Journal: Software Impacts
## 1 Spark NLP Library
Natural language processing (NLP) is a key component in many data science
systems that must understand or reason about text. Common use cases include
question answering, paraphrasing or summarising, sentiment analysis, natural
language BI, language modelling, and disambiguation. Nevertheless, NLP is
almost always just one part of a bigger data processing pipeline, and because
of the nontrivial steps involved, there is a growing need for an all-in-one
solution that eases the burden of text preprocessing at large scale and
connects the dots between the various steps of solving a data science problem
with NLP. A good NLP library should correctly transform free text into
structured features and let users train their own NLP models that feed easily
into downstream machine learning (ML) or deep learning (DL) pipelines.
Spark NLP was developed as a single unified solution for all NLP tasks, and it
is the only library that can scale up training and inference in any Spark
cluster, take advantage of transfer learning, implement the latest algorithms
and models from NLP research, and deliver mission-critical, enterprise-grade
solutions at the same time. It is an open-source natural language processing
library, built on top of Apache Spark and Spark ML. It provides an easy API
to integrate with ML pipelines, and it is commercially supported by John Snow
Labs Inc., an award-winning healthcare AI and NLP company based in the USA.
Spark NLP's annotators use rule-based algorithms, machine learning, and deep
learning models implemented in TensorFlow, and have been heavily optimized for
accuracy, speed, scalability, and memory utilization. This setup is tightly
integrated with Apache Spark, letting the driver node run the entire training
using all of its available cores. There is a CUDA version of each TensorFlow
component to enable training models on GPU when available. Spark NLP is
written in Scala and provides open-source APIs in Python, Java, Scala, and R,
so users do not need to be aware of the underlying implementation details
(TensorFlow, Spark, etc.) in order to use it. Thanks to an active release
cycle (26 new versions in 2019 and another 26 in 2020), the latest trends and
research in the NLP field are embraced and implemented rapidly, in a way that
scales well in a cluster setting and allows common NLP pipelines to run orders
of magnitude faster than the inherent design limitations of legacy libraries
allowed.
The Spark NLP library comes in two versions: open source and enterprise. The
open-source version has all the features and components that could be expected
from any NLP library, built on the latest DL frameworks and research trends.
The enterprise version is licensed (free for academic purposes), extends the
open-source version, and is designed to solve real-world problems in the
healthcare domain. The licensed version includes the following modules to help
researchers and data practitioners: named entity recognition (NER), assertion
status (negation scope) detection, relation extraction, entity resolution
(SNOMED, RxNorm, ICD10, etc.), clinical spell checking, contextual parsing,
text2SQL, and de-identification and obfuscation. A high-level overview of the
components in each version is given in Figure 4.
## 2 The impact to research fields
The COVID-19 pandemic brought a surge of academic research about the virus -
resulting in 23,634 new publications between January and June of 2020 [1] and
accelerating to 8,800 additions per week from June to November on the COVID-19
Open Research Dataset [2]. Such a high volume of publications makes it
impossible for researchers to read each publication, resulting in increased
interest in applying natural language processing (NLP) and text mining
techniques to enable semi-automated literature review [3].
In parallel, there is a growing need for automated text mining of electronic
health records (EHRs) in order to find the clinical indications that new
research points to. EHRs are the primary source of information for clinicians
tracking the care of their patients. Information fed into these systems may be
found in structured fields for which values are entered electronically (e.g.
laboratory test orders or results) [4], but most of the time the information
in these records is unstructured, making it largely inaccessible for
statistical analysis [5]. These records include information such as the reason
for administering drugs, previous disorders of the patient, or the outcome of
past treatments, and they are the largest source of empirical data in
biomedical research, allowing for major scientific findings in highly relevant
disorders such as cancer and Alzheimer's disease [6]. Despite the growing
interest and groundbreaking advances in NLP research and NER systems,
easy-to-use, production-ready models and tools are scarce in the biomedical
and clinical domain, and this is one of the major obstacles preventing
clinical NLP researchers from implementing the latest algorithms in their
workflows and using them immediately. On the other hand, NLP toolkits
specialized for processing biomedical and clinical text, such as MetaMap [7]
and cTAKES [8], typically do not make use of recent research innovations such
as word representations or neural networks, and hence produce less accurate
results [9, 10]. We introduce Spark NLP as a one-stop solution to address all
these issues.
A primary building block in such text mining systems is named entity
recognition (NER), which is regarded as a critical precursor for question
answering, topic modelling, information retrieval, etc. [11]. In the medical
domain, NER recognizes the first meaningful chunks of a clinical note, which
are then fed down the processing pipeline as input to subsequent downstream
tasks such as clinical assertion status detection [12], clinical entity
resolution [13], and de-identification of sensitive data [14]. However,
segmentation of clinical and drug entities is considered a difficult task in
biomedical NER systems because of the complex orthographic structures of named
entities [15]. Sample NER predictions on a clinical text can be found in
Figure 3.
The next step after an NER model in a clinical NLP pipeline is to assign an
assertion status to each named entity given its context. The status of an
assertion explains how a named entity (e.g. clinical finding, procedure, lab
result) pertains to the patient by assigning a label such as present ("patient
is diabetic"), absent ("patient denies nausea"), conditional ("dyspnea while
climbing stairs"), or associated with someone else ("family history of
depression"). In the context of COVID-19, applying accurate assertion status
detection is crucial, since most patients will be tested for and asked about
the same set of symptoms and comorbidities, so limiting a text mining pipeline
to recognizing medical terms without context is not useful in practice. The
flow diagram of such a pipeline can be seen in Figure 1.
In our previous study [16], we showed through extensive experiments that the
NER module in the Spark NLP library exceeds the biomedical NER benchmarks
reported by Stanza in 7 out of 8 benchmark datasets, and in every dataset
reported by SciSpacy, without using heavy contextual embeddings like BERT.
Porting a modified version of the well-known BiLSTM-CNN-Char NER architecture
[17] into the Spark environment, we also showed that even with general-purpose
GloVe embeddings (GloVe6B) and no lexical features, we achieve
state-of-the-art results in the biomedical domain and outperform Stanza in 4
out of 8 benchmark datasets.
In another study [18], we introduced a set of pretrained NER models, all
trained on biomedical and clinical datasets using the same deep learning
architecture. We then illustrated how to extract knowledge and relevant
information from unstructured electronic health records (EHRs) and the
COVID-19 Open Research Dataset (CORD-19) by combining these models in a
unified, scalable pipeline, and shared results illustrating the extraction of
valuable information from scientific papers. The results suggest that the
papers in CORD-19 include a wide variety of the many entity types that this
new NLP pipeline can recognize, and that assertion status detection is a
useful filter on these entities (Figure 2). The most frequent phrases from
selected entity types can be found in Table 2. This bodes well for the
richness of downstream analysis that can be done on this now structured and
normalized data - such as clustering, dimensionality reduction, semantic
similarity, visualization, or graph-based analysis to identify correlated
concepts. Moreover, in order to evaluate how fast the pipeline works and how
effectively it scales to make use of a compute cluster, we ran the same Spark
NLP prediction pipelines in local mode and in cluster mode, and found that
tokenization is 20x faster and entity extraction 3.5x faster on the cluster,
compared to the single-machine run.
## 3 The impact to industrial and academic collaborations
As the creator of Spark NLP, John Snow Labs has been supporting researchers
around the globe by granting them a free license to use all the licensed
modules in research projects and graduate-level university courses, providing
hands-on support when needed, organizing workshops and summits that gather
distinguished speakers, and running projects with the R&D teams of top
pharmaceutical companies to help them unlock the potential of the unstructured
text data buried in their ecosystems. Spark NLP already powers leading
healthcare and pharmaceutical companies including Kaiser Permanente, McKesson,
Merck, and Roche. Since Spark NLP can also be used offline and deployed in
air-gapped networks, companies and healthcare facilities do not need to worry
about exposing protected health information (PHI). Detailed information about
these projects and case studies can be found in [19], [20], and [21].
Figure 1: The flow diagram of a Spark NLP pipeline. When fit() is called on
the pipeline with a Spark data frame, its text column is fed into the
DocumentAssembler() transformer and a new column "document" is created as the
initial entry point to Spark NLP for any Spark data frame. The "document"
column is then fed into the SentenceDetector() module, the text is split into
an array of sentences, and a new column "sentences" is created. The
"sentences" column is then fed into Tokenizer(), each sentence is tokenized,
and a new column "token" is created. Tokens are then normalized (basic text
cleaning) and word embeddings are generated for each of them. The data is now
ready to be fed into the NER models and then into the assertion model.

Table 1: NER performance across different datasets in the biomedical domain.
All scores reported are micro-averaged test F1, excluding O's. Stanza results
are from the paper reported in [9]; SciSpaCy results are from the
scispacy-medium models reported in [10]. The official training and validation
sets are merged and used for training, and the models are then evaluated on
the original test sets. For reproducibility, we use the preprocessed versions
of these datasets provided by [22], also used by Stanza. The Spark-x prefix in
the table indicates our implementation. Bold scores are the best in each row.
Dataset | Entities | Spark - Biomedical | Spark - GloVe 6B | Stanza | SciSpacy
---|---|---|---|---|---
NCBI-Disease | Disease | 89.13 | 87.19 | 87.49 | 81.65
BC5CDR | Chemical, Disease | 89.73 | 88.32 | 88.08 | 83.92
BC4CHEMD | Chemical | 93.72 | 92.32 | 89.65 | 84.55
Linnaeus | Species | 86.26 | 85.51 | 88.27 | 81.74
Species800 | Species | 80.91 | 79.22 | 76.35 | 74.06
JNLPBA | 5 types in cellular | 81.29 | 79.78 | 76.09 | 73.21
AnatEM | Anatomy | 89.13 | 87.74 | 88.18 | 84.14
BioNLP13-CG | 16 types in Cancer Genetics | 85.58 | 84.30 | 84.34 | 77.60
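As a reference point for the scores above, here is a minimal sketch of micro-averaged F1 excluding O's, computed over token-level label sequences. This is a simplification: published NER benchmarks are usually scored over entity spans, so the function below is an illustration of the metric's structure rather than the exact evaluation script.

```python
def micro_f1_excluding_o(gold, pred):
    """Micro-averaged F1 over parallel token label sequences, ignoring the 'O' tag."""
    assert len(gold) == len(pred)
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g != "O")
    fp = sum(1 for g, p in zip(gold, pred) if p != "O" and p != g)  # spurious predictions
    fn = sum(1 for g, p in zip(gold, pred) if g != "O" and g != p)  # missed gold labels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

For instance, with gold labels `["O", "B-Disease", "I-Disease", "O"]` and predictions `["O", "B-Disease", "O", "O"]`, precision is 1.0, recall is 0.5, and the F1 is about 0.667.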
Figure 2: Named entity recognition is a fundamental building block of medical
text mining pipelines, feeding downstream tasks such as assertion status,
entity linking, de-identification, and relation extraction.

Figure 3: Sample clinical entities predicted by a clinical NER model trained
on various datasets. There are more than 40 pretrained NER models in the Spark
NLP Enterprise edition.

Figure 4: The Spark NLP library has two versions (open source and enterprise),
and each comes with a set of pretrained models and pipelines that can be used
out of the box with no further training or dataset.

Table 2: The 10 most frequent terms from selected entity types, predicted by
parsing 100 articles from the CORD-19 dataset [2] with an NER model named
jsl_ner_wip in Spark NLP. From the model's predictions we can extract valuable
information, such as the most frequent disorders or symptoms mentioned in the
papers, or the most common vital sign and EKG findings, without reading the
papers. According to this table, the most common symptoms are cough and
inflammation, while the most commonly mentioned drug ingredients are
oseltamivir and antibiotics. We can also see that cardiogenic oscillations and
ventricular fibrillation are common EKG observations, while fever and
hypothermia are the most common vital signs.
Disease Syndrome Disorder | Communicable Disease | Symptom | Drug Ingredient | Procedure | Vital Sign Findings | EKG Findings
---|---|---|---|---|---|---
infectious diseases | HIV | cough | oseltamivir | resuscitation | fever | low VT
sepsis | H1N1 | inflammation | biological agents | cardiac surgery | hypothermia | cardiogenic oscillations
influenza | tuberculosis | critically ill | VLPs | tracheostomy | hypoxia | significant changes
septic shock | influenza | necrosis | antibiotics | CPR | respiratory failure | CO reduces oxygen transport
asthma | TB | bleeding | saline | vaccination | hypotension | ventricular fibrillation
pneumonia | hepatitis viruses | lesion | antiviral | bronchoscopy | hypercapnia | significant impedance increases
COPD | measles | cell swelling | quercetin | intubation | tachypnea | ventricular fibrillation
gastroenteritis | pandemic influenza | hemorrhage | NaCl | transfection | respiratory distress | pulseless electrical activity
viral infections | seasonal influenza | diarrhea | ribavirin | bronchoalveolar lavage | hypoxaemia | mildmoderate hypothermia
SARS | rabies | toxicity | Norwalk agent | autopsy | pyrexia | cardiogenic oscillations
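The kind of aggregation behind Table 2 can be sketched as follows: pool the (chunk text, entity type) predictions over all parsed articles and take the most frequent chunks per type. The sample predictions below are hypothetical and only illustrate the mechanism.

```python
from collections import Counter

def top_terms(predictions, k=10):
    """Pool (chunk, entity_type) predictions over articles and return the k most
    frequent chunk texts per entity type."""
    counts = {}
    for chunk, entity_type in predictions:
        counts.setdefault(entity_type, Counter())[chunk.lower()] += 1
    return {etype: [term for term, _ in c.most_common(k)]
            for etype, c in counts.items()}

# Hypothetical predictions pooled from several articles:
preds = [("cough", "Symptom"), ("Cough", "Symptom"), ("inflammation", "Symptom"),
         ("oseltamivir", "Drug Ingredient"), ("antibiotics", "Drug Ingredient"),
         ("oseltamivir", "Drug Ingredient")]
```

Here `top_terms(preds, k=2)` ranks "cough" above "inflammation" for Symptom and "oseltamivir" above "antibiotics" for Drug Ingredient, mirroring the per-column ordering of Table 2.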
## 4 Acknowledgements
We thank our colleagues and research partners who contributed to the former
and current development of the Spark NLP library. We also thank our users and
customers who helped us improve the library with their feedback and
suggestions.
## References
* [1] J. A. T. da Silva, P. Tsigaris, M. Erfanmanesh, Publishing volumes in major databases related to covid-19, Scientometrics (2020) 1 – 12.
* [2] L. L. Wang, K. Lo, Y. Chandrasekhar, R. Reas, J. Yang, D. Eide, K. Funk, R. Kinney, Z. Liu, W. Merrill, et al., Cord-19: The covid-19 open research dataset, ArXiv.
* [3] X. Cheng, Q. Cao, S. Liao, An overview of literature on covid-19, mers and sars: Using text mining and latent dirichlet allocation, Journal of Information Science.
* [4] A. Liede, R. K. Hernandez, M. Roth, G. Calkins, K. Larrabee, L. Nicacio, Validation of international classification of diseases coding for bone metastases in electronic health records using technology-enabled abstraction, Clinical epidemiology 7 (2015) 441.
* [5] T. B. Murdoch, A. S. Detsky, The inevitable application of big data to health care, Jama 309 (13) (2013) 1351–1352.
* [6] G. Perera, M. Khondoker, M. Broadbent, G. Breen, R. Stewart, Factors associated with response to acetylcholinesterase inhibition in dementia: a cohort study from a secondary mental health care case register in london, PloS one 9 (11) (2014) e109484.
* [7] A. R. Aronson, F.-M. Lang, An overview of metamap: historical perspective and recent advances, Journal of the American Medical Informatics Association 17 (3) (2010) 229–236.
* [8] G. K. Savova, J. J. Masanz, P. V. Ogren, J. Zheng, S. Sohn, K. C. Kipper-Schuler, C. G. Chute, Mayo clinical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and applications, Journal of the American Medical Informatics Association 17 (5) (2010) 507–513.
* [9] Y. Zhang, Y. Zhang, P. Qi, C. D. Manning, C. P. Langlotz, Biomedical and clinical english model packages in the stanza python nlp library, arXiv preprint arXiv:2007.14640.
* [10] M. Neumann, D. King, I. Beltagy, W. Ammar, Scispacy: Fast and robust models for biomedical natural language processing, arXiv preprint arXiv:1902.07669.
* [11] V. Yadav, S. Bethard, A survey on recent advances in named entity recognition from deep learning models, arXiv preprint arXiv:1910.11470.
* [12] Ö. Uzuner, B. R. South, S. Shen, S. L. DuVall, 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text, Journal of the American Medical Informatics Association 18 (5) (2011) 552–556.
* [13] D. Tzitzivacos, International classification of diseases 10th edition (icd-10):: main article, CME: Your SA Journal of CPD 25 (1) (2007) 8–10.
* [14] Ö. Uzuner, Y. Luo, P. Szolovits, Evaluating the state-of-the-art in automatic de-identification, Journal of the American Medical Informatics Association 14 (5) (2007) 550–563.
* [15] S. Liu, B. Tang, Q. Chen, X. Wang, Effects of semantic features on machine learning-based drug name recognition systems: word embeddings vs. manually constructed dictionaries, Information 6 (4) (2015) 848–865.
* [16] V. Kocaman, D. Talby, Biomedical named entity recognition at scale, arXiv preprint arXiv:2011.06315.
* [17] J. P. Chiu, E. Nichols, Named entity recognition with bidirectional lstm-cnns, Transactions of the Association for Computational Linguistics 4 (2016) 357–370.
* [18] V. Kocaman, D. Talby, Improving clinical document understanding on covid-19 research with spark nlp, arXiv preprint arXiv:2012.04005.
* [19] J. S. Labs, Apache Spark NLP for Healthcare: Lessons Learned Building Real-World Healthcare AI Systems, https://databricks.com/session_na20/apache-spark-nlp-for-healthcare-lessons-learned-building-real-world-healthcare-ai-systems, [Online; accessed 22-Jan-2021] (2021).
* [20] J. S. Labs, NLP Case Studies, https://www.johnsnowlabs.com/nlp-case-studies/, [Online; accessed 22-Jan-2021] (2021).
* [21] J. S. Labs, AI Case Studies, https://www.johnsnowlabs.com/ai-case-studies/, [Online; accessed 22-Jan-2021] (2021).
* [22] X. Wang, Y. Zhang, X. Ren, Y. Zhang, M. Zitnik, J. Shang, C. Langlotz, J. Han, Cross-type biomedical named entity recognition with deep multi-task learning, Bioinformatics 35 (10) (2019) 1745–1752.
## Required Metadata
## Current code version
Nr. | Code metadata description | Please fill in this column
---|---|---
C1 | Current code version | v2.7.1
C2 | Permanent link to code/repository used for this code version | https://github.com/JohnSnowLabs/spark-nlp
C3 | Permanent link to Reproducible Capsule | https://github.com/JohnSnowLabs/spark-nlp-workshop
C4 | Legal Code License | Apache-2.0 License
C5 | Code versioning system used | git, maven
C6 | Software code languages, tools, and services used | scala, python, java, R
C7 | Compilation requirements, operating environments & dependencies | jdk 8, spark
C8 | If available Link to developer documentation/manual | https://nlp.johnsnowlabs.com/api/
C9 | Support email for questions |<EMAIL_ADDRESS>
Table 3: Code metadata (mandatory)
## Current executable software version
Nr. | (Executable) software metadata description | Please fill in this column
---|---|---
S1 | Current software version | 2.7.1
S2 | Permanent link to executables of this version | https://github.com/JohnSnowLabs/spark-nl
S3 | Permanent link to Reproducible Capsule | https://github.com/JohnSnowLabs/spark-nlp-workshop
S4 | Legal Software License | Apache-2.0 License
S5 | Computing platforms/Operating Systems | Linux, Ubuntu, OSX, Microsoft Windows, Unix-like
S6 | Installation requirements & dependencies | jdk 8, spark
S7 | If available, link to user manual - if formally published include a reference to the publication in the reference list | https://nlp.johnsnowlabs.com/api/
S8 | Support email for questions |<EMAIL_ADDRESS>
# Sound Speed in Extended Chaplygin Fluid
Behnam Pourhassana, Hoda Farahania,b and Sudhaker Upadhyayc,d,a
aSchool of Physics, Damghan University, Damghan, Iran
P.O.Box 3671641167, Damghan, Iran
bCanadian Quantum Research Center, 204-300232 AveVernon, BCV1T2L7 Canada
cDepartment of Physics, K.L.S. College, Nawada-805110,
(a constituent unit of Magadh University, Bodh-Gaya), Bihar, India
dVisiting Associate, Inter-University Centre for Astronomy and Astrophysics
(IUCAA)
Pune-411007, Maharashtra, India Email<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
We consider an extended Chaplygin gas equation of state, derived from the
D-brane action, and construct a cosmological model based on it. In this
regard, we compute the scale factor of the model under a certain
approximation. The conservation equation in this case is a non-linear
differential equation that must be solved under special conditions. We also
analyze the stability of the model by using the sound speed as well as the
adiabatic index, and discuss certain special cases of the model. We find a
special equation of state in this model which yields dynamical and
thermodynamical stability. Furthermore, we study the cosmological consequences
of this model under certain conditions.
Keywords: String Theory; Dark Energy; Fluid Mechanics.
## 1 Overview and motivation
Even after confirmation that most of the Universe is filled by dark energy and
dark matter, the nature of this dark sector remains a mystery. Determining the
nature of the dark Universe is therefore an important challenge in theoretical
physics. In that case, particle physics needs to identify the elementary
particles that constitute dark energy and dark matter. There are several
phenomenological and theoretical models describing the accelerating expansion
of the Universe (Riess et al., 1998; Perlmutter et al., 1999; Deffayet et al.,
2002; Chaubey and Shukla 2013). Most of them emphasize the fact that dark
energy and cold dark matter have negative pressure. However, neither cold dark
matter nor dark energy has a direct observational test to confirm its reality.
Therefore, a unified scenario of dark matter and dark energy suggests that
these two components are different aspects of a single fluid, as proposed by
Matos and Urena-Lopez (2000).
One of the interesting models describing the dark side of the Universe is
based on the Chaplygin gas (CG) fluid (Bento et al., 2002; Kamenshchik et al.,
2001). The original CG model was not consistent with recent observational data
such as SNIa, BAO, and CMB (Makler et al., 2003; Sandvik et al., 2004; Zhu
2004; Bento et al., 2003). Hence, the generalized Chaplygin gas (GCG) was
proposed, which is indeed a unification of dark energy and dark matter (Bilic
et al., 2002; Bazeia 1999). Subsequently, the GCG was extended to the modified
Chaplygin gas (MCG) (Debnath et al., 2004) to obtain better agreement with
recent observations. Further extensions also exist, such as the generalized
cosmic Chaplygin gas (GCCG) introduced by Gonzalez-Diaz (2003), and the
modified cosmic Chaplygin gas (MCCG) studied by Pourhassan (2013). Viscosity
in various CG models has also been studied with interest (Saadat and
Pourhassan 2013). Moreover, some CG models describe the accelerating expansion
of the Universe by taking variable parameters (Salti et al., 2018; Salti et
al., 2019). The latest CG model, the so-called extended Chaplygin gas (ECG),
was proposed to cover barotropic fluids with a quadratic equation of state
(Pourhassan and Kahya 2014a; Kahya et al., 2015; Kahya and Pourhassan 2015).
Let us systematically summarize the various CG models studied so far. One of
the recent cosmological models featuring negative pressure is based on an
exotic type of perfect fluid and suggests that the Universe is filled by the
CG, producing accelerating expansion. This model is described by the following
equation of state (EoS) relating the energy density $\rho$ and pressure $p$
(Pun et al., 2008):
$p=-\frac{A}{\rho},$ (1)
where $A$ is a positive constant. It should be noted that in natural units the
energy density and pressure are dimensionless quantities. The EoS given by
equation (1) was introduced originally by Chaplygin as a suitable model to
describe the lifting force on an airplane wing (Chaplygin 1904).
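For later reference, the stability analysis announced in the abstract rests on the adiabatic sound speed; for the EoS (1) it is (a standard computation, stated here only as an aside)

$c_{s}^{2}=\frac{\partial p}{\partial\rho}=\frac{A}{\rho^{2}}>0,$

so the pure CG is classically stable against small perturbations whenever $A>0$.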
Further, the GCG is described by the following equation of state (Bento et
al., 2002):
$p=-\frac{A}{\rho^{\alpha}},$ (2)
where $0\leq\alpha\leq 1$. This model provides a cosmic evolution from an
initial dust-like behavior to a late-time behavior dominated by the
cosmological constant. The Chaplygin gas model is also relevant to the
stabilization of branes in black hole backgrounds (Kamenshchik and Moschella
2000).
The MCG equation of state is given by Debnath et al. (2004) as
$p=A_{1}\rho-\frac{A}{\rho^{\alpha}},$ (3)
where $A$ and $\alpha$ are positive constants, while $A_{1}$ may be a positive
or negative constant. This equation of state describes a radiation era at one
extreme, for negligibly small scale factor, and a $\Lambda$CDM model at the
other extreme. In fact, $A$ or $A_{1}$ may also be considered as a variable
(Guo and Zhang 2007). A more recent equation of state, the so-called ECG, is
given by (Pourhassan and Kahya 2014b)
$p=\sum_{i=1}^{n}A_{i}\rho^{i}-\frac{A}{\rho^{\alpha}}.$ (4)
A much more comprehensive equation of state, describing the viscous MCCG, can
be written as
$p=\sum_{i=1}^{n}A_{i}\rho^{i}-\frac{1}{\rho^{\alpha}}\left[\frac{A}{1+w}+\left(\rho^{1+\alpha}-\frac{A}{1+w}+1\right)^{-w}-1\right]-\Pi,$
(5)
where $w$ is called the cosmic parameter (Pourhassan 2013) and $\Pi$
corresponds to the viscosity, which generally depends on the energy density
and can be written as a power of $\rho$ ($\Pi\propto\rho^{m}$). All the
equations of state (1)-(4) are particular cases of (5). For example, it is
easy to check that for $\Pi=w=A_{n}=0$ and $\alpha=1$, (5) reduces to
equation (1). For $\Pi=w=A_{n}=0$, equation (5) coincides with (2). For
$\Pi=w=0$ and $n=1$, (5) reduces to (3), and (4) is obtained from (5) by
setting $w=0$.
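The reduction chain above can be checked numerically. The following plain-Python sketch (function and variable names are ours; the polynomial part is dropped entirely where the text sets $A_{n}=0$) evaluates (5) and compares it with the special cases (1)-(3):

```python
import math

def p_general(rho, A_coeffs, A, alpha, w, Pi):
    """Pressure from the viscous MCCG equation of state (5).
    A_coeffs[i-1] multiplies rho**i in the polynomial part."""
    poly = sum(Ai * rho**i for i, Ai in enumerate(A_coeffs, start=1))
    bracket = (A / (1 + w)
               + (rho**(1 + alpha) - A / (1 + w) + 1) ** (-w)
               - 1)
    return poly - bracket / rho**alpha - Pi

rho, A = 2.0, 0.7

# w = Pi = 0, no polynomial part, alpha = 1  ->  eq. (1): p = -A/rho
p1 = p_general(rho, [], A, alpha=1.0, w=0.0, Pi=0.0)
assert math.isclose(p1, -A / rho)

# w = Pi = 0, no polynomial part, general alpha  ->  eq. (2)
alpha = 0.5
p2 = p_general(rho, [], A, alpha=alpha, w=0.0, Pi=0.0)
assert math.isclose(p2, -A / rho**alpha)

# w = Pi = 0, n = 1  ->  eq. (3): p = A1*rho - A/rho**alpha
A1 = 0.3
p3 = p_general(rho, [A1], A, alpha=alpha, w=0.0, Pi=0.0)
assert math.isclose(p3, A1 * rho - A / rho**alpha)
```

The $w=0$ limits are exact because $x^{-0}=1$ for any positive $x$, so the bracket collapses to $A$.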
A more comprehensive equation of state is obtained from string theory by
Pourhassan (2019)
$p=\sum_{i\in R}A_{i}\rho^{i},$ (6)
where $i$ may take positive or negative, integer or non-integer values. The
same procedure was already considered by Ogawa (2000) for the equation of
state (1).
Our analysis is based on this particular equation of state (6). We unify all
of the mentioned CG equations of state using the Nambu-Goto action for a
$d$-brane moving in a $(d+2)$-dimensional space-time. In this regard, we first
consider several CG equations of state and solve the string equation of motion
in order to obtain a general equation of state which generates all of the
above-mentioned equations. The next purpose of this letter is to study the
cosmological consequences of the model. Therefore, we try to obtain the
relation between energy density and scale factor. In this regard, following
the conservation law, we estimate the scale factor of the model. We work at
first order in the energy density and, as expected, we find that the scale
factor decreases with increasing energy density, which supports the
consistency of our cosmological model. In order to explain the accelerating
expansion of the Universe and describe dark matter effects, the equation of
state parameter is also calculated. Furthermore, we derive the sound speed for
this modified cosmological model and show that the barotropic case of the
model is stable. We check the dynamical stability of the model by requiring
the squared sound speed to be positive, and confirm that the viscous modified
Chaplygin fluid model is stable while various generalized Chaplygin fluid
models are unstable. The sound speed analysis also imposes constraints on the
model parameters.
The paper is presented as follows. In Sec. 2, we discuss the scale factor of
the model. The sound speed is studied in section 3. A particular case is
realized in section 4. Finally, we summarize the results with concluding
remarks in the last section.
## 2 Scale factor
In order to find the cosmological implications, we first write the
conservation law for a fluid with energy density $\rho$ and pressure $p$:
$\dot{\rho}+3H(p+\rho)=0,$ (7)
where the Hubble expansion parameter $H$ is defined in terms of the scale
factor $a(t)$ by
$H=\frac{\dot{a}}{a}.$ (8)
The conservation equation (7), with the pressure (6) and the Hubble parameter
(8), takes the following form:
$d\rho+3\frac{d{a}}{a}\left(\rho+\sum{A_{i}\rho^{i}}\right)=0.$ (9)
In order to obtain an analytical solution, we assume the same value $A$ for
all coefficients $A_{i}$, so that the sum becomes a geometric series:
$\sum_{i=-m}^{n}{A_{i}\rho^{i}}=\frac{A}{\rho-1}(\rho^{n+1}-\rho^{-m}).$ (10)
In that case, the expression (9) reduces to
$\ln{\frac{a}{a(0)}}=-\frac{1}{3}\int{\frac{d\rho}{\rho+\frac{A}{\rho-1}(\rho^{n+1}-\rho^{-m})}}.$
(11)
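The geometric-series identity underlying (10) can be verified numerically; a plain-Python sketch (multiplying both sides by the common coefficient $A$ is immediate):

```python
import math

def lhs(rho, m, n):                 # direct sum of powers from -m to n
    return sum(rho**i for i in range(-m, n + 1))

def rhs(rho, m, n):                 # closed form used in (10)
    return (rho**(n + 1) - rho**(-m)) / (rho - 1.0)

for rho in (0.3, 1.7, 4.0):
    for m, n in ((1, 3), (2, 5), (0, 2)):
        assert math.isclose(lhs(rho, m, n), rhs(rho, m, n))
```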
The solution of the above equation for the special case $m=1$ and $n=3$ is
$\ln{\frac{a}{a(0)}}=C_{-}\tan^{-1}\left(\frac{{\mathcal{B}}_{+}}{{\mathcal{A}}_{-}}\right)-C_{+}\tan^{-1}\left(\frac{{\mathcal{B}}_{-}}{{\mathcal{A}}_{+}}\right),$
(12)
where
$\displaystyle{\mathcal{A}}_{\pm}=\sqrt{10A^{2}\pm 2A\sqrt{A(5A-4)}+4A},\qquad{\mathcal{B}}_{\pm}=A(1+4\rho)\pm\sqrt{A(5A-4)},\qquad C_{\pm}=\frac{4}{3{\mathcal{A}}_{\pm}}\sqrt{\frac{A}{5A-4}}.$ (13)
Hence, the scale factor is given by
$a=a(0)\exp\left[C_{-}\tan^{-1}\left(\frac{{\mathcal{B}}_{+}}{{\mathcal{A}}_{-}}\right)-C_{+}\tan^{-1}\left(\frac{{\mathcal{B}}_{-}}{{\mathcal{A}}_{+}}\right)\right].$
(14)
At first order (for small $\rho$ we neglect terms of order
$\mathcal{O}(\rho^{2})$), we obtain the late-time scale factor as
$a=a(0)\exp\left[C_{0}-C_{1}\rho\right],$ (15)
where
$\displaystyle C_{0}=C_{-}\tan^{-1}\left(\frac{A+\sqrt{A(5A-4)}}{\sqrt{10A^{2}-2A\sqrt{A(5A-4)}+4A}}\right)-C_{+}\tan^{-1}\left(\frac{A-\sqrt{A(5A-4)}}{\sqrt{10A^{2}+2A\sqrt{A(5A-4)}+4A}}\right),$
$\displaystyle C_{1}=C_{+}\frac{4A}{\sqrt{10A^{2}+2A\sqrt{A(5A-4)}+4A}\left(1+\frac{(A-\sqrt{A(5A-4)})^{2}}{\sqrt{10A^{2}+2A\sqrt{A(5A-4)}+4A}}\right)}-C_{-}\frac{4A}{\sqrt{10A^{2}-2A\sqrt{A(5A-4)}+4A}\left(1+\frac{(A+\sqrt{A(5A-4)})^{2}}{\sqrt{10A^{2}-2A\sqrt{A(5A-4)}+4A}}\right)}.$ (16)
Figure 1: Typical behavior of the scale factor as a function of $\rho$ for
$a(0)=1$. The case $C_{0}=C_{1}=1$ is shown in blue, $C_{0}=C_{1}=2$ in green,
and $C_{0}=C_{1}=5$ in red.
In order to study the dependence of the scale factor on the energy density, we
plot the scale factor (15) in Fig. 1. From the plot it is obvious that the
scale factor decreases with increasing energy density, as expected. This
establishes the consistency of our cosmological model based on the equation of
state (6). Therefore, we can say that the Universe is filled by a fluid
described by the following equation of state:
$p=\frac{A}{\rho-1}(\rho^{n+1}-\rho^{-m}).$ (17)
The corresponding equation of state parameter is
$\omega=\frac{p}{\rho}=\frac{A}{\rho-1}(\rho^{n}-\rho^{-m-1}).$ (18)
This explains the accelerating expansion of the Universe and also describes
dark matter effects, as can be shown by computing the deceleration parameter
$q=-\frac{a\ddot{a}}{{\dot{a}}^{2}}.$ (19)
Using the solution (14), one obtains a negative deceleration parameter at late
times. However, there is a regime where a deceleration/acceleration phase
transition occurs. Next, we study the sound speed in such a fluid to analyze
the stability of the model.
## 3 Sound speed
In this section, we analyze the sound speed of the fluid in order to discuss
the stability of the model. The sound speed of the cosmic fluid, $C_{s}$, can
be estimated from the relation
$C_{s}^{2}=\frac{dp}{d\rho}.$ (20)
For the equation of state (17), this formula yields
$C_{s}^{2}=A\frac{(n\rho-n-1)\rho^{n}+((m+1)\rho-m)\rho^{-m-1}}{(\rho-1)^{2}}.$
(21)
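Equation (21) can be checked against a central finite difference of (17); a plain-Python sketch with, for concreteness, $A=1$, $m=1$, $n=3$ (names are ours):

```python
import math

A, m, n = 1.0, 1, 3

def p(rho):                       # equation of state (17)
    return A / (rho - 1.0) * (rho**(n + 1) - rho**(-m))

def cs2(rho):                     # closed form (21)
    return A * ((n * rho - n - 1) * rho**n
                + ((m + 1) * rho - m) * rho**(-m - 1)) / (rho - 1.0)**2

for rho in (0.4, 2.0, 5.0):
    h = 1e-6
    numeric = (p(rho + h) - p(rho - h)) / (2 * h)   # central difference dp/drho
    assert math.isclose(numeric, cs2(rho), rel_tol=1e-5)
```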
In a stable model the squared sound speed must be positive, which immediately
requires the constant $A$ to be positive, as already assumed in Eq. (1). The
behavior of the sound speed is shown in Fig. 2; the red solid line corresponds
to the special case $m=1$ and $n=3$ discussed in the previous section. For
several values of positive $m$ and $n$ we find a real sound speed which is an
increasing function of the energy density. However, for negative $m$ or $n$
there are situations where the sound speed is a decreasing function of the
energy density. This helps us to identify dynamically stable models.
Figure 2: Sound speed in terms of energy density for $A=1$.
In order to establish a physically stable and acceptable model, we also need
to study the thermodynamical stability. This can be assessed by analyzing the
adiabatic index,
$\gamma=\frac{C_{p}}{C_{v}},$ (22)
where $C_{p}$ and $C_{v}$ are the specific heats at constant pressure and
constant volume, respectively. For a general fluid, this yields the following
relation at constant entropy:
$\gamma=\left(\frac{\partial\ln{p}}{\partial\ln{\rho}}\right)_{S}.$ (23)
It is estimated that the value of this parameter must be greater than
$\frac{4}{3}$ for a dynamically stable model. In that case, using the equation
of state (17), one obtains
$\gamma=\frac{(n(1-\rho)+1)\rho^{n+1}-((m+1)\rho-m)\rho^{-m}}{(\rho-1)(\rho^{-m}-\rho^{n+1})}.$
(24)
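Since (17) is barotropic, $p$ depends on $\rho$ alone and the entropy constraint in (23) is automatic; the closed form (24) can then be checked against a finite-difference evaluation of $d\ln p/d\ln\rho$. A plain-Python sketch (names and parameter values are ours):

```python
import math

A, m, n = 1.0, 1, 3

def p(rho):                        # equation of state (17)
    return A / (rho - 1.0) * (rho**(n + 1) - rho**(-m))

def gamma(rho):                    # closed form (24)
    num = (n * (1 - rho) + 1) * rho**(n + 1) - ((m + 1) * rho - m) * rho**(-m)
    return num / ((rho - 1.0) * (rho**(-m) - rho**(n + 1)))

for rho in (0.4, 2.0, 5.0):
    h = 1e-6
    # gamma = d ln p / d ln rho = rho * d(ln|p|)/d(rho) for a barotropic fluid
    dlnp = (math.log(abs(p(rho + h))) - math.log(abs(p(rho - h)))) / (2 * h)
    assert math.isclose(rho * dlnp, gamma(rho), rel_tol=1e-5)
```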
In Fig. 3 we plot the adiabatic index as a function of the energy density. For
some cases with positive $m$ and $n$, we find that the condition
$\gamma\geq\frac{4}{3}$ is satisfied at early times, when the energy density
is large. For example, in the case $n=3$ we find $\gamma\approx\frac{8}{3}$ at
early times, decreasing with time. On the other hand, we find a completely
stable model for some positive $n$ and negative $m$ (see the last panel of
Fig. 3).
Figure 3: Adiabatic index in terms of energy density for $A=1$.
Now, consider the early-time regime where $\rho\gg 1$: the first term in the
numerator of (21) dominates, and the model is dynamically stable for $n\geq
0$. On the other hand, at late times, where $\rho\ll 1$, the second term in
the numerator of (21) dominates and the model is stable only for $m\leq 0$.
Hence, the scale factor (15), which was obtained for $n=3$ and $m=1$, leads to
a late-time instability of the model. This motivates us to consider a very
special case in order to construct a suitable cosmological model in the next
section. For the moment, we set $m=-n$ in expression (21), which gives
$C_{s}^{2}=\frac{nA(\rho^{n+2}+\rho^{n}-2\rho^{n+1})}{\rho(\rho-1)^{2}}.$ (25)
The numerator of the above expression factorizes as $\rho^{n}(\rho-1)^{2}$, so
that $C_{s}^{2}=nA\rho^{n-1}$. It is then obvious that the sound speed
vanishes for $n=0$; hence, the requirement of a non-vanishing sound speed is
$n>0$. For positive integer $n$ and $\rho\geq 1$ we find $C_{s}^{2}\geq A$,
with equality for $n=1$.
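As a numerical sanity check, the right-hand side of (25) coincides with $nA\rho^{n-1}$; a plain-Python sketch (names are ours):

```python
import math

A = 0.8   # illustrative positive constant

def cs2_eq25(rho, n):
    """Squared sound speed (25) for m = -n."""
    return n * A * (rho**(n + 2) + rho**n - 2 * rho**(n + 1)) / (rho * (rho - 1.0)**2)

# The numerator is n*A*rho**n*(rho - 1)**2, so the whole expression
# simplifies to n*A*rho**(n-1):
for n in (1, 2, 3):
    for rho in (0.5, 2.0, 7.0):
        assert math.isclose(cs2_eq25(rho, n), n * A * rho**(n - 1))
```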
In that case ($m=-n$ with $n>0$), we find that
$\gamma\approx n,$ (26)
hence, for all cases with $n\geq\frac{4}{3}$ we have a completely stable
model. Therefore, the equation of state of the stable model may be written as
$p=\frac{A}{\rho-1}(\rho^{n+1}-\rho^{n}),$ (27)
with $n=1,2,3,\cdots$, which reduces to the barotropic equation of state
($p=A\rho^{n}$), already studied extensively in the literature.
## 4 Very special case
Now, we keep only three non-zero coefficients, $A_{1}=A$, $A_{-\alpha}=-B$
and $A_{\frac{1}{3}}=-\frac{\xi}{\sqrt{3}}$, where $B$ is a positive constant
and $\xi$ is a constant viscous coefficient (Khadekar et al., 2019). Hence, we
recover the viscous modified Chaplygin gas with the following equation of state:
$p=A\rho-\frac{B}{\rho^{\alpha}}-\frac{\xi}{\sqrt{3}}\rho^{\frac{1}{3}},$ (28)
where $\Pi=\xi H=\frac{\xi}{\sqrt{3}}\rho^{\frac{1}{3}}$ has been used. In the
case of $\alpha=\frac{1}{3}$, and setting $X\equiv\frac{1}{\rho^{\frac{2}{3}}}$,
one obtains the sound speed
$C_{s}^{2}=\frac{B}{3}X^{2}-\frac{\xi\sqrt{3}}{9}X+A.$ (29)
Here, we observe that $C_{s}^{2}>0$ if
$\displaystyle A\geq\frac{\xi^{2}}{36B}.$ (30)
The above relation is a required condition for stability of the model.
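Condition (30) follows because (29) is an upward-opening quadratic in $X$ whose minimum value is $A-\xi^{2}/(36B)$; a plain-Python sketch (parameter values are illustrative, names are ours):

```python
import math

def cs2(X, A, B, xi):
    """Squared sound speed (29) as a function of X = rho**(-2/3)."""
    return B / 3.0 * X**2 - xi * math.sqrt(3.0) / 9.0 * X + A

# The quadratic opens upward (B > 0) and attains its minimum at
# X* = xi*sqrt(3)/(6B), where its value is A - xi**2/(36B):
# positivity of this minimum is exactly condition (30).
A, B, xi = 0.05, 1.0, 0.5
Xstar = xi * math.sqrt(3.0) / (6.0 * B)
assert math.isclose(cs2(Xstar, A, B, xi), A - xi**2 / (36.0 * B))
assert cs2(Xstar, A, B, xi) > 0.0   # here A > xi**2/(36B), so stable
```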
In order to study the stability in the general case, we plot the squared sound
speed (29) as a function of $\rho$ in Fig. 4. We observe that the model is
completely stable for all parameter values in the ranges $0\leq A\leq 2$,
$0\leq\xi\leq 1$, $0\leq\alpha\leq 1$ and $0\leq B\leq 2$.
Figure 4: Square sound speed in terms of energy density for $A=B=1$ and
$\xi=0.5$ by variation of $\alpha$.
In the case of $\alpha=0$, corresponding to a viscous barotropic fluid, the
sound speed is an increasing function of the energy density and hence a
decreasing function of time. For the other cases, with $\alpha>0$, the sound
speed is an increasing function of time (decreasing with the energy density)
and diverges at late times. In all cases the sound speed approaches a constant
at early times. As expected, the viscous coefficient decreases the value of
the sound speed. Another important result is seen for the case $A=0$, which
yields a negative $C_{s}^{2}$ at early times (illustrated in Fig. 5). This
means that various versions of the generalized Chaplygin gas may be unstable
at early times.
Figure 5: Square sound speed in terms of energy density for $\alpha=0.5$,
$B=1$ and $\xi=0.5$ by variation of $A$.
In Fig. 5, we plot the squared sound speed of the GCG models to show that the
model yields an imaginary sound speed at early times.
## 5 Discussions and conclusions
We have considered a model for the CG which is inspired by string theory. This
model unifies dark energy and dark matter in order to describe the
accelerating expansion of the Universe, in agreement with recent observational
data. First, starting from the conservation law, we have obtained the scale
factor of the model, working at first order in the energy density. As
expected, we have found that the scale factor decreases with increasing energy
density, which establishes the consistency of our cosmological model. In order
to explain the accelerating expansion of the Universe and describe dark matter
effects, we have also calculated the equation of state parameter.
Furthermore, we have studied the sound speed in this modified extended
Chaplygin gas model and found that the barotropic-like case of the model is
stable. We studied the dynamical stability of the model by analyzing the sound
speed, using the fact that the squared sound speed must be positive. We also
studied the thermodynamical stability of the model by analyzing the adiabatic
index, and found that for all cases with $m=-n$ and $n\geq\frac{4}{3}$ the
model is completely stable. We found that the viscous modified Chaplygin gas
is stable, while various versions of the generalized Chaplygin gas are
unstable. By means of the sound speed analysis we constrained the model
parameters, finding in particular the limit $C_{s}^{2}\geq A$ on the parameter
$A$. We still lack a general solution of equation (11), which is the goal of
our future investigations towards a more comprehensive cosmological model.
## References
* [1] D. Bazeia, ”Galileo invariant system and the motion of relativistic d-branes”, Phys. Rev. D 59 (1999) 085007
* [2] M. C. Bento, O. Bertolami, and A. A. Sen, ”Generalized Chaplygin Gas, Accelerated Expansion and Dark Energy-Matter Unification”, Phys. Rev. D 66 (2002) 043507
* [3] M.C. Bento, O. Bertolami, A.A. Sen, ”WMAP constraints on the generalized Chaplygin gas model”, Phys. Lett. B 575 (2003) 172
* [4] N. Bilic, G.B. Tupper, and R.D. Viollier, ”Unification of dark matter and dark energy: the inhomogeneous Chaplygin gas”, Phys. Lett. B 535 (2002) 17
* [5] S. Chaplygin, ”On gas jets”, Sci. Mem. Moscow Univ. Math. Phys. 21 (1904) 1
* [6] R. Chaubey, A. K. Shukla, ”A new class of Bianchi cosmological models in f(R, T) gravity”, Astrophys Space Sci. 343 (2013) 415
* [7] U. Debnath, A. Banerjee, and S. Chakraborty, ”Role of modified Chaplygin gas in accelerated universe”, Class. Quant. Grav. 21 (2004) 5609
* [8] C. Deffayet, G. Dvali, G. Gabadadze, ”Accelerated universe from gravity leaking to extra dimensions”, Phys. Rev. D 65 (2002) 044023
* [9] P. F. Gonzalez-Diaz, ”You need not be afraid of phantom energy”, Phys. Rev. D 68 (2003) 021303(R)
* [10] Z-K. Guo, Y-Z. Zhang, ”Cosmology with a Variable Chaplygin Gas”, Phys. Lett. B645 (2007) 326
* [11] E.O. Kahya, B. Pourhassan, S. Uraz, ”Constructing an Inflaton Potential by Mimicking Modified Chaplygin Gas”, Phys. Rev. D92 (2015) 103511
* [12] E.O Kahya, B. Pourhassan, ”The universe dominated by the extended Chaplygin gas”, Modern Phys. Lett. A 30 (2015) 1550070
* [13] A.Y. Kamenshchik, U. Moschella, and V. Pasquier, ”An alternative to quintessence”, Phys. Lett. B 511 (2001) 265
* [14] A. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. B 487 (2000) 7
* [15] G. S. Khadekar, P. Kumar, S. Islam, ”Modified Chaplygin gas with bulk viscous cosmology in FRW (2+1)-dimensional spacetime”, Journal of Astrophysics and Astronomy 40 (2019) 40
* [16] M. Makler, et al., ”Constraints on the generalized Chaplygin gas from supernovae observations”, Phys. Lett. B. 555 (2003) 1
* [17] T. Matos and A. Urena-Lopez, ”Quintessence and Scalar Dark Matter in the Universe”, Class. Quantum Grav. 17 (2000) L75
* [18] N. Ogawa, ”A Note on Classical Solution of Chaplygin-gas as D-brane”, Phys. Rev. D62 (2000) 085023
* [19] S. Perlmutter et al., ”Measurements of $\Omega$ and $\Lambda$ from 42 High-Redshift Supernovae”, Astrophys. J. 517 (1999) 565
* [20] B. Pourhassan, ”Viscous Modified Cosmic Chaplygin Gas Cosmology”, Int. J. Mod. Phys. D 22 (2013) 1350061
* [21] B. Pourhassan, E.O. Kahya, 2014a ”FRW cosmology with the extended Chaplygin gas”, Adv. High Energy Phys. 2014 (2014) 231452
* [22] B. Pourhassan, E.O. Kahya, 2014b, ”Extended Chaplygin gas model”, Res. Phys. 4 (2014) 101
* [23] B. Pourhassan, ”Developed Chaplygin gas equation of state and its relation with the string theory”, Journal of Research on Many-body Systems 9 (2) (2019) 31
* [24] C.S.J. Pun et al., ”Viscous dissipative Chaplygin gas dominated homogenous and isotropic cosmological models”, Phys. Rev. D77 (2008) 063528
* [25] A.G. Riess et al., ”Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant”, Astron. J. 116 (1998) 1009
* [26] H. Saadat and B. Pourhassan, ”FRW bulk viscous cosmology with modified Chaplygin gas in flat space”, Astrophys Space Sci. 343 (2013) 783
* [27] M. Salti, O. Aydogdu, H. Yanar, K. Sogut, ”Variable generalized Chaplygin gas in a 5D cosmology”, Annals of Physics 390 (2018) 131
* [28] M. Salti, O. Aydogdu, A. Tas, K. Sogut, E. E. Kangal, ”Variable Chaplygin gas in Kaluza Klein framework” , Can. J. Phys. 97 (2019) 117
* [29] H. Sandvik, et al., ”The end of unified dark matter?”, Phys. Rev. D 69 (2004) 123524
* [30] Z. H. Zhu, ”Generalized Chaplygin gas as a unified scenario of dark matter/energy: Observational constraints”, Astron. Astrophys. 423 (2004) 421
# Asymmetric Tobit analysis for correlation estimation from censored data
HongYuan Cao1 and Tsuyoshi Kato1,2
1 Faculty of Science and Technology, Gunma University,
Tenjin-cho 1-5-1, Kiryu, Gunma 376-8515, Japan.
2 Integrated Institute for Regulatory Science, Waseda University,
513 Wasedatsurumakicho, Shinjuku, Tokyo, 162-0041, Japan
Abstract: Contamination of water resources with pathogenic microorganisms
excreted in human feces is a worldwide public health concern. Surveillance of
fecal contamination is commonly performed by routine monitoring for a single
type or a few types of microorganism(s). To design a feasible routine for
periodic monitoring and to control risks of exposure to pathogens, reliable
statistical algorithms for inferring correlations between concentrations of
microorganisms in water need to be established. Moreover, because pathogens
are often present in low concentrations, some contaminations are likely to be
under a detection limit. This yields a pairwise left-censored dataset and
complicates computation of correlation coefficients. Errors of correlation
estimation can be smaller if undetected values are imputed better. To obtain
better imputations, we utilize side information and develop a new technique,
the _asymmetric Tobit model_ which is an extension of the Tobit model so that
domain knowledge can be exploited effectively when fitting the model to a
censored dataset. The empirical results demonstrate that imputation with
domain knowledge is effective for this task.
Keywords: Censored data, Tobit analysis, asymmetric normal distribution, EM
algorithm, and non-negative least square.
## 1 Introduction
Contamination of water resources with pathogenic microorganisms excreted in
human feces is a public health concern worldwide. Contamination of water with
several types of pathogenic microorganisms, such as bacteria and viruses,
causes diseases in humans. Well-known harmful enteric bacteria include
Salmonella, Shigella, and Escherichia coli (E. coli) O157:H7, while
enterovirus, norovirus, and rotavirus are common pathogenic viruses. Oral
ingestion is the primary transmission route of enteric illnesses (See Figure
1). Numerous enteric pathogens remaining in treated wastewater contaminate the
environment when they are returned to seawater, rivers, lakes and groundwater
[5, 22]. Pathogens in seawater condense in shellfish, leading to enteric
illnesses transmitted by consumption of raw or undercooked shellfish grown in
sewage-polluted seawater [6]. The microbial quality of groundwater tends to be
relatively stable due to filtration through layers of soil, although it was
reported that in the United States, approximately half of waterborne disease
outbreaks are associated with polluted groundwater [15]. Outbreaks associated
with untreated recreational waters in rivers, lakes, and ocean often occur
owing to fecal contamination. Adequate assessment of microbial water quality
is required in order to control public health risks related to exposure to
pathogenic microorganisms.
It is almost impossible to include all pathogens in periodic routine
monitoring by checking the contamination level of each pathogenic
microorganism. Current measurement technologies consume considerable expense
and labor for many pathogens, making routine monitoring of such pathogens
prohibitive. A more feasible approach to controlling public health risk from
waterborne pathogens is to routinely test for only a few selected types of
pathogens. The common targets of routine monitoring of water quality are
harmless indicator microorganisms and physicochemical water qualities.
Commonly used indicators are total coliforms, fecal coliforms, enterococci,
and F-specific bacteriophage [3, 17, 14, 23, 19, 7]. Physicochemical water
quality measurements include pH (potential of hydrogen), BOD (biochemical
oxygen demand), COD(chemical oxygen demand), SS (suspended solids), DO
(dissolved oxygen), TN (total nitrogen), and TP (total phosphorus) [10].
However, concentrations of these indicators and physicochemical data may not
necessarily be correlated strongly with the presence of pathogenic
microorganisms and may not suffice to assess waterborne infectious risk.
Meanwhile, with continuous efforts made by many researchers in the water
engineering field, new detection technologies for pathogenic microorganisms in
water are being developed [20, 21]. Establishment of statistical techniques
for analyzing pathogenic measurement data [11, 12, 8, 9] is expected to enable
future advancements in the design of routine monitoring approaches for
pathogen detection.
_Pearson correlation coefficient_ (PCC) is a standard measure in the water
engineering field for evaluating the relationship between concentrations of
two microorganisms [24]. The microorganism concentrations that have higher
correlation coefficient with concentrations of another target microorganism
are more effective in predicting concentrations of the target microorganism.
Computation of the PCC for indicator–pathogen pairs tends to be a challenge in
attempting to measure the relationship between concentrations of two
pathogenic microorganisms in water. The difficulty is caused due to the
existence of detection limits, which are not included in the standard setting
of statistical analysis. Many pathogens are present in low concentrations. To
detect the few individuals of such a pathogen, a large volume of water must be
sampled, which burdens procedures for periodic routine monitoring with a heavy
workload. For monitoring based on realistic volume sampling, samples of
pathogen concentrations are usually left-censored data [11, 12]. A naïve
approach to estimation of PCC between such data is to discard undetected data
and to compute PCC only from data pairs in which both pathogens were detected.
However, this approach suffers from a severe disadvantage in that commonly
detected data amounts to be too low to infer correlation. Thus, reliable
algorithms for inferring PCC from censored data need to be established to
ensure safe and sustainable water resources for human societies on Earth.
In this study, we investigated the performance of several methods for
inferring PCC between censored concentration data of two microorganisms in
water. We examined a more sophisticated approach than the aforementioned naïve
method, exploiting side information to impute undetected concentrations before
computing PCC. We fitted a Tobit model [1] to the censored data and imputed
the undetected data with expected values based on the model. Then, more
complete data can be used to infer PCC. The estimation accuracy of this
approach depends on the imputation accuracy. To improve the imputation
accuracy, we consider exploitation of domain knowledge. For water quality
data, the signs of the correlations between any pair of two variates are known
in advance. A third approach utilizes this knowledge by introducing the
_asymmetric normal distribution_ [13] as the prior for the regression
coefficients of the Tobit model.
Another technical contribution of this study is the discovery of an efficient
algorithm for fitting the Tobit model with the asymmetric normal prior. An
expectation-maximization (EM) algorithm can be used for fitting the classical
Tobit model. Each iteration of the EM algorithm consists of an E-step and an
M-step. If the prior of the regression coefficients is the ordinary normal
distribution, an M-step can be performed by simply solving a linear system. In
general, M-steps tend to be challenging if the prior is changed. In this
study, we found that M-steps can still be performed efficiently even if the
asymmetric normal distribution is adopted as the prior of the regression
coefficients.
This paper is organized as follows. The next section provides a review of
three fundamental tools as preliminaries to the later sections: the PCC, a
Tobit model, and the nonnegative least square. In Section 3, we introduce
three approaches for correlation analysis: a naïve approach, a classical Tobit
approach, and an asymmetric Tobit approach. In Section 4, we present a new
algorithm for fitting the asymmetric Tobit model to censored data. In Section
5, simulation results are reported. The final section summarizes and concludes
the contributions of this study.
Figure 1: Water resources and uses. Fecal contamination in water resources leads to microbial risk of exposure to waterborne pathogens through various water uses including drinking, recreation, agriculture, and industry.
Figure 2: Three approaches for correlation analysis. The targets to be
analyzed are censored. (a) Naïve approach computes the correlation only from
commonly available entries. (b) Classical Tobit approach imputes the missing
entries using side information before correlation computation. (c) Asymmetric
Tobit approach exploits domain knowledge to improve the imputations.
## 2 Preliminaries
### 2.1 Pearson correlation coefficient
PCC is a statistic for paired data:
$(y_{1,\text{a}},y_{1,\text{b}}),\dots,(y_{n,\text{a}},y_{n,\text{b}})\in{\mathbb{R}}\times{\mathbb{R}}$.
The definition of PCC is given by
$\displaystyle
R({\bm{y}}_{\text{a}},{\bm{y}}_{\text{b}}):=\frac{\sum_{i=1}^{n}(y_{i,\text{a}}-\bar{y}_{\text{a}})(y_{i,\text{b}}-\bar{y}_{\text{b}})}{\sqrt{\sum_{i=1}^{n}(y_{i,\text{a}}-\bar{y}_{\text{a}})^{2}}\sqrt{\sum_{i=1}^{n}(y_{i,\text{b}}-\bar{y}_{\text{b}})^{2}}}$
(1)
where
${\bm{y}}_{\text{a}}:=\left[y_{1,\text{a}},\dots,y_{n,\text{a}}\right]^{\top}$,
${\bm{y}}_{\text{b}}:=\left[y_{1,\text{b}},\dots,y_{n,\text{b}}\right]^{\top}$,
$\displaystyle\bar{y}_{\text{a}}:=\frac{1}{n}\sum_{i=1}^{n}y_{i,\text{a}}\;\;\text{and}\;\bar{y}_{\text{b}}:=\frac{1}{n}\sum_{i=1}^{n}y_{i,\text{b}}.$
(2)
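For reference, definition (1) can be implemented directly; a plain-Python sketch (no external libraries assumed; names are ours):

```python
import math

def pcc(ya, yb):
    """Pearson correlation coefficient, equation (1)."""
    n = len(ya)
    ma = sum(ya) / n
    mb = sum(yb) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(ya, yb))
    sa = math.sqrt(sum((a - ma) ** 2 for a in ya))
    sb = math.sqrt(sum((b - mb) ** 2 for b in yb))
    return cov / (sa * sb)

ya = [1.0, 2.0, 3.0, 4.0]
yb = [2.0, 4.0, 6.0, 8.0]       # perfectly linear relation -> R = 1
assert math.isclose(pcc(ya, yb), 1.0)
assert math.isclose(pcc(ya, [-y for y in yb]), -1.0)
```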
### 2.2 Tobit analysis
Tobit analysis [1] is a regression method for censored data. In Tobit
analysis, a target variable $y\in{\mathbb{R}}$ (the concentration of a
microorganism, in this study) is assumed to be drawn from the following
generative model:
$\displaystyle y=\left<{\bm{w}},\bm{x}\right>+\epsilon$ (3)
where $\epsilon$ is a normal noise, $\epsilon\sim{\mathcal{N}}(0,\beta^{-1})$,
the vector $\bm{x}\in{\mathbb{R}}^{d}$ contains explanatory variables
(including physicochemical data and possibly concentration data of another
microorganism), and ${\bm{w}}\in{\mathbb{R}}^{d}$ is a regression coefficient
vector. This is largely the same as the setting of the least square
estimation; however, one important difference is that Tobit analysis allows
censoring in sample data. In a case where a concentration $y$ is undetected
with detection limit $\theta$, the expected concentration is given by
$\displaystyle{\mathbb{E}}[y|y<\theta,\bm{x}]=\left<{\bm{w}},\bm{x}\right>-\beta^{-1/2}\lambda_{\text{IMR}}((\theta-\left<{\bm{w}},\bm{x}\right>)\sqrt{\beta}).$
(4)
Herein, $\lambda_{\text{IMR}}(\xi)=\phi(\xi)/\Phi(\xi)$ is the _inverse Mills
ratio_ where $\phi$ and $\Phi$ are the standard normal density function and
its cumulative density function, respectively. Equation (4) is derived from
the fact that under the condition $y<\theta$, $y$ follows the truncated normal
distribution with the truncation of upper tail:
$\displaystyle
p(y|y<\theta,\bm{x})=f_{\text{tn}}(y\,|\,\left<{\bm{w}},\bm{x}\right>,\beta,\theta)$
(5)
where
$\displaystyle
f_{\text{tn}}(y\,|\,\mu,\beta,\theta):=\begin{cases}\frac{\sqrt{\beta}\phi(\sqrt{\beta}(y-\mu))}{\Phi(\sqrt{\beta}(\theta-\mu))}&\text{for
}y\in(-\infty,\theta),\\\ 0&\text{for }y\in[\theta,+\infty).\end{cases}$ (6)
The second moment can also be expressed in closed form: writing
$\xi:=(\theta-\left<{\bm{w}},\bm{x}\right>)\sqrt{\beta}$,
${\mathbb{E}}[y^{2}|y<\theta,\bm{x}]=\frac{1-\xi\lambda_{\text{IMR}}(\xi)}{\beta}\\\
+\left<{\bm{w}},\bm{x}\right>^{2}-\frac{2\lambda_{\text{IMR}}(\xi)\left<{\bm{w}},\bm{x}\right>}{\sqrt{\beta}}.$
(7)
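The conditional moments (4) and (7) can be validated by Monte Carlo simulation of the truncated normal distribution (5); a plain-Python sketch with illustrative parameter values (names are ours):

```python
import math, random

def phi(x):  return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
def Phi(x):  return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
def imr(x):  return phi(x) / Phi(x)      # inverse Mills ratio

mu, beta, theta = 1.0, 4.0, 1.5          # illustrative values; sd = 0.5
xi = (theta - mu) * math.sqrt(beta)

m1 = mu - imr(xi) / math.sqrt(beta)                                          # eq. (4)
m2 = (1 - xi * imr(xi)) / beta + mu**2 - 2 * imr(xi) * mu / math.sqrt(beta)  # eq. (7)

random.seed(0)
samples = []
while len(samples) < 100000:             # rejection sampling of y | y < theta
    y = random.gauss(mu, beta ** -0.5)
    if y < theta:
        samples.append(y)

assert abs(sum(samples) / len(samples) - m1) < 1e-2
assert abs(sum(s * s for s in samples) / len(samples) - m2) < 1e-2
```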
The values of the model parameters ${\bm{w}}$ and $\beta$ are determined by
fitting the model to a censored dataset
$(\bm{x}_{i},y_{i})\in{\mathbb{R}}^{d}\times{\mathbb{R}}$ for $i=1,\dots,n$ in
which $y_{1},\dots,y_{n_{\text{v}}}$ are observed, whereas
$y_{n_{\text{v}}+1},\dots,y_{n}$ are not observed due to the detection limit
$\theta$. Fitting to the dataset is performed by maximizing the following
regularized log-likelihood function.
$\displaystyle L_{\text{sym}}({\bm{w}},\beta):=\log
p_{\text{sym}}({\bm{w}})+L_{0}({\bm{w}},\beta),$ (8)
where $p_{\text{sym}}({\bm{w}})$ is the normal prior of the regression
coefficients ${\bm{w}}$:
$\displaystyle
p_{\text{sym}}({\bm{w}})={\mathcal{N}}({\bm{w}}\,;\,{\bm{0}},\lambda^{-1}{\bm{I}}).$
(9)
The second term in (8), $L_{0}({\bm{w}},\beta)$, is the Tobit log-likelihood
function:
$L_{0}({\bm{w}},\beta):=\frac{n_{\text{v}}}{2}\log\beta+\sum_{i=1}^{n_{\text{v}}}\log\phi\left(\sqrt{\beta}(y_{i}-\left<{\bm{w}},\bm{x}_{i}\right>)\right)\\\
+\sum_{i=n_{\text{v}}+1}^{n}\log\Phi\left(\sqrt{\beta}(\theta-\left<{\bm{w}},\bm{x}_{i}\right>)\right).$
(10)
The EM algorithm is a standard method for maximization of $L_{\text{sym}}$.
The details of this method can be found in a paper by Amemiya [1].
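As an illustration of the EM iteration (a sketch in plain Python, not the exact implementation of [1]), the following fits a one-dimensional Tobit model: the E-step replaces each censored target by its conditional moments (4) and (7), and the M-step solves a penalized least-squares problem for the coefficient and updates the noise precision $\beta$. All names and the synthetic data are ours:

```python
import math, random

def phi(x):  return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
def Phi(x):  return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def imr(x):
    """Inverse Mills ratio, with the asymptotic tail -x as a safeguard."""
    p = Phi(x)
    return phi(x) / p if p > 1e-12 else -x

def tobit_em_1d(x, y, censored, theta, lam=1e-3, n_iter=50):
    """EM sketch for a 1-D Tobit model y = w*x + eps, eps ~ N(0, 1/beta),
    with a normal prior of precision lam on w.  censored[i] is True when
    y[i] fell below the detection limit theta."""
    n = len(x)
    w, beta = 0.0, 1.0
    for _ in range(n_iter):
        s = math.sqrt(beta)
        m1, m2 = [], []
        # E-step: conditional first and second moments, eqs. (4) and (7)
        for a, yi, c in zip(x, y, censored):
            mu = w * a
            if not c:
                m1.append(yi); m2.append(yi * yi)
            else:
                z = (theta - mu) * s
                e1 = mu - imr(z) / s
                e2 = (1 - z * imr(z)) / beta + mu * mu - 2 * imr(z) * mu / s
                m1.append(e1); m2.append(e2)
        # M-step: penalized least squares for w, then the noise precision
        w = sum(a * e1 for a, e1 in zip(x, m1)) / (sum(a * a for a in x) + lam / beta)
        mse = sum(e2 - 2 * w * a * e1 + (w * a) ** 2
                  for a, e1, e2 in zip(x, m1, m2)) / n
        beta = 1.0 / mse
    return w, beta

# Synthetic left-censored data: true w = 1.5, noise sd 0.3, limit theta = 1
random.seed(1)
x = [random.uniform(0.0, 2.0) for _ in range(2000)]
y_full = [1.5 * a + random.gauss(0.0, 0.3) for a in x]
theta = 1.0
censored = [yi < theta for yi in y_full]
y_obs = [theta if c else yi for yi, c in zip(y_full, censored)]
w_hat, beta_hat = tobit_em_1d(x, y_obs, censored, theta)
assert abs(w_hat - 1.5) < 0.15
```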
Figure 3: Priors of regression coefficients for asymmetric Tobit model. In the
three panels, the densities $p(w_{h})$ are plotted against a regression
coefficient $w_{h}$. (a) The prior is reduced to the symmetric normal
distribution when $\lambda^{\text{p}}_{h}=1$ and $\lambda^{\text{n}}_{h}=1$.
(b) When $\lambda^{\text{p}}_{h}\gg\lambda^{\text{n}}_{h}$, positive
regression coefficients are strongly penalized. (c) When
$\lambda^{\text{p}}_{h}\ll\lambda^{\text{n}}_{h}$, negative coefficients are
likely to be avoided.
### 2.3 Nonnegative least square
The nonnegative least square problem is a quadratic programming problem
defined as
$\displaystyle\min_{\bm{x}\in{\mathbb{R}}_{+}^{m}}\lVert{\bm{A}}^{\top}\bm{x}-{\bm{b}}\rVert,$ (11)
where ${\bm{A}}\in{\mathbb{R}}^{m\times n}$, ${\bm{b}}\in{\mathbb{R}}^{n}$, and
${\mathbb{R}}_{+}^{m}$ denotes the nonnegative orthant.
This problem is denoted by $\text{NNLS}({\bm{A}},{\bm{b}})$ hereinafter. A
popular method for solving it is the active set algorithm presented in Lawson
and Hanson's book [16]. Many improvements have since been developed, and NNLS
is now known to be an efficiently solvable convex problem [4, 18, 2].
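For illustration, $\text{NNLS}({\bm{A}},{\bm{b}})$ in the paper's transposed convention (11) can be solved with `scipy.optimize.nnls`, which minimizes $\lVert M\bm{x}-{\bm{b}}\rVert$ over $\bm{x}\geq 0$, so we pass $M={\bm{A}}^{\top}$; the data below are random stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 6))      # A in R^{m x n}, here m = 3, n = 6
b = rng.normal(size=6)

# Paper's convention minimizes ||A^T x - b|| over x >= 0,
# so scipy's nnls (which minimizes ||M x - b||) receives M = A^T.
x, resid = nnls(A.T, b)
assert np.all(x >= 0)            # feasibility of the solution
print(x, resid)
```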
## 3 Correlation analysis methods
In this study, we consider three approaches for correlation analysis: a naïve
approach, a classical Tobit approach, and an asymmetric Tobit approach. The
three approaches are summarized in Figure 2. The details are described below.
Naïve approach: Assume that a dataset contains $n$ data pairs
$\displaystyle(y_{1,\text{a}},y_{1,\text{b}}),\dots,(y_{n,\text{a}},y_{n,\text{b}})$
(12)
representing concentrations of two microorganisms that may be left-censored.
Let $\theta_{\text{a}}$ and $\theta_{\text{b}}$ be the detection limits of the
two microorganisms, respectively. An entry $y_{i,\text{a}}$ is unobserved
whenever $y_{i,\text{a}}<\theta_{\text{a}}$, and likewise $y_{i,\text{b}}$ is
unobserved whenever $y_{i,\text{b}}<\theta_{\text{b}}$. We use the index sets of visible entries
$\displaystyle{\mathcal{I}}_{\text{v,a}}:=\left\\{i\in[n]\,\middle|\,y_{i,\text{a}}\geq\theta_{\text{a}}\right\\}\quad\text{and}$
(13)
$\displaystyle{\mathcal{I}}_{\text{v,b}}:=\left\\{i\in[n]\,\middle|\,y_{i,\text{b}}\geq\theta_{\text{b}}\right\\}.$
The naïve method computes the PCC only from the commonly visible pairs (i.e.
$(y_{i,\text{a}},y_{i,\text{b}})$ for
$i\in{\mathcal{I}}_{\text{vv}}:={\mathcal{I}}_{\text{v,a}}\cap{\mathcal{I}}_{\text{v,b}}$).
Namely, the PCC is computed as
$\displaystyle
R_{\text{na\"{i}ve}}:=R({\bm{y}}_{\text{vv,a}},{\bm{y}}_{\text{vv,b}})$ (14)
where
$\displaystyle{\bm{y}}_{\text{vv,a}}:=\left[y_{i,\text{a}}\right]_{i\in{\mathcal{I}}_{\text{vv}}},\quad{\bm{y}}_{\text{vv,b}}:=\left[y_{i,\text{b}}\right]_{i\in{\mathcal{I}}_{\text{vv}}}.$
(15)
A shortcoming of this approach is that the cardinality of the commonly visible
set ${\mathcal{I}}_{\text{vv}}$ tends to be small, yielding a large estimation
error.
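A minimal sketch (ours) of the naïve estimator (14)-(15), assuming censored entries are encoded as NaN; the helper name `naive_pcc` is our own.

```python
import numpy as np

def naive_pcc(y_a, y_b):
    """PCC from commonly visible pairs only, as in (14)-(15).
    Censored (undetected) entries are encoded as NaN."""
    vis = ~np.isnan(y_a) & ~np.isnan(y_b)        # the jointly visible set I_vv
    return np.corrcoef(y_a[vis], y_b[vis])[0, 1]

# Toy data: NaN marks an entry below its detection limit
y_a = np.array([2.0, np.nan, 3.0, 4.0, np.nan])
y_b = np.array([1.0, 2.0, np.nan, 3.5, np.nan])
print(naive_pcc(y_a, y_b))   # only the pairs at indices 0 and 3 are used
```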
Classical Tobit approach: We now consider another approach to correlation
analysis utilizing undetected entries of the concentrations of two
microorganisms A and B. Here, it is assumed that other physicochemical
observations are available as side information. Typical physicochemical data
such as water temperature, DO, SS, TN, and TP are more easily measured
compared to microorganism concentrations. The approach being discussed here
imputes undetected concentrations of the microorganism B, and then imputes the
undetected concentrations of the microorganism A using the side information
and B’s completed concentrations. Tobit analysis is used for imputation of
undetected concentrations. This method is referred to as the _classical Tobit
approach_.
After the above procedure, the concentration data of both microorganisms are
complete. PCC can be computed from the completed vectors as
$\displaystyle
R_{\text{sym}}:=R(\hat{{\bm{y}}}_{\text{a}},\hat{{\bm{y}}}_{\text{b}}),$ (16)
where the completed vectors are denoted by
$\hat{{\bm{y}}}_{\text{a}}:=\left[y_{i,\text{a}}\right]_{i\in[n]}$ and
$\hat{{\bm{y}}}_{\text{b}}:=\left[y_{i,\text{b}}\right]_{i\in[n]}$,
respectively. PCC is expected to be estimated well if the imputations of
undetected entries are accurate.
Asymmetric Tobit approach: The third approach exploits domain information to
improve the Tobit analysis, and consequently, the PCC estimation. In water
quality engineering, it is known whether typical physicochemical data are
positively correlated to each of several typical pathogens. For example, more
pathogens tend to survive in warmer water, leading to positive correlation
between pathogen concentration and water temperature. It can be assumed that
all correlated explanatory variables have positive correlations to a target
variable without loss of generality, because negatively correlated explanatory
variables are negated in advance by preprocessing. For positively correlated
explanatory variables, positive regression coefficients are preferred.
However, in highly censored datasets, often only a few visible observations
are available. In such a case, positively correlated explanatory variables may
often have a negative sample correlation in small samples, which decreases the
effectiveness of the Tobit model. The third approach to correlation analysis
uses a modification of the Tobit model, introduced below, to impute undetected
concentrations. We denote the resultant PCC by $R_{\text{asym}}$.
In the rest of this section, our proposed modification of the Tobit model is
described. This modified Tobit model is called the _asymmetric Tobit model_ ,
and the correlation analysis approach using the new Tobit model is called the
_asymmetric Tobit approach_ hereinafter. The asymmetric Tobit model penalizes
negative coefficients. To this end, the ordinary normal prior in (8) is replaced
by the asymmetric normal distribution [13] (see Figure 3) as follows:
$\displaystyle
p_{\text{asym}}({\bm{w}}):=\prod_{h=1}^{d}\frac{1}{Z_{h}}\exp\left(-\frac{\lambda^{\text{p}}_{h}(w_{h})_{+}^{2}+\lambda^{\text{n}}_{h}(-w_{h})_{+}^{2}}{2}\right)$
(17)
where $(x)_{+}:=\max(0,x)$ and
$\displaystyle
Z_{h}:=\sqrt{\frac{\pi}{2\lambda^{\text{p}}_{h}}}+\sqrt{\frac{\pi}{2\lambda^{\text{n}}_{h}}}.$
(18)
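As a quick numerical check (ours, not the paper's), the normalization constant (18) matches the integral of the unnormalized density in (17); the values of $\lambda^{\text{p}}_{h}$ and $\lambda^{\text{n}}_{h}$ below are illustrative.

```python
import numpy as np
from scipy.integrate import quad

lam_p, lam_n = 100.0, 1.0    # illustrative penalties (lambda^p_h, lambda^n_h)

def unnorm(w):
    # Unnormalized asymmetric normal density of (17)
    return np.exp(-(lam_p * max(w, 0.0)**2 + lam_n * max(-w, 0.0)**2) / 2.0)

Z = np.sqrt(np.pi / (2.0 * lam_p)) + np.sqrt(np.pi / (2.0 * lam_n))   # (18)
# Integrate each half-line separately to avoid the kink at w = 0
Z_num = quad(unnorm, -np.inf, 0.0)[0] + quad(unnorm, 0.0, np.inf)[0]
assert np.isclose(Z, Z_num)
print(Z)
```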
Let ${\mathcal{I}}_{\text{p}}\subseteq[d]$ be the index set of explanatory
variables positively correlated with the target variable, and let
${\mathcal{I}}_{\text{n}}$ be the index set of negatively correlated ones. In
our simulations described later, the constant vectors
${\bm{\lambda}}^{\text{p}},{\bm{\lambda}}^{\text{n}}\in{\mathbb{R}}^{d}$ are
set to
$\lambda^{\text{p}}_{h}=(1+99\mathds{1}[h\in{\mathcal{I}}_{\text{n}}])\lambda$
and
$\lambda^{\text{n}}_{h}=(1+99\mathds{1}[h\in{\mathcal{I}}_{\text{p}}])\lambda$
for $h\in[d]$. The new regularized log-likelihood function is expressed as
$\displaystyle L_{\text{asym}}({\bm{w}},\beta):=\log
p_{\text{asym}}({\bm{w}})+L_{0}({\bm{w}},\beta).$ (19)
The new Tobit model is fitted to censored data by maximizing the new
regularized log-likelihood function (19). In the next section, our approach to
maximizing the new objective function (19) is described.
## 4 Fitting asymmetric Tobit model
In this study, we propose a new algorithm for fitting the asymmetric Tobit
model. To find the maximizer of the regularized log-likelihood function (19),
we adopted the expectation-maximization (EM) algorithm. Modification of the
prior often gives rise to some technical difficulties. In this section, we
show that each iteration of the EM algorithm can be performed efficiently even
if the prior is changed from the ordinary normal distribution to the
asymmetric normal distribution.
The EM algorithm is a general framework for fitting a latent variable model to
a dataset by alternating an E-step and an M-step until convergence. The EM
algorithm for Tobit analysis uses the following Q-function:
$\displaystyle Q({\bm{w}},\beta,q):=\log p({\bm{w}})+\frac{n}{2}\log\beta$
(20)
$\displaystyle+\sum_{i=1}^{n_{\text{v}}}\log\phi\left(\sqrt{\beta}(y_{i}-\left<{\bm{w}},\bm{x}_{i}\right>)\right)$
$\displaystyle+\sum_{i=n_{\text{v}}+1}^{n}{\mathbb{E}}_{q_{i}(y_{i})}\left[\log\phi\left(\sqrt{\beta}(y_{i}-\left<{\bm{w}},\bm{x}_{i}\right>)\right)\right]$
where $q$ is a set of $(n-n_{\text{v}})$ probability density functions
$q_{n_{\text{v}}+1}(y_{n_{\text{v}}+1}),\dots,q_{n}(y_{n})$. Therein,
$p({\bm{w}})$ is the prior of ${\bm{w}}$; $p=p_{\text{sym}}$ for the classical
Tobit model and $p=p_{\text{asym}}$ for the asymmetric Tobit model. Let
$({\bm{w}}^{(t-1)},\beta^{(t-1)})$ denote the value of the model parameters
obtained at the $(t-1)$th iteration. The set of the distributions $q$ at the
$t$-th iteration is denoted by
$q^{(t)}:=\left(q_{i}^{(t)}\right)_{i=n_{\text{v}}+1}^{n}$. The $t$-th
iteration consists of the following procedure:
1. Set the density function $q_{i}^{(t)}$ to the posterior of $y_{i}$ based on $({\bm{w}}^{(t-1)},\beta^{(t-1)})$, and update each of the expected terms in the Q-function.
2. ${\bm{w}}^{(t)}:=\mathop{\textrm{argmax}}\limits_{{\bm{w}}\in{\mathbb{R}}^{d}}Q({\bm{w}},\beta^{(t-1)},q^{(t)})$;
3. $\beta^{(t)}:=\mathop{\textrm{argmax}}\limits_{\beta\in{\mathbb{R}}}Q({\bm{w}}^{(t)},\beta,q^{(t)})$.
The first step is the E-step; the other two constitute the M-step. The E-step
and the update rule of $\beta$ are unchanged even if the
prior of ${\bm{w}}$ is changed. Meanwhile, the change of the prior of
${\bm{w}}$ may complicate the update rule of ${\bm{w}}$. In this study, we
found the following result.
###### Theorem 1.
If $p=p_{\text{asym}}$, the update rule of ${\bm{w}}$ in the EM algorithm for
fitting the Tobit model is reduced to an NNLS problem.
This theorem implies that each iteration of the EM algorithm is performed
efficiently even if the prior of the regression coefficients ${\bm{w}}$ is
replaced with the asymmetric normal distribution.
Before discussing the update rule of ${\bm{w}}$, we review the E-step and the
update rule of $\beta$. Let
$\displaystyle{\bm{y}}^{\text{v}}:=\left[y_{1},\dots,y_{n_{\text{v}}}\right]^{\top},\quad$
$\displaystyle{\bm{y}}^{\text{h}}:=\left[y_{n_{\text{v}}+1},\dots,y_{n}\right]^{\top},$
$\displaystyle{\bm{X}}^{\text{v}}:=\left[\bm{x}_{1},\dots,\bm{x}_{n_{\text{v}}}\right],\quad$
$\displaystyle{\bm{X}}^{\text{h}}:=\left[\bm{x}_{n_{\text{v}}+1},\dots,\bm{x}_{n}\right].$
The posterior, computed at the E-step of the $t$th iteration, is updated as
$\displaystyle
q_{i}^{(t)}(y_{i})=f_{\text{tn}}\left(y_{i}\,\middle|\,\left<{\bm{w}}^{(t-1)},\bm{x}_{i}\right>,\beta^{(t-1)},\theta\right).$
(21)
This allows us to update the following expected quantities.
$\displaystyle\bar{{\bm{y}}}^{(t)}:=\left[\left({\bm{y}}^{{\textnormal{v}}}\right)^{\top},\,{\mathbb{E}}_{q^{(t)}}\left[\left({\bm{y}}^{{\textnormal{h}}}\right)^{\top}\right]\right]^{\top},$
(22) $\displaystyle
v^{(t)}:={\mathbb{E}}_{q^{(t)}}\left[\left\lVert{\bm{y}}^{{\textnormal{h}}}\right\rVert^{2}\right]-\left\lVert{\mathbb{E}}_{q^{(t)}}\left[{\bm{y}}^{{\textnormal{h}}}\right]\right\rVert^{2}.$
Each expectation in both $\bar{{\bm{y}}}^{(t)}$ and $v^{(t)}$ is expressed in
a closed form using (4) and (7). The update rule of $\beta$ is readily
obtained by setting the derivative of the Q-function with respect to $\beta$ to zero:
$\displaystyle\beta^{(t)}=\frac{n}{\lVert{\bm{X}}^{\top}{\bm{w}}^{(t)}-\bar{{\bm{y}}}^{(t)}\rVert^{2}+v^{(t)}}.$
(23)
We thus observe that efficient computation of the E-step and the update rule
of $\beta$ is possible.
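A sketch (ours) of the E-step quantities (22) and the $\beta$ update (23), using the standard truncated-normal mean and variance obtained from the inverse Mills ratio; all data below are toy stand-ins, and the columns of `X` play the role of the feature vectors.

```python
import numpy as np
from scipy.stats import norm

def e_step_moments(w, beta, X_h, theta):
    """Per-hidden-observation mean and variance of the truncated-normal
    posterior (21), via the inverse Mills ratio."""
    mu = X_h.T @ w
    sb = np.sqrt(beta)
    xi = (theta - mu) * sb
    lam = norm.pdf(xi) / norm.cdf(xi)
    mean = mu - lam / sb                         # E_q[y_i]
    var = (1.0 - xi * lam - lam**2) / beta       # Var_q[y_i]
    return mean, var

rng = np.random.default_rng(2)
d, n_v, n = 3, 4, 7
X = rng.normal(size=(d, n))
y_v = rng.normal(size=n_v)
w, beta, theta = rng.normal(size=d), 2.0, 0.0

mean_h, var_h = e_step_moments(w, beta, X[:, n_v:], theta)
y_bar = np.concatenate([y_v, mean_h])            # (22), the expectation vector
v = var_h.sum()                                  # (22), the variance term
beta_new = n / (np.linalg.norm(X.T @ w - y_bar)**2 + v)   # update (23)
print(beta_new)
```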
Finally, we conclude this section by demonstrating that NNLS fitting
accomplishes the update rule of ${\bm{w}}$, as described in Theorem 1. Define
a $2d\times(n+2d)$ matrix ${\bm{A}}^{(t)}$ and an $(n+2d)$-dimensional vector
${\bm{b}}^{(t)}$ as
$\displaystyle{\bm{A}}^{(t)}:=\begin{bmatrix}{\bm{X}}&\operatorname{diag}\left(\frac{{\bm{\lambda}}^{\text{p}}}{\beta^{(t-1)}}\right)^{1/2}&{\bm{O}}\\\
-{\bm{X}}&{\bm{O}}&\operatorname{diag}\left(\frac{{\bm{\lambda}}^{\text{n}}}{\beta^{(t-1)}}\right)^{1/2}\end{bmatrix},$
(24)
$\displaystyle\text{and}\quad{\bm{b}}^{(t)}:=\begin{bmatrix}\bar{{\bm{y}}}^{(t)}\\\
{\bm{0}}_{2d}\end{bmatrix}.$
The regression coefficient vector ${\bm{w}}\in{\mathbb{R}}^{d}$ can be
decomposed with two nonnegative vectors
${\bm{w}}_{+},{\bm{w}}_{-}\in{\mathbb{R}}_{+}^{d}$ as
${\bm{w}}={\bm{w}}_{+}-{\bm{w}}_{-}$. Using the two vectors, the Q-function
can be rewritten as
$Q({\bm{w}}_{+}-{\bm{w}}_{-},\beta^{(t-1)},q^{(t)})=\\\
-\frac{\beta^{(t-1)}}{2}\left\lVert({\bm{A}}^{(t)})^{\top}\begin{bmatrix}{\bm{w}}_{+}\\\
{\bm{w}}_{-}\end{bmatrix}-{\bm{b}}^{(t)}\right\rVert^{2}+\text{const}$ (25)
where const denotes the terms with no dependency on the regression
coefficients. Equation (25) implies that the sub-problem for maximizing
$Q(\cdot,\beta^{(t-1)},q^{(t)})$ is reduced to the problem
$\text{NNLS}({\bm{A}}^{(t)},{\bm{b}}^{(t)})$ defined in Subsection 2.3. From
the optimal solution to the sub-problem, denoted by
$\begin{bmatrix}{\bm{w}}_{+}^{(t)}\\\ {\bm{w}}_{-}^{(t)}\end{bmatrix}$, the
regression coefficient vector is updated as
${\bm{w}}^{(t)}={\bm{w}}_{+}^{(t)}-{\bm{w}}_{-}^{(t)}$.
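The $\bm{w}$ update just described can be sketched as follows (our toy data): ${\bm{A}}^{(t)}$ and ${\bm{b}}^{(t)}$ are built as in (24), the stacked problem over $[{\bm{w}}_{+};{\bm{w}}_{-}]\geq 0$ is handed to `scipy.optimize.nnls`, and the difference of the two halves recovers $\bm{w}$.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
d, n = 3, 8
X = rng.normal(size=(d, n))
y_bar = rng.normal(size=n)               # stand-in for \bar{y}^{(t)}
lam_p = np.array([1.0, 100.0, 1.0])      # lambda^p
lam_n = np.array([100.0, 1.0, 100.0])    # lambda^n
beta_prev = 2.0                          # stand-in for beta^{(t-1)}

# A^{(t)} (2d x (n+2d)) and b^{(t)} (n+2d), as in (24)
A = np.block([
    [X, np.diag(np.sqrt(lam_p / beta_prev)), np.zeros((d, d))],
    [-X, np.zeros((d, d)), np.diag(np.sqrt(lam_n / beta_prev))],
])
b = np.concatenate([y_bar, np.zeros(2 * d)])

# NNLS over [w_+; w_-] >= 0, then w = w_+ - w_-
sol, _ = nnls(A.T, b)
w = sol[:d] - sol[d:]
print(w)
```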
The above discussions are summarized in Algorithm 1, which shows pseudo-code
of the EM algorithm for fitting the asymmetric Tobit model.
Algorithm 1: EM algorithm for asymmetric Tobit model.
begin
  Initialize ${\bm{w}}^{(0)}$ and $\beta^{(0)}$;
  for $t:=1$ to $T$ do
    Use (21) and (22) to update $q$ and compute $\bar{{\bm{y}}}^{(t)}$ and $v^{(t)}$;
    Solve $\text{NNLS}({\bm{A}}^{(t)},{\bm{b}}^{(t)})$, where ${\bm{A}}^{(t)}$ and ${\bm{b}}^{(t)}$ are defined as in (24), to get ${\bm{w}}_{+}^{(t)}$ and ${\bm{w}}_{-}^{(t)}$;
    ${\bm{w}}^{(t)}:={\bm{w}}_{+}^{(t)}-{\bm{w}}_{-}^{(t)}$;
    Update the inverse variance parameter by (23);
  end for
end

Table 1: Estimation errors on Indian water dataset.

A | B | Asym Tobit | Sym Tobit | Naïve
---|---|---|---|---
FC | TC | 0.025 (0.020) | 0.075 (0.064) | 0.083 (0.100)
FC | pH | 0.134 (0.104) | 0.171 (0.115) | 0.623 (0.267)
FC | Cond | 0.112 (0.091) | 0.119 (0.093) | 0.522 (0.335)
FC | N | 0.116 (0.083) | 0.131 (0.092) | 0.419 (0.272)
FC | BOD | 0.156 (0.123) | 0.151 (0.132) | 0.453 (0.250)
TC | FC | 0.028 (0.022) | 0.060 (0.057) | 0.083 (0.100)
TC | pH | 0.116 (0.084) | 0.163 (0.115) | 0.635 (0.308)
TC | Cond | 0.142 (0.082) | 0.178 (0.104) | 0.732 (0.343)
TC | N | 0.101 (0.081) | 0.114 (0.087) | 0.376 (0.313)
TC | BOD | 0.091 (0.069) | 0.096 (0.068) | 0.441 (0.347)
pH | FC | 0.141 (0.091) | 0.191 (0.107) | 0.623 (0.267)
pH | TC | 0.124 (0.091) | 0.165 (0.116) | 0.635 (0.308)
pH | Cond | 0.144 (0.068) | 0.167 (0.081) | 0.684 (0.364)
pH | N | 0.114 (0.094) | 0.127 (0.107) | 0.978 (0.036)
pH | BOD | 0.131 (0.087) | 0.156 (0.123) | 0.600 (0.318)
Cond | FC | 0.098 (0.081) | 0.111 (0.086) | 0.522 (0.335)
Cond | TC | 0.135 (0.078) | 0.167 (0.093) | 0.729 (0.341)
Cond | pH | 0.128 (0.068) | 0.161 (0.081) | 0.684 (0.364)
Cond | N | 0.066 (0.060) | 0.090 (0.074) | 0.558 (0.302)
Cond | BOD | 0.066 (0.049) | 0.070 (0.051) | 0.518 (0.318)
N | FC | 0.133 (0.087) | 0.145 (0.093) | 0.419 (0.272)
N | TC | 0.115 (0.088) | 0.127 (0.098) | 0.376 (0.313)
N | pH | 0.070 (0.052) | 0.098 (0.080) | 0.978 (0.036)
N | Cond | 0.071 (0.064) | 0.098 (0.080) | 0.558 (0.302)
N | BOD | 0.143 (0.072) | 0.143 (0.072) | 0.342 (0.215)
BOD | FC | 0.165 (0.118) | 0.161 (0.140) | 0.453 (0.250)
BOD | TC | 0.103 (0.075) | 0.110 (0.078) | 0.441 (0.347)
BOD | pH | 0.111 (0.088) | 0.154 (0.130) | 0.600 (0.318)
BOD | Cond | 0.053 (0.041) | 0.070 (0.057) | 0.518 (0.318)
BOD | N | 0.139 (0.085) | 0.143 (0.088) | 0.342 (0.215)
Table 2: Estimation errors on Harbor water dataset. A | B | Asym Tobit | Sym Tobit | Naïve
---|---|---|---|---
FC | TC | 0.102 (0.066) | 0.109 (0.074) | 0.475 (0.429)
FC | WT | 0.138 (0.099) | 0.149 (0.100) | 0.647 (0.279)
FC | pH | 0.197 (0.127) | 0.197 (0.124) | 0.567 (0.298)
TC | FC | 0.129 (0.078) | 0.139 (0.084) | 0.475 (0.429)
TC | WT | 0.136 (0.108) | 0.151 (0.116) | 0.656 (0.368)
TC | pH | 0.102 (0.066) | 0.109 (0.074) | 0.475 (0.429)
WT | FC | 0.147 (0.101) | 0.153 (0.110) | 0.647 (0.279)
WT | TC | 0.149 (0.107) | 0.157 (0.116) | 0.656 (0.368)
WT | pH | 0.149 (0.107) | 0.157 (0.116) | 0.656 (0.368)
pH | FC | 0.195 (0.128) | 0.195 (0.124) | 0.567 (0.298)
pH | TC | 0.129 (0.078) | 0.139 (0.084) | 0.475 (0.429)
pH | WT | 0.136 (0.108) | 0.151 (0.116) | 0.656 (0.368)
Table 3: Estimation errors on Sapporo water dataset. A | B | Asym Tobit | Sym Tobit | Naïve
---|---|---|---|---
E.coli | TC | 0.110 (0.027) | 0.082 (0.039) | 0.776 (0.268)
E.coli | pH | 0.087 (0.068) | 0.094 (0.066) | 0.337 (0.232)
E.coli | EC | 0.116 (0.082) | 0.132 (0.089) | 0.993 (0.541)
E.coli | SS | 0.115 (0.089) | 0.133 (0.102) | 0.631 (0.390)
E.coli | TN | 0.101 (0.057) | 0.059 (0.048) | 0.651 (0.329)
E.coli | TP | 0.180 (0.053) | 0.138 (0.063) | 0.731 (0.492)
E.coli | FR | 0.264 (0.117) | 0.270 (0.117) | 1.058 (0.288)
TC | E.coli | 0.103 (0.031) | 0.085 (0.033) | 0.776 (0.268)
TC | pH | 0.127 (0.081) | 0.124 (0.081) | 0.574 (0.380)
TC | EC | 0.071 (0.048) | 0.072 (0.045) | 0.654 (0.458)
TC | SS | 0.112 (0.079) | 0.121 (0.096) | 0.333 (0.152)
TC | TN | 0.105 (0.055) | 0.069 (0.046) | 0.917 (0.445)
TC | TP | 0.171 (0.050) | 0.109 (0.060) | 0.764 (0.496)
TC | FR | 0.167 (0.081) | 0.206 (0.078) | 0.563 (0.305)
pH | E.coli | 0.098 (0.080) | 0.111 (0.082) | 0.337 (0.232)
pH | TC | 0.118 (0.077) | 0.120 (0.085) | 0.574 (0.380)
pH | EC | 0.082 (0.054) | 0.082 (0.054) | 0.451 (0.314)
pH | SS | 0.095 (0.071) | 0.098 (0.078) | 0.675 (0.343)
pH | TN | 0.179 (0.114) | 0.191 (0.119) | 0.962 (0.354)
pH | TP | 0.121 (0.100) | 0.111 (0.093) | 0.483 (0.330)
pH | FR | 0.138 (0.104) | 0.166 (0.120) | 0.734 (0.292)
EC | E.coli | 0.114 (0.084) | 0.129 (0.096) | 0.998 (0.542)
EC | TC | 0.086 (0.062) | 0.084 (0.057) | 0.654 (0.458)
EC | pH | 0.078 (0.061) | 0.077 (0.061) | 0.451 (0.314)
EC | SS | 0.082 (0.065) | 0.125 (0.095) | 0.767 (0.348)
EC | TN | 0.232 (0.092) | 0.244 (0.094) | 0.502 (0.381)
EC | TP | 0.067 (0.045) | 0.078 (0.046) | 0.620 (0.390)
EC | FR | 0.247 (0.076) | 0.297 (0.106) | 1.062 (0.214)
SS | E.coli | 0.122 (0.090) | 0.139 (0.105) | 0.631 (0.390)
SS | TC | 0.118 (0.087) | 0.136 (0.103) | 0.333 (0.152)
SS | pH | 0.095 (0.074) | 0.095 (0.081) | 0.675 (0.343)
SS | EC | 0.080 (0.059) | 0.101 (0.086) | 0.767 (0.348)
SS | TN | 0.163 (0.114) | 0.188 (0.129) | 0.319 (0.221)
SS | TP | 0.146 (0.103) | 0.174 (0.132) | 1.049 (0.169)
SS | FR | 0.086 (0.060) | 0.116 (0.084) | 0.495 (0.290)
TN | E.coli | 0.120 (0.056) | 0.078 (0.056) | 0.651 (0.329)
TN | TC | 0.130 (0.043) | 0.078 (0.051) | 0.925 (0.448)
TN | pH | 0.125 (0.091) | 0.131 (0.095) | 0.962 (0.354)
TN | EC | 0.229 (0.089) | 0.240 (0.090) | 0.502 (0.381)
TN | SS | 0.145 (0.111) | 0.190 (0.136) | 0.316 (0.219)
TN | TP | 0.055 (0.036) | 0.046 (0.030) | 0.716 (0.286)
TN | FR | 0.224 (0.099) | 0.264 (0.108) | 0.524 (0.298)
TP | E.coli | 0.181 (0.054) | 0.139 (0.063) | 0.731 (0.492)
TP | TC | 0.176 (0.046) | 0.128 (0.057) | 0.756 (0.494)
TP | pH | 0.133 (0.114) | 0.125 (0.105) | 0.483 (0.330)
TP | EC | 0.082 (0.044) | 0.094 (0.045) | 0.620 (0.390)
TP | SS | 0.122 (0.089) | 0.149 (0.115) | 1.052 (0.179)
TP | TN | 0.054 (0.030) | 0.046 (0.028) | 0.716 (0.286)
TP | FR | 0.153 (0.113) | 0.207 (0.128) | 0.436 (0.269)
FR | E.coli | 0.260 (0.118) | 0.271 (0.117) | 1.058 (0.288)
FR | TC | 0.208 (0.111) | 0.253 (0.112) | 0.563 (0.305)
FR | pH | 0.129 (0.105) | 0.145 (0.109) | 0.734 (0.292)
FR | EC | 0.297 (0.086) | 0.326 (0.104) | 1.062 (0.214)
FR | SS | 0.083 (0.059) | 0.109 (0.082) | 0.487 (0.290)
FR | TN | 0.259 (0.096) | 0.285 (0.096) | 0.524 (0.298)
FR | TP | 0.240 (0.158) | 0.300 (0.169) | 0.421 (0.264)
Table 4: Computational times on four datasets.

(a) Indian

$n$ | Asym Tobit | Sym Tobit
---|---|---
10 | 0.330 (0.001) | 0.321 (0.001)
17 | 0.536 (0.001) | 0.529 (0.001)
31 | 0.957 (0.006) | 0.964 (0.012)
56 | 1.706 (0.002) | 1.715 (0.003)
100 | 3.026 (0.010) | 3.025 (0.002)
177 | 5.315 (0.011) | 5.322 (0.001)
316 | 9.429 (0.034) | 9.434 (0.032)
562 | 16.665 (0.046) | 16.633 (0.021)
1000 | 29.610 (0.094) | 29.660 (0.080)

(b) NY Harbor

$n$ | Asym Tobit | Sym Tobit
---|---|---
10 | 0.332 (0.002) | 0.324 (0.001)
17 | 0.541 (0.002) | 0.535 (0.001)
31 | 0.959 (0.002) | 0.952 (0.001)
56 | 1.703 (0.002) | 1.701 (0.007)
100 | 2.986 (0.007) | 2.990 (0.012)
177 | 5.276 (0.023) | 5.280 (0.011)

(c) Sapporo

$n$ | Asym Tobit | Sym Tobit
---|---|---
10 | 0.335 (0.001) | 0.326 (0.000)
17 | 0.544 (0.001) | 0.535 (0.001)
31 | 0.962 (0.001) | 0.956 (0.001)
56 | 1.709 (0.002) | 1.700 (0.002)
100 | 3.019 (0.002) | 3.011 (0.005)

(d) Random

$n$ | Asym Tobit | Sym Tobit
---|---|---
10 | 1.028 (0.070) | 0.342 (0.005)
17 | 1.331 (0.117) | 0.549 (0.002)
31 | 1.804 (0.095) | 0.969 (0.006)
56 | 3.041 (0.099) | 1.720 (0.007)
100 | 5.226 (0.248) | 3.032 (0.017)
177 | 6.367 (0.275) | 5.313 (0.018)
316 | 10.626 (0.282) | 9.414 (0.033)
562 | 18.297 (0.121) | 16.717 (0.058)
1000 | 32.200 (0.463) | 29.770 (0.059)
## 5 Simulations
We carried out simulations to investigate the performance of the three
correlation analysis methods described in Section 3. Three water quality
datasets including an Indian water dataset, a water dataset on NY Harbor, and
a water dataset on Sapporo were used. The Indian water dataset contained 1,580
records, each with six variates: FC, TC, pH, Cond, N, and BOD.
NY Harbor water dataset contained 292 records, each consisting of four
variates, FC, TC, WT, and pH. These two datasets are available from
https://www.kaggle.com/. The Sapporo water dataset is also publicly available
from the supplement of Kato et al.’s paper [10]. The Sapporo dataset had 175
records, each of which included eight variates: E.coli, TC, pH, EC, SS, TN, TP,
and FR. None of these variates has a detection limit. To simulate censoring
situations, we chose two variates to regard as concentrations of two
microorganisms. A virtual detection limit was assumed for each of the two
microorganisms. The detection limits were selected so that the censored
(non-detect) ratio is 0.8 for both microorganisms. For each dataset, $n=50$ records were randomly
selected, and the concentration data of the two microorganisms of interest
were censored. Three correlation analysis approaches were applied to the data
prepared in this way. The estimated PCC
$\hat{R}\in\\{R_{\text{na\"{i}ve}},R_{\text{sym}},R_{\text{asym}}\\}$ was
assessed by the absolute error from the PCC computed from uncensored data.
Namely, the error was defined as
$|\hat{R}-R({\bm{y}}_{\text{a}},{\bm{y}}_{\text{b}})|$, where
${\bm{y}}_{\text{a}}\in{\mathbb{R}}^{n}$ and
${\bm{y}}_{\text{b}}\in{\mathbb{R}}^{n}$ are the concentration data before
censoring for two microorganisms, respectively. This procedure was repeated 50
times to obtain 50 errors.
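For concreteness, a small synthetic sketch (ours) of this error metric applied to the naïve estimator; the deterministic toy series below stand in for concentration data, and the detection limits are set at the 0.8 quantile as in the simulations.

```python
import numpy as np

# Deterministic, strongly correlated toy "concentrations"
y_a = np.arange(50, dtype=float)
y_b = y_a + np.sin(y_a)
R_true = np.corrcoef(y_a, y_b)[0, 1]    # PCC of the uncensored data

# Detection limits chosen so 80% of each series is censored
th_a, th_b = np.quantile(y_a, 0.8), np.quantile(y_b, 0.8)
vis = (y_a >= th_a) & (y_b >= th_b)     # jointly visible pairs
R_naive = np.corrcoef(y_a[vis], y_b[vis])[0, 1]

err = abs(R_naive - R_true)             # the error metric of Section 5
print(err)
```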
Table 1, Table 2, and Table 3 report the average of the estimation errors for
all choices of two microorganisms for the Indian, NY Harbor, Sapporo water
datasets, respectively. The standard deviations are presented in parentheses.
The minimal error among three errors in each row is bold-faced. Asym Tobit,
Sym Tobit, and Naïve denote the asymmetric Tobit, classical Tobit, and naïve
approaches, respectively. For the Indian dataset, the asymmetric and classical
Tobit approaches obtained the minimal error for 28 pairs and two pairs,
respectively, whereas the naïve approach could not obtain the minimal error
for any pair. For the NY Harbor dataset, the asymmetric and classical Tobit
approaches achieved the minimal errors for 12 and two pairs, respectively. For
Sapporo dataset, the two Tobit approaches obtained the minimal errors for 39
and 19 pairs. The naïve approach did not obtain the minimal error for any
pair. These results suggest that imputation of undetected observations leads
to a better estimation performance for inferring PCC compared to the naïve
approach. Let us speculate why imputation leads to a better estimation. The
commonly visible observations ${\mathcal{I}}_{\text{vv}}$ tend to be few when
two microorganisms are poorly correlated. In such a case, the naïve method
computes the PCC from a small paired dataset, which worsens estimation. The
empirical observations that the asymmetric Tobit performed better than the
classical Tobit for many pairs suggested that the asymmetric Tobit exploited
domain knowledge effectively to impute undetected concentrations, resulting in
more precise estimations.
Moreover, the runtimes for fitting the Tobit models are reported. A major
technical contribution of this study is finding that the M-step of the EM
algorithm is reduced to NNLS even when the prior of the regression
coefficients is replaced with a slightly complicated distribution named the
asymmetric normal distribution. This is in contrast to the EM algorithm for
fitting the classical Tobit model, in which the M-step can be performed by
solving an ordinary unconstrained least square problem. NNLS is a constrained
convex program. Given this, how much additional runtime is required for
fitting the asymmetric Tobit model, compared to fitting the symmetric Tobit
model? To answer this question empirically, the runtimes of 30 iterations of
the EM algorithms for the asymmetric and symmetric Tobit models were measured
with a variable sample size $n$. Table 4(a), (b), and (c) report the average
CPU times over 10 trials for the Indian, NY Harbor, and Sapporo water
datasets, respectively. The unit is seconds in the tables. The figures in
parentheses are the standard deviations. Surprisingly, no significant
differences between the two models were observed, even though the M-step for
the asymmetric Tobit is a constrained least square problem. The reason is that
the number of regression coefficients is not very large in the application of
water quality analysis: the dimensionalities $d$ for the three datasets were
only six, four, and eight, respectively, so the NNLS problems are solved
quickly compared with the E-step, in which the values of the cumulative
distribution function are computed for $(n-n_{\text{v}})$ data points. To examine the case where
the number of regression coefficients is large, an artificial dataset was
generated with $d=200$ and the computational times were examined. When $d$ was
large, solving NNLS became a computationally expensive step, and thereby the
differences between the computational times of the two models appeared
clearly, as shown in Table 4(d). However, when the sample size $n$ was
increased, the ratio of the two computational times approached one, because
the computational cost of the E-step was again dominant with larger $n$. To
summarize our runtime investigation: in the intended application of water
quality analysis, the dimensionality $d$ is small, so the additional
computational cost paid for NNLS can be ignored.
## 6 Conclusions
In this paper, we demonstrated the favorable effects of imputation of
undetected observations using side information prior to correlation
computation for analysis of relationships between left-censored data pairs,
with the aim of applying pathogenic concentration data to assess exposure risk
to pathogens in water. The simulation results suggested that exploitation of
domain knowledge for imputation of undetected data made the use of side
information more effective. The asymmetric normal prior was introduced to the
Tobit model as the key tool for imputation of undetected data. We showed
theoretically that each iteration of the EM algorithm for Tobit fitting with
the asymmetric prior can run efficiently by reducing the sub-problem for the
M-step to the nonnegative least square problem, which is known as a quickly
solvable convex problem. In future work, the method developed in this study
will be applied to actual analyses of pathogen concentration data in order to
redesign improved routines for periodic monitoring, given that pathogen
measurement technologies keep evolving.
## Acknowledgment
This research was performed by the Environment Research and Technology
Development Fund JPMEERF20205006 of the Environmental Restoration and
Conservation Agency of Japan and supported by JSPS KAKENHI Grant Number
19K04661.
## References
* [1] Takeshi Amemiya. Tobit models: A survey. Journal of Econometrics, 24(1-2):3–61, January 1984.
* [2] Stefania Bellavia, Maria Macconi, and Benedetta Morini. An interior point Newton-like method for non-negative least-squares problems with degenerate solution. Numerical Linear Algebra with Applications, 13(10):825–846, 2006. doi: 10.1002/nla.502.
* [3] A. B. Boehm and L. M. Sassoubre. Enterococci: From Commensals to Leading Causes of Drug Resistant Infection, chapter Enterococci as Indicators of Environmental Fecal Contamination. Massachusetts Eye and Ear Infirmary in Boston, editors: Michael S Gilmore, Don B Clewell, Yasuyoshi Ike, Nathan Shankar, 2014.
* [4] Donghui Chen and Robert J. Plemmons. Nonnegativity constraints in numerical analysis. In Adhemar Bultheel and Ronald Cools, editors, The Birth of Numerical Analysis, pages 109–139. World Scientific, Nov 2009. doi: 10.1142/9789812836267_0008.
* [5] James Dobrowoski, Michael O'Neill, Lisa Duriancik, and Joanne Throwe. Opportunities and challenges in agricultural water reuse: Final report. USDA-CSREES, 89, 2008.
* [6] J. Gentry, J. Vinje, and E. K. Lipp. A rapid and efficient method for quantitation of genogroups i and ii norovirus from oysters and application in other complex environmental samples. J Virol Methods, 156(1–2):59–65, Mar 2009.
* [7] S. G. Goh, N. Saeidi, X. Gu, G. G. R. Vergara, L. Liang, H. Fang, M. Kitajima, A. Kushmaro, and K. Y. Gin. Occurrence of microbial indicators, pathogenic bacteria and viruses in tropical surface waters subject to contrasting land use. Water Res, 150:200–215, Mar 2019.
* [8] Toshihiro Ito, Tsuyoshi Kato, Makoto Hasegawa, Hiroyuki Katayama, Satoshi Ishii, Satoshi Okabe, and Daisuke Sano. Evaluation of virus reduction efficiency in wastewater treatment unit processes as a credit value in the multiple-barrier system for wastewater reclamation and reuse. Journal of Water and Health, 14(5):879–889, Dec 2016.
* [9] Toshihiro Ito, Tsuyoshi Kato, Kenta Takagishi, Satoshi Okabe, and Daisuke Sano. Bayesian modeling of virus removal efficiency in wastewater treatment processes. Water Science and Technology, 72(10):1789–95, Nov 2015. doi: 10.2166/wst.2015.402.
* [10] T. Kato, A. Kobayashi, W. Oishi, S. S. Kadoya, S. Okabe, N. Ohta, M. Amarasiri, and D. Sano. Sign-constrained linear regression for prediction of microbe concentration based on water quality datasets. J Water Health, 17(3):404–415, Jun 2019.
* [11] Tsuyoshi Kato, Ayano Kobayashi, Toshihiro Ito, Takayuki Miura, Satoshi Ishii, Satoshi Okabe, and Daisuke Sano. Estimation of concentration ratio of indicator to pathogen-related gene in environmental water based on left-censored data. Journal of Water and Health, 14(1):14–25, Feb 2016. doi:10.2166/wh.2015.029.
* [12] Tsuyoshi Kato, Takayuki Miura, Satoshi Okabe, and Daisuke Sano. Bayesian modeling of enteric virus density in wastewater using left-censored data. Food and Environmental Virology, 5(4):185–193, Dec 2013.
* [13] Tsuyoshi Kato, Shinichiro Omachi, and Hirotomo Aso. Asymmetric gaussian and its application to pattern recognition. In Joint IAPR International Workshops on Syntactical and Structural Pattern Recognition and Statistical Pattern Recognition(S+SSPR2002), pages 405–413, 2002.
* [14] A. Korajkic, B. R. McMinn, and V. J. Harwood. Relationships between microbial indicators and pathogens in recreational water settings. Int J Environ Res Public Health, 15(12), Dec 2018.
* [15] M. H. Kramer, B. L. Herwaldt, G. F. Craun, R. L. Calderon, and D. D. Juranek. Surveillance for waterborne-disease outbreaks–united states, 1993-1994. MMWR CDC Surveill Summ, 45(1):1–33, Apr 1996.
* [16] Charles L. Lawson and Richard J. Hanson. Solving Least Squares Problems. Society for Industrial and Applied Mathematics, jan 1995. doi:10.1137/1.9781611971217.
* [17] B.R. McMinn, N.J. Ashbolt, and A. Korajkic. Bacteriophages as indicators of faecal pollution and enteric virus removal. Letters in Applied Microbiology, 65(1):11–26, June 2017.
* [18] Nicolai Meinshausen. Sign-constrained least squares estimation for high-dimensional regression. Electronic Journal of Statistics, 7:1607–1631, 2013. doi: 10.1214/13-ejs818.
* [19] S. P. Nappier, T. Hong, A. Ichida, A. Goldstone, and S. E. Eftim. Occurrence of coliphage in raw wastewater and in ambient water: A meta-analysis. Water Res, 153(-):263–273, Apr 2019.
* [20] Rachel T. Noble and Stephen B. Weisberg. A review of technologies for rapid detection of bacteria in recreational waters. Journal of Water and Health, 3(4):381–392, December 2005.
* [21] Committee on Indicators for Waterbone Pathogens. Indicators for Waterborne Pathogens. National Academies Press, 2004.
* [22] Francisco Pedrero, Ioannis Kalavrouziotis, Juan Jose Alarcon, Prodromos Koukoulakis, and Takashi Asano. Use of treated municipal wastewater in irrigated agriculture–review of some practices in spain and greece. Agricultural Water Management, 97(9):1233–1241, Sept 2010.
* [23] M. I. Sedji, M. Varbanov, M. Meo, M. Colin, L. Mathieu, and I. Bertrand. Quantification of human adenovirus and norovirus in river water in the north-east of france. Environ Sci Pollut Res Int, 25(30):30497–30507, Oct 2018.
* [24] Xiaotong Wen, Feiyu Chen, Yixiang Lin, Hui Zhu, Fang Yuan, Duyi Kuang, Zhihui Jia, and Zhaokang Yuan. Microbial indicators and their use for monitoring drinking water quality—a review. Sustainability, 12(6):2249, March 2020.
|
# The Higgs Field and the Jordan Brans Dicke Cosmology
Onder <EMAIL_ADDRESS>, Levent <EMAIL_ADDRESS>, Metin <EMAIL_ADDRESS>, Yelda <EMAIL_ADDRESS>, Selale <EMAIL_ADDRESS>, and Tarik <EMAIL_ADDRESS>
Department of Physics, Bogazici University, Bebek, Istanbul, Turkey
###### Abstract
We investigate a field theoretical approach to the Jordan-Brans-Dicke (JBD)
theory extended with a particular potential term on a cosmological background,
starting with the motivation that the Higgs field and the scale factor of
the universe are related. Based on this relation, two mathematically
equivalent but different interpretations are possible. In one, the universe is
static and the masses of the elementary particles change with time. In the
other, which we adopt throughout the manuscript, the universe is expanding and
particle masses are constant. Thus, a coupled Lagrangian density of the JBD
field and the scale factor (the Higgs field) is obtained, which in zeroth
order exhibits a massive particle and a linearly expanding space,
respectively. By performing a coordinate transformation in field space for the
reduced JBD action, whose kinetic part is a nonlinear sigma model, the
Lagrangian can be written as that of two uncoupled scalar fields suitable for
the Higgs mechanism. After this transformation, as a result of spontaneous
symmetry breaking, the time-dependent vacuum expectation value (vev) of the
Higgs field and the Higgs bosons, the particles corresponding to the quantized
oscillation modes about the vacuum, are found.
## 1 Introduction
The concept of mass has been very important and challenging to understand at
the fundamental level throughout the advancement of modern physics. Particle
physics, as well as the foundations of classical physics such as the dynamics
of particle motion and the phenomenon of gravitation, depends on this concept.
With the discovery of the Higgs boson[1], one of the most striking
developments in our understanding of this concept has been that mass
fundamentally results from the vacuum expectation value of the Higgs field[2].
All the elementary particles in the standard model obtain their masses by this
mechanism[3][4][5], which is only consistent in Minkowski spacetime. Although
from a historical perspective the concept is divided into inertial and
gravitational mass, their equivalence, known as the weak equivalence principle
(WEP), is well established by experiments[6]. WEP ensures that test bodies
follow the same path in a gravitational field regardless of their
compositions. This equivalence was one of the main motivations for Einstein to
construct the general theory of relativity, which explains gravitation in a
purely geometrical way. Another motivation was Mach’s principle, which relates
an inertial force on a body to the gravitational effects originating from the
matter distribution of the universe. While Newton’s concept of absolute space
defines a special frame of reference and an inertial force is the result of
motion relative to this frame, Mach’s principle states that only relative
motion is observable and there is no special frame of reference. Thus, based
upon Mach’s principle, a test particle experiences an inertial force because
of its relative motion with respect to the rest of the universe, or simply,
the physical space shaped by distant stars and galaxies. Furthermore, a model
which relates inertia to the gravitational potential of the universe has been
proposed by Sciama[7]. In the rest of this section, a novel connection between
inertial mass and the metric tensor is constructed by means of the Higgs
field. A similar model[8] was recently introduced by two of the authors; the
more complete and precise version, in terms of theoretical arguments and
calculations, is studied in this manuscript.
In the general theory of relativity, the motion of particles is determined by
the action
$S=-m\int ds=-m\int\sqrt{g_{\mu\nu}dx^{\mu}dx^{\nu}},$ (1)
where $m$ and $ds$ are the mass of the particle and the length of the
Riemannian line element, respectively. As stated earlier, since, from a field
theoretical point of view, a particle gains its mass through its interaction
with the Higgs field $\phi$, the mass of the particle should be time dependent
if the vev of the Higgs field changes with time throughout the evolution of
the universe. However, the variation of the field must be sufficiently slow so
that the concept of mass is not put under too much stress and the factor in
front of the interaction term can be interpreted as a mass in a field
theoretical Lagrangian density. Once the solution of the Higgs field is
obtained in terms of the parameters of our model, it will be shown that this
condition is satisfied. Based on this motivation, the action in Eq.(1) can be
written as
$S=-m_{0}\int\frac{\phi(t)}{\phi_{0}}\sqrt{g_{\mu\nu}dx^{\mu}dx^{\nu}}.$ (2)
Another fundamental element of our understanding of the universe is its
expansion, described by the FLRW metric tensor[9, 10, 11, 12], for which the
line element is given by
$ds^{2}=a^{2}(t)(-dt^{2}+dx^{2}+dy^{2}+dz^{2})$ (3)
where $t$ is, in cosmological language, the conformal time. Here and
henceforth we use units $\hbar=c=1$. If the mass is determined by the time
dependent cosmological expectation value of the Higgs field, for a macroscopic
theory one can use Eq.(2) to embed the factor $\phi(t)/\phi_{0}$ into the
metric, and the line element becomes
$ds^{2}=<\phi(t)/\phi_{0}>^{2}\eta_{\mu\nu}dx^{\mu}dx^{\nu}$ (4)
as long as a homogeneous and isotropic space-time is considered. Now, while
$\phi/\phi_{0}$ represents the time dependence of the mass, from another
perspective it can be considered as part of the metric tensor for a
cosmological scenario through the relation
$a(t)=<\phi(t)/\phi_{0}>.$ (5)
Thus, roughly speaking, the scale factor $a$ must be related to the time
dependent cosmological expectation value of the Higgs field.
Besides the many successes of the general theory of relativity in explaining
phenomena in the solar system as well as in the standard model of cosmology,
such as the precession of Mercury, gravitational lensing, the proton-neutron
ratio in the early universe and primordial nucleosynthesis, its inability to
account for the late-time accelerating expansion of the universe and the
galaxy rotation curves without adding dark energy and dark matter as unknown
exotic constituents has led to the search for modified or alternative gravity
theories[13][14]. It also suffers from some conceptual issues[15][16] related
to Mach’s principle, on which it relies. Whereas modified gravity theories
satisfy WEP, the strong equivalence principle (SEP) is violated as a result of
the introduction of a fifth force[15][17]. Thus, objects which have different
gravitational binding energies move on different geodesics of the space-time
metric. In this sense, the general theory of relativity is the only tensor
theory which satisfies both WEP and SEP. Among modified gravity theories, due
to the coupling of a simple scalar field to the geometry of space-time,
scalar-tensor theories are the most prevalent and flexible alternatives. In
this manuscript, we consider the Jordan-Brans-Dicke (JBD) theory[15, 18, 19,
20], which is the first scalar-tensor theory and appears to be a more complete
theory of gravitation with respect to Mach’s principle.
Furthermore, for our purposes it is more suitable to exhibit the relation
between relativistic cosmology and the Higgs mechanism by relating the scalar
fields of the two pictures. In their original paper, Brans and Dicke define
the reciprocal of Newton’s constant $1/G$ as the scalar field, which has the
dimension of mass squared. Since our approach will mostly be field
theoretical, it is better to define a field with the dimension of mass. Thus,
the JBD action extended with a potential term (here, a massless JBD field is
taken into account) turns out to be
$S=\int
d^{4}x\sqrt{-g}\left(-\frac{\tilde{\xi}^{2}}{2}\tilde{\chi}^{2}R-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\tilde{\chi}\partial_{\nu}\tilde{\chi}-\frac{\tilde{\lambda}}{4}\tilde{\chi}^{4}\right).$
(6)
Here, $R$, $\tilde{\chi}$ and $\tilde{\xi}^{2}$ are the Ricci scalar, the JBD
scalar field and a dimensionless parameter, respectively. As will be explained
at the beginning of the next section, the tilde sign in Eq.(6) is used because
of dimensional considerations in the field space coordinate transformation.
Furthermore, although the coupling parameter $\omega$, the original JBD
parameter, is more common in the literature, we prefer to stick with the
action in the form of Eq.(6). The relation between the two parameters is given
by $\tilde{\xi}^{2}=-1/4\omega$.
## 2 The Higgs Field and the Conformal Factor in the JBD Theory
To fit this scalar-tensor theory into the relatively simpler form of a
Lagrangian of two scalar fields in Minkowski spacetime, we use the following
two relations
$\sqrt{-g}=a^{4}(t)\sqrt{-\eta}=a^{4}(t)$ (7)
$R=6\frac{\partial_{0}^{2}a}{a^{3}}$ (8)
to write the Lagrangian density in Eq.(6) in terms of dimensionless scalar
fields $\chi$ and $a$ as
$\mathcal{L}=-\frac{\xi^{2}}{2}\chi^{2}a\partial_{0}^{2}a+\frac{\mu^{2}}{2}a^{2}(\partial_{0}\chi)^{2}-\frac{\lambda}{4}a^{4}\chi^{4}$
(9)
where $\chi=\tilde{\chi}/\mu$ has undergone a dimensional transmutation. To
make $\chi$ dimensionless, $\mu$ must have the dimension of mass, and the
dimensionful constants are accordingly defined as $\xi=\mu\tilde{\xi}$ and
$\lambda=\mu^{4}\tilde{\lambda}$. Also, the factor of six in the Ricci scalar
has been absorbed into $\xi^{2}$.
After the first term in Eq.(9) is rewritten using an integration by parts, the
Lagrangian density becomes
$\mathcal{L}=-\frac{\xi^{2}}{2}[\partial_{0}(\chi^{2}a\partial_{0}a)-2a\chi\partial_{0}a\partial_{0}\chi-\chi^{2}(\partial_{0}a)^{2}]+\frac{\mu^{2}}{2}a^{2}(\partial_{0}\chi)^{2}-\frac{\lambda}{4}a^{4}\chi^{4}.$
(10)
At this point, since the first term in the square bracket is a total
divergence, it can be discarded in the action. After dropping this term, by
addition and subtraction of the term
$\frac{\xi^{2}}{2}a^{2}(\partial_{0}\chi)^{2}$ in the Lagrangian density, and
then taking the factor of $a^{2}\chi^{2}$ outside the parenthesis, one ends up
with
$\mathcal{L}=\frac{\xi^{2}}{2}a^{2}\chi^{2}\left[\left(\frac{\partial_{0}\chi}{\chi}\right)^{2}+2\frac{\partial_{0}a}{a}\frac{\partial_{0}\chi}{\chi}+\left(\frac{\partial_{0}a}{a}\right)^{2}\right]+\frac{\mu^{2}-\xi^{2}}{2}a^{2}(\partial_{0}\chi)^{2}-\frac{\lambda}{4}a^{4}\chi^{4}.$
(11)
The plus sign in front of the bracket, whose terms correspond to the kinetic
energy, implies the positivity of $\xi^{2}$ but the negativity of the JBD
parameter $\omega$, based upon our definition in the introduction. In order to
simplify this expression, the following relation for the terms inside the
bracket, together with the definitions (coordinate transformations in field
space) of the fields $\alpha$ and $\gamma$, is useful.
$[\partial_{0}(\ln\chi+\ln a)]^{2}=[\partial_{0}(\ln\chi
a)]^{2}=[\partial_{0}\ln\alpha]^{2}$ (12) $\alpha=\chi a$ (13)
$\gamma=\ln\chi$ (14)
Then, the Lagrangian density can be put in the form
$\mathcal{L}=\frac{\xi^{2}}{2}(\partial_{0}\alpha)^{2}+\frac{\mu^{2}-\xi^{2}}{2}\alpha^{2}(\partial_{0}\gamma)^{2}-\frac{\lambda}{4}\alpha^{4}.$
(15)
Based upon the equation of motion of $\gamma$, since
$\frac{\partial\mathcal{L}}{\partial\gamma}=0,$ (16)
$\frac{\partial\mathcal{L}}{\partial(\partial_{0}\gamma)}=(\mu^{2}-\xi^{2})\alpha^{2}\partial_{0}\gamma$
(17)
must be equal to a constant. Thus,
$\partial_{0}\gamma=\frac{C}{(\mu^{2}-\xi^{2})\alpha^{2}}.$ (18)
After some algebra, the Hamiltonian density is found to be
$\mathcal{H}=\frac{\xi^{2}}{2}(\partial_{0}\alpha)^{2}+\frac{\mu^{2}-\xi^{2}}{2}\alpha^{2}(\partial_{0}\gamma)^{2}+\frac{\lambda}{4}\alpha^{4},$
(19)
and using Eq.(18) in Eq.(19) yields
$\mathcal{H}=\frac{\xi^{2}}{2}(\partial_{0}\alpha)^{2}+\frac{C^{2}}{2(\mu^{2}-\xi^{2})}\frac{1}{\alpha^{2}}+\frac{\lambda}{4}\alpha^{4}.$
(20)
Before obtaining the equation of motion of $\alpha$, one must express the
Hamiltonian density in terms of the field and its canonical momentum. In our
case, it is
$\mathcal{H}=\frac{1}{2\xi^{2}}\pi_{\alpha}^{2}+\frac{C^{2}}{2(\mu^{2}-\xi^{2})}\frac{1}{\alpha^{2}}+\frac{\lambda}{4}\alpha^{4},$
(21)
where the canonical momentum is
$\pi_{\alpha}=\xi^{2}\partial_{0}\alpha.$ (22)
Furthermore, since the equation of motion is given by
$-\frac{\partial\mathcal{H}}{\partial\alpha}=\partial_{0}\pi_{\alpha},$ (23)
and the left hand side of Eq.(23) is
$-\frac{\partial\mathcal{H}}{\partial\alpha}=\frac{C^{2}}{(\mu^{2}-\xi^{2})}\frac{1}{\alpha^{3}}-\lambda\alpha^{3},$
(24)
after some algebraic manipulation one can easily get the equation of motion as
$\partial_{0}^{2}\alpha-\frac{C^{2}}{\xi^{2}(\mu^{2}-\xi^{2})}\frac{1}{\alpha^{3}}+\frac{\lambda}{\xi^{2}}\alpha^{3}=0.$
(25)
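As an illustrative check (not part of the original derivation), the passage from the Hamiltonian density of Eq.(21) to the equation of motion Eq.(25) via Hamilton's equation Eq.(23) can be verified symbolically with sympy; the shorthand symbol `M` below stands for $\mu^{2}-\xi^{2}$ and is assumed positive:

```python
import sympy as sp

t = sp.symbols('t')
xi, lam, C, M = sp.symbols('xi lambda C M', positive=True)  # M stands for mu**2 - xi**2
alpha = sp.Function('alpha')(t)

# Potential part of the Hamiltonian density, Eq.(21), as a function of alpha.
A = sp.symbols('A', positive=True)
V = C**2/(2*M)/A**2 + lam/4*A**4

# Eq.(24): -dH/dalpha, evaluated on the field alpha(t).
force = -sp.diff(V, A).subs(A, alpha)

# Eq.(23): d(pi_alpha)/dt, with pi_alpha = xi**2 * d(alpha)/dt from Eq.(22).
lhs = sp.diff(xi**2*sp.diff(alpha, t), t)

# Dividing Hamilton's equation through by xi**2 reproduces Eq.(25).
eq25 = sp.diff(alpha, t, 2) - C**2/(xi**2*M)/alpha**3 + lam/xi**2*alpha**3
assert sp.simplify((lhs - force)/xi**2 - eq25) == 0
```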
In Eq.(21), the last two terms behave as an effective potential, so one may write
$\mathcal{H}=\frac{\xi^{2}}{2}(\partial_{0}\alpha)^{2}+V_{eff},$ (26)
where
$V_{eff}=\frac{C^{2}}{2(\mu^{2}-\xi^{2})}\frac{1}{\alpha^{2}}+\frac{\lambda}{4}\alpha^{4}.$
(27)
To determine the vacuum expectation value of the field, the derivative of the
potential with respect to $\alpha$ must be set to zero,
$\frac{\partial V_{eff}}{\partial\alpha}=0.$ (28)
This simple procedure gives the vev of $\alpha$
$\alpha_{0}=\left(\frac{C^{2}}{\lambda(\mu^{2}-\xi^{2})}\right)^{1/6}.$ (29)
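As a quick sympy illustration (again with the shorthand `M` for $\mu^{2}-\xi^{2}$, assumed positive), one can confirm both that the derivative of $V_{eff}$ vanishes at the value quoted in Eq.(29) and that this stationary point is a minimum:

```python
import sympy as sp

alpha, C, lam, M = sp.symbols('alpha C lambda M', positive=True)  # M = mu**2 - xi**2

# Effective potential of Eq.(27).
V_eff = C**2/(2*M)/alpha**2 + lam/4*alpha**4

# Candidate vev of Eq.(29).
alpha0 = (C**2/(lam*M))**sp.Rational(1, 6)

# Stationarity, Eq.(28): the first derivative vanishes at alpha0.
assert sp.simplify(sp.diff(V_eff, alpha).subs(alpha, alpha0)) == 0

# It is a minimum: the second derivative is positive there.
assert sp.simplify(sp.diff(V_eff, alpha, 2).subs(alpha, alpha0)).is_positive
```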
Substituting Eq.(29) into Eq.(18) in order to solve for the field $\gamma$ at
the vacuum,
$\partial_{0}\gamma=\frac{C}{(\mu^{2}-\xi^{2})\alpha_{0}^{2}}=\left(\frac{\lambda
C}{(\mu^{2}-\xi^{2})^{2}}\right)^{1/3}=D,$ (30)
$\gamma$ is found
$\gamma=\ln\chi=Dt+E,$ (31)
where $D$ and $E$ are further constants to be determined. Then, on the basis
of the definition in Eq.(14), the JBD scalar field at the vacuum is obtained
as
$\chi=e^{Dt+E}.$ (32)
Since the temporal evolution of the universe is designated by the scale
factor, which also gives the time dependence of the Higgs field in our
theoretical model, we can take advantage of the definition of $\alpha$ at its
vev,
$\alpha_{0}=a\chi,$ (33)
to find
$a=\frac{\alpha_{0}}{e^{Dt+E}}=\exp[-D(t-t_{0})],$ (34)
where
$\alpha_{0}e^{-E}=e^{Dt_{0}},$ (35)
in which $t_{0}$ is the age of the universe, chosen so that the scale factor
equals one today. As can be seen, Eq.(34) implies an exponential expansion of
space-time intervals, but this holds in conformal time. After one switches to
the cosmological time, which will be denoted by $t^{\prime}$ throughout the
manuscript, and arranges the constants accordingly so that today's value of
$a$ equals one, a linear expansion is obtained
$a(t^{\prime})=\frac{t^{\prime}}{t_{0}^{\prime}}.$ (36)
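The step from the exponential Eq.(34) in conformal time to the linear Eq.(36) in cosmological time can be sketched with sympy; this is an illustration using $dt^{\prime}=a(t)\,dt$ as the definition of cosmological time (note that $D<0$ here, consistent with $D=-1/t^{\prime}_{0}$ found later, so the constant slope $-D$ is positive):

```python
import sympy as sp

t, t0 = sp.symbols('t t0', real=True)
D = sp.symbols('D', real=True, nonzero=True)

# Scale factor in conformal time, Eq.(34).
a = sp.exp(-D*(t - t0))

# Cosmological time t' is defined by dt' = a(t) dt.
tprime = sp.integrate(a, t)   # antiderivative, up to an additive constant

# The derivative of a with respect to t' is constant, i.e. a is linear in t'.
da_dtprime = sp.diff(a, t) / sp.diff(tprime, t)
assert sp.simplify(da_dtprime) == -D
```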
We have already determined the evolution of the fields with time at the vev of
$\alpha$. Now, a small perturbation can be added to $\alpha$,
$\alpha=\alpha_{0}(1+\epsilon(t)),$ (37)
and inserted into Eq.(25) to obtain
$\partial_{0}^{2}\epsilon(t)-\frac{C^{2}}{\xi^{2}(\mu^{2}-\xi^{2})}\alpha_{0}^{-4}(1+\epsilon(t))^{-3}+\frac{\lambda}{\xi^{2}}\alpha_{0}^{2}(1+\epsilon(t))^{3}=0.$
(38)
Since the perturbation satisfies $\epsilon\ll 1$, the second and the third
terms in Eq.(38) can be expanded, keeping only the zeroth and the first order
terms, and Eq.(38) turns into
$\partial_{0}^{2}\epsilon(t)-\frac{C^{2}}{\xi^{2}(\mu^{2}-\xi^{2})}\alpha_{0}^{-4}(1-3\epsilon(t))+\frac{\lambda}{\xi^{2}}\alpha_{0}^{2}(1+3\epsilon(t))=0.$
(39)
Since the zeroth order terms give
$-\frac{C^{2}}{\xi^{2}(\mu^{2}-\xi^{2})}\alpha_{0}^{-4}+\frac{\lambda}{\xi^{2}}\alpha_{0}^{2}=0,$
(40)
we are left with the equation to solve
$\partial_{0}^{2}\epsilon(t)+\left(\frac{3C^{2}}{\xi^{2}(\mu^{2}-\xi^{2})}\alpha_{0}^{-4}+\frac{3\lambda}{\xi^{2}}\alpha_{0}^{2}\right)\epsilon(t)=0.$
(41)
The constant term in the parenthesis has the dimension of mass squared, so it
may be redefined to write the equation as
$\partial_{0}^{2}\epsilon(t)+m^{2}\epsilon(t)=0,$ (42)
where
$m^{2}=\left(\frac{3C^{2}}{\xi^{2}(\mu^{2}-\xi^{2})}\alpha_{0}^{-4}+\frac{3\lambda}{\xi^{2}}\alpha_{0}^{2}\right).$
(43)
Using the vev of $\alpha$ from Eq.(29), $m^{2}$ becomes
$m^{2}=\frac{6(\lambda C)^{2/3}}{\xi^{2}(\mu^{2}-\xi^{2})^{1/3}},$ (44)
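The linearization in Eqs.(38)-(41) and the simplification leading to Eq.(44) can be checked together with sympy; this is illustrative only, with `M` once more standing for $\mu^{2}-\xi^{2}$ and assumed positive:

```python
import sympy as sp

eps = sp.symbols('epsilon')
C, lam, xi, M = sp.symbols('C lambda xi M', positive=True)  # M = mu**2 - xi**2

alpha0 = (C**2/(lam*M))**sp.Rational(1, 6)   # vev, Eq.(29)

# Non-derivative terms of Eq.(25) at alpha = alpha0*(1 + eps), divided by
# alpha0 as in Eq.(38), expanded to first order in eps.
force = (-C**2/(xi**2*M)*(alpha0*(1 + eps))**-3
         + lam/xi**2*(alpha0*(1 + eps))**3)/alpha0
force_lin = sp.series(force, eps, 0, 2).removeO()

# The zeroth order cancels (Eq.(40)) and the eps coefficient is m^2 of Eq.(44).
m2 = 6*(lam*C)**sp.Rational(2, 3)/(xi**2*M**sp.Rational(1, 3))
assert sp.simplify(force_lin - m2*eps) == 0
```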
then the solutions for $\epsilon$ and $\alpha$ around the vacuum are found as
$\epsilon(t)=\epsilon_{0}(e^{imt}+e^{-imt}),$ (45)
$\alpha=\alpha_{0}(1+\epsilon_{0}(e^{imt}+e^{-imt})).$ (46)
Ultimately, we are interested in the solutions for the fields $\chi$ and $a$.
We can follow the same procedure as before, finding $\gamma$ first, then
$\chi$ and $a$. To do so, Eq.(46) is substituted into Eq.(18), again ignoring
second and higher order terms in $\epsilon$:
$\partial_{0}\gamma=\frac{C}{(\mu^{2}-\xi^{2})\alpha^{2}}=\frac{C}{(\mu^{2}-\xi^{2})}\alpha_{0}^{-2}\left(1-2\epsilon(t)\right),$
(47)
then, using the definition of the constant $D$ and integrating
$\partial_{0}\gamma=D(1-2\epsilon(t)),$ (48)
$\gamma$ is obtained as
$\gamma=Dt+F+\frac{i2D}{m}\epsilon_{0}(e^{imt}-e^{-imt}).$ (49)
Here, $F$ is another integration constant to be determined. Using the
relations $\gamma=\ln\chi$ and $\alpha=a\chi$ once more, in order, results
in
$\chi(t)=\exp\left(Dt+F+\frac{i2D}{m}\epsilon_{0}(e^{imt}-e^{-imt})\right),$
(50)
$a(t)=\alpha_{0}(1+\epsilon(t))\exp\left(-Dt-F-\frac{i2D}{m}\epsilon_{0}(e^{imt}-e^{-imt})\right).$
(51)
Once again the higher order terms are disregarded since $\epsilon\ll 1$, and
the constant $F$ is chosen equal to $E$ in Eq.(31) (since $a(t_{0})=1$), so
the evolution of the universe in conformal time is
$a(t)=\exp\left(-D(t-t_{0})+\epsilon_{0}\left(\left(1-\frac{i2D}{m}\right)e^{imt}+\left(1+\frac{i2D}{m}\right)e^{-imt}\right)\right),$
(52)
and in the cosmological time is
$a(t^{\prime})=\left(\frac{t^{\prime}}{t_{0}^{\prime}}+\epsilon_{0}\left(\left(1-\frac{i2D}{m}\right)e^{imt^{\prime}}+\left(1+\frac{i2D}{m}\right)e^{-imt^{\prime}}\right)\right).$
(53)
At this point, it is important to note that, for the quantization of the
oscillation modes of $\epsilon$, the perturbation can be written in terms of
the annihilation and creation operators $A$ and $A^{\dagger}$ as
$\epsilon(t)=\epsilon_{0}(Ae^{imt}+A^{\dagger}e^{-imt}).$ (54)
## 3 Coordinate Transformation in Field Space
From a non-static cosmological perspective, once the metric is defined as
$g_{\mu\nu}=a^{2}(t)\eta_{\mu\nu}$, one can rewrite the action in Eq.(6) more
explicitly as
$S=\int
d^{4}x\bigg{[}\frac{1}{2}\bigg{(}\xi^{2}\chi^{2}(\partial_{0}a)^{2}+2\xi^{2}a\chi\partial_{0}a\partial_{0}\chi+\mu^{2}a^{2}(\partial_{0}\chi)^{2}\bigg{)}-\frac{\lambda}{4}a^{4}\chi^{4}\bigg{]}.$
(55)
It is easily seen that this action defines a non-linear $\sigma$ model[21][22]
to which a potential term is added, and it can be represented as
$S=\frac{1}{2}\int
d^{4}x\left(G_{bc}(\psi)\partial_{0}\psi^{b}\partial_{0}\psi^{c}-V(\psi)\right)$
(56)
where $\psi^{b}$, $b=1,2$, corresponds to $a$ and $\chi$. In addition, the
field-space metric in Eq.(56) is given by
$G_{bc}=\begin{pmatrix}\xi^{2}\chi^{2}&\xi^{2}a\chi\\\
\xi^{2}a\chi&\mu^{2}a^{2}\\\ \end{pmatrix}$ (57)
whose scalar curvature can be shown to vanish after a straightforward
calculation. Since the metric is flat, the kinetic term of this action can be
converted to that of the Klein-Gordon action by a coordinate transformation in
field space. In this way, one can investigate the action in Eq.(55) from the
perspective of the Higgs mechanism. Conversely, when the following
transformation between $G_{bc}$ and $\hat{G}_{bc}$ is achieved, we have a
non-linear sigma model in the JBD picture.
$G_{bc}=\begin{pmatrix}\xi^{2}\chi^{2}&\xi^{2}a\chi\\\
\xi^{2}a\chi&\mu^{2}a^{2}\end{pmatrix}\longleftrightarrow\hat{G}_{bc}=\begin{pmatrix}1&0\\\
0&1\end{pmatrix}$ (58)
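The flatness of the field-space metric of Eq.(57) can be checked directly; the following sympy computation is an illustration, with generic two-dimensional Christoffel and Ricci formulas coded by hand:

```python
import sympy as sp

a, chi, xi, mu = sp.symbols('a chi xi mu', positive=True)
x = [a, chi]

# Field-space metric G_bc of Eq.(57).
G = sp.Matrix([[xi**2*chi**2, xi**2*a*chi],
               [xi**2*a*chi,  mu**2*a**2]])
Ginv = G.inv()

def Gamma(i, j, k):
    # Christoffel symbols Gamma^i_{jk} of the metric G.
    return sp.Rational(1, 2)*sum(
        Ginv[i, l]*(sp.diff(G[l, j], x[k]) + sp.diff(G[l, k], x[j])
                    - sp.diff(G[j, k], x[l]))
        for l in range(2))

def Ricci(j, k):
    # Ricci tensor R_{jk} built from the Christoffel symbols.
    return sum(
        sp.diff(Gamma(i, j, k), x[i]) - sp.diff(Gamma(i, j, i), x[k])
        + sum(Gamma(i, m, i)*Gamma(m, j, k) - Gamma(i, m, k)*Gamma(m, j, i)
              for m in range(2))
        for i in range(2))

# Scalar curvature R = G^{jk} R_{jk}; it vanishes identically.
R = sp.simplify(sum(Ginv[j, k]*Ricci(j, k) for j in range(2) for k in range(2)))
assert R == 0
```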
Since we are looking for a transformation of the Lagrangian density from the
JBD picture to the Higgs picture, the starting point is to write the line
element of the target space as
$ds^{2}=\frac{1}{2}[\xi^{2}\chi^{2}(da)^{2}+2\xi^{2}a\chi
dad\chi+\mu^{2}a^{2}(d\chi)^{2}].$ (59)
Adding and subtracting the term $\xi^{2}a^{2}(d\chi)^{2}$ in Eq.(59), and then
taking the factor of $\xi^{2}a^{2}\chi^{2}$ outside the parenthesis results in
$ds^{2}=\frac{1}{2}\xi^{2}a^{2}\chi^{2}\left[\left(\frac{d\chi}{\chi}+\frac{da}{a}\right)^{2}+\frac{\mu^{2}-\xi^{2}}{\xi^{2}}\left(\frac{d\chi}{\chi}\right)^{2}\right].$
(60)
Relating $a$ and $\chi$ to new fields $\alpha$ and $\gamma$ as we did before
in Eq.(13) and Eq.(14), gives
$a(\alpha,\gamma)=\alpha e^{-\gamma},$ (61)
$\frac{da}{a}=\frac{d\alpha}{\alpha}-d\gamma,$ (62) $\chi(\gamma)=e^{\gamma},$
(63) $\frac{d\chi}{\chi}=d\gamma,$ (64)
the line element in Eq.(60) turns out to be
$ds^{2}=\frac{1}{2}\xi^{2}\left(d\alpha^{2}+\frac{\mu^{2}-\xi^{2}}{\xi^{2}}\alpha^{2}d\gamma^{2}\right).$
(65)
At this point, another transformation is needed to remove the remaining
factors, and the following definitions accomplish this.
$\alpha(\rho)=\frac{\rho}{\xi}$ (66) $d\alpha=\frac{d\rho}{\xi}$ (67)
$\gamma(\theta)=\frac{\xi}{\sqrt{\mu^{2}-\xi^{2}}}\theta$ (68)
$d\gamma=\frac{\xi}{\sqrt{\mu^{2}-\xi^{2}}}d\theta$ (69)
Substitution of Eq.(67) and Eq.(69) into Eq.(65) yields
$ds^{2}=\frac{1}{2}d\rho^{2}+\frac{1}{2}\rho^{2}d\theta^{2}.$ (70)
Here, $\rho$ and $\theta$ correspond to polar coordinates. To obtain
$\hat{G}_{bc}$ in Eq.(58), it is straightforward to define them as
$\rho(\phi_{3},\phi_{5})=\sqrt{\phi_{3}^{2}+\phi_{5}^{2}},$ (71)
$\theta(\phi_{3},\phi_{5})=\arctan\frac{\phi_{5}}{\phi_{3}}.$ (72)
When these new coordinates are used in Eq.(70), one can write the line element
as desired from the very beginning of this section and it is
$ds^{2}=\frac{1}{2}(d\phi_{3}^{2}+d\phi_{5}^{2}).$ (73)
To write the coordinates $a$ and $\chi$ in terms of $\phi_{3}$ and $\phi_{5}$,
all the transformations can be applied one by one from the beginning to the
end. First of all, after implementing Eq.(66) and Eq.(68) into Eq.(61) and
Eq.(63), $a$ and $\chi$ can be expressed like
$a(\rho,\theta)=\frac{\rho}{\xi}\exp\left(-\frac{\xi}{\sqrt{\mu^{2}-\xi^{2}}}\theta\right),$
(74) $\chi(\theta)=\exp\left(\frac{\xi}{\sqrt{\mu^{2}-\xi^{2}}}\theta\right).$
(75)
Then, the transformation from the polar coordinates to the Cartesian ones
results in
$a(\phi_{3},\phi_{5})=\frac{\sqrt{\phi_{3}^{2}+\phi_{5}^{2}}}{\xi}\exp\left(-\frac{\xi}{\sqrt{\mu^{2}-\xi^{2}}}\arctan\left(\frac{\phi_{5}}{\phi_{3}}\right)\right),$
(76)
$\chi(\phi_{3},\phi_{5})=\exp\left(\frac{\xi}{\sqrt{\mu^{2}-\xi^{2}}}\arctan\left(\frac{\phi_{5}}{\phi_{3}}\right)\right).$
(77)
At this point, it is also possible to express $\phi_{3}$ and $\phi_{5}$ in
terms of $a$ and $\chi$ by carrying out all the transformations in reverse
order. Starting from the polar coordinates obtained last, $\phi_{3}$ and
$\phi_{5}$ are
$\phi_{3}(\rho,\theta)=\rho\cos\theta,$ (78)
$\phi_{5}(\rho,\theta)=\rho\sin\theta.$ (79)
Thanks to Eq.(66) and Eq.(68), $\rho$ and $\theta$ are found
$\rho(\alpha)=\xi\alpha,$ (80)
$\theta(\gamma)=\frac{\sqrt{\mu^{2}-\xi^{2}}}{\xi}\gamma,$ (81)
and then using Eq.(80) and Eq.(81) in Eq.(78) and Eq.(79) gives
$\phi_{3}(\alpha,\gamma)=\xi\alpha\cos\left(\frac{\sqrt{\mu^{2}-\xi^{2}}}{\xi}\gamma\right),$
(82)
$\phi_{5}(\alpha,\gamma)=\xi\alpha\sin\left(\frac{\sqrt{\mu^{2}-\xi^{2}}}{\xi}\gamma\right).$
(83)
Since, on the basis of Eq.(61) and Eq.(63), $\alpha$ and $\gamma$ are
$\alpha(a,\chi)=a\chi,$ (84) $\gamma(\chi)=\ln\chi,$ (85)
substituting these into Eq.(82) and Eq.(83) gives the scalar fields of the
Higgs picture $\phi_{3}$ and $\phi_{5}$ in terms of those of the JBD picture
$a$ and $\chi$ as
$\phi_{3}(a,\chi)=\xi
a\chi\cos\left(\frac{\sqrt{\mu^{2}-\xi^{2}}}{\xi}\ln\chi\right),$ (86)
$\phi_{5}(a,\chi)=\xi
a\chi\sin\left(\frac{\sqrt{\mu^{2}-\xi^{2}}}{\xi}\ln\chi\right).$ (87)
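As an illustrative consistency check, one can verify with sympy that the flat kinetic term built from Eq.(86) and Eq.(87) reproduces the sigma-model kinetic term appearing in Eq.(55):

```python
import sympy as sp

t = sp.symbols('t')
xi, mu = sp.symbols('xi mu', positive=True)
a = sp.Function('a')(t)
chi = sp.Function('chi')(t)

k = sp.sqrt(mu**2 - xi**2)/xi   # assumes mu > xi, i.e. a negative JBD parameter

# Field-space coordinates of Eq.(86) and Eq.(87).
phi3 = xi*a*chi*sp.cos(k*sp.log(chi))
phi5 = xi*a*chi*sp.sin(k*sp.log(chi))

# Flat kinetic term of the Higgs picture, Eq.(88)...
flat = sp.diff(phi3, t)**2 + sp.diff(phi5, t)**2
# ...must equal the sigma-model kinetic form of Eq.(55).
sigma = (xi**2*chi**2*sp.diff(a, t)**2
         + 2*xi**2*a*chi*sp.diff(a, t)*sp.diff(chi, t)
         + mu**2*a**2*sp.diff(chi, t)**2)

assert sp.simplify(flat - sigma) == 0
```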
Therefore, in terms of $\phi_{3}$ and $\phi_{5}$, the Lagrangian density in
Eq.(55) can be stated as
$\mathcal{L}=\frac{1}{2}(\partial_{0}\phi_{3})^{2}+\frac{1}{2}(\partial_{0}\phi_{5})^{2}-\frac{\kappa}{4}(\phi^{2}_{3}+\phi^{2}_{5})^{2}$
(88)
where $\kappa=\lambda\xi^{-4}$.
## 4 The Higgs Picture
The Lagrangian density of the Higgs field in doublet form is taken to be
$\mathcal{L}=\partial_{\mu}\Theta^{\dagger}\partial^{\mu}\Theta-V(\Theta)$
(89)
with $\Theta=\frac{1}{\sqrt{2}}\begin{pmatrix}\phi_{1}+i\phi_{2}\\\
\phi_{3}+i\phi_{4}\end{pmatrix}$ where $\phi_{a}$ corresponds to scalar fields
and $a=1,2,3,4$. Furthermore, the potential term can be defined as
$V(\Theta)=-\frac{1}{2}\bar{m}^{2}\Theta^{\dagger}\Theta+\frac{\kappa}{4}(\Theta^{\dagger}\Theta)^{2}$
(90)
where the dimensionless constant $\kappa>0$ and the scalar fields have
dimension of mass. In addition, the mass term has a minus sign so that, for a
time-independent expectation value, spontaneous symmetry breaking occurs.
terms of the fields $\phi_{a}$, the Lagrangian density can be written as
$\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi_{a}\partial^{\mu}\phi^{a}-\frac{\kappa}{4}(\phi_{a}\phi^{a})^{2}$
(91)
where we put $\bar{m}=0$ so that the potential term in Eq.(91) is purely
quartic.
We will extend this Lagrangian by adding an additional scalar field
$\phi_{5}$, so that now $a=1,2,3,4,5$. Note that the symmetry of this extended
Lagrangian density is SO(5), which is larger than the gauge symmetry
$SU(2)\times U(1)$ of the standard model.
Since the rotational symmetry is spontaneously broken, a fluctuation emerges
about the minimum. Breaking the symmetry eliminates three of the five
components, such that
$\phi_{1}=\phi_{2}=\phi_{4}=0.$ (92)
Moreover, the fields are taken to be independent of the spatial coordinates so
that they can be transformed to those of the Jordan-Brans-Dicke theory. One
then obtains the Lagrangian density in Eq.(88).
At this point, applying the field space coordinate transformations of Eq.(86)
and Eq.(87) to the vacuum expectation values and the quantum fluctuations of
the fields of the JBD theory, in order to find their counterparts in the Higgs
mechanism, gives
$\begin{split}\phi_{3}=\xi\chi_{0}\cos(H(t))\bigg{(}&1+\epsilon_{0}\big{(}Ae^{imt}+A^{\dagger}e^{-imt}\big{)}\\\
&-i\sqrt{\frac{2}{3}}\tan(H(t))\epsilon_{0}\big{(}Ae^{imt}-A^{\dagger}e^{-imt}\big{)}\bigg{)}\end{split}$
(93)
$\begin{split}\phi_{5}=\xi\chi_{0}\sin(H(t))\bigg{(}&1+\epsilon_{0}\big{(}Ae^{imt}+A^{\dagger}e^{-imt}\big{)}\\\
&+i\sqrt{\frac{2}{3}}\cot(H(t))\epsilon_{0}\big{(}Ae^{imt}-A^{\dagger}e^{-imt}\big{)}\bigg{)}\end{split}$
(94)
where
$H(t)=\frac{\sqrt{\mu^{2}-\xi^{2}}}{\xi}(Dt+E),$ (95)
and
$\chi_{0}=e^{Dt_{0}+E}$ (96)
which is today's value of the JBD field.
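The numerical factor $\sqrt{2/3}$ in Eqs.(93)-(94) arises from the combination $2D\sqrt{\mu^{2}-\xi^{2}}/(\xi m)$; an illustrative sympy check (with `M` standing for $\mu^{2}-\xi^{2}$, assumed positive) confirms it:

```python
import sympy as sp

C, lam, xi, M = sp.symbols('C lambda xi M', positive=True)  # M = mu**2 - xi**2

D2 = (lam*C/M**2)**sp.Rational(2, 3)                            # D**2, from Eq.(30)
m2 = 6*(lam*C)**sp.Rational(2, 3)/(xi**2*M**sp.Rational(1, 3))  # Eq.(44)
k2 = M/xi**2                       # square of sqrt(mu**2 - xi**2)/xi from H(t)

# (2*k*D/m)**2 should equal 2/3, the square of the factor in Eqs.(93)-(94).
assert sp.simplify(4*k2*D2/m2) == sp.Rational(2, 3)
```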
We note that the system can be quantized by imposing the commutation relation
$\left[A,A^{\dagger}\right]=1.$ (97)
Here, $A$ and $A^{\dagger}$ are the annihilation and creation operators of the
quantum particles, and the vacuum expectation values of $\phi_{3}$ and
$\phi_{5}$ are given by
$\langle\phi_{3}\rangle=\xi\chi_{0}\cos\left(\frac{\sqrt{\mu^{2}-\xi^{2}}}{\xi}Dt\right),$
(98)
$\langle\phi_{5}\rangle=\xi\chi_{0}\sin\left(\frac{\sqrt{\mu^{2}-\xi^{2}}}{\xi}Dt\right).$
(99)
Here, the temporal evolution of the vev of the Higgs field is governed by the
argument of the cosine in Eq.(98), i.e. by the parameter $D$. As can be
checked by relating the scale factors in the two different time scales
(conformal and cosmological) in Eq.(34) and Eq.(36),
$D=-\frac{1}{t^{\prime}_{0}}$, where $t^{\prime}_{0}$ is the age of the
universe in cosmological time. Thus, in our model $D$ is very small and the
evolution of the Higgs field is very slow, so the condition on the particle
masses mentioned before Eq.(2) in the introduction is satisfied.
## 5 Conclusion
A cosmological model in which the expansion of the universe is related to the
time dependent vev of the Higgs field has been proposed. Based upon Eq.(1),
the time dependent inertial mass admits another interpretation, in which the
time dependence of the Higgs field is part of the metric tensor. With this
approach, the Higgs field has been treated as a conformal factor and related
to the scale factor of the FLRW metric. Since it is a more complete theory of
gravitation with respect to Mach’s principle, the JBD theory has been
considered, and only the scalar mode of the theory has been studied. By taking
the scale factor $a(t)$ and the JBD field $\chi(t)$ to depend only on time,
the relation between the JBD cosmology and the Higgs mechanism has been
established through the field space coordinate transformations (Eq.(76),
Eq.(77), Eq.(86) and Eq.(87)) for negative values of the JBD parameter.
Although solar system experiments constrain the original JBD parameter
$\omega$ to be a large positive number[23][24], scenarios based on its
negative values[25, 26, 27, 28, 29, 30] are viable and quite common in the
literature on cosmological scales. In addition, negative values of the
coupling parameter are encountered in applications of the low-energy effective
action of string theory[31][32], where the dilatonic coupling constant is
chosen as $\omega=-1$ in the string frame[33, 34, 35]. Finally, the
oscillation modes about the vacuum in both pictures have been found, and it
has been shown that they are quantizable.
## References
* [1] G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, R. Aben, B. Abi, M. Abolins, O. AbouZeid, et al., “Observation of a new particle in the search for the standard model higgs boson with the atlas detector at the lhc,” Physics Letters B, vol. 716, no. 1, pp. 1–29, 2012.
* [2] P. W. Higgs, “Broken symmetries and the masses of gauge bosons,” Physical Review Letters, vol. 13, no. 16, p. 508, 1964.
* [3] S. Weinberg, “Conceptual foundations of the unified theory of weak and electromagnetic interaction,” in Origin Of Symmetries, pp. 215–223, World Scientific, 1991.
* [4] A. Salam, “Gauge unification of fundamental forces,” Reviews of Modern Physics, vol. 52, no. 3, p. 525, 1980.
* [5] S. L. Glashow, “Towards a unified theory: Threads in a tapestry,” Reviews of Modern Physics, vol. 52, no. 3, p. 539, 1980.
* [6] R. v. Eotvos, “Beitrage zum gesetze der proportionalitat von tragheit und gravitat,” Ann. Phys., vol. 68, pp. 11–66, 1922.
* [7] D. W. Sciama, “On the origin of inertia,” Monthly Notices of the Royal Astronomical Society, vol. 113, no. 1, pp. 34–42, 1953.
* [8] M. Arik and T. Tok, “The scalar mode of gravity,” arXiv preprint arXiv:2001.02347, 2020.
* [9] A. Friedman, “Über die krümmung des raumes,” Zeitschrift für Physik, vol. 10, no. 1, pp. 377–386, 1922.
* [10] G. Lemaître, “Expansion of the universe, the expanding universe,” Monthly Notices of the Royal Astronomical Society, vol. 91, pp. 490–501, 1931.
* [11] H. P. Robertson, “Kinematics and world-structure,” The Astrophysical Journal, vol. 82, p. 284, 1935.
* [12] A. G. Walker, “On milne’s theory of world-structure,” Proceedings of the London Mathematical Society, vol. 2, no. 1, pp. 90–127, 1937.
* [13] P. J. E. Peebles and B. Ratra, “The cosmological constant and dark energy,” Reviews of modern physics, vol. 75, no. 2, p. 559, 2003.
* [14] V. Sahni, “Dark matter and dark energy,” in The Physics of the Early Universe, pp. 141–179, Springer, 2004.
* [15] C. Brans and R. H. Dicke, “Mach’s principle and a relativistic theory of gravitation,” Physical review, vol. 124, no. 3, p. 925, 1961.
* [16] D. J. Raine, “Mach’s principle in general relativity,” Monthly Notices of the Royal Astronomical Society, vol. 171, no. 3, pp. 507–528, 1975.
* [17] A. Joyce, L. Lombriser, and F. Schmidt, “Dark energy versus modified gravity,” Annual Review of Nuclear and Particle Science, vol. 66, pp. 95–122, 2016.
* [18] P. Jordan, “Zum gegenwärtigen stand der diracschen kosmologischen hypothesen,” Zeitschrift für Physik, vol. 157, no. 1, pp. 112–121, 1959.
* [19] R. H. Dicke, “New research on old gravitation,” Science, vol. 129, no. 3349, pp. 621–624, 1959.
* [20] R. H. Dicke, “Gravitation—an enigma,” American Scientist, vol. 47, no. 1, pp. 25–40, 1959.
* [21] M. Gell-Mann and M. Lévy, “The axial vector current in beta decay,” Il Nuovo Cimento (1955-1965), vol. 16, no. 4, pp. 705–726, 1960.
* [22] M. E. Peskin and D. V. Schroeder, “An introduction to quantum field theory (boulder, co),” 1995.
* [23] Y.-C. Li, F.-Q. Wu, and X. Chen, “Constraints on the brans-dicke gravity theory with the planck data,” Physical Review D, vol. 88, no. 8, p. 084053, 2013.
* [24] C. M. Will, Theory and experiment in gravitational physics. Cambridge university press, 2018.
* [25] O. Bertolami and P. Martins, “Nonminimal coupling and quintessence,” Physical Review D, vol. 61, no. 6, p. 064007, 2000.
* [26] S. Sen and A. Sen, “Late time acceleration in brans-dicke cosmology,” Physical Review D, vol. 63, no. 12, p. 124006, 2001.
* [27] S. Sen and T. Seshadri, “Self interacting brans–dicke cosmology and quintessence,” International Journal of Modern Physics D, vol. 12, no. 03, pp. 445–460, 2003.
* [28] N. Banerjee and D. Pavon, “A quintessence scalar field in brans-dicke theory,” Classical and Quantum Gravity, vol. 18, no. 4, p. 593, 2001.
* [29] A. Batista, J. Fabris, and R. de Sa Ribeiro, “A remark on brans–dicke cosmological dust solutions with negative $\omega$,” General Relativity and Gravitation, vol. 33, no. 7, pp. 1237–1244, 2001.
* [30] J. Fabris, S. Gonçalves, and R. Ribeiro, “Late time accelerated brans-dicke pressureless solutions and the supernovae type ia data,” arXiv preprint astro-ph/0510779, 2005.
* [31] J. Fabris, R. Furtado, P. Peter, and N. Pinto-Neto, “Regular cosmological bouncing solutions in low energy effective action from string theories,” Physical Review D, vol. 67, no. 12, p. 124003, 2003.
* [32] G. Calcagni, S. Tsujikawa, and M. Sami, “Dark energy and cosmological solutions in second-order string gravity,” Classical and Quantum Gravity, vol. 22, no. 19, p. 3977, 2005.
* [33] E. J. Copeland, A. Lahiri, and D. Wands, “Low energy effective string cosmology,” Physical Review D, vol. 50, no. 8, p. 4868, 1994.
* [34] M. Gasperini, J. Maharana, and G. Veneziano, “Graceful exit in quantum string cosmology,” Nuclear Physics B, vol. 472, no. 1-2, pp. 349–360, 1996.
* [35] D. Wands, “String-inspired cosmology,” Classical and Quantum Gravity, vol. 19, no. 13, p. 3403, 2002.
Statistical models and
probabilistic methods
on Riemannian manifolds
Salem Said – CNRS, Université de Bordeaux
CHAPTER: A GUIDE TO THIS THESIS
This thesis reflects the major themes of my work, which I have carried out in the past four years. It does leave out some of this work, especially on the subject of warped information metrics. I hope readers may find time for at least a glance at this “missing" part (for example, in [1]). However, the thesis is rather self-contained, and I feel that the best way of reading it is just from beginning to end, uninterrupted.
At any rate, I would like to ask readers to begin with Chapter <ref>. Then, once this is done, they can skip to Chapters <ref> and <ref>, which I would like to ask them to read together, or go on to Chapter <ref>, which is quite independent from the following. The same goes for Chapter <ref>, which can be read right after Chapter <ref>, provided just a little bit of familiarity with Chapter <ref>.
Each chapter begins with a table of contents, followed by a sort of “abstract", which provides some additional detail on the table of contents, and points to some of the more interesting results. I have done my best to avoid the thesis being a copy-paste of published research papers. Chapter <ref> uncovers several new connections between Riemannian Gaussian distributions and random matrix theory, while Chapter <ref> is entirely made up of previously unpublished material. The other chapters stick more closely to my existing papers (published, or under review), although I have made a consistent effort to improve the presentation, and to include useful background and historical discussion.
I hope that readers will find it stimulating, to read an “original thesis". On the other hand, exploring new ideas exposes one to the risk of making mistakes (of various magnitude), and I also hope these are duly pointed out, and the appropriate criticism is served up, without restraint. On the whole, writing this thesis has been a humbling experience for me. I have found out, once and again, that I was unable to answer questions or to prove statements,even when they seemed very natural. Chapter <ref>, a short final chapter, contains a list of such “open problems" (they are open to me, but others may find them easy).
I should acknowledge the input of many colleagues, who have shaped the ideas laid out in the following. Chapter <ref> was born out of discussions with Marc Arnaudon, and Chapter <ref> relies heavily on joint work with Alain Durmus, Pablo Jimenez, and Eric Moulines [2, 3].
For Chapter <ref>, the idea of a useful connection between Riemannian Gaussian distributions and random matrix theory was first suggested to me by my colleague Yannick Berthoumieu. During the summer of 2020, I worked on this idea with Cyrus Mostajeran and Simon Heuveline. Later on, when I was nearly finished writing this thesis, I was very excited to discover the work of Leonardo Santilli and Miguel Tierz [4], who were simultaneously developing the same idea. It is really a great satisfaction to see a whole project unfold out of an “innocent" discussion. For this, I want to thank all of the colleagues I just mentioned.
Perhaps nobody will ever write a better preface than Cervantes, whose following famous words certainly apply here and now. Idle reader : Without my swearing to it, you can believe that I would like this book, the child of my understanding, to be the most beautiful, the most brilliant, and the most discreet that anyone could imagine. But I have not been able to contravene the natural order; in it, like begets like.
CHAPTER: NOTATION AND BACKGROUND
Certainly, this thesis is intended for specialised readers, who are already familiar with the basics of Riemannian geometry. This first chapter is not a stand-alone introduction to Riemannian geometry, but merely hopes to help the readers ease into the material in subsequent chapters : by recalling some elementary notions in Riemannian geometry, I hope to find a shared language with my benevolent readers. Some original, or even unpublished, material is still included. As discussed in the following :
* <ref> – <ref> lead up to the second-order Taylor expansion of a function defined on a Riemannian manifold. Proposition <ref> of <ref> states that geodesic curves are exactly those curves which admit a Taylor expansion of any $C^2$ function, in terms of its gradient and Hessian.
* <ref> recalls real Grassmann manifolds, and the PCA objective function. It presents a calculation of the gradient and Hessian of this function, based on the symmetric space structure of real Grassmann manifolds.
* <ref> and <ref> contain some original material, from [2]. In <ref>, the original concept of regular retraction (a retraction which avoids the cut locus) is introduced. In <ref>, the usual “projection retraction" on the real Grassmann manifold is shown to be a regular retraction.
* <ref> recalls the usual metric and Hessian comparison theorems of Riemannian geometry.
* <ref> is an application of <ref>, studied in [2]. It introduces the robust Riemannian barycentre of a probability measure on a Hadamard manifold, proving its existence and uniqueness.
* <ref> is concerned with Riemannian volume : integration in geodesic spherical coordinates, volume comparison, and integral formulae for Riemannian symmetric spaces. All of these will be very important, in Chapters <ref> and <ref>.
* <ref> provides the previously unpublished Propositions <ref> and <ref>, which may be used to compute geodesics in symmetric spaces.
§ THE LEVI-CIVITA CONNECTION
A smooth (0,2)-tensor field $g$, on a finite-dimensional smooth manifold $M$, is called a Riemannian metric, if the bilinear form
\begin{equation} \label{eq:metric}
\langle u,\!v\rangle_{\scriptscriptstyle x} = g_{\scriptscriptstyle x}(u,v) \hspace{1cm} u\,,v \in T_xM
\end{equation}
is a true scalar product, for each $x \in M$. In this case, for $u \in T^{\phantom{*}}_xM$ and $t \in T^*_xM$, the identities
(u^{\flat},v) = \langle u,\!v\rangle_{\scriptscriptstyle x} \hspace{0.3cm}\mbox{and}\hspace{0.3cm}
\langle t^{\,\scriptscriptstyle \#},\!v\rangle_{\scriptscriptstyle x}= \left(t,v\right) \hspace{1cm} v \in T^{\phantom{*}}_xM
uniquely define $u^{\scriptscriptstyle \flat} \in T^*_xM$ and $t^{\,\#} \in T^{\phantom{*}}_xM$. By a useful abuse of notation,
\begin{equation} \label{eq:pairing}
u^{\flat} = g(u) \hspace{1cm} t^{\,\scriptscriptstyle \#} = g^{\scriptscriptstyle -1}(t)
\end{equation}
The Levi-Civita connection of $g$ is the unique affine connection $\nabla$ which is metric, so that
\begin{equation} \label{eq:metricconnection}
\nabla g = 0
\end{equation}
and torsionless, so that the exterior derivative $d\theta$, of any $1$-form $\theta$, reads
\begin{equation} \label{eq:tortionless}
d\theta(X,Y) = \nabla_X\theta(Y) - \nabla_Y\theta(X)
\end{equation}
for vector fields $X$ and $Y$. In effect, by (<ref>) and (<ref>), the $(1,1)$-tensor field $\nabla X$ (the covariant derivative of the vector field $X$), decomposes into self-adjoint and skew parts,
\begin{equation} \label{eq:koszul}
2\,\langle\nabla_YX,Z\rangle = \mathcal{L}_Xg(Y,Z) + dX^{\flat}(Y,Z)
\end{equation}
where $\mathcal{L}_Xg$ denotes the Lie derivative of the metric $g$ along $X$, the “linear elasticity tensor" (the equivalence between (<ref>)–(<ref>) and (<ref>) is the content of Koszul's theorem).
Given local coordinates $(x^i\,;i=1,\ldots,n)$ on an open $U \subset M$, there is a coordinate frame $(\partial_i)$, along with a coframe $(dx^i)$ — of course, $\partial_i$ stands for $\left.\partial\middle/\partial x^i\right.$. In terms of these coordinates, the metric $g$ takes on the form of a length element
\begin{equation} \label{eq:lengthelement}
g = g_{ij}\,dx^i\otimes dx^j \hspace{1cm} g_{ij} = \langle \partial_i\,,\partial_j\rangle
\end{equation}
and covariant derivatives may be expressed, in coordinate form,
\begin{equation} \label{eq:christoffel}
\nabla X = \left\lbrace\partial_j X^i + \Gamma^i_{jk} X^k\right\rbrace \partial_i \otimes dx^j \hspace{1cm} \nabla_{\partial j}\,\partial_k = \Gamma^i_{jk}\,\partial_i
\end{equation}
using the Christoffel symbols $(\Gamma^i_{jk})$.
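To make (<ref>) concrete, the Christoffel symbols can be recovered numerically from the metric coefficients, using the classical formula $\Gamma^i_{jk} = \frac{1}{2}\,g^{il}\left(\partial_j\hspace{0.02cm} g_{lk} + \partial_k\hspace{0.02cm} g_{jl} - \partial_l\hspace{0.02cm} g_{jk}\right)$ (standard, though not written out above). The following sketch, a numerical illustration rather than part of the text, does this for the round metric on $S^2$, $g = d\theta\otimes d\theta + \sin^2\!\theta\; d\varphi\otimes d\varphi$, approximating the partial derivatives by central finite differences.

```python
import numpy as np

def metric(x):
    # round metric on S^2, coordinates (theta, phi): g = diag(1, sin^2 theta)
    theta, _ = x
    return np.diag([1.0, np.sin(theta) ** 2])

def christoffel(x, h=1e-6):
    # Gamma^i_{jk} = (1/2) g^{il} (d_j g_{lk} + d_k g_{jl} - d_l g_{jk}),
    # with the derivatives d_l g_{jk} approximated by central differences
    n = len(x)
    dg = np.zeros((n, n, n))        # dg[l, j, k] = partial_l g_{jk}
    for l in range(n):
        e = np.zeros(n); e[l] = h
        dg[l] = (metric(x + e) - metric(x - e)) / (2 * h)
    ginv = np.linalg.inv(metric(x))
    gam = np.zeros((n, n, n))       # gam[i, j, k] = Gamma^i_{jk}
    for i in range(n):
        for j in range(n):
            for k in range(n):
                gam[i, j, k] = 0.5 * sum(
                    ginv[i, l] * (dg[j, l, k] + dg[k, j, l] - dg[l, j, k])
                    for l in range(n))
    return gam

gam = christoffel(np.array([0.7, 0.3]))
# closed-form values on the sphere: Gamma^theta_{phi phi} = -sin(theta)cos(theta),
# and Gamma^phi_{theta phi} = cot(theta)
assert np.isclose(gam[0, 1, 1], -np.sin(0.7) * np.cos(0.7), atol=1e-5)
assert np.isclose(gam[1, 0, 1], np.cos(0.7) / np.sin(0.7), atol=1e-5)
```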
§ PARALLEL TRANSPORT AND GEODESICS
A vector field $X$, along a smooth curve $c:I\rightarrow M$, defined on some interval $I \subset \mathbb{R}$, is a map $X:I \rightarrow TM$ such that
$\pi \circ X = c$ — of course, $\pi:TM \rightarrow M$ denotes the canonical projection. The Levi-Civita connection $\nabla$ can be used to compute the covariant derivative of $X$ along $c$, itself a vector field along $c$, here denoted $\nabla_{\dot{c}\,}X$. In local coordinates,
\begin{equation} \label{eq:dotX}
\nabla_{\dot{c}\,}X(t) = \left \lbrace \frac{d}{dt}X^i(t) + (\Gamma^i_{jk}\circ c(t))\, \dot{c}^{\,j}(t)X^k(t)\right\rbrace (\partial_i\circ c)(t)
\end{equation}
and this suggests writing $\nabla_{\dot{c}\,}X = \nabla_{t\,}X$ or even $\dot{X}$, when $c$ is understood from the context. Now, $X$ is called parallel along $c$ if $\nabla_{\dot{c}\,}X = 0$. From (<ref>), this means that the components $X^i(t)$ satisfy a linear differential equation with smooth coefficients.
Thus, if $X$ is parallel along $c$, then $X$ is completely determined by its value at any instant, say $t_{\scriptscriptstyle o} \in I$. Equivalently, if $v$ is tangent to $M$ at $c(t_{\scriptscriptstyle o})$, then there exists a unique parallel vector field $X$ along $c$, with $X(t_{\scriptscriptstyle o}) = v$. It follows that, for $t \in I$, there exists a linear operator $\Pi^t_{t_{\scriptscriptstyle o}}$ which maps $T_{c(t_{\scriptscriptstyle o})}M$ onto $T_{c(t)}M$, by $\Pi^t_{t_{\scriptscriptstyle o}}(v) = X(t)$. This linear operator $\Pi^t_{t_{\scriptscriptstyle o}}$ is called parallel transport along $c$, from $c(t_{\scriptscriptstyle o})$ to $c(t)$, and has the following properties,
\begin{equation} \label{eq:hemigroup}
\text{hemigroup property} \hspace{1cm} \Pi^t_{t_{\scriptscriptstyle o}} = \Pi^t_{t_{\scriptscriptstyle 1}} \circ \Pi^{t_{\scriptscriptstyle 1}}_{t_{\scriptscriptstyle o}}
\end{equation}
\begin{equation} \label{eq:isometry}
\phantom{abcd}\text{isometry property} \hspace{1cm} \Vert\Pi^t_{t_{\scriptscriptstyle o}}(v)\Vert_{\scriptscriptstyle c(t)} = \Vert v\Vert_{\scriptscriptstyle c(t_{\scriptscriptstyle \scriptscriptstyle o})}
\end{equation}
where $\Vert \cdot \Vert_x$ is the norm associated with the scalar product in (<ref>), for any $x \in M$. Clearly, if one knows how to compute parallel transports, then one is able to recover covariant derivatives,
\begin{equation} \label{eq:pinabla}
\nabla_{\dot{c}\,}X(t_{\scriptscriptstyle o}) = \left.\frac{d}{dt}\right|_{t=t_{\scriptscriptstyle o}}\Pi^{t_{\scriptscriptstyle o}}_{t}(X(t))
\end{equation}
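As an illustration of the isometry property (<ref>), and of the fact that parallel transport reduces to a linear ODE, consider the round sphere $S^2$ in spherical coordinates $(\theta\,,\varphi)$, and the latitude circle $c(t) = (\theta_{\scriptscriptstyle o}\,,t)$. Along $c$, the equation $\nabla_{\dot{c}\,}X = 0$ reads $\dot{X}^\theta = \sin\theta_{\scriptscriptstyle o}\cos\theta_{\scriptscriptstyle o}\,X^\varphi$ and $\dot{X}^\varphi = -\cot\theta_{\scriptscriptstyle o}\,X^\theta$ (using the Christoffel symbols of the round metric). The following numerical sketch, not part of the text, integrates this system around the full circle, checking that the norm is preserved, and that the vector comes back rotated by the classical holonomy angle $2\pi\cos\theta_{\scriptscriptstyle o\,}$.

```python
import numpy as np

theta0 = np.pi / 3                       # latitude circle c(t) = (theta0, t)
cot = np.cos(theta0) / np.sin(theta0)

def rhs(X):
    # parallel transport ODE: dX^i/dt = -Gamma^i_{jk} c'^j X^k, with c' = (0, 1)
    return np.array([np.sin(theta0) * np.cos(theta0) * X[1], -cot * X[0]])

X = np.array([1.0, 0.0])                 # start with the unit vector d/dtheta
steps = 20000
h = 2 * np.pi / steps
for _ in range(steps):                   # classical fourth-order Runge-Kutta
    k1 = rhs(X); k2 = rhs(X + 0.5 * h * k1)
    k3 = rhs(X + 0.5 * h * k2); k4 = rhs(X + h * k3)
    X = X + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# isometry property: (X^theta)^2 + sin^2(theta0) (X^phi)^2 is preserved
assert abs(X[0] ** 2 + np.sin(theta0) ** 2 * X[1] ** 2 - 1.0) < 1e-8
# holonomy: rotation by 2 pi cos(theta0) = pi, so X^theta = -1 and X^phi = 0
assert abs(X[0] + 1.0) < 1e-6 and abs(X[1]) < 1e-6
```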
A smooth curve $c:I\rightarrow M$ is called a geodesic curve, if its velocity vector field $\dot{c}$ is parallel. This means that $c$ satisfies the geodesic equation,
\begin{equation} \label{eq:geodesicequation}
\nabla_{\dot{c}\,}\dot{c} = 0 \text{ or } \ddot{c} = 0
\end{equation}
Written out in local coordinates, this is a non-linear ordinary differential equation,
\begin{equation} \label{eq:accelerationcoordinates}
\frac{d^{\scriptscriptstyle\hspace{0.03cm}2}}{dt^{\scriptscriptstyle 2}}c^i(t) + \Gamma^i_{jk}(c(t))\frac{d}{dt}c^{\,j}(t)\frac{d}{dt}c^k(t) = 0
\end{equation}
If its solutions $c(t)$ exist for all finite $t \in \mathbb{R}$, for any initial conditions $c(t_{\scriptscriptstyle o}) = x$ and $\dot{c}(t_{\scriptscriptstyle o})=v$, then the metric $g$ on the manifold $M$ is called geodesically complete. In this case, the Riemannian exponential map $\mathrm{Exp}:TM\rightarrow M$, given by $\mathrm{Exp}_x(v) = c(1)$ where $c$ is the geodesic with $c(0) = x$ and $\dot{c}(0) = v$, is well-defined.
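For the round metric on $S^2$, equation (<ref>) specialises to $\ddot{\theta} = \sin\theta\cos\theta\;\dot{\varphi}^{\scriptscriptstyle 2}$ and $\ddot{\varphi} = -2\cot\theta\;\dot{\theta}\dot{\varphi}$. The following sketch, a numerical illustration rather than part of the text, integrates this system : starting on the equator, with unit speed along it, the solution stays on the equator, which is indeed a great circle.

```python
import numpy as np

def rhs(s):
    # geodesic equations on S^2, for the state s = (theta, phi, dtheta, dphi)
    th, ph, dth, dph = s
    return np.array([dth, dph,
                     np.sin(th) * np.cos(th) * dph ** 2,
                     -2.0 * (np.cos(th) / np.sin(th)) * dth * dph])

def integrate(s, t, steps=1000):
    # classical fourth-order Runge-Kutta
    h = t / steps
    for _ in range(steps):
        k1 = rhs(s); k2 = rhs(s + 0.5 * h * k1)
        k3 = rhs(s + 0.5 * h * k2); k4 = rhs(s + h * k3)
        s = s + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

# start on the equator (theta = pi/2) with unit speed along it
th, ph, dth, dph = integrate(np.array([np.pi / 2, 0.0, 0.0, 1.0]), 1.0)
assert abs(th - np.pi / 2) < 1e-10 and abs(ph - 1.0) < 1e-10
```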
The geodesic equation (<ref>) states that the curve $c$ has zero acceleration (just like a particle in free motion). This means that geodesic curves are extremals of the energy functional
\begin{equation} \label{eq:energy}
E(c) = \int_I\,\Vert \dot{c}(t)\Vert^2\,dt
\end{equation}
and that re-parameterised geodesic curves (of the form $c\circ \varphi$ where $c$ is a geodesic, and $t^\prime = \varphi(t)$ a new parameterisation) are extremals of the length functional
\begin{equation} \label{eq:length}
L(c) = \int_I\,\Vert \dot{c}(t)\Vert\,dt
\end{equation}
This leads to the notion of Riemannian distance, which will be discussed in <ref> below.
§ TAYLOR EXPANSION OF A FUNCTION
Let $f:M \rightarrow \mathbb{R}$ be a $C^2$ function, and denote by $df$ its differential. The gradient of $f$ is the vector field
\begin{equation} \label{eq:gradient}
\mathrm{grad}\,f = g^{\scriptscriptstyle -1}(df)
\end{equation}
in the notation of (<ref>). The Hessian of $f$ is the $(1,1)$-tensor field
\begin{equation} \label{eq:hessian}
\mathrm{Hess}\,f = \nabla\,\mathrm{grad}\,f
\end{equation}
The following proposition says that geodesic curves are exactly the curves which admit a Taylor expansion of any $C^2$ function, in terms of its gradient and Hessian.
A smooth curve $c:I\rightarrow M$ is a geodesic curve, if and only if, for any $s,t \in I$, and any $C^2$ function $f:M \rightarrow \mathbb{R}$,
\begin{equation} \label{eq:taylor1}
(f\circ c)(t) = (f\circ c)(s) + \langle \mathrm{grad}\,f,\dot{c}\rangle_{\scriptscriptstyle c(s)}(t-s) + \frac{1}{2}\, \langle\mathrm{Hess}\,f\cdot\dot{c}\,,\dot{c}\rangle_{\scriptscriptstyle c(s)}(t-s)^{\scriptscriptstyle 2} + o(|t-s|^{\scriptscriptstyle 3})
\end{equation}
The proof of this proposition follows from the identity,
\frac{d^{\scriptscriptstyle\hspace{0.03cm}2}}{dt^{\scriptscriptstyle 2}}(f\circ c)(t) = \frac{d}{dt}\langle \mathrm{grad}\,f,\dot{c}\rangle_{\scriptscriptstyle c(t)} =
\langle\nabla_{\dot{c}\,} \mathrm{grad}\,f,\dot{c}\rangle_{\scriptscriptstyle c(t)} + \langle \mathrm{grad}\,f,\ddot{c}\rangle_{\scriptscriptstyle c(t)}
Indeed, the last term is identically zero (i.e., for any $C^2$ function $f$), if and only if $\ddot{c}$ is identically zero, as in the geodesic equation (<ref>).
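Proposition <ref> is easy to check numerically, on the unit sphere $S^2 \subset \mathbb{R}^3$. For the restriction of a linear function, $f(y) = \langle a,y\rangle$, one has $\mathrm{grad}\,f(x) = a - \langle a,x\rangle\,x$ (the tangential part of $a$), and $\mathrm{Hess}\,f_x(v,v) = -\langle a,x\rangle\langle v,v\rangle$, as follows from differentiating $f$ twice along great circles. The sketch below, a numerical illustration rather than part of the text, compares the two sides of (<ref>) along a great circle.

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal(3)
x = rng.standard_normal(3); x /= np.linalg.norm(x)        # a point on S^2
v = rng.standard_normal(3); v -= (v @ x) * x; v /= np.linalg.norm(v)

f = lambda y: a @ y
geod = lambda t: np.cos(t) * x + np.sin(t) * v            # unit-speed geodesic

grad = a - (a @ x) * x     # grad f(x): tangential part of a
hess_vv = -(a @ x)         # Hess f_x(v, v), for the unit tangent vector v

# second-order Taylor expansion along the geodesic, as in the proposition
t = 1e-3
taylor = f(x) + (grad @ v) * t + 0.5 * hess_vv * t ** 2
assert abs(f(geod(t)) - taylor) < 1e-8
```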
In (<ref>), $\mathrm{Hess}\,f$ is a $(1,1)$-tensor field. This tensor field is self-adjoint, and it is common practice to identify it with a $(0,2)$-tensor field,
\begin{equation} \label{eq:hessianbis}
\mathrm{Hess}\,f = \nabla\,df = \frac{1}{2}\mathcal{L}_{\mathrm{grad}\,f}\,g
\end{equation}
where the second equality follows from (<ref>) and (<ref>). This yields a lighter notation,
\langle\mathrm{Hess}\,f\cdot u\,,v\rangle = \mathrm{Hess}\,f(u,v)
Recall the Riemannian exponential map $\mathrm{Exp}$, from <ref> (always assume geodesic completeness). Proposition <ref> can be used to write down a Taylor expansion with Lagrange remainder,
\begin{equation} \label{eq:taylor2}
f\left(\mathrm{Exp}_x(v)\right) = f(x) + \langle \mathrm{grad}\,f,v\rangle_{\scriptscriptstyle x} + \frac{1}{2}\,\mathrm{Hess}\,f_{\scriptscriptstyle c(t^*)}(\dot{c},\dot{c})
\end{equation}
where $c(t^*)$ is a point along the geodesic $c(t) = \mathrm{Exp}_x(t\,v)$, corresponding to an instant $t^* \in (0,1)$.
Remark : writing (<ref>) in local coordinates,
\begin{equation} \label{eq:hesscoordinates}
\mathrm{Hess}\,f = \left\lbrace \partial_{ ij} f - \Gamma^k_{ij\,} \partial_k f\right\rbrace dx^i\otimes dx^{\,j}
\end{equation}
The second derivatives $\partial_{ ij} f$ do not transform like a covariant tensor, but the Christoffel symbols correct for this problem, yielding a true covariant tensor, $\mathrm{Hess}\,f$. A very nice way of saying this is that the Levi-Civita connection transforms second-order differentials into covariant tensors.
The concepts of second-order vectors and of second-order differentials are reviewed in [5], where they are used as a starting point for stochastic analysis in manifolds.
§ EXAMPLE : THE PCA OBJECTIVE FUNCTION
The problem of principal component analysis consists in maximising the objective function
\begin{equation} \label{eq:pcaf}
f(x) = \mathrm{tr}\left(x\Delta\right) \hspace{1cm} x \in \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)
\end{equation}
where $\Delta$ is a symmetric positive-definite matrix, of size $(p+q)\times(p+q)$. The maximisation is over $x$ in the real Grassmann manifold $\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$, identified with a space of orthogonal projectors
\begin{equation} \label{eq:grassconst}
\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)= \left \lbrace x \in \mathbb{R}^{\scriptscriptstyle (p+q)\times(p+q)}\,:x^{\dagger} - x = 0 \hspace{0.1cm},\hspace{0.1cm} x^2 - x = 0 \hspace{0.1cm},\hspace{0.1cm} \mathrm{tr}(x) = p\right\rbrace
\end{equation}
where $^\dagger$ denotes the transpose. Remarkably, it is possible to show that $\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$ is a submanifold of $\mathrm{S}(p+q)$, the affine space of symmetric matrices of size $(p+q)\times(p+q)$, with tangent spaces (the proof of this statement may be found in [6]),
\begin{equation} \label{eq:tangentgrassconst}
T_x \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)= \left\lbrace v \in \mathrm{S}(p+q)\,: xv + vx = v \right\rbrace
\end{equation}
It then follows that $\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$ is of dimension $pq$. Clearly, $\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$ admits of a Riemannian metric, which is the restriction of the trace scalar product of $\mathrm{S}(p+q)$,
\begin{equation} \label{eq:grasssp}
\langle u,\!v\rangle_{\scriptscriptstyle x} = \mathrm{tr}(uv) \hspace{1cm} u\,,v \in T_x\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)
\end{equation}
By (<ref>) and (<ref>), it follows from (<ref>) that the gradient of $f(x)$ is given by
\begin{equation} \label{eq:pcapx}
\mathrm{grad}\,f(x) = \mathrm{P}_x(\Delta)
\end{equation}
where $\mathrm{P}_x:\mathrm{S}(p+q)\rightarrow \mathrm{S}(p+q)$ is the orthogonal projection onto $T_x \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$. Now, let $x = o$, the projector onto the span of the first $p$ vectors in the canonical basis of $\mathbb{R}^{\scriptscriptstyle p+q}$. One readily checks from (<ref>) that
\begin{equation} \label{eq:grassto}
T_o \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)= \left\lbrace \tilde{\omega} =
\left(\begin{array}{cc} 0_{\scriptscriptstyle p\times p} & \omega^\dagger \\[0.15cm] \omega & 0_{\scriptscriptstyle q\times q}\end{array}\right)\,;\, \omega \in \mathbb{R}^{\scriptscriptstyle q\times p}
\right\rbrace
\end{equation}
Therefore, $\mathrm{P}_o(\Delta)$ is just $\Delta$ with its main diagonal blocks of size $p \times p$ and $q\times q$ set to zero. Then, note that the orthogonal group $O(p+q)$ acts transitively on $\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$, by $g\cdot x = gxg^\dagger$ for $g \in O(p+q)$ and $x \in \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$, and that this action preserves the Riemannian metric (<ref>). Therefore, one has the following alternative to (<ref>)[By an abuse of notation, $g\cdot a= g\,a\,g^\dagger$, for any matrix $a$ of size $(p+q)\times(p+q)$.],
\begin{equation} \label{eq:tangentgrassgroup}
T_x \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)= \left \lbrace v = g\cdot \tilde{\omega}\,;\, g\cdot o = x\hspace{0.1cm},\hspace{0.1cm} \tilde{\omega} \in T_o \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q) \right\rbrace
\end{equation}
where $g\cdot o = x$ simply means the first $p$ columns of $g$ span the image space of $x$. Since the action of $O(p+q)$ preserves the Riemannian metric (<ref>), it easily follows
\begin{equation} \label{eq:grasspx}
\mathrm{P}_x(\Delta) = g \cdot \mathrm{P}_o(g^\dagger\cdot \Delta) \text{ for any $g$ such that $g\cdot o = x$}
\end{equation}
which can be used to evaluate the gradient of $f(x)$, in (<ref>). For the Hessian of $f(x)$, note that, according to Proposition <ref>,
\begin{equation} \label{eq:pcahess}
\mathrm{Hess}\,f_x(v,v) = \left.\frac{d^{\scriptscriptstyle\hspace{0.03cm}2}}{dt^{\scriptscriptstyle 2}}\right|_{t=0} f\left(\mathrm{Exp}_x(tv)\right)
\end{equation}
Here, the Riemannian exponential can be transformed into a matrix exponential (see Proposition <ref>, in <ref>). For $g \in O(p+q)$, note that $g \cdot o = o$ if and only if $g \in O(p) \times O(q) \subset O(p+q)$. Denote $\mathfrak{g}$ and $\mathfrak{k}$ the Lie algebras of $O(p+q)$ and $O(p) \times O(q) \subset O(p+q)$. Let $\mathfrak{p}$ denote the orthogonal complement of $\mathfrak{k}$ (with respect to the bilinear form $Q(\xi\hspace{0.02cm},\eta) = \mathrm{tr}(\xi\eta)$, for $\xi\,,\eta \in \mathfrak{o}(p+q)$). Then,
\begin{equation} \label{eq:grassp}
\mathfrak{p} = \left\lbrace \hat{\omega} =
\left(\begin{array}{cc} 0_{\scriptscriptstyle p\times p} & -\omega^\dagger \\[0.15cm] \omega & 0_{\scriptscriptstyle q\times q}\end{array}\right)\,;\, \omega \in \mathbb{R}^{\scriptscriptstyle q\times p}
\right\rbrace
\end{equation}
From (<ref>) and (<ref>), it is clear there exists a canonical isomorphism $\pi_o: T_o \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q) \rightarrow \mathfrak{p}$ (just add a minus sign in front of $\omega^\dagger$ in (<ref>)). In terms of this isomorphism,
\begin{equation} \label{eq:grasslift}
\mathrm{Exp}_x(tv) = \exp(t\, \hat{\omega}_{\scriptscriptstyle v})\cdot x \hspace{1cm} \hat{\omega}_{\scriptscriptstyle v} = g\cdot \pi_o(g^\dagger \cdot v)
\end{equation}
Replacing (<ref>) into (<ref>), the second derivative is easily computed,
\begin{equation} \label{eq:pcahessbis}
\mathrm{Hess}\,f_x(v,v) = \mathrm{tr}\!\left( \Delta\,\hat{\omega}^2_{\scriptscriptstyle v}\,x \right) +
\mathrm{tr}\!\left( \Delta\,x\,\hat{\omega}^2_{\scriptscriptstyle v} \right) -2\,\mathrm{tr}\!\left(\Delta\,\hat{\omega}_{\scriptscriptstyle v}\,x\, \hat{\omega}_{\scriptscriptstyle v} \right)
\end{equation}
Remark : a nice property of the linear map $v \mapsto \hat{\omega}_{\scriptscriptstyle v}$ is that $-\,\mathrm{tr}(\hat{\omega}^2_{\scriptscriptstyle v}) = \langle v,v\rangle_{\scriptscriptstyle x\,}$ (note the minus sign : the diagonal blocks of $\hat{\omega}^2_{\scriptscriptstyle v}$ are $-\omega^\dagger\omega$ and $-\omega\omega^\dagger$).
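Formulae (<ref>) and (<ref>) lend themselves to a direct numerical check. The sketch below, an illustration rather than part of the text, builds a random $x = g\cdot o$, evaluates $\mathrm{grad}\,f(x) = g\cdot\mathrm{P}_o(g^\dagger\cdot\Delta)$, and verifies that the result is symmetric and satisfies $xv + vx = v$, as required by (<ref>).

```python
import numpy as np

p, q = 2, 3
d = p + q
rng = np.random.default_rng(0)

a = rng.standard_normal((d, d))
Delta = a @ a.T                       # symmetric positive-definite

def P_o(s):
    # projection onto T_o Gr(p, q): zero out the p x p and q x q diagonal blocks
    out = s.copy()
    out[:p, :p] = 0.0
    out[p:, p:] = 0.0
    return out

o = np.diag([1.0] * p + [0.0] * q)    # projector on the first p basis vectors
g, _ = np.linalg.qr(rng.standard_normal((d, d)))   # a random element of O(d)
x = g @ o @ g.T                       # the point x = g . o
grad = g @ P_o(g.T @ Delta @ g) @ g.T # grad f(x) = g . P_o(g^dagger . Delta)

# grad f(x) lies in T_x Gr(p, q): it is symmetric, and x v + v x = v
assert np.allclose(grad, grad.T)
assert np.allclose(x @ grad + grad @ x, grad)
```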
§ REGULAR RETRACTIONS
A retraction is a map $\mathrm{Ret}:TM \rightarrow M$, taking $v \in T_xM$ to $\mathrm{Ret}_x(v)$, and which verifies [7, 8],
\begin{equation} \label{eq:retdefinition}
\mathrm{Ret}_x(0_x) = x \hspace{1cm} d\,\mathrm{Ret}_x(0_x) = \mathrm{Id}_x
\end{equation}
where $0_x \in T_xM$ is the zero vector in $T_xM$, and $\mathrm{Id}_x$ is the identity map of $T_xM$. While the Riemannian exponential $\mathrm{Exp}:TM \rightarrow M$ is itself a retraction[Recall that it is always assumed $M$ is geodesically complete.], other retractions are often used as computationally cheap (or numerically stable) substitutes for the Riemannian exponential.
From (<ref>), for any retraction $\mathrm{Ret}$, $\mathrm{Ret}_x$ agrees with $\mathrm{Exp}_x$ up to first-order derivatives. Further, $\mathrm{Ret}$ will be called geodesic, if $\mathrm{Ret}_x$ agrees with $\mathrm{Exp}_x$ up to second-order derivatives. This means the curve $c(t) = \mathrm{Ret}_x(tv)$ has zero initial acceleration : $\ddot{c}(0) = 0_{x\,}$, for any $v \in T_xM$ (in the notation of (<ref>)).
To compare a retraction $\mathrm{Ret}$ with the exponential $\mathrm{Exp}$, it is useful to introduce the maps
\begin{equation} \label{eq:PHI}
\Phi_x : T_xM \rightarrow T_xM \hspace{1cm} \Phi_x(v) = \left(\mathrm{Exp}^{-1}_x \circ \mathrm{Ret}_x\right)(v)
\end{equation}
These maps are well-defined if $\mathrm{Ret}$ is regular. That is, if $\mathrm{Ret}_x(v) \notin \mathrm{Cut}(x)$ for any $v \in T_xM$ ($\mathrm{Cut}(x)$ denotes the cut locus of $x$, whose definition is recalled in <ref>). In addition, they satisfy the following propositions.
Let $\mathrm{Ret}:TM\rightarrow M$ be a regular retraction. Then, the maps $\Phi_x : T_xM \rightarrow T_xM$ verify
(a) $\Phi_x(0_x) = 0_x$ and $\Phi^\prime_x(0_x) = \mathrm{Id}_x$ (the prime denotes the Fréchet derivative).
(b) $\Phi^{\prime\prime}_x(0_x)(v,v) = \ddot{c}(0)$, where the curve $c(t)$ is given by $c(t) = \mathrm{Ret}_x(tv)$.
Let $\mathrm{Ret}:TM\rightarrow M$ be a regular retraction and $f:M \rightarrow \mathbb{R}$ be a $C^2$ function.
\begin{equation} \label{eq:taylor2ret}
f\left(\mathrm{Ret}_x(v)\right) = f(x) + \langle \mathrm{grad}\,f,\Phi_x(v)\rangle_{\scriptscriptstyle x} + \frac{1}{2}\,\mathrm{Hess}\,f_{\scriptscriptstyle \gamma(t^*)}(\dot{\gamma},\dot{\gamma})
\end{equation}
where $\gamma(t^*)$ is a point along the geodesic $\gamma(t) = \mathrm{Exp}_x(t\Phi_x(v))$, corresponding to some $t^* \in (0,1)$.
As an application of Proposition <ref>, consider the following examples.
Example 1 : let $M = S^n \subset \mathbb{R}^{n+1}$, the unit sphere of dimension $n$, with its usual (round) Riemannian metric. The retraction $\mathrm{Ret}_x(v) = \left.(x+v)\middle/\Vert x + v \Vert\right.$ ($\Vert \cdot \Vert$ is the Euclidean norm) is regular, and the maps $\Phi_x$ are given by
\begin{equation} \label{eq:phisphere}
\Phi_x(v) = \arctan(\Vert v \Vert)\,\frac{v}{\Vert v \Vert}
\end{equation}
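The claim of Example 1 can be verified numerically. On the sphere, $\mathrm{Exp}_x(u) = \cos(\Vert u\Vert)\,x + \sin(\Vert u\Vert)\,u/\Vert u\Vert$, and one checks that $\mathrm{Ret}_x(v) = \mathrm{Exp}_x(\Phi_x(v))$, with $\Phi_x$ as in (<ref>). The following sketch, not part of the text, performs this check.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
x = rng.standard_normal(n + 1); x /= np.linalg.norm(x)   # a point on S^n
v = rng.standard_normal(n + 1); v -= (v @ x) * x         # tangent: <v, x> = 0

def Exp(x, u):
    # Riemannian exponential of the round sphere
    t = np.linalg.norm(u)
    return np.cos(t) * x + np.sin(t) * u / t

def Ret(x, v):
    # the projection retraction
    return (x + v) / np.linalg.norm(x + v)

def Phi(x, v):
    # the map Phi_x of Example 1
    t = np.linalg.norm(v)
    return np.arctan(t) * v / t

# the two maps agree: Ret_x(v) = Exp_x(Phi_x(v))
assert np.allclose(Ret(x, v), Exp(x, Phi(x, v)))
```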
Example 2 : let $M = U(d)$, the Lie group of $d \times d$ unitary matrices, with its bi-invariant metric $\langle u,\!v\rangle_{\scriptscriptstyle x} = -(1/2)\mathrm{tr}(uv)$. The retraction $\mathrm{Ret}_x(v) = \mathrm{Pol}(x+v)$ ($\mathrm{Pol}$ denotes the left polar factor) is regular, and the maps $\Phi_x$ are given by
\begin{equation} \label{eq:phiun}
\Phi_x(v) = x\left(u\, (i\arctan(\theta))\, u^\dagger\right)
\end{equation}
where $^\dagger$ denotes the conjugate-transpose, and $\omega = x^\dagger v$ has spectral decomposition $\omega = u(i\theta)u^\dagger$, where $u$ is unitary and $\theta$ is real and diagonal — as one may expect, $\arctan(\theta) = \mathrm{diag}(\arctan(\theta_{ii}))$.
Now, (b) of Proposition <ref> implies the retractions in question are geodesic, since the Taylor expansion at zero of the arctangent only contains odd powers. Both of these retractions are based on orthogonal projection onto the manifold $M$, which is embedded in a Euclidean space.
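The claim of Example 2 can be checked in the same way. Writing $\omega = x^\dagger v = u(i\theta)u^\dagger$, the geodesics of the bi-invariant metric through $x$ are $t \mapsto x\exp(t\omega)$, so that $\mathrm{Exp}_x(w) = x\exp(x^\dagger w)$, and one has the identity $\mathrm{Pol}(x+v) = x\left(u\exp(i\arctan(\theta))\,u^\dagger\right)$, the image of $\mathrm{Ret}_x(v)$ under $\mathrm{Exp}_x \circ \Phi_x^{\phantom{1}}$ being recovered accordingly. The sketch below, not part of the text, verifies this identity.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3

def pol(h):
    # left polar factor of an invertible h: h = P S, with P unitary, S positive
    u, _, vh = np.linalg.svd(h)
    return u @ vh

z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
x = pol(z)                                 # a point of U(d)
w = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
omega = 0.5 * (w - w.conj().T)             # anti-Hermitian
v = x @ omega                              # a tangent vector at x

theta, u = np.linalg.eigh(-1j * omega)     # omega = u (i theta) u^dagger

# Ret_x(v) = Pol(x + v) against x u exp(i arctan(theta)) u^dagger
Ret = pol(x + v)
target = x @ (u @ np.diag(np.exp(1j * np.arctan(theta))) @ u.conj().T)
assert np.allclose(Ret, target)
```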
Proof of Proposition <ref> : note that (a) is immediate, by (<ref>), and the fact that $\mathrm{Exp}$ is a retraction. To prove (b), note that
\begin{equation} \label{eq:PHInorm}
\Phi_x(v) = \tau^i(\mathrm{Ret}_x(v))\,\partial_i(x)
\end{equation}
where $(\tau^i\,;i=1,\ldots,n)$ are normal coordinates with origin at $x$, and where $\partial_i = \left.\partial\middle/\partial \tau^i\right.$. Since $\Phi_x$ is smooth (precisely, $C^2$),
\Phi^{\prime\prime}_x(0_x)(v,v) = \left. \frac{d^{\scriptscriptstyle\hspace{0.03cm}2}}{dt^{\scriptscriptstyle 2}}\right|_{t=0}\! \Phi_x(tv)
Thus, if $c(t) = \mathrm{Ret}_x(tv)$ and $c^i(t) = (\tau^i \circ c)(t)$, then
\Phi^{\prime\prime}_x(0_x)(v,v) = \frac{d^{\scriptscriptstyle\hspace{0.03cm}2}}{dt^{\scriptscriptstyle 2}}c^i(0)\,\partial_i(x) =
\left\lbrace\frac{d^{\scriptscriptstyle\hspace{0.03cm}2}}{dt^{\scriptscriptstyle 2}}c^i(0) + \Gamma^i_{jk}(c(0))\frac{d}{dt}c^{\,j}(0)\frac{d}{dt}c^k(0)\right\rbrace\partial_i(x)
where the second equality holds since $\Gamma^i_{jk}(c(0)) = \Gamma^i_{jk}(x) = 0$, by the definition of normal coordinates. Comparing to (<ref>) and (<ref>), it is clear $\Phi^{\prime\prime}_x(0_x)(v,v) = \ddot{c}(0)$.
Proof of Proposition <ref> : this is a direct application of (<ref>), using $\mathrm{Ret}_x(v) =\mathrm{Exp}_x(\Phi_x(v))$.
Remark : the claims in Examples 1 and 2 above will not be proved in detail. Example 1 is quite elementary, and only requires one to recall that $\mathrm{Cut}(x) = \lbrace - x\rbrace$. For Example 2, the cut locus of a point $x$ in $U(d)$ is described in [9], and (<ref>) follows by a straightforward matrix calculation.
§ EXAMPLE : A RETRACTION FOR $\MATHRM{GR}_{\SCRIPTSCRIPTSTYLE \MATHBB{R}}(P\,,Q)$
Let $\mathrm{St}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$ denote the Stiefel manifold, whose elements are the $d \times p$ matrices $b$ with
$b^\dagger b = \mathrm{I}_p$ ($\mathrm{I}_p$ is the $p \times p$ identity matrix, and $d = p+q$). Note that $T_{b\,}\mathrm{St}_{\scriptscriptstyle \mathbb{R}}(p\,,q) = \lbrace w\,: w^\dagger b + b^\dagger w = 0\rbrace$. For $w \in T_{b\,}\mathrm{St}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$, let $[b] = bb^\dagger$ and $[w] = wb^\dagger + bw^\dagger$. If $v \in T_x \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$, one says that $(b,w)$ is representative of $(x,v)$, whenever $x = [b]$ and $v = [w]$.
Recall that $x$ and $v$ may always be expressed $x = g\cdot o$ and $v = g\cdot \tilde{\omega}$, using (<ref>). If $x = [b]$, then $g$ may be chosen
$g = (b,b^{\scriptscriptstyle \perp})$ (the columns of $b^{\scriptscriptstyle \perp}$ span the orthogonal complement of the image space of $x$). Then, a direct calculation shows $v = [w_{\scriptscriptstyle v}]$, where $w_{\scriptscriptstyle v} = b^{\scriptscriptstyle \perp}\omega$. Now, define
\begin{equation} \label{eq:retgrassman1}
\mathrm{Ret}_x(v) = \mathrm{Proj}\left( b + w_{\scriptscriptstyle v}\right) \text{ for some $b$ such that } x = [b]
\end{equation}
where $\mathrm{Proj}(h)$ denotes the orthogonal projector onto the span of the columns of $h$, for $h \in \mathbb{R}^{\scriptscriptstyle d\times p}$. This is well-defined, since it does not depend on the choice of $b$ and $b^{\scriptscriptstyle \perp}$, and is indeed a retraction, since it verifies (<ref>).
For a nicer expression of (<ref>), identify each $x \in \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$ with its image space $\mathrm{Im}(x)$. In other words, consider $\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$ as the space of all $p$-dimensional subspaces of $\mathbb{R}^d$. Then,
\begin{equation} \label{eq:retgrassman}
\mathrm{Ret}_x(v) = \mathrm{Span}\left( b + w_{\scriptscriptstyle v}\right)
\end{equation}
where $\mathrm{Span}(h)$ denotes the span of the column space of $h \in \mathbb{R}^{\scriptscriptstyle d\times p}$.
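The retraction above is straightforward to realise numerically. The following sketch (an illustration, not from the text; the function name `grassmann_retract` is ours, and NumPy is assumed) computes $\mathrm{Ret}_x(v) = \mathrm{Span}(b + w_{\scriptscriptstyle v})$ as the orthogonal projector onto that span, using a QR factorisation to orthonormalise the columns:

```python
import numpy as np

def grassmann_retract(b, b_perp, omega):
    """Retraction Ret_x(v) = Span(b + w_v) on Gr(p, q), returned as the
    orthogonal projector onto that span.  Here x = [b] = b b^T, and the
    tangent vector v = [w_v] with w_v = b_perp @ omega (omega is q x p)."""
    w_v = b_perp @ omega
    # Orthonormalise the columns of b + w_v; the projector Q Q^T onto
    # their span does not depend on the choice of b and b_perp.
    q_mat, _ = np.linalg.qr(b + w_v)
    return q_mat @ q_mat.T

# A small example on Gr(2, 3), i.e. 2-planes in R^5.
rng = np.random.default_rng(0)
p, q = 2, 3
d = p + q
g, _ = np.linalg.qr(rng.standard_normal((d, d)))
b, b_perp = g[:, :p], g[:, p:]          # g = (b, b_perp) is orthogonal
omega = rng.standard_normal((q, p))
P = grassmann_retract(b, b_perp, omega)
# P is a rank-p orthogonal projector: symmetric, idempotent, trace p.
```

Since the columns of $w_{\scriptscriptstyle v}$ are orthogonal to those of $b$, the matrix $b + w_{\scriptscriptstyle v}$ always has full column rank, so the QR step is well-posed.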
[Whenever $a = (\alpha\,,0_{\scriptscriptstyle p\times q})^\dagger$, where $\alpha$ is $p\times p$ and diagonal, let $\arctan(a) = (\arctan(\alpha)\,,0_{\scriptscriptstyle p\times q})^\dagger$ where $\arctan(\alpha) = \mathrm{diag}(\arctan(\alpha_{ii}))$. For the proof on the following page, define $\cos(a)$ and $\sin(a)$ in the same way.] The retraction $\mathrm{Ret}$ in (<ref>) is regular, and the corresponding maps $\Phi_x$ (defined as in (<ref>)) are given by
\begin{equation} \label{eq:phigrass}
\Phi_x(v) = \left[ b^{\scriptscriptstyle \perp}(r\arctan(a) s^\dagger) \right]
\end{equation}
for $x = [b]$ and $v = [b^{\scriptscriptstyle \perp}\omega]$, where $\omega$ has s.v.d. $\omega = ras^\dagger$ with $r \in O(q)$ and $s \in O(p)$.
As for Examples 1 and 2 in <ref>, (b) of Proposition <ref> now implies $\mathrm{Ret}$ is a geodesic retraction.
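The claim $\mathrm{Exp}_x(\Phi_x(v)) = \mathrm{Ret}_x(v)$ can also be checked numerically, by comparing the retraction with the classical closed form for Grassmann geodesics (due to Edelman, Arias and Smith; this closed form is an outside assumption, not derived in the text): a geodesic issuing from $\mathrm{Span}(b)$ with horizontal direction $b^{\scriptscriptstyle\perp\,}U\,\mathrm{diag}(t)\,V^\dagger$ reaches $\mathrm{Span}(bV\cos(t) + b^{\scriptscriptstyle\perp\,}U\sin(t))$. With $t = \arctan(\sigma)$, the two projectors agree to machine precision:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 2, 3
d = p + q
g, _ = np.linalg.qr(rng.standard_normal((d, d)))
b, b_perp = g[:, :p], g[:, p:]
omega = rng.standard_normal((q, p))

def proj(h):
    """Orthogonal projector onto the column space of h."""
    Q, _ = np.linalg.qr(h)
    return Q @ Q.T

# Retraction: Ret_x(v) = Span(b + b_perp omega).
P_ret = proj(b + b_perp @ omega)

# Exp_x(Phi_x(v)): reduced SVD omega = U diag(sig) V^T, then the
# Edelman-Arias-Smith geodesic evaluated at t = arctan(sig).
U, sig, Vt = np.linalg.svd(omega, full_matrices=False)
t = np.arctan(sig)
P_exp = proj(b @ Vt.T @ np.diag(np.cos(t)) + b_perp @ U @ np.diag(np.sin(t)))
```

The agreement reflects exactly the column-scaling argument of the proof: multiplying $b + b^{\scriptscriptstyle\perp}\omega$ on the right by $V$, then rescaling column $i$ by $\cos(t_i) > 0$, yields $bV\cos(t) + b^{\scriptscriptstyle\perp\,}U\sin(t)$ without changing the span.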
Proof of Proposition <ref> : here, $x \in \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$ is identified with its image space, $\mathrm{Im}(x)$. Without loss of generality, it is assumed $p \leq q$.
With $\Phi_x$ given by (<ref>), the aim will be to show that, for $x \in \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$ and $v \in T_x \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$,
\begin{equation} \label{eq:phigrassproof1}
\mathrm{Exp}_x(\Phi_x(v)) = \mathrm{Ret}_x(v)
\end{equation}
In [9], the cut locus of $x$ is obtained under the form
\begin{equation} \label{eq:cutgrass}
\mathrm{Cut}(x) = \left\lbrace\mathrm{Exp}_x\left([b^{\scriptscriptstyle \perp}\omega]\right)\,;\,\omega = ras^\dagger\,,\,\Vert a\Vert_{\scriptscriptstyle \infty} = \frac{\pi}{2}\right\rbrace
\end{equation}
where $\omega= ras^\dagger$ is the s.v.d. of $\omega \in \mathbb{R}^{\scriptscriptstyle\,q\times p}$, with $r \in O(q)$ and $s \in O(p)$, and $\Vert a\Vert_{\scriptscriptstyle \infty} = \max_{ij}|a_{ij}|$. Since $\Vert \arctan(a)\Vert_{\scriptscriptstyle \infty} < \pi/2$, it follows from (<ref>) and (<ref>) that $\mathrm{Ret}_x(v) \notin \mathrm{Cut}(x)$, so $\mathrm{Ret}$ is a regular retraction. Thus, to prove the proposition, one only has to prove (<ref>).
Starting with the left-hand side of (<ref>), let $\varphi = r \arctan(a) s^\dagger$, so $\Phi_x(v) = [ b^{\scriptscriptstyle \perp}\varphi]$. By the discussion before (<ref>), it follows that
\begin{equation} \label{eq:proofgrass1}
\Phi_x(v) = g \cdot \tilde{\varphi} \hspace{1cm} \text{(where $g = (b,b^{\scriptscriptstyle \perp})$)}
\end{equation}
However, then, by (<ref>),
\begin{equation} \label{eq:proofgrass2}
\mathrm{Exp}_x(\Phi_x(v)) = \exp(g\cdot \hat{\varphi})\cdot x = \left(g\exp(\hat{\varphi})\right)\cdot o
\end{equation}
where the second equality follows from $g^\dagger x = o$, using $g\cdot \hat{\varphi} = g\,\hat{\varphi}\,g^\dagger$. Using the s.v.d. of $\varphi$ ($\varphi = r\arctan(a) s^\dagger$), a straightforward matrix multiplication yields
\begin{equation} \label{eq:proofgrass3}
\hat{\varphi} = k\cdot\hat{q} \hspace{0.5cm} \text{where } k = \left(\begin{array}{cc} s & 0 \\[0.1cm] 0 & r \end{array}\right) \;,\;
q = \arctan(a)
\end{equation}
Thus, from (<ref>) and (<ref>), using the fact that $k \in O(p) \times O(q)$, so $k\cdot o = o$ (or $k^\dagger \cdot o = o$),
\mathrm{Exp}_x(\Phi_x(v)) = \left(g\,k \exp(\hat{q})\right)\cdot o
That is, by the group action property,
\begin{equation} \label{eq:proofgrass33}
\mathrm{Exp}_x(\Phi_x(v)) = g\,k\cdot\left(\exp(\hat{q})\cdot o\right)
\end{equation}
Now, let $b_o = (\mathrm{I}_p\,,0_{\scriptscriptstyle p\times q})^\dagger$, so $o = \mathrm{Span}(b_o)$ and
\begin{equation} \label{eq:proofgrass4}
\exp(\hat{q})\cdot o = \mathrm{Span}\left( \exp(\hat{q})\,b_o\right)
\end{equation}
Then, let $a = (\alpha\,,0_{\scriptscriptstyle p\times q})^{\dagger\,}$, where $\alpha$ is $p \times p$ and diagonal. It will be shown below that
\begin{equation} \label{eq:proofgrass5}
\exp(\hat{q})\,b_o \,=\, \left(\begin{array}{c} \cos(\arctan(\alpha))\\[0.1cm] \sin(\arctan(a))\end{array}\right) =
\left(\begin{array}{c} \mathrm{I}_p\\[0.1cm] a\end{array}\right)\left(\mathrm{I}_p + \alpha^2 \right)^{-\frac{1}{2}}
\end{equation}
where the second equality follows from the identities
\cos(\arctan(\alpha_{ii})) = (1+ \alpha^2_{ii})^{-\frac{1}{2}} \text{ and } \sin(\arctan(\alpha_{ii})) = \alpha_{ii}(1+ \alpha^2_{ii})^{-\frac{1}{2}}
By (<ref>) and (<ref>), after ignoring the invertible matrix $\left(\mathrm{I}_p + \alpha^2 \right)^{-\frac{1}{2}}$,
\exp(\hat{q})\cdot o = \mathrm{Span}\left(\begin{array}{c} \mathrm{I}_p\\[0.1cm] a\end{array}\right) = \mathrm{Span}\left(b_o + b^{\scriptscriptstyle\perp}_oa\right)
Replacing this into (<ref>), it follows that
\begin{equation} \label{eq:proofgrass6}
\mathrm{Exp}_x(\Phi_x(v)) = g\,k\cdot\mathrm{Span}\left(b_o + b^{\scriptscriptstyle\perp}_oa\right) =
g\cdot \mathrm{Span}\left( k(b_o + b^{\scriptscriptstyle\perp}_oa)\right)
\end{equation}
and, by carrying out the matrix products, one may perform the simplification,
\mathrm{Span}\left( k(b_o + b^{\scriptscriptstyle\perp}_oa)\right) =
\mathrm{Span}\left(b_o + b^{\scriptscriptstyle\perp}_ora\right) =
\mathrm{Span}\left(b_o + b^{\scriptscriptstyle\perp}_o\omega\right)
to obtain from (<ref>),
\mathrm{Exp}_x(\Phi_x(v)) = g\cdot\mathrm{Span}\left(b_o + b^{\scriptscriptstyle\perp}_o\omega\right)
which immediately yields (<ref>), since $g\,b_o = b$ and $g\,b^{\scriptscriptstyle \perp}_o = b^{\scriptscriptstyle \perp}$.
Proof of (<ref>) : write $q = (\kappa\,,0_{\scriptscriptstyle p\times q})^{\dagger\,}$, where $\kappa$ is $p \times p$ and diagonal. It is enough to show
\begin{equation} \label{eq:finalgrassproof1}
\exp(\hat{q}) = \left(\begin{array}{cc}\cos(\kappa) & -\sin(q)^\dagger \\[0.1cm] \sin(q) & \cos(\kappa)_{\scriptscriptstyle q\times q}\end{array}\right)
\end{equation}
where $\cos(\kappa)_{\scriptscriptstyle q\times q}$ is the $q \times q$ matrix,
\cos(\kappa)_{\scriptscriptstyle q\times q} = \left(\begin{array}{cc} \cos(\kappa) & \\[0.1cm] & \mathrm{I}_{q-p}\!\end{array}\right)
This follows by writing
\hat{q} = \sum^p_{i=1}\,\kappa_{ii}\,\hat{f}_{i} \hspace{1cm} f_{i} = (\delta_{i}\,,0_{\scriptscriptstyle p\times q})^\dagger
where $\delta_i$ is $p\times p$, diagonal, with its only non-zero element on the $i$-th line, and equal to $1$. Indeed, the matrices $\hat{f}_{i}$ commute with one another, so that
\begin{equation} \label{eq:finalgrassproof3}
\exp(\hat{q}) = \prod^p_{i=1}\,\exp(\kappa_{ii}\,\hat{f}_{i})
\end{equation}
and one readily checks $\hat{f}^{\scriptscriptstyle\hspace{0.03cm}2}_i = -e_{i\,}$, where $e_i$ is $d\times d$, diagonal, with its only non-zero elements on the $i$-th and $(p+i)$-th lines, and equal to $1$. Therefore,
\begin{equation} \label{eq:finalgrassproof2}
\exp(t\,\hat{f}_i) = \mathrm{I}_d + (\cos(t)-1)\,e_i + \sin(t)\,\hat{f}_i
\end{equation}
Then, (<ref>) obtains after replacing (<ref>) into (<ref>), and using
\begin{array}{ccccc}
e_i\,e_j = 0 && e_i\,\hat{f}_j = 0 & &\\[0.2cm]
\hat{f}_i\,e_j = 0 && \hat{f}_i\,\hat{f}_j = 0 & & \text{for $i \neq j$}
\end{array}
which may be shown by performing the matrix products.
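The closed form for $\exp(t\,\hat{f}_i)$ above can be checked directly, since $\hat{f}_i$ generates a rotation in the $(i, p+i)$ coordinate plane. The following NumPy sketch (an illustration; a truncated power series stands in for the matrix exponential, to stay self-contained) verifies the formula and the identity $\hat{f}^{\,2}_i = -e_i$:

```python
import numpy as np

p, q = 2, 3
d = p + q

def f_hat(i):
    """Hat of f_i = (delta_i, 0)^T in o(p+q): +1 at (p+i, i), -1 at (i, p+i)."""
    m = np.zeros((d, d))
    m[p + i, i] = 1.0
    m[i, p + i] = -1.0
    return m

def e_mat(i):
    """Diagonal matrix with ones at positions i and p+i."""
    m = np.zeros((d, d))
    m[i, i] = 1.0
    m[p + i, p + i] = 1.0
    return m

def expm_series(m, terms=40):
    """Truncated power series for the matrix exponential (accurate for
    the small norms used here)."""
    out = np.eye(d)
    term = np.eye(d)
    for k in range(1, terms):
        term = term @ m / k
        out = out + term
    return out

i, t = 0, 0.7
lhs = expm_series(t * f_hat(i))
# Closed form: a rotation by angle t in the (i, p+i) plane.
rhs = np.eye(d) + (np.cos(t) - 1.0) * e_mat(i) + np.sin(t) * f_hat(i)
```

The vanishing products $e_i\,\hat{f}_j = 0$ for $i \neq j$ can be confirmed in the same way.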
Remark : the above proof has a flavor of the structure theory of Riemannian symmetric spaces. In fact, $\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q) = \left. O(p+q) \middle/ O(p) \times O(q)\right.$ is a Riemannian symmetric space. The associated Cartan decomposition is
\begin{equation} \label{eq:grasscartan}
\mathfrak{o}(p+q) = \mathfrak{k} \,+\,\mathfrak{p}
\end{equation}
where $\mathfrak{k}$ is the Lie algebra of $K = O(p)\times O(q)$, and where $\mathfrak{p}$ was given in (<ref>). Then,
\begin{equation} \label{eq:grassaa}
\mathfrak{a} = \left\lbrace \hat{a}\,;\, a = (\alpha\,,0_{\scriptscriptstyle p\times q})^{\dagger\,}\,,\, \alpha \text{ is $p\times p$ diagonal}\right\rbrace
\end{equation}
is a maximal Abelian subspace of $\mathfrak{p}$. From [10] (Lemma 6.3, Chapter V), it follows that any $\hat{\omega} \in \mathfrak{p}$ is of the form $\hat{\omega} = \mathrm{Ad}(k)\,\hat{a}$ where $\mathrm{Ad}$ denotes the adjoint representation, $k \in K$ and $\hat{a} \in \mathfrak{a}$. In the present context, this reads $\hat{\omega} = k\cdot \hat{a}$, which is indeed realised if $\omega$ has s.v.d. $\omega = ras^\dagger$, and $k$ is the same as in (<ref>).
§ THE SQUARED DISTANCE FUNCTION
§.§ Cut locus
A Riemannian manifold $M$ becomes a metric space, when equipped with the distance function
\begin{equation} \label{eq:distance}
d(x,y) = \inf \left\lbrace L(c)\,;\, c \in C^1([0,1]\,,M):c(0) = x \,,\,c(1) = y\,\right\rbrace
\end{equation}
known as the Riemannian distance. Here, $L(c)$ is the length functional (<ref>). When $M$ is geodesically complete, the infimum in (<ref>) is always achieved by some curve $c^*$, which is then said to be length-minimising. In addition, any length-minimising curve is a geodesic.
This is not to say that all geodesics are length-minimising. A geodesic curve $c$, with $c(0) = x$, may reach a point $c(t) = y$, such that $L(\left.c\right|_{\scriptscriptstyle [0,t]}) > d(x,y)$. Roughly, this happens when $t$ is so large that $c$ is no longer a shortest path from $x$ to $y$.
For $v \in T_xM$ with $\Vert v \Vert_x = 1$ ($\Vert\cdot \Vert_x$ is the norm given by the scalar product $\langle\cdot,\cdot\rangle_x$), define
\begin{equation} \label{eq:tv}
\mathrm{t}(v) = \sup\left\lbrace t\geq 0 : L(\left.c_{\scriptscriptstyle v}\right|_{\scriptscriptstyle [0,t]}) = d(x,c_{\scriptscriptstyle v}(t))\right\rbrace
\end{equation}
where $c_{\scriptscriptstyle v}$ denotes the geodesic curve with $\dot{c}_{\scriptscriptstyle v}(0) = v$. The following sets
\begin{equation} \label{eq:tangentcut}
\mathrm{TC}(x) = \left\lbrace t\,v\,;\, t = \mathrm{t}(v)\,,\,\Vert v\Vert_x = 1 \right\rbrace \hspace{1cm}
\mathrm{TD}(x) = \left\lbrace t\,v\,;\, t < \mathrm{t}(v)\,,\,\Vert v\Vert_x = 1 \right\rbrace
\end{equation}
are known as the tangent cut locus and tangent injectivity domain of $x$. The cut locus and injectivity domain of $x$ are the sets $\mathrm{Cut}(x) = \mathrm{Exp}\left( \mathrm{TC}(x)\right)$ and $\mathrm{D}(x) = \mathrm{Exp}\left( \mathrm{TD}(x)\right)$.
Since any two points $x$ and $y$ in $M$ are connected by a length-minimising geodesic $c^*$,
\begin{equation} \label{eq:cutdecomp}
M = \mathrm{D}(x) \,\cup\,\mathrm{Cut}(x)
\end{equation}
It is interesting to note that $\mathrm{Cut}(x)$ is a closed and negligible set.
§.§ Normal coordinates
The exponential map $\mathrm{Exp}_x$ is a diffeomorphism of $\mathrm{TD}(x)$ onto $\mathrm{D}(x)$ ($\mathrm{TD}(x)$ is the largest subset of $T_xM$ with this property). Pick some orthonormal basis $(u_i)$ of $T_xM$, and define, for $y \in \mathrm{D}(x)$,
\begin{equation} \label{eq:normalcoordinates}
\tau^i(y) = \left\langle \mathrm{Exp}^{-1}_x(y)\hspace{0.02cm}, u_i\hspace{0.02cm}\right\rangle_x \hspace{1cm} i = 1\,,\ldots,\,n
\end{equation}
Then, $\tau^i:\mathrm{D}(x) \rightarrow \mathbb{R}$ are well-defined local coordinates, known as normal coordinates. These coordinates satisfy
\begin{equation} \label{eq:normal0}
\tau^i(x) = 0 \hspace{0.4cm} g_{ij}(x) = \delta_{ij} \hspace{0.5cm} \Gamma^i_{jk}(x) = 0
\end{equation}
in the notation of (<ref>) and (<ref>). Even more, (<ref>) is equivalent to the property that geodesics through $x$ appear as straight lines through $0 \in \mathbb{R}^n$, in the normal coordinate map $\tau:\mathrm{D}(x) \rightarrow \mathbb{R}^n$.
Now, the coordinate vector fields $\partial_i = \left.\partial\middle/\partial \tau^i\right.$ are given by
\begin{equation} \label{eq:dexpnorm}
\partial_i(y) \,=\,\mathrm{d\hspace{0.02cm}Exp}_x(v)(u_i) \hspace{1cm} \text{where } v = \tau^{\,j}(y)\hspace{0.02cm}u_j \text{ (sum over $j$)}
\end{equation}
where $\mathrm{d\hspace{0.02cm}Exp}_x$ is the derivative of $\mathrm{Exp}_{x}:T_xM\rightarrow M$. This may be computed using Jacobi fields,
\begin{equation} \label{eq:dexpjacobi}
\mathrm{d\hspace{0.02cm}Exp}_x(tv)(tu) = J(t)
\end{equation}
where $J$ is a vector field (Jacobi field) along the geodesic $c(t) = \mathrm{Exp}_x(tv)$, which solves the Jacobi equation
\begin{equation} \label{eq:jacobiequation}
\nabla^2_{t\,}{J} - R(\dot{c},J)\dot{c} = 0
\end{equation}
where $J(0) = 0$, $\nabla_{t\,}J(0) = u$, and where $R$ denotes the Riemann curvature tensor. Of course, there do exist other means of computing the derivative $\mathrm{d\hspace{0.02cm}Exp}_x$ (e.g. when $\mathrm{Exp}_x$ coincides with a matrix exponential).
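To illustrate the Jacobi equation in the simplest case, take a unit-speed geodesic on the unit sphere and a Jacobi field orthogonal to it: the equation reduces to the scalar ODE $j'' + \kappa\,j = 0$ with $\kappa = 1$, whose solution $\sin(t)$ vanishes again at the first conjugate point $t = \pi$. A short Runge-Kutta integration (an illustration under these constant-curvature assumptions, not a general solver) recovers this:

```python
import numpy as np

def jacobi_norm(kappa, t_end, n_steps=2000):
    """Integrate j'' + kappa*j = 0 with j(0) = 0, j'(0) = 1 by classical RK4.
    This is the Jacobi equation along a unit-speed geodesic in constant
    curvature kappa, for a field orthogonal to the geodesic."""
    h = t_end / n_steps
    y = np.array([0.0, 1.0])                    # state (j, j')

    def rhs(state):
        return np.array([state[1], -kappa * state[0]])

    for _ in range(n_steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y[0]

# On the unit sphere (kappa = 1) the solution is sin(t): the field
# vanishes again at t = pi, the first conjugate point (the antipode).
j_sphere = jacobi_norm(1.0, 2.0)
```

For $\kappa = -1$ the same integration produces $\sinh(t)$, which never vanishes: in non-positive curvature there are no conjugate points.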
§.§ Distance function
For $x \in M$, consider the distance function $r_x(y) = d(x,y)$. For $y \in \mathrm{D}(x)$, it is possible to show
\begin{equation} \label{eq:localdistance}
r_x(y) = \left( \sum^n_{i=1} \tau^i(y)^{\hspace{0.02cm}2}\right)^{\!\!\frac{1}{2}} \hspace{0.5cm}
\end{equation}
in terms of the normal coordinates $\tau^i$. From (<ref>), the distance function $r_x$ is smooth on
$\mathrm{U}_x = \mathrm{D}(x) - \lbrace x \rbrace$.
When $y \in \mathrm{U}_{x}$ is of the form $y = c_{\scriptscriptstyle v}(t)$, where $c_{\scriptscriptstyle v}$ is a geodesic with $\dot{c}_{\scriptscriptstyle v}(0) = v$ and $\Vert v \Vert_x = 1$, define $\partial_{\hspace{0.02cm}r}(y) = \dot{c}_{\scriptscriptstyle v}(t)$. By the first variation of arc length formula (Theorem II.4.1 in [11]),
\begin{equation} \label{eq:gradr}
\mathrm{grad}\,r_x(y) = \partial_{\hspace{0.02cm}r}(y) \hspace{1cm} \text{for } y \in \mathrm{U}_x
\end{equation}
Introduce geodesic spherical coordinates $(r,\theta^{\scriptscriptstyle\,\alpha})$ on $\mathrm{U}_{x\,}$. If $y = c_{\scriptscriptstyle v}(t)$ these are given by $r = t$ and $(\theta^{\scriptscriptstyle\,\alpha}) = \theta(v)$, where $\theta$ identifies the unit sphere in $T_xM$ with the Euclidean unit sphere $S^{n-1}$. In these coordinates, the metric is given by
\begin{equation} \label{eq:lengthspherical}
g = dr \otimes dr \,+\, g^{\hspace{0.03cm} r}_{\scriptscriptstyle \alpha\beta}\;d\theta^{\scriptscriptstyle\,\alpha}\!\otimes\!d\theta^{\scriptscriptstyle\,\beta}
\end{equation}
reflecting the fact that $\partial_{\hspace{0.02cm}r}$ is orthogonal to constant $r_x$ surfaces, here parameterised by $(\theta^{\scriptscriptstyle\,\alpha})$.
The coordinate vector fields $\partial_{\hspace{0.02cm}\scriptscriptstyle \alpha}$ are given by (<ref>) : $\partial_{\hspace{0.02cm}\alpha}(y) = J(r)$ for $y = c_{\scriptscriptstyle v}(r)$, where $J(0) = 0$ and $\nabla_{t\,}J(0) = u_{\hspace{0.02cm}\scriptscriptstyle \alpha}$ (where $u_{\hspace{0.02cm}\scriptscriptstyle \alpha} = \left.\partial\middle/\partial \theta^{\scriptscriptstyle\,\alpha}\right.$ are coordinate vector fields on the unit sphere in $T_xM$). In particular, if $A:T_xM \rightarrow T_yM$ solves the operator Jacobi equation (along the geodesic $c_{\scriptscriptstyle v}$)
\begin{equation} \label{eq:jacobioperator}
\nabla^2_{t\,}{A} - R_{\dot{c}_{\scriptscriptstyle v}}\hspace{0.02cm}A = 0 \hspace{1cm} A(0) = 0 \,,\, \nabla_{t\,}A(0) = \mathrm{Id}_x
\end{equation}
where $R_{\hspace{0.02cm}\dot{c}_{\scriptscriptstyle v}}(\cdot) = R(\dot{c}_{\scriptscriptstyle v\hspace{0.02cm}},\cdot)\hspace{0.02cm}\dot{c}_{\scriptscriptstyle v\,}$, then $\partial_{\hspace{0.02cm}\alpha}(y) = A(r)\hspace{0.02cm} u_{\hspace{0.02cm}\scriptscriptstyle \alpha\,}$. Thus, if $\mathcal{A}(y):T_xM\rightarrow T_xM$ is given by $\mathcal{A}(y) = \Pi^{\scriptscriptstyle 0}_{r} \circ A(r)$, then $g^{\hspace{0.03cm} r}(y) = (\mathcal{A}(y))^*(h)$, the pullback under $\mathcal{A}(y)$ of the metric $h$ of the unit sphere in $T_xM$. It should be noted $\mathcal{A}(y)$ maps tangent spaces of this unit sphere to themselves.
The Hessian of $r_x$ follows from (<ref>) and (<ref>), which yield (after using the fact that the vector fields $\partial_{\hspace{0.02cm}r}$ and $\partial_{\hspace{0.02cm}\scriptscriptstyle \alpha}$ commute)
\mathrm{Hess}\,r_x\cdot \partial_{\hspace{0.02cm}r\hspace{0.02cm}} = 0 \hspace{0.3cm}\text{and}\hspace{0.3cm}\mathrm{Hess}\,r_x\cdot \partial_{\hspace{0.02cm}\alpha} =\nabla_{\partial_{\hspace{0.02cm}r}}\hspace{0.02cm}\partial_{\hspace{0.02cm}\alpha}
Then, using the expression of the $\partial_{\hspace{0.02cm}\alpha}$ as Jacobi fields,
\begin{equation} \label{eq:hessr}
\mathrm{Hess}\,r_x(y) \,=\, \left.\nabla_{t\,}A(t)A^{-1}(t)\right|_{t=r}
\end{equation}
Taking the covariant derivative $\nabla_{t}$ of this formula yields the Riccati equation
\begin{equation} \label{eq:ricatti}
\nabla_{\partial_{\hspace{0.02cm}r}} \mathrm{Hess}\,r_x \,=\, R_{\scriptscriptstyle \partial_{\hspace{0.02cm}r}} - \left(\mathrm{Hess}\,r_x\right)^{\hspace{0.02cm}2}
\end{equation}
The Jacobi equation (<ref>) and the Riccati equation (<ref>) lead to the comparison theorems[The inequalities (<ref>) and (<ref>) are in the sense of the usual Loewner order for self-adjoint operators.].
Assume the sectional curvatures of $M$ lie within the interval $[\kappa_{\min}\hspace{0.02cm},\kappa_{\max}\hspace{0.03cm}]\hspace{0.02cm}$. Then,
\begin{equation} \label{eq:metcomp}
\mathrm{sn}^2_{\kappa_{\max}}(r)\,h\,\leq\, g^{\hspace{0.03cm} r}(y) \,\leq\, \mathrm{sn}^2_{\kappa_{\min}}(r)\,h
\end{equation}
\begin{equation} \label{eq:hescomp}
\mathrm{ct}_{\kappa_{\max}}(r)\,g^{\hspace{0.03cm} r}(y)\,\leq\, \mathrm{Hess}\,r_x(y)\,\leq\,
\mathrm{ct}_{\kappa_{\min}}(r)\,g^{\hspace{0.03cm} r}(y)
\end{equation}
for $y \in \mathrm{U}_x\,$. Here, $\mathrm{sn}^{\prime\prime}_\kappa(r) + \kappa\,\mathrm{sn}_{\kappa}(r) = 0$ with $\mathrm{sn}_{\kappa}(0) = 0$ and $\mathrm{sn}^\prime_{\kappa}(0) = 1$, and $\mathrm{ct}_{\kappa} = \left. \mathrm{sn}^\prime_{\kappa}\middle/\mathrm{sn}_{\kappa}\right.$.
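The comparison functions $\mathrm{sn}_\kappa$ and $\mathrm{ct}_\kappa$ admit familiar closed forms (sine, identity, hyperbolic sine, and their logarithmic derivatives) according to the sign of $\kappa$. A minimal sketch, with function names of our choosing:

```python
import numpy as np

def sn(kappa, r):
    """Solution of sn'' + kappa*sn = 0 with sn(0) = 0, sn'(0) = 1."""
    if kappa > 0:
        s = np.sqrt(kappa)
        return np.sin(s * r) / s
    if kappa < 0:
        s = np.sqrt(-kappa)
        return np.sinh(s * r) / s
    return r

def ct(kappa, r):
    """The generalised cotangent ct_kappa = sn'_kappa / sn_kappa."""
    if kappa > 0:
        s = np.sqrt(kappa)
        return s / np.tan(s * r)
    if kappa < 0:
        s = np.sqrt(-kappa)
        return s / np.tanh(s * r)
    return 1.0 / r
```

Consistently with the comparison inequalities above, both functions are monotone decreasing in $\kappa$ at fixed (small) $r$: for instance $\mathrm{sn}_{1}(1) \leq \mathrm{sn}_{0}(1) \leq \mathrm{sn}_{-1}(1)$, and likewise for $\mathrm{ct}_\kappa$.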
Remark : in addition to its singularity at $x$, the distance function $r_x$ is singular on $\mathrm{Cut}(x)$. If $y\in\mathrm{Cut}(x)$, then either $y$ is a first conjugate point ($A(r)$ is singular, for the first time after $x$), or there exist two distinct length-minimising geodesics connecting $x$ to $y$. In the first case, $\mathrm{Hess}\,r_x(y)$ has an eigenvalue equal to $-\infty$. In the second case, $\mathrm{grad}\,r_x$ is discontinuous at $y$. The distributional Hessian of $r_x$ was studied in [12].
Remark : the reader may have noted, or recalled, that $y \in \mathrm{Cut}(x)$ if and only if $x \in \mathrm{Cut}(y)$.
§.§ Squared distance
For $x \in M$, consider the squared distance function $f_x(y) = d^{\hspace{0.03cm}2}(x,y)/2$. For $y \in \mathrm{D}(x)$,
\begin{equation} \label{eq:localfx}
f_x(y) = \frac{1}{2}\,\sum^n_{i=1} \tau^i(y)^{\hspace{0.02cm}2}
\end{equation}
in terms of the normal coordinates $\tau^i$. It follows that $f_x$ is smooth on $\mathrm{D}(x)$. Of course, $f_x = r^{\hspace{0.03cm}2}_x/2$. Therefore, applying the chain rule to (<ref>),
\begin{equation} \label{eq:gradfx}
\mathrm{grad}\,f_x(y) \,=\, -\hspace{0.02cm}\mathrm{Exp}^{-1}_y(x) \hspace{0.5cm} \text{for } y \in \mathrm{D}(x)
\end{equation}
and, by another application of the chain rule,
\begin{equation} \label{eq:hessfx}
\mathrm{Hess}\, f_x(y) = dr_x \otimes dr_x + r_x\,\mathrm{Hess}\,r_x
\end{equation}
Just like $r_{x\,}$, $f_x$ is singular on $\mathrm{Cut}(x)$. If $y \in \mathrm{Cut}(x)$ is a first conjugate point, then $ \mathrm{Hess}\, f_x(y)$ has an eigenvalue equal to $-\infty$.
The convexity of the function $f_x$ will play a significant rôle in the following, especially when $M$ is a Hadamard manifold : a simply connected, geodesically complete Riemannian manifold of non-positive sectional curvature. When $M$ is a Hadamard manifold, the following properties hold : any $x\hspace{0.02cm},y \in M$ are connected by a unique geodesic $c$ ; for all $x \in M$, $\mathrm{Cut}(x)$ is empty, and $f_x$ is smooth and $1/2$-strongly convex ; all geodesic balls are convex (see the remarks below, for the notions of convex set and function).
Assume $M$ is a Hadamard manifold. In addition, assume that the sectional curvature of $M$ is bounded below by $\kappa_{\min} = -c^{\hspace{0.02cm}\scriptscriptstyle 2}$. Theorem <ref> may be applied to (<ref>), after setting
$\kappa_{\max} = 0$. This yields
\begin{equation} \label{eq:hesfxcomp}
g(y) \,\leq\,\mathrm{Hess}\,f_x(y)\,\leq\, c\hspace{0.03cm} r_x(y)\coth(c\hspace{0.03cm} r_x(y))\,g(y)
\end{equation}
for $y \in M$. In addition to showing that $f_x$ is $1/2$-strongly convex, this shows that $\mathrm{Hess}\,f_x$ has, at most, linear growth
\begin{equation} \label{eq:hesfxcompbis}
\mathrm{Hess}\,f_x(y)\,\leq\, (1+c\hspace{0.03cm}r_x(y))\,g(y)
\end{equation}
since $x\coth(x) \leq 1 + x$ for $x \geq 0$.
Remark : a subset $A \subset M$ is called convex (that is, strongly convex, in the terminology of [11]) if any $x\hspace{0.02cm},y \in A$ are connected by a unique length-minimising geodesic $c$, and $c$ lies entirely in $A$. A function $f:A \rightarrow \mathbb{R}$ is then called (strictly) convex if $f \circ c:\mathbb{R} \rightarrow \mathbb{R}$ is (strictly) convex, for any geodesic $c$ which lies in $A$. It is called $\alpha$-strongly convex (for some $\alpha > 0$) if $f \circ c:\mathbb{R} \rightarrow \mathbb{R}$ is $\alpha$-strongly convex, for any geodesic $c$ which lies in $A$,
\begin{equation} \label{eq:strongconv}
(f\circ c)(p\hspace{0.02cm}s + q\hspace{0.02cm}t) \leq p\hspace{0.02cm}(f\circ c)(s) + q\hspace{0.02cm}(f\circ c)(t) - \alpha\hspace{0.02cm}p\hspace{0.02cm}q\,d^{\hspace{0.03cm}2}(c(s),c(t))
\end{equation}
whenever $p\hspace{0.02cm},q \geq 0$ and $p+q = 1$. For example, if $M$ is a sphere and $A$ is the open northern hemisphere, then $A$ is convex. Then, $f_x:A \rightarrow \mathbb{R}$, where $x$ denotes the north pole, is strictly convex, but not strongly convex.
Remark : for $x \in M$, let $\mathrm{inj}(x) = d(x\hspace{0.02cm},\mathrm{Cut}(x))$ denote the injectivity radius at $x$. Then, let $\mathrm{inj}(M) = \inf_{x\in M}\mathrm{inj}(x)$, the injectivity radius of $M$. Assume all the sectional curvatures of $M$ are less than $\kappa_{\max} = c^{\hspace{0.02cm}\scriptscriptstyle 2}$. If $B(x,R)$ is a geodesic ball with radius $R \leq (1/2)\hspace{0.02cm}\min\lbrace \mathrm{inj}(M)\hspace{0.02cm},\pi\hspace{0.03cm}c^{\scriptscriptstyle -1}\rbrace$, then $B(x,R)$ is convex. Here, if $\kappa_{\max} = 0$, then $c^{\scriptscriptstyle -1}$ is understood to be $+\infty$. However, there do exist manifolds $M$ with negative sectional curvature, and with $\mathrm{inj}(M) = 0$ (e.g. the quotient of the Poincaré upper half-plane, by a discrete group of translations).
§ EXAMPLE : ROBUST RIEMANNIAN BARYCENTRE
Let $M$ be a Hadamard manifold, with sectional curvatures bounded below by $\kappa_{\min} = -c^{\hspace{0.02cm}\scriptscriptstyle 2}$. Recall that $f_x$ is $1/2$-strongly convex, and $\mathrm{Hess}\,f_x$ has, at most, linear growth (as in (<ref>)). On the other hand, consider the function
\begin{equation} \label{eq:hdist}
V_x(y) = \delta^{\hspace{0.02cm} \scriptscriptstyle 2}\left[\mathstrut 1+ \left(d(x,y)\middle/\delta\right)^{\scriptscriptstyle 2\,}\right]^{\scriptscriptstyle\frac{1}{\mathstrut 2}}\, -\,\delta^{\hspace{0.02cm} \scriptscriptstyle 2}
\end{equation}
where $\delta > 0$ is a cutoff parameter. Note that $V_x(y) \geq 0$, and $V_x(y) = 0$ if and only if $x = y$. Moreover, $V_x \sim f_{x\,}$, when $\left.d(x\hspace{0.02cm},y)\middle/\delta\right.$ is small, and $V_x \sim \delta\hspace{0.02cm}r_{x\,}$, when $\left.d(x\hspace{0.02cm},y)\middle/\delta\right.$ is large.
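The two asymptotic regimes of $V_x$ are easy to check numerically, viewing $V_x$ as a function of the distance $d(x,y)$ alone (a sketch; the function name `V` is ours):

```python
import numpy as np

def V(dist, delta):
    """Robust cost V_x(y) as a function of d = d(x, y):
    delta^2 * (sqrt(1 + (d/delta)^2) - 1)."""
    return delta**2 * (np.sqrt(1.0 + (dist / delta)**2) - 1.0)

delta = 1.0
# Quadratic regime, d << delta: V ~ d^2 / 2 (ratio close to 1).
ratio_quad = V(1e-4, delta) / (1e-4**2 / 2.0)
# Linear regime, d >> delta: V ~ delta * d (ratio close to 1).
ratio_lin = V(1e4, delta) / (delta * 1e4)
```

The cutoff $\delta$ thus interpolates between the squared distance $f_x$ (robust to nothing, but strongly convex) and the distance $\delta\hspace{0.02cm}r_x$ (robust to outliers, but not smooth at $x$).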
Let $M$ be a Hadamard manifold, with sectional curvatures bounded below by $\kappa_{\min} = -c^{\hspace{0.02cm}\scriptscriptstyle 2}$. If $V_x:M\rightarrow \mathbb{R}$ is defined as in (<ref>), then
$V_x$ is smooth, strictly (but not strongly) convex, and $\mathrm{Hess}\,V_x$ is bounded by $1+\delta\hspace{0.02cm}c$.
Let $\pi$ be a probability measure on $M$, and consider the problem of minimising
\begin{equation} \label{eq:huberpi}
V_{\pi}(y) = \int_M\,V_x(y)\,\pi(dx)
\end{equation}
A global minimiser of $V_\pi$ will be called a robust Riemannian barycentre of $\pi$. Here, the adjective “robust” comes from the field of robust statistics [13].
Let $\pi$ be a probability distribution on a Hadamard manifold $M$. If $\pi$ has finite first-order moments, then the function $V_{\pi}$ is a proper, strictly convex function, with a unique global minimum $x^* \in M$. Therefore, $\pi$ has a unique robust Riemannian barycentre $x^*$.
Recall that $\pi$ has finite first-order moments, if and only if there exists $y_o \in M$ with
\begin{equation} \label{eq:firstorder}
\int_M\,r_{x}(y_o)\,\pi(dx) \,<\,\infty
\end{equation}
and recall that $V_{\pi}$ is said to be proper if it only takes on finite values.
Proof of Proposition <ref> : by applying the chain rule to (<ref>), and using (<ref>),
\begin{equation} \label{eq:hgrad}
\mathrm{grad}\,V_x(y) = -\,\frac{\mathrm{Exp}^{-1}_y(x)}{\mathstrut \left[\mathstrut 1+ \left(d(x,y)\middle/\delta\right)^{\scriptscriptstyle 2\,}\right]^{\scriptscriptstyle\frac{1}{\mathstrut 2}}}
\end{equation}
Then, by applying (<ref>),
\begin{equation} \label{eq:hhess}
\mathrm{Hess}\,V_x(y) = -\,
\frac{{\small \mathrm{Exp}^{-1}_y(x)\otimes \mathrm{Exp}^{-1}_y(x)}}{\mathstrut\delta^{\hspace{0.02cm} \scriptscriptstyle 2} \left[\mathstrut 1+ \left(d(x,y)\middle/\delta\right)^{\scriptscriptstyle 2\,}\right]^{\scriptscriptstyle \frac{3}{\mathstrut 2}}}\,-\, \frac{\nabla\,\mathrm{Exp}^{-1}_y(x)}{\mathstrut \left[\mathstrut 1+ \left(d(x,y)\middle/\delta\right)^{\scriptscriptstyle 2\,}\right]^{\scriptscriptstyle\frac{1}{\mathstrut 2}}}
\end{equation}
To conclude, it is enough to note the inequalities,
0\,\leq\,\mathrm{Exp}^{-1}_y(x)\otimes \mathrm{Exp}^{-1}_y(x)\leq d^{\hspace{0.03cm}\scriptscriptstyle 2}(x,y)\hspace{0.03cm}g(y)
which follows since $\mathrm{Exp}^{-1}_y(x)\otimes \mathrm{Exp}^{-1}_y(x)$ is a rank-one operator in $T_yM$, and
g(y)\,\leq\,-\,\nabla\,\mathrm{Exp}^{-1}_y(x)\,\leq\, (1+c\hspace{0.03cm}r_x(y))\hspace{0.03cm}g(y)
which is the same as (<ref>), and follows from (<ref>) and (<ref>). Replacing these into (<ref>), a direct calculation shows
\begin{equation} \label{eq:hdistproof}
0\,<\,\mathrm{Hess}\,V_x(y)\,\leq\, (1+\delta\hspace{0.02cm}c)\hspace{0.03cm}g(y)
\end{equation}
which completes the proof.

Proof of Proposition <ref> : using the sub-additivity of the square root, (<ref>) and (<ref>) imply that, for any $y \in M$,
V_{\pi}(y) \,\leq\, \int_M\,r_{x}(y)\,\pi(dx)
But, by the triangle inequality, and (<ref>),
\int_M\,r_{x}(y)\,\pi(dx) \leq d(y\hspace{0.02cm},y_o) + \int_M\,r_{x}(y_o)\,\pi(dx) \,<\infty
Therefore, $V_\pi$ is proper. That $V_\pi$ is also strictly convex is an immediate consequence of Proposition <ref> : each function $V_x$ is strictly convex, and $V_\pi(y)$ is the expectation of $V_x(y)$ with respect to a random $x$ with distribution $\pi$. Now, to show that $V_\pi$ has a unique global minimum, it is enough to show that $V_\pi(y)$ goes to infinity as $y$ goes to infinity. Note that $\varphi(u) = (1+u^{\scriptscriptstyle 2})^{\scriptscriptstyle \frac{1}{2}}$ is convex. This implies (using the elementary fact that the graph of a convex function remains above any of its tangents),
V_x(y) \geq \left(\tfrac{1}{\sqrt{2}} - 1\right)\delta^{\hspace{0.02cm}\scriptscriptstyle 2}\,+\, \frac{\delta}{\sqrt{2}}\hspace{0.04cm}r_x(y)
Taking the expectation with respect to $\pi$,
V_\pi(y) \geq \left(\tfrac{1}{\sqrt{2}} - 1\right)\delta^{\hspace{0.02cm}\scriptscriptstyle 2}\,+\, \frac{\delta}{\sqrt{2}}\hspace{0.04cm}\int_M\,r_x(y)\,\pi(dx)
To see that $V_\pi(y)$ goes to infinity as $y$ goes to infinity, it is now enough to note, using the triangle inequality,
\int_M\,r_x(y)\,\pi(dx)\,\geq\,d(y\hspace{0.02cm},y_o)\,-\int_M\,r_{x}(y_o)\,\pi(dx)
where $d(y\hspace{0.02cm},y_o)$ goes to infinity as $y$ goes to infinity.
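The gradient formula for $\mathrm{grad}\,V_x$ above can be sanity-checked in the Euclidean case $M = \mathbb{R}^n$ (a Hadamard manifold with $c = 0$, where $\mathrm{Exp}^{-1}_y(x) = x - y$ and $d(x,y) = \Vert x - y\Vert$); a central finite difference of $V_x$ then matches the closed form (an illustrative sketch, with names of our choosing):

```python
import numpy as np

delta = 0.5
rng = np.random.default_rng(2)
x = rng.standard_normal(3)

def V_x(y):
    d = np.linalg.norm(x - y)
    return delta**2 * (np.sqrt(1.0 + (d / delta)**2) - 1.0)

def grad_V_x(y):
    # In Euclidean space Exp_y^{-1}(x) = x - y, so the gradient formula
    # reads: grad V_x(y) = -(x - y) / sqrt(1 + (d(x,y)/delta)^2).
    d = np.linalg.norm(x - y)
    return -(x - y) / np.sqrt(1.0 + (d / delta)**2)

y = rng.standard_normal(3)
eps = 1e-6
fd = np.array([(V_x(y + eps * e) - V_x(y - eps * e)) / (2.0 * eps)
               for e in np.eye(3)])
g = grad_V_x(y)
```

Note how, far from $x$, the gradient norm saturates near $\delta$ : the cost penalises distant points only linearly, which is the source of robustness.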
Remark : the above Proposition <ref> only requires $M$ to be a Hadamard manifold, without the additional condition that it have sectional curvatures bounded below. Indeed, Proposition <ref> only relies on the fact that $V_x$ is strictly convex, and not on the fact that the Hessian of $V_x$ is bounded above by $1+\delta\hspace{0.02cm}c$.
Remark : if a function $V:M\rightarrow \mathbb{R}$, on a Riemannian manifold $M$, has bounded Hessian, then it has Lipschitz-gradient.
That is, if there exists $\ell \geq 0$ such that $\left|\mathrm{Hess}\,V(x)(u,u)\right|\leq \ell\hspace{0.02cm}g(u,u)$ for all $x \in M$ and $u \in T_xM$, then
\begin{equation} \label{eq:lipschitzgrad}
\left\Vert \Pi^{\scriptscriptstyle 0}_{\scriptscriptstyle 1}\left(\mathrm{grad}\,V_{c(1)}\right) - \mathrm{grad}\,V_{c(0)}\hspace{0.03cm}\right\Vert_{c(0)}\,\leq\,\ell\hspace{0.02cm}L(c)
\end{equation}
for any smooth curve $c:[0,1]\rightarrow M$, where $L(c)$ is the length of $c$. This is due to the following.
Let $X$ be a vector field on a Riemannian manifold $M$. If the operator norm of the covariant derivative $\nabla X$ is bounded by $\ell \geq 0$, then
\begin{equation} \label{eq:lipschitzfield}
\left\Vert \Pi^{\scriptscriptstyle 0}_{\scriptscriptstyle 1}\left(X_{c(1)}\right) - X_{c(0)}\hspace{0.03cm}\right\Vert_{c(0)}\,\leq\,\ell\hspace{0.02cm}L(c)
\end{equation}
for any smooth curve $c:[0,1]\rightarrow M$.
Sketch of proof : let $u_i$ be a parallel orthonormal basis along $c$ ($u_i$ are vector fields along $c$, with $u_i(t)$ an orthonormal basis of $T_{c(t)}M$, for each $t$). Let $X^i(t) = \langle X\hspace{0.02cm},u_i\rangle_{c(t)\,}$ and note
\left\Vert \Pi^{\scriptscriptstyle 0}_{\scriptscriptstyle 1}\left(X_{c(1)}\right) - X_{c(0)}\hspace{0.03cm}\right\Vert^2_{c(0)} =
\sum^n_{i=1}\left(X^i(1) - X^i(0)\right)^{\!2} = \sum^n_{i=1}\left(\int^1_0\,\langle \nabla_{\dot{c}\,}X\hspace{0.02cm},u_{i\hspace{0.02cm}}\rangle_{c(t)\,}dt\right)^{\!\!2}
The proof then follows by using Jensen's inequality, since $\Vert \nabla_{\dot{c}\,}X\Vert_{c(t)} \leq \ell\hspace{0.03cm}\Vert \dot{c}\Vert_{c(t)\,}$.
§ RIEMANNIAN VOLUME AND INTEGRAL FORMULAE
§.§ Elementary volume comparison
If a Riemannian manifold $M$ is orientable, then $M$ admits a volume form, called the Riemannian volume form, to be denoted $\mathrm{vol}$, in the following. In terms of local coordinates $(x^i\,;i=1,\ldots,n)$
\begin{equation} \label{eq:volumeform}
\mathrm{vol} = \det(g)^{\frac{1}{2}}\,dx^1 \wedge \ldots \wedge dx^n
\end{equation}
where $\det(g)$ is the determinant of the metric, which is equal to the determinant of the matrix $(g_{ij})$, defined in (<ref>). Then, the integral of a continuous, compactly-supported function $f:M\rightarrow \mathbb{R}$, with respect to $\mathrm{vol}$, is the integral of the $n$-form $f\hspace{0.02cm}\mathrm{vol}$ over $M$. This is denoted $\int_M\, f(x)\,\mathrm{vol}(dx)$. There exists a unique measure $|\mathrm{vol}|$ on the Borel $\sigma$-algebra of $M$, such that [14] (Chapter 8), for continuous, compactly-supported $f$,
\int_M\, f(x)\,\mathrm{vol}(dx) \,=\, \int_M\,f(x)\,|\mathrm{vol}|(dx)
where the integral on the left is a Riemann integral, and the integral on the right is a Lebesgue integral. It is quite useful to study these integrals using geodesic spherical coordinates (which were introduced in <ref>). Let $(r,\theta^{\scriptscriptstyle\,\alpha})$ be geodesic spherical coordinates, with origin at $x \in M$. Recall that these are defined on $\mathrm{U}_{x} = \mathrm{D}(x) - \lbrace x \rbrace$, where $\mathrm{D}(x)$ is the injectivity domain of $x$. Since $M$ can be decomposed as in (<ref>), and $\mathrm{Cut}(x)$ is negligible,
\begin{equation} \label{eq:integralux}
\int_M\,f(y)\,\mathrm{vol}(dy) \,=\,\int_{\,\mathrm{U}_{x}}f(y)\,\mathrm{vol}(dy)
\end{equation}
Using (<ref>) and (<ref>), $\mathrm{vol}(dy) = \det(\mathcal{A}(y))\,dr\wedge \omega_{n-1}(d\theta)$, where $\omega_{n-1}$ is the area measure on the unit sphere in $T_xM$ (as of now, this is identified with the Euclidean unit sphere $S^{n-1}$). Using (<ref>) and $\mathrm{D}(x) = \mathrm{Exp}\left( \mathrm{TD}(x)\right)$, (<ref>) yields
\begin{equation} \label{eq:integralspherical}
\int_M\,f(y)\,\mathrm{vol}(dy) \,=\, \int_{S^{n-1}}\int^{\mathrm{t}(\theta)}_0 f(r,\theta)\,\det(\mathcal{A}(r,\theta))\,dr\hspace{0.03cm} \omega_{n-1}(d\theta)
\end{equation}
where $\mathrm{t}$ was defined in (<ref>). This formula expresses integrals, with respect to the Riemannian volume form, using geodesic spherical coordinates.
Recall the Laplacian $\Delta\hspace{0.03cm} r_x = \mathrm{div}\hspace{0.03cm}\partial_{\hspace{0.02cm}r\,}$. By definition of the divergence, $\mathcal{L}_{\partial_{\hspace{0.02cm}r}}\hspace{0.02cm}\mathrm{vol} = (\mathrm{div}\hspace{0.02cm}\partial_{\hspace{0.02cm}r})\hspace{0.02cm}\mathrm{vol}$. Writing this in geodesic spherical coordinates,
\begin{equation} \label{eq:laplacelogdet}
\Delta\hspace{0.03cm} r_x(r,\theta) \,=\, \partial_{\hspace{0.02cm}r}\hspace{0.02cm}\log\det(\mathcal{A}(r,\theta))
\end{equation}
Accordingly, the comparison theorems <ref> can be used to obtain the volume comparison theorem.
Assume the sectional curvatures of $M$ lie within the interval $[\kappa_{\min}\hspace{0.02cm},\kappa_{\max}\hspace{0.03cm}]\hspace{0.02cm}$. Then,
\begin{equation} \label{eq:volcomp}
\mathrm{sn}^{n-1}_{\kappa_{\max}}(r)\,\leq\, \det(\mathcal{A}(r,\theta)) \,\leq\, \mathrm{sn}^{n-1}_{\kappa_{\min}}(r)
\end{equation}
\begin{equation} \label{eq:laplacecomp}
(n-1)\hspace{0.02cm}\mathrm{ct}_{\kappa_{\max}}(r)\,\leq\, \partial_{\hspace{0.02cm}r}\hspace{0.02cm}\log\det(\mathcal{A}(r,\theta))\,\leq\, (n-1)\hspace{0.02cm}\mathrm{ct}_{\kappa_{\min}}(r)
\end{equation}
This volume comparison theorem is quite elementary, as stronger and deeper comparison results do exist[For example, Gromov's volume comparison theorem can be used to give a short proof of the famous “sphere theorem", Theorem III.4.6 in [11]. ]. Moreover, in this theorem, the lower bound on sectional curvature may be replaced by a lower bound on Ricci curvature, without any change to the conclusion.
Remark : roughly, (<ref>) states that “more curvature means less volume". If $f:M\rightarrow \mathbb{R}$ is a non-negative function of distance to $x$, so $f(y) = f(r)$ in terms of the coordinates $(r,\theta^{\scriptscriptstyle\,\alpha})$, then
\begin{equation} \label{eq:integralcomp}
\omega_{n-1}\,\int^{\scriptscriptstyle R}_{\scriptscriptstyle 0}\,f(r)\,\mathrm{sn}^{n-1}_{\kappa_{\max}}(r)\hspace{0.02cm}dr\leq\,\int_{\scriptscriptstyle B(x,R)}f(y)\,\mathrm{vol}(dy)\,\leq\,\omega_{n-1}\,
\int^{\scriptscriptstyle R}_{\scriptscriptstyle 0}\,f(r)\,\mathrm{sn}^{n-1}_{\kappa_{\min}}(r)\hspace{0.02cm}dr
\end{equation}
for any $R \leq \min\lbrace \mathrm{inj}(x)\hspace{0.03cm},\pi\hspace{0.03cm}c^{\scriptscriptstyle -1}\rbrace$. Here, $\mathrm{inj}(x)$ is the injectivity radius at $x$, $c = |\kappa_{\max}|^{\scriptscriptstyle 1/2\hspace{0.03cm}}$, and $\omega_{n-1}$ denotes the area of $S^{\hspace{0.02cm}n-1}$. In addition, if $\kappa_{\max} \leq 0$, then $c^{\scriptscriptstyle -1}$ is understood to be $+\infty$.
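A quick numerical sketch of the comparison (<ref>), in the case $n = 2$ (this is an illustration of ours, not from the text; the helper names `sn` and `ball_volume` are ours). For constant curvature, both bounds become equalities, and on the unit sphere the formula reproduces the exact area $2\pi(1-\cos R)$ of a geodesic disc; the inequalities then express that "more curvature means less volume".

```python
import math
from scipy.integrate import quad

def sn(kappa, r):
    # model function sn_kappa: sin(sqrt(k) r)/sqrt(k), r, or sinh(sqrt(-k) r)/sqrt(-k)
    if kappa > 0:
        c = math.sqrt(kappa)
        return math.sin(c * r) / c
    if kappa < 0:
        c = math.sqrt(-kappa)
        return math.sinh(c * r) / c
    return r

def ball_volume(kappa, R, n=2):
    # volume of a geodesic ball of radius R in the model of constant curvature kappa:
    # omega_{n-1} * integral_0^R sn_kappa(r)^{n-1} dr
    omega = 2 * math.pi  # length of S^1 (the case n = 2)
    value, _ = quad(lambda r: sn(kappa, r) ** (n - 1), 0, R)
    return omega * value

R = 1.0
# on the unit sphere S^2 (kappa = 1), the formula gives the exact area 2 pi (1 - cos R)
assert abs(ball_volume(1.0, R) - 2 * math.pi * (1 - math.cos(R))) < 1e-10
# "more curvature means less volume"
assert ball_volume(1.0, R) < ball_volume(0.0, R) < ball_volume(-1.0, R)
```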
In general, it may be impossible to apply the integral formula (<ref>), since $\mathrm{t}(\theta)$ may be unknown. Here are two examples where $\mathrm{t}(\theta)$ is known, and quite tractable (in fact, constant).
Example 1 : if $M$ is a Hadamard manifold, then for any choice of the origin $x$, and any $\theta \in S^{n-1}$, one has $\mathrm{t}(\theta) = \infty$, and (<ref>) becomes
\begin{equation} \label{eq:integralsphericalhadamard}
\int_M\,f(y)\,\mathrm{vol}(dy) \,=\, \int^{\infty}_0\!\!\!\int_{S^{n-1}}f(r,\theta)\,\det(\mathcal{A}(r,\theta))\,dr\hspace{0.03cm} \omega_{n-1}(d\theta)
\end{equation}
Example 2 : compact rank-one symmetric spaces are the following manifolds : spheres, real projective spaces, complex projective spaces, quaternion projective spaces, and the Cayley plane. These are manifolds all of whose geodesics are closed (i.e. periodic) and isometric to one another
(see [15], for a detailed account). Therefore, $\mathrm{t}(\theta)$ does not depend on $x$ nor on $\theta$, but is always equal to $l/2$, where $l$ is the length of a simple geodesic loop. Scaling the metric so the maximum sectional curvature is equal to $1$, it can be shown that $l = \pi$ for real projective spaces, and $l = 2\pi$ in all other cases. Moreover, (<ref>) takes on the form (this may be found by looking up the solution of the Jacobi equation in [15], Page 82),
\begin{equation} \label{eq:integralsphericalcross}
\int_M\,f(y)\,\mathrm{vol}(dy) \,=\, \int^{\frac{l}{\mathstrut 2}}_0\!\!\!\int_{S^{n-1}}f(r,\theta)\left(\sin(r)\right)^{k-1}\left(2\sin(r/2)\right)^{n-k}\,dr\hspace{0.03cm} \omega_{n-1}(d\theta)
\end{equation}
where $k= n$ for spheres and real projective spaces, and $k = 2$ or $4$ for complex or quaternion projective spaces, respectively. For the Cayley plane, $n = 16$ and $k = 8$.
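For spheres ($k = n$), the density above reduces to $\sin^{n-1}(r)$, and the formula can be checked against the known closed-form area of $S^n$. A minimal numerical check (ours, not from the text), using the recursion $\mathrm{vol}(S^n) = \omega_{n-1}\int_0^\pi \sin^{n-1}(r)\,dr$:

```python
import math
from scipy.integrate import quad

def sphere_area(n):
    # area of the unit sphere S^n: 2 pi^{(n+1)/2} / Gamma((n+1)/2)
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

for n in range(2, 7):
    # with k = n and l = 2 pi, the integral formula gives
    # vol(S^n) = omega_{n-1} * integral_0^pi sin^{n-1}(r) dr
    value, _ = quad(lambda r: math.sin(r) ** (n - 1), 0, math.pi)
    assert abs(sphere_area(n - 1) * value - sphere_area(n)) < 1e-9
```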
§.§ Riemannian symmetric spaces
A Riemannian symmetric space is a Riemannian manifold $M$, such that, for each $x \in M$, there exists an isometry $s_x : M \rightarrow M$, with $s_x(x) = x$ and $d\hspace{0.02cm}s_x(x) = -\mathrm{Id}_x\,$. This isometry $s_x$ is called the geodesic symmetry at $x$.
Let $G$ denote the identity component of the isometry group of $M$, and $K = K_o$ be the stabiliser in $G$ of some point $o \in M$[According to the Myers-Steenrod theorem, $G$ is a connected Lie group, and $K$ a compact subgroup of $G$.]. Then, $M = G/K$ is a Riemannian homogeneous space. The mapping $\theta : G \rightarrow G$, where $\theta(g) = s_o\circ g \circ s_o$ is an involutive isomorphism of $G$.
Let $\mathfrak{g}$ denote the Lie algebra of $G$, and consider the Cartan decomposition, $\mathfrak{g} = \mathfrak{k} + \mathfrak{p}$, where $\mathfrak{k}$ is the $+1$ eigenspace of $d\hspace{0.02cm}\theta$ and $\mathfrak{p}$ is the $-1$ eigenspace of $d\hspace{0.02cm}\theta$. One clearly has the commutation relations,
\begin{equation} \label{eq:sscommute}
[\mathfrak{k},\mathfrak{k}] \subset \mathfrak{k}\hspace{0.2cm};\hspace{0.2cm}
[\mathfrak{k},\mathfrak{p}] \subset \mathfrak{p}\hspace{0.2cm};\hspace{0.2cm}
[\mathfrak{p},\mathfrak{p}] \subset \mathfrak{k}
\end{equation}
In addition, it turns out that $\mathfrak{k}$ is the Lie algebra of $K$, and that $\mathfrak{p}$ may be identified with $T_oM$.
The Riemannian metric of $M$ may always be expressed in terms of an $\mathrm{Ad}(K)$-invariant scalar product $Q$ on $\mathfrak{g}$. If $x \in M$ is given by $x = g\cdot o$ for some $g \in G$ (where $g\cdot o = g(o)$), then
\begin{equation} \label{eq:ssmetric}
\langle u,\!v\rangle_{\scriptscriptstyle x} = Q(g^{\scriptscriptstyle -1}\cdot u\hspace{0.02cm},g^{\scriptscriptstyle -1}\cdot v)
\end{equation}
where the vectors $g^{\scriptscriptstyle -1}\cdot u$ and $g^{\scriptscriptstyle -1}\cdot v$, which belong to $T_oM$, are identified with elements of $\mathfrak{p}$. Here, by an abuse of notation, $d\hspace{0.02cm}g^{\scriptscriptstyle -1}\cdot u$ is denoted $g^{\scriptscriptstyle -1}\cdot u$.
Let $\exp:\mathfrak{g} \rightarrow G$ denote the Lie group exponential. If $v \in T_oM$, then the Riemannian exponential $\mathrm{Exp}_o(v)$ is given by
\begin{equation} \label{eq:ssexp1}
\mathrm{Exp}_o(v) = \exp(v)\cdot o
\end{equation}
Moreover, if $\Pi^t_{\scriptscriptstyle 0}$ denotes parallel transport along the geodesic $c(t) = \mathrm{Exp}_o(tv)$, then
\begin{equation} \label{eq:ssparallel}
\Pi^t_{\scriptscriptstyle 0}(u) = \exp(tv)\cdot u
\end{equation}
for any $u \in T_o M$ (note that the identification $T_oM \simeq \mathfrak{p}$ is always made, implicitly). Using (<ref>), one can derive the following expression for the Riemann curvature tensor at $o$,
\begin{equation} \label{eq:sscurvature}
R_o(v,u)w = -[[v\hspace{0.03cm},u]\hspace{0.02cm},w] \hspace{1cm} v,u,w \in T_oM
\end{equation}
A fundamental property of symmetric spaces is that the curvature tensor is parallel : $\nabla\,R = 0$. This is often used to solve the Jacobi equation (<ref>), and then to express the derivative of the Riemannian exponential, as in (<ref>),
\begin{equation} \label{eq:dexpss}
\mathrm{d\hspace{0.02cm}Exp}_x(v)(u) \,=\, \exp(v)\cdot \mathrm{sh}(R_v)(u)
\end{equation}
where $\mathrm{sh}(R_v) = \sum^{\infty}_{n=0} (R_v)^n/(2n+1)!$ for the self-adjoint curvature operator $R_v(u) = [v\hspace{0.03cm},[v\hspace{0.02cm},u]]$. Since $\exp(v)$ is an isometry, the following expression of the Riemannian volume is immediate
\begin{equation} \label{eq:ssvol}
\mathrm{Exp}^*_o(\mathrm{vol}) = \left|\det(\mathrm{sh}(R_v))\right|\hspace{0.02cm}dv
\end{equation}
where $dv$ denotes the volume form on $T_oM$, associated with the restriction of $Q$ to $\mathfrak{p}$.
Expression (<ref>) yields applicable integral formulae, when $\mathfrak{g}$ is a reductive Lie algebra ($\mathfrak{g} = \mathfrak{z} + \mathfrak{g}_{ss}$ : $\mathfrak{z}$ the centre of $\mathfrak{g}$ and $\mathfrak{g}_{ss}$ semisimple). If $\mathfrak{a}$ is a maximal Abelian subspace of $\mathfrak{p}$[Recall that the dimension of $\mathfrak{a}$ is known as the rank of $M$. In fact, $\mathrm{Exp}_o(\mathfrak{a})$ is a totally flat submanifold of $M$, of maximal dimension, and the only such submanifold, up to isometry.], any $v \in \mathfrak{p}$ is of the form $v = \mathrm{Ad}(k)\,a$ for some $k \in K$ and $a \in\mathfrak{a}$ (see [10], Lemma 6.3, Chapter V). Moreover, using the fact that $\mathrm{Ad}(k)$ is an isomorphism of $\mathfrak{g}$,
\begin{equation} \label{eq:raeigen}
\mathrm{Ad}(k^{\scriptscriptstyle -1})\circ R_v \circ \mathrm{Ad}(k) = R_a = \sum_{\lambda \in \Delta_+} (\lambda(a))^2\;\Pi_{\lambda}
\end{equation}
where each $\lambda \in \Delta_+$ is a linear form $\lambda : \mathfrak{a} \rightarrow \mathbb{R}$, and $\Pi_{\lambda}$ is the orthogonal projector onto the corresponding eigenspace of $R_{a\,}$. Here, $\Delta_+$ is the set of positive roots of $\mathfrak{g}$ with respect to $\mathfrak{a}$ [10] (see Lemma 2.9, Chapter VII).
It is possible to use the diagonalisation (<ref>), in order to evaluate the determinant (<ref>). To obtain a regular parameterisation, let $S = K/K_{\mathfrak{a\,}}$, where $K_{\mathfrak{a}}$ is the centraliser of $\mathfrak{a}$ in $K$. Then, let $\varphi : S \times \mathfrak{a} \rightarrow M$ be given by $\varphi(s\hspace{0.02cm},a) = \mathrm{Exp}_o(\beta(s\hspace{0.02cm},a))$ where $\beta(s,a) = \mathrm{Ad}(s)\,a$. Now, by (<ref>) and (<ref>),
\varphi^*(\mathrm{vol}) = \prod_{\lambda \in \Delta_+} \left| \frac{\sinh\hspace{0.02cm}\lambda(a)}{\lambda(a)}\right|^{m_\lambda}\hspace{0.03cm}\beta^*(dv)
where $m_\lambda$ is the multiplicity of $\lambda$ (the rank of $\Pi_\lambda$). On the other hand, one may show that
\begin{equation} \label{eq:beta*}
\beta^*(dv) = \prod_{\lambda \in \Delta_+} |\lambda(a)|^{ m_\lambda}\hspace{0.05cm}da\,\omega(ds)
\end{equation}
where $da$ is the volume form on $\mathfrak{a}$, and $\omega$ is the invariant volume induced onto $S$ from $K$.
Finally, the Riemannian volume, in terms of the parameterisation $\varphi$, takes on the form
\begin{equation} \label{eq:ssvolka}
\varphi^*(\mathrm{vol}) = \prod_{\lambda \in \Delta_+} \left| \sinh\hspace{0.02cm}\lambda(a)\right|^{ m_\lambda}\hspace{0.03cm}da\,\omega(ds)
\end{equation}
Using (<ref>), it will be possible to write down integral formulae for Riemannian symmetric spaces, either non-compact or compact.
§.§.§ The non-compact case
This is the case where $\mathfrak{g}$ admits an $\mathrm{Ad}(G)$-invariant, non-degenerate, symmetric bilinear form $B$, such that $Q(u\hspace{0.02cm},z) = - B(u,d\hspace{0.02cm}\theta(z))$ is an $\mathrm{Ad}(K)$-invariant scalar product on $\mathfrak{g}$. In this case, $B$ is
negative-definite on $\mathfrak{k}$ and positive-definite on $\mathfrak{p}$. Moreover, $\mathrm{ad}(z) = [z,\cdot]$ is skew-symmetric or symmetric (with respect to $Q$), according to whether $z \in \mathfrak{k}$ or $z \in \mathfrak{p}$.
If $u_{\scriptscriptstyle 1\hspace{0.02cm}},u_{\scriptscriptstyle 2} \in \mathfrak{p}$ are orthonormal, the sectional curvature of $\mathrm{Span}(u_{\scriptscriptstyle 1\hspace{0.02cm}},u_{\scriptscriptstyle 2})$ is found from (<ref>), $\kappa(u_{\scriptscriptstyle 1\hspace{0.02cm}},u_{\scriptscriptstyle 2}) = - \Vert [u_{\scriptscriptstyle 1\hspace{0.02cm}},u_{\scriptscriptstyle 2}] \Vert^2_o \leq 0$. Therefore, $M$ has non-positive sectional curvature.
In fact, $M$ is a Hadamard manifold. It is geodesically complete by (<ref>). It is moreover simply connected, because $\mathrm{Exp}_o:\mathfrak{p} \rightarrow M$ is a diffeomorphism [10] (Theorem 1.1, Chapter VI). Thus, (<ref>) yields a first integral formula,
\begin{equation} \label{eq:ssvolnc}
\int_M\,f(x)\,\mathrm{vol}(dx) = \int_{\mathfrak{p}}\, f(\mathrm{Exp}_o(v))\hspace{0.02cm}\left|\det(\mathrm{sh}(R_v))\right|\hspace{0.02cm}dv
\end{equation}
To obtain an integral formula from (<ref>), one should first note that $\beta : S \times \mathfrak{a} \rightarrow \mathfrak{p}$ is not regular, nor one-to-one. Recall the following :
$\bullet$ the hyperplanes $\lambda(a) = 0$, where $\lambda \in \Delta_+\,$, divide $\mathfrak{a}$ into finitely many connected components, which are open and convex sets, known as Weyl chambers. From (<ref>), $\beta$ is regular on each Weyl chamber.
$\bullet$ let $K^\prime_{\mathfrak{a}}$ denote the normaliser of $\mathfrak{a}$ in $K$. Then, $W = \left.K^\prime_{\mathfrak{a}}\middle/K_{\mathfrak{a}}\right.$ is a finite group of automorphisms of $\mathfrak{a}$, called the Weyl group, which acts freely and transitively on the set of Weyl chambers [10] (Theorem 2.12, Chapter VII).
Then, for each Weyl chamber $C$, $\beta$ is regular and one-to-one, from $S\times C$ onto its image in $\mathfrak{p}$. Moreover, if $\mathfrak{a}_r$ is the union of the Weyl chambers ($a \in \mathfrak{a}_r$ if and only if $\lambda(a) \neq 0$ for any $\lambda \in \Delta_+$), then $\beta$ is regular and $|W|$-to-one from $S \times \mathfrak{a}_r$ onto its image in $\mathfrak{p}$.
To obtain the desired integral formula, it only remains to note that $\varphi$ is a diffeomorphism from $S\times C$ onto its image in $M$. Moreover, this image is the set $M_r$ of regular values of $\varphi$. By Sard's lemma, its complement is negligible [16].
Let $M = G/K$ be a Riemannian symmetric space, which belongs to the “non-compact case", just described. Then, for any bounded continuous function $f:M\rightarrow \mathbb{R}$,
\begin{equation} \label{eq:ssvolncka1}
\int_M\,f(x)\,\mathrm{vol}(dx) \,=\,
\int_{C_+}\int_{S}f(\varphi(s,a))\hspace{0.03cm}\prod_{\lambda \in \Delta_+}\left( \sinh\hspace{0.02cm}\lambda(a)\right)^{ m_\lambda}\hspace{0.03cm}da\,\omega(ds)
\end{equation}
\begin{equation} \label{eq:ssvolncka2}
\phantom{\int_M\,f(x)\,\mathrm{vol}(dx)} \,=\,
\frac{1}{|W|}\,\int_{\mathfrak{a}}\int_{S}\,f(\varphi(s,a))\hspace{0.03cm}\prod_{\lambda \in \Delta_+}\left| \sinh\hspace{0.02cm}\lambda(a)\right|^{ m_\lambda}\hspace{0.03cm}da\,\omega(ds)
\end{equation}
Here, $C_+$ is the Weyl chamber $C_+ = \lbrace a \in \mathfrak{a}\,: \lambda \in \Delta_+ \Rightarrow \lambda(a) > 0\rbrace$.
Example 1 : consider $M = \mathrm{H}(N)$ the space of $N \times N$ Hermitian positive-definite matrices. Here, $G = \mathrm{GL}(N,\mathbb{C})$ and $K = U(N)$, the groups of $N \times N$, complex, invertible and unitary matrices. Moreover, $B(u,\!z) = \mathrm{Re}(\mathrm{tr}(uz))$ and $d\hspace{0.02cm}\theta(z) = -z^\dagger$. Thus, $\mathfrak{p}$ is the space of $N \times N$ Hermitian matrices, and one may choose $\mathfrak{a}$ the space of $N \times N$ real diagonal matrices. The positive roots are the linear maps $\lambda(a) = a_{ii} - a_{jj}$ where $i < j$, and each one has its multiplicity equal to $2$. Thus, $C_+$ is the cone of real diagonal matrices $a$ with $a_{\scriptscriptstyle 11} > \ldots > a_{\scriptscriptstyle NN} > 0$. The Weyl group $W$ is the group of permutation matrices in $U(N)$ (so $|W| = N!$). Finally, $S= U(N)/T_{\scriptscriptstyle N} \equiv S_{\scriptscriptstyle N\,}$, where $T_{\scriptscriptstyle N}$ is the torus of diagonal unitary matrices. By (<ref>),
\begin{equation} \label{eq:ssvolncka2hn}
\int_{\mathrm{H}(N)}\,f(x)\,\mathrm{vol}(dx) \,=\,
\frac{1}{N!}\,\int_{\mathfrak{a}}\int_{S_{\scriptscriptstyle N}}\,f\left(s\hspace{0.02cm}\exp(2a)\hspace{0.02cm}s^\dagger\right)\hspace{0.03cm}\prod_{i < j} \sinh^2(a_{ii} - a_{jj})\hspace{0.03cm}da\,\omega(ds)
\end{equation}
where $da = da_{\scriptscriptstyle 11}\ldots da_{\scriptscriptstyle NN\,}$.
Example 2 : pursuing the previous example, assume $f$ is a class function : $f(k\cdot x) = f(x)$ for $k \in K$ and $x \in \mathrm{H}(N)$. That is, $f(x)$ depends only on the eigenvalues $x_i = e^{r_i}$ of $x$. By (<ref>),
\begin{equation} \label{eq:integralhn}
\int_{\mathrm{H}(N)}\,f(x)\,\mathrm{vol}(dx) \,=\,
\frac{\omega(S_{\scriptscriptstyle N})}{2^{\scriptscriptstyle N}N!}\, \int_{\mathbb{R}^{\scriptscriptstyle N}}\,f\left(\exp(r)\right)\hspace{0.03cm}\prod_{i < j}\sinh^2((r_i - r_j)/{2})\hspace{0.03cm}dr
\end{equation}
or, by introducing the eigenvalues $x_i$ as integration variables,
\begin{equation} \label{eq:integralhnvmonde}
\int_{\mathrm{H}(N)}\,f(x)\,\mathrm{vol}(dx) \,=\,
\frac{\omega(S_{\scriptscriptstyle N})}{2^{\scriptscriptstyle N^2}N!} \,\int_{\mathbb{R}^{\scriptscriptstyle N}_+}\,f\left(x_{\scriptscriptstyle 1\,},\ldots,x_{\scriptscriptstyle N}\right)\hspace{0.03cm}|V(x)|^2\hspace{0.03cm}\prod^N_{i=1} x^{\scriptscriptstyle -N}_i\hspace{0.02cm}dx_i
\end{equation}
where $V(x) = \prod_{i<j} (x_j - x_i)$ is the Vandermonde determinant. Integrals of this form are well-known in random matrix theory [17].
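The passage between the last two formulae rests on the pointwise identity $\sinh^2((r_i - r_j)/2) = (x_i - x_j)^2/(4\hspace{0.02cm}x_i x_j)$ with $x_i = e^{r_i}$; the Jacobian $dr_i = dx_i/x_i$ then turns $\prod_i x_i^{-(N-1)}$ into $\prod_i x_i^{-N}$, and $2^{-N}\cdot 2^{-N(N-1)} = 2^{-N^2}$ accounts for the prefactor. A numerical check of the density identity (an illustration of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
r = rng.normal(size=N)
x = np.exp(r)  # eigenvalues x_i = e^{r_i}

# density appearing in the r-variables (up to the overall 2^{-N} prefactor)
lhs = np.prod([np.sinh((r[i] - r[j]) / 2) ** 2
               for i in range(N) for j in range(i + 1, N)])

# Vandermonde form: |V(x)|^2 * prod_i x_i^{-(N-1)} / 2^{N(N-1)}
V2 = np.prod([(x[j] - x[i]) ** 2
              for i in range(N) for j in range(i + 1, N)])
rhs = V2 * np.prod(x ** (-(N - 1))) / 2.0 ** (N * (N - 1))

assert np.isclose(lhs, rhs)
```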
§.§.§ The compact case
In this case, $\mathfrak{g}$ admits an $\mathrm{Ad}(G)$-invariant scalar product $Q$. Therefore, $\mathrm{ad}(z)$ is skew-symmetric, with respect to $Q$, for each $z \in \mathfrak{g}$. Using (<ref>), it follows that $M$ is compact, with non-negative sectional curvature.
In fact, the compact case may be obtained from the previous non-compact case by duality. Denote $\mathfrak{g}_{\scriptscriptstyle\, \mathbb{C}}$ the complexification of $\mathfrak{g}$, and let $\mathfrak{g}^* = \mathfrak{k} + \mathfrak{p}_*$ where $\mathfrak{p}_* = i\hspace{0.02cm}\mathfrak{p}$. Then, $\mathfrak{g}^*$ is a compact real form of $\mathfrak{g}_{\scriptscriptstyle\, \mathbb{C}}$ (that is, $\mathfrak{g}^*$ is a compact Lie algebra, and its complexification is equal to $\mathfrak{g}_{\scriptscriptstyle\, \mathbb{C}}$). Denote $G^*$ the connected Lie group with Lie algebra $\mathfrak{g}^*$.
If $M = G/K$ is a Riemannian symmetric space which belongs to the non-compact case, then $M^* = G^*\!/K$ is a Riemannian symmetric space which belongs to the compact case. Formally, to pass from the non-compact case to the compact case, all one has to do is replace $a$ by $i\hspace{0.02cm}a$. Applying this recipe to (<ref>), one obtains
\begin{equation} \label{eq:ssvolkac}
\varphi^*(\mathrm{vol}) = \prod_{\lambda \in \Delta_+} \left| \sin\hspace{0.02cm}\lambda(a)\right|^{ m_\lambda}\hspace{0.03cm}da\,\omega(ds)
\end{equation}
where $da$ is the volume form on $\mathfrak{a}_* = i\hspace{0.02cm}\mathfrak{a}$, and $\omega$ is the invariant volume induced onto $S$ from $K$.
Note that the image under $\mathrm{Exp}_o$ of $\mathfrak{a}_*$ is the torus $T_* = \mathfrak{a}_*/\mathfrak{a}_{\scriptscriptstyle K\,}$, where $\mathfrak{a}_{\scriptscriptstyle K}$ is the lattice given by $\mathfrak{a}_{\scriptscriptstyle K} = \lbrace a \in \mathfrak{a}_*: \mathrm{Exp}_o(a) = o \rbrace$. Recall the following :
$\bullet$ $\varphi(s,a)$ only depends on $t = \mathrm{Exp}_o(a)$. Thus, $\varphi$ may be considered as a map from $S \times T_*$ to $M$.
$\bullet$ if $a \in \mathfrak{a}_{\scriptscriptstyle K}$ then $\exp(2a) = e$ (the identity element in $G^*$). Thus, $\lambda(a) \in i\hspace{0.02cm}\pi\,\mathbb{Z}$ for all $\lambda \in \Delta_+$ [10] (Page 383). Therefore, there exists a function $D:T_* \rightarrow \mathbb{R}$, such that
D(t) = \prod_{\lambda \in \Delta_+} \left| \sin\hspace{0.02cm}\lambda(a)\right|^{ m_\lambda} \hspace{0.5cm}
\text{whenever $t = \mathrm{Exp}_o(a)$}
Now, $T_*$ is a totally flat submanifold of $M$. Therefore, $\mathrm{Exp}_o^*(dt) = da$, where $dt$ denotes the invariant volume induced onto $T_*$ from $M$. With a slight abuse of notation, (<ref>) now reads,
\begin{equation} \label{eq:ssvolkac1}
\varphi^*(\mathrm{vol}) = D(t)\hspace{0.03cm}dt\,\omega(ds)
\end{equation}
Denote $(T_*)_r$ the set of $t \in T_*$ such that $D(t) \neq 0$. By the same arguments as in the non-compact case, $\varphi$ is a regular $|W|$-to-one map from $S \times (T_*)_r$ onto $M_r\,$, the set of regular values of $\varphi$.
Let $M = G^*\!/K$ be a Riemannian symmetric space, which belongs to the “compact case", just described. Then, for any bounded continuous function $f:M\rightarrow \mathbb{R}$,
\begin{equation} \label{eq:ssvolkac2}
\int_M\,f(x)\,\mathrm{vol}(dx) \,=\,
\frac{1}{|W|}\,\int_{T_*}\int_{S}\,f(\varphi(s,t))\hspace{0.03cm}D(t)\hspace{0.03cm}dt\,\omega(ds)
\end{equation}
Example 1 : the dual of $\mathrm{H}(N)$ is the unitary group $U(N)$. Here, $G^* = U(N)\times U(N)$ and $K \simeq U(N)$ is the diagonal group $K = \lbrace (x\hspace{0.02cm},x)\,;x \in U(N)\rbrace$. The Riemannian metric is given by the trace scalar product $Q(u,\!z) = -\mathrm{tr}(uz)$. Moreover, $T_* = T_{\scriptscriptstyle N}$ and $S= S_{\scriptscriptstyle N}$ (this is $U(N)/T_{\scriptscriptstyle N}$). The positive roots are $\lambda(ia) = a_{ii} - a_{jj}$ where $i < j$ and where $a$ is $N \times N$, real and diagonal[Please do not confuse the imaginary number $i$ with the subscript $i$.]. By writing the integral over $T_{\scriptscriptstyle N}$ as a multiple integral, (<ref>) reads,
\begin{equation} \label{eq:ssvolkacun}
\int_{U(N)}\,f(x)\,\mathrm{vol}(dx) \,=\,
\frac{1}{N!}\,\int_{[0\hspace{0.02cm},2\pi]^N}\int_{S_{\scriptscriptstyle N}}\,f\left(s\hspace{0.02cm}\exp(2ia)\hspace{0.02cm}s^\dagger\right)\hspace{0.03cm}\prod_{i < j} \sin^2(a_{ii} - a_{jj})\hspace{0.03cm}\omega(ds)\,da
\end{equation}
where $da = da_{\scriptscriptstyle 11}\ldots da_{\scriptscriptstyle NN\,}$.
Example 2 : assume $f$ is a class function. That is, $f(x)$ depends only on the eigenvalues $e^{i\theta_i}$ of $x$. Integrating out $s$ in (<ref>), it follows that,
\begin{equation} \label{eq:integralun}
\int_{U(N)}\,f(x)\,\mathrm{vol}(dx) \,=\,
\frac{\omega(S_{\scriptscriptstyle N})}{2^{\scriptscriptstyle N}N!}\, \int_{[0\hspace{0.02cm},2\pi]^N}\,f\left(\exp(i\theta)\right)\hspace{0.03cm}\prod_{i < j}\sin^2((\theta_i - \theta_j)/{2})\hspace{0.03cm}d\theta
\end{equation}
or, after an elementary manipulation,
\begin{equation} \label{eq:integralunvmonde}
\int_{U(N)}\,f(x)\,\mathrm{vol}(dx) \,=\,
\frac{\omega(S_{\scriptscriptstyle N})}{2^{\scriptscriptstyle N^2}N!} \,\int_{[0\hspace{0.02cm},2\pi]^N}\,f\left(\theta_{\scriptscriptstyle 1\,},\ldots,\theta_{\scriptscriptstyle N}\right)\hspace{0.03cm}|V(e^{i\theta})|^2\hspace{0.03cm}d\theta_{\scriptscriptstyle 1}\ldots d\theta_{\scriptscriptstyle N}
\end{equation}
where $V(e^{i\theta}) = \prod_{i<j} (e^{i\theta_j} - e^{i\theta_i})$ is the Vandermonde determinant. Integrals of this form are well-known in the random matrix theory of compact groups [18].
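This integration formula (Weyl integration, up to normalisation) can be checked by Monte Carlo, in a small case. The sketch below (ours; it assumes the total mass is normalised so the formula computes a Haar expectation, and uses `scipy.stats.unitary_group` for Haar sampling) tests it on $U(2)$ with the class function $f(x) = |\mathrm{tr}\,x|^2$, whose Haar average is $1$:

```python
import numpy as np
from scipy.stats import unitary_group
from scipy.integrate import dblquad

# Haar side: Monte Carlo estimate of E |tr U|^2 over U(2)
samples = unitary_group.rvs(2, size=5000, random_state=42)
haar = np.mean([abs(np.trace(u)) ** 2 for u in samples])

# Weyl side: (1 / (2! (2 pi)^2)) * double integral of
# |e^{i t1} + e^{i t2}|^2 |e^{i t1} - e^{i t2}|^2
def integrand(t1, t2):
    a, b = np.exp(1j * t1), np.exp(1j * t2)
    return abs(a + b) ** 2 * abs(a - b) ** 2

weyl, _ = dblquad(integrand, 0, 2 * np.pi, 0, 2 * np.pi)
weyl /= 2 * (2 * np.pi) ** 2

assert abs(weyl - 1.0) < 1e-6   # the exact Haar value of E |tr U|^2 is 1
assert abs(haar - weyl) < 0.1   # Monte Carlo agrees within sampling error
```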
§ GEODESICS IN SYMMETRIC SPACES
Let $M = G/K$ be a Riemannian symmetric space, and assume $G$ has reductive Lie algebra $\mathfrak{g}$. Let the metric of $M$ be given by an $\mathrm{Ad}(K)$-invariant scalar product $Q$ on $\mathfrak{g}$, according to (<ref>).
Recall the Cartan decomposition, $\mathfrak{g} = \mathfrak{k} + \mathfrak{p}$. Assume $\mathfrak{k}$ and $\mathfrak{p}$ are orthogonal, with respect to $Q$, and extend $Q$ to a left-invariant Riemannian metric $(\cdot,\cdot)$ on $G$.
Then, consider the natural projection $\pi:G \rightarrow M$, which is given by $\pi(g) = g\cdot o$ for $g \in G$. Denote by $V_g$ the kernel of $d\pi(g)$, and by $H_g$ its orthogonal complement with respect to $(\cdot,\cdot)_{\scriptscriptstyle g\,}$. Since $(\cdot,\cdot)$ is left-invariant, and $\mathfrak{k}$ and $\mathfrak{p}$ are orthogonal,
\begin{equation} \label{eq:kptovh}
V_g = dL_g(\mathfrak{k}) \hspace{1cm} H_g = dL_g(\mathfrak{p})
\end{equation}
where $L_g$ denotes left translation by $g$ (below, $R_g$ will denote right translation).
For $v \in T_xM$, there exists a unique $v^{\scriptscriptstyle H}(g) \in H_g$ such that $d\pi\left( v^{\scriptscriptstyle H}(g)\right) = v$. By an abuse of notation, denote $v^{\scriptscriptstyle H}(e) = dL_{\scriptscriptstyle g^{-1}}\left( v^{\scriptscriptstyle H}(g)\right)$. Recall that for any $u\hspace{0.03cm},v \in T_xM$ (see [19], Chapter X),
\begin{equation} \label{eq:vhtometric}
\langle u\hspace{0.02cm},v\rangle_{\scriptscriptstyle x} \,=\, \left(u^{\scriptscriptstyle H}(g)\hspace{0.02cm},v^{\scriptscriptstyle H}(g)\right)_{\scriptscriptstyle g} \,=\, Q\left(u^{\scriptscriptstyle H}(e)\hspace{0.02cm},v^{\scriptscriptstyle H}(e)\right)
\end{equation}
For Propositions <ref> and <ref>, consider the “infinitesimal action",
\begin{equation} \label{eq:infaction}
\xi\cdot x = \left.\frac{d}{dt}\right|_{t=0} \exp(t\hspace{0.02cm}\xi)\cdot x \hspace{1cm} \xi \in \mathfrak{g}\text{ and } x \in M
\end{equation}
In addition, let $\mathrm{B}$ be the bilinear form on $\mathfrak{g}$, given by $\mathrm{B} = B$, if $M$ belongs to the non-compact case, and by $\mathrm{B} = Q$, if $M$ belongs to the compact case (these two cases were described in <ref>).
Let $M = G/K$ be a symmetric space, of the “non-compact case", or of the “compact case". For $g \in G$, $x = \pi(g)$ and $v \in T_xM$, let $\omega_{\scriptscriptstyle v} = \mathrm{Ad}(g)\hspace{0.02cm}v^{\scriptscriptstyle H}(e)$. Then, $\omega_{\scriptscriptstyle v}\cdot x = v$ and $\langle u\hspace{0.02cm},v\rangle_{\scriptscriptstyle x} = \mathrm{B}(\xi\hspace{0.02cm},\omega_{\scriptscriptstyle v})$ whenever $u = \xi \cdot x$.
In the notation of the previous proposition,
\begin{equation} \label{eq:geodesiclift}
\mathrm{Exp}_x(v) = \exp(\omega_{\scriptscriptstyle v})\cdot x
\end{equation}
for $x \in M$ and $v \in T_xM$.
Propositions <ref> and <ref> offer a straightforward computational route to the Riemannian exponential map $\mathrm{Exp}$. To compute $\mathrm{Exp}_x(v)$, one begins by “lifting" $v$ from $T_xM$ to $\mathfrak{g}$, under the form of $\omega_{\scriptscriptstyle v\,}$. Then, it is enough to compute the action of $\exp(\omega_{\scriptscriptstyle v})$, which is just a matrix exponential, in practice.
Example 1 : consider an example of the non-compact case, $M = \mathrm{H}(N)$, the space of $N \times N$ Hermitian positive-definite matrices. Here, $G = \mathrm{GL}(N,\mathbb{C})$ and $\pi(g) = gg^\dagger$ for $g \in G$. Then,
d\pi(g)\cdot h = h\hspace{0.02cm}g^\dagger + g\hspace{0.02cm}h^\dagger
for $h \in T_gG$. For $x = \pi(g)$ and $v \in T_xM$, it follows that $v^{\scriptscriptstyle H}(g) = \left.v\hspace{0.03cm}\theta(g)\middle/2\right.$, where $\theta(g) = (g^\dagger)^{-1}$. By definition, $\omega_{\scriptscriptstyle v} = dR_{\scriptscriptstyle g^{-1}}(v^{\scriptscriptstyle H}(g))$. Since $gg^\dagger = x$, this gives $\omega_{\scriptscriptstyle v} = (v/2)\hspace{0.03cm}x^{-1}$. Therefore, using the fact that $g\cdot x = g\hspace{0.02cm}x\hspace{0.02cm}g^\dagger$, it follows
\mathrm{Exp}_x(v) = \exp\left(v\hspace{0.03cm}x^{-1}\!\middle/2\right) \,x \,\exp\left(x^{-1}\hspace{0.03cm}v\middle/2\right)
Accordingly, by an elementary property of the matrix exponential[Matrix functions (powers, logarithms, etc.) of Hermitian arguments should be understood as Hermitian matrix functions, obtained using the spectral decomposition — see [20].],
\begin{equation} \label{eq:pennec}
\mathrm{Exp}_x(v) = x^{\frac{1}{2}}\exp\left(x^{-\frac{1}{2}}\hspace{0.04cm}v\hspace{0.04cm}x^{-\frac{1}{2}}\right)x^{\frac{1}{2}}
\end{equation}
which is the formula made popular by [21].
Example 2 : let $M=G/K$ be a Riemannian symmetric space of the compact case. That is, the scalar product $Q$ on $\mathfrak{g}$ is $\mathrm{Ad}(G)$-invariant. Write $\mathfrak{g} = \mathfrak{k} + \mathfrak{p}$ the Cartan decomposition of $\mathfrak{g}$. For $x \in M$, denote $K_x$ the stabiliser of $x$ in $G$. If $x = \pi(g)$, this has Lie algebra $\mathfrak{k}_x =\mathrm{Ad}(g)(\mathfrak{k})$ (that is, the image under $\mathrm{Ad}(g)$ of $\mathfrak{k}$). For $v \in T_xM$, by Proposition <ref>, its “lift" $\omega_{\scriptscriptstyle v}$ should verify (note that, for the present example, $\mathrm{B} = Q$)
\omega_{\scriptscriptstyle v}\cdot x = v \hspace{0.3cm}\text{and}\hspace{0.3cm} Q(\xi\hspace{0.02cm},\omega_{\scriptscriptstyle v}) = 0 \text{ for } \xi \in
\mathfrak{k}_x
where the second identity is because $\xi \cdot x = 0$ for $\xi \in \mathfrak{k}_x\,$. Because $Q$ is $\mathrm{Ad}(G)$-invariant, this second identity is equivalent to
$Q(\kappa\hspace{0.02cm},\mathrm{Ad}(g^{\scriptscriptstyle -1})(\omega_{\scriptscriptstyle v})) = 0$ for $\kappa \in \mathfrak{k}$. That is, $\omega_{\scriptscriptstyle v} = \mathrm{Ad}(g)(\omega_{\scriptscriptstyle v}(o))$ for some $\omega_{\scriptscriptstyle v}(o) \in \mathfrak{p}$. This $\omega_{\scriptscriptstyle v}(o)$ is determined from $\omega_{\scriptscriptstyle v}\cdot x = v$, which yields $\omega_{\scriptscriptstyle v}(o) \cdot o = g^{\scriptscriptstyle -1}\cdot v$. However, the map $\omega \mapsto \omega \cdot o$ is an isomorphism from $\mathfrak{p}$ onto $T_oM$. Denoting its inverse by $\pi_o : T_oM \rightarrow \mathfrak{p}$, it follows that $\omega_{\scriptscriptstyle v}(o) = \pi_o(g^{\scriptscriptstyle -1}\cdot v)$. Finally,
\begin{equation} \label{eq:compactlift}
\omega_{\scriptscriptstyle v} = \mathrm{Ad}(g)\left( \pi_o(g^{\scriptscriptstyle -1}\cdot v)\right)
\end{equation}
A special case of this formula was used in (<ref>) of <ref>.
Proof of Proposition <ref> : to begin, one must prove
\begin{equation} \label{eq:proofgeolemma1}
\omega_{\scriptscriptstyle v}\cdot x = v
\end{equation}
From the definition of $\omega_{\scriptscriptstyle v}$ and $v^{\scriptscriptstyle H}(e)$, it is clear that $\omega_{\scriptscriptstyle v} = dR_{g^{\scriptscriptstyle -1}}\hspace{0.02cm}(v^{\scriptscriptstyle H}(g))$. Replacing this into (<ref>), the left-hand side of (<ref>) becomes,
\omega_{\scriptscriptstyle v}\cdot x = \left.\frac{d}{dt}\right|_{t=0}\exp(t\,dR_{g^{\scriptscriptstyle -1}}\hspace{0.03cm}v^{\scriptscriptstyle H}(g))\cdot x = \left.\frac{d}{dt}\right|_{t=0} \left(\gamma(t)\hspace{0.04cm}g^{\scriptscriptstyle-1}\right)\cdot x
where $\gamma$ is any curve in $G$, through $g$ with $\dot{\gamma}(0) = v^{\scriptscriptstyle H}(g)$. Therefore,
\omega_{\scriptscriptstyle v}\cdot x = \left.\frac{d}{dt}\right|_{t=0} \gamma(t)\cdot o = d\pi\left( v^{\scriptscriptstyle H}(g)\right) = v
from the definition of $v^{\scriptscriptstyle H}(g)$. This proves (<ref>). It remains to show,
\begin{equation} \label{eq:proofgeolemma2}
\langle u\hspace{0.02cm},v\rangle_{\scriptscriptstyle x} = Q(\xi\hspace{0.02cm},\omega_{\scriptscriptstyle v}) \hspace{1cm}
\text{for } u = \xi \cdot x
\end{equation}
The proof is separated into two cases.
non-compact case : in this case, $Q(\xi\hspace{0.02cm},\omega) = - B(\xi,d\hspace{0.02cm}\theta(\omega))$, where $B$ is an $\mathrm{Ad}(G)$-invariant, non-degenerate, symmetric bilinear form. To prove (<ref>), note that
d\pi(g)\left( dR_{\scriptscriptstyle g}(\xi)\right) = \left.\frac{d}{dt}\right|_{t=0} \left(\exp(t\xi)\,g\right)\cdot o = \left.\frac{d}{dt}\right|_{t=0} \exp(t\xi) \cdot x = u
Therefore, $dR_{\scriptscriptstyle g}(\xi) = u^{\scriptscriptstyle H}(g) + w$ where $w \in V_g$. From (<ref>), using left-invariance of $(\cdot,\cdot)$,
\begin{equation} \label{eq:proofgeolemma21}
\langle u\hspace{0.02cm},v\rangle_{\scriptscriptstyle x} \,=\, \left(dR_{\scriptscriptstyle g}(\xi)\hspace{0.02cm},v^{\scriptscriptstyle H}(g)\right)_{\scriptscriptstyle g} \,=\, Q\left(\mathrm{Ad}(g^{\scriptscriptstyle -1})(\xi)\hspace{0.02cm},v^{\scriptscriptstyle H}(e)\right)
\end{equation}
Thus, using the definition of $Q$, and the fact that $v^{\scriptscriptstyle H}(e) \in \mathfrak{p}$,
\langle u\hspace{0.02cm},v\rangle_{\scriptscriptstyle x} = - B(\mathrm{Ad}(g^{\scriptscriptstyle -1})(\xi),d\hspace{0.02cm}\theta(v^{\scriptscriptstyle H}(e))) =
B(\mathrm{Ad}(g^{\scriptscriptstyle -1})(\xi),v^{\scriptscriptstyle H}(e))
Finally, since $B$ is $\mathrm{Ad}(G)$-invariant,
\langle u\hspace{0.02cm},v\rangle_{\scriptscriptstyle x} = B(\mathrm{Ad}(g^{\scriptscriptstyle -1})(\xi),v^{\scriptscriptstyle H}(e)) =
B(\xi,\mathrm{Ad}(g)(v^{\scriptscriptstyle H}(e)))
which is the same as (<ref>), by the definition of $\omega_{\scriptscriptstyle v\,}$. Indeed, in the present case, $\mathrm{B} = B$.
compact case : this follows from (<ref>), using the fact that $Q$ is $\mathrm{Ad}(G)$-invariant. Indeed, in the present case, $\mathrm{B} = Q$.
Proof of Proposition <ref> : for $\xi \in \mathfrak{g}$, introduce the corresponding vector field $X_\xi$ on $M$, given by $
X_\xi(x) = \xi\cdot x$. Since this is a Killing vector field [19], if $c:\mathbb{R} \rightarrow M$ is a geodesic curve in $M$, then $\ell(\xi) = \langle X_\xi\hspace{0.03cm},\dot{c}\rangle_{\scriptscriptstyle c(t)}$ is constant (a conservation law, really due to Noether's theorem!). Now, in the notation of Proposition <ref>, let $\omega(t) = \omega_{\scriptscriptstyle \dot{c}(t)\,}$. By Proposition <ref>,
\mathrm{B}(\omega(t),\xi) = \ell(\xi)
Since this is a constant, and since $\mathrm{B}$ is non-degenerate, it follows that $\omega(t) = \omega$ is a constant. Proposition <ref> also implies that $c$ satisfies the ordinary differential equation
\dot{c} = \omega \cdot c
But this differential equation is also satisfied by $c(t) = \exp(t\hspace{0.03cm}\omega)\cdot\hspace{0.02cm} c(0)$, as one may see from (<ref>). By uniqueness of the solution, for given initial conditions,
c(t) = \exp(t\hspace{0.03cm}\omega_{\scriptscriptstyle \dot{c}(0)})\cdot c(0)
This immediately implies (<ref>), by setting $t = 1$, $c(0) = x$ and $\dot{c}(0) = v$.
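The formula $c(t) = \exp(t\,\omega_{\scriptscriptstyle \dot{c}(0)})\cdot c(0)$ can be checked numerically on the sphere $S^2 = \mathrm{SO}(3)/\mathrm{SO}(2)$, where, for unit $x$ and $v \in T_xS^2$, the element $\omega_{\scriptscriptstyle v}$ may be identified with the skew-symmetric matrix of the vector $x \times v$ (so that $\omega_{\scriptscriptstyle v}\cdot x = (x\times v)\times x = v$). The following Python sketch evaluates $\exp(t\,\omega)\cdot x$ via Rodrigues' rotation formula; it is an illustration of the statement, not part of the proof.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def geodesic(x, v, t):
    """c(t) = exp(t * omega) . x on the sphere S^2, with omega = x x v (cross product)."""
    w = cross(x, v)
    speed = math.sqrt(dot(w, w))
    if speed == 0:                       # zero velocity: the constant geodesic
        return x
    k = tuple(c / speed for c in w)      # rotation axis
    th = t * speed                       # rotation angle
    kx, kdx = cross(k, x), dot(k, x)
    # Rodrigues' formula: rotate x about the axis k by the angle th
    return tuple(x[i]*math.cos(th) + kx[i]*math.sin(th) + k[i]*kdx*(1 - math.cos(th))
                 for i in range(3))

print(geodesic((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), math.pi / 2))  # the great circle reaches (0, 1, 0)
```

As expected, the geodesic through $x = (1,0,0)$ with velocity $v = (0,1,0)$ is the great circle $t \mapsto (\cos t, \sin t, 0)$.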
CHAPTER: THE BARYCENTRE PROBLEM
State-of-the-art results establish the existence and uniqueness of the Riemannian barycentre of a probability distribution which is supported inside a compact convex geodesic ball. What happens for a probability distribution which is not supported, but only concentrated, inside a convex geodesic ball ? This question raises new difficulties, which cannot be resolved using the tools applicable to distributions with compact convex support. The present chapter develops new tools, able to deal with these difficulties (at least in part), following the approach in [22].
* <ref> and <ref> review some of the major contributions to the study of Riemannian barycentres, due to Fréchet, Emery, Kendall, Afsari, and Arnaudon.
* <ref> introduces the main problem treated in the following, which is to study the existence and uniqueness of Riemannian barycentres of Gibbs distributions on compact Riemannian symmetric spaces.
* <ref> – <ref> lead up to the following conclusion : let $\pi_{\scriptscriptstyle T} \propto \exp(-U/T)$ be a Gibbs distribution on a compact Riemannian symmetric space $M$, such that the potential function $U$ has a unique global minimum at $x^* \in M$. If $M$ is simply connected, then for each $\delta < r_{\scriptscriptstyle cx}/2$ (where $r_{\scriptscriptstyle cx}$ is the convexity radius of $M$), there exists a critical temperature $T_{\scriptscriptstyle \delta}$ such that $T < T_{\scriptscriptstyle \delta}$ implies that $\pi_{\scriptscriptstyle T}$ has a unique Riemannian barycentre $\hat{x}_{\scriptscriptstyle T}$, and this $\hat{x}_{\scriptscriptstyle T}$ belongs to the geodesic ball $B(x^*\!,\delta)$. The assumption that $M$ is simply connected cannot be removed (see Lemma <ref> and the following remark).
* <ref> provides expressions which can be used to analytically compute the critical temperature $T_{\scriptscriptstyle \delta\hspace{0.03cm}}$.
* <ref> introduces additional background material, on the geometry of compact symmetric spaces, which is required for the proofs of the results in <ref> – <ref>.
* <ref> details the proofs of these results, concerning the concentration, differentiability, convexity, existence and uniqueness of Riemannian barycentres of Gibbs distributions on compact symmetric spaces.
§ FRÉCHET'S FRUITFUL IDEA
In 1948, Maurice Fréchet proposed a generalisation of the concept of mean value, from Euclidean spaces to general metric spaces [23].
Today, this generalisation is known as the Fréchet mean. Precisely, a Fréchet mean, of a probability distribution $\pi$ on a metric space $M$, is any global minimum of the so-called variance function
\begin{equation} \label{eq:frechet}
\mathcal{E}_{\pi}(y) \,=\,
\frac{1}{2}\hspace{0.03cm}
\int_M\,d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},x)\hspace{0.03cm}\pi(dx)
%ONE HALF
%mathcal{E} separately
\end{equation}
where $d(x\hspace{0.02cm},y)$ denotes the distance between $x$ and $y$ in $M$. In the following, the focus will be on the case where $M$ is a Riemannian manifold. Then, a Fréchet mean of $\pi$ will be called a Riemannian barycentre, or just a barycentre, of $\pi$.
If $\mathcal{E}_{\pi}(y)$ takes on finite values (in fact, if it is finite for just one $y = y_o$), then $\pi$ has at least one Fréchet mean. In particular, if $M$ is a Euclidean space, then this Fréchet mean is always unique, and equal to the mean value (expectation) of $\pi$. In general, the Fréchet mean of a probability distribution $\pi$ is not unique, and one may think of the Fréchet mean of $\pi$ as the set $F(\pi)$, of all global minima of its variance function $\mathcal{E}_{\pi\hspace{0.03cm}}$.
Example 1 : if $M = S^1$, the unit circle, and $\pi$ is the uniform distribution (i.e. Haar measure), on $S^1$, then $F(\pi) = S^1$. Any point on the circle is a barycentre of the uniform distribution.
If $x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N} \in M$, then an empirical Fréchet mean of $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$ is any Fréchet mean of the empirical distribution $(\delta_{x_{\scriptscriptstyle 1}}+\ldots+\delta_{x_{\scriptscriptstyle N}})/N$ ($\delta_x$ denotes the Dirac distribution concentrated at $x$). In other words, an empirical Fréchet mean of $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$ is any global minimum of the empirical variance function
\begin{equation} \label{eq:empiricalfrechet}
\mathcal{E}_{\scriptscriptstyle N}(y) \,=\,\frac{1}{2N}\sum^N_{n=1}d^{\hspace{0.03cm}2}(y\hspace{0.02cm},x_n)
%ONE HALF
%mathcal{E}_N separately
\end{equation}
When $M$ is a Riemannian manifold, the term “empirical Fréchet mean" will be replaced by the term “empirical barycentre".
Example 2 : if $M = S^1$, and $x_{\scriptscriptstyle 1\hspace{0.02cm}},x_{\scriptscriptstyle 2}$ are two opposite points on $S^1$, then the empirical barycentre of $(x_{\scriptscriptstyle 1\hspace{0.02cm}},x_{\scriptscriptstyle 2})$ is a two-point set. For example, if $x_{\scriptscriptstyle 1} = 1$ and $x_{\scriptscriptstyle 2} = -1$, then the empirical barycentre is the set $\lbrace i,-i\rbrace$ ($i$ being the square root of $-1$).
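Examples 1 and 2 can be verified numerically. The following Python sketch (a brute-force grid search, purely for illustration) minimises the empirical variance function on $S^1$, parametrised by the angle $\theta \in [0,2\pi)$, for the two antipodal points of Example 2; the minima found cluster, up to grid resolution, around $\pi/2$ and $3\pi/2$, i.e. the set $\lbrace i,-i\rbrace$.

```python
import math

def d_circle(a, b):
    """Arc-length (geodesic) distance between the angles a and b on the unit circle."""
    diff = abs(a - b) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)

def empirical_variance(theta, samples):
    """E_N(theta) = (1/2N) * sum of d^2(theta, x_n) over the samples."""
    return sum(d_circle(theta, x) ** 2 for x in samples) / (2 * len(samples))

# Example 2: two antipodal points, x1 = 1 (angle 0) and x2 = -1 (angle pi)
samples = [0.0, math.pi]
grid = [2 * math.pi * k / 20000 for k in range(20000)]
vmin = min(empirical_variance(t, samples) for t in grid)
minima = [t for t in grid if empirical_variance(t, samples) < vmin + 1e-6]
```

Every angle in `minima` lies within grid resolution of $\pi/2$ or $3\pi/2$, confirming that the empirical barycentre is the two-point set of Example 2.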
Two important problems, in relation to the concept of Fréchet mean, are establishing the uniqueness of the Fréchet mean of some probability distribution, and effectively computing this Fréchet mean, (or the set of Fréchet means, in case uniqueness does not hold).
Another type of problem is related to the large-sample theory of the Fréchet mean, and was treated in [24][25]. Assume $(x_n\,;n\geq 1)$ are independent samples from the distribution $\pi$. If $F_{\scriptscriptstyle N}$ is the set of empirical Fréchet means of $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$, then one is interested in using $F_{\scriptscriptstyle N}$ to somehow approximate $F(\pi)$.
In [24], it was shown that, if the metric space $M$ is such that any closed and bounded subset of $M$ is compact[That is, for any $x \in M$, the function $y \mapsto d(x\hspace{0.02cm},y)$ is a proper function, meaning it has compact sublevel sets.], then for any $\epsilon > 0$, the set $F_{\scriptscriptstyle N}$ almost-surely belongs to the $\epsilon$-neighborhood of the set $F(\pi)$, when $N$ is sufficiently large.
Moreover, if $\pi$ has a unique Fréchet mean, say $F(\pi) = \lbrace\hat{x}_{\pi}\rbrace$, then any sequence of empirical Fréchet means, $\bar{x}_{\scriptscriptstyle N} \in F_{\scriptscriptstyle N}$ converges almost-surely to $\hat{x}_{\pi}$ (an extension of this last result, from independent to Markovian samples, is obtained in <ref>).
In [25], a central limit theorem was added to this last convergence result. Specifically, if $M$ is a Riemannian manifold, the distribution of $N^{\scriptscriptstyle \frac{1}{2}}\hspace{0.03cm}\mathrm{Exp}^{-1}_{\hat{x}_\pi}(\bar{x}_{\scriptscriptstyle N})$ converges to a multivariate normal distribution (in the tangent space at $\hat{x}_\pi$). This “central limit theorem" requires several technical conditions, in order to hold true, and should therefore only be applied after due verification.
§ EXISTENCE AND UNIQUENESS
The problem of the existence and uniqueness of Riemannian barycentres has generated a rich literature, with ramifications in stochastic analysis on manifolds, Riemannian geometry, and probability theory. The present section attempts a quick, non-exhaustive summary of some famous results from this literature.
§.§ Emery and Kendall
The works of Emery and Kendall [26], later expanded upon by Afsari [27], are related to the existence and uniqueness of the Riemannian barycentre of a probability distribution $\pi$, supported inside some geodesic ball $B(x^*\!,\delta)$, in a Riemannian manifold $M$.
Emery and Kendall, among others, considered the so-called Karcher mean of $\pi$. This is a local minimum of the variance function $\mathcal{E}_\pi$ in (<ref>). In [26], $\pi$ is assumed to have compact support, inside a so-called regular geodesic ball $B(x^*\!,\delta)$. Here, “regular geodesic ball" means
$\bullet$ $\delta < \frac{\pi}{2}c^{\scriptscriptstyle -1}$, where all sectional curvatures of $M$ are less than $\kappa_{\max} = c^{\hspace{0.02cm}\scriptscriptstyle 2}$.
$\bullet$ the cut locus of $x^*$ does not intersect $B(x^*\!,\delta)$ (that is $\delta < \mathrm{inj}(x^*)$).
These two conditions guarantee that the closed ball $\bar{B}(x^*\!,\delta)$ is weakly convex, and that it has convex geometry.
Weakly convex means for any $x\hspace{0.02cm},y \in \bar{B}(x^*\!,\delta)$ there exists a unique geodesic $\gamma:[0,1]\rightarrow M$, such that $\gamma(0) = x$, $\gamma(1) = y$ and $\gamma(t) \in \bar{B}(x^*\!,\delta)$ for all $t \in [0,1]$ (this is equivalent to the terminology of [11][This geodesic $\gamma$ is the unique length-minimising curve, among all curves which connect $x$ to $y$ and lie in $\bar{B}(x^*\!,\delta)$. See the proof of Theorem IX.6.2, Page 405 in [11].]).
Convex geometry means there exists a positive, bounded, continuous, and convex function $\Psi$, defined on $\bar{B}(x^*\!,\delta) \times \bar{B}(x^*\!,\delta)$, such that $\Psi(x\hspace{0.02cm},y) = 0$ if and only if $x = y$.
When $\pi$ is supported inside $B(x^*\!,\delta)$, the function $\mathcal{E}_\pi$ takes on finite values, and therefore has a global minimum $\hat{x}_\pi\hspace{0.03cm}$. However, it is not immediately clear this $\hat{x}_\pi$ should lie within $B(x^*\!,\delta)$. In [26], the existence of a local minimum, i.e. Karcher mean, within $B(x^*\!,\delta)$ is guaranteed, subject to interpreting the distance in (<ref>) as geodesic distance within $B(x^*\!,\delta)$.
If $\hat{x}_\pi$ is a local minimum of $\mathcal{E}_\pi$ in $B(x^*\!,\delta)$, then the convex geometry property of the closed ball $\bar{B}(x^*\!,\delta)$ guarantees this local minimum is unique. This follows by using a general form of Jensen's inequality, due to Emery. Specifically, if $\hat{x}_{\scriptscriptstyle 1}$ and $\hat{x}_{\scriptscriptstyle 2}$ are Karcher means in $B(x^*\!,\delta)$, then $(\hat{x}_{\scriptscriptstyle 1\hspace{0.02cm}},\hat{x}_{\scriptscriptstyle 2})$ is a Karcher mean of the image distribution $\delta^*\pi$ of $\pi$, under the map $\delta(x) = (x,x)$. Then, applying Jensen's inequality to the convex function $\Psi$, it follows
\Psi(\hat{x}_{\scriptscriptstyle 1\hspace{0.02cm}},\hat{x}_{\scriptscriptstyle 2}) \leq \int_{\scriptscriptstyle \bar{B}(x^*\!,\delta)}\Psi(x,x)\hspace{0.03cm}\pi(dx)
so $\Psi(\hat{x}_{\scriptscriptstyle 1\hspace{0.02cm}},\hat{x}_{\scriptscriptstyle 2}) = 0$, and therefore $\hat{x}_{\scriptscriptstyle 1} = \hat{x}_{\scriptscriptstyle 2\hspace{0.03cm}}$.
Remark : it was conjectured by Emery that any weakly convex geodesic ball should also have convex geometry. A counterexample to this conjecture was provided by Kendall, in the form of his “propeller" [28].
§.§ Afsari's contribution
Afsari's seminal work on Riemannian barycentres was published ten years ago [27]. It provided the following statement : if $\pi$ is supported inside a geodesic ball $B(x^*\!,\delta)$, then $\pi$ has a unique Riemannian barycentre $\hat{x}_{\pi}$ and $\hat{x}_{\pi} \in B(x^*\!,\delta)$, as soon as
\begin{equation} \label{eq:afsari1}
\delta < \frac{1}{2}\hspace{0.02cm}\min\left\lbrace \pi c^{\scriptscriptstyle -1},\mathrm{inj}(M)\right\rbrace
\end{equation}
Here, $c$ is such that all sectional curvatures of $M$ are less than $\kappa_{\max} = c^{\hspace{0.02cm}\scriptscriptstyle 2}$ (if $M$ has negative sectional curvatures, $c^{\scriptscriptstyle -1}$ is understood to be $+\infty$), and $\mathrm{inj}(M)$ is the injectivity radius of $M$.
Condition (<ref>) ensures the geodesic ball $B(x^*\!,\delta)$ is convex, in the sense of <ref> (strongly convex, in the terminology of [11]), rather than just weakly convex as in <ref>. This stronger condition is required, because the Riemannian barycentre (Fréchet mean) is considered, rather than just the Karcher mean. In fact, Afsari extended his results beyond Riemannian barycentres to $L^{\scriptscriptstyle p}$ Riemannian barycentres, which are obtained by replacing the squared distance in (<ref>) with the distance raised to the power $p$, where $p\geq 1$.
In [27], the following approach is used, for the proof of existence and uniqueness. First, it is shown that any global minimum of $\mathcal{E}_\pi$ must lie inside $B(x^*\!,\delta)$. This is done using the Alexandrov-Toponogov comparison theorem, under its form stated in [11] (Page 420). Then, the Poincaré-Hopf theorem is employed, in order to prove uniqueness of local minima, inside the geodesic ball $B(x^*\!,\delta)$.
Specifically, $\mathcal{E}_\pi$ is differentiable at any point $y$ which belongs to the closed ball $\bar{B}(x^*\!,\delta)$, and
\mathrm{grad}\,\mathcal{E}_\pi(y) \,=\, -\int_{M}\,\mathrm{Exp}^{-1}_y(x)\hspace{0.03cm}\pi(dx) \hspace{0.5cm} \text{for } y \in \bar{B}(x^*\!,\delta)
Then, it is shown that, if $y \in B(x^*\!,\delta)$ and $\mathrm{grad}\,\mathcal{E}_\pi(y) = 0$, then $\mathrm{Hess}\,\mathcal{E}_\pi(y)$ is positive-definite. In other words, the singular point $y$ of the gradient vector field $\mathrm{grad}\,\mathcal{E}_\pi$ has its index equal to $1$. Since this vector field is outward pointing on the boundary of $\bar{B}(x^*\!,\delta)$, the Poincaré-Hopf theorem implies the sum of the indices of all its singular points in $B(x^*\!,\delta)$ is equal to the Euler-Poincaré characteristic of $\bar{B}(x^*\!,\delta)$, which is equal to $1$ (since $\bar{B}(x^*\!,\delta)$ is homeomorphic to a closed ball in $\mathbb{R}^n$).
Remark : the argument just summarised not only shows that $\mathcal{E}_\pi$ has a unique local minimum in $B(x^*\!,\delta)$, but that it has a unique stationary point in $B(x^*\!,\delta)$. Moreover, the advantage of this argument, over the “convex geometry" uniqueness argument (summarised in <ref>), is that it can be used to show the uniqueness of $L^{\scriptscriptstyle p}$ Riemannian barycentres, for general $p > 1$.
§.§ Hadamard manifolds
Existence and uniqueness of Riemannian barycentres hold under quite general conditions, when the underlying Riemannian manifold $M$ is a Hadamard manifold (recall definition from <ref>). Mostly, these existence and uniqueness properties are just special cases of the properties of Fréchet means in metric spaces of non-positive curvature, which were developed by Sturm [29].
Let $\pi$ be a probability distribution on a Hadamard manifold $M$. As already mentioned in <ref>, if the variance function $\mathcal{E}_\pi$ in (<ref>) takes on finite values, then $\pi$ has at least one Riemannian barycentre, say $\hat{x}_{\pi\hspace{0.03cm}}$. For this, it is enough that
$\mathcal{E}_\pi(y_o) < \infty$, for just one $y_o \in M$. In other words, it is enough that $\pi$ should have a finite second-order moment
\begin{equation} \label{eq:secondordermoment}
\int_M\,d^{\hspace{0.03cm} 2}(y_o\hspace{0.03cm},x)\,\pi(dx) \,<\,\infty
\end{equation}
Indeed, if (<ref>) is verified, then a straightforward application of the triangle inequality implies that $\mathcal{E}_\pi(y) < \infty$ for all $y \in M$.
When $M$ is a Hadamard manifold, existence of a Riemannian barycentre automatically implies its uniqueness. This can be shown using the “convex geometry" uniqueness argument, discussed in <ref>. Indeed, if $M$ is a Hadamard manifold, then $\Psi:M\times M \rightarrow \mathbb{R}$, where $\Psi(x\hspace{0.02cm},y) = d(x\hspace{0.02cm},y)$ is convex, and $\Psi(x\hspace{0.02cm},y) = 0$ if and only if $x = y$. Alternatively, uniqueness of the Riemannian barycentre follows from the strong convexity of the variance function $\mathcal{E}_{\pi\hspace{0.03cm}}$. Recall from <ref> that $f_x(y) = d^{\hspace{0.03cm} 2}(x\hspace{0.02cm},y)/2$ is a $1/2$-strongly convex function, for each $x \in M$. Then, (<ref>) says that $\mathcal{E}_\pi$ is an expectation of $1/2$-strongly convex functions, and is therefore $1/2$-strongly convex. In turn, this implies that $\mathcal{E}_\pi$ has a unique global minimum, $\hat{x}_\pi \in M$.
When $M$ is a Hadamard manifold, it should also be noted that $\mathcal{E}_\pi$ is smooth throughout $M$, and that its gradient is given by
\begin{equation} \label{eq:gradepsilonhadamard}
\mathrm{grad}\,\mathcal{E}_\pi(y) = -\int_M\,\mathrm{Exp}^{-1}_y(x)\hspace{0.03cm}\pi(dx)
\end{equation}
as can be found by applying (<ref>) under the integral in (<ref>). Strong convexity of $\mathcal{E}_\pi$ implies its global minimum $\hat{x}_\pi$ is also its unique stationary point in $M$ (i.e. the unique point where $\mathrm{grad}\,\mathcal{E}_\pi$ is equal to zero).
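The gradient formula above suggests the natural fixed-point iteration $y \leftarrow \mathrm{Exp}_y\bigl(\int_M \mathrm{Exp}^{-1}_y(x)\,\pi(dx)\bigr)$ for computing $\hat{x}_\pi$ (a Riemannian gradient step on $\mathcal{E}_\pi$ with unit step size). The following Python sketch, an illustration not taken from the text, runs this iteration on the simplest non-trivial Hadamard manifold: the half-line $(0,\infty)$ with the hyperbolic metric $ds = dx/x$, where $d(x\hspace{0.02cm},y) = |\log(x/y)|$, $\mathrm{Exp}_y(v) = y\exp(v/y)$ and $\mathrm{Exp}^{-1}_y(x) = y\log(x/y)$. On this manifold the barycentre of an empirical distribution is its geometric mean.

```python
import math

# Hadamard manifold: the half-line (0, inf) with the metric ds = dx/x
def exp_map(y, v):
    """Exp_y(v): follow the geodesic from y with initial velocity v for unit time."""
    return y * math.exp(v / y)

def log_map(y, x):
    """Exp_y^{-1}(x): the initial velocity of the geodesic from y to x."""
    return y * math.log(x / y)

def barycentre(samples, y0=1.0, steps=20):
    """Fixed-point iteration y <- Exp_y(mean of Exp_y^{-1}(x_n))."""
    y = y0
    for _ in range(steps):
        y = exp_map(y, sum(log_map(y, x) for x in samples) / len(samples))
    return y

print(barycentre([1.0, 2.0, 8.0]))   # the geometric mean (1*2*8)^(1/3)
```

On this particular manifold the iteration converges in a single step, to the geometric mean $(x_{\scriptscriptstyle 1}\cdots x_{\scriptscriptstyle N})^{1/N}$; on a general Hadamard manifold it is only a gradient scheme, whose good behaviour rests on the strong convexity of $\mathcal{E}_\pi$ noted above.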
§.§ Generic uniqueness
The empirical barycentre of the points $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$, in any complete Riemannian manifold $M$, is generically unique. This means that this empirical barycentre is unique, for almost all $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$ in the product Riemannian manifold $M^{\scriptscriptstyle N} = M \times \ldots \times M$, equipped with its Riemannian volume measure. This interesting result was obtained by Arnaudon and Miclo [30]. In particular, it implies that when $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$ are independent samples, from a distribution $\pi$, which has a probability density with respect to the Riemannian volume of $M$, then their empirical barycentre $\bar{x}_{\scriptscriptstyle N}$ is almost-surely unique.
§ GIBBS DISTRIBUTIONS : AN OPEN PROBLEM
Throughout the following, $M$ will be a compact, orientable Riemannian manifold, with positive sectional curvatures, all less than $\kappa_{\max} = c^{\hspace{0.02cm}\scriptscriptstyle 2}$. Afsari's statement, recalled in <ref>, says that if $\pi$ is a probability distribution on $M$, supported inside a convex geodesic ball $B(x^*\!,\delta)$, then $\pi$ has a unique Riemannian barycentre $\hat{x}_\pi\hspace{0.03cm}$, as soon as
\begin{equation} \label{eq:afsaribis}
\delta < \frac{1}{2}\hspace{0.02cm}\min\left\lbrace \pi c^{\scriptscriptstyle -1},\mathrm{inj}(M)\right\rbrace
\end{equation}
where $\mathrm{inj}(M)$ denotes the injectivity radius of $M$.
Inequality (<ref>) is optimal. Indeed, it is easy to think of examples which show that, if it is replaced by an equality, then $\hat{x}_\pi$ will immediately fail to be unique. On the other hand, this inequality does not tell us what happens in the important case where $\pi = \pi_{\scriptscriptstyle T}$ is a Gibbs distribution,
\begin{equation} \label{eq:gibbs}
\pi_{\scriptscriptstyle T}(dx) = \left(Z(T)\right)^{\scriptscriptstyle -1}\exp\left[-\frac{U(x)}{T}\right]\hspace{0.03cm}\mathrm{vol}(dx)
\end{equation}
for some temperature $T$, and potential function $U:M\rightarrow \mathbb{R}$, where $Z(T)$ is a normalising constant ($\mathrm{vol}$ denotes the Riemannian volume form).
The present chapter will introduce several results, which deal with this case. These are concerned with the concentration, differentiability, convexity, and uniqueness properties, of the Riemannian barycentre $\hat{x}_{\scriptscriptstyle T}$ of the Gibbs distribution $\pi_{\scriptscriptstyle T}$.
The starting assumption for these results is that the potential function $U$ has a unique global minimum at $x^* \in M$. Under this assumption, while $\pi_{\scriptscriptstyle T}$ is not supported inside any convex geodesic ball $B(x^*\!,\delta)$, it is still concentrated on any such ball, provided the temperature $T$ is sufficiently small. Then, the aim is to know exactly how small $T$ should be made, in order to ensure the required properties of $\hat{x}_{\scriptscriptstyle T\hspace{0.02cm}}$. This aim can be fully achieved, under the further assumption that $M$ is a simply connected compact Riemannian symmetric space.
Given these two assumptions, the following conclusion will be obtained : for each $\delta < \frac{1}{2}r_{\scriptscriptstyle cx}$ ($r_{\scriptscriptstyle cx}$ denotes the convexity radius of $M$), there exists a critical temperature $T_{\scriptscriptstyle \delta}$ such that $T < T_{\scriptscriptstyle \delta}$ implies $\pi_{\scriptscriptstyle T}$ has a unique Riemannian barycentre $\hat{x}_{\scriptscriptstyle T}$ and this $\hat{x}_{\scriptscriptstyle T}$ belongs to the geodesic ball $B(x^*\!,\delta)$. Moreover, if $U$ is invariant by geodesic symmetry about $x^*$, then $\hat{x}_{\scriptscriptstyle T} = x^*$.
Remark : if $M$ is a Riemannian manifold, the convexity radius $r_{\scriptscriptstyle cx}(x)$ of $x \in M$ is the supremum of $R > 0$ such that the geodesic ball $B(x\hspace{0.02cm},R)$ is convex (this is strictly positive, for any $x \in M$). The convexity radius $r_{\scriptscriptstyle cx}(M)$ of $M$ is the infimum of $r_{\scriptscriptstyle cx}(x)$, over all $x \in M$ (if $M$ is compact, this is strictly positive). Here, $r_{\scriptscriptstyle cx}(M)$ is just denoted $r_{\scriptscriptstyle cx\hspace{0.03cm}}$.
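This low-temperature behaviour can be illustrated numerically. The following Python sketch approximates the barycentre of a Gibbs distribution on the circle $S^1$ by a grid computation (the circle is a compact symmetric space, but not simply connected, so the above conclusion does not apply to it as stated; the concentration phenomenon is nonetheless visible). The potential $U(\theta) = 1 - \cos\theta$ has its unique global minimum at $\theta = 0$, and is invariant by the geodesic symmetry $\theta \mapsto -\theta$.

```python
import math

def d_circle(a, b):
    """Arc-length distance between the angles a and b on the unit circle."""
    diff = abs(a - b) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)

def gibbs_barycentre(U, T, n=800):
    """Grid approximation of the barycentre of pi_T proportional to exp(-U/T) on S^1."""
    grid = [2 * math.pi * k / n for k in range(n)]
    weights = [math.exp(-U(t) / T) for t in grid]
    Z = sum(weights)
    weights = [w / Z for w in weights]
    def variance(y):                     # the variance function E_T(y), discretised
        return 0.5 * sum(w * d_circle(y, t) ** 2 for w, t in zip(weights, grid))
    return min(grid, key=variance)       # grid point minimising E_T

U = lambda t: 1.0 - math.cos(t)          # unique global minimum at t = 0
print(gibbs_barycentre(U, T=0.1))        # within grid resolution of 0
```

At $T = 0.1$ the computed barycentre sits at the minimum of $U$, as the symmetry statement (ii) of the coming proposition predicts; at large $T$ the distribution approaches the uniform one of Example 1, and the minimiser of the variance function is no longer meaningful.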
§ CONCENTRATION OF BARYCENTRES
Denote the variance function of the Gibbs distribution $\pi_{\scriptscriptstyle T}$ in (<ref>) by $\mathcal{E}_{\scriptscriptstyle T\hspace{0.03cm}}$. According to (<ref>),
\begin{equation} \label{eq:ETT}
\mathcal{E}_{\scriptscriptstyle T}(y) =
\frac{1}{2}\hspace{0.03cm}
\int_M\,d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},x)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx)
\end{equation}
Throughout the following, it will be assumed that the potential function $U$, which appears in (<ref>), has a unique global minimum at $x^* \in M$. While $U$ is not required to be smooth, it is required to be well-behaved near $x^*$, in the sense that there exist $\mu_{\min}\hspace{0.02cm},\mu_{\max} > 0$ and $\rho > 0$ such that
\begin{equation} \label{eq:wellbehaved}
\mu_{\min}\hspace{0.03cm}d^{\hspace{0.03cm}\scriptscriptstyle 2}(x\hspace{0.02cm},x^*) \,\leq\, 2(U(x) - U(x^*))\,\leq
\mu_{\max}\hspace{0.03cm}d^{\hspace{0.03cm}\scriptscriptstyle 2}(x\hspace{0.02cm},x^*)
\end{equation}
whenever $d(x\hspace{0.02cm},x^*) \leq \rho$. This is always verified if $U$ is twice differentiable at $x^*$, and the spectrum of $\mathrm{Hess}\,U(x^*)$ is contained in the open interval $(\mu_{\min}\hspace{0.02cm},\mu_{\max})$.
The following Proposition <ref> establishes the concentration property of the Riemannian barycentres of $\pi_{\scriptscriptstyle T}$ as the temperature $T$ is made small. In this proposition, $W$ denotes the Kantorovich ($L^{\scriptscriptstyle 1}$-Wasserstein) distance, and $\delta_{x^*}$ the Dirac distribution concentrated at $x^*$.
Let $M$ be a compact, orientable Riemannian manifold, with positive sectional curvatures, and dimension equal to $n$.
(i) Let $\eta > 0$. For any Riemannian barycentre $\hat{x}_{\scriptscriptstyle T}$ of $\pi_{\scriptscriptstyle T}$
\begin{equation} \label{eq:concentration1}
W(\pi_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*}) < \frac{\eta^2}{4\hspace{0.02cm}\mathrm{diam}\,M} \;\Longrightarrow\; d(\hat{x}_{\scriptscriptstyle T}\hspace{0.02cm},x^*) < \eta
\end{equation}
where $\mathrm{diam}\,M$ is the diameter of $M$.
(ii) There exists a temperature $T_{\scriptscriptstyle W}$ such that $T \leq T_{\scriptscriptstyle W}$ implies
\begin{equation} \label{eq:concentration2}
W(\pi_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*}) \leq (8\pi)^{\!\frac{1}{2}}\hspace{0.02cm}B^{-1}_n\left(\frac{\pi}{2}\right)^{\!n-1}
\left(\frac{\mu_{\max}}{\mu_{\min}}\right)^{\!\!\frac{n}{2}}\left(\frac{T}{\mu_{\min}}\right)^{\!\!\frac{1}{2}}
\end{equation}
where $B_n = B(1/2\hspace{0.02cm},n/2)$ in terms of the Euler Beta function.
Proposition <ref> shows exactly how small $T$ should be made, in order to ensure that all the Riemannian barycentres $\hat{x}_{\scriptscriptstyle T}$ concentrate within an open ball $B(x^*\!,\eta)$. Roughly, (i) states that, if $\pi_{\scriptscriptstyle T}$ is close to $\delta_{x^*}\hspace{0.03cm}$, then all $\hat{x}_{\scriptscriptstyle T}$ will be close to $x^*$. On the other hand, (ii) bounds the distance between $\pi_{\scriptscriptstyle T}$ and $\delta_{x^*}\hspace{0.03cm}$, as a function of $T$. The temperature $T_{\scriptscriptstyle W}$ mentioned in (ii) will be expressed explicitly in <ref>, below.
Here, two things should be noted, concerning (<ref>). First, this inequality is both optimal and explicit. It is optimal because the dependence on $T^{\frac{1}{2}}$ in its right-hand side cannot be improved. Indeed, the multi-dimensional Laplace approximation (for example, see [31]), shows the left-hand side is equivalent to $\mathrm{L}\cdot T^{\frac{1}{2}}$ when $T \rightarrow 0$. While this constant $\mathrm{L}$ is not tractable, the constants appearing in (<ref>) depend explicitly on the manifold $M$ and the function $U$. In fact, (<ref>) does not follow from the multi-dimensional Laplace approximation, but rather from the volume comparison theorems, in <ref>.
Second, in spite of these nice properties, (<ref>) does not escape the curse of dimensionality. Indeed, for fixed $T$, its right-hand side increases exponentially with the dimension $n$ of $M$ (note that $B_n$ decreases like $n^{\scriptscriptstyle -\frac{1}{2}}$). In fact, the temperature $T_{\scriptscriptstyle W}$ also depends on $n$, but it is typically much less affected by it, and decreases more slowly than $n^{\scriptscriptstyle -1}$ as $n$ increases.
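For concreteness, the right-hand side of the concentration inequality in (ii) above is easily evaluated (a small numerical sketch; the values of $\mu_{\min}$, $\mu_{\max}$ and $T$ below are arbitrary illustrative choices). Using $B_n = B(1/2\hspace{0.02cm},n/2) = \Gamma(1/2)\Gamma(n/2)/\Gamma((n+1)/2)$, one sees directly both the $T^{\frac{1}{2}}$ scaling and the exponential growth in the dimension $n$.

```python
import math

def beta(a, b):
    """Euler Beta function, via the Gamma function."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def wasserstein_bound(T, n, mu_min, mu_max):
    """Right-hand side of the concentration inequality (valid for T <= T_W)."""
    Bn = beta(0.5, n / 2)
    return (math.sqrt(8 * math.pi) / Bn) * (math.pi / 2) ** (n - 1) \
           * (mu_max / mu_min) ** (n / 2) * math.sqrt(T / mu_min)

# T^(1/2) scaling, and rapid growth with the dimension n
for n in (2, 5, 10, 20):
    print(n, wasserstein_bound(1e-3, n, mu_min=1.0, mu_max=2.0))
```

Halving the admissible Wasserstein distance thus requires dividing $T$ by four, while each added dimension multiplies the bound by roughly $\pi/2$ times $(\mu_{\max}/\mu_{\min})^{1/2}$.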
§ DIFFERENTIABILITY OF THE VARIANCE FUNCTION
Assume that $M$ is a simply connected compact Riemannian symmetric space. Under this assumption, it turns out that the variance function $\mathcal{E}_{\scriptscriptstyle T}(y)$ is $C^2$ throughout $M$, for any temperature $T > 0$. This surprising result is contained in Proposition <ref>.
To state Proposition <ref>, consider for $x \in M$ the function $f_x(y) = d^{\hspace{0.03cm}2}(x,y)/2$. Recall from <ref> that this function is $C^2$ on the open set $\mathrm{D}(x) = M - \mathrm{Cut}(x)$. When $y \in \mathrm{D}(x)$, denote by $G_y(x)$ and $H_y(x)$ the gradient and Hessian of $f_x(y)$.
With this notation, for any $x \in M$, the gradient $G_y(x)$ belongs to $T_yM$, and the Hessian $H_y(x)$ defines a symmetric bilinear form on $T_yM$. However (recall the remarks in <ref>), both $G_y(x)$ and $H_y(x)$ are singular on $\mathrm{Cut}(x)$, where $H_y(x)$ even blows up, in the sense that one of its eigenvalues diverges to $-\infty$.
Let $M$ be a simply connected compact Riemannian symmetric space.
(i) The following integrals converge for any temperature $T > 0$
\begin{equation} \label{eq:GH}
G_y = \int_{\mathrm{D}(y)}G_y(x)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx) \hspace{0.2cm};\hspace{0.2cm}
H_y = \int_{\mathrm{D}(y)}H_y(x)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx)
\end{equation}
and both depend continuously on $y$.
(ii) The gradient and Hessian of the variance function $\mathcal{E}_{\scriptscriptstyle T}(y)$ are given by
\begin{equation} \label{eq:derivatives}
\mathrm{grad}\,\mathcal{E}_{\scriptscriptstyle T}(y) = G_y \hspace{0.2cm};\hspace{0.2cm}
\mathrm{Hess}\,\mathcal{E}_{\scriptscriptstyle T}(y) = H_y
\end{equation}
so that $\mathcal{E}_{\scriptscriptstyle T}(y)$ is $C^2$ throughout $M$.
The proof of Proposition <ref> relies on the following lemma.
Assume $M$ is a simply connected compact Riemannian symmetric space. Let $\gamma : I \rightarrow M$ be a geodesic defined on a compact interval $I$. Denote by $\mathrm{Cut}(\gamma)$ the union of all cut loci $\mathrm{Cut}(\gamma(t))$ for $t \in I$. Then, the Hausdorff dimension of $\mathrm{Cut}(\gamma)$ is strictly less than the dimension of $M$. In particular, $\mathrm{Cut}(\gamma)$ is a set with Riemannian volume equal to zero.
Remark : the assumption that $M$ is simply connected cannot be removed. For example, the conclusion of Lemma <ref> does not hold if $M$ is a real projective space.
The proof of Lemma <ref> uses the structure of Riemannian symmetric spaces, as well as some results from dimension theory, found in [32]. The notion of Hausdorff dimension is needed, because $\mathrm{Cut}(\gamma)$ may fail to be a manifold.
Lemma <ref> is crucial to Proposition <ref>, because it leads to the following expression,
\mathcal{E}_{\scriptscriptstyle T}(\gamma(t)) = \int_M\,f_x(\gamma(t))\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx) =
\int_{\mathrm{D}(\gamma)}f_x(\gamma(t))\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx) \hspace{0.5cm} \text{for all $t \in I$}
where $\mathrm{D}(\gamma) = M - \mathrm{Cut}(\gamma)$, and the second equality follows since $\mathrm{Cut}(\gamma)$ has Riemannian volume equal to zero. Then, recalling that $x \in \mathrm{Cut}(\gamma(t))$ if and only if $\gamma(t) \in \mathrm{Cut}(x)$, it becomes possible to differentiate $f_x(\gamma(t))$ under the integral. This leads to the proof of (ii).
§ UNIQUENESS OF THE BARYCENTRE
The following Proposition <ref> establishes the uniqueness of $\hat{x}_{\scriptscriptstyle T}$ as the temperature $T$ is made small. As in the previous Proposition <ref>, $M$ is a simply connected compact Riemannian symmetric space. The convexity radius of $M$ is denoted $r_{\scriptscriptstyle cx\hspace{0.03cm}}$. This is given by $r_{\scriptscriptstyle cx} = \frac{\pi}{2}\hspace{0.03cm}c^{\scriptscriptstyle -1}$ (see <ref>, below).
Recall the definition (<ref>) of the Gibbs distribution $\pi_{\scriptscriptstyle T\hspace{0.03cm}}$, where the potential function $U$ has a unique global minimum at $x^* \in M$. Let $s_{x^*}$ denote the geodesic symmetry at $x^*$ (recall definition from <ref>). The potential function $U$ is said to be invariant by geodesic symmetry about $x^*$, if $U \circ s_{x^*} = U$.
Let $M$ be a simply connected compact Riemannian symmetric space, with convexity radius $r_{\scriptscriptstyle cx\hspace{0.03cm}}$. For $\delta < \frac{1}{2}r_{\scriptscriptstyle cx\hspace{0.03cm}}$, there exists a critical temperature $T_{\scriptscriptstyle \delta}$ such that
(i) When $T < T_{\scriptscriptstyle \delta\hspace{0.03cm}}$, the Riemannian barycentre $\hat{x}_{\scriptscriptstyle T}$ of $\pi_{\scriptscriptstyle T}$ is unique and $\hat{x}_{\scriptscriptstyle T} \in B(x^*\!,\delta)$.
(ii) If, in addition, $U$ is invariant by geodesic symmetry about $x^*$, then $\hat{x}_{\scriptscriptstyle T} = x^*$.
Proposition <ref> shows exactly how small $T$ should be made, in order to ensure that the Riemannian barycentre $\hat{x}_{\scriptscriptstyle T}$ is unique. In turn, this uniqueness of $\hat{x}_{\scriptscriptstyle T}$ follows from the convexity of the variance function $\mathcal{E}_{\scriptscriptstyle T}(y)$, obtained in the following Lemma <ref>.
To state this lemma, consider the function $f(T)$ of the temperature $T$
\begin{equation} \label{eq:fT}
f(T) = \left(\frac{2}{\pi}\right)\left(\frac{\pi}{8}\right)^{\!n-1}\left(\frac{\mu_{\max}}{T}\right)^{\!\!\frac{n}{2}}\exp\left(-\frac{U_\delta}{T}\right)
\end{equation}
for any given $\delta$, where $U_\delta = \inf\lbrace U(x) - U(x^*)\,; x \notin B(x^*\!,\delta)\rbrace$. Note that $f(T)$ decreases to zero as $T$ is made arbitrarily small.
Under the same assumptions as Proposition <ref>, let $\delta < \frac{1}{2}r_{\scriptscriptstyle cx\hspace{0.03cm}}$.
(i) For all $y \in B(x^*\!,\delta)$,
\begin{equation} \label{eq:ethlower}
\mathrm{Hess}\,\mathcal{E}_{\scriptscriptstyle T}(y) \,\geq\, \mathrm{Ct}(2\delta)\hspace{0.03cm}[1 - \mathrm{vol}(M)f(T)] - \pi A_{\scriptscriptstyle M}\hspace{0.03cm}f(T)
\end{equation}
where $\mathrm{Ct}(2\delta) = 2c\delta\hspace{0.02cm}\cot(2c\delta)$ and $A_{\scriptscriptstyle M} > 0$ is a constant which depends only on the symmetric space $M$.
(ii) There exists a critical temperature $T_{\scriptscriptstyle \delta}$ such that $T < T_{\scriptscriptstyle \delta}$ implies the variance function $\mathcal{E}_{\scriptscriptstyle T}(y)$ is strongly convex on $B(x^*\!,\delta)$.
The inequality in (<ref>) should be understood as saying all the eigenvalues of $\mathrm{Hess}\,\mathcal{E}_{\scriptscriptstyle T}(y)$ are greater than the right-hand side (of course, this is an abuse of notation). The critical temperature $T_{\scriptscriptstyle \delta}$ will be expressed in the following section.
§ FINDING $T_{\SCRIPTSCRIPTSTYLE W}$ AND $T_{\SCRIPTSCRIPTSTYLE \DELTA}$
The present paragraph provides expressions of the temperatures $T_{\scriptscriptstyle W}$ and $T_{\scriptscriptstyle \delta\hspace{0.03cm}}$, which appear in Propositions <ref> and <ref>. These are expressions (<ref>) and (<ref>) below, which should be considered as part of Propositions <ref> and <ref>, and will accordingly be proved in <ref>.
Expressions (<ref>) and (<ref>) allow $T_{\scriptscriptstyle W}$ and $T_{\scriptscriptstyle \delta}$ to be computed as solutions of scalar non-linear equations, which depend on Condition (<ref>) and on the Riemannian symmetric space $M$. In order to state them, write
\begin{equation} \label{eq:fTm}
f(T,m,\rho) = \left(\frac{2}{\pi}\right)^{\!\!\frac{1}{2}}\left(\frac{\mu_{\max}}{T}\right)^{\!\!\frac{m}{2}}
\exp\left(-\frac{U_\rho}{T}\right)
\end{equation}
in terms of the temperature $T$ and of positive parameters $m$ and $\rho$, where $U_\rho$ is defined as in (<ref>). It should be noted that $f(T,m,\rho)$ decreases to $0$ as $T$ is made arbitrarily small, for fixed $m$ and $\rho$. The following expression holds for $T_{\scriptscriptstyle W}\hspace{0.03cm}$,
\begin{equation} \label{eq:tw}
T_{\scriptscriptstyle W} \,=\,\min\hspace{0.02cm}\lbrace T^1_{\scriptscriptstyle W}\hspace{0.02cm},T^2_{\scriptscriptstyle W}\rbrace
\end{equation}
with $T^1_{\scriptscriptstyle W}$ and $T^2_{\scriptscriptstyle W}$ given by
\begin{array}{l}
T^1_{\scriptscriptstyle W} = \inf\hspace{0.02cm}\lbrace T>0: f(T,n-2,\rho) > \rho^{2-n}\hspace{0.02cm} A_{n-1}\rbrace \\[0.4cm]
T^2_{\scriptscriptstyle W} = \inf\hspace{0.02cm}\left\lbrace T>0: f(T,n+1,\rho) > \left(\mu_{\max}\middle/\mu_{\min}\right)^{\!\scriptscriptstyle \frac{n}{2}}\hspace{0.01cm}C_n\right\rbrace
\end{array}
where $A_n$ is the $n$-th absolute moment of a standard normal random variable ($A_n = \mathbb{E}|X|^n$ where $X \sim N(0,1)$), and $C_n = \left.(\omega_{n-1}\hspace{0.02cm}A_n)\middle/(\mathrm{diam}\,M\times\mathrm{vol}\,M)\right.$, where $\omega_{n-1}$ is the area of the unit sphere $S^{n-1} \subset \mathbb{R}^n$. Moreover, for $T_{\scriptscriptstyle \delta}\hspace{0.03cm}$,
\begin{equation} \label{eq:td}
T_{\scriptscriptstyle \delta} \,=\,\min\hspace{0.02cm}\lbrace T^1_{\scriptscriptstyle \delta}\hspace{0.02cm},T^2_{\scriptscriptstyle \delta}\rbrace
\end{equation}
where, in the notation of (<ref>) and (<ref>),
\begin{array}{l}
T^1_{\scriptscriptstyle \delta} = \inf\hspace{0.02cm}\left\lbrace T\leq T_{\scriptscriptstyle W}: \left(2\pi T\middle/\mu_{\min}\right)^{\!\frac{1}{2}} > \delta^{\scriptscriptstyle 2}\hspace{0.03cm}\left(\mu_{\min}\middle/\mu_{\max}\right)^{\!\frac{n}{2}}\hspace{0.03cm}D_n\right\rbrace \\[0.4cm]
T^2_{\scriptscriptstyle \delta} = \inf\hspace{0.02cm}\left\lbrace T\leq T_{\scriptscriptstyle W}: f(T) > \mathrm{Ct}(2\delta)[\mathrm{Ct}(2\delta)\mathrm{vol}(M) + \pi A_{\scriptscriptstyle M}]^{-1}\right\rbrace
\end{array}
with $D_n = (2/\pi)^{n-1}\!\left.B_n\middle/(4\hspace{0.02cm}\mathrm{diam}\,M)\right.$.
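Since $T_{\scriptscriptstyle W}$ and $T_{\scriptscriptstyle \delta}$ are defined through scalar conditions in the single variable $T$, they can be approximated by an elementary grid search. The sketch below does this for $T^1_{\scriptscriptstyle W}$; every numerical parameter value ($n$, $\rho$, $\mu_{\max}$, $U_\rho$, the grid) is an illustrative placeholder, not a value taken from the text.

```python
import math

def A(k):
    # k-th absolute moment of a standard normal variable,
    # using the closed form A_k = pi^(-1/2) 2^(k/2) Gamma((k+1)/2)
    return math.pi ** -0.5 * 2.0 ** (k / 2.0) * math.gamma((k + 1) / 2.0)

def f(T, m, U_rho, mu_max):
    # f(T, m, rho) = (2/pi)^(1/2) (mu_max/T)^(m/2) exp(-U_rho/T);
    # rho enters only through U_rho
    return math.sqrt(2.0 / math.pi) * (mu_max / T) ** (m / 2.0) * math.exp(-U_rho / T)

def T1_W(n, rho, U_rho, mu_max, T_grid):
    # T^1_W = inf{ T > 0 : f(T, n-2, rho) > rho^(2-n) A_(n-1) };
    # f(T, ...) -> 0 as T -> 0, so scan the grid upwards for the first crossing
    threshold = rho ** (2 - n) * A(n - 1)
    for T in T_grid:
        if f(T, n - 2, U_rho, mu_max) > threshold:
            return T
    return None  # the condition never holds on this grid

# purely illustrative (hypothetical) parameters
n, rho, U_rho, mu_max = 3, 2.0, 1.0, 10.0
grid = [0.001 * k for k in range(1, 5001)]
Tw1 = T1_W(n, rho, U_rho, mu_max, grid)
```

The same scan, with the corresponding thresholds, approximates $T^2_{\scriptscriptstyle W}$, $T^1_{\scriptscriptstyle \delta}$ and $T^2_{\scriptscriptstyle \delta\hspace{0.03cm}}$.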
Remark : the following formulae for $A_n$ and $\omega_{n-1}$ will be useful in <ref>,
\begin{equation} \label{eq:gammastuff}
A_n = \pi^{\scriptscriptstyle -\frac{1}{2}}2^{\scriptscriptstyle \frac{n}{2}}\hspace{0.03cm}\Gamma((n+1)/2) \hspace{0.2cm};\hspace{0.2cm}
\omega_{n-1} = \frac{2\hspace{0.02cm}\pi^{\scriptscriptstyle \frac{n}{2}}}{\Gamma(n/2)}
\end{equation}
These are well-known, and follow easily from the definition of the Euler Gamma function [33].
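As a quick numerical sanity check on these closed forms, one may compare them against standard values ($A_1 = \sqrt{2/\pi}$, $A_2 = 1$, $A_4 = 3$, $\omega_1 = 2\pi$, $\omega_2 = 4\pi$):

```python
import math

def A(k):
    # A_k = pi^(-1/2) 2^(k/2) Gamma((k+1)/2), the k-th absolute normal moment
    return math.pi ** -0.5 * 2.0 ** (k / 2.0) * math.gamma((k + 1) / 2.0)

def omega(k):
    # area of the unit sphere S^k in R^(k+1): omega_k = 2 pi^((k+1)/2) / Gamma((k+1)/2)
    return 2.0 * math.pi ** ((k + 1) / 2.0) / math.gamma((k + 1) / 2.0)
```

For instance, `A(2)` recovers the unit second moment of a standard normal variable, and `omega(2)` the area $4\pi$ of the ordinary sphere.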
§ COMPACT SYMMETRIC SPACES
Compact Riemannian symmetric spaces belong to the “compact case”, already treated in <ref>. Some additional material, on these spaces, is needed for the proofs of Propositions <ref> and <ref>.
§.§ Roots and the Jacobi equation
From now on, let $M = G/K$ be a symmetric space, where $G$ is semisimple and compact, and $K = K_y$ the stabiliser in $G$ of some point $y \in M$. Recall the Cartan decomposition $\mathfrak{g} = \mathfrak{k} + \mathfrak{p}$, where $\mathfrak{g}$ and $\mathfrak{k}$ are the Lie algebras of $G$ and $K$, respectively. Moreover, let $\mathfrak{a}$ be a maximal Abelian subspace of $\mathfrak{p}$, and denote $\Delta_+$ the corresponding set of positive roots $\lambda : \mathfrak{a} \rightarrow \mathbb{R}$.
Then, $\mathfrak{p}$ may be identified with $T_yM$, and any $v \in \mathfrak{p}$ can be written $v = \mathrm{Ad}(k)\,a$ for some $k \in K$ and $a \in\mathfrak{a}$. Accordingly, the self-adjoint curvature operator, $R_v$ (given by $R_v(u) = [v\hspace{0.03cm},[v\hspace{0.02cm},u]]$ for $u \in T_yM$), can be diagonalised (the reader may wish to note (<ref>) differs from (<ref>) by a minus sign, since the space here denoted $\mathfrak{p}$ would have been $\mathfrak{p}_* = i\hspace{0.02cm}\mathfrak{p}$, in Chapter <ref>)
\begin{equation} \label{eq:raeigenbis}
\mathrm{Ad}(k^{\scriptscriptstyle -1})\circ R_v \circ \mathrm{Ad}(k) = R_a \hspace{0.4cm} \text{where } R_a= - \sum_{\lambda \in \Delta_+} (\lambda(a))^2\;\Pi_{\lambda}
\end{equation}
and where $\Pi_{\lambda}$ is the orthogonal projector onto the eigenspace of $R_a$ which corresponds to the eigenvalue $-(\lambda(a))^2$. The rank of $\Pi_\lambda$ is denoted $m_\lambda$ and called the multiplicity of $\lambda$.
Recall that the curvature tensor of a symmetric space is parallel : $\nabla\,R = 0$. This property, when combined with the diagonalisation (<ref>), yields the solutions of the operator Jacobi equation (<ref>), and of the Riccati equation (<ref>).
Alternatively, if $A(t)$ solves (<ref>) and $\mathcal{A}(t) = \Pi^{\scriptscriptstyle 0}_{t} \circ A(t)$, where $\Pi^{\scriptscriptstyle 0}_{t}$ denotes parallel transport, along the geodesic $c_{\scriptscriptstyle v}$ with $c_{\scriptscriptstyle v}(0) = y$ and $\dot{c}_{\scriptscriptstyle v}(0) = v$, then $\mathcal{A}(t)$ solves the differential equation
\begin{equation} \label{eq:jacobisss}
\mathcal{A}^{\prime\prime} - R_v\hspace{0.02cm}\mathcal{A} = 0 \hspace{1cm} \mathcal{A}(0) = 0 \,,\, \mathcal{A}^\prime(0) = \mathrm{Id}_{y}
\end{equation}
where the prime denotes differentiation with respect to $t$. Using (<ref>), it follows that
\begin{equation} \label{eq:compactAA}
\mathcal{A}(t) \,=\, t\;\Pi^{k}_{\mathfrak{a}} \,+\, \sum_{\lambda \in \Delta_+}\left(\sin(\lambda(a)t)\middle/\lambda(a)\right)\hspace{0.02cm}
\Pi^{k}_{\lambda}
\end{equation}
where $\Pi^{k}_{\mathfrak{a}} = \mathrm{Ad}(k)\circ \Pi_{\mathfrak{a}} \circ \mathrm{Ad}(k^{\scriptscriptstyle -1})$ and
$\Pi^{k}_{\lambda} = \mathrm{Ad}(k)\circ \Pi_{\lambda} \circ \mathrm{Ad}(k^{\scriptscriptstyle -1})$, with $\Pi_{\mathfrak{a}}$ the orthogonal projector onto $\mathfrak{a}$.
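For a concrete illustration (the classical rank-one case, not needed in the sequel), take $M = S^n$ with the round metric of constant curvature one. Then $\mathfrak{a}$ is one-dimensional, and there is a single positive root $\lambda$, with $\lambda(a) = \Vert a \Vert$ and multiplicity $m_\lambda = n - 1$. For a unit vector $v$, the operator $R_v$ vanishes on the radial direction and equals $-\mathrm{Id}$ on its orthogonal complement, and solving (<ref>) directly gives

```latex
\mathcal{A}(t) \,=\, t\;\Pi^{k}_{\mathfrak{a}} \,+\, \sin(t)\,\Pi^{k}_{\lambda}\,,
\qquad
\det\mathcal{A}(t) \,=\, t\,\sin^{n-1}(t)
```

recovering the classical Jacobi fields along a great circle, which vanish again for the first time at $t = \pi$, at the antipodal point.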
§.§ The cut locus
Let $M$ be a compact Riemannian symmetric space, as above. Assume, as in Propositions <ref> and <ref>, that $M$ is simply connected. In this case, the following important property holds [9] : the cut locus of any point $y \in M$ is identical to the first conjugate locus of this point.
Accordingly, if $v$ is a unit vector in $\mathfrak{p} \simeq T_yM$, the geodesic $c_{\scriptscriptstyle v}$ will meet the cut locus of $y$ for the first time when $\det(\mathcal{A}(t)) = 0$ for the first time after $t = 0$. But, as seen from (<ref>), if $v = \mathrm{Ad}(k)\,a$, then this happens when $t = \mathrm{t}(v)$ given by
\begin{equation} \label{eq:tc1}
\mathrm{t}(v) = \min_{\lambda \in \Delta_+}\,\frac{\pi}{|\lambda(a)|} =
\min_{\lambda \in \Delta_+}\,\frac{\pi}{\lambda(a)}
\end{equation}
where the absolute value can be dropped because it is always possible to assume $a$ belongs to $\bar{C}_+\hspace{0.02cm}$, the closure of the Weyl chamber $C_+$ (the set of $a \in \mathfrak{a}$ such that $\lambda(a) > 0$ for each $\lambda \in \Delta_+$). If $M$ is an irreducible symmetric space, then there exists a maximal root $c \in \Delta_+\hspace{0.02cm}$, so that $c(a) \geq \lambda(a)$ for all $\lambda \in \Delta_+$ and $a \in \bar{C}_+$ [10]. In this case, $\mathrm{t}(v) = \pi/c(a)$. On the other hand, if $M$ is not irreducible, it is a product of irreducible compact Riemannian symmetric spaces, say $M = M_{\scriptscriptstyle 1}\times\ldots\times M_{ s\hspace{0.03cm}}$. If $c_{\scriptscriptstyle 1},\ldots,c_s$ are the corresponding maximal roots,
\begin{equation}\label{eq:tc2}
\mathrm{t}(v) = \min_{\ell= 1,\ldots,\hspace{0.02cm}s}\,\frac{\pi}{c_{\ell}(a)}
\end{equation}
The cut locus of $y$ is the set of all points $c_{\scriptscriptstyle v}(\mathrm{t}(v))$ where $v$ is a unit vector in $T_yM$. Then, the injectivity radius $\mathrm{inj}(y)$ of $y$ is equal to the minimum of $\mathrm{t}(v)$, taken over all unit vectors $v$. From (<ref>), this is equal to $\pi\hspace{0.03cm}c^{\scriptscriptstyle -1}$ where $c = \max_{\ell= 1,\ldots,\hspace{0.02cm}s} \Vert c_{\ell}\Vert$ and $\Vert c_{\ell}\Vert$ denotes the norm of $c_\ell \in\mathfrak{a}^*$ (the dual space of $\mathfrak{a}$). Since $M$ is a homogeneous space, the injectivity radius of $M$ is equal to the injectivity radius of any point $y$ in $M$, and is therefore also equal to $\pi\hspace{0.03cm}c^{\scriptscriptstyle -1}$. Incidentally, $c^{\hspace{0.02cm}\scriptscriptstyle 2}$ is the maximum sectional curvature of $M$.
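As a concrete (illustrative) instance, consider a product of two round spheres. Since $c^{\hspace{0.02cm}\scriptscriptstyle 2}$ equals the maximum sectional curvature, a sphere of constant curvature $\kappa$ has maximal root of norm $\kappa^{1/2}$, so that

```latex
M \,=\, S^{2}_{\kappa = 1}\times S^{2}_{\kappa = 4}\,,\qquad
\Vert c_{1} \Vert = 1\,,\quad \Vert c_{2}\Vert = 2\,,\qquad
c = 2\,,\quad \mathrm{inj}(M) = \frac{\pi}{2}\,,\quad
r_{\scriptscriptstyle cx} = \frac{\pi}{2}\,c^{\scriptscriptstyle -1} = \frac{\pi}{4}
```

Here, the minimum of $\mathrm{t}(v)$ is achieved by unit vectors tangent to the more curved factor.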
With a bit of additional work, the above description of the cut locus of $y$ can be strengthened, to yield the following statements. Let $S = K/K_{\mathfrak{a}}$ where $K_{\mathfrak{a}}$ is the centraliser of $\mathfrak{a}$ in $K$. Moreover, denote $Q_+$ the set of $a \in \mathfrak{a}$ such that $\lambda(a) \in (0,\pi)$ for each $\lambda \in \Delta_+\hspace{0.02cm}$. Then, consider the mapping
\begin{equation} \label{eq:varphibis}
\varphi(s\hspace{0.02cm},a) = \mathrm{Exp}_y(\beta(s\hspace{0.02cm},a)) \hspace{1cm} (s\hspace{0.02cm},a) \in S \times \bar{Q}_+
\end{equation}
where $\beta(s\hspace{0.02cm},a) = \mathrm{Ad}(s)\,a$ and $\bar{Q}_+$ is the closure of $Q_+\hspace{0.02cm}$. This mapping $\varphi$ is onto $M$, and is a diffeomorphism of $S \times Q_+$ onto its image $M_r\hspace{0.03cm}$, which is also the set of regular values of $\varphi$. Finally,
\begin{equation} \label{eq:cutysccss}
\mathrm{Cut}(y) = \varphi(S\times \bar{Q}_{\pi}) \hspace{0.5cm} \text{where } \bar{Q}_{\pi} = \bar{Q}_+ \,\cap\,(\cup_{\ell}\,\lbrace a: c_{\ell}(a) = \pi \rbrace)
\end{equation}
§.§ The squared distance function
For $x \in M$, consider the squared distance function $f_x(y) = d^{\hspace{0.03cm}2}(x,y)/2$. If $x \notin \mathrm{Cut}(y)$, then $f_x$ is $C^2$ near $y$ (this is because $y \in \mathrm{Cut}(x)$ if and only if $x \in \mathrm{Cut}(y)$).
In this case, write $x = \varphi(s\hspace{0.02cm},a)$, where the map $\varphi$ was defined in (<ref>). Let $G_y(x)$ and $H_y(x)$ denote the gradient and Hessian of $f_x$ at $y$. These are given by
\begin{equation} \label{eq:ssgradfy}
G_y(x) = - \beta(s,a) \hspace{3.3cm}
\end{equation}
\begin{equation} \label{eq:sshessfy}
H_y(x) = \Pi^s_\mathfrak{a} \,+\, \sum_{\lambda \in \Delta_+} \lambda(a)\cot\lambda(a)\;\Pi^s_{\lambda}
\end{equation}
in the notation of (<ref>). Here, (<ref>) follows from (<ref>), since $x = \mathrm{Exp}_y\hspace{0.03cm}(\beta(s\hspace{0.02cm},a))$, and (<ref>) follows from the solution of the Riccati equation (<ref>), discussed in <ref>.
If $M$ is simply connected, then $\mathrm{Cut}(y)$ is given by (<ref>). Now, if $x \in \mathrm{Cut}(y)$ is written $x = \varphi(s\hspace{0.02cm},a)$, then $\lambda(a) = \pi$ for some $\lambda \in \Delta_+\hspace{0.02cm}$ ($\lambda = c_{\ell}$ which achieves the minimum in (<ref>)). By (<ref>), since $\lambda(a)\cot\lambda(a) \rightarrow -\infty$ as $\lambda(a) \rightarrow \pi^-$, the smallest eigenvalue of $H_y(x)$ tends to $-\infty$. In other words, $H_y(x)$ blows up when $x$ approaches $\mathrm{Cut}(y)$.
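For instance, on the unit sphere $M = S^n$, writing $r = d(x\hspace{0.02cm},y)$, the formulas (<ref>) and (<ref>) reduce to the familiar expressions

```latex
G_y(x) \,=\, -\hspace{0.03cm}\mathrm{Exp}^{-1}_y(x)\,,
\qquad
H_y(x) \,=\, \Pi_{\mathrm{rad}} \,+\, r\cot(r)\;\Pi^{\perp}_{\mathrm{rad}}
```

where $\Pi_{\mathrm{rad}}$ projects onto the direction of the minimising geodesic from $y$ to $x$. The transverse eigenvalue $r\cot(r)$ tends to $-\infty$ as $x$ approaches the antipode of $y$, which is exactly $\mathrm{Cut}(y)$.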
The convexity radius of a simply connected compact Riemannian symmetric space $M$ is equal to half its injectivity radius. Accordingly, the convexity radius of $M$ is $r_{\scriptscriptstyle cx} = (\pi/2)\hspace{0.03cm}c^{\scriptscriptstyle -1}$. The proof of this statement may be summarised in the following way :
If $\delta < r_{\scriptscriptstyle cx\hspace{0.02cm}}$, then any $y_{\scriptscriptstyle 1\hspace{0.02cm}},y_{\scriptscriptstyle 2}$ in $B(x\hspace{0.02cm},\delta)$ must have $d(y_{\scriptscriptstyle 1\hspace{0.02cm}},y_{\scriptscriptstyle 2}) < \pi\hspace{0.03cm}c^{\scriptscriptstyle -1}$, the injectivity radius of $M$, and are therefore connected by a unique length-minimising geodesic curve $\gamma$. But, by (<ref>), the squared distance function $f_x$ is convex on $B(x\hspace{0.02cm},\delta)$, where all eigenvalues of its Hessian are greater than $c\delta\hspace{0.02cm}\cot(c\delta) > 0$. This can be used to show that the geodesic $\gamma$ lies entirely in $B(x\hspace{0.02cm},\delta)$ [34] (Page 177). In other words, the geodesic ball $B(x\hspace{0.02cm},\delta)$ is convex. On the other hand [9], if $\delta = r_{\scriptscriptstyle cx}$ then there exists a closed (i.e. periodic) geodesic, of length $2\pi\hspace{0.03cm}c^{\scriptscriptstyle -1}$, contained in $B(x\hspace{0.02cm},\delta)$, so that this geodesic ball cannot be convex.
§.§ A different integral formula
Consider again the map $\varphi$, defined in (<ref>). Let $M_r$ denote the set of regular values of $\varphi$. By Sard's lemma [16], the complement of $M_r$ in $M$ has zero Riemannian volume. Therefore, if $f : M\rightarrow \mathbb{R}$ is a measurable function,
\begin{equation}\label{eq:ssbisintegral1}
\int_M\,f(x)\hspace{0.03cm}\mathrm{vol}(dx) = \int_{M_r}\,f(x)\hspace{0.03cm}\mathrm{vol}(dx)
\end{equation}
However, it was seen in <ref> that $\varphi$ is a diffeomorphism of $S \times Q_+$ onto $M_r\hspace{0.02cm}$. Then, performing a “change of variables”, it follows that
\begin{equation} \label{eq:ssbisintegral2}
\int_M\,f(x)\hspace{0.03cm}\mathrm{vol}(dx) =
\int_{Q_+}\!\int_S\,f(s\hspace{0.02cm},a)\hspace{0.02cm}D(a)\hspace{0.03cm}da\hspace{0.02cm}\omega(ds)
\end{equation}
where $f(s\hspace{0.02cm},a) = f(\varphi(s\hspace{0.02cm},a))$ and $\varphi^*(\mathrm{vol}) = D(a)\hspace{0.03cm}da\hspace{0.02cm}\omega(ds)$. In particular, the “volume density” $D(a)$ can be read from (<ref>),
\begin{equation} \label{eq:Dbis}
D(a) = \prod_{\lambda \in \Delta_+} \left| \sin\hspace{0.02cm}\lambda(a)\right|^{ m_\lambda} \hspace{2.4cm}
\end{equation}
where the absolute value may be dropped, whenever $a \in Q_+$ is understood from the context.
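On the unit two-sphere, this change of variables is just integration in polar coordinates: the rank is one, there is a single root with multiplicity one, $Q_+ = (0,\pi)$, $D(a) = \sin a$, and $\omega(S) = 2\pi$. A numerical check (midpoint rule; the grid size is arbitrary) that the formula recovers $\mathrm{vol}(S^2) = 4\pi$:

```python
import math

# vol(S^2) = ∫_S ∫_{Q_+} D(a) da ω(ds) = 2π ∫_0^π sin(a) da
N = 200000
h = math.pi / N
# midpoint rule for ∫_0^π sin(a) da (exact value: 2)
radial = sum(math.sin((k + 0.5) * h) for k in range(N)) * h
vol = 2.0 * math.pi * radial
```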
Remark : the integral formula (<ref>) is somewhat similar to (<ref>). Roughly, both formulae involve the same change of variables, but (<ref>) takes advantage of the description of the cut locus of $y$ in (<ref>). Of course, (<ref>) only works when the compact symmetric space $M$ is simply connected.
§ ALL THE PROOFS
Throughout the following proofs, it will be assumed that $U(x^*) = 0$. There is no loss of generality in making this assumption. Indeed, looking back at the definition (<ref>) of the Gibbs distribution $\pi_{\scriptscriptstyle T}\hspace{0.03cm}$, it is clear that a factor $\exp(-U(x^*)/T)$ may always be absorbed into $Z(T)$.
§.§ Proof of Proposition <ref>
§.§.§ Proof of (i)
For each $y \in M$, let $f_y(x) = d^{\hspace{0.03cm}2}(y\hspace{0.02cm},x)/2$. It follows from (<ref>) that
\begin{equation} \label{proofconcentration11}
\mathcal{E}_{\scriptscriptstyle T}(y) = \int_M\, f_y(x)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx)
\end{equation}
On the other hand, consider the function $\mathcal{E}_{\scriptscriptstyle 0}(y)$,
\begin{equation} \label{eq:E0}
\mathcal{E}_{\scriptscriptstyle 0}(y) = \int_M\, f_y(x)\hspace{0.03cm}\delta_{x^*}(dx) = d^{\hspace{0.03cm}2}(y\hspace{0.02cm},x^*)/2
\end{equation}
For any $y \in M$, it is elementary that $f_y(x)$ is a Lipschitz function of $x$, with Lipschitz constant $\mathrm{diam}\,M$. Then, from the Kantorovich-Rubinshtein formula [35] (see VIII.4)
\begin{equation} \label{eq:proofconcentration12}
\left| \mathcal{E}_{\scriptscriptstyle T}(y) - \mathcal{E}_{\scriptscriptstyle 0}(y)\right| \leq (\mathrm{diam}\,M)\hspace{0.02cm}W(\pi_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*})
\end{equation}
a uniform bound in $y \in M$. It now follows that, for any $\eta > 0$,
\begin{equation} \label{eq:proofconcentration13}
\inf_{y \in B(x^*\!,\eta)} \mathcal{E}_{\scriptscriptstyle T}(y) - \inf_{y \in B(x^*\!,\eta)} \mathcal{E}_{\scriptscriptstyle 0}(y) \leq (\mathrm{diam}\,M)\hspace{0.02cm}W(\pi_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*})
\end{equation}
\begin{equation} \label{eq:proofconcentration14}
\inf_{y \notin B(x^*\!,\eta)} \mathcal{E}_{\scriptscriptstyle 0}(y) - \inf_{y \notin B(x^*\!,\eta)} \mathcal{E}_{\scriptscriptstyle T}(y) \leq (\mathrm{diam}\,M)\hspace{0.02cm}W(\pi_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*})
\end{equation}
However, from (<ref>), it is clear that
\inf_{y \in B(x^*\!,\eta)} \mathcal{E}_{\scriptscriptstyle 0}(y) = 0 \hspace{0.25cm}\text{and}\hspace{0.2cm}
\inf_{y \notin B(x^*\!,\eta)} \mathcal{E}_{\scriptscriptstyle 0}(y) = \frac{\eta^2}{2}
To complete the proof, replace these into (<ref>) and (<ref>), and assume the condition in (<ref>) is verified. It then follows that
\begin{equation} \label{eq:proofconcentration15}
\inf_{y \in B(x^*\!,\eta)} \mathcal{E}_{\scriptscriptstyle T}(y) < \frac{\eta^2}{4} < \inf_{y \notin B(x^*\!,\eta)} \mathcal{E}_{\scriptscriptstyle T}(y)
\end{equation}
However, this means any global minimum of $\mathcal{E}_{\scriptscriptstyle T}(y)$ must belong to $B(x^*\!,\eta)$. Equivalently, any Riemannian barycentre $\hat{x}_{\scriptscriptstyle T}$ of $\pi_{\scriptscriptstyle T}$ must verify $d(\hat{x}_{\scriptscriptstyle T}\hspace{0.02cm},x^*) < \eta$. Thus, the conclusion in (<ref>) holds.
§.§.§ Proof of (ii)
Recall the condition in (<ref>), which holds for $d(x\hspace{0.02cm},x^*) \leq \rho$. By choosing $\rho < \min\lbrace \mathrm{inj}(x^*),\frac{\pi}{2}c^{\scriptscriptstyle -1}\rbrace$, it will be possible to apply (<ref>) from <ref>, in the remainder of the proof. Consider the truncated distribution
\begin{equation} \label{eq:pitruncate}
\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}(dx) \,= \frac{\mathbf{1}_{\scriptscriptstyle B_\rho}(x)}{\pi_{\scriptscriptstyle T}(B_{\scriptscriptstyle \rho})}\hspace{0.03cm} \pi^{\phantom{\scriptscriptstyle \rho}}_{\scriptscriptstyle T}(dx)
\end{equation}
where $\mathbf{1}$ denotes the indicator function, and $B_{\scriptscriptstyle \rho}$ denotes the open ball $B(x^*\!,\rho)$.
Of course, by the triangle inequality
\begin{equation} \label{eq:kantortriangle}
W(\pi_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*}) \leq
W(\pi^{\phantom{\scriptscriptstyle \rho}}_{\scriptscriptstyle T}\hspace{0.02cm},\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}) +
W(\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*})
\end{equation}
Now, the proof relies on the following estimates, which use the notation of <ref>.
– first estimate : if $T\leq T^1_{\scriptscriptstyle W\hspace{0.03cm}}$, then
\begin{equation} \label{eq:estimate1}
W(\pi^{\phantom{\scriptscriptstyle \rho}}_{\scriptscriptstyle T}\hspace{0.02cm},\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}) \leq (\mathrm{diam}\,M\times\mathrm{vol}\,M)\left(\frac{2}{\pi}\right)\left(\frac{\pi}{8}\right)^{\!\frac{n}{2}}\left(\frac{\mu_{\max}}{T}\right)^{\!\frac{n}{2}}\exp\left(-\frac{U_\rho}{T}\right)
\end{equation}
– second estimate : if $T\leq T^1_{\scriptscriptstyle W\hspace{0.03cm}}$, then
\begin{equation} \label{eq:estimate2}
W(\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*}) \leq
\hspace{0.02cm}B^{-1}_n
\left(\frac{\pi}{2}\right)^{\!n-1}
\left(\frac{\mu_{\max}}{\mu_{\min}}\right)^{\!\!\frac{n}{2}}
\left(\frac{T}{\mu_{\min}}\right)^{\!\!\frac{1}{2}}
\end{equation}
These two estimates will be proved below. To obtain (<ref>), assume that they hold, and that $T\leq T_{\scriptscriptstyle W\hspace{0.03cm}}$. Then, $T \leq T^2_{\scriptscriptstyle W}$ and the definition of $T^2_{\scriptscriptstyle W}$ implies
f(T,n+1,\rho) \leq \left(\mu_{\max}\middle/\mu_{\min}\right)^{\!\scriptscriptstyle \frac{n}{2}}\hspace{0.01cm}C_n
Using the definition of $C_{n\hspace{0.02cm}}$, this inequality is the same as
(\mathrm{diam}\,M\times\mathrm{vol}\,M)\hspace{0.02cm}f(T,n+1,\rho) \,\leq\,
\omega_{n-1}\hspace{0.02cm}A_{n}\left(\mu_{\max}\middle/\mu_{\min}\right)^{\!\scriptscriptstyle \frac{n}{2}}
From the definition of $f(T,n+1,\rho)$, together with formulae (<ref>) for $A_n$ and $\omega_{n-1\hspace{0.03cm}}$, it then follows that the right-hand side of (<ref>) is less than half the right-hand side of (<ref>). Since this is the case, (<ref>) follows from the triangle inequality (<ref>).
– proof of first estimate : consider the probability distribution $K$ on $M \times M$,
\begin{equation} \label{eq:coupling}
K(dx_{\scriptscriptstyle 1}\times
dx_{\scriptscriptstyle 2}) =\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}(dx_{\scriptscriptstyle 1})\left[ \pi_{\scriptscriptstyle T}(B_{\scriptscriptstyle \rho})\hspace{0.02cm}\delta_{x_{\scriptscriptstyle 1}}(dx_{\scriptscriptstyle 2}) + \mathbf{1}_{\scriptscriptstyle B^{\scriptscriptstyle c}_{\scriptscriptstyle \rho}}(x_{\scriptscriptstyle 2})\pi_{\scriptscriptstyle T}(dx_{\scriptscriptstyle 2})\right]
\end{equation}
where $B^{\scriptscriptstyle c}_{\scriptscriptstyle \rho}$ denotes the complement of $B_{\scriptscriptstyle \rho}$ in $M$. This distribution $K$ provides a coupling between $\pi^{\phantom{\scriptscriptstyle \rho}}_{\scriptscriptstyle T}$ and $\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T\hspace{0.03cm}}$. Therefore, replacing (<ref>) into the definition of the Kantorovich distance, it follows
\begin{equation} \label{eq:proofestimate11}
W(\pi_{\scriptscriptstyle T}\hspace{0.02cm},\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}) \leq (\mathrm{diam}\,M)\hspace{0.02cm}
\pi_{\scriptscriptstyle T}(B^{\scriptscriptstyle c}_{\scriptscriptstyle \rho})
\end{equation}
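In more detail, the first term in (<ref>) charges only the diagonal of $M\times M$, where $d(x_{\scriptscriptstyle 1\hspace{0.02cm}},x_{\scriptscriptstyle 2}) = 0$, so that

```latex
W(\pi^{\phantom{\scriptscriptstyle \rho}}_{\scriptscriptstyle T}\hspace{0.02cm},\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T})
\,\leq\, \int_{M\times M} d(x_{\scriptscriptstyle 1\hspace{0.02cm}},x_{\scriptscriptstyle 2})\,K(dx_{\scriptscriptstyle 1}\times dx_{\scriptscriptstyle 2})
\,=\, \int_{M}\int_{B^{\scriptscriptstyle c}_{\scriptscriptstyle \rho}} d(x_{\scriptscriptstyle 1\hspace{0.02cm}},x_{\scriptscriptstyle 2})\,
\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}(dx_{\scriptscriptstyle 1})\hspace{0.02cm}\pi_{\scriptscriptstyle T}(dx_{\scriptscriptstyle 2})
\,\leq\, (\mathrm{diam}\,M)\hspace{0.02cm}\pi_{\scriptscriptstyle T}(B^{\scriptscriptstyle c}_{\scriptscriptstyle \rho})
```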
However, the definition (<ref>) of $\pi_{\scriptscriptstyle T}$ implies
\begin{equation} \label{eq:proofestimate12}
\pi_{\scriptscriptstyle T}(B^{\scriptscriptstyle c}_{\scriptscriptstyle \rho}) \leq \left(Z(T)\right)^{-1}(\mathrm{vol}\,M)\exp\left(-\frac{U_\rho}{T}\right)
\end{equation}
Now, (<ref>) follows directly from (<ref>) and (<ref>), if the following lower bound on $Z(T)$ can be proved
\begin{equation} \label{eq:lowerz}
Z(T) \geq \left(\frac{\pi}{2}\right)\left(\frac{8}{\pi}\right)^{\!\frac{n}{2}}\left(\frac{T}{\mu_{\max}}\right)^{\!\frac{n}{2}} \hspace{1cm} \text{for } T \leq T^1_{\scriptscriptstyle W}
\end{equation}
To prove this lower bound, note that
Z(T) = \int_M\,\exp\left(-\frac{U(x)}{T}\right)\mathrm{vol}(dx) \,\geq \int_{B_{\scriptscriptstyle \rho}}\,\exp\left(-\frac{U(x)}{T}\right)\mathrm{vol}(dx)
Replacing (<ref>) into this last inequality, it is possible to write
\begin{equation} \label{eq:prooflowerz1}
Z(T) \geq \int_{B_{\scriptscriptstyle \rho}}\,\exp\left(-\frac{U(x)}{T}\right)\mathrm{vol}(dx) \geq
\int_{B_{\scriptscriptstyle \rho}}\,\exp\left(-\frac{\mu_{\max}}{2T}d^{\hspace{0.03cm}2}(x,x^*)\right)\mathrm{vol}(dx)
\end{equation}
Since $\rho < \min\lbrace \mathrm{inj}(x^*),\frac{\pi}{2}c^{\scriptscriptstyle -1}\rbrace$, it is possible to apply (<ref>) from <ref>, to (<ref>). Specifically, the lower bound in (<ref>) yields,
\begin{equation} \label{eq:prooflowerz2}
Z(T) \geq
\omega_{n-1}\,\int^{\rho}_{\scriptscriptstyle 0}\,e^{-\frac{\mu_{\max}}{2T}r^2}\left(c^{\scriptscriptstyle -1}\!\sin(c\hspace{0.02cm}r)\right)^{n-1}\hspace{0.02cm}dr \geq
\omega_{n-1}(2/\pi)^{n-1}\,\int^{\rho}_{\scriptscriptstyle 0}\,e^{-\frac{\mu_{\max}}{2T}r^2}\hspace{0.02cm}r^{n-1}\hspace{0.02cm}dr
\end{equation}
where the second inequality follows since $\sin(t)$ is a concave function of $t \in [0,\pi/2]$, so that $
\sin(c\hspace{0.02cm}r) \geq (2/\pi)c\hspace{0.02cm}r$ for $r \in [0,\rho]$. Now, the required bound (<ref>) follows from (<ref>) by noting
\int^{\rho}_{\scriptscriptstyle 0}\,e^{-\frac{\mu_{\max}}{2T}r^2}r^{n-1}\hspace{0.02cm}dr =
(2\pi)^{\frac{1}{2}}\left(\frac{T}{\mu_{\max}}\right)^{\!\frac{n}{2}}A_{n-1} -
\int^{\scriptscriptstyle \infty}_{\rho}\,e^{-\frac{\mu_{\max}}{2T}r^2}r^{n-1}\hspace{0.02cm}dr
where $A_n = \mathbb{E}|X|^n$ for $X \sim N(0,1)$, and that
\int^{\scriptscriptstyle \infty}_{\rho}\,e^{-\frac{\mu_{\max}}{2T}r^2}r^{n-1}\hspace{0.02cm}dr \leq \frac{\rho^{n-2\hspace{0.02cm}}T}{\mu_{\max}}
\,e^{-\frac{\mu_{\max}}{2T}\rho^2}\leq
\frac{\rho^{n-2\hspace{0.02cm}}T}{\mu_{\max}}
\,e^{-\frac{U_\rho}{T}}
Indeed, taken together, these give
\begin{equation} \label{eq:prooflowerz3}
Z(T) \geq \omega_{n-1}(2/\pi)^{n-1}\,\left[
(2\pi)^{\frac{1}{2}}\left(\frac{T}{\mu_{\max}}\right)^{\!\frac{n}{2}}A_{n-1} -
\frac{\rho^{n-2\hspace{0.02cm}}T}{\mu_{\max}}
\,e^{-\frac{U_\rho}{T}}
\right]
\end{equation}
Then, (<ref>) can be obtained by noting that, for $T \leq T^1_{\scriptscriptstyle W\hspace{0.03cm}}$, the definition of $T^1_{\scriptscriptstyle W}$ ensures the second term in square brackets is at most half the first, and by using formulae (<ref>) for $A_{n-1}$ and $\omega_{n-1\hspace{0.03cm}}$.
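Incidentally, the chord bound $\sin(c\hspace{0.02cm}r) \geq (2/\pi)\hspace{0.02cm}c\hspace{0.02cm}r$ invoked in (<ref>) amounts to $\sin(t) \geq (2/\pi)\hspace{0.02cm}t$ on $[0,\pi/2]$, which is easy to confirm numerically (a sketch; the grid resolution is arbitrary):

```python
import math

# concavity of sin on [0, pi/2]: the graph lies above the chord t -> (2/pi) t,
# with equality exactly at the endpoints t = 0 and t = pi/2
ts = [k * (math.pi / 2.0) / 10000 for k in range(10001)]
gap = min(math.sin(t) - (2.0 / math.pi) * t for t in ts)
```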
– proof of second estimate : the Kantorovich distance $W(\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*})$ between $\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}$ and $\delta_{x^*}$ is equal to the first-order moment
\int_M\,d(x\hspace{0.02cm},x^*)\hspace{0.02cm}
\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}(dx)
According to (<ref>) and (<ref>), this means
W(\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*}) =
\left(\pi_{\scriptscriptstyle T}(B_{\scriptscriptstyle \rho})Z(T)\right)^{-1}\,\int_{B_{\scriptscriptstyle \rho}} d(x\hspace{0.02cm},x^*)\exp\left(-\frac{U(x)}{T}\right)\mathrm{vol}(dx)
Using (<ref>) to express the probability in parentheses, this becomes
\begin{equation} \label{eq:proofestimate21}
W(\pi^{\scriptscriptstyle \rho}_{\scriptscriptstyle T}\hspace{0.02cm},\delta_{x^*}) = \frac{\int_{B_{\scriptscriptstyle \rho}} d(x\hspace{0.02cm},x^*)\exp\left(-\frac{U(x)}{T}\right)\mathrm{vol}(dx)}{
\int_{B_{\scriptscriptstyle \rho}}\exp\left(-\frac{U(x)}{T}\right)\mathrm{vol}(dx)}
\end{equation}
A lower bound on the denominator can be found from (<ref>) and the subsequent inequalities, which were used to prove (<ref>). These inequalities provide
\begin{equation} \label{eq:proofestimate22}
\int_{B_{\scriptscriptstyle \rho}}\exp\left(-\frac{U(x)}{T}\right)\mathrm{vol}(dx) \geq
\frac{1}{2}\,\omega_{n-1}\left(\frac{2}{\pi}\right)^{\!n-1}(2\pi)^{\frac{1}{2}}
\hspace{0.02cm}A_{n-1}
\left(T\middle/\mu_{\max}\right)^{\!\frac{n}{2}}
\end{equation}
whenever $T \leq T^1_{\scriptscriptstyle W}$. For the numerator in (<ref>), it will be shown that
\begin{equation} \label{eq:proofestimate23}
\int_{B_{\scriptscriptstyle \rho}} d(x\hspace{0.02cm},x^*)\exp\left(-\frac{U(x)}{T}\right)\mathrm{vol}(dx)
\leq
\frac{(2\pi)^{\frac{1}{2}}}{2}\,\omega_{n-1}
\hspace{0.02cm}A_{n}
\left(T\middle/\mu_{\min}\right)^{\!\frac{n+1}{2}}
\end{equation}
Then, (<ref>) follows by dividing (<ref>) by (<ref>), and replacing into (<ref>), using the fact that $A_n/A_{n-1} = (2\pi)^{\!\frac{1}{2}}B^{\scriptscriptstyle -1}_n\hspace{0.03cm}$, which can be found from formulae (<ref>). Now, it only remains to prove (<ref>). This is done by noting, from (<ref>),
\int_{B_{\scriptscriptstyle \rho}} d(x\hspace{0.02cm},x^*)\exp\left(-\frac{U(x)}{T}\right)\mathrm{vol}(dx)
\leq
\int_{B_{\scriptscriptstyle \rho}} d(x\hspace{0.02cm},x^*)
\exp\left(-\frac{\mu_{\min}}{2T}d^{\hspace{0.03cm}2}(x,x^*)\right)\mathrm{vol}(dx)
Applying the upper bound in (<ref>) (with $\kappa_{\min} = 0$), to the last integral, it follows that
\int_{B_{\scriptscriptstyle \rho}} d(x\hspace{0.02cm},x^*)
\exp\left(-\frac{\mu_{\min}}{2T}d^{\hspace{0.03cm}2}(x,x^*)\right)\mathrm{vol}(dx) \leq
\omega_{n-1}\int^{\rho}_{\scriptscriptstyle 0}
e^{-\frac{\mu_{\min}}{2T}r^2}\hspace{0.02cm}r^n\hspace{0.02cm}dr \leq
\omega_{n-1}\int^{\scriptscriptstyle \infty}_{\scriptscriptstyle 0}
e^{-\frac{\mu_{\min}}{2T}r^2}\hspace{0.02cm}r^n\hspace{0.02cm}dr
The integral on the right-hand side is half the $n$-th absolute moment of a normal distribution. By expressing it in terms of $A_n\hspace{0.03cm}$, it is possible to directly recover (<ref>).
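Concretely, the identity used in this last step is $\int^{\scriptscriptstyle \infty}_{\scriptscriptstyle 0} e^{-r^2/2\sigma^2}\hspace{0.02cm}r^n\hspace{0.02cm}dr = \frac{(2\pi)^{1/2}}{2}\,\sigma^{n+1}A_n$ with $\sigma^2 = T/\mu_{\min}$. A numerical sanity check (the values of $n$ and $\sigma$ below are arbitrary):

```python
import math

def A(k):
    # k-th absolute moment of a standard normal variable
    return math.pi ** -0.5 * 2.0 ** (k / 2.0) * math.gamma((k + 1) / 2.0)

def half_gaussian_moment(k, sigma, N=400000):
    # midpoint rule for ∫_0^R exp(-r²/2σ²) r^k dr with R = 12σ (negligible tail)
    R = 12.0 * sigma
    h = R / N
    return sum(
        math.exp(-(((j + 0.5) * h) ** 2) / (2.0 * sigma ** 2)) * ((j + 0.5) * h) ** k
        for j in range(N)
    ) * h

k, sigma = 3, 0.7  # arbitrary test values
lhs = half_gaussian_moment(k, sigma)
rhs = math.sqrt(2.0 * math.pi) / 2.0 * sigma ** (k + 1) * A(k)
```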
§.§ Proof of Proposition <ref>
§.§.§ Proof of (i)
Under the integrals in (<ref>), $G_y(x)$ and $H_y(x)$ are given by (<ref>) and (<ref>), for any $x \in \mathrm{D}(y)$. Furthermore, by (<ref>) and (<ref>), both integrands $G_y(x)$ and $H_y(x)$ are continuous on the domain of integration $\mathrm{D}(y)$.
The integral $G_y$ converges, because $G_y(x)$ is uniformly bounded on $\mathrm{D}(y)$. Indeed, from (<ref>),
\Vert G_y(x)\Vert_y = \Vert \beta(s,a)\Vert_y = d(y\hspace{0.02cm},x)
where the second equality follows from the fact that $x = \mathrm{Exp}_y(\beta(s,a))$. Of course, $d(y\hspace{0.02cm},x)$ is always less than $\mathrm{diam}\,M$.
The integral $H_y$ is an improper integral, since $H_y(x)$ blows up when $x$ approaches $\mathrm{Cut}(y)$, as explained in <ref>. Nonetheless, this integral converges absolutely, as shall be seen from the material in <ref>.
Precisely, recall the mapping $\varphi$ defined in (<ref>). Because $M$ is simply connected, $\mathrm{Cut}(y)$ is identical to the first conjugate locus of $y$. This means that $\mathrm{Cut}(y)$ is contained in the set of critical values of $\mathrm{Exp}_y\hspace{0.03cm}$, and therefore also in the set of critical values of $\varphi$. Equivalently, $\mathrm{D}(y)$ contains the set of regular values of $\varphi$, denoted $M_r$ in (<ref>). It then follows, as in (<ref>),
\begin{equation} \label{eq:Hypolar}
H_y = \int_{Q_+}\!\int_S\,H_y(s\hspace{0.02cm},a)\hspace{0.02cm}p_{\scriptscriptstyle T}(s\hspace{0.02cm},a)\hspace{0.02cm}D(a)\hspace{0.03cm}da\hspace{0.02cm}\omega(ds)
\end{equation}
where $p_{\scriptscriptstyle T}$ denotes the density of $\pi_{\scriptscriptstyle T}$ with respect to the Riemannian volume.
To prove that $H_y$ converges absolutely, it is enough to prove the integrand in (<ref>) is uniformly bounded. However, the density $p_{\scriptscriptstyle T}$ is bounded, since it is continuous and $M$ is compact. Moreover, it is clear from (<ref>) and (<ref>) that
\begin{equation} \label{eq:HypolarH}
H_y(s\hspace{0.02cm},a) = \Pi^s_\mathfrak{a} \,+\, \sum_{\lambda \in \Delta_+} \lambda(a)\cot\lambda(a)\;\Pi^s_{\lambda}
\end{equation}
\begin{equation} \label{eq:HypolarD}
D(a) = \prod_{\lambda \in \Delta_+} \left| \sin\hspace{0.02cm}\lambda(a)\right|^{ m_\lambda} \hspace{2.4cm}
\end{equation}
The product of these two expressions is uniformly bounded, because $\lambda(a) \in (0,\pi)$ on $Q_+$.
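One way to make this uniform bound explicit: since each multiplicity $m_\lambda \geq 1$ and $\sin\lambda(a) \in (0,1]$ on $Q_+\hspace{0.02cm}$, each term in the product is controlled by the elementary bound

```latex
\left|\lambda(a)\cot\lambda(a)\right|\hspace{0.03cm}\sin^{m_\lambda}\!\lambda(a)
\,=\, \lambda(a)\left|\cos\lambda(a)\right|\hspace{0.03cm}\sin^{m_\lambda - 1}\!\lambda(a)
\,\leq\, \lambda(a) \,\leq\, \pi
\qquad \text{for } \lambda(a) \in (0,\pi)
```

so the product $H_y(s\hspace{0.02cm},a)\hspace{0.02cm}D(a)$ stays bounded uniformly on $S \times Q_+$.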
Thus, the integrals $G_y$ and $H_y$ converge, and it is clear from the above that this is true for any temperature $T > 0$. The fact that both $G_y$ and $H_y$ depend continuously on $y$ will be clear from the arguments in the proof of (ii).
§.§.§ Proof of (ii)
The proof relies in a crucial way on Lemma <ref>, which is proved in <ref>, below. To compute the gradient and Hessian of $\mathcal{E}_{\scriptscriptstyle T}$ at $y \in M$, consider any geodesic $\gamma:I\rightarrow M$, defined on a compact interval $I = [-\tau,\tau]$, with $\gamma(0) = y$. For each $t \in I$, it is immediate from (<ref>) that
\begin{equation} \label{eq:proofderivatives1}
\mathcal{E}_{\scriptscriptstyle T}(\gamma(t)) = \int_M\,f_x(\gamma(t))\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx)
\end{equation}
However, Lemma <ref> states that the set
\mathrm{Cut}(\gamma) = \bigcup_{t \in I}\,\mathrm{Cut}(\gamma(t))
has Riemannian volume equal to zero. Thus, since $\pi_{\scriptscriptstyle T}$ is absolutely continuous with respect to the Riemannian volume, $\mathrm{Cut}(\gamma)$ can be removed from the domain of integration in (<ref>), to obtain
\begin{equation} \label{eq:proofderivatives2}
\mathcal{E}_{\scriptscriptstyle T}(\gamma(t)) =
\int_{\mathrm{D}(\gamma)}f_x(\gamma(t))\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx) \hspace{0.5cm} \text{for all $t \in I$}
\end{equation}
where $\mathrm{D}(\gamma) = M - \mathrm{Cut}(\gamma)$. Now, if $x \in \mathrm{D}(\gamma)$, then $x \notin \mathrm{Cut}(\gamma(t))$ for any $t \in I$. According to <ref>, this implies that $f_x(\gamma(t))$ is $C^2$ near each $t \in I$. In other words, $f_x(t) = f_x(\gamma(t))$ is $C^2$ inside the interval $I$. Then, the first and second derivatives of this function are given by
\begin{equation} \label{eq:fprimeprimef}
f^\prime_x(t) = \left\langle G_{\gamma(t)}(x)\hspace{0.02cm},\dot{\gamma}\right\rangle_{\gamma(t)} \hspace{0.3cm};\hspace{0.3cm} f^{\prime\prime}_x(t) = \left\langle H_{\gamma(t)}(x)\cdot \dot{\gamma}\hspace{0.02cm},\dot{\gamma}\right\rangle_{\gamma(t)}
\end{equation}
as in Proposition <ref>. Formally, (<ref>) follows by differentiating under the integral sign in (<ref>), replacing from (<ref>), and then putting $t = 0$. This differentiation under the integral sign is justified, as soon as it is shown that the families of functions,
\left\lbrace x \mapsto G_{\gamma(t)}(x)\,;t\in I\right\rbrace \hspace{0.3cm};\hspace{0.3cm}
\left\lbrace x \mapsto H_{\gamma(t)}(x)\,;t\in I\right\rbrace
which all have common domain of definition $\mathrm{D}(\gamma)$, are uniformly integrable with respect to the probability distribution $\pi_{\scriptscriptstyle T}$ (precisely, with respect to the restriction of $\pi_{\scriptscriptstyle T}$ to $\mathrm{D}(\gamma)$).
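For intuition, the derivative formulae can be checked in the flat case. The sketch below assumes the Euclidean normalisation $f_x(y) = \frac{1}{2}d^{\hspace{0.03cm}2}(x,y)$ (consistent with $\Vert G_y(x)\Vert = d(y,x)$), in which $G_y(x) = y - x$ and the Hessian is the identity; the point, base point and direction are arbitrary illustrative choices.

```python
# Euclidean sketch of the first and second derivative formulae: along the
# straight-line "geodesic" gamma(t) = y + t*v in R^2, with
# f_x(y) = (1/2)*d^2(x, y), one has G_y(x) = y - x and Hessian = Id.

x = (2.0, -1.0)          # fixed point
y = (0.5, 0.3)           # gamma(0)
v = (0.6, 0.8)           # unit-speed direction (0.36 + 0.64 = 1)

def f(t):
    p = (y[0] + t * v[0], y[1] + t * v[1])
    return 0.5 * ((p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2)

h = 1e-5
num_first = (f(h) - f(-h)) / (2 * h)            # central difference
num_second = (f(h) - 2 * f(0) + f(-h)) / h ** 2

G = (y[0] - x[0], y[1] - x[1])                  # G_y(x) = y - x
analytic_first = G[0] * v[0] + G[1] * v[1]      # <G_y(x), v>
analytic_second = v[0] ** 2 + v[1] ** 2         # <Id . v, v> = |v|^2

print(num_first, analytic_first)
print(num_second, analytic_second)
```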
Roughly (for the exact definition, see [16]), uniform integrability means that the rate of absolute convergence of the following integrals
\begin{equation} \label{eq:GHt}
G_{\gamma(t)} = \int_{\mathrm{D}(\gamma)}G_{\gamma(t)}(x)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx) \hspace{0.2cm};\hspace{0.2cm}
H_{\gamma(t)} = \int_{\mathrm{D}(\gamma)}H_{\gamma(t)}(x)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx)
\end{equation}
does not depend on $t \in I$. This is clear for the integrals $G_{\gamma(t)}$ because, as in the proof of (i),
\Vert G_{\gamma(t)}(x)\Vert_{\gamma(t)} = d(\gamma(t)\hspace{0.02cm},x)
and this is bounded by $\mathrm{diam}\,M$, independently of $x$ and $t$.
Then, consider the integral $H_{\gamma(0)} = H_y\hspace{0.03cm}$, and recall Formulae (<ref>)–(<ref>). For simplicity, assume that $M$ is an irreducible symmetric space (see Chapter VIII of [10], Page 307). In this case, according to <ref>, there exists a maximal root $c \in \Delta_+\hspace{0.03cm}$, so that $c(a) \geq \lambda(a)$ for all $\lambda \in \Delta_+$ and $a \in Q_+\hspace{0.02cm}$. Therefore, it follows from (<ref>) that
\begin{equation} \label{eq:proofui1}
\Vert H_y(x)\Vert_{\scriptscriptstyle F} \leq (\dim\,M)^{\!\frac{1}{2}}\hspace{0.03cm}\max\lbrace 1,|c(a)\cot c(a)|\rbrace
\end{equation}
where $\Vert\cdot\Vert_{\scriptscriptstyle F}$ denotes the Frobenius norm given by the Riemannian metric of $M$. Now, the required uniform integrability is equivalent to the statement that
\begin{equation} \label{eq:uicondition}
\lim_{K\rightarrow \infty}\,\int_{\mathrm{D}(\gamma)}\, \Vert H_y(x)\Vert_{\scriptscriptstyle F}\hspace{0.03cm} \mathbf{1}\lbrace \Vert H_y(x)\Vert_{\scriptscriptstyle F} > K\rbrace\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx) = 0 \;\;\text{uniformly in $y$}
\end{equation}
But from the inequality in (<ref>), if $K > 1$ there exists $\epsilon > 0$ such that
\lbrace \Vert H_y(x)\Vert_{\scriptscriptstyle F} > K\rbrace \subset \lbrace c(a) > \pi - \epsilon\rbrace
and $\epsilon \rightarrow 0$ as $K \rightarrow \infty$. In this case, the integral in (<ref>) is less than
\begin{equation} \label{eq:proofui2}
(\dim\,M)^{\!\frac{1}{2}}\left( \sup\hspace{0.02cm} p_{\scriptscriptstyle T}(x)\right)\int_{\mathrm{D}(\gamma)}\,
|c(a)\cot c(a)|\hspace{0.03cm}\mathbf{1}\lbrace c(a) > \pi - \epsilon\rbrace\hspace{0.03cm}\mathrm{vol}(dx)
\end{equation}
By expressing this integral as in (<ref>), it is seen to be equal to
\begin{array}{l}
\int_{Q_+}\!\int_S\,|c(a)\cot c(a)|\hspace{0.03cm}\mathbf{1}\lbrace c(a) > \pi - \epsilon\rbrace\hspace{0.02cm}D(a)\hspace{0.03cm}da\hspace{0.02cm}\omega(ds) \,= \\[0.2cm]
\omega(S)\,\int_{Q_+}\left[ |c(a)\cot c(a)|\hspace{0.02cm}D(a)\right]\mathbf{1}\lbrace c(a) > \pi - \epsilon\rbrace\hspace{0.02cm}da
\end{array}
In view of (<ref>), since $c \in \Delta_+\hspace{0.02cm}$, the function in square brackets is bounded on the closure of $Q_+$ by $c^{\hspace{0.02cm}\scriptscriptstyle 2} = \Vert c \Vert^2$ (incidentally, this is the maximum sectional curvature of $M$, as explained in <ref>). Finally, by (<ref>), the integral in (<ref>) is less than
\left( \sup\hspace{0.02cm} p_{\scriptscriptstyle T}(x)\right)
\omega(S)\hspace{0.02cm}c^{\hspace{0.02cm} \scriptscriptstyle 2}\,
\int_{Q_+}\mathbf{1}\lbrace c(a) > \pi - \epsilon\rbrace\hspace{0.02cm}da
Recall that $c(a) \in (0,\pi)$ for $a \in Q_+\hspace{0.03cm}$. It is then clear that this last integral converges to $0$ as $\epsilon \rightarrow 0$, at a rate which does not depend on $y$. This proves the required uniform integrability, so the proof is now complete, at least in the case where $M$ is an irreducible symmetric space.
In the general case, where $M$ is not irreducible, it is enough to note that, according to <ref>, $M$ is a product of irreducible Riemannian symmetric spaces, $M = M_{\scriptscriptstyle 1}\times\ldots\times M_{ s\hspace{0.03cm}}$. Then, the proof boils down to the special case where $M$ is irreducible, as treated above.
§.§ Proof of Lemma <ref>
The proof uses the following general remark.
Remark : let $M$ be a Riemannian manifold, and $g:M\rightarrow M$ an isometry. Recall that $g\cdot y$ is used to denote $g(y)$, for $y \in M$. Similarly, if $A \subset M$, let $g\cdot A$ denote the image of $A$ under $g$. Then, for any $y \in M$, $\mathrm{Cut}(g\cdot y) = g\cdot \mathrm{Cut}(y)$.
This is because a point $x \in M$ belongs to $\mathrm{Cut}(y)$, if and only if $x$ is a first conjugate point to $y$ along some geodesic, or there exist two different length-minimising geodesics connecting $y$ to $x$, and because both of these properties are preserved by any isometry $g$.
Assume $M$ is a simply connected compact Riemannian symmetric space. In the notation of <ref>, $M \simeq G/K$. Recall (by Proposition <ref> of <ref>) that any geodesic $\gamma:I\rightarrow M$ is given by
\begin{equation} \label{eq:prooflemgamma}
\gamma(t) = \exp(t\hspace{0.02cm}\omega)\cdot y
\end{equation}
for some $y \in M$ and $\omega \in \mathfrak{p}$, where $\exp$ denotes the Lie group exponential. From the above remark, for each $t \in I$, the cut locus $\mathrm{Cut}(\gamma(t))$ of $\gamma(t)$ is given by
\begin{equation} \label{eq:cutgammat}
\mathrm{Cut}(\gamma(t)) = \exp(t\hspace{0.02cm}\omega)\cdot\mathrm{Cut}(y)
\end{equation}
However, $\mathrm{Cut}(y)$ is described by (<ref>) in <ref>, which reads
\begin{equation} \label{eq:cutysccss1}
\mathrm{Cut}(y) = \varphi(S\times \bar{Q}_{\pi})
\end{equation}
in terms of the mapping $\varphi$ defined in (<ref>). It follows from (<ref>) and (<ref>) that
\begin{equation} \label{eq:CUTgamma}
\mathrm{Cut}(\gamma) = \Phi(I\times S\times \bar{Q}_{\pi}) \hspace{0.5cm} \text{where}\hspace{0.3cm} \Phi(t,s,a) = \exp(t\hspace{0.02cm}\omega)\cdot\varphi(s\hspace{0.02cm},a)
\end{equation}
The aim is to show that this set has Hausdorff dimension strictly less than $\dim\,M$. This is done using results from dimension theory [32]. Precisely, note from (<ref>) that
\bar{Q}_{\pi} = \cup_{\ell}\, \bar{Q}_{\ell} \hspace{0.4cm} \text{where } \bar{Q}_{\ell} = \bar{Q}_+\,\cap\,\lbrace a:c_{\scriptscriptstyle \ell}(a) = \pi\rbrace
Therefore, it is clear that
\begin{equation} \label{eq:CUTgammaunion}
\mathrm{Cut}(\gamma) = \bigcup_{\ell} \Phi(I\times S\times \bar{Q}_{\ell})
\end{equation}
Then, it follows from [32] (Item (2) of Theorem 2) that
\begin{equation} \label{eq:CUTgammauniondim}
\mathrm{dim}_{\scriptscriptstyle H}\,\mathrm{Cut}(\gamma) \leq \max_{\ell}\,\mathrm{dim}_{\scriptscriptstyle H}\,\Phi(I\times S\times \bar{Q}_{\ell})
\end{equation}
where $\mathrm{dim}_{\scriptscriptstyle H}$ is used to denote the Hausdorff dimension. Now, for each $\ell$,
\Phi(I\times S\times \bar{Q}_{\ell}) = \Phi(I\times S_{\ell}\times \bar{Q}_{\ell}) \subset
\Phi(\mathbb{R} \times S_{\ell} \times \lbrace c_{\scriptscriptstyle \ell}(a) = \pi\rbrace)
where $S_{\ell} = K/K_{\ell}$ with $K_{\ell}$ the centraliser of $\lbrace c_{\scriptscriptstyle \ell}(a) = \pi\rbrace$ in $K$. This last inclusion implies [32] (Item (1) of Theorem 2)
\begin{equation} \label{eq:diminclusion}
\mathrm{dim}_{\scriptscriptstyle H}\,\Phi(I\times S\times \bar{Q}_{\ell}) \leq \mathrm{dim}_{\scriptscriptstyle H}\,
\Phi(\mathbb{R} \times S_{\ell} \times \lbrace c_{\scriptscriptstyle \ell}(a) = \pi\rbrace)
\end{equation}
To conclude, note that the set $\mathbb{R} \times S_{\ell} \times \lbrace c_{\scriptscriptstyle \ell}(a) = \pi\rbrace$ is a differentiable manifold. Let this differentiable manifold be equipped with a product Riemannian metric (arising from flat metrics on $\mathbb{R}$ and $\lbrace c_{\scriptscriptstyle \ell}(a) = \pi\rbrace$, and from the invariant metric induced onto $S_{\ell}$ from $K$). It is clear from (<ref>) that $\Phi$ is smooth, and therefore locally Lipschitz. Then [32] (Item (5) of Theorem 2),
\begin{equation} \label{eq:dimlipsch}
\mathrm{dim}_{\scriptscriptstyle H}\,
\Phi(\mathbb{R} \times S_{\ell} \times \lbrace c_{\scriptscriptstyle \ell}(a) = \pi\rbrace) \leq
\mathrm{dim}_{\scriptscriptstyle H}\,
(\mathbb{R} \times S_{\ell} \times \lbrace c_{\scriptscriptstyle \ell}(a) = \pi\rbrace)
\end{equation}
But the Hausdorff dimension of a Riemannian manifold is the same as its (usual) dimension. Accordingly,
\mathrm{dim}_{\scriptscriptstyle H}\,
(\mathbb{R} \times S_{\ell} \times \lbrace c_{\scriptscriptstyle \ell}(a) = \pi\rbrace)
= 1 + \dim\,S_{\ell} + (\dim\,\mathfrak{a} - 1)
since the dimension of a hyperplane in $\mathfrak{a}$ is $\dim\,\mathfrak{a} - 1$. In addition, from [10] (Page 253), $\dim\,S_{\ell} < \dim\,S$. Therefore,
\mathrm{dim}_{\scriptscriptstyle H}\,
(\mathbb{R} \times S_{\ell} \times \lbrace c_{\scriptscriptstyle \ell}(a) = \pi\rbrace) = \dim\,S_{\ell} + \dim\,\mathfrak{a} < \dim\,M
since $\dim\,M = \dim\,S + \dim\,\mathfrak{a}$, as can be seen from (<ref>). Replacing this into (<ref>), it follows from (<ref>) and (<ref>) that $\mathrm{dim}_{\scriptscriptstyle H}\,\mathrm{Cut}(\gamma) < \dim\,M$. Since a set whose Hausdorff dimension is strictly less than $\dim\,M$ has Riemannian volume equal to zero, the lemma has therefore been proved.
§.§ Proof of Proposition <ref>
Assume that Lemma <ref> is true. This lemma is proved in <ref>, below.
§.§.§ Proof of (i)
For $\delta < \frac{1}{2}r_{\scriptscriptstyle cx\hspace{0.03cm}}$, let $T_{\scriptscriptstyle \delta}$ be given by (<ref>). By (ii) of Lemma <ref>, $T < T_{\scriptscriptstyle \delta}$ implies the variance function $\mathcal{E}_{\scriptscriptstyle T}(y)$ is strongly convex on $B(x^*\!,\delta)$. It will be proved that any Riemannian barycentre $\hat{x}_{\scriptscriptstyle T}$ of $\pi_{\scriptscriptstyle T}$ belongs to $B(x^*\!,\delta)$. Then, since $\hat{x}_{\scriptscriptstyle T}$ is a minimum of $\mathcal{E}_{\scriptscriptstyle T}(y)$ in $B(x^*\!,\delta)$, it follows that $\hat{x}_{\scriptscriptstyle T}$ is unique (thanks to the strong convexity of $\mathcal{E}_{\scriptscriptstyle T}(y)$).
By (i) of Proposition <ref>, to prove that any $\hat{x}_{\scriptscriptstyle T}$ belongs to $B(x^*\!,\delta)$, it is enough to prove
\begin{equation} \label{eq:bcproofunique1}
W(\pi_{\scriptscriptstyle T\hspace{0.02cm}},\delta_{x^*}) < \frac{\delta^2}{4\hspace{0.02cm}\mathrm{diam}\,M}
\end{equation}
However, if $T < T_{\scriptscriptstyle \delta}$ then $T < T_{\scriptscriptstyle W}$ and, by (ii) of Proposition <ref>,
$W(\pi_{\scriptscriptstyle T\hspace{0.02cm}},\delta_{x^*})$ satisfies inequality (<ref>). In addition (from the definition of $T^1_{\scriptscriptstyle \delta}$ and $T^2_{\scriptscriptstyle \delta}$) one has $T < T^1_{\scriptscriptstyle \delta}$ and
(2\pi)^{\!\frac{1}{2}}\hspace{0.02cm}(T/\mu_{\min})^{\!\frac{1}{2}} < \delta^2\hspace{0.03cm}(\mu_{\min}/\mu_{\max})^{\!\frac{n}{2}}\hspace{0.03cm}D_n
By replacing the expression of $D_n$ and simplifying, this is the same as
\begin{equation} \label{eq:bcproofunique2}
(2\pi)^{\!\frac{1}{2}}\hspace{0.02cm}(T/\mu_{\min})^{\!\frac{1}{2}}\hspace{0.03cm}(\mu_{\max}/\mu_{\min})^{\!\frac{n}{2}}
< \frac{\delta^2}{4\hspace{0.02cm}\mathrm{diam}\,M}
\end{equation}
Now, (<ref>) follows from (<ref>) and (<ref>).
§.§.§ Proof of (ii)
From the proof of (i), $\mathcal{E}_{\scriptscriptstyle T}(y)$ is strongly convex on $B(x^*\!,\delta)$, and $\hat{x}_{\scriptscriptstyle T}$ is the minimum of
$\mathcal{E}_{\scriptscriptstyle T}(y)$ in $B(x^*\!,\delta)$. To prove that $\hat{x}_{\scriptscriptstyle T} = x^*$, it is then enough to prove that $x^*$ is a stationary point of $\mathcal{E}_{\scriptscriptstyle T}(y)$. However, the fact that $U$ is invariant by the geodesic symmetry $s_{x^*}$ will be seen to imply
\begin{equation} \label{eq:bcproofsx}
d\hspace{0.02cm}s_{x^*}\cdot G_{x^*} = G_{x^*}
\end{equation}
which is equivalent to $G_{x^*} = 0$, since $d\hspace{0.02cm}s_{x^*}$ is equal to minus the identity on $T_{x^*}M$ (see <ref>). Then, by (<ref>) in Proposition <ref>, $\mathrm{grad}\,\mathcal{E}_{\scriptscriptstyle T}(x^*) = 0$, so $x^*$ is indeed a stationary point of $\mathcal{E}_{\scriptscriptstyle T}(y)$.
To obtain (<ref>), it is possible to write from (<ref>),
\begin{equation} \label{eq:bcproofsx1}
d\hspace{0.02cm}s_{x^*}\cdot G_{x^*} = d\hspace{0.02cm}s_{x^*}\cdot \int_{\mathrm{D}(x^*)}\,G_{x^*}(x)
\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx)
\end{equation}
But, it follows from (<ref>) and (<ref>) that $x = \mathrm{Exp}_{x^*}(-G_{x^*}(x))$. Then, since $s_{x^*}$ reverses geodesics through $x^*$,
d\hspace{0.02cm}s_{x^*}\cdot G_{x^*}(x) = G_{x^*}(s_{x^*}\cdot x)
Replacing this into (<ref>), and introducing the new variable of integration $z = s_{x^*}\cdot x$[$\pi_{\scriptscriptstyle T}\circ s_{x^*}$ is the image of the distribution $\pi_{\scriptscriptstyle T}$ under the map $s_{x^*}:M\rightarrow M$. Elsewhere in this thesis, this would be noted $(s_{x^*})^*\pi_{\scriptscriptstyle T\hspace{0.03cm}}$, but that notation would be rather cumbersome in the present case.],
\begin{equation} \label{eq:bcproofsx2}
d\hspace{0.02cm}s_{x^*}\cdot G_{x^*} = \int_{\mathrm{D}(x^*)}\,G_{x^*}(z)\hspace{0.02cm}(\pi_{\scriptscriptstyle T}\circ s_{x^*})(dz)
\end{equation}
since $s^{\scriptscriptstyle -1}_{x^*} = s_{x^*}$ and $s_{x^*}$ maps $\mathrm{D}(x^*)$ onto itself. Now, note that $\pi_{\scriptscriptstyle T}\circ s_{x^*} = \pi_{\scriptscriptstyle T\hspace{0.03cm}}$. This is clear, since from (<ref>),
(\pi_{\scriptscriptstyle T}\circ s_{x^*})(dz) = \left(Z(T)\right)^{-1}\exp\left[-\frac{(U\circ s_{x^*})(z)}{T}\right](\mathrm{vol}\circ s_{x^*})(dz)
However, by assumption, $(U\circ s_{x^*})(z) = U(z)$, since $U$ is invariant by geodesic symmetry about $x^*$. Moreover, because $s_{x^*}$ is an isometry, it must preserve Riemannian volume, so
$(\mathrm{vol}\circ s_{x^*})(dz) = \mathrm{vol}(dz)$. Thus, (<ref>) reads
d\hspace{0.02cm}s_{x^*}\cdot G_{x^*} =
\int_{\mathrm{D}(x^*)}\,G_{x^*}(z)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dz)
From (<ref>), the right-hand side is $G_{x^*\hspace{0.03cm}}$, so (<ref>) is obtained. From (<ref>), since $d\hspace{0.02cm}s_{x^*} = -\mathrm{Id}_{x^*}\hspace{0.03cm}$, and $G_{x^*}$ belongs to $T_{x^*}M$,
G_{x^*} = -\,G_{x^*}
Of course, this means $G_{x^*} = 0$, as required.
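The symmetry argument admits a toy numerical analogue on the circle (an illustrative setting, not the one treated above): for a discrete distribution invariant under the symmetry $\theta \mapsto -\theta$ about $x^* = 0$, the derivative of the variance function vanishes at $x^* = 0$.

```python
import math

# Toy check on the circle S^1 with arc-length distance: a distribution
# invariant under the geodesic symmetry about x* = 0 (theta -> -theta)
# has vanishing gradient of the variance function at x* = 0.
# Support points and weights are illustrative choices.

def d(a, b):
    """Arc-length distance on the circle (angles in radians)."""
    diff = abs(a - b) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)

support = [-1.0, -0.4, 0.4, 1.0]   # symmetric about 0
weights = [0.2, 0.3, 0.3, 0.2]     # symmetric weights

def variance(y):
    return sum(w * d(t, y) ** 2 for w, t in zip(weights, support))

h = 1e-6
grad_at_0 = (variance(h) - variance(-h)) / (2 * h)
print(grad_at_0)  # vanishes, by symmetry
```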
§.§ Proof of Lemma <ref>
§.§.§ Proof of (i)
Let $y \in B(x^*\!,\delta)$ where $\delta < \frac{1}{2}r_{\scriptscriptstyle cx\hspace{0.03cm}}$. Now, recall that $\mathrm{Hess}\,\mathcal{E}_{\scriptscriptstyle T}(y) = H_y$ for all $y \in B(x^*\!,\delta)$. Then, from (<ref>), it is possible to write
\begin{equation} \label{eq:proofetconv1}
H_y \,= \int_{B(y\hspace{0.02cm},r_{\scriptscriptstyle cx})}\,H_y(x)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx) +
\int_{\mathrm{D}(y)-B(y\hspace{0.02cm},r_{\scriptscriptstyle cx})}\,H_y(x)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx)
\end{equation}
Indeed, $B(y\hspace{0.02cm},r_{\scriptscriptstyle cx}) \subset \mathrm{D}(y)$, since the injectivity radius of $M$ is $2r_{\scriptscriptstyle cx}$ as given in <ref>. The first integral in (<ref>) will be denoted $I_{\scriptscriptstyle 1}$ and the second integral $I_{\scriptscriptstyle 2\hspace{0.03cm}}$.
With regard to $I_{\scriptscriptstyle 1\hspace{0.03cm}}$, note the inclusions $B(x^*\!,\delta) \subset B(y\hspace{0.02cm},2\delta) \subset B(y\hspace{0.02cm},r_{\scriptscriptstyle cx})$, which follow from the triangle inequality. In addition, note that $H_y(x) \geq 0$ for $x \in B(y\hspace{0.02cm},r_{\scriptscriptstyle cx})$. Therefore,
\begin{equation} \label{eq:proofetconv2}
I_{\scriptscriptstyle 1} \geq \int_{B(x^*\!,\delta)}\,H_y(x)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx)
\end{equation}
where $H_y(x)$ is given by (<ref>). But, from (<ref>), the eigenvalues of $H_y(x)$ are
\begin{equation} \label{eq:etconveigen}
\lambda(a)\cot\lambda(a) \geq \min_{\ell}\,c_{\scriptscriptstyle \ell}(a)\cot c_{\scriptscriptstyle \ell}(a)
\end{equation}
where the maximal roots $c_{\ell}$ were introduced before (<ref>). By the Cauchy-Schwarz inequality, $c_{\scriptscriptstyle \ell}(a) \leq \Vert c_{\scriptscriptstyle \ell}\Vert\hspace{0.02cm}\Vert a \Vert \leq c \Vert a\Vert$, where $c^{\hspace{0.02cm}\scriptscriptstyle 2}$ denotes the maximum sectional curvature of $M$, whose expression was recalled in <ref>. Now, if $x \in B(y\hspace{0.02cm},2\delta)$, then $\Vert a \Vert = d(y\hspace{0.02cm},x) < 2\delta$, and it follows from (<ref>) that
\begin{equation} \label{eq:proofetconv3}
H_y(x) \geq \min_{\ell}\,c_{\scriptscriptstyle \ell}(a)\cot c_{\scriptscriptstyle \ell}(a) \geq
2c\delta\hspace{0.02cm}\cot(2c\delta) = \mathrm{Ct}(2\delta) > 0
\end{equation}
where the last inequality is because $2c\delta < \frac{\pi}{2}$. Replacing in (<ref>) gives
I_{\scriptscriptstyle 1} \geq \mathrm{Ct}(2\delta)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(B(x^*\!,\delta)) =
\mathrm{Ct}(2\delta)[1 - \pi_{\scriptscriptstyle T}(B^{\scriptscriptstyle c}(x^*\!,\delta))]
Finally, (<ref>) and (<ref>) imply that $\pi_{\scriptscriptstyle T}(B^{\scriptscriptstyle c}(x^*\!,\delta)) \leq \mathrm{vol}(M)\hspace{0.02cm} f(T)$, where $f(T)$ was defined in (<ref>) (precisely, this follows after replacing $\rho$ by $\delta$ in (<ref>)). Hence,
\begin{equation} \label{eq:proofetconv4}
I_{\scriptscriptstyle 1} \geq \mathrm{Ct}(2\delta)[1 -\mathrm{vol}(M)\hspace{0.02cm} f(T)]
\end{equation}
The proof of (<ref>) will be completed by showing
\begin{equation} \label{eq:AMM}
I_{\scriptscriptstyle 2} \geq - \pi A_{\scriptscriptstyle M}\hspace{0.03cm}f(T)
\end{equation}
To do so, introduce the function
\begin{equation} \label{eq:funka}
k(a) = \min_{\ell}\,c_{\scriptscriptstyle \ell}(a)\cot c_{\scriptscriptstyle \ell}(a) \hspace{0.5cm} \text{for $a \in Q_+$}
\end{equation}
and note using (<ref>) that
\begin{equation} \label{eq:AMM1}
I_{\scriptscriptstyle 2} \geq \int_{\mathrm{D}(y) - B(y\hspace{0.02cm},r_{\scriptscriptstyle cx})}\,k(a)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx) \geq
\int_{\mathrm{D}(y)}\,\mathbf{1}\lbrace k(a) \leq 0\rbrace k(a)\hspace{0.03cm}\pi_{\scriptscriptstyle T}(dx)
\end{equation}
Indeed, the set of $a$ such that $k(a) \leq 0$ is contained in $\mathrm{D}(y) - B(y\hspace{0.02cm},r_{\scriptscriptstyle cx})$, because
\begin{equation} \label{eq:AMM2}
\lbrace k(a) \leq 0\rbrace = \cup_{\ell}\hspace{0.03cm} \lbrace c_{\scriptscriptstyle \ell}(a)\cot c_{\scriptscriptstyle \ell}(a) \leq 0 \rbrace =
\cup_{\ell}\hspace{0.03cm} \lbrace c_{\scriptscriptstyle \ell}(a) \geq \pi/2 \rbrace
\end{equation}
and $c_{\scriptscriptstyle \ell}(a) \geq \pi/2$ implies $d(y\hspace{0.02cm},x) = \Vert a \Vert \geq \frac{\pi}{2}\hspace{0.03cm}c^{\scriptscriptstyle -1} = r_{\scriptscriptstyle cx}$ (by Cauchy-Schwarz). By expressing the last integral in (<ref>) as in (<ref>), it is seen to be equal to
\begin{array}{l}
\int_{Q_+}\!\int_S\,
\mathbf{1}\lbrace k(a) \leq 0\rbrace k(a)\hspace{0.02cm}
p_{\scriptscriptstyle T}(s\hspace{0.02cm},a)\hspace{0.02cm}D(a)\hspace{0.03cm}da\hspace{0.02cm}\omega(ds) \geq \\[0.3cm]
\int_{Q_+}\!\int_S\,
\mathbf{1}\lbrace k(a) \leq 0\rbrace\hspace{0.02cm}
p_{\scriptscriptstyle T}(s\hspace{0.02cm},a)\hspace{0.03cm}da\hspace{0.02cm}\omega(ds)
\end{array}
Indeed, it follows from (<ref>) and (<ref>) that $k(a)\hspace{0.02cm}D(a) \geq \min_{\ell} -c_{\scriptscriptstyle \ell}(a)$, and this is greater than $-\pi$ because $c_{\scriptscriptstyle \ell}(a) \in (0,\pi)$ for $a \in Q_{+}\hspace{0.02cm}$. Now, (<ref>) implies
\begin{equation} \label{eq:AMM3}
I_{\scriptscriptstyle 2} \geq -\pi\,
\int_{Q_+}\!\int_S\,
\mathbf{1}\lbrace k(a) \leq 0\rbrace\hspace{0.02cm}
p_{\scriptscriptstyle T}(s\hspace{0.02cm},a)\hspace{0.03cm}da\hspace{0.02cm}\omega(ds)
\end{equation}
As seen from (<ref>), the set $\lbrace k(a) \leq 0\rbrace \subset B^{\scriptscriptstyle c}(y\hspace{0.02cm},r_{\scriptscriptstyle cx}) \subset B^{\scriptscriptstyle c}(x^*\!,\delta)$. On the other hand, $p_{\scriptscriptstyle T}(x) \leq f(T)$ for $x \in B^{\scriptscriptstyle c}(x^*\!,\delta)$. Replacing in (<ref>),
\begin{equation} \label{eq:AMM4}
I_{\scriptscriptstyle 2} \geq -\pi f(T)\,\int_{Q_+}\!\int_S\,da\hspace{0.02cm}\omega(ds)
\end{equation}
The double integral on the right-hand side is a positive constant which depends only on the symmetric space $M$. Denoting this by $A_{\scriptscriptstyle M}$ yields (<ref>).
§.§.§ Proof of (ii)
Let $\delta < \frac{1}{2}r_{\scriptscriptstyle cx\hspace{0.03cm}}$. According to (<ref>), which has just been proved,
\begin{equation} \label{eq:chap2fin1}
\mathrm{Hess}\,\mathcal{E}_{\scriptscriptstyle T}(y) \,\geq\, \mathrm{Ct}(2\delta)\hspace{0.03cm}[1 - \mathrm{vol}(M)f(T)] - \pi A_{\scriptscriptstyle M}\hspace{0.03cm}f(T)
\end{equation}
for all $y \in B(x^*\!,\delta)$. Now, let $T_{\scriptscriptstyle \delta}$ be given by (<ref>). It follows from the definition of $T^2_{\scriptscriptstyle \delta}$ that $T < T_{\scriptscriptstyle \delta}$ implies
\begin{equation} \label{eq:chap2fin2}
f(T) < \frac{\mathrm{Ct}(2\delta)}{\mathrm{Ct}(2\delta)\hspace{0.02cm}\mathrm{vol}(M) + \pi\hspace{0.02cm}A_{\scriptscriptstyle M}}
\end{equation}
This amounts to saying the right-hand side of (<ref>) is strictly positive. Since this is independent of $y$, it is clear that the variance function $\mathcal{E}_{\scriptscriptstyle T}(y)$ is indeed strongly convex on $B(x^*\!,\delta)$.
CHAPTER: GAUSSIAN DISTRIBUTIONS AND RMT
Gaussian distributions on Riemannian symmetric spaces were introduced in [36]. The present chapter expands on this work in several ways. In particular, it uncovers and exploits the connection between Gaussian distributions on Riemannian symmetric spaces and random matrix theory (RMT).
* <ref> attempts to answer the seemingly easy question “what is a Gaussian distribution?”, by adopting a historical perspective (the source material used is from [37, 38, 39]).
* <ref> defines Gaussian distributions as a family of distributions on a Riemannian manifold, for which maximum-likelihood estimation is equivalent to the Riemannian barycentre problem.
* <ref> expresses the normalising factor of a Gaussian distribution on a Riemannian symmetric space, which belongs to the non-compact case, under the form of a multiple integral. It also discusses analytic and numerical evaluation of this multiple integral.
* <ref> proves existence and uniqueness of maximum-likelihood estimates for Gaussian distributions, defined on a Hadamard manifold which is also a homogeneous space. It also shows that these distributions maximise Shannon entropy, for given barycentre and dispersion.
* <ref> describes the Riemannian barycentre and the covariance tensor of a Gaussian distribution.
* <ref> draws on random matrix theory to obtain an analytic formula for the normalising factor of a Gaussian distribution, on the space $\mathrm{H}(N)$ of $N \times N$ Hermitian positive-definite matrices.
* <ref> and <ref> study large $N$ asymptotics of Gaussian distributions on $\mathrm{H}(N)$. In particular, <ref> provides an asymptotic expression of the normalising factor.
* <ref> introduces $\Theta$ distributions, a family of distributions on the unitary group $U(N)$ (which is the dual symmetric space of $\mathrm{H}(N)$), with a remarkable connection to Gaussian distributions on $\mathrm{H}(N)$.
§ FROM GAUSS TO SHANNON
The story of Gaussian distributions is a story of discovery and re-discovery. Different scientists, at different times, were repeatedly led to these distributions, through different routes.
In 1801, on New Year's day, Giuseppe Piazzi sighted a heavenly body (in fact, the asteroid Ceres), which he thought to be a new planet. Less than six weeks later, this “new planet” disappeared behind the sun. Using a method of least squares, Gauss predicted the area in the sky where it re-appeared, one year later. His justification of this method of least squares (cast in modern language) is that measurement errors follow a family of distributions which satisfies
property 1 : maximum-likelihood estimation is equivalent to the least-squares problem.
In an 1809 paper, he used this property to show that the distribution of measurement errors is (again, in modern language) a Gaussian distribution.
In 1810, Laplace studied the distribution of a quantity which is the aggregate of a great number of elementary observations. He was led, in this (completely different) way, to the same distribution discovered by Gauss. Laplace was among the first scientists to show
property 2 : the distribution of the sum of a large number of elementary observations is (asymptotically) a Gaussian distribution.
Around 1860, Maxwell rediscovered Gaussian distributions, through his investigation of the velocity distribution of particles in an ideal gas (which he viewed as freely colliding, perfectly elastic spheres). Essentially, he showed that
property 3 : the distribution of a rotationally-invariant random vector, which has independent components, is a Gaussian distribution.[A deeper version of Maxwell's idea was obtained by Poincaré and Borel, around 1912, who showed that : if $v = (v_n\,;n=1,\ldots,N)$ is uniformly distributed, on an $(N-1)$-dimensional sphere of radius $N^{\frac{1}{2}}$, then the distribution of $v_{\scriptscriptstyle 1}$ is (asymptotically) a Gaussian distribution. This is Poincaré's model of the one-dimensional ideal gas, with $N$ particles.]
Kinetic theory led to another fascinating development, related to Gaussian distributions. Around 1905, Einstein (and, independently, Smoluchowski) showed that
property 4 : the distribution of the position of a particle, which is undergoing a Brownian motion, is a Gaussian distribution.
In addition to kinetic theory, alternative routes to Gaussian distributions have been found in quantum mechanics, information theory, and other fields.
In quantum mechanics, a Gaussian distribution is a position distribution with minimum uncertainty. That is, it achieves equality in Heisenberg's inequality (this is because a Gaussian function is proportional to its own Fourier transform). In information theory, one may attribute to Shannon the following maximum-entropy characterisation
property 5 : a probability distribution with maximum entropy, among all distributions with a given mean and variance, is a Gaussian distribution.
The above list of re-discoveries of Gaussian distributions, by means of different definitions, could be extended much further. However, the main point is the following. In Euclidean space, any one of the above five properties leads to the same famous expression of a Gaussian distribution,
P(dx|\bar{x},\sigma) = \left(2\pi\sigma^2\right)^{\!-\frac{n}{2}}\exp\left[ -\frac{\Vert x-\bar{x}\Vert^2}{2\sigma^2}\right]dx
In non-Euclidean space, each one of these properties may lead to a different kind of distribution, which may then be called a Gaussian distribution, but only from a more restricted point of view. People interested in Brownian motion may call the heat kernel of a Riemannian manifold a Gaussian distribution on that manifold. However, statisticians will not like this definition, since it will (in general) fail to have a straightforward connection to maximum-likelihood estimation.
Going from Euclidean to non-Euclidean spaces, the concept of Gaussian distribution breaks down into several different concepts. At best, one may define Gaussian distributions based on a practical motivation, which makes one (or more) of their classical properties seem more advantageous than the others.
§ THE “RIGHT" GAUSSIAN
From here on, the following definition of Gaussian distributions is adopted. Gaussian distributions, on a Riemannian manifold $M$, are a family of distributions $P(\bar{x}\hspace{0.03cm},\sigma)$, parameterised by $\bar{x} \in M$ and $\sigma > 0$, such that : a maximum-likelihood estimate $\hat{x}_{\scriptscriptstyle N}$ of $\bar{x}$, based on independent samples $(x_n\,;n=1,\ldots,N)$ from $P(\bar{x}\hspace{0.02cm},\sigma)$, is a solution of the least-squares problem
\text{minimise over $x \in M$} \hspace{0.3cm} \mathcal{E}_{\scriptscriptstyle N}(x) = \sum^N_{n=1}d^{\hspace{0.03cm}2}(x_n\hspace{0.02cm},x)
Of course, this is the same least-squares problem as (<ref>), so $\hat{x}_{\scriptscriptstyle N}$ is an empirical barycentre of the samples $(x_n)$. Therefore (as discussed in <ref>), $\hat{x}_{\scriptscriptstyle N}$ is almost-surely unique, if $P(\bar{x}\hspace{0.02cm},\sigma)$ has a probability density with respect to the Riemannian volume of $M$ (this will indeed be the case).
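In the Euclidean case, this defining property is elementary: the minimiser of $\mathcal{E}_{\scriptscriptstyle N}$ is the sample mean. A short sketch (with illustrative data and step size) recovers it by plain gradient descent on $\mathcal{E}_{\scriptscriptstyle N\hspace{0.02cm}}$.

```python
# Euclidean sanity check of the defining property: the minimiser of
# E_N(x) = sum_n d^2(x_n, x) in R (hence the maximum-likelihood estimate
# of the barycentre) is the sample mean. Data and step size are illustrative.

samples = [0.3, 1.7, 2.2, -0.5, 0.8]

def energy_grad(x):
    # d/dx of sum_n (x_n - x)^2 is -2 * sum_n (x_n - x)
    return -2.0 * sum(xn - x for xn in samples)

x = 0.0
for _ in range(200):
    x -= 0.05 * energy_grad(x)   # fixed small step size

mean = sum(samples) / len(samples)
print(x, mean)
```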
Now, consider the density profile
\begin{equation} \label{eq:gaussprofile}
f(x|\bar{x}\hspace{0.02cm},\sigma) \,=\, \exp\left[ -\frac{d^{\hspace{0.03cm}2}(x,\bar{x})}{2\sigma^2}\right]
\end{equation}
and the normalising factor,
\begin{equation} \label{eq:gaussnorm}
Z(\bar{x}\hspace{0.02cm},\sigma) = \int_M\, f(x|\bar{x}\hspace{0.02cm},\sigma)\,\mathrm{vol}(dx)
\end{equation}
If this is finite, then
\begin{equation} \label{eq:pregaussdensity}
P(dx|\bar{x},\sigma) \,=\, \left(Z(\bar{x}\hspace{0.02cm},\sigma)\right)^{-1}\hspace{0.03cm}f(x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.04cm}\mathrm{vol}(dx)
\end{equation}
is a well-defined probability distribution on $M$. In <ref>, below, it will be shown that $P(\bar{x}\hspace{0.02cm},\sigma)$, as defined by (<ref>), is indeed a Gaussian distribution, if $M$ is a Hadamard manifold, and also a homogeneous space. The following propositions will then be helpful.
Let $M$ be a Hadamard manifold, whose sectional curvatures lie in $[\kappa\hspace{0.03cm},0]$, where $\kappa = -c^{\hspace{0.02cm}\scriptscriptstyle 2}$. Then, for any $\bar{x} \in M$ and $\sigma > 0$, if $Z(\bar{x}\hspace{0.02cm},\sigma)$ is given by (<ref>),
\begin{equation} \label{eq:zhadamardk}
Z_{\scriptscriptstyle 0}(\sigma) \leq\, Z(\bar{x}\hspace{0.02cm},\sigma) \,\leq Z_{\scriptscriptstyle c}(\sigma)
\end{equation}
where $Z_{\scriptscriptstyle 0}(\sigma) = \left(2\pi\sigma^2\right)^{\!\frac{n}{2}}$ and $Z_{\scriptscriptstyle c}(\sigma)$ is positive and given by (recall $n$ is the dimension of $M$)
\begin{equation} \label{eq:zcsigma}
Z_{\scriptscriptstyle c}(\sigma) = \omega_{n-1}\hspace{0.02cm}\frac{\sigma}{(2c)^{n-1}}\hspace{0.03cm}\sum^{n-1}_{k=0}(-1)^k\hspace{0.03cm}\left(\!\!\begin{array}{c}n-1 \\ k \end{array}\!\!\right)\frac{\Phi\left((n-1-2k)\hspace{0.03cm}\sigma c\right)}{\mathstrut\Phi^\prime\left((n-1-2k)\hspace{0.03cm}\sigma c\right)}
\end{equation}
with $\omega_{n-1}$ the area of the unit sphere $S^{\hspace{0.02cm}n-1}$, and $\Phi$ the standard normal distribution function.
If $M$ is a Riemannian homogeneous space, and $Z(\bar{x}\hspace{0.02cm},\sigma)$ is given by (<ref>), then $Z(\bar{x}\hspace{0.02cm},\sigma)$ does not depend on $\bar{x}$. In other words, $Z(\bar{x}\hspace{0.02cm},\sigma) = Z(\sigma)$.
If $M$ is a Hadamard manifold, and also a homogeneous space, then both Propositions <ref> and <ref> apply to $M$. Indeed, if $M$ is a Riemannian homogeneous space, then its sectional curvatures lie within a bounded subset of the real line. Therefore, Proposition <ref> implies $Z(\bar{x}\hspace{0.02cm},\sigma)$ is finite for all $\bar{x} \in M$ and $\sigma > 0$. On the other hand, Proposition <ref> implies that $Z(\bar{x}\hspace{0.02cm},\sigma) = Z(\sigma)$.
Thus, if $M$ is a Hadamard manifold, and also a homogeneous space, then (<ref>), reduces to
\begin{equation} \label{eq:gaussdensity}
P(dx|\bar{x},\sigma) \,=\, \left(Z(\sigma)\right)^{-1}\hspace{0.03cm}\exp\left[ -\frac{d^{\hspace{0.03cm}2}(x,\bar{x})}{2\sigma^2}\right]\hspace{0.04cm}\mathrm{vol}(dx)
\end{equation}
and yields a well-defined probability distribution $P(\bar{x}\hspace{0.02cm},\sigma)$ on $M$. This will be the main focus, throughout the following.
Proof of Proposition <ref> : (<ref>) is a direct application of (<ref>). Let $f(y) = f(y|\bar{x}\hspace{0.02cm},\sigma)$, and $\kappa_{\max} = 0$, $\kappa_{\min} = \kappa$. Also, since $M$ is a Hadamard manifold, note that $\min\lbrace\mathrm{inj}(\bar{x})\hspace{0.03cm},\pi\hspace{0.03cm}c^{\scriptscriptstyle -1}\rbrace = \infty$. Therefore, (<ref>) (applied with $x = \bar{x}$), yields
\omega_{n-1}\,\int^{\infty}_{0}\,\exp\left[-\frac{r^{\hspace{0.03cm}\scriptscriptstyle 2}}{\mathstrut 2\sigma^{\hspace{0.03cm}\scriptscriptstyle 2}}\right]\mathrm{sn}^{n-1}_{\scriptscriptstyle 0}(r)\hspace{0.02cm}dr \leq\, Z(\bar{x}\hspace{0.02cm},\sigma) \,\leq
\omega_{n-1}\,\int^{\infty}_{ 0}\,\exp\left[-\frac{r^{\hspace{0.03cm}\scriptscriptstyle 2}}{\mathstrut 2\sigma^{\hspace{0.03cm}\scriptscriptstyle 2}}\right]\mathrm{sn}^{n-1}_{\kappa}(r)\hspace{0.02cm}dr
However, $\mathrm{sn}_{\scriptscriptstyle 0}(r) = r$ and $\mathrm{sn}_{\kappa}(r) = c^{\scriptscriptstyle -1}\hspace{0.02cm}\sinh(c\hspace{0.03cm} r)$. Therefore, the expression for $Z_{\scriptscriptstyle 0}(\sigma)$ follows easily. For $Z_{\scriptscriptstyle c}(\sigma)$, on the other hand, note that
\int^{\infty}_{ 0}\,\exp\left[-\frac{r^{\hspace{0.03cm}\scriptscriptstyle 2}}{\mathstrut 2\sigma^{\hspace{0.03cm}\scriptscriptstyle 2}}\right]\mathrm{sn}^{n-1}_{\kappa}(r)\hspace{0.02cm}dr = \frac{1}{(2c)^{n-1}}\hspace{0.03cm}\int^{\infty}_{ 0}\,
\exp\left[-\frac{r^{\hspace{0.03cm}\scriptscriptstyle 2}}{\mathstrut 2\sigma^{\hspace{0.03cm}\scriptscriptstyle 2}}\right]\left( e^{c\hspace{0.03cm}r} - e^{-c\hspace{0.03cm}r\hspace{0.03cm}} \right)^{n-1}\hspace{0.03cm}dr
Then, (<ref>) follows by performing a binomial expansion, and using
\int^{\infty}_{ 0}\,\exp\left[-\frac{r^{\hspace{0.03cm}\scriptscriptstyle 2}}{\mathstrut 2\sigma^{\hspace{0.03cm}\scriptscriptstyle 2}} +
(n-1-2k)\hspace{0.03cm}c\hspace{0.03cm}r \right]dr = \sigma\,\frac{\Phi\left((n-1-2k)\hspace{0.03cm}\sigma c\right)}{\mathstrut\Phi^\prime\left((n-1-2k)\hspace{0.03cm}\sigma c\right)}
Remark : clearly, $Z_{\scriptscriptstyle 0}(\sigma)$ is the normalising factor of a Gaussian distribution, when $M$ is a Euclidean space, $M = \mathbb{R}^n\,$. On the other hand, $Z_{\scriptscriptstyle c}(\sigma)$ is the normalising factor of a Gaussian distribution, when $M$ is a hyperbolic space of dimension $n$, and constant negative curvature $\kappa = -c^{\hspace{0.02cm}\scriptscriptstyle 2}$. This will become clear in <ref>, below.
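As a numerical sanity check (an illustration, not part of the derivation), the closed form for $Z_{\scriptscriptstyle c}(\sigma)$ can be compared against direct quadrature of $\omega_{n-1}\int^\infty_0 \exp[-r^2/2\sigma^2]\,(c^{-1}\sinh(c\hspace{0.03cm}r))^{n-1}\,dr$. The following Python sketch uses only the standard library; `Phi` and `phi` denote the standard normal distribution function and its density.

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density, i.e. Phi'."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def omega(n):
    """Area of the unit sphere S^{n-1} in R^n."""
    return 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)

def Z_c_closed(sigma, c, n):
    """Closed form for Z_c(sigma), from the binomial expansion of (e^{cr} - e^{-cr})^{n-1}."""
    s = 0.0
    for k in range(n):  # k = 0, ..., n-1
        a = (n - 1 - 2 * k) * sigma * c
        s += (-1) ** k * math.comb(n - 1, k) * Phi(a) / phi(a)
    return omega(n) * sigma / (2.0 * c) ** (n - 1) * s

def Z_c_quadrature(sigma, c, n, R=10.0, steps=20000):
    """Midpoint-rule evaluation of omega_{n-1} * int_0^R e^{-r^2/2s^2} (sinh(cr)/c)^{n-1} dr.
    R is chosen large enough for the moderate parameter values used here."""
    h = R / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * h
        total += math.exp(-r * r / (2.0 * sigma ** 2)) * (math.sinh(c * r) / c) ** (n - 1)
    return omega(n) * h * total
```

For instance, with $n = 3$, $c = 1$, $\sigma = 0.5$, the two evaluations agree to quadrature accuracy, and both exceed $Z_{\scriptscriptstyle 0}(\sigma) = (2\pi\sigma^2)^{3/2}$, as the proposition requires.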
Proof of Proposition <ref> : assume $M$ is a homogeneous space, and fix some point $o \in M$. There exists an isometry $g$ of $M$ such that $g\cdot\bar{x} = o$. In the integral (<ref>), introduce the new variable of integration $z = g \cdot x$. Since $g$ (being an isometry) preserves Riemannian volume,
Z(\bar{x}\hspace{0.02cm},\sigma) = \int_M\, f(g^{\scriptscriptstyle -1}\cdot z|\bar{x}\hspace{0.02cm},\sigma)\,\mathrm{vol}(dz)
= \int_M\, f( z|o\hspace{0.02cm},\sigma)\,\mathrm{vol}(dz) = Z(o\hspace{0.02cm},\sigma)
where the second equality follows from (<ref>). Thus, $Z(\bar{x}\hspace{0.02cm},\sigma) = Z(o\hspace{0.02cm},\sigma)$ does not depend on $\bar{x}$.
§ THE NORMALISING FACTOR $Z(\SIGMA)$
Assume now $M = G/K$ is a Riemannian symmetric space which belongs to the non-compact case, described in <ref>. In particular, $M$ is a Hadamard manifold, and also a homogeneous space. Thus, for each $\bar{x} \in M$ and $\sigma > 0$, there is a well-defined probability distribution $P(\bar{x}\hspace{0.02cm},\sigma)$ on $M$, given by (<ref>). Here, the normalising factor $Z(\sigma)$ can be expressed as a multiple integral, using the integral formula (<ref>), of Proposition <ref>. Applying this proposition (with $o=\bar{x}$), it is enough to note
f(\varphi(s\hspace{0.02cm},a)|\bar{x}\hspace{0.02cm},\sigma) = \exp\left[ -\frac{\Vert a \Vert^{2}_{\scriptscriptstyle B}}{2\sigma^2}\right]
where $\Vert a \Vert^{2}_{\scriptscriptstyle B} = B(a,a)$, in terms of the $\mathrm{Ad}(G)$-invariant symmetric bilinear form $B$. Since this expression only depends on $a$, it is possible to integrate $s$ out of (<ref>), to obtain
\begin{equation} \label{eq:ssz}
Z(\sigma) \,=\,
\frac{\omega(S)}{|W|}\,\int_{\mathfrak{a}}\exp\left[ -\frac{\Vert a \Vert^{2}_{\scriptscriptstyle B}}{2\sigma^2}\right]\prod_{\lambda \in \Delta_+}\left| \sinh\hspace{0.02cm}\lambda(a)\right|^{ m_\lambda}\hspace{0.03cm}da
\end{equation}
This formula expresses the normalising factor $Z(\sigma)$ as a multiple integral on the vector space $\mathfrak{a}$.
Example 1 : the easiest instance of (<ref>) arises when $M$ is a hyperbolic space of dimension $n$, and constant sectional curvature equal to $-1$. Then, $M$ has rank equal to $1$, so that $\mathfrak{a} = \mathbb{R}\hspace{0.03cm}\hat{a}$ for some unit vector $\hat{a} \in \mathfrak{a}$. Since the sectional curvature is equal to $-1$, there is only one positive root $\lambda$, say $\lambda(\hat{a}) = 1$, with multiplicity $m_\lambda= n-1$. In addition, there are two Weyl chambers, $C_+ = \lbrace t\hspace{0.02cm}\hat{a}\,; t > 0\rbrace$ and $C_- = \lbrace t\hspace{0.02cm}\hat{a}\,; t < 0\rbrace$. In other words, $|W| = 2$. Now, (<ref>) reads
Z(\sigma) \,=\,
\frac{\omega_{n-1}}{2}\,\int^{+\infty}_{-\infty}\exp\left[-\frac{r^{\hspace{0.03cm}\scriptscriptstyle 2}}{\mathstrut 2\sigma^{\hspace{0.03cm}\scriptscriptstyle 2}}\right]\left| \sinh(r)\right|^{n-1}\hspace{0.03cm}dr = \omega_{n-1}\,\int^{+\infty}_{ 0}\exp\left[-\frac{r^{\hspace{0.03cm}\scriptscriptstyle 2}}{\mathstrut 2\sigma^{\hspace{0.03cm}\scriptscriptstyle 2}}\right]\sinh^{n-1}(r)\hspace{0.03cm}dr
In general, if all distances are divided by $c > 0$, the sectional curvature $-1$ is replaced by $-c^{\hspace{0.02cm}\scriptscriptstyle 2}$. Thus, when $M$ is a hyperbolic space of dimension $n$, and sectional curvature $-c^{\hspace{0.02cm}\scriptscriptstyle 2}$, one has
Z(\sigma) = \omega_{n-1}\,\int^{+\infty}_{ 0}\exp\left[-\frac{r^{\hspace{0.03cm}\scriptscriptstyle 2}}{\mathstrut 2\sigma^{\hspace{0.03cm}\scriptscriptstyle 2}}\right](c^{\scriptscriptstyle -1}\hspace{0.02cm}\sinh(c\hspace{0.03cm}r))^{n-1}\hspace{0.03cm}dr
This is exactly $Z_{\scriptscriptstyle c}(\sigma)$, expressed analytically in (<ref>).
Example 2 : another example, which also admits an analytic expression, arises when $M$ is a cone of positive-definite matrices (covariance matrices), with real, complex, or quaternion coefficients. Then, $M = G/K$ with $G = \mathrm{GL}(N,\mathbb{K})$, where $\mathbb{K} = \mathbb{R}, \mathbb{C}$ or $\mathbb{H}$ (real numbers, complex numbers, or quaternions), and $K$ is a maximal compact subgroup of $G$: respectively, $K = O(N), U(N)$ or $\mathrm{Sp}(N)$.
In each of these three cases, $\mathfrak{a}$ is the space of $N \times N$ real diagonal matrices, and the positive roots are the linear maps $\lambda(a) = a_{ii} - a_{jj}$ where $i < j$, each with multiplicity $m_\lambda = \beta$ ($\beta = 1, 2$ or $4$, according to $\mathbb{K} = \mathbb{R}, \mathbb{C}$ or $\mathbb{H}$). In addition, $\Vert a \Vert^{2}_{\scriptscriptstyle B} = 4\hspace{0.02cm}\mathrm{tr}(a^{\scriptscriptstyle 2}) = 4\hspace{0.02cm}a^{{\scriptscriptstyle 2}}_{\scriptscriptstyle 11} + \ldots + 4\hspace{0.02cm}a^{{\scriptscriptstyle 2}}_{\scriptscriptstyle NN\,}$.
The Weyl group $W$ is the group of permutation matrices in $K$, so $|W| = N!$. Finally, $S = K/T_{\scriptscriptstyle N}$ where $T_{\scriptscriptstyle N}$ is the subgroup of all matrices $t$ which are diagonal and belong to $K$. Replacing all of this into (<ref>), it follows that
[Curious readers will want to compute $\omega_{\beta}(N)$. But how? For example, $\omega_{\scriptscriptstyle 2}(N)$ can be found using the Weyl integral formula on $U(N)$ [40]. This yields $\omega_{\scriptscriptstyle 2}(N) = \mathrm{vol}(U\!(N))/(2\pi)^N$, where the volume of the unitary group $U(N)$ is $\mathrm{vol}(U\!(N)) = (2\pi)^{(N^2+N)/2}/G(N)$, in terms of $G(N) = 1!\times2!\times\ldots\times(N-1)!$, which can be found just by looking at the normalising factor of a Gaussian unitary ensemble.]
\begin{equation} \label{eq:covzbeta}
Z(\sigma) =
\frac{\omega_{\beta}(N)}{N!}\,\int_{\mathfrak{a}}\,\prod^N_{i=1}\,\exp\left[ -\frac{2\hspace{0.02cm}a^{{\scriptscriptstyle 2}}_{\scriptscriptstyle ii}}{\sigma^2}\right]\prod_{i<j}\left| \sinh(a_{ii} - a_{jj})\right|^{ \beta}\hspace{0.03cm}da
\end{equation}
where $\omega_{\beta}(N)$ stands for $\omega(S)$, and $da = da_{\scriptscriptstyle 11}\ldots da_{\scriptscriptstyle NN\,}$. Passing to the variables $x_i = \exp(2a_{\scriptscriptstyle ii})$,
Z(\sigma) =
\frac{\omega_{\beta}(N)}{\mathstrut 2^{\scriptscriptstyle N N_\beta}N!} \,\int_{\mathbb{R}^{\scriptscriptstyle N}_+}\,\prod^N_{i=1}\left[\rho(x_i\hspace{0.02cm},2\sigma^{\scriptscriptstyle 2})\hspace{0.03cm} x^{\!-N_\beta\,}_i\right]\hspace{0.03cm}|V(x)|^\beta\hspace{0.03cm}\prod^N_{i=1} \hspace{0.02cm}dx_i
where $N_\beta = (\beta/2)(N-1) + 1$, $\rho(x,k) = \exp(-\log^2(x)/k)$ and $V(x) = \prod_{i<j} (x_j - x_i)$ is the Vandermonde determinant. Finally, using the elementary identity
\rho(x,k)\hspace{0.03cm} x^\alpha = \exp\left[\frac{k}{4}\hspace{0.03cm}\alpha^{\scriptscriptstyle 2}\right]\rho\left(e^{-\frac{k}{2}\hspace{0.02cm}\alpha}\hspace{0.02cm}x \hspace{0.02cm},k\right)
it is immediately found that
\begin{equation} \label{eq:covzvv}
Z(\sigma) = \frac{\omega_{\beta}(N)}{\mathstrut 2^{\scriptscriptstyle N N_\beta}N!}\times \exp\left[-N\hspace{0.02cm}N^2_\beta\hspace{0.04cm}(\sigma^{2}\!/2)\right]\times\int_{\mathbb{R}^{\scriptscriptstyle N}_+}\,\prod^N_{i=1}\rho(u_i\hspace{0.02cm},2\sigma^{\scriptscriptstyle 2})\,|V(u)|^\beta\,\prod^N_{i=1} \hspace{0.02cm}du_i
\end{equation}
For the case $\beta = 2$, the integral in (<ref>) will be expressed analytically in <ref>, below. The cases $\beta = 1, 4$ should be pursued using the techniques in [17] (Chapter 5).
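The elementary identity used above, $\rho(x,k)\,x^\alpha = \exp[k\alpha^2/4]\,\rho(e^{-k\alpha/2}x\hspace{0.02cm},k)$, follows by completing the square in the exponent: both sides reduce to $\exp[-\log^2(x)/k + \alpha\log x]$. A short Python check (illustration only):

```python
import math

def rho(x, k):
    """rho(x, k) = exp(-log^2(x)/k), for x > 0."""
    return math.exp(-math.log(x) ** 2 / k)

def lhs(x, k, alpha):
    # left-hand side: rho(x, k) * x^alpha
    return rho(x, k) * x ** alpha

def rhs(x, k, alpha):
    # right-hand side: exp(k alpha^2 / 4) * rho(e^{-k alpha / 2} x, k)
    return math.exp(k * alpha ** 2 / 4.0) * rho(math.exp(-k * alpha / 2.0) * x, k)
```

Both sides agree to machine precision for any $x > 0$, $k > 0$ and real $\alpha$.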
Example 3 : for this last example, I am still unaware of any valid means of analytic expression. Let $M = \mathrm{D}_{\scriptscriptstyle N}$ be the Siegel domain [41]. This is the set of $N \times N$ symmetric complex matrices $z$, such that $\mathrm{I}_N - z^\dagger z \geq 0$ (where the inequality is understood in the sense of the Loewner order). Here, $M = G/K$, where $G\simeq \mathrm{Sp}(N,\mathbb{R})$ (real symplectic group) and $K \simeq U(N)$ (unitary group). Precisely, $G$ is the group of all $2N \times 2N$ complex matrices $g$, with $g^\mathrm{t}\hspace{0.02cm}\Omega\hspace{0.02cm}g = \Omega$ and $g^\dagger\hspace{0.02cm}\Gamma\hspace{0.02cm}g = \Gamma$, where $^\mathrm{t}$ denotes the transpose, and where $\Omega$ and $\Gamma$ are the matrices
\Omega = \left(\begin{array}{cc}\! & \mathrm{I}_N \\[0.1cm] - \mathrm{I}_N & \end{array}\!\right) \hspace{0.25cm};\hspace{0.25cm}
\Gamma = \left(\!\begin{array}{cc} \mathrm{I}_N & \\[0.1cm] & - \mathrm{I}_N \end{array}\!\right)
In addition, $K$ is the group of all block-diagonal matrices $k = \mathrm{diag}(U,U^*)$ where $U \in U(N)$, and $^*$ denotes the conjugate. The action of $G$ on $M$ is given by the matrix analogue of Möbius transformations,
\begin{equation} \label{eq:siegel1}
g\cdot z = (A\hspace{0.02cm}z +B)(C\hspace{0.02cm}z+D)^{\scriptscriptstyle -1} \hspace{0.5cm} g = \left(\!\begin{array}{cc} A & B \\[0.1cm] C & D\end{array}\,\right)
\end{equation}
This action preserves the Siegel metric, which is defined by
\begin{equation} \label{eq:siegel2}
\langle v,\!v\rangle_{\scriptscriptstyle z} = \Vert (\mathrm{I}_N - z\hspace{0.02cm}z^\dagger)^{\scriptscriptstyle -1}\hspace{0.03cm} v\hspace{0.03cm}\Vert^2_{\scriptscriptstyle B} \hspace{1cm} \Vert v \Vert^2_{\scriptscriptstyle B} = \frac{1}{2}\hspace{0.02cm}\mathrm{tr}(v\hspace{0.02cm}v^\dagger)
\end{equation}
where each tangent vector $v$ is identified with a symmetric complex matrix (with this metric, it is easy to see that geodesic symmetry at $0 \in \mathrm{D}_{\scriptscriptstyle N}$ is given by $s_{\scriptscriptstyle 0}(z) = -z$ for $z \in \mathrm{D}_{\scriptscriptstyle N}$). Now [42],
\begin{equation} \label{eq:siegel3}
\mathfrak{a} = \left \lbrace \left(\begin{array}{cc} & a \\[0.1cm] a & \end{array}\right)\,;\hspace{0.05cm} a = \mathrm{diag}(a_{\scriptscriptstyle 11},\ldots,
a_{\scriptscriptstyle NN})\right\rbrace
\end{equation}
The positive roots are $\lambda(a) = a_{ii} - a_{jj}$ for $i < j$, and $\lambda(a) = a_{ii} + a_{jj}$ for $i \leq j$, all with $m_\lambda = 1$. The order of the Weyl group is $|W| = 2^NN!$, and $\omega(S) = \mathrm{vol}(U(N))/2^N$. Replacing into (<ref>),
\begin{equation} \label{eq:siegelz}
Z(\sigma) \,=\,
\frac{\mathrm{vol}(U(N))}{2^{\scriptscriptstyle 2N}N!}\,\int_{\mathfrak{a}}\,\prod^N_{i=1}\,\exp\left[ -\frac{\hspace{0.02cm}a^{{\scriptscriptstyle 2}}_{\scriptscriptstyle ii}}{2\sigma^2}\right]\prod_{i<j} \sinh|a_{ii} - a_{jj}|\prod_{i\leq j} \sinh|a_{ii} + a_{jj}|\,da
\end{equation}
In [36], a special Monte Carlo method for computing (<ref>) was described (this method is due to Paolo Zanini). The idea is the following : if $\mathbf{a}$ is a random variable with normal distribution, of mean zero and covariance $\sigma^2\hspace{0.03cm}\mathrm{I}_N$ in $\mathbb{R}^N$, then $Z(\sigma)$ is given by the expectation,
\begin{equation} \label{eq:paolo}
Z(\sigma) \,=\,
\frac{\mathrm{vol}(U(N))}{2^{\scriptscriptstyle 2N}N!}\,\mathbb{E}\left[ \prod_{i<j} \sinh|\mathbf{a}_{i} - \mathbf{a}_{j}|\prod_{i\leq j} \sinh|\mathbf{a}_{i} + \mathbf{a}_{j}|\right]
\end{equation}
For a given value of $\sigma$, this is easily approximated by an empirical average. However, it then remains to guarantee that $Z(\sigma)$ is a well-behaved function of $\sigma$. Precisely (see Proposition <ref> in <ref>), if
$\eta = (-2\sigma^{\scriptscriptstyle 2})^{\scriptscriptstyle -1}$, then $\psi(\eta) = \log\hspace{0.02cm}Z(\sigma)$ is a strictly convex function, from the half-line $(-\infty,0)$ onto $\mathbb{R}$. By approximating $Z(\sigma_{\scriptscriptstyle n})$ at certain nodes $\sigma_{\scriptscriptstyle n\,}$, and then performing a suitable spline interpolation, it becomes possible to guarantee this behavior of $\psi(\eta)$.
This Monte Carlo method applies, with very little modification, not only to the computation of (<ref>), but also to the computation of the general formula (<ref>). It has been used to produce tables of the function $Z(\sigma)$, for various Riemannian symmetric spaces $M$ of rank $N$ up to $30$, which have proved useful in numerical computation (recall the rank of $M$ is the dimension of $\mathfrak{a}$). Unfortunately, this method breaks down when $N$ is larger (roughly $N \approx 50$). Either an analytic expression of $Z(\sigma)$ (see <ref>, below), or an asymptotic formula for large $N$ (see <ref>, below), is then needed.
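A minimal sketch of this Monte Carlo estimator (my own illustration, using only the standard library; the constant prefactor $\mathrm{vol}(U(N))/2^{\scriptscriptstyle 2N}N!$ is left aside) estimates the expectation in (<ref>) by an empirical average. In the rank-one case $N=1$ the expectation is $\mathbb{E}[\sinh(2|\mathbf{a}_1|)] = e^{2\sigma^2}(2\Phi(2\sigma)-1)$, obtained by completing the square, which serves to validate the estimator.

```python
import math
import random

def mc_expectation(sigma, N, samples=200000, seed=0):
    """Monte Carlo estimate of
    E[ prod_{i<j} sinh|a_i - a_j| * prod_{i<=j} sinh|a_i + a_j| ],
    where a_1, ..., a_N are independent N(0, sigma^2) variables."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        a = [rng.gauss(0.0, sigma) for _ in range(N)]
        p = 1.0
        for i in range(N):
            for j in range(i + 1, N):          # product over i < j
                p *= math.sinh(abs(a[i] - a[j]))
            for j in range(i, N):              # product over i <= j
                p *= math.sinh(abs(a[i] + a[j]))
        total += p
    return total / samples

def closed_form_rank_one(sigma):
    """For N = 1: E[sinh(2|a|)] = e^{2 sigma^2} (2 Phi(2 sigma) - 1)."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return math.exp(2.0 * sigma ** 2) * (2.0 * Phi(2.0 * sigma) - 1.0)
```

For large $N$ the integrand's variance explodes, which is one way to see why the plain Monte Carlo approach breaks down.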
§ MLE AND MAXIMUM ENTROPY
Let $M$ be a Hadamard manifold, which is also a homogeneous space. Propositions <ref> and <ref> then imply that, for any $\bar{x} \in M$ and $\sigma > 0$, there exists a well-defined probability distribution $P(\bar{x}\hspace{0.02cm},\sigma)$ on $M$, given by (<ref>). The family of distributions $P(\bar{x}\hspace{0.02cm},\sigma)$ fits the definition of Gaussian distributions, stated at the beginning of <ref>.
Let $P(\bar{x}\hspace{0.02cm},\sigma)$ be given by (<ref>), for $\bar{x} \in M$ and $\sigma > 0$. The maximum-likelihood estimate of the parameter $\bar{x}$, based on independent samples $(x_n\,;n=1,\ldots,N)$ from $P(\bar{x}\hspace{0.02cm},\sigma)$, is unique and equal to the empirical barycentre $\hat{x}_{\scriptscriptstyle N}$ of the samples $(x_n)$.
The proof of this proposition is immediate. From (<ref>), one has the log-likelihood function
\begin{equation} \label{eq:rgdll}
\ell(\bar{x}\hspace{0.02cm},\sigma) \,=\, - N\log Z(\sigma) - \frac{1}{2\sigma^2}\hspace{0.03cm}\sum^N_{n=1}\,d^{\hspace{0.03cm}2}(x_n\hspace{0.02cm},\bar{x})
\end{equation}
Since the first term does not depend on $\bar{x}$, one may maximise $\ell(\bar{x}\hspace{0.02cm},\sigma)$, first over $\bar{x}$ and then over $\sigma$. Clearly, maximising over $\bar{x}$ is equivalent to minimising the sum of squared distances $d^{\hspace{0.03cm}2}(x_n\hspace{0.02cm},\bar{x})$. This is just the least-squares problem (<ref>), whose solution is the empirical barycentre $\hat{x}_{\scriptscriptstyle N}\,$. Moreover, $\hat{x}_{\scriptscriptstyle N}$ is unique, since $M$ is a Hadamard manifold (as discussed in <ref>).
Consider now maximum-likelihood estimation of $\sigma$. This is better carried out in terms of the natural parameter $\eta = (-2\sigma^{\scriptscriptstyle 2})^{\scriptscriptstyle -1}$, or in terms of the moment parameter $\delta = \psi^\prime(\eta)$, where $\psi(\eta) = \log\hspace{0.02cm}Z(\sigma)$ and the prime denotes the derivative.
The function $\psi(\eta)$, just defined, is a strictly convex function, which maps the half-line $(-\infty,0)$ onto $\mathbb{R}$. The maximum-likelihood estimates of the parameters $\eta$ and $\delta$ are
\begin{equation} \label{eq:mlsigma}
\hat{\eta}_{\scriptscriptstyle N} = (\psi^\prime)^{\scriptscriptstyle -1}(\hat{\delta}_{\scriptscriptstyle N}) \hspace{0.5cm}\text{and}\hspace{0.5cm}
\hat{\delta}_{\scriptscriptstyle N} = \frac{1}{N} \sum^N_{n=1}\,d^{\hspace{0.03cm}2}(x_n\hspace{0.02cm},\hat{x}_{\scriptscriptstyle N})
\end{equation}
where $(\psi^\prime)^{\scriptscriptstyle -1}$ denotes the inverse function.
The proof of this proposition is given below. For now, note the maximum-entropy property of Gaussian distributions, stated in the following proposition.
The Gaussian distribution $P(\bar{x}\hspace{0.02cm},\sigma)$, given by (<ref>), is the unique distribution on $M$, having maximum Shannon entropy, among all distributions with given barycentre $\bar{x}$ and dispersion $\delta = \mathbb{E}_{\hspace{0.03cm}\scriptscriptstyle x \sim P}[d^{\hspace{0.03cm}\scriptscriptstyle 2}(x\hspace{0.02cm},\bar{x})]$. Its entropy is equal to $\psi^*(\delta)$ where $\psi^*$ is the Legendre transform of $\psi$.
Proof of Proposition <ref> : denote $\mu$ the image of the distribution $P(\bar{x}\hspace{0.02cm},\sigma)$ under the mapping $x \mapsto d^{\hspace{0.03cm}2}(x\hspace{0.02cm},\bar{x})$. Then, $\psi(\eta)$ is the cumulant generating function of $\mu$,
\begin{equation}
\psi(\eta) = \log\,\int^\infty_0\hspace{0.01cm}e^{\eta\hspace{0.02cm}s}\mu(ds)
\end{equation}
and is therefore strictly convex. Note from (<ref>) and (<ref>) that $Z(\sigma) \rightarrow 0$ as $\sigma \rightarrow 0$, and that $Z(\sigma)$ increases to $+\infty$ as $\sigma$ increases to $+\infty$. Recalling $\eta = (-2\sigma^{\scriptscriptstyle 2})^{\scriptscriptstyle -1}$ and $\psi(\eta) = \log\hspace{0.02cm}Z(\sigma)$, it becomes clear that $\psi$ is strictly increasing, and maps the half-line $(-\infty,0)$ onto $\mathbb{R}$.
After maximisation with respect to $\bar{x}$, the log-likelihood function (<ref>) becomes,
\begin{equation} \label{eq:rdglegendre1}
\ell(\eta) \,=\, N\left\lbrace \eta\hspace{0.03cm}\hat{\delta}_{\scriptscriptstyle N} - \psi(\eta)\right\rbrace
\end{equation}
which is a strictly concave function. Differentiating, and setting the derivative equal to $0$, directly yields the maximum-likelihood estimates (<ref>).
Remark : $\hat{\eta}_{\scriptscriptstyle N}$ in (<ref>) is well-defined, since the range of $\psi^\prime$ is equal to $(0,\infty)$. Indeed, it is possible to use (<ref>), as in the proof of (<ref>), to show that
\begin{equation} \label{eq:psihadamardk}
\psi^\prime_{\scriptscriptstyle 0}(\eta) \,\leq \psi^\prime(\eta) \leq\, \psi^\prime_c(\eta)
\end{equation}
where $\psi_{\scriptscriptstyle 0}(\eta) = \log Z_{\scriptscriptstyle 0}(\sigma)$, and $\psi_{\scriptscriptstyle c}(\eta) = \log Z_{\scriptscriptstyle c}(\sigma)$, with $\kappa = -c^{\hspace{0.02cm}\scriptscriptstyle 2}$ a lower bound on the sectional curvatures of $M$. Precisely, (<ref>) can be obtained by replacing $f(y) = d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},\bar{x})\hspace{0.03cm}p(y|\bar{x}\hspace{0.02cm},\sigma)$ into (<ref>), where $p(y|\bar{x}\hspace{0.02cm},\sigma)$ is the probability density function in (<ref>). Now, $\psi^\prime_{\scriptscriptstyle 0}(\eta) = n\hspace{0.02cm}\sigma^{\scriptscriptstyle 2}$, which increases to $+\infty$ when $\sigma$ increases to $+\infty$. On the other hand, by a straightforward application of the chain rule, it is seen that
\begin{equation} \label{eq:psicsigma}
\psi^\prime_c(\eta) \,=\, \sigma^3\hspace{0.03cm}\frac{d}{d\sigma}\!\left(\log Z_{\scriptscriptstyle c}(\sigma)\right)
\end{equation}
which, from (<ref>), vanishes when $\sigma = 0$. Now, it follows from (<ref>) that $\psi^\prime$ maps the half-line $(-\infty,0)$ onto the half-line $(0,+\infty)$.
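In the Euclidean case $M = \mathbb{R}^n$, where $\psi^\prime_{\scriptscriptstyle 0}(\eta) = n\sigma^2$, the estimates (<ref>) reduce to familiar formulas: $\hat{x}_{\scriptscriptstyle N}$ is the sample mean, $\hat{\delta}_{\scriptscriptstyle N}$ the mean squared distance to it, and $\hat{\sigma}^2 = \hat{\delta}_{\scriptscriptstyle N}/n$. A minimal Python illustration of this Euclidean specialisation (a sketch, not the general algorithm on $M$):

```python
import math
import random

def mle_euclidean(samples, n):
    """MLE in the Euclidean case M = R^n: the barycentre is the sample mean,
    delta_hat the mean squared distance to it, and sigma_hat^2 = delta_hat / n,
    since psi_0'(eta) = n sigma^2."""
    N = len(samples)
    xhat = [sum(x[i] for x in samples) / N for i in range(n)]
    delta_hat = sum(sum((x[i] - xhat[i]) ** 2 for i in range(n)) for x in samples) / N
    sigma_hat = math.sqrt(delta_hat / n)
    return xhat, delta_hat, sigma_hat

# synthetic samples from an isotropic Gaussian on R^3
rng = random.Random(1)
n, sigma_true = 3, 0.7
data = [[rng.gauss(0.0, sigma_true) for _ in range(n)] for _ in range(20000)]
xhat, delta_hat, sigma_hat = mle_euclidean(data, n)
```

With $20000$ samples, $\hat{x}_{\scriptscriptstyle N}$ and $\hat{\sigma}$ recover the true parameters to within a few standard errors.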
Proof of Proposition <ref> : let $Q(dx)$ be a probability distribution on $M$ with barycentre $\bar{x}$ and dispersion $\delta = \mathbb{E}_{\hspace{0.03cm}\scriptscriptstyle x \sim Q}[d^{\hspace{0.03cm}\scriptscriptstyle 2}(x\hspace{0.02cm},\bar{x})]$. Assume $Q(dx)$ has probability density function $q(x)$, with respect to Riemannian volume. The Shannon entropy of $Q$ is given by
\begin{equation} \label{eq:shannon}
S(q) \,=\, \int_M\,\log(q(x))\hspace{0.02cm}q(x)\hspace{0.02cm}\mathrm{vol}(dx)
\end{equation}
Since $M$ is a homogeneous space, $S(q)$ does not depend on $\bar{x}$. Fixing some point $o \in M$, it is possible to assume, without loss of generality, that $\bar{x} = o$. Then, it is enough to maximise $S(q)$, subject to the constraints,
\int_M\,q(x)\hspace{0.02cm}\mathrm{vol}(dx) \,=\,1 \hspace{0.4cm}\text{ and }\hspace{0.3cm}
\int_M\,d^{\hspace{0.03cm}\scriptscriptstyle 2}(x\hspace{0.02cm},o)\hspace{0.03cm}q(x)\hspace{0.02cm}\mathrm{vol}(dx) \,=\, \delta
Using the method of Lagrange multipliers, this leads to a stationary point
\begin{equation} \label{eq:maxent1}
q(x) \,=\, \exp\left(\hspace{0.02cm}\eta\hspace{0.03cm}d^{\hspace{0.03cm}\scriptscriptstyle 2}(x\hspace{0.02cm},o) - \psi(\eta)\right)
\end{equation}
where the Lagrange multiplier $\eta$ is finally given by $\eta = (\psi^\prime)^{\scriptscriptstyle -1}(\delta)$, in terms of the cumulant generating function,
\psi(\eta) = \log\,\int_M\exp\left(\eta\hspace{0.03cm}d^{\hspace{0.03cm}\scriptscriptstyle 2}(x\hspace{0.02cm},o)\right)\mathrm{vol}(dx)
Of course, $q(x)$ in (<ref>) is just $p(x|o\hspace{0.02cm},\sigma)$, once the parameter $\sigma > 0$ is defined by $\eta = (-2\sigma^{\scriptscriptstyle 2})^{\scriptscriptstyle -1}$. Since the Shannon entropy is strictly concave, this stationary point $q(x)$ is a unique maximum, over the (convex) set of probability density functions on $M$, which satisfy the above constraints. Its entropy is equal to
\begin{equation} \label{eq:maxent2}
S(q) \,=\,\int_M\left(\hspace{0.02cm}\eta\hspace{0.03cm}d^{\hspace{0.03cm}\scriptscriptstyle 2}(x\hspace{0.02cm},o) - \psi(\eta)\right)q(x)\hspace{0.02cm}\mathrm{vol}(dx) = \eta\hspace{0.02cm}\delta - \psi(\eta)
\end{equation}
To show that this is $\psi^*(\delta)$, as stated in the proposition, it is enough to show
\begin{equation} \label{eq:maxent3}
S(q) \,=\, \sup_\eta\hspace{0.03cm}\lbrace \eta\hspace{0.02cm}\delta - \psi(\eta)\rbrace
\end{equation}
However, since $\psi$ is a strictly convex function, it is seen by differentiation that the $\sup$ is achieved when $\psi^\prime(\eta) = \delta$, exactly as in (<ref>). Accordingly, the right-hand side of (<ref>) is equal to $\eta\hspace{0.02cm}\delta - \psi(\eta)$, as in (<ref>).
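In the Euclidean case, $\psi_{\scriptscriptstyle 0}(\eta) = \frac{n}{2}\log(2\pi\sigma^2) = \frac{n}{2}\log(-\pi/\eta)$, so $\psi^\prime_{\scriptscriptstyle 0}(\eta) = -n/(2\eta) = n\sigma^2$, and the supremum defining $\psi^*(\delta)$ is attained at the $\eta$ solving $\psi^\prime_{\scriptscriptstyle 0}(\eta)=\delta$, that is $\eta = -n/(2\delta)$. A short numerical check of these relations (illustration only):

```python
import math

def psi0(eta, n):
    """psi_0(eta) = log Z_0(sigma) = (n/2) log(2 pi sigma^2) = (n/2) log(-pi/eta),
    for eta < 0."""
    return 0.5 * n * math.log(-math.pi / eta)

n, delta = 3, 1.2
eta_star = -n / (2.0 * delta)   # solves psi_0'(eta) = -n/(2 eta) = delta

# finite-difference check that psi_0'(eta_star) = delta
h = 1e-6
fd = (psi0(eta_star + h, n) - psi0(eta_star - h, n)) / (2.0 * h)

# Legendre transform value psi*(delta) = eta* delta - psi_0(eta*)
legendre = eta_star * delta - psi0(eta_star, n)
```

By strict concavity of $\eta \mapsto \eta\delta - \psi_{\scriptscriptstyle 0}(\eta)$, no other $\eta < 0$ can give a larger value than `legendre`.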
§ BARYCENTRE AND COVARIANCE
§.§ The Riemannian barycentre
Let $M$ be a Hadamard manifold, which is also a homogeneous space. Here, it is shown that the barycentre of the Gaussian distribution $P(\bar{x}\hspace{0.03cm},\sigma)$ on $M$, given by (<ref>), is equal to $\bar{x}$.
First, it should be noted that $P(\bar{x}\hspace{0.03cm},\sigma)$ does indeed have a well-defined Riemannian barycentre, since it has finite second-order moments. To see this, it is enough to note that
\int_M\,d^{\hspace{0.03cm}\scriptscriptstyle 2}(\bar{x}\hspace{0.02cm},x)\hspace{0.03cm}p(x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx) \,<\, \infty
Indeed, this integral is just $\psi^\prime(\eta)$ in (<ref>). This means $\pi = P(\bar{x}\hspace{0.03cm},\sigma)$ satisfies (<ref>) for $y_o = \bar{x}$.
Let $P(\bar{x}\hspace{0.03cm},\sigma)$ be given by (<ref>), for $\bar{x} \in M$ and $\sigma > 0$. The Riemannian barycentre of
$P(\bar{x}\hspace{0.03cm},\sigma)$ is equal to $\bar{x}$.
First proof : the proof of this proposition relies on the fact that the variance function,
\mathcal{E}(y) \,=\, \frac{1}{2}\hspace{0.03cm}
\int_M\,d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},x)\hspace{0.03cm}p(x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx)
is $1/2$-strongly convex. In particular, it has a unique stationary point, $\hat{x}$ with $\mathrm{grad}\,\mathcal{E}(\hat{x}) = 0$, which is also its unique global minimum, and (by definition) the Riemannian barycentre of $P(\bar{x}\hspace{0.03cm},\sigma)$. Now, let $f(\bar{x})$ be the function given by
f(\bar{x}) \,=\,\int_M\,p(x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx)
Clearly, this is a constant function, equal to $1$ for all $\bar{x}$. On the other hand, its gradient may be written down, by differentiating under the integral, with respect to $\bar{x}$, using (<ref>) and (<ref>),
\mathrm{grad}\,f(\bar{x}) \,=\, \sigma^{-2}\hspace{0.03cm}\int_M\,\mathrm{Exp}^{-1}_{\bar{x}}(x)\hspace{0.04cm}p(x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx)
Now, $\mathrm{grad}\,f(\bar{x})$ is identically zero. But, the right-hand side of the above expression is equal to $-\sigma^{-2}\hspace{0.03cm}\mathrm{grad}\,\mathcal{E}(\bar{x})$, by (<ref>). This shows that $\mathrm{grad}\,\mathcal{E}(\bar{x}) = 0$, and therefore $\bar{x}$ is the Riemannian barycentre of $P(\bar{x}\hspace{0.03cm},\sigma)$.
Second proof : this proof works if $M$ is a Riemannian symmetric space which belongs to the non-compact case. From (<ref>),
\mathrm{grad}\,\mathcal{E}(\bar{x}) \,=\,-\hspace{0.03cm}\int_M\,\mathrm{Exp}^{-1}_{\bar{x}}(x)\hspace{0.04cm}p(x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx)
Let $s_{\bar{x}}$ be the geodesic symmetry at $\bar{x}$. From the definition of $s_{\bar{x}\,}$, $s_{\bar{x}}\cdot \mathrm{grad}\,\mathcal{E}(\bar{x}) = -\hspace{0.03cm}\mathrm{grad}\,\mathcal{E}(\bar{x})$. On the other hand,
s_{\bar{x}}\cdot \mathrm{grad}\,\mathcal{E}(\bar{x}) \,=\, -\hspace{0.03cm}\int_M\,s_{\bar{x}}\cdot\mathrm{Exp}^{-1}_{\bar{x}}(x)\hspace{0.04cm}p(x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx)
Since $s_{\bar{x}}$ is an isometry and fixes $\bar{x}$, it follows that $s_{\bar{x}}\cdot\mathrm{Exp}^{-1}_{\bar{x}}(x) = \mathrm{Exp}^{-1}_{\bar{x}}(s_{\bar{x}}\cdot x)$ and $p(x|\bar{x}\hspace{0.02cm},\sigma) = p(s_{\bar{x}}\cdot x|\bar{x}\hspace{0.02cm},\sigma)$. Therefore,
s_{\bar{x}}\cdot \mathrm{grad}\,\mathcal{E}(\bar{x}) \,=\,
-\hspace{0.03cm}\int_M\,\mathrm{Exp}^{-1}_{\bar{x}}(s_{\bar{x}}\cdot x)\hspace{0.03cm}
p(s_{\bar{x}}\cdot x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx)
and, introducing the variable of integration $z = s_{\bar{x}}\cdot x$, it follows that $s_{\bar{x}}\cdot \mathrm{grad}\,\mathcal{E}(\bar{x}) =
\mathrm{grad}\,\mathcal{E}(\bar{x})$.
Now, it has been shown that $s_{\bar{x}}\cdot \mathrm{grad}\,\mathcal{E}(\bar{x}) = -\hspace{0.03cm}\mathrm{grad}\,\mathcal{E}(\bar{x})$ and that $s_{\bar{x}}\cdot \mathrm{grad}\,\mathcal{E}(\bar{x}) =
\mathrm{grad}\,\mathcal{E}(\bar{x})$. Thus, $\mathrm{grad}\,\mathcal{E}(\bar{x}) = 0$ and one may conclude as in the first proof.
§.§ The covariance tensor
The covariance form of the distribution $P(\bar{x}\hspace{0.03cm},\sigma)$ is the symmetric bilinear form $C_{\bar{x}}$ on $T_{\bar{x}}M$,
\begin{equation} \label{eq:covariance}
C_{\bar{x}\hspace{0.02cm}}(u\hspace{0.02cm},v) \,=\, \int_M\,\langle u\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}}(x)\rangle\hspace{0.03cm}
\langle \mathrm{Exp}^{-1}_{\bar{x}}(x),v\rangle \,
p(x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx) \hspace{1cm}
u\,,v \in T_{\bar{x}}M
\end{equation}
With $\sigma > 0$ fixed, the map which assigns to $\bar{x} \in M$ the covariance form $C_{\bar{x}}$ is a (0,2)-tensor field on $M$, here called the covariance tensor of $P(\bar{x}\hspace{0.03cm},\sigma)$. In order to compute this tensor field, consider the following situation.
Assume $M = G/K$ is a Riemannian symmetric space which belongs to the non-compact case. Here, $K = K_o\hspace{0.04cm}$, the stabiliser in $G$ of $o \in M$. For $k \in K$ and $u \in T_oM$, it is clear $k\cdot u \in T_oM$. This defines a representation of $K$ in the tangent space $T_oM$, called the isotropy representation. One says that $M$ is an irreducible symmetric space, if this isotropy representation is irreducible.
If $M$ is not irreducible, then it is a product of irreducible Riemannian symmetric spaces $M = M_{\scriptscriptstyle 1}\times\ldots\times M_{ s}$ [10] (Proposition 5.5, Chapter VIII; this is the de Rham decomposition of $M$).
Accordingly, for $x \in M$ and $u \in T_x M$, one may write $x = (x_{\scriptscriptstyle 1},\ldots,x_s)$ and $u = (u_{\scriptscriptstyle 1},\ldots,u_s)$, where $x_r \in M_r$ and $u_r \in T_{x_r}M_r\hspace{0.04cm}$. Now, looking back at (<ref>), it may be seen that
\begin{equation} \label{eq:gdensitydrh}
p(x|\bar{x}\hspace{0.02cm},\sigma) \,=\, \prod^s_{r=1}p(x_r|\bar{x}_r\hspace{0.02cm},\sigma) \hspace{1cm}
p(x_r|\bar{x}_r\hspace{0.02cm},\sigma) \,=\,
\left(Z_{r}(\sigma)\right)^{-1}\hspace{0.03cm}\exp\left[ -\frac{d^{\hspace{0.03cm}2}(x_r\hspace{0.02cm},\bar{x}_r)}{2\sigma^2}\right]
\end{equation}
For the following proposition, let $\eta = (-2\sigma^{\scriptscriptstyle 2})^{\scriptscriptstyle -1}$ and $\psi_r(\eta) = \log Z_r(\sigma)$.
Assume that $M$ is a product of irreducible Riemannian symmetric spaces, $M = M_{\scriptscriptstyle 1}\times\ldots\times M_{ s\hspace{0.04cm}}$. The covariance tensor $C$ in (<ref>) is given by
\begin{equation} \label{eq:gausscovariance}
C_{\bar{x}}(u\hspace{0.02cm},u) \,=\, \sum^s_{r=1}\frac{\psi^\prime_r(\eta)}{\dim\hspace{0.03cm} M_r}\hspace{0.04cm} \Vert u_r \Vert^2_{\bar{x}_r}
\end{equation}
for $u \in T_{\bar{x}}M$ where $\bar{x} = (\bar{x}_{\scriptscriptstyle 1},\ldots,\bar{x}_s)$ and $u = (u_{\scriptscriptstyle 1},\ldots,u_s)$, with $\bar{x}_r \in M_r$ and $u_r \in T_{\bar{x}_r}M_r\hspace{0.04cm}$.
Example : let $M = \mathrm{H}(N)$, so $M = \mathrm{GL}(N,\mathbb{C})/U(N)$, with $U(N)$ the stabiliser of $o = \mathrm{I}_N\,$. The de Rham decomposition of $M$ is $M = M_{\scriptscriptstyle 1}\times M_{\scriptscriptstyle 2\hspace{0.04cm}}$, where $M_{\scriptscriptstyle 1} = \mathbb{R}$ and $M_{\scriptscriptstyle 2}$ is the submanifold whose elements are those $x \in M$ such that $\det(x) = 1$. Accordingly, each $\bar{x} \in M$ is identified with the couple $(\bar{x}_{\scriptscriptstyle 1}\hspace{0.02cm},\bar{x}_{\scriptscriptstyle 2})$,
\bar{x}_{\scriptscriptstyle 1} \,=\, \frac{1}{N}\log\det(\bar{x}) \hspace{0.5cm}
\bar{x}_{\scriptscriptstyle 2} \,=\, (\det(\bar{x}))^{-{\scriptscriptstyle 1/N}}\hspace{0.03cm} \bar{x}
and each $u \in T_{\bar{x}}M$ is written $u = u_{\scriptscriptstyle 1}\hspace{0.02cm}\bar{x} + u_{\scriptscriptstyle 2}$
u_{\scriptscriptstyle 1} \,=\, \frac{1}{N}\hspace{0.03cm}\mathrm{tr}(\bar{x}^{\scriptscriptstyle -1}\hspace{0.02cm}u) \hspace{0.5cm}
u_{\scriptscriptstyle 2} \,=\, u - \frac{1}{N}\hspace{0.03cm}\mathrm{tr}(\bar{x}^{\scriptscriptstyle -1}\hspace{0.02cm}u)\hspace{0.04cm} \bar{x}
These may be replaced into expression (<ref>),
\begin{equation} \label{eq:gausshnprefim0}
C_{\bar{x}}(u\hspace{0.02cm},u) \,=\, \psi^\prime_{\scriptscriptstyle 1}(\eta)\hspace{0.03cm} u^2_{\scriptscriptstyle 1} \,+\, \frac{\psi^\prime_{\scriptscriptstyle 2}(\eta)}{N^{\scriptscriptstyle 2} - 1}\hspace{0.02cm}\Vert u_{\scriptscriptstyle 2}\Vert^2_{\bar{x}_{\scriptscriptstyle 2}}
\end{equation}
where $\psi_{\scriptscriptstyle 1}(\eta) = \log \left( 2\pi\hspace{0.03cm}\sigma^2\right)^{\!\frac{1}{2}}$, and $\psi_{\scriptscriptstyle 2}(\eta) = \log Z(\sigma) - \psi_{\scriptscriptstyle 1}(\eta)$ ($Z(\sigma)$ is given by (<ref>) in <ref>, below). After a direct calculation, this can be brought under the form
\begin{equation} \label{eq:gausshnprefim}
C_{\bar{x}}(u\hspace{0.02cm},u) \,=\, g_{\scriptscriptstyle 1}(\sigma)\hspace{0.03cm} \mathrm{tr}^2(\bar{x}^{\scriptscriptstyle -1}\hspace{0.02cm}u) +
g_{\scriptscriptstyle 2}(\sigma)\hspace{0.03cm} \mathrm{tr}\!\left((\bar{x}^{\scriptscriptstyle -1}\hspace{0.02cm}u)^2\right)
\end{equation}
where $g_{\scriptscriptstyle 1}(\sigma)$ and $g_{\scriptscriptstyle 2}(\sigma) $ are certain functions of $\sigma$.
Remark : as a corollary of Proposition <ref>, the covariance tensor $C$ is a $G$-invariant Riemannian metric on $M$. This is clear, for example, in the special case of (<ref>), which coincides with the general expression of a $\mathrm{GL}(N,\mathbb{C})$-invariant metric.

Proof of Proposition <ref> : since $C_{\bar{x}}$ is bilinear,
\begin{equation} \label{eq:proofcovariance}
C_{\bar{x}}(u\hspace{0.02cm},u) \,=\, \sum^s_{r=1}\sum^s_{q=1}\, C_{\bar{x}}(u_r\hspace{0.02cm},u_q)
\end{equation}
It will be shown that
\begin{equation} \label{eq:proofcovariance1}
C_{\bar{x}}(u_r\hspace{0.02cm},u_q) = 0 \hspace{0.5cm} \text{for } r\neq q
\end{equation}
and, on the other hand, that
\begin{equation} \label{eq:proofcovariance2}
C_{\bar{x}}(u_r\hspace{0.02cm},u_r) = \frac{\psi^\prime_r(\eta)}{\dim\hspace{0.03cm} M_r}\hspace{0.04cm} \Vert u_r \Vert^2_{\bar{x}_r}
\end{equation}
Then, (<ref>) will follow immediately, by replacing (<ref>) and (<ref>) into (<ref>).
Proof of (<ref>) : from (<ref>),
\begin{equation} \label{eq:proofproofcovariance1}
C_{\bar{x}}(u_r\hspace{0.02cm},u_q) \,=\, \int_M\,\langle u_r\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}}(x)\rangle\hspace{0.03cm}
\langle \mathrm{Exp}^{-1}_{\bar{x}}(x),u_q\rangle \,
p(x|\bar{x}\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx)
\end{equation}
However, since $M$ is given as a product Riemannian manifold,
\begin{equation} \label{eq:proofproofcovariance11}
\langle u_r\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}}(x)\rangle \,=\,
\langle u_r\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}_r}(x_r)\rangle \hspace{0.2cm}\text{and}\hspace{0.2cm}
\langle u_q\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}}(x)\rangle \,=\,
\langle u_q\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}_q}(x_q)\rangle
\end{equation}
Using (<ref>) and (<ref>), it follows from (<ref>) that
\begin{array}{rl}
C_{\bar{x}}(u_r\hspace{0.02cm},u_q) =& \int_{\scriptscriptstyle M_r}\langle u_r\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}_r}(x_r)\rangle\hspace{0.03cm}
p(x_r|\bar{x}_r\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx_r)\,
\int_{\scriptscriptstyle M_q}\langle u_q\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}_q}(x_q)\rangle\hspace{0.03cm}
p(x_q|\bar{x}_q\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx_q) \\[0.2cm]
=& \langle u_r\hspace{0.02cm},\mathrm{grad}\,\mathcal{E}_r(\bar{x}_r)\rangle\hspace{0.03cm}\langle u_q\hspace{0.02cm},\mathrm{grad}\,\mathcal{E}_q(\bar{x}_q)\rangle \\[0.2cm]
=& 0
\end{array}
where the second equality follows from (<ref>), applied to the variance functions
\mathcal{E}_r(y) \,=\, \frac{1}{2}\hspace{0.03cm}
\int_{M_r}d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},x_r)\hspace{0.03cm}p(x_r|\bar{x}_r\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx_r)
\hspace{0.1cm}\text{and}\hspace{0.1cm}
\mathcal{E}_q(y) \,=\, \frac{1}{2}\hspace{0.03cm}
\int_{M_q}d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},x_q)\hspace{0.03cm}p(x_q|\bar{x}_q\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx_q)
which, by Proposition <ref>, respectively have their global minima at $\bar{x}_r$ and $\bar{x}_q\hspace{0.04cm}$.
Proof of (<ref>) : let $K_{\bar{x}}$ denote the stabiliser of $\bar{x}$ in $G$. For $k \in K_{\bar{x}}$ and $u_r \in T_{\bar{x}_r}M_r\hspace{0.04cm}$, note that $k\cdot u_r \in T_{\bar{x}_r}M_r\hspace{0.04cm}$. This defines an irreducible representation of $K_{\bar{x}}$ in $T_{\bar{x}_r}M_r\hspace{0.04cm}$. The symmetric bilinear form $C_{\bar{x}}$ is invariant under this representation. Precisely, since any $k \in K_{\bar{x}}$ is an isometry which fixes $\bar{x}$, it follows from (<ref>),
\begin{array}{rl}
C_{\bar{x}}(k\cdot u_r\hspace{0.02cm},k\cdot u_r) = &
\int_{\scriptscriptstyle M_r}\,\langle k\cdot u_r\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}_r}(x_r)\rangle^2\,
p(x_r|\bar{x}_r\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx_r) \\[0.2cm]
=& \int_{\scriptscriptstyle M_r}\,\langle u_r\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}_r}(k^{\scriptscriptstyle -1}\cdot x_r)\rangle^2\,
p(x_r|\bar{x}_r\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx_r) \\[0.2cm]
=& \int_{\scriptscriptstyle M_r}\,\langle u_r\hspace{0.03cm},\mathrm{Exp}^{-1}_{\bar{x}_r}(k^{\scriptscriptstyle -1}\cdot x_r)\rangle^2\,
p(k^{\scriptscriptstyle -1}\cdot x_r|\bar{x}_r\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx_r) \,=\, C_{\bar{x}}(u_r\hspace{0.02cm},u_r)
\end{array}
where the last equality follows by introducing the new variable of integration $z = k^{\scriptscriptstyle -1}\cdot x_r\hspace{0.04cm}$. Finally, from Schur's lemma [40], $C_{\bar{x}}$ is a multiple of the metric,
C_{\bar{x}}(u_r\hspace{0.02cm},u_r) \,=\, f(\eta)\,\hspace{0.04cm} \Vert u_r \Vert^2_{\bar{x}_r}
where $f(\eta)$ may be found from $\mathrm{tr}(C_{\bar{x}})\,=\,(\dim\hspace{0.02cm}M_r)\hspace{0.02cm}f(\eta)$. To conclude, it is enough to evaluate this trace, by introducing an orthonormal basis of $T_{\bar{x}_r}M_r\hspace{0.04cm}$. It then follows that
\mathrm{tr}(C_{\bar{x}}) \,=\,
\int_{M_r}\,\Vert \mathrm{Exp}^{-1}_{\bar{x}_r}(x_r)\Vert^2\hspace{0.03cm}p(x_r|\bar{x}_r\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx_r)
\,=\,\int_{M_r}\,d^{\hspace{0.03cm}\scriptscriptstyle 2}(\bar{x}_r\hspace{0.02cm},\hspace{0.03cm}x_r)\hspace{0.03cm}p(x_r|\bar{x}_r\hspace{0.02cm},\sigma)\hspace{0.03cm}\mathrm{vol}(dx_r)
which is equal to $\psi^\prime_r(\eta)$, by the same argument as in the discussion before Proposition <ref>.
§ AN ANALYTIC FORMULA FOR $Z(\sigma)$
Consider the special case where $M = \mathrm{H}(N)$, which corresponds to $\beta = 2$ in Example 2 of <ref>. In this case, using the tools of random matrix theory (see [17], Chapter 5), it is possible to provide an analytic formula for the normalising factor $Z(\sigma)$.
When $M = \mathrm{H}(N)$, the normalising factor $Z(\sigma)$, given by (<ref>) with $\beta = 2$, admits the following analytic formula
\begin{equation} \label{eq:sw_z}
Z(\sigma) =
\frac{\omega_{\scriptscriptstyle 2}(N)}{\mathstrut 2^{\scriptscriptstyle N^2}}\left( 2\pi\hspace{0.03cm}\sigma^2\right)^{\!\frac{N}{2}}\hspace{0.02cm} \exp\left[{\small\left(\frac{N^3 - N}{6}\right)}\sigma^{2}\right] \prod^{N-1}_{n=1}\left(1 - e^{-n\hspace{0.02cm}\sigma^2} \right)^{\!N-n}
\end{equation}
Remark : when $N = 2$, (<ref>) reduces to
\begin{equation} \label{eq:sw_z_2}
Z(\sigma) = \left(\frac{\pi\hspace{0.02cm}\sigma}{2}\right)^{\!2}\left( e^{\sigma^2} - 1\right)
\end{equation}
which can be checked, by directly calculating the integral (<ref>).
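This reduction can also be checked numerically. The following Python sketch (the helper names and test value of $\sigma$ are illustrative) evaluates the general formula at $N = 2$, assuming $\omega_{\scriptscriptstyle 2}(2) = 2\pi$, which follows from the expression $\omega_{\scriptscriptstyle 2}(N) = (2\pi)^{(N^2-N)/2}/G(N)$ recalled further below:

```python
import math

def Z_closed(N, sigma, omega2):
    """Closed-form normalising factor Z(sigma), as in the proposition above."""
    prod = 1.0
    for n in range(1, N):
        prod *= (1.0 - math.exp(-n * sigma**2)) ** (N - n)
    return (omega2 / 2**(N**2)) * (2 * math.pi * sigma**2)**(N / 2) \
        * math.exp(((N**3 - N) / 6) * sigma**2) * prod

sigma = 0.7
Z2 = Z_closed(2, sigma, omega2=2 * math.pi)   # omega_2(2) = 2*pi
Z2_reduced = (math.pi * sigma / 2)**2 * (math.exp(sigma**2) - 1.0)
```

The two values agree to machine precision, confirming the $N = 2$ reduction.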
Proof of Proposition <ref> : putting $\beta = 2$ in (<ref>), and noting that $N_{\scriptscriptstyle 2} = N$, it follows that
\begin{equation} \label{eq:sw_z_proof}
Z(\sigma) \,=\,
\frac{\omega_{\scriptscriptstyle 2}(N)}{\mathstrut 2^{\scriptscriptstyle N^2}N!}\hspace{0.02cm} \exp\left[-\frac{N^3}{2}\hspace{0.04cm}\sigma^{2}\right]\times I_{\scriptscriptstyle 2}
\end{equation}
where $I_{\scriptscriptstyle 2}$ is the integral
\begin{equation} \label{eq:i2sw}
I_{\scriptscriptstyle 2} = \int_{\mathbb{R}^{\scriptscriptstyle N}_+}\,\prod^N_{i=1}\rho(u_i\hspace{0.02cm},2\sigma^{\scriptscriptstyle 2})\,|V(u)|^2\,\prod^N_{i=1} \hspace{0.02cm}du_i
\end{equation}
This can be expressed using a well-known formula from random matrix theory [17] (Chapter 5, Page 79). Precisely, if $(p_{\hspace{0.02cm}n}\,; n = 0,1,\ldots)$ are orthonormal polynomials, with respect to the weight function $\rho(u\hspace{0.02cm},2\sigma^{\scriptscriptstyle 2})$ on $\mathbb{R}_+\hspace{0.04cm}$, then $I_{\scriptscriptstyle 2}$ is given by
\begin{equation} \label{eq:mehtasw}
I_{\scriptscriptstyle 2} \,=\, N!\hspace{0.03cm}\prod^{N-1}_{n=0} p^{-2}_{\hspace{0.02cm}nn}
\end{equation}
where $p_{\hspace{0.02cm}nn}$ is the leading coefficient in $p_{\hspace{0.02cm}n\hspace{0.04cm}}$. The required orthonormal polynomials $p_{\hspace{0.02cm}n}$ are given by $p_{\hspace{0.02cm}n} = (2\pi\sigma^2)^{-\frac{1}{4}}\hspace{0.03cm}s_{\hspace{0.02cm}n\hspace{0.04cm}}$, where $s_{\hspace{0.02cm}n}$ are the Stieltjes-Wigert polynomials [43] (Page 33). Accordingly,
p^{-2}_{\hspace{0.02cm}nn} \,=\, \left( 2\pi\hspace{0.03cm}\sigma^2\right)^{\!\frac{1}{2}}\hspace{0.02cm}\exp\left[\frac{(2n+1)^2}{2}\hspace{0.04cm}\sigma^2\right] \prod^{n}_{m=1}\left(1 - e^{-m\hspace{0.02cm}\sigma^2} \right)
Then, working out the product (<ref>), it easily follows that
\begin{equation} \label{eq:i2final}
I_{\scriptscriptstyle 2} \,=\, N!\hspace{0.03cm}\left( 2\pi\hspace{0.03cm}\sigma^2\right)^{\!\frac{N}{2}}\hspace{0.02cm} \exp\left[{\small\left(\frac{4N^3-N}{6}\right)}\hspace{0.04cm}\sigma^{2}\right] \prod^{N-1}_{n=1}\left(1 - e^{-n\hspace{0.02cm}\sigma^2} \right)^{\!N-n}
\end{equation}
and (<ref>) may be obtained by replacing this into (<ref>).
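Working out the product rests on two elementary identities : the sum $\sum_{n=0}^{N-1}(2n+1)^2 = (4N^3-N)/3$, and the rearrangement of the double product $\prod_{n=0}^{N-1}\prod_{m=1}^{n}(1-q^m)$ into $\prod_{n=1}^{N-1}(1-q^n)^{N-n}$. Both can be checked numerically, as in the following Python sketch (the test values are arbitrary):

```python
import math

N = 7
q = math.exp(-0.3)   # q = e^{-sigma^2}, with sigma^2 = 0.3 (arbitrary)

# Exponent collected from the factors exp[(2n+1)^2 sigma^2 / 2], n = 0..N-1:
sum_odd_squares = sum((2 * n + 1)**2 for n in range(N))

# Rearrangement of the double product prod_{n=0}^{N-1} prod_{m=1}^{n} (1-q^m):
double_prod = 1.0
for n in range(N):
    for m in range(1, n + 1):
        double_prod *= 1.0 - q**m
single_prod = 1.0
for n in range(1, N):
    single_prod *= (1.0 - q**n) ** (N - n)
```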
Remark : the product appearing in (<ref>) can be written as a product of $q$-Gamma functions. Letting $q = e^{-\sigma^2}$, and recalling the definition of the $q$-Gamma function [44], it may be seen that
\begin{equation}
\prod^{N-1}_{n=1}\left(1 - e^{-n\hspace{0.02cm}\sigma^2} \right)^{\!N-n} \,=\, (1-q)^{\scriptscriptstyle (N^2-N)/2}\hspace{0.02cm}\prod^{N}_{n=2}\Gamma_{q\hspace{0.02cm}}(n) \hspace{1cm}\text{($\Gamma_q$ the $q$-Gamma function)}
\end{equation}
In other words, the product of $q$-Gamma functions plays, for the present problem, the same role that the product of classical Gamma functions (known as the Barnes function) plays, for the Gaussian unitary ensemble.
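The $q$-Gamma identity can be verified numerically, using the elementary expression $\Gamma_q(n) = (1-q)^{1-n}\prod_{k=1}^{n-1}(1-q^k)$ for integer $n \geq 1$ (the $q$-factorial $[n-1]_q!$). A Python sketch, with arbitrary test values:

```python
import math

def q_gamma_int(n, q):
    """Gamma_q(n) for integer n >= 1, i.e. the q-factorial [n-1]_q!."""
    val = 1.0
    for k in range(1, n):
        val *= (1.0 - q**k) / (1.0 - q)
    return val

N, q = 6, math.exp(-0.5)
lhs = 1.0
for n in range(1, N):
    lhs *= (1.0 - q**n) ** (N - n)
rhs = (1.0 - q) ** ((N**2 - N) // 2)
for n in range(2, N + 1):
    rhs *= q_gamma_int(n, q)
```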
§ LARGE $N$ ASYMPTOTICS
Pursuing the development started in <ref>, it is possible to derive an asymptotic expression of $Z(\sigma)$, valid in the limit where $N$ goes to infinity, while the product $t = N\sigma^2$ remains constant.
Let $Z(\sigma)$ be given by (<ref>). If $N \rightarrow \infty$, while $t = N\sigma^2$ remains constant, then the following equivalence holds,
\begin{equation} \label{eq:asymp_z}
\frac{1}{N^{\scriptscriptstyle 2}}\hspace{0.03cm}\log Z(\sigma) \sim -\frac{1}{2}\log\left(\frac{2N}{\pi}\right) + \frac{3}{4} + \frac{t}{6} - \frac{\zeta(2)}{t} - \frac{\mathrm{Li}_{\scriptscriptstyle 3}(e^{-t}) - \zeta(3)}{t^{\scriptscriptstyle 2}}
\end{equation}
where $\mathrm{Li}_{\scriptscriptstyle 3}(x) = \sum^\infty_{k=1} x^k/k^{\scriptscriptstyle 3}$ for $|x| < 1$ (the trilogarithm), and $\zeta$ is the Riemann Zeta function.
The proposition follows by a direct calculation, once the following lemmas have been shown.
In the notation of (<ref>), if $N \rightarrow \infty$,
\begin{equation} \label{eq:lemma_z_asymp_1}
\frac{1}{N^{\scriptscriptstyle 2}}\hspace{0.03cm}\log \omega_{\scriptscriptstyle 2}(N) \sim - \frac{1}{2}\log\left(\frac{N}{2\pi}\right) + \frac{3}{4}
\end{equation}
If $N \rightarrow \infty$, while $t = N\sigma^2$ remains constant, then
\begin{equation} \label{eq:lemma_z_asymp_2}
\lim\hspace{0.04cm} \frac{1}{N^{\scriptscriptstyle 2}}\hspace{0.03cm}\log \prod^{N-1}_{n=1}\left(1 - e^{-n\hspace{0.02cm}\sigma^2} \right)^{\!N-n} \,=\, \int^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}(1-x)\log\left(1 - e^{-tx}\right)\hspace{0.01cm}dx
\end{equation}
and this improper integral is equal to $-\zeta(2)/t - (\mathrm{Li}_{\scriptscriptstyle 3}(e^{-t}) - \zeta(3))/t^{\scriptscriptstyle 2}$.
Proof of Lemma <ref> : recall, from the footnote in <ref>, that
\omega_{\scriptscriptstyle 2}(N) \,=\, (2\pi)^{(N^2-N)/2}/G(N) \hspace{0.4cm}\text{where } G(N) = 1!\times2!\times\ldots\times(N-1)!
Then, from the asymptotic formula of the Barnes function [33] (see Chapter XII, Exercise 49)
\log \omega_{\scriptscriptstyle 2}(N) \,=\, \frac{N^2}{2}\hspace{0.03cm}\log(2\pi) - N^2\left[ \frac{1}{2}\log(N) -\frac{3}{4}\right] + o(N^2)
which directly implies (<ref>).
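This asymptotic behaviour can be checked numerically, since $\log\omega_{\scriptscriptstyle 2}(N)$ is exactly computable through log-factorials. A Python sketch (the value of $N$ is arbitrary):

```python
import math

def log_omega2(N):
    """log omega_2(N), from omega_2(N) = (2*pi)^{(N^2-N)/2} / G(N)."""
    log_G = sum(math.lgamma(k + 1) for k in range(1, N))  # log(1! 2! ... (N-1)!)
    return ((N**2 - N) / 2) * math.log(2 * math.pi) - log_G

N = 2000
exact = log_omega2(N) / N**2
asymp = -0.5 * math.log(N / (2 * math.pi)) + 0.75
```

At $N = 2000$, the exact value and the asymptote differ by roughly $\log(2\pi)/N \approx 10^{-3}$, consistent with the $o(N^2)$ error term.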
Proof of Lemma <ref> : taking the logarithm of the product, the left-hand side of (<ref>) reads
\frac{1}{N^{\scriptscriptstyle 2}}\hspace{0.03cm}\log \prod^{N-1}_{n=1}\left(1 - e^{-n\hspace{0.02cm}\sigma^2} \right)^{\!N-n} \,=\, \frac{1}{N}\sum^{N-1}_{n=1}\left(1 - \frac{n}{N}\right)\log \left(1 - e^{-t\hspace{0.02cm}\frac{n}{N}}\right)
which is a Riemann sum for the improper integral on the right-hand side. To evaluate this integral, one may resort to symbolic computation software, or introduce the power series of the logarithm under the integral,
\int^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}(1-x)\log\left(1 - e^{-tx}\right)\hspace{0.01cm}dx \,=\,
- \sum^\infty_{k=1}\frac{1}{k}\hspace{0.03cm}\int^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}(1-x)\hspace{0.03cm}e^{-ktx}\hspace{0.01cm}dx
and note that
\int^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}(1-x)\hspace{0.03cm}e^{-ktx}\hspace{0.01cm}dx = \frac{1}{kt} - \frac{1 - e^{-kt}}{(kt)^{\scriptscriptstyle 2}}
in order to obtain $-\zeta(2)/t - (\mathrm{Li}_{\scriptscriptstyle 3}(e^{-t}) - \zeta(3))/t^{\scriptscriptstyle 2}$, since $\sum_{k \geq 1} k^{-2} = \zeta(2)$ and $\sum_{k \geq 1} e^{-kt}/k^{\scriptscriptstyle 3} = \mathrm{Li}_{\scriptscriptstyle 3}(e^{-t})$.
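The value of the improper integral can be confirmed numerically; note that the $1/(kt)$ part of the elementary integral contributes a term $-\zeta(2)/t$, in addition to $-(\mathrm{Li}_{\scriptscriptstyle 3}(e^{-t})-\zeta(3))/t^{\scriptscriptstyle 2}$. A Python sketch (truncation orders and the value of $t$ are arbitrary):

```python
import math

def li3(x, terms=200):
    """Trilogarithm Li_3(x) = sum_{k>=1} x^k / k^3, for |x| < 1."""
    return sum(x**k / k**3 for k in range(1, terms + 1))

def integral_midpoint(t, n=200000):
    """Midpoint rule for int_0^1 (1-x) log(1 - exp(-t x)) dx."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (1.0 - x) * math.log(1.0 - math.exp(-t * x))
    return total * h

t = 1.0
zeta2 = math.pi**2 / 6
zeta3 = 1.2020569031595943          # Apery's constant zeta(3)
closed = -zeta2 / t - (li3(math.exp(-t)) - zeta3) / t**2
numeric = integral_midpoint(t)
```

Both values are negative (as they must be, since the integrand is negative on $(0,1)$) and agree to several digits.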
Remark : from (<ref>), it follows that $Z(\sigma) \rightarrow 0$ as $N \rightarrow \infty$, while $t = N\sigma^2$ remains constant. However, this is merely because $\omega_{\scriptscriptstyle 2}(N) \rightarrow 0$ as $N \rightarrow \infty$. Therefore, one should keep in mind,
\begin{equation} \label{eq:asymp_z_bis}
\lim\hspace{0.04cm} \frac{1}{N^{\scriptscriptstyle 2}}\hspace{0.03cm}\log\left[\frac{Z(\sigma)}{\omega_{\scriptscriptstyle 2}(N)}\right] \,=\, -\log(2) + \frac{t}{6} - \frac{\zeta(2)}{t} - \frac{\mathrm{Li}_{\scriptscriptstyle 3}(e^{-t}) - \zeta(3)}{t^{\scriptscriptstyle 2}}
\end{equation}
which may be thought of as the “asymptotic cumulant generating function”.
§ THE ASYMPTOTIC DISTRIBUTION
From the point of view of random matrix theory, a Gaussian distribution $P(\mathrm{I}_N\hspace{0.02cm},\sigma)$ on $M = \mathrm{H}(N)$ defines a unitary matrix ensemble. If $x$ is a random matrix, drawn from this ensemble, and $(x_i\,;i=1,\ldots, N)$ are its eigenvalues, which all belong to $(0,\infty)$, then the empirical distribution $\nu_{\scriptscriptstyle N\hspace{0.03cm}}$, which is given by (as usual, $\delta_{x_i}$ is the Dirac distribution at $x_i$)
\begin{equation} \label{eq:rr1}
\nu_{\scriptscriptstyle N}(B) = \mathbb{E}\left[\frac{1}{N}\sum^{\scriptscriptstyle N}_{i=1}\delta_{x_i}(B)\right]
\end{equation}
for measurable $B \subset (0,\infty)$, converges to an absolutely continuous distribution $\nu_{\scriptscriptstyle t\hspace{0.02cm}}$, when $N$ goes to infinity, while the product $t = N\sigma^2$ remains constant.
Let $c = e^{-t}$ and $a(t) = c(1+\sqrt{1-c})^{\scriptscriptstyle -2}$ while $b(t) = c(1-\sqrt{1-c})^{\scriptscriptstyle -2}$. When $N$ goes to infinity, while the product $t = N\sigma^2$ remains constant, the empirical distribution $\nu_{\scriptscriptstyle N}$ converges weakly to the distribution $\nu_{\scriptscriptstyle t}$ with probability density function
\begin{equation} \label{eq:rmtbis}
\frac{d\nu_{\scriptscriptstyle t}}{dx}(x) = \frac{1}{\pi\hspace{0.02cm}tx}\arctan\left(\frac{\sqrt{4\hspace{0.02cm}e^tx - (x+1)^2}}{x+1}\right) \mathbf{1}_{[a(t),b(t)]}(x)
\end{equation}
where $\mathbf{1}_{[a(t),b(t)]}$ denotes the indicator function of the interval $[a(t),b(t)]$.
Remark : as one should expect, when $t = 0$ (so $\sigma^2 = 0$), $a(t) = b(t) = 1$.
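The normalisation of this density can be checked numerically. In the following Python sketch, the discriminant $4e^t x - (x+1)^2$ is placed under a square root inside the arctangent; it vanishes exactly at the edges $a(t)$ and $b(t)$, which gives the expected square-root decay of the density at the soft edges (the quadrature parameters are arbitrary):

```python
import math

def density(x, t):
    """Candidate limiting spectral density, discriminant under a square root."""
    disc = 4.0 * math.exp(t) * x - (x + 1.0)**2
    if disc <= 0.0:
        return 0.0
    return math.atan(math.sqrt(disc) / (x + 1.0)) / (math.pi * t * x)

def total_mass(t, n=200000):
    c = math.exp(-t)
    a = c / (1.0 + math.sqrt(1.0 - c))**2     # left edge a(t)
    b = c / (1.0 - math.sqrt(1.0 - c))**2     # right edge b(t)
    h = (b - a) / n
    return sum(density(a + (i + 0.5) * h, t) for i in range(n)) * h
```

For instance, `total_mass(1.0)` is close to one, as required of a probability density.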
The proof of Proposition <ref> is a relatively direct application of a result in [45] (Page 191). Recall the variables $u_i = e^{t}x_i$ which appear in (<ref>). Let $\tilde{\nu}_{\scriptscriptstyle N}$ be the empirical distribution of the $u_i$ (this is the same as (<ref>), but with $u_i$ instead of $x_i$). By applying [17] (Chapter 5, Page 81),
\begin{equation} \label{eq:onepointcorr}
\tilde{\nu}_{\scriptscriptstyle N}(B) = \frac{1}{N}\hspace{0.02cm}\int_B R^{\scriptscriptstyle \hspace{0.02cm}(1)}_{\scriptscriptstyle N}(u)\hspace{0.03cm}du
\end{equation}
for measurable $B \subset (0,\infty)$, where the one-point correlation function $R^{\scriptscriptstyle \hspace{0.02cm}(1)}_{\scriptscriptstyle N}(u)$ is given by
\begin{equation} \label{eq:onepointcorrbis}
R^{\scriptscriptstyle \hspace{0.02cm}(1)}_{\scriptscriptstyle N}(u) = \rho(u\hspace{0.02cm},2\sigma^{\scriptscriptstyle 2})\sum^{N-1}_{n=0}p^2_{\hspace{0.02cm} n}(u)
\end{equation}
in the notation of <ref> ($p_{\hspace{0.02cm}n}$ are orthonormal polynomials, with respect to the weight $\rho(u\hspace{0.02cm},2\sigma^{\scriptscriptstyle 2})$). According to [46] (Page 133), $\tilde{\nu}_{\scriptscriptstyle N}$ given by (<ref>) converges weakly to the so-called equilibrium distribution $\tilde{\nu}_{\scriptscriptstyle t\hspace{0.02cm}}$, which minimises the electrostatic energy functional
\begin{equation} \label{eq:electrostatic}
E(\nu) = \frac{1}{t}\int^{\scriptscriptstyle \infty}_{\scriptscriptstyle 0}\frac{1}{2}\log^2(u)\nu(du) - \int^{\scriptscriptstyle \infty}_{\scriptscriptstyle 0}\int^{\scriptscriptstyle \infty}_{\scriptscriptstyle 0} \log|u-v|\nu(du)\nu(dv)
\end{equation}
over probability distributions $\nu$ on $(0,\infty)$. Also according to [46] (Page 133), this equilibrium distribution is the asymptotic distribution of the zeros of the polynomial $p_{\hspace{0.02cm}\scriptscriptstyle N}$ (in the limit $N \rightarrow \infty$ while $N\sigma^2 = t$). Fortunately, $p_{\hspace{0.02cm}\scriptscriptstyle N}$ is just a constant multiple of the Stieltjes-Wigert polynomial $s_{\hspace{0.02cm}N}$ [43] (Page 33). Therefore, the required asymptotic distribution of zeros can be read from [45] (Page 191). Finally, (<ref>) follows by introducing the change of variables $x = e^{-t}u$.
Remark : in [47], the equilibrium distribution $\tilde{\nu}_{\scriptscriptstyle t}$ is derived directly, by searching for stationary distributions of the energy functional (<ref>). This leads to a singular integral equation, whose solution reduces to a Riemann-Hilbert problem. Astoundingly, the Gaussian distributions on $\mathrm{H}(N)$, as introduced in the present chapter, provide a matrix model for Chern-Simons quantum field theory (a detailed account is given in [47]).
§ DUALITY : THE $\Theta$ DISTRIBUTIONS
Recall the Riemannian symmetric space $M = \mathrm{H}(N)$ of <ref>. Its dual space is the unitary group $M^* = U(N)$. Consider now a family of distributions on $M^*$, which will be called $\Theta$ distributions, and which display an interesting connection with Gaussian distributions on $M$, studied in <ref>.
Recall Jacobi's $\vartheta$ function[To follow the original notation of Jacobi [33], this should be written $\vartheta(e^{i\phi}|q)$ where $q = e^{-\sigma^2}$. In other popular notations, this function is called $\vartheta_{\scriptscriptstyle 00}$ or $\vartheta_{\scriptscriptstyle 3\,}$.],
\vartheta(e^{\scriptscriptstyle i\phi}|\sigma^{\scriptscriptstyle 2}) \,=\, \sum^{+\infty}_{m=-\infty} \exp(-m^2\hspace{0.02cm}\sigma^2 + 2m\hspace{0.03cm}i\phi)
As a function of $\phi$, up to some minor modifications, this is just a wrapped normal distribution (in other words, the heat kernel of the unit circle),
\frac{1}{2\pi}\hspace{0.03cm}
\vartheta\!\left(e^{\scriptscriptstyle i\phi}|{\scriptstyle \frac{\sigma^2}{2}}\right) \,=\, \left( 2\pi\hspace{0.03cm}\sigma^2\right)^{\!-\frac{1}{2}}\hspace{0.03cm}\sum^{\infty}_{m=-\infty} \exp\left[ - \frac{(2\phi - 2m\pi)^2}{2\sigma^2}\right]
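This identity is an instance of Poisson summation; note the factor $(2\pi\sigma^2)^{-1/2}$ which it produces on the right-hand side. A Python sketch checking the identity numerically (truncation orders and parameter values are arbitrary):

```python
import math

def theta(phi, s, terms=60):
    """Jacobi theta sum over m of exp(-m^2 * s + 2 i m phi); real by symmetry."""
    return 1.0 + 2.0 * sum(math.exp(-m**2 * s) * math.cos(2 * m * phi)
                           for m in range(1, terms + 1))

def wrapped(phi, sigma2, terms=60):
    """Sum over m of exp(-(2 phi - 2 pi m)^2 / (2 sigma^2))."""
    return sum(math.exp(-(2 * phi - 2 * math.pi * m)**2 / (2 * sigma2))
               for m in range(-terms, terms + 1))

sigma2, phi = 0.8, 0.4
lhs = theta(phi, sigma2 / 2) / (2 * math.pi)
rhs = wrapped(phi, sigma2) / math.sqrt(2 * math.pi * sigma2)
```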
Each $x \in M^*$ can be written $x = k\cdot e^{i\theta}$ for some $k \in U(N)$ and $e^{i\theta} = \mathrm{diag}(e^{i\theta_i}\,;i=1,\ldots, N)$, where $k\cdot y = k\hspace{0.02cm}y\hspace{0.02cm}k^\dagger$, for $y \in M^*$. With this notation, define the following matrix $\vartheta$ function,
\begin{equation} \label{eq:THETAF}
\Theta\left(x\middle|\sigma^2\right) \,=\, k\cdot \vartheta\!\left(e^{\scriptscriptstyle i\theta}|{\scriptstyle \frac{\sigma^2}{2}}\right)
\end{equation}
which is obtained from $x$ by applying Jacobi's $\vartheta$ function to each eigenvalue of $x$. Further, consider the positive function,
\begin{equation} \label{eq:THETAD}
f_*(x|\bar{x}\hspace{0.02cm},\sigma) \,=\, \det\left[\left( 2\pi\hspace{0.03cm}\sigma^2\right)^{\!\frac{1}{2}}\hspace{0.03cm}\Theta\!\left(x\bar{x}^\dagger\middle|\sigma^2\right)\right]
\end{equation}
which is also equal to
\det\left[\left( 2\pi\hspace{0.03cm}\sigma^2\right)^{\!\frac{1}{2}}\hspace{0.03cm}\Theta\!\left(\bar{x}^\dagger x\middle|\sigma^2\right)\right]
since the matrices $x\bar{x}^\dagger$ and $\bar{x}^\dagger x$ are similar. Then, let $Z_{\scriptscriptstyle M^*}(\sigma)$ denote the normalising constant
\begin{equation} \label{eq:zstar}
Z_{\scriptscriptstyle M^*}(\sigma) = \int_{M^*}f_*(x|\bar{x}\hspace{0.02cm},\sigma)\,\mathrm{vol}(dx)
\end{equation}
which does not depend on $\bar{x}$, as can be seen by introducing the new variable of integration $z = x\bar{x}^\dagger$, and using the invariance of $\mathrm{vol}(dx)$ (compare with the proof of Proposition <ref>).
Now, define a $\Theta$ distribution $\Theta(\bar{x},\sigma)$ as the probability distribution on $M^*$, whose probability density function, with respect to $\mathrm{vol}(dx)$, is given by
\begin{equation} \label{eq:thetadensity}
p_*(x|\bar{x}\hspace{0.02cm},\sigma) \,=\,\left(Z_{\scriptscriptstyle M^*}(\sigma)\right)^{-1}\hspace{0.03cm}f_*(x|\bar{x}\hspace{0.02cm},\sigma)
\end{equation}
Let $Z_{\scriptscriptstyle M}(\sigma) = Z(\sigma)$ be given by (<ref>), and $Z_{\scriptscriptstyle M^*}(\sigma)$ be given by (<ref>). Then, the following equality holds
\begin{equation} \label{eq:thetadual}
\frac{Z_{\scriptscriptstyle M}(\sigma)}{Z_{\scriptscriptstyle M^*}(\sigma)} = \exp\left[{\small\left(\frac{N^3 - N}{6}\right)}\sigma^{2}\right]
\end{equation}
Remark : the Gaussian density (<ref>) on $M$, and the $\Theta$ distribution density (<ref>) on $M^*$, are apparently unrelated. Therefore, it is interesting to note that their normalising constants $Z_{\scriptscriptstyle M}(\sigma)$ and $Z_{\scriptscriptstyle M^*}(\sigma)$ scale together, according to the simple relation (<ref>). The connection between the two distributions is due to the duality between the two spaces ($M$ and $M^*$).
Proof of Proposition <ref> : since $Z_{\scriptscriptstyle M^*}(\sigma)$ does not depend on $\bar{x}$, one may set $\bar{x} = o$ in (<ref>), where $o = \mathrm{I}_N\hspace{0.03cm}$. Then, $f_*(x|o\hspace{0.02cm},\sigma)$ is a class function, so (<ref>) can be computed using (<ref>). Note that $\omega(S_{\scriptscriptstyle N})$, which appears in (<ref>), is equal to $\omega_{\scriptscriptstyle 2}(N)$, in the current notation. Therefore,
\begin{equation} \label{eq:proofduality1}
Z_{\scriptscriptstyle M^*}(\sigma) =
\frac{\omega_{\scriptscriptstyle 2}(N)}{\mathstrut 2^{\scriptscriptstyle N^2} N!}\left( 2\pi\hspace{0.03cm}\sigma^2\right)^{\!\frac{N}{2}}\times I_{\scriptscriptstyle 2}
\end{equation}
where $I_{\scriptscriptstyle 2}$ is the integral
\begin{equation} \label{eq:proofduality2}
I_{\scriptscriptstyle 2} \,=\,
\int_{[0\hspace{0.02cm},2\pi]^N}\,\prod^N_{i=1}\vartheta\!\left(e^{\scriptscriptstyle i\theta_i}|{\scriptstyle \frac{\sigma^2}{2}}\right)|V(e^{i\theta})|^2\hspace{0.04cm}d\theta_{\scriptscriptstyle 1}\ldots d\theta_{\scriptscriptstyle N}
\end{equation}
which follows from the identity
\det \Theta\!\left(x\middle|\sigma^2\right)
= \prod^N_{i=1}\vartheta\!\left(e^{\scriptscriptstyle i\theta_i}|{\scriptstyle \frac{\sigma^2}{2}}\right)
Now, $I_{\scriptscriptstyle 2}$ can be expressed using [17] (Chapter 5, Page 79), as in the proof of Proposition <ref>. Precisely, if $(p_{\hspace{0.02cm}n}\,; n = 0,1,\ldots)$ are orthonormal trigonometric polynomials, with respect to the weight function $\vartheta\!\left(e^{\scriptscriptstyle i\theta}|{\scriptstyle \sigma^2\!/2}\right)$, on the unit circle, then $I_{\scriptscriptstyle 2}$ is given by (<ref>),
I_{\scriptscriptstyle 2} \,=\, N!\hspace{0.03cm}\prod^{N-1}_{n=0} p^{-2}_{\hspace{0.02cm}nn}
in terms of the leading coefficients $p_{\hspace{0.02cm}nn}$ of the polynomials $p_{\hspace{0.02cm}n}$ (these leading coefficients may always be chosen to be real). In the present case, the required orthonormal polynomials $p_{\hspace{0.02cm}n}$ are given by
\begin{equation} \label{eq:rogersz1}
p_{\hspace{0.02cm}n}(z) \,=\, \left[q^{n}\!\prod^n_{m=1}( 1 - q^{m})^{-1}\right]^{\!\frac{1}{2}}r_{\hspace{0.02cm}n}(-q^{-\frac{1}{2}}z)
\end{equation}
where $q = e^{-\sigma^2}$ and $r_n(z)$ is the $n$-th Rogers-Szegö polynomial, which is monic [48]. Therefore,
\begin{equation} \label{eq:rogerspnn}
p^{-2}_{\hspace{0.02cm}nn} \,=\, \prod^{n}_{m=1}\left(1 - e^{-m\hspace{0.02cm}\sigma^2} \right)
\end{equation}
and, from (<ref>), $I_{\scriptscriptstyle 2}$ is given by
\begin{equation} \label{eq:szegoi2}
I_{\scriptscriptstyle 2} \,=\, N! \prod^{N-1}_{n=1}\left(1 - e^{-n\hspace{0.02cm}\sigma^2} \right)^{\!N-n}
\end{equation}
which may be replaced into (<ref>) to obtain
\begin{equation} \label{eq:zstarformula}
Z_{\scriptscriptstyle M^*}(\sigma) =
\frac{\omega_{\scriptscriptstyle 2}(N)}{\mathstrut 2^{\scriptscriptstyle N^2}}\left( 2\pi\hspace{0.03cm}\sigma^2\right)^{\!\frac{N}{2}}\hspace{0.02cm} \prod^{N-1}_{n=1}\left(1 - e^{-n\hspace{0.02cm}\sigma^2} \right)^{\!N-n}
\end{equation}
Finally, (<ref>) follows easily, by comparing (<ref>) to (<ref>).
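The duality relation can also be illustrated numerically, by evaluating both closed formulas. A Python sketch (the helper names and test values are illustrative):

```python
import math

def omega2(N):
    """omega_2(N) = (2*pi)^{(N^2-N)/2} / G(N), G(N) = 1! 2! ... (N-1)!."""
    log_G = sum(math.lgamma(k + 1) for k in range(1, N))
    return math.exp(((N**2 - N) / 2) * math.log(2 * math.pi) - log_G)

def q_product(N, sigma2):
    p = 1.0
    for n in range(1, N):
        p *= (1.0 - math.exp(-n * sigma2)) ** (N - n)
    return p

def Z_M(N, sigma2):
    """Closed formula for Z(sigma) on M = H(N)."""
    return omega2(N) / 2**(N**2) * (2 * math.pi * sigma2)**(N / 2) \
        * math.exp(((N**3 - N) / 6) * sigma2) * q_product(N, sigma2)

def Z_Mstar(N, sigma2):
    """Closed formula for the Theta normalising constant on M* = U(N)."""
    return omega2(N) / 2**(N**2) * (2 * math.pi * sigma2)**(N / 2) \
        * q_product(N, sigma2)

N, sigma2 = 4, 0.5
ratio = Z_M(N, sigma2) / Z_Mstar(N, sigma2)
expected = math.exp(((N**3 - N) / 6) * sigma2)
```

All factors cancel in the ratio, except the exponential one, as the proposition states.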
Remark : the construction of the $\Theta$ distributions seems to indicate a general construction of “dual distributions” on pairs of dual Riemannian symmetric spaces. Recalling the general notation of <ref>, it seems that Gaussian distributions arise from a classical Gaussian density profile on the maximal Abelian subspace $\mathfrak{a}$, while $\Theta$ distributions (“their duals”) arise from wrapping this Gaussian density profile around the torus $\mathrm{Exp}_{o}(i\hspace{0.02cm}\mathfrak{a})$.
CHAPTER: BAYESIAN INFERENCE AND MCMC
The present chapter is entirely made up of previously unpublished material. It continues the study of Gaussian distributions, from the previous chapter, in a new direction : Bayesian inference, and the Markov chain Monte Carlo (MCMC) techniques that support it.
* <ref> introduces two Bayesian estimators, the MAP and the MMS, for Gaussian distributions on a Riemannian symmetric space $M$. Proposition <ref> states these two estimators are equal, if the likelihood and prior densities are identical.
* <ref> discusses a surprising experimental result : when $M$ is a space of constant negative curvature, numerical computation shows the MAP and the MMS are so close to each other that they appear to be equal, even if the likelihood and prior densities are different.
* <ref> states the original Proposition <ref>, which provides easy-to-verify sufficient conditions, for the geometric ergodicity of an isotropic Metropolis-Hastings Markov chain, in a Riemannian symmetric space which belongs to the non-compact case. This is then applied to the computation of the MMS, via the subsequent Proposition <ref>.
* <ref> discusses the Riemannian gradient descent method. Proposition <ref> states this method has an exponential rate of convergence, when used to find the global minimum of a strongly convex function, defined on a Hadamard manifold. Propositions <ref>, <ref>, and <ref> are the three essential ingredients of the recipe, used here for the computation of the MMS.
* <ref> gives Lemma <ref>, to be used in the proof of Proposition <ref>. This lemma states that the logarithmic rate of growth of the volume density function is bounded at infinity, for a Riemannian symmetric space which belongs to the non-compact case.
* <ref> is devoted to the proof of Proposition <ref>. The proof is a generalisation of the proof in [49], carried out in the special case of Metropolis algorithms in a Euclidean space.
§ MAP VERSUS MMS
Let $M$ be a Riemannian symmetric space, which belongs to the non-compact case (see <ref>). Recall the Gaussian distribution $P(x\hspace{0.03cm},\sigma)$ on $M$ is given by its probability density function (<ref>)
\begin{equation} \label{eq:likelihood}
p(y|x\hspace{0.02cm},\sigma)\,=\, \left(Z(\sigma)\right)^{-1}\hspace{0.03cm}\exp\left[ -\frac{d^{\hspace{0.03cm}2}(y,x)}{2\sigma^2}\right]
\end{equation}
In <ref>, it was seen that maximum-likelihood estimation of the parameter $x$, based on independent samples $(y_n\,;n=1,\ldots,N)$, amounts to computing the Riemannian barycentre of these samples. The one-sample maximum-likelihood estimate, given a single observation $y$, is therefore $\hat{x}_{\scriptscriptstyle ML} = y$.
Instead of maximum-likelihood estimation, consider a Bayesian approach to estimating $x$, based on the observation $y$. To do so, assign to $x$ a prior density, which is also Gaussian,
\begin{equation} \label{eq:prior}
p(x|z\hspace{0.02cm},\tau)\,=\,\left(Z(\tau)\right)^{-1}\hspace{0.03cm}\exp\left[ -\frac{d^{\hspace{0.03cm}2}(x,z)}{2\tau^2}\right]
\end{equation}
Upon observation of $y$, Bayesian inference concerning $x$ is carried out, using the posterior density
\begin{equation} \label{eq:posterior}
\pi(x) \propto \exp\left[ -\frac{d^{\hspace{0.03cm}2}(y,x)}{2\sigma^2}-\frac{d^{\hspace{0.03cm}2}(x,z)}{2\tau^2}\right]
\end{equation}
where $\propto$ indicates a missing (unknown) normalising factor.
In particular, the maximum a posteriori estimator $\hat{x}_{\scriptscriptstyle MAP}$ of $x$ is equal to the mode of the posterior density $\pi(x)$. In other words, $\hat{x}_{\scriptscriptstyle MAP}$ minimises the weighted sum of squared distances $d^{\hspace{0.03cm}2}(y,x)/\sigma^2+d^{\hspace{0.03cm}2}(x,z)/\tau^2$. This is expressed in the following notation[If $p\hspace{0.03cm},q \in M$ and $c:[0,1]\rightarrow M$ is a geodesic curve with $c(0) = p$ and $c(1) = q$, then $p\,\#_{\scriptscriptstyle t}\, q = c(t)$, for $t \in [0,1]$. Therefore, $p\,\#_{\scriptscriptstyle t}\, q$ is a geodesic convex combination of $p$ and $q$, with respective weights $(1-t)$ and $t$.],
\begin{equation} \label{eq:mapformula}
\hat{x}_{\scriptscriptstyle MAP} \,=\, z\, \#_{\scriptscriptstyle \rho}\, y \hspace{0.5cm} \text{where } \rho = \frac{\tau^2}{\sigma^2+\tau^2}
\end{equation}
Thus, $\hat{x}_{\scriptscriptstyle MAP}$ is a geodesic convex combination of the prior barycentre $z$ and the observation $y$, with respective weights $\sigma^2/(\sigma^2+\tau^2)$ and $\tau^2/(\sigma^2+\tau^2)$.
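In the Euclidean case $M = \mathbb{R}$, this can be illustrated directly, since the geodesic convex combination $z\,\#_{\scriptscriptstyle \rho}\,y$ is just $z + \rho\,(y-z)$. A Python sketch (the numerical values are arbitrary):

```python
# Euclidean sketch (M = R): x_MAP = z #_rho y = z + rho*(y - z) should
# minimise the weighted sum of squared distances
# d^2(y,x)/sigma^2 + d^2(x,z)/tau^2.
def cost(x, y, z, sigma2, tau2):
    return (y - x)**2 / sigma2 + (x - z)**2 / tau2

y, z, sigma2, tau2 = 3.0, -1.0, 0.5, 2.0        # arbitrary test values
rho = tau2 / (sigma2 + tau2)
x_map = z + rho * (y - z)                        # z #_rho y on the real line

# brute-force minimisation over a fine grid between z and y
grid = [z + (y - z) * k / 10000 for k in range(10001)]
x_best = min(grid, key=lambda x: cost(x, y, z, sigma2, tau2))
```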
On the other hand, the minimum mean square error estimator $\hat{x}_{\scriptscriptstyle MMS}$ is the barycentre of the posterior density $\pi(x)$. That is, $\hat{x}_{\scriptscriptstyle MMS}$ is the global minimiser of
\begin{equation} \label{eq:posteriorvariance}
\mathcal{E}_{\pi}(y) \,=\,
\frac{1}{2}\hspace{0.03cm}
\int_M\,d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},x)\hspace{0.03cm}\pi(x)\hspace{0.03cm}\mathrm{vol}(dx)
\end{equation}
whose existence and uniqueness are established in the remark below. While it is easy to compute $\hat{x}_{\scriptscriptstyle MAP}$ from (<ref>), it is much harder to find $\hat{x}_{\scriptscriptstyle MMS\,}$, as this requires minimising the integral (<ref>), where the density $\pi(x)$ is known only up to normalisation.
Still, there is one special case where these two estimators are equal.
In the above notation, if $\sigma^2 = \tau^2$ (that is $\rho = 1/2$), then $\hat{x}_{\scriptscriptstyle MMS} = \hat{x}_{\scriptscriptstyle MAP\,}$.
This relies on the following (intuitively quite obvious) lemma.
Assume that $\pi$ is a probability distribution on $M$ with Riemannian barycentre $b$. If $g$ is an isometry of $M$ such that $g^*\pi = \pi$ ($g^*\pi$ denotes the image of the distribution $\pi$ under the mapping $g:M\rightarrow M$), then $g\cdot b = b$.
This lemma is proved by noting that, for any isometry $g$ of $M$, one has $\mathcal{E}_{g^*\pi} = \mathcal{E}_{\pi}\circ g^{\scriptscriptstyle -1}$. Accordingly, if $b$ is the Riemannian barycentre of $\pi$, $g\cdot b$ is the Riemannian barycentre of $g^*\pi$.
Proof of Proposition <ref> : in this case,
\pi(x) \propto \exp\left[ -\frac{d^{\hspace{0.03cm}2}(y,x)+d^{\hspace{0.03cm}2}(x,z)}{2\sigma^2}\right]
On the other hand, $\hat{x}_{\scriptscriptstyle MAP} \,=\, z\, \#_{\scriptscriptstyle 1/2}\, y$ is the midpoint of the geodesic segment connecting $z$ to $y$ (note that $\rho = 1/2$). Let $s$ denote the geodesic symmetry at $\hat{x}_{\scriptscriptstyle MAP\,}$. Then, $s$ permutes $z$ and $y$, and therefore leaves $\pi(x)$ invariant. Lemma <ref> (applied with $g = s$) implies that the Riemannian barycentre $\hat{x}_{\scriptscriptstyle MMS}$ of $\pi$ verifies $s\cdot \hat{x}_{\scriptscriptstyle MMS} = \hat{x}_{\scriptscriptstyle MMS\,}$. However, $\hat{x}_{\scriptscriptstyle MAP}$ is the unique fixed point of $s$. Therefore, $\hat{x}_{\scriptscriptstyle MMS} = \hat{x}_{\scriptscriptstyle MAP\,}$.
Remark : to see that $\hat{x}_{\scriptscriptstyle MMS}$ is well-defined, it is enough to show the posterior density $\pi$ in (<ref>) satisfies (<ref>). Indeed, this implies that $\pi$ has a well-defined Riemannian barycentre.
Consider then the second-order moment in (<ref>), with $y_o = \hat{x}_{\scriptscriptstyle MAP\,}$. Specifically, this is
\begin{equation} \label{eq:mmap}
m_{\scriptscriptstyle 2}(\hat{x}_{\scriptscriptstyle MAP}) = \int_M d^{\hspace{0.03cm}\scriptscriptstyle 2}(\hat{x}_{\scriptscriptstyle MAP}\hspace{0.02cm},x)\hspace{0.03cm}\pi(x)\hspace{0.03cm}\mathrm{vol}(dx)
\end{equation}
Rearrange (<ref>) to obtain
\begin{equation} \label{eq:rearrangepi}
\pi(x) \propto \exp\left[ -h\left(\rho f_y(x)+(1-\rho)f_z(x)\right)\right] \hspace{1cm} \left(h = 1/\sigma^2 + 1/\tau^2\right)
\end{equation}
in the notation of <ref>. Now, let $f(x) = \rho f_y(x)+(1-\rho)f_z(x)$. For $x \in M$, let $x = \mathrm{Exp}_{\hat{x}_{\scriptscriptstyle MAP}}(v)$, and recall the Taylor expansion (<ref>),
\begin{equation} \label{eq:rearrangetaylor}
f\left(x\right) = f(\hat{x}_{\scriptscriptstyle MAP}) + \langle \mathrm{grad}\,f,v\rangle_{\scriptscriptstyle \hat{x}_{\scriptscriptstyle MAP}} + \frac{1}{2}\,\mathrm{Hess}\,f_{\scriptscriptstyle c(t^*)}(\dot{c},\dot{c})
\end{equation}
where $c(t^*)$ is a point along the geodesic $c(t) = \mathrm{Exp}_{\hat{x}_{\scriptscriptstyle MAP}}(t\,v)$, corresponding to an instant $t^* \in (0,1)$. Note that $\mathrm{grad}\,f(\hat{x}_{\scriptscriptstyle MAP}) = 0$, as can be checked from (<ref>), and that, using (<ref>),
\mathrm{Hess}\,f(x) = \rho\hspace{0.03cm}\mathrm{Hess}\,f_y(x) + (1-\rho)\hspace{0.03cm}\mathrm{Hess}\,f_z(x)\,\geq\,g(x)
Replacing these into (<ref>), it follows that
f(x) \,\geq\, \frac{1}{2}\hspace{0.02cm}\rho\hspace{0.02cm}(1-\rho)\hspace{0.03cm}d^{\hspace{0.03cm}2}(z,y) + \frac{1}{2}\hspace{0.02cm}d^{\hspace{0.03cm}2}(\hat{x}_{\scriptscriptstyle MAP}\hspace{0.02cm},x)
since $f(\hat{x}_{\scriptscriptstyle MAP}) = \frac{1}{2}\hspace{0.02cm}\rho\hspace{0.02cm}(1-\rho)\hspace{0.03cm}d^{\hspace{0.03cm}2}(z,y)$, as follows by evaluating $f$ at the geodesic point $z\, \#_{\scriptscriptstyle \rho}\, y$. Then, if $C^{-1}_\pi$ is the missing normalising factor in (<ref>), and noting $h\hspace{0.02cm}\rho\hspace{0.02cm}(1-\rho) = \rho/\tau^{\scriptscriptstyle 2}$,
\begin{equation} \label{eq:rearrangepi1}
\pi(x) \leq C^{-1}_\pi\hspace{0.03cm}\exp\left[ -\frac{\rho}{2\tau^{\scriptscriptstyle 2}}\hspace{0.03cm}d^{\hspace{0.03cm}2}(z,y)-\frac{h}{2}\hspace{0.03cm}d^{\hspace{0.03cm}2}(\hat{x}_{\scriptscriptstyle MAP}\hspace{0.02cm},x)\right]
\end{equation}
From (<ref>) and (<ref>),
\begin{equation} \label{eq:rearrangepi2}
m_{\scriptscriptstyle 2}(\hat{x}_{\scriptscriptstyle MAP}) \,\leq\, C^{-1}_\pi\exp\left[ -\frac{\rho}{2\tau^{\scriptscriptstyle 2}}\hspace{0.03cm}d^{\hspace{0.03cm}2}(z,y)\right]\hspace{0.02cm}\int_Md^{\hspace{0.03cm}\scriptscriptstyle 2}(\hat{x}_{\scriptscriptstyle MAP}\hspace{0.02cm},x)\hspace{0.03cm}
\exp\left[-\frac{h}{2}\hspace{0.03cm}d^{\hspace{0.03cm}2}(\hat{x}_{\scriptscriptstyle MAP}\hspace{0.02cm},x)\right]
\hspace{0.03cm}\mathrm{vol}(dx)
\end{equation}
which is finite, as required in (<ref>). In fact, by a direct application of the integral formula (<ref>), it is possible to show that
\int_Md^{\hspace{0.03cm}\scriptscriptstyle 2}(\hat{x}_{\scriptscriptstyle MAP}\hspace{0.02cm},x)\hspace{0.03cm}
\exp\left[-\frac{h}{2}\hspace{0.03cm}d^{\hspace{0.03cm}2}(\hat{x}_{\scriptscriptstyle MAP}\hspace{0.02cm},x)\right]
\hspace{0.03cm}\mathrm{vol}(dx) =
h^{\scriptscriptstyle -3/2}\hspace{0.02cm}Z^\prime(h^{\scriptscriptstyle -1/2})
where $Z(\sigma)$ was given in (<ref>), and the prime denotes the derivative. Finally, replacing this into (<ref>), it follows that
\begin{equation} \label{eq:prewasserstein}
m_{\scriptscriptstyle 2}(\hat{x}_{\scriptscriptstyle MAP}) \,\leq\, C^{-1}_\pi\exp\left[ -(\rho/2\tau^{\scriptscriptstyle 2})\hspace{0.03cm}d^{\hspace{0.03cm}2}(z,y)\right]\hspace{0.02cm}h^{\scriptscriptstyle -3/2}\hspace{0.02cm}Z^\prime(h^{\scriptscriptstyle -1/2})
\end{equation}
§ BOUNDING THE DISTANCE
Proposition <ref> states that $\hat{x}_{\scriptscriptstyle MMS} = \hat{x}_{\scriptscriptstyle MAP\,}$, if $\rho = 1/2$. When $M$ is a Euclidean space, it is well known that $\hat{x}_{\scriptscriptstyle MMS} = \hat{x}_{\scriptscriptstyle MAP}$ for any value of $\rho$, since the posterior is then a Gaussian density, whose mean and mode coincide. In general, one expects these two estimators to be different from one another, if $\rho \neq 1/2$.
However, when $M$ is a space of constant negative curvature, numerical experiments show that $\hat{x}_{\scriptscriptstyle MMS}$ and $\hat{x}_{\scriptscriptstyle MAP}$ lie surprisingly close to each other, and that they even appear to be equal. I am still unaware of any mathematical explanation of this phenomenon.
It is possible to bound the distance between $\hat{x}_{\scriptscriptstyle MMS}$ and $\hat{x}_{\scriptscriptstyle MAP\,}$, using the so-called fundamental contraction property [29] (this is an immediate application of Jensen's inequality, as explained in the proof of Theorem 6.3 in [29]).
\begin{equation} \label{eq:contraction}
d(\hat{x}_{\scriptscriptstyle MMS}\hspace{0.02cm},\hat{x}_{\scriptscriptstyle MAP}) \leq W(\pi,\delta_{\hat{x}_{\scriptscriptstyle MAP}})
\end{equation}
where $W$ denotes the Kantorovich ($L^{\scriptscriptstyle 1}$-Wasserstein) distance, and $\delta_{\hat{x}_{\scriptscriptstyle MAP}}$ denotes the Dirac probability distribution concentrated at $\hat{x}_{\scriptscriptstyle MAP\,}$. Now, the right-hand side of (<ref>) is equal to the first-order moment
\begin{equation} \label{eq:mmap1}
m_{\scriptscriptstyle 1}(\hat{x}_{\scriptscriptstyle MAP}) = \int_M d(\hat{x}_{\scriptscriptstyle MAP}\hspace{0.02cm},x)\hspace{0.03cm}\pi(x)\hspace{0.03cm}\mathrm{vol}(dx)
\end{equation}
Of course, the upper bound in (<ref>) is not tight, since it is strictly positive, even when $\rho = 1/2$, as one may see from (<ref>).
It will be shown below that a Metropolis-Hastings algorithm, with Gaussian proposals, can be used to generate (geometrically ergodic) samples $(x_n\,;n\geq 1)$ from the posterior density $\pi$. Using these samples, it is possible to approximate (<ref>), by an empirical average,
\begin{equation} \label{eq:mmapbis}
\bar{m}_{\scriptscriptstyle 1}(\hat{x}_{\scriptscriptstyle MAP}) = \frac{1}{N}\hspace{0.03cm}\sum^N_{n=1}d(\hat{x}_{\scriptscriptstyle MAP}\hspace{0.02cm},x_n)
\end{equation}
In addition, the samples $(x_n)$ can be used to compute a convergent approximation of $\hat{x}_{\scriptscriptstyle MMS\,}$. Precisely, the empirical barycentre $\bar{x}_{\scriptscriptstyle MMS}$ of the samples $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$ converges almost-surely to $\hat{x}_{\scriptscriptstyle MMS}$ (this is proved in <ref>).
Numerical experiments were conducted in the case when $M$ is a space of constant curvature, equal to $-1$, and of dimension $n$. The following table was obtained for the values $\sigma^2 = \tau^2 = 0.1$, using samples $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$ where $N = 2\times10^5$.
\begin{array}{rlllllllll}
\text{dimension }n & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\[0.2cm]
\bar{m}_{\scriptscriptstyle 1}(\hat{x}_{\scriptscriptstyle MAP}) & 0.28 & 0.35 & 0.41 & 0.47 & 0.50 & 0.57 & 0.60 & 0.66& 0.70 \\[0.2cm]
d(\bar{x}_{\scriptscriptstyle MMS}\hspace{0.02cm},\hat{x}_{\scriptscriptstyle MAP}) & 0.00 & 0.00 & 0.00 & 0.01 & 0.01 & 0.02 & 0.02 & 0.02 & 0.03
\end{array}
and the following table for $\sigma^2 = 1$ and $\tau^2 = 0.5$, again using $N = 2\times10^5$.
\begin{array}{rlllllllll}
\text{dimension }n & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\[0.2cm]
\bar{m}_{\scriptscriptstyle 1}(\hat{x}_{\scriptscriptstyle MAP}) & 0.75 & 1.00 & 1.12 & 1.44 & 1.73 & 1.97 & 2.15 & 2.54 &2.91 \\[0.2cm]
d(\bar{x}_{\scriptscriptstyle MMS}\hspace{0.02cm},\hat{x}_{\scriptscriptstyle MAP}) & 0.00 & 0.00 & 0.03 & 0.02 & 0.02 & 0.03 & 0.04 & 0.03 & 0.12
\end{array}
The first table, as expected, confirms Proposition <ref>. The second table, more surprisingly, shows that $\hat{x}_{\scriptscriptstyle MMS}$ and $\hat{x}_{\scriptscriptstyle MAP}$ can be quite close to each other, even when $\rho \neq 1/2$.
Other values of $\sigma^2$ and $\tau^2$ lead to similar orders of magnitude for $\bar{m}_{\scriptscriptstyle 1}(\hat{x}_{\scriptscriptstyle MAP})$ and $ d(\bar{x}_{\scriptscriptstyle MMS}\hspace{0.02cm},\hat{x}_{\scriptscriptstyle MAP})$. While $\bar{m}_{\scriptscriptstyle 1}(\hat{x}_{\scriptscriptstyle MAP})$ increases with the dimension $n$, $d(\bar{x}_{\scriptscriptstyle MMS}\hspace{0.02cm},\hat{x}_{\scriptscriptstyle MAP})$ does not appear sensitive to increasing dimension.
Based on these experimental results, one may be tempted to conjecture that $\hat{x}_{\scriptscriptstyle MMS} = \hat{x}_{\scriptscriptstyle MAP\,}$, even when $\rho \neq 1/2$. Naturally, numerical experiments do not equate to a mathematical proof.
§ COMPUTING THE MMS
§.§ Metropolis-Hastings algorithm
A crucial step, in Bayesian inference, is sampling from the posterior density. Here, this is $\pi(x)$, given by (<ref>).
Since $\pi(x)$ is known only up to normalisation, a suitable sampling method is afforded by the Metropolis-Hastings algorithm. This algorithm generates a Markov chain $(x_n\,;n\geq 1)$, with transition kernel [50]
\begin{equation} \label{eq:hmP}
Pf(x) \,=\, \int_M\,\alpha(x\hspace{0.02cm},y)\hspace{0.02cm}q(x\hspace{0.02cm},y)f(y)\hspace{0.02cm}\mathrm{vol}(dy) + \rho(x)\hspace{0.02cm}f(x)
\end{equation}
for any bounded measurable function $f:M\rightarrow \mathbb{R}$, where $\alpha(x\hspace{0.02cm},y)$ is the probability of accepting a transition from $x$ to $dy$, and $\rho(x)$ is the probability of staying at $x$ (this $\rho(x)$ should not be confused with the weight $\rho$ of (<ref>)), and where $q(x,y)$ is the proposed transition density
\begin{equation} \label{eq:hmTr}
q(x\hspace{0.02cm},y) \geq 0 \hspace{0.2cm}\text{and}\hspace{0.2cm} \int_M\,q(x\hspace{0.02cm},y)\hspace{0.02cm}\mathrm{vol}(dy) = 1 \hspace{0.5cm} \text{for } x\in M
\end{equation}
In the following, $(x_n)$ will always be an isotropic Metropolis-Hastings chain, in the sense that $q(x\hspace{0.02cm},y) = q(d(x\hspace{0.02cm},y))$, so $q(x\hspace{0.02cm},y)$ only depends on the distance $d(x\hspace{0.02cm},y)$. In this case, the acceptance probability $\alpha(x\hspace{0.02cm},y)$ is given by $\alpha(x\hspace{0.02cm},y) = \mathrm{min}\left\lbrace1\hspace{0.02cm},\pi(y)/\pi(x) \right\rbrace$.
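In the Euclidean case, an isotropic chain of this kind is the familiar random-walk Metropolis algorithm. The following is an illustrative sketch, not the manifold implementation used in the experiments; the target here is a standard Gaussian density, known only up to normalisation:

```python
import math
import random

def rw_metropolis(log_target, x0, step, n_samples, rng):
    """Random-walk Metropolis with isotropic Gaussian proposal, q(x,y) = q(d(x,y))."""
    x, samples = x0, []
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, step)  # symmetric proposal
        # accept with probability alpha(x, y) = min(1, pi(y)/pi(x))
        if rng.random() < math.exp(min(0.0, log_target(y) - log_target(x))):
            x = y
        samples.append(x)
    return samples

rng = random.Random(0)
# unnormalised log-density of a standard Gaussian target
samples = rw_metropolis(lambda x: -0.5 * x * x, x0=3.0, step=1.0,
                        n_samples=20000, rng=rng)
mean = sum(samples[1000:]) / len(samples[1000:])  # discard a short burn-in
```

Since the proposal is symmetric, the proposal densities cancel in the acceptance ratio, which is why only the (unnormalised) target appears.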
The aim of the Metropolis-Hastings algorithm is to produce a Markov chain $(x_n)$ which is geometrically ergodic. Geometric ergodicity means the distribution $\pi_n$ of $x_n$ converges to $\pi$, with a geometric rate, in the sense that there exist $\beta \in (0,1)$ and $R(x_{\scriptscriptstyle 1}) \in (0,\infty)$, as well as a function $V:M\rightarrow \mathbb{R}$, such that (in the following, $\pi(dx) = \pi(x)\hspace{0.02cm}\mathrm{vol}(dx)$)
\begin{equation} \label{eq:VV}
V(x) \geq \max\left\lbrace 1\hspace{0.02cm},d^{\hspace{0.03cm} 2}(x,x^*)\right\rbrace \text{ for some } x^* \in M
\end{equation}
\begin{equation} \label{eq:gergodic}
\left| \int_M\, f(x)\hspace{0.02cm}(\pi_n(dx) - \pi(dx)) \right| \,\leq\,R(x_{\scriptscriptstyle 1})\hspace{0.02cm}\beta^n \hspace{0.9cm}
\end{equation}
for any function $f:M\rightarrow \mathbb{R}$ with $|f|\leq V$. If the chain $(x_n)$ is geometrically ergodic, then it satisfies the strong law of large numbers [51]
\begin{equation} \label{eq:mcmclln}
\frac{1}{N}\hspace{0.03cm}\sum^N_{n=1}f(x_n) \longrightarrow \int_M\,f(x)\hspace{0.02cm}\pi(dx) \text{ \;\;(almost-surely)}
\end{equation}
as well as a corresponding central limit theorem (see Theorem 17.0.1, in [51]). Then, in practice, the Metropolis-Hastings algorithm generates samples $(x_n)$ from the posterior density $\pi(x)$.
In <ref>, the following general statement will be proved, concerning the geometric ergodicity of isotropic Metropolis-Hastings chains.
Let $M$ be a Riemannian symmetric space, which belongs to the non-compact case. Assume $(x_n\,;n\geq 1)$ is a Markov chain in $M$, with transition kernel given by (<ref>), with proposed transition density $q(x\hspace{0.02cm},y) = q(d(x\hspace{0.02cm},y))$, and with strictly positive invariant density $\pi$.
The chain $(x_n)$ satisfies (<ref>) and (<ref>), if the following assumptions hold,
(a1) there exists $x^* \in M$, such that $r(x) = d(x^*,x)$ and $\ell(x) = \log\hspace{0.02cm}\pi(x)$ satisfy
\limsup_{r(x)\rightarrow \infty}\hspace{0.02cm}\frac{\langle\mathrm{grad}\,r,\mathrm{grad}\,\ell\rangle_{\scriptscriptstyle x}}{r(x)}\,<\,0
(a2) if $n(x) = \left.\mathrm{grad}\,\ell(x)\middle/\Vert \mathrm{grad}\,\ell(x)\Vert\right.$, then $n(x)$ satisfies
\limsup_{r(x)\rightarrow \infty}\hspace{0.02cm}\langle\mathrm{grad}\,r,n\rangle_{\scriptscriptstyle x}\,<\,0
(a3) there exist $\delta_{\scriptscriptstyle q} > 0$ and $\varepsilon_{\scriptscriptstyle q} > 0$ such that $d(x\hspace{0.02cm},y) <\delta_{\scriptscriptstyle q}$ implies $q(x\hspace{0.02cm},y) > \varepsilon_{\scriptscriptstyle q\,}$
Remark : the posterior density $\pi$ in (<ref>) verifies Assumptions (a1) and (a2). To see this, let $x^* = z$, and note from (<ref>) and (<ref>) that
\mathrm{grad}\,\ell(x) \,=\, -\frac{1}{\tau^2}\hspace{0.02cm}r(x)\hspace{0.02cm}\mathrm{grad}\,r(x) - \frac{1}{\sigma^2}\hspace{0.02cm}\mathrm{grad}\,f_y(x)
Then, taking the scalar product with $\mathrm{grad}\,r$,
\begin{equation} \label{eq:proofconditiona1}
\langle\mathrm{grad}\,r,\mathrm{grad}\,\ell\rangle_{x} = -\frac{1}{\tau^2}\hspace{0.02cm}r(x) -
\frac{1}{\sigma^2}\hspace{0.02cm}\langle\mathrm{grad}\,r,\mathrm{grad}\,f_y\rangle_{x}
\end{equation}
since $\mathrm{grad}\,r(x)$ is a unit vector, for all $x \in M$. Now, $\mathrm{grad}\,f_y(x) = -\mathrm{Exp}^{-1}_x(y)$, by (<ref>). But,
since $r(x)$ is a convex function of $x$,
\langle \mathrm{grad}\,r,\mathrm{Exp}^{-1}_x(y)\rangle \,\leq\, r(y) - r(x)
for any $y \in M$. Thus, the right-hand side of (<ref>) is strictly negative, as soon as $r(x) > r(y)$, and Assumption (a1) is indeed verified. That Assumption (a2) is also verified can be proved by a similar reasoning.
Remark : on the other hand, Assumption (a3) holds, if the proposed transition density $q(x\hspace{0.02cm},y)$ is a Gaussian density, $q(x\hspace{0.02cm},y) = p(y|x,\tau^{\scriptscriptstyle 2}_{\scriptscriptstyle q})$.
With this choice of $q(x\hspace{0.02cm},y)$, all the assumptions of Proposition <ref> are verified, for the posterior density $\pi$ in (<ref>). Proposition <ref> therefore implies that the Metropolis-Hastings algorithm generates geometrically ergodic samples $(x_n\,;n\geq 1)$, from this posterior density.
§.§ The empirical barycentre
Let $(x_n\,;n\geq 1)$ be a Metropolis-Hastings Markov chain in $M$, with its transition kernel (<ref>), and invariant density $\pi$. Assume the chain $(x_n)$ is geometrically ergodic, so it satisfies the strong law of large numbers (<ref>).
Then, let $\bar{x}_{\scriptscriptstyle N}$ denote the empirical barycentre of the first $N$ samples $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$. This is the unique global minimum of the variance function
\begin{equation} \label{eq:emprecursive}
\mathcal{E}_{\scriptscriptstyle N}(y) \,=\,\frac{1}{2N}\sum^N_{n=1}d^{\hspace{0.03cm}2}(y\hspace{0.02cm},x_n)
\end{equation}
Assuming it is well-defined, let $\hat{x}$ denote the Riemannian barycentre of the invariant density $\pi$. It turns out that $\bar{x}_{\scriptscriptstyle N}$ converges almost-surely to $\hat{x}$.
Let $(x_n)$ be any Markov chain in a Hadamard manifold $M$, with invariant distribution $\pi$. Denote
$\bar{x}_{\scriptscriptstyle N}$ the empirical barycentre of $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$, and $\hat{x}$ the Riemannian barycentre of $\pi$ (assuming it is well-defined). If $(x_n)$ satisfies the strong law of large numbers (<ref>), then $\bar{x}_{\scriptscriptstyle N}$ converges to $\hat{x}$, almost-surely.
According to the remarks after Proposition <ref>, the Metropolis-Hastings Markov chain $(x_n)$, whose invariant density is the posterior density $\pi(x)$, given by (<ref>), is geometrically ergodic. Therefore, by Proposition <ref>, the empirical barycentre $\bar{x}_{\scriptscriptstyle MMS\,}$, of the samples $(x_{\scriptscriptstyle 1},\ldots,x_{\scriptscriptstyle N})$, converges almost-surely to the minimum mean square error estimator $\hat{x}_{\scriptscriptstyle MMS}$ (since this is just the barycentre of the posterior density $\pi$). This provides a practical means of approximating $\hat{x}_{\scriptscriptstyle MMS\,}$. Indeed, $\bar{x}_{\scriptscriptstyle MMS}$ can be computed using the Riemannian gradient descent method (this method is discussed in <ref>, below).
The proof of Proposition <ref> is nearly a word-for-word repetition of the proof in [24] (that of Theorem 2.3).
Proof of Proposition <ref> : denote $\mathcal{E}_{\pi}$ the variance function of the invariant distribution $\pi$,
\mathcal{E}_{\pi}(y) = \frac{1}{2}\hspace{0.03cm}
\int_M\,d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},x)\hspace{0.03cm}\pi(dx)
First, for any compact $K \subset M$, it will be proved that
\begin{equation} \label{eq:proofbhatta1}
\sup_{y\in K}\hspace{0.03cm}\left| \mathcal{E}_{\scriptscriptstyle N}(y) - \mathcal{E}_{\pi}(y)\right| \longrightarrow 0\text{ \;\;(almost-surely)}
\end{equation}
To do so, let $\delta > 0$ and let $\lbrace w_j\,;j=1,\ldots,J\rbrace$ be a $\delta$-net in $K$ (for any $y \in K$, there exists $w_j$ such that $d(w_j\hspace{0.02cm},y) < \delta$). By the strong law of large numbers (<ref>),
\begin{equation} \label{eq:proofbhatta11}
\max_{j=1,\ldots, J}\hspace{0.03cm}\left| \mathcal{E}_{\scriptscriptstyle N}(w_j) - \mathcal{E}_{\pi}(w_j)\right| \longrightarrow 0\text{ \;\;(almost-surely)}
\end{equation}
Using the elementary identity
\left| d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},x_n) - d^{\hspace{0.03cm}\scriptscriptstyle 2}(w\hspace{0.02cm},x_n)\right| \leq \left(d(y\hspace{0.02cm},x_n) + d(w\hspace{0.02cm},x_n)\right)\left|d(y\hspace{0.02cm},x_n) - d(w\hspace{0.02cm},x_n)\right|
it follows by the triangle inequality that
\begin{equation} \label{eq:squarelipschitz}
\left| d^{\hspace{0.03cm}\scriptscriptstyle 2}(y\hspace{0.02cm},x_n) - d^{\hspace{0.03cm}\scriptscriptstyle 2}(w\hspace{0.02cm},x_n)\right| \leq \left(d(y\hspace{0.02cm},x_n) + d(w\hspace{0.02cm},x_n)\right)d(w\hspace{0.02cm},y)
\end{equation}
From (<ref>), it is possible to show that, for $y$ and $w$ in $K$,
\begin{equation} \label{eq:proofbhatta12}
\left| \mathcal{E}_{\scriptscriptstyle N}(y) - \mathcal{E}_{\scriptscriptstyle N}(w)\right| \leq \sup_{z\in K}\left( \frac{1}{N}\sum^N_{n=1}d(z\hspace{0.02cm},x_n)\right)d(w\hspace{0.02cm},y)
\end{equation}
However, by the strong law of large numbers (<ref>), if $y_o \in K$ and $N$ is sufficiently large,
\frac{1}{N}\sum^N_{n=1}d(z\hspace{0.02cm},x_n) \leq 1 +
\int_M\,d(y_o\hspace{0.02cm},x)\hspace{0.03cm}\pi(dx) + \mathrm{diam}\,K\text{ \;\;(almost-surely)}
Calling this quantity $A$, it follows that for $N$ sufficiently large (note that this is the same $N$, for all $y$ and $w$ in $K$),
\begin{equation} \label{eq:proofbhatta13}
\left| \mathcal{E}_{\scriptscriptstyle N}(y) - \mathcal{E}_{\scriptscriptstyle N}(w)\right| \leq A\hspace{0.03cm}d(w\hspace{0.02cm},y)\text{ \;\;(almost-surely)}
\end{equation}
From (<ref>), it is also possible to show that, for $y$ and $w$ in $K$,
\begin{equation} \label{eq:proofbhatta14}
\left| \mathcal{E}_{\pi}(y) - \mathcal{E}_{\pi}(w)\right| \leq A\hspace{0.03cm}d(w\hspace{0.02cm},y)
\end{equation}
Now, if $y \in K$, let $w(y) \in \lbrace w_j\rbrace$ be such that $d(w(y),y) < \delta$. Then, for $y$ in $K$,
\left| \mathcal{E}_{\scriptscriptstyle N}(y) - \mathcal{E}_{\pi}(y)\right| \leq
\left| \mathcal{E}_{\scriptscriptstyle N}(y) - \mathcal{E}_{\scriptscriptstyle N}(w(y))\right|+
\left| \mathcal{E}_{\scriptscriptstyle N}(w(y)) - \mathcal{E}_{\pi}(w(y))\right|+
\left| \mathcal{E}_{\pi}(w(y)) - \mathcal{E}_{\pi}(y)\right|
By (<ref>) and (<ref>), if $N$ is sufficiently large, it follows that
\left| \mathcal{E}_{\scriptscriptstyle N}(y) - \mathcal{E}_{\pi}(y)\right| \leq 2A\delta + \max_{j=1,\ldots, J}\hspace{0.03cm}\left| \mathcal{E}_{\scriptscriptstyle N}(w_j) - \mathcal{E}_{\pi}(w_j)\right|
and (<ref>) follows from (<ref>), since $\delta > 0$ is arbitrary.
Second, for $N$ sufficiently large, and for any $C > 0$, it will be proved that there exists a compact $K \subset M$, such that
\begin{equation} \label{eq:proofbhatta2}
y\notin K \;\;\Longrightarrow\;\; \mathcal{E}_{\scriptscriptstyle N}(y) > C \text{ \;\;(almost-surely)}
\end{equation}
To do so, note from (<ref>), by the triangle inequality
\mathcal{E}_{\scriptscriptstyle N}(y) \geq \frac{1}{2N}\sum^N_{n=1}(d(y\hspace{0.02cm},\hat{x})-d(\hat{x}\hspace{0.02cm},x_n))^2
\, \geq \,\frac{1}{2}d^{\hspace{0.03cm}2}(y\hspace{0.02cm},\hat{x}) - \left(\frac{1}{N}\sum^N_{n=1}d(\hat{x}\hspace{0.02cm},x_n) \right)d(y\hspace{0.02cm},\hat{x})
However, by the strong law of large numbers (<ref>), if $N$ is sufficiently large
\frac{1}{N}\sum^N_{n=1}d(\hat{x}\hspace{0.02cm},x_n) \leq 1 + \int_M\,d(\hat{x}\hspace{0.02cm},x)\hspace{0.03cm}\pi(dx)
Calling this quantity $B$, it follows that for $N$ sufficiently large,
\begin{equation} \label{eq:proofbhatta21}
\mathcal{E}_{\scriptscriptstyle N}(y) \geq
\frac{1}{2}d^{\hspace{0.03cm}2}(y\hspace{0.02cm},\hat{x}) - B\hspace{0.02cm}d(y\hspace{0.02cm},\hat{x})
\end{equation}
and this directly yields (<ref>), since closed and bounded sets are compact (as a consequence of the Hopf-Rinow theorem [11]).
Now, to complete the proof, note the following. By (<ref>), for $N$ sufficiently large, there exists a compact $K \subset M$, such that
$\mathcal{E}_{\scriptscriptstyle N}(y) > \mathcal{E}_{\pi}(\hat{x})+1$ almost-surely, whenever $y \notin K$. That is,
\begin{equation} \label{eq:proofbhatta3}
\inf_{y\notin K}\hspace{0.02cm}\mathcal{E}_{\scriptscriptstyle N}(y) > \mathcal{E}_{\pi}(\hat{x})+1\text{ \;\;(almost-surely)}
\end{equation}
Moreover, one may always assume that $K$ is a neighborhood of $\hat{x}$. Then, if $B(\hat{x},\epsilon) \subset K$, it follows from (<ref>) that, for $N$ sufficiently large,
\inf_{y \in B(\hat{x},\epsilon)}\hspace{0.02cm}\mathcal{E}_{\scriptscriptstyle N}(y) <
\inf_{y \in B(\hat{x},\epsilon)}\hspace{0.02cm}\mathcal{E}_{\pi}(y) + \frac{\epsilon^2}{4}\text{ \;\;(almost-surely)}
or, since $\hat{x}$ is the unique global minimum of $\mathcal{E}_{\pi}(y)$,
\begin{equation} \label{eq:proofbhatta4}
\inf_{y \in B(\hat{x},\epsilon)}\hspace{0.02cm}\mathcal{E}_{\scriptscriptstyle N}(y) <
\mathcal{E}_{\pi}(\hat{x}) + \frac{\epsilon^2}{4}\text{ \;\;(almost-surely)}
\end{equation}
But, also by (<ref>), for $N$ sufficiently large,
\inf_{y \in K- B(\hat{x},\epsilon)}\hspace{0.02cm}\mathcal{E}_{\scriptscriptstyle N}(y) >
\inf_{y \in K- B(\hat{x},\epsilon)}\hspace{0.02cm}\mathcal{E}_{\pi}(y) - \frac{\epsilon^2}{4}\text{ \;\;(almost-surely)}
However, since the variance function $\mathcal{E}_{\pi}$ is $1/2$-strongly convex, with its global minimum at $\hat{x}$,
\mathcal{E}_{\pi}(y) \geq \mathcal{E}_{\pi}(\hat{x}) + \frac{1}{2}d^{\hspace{0.03cm}2}(y\hspace{0.02cm},\hat{x})
and this implies
\begin{equation} \label{eq:proofbhatta5}
\inf_{y \in K- B(\hat{x},\epsilon)}\hspace{0.02cm}\mathcal{E}_{\scriptscriptstyle N}(y) >
\mathcal{E}_{\pi}(\hat{x}) + \frac{\epsilon^2}{4}\text{ \;\;(almost-surely)}
\end{equation}
Finally, (<ref>), (<ref>) and (<ref>) show that, for $N$ sufficiently large
\inf_{y\in M}\hspace{0.02cm}\mathcal{E}_{\scriptscriptstyle N}(y) =
\inf_{y\in B(\hat{x},\epsilon)}\hspace{0.02cm}\mathcal{E}_{\scriptscriptstyle N}(y)\text{ \;\;(almost-surely)}
Since $\mathcal{E}_{\scriptscriptstyle N}$ has a unique global minimum $\bar{x}_{\scriptscriptstyle N\,}$, it follows that $\bar{x}_{\scriptscriptstyle N}$ belongs to the closure of $B(\hat{x},\epsilon)$, almost-surely, when $N$ is sufficiently large. The proof is now complete, since $\epsilon$ is arbitrary.
§ RIEMANNIAN GRADIENT DESCENT
Since the minimum mean square error estimator $\hat{x}_{\scriptscriptstyle MMS}$ could not be computed directly, it was approximated by $\bar{x}_{\scriptscriptstyle MMS\hspace{0.03cm}}$, the global minimum of the variance function $\mathcal{E}_{\scriptscriptstyle N\hspace{0.03cm}}$, defined as in (<ref>). This function $\mathcal{E}_{\scriptscriptstyle N}$ being $1/2$-strongly convex, its global minimum can be computed using the Riemannian gradient descent method, which even guarantees an exponential rate of convergence.
This method is here studied from a general point of view. The aim is to minimise a function $f:M\rightarrow \mathbb{R}$, where $M$ is a Hadamard manifold, with sectional curvatures in the interval $[-c^{\hspace{0.02cm}\scriptscriptstyle 2},0]$, and $f$ is an $(\alpha/2)$-strongly convex function.
Recall from (<ref>) in <ref> that this means $f$ is $(\alpha/2)$-strongly convex along any geodesic in $M$. In particular, for $x\hspace{0.02cm}, y \in M$,
\begin{equation} \label{eq:strongconvtan}
f(y) - f(x) \geq \langle\mathrm{Exp}^{-1}_x(y),\mathrm{grad}\,f(x)\rangle_x\,+\,(\alpha/2)\hspace{0.02cm}d^{\hspace{0.03cm}2}(x,y)
\end{equation}
This implies that $f$ has compact sublevel sets. Indeed, let $x^*$ be the global minimum of $f$, so $\mathrm{grad}\,f(x^*) = 0$. Putting $x = x^*$ and $y = x$ in (<ref>), it follows that
\begin{equation} \label{eq:properconv}
f(x) - f(x^*) \geq (\alpha/2)\hspace{0.02cm}d^{\hspace{0.03cm}2}(x^*\!,x)
\end{equation}
Accordingly, if $S(y)$ is the sublevel set of $y$, then $S(y)$ is contained in the closed ball $\bar{B}(x^*\!,R_y)$, where $R_y = \left[(2/\alpha)(f(y) - f(x^*))\right]^{\scriptscriptstyle 1/2}$, as follows from (<ref>). Therefore, $S(y)$ is compact, since it is closed and bounded [11].
The Riemannian gradient descent method is based on the iterative scheme
\begin{equation} \label{eq:pgd}
x^{t+1} = \mathrm{Exp}_{x^t}(-\mu\hspace{0.02cm}\mathrm{grad}\,f(x^t))
\end{equation}
where $\mu$ is a positive step-size, $\mu \leq 1$. If this is chosen sufficiently small, then the iterates $x^t$ remain within the sublevel set $S(x^{\scriptscriptstyle 0})$.
In fact, let $\bar{B}_{\scriptscriptstyle 0} = \bar{B}(x^*\!,R_{x^{\scriptscriptstyle 0}})$ and
$\bar{B}^\prime_{\scriptscriptstyle 0} = \bar{B}(x^*\!,R_{x^{\scriptscriptstyle 0}} + G)$, where $G$ denotes the supremum of the norm of $\mathrm{grad}\,f(x)$, taken over $x \in \bar{B}_{\scriptscriptstyle 0}$. Then, let $H^\prime_{\scriptscriptstyle 0}$ denote the supremum of the operator norm of
$\mathrm{Hess}\,f(x)$, taken over $x \in \bar{B}^\prime_{\scriptscriptstyle 0}$.
For the Riemannian gradient descent method (<ref>), if $\mu \leq 2/\!H^\prime_{\scriptscriptstyle 0\hspace{0.02cm}}$, then the iterates $x^t$ remain within the sublevel set $S(x^{\scriptscriptstyle 0})$.
Once it has been ensured that the iterates $x^t$ remain within $S(x^{\scriptscriptstyle 0})$, it is even possible to choose $\mu$ in such a way that these iterates achieve an exponential rate of convergence towards $x^*$. This relies on the fact that $x^*$ is a “strongly attractive" critical point of the vector field $\mathrm{grad}\,f$. Precisely, putting $y = x^*$ in (<ref>), it follows that
\begin{equation}\label{eq:strongtangattract}
\langle\mathrm{Exp}^{-1}_x(x^*),\mathrm{grad}\,f(x)\rangle_x \leq -\,(\alpha/2)\hspace{0.02cm}d^{\hspace{0.03cm}2}(x,x^*)+ (f(x^*) - f(x))
\end{equation}
Now, let $C_{\scriptscriptstyle 0} = c\hspace{0.02cm}R_{x^{\scriptscriptstyle 0}}\coth(c\hspace{0.02cm}R_{x^{\scriptscriptstyle 0}})$.
Let $\bar{H}^\prime_{\scriptscriptstyle 0} =\max\lbrace H^\prime_{\scriptscriptstyle 0\hspace{0.02cm}},1\rbrace$. If $\mu \leq 1/\!(\bar{H}^\prime_{\scriptscriptstyle 0}C^{\phantom{\prime}}_{\scriptscriptstyle 0})$ (this implies $\mu \leq 2/\!H^\prime_{\scriptscriptstyle 0}$) and $\mu \leq 1/\alpha$,
\begin{equation} \label{eq:proppgd}
d^{\hspace{0.03cm} 2}(x^t,x^*) \leq (1- \mu\alpha)^t\hspace{0.02cm}d^{\hspace{0.03cm} 2}(x^{\scriptscriptstyle 0},x^*)
\end{equation}
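In the Euclidean case, where $\mathrm{Exp}_x(v) = x + v$, the contraction rate (and the scheme itself) can be checked on a strongly convex quadratic. This is an illustrative sketch, with the quadratic and the constants chosen for the example:

```python
# f(x) = (alpha/2) * (x - x_star)^2 is (alpha/2)-strongly convex on R,
# with gradient alpha * (x - x_star); its global minimum is x_star.
alpha, x_star, mu = 2.0, 3.0, 0.25   # step-size mu <= 1/alpha
x = 10.0                              # initial guess x^0
d2_0 = (x - x_star) ** 2
for t in range(1, 51):
    x = x - mu * alpha * (x - x_star)  # Exp_x(-mu grad f) reduces to x - mu grad f(x)
    # contraction bound: d^2(x^t, x^*) <= (1 - mu alpha)^t d^2(x^0, x^*)
    assert (x - x_star) ** 2 <= (1.0 - mu * alpha) ** t * d2_0 + 1e-12
```

Here each step shrinks the error by the exact factor $(1-\mu\alpha)$, so the squared distance decays even faster than the stated bound; on a curved space, the constants $\bar{H}^\prime_{\scriptscriptstyle 0}$ and $C_{\scriptscriptstyle 0}$ enter the admissible step-size.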
The proof of Proposition <ref> will employ the following lemma.
Let $\bar{H}^\prime_{\scriptscriptstyle 0} =\max\lbrace H^\prime_{\scriptscriptstyle 0\hspace{0.02cm}},1\rbrace$. For any $x \in \bar{B}_{\scriptscriptstyle 0\hspace{0.02cm}}$,
\begin{equation} \label{eq:lempgd}
\Vert\mathrm{grad}\,f\Vert^2_x \leq 2\bar{H}^\prime_{\scriptscriptstyle 0}(f(x) - f(x^*))
\end{equation}
Remark : the rate of convergence predicted by (<ref>) is exponential, but depends on the initial guess $x^{\scriptscriptstyle 0}$, through the constants $\bar{H}^\prime_{\scriptscriptstyle 0}$ and $C^{\phantom{\prime}}_{\scriptscriptstyle 0\hspace{0.02cm}}$. This rate can become arbitrarily bad, if $x^{\scriptscriptstyle 0}$ is chosen sufficiently far from $x^*$, since both $\bar{H}^\prime_{\scriptscriptstyle 0}$ and $C^{\phantom{\prime}}_{\scriptscriptstyle 0\hspace{0.02cm}}$ may then become arbitrarily large. By contrast, if $M$ is a Euclidean space (that is, in the limit $c = 0$), $C_{\scriptscriptstyle 0} = 1$, is a constant.
Remark : I have never met with a function $f:M \rightarrow \mathbb{R}$ ($M$ a non-Euclidean Hadamard manifold), which is strongly convex, and also has a bounded Hessian. I do not even know whether it is possible or not to construct such a function.
Proof of Lemma <ref> : let $c:[0,1]\rightarrow M$ be the geodesic curve with $c(0) = x^t$ and $c(1) = x^{t+1}$. From (<ref>), $\dot{c}(0) = -\mu\hspace{0.02cm}\mathrm{grad}\,f(x^t)$. Then, by the Taylor expansion (<ref>),
\begin{equation} \label{eq:proofsublevel1}
f(x^{t+1}) = f(x^t) - \mu\hspace{0.02cm}\Vert \mathrm{grad}\,f\Vert^2_{x^t}
+ \frac{1}{2}\,\mathrm{Hess}\,f_{\scriptscriptstyle c(u)}(\dot{c},\dot{c})
\end{equation}
for some $u \in (0,1)$. Assume that $x^t$ belongs to $S(x^{\scriptscriptstyle 0}) \subset \bar{B}_{\scriptscriptstyle 0\hspace{0.02cm}}$. Then, by the triangle inequality,
d(x^*,c(u)) \leq d(x^*,x^t) + d(x^t,c(u)) \leq R_{x^{\scriptscriptstyle 0}} + \mu\hspace{0.02cm}G
where the second inequality follows from the definition of $G$, because $d(x^t,c(u)) = u\Vert \dot{c}(0)\Vert$. Since $\mu \leq 1$, it follows that $d(x^*,c(u)) \leq R_{x^{\scriptscriptstyle 0}} + G$. Therefore, $c(u) \in \bar{B}^\prime_{\scriptscriptstyle 0\hspace{0.02cm}}$. Then, from the definition of $H^\prime_{\scriptscriptstyle 0}$,
\mathrm{Hess}\,f_{\scriptscriptstyle c(u)}(\dot{c},\dot{c}) \leq H^\prime_{\scriptscriptstyle 0}\hspace{0.02cm}\Vert \dot{c}\Vert^2_{\scriptscriptstyle c(u)}
= H^\prime_{\scriptscriptstyle 0}\hspace{0.02cm}\mu^2\Vert \mathrm{grad}\,f\Vert^2_{x^t}
Replacing this into (<ref>),
\begin{equation} \label{eq:proofsublevel2}
f(x^{t+1}) \leq f(x^t) - \mu(1-\mu\hspace{0.02cm}(H^\prime_{\scriptscriptstyle 0}/2))\hspace{0.02cm}\Vert \mathrm{grad}\,f\Vert^2_{x^t}
\end{equation}
Clearly, then, taking $\mu \leq 2/\!H^\prime_{\scriptscriptstyle 0\hspace{0.03cm}}$, it follows that $f(x^{t+1}) \leq f(x^t)$ so that $x^{t+1}$ belongs to $S(x^{\scriptscriptstyle 0})$. The lemma is proved by induction.
Proof of Proposition <ref> : let $c:[0,1]\rightarrow M$ be the geodesic with $c(0) = x^t$ and $c(1) = x^{t+1}$. Note from (<ref>) that $\dot{c}(0) = -\mu\hspace{0.02cm}\mathrm{grad}\,f(x^t)$. Let $W(x) = d^{\hspace{0.03cm} 2}(x,x^*)/2$, and write down its Taylor expansion (<ref>),
\begin{equation} \label{eq:proofpgd1}
W(x^{t+1}) = W(x^t) - \mu\hspace{0.02cm}\langle\mathrm{grad}W,\mathrm{grad}\,f\rangle_{x^t} +
\frac{1}{2}\,\mathrm{Hess}\,W_{\scriptscriptstyle c(u)}(\dot{c},\dot{c})
\end{equation}
for some $u \in (0,1)$. Note that $\mathrm{grad}\,W$ and $\mathrm{Hess}\,W$ are given by (<ref>) and (<ref>), and also that $x^t$ and $x^{t+1}$ belong to $S(x^{\scriptscriptstyle 0}) \subset \bar{B}_{\scriptscriptstyle 0}\hspace{0.02cm}$, by Lemma <ref>, since $\mu \leq 2/\!H^\prime_{\scriptscriptstyle 0}$. Since $S(x^{\scriptscriptstyle 0})$ is a convex set (recall the definition from <ref>), $c(u)$ also belongs to $S(x^{\scriptscriptstyle 0}) \subset \bar{B}_{\scriptscriptstyle 0}\hspace{0.02cm}$. By the definition of $C_{\scriptscriptstyle 0\hspace{0.02cm}}$,
\mathrm{Hess}\,W_{\scriptscriptstyle c(u)}(\dot{c},\dot{c}) \leq
C_{\scriptscriptstyle 0}\hspace{0.02cm}\Vert \dot{c}\Vert^2_{\scriptscriptstyle c(u)}
= C_{\scriptscriptstyle 0}\hspace{0.02cm}\mu^2\Vert \mathrm{grad}\,f\Vert^2_{x^t}
Replacing into (<ref>), one now has
\begin{equation} \label{eq:proofpgd2}
W(x^{t+1}) \leq W(x^t) + \mu\hspace{0.02cm}\langle\mathrm{Exp}^{-1}_{x^t}(x^*),\mathrm{grad}\,f\rangle_{x^t} +
(C_{\scriptscriptstyle 0}/2)\hspace{0.02cm}\mu^2\Vert \mathrm{grad}\,f\Vert^2_{x^t}
\end{equation}
Therefore, by (<ref>) and (<ref>),
\begin{equation} \label{eq:proofpgd3}
W(x^{t+1}) \leq W(x^t)(1-\mu\alpha) + \mu(1-\mu(\bar{H}^\prime_{\scriptscriptstyle 0}C^{\phantom{\prime}}_{\scriptscriptstyle 0}))(f(x^*) - f(x^t))
\end{equation}
If $\mu \leq 1/\!(\bar{H}^\prime_{\scriptscriptstyle 0}C^{\phantom{\prime}}_{\scriptscriptstyle 0})$, then (<ref>) implies $W(x^{t+1}) \leq (1-\mu\alpha)W(x^t)$, because $f(x^*) - f(x^t) \leq 0$. The proposition easily follows by induction, since $1-\mu\alpha \geq 0$.
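Again in the Euclidean special case, the contraction $W(x^{t+1}) \leq (1-\mu\alpha)\hspace{0.02cm}W(x^t)$ can be observed numerically for an assumed $\alpha$-strongly convex quadratic (illustrative sketch only, with hypothetical test values):

```python
import numpy as np

# Euclidean sketch: strongly convex quadratic with minimiser x* = 0;
# alpha = smallest eigenvalue of A, H = largest (assumed test values).
A = np.diag([0.5, 2.0, 8.0])
grad = lambda x: A @ x
alpha = np.min(np.linalg.eigvalsh(A))
H = np.max(np.linalg.eigvalsh(A))
mu = 1.0 / H                       # admissible step size

W = lambda x: 0.5 * x @ x          # W(x) = d^2(x, x*)/2, with x* = 0

x = np.array([3.0, -1.0, 0.2])
for _ in range(100):
    x_next = x - mu * grad(x)
    # linear contraction: W(x^{t+1}) <= (1 - mu*alpha) W(x^t)
    assert W(x_next) <= (1 - mu * alpha) * W(x) + 1e-12
    x = x_next
```

The iterates therefore converge linearly to $x^*$, at the rate $(1-\mu\alpha)$ predicted by the proposition.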
Proof of Lemma <ref> : let $c$ denote the geodesic with $c(0) = x$ and $\dot{c}(0) = (-1/\bar{H}^\prime_{\scriptscriptstyle 0})\hspace{0.02cm}\mathrm{grad}\,f(x)$. By the same arguments as in the proof of Lemma <ref>, one has that $c(u) \in \bar{B}^\prime_{\scriptscriptstyle 0}$ for all $u \in [0,1]$. Therefore, letting $y = c(1)$ and writing down the Taylor expansion (<ref>),
f(y) - f(x) \leq (-1/\bar{H}^\prime_{\scriptscriptstyle 0})\Vert\mathrm{grad}\,f\Vert^2_x + (\bar{H}^\prime_{\scriptscriptstyle 0}/2)\Vert (1/\bar{H}^\prime_{\scriptscriptstyle 0})\hspace{0.02cm}\mathrm{grad}\,f(x)\Vert^2_x = (-1/2\bar{H}^\prime_{\scriptscriptstyle 0})\Vert\mathrm{grad}\,f\Vert^2_x
Multiplying this inequality by $-2\bar{H}^\prime_{\scriptscriptstyle 0\hspace{0.02cm}}$,
2\bar{H}^\prime_{\scriptscriptstyle 0}(f(x) - f(y)) \geq \Vert \mathrm{grad}\,f\Vert^2_x
Now, (<ref>) follows by noting that $f(x) - f(x^*) \geq f(x) - f(y)$.
§ A VOLUME GROWTH LEMMA
Lemma <ref> will be used in the proof of Proposition <ref>, to be carried out in <ref>. This lemma is purely geometric in content, and is therefore treated separately, beforehand.
Let $M$ be a Riemannian symmetric space, which belongs to the non-compact case (see <ref>). Then, in particular, $M$ is a Hadamard manifold.
Fix $x^* \in M$, and let $(r,\theta)$ be geodesic spherical coordinates, with origin at $x^*$. Any $z \in M$, other than $x^*$, is uniquely determined by its coordinates $(r,\theta)$, and will be written $z(r,\theta)$.
Recall the volume density function $\det(\mathcal{A}(r,\theta))$, from the integral formula (<ref>). This will be denoted $\lambda(r,\theta) =\det(\mathcal{A}(r,\theta))$.
Essentially, the following lemma states that the logarithmic rate of growth of the volume density function $\lambda(r,\theta)$ is bounded at infinity.
Let $M$ be a Riemannian symmetric space, which belongs to the non-compact case. Fix $x^* \in M$ and denote $r(x) = d(x^*,x)$ for $x \in M$. Then, for any $R > 0$,
\begin{equation} \label{eq:volmcmc}
\limsup_{r(x) \rightarrow \infty}\hspace{0.04cm}\frac{\sup_{\scriptscriptstyle z(r,\theta) \in B(x,R)}\hspace{0.04cm} \lambda(r,\theta)}
{\inf_{\scriptscriptstyle z(r,\theta) \in B(x,R)}\hspace{0.04cm} \lambda(r,\theta)}\,<\,\infty
\end{equation}
The proof of this lemma proceeds in the following way. Identify the unit sphere in $T_{x^*}M$ with $S^{n-1}$, and consider for $\theta \in S^{n-1}$ the self-adjoint curvature operator $R_\theta:T_{x^*}M \rightarrow T_{x^*}M$, given by
R_\theta(v) = -R(\theta,v)\hspace{0.02cm} \theta \hspace{0.2cm};\hspace{0.2cm} v \in T_{x^*}M
Recall that the Riemann curvature tensor is parallel (because $M$ is a symmetric space). Then, from (<ref>) and the definition of $\mathcal{A}(r,\theta)$, it follows that $\mathcal{A}(r,\theta)$ solves the Jacobi equation
\begin{equation} \label{eq:jacobiss}
\mathcal{A}^{\prime\prime} - R_{\theta}\hspace{0.02cm}\mathcal{A} = 0 \hspace{1cm} \mathcal{A}(0) = 0 \,,\, \mathcal{A}^\prime(0) = \mathrm{Id}_{x^*}
\end{equation}
where the prime denotes differentiation with respect to $r$. In the present case, all the eigenvalues of $R_\theta$ are positive. If $c^{\scriptscriptstyle 2}(\theta)$ runs through these eigenvalues, then it follows from (<ref>) that
\begin{equation} \label{eq:lambdamc1}
\lambda(r,\theta) \,=\, \prod_{c(\theta)}\left( \frac{\sinh(c(\theta)\hspace{0.02cm}r)}{c(\theta)}\right)^{\!\!m_{c(\theta)}}
\end{equation}
where $m_{c(\theta)}$ denotes the multiplicity of the eigenvalue $c^{\scriptscriptstyle 2}(\theta)$ of $R_\theta\hspace{0.02cm}$.
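As a sanity check of (<ref>), consider the assumed constant-curvature case (hyperbolic space of curvature $-c^{\scriptscriptstyle 2}$), where $R_\theta$ has the single eigenvalue $c^{\scriptscriptstyle 2}$ and the Jacobi equation becomes scalar. Integrating it numerically recovers the factor $\sinh(c\hspace{0.02cm}r)/c$ (illustrative sketch; the solver parameters are arbitrary):

```python
import numpy as np

# Scalar Jacobi equation a'' = c^2 a, a(0) = 0, a'(0) = 1, whose exact
# solution is a(r) = sinh(c r)/c (assumed constant-curvature case).
c = 0.7
dr = 1e-4                      # explicit Euler step size (assumed)
n_steps = 30_000               # integrate up to r = n_steps * dr = 3.0

a, a_prime = 0.0, 1.0
for _ in range(n_steps):
    # one explicit Euler step for the first-order system (a, a')
    a, a_prime = a + dr * a_prime, a_prime + dr * (c**2) * a

exact = np.sinh(c * n_steps * dr) / c   # closed-form solution at r = 3
```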
It is possible to express (<ref>) in a different form. Let $M = G/K$ where $K$ is the stabiliser in $G$ of $x^*$. Let $\mathfrak{g}$ and $\mathfrak{k}$ be the Lie algebras of $G$ and $K$, and $\mathfrak{g} = \mathfrak{k} + \mathfrak{p}$ the corresponding Cartan decomposition. Let $\mathfrak{a}$ be a maximal Abelian subspace of $\mathfrak{p}$, and recall that it is always possible to write $r\theta = \mathrm{Ad}(k)\,a$ for some $k \in K$ and $a \in \mathfrak{a}$ (see Lemma 6.3, Chapter V, in [10]). In this notation, $r = \Vert a \Vert_{x^*}$ and $c(\theta) = \lambda(a)/ \Vert a \Vert_{x^*}\hspace{0.02cm}$, where $\lambda$ ranges over the positive roots of $\mathfrak{g}$ with respect to $\mathfrak{a}$, with multiplicity $m_\lambda = m_{c(\theta)}$ (see Lemma 2.9, Chapter VII, in [10]). Replacing into (<ref>) gives
\begin{equation} \label{eq:lambdamc2}
\lambda(r,\theta) \,=\, \prod_{\lambda \in \Delta_+}\left( \frac{\sinh(\lambda(a))}{\lambda(a)/ \Vert a \Vert}\right)^{\!\!m_\lambda}
\end{equation}
Here, if the right-hand side is denoted by $f(a)$, then it is elementary that $\log\hspace{0.02cm}f(a)$ is a Lipschitz function, on the complement of any bounded subset of $\mathfrak{a}$ which contains the zero element of $\mathfrak{a}$.
Returning to (<ref>), let the supremum in the numerator be achieved at $(r_{\max\hspace{0.02cm}},\theta_{\max})$ and the infimum in the denominator be achieved at $(r_{\min\hspace{0.02cm}},\theta_{\min})$. Let $(k_{\max},a_{\max})$ and $(k_{\min},a_{\min})$ be the corresponding values of $k$ and $a$. Note that for $z(r,\theta) \in B(x,R)$, by the triangle inequality, $r \geq r(x) - R$. But, since $r = \Vert a \Vert_{x^*\hspace{0.02cm}}$, this also means $\Vert a \Vert_{x^*\hspace{0.02cm}} \geq r(x) - R$.
Therefore, if $r(x) > R$ then, as stated above, $\log\hspace{0.02cm}f(a)$ is a Lipschitz function, on the set of $a$ such that $\Vert a \Vert_{x^*\hspace{0.02cm}} \geq r(x) - R$. If $\mathrm{C}$ is the corresponding Lipschitz constant,
\begin{equation} \label{eq:prooflemvolmc1}
\frac{\sup_{\scriptscriptstyle z(r,\theta) \in B(x,R)}\hspace{0.04cm} \lambda(r,\theta)}
{\inf_{\scriptscriptstyle z(r,\theta) \in B(x,R)}\hspace{0.04cm} \lambda(r,\theta)}\,\leq\, \exp[\mathrm{C}\hspace{0.02cm}\Vert a_{\max} - a_{\min}\Vert_{x^*}]
\end{equation}
Now, (<ref>) will follow by showing that $\Vert a_{\max} - a_{\min}\Vert_{x^*} < 2R$ whenever $r(x) > R$.
To do so, let $z_{\max} = z(r_{\max\hspace{0.02cm}},\theta_{\max})$ and $z_{\min} = z(r_{\min\hspace{0.02cm}},\theta_{\min})$, and note $d(z_{\max\hspace{0.02cm}},z_{\min}) \leq 2R$. If $c: [0,1]\rightarrow M$ is a geodesic curve with $c(0) = z_{\min}$ and $c(1) = z_{\max\hspace{0.02cm}}$, then
\begin{equation} \label{eq:prooflemvolmc2}
\int^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\,\Vert \dot{c}(t)\Vert_{\scriptscriptstyle c(t)}\hspace{0.03cm}dt = d(z_{\max\hspace{0.02cm}},z_{\min}) \leq 2R
\end{equation}
On the other hand, if $c(t) = c(r(t)\hspace{0.02cm},\theta(t))$, then it is possible to write $r(t)\hspace{0.02cm}\theta(t) = \mathrm{Ad}(k(t))\,a(t)$, where $k(t)$ and $a(t)$ are differentiable curves in $K$ and $\mathfrak{a}$. It will be shown below that this implies
\begin{equation} \label{eq:prooflemvolmc3}
\Vert \dot{c}(t)\Vert^2_{\scriptscriptstyle c(t)} = \Vert \dot{a}(t)\Vert^2_{\scriptscriptstyle x^*} + \sum_{\lambda \in \Delta_+} \sinh^2(\lambda(a(t)))\hspace{0.02cm}\Vert \dot{k}_{\lambda}(t)\Vert^2_{\scriptscriptstyle x^*}
\end{equation}
where $\dot{k}_{\lambda}(t)$ is defined following (<ref>), below. Finally, from (<ref>) and (<ref>), it follows that
\Vert a_{\max} - a_{\min}\Vert_{x^*} \,\leq\, \int^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\Vert \dot{a}(t)\Vert_{\scriptscriptstyle x^*}\hspace{0.03cm}dt
\,\leq\, \int^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\Vert \dot{c}(t)\Vert_{\scriptscriptstyle c(t)}\hspace{0.03cm}dt \leq 2R
Replacing into (<ref>), this yields
\frac{\sup_{\scriptscriptstyle z(r,\theta) \in B(x,R)}\hspace{0.04cm} \lambda(r,\theta)}
{\inf_{\scriptscriptstyle z(r,\theta) \in B(x,R)}\hspace{0.04cm} \lambda(r,\theta)}\,\leq\, \exp(2\mathrm{C}R)
for all $x$ such that $r(x) > R$. However, this immediately implies (<ref>).
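The statement of the lemma can also be illustrated numerically in the assumed constant-curvature, rank-one case, where $\lambda(r,\theta) = \sinh^{n-1}(r)$ independently of $\theta$ (curvature $-1$). The sup/inf ratio over a ball $B(x,R)$ then stays bounded as $r(x) \rightarrow \infty$, decreasing towards $\exp(2(n-1)R)$, which matches the Lipschitz bound $\exp(2\mathrm{C}R)$ with $\mathrm{C} = n-1$ (illustrative sketch):

```python
import numpy as np

# Hyperbolic sketch (assumed constant curvature -1, rank one, so that
# lambda(r, theta) = sinh(r)^(n-1), independent of theta).
n, R = 3, 1.0
lam = lambda r: np.sinh(r) ** (n - 1)

ratios = []
for r0 in [5.0, 10.0, 20.0, 40.0]:          # centres moving off to infinity
    rs = np.linspace(r0 - R, r0 + R, 1000)   # radii covered by B(x, R)
    ratios.append(lam(rs).max() / lam(rs).min())

limit = np.exp(2 * (n - 1) * R)              # = exp(2 C R) with C = n - 1
```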
Proof of (<ref>) : in the notation of <ref>, $c(t) = \varphi(s(t)\hspace{0.02cm},a(t))$, where $s(t)$ is the representative of $k(t)$ in the quotient $K/K_{\mathfrak{a}\hspace{0.03cm}}$. Recall that $\varphi(s\hspace{0.02cm},a) = \mathrm{Exp}_o(\beta(s\hspace{0.02cm},a))$ where $\beta(s,a) = \mathrm{Ad}(s)\,a$ (the dependence on $t$ is now suppressed). Then, by differentiating with respect to $t$,
\dot{\beta}(s,a) \,=\, \mathrm{Ad}(s)\left(\dot{a} \,+\, [\dot{s},a]\right)
Further, by replacing from (<ref>),
\dot{c} \,=\, \exp(r\theta)\cdot\mathrm{sh}(R_{r\theta})( \dot{\beta}(s,a))
However, $\mathrm{Ad}(s)$ preserves norms, and $\mathrm{Ad}(s^{\scriptscriptstyle -1})\circ R_{r\theta}\circ \mathrm{Ad}(s) = R_{a\hspace{0.02cm}}$, as in <ref>. Therefore,
\begin{equation} \label{eq:ncmetric1}
\Vert \dot{c}\Vert^2_{\scriptscriptstyle c} \,=\, \left\Vert\mathrm{sh}(R_{a})\left(\dot{a} \,+\, [\dot{s},a]\right)\right\Vert^2_{\scriptscriptstyle x^*}
\end{equation}
and from the definition of $\mathrm{sh}(R_{a})$,
\begin{equation} \label{eq:ncmetric2}
\mathrm{sh}(R_{a}) \,=\, \Pi_{\mathfrak{a}} + \sum_{\lambda \in \Delta_+}\frac{\sinh(\lambda(a))}{\lambda(a)}\,\Pi_{\lambda}
\end{equation}
Now, one has the orthogonal decomposition $\dot{s} = \sum_{\lambda \in \Delta_+} (\xi_\lambda + d\hspace{0.02cm}\theta(\xi_\lambda))$ where $[a,\xi_\lambda] = \lambda(a)\hspace{0.03cm}\xi_\lambda$ and $d\hspace{0.02cm}\theta$ was introduced before (<ref>) (see Lemma 3.6, Chapter VI, in [10]). In turn, this yields the orthogonal decomposition
\begin{equation} \label{eq:ncmetric3}
[a,\dot{s}] \,=\,\sum_{\lambda \in \Delta_+} \lambda(a)\hspace{0.03cm}(\xi_\lambda - d\hspace{0.02cm}\theta(\xi_\lambda))
\end{equation}
Letting $\dot{s}_\lambda = (\xi_\lambda - d\hspace{0.02cm}\theta(\xi_\lambda))$, it follows from (<ref>) and (<ref>) that
\Vert \dot{c}\Vert^2_{\scriptscriptstyle c} \,=\, \Vert \dot{a}\Vert^2_{\scriptscriptstyle x^*} + \sum_{\lambda \in \Delta_+} \sinh^2(\lambda(a))\hspace{0.02cm}\Vert \dot{s}_{\lambda}\Vert^2_{\scriptscriptstyle x^*}
This is the same as (<ref>), once $\dot{k}$ is identified with its representative $\dot{s}$.
§ PROOF OF GEOMETRIC ERGODICITY
The proof of Proposition <ref> relies on the so-called geometric drift condition. This condition requires that there exists a function $V:M \rightarrow \mathbb{R}$ such that
\begin{equation} \label{eq:VVbis}
V(x) \geq \max\left\lbrace 1\hspace{0.02cm},d^{\hspace{0.03cm} 2}(x,x^*)\right\rbrace \text{ for some } x^* \in M
\end{equation}
\begin{equation} \label{eq:gdrift}
PV(x) \leq \,\lambda\hspace{0.02cm}V(x) + b\hspace{0.02cm}\mathbf{1}_{\scriptscriptstyle C}(x) \hspace{3cm}
\end{equation}
for some $\lambda \in (0,1)$ and $b \in (0,\infty)$, and where $C$ is a small set for $P$ (for the definition, see [51]).
If the geometric drift condition (<ref>) is verified, then the geometric ergodicity condition (<ref>) holds [51].
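For orientation, the drift condition (<ref>) can be verified in closed form for a standard one-dimensional toy example, the Gaussian AR(1) chain; this example is assumed purely for illustration and is unrelated to the manifold chain of the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) chain X' = rho*X + sigma*xi with standard Gaussian xi, and
# V(x) = 1 + x^2: here PV(x) = 1 + rho^2 x^2 + sigma^2 exactly, so the
# drift condition holds with lam = (1 + rho^2)/2 and a finite b.
rho, sigma = 0.8, 1.0
V = lambda x: 1.0 + x**2
lam = (1 + rho**2) / 2            # rho^2 < lam < 1
b = 1.0 + sigma**2                # crude constant, sufficient for the check

for x in [-10.0, -1.0, 0.0, 2.5, 30.0]:
    PV = 1 + rho**2 * x**2 + sigma**2       # exact one-step expectation of V
    assert PV <= lam * V(x) + b

# Monte Carlo confirmation of the one-step expectation at x = 2.5
x0 = 2.5
PV_exact = 1 + rho**2 * x0**2 + sigma**2
emp = np.mean(V(rho * x0 + sigma * rng.standard_normal(200_000)))
```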
The proof is a generalisation of the proof carried out in the special case where $M$ is a Euclidean space, in [49]. The idea is to use Assumptions (a1)–(a3) to show that the following two conditions hold,
\begin{equation} \label{eq:ergod1}
\limsup_{r(x)\rightarrow \infty}\hspace{0.03cm}\frac{PV(x)}{V(x)}\,<\,1
\end{equation}
\begin{equation} \label{eq:ergod2}
\phantom{li}\, \sup_{x \in M}\hspace{0.03cm} \frac{PV(x)}{V(x)}\,<\,\infty
\end{equation}
where $r(x) = d(x^*,x)$, and $V(x) = a\hspace{0.03cm}\pi^{\scriptscriptstyle -\frac{1}{2}}(x)$ with $a$ chosen so that $V(x) \geq 1$ for all $x \in M$. Then, under Assumption (a3), these two conditions are shown to imply (<ref>).
Let $(x_n)$ be a Markov chain in $M$, with transition kernel (<ref>), with proposed transition density $q(x\hspace{0.02cm},y) = q(d(x\hspace{0.02cm},y))$, and continuous, strictly positive invariant density $\pi$. Moreover, assume the proposed transition density satisfies Assumption (a3). If Conditions (<ref>) and (<ref>) are verified, then the geometric drift condition (<ref>) holds.
On the other hand, (<ref>) (which is just the same as (<ref>)) is a straightforward consequence of Assumption (a1), which implies the existence of strictly positive $\mu, R$ and $\pi_{\scriptscriptstyle R}$ such that
\begin{equation} \label{eq:preVV}
r(x) \geq R \;\Longrightarrow\; \pi(x) \leq \pi_{\scriptscriptstyle R}\hspace{0.02cm}\exp\left(-\mu\hspace{0.02cm}r^2(x)\right)
\end{equation}
Then, to obtain (<ref>), it is enough to choose $a = \max\left\lbrace 1,R^{\hspace{0.03cm}\scriptscriptstyle 2},\pi^{\scriptscriptstyle 1/2}_{\scriptscriptstyle R},2\hspace{0.02cm}\mu^{\scriptscriptstyle -1}\right\rbrace\hspace{0.02cm}$.
Proof of Lemma <ref> : the proof is almost identical to the proofs for random-walk Metropolis chains in Euclidean space [49, 52]. The main point is that Assumption (a3) implies that every non-empty bounded subset of $M$ is a small set for the transition kernel $P$ in (<ref>). With this in mind, the geometric drift condition (<ref>) follows almost directly from the two conditions (<ref>) and (<ref>). Indeed, (<ref>) implies that there exist $\lambda \in (0,1)$ and $R \in (0,\infty)$ such that
r(x) \geq R \;\Longrightarrow\; PV(x) \leq \lambda\hspace{0.02cm}V(x)
That is, (<ref>) is verified on $M - C$, where $C$ is the open ball $B(x^*,R)$. In addition, by (<ref>),
b = \left[ \sup_{x \in B(x^*,R)} V(x)\right]\,\left[\hspace{0.04cm} \sup_{x \in M}\hspace{0.03cm} \frac{PV(x)}{V(x)}\right]\,<\,\infty
Therefore, (<ref>) is also verified on $C$, since for $x \in C$,
PV(x) \leq b \leq \lambda\hspace{0.02cm}V(x) + b
Thus, (<ref>) is verified throughout $M$. It remains to note that $C$ is a small set, since it is bounded.
Now, the aim is to establish the two conditions (<ref>) and (<ref>). These will follow from Propositions <ref> and <ref>, below. Consider the proposed transition kernel
\begin{equation}\label{eq:propkernelQ}
Qf(x) \,=\, \int_M\,q(x\hspace{0.02cm},y)f(y)\hspace{0.02cm}\mathrm{vol}(dy)
\end{equation}
for any bounded measurable function $f:M\rightarrow \mathbb{R}$. If $f$ is the indicator function of a measurable set $A$, then it is usual to write $Qf(x) = Q(x,A)$. For $x \in M$, consider its acceptance region
A(x) = \left\lbrace y \in M: \pi(y) \geq \pi(x)\right\rbrace
Under the assumptions of Proposition <ref>, the following limit holds
\begin{equation} \label{eq:limQ}
\liminf_{r(x) \rightarrow \infty} Q(x\hspace{0.02cm},A(x))\,> \, 0
\end{equation}
Under the assumptions of Proposition <ref>, if (<ref>) holds, then the two conditions (<ref>) and (<ref>) are verified, where $V(x) = a\hspace{0.03cm}\pi^{\scriptscriptstyle -\frac{1}{2}}(x)$ with $a$ chosen so $V(x) \geq 1$ for all $x \in M$.
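The first of these two propositions can be illustrated by simulation in an assumed one-dimensional example: for a standard Gaussian target and a Gaussian random-walk proposal, the acceptance region is $A(x) = \lbrace y : |y| \leq |x|\rbrace$, and $Q(x,A(x))$ tends to $1/2$ as $|x| \rightarrow \infty$, so the liminf is strictly positive (illustrative sketch only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example: target pi = N(0,1), proposal q(x, .) = N(x, tau^2).
# Then A(x) = {y : |y| <= |x|}, and Q(x, A(x)) -> 1/2 as |x| -> infinity.
tau = 1.0

def Q_accept(x, n=200_000):
    y = x + tau * rng.standard_normal(n)       # draws from the proposal
    return np.mean(np.abs(y) <= np.abs(x))     # mass of the acceptance region

probs = [Q_accept(x) for x in [2.0, 5.0, 10.0, 50.0]]
```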
The proof of these two propositions will use the following fact, concerning the contour manifolds of the probability density function $\pi(x)$. For $x \in M$, the contour manifold of $x$ is the set $C_{x}$ of all $y \in M$ such that $\pi(y) = \pi(x)$. This is a hypersurface in $M$, whenever $\pi(x)$ is a regular value of $\pi$ (by the “regular level set theorem” [53]).
Fact : if $r(x)$ is sufficiently large, then $C_{x}$ can be parameterised by the unit sphere in $T_{x^*}M$. Precisely, it is possible to write
\begin{equation} \label{eq:contourm}
C_{x} = \left\lbrace \mathrm{Exp}_{x^*}\left( c(v)\hspace{0.02cm}v\right)\,; v \in S_{x^*}M\right\rbrace
\end{equation}
where $c$ is a positive continuous function on $S_{x^*}M$, the set of unit vectors $v$ in $T_{x^*}M$. Moreover, $A(x)$ is exactly the region inside of $C_{x\hspace{0.03cm}}$. Precisely, $y \in A(x)$ if and only if $y = \mathrm{Exp}_{x^*}(c\hspace{0.02cm}v)$ where $v \in S_{x^*}M$ and $c \leq c(v)$.
Proof of Proposition <ref> : by Assumption (a2), there exist $\delta > 0$ and $R > 0$ such that
\begin{equation} \label{eq:proofproofgergodic11}
r(y) \geq R\;\Longrightarrow\; \langle\mathrm{grad}\,r,n\rangle_{y}\,<\,-\delta
\end{equation}
Let $-c^{\scriptscriptstyle 2}$ be a lower bound on the sectional curvatures of $M$, and $\Lambda$ be a positive number with
\begin{equation} \label{eq:Lambda}
(\dim\hspace{0.03cm}M)^{\scriptscriptstyle \frac{1}{2}}\hspace{0.03cm}\Lambda \,\leq\, \frac{\delta}{2c}\hspace{0.02cm}\tanh(c\hspace{0.02cm}R)
\end{equation}
Now, for any $x \in M$ with $r(x) \geq R + \Lambda$, consider the set
\Omega(x) \,=\,\left\lbrace \mathrm{Exp}_{x}(-a\hspace{0.02cm}u)\,; a \in (0,\Lambda)\,,u \in S_xM\,, \Vert \mathrm{grad}\,r(x) - u\Vert_{x} \hspace{0.02cm}\leq \frac{\delta}{2}\right\rbrace
Let $y = \mathrm{Exp}_{x}(-a\hspace{0.02cm}u)$ be a point in $\Omega(x)$, and $\gamma(t)$ the unit-speed geodesic with $\gamma(0) = x$ and $\gamma(a) = y$. It is first proved that
\begin{equation} \label{eq:proofproofgergodic12}
\langle\dot{\gamma}\hspace{0.02cm},n\rangle_{\gamma(t)}\, > 0 \hspace{0.3cm} \text{for } t \in (0,a)
\end{equation}
Indeed, if $\Pi^t_{\scriptscriptstyle 0}$ denotes the parallel transport along $\gamma$ from $\gamma(0) = x$ to $\gamma(t)$, then the left-hand side of (<ref>) may be written
\begin{equation} \label{eq:proofproofgergodic13}
\langle\dot{\gamma}\hspace{0.02cm},n\rangle_{\gamma(t)} = -\,\langle\mathrm{grad}\,r\hspace{0.02cm},n\rangle_{\gamma(t)}
+\langle\Pi^t_{\scriptscriptstyle 0}(\mathrm{grad}\,r(x) - u)\hspace{0.02cm},n\rangle_{\gamma(t)}
+\langle\mathrm{grad}\,r - \Pi^t_{\scriptscriptstyle 0}(\mathrm{grad}\,r(x))\hspace{0.02cm},n\rangle_{\gamma(t)}
\end{equation}
which may be checked by adding together the three terms, and noting that $\dot{\gamma}(t) = \Pi^t_{\scriptscriptstyle 0}(-u)$, since $\gamma$ is a geodesic with $\dot{\gamma}(0) = -u$. But, by the triangle inequality
r(\gamma(t)) \,\geq\, r(x) - d(x\hspace{0.02cm},\gamma(t))\,>\, (R+\Lambda) - \Lambda = R
since $d(x^*,x) = r(x) \geq R + \Lambda$ and $d(x\hspace{0.02cm},\gamma(t)) \leq a \leq \Lambda$. Thus, it follows from (<ref>)
\begin{equation} \label{eq:proofproofgergodic14}
- \langle\mathrm{grad}\,r,n\rangle_{\gamma(t)}\,>\,\delta
\end{equation}
Moreover, since the parallel transport $\Pi^t_{\scriptscriptstyle 0}$ preserves norms, and since by definition of $\Omega(x)$, $\Vert\mathrm{grad}\,r(x) - u\Vert_{x} \leq \delta/2$, it follows from the Cauchy-Schwarz inequality
\begin{equation} \label{eq:proofproofgergodic15}
\langle\Pi^t_{\scriptscriptstyle 0}(\mathrm{grad}\,r(x) - u)\hspace{0.02cm},n\rangle_{\gamma(t)} \,\geq - \Vert \Pi^t_{\scriptscriptstyle 0}(\mathrm{grad}\,r(x) -u)\Vert_{\gamma(t)}= -\Vert\mathrm{grad}\,r(x) - u\Vert_{x} \geq -\delta/2
\end{equation}
On the other hand, let $(e_{\scriptscriptstyle i}\,;i=1,\ldots,n)$ be a parallel orthonormal base, along the geodesic $\gamma$. Then,
\langle \mathrm{grad}\,r - \Pi^t_{\scriptscriptstyle 0}(\mathrm{grad}\,r(x))\hspace{0.02cm},e_{\scriptscriptstyle i}\rangle_{\gamma(t)} \,=\,
\int^t_{\scriptscriptstyle 0}\left\langle\mathrm{Hess}\,r\cdot\dot{\gamma}\hspace{0.02cm},e_{\scriptscriptstyle i}\right\rangle_{\gamma(s)}ds
But, according to (<ref>) from Theorem <ref>,
\int^t_{\scriptscriptstyle 0}\left\langle\mathrm{Hess}\,r\cdot\dot{\gamma}\hspace{0.02cm},e_{\scriptscriptstyle i}\right\rangle_{\gamma(s)}ds\, \leq \int^t_{\scriptscriptstyle 0}c\hspace{0.02cm}\coth\left(c\hspace{0.02cm} r(\gamma(s))\right)ds\,\leq \Lambda\hspace{0.03cm}c\hspace{0.02cm}\coth\left(c\hspace{0.02cm} R\right)
Thus, using (<ref>), it follows by the Cauchy-Schwarz inequality
\begin{equation} \label{eq:proofproofgergodic16}
\langle \mathrm{grad}\,r - \Pi^t_{\scriptscriptstyle 0}(\mathrm{grad}\,r(x))\hspace{0.02cm},n\rangle_{\gamma(t)}\hspace{0.03cm}
\geq -\delta/2
\end{equation}
Finally, by adding (<ref>) to (<ref>) and (<ref>), it follows from (<ref>)
\langle\dot{\gamma}\hspace{0.02cm},n\rangle_{\gamma(t)}\, > \delta - \delta/2 - \delta/2 = 0
which is the same as (<ref>). Moving on, from (<ref>), it is possible to prove that
\begin{equation} \label{eq:proofproofgergodic17}
\Omega(x) \subset A(x)
\end{equation}
for all $x$ such that $r(x) \geq R + \Lambda$, where $A(x)$ is the acceptance region of $x$, defined after (<ref>).
To prove (<ref>), consider $y \in \Omega(x)$ and $\gamma(t)$ as before, with $\gamma(0) = x$ and $\gamma(a) = y$. Now, assume that $y \in C_{x\hspace{0.03cm}}$, the contour manifold of $x$, defined in (<ref>). Then, $\pi(\gamma(0)) = \pi(\gamma(a))$, so that, by the mean-value theorem, there exists $t \in (0,a)$ such that
\frac{d}{dt}\pi(\gamma(t)) = \langle\dot{\gamma}(t),\mathrm{grad}\,\pi\rangle_{\gamma(t)} = 0
But, from the definition of $n(x)$, this implies
\langle \dot{\gamma}(t),n\rangle_{\gamma(t)}=\,\Vert\mathrm{grad}\,\pi(x)\Vert^{\scriptscriptstyle -1}\hspace{0.03cm}\langle\dot{\gamma}(t),\mathrm{grad}\,\pi\rangle_{\gamma(t)} = 0
in contradiction with (<ref>). Thus, the assumption that $y \in C_{x}$ cannot hold. Since $y \in \Omega(x)$ is arbitrary, this means that
\begin{equation} \label{eq:proofproofgergodic18}
\Omega(x)\,\cap\, C_{x}\,=\, \varnothing
\end{equation}
However, note that $y_* = \mathrm{Exp}_{x}(-a\hspace{0.02cm}\mathrm{grad}\,r(x))$ belongs to $\Omega(x)$, as can be seen from the definition of $\Omega(x)$. Also, since $r(y_*) = r(x) - a$, it follows that $y_*$ is inside $C_{x\hspace{0.03cm}}$. Therefore, $y_* \in A(x)$, and the intersection of $\Omega(x)$ and $A(x)$ is non-empty. Finally, it is enough to note that the set $\Omega(x)$ is connected, since it is the image under $\mathrm{Exp}_{x}$ of a connected set. This implies that, if the intersection of $\Omega(x)$ and $R(x)$, the complement of $A(x)$, were non-empty, then $\Omega(x)$ would also intersect $C_{x\hspace{0.03cm}}$. Clearly, this would be in contradiction with (<ref>). Using (<ref>), it is now possible to prove (<ref>). Indeed, for $x$ such that $r(x) \geq R+\Lambda$, it follows from (<ref>) that
\begin{equation} \label{eq:proofproofgergodic19}
Q(x\hspace{0.02cm},A(x)) \geq Q(x\hspace{0.02cm},\Omega(x)) = \int_{\Omega(x)}\,q(x\hspace{0.02cm},y)\hspace{0.02cm}\mathrm{vol}(dy)
\end{equation}
where the last equality follows from (<ref>). However, by Assumption (a3),
\begin{equation} \label{eq:proofproofgergodic20}
\int_{\Omega(x)}\,q(x\hspace{0.02cm},y)\hspace{0.02cm}\mathrm{vol}(dy) \,\geq\,\varepsilon_{\scriptscriptstyle q}\times\mathrm{vol}\left(\Omega(x)\cap B(x,\delta_{\scriptscriptstyle q})\right)
\end{equation}
Now, to prove (<ref>), it only remains to show that
\begin{equation} \label{eq:proofproofgergodic21}
\mathrm{vol}\left(\Omega(x)\cap B(x,\delta_{\scriptscriptstyle q})\right) \,\geq \mathrm{c}> 0
\end{equation}
where the constant $\mathrm{c}$ does not depend on $x$. Indeed, it is then clear from (<ref>) and (<ref>) that
\liminf_{r(x) \rightarrow \infty} Q(x\hspace{0.02cm},A(x))\,\geq\, \varepsilon_{\scriptscriptstyle q}\times\mathrm{c}> 0
To obtain (<ref>), let $(r,\theta)$ be geodesic spherical coordinates, with origin at $x$. Using the integral formula (<ref>), after noting $\lambda(r,\theta) =\det(\mathcal{A}(r,\theta))$, it follows
\begin{equation} \label{eq:proofproofgergodic22}
\mathrm{vol}\left(\Omega(x)\cap B(x,\delta_{\scriptscriptstyle q})\right) \,=\, \int^{\tau}_{\scriptscriptstyle 0}\!\!\!\int_{\scriptscriptstyle S^{n-1}}\mathbf{1}\lbrace\Vert \mathrm{grad}\,r(x) - u(\theta)\Vert_{x} \leq \delta/2 \rbrace\hspace{0.04cm}\lambda(r,\theta)\hspace{0.03cm}dr\hspace{0.02cm}\omega_{n-1}(d\theta)
\end{equation}
where $\tau = \min\lbrace\Lambda,\delta_{\scriptscriptstyle q}\rbrace$, and the map $\theta \mapsto u(\theta)$ identifies $S^{n-1}$ with $S_xM$. Here, by (<ref>) from Theorem <ref>, $\lambda(r,\theta) \geq r^{n-1}$. Therefore, (<ref>) implies
\mathrm{vol}\left(\Omega(x)\cap B(x,\delta_{\scriptscriptstyle q})\right) \geq\left(\tau^{\scriptscriptstyle n}/n\right)\times\omega_{n-1}\!\left(\lbrace\Vert \mathrm{grad}\,r(x) - u(\theta)\Vert_{x} \leq \delta/2 \rbrace\right)
However, since the area measure $\omega$ is invariant by rotation, the area
\omega_{n-1}\!\left(\lbrace\Vert \mathrm{grad}\,r(x) - u(\theta)\Vert_{x} \leq \delta/2 \rbrace\right) = \varsigma
does not depend on $x$. Precisely, $\varsigma$ is equal to the area of a spherical cap, with angle equal to $2\hspace{0.02cm}\mathrm{acos}(1-\delta^{\scriptscriptstyle 2}/8)$.
Finally, (<ref>) is immediately obtained, by letting $\mathrm{c} = \left(\tau^{\scriptscriptstyle n}/n\right)\times \varsigma$.
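The spherical-cap constant $\varsigma$ can be cross-checked by Monte Carlo in the assumed case $n = 3$, where the normalised cap area is $(1-\cos\phi)/2$ with $\phi = \mathrm{acos}(1-\delta^{\scriptscriptstyle 2}/8)$, that is, $\delta^{\scriptscriptstyle 2}/16$ (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

# The cap {u in S^{n-1} : ||e1 - u|| <= delta/2} consists of directions at
# angle at most acos(1 - delta^2/8) from e1; for n = 3, its normalised area
# is (1 - cos(angle))/2 = delta^2/16.
n, delta = 3, 0.6
u = rng.standard_normal((500_000, n))
u /= np.linalg.norm(u, axis=1, keepdims=True)  # uniform points on S^{n-1}

frac = np.mean(np.linalg.norm(u - np.eye(n)[0], axis=1) <= delta / 2)
exact = delta**2 / 16
```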
Proof of Proposition <ref> : let $V(x) = a\hspace{0.03cm}\pi^{\scriptscriptstyle -\frac{1}{2}}(x)$, as in the proposition. Recall the transition kernel $P$ is given by (<ref>), which implies
\rho(x) = \int_M(1-\alpha(x\hspace{0.02cm},y))\hspace{0.02cm}q(x\hspace{0.02cm},y)\hspace{0.02cm}\mathrm{vol}(dy)
since the right-hand side of (<ref>) should integrate to $1$ when $f(x)$ is the constant function $f(x) = 1$. But, since $\alpha(x\hspace{0.02cm},y) = \mathrm{min}\left\lbrace1\hspace{0.02cm},\pi(y)/\pi(x) \right\rbrace$, it follows that $1 - \alpha(x\hspace{0.02cm},y) = 0$ when $y \in A(x)$, the acceptance region of $x$, defined after (<ref>). Thus,
\rho(x) = \int_{R(x)}\left[1 - \frac{\pi(y)}{\pi(x)}\right]q(x\hspace{0.02cm},y)\hspace{0.02cm}\mathrm{vol}(dy)
where $R(x)$, the complement of $A(x)$, is the rejection region of $x$. With this expression of $\rho(x)$, putting $f(x) = V(x)$ in (<ref>), it follows by a direct calculation that $PV(x)/V(x)$ is equal to
\begin{equation} \label{eq:proofproofgergodic24}
\int_{A(x)}q(x\hspace{0.02cm},y)\left[\frac{\pi(x)}{\pi(y)}\right]^{\! \frac{1}{2}\hspace{0.02cm}}\mathrm{vol}(dy) +
\int_{R(x)}q(x\hspace{0.02cm},y)\left( 1 - \left[\frac{\pi(y)}{\pi(x)}\right] + \left[\frac{\pi(y)}{\pi(x)}\right]^{\! \frac{1}{2}}\right)\mathrm{vol}(dy)
\end{equation}
Here, all the ratios are less than or equal to $1$, so that (<ref>) immediately implies (<ref>).
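The two-integral expression for $PV(x)/V(x)$ can be checked numerically in an assumed one-dimensional example (standard Gaussian target, unit-variance Gaussian random-walk proposal), by comparing a Monte Carlo estimate of one Metropolis step with a quadrature of the decomposition (illustrative sketch only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed example: target pi = N(0,1), proposal N(x, 1), V = pi^{-1/2}.
pi = lambda y: np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)
V = lambda y: pi(y) ** -0.5
x = 1.5

# Monte Carlo: one Metropolis step from x, then E[V(X_1)]/V(x)
N = 400_000
y = x + rng.standard_normal(N)
accept = rng.random(N) < np.minimum(1.0, pi(y) / pi(x))
x1 = np.where(accept, y, x)
mc = np.mean(V(x1)) / V(x)

# quadrature of the two-integral decomposition of PV/V
grid = np.linspace(x - 8, x + 8, 20_001)
dy = grid[1] - grid[0]
q = np.exp(-(grid - x)**2 / 2) / np.sqrt(2 * np.pi)
ratio = pi(grid) / pi(x)
inA = ratio >= 1.0                 # acceptance region A(x)
quad = np.sum(q[inA] * ratio[inA]**-0.5) * dy \
     + np.sum(q[~inA] * (1 - ratio[~inA] + ratio[~inA]**0.5)) * dy
```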
In order to prove (<ref>), it is enough to prove that
\begin{equation}\label{eq:proofproofgergodic25}
\lim_{r(x)\rightarrow\infty}\,\int_{A(x)}q(x\hspace{0.02cm},y)\left[\frac{\pi(x)}{\pi(y)}\right]^{\! \frac{1}{2}\hspace{0.02cm}}\mathrm{vol}(dy)\,=\,0\hspace{2.3cm}
\end{equation}
\begin{equation}\label{eq:proofproofgergodic23}
\lim_{r(x)\rightarrow\infty}\,\int_{R(x)}q(x\hspace{0.02cm},y)\left(\left[\frac{\pi(y)}{\pi(x)}\right]^{\! \frac{1}{2}}- \left[\frac{\pi(y)}{\pi(x)}\right] \right)\mathrm{vol}(dy) \,=\,0
\end{equation}
Indeed, if these two limits are replaced in (<ref>), it will follow that
\limsup_{r(x)\rightarrow\infty}\, \frac{PV(x)}{V(x)} = \limsup_{r(x)\rightarrow\infty}\,Q(x,R(x))
= \limsup_{r(x)\rightarrow\infty}\, 1 - Q(x,A(x)) < 1
where the inequality is obtained using (<ref>). However, this is the same as (<ref>). Thus, to complete the proof, it is enough to prove (<ref>) and (<ref>). The proofs of (<ref>) and (<ref>) being very similar, only the proof of (<ref>) is presented.
Proof of (<ref>) : this is divided into three steps. First, it is proved that
\begin{equation} \label{eq:LLL1}
\lim_{L\rightarrow \infty}\,\int_{\scriptscriptstyle A(x) - B(x,L)}q(x\hspace{0.02cm},y)\hspace{0.02cm}(\alpha(y,x))^{\scriptscriptstyle \frac{1}{2}}\hspace{0.03cm}\mathrm{vol}(dy) = 0 \hspace{0.5cm} \text{uniformly in $x$}
\end{equation}
where $\alpha(y,x) = \pi(x)/\pi(y)$. To prove (<ref>) note that $\alpha(y,x) \leq 1$ for $y \in A(x)$, and that $A(x) - B(x,L) \subset M - B(x,L)$. It follows that, for any $x \in M$,
\begin{equation} \label{eq:LLL2}
\int_{\scriptscriptstyle A(x) - B(x,L)}q(x\hspace{0.02cm},y)\hspace{0.02cm}(\alpha(y,x))^{\scriptscriptstyle \frac{1}{2}}\hspace{0.03cm}\mathrm{vol}(dy) \,\leq\,
\int_{\scriptscriptstyle M - B(x,L)}q(x\hspace{0.02cm},y)\hspace{0.03cm}\mathrm{vol}(dy)
\end{equation}
Since $M$ is a symmetric space, there exists an isometry $g$ of $M$ such that $g\cdot x^* = x$. Since $g$ preserves Riemannian volume,
\int_{\scriptscriptstyle M - B(x,L)}q(x\hspace{0.02cm},y)\hspace{0.03cm}\mathrm{vol}(dy) =
\int_{\scriptscriptstyle M - B(x^*,L)}q(x\hspace{0.02cm},g\cdot y)\hspace{0.03cm}\mathrm{vol}(dy)
But, $q(x\hspace{0.02cm},y) = q(d(x\hspace{0.02cm},y))$ depends only on the Riemannian distance $d(x\hspace{0.02cm},y)$. This implies that $q(x\hspace{0.02cm},g\cdot y) = q(x^*,y)$, since $g$ is an isometry. Thus,
\int_{\scriptscriptstyle M - B(x,L)}q(x\hspace{0.02cm},y)\hspace{0.03cm}\mathrm{vol}(dy) =
\int_{\scriptscriptstyle M - B(x^*,L)}q(x^*,y)\hspace{0.03cm}\mathrm{vol}(dy)
Here, the right-hand side does not depend on $x$, and tends to zero as $L \rightarrow \infty$, as can be seen by putting $x = x^*$ in (<ref>). Now (<ref>) follows directly from (<ref>).
Second, assume that $r(x)$ is so large that the level set $C_x$ verifies (<ref>) and $A(x)$ is equal to the region inside $C_{x\hspace{0.03cm}}$. It is then proved that, for any $L > 0$,
\begin{equation} \label{eq:KKK1}
\lim_{r(x)\rightarrow\infty}\,\int_{\scriptscriptstyle A(x) \cap B(x,L) - C_x(\varepsilon)}
q(x\hspace{0.02cm},y)\hspace{0.02cm}(\alpha(y,x))^{\scriptscriptstyle \frac{1}{2}}\hspace{0.03cm}\mathrm{vol}(dy) = 0
\end{equation}
where $C_x(\varepsilon)$ is the tubular neighborhood of $C_x$ given by
C_x(\varepsilon) \,=\,\left\lbrace \mathrm{Exp}_y\left(s\hspace{0.02cm}\mathrm{grad}\,r(y)\right)\,;y \in C_x\,,|s|<\varepsilon\right\rbrace
Because of (<ref>), to prove (<ref>) it is enough to prove that
\begin{equation} \label{eq:KKK2}
\lim_{r(x)\rightarrow\infty}\,\alpha(y,x) = 0 \hspace{0.5cm} \text{uniformly in $y \in A(x) \cap B(x,L) - C_x(\varepsilon)$}
\end{equation}
However, this follows by Assumption (a1). Indeed, this assumption guarantees the existence of some strictly positive $\mu\hspace{0.02cm},R$ and $\pi_{\scriptscriptstyle R\hspace{0.04cm}}$, as in (<ref>). Then, take $r(x) \geq R + \varepsilon$ and note that, by (<ref>), for $y$ as in (<ref>), if $r(y) \leq R$,
\begin{equation} \label{eq:KKK3}
\alpha(y,x) \leq \frac{\pi_{\scriptscriptstyle R}\hspace{0.02cm}\exp\left(-\mu\hspace{0.02cm}r^2(x)\right)}{\pi(y)} \leq
\frac{\pi_{\scriptscriptstyle R}\hspace{0.02cm}\exp\left(-\mu\hspace{0.02cm}r^2(x)\right)}{\min_{r(y)\leq R}\pi(y)}
\end{equation}
where the right-hand side converges to zero as $r(x) \rightarrow \infty$, uniformly in $y$. On the other hand, if $r(y) > R$, let $c$ be the unit-speed geodesic connecting $x^*$ to $y$. Since $y \in A(x)$ (so $y$ lies inside $C_x$) there exists some $r \geq r(y)$ such that $c(r) \in C_{x\hspace{0.03cm}}$. Moreover, since $y \notin C_x(\varepsilon)$, it follows that $r > r(y) + \varepsilon$. Then, it is possible to show, by Assumption (a1),
\alpha(y,x) = \frac{\pi(c(r))}{\pi(c(r(y)))} \leq \,\exp[-\mu\left( r^{\scriptscriptstyle 2} - r^{\scriptscriptstyle 2}(y)\right)]
By a direct calculation, this implies
\begin{equation} \label{eq:KKK4}
\alpha(y,x) \leq\exp[-\mu\left( 2\hspace{0.03cm}\varepsilon r - \varepsilon^{\scriptscriptstyle 2}\right)] \leq
\exp[-\mu\left( 2\hspace{0.03cm}\varepsilon r(w) - \varepsilon^{\scriptscriptstyle 2}\right)]
\end{equation}
where $w \in C_x$ is such that $r(w)$ minimises $r(w^\prime)$ over all $w^\prime \in C_{x\hspace{0.03cm}}$. Note that the right-hand side of (<ref>) does not depend on $y$. Moreover, $\pi(w)$ tends to zero as $r(x) \rightarrow \infty$, since $\pi(w) = \pi(x)$, and $\pi(x)$ tends to zero as $r(x) \rightarrow \infty$. Therefore, since $\pi$ is positive and continuous, it follows that $r(w) \rightarrow \infty$ as $r(x)\rightarrow \infty$. But this implies that the right-hand side of (<ref>) converges to zero as $r(x) \rightarrow \infty$, uniformly in $y$. Now, (<ref>) follows from (<ref>).
The third, and final, step is to show that, for any $L>0$,
\begin{equation} \label{eq:III1}
\lim_{\varepsilon\rightarrow 0}\limsup_{r(x)\rightarrow \infty}\,\int_{\scriptscriptstyle A(x) \cap B(x,L) \cap C_x(\varepsilon)}
q(x\hspace{0.02cm},y)\hspace{0.02cm}(\alpha(y,x))^{\scriptscriptstyle \frac{1}{2}}\hspace{0.03cm}\mathrm{vol}(dy) = 0
\end{equation}
For brevity, the proof is carried out under the assumption that $q(x\hspace{0.02cm},y)$ is bounded, uniformly in $x$ and $y$. If this assumption holds, then (<ref>) follows immediately by showing
\begin{equation} \label{eq:III2}
\lim_{\varepsilon\rightarrow 0}\limsup_{r(x)\rightarrow \infty}\,\mathrm{vol}\left( B(x,L) \cap C_x(\varepsilon)\right) = 0
\end{equation}
To show (<ref>), let $\theta \mapsto v(\theta)$ identify the Euclidean unit sphere $S^{n-1}$ with $S_{x^*}M$, and consider the following sets
\begin{array}{rl}
T(x) =& \lbrace \theta \in S^{n-1}: \mathrm{Exp}_{x^*}(rv(\theta)) \in B(x,L)\;\;\text{for some $r\geq 0$}\rbrace \\[0.2cm]
S(x) =& \lbrace \mathrm{Exp}_{x^*}(rv(\theta))\,; \theta \in T(x)\;\text{and}\;|r-r(x)|\leq L\rbrace
\end{array}
Using the triangle inequality, it is possible to show that
\begin{equation}\label{eq:III3}
B(x,L) \subset S(x) \subset B(x,3L)
\end{equation}
To estimate the volume in (<ref>), let $(r,\theta)$ be geodesic spherical coordinates, with origin at $x^*$. The first inclusion in (<ref>) implies $\mathrm{vol}\left(B(x,L)\cap C_x(\varepsilon)\right) \leq \mathrm{vol}\left(S(x)\cap C_x(\varepsilon)\right)$, and this yields
\mathrm{vol}\left(B(x,L)\cap C_x(\varepsilon)\right) \,\leq\, \int^{\scriptscriptstyle r(x)+L}_{\scriptscriptstyle r(x)-L}\!\!\int_{\scriptscriptstyle T(x)}\mathbf{1}_{C_x(\varepsilon)}\left( \mathrm{Exp}_{x^*}(rv(\theta))\right)\hspace{0.03cm}\lambda(r,\theta)\hspace{0.02cm}dr\hspace{0.02cm}\omega_{n-1}(d\theta)
in the notation of (<ref>), where $\lambda(r,\theta) = \det(\mathcal{A}(r,\theta))$. Bounding the last integral from above,
\begin{equation} \label{eq:III4}
\mathrm{vol}\left(B(x,L)\cap C_x(\varepsilon)\right) \,\leq\,2\varepsilon\hspace{0.02cm}\omega_{n-1}(T(x))\hspace{0.02cm}\sup_{z(r,\theta)\in B(x,3L)}\hspace{0.04cm} \lambda(r,\theta)
\end{equation}
where $z(r,\theta) = \mathrm{Exp}_{x^*}(rv(\theta))$. Similarly, the second inclusion in (<ref>) implies
\mathrm{vol}\left( B(x,3L)\right)\,\geq\,\mathrm{vol}(S(x))=
\int^{\scriptscriptstyle r(x)+L}_{\scriptscriptstyle r(x)-L}\!\!\int_{\scriptscriptstyle T(x)}\lambda(r,\theta)\hspace{0.02cm}dr\hspace{0.02cm}\omega_{n-1}(d\theta)
and bounding the last integral from below gives
\begin{equation} \label{eq:III5}
\mathrm{vol}\left( B(x,3L)\right)\,\geq\,2L
\hspace{0.02cm}\omega_{n-1}(T(x))\hspace{0.02cm}\inf_{z(r,\theta)\in B(x,3L)}\hspace{0.04cm} \lambda(r,\theta)
\end{equation}
From (<ref>) and (<ref>), it follows that
\begin{equation} \label{eq:III6}
\mathrm{vol}\left(B(x,L)\cap C_x(\varepsilon)\right) \,\leq\,
\left(\varepsilon\middle/L\right)\mathrm{vol}\left(B(x,3L)\right)\frac{\sup_{z(r,\theta)\in B(x,3L)}\hspace{0.04cm} \lambda(r,\theta)}{\inf_{z(r,\theta)\in B(x,3L)}\hspace{0.04cm} \lambda(r,\theta)}
\end{equation}
However, by the volume growth lemma <ref>, from <ref>,
\limsup_{r(x) \rightarrow \infty}\hspace{0.04cm}\frac{\sup_{z(r,\theta)\in B(x,3L)}\hspace{0.04cm} \lambda(r,\theta)}{\inf_{z(r,\theta)\in B(x,3L)}\hspace{0.04cm} \lambda(r,\theta)}= \mathrm{R}\,<\,\infty
Replacing into (<ref>), and noting that, since $M$ is a symmetric space,
\mathrm{vol}\left(B(x,3L)\right) = \mathrm{vol}\left(B(x^*,3L)\right)
it follows that
\limsup_{r(x) \rightarrow \infty}\hspace{0.04cm}\mathrm{vol}\left(B(x,L)\cap C_x(\varepsilon)\right) \leq
\left(\varepsilon\middle/L\right)\mathrm{vol}\left(B(x^*,3L)\right)\mathrm{R}
This immediately implies (<ref>), and therefore (<ref>).
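The volume estimate of this step can be illustrated in the Euclidean plane, where $r(y) = |y|$, the level set $C_x$ is the circle of radius $r(x)$, and $C_x(\varepsilon)$ is the annulus of width $2\varepsilon$ around it. The following Monte Carlo sketch (radii, sample sizes and tolerances are illustrative assumptions, not from the text) checks the final inequality of this step, namely that $\mathrm{vol}(B(x,L)\cap C_x(\varepsilon))$ is at most $(\varepsilon/L)\hspace{0.02cm}\mathrm{vol}(B(x,3L))$ times the ratio of the supremum to the infimum of $\lambda$.

```python
import numpy as np

rng = np.random.default_rng(7)

# Euclidean-plane illustration: r(y) = |y|, C_x = circle of radius r(x),
# C_x(eps) = annulus { r(x) - eps < |y| < r(x) + eps }, lambda(r, theta) = r
rx, L, eps = 50.0, 2.0, 0.1
x = np.array([rx, 0.0])

# Monte Carlo estimate of vol(B(x, L) ∩ C_x(eps)), sampling uniformly in B(x, L)
N = 200_000
theta = rng.uniform(0.0, 2.0 * np.pi, N)
rad = L * np.sqrt(rng.uniform(size=N))
pts = x + np.stack([rad * np.cos(theta), rad * np.sin(theta)], axis=1)
in_annulus = np.abs(np.linalg.norm(pts, axis=1) - rx) < eps
vol_cap = np.pi * L ** 2 * in_annulus.mean()

# right-hand side: (eps/L) vol(B(x, 3L)) times the ratio sup/inf of lambda(r, theta)
ratio = (rx + 3 * L) / (rx - 3 * L)
rhs = (eps / L) * np.pi * (3 * L) ** 2 * ratio
assert vol_cap <= rhs
```

For $r(x)$ large compared with $L$, the annulus cuts the small disc in a nearly straight band of area about $4\varepsilon L$, well below the right-hand side, and the ratio of the densities $\lambda$ tends to one, in line with the volume growth lemma.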
Conclusion : finally, (<ref>) can be obtained by combining (<ref>), (<ref>) and (<ref>). Precisely, the integral under the limit in (<ref>) can be decomposed into the sum of three integrals
\left(\int_{\scriptscriptstyle A(x) - B(x,L)}+
\int_{\scriptscriptstyle A(x) \cap B(x,L) - C_x(\varepsilon)}+
\int_{\scriptscriptstyle A(x) \cap B(x,L) \cap C_x(\varepsilon)\hspace{0.02cm}}
\right)q(x\hspace{0.02cm},y)\hspace{0.02cm}(\alpha(y,x))^{\scriptscriptstyle \frac{1}{2}}\hspace{0.03cm}\mathrm{vol}(dy)
By (<ref>), for any $\Delta > 0$, it is possible to choose $L$ to make the first integral less than $\Delta/3$, irrespective of $x$ and $\varepsilon$. By (<ref>), it is possible to choose $\varepsilon$ to make the third integral less than $\Delta/3$, for all $x$ with sufficiently large $r(x)$. With $L$ and $\varepsilon$ chosen in this way, (<ref>) implies the second integral is less than $\Delta/3$, if $r(x)$ is sufficiently large. Then, the sum of the three integrals is less than $\Delta$, and (<ref>) follows, because $\Delta$ is arbitrary.
CHAPTER: STOCHASTIC APPROXIMATION
The present chapter is based on [2] and [3]. It aims to give a general treatment, under realistic assumptions, of two problems related to stochastic approximation on Riemannian manifolds.
The first problem is to estimate the rate of convergence of a stochastic approximation scheme, to the set of critical points (i.e. zeros) of its mean field.
* <ref> introduces the concept of an approximate critical point of the mean field.
* <ref> and <ref> provide non-asymptotic upper bounds, for the number of iterations necessary, for a stochastic approximation scheme to find an approximate critical point of its mean field (specifically, exponential schemes are considered in <ref>, and retraction schemes in <ref>).
* <ref> and <ref> apply the results of <ref> and <ref> to two examples : estimation of a mixture of Gaussian densities, and principal component analysis (PCA).
The second problem is to derive a central limit theorem, describing the asymptotic behavior of constant-step-size exponential schemes, defined on Hadamard manifolds.
* <ref> states the central limit theorem : under realistic assumptions, a constant-step-size exponential scheme defines a geometrically ergodic Markov chain. As the step-size goes to zero, a re-scaled version of this Markov chain has the same asymptotic behavior as a linear diffusion process, with a multivariate normal invariant distribution.
* <ref> details the proof of this central limit theorem.
As a follow-up on the second problem, one final example is studied, as part of the present chapter.
* <ref> introduces the Riemannian AR(1) model : a Markov chain $(x_t\,;t=0,1,\ldots)$ with values in a Hadamard manifold $M$, where each $x_{t+1}$ is a geodesic convex combination of the old $x_t$ and of a new input $y_{\hspace{0.02cm}t+1}$, with respective weights $1-\mu$ (for $x_t$) and $\mu$ (for $y_{\hspace{0.02cm}t+1}$), for some $\mu \in (0,1)$. If $(y_t\,;t=1,2,\ldots)$ are independent samples from a probability distribution $P$ on $M$, then the Markov chain $(x_t)$ is geometrically ergodic, and its invariant distribution concentrates at the Riemannian barycentre of $P$, as $\mu$ goes to zero.
§ APPROXIMATE CRITICAL POINTS
Here, the main object of study will be a stochastic approximation scheme, on a Riemannian manifold $M$. Given some initial value $x_{\scriptscriptstyle 0} \in M$, and independent observations $(y_t\,;t=1,2,\ldots)$, drawn from a probability distribution $P$ on a measurable space $Y$, this computes a sequence of iterates $(x_t\,;t = 1,2,\ldots)$, according to the update rule
\begin{equation} \label{eq:retscheme}
x_{t+1} = \mathrm{Ret}_{x_t}\!\left(\mu_{\scriptscriptstyle t+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}t+1}}(x_t)\right)
\end{equation}
where $\mathrm{Ret}:TM \rightarrow M$ is a retraction, $(\mu_{\scriptscriptstyle t}\,;t=1,2,\ldots)$ is a sequence of (positive) step-sizes, and the map $X:Y\times M \rightarrow TM$ is such that $X(y,x) = X_y(x)$ always belongs to $T_xM$.
One says that $X:Y\times M \rightarrow TM$ is a random vector field. The corresponding mean vector field $X:M \rightarrow TM$ is given by
\begin{equation} \label{eq:meanfield}
X(x) = \int_{Y}\,X_y(x)\hspace{0.03cm}P(dy)
\end{equation}
which means that the noise vector field, given by $e_y(x) = X_y(x) - X(x)$, has zero expectation. In the following, it will be assumed that the variance of this noise vector field is not too large,
\begin{equation} \label{eq:variancecontrol}
\int_{Y}\,\Vert e_y(x)\Vert^2_x\hspace{0.04cm}P(dy) \leq \sigma^2_{\scriptscriptstyle 0} + \sigma^2_{\scriptscriptstyle 1}\hspace{0.02cm}\Vert X\Vert^2_x
\end{equation}
for some constants $\sigma^2_{\scriptscriptstyle 0}\hspace{0.02cm},\sigma^2_{\scriptscriptstyle 1}$.
The scheme (<ref>) is often used to search for zeros (critical points) of the mean vector field $X$ [Zeros of vector fields are also called “singular points", and “stationary points". The term “critical points" seems more in line with the context of stochastic approximation, where the mean vector field is often a gradient vector field, so the scheme (<ref>) is a stochastic gradient scheme, used to solve some optimisation problem.]. After $t$ iterations, this scheme will have generated the iterates $(x_s\,;s = 1,\ldots,t)$. One may randomly sample these, by looking at $x_{\tau_t}$ where $\tau_t$ follows a discrete probability distribution
\begin{equation} \label{eq:tautt}
\mathbb{P}(\tau_t = s) = \frac{\mu_{s+1}}{\sum^t_{r=1}\mu_{r+1}} \hspace{1cm} s = 1,\ldots, t
\end{equation}
Then, the scheme is said to have found an approximate critical point (precisely, an $\epsilon$-critical point, for some suitable accuracy $\epsilon > 0$) in expectation, if $\mathbb{E}\Vert X(x_{\tau_t})\Vert^2 \leq \epsilon$.
For example, note that if $\mu_{\scriptscriptstyle t} = \mu$ is a constant, so (<ref>) is a constant-step-size scheme,
\mathbb{E}\left[\Vert X(x_{\tau_t})\Vert^2_{x_{\tau_t}}\right] = \frac{1}{t}\sum^t_{s=1} \mathbb{E}\left[\Vert X(x_{s})\Vert^2_{x_{s}}\right]
is just the average, over the first $t$ iterates, of the expected norm of the mean field.
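The update rule and the random iterate selection above can be sketched in a few lines. The following is a minimal Euclidean instance, with $M = \mathbb{R}^2$, $\mathrm{Exp}_x(v) = x + v$ and mean field $X(x) = -x$; the step-size schedule, noise scale and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euclidean instance of the scheme: M = R^2, Exp_x(v) = x + v,
# mean field X(x) = -x, zero-mean noise e_y(x) = y
def X_y(y, x):
    return -x + y

mus = [0.1 / np.sqrt(s) for s in range(1, 202)]   # step-sizes mu_1, ..., mu_201
x = np.array([2.0, -1.0])                         # initial value x_0
iterates = []                                     # will hold x_1, ..., x_201
for mu in mus:
    y = rng.normal(scale=0.1, size=2)
    x = x + mu * X_y(y, x)            # x_{t+1} = Exp_{x_t}(mu_{t+1} X_y(x_t))
    iterates.append(x)

# sample tau_t over s = 1, ..., t with P(tau_t = s) proportional to mu_{s+1}
t = len(mus) - 1
w = np.array(mus[1:t + 1])                        # weights mu_2, ..., mu_{t+1}
w /= w.sum()
s = rng.choice(np.arange(1, t + 1), p=w)
x_tau = iterates[s - 1]                           # the selected iterate x_s
```

With a constant step-size the sampling weights are uniform, recovering the plain averaging displayed above.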
In order to study the stochastic approximation scheme (<ref>), it is helpful to introduce a Lyapunov function $V:M\rightarrow \mathbb{R}$. This is a positive function, which is continuously differentiable, and has $\ell$-Lipschitz gradient, in the sense of (<ref>). It is moreover assumed to satisfy
\begin{equation} \label{eq:lyapunov}
c\hspace{0.02cm}\Vert X\Vert^2_x \leq - \langle \mathrm{grad}\,V,X\rangle_x
\end{equation}
for some constant $c > 0$.
Example 1 : let $M = S^n \subset \mathbb{R}^{n+1}$, the unit sphere of dimension $n$. If $x^*$ is some critical point of the mean field $X$, then one may choose $V(x) = 1-\cos d(x\hspace{0.02cm},x^*)$, where $d(x\hspace{0.02cm},x^*)$ denotes the Riemannian distance between $x$ and $x^*$. In this case, $V$ is positive and has $1$-Lipschitz gradient.
Example 2 : let $M$ be a Hadamard manifold, with sectional curvatures bounded below by $\kappa_{\min} = -c^{\hspace{0.02cm}\scriptscriptstyle 2}$. If $x^*$ is some critical point of the mean field $X$, then one may choose $V(x) = V_{x^*}(x)$, for some $\delta >0$, as in (<ref>). From Proposition <ref> and Lemma <ref>, $V$ is positive and has $(1+\delta\hspace{0.02cm}c)$-Lipschitz gradient.

Lemma <ref> will be essential to all further analysis of the scheme (<ref>). The proof of this lemma, being somewhat elementary, is not given in detail.
If $V:M\rightarrow \mathbb{R}$ has $\ell$-Lipschitz gradient, then
\begin{equation} \label{eq:liptaylor}
\left|V(\mathrm{Exp}_x(v)) - V(x) - \langle\mathrm{grad}\,V,v\rangle_{x}\right|\leq (\ell/2)\hspace{0.02cm}\Vert v\Vert^2_x
\end{equation}
for any $x \in M$ and $v \in T_xM$.
Sketch of proof : consider the geodesic $c:[0,1]\rightarrow M$, given by $c(t) = \mathrm{Exp}_x(t\hspace{0.02cm}v)$. Then, let $V(t) = V(c(t))$ and note that $V^\prime(t) = \langle\mathrm{grad}\,V,\dot{c}\rangle_{c(t)\hspace{0.02cm}}$. Let $\Pi^{\scriptscriptstyle 0}_{t}$ denote parallel transport along $c$, from $c(t)$ to $c(0)$. Since this preserves scalar products, and $\dot{c}$ is parallel,
\begin{equation} \label{eq:vprime1}
V^\prime(t) = \langle\mathrm{grad}\,V,\dot{c}\rangle_{c(0)} + \langle \Pi^{\scriptscriptstyle 0}_{t}\left(\mathrm{grad}\,V_{c(t)}\right) - \mathrm{grad}\,V_{c(0)},\dot{c}\rangle_{c(0)}
\end{equation}
Then, using (<ref>), it may be shown that
\begin{equation} \label{eq:vprime2}
\left| \langle \Pi^{\scriptscriptstyle 0}_{t}\left(\mathrm{grad}\,V_{c(t)}\right) - \mathrm{grad}\,V_{c(0)},\dot{c}\rangle_{c(0)}\right| \leq
\ell\hspace{0.02cm}t(L(c))^2
\end{equation}
Since $c(0) = x$ and $\dot{c}(0) = v$, (<ref>) follows by replacing (<ref>) into (<ref>), and integrating over $t$.
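The inequality just proved can be checked numerically. Below is a small sketch on the unit sphere $S^2$, using the function $V(x) = 1-\cos d(x\hspace{0.02cm},x^*)$ of Example 1, for which $\ell = 1$; the sphere formulas are standard, while the sample count and tolerance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x_star = np.array([0.0, 0.0, 1.0])

def exp_sphere(x, v):
    # Riemannian exponential on the unit sphere S^2 in R^3
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * (v / nv)

def V(x):
    # V(x) = 1 - cos d(x, x*) = 1 - <x, x*> on the unit sphere
    return 1.0 - x @ x_star

def grad_V(x):
    # Riemannian gradient: tangential projection of the Euclidean gradient -x*
    return (x @ x_star) * x - x_star

# check |V(Exp_x(v)) - V(x) - <grad V, v>| <= (1/2)|v|^2 for random x and v
for _ in range(1000):
    x = rng.normal(size=3); x /= np.linalg.norm(x)
    v = rng.normal(size=3); v -= (v @ x) * x      # project v onto T_x S^2
    lhs = abs(V(exp_sphere(x, v)) - V(x) - grad_V(x) @ v)
    assert lhs <= 0.5 * (v @ v) + 1e-9
```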
§ EXPONENTIAL SCHEMES
Consider now the case where $\mathrm{Ret} = \mathrm{Exp}$, in (<ref>). That is,
\begin{equation} \label{eq:expscheme}
x_{t+1} = \mathrm{Exp}_{x_t}\!\left(\mu_{\scriptscriptstyle t+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}t+1}}(x_t)\right)
\end{equation}
For this exponential scheme, Proposition <ref> provides a non-asymptotic bound on $\mathbb{E}\Vert X(x_{\tau_t})\Vert^2$, where
$\tau_t$ was defined in (<ref>). This proposition uses the notation
\begin{equation} \label{eq:barmup}
\lbrace\mu^{\scriptscriptstyle p}\rbrace_{t} = \frac{\sum^t_{s=1} \mu^{\scriptscriptstyle p+1}_{s+1}}{\sum^t_{s=1}\mu^{\scriptscriptstyle \phantom{p+1}}_{s+1}}
\end{equation}
which is motivated by the fact that if $\mu_{\scriptscriptstyle t} = \mu$ is a constant, so (<ref>) is a constant-step-size scheme, then $\lbrace\mu^{\scriptscriptstyle p}\rbrace_{t} = \mu^{\scriptscriptstyle p}$. In this spirit, $\lbrace\mu^{\scriptscriptstyle 1}\rbrace_{t}$ will simply be written $\lbrace\mu\rbrace_{t}$, throughout the following.
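The averaged step-sizes $\lbrace\mu^{\scriptscriptstyle p}\rbrace_{t}$ can be computed directly; the helper below is hypothetical, and the decreasing schedule $\mu_s = 1/\sqrt{s}$ is an illustrative assumption.

```python
import numpy as np

def mu_avg(mus, p, t):
    # {mu^p}_t = (sum_{s=1}^t mu_{s+1}^{p+1}) / (sum_{s=1}^t mu_{s+1}),
    # where mus[s] = mu_s for s = 1, ..., t+1 (mus[0] is unused padding)
    num = sum(mus[s + 1] ** (p + 1) for s in range(1, t + 1))
    den = sum(mus[s + 1] for s in range(1, t + 1))
    return num / den

t = 1000
const = [0.0] + [0.5] * (t + 1)                 # constant step-size mu = 0.5
assert abs(mu_avg(const, -1, t) - 2.0) < 1e-9   # {mu^-1}_t = mu^-1
assert abs(mu_avg(const, 1, t) - 0.5) < 1e-9    # {mu}_t = mu

decr = [0.0] + [1.0 / np.sqrt(s) for s in range(1, t + 2)]  # mu_s = 1/sqrt(s)
assert mu_avg(decr, 1, t) < 0.15        # {mu}_t becomes small
assert mu_avg(decr, -1, t) / t < 0.1    # {mu^-1}_t / t becomes small
```

For such decreasing schedules, both $\lbrace\mu\rbrace_{t}$ and $\lbrace\mu^{\scriptscriptstyle -1}\rbrace_{t}\hspace{0.02cm}/\hspace{0.02cm}t$ vanish as $t$ grows, so error bounds combining these two quantities vanish as well.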
Consider the exponential scheme (<ref>), with mean vector field (<ref>), and where the noise variance satisfies (<ref>). Assume that there exists a positive function $V:M\rightarrow \mathbb{R}$, with $\ell$-Lipschitz gradient, which verifies (<ref>). If $\mu_t \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1}$ for all $t$, then
\begin{equation} \label{eq:nasymp_exp}
\mathbb{E}\left[\Vert X(x_{\tau_t})\Vert^2_{x_{\tau_t}}\right] \leq (2/\!\hspace{0.02cm}c)\!\,\left[\left(V(x_{\scriptscriptstyle 0})\middle/t\right)\!\hspace{0.02cm}\lbrace \mu^{\scriptscriptstyle -1}\rbrace_{t} + (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\lbrace\mu\rbrace_{t}\hspace{0.02cm} \right]
\end{equation}
Remark : the simplest application of this proposition is to a stochastic gradient scheme, with mean field $X(x) = -\mathrm{grad}\,f(x)$ for a cost function $f:M\rightarrow \mathbb{R}$. If $f$ is positive (or just bounded below), and has $\ell_f$-Lipschitz gradient, then $V = f$ can be introduced, as a Lyapunov function, since (<ref>) then holds with $c = 1$. In the case of a constant-step-size scheme, with $\mu \leq (2\ell_f(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1}$, it follows from (<ref>) that
\begin{equation} \label{eq:nasymp_exp_grad}
\frac{1}{2t}\sum^t_{s=1} \mathbb{E}\left[\Vert \mathrm{grad}\,f(x_{s})\Vert^2_{x_{s}}\right] \leq \left(f(x_{\scriptscriptstyle 0})\middle/t\hspace{0.02cm}\mu\right) + (\ell_f\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu
\end{equation}
In particular, if $t$ is sufficiently large, then one must have $\mathbb{E}\Vert \mathrm{grad}\,f(x_{s})\Vert^2 \leq 3(\ell_f\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu$, for at least one $s$ in the range $s = 1,\ldots, t$.
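This remark can be illustrated numerically. The following sketch (sphere formulas and all parameter values are assumptions chosen for illustration, not from the text) runs a constant-step-size exponential scheme on $S^2$, with the cost function of Example 1, and checks that the average squared gradient norm settles at a noise floor of order $\mu$.

```python
import numpy as np

rng = np.random.default_rng(4)
x_star = np.array([0.0, 0.0, 1.0])

def exp_sphere(x, v):
    # Riemannian exponential on the unit sphere S^2 in R^3
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * (v / nv)

def mean_field(x):
    # X(x) = -grad f(x) for f(x) = 1 - <x, x*> on the unit sphere
    return x_star - (x @ x_star) * x

mu = 0.05
x = np.array([1.0, 0.0, 0.0])           # start on the equator, far from x*
sq_norms = []
for t in range(2000):
    e = rng.normal(scale=0.1, size=3)
    e -= (e @ x) * x                    # zero-mean tangent noise e_y(x)
    x = exp_sphere(x, mu * (mean_field(x) + e))
    g = mean_field(x)
    sq_norms.append(g @ g)

# initially far from critical; after the transient, |grad f|^2 sits at an
# O(mu) noise floor, as in the constant-step-size bound
assert np.mean(sq_norms[:10]) > 0.5
assert np.mean(sq_norms[-500:]) < 0.05
```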
Remark : Proposition <ref> provides an estimate of the rate of convergence of a stochastic approximation scheme, to the set of critical points of its mean field, which is applicable even when this set of critical points is complicated. This is especially helpful for stochastic gradient schemes, with a cost function that has many global minima (see the example in <ref>). Proposition <ref> will extend Proposition <ref>, from exponential schemes, to retraction schemes.
Proof of Proposition <ref> : for each $s = 0,1,\ldots,$ it follows from Lemma <ref> that
V(x_{s+1}) - V(x_s) \leq \mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\langle\mathrm{grad}\,V,X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\rangle_{x_s} + \mu^2_{\scriptscriptstyle s+1}(\ell/2)\Vert X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\Vert^2_{x_s}
Then, since $X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}(x_s) = X(x_s) + e_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}(x_s)$,
\begin{equation} \label{eq:proofexpscheme1}
V(x_{s+1}) - V(x_s) \leq \mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\langle\mathrm{grad}\,V,X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\rangle_{x_s} + \mu^2_{\scriptscriptstyle s+1}\ell\left(\Vert X \Vert^2_{x_s} + \Vert e_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\Vert^2_{x_s}\right)
\end{equation}
Let $\mathcal{Y}_s$ be the $\sigma$-algebra generated by $y_{\scriptscriptstyle 1},\ldots, y_s\hspace{0.03cm}$. Taking conditional expectations in (<ref>), it follows from (<ref>) that
-\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\langle\mathrm{grad}\,V,X\rangle_{x_s} \leq \mathbb{E}\left[V(x_{s}) - V(x_{s+1})\middle|\mathcal{Y}_s\right] + \mu^2_{\scriptscriptstyle s+1}\ell\left(\Vert X \Vert^2_{x_s} + \mathbb{E}\left[\Vert e_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\Vert^2_{x_s}\middle|\mathcal{Y}_s\right]\right)
Then, from (<ref>),
-\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\langle\mathrm{grad}\,V,X\rangle_{x_s} \leq \mathbb{E}\left[V(x_{s}) - V(x_{s+1})\middle|\mathcal{Y}_s\right] + \mu^2_{\scriptscriptstyle s+1}\ell\left(\sigma^2_{\scriptscriptstyle 0} + (1+\sigma^2_{\scriptscriptstyle 1})\Vert X \Vert^2_{x_s}\right)
Therefore, using (<ref>), and rearranging terms,
\begin{equation} \label{eq:proofexpscheme2}
(c - \ell(1+\sigma^2_{\scriptscriptstyle 1})\mu_{\scriptscriptstyle s+1})\hspace{0.03cm}\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\Vert X \Vert^2_{x_s} \leq \mathbb{E}\left[V(x_{s}) - V(x_{s+1})\middle|\mathcal{Y}_s\right] + (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2_{\scriptscriptstyle s+1}
\end{equation}
If $\mu_{s+1} \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1}$, this becomes
(c/\!\hspace{0.04cm}2)\hspace{0.03cm}\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\Vert X \Vert^2_{x_s} \leq \mathbb{E}\left[V(x_{s}) - V(x_{s+1})\middle|\mathcal{Y}_s\right] + (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2_{\scriptscriptstyle s+1}
Finally, (<ref>) follows by summing over $s = 1,\ldots, t$ and dividing by $\sum^t_{s=1}\mu_{s+1} = t/\!\hspace{0.02cm}\lbrace \mu^{\scriptscriptstyle -1}\rbrace_t\,$.
§ RETRACTION SCHEMES
Consider now the case where $\mathrm{Ret}$ in (<ref>) is a regular retraction, in the sense of <ref>. Then (<ref>) can be written in exponential form,
\begin{equation} \label{eq:retexpscheme}
x_{t+1} = \mathrm{Exp}_{x_t}\!\left(\Phi_{x_t}\!\left(\mu_{\scriptscriptstyle t+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}t+1}}(x_t)\right)\right)
\end{equation}
where the maps $\Phi_x :T_xM\rightarrow T_xM$ were defined in (<ref>). This new exponential form is useful, since it makes it possible to apply Lemma <ref>, as in the proof of Proposition <ref>.
In addition to being regular, the retraction $\mathrm{Ret}$ is assumed to verify
\begin{equation} \label{eq:retract_asump}
\Vert\Phi_x(v)\Vert_x \leq \Vert v\Vert_x \hspace{0.3cm}\text{and}\hspace{0.2cm}
\Vert\Phi_x(v) - v \Vert_x \leq \delta\hspace{0.02cm}\Vert v\Vert^3_x
\end{equation}
for all $x \in M$ and $v\in T_x M$, where $\delta > 0$ is a constant. This assumption holds true for the retractions studied in <ref> and <ref>, as may be verified, using elementary properties of the arctangent function.
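For instance, with the projective retraction on the unit sphere, $\mathrm{Ret}_x(v) = (x+v)/\Vert x+v\Vert$ (an assumption, chosen here for illustration), one finds $\Phi_x(v) = \arctan(\Vert v\Vert_x)\hspace{0.02cm}v/\Vert v\Vert_x$, and both inequalities hold with $\delta = 1/3$, since $\arctan(s) \leq s$ and $s - \arctan(s) \leq s^3/3$. A short numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)

def retract_sphere(x, v):
    # projective retraction on the unit sphere: Ret_x(v) = (x+v)/|x+v|
    z = x + v
    return z / np.linalg.norm(z)

def log_sphere(x, z):
    # inverse exponential Exp_x^{-1}(z) on the unit sphere
    u = z - (z @ x) * x
    nu = np.linalg.norm(u)
    if nu < 1e-12:
        return np.zeros_like(x)
    return np.arccos(np.clip(z @ x, -1.0, 1.0)) * (u / nu)

# check |Phi_x(v)| <= |v| and |Phi_x(v) - v| <= (1/3)|v|^3, where
# Phi_x(v) = Exp_x^{-1}(Ret_x(v)) = arctan(|v|) v/|v|
for _ in range(500):
    x = rng.normal(size=3); x /= np.linalg.norm(x)
    v = rng.normal(size=3); v -= (v @ x) * x      # project v onto T_x S^2
    phi = log_sphere(x, retract_sphere(x, v))
    assert np.linalg.norm(phi) <= np.linalg.norm(v) + 1e-10
    assert np.linalg.norm(phi - v) <= np.linalg.norm(v) ** 3 / 3 + 1e-10
```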
It will also be assumed that the random vector field $X_y(x)$ has bounded second- and third-order moments,
\begin{equation} \label{eq:thirdordermoments}
\int_{Y}\,\Vert X_y(x)\Vert^a_x\hspace{0.04cm}P(dy) \leq \tau_{\scriptscriptstyle a} \hspace{0.2cm}; a = 2,3
\end{equation}
for some constants $\tau_{\scriptscriptstyle 2}\hspace{0.02cm},\tau_{\scriptscriptstyle 3} > 0$. This implies that it is possible to take $\sigma^2_{\scriptscriptstyle 1} = 0$ in (<ref>).
The following Proposition <ref> is obtained by applying Lemma <ref> to the exponential form (<ref>) of the retraction scheme (<ref>), and taking advantage of the assumptions (<ref>) and (<ref>).
Consider the retraction scheme (<ref>), where $\mathrm{Ret}$ is a regular retraction, which satisfies (<ref>). Assume that (<ref>) holds, so it is possible to take $\sigma^2_{\scriptscriptstyle 1} = 0$ in (<ref>). Assume also that there exists a positive function $V:M\rightarrow \mathbb{R}$, with bounded and $\ell$-Lipschitz gradient, which verifies (<ref>). If $\mu_t \leq (c/2\ell)$ for all $t$, then
\begin{equation} \label{eq:nasymp_ret}
\mathbb{E}\left[\Vert X(x_{\tau_t})\Vert^2_{x_{\tau_t}}\right] \leq (2/\!\hspace{0.02cm}c)\!\,\left[\left(V(x_{\scriptscriptstyle 0})\middle/t\right)\!\hspace{0.02cm}\lbrace \mu^{\scriptscriptstyle -1}\rbrace_{t} + (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\lbrace\mu\rbrace_{t}+(\delta\tau_{\scriptscriptstyle 3}\hspace{0.02cm}\Vert V\Vert_{\scriptscriptstyle 1,\infty})\hspace{0.03cm}\lbrace\mu^{\scriptscriptstyle 2}\rbrace_{t}\hspace{0.02cm} \right]
\end{equation}
where $\Vert V\Vert_{\scriptscriptstyle 1,\infty} = \sup_{x \in M}\Vert \mathrm{grad}\,V\Vert_x\hspace{0.02cm}$.
The assumptions of Proposition <ref> (namely, that $X_y(x)$ has bounded third order moments, and that $\mathrm{grad}\,V(x)$ is uniformly bounded), can seem a bit too strong. In fact, these assumptions are quite natural, in several applications, where the underlying Riemannian manifold $M$ is compact. One such application, to the PCA problem, is presented in <ref>.
Remark : the first two terms on the right-hand side of (<ref>) are the same as on the right-hand side of (<ref>). Thus, replacing the Riemannian exponential $\mathrm{Exp}$ by a regular retraction $\mathrm{Ret}$ has the effect of introducing a second-order term (i.e. a constant multiple of $\lbrace\mu^{\scriptscriptstyle 2}\rbrace_{t}$) into (<ref>). This additional term vanishes, in the limit where $\delta$ goes to zero.
Proof of Proposition <ref> : for $s = 0,1,\ldots,$ it follows by applying Lemma <ref> to (<ref>) that
\begin{equation} \label{eq:proof_nasympret1}
V(x_{s+1}) - V(x_s) \leq \left\langle \mathrm{grad}\,V,\Phi_{x_s}\!\left(\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\right)\right\rangle_{x_s} + (\ell/2)\left\Vert
\Phi_{x_s}\!\left(\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\right)\right\Vert^2_{x_s}
\end{equation}
Here, the right-hand side may also be written
\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\left\langle \mathrm{grad}\,V,X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\right\rangle_{x_s} + (\ell/2)\left\Vert
\Phi_{x_s}\!\left(\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\right)\right\Vert^2_{x_s} \,+\,
\left\langle \mathrm{grad}\,V,\Phi_{x_s}\!\left(\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\right) - \mu_{\scriptscriptstyle s+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\right\rangle_{x_s}
However, by (<ref>),
\begin{equation} \label{eq:proof_nasympret2}
\left\Vert
\Phi_{x_s}\!\left(\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\right)\right\Vert^2_{x_s} \leq
\mu^2_{s+1}\hspace{0.02cm}
\Vert X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\Vert^2_{x_s}
\end{equation}
and, in addition,
\begin{equation} \label{eq:proof_nasympret3}
\left\Vert
\Phi_{x_s}\!\left(\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\right) - \mu_{\scriptscriptstyle s+1}\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\right\Vert_{x_s} \leq
\delta\hspace{0.02cm}\mu^3_{s+1}\hspace{0.02cm}
\Vert X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\Vert^3_{x_s}
\end{equation}
Replacing (<ref>) and (<ref>) into (<ref>), and using the Cauchy-Schwarz inequality,
V(x_{s+1}) - V(x_s) \leq \mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\langle \mathrm{grad}\,V,X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\rangle_{x_s} + \mu^2_{s+1}(\ell/2)\hspace{0.02cm}
\Vert X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\Vert^2_{x_s} +
(\delta\Vert V\Vert_{\scriptscriptstyle 1,\infty})\hspace{0.02cm}\mu^3_{s+1}\hspace{0.02cm}
\Vert X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\Vert^3_{x_s}
Now, it is possible to proceed as in the proof of Proposition <ref>. Taking conditional expectations with respect to $\mathcal{Y}_s\hspace{0.03cm}$, and using (<ref>) and (<ref>),
- \mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\langle \mathrm{grad}\,V,X\rangle_{x_s} -
\mu^2_{s+1}\ell\hspace{0.02cm}
\Vert X \Vert^2_{x_s} \leq
-\Delta V_s +
(\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2_{s+1} +
(\delta\Vert V\Vert_{\scriptscriptstyle 1,\infty})\hspace{0.02cm}\mu^3_{s+1}\hspace{0.02cm}
\mathbb{E}\left[\Vert X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\Vert^3_{x_s}\middle|\mathcal{Y}_s\right]
where $\Delta V_s = \mathbb{E}\left[V(x_{s+1}) - V(x_{s})\middle|\mathcal{Y}_s\right]$.
Then, using (<ref>), it follows that
(c - \ell\mu_{\scriptscriptstyle s+1})\hspace{0.03cm}\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\Vert X \Vert^2_{x_s} \leq
-\Delta V_s +
(\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2_{s+1} +
(\delta\Vert V\Vert_{\scriptscriptstyle 1,\infty})\hspace{0.02cm}\mu^3_{s+1}\hspace{0.02cm}
\mathbb{E}\left[\Vert X_{y_{\scriptscriptstyle\hspace{0.02cm}s+1}}\Vert^3_{x_s}\middle|\mathcal{Y}_s\right]
so, inserting (<ref>), one obtains the inequality
(c - \ell\mu_{\scriptscriptstyle s+1})\hspace{0.03cm}\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\Vert X \Vert^2_{x_s} \leq
-\Delta V_s +
(\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2_{s+1} +
(\delta\tau_{\scriptscriptstyle 3}\hspace{0.02cm}\Vert V\Vert_{\scriptscriptstyle 1,\infty})\hspace{0.03cm}\mu^3_{s+1}
Here, if $\mu_t \leq (c/2\ell)$ for all $t$, then
(c/\!\hspace{0.04cm}2)\hspace{0.03cm}\mu_{\scriptscriptstyle s+1}\hspace{0.02cm}\Vert X \Vert^2_{x_s} \leq
-\Delta V_s +
(\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2_{s+1} +
(\delta\tau_{\scriptscriptstyle 3}\hspace{0.02cm}\Vert V\Vert_{\scriptscriptstyle 1,\infty})\hspace{0.03cm}\mu^3_{s+1}
Finally, (<ref>) follows by summing over $s = 1,\ldots, t$ and dividing by $\sum^t_{s=1}\mu_{s+1} = t/\!\hspace{0.02cm}\lbrace \mu^{\scriptscriptstyle -1}\rbrace_t\,$.
§ EXAMPLE : MIXTURE ESTIMATION
Let $M$ be a Riemannian symmetric space, which belongs to the non-compact case, (see <ref>). Consider a probability density $m$ on $M$, which is a mixture of Gaussian densities (of the kind defined in <ref>),
\begin{equation} \label{eq:mixture}
m(y|x) = \frac{1}{K}\sum^K_{\kappa = 1}p(y|x_{\kappa}) \hspace{0.5cm} \text{where }\, p(y|x_{\kappa}) = (Z(1))^{-1}\exp\left[ -\frac{d^{\hspace{0.03cm}2}(y,x_\kappa)}{2}\right]
\end{equation}
where $K$ is the number of mixture components, and the normalising factor $Z(1)$ is given by (<ref>). The parameters $x = (x_\kappa\,;\kappa=1,\ldots,K)$ are to be estimated, by fitting the mixture density (<ref>) to data $y_{\scriptscriptstyle 1},\ldots, y_{\scriptscriptstyle N}\hspace{0.03cm}$. Then, maximum-likelihood estimation amounts to minimising the negative log-likelihood function
\begin{equation} \label{eq:neglh}
f(x) = -\log\hspace{0.02cm}Z(1)-\frac{1}{N}\sum^N_{n=1}\log\hspace{0.02cm} m(y_n|x)
\end{equation}
where the first term, $-\log\hspace{0.02cm}Z(1)$, has been added to ensure that $f(x)$ is positive. The function $f$ is defined on the product Riemannian manifold, $M^{\scriptscriptstyle K} = M\times\ldots\times M$. Its gradient is then $\mathrm{grad}\,f = (\mathrm{grad}_{\kappa}\,f\,;\kappa = 1,\ldots, K)$, where $\mathrm{grad}_{\kappa}\,f$ denotes the gradient with respect to $x_{\kappa}\hspace{0.03cm}$.
For the negative log-likelihood function (<ref>),
\begin{equation} \label{eq:mixgrad}
\mathrm{grad}_{\kappa}\,f(x) = -\frac{1}{N}\sum^N_{n=1}\omega_\kappa(y_n)\hspace{0.03cm}\mathrm{Exp}^{-1}_{x_\kappa}(y_n)
\end{equation}
where $\omega_\kappa(y) \propto p(y|x_{\kappa})$ are positive weights, which add up to $1$.
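This formula can be checked in the Euclidean case $M = \mathbb{R}$ (a flat Hadamard manifold), where $\mathrm{Exp}^{-1}_{x_\kappa}(y) = y - x_\kappa$ and $Z(1) = \sqrt{2\pi}$, by comparing against finite differences; all names and numerical values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Euclidean check of grad_k f = -(1/N) sum_n w_k(y_n) (y_n - x_k),
# for the negative log-likelihood of a two-component mixture on R
Z1 = np.sqrt(2.0 * np.pi)
y = rng.normal(size=50)

def f(x):
    # f(x) = -log Z(1) - (1/N) sum_n log m(y_n | x)
    p = np.exp(-0.5 * (y[:, None] - x[None, :]) ** 2) / Z1
    return -np.log(Z1) - np.mean(np.log(p.mean(axis=1)))

def grad_f(x):
    # closed form from the lemma, with w_k(y) proportional to p(y | x_k)
    p = np.exp(-0.5 * (y[:, None] - x[None, :]) ** 2)
    w = p / p.sum(axis=1, keepdims=True)
    return -np.mean(w * (y[:, None] - x[None, :]), axis=0)

x = np.array([-0.5, 1.0])
h = 1e-5
for k in range(2):
    e = np.zeros(2); e[k] = h
    fd = (f(x + e) - f(x - e)) / (2 * h)     # central finite difference
    assert abs(fd - grad_f(x)[k]) < 1e-5
```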
Let $(y_t\,;t=1,2,\ldots)$ be chosen at random among the data $y_{\scriptscriptstyle 1},\ldots, y_{\scriptscriptstyle N}\hspace{0.03cm}$. By Lemma <ref>,
\begin{equation} \label{eq:mixstochgrad}
x^{t+1}_\kappa = \mathrm{Exp}_{x^t_\kappa}\!\left(\mu\hspace{0.03cm}X_\kappa(y_{\scriptscriptstyle\hspace{0.02cm}t+1},x^t_\kappa)\right) \hspace{0.5cm} \text{where }\,X_\kappa(y_{\scriptscriptstyle\hspace{0.02cm}t+1},x^t_\kappa) = \omega_\kappa(y_{\scriptscriptstyle\hspace{0.02cm}t+1})\hspace{0.03cm}\mathrm{Exp}^{-1}_{x^t_\kappa}(y_{\scriptscriptstyle\hspace{0.02cm}t+1})
\end{equation}
is a constant-step-size stochastic gradient scheme, for the cost function $f$. Here, the step-size $\mu$ is assumed to be less than $1$ (in comparison to (<ref>), $t$ and $t+1$ have been written as superscripts, rather than subscripts, in order to accommodate the appearance of $\kappa$).
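In the Euclidean case $M = \mathbb{R}$, the scheme reads $x^{t+1}_\kappa = x^t_\kappa + \mu\hspace{0.03cm}\omega_\kappa(y_{\hspace{0.02cm}t+1})(y_{\hspace{0.02cm}t+1} - x^t_\kappa)$. A minimal sketch follows (the synthetic data, step-size and iteration count are assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

# Euclidean instance (M = R, Exp^{-1}_x(y) = y - x) of the mixture scheme:
# x_k <- x_k + mu * w_k(y) * (y - x_k), with w_k(y) proportional to p(y | x_k)
true_means = np.array([-3.0, 3.0])
data = np.concatenate([rng.normal(m, 1.0, 500) for m in true_means])

x = np.array([-1.0, 1.0])        # initial values x_kappa^0
mu = 0.05                        # constant step-size, mu < 1
for t in range(20000):
    y = data[rng.integers(len(data))]
    w = np.exp(-0.5 * (y - x) ** 2)
    w /= w.sum()                 # responsibilities w_kappa(y)
    x = x + mu * w * (y - x)     # one stochastic gradient update per component
```

Each iterate stays in the convex hull of its previous value and the data, in line with the containment argument that follows, and the component parameters drift toward the true means.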
Now, let $C$ be a compact and convex subset of $M$, which contains all of the data points $y_{n}\hspace{0.03cm}$, as well as all of the initial values $x^{\scriptscriptstyle 0}_\kappa$ (since $M$ is a Hadamard manifold, one may take $C$ to be any sufficiently large closed geodesic ball). The diameter of $C$ will be denoted $\mathrm{D}_{\scriptscriptstyle C}\hspace{0.03cm}$. From (<ref>),
x^{t+1}_\kappa = x^t_\kappa\; \#_{\scriptscriptstyle \rho_{\hspace{0.01cm}t+1}}\,y_{\scriptscriptstyle\hspace{0.02cm}t+1} \hspace{0.5cm} \text{where } \rho_{\hspace{0.02cm}t+1} =
\mu\hspace{0.03cm}\omega_\kappa(y_{\scriptscriptstyle\hspace{0.02cm}t+1})
in the notation of (<ref>), from <ref>. Accordingly, the iterates $x^t_\kappa$ remain within $C$, for all $t$ and $\kappa$. Because $C$ is compact and convex, it then becomes possible to derive the following result, by repeating, with very minor changes, the arguments leading to (<ref>).
For the stochastic gradient scheme (<ref>), let $C$ be a compact and convex subset of $M$, which contains all of the data points $y_{n}\hspace{0.03cm}$, as well as all of the initial values $x^{\scriptscriptstyle 0}_\kappa$. If $\mu \leq (1/2\ell_{\scriptscriptstyle C})$, where $\ell_{\scriptscriptstyle C}$ denotes the supremum of the operator norm of $\mathrm{Hess}\,f(x)$, taken over $x = (x_{\kappa}\,;\kappa = 1,\ldots, K) \in C^{\scriptscriptstyle K}$, then for all $t = 1,2,\ldots$,
\begin{equation} \label{eq:mixnasymp_exp_grad}
\frac{1}{2t}\sum^t_{s=1} \mathbb{E}\left[\Vert \mathrm{grad}\,f(x^{s})\Vert^2_{x^{s}}\right] \leq \left(f_{\scriptscriptstyle C}\middle/t\hspace{0.02cm}\mu\right) + (\ell_{\scriptscriptstyle C}\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu
\end{equation}
Here, $f_{\scriptscriptstyle C} = \sup_{x \in C^{\scriptscriptstyle K}} f(x)$ and $\sigma_{\scriptscriptstyle 0} = \sup_{x \in C^{\scriptscriptstyle K}}\Vert \mathrm{grad}\,f\Vert_x$ (explicit bounds on $f_{\scriptscriptstyle C\hspace{0.02cm}}$, $\sigma_{\scriptscriptstyle 0}$ and $\ell_{\scriptscriptstyle C}$ are given in the remark below).
Remark : tedious, but straightforward, calculations provide the upper bounds
\begin{equation} \label{eq:Cconstants1}
f_{\scriptscriptstyle C} \leq \frac{\mathrm{D}^2_{\scriptscriptstyle C}}{2} \hspace{0.27cm};\hspace{0.23cm} \sigma_{\scriptscriptstyle 0} \leq K\hspace{0.02cm}\mathrm{D}_{\scriptscriptstyle C}
\end{equation}
\begin{equation} \label{eq:Cconstants2}
\ell_{\scriptscriptstyle C} \leq (1+c\hspace{0.02cm}\mathrm{D}_{\scriptscriptstyle C}) + (1 + Z(1)\exp(\mathrm{D}_{\scriptscriptstyle C}/2))\mathrm{D}^2_{\scriptscriptstyle C}
\end{equation}
where $c$ is such that the sectional curvatures of $M$ lie within $[-c^{\hspace{0.02cm}\scriptscriptstyle 2},0]$.
Proof of Lemma <ref> : taking the gradient of (<ref>), it is clear that
\begin{equation} \label{eq:proofmixgrad0}
\mathrm{grad}_{\kappa}\,f(x) = -\frac{1}{N}\sum^N_{n=1} \mathrm{grad}_{\kappa}\log m(y_n|x)
\end{equation}
Now, $\mathrm{grad}_{\kappa}\log m(y|x)$ can be computed as follows. If $\lambda$ is a random variable, independent of $y$, with $\mathbb{P}(\lambda = \kappa) = K^{\scriptscriptstyle -1}$, for $\kappa = 1,\ldots, K$, then $\mathbb{P}(\lambda = \kappa|y) = \omega_\kappa(y)$, with $\omega_\kappa(y)$ as in (<ref>). Therefore, using Bayes' rule,
\frac{p(\lambda\hspace{0.02cm},y)}{m(y|x)} = \sum^K_{\nu = 1}\mathbf{1}\lbrace\lambda = \nu\rbrace\hspace{0.03cm}\omega_\nu(y)
where $p(\lambda\hspace{0.02cm},y)$ is the joint distribution of the couple $(\lambda\hspace{0.02cm},y)$. Taking logarithms,
\begin{equation} \label{eq:proofmixgrad1}
\log p(\lambda\hspace{0.02cm},y) - \log m(y|x) = \sum^K_{\nu = 1}\mathbf{1}\lbrace\lambda = \nu\rbrace\hspace{0.03cm}\log \omega_\nu(y)
\end{equation}
If $\mathbb{E}_y$ denotes conditional expectation with respect to $y$,
\mathbb{E}_y\left[\mathrm{grad}_{\kappa}\sum^K_{\nu = 1}\mathbf{1}\lbrace\lambda = \nu\rbrace\hspace{0.03cm}\log \omega_\nu(y)\right] = \sum^K_{\nu = 1}\omega_\nu(y)\hspace{0.03cm}\mathrm{grad}_{\kappa}\log \omega_\nu(y) = 0
where the second equality follows since the conditional probabilities $\omega_\nu(y)$ always add up to $1$. By replacing this into (<ref>),
\begin{equation} \label{eq:proofmixgrad2}
\mathrm{grad}_{\kappa}\log m(y|x) = \mathbb{E}_y\left[ \mathrm{grad}_{\kappa} \log p(\lambda\hspace{0.02cm},y)\right]
\end{equation}
But, since $\lambda$ and $y$ are independent, the joint distribution $p(\lambda\hspace{0.02cm},y)$ reads
p(\lambda\hspace{0.02cm},y) = \frac{1}{K}\sum^K_{\nu = 1}\mathbf{1}\lbrace\lambda = \nu\rbrace\hspace{0.03cm} p(y|x_{\nu})
Therefore, taking logarithms,
\log p(\lambda\hspace{0.02cm},y) = -\log(K) + \sum^K_{\nu = 1}\mathbf{1}\lbrace\lambda = \nu\rbrace\hspace{0.03cm} \log p(y|x_{\nu})
This immediately yields,
\begin{equation} \label{eq:proofmixgrad4}
\mathbb{E}_y\left[ \mathrm{grad}_{\kappa} \log p(\lambda\hspace{0.02cm},y)\right] =
\omega_\kappa(y)\hspace{0.03cm} \mathrm{grad}_{\kappa}\log p(y|x_{\kappa}) =
\omega_\kappa(y)\hspace{0.03cm}\mathrm{Exp}^{-1}_{x_\kappa}(y)
\end{equation}
where the second equality follows from (<ref>), and from the definition of $p(y|x_{\kappa})$ in (<ref>). Finally, replacing (<ref>) into (<ref>),
\begin{equation} \label{eq:proofmixgrad5}
\mathrm{grad}_{\kappa}\log m(y|x) = \omega_\kappa(y)\hspace{0.03cm}\mathrm{Exp}^{-1}_{x_\kappa}(y)
\end{equation}
so that (<ref>) follows by plugging (<ref>) into (<ref>).
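The identity just proved can be checked numerically in the Euclidean Gaussian case, where $\mathrm{Exp}^{-1}_{x_\kappa}(y) = y - x_\kappa$. The sketch below compares the closed form $\omega_\kappa(y)(y - x_\kappa)$ against central finite differences of $\log m(y|x)$; the unit-variance, equal-weight mixture and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 3, 2
x = rng.standard_normal((K, d))   # component means x_kappa
y = rng.standard_normal(d)        # one observation

def log_m(x):
    # log m(y|x) for an equal-weight, unit-variance Gaussian mixture
    logp = -0.5 * np.sum((y - x) ** 2, axis=1) - 0.5 * d * np.log(2 * np.pi)
    return np.log(np.mean(np.exp(logp)))

# closed form: grad_kappa log m(y|x) = omega_kappa(y) (y - x_kappa)
logp = -0.5 * np.sum((y - x) ** 2, axis=1)
w = np.exp(logp - logp.max())
w /= w.sum()
grad_closed = w[:, None] * (y - x)

# central finite differences of log m(y|x)
eps = 1e-6
grad_fd = np.zeros_like(x)
for k in range(K):
    for i in range(d):
        xp, xm = x.copy(), x.copy()
        xp[k, i] += eps
        xm[k, i] -= eps
        grad_fd[k, i] = (log_m(xp) - log_m(xm)) / (2 * eps)

err = np.max(np.abs(grad_fd - grad_closed))   # finite-difference error only
```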
§ EXAMPLE : THE PCA PROBLEM
Here, the notation will be the same as in <ref> and <ref>. The aim is to apply Proposition <ref>, to a constant-step-size stochastic gradient scheme, for the objective function (<ref>),
\begin{equation} \label{eq:pcafbis}
f(x) = \mathrm{tr}\left(x\Delta\right) \hspace{1cm} x \in \mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)
\end{equation}
where $\Delta$ is the covariance matrix of a zero-mean random vector $y$, with values in $\mathbb{R}^d$ ($d = p+q$). It is assumed that $y$ has finite moments of order $6$.
The gradient of the objective function $f$ was given by (<ref>) and (<ref>). These can be written
\begin{equation} \label{eq:pcapxbis}
\mathrm{grad}\,f(x) = g\cdot \tilde{\omega}(x) \hspace{0.5cm} \text{where } \tilde{\omega}(x) = \mathrm{P}_o(g^\dagger\cdot \Delta)
\end{equation}
Now, let $b \in \mathrm{St}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$ be such that $x = [b]$. By the discussion before (<ref>), choosing $g = (b,b^{\scriptscriptstyle \perp})$, it follows that $\mathrm{grad}\,f(x) = [X(b)]$, where $X(b) = b^{\scriptscriptstyle \perp}\omega(b)\hspace{0.02cm}$. From (<ref>) and (<ref>), it is clear that $\omega(b) = (b^{\scriptscriptstyle \perp})^\dagger\Delta\hspace{0.03cm} b$. Therefore, using the fact that $x = bb^\dagger$ (this is the definition of $[b]$),
\begin{equation} \label{eq:pcameanfield}
X(b) = (\mathrm{I}_d - x)\Delta\hspace{0.03cm} b
\end{equation}
In terms of the random vector $y$, $X(b)$ is the expectation of $X_y(b)$, where
\begin{equation} \label{eq:pcaxy}
X_y(b) = (\mathrm{I}_d - x)(yy^\dagger)\hspace{0.03cm} b
\end{equation}
Let $X_y(x) = [X_y(b)]$, and note that the expectation of $X_y(x)$ is equal to $\mathrm{grad}\,f(x)$ (by linearity).
Then, consider the constant-step-size stochastic gradient scheme
\begin{equation} \label{eq:pcascheme1}
x_{t+1} = \mathrm{Ret}_{x_t}\!\left(\mu\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}t+1}}(x_t)\right)
\end{equation}
where $(y_t\,;t = 1,2,\ldots)$ are independent copies of $y$. If the retraction $\mathrm{Ret}$ is given as in (<ref>), this becomes
\begin{equation} \label{eq:pcascheme}
x_{t+1} = \mathrm{Span}(b_{t+1}) \hspace{0.25cm};\hspace{0.21cm} b_{t+1} = b_t + \mu\hspace{0.03cm}(\mathrm{I}_d - b^{\phantom{\dagger}}_{\scriptscriptstyle\hspace{0.02cm}t}b^\dagger_{\scriptscriptstyle\hspace{0.02cm}t})(y^{\phantom{\dagger}}_{\scriptscriptstyle\hspace{0.02cm}t+1}y^\dagger_{\scriptscriptstyle\hspace{0.02cm}t+1})\hspace{0.03cm} b_t
\end{equation}
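A minimal numpy sketch of this scheme (an Oja-type iteration): frames $b$ are taken $d\times q$, and the $\mathrm{Span}$ operation is realised by re-orthonormalising the updated frame, so that each iterate is represented by an orthonormal basis of the subspace. The synthetic covariance and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d, q, mu = 6, 2, 0.005

# covariance with a dominant q-dimensional eigenspace (first q columns of U)
evals = np.array([5.0, 4.0, 0.5, 0.4, 0.3, 0.2])
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = U * np.sqrt(evals)            # y = A z has covariance U diag(evals) U^T

b, _ = np.linalg.qr(rng.standard_normal((d, q)))   # initial frame
for t in range(50000):
    y = A @ rng.standard_normal(d)
    # b_{t+1} = b_t + mu (I - b_t b_t^T)(y y^T) b_t
    b = b + mu * (np.eye(d) - b @ b.T) @ np.outer(y, y) @ b
    b, _ = np.linalg.qr(b)        # Span(b_{t+1}): keep an orthonormal representative

# cosines of the principal angles between span(b) and the dominant eigenspace
cosines = np.linalg.svd(U[:, :q].T @ b, compute_uv=False)
```

The iterates concentrate on the dominant eigenspace, so the principal-angle cosines end up close to $1$.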
Proposition <ref>, applied to this scheme, yields the following bound.
Consider the constant-step-size scheme (<ref>)-(<ref>). For all $t = 1,2,\ldots,$
\begin{equation} \label{eq:pca_nasymp}
\frac{1}{2t}\sum^t_{s=1} \mathbb{E}\left[\Vert \mathrm{grad}\,f(x_{s})\Vert^2_{x_{s}}\right] \leq\left(p\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}}\middle/t\hspace{0.02cm}\mu\right)\!\hspace{0.02cm} + (4\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}}\hspace{0.02cm}m^{4}_y )\hspace{0.03cm}\mu+(\sqrt{8}\Vert \Delta\Vert_{\scriptscriptstyle F}\hspace{0.02cm}m^{6}_y)\hspace{0.03cm}\mu^{\scriptscriptstyle 2}
\end{equation}
where $\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}}$ and $\Vert\Delta\Vert_{\scriptscriptstyle F}$ denote the operator norm and Frobenius norm of the matrix $\Delta$, while $m^{4}_y$ and $m^{6}_y$ denote the fourth-order and sixth-order moments of the random vector $y$.
Proposition <ref> follows directly from Proposition <ref>, by introducing $V(x) = p\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}}-f(x)$, which satisfies $0 \leq V(x) \leq p\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}\hspace{0.02cm}}$. Since $-\mathrm{grad}\,V(x) =\mathrm{grad}\,f(x)$, (<ref>) now holds with $c = 1$.
The function $V$ has $2\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}}$-Lipschitz gradient, as will be shown in the remark below, and the norm of its gradient can be computed from (<ref>),
\begin{equation} \label{eq:pcav10}
\Vert\mathrm{grad}\,V\Vert_x = \Vert\mathrm{grad}\,f\Vert_x =
\Vert\tilde{\omega}_x\Vert_o \leq \Vert g^\dagger\cdot \Delta \Vert_{\scriptscriptstyle F}
\end{equation}
where the inequality follows from (<ref>), since $\mathrm{P}_o$ is an orthogonal projection. But, since $g$ is orthogonal, (<ref>) implies that $\Vert\mathrm{grad}\,V\Vert_x$ is bounded by $\Vert \Delta\Vert_{\scriptscriptstyle F\hspace{0.02cm}}$, uniformly in $x$.
Thus, to obtain (<ref>), it is possible to replace into (<ref>), $V(x_{\scriptscriptstyle 0}) \leq p\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}\hspace{0.02cm}}$, $\ell = 2\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}\hspace{0.02cm}}$, and $\Vert V\Vert_{\scriptscriptstyle 1,\infty} = \Vert \Delta\Vert_{\scriptscriptstyle F\hspace{0.02cm}}$.
For $\sigma^2_{\scriptscriptstyle 0}$ and $\tau_{\scriptscriptstyle 3\hspace{0.03cm}}$, recall that $X_y(x) = [X_y(b)]$, where $X_y(b) = b^{\scriptscriptstyle \perp}\omega_y(b)$, with $\omega_y(b) =
(b^{\scriptscriptstyle \perp})^\dagger(yy^\dagger)\hspace{0.03cm} b$. However, this implies $X_y(x) = g\cdot \tilde{\omega}_y(b)$, ($\tilde{\omega}_y(b)$ is obtained from $\omega_y(b)$, according to (<ref>)). Thus, $\Vert X_y \Vert_x = \Vert \tilde{\omega}_y(b)\Vert_o = \sqrt{2}\Vert \omega_y(b)\Vert_{\scriptscriptstyle F\hspace{0.02cm}}$. By evaluating the Frobenius norm,
\Vert \omega_y(b)\Vert^2_{\scriptscriptstyle F} = \mathrm{tr}\left((\mathrm{I}_d - x)(yy^\dagger)x(yy^\dagger)\right) \leq
\Vert(\mathrm{I}_d - x)(yy^\dagger)\Vert_{\scriptscriptstyle F}\hspace{0.02cm}
\Vert (yy^\dagger)x\Vert_{\scriptscriptstyle F} \leq \Vert yy^\dagger\Vert^2_{\scriptscriptstyle F}
where the first inequality follows from the Cauchy-Schwarz inequality, and the second inequality follows because $x$ and $\mathrm{I}_d - x$ are orthogonal projectors. Since $\Vert yy^\dagger\Vert_{\scriptscriptstyle F}= \Vert y \Vert^2$ (the squared Euclidean norm of $y$), this implies $\Vert X_y \Vert_x \leq \sqrt{2}\Vert y \Vert^2$. Therefore, it is possible to set $\sigma^2_{\scriptscriptstyle 0} =2m^{4}_y$ and $\tau_{\scriptscriptstyle 3} = \sqrt{8}m^{6}_y\hspace{0.03cm}$.
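The bound $\Vert X_y \Vert_x \leq \sqrt{2}\Vert y \Vert^2$ is easy to confirm numerically, using $\Vert \omega_y(b)\Vert_{\scriptscriptstyle F} = \Vert (\mathrm{I}_d - x)(yy^\dagger)\hspace{0.02cm}b\Vert_{\scriptscriptstyle F}$; the dimensions and the random model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, q = 7, 3
ratio_max = 0.0
for _ in range(200):
    b, _ = np.linalg.qr(rng.standard_normal((d, q)))   # frame, with x = b b^T
    x = b @ b.T
    y = rng.standard_normal(d)
    # ||X_y||_x = sqrt(2) ||omega_y(b)||_F, and ||omega_y(b)||_F = ||(I - x) y y^T b||_F
    norm_Xy = np.sqrt(2) * np.linalg.norm((np.eye(d) - x) @ np.outer(y, y) @ b)
    bound = np.sqrt(2) * np.dot(y, y)                  # sqrt(2) ||y||^2
    ratio_max = max(ratio_max, norm_Xy / bound)
```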
Finally, the constant $\delta$ in (<ref>) can be taken equal to $1$. Indeed, if $\mathrm{Ret}_x$ is given by (<ref>) and $\Phi_x$ is given by (<ref>), then for $v \in T_x\mathrm{Gr}_{\scriptscriptstyle \mathbb{R}}(p\,,q)$, where $v = g\cdot \tilde{\omega}$ and $\omega$ has s.v.d. $\omega = ras^\dagger$, $\Phi_x(v) = g\cdot \tilde{\varphi}$ where $\varphi$ has s.v.d. $\varphi= r \arctan(a) s^\dagger$. Therefore,
\begin{equation} \label{eq:proofdeltaa1}
\Vert\Phi_x(v) - v \Vert_x = \Vert g\cdot\tilde{\varphi} - g\cdot\tilde{\omega}\Vert_x = \Vert \tilde{\varphi} - \tilde{\omega}\Vert_o
\end{equation}
If $k$ is given by (<ref>), then
\Vert \tilde{\varphi} - \tilde{\omega}\Vert_o =
\Vert k\cdot \arctan(\tilde{a}) - k \cdot \tilde{a}\Vert_o = \Vert \arctan(\tilde{a}) - \tilde{a}\Vert_o
By an elementary property of the $\arctan$ function, $\Vert \arctan(\tilde{a}) - \tilde{a}\Vert_o \leq \Vert \tilde{a}\Vert^3_o\hspace{0.03cm}$. Therefore,
\begin{equation} \label{eq:proofdeltaa2}
\Vert \tilde{\varphi} - \tilde{\omega}\Vert_o \leq \Vert \tilde{a}\Vert^3_o
\end{equation}
Replacing (<ref>) into (<ref>), and noting that $\Vert \tilde{a} \Vert_o = \Vert v \Vert_x\hspace{0.03cm}$, it follows that
$\Vert\Phi_x(v) - v \Vert_x \leq \Vert v \Vert^3_x\hspace{0.03cm}$. This is the second inequality in (<ref>), with $\delta = 1$. The first inequality in (<ref>) is obtained by an analogous reasoning, using once more the properties of the $\arctan$ function.
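The elementary $\arctan$ inequality used here, $|\arctan(a) - a| \leq |a|^3$, can be spot-checked over a grid (the sharper constant $1/3$ also holds, but only the stated form is needed):

```python
import numpy as np

# max of |arctan(a) - a| - |a|^3 over a symmetric grid; should never exceed zero
a = np.linspace(-10.0, 10.0, 100001)
excess = np.max(np.abs(np.arctan(a) - a) - np.abs(a) ** 3)
```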
Remark : it was claimed that the function $V$ has $2\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}}$-Lipschitz gradient (this means $\mathrm{grad}\,V$ satisfies (<ref>), with $\ell = 2\Vert\Delta\Vert_{\scriptscriptstyle \mathrm{op}}$). To prove this claim, let
$c(t)$ be a geodesic, with $c(0) = x$ and $\dot{c}(0) = v$. In the notation of (<ref>), $
c(t) = \exp(t\, \hat{\omega}_{\scriptscriptstyle v})\cdot x$. From [10] (Theorem 3.3, Chapter IV),
\begin{equation} \label{eq:grass_parallel}
\Pi^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\left(\mathrm{grad}\,V(x)\right) = \exp(\hat{\omega}_{\scriptscriptstyle v})\cdot \mathrm{grad}\,V(x)
\end{equation}
But, $\mathrm{grad}\,V(x) = - \mathrm{grad}\,f(x)$, which is given by (<ref>). Therefore,
\begin{equation} \label{eq:grass_parallel1}
\Pi^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\left(\mathrm{grad}\,V(x)\right) = (\exp(\hat{\omega}_{\scriptscriptstyle v})g)\cdot \mathrm{P}_o(g^\dagger\cdot \Delta)
\end{equation}
On the other hand, letting $y = c(1)$, one has $y = (\exp(\hat{\omega}_{\scriptscriptstyle v})g)\cdot o$. Thus, from (<ref>),
\begin{equation} \label{eq:grass_parallel2}
\mathrm{grad}\,V(y) =
(\exp(\hat{\omega}_{\scriptscriptstyle v})g)\cdot \mathrm{P}_o((\exp(\hat{\omega}_{\scriptscriptstyle v})g)^\dagger\cdot \Delta)
\end{equation}
From (<ref>) and (<ref>),
\Vert \mathrm{grad}\,V(y) - \Pi^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\left(\mathrm{grad}\,V(x)\right)\Vert_y =
\Vert \mathrm{P}_o((\exp(\hat{\omega}_{\scriptscriptstyle v})g)^\dagger\cdot \Delta) - \mathrm{P}_o(g^\dagger\cdot \Delta) \Vert_o \leq
\Vert (\exp(\hat{\omega}_{\scriptscriptstyle v})g)^\dagger\cdot \Delta - g^\dagger\cdot \Delta \Vert_{\scriptscriptstyle F}
where the inequality holds since $\mathrm{P}_o$ is an orthogonal projection. Using the fact that
(\exp(\hat{\omega}_{\scriptscriptstyle v})g)^\dagger\cdot \Delta - g^\dagger\cdot \Delta = \int^1_0 \left(\Delta(t)\hspace{0.02cm}\hat{\omega}_{\scriptscriptstyle v} - \hat{\omega}_{\scriptscriptstyle v}\hspace{0.02cm}\Delta(t)\right) dt
where $\Delta(t) = (\exp(t\,\hat{\omega}_{\scriptscriptstyle v})g)^\dagger\cdot \Delta$, so $\Vert \Delta(t)\Vert_{\scriptscriptstyle op} = \Vert \Delta \Vert_{\scriptscriptstyle op}\hspace{0.02cm}$, it follows that
\Vert \mathrm{grad}\,V(y) - \Pi^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\left(\mathrm{grad}\,V(x)\right)\Vert_y \leq
\int^1_0 \Vert\Delta(t)\hspace{0.02cm}\hat{\omega}_{\scriptscriptstyle v} - \hat{\omega}_{\scriptscriptstyle v}\hspace{0.02cm}\Delta(t)\Vert_{\scriptscriptstyle F}\hspace{0.04cm} dt \leq 2\Vert \Delta\Vert_{\scriptscriptstyle op}\hspace{0.02cm}\Vert \hat{\omega}_{\scriptscriptstyle v}\Vert_{\scriptscriptstyle F}
By the remark at the end of <ref>, the right-hand side is $2\Vert \Delta\Vert_{\scriptscriptstyle op}\hspace{0.02cm}\Vert v\Vert_{x\hspace{0.03cm}}$. In other words,
\Vert \mathrm{grad}\,V(y) - \Pi^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\left(\mathrm{grad}\,V(x)\right)\Vert_y \leq
2\Vert \Delta\Vert_{\scriptscriptstyle op}\hspace{0.02cm}L(c)
This is equivalent to the required form of (<ref>), as can be seen by applying $\Pi^{\scriptscriptstyle 0}_{\scriptscriptstyle 1}$ inside the norm.
§ A CENTRAL LIMIT THEOREM (CLT)
Here, the aim will be to derive a central limit theorem, describing the asymptotic behavior of certain constant-step-size exponential schemes, defined on Hadamard manifolds. This generalises the Euclidean-space central limit theorem found in [54].
§.§ Geometric ergodicity
Consider the constant-step-size exponential scheme, defined on a Hadamard manifold $M$,
\begin{equation} \label{eq:cssexp}
x_{t+1} = \mathrm{Exp}_{x_t}\!\left(\mu\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}t+1}}(x_t)\right)
\end{equation}
Since the observations $(y_t\,;t=1,2,\ldots)$ are independent and identically distributed, it follows that $(x_t\,;t=0,1,\ldots)$ is a time-homogeneous Markov chain with values in $M$. The following assumptions ensure that $(x_t)$ is geometrically ergodic, and therefore has a unique invariant distribution $\pi_\mu$.
e1. the noise vector $e_y(x)$ satisfies (<ref>).
e2. $e_y(x)$ is $P$-almost-surely a continuous function of $x$, and the distribution of $e_y(x)$ has strictly positive density, with respect to the Lebesgue measure on $T_xM$.
v1. there exists a positive function $V:M\rightarrow \mathbb{R}$, with compact sublevel sets, and $\ell$-Lipschitz gradient, which satisfies (<ref>).
v2. there exist $x^* \in M$ and $\lambda > 0$, such that
\begin{equation} \label{eq:attractive}
\langle \mathrm{grad}\,V,X\rangle_x \leq -\lambda\hspace{0.02cm}V(x) \hspace{0.5cm} \text{for } x \neq x^*
\end{equation}
v3. $V(x) = 0$ if and only if $x = x^*$.
Consider the constant-step-size scheme (<ref>), on a Hadamard manifold $M$. Assume that e1, e2, v1, v2 hold. If $\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1}$, then the Markov chain $(x_t)$ is geometrically ergodic, with a unique invariant distribution $\pi_\mu$.
As the step-size $\mu$ goes to zero, the invariant distribution $\pi_\mu$ concentrates on the point $x^*$.
Under the same conditions as in Proposition <ref>, if v3 holds, then $\pi_{\mu}\,\Rightarrow\,\delta_{x^*}$ as $\mu \rightarrow 0$ (here, $\Rightarrow$ denotes weak convergence of probability measures).
§.§ A diffusion limit
Consider now the re-scaled sequence $(u_t\,;t=0,1,\ldots)$, with values in $T_{x^*}M$,
\begin{equation} \label{eq:re-scaledu}
u_t = \psi_\mu(x_t) \hspace{0.5cm} \text{where } \psi_\mu(x) = \mu^{-\frac{1}{2}}\mathrm{Exp}^{-1}_{x^*}(x)
\end{equation}
This is the image of $(x_t\,;t=0,1,\ldots)$, under the diffeomorphism $\psi_\mu:M\rightarrow T_{x^*}M$. It is therefore a time-homogeneous Markov chain with values in $T_{x^*}M$. The transition kernels of $(x_t)$ and $(u_t)$ will be denoted $Q_\mu$ and $\tilde{Q}_\mu\hspace{0.03cm}$, respectively. Note that
\begin{equation} \label{eq:qtoqtilde}
\tilde{Q}_\mu\phi(u) = Q_\mu(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u))
\end{equation}
for any measurable function $\phi:T_{x^*}M \rightarrow \mathbb{R}$.
The following assumptions ensure that, as $\mu$ goes to zero, $(u_t\,;t = 0,1,\ldots)$ behaves like samples, taken at evenly spaced times $\tau_t = t\mu$, from a linear diffusion process $(U_\tau\,;\tau \geq 0)$.
d1. the (2,0)-tensor field $\Sigma$, defined by
\begin{equation} \label{eq:fieldsigma}
\Sigma(x) = \int_{Y}\, e_y(x) \otimes e_y(x)\hspace{0.03cm}P(dy) \hspace{0.5cm} \text{for } x\in M
\end{equation}
is continuous on $M$.
d2. there exists a linear map $A:T_{x^*}M\rightarrow T_{x^*}M$, such that for $x \in M$,
\begin{equation} \label{eq:fieldA}
X(x) = \Pi^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\left(A\left(\mathrm{Exp}^{-1}_{x^*}(x)\right) + R(x)\right)
\end{equation}
where $\Pi^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}$ denotes parallel transport along the unique geodesic $c:[0,1] \rightarrow M$, connecting $x^*$ to $x$, and $\Vert R(x) \Vert_{x^*} = o(d(x\hspace{0.02cm},x^*))$.
Now, let $(U_\tau\,;\tau \geq 0)$ be the linear diffusion process, with generator,
\begin{equation} \label{eq:generator}
\mathcal{L}\phi(u) = A^i_ju^j\hspace{0.02cm}\frac{\partial\phi}{\partial u^i}(u) + \frac{1}{2} \Sigma^{ij}_*\hspace{0.02cm}\frac{\partial^2\phi}{\partial u^i\partial u^j}(u)
\end{equation}
where $(A^i_j)$ and $(\Sigma^{ij}_*)$ are the matrices which represent the linear map $A$ and the tensor $\Sigma(x^*)$, in a basis of normal coordinates centred at $x^*$.
Consider the constant-step-size scheme (<ref>), on a Hadamard manifold $M$. Let $(u_t\,:t=0,1,\ldots)$ be given by (<ref>), and assume that e1, d1, d2 hold. For any compactly-supported, smooth function $\phi:T_{x^*}M \rightarrow \mathbb{R}$,
\begin{equation} \label{eq:functional}
\mu^{-1}\left[ \tilde{Q}_\mu\phi(u) - \phi(u)\right] = \mathcal{L}\phi(u) \,+\, \varepsilon_{\mu}(u)
\end{equation}
where $\varepsilon_{\mu}(u) \rightarrow 0$ as $\mu \rightarrow 0$, uniformly on compact subsets of $T_{x^*}M$.
Remark : this proposition implies a functional central limit theorem, by application of [55] (Theorem 19.28). This says that the process $(U^\mu_\tau\,;\tau \geq 0)$, equal to $u_t$ for $t\mu \leq \tau < (t+1)\mu$, converges in distribution to the linear diffusion $U$, with generator (<ref>), in Skorokhod space. This functional central limit theorem can be used to study the asymptotic behavior of $(x_t)$, near a general critical point, which satisfies d2. Sadly, I have not yet had time to develop this idea.
§.§ The stable case
A central limit theorem can be derived, in the case where $x^*$ is a stable critical point of the mean field $X$, in the following sense.
t1. the linear map $A$ in (<ref>) has its spectrum contained in the open left half-plane.
In this case, the generator $\mathcal{L}$ in (<ref>) admits a unique invariant distribution, which is multivariate normal with mean zero and covariance matrix $V$, the solution of the Lyapunov equation $AV + V\!\hspace{0.01cm}A^\dagger + \Sigma_* = 0$ [56] ($A = (A^i_j)$ and $\Sigma_* = (\Sigma^{ij}_*)$). This will be denoted $\mathrm{N}(0,V)$.
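As a numerical aside, the stationary covariance of the limiting diffusion can be computed by vectorising the Lyapunov equation, taken here in the stable-case sign convention $AV + VA^\dagger + \Sigma_* = 0$. The sketch below uses an illustrative stable $A$ (symmetric negative definite, so t1 holds by construction) and an illustrative noise covariance.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

# a stable A and a positive definite noise covariance Sigma
M = rng.standard_normal((n, n))
A = -(M @ M.T + np.eye(n))            # symmetric negative definite, hence stable
B = rng.standard_normal((n, n))
Sigma = B @ B.T + 0.1 * np.eye(n)

# vectorise A V + V A^T = -Sigma:  (kron(I, A) + kron(A, I)) vec(V) = -vec(Sigma)
L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
V = np.linalg.solve(L, -Sigma.flatten()).reshape(n, n)
V = 0.5 * (V + V.T)                   # symmetrise against round-off

residual = np.linalg.norm(A @ V + V @ A.T + Sigma)
min_eig = np.linalg.eigvalsh(V).min() # V should be positive definite
```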
Under the conditions of Proposition <ref>, the Markov chain $(x_t)$ has a unique invariant distribution $\pi_\mu$. Then, the same holds for the Markov chain $(u_t)$, which has a unique invariant distribution $\tilde{\pi}_\mu$, given by $\tilde{\pi}_\mu(B) = \pi_\mu(\mathrm{Exp}_{x^*}(\mu^{1/2}B))$, for any measurable $B \subset T_{x^*}M$.
The following assumptions will be essential for the central limit theorem, which is stated in Proposition <ref>. Assumption t2 ensures tightness of the family $(\tilde{\pi}_\mu\,;\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1})$.
t2. for each $r > 0$ there exists $v(r) > 0$ such that $V(x) \geq v(r)$ if $d(x\hspace{0.02cm},x^*) > r$. Moreover, $v(r) \rightarrow \infty$ as $r \rightarrow \infty$ and $\left. a\middle/v(a^{\scriptscriptstyle 1/2}r)\right.$ is a non-decreasing function of $a > 0$, for any $r$.
t3. there exists $\alpha > 0$ such that
\begin{equation} \label{eq:extravariancecontrol}
\int_{Y}\,\Vert e_y(x)\Vert^{2+\alpha}_x\hspace{0.04cm}P(dy) \leq \tilde{\sigma}^2_{\scriptscriptstyle 0} + \tilde{\sigma}^2_{\scriptscriptstyle 1}\hspace{0.02cm}V(x)
\end{equation}
for some constants $\tilde{\sigma}^2_{\scriptscriptstyle 0}\hspace{0.02cm},\tilde{\sigma}^2_{\scriptscriptstyle 1}$.
Under the conditions of Propositions <ref> and <ref>, if t1, t2, t3 hold, then $\tilde{\pi}_\mu \Rightarrow \mathrm{N}(0,V)$ as $\mu \rightarrow 0$.
§ PROOF OF THE CLT
§.§ Proof of Proposition <ref>
The proof relies on the following two lemmas, which will be proved below.
Assume that e2 holds. Then, the Markov chain $(x_t)$ is Feller, and $|\mathrm{vol}|$-irreducible and aperiodic (where $|\mathrm{vol}|$ denotes the Riemannian volume measure on $M$).
Assume that e1, v1, v2 hold. If $\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1}$, then
\begin{equation} \label{eq:driftzero}
Q_\mu V(x) \leq (1-\lambda\mu/2)V(x) + (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2
\end{equation}
for all $x \in M$.
Admitting these lemmas, the fact that the chain $(x_t)$ is geometrically ergodic follows from [51] (Theorem 16.0.1). Specifically, let $\tilde{V}(x) = V(x) + 1$. Then, by (<ref>),
Q_\mu \tilde{V}(x) \leq (1-\lambda\mu/2)\tilde{V}(x) + b \hspace{0.5cm} \text{where } b = (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2 + \lambda\mu/2
Let $C = \lbrace x:V(x) \leq 4b/(\lambda\mu)\rbrace$. Clearly,
\begin{equation} \label{eq:drift1}
Q_\mu \tilde{V}(x) \leq (1-\lambda\mu/4)\tilde{V}(x) + b\hspace{0.02cm}\mathbf{1}_{\scriptscriptstyle C}(x)
\end{equation}
By v1, $\tilde{V}$ has compact sublevel sets, so $C$ is a compact subset of $M$. Therefore, since $(x_t)$ is Feller, $C$ is a small set for $Q_\mu$ [51] (Theorem 6.0.1). Then, (<ref>) is a geometric drift condition towards the small set $C$. This is equivalent to $(x_t)$ being geometrically ergodic.
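For intuition, a drift inequality of this form, $Q_\mu V \leq (1-\lambda\mu/2)V + (\ell\hspace{0.02cm}\sigma_0^2)\mu^2$, can be observed on a toy Euclidean instance with $V(x) = \Vert x\Vert^2/2$ and mean field $X(x) = -x$, for which $\lambda = 2$, $\ell = 1$ and $\sigma^2_{\scriptscriptstyle 0} = d\sigma^2$. Everything in the sketch is an illustrative assumption, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(5)
d, mu, sigma2 = 3, 0.1, 1.0
lam, ell, s0sq = 2.0, 1.0, d * sigma2     # constants for this toy model

def V(x):
    return 0.5 * np.sum(x ** 2)

gap_min = np.inf
for _ in range(20):
    x0 = 3.0 * rng.standard_normal(d)
    e = np.sqrt(sigma2) * rng.standard_normal((5000, d))
    x1 = x0 + mu * (-x0 + e)               # one step: X(x0) = -x0, plus noise
    QV = np.mean(0.5 * np.sum(x1 ** 2, axis=1))   # Monte Carlo estimate of Q_mu V(x0)
    bound = (1 - lam * mu / 2) * V(x0) + ell * s0sq * mu ** 2
    gap_min = min(gap_min, bound - QV)
```

Here the drift bound holds with room to spare, since $\mathbb{E}[V(x_1)] = (1-\mu)^2 V(x_0) + \mu^2 d\sigma^2/2$ exactly.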
Proof of Lemma <ref> : let $f:M\rightarrow \mathbb{R}$ be a bounded continuous function. By a slight abuse of notation, let $y$ denote a random variable with distribution $P$. From (<ref>),
\begin{equation} \label{eq:qmuf}
Q_\mu f(x) = \mathbb{E}\left[f\!\left(\mathrm{Exp}_x\!\left(\mu\hspace{0.02cm}X(x) + \mu\hspace{0.02cm}e_y(x)\right)\right)\right]
\end{equation}
By e2, $e_y(x)$ is $P$-almost-surely a continuous function of $x$. By the dominated convergence theorem, $Q_\mu f(x)$ is a bounded continuous function of $x$. In other words, the transition kernel $Q_\mu$ is a Feller kernel, so the chain $(x_t)$ is Feller.
To show that $(x_t)$ is $|\mathrm{vol}|$-irreducible and aperiodic, it is enough to show that $Q_\mu(x,B) > 0$ whenever $\mathrm{vol}(B) > 0$, where $Q_\mu(x,B) = Q_\mu\mathrm{1}_{\scriptscriptstyle B}(x)$. By e2, if $w = e_y(x)$ then the distribution of $w$ is of the form $\gamma(w)\hspace{0.03cm}dw$, where $\gamma(w) > 0$, and $dw$ denotes the Lebesgue measure on $T_xM$. Therefore, from (<ref>),
Q_\mu(x,B) = \int_{T_xM} \mathbf{1}_{\scriptscriptstyle B}\!\left(\mathrm{Exp}_x\!\left(\mu\hspace{0.02cm}X(x) + \mu\hspace{0.02cm} w\right)\right)\,\gamma(w)\hspace{0.03cm}dw
Since $M$ is a Hadamard manifold, $\mathrm{Exp}_x$ is a diffeomorphism of $T_xM$ onto $M$. Accordingly,
Q_\mu(x,B) = (1/\mu)^{n}\hspace{0.05cm}\int_{B} \gamma\!\left((1/\mu)\mathrm{Exp}^{-1}_x(z) - X(x)\right)\!\left| J_x(z)\right|^{-1}\mathrm{vol}(dz)
where $n$ is the dimension of $M$, and $\mathrm{Exp}^*_x(\mathrm{vol})(dw) = (\left|J_x\right|\circ \mathrm{Exp}_x)(w)dw$, so that $|J_x(z)| > 0$.
Now, if $\mathrm{vol}(B) > 0$, it is clear that $Q_\mu(x,B) > 0$.
Proof of Lemma <ref> : for any $x_{\scriptscriptstyle 0} \in M$, it follows from (<ref>) that
\begin{equation} \label{eq:qmuV}
Q_\mu V(x_{\scriptscriptstyle 0}) = \mathbb{E}\left[V(x_{\scriptscriptstyle 1})\right] \hspace{0.5cm} \text{where } x_{\scriptscriptstyle 1} =
\mathrm{Exp}_{x_{\scriptscriptstyle 0}}\!\left(\mu\hspace{0.02cm}X(x_{\scriptscriptstyle 0}) + \mu\hspace{0.02cm}e_y(x_{\scriptscriptstyle 0})\right)
\end{equation}
where $y$ denotes a random variable with distribution $P$ (this is the same abuse of notation made in (<ref>)). However, using Lemma <ref>, it is possible to write, as in (<ref>),
V(x_{\scriptscriptstyle 1}) \leq V(x_{\scriptscriptstyle 0}) + \mu\hspace{0.02cm}\langle\mathrm{grad}\,V,X_y\rangle_{x_{\scriptscriptstyle 0}} + \mu^2\ell\left(\Vert X \Vert^2_{x_{\scriptscriptstyle 0}} + \Vert e_{y}\Vert^2_{x_{\scriptscriptstyle 0}}\right)
By e1 and (<ref>), it follows after taking expectations,
\begin{equation} \label{eq:prooflemdrifto1}
Q_\mu V(x_{\scriptscriptstyle 0}) \leq V(x_{\scriptscriptstyle 0}) + \mu\hspace{0.02cm}\langle\mathrm{grad}\,V,X\rangle_{x_{\scriptscriptstyle 0}} +
\mu^2\ell(1+\sigma^2_{\scriptscriptstyle 1})\Vert X \Vert^2_{x_{\scriptscriptstyle 0}}
+ (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2
\end{equation}
By v1, $V$ satisfies (<ref>), so that (<ref>) implies
\begin{equation} \label{eq:prooflemdrifto2}
Q_\mu V(x_{\scriptscriptstyle 0}) \leq V(x_{\scriptscriptstyle 0}) + \mu\hspace{0.02cm}(1 - \mu\hspace{0.02cm}\ell(1+\sigma^2_{\scriptscriptstyle 1})/c)\hspace{0.02cm}\langle\mathrm{grad}\,V,X\rangle_{x_{\scriptscriptstyle 0}} + (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2
\end{equation}
Since $\langle\mathrm{grad}\,V,X\rangle$ is negative, if $\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1}$, then (<ref>) implies
Q_\mu V(x_{\scriptscriptstyle 0}) \leq V(x_{\scriptscriptstyle 0}) + (\mu/2)
\hspace{0.02cm}\langle\mathrm{grad}\,V,X\rangle_{x_{\scriptscriptstyle 0}}
+ (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2
Finally, by v2, this yields
\begin{equation} \label{eq:prooflemdrift3}
Q_\mu V(x_{\scriptscriptstyle 0}) \leq (1-\lambda\mu/2)V(x_{\scriptscriptstyle 0}) + (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2
\end{equation}
which is the same as (<ref>), since $x_{\scriptscriptstyle 0}$ is arbitrary.
§.§ Proof of Proposition <ref>
Proposition <ref> implies that the chain $(x_t)$ has a unique invariant distribution, here denoted $\pi_\mu\hspace{0.02cm}$, for any $\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1}$. Integrating both sides of (<ref>) with respect to $\pi_\mu\hspace{0.02cm}$, it follows that
\int_M Q_\mu V(x)\hspace{0.02cm}\pi_\mu(dx) \leq (1-\lambda\mu/2)\int_M V(x)\hspace{0.02cm}\pi_\mu(dx) + (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2
Since $\pi_\mu$ is an invariant distribution of the transition kernel $Q_\mu\hspace{0.02cm}$, this means
\int_M V(x)\hspace{0.02cm}\pi_\mu(dx) \leq (1-\lambda\mu/2)\int_M V(x)\hspace{0.02cm}\pi_\mu(dx) + (\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0})\hspace{0.03cm}\mu^2
In other words,
\begin{equation} \label{eq:Vmoment}
\int_M V(x)\hspace{0.02cm}\pi_\mu(dx) \leq 2(\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0}/\lambda)\hspace{0.02cm}\mu
\end{equation}
so, by Markov's inequality,
\begin{equation} \label{eq:Vmarkov}
\pi_{\mu}(V > v) \leq 2(\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0}/\lambda)\hspace{0.02cm}\frac{\mu}{v} \hspace{0.5cm} \text{for all $v > 0$}
\end{equation}
By v1, $V$ has compact sublevel sets, so (<ref>) implies the family $(\pi_\mu\,;\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1})$ is tight. If $\pi_*$ is a limit point of this family at $\mu = 0$, then by the Portmanteau theorem, $\pi_*(V > v) = 0$ for all $v > 0$. In other words, $\pi_*(V = 0) = 1$. By v3, this is equivalent to $\pi_*(\lbrace x^*\rbrace) = 1$, or to $\pi_* = \delta_{x^*}\hspace{0.02cm}$.
§.§ Proof of Proposition <ref>
The proof exploits the relation (<ref>), between the transition kernels $Q_\mu$ and $\tilde{Q}_\mu\hspace{0.02cm}$, using the following Lemmas <ref>, <ref>, and <ref>.
In Lemma <ref>, $[H\!:\!T]$ will denote the contraction of a (0,2)-tensor $H$ with a (2,0)-tensor $T$. This is $[H\!:\!T] = H_{ij}T^{ij}$, in any local coordinates. Moreover, if $f :M\rightarrow \mathbb{R}$ is compactly-supported and smooth, $A_f\hspace{0.02cm},B_f$ denote positive numbers such that
\begin{equation} \label{eq:AfBf}
|\mathrm{Hess}\,f_x(w,w)| \leq A_f\Vert w \Vert^2_x \hspace{0.25cm}\text{and}\hspace{0.2cm}
|\nabla\mathrm{Hess}\,f_x(w,w,w)| \leq B_f\Vert w \Vert^3_x
\end{equation}
for any $x \in M$ and $w \in T_xM$, where $\nabla\mathrm{Hess}\,f$ is the covariant derivative of the Hessian of $f$ (respectively, $\mathrm{Hess}\,f$ and $\nabla\mathrm{Hess}\,f$ are (0,2)- and (0,3)-tensor fields).
For any compactly-supported, smooth $f :M\rightarrow \mathbb{R}$,
\begin{equation} \label{eq:qtaylor}
Q_\mu f(x) = f(x) + \mu\hspace{0.02cm}Xf(x) + \frac{\mu^2}{2}[\mathrm{Hess}\,f\!:\!\Sigma + X \otimes X]_x + \mu^2\hspace{0.03cm}\mathcal{R}_x(f,\mu)
\end{equation}
where the remainder term $\mathcal{R}_x(f,\mu)$ satisfies
\begin{equation} \label{eq:qremainder}
\mathcal{R}_x(f,\mu) \leq 2A_f\hspace{0.03cm}\mathbb{E}\left[\mathbf{1}\lbrace \Vert e_y\Vert_x> K\rbrace\Vert e_y\Vert^2_x\right] + 6B_f\hspace{0.03cm}\mu\hspace{0.02cm}(2\Vert X \Vert^2_x + 2K^2)\hspace{0.03cm}\mathbb{E}\left[\mathbf{1}\lbrace \Vert e_y\Vert_x> K\rbrace\Vert e_y\Vert_x\right]+2B_f\hspace{0.03cm}\mu\hspace{0.02cm}(4\Vert X \Vert^3_x + 4K^3)
\end{equation}
for any (arbitrarily chosen) $K > 0$.
Given normal coordinates $(x^i\,;i=1,\ldots, n)$ on $M$, with origin at $x^*$, recall the coordinate vector fields $\partial_i = \left.\partial\middle/\partial x^i\right.$. Any function $\phi:T_{x^*}M\rightarrow \mathbb{R}$ may be identified with a function of $n$ variables, $\phi(u) = \phi(u^{\scriptscriptstyle 1},\ldots,u^n)$, for $u \in T_{x^*}M$ where $u = u^i\partial_i(x^*)$.
Let $(x^i\,;i=1,\ldots, n)$ be normal coordinates on $M$ with origin at $x^*$. For any smooth function $\phi:T_{x^*}M\rightarrow \mathbb{R}$, if $\psi_\mu$ is given by (<ref>), then
\begin{equation} \label{eq:normalderivatives}
\partial_i(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u)) = \mu^{-\frac{1}{2}}\frac{\partial\phi}{\partial u^i}(u) \hspace{0.36cm};\hspace{0.3cm}
\partial_{ij}(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u)) = \mu^{-1}\frac{\partial^2\phi}{\partial u^i\partial u^j}(u)
\end{equation}
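In the Euclidean case $M = \mathbb{R}^n$ with $x^* = 0$, normal coordinates are Cartesian coordinates and $\psi_\mu(x) = \mu^{-1/2}x$, so the first identity above is the ordinary chain rule. A minimal numerical sketch of this scaling (the test function $\phi$ and all numerical values are illustrative choices, not taken from the text):

```python
import numpy as np

# Euclidean illustration: normal coordinates are Cartesian, psi_mu(x) = x / sqrt(mu).
mu = 0.25
phi = lambda u: np.exp(-u ** 2)                  # hypothetical smooth test function
dphi = lambda u: -2.0 * u * np.exp(-u ** 2)      # its exact derivative

f = lambda x: phi(x / np.sqrt(mu))               # f = phi o psi_mu

u = 0.7
x = np.sqrt(mu) * u                              # x = psi_mu^{-1}(u)
h = 1e-6
fd = (f(x + h) - f(x - h)) / (2.0 * h)           # finite-difference derivative of f at x

predicted = dphi(u) / np.sqrt(mu)                # mu^{-1/2} * dphi/du(u), as in the lemma
```

The second identity can be checked in the same way, using a second-order central difference.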
Let $X^i(x)$ denote the components of the mean field $X$, with respect to the normal coordinates $(x^i\,;i=1,\ldots,n)$. If d2 holds, then
\begin{equation} \label{eq:xnormals}
X^i(\psi^{\scriptscriptstyle -1}(u)) = \mu^{\frac{1}{2}}\hspace{0.02cm}A^i_ju^j + R^i(\mu^{\frac{1}{2}}u)
\end{equation}
where $|R^i(u)| = o(\Vert u \Vert_{x^*})$.
Lemmas <ref>, <ref>, and <ref> will be proved below. Accepting them to be true, recall (<ref>)
\tilde{Q}_\mu\phi(u) = Q_\mu(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u))
Replacing (<ref>) into the right-hand side gives
\begin{equation} \label{eq:proofunctional}
\mu^{-1}\left[ \tilde{Q}_\mu\phi(u) - \phi(u)\right] = X(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u)) + \frac{\mu}{2}\hspace{0.02cm}[\mathrm{Hess}\,(\phi\circ \psi_\mu)\!:\!T]_{\psi^{-1}(u)} + \mu\hspace{0.02cm}\mathcal{R}_{\psi^{-1}(u)}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)
\end{equation}
where $T = \Sigma + X \otimes X$. However, working in normal coordinates,
X(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u)) =
X^i(\psi^{\scriptscriptstyle -1}(u)) \partial_i(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u))
so that, by (<ref>) and (<ref>),
X(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u)) = \left\lbrace A^i_ju^j + \mu^{-\frac{1}{2}} R^i(\mu^{\frac{1}{2}}u) \right\rbrace \frac{\partial\phi}{\partial u^i}(u)
Since $\phi$ is compactly-supported, this can be written
\begin{equation} \label{eq:proofunctional1}
X(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u)) = A^i_ju^j\hspace{0.02cm}\frac{\partial\phi}{\partial u^i}(u) + \varepsilon^{\scriptscriptstyle 1}_\mu(u)
\end{equation}
where $\varepsilon^{\scriptscriptstyle 1}_\mu(u) \rightarrow 0$, uniformly on $T_{x^*}M$, as $\mu \rightarrow 0$. For the second term in (<ref>), using (<ref>),
[\mathrm{Hess}\,(\phi\circ \psi_\mu)\!:\!T]_{\psi^{-1}(u)} = T^{ij}(\psi^{-1}(u))\left[ \partial_{ij}(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u)) - \Gamma^k_{ij}(\psi^{-1}(u))\hspace{0.02cm}\partial_k(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u))\right]
so that, by (<ref>),
[\mathrm{Hess}\,(\phi\circ \psi_\mu)\!:\!T]_{\psi^{-1}(u)} = \mu^{-1}\hspace{0.03cm}T^{ij}(\psi^{-1}(u))\left[\frac{\partial^2\phi}{\partial u^i\partial u^j}(u) - \mu^{\frac{1}{2}}\Gamma^k_{ij}(\psi^{-1}(u))\frac{\partial\phi}{\partial u^k}(u)\right]
where $(\Gamma^i_{jk})$ denote the Christoffel symbols. Since $\phi$ is compactly-supported, this can be written
[\mathrm{Hess}\,(\phi\circ \psi_\mu)\!:\!T]_{\psi^{-1}(u)} = \mu^{-1}\hspace{0.03cm}T^{ij}_*\hspace{0.02cm}\frac{\partial^2\phi}{\partial u^i\partial u^j}(u) + \varepsilon^{\scriptscriptstyle 2}_\mu(u)
where $(T^{ij}_*)$ is the matrix which represents the tensor $T(x^*)$ in normal coordinates, and where $\varepsilon^{\scriptscriptstyle 2}_\mu(u) \rightarrow 0$, uniformly on $T_{x^*}M$, as $\mu \rightarrow 0$. Since (clearly, from (<ref>)), $T(x^*) = \Sigma(x^*)$, it follows
\begin{equation} \label{eq:proofunctional2}
[\mathrm{Hess}\,(\phi\circ \psi_\mu)\!:\!T]_{\psi^{-1}(u)} = \mu^{-1}\hspace{0.03cm}\Sigma^{ij}_*\hspace{0.02cm}\frac{\partial^2\phi}{\partial u^i\partial u^j}(u) + \varepsilon^{\scriptscriptstyle 2}_\mu(u)
\end{equation}
Then, replacing (<ref>) and (<ref>) into (<ref>), and recalling the definition of $\mathcal{L}$ from (<ref>),
\begin{equation} \label{eq:proofunctional11}
\mu^{-1}\left[ \tilde{Q}_\mu\phi(u) - \phi(u)\right] =
\mathcal{L}\phi(u) \,+\, \varepsilon^{\scriptscriptstyle 1}_\mu(u) + \varepsilon^{\scriptscriptstyle 2}_\mu(u) +
\mu\hspace{0.02cm}\mathcal{R}_{\psi^{-1}(u)}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)
\end{equation}
To conclude, let $\varepsilon_\mu(u) = \varepsilon^{\scriptscriptstyle 1}_\mu(u) + \varepsilon^{\scriptscriptstyle 2}_\mu(u) +
\mu\hspace{0.02cm}\mathcal{R}_{\psi^{-1}(u)}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)$, and recall that $\varepsilon^{\scriptscriptstyle 1}_\mu(u)$ and
$\varepsilon^{\scriptscriptstyle 2}_\mu(u)$ converge to zero, uniformly on $T_{x^*}M$. Moreover, using (<ref>) and (<ref>), it is straightforward that $\mathcal{R}_{\psi^{-1}(u)}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)$ is bounded on compact subsets of $T_{x^*}M$, (by an upper bound which is independent of $\mu$). Therefore, $\varepsilon_{\mu}(u) \rightarrow 0$ as $\mu \rightarrow 0$, uniformly on compact subsets of $T_{x^*}M$.
Proof of Lemma <ref> : the proof will rely on the following variant of Taylor expansion (compare to [54], Section 2). Let $f:M \rightarrow \mathbb{R}$ be a compactly-supported, smooth function. If $A_f\hspace{0.02cm},B_f$ are given by (<ref>),
$x \in M$ and $\xi\hspace{0.02cm},\eta \in T_x M$, then
\begin{equation} \label{eq:pflug}
f(\mathrm{Exp}_x(\xi + \eta)) = f(x) + (\xi + \eta)f + \frac{1}{2}\hspace{0.02cm}[\mathrm{Hess}\,f\!:\!(\xi + \eta)\otimes(\xi+\eta)] + \mathcal{R}_f(x)
\end{equation}
where $\left|\mathcal{R}_f(x)\right| \leq 2A_f\hspace{0.02cm}\Vert \eta\Vert^2_x + 6B_f\hspace{0.02cm}\Vert \xi\Vert^2_x\Vert \eta\Vert^{\phantom{2}}_x + 2B_f\hspace{0.02cm}\Vert \xi\Vert^3_x\hspace{0.2cm}$.
To apply (<ref>), recall (<ref>),
Q_\mu f(x) = \mathbb{E}\left[f\!\left(\mathrm{Exp}_x\!\left(\mu\hspace{0.02cm}X(x) + \mu\hspace{0.02cm}e_y(x)\right)\right)\right]
and let $\xi = \mu\hspace{0.02cm}X(x) + \mu\hspace{0.02cm}\mathbf{1}\lbrace \Vert e_y\Vert_x \leq K\rbrace\hspace{0.02cm}e_y(x)$, $\eta = \mu\hspace{0.02cm}\mathbf{1}\lbrace \Vert e_y\Vert_x > K\rbrace\hspace{0.02cm}e_y(x)$. Taking the expectation of the Taylor expansion in (<ref>) and using (<ref>) and (<ref>), it follows that, as in (<ref>),
Q_\mu f(x) = f(x) + \mu\hspace{0.02cm}Xf(x) + \frac{\mu^2}{2}[\mathrm{Hess}\,f\!:\!\Sigma + X \otimes X]_x + \mu^2\hspace{0.03cm}\mathcal{R}_x(f,\mu)
where $|\mathcal{R}_x(f,\mu)|$ is less than or equal to
\begin{array}{l}
2A_f\hspace{0.03cm}\mathbb{E}\left[\mathbf{1}\lbrace \Vert e_y\Vert_x> K\rbrace\Vert e_y\Vert^2_x\right] + \\[0.15cm]
6B_f\hspace{0.03cm}\mu\hspace{0.02cm}\mathbb{E}\left[\Vert \hspace{0.02cm}X(x) + \mathbf{1}\lbrace \Vert e_y\Vert_x \leq K\rbrace\hspace{0.02cm}e_y(x)\Vert^2_x \times \Vert \mathbf{1}\lbrace \Vert e_y\Vert_x > K\rbrace\hspace{0.02cm}e_y(x)\Vert_x\right] + \\[0.15cm]
2B_f\hspace{0.03cm}\mu\hspace{0.02cm}\mathbb{E}\left[\Vert \hspace{0.02cm}X(x) + \mathbf{1}\lbrace \Vert e_y\Vert_x \leq K\rbrace\hspace{0.02cm}e_y(x)\Vert^3_x \right]
\end{array}
Then, to obtain (<ref>), it is enough to note
\begin{array}{l}
\Vert \hspace{0.02cm}X(x) + \mathbf{1}\lbrace \Vert e_y\Vert_x \leq K\rbrace\hspace{0.02cm}e_y(x)\Vert^3_x \leq 4\Vert X \Vert^3_x + 4K^3 \\[0.2cm]
\Vert \hspace{0.02cm}X(x) + \mathbf{1}\lbrace \Vert e_y\Vert_x \leq K\rbrace\hspace{0.02cm}e_y(x)\Vert^2_x \leq 2\Vert X \Vert^2_x + 2K^2
\end{array}
which follow from the elementary inequalities $(a+b)^3 \leq 4a^3 + 4b^3$ and $(a+b)^2 \leq 2a^2 + 2b^2$, valid for $a\hspace{0.02cm},b \geq 0$.
Proof of (<ref>) : if $f:M\rightarrow \mathbb{R}$ is smooth and compactly-supported, then, for $x \in M$ and $\zeta \in T_xM$, the second- and third-order Taylor expansions of $f$ at $x$ give
f(\mathrm{Exp}_x(\zeta)) = f(x) + \zeta f + \frac{1}{2}\hspace{0.02cm}[\mathrm{Hess}\,f\!:\!\zeta\otimes \zeta] + \mathcal{R}_f(x)
where, simultaneously, $|\mathcal{R}_f(x)| \leq A_f\Vert \zeta\Vert^2_x$ and $|\mathcal{R}_f(x)| \leq B_f\Vert \zeta\Vert^3_x\hspace{0.2cm}$. If $\zeta = \xi + \eta$, then
\begin{array}{rl}
|\mathcal{R}_f(x)| \leq 2A_f\Vert \eta\Vert^2_x & \text{if $\Vert \eta\Vert_x \geq \Vert \xi\Vert_x$}\\[0.12cm]
|\mathcal{R}_f(x)| \leq 2B_f\Vert \xi\Vert^3_x + 6B_f\Vert \xi\Vert^2_x\Vert \eta\Vert_x& \text{if $\Vert \eta\Vert_x < \Vert \xi\Vert_x$}
\end{array}
and (<ref>) is obtained by adding up these two cases.
Proof of Lemma <ref> : let $f:M\rightarrow \mathbb{R}$ be a smooth function. From the definition of coordinate vector fields [53] (Page 49),
\begin{equation} \label{eq:coofields}
\partial_if(x) = (f\circ \mathrm{Exp}_{x^*})^\prime(\mathrm{Exp}^{-1}_{x^*}(x))(\partial_i(x^*))
\end{equation}
where the prime denotes the Fréchet derivative. To obtain (<ref>), set $f = \phi\circ \psi_\mu$ and $x = \psi^{\scriptscriptstyle -1}(u)$, so that $f\circ \mathrm{Exp}_{x^*}(w) = \phi(\mu^{-\frac{1}{2}}w)$ (for $w \in T_{x^*}M$) and $\mathrm{Exp}^{-1}_{x^*}(x) = \mu^{\frac{1}{2}}u$. Then, in particular, $(f\circ \mathrm{Exp}_{x^*})^\prime = \mu^{-\frac{1}{2}}\phi^\prime$. Replacing into (<ref>), it follows that
\partial_i(\phi\circ \psi_\mu)(\psi^{\scriptscriptstyle -1}(u)) = \mu^{-\frac{1}{2}}\phi^\prime(u)(\partial_i(x^*))
Now, if $\phi$ is identified with a function of $n$ variables, $\phi(u) = \phi(u^{\scriptscriptstyle 1},\ldots,u^n)$ where $u = u^i\partial_i(x^*)$, then $\phi^\prime(u)(\partial_i(x^*)) = \partial\phi(u)/\partial u^i$. This yields the first identity in (<ref>). The second identity follows from the first by repeated application.
Proof of Lemma <ref> : assume that d2 holds. Using the same notation as in (<ref>), consider the Taylor expansion of the coordinate vector fields $\partial_i$ (see [11], Page 90),
\partial_i(x) = \Pi^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\left(\partial_i(x^*) + \nabla\partial_i(x^*)\left(\mathrm{Exp}^{-1}_{x^*}(x)\right) + o(d(x\hspace{0.02cm},x^*))\right)
where $\nabla \partial_i(x^*):T_{x^*}M\rightarrow T_{x^*}M$ is the covariant derivative of $\partial_i$ at $x^*$. From (<ref>) and (<ref>), it is clear that $\nabla \partial_i(x^*) = 0$, and therefore
\begin{equation} \label{eq:coordinateA}
\partial_i(x) = \Pi^{\scriptscriptstyle 1}_{\scriptscriptstyle 0}\left(\partial_i(x^*) + o(d(x\hspace{0.02cm},x^*))\right)
\end{equation}
Take the scalar product of (<ref>) and (<ref>). Since parallel transport preserves scalar products,
\langle X\hspace{0.02cm},\partial_i\rangle_x = \langle A\left(\mathrm{Exp}^{-1}_{x^*}(x)\right),\partial_i(x^*)\rangle_{x^*} + o(d(x\hspace{0.02cm},x^*))
However, $\mathrm{Exp}^{-1}_{x^*}(x) = x^i\partial_i(x^*)$, and $\partial_i(x^*)$ form an orthonormal basis of $T_{x^*}M$. Therefore,
\begin{equation} \label{eq:proofxnormals1}
\langle X\hspace{0.02cm},\partial_i\rangle_x = A^i_jx^j + o(d(x\hspace{0.02cm},x^*))
\end{equation}
where $A(\partial_i(x^*)) = A^k_i\partial^{\phantom{k}}_k(x^*)$. Finally, note that (in normal coordinates), the metric coefficients satisfy
g_{ij}(x) = \delta_{ij} + o(d(x\hspace{0.02cm},x^*))
Using these to express the scalar product in (<ref>), it can be seen that
\begin{equation} \label{eq:proofnormals2}
X^i(x) = A^i_jx^j + o(d(x\hspace{0.02cm},x^*))
\end{equation}
Thus, (<ref>) follows by putting $x = \psi^{-1}_\mu(u)$ in (<ref>). Then, $x^j = \mu^{\frac{1}{2}}u^j$ and $d(x\hspace{0.02cm},x^*) = \mu^{\frac{1}{2}}\Vert u \Vert_{x^*\hspace{0.02cm}}$.
§.§ Proof of Proposition <ref>
To begin, it is helpful to establish tightness of the family $(\tilde{\pi}_\mu\,;\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1})$.
Assume that e1, e2, v1, v2, t2 hold. Then, the family of probability distributions $(\tilde{\pi}_\mu\,;\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1})$ is tight.
Accepting this lemma, let $\tilde{\pi}_*$ be some limit point of the family $(\tilde{\pi}_\mu\,;\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1})$ as $\mu \rightarrow 0$. By integrating both sides of (<ref>) with respect to
$\tilde{\pi}_\mu\hspace{0.02cm}$, and recalling that $\tilde{\pi}_\mu$ is an invariant distribution of $\tilde{Q}_\mu$ (so the integral of the left-hand side is zero), it follows that
\begin{equation} \label{eq:proofclt11}
\int_{T_{x^*}M}\mathcal{L}\phi(u)\hspace{0.03cm}\tilde{\pi}_\mu(du) = - \int_{T_{x^*}M} \varepsilon_{\mu}(u)\hspace{0.03cm}\tilde{\pi}_\mu(du)
\end{equation}
where $\varepsilon_\mu(u) = \varepsilon^{\scriptscriptstyle 1}_\mu(u) + \varepsilon^{\scriptscriptstyle 2}_\mu(u) + \mu\hspace{0.02cm}\mathcal{R}_{\psi^{-1}(u)}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)$, in the notation of (<ref>), (<ref>) and (<ref>), from the proof of Proposition <ref>. Since both $\varepsilon^{\scriptscriptstyle 1}_\mu(u)$ and
$\varepsilon^{\scriptscriptstyle 2}_\mu(u)$ converge to zero as $\mu \rightarrow 0$, uniformly on $T_{x^*}M$, it follows from (<ref>) that,
\left|\int_{T_{x^*}M}\mathcal{L}\phi(u)\hspace{0.03cm}\tilde{\pi}_*(du)\right| \leq \limsup_{\mu \rightarrow 0} \int_{T_{x^*}M}\,
\mu\left|\mathcal{R}_{\psi^{-1}(u)}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)\right|\tilde{\pi}_\mu(du)
Since $\tilde{\pi}_\mu$ is the image of $\pi_\mu$ under $\psi_\mu\hspace{0.02cm}$, this is the same as
\begin{equation} \label{eq:proofclt12}
\left|\int_{T_{x^*}M}\mathcal{L}\phi(u)\hspace{0.03cm}\tilde{\pi}_*(du)\right| \leq \limsup_{\mu \rightarrow 0} \int_{M}\,
\mu\left|\mathcal{R}_{x}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)\right|{\pi}_\mu(dx)
\end{equation}
To bound the right-hand side, put $f = \phi\circ \psi_\mu$ in (<ref>). If $\bar{f} = \phi \circ \mathrm{Exp}_{x^*}$, then $\bar{f}$ is compactly-supported and smooth. Moreover, applying the chain rule, it follows from (<ref>) that $A_f = \mu^{-1}\!A_{\bar{f}}$ and $B_f =
\mu^{-\frac{3}{2}}B_{\bar{f}\hspace{0.03cm}}$. Therefore, by (<ref>),
\begin{equation} \label{eq:qremainderters}
\left|\mathcal{R}_{x}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)\right| \leq 2\mu^{-1}\!A_{\bar{f}}\hspace{0.04cm}\mathbb{E}\left[\mathbf{1}\lbrace \Vert e_y\Vert_x> K\rbrace\Vert e_y\Vert^2_x\right] + 6\mu^{-\frac{1}{2}}B_{\bar{f}}\hspace{0.03cm}(2\Vert X \Vert^2_x + 2K^2)\hspace{0.03cm}\mathbb{E}\left[\mathbf{1}\lbrace \Vert e_y\Vert_x> K\rbrace\Vert e_y\Vert_x\right]+2\mu^{-\frac{1}{2}}B_{\bar{f}}\hspace{0.03cm}(4\Vert X \Vert^3_x + 4K^3)
\end{equation}
Now, since t3 holds, it follows from (<ref>) that
\begin{equation} \label{eq:proofclt13}
\mathbb{E}\left[\mathbf{1}\lbrace \Vert e_y\Vert_x> K\rbrace\Vert e_y\Vert^2_x\right] \leq K^{-\alpha}\hspace{0.02cm}(\tilde{\sigma}^2_{\scriptscriptstyle 0} + \tilde{\sigma}^2_{\scriptscriptstyle 1}\hspace{0.02cm}V(x))
\end{equation}
Moreover, by (<ref>) (assuming that $K > 1$),
\begin{equation} \label{eq:proofclt14}
(2\Vert X \Vert^2_x + 2K^2)\hspace{0.03cm}\mathbb{E}\left[\mathbf{1}\lbrace \Vert e_y\Vert_x> K\rbrace\Vert e_y\Vert_x\right] \leq
(2\Vert X \Vert^2_x + 2K^2)(\sigma^2_{\scriptscriptstyle 0} + \sigma^2_{\scriptscriptstyle 1}\hspace{0.02cm}\Vert X\Vert^2_x)
\end{equation}
Then, it follows from (<ref>), (<ref>) and (<ref>) that
\begin{array}{l}
\mu\left|\mathcal{R}_{x}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)\right| \leq \\[0.2cm]
2A_{\bar{f}}\hspace{0.04cm}K^{-\alpha}\hspace{0.02cm}(\tilde{\sigma}^2_{\scriptscriptstyle 0} + \tilde{\sigma}^2_{\scriptscriptstyle 1}\hspace{0.02cm}V(x)) \,+ 6\mu^{\frac{1}{2}}B_{\bar{f}}(2\Vert X \Vert^2_{x} + 2K^2)(\sigma^2_{\scriptscriptstyle 0} + \sigma^2_{\scriptscriptstyle 1}\hspace{0.02cm}\Vert X\Vert^2_{x}) +
2\mu^{\frac{1}{2}}B_{\bar{f}}\hspace{0.03cm}(4\Vert X \Vert^3_{x} + 4K^3)
\end{array}
Integrate this inequality with respect to $\pi_\mu\hspace{0.02cm}$, and recall from Proposition <ref> that $\pi_\mu$ converges weakly to $\delta_{x^*}$ as $\mu \rightarrow 0$. It follows that,
\limsup_{\mu \rightarrow 0}
\int_{M}\,
\mu\left|\mathcal{R}_{x}(\phi\circ \psi_\mu\hspace{0.02cm},\mu)\right|{\pi}_\mu(dx) \leq
2A_{\bar{f}}\hspace{0.04cm}K^{-\alpha}\hspace{0.02cm}(\tilde{\sigma}^2_{\scriptscriptstyle 0}
+ \tilde{\sigma}^2_{\scriptscriptstyle 1}\hspace{0.02cm}V(x^*))
However, since $K$ can be chosen arbitrarily large, and $\alpha > 0$, the limit superior is equal to zero, and (<ref>) becomes
\int_{T_{x^*}M}\mathcal{L}\phi(u)\hspace{0.03cm}\tilde{\pi}_*(du) = 0
This means that $\tilde{\pi}_*$ is an invariant distribution of the generator $\mathcal{L}$, and therefore $\tilde{\pi}_* = \mathrm{N}(0,V)$, as required.
Proof of Lemma <ref> : by Proposition <ref>, e1, e2, v1, v2 ensure that the chain $(x_t)$ has a unique invariant distribution $\pi_\mu\hspace{0.02cm}$, whenever $\mu \leq c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1}$. Then, the chain $(u_t)$ has a unique invariant distribution $\tilde{\pi}_\mu\hspace{0.02cm}$. According to (<ref>), this is $\tilde{\pi}_\mu(A) = \pi_\mu(\mathrm{Exp}_{x^*}(\mu^{1/2}A))$, for any measurable $A \subset T_{x^*}M$. The same e1, e2, v1, v2 also imply (<ref>), in the proof of Proposition <ref>. Now, for $u \in T_{x^*} M$, let $x = \mathrm{Exp}_{x^*}(\mu^{1/2}u)$, and note that $\Vert u \Vert_{x^*} > r$ if and only if $d(x\hspace{0.02cm},x^*) > \mu^{\scriptscriptstyle 1/2}r$. It then follows from Assumption t2 that
\tilde{\pi}_\mu(\Vert u \Vert_{x^*} > r) \leq \pi_\mu(V > v(\mu^{\scriptscriptstyle 1/2}r))
so, using Markov's inequality and (<ref>),
\tilde{\pi}_\mu(\Vert u \Vert_{x^*} > r) \leq 2(\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0}/\lambda)\left(\mu\middle/v(\mu^{\scriptscriptstyle 1/2}r)\right)
To conclude, let $\bar{\mu} = c\hspace{0.02cm}(2\ell(1+\sigma^{\scriptscriptstyle 2}_{\scriptscriptstyle 1}))^{\scriptscriptstyle -1}$. By t2,
$\left.\mu\middle/v(\mu^{\scriptscriptstyle 1/2}r)\right. \leq \left.\bar{\mu}\middle/v(\bar{\mu}^{\scriptscriptstyle 1/2}r)\right.$. Therefore,
\tilde{\pi}_\mu(\Vert u \Vert_{x^*} > r) \leq 2(\ell\hspace{0.02cm}\sigma^2_{\scriptscriptstyle 0}/\lambda)\left(\bar{\mu}\middle/v(\bar{\mu}^{\scriptscriptstyle 1/2}r)\right)
However (again by t2), the right-hand side of this inequality is independent of $\mu$, and goes to zero as $r \rightarrow \infty$. This is equivalent to the required tightness.
§ RIEMANNIAN AR(1)
Let $M$ be a Hadamard manifold, and $P$ a probability distribution on $M$, which has a strictly positive probability density, with respect to Riemannian volume. Then, let $(y_t\,;t = 1,2,\ldots)$ be independent samples from $P$, and $x_{\scriptscriptstyle 0}$ be a point in $M$. Consider the stochastic update rule, starting from $x_{\scriptscriptstyle 0}$ at $t = 0$,
\begin{equation} \label{eq:ar1}
x_{t+1} = x_t\, \#_{\scriptscriptstyle \mu}\, y_{\scriptscriptstyle\hspace{0.02cm}t+1} \hspace{0.5cm}\text{where $\mu \in (0,1)$}
\end{equation}
where the notation is that of (<ref>) from <ref>. This will be called a Riemannian AR(1) model, since each new $x_{t+1}$ is a geodesic convex combination of the old $x_t$ and of the new sample $y_{\scriptscriptstyle\hspace{0.02cm}t+1\hspace{0.02cm}}$. If $M$ is a Euclidean space, $M = \mathbb{R}^n$, then (<ref>) reads $x_{t+1} = (1-\mu)\hspace{0.02cm}x_t + \mu\hspace{0.02cm}y_{\scriptscriptstyle\hspace{0.02cm}t+1\hspace{0.02cm}}$, which is a first-order auto-regressive model (whence the name AR(1)).
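A minimal simulation of the Euclidean special case $x_{t+1} = (1-\mu)\hspace{0.02cm}x_t + \mu\hspace{0.02cm}y_{t+1}$ (the step size, time horizon, and sampling distribution $P$ below are illustrative choices, not prescribed by the text):

```python
import numpy as np

# Euclidean AR(1): x_{t+1} = (1 - mu) x_t + mu y_{t+1}, with y_t i.i.d. from P.
# Illustrative choices: P = N(3, 1), so the barycentre of P is x_star = 3.0.
rng = np.random.default_rng(0)
mu, n_steps = 0.1, 5000
x_star = 3.0

x = 50.0                                     # deliberately distant starting point
for _ in range(n_steps):
    y = rng.normal(loc=x_star, scale=1.0)
    x = (1.0 - mu) * x + mu * y

# The chain forgets x_0 at geometric rate (1 - mu) per step, and fluctuates
# around x_star with stationary variance mu / (2 - mu), about 0.053 here.
```
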
The update rule (<ref>) may be viewed as a constant-step-size exponential scheme, of the form (<ref>). Specifically, (<ref>) is equivalent to
\begin{equation} \label{eq:arscheme}
x_{t+1} = \mathrm{Exp}_{x_t}\!\left(\mu\hspace{0.02cm}X_{y_{\scriptscriptstyle\hspace{0.02cm}t+1}}(x_t)\right) \hspace{0.5cm} \text{where } X_y(x) = \mathrm{Exp}^{-1}_x(y)
\end{equation}
which defines a time-homogeneous Markov chain $(x_t)$ with values in $M$.
One is tempted to apply the results of <ref> (e.g. on geometric ergodicity), directly to the scheme (<ref>). However, some of the assumptions in <ref> (especially e1), turn out to be quite unnatural. Fortunately, it is possible to proceed along a different path, which only requires the existence of second-order moments. Specifically, it is merely required that
\begin{equation} \label{eq:arsecondorder}
\mathcal{E}(x) = \frac{1}{2}\hspace{0.03cm}\int_M\,d^{\hspace{0.03cm} 2}(x\hspace{0.02cm},y)\hspace{0.03cm}P(dy) < \,\infty
\end{equation}
for some (and therefore all) $x \in M$. As discussed in <ref>, (<ref>) guarantees existence and uniqueness of the Riemannian barycentre $x^*$ of $P$. This is enough for the following proposition.
Consider the Riemannian AR(1) model (<ref>) (or (<ref>)), on a Hadamard manifold $M$. If (<ref>) is verified, then
the Markov chain $(x_t)$ is geometrically ergodic, with a unique invariant distribution $\pi_\mu$. Moreover, $\pi_{\mu}\,\Rightarrow\,\delta_{x^*}$ as $\mu \rightarrow 0$.
The proof of this proposition begins like that of Proposition <ref>, by noting that the Markov chain $(x_t)$ is Feller and $|\mathrm{vol}|$-irreducible and aperiodic. Indeed, since $X_y(x)$ is given by (<ref>), and since $P$ has a strictly positive probability density with respect to $|\mathrm{vol}|$, it follows that Assumption e2 holds. Therefore, it is possible to argue exactly as in the proof of Lemma <ref>.
Now, let $V(x) = d^{\hspace{0.02cm} 2}(x^*,x)/2$. To prove that the chain $(x_t)$ is geometrically ergodic, it is enough to obtain the inequality
\begin{equation} \label{eq:ardrift}
Q_\mu V(x) \leq (1-\mu)^2\hspace{0.02cm}V(x) + \mu^2\hspace{0.02cm}\mathcal{E}(x^*)
\end{equation}
which is similar to (<ref>) of Lemma <ref>. This can then be used, exactly as in the proof of Proposition <ref>, based on [51] (Theorem 16.0.1).
Proof of (<ref>) : for any $x \in M$, note from (<ref>) that
\begin{equation} \label{eq:finalproof1}
Q_\mu V(x) = \mathbb{E}\left[V(x\, \#_{\scriptscriptstyle \mu}\, y)\right]
\end{equation}
where $y$ denotes a random variable with distribution $P$. Recall from <ref> that $V(x)$ is $1/2$-strongly convex. Therefore, by (<ref>),
V(x\, \#_{\scriptscriptstyle \mu}\, y) \leq (1-\mu)V(x) + \mu V(y) - \mu(1-\mu)d^{\hspace{0.03cm} 2}(x\hspace{0.02cm},y)/2
taking expectations, this yields,
\begin{equation} \label{eq:finalproof2}
Q_\mu V(x) \leq
(1-\mu)V(x) + \mu\hspace{0.02cm}\mathcal{E}(x^*) - \mu(1-\mu)\mathcal{E}(x)
\end{equation}
after using the fact that $\mathcal{E}(x) = \mathbb{E}[d^{\hspace{0.03cm} 2}(x\hspace{0.02cm},y)/2]$ for any $x \in M$, which is clear from (<ref>). Now, recall from <ref> that $\mathcal{E}$ is $1/2$-strongly convex. Therefore, by (<ref>)
\begin{equation} \label{eq:finalproof3}
\mathcal{E}(x) \geq \mathcal{E}(x^*) + d^{\hspace{0.02cm} 2}(x^*,x)/2
\end{equation}
where the right-hand side is just $\mathcal{E}(x^*) + V(x)$. Thus, replacing (<ref>) into (<ref>), one has
Q_\mu V(x) \leq
(1-\mu)V(x) + \mu\hspace{0.02cm}\mathcal{E}(x^*) - \mu(1-\mu) V(x)- \mu(1-\mu)\mathcal{E}(x^*)
which immediately yields (<ref>).
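In the Euclidean case, the drift bound $Q_\mu V(x) \leq (1-\mu)^2\hspace{0.02cm}V(x) + \mu^2\hspace{0.02cm}\mathcal{E}(x^*)$ in fact holds with equality, and this can be checked exactly when $P$ has finite support. A sketch with a hypothetical two-point distribution (all numerical values are arbitrary illustrative choices):

```python
# Euclidean check of Q_mu V(x) <= (1 - mu)^2 V(x) + mu^2 E(x*), with V(x) = (x - x*)^2 / 2.
# Here P is a two-point distribution on {a, b} with equal weights; its barycentre is the mean.
a, b = -1.0, 5.0
points, probs = [a, b], [0.5, 0.5]
x_star = sum(p * y for p, y in zip(probs, points))                       # barycentre: 2.0
E_star = sum(p * (y - x_star) ** 2 / 2 for p, y in zip(probs, points))   # E(x*)

mu, x = 0.3, 7.0
V = lambda z: (z - x_star) ** 2 / 2

# Q_mu V(x) = E[ V((1 - mu) x + mu y) ] is an exact finite sum for finite P
QV = sum(p * V((1 - mu) * x + mu * y) for p, y in zip(probs, points))
bound = (1 - mu) ** 2 * V(x) + mu ** 2 * E_star
```

The cross term $\mu(1-\mu)(x-x^*)\,\mathbb{E}[\hspace{0.02cm}y-x^*]$ vanishes because $x^*$ is the mean of $P$, which is why equality holds here.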
Geometric ergodicity ensures the chain $(x_t)$ has a unique invariant distribution $\pi_\mu$. To prove that $\pi_{\mu}\,\Rightarrow\,\delta_{x^*}$ as $\mu \rightarrow 0$, it is possible to argue as in the proof of Proposition <ref>. Precisely, integrating both sides of (<ref>) with respect to $\pi_\mu\hspace{0.02cm}$, it follows that
\int_M Q_\mu V(x)\hspace{0.02cm}\pi_\mu(dx) \leq (1-\mu)^2\hspace{0.02cm}\int_M V(x)\hspace{0.02cm}\pi_\mu(dx) + \mu^2\hspace{0.02cm}\mathcal{E}(x^*)
Since $\pi_\mu$ is an invariant distribution of the transition kernel $Q_\mu\hspace{0.02cm}$, this means
\int_M V(x)\hspace{0.02cm}\pi_\mu(dx) \leq (1-\mu)^2\hspace{0.02cm}\int_M V(x)\hspace{0.02cm}\pi_\mu(dx) + \mu^2\hspace{0.02cm}\mathcal{E}(x^*)
In other words,
\begin{equation} \label{eq:arVmoment}
\int_M V(x)\hspace{0.02cm}\pi_\mu(dx) \leq \mathcal{E}(x^*)\mu/(2-\mu)
\end{equation}
Since $\mu/(2-\mu) \leq \mu$ for $\mu \leq 1$, (<ref>) can be used like (<ref>) in the proof of Proposition <ref>. In this way, $\pi_{\mu}\,\Rightarrow\,\delta_{x^*}$ follows by noting that $V$, defined by $V(x)= d^{\hspace{0.02cm} 2}(x^*,x)/2$, has compact sublevel sets, by the Hopf-Rinow theorem and the fact that $M$ is complete (see [11]), and that $V(x) = 0$ if and only if $x = x^*$.
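In the Euclidean case, the moment bound $\int_M V\hspace{0.02cm}d\pi_\mu \leq \mathcal{E}(x^*)\hspace{0.02cm}\mu/(2-\mu)$ is again an identity: the stationary chain is $x_\infty = x^* + \mu\sum_{k\geq 0}(1-\mu)^k(y_{-k}-x^*)$, so $\int V\hspace{0.02cm}d\pi_\mu$ follows from a geometric series. A deterministic check (the variance $\sigma^2$ of $P$ is an illustrative choice):

```python
# Euclidean case: the stationary solution of x_{t+1} = (1 - mu) x_t + mu y_{t+1} is
#   x_inf = x* + mu * sum_{k>=0} (1 - mu)^k (y_{-k} - x*),
# so  int V dpi_mu = Var(x_inf) / 2 = mu^2 sigma^2 / (2 (1 - (1 - mu)^2)).
mu, sigma2 = 0.2, 3.0            # illustrative step size and variance of P
E_star = sigma2 / 2              # E(x*) = E[(y - x*)^2] / 2 in one dimension

# Truncated geometric series for Var(x_inf); the tail is negligible at this depth
var_inf = mu ** 2 * sigma2 * sum((1 - mu) ** (2 * k) for k in range(2000))
lhs = var_inf / 2                # int V dpi_mu
rhs = E_star * mu / (2 - mu)     # right-hand side of the moment bound
```
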
Remark : thanks to Proposition <ref>, it is now possible to prove that a central limit theorem, identical to Proposition <ref>, holds for the Riemannian AR(1) model (<ref>). This only requires the additional condition (<ref>).
CHAPTER: OPEN PROBLEMS
While working on this thesis, I came across several problems which I found very interesting, and important, but could not solve, or even attack in a meaningful way. I would therefore like to close the thesis with a list of these problems, in the hope that they will attract the attention of more people (not just myself).
In Chapter <ref> : the conclusion of Lemma <ref> only holds for compact Riemannian symmetric spaces which are simply connected. Therefore, the subsequent Propositions <ref> and <ref> are restricted to this simply connected case. The problem is to describe, at least partially, what happens for compact Riemannian symmetric spaces which are not simply connected. It would be particularly interesting to give counterexamples to either one of Propositions <ref> and <ref>, in the non-simply-connected case.
In Chapter <ref> : formula (<ref>) gives the normalising factor $Z(\sigma)$ for a Gaussian distribution on a symmetric space $M$, which belongs to the non-compact case. When $M$ is the space of $N \times N$ Hermitian positive-definite matrices, (<ref>) and (<ref>) provide a closed form expression (valid for any $N$), and an asymptotic expression (valid for large $N$), of the multiple integral in (<ref>).
The problem is to find similar expressions of this integral, for other symmetric spaces. It should be easiest to first deal with the spaces of $N \times N$ real or quaternion positive-definite matrices, and then move on to other spaces, such as the Siegel domain (Example 3, in <ref>).
In Chapter <ref> : the problem is to prove or disprove the conjecture, mentioned in <ref>. Namely, that the MAP and MMS Bayesian estimators are equal, for Gaussian distributions on a space of constant negative curvature.
In Chapter <ref> : as mentioned in <ref>, I have never encountered a function $f:M \rightarrow \mathbb{R}$ ($M$ a non-Euclidean Hadamard manifold), which is strongly convex, and also has a bounded Hessian. The problem is to construct a function $f$ with these properties, or to show this is not possible. Another problem, which is quite important for convex optimisation, is to show that a function $f:M \rightarrow \mathbb{R}$, which is convex and has a bounded Hessian, has co-coercive gradient, in the sense of [57] (Theorem 2.1.5, property (2.1.11)).
In Chapter <ref> : to state in clear and general terms the functional central limit theorem, which follows from Proposition <ref>, and also to derive a similar functional central limit theorem for decreasing-step-size schemes. These can be used in studying the behavior of stochastic approximation schemes, in the presence of unstable critical points (only stable critical points were considered, in the above).
In Chapter <ref> : to generalise Proposition <ref> to the case where $M$ is not a Hadamard manifold. I believe that, in this case, the asymptotic form of the invariant distribution will no longer be multivariate normal (roughly, the scheme can always “jump across" the cut locus of a stable critical point).
[1]
S. Said, L. Bombrun, and Y. Berthoumieu, “Warped Riemannian metrics for
location-scale models,” in Geometric structures of information,
F. Nielsen, Ed. Springer Switzerland,
[2]
A. Durmus, P. Jimenez, E. Moulines, S. Said, and H. T. Wai, “Convergence
analysis of Riemannian stochastic approximation schemes,”
, 2020.
[3]
A. Durmus, P. Jimenez, E. Moulines, and S. Said, “On Riemannian stochastic
approximation schemes with fixed step-size (under review),” in
Artificial Intelligence and Statistics, 2021.
[4]
L. Santilli and M. Tierz, “Riemannian Gaussian distributions, random matrix
ensembles and diffusion kernels,” , 2020.
[5]
P. A. Meyer, “Géométrie stochastique sans larmes,” Séminaire de
probabilités (Strasbourg), vol. 15, pp. 44–102, 1981.
[6]
J. H. Eschenburg, Lecture notes on symmetric spaces (course material
available online), , 1997.
[7]
P. A. Absil, R. Mahony, and R. Sepulchre, Optimization algorithms on
matrix manifolds. Princeton University Press, 2008.
[8]
P. A. Absil and J. Malick, “Projection-like retractions on matrix manifolds,”
SIAM Journal on Optimization, vol. 22, pp. 135–158, 2012.
[9]
T. Sakai, “On cut loci of compact symmetric spaces,” Hokkaido
Mathematical Journal., vol. 6, pp. 136–161, 1977.
[10]
S. Helgason, Differential geometry and symmetric spaces. New York and London: Academic Press, 1962.
[11]
I. Chavel, Riemannian geometry, a modern introduction. Cambridge University Press, 2006.
[12]
C. Mantegazza, G. Mascellani, and G. Uraltsev, “On the distributional hessian
of the distance function,” , 2013.
[13]
P. J. Huber and E. M. Ronchetti, Robust statistics (2nd edition). Wiley-Blackwell, 2009.
[14]
J. E. Marsden, T. Ratiu, and R. Abraham, Manifolds, tensor analysis, and
applications. Springer-Verlag, 2001.
[15]
A. L. Besse, Manifolds all of whose geodesics are closed. New York: Springer-Verlag, 1978.
[16]
V. I. Bogachev, Measure Theory, Volume I. Springer-Verlag, 2007.
[17]
M. L. Mehta, Random matrices (3rd edition). Elsevier Ltd., 2004.
[18]
E. S. Meckes, The random matrix theory of the classical compact
groups. Cambridge University Press,
[19]
S. Kobayashi and K. Nomizu, Foundations of differential geometry, Volume II. Interscience Publishers, 1969.
[20]
N. J. Higham, Functions of matrices : theory and computation. SIAM Publications, 2008.
[21]
X. Pennec, P. Fillard, and N. Ayache, “A Riemannian framework for tensor
computing,” International Journal of Computer Vision, vol. 66, no. 1,
pp. 41–66, 2006.
[22]
S. Said and J. H. Manton, “Riemannian barycentres of Gibbs distributions :
new results on concentration and convexity,” Information Geometry
(under review), 2020.
[23]
M. Fréchet, “Les éléments aléatoires de nature quelconque dans un espace
distancié,” Annales de l'I.H.P., vol. 10, no. 4, pp. 215–310, 1948.
[24]
R. Bhattacharya and V. Patrangenaru, “Large sample theory of intrinsic and
extrinsic sample means on manifolds I,” The annals of statistics,
vol. 31, no. 1, pp. 1–29, 2003.
[25]
——, “Large sample theory of intrinsic and extrinsic sample means on
manifolds II,” The annals of statistics, vol. 33, no. 3, pp.
1225–1259, 2005.
[26]
W. S. Kendall, “Probability, convexity, and harmonic maps with small image
I : uniqueness and fine existence,” Proceedings of the London
Mathematical Society, vol. 61, no. 2, pp. 371–406, 1990.
[27]
B. Afsari, “Riemannian $L^{\scriptscriptstyle p}$ center of mass :
existence, uniqueness and convexity,” Proceedings of the American
Mathematical Society, vol. 139, no. 2, pp. 655–673, 2010.
[28]
W. S. Kendall, “The propeller : a counterexample to a conjectured criterion
for the existence of certain convex functions,” Journal of the London
Mathematical Society, vol. 36, no. 2, pp. 364–374, 1992.
[29]
K. T. Sturm, “Probability measures on metric spaces of nonpositive
curvature,” Contemporary mathematics, vol. 338, pp. 1–34, 2003.
[30]
M. Arnaudon and L. Miclo, “Means in complete manifolds : completeness and
approximation,” ESAIM :Probability and Statistics, vol. 18, pp.
185–206, 2014.
[31]
R. Wong, Asymptotic approximation of integrals, Society of Industrial
and Applied Mathematics, 2001.
[32]
D. Schleicher, Hausdorff dimension, its properties and its surprises
(available online), , 2007.
[33]
E. T. Whittaker and G. N. Watson, A course of modern analysis (4th
edition), Cambridge University Press, 1950.
[34]
P. Petersen, Riemannian geometry (2nd edition), Springer Science, 2006.
[35]
L. P. Kantorovich and G. P. Akilov, Functional analysis (2nd edition),
Pergamon Press, 1982.
[36]
S. Said, H. Hajri, L. Bombrun, and B. C. Vemuri, “Gaussian distributions on
Riemannian symmetric spaces : statistical learning with structured
covariance matrices,” IEEE Transactions on Information Theory,
vol. 64, no. 2, pp. 752–772, 2018.
[37]
S. Stahl, “The evolution of the normal distribution,” Mathematics
Magazine, vol. 79, no. 2, pp. 96–113, 2006.
[38]
E. Borel, Introduction géométrique à quelques théories physiques,
Gauthier-Villars, 1914.
[39]
J. Perrin, “Mouvement brownien et molécules,” Journal de physique
théorique et appliquée, vol. 9, no. 1, pp. 5–39, 1910.
[40]
A. W. Knapp, Lie groups, beyond an introduction (2nd edition).1em
plus 0.5em minus 0.4emBirkhauser, 2002.
[41]
C. L. Siegel, “Symplectic geometry,” American Journal of Mathematics,
vol. 65, no. 1, pp. 1–86, 1943.
[42]
A. Terras, Harmonic analysis on symmetric spaces and applications, Vol.
II.1em plus 0.5em minus 0.4emSpringer-Verlag, 1988.
[43]
G. Szegö, Orthogonal Polynomials (1st edition).1em plus 0.5em
minus 0.4emAmerican Mathematical Society, 1939.
[44]
B. C. Berndt, What is a $q$-series? (appeared in Ramanujan
rediscovered, available online),
, 2012.
[45]
A. B. J. Kuijlaars and W. Van Assche, “The asymptotic zero distribution of
orthogonal polynomials with varying recurrence coefficients,” Journal
of Approximation theory, vol. 99, pp. 167–197, 1999.
[46]
P. Deift, Orthogonal polynomials and random matrices : a
Riemann-Hilber approach, American Mathematical Sociery, 1998.
[47]
M. Mariño, Chern-Simons theory, matrix models, and topological strings,
Oxford University Press, 2005.
[48]
G. E. Andrews, The theory of partitions, Addison-Wesley Publishing
Company, 1976.
[49]
S. F. Jarner and E. Hansen, “Geometric ergodicity of Metropolis
algorithms,” Stochastic Processes and Applications, vol. 58, pp.
341–361, 1998.
[50]
R. O. Roberts and J. S. Rosenthal, “General state-space Markov chains and
MCMC algorithms,” Probability Surveys, vol. 1, pp. 20–71, 2004.
[51]
S. Meyn and R. L. Tweedie, Markov chains and stochastic stability,
Cambrdge University Press, 2008.
[52]
G. O. Roberts and R. L. Tweedie, “Geometric ergodicity and central limit
theorems for multidimensional Metropolis and Hastings algorithms,”
Biometrika, vol. 82, no. 1, pp. 95–110, 1996.
[53]
J. M. Lee, Introduction to smooth manifolds (2nd edition), Springer
Science, 2012.
[54]
G. C. Pflug, “Stochastic minimisation with constant step-size : asymptotic
laws,” SIAM Journal on Control and Optimization, vol. 24, no. 4, pp.
655–666, 1986.
[55]
O. Kallenberg, Foundations of modern probability (2nd edition),
Springer-Verlag, 2002.
[56]
M. Duflo, Algorithmes stochastiques, Springer-Verlag, 1996.
[57]
Y. Nesterov, Lectures on convex optimization.1em plus 0.5em minus
0.4emSpringer Switzerland, 2018.
|
This work was funded by the Helmholtz Association under grant no. VH-NG-1352.
Corresponding author:<EMAIL_ADDRESS>(Martha Maria Frysztacki)
Author contributions (CRediT):
Conceptualization, Methodology, Software, Formal Analysis, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization
Conceptualization, Methodology, Software, Data Curation, Writing - Review & Editing, Visualization
Writing - Review & Editing, Funding Acquisition
Conceptualization, Methodology, Writing - Original Draft, Writing - Review & Editing, Project Administration, Funding Acquisition
# The strong effect of network resolution on electricity system models with
high shares of wind and solar
Martha Maria Frysztacki, Jonas Hörsch, Veit Hagenmeyer, Tom Brown
Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, 76344 Eggenstein-Leopoldshafen, Germany
Frankfurt Institute for Advanced Studies, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany
###### Abstract
Energy system modellers typically choose a low spatial resolution for their
models based on administrative boundaries such as countries, which eases data
collection and reduces computation times. However, a low spatial resolution
can lead to sub-optimal investment decisions for wind and solar generation.
Ignoring power grid bottlenecks within regions tends to underestimate system
costs, while combining locations with different wind and solar capacity
factors in the same resource class tends to overestimate costs. We investigate
these two competing effects in a capacity expansion model for Europe’s power
system with a high share of renewables, taking advantage of newly-available
high-resolution datasets as well as computational advances. We vary the number
of nodes, interpolating between a 37-node model based on country and
synchronous zone boundaries, and a 1024-node model based on the location of
electricity substations. If we focus on the effect of renewable resource
resolution and ignore network restrictions, we find that a higher resolution
allows the optimal solution to concentrate wind and solar capacity at sites
with better capacity factors and thus reduces system costs by up to 10%
compared to a low resolution model. This results in a big swing from offshore
to onshore wind investment. However, if we introduce grid bottlenecks by
raising the network resolution, costs increase by up to 23% as generation has
to be sourced more locally at sites with worse capacity factors. These effects
are most pronounced in scenarios where grid expansion is limited, for example,
by low local acceptance. We show that allowing grid expansion mitigates some
of the effects of the low grid resolution, and lowers overall costs by around
16%.
###### keywords:
energy system modelling; spatial scale; clustering; transmission grid modelling; resource resolution
Highly-renewable European power system is optimized at high spatial resolution
High-resolution capacity placement for wind and solar reduces costs by up to
10%
Models with low network resolution ignore congestion, underestimating costs by
23%
Costs underestimated most when grid expansion limited by, e.g., public
acceptance
Grid reinforcements relieve congestion and lower system costs by up to 16%
## 1 Introduction
Electricity systems with high shares of wind and solar photovoltaic generation
require a fundamentally different kind of modelling to conventional power
systems with only dispatchable generation [63]. While investments in
conventional power plants can be dimensioned according to simple heuristics
like screening curves [10], the assessment of wind and solar resources
requires a high temporal and spatial resolution to capture their weather-
driven variability. The need to assess investments in generation, transmission
and flexibility options over thousands of representative weather and demand
situations, as well as over thousands of potential locations, means that
balancing model accuracy against computational resources has become a critical
challenge.
The effects of temporal resolution have been well researched in the
electricity system planning literature [12], including the need for at least
hourly modelling resolution [63], the consequences of clustering
representative conditions [41], and the need to include extreme weather events
[50]. On the spatial side, it has been recognized that integrating renewable
resources on a continental scale can smooth large-scale weather variations,
particularly from wind [23], and avoid the need for temporal balancing. This
smoothing effect has been found in studies of the benefits of grid expansion
both in Europe, where the impact on balancing needs [53] and storage
requirements [55] has been analysed, and in the United States [44]. However,
there has been little research on the effects of spatial resolutions on
planning results. This is partly due to the fact that collecting high-
resolution spatial data is challenging, as well as the fact that optimization
at high-resolution over large areas is computationally demanding.
Choosing the spatial resolution based on administrative boundaries such as
country borders –which is a common approach in the literature [23, 53, 31]–
fails to account for the variation of resources inside large countries like
Germany. Aggregating low-yield sites together with high-yield sites takes away
the opportunity to optimize generation placement, which distorts investment
decisions and drives up costs.
On the other hand, aggregating diverse resources to single points tends to
underestimate network-related costs, since the models are blind to network
bottlenecks that might hinder the welfare-enhancing integration of renewable
resources located far from demand centers. The effects of network restrictions
are all the more important given the apparent low public acceptance for new
overhead transmission lines, observed in Germany [30] and across Europe [20],
and the long planning and construction times for new grid infrastructure [26].
In the present contribution we introduce a novel methodology to disentangle
these two competing spatial effects of resource and network resolution, so
that for the first time their different impacts on system costs and technology
choices can be quantified. We then demonstrate the methodology by running
simulations in a model of the future European electricity system with a higher
spatial resolution than has previously been achieved in the literature. We
optimize investments and operation of generation, storage and transmission
jointly in a system with a high share of renewables under a 95% reduction in
CO2 emissions compared to 1990, which is consistent with European targets for
2050 [25]. A recently-developed, high-resolution, open-source model of the
European transmission network, PyPSA-Eur [37], is sequentially clustered from
1024 nodes down to 37 nodes in order to examine the effects on optimal
investments in generation, transmission and storage.
Previous work in the engineering literature has focused on the effect of
different network clustering algorithms [40] on the flows in single power flow
simulations [11, 33], or used clustering algorithms that are dependent on
specific dispatch situations [18, 62, 59] and therefore unsuitable when making
large changes to generation and transmission capacities. In the planning
literature that considers a high share of renewables in the future energy
system, the effects of clustering applied separately to wind, solar and demand
were investigated in [61], but neglected potential transmission line
congestion within large regions. In [43] the previous study was extended by
including a synthesized grid and renewable profiles, but it ignored the
existing topology of the transmission grid. Effects of varying the resolution
were not considered in either of the studies. Recent work has examined
regional solutions for the European power system, but did not take into
account existing transmission lines, potential low public acceptance for grid
reinforcement or the grid flow physics [67]. Other studies have examined
transmission grid expansion at substation resolution, but either the temporal
resolution was too low to account for wind and solar variability [24, 35], or
only single countries were considered [46, 1, 35], or transmission expansion
was not co-optimized with generation and storage [24, 15, 58]. The competing
effect of clustering transmission lines versus variable resource sites on the
share of renewables was also discussed in [21], but the report did not provide
an analysis of how strongly the respective clustering impacts modeling and
planning results. The effects of model resolution on system planning results
were considered for the United States in [42], where a cost benefit was found
for higher wind and solar resolution, but the resource resolution was not
separated from the network resolution, and only a small number of time slices
were considered to represent weather variations.
Advances in solver algorithms and code optimization in the modelling framework
PyPSA [13], as well as hardware improvements, allow us to achieve what was
previously not possible in the literature: the co-optimization of
transmission, generation and storage at high temporal and spatial resolution
across the whole of Europe, while taking into account linearized grid physics,
existing transmission lines and realistic restrictions on grid reinforcement.
In previous work by some of the authors large effects of spatial resolution on
investment results were seen [36], but because the resource and network
resolution were changed in tandem, it was not possible to analyse which effect
dominates the results. In the present contribution we present a novel study
design that separates the effects of resource and network resolution, and
demonstrate the substantial differences between the two effects using the
high-resolution simulations enabled by recent software and hardware advances.
## 2 Methods
In this section we present an overview of the underlying model and the study
design, before providing more details on the clustering methodology and the
investment optimisation. A list of notation is provided in Table 2.
### 2.1 Model input data
Figure 1: PyPSA-Eur model of the European electricity system, including all
existing and planned high-voltage alternating current (HVAC) and direct
current (HVDC) lines.
The study is performed in a model of the European electricity system at the
transmission level, PyPSA-Eur, which is fully described in a separate
publication [37]. Here we give a brief outline of the input data.
The PyPSA-Eur model shown in Figure 1 contains all existing high-voltage
alternating current (HVAC) and direct current (HVDC) lines in the European
system, as well as those planned by the European Network of Transmission
System Operators for Electricity (ENTSO-E) in the Ten Year Network Development
Plan (TYNDP) [26]. The network topology and electrical parameters are derived
from the ENTSO-E interactive map [3] using a power grid extraction toolkit
[69]. In total the network consists of 4973 nodes, 5721 HVAC and 32 HVDC lines
existing as of 2018, as well as 279 HVAC and 29 HVDC planned lines.
Historical hourly load data for each country are taken from the Open Power
System Data project [5] and distributed to the nodes within each country
according to population and gross domestic product data. Generation time
series are provided for the surrounding wind and solar plants based on
historical wind and insolation data derived from the ERA5 reanalysis dataset
[4] and the SARAH2 surface radiation dataset [51]. Renewable installation
potentials are based on land cover maps, excluding for example nature
reserves, cities or streets.
The model was partially validated in [37]. Further validation against
historical data was carried out in [28], where it was shown that the model
could reproduce curtailment of wind and solar in Germany due to transmission
bottlenecks in the years 2013-2018. The ability to reproduce historical
congestion provides a strong check on the match between the transmission
network data and the availability of wind and solar generation in the model.
### 2.2 Clustering study design
The nodes of the model are successively clustered in space into a smaller
number of representative nodes using the $k$-means algorithm [34]. This groups
close-by nodes together, so that, for example, multiple nodes representing a
single city are merged into one node. Nodes from different countries or
different synchronous zones are not allowed to be merged; to achieve this, the
overall number of desired nodes is partitioned between the countries and
synchronous zones before the $k$-means algorithm is applied in each partition
separately. In total there are 37 ‘country-zones’ in the model, i.e. regions
of countries belonging to separate synchronous zones.
Figure 2, Case 1 shows the results for Ireland and the United Kingdom (where
Northern Ireland is in a separate synchronous zone to Great Britain). Once the
nodes have been clustered, they are reconnected with transmission corridors
representing the major transmission lines from the high-resolution model.
Electricity demand, conventional generation and storage options are also
aggregated to the nearest network node. More technical details on the
clustering can be found in subsection 2.5. An analysis of the effects of
clustering on the network flows can be found in the Appendix, Section A.1.
### 2.3 Resource versus network resolution case studies
Figure 2: Clustering of network nodes (red, number $n$) and renewable sites (grey, number $s$) in each of the cases (rows) for Ireland and the United Kingdom at different levels of clustering (columns).

Case | Short name | Description
---|---|---
1 | Simultaneous clustering | Successive increase in the number of generation sites $s$ and transmission nodes $n$: $s=n\in\mathcal{B}$
2 | Clustering on siting resolution | Fix the transmission network to one node per country-zone ($n=37$) and increase the number of generation and storage sites $s\in\mathcal{B}$
3 | Clustering on network nodes | Maintain a high resolution of generation sites ($s=1024$) and successively increase the number of transmission nodes $n\in\mathcal{B}$

Table 1: Case descriptions ($\mathcal{B}=\{37\}\cup\bigl\{\lfloor\sqrt{2^{i}}\rfloor\bigr\}_{i=11,\dots,20}=\{37,45,64,90,128,\dots,1024\}$).
To separate the effects of the spatial resolution on the renewable resources
and the network, we consider three cases in which they are clustered
differently. The three cases are summarized in Table 1 and shown graphically
in Figure 2 for each case (rows) and for each level of clustering (columns).
In Case 1 the wind and solar sites are clustered to the same resolution as the
network. The number of clusters is varied between 37, the number of country-
zones, and 1024, which represents the maximum resolution for which generation,
transmission and storage investment can be co-optimized in reasonable time.
The number of nodes is increased in half-powers of 2, so that nine different
resolutions are considered:
$\mathcal{B}=\{37\}\cup\bigl\{\lfloor\sqrt{2^{i}}\rfloor\bigr\}_{i=11,\dots,20}$.
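The set of resolutions $\mathcal{B}$ can be reproduced in a few lines; the snippet below is a sketch using Python's integer square root, not code from the PyPSA-Eur workflow.

```python
import math

# Resolutions B = {37} ∪ {floor(sqrt(2^i)) : i = 11, ..., 20},
# i.e. half-powers of two between the 37 country-zones and 1024 nodes.
resolutions = sorted({37} | {math.isqrt(2**i) for i in range(11, 21)})
print(resolutions)
# [37, 45, 64, 90, 128, 181, 256, 362, 512, 724, 1024]
```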
In Case 2 network bottlenecks inside each country-zone are removed so that
there are only 37 transmission nodes, and only the resolution of the wind and
solar generation is varied. Inside each country-zone, all wind and solar
generators are connected to the central node. This allows the optimization to
exploit the best wind and solar sites available.
Finally in Case 3 we fix a high resolution of renewable sites and vary the
number of network nodes, in order to explore the effects of network
bottlenecks. Each renewable site is connected to the nearest network node,
where the transmission lines, electricity demand, conventional generators and
storage are also connected.
For each case we optimize investments and operation for wind and solar power,
as well as open cycle gas turbines, batteries, hydrogen storage and
transmission. Flexibility from existing hydroelectric power plants is also
taken into account. The model is run with perfect foresight at a 3-hourly
temporal resolution over a historical year of load and weather data from 2013,
assuming a 95% reduction in CO2 emissions compared to 1990. The temporal
resolution is 3-hourly to capture changes in solar generation and electricity
demand while allowing reasonable computation times. The technology selection
is also limited for computational reasons. More details on the investment
optimization can be found in subsection 2.6.
For each simulation we also vary the amount of new transmission that can be
built, in order to understand the effect of possible grid reinforcements on
the results. The model is allowed to optimize new transmission reinforcements
to the grid as it was in 2018, up to a limit on the sum over new capacity
multiplied by line length measured relative to the grid capacity in 2018. For
example, a transmission expansion of 25% means that on top of 2018’s grid, new
lines corresponding to a quarter of 2018’s grid can be added to the network.
The exact constraint is given in equation (17) in subsection 2.6.
### 2.4 Network preparation
Before the clustering algorithm can be applied to the network, several
simplifications are applied to the data.
In order to avoid the difficulty of keeping track of different voltage levels
as the network is clustered, all lines are mapped to their electrical
equivalents at 380 kV, the most prevalent voltage in the European transmission
system. If the original reactance of the line $\ell_{i,j}$ was $x_{i,j}$ at
its original voltage $v_{i,j}$, the new equivalent reactance becomes
$x_{i,j}^{\prime}=x_{i,j}\left(\frac{380\textrm{ kV}}{v_{i,j}}\right)^{2}.$
(1)
This guarantees that the per unit reactance is preserved after the
equivalencing.
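Equation (1) amounts to a one-line transformation; the following sketch (function name ours, not from PyPSA-Eur) makes the voltage scaling explicit:

```python
def equivalent_reactance(x, v, v_ref=380.0):
    """Map reactance x (ohms) of a line at voltage v (kV) to its electrical
    equivalent at the v_ref = 380 kV level, as in eq. (1). The per-unit
    reactance is preserved because the base impedance scales with v^2."""
    return x * (v_ref / v) ** 2

# A 220 kV line re-based to the 380 kV level:
x_new = equivalent_reactance(10.0, 220.0)
```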
The impedances and thermal ratings of all transformers are neglected, since
they are small and cannot be consistently included with the mapping of all
voltage levels to 380 kV.
Univalent nodes, also known as dead-ends, are removed sequentially until no
univalent nodes exist. That is, if node $i$ has no other neighbor than node
$j$, then node $i$ is merged to node $j$. We repeat the process until each
node is multi-valent and update the merged node attributes and its attached
assets (loads, generators and storage units) according to the rules in Table
5.
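The sequential removal of univalent nodes can be sketched as a fixed-point iteration on an adjacency map; `remove_dead_ends` and its data layout are illustrative, not the PyPSA-Eur implementation:

```python
def remove_dead_ends(adj):
    """Iteratively merge univalent (dead-end) nodes into their only neighbour.
    adj: dict mapping node -> set of neighbouring nodes (undirected graph).
    Returns the reduced adjacency and a map node -> node it was merged into;
    in the real model the merged node's loads, generators and storage units
    would additionally be reassigned according to Table 5."""
    adj = {n: set(nbrs) for n, nbrs in adj.items()}
    merged_into = {}
    changed = True
    while changed:
        changed = False
        for n in list(adj):
            if len(adj[n]) == 1:          # dead-end: exactly one neighbour
                (j,) = adj[n]
                adj[j].discard(n)
                del adj[n]
                merged_into[n] = j
                changed = True            # merging may create new dead-ends
    return adj, merged_into
```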
HVDC lines in series or parallel are simplified to a single line $\ell$ using
the rules in Tables 6 and 7. Capital costs per MW of capacity for an HVDC
line $\ell_{i,j}$ with length $l_{i,j}$ and a fraction
$u_{i,j}\in[0,1]$ underwater are given by
$\displaystyle c_{i,j}=1.25\cdot l_{i,j}\cdot\left(u_{i,j}\cdot
c_{\mathrm{marine}}+(1-u_{i,j})\cdot c_{\mathrm{ground}}\right)\,,$
where $c_{\mathrm{marine}}$ is the capital cost for a submarine connection and
$c_{\mathrm{ground}}$ for an underground connection. The factor of $1.25$
accounts for indirect routing and height fluctuations.
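The capital-cost rule translates directly into code; the default per-MW-km costs below are the HVDC subsea and underground values from Table 3, and the function name is ours:

```python
def hvdc_capital_cost(length_km, underwater_fraction,
                      c_marine=2000.0, c_ground=1000.0):
    """Capital cost per MW for an HVDC line (€/MW), with c_marine and
    c_ground in €/(MW km). The factor 1.25 accounts for indirect routing
    and height fluctuations."""
    u = underwater_fraction
    return 1.25 * length_km * (u * c_marine + (1.0 - u) * c_ground)
```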
### 2.5 Clustering methodology
Table 2: Notation

symbol | meaning
---|---
| general abbreviations
$s$ | technology type
$t$ | time point
$i,j$ | nodes in high resolution network
$c,d$ | clustered nodes
$\ell_{i,j}$ | high resolution line connecting nodes $i$ and $j$
$\ell_{c,d}$ | aggregated representative line connecting clusters $c$ and $d$
$N_{c}$ | set of high resolution nodes in cluster $c$
$N_{c,d}$ | set of high resolution lines between clusters $c$ and $d$
$RE$ | set of renewable generator technologies
$CG$ | set of conventional generator and storage technologies
| line attributes
$x_{i,j}$ | reactance of line $\ell_{i,j}$
$v_{i,j}$ | voltage of line $\ell_{i,j}$
$c_{i,j}$ | capital costs for line $\ell_{i,j}$
$l_{i,j}$ | length of line $\ell_{i,j}$
$F_{i,j}$ | capacity of line $\ell_{i,j}$
$f_{i,j,t}$ | flow of line $\ell_{i,j}$ at time $t$
$c_{\begin{subarray}{c}\mathrm{marine}/\mathrm{ground}\end{subarray}}$ | capital costs for a submarine/ underground connection
| nodal and technology attributes
$x_{i}$ | coordinates of node $i$ in $\mathbb{R}^{2}$
$w_{i}$ | nodal weighting
$e_{s}$ | CO2 emissions of technology $s$
$w_{i,s}$ | nodal technology weighting
$c_{i,s}$ | annualised fixed costs
$G_{i,s}$ | (optimal) capacity of technology $s$ at node $i$
$G^{\mathrm{max}}_{i,s}$ | maximal installable capacity of technology $s$ at node $i$
$o_{i,s}$ | variable costs of technology $s$ at node $i$
$E_{i,s}$ | storage energy efficiency
$\eta_{i,s}$ | storage losses or efficiencies at node $i$ for technology $s$
$w_{t}$ | time weighting
$d_{i,t}$ | demand per node $i$ and time $t$
$\bar{g}_{i,s,t}$ | capacity factor for RE $\in[0,1]$
$g_{i,s,t}$ | dispatch in node $i$ of technology $s$ at time $t$
$e_{i,s,t}$ | energy level of technology $s$ in node $i$ at time $t$
| graph related attributes
$K_{i,\ell}$ | incidence matrix
$C_{\ell,c}$ | cycle matrix; here $c$ indexes a cycle
Different methods have been used to cluster networks in the literature. We
chose a version of $k$-means clustering [34] based on the geographical
location of the original substations in the network, weighted by the average
load and conventional capacity at the substations, since this represents how
the topology of the network was historically planned to connect major
generators to major loads. It leaves the long transmission lines between
regions, which are expensive to upgrade and are more likely to encounter low
local acceptance, unaggregated, so that these lines can be optimized in the
model. Regions with a high density of nodes, for example around cities, are
aggregated together, since the short lines between these nodes are inexpensive
to upgrade and rarely present bottlenecks. Geographical $k$-means clustering
has the advantage over other clustering methods of not making any assumptions
about the future generation, storage and network capacity expansion.
Other clustering methods applied in the literature are not suitable for the
co-optimization of supply and grid technologies: these include clustering
based on electrical distance using $k$-medoids [11, 22], a modified version of
$k$-medoids to avoid assigning both end nodes of a critical branch to the same
zone [2], hierarchical clustering [9], or $k$-decomposition and eigenvector
partitioning [66] (which we do not use because we want to optimize new grid
reinforcements that alter electrical distances), spectral partitioning of the
graph Laplacian matrix [33] (avoided for same reason), an adaptation of
$k$-means called $k$-means$++$ combined with a max-$p$ regions algorithm
applied to aggregate contiguous sites with similar wind, solar and electricity
demand [61] (avoided since we want a coherent clustering of all network nodes
and assets), hierarchical clustering based on a database of electricity
demand, conventional generation and renewable profiles including a synthesized
grid [43] (avoided for the same reason and because we do not want to alter the
topology of the existing transmission grid), $k$-means clustering based on
renewable resources as well as economic, sociodemographic and geographical
features [19] (avoided because we need a clustering focused on network
reduction), as well as clustering based on zonal Power Transfer Distribution
Factors (PTDFs) to detect congestion zones [18], to yield the same flow
patterns as the original network [49] or to analyse policy options and
emissions [60] (avoided because they encode electrical parameters that change
with reinforcement), Available Transfer Capacities (ATCs) [59] (avoided because
they depend on pre-defined dispatch patterns) and locational marginal prices
(LMP) [62] (again avoided because they depend on pre-defined dispatch
patterns).
We do not allow nodes in different countries or different synchronous zones to
be clustered together, so that we can still obtain country-specific results
and so that all HVDC between synchronous zones are preserved during the
aggregation. This results in a minimum number of 37 clustered nodes for the
country-zones. First we partition the desired total number $n$ of clusters
between the 37 country-zones, then we apply the $k$-means clustering algorithm
within each country-zone.
In order to partition the $n$ nodes between the 37 country-zones, the
following minimisation problem is solved
$\displaystyle\mathrm{argmin}_{\{n_{z}\}\in\mathbb{N}^{37}}\sum_{z=1}^{37}\left(n_{z}-\frac{L_{z}}{\sum_{y}L_{y}}n\right)^{2}\,,$
(2)
where $L_{z}$ is the total load in each country-zone $z$. An additional
constraint ensures that the number of clusters per country-zone matches the
desired number of clusters for the whole network: $\sum_{z}n_{z}=n$.
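A simple rounding heuristic for the constrained quadratic problem (2), which distributes $n$ clusters in proportion to zonal load, keeps at least one cluster per zone, and adjusts until the total matches, can be sketched as follows (the greedy adjustment is our illustration; any exact integer minimiser of (2) would do):

```python
def partition_clusters(n, zone_loads):
    """Split n clusters among zones proportionally to load L_z, with at
    least one cluster per zone and sum(n_z) == n, approximating eq. (2)."""
    total = sum(zone_loads)
    ideal = [n * L / total for L in zone_loads]
    alloc = [max(1, int(x)) for x in ideal]   # floor, at least one per zone
    # add clusters where the shortfall (ideal - alloc) is largest
    while sum(alloc) < n:
        i = max(range(len(alloc)), key=lambda k: ideal[k] - alloc[k])
        alloc[i] += 1
    # remove clusters (never below 1) where the surplus is largest
    while sum(alloc) > n:
        i = min((k for k in range(len(alloc)) if alloc[k] > 1),
                key=lambda k: ideal[k] - alloc[k])
        alloc[i] -= 1
    return alloc
```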
Then the $k$-means algorithm is applied to partition the nodes inside each
country-zone into $n_{z}$ clusters. The algorithm finds the partition that
minimizes the sum of squared distances from the mean position of each cluster
$x_{c}\in\mathbb{R}^{2}$ to the positions $x_{i}\in\mathbb{R}^{2}$ of its
members $i\in N_{c}$
$\displaystyle\min_{\{x_{c}\in\mathbb{R}^{2}\}}\sum_{c=1}^{k}\sum_{i\in
N_{c}}w_{i}\cdot\|x_{c}-x_{i}\|_{2}^{2}\,.$ (3)
Each node is additionally assigned a normalised weighting $w_{i}$ based on its
nominal power for conventional generators and averaged load demand:
$\displaystyle w_{i}$
$\displaystyle=\frac{\sum\limits_{s_{\mathrm{conv.}}}G_{i,s}}{\sum\limits_{s_{\mathrm{conv.}}}\sum_{i=1}^{B}G_{i,s}}+\frac{d_{i,T}}{\sum_{i=1}^{B}d_{i,T}}\,,\quad\forall
i$ (4)
where $d_{i,T}$ corresponds to the averaged demand over the considered time
period $T$. $w_{i}$ is normalised according to $\lfloor\frac{100\cdot
w_{i}}{\|w\|_{\mathrm{max}}}\rfloor$.
The optimization is run with $n_{\mathrm{init}}=10^{3}$ different centroid
seeds, a maximum number of iterations for a single run of
$\mathrm{max}_{\mathrm{iter}}=3\cdot 10^{4}$ and a relative tolerance with
regards to inertia to declare convergence of $\varepsilon=10^{-6}$ .
Attributes of the nodes in $N_{c}$ and their attached assets are aggregated to
the clustered node $c$ according to the rules in Table 5.
Lines connecting nodes $N_{c}$ in cluster $c$ with nodes $N_{d}$ in cluster
$d$, given by the set $N_{c,d}$
$N_{c,d}=\{\ell_{i,j},\ i\in N_{c},\ j\in N_{d}\}\,,\quad\forall c,d$ (5)
are aggregated to a single representative line. The length of the
representative line is determined using the haversine formula (which computes
the great-circle distance between two points on a sphere) multiplied by a
factor of $1.25$ to take indirect routing into account. The representative
line inherits the attributes of the lines $N_{c,d}$ as described in Table 7.
If any of the replaced lines in $N_{c,d}$ had the attribute that their
capacity was extendable, then the aggregated line inherits this extendability.
An analysis of the effects of clustering on the network flows can be found in
the Appendix, Section A.1.
For Case 1, generators are clustered to the same resolution as the network.
Times series containing hourly resolved capacity factors
$\bar{g}_{i,s,t}\in[0,1]$ for variable renewable generation are aggregated
using a weighted average
$\displaystyle\bar{g}_{c,s,t}=\frac{1}{\sum_{i\in N_{c}}w_{i,s}}\sum_{i\in
N_{c}}w_{i,s}\cdot\bar{g}_{i,s,t}\,,\quad\forall c,s,t$ (6)
The resulting capacity factor $\bar{g}_{c,s,t}$ is in $[0,1]$ by definition.
For renewables, the weighting $w_{i,s}$ is proportional to the maximal yearly
yield for technology $s$ at node $i$, found by multiplying the maximal
installable capacity $G^{\mathrm{max}}_{i,s}$ with the average capacity
factor. In the case of conventional technologies the weightings are
distributed equally, i.e. $w_{i,s}=1$. Note that there is no relation between
the weightings $w_{i,s}$ and the bus weightings $w_{i}$ of (4).
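The weighted aggregation (6) of renewable time series, with weights proportional to the maximal yearly yield, can be sketched with plain Python dictionaries (the data layout is ours, not the model's):

```python
def aggregate_capacity_factors(cf, g_max):
    """Aggregate per-node capacity-factor series into one cluster series, eq. (6).
    cf:    dict node -> list of capacity factors in [0, 1]
    g_max: dict node -> maximal installable capacity G^max_{i,s}
    The weight w_{i,s} is the maximal yearly yield, i.e. G^max times the
    average capacity factor; the result stays in [0, 1] by construction."""
    w = {i: g_max[i] * sum(cf[i]) / len(cf[i]) for i in cf}
    wsum = sum(w.values())
    T = len(next(iter(cf.values())))
    return [sum(w[i] * cf[i][t] for i in cf) / wsum for t in range(T)]
```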
For Case 2, the network is fixed at 37 nodes, and the wind and solar
generators are merged in the aggregation step. Time series for variable
renewable energy (VRE) availability are aggregated according to (6) to the
respective resolution.
For Case 3, the network is clustered, but wind and solar generators are not
merged in the aggregation step. Their time series remain fixed at high
resolution of 1024 nodes.
### 2.6 Investment optimisation
Investments in generation, storage and transmission are optimized in the PyPSA
modelling framework [13], which minimises the total system costs. The
objective function is
$\displaystyle\min_{\begin{subarray}{c}G_{i,s},\ F_{\ell},\\\ g_{i,s,t},\
f_{\ell,t}\end{subarray}}\Bigl{[}\sum_{i=1}^{B}\sum_{s=1}^{S}\Bigl{(}c_{i,s}G_{i,s}+\sum_{t=1}^{T}w_{t}o_{i,s}g_{i,s,t}\Bigr{)}+\sum_{\ell=1}^{L}c_{\ell}F_{\ell}\Bigr{]}\,,$
consisting of the annualised fixed costs $c_{i,s}$ for capacities $G_{i,s}$ at
each node $i$ and storage/generation technology $s$, the dispatch $g_{i,s,t}$
of the unit at time $t$ and associated variable costs $o_{i,s}$ multiplied by
a weight factor $w_{t}$ corresponding to the temporal resolution of the
system, and the line capacities $F_{\ell}$ for each line $\ell$ including both
high voltage alternating current and direct current lines and their annualised
fixed costs $c_{\ell}$. The time period $T$ runs over a full year at a
3-hourly resolution, so each time period $t$ is weighted with $w_{t}=3$.
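For a given candidate solution, the objective is a plain sum of fixed, variable and line costs. The sketch below evaluates it on a toy instance; all cost, capacity and dispatch numbers are assumptions for illustration only:

```python
import numpy as np

# Toy instance: B=2 buses, S=2 technologies, T=3 snapshots, L=1 line.
w_t = 3.0                                   # snapshot weighting (3-hourly)
c = np.array([[90.0, 40.0],                 # annualised fixed costs c_{i,s}
              [90.0, 40.0]])                # (EUR/MW/a), assumed
o = np.array([[0.0, 50.0],                  # variable costs o_{i,s} (EUR/MWh)
              [0.0, 50.0]])
G = np.array([[100.0, 50.0],                # capacities G_{i,s} (MW)
              [80.0, 60.0]])
g = np.full((2, 2, 3), 20.0)                # dispatch g_{i,s,t} (MW)
c_line, F_line = 85.0, 500.0                # line annuity (EUR/MW/a), capacity (MW)

# Objective value: fixed costs + weighted variable costs + line costs.
total_cost = (c * G).sum() + w_t * (o[:, :, None] * g).sum() + c_line * F_line
```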
Investment cost assumptions are provided in Table 3, based on projections for
the year 2030. Assumptions are based on [7] for wind technologies, [57] in
case of OCGT, PHS, hydro, run-of-river, [16] for storage technologies and [68]
for solar technologies. 2030 is chosen for the cost projections since this is
the earliest possible time that such a system transformation might be
feasible, and because it results in conservative cost assumptions compared to
projections for a later date. The only CO2-emitting generators are open
cycle gas turbines fuelled by natural gas, with specific emissions of 0.187
tCO2/MWh${}_{\textrm{th}}$ and a fuel cost of 21.6 €/MWh${}_{\textrm{th}}$.
Investment costs are annualized with a discount rate of 7%. Lifetimes,
efficiencies and operation and maintenance costs can be found in the GitHub
repository [8].
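Annualisation at a 7% discount rate uses the standard annuity factor; the sketch below applies it to the onshore wind cost from Table 3, with a 25-year lifetime as an assumed example value (the actual lifetimes are in the repository [8]):

```python
def annuity(rate: float, lifetime_years: int) -> float:
    """Annuity factor: the fraction of the overnight investment cost
    paid each year when annualised over the asset lifetime."""
    if rate == 0:
        return 1.0 / lifetime_years
    return rate / (1.0 - (1.0 + rate) ** -lifetime_years)

# E.g. onshore wind at 1110 EUR/kW with an assumed 25-year lifetime:
annual_cost = 1110.0 * annuity(0.07, 25)   # roughly 95 EUR/kW/a
```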
Table 3: Technology investment costs with $1\$=0.7532$€.
asset | cost | unit
---|---|---
onshore wind | 1110 | €/kW
offshore wind | 1640 | €/kW
(AC/DC grid connection separate) | |
solar PV utility | 425 | €/kW
solar PV rooftop | 725 | €/kW
open cycle gas turbine | 400 | €/kW
run of river | 3000 | €/kW
pumped hydro storage | 2000 | €/kW
hydro storage | 2000 | €/kW
battery storage | 192 | $/kWh
battery power conversion | 411 | $/kW${}_{\textrm{el}}$
hydrogen storage | 11.3 | $/kWh
hydrogen power conversion | 689 | €/kW${}_{\textrm{el}}$
HVAC overhead transmission | 400 | €/(MWkm)
HVAC underground transmission | 1342 | €/(MWkm)
HVAC subsea transmission | 2685 | €/(MWkm)
HVDC underground transmission | 1000 | €/(MWkm)
HVDC subsea transmission | 2000 | €/(MWkm)
The dispatch of conventional generators $g_{i,s,t}$ is constrained by their
capacity $G_{i,s}$
$0\leq g_{i,s,t}\leq G_{i,s}\hskip 28.45274pt\forall\,i,t,s\in CG$ (7)
The maximum producible power of renewable generators depends on the weather
conditions, which is expressed as an availability $\bar{g}_{i,s,t}$ per unit
of its capacity:
$0\leq g_{i,s,t}\leq\bar{g}_{i,s,t}G_{i,s}\hskip 28.45274pt\forall\,i,t,s\in
RE$ (8)
The installable renewable capacity $G_{i,s}$ is constrained by land
eligibility for placing e.g. wind turbines or solar panels in each node and
for each renewable technology. The land restrictions are derived using the
Geospatial Land Availability for Energy Systems (GLAES) tool [54] and are
always finite for renewable carriers:
$\displaystyle G_{i,s}$ $\displaystyle\leq
G^{\mathrm{max}}_{i,s}<\infty\qquad\forall i,s\in RE$ (9)
There is no capacity constraint for conventional generators:
$\displaystyle G_{i,s}$ $\displaystyle<\infty\qquad\forall i,s\in CG$ (10)
The energy levels $e_{i,s,t}$ of all storage units have to be consistent
between all hours and are limited by the storage energy capacity $E_{i,s}$
$\displaystyle e_{i,s,t}=\eta_{0}^{w_{t}}e_{i,s,t-1}+\eta_{1}w_{t}\left[g_{i,s,t}\right]^{-}-\eta_{2}^{-1}w_{t}\left[g_{i,s,t}\right]^{+}+w_{t}g_{i,s,t}^{\textrm{inflow}}-w_{t}g_{i,s,t}^{\textrm{spillage}}\,,$
$\displaystyle 0\leq e_{i,s,t}\leq E_{i,s}\hskip 28.45274pt\forall\,i,s,t$ (11)
Positive and negative parts of a value are denoted as
$[\cdot]^{+}=\max(\cdot,0)$, $[\cdot]^{-}=-\min(\cdot,0)$. The storage units
can have a standing loss $\eta_{0}$, a charging efficiency $\eta_{1}$, a
discharging efficiency $\eta_{2}$, inflow (e.g. river inflow in a reservoir)
and spillage. The energy level is assumed to be cyclic, i.e.
$e_{i,s,t=0}=e_{i,s,t=T}$.
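The storage balance in equation (11) is a simple recursion; the sketch below rolls it out over time for one unit, with assumed efficiencies and a toy sign convention mapping dispatch onto the $[\cdot]^{+}$/$[\cdot]^{-}$ parts:

```python
import numpy as np

def storage_levels(g, e0, eta0, eta1, eta2, w_t=3.0, inflow=0.0, spillage=0.0):
    """Roll out the storage energy balance of equation (11) over time.
    Toy sign convention: g[t] > 0 means discharging, g[t] < 0 charging."""
    e = np.empty(len(g))
    prev = e0
    for t, g_t in enumerate(g):
        charge = max(-g_t, 0.0)       # [g]^-  (MW into the storage)
        discharge = max(g_t, 0.0)     # [g]^+  (MW out of the storage)
        prev = (eta0 ** w_t) * prev + eta1 * w_t * charge \
               - w_t * discharge / eta2 + w_t * (inflow - spillage)
        e[t] = prev
    return e

# Charge for two snapshots, then discharge; efficiencies are assumed values.
e = storage_levels(g=[-10.0, -10.0, 5.0], e0=0.0, eta0=1.0, eta1=0.9, eta2=0.9)
```

Note how charging is multiplied by $\eta_{1}$ while discharging is divided by $\eta_{2}$, so both round-trip losses reduce the stored energy.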
CO2 emissions are limited by a cap $\textrm{CAP}_{CO2}$, implemented using the
specific emissions $e_{s}$ in CO2-tonne-per-MWh of the fuel $s$ and the
efficiency $\eta_{i,s}$ of the generator:
$\sum_{i,s,t}\frac{1}{\eta_{i,s}}w_{t}\cdot g_{i,s,t}\cdot
e_{s}\leq\textrm{CAP}_{CO2}\quad\leftrightarrow\quad\mu_{CO2}$ (12)
In all simulations this cap was set at a reduction of 95% of the electricity
sector emissions from 1990.
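The left-hand side of equation (12) is straightforward to evaluate ex post; in the sketch below the dispatch, efficiencies and cap are invented for illustration, while the specific emissions value is the one quoted above:

```python
import numpy as np

w_t = 3.0
eta = np.array([0.40, 0.40])           # OCGT efficiencies, assumed values
e_gas = 0.187                           # tCO2/MWh_th, as quoted in the text
g = np.array([[50.0, 60.0, 0.0],        # OCGT dispatch g_{i,s,t} (MW_el),
              [30.0, 0.0, 20.0]])       # invented for the example

# Equation (12): electric dispatch converted to thermal energy via the
# efficiency, times the specific emissions of the fuel.
emissions = (w_t * g / eta[:, None] * e_gas).sum()        # tCO2
cap = 500.0                                               # assumed CAP_CO2
assert emissions <= cap                                   # constraint holds
```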
The (perfectly inelastic) electricity demand $d_{i,t}$ at each node $i$ must
be met at each time $t$ by either local generators and storage or by the flow
$f_{\ell,t}$ from a transmission line $\ell$
$\sum_{s}g_{i,s,t}-d_{i,t}=\sum_{\ell}K_{i,\ell}f_{\ell,t}\hskip
28.45274pt\forall\,i,t$ (13)
where $K_{i,\ell}$ is the incidence matrix of the network. This equation is
Kirchhoff’s Current Law (KCL) expressed in terms of the active power.
In the present paper the linear load flow is used, which has been shown to be
a good approximation for a well-compensated transmission network [65],
including for simulations using a large-scale European transmission model
[15]. To guarantee the physicality of the network flows, in addition to KCL,
Kirchhoff’s Voltage Law (KVL) must be enforced in each connected network. KVL
states that the voltage differences around any closed cycle in the network
must sum to zero. If each independent cycle $c$ is expressed as a directed
combination of lines $\ell$ by a matrix $C_{\ell,c}$ then KVL becomes the
constraint
$\sum_{\ell}C_{\ell,c}x_{\ell}f_{\ell,t}=0\quad\hskip 28.45274pt\forall c,t$
(14)
where $x_{\ell}$ is the series inductive reactance of line $\ell$. It was
found in [39] that expressing the linear load flow equations in this way with
cycle constraints is computationally more efficient than angle- or PTDF-based
formulations. Note that point-to-point HVDC lines have no cycles, so there is
no constraint on their flow beyond KCL.
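A minimal sketch of constraint (14) on a toy triangle network, with a hand-built cycle matrix and assumed reactances, showing how KVL singles out the physical flow among all KCL-feasible ones:

```python
import numpy as np

# Toy triangle network: line l runs from -> to, with assumed reactance x_l.
lines = [(0, 1), (1, 2), (2, 0)]
x = np.array([0.1, 0.2, 0.1])

# One independent cycle 0 -> 1 -> 2 -> 0. C[l, c] = +1 (-1) if line l is
# traversed with (against) its orientation in cycle c, 0 otherwise.
C = np.array([[1.0],
              [1.0],
              [1.0]])

# KCL for a unit injection at node 0 and withdrawal at node 2 leaves a
# one-parameter family f = (a, a, a - 1); KVL, equation (14), fixes it:
# 0.1 a + 0.2 a + 0.1 (a - 1) = 0  =>  a = 0.25.
f = np.array([0.25, 0.25, -0.75])

kvl_residual = C.T @ (x * f)   # should vanish for a physical flow
```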
The flows are also constrained by the line capacities $F_{\ell}$
$|f_{\ell,t}|\leq b_{B}\cdot F_{\ell}\hskip 28.45274pt\forall\,\ell,t$ (15)
Although the capacities $F_{\ell}$ are subject to optimisation, no new grid
topologies are considered beyond those planned in the TYNDP 2018 [26]. The
factor $b_{B}=0.7$ leaves a buffer of 30% of the line capacities to account
for $n-1$ line outages and reactive power flows. The choice of 70% for $b_{B}$
is standard in the grid modelling literature [64, 17, 29, 15] and is also the
target fraction of cross-border capacity that should be available for cross-
border trading in the European Union (EU) by 2025, as set in the 2019 EU
Electricity Market Regulation [6].
Since line capacities $F_{\ell}$ can be continuously expanded to represent the
addition of new circuits, the impedances $x_{\ell}$ of the lines would also
decrease. In principle this would introduce a bilinear coupling in equation
(14) between the $x_{\ell}$ and the $f_{\ell,t}$. To keep the optimisation
problem linear and therefore computationally fast, $x_{\ell}$ is held fixed
within each optimisation problem; after solving, the reactances are updated
based on the optimised capacities and the problem is solved again, for up to 4
iterations to ensure convergence, following the methodology of [32, 47].
In order to investigate the effects of transmission expansion, each line
capacity $F_{\ell}$ can be extended beyond the capacity in 2018, $F_{\ell}\geq
F^{2018}_{\ell}$, up to a line volume cap $\textrm{CAP}_{\textrm{trans}}$,
which is then varied in different simulations:
$\sum_{\ell}l_{\ell}\cdot(F_{\ell}-F^{2018}_{\ell})\leq\textrm{CAP}_{\textrm{trans}}\quad\leftrightarrow\quad\mu_{\textrm{trans}}$
(16)
The caps are defined in relation to 2018’s line capacities $F_{\ell}^{2018}$,
i.e.
$\textrm{CAP}_{\textrm{trans}}=x\cdot\sum_{\ell}l_{\ell}\cdot F_{\ell}^{2018}$
(17)
where $x$ is varied between zero and 50%.
Since there is a cap on the transmission expansion, the line costs $c_{\ell}$
can be set to zero. For the results, costs are added after the simulation
based on the assumptions in Table 3.
### 2.7 Model output data
The optimised model returns the spatially-resolved capacity for each
technology $G_{i,s}$ as well as the amount of transmission expansion of each
included line $F_{\ell}$. Additionally, the results also provide dispatch time
series for each of the generators $g_{i,s,t}$ and electricity flows
$f_{\ell,t}$ for included lines that obey the constraints described above in
subsection 2.6.
## 3 Results
Figure 3: Total annual system costs as a function of the number of clusters for Cases 1, 2 and 3.
Figure 4: Breakdown of the annual system costs for generation (top) and flexibility options (bottom) as a function of the number of clusters for Cases 1, 2 and 3 when there is no grid expansion.
Figure 5: Costs as a function of the transmission expansion level for 256 nodes in Case 1.
Figure 3 presents the total annual system costs for each case. To obtain a
better understanding of the system composition, Figure 4 breaks down the total
costs into individual components when there is no grid expansion. In Figure 5
we present total system costs for different grid expansion scenarios for 256
clusters in the simultaneous case (Case 1). An example map of investments can
be found in Figure 6 for a 25% grid expansion (a similar level to ENTSO-E’s
TYNDP [26]).
### 3.1 Case 1 - Increasing number of both generation sites and transmission
nodes
If the resource and network resolutions increase in tandem according to Case 1
without grid expansion, the total annual system costs in Figure 3 rise gently
with the increasing number of nodes, reaching a maximum of $273$ billion euros
per year at 1024 nodes, which is 10% more expensive than the solution with 37
nodes. This corresponds to an average system cost of 87 €/MWh. If some
transmission expansion is allowed, costs are lower, and there is almost no
change in total system costs as the number of nodes is varied.
However, the fact that costs are flat does not mean that the solutions are
similar: a large shift from offshore wind at low resolution to onshore wind at
high resolution can be observed in the left graph of Figure 4 (Case 1). This
is an indication that spatial resolution can have a very strong effect on
energy modelling results. To understand what causes this effect, we must
examine Cases 2 and 3.
### 3.2 Case 2 - Importance of wind and solar resource granularity
In Case 2 we use the lowest network resolution of 37 nodes, corresponding to
one-node-per-country-zone, and investigate the effect of changing the number
of wind and solar sites on the results. As the resolution increases, total
costs without grid expansion in Figure 3 drop by $10$% from $248$ to $222$
billion euro per year. Although the slope of the cost curve appears constant,
note that the $x$-axis is logarithmic, so that the rate of cost decrease slows
as the number of sites increases.
The cost reduction is driven by strong changes in the investment between
generation technologies, particularly the ratio between offshore and onshore
wind (see Figure 4). At low spatial resolution, good and bad onshore sites are
mixed together, diluting onshore capacity factors and making onshore a less
attractive investment. Figure 9 in the Appendix shows how the capacity factors
for wind and solar vary across the continent. While offshore is spatially
concentrated and solar capacity factors are relatively evenly spread in each
country-zone, onshore wind is stronger near coastlines. At high spatial
resolution the model can choose to put onshore wind only at the best sites
(within land restrictions), increasing average capacity factors and thus
lowering the per-MWh cost. (The increasing average capacity factors are plotted in
Figure 11 in the Appendix.) As a result, onshore wind investments more than
double from $24$ to $54$ billion euros per year, while offshore investments
drop $37$% from $100$ to $64$ billion per year and solar by $23\%$. The
biggest effect on the technology mix is when going from 37 to around $181$
clusters; beyond that the changes are smaller.
### 3.3 Case 3 - Impact of transmission bottlenecks
In Case 3 we fix a high resolution of wind and solar generators (1024 sites)
and vary the resolution of the transmission network to gauge the impact of
transmission bottlenecks. With 37 network nodes many bottlenecks are not
visible, so costs are lower; as the resolution increases to 1024 nodes, the
newly revealed bottlenecks drive up the costs by $23$%. Note that because the
$x$-axis is logarithmic,
the highest rate of cost increase is when the number of nodes is small.
As can be seen from the breakdown in Figure 4, the rising transmission
investments from the higher resolution only have a small contribution to the
result. Instead, rising costs are driven by generation and storage. Unlike
Cases 1 and 2, the ratio between the generation technologies does not change
dramatically with the number of clusters, but the capacities for onshore wind,
solar, batteries and hydrogen storage all rise.
The transmission bottlenecks limit the transfer of power from the best sites
to the load, forcing the model to build onshore wind and solar more locally at
sites with lower capacity factors. Average capacity factors of onshore wind
and solar sink by $11$% and $6$% respectively with no grid expansion (see
Figure 11 in the Appendix), meaning that more capacity is needed for the same
energy yield. Curtailment is generally low in the optimal solution (around 3%
of available wind and solar energy) and has less of an effect on costs (see
Figure 12 in the Appendix).
Investment in battery and hydrogen storage rises with the number of network
nodes since the storage is used to balance local wind and solar variations in
order to avoid overloading the grid bottlenecks.
### 3.4 Comparison of the three cases
Separating the effects of resource resolution from network resolution reveals
that the apparent stability of total system costs in Case 1 in Figure 3 as the
number of clusters changes, as reported in [36], is deceptive. In fact, the
sinking costs from the higher resource resolution are counter-acted by the
rising costs from network bottlenecks. With no grid expansion, the system cost
of network bottlenecks is double the benefit of the higher resource
resolution.
While these two effects offset each other at the level of total system costs,
they have very different effects on the technology mix. Resource resolution
leads to much stronger investment in onshore wind, once good sites are
revealed. Network bottlenecks have only a weak effect on the ratio of
generation technologies, but lead to lower average capacity factors and drive
up storage requirements.
Figure 6: Example of investments with 25% grid expansion and 256 nodes in Case
1.
### 3.5 Benefits of grid expansion
Grid expansion does not affect the main qualitative features of the different
Cases, but it does have the overall effect of lowering total system costs. In
Case 1, the total cost-benefit of grid expansion is highest, at around 16%,
for a 50% increase in grid capacity; the total benefit is still increasing at
that point, but is subject to diminishing returns (see Appendix Figure 14 for a
comparison of the marginal benefit to the cost of transmission). The first 9%
of additional grid capacity brings total cost savings of up to 8%, but for
each extra increment of grid expansion, the benefit is weaker. There is more
benefit from grid expansion at a higher number of nodes, since the higher
network resolution reveals more critical bottlenecks in the transmission
system.
The total savings from 25% and 50% grid expansion are around 36 and 44 billion
euros per year respectively. In a 2018 study ENTSO-E examined scenarios with
up to 75% renewable electricity in Europe in 2040 with and without planned
TYNDP grid expansions (corresponding to around 25% grid expansion), given
fixed demand and a fixed generation fleet. They found that the grid
reinforcements reduce generation costs by 43 billion euros per year. This is
higher than our cost-benefit for 25% grid expansion, despite their study’s
lower level of renewable electricity, because in our simulations the
generation and storage fleet can be re-optimised to accommodate the lower
level of grid capacity, and because we subtract the costs of new grid
reinforcement from the cost-benefit (a contribution of around 3.5 billion
euros per year).
The breakdown of system cost as the grid is expanded for a fixed number of
clusters (256), plotted in Figure 5, reveals how costs are reduced. Although
the investment in transmission lines rises, generation and storage costs
reduce faster as investment shifts from solar and onshore wind to offshore
wind. Offshore wind reduces costs because of its high capacity factors and
more regular generation pattern in time. It can be transported around the
continent more easily with more transmission, and benefits from the smoothing
effects over a large, continental area that grid expansion enables. The map of
investments in Figure 6 shows how offshore wind is balanced by new
transmission around the North Sea, which smooths out weather systems that roll
across the continent from the Atlantic. Further transmission reinforcements
bring energy inland from the coastlines to load centers. With more
transmission, there is less investment in battery and hydrogen storage, as a
result of the better balancing of weather-driven variability in space.
Turning to Case 3, we see that grid expansion mitigates the effect of network
resolution by allowing bottlenecks to be alleviated. For a 50% increase in
transmission capacity, total costs rise by only 4% from 90 nodes up to 1024
nodes. The distribution of investments between technologies also barely
changes in this range (see Appendix Figure 10). This means that a grid
resolution of around 90 nodes can give acceptable solutions for grid expansion
scenarios if computational resources are limited, as long as the wind and
solar resolution is high enough (as in Case 2, 181 generation sites would
suffice). Without grid expansion, a higher grid resolution is needed to
capture the effects of bottlenecks and achieve reliable results.
### 3.6 Computation times and memory
Besides the poor availability of data at high resolution, one of the main
motivations for clustering the network is to reduce the number of variables
and thus the computation time of the optimisation. In Appendix Figure 15 the
memory and solving time requirements for each Case are displayed as a function
of the number of clusters. Both memory and solving time become limiting
factors in Cases 1 and 3, with random access memory (RAM) usage peaking at
around 115 GB and solving time at around 6 days for 1024 clusters. Beyond this
number of clusters no consistent convergence in the solutions was seen.
Case 2, where the network resolution is left low and the resource resolution
is increased, shows seven times lower memory consumption and up to thirteen
times faster solving times compared to Cases 1 and 3 for the same number of
clusters. It is therefore the network resolution rather than the resource
resolution that drives up computational requirements, which it does by
introducing many new variables and possible spatial trade-offs into the
optimisation. Since Case 2 proved relatively reliable for estimating the ratio
between technologies, if not their total capacity, it may prove attractive to
increase the resource resolution rather than the network resolution if
computational resources are limited.
### 3.7 Further results
Further results on curtailment, average capacity factors, the distribution of
technologies between countries, maps, network flows and shadow prices can be
found in the Appendix, as well as a discussion of the limitations of the
model.
## 4 Conclusion
From these investigations we can draw several conclusions. Modellers need to
take account of spatial resolution, since it can have a strong effect on
modelling results. In our co-optimization of generation, storage and network
capacities, higher network resolution can drive up total system costs by as
much as 23%. Higher costs are driven by the network bottlenecks revealed at
higher resolution that limit access to wind and solar sites with high capacity
factors. On the other hand, resource resolution affects the balance of
technologies by revealing more advantageous onshore wind sites. In both cases
the system costs are driven more by the useable generation resources than
investments in the grid or storage.
If grid expansion can be assumed, a grid resolution of 90 nodes for Europe is
sufficient to capture costs and technology investments as long as the solar
and onshore wind resolution is at least around 181 nodes. If grid expansion is
not possible, a higher spatial resolution for the grid is required for
reliable results on technology choices. Since grid expansion is likely to be
limited in the future by low public acceptance, more attention will have to be
paid to the computational challenge of optimizing investments at high spatial
granularity.
## 5 Data availability
### 5.1 Lead contact
Please contact the Lead Contact, Martha M. Frysztacki
(martha.frysztacki@kit.edu), for information related to the data and code
described in the following Material and Methods section.
### 5.2 Materials Availability
No materials were used in this study.
### 5.3 Data and Code Availability
All the code and input data from PyPSA-Eur are openly available online on
GitHub and Zenodo [8, 38]. All model output data is available on Zenodo under
a Creative Commons Attribution Licence [27].
## 6 Glossary
All notation is listed in Table 2.
## 7 Acknowledgements
We thank Martin Greiner, Fabian Neumann, Lina Reichenberg, Mirko Schäfer,
David Schlachtberger, Kais Siala and Lisa Zeyen for helpful discussions,
suggestions and comments. MF, JH and TB acknowledge funding from the Helmholtz
Association under grant no. VH-NG-1352. The responsibility for the contents
lies with the authors.
## 8 Declaration of Interests
The authors declare that they have no competing financial interests.
## Appendix A Appendix
### A.1 Preservation of flow patterns with clustering
Figure 7: Pearson’s correlation coefficient of mapped flows (blue). Note that the x-axis is non-linear, therefore we mark a linear fit to the data (red).
Figure 8: Kernel Density Estimation (KDE) of aggregated flows from a high resolution network grid with 1024 nodes on the $x$-axis and a low resolution grid with 45 nodes (left) and 362 nodes (right) on the $y$-axis. 0.25, 0.5 and 0.75 quantiles of the distribution are displayed as purple isolines around the KDE.
To understand how well the $k$-means clustering preserves flow patterns, we
took a fixed dispatch pattern for the assets in Europe at high resolution and
examined how the network flows changed as the network was clustered.
The fixed dispatch was determined by solving the linearised optimal power flow
problem for a 1024-node representation of today’s European electricity system.
The asset dispatch was then mapped into the clustered networks, and a regular
linearised power flow was solved in the clustered network.
If lines $\ell\in N_{c,d}$ in the 1024-node network were mapped to a single
representative line $\ell_{c,d}$ in the clustered network, the summed flows
from the original network $\hat{f}_{c,d,t}=\sum_{\ell\in N_{c,d}}f_{\ell,t}$
(‘microscopic flows’) were then compared to the flow $f_{c,d,t}$ in line
$\ell_{c,d}$ of the clustered network (‘macroscopic flows’).
Figure 7 shows the Pearson correlation coefficient between the flows
$f_{c,d,t}$ of aggregated lines $\ell_{c,d}$ in the lower resolution network
and the summed flows $\hat{f}_{c,d,t}$ of all lines in $N_{c,d}$ in the full
resolution network. The red line is a linear fit through the points; it
appears curved only because the $x$-axis scale is non-linear. Even at 37 nodes
the correlation between the flows is good (Pearson correlation coefficient
above 0.90) and shows an improving trend until at full 1024-node resolution
the flows are once again perfectly equal.
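The micro/macro comparison reduces to a Pearson correlation over all corridors and snapshots. The sketch below illustrates this with synthetic stand-ins for the real model output (the flow model and all numbers are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# 'Microscopic' flows: summed flows of all original lines mapped to each
# corridor. 'Macroscopic' flow: the clustered-network flow, here modelled
# as the microscopic flow plus a clustering error.
f_micro = rng.normal(0.0, 100.0, size=(50, 168))          # 50 corridors, 168 h
f_macro = f_micro + rng.normal(0.0, 30.0, size=f_micro.shape)

r = np.corrcoef(f_micro.ravel(), f_macro.ravel())[0, 1]   # Pearson coefficient
```

With this noise level the correlation lands close to one, mirroring the high coefficients reported for the clustered networks.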
Example density plots of the $\hat{f}_{c,d,t}$ against the $f_{c,d,t}$ for all
lines and all times are plotted for different clustering levels in Figure 8.
The match between the flows is better for higher resolution networks, with a
near-diagonal line already for 362 nodes.
For a more probabilistic approach, we perform a kernel density estimation
(KDE), computed via a fast Fourier transform, of the aggregated flows of the
higher-resolution network against the flows of the low-resolution network.
Aggregated flows $\hat{f}_{c,d,t}$ are considered an estimator for the flow
$f_{c,d,t}$ in the representative lower resolution network. The resulting
density functions from the KDE are displayed in Figure 8. For the low
resolution network, the probability distribution has two different modes,
while a higher resolution network approaches a Gaussian distribution. The
variance of the probability density function for a low resolution network is
higher than for a high resolution network, as each of the quantile isolines
is broader.
### A.2 Maps of capacity factors for wind and solar
Figures 9(a), 9(b), 9(c) present average capacity factors over the weather
year 2013 for solar, wind on- and off-shore respectively, i.e.
$\displaystyle\bar{g}_{n,s}=\langle\bar{g}_{n,s,t}\rangle_{t}\quad\forall
n\,,$
where $s\in\\{\mathrm{solar},\ \mathrm{wind\ onshore},\ \mathrm{wind\
offshore}\\}$. The capacity factors are shown in the Voronoi cells around each
of the 1024 nodes of the original network, i.e. the set of points closest to
each node.
The graphics show that capacity factors for solar decrease from South to
North, while those for wind increase towards the North Sea and the Baltic Sea.
The average capacity factors are spatially correlated, but as they are
aggregated over larger and larger areas using the weighted average from the
clustering approach in equation (6), they decline as bad sites are mixed with
good sites. This is reflected in Figure 11, which shows how the average
capacity factors per technology for the generation fleet optimized over the
whole of Europe change with the clustering.
(a) Solar
(b) Wind onshore
(c) Wind offshore
Figure 9: Wind and solar capacity factors in Europe for the weather year 2013
at full resolution.
### A.3 Breakdowns for multiple transmission expansion scenarios
Figure 10 shows an extension of the cost breakdowns in Figure 4 from the
scenario with no transmission to scenarios with 25% and 50% grid expansion.
The general trends are the same as for the scenario without grid expansion,
but grid expansion generally allows more wind capacity to be built, resulting
in lower investment in solar, batteries and hydrogen storage, as was seen in
Figure 5.
Figure 10: Technology breakdown of the annual system costs for generation
(top) and flexibility options (bottom) as a function of the number of clusters
for Cases 1, 2 and 3. Cases correspond to the rows, while transmission
expansion scenarios correspond to the columns.
### A.4 Average capacity factors per technology
To understand how the model exploits the best available resource sites per
node, we examine a time-averaged technology-specific capacity factor
$\bar{g}_{s}$. The capacity factor is weighted by how much capacity $G_{n,s}$
of technology $s$ was built at each node $n$ with time-averaged capacity
factor $\bar{g}_{n,s}=\langle\bar{g}_{n,s,t}\rangle_{t}$.
$\displaystyle\bar{g}_{s}:=\frac{\sum_{n}\bar{g}_{n,s}\cdot
G_{n,s}}{\sum_{n}G_{n,s}}\,.$
We present this technology-specific capacity factor in Figure 11 for all three
cases with the no-expansion transmission scenario, i.e. where
$F_{\ell}=F_{\ell}^{2018}$.
As the number of clusters increases, Case 2 offers a larger variety of sites
per node at which capacity can be installed optimally, and is not restricted
by transmission constraints beyond country-zones. Therefore, the more sites
are available, the higher the weighted capacity factor, because good sites are
no longer mixed with lower-capacity-factor sites in equation (6). The highest
resolution of Case 2 is also the lowest resolution of Case 3: many resource
sites and only one node per country-zone. As the number of nodes in Case 3
increases while the same sites are available, transmission bottlenecks force
the model to build more capacity in locations of worse capacity factors.
Therefore, the capacity factors drop again. For Case 1, where resource
resolution and network resolution change in tandem, the resource resolution
dominates and we see increasing capacity factors like in Case 2.
Figure 11: Average capacity factors for each technology for the no
transmission expansion scenario in all three cases.
### A.5 Curtailment per technology
Curtailment is the amount of energy that is available in theory but cannot be
injected into the grid because of transmission constraints or a lack of
demand:
$\displaystyle\bar{g}_{n,s,t}\cdot G_{n,s}-g_{n,s,t}$
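Curtailed energy follows directly from availability, capacity and dispatch, as the expression above shows; the numbers in the sketch below are assumptions for illustration:

```python
import numpy as np

w_t = 3.0
g_bar = np.array([0.8, 0.6, 0.1])   # availability per unit capacity, assumed
G = 100.0                            # installed capacity (MW), assumed
g = np.array([70.0, 60.0, 10.0])     # dispatched power (MW), assumed

curtailed_power = g_bar * G - g                     # MW curtailed per snapshot
total_curtailed = (w_t * curtailed_power).sum()     # MWh over the period
share = total_curtailed / (w_t * g_bar * G).sum()   # fraction of available energy
```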
Figure 12 shows total curtailment per technology in all Cases. Curtailment in
all situations is low (less than $4\%$ of total demand). Curtailment increases
with higher network resolution in both the Cases $1$ and $3$ that incorporate
transmission constraints, while it is gently decreasing with resource
resolution in Case $2$ where there are only transmission constraints at the
boundaries of country-zones.
Figure 12: Curtailment for the no transmission expansion scenario in all three
cases.
### A.6 Breakdowns by country
Figures 4 and 10 show the breakdown of total costs by technology for the whole
of Europe. However, it could be that for each technology, the spatial
distribution is unstable, moving from country to country as the clustering
changes.
For a better understanding of the spatial distribution of installed capacity,
we examine the total installed renewable capacity per country in all Cases in
Figure 13 with no transmission expansion. The general trend is that the total
installed capacity per country is relatively stable with cluster resolution.
In Case 2 capacity decreases with resolution, since the exploitation of better
resource sites means that less capacity is needed for a given energy yield.
The opposite effect is seen in Case 3, while Case 1 reveals a mix of the
effects of Case 2 and 3.
Figure 13: Capacities per country for the no transmission expansion scenario
in all three cases.
### A.7 Shadow price of line volume constraint
The shadow price $\mu_{\mathrm{trans}}$ of the transmission expansion
constraint in equation (16) corresponds to the system cost benefit of an
incremental MWkm of line volume. Read another way, it is the line cost
required to obtain the same solution with the constraint removed (i.e. lifting
the constraint into the objective function as a Lagrangian relaxation).
We present the resulting shadow prices in Figure 14, where they are compared
with the annuity for underground and overhead lines. Using the cost of
underground cables, the cost-optimal solution would give a grid expansion of
25-50% at high resolution. For overhead transmission, the cost optimum would
be over 50%.
Figure 14: Shadow (dual) price of the line volume constraint.
Figure 15: Memory consumption and solving time.
### A.8 Capacity factors within each cluster region for wind and solar
In this subsection we analyse the homogeneity of time-average capacity factors
for wind and solar within each cluster region as the number of clusters
changes. Duration curves of the capacity factors in each of the 0.3∘ $\times$
0.3∘ weather pixels of the original ERA5 reanalysis dataset [4] for the
European area (‘cutout’) are plotted in blue in Figure 16. In addition, the
duration curves for the pixels in each cluster are plotted in orange, with the
median for each cluster in red. This reveals how much the capacity factors of
wind and solar vary within each cluster region, compared to the whole of
Europe. Table 4 presents the average standard deviation within each cluster
region for each technology and resolution.
For a high resolution of 1024 clusters, we observe that the median values (red
dots) for solar lie very close to the representative values of Europe (black
line) with a relatively small average standard deviation of $1.9\cdot 10^{-3}$
inside each cluster region (scattering of the orange dots). In the case of
onshore wind, the high capacity factors are underestimated by the median
value, while intermediate and low capacity factors are represented with a
minor difference between median and representative European value. For onshore
wind, the average standard deviation of the capacity factors within each
region is larger than for solar by one order of magnitude ($\mathcal{O}(10^{-2})$,
represented by the scattering of orange dots). The largest variance can be
observed in offshore regions, where the average standard deviation is
$4.3\cdot 10^{-2}$, twice as large as for onshore regions, and the low
capacity factors are overestimated by their representative median values.
In the case of 256 clusters, the standard deviation per region (scattered
orange dots) doubles compared to a resolution of $1024$ sites for solar and
increases by $\sim 50\%$ for onshore and offshore wind. However, the median
values (red dots) per site do not change much compared to the higher
resolution case. Only at very low resolutions, or in the extreme case of one site
representing one country-zone, do the median values (red dots) deviate from
the European curve (black line), and the capacity values per site (orange
scattered dots) cover a wide range of values (for example $0-0.5$ for wind
onshore, or $0.11-0.18$ for solar). At 37 nodes, the average standard
deviation is three times larger for solar compared to a resolution of 1024
sites and twice as large for onshore wind.
From this analysis we can conclude that a resource resolution of at least
several hundred nodes is required to adequately capture the resource variation
within Europe, with a higher resolution required for wind than for solar.
Figure 16: Breakdown of capacity factors per technology for the weather cutout pixels inside each cluster region as a duration curve (orange), with the median marked in red. The overall duration curve of pixel capacity factors for the whole of Europe is plotted in blue.
n clusters | solar | wind onshore | wind offshore
---|---|---|---
$1024$ | $1.9\cdot 10^{-3}$ | $2.2\cdot 10^{-2}$ | $4.3\cdot 10^{-2}$
$724$ | $2.3\cdot 10^{-3}$ | $2.5\cdot 10^{-2}$ | $4.5\cdot 10^{-2}$
$512$ | $2.7\cdot 10^{-3}$ | $2.8\cdot 10^{-2}$ | $4.9\cdot 10^{-2}$
$362$ | $3.2\cdot 10^{-3}$ | $3.3\cdot 10^{-2}$ | $5.1\cdot 10^{-2}$
$256$ | $3.7\cdot 10^{-3}$ | $3.6\cdot 10^{-2}$ | $5.3\cdot 10^{-2}$
$181$ | $4.2\cdot 10^{-3}$ | $3.9\cdot 10^{-2}$ | $5.7\cdot 10^{-2}$
$128$ | $4.5\cdot 10^{-3}$ | $4.3\cdot 10^{-2}$ | $5.8\cdot 10^{-2}$
$90$ | $5.0\cdot 10^{-3}$ | $4.6\cdot 10^{-2}$ | $5.9\cdot 10^{-2}$
$64$ | $6.1\cdot 10^{-3}$ | $4.9\cdot 10^{-2}$ | $6.2\cdot 10^{-2}$
$45$ | $6.1\cdot 10^{-3}$ | $4.9\cdot 10^{-2}$ | $6.2\cdot 10^{-2}$
$37$ | $6.2\cdot 10^{-3}$ | $4.9\cdot 10^{-2}$ | $6.2\cdot 10^{-2}$
Table 4: Average standard deviation of the capacity factor (per unit) per
region for network resolutions from $1024$ down to $37$ sites.
### A.9 Limitations of this study
The need to solve the models at high spatial resolution and 3-hourly temporal
resolution in reasonable time means that compromises have been made elsewhere:
the conventional generation technologies are limited to hydroelectricity and
gas turbines, the storage is limited to batteries and hydrogen storage, only a
single weather year is modelled, and ancillary services, grid losses,
discretisation of new grid capacities, distribution grids and forecast error
are not modelled. This allows us to focus on the main interactions between
wind, solar and the transmission grid; the effects of the other factors are
expected to be small [12] since wind and solar investment dominates system
costs. If it were cost-effective to build dispatchable low-carbon generators
like nuclear or fossil generators with carbon capture and sequestration, then
the effects of resource and network resolution would be dampened, since there
would be less wind and solar investment.
Some of the quantitative conclusions may depend on the technology assumptions,
such as the relative cost of solar PV, onshore wind and offshore wind.
However, investigations of the sensitivities of similar models to generation
costs [56] and of the near-optimal space of solutions [48] have shown that a
large share of wind in low-cost scenarios for Europe is robust across many
scenarios because of the seasonal matching of wind to demand in Europe. It is
the interactions between wind and the transmission grid that drive the results
in this paper.
The results may also change as additional energy sectors are coupled to the
power sector, such as building heating, transport and non-electric industry
demand. While extra flexibility from these sectors might offer an alternative
to grid expansion, grid expansion is still expected to be cost-effective [14],
while the effects of resource resolution on the optimal solution remain the
same.
In the present paper, market structures different from today’s are assumed,
namely nodal pricing to manage grid congestion, and a high CO2 price to obtain
a 95% CO2 reduction compared to 1990 levels.
We weighted the distribution of wind and solar inside each nodal region
(Voronoi cell) proportional to the installable capacity and capacity factor at
each weather grid cell [37]. This means good and bad sites are not mixed
evenly, but skewed slightly towards good sites. This effect disappears at high
resolution, where the capacity factor is more uniform inside each Voronoi
cell.
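A minimal sketch of this weighting, with hypothetical installable potentials and capacity factors for three weather cells inside one Voronoi cell:

```python
# The share of a node's wind/solar capacity assigned to each weather grid
# cell is proportional to installable potential times capacity factor,
# which skews the layout slightly towards good sites. Values hypothetical.
import numpy as np

potential = np.array([200.0, 500.0, 300.0])   # MW installable per cell
cf = np.array([0.35, 0.20, 0.30])             # mean capacity factor per cell

weights = potential * cf
layout = weights / weights.sum()              # per-cell capacity shares
print(layout.round(3))                        # -> [0.269 0.385 0.346]
```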
Another approach would be to keep a low one-node-per-country network
resolution and then have multiple resource classes defined not by region, like
our Case 2, but by capacity factor [55, 52, 45] (e.g. a good class with sites
with full load hours above 2000, a medium class between 1500 and 2000, and a
bad class below 1500). This would also be beneficial but would not be
compatible with the increasing grid resolution, since the generators in each
class would be spread non-contiguously over the country.
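The class definition in this alternative approach can be sketched as a simple binning of sites by full-load hours (thresholds taken from the example above; the site values are hypothetical):

```python
# Binning sites into resource classes by full-load hours, independent of
# geography: good (> 2000 h), medium (1500-2000 h), bad (< 1500 h).
import numpy as np
import pandas as pd

full_load_hours = pd.Series([2400, 1800, 1200, 2100, 1550])  # hypothetical
classes = pd.cut(full_load_hours,
                 bins=[0, 1500, 2000, np.inf],
                 labels=["bad", "medium", "good"])
print(classes.tolist())   # -> ['good', 'medium', 'bad', 'good', 'medium']
```

Because class members are scattered non-contiguously across a country, this binning cannot be reconciled with an increasing grid resolution, as the text notes.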
Table 5: Aggregation rules for attributes of nodes and attached assets
attribute | aggregated attribute | mapping | values or units
---|---|---|---
latitude & longitude | $x_{c}$ | $\frac{1}{|N_{c}|}\sum_{i\in N_{c}}x_{i}$ | $\mathbb{R}^{2}$
(optimal) power capacity | $G_{c,s}$ | $\sum_{i\in N_{c}}G_{i,s}$ | $MW$
asset installable potential | $G^{\mathrm{max}}_{c,s}$ | $\sum_{i\in N_{c}}G^{\mathrm{max}}_{i,s}$ | $MW$
Table 6: Aggregation rules for attributes of lines in series
attribute | aggregated attribute | mapping | values or units
---|---|---|---
length (HVDC lines) | $l_{c,d}$ | $\min_{\ell_{i,j}\in N_{c,d}}l_{i,j}$ | km
power capacity | $F_{c,d}$ | $\sum_{\ell_{i,j}\in N_{c,d}}F_{i,j}$ | MVA
fraction of length underwater | $u_{c,d}$ | $\frac{1}{l_{c,d}}\sum_{\ell_{i,j}\in N_{c,d}}l_{i,j}\cdot u_{i,j}$ | per unit
Table 7: Aggregation rules for attributes of lines in parallel
attribute | aggregated attribute | mapping | values or units
---|---|---|---
power capacity | $s^{\mathrm{nom}}_{c,d}$ | $\sum_{\ell_{i,j}\in N_{c,d}}s^{\mathrm{nom}}_{i,j}$ | $MVA$
power capacity minimum | $s^{\mathrm{min}}_{c,d}$ | $\sum_{\ell_{i,j}\in N_{c,d}}s^{\mathrm{min}}_{i,j}$ | $MVA$
power capacity maximum | $s^{\mathrm{max}}_{c,d}$ | $\sum_{\ell_{i,j}\in N_{c,d}}s^{\mathrm{max}}_{i,j}$ | $MVA$
number of parallel lines | $n^{\mathrm{parallel}}_{c,d}$ | $\sum_{\ell_{i,j}\in N_{c,d}}n^{\mathrm{parallel}}_{i,j}$ | $\mathbb{R}$
terrain factor for capital costs | $\mathrm{terr}_{c,d}$ | $\frac{1}{|N_{c,d}|}\sum_{\ell_{i,j}\in N_{c,d}}\mathrm{terr}_{i,j}$ | per unit
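As a rough illustration of the aggregation rules in Tables 5-7, the following sketch applies the sum and mean mappings with pandas groupby to toy node and parallel-line data; all names and values are hypothetical, not taken from the model.

```python
# Aggregation sketch: node capacities and installable potentials are
# summed per cluster, coordinates averaged; capacities of parallel lines
# between two clusters add up, while the per-unit terrain factor for
# capital costs is averaged over the lines in the corridor.
import pandas as pd

nodes = pd.DataFrame({
    "cluster": ["c1", "c1", "c2"],
    "x": [10.0, 12.0, 20.0], "y": [50.0, 52.0, 48.0],
    "G": [100.0, 150.0, 80.0],               # MW installed per node
})
agg_nodes = nodes.groupby("cluster").agg(
    x=("x", "mean"), y=("y", "mean"), G=("G", "sum"))

lines = pd.DataFrame({                        # two parallel lines c1-c2
    "corridor": ["c1-c2", "c1-c2"],
    "s_nom": [500.0, 300.0], "terr": [1.0, 1.4],
})
agg_lines = lines.groupby("corridor").agg(
    s_nom=("s_nom", "sum"),                   # parallel capacities add up
    terr=("terr", "mean"))                    # terrain factor: average

print(agg_nodes)
print(agg_lines)
```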
## References
* KWK [2014] , 2014. Kombikraftwerk 2: Abschlussbericht. Technical Report. Fraunhofer IWES and others.
* eHi [2015] , 2015. eHighways 2050 Final Reports. Technical Report. ENTSO-E and others.
* int [2017] , 2017. ENTSO-E Interactive Transmission System Map. URL: https://www.entsoe.eu/map/.
* ERA [2017] , 2017. ERA5 Reanalysis. https://software.ecmwf.int/wiki/display/CKB/ERA5+data+documentation.
* OPS [2019] , 2019. Load in hourly resolution. http://www.open-power-system-data.org/. doi:10.25832/time_series/2019-06-05.
* EMR [2019] , 2019. Regulation (EU) 2019/943 on the Internal Market for Electricity. URL: http://data.europa.eu/eli/reg/2019/943/oj.
* dea [2019] , 2019. Technology Data for Generation of Electricity and District Heating, Energy Storage and Energy Carrier Generation and Conversion. Technical Report. Danish Energy Agency and Energinet.dk. URL: https://ens.dk/en/our-services/projections-and-models/technology-data.
* PyP [2020] , 2020. PyPSA-Eur on GitHub. URL: https://github.com/PyPSA/pypsa-eur.
* Biener and Garcia Rosas [2020] Biener, W., Garcia Rosas, K.R., 2020\. Grid reduction for energy system analysis. Electric Power Systems Research 185, 106349. URL: https://doi.org/10.1016/j.epsr.2020.106349, doi:10.1016/j.epsr.2020.106349.
* Biggar and Hesamzadeh [2014] Biggar, D.R., Hesamzadeh, M.R. (Eds.), 2014\. The Economics of Electricity Markets. John Wiley & Sons Ltd. URL: https://doi.org/10.1002/9781118775745, doi:10.1002/9781118775745.
* Blumsack et al. [2009] Blumsack, S., Hines, P., Patel, M., Barrows, C., Sanchez, E.C., 2009. Defining power network zones from measures of electrical distance, in: 2009 IEEE Power Energy Society General Meeting, pp. 1–8. doi:10.1109/PES.2009.5275353.
* Brown et al. [2018a] Brown, T., Bischof-Niemz, T., Blok, K., Breyer, C., Lund, H., Mathiesen, B., 2018a. Response to ‘Burden of proof: A comprehensive review of the feasibility of 100% renewable-electricity systems’. Renewable and Sustainable Energy Reviews 92, 834 – 847. URL: https://doi.org/10.1016/j.rser.2018.04.113, doi:10.1016/j.rser.2018.04.113.
* Brown et al. [2018b] Brown, T., Hörsch, J., Schlachtberger, D., 2018b. PyPSA: Python for Power System Analysis. Journal of Open Research Software 6, 4. doi:10.5334/jors.188.
* Brown et al. [2018c] Brown, T., Schlachtberger, D., Kies, A., Greiner, M., 2018c. Synergies of sector coupling and transmission extension in a cost-optimised, highly renewable European energy system. Energy 160, 720–730. URL: https://doi.org/10.1016/j.energy.2018.06.222, doi:10.1016/j.energy.2018.06.222.
* Brown, T. et al. [2016] Brown, T., Schierhorn, P., Tröster, E., Ackermann, T., 2016\. Optimising the European transmission system for 77% renewable electricity by 2030. IET Renewable Power Generation 10, 3–9.
* Budischak et al. [2013] Budischak, C., Sewell, D., Thomson, H., Mach, L., Veron, D.E., Kempton, W., 2013\. Cost-minimized combinations of wind power, solar power and electrochemical storage, powering the grid up to 99.9% of the time. Journal of Power Sources 225, 60 – 74. URL: https://doi.org/10.1016/j.jpowsour.2012.09.054, doi:10.1016/j.jpowsour.2012.09.054.
* Büchel et al. [2015] Büchel, H.B., Natemeyer, H., Winter, S., 2015. Leistungsflüsse und Netzauslastung im europäischen Übertragungsnetz bis 2050: Eine Studie im Auftrag des Bundesministeriums für Umwelt, Naturschutz, Bau und Reaktorsicherheit. Technical Report. URL: https://www.bmu.de/fileadmin/Daten_BMU/Pools/Forschungsdatenbank/fkz_um_11_41_130_energieinfrastruktur_europa_bf.pdf.
* Cheng and Overbye [2005] Cheng, X., Overbye, T.J., 2005\. PTDF-based power system equivalents. IEEE Transactions on Power Systems 20, 1868–1876. doi:10.1109/TPWRS.2005.857013.
* Scaramuzzino et al. [03/2019] Scaramuzzino, C., Garegnani, G., Zambelli, P., 03/2019. Integrated approach for the identification of spatial patterns related to renewable energy potential in European territories. Renewable and Sustainable Energy Reviews 101, 1–13. URL: https://doi.org/10.1016/j.rser.2018.10.024, doi:10.1016/j.rser.2018.10.024.
* Cohen et al. [2014] Cohen, J.J., Reichl, J., Schmidthaler, M., 2014. Re-focussing research efforts on the public acceptance of energy infrastructure: A critical review. Energy 76, 4 – 9\. URL: https://doi.org/10.1016/j.energy.2013.12.056, doi:10.1016/j.energy.2013.12.056.
* Cole et al. [2017] Cole, W., Frew, B., Mai, T., Sun, Y., Bistline, J., Blanford, G., Young, D., Marcy, C., Namovicz, C., Edelman, R., Meroney, B., Sims, R., Stenhouse, J., Donohoo-Vallett, P., 2017. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective. Technical Report. URL: https://www.osti.gov/biblio/1416124, doi:10.2172/1416124.
* Cotilla-Sanchez et al. [2013] Cotilla-Sanchez, E., Hines, P.D.H., Barrows, C., Blumsack, S., Patel, M., 2013. Multi-attribute partitioning of power networks based on electrical distance. IEEE Transactions on Power Systems 28, 4979–4987. doi:10.1109/TPWRS.2013.2263886.
* Czisch [2005] Czisch, G., 2005. Szenarien zur zukünftigen Stromversorgung: Kostenoptimierte Variationen zur Versorgung Europas und seiner Nachbarn mit Strom aus erneuerbaren Energien. Ph.D. thesis. Universität Kassel. URL: https://kobra.bibliothek.uni-kassel.de/bitstream/urn:nbn:de:hebis:34-200604119596/1/DissVersion0502.pdf.
* Egerer, J. et al. [2013] Egerer, J., Lorenz, C., Gerbaulet, C., 2013. European electricity grid infrastructure expansion in a 2050 context, in: 10th International Conference on the European Energy Market, IEEE. pp. 1–7. URL: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6607408.
* European Commission [12/2011] European Commission, 12/2011. Energy roadmap 2050. COM(2011) 885/2.
* European Network of Transmission System Operators for Electricity [2018] European Network of Transmission System Operators for Electricity, 2018. Ten-Year Network Development Plan (TYNDP) 2018. Technical Report. ENTSO-E.
* Frysztacki et al. [2020] Frysztacki, M., Hörsch, J., Hagenmeyer, V., Brown, T., 2020\. Clustering Dataset. URL: https://doi.org/10.5281/zenodo.3965781, doi:10.5281/zenodo.3965781.
* Frysztacki and Brown [2020] Frysztacki, M.M., Brown, T., 2020\. Modeling curtailment in Germany: How spatial resolution impacts line congestion, in: Proceedings of 17th International Conference on the European Energy Market (EEM 2020). URL: https://doi.org/10.1109/EEM49802.2020.9221886, doi:10.1109/EEM49802.2020.9221886.
* Fuchs et al. [2015] Fuchs, B., Roehder, A., Mittelstaedt, M., Massmann, J., Natemeyer, H., Schnettler, A., 2015\. Studie zu Aspekten der elektrischen Systemstabilität im deutschen Übertragungsnetz bis 2023: Eine Studie im Auftrag der Bundesnetzagentur. Technical Report. URL: https://www.bundesnetzagentur.de/SharedDocs/Downloads/DE/Sachgebiete/Energie/Unternehmen_Institutionen/Versorgungssicherheit/System-_u_Netzsicherheit/Gutachten_IFHT_RWTH_Systemstabilitaet_2015.pdf?__blob=publicationFile&v=1.
* Galvin [2018] Galvin, R., 2018. Trouble at the end of the line: Local activism and social acceptance in low-carbon electricity transmission in Lower Franconia, Germany. Energy Research & Social Science 38, 114 – 126. URL: https://doi.org/10.1016/j.erss.2018.01.022, doi:10.1016/j.erss.2018.01.022.
* Gils et al. [2017] Gils, H.C., Scholz, Y., Pregger, T., de Tena, D.L., Heide, D., 2017. Integrated modelling of variable renewable energy-based power supply in Europe. Energy 123, 173 – 188. URL: https://doi.org/10.1016/j.energy.2017.01.115, doi:10.1016/j.energy.2017.01.115.
* Hagspiel, S. et al. [2014] Hagspiel, S., Jägemann, C., Lindenburger, D., Brown, T., Cherevatskiy, S., Tröster, E., 2014\. Cost-optimal power system extension under flow-based market coupling. Energy 66, 654–666. URL: https://doi.org/10.1016/j.energy.2014.01.025, doi:10.1016/j.energy.2014.01.025.
* Hamon et al. [2015] Hamon, C., Shayesteh, E., Amelin, M., Söder, L., 2015\. Two partitioning methods for multi-area studies in large power systems. International Transactions on Electrical Energy Systems 25, 648–660. URL: http://dx.doi.org/10.1002/etep.1864, doi:10.1002/etep.1864. eTEP-12-0480.R1.
* Hartigan and Wong [1979] Hartigan, J.A., Wong, M.A., 1979\. Algorithm AS 136: A K-means clustering algorithm. Applied Statistics 28, 100–108.
* Hess et al. [2018] Hess, D., Wetzel, M., Cao, K.K., 2018. Representing node-internal transmission and distribution grids in energy system models. Renewable Energy 119, 874 – 890. URL: https://doi.org/10.1016/j.renene.2017.10.041, doi:10.1016/j.renene.2017.10.041.
* Hörsch and Brown [2017] Hörsch, J., Brown, T., 2017\. The role of spatial scale in joint optimisations of generation and transmission for European highly renewable scenarios, in: Proceedings of 14th International Conference on the European Energy Market (EEM 2017). URL: https://arxiv.org/abs/1705.07617, doi:10.1109/EEM.2017.7982024.
* Hörsch et al. [11/2018] Hörsch, J., Hofmann, F., Schlachtberger, D., Brown, T., 11/2018. PyPSA-Eur: An Open Optimisation Model of the European Transmission System. Energy Strategy Reviews 22, 207–215. URL: https://doi.org/10.1016/j.esr.2018.08.012, doi:10.1016/j.esr.2018.08.012.
* Hörsch et al. [2019] Hörsch, J., Neumann, F., Hofmann, F., Schlachtberger, D., Brown, T., 2019. PyPSA-Eur: An Open Optimisation Model of the European Transmission System (Version 0.1). URL: https://doi.org/10.5281/zenodo.3517935, doi:10.5281/zenodo.3517935.
* Hörsch et al. [2018] Hörsch, J., Ronellenfitsch, H., Witthaut, D., Brown, T., 2018\. Linear optimal power flow using cycle flows. Electric Power Systems Research 158, 126 – 135. URL: https://doi.org/10.1016/j.epsr.2017.12.034, doi:10.1016/j.epsr.2017.12.034.
* Jain et al. [1999] Jain, A.K., Murty, M.N., Flynn, P.J., 1999. Data clustering: A review. ACM Comput. Surv. 31, 264–323. doi:10.1145/331499.331504.
* Kotzur et al. [2018] Kotzur, L., Markewitz, P., Robinius, M., Stolten, D., 2018\. Impact of different time series aggregation methods on optimal energy system design. Renewable Energy 117, 474 – 487. URL: https://doi.org/10.1016/j.renene.2017.10.017, doi:10.1016/j.renene.2017.10.017.
* Krishnan and Cole [2016] Krishnan, V., Cole, W., 2016\. Evaluating the value of high spatial resolution in national capacity expansion models using reeds, in: 2016 IEEE Power and Energy Society General Meeting (PESGM), pp. 1–5. doi:10.1109/PESGM.2016.7741996.
* Küppers et al. [2020] Küppers, M., Perau, C., Franken, M., Heger, H., Huber, M., Metzger, M., Niessen, S., 2020. Data-driven regionalization of decarbonized energy systems for reflecting their changing topologies in planning and optimization. Energies 13, 4076\. doi:10.3390/en13164076.
* MacDonald et al. [2017] MacDonald, A.E., Clack, C.T.M., Alexander, A., Dunbar, A., Wilczak, J., Xie, Y., 2017. Future cost-competitive electricity systems and their impact on US CO2 emissions. Nature Climate Change 6, 526–531. URL: https://doi.org/10.1038/nclimate2921, doi:10.1038/nclimate2921.
* Mattsson et al. [2020] Mattsson, N., Verendel, V., Hedenus, F., Reichenberg, L., 2020\. An autopilot for energy models – automatic generation of renewable supply curves, hourly capacity factors and hourly synthetic electricity demand for arbitrary world regions. Technical Report. URL: https://arxiv.org/abs/2003.01233.
* Nabe [2014] Nabe, C., 2014. Impacts of restricted transmission grid expansion in a 2030 perspective in Germany, in: Wind Integration Workshop, Berlin.
* Neumann and Brown [2019] Neumann, F., Brown, T., 2019\. Heuristics for transmission expansion planning in low-carbon energy system models, in: 2019 16th International Conference on the European Energy Market (EEM), pp. 1–8. URL: https://www.doi.org/10.1109/EEM.2019.8916411, doi:10.1109/EEM.2019.8916411.
* Neumann and Brown [2021] Neumann, F., Brown, T., 2021\. The near-optimal feasible space of a renewable power system model. Electric Power Systems Research 190, 106690. URL: https://doi.org/10.1016/j.epsr.2020.106690, doi:10.1016/j.epsr.2020.106690.
* Oh [2010] Oh, H., 2010. A new network reduction methodology for power system planning studies. IEEE Transactions on Power Systems 25, 677 – 684. doi:10.1109/TPWRS.2009.2036183.
* Perera et al. [2020] Perera, A.T.D., Nik, V.M., Chen, D., Scartezzini, J.L., Hong, T., 2020. Quantifying the impacts of climate change and extreme climate events on energy systems. Nature Energy 5, 150–159. URL: https://doi.org/10.1038/s41560-020-0558-0, doi:10.1038/s41560-020-0558-0.
* Pfeifroth et al. [2017] Pfeifroth, U., Kothe, S., Müller, R., Trentmann, J., Hollmann, R., Fuchs, P., Werscheck, M., 2017. Surface Radiation Data Set - Heliosat (SARAH) - Edition 2. URL: https://doi.org/10.5676/EUM_SAF_CM/SARAH/V002, doi:10.5676/EUM_SAF_CM/SARAH/V002.
* Reichenberg et al. [2018] Reichenberg, L., Hedenus, F., Odenberger, M., Johnsson, F., 2018\. The marginal system LCOE of variable renewables – Evaluating high penetration levels of wind and solar in Europe. Energy 152, 914 – 924. URL: https://doi.org/10.1016/j.energy.2018.02.061, doi:10.1016/j.energy.2018.02.061.
* Rodriguez et al. [2014] Rodriguez, R., Becker, S., Andresen, G., Heide, D., Greiner, M., 2014. Transmission needs across a fully renewable European power system. Renewable Energy 63, 467–476. URL: https://doi.org/10.1016/j.renene.2013.10.005, doi:10.1016/j.renene.2013.10.005.
* Ryberg et al. [2018] Ryberg, D., Robinius, M., Stolten, D., 2018. Evaluating Land Eligibility Constraints of Renewable Energy Sources in Europe. Energies 11, 1246\. URL: http://www.mdpi.com/1996-1073/11/5/1246, doi:10.3390/en11051246.
* Schlachtberger et al. [09/2017] Schlachtberger, D., Brown, T., Schramm, S., Greiner, M., 09/2017. The benefits of cooperation in a highly renewable European electricity network. Energy 134, 469–481. URL: https://doi.org/10.1016/j.energy.2017.06.004, doi:10.1016/j.energy.2017.06.004.
* Schlachtberger et al. [2018] Schlachtberger, D., Brown, T., Schäfer, M., Schramm, S., Greiner, M., 2018. Cost optimal scenarios of a future highly renewable european electricity system: Exploring the influence of weather data, cost parameters and policy constraints. Energy 163, 100 – 114. URL: https://doi.org/10.1016/j.energy.2018.08.070, doi:10.1016/j.energy.2018.08.070.
* Schröder et al. [2013] Schröder, A., Kunz, F., Meiss, J., Mendelevitch, R., von Hirschhausen, C., 2013. Current and prospective costs of electricity generation until 2050. Data Documentation, DIW 68. Deutsches Institut für Wirtschaftsforschung (DIW). Berlin. URL: http://hdl.handle.net/10419/80348.
* Schäfer et al. [2017] Schäfer, M., Siggaard, S.B., Zhu, K., Poulsen, C.R., Greiner, M., 2017. Scaling of transmission capacities in coarse-grained renewable electricity networks. EPL (Europhysics Letters) 119, 38004\. URL: https://doi.org/10.1209/0295-5075/119/38004, doi:10.1209/0295-5075/119/38004.
* Shayesteh et al. [2015] Shayesteh, E., Hobbs, B.F., Söder, L., Amelin, M., 2015\. ATC-Based System Reduction for Planning Power Systems With Correlated Wind and Loads. IEEE Transactions on Power Systems 30, 429–438. doi:10.1109/TPWRS.2014.2326615.
* Shi and Tylavsky [2015] Shi, D., Tylavsky, D.J., 2015\. A novel bus-aggregation-based structure-preserving power system equivalent. IEEE Transactions on Power Systems 30, 1977–1986. URL: https://doi.org/10.1109/TPWRS.2014.2359447, doi:10.1109/TPWRS.2014.2359447.
* Siala and Mahfouz [08/2019] Siala, K., Mahfouz, M.Y., 08/2019. Impact of the choice of regions on energy system models. Energy Strategy Reviews 25, 75–85. URL: https://doi.org/10.1016/j.esr.2019.100362, doi:10.1016/j.esr.2019.100362.
* Singh and Srivastava [2005] Singh, H.K., Srivastava, S.C., 2005\. A reduced network representation suitable for fast nodal price calculations in electricity markets, in: IEEE Power Engineering Society General Meeting, 2005, pp. 2070–2077 Vol. 2. doi:10.1109/PES.2005.1489092.
* Pfenninger et al. [05/2014] Pfenninger, S., Hawkes, A., Keirstead, J., 05/2014. Energy systems modeling for twenty-first century energy challenges. Renewable and Sustainable Energy Reviews 22, 74–86. doi:10.1016/j.rser.2014.02.003.
* Stigler et al. [2012] Stigler, H., et al., 2012. Gutachten zur Ermittlung des erforderlichen Netzausbaus im deutschen Übertragungsnetz: Gutachten im Auftrag der Bundesnetzagentur. Technical Report. URL: https://data.netzausbau.de/2022/NEP/NEMO_II.pdf.
* Stott et al. [2009] Stott, B., Jardim, J., Alsac, O., 2009. Dc power flow revisited. IEEE Trans. Power Syst. 24, 1290\. doi:10.1109/TPWRS.2009.2021235.
* Temraz et al. [1994] Temraz, H., Salama, M., Quintana, V., 1994. Application of partitioning techniques for decomposing large-scale electric power networks. International Journal of Electrical Power & Energy Systems 16, 301 – 309. URL: http://www.sciencedirect.com/science/article/pii/0142061594900345, doi:http://dx.doi.org/10.1016/0142-0615(94)90034-5.
* Tröndle et al. [2020] Tröndle, T., Lilliestam, J., Marelli, S., Pfenninger, S., 2020. Trade-Offs between Geographic Scale, Cost, and Infrastructure Requirements for Fully Renewable Electricity in Europe. Joule, 1–20. URL: https://doi.org/10.1016/j.joule.2020.07.018, doi:10.1016/j.joule.2020.07.018.
* Vartiainen et al. [2017] Vartiainen, E., Masson, G., Breyer, C., 2017. The True Competitiveness of Solar PV: A European Case Study. Technical Report. European Technology and Innovation Platform for Photovoltaics. URL: http://www.etip-pv.eu/fileadmin/Documents/ETIP_PV_Publications_2017-2018/LCOE_Report_March_2017.pdf.
* Wiegmans [2016] Wiegmans, B., 2016. GridKit extract of ENTSO-E interactive map. doi:10.5281/zenodo.55853.
# A Review on Deep Learning in UAV Remote Sensing
Lucas Prado Osco
University of Western São Paulo
Presidente Prudente, SP, Brazil
<EMAIL_ADDRESS>
José Marcato Junior
Federal University of Mato Grosso do Sul
Campo Grande, MS, Brazil
<EMAIL_ADDRESS>
Ana Paula Marques Ramos
University of Western São Paulo
Presidente Prudente, SP, Brazil
<EMAIL_ADDRESS>
Lúcio André de Castro Jorge
Brazilian Agricultural Research Agency
São Carlos, SP, Brazil
<EMAIL_ADDRESS>
Sarah Narges Fatholahi
University of Waterloo
Waterloo, ON, Canada
<EMAIL_ADDRESS>
Jonathan de Andrade Silva
Federal University of Mato Grosso do Sul
Campo Grande, MS, Brazil
<EMAIL_ADDRESS>
Edson Takashi Matsubara
Federal University of Mato Grosso do Sul
Campo Grande, MS, Brazil
<EMAIL_ADDRESS>
Hemerson Pistori
Catholic University of Dom Bosco
Campo Grande, MS, Brazil
<EMAIL_ADDRESS>
Wesley Nunes Gonçalves
Federal University of Mato Grosso do Sul
Campo Grande, MS, Brazil
<EMAIL_ADDRESS>
Jonathan Li
University of Waterloo
Waterloo, ON, Canada
<EMAIL_ADDRESS>
corresponding author
(January, 2021)
###### Abstract
Deep Neural Networks (DNNs) learn representation from data with an impressive
capability, and brought important breakthroughs for processing images, time-
series, natural language, audio, video, and many others. In the remote sensing
field, surveys and literature revisions specifically involving DNNs
algorithms’ applications have been conducted in an attempt to summarize the
amount of information produced in its subfields. Recently, Unmanned Aerial
Vehicles (UAV) based applications have dominated aerial sensing research.
However, a literature review that combines both “deep learning” and “UAV
remote sensing” themes has not yet been conducted. The motivation for our
work was to present a comprehensive review of the fundamentals of Deep
Learning (DL) applied in UAV-based imagery. We focused mainly on describing
classification and regression techniques used in recent applications with UAV-
acquired data. For that, a total of 232 papers published in international
scientific journal databases was examined. We gathered the published material
and evaluated their characteristics regarding application, sensor, and
technique used. We relate how DL presents promising results and has the
potential for processing tasks associated with UAV-based image data. Lastly,
we discuss future perspectives, commenting on prominent DL paths to be
explored in the UAV remote sensing field. Our review offers an accessible
approach to introducing, commenting on, and summarizing the state of the art in UAV-
based image applications with DNN algorithms in diverse subfields of remote
sensing, grouped into environmental, urban, and agricultural contexts.
_Keywords_ convolutional neural networks $\cdot$ remote sensing imagery
$\cdot$ unmanned aerial vehicles
## 1 Introduction
For investigations using remote sensing image data, multiple processing tasks
depend on computer vision algorithms. In the past decade, applications
conducted with statistical and Machine Learning (ML) algorithms were mainly
used in classification/regression tasks. The proliferation of remote sensing
systems allowed a wide collection of data from any target on the Earth’s
surface. Aerial imaging has become a common approach to acquiring data with
the advent of Unmanned Aerial Vehicles (UAV). These are also known as Remotely
Piloted Aircraft (RPA) or, in popular terms, drones (multi-rotor, fixed
wings, hybrid, etc.). These devices have grown in market availability thanks to their
relatively low cost and high operational capability to capture images quickly
and easily. The high spatial resolution of UAV-based imagery and
its capacity for frequent revisits have allowed the creation of large and detailed
datasets.
Surface mapping with UAV platforms presents some advantages compared to
orbital and other aerial sensing methods of acquisition. Less atmospheric
interference, the possibility to fly within lower altitudes, and mainly, the
low operational cost have made this acquisition system popular in both
commercial and scientific explorations. However, the visual inspection of
multiple objects can still be a time-consuming, biased, and inaccurate
operation. Currently, the real challenge in remote sensing approaches is to
obtain automatic, rapid and accurate information from this type of data. In
recent years, the advent of Deep Learning (DL) techniques has offered robust
and intelligent methods to improve the mapping of the Earth’s surface.
DL is an Artificial Neural Network (ANN) approach with multiple hidden layers
and deeper combinations of them, which can learn
better representations than a common ANN. There is an impressive amount of
review material in scientific journals explaining DL-based techniques,
their historical evolution and general usage, as well as detailing networks and
functions. However, these are not the main concerns of this paper, and we just
briefly explain the information necessary to assist the reader in the subject
before diving into its applications. For those interested in an in-depth
approach, we recommend both Lecun’s paper (Lecun et al., 2015) and
Goodfellow’s book (Goodfellow et al., 2016).
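As a minimal illustration of the “multiple hidden layers” idea, the sketch below passes data through a small stack of randomly initialized dense layers with ReLU activations; a real DNN would of course learn these weights from labeled data rather than use random ones.

```python
# Forward pass through a deep stack of dense layers: each layer applies a
# linear map followed by a ReLU non-linearity. Depth (here two hidden
# layers) is what distinguishes a DNN from a shallow ANN.
import numpy as np

rng = np.random.default_rng(42)
layer_sizes = [8, 16, 16, 4]            # input, two hidden layers, output

def forward(x, sizes, rng):
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_out))
        x = np.maximum(0.0, x @ W)      # ReLU activation
    return x

out = forward(rng.normal(size=(3, 8)), layer_sizes, rng)
print(out.shape)                        # -> (3, 4)
```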
As computing power and labeled examples (i.e. samples) became more
available in recent years, the performance of Deep Neural Networks (DNNs)
in image-processing applications increased. DNNs have been successfully
applied in data-driven methods. However, much needs to be covered to truly
understand its potential, as well as its limitations. In this regard, several
surveys on the application of DL in remote sensing were developed in both
general and specific contexts to better explain its importance.
The contexts in which remote sensing literature surveys are presented are
varied. Zhang et al. (Zhang et al., 2016) organized review material
which explains how DL methods were being applied, at the time, to image
classification tasks. Later on, Cheng et al. (Cheng and Han, 2016)
investigated object detection in optical images, but focused more on the
traditional ANN and ML. A more complete and systematic review was presented by
Ball et al. (Ball et al., 2017) in a survey describing DL theories, tools, and
its challenges in dealing with remote sensing data. This work should serve as
an introductory approach to the theme for first-time readers. Cheng et al.
(Cheng et al., 2017) produced a review of image classification with examples
from their own experiments. Also, focusing on classification, Zhu et al.
(Zhu et al., 2017) summarized most of the current information to understand
the DL methods used for this task.
Still within the literature review theme, a survey performed by Li et al. (Li et
al., 2018) helped to understand some DL applications regarding the overall
performance of DNNs on publicly available datasets for the image classification
task. Yao et al. (Yao et al., 2018) stated in their survey that DL will become
the dominant method of image classification in the remote sensing community.
Although DL does provide promising results, many observations and examinations
are still required. Interestingly enough, at this time, multiple remote
sensing applications using hyperspectral data were in process, which gained
attention as a literature revision subject. In Petersson et al. (Petersson et
al., 2017), probably one of the first surveys on hyperspectral data was
performed. A comparative review by Audebert et al. (Audebert et al., 2019) was
conducted by examining various families of networks’ architectures while
providing a toolbox to perform such methods to be publicly available. In this
regard, another paper written by Paoletti et al. (Paoletti et al., 2019)
organized the source code of DNNs to be easily reproduced. Similar to (Cheng
et al., 2017), Li et al. (Li et al., 2019) conducted a literature revision
while presenting an experimental analysis with DNNs’ methods.
More recently, literature reviews have focused on more specific approaches
within this theme. Some covered DL methods for the enhancement of remote
sensing observations, such as super-resolution, denoising, restoration, pan-
sharpening, and image fusion techniques, as demonstrated by Tsagkatakis et al.
(Tsagkatakis et al., 2019). Also, a recent meta-analysis by Ma et al. (Ma et
al., 2019) examined the usage of DL algorithms in seven subfields of remote
sensing: image fusion and image registration, scene classification, object
detection, land use and land cover classification, semantic segmentation, and
object-based image analysis (OBIA). Although various remote sensing
applications using DL can be identified in these recent reviews, it should be
noted that their authors did not focus specifically on DL algorithms applied to
UAV-image sets, a topic that, at the time of writing, has gained the attention
of remote sensing investigations.
Another interesting take on DL-based methods was related to image segmentation
in a survey by Hossain et al. (Hossain and Chen, 2019), a theme later expanded
by Yuan et al. (Yuan et al., 2021) to include state-of-the-art algorithms. A
summarized analysis by Zheng et al. (Zheng et al., 2020) focused on object
detection approaches for remote sensing images, indicating some of the
challenges related to detection with few labeled samples, multi-scale issues,
network structure problems, and cross-domain detection difficulties. In more
niche research directions, environmental applications and land surface change
detection were investigated in review papers by Yuan et al. (Yuan et al., 2020)
and Khelifi et al. (Khelifi and Mignotte, 2020), respectively.
The aforementioned studies were evaluated with a text processing method that
returned a word-cloud in which the word size denotes the frequency of the word
within these papers (Fig. 1). An interesting observation regarding this word-
cloud is that the term “UAV” is underrepresented or absent altogether. This
review gap is a problem, since UAV image data is produced daily in large
amounts, and no scientific investigation appears to offer a comprehensive
literature review to assist new research on this matter. In the UAV context,
there are some review papers published in important scientific journals of the
remote sensing community. Recently, a review-survey (Bithas et al., 2019)
focused on the implications of ML methods applied to UAV image processing, but
no investigation was conducted on DL algorithms for this particular issue. This
is an important theme, especially since UAV platforms are more easily available
to the public and DL-based methods are being tested to provide accurate mapping
in highly detailed imagery.
Figure 1: Word-cloud of different literature-revision papers related to the
“remote sensing” and “deep learning” themes.
As mentioned, UAVs offer flexibility in data collection, as flights are
programmed on users’ demand; they are low-cost compared to other platforms that
offer images of similar spatial resolution; they produce a high level of detail
in the collected data; they present dynamic data characteristics, since it is
possible to embed RGB, multispectral, hyperspectral, thermal, and LiDAR sensors
on them; and they are capable of gathering data from difficult-to-access
places. Aside from that, sensors embedded in UAVs are known to generate data at
different altitudes and points of view. These characteristics, among others,
are known to produce a higher dynamic range of images than common sensing
systems. This ensures that the same object is viewed from different angles,
which affects not only its spatial and spectral information but also its form,
texture, pattern, geometry, illumination, etc. This becomes a challenge for
multi-domain detection. As such, studies indicate that DL is the most prominent
solution for dealing with these difficulties. These studies, most of which are
presented in this review, were conducted within a series of data criteria and
evaluated DL architectures in classifying, detecting, and segmenting various
objects from UAV scenes.
To the best of our knowledge, there is a literature gap regarding review
articles combining both the “deep learning” and “UAV remote sensing” themes.
A survey on this combination is important to summarize the direction of DL
applications in the remote sensing community, particularly those related to
UAV imagery. The purpose of this study is to provide a brief review of DL
methods and their applications to solve classification, object detection, and
semantic segmentation problems in the remote sensing field. Herein, we discuss
the fundamentals of DL architectures, including recent proposals. There is no
intention of summarizing all of the existing literature, but rather to present
an examination of DL models while offering the information necessary to
understand the current state of the art. Our review highlights the traits of
UAV-based image data, their applications, sensor types, and techniques used in
recent approaches in the remote sensing field. Additionally, we relate how DL
models present promising results and project future perspectives of prominent
paths to be explored. In short, this paper brings the following contributions:
1. A presentation of the fundamental ideas behind DL models, including
classification, object detection, and semantic segmentation approaches, as
well as the application of these concepts to UAV-image-based mapping tasks;
2. An examination of published material in scientific sources regarding sensor
types and applications, categorized into environmental, urban, and
agricultural mapping contexts;
3. An organization of publicly available datasets from previous research
conducted with UAV-acquired data, labeled for both object detection and
segmentation tasks;
4. A description of the challenges and future perspectives of DL-based methods
applied to UAV-based image data.
## 2 Deep Neural Networks Overview
DNNs are based on neural networks which are composed of neurons (or units)
with certain activation and parameters that transform input data (e.g., UAV
remote-sensing image) to outputs (e.g., land use and land cover maps) while
progressively learning higher-level features (Ma et al., 2019; Schmidhuber,
2015). This progressive feature learning occurs, among others, on layers
between the input and the output, which are referred to as hidden layers (Ma
et al., 2019). DNNs in their most traditional form (i.e., with 2 or more
hidden layers) are considered a DL method. Their concept, an Artificial
Intelligence (AI) approach modeled after the connections of biological
neurons, has existed since the 1950s. But only later, with advances in
computer hardware and the availability of a high number of labeled examples,
did interest in them resurge in major scientific fields. In the remote sensing
community, interest in DL algorithms has been growing since the mid-2010s,
specifically because these algorithms achieved significant success at digital
image processing tasks (Ma et al., 2019; Khan et al., 2020).
A DNN works similarly to an ANN in the sense that, as a supervised algorithm,
it uses a given number of input features for training, and these feature
observations are combined through multiple operations, with a final layer used
to return the desired prediction. Still, this explanation does not do much to
highlight the differences between traditional ANNs and DNNs. LeCun et al.
(Lecun et al., 2015), in one of the most cited articles in the DL literature,
define DNNs as follows: “Deep-learning methods are representation-learning
methods with multiple levels of representation”. Representation learning is a
key concept in DL. It allows DL algorithms to be fed with raw data, usually
unstructured data such as images, texts, and videos, and to automatically
discover representations.
The most common DNNs (Fig. 2) are generally composed of dense layers, in which
activation functions are implemented. Activation functions compute the
weighted sum of inputs and biases, which is used to decide whether a neuron is
activated or not (Nwankpa et al., 2018). These functions constitute decision
functions that help in learning intrinsic patterns (Khan et al., 2020); i.e.,
they are one of the main aspects of how each neuron learns from its
interaction with the other neurons. Commonly applied activation functions
include linear, sigmoid, tanh, max-out, the Rectified Linear Unit (ReLU), and
variants of ReLU, including leaky ReLU, the Exponential Linear Unit (ELU), and
the Parametric Rectified Linear Unit (PReLU) (Khan et al., 2020). A piecewise
linear function, ReLU outputs 0 for all negative values of its input. This
function is, at the time of writing, the most popular in current DNN models.
There are several reasons for that: it is less computationally expensive than
most alternatives, deals well with the vanishing gradient problem (Nwankpa et
al., 2018), leads to sparser representations of the data, and, as described in
recent literature (Naitzat et al., 2020), has the ability to change data
topology. Another recently explored activation function is Mish, a self-
regularized non-monotonic function, which is returning interesting outcomes
(Khan et al., 2020) as more investigations are conducted.
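As an illustration of the functions above, a minimal NumPy sketch (the default slope for leaky ReLU is one common convention, not a prescription; DL frameworks ship their own tuned implementations):

```python
import numpy as np

def relu(x):
    # ReLU outputs 0 for all negative inputs and the identity otherwise
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU keeps a small slope (alpha) for negative inputs
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # ELU smoothly saturates to -alpha for large negative inputs
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def mish(x):
    # Mish: x * tanh(softplus(x)), a self-regularized non-monotonic function
    return x * np.tanh(np.log1p(np.exp(x)))
```

Unlike sigmoid or tanh, ReLU's gradient does not shrink for large positive inputs, which is one reason it mitigates the vanishing gradient problem.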
Figure 2: A DNN architecture. This is a simple example of how a DNN may be
built. Here the initial layer (Xinput) is composed of the collected data
samples. The hidden layers then extract information from this data, with their
weights adjusted by back-propagation, and subsequent hidden layers use it to
learn these features’ characteristics. In the end, a final layer with an
activation function suited to the given problem (classification or regression,
for example) returns a prediction outcome (Ylabel).
Aside from the activation function, another important aspect of how a DNN
works is its layers, such as dropout, batch-normalization, convolution,
deconvolution, max-pooling, encoder-decoder, memory cells, and others. For
now, we focus on dropout and batch-normalization layers, as the remaining ones
are mentioned further on. Dropout layers are important to introduce
regularization within the network, since they randomly choose to “drop”
connections and units with a given probability. This not only helps to reduce
overfitting by removing co-adapted connections, but also improves
generalization and contributes to optimized and faster learning rates (Khan et
al., 2020; Hinton et al., 2012). The batch-normalization layer acts as a
regulating factor and smooths the flow of the loss gradient, which also
improves generalization. This layer is regularly used to solve issues with
covariance shift within feature maps (Khan et al., 2020). The organization in
which these and the other layers are composed, as well as their parameters, is
one of the main aspects of an architecture.
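The behavior of these two layers can be sketched at the array level. A simplified illustration (inference-time batch normalization tracks running statistics, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    # Inverted dropout: randomly zero units with probability p during training,
    # scaling the survivors by 1/(1-p) so the expected activation is unchanged.
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the batch axis, then rescale and shift
    # with the learnable parameters gamma and beta.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

At inference time dropout is disabled (`training=False`), which is why the survivors are rescaled during training rather than the outputs at test time.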
When compiling a model to be trained, some basic information is also needed.
One piece is the optimizer that will be used to update the model parameters.
Some of the most used methods are Adam, the momentum algorithm, Stochastic
Gradient Descent (SGD), and Root Mean Squared Propagation (RMSprop). There are
several optimizers, and the correct choice, according to the model and its
objective, can help in optimizing accuracies. SGD is the simplest method,
shifting the parameters towards the optimum of the cost function by computing
the gradient on one example (or mini-batch) per step. Momentum tries to solve
the issue of getting stuck at a local minimum by adding a temporal component.
RMSprop, a gradient-based optimization technique, implements an exponentially
decaying average of the gradients, combining ideas from both momentum and
another algorithm known as the Adaptive Gradient Algorithm (AdaGrad). Adam is
currently the most used option, and its popularity is due to its ability to
use both momentum and adaptive learning rates. A more detailed discussion of
this topic is presented in both (Ruder, 2017) and (Khan et al., 2020). The
optimizer is an important aspect of the DL network and, combined with the
correct loss function, can influence its accuracy.
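The update rules discussed above can be written compactly. A minimal sketch of single-parameter SGD, momentum, and Adam steps (the hyperparameter defaults are the commonly cited ones, not a prescription):

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    # Plain SGD: step against the gradient
    return w - lr * grad

def momentum_step(w, v, grad, lr=0.01, beta=0.9):
    # Momentum accumulates an exponentially decaying velocity,
    # which helps to roll past shallow local minima
    v = beta * v + grad
    return w - lr * v, v

def adam_step(w, m, v, grad, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Adam combines a momentum term (m) with a per-parameter
    # adaptive scale (v); t is the 1-based step counter
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)   # bias correction for the zero-initialized moments
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

In practice the optimizer state (`v`, `m`) is stored per parameter tensor and carried across steps by the framework.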
In the optimization context, the function defined to evaluate the model is
known as a loss function (also called an objective or cost function). This
function condenses the ability of the model to represent the training data
into a single scalar value. With this reduction, the learning problem becomes
that of finding ways to adjust the model’s parameters so as to minimize the
loss function. This allows possible solutions to be ranked and then compared
across the neuron interactions (Goodfellow et al., 2016). The choice of loss
function depends on the nature of the problem itself; i.e., whether the
network is dealing with a classification or a regression problem. For
classification, one may use probabilistic losses such as cross-entropy
(binary, categorical, and sparse-categorical), Poisson, and Kullback-Leibler
(KL) divergence, among others. For regression-related problems, losses based
on Mean Squared Error (MSE), Mean Absolute Error (MAE), Mean Absolute
Percentage Error (MAPE), Mean Squared Logarithmic Error (MSLE), etc. are
commonly implemented. A detailed treatment of loss functions can be read in
the (Goodfellow et al., 2016) material.
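A few of the losses named above, sketched in NumPy in their textbook forms (framework implementations add further numerical-stability refinements):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Probabilistic loss for binary classification; clipping avoids log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def mse(y_true, y_pred):
    # Mean Squared Error, the standard regression loss
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # Mean Absolute Error: less sensitive to outliers than MSE
    return np.mean(np.abs(y_true - y_pred))
```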
For evaluating a DNN’s performance, different metrics have been adopted
(Minaee et al., 2020a), and specialists often rely on the same
classification/regression division. For classification, although accuracy (or
recall, or sensitivity) is a commonly used parameter, metrics like precision,
F-measure (or F-score), the area under the Receiver Operating Characteristics
(ROC) curve, and the Intersection over Union (IoU) are also preferred to judge
the performance of a network. Another common metric is the Kappa coefficient,
but it should be avoided, as explained in recent publications in the remote
sensing area (Foody, 2020). For regression-related problems, metrics like MSE,
MAE, Mean Relative Error (MRE), Root Mean Squared Error (RMSE), and the
Correlation Coefficient (r) are used. These metrics are important to establish
a relationship between predictions and labeled examples (or ground truth in
some cases) and are necessary when comparing one model against another (Minaee
et al., 2020a). Although regression is not as common as classification in the
analysis of remote sensing data, we discuss UAV-based applications in both
situations (classification and regression problems) in the subsequent
sections.
Multiple types of architectures have been proposed in recent years to improve
and optimize DNNs by implementing different kinds of layers, optimizers, loss
functions, depth levels, etc. However, it is well known that one of the major
reasons behind DNNs’ popularity today is also the high amount of available
data to learn from. A rule of thumb among data scientists indicates that at
least 5,000 labeled examples per category are recommended (Goodfellow et al.,
2016). But, as of today, many DNN proposals have focused on improving these
networks’ capacity to predict features with fewer examples than that. Some
specifically oriented applications may benefit from this, as it reduces the
amount of labor required for sample collection by human inspection. Even so,
it should be noted that, while this pursuit is being conducted, multiple
approaches are explored by the computer vision community, and novel research
includes methods for data augmentation, self-supervision, and unsupervised
learning strategies, among others. A detailed discussion of this matter is
presented in (Khan et al., 2020), but we briefly discuss some of these by the
end of our review.
### 2.1 Convolutional and Recurrent Neural Networks
A DNN may be formed by different architectures, and the complexity of the
model is related to how each layer and additional computational method is
implemented. Different DL architectures are proposed regularly, such as
Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and
Deep Belief Networks (DBNs) (Ball et al., 2017) and, more recently, Generative
Adversarial Networks (GANs) (Goodfellow et al., 2016). However, the most
common DNNs in the supervised-network category are usually classified as CNNs
and RNNs (Khan et al., 2020).
For image processing and object recognition tasks, the majority of current
research is focused on CNN architectures. CNNs are well known in computer
vision but did not always receive the attention they do today. Although
studies envisaged that CNN architectures would offer a high potential to
classify images, it was only in 2012, when Krizhevsky et al. (Krizhevsky et
al., 2012) demonstrated a method that won an image classification competition
by a large margin, that others became interested in CNNs for image processing.
The network, which came to be known as AlexNet, was built with 8 layers: the 5
initial layers were all convolutional, some followed by max-pooling layers,
finished by 3 fully-connected layers, all using the ReLU activation function
(Khan et al., 2020). The success of this method, now considered a simple DL
network, was associated with its depth.
CNNs (Fig. 3) are a type of architecture composed mainly of three distinct
hierarchical structures: convolution layers, pooling layers, and fully
connected layers (Ma et al., 2019). They have a large number of parameters,
such as weights, biases, the number of layers and neurons, filter size,
stride, activation function, learning rate, etc. (Khan et al., 2020). At each
layer, the input image is convolved with a set of kernels (i.e., filters) and
added biases, generating feature maps (Ma et al., 2019). The convolution
operation considers the neighborhood of input pixels, so different levels of
correlation can be explored according to the filter sizes (Khan et al., 2020).
CNNs were originally designed to process data in the form of multiple arrays,
and this trait is particularly well-suited to multiband remote-sensing images,
since pixels are arranged regularly. As a result, this architecture is
considered one of the most popular DNN models today (Ma et al., 2019), and its
success has been demonstrated in several UAV-based image applications.
Figure 3: A CNN type of architecture with convolution and deconvolution
layers. This example architecture is formed by convolutional layers, with a
dropout layer added between each conv layer and a max-pooling layer adopted
each time the convolution window size is decreased. At the end, a
deconvolutional layer of the same size as the last convolutional layer uses
information from the previous step to reconstruct the image at its original
size. The final layer is a softmax layer, which returns the model’s
predictions.
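The convolution and pooling operations underlying such architectures can be sketched directly. A naive, unoptimized NumPy illustration of a valid 2-D convolution (no padding, stride 1) and non-overlapping max-pooling:

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image and sum the element-wise
    # products at each position, producing a feature map.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool2d(x, size=2):
    # Non-overlapping max-pooling reduces the spatial resolution by `size`
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))
```

Real CNN layers additionally handle multiple input/output channels, padding, strides, and bias terms, and are implemented with far more efficient kernels.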
As a different kind of DL network structure, RNNs constitute another
supervised learning model. Although RNNs have been used for a while in other
computer vision tasks, only later were they proposed for use with remote
sensing data. The RNN model was originally developed to deal with discrete
sequence analysis (Ma et al., 2019). The main idea behind implementing RNNs
regards their capability to improve their learning over repeated observations
of a given phenomenon or object, often associated with a time-series
collection. A type of RNN currently implemented in multiple tasks is the Long
Short-Term Memory (LSTM). LSTMs are an interesting choice for time-series-
related predictions, as they solve the vanishing gradient problem of the
original RNNs. To do so, they use additional additive components, allowing the
gradients to flow through the network more efficiently (Hochreiter and
Schmidhuber, 1997). An LSTM unit is normally composed of a cell, along with
input, output, and forget gates. As the cell “remembers” values over arbitrary
time intervals, these three gates regulate the flow of information into and
out of the cell.
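The gate mechanism just described can be written as a single time step. A minimal sketch, with the four gates stacked into one matrix multiplication (the parameter shapes follow one common convention; framework implementations differ in layout and ordering):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # One LSTM time step. W (4H x D), U (4H x H), and b (4H,) stack the
    # parameters of the four gates: input i, forget f, candidate g, output o.
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b           # pre-activations for all gates
    i = sigmoid(z[0:H])                  # input gate
    f = sigmoid(z[H:2*H])                # forget gate
    g = np.tanh(z[2*H:3*H])              # candidate cell state
    o = sigmoid(z[3*H:4*H])              # output gate
    c = f * c_prev + i * g               # additive cell update (eases gradient flow)
    h = o * np.tanh(c)                   # new hidden state
    return h, c
```

The additive form of the cell update `c = f * c_prev + i * g` is the component that lets gradients flow across many time steps without vanishing.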
In the remote sensing field, RNN models have been applied to time-series
analysis tasks, aiming to produce, for example, land cover maps (Ienco et al.,
2017; Ho Tong Minh et al., 2018). In a pixel-based time-series analysis aiming
to discriminate classes of winter vegetation coverage using SAR Sentinel-1
data (Ho Tong Minh et al., 2018), it was verified that RNN models outperformed
classical ML approaches. A recent approach for accurate vegetation mapping
(Feng et al., 2020) combined a multi-scale CNN, used to extract spatial
features from UAV-RGB imagery, with an attention-based RNN fed by those
features to establish the sequential dependency between multi-temporal
features. The aggregated spatial-temporal features are then used to predict
the vegetation category. Such examples with remote sensing data demonstrate
the potential for which RNNs are being used. One prominent type of
architecture is the CNN-LSTM method (Fig. 4). This network uses convolutional
layers to extract important features from the given input image and then feeds
the LSTM. Although few studies have implemented this type of network, it
should be noted that it serves specific purposes, and its usage can be
valuable, for example, in multitemporal applications.
Figure 4: An example of a neural network based on the CNN-LSTM type of
architecture. The input image is processed with convolutional layers, and a
max-pooling layer is used to pass the information to the LSTM. Each memory
cell is updated with weights from the previous cell. After this process, a
flatten layer may be used to transform the data into an arrangement readable
by a dense (fully-connected) layer, which returns, for instance, a
classification prediction.
As aforementioned, other types of neural networks, aside from CNNs and RNNs,
are currently being proposed to deal with image data. GANs are among the most
innovative unsupervised DL models. GANs are composed of two networks, a
generative one and a discriminative one, that contest with each other. The
generative network is responsible for producing samples from a particular data
distribution of interest, like images, while the discriminative network
distinguishes between real data (reference or ground-truth data) and the data
produced by the generative part of the GAN (fake data) (Goodfellow et al.,
2014; Ma et al., 2019). Recent approaches in the image processing context,
like the classification of remote sensing images (Lin et al., 2017a) and
solutions to image-to-image translation problems (Isola et al., 2018), have
adopted GANs as DL models, obtaining successful results.
In short, several DNNs are constantly developed, on both scientific and image-
competition platforms, to surpass existing methods. However, as each year
passes, some of these neural networks are often mentioned, remembered, or even
improved by novel approaches. A summary of well-known DL methods built in
recent years is presented in Fig. 5. A detailed take on this, which we
recommend to anyone interested, is found in Khan et al. (Khan et al., 2020).
Alongside the creation and development of these and other networks,
researchers have observed that higher depth, channel exploration, and, as
recently proposed, attention-based feature extraction are regarded as some of
the most prominent directions for DL.
Figure 5: A DL timeline indicating some popular architectures implemented in
image classification (yellowish color), object detection (greenish color), and
segmentation (bluish color). These networks often intertwine, and many
adaptations have been proposed for them. Although it may appear that most of
the DL methods were developed during the 2015-2017 years, it is important to
note that many novel deep networks use the already developed methods as
backbones, or combine them with other types of architectures, mainly as the
feature-extraction part of a much more complex structure.
Initially, most of the proposed supervised DNNs, like CNN, RNN, or CNN-LSTM
models, were created to deal with specific issues. Often, these approaches can
be grouped into classification tasks, like scene-wise classification, object
detection, semantic and instance segmentation (pixel-wise), and regression
tasks. Here, we aim to summarize them comprehensively, as shown in the next
subsections. What follows is a short description of how these approaches are
being used in image-related tasks and how they are capable of overcoming some
of the challenges faced by previous methods.
### 2.2 Classification and Regression Approaches
When considering remote sensing data processed with DL-based algorithms, the
following tasks can be highlighted: scene-wise classification, semantic and
instance segmentation, and object detection. Scene-wise classification
involves assigning a class label to each image (or patch), while the object
detection task aims to draw bounding boxes around objects in an image (or
patch) and labeling each of them according to the class label. Object
detection can be considered a more challenging task since it requires to
locate the objects in the image and then perform their classification. Another
manner to detect objects in an image, instead of drawing bounding boxes, is to
draw regions or structures around the boundary of objects, i.e., distinguish
the class of the object at the pixel level. This task is known as semantic
segmentation. However, in semantic segmentation, it is not possible to
distinguish multiple objects of the same category, as each pixel receives one
class label (Wu et al., 2020a). To overcome this drawback, a task that
combines semantic segmentation and object detection named instance
segmentation was proposed to detect multiple objects in pixel-level mask and
labeling each mask into a class label (Sharma and Mir, 2020).
To produce a deep-regression approach, the model needs to be adapted so that
the last fully-connected layer of the architecture deals with a regression
problem instead of a common classification one. With this adaptation,
continuous values are estimated, differently from classification tasks. In
comparison to classification, regression tasks using DL are not often
encountered; however, recent publications have shown their potential in remote
sensing applications. One approach (Lathuilière et al., 2020) performed a
comprehensive analysis of deep regression methods and pointed out that well-
known fine-tuned networks, like VGG-16 (Simonyan and Zisserman, 2015) and
ResNet-50 (He et al., 2016), can provide interesting results. These methods,
however, are normally developed for specific applications, which is a drawback
for general-purpose solutions. Another important point is that, depending on
the application, deep regression does not always succeed. A common strategy is
to discretize the output space and treat the task as a classification problem.
For UAV remote sensing applications, the strategy of using well-known networks
is generally adopted. Not only VGG-16 and ResNet-50, as investigated by
(Lathuilière et al., 2020), but also other networks, including AlexNet
(Krizhevsky et al., 2012) and VGG-11, have been used. An important issue that
could be investigated in future research, depending on the application, is the
optimizer. Algorithms with adaptive learning rates, such as AdaGrad, RMSProp,
AdaDelta (an extension of AdaGrad), and Adam, are among the commonly used.
#### 2.2.1 Scene-Wise Classification, Object Detection, and Segmentation
Scene-wise classification, or scene recognition, refers to methods that
associate a label/theme with an entire image (or patch), such as agricultural
scenes, beach scenes, urban scenes, and others (Zou et al., 2015; Ma et al.,
2019). Basic DNN methods were developed for this task, and they are among the
most common networks for traditional image recognition tasks. In remote
sensing applications, scene-wise classification is not usually applied.
Instead, most applications benefit more from object detection and pixel-wise
semantic segmentation approaches. Scene-wise classification needs only the
annotation of the class label of the image, while tasks like object detection
need a bounding box drawn for every object in an image, which makes labeled
datasets more costly to build. For instance or semantic segmentation, the
specialist (i.e., the person who performs the annotation or object labeling)
needs to draw a mask covering every pixel of the object, which requires more
attention and precision in the annotation task, reducing even more the
availability of datasets. Fig. 6 shows examples of both annotation approaches
(object detection and instance segmentation).
Figure 6: Labeled examples. The first row consists of a label example of a
bounding-box object detection approach to identify individual tree species in
an urban environment. The second row is a labeled example of instance
segmentation to detect rooftops in the same environment.
Object detection methods can be divided into two mainstream categories: one-
stage detectors (or regression-based methods) and two-stage detectors (or
region-proposal-based methods) (Zhao et al., 2019; Liu et al., 2019; Wu et
al., 2020a). The usual two-stage object detection pipeline generates region
proposals (candidate rectangular bounding boxes) on the feature map, then
classifies each one into an object class label and refines the proposals with
a bounding box regression. A widely used strategy in the literature to
generate proposals was introduced with the Faster-RCNN algorithm and its
Region Proposal Network (RPN) (Zhao et al., 2019). Other state-of-the-art
representatives of such algorithms are Cascade-RCNN (Cai and Vasconcelos,
2018), Trident-Net (Li et al., 2019), Grid-RCNN (Lu et al., 2019), Dynamic-
RCNN (Zhang et al., 2020a), and DetectoRS (Qiao et al., 2020). One-stage
detectors, in turn, directly perform classification and detect the location of
objects without a region-proposal classification step. This reduced pipeline
gives the models a high detection speed but tends to reduce the accuracy of
the results. They are known as region-free detectors, since they typically use
cell-grid strategies to divide the image and predict the class label of each
cell. Besides that, some detectors may fit both the one-stage and two-stage
categories.
Object-detection-based methods can be described in terms of three components:
a) the backbone, responsible for extracting semantic features from images; b)
the neck, an intermediate component between the backbone and the head, used to
enrich the features obtained by the backbone; and c) the head, which performs
the detection and classification of the bounding boxes.
The backbone is a CNN that receives an image as input and outputs a feature
map describing the image with semantic features. In the DL literature, the
state-of-the-art comprises the following backbones: VGG (Simonyan and
Zisserman, 2015), ResNet (He et al., 2016), ResNeXt (Xie et al., 2017), HRNet
(Wang et al., 2020), RegNet (Radosavovic et al., 2020), Res2Net (Gao et al.,
2021), and ResNeSt (Zhang et al., 2020b). The neck combines, at several
scales, low-resolution, semantically strong features, capable of detecting
large objects, with high-resolution, semantically weak features, capable of
detecting small objects. This is done through the lateral and top-down
connections of the convolutional layers of the Feature Pyramid Network (FPN)
(Lin et al., 2017b) and its variants, such as PAFPN (Liu et al., 2018) and
NAS-FPN (Ghiasi et al., 2019). Although the FPN was originally designed for
two-stage methods, it can also be used in single-stage detectors by removing
the RPN and adding a classification subnet and a bounding-box regression
subnet. The head is responsible for detecting the objects, with a softmax
classification layer that produces probabilities for all classes and a
regression layer that predicts the offsets of the bounding-box positions
relative to the ground truth.
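The top-down pathway described above can be sketched in a few lines of pure Python, assuming integer feature maps represented as lists of lists: a coarse (low-resolution, semantically strong) map is upsampled and added element-wise to a finer lateral map. Real FPNs use learned 1×1 lateral convolutions and bilinear or nearest-neighbor upsampling on tensors; this toy version keeps only the structural idea.

```python
def upsample2x(fmap):
    """Nearest-neighbor 2x upsampling of a 2D feature map (list of lists)."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(2)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                     # duplicate each row
    return out

def topdown_merge(coarse, fine):
    """FPN-style merge: upsample the coarse map, add the lateral fine map."""
    up = upsample2x(coarse)
    return [[u + f for u, f in zip(ur, fr)] for ur, fr in zip(up, fine)]

# A 1x1 coarse map fused with a 2x2 fine map:
print(topdown_merge([[5]], [[1, 2], [3, 4]]))  # -> [[6, 7], [8, 9]]
```

Repeating this merge down the pyramid yields feature maps that are simultaneously high-resolution and semantically strong, which is what allows small objects to be detected.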
Despite the differences between object detectors (one- or two-stage), a
problem common to both is the large gap between positive samples (foreground)
and negative samples (background) during training, i.e., the class-imbalance
problem, which can degrade accuracy (Chen et al., 2020). In these detectors,
the candidate bounding boxes fall into two main classes: positive samples,
which match a ground-truth box according to some metric, and negative samples,
which do not. A non-max suppression filter can then be used to refine these
dense candidates by removing those that overlap the most promising ones. The
Libra-RCNN (Pang et al., 2019), ATSS (Zhang et al., 2019a), Guided Anchoring
(Wang et al., 2019), FSAF (Zhu et al., 2019a), PAA (Kim and Lee, 2020), GFL
(Li et al., 2020a), PISA (Cao et al., 2020), and VFNet (Zhang et al., 2020c)
detectors explore different sampling strategies and new loss functions to
improve the quality of the selected positive samples and reduce the weight of
the numerous negative samples.
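The matching metric is typically Intersection over Union (IoU), and greedy non-max suppression is the standard filter mentioned above. A minimal sketch of both, with an illustrative 0.5 threshold (specific detectors tune these thresholds differently):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def label_samples(candidates, ground_truth, pos_thr=0.5):
    """Mark each candidate positive (matches some ground-truth box by IoU)
    or negative; in practice negatives vastly outnumber positives."""
    return [any(iou(c, g) >= pos_thr for g in ground_truth)
            for c in candidates]

def nms(boxes, scores, thr=0.5):
    """Greedy non-max suppression: keep the best-scoring box, drop
    candidates overlapping it by more than `thr`, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thr for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # -> [0, 2]
```

The second box is suppressed because its IoU with the higher-scoring first box (81/119 ≈ 0.68) exceeds the threshold, illustrating how dense, overlapping candidates are reduced to the most promising ones.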
Another theme explored in the DL literature is the strategy used to encode the
bounding boxes, which influences the accuracy of one-stage detectors since
they do not use region proposal networks (Zhang et al., 2020c). In that report
(Zhang et al., 2020c), the authors represent the bounding boxes as a set of
representative key-points and find the extreme top, bottom, left, and right
points. CenterNet (Duan et al., 2019) detects the object center point instead
of using bounding boxes, while CornerNet (Law and Deng, 2020) estimates the
top-left and bottom-right corners of the objects. SABL (Wang et al., 2020)
uses a bucketing strategy to discretize the image horizontally and vertically
and estimates the offset of each side (bottom, top, left, and right). The
VFNet (Zhang et al., 2020c) method proposes a loss function and a star-shaped
bounding-box representation (described by nine sampling points) to improve
object localization.
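The two simplest encodings underlying these variants are corner-based (top-left and bottom-right points, as in CornerNet) and center-based (center point plus width and height, as in CenterNet). A minimal conversion between them:

```python
def corners_to_center(x1, y1, x2, y2):
    """Corner encoding (x1, y1, x2, y2) -> center encoding (cx, cy, w, h)."""
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)

def center_to_corners(cx, cy, w, h):
    """Center encoding (cx, cy, w, h) -> corner encoding (x1, y1, x2, y2)."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

print(corners_to_center(2, 2, 6, 10))  # -> (4.0, 6.0, 4, 8)
```

The choice of encoding determines what the network regresses (corner heatmaps, center heatmaps plus sizes, per-side offsets, etc.), which is why it affects one-stage accuracy.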
Semantic segmentation and instance segmentation approaches are generally
defined as pixel-level classification problems (Minaee et al., 2020b). The
main difference between them is that semantic segmentation can identify the
pixels belonging to each class but cannot distinguish individual objects of
the same class in the image, while instance segmentation identifies each
object separately but does not label amorphous regions such as the background.
For example, in an aerial urban image it may be problematic to identify, at
once, the location of the cars, trucks, and motorcycles and the asphalt
pavement, which constitutes the background region in which the other objects
are located. To unify these two approaches, a method named panoptic
segmentation was recently proposed (Kirillov et al., 2019). With panoptic
segmentation, pixels contained in uncountable regions (e.g., background)
receive a specific value indicating it.
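One common way to realize this unification, used in several panoptic pipelines (this divisor-based packing is a widespread convention, not necessarily the exact scheme of the cited work), is to pack a class id and an instance id into a single per-pixel label, with instance id 0 reserved for uncountable "stuff" regions:

```python
LABEL_DIVISOR = 1000  # illustrative convention: class_id * 1000 + instance_id

def encode_panoptic(class_id, instance_id):
    """Pack a (class, instance) pair into a single per-pixel id.
    "Stuff" regions (e.g. background/pavement) use instance_id 0."""
    return class_id * LABEL_DIVISOR + instance_id

def decode_panoptic(panoptic_id):
    """Recover (class_id, instance_id) from a packed per-pixel id."""
    return divmod(panoptic_id, LABEL_DIVISOR)

# car class 3, instance 7 -> 3007; pavement class 12 (stuff) -> 12000
print(encode_panoptic(3, 7), encode_panoptic(12, 0))
```

Decoding such a label map recovers both the semantic answer (which class) and the instance answer (which object), which is exactly what neither semantic nor instance segmentation provides alone.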
Given the success of the RPN for object detection, some variants of Faster
R-CNN were adapted to instance segmentation, such as Mask R-CNN (He et al.,
2017), which adds, in parallel to the bounding-box regression branch, a new
branch to predict the mask of the objects (mask generation). The Cascade Mask
R-CNN (Cai and Vasconcelos, 2019) and HTC (Chen et al., 2019) extend Mask
R-CNN to refine object localization and mask estimation in a cascade manner.
PointRend (Kirillov et al., 2020) is a point-based method that reformulates
the mask-generation branch as a rendering problem to iteratively select points
around the contour of the object. Regarding semantic segmentation, methods
like U-Net (Ronneberger et al., 2015), SegNet (Badrinarayanan et al., 2017),
DeepLabV3+ (Chen et al., 2018), and the Deep Dual-domain Convolutional Neural
Network (DDCN) (Nogueira et al., 2019) have also been regularly used and
adapted in recent remote sensing investigations (Nogueira et al., 2020).
Another important remote sensing approach currently being investigated is the
segmentation of objects from sparse annotations (Hua et al., 2021). As of
today, CGNet (Wu et al., 2020b) and DLNet (Yin et al., 2020) are considered
state-of-the-art methods for semantic segmentation.
## 3 Deep Learning in UAV Imagery
To identify works related to DL in UAV remote sensing applications, we
performed a search in the Web of Science (WOS) and Google Scholar databases.
WOS is one of the most respected scientific databases and hosts a high number
of scientific journals and publications. We conducted the WOS search using the
following string: (“TS = ((deep learning OR CNN OR convolutional
neural network) AND (UAV OR unmanned aerial vehicle OR drone OR RPAS) AND
(remote sensing OR photogrammetry)) AND LANGUAGE: (English) AND Types of
Document: (Article OR Book OR Book Chapter OR Book Review OR Letter OR
Proceedings Paper OR Review); Indexes=SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-
SSH, ESCI. Stipulated-time=every-years.”). We considered DL, but added CNN, as
it is one of the main DL-based architectures used in remote sensing
applications (Ma et al., 2019).
We filtered the results to consider only papers that implemented approaches
with UAV-based systems. A total of 192 papers were found in the WOS database:
136 articles, 46 proceedings, and 10 reviews. An additional search was
conducted in the Google Scholar database, with the same combination of
keywords, to identify works not detected in the WOS. We evaluated its results
in detail and selected only the papers that, although from respected journals,
had not been returned by the WOS search. This added 34 articles, 16
proceedings, and 8 reviews. The entire dataset was thus composed of 232
articles and proceedings plus 18 reviews from scientific journals indexed in
those databases. These papers were then organized and revised. Fig. 7
illustrates the main steps of this mapping. All retrieved publications date
from the last five years (2016 to 2021), which indicates how recent the
integration of UAV-based approaches with DL methods is in the scientific
literature.
Figure 7: The schematic procedure adopted to organize the revised material
according to the respective categories proposed in this review.
The review articles gathered from those databases were separated and mostly
used in the word-cloud analysis of Fig. 1, while the remaining papers
(articles and proceedings) were organized by category. A total of 283,785
words were analyzed for the word cloud; we removed words with less than 5%
occurrence, to cut rarely used words unrelated to the theme, and words with
more than 95% occurrence, to remove plain, common words of the English
language. The published articles and proceedings were divided in terms of
DL-based task (classification, comprising scene-wise classification,
segmentation, and object detection; and regression), type of sensor used
(RGB, multispectral, hyperspectral, and LiDAR), and application
(environmental, urban, and agricultural contexts). In a subsequent section, we
also provide datasets from previously conducted research for further
investigation by novel studies. These datasets were organized and their
characteristics summarized accordingly.
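The frequency cutoffs used for the word cloud can be sketched as follows. The exact definition of "occurrence" in our pipeline is relative frequency over the whole corpus; the function name and thresholds below are illustrative:

```python
from collections import Counter

def filter_for_wordcloud(words, low=0.05, high=0.95):
    """Keep only words whose relative frequency lies between the low
    and high cutoffs, dropping both rare noise and ubiquitous stopwords.
    (Illustrative sketch; the exact 'occurrence' definition is an assumption.)"""
    counts = Counter(words)
    total = len(words)
    return {w: c for w, c in counts.items() if low <= c / total <= high}

corpus = ["uav"] * 15 + ["dl"] * 9 + ["rare"]
print(filter_for_wordcloud(corpus))  # -> {'uav': 15, 'dl': 9}
```

Here "rare" (1/25 = 4% of the corpus) falls below the 5% cutoff and is dropped, mirroring how lesser-used, off-theme words were excluded from the cloud.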
Most of our material came from peer-reviewed remote sensing journals (Fig. 8).
Even though the review articles found in the WOS and Google Scholar databases
mention UAV-based applications to some extent, none of them was dedicated to
the topic. Towards the end of our paper, we examined state-of-the-art
approaches such as real-time processing, data dimensionality reduction, domain
adaptation, attention-based mechanisms, few-shot learning, open-set,
semi-supervised and unsupervised learning, and others. This information
provided an overview of future opportunities and perspectives for DL methods
applied to UAV-based images, where we discussed the implications and
challenges of novel approaches.
Figure 8: The distribution of the evaluated scientific material according to
data gathered from the Web of Science (WOS) and Google Scholar databases. The
y-axis on the left represents the number (n) of published papers, illustrated
by solid-colored boxes. The y-axis on the right represents the number of
citations these publications have received since publication in peer-reviewed
scientific journals, illustrated by dashed lines of the same color as the
corresponding solid-colored box.
The 232 papers (articles and proceedings) were first investigated from a
quantitative perspective, in which we evaluated the number of occurrences per
journal, the number of citations, the year of publication, and the country
where each application was conducted. We also organized a sample of the papers
according to the corresponding categories explained above, identifying
characteristics such as the architecture used, the evaluation metric, the task
conducted, the type of sensor, and the mapping context. We then adopted a
qualitative approach, revising and presenting some of the applications
conducted within the papers (UAV + DL) found in the scientific databases and
summarizing the most prominent ones. This narrative is organized according to
the categories related to mapping context (environmental, urban, and
agricultural). Later on, when presenting future perspectives and current
trends in DL, we mention some of these papers alongside other investigations
proposed in computer vision journals that could potentially be used for remote
sensing and UAV-based applications.
### 3.1 Sensors and Applications Worldwide
In the UAV-based imagery context, several applications have benefited from DL
approaches. As the usability of these networks increases across different
remote sensing areas, researchers are also testing their capability to replace
laborious human tasks and to improve traditional measurements performed by
shallow learning or conventional statistical methods. Recently, several
articles and proceedings have been published in renowned scientific journals.
Our survey, whose specifics were described above, was able to detect some
important characteristics. From the data collected, we verified that most
UAV-based applications with DL are conducted in countries like China and the
USA (Fig. 9). This is somewhat expected, since these countries, alongside
their educational and scientific investments, have traditionally focused on
both computer vision and remote sensing advances for a long time.
Figure 9: Published material according to their respective country of origin.
The names from the top publishing countries per continent were also
highlighted on the map.
The top 9 countries (highlighted on the map of Fig. 9) are responsible for
almost 90% of the scientific production on this theme. This spatially
distributed global information also helps to pinpoint some characteristics of
how these UAV-based applications were conducted. In European countries like
Germany, the UK, the Netherlands, and Spain, our data indicate that most of
the applied methods were used to map the environmental context. In South
American countries like Brazil, precision agriculture practices are the
preferred approach. In Asian countries like China and India, the urban and
agricultural contexts are the most frequent focus. In North America,
publications from the USA covered the agricultural, urban, and environmental
contexts alike. Although loose, this analysis may shed some light on how each
of these regions is treating its problems and implementing practices related
to these themes.
In general terms, the articles collected from the scientific databases exhibit
a pattern regarding architecture (CNN or RNN), evaluation (classification or
regression), approach (object detection, segmentation, or scene-wise
classification), type of sensor (RGB, multispectral, hyperspectral, or LiDAR),
and mapping context (environmental, urban, or agricultural). These patterns
can be viewed in a simple diagram (Fig. 10), from which the following
observations can be extracted:
1. The majority of networks in UAV-based applications still rely mostly on
CNNs;
2. Even though object detection is the most common type of approach, many
segmentation approaches have appeared in recent years;
3. Most of the sensors used are RGB, followed by multispectral, hyperspectral,
and LiDAR; and
4. An interesting number of papers were published within the environmental
context, with forest-related applications being the most common in this
category, while the urban and agricultural categories were almost evenly
distributed among the adopted approaches.
Figure 10: Diagram describing proceedings and articles according to the
defined categories using WOS and Google Scholar datasets.
The majority of papers on UAV-based applications implemented a type of CNN
(91.2%). Most of these articles used established architectures (Fig. 5), and a
small portion proposed their own models and compared them against
state-of-the-art networks. Indeed, this comparison appears to be a crucial
concern in recent publications, since it is necessary to ascertain the
performance of a proposed method relative to well-known DL-based models. The
popularity of CNN architectures for remote sensing images is not new, mainly
for reasons already stated in the previous sections. Besides that, even though
present in a small number of articles, RNNs (8.8%), mostly in CNN-LSTM
architectures, are an emerging trend in this area and appear to be the focus
of novel proposals. As UAV systems can operate largely according to the user's
needs (i.e., they can acquire images on multiple dates in a more personalized
manner), the same object can be viewed in a time-progression fashion. This is
beneficial for many applications that involve monitoring stationary objects,
like rivers, vegetation, or terrain slopes, for example.
Although classification (97.7%) is by far the most common type of task in
these papers, regression (2.3%) is an important form of estimation and may be
useful in future applications. Regression is worthwhile in remote sensing
applications simply because it enables the estimation of continuous variables.
Applications that could benefit from regression analysis are present in the
environmental, urban, and agricultural contexts, among many others, where it
is useful to return predictions of measured variables. Classification, on the
other hand, is more of a common ground for remote sensing approaches and is
implemented in every major task (object detection, pixel-wise semantic
segmentation, and scene-wise classification).
The aforementioned DL-based architectures were mainly applied to object
detection (53.9%) and image segmentation (40.7%) problems, while scene-wise
classification (5.4%) was the least common. This preference for object
detection may be related to UAV-based data specifically, since the high amount
of object detail provided by the spatial resolution of the images is both an
advantage and a challenge. It is an advantage because it increases the number
of objects detectable on the surface (thus, more labeled examples), and a
challenge because it complicates both the recognition and the segmentation of
these objects (higher detail implies more features to be extracted and
analyzed). Scene-wise classification, on the other hand, is not as common in
remote sensing applications, and image segmentation is often preferred, since
assigning a class to each pixel of the image brings more benefits for this
type of analysis than merely identifying a scene.
There is also an interesting distribution pattern in the application context.
The data indicate that most of the applications were conducted in the
environmental context (46.6%). This context includes approaches dealing with
detection and classification tasks on land use and land-use change,
environmental hazards and disasters, erosion estimates, wild-life detection,
forest-tree inventory, and the monitoring of difficult-to-access regions,
among others. The urban and agricultural categories (27.2% and 26.4%,
respectively) were associated with car and traffic detection; building,
street, and rooftop extraction; as well as plant counting, plantation-row
detection, weed-infestation identification, and others. Interestingly, all of
the LiDAR data applications were related to environmental mapping, while RGB
images were mostly used in the urban context, followed by the agricultural
one. Multispectral and hyperspectral data, however, were less used in the
urban context than in the other categories. As these categories benefit
differently from DL-based methods, a more detailed examination is needed to
understand their problems, challenges, and achievements. In the following
subsections, we explain these issues and advances while citing suitable
examples from our search database.
Lastly, another important observation regarding the categorization used here
is that there is a visible divide in the type of sensor used. Most of the
published papers evaluated the performance of DL-based networks on RGB sensors
(52.4%), followed by multispectral (24.3%), hyperspectral (17.8%), and LiDAR
(5.5%). The preference for RGB sensors in UAV-based systems may be associated
with their low cost and high market availability. The published articles may
reflect this, since RGB is a viable option for practical reasons when
considering the replicability of the methods. It should also be noted that
most labeled examples in public databases are RGB, which favors improvements
and investigations with this type of data. In addition, data obtained from
multispectral, hyperspectral, and LiDAR sensors are used in more specific
applications, which contributes to this division.
Most of the object detection applications relied on RGB data, while
segmentation problems were addressed with RGB, multispectral, hyperspectral,
and LiDAR data. A possible explanation is that object detection often relies
on the spatial, texture, pattern, and shape characteristics of the object in
the image, whereas segmentation approaches are more diverse and benefit from
the amount of spectral and terrain information provided by these sensors. In
object detection, DL-based methods may have boosted the usage of RGB images,
since simpler, traditional methods need additional spectral information to
perform the task. Apart from the spectral information, LiDAR, for example,
offers important object features that help the networks learn and refine edges
around objects, specifically where their patterns are similar. Regardless,
many of these choices are related to the available equipment and the nature of
the application itself, so it is difficult to pinpoint a single reason.
### 3.2 Environmental Mapping
Environmental approaches hold the most diverse set of DNN-based applications
with remote sensing data, including UAV imagery. These applications adopt
different sensors simply because of their divergent nature. To map natural
habitats and their characteristics, studies often relied on methods and
procedures specifically tied to their goals, and no “universal” approach has
been proposed or discovered. However, although DL-based methods have not
reached such a “universal” approach, they are dispelling some skepticism by
being successfully implemented in the most diverse scenarios. Although
UAV-based practices still pose challenges to both classification and
regression tasks, DNN methods are proving to be generally capable of
performing them. Regardless, there is still much to be explored.
Several environmental practices could potentially benefit from deep networks
like CNNs and RNNs. For example, monitoring and counting wild-life (Barbedo et
al., 2020; Hou et al., 2020; Sundaram and Loganathan, 2020), detecting and
classifying vegetation from grasslands and heavily-forested areas (Horning et
al., 2020; Hamdi et al., 2019), recognizing fire and smoke signals (Alexandra
Larsen et al., 2020; Zhang et al., 2019b), analyzing land use, land cover, and
terrain changes, which are often implemented into environmental planning and
decision-making models (Kussul et al., 2017; Zhang et al., 2020d), predicting
and measuring environmental hazards (Dao et al., 2020; Bui et al., 2020),
among others. What follows is a brief description of recent material published
in the remote sensing scientific journals that aimed to solve some of these
problems by integrating data from UAV embedded sensors with DL-based methods.
One of the most common environmental remote sensing applications concerns land
use, land cover, and other types of terrain analysis. A recent study (Giang et
al., 2020) applied semantic segmentation networks to map land use over a
mining extraction area. Another (Al-Najjar et al., 2019) combined information
from a Digital Surface Model (DSM) with UAV-based RGB images and applied a
type of feature fusion as input to a CNN model. To map coastal regions, one
approach (Buscombe and Ritchie, 2018) used a CNN on RGB data registered at
multiple scales in combination with a graphical method named conditional
random field (CRF). Another study (Park and Song, 2020) combined 2D and 3D
convolutional layers on hyperspectral images to determine discrepancies
between the observed land cover and the land category assigned in cadastral
map parcels.
With a semantic segmentation approach, road extraction by a CNN was
demonstrated in another investigation (Li et al., 2019). A further study
(Gevaert et al., 2020) investigated the performance of an FCN in monitoring
household upgrading in unplanned settlements. Terrain analysis is a
diversified topic at any cartographic scale, but for UAV-based images, in
which most acquisitions carry a high level of detail, DL-based methods are
producing important findings and demonstrating their feasibility for this
task. Still, although these studies are proving this feasibility, especially
in comparison with other methods, novel research should focus on evaluating
the domain adaptation and generalization capabilities of deep networks, e.g.,
using data at different spatial resolutions, multitemporal imagery, etc.
The detection, evaluation, and prediction of flooded areas represent another
type of investigation carried out with datasets from UAV-embedded sensors. A
study (Gebrehiwot et al., 2019) demonstrated the importance of CNNs for the
segmentation of flooded regions, where the network was able to separate water
from other targets like buildings, vegetation, and roads. One potential
application that could be conducted with UAV-based data, but still needs to be
further explored, is the mapping and prediction of regions of possible
flooding through a multitemporal analysis, for example. This, as well as many
other possibilities related to flooding, water bodies, and river courses
(Carbonneau et al., 2020), could be investigated with DL-based approaches.
For river analysis, one investigation (Zhang et al., 2020e) used a CNN
architecture for image segmentation, fusing positional and channel-wise
attentive features to assist in river-ice monitoring. Another study
(Jakovljevic et al., 2019) compared LiDAR data with point clouds generated by
UAV mapping, demonstrating an interesting DL-based approach to point cloud
classification and rapid Digital Elevation Model (DEM) generation for
flood-risk mapping. One CNN application on UAV data involved measuring
hailstones in open areas (Soderholm et al., 2020); image segmentation on RGB
images returned the maximum and intermediate dimensions of the hailstones.
Lastly, on this topic, a comparison (Ichim and Popescu, 2020) of CNNs and GANs
for segmenting river and vegetation areas demonstrated that a type of “fusion”
of these networks into a global classifier had the advantage of increasing
segmentation efficiency.
UAV-based forest mapping and monitoring is another emerging approach that has
been gaining the attention of the scientific community and, to some extent, of
governmental bodies. Forest areas often pose difficulties for precise
monitoring and investigation, since they can be hard to access and even
dangerous. In this respect, images taken from UAV-embedded sensors can be used
to identify single tree species in forested environments and compose an
inventory. Among the papers gathered, multiple types of sensors, RGB, multi-
and hyperspectral, and also LiDAR, were used for this purpose. One application
investigated the performance of a 3D-CNN method to classify tree species in a
boreal forest, focusing on pine, spruce, and birch trees, with a combination
of RGB and hyperspectral data (Nezami et al., 2020).
Single-tree detection and species classification by CNNs were also
investigated in (Ferreira et al., 2020), in which three types of palm trees in
the Amazon forest, considered important for its population and native
communities, were mapped with this type of approach. Another example (Hu et
al., 2020) is the implementation of a Deep Convolutional Generative
Adversarial Network (DCGAN) to discriminate between healthy and diseased pine
trees in a heavily dense forested park area. A further recent investigation
(Miyoshi et al., 2020) proposed a novel DL method to identify single-tree
species in highly dense areas with UAV-based hyperspectral imagery. These and
other scientific studies demonstrate how well DL-based methods can deal with
such environments.
Although the majority of approaches found in the databases for this category
relate to tree-species mapping, UAV-acquired data were also used for other
applications in these natural environments. A recent study (Zhang et al.,
2020f) proposed a method based on the semantic segmentation and scene-wise
classification of plants in UAV-based imagery. The method is based on a CNN
that classifies individual plants by increasing the image scale while
integrating features learned at smaller scales, an important contribution to
multi-scale information fusion. Also related to vegetation identification,
multiple CNN architectures were investigated in (Hamylton et al., 2020) to
distinguish plants from non-plant targets in UAV-based RGB images of an
island, achieving interesting performance.
Another application aside from vegetation mapping involves wild-life
identification. Animal monitoring in open spaces and grasslands has also
received attention, as DL-based object detection and semantic segmentation
methods are providing interesting outcomes. A paper by (Kellenberger et al.,
2018) covers this topic and discusses, with practical examples, how CNNs may
be used in conjunction with UAV-based images to recognize mammals in the
African savannah. This study relates the challenges of this task and proposes
a series of suggestions to overcome them, focusing mostly on imbalances in the
labeled dataset. Wild-life identification was performed not only in
terrestrial environments but also in marine spaces, where a recent publication
(Gray et al., 2019) implemented a CNN-based semantic segmentation method to
identify cetacean species, mainly blue, humpback, and minke whales, in the
ocean. These studies not only demonstrate that such methods can be highly
accurate at different tasks but also imply the potential of DL approaches with
UAVs in the current literature.
### 3.3 Urban Mapping
For urban environments, many DL-based proposals using UAV data have been
presented in the literature in recent years. The high spatial resolution
readily provided by UAV-embedded sensors is one of the main reasons behind
their usage in these areas. Object detection and instance segmentation methods
are necessary in such images to individualize, recognize, and map highly
detailed targets. Thus, many applications rely on CNNs and, in a few cases,
RNNs (CNN-LSTM) to deal with them. Some of the most common examples found in
this category during our survey are the identification of pedestrians, car and
traffic monitoring, the segmentation of individual tree species in urban
forests, the detection of cracks in concrete surfaces and pavements, building
extraction, etc. Most of these applications were conducted with RGB sensors
and, in a few cases, spectral ones.
The usage of RGB sensors is, as aforementioned, a preferred option for
small-budget experiments, but it is also related to an important
characteristic of CNNs: features like the pixel size, shape, and texture of an
object are essential to its recognition. In this regard, novel experiments
could compare the performance of DL-based methods on RGB imagery against other
types of sensors. As low-budget systems are easy to deploy in large numbers,
many urban monitoring activities could benefit from such investigations. In
urban areas, real-time UAV monitoring is particularly relevant and is one of
the current objectives when implementing such applications.
The most common practices with UAV-based imagery in urban environments with
DL-based methods involve the detection of vehicles and traffic. Car
identification is an important task to help urban monitoring and may be useful
for real-time analysis of traffic flow in those areas. It is not an easy task,
since vehicles can be occluded by different objects like buildings and trees,
for example. A recent approach, presented in (Zhang et al., 2019c), used an object detection CNN on RGB video footage obtained with a UAV for this task. The authors also addressed motorcycle traffic monitoring, where a frame-by-frame analysis enabled the neural network to distinguish a pedestrian from a person riding a motorcycle based on differences in appearance and frame-to-frame movement. Regarding
pedestrian traffic, an approach with thermal cameras presented by (de Oliveira and Wehrmeister, 2018) demonstrated that CNNs can detect persons under different camera rotations, angles, sizes, translations, and scales, corroborating the robustness of their learning and generalization capabilities.
Another important application in those areas is the detection and localization of individual tree species, as well as the segmentation of their canopies.
Identifying individual species of vegetation in urban locations is an
important requisite for urban-environmental planning since it assists in
inventorying species and providing information for decision-making models. A
recent study (dos Santos et al., 2019) applied object detection methods to
detect and locate tree species threatened by extinction. Along the same lines, a study (Torres et al., 2020) evaluated semantic segmentation neural networks to map endangered tree species in urban environments. While one approach aimed to recognize the object to compose an inventory, the other was able to identify it and return important metrics, such as its canopy area. Indeed, some proposals that were implemented in a forest
type of study could also be adopted in urban areas, leaving an open field for future research that intends to evaluate DL-based models in this environment. Urban areas pose different challenges for tree monitoring, so these applications need to consider their characteristics.
DL-based methods have also been used to recognize and extract infrastructure
information. An interesting approach demonstrated by (Boonpook et al., 2021),
based on semantic segmentation methods, was able to extract buildings in
heavily urbanized areas, with unique architectural styles and complex
structures. Interestingly enough, a combination of RGB with a DSM improved
building identification, indicating that the segmentation model was able to
incorporate appropriate information related to the objects’ height. This type
of combinative approach, between spatial-spectral data and height, may be
useful in other identification and recognition approaches. Also regarding
infrastructure, another possible application in urban areas is the
identification and location of utility poles (Gomes et al., 2020). Although rather specific, this application is important for regularly maintaining and monitoring the condition of poles. These types of monitoring in urban environments benefit from DL-based approaches, as they tend to substitute multiple human inspection tasks.
Another application involves detecting cracks in concrete pavements and
surfaces (Bhowmick et al., 2020). Because some regions of civil structures are hard to access, UAV-based data combined with object detection networks can be useful for this task, providing a viable real-life application.
Another topic yielding important discoveries is land-cover pixel segmentation in urban areas, as demonstrated by (Benjdira et al., 2019a). In this investigation, an unsupervised domain adaptation method based
on GANs was implemented, working with different data from UAV-based systems,
while being able to improve image segmentation of buildings, low vegetation,
trees, cars, and impervious surfaces. As aforementioned, GANs or DCGANs are
quickly gaining the attention of computer vision communities due to their wide
area of applications and the way they function by being trained to
differentiate between real and fake data (Goodfellow et al., 2014).
Regardless, their usage in UAV-based imagery is still underexplored, and future investigations into not only land-change and land-cover mapping but also other types of applications may benefit from them. Nonetheless, apart from differences in angles, rotations, scales, and other characteristics of UAV-based imagery, the diversity of urban scenarios is a problem that unsupervised approaches should consider. Therefore, in the current state, DL-based networks may still need some form of supervision to guide image processing, specifically regarding domain-shift factors.
### 3.4 Agricultural Mapping
Precision agriculture applications have benefited greatly from the integration of UAV-based imagery and DL methods in recent scientific investigations. Most issues related to these approaches involve
object detection and feature extraction for counting plants and detecting
plantation-lines, recognizing plantation-gaps, segmentation of plants species
and invasive species as weeds, phenology, and phenotype detection, and many
others. These applications offer numerous possibilities for this type of
mapping, especially since most of these tasks are, still, conducted manually
by human-vision inspection. As a result, they can help precision farming
practices by returning predictions with rapid, unbiased, and accurate results,
influencing decision-making for the management of agricultural systems.
Regardless, although automatic methods do provide important information in
this context, they face difficult challenges. Some of these include similarity
between the desired plant and invasive plants, hard-to-detect plants in high-density environments (i.e., small spacing between plants and lines), plantation-lines that do not follow a straight path, edge segmentation in mapping canopies with conflicts between shadow and illumination, and many
others. Still, novel investigations aim to give these networks greater generalization capability in dealing with such problems. In this sense, approaches that implement methods under more than one condition or plantation are the main focus of recent publications. Thus, varied investigation scenarios are currently being proposed, with different types of plantations, sensors, flight altitudes, angles, spatial and spectral divergences, dates, phenological stages, etc.
An interesting approach that has the potential to be expanded to different
orchards was used in (Apolo-Apolo et al., 2020). There, a low-altitude flight
approach was adopted with side-view angles to map yield by counting fruits
with a CNN-based method. Counting fruits is not entirely new in DL-based approaches; some papers demonstrated the effectiveness of bounding-box and point-feature methods for this task (Biffi et al., 2021; Tian et al., 2019a; Kang and Chen, 2020), despite differences in occlusion, lighting, fruit size, and image corruption.
Today’s deep networks demonstrate high potential in yield prediction, as some applications are adopting CNN architectures mainly because of their benefits in image processing. One example is the prediction of pasture forage with only
RGB images (Castro et al., 2020). Another interesting example in crop-yield
estimates is presented by (Nevavuori et al., 2020), where a CNN-LSTM was used
to predict yield with a spatial-multitemporal approach. There the authors
implemented this structure since RNNs are more appropriate to learn with
temporal data, while a 3D-CNN was used to process and classify the image.
Although used less frequently than CNNs in the literature, LSTM architectures are receiving emerging attention in precision agriculture approaches, as they appear to be an appropriate choice for temporal monitoring of these areas.
Nonetheless, one of the most widely used and beneficial applications of DL-based networks in precision agriculture is counting and detecting plants and plantation-lines. Counting plants is essential to produce estimates of production rates and, by geolocating them, to determine whether a problem occurred during the seedling process by identifying plantation-gaps. In this regard, identifying plantation-lines together with these gaps is also a desired application. Both object detection and image segmentation methods were implemented in the literature, but most approaches using semantic segmentation algorithms rely on additional procedures, such as a blob detection method (Kitano et al., 2019). These additional steps may not always be desirable, and to prove the generalization capability of a model, multiple tests under different conditions should be performed.
For plantation-line detection, segmentation methods are currently being implemented and often used to assist in extracting more than one type of information. In (Osco et al., 2021), semantic segmentation methods were applied to UAV-based multispectral data to extract canopy areas and demonstrated which spectral regions were most appropriate for the task. A recent application with UAV-based data was also proposed in (Osco et al., 2020a), where a CNN model is
presented to simultaneously count and detect plants and plantation-lines. This
model is based on a confidence map extraction and was an upgraded version from
previous research on citrus-tree counting (Osco et al., 2020b). This CNN combines convolutional layers, a Pyramid Pooling Module (PPM) (Zhao et al., 2017), and a Multi-Stage Module (MSM) with two information branches that, concatenated at the end of the MSM, share the knowledge learned from one to the other. This method ensured that the network learned to detect plants located on a plantation-line and understood that a plantation-line is formed by a linear conjunction of plants. This type of method has also proved successful in dealing with highly-dense plantations.
Another research (Ampatzidis and Partel, 2019) that aimed to count citrus-
trees with a bounding-box-based method also returned similar accuracies.
However, it was conducted in a sparse plantation, which did not impose the same challenges faced in (Osco et al., 2020b, a). Regardless, to deal with highly-dense scenes, feature extraction from confidence maps appears to be an appropriate approach.
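To make the confidence-map idea concrete in a hedged way, plant positions can be read off a predicted map as thresholded local maxima. This is only a minimal sketch of the extraction step, not the architecture of the cited works; the function name, the 3x3 neighborhood, and the threshold value are our illustrative choices:

```python
import numpy as np

def extract_peaks(conf_map, threshold=0.5):
    """Return (row, col) coordinates of plant detections in a 2D confidence map.

    A pixel counts as a detection if its confidence exceeds `threshold`
    and it is the maximum of its 3x3 neighborhood (a local peak).
    """
    # Pad with -inf so border pixels can still be compared to a full window.
    padded = np.pad(conf_map, 1, mode="constant", constant_values=-np.inf)
    peaks = []
    rows, cols = conf_map.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]
            if conf_map[r, c] >= threshold and conf_map[r, c] == window.max():
                peaks.append((r, c))
    return peaks

# Toy confidence map with two clear peaks.
cmap = np.zeros((5, 5))
cmap[1, 1] = 0.9
cmap[3, 3] = 0.8
print(extract_peaks(cmap))  # [(1, 1), (3, 3)]
```

In practice the map would come from a CNN head, and a vectorized maximum filter would replace the explicit loops.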
But agricultural applications do not always involve plant counting or
plantation-line detection. Similar to wild-animal identification as included
in other published studies (Kellenberger et al., 2018; Gray et al., 2019),
there is also an interest in cattle detection, which is still an onerous task
for human-inspection. In UAV-based imagery, some approaches included DL-based
bounding-boxes methods (Barbedo et al., 2019), which were also successfully
implemented. DNNs used for this task are still underexplored, but published investigations (Rivas et al., 2018) argue that the main reasons for using DL methods are the changes in terrain throughout the seasons of the year and the non-uniform distribution of the animals across the area. On this matter, one interesting approach would involve the use of real-time object detection during the flight, since it is difficult to track animal movement, even in open areas such as pastures, while a UAV system is acquiring data. Another agricultural application example refers to the monitoring of offshore aquaculture farms using UAV underwater color imagery and DL models to classify them (Bell et al., 2020). These examples reveal the widespread variety of agricultural problems that can be addressed by integrating DL models and UAV remote sensing data.
Lastly, a field yet to be fully explored in the literature is the identification and recognition of pests and disease indicators in plants using DL-based methods. Most recent approaches aimed to identify invasive species, commonly named “weeds”, in plantation fields. In a demonstration with unsupervised data labeling, (Dian Bah et al., 2018) evaluated the performance of a CNN-based method to predict weeds in the plantation-lines of different crops. This pre-processing step to automatically generate labeled data, implemented outside the CNN model structure, is an interesting approach. However, others prefer a “one-step” network to deal with this situation, and different fronts are emerging in the literature. Unsupervised domain adaptation, in which the network extracts learned features from new, unseen data, is one of the approaches currently pursued.
A recent publication (Li et al., 2020b) proposed such a model for in-field cotton-boll status recognition and counting. Regardless, with UAV-based data, this is still an open issue. As for disease detection, a study (Kerkech et al., 2020) investigated the use of image segmentation for vine crops with multispectral images and was able to separate visible symptoms (RGB), infrared symptoms (i.e., considering only the infrared band), and symptoms in the intersection between visible and infrared spectral data. Another interesting
example regarding pest identification with UAV-based imagery was demonstrated in (Tetila et al., 2020), where superpixel image samples of multiple pest species were considered, and activation filters used to recognize undesirable visual patterns were implemented alongside different DL-based architectures.
## 4 Publicly Available UAV-Based Datasets
As mentioned, one of the most important characteristics of DL-based methods is that their learning capability tends to increase with the number of labeled examples used to train the network. In most of the early approaches
with remote sensing data, CNNs were initialized with pre-trained weights from
publicly available image repositories over the internet. But most of these
repositories are not from data acquired with remote sensing platforms. Still,
there are some known aerial repositories with labeled examples, which were
presented in recent years, such as the DOTA (Xia et al., 2018), UAVDT (Du et
al., 2018), VisDrone (B et al., 2019), WHU-RS19 (Sheng et al., 2012), RSSCN7
(Zou et al., 2015), RSC11 (Zhao et al., 2016), Brazilian Coffee Scene (Penatti
et al., 2015) datasets. These and others are gaining recognition in UAV-based applications and could potentially be used to pre-train or benchmark DL methods. These datasets not only serve as an additional option to start a network but may also help novel proposals to be compared against the evaluated methods.
Since labeled examples with UAV-acquired data are still scarce, specifically for multispectral and hyperspectral data, we aimed to provide UAV-based datasets in both urban and rural scenarios for future research to implement and compare the performance of novel DL-based methods. Table 1 summarizes information related to these datasets and indicates recent publications in which previously conducted approaches were implemented, as well as the results achieved on them. They are available on the following webpage, which is to be constantly updated with novel labeled datasets from here on: Geomatics and Computer Vision/Datasets
Reference | Task | Target | Sensor | GSD(cm) | Best Method | Result
---|---|---|---|---|---|---
(dos Santos et al., 2019) | Detection | Trees | RGB | 0.82 | RetinaNet | AP = 92.64%
(Torres et al., 2020) | Segmentation | Trees | RGB | 0.82 | FC-DenseNet | F1 = 96.0%
(Osco et al., 2021) | Segmentation | Citrus | Multispectral | 12.59 | DDCN | F1 = 94.4%
(Osco et al., 2020a) | Detection | Citrus | RGB | 2.28 | (Osco et al., 2020a) | F1 = 96.5%
(Osco et al., 2020a) | Detection | Corn | RGB | 1.55 | (Osco et al., 2020a) | F1 = 87.6%
(Osco et al., 2020b) | Detection | Citrus | Multispectral | 12.59 | (Osco et al., 2020b) | F1 = 95.0%
Table 1: UAV-based datasets that are publicly available from previous research.
## 5 Perspectives in Deep Learning with UAV Data
There is no denying that DL-based methods are a powerful and important tool to deal with the vast amounts of data produced daily by remote sensing systems. What follows in this section is a short commentary on the near-term
perspectives of one of the most emerging fields in the DL and remote sensing
communities that could be implemented with UAV-based imagery. These topics,
although individually presented here, have the potential to be combined, as
already performed in some studies, contributing to the development of novel
approaches.
### 5.1 Real-Time Processing
Most of the environmental, urban, and agricultural applications presented in
this study can benefit from real-time responses. Although UAV and DL-based
combinations speed up the processing pipeline, these algorithms are highly
computer-intensive. Usually, they do require post-processing in data centers
or dedicated Graphics Processing Units (GPUs) machines. Although DL is
considered a fast method to extract information from data after its training,
it still bottlenecks real-time applications, mainly because of the number of layers intrinsic to DL architectures. For this reason, research groups, especially from the IoT industry and academia, are racing to develop real-time DL methods. The effort usually goes in two directions: developing faster algorithms and developing dedicated GPU processors.
DL models typically use 32-bit floating-point numbers to represent the weights of the neural network. A simple strategy known as quantization reduces the amount of memory required by representing the weights with 16, 8, or even 1 bit instead of 32-bit floating points. This idea dates back to the 1990s (Fiesler et al., 1990; Balzer et al., 1991) and was recently revived due to the size of DL models. For instance, XNOR-Net (Rastegari et al., 2016), a popular binarized-weight strategy, achieves 58 times faster convolution operations and 32 times smaller memory usage. The compact representation comes with a
possible degradation in predictive performance. A 32-bit full precision
ResNet-18 (He et al., 2016) achieves 89.2% top-5 accuracy on the ImageNet
dataset (ImageNet, 2018), while the ResNet-18 (He et al., 2016) ported to
XNOR-Net achieves 73.2% top-5 accuracy on the same dataset. Quantization goes beyond weights to all network components: the literature reports quantized activation functions and gradient optimizations. The survey conducted in (Guo, 2018) gives an important overview of quantization methods.
Also, knowledge distillation (Hinton et al., 2015) is another strategy for training a smaller model, where a larger “teacher” network guides the learning process of a smaller “student” network.
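As a minimal illustration of the quantization idea (a generic symmetric int8 scheme, not XNOR-Net or any specific framework's implementation), float32 weights can be mapped to 8-bit integers with a single scale factor; the function names and the toy weight vector below are ours:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization of float32 weights to int8.

    Each weight is divided by a scale chosen so the largest magnitude
    maps to 127, then rounded to the nearest integer.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# int8 storage uses a quarter of the float32 memory; the rounding error
# is bounded by half the scale.
print(q.tolist(), q.nbytes, w.nbytes)  # [50, -127, 0, 100] 4 16
```

Real pipelines quantize activations as well and calibrate the scale per layer or per channel, but the memory arithmetic is the same.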
Another strategy to develop fast DL models is to design layers using fewer
parameters that are still capable of retaining predictive performance.
MobileNets (Howard et al., 2017) and its variants are a good example of this
idea. The first version of MobileNet is based on a depthwise convolution
(Chollet, 2017) and a point-wise convolution (Szegedy et al., 2015). The
MobileNet (569 million mult/adds and 3.3 million parameters) achieved 83.3%
top-1 accuracy on Stanford Dogs. The Inception V3 (5000 million mult/adds and
23.3 million parameters) achieved 84.0% top-1 accuracy on the same dataset.
The MobileNet V3 (Howard et al., 2019) architecture was developed using the
Network Architecture Search (NAS) (Elsken et al., 2019), followed by a h-swish
activation function and the NetAdapt algorithm (Yang et al., 2018). According to this paper, MobileNetV3-Large is 3.2% more accurate on ImageNet (ImageNet, 2018) and 20.0% faster (lower latency) compared to MobileNetV2. In specific tasks, such as object detection, it is possible to
develop architectural enhancements for this approach, such as the Context
Enhanced Module (CEM) and the Spatial Attention Module (SAM) (Qin et al.,
2019). Both mAP and frames per second (FPS) depend on the size of the backbone. ThunderNet can deliver 24.1 FPS on an ARM Snapdragon 845 at 19.2 mAP (0.5:0.95) on COCO benchmarks (Lin et al., 2014) using the SNet49 backbone. Swapping the backbone for a bigger model, SNet535, increases the mAP to 28.1, but reduces the FPS to 5.8.
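Returning to the depthwise-plus-pointwise factorization behind MobileNet: its parameter savings over a standard convolution can be verified with simple arithmetic. The helper functions and the channel/kernel sizes below are illustrative, not taken from any specific MobileNet layer:

```python
def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias terms omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution mixing the channels."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 128, 256, 3
std = standard_conv_params(c_in, c_out, k)        # 294912
sep = depthwise_separable_params(c_in, c_out, k)  # 33920
print(std, sep, round(std / sep, 1))  # 294912 33920 8.7
```

For 3x3 kernels the factorization costs roughly 8-9 times fewer parameters (and multiply-adds), which is where MobileNet's efficiency comes from.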
When considering even smaller computational power, it is possible to find DL
running on microcontroller units (MCU) where the memory and computational
power are 3-4 orders of magnitude smaller than mobile phones. MCUNet (Lin et
al., 2020) combines TinyNAS and TinyEngine to build a model that requires
320kB of memory and 1MB of storage. MCUNet achieves 70.7% top-1 accuracy on
ImageNet (ImageNet, 2018), which is similar to ResNet18 (He et al., 2016) and
MobileNetV2 (Sandler et al., 2018) accuracy. On hardware, the industry already
developed embedded AI platforms that run DL algorithms. NVIDIA’s Jetson is
amongst the most popular choices and a survey (Mittal, 2019) of studies using
the Jetson platform and its applications demonstrate it. Also, a broader
survey on this theme, that considers GPU, ASIC, FPGA, and MCUs of AI
platforms, can be read in (Imran et al., 2020). Regardless, research in the
context of UAV remote sensing is quite limited, and there is a gap that can be
filled by future works. Several applications can benefit from this technology, including, for example, agricultural spraying UAVs, which could recognize different types of weeds in real time and apply the spray simultaneously. Other approaches may include real-time monitoring of trees in both urban and forest environments, as well as the detection of other types of objects that benefit from rapid processing.
### 5.2 Dimensionality Reduction
Due to recent advances in capture devices, hyperspectral images can be
acquired even in UAVs. These images consist of tens to hundreds of spectral
bands that can assist in the classification of objects in a given application.
However, two main issues arise from the high dimensionality: i) the bands can
be highly correlated, and ii) the excessive increase in the computational cost
of DL models. High-dimensionality could invoke a problem known as the Hughes
phenomenon, which is also known as the curse of dimensionality, i.e. when the
accuracy of a classification is reduced due to the introduction of noise and
other implications encountered in hyperspectral or high-dimensional data
(Hennessy et al., 2020). Regardless, hyperspectral data may pose a hindrance to the accuracy of DL-based approaches, thus being an important issue to be considered in remote sensing practices. The classic approach to address high dimensionality is to apply a Principal Component Analysis (PCA) (Licciardi et al., 2012).
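The classic PCA route can be sketched in a few lines of NumPy: flatten the spatial dimensions, diagonalize the band covariance matrix, and project onto the leading eigenvectors. The function name and the toy 50-band cube are illustrative:

```python
import numpy as np

def pca_reduce(cube, n_components=3):
    """Reduce a (H, W, B) hyperspectral cube to n_components bands via PCA."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    x -= x.mean(axis=0)                      # center each band
    cov = np.cov(x, rowvar=False)            # B x B band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return (x @ eigvecs[:, order]).reshape(h, w, n_components)

rng = np.random.default_rng(0)
cube = rng.normal(size=(10, 10, 50))  # toy 50-band cube
reduced = pca_reduce(cube, n_components=3)
print(reduced.shape)  # (10, 10, 3)
```

The retained components are ordered by explained variance, so the first output band carries the most signal.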
Despite several proposals, PCA is generally not applied in conjunction with
DL, but as a pre-processing step. Although this method may be one of the most
known approaches to reduce dimensionality when dealing with hyperspectral
data, different intakes were already presented in the literature. A novel DL
approach, implemented with UAV-based imagery, was demonstrated in Miyoshi et
al. (Miyoshi et al., 2020). There, the authors proposed a one-step approach,
conducted within the networks’ architecture, to consider a combination of
bands of a hyperspectral sensor that were highly related to the labeled
example provided in the input layer at the initial stage of the network.
Another investigation (Vaddi and Manoharan, 2020) combines a band selection
approach, spatial filtering, and CNN to simultaneously extract the spectral
and spatial features. Still, the future perspective for solving this issue appears to be a combination of spectral band selection and DL methods in an end-to-end approach, so that both the selection and the DL method can exchange information and improve results. This can also contribute to understanding how DL operates on these images, which was partially accomplished in Miyoshi et al. (Miyoshi et al., 2020).
### 5.3 Domain Adaptation and Transfer Learning
The training steps of DL models are generally carried out with images captured
in a specific geographical region, in a short-time period, or with single
capture equipment (also known as domains). When the model is used in practice,
it is common for spectral shifts to occur between the training and test images
due to differences in acquisition, geographic region, atmospheric conditions,
among others (Tuia et al., 2016). Domain adaptation is a technique for
adapting models trained in a source domain to a different, but still related,
target domain. Therefore, domain adaptation is also viewed as a particular
form of transfer learning (Tuia et al., 2016). On the other hand, transfer
learning (Zhuang et al., 2020; Tan et al., 2018) does include applications in
which the characteristics of the domain’s target space may differ from the
source domain.
A promising research line for domain adaptation and transfer learning is to
consider GANs (Goodfellow et al., 2014; Elshamli et al., 2017). For example,
(Benjdira et al., 2019b) proposed the use of GANs to convert an image from the
source domain to the target domain, causing the source images to mimic the
characteristics of the images from the target domain. Recent approaches seek
to align the distribution of the source and target domains, although they do
not consider direct alignment at the level of the problem classes. Approaches
that are attentive to class-level shifts may be more accurate, as the
category-sensitive domain adaptation proposed by (Fang et al., 2019). Thus,
these approaches reduce the domain shift related to the quality and
characteristics of the training images and can be useful in practice for UAV
remote sensing.
### 5.4 Attention Based Mechanisms
Attention mechanisms aim to highlight the most valuable features or image
regions based on assigning different weights for them in a specific task. It
is a topic that has been recently applied in remote sensing, providing
significant improvements. As pointed out by (Xu et al., 2018), high-resolution images in remote sensing provide a large amount of information, while intra-class variation tends to increase. These variations and the large amount of information make the extraction of relevant features more difficult, since traditional CNNs process all regions with the same weight
(relevance). Attention mechanisms, such as the one proposed by (Xu et al.,
2018), are useful tools to focus the feature extraction in discriminative
regions of the problem, be it image segmentation (Ding et al., 2021; Su et
al., 2019; Zhou et al., 2020), scene-wise classification (Zhu et al., 2019b;
Li et al., 2020c), or object detection (Li et al., 2019, 2020c), among others.
Besides, (Su et al., 2019) argue that when remote sensing images are used,
they are generally divided into patches for training the CNNs. Thus, objects
can be divided into two or more sub-images, causing the discriminative and
structural information to be lost. Attention mechanisms can be used to
aggregate learning by focusing on relevant regions that describe the objects
of interest, as presented in (Su et al., 2019), through a global attention
upsample module that provides global context and combines low and high-level
information. Recent advances in computer vision were achieved with attention
mechanisms for classification (e.g., Vision Transformer (Dosovitskiy et al.,
2020) and Data-efficient Image Transformers (Touvron et al., 2020)) and in
object detection (e.g., DETR (Carion et al., 2020)) that have not yet been
fully evaluated in remote sensing applications. Some directions also point to
the use of attention mechanisms directly in a sequence of image patches
(Dosovitskiy et al., 2020; Touvron et al., 2020). These new proposals can
improve the results already achieved in remote sensing data, just as they have
advanced the results on the traditional image datasets in computer vision
(e.g., ImageNet (ImageNet, 2018)).
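To make the weighting idea concrete, a minimal scaled dot-product attention (the generic formulation popularized by Transformers, not any specific remote sensing module cited above) can be sketched; the toy key/value "regions" and query are our illustration:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V.

    The softmax assigns each query a weight over the keys, so the output
    is a relevance-weighted mixture of the values.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Three "image regions" with 4-dim key features; one query vector that
# strongly matches region 0.
k = np.eye(3, 4)
v = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
q = np.array([[10.0, 0.0, 0.0, 0.0]])
out, w = scaled_dot_product_attention(q, k, v)
print(np.round(w, 2))  # weights concentrate on region 0
```

In an attention module for segmentation or detection, the queries, keys, and values would all be learned projections of CNN feature maps.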
### 5.5 Few-Shot Learning
Although recent work demonstrated the feasibility of DL-based methods for multiple tasks, they are still considered limited in terms of generalization. This occurs when dealing with the same objects in different
geographical areas or when new object classes are considered. Traditional
solutions require retraining the model with a robust labeled dataset for the
new area or object. Few-shot learning aims to cope with situations in which
few labeled datasets are available. A recent study (Li et al., 2020), in the
context of scene classification, pointed out that few-shot methods in remote
sensing are based on transfer learning and meta-learning. Meta-learning can be
more flexible than transfer learning, and when applied in the training set to
extract meta-knowledge, contributes significantly to few-shot learning in the
test set. An interesting strategy to cope with large intraclass variation and
interclass similarity is the implementation of the attention mechanism in the
feature learning step, as previously described. The datasets used in the (Li
et al., 2020) study were not UAV-based; however, the strategy can be explored
in UAV imagery.
In the context of UAV remote sensing, there are few studies on few-shot
learning. Recently, an investigation (Karami et al., 2020) aimed for the
detection of maize plants using the object detection method CenterNet. The
authors adopted a transfer learning strategy using pre-trained models from
other geographical areas and dates. Fewer images from the new area (150 in total, compared to the 600 used in the previous training) were used for fine-tuning the model. Based on the literature survey, there is a
research-gap to be further explored in the context of object detection using
few-shot learning in UAV remote sensing. The main idea behind this is to
consider less labeled datasets for training, which may help in some remote
applications where data availability is scarce or presents few occurrences.
### 5.6 Semi-Supervised Learning and Unsupervised Learning
With the increasing availability of remote sensing images, the labeling task
for supervised training of DL models is expensive and time-consuming. Thus,
the performance of DL models is impacted due to the lack of large amounts of
labeled training images. Efforts have been made to consider unlabeled images
in training through unsupervised (unlabeled images only) and semi-supervised
(labeled and unlabeled images) learnings. In remote sensing, most semi-
supervised or unsupervised approaches are based on transfer learning, which
usually requires a supervised pre-trained model (Liu and Qin, 2020). In this
regard, a recent study (Kang et al., 2020) proposed a promising approach for
unlabeled remote sensing images that define spatial augmentation criteria for
relating close sub-images. Regardless, this is still an under-developed
practice with UAV-based data and should be investigated in novel approaches.
Future perspectives point to the use of contrastive loss (Bachman et al.,
2019; Tian et al., 2019b; Hjelm et al., 2019; He et al., 2020) and clustering-
based approaches (Caron et al., 2018, 2021). Recent publications have shown
interesting results with the use of contrastive loss that has not yet been
fully evaluated in remote sensing. For example, (He et al., 2020) proposed an
approach based on contrastive loss that surpassed the performance of its
supervised pre-trained counterpart. As for clustering-based methods, they
often group images with similar characteristics (Caron et al., 2018). On this
matter, a research (Caron et al., 2018) presented an approach that groups the
data while reinforcing the consistency between the cluster assignments
produced for a pair of images (same images with two augmentations). An
efficient and effective way to use a large number of unlabeled images can
considerably improve performance, mainly related to the generalizability of
the models.
### 5.7 Multitask Learning
Multitask learning aims to perform multiple tasks simultaneously. Several
advantages are mentioned in (Crawshaw, 2020), including fast learning and the
minimization of overfitting problems. Recently, in the context of UAV remote sensing, some important research has already been developed. A study (Wang et al., 2021) proposed a method to conduct three tasks (semantic segmentation,
height estimation, and boundary detection), which also considered boundary
attention modules. Another research (Osco et al., 2020a) simultaneously
detected plants and plantation lines in UAV-based imagery. The proposed
network benefited from the contributions of considering both tasks in the same
structure, since the plants must, essentially, belong to a plantation-line. In
short, improvements occurred in the detection task when line detection was
considered at the same time. This approach can be further explored in several
UAV-based remote sensing applications.
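A minimal sketch of the multitask setup described above, assuming a shared feature extractor with one head per task and a weighted sum of per-task losses; the architecture, shapes, and task weights here are illustrative and are not those of (Wang et al., 2021) or (Osco et al., 2020a).

```python
import numpy as np

class MultiTaskModel:
    """Toy multitask network: one shared feature extractor feeds two
    task heads (e.g. plant detection and plantation-line detection).
    All shapes and weights are illustrative."""
    def __init__(self, in_dim, feat_dim, rng):
        self.W_shared = rng.normal(size=(in_dim, feat_dim)) * 0.1
        self.W_task_a = rng.normal(size=(feat_dim, 1)) * 0.1  # e.g. plant score
        self.W_task_b = rng.normal(size=(feat_dim, 1)) * 0.1  # e.g. line score

    def forward(self, x):
        feats = np.maximum(x @ self.W_shared, 0.0)  # shared ReLU features
        return feats @ self.W_task_a, feats @ self.W_task_b

def multitask_loss(pred_a, y_a, pred_b, y_b, w_a=1.0, w_b=0.5):
    # Each head contributes an MSE term; w_a/w_b balance the two tasks
    return w_a * np.mean((pred_a - y_a) ** 2) + w_b * np.mean((pred_b - y_b) ** 2)

rng = np.random.default_rng(1)
model = MultiTaskModel(in_dim=4, feat_dim=8, rng=rng)
x = rng.normal(size=(16, 4))
pa, pb = model.forward(x)
loss = multitask_loss(pa, np.zeros((16, 1)), pb, np.ones((16, 1)))
print(loss >= 0.0)
```

Because both heads backpropagate through the same shared weights, gradients from one task regularize the features used by the other, which is the mechanism behind the reported detection improvements.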
### 5.8 Open-Set
The main idea of open-set recognition is to deal with unknown or unseen
classes during inference on the testing set (Bendale and Boult, 2016). As the
authors mention, recognition in real-world scenarios is “open-set”, unlike the
nature of neural networks, which operate in a “closed-set” regime: the testing
set is classified considering only the classes used during training, so
unknown or unseen classes are not rejected at test time. There are few studies
regarding open-set recognition in the context of remote sensing. Regarding
semantic segmentation of aerial imagery, a study (da Silva et al., 2020)
presented an approach for the open-set context. There, a closed-set semantic
segmentation method was adapted by adding a probability threshold after the
softmax. Later, a post-processing step based on morphological filters was
applied to the pixels classified as unknown, to verify whether they lie inside
regions or along borders. Another interesting approach is to combine open-set
and domain adaptation methods, as proposed in the remote sensing context by
(Adayel et al., 2020).
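A minimal sketch of the softmax-thresholding idea described above: a closed-set classifier rejects a sample as unknown whenever its top softmax probability falls below a threshold. The threshold value and labels are illustrative, not taken from (da Silva et al., 2020).

```python
import numpy as np

def open_set_predict(logits, threshold=0.7, unknown_label=-1):
    """Turn a closed-set classifier into an open-set one by thresholding
    the softmax: low-confidence samples are rejected as unknown."""
    # Numerically stable softmax over the class axis
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, labels, unknown_label)

logits = np.array([[5.0, 0.0, 0.0],    # peaked -> confidently class 0
                   [0.4, 0.5, 0.45]])  # near-uniform -> low confidence
preds = open_set_predict(logits)
print(preds)  # row 0 accepted as class 0, row 1 labeled unknown (-1)
```

In a semantic segmentation setting, the same test would be applied per pixel, after which the morphological post-processing mentioned above can refine the map of rejected pixels.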
### 5.9 Photogrammetric Processing
Although not as developed as other practices, DL-based methods can be adopted
to process and optimize the UAV photogrammetric pipeline. This process aims to
generate a dense point cloud and an orthomosaic, and it is based on
Structure-from-Motion (SfM) and Multi-View Stereo (MVS) techniques. In SfM,
the interior and exterior orientation parameters are estimated, a sparse point
cloud is generated, and a matching technique is applied between the images. A
recent survey on image matching (Ma et al., 2021) concluded that this topic is
still an open problem and pointed out the potential of DL in this task. The
authors mentioned that DL techniques are mainly applied to feature detection
and description, and that feature matching deserves further investigation.
Finally, they pointed out that a promising direction is the customization of
modern feature matching techniques to suit SfM. Regarding DL for UAV image
matching, there is a lack of works, indicating potential for future
exploration.
In the UAV photogrammetric process, DL can also be used to filter the DSM,
which is essential to generate high-quality orthoimages. Previous work
(Gevaert et al., 2018) showed the potential of using DL to filter the DSM and
generate the DTM. Further investigations are required on this topic, mainly
considering UAV data. Besides, another task that can benefit from DL is the
color balancing between images when generating orthomosaics from thousands of
images covering extensive areas.
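To make the DSM-filtering task concrete, the following is a classical (non-DL) baseline sketch: a grey-scale morphological opening removes above-ground objects narrower than the window (trees, buildings) from a DSM, approximating a DTM. DL approaches such as (Gevaert et al., 2018) learn this mapping instead; the window size and terrain values here are illustrative.

```python
import numpy as np

def grey_opening_dtm(dsm, window=5):
    """Approximate a DTM from a DSM via morphological opening:
    an erosion (local minimum) removes narrow raised objects,
    then a dilation (local maximum) restores the ground surface."""
    pad = window // 2

    def local_filter(arr, op):
        padded = np.pad(arr, pad, mode="edge")
        out = np.empty_like(arr)
        for i in range(arr.shape[0]):
            for j in range(arr.shape[1]):
                out[i, j] = op(padded[i:i + window, j:j + window])
        return out

    eroded = local_filter(dsm, np.min)
    return local_filter(eroded, np.max)

# Flat terrain at 100 m with a 2-pixel-wide, 10 m tall "building"
dsm = np.full((10, 10), 100.0)
dsm[4:6, 4:6] += 10.0
dtm = grey_opening_dtm(dsm)
print(np.allclose(dtm, 100.0))  # the narrow object is removed
```

Learned filters aim to do better than this baseline on wide buildings and sloped terrain, where a fixed window either leaves objects behind or flattens real relief.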
To summarize, the topics addressed in this section compose some of the hot
topics in the computer vision community, and combining them with remote
sensing data can contribute to the development of novel approaches in the
context of UAV mapping. In this regard, it is important to emphasize that not
only are these topics currently being investigated in computer vision
research, but they are also being rapidly implemented in multiple approaches
outside remote sensing. As other domains are investigated, novel ways of
improving and adapting these networks can be found. Future studies in the
remote sensing community, specifically with UAV-based systems, may benefit
from these improvements and incorporate them into their applications.
## 6 Conclusions
DL is still considered, at the time of writing, a “black-box” type of solution
for most problems, although recent research is making considerable progress in
mitigating this notion. Regardless, in the remote sensing domain, it has
already provided important findings across most of its implementations. Our
literature review has focused on the application of these methods in UAV-based
image processing. In this sense, we structured our study to offer a
comprehensive approach to the subject while presenting an overview of
state-of-the-art techniques and perspectives regarding their usage. As such,
we hope that this review may serve as an inclusive survey summarizing UAV
applications based on DNNs. Thus, in the evaluated context, this review
concludes that:
1. In the context of UAV remote sensing, most of the published materials are
based on object detection methods and RGB sensors; however, some applications,
such as precision agriculture and forestry, benefit from multi/hyperspectral
data;
2. There is a need for additional publicly available labeled datasets obtained
with UAVs to train and benchmark the networks. In this context, we contributed
by providing a repository with some of our UAV datasets in both agricultural
and environmental applications;
3. Even though CNNs are the most adopted architecture, other methods based on
CNN-LSTMs and GANs are gaining attention in UAV remote sensing and image
applications, and future UAV remote sensing works may benefit from their
inclusion;
4. DL, when assisted by GPU processing, can provide fast inference solutions.
However, there is still a need for further investigation regarding real-time
processing using embedded systems on UAVs; and, lastly,
5. Some promising topics, such as open-set recognition, attention-based
mechanisms, few-shot learning, and multitask learning, can be combined to
provide novel approaches in the context of UAV remote sensing; these topics
can also contribute significantly to the generalization capacity of DNNs.
## Funding
This research was funded by CNPq (p: 433783/2018-4, 310517/2020-6,
314902/2018-0, 304052/2019-1 and 303559/2019-5), FUNDECT (p: 59/300.066/2015)
and CAPES PrInt (p: 88881.311850/2018-01). The authors acknowledge the support
of the UFMS (Federal University of Mato Grosso do Sul) and CAPES (Finance Code
001).
## Acknowledgments
The authors would like to acknowledge Nvidia Corporation for the donation of
the Titan X graphics card.
## Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the
design of the study; in the collection, analyses, or interpretation of data;
in the writing of the manuscript; or in the decision to publish the results.
## Abbreviations
The following abbreviations were used in this manuscript:
Abbreviation | Definition
---|---
AdaGrad | Adaptive Gradient Algorithm
AI | Artificial Intelligence
ANN | Artificial Neural Network
CEM | Context Enhanced Module
CNN | Convolutional Neural Network
DCGAN | Deep Convolutional Generative Adversarial network
DDCN | Deep Dual-domain Convolutional neural Network
DL | Deep Learning
DNN | Deep Neural Network
DEM | Digital Elevation Model
DSM | Digital Surface Model
FPS | Frames per Second
GAN | Generative Adversarial Network
GPU | Graphics Processing Unit
KL | Kullback-Leibler
LSTM | Long Short-Term Memory
IoU | Intersection over Union
ML | Machine Learning
MAE | Mean Absolute Error
MAPE | Mean Absolute Percentage Error
MRE | Mean Relative Error
MSE | Mean Squared Error
MSLE | Mean Squared Logarithmic Error
MSM | Multi-Stage Module
MVS | Multi-View Stereo
NAS | Network Architecture Search
PCA | Principal Component Analysis
PPM | Pyramid Pooling Module
r | Correlation Coefficient
RMSE | Root Mean Squared Error
RNN | Recurrent Neural Network
ROC | Receiver Operating Characteristics
RPA | Remotely Piloted Aircraft
SAM | Spatial Attention Module
SGD | Stochastic Gradient Descent
SfM | Structure from Motion
UAV | Unmanned Aerial Vehicle
WOS | Web of Science
## References
* Lecun et al. [2015] Yann Lecun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. _Nature_ , 521(7553):436–444, 2015. ISSN 14764687. doi:10.1038/nature14539.
* Goodfellow et al. [2016] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. _Deep Learning_. MIT Press, 2016.
* Zhang et al. [2016] Liangpei Zhang, Lefei Zhang, and Bo Du. Deep learning for remote sensing data: A technical tutorial on the state of the art. _IEEE Geoscience and Remote Sensing Magazine_ , 4(2):22–40, 2016. ISSN 21686831. doi:10.1109/MGRS.2016.2540798.
* Cheng and Han [2016] Gong Cheng and Junwei Han. A survey on object detection in optical remote sensing images. _ISPRS Journal of Photogrammetry and Remote Sensing_ , 117:11–28, 2016. ISSN 09242716. doi:10.1016/j.isprsjprs.2016.03.014. URL http://dx.doi.org/10.1016/j.isprsjprs.2016.03.014.
* Ball et al. [2017] John E. Ball, Derek T. Anderson, and Chee Seng Chan. A comprehensive survey of deep learning in remote sensing: Theories, tools and challenges for the community. _arXiv_ , 11(4), 2017. ISSN 1931-3195. doi:10.1117/1.jrs.11.042609.
* Cheng et al. [2017] Gong Cheng, Junwei Han, and Xiaoqiang Lu. Remote sensing image scene classification: Benchmark and state of the art. _arXiv_ , 2017. ISSN 23318422.
* Zhu et al. [2017] Xiao Xiang Zhu, Devis Tuia, Lichao Mou, Gui Song Xia, Liangpei Zhang, Feng Xu, and Friedrich Fraundorfer. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. _IEEE Geoscience and Remote Sensing Magazine_ , 5(4):8–36, 2017. ISSN 21686831. doi:10.1109/MGRS.2017.2762307.
* Li et al. [2018] Ying Li, Haokui Zhang, Xizhe Xue, Yenan Jiang, and Qiang Shen. Deep learning for remote sensing image classification: A survey. _Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery_ , 8(6):1–17, 2018. ISSN 19424795. doi:10.1002/widm.1264.
* Yao et al. [2018] Chuchu Yao, Xianxian Luo, Yudan Zhao, Wei Zeng, and Xiaoyu Chen. A review on image classification of remote sensing using deep learning. _2017 3rd IEEE International Conference on Computer and Communications, ICCC 2017_ , 2018-Janua:1947–1955, 2018. doi:10.1109/CompComm.2017.8322878.
* Petersson et al. [2017] Henrik Petersson, David Gustafsson, and David Bergström. Hyperspectral image analysis using deep learning - A review. _2016 6th International Conference on Image Processing Theory, Tools and Applications, IPTA 2016_ , 2017. doi:10.1109/IPTA.2016.7820963.
* Audebert et al. [2019] Nicolas Audebert, Bertrand Le Saux, and Sebastien Lefevre. Deep learning for classification of hyperspectral data: A comparative review. _IEEE Geoscience and Remote Sensing Magazine_ , 7(2):159–173, 2019. ISSN 21686831. doi:10.1109/MGRS.2019.2912563.
* Paoletti et al. [2019] M. E. Paoletti, J. M. Haut, J. Plaza, and A. Plaza. Deep learning classifiers for hyperspectral imaging: A review. _ISPRS Journal of Photogrammetry and Remote Sensing_ , 158(September):279–317, 2019. ISSN 09242716. doi:10.1016/j.isprsjprs.2019.09.006. URL https://doi.org/10.1016/j.isprsjprs.2019.09.006.
* Li et al. [2019] Shutao Li, Weiwei Song, Leyuan Fang, Yushi Chen, Pedram Ghamisi, and Jon Atli Benediktsson. Deep learning for hyperspectral image classification: An overview. _IEEE Transactions on Geoscience and Remote Sensing_ , 57(9):6690–6709, 2019. ISSN 15580644. doi:10.1109/TGRS.2019.2907932.
* Tsagkatakis et al. [2019] Grigorios Tsagkatakis, Anastasia Aidini, Konstantina Fotiadou, Michalis Giannopoulos, Anastasia Pentari, and Panagiotis Tsakalides. Survey of deep-learning approaches for remote sensing observation enhancement. _Sensors (Switzerland)_ , 19(18):1–39, 2019. ISSN 14248220. doi:10.3390/s19183929.
* Ma et al. [2019] Lei Ma, Yu Liu, Xueliang Zhang, Yuanxin Ye, Gaofei Yin, and Brian Alan Johnson. Deep learning in remote sensing applications: A meta-analysis and review. _ISPRS Journal of Photogrammetry and Remote Sensing_ , 152:166 – 177, 2019. ISSN 0924-2716. doi:https://doi.org/10.1016/j.isprsjprs.2019.04.015. URL http://www.sciencedirect.com/science/article/pii/S0924271619301108.
* Hossain and Chen [2019] Mohammad D. Hossain and Dongmei Chen. Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. _ISPRS Journal of Photogrammetry and Remote Sensing_ , 150(February):115–134, 2019. ISSN 09242716. doi:10.1016/j.isprsjprs.2019.02.009. URL https://doi.org/10.1016/j.isprsjprs.2019.02.009.
* Yuan et al. [2021] Xiaohui Yuan, Jianfang Shi, and Lichuan Gu. A review of deep learning methods for semantic segmentation of remote sensing imagery. _Expert Systems with Applications_ , 169(November 2020):114417, 2021. ISSN 09574174. doi:10.1016/j.eswa.2020.114417. URL https://doi.org/10.1016/j.eswa.2020.114417.
* Zheng et al. [2020] Zhe Zheng, Lin Lei, Hao Sun, and Gangyao Kuang. A Review of Remote Sensing Image Object Detection Algorithms Based on Deep Learning. _2020 IEEE 5th International Conference on Image, Vision and Computing, ICIVC 2020_ , pages 34–43, 2020. doi:10.1109/ICIVC50857.2020.9177453.
* Yuan et al. [2020] Qiangqiang Yuan, Huanfeng Shen, Tongwen Li, Zhiwei Li, Shuwen Li, Yun Jiang, Hongzhang Xu, Weiwei Tan, Qianqian Yang, Jiwen Wang, Jianhao Gao, and Liangpei Zhang. Deep learning in environmental remote sensing: Achievements and challenges. _Remote Sensing of Environment_ , 241(February):111716, 2020. ISSN 00344257. doi:10.1016/j.rse.2020.111716. URL https://doi.org/10.1016/j.rse.2020.111716.
* Khelifi and Mignotte [2020] Lazhar Khelifi and Max Mignotte. Deep Learning for Change Detection in Remote Sensing Images: Comprehensive Review and Meta-Analysis. _IEEE Access_ , 8:126385–126400, 2020. ISSN 21693536. doi:10.1109/ACCESS.2020.3008036.
* Bithas et al. [2019] Petros S. Bithas, Emmanouel T. Michailidis, Nikolaos Nomikos, Demosthenes Vouyioukas, and Athanasios G. Kanatas. A survey on machine-learning techniques for UAV-based communications. _Sensors (Switzerland)_ , 19(23):1–39, 2019. ISSN 14248220. doi:10.3390/s19235170.
* Schmidhuber [2015] Jürgen Schmidhuber. Deep learning in neural networks: An overview. _Neural Networks_ , 61:85 – 117, 2015. ISSN 0893-6080. doi:10.1016/j.neunet.2014.09.003. URL http://www.sciencedirect.com/science/article/pii/S0893608014002135.
* Khan et al. [2020] Asifullah Khan, Anabia Sohail, Umme Zahoora, and Aqsa Saeed Qureshi. _A survey of the recent architectures of deep convolutional neural networks_ , volume 53. Springer Netherlands, 2020. ISBN 0123456789. doi:10.1007/s10462-020-09825-6. URL https://doi.org/10.1007/s10462-020-09825-6.
* Nwankpa et al. [2018] Chigozie Nwankpa, Winifred Ijomah, Anthony Gachagan, and Stephen Marshall. Activation functions: Comparison of trends in practice and research for deep learning. _arXiv preprint arXiv:1811.03378_ , 2018.
* Naitzat et al. [2020] Gregory Naitzat, Andrey Zhitnikov, and Lek Heng Lim. Topology of deep neural networks. _Journal of Machine Learning Research_ , 21:1–40, 2020. ISSN 15337928.
* Hinton et al. [2012] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. _CoRR_ , abs/1207.0580, 2012. URL http://arxiv.org/abs/1207.0580.
* Ruder [2017] Sebastian Ruder. An overview of gradient descent optimization algorithms, 2017.
* Minaee et al. [2020a] Shervin Minaee, Yuri Boykov, Fatih Porikli, Antonio Plaza, Nasser Kehtarnavaz, and Demetri Terzopoulos. Image segmentation using deep learning: A survey, 2020a.
* Foody [2020] Giles M. Foody. Explaining the unsuitability of the kappa coefficient in the assessment and comparison of the accuracy of thematic maps obtained by image classification. _Remote Sensing of Environment_ , 239(December 2019):111630, 2020. ISSN 00344257. doi:10.1016/j.rse.2019.111630. URL https://doi.org/10.1016/j.rse.2019.111630.
* Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In _Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1_ , NIPS’12, page 1097–1105, Red Hook, NY, USA, 2012. Curran Associates Inc.
* Hochreiter and Schmidhuber [1997] S. Hochreiter and J. Schmidhuber. Long short-term memory. _Neural Computation_ , 9, 1997. doi:10.1162/neco.1997.9.8.1735.
* Ienco et al. [2017] D. Ienco, R. Gaetano, C. Dupaquier, and P. Maurel. Land cover classification via multitemporal spatial data by deep recurrent neural networks. _IEEE Geoscience and Remote Sensing Letters_ , 14(10):1685–1689, 2017. doi:10.1109/LGRS.2017.2728698.
* Ho Tong Minh et al. [2018] D. Ho Tong Minh, D. Ienco, R. Gaetano, N. Lalande, E. Ndikumana, F. Osman, and P. Maurel. Deep recurrent neural networks for winter vegetation quality mapping via multitemporal sar sentinel-1. _IEEE Geoscience and Remote Sensing Letters_ , 15(3):464–468, 2018. doi:10.1109/LGRS.2018.2794581.
* Feng et al. [2020] Quanlong Feng, Jianyu Yang, Yiming Liu, Cong Ou, Dehai Zhu, Bowen Niu, Jiantao Liu, and Baoguo Li. Multi-temporal unmanned aerial vehicle remote sensing for vegetable mapping using an attention-based recurrent convolutional neural network. _Remote Sensing_ , 12(10), 2020. ISSN 20724292. doi:10.3390/rs12101668.
* Goodfellow et al. [2014] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014.
* Lin et al. [2017a] D. Lin, K. Fu, Y. Wang, G. Xu, and X. Sun. Marta gans: Unsupervised representation learning for remote sensing image classification. _IEEE Geoscience and Remote Sensing Letters_ , 14(11):2092–2096, 2017a. doi:10.1109/LGRS.2017.2752750.
* Isola et al. [2018] P. Isola, Jun-Yan Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks, 2018.
* Wu et al. [2020a] Xiongwei Wu, Doyen Sahoo, and Steven C.H. Hoi. Recent advances in deep learning for object detection. _Neurocomputing_ , 396:39 – 64, 2020a. ISSN 0925-2312. doi:https://doi.org/10.1016/j.neucom.2020.01.085.
* Sharma and Mir [2020] Vipul Sharma and Roohie Naaz Mir. A comprehensive and systematic look up into deep learning based object detection techniques: A review. _Computer Science Review_ , 38:100301, 2020. ISSN 1574-0137. doi:https://doi.org/10.1016/j.cosrev.2020.100301.
* Lathuilière et al. [2020] S. Lathuilière, P. Mesejo, X. Alameda-Pineda, and R. Horaud. A comprehensive analysis of deep regression. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 42(9):2065–2081, 2020. doi:10.1109/TPAMI.2019.2910523.
* Simonyan and Zisserman [2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In _International Conference on Learning Representations_ , page 14, 2015.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_ , 2016-December:770–778, 2016. ISSN 10636919. doi:10.1109/CVPR.2016.90.
* Zou et al. [2015] Q. Zou, L. Ni, T. Zhang, and Q. Wang. Deep learning based feature selection for remote sensing scene classification. _IEEE Geoscience and Remote Sensing Letters_ , 12(11):2321–2325, 2015. doi:10.1109/LGRS.2015.2475299.
* Zhao et al. [2019] Zhong-Qiu Zhao, Peng Zheng, Shou-Tao Xu, and Xindong Wu. Object detection with deep learning: A review. _IEEE transactions on neural networks and learning systems_ , 30(11):3212—3232, November 2019. ISSN 2162-237X. doi:10.1109/tnnls.2018.2876865.
* Liu et al. [2019] L. Liu, W. Ouyang, X. Wang, W. P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen. Deep learning for generic object detection: A survey. _International Journal of Computer Vision_ , pages 261–318, 2019.
* Cai and Vasconcelos [2018] Z. Cai and N. Vasconcelos. Cascade r-cnn: Delving into high quality object detection. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 6154–6162, 2018. doi:10.1109/CVPR.2018.00644.
* Li et al. [2019] Y. Li, Y. Chen, N. Wang, and Z. Zhang. Scale-aware trident networks for object detection. In _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_ , pages 6053–6062, 2019. doi:10.1109/ICCV.2019.00615.
* Lu et al. [2019] Xin Lu, Buyu Li, Yuxin Yue, Quanquan Li, and Junjie Yan. Grid R-CNN plus: Faster and better. _CoRR_ , abs/1906.05688, 2019. URL http://arxiv.org/abs/1906.05688.
* Zhang et al. [2020a] Hongkai Zhang, Hong Chang, Bingpeng Ma, Naiyan Wang, and Xilin Chen. Dynamic R-CNN: Towards high quality object detection via dynamic training. _arXiv preprint arXiv:2004.06002_ , 2020a.
* Qiao et al. [2020] Siyuan Qiao, Liang-Chieh Chen, and Alan Yuille. Detectors: Detecting objects with recursive feature pyramid and switchable atrous convolution. _arXiv preprint arXiv:2006.02334_ , 2020.
* He et al. [2016] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 770–778, 2016. doi:10.1109/CVPR.2016.90.
* Xie et al. [2017] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 5987–5995, 2017. doi:10.1109/CVPR.2017.634.
* Wang et al. [2020] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang, W. Liu, and B. Xiao. Deep high-resolution representation learning for visual recognition. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , pages 1–1, 2020. doi:10.1109/TPAMI.2020.2983686.
* Radosavovic et al. [2020] I. Radosavovic, R. Kosaraju, R. Girshick, K. He, and P. Dollar. Designing network design spaces. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 10425–10433, Los Alamitos, CA, USA, 2020. URL https://doi.ieeecomputersociety.org/10.1109/CVPR42600.2020.01044.
* Gao et al. [2021] S. H. Gao, M. M. Cheng, K. Zhao, X. Y. Zhang, M. H. Yang, and P. Torr. Res2net: A new multi-scale backbone architecture. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 43(2):652–662, 2021. doi:10.1109/TPAMI.2019.2938758.
* Zhang et al. [2020b] Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Haibin Lin, Zhi Zhang, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, and Alexander Smola. Resnest: Split-attention networks, 2020b.
* Lin et al. [2017b] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 936–944, 2017b. doi:10.1109/CVPR.2017.106.
* Liu et al. [2018] Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, and Jiaya Jia. Path aggregation network for instance segmentation. In _Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , page 11, 2018.
* Ghiasi et al. [2019] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 7036–7045, 2019.
* Chen et al. [2020] J. Chen, Q. Wu, D. Liu, and T. Xu. Foreground-background imbalance problem in deep object detectors: A review. In _2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)_ , pages 285–290, 2020. doi:10.1109/MIPR49039.2020.00066.
* Pang et al. [2019] Jiangmiao Pang, Kai Chen, Jianping Shi, Huajun Feng, Wanli Ouyang, and Dahua Lin. Libra R-CNN: Towards balanced learning for object detection. _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_ , 2019-June:821–830, 2019. ISSN 10636919. doi:10.1109/CVPR.2019.00091.
* Zhang et al. [2019a] Shifeng Zhang, Cheng Chi, Yongqiang Yao, Zhen Lei, and Stan Z. Li. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. _arXiv preprint arXiv:1912.02424_ , 2019a.
* Wang et al. [2019] Jiaqi Wang, Kai Chen, Shuo Yang, Chen Change Loy, and Dahua Lin. Region proposal by guided anchoring. In _IEEE Conference on Computer Vision and Pattern Recognition_ , page 12, 2019.
* Zhu et al. [2019a] Chenchen Zhu, Yihui He, and Marios Savvides. Feature selective anchor-free module for single-shot object detection. _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_ , 2019-June:840–849, 2019a. ISSN 10636919. doi:10.1109/CVPR.2019.00093.
* Kim and Lee [2020] Kang Kim and Hee Seok Lee. Probabilistic anchor assignment with iou prediction for object detection. In _European Conference on Computer Vision (ECCV)_ , page 22, 2020.
* Li et al. [2020a] Xiang Li, Wenhai Wang, Lijun Wu, Shuo Chen, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. _arXiv preprint arXiv:2006.04388_ , 2020a.
* Cao et al. [2020] Yuhang Cao, Kai Chen, Chen Change Loy, and Dahua Lin. Prime sample attention in object detection. In _IEEE Conference on Computer Vision and Pattern Recognition_ , page 9, 2020.
* Zhang et al. [2020c] Haoyang Zhang, Ying Wang, Feras Dayoub, and Niko Sünderhauf. Varifocalnet: An iou-aware dense object detector. _arXiv preprint arXiv:2008.13367_ , 2020c.
* Duan et al. [2019] Kaiwen Duan, Song Bai, Lingxi Xie, Honggang Qi, Qingming Huang, and Qi Tian. CenterNet: Keypoint triplets for object detection. _Proceedings of the IEEE International Conference on Computer Vision_ , 2019-October:6568–6577, 2019. ISSN 15505499. doi:10.1109/ICCV.2019.00667.
* Law and Deng [2020] Hei Law and Jia Deng. CornerNet: Detecting Objects as Paired Keypoints. _International Journal of Computer Vision_ , 128(3):642–656, 2020. ISSN 15731405. doi:10.1007/s11263-019-01204-1.
* Wang et al. [2020] Jiaqi Wang, Wenwei Zhang, Yuhang Cao, Kai Chen, Jiangmiao Pang, Tao Gong, Jianping Shi, Chen Change Loy, and Dahua Lin. Side-aware boundary localization for more precise object detection. In _European Conference on Computer Vision (ECCV)_ , page 21, 2020\.
* Minaee et al. [2020b] Shervin Minaee, Yuri Boykov, Fatih Porikli, Antonio Plaza, Nasser Kehtarnavaz, and Demetri Terzopoulos. Image segmentation using deep learning: A survey, 2020b.
* Kirillov et al. [2019] A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollár. Panoptic segmentation. In _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 9396–9405, 2019. doi:10.1109/CVPR.2019.00963.
* He et al. [2017] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In _2017 IEEE International Conference on Computer Vision (ICCV)_ , pages 2980–2988, 2017. doi:10.1109/ICCV.2017.322.
* Cai and Vasconcelos [2019] Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: high quality object detection and instance segmentation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2019.
* Chen et al. [2019] Kai Chen, Jiangmiao Pang, Jiaqi Wang, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin. Hybrid task cascade for instance segmentation. In _IEEE Conference on Computer Vision and Pattern Recognition_ , page 10, 2019.
* Kirillov et al. [2020] Alexander Kirillov, Yuxin Wu, Kaiming He, and Ross Girshick. Pointrend: Image segmentation as rendering. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , page 10, June 2020.
* Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_ , 9351:234–241, 2015. ISSN 16113349. doi:10.1007/978-3-319-24574-4_28.
* Badrinarayanan et al. [2017] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 39(12):2481–2495, 2017. ISSN 01628828. doi:10.1109/TPAMI.2016.2644615.
* Chen et al. [2018] Liang Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 40(4):834–848, 2018. ISSN 01628828. doi:10.1109/TPAMI.2017.2699184.
* Nogueira et al. [2019] Keiller Nogueira, Mauro Dalla Mura, Jocelyn Chanussot, William Robson Schwartz, and Jefersson Alex Dos Santos. Dynamic multicontext segmentation of remote sensing images based on convolutional networks. _IEEE Transactions on Geoscience and Remote Sensing_ , 57(10):7503–7520, 2019. ISSN 15580644. doi:10.1109/TGRS.2019.2913861.
* Nogueira et al. [2020] Keiller Nogueira, Gabriel L.S. Machado, Pedro H.T. Gama, Caio C.V. da Silva, Remis Balaniuk, and Jefersson A. dos Santos. Facing erosion identification in railway lines using pixel-wise deep-based approaches. _Remote Sensing_ , 12(4):1–21, 2020. ISSN 20724292. doi:10.3390/rs12040739.
* Hua et al. [2021] Yuansheng Hua, Diego Marcos, Lichao Mou, Xiao Xiang Zhu, and Devis Tuia. Semantic segmentation of remote sensing images with sparse annotations. _IEEE Geoscience and Remote Sensing Letters_ , 2021.
* Wu et al. [2020b] Tianyi Wu, Sheng Tang, Rui Zhang, Juan Cao, and Yongdong Zhang. Cgnet: A light-weight context guided network for semantic segmentation. _IEEE Transactions on Image Processing_ , 30:1169–1179, 2020b.
* Yin et al. [2020] Minghao Yin, Zhuliang Yao, Yue Cao, Xiu Li, Zheng Zhang, Stephen Lin, and Han Hu. Disentangled non-local neural networks, 2020.
* Barbedo et al. [2020] Jayme Garcia Arnal Barbedo, Luciano Vieira Koenigkan, Patrícia Menezes Santos, and Andrea Roberto Bueno Ribeiro. Counting cattle in uav images—dealing with clustered animals and animal/background contrast changes. _Sensors_ , 20(7), 2020. ISSN 1424-8220. doi:10.3390/s20072126. URL https://www.mdpi.com/1424-8220/20/7/2126.
* Gevaert et al. [2018] C.M. Gevaert, C. Persello, F. Nex, and G. Vosselman. A deep learning approach to dtm extraction from imagery using rule-based training labels. _ISPRS Journal of Photogrammetry and Remote Sensing_ , 142:106 – 123, 2018. ISSN 0924-2716. doi:https://doi.org/10.1016/j.isprsjprs.2018.06.001. URL http://www.sciencedirect.com/science/article/pii/S0924271618301643.
|
# Bulk topological states in a new collective dynamics model
Pierre Degond, Institut de Mathématiques de Toulouse; UMR5219; Université de
Toulouse; CNRS; UPS; F-31062 Toulouse Cedex 9, France<EMAIL_ADDRESS>
Antoine Diez, Department of Mathematics, Southern University of Science and
Technology, Shenzhen, 518055, China<EMAIL_ADDRESS>
Mingye Na
###### Abstract
In this paper, we demonstrate the existence of topological states in a new
collective dynamics model. This individual-based model (IBM) describes self-
propelled rigid bodies moving with constant speed and adjusting their rigid-
body attitude to that of their neighbors. In previous works, a macroscopic
model has been derived from this IBM in a suitable scaling limit. In the
present work, we exhibit explicit solutions of the macroscopic model
characterized by a non-trivial topology. We show that these solutions are well
approximated by the IBM during a certain time but then the IBM transitions
towards topologically trivial states. Using a set of appropriately defined
topological indicators, we reveal that the breakage of the non-trivial
topology requires the system to go through a phase of maximal disorder. We
also show that similar but topologically trivial initial conditions result in
markedly different dynamics, suggesting that topology plays a key role in the
dynamics of this system.
Keywords: individual-based model, macroscopic model, self-organization,
topological phase transition, winding number, order parameter
AMS subject classification: 22E70, 35Q70, 37B25, 60J76, 65C35, 70F10
Acknowledgements: Part of this research was done when PD and MN were
affiliated with the Department of Mathematics, Imperial College London, London, SW7
2AZ, United Kingdom. PD acknowledges support by the Engineering and Physical
Sciences Research Council (EPSRC) under grants no. EP/M006883/1 and
EP/P013651/1, by the Royal Society and the Wolfson Foundation through a Royal
Society Wolfson Research Merit Award no. WM130048. The work of AD is supported
by an EPSRC-Roth scholarship cofunded by the Engineering and Physical
Sciences Research Council and the Department of Mathematics at Imperial
College London.
Data statement: no new data were collected in the course of this research.
###### Contents
1. 1 Introduction
2. 2 Models
1. 2.1 The Individual-Based body-alignment Model
1. 2.1.1 Description of the model
2. 2.1.2 Numerical simulations of the IBM
3. 2.1.3 Relation with other collective dynamics models
2. 2.2 The macroscopic body-alignment model
1. 2.2.1 Description of the model
2. 2.2.2 Interpretation of the model
3. 2.2.3 Relation with other models
3. 3 Special solutions of the macroscopic model
1. 3.1 Three classes of explicit solutions
1. 3.1.1 Flocking state
2. 3.1.2 Milling orbits
3. 3.1.3 Helical traveling wave
4. 3.1.4 Generalized topological solutions
2. 3.2 Some properties of these special solutions
3. 3.3 Agreement between the models
1. 3.3.1 The IBM converges to the macroscopic model as $N\to\infty$
2. 3.3.2 Quantitative comparison between the models
4. 3.4 Topology
4. 4 Order parameters and topological indicators
1. 4.1 Global order parameter
2. 4.2 Roll angle
1. 4.2.1 Definition
2. 4.2.2 Roll polarization
3. 4.2.3 Indicators of RPZ-curve morphology
5. 5 Topological phase transitions: are the MO and HW topologically protected?
1. 5.1 Initial conditions
1. 5.1.1 Milling orbit
2. 5.1.2 Helical traveling wave
2. 5.2 Observation of topological phase transitions
3. 5.3 Reproducibility
4. 5.4 Robustness against perturbations of the initial conditions
5. 5.5 Critique
6. 6 Discussion and conclusion
7. A List of supplementary videos
8. B Quaternion framework
9. C Numerical methods
10. D Derivation of the macroscopic model
11. E Alternate expressions of $\delta$
12. F MO, HW, GS and generalized HW solutions
1. F.1 Proof of Lemma 3.1
2. F.2 Generalized HW and proof of Lemma 3.2
3. F.3 Proof of Lemma 3.3
4. F.4 GOP of the MO and generalized HW
13. G Convergence rate of $|\mathrm{d}\bar{\varphi}/\mathrm{d}t|$ as $N\to\infty$
14. H Rare events
1. H.1 From milling orbit to helical wave
2. H.2 From milling to flocking via a helical wave state
## 1 Introduction
Systems of particles (or agents) which exhibit self-organized collective
behavior are ubiquitous in the living world at all scales, from bird flocks
[71] to sperm [27] or bacterial colonies [29]. Examples are also found in
social sciences [18, 39] and in inert matter [15]. In such systems, the agents
interact locally with a limited number of neighbors through rather simple
rules such as attraction, repulsion or alignment [3, 26, 52] without any
leader or centralized control. When the number of agents becomes large, vast
structures encompassing many agents appear, such as clusters [73, 90],
traveling bands [23], vortices [24, 29], lanes [25], etc. As there is no
direct or apparent relation between these structures and the nature of the
agents' interactions, such a phenomenon is named “emergence”. Its study has
stimulated a vast literature (see e.g. [90] for a review).
There are mainly two levels of description of particle systems: the most
detailed one consists of individual-based models (IBM) where the agents
dynamics are described by coupled ordinary or stochastic differential
equations. When the number of agents becomes large, a macroscopic description
in terms of average quantities such as the agents mean density or velocity is
preferred. The rigorous link between these two levels of description involves
two successive limits by which the number of agents is first sent to infinity
(mean-field limit) and then, the system size relative to the typical
interaction distance between the agents is also sent to infinity (hydrodynamic
limit), see e.g. [21, 31]. In collective dynamics, particles are capable of
self-propulsion by transforming an internal source of chemical energy into
motion [90]. There are two main classes of IBM of self-propelled particles.
The first class is based on the Cucker-Smale model [4, 28, 55, 56] where self-
propulsion is treated as an external force. The second class is based on the
Vicsek model [2, 19, 23, 29, 41, 45, 73, 89] where self-propulsion is modeled
by imposing the norm of the particle velocity to be a constant. At the mean-
field or hydrodynamic levels, the two frameworks give rise to corresponding
models (see e.g. [1, 5] for Cucker-Smale type models and [10, 34, 41, 45, 79,
87] for Vicsek type models). The two categories are linked by an asymptotic
limit [12, 13]. Of course, there are many variants of these models and we
refer to [8, 9, 17, 20, 42, 46, 75] for a non-exhaustive set of examples.
Recently, a series of studies has investigated the existence of topological
states in collective dynamics. Topological states have appeared with the
quantum Hall effect [67, 69, 76, 86] which relies on so-called conducting
chiral edge states: when a sample of a 2-dimensional insulator is placed in a
magnetic field, its bulk conductance is nil but a current can flow around its
edges in only one direction (hence the “chiral” terminology). Then, materials
that exhibit chiral edge states without a magnetic field have been discovered,
the so-called “topological insulators” [58, 77, 80]. Chiral edge states are
robust against perturbations because of their non-trivial topology, which can
be characterized by an integer, the winding number. Any destruction of the
chiral edge state would require a finite jump of this integer, which consumes
a finite amount of energy. Hence lower energy perturbations will fail to
destroy the chiral edge state. This property is of strategic interest for
various applications such as quantum computers. Recently a series of works
have explored the occurrence of topological states in collective dynamics (see
e.g. [83, 84, 85]). They are based on numerical simulations of the Toner and
Tu model [87], which is a continuum analog of the Vicsek model [89].
Investigating appropriate geometrical configurations (a sphere in [83], a
network of rings in [84, 85]), they show that linearized perturbations of the
stationary state (i.e. sound waves) generate chiral edge states which
propagate uni-directionally, revealing an underpinning non-trivial topology.
However, the question of whether this effect could be realized with a finite
(even large) number of discrete particles and whether the topological states
would survive the noise induced by this finite particle number long enough is
not investigated.
In this paper, we demonstrate the existence of non-trivial bulk topological
states in a new collective dynamics model. Bulk states propagate in the whole
domain, as opposed to edge states which are localized at the boundary. The
collective dynamics model studied here was first proposed in [35] and
later analyzed and expanded in [32, 37, 38]. Referred to below as the “Body-
Alignment Individual-Based Model” (BA-IBM or IBM for short), it describes
self-propelled rigid bodies moving with constant speed and trying to adjust
their rigid body attitude to that of their neighbors. In [37, 35] the BA-IBM
was based on Stochastic Differential Equations (SDE) and a macroscopic model
named the “Self-Organized Hydrodynamics for Body-orientation (SOHB)” was
derived. In [38, 32], SDE were replaced by Piecewise Deterministic Markov
Processes (PDMP) in the IBM but the macroscopic model remained the SOHB model
(with possibly different coefficients). In [32], a variant of the BA-IBM was
shown to exhibit phase transitions which were rigorously studied. In the
present work, we derive explicit solutions of the SOHB model which exhibit
striking non-trivial topologies revealed by non-zero winding numbers. We
explore how these non-trivial topologies are maintained at the level of the
IBM by solving the PDMP of [38]. In particular, we observe that, due to noise
induced by the finite particle number, topological phase transitions from
states with non-trivial topology to states with trivial one may occur and we
study these phase transitions in detail. Using a set of appropriately defined
topological indicators, we reveal that the breakage of the non-trivial
topology requires the system to go through a phase of maximal disorder. We
also show that similar but topologically trivial initial conditions result in
markedly different dynamics, suggesting that topology plays a key role in the
dynamics of this system. We are led to question the possible existence of
topological protection against perturbations as mentioned above for
topological insulators. Compared to previous works on topological states in
collective dynamics, we deal with bulk states instead of edge states and we
explore them at the level of the IBM and not just at the continuum level,
which is closer to realistic particle systems. The present work adds a new
item to the list of collective dynamics models exhibiting topological states.
The topological protection concept could bring new perspectives to poorly
understood questions such as the robustness of morphogenesis or the emergence
of symmetries in growing organisms.
The present model belongs to the category of Vicsek-like models in the sense
that it introduces a geometrical constraint within the degrees of freedom of
the particles. In the Vicsek model, the particle velocities were constrained
to belong to the unit sphere (after convenient normalization). In the present
IBM, the particles carry an orthonormal frame, or equivalently, a rotation
matrix, that describes their body attitude. Thus their degrees of freedom are
constrained to belong to the manifold SO${}_{3}({\mathbb{R}})$ of $3\times 3$
rotation matrices. Fig. 1 highlights the difference between the Vicsek and
body orientation models. The left picture shows alignment of two agents in the
Vicsek sense, while the right picture shows alignment in the body-alignment
sense. We mention that models involving full body attitudes have already been
considered in [20, 59, 60, 61] in the context of flocking, but the alignment
rules were different and essentially based on a velocity orientation (and not
full body attitude) alignment.
Figure 1: Vicsek model versus body-alignment model. Left: polar alignment of
velocity orientations (red vectors) of two agents. Right: alignment of body-
orientations: in addition to its velocity orientation (red), each agent has
two other axes (green and blue), the three vectors forming a direct orthogonal
frame.
We complete this introduction by a review of the mathematical literature on
the Vicsek model and the BA-IBM. The mean-field limit of the IBM has been
proven in [10] for the Vicsek model and in [43] for the body orientation
model. Existence theory for the mean-field Vicsek model is available in [14,
48, 51] but the corresponding theory for the mean-field body orientation model
is still open. The mean-field kinetic models exhibit phase transitions which
have been studied in [33, 34, 49] and [32] for the Vicsek and body orientation
models respectively. The numerical approximation of the mean-field kinetic
model has been undertaken for the Vicsek model only in [50, 54]. The
derivation of macroscopic equations from the mean-field Vicsek kinetic
equations has first been formally achieved in [41] and later rigorously proved
in [65]. Corresponding works for the body alignment model are only formal [35,
37, 38]. Existence theory for the hydrodynamic models derived from the Vicsek
model can be found in [40, 91] and numerical methods in [45, 50, 74]. Both
questions are still open for the body orientation model.
The organization of this paper is as follows. Section 2 is devoted to the
exposition of the IBM and macroscopic models. Then explicit solutions of the
macroscopic model are derived in Section 3 and are shown to exhibit non-
trivial topology. They also serve as benchmarks to show that the macroscopic
model is an accurate approximation of the IBM. After some time, however, the IBM
departs from the special solutions of the macroscopic model and undergoes a
topological phase transition. The study of these phase transitions requires
appropriate topological indicators which are developed in Section 4. Then, the
topological phase transitions are analyzed in Section 5. A discussion and some
open questions raised by these observations can be found in Section 6. The
supplementary material (SM) collects additional information: a list of
supplementary videos (Section A), a summary of the quaternion framework
(Section B), a description of the numerical methods (Section C), a summary of
the derivation of the macroscopic models (Section D) and finally a derivation
of the explicit solutions presented in Section 3 (Section F).
## 2 Models
### 2.1 The Individual-Based body-alignment Model
#### 2.1.1 Description of the model
In this section, we present the Individual-Based body-alignment Model (IBM).
This model was first proposed in [38]. We consider $N$ particles (or
individuals, or agents) indexed by $k\in\\{1,\ldots,N\\}$ whose spatial
locations are denoted by $\mathbf{X}_{k}(t)\in{\mathbb{R}}^{3}$ where
$t\in[0,\infty)$ is the time. A direct orthonormal frame
$\\{\Omega_{k}(t),\mathbf{u}_{k}(t),\mathbf{v}_{k}(t)\\}$ is attached to each
particle (i.e.
$\Omega_{k},\,\mathbf{u}_{k},\,\mathbf{v}_{k}\in{\mathbb{S}}^{2}$,
$\Omega_{k}\cdot\mathbf{u}_{k}=0$ and
$\mathbf{v}_{k}=\Omega_{k}\times\mathbf{u}_{k}$). Likewise, if
$(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})$ is a fixed direct orthonormal
reference frame, we define $A_{k}(t)$ to be the unique element of the special
orthonormal group SO${}_{3}({\mathbb{R}})$ which maps
$(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})$ onto
$(\Omega_{k}(t),\mathbf{u}_{k}(t),\mathbf{v}_{k}(t))$. We will choose
$(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})$ once for all and write
$A_{k}(t)=[\Omega_{k}(t),\mathbf{u}_{k}(t),\mathbf{v}_{k}(t)]$. This will be
referred to as the local particle frame or as the particle’s body orientation.
$\Omega_{k}(t)$ is the self-propulsion direction: Particle $k$ moves in a
straight line in the direction of $\Omega_{k}$ with unchanged local frame
$A_{k}$ except at exponentially distributed times at which the local frame
jumps and adjusts itself to the average neighbors’ local frame up to some
noise. The motion of the particles is thus described by the functions
$[0,\infty)\ni
t\mapsto(\mathbf{X}_{k}(t),A_{k}(t))\in{\mathbb{R}}^{3}\times\mbox{SO}_{3}({\mathbb{R}})$
for $k\in\\{1,\ldots,N\\}$.
We first describe how the average neighbors’ local frame is defined. We
introduce a fixed observation (or sensing) kernel $K$:
${\mathbb{R}}^{3}\ni\mathbf{x}\mapsto K(\mathbf{x})\in[0,\infty)$. We assume
that $K$ is a radial function (i.e. there exists $\tilde{K}$: $[0,\infty)\ni
r\mapsto\tilde{K}(r)\in[0,\infty)$ such that
$K(\mathbf{x})=\tilde{K}(|\mathbf{x}|)$, where $|\mathbf{x}|$ is the Euclidean
norm of $\mathbf{x}$). For a collection of $N$ particles
$\\{(\mathbf{X}_{k},A_{k})\\}_{k\in\\{1,\ldots,N\\}}\in({\mathbb{R}}^{3}\times\mbox{SO}_{3}({\mathbb{R}}))^{N}$,
we define the local flux as the following $3\times 3$ matrix:
$J_{k}=\frac{1}{N}\sum_{j=1}^{N}K(\mathbf{X}_{k}-\mathbf{X}_{j})\,A_{j}.$
Typically, we can think of $K(\mathbf{x})$ as the indicator function of the
ball centered at zero with radius $R$. In this case, $J_{k}$ is just the sum
of the matrices $A_{j}$ of all particles $j$ located within a distance $R$ to
Particle $k$, divided by the total number of particles $N$. However, more
sophisticated sensing functions can be used to account for the fact that e.g.
distant particles will contribute to $J_{k}$ less than neighboring particles.
In general, $J_{k}$ is not a rotation matrix. To recover a rotation matrix, we
need to map $J_{k}$ back onto the manifold SO${}_{3}({\mathbb{R}})$. To do so,
the space $\mathcal{M}_{3}({\mathbb{R}})$ of $3\times 3$ matrices, is equipped
with the inner product:
$A\cdot B:=\frac{1}{2}\mbox{Tr}(A^{\mathrm{T}}B),$ (1)
where Tr denotes the trace operator and $A^{\mathrm{T}}$ is the transpose of
the matrix $A$. Now, we define the average neighbors’ local frame
${\mathbb{A}}_{k}$ of Particle $k$ as follows:
${\mathbb{A}}_{k}:=\mbox{arg\,max}_{A\in\mbox{\scriptsize{SO}}_{3}({\mathbb{R}})}A\cdot
J_{k}.$ (2)
This expression stands for the element
${\mathbb{A}}_{k}\in\mbox{SO}_{3}({\mathbb{R}})$ that maximizes the function
$\mbox{SO}_{3}({\mathbb{R}})\ni A\mapsto A\cdot J_{k}\in{\mathbb{R}}$. The
maximization procedure (2) has a unique solution as soon as $J_{k}$ is not
singular, i.e. $\det J_{k}\not=0$ where $\det$ stands for the determinant.
Since the singular matrices form a zero-measure set in
$\mathcal{M}_{3}({\mathbb{R}})$ it is legitimate to assume that, except for a
zero-measure set of initial data, this situation will not occur. Furthermore,
when $\det J_{k}>0$, ${\mathbb{A}}_{k}$ is nothing but the unique rotation
matrix involved in the polar decomposition of $J_{k}$.
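As a concrete illustration, the maximization (2) is an instance of the orthogonal Procrustes problem and can be solved with a singular value decomposition. The following is a minimal sketch assuming NumPy; the function name is ours and is not taken from the authors' code.

```python
import numpy as np

def project_to_SO3(J):
    """Solve argmax_{A in SO(3)} (1/2) Tr(A^T J), i.e. the maximization (2).

    Via the SVD J = U S V^T, the maximizer is A = U diag(1, 1, det(U V^T)) V^T,
    which coincides with the rotation factor of the polar decomposition of J
    whenever det J > 0.
    """
    U, _, Vt = np.linalg.svd(J)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# Example: a noisy average of rotations is generally not a rotation itself.
rng = np.random.default_rng(0)
J = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
A = project_to_SO3(J)
```

The determinant correction in `D` keeps the result in SO₃(ℝ) even when $\det J<0$; for $\det J>0$ it reduces to the polar factor mentioned above.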
We let the particles evolve according to the following Piecewise Deterministic
Markov Process (PDMP).
* •
To each agent $k\in\\{1,\ldots,N\\}$ is attached an increasing sequence of
random times (jump times) $T_{k}^{1},\,T_{k}^{2},\ldots$ such that the
intervals between two successive times are independent and follow an
exponential law with constant parameter $\nu>0$ (Poisson process). At each
jump time $T_{k}^{n}$, the function $\mathbf{X}_{k}$ is continuous and the
function $A_{k}$ has a discontinuity between its left and right states
respectively denoted by $A_{k}(T_{k}^{n}-0)$ and $A_{k}(T_{k}^{n}+0)$.
* •
Between two jump times $(T_{k}^{n},T_{k}^{n+1})$, the evolution is
deterministic: the orientation of Agent $k$ does not change and it moves in a
straight line at speed $c_{0}>0$ in the direction
$A_{k}(T_{k}^{n}+0)\,\mathbf{e}_{1}$, i.e. for all
$t\in[T_{k}^{n},T_{k}^{n+1})$, we have
$\mathbf{X}_{k}(t)=\mathbf{X}_{k}(T_{k}^{n})+c_{0}\,(t-T_{k}^{n})\,A_{k}(t)\,\mathbf{e}_{1},\,\,\,A_{k}(t)=A_{k}(T_{k}^{n}+0).$
(3)
* •
To compute $A_{k}(T_{k}^{n}+0)$ from $A_{k}(T_{k}^{n}-0)$, we compute the
local flux defined at time $T_{k}^{n}-0$ given by:
$J_{k}^{n-}:=\frac{1}{N}\sum_{j=1}^{N}K\big{(}\mathbf{X}_{k}(T_{k}^{n})-\mathbf{X}_{j}(T_{k}^{n})\big{)}A_{j}(T_{k}^{n}-0),$
(4)
having in mind that $A_{j}(T_{k}^{n}-0)=A_{j}(T_{k}^{n})$ for $j\not=k$. From
$J_{k}^{n-}$, which we assume is a non-singular matrix, we compute
${\mathbb{A}}_{k}^{n}$ as the unique solution of the maximization problem (2)
(with $J_{k}$ replaced by $J_{k}^{n-}$). Then, $A_{k}(T_{k}^{n}+0)$ is drawn
from a von Mises distribution:
$A_{k}(T_{k}^{n}+0)\sim M_{{\mathbb{A}}_{k}^{n}}.$ (5)
The von Mises distribution on SO${}_{3}({\mathbb{R}})$ with parameter
${\mathbb{A}}\in$ SO${}_{3}({\mathbb{R}})$ is defined to be the probability
density function:
$M_{{\mathbb{A}}}(A):=\frac{\mathrm{e}^{\kappa{\mathbb{A}}\cdot
A}}{\int_{\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}\mathrm{e}^{\kappa{\mathbb{A}}\cdot
A^{\prime}}\mathrm{d}A^{\prime}},$ (6)
where $\kappa>0$ is a given parameter called the concentration parameter,
or inverse of the noise intensity. The von Mises distribution, also known in
the literature as the matrix Fisher distribution [66, 70], is an analog (in
the case of SO${}_{3}({\mathbb{R}})$) of the Gaussian distribution in a flat
space. The new orientation of Agent $k$ at time $T_{k}^{n}$ can therefore be
interpreted as a small random perturbation of the average local orientation
given by ${\mathbb{A}}_{k}^{n}$, where the perturbation size is measured by
$1/\sqrt{\kappa}$.
In Formula (6) and in the remainder of this paper, the manifold
SO${}_{3}({\mathbb{R}})$ is endowed with its unique normalized Haar measure
defined for any test function $\varphi$ by:
$\int_{\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}\varphi(A)\,\mathrm{d}A:=\frac{2}{\pi}\int_{0}^{\pi}\int_{\mathbb{S}^{2}}\varphi({\mathcal{A}}(\theta,\mathbf{n}))\,\sin^{2}(\theta/2)\,\mathrm{d}\theta\,\mathrm{d}\mathbf{n},$
(7)
where $\mathrm{d}\mathbf{n}$ is the uniform probability measure on the sphere
$\mathbb{S}^{2}$. Here, a rotation matrix
$A\equiv{\mathcal{A}}(\theta,\mathbf{n})$ is parametrized by its rotation
angle $\theta\in[0,\pi]$ and its axis $\mathbf{n}\in\mathbb{S}^{2}$ through
Rodrigues’ formula:
${\mathcal{A}}(\theta,\mathbf{n}):=I_{3}+\sin\theta\,[\mathbf{n}]_{\times}+(1-\cos\theta)\,[\mathbf{n}]_{\times}^{2}=\exp(\theta[\mathbf{n}]_{\times})$
(8)
with $\mathbf{n}=(n_{1},n_{2},n_{3})^{\mathrm{T}}$ and $I_{3}$ is the $3\times
3$ identity matrix. For any vector
$\mathbf{w}=(w_{1},w_{2},w_{3})^{\mathrm{T}}\in{\mathbb{R}}^{3}$,
$[\mathbf{w}]_{\times}$ is the antisymmetric matrix of the linear map
${\mathbb{R}}^{3}\ni\mathbf{u}\mapsto\mathbf{w}\times\mathbf{u}$ (where
$\times$ denotes the cross product) which has the following expression:
$[\mathbf{w}]_{\times}:=\left(\begin{array}[]{ccc}0&-w_{3}&w_{2}\\\
w_{3}&0&-w_{1}\\\ -w_{2}&w_{1}&0\end{array}\right).$ (9)
Additional details on the structure of $SO_{3}({\mathbb{R}})$ can be found for
instance in [64]. The IBM (3), (5) is schematically represented in Fig. 2.
Figure 2: Schematic representation of the PDMP described in the text: the
motion of Particle $k$ is represented in physical space as the black broken
dotted line. The body frame $A_{k}$ is represented with $\Omega_{k}$ in red,
$\mathbf{u}_{k}$ in green and $\mathbf{v}_{k}$ in blue. Each angular point of
the trajectory corresponds to one of the jump times $T_{k}^{n}$. Between two
jump times, the trajectory is the straight line spanned by $\Omega_{k}$ and
the body frame stays constant. The jump dynamics is depicted at time
$T_{k}^{n}$. At this time, the observation region is colored in yellow and
body frames of the other particles present in this region are depicted in
light blue. The averaged body frame ${\mathbb{A}}_{k}^{n}$ is depicted with
thick lightly colored arrows. The body frame before the jump
$A_{k}(T_{k}^{n}-0)$ is drawn in broken lines whereas that after the jump
$A_{k}(T_{k}^{n}+0)$ is drawn in plain lines. $A_{k}(T_{k}^{n}+0)$ is close,
but not equal to ${\mathbb{A}}_{k}^{n}$ because of the noise intensity
proportional to $1/\kappa$. For clarity, the frames involved in the
description of the jump are magnified.
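To make the parametrization (7)–(9) and the jump law (5)–(6) concrete, here is a minimal Python sketch assuming NumPy: `rodrigues` implements (8)–(9), and `sample_von_mises_SO3` draws from (6) by rejection sampling in the angle-axis chart, using the fact that for $B=\mathbb{A}^{\mathrm{T}}A$ the angle has density proportional to $\exp(\kappa(1+2\cos\theta)/2)\sin^{2}(\theta/2)$ on $[0,\pi]$ with a uniform axis, by Haar invariance. The function names and the uniform-proposal rejection scheme are ours; the SiSyPHE code may proceed differently.

```python
import numpy as np

def hat(w):
    # Antisymmetric matrix [w]_x of Eq. (9): hat(w) @ u == np.cross(w, u).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(theta, n):
    # Rotation of angle theta about the unit axis n, Eqs. (8)-(9).
    N = hat(n)
    return np.eye(3) + np.sin(theta) * N + (1.0 - np.cos(theta)) * (N @ N)

def sample_von_mises_SO3(Abar, kappa, rng):
    """Draw A ~ M_Abar of Eq. (6) by rejection in the chart (7)-(8).

    Writing A = Abar @ B, Haar invariance gives the angle theta of B the
    density prop. to exp(kappa*(1 + 2*cos(theta))/2) * sin(theta/2)**2 on
    [0, pi], with an axis uniformly distributed on the sphere.
    """
    def log_f(t):
        return 0.5 * kappa * (1.0 + 2.0 * np.cos(t)) + 2.0 * np.log(np.sin(0.5 * t))
    log_env = log_f(np.linspace(1e-6, np.pi, 4096)).max()  # rejection envelope
    while True:
        theta = rng.uniform(1e-12, np.pi)
        if np.log(rng.uniform()) < log_f(theta) - log_env:
            break
    n = rng.standard_normal(3)
    n /= np.linalg.norm(n)  # uniform axis on the sphere
    return Abar @ rodrigues(theta, n)

# Example: a draw concentrated near the mean frame for moderate noise 1/kappa.
rng = np.random.default_rng(0)
A = sample_von_mises_SO3(np.eye(3), 50.0, rng)
```

For large $\kappa$ the uniform proposal becomes inefficient; a sharper proposal concentrated around the mode would be preferable in production code.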
#### 2.1.2 Numerical simulations of the IBM
Unless otherwise specified, throughout this paper, a cubic box of side length
$L$ with periodic boundary conditions is used. As sensing kernel $K$, we use
the indicator function of the ball centered at $0$ and of radius $R$. Thus, an
agent interacts with all its neighbors at a distance less than $R$ (radius of
interaction). Table 1 summarizes the model parameters.
Parameter | Symbol
---|---
Number of particles | $N$
Computational box side length | $L$
Interaction radius | $R$
Particle speed | $c_{0}$
Concentration parameter | $\kappa$
Alignment frequency | $\nu$
Table 1: Parameters of the IBM (3), (5).
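The time-discretization of the PDMP (3)–(5) with these parameters can be sketched as follows. This is an illustrative Python/NumPy sketch with our own function names, not the SiSyPHE implementation; in particular, for brevity the exact von Mises draw (5) is replaced by a small angle-axis perturbation of scale $1/\sqrt{\kappa}$, which is only an approximation of (6).

```python
import numpy as np

def rodrigues(theta, n):
    # Rotation of angle theta about the unit axis n, Eqs. (8)-(9).
    N = np.array([[0.0, -n[2], n[1]], [n[2], 0.0, -n[0]], [-n[1], n[0], 0.0]])
    return np.eye(3) + np.sin(theta) * N + (1.0 - np.cos(theta)) * (N @ N)

def project_to_SO3(J):
    # Maximizer of Eq. (2), computed from the SVD of the local flux J.
    U, _, Vt = np.linalg.svd(J)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

def pdmp_step(X, A, dt, L=1.0, R=0.075, c0=0.2, kappa=20.0, nu=5.0, rng=None):
    """One time-discretized step of the PDMP (3)-(5) for N particles.

    X: (N, 3) positions, A: (N, 3, 3) body frames. Each agent jumps with
    probability 1 - exp(-nu * dt) over the step.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Free transport (3): move at speed c0 along Omega_k = A_k e1.
    X = (X + c0 * dt * A[:, :, 0]) % L
    # Periodic distances and indicator sensing kernel of radius R.
    diff = X[:, None, :] - X[None, :, :]
    diff -= L * np.round(diff / L)
    near = np.linalg.norm(diff, axis=-1) < R
    A_new = A.copy()
    for k in np.flatnonzero(rng.uniform(size=len(X)) < 1.0 - np.exp(-nu * dt)):
        J = A[near[k]].mean(axis=0)      # local flux (4), up to normalization
        axis = rng.standard_normal(3)
        axis /= np.linalg.norm(axis)
        theta = abs(rng.normal(0.0, 1.0 / np.sqrt(kappa)))  # stand-in for (5)
        A_new[k] = project_to_SO3(J) @ rodrigues(theta, axis)
    return X, A_new

# Example: 50 particles, uniformly random positions and frames.
rng = np.random.default_rng(0)
X0 = rng.uniform(0.0, 1.0, (50, 3))
A0 = np.stack([project_to_SO3(rng.standard_normal((3, 3))) for _ in range(50)])
X1, A1 = pdmp_step(X0, A0, dt=0.05, rng=rng)
```

Since the argmax in (2) is unaffected by positive scalar factors, the mean over neighbors may replace the $1/N$-normalized sum in (4).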
For the numerical simulations presented in this paper, we have used the
convenient framework offered by quaternions. Indeed, there is a group
isomorphism between $\mathrm{SO}_{3}({\mathbb{R}})$ and ${\mathbb{H}}/\\{\pm
1\\}$ where ${\mathbb{H}}$ is the group of unit quaternions. We can express
the IBM (3), (5) using this representation (see [38] and Section B). Roughly
speaking, body-alignment as described here is equivalent to nematic alignment
of the corresponding quaternions (nematic alignment of a unit quaternion
$\mathbf{q}$ to the mean direction $\mathbf{Q}$ is unchanged if $\mathbf{q}$
is replaced by $-\mathbf{q}$, as opposed to polar alignment where the result
depends on the sign of $\mathbf{q}$). This is because a given rotation can be
represented by two opposite quaternions and thus, the outcome of the alignment
process should not depend on the choice of this representative. The numerical
algorithm is described in Section C. Additionally, the quaternion framework
also suggests to use order parameters derived from nematic alignment dynamics
(such as in liquid crystal polymers). We shall use this analogy to define
appropriate order parameters in Section 4.1.
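The nematic average alluded to above can be made concrete with a standard recipe for quaternions: average the sign-invariant projectors $\mathbf{q}\mathbf{q}^{\mathrm{T}}$ and take the leading eigenvector. This is a sketch assuming NumPy; `nematic_mean` is our name, not a SiSyPHE function.

```python
import numpy as np

def nematic_mean(quats):
    """Sign-invariant average of unit quaternions (one per row of `quats`).

    q and -q represent the same rotation, so we average the rank-one
    projectors q q^T (invariant under q -> -q) and return the leading
    eigenvector of the resulting 4x4 matrix.
    """
    Q = np.einsum('ni,nj->ij', quats, quats) / len(quats)
    vals, vecs = np.linalg.eigh(Q)       # eigenvalues in ascending order
    return vecs[:, -1]

# Flipping the sign of a sample leaves the average direction unchanged.
q = np.array([1.0, 0.0, 0.0, 0.0])
p = np.array([0.9, 0.1, 0.0, 0.0])
p /= np.linalg.norm(p)
m1 = nematic_mean(np.stack([q, p]))
m2 = nematic_mean(np.stack([-q, p]))
```

The eigenvector itself is only defined up to sign, which is consistent with the double cover ${\mathbb{H}}/\{\pm 1\}\simeq\mathrm{SO}_{3}({\mathbb{R}})$ mentioned above.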
All the simulations were written in Python using the SiSyPHE library [44]
specifically developed for the simulation of large-scale mean-field particle
systems by the second author. The implementation is based on the PyTorch [78]
library and more specifically on the GPU routines introduced by the KeOps [22]
library. The computational details as well as the source code are freely
available on the documentation website https://sisyphe.readthedocs.io/. The
outcomes of the simulations were analyzed and plotted using the NumPy [57] and
Matplotlib [63] libraries. The 3D particle plots were produced using VPython
[81]. All the particle simulations have been run on a GPU cluster at Imperial
College London using an Nvidia GTX 2080 Ti GPU chip.
A typical outcome of the IBM is shown in Figure 3 (see also Section A, Video
1) for a moderate number of particles ($N=3000$). Throughout this paper, in
the plots, we will represent each agent graphically by an elongated
tetrahedron pointing in the direction of motion. The three large faces around
the height will be painted in blue, green and magenta and the base will be in
gold, as described in Fig. 3a. We notice that, starting from a uniformly
random initial state (Fig. 3b), the system self-organizes in small clusters
(Fig. 3c) and finally reaches a flocking equilibrium where all the agents have
roughly the same body-orientation (Fig. 3d). We will see below that flocking
is not necessarily the ultimate fate of the system, because it may be trapped
in a so-called topologically protected state. To better understand these
aspects, we first need to develop the continuum (or macroscopic) description
of the system. This is done in the next section.
(a) Graphical representation of particles
(b) Time=0
(c) Time=4
(d) Time=40
Figure 3: (a) Graphical representation of particles and their body
orientations as elongated tetrahedra pointing towards the self-propulsion
direction with blue, magenta and green large faces and gold bases. (b,c,d)
Snapshots of a typical output of the simulation at three different times (b)
Time=0, (c) Time=4 and (d) Time=40. Parameters: $N=3000$, $L=1$, $R=0.075$,
$\kappa=20$, $\nu=5$, $c_{0}=0.2$. see also Section A, Video 1.
#### 2.1.3 Relation with other collective dynamics models
We finally make a comparison with previous models. First, there is a version
of the IBM where particles follow a stochastic differential equation (SDE)
instead of a jump process [35, 37]. Both the current and previous models have
the same hydrodynamic model as macroscopic limit (see forthcoming section).
There are two reasons for us to prefer the jump process. First, its simulation
is slightly easier and second, the coefficients of the macroscopic model are
explicit, which is not so in the SDE case where they require the resolution of
an auxiliary elliptic problem [35, 37].
Beyond the present body-orientation model, numerous models of self-propelled
particles have been proposed in the literature (see the review [90]). The most
closely related one is the celebrated Vicsek model [89]. There are several
versions of this model: time-discrete ones [23, 89], time-continuous ones
relying on an SDE description of the particle trajectories [41] and time-
continuous ones using a jump process instead [45]. The latter version is the
most closely related to the present work. In [45], the difference is that
particles carry a single direction vector $\Omega_{k}$ instead of a whole body
frame. This vector gives the direction of self-propulsion. The particles
follow a similar PDMP, namely
* •
The random jump times are defined in the same way: they follow an exponential
law with constant parameter $\nu>0$. At jump times, the position is continuous
and the direction vector $\Omega_{k}$ is discontinuous with left and right
states respectively denoted by $\Omega_{k}(T_{k}^{n}-0)$ and
$\Omega_{k}(T_{k}^{n}+0)$.
* •
Between two jump times $T_{k}^{n}$, $T_{k}^{n+1}$, the direction vector
$\Omega_{k}$ does not change and the particle moves in straight line at speed
$c_{0}>0$ in the direction given by $\Omega_{k}(T_{k}^{n}+0)$.
* •
To pass from $\Omega_{k}(T_{k}^{n}-0)$ to $\Omega_{k}(T_{k}^{n}+0)$, we
compute the local flux given by $\mathbf{J}_{k}^{n-}=$
$\frac{1}{N}\sum_{j=1}^{N}K\big{(}\mathbf{X}_{k}(T_{k}^{n})-\mathbf{X}_{j}(T_{k}^{n})\big{)}\,\Omega_{j}(T_{k}^{n}-0)\in{\mathbb{R}}^{3}$
and, assuming that it is non-zero, the mean direction
$\bar{\Omega}_{k}^{n}=\mathbf{J}_{k}^{n-}/|\mathbf{J}_{k}^{n-}|\in{\mathbb{S}}^{2}$
at time $T_{k}^{n}-0$. Then, $\Omega_{k}(T_{k}^{n}+0)$ is drawn from a von
Mises distribution on ${\mathbb{S}}^{2}$:
$\Omega_{k}(T_{k}^{n}+0)\sim\tilde{M}_{\bar{\Omega}_{k}^{n}}$, with
$\tilde{M}_{\bar{\Omega}}(\Omega)=e^{\kappa(\bar{\Omega}\cdot\Omega)}/\int_{{\mathbb{S}}^{2}}e^{\kappa(\bar{\Omega}\cdot\Omega)}\,\mathrm{d}\Omega$,
for $\Omega$ and $\bar{\Omega}$ in ${\mathbb{S}}^{2}$.
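The jump rule above is easy to sketch in code. The following Python fragment is a minimal illustration, not the authors' implementation: we assume an indicator interaction kernel $K=\mathbf{1}_{|\mathbf{x}|<R}$ for the local flux, and we sample the von Mises distribution on ${\mathbb{S}}^{2}$ by exact inversion of the law of $t=\bar{\Omega}\cdot\Omega$, whose density is proportional to $e^{\kappa t}$ on $[-1,1]$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_vmf(mu, kappa):
    """Draw Omega on S^2 with density proportional to exp(kappa * mu . Omega).
    The component t = mu . Omega has density prop. to exp(kappa*t) on [-1,1],
    whose CDF can be inverted in closed form; the tangential angle is uniform."""
    u = rng.random()
    t = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    # orthonormal basis (e1, e2) of the plane orthogonal to mu
    a = np.array([0.0, 0.0, 1.0]) if abs(mu[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    e1 = np.cross(mu, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(mu, e1)
    phi = 2.0 * np.pi * rng.random()
    s = np.sqrt(max(0.0, 1.0 - t * t))
    return t * mu + s * (np.cos(phi) * e1 + np.sin(phi) * e2)

def jump(k, X, Omega, R, kappa):
    """Jump update of Omega_k: local flux J_k, mean direction, von Mises draw.
    The 1/N prefactor of the flux drops out after normalization."""
    mask = np.linalg.norm(X - X[k], axis=1) < R   # indicator kernel (our assumption)
    J = Omega[mask].sum(axis=0)
    nJ = np.linalg.norm(J)
    return sample_vmf(J / nJ, kappa) if nJ > 0 else Omega[k]
```

Between two jump times, each position would simply be advanced as $\mathbf{X}_{k}\leftarrow\mathbf{X}_{k}+c_{0}\,\Omega_{k}\,\Delta t$.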
So, the current model is an elaboration of [45] replacing self-propulsion
directions by whole body frames and polar alignment of unit vectors (as
expressed by the von Mises distribution on the sphere) by alignment of
rotation matrices. Outcomes of numerical simulations of the Vicsek model do
not show striking differences regardless of which of the above-mentioned
versions is used (time-discrete, time-continuous with SDE, or time-continuous with jump
process). Results given in [23, 89] for the time-discrete version display the
emergence of a global alignment together with the formation of clusters when
the noise intensity $1/\kappa$ is not too big. The outcome strongly resembles
what is shown in Fig. 3 for the body-orientation model, except for the depiction
of the body orientation itself, which the Vicsek model does not provide. So,
it is legitimate to wonder whether the inclusion of the full body orientation
instead of the mere self-propulsion direction makes any change in the dynamics
of the particle positions and direction vectors. In particular, do the
particle positions and directions follow the same dynamics in the Vicsek and
body orientation model? We will see below that this is not the case and that
in certain circumstances, striking differences between the two models are
obtained. To show this, the use of the macroscopic limit of the IBM, as
developed in the forthcoming section, will be of crucial importance.
### 2.2 The macroscopic body-alignment model
#### 2.2.1 Description of the model
As soon as $N$ is not very small, the IBM (3), (5) involves a large number of
unknowns which makes its mathematical analysis virtually impossible. A reduced
description, more amenable to mathematical analysis, is obtained through the
macroscopic limit of the IBM, and consists of a system of partial differential
equations. This reduced description gives a valid approximation of the IBM in
an appropriate range of parameters, namely
$N\gg 1,\qquad\frac{R}{L}\sim\frac{c_{0}}{\nu\,L}\ll 1.$ (10)
Throughout the remainder of this paper, we will focus on this regime. The
macroscopic limit of the IBM (3), (5) has first been proposed in [38] and
leads to a model called “Self-Organized Hydrodynamics for Body orientation
(SOHB)”. The derivation relies on earlier work [35, 37]. This derivation is
“formally rigorous” in the sense that, if appropriate smoothness assumptions
are made on the involved mathematical objects, the limit model can be
identified rigorously as being the SOHB. For the reader’s convenience, we
summarize the main steps of this mathematical result in Section D.
The unknowns in the SOHB are the particle density $\rho(t,\mathbf{x})$ and
mean body-orientation ${\mathbb{A}}(t,\mathbf{x})\in$ SO${}_{3}({\mathbb{R}})$
at time $t$ and position $\mathbf{x}=(x,y,z)\in{\mathbb{R}}^{3}$. They satisfy
the following set of equations:
$\displaystyle\partial_{t}\rho+c_{1}\,\nabla_{\mathbf{x}}\cdot(\rho\,{\mathbb{A}}\mathbf{e}_{1})=0,$
(11a)
$\displaystyle\big{(}\partial_{t}+c_{2}({\mathbb{A}}\mathbf{e}_{1})\cdot\nabla_{\mathbf{x}}\big{)}{\mathbb{A}}+\big{[}({\mathbb{A}}\mathbf{e}_{1})\times(c_{3}\nabla_{\mathbf{x}}\log\rho+c_{4}\,\mathbf{r})+c_{4}\,\,\delta\,{\mathbb{A}}\mathbf{e}_{1}\big{]}_{\times}{\mathbb{A}}=0.$
(11b)
The quantities $\mathbf{r}$ and $\delta$ have intrinsic expressions in terms
of ${\mathbb{A}}$ [35]. However, it is more convenient to write the rotation
field ${\mathbb{A}}$ in terms of the basis vectors
$\Omega={\mathbb{A}}\mathbf{e}_{1},\quad\mathbf{u}={\mathbb{A}}\mathbf{e}_{2},\quad\mathbf{v}={\mathbb{A}}\mathbf{e}_{3}.$
With these notations, the vector $\mathbf{r}(t,\mathbf{x})\in{\mathbb{R}}^{3}$
and scalar $\delta(t,\mathbf{x})\in{\mathbb{R}}$ fields are defined by
$\displaystyle\mathbf{r}$ $\displaystyle:=$
$\displaystyle(\nabla_{\mathbf{x}}\cdot\Omega)\,\Omega+(\nabla_{\mathbf{x}}\cdot\mathbf{u})\,\mathbf{u}+(\nabla_{\mathbf{x}}\cdot\mathbf{v})\,\mathbf{v},$
(12) $\displaystyle\delta$ $\displaystyle:=$
$\displaystyle[(\Omega\cdot\nabla_{\mathbf{x}})\,\mathbf{u}]\cdot\mathbf{v}+[(\mathbf{u}\cdot\nabla_{\mathbf{x}})\mathbf{v}]\cdot\Omega+[(\mathbf{v}\cdot\nabla_{\mathbf{x}})\Omega]\cdot\mathbf{u}.$
(13)
Here, for a vector field $\mathbf{B}(\mathbf{x})\in{\mathbb{R}}^{3}$ and a
scalar field $\lambda(\mathbf{x})\in{\mathbb{R}}$ we denote by
$\nabla_{\mathbf{x}}\cdot\mathbf{B}$, and
$\nabla_{\mathbf{x}}\times\mathbf{B}$ the divergence and curl of $\mathbf{B}$
respectively, by $\nabla_{\mathbf{x}}\lambda$, the gradient of $\lambda$ and
we set
$(\mathbf{B}\cdot\nabla_{\mathbf{x}})\lambda=\mathbf{B}\cdot\nabla_{\mathbf{x}}\lambda$
with $\cdot$ the inner product of vectors in ${\mathbb{R}}^{3}$. We remind
that $\times$ denotes the cross product and we refer to formula (9) for the
definition of $[\mathbf{w}]_{\times}$ when $\mathbf{w}$ is a vector in
${\mathbb{R}}^{3}$. Alternate expressions of $\delta$ can be found in Section
E of the Supplementary Material.
The quantities $c_{1}$, $c_{2}$, $c_{3}$, $c_{4}$ are functions of $\kappa$
and $c_{0}$ given as follows:
$\displaystyle\frac{c_{1}}{c_{0}}$
$\displaystyle=\frac{2}{3}\,\big{\langle}\frac{1}{2}+\cos\theta\big{\rangle}_{\exp\left(\kappa\left(\frac{1}{2}+\cos\theta\right)\right)\,\sin^{2}\left(\frac{\theta}{2}\right)},$
(14) $\displaystyle\frac{c_{2}}{c_{0}}$
$\displaystyle=\frac{1}{5}\,\left\langle
2+3\cos\theta\right\rangle_{\exp\left(\kappa\left(\frac{1}{2}+\cos\theta\right)\right)\,\sin^{4}\left(\frac{\theta}{2}\right)\,\cos^{2}\left(\frac{\theta}{2}\right)},$
(15) $\displaystyle\frac{c_{3}}{c_{0}}$ $\displaystyle=\frac{1}{\kappa},$ (16)
$\displaystyle\frac{c_{4}}{c_{0}}$ $\displaystyle=\frac{1}{5}\,\left\langle
1-\cos\theta\right\rangle_{\exp\left(\kappa\left(\frac{1}{2}+\cos\theta\right)\right)\,\sin^{4}\left(\frac{\theta}{2}\right)\,\cos^{2}\left(\frac{\theta}{2}\right)},$
(17)
where, for two functions $f$ and $g$: $[0,\pi]\to{\mathbb{R}}$, we write
$\langle
f\rangle_{g}=\frac{\int_{0}^{\pi}f(\theta)\,g(\theta)\,\mathrm{d}\theta}{\int_{0}^{\pi}g(\theta)\,\mathrm{d}\theta}.$
Fig. 4 provides a graphical representation of these functions.
Figure 4: Dimensionless coefficients $c_{i}/c_{0}$ as functions of the inverse
of concentration parameter $1/\kappa$. Blue curve $c_{1}/c_{0}$, orange curve
$c_{2}/c_{0}$, green curve $c_{3}/2c_{0}$ and red curve $c_{4}/c_{0}$. At the
crossover value $\kappa^{*}\simeq 2.58$, the sign of $c_{2}-c_{1}$ changes
(see Section 3.2).
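The coefficients (14)–(17) are one-dimensional quadratures and are straightforward to evaluate numerically. A minimal Python sketch (the function name and the discretization are ours):

```python
import numpy as np

def sohb_coeffs(kappa):
    """c_i / c_0 from Eqs. (14)-(17) by direct quadrature of <f>_g."""
    th = np.linspace(0.0, np.pi, 200001)
    # exp(kappa*(1/2 + cos th)) rescaled by its maximum to avoid overflow;
    # the constant factor cancels in the ratio <f>_g
    w = np.exp(kappa * (np.cos(th) - 1.0))
    g1 = w * np.sin(th / 2.0) ** 2
    g2 = w * np.sin(th / 2.0) ** 4 * np.cos(th / 2.0) ** 2
    avg = lambda f, g: np.sum(f * g) / np.sum(g)   # common d(theta) cancels
    c1 = (2.0 / 3.0) * avg(0.5 + np.cos(th), g1)
    c2 = (1.0 / 5.0) * avg(2.0 + 3.0 * np.cos(th), g2)
    c3 = 1.0 / kappa
    c4 = (1.0 / 5.0) * avg(1.0 - np.cos(th), g2)
    return c1, c2, c3, c4
```

This reproduces the crossover of Fig. 4: for $\kappa$ below $\kappa^{*}\simeq 2.58$ one finds $c_{2}>c_{1}$, and the inequality reverses above $\kappa^{*}$.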
#### 2.2.2 Interpretation of the model
To better understand what the SOHB system (11) does, we re-write it as
follows:
$\displaystyle\partial_{t}\rho+c_{1}\,\nabla_{\mathbf{x}}\cdot(\rho\,\Omega)=0,$
(18a) $\displaystyle D_{t}{\mathbb{A}}+[\mathbf{w}]_{\times}\,{\mathbb{A}}=0,$
(18b)
where the convective derivative $D_{t}$ and the vector $\mathbf{w}$ are given
by:
$\displaystyle D_{t}=\partial_{t}+c_{2}\Omega\cdot\nabla_{\mathbf{x}},$ (19)
$\displaystyle\mathbf{w}=-\Omega\times\mathbf{F}+c_{4}\,\delta\,\Omega,\quad\mbox{
with }\quad\mathbf{F}=-c_{3}\,\nabla_{\mathbf{x}}\,\log\rho-
c_{4}\,\mathbf{r}.$ (20)
Eq. (18a) is the mass conservation equation of the fluid. The vector $\Omega$
gives the direction of the fluid motion. The fluid velocity deduced from (18a)
is $c_{1}\Omega$. Since $c_{1}/c_{0}\in[0,1]$ as can be seen from Fig. 4 (see
also [35] for a rigorous proof), the fluid motion is oriented positively along
$\Omega$ and its magnitude is smaller than the particles' self-propulsion
velocity $c_{0}$. This is because the average of vectors of identical norm
has a smaller norm. The quantity $c_{1}/c_{0}$ can be seen as an order parameter
[32] but we will not dwell on this issue here.
Eq. (18b) provides the rate of change of ${\mathbb{A}}$ with time along the
integral curves of the vector field $c_{2}\Omega$ as expressed by the
convective derivative $D_{t}$. Note that this vector field is not the fluid
velocity $c_{1}\Omega$ since $c_{2}\not=c_{1}$. It can be interpreted as the
propagation velocity of ${\mathbb{A}}$ when $\mathbf{w}$ is zero. Since
$D_{t}{\mathbb{A}}$ is the derivative of an element of
SO${}_{3}({\mathbb{R}})$, it must lie in the tangent space to
SO${}_{3}({\mathbb{R}})$ at ${\mathbb{A}}$ which consists of all matrices of
the form ${\mathbb{W}}\,{\mathbb{A}}$ with ${\mathbb{W}}$ antisymmetric. This
structure is indeed satisfied by Eq. (18b) since, from the definition (9), the
matrix $[\mathbf{w}]_{\times}$ is antisymmetric. It can be shown that the SOHB
system is hyperbolic [36].
In fact, Eq. (18b) shows that the vector $\mathbf{w}$ is the instantaneous
rotation vector of the frame ${\mathbb{A}}(t,\mathbf{X}(t))$, where
$t\mapsto\mathbf{X}(t)$ is any solution of
$\frac{\mathrm{d}\mathbf{X}}{\mathrm{d}t}=c_{2}\,\Omega(t,\mathbf{X}(t))$.
Indeed, Eq. (18b) can be equivalently written as a system of equations for
$(\Omega,\mathbf{u},\mathbf{v})$ of the form
$D_{t}\mathbf{Z}=\mathbf{w}\times\mathbf{Z}$, with
${\mathbf{Z}}=\Omega,\,{\mathbf{u}},\,{\mathbf{v}}$. This describes a rigid
body rotation of the frame $\\{\Omega,\mathbf{u},\mathbf{v}\\}$ with angular
velocity $\mathbf{w}$. The rotation vector $\mathbf{w}$ has two components.
The first one is $\Omega\times{\mathbf{F}}$ and tends to relax $\Omega$
towards ${\mathbf{F}}$. Due to its expression (20), the force ${\mathbf{F}}$
includes two contributions: that of the pressure gradient
$-c_{3}\,\nabla_{\mathbf{x}}\,\log\rho$ and that of gradients of the body
orientation through the vector $-c_{4}\,\mathbf{r}$. The second component of
the rotation vector is $-c_{4}\delta\Omega$ and corresponds to a rotation of
the body frame about the self propulsion direction $\Omega$ driven by
gradients of the body orientation through the scalar $-c_{4}\,\delta$. The
contributions of gradients of body orientation in the two components of the
rotation vector are under the control of the single coefficient $c_{4}$. Fig.
5 gives a graphical representation of the actions of these two infinitesimal
rotations.
Figure 5: Graphical representations of the two components of the infinitesimal
rotation. $(\Omega,{\mathbf{u}},{\mathbf{v}})$ denotes the position of the
frame at time $t$ while
$(\Omega^{\prime},{\mathbf{u}}^{\prime},{\mathbf{v}}^{\prime})$ is its
position at time $t+dt$ with $dt\ll 1$. The frame at time $t$ is denoted in
plain colors (red for $\Omega$, green for $\mathbf{u}$ and blue for
$\mathbf{v}$) while that at time $t+dt$ is in light colors. The motion of the
vectors is indicated by a segment of circle in black color. (a) Action of
$\Omega\times{\mathbf{F}}$: the vectors ${\mathbf{F}}$ and
$\Omega\times{\mathbf{F}}$ are in plain and light black respectively. The
vector ${\mathbf{F}}$ is shown with unit norm for the ease of the
representation but could be of any norm in reality. The passage from
$(\Omega,{\mathbf{u}},{\mathbf{v}})$ to
$(\Omega^{\prime},{\mathbf{u}}^{\prime},{\mathbf{v}}^{\prime})$ is via an
infinitesimal rotation of axis $\Omega\times{\mathbf{F}}$. (b) Action of
$\delta$: the vector $-c_{4}\delta\Omega$ is shown in black. The vectors
$\Omega$ and $\Omega^{\prime}$ are identical and collinear to
$-c_{4}\delta\Omega$. The passage from $(\Omega,{\mathbf{u}},{\mathbf{v}})$ to
$(\Omega^{\prime},{\mathbf{u}}^{\prime},{\mathbf{v}}^{\prime})$ is via an
infinitesimal rotation of axis $\Omega$.
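The rigid-body rotation $D_{t}\mathbf{Z}=\mathbf{w}\times\mathbf{Z}$ of the frame vectors can be mimicked discretely by the exact exponential update, which keeps ${\mathbb{A}}$ in SO${}_{3}({\mathbb{R}})$ at every step. The following sketch assumes the standard hat-map convention $[\mathbf{w}]_{\times}\mathbf{z}=\mathbf{w}\times\mathbf{z}$ for (9), which is not reproduced here; the rotation vector $\mathbf{w}(t)$ is an arbitrary illustration.

```python
import numpy as np

def hat(w):
    """Hat map: hat(w) @ z == np.cross(w, z); antisymmetric by construction."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(w, dt):
    """Exact rotation exp(hat(w) * dt) via Rodrigues' formula."""
    th = np.linalg.norm(w) * dt
    if th < 1e-15:
        return np.eye(3)
    K = hat(w / np.linalg.norm(w))
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

# transport a frame A along time with a (piecewise frozen) rotation vector w(t)
A = np.eye(3)
dt = 1e-3
for n in range(2000):
    t = n * dt
    w = np.array([np.sin(t), 0.3, np.cos(2.0 * t)])  # illustrative w(t)
    A = rodrigues(w, dt) @ A   # each column Z evolves as D_t Z = w x Z
```

Because each factor is an exact rotation, the orthogonality of $\mathbb{A}$ is preserved to machine precision, in contrast with a naive Euler update $A\leftarrow A+\mathrm{hat}(w)A\,\Delta t$.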
#### 2.2.3 Relation with other models
To better understand how the SOHB model (11) relates to other models, we re-
write the equation for $\Omega$ as follows:
$D_{t}\Omega=\mathrm{P}_{\Omega^{\perp}}\mathbf{F},$ (21)
where $\mathrm{P}_{\Omega^{\perp}}$ is the $3\times 3$ projection matrix on
the orthogonal plane to the vector $\Omega$ and is written
$\mathrm{P}_{\Omega^{\perp}}=\mbox{I}_{3}-\Omega\otimes\Omega$ with $\otimes$
standing for the tensor (or outer) product. Eq. (21) bears similarities and
differences with the momentum equation of isothermal compressible fluids. The
latter is exactly recovered if the following three modifications are made:
1. 1.
the projection matrix $\mathrm{P}_{\Omega^{\perp}}$ is removed from (21) (i.e.
it is replaced by $\mathrm{I}_{3}$);
2. 2.
$c_{2}=c_{1}$ in the convective derivative $D_{t}$ (see (19));
3. 3.
$c_{4}=0$ in the expression of $\mathbf{F}$ (see (20)).
Indeed, under these three modifications, we get the following system for
$(\rho,\mathbf{U})$ where $\mathbf{U}=c_{1}\Omega$ is the fluid velocity:
$\partial_{t}\rho+\nabla_{\mathbf{x}}\cdot(\rho\mathbf{U})=0,\quad(\partial_{t}+\mathbf{U}\cdot\nabla_{\mathbf{x}})\mathbf{U}=-\Theta\,\nabla_{\mathbf{x}}\,\log\rho.$
This is the isothermal compressible Euler equations with the fluid temperature
$\Theta=c_{1}\,c_{3}$.
We now investigate what consequences follow from undoing the above three
modifications, one by one.
1. 1.
Introducing the projection $\mathrm{P}_{\Omega^{\perp}}$ in (21) guarantees
that the constraint $|\Omega|=1$ is preserved in the course of time, if it is
satisfied at time $0$. Indeed, dotting Eq. (21) with $\Omega$ (and assuming
that all functions are smooth) leads to $D_{t}|\Omega|^{2}=0$, which
guarantees that $|\Omega|$ is constant along the integral curves of the vector
field $c_{2}\Omega$. Thus, if $|\Omega|=1$ at time $t=0$, it will stay so at
any time.
2. 2.
Having $c_{2}\not=c_{1}$ is a signature of a loss of Galilean invariance. This
is consistent with the fact that the microscopic system is not Galilean
invariant either. Indeed, there is a distinguished reference frame where the
particle speed is $c_{0}$. Of course, this speed does not remain equal to
$c_{0}$ in frames that translate at constant speed with respect to this frame.
So far, with the introduction of $\mathrm{P}_{\Omega^{\perp}}$ and different
constants $c_{2}\not=c_{1}$ but still with $c_{4}=0$, the system for
$(\rho,\Omega)$ is decoupled from the equations for $\mathbf{u}$ and $\mathbf{v}$ and is written
(see Eqs. (18a), (21) with $\mathbf{F}$ given by (20) in which $c_{4}=0$):
$\displaystyle\partial_{t}\rho+c_{1}\,\nabla_{\mathbf{x}}\cdot(\rho\,\Omega)=0,$
(22a) $\displaystyle
D_{t}\Omega=-c_{3}\,\mathrm{P}_{\Omega^{\perp}}\nabla_{\mathbf{x}}\,\log\rho.$
(22b)
This is nothing but the hydrodynamic limit of the Vicsek particle model (known
as “Self-Organized Hydrodynamics (SOH)”) as established in [41, 45]. This
system has been shown to be hyperbolic [41] and to have local-in-time smooth
solutions [40].
3. 3.
When $c_{4}\not=0$, in addition to the pressure gradient, a second component
of the force $\mathbf{F}$ appears. This component depends on the full rotation
matrix ${\mathbb{A}}$ through $\Omega$, $\mathbf{u}$, $\mathbf{v}$ and their
gradients (see Eq. (12)). It is thus truly specific to the body orientation
model.
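The norm-preservation argument of point 1 can be checked numerically: integrating $D_{t}\Omega=\mathrm{P}_{\Omega^{\perp}}\mathbf{F}$ keeps $|\Omega|=1$ up to integrator error. A minimal sketch, where the force $\mathbf{F}(t)$ is an arbitrary illustration:

```python
import numpy as np

def rhs(t, om):
    """D_t Omega = P_{Omega^perp} F, with P = I - Omega (x) Omega."""
    F = np.array([np.cos(3.0 * t), np.sin(t), 0.5])  # arbitrary smooth force
    return F - np.dot(om, F) * om

om = np.array([1.0, 0.0, 0.0])   # |Omega| = 1 at time 0
dt = 1e-3
for n in range(5000):
    t = n * dt
    # classical RK4 step
    k1 = rhs(t, om)
    k2 = rhs(t + dt / 2, om + dt / 2 * k1)
    k3 = rhs(t + dt / 2, om + dt / 2 * k2)
    k4 = rhs(t + dt, om + dt * k3)
    om = om + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Since the exact flow satisfies $D_{t}|\Omega|^{2}=0$, the residual drift of $|\Omega|$ observed numerically is only the $O(\Delta t^{4})$ error of the integrator.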
We are now going to compare the IBM and the SOHB models on a set of explicit
stationary solutions of the SOHB model described in the next section.
## 3 Special solutions of the macroscopic model
### 3.1 Three classes of explicit solutions
In this section, we exhibit three different classes of global-in-time
solutions of the SOHB model (18). They are special classes of a larger family
of solutions which will also be introduced. All these solutions are
characterized by uniform (i.e. independent of the spatial coordinate) fields
$\rho$, $\mathbf{r}$ and $\delta$. From now on we fix a wave-number (inverse
of the length) $\xi\in{\mathbb{R}}\setminus\\{0\\}$ and define
$\omega=\xi\,c_{4},\qquad\lambda=c_{2}+c_{4}.$ (23)
We denote by $\mathbf{x}=(x,y,z)^{\mathrm{T}}$ the coordinates of $\mathbf{x}$
in the basis $(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})$.
#### 3.1.1 Flocking state
The flocking state (FS) is a trivial but important special solution of the
SOHB model (18) where both the density and rotation fields are constant (i.e.
independent of time) and uniform:
$\rho(t,\mathbf{x})\equiv\rho_{0}=\text{constant},\quad{\mathbb{A}}(t,\mathbf{x})\equiv{\mathbb{A}}_{0}=\text{constant},\quad\forall(t,\mathbf{x})\in[0,\infty)\times{\mathbb{R}}^{3}.$
#### 3.1.2 Milling orbits
We have the following
###### Lemma 3.1.
The pair $(\rho,{\mathbb{A}})$ consisting of a constant and uniform density
$\rho(t,\mathbf{x})=\rho_{0}=$ constant and the following rotation field:
$\displaystyle{\mathbb{A}}(t,\mathbf{x})$ $\displaystyle=$
$\displaystyle\tilde{\mathbb{A}}_{\mbox{\scriptsize mill}}(t,z)$ (27)
$\displaystyle=$ $\displaystyle\left(\begin{array}[]{lll}\cos(\omega
t)&\sin(\omega t)\,\cos(\xi z)&-\sin(\omega t)\,\sin(\xi z)\\\ -\sin(\omega
t)&\cos(\omega t)\,\cos(\xi z)&-\cos(\omega t)\,\sin(\xi z)\\\ 0&\sin(\xi
z)&\cos(\xi z)\end{array}\right)$ $\displaystyle=$
$\displaystyle{\mathcal{A}}(-\omega t,\mathbf{e}_{3})\,{\mathcal{A}}(\xi
z,\mathbf{e}_{1}),$ (28)
is a solution of the SOHB system (18), where $\omega$ and $\xi$ are given by
(23). We recall that ${\mathcal{A}}(\theta,\mathbf{n})$ is the rotation of
axis $\mathbf{n}\in{\mathbb{S}}^{2}$ and angle $\theta\in{\mathbb{R}}$ defined
by (8). This solution will be referred to as a milling orbit (MO).
The proof of this lemma is deferred to Section F. The MO is independent of $x$
and $y$. Its initial condition is
${\mathbb{A}}_{\mbox{\scriptsize mill}}(0,z)={\mathcal{A}}(\xi
z,\mathbf{e}_{1})=\left(\begin{array}[]{ccc}1&0&0\\\ 0&\cos(\xi z)&-\sin(\xi
z)\\\ 0&\sin(\xi z)&\cos(\xi z)\end{array}\right).$ (29)
The initial direction of motion (the first column of
${\mathbb{A}}_{\mbox{\scriptsize mill}}(0,z)$) is independent of $z$ and
aligned along the $x$-direction, i.e. $\Omega(0,z)\equiv\mathbf{e}_{1}$. As
$z$ varies, the body-orientation rotates uniformly about the $x$-direction
with spatial angular frequency $\xi$. As the rotation vector is perpendicular
to the direction of variation, (29) is called a “perpendicular twist”. As time
evolves, the rotation field is obtained by multiplying on the left the initial
perpendicular twist by the rotation ${\mathcal{A}}(-\omega t,\mathbf{e}_{3})$.
This means that the whole body frame undergoes a uniform rotation about the
$z$-axis with angular velocity $-\omega$. As a consequence, the direction of
motion is again independent of $z$. It belongs to the plane orthogonal to $z$
and undergoes a uniform rotation about the $z$-axis. Consequently, the fluid
streamlines, which are the integral curves of $c_{1}\Omega$, are circles
contained in planes orthogonal to $z$ of radius
$\frac{c_{1}}{\omega}=\frac{c_{1}}{c_{4}}\frac{1}{\xi}$ traversed in the
negative direction if $\xi>0$. These closed circular streamlines motivate the
“milling” terminology. It can be checked that the MO satisfies:
$\mathbf{r}=\xi\,(\sin(\omega t),\cos(\omega
t),0)^{\mathrm{T}},\qquad\delta=0.$
As announced, $\mathbf{r}$ and $\delta$ are uniform but $\mathbf{r}$ depends
on time. Actually, $\Omega\times\mathbf{r}=\xi\mathbf{e}_{3}$ is independent
of time. The MO is depicted in Fig. 6 and its dynamics is visualized in Video
2 (see Section A).
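These identities can be confirmed numerically from the explicit formula (27): since the MO depends on $z$ only, central differencing in $z$ and assembling $\mathbf{r}$ and $\delta$ from (12)–(13) reproduces $\mathbf{r}=\xi\,(\sin(\omega t),\cos(\omega t),0)^{\mathrm{T}}$ and $\delta=0$. A sketch with illustrative parameter values:

```python
import numpy as np

def A_mill(t, z, om, xi):
    """Milling-orbit rotation field (27): A(-omega t, e3) A(xi z, e1)."""
    Rz = np.array([[np.cos(om * t), np.sin(om * t), 0.0],
                   [-np.sin(om * t), np.cos(om * t), 0.0],
                   [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(xi * z), -np.sin(xi * z)],
                   [0.0, np.sin(xi * z), np.cos(xi * z)]])
    return Rz @ Rx

om, xi, t, z, h = 0.7, 2.0 * np.pi, 0.3, 0.11, 1e-6
A0 = A_mill(t, z, om, xi)
dA = (A_mill(t, z + h, om, xi) - A_mill(t, z - h, om, xi)) / (2.0 * h)
O, u, v = A0[:, 0], A0[:, 1], A0[:, 2]
dO, du, dv = dA[:, 0], dA[:, 1], dA[:, 2]
# fields depend on z only, so div B = dB_z/dz and (B . grad) = B_z d/dz
r = dO[2] * O + du[2] * u + dv[2] * v                                       # Eq. (12)
delta = O[2] * np.dot(du, v) + u[2] * np.dot(dv, O) + v[2] * np.dot(dO, u)  # Eq. (13)
```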
Figure 6: Graphical representation of the milling orbit (MO) at (a): initial
time, and (b): time $t>0$. The frame vectors $\Omega$, $\mathbf{u}$ and
$\mathbf{v}$ are represented at a certain number of points of the $(O,x,y)$
and $(O,y,z)$ planes. In (b), the rotation motion of the frame vectors is
depicted by dotted circles of the color of the corresponding frame vector. The
red dotted circle can be seen as a depiction of the fluid streamlines. See
also Section A, Video 2.
Many examples of milling (also known as vortex) solutions have been observed
in the collective dynamics literature as well as in biological systems [16,
25, 90]. On the modelling side, milling states have not been observed so far
in alignment models without the inclusion of an additional process such as an
attraction-repulsion force between the agents [17], a bounded cone of vision
[24] or an anticipation mechanism [53]. The body-orientation framework is, to
the best of our knowledge, a new situation in which milling can be observed
just with alignment assumptions. Milling states can also be found in physical
systems. A typical and important example is the motion of a charged particle
in a uniform magnetic field, resulting in the formation of so-called cyclotron
orbits. Once again, in the body-orientation framework, no external field is
needed: self-induced cyclotron orbits emerge solely from the variations
of the internal body orientation. Here, the analog of the magnetic field would
be $\Omega\times\mathbf{r}$ and the cyclotron frequency would be $\omega$.
Note that $\omega$ is under the control of coefficient $c_{4}$ which depends
on the noise intensity $1/\kappa$.
#### 3.1.3 Helical traveling wave
We have the following
###### Lemma 3.2.
The pair $(\rho,{\mathbb{A}})$ consisting of a constant and uniform density
$\rho(t,\mathbf{x})=\rho_{0}=$ constant and the following rotation field:
$\displaystyle{\mathbb{A}}(t,\mathbf{x})$ $\displaystyle=$
$\displaystyle\tilde{\mathbb{A}}_{\mbox{\scriptsize htw}}(t,x)$ (33)
$\displaystyle=$ $\displaystyle\left(\begin{array}[]{ccc}1&0&0\\\
0&\cos\left(\xi(x-\lambda t)\right)&-\sin\left(\xi(x-\lambda t)\right)\\\
0&\sin\left(\xi(x-\lambda t)\right)&\cos\left(\xi(x-\lambda
t)\right)\end{array}\right)$ $\displaystyle=$
$\displaystyle{\mathcal{A}}(\xi(x-\lambda t),\mathbf{e}_{1}),$ (34)
is a solution of the SOHB system (18) where $\xi$ and $\lambda$ are defined by
(23). This solution will be referred to as a helical traveling wave (HW).
The proof of this lemma is given in Section F.2. The HW is independent of $y$
and $z$. Its initial condition is
${\mathbb{A}}_{\mbox{\scriptsize htw}}(0,x)={\mathcal{A}}(\xi
x,\mathbf{e}_{1})=\left(\begin{array}[]{ccc}1&0&0\\\ 0&\cos(\xi x)&-\sin(\xi
x)\\\ 0&\sin(\xi x)&\cos(\xi x)\end{array}\right).$ (35)
Here the self-propulsion direction is still independent of $x$ and equal to
$\mathbf{e}_{1}$. Also, the body orientation still rotates uniformly about
$\mathbf{e}_{1}$ with spatial angular frequency $\xi$, but now as $x$ varies
instead of $z$. This means that the body orientation is now twisted
along the propagation direction. So, this initial condition is called a
“parallel twist”. In the HW, the self propulsion direction $\Omega$ remains
constant in time and uniform in space. The initial twist is propagated in time
in this direction at speed $\lambda$ and gives rise to a traveling wave
$\tilde{\mathbb{A}}_{\mbox{\scriptsize
htw}}(t,x)=\tilde{\mathbb{A}}_{\mbox{\scriptsize htw}}(0,x-\lambda t).$
Note that the traveling wave speed $\lambda$ depends on the noise intensity
$1/\kappa$ and is different from the fluid speed $c_{1}$. So, the frame
carried by a given fluid element followed in its motion is not fixed but
rotates in time. Since $\Omega$ does not change, the fluid streamlines are now
straight lines parallel to $\mathbf{e}_{1}$. So, as a fluid element moves, the
ends of the frame vectors $\mathbf{u}$ and $\mathbf{v}$ follow a helical
trajectory with axis $\mathbf{e}_{1}$, hence the terminology “helical
traveling waves” for these solutions. It can be checked that
$\mathbf{r}=0,\qquad\delta=\xi,$
and again, $\mathbf{r}$ and $\delta$ are spatially uniform as announced. The
HW is depicted graphically in Fig. 7. Its dynamics is visualized in Video 3
(see Section A). The HW belongs to a larger class of solutions described in
Section F.2.
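The values $\mathbf{r}=0$ and $\delta=\xi$ can be checked with the same finite-difference approach as for the MO, this time differencing the explicit formula (33) in $x$ (illustrative parameter values):

```python
import numpy as np

def A_htw(t, x, lam, xi):
    """Helical-traveling-wave rotation field (33): A(xi (x - lam t), e1)."""
    p = xi * (x - lam * t)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(p), -np.sin(p)],
                     [0.0, np.sin(p), np.cos(p)]])

lam, xi, t, x, h = 0.9, 2.0 * np.pi, 0.4, 0.23, 1e-6
A0 = A_htw(t, x, lam, xi)
dA = (A_htw(t, x + h, lam, xi) - A_htw(t, x - h, lam, xi)) / (2.0 * h)
O, u, v = A0[:, 0], A0[:, 1], A0[:, 2]
dO, du, dv = dA[:, 0], dA[:, 1], dA[:, 2]
# fields depend on x only, so div B = dB_x/dx and (B . grad) = B_x d/dx
r = dO[0] * O + du[0] * u + dv[0] * v                                       # Eq. (12)
delta = O[0] * np.dot(du, v) + u[0] * np.dot(dv, O) + v[0] * np.dot(dO, u)  # Eq. (13)
```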
Figure 7: Graphical representation of the helical traveling wave (HW) at (a):
initial time, and (b): time $t>0$. See Fig. 6 for captions. See also Section
A, Video 3.
#### 3.1.4 Generalized topological solutions
The three above described classes of solutions can be encompassed by a single
family of generalized solutions as stated in the following lemma.
###### Lemma 3.3 (Generalized solutions).
Let $\xi\in\mathbb{R}$ and $\theta\in[0,\pi]$ be two parameters. Let
$\omega\in\mathbb{R}$ and $\tilde{\lambda}\in\mathbb{R}$ be defined by
$\omega=c_{4}\xi,\quad\tilde{\lambda}=c_{2}\cos\theta.$
The pair $(\rho,{\mathbb{A}})$ consisting of a constant and uniform density
$\rho(t,\mathbf{x})=\rho_{0}=$ constant and the following rotation field:
$\mathbb{A}(t,\mathbf{x})=\mathbb{A}_{\xi,\theta}(t,z):=\mathcal{A}(-\omega
t,\mathbf{e}_{3})\,\mathcal{A}\left(\theta-\frac{\pi}{2},\mathbf{e}_{2}\right)\mathcal{A}(\xi(z-\tilde{\lambda}t),\mathbf{e}_{1}),$
(36)
is a solution of the SOHB system (18). We recall that
${\mathcal{A}}(\theta,\mathbf{n})$ is the rotation of axis
$\mathbf{n}\in{\mathbb{S}}^{2}$ and angle $\theta\in\mathbb{R}$. This solution
will be referred to as a generalized topological solution (GS).
The proof of this lemma is deferred to the Supplementary Material F.3. Each of
the three previous classes of solutions can be obtained for specific values of
the parameters $\xi$ and $\theta$.
* •
When $\xi=0$, the solution $\mathbb{A}_{0,\theta}$ is constant for any
$\theta$, which corresponds to a FS.
* •
When $\theta=\frac{\pi}{2}$ and $\xi\in\mathbb{R}$, then $\tilde{\lambda}=0$
and the rotation with respect to the $y$-axis is equal to the identity: the
solution $\mathbb{A}_{\xi,\pi/2}$ is therefore equal to the MO (28).
* •
When $\theta=0$ and $\xi\in\mathbb{R}$ then $\tilde{\lambda}=c_{2}$ and the
solution $\mathbb{A}_{\xi,0}$ is equal to
$\mathbb{A}_{\xi,0}=\left(\begin{array}[]{ccc}0&-\sin(\xi(z-\lambda
t))&-\cos(\xi(z-\lambda t))\\\ 0&\cos(\xi(z-\lambda t))&-\sin(\xi(z-\lambda
t))\\\ 1&0&0\end{array}\right),\quad\lambda=c_{2}+c_{4},$
which is an HW along the $z$-axis. The situation is analogous when
$\theta=\pi$.
All these solutions have a non-zero gradient in the body-orientation variable
which is always along the $z$-axis. This gradient is controlled by the
parameter $\xi$. However, in the GS, the direction of motion $\Omega$ (or
fluid velocity) is not necessarily parallel nor perpendicular to this
gradient. Specifically, $\Omega$ has a constant polar angle equal to the
parameter $\theta$. The behavior of the solution is then a combination of the
two previously introduced phenomena: milling around the $z$-axis and a
traveling wave of the body-orientation variable along the same axis. The
applet accessible at
https://www.glowscript.org/#/user/AntoineDiez/folder/MyPrograms/program/BOfield
provides a graphical representation of the GS for arbitrary polar angles using
VPython [81] and with the same conventions as in Fig. 6.
In the following, we will focus on each of these two elementary behaviors,
i.e. the standard milling and helical traveling wave solutions, and in
particular on their topological properties. The study of the full continuum of
generalized solutions is left for future work. However, we will encounter GS
obtained from a perturbed milling solution in Section 5.4.
### 3.2 Some properties of these special solutions
Clearly, in the definitions of the MO and HW, the choice of reference frame is
unimportant. So, in the whole space ${\mathbb{R}}^{3}$, such solutions exist
in association with any reference frame. In a square domain of side-length $L$
with periodic boundary conditions, periodicity imposes some constraints on the
direction of the reference frame. For simplicity, we will only consider the
case where the reference frame has parallel axes to the sides of the square
and $\xi$ is linked to $L$ by an integrality condition $L\,\xi=2\pi\,n$, with
$n\in{\mathbb{Z}}\setminus\\{0\\}$.
The study of the stability of the MO and the HW is left for future work. By
contrast, the FS is linearly stable as the SOHB system is hyperbolic [36].
However, there is no guarantee that the FS at the level of the IBM is stable.
Indeed, there are strong indications that the FS is not stable for the Vicsek
model [23] in some parameter ranges, and a similar trend is likely to occur
here.
We can now answer the question posed at the end of Section 2.1.3 namely
whether the inclusion of the full body orientation makes any change in the
dynamics of the particle positions and directions compared to the Vicsek
model. To this end, we consider the corresponding macroscopic models, i.e. the
SOH model (22) for the Vicsek model and the SOHB model (11) for the body-
orientation dynamics. If we initialize the SOH model with uniform initial
density $\rho$ and mean direction $\Omega$, inspection of (22) shows that the
solution remains constant in time and thus corresponds to a flocking state of
the Vicsek model. In the SOHB model, the three classes of solutions described
in the previous sections (the FS, MO and HW) also have uniform initial density
$\rho$ and mean direction $\Omega$. If the dynamics of the particle positions
and directions in the body orientation model was the same as in the Vicsek
model, these three classes of solutions should have a constant mean direction
$\Omega$. However, this is not the case for the MO, where $\Omega$ changes with
time and undergoes a planar rotation. This means that gradients of body
attitude do have a non-trivial influence on the direction of motion of the
particles and that the body orientation model does not reduce to a Vicsek
model for the particle positions and directions.
There is another, more subtle, difference between the two models concerning
the dynamics of $\Omega$. It does not concern the MO and HW, but we discuss it
here in relation to the previous paragraph. Indeed, Fig. 4 reveals that the
velocities $c_{1}$ and $c_{2}$ for the SOHB model crossover at a certain value
$\kappa^{*}$ of the concentration parameter. The coefficients $c_{1}$ and
$c_{2}$ for the SOH model can be found in [45], Fig. A1(b) and appear to
satisfy $c_{1}>c_{2}$ for the whole range of values of $\kappa$, i.e. do not
exhibit any crossover. In particular, at large noise, the propagation velocity
$c_{2}$ of $\Omega$ in the SOHB model is larger than the mass transport
velocity $c_{1}$. This means that information (which triggers adjustments in
$\Omega$) propagates downstream in the fluid, by contrast with the Vicsek case where
it propagates upstream. While the reason for this difference is unclear at
this stage, we expect that it may induce large qualitative differences in the
behavior of the system in some cases. This point will be investigated in
future work.
Numerical simulation of the SOHB model will be the subject of future work. Here, we will
restrict ourselves to the MO and HW for which we have analytical formulas. In
the next section, using these two special solutions, we verify that the SOHB
model and the IBM are close in an appropriate parameter range.
### 3.3 Agreement between the models
In this section we use the MO and HW to demonstrate the quantitative agreement
between the SOHB model (11) and the IBM (3), (5) in the scaling (10). In the
simulations below, we consider a periodic cube of side-length $L$ and choose
$R=0.025,\quad\nu=40,\quad c_{0}=1,\quad L=1,\quad\xi=2\,\pi,$ (37)
so that $\frac{R}{L}=\frac{c_{0}}{\nu\,L}=0.025\ll 1$, ensuring that the
scaling (10) is satisfied. Furthermore, we see that the choice of $\xi$ is
such that the twists in the MO or HW have exactly one period over the domain
size.
#### 3.3.1 The IBM converges to the macroscopic model as $N\to\infty$
In this section, we numerically demonstrate that the solutions of the IBM
converge to those of the macroscopic model in the limit $N\to\infty$ and
investigate the behavior of the IBM at moderately high values of $N$.
We sample $N$ particles according to the initial condition (29) of the MO and
simulate the IBM (3), (5). We recall that the average direction $\Omega(t)$ of
the exact MO (27) is spatially uniform at any time and undergoes a uniform
rotation motion about the $z$-axis. So, we will compare $\Omega(t)$ with the
average direction $\overline{\Omega}(t)$ of all the particles of the IBM,
where
$\overline{\Omega}(t)=(\overline{\Omega}^{1},\overline{\Omega}^{2},\overline{\Omega}^{3})^{\mathrm{T}}$
is defined by:
$\overline{\Omega}=\frac{\sum_{k=1}^{N}\Omega_{k}(t)}{|\sum_{k=1}^{N}\Omega_{k}(t)|},$
(provided the denominator is not zero, and where we recall that
$\Omega_{k}(t)=A_{k}(t)\,\mathbf{e}_{1}$). To ease the comparison, we compute
the azimuthal and polar angles of $\overline{\Omega}$ respectively defined by:
$\bar{\varphi}:=\mathrm{arg}(\overline{\Omega}^{1}+i\overline{\Omega}^{2})\in[0,2\pi),\quad\bar{\theta}=\arccos(\overline{\Omega}^{3})\in[0,\pi],$
(38)
where $\mathrm{arg}(x+iy)$ stands for the argument of the complex number
$x+iy$. We note that the corresponding angles $\varphi$ and $\theta$ of
$\Omega(t)$ are given by
$\varphi(t)=-\omega\,t=-2\pi\,c_{4}(\kappa)\,t,\qquad\theta=\pi/2,$ (39)
where we have used (23) and (37) to compute the value of $\omega$.
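As a concrete illustration, the mean direction and the angles (38) can be computed from the particle data as follows (a minimal NumPy sketch; the array `omega`, of shape $(N,3)$, stacks the vectors $\Omega_{k}(t)$ and is our own naming):

```python
import numpy as np

def mean_direction_angles(omega):
    """Average direction of N unit vectors (rows of omega, shape (N, 3)),
    returned as the azimuthal and polar angles of Eq. (38).
    A minimal sketch; assumes the vector sum does not vanish."""
    s = omega.sum(axis=0)
    obar = s / np.linalg.norm(s)                        # normalized mean direction
    phi = np.arctan2(obar[1], obar[0]) % (2 * np.pi)    # azimuth in [0, 2*pi)
    theta = np.arccos(np.clip(obar[2], -1.0, 1.0))      # polar angle in [0, pi]
    return phi, theta
```

Normalizing the sum before extracting the angles avoids spurious results when the individual $\Omega_{k}$ are not perfectly aligned.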
Fig. 8a shows the azimuthal angle $\bar{\varphi}$ as a function of time over 5
units of time, for increasing particle numbers: $N=5\cdot 10^{4}$ (green curve), $N=1.5\cdot 10^{5}$ (orange curve) and $N=1.5\cdot 10^{6}$ (blue curve). Note that for
very small values of $N$, the macroscopic model loses its relevance: below a
few thousand particles we only observe a noisy behavior, not shown in the
figure. For the considered range of particle numbers, we notice that the angle
$\bar{\varphi}$ decreases linearly with time, which shows that the behavior of
the IBM is consistent with the exact solution (39). However, quantitatively,
we see that $|\mathrm{d}\bar{\varphi}/\mathrm{d}t|$ depends on the particle
number and decreases with increasing particle number. We investigate this
behavior in more detail in Fig. 8b where the difference between the measured
angular velocity $|\mathrm{d}\bar{\varphi}/\mathrm{d}t|$ and the theoretical
prediction $2\pi c_{4}(\kappa)$ is plotted as a function of $N$. Each data
point (blue dot) is an average of 10 independent simulations. This figure
confirms that, as $N$ increases, $|\mathrm{d}\bar{\varphi}/\mathrm{d}t|$
decreases and converges towards $2\pi c_{4}(\kappa)$. The inset in Fig. 8b
shows the same data points in a log-log-scale with the associated regression
line (orange solid line). We observe that the error between the measured and
theoretical angular velocities behaves like $N^{-\alpha}$ with a measured
exponent $\alpha\simeq 1.01$ which is close to the theoretical value
$\alpha=1$ derived in Section G of the Supplementary Material.
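The convergence rate shown in the inset of Fig. 8b is obtained by a linear regression in log-log scale. A minimal sketch of this fit, with synthetic data in place of the measured errors:

```python
import numpy as np

def convergence_exponent(N, err):
    """Least-squares fit of err ~ C * N**(-alpha) in log-log scale,
    as used for the inset of Fig. 8b. Returns the estimated alpha."""
    slope, _ = np.polyfit(np.log(np.asarray(N, float)),
                          np.log(np.asarray(err, float)), 1)
    return -slope
```

For measured errors behaving like $N^{-1}$, the fitted exponent returns a value close to $1$, as observed in the simulations.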
Figure 8: (a) Time evolution of the angle $\bar{\varphi}$ for three values of
$N$: $N=5\cdot 10^{4}$ (green curve), $N=1.5\cdot 10^{5}$ (orange curve) and $N=1.5\cdot 10^{6}$ (blue curve). (b) Difference between the measured angular
velocity $|\mathrm{d}\bar{\varphi}/\mathrm{d}t|$ and the theoretical value
$2\pi c_{4}(\kappa)$. Each data point (blue dot) is an average of 10
independent simulations with the error bar showing one standard deviation.
The solid black horizontal line marks $0$ for reference. Inset: same data in log-log
scale and regression line (solid orange line). Parameters: $L=1$, $\xi=2\pi$,
$R=0.025$, $\nu=40$, $c_{0}=1$, $\kappa=10$.
#### 3.3.2 Quantitative comparison between the models
In order to quantitatively confirm the agreement between the IBM and the
macroscopic model, we fix a large number $N=1.5\,10^{6}$ of particles and we
run the IBM for different values of the concentration parameter $\kappa$ and
for the two classes of special solutions, the MO and the HW. To compare the
models, we compute the following macroscopic quantities:
* •
For the MO: starting from a sampling of the initial condition (29), we measure
the angular velocity $|\mathrm{d}\bar{\varphi}/\mathrm{d}t|$ in a similar way
as in the previous section. Given the parameter choice (37), the theoretical
value of $|\mathrm{d}\varphi/\mathrm{d}t|$ predicted by (27) is $|\omega|=2\pi
c_{4}(\kappa)$ where the function $c_{4}$ is given by (17).
* •
For the HW, starting from a sampling of the initial condition (35), we measure
the wave speed. To this aim, using (2), we compute the mean body-orientation
${\mathbb{A}}$ of the agents in a slice of size $10^{-3}$ along the $x$-axis
(which is the global direction of motion) as a function of time. As predicted
by (33) the coefficient ${\mathbb{A}}_{22}$ of the mean orientation is a
periodic signal. The inverse of the period of this signal (obtained through a
discrete Fourier transform) gives the traveling wave speed of the HW. The
theoretical value predicted by (33) is given by
$\lambda=c_{2}(\kappa)+c_{4}(\kappa)$ where the function $c_{2}$ is given by
(15).
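The extraction of the wave speed from the periodic signal ${\mathbb{A}}_{22}(t)$ can be sketched as follows (our own helper; with the parameter choice (37), $\xi=2\pi$ and $L=1$, so the inverse period equals the wave speed):

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Frequency of the leading nonzero Fourier mode of a sampled
    periodic signal (here A_22(t) measured in a fixed slice).
    With xi = 2*pi and L = 1 (Eq. (37)) this equals the wave speed."""
    sig = np.asarray(signal, float)
    spec = np.abs(np.fft.rfft(sig - sig.mean()))   # drop the mean (zero mode)
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    return freqs[np.argmax(spec)]
```

Subtracting the mean before the transform prevents the zero-frequency mode from masking the oscillation.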
The output of these simulations is shown in Figs. 9a for the MO and 9b for the
HW. They respectively display the angular velocity and traveling wave speed
obtained by running the IBM for a discrete set of values of $\kappa$ (big blue
dots). By comparison, the black dotted curves show the theoretical values as
functions of $\kappa$. For the parameters of Fig. 9, the order of magnitude of
the standard deviation of 10 independent simulations is $10^{-3}$. The
relative error between the average measured value and its theoretical
prediction varies between 2% and 5% on the whole range of concentration
parameters considered.
These figures show an excellent agreement between the prediction of the
macroscopic SOHB model and the results obtained by running the IBM when the
number of particles is large. This confirms that the SOHB model provides an
excellent approximation of the IBM, at least during a certain period of time
which is a function of the particle number. We will see below that
fluctuations induced by the finite number of particles may eventually
destabilize the MO and lead to a HW or a FS. As these solutions are associated
with different topological structure, these transitions will be analyzed as
topological phase transitions in the forthcoming sections.
Figure 9: (a) MO: angular velocity $|\mathrm{d}\varphi/\mathrm{d}t|$ as a
function of $1/\kappa$. (b) HW: traveling wave speed $\lambda$ as a function
of $1/\kappa$. Measured values from the IBM at discrete values of $\kappa$
(big blue dots) and theoretical prediction from the SOHB model (dotted black
curve). Parameters: $N=1.5\,10^{6}$, $L=1$, $\xi=2\pi$, $R=0.025$, $\nu=40$,
$c_{0}=1$.
### 3.4 Topology
Both the MO and HW have non-trivial topology: inspecting the perpendicular
twist (29) (see also Fig. 6a), we observe that the two-dimensional curve
generated by the end of the vector $\mathbf{u}$ in the $(y,z)$-plane as one
moves along the $z$-axis is a closed circle. A similar observation can be made
on the parallel twist (35) (see Fig. 7a) as one moves along the $x$-axis. Both
curves have therefore non-zero winding numbers about the origin. When the
domain is ${\mathbb{R}}^{3}$, these winding numbers are $\pm\infty$ (where the
sign corresponds to that of $\xi$) as these curves make an infinite number of
turns. If the domain has finite extension $L$ along the $z$-axis (in the MO
case) or the $x$-axis (in the HW case) and, due to the periodic boundary
conditions, $L$ is related to $\xi$ by $L=n\,2\pi/\xi$ with
$n\in{\mathbb{Z}}\setminus\\{0\\}$, then the winding numbers are equal to $n$.
As observed on Formulas (27) and (33) (or on Figs 6b and 7b), this initial
non-trivial topological structure is propagated in time.
When we initialize particles by sampling the initial conditions (29) or (35),
we expect that the solution of the IBM remains an approximation of the MO (27)
or HW (33) respectively as evidenced in Section 3.3.2. However, noise induced
by both the inherent stochasticity of the IBM and finite particle number
effects as explained in Section 3.3.1 may eventually destabilize the IBM.
Then, in most cases, its solution is seen to transition towards an
approximation of the FS after some time. This transition implies a change of
the topology of the solution which, from initially non-trivial, becomes
trivial, since the winding number of the FS is zero. One may wonder whether
the evolution towards a FS is slower if the initial state has non-trivial
topology and exhibits some kind of “topological protection” against noise-
induced perturbations. To test this hypothesis quantitatively, we first need
to develop appropriate indicators. This is done in the next section.
## 4 Order parameters and topological indicators
We will use two types of indicators. The first one is the global order
parameter which will discriminate between the various types of organization of
the system (disorder, MO or HW and FS). The second type of indicators are
based on analyzing the roll angle. They will enable a finer characterization
of topological phase transitions.
### 4.1 Global order parameter
We first introduce the following scalar binary order parameter which measures
the degree of alignment between two agents with body-orientations $A$,
$\tilde{A}\in\mathrm{SO}_{3}({\mathbb{R}})$ :
$\psi(A,\tilde{A}):=\frac{1}{2}\,A\cdot\tilde{A}+\frac{1}{4}.$ (40)
In the quaternion framework (see Section 2.1.2 and B for details), we have
$\psi(A,\tilde{A})=(q\cdot\tilde{q})^{2},$ (41)
where $q$ and $\tilde{q}$ are two unit quaternions respectively associated to
$A$ and $\tilde{A}$, and $q\cdot\tilde{q}$ indicates the inner product of two
quaternions. This expression makes it clear that $\psi(A,\tilde{A})\in[0,1]$.
The square exponent in (41) indicates that $\psi(A,\tilde{A})$ measures the
nematic alignment of the two associated unit quaternions, as it should because
two opposite quaternions represent the same rotation. We note that
$\psi(A,\tilde{A})=1$ if and only if $\tilde{A}=A$. On the other hand,
$\psi(A,\tilde{A})=0$ if and only if $A\cdot\tilde{A}=-1/2$, which corresponds
to the two rotation axes being orthogonal and one rotation being an inversion
about its axis.
The Global Order Parameter (GOP) of a system of $N$ agents at time $t>0$ is
the average of all binary order parameters over all pairs of particles:
$\mbox{GOP}^{N}(t)=\frac{1}{N(N-1)}\,\sum_{k\not=\ell}\psi\big{(}A_{k}(t),A_{\ell}(t)\big{)}.$
(42)
From (42) we have $\mbox{GOP}^{N}(t)\in[0,1]$. A small $\mbox{GOP}^{N}$ indicates large disorder and a large one, strong alignment. This is a global measure of
alignment, by contrast to a local one where $\psi$ would be averaged over its
neighbors only (and the result, averaged over all the particles). This global
measure of alignment allows us to separate the MO and HW from the FS as shown
below, which would not be possible with a local one.
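A minimal sketch of $\psi$ and of $\mbox{GOP}^{N}$, assuming the matrix inner product $A\cdot\tilde{A}=\frac{1}{2}\,\mathrm{tr}(A^{\mathrm{T}}\tilde{A})$, which is the normalization under which $\psi(A,A)=1$ and (41) holds:

```python
import numpy as np

def psi(A, B):
    """Binary order parameter of Eq. (40), with the matrix inner product
    taken as A.B = tr(A^T B)/2 (an assumption on the normalization)."""
    return 0.5 * (0.5 * np.trace(A.T @ B)) + 0.25

def gop(As):
    """Global order parameter (42) of a list of rotation matrices:
    average of psi over all ordered pairs k != l. O(N^2); fine for a sketch."""
    N = len(As)
    total = sum(psi(As[k], As[l]) for k in range(N) for l in range(N) if k != l)
    return total / (N * (N - 1))
```

With this convention, $\psi(A,A)=1$ and $\psi$ vanishes when the relative rotation between the two orientations has angle $\pi$, e.g. a rotation by $\pi$ about $\mathbf{e}_{1}$ against the identity.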
The GOP (42) can also be defined at the continuum level. As shown in Section
D, in the macroscopic limit, the particles become independent and identically
distributed over ${\mathbb{R}}^{3}\times$SO${}_{3}({\mathbb{R}})$, with common
distribution $\rho\,M_{{\mathbb{A}}}$ where $(\rho,{\mathbb{A}})$ satisfies
the SOHB system (11) and $M_{{\mathbb{A}}}$ is the von Mises distribution (6).
Therefore, the GOP of a solution of the SOHB system $(\rho,{\mathbb{A}})$ is
obtained as (42) where the sum is replaced by an integral, $A_{k}(t)$ is
replaced by $A$ distributed according to the measure
$(\rho\,M_{{\mathbb{A}}})(t,\mathbf{x},A)\,\mathrm{d}\mathbf{x}\,\mathrm{d}A$
and $A_{\ell}(t)$ is replaced by $\tilde{A}$ distributed according to the same
measure, but independently of $A$. Therefore,
$\mbox{GOP}(\rho,{\mathbb{A}}):=\iint_{({\mathbb{R}}^{3}\times\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}}))^{2}}\psi(A,\tilde{A})\,\rho(\mathbf{x})\,\rho(\tilde{\mathbf{x}})\,M_{{\mathbb{A}}(\mathbf{x})}(A)\,M_{{\mathbb{A}}(\tilde{\mathbf{x}})}(\tilde{A})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\tilde{\mathbf{x}}\,\mathrm{d}A\,\mathrm{d}\tilde{A}.$
Using (7) and (8) one can prove that for any
${\mathbb{A}}\in$SO${}_{3}({\mathbb{R}})$, we have
$\int_{\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}A\,M_{{\mathbb{A}}}(A)\,\mathrm{d}A=\frac{c_{1}(\kappa)}{c_{0}}\,{\mathbb{A}},$
(43)
with $c_{1}(\kappa)$ defined by (14) and $c_{0}$ being the particle speed.
Using (40), we obtain:
$\mbox{GOP}(\rho,{\mathbb{A}})=\frac{1}{2}\left(\frac{c_{1}(\kappa)}{c_{0}}\right)^{2}\int_{{\mathbb{R}}^{3}\times{\mathbb{R}}^{3}}{\mathbb{A}}(\mathbf{x})\cdot{\mathbb{A}}(\tilde{\mathbf{x}})\,\rho(\mathbf{x})\,\rho(\tilde{\mathbf{x}})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\tilde{\mathbf{x}}+\frac{1}{4}.$
(44)
From now on, we let $\rho$ be the uniform distribution on a square box of
side-length $L$. We can compute the GOP corresponding to each of the three
solutions defined in Section 3.1. For the MO (27), the HW (33) and the GS (36), the GOP remains, for all times $t>0$, equal to:
$\mbox{GOP}_{1}=\frac{1}{4}\,\left(\frac{c_{1}(\kappa)}{c_{0}}\right)^{2}+\frac{1}{4}.$
(45)
For the FS, ${\mathbb{A}}(\mathbf{x})\equiv{\mathbb{A}}$ is constant and the GOP
is equal to
$\mbox{GOP}_{2}=\frac{3}{4}\,\left(\frac{c_{1}(\kappa)}{c_{0}}\right)^{2}+\frac{1}{4}.$
(46)
Note that the GOP:
$\mbox{GOP}_{0}=\frac{1}{4},$
corresponds to a disordered state of the IBM where the body-orientations of
the particles are chosen independently and uniformly at random (or equivalently
to the SOHB case $\kappa\to 0$ in (45) and (46)). For the typical value
$\kappa=10$ used in our simulations, one can compute that:
$\mbox{GOP}_{1}\simeq 0.45,\qquad\mbox{GOP}_{2}\simeq 0.85.$ (47)
The GOP values between $\mbox{GOP}_{1}$ and $\mbox{GOP}_{2}$ can be reached by
generalized HW as shown in Section F.4.
### 4.2 Roll angle
#### 4.2.1 Definition
Let $A=[\Omega,\mathbf{u},\mathbf{v}]\in$ SO${}_{3}({\mathbb{R}})$ be a body-
orientation. Let $\theta\in[0,\pi]$, $\varphi\in[0,2\pi)$ be the spherical
coordinates of $\Omega$ defined by (38) (omitting the bars). We let
$\\{\Omega,\mathbf{e}_{\theta},\mathbf{e}_{\varphi}\\}$ be the local
orthonormal frame associated with the spherical coordinates $(\theta,\varphi)$
and we define $\mathbf{p}(\Omega)=\mathbf{e}_{\varphi}$ and
$\mathbf{q}(\Omega)=-\mathbf{e}_{\theta}$. Then we define the rotation matrix
$\mathsf{R}(\Omega):=[\Omega,\mathbf{p}(\Omega),\mathbf{q}(\Omega)]=\left(\begin{array}[]{ccc}\sin\theta\,\cos\varphi&-\sin\varphi&-\cos\theta\,\cos\varphi\\\
\sin\theta\,\sin\varphi&\cos\varphi&-\cos\theta\,\sin\varphi\\\
\cos\theta&0&\sin\theta\end{array}\right).$
Since $\mathbf{u}$ and $\mathbf{v}$ belong to the plane spanned by
$\mathbf{p}(\Omega)$ and $\mathbf{q}(\Omega)$, we let $\zeta\in[0,2\pi)$ be
the angle between $\mathbf{p}(\Omega)$ and $\mathbf{u}$. Then, it is an easy
matter to show that
$A=\mathsf{R}(\Omega)\,{\mathcal{A}}(\zeta,\mathbf{e}_{1})$. In aircraft
navigation, $\theta$, $\varphi$ and $\zeta$ are respectively called the pitch,
yaw and roll angles: the pitch and yaw control the aircraft direction with
respect to the vertical and in the horizontal plane respectively, while the
roll controls the plane attitude (see Fig. 10a). These angles are related to
the Euler angles. The construction of the roll angle $\zeta$ is summarized in
Figure 10b. Pursuing the analogy with aircraft navigation, we see from Fig. 5
that $\mathbf{F}$ controls variations of pitch and yaw while $\delta$ controls
variations of roll.
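A sketch of the extraction of the roll angle from a body orientation $A=[\Omega,\mathbf{u},\mathbf{v}]$, following the construction above (sign conventions are assumed to match (8); $\Omega$ should not be aligned with the $z$-axis):

```python
import numpy as np

def rotation_R(theta, phi):
    """Frame R(Omega) = [Omega, p(Omega), q(Omega)] in the spherical
    coordinates of Eq. (38); matches the displayed matrix."""
    st, ct, sp, cp = np.sin(theta), np.cos(theta), np.sin(phi), np.cos(phi)
    return np.array([[st * cp, -sp, -ct * cp],
                     [st * sp,  cp, -ct * sp],
                     [ct,      0.0,  st]])

def roll_angle(A):
    """Roll zeta in [0, 2*pi) of A = [Omega, u, v], recovered as the angle
    between p(Omega) and u in the (p, q) plane (a sketch)."""
    omega, u = A[:, 0], A[:, 1]
    theta = np.arccos(np.clip(omega[2], -1.0, 1.0))
    phi = np.arctan2(omega[1], omega[0]) % (2 * np.pi)
    R = rotation_R(theta, phi)
    p, q = R[:, 1], R[:, 2]
    return np.arctan2(np.dot(u, q), np.dot(u, p)) % (2 * np.pi)
```

Since $\mathbf{u}=\cos\zeta\,\mathbf{p}(\Omega)+\sin\zeta\,\mathbf{q}(\Omega)$ by the relation $A=\mathsf{R}(\Omega)\,{\mathcal{A}}(\zeta,\mathbf{e}_{1})$, the two projections recover $\zeta$ exactly.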
Figure 10: (a) Pitch, yaw and roll angles of an aircraft with body
orientation $[\Omega,\mathbf{u},\mathbf{v}]$ (original picture released under
the Creative Commons CC0 license by https://pixabay.com). (b) Construction of
the roll angle of $A=[\Omega,\mathbf{u},\mathbf{v}]$, where the vectors
$\Omega$, $\mathbf{u}$ and $\mathbf{v}$ are respectively in red, green and
blue. The local frame is $(\Omega,\mathbf{p}(\Omega),\mathbf{q}(\Omega))$
where $\mathbf{p}(\Omega)$, $\mathbf{q}(\Omega)$ and the plane generated by them are in purple. $\mathbf{u}$ and $\mathbf{v}$ belong to this plane. $\zeta$ is the angle between $\mathbf{p}(\Omega)$ and $\mathbf{u}$.
As an example, we examine the pitch, yaw and roll of the three solutions of
the SOHB model (11) described in Section 3.1.
1. 1.
FS: ${\mathbb{A}}$ is constant and uniform. Then, the pitch, yaw and roll are
also constant and uniform.
2. 2.
MO: ${\mathbb{A}}$ is given by (27) (see Figs. 6). Using Eq. (28), we have
$\mathsf{R}(\Omega)={\mathcal{A}}(-\omega\,t,\mathbf{e}_{3})$ and the roll is
given by $\zeta=\xi z$. The pitch and yaw are constant and uniform. The roll
is constant in time and is also uniform on planes of constant $z$. The non-
trivial topology of the MO results from the roll making a complete turn when
$z$ increases by the quantity $2\pi/\xi$.
3. 3.
HW: ${\mathbb{A}}$ is given by (33) (see Fig. 7). Then, we have
$\mathsf{R}(\Omega)=\mathrm{I}_{3}$ and $\zeta=\xi\,(x-\lambda\,t)$. The pitch and yaw
are constant and uniform while the roll is uniform on planes of constant $x$.
It depends on $x$ and time through the traveling phase $x-\lambda\,t$. Here,
the non-trivial topology results from the roll making a complete turn when $x$
increases by the quantity $2\pi/\xi$.
The goal of the next section is to see how we can recover the roll field from
the simulation of a large particle system.
#### 4.2.2 Roll polarization
As shown in the last section, the roll of the MO is uniform on planes of
constant $z$. When simulating the MO by the IBM, we will use this property to
compute an average roll on planes of constant $z$. To cope with the
discreteness of the particles, we will rather consider slices comprised
between two planes of constant $z$. If the distance $\Delta z$ between the
planes is chosen appropriately, we can access both the average and the
variance of the roll. They will be collected into one single vector, the Roll
Polarization in planes of constant $z$ or RPZ. A similar quantity
characterizes the HW, the Roll Polarization in planes of constant $x$ or RPX.
Below, we detail the construction of the RPZ. Obviously the procedure is the
same (changing $z$ into $x$) for the RPX.
We assume that the domain is a rectangular box of the form
$\mathcal{D}:=[0,L_{x}]\times[0,L_{y}]\times[0,L_{z}]$, and
$L_{z}=n\,(2\pi/\xi)$ with $n\in{\mathbb{Z}}\setminus\\{0\\}$. The domain
$\mathcal{D}$ is partitioned into $M$ slices of fixed size across $z$, where
$M$ is a fixed integer. For $m\in\\{1,\ldots,M\\}$, the
slice $S_{m}$ is defined by:
$S_{m}:=[0,L_{x}]\times[0,L_{y}]\times\left[\frac{m-1}{M}L_{z},\frac{m}{M}L_{z}\right].$
Let us consider a system of $N$ agents with positions and body-orientations
$(\mathbf{X}_{k},A_{k})$, indexed by $k\in\\{1,\ldots,N\\}$. Each body
orientation $A_{k}$ has roll $\zeta_{k}\in[0,2\pi)$. We define the discrete
RPZ for Slice $m$, $\mathbf{\bar{u}}_{m}$, by
$\mathbf{\bar{u}}_{m}:=\frac{1}{N_{m}}\sum_{k\in
I_{m}}(\cos\zeta_{k},\sin\zeta_{k})^{\mathrm{T}}\in{\mathbb{R}}^{2},$ (48)
where $I_{m}=\\{k\in\\{1,\ldots,N\\},\mathbf{X}_{k}\in S_{m}\\}$ and $N_{m}$ is the cardinality of $I_{m}$. Note that the RPZ $\mathbf{\bar{u}}_{m}$ has norm smaller
than one. The unit vector $\mathbf{\bar{u}}_{m}/|\mathbf{\bar{u}}_{m}|$ or
equivalently, its angle with the vector $(1,0)^{\mathrm{T}}$ gives the average
roll in $S_{m}$. The Euclidean norm $|\mathbf{\bar{u}}_{m}|$ is a measure of
the variance of the set of roll angles $\\{\zeta_{k}\\}_{k\in I_{m}}$. If this
variance is small, then $|\mathbf{\bar{u}}_{m}|\sim 1$, while if the variance
is large, $|\mathbf{\bar{u}}_{m}|\ll 1$. When plotted in the plane
${\mathbb{R}}^{2}$, the set of RPZ $\\{\mathbf{\bar{u}}_{m}\\}_{m=1,\ldots,M}$
forms a discrete curve referred to as the RPZ-curve. It will be used to
characterize the topological state of the particle system. A summary of this
procedure is shown in Figure 11.
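The construction of the discrete RPZ (48) can be sketched as follows (our own array conventions; empty slices are marked with NaN):

```python
import numpy as np

def rpz(z, zeta, Lz, M):
    """Discrete roll polarization (48): for each of the M slices S_m across z,
    average (cos zeta_k, sin zeta_k) over the particles located in S_m.
    A sketch; slices containing no particle yield NaN."""
    z, zeta = np.asarray(z, float), np.asarray(zeta, float)
    m_idx = np.minimum((z / Lz * M).astype(int), M - 1)  # slice index of each particle
    u = np.full((M, 2), np.nan)
    for m in range(M):
        sel = m_idx == m
        if sel.any():
            u[m] = [np.cos(zeta[sel]).mean(), np.sin(zeta[sel]).mean()]
    return u
```

If all rolls in a slice coincide, the corresponding RPZ lies on the unit circle; a dispersed roll distribution pulls it toward the origin.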
Figure 11: Construction of the RPZ and graphical representation. The spatial
domain $\mathcal{D}$ is partitioned into $M$ slices represented in different
colors (top left). In each slice $S_{m}$, we have $N_{m}$ particles with roll
$\zeta_{k}$ each of them plotted in the particle’s local plane spanned by
$\mathbf{p}(\Omega_{k})$, $\mathbf{q}(\Omega_{k})$ (top right: we plot $3$
particles in the slice $S_{1}$). Note that the local planes of different
particles of the same slice may not coincide when embedded in
${\mathbb{R}}^{3}$. For this given slice, the RPZ $\mathbf{\bar{u}}_{m}$ is
computed and plotted in ${\mathbb{R}}^{2}$ (bottom right). The RPZ has norm
smaller than $1$ and belongs to the unit disk, whose boundary, the unit
circle, is plotted for clarity. The RPZ of each slice is then plotted on a
single figure in the same color as the slice it corresponds to (bottom left).
This collection of points forms a discrete curve (here a fragment of a
circle): the RPZ-curve.
#### 4.2.3 Indicators of RPZ-curve morphology
The RPZ-curve is shown in Figure 12 (a) to (c), in the three following cases.
1. 1.
Disordered state: the particles are drawn independently and uniformly at random in
the product space $\mathcal{D}\times$ SO${}_{3}({\mathbb{R}})$. For each $m$,
the RPZ (48) is an average of uniformly distributed vectors on the circle and
its norm is therefore close to 0. The RPZ-curve is thus reduced to the origin,
as shown in Figure 12a;
2. 2.
FS: the positions of the particles are drawn independently and uniformly in
$\mathcal{D}$ and their body-orientations independently according to a von
Mises distribution $M_{{\mathbb{A}}_{0}}$ with a fixed mean body orientation
${\mathbb{A}}_{0}\in$ SO${}_{3}({\mathbb{R}})$. In this case, for all slices,
the corresponding RPZ (48) is an average of identically distributed vectors on
the circle whose distribution is peaked around the same point of the unit
circle, and the peak is narrower as $\kappa$ is larger. Therefore, the RPZ
vectors (48) concentrate on a point near the unit circle (Figure 12b). The
RPZ-curve reduces to a single point different from the origin;
3. 3.
MO: the positions of the particles are drawn independently and uniformly in
$\mathcal{D}$. Then for a particle at position $\mathbf{x}$, its body-
orientation is drawn independently according to a von Mises distribution
$M_{{\mathbb{A}}_{\mbox{\scriptsize mill}}(0,z)}$ with
${\mathbb{A}}_{\mbox{\scriptsize mill}}(0,z)$ defined by (29) (with
$\xi=2\pi/L_{z}$). This time, the von Mises distribution is peaked around a
point which depends on $z$. For each slice, the position of the RPZ (48)
depends on $m$. Since ${\mathbb{A}}_{\mbox{\scriptsize mill}}(0,z)$ is
$L_{z}$-periodic, the RPZ-curve is a discrete closed circle (Figure 12c). Note
that the RPX-curve of a HW is similar.
Figure 12: Examples of RPZ-curves: in each figure, the roll Polarization RPZ
vectors corresponding to $M=1000$ slices are plotted. The color bar to the
right of each figure assigns a unique color to each slice. The same color is
used to plot the corresponding RPZ. In each figure the unit circle and its
center are represented in blue. (a) Disordered state: all RPZ concentrate near
the origin. (b) FS: all RPZ concentrate on a point close to the unit circle.
(c) MO (29): the RPZ-curve is a discrete circle centered at the origin and of
radius close to unity. The total number of particles is $N=1.5\cdot 10^{6}$.
Note that in Figs. (a) and (b), all RPZ are superimposed and only the last one
(in magenta color) is visible. (d) Quantifiers of RPZ curve morphology: point
$G$ (in red) is the center-of-mass of the RPZ curve and $d_{z}$ is its distance to the origin $O$ (shown in blue). The mean radius $\bar{r}_{z}$ of the RPZ curve is illustrated by the black broken-line circle, which has the same radius. The winding number, which is the number of turns one makes following
the spectrum of colors in the same order as in the color bar from bottom to
top (the green arrow indicates the direction of progression along the RPZ
curve) is $w_{z}=-1$ in this example.
From Figure 12, we realize that three quantities of interest can be extracted
from the RPZ-curve:
1. 1.
the distance of its center of mass to the origin $d_{z}$:
$d_{z}=\Big{|}\frac{1}{M}\sum_{m=1}^{M}\mathbf{\bar{u}}_{m}\Big{|},$ (49)
2. 2.
its mean distance to the origin $\bar{r}_{z}$:
$\bar{r}_{z}=\frac{1}{M}\sum_{m=1}^{M}|\mathbf{\bar{u}}_{m}|,$ (50)
3. 3.
its winding number about the origin $w_{z}$: for $m\in\\{1,\ldots,M\\}$, let
$\beta_{m}=\mathrm{arg}\big{(}(\mathbf{\bar{u}}_{m})^{1}+i(\mathbf{\bar{u}}_{m})^{2}\big{)}\in[0,2\pi)$
(with
$\mathbf{\bar{u}}_{m}=((\mathbf{\bar{u}}_{m})^{1},(\mathbf{\bar{u}}_{m})^{2})^{\mathrm{T}}$)
and $\delta\beta_{m+1/2}\in[-\pi,\pi)$ be such that
$\delta\beta_{m+1/2}\equiv\beta_{m+1}-\beta_{m}$ modulo $2\pi$, where we let
$\beta_{M+1}=\beta_{1}$. Then:
$w_{z}=\frac{1}{2\pi}\sum_{m=1}^{M}\delta\beta_{m+1/2},$
(see e.g. [62, p. 176]).
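A sketch of the computation of the triple $(d_{z},\bar{r}_{z},w_{z})$ from the $M\times 2$ array of RPZ vectors (the winding number is computed here by summing wrapped angle increments, as in the formula above, rather than by the nonzero-rule algorithm):

```python
import numpy as np

def rpz_indicators(u):
    """Triple (d_z, r_z, w_z) of Eqs. (49)-(50) and the winding number,
    computed from the (M, 2) array of roll-polarization vectors."""
    d = np.linalg.norm(u.mean(axis=0))                   # Eq. (49)
    r = np.linalg.norm(u, axis=1).mean()                 # Eq. (50)
    beta = np.arctan2(u[:, 1], u[:, 0])                  # angle of each RPZ
    dbeta = np.diff(np.append(beta, beta[0]))            # close the curve
    dbeta = (dbeta + np.pi) % (2 * np.pi) - np.pi        # wrap to [-pi, pi)
    w = int(round(dbeta.sum() / (2 * np.pi)))
    return d, r, w
```

Applied to a discrete circle of radius one centered at the origin, this returns $(0,1,1)$, consistently with the MO signature (53); a single point on the unit circle returns $(1,1,0)$, the FS signature (52).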
The subscript $z$ indicates that the slicing has been made across $z$. Similar
quantities with an index ’$x$’ will correspond to the slicing made across $x$.
Fig. 12d provides a graphical illustration of the triple
$(d_{z},\bar{r}_{z},w_{z})$. For the examples given above, this triple has the
following values:
$\displaystyle\mbox{Disordered state:}\,(d_{z},\bar{r}_{z},w_{z})=(0,0,\mbox{ND}),\,\mbox{where ND stands for ``not defined''},$ (51)
$\displaystyle\mbox{FS:}\,(d_{z},\bar{r}_{z},w_{z})\approx(1,1,0),$ (52)
$\displaystyle\mbox{MO:}\,(d_{z},\bar{r}_{z},w_{z})\approx(0,1,w),\,\mbox{with}\,\,w\not=0.$
(53)
We have a similar conclusion with $(d_{x},\bar{r}_{x},w_{x})$ for a disordered
state or an FS. For an HW, we have $(d_{x},\bar{r}_{x},w_{x})\approx(0,1,w)$
with $w\not=0$. Thus, monitoring either or both triples (according to the
situation) will give us an indication of the state of the system in the course
of time. In particular, non-trivial topological states are associated with
non-zero winding numbers $w_{x}$ or $w_{z}$. In practice, we will use the
nonzero-rule algorithm to compute the winding numbers numerically [62, p.
176].
## 5 Topological phase transitions: are the MO and HW topologically
protected?
As pointed out in Section 3.4, for the IBM, the MO and HW are only metastable:
they typically persist for a finite time before degenerating into a FS. This
is in stark contrast with the macroscopic model, for which they persist forever. The transition of a MO or HW to a FS implies a topological change. To
analyze whether the MO or HW are more robust due to their non-trivial
topological structure (i.e. are topologically protected), we will compare them
with similar but topologically trivial initial conditions (Sections 5.1, 5.2
and 5.3). We also test their robustness against perturbed initial conditions
and show that, in this case, MO may transition to GS (Section 5.4). In the
Supplementary Material H, we investigate rarer events, where an MO does not
transition directly to an FS but through a HW.
### 5.1 Initial conditions
In Section 5.2, we will compare the solutions of the IBM with different
initial conditions using the perpendicular or parallel twists as building
blocks. Some will have a non-trivial topology and the others, a trivial one.
Specifically we define the following initial conditions.
#### 5.1.1 Milling orbit
Let $\mathcal{D}=[0,L]\times[0,L]\times[0,2L]$ be a rectangular domain with
periodic boundary conditions and let $\xi=2\pi/L$. We consider the following
two initial conditions:
* •
Double mill initial condition MO1:
${\mathbb{A}}_{m,1}(0,z)={\mathcal{A}}(\xi\,z,\mathbf{e}_{1}),\quad
z\in[0,2L],$ (54)
where we recall again that ${\mathcal{A}}(\theta,\mathbf{n})$ is the rotation
of axis $\mathbf{n}\in{\mathbb{S}}^{2}$ and angle $\theta\in{\mathbb{R}}$
defined by (8). This initial condition has non-trivial topology: the curve
generated by the end of the vector $\mathbf{u}$ in the $(y,z)$-plane as $z$
ranges in $[0,2L]$ makes two complete turns around the origin in the same
direction. Thus, this initial condition has winding number equal to $2$.
* •
Opposite mills initial condition MO2:
${\mathbb{A}}_{m,2}(0,z)=\left\\{\begin{array}[]{ll}{\mathcal{A}}(\xi\,z,\mathbf{e}_{1}),&\quad
z\in[0,L],\\\ {\mathcal{A}}(-\xi\,z,\mathbf{e}_{1}),&\quad
z\in[L,2L].\end{array}\right.$ (55)
This initial condition has trivial topology: starting from $z=0$, the curve
generated by the end of the vector $\mathbf{u}$ makes one complete turn around
the origin in the counterclockwise direction until it reaches $z=L$ but then
reverses its direction and makes a complete turn in the clockwise direction
until it reaches $z=2L$. Thus, this initial condition has winding number equal
to $0$ and has trivial topology.
* •
Perturbed double mill initial condition MO3:
${\mathbb{A}}_{m,3}(0,z)={\mathcal{A}}(\xi\,z+\sqrt{\sigma}B_{z},\mathbf{e}_{1}),\quad
z\in[0,2L],$ (56)
where $(B_{z})_{z}$ is a given one-dimensional standard Brownian motion in the
$z$ variable and $\sigma>0$ is a variance parameter which sets the size of the
perturbation. The Brownian motion is subject to $B_{0}=B_{2L}=0$ (i.e. it is a
Brownian bridge). Similarly to the initial condition MO1 (54), this initial
condition has a nontrivial topology, in this case a winding number equal to 2.
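The Brownian bridge entering (56) can be generated on a grid as follows (a standard discretization, pinned to zero at both endpoints; the grid size `n` is our own choice):

```python
import numpy as np

def brownian_bridge(n, length, sigma, rng=None):
    """Discretized sqrt(sigma) * B_z on a grid of n points over [0, length],
    with B_0 = B_length = 0, used to perturb the roll in Eq. (56).
    A sketch: a Brownian path linearly pinned to zero at both ends."""
    rng = np.random.default_rng() if rng is None else rng
    dz = length / (n - 1)
    steps = rng.normal(0.0, np.sqrt(dz), n - 1)          # Gaussian increments
    walk = np.concatenate([[0.0], np.cumsum(steps)])     # Brownian path
    z = np.linspace(0.0, length, n)
    bridge = walk - (z / length) * walk[-1]              # pin the endpoint
    return np.sqrt(sigma) * bridge
```

The perturbed roll at grid point $z_{i}$ is then $\xi\,z_{i}$ plus the $i$-th entry of the bridge.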
#### 5.1.2 Helical traveling wave
Let now $\mathcal{D}=[0,2L]\times[0,L]\times[0,L]$. Compared to the previous
case, the domain has size $2L$ in the $x$-direction instead of the
$z$-direction. Let again $\xi=2\pi/L$. We consider now the following two
initial conditions:
* •
Double helix initial condition HW1:
${\mathbb{A}}_{h,1}(0,x)={\mathcal{A}}(\xi\,x,\mathbf{e}_{1}),\quad
x\in[0,2L].$ (57)
This initial condition has non-trivial topology and has winding number equal
to $2$ by the same consideration as for initial condition MO1.
* •
Opposite helices initial condition HW2:
${\mathbb{A}}_{h,2}(0,x)=\left\\{\begin{array}[]{ll}{\mathcal{A}}(\xi\,x,\mathbf{e}_{1}),&\quad
x\in[0,L],\\\ {\mathcal{A}}(-\xi\,x,\mathbf{e}_{1}),&\quad
x\in[L,2L].\end{array}\right.$ (58)
Again, by the same considerations as for MO2, this initial condition has
trivial topology, i.e. winding number equal to $0$.
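All the initial conditions above are built from the axis-angle rotation ${\mathcal{A}}(\theta,\mathbf{n})$. Assuming (8) is the standard axis-angle rotation, it can be evaluated by the Rodrigues formula:

```python
import numpy as np

def axis_angle(theta, n):
    """Rodrigues formula for the rotation A(theta, n) of axis n and angle
    theta; assumed to match the definition (8) referenced in the text."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])               # cross-product matrix [n]_x
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```

For instance, the double-helix field ${\mathbb{A}}_{h,1}(0,x)$ of (57) at position $x$ is `axis_angle(xi * x, [1.0, 0.0, 0.0])`.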
### 5.2 Observation of topological phase transitions
We initialize the IBM by drawing $N$ positions independently and uniformly at random in the spatial domain and $N$ body-orientations independently from
the von Mises distribution $M_{{\mathbb{A}}(0,\mathbf{x})}$ where
${\mathbb{A}}(0,\mathbf{x})$ is one of the initial conditions MO1 or MO2.
Then, we run the IBM and record the various indicators introduced in Section 4
as functions of time. The results are plotted in Fig. 13, as plain blue lines
for the solution issued from MO1 (the topologically non-trivial initial
condition), and as broken orange lines for that issued from MO2 (the
topologically trivial one). We proceed similarly for the two initial
conditions HW1 and HW2 and display the results in Fig. 14. See also Videos 4
to 7 in Section A supplementing Fig. 13 and Videos 8 to 11 supplementing Fig.
14.
Figs. 13a and 14a display the GOP. We observe that, for all initial
conditions, the GOP has initial value GOP1, which is consistent with the fact
that the initial conditions are either MO or HW. Then, again, for all initial
conditions, at large times, the GOP has final value GOP2 which indicates that
the final state is a FS. This is confirmed by the inspection of the second
line of figures in Figs. 13 and 14 which provide the triplet of topological
indicators $(d_{z},\bar{r}_{z},w_{z})$ for MO solutions and
$(d_{x},\bar{r}_{x},w_{x})$ for HW solutions. Specifically, $d_{z}$ and
$d_{x}$ are given in Figs. 13d and 14d respectively, $\bar{r}_{z}$ and
$\bar{r}_{x}$ in Figs. 13e and 14e, and $w_{z}$ and $w_{x}$ in Figs. 13f and
14f. Initially both triplets corresponding to MO1 or HW1 solutions have value
$(0,1,2)$ as they should (see (53)). Their final value is $(1,1,0)$ which
indicates a FS (see (52)). The fact that the final state is a FS implies, for
MO1 and HW1, first that the IBM has departed from the MO and HW exact
solutions of the macroscopic model described in Sections 3.1.2 and 3.1.3, and
second, that a topological phase transition has taken place, bringing the
topologically non-trivial MO1 and HW1 to a topologically trivial FS. For the
topologically trivial MO2 and HW2 initial conditions, no topological phase
transition is needed to reach the FS. The differences in the initial topology
of the solutions induce strong differences in the trajectories followed by the
system.
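The winding-number indicators $w_z$ and $w_x$ tracked above can be estimated from a sampled RPZ or RPX curve by accumulating signed angle increments around the origin. The sketch below is illustrative only (the helper name is ours, not from the paper's code) and assumes the curve is given as complex samples that avoid the origin:

```python
import numpy as np

def winding_number(points):
    """Winding number about the origin of a closed curve sampled as
    complex points (assumed not to pass through 0): sum of the signed
    angle increments between consecutive samples, divided by 2*pi."""
    z = np.asarray(points, dtype=complex)
    # angle of the ratio of consecutive samples = increment wrapped to (-pi, pi]
    dtheta = np.angle(np.roll(z, -1) / z)
    return int(round(dtheta.sum() / (2 * np.pi)))

# a curve winding twice around the origin, like the initial RPZ curve of MO1
t = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
print(winding_number(np.exp(2j * t)))        # → 2
print(winding_number(np.exp(2j * t) + 3.0))  # origin outside the curve → 0
```

When the curve passes close to the origin (the gray-shaded transition zones above), consecutive increments can approach $\pi$ and the estimate becomes unreliable, consistent with the winding number being undefined at the transition itself.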
For the topologically non-trivial initial conditions MO1 or HW1, the system
remains in the MO or HW state for some time; hence it follows the macroscopic
solution during this phase. Indeed, the GOP displays an initial plateau at the
value GOP1, while the triplet of topological indicators stays at the value
$(0,1,2)$, which characterizes the MO or HW state. For MO1, this is also
confirmed by the yaw $\bar{\varphi}$ (Fig. 13c, blue curve), which varies
linearly in time and by the pitch $\bar{\theta}$ (Fig. 13b blue curve) which
is constant in time, consistently with the MO solution of the macroscopic
model (Section 3.1.2) (see also Fig. 8a for the linear variation of the yaw).
The duration of this initial phase, also referred to as the persistence time,
is significantly longer for HW1 than for MO1. In our experiments, the former can reach several hundred units of time and sometimes exceeds our computational capacity. By contrast, the latter is random and of the order of
ten units of time. After this initial plateau, the GOP decreases until it
reaches a minimum at a time highlighted in Figs. 13, 14 and subsequent figures
by a gray shaded zone, showing that the system passes through a state of
maximal disorder. Around that time, $\bar{r}$ has a sharp drop which is
another confirmation of increased disorder. The topological transition
precisely occurs at this time with a transition of the winding number from $2$
to $0$ through a short sequence of oscillations. However, $\bar{r}$ has not
reached $0$ and $d$ has already started to increase, which suggests that
disorder is not complete. At this time also, the linear variation of
$\bar{\varphi}$ suddenly stops and $\bar{\varphi}$ remains constant afterward,
while $\bar{\theta}$ shows a small oscillation and jump. For HW1,
$\bar{\theta}$ and $\bar{\varphi}$ are initially plateauing with small
oscillations. At the time when the system leaves the HW state (around $t\simeq
178$), we observe a sudden drop of $\bar{\varphi}$ from $2\pi$ to $\pi$ which
indicates that the system suddenly reverses its average direction of motion.
The GOP starts to decrease significantly before this time so we can infer that
during the time period between $t\simeq 125$ and $t\simeq 178$, even though
the mean direction of motion $\bar{\Omega}$ remains constant, groups of
particles of almost similar proportions are moving in opposite directions,
which preserves the average direction of motion (and may explain the
oscillations during the initial persistence phase). This is confirmed by Video
8 (see description in Section A). Then, once this minimum is reached, the GOP
increases quickly to finally reach the value GOP2 of the FS. Likewise,
$\bar{r}$ and $d$ quickly reach the value $1$ while the winding number stays
at the value $0$.
By contrast to the previous case, the system departs at once from the topologically trivial initial conditions MO2 or HW2, as shown by the GOP immediately leaving the value GOP1. For HW2 the GOP increases right after
initialization and smoothly reaches the value GOP2, at a much earlier time
than HW1. The trend is different for MO2. In this case, the GOP first
decreases. Then, after a minimum value, it increases again and smoothly
reaches the value GOP2 at a time similar to MO1. The initial decay of the GOP
for the MO2 solution can be explained by the fact that the macroscopic
direction $\Omega$ turns in opposite directions for the two opposite mills,
thus decreasing the global order. For HW2, the macroscopic direction stays
constant and uniform, hence the same for the two opposite helices, giving rise to a larger GOP. The mean radii $\bar{r}_{z}$ and $\bar{r}_{x}$ stay constant in time, showing that the evolutions of MO2 and HW2 do not involve
phases of larger disorder. The quantity $d_{x}$ increases monotonically
towards the value $1$ while $d_{z}$ is subject to some oscillations close to
convergence. This is due to the fact that the RPZ or RPX curves stay arcs of
circles with decreasing arc length for the RPX and with some arc length
oscillations for the RPZ as displayed in Videos 7 and 11. Of course, the
winding number stays constant equal to $0$ as it should for topologically
trivial solutions. In both the MO2 and HW2 cases, $\bar{\theta}$ and
$\bar{\varphi}$ remain constant throughout the entire simulation. In the MO2
case, this is the consequence of the two counter-rotating mills which preserve
the direction of motion on average. In the HW2 case, this is due to the fact
that there is no variation of the direction of motion for HW solutions in
general (see also Video 6 and Video 10). Again, we observe that the
convergence towards the FS takes more time for HW2 than for MO2. This points
towards a greater stability of the HW-type solutions compared to the MO ones.
Figure 13: Examples of solutions of the IBM for initial conditions sampled
from the double mill MO1 (plain blue curves) and the opposite mills MO2 (broken
orange curves). The following indicators are plotted as functions of time: (a)
Global Order Parameter (GOP) (see Eq. (42)). Horizontal lines at GOP values
$0.25$, $0.45$ and $0.85$ materialize the special values GOP0, GOP1 and GOP2
respectively corresponding to totally disordered states, MO or HW, and FS (see
Eqs. (45)-(47)). (b) Pitch angle $\bar{\theta}$ of the global particle average
direction $\bar{\Omega}$ (see (38)). (c) Yaw $\bar{\varphi}$ of
$\bar{\Omega}$. (d) Distance of center of mass of RPZ curve to the origin
$d_{z}$ (see (49)). (e) Mean distance of RPZ curve to the origin $\bar{r}_{z}$
(see (50)). (f) Winding number of RPZ curve $w_{z}$ (see (49)). Gray shaded
zones highlight a small region around the time of minimal GOP for the MO1
solution. Parameters: $N=3\,10^{6}$, $R=0.025$, $\kappa=10$, $\nu=40$,
$c_{0}=1$, $L=1$, $\xi=2\pi$. See also Videos 4 to 7 in Section A.
Figure 14: Examples of solutions of the IBM for initial conditions sampled
from the double helix HW1 (plain blue curves) and the opposite helices HW2
(broken orange curves). The following indicators are plotted as functions of
time: (a) Global Order Parameter (GOP). (b) Pitch angle $\bar{\theta}$ of
$\bar{\Omega}$. (c) Yaw $\bar{\varphi}$ of $\bar{\Omega}$. (d) Distance of
center of mass of RPX curve to the origin $d_{x}$. (e) Mean distance of RPX
curve to the origin $\bar{r}_{x}$. (f) Winding number of RPX curve $w_{x}$.
Gray shaded zones highlight a small region around the time of minimal GOP for
the HW1 solution. The HW2 and HW1 solutions are computed during 200 and 250
units of time respectively. The two simulations have reached equilibrium by
their final time. Parameters: $N=3\,10^{6}$, $R=0.025$, $\kappa=10$, $\nu=40$,
$c_{0}=1$, $L=1$, $\xi=2\pi$. See caption of Fig. 13 for further indications.
See also Videos 8 to 11 in Section A.
### 5.3 Reproducibility
Since the IBM is a stochastic model, one may wonder whether Figs. 13 and 14
are representative of a typical solution. In Fig. 15, the GOP is plotted as a
function of time for 20 independent simulations with MO1 initial conditions
and the same parameters as in Fig. 13 (blue curves). The same features as in
Fig. 13 are observed, namely: (i) an initial stable milling phase which lasts
about 10 units of time; (ii) a decrease of the GOP between approximately 10 to
15 units of time; (iii) a subsequent increase of the GOP which reaches the
value $\mathrm{GOP}_{2}$ of the FS. A similar reproducibility of the results
has been observed for the other initial conditions (MO2, HW1, HW2) (not
shown).
Figure 15: GOP as a function of time for 20 independent simulations of the
transition from a MO to a FS starting from MO1. The parameters are the same as in Fig. 13.
### 5.4 Robustness against perturbations of the initial conditions
In this section, we study the robustness of the MO when the initial condition
is randomly perturbed as described by the initial condition MO3 (56). Three
typical outcomes for three different values of the perturbation size $\sigma$
are shown in Fig. 16. For each value of $\sigma$, the temporal evolution of
the four main indicators are shown: the GOP (Figs. 16a, 16e, 16i), the mean
polar angle or pitch (Figs. 16b, 16f, 16j), the mean azimuthal angle or yaw
(Figs. 16c, 16g, 16k) and the winding number along the $z$-axis (Figs. 16d,
16h, 16l). For small to moderate values (approximately $\sigma<100$), the
outcomes of the simulation are the same as in Fig. 13 and are not shown.
However, they demonstrate the robustness of the topological solutions. When
$\sigma$ increases and crosses this threshold, the behavior becomes different.
Around this threshold (for $\sigma=134$), in Fig. 16a, we observe that the GOP
does not remain initially constant (contrary to the un-perturbed case shown in
Fig. 13a) but immediately decreases, then increases and oscillates around the
value $\mathrm{GOP}_{1}$ before transitioning towards the value
$\mathrm{GOP}_{2}$ corresponding to a FS. In Figs. 16c and 16d, we observe
that the MO is preserved for a comparable, though slightly longer, time than in
Figs. 13c and 13f (around 20 units of time) before degenerating into a FS.
Past this threshold, as $\sigma$ increases further up to another
threshold value around $\sigma\simeq 1000$, a new topological phase transition
is observed from a MO with winding number 2 to a GS (36) with winding number
1. For $\sigma=753$, the GOP shown in Fig. 16e initially strongly oscillates
around the value $\mathrm{GOP}_{1}$ before stabilizing, still around this
value, which is in stark contrast with the previous experiments. The winding
number shown in Fig. 16h reveals that this final steady behavior is linked to
a winding number equal to 1 after a transition around $t\simeq 12$.
Consequently, a milling behavior is observed in Fig. 16g for the mean
azimuthal angle. After the transition, this angle still evolves linearly but at approximately half the speed, as expected since the
winding number has dropped from 2 to 1. However, the final mean polar angle
$\bar{\theta}$ shown in Fig. 16f is not equal to $\pi/2$. Since the gradient
in body-orientation is along the $z$-axis, this indicates that the final state
corresponds to a GS rather than a standard MO. This demonstrates that the
family of generalized topological solutions enjoys greater stability. The
transition between MO and GS has not been observed when starting from a non-
perturbed initial state. However, starting with perturbed initial conditions,
the MO and GS with winding number 1 seem stable during several tens of units
of time.
The transition between MO and GS with different winding numbers happens when
the perturbation size is large enough and seems to be the typical behavior:
out of 6 independent simulations for values of $\sigma$ evenly spread between
258 and 876, 5 simulations led to a MO or a GS with winding number 1 stable
for more than 50 units of time. The other one led to a FS. We conjecture
that the perturbation brings the system to a state closer to the MO with
winding number 1, in particular due to the stochastic spatial inhomogeneities
of the perturbation. In the particle simulations, we observe that the density
of agents does not remain uniform, which creates different milling zones with
possibly different milling speeds depending on the local gradient of body-
orientations. The denser region then seems to attract the other particles
before expanding into the full domain. The global direction of motion is not
necessarily preserved during this process. In comparison, starting from an
unperturbed MO with winding number 2, the density remains uniform and the
system is globally subject to numerical errors which homogeneously degrade the
topology up to the point that the system becomes closer to a FS. The situation
is analogous when the size of the perturbation is too large, as shown in Figs. 16i, 16k, 16l for $\sigma=1000$: the MO is preserved for less than 5 units of time and, after an immediate drop of the GOP, the system quickly reaches a
FS.
Figure 16: Different outcomes of the simulation of the IBM starting from
perturbed initial MO. Only the four main indicators are shown: from left to
right, the GOP, the mean polar angle (or pitch) $\bar{\theta}$, the mean
azimuthal angle (or yaw) $\bar{\varphi}$ and the winding number $w_{z}$.
(a)-(d) For $\sigma=134$, the system stays a MO for a long time ($t\simeq 20$)
but eventually converges to a FS; (e)-(h) for $\sigma=753$, the system
converges towards a generalized solution with a polar angle not equal to
$\pi/2$ and a winding number equal to 1 along the $z$-axis; (i)-(l) for
$\sigma=1000$, the MO is quickly disrupted (at $t\simeq 5$) and converges
almost immediately towards a FS. Parameters: $N=3\,10^{6}$, $R=0.025$,
$\kappa=10$, $\nu=40$, $c_{0}=1$, $L=1$, $\xi=2\pi$.
### 5.5 Critique
The existence of a persistence time for the MO1 and HW1 solutions suggests
that they enjoy some kind of topological protection against the noisy
perturbations induced by the IBM and that MO2 and HW2 do not have such
protection. However, since explicit solutions of the SOHB model for the
initial conditions MO2 and HW2 are not available, it is not possible to assess
the role of noise in the observed evolutions of the MO2 and HW2 solutions. So,
further investigations are needed to confirm that non-trivial topology
actually provides increased robustness against perturbations. Moreover, MO1 is robust against perturbed initial conditions, and the MO and GS with winding number 1 seem to be much more stable than those with winding number 2.
## 6 Discussion and conclusion
An Individual Based Model describing the alignment of body-orientations in 3D
and its macroscopic limit have been presented. The model involves new kinds of
internal degrees of freedom involving geometrical constraints, here due to the
manifold structure of SO${}_{3}({\mathbb{R}})$, leading to new types of self-
organized phenomena. In particular, the macroscopic model has been shown to
host special solutions with non-trivial topological structures. Corresponding
solutions of the Individual Based Model have been computed and their non-
trivial topological structure, shown to persist for a certain time before
being destroyed by noise-induced fluctuations. Quantitative estimates of the
agreement between the Individual Based Model and the Macroscopic model have
been given. This study provides further evidence of the role of geometry and
topology in the emergence of self-organized behavior in active particle
systems. The model presented in this article opens many new research
directions. Some of them are listed below.
1. 1.
The stability of the MO (27), HW (33) and GS (36) solutions as well as those
of the generalized HW solutions described in Section F is an open problem. It
would enable us to investigate the potential link between topological
structure and stability.
2. 2.
Numerical simulations have been carried out in a periodic setting. Real
systems though are confined by solid walls. To model the influence of
confinement, it is necessary to explore wider classes of boundary conditions.
3. 3.
Most topological states in physical systems consist of linear perturbations of
bulk states that propagate on the edges of the system (edge states). It would
be interesting to determine whether linear perturbations of the MO or HW
solutions could host such edge states.
4. 4.
Beyond the mean-field limit $N\to\infty$, it would be interesting to quantify
the fluctuation about the mean-field, for instance through a large deviation
approach (see e.g. [6, 7, 11, 30, 47, 68]).
5. 5.
Direct numerical simulations of the macroscopic model need to be developed to
answer some of the questions raised by the study of topological protection
(see Section 5).
6. 6.
It is desirable to develop more sophisticated topological indicators to gain
better insight into the topological structure of the solutions.
7. 7.
The multiscale approach developed here could be extended to other
geometrically structured systems involving e.g. a wider class of manifolds
which would enlarge the applicability of the models.
Supplementary Material
## Appendix A List of supplementary videos
This article is supplemented by several videos which can be accessed by
following this link:
https://figshare.com/projects/Bulk_topological_states_in_a_new_collective_dynamics_model/96491.
They are listed and described below.
###### Video 1.
It supplements Fig. 3 of Section 2.1.2 and provides a visualization of the
time evolution of the system considered in this figure.
###### Video 2.
It supplements Fig. 6 of Section 3.1.2: it provides a visualization of the
time evolution of a MO. Several frames
${\mathbb{A}}=(\Omega,\mathbf{u},\mathbf{v})\in$ SO${}_{3}({\mathbb{R}})$ are
placed at various locations of space and evolve according to (27) (with
arbitrary chosen parameters). The vectors $\Omega$, $\mathbf{u}$ and
$\mathbf{v}$ are displayed respectively in red, green and blue.
###### Video 3.
It supplements Fig. 7 of Section 3.1.3: it provides a visualization of the
time evolution of a HW. See caption of Video 2 for details on the graphical
representation.
###### Video 4.
It supplements Fig. 13 in Section 5.2. It shows the time-evolution of the
particles for the initial condition MO1 (54). For clarity, only a sample of
5000 particles is shown. We refer to Fig. 3a for details on the
representation of the body orientation using four-colored tetrahedra. We
notice the ensemble rotation of the particle directions about the $z$ axis
until an instability disrupts the body orientation twist along the $z$ axis
(around time $t\approx 13$) and eventually drives the system to a FS.
###### Video 5.
It supplements Fig. 13 in Section 5.2. It provides the time-evolution of the
RPZ curve for the initial condition MO1 (54). The RPZ curve remains a circle
until time $t\approx 8$, when its radius starts to shrink. Then, the RPZ-curve
shows a fairly chaotic dynamics during which the topology is lost. This
happens around time $t\approx 13$ which is the first time when the RPZ-curve
passes through the origin; at this time, the winding number is not defined.
Then, the RPZ-curve slowly migrates towards the unit circle while shrinking to
a single point which signals a FS. From time $t\approx 15$ on, it remains a
single immobile point.
###### Video 6.
It supplements Fig. 13 in Section 5.2. It shows the time-evolution of the
particles for the initial condition MO2 (55). For clarity, only a sample of
5000 particles is shown (see Fig. 3a for details on the representation of the
body orientation). We notice the counter-rotation of the particle directions
about the $z$ axis in the bottom and top halves of the domain, corresponding
to the opposite mills. These two counter-rotations gradually dissolve while
the solution approaches the FS.
###### Video 7.
It supplements Fig. 13 in Section 5.2. It provides the time-evolution of the
RPZ curve for the initial condition MO2 (55). The circle formed by the initial
RPZ curve immediately opens. The opening width constantly increases, until the
arc is reduced to a single point opposite to the opening point at time
$t\approx 10$. Then there is a bounce and the arc forms again and increases in
size until it reaches a maximum and decreases again. Several bounces are
observed with decreasing amplitudes. These bounces result in the non-monotonic behavior of the quantity $d_{z}$ displayed in Fig. 13d.
###### Video 8.
It supplements Fig. 14 in Section 5.2. It shows the time-evolution of the
particles for the initial condition HW1 (57) (see Fig. 3a for details on the
representation of the body orientation). For clarity, only a sample of 5000
particles is shown. Before time $t\simeq 125$, we observe a steady HW state.
Then, after time $t\approx 125$, the particles show an undulating wave-like
behavior, with slowly increasing frequency and amplitude, which causes the
decrease of the GOP. Around time $t\approx 178$, the particles are divided
into two groups with pitch angles $\theta\simeq 0$ and $\theta\simeq\pi$,
which suddenly reverses the global direction of motion. After time $t\approx
178$, the particles quickly adopt the same body-orientation. Shortly after
time $t=178$, the particles still have an undulating behavior but it quickly
fades away until a FS is reached.
###### Video 9.
It supplements Fig. 14 in Section 5.2. It shows the time-evolution of the RPX-
curve for the initial condition HW1. Unlike in the MO case, the RPX curve does not shrink to the center of the circle before migrating to its limiting point: it is directly attracted by the limiting point near the unit circle towards which it converges. During this transition, the circular shape of the RPX curve is preserved until it reduces to a point.
###### Video 10.
It supplements Fig. 14 in Section 5.2. It shows the time-evolution of the
particles for the initial condition HW2 (58). For clarity, only a sample of
5000 particles is shown (see Fig. 3a for details on the representation of the
body orientation). At the beginning, we see two opposite alternations of the
three side colors of the tetrahedra (green-blue-magenta followed by green-
magenta-blue), which signals a double parallel twist. Then, gradually, the
green color is eaten up by the blue and magenta ones and only one alternation
of the blue and magenta colors remains. Then the color alternation fades away, giving way to a homogeneous color, showing that the body orientations have
stopped rolling and a FS is attained.
###### Video 11.
It supplements Fig. 14 in Section 5.2. It provides the time-evolution of the
RPX curve for the initial condition HW2 (58). The circle formed by the initial
RPX curve immediately opens. The opening width constantly increases, although
at a slower pace than for MO2 (see Video 7). Here, also in contrast with the MO2 case, the monotonic opening of the arc results in a monotonically increasing quantity $d_{x}$, as shown in Fig. 14d.
###### Video 12.
It supplements Fig. 18 in Section H.1. It shows the time-evolution of the
particles for a MO initial condition (54) in a rare case where it evolves into
a HW. For clarity, only a sample of 5000 particles is shown (see Fig. 3a for
details on the representation of the body orientation). It starts like Video 4
with the ensemble rotation of the particle directions about the $z$ axis until
an instability initiated at time $t\approx 10$ gradually disrupts this
organization. However, the disruption does not drive the system to an FS, but
rather to a HW as shown by the alternations of blue, green and magenta colors
propagating along the particle orientations.
###### Video 13.
It supplements Fig. 18 in Section H.1. It provides the time-evolution of the
RPZ curve for a MO initial condition (54) in a rare case where it evolves into
a HW. The behavior is essentially the same as in Video 5 except that the RPZ-
curve shrinks to a single point far away from the unit circle. This shows that
the end state of the RPZ-curve is closer to disorder than for a milling to
flocking transition. Before that, the non-trivial topology across $z$ is lost
following a similar scenario as for the milling-to-flocking transition.
###### Video 14.
It supplements Fig. 18 in Section H.1. It provides the time-evolution of the
RPX curve for a MO initial condition (54) in a rare case where it evolves into
a HW. Initially, the RPX-curve is reduced to the origin, showing total
disorder across the $x$ direction. Then, after some chaotic transient, a
closed curve enclosing the origin is formed. This curve initially stays close
to the origin, still showing strong disorder. But gradually, the radius of the
curve increases and approaches the unit circle. Thus, across $x$, the topology
is initially undefined, but when it builds up, it shows its non-trivial
character, the emerging RPX-curve having non-zero winding number about the
origin.
###### Video 15.
It supplements Fig. 19 in Section H.2. It shows the time-evolution of the
particles for a MO initial condition (54) in a rare case where it evolves into
a FS through a transient HW. For clarity, only a sample of 5000 particles is
shown (see Fig. 3a for details on the representation of the body orientation).
The point of view is changed from Video 12 to better visualize the transient
HW moving along the diagonal, appearing around time $t\approx 16$. At the
beginning we witness the ensemble rotation of the particles and its disruption
by an instability. After some chaotic behavior, the transient HW establishes
as shown by the alternations of blue, green and magenta colors propagating
along the diagonal. But after some time, the HW structure is disrupted again
and the system eventually establishes a FS.
###### Video 16.
It supplements Fig. 19 in Section H.2. It provides the time-evolution of the
RPZ curve for a MO initial condition (54) in a rare case where it evolves into
a FS through a transient HW. The behavior is essentially the same as in Video
5 except that the RPZ-curve undergoes a longer-lasting chaotic dynamics before
shrinking to a point which migrates towards the unit circle.
## Appendix B Quaternion framework
Despite its formal simplicity, the SO${}_{3}({\mathbb{R}})$-framework used in
the definition of the Individual Based Model is not well suited to numerical
simulations due to the high computational cost required to store and
manipulate rotation matrices. A more efficient representation of rotations in
${\mathbb{R}}^{3}$ is the quaternion representation based on the group
isomorphism
$\begin{array}[]{rcl}\Phi:\mathbb{H}/\pm 1&\longrightarrow&\mbox{SO}_{3}({\mathbb{R}})\\ q&\longmapsto&\Phi(q):\mathbf{w}\in{\mathbb{R}}^{3}\mapsto\{q[\mathbf{w}]q^{*}\}\in{\mathbb{R}}^{3},\end{array}$
where the 3-dimensional vector
$\mathbf{w}=(w_{1},w_{2},w_{3})^{\mathrm{T}}\in{\mathbb{R}}^{3}$ is identified
with the pure imaginary quaternion denoted by
$[\mathbf{w}]=iw_{1}+jw_{2}+kw_{3}$ and $q^{*}$ denotes the conjugate
quaternion to $q$. Conversely, the pure imaginary quaternion
$q=iq_{1}+jq_{2}+kq_{3}$ is identified with the 3-dimensional vector denoted
by $\{q\}:=(q_{1},q_{2},q_{3})^{\mathrm{T}}$. Note that for any quaternion
$q$ and any vector $\mathbf{w}\in{\mathbb{R}}^{3}$, the quaternion
$q[\mathbf{w}]q^{*}$ is a pure imaginary quaternion. The group of unit
quaternions is denoted by $\mathbb{H}$ and is homeomorphic to the sphere
$\mathbb{S}^{3}\subset{\mathbb{R}}^{4}$.
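For concreteness, the action of $\Phi(q)$ can be checked numerically. The following is an illustrative sketch of the Hamilton product and of the conjugation $\mathbf{w}\mapsto\{q[\mathbf{w}]q^{*}\}$ (our own helper names, not code from the paper), using the $(w,x,y,z)$ storage convention:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product; quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def rotate(q, w):
    """Phi(q) applied to w in R^3: embed w as the pure quaternion [w],
    conjugate by q, and read off the imaginary part {q [w] q*}."""
    conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.array([0.0, *w])), conj)[1:]

# unit quaternion for a rotation of angle pi/2 about the z-axis
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(rotate(q, [1.0, 0.0, 0.0]))  # → e2, i.e. [0, 1, 0]
```

Note that $q$ and $-q$ yield the same rotation, which is the quotient by $\pm 1$ in the isomorphism above.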
We refer the reader to [38, Section 2] and [37, Appendix A] where details
about the equivalence between the two representations can be found. Note that
[37] studies a model in a full quaternion framework. Table 2 below summarizes
how the different objects can be computed in either of the two
representations.
| Matrix | Quaternion
---|---|---
Orientation | $A\in\mbox{SO}_{3}({\mathbb{R}})$ | $q\in\mathbb{H}/\pm 1$ such that $\Phi(q)=A$
Flux | $J_{k}=\sum_{j}K(\mathbf{X}_{k}-\mathbf{X}_{j})A_{j}$ | $Q_{k}=\sum_{j}K(\mathbf{X}_{k}-\mathbf{X}_{j})\,(q_{j}\otimes q_{j}-1/4\mbox{I}_{4})$
Mean orientation | ${\mathbb{A}}=\mbox{arg\,max}\{A\mapsto A\cdot J\}$ | $\bar{q}\in\mathbb{H}$ eigenvector associated to the largest eigenvalue of $Q$
Von Mises distribution | $\displaystyle{M_{\mathbb{A}}(A)=\frac{\exp(\kappa{\mathbb{A}}\cdot A)}{\mathcal{Z}}}$ | $\displaystyle{M_{\overline{q}}(q)=\frac{\exp(2\kappa(\overline{q}\cdot q)^{2})}{\mathcal{Z}}}$
Table 2: Matrix vs quaternion formulation
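The quaternion-side computation of the mean orientation in Table 2 can be sketched as follows (illustrative code with our own helper name; `numpy.linalg.eigh` returns eigenvalues in ascending order, so the last column is the eigenvector of the largest eigenvalue):

```python
import numpy as np

def mean_orientation(quats, weights):
    """Mean orientation as the unit eigenvector associated with the largest
    eigenvalue of Q = sum_j w_j (q_j q_j^T - I_4/4), as in Table 2.
    Working with outer products makes the result invariant under q -> -q,
    consistent with the identification of H with H/(+-1)."""
    Q = sum(w * (np.outer(q, q) - 0.25 * np.eye(4))
            for q, w in zip(quats, weights))
    vals, vecs = np.linalg.eigh(Q)  # eigenvalues in ascending order
    return vecs[:, -1]
```

The returned eigenvector is only defined up to sign, which is harmless since $\bar{q}$ and $-\bar{q}$ represent the same mean body orientation.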
## Appendix C Numerical methods
The IBM (3), (5) has been discretized within the quaternion framework using
the time-discrete algorithm described in Table 3 below. This table shows one
iteration of the algorithm during which the positions
$\mathbf{X}_{k}^{n}\in{\mathbb{R}}^{3}$ and orientations
$q_{k}^{n}\in\mathbb{H}$ for $k\in\{1,\ldots,N\}$ are updated into
$\mathbf{X}_{k}^{n+1}$ and $q_{k}^{n+1}$ respectively.
Algorithm: Iteration $n\to n+1$ of the time-discrete algorithm
---
1\. Update the positions: for $k\in\{1,\ldots,N\}$, set
$\mathbf{X}_{k}^{n+1}=\mathbf{X}_{k}^{n}+c_{0}\,\{q_{k}^{n}[\mathbf{e}_{1}](q_{k}^{n})^{*}\}\,\Delta t$
2\. Draw a subset $I\subset\{1,\ldots,N\}$ of jumping agents: for each agent $k\in\{1,\ldots,N\}$, draw a random number $r_{k}$ uniformly in $[0,1]$. If $r_{k}>\exp(-\nu\,\Delta t)$, then $k\in I$.
3\. Compute the local flux: for $k\in I$, compute
$\overline{Q}_{k}^{n}=\frac{1}{N}\sum_{j=1}^{N}K(\mathbf{X}_{k}^{n}-\mathbf{X}_{j}^{n})\,(q_{j}^{n}\otimes
q_{j}^{n}-\frac{1}{4}\mbox{I}_{4}).$
4\. Update the orientations: for $k\in I$, compute a unit eigenvector $\overline{q}_{k}^{n}$ of $\overline{Q}_{k}^{n}$ associated with the maximal eigenvalue and draw $q_{k}^{n+1}\sim M_{\overline{q}_{k}^{n}}$.
Table 3: One iteration of the time-discrete algorithm
At step 2, the Poisson process is discretized with a time step $\Delta t$
during which the indices of the jumping agents are recorded. In the
simulations $\Delta t$ has to be chosen small enough so that the event that an
agent jumps twice or more during a time interval of size $\Delta t$ is
negligible. In all the simulations, we take $\Delta t$ such that $\nu\,\Delta
t=10^{-2}$.
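Steps 1 and 2 above can be sketched as follows. This is an illustrative Python sketch (our own function names; parameter defaults chosen to match $\nu\,\Delta t=10^{-2}$ with $\nu=40$), omitting the flux computation and von Mises resampling of steps 3 and 4:

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_e1(q):
    """Self-propulsion direction {q [e1] q*}: the first column of Phi(q),
    for an array of unit quaternions stored row-wise as (w, x, y, z)."""
    w, x, y, z = q.T
    return np.stack([w*w + x*x - y*y - z*z,
                     2.0 * (x*y + w*z),
                     2.0 * (x*z - w*y)], axis=-1)

def step_positions_and_jumps(X, q, c0=1.0, nu=40.0, dt=2.5e-4, L=1.0):
    """Steps 1-2 of one iteration: transport the positions in the periodic
    box [0, L)^3, then draw the set I of agents that jump during dt
    (each independently with probability 1 - exp(-nu dt))."""
    X = (X + c0 * rotate_e1(q) * dt) % L
    I = rng.random(len(q)) > np.exp(-nu * dt)
    return X, I
```

The modulo operation implements the periodic boundary conditions used in all simulations of the paper; the jump set `I` then selects the agents whose orientations would be resampled at step 4.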
At step 4, a random quaternion $q$ sampled from a von Mises distribution with
prescribed mean orientation $\bar{q}$ can be obtained as $q=\bar{q}r$ where
$r\in\mathbb{H}$ is sampled from a von Mises distribution with mean
orientation 1 (see [38, Proposition 9]). An efficient rejection algorithm to
sample von Mises distributions can be found in [66].
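The composition $q=\bar{q}r$ can be sketched as below. The rejection sampler shown here is a naive one of our own (accept a uniform draw on $\mathbb{S}^{3}$ with probability $\exp(2\kappa(q_{w}^{2}-1))$): it targets the quaternion von Mises density of Table 2 correctly but is inefficient for large $\kappa$, unlike the efficient scheme of [66]:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_vonmises_identity(kappa):
    """Naive rejection sampler for the quaternion von Mises density of
    Table 2 with mean orientation 1, proportional to exp(2*kappa*q_w^2).
    Correct but slow for large kappa ([66] gives an efficient scheme)."""
    while True:
        q = rng.normal(size=4)
        q /= np.linalg.norm(q)  # uniform draw on S^3
        # acceptance ratio: target density over its maximum exp(2*kappa)
        if rng.random() < np.exp(2.0 * kappa * (q[0] ** 2 - 1.0)):
            return q

def quat_mul(p, q):
    """Hamilton product; quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def sample_vonmises(q_bar, kappa):
    """q = q_bar * r with r von Mises around 1 (cf. [38, Proposition 9])."""
    return quat_mul(q_bar, sample_vonmises_identity(kappa))
```

Left-multiplication by $\bar{q}$ transports a sample concentrated around the identity to one concentrated around $\bar{q}$, since the density only depends on $(\bar{q}\cdot q)^{2}$.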
All the simulations in this paper take place in a periodic box of size
$L=(L_{x},L_{y},L_{z})$. The observation kernel $K$ is the indicator of the
ball centered at $0$ and of radius $R>0$. The six parameters of the
simulations are summarized in Table 1.
Finally, we would like to stress that the quaternion formulation is not only a
convenient numerical trick. The equivalence it provides between body-
orientation models and models of nematic alignment of polymers in dimension
four has been exploited in [32] to study phase transitions in the body
alignment model.
## Appendix D Derivation of the macroscopic model
The derivation of the continuum theory presented in Section 2.2 has been
achieved in [38] (see also [32]) following earlier works [35, 37]. It consists
of two steps. The first step is the derivation of a mean-field kinetic model
in the limit $N\to\infty$ showing that the system satisfies the propagation of
chaos property: the agents, seen as random variables in
${\mathbb{R}}^{3}\times$ SO${}_{3}({\mathbb{R}})$ become independent and
identically distributed. Their law is given by the kinetic particle
distribution $f$ which satisfies the following PDE:
$\partial_{t}f+c_{0}\,A\mathbf{e}_{1}\cdot\nabla_{\mathbf{x}}f=\nu\,(\rho_{f}\,M_{{\mathbb{A}}_{K*f}}-f),$
where $\rho_{f}\equiv\rho_{f}(t,\mathbf{x})$ is the local spatial density:
$\rho_{f}(t,\mathbf{x})=\int_{\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}f(t,\mathbf{x},A)\,\mathrm{d}A,$
and ${\mathbb{A}}_{K*f}\equiv{\mathbb{A}}_{K*f}(t,\mathbf{x})$ is the local
average body-attitude defined by
${\mathbb{A}}_{K*f}(t,\mathbf{x}):=\mbox{arg\,max}_{A\in\mbox{\scriptsize{SO}}_{3}({\mathbb{R}})}A\cdot
J_{K*f}(t,\mathbf{x}),$
computed from the local flux:
$J_{K*f}\equiv
J_{K*f}(t,\mathbf{x}):=\iint_{{\mathbb{R}}^{3}\times\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}K(\mathbf{x}-\mathbf{y})\,A\,f(t,\mathbf{y},A)\,\mathrm{d}\mathbf{y}\,\mathrm{d}A.$
From a mathematical point of view, the probability distribution $f\equiv
f(t,\mathbf{x},A)$ is obtained as the limit in law of the empirical measure of
the $N$-particle system. We refer to [43] where a rigorous proof of this
result is presented for a similar model, and to [10] for a related work on the
Vicsek model.
In the macroscopic regime the agent interactions become strong, which is
expressed by the following hydrodynamic scaling:
$\varepsilon\sim\frac{c_{0}}{\nu\,L}\sim\frac{R}{L}\ll 1,$
where $L$ is a typical macroscopic length-scale of the system (such as the
typical size of the flock). We define $\tilde{c}_{0}=\varepsilon\nu
L={\mathcal{O}}(1)$ and $c^{\prime}_{0}=c_{0}/\tilde{c}_{0}$. Then, defining
dimensionless time and space variables $t^{\prime}$ and $\mathbf{x}^{\prime}$
such that $\mathbf{x}=L\mathbf{x}^{\prime}$ and
$t=(L/\tilde{c}_{0})t^{\prime}$, we obtain (dropping the primes for
simplicity):
$\partial_{t}f^{\varepsilon}+c_{0}\,A\mathbf{e}_{1}\cdot\nabla_{\mathbf{x}}f^{\varepsilon}=\frac{1}{\varepsilon}\,(\rho_{f^{\varepsilon}}\,M_{{\mathbb{A}}_{f^{\varepsilon}}}-f^{\varepsilon})+\mathcal{O}(\varepsilon),$
(59)
where
${\mathbb{A}}_{f^{\varepsilon}}\equiv{\mathbb{A}}_{f^{\varepsilon}}(t,\mathbf{x}):=\mbox{arg\,max}_{A\in\mbox{\scriptsize{SO}}_{3}({\mathbb{R}})}A\cdot
J_{f^{\varepsilon}}(t,\mathbf{x}),$
and
$J_{f^{\varepsilon}}\equiv
J_{f^{\varepsilon}}(t,\mathbf{x}):=\int_{\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}\,A\,f^{\varepsilon}(t,\mathbf{x},A)\,\mathrm{d}A.$
This last expression is obtained by Taylor expanding
$J_{K*f^{\varepsilon}}=J_{f^{\varepsilon}}+\mathcal{O}(\varepsilon^{2})$ and
means that the interactions between the agents become spatially localized in
the macroscopic regime.
The macroscopic model is obtained by formally taking the limit $\varepsilon\to
0$ in (59). If such a limit exists, it is necessarily of the form
$f^{\varepsilon}\,\underset{\varepsilon\to
0}{\longrightarrow}\,\rho\,M_{{\mathbb{A}}}$ (60)
where $\rho\equiv\rho(t,\mathbf{x})$ and
${\mathbb{A}}\equiv{\mathbb{A}}(t,\mathbf{x})$ depend on $t$ and $\mathbf{x}$.
Thus, the limiting distribution is fully described by the spatial density of
agents and their average orientation. To obtain a system of equations for
$(\rho,{\mathbb{A}})$, we first use the local conservation of mass:
integrating (59) over SO${}_{3}({\mathbb{R}})$ and noting that the right-hand
side vanishes, we obtain
$\partial_{t}\int_{\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}f^{\varepsilon}\,\mathrm{d}A+c_{0}\,\int_{\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}A\,\mathbf{e}_{1}\cdot\nabla_{\mathbf{x}}f^{\varepsilon}\,\mathrm{d}A=\mathcal{O}(\varepsilon).$
When $\varepsilon\to 0$, assuming (60) and using (43), we obtain (11a).
To obtain an equation for ${\mathbb{A}}$, it could be tempting to pursue this
approach and multiply (59) by $A$ before integrating it over
SO${}_{3}({\mathbb{R}})$. However, the term resulting from the right-hand side
of (59) does not vanish but equals (using (43) again):
$\frac{1}{\varepsilon}\int_{\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}A\,(\rho_{f^{\varepsilon}}\,M_{{\mathbb{A}}_{f^{\varepsilon}}}-f^{\varepsilon})\,\mathrm{d}A=\frac{1}{\varepsilon}\Big{(}\frac{c_{1}}{c_{0}}\,\rho_{f^{\varepsilon}}\,{\mathbb{A}}_{f^{\varepsilon}}-J_{f^{\varepsilon}}\Big{)}\neq
0.$
Due to the factor $\varepsilon^{-1}$, its limit as $\varepsilon\to 0$ is
unknown. An easy fix can be found if, instead of multiplying Eq. (59) by $A$
before integrating it over SO${}_{3}({\mathbb{R}})$, we multiply it by the
quantity
$\psi_{{\mathbb{A}}_{f^{\varepsilon}}}(A):={\mathbb{A}}_{f^{\varepsilon}}^{\mathrm{T}}A-A^{\mathrm{T}}{\mathbb{A}}_{f^{\varepsilon}}$.
The rationale for this choice is that we aim to find an equation
for the time derivative of ${\mathbb{A}}$. Such a derivative must lie in the
tangent space to SO${}_{3}({\mathbb{R}})$ at ${\mathbb{A}}$, denoted by
$T_{\mathbb{A}}$. This suggests multiplying (59) by an element of
$T_{\mathbb{A}}$. Given an arbitrary matrix $A$, a natural way to obtain an
element of $T_{\mathbb{A}}$ is to take its orthogonal projection on
$T_{\mathbb{A}}$, which is given by
$\frac{1}{2}(A-{\mathbb{A}}A^{\mathrm{T}}{\mathbb{A}})$. We could therefore
choose to multiply (59) by this quantity. But a further simplification is
possible by noting that this quantity is equal to
$\frac{1}{2}{\mathbb{A}}\,\psi_{{\mathbb{A}}}(A)$ and that
$\frac{1}{2}{\mathbb{A}}$ does not depend on $A$ and so can be factored out of
the integral with respect to $A$. These considerations naturally lead to the
choice of the antisymmetric matrix $\psi_{{\mathbb{A}}_{f^{\varepsilon}}}(A)$
as a multiplier. Because ${\mathbb{A}}_{f^{\varepsilon}}$ is obtained as the
polar decomposition of $J_{f^{\varepsilon}}$, there exists a symmetric matrix
$S$ such that $J_{f^{\varepsilon}}={\mathbb{A}}_{f^{\varepsilon}}S$. Using
this remark and (43), we easily find that
$\frac{1}{\varepsilon}\int_{\mbox{{\scriptsize
SO}}_{3}({\mathbb{R}})}\psi_{{\mathbb{A}}_{f^{\varepsilon}}}(A)\,(\rho_{f^{\varepsilon}}\,M_{{\mathbb{A}}_{f^{\varepsilon}}}-f^{\varepsilon})\,\mathrm{d}A=0.$
Then, multiplying (59) by $\psi_{{\mathbb{A}}_{f^{\varepsilon}}}$, taking the limit
$\varepsilon\to 0$ and assuming (60) leads to:
$\int_{\mbox{{\scriptsize SO}}_{3}({\mathbb{R}})}(\partial_{t}(\rho
M_{\mathbb{A}})+c_{0}\,A\mathbf{e}_{1}\cdot\nabla_{\mathbf{x}}(\rho
M_{\mathbb{A}}))\,\psi_{{\mathbb{A}}}(A)\,\mathrm{d}A=0.$
Eq. (11b) of the SOHB model follows from this equation through tedious but
straightforward computations detailed in [35, 37].
Note that the simple form of the multiplier $\psi_{{\mathbb{A}}_{f}}$ is due
to the particularly simple expression of the collision operator. In more general
cases, obtaining the multiplier (referred to as the generalized
collision invariant in [41]) is more involved (see e.g. [35, 37, 38]). A
rigorous convergence result for the limit $\varepsilon\to 0$ is not available
to date. In the case of the Vicsek model, such a rigorous result has been
proved in [65].
## Appendix E Alternate expressions of $\delta$
The following lemma provides alternate expressions for $\delta$:
###### Lemma E.1.
We have
$\displaystyle\delta=-\big{\\{}[(\mathbf{u}\cdot\nabla_{\mathbf{x}})\,\Omega]\cdot\mathbf{v}+[(\mathbf{v}\cdot\nabla_{\mathbf{x}})\mathbf{u}]\cdot\Omega+[(\Omega\cdot\nabla_{\mathbf{x}})\mathbf{v}]\cdot\mathbf{u}\big{\\}}$ (61)
$\displaystyle\phantom{\delta}=-\frac{1}{2}\big{\\{}(\nabla_{\mathbf{x}}\times\Omega)\cdot\Omega+(\nabla_{\mathbf{x}}\times\mathbf{u})\cdot\mathbf{u}+(\nabla_{\mathbf{x}}\times\mathbf{v})\cdot\mathbf{v}\big{\\}}.$ (62)
###### Proof.
Eq. (61) follows from inserting the formula
$0=\nabla_{\mathbf{x}}(\Omega\cdot\mathbf{u})=(\Omega\cdot\nabla_{\mathbf{x}})\mathbf{u}+(\mathbf{u}\cdot\nabla_{\mathbf{x}})\Omega+\Omega\times(\nabla_{\mathbf{x}}\times\mathbf{u})+\mathbf{u}\times(\nabla_{\mathbf{x}}\times\Omega),$
and similar formulas after circular permutation of
$\\{\Omega,\mathbf{u},\mathbf{v}\\}$ into (13). Eq. (62) follows from taking
the half sum of (13) and (61) and applying the formula
$\nabla_{\mathbf{x}}\times\mathbf{v}=\nabla_{\mathbf{x}}\times(\Omega\times\mathbf{u})=(\nabla_{\mathbf{x}}\cdot\mathbf{u})\,\Omega-(\nabla_{\mathbf{x}}\cdot\Omega)\,\mathbf{u}+(\mathbf{u}\cdot\nabla_{\mathbf{x}})\Omega-(\Omega\cdot\nabla_{\mathbf{x}})\mathbf{u},$
and similar formulas after circular permutation of
$\\{\Omega,\mathbf{u},\mathbf{v}\\}$. ∎
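The vector identity used in the proof,
$\nabla_{\mathbf{x}}(\mathbf{a}\cdot\mathbf{b})=(\mathbf{a}\cdot\nabla_{\mathbf{x}})\mathbf{b}+(\mathbf{b}\cdot\nabla_{\mathbf{x}})\mathbf{a}+\mathbf{a}\times(\nabla_{\mathbf{x}}\times\mathbf{b})+\mathbf{b}\times(\nabla_{\mathbf{x}}\times\mathbf{a})$,
holds for arbitrary smooth fields and can be checked numerically; the following is a minimal finite-difference sketch (the fields used in the test are illustrative, not those of the lemma).

```python
import numpy as np

def grad_scalar(f, p, h=1e-5):
    """Central-difference gradient of a scalar field f at point p."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2*h)
    return g

def jacobian(F, p, h=1e-5):
    """J[i, j] = dF_i/dx_j by central differences."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (F(p + e) - F(p - e)) / (2*h)
    return J

def curl(F, p, h=1e-5):
    J = jacobian(F, p, h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

def identity_residual(a, b, p):
    """grad(a.b) - [(a.grad)b + (b.grad)a + a x curl(b) + b x curl(a)]."""
    lhs = grad_scalar(lambda q: a(q) @ b(q), p)
    Ja, Jb = jacobian(a, p), jacobian(b, p)
    # ((a.grad)F)_i = sum_j a_j dF_i/dx_j = (J_F a)_i
    rhs = Jb @ a(p) + Ja @ b(p) + np.cross(a(p), curl(b, p)) + np.cross(b(p), curl(a, p))
    return lhs - rhs
```

The residual is of the order of the finite-difference error for any smooth pair of fields.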
## Appendix F MO, HW, GS and generalized HW solutions
In this section, we provide proofs of Lemmas 3.1, 3.2 and 3.3. The
prototypical helical traveling wave (HW) presented in Lemma 3.2 belongs to a
more general class of solutions called generalized HW solutions described in
Section F.2 below.
### F.1 Proof of Lemma 3.1
Starting from the initial condition (29), we are looking for solutions of
(11b) of the form
${\mathbb{A}}(t,\mathbf{x})=\left(\begin{array}[]{ccc}\cos(\omega
t)&u_{1}(t,z)&v_{1}(t,z)\\\ -\sin(\omega t)&u_{2}(t,z)&v_{2}(t,z)\\\
0&u_{3}(t,z)&v_{3}(t,z)\end{array}\right),$
where $\omega\in{\mathbb{R}}$ is an angular velocity which will be related to
the parameters of the problem later and where the basis vectors
$\mathbf{u}=(u_{1},u_{2},u_{3})^{\mathrm{T}}$ and
$\mathbf{v}=(v_{1},v_{2},v_{3})^{\mathrm{T}}$ depend only on the $z$ variable
and time. In this situation, Equation (11a) is trivially satisfied, which means
that the system stays homogeneous in space. Solutions of this form have to
satisfy three geometrical constraints which ensure that ${\mathbb{A}}\in$
SO${}_{3}({\mathbb{R}})$. The first two are
$\Omega\times\mathbf{u}=\mathbf{v}$ and $\mathbf{v}\times\Omega=\mathbf{u}$,
which lead to
${\mathbb{A}}(t,\mathbf{x})=\left(\begin{array}[]{ccc}\cos(\omega
t)&\sin(\omega t)v_{3}(t,z)&-\sin(\omega t)u_{3}(t,z)\\\ -\sin(\omega
t)&\cos(\omega t)v_{3}(t,z)&-\cos(\omega t)u_{3}(t,z)\\\
0&u_{3}(t,z)&v_{3}(t,z)\end{array}\right).$ (63)
The third one is a normalization constraint:
$\forall t>0,\quad\forall z\in{\mathbb{R}},\qquad
u_{3}(t,z)^{2}+v_{3}(t,z)^{2}=1.$ (64)
Using (64), we define a function $\alpha\equiv\alpha(t,z)$ such that
$u_{3}(t,z)=\sin(\alpha(t,z)),\qquad v_{3}(t,z)=\cos(\alpha(t,z)).$
A direct computation shows that for ${\mathbb{A}}$ of the form (63), we have
$\mathbf{r}=(\partial_{z}u_{3})\,\mathbf{u}+(\partial_{z}v_{3})\,\mathbf{v},\qquad\delta=0.$
Therefore, Eq. (11b) can be rewritten more concisely into:
$\partial_{t}{\mathbb{A}}+c_{4}\,[\Omega\times\mathbf{r}]_{\times}{\mathbb{A}}=0,$
(65)
where we recall Eq. (9) for the definition of $[\,]_{\times}$. A direct
computation shows that
$\Omega\times\mathbf{r}=(v_{3}\,\partial_{z}u_{3}-u_{3}\,\partial_{z}v_{3})\,\mathbf{e}_{3}=(\partial_{z}\alpha)\,\mathbf{e}_{3}.$
(66)
Inserting this in (65) implies that $u_{3}(t,z)\equiv u_{3}(z)$ and
$v_{3}(t,z)\equiv v_{3}(z)$ are independent of time. We then observe that:
${\mathbb{A}}(t,\mathbf{x})={\mathcal{A}}(-\omega
t,\mathbf{e}_{3})\,{\mathcal{A}}(\alpha(z),\mathbf{e}_{1}),$ (67)
where we recall Eq. (8) for the meaning of ${\mathcal{A}}$. Therefore, using
(65) and (66), we obtain:
$-\omega\,[\mathbf{e}_{3}]_{\times}{\mathbb{A}}+c_{4}\,(\partial_{z}\alpha)\,[\mathbf{e}_{3}]_{\times}{\mathbb{A}}=0,$
from which we deduce that ${\mathbb{A}}$ satisfies (11b) if and only if
$\alpha$ and $\omega$ satisfy:
$c_{4}\,\partial_{z}\alpha=\omega,$
which implies
$\alpha(z)=\frac{\omega}{c_{4}}\,z+\bar{\alpha},$ (68)
where $\bar{\alpha}$ is a constant, which can be interpreted as the phase at
the origin $z=0$. To recover Eq. (27), we just need to take $\bar{\alpha}=0$
and define $\xi=\omega/c_{4}$. Eq. (28) follows from (67).
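As a sanity check, the factorized form (67)–(68) can be verified numerically: the sketch below (with illustrative parameter values) builds $\mathbb{A}(t,z)={\mathcal{A}}(-\omega t,\mathbf{e}_{3})\,{\mathcal{A}}(\alpha(z),\mathbf{e}_{1})$ from two rotations, and the accompanying checks confirm that it belongs to SO${}_{3}({\mathbb{R}})$ and satisfies $\partial_{t}{\mathbb{A}}+\omega\,[\mathbf{e}_{3}]_{\times}{\mathbb{A}}=0$, i.e. Eq. (65) with $c_{4}\,\partial_{z}\alpha=\omega$.

```python
import numpy as np

def rot(angle, axis):
    """Rotation matrix of the given angle about a unit axis (Rodrigues formula)."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle)*K + (1.0 - np.cos(angle))*(K @ K)

def A_mo(t, z, omega, c4):
    """Milling-orbit solution A(t, z) = A(-omega t, e3) A(alpha(z), e1)
    with alpha(z) = (omega / c4) z, cf. Eqs. (67)-(68)."""
    alpha = omega / c4 * z
    e3 = np.array([0.0, 0.0, 1.0])
    e1 = np.array([1.0, 0.0, 0.0])
    return rot(-omega * t, e3) @ rot(alpha, e1)
```

The time derivative can then be compared by finite differences against $-\omega\,[\mathbf{e}_{3}]_{\times}{\mathbb{A}}$.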
### F.2 Generalized HW and proof of Lemma 3.2
Starting from the initial condition (35), we are looking for solutions of
(11b) of the form
${\mathbb{A}}(t,\mathbf{x})=\left(\begin{array}[]{ccc}1&0&0\\\
0&\cos(\alpha(t,x))&-\sin(\alpha(t,x))\\\
0&\sin(\alpha(t,x))&\cos(\alpha(t,x))\end{array}\right),$
for a real-valued function $\alpha$ of the $t$ and $x$ variables only. In this
case, $\Omega$ is a constant vector and Equation (18a) is trivially satisfied.
Moreover a direct computation shows that:
$\mathbf{r}=0,\qquad\delta=(\partial_{x}\alpha)(t,x).$
As a consequence, Eq. (21) is trivially satisfied and straightforward
computations show that Eq. (11b) reduces to
$\partial_{t}\alpha+(c_{2}+c_{4})\,\partial_{x}\alpha=0.$
This last equation is a linear transport equation with velocity $c_{2}+c_{4}$,
the solutions of which are given by
$\alpha(t,x)=\alpha_{0}(x-(c_{2}+c_{4})t)$ (69)
for any initial condition $\alpha_{0}\in L^{1}_{\text{loc}}({\mathbb{R}})$. In
the case of (35), $\alpha_{0}(x)=\xi\,x$. However, we see that there are as
many different solutions as functions in $L^{1}_{\text{loc}}({\mathbb{R}})$.
Such general solutions are called “generalized HW”.
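That $\alpha(t,x)=\alpha_{0}(x-(c_{2}+c_{4})t)$ solves the transport equation can be checked numerically for any smooth profile $\alpha_{0}$; a minimal finite-difference sketch:

```python
import numpy as np

def hw_alpha(t, x, c, alpha0):
    """Generalized helical-wave phase: the initial profile alpha0
    advected at speed c = c2 + c4 (solution of the transport equation)."""
    return alpha0(x - c * t)

def transport_residual(t, x, c, alpha0, h=1e-5):
    """Centered finite-difference evaluation of d_t alpha + c * d_x alpha."""
    dt = (hw_alpha(t + h, x, c, alpha0) - hw_alpha(t - h, x, c, alpha0)) / (2*h)
    dx = (hw_alpha(t, x + h, c, alpha0) - hw_alpha(t, x - h, c, alpha0)) / (2*h)
    return dt + c * dx
```

The residual vanishes up to the finite-difference error both for the linear profile $\alpha_{0}(x)=\xi x$ of (35) and for any other smooth choice of $\alpha_{0}$.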
### F.3 Proof of Lemma 3.3
The three rotation matrices are given by
$\mathcal{A}(-\omega t,\mathbf{e}_{3})=\left(\begin{array}[]{ccc}\cos(\omega
t)&\sin(\omega t)&0\\\ -\sin(\omega t)&\cos(\omega t)&0\\\
0&0&1\end{array}\right),$
$\mathcal{A}(\theta-\pi/2,\mathbf{e}_{2})=\left(\begin{array}[]{ccc}\sin\theta&0&-\cos\theta\\\
0&1&0\\\ \cos\theta&0&\sin\theta\end{array}\right),$
$\mathcal{A}(\xi(z-\tilde{\lambda}t),\mathbf{e}_{1})=\left(\begin{array}[]{ccc}1&0&0\\\ 0&\cos(\xi(z-\tilde{\lambda}t))&-\sin(\xi(z-\tilde{\lambda}t))\\\ 0&\sin(\xi(z-\tilde{\lambda}t))&\cos(\xi(z-\tilde{\lambda}t))\end{array}\right),$
and a direct computation shows that the three column vectors $\Omega$,
$\mathbf{u}$ and $\mathbf{v}$ of the matrix $\mathbb{A}_{\xi,\theta}$ are
given by
$\Omega=\left(\begin{array}[]{c}\sin\theta\cos(\omega t)\\\
-\sin\theta\sin(\omega t)\\\ \cos\theta\end{array}\right),$
$\mathbf{u}=\left(\begin{array}[]{c}-\cos\theta\sin(\xi(z-\tilde{\lambda}t))\cos(\omega t)+\cos(\xi(z-\tilde{\lambda}t))\sin(\omega t)\\\ \cos\theta\sin(\xi(z-\tilde{\lambda}t))\sin(\omega t)+\cos(\xi(z-\tilde{\lambda}t))\cos(\omega t)\\\ \sin\theta\sin(\xi(z-\tilde{\lambda}t))\end{array}\right),$
$\mathbf{v}=\left(\begin{array}[]{c}-\cos\theta\cos(\xi(z-\tilde{\lambda}t))\cos(\omega t)-\sin(\xi(z-\tilde{\lambda}t))\sin(\omega t)\\\ \cos\theta\cos(\xi(z-\tilde{\lambda}t))\sin(\omega t)-\sin(\xi(z-\tilde{\lambda}t))\cos(\omega t)\\\ \sin\theta\cos(\xi(z-\tilde{\lambda}t))\end{array}\right).$
Then we compute
$\displaystyle\mathbf{r}$ $\displaystyle=\xi\sin\theta\cos(\xi(z-\tilde{\lambda}t))\,\mathbf{u}-\xi\sin\theta\sin(\xi(z-\tilde{\lambda}t))\,\mathbf{v}=\xi\sin\theta(\sin(\omega t),\cos(\omega t),0)^{\mathrm{T}},$
$\displaystyle\delta$ $\displaystyle=\cos\theta\,\partial_{z}\mathbf{u}\cdot\mathbf{v}+u_{3}\,\partial_{z}\mathbf{v}\cdot\Omega=\xi\cos\theta,$
where we have used that $\partial_{z}\mathbf{u}=\xi\mathbf{v}$ and
$\partial_{z}\mathbf{v}=-\xi\mathbf{u}$. It remains to check that Eq. (11b)
holds true. We split this equation into three equations, one for each vector
$\Omega$, $\mathbf{u}$ and $\mathbf{v}$. The first equation on $\Omega$ reads
$(\partial_{t}+c_{2}(\Omega\cdot\nabla_{\mathbf{x}}))\Omega+c_{4}P_{\Omega^{\perp}}\mathbf{r}=0.$
This equation holds true because
$\partial_{t}\Omega=-\omega\left(\begin{array}[]{c}\sin\theta\sin(\omega t)\\\
\sin\theta\cos(\omega t)\\\
0\end{array}\right),\quad(\Omega\cdot\nabla_{\mathbf{x}})\Omega=0,\quad
P_{\Omega^{\perp}}\mathbf{r}=\mathbf{r}-(\mathbf{r}\cdot\Omega)\Omega=\xi\sin\theta\left(\begin{array}[]{c}\sin(\omega
t)\\\ \cos(\omega t)\\\ 0\end{array}\right),$
and $\omega=c_{4}\xi$. The second equation on $\mathbf{u}$ reads
$(\partial_{t}+c_{2}(\Omega\cdot\nabla_{\mathbf{x}}))\mathbf{u}-c_{4}(\mathbf{u}\cdot\mathbf{r})\Omega+c_{4}\delta\mathbf{v}=0.$
Because $\tilde{\lambda}=c_{2}\cos\theta$, we have
$\partial_{t}+c_{2}\Omega\cdot\nabla_{\mathbf{x}}=\partial_{t}+c_{2}\cos\theta\,\partial_{z}=\partial_{t}+\tilde{\lambda}\partial_{z}\quad\textrm{and}\quad(\partial_{t}+\tilde{\lambda}\partial_{z})(z-\tilde{\lambda}t)=0.$
Thus
$(\partial_{t}+c_{2}(\Omega\cdot\nabla_{\mathbf{x}}))\mathbf{u}=\omega\left(\begin{array}[]{c}\cos\theta\sin(\xi(z-\tilde{\lambda}t))\sin(\omega t)+\cos(\xi(z-\tilde{\lambda}t))\cos(\omega t)\\\ \cos\theta\sin(\xi(z-\tilde{\lambda}t))\cos(\omega t)-\cos(\xi(z-\tilde{\lambda}t))\sin(\omega t)\\\ 0\end{array}\right),$
and using $\omega=c_{4}\xi$, it can be checked that
$(\partial_{t}+c_{2}(\Omega\cdot\nabla_{\mathbf{x}}))\mathbf{u}-c_{4}(\mathbf{u}\cdot\mathbf{r})\Omega=-c_{4}\xi\cos\theta\mathbf{v}=-c_{4}\delta\mathbf{v},$
which yields the result. The equation on $\mathbf{v}$ is analogous.
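The relations $\partial_{z}\mathbf{u}=\xi\mathbf{v}$ and $\partial_{z}\mathbf{v}=-\xi\mathbf{u}$, as well as the expression for $\mathbf{r}$, can be checked numerically; the following is a minimal sketch with illustrative parameter values.

```python
import numpy as np

def hw_frame(t, z, theta, xi, omega, lam):
    """Column vectors u and v of the matrix A_{xi, theta} of Lemma 3.3,
    with phase xi * (z - lam * t) where lam plays the role of lambda-tilde."""
    ph = xi * (z - lam * t)
    ct, st = np.cos(omega * t), np.sin(omega * t)
    cth, sth = np.cos(theta), np.sin(theta)
    u = np.array([-cth*np.sin(ph)*ct + np.cos(ph)*st,
                   cth*np.sin(ph)*st + np.cos(ph)*ct,
                   sth*np.sin(ph)])
    v = np.array([-cth*np.cos(ph)*ct - np.sin(ph)*st,
                   cth*np.cos(ph)*st - np.sin(ph)*ct,
                   sth*np.cos(ph)])
    return u, v
```

Finite differencing in $z$ then recovers the two relations, and the combination $\xi\sin\theta\cos(\xi(z-\tilde{\lambda}t))\,\mathbf{u}-\xi\sin\theta\sin(\xi(z-\tilde{\lambda}t))\,\mathbf{v}$ reduces to $\xi\sin\theta(\sin(\omega t),\cos(\omega t),0)^{\mathrm{T}}$.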
### F.4 GOP of the MO and generalized HW
The GOP (given by Eq. (44)) of the MO and HW does not depend on time and only
depends on the function $\alpha$ defined respectively by (68) and (69). Using
Eq. (44), we can compute that the GOP is equal to:
$\mbox{GOP}=\frac{1}{2}\left(\frac{c_{1}(\kappa)}{c_{0}}\right)^{2}\big{(}1+2\,|\langle\mathbf{u}\rangle|^{2}\big{)}+\frac{1}{4},$
where $\langle\mathbf{u}\rangle$ denotes the spatial average of the vector
$\mathbf{u}$ with respect to $\rho$ (here, with respect to the uniform
measure on the domain since $\rho$ is constant and uniform). With the previous
notations, we obtain
$|\langle\mathbf{u}\rangle|^{2}=\langle\cos\alpha\rangle^{2}+\langle\sin\alpha\rangle^{2}.$
For the generalized HW, depending on the choice of $\alpha$, the GOP can take
any value between GOP1 and GOP2, these two extreme values being attained
respectively when $|\langle\mathbf{u}\rangle|=0$ and
$|\langle\mathbf{u}\rangle|=1$.
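For a sampled phase profile $\alpha$, the quantity $|\langle\mathbf{u}\rangle|^{2}$ can be evaluated directly; a minimal sketch, assuming a uniform spatial measure as above:

```python
import numpy as np

def mean_u_sq(alpha_vals):
    """|<u>|^2 = <cos alpha>^2 + <sin alpha>^2 for a sampled phase
    profile alpha (uniform spatial measure, i.e. constant density rho)."""
    return np.mean(np.cos(alpha_vals))**2 + np.mean(np.sin(alpha_vals))**2
```

A linear profile over a full period gives $|\langle\mathbf{u}\rangle|=0$ (the extreme value GOP1), while a constant profile gives $|\langle\mathbf{u}\rangle|=1$ (the extreme value GOP2).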
## Appendix G Convergence rate of $|\mathrm{d}\bar{\varphi}/\mathrm{d}t|$ as
$N\to\infty$
The fact that the convergence rate of $|\mathrm{d}\bar{\varphi}/\mathrm{d}t|$
is close to $N^{-1}$ agrees with previously documented observations in
spherical statistics. Indeed, it has been shown in [82, Theorem 3(e)] that the
estimation of the concentration parameter of a (spherical) von Mises
distribution obtained from a crude averaging procedure from $N$ independent
samples produces a biased estimator with a (nonnegative) bias of order
$N^{-1}$ (see also [72, Section 10.3]). In the present case, a similar
reasoning can be applied, which we now briefly develop. The key observation is
that all the measured quantities are functions of empirical averages of the
form (4). Under the chaos assumption (see Section D), when $N$ is large, the
body-orientations of the particles behave as $N$ independent samples with
common law $M_{\mathbb{A}}$, where $\mathbb{A}$ solves the SOHB model (11) and
$M_{\mathbb{A}}$ is defined by (6). In [35, Theorem 4.1], it has been shown
that $c_{4}(\kappa)$ can actually be expressed as a function of a certain
number $p$ of averaged quantities
$c_{4}(\kappa)=F(\langle g_{1}\rangle_{M_{\mathbb{A}}},\ldots,\langle
g_{p}\rangle_{M_{\mathbb{A}}}),$
where $g_{i}:\mathrm{SO}_{3}({\mathbb{R}})\to\mathcal{M}_{3}({\mathbb{R}})$
and $F:\mathcal{M}_{3}({\mathbb{R}})^{p}\to{\mathbb{R}}$ are smooth functions.
The IBM simulation thus defines an estimator $\hat{\kappa}$ of the
concentration parameter such that
$c_{4}(\hat{\kappa})=F(\hat{g}_{1},\ldots,\hat{g}_{p}),$
where $\hat{g}_{i}$ is the average of $g_{i}$ obtained by replacing
$M_{\mathbb{A}}$ by the empirical measure of the $N$ body-orientations of the
particles. We can then measure the bias by taking the expectation of the
Taylor expansion of the previous expression around the point $(\langle
g_{1}\rangle_{M_{\mathbb{A}}},\ldots,\langle g_{p}\rangle_{M_{\mathbb{A}}})$:
$c_{4}(\hat{\kappa})=c_{4}(\kappa)+\delta\mathbf{\hat{g}}\cdot\nabla
F+(\delta\mathbf{\hat{g}})^{\mathrm{T}}(\mathrm{Hess}\,F)\delta\mathbf{\hat{g}}+R,$
where
$\delta\mathbf{\hat{g}}=(\hat{g}_{1},\ldots,\hat{g}_{p})^{\mathrm{T}}-(\langle
g_{1}\rangle_{M_{\mathbb{A}}},\ldots,\langle
g_{p}\rangle_{M_{\mathbb{A}}})^{\mathrm{T}}$ and $R$ is a remainder. The
gradient $\nabla$ and Hessian $\mathrm{Hess}$ are defined within the Euclidean
framework given by (1). By the chaos hypothesis
${\mathbb{E}}[\delta\mathbf{\hat{g}}]=0$ and by the central limit theorem, the
term of order two behaves as $N^{-1}$. Since SO${}_{3}({\mathbb{R}})$ is
compact, higher order moments of $\delta\mathbf{\hat{g}}$ can be controlled by
a classical argument based on Hoeffding’s inequality [88, Lemma 5.5 and
Theorem 5.29]. This ensures that ${\mathbb{E}}[R]$ is $\mathcal{O}(N^{-2})$.
We therefore obtain a biased estimator:
${\mathbb{E}}[c_{4}(\hat{\kappa})]=c_{4}(\kappa)+\frac{a}{N}+\mathcal{O}(N^{-2}),$
where $a\in{\mathbb{R}}$ depends on the derivatives of the considered
functions and on the variance of the estimator (4) where the particles are
replaced by independent identically distributed samples with law
$M_{\mathbb{A}}$. The fact that $a>0$ can be empirically verified on Fig. 8b
but has not been proved yet. For each $N$, the fluctuations around the average
(biased) value can be monitored by computing the standard deviation of the 10
independent simulations. Fig. 17 shows this standard deviation as a function
of $N$ in a log-log-scale (blue dots). Although fluctuations remain
significant with only 10 simulations per data point, a standard linear
regression (solid orange line) shows that the standard deviation behaves as
$N^{-\beta}$ with $\beta\simeq 0.54$, which is close to the value $\beta=1/2$
expected from an application of the central limit theorem.
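The mechanism behind the $N^{-1}$ bias can be reproduced on a toy example unrelated to the estimator $\hat{\kappa}$ itself: for $X\sim\mathcal{N}(0,1)$ and the smooth function $f(m)=m^{2}$, the plug-in estimator $f(\bar{X}_{N})$ of $f(\mathbb{E}X)=0$ has exact bias $\mathrm{Var}(X)/N=1/N$. A minimal Monte Carlo sketch:

```python
import numpy as np

def empirical_bias(N, trials, rng):
    """Average of f(sample mean) with f(m) = m^2 over many trials,
    for X ~ N(0,1). Since f(E X) = 0, this estimates the bias,
    which is exactly Var(X)/N = 1/N."""
    means = rng.standard_normal((trials, N)).mean(axis=1)
    return np.mean(means**2)
```

Doubling $N$ halves the estimated bias, in agreement with the $N^{-1}$ scaling observed in Fig. 8b.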
Figure 17: Standard deviation of the 10 independent simulations as a function
of $N$ (blue dots) and regression line (solid orange line) in log-log scale.
Parameters: $L=1$, $\xi=2\pi$, $R=0.025$, $\nu=40$, $c_{0}=1$, $\kappa=10$.
## Appendix H Rare events
Although the scenario described in Section 5 of the main text is the most
common one, the IBM sometimes leads to different, slightly more complex
scenarios, which are described in the present section. As before, the IBM is
initialized by drawing $N$ positions independently uniformly in the cubic
domain $\mathcal{D}=[0,L]\times[0,L]\times[0,L]$ with periodic boundary
conditions and $N$ body-orientations independently from the von Mises
distribution $M_{\mathbb{A}(0,\mathbf{x})}$ where $\mathbb{A}(0,\mathbf{x})$
is given by (29) with $\xi=2\pi/L$ (winding number equal to $1$).
### H.1 From milling orbit to helical wave
Here, we report on the occurrence of transitions from a MO to a HW. Among
twenty independent simulations, this transition occurred only once (the other
cases being a transition from a MO to a FS). We run the IBM and record the
time-evolution of a set of indicators as shown in Fig. 18 (see also
supplementing videos 12 to 14 in Section A).
As shown in Fig. 18a, the GOP does not converge towards GOP2 characterizing
the FS, but towards an intermediate value between GOP1 (which characterizes MO
or HW) and GOP2. As explained in Section F.4, such values of the GOP can be
attained by a generalized helical wave solution (as can be observed in Video
12). The pitch $\bar{\theta}$ (Fig. 18b) and yaw $\bar{\varphi}$ (Fig. 18c)
behave like in the milling-to-flocking transition (see Figs. 13b and 13c)
except for small-amplitude, low-frequency oscillations appearing after the
topological transition time. This may be due to some competition between two
attractors, the FS and the HW, which, being alternately stronger and weaker,
generate this oscillatory behavior. Note that a transition to a HW cannot
occur when the global direction of motion at the transition time is not one of
the principal axes of the square domain since a HW along another direction is
not compatible with the periodic boundary conditions (see Section H.2). This
is confirmed by the final values of $\bar{\varphi}$ and $\bar{\theta}$ (both
equal to $\pi/2$), which correspond to a global direction of motion oriented
along the $y$-axis (in what follows, in reference to (57) and to avoid
confusion, we will still call that direction the $x$ direction).
The second and third lines of figures in Fig. 18 show the triplets of
topological indicators $(d_{z},\bar{r}_{z},w_{z})$ and
$(d_{x},\bar{r}_{x},w_{x})$ which materialize the MO and HW structures
respectively. The mean distance of the RPZ-curve to the origin $\bar{r}_{z}$
(Fig. 18e) decreases, revealing an increase of the disorder. Simultaneously,
the distance of its center of mass to the origin $d_{z}$ increases (Fig. 18d),
showing a transition trend to a FS. The winding number $w_{z}$ (Fig. 18f)
jumps from $1$ to $0$ at the time of maximal disorder. However, $d_{z}$ and
$\bar{r}_{z}$ do not reach zero, showing that complete disorder across $z$ is
not reached. Since the final state of the system is a generalized helical wave
state (see Section F.4), we do not necessarily expect that complete disorder
will be reached along the $z$-direction. In the meantime, $\bar{r}_{x}$
starts from $0$ (complete disorder) and increases up to a value close to
unity, showing the build-up of a HW. The quantity $d_{x}$ increases during
some time but eventually decreases to $0$ (not shown in the figure) as it
should for a HW. Finally, the winding number $w_{x}$ is undefined in the
initial stage, as it should for complete disorder, but builds up to $1$ at the
time when the winding number $w_{z}$ drops to $0$. There is a transfer of
non-trivial topology from a MO structure to a HW structure.
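The winding numbers $w_{z}$ and $w_{x}$ used above can be computed from a sampled closed planar curve by accumulating wrapped angle increments. The following is a minimal sketch of this computation only; the precise construction of the RPZ and RPX curves is defined in the main text.

```python
import numpy as np

def winding_number(points):
    """Winding number around the origin of a closed planar curve,
    given as an (n, 2) array of sample points, via accumulated
    angle increments wrapped to (-pi, pi]."""
    ang = np.arctan2(points[:, 1], points[:, 0])
    d = np.diff(np.concatenate([ang, ang[:1]]))  # includes the closing step
    d = (d + np.pi) % (2*np.pi) - np.pi          # wrap each increment
    return int(np.round(d.sum() / (2*np.pi)))
```

A curve that loops once around the origin gives $1$, a doubly wound curve gives $2$, and a curve not enclosing the origin gives $0$.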
Figure 18: Transition from a MO to a HW: example of a solution of the IBM for
an initial condition sampled from (54) in the rare case where it leads to a
HW. The following indicators are plotted as functions of time: (a) GOP (b)
Pitch $\bar{\theta}$ of $\bar{\Omega}$. (c) Yaw $\bar{\varphi}$ of
$\bar{\Omega}$. (d) Distance of center of mass of RPZ curve to the origin
$d_{z}$. (e) Mean distance of RPZ curve to the origin $\bar{r}_{z}$. (f)
Winding number of RPZ curve $w_{z}$. (g) Distance of center of mass of RPX
curve to the origin $d_{x}$. (h) Mean distance of RPX curve to the origin
$\bar{r}_{x}$. (i) Winding number of RPX curve $w_{x}$. Gray shaded zones
highlight a small region around the time of minimal GOP. Parameters:
$N=1.5\cdot 10^{6}$, $R=0.025$, $L=1$, $D=0.1$, $\nu=40$, $c_{0}=1$. See
caption of Fig. 13 for further indications. See also Videos 12 to 14 in
Section A.
### H.2 From milling to flocking via a helical wave state
In some rare cases an intermediate unstable HW can be observed. Note that due
to the periodic setting, a HW cannot be stable for most global
directions of motion. Although stable or unstable HW typically appear in one
out of twenty of our simulations, it should be kept in mind that the occurrence
frequency also depends on the geometry of the domain and that this phenomenon
may be more frequent for other simulation settings. The procedure is the same
as in the previous section. Fig. 19 shows the results (see also supplementing
videos 15 and 16 in Section A).
The transition stage between the MO and FS is significantly longer than in the
previous situations. During that phase, the GOP (Fig. 19a) oscillates between
the value GOP1 characterizing the MO and lower values, i.e. lower order.
Likewise, there are significant variations of the pitch $\bar{\theta}$ (Fig.
19b) and yaw $\bar{\varphi}$ (Fig. 19c). As in the previous section, this
could be explained by antagonist effects of different attractors (the MO and
HW) and subsequent oscillations of the system between them. Video 15 reveals
large scale band structures similar to a HW except that the global direction
of motion is not one of the principal axes of the square domain. As, in most
cases, this is not compatible with the periodic boundary conditions, such a
state cannot persist in time. The relatively long-time persistence of this
stage could be explained in the present case by the fact that the global
direction of motion seems to oscillate around the direction given by
$\mathbf{e}_{1}+\mathbf{e}_{2}$ (i.e. $\varphi=\pi/4$ and $\theta=\pi/2$)
which is theoretically compatible with the periodic boundary conditions,
provided the wavenumber $\xi$ is changed from $2\pi/L$ to $\sqrt{2}\pi/L$.
This state does not seem to be stable as shown by the large oscillations of
$\bar{\varphi}$ and $\bar{\theta}$. The topological indicators
$(d_{z},\bar{r}_{z},w_{z})$ shown in the second line of figures of Fig. 19
also display large oscillations. The quantity $\bar{r}_{z}$ drops, and at the
same time, $d_{z}$ remains small, while the winding number $w_{z}$ has strong
oscillations, indicating a state of large disorder across $z$, which is
consistent with the fact that the temporary HW order is organized in a
different direction. However, we see that $w_{z}$ has a calmer period between
two series of oscillations. This calmer period corresponds to the interval of
time during which the temporary HW order prevails. Eventually the triplet
converges to the value $(1,1,0)$ characterizing the FS.
Figure 19: Transition from a MO to a FS via an unstable HW: example of a
solution of the IBM for an initial condition sampled from (54) in the rare
case where it leads to a FS through a transient HW. The following indicators
are plotted as functions of time: (a) GOP (b) Pitch $\bar{\theta}$ of
$\bar{\Omega}$. (c) Yaw $\bar{\varphi}$ of $\bar{\Omega}$. (d) Distance of
center of mass of RPZ curve to the origin $d_{z}$. (e) Mean distance of RPZ
curve to the origin $\bar{r}_{z}$. (f) Winding number of RPZ curve $w_{z}$.
Gray shaded zones highlight a small region around the time of minimal GOP.
Parameters: $N=1.5\cdot 10^{6}$, $R=0.025$, $L=1$, $D=0.1$, $\nu=40$,
$c_{0}=1$. See caption of Fig. 13 for further indications. See also Videos 15
and 16 in Section A.
## References
* [1] P. Aceves-Sanchez, M. Bostan, J.-A. Carrillo, and P. Degond. Hydrodynamic limits for kinetic flocking models of Cucker-Smale type. Math. Biosci. Eng., 16:7883–7910, 2019.
* [2] M. Aldana, H. Larralde, and B. Vázquez. On the emergence of collective order in swarming systems: a recent debate. Int. J. Mod. Phys. B, 23(18):3661–3685, 2009.
* [3] I. Aoki. A simulation study on the schooling mechanism in fish. Bull. Japan. Soc. Sci. Fish, 48:1081–1088, 1982.
* [4] A. Barbaro and P. Degond. Phase transition and diffusion among socially interacting self-propelled agents. Discrete Contin. Dyn. Syst. Ser. B, 19:1249–1278, 2014.
* [5] A. B. Barbaro, J. A. Canizo, J. A. Carrillo, and P. Degond. Phase transitions in a kinetic flocking model of Cucker–Smale type. Multiscale Model. Simul., 14(3):1063–1088, 2016.
# Deep neural network-based automatic metasurface design with a wide frequency
range
Fardin Ghorbani, Sina Beyraghi, Javad Shabanpour, Homayoon Oraizi, Hossein Soleimani, Mohammad Soleimani
###### Abstract
Beyond the scope of conventional metasurface design, which requires substantial
computational resources and time, an inverse design approach using machine
learning algorithms promises an effective route to metasurface synthesis. In this
paper, benefiting from a Deep Neural Network (DNN), an inverse design procedure
for a metasurface over an ultra-wide working frequency band is presented, in which
the output unit-cell structure is computed directly from a specified design
target. To reach the highest working frequency, we train the DNN on
8 ring-shaped patterns that generate resonant notches across a wide range
of working frequencies, from 4 to 45 GHz. We propose two network architectures.
In one architecture, we restrict the output of the DNN so that the network can
only generate metasurface structures built from the 8 ring-shaped
patterns. This approach drastically reduces the computational time while
keeping the network’s accuracy above 91%. We show that our DNN-based model
satisfactorily generates the output metasurface structure, with an average
accuracy above 90% in both network architectures. Determining the
metasurface structure directly, without time-consuming optimization procedures,
together with the ultra-wide working frequency range and high average accuracy,
provides an inspiring platform for engineering projects without the need for
complex electromagnetic theory.
School of Electrical Engineering, Iran University of Science and Technology, Narmak, Tehran 16486-13114, Iran
## 1 Introduction
Metamaterials, as artificial media composed of engineered subwavelength
periodic or nonperiodic geometric arrays, have attracted enormous attention
due to their exotic ability to modify the permittivity and permeability of
materials1, 2, 3. Today, just two decades after the first implementation of
metamaterials by David Smith and colleagues4, who unearthed Veselago’s original
paper5, metamaterials and their 2D counterparts, metasurfaces, have been widely
used in practical applications such as, but not limited to, polarization
conversion6, 7, reconfigurable wave manipulation8, 9, vortex generation10, 11,
and perfect absorption12, 13.
However, all of the above-mentioned works are based on a traditional design
approach consisting of model design, trial and error, parameter sweeps, and
optimization algorithms. Conducting full-wave numerical simulations assisted by
an optimization algorithm is a time-consuming process that consumes plenty of
computing resources. Besides, if the design requirements change, the simulations
must be repeated afresh, which keeps users from focusing on their actual needs.
Therefore, to fill the existing gap with a fast, efficient, and automated design
approach, we turn to machine learning.
Machine learning and its specific branch, deep learning, are approaches that
automatically learn the connection between input data and target data from
examples of past experience. Machine learning is an effort to employ
algorithms so that a machine learns and operates without explicitly
planned and dictated individual actions. To be more specific, machine
learning provides an inspiring platform to deduce fundamental principles
from previously given data so that, for another given input, machines can
make logical decisions automatically. With the ever-increasing evolution of
machine learning and its potential capacity to handle crucial challenges, such
as signal processing14 and physical science15, we are witnessing its
application to electromagnetic problems. Due to its remarkable advantages,
such as requiring fewer computational resources, higher accuracy, shorter design
time, and greater flexibility, machine learning has entered various wave-
interaction problems, such as Electromagnetic Compatibility (EMC)16, 17,
antenna optimization and design18, 19, all-dielectric metasurfaces20, optical
and photonic structures21, and plasmonic nanostructures22.
Recently, T. Cui et al. proposed a deep-learning-based metasurface design
method named REACTIVE, which is capable of detecting the inner rules between a
unit-cell geometry and its EM properties with an average accuracy of 76.5%23.
A machine-learning method to realize anisotropic digital coding metasurfaces
has been investigated, whereby 70000 training coding patterns were
applied to train the network24. In Ref. 25, a deep convolutional neural network
was studied to encode a programmable metasurface for steered multiple-
beam generation with an average accuracy of more than 94 percent. A
metasurface inverse design method using a machine-learning approach was
introduced in Ref. 26 to design an output unit cell for specified electromagnetic
properties with 81% accuracy in a low-frequency bandwidth of 16-20 GHz.
Recently, a double deep Q-learning network (DDQN) was developed to identify the
right material type and optimize the design of metasurface holograms27.
In this paper, benefiting from a Deep Neural Network (DNN), an inverse design
procedure for a metasurface with an average accuracy of up to 92 percent is
presented. Unlike previous works, to reach the highest working
frequency, we consider 8 ring-shaped digital distributions (see the top left of
Fig. 1) to generate resonant notches in a wide range of working
frequencies, from 4 to 45 GHz. Therefore, after training the deep learning
model on a set of samples, the proposed model can automatically generate the
desired metasurface pattern from four predetermined reflection properties
(number of resonances, resonance frequencies, resonance depths, and
resonance bandwidths) over an ultra-wide working frequency band. Comparison of
the output of numerical simulations with the design targets illustrates that
the proposed approach succeeds in generating the corresponding metasurface
structures for any desired S-parameter configuration. Determining the
metasurface structures directly without ill-posed optimization procedures,
while consuming fewer computational resources, covering ultra-wide working
frequency bands, and maintaining high average accuracy, makes the approach
beneficial for engineers who are not specialists in electromagnetics, allowing
them to focus on their practical needs and significantly boosting the speed of
engineering projects.
## 2 METHODOLOGIES
### 2.1 Metasurface Design
Fig. 1 shows a schematic representation of the proposed metasurface
structure, consisting of three layers, from top to bottom: a copper ring-
shaped pattern layer, a dielectric layer, and a ground layer that blocks the
backward transmission of EM energy. FR4 is chosen as the substrate, with
permittivity 4.2+0.025i and thickness h = 1.5 mm. The top metallic layer
comprises 8 ring-shaped patterns distributed side by side, each of which can
be divided into 8 × 8 lattices labeled “0” and “1”, denoting the areas
without and with copper. Each metasurface is composed of an infinite array of
unit cells, and each unit cell consists of a 4 × 4 random arrangement of the 8 × 8
ring-shaped patterns; therefore, each unit cell comprises 32 × 32 lattices.
The length of the lattices, the periodicity of the unit cells, and the thickness
of the copper metallic patterns are l = 0.2 mm, p = 6.4 mm, and t = 0.018 mm,
respectively. Unlike previous works23, 26, defining 8 ring-shaped patterns
to train the DNN is the novelty employed here to generate the desired
resonance notches in a wide frequency band. Each ring-shaped pattern is
designed to generate resonant notches at different frequencies
from 4 to 45 GHz, so we can assemble a data set of S-parameters to train the
network toward the specified targets. It is almost impossible to obtain the
relationship between the metasurface matrices and the S-parameters
analytically. Given the close connection between the metasurface pattern matrix
and its corresponding reflection characteristics, a deep learning algorithm is
used instead to reduce the computational burden of obtaining a suitable solution.
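As a minimal sketch of the geometry just described, the assembly of one 32 × 32 unit cell from a 4 × 4 arrangement of 8 × 8 binary patterns can be written as follows. The `patterns` dictionary here is a hypothetical stand-in; the real 8 ring-shaped patterns are the authors' hand-designed rings.

```python
import random

# Hypothetical stand-ins for the 8 ring-shaped patterns: each is an
# 8x8 binary matrix ("1" = copper, "0" = no copper). The real patterns
# are the authors' designed rings, not these placeholders.
patterns = {k: [[int((i in (0, 7) or j in (0, 7)) and k % 2 == 0)
                 for j in range(8)] for i in range(8)]
            for k in range(1, 9)}

def build_unit_cell(indices):
    """Expand a 4x4 matrix of pattern indices (1..8) into 32x32 pixels."""
    cell = [[0] * 32 for _ in range(32)]
    for r in range(4):
        for c in range(4):
            pat = patterns[indices[r][c]]
            for i in range(8):
                for j in range(8):
                    cell[8 * r + i][8 * c + j] = pat[i][j]
    return cell

# A random unit cell, as in the dataset generation described later
indices = [[random.randint(1, 8) for _ in range(4)] for _ in range(4)]
cell = build_unit_cell(indices)
```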
Figure 1: Sketch representation of the design process of DNN-based approach
for metasurface inverse design. The process consists of three steps of
generating data and pre-processing, Training of machine learning, and
evaluations of a model.
### 2.2 Deep Learning
Artificial neural networks have emerged over the last two decades with many
applications, especially in optimization and artificial intelligence. Fig.
2 shows an overview of an artificial neuron, with $X_{1}$, $X_{2}$, … as its
inputs (input neurons). In a neural network, each input $X$ has a weight,
denoted by $W$. Each input is connected to a weight, so each input must
be multiplied by its weight. The sum function (sigma) then adds the products
of the $X_{i}$’s and $W_{i}$’s, and an activation function determines the
output of these operations. The output of the neuron, with activation
function $\phi(u)$ and bias value $b$, is:

$Y=\phi\left(\sum\limits_{i=1}^{n}W_{i}X_{i}+b\right)$ (1)
Figure 2: An overview of an artificial neuron
A neural network is made up of neurons arranged in layers. In general, a
neural network consists of three kinds of layers: input, hidden, and output. The
greater the number of hidden layers and of neurons in each hidden layer, the more
complex the model becomes. When the number of hidden layers and the number of
neurons increase, the neural network becomes a deep neural network. In this
work, we use a DNN to design the desired metasurface.
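Eq. (1) can be sketched in a few lines of Python; the sigmoid choice for $\phi$ here is illustrative (the paper's networks use relu in hidden layers and sigmoid at the output, as detailed below):

```python
import math

def neuron(x, w, b):
    """Single artificial neuron, Eq. (1): y = phi(sum_i w_i * x_i + b)."""
    u = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-u))  # sigmoid taken as the activation phi

# Example: two inputs with unit weights and zero bias give u = 0,
# so the sigmoid output is 0.5
y = neuron([1.0, -1.0], [1.0, 1.0], 0.0)
```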
#### 2.2.1 A. Non-restricted output
The inverse design of the metasurface is expected to capture, through the DNN,
the intrinsic relationship between the final metasurface structure and its
reflection response. We have generated 2000 random matrices
that represent metasurface structures using the “RAND” function in MATLAB.
In the next step, we have linked MATLAB with CST MWS to
calculate the S-parameters of each metasurface. To calculate the reflection
characteristics of the infinite array of unit cells, we conducted
the simulations with unit-cell boundary conditions in the x and y
directions and open boundary conditions in the z direction. Finally, at
design time, we only need to enter the predetermined EM reflection properties,
and our model generates the output metasurface based
on the data learned during the training step. The dataset is established by
drawing 16 random integers between 1 and 8 to form 4×4 matrices, where each
number selects one of the 8 ring-shaped patterns. To form our datasets, we
generated two thousand pairs of S-parameters and metasurface pattern
matrices (70% as a training set and 30% as a testing set), and the output of
the trained model is a 32×32 matrix. Each unit cell can generate up to 8 notches
in the frequency band of 4 to 45 GHz. Defining three features for each
resonance (namely, notch frequency, notch depth, and notch bandwidth), the
input of the proposed DNN is a vector of dimension 24, and the output is a
vector of dimension 1024, which represents a unit cell of 32 × 32 pixels. The
details of the designed network are summarized in Table 1.
Table 1: Detailed information of the non-restricted output network
architecture.
Layer number | Layer | output shape | number of parameter | activation function
---|---|---|---|---
1 | dense_1 (Dense) | (None, 24) | 600 | relu
2 | dropout_1 (Dropout) | (None, 24) | 0 | -
3 | dense_2 (Dense) | (None, 300) | 7500 | relu
4 | dropout_2 (Dropout) | (None, 300) | 0 | -
5 | dense_3 (Dense) | (None, 300) | 90300 | relu
6 | dropout_3 (Dropout) | (None, 300) | 0 | -
7 | dense_4 (Dense) | (None, 300) | 90300 | relu
8 | dropout_4 (Dropout) | (None, 300) | 0 | -
9 | dense_5 (Dense) | (None, 300) | 90300 | relu
10 | dropout_5 (Dropout) | (None, 300) | 0 | -
11 | dense_6 (Dense) | (None, 1024) | 308224 | sigmoid
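The per-layer parameter counts in Tables 1 and 2 follow standard dense-layer bookkeeping (one weight per input-output pair plus one bias per output neuron); a minimal cross-check in Python, assuming the 24-dimensional input stated in the text:

```python
def dense_params(n_in, n_out):
    # A fully connected layer has n_in * n_out weights plus n_out biases
    return n_in * n_out + n_out

# Layer widths of the non-restricted network: input 24, then the dense
# layers of Table 1 (24, 300, 300, 300, 300), ending in the 1024 output
widths = [24, 24, 300, 300, 300, 300, 1024]
counts = [dense_params(a, b) for a, b in zip(widths, widths[1:])]
# e.g. dense_1: 24*24 + 24 = 600; dense_6: 300*1024 + 1024 = 308224
```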
In the proposed model, dense and dropout layers are used alternately.
In a fully connected (dense) layer, each neuron is
connected to all the neurons in the previous layer. In a dropout layer,
some neurons are randomly ignored during the training process in order to
avoid misleading the learning process, as well as to increase the
learning speed and reduce the risk of over-fitting. By selecting relevant
features from the input data, the performance of the machine learning
algorithm is efficiently enhanced. In the proposed model, the batch
size and learning rate are set to 30 and 0.001, respectively. Besides, the
Adam optimization algorithm is used for tuning the weight values ($W_{i}$).
During the training process, the difference between the original and generated
data is computed repeatedly while the weight values of each layer are tuned and
optimized. When this difference, measured by the loss function, reaches a
satisfying predetermined criterion, the training process stops. The Mean
Square Error (MSE) is used as the loss function, defined as:
${\rm{MSE}}=\frac{1}{N}\sum\limits_{i=1}^{N}{{{({f_{i}}-{y_{i}})}^{2}}}$ (2)
where $f_{i}$ and $y_{i}$ denote the predicted value and the actual value,
respectively. Regarding the choice of activation function: since the desired
output of the neural network is 0 or 1, we use the sigmoid function
in the last layer, while other activation functions reduce the accuracy.
The relu and sigmoid activation functions are given in
equations 3 and 4, respectively:
$\phi(x)=\begin{cases}0&x\leq 0\\ x&x>0\end{cases}$ (3)
$\phi(x)=\dfrac{1}{1+e^{-x}}$ (4)
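Eqs. (2)-(4) translate directly into code; a minimal sketch of the three functions used by the network (the training itself, of course, runs inside TensorFlow/Keras):

```python
import math

def relu(x):
    """Rectified linear unit, Eq. (3)."""
    return x if x > 0 else 0.0

def sigmoid(x):
    """Logistic sigmoid, Eq. (4)."""
    return 1.0 / (1.0 + math.exp(-x))

def mse(f, y):
    """Mean square error loss, Eq. (2): (1/N) * sum_i (f_i - y_i)^2."""
    return sum((fi - yi) ** 2 for fi, yi in zip(f, y)) / len(f)
```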
For validation, several S-parameter design goals are suggested, in
anticipation that the proposed DNN is capable of producing the equivalent unit-
cell structures. The DNN algorithm is realized in Python 3.8, and
the TensorFlow and Keras frameworks28 are used to establish the model. As an
example, a metasurface structure with three notches is designed using the DNN
method. The specified reflection information is [number of resonances;
resonance frequencies; resonance depths; resonance bandwidths]
= [3; 17.5, 23.5, 25.3 GHz; -30, -20, -20 dB; 0.5, 0.5, 0.4 GHz]. Observe in
Fig. 3a that the output full-wave results achieve the design goals.
For the next example, a unit cell is designed with one resonance
(-15 dB) at 15 GHz. The simulation result shows good conformity with our
design target (see Fig. 3b). Furthermore, the curves of the mean square error
and the accuracy of the presented non-restricted output DNN method are
shown in Fig. 4, with an accuracy rate higher than 92%.
Figure 3: The simulated reflection coefficient of non-restricted output
network architecture a) metasurface with three notches under -10 dB. b)
metasurface with a single notch under -10 dB.
(a) Accuracy
(b) Loss
Figure 4: Curves of a) accuracy and, b) loss function relative to 10000 Epochs
for non-restricted network architecture.
#### 2.2.2 B. Restricted output
In order to increase the learning speed, reduce the number of calculations,
and improve the efficiency of the design process, the output of the network
architecture is restricted so that the DNN generates the
metasurface structure only from the proposed 8 ring-shaped patterns. Unlike the
previous approach, in which the output is a vector of size 1024 forming the
32×32 metasurface pixels, here the output is a vector of size 48.
More specifically, each unit cell consists of a 4×4 matrix of the 8
ring-shaped patterns, where each ring-shaped pattern consists of 8×8 pixels.
To form the output vector, the ring-shaped patterns are denoted by eight
3-bit digital codes, from ”000” to ”111”. Therefore, the output of the DNN is a
vector of size 16×3 = 48. By restricting the output in this way,
the amount of calculation is reduced. It will be shown that the accuracy
of the network reaches 91%. The details of the designed DNN are
summarized in Table 2; the other parameters are the same as in the non-restricted
output network. Fig. 5 shows the curves of the loss function and accuracy.
(a) Accuracy
(b) Loss
Figure 5: Curves of a) accuracy and, b) loss function relative to 10000 Epochs
for restricted network architecture.

Table 2: Detailed information of the restricted output network architecture.
Layer number | Layer | output shape | number of parameter | activation function
---|---|---|---|---
1 | dense_1 (Dense) | (None, 24) | 600 | relu
2 | dropout_1 (Dropout) | (None, 24) | 0 | -
3 | dense_2 (Dense) | (None, 500) | 12500 | relu
4 | dropout_2 (Dropout) | (None, 500) | 0 | -
5 | dense_3 (Dense) | (None, 500) | 250500 | relu
6 | dropout_3 (Dropout) | (None, 500) | 0 | -
7 | dense_4 (Dense) | (None, 500) | 250500 | relu
8 | dropout_4 (Dropout) | (None, 500) | 0 | -
9 | dense_5 (Dense) | (None, 500) | 250500 | relu
10 | dense_6 (Dense) | (None, 48) | 24048 | sigmoid
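The 3-bit encoding described above maps between the 4×4 index matrix and the 48-element output vector. A minimal sketch (assuming the network's sigmoid outputs have already been thresholded to bits before decoding):

```python
def encode_cell(indices):
    """Map a 4x4 matrix of pattern indices (1..8) to a 48-element bit vector.

    Each index is written as a 3-bit code, "000" (pattern 1) through
    "111" (pattern 8), so 16 entries * 3 bits = 48 values.
    """
    bits = []
    for row in indices:
        for k in row:
            bits.extend(int(b) for b in format(k - 1, "03b"))
    return bits

def decode_cell(bits):
    """Inverse mapping: 48 bits back to the 4x4 index matrix."""
    flat = [int("".join(map(str, bits[i:i + 3])), 2) + 1
            for i in range(0, 48, 3)]
    return [flat[4 * r:4 * r + 4] for r in range(4)]
```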
To further validate the effectiveness of the proposed DNN method with
restricted output, four different examples are presented. The specified
S-parameters are fed to the network, and the corresponding unit-cell matrices
are generated from them. We then re-enter these matrices into CST
MWS to simulate the reflection coefficient of the metasurface. The simulated
results are in good accordance with the desired design targets (see Table 3
and Fig. 6).
Consequently, it has been amply demonstrated that the proposed DNN method is
superior to other inverse design algorithms for metasurface structures, from
the perspectives of computational repetition, training time, and network
accuracy. The conformity between the simulated results and the design
targets promises that the proposed DNN approach is an effective method of
metasurface design for a variety of practical applications.
Figure 6: Metasurface design examples through restricted output network
architecture.

Table 3: Desired input targets for the four S-parameters presented in Fig. 6.
Examples | Number of notches | notches frequency (GHz) | notches depth (dB) | notches bandwidth (GHz)
---|---|---|---|---
Fig. 6a | 1 | 42 | -35 | 0.7
Fig. 6b | 1 | 5.8 | -25 | 0.2
Fig. 6c | 2 | 5.5, 10.5 | -12.5, -24.5 | 0.1, 1.8
Fig. 6d | 3 | 28, 33.5, 41.5 | -14, -25, -13.5 | 0.3, 0.5, 0.7
## 3 Conclusion
Herein, we have proposed an inverse metasurface design method based on a deep
neural network, whereby metasurface structures can be computed directly by
merely specifying the design targets. After training the deep learning model
on a set of samples, the proposed model automatically generates the
metasurface pattern as the output from four specified reflection criteria
(namely, number of resonances, resonance frequencies, resonance depths, and
resonance bandwidths) given as the input, over an ultra-wide operating frequency
range. Comparing the numerical simulations with the desired design targets
illustrates that the proposed approach successfully generates the required
metasurface structures with an accuracy of more than 90%. By using 8 ring-shaped
patterns during the training process and restricting the output of the network
to a vector of size 48, the presented method serves as a fast and effective
approach in terms of computational iterations, design time, and network
accuracy. The presented DNN-based method can pave the way for new research
avenues in automatic metasurface realization and highly complicated wave
manipulation.
## 4 Conflict of Interest
The authors declare no conflict of interest.
# An In-depth Review of Privacy Concerns Raised by the COVID-19 Pandemic
Jiaqi Wang
###### Abstract
COVID-19 has hugely changed our lives, work, and interactions with people.
With more and more activities moving online, people are easily exposed to
privacy threats. In this paper, we explore how users self-disclose on social
media and the privacy concerns raised by these behaviors. Based on recent
news, techniques, and research, we identify three growing privacy threats
caused by the COVID-19 pandemic. After that, we provide a systematic analysis
of potential privacy issues related to the pandemic. Furthermore, we propose a
series of research directions about online user self-disclosure and privacy
issues for future work, as well as possible solutions.
## Introduction
COVID-19 has spread across the world and affected how people work, live, and
interact with each other. People are recommended or required to work remotely,
quarantine at home, and keep social distance. Under these circumstances,
people expect more interactions with others via social media platforms, which
has led to a huge increase in social media usage (Holmes 2020). Based on a
study (Kanter 2020) of 25,000 consumers across 30 markets published on April
3rd, 2020, WhatsApp saw a 40% increase in usage overall: usage rose 27% in
countries in the early phase of the pandemic, 41% in the mid-phase, and 51% in
the late phase; Facebook usage increased 37%. China experienced a 58% increase
in usage of local social media apps including WeChat and Weibo. In another
study of 4,500 Influenster community members, most respondents agreed that
their social media consumption (72%) and posting (43%) had increased during
the pandemic. Moreover, TikTok, one of the newer social media platforms, was
used by the largest share of teenagers (48%), overtaking even Instagram (47%)
between March and April 2020 (Perez 2020).
One possible reason is that people are searching for alternative ways to
interact with others and stay mentally healthy. People generate content,
comment on content, forward content, and communicate with others on social
media platforms. To increase a sense of intimacy with others, people share
details of their lives through text, pictures, videos, live video streaming,
etc. To a great extent, this content can reveal personal private information
including age, gender, location, race, etc. Compared with interactions in the
real world, self-disclosed information can more easily be propagated,
searched, saved, and even processed on social media. This increased and more
abundant self-disclosure may cause unpredictable and unacceptable privacy
disclosure for users online. Furthermore, recent research shows that mental
health problems are prevalent precisely because of social media exposure (Gao
et al. 2020), which means the actual effect may run contrary to the expected
mental-health benefit.
However, the pandemic is changing people's sensitivity and attitudes toward
privacy, including what personal information can be disclosed and how
(Nabity-Grover, Cheung, and Thatcher 2020). Discussions about COVID-19 may
include basic personal information, travel schedules, test results, symptom
descriptions, and medicines in use. These acts of self-disclosure reveal a lot
of sensitive information that people were previously unwilling to share
(Kordzadeh and Warren 2017). For example, health status and detailed
descriptions of individual body information are shared to ask for comparisons,
suggestions, or pre-diagnosis. Some communities even encourage people to share
more personal information related to COVID-19 in the name of social
responsibility, without clarifying the boundaries of the gathered information
or how the collected data will be used. Based on this observation, users would
sacrifice personal information to an unprecedented degree to help society
return to the expected normal status. Recent work (Blose et al. 2020) provides
early evidence that the situational factors caused by COVID-19 may affect
people's self-disclosures and privacy calculus.
Figure 1: A Systematic Overview of Privacy Threats from Multiple Domains
Related to the COVID-19 Pandemic
There is another issue we need to pay attention to. Alongside the COVID-19
pandemic, the 2020 United States presidential election began with early voting
in February and concluded in November. The United States officially declared
the COVID-19 pandemic a national emergency on March 13, and the first
statewide "stay-at-home" order was issued in California on March 16, only
about one month after early voting began. During the whole election process,
people were isolated at home and kept social distance during essential
activities most of the time. People participated extensively in political
discussions and actively engaged on social media, pushed by a highly divisive
environment. This is likely linked to users disclosing sensitive information
including, but not limited to, political stance, home address, and information
about family relatives. The potential privacy harms to users in the context of
political debates have been studied before (Rubinstein 2014). However, this
election introduced additional situational factors, as it happened in the
middle of a pandemic. Information flows across multiple social media platforms
may cause serious user privacy issues and unclear self-disclosures under the
chaotic interactions with the natural and social environment. Advanced machine
learning and data mining techniques can uncover non-obvious relationships and
hidden data patterns, providing insights to data owners and external parties
for unforeseen analyses (Chamikara et al. 2020).
In the following, we first summarize and analyze emerging privacy threats
triggered or intensified by the COVID-19 pandemic. Based on our findings, we
provide a high-level comprehensive analysis of privacy from multiple domains,
propose related potential research directions, and draw implications for
future online public privacy in crises. Finally, we discuss possible solutions
to the proposed research questions.
## Increasing Privacy Threats due to the COVID-19 Pandemic
### Mass Surveillance
There is an ongoing public conversation about whether and under what
circumstances the United States should embrace a surveillance program for
COVID-19 (Ram and Gray 2020). Here, we focus on what tools governments and
companies are leveraging, from a phenomenological perspective. There has been
increasing surveillance over people's daily behaviors by governments and
companies during the COVID-19 pandemic in the name of monitoring and tracing
the spread of the virus (Hussein et al. 2020). Many countries and companies
are leveraging people's personal data (location, body temperature, facial
information, etc.), collected by cell phones, traffic cameras, and other
sensors, to track human mobility, identify individuals at risk, and monitor
the spread of the disease (Singer and Sang-hun 2020). In the United Kingdom
and India, smart city infrastructure has been re-purposed to monitor people's
social distancing. In China, people can download a cell phone application that
can tell whether they have been exposed to COVID-19 by analyzing collected
location data and the local infection situation (BBC 2020). In the United
States, Apple and Google provided a contact tracing application for their
mobile users as well, with a Bluetooth specification (Apple and Google 2020a)
and a cryptography specification (Apple and Google 2020b). However, regarding
this key extension of the surveillance state, researchers have noted that
anonymized data is not always anonymous and that location data can exacerbate
inequality (Frith and Saker 2020).
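The remark that anonymized data is not always anonymous can be made concrete with a simple k-anonymity check: a record whose combination of quasi-identifiers (coarse location, age band, device type) is unique in a released dataset can be re-identified by anyone holding auxiliary information. The records below are invented for illustration.

```python
from collections import Counter

def k_anonymity_report(records, quasi_identifiers):
    """For each record, count how many records share its quasi-identifier
    combination; k = 1 means the record is unique and re-identifiable."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    return [counts[k] for k in keys]

# Invented "anonymized" location records: no names, yet one is unique.
records = [
    {"zip3": "160", "age_band": "20-29", "device": "android"},
    {"zip3": "160", "age_band": "20-29", "device": "android"},
    {"zip3": "160", "age_band": "60-69", "device": "ios"},  # unique -> k = 1
]
ks = k_anonymity_report(records, ["zip3", "age_band", "device"])
print(ks)  # [2, 2, 1]
```

The third record, despite carrying no direct identifier, is singled out by its quasi-identifier combination alone, which is exactly why released mobility or contact-tracing data needs more than name removal.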
### Data Usage across Multiple Platforms
During the COVID-19 pandemic, people have spent extensive time online
communicating, generating content, and engaging in other activities. With the
development of data science techniques, people have more computational power
and various channels to collect, process, and share data. Many open datasets
focusing on different aspects of COVID-19 have already been released (Blose et
al. 2020; Chen, Lerman, and Ferrara 2020; Pepe et al. 2020; Cohen et al. 2020;
Cheng et al. 2020; Dong, Du, and Gardner 2020). Many social media platforms
provide APIs for acquiring data, such as Twitter
(https://developer.twitter.com/en/docs/twitter-api) and Reddit
(https://www.reddit.com/dev/api/). These APIs lower the barrier to accessing
social media data. However, we cannot fully prevent malicious usage of the
collected data.
At the same time, more digital records and accounts containing sensitive
information are being created online, for example, online shopping accounts
(Brough and Martin 2020) and accounts for other services that have moved
online. Online users may not be fully aware that their private information can
be collected, shared, and used in unexpected ways (Malandrino et al. 2013).
Many users have more than one account on social media. How to measure a
privacy disclosure score based on information across multiple social networks
has been discussed extensively (Aghasian et al. 2017). Zola et al. explored
cross-source, cross-domain sentiment analysis, training on data from Amazon
and Tripadvisor and testing on data from Facebook and Twitter (Zola et al.
2019).
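A cross-network disclosure score of the kind studied by Aghasian et al. can be sketched as a sensitivity-weighted sum over the union of attributes a user has revealed on any platform. The exact formula and the attribute sensitivities below are our simplifying assumptions, not the published model.

```python
def disclosure_score(profiles, sensitivity):
    """Aggregate privacy disclosure across platforms: an attribute counts
    once if it is revealed on *any* platform. The union is what makes
    multi-platform disclosure worse than any single profile."""
    revealed = set().union(*profiles.values())
    total = sum(sensitivity.values())
    return sum(sensitivity[a] for a in revealed) / total

# Hypothetical sensitivities in [0, 1]; higher = more sensitive.
sensitivity = {"age": 0.2, "location": 0.5, "health": 0.9, "employer": 0.4}
profiles = {
    "platform_A": {"age", "location"},
    "platform_B": {"age", "health"},  # health status only appears here
}
print(round(disclosure_score(profiles, sensitivity), 2))  # 0.8
```

Either platform alone yields a moderate score, but their union exposes 0.8 of the total weighted sensitivity, which is the multi-account effect the cited work quantifies.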
### Change of Individual Privacy Calculus
Another observed phenomenon and potential concern is the change in
individuals' perceptions of self-disclosure and privacy. Individual-level
behavior during the pandemic is a result of both voluntary and
government-enforced behavioral change (Farooq, Laato, and Islam 2020). From
the individual perspective, people are calibrating their behavior between
information acquisition and privacy loss. Users may have different attitudes
toward, and sensitivity to, their privacy and self-disclosure during the
pandemic (Fahey and Hino 2020). People would more readily sacrifice their
private health status information to get suggestions or a pre-diagnosis, or to
contribute to what the government calls for during the COVID-19 pandemic,
especially in Asia (Cha 2020). Discussing personal health status, symptoms,
and test results on social media has become more common. Governments and
companies provide convenient tools for people to update their personal
information and implicitly convince people that these behaviors are a
contribution to the public good (Nabity-Grover, Cheung, and Thatcher 2020).
However, to the best of our knowledge, there are not enough official
guidelines to remind people about individual privacy issues or to broadcast
basic knowledge of data usage during the COVID pandemic. A systematic overview
of privacy issues from different aspects during the COVID-19 pandemic is shown
in Figure 1.
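The privacy calculus discussed in this section is often summarized as a net-utility comparison: a user discloses when perceived benefit outweighs perceived risk, and a crisis can inflate the benefit side. The scores and the crisis term below are illustrative assumptions only, not estimates from data.

```python
def will_disclose(benefit, risk, crisis_boost=0.0):
    """Toy privacy-calculus rule: disclose if (benefit + crisis_boost) > risk.
    benefit and risk are subjective scores in [0, 1]; crisis_boost models the
    extra perceived value of disclosure during an emergency."""
    return (benefit + crisis_boost) > risk

# Sharing symptoms for pre-diagnosis: withheld normally, disclosed in a crisis.
print(will_disclose(benefit=0.4, risk=0.6))                    # False
print(will_disclose(benefit=0.4, risk=0.6, crisis_boost=0.3))  # True
```

The same benefit/risk pair flips from non-disclosure to disclosure once the crisis term is added, which mirrors the shift in sharing behavior described above.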
## Post-pandemic Potential Privacy Risks
### Over-collected Data Abuse
The COVID-19 pandemic has promoted the development of e-commerce, online
education, social media platforms, smartphone applications, and related
virtual services. Due to the health emergency, many countries have relaxed
regulatory restrictions or cooperated with companies to put public security
first, collecting and analyzing data to support governmental prevention
decision-making. Governments could leverage contact tracing information to
monitor and analyze citizens' behaviors, e.g., the identification of LGBT
people in South Korea (Fahey and Hino 2020). Some countries will put pressure
on their companies to release the collected data and provide data analysis on
the involved users. The European Commission has invited telecommunications
companies to make their metadata available (Turner 2020). Tech companies,
including Instagram, Twitter, Facebook, etc., can abuse these detailed
individual-level data sets by selling them, processing them to derive
sensitive information, or sharing them inappropriately. Relying on powerful
computational resources such as GPU clusters, huge amounts of data, and
advanced data processing techniques, users' behaviors can be described,
modelled, and predicted accurately without any consideration for users'
privacy. An example of user behavior identification and prediction across
multiple social media platforms is shown in Figure 2. Moreover, people share
content via text, pictures, video, live streaming, and other formats, which
together provide comprehensive information about users. Online interactions,
e.g., "Follow", "Hashtag", "Mention", "Reply", can even reveal users' friends
and relatives and expose their social network structure. That can cause
privacy loss and over-disclosure for other related users and propagate the
threat across the whole social network.
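The aggregation risk just described can be illustrated by merging the partial attribute sets a user exposes on each platform: each single profile is an incomplete "fuzzy image", but the union approaches a full identity. All attribute values below are invented.

```python
def merge_profiles(partial_profiles):
    """Union the partial information disclosed on each platform into one
    aggregated identity; conflicting values are kept as alternatives."""
    identity = {}
    for profile in partial_profiles:
        for attr, value in profile.items():
            identity.setdefault(attr, set()).add(value)
    return identity

partials = [
    {"first_name": "Alex", "city": "Springfield"},      # platform 1
    {"employer": "Acme Corp", "city": "Springfield"},   # platform 2
    {"health": "tested positive"},                      # platform 3
]
identity = merge_profiles(partials)
print(sorted(identity))  # ['city', 'employer', 'first_name', 'health']
```

No single platform reveals name, employer, and health status together, yet the merged identity holds all of them, which is the cross-platform inference depicted in Figure 2.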
Figure 2: Users' Potential Privacy Risks: User Identity Inference based on
Multiple Social Media. On each social medium, a user self-discloses part of
their personal information, for example, Information 1, Information 2, and
Information 3. From the disclosed information, the user appears as a fuzzy
image built from the released, limited inferred information on that single
platform, for example, Image 1, Image 2, and Image 3. However, given one
user's data from multiple social media and advanced cross-platform data
processing techniques, the data can be aggregated to infer a much more
accurate user identity with detailed personal information. Table 1: Possible
Research Directions and Questions about Privacy Issues and Self-disclosure
related to Crises on Social Media
Research Directions | Research Questions
---|---
Self-disclosure Interaction and Propagation | $\bullet$ How, and to what extent, can users' self-disclosure behaviors affect other related users on social media?
$\bullet$ How do self-disclosure behaviors propagate on social media?
$\bullet$ To what extent does a crisis affect user self-disclosure behaviors?
$\bullet$ How can we find the balance point between privacy preservation and
self-disclosure so as to obtain enough appropriate information in a crisis?
$\bullet$ How can we quantify self-disclosure across multiple social media and
provide a varying evaluation that accounts for situational factors?
Public Privacy Concern and Attitude Tracing | $\bullet$ How can we trace the change in public privacy attitudes up to their current status?
$\bullet$ How can we design an appropriate data-driven mechanism and regulation
to gather appropriate data while decreasing public privacy concern?
$\bullet$ How can we model the complex and dynamic observations involving
users' privacy concerns, users' behaviors, and the pandemic crisis?
Mental Health in the COVID-19 Pandemic | $\bullet$ How can we find a balance between maintaining mental health and preserving privacy during the pandemic?
$\bullet$ How do mental health status, self-disclosure, and privacy concern
affect each other? Certain self-disclosures can help users maintain good
mental health, while also raising privacy concerns.
$\bullet$ During a health emergency crisis, and considering users with
different physical health statuses, are there differences in their mental
health and online behaviors?
Prevention, Prediction, and Protection | $\bullet$ How can we design a comprehensive mechanism to prevent over-self-disclosure and privacy disclosure in the complicated scenarios of a crisis?
$\bullet$ How can we predict public behavior and provide appropriate
suggestions with limited data access during the pandemic?
$\bullet$ How can we protect users' provided data, preserve stability on
social media, and establish social trust?
### Public Privacy Concern and Social Trustworthiness
As the COVID-19 pandemic carries on, debates and laws surrounding surveillance
capabilities are at the forefront of many minds (ROSS 2020). However, a
majority of Americans said that they were concerned about how their personal
data would be used by data collectors and that they knew extremely little
about the laws or regulations protecting their data privacy (Auxier 2020).
Many governments gather or even over-collect people's data during the pandemic
via different approaches. There is a real possibility that they will not
delete the collected personal data, or will even continue collecting data
without informing users. Another survey result in (Auxier 2020) shows that 69%
of U.S. adults thought they should have the right to have their medical data
permanently deleted after necessary and legal usage. While people enjoy the
benefits of pandemic tracking and control via data-driven approaches, this
also raises public concerns about individual privacy. Kye and Hwang (2020)
argued that government actions do have a huge impact on social trust and
government trustworthiness. The temporarily over-collected data and ongoing
privacy disclosures are gradually causing stronger public privacy concern and
challenging social trust in government.
## Potential Research about Pandemic-related Privacy Issues on Social Media
Based on previous work and our discussion, we propose a set of related
research directions (shown in Table 1) to understand and further explore
privacy issues in the time of COVID-19. They include: (i) self-disclosure
interaction and propagation; (ii) public privacy concern and attitude tracing;
(iii) mental health; and (iv) prevention, prediction, and protection in the
COVID pandemic. For each research direction, we also provide several related
specific research questions in Table 1 for future exploration.
## Conclusion
The COVID-19 pandemic has generated many practical problems and research
questions related to privacy in online settings. In this paper, we describe
how COVID-19 affects user behaviors on social media. After that, we discuss
three growing privacy threats due to the pandemic: mass surveillance, data
usage across multiple platforms, and changes in people's privacy calculus.
Furthermore, we introduce possible privacy risks after the pandemic. Finally,
we propose a set of related research topics for further study. Several
research directions appear possible: (i) appropriate and adaptive approaches
to quantify self-disclosure and privacy that combine people's comprehensive
behaviors across multiple scenarios; (ii) mathematical and statistical models
of privacy and human behaviors that can complement data-driven approaches;
(iii) studying the interactions between people's awareness of and sensitivity
to privacy and self-disclosure under changing environments. Different people
may have different initial attitudes toward their personal information and
decide how much information they feel comfortable self-disclosing. Exploring
the hidden relations between privacy attitudes, self-disclosure behaviors, and
the reactions received from the environment can help us better understand
humans' privacy-related behaviors and provide comprehensive suggestions for
privacy-preserving mechanism design.
## References
* Aghasian et al. (2017) Aghasian, E.; Garg, S.; Gao, L.; Yu, S.; and Montgomery, J. 2017. Scoring users’ privacy disclosure across multiple online social networks. _IEEE access_ 5: 13118–13130.
* Apple and Google (2020a) Apple; and Google. 2020a. Contact Tracing Bluetooth Specification. URL https://www.blog.google/documents/58/Contact_Tracing_-_Bluetooth_Specification_v1.1_RYGZbKW.pdf.
* Apple and Google (2020b) Apple; and Google. 2020b. Contact Tracing Cryptography Specification. URL https://www.blog.google/documents/56/Contact_Tracing_-_Cryptography_Specification.pdf.
* Auxier (2020) Auxier, B. 2020. How Americans see digital privacy issues amid the COVID-19 outbreak. URL https://www.pewresearch.org/fact-tank/2020/05/04/how-americans-see-digital-privacy-issues-amid-the-covid-19-outbreak/.
* BBC (2020) BBC. 2020. China launches coronavirus ’close contact detector’ app. URL https://www.bbc.com/news/technology-51439401.
* Blose et al. (2020) Blose, T.; Umar, P.; Squicciarini, A.; and Rajtmajer, S. 2020. Privacy in Crisis: A study of self-disclosure during the Coronavirus pandemic. _arXiv preprint arXiv:2004.09717_ .
* Brough and Martin (2020) Brough, A. R.; and Martin, K. D. 2020. Consumer Privacy During (and After) the COVID-19 Pandemic. _Journal of Public Policy & Marketing_ 0743915620929999.
* Cha (2020) Cha, V. 2020. Asia’s COVID-19 Lessons for the West: Public Goods, Privacy, and Social Tagging. _The Washington Quarterly_ 1–18.
* Chamikara et al. (2020) Chamikara, M. A. P.; Bertók, P.; Liu, D.; Camtepe, S.; and Khalil, I. 2020. Efficient privacy preservation of big data for accurate data mining. _Information Sciences_ 527: 420–443.
* Chen, Lerman, and Ferrara (2020) Chen, E.; Lerman, K.; and Ferrara, E. 2020. Covid-19: The first public coronavirus twitter dataset. _arXiv preprint arXiv:2003.07372_ .
* Cheng et al. (2020) Cheng, C.; Barceló, J.; Hartnett, A. S.; Kubinec, R.; and Messerschmidt, L. 2020\. Covid-19 government response event dataset (coronanet v. 1.0). _Nature human behaviour_ 4(7): 756–768.
* Cohen et al. (2020) Cohen, J. P.; Morrison, P.; Dao, L.; Roth, K.; Duong, T. Q.; and Ghassemi, M. 2020\. Covid-19 image data collection: Prospective predictions are the future. _arXiv preprint arXiv:2006.11988_ .
* Dong, Du, and Gardner (2020) Dong, E.; Du, H.; and Gardner, L. 2020. An interactive web-based dashboard to track COVID-19 in real time. _The Lancet infectious diseases_ 20(5): 533–534.
* Fahey and Hino (2020) Fahey, R. A.; and Hino, A. 2020. COVID-19, digital privacy, and the social limits on data-focused public health responses. _International Journal of Information Management_ 55: 102181.
* Farooq, Laato, and Islam (2020) Farooq, A.; Laato, S.; and Islam, A. N. 2020. Impact of online information on self-isolation intention during the COVID-19 pandemic: cross-sectional study. _Journal of medical Internet research_ 22(5): e19128.
* Frith and Saker (2020) Frith, J.; and Saker, M. 2020. It Is All About Location: Smartphones and Tracking the Spread of COVID-19. _Social Media+ Society_ 6(3): 2056305120948257.
* Gao et al. (2020) Gao, J.; Zheng, P.; Jia, Y.; Chen, H.; Mao, Y.; Chen, S.; Wang, Y.; Fu, H.; and Dai, J. 2020. Mental health problems and social media exposure during COVID-19 outbreak. _Plos one_ 15(4): e0231924.
* Holmes (2020) Holmes, R. 2020. Is COVID-19 Social Media’s Levelling Up Moment? URL https://www.forbes.com/sites/ryanholmes/2020/04/24/is-covid-19-social-medias-levelling-up-moment/?sh=5fa080dd6c60.
* Hussein et al. (2020) Hussein, M. R.; Shams, A. B.; Apu, E. H.; Mamun, K. A. A.; and Rahman, M. S. 2020\. Digital Surveillance Systems for Tracing COVID-19: Privacy and Security Challenges with Recommendations. _arXiv preprint arXiv:2007.13182_ .
* Kanter (2020) Kanter. 2020. COVID-19 Barometer: Consumer attitudes, media habits and expectations. URL https://www.kantar.com/Inspiration/Coronavirus/COVID-19-Barometer-Consumer-attitudes-media-habits-and-expectations.
* Kordzadeh and Warren (2017) Kordzadeh, N.; and Warren, J. 2017. Communicating personal health information in virtual health communities: an integration of privacy calculus model and affective commitment. _Journal of the Association for Information Systems_ 18(1): 1.
* Kye and Hwang (2020) Kye, B.; and Hwang, S.-J. 2020. Social trust in the midst of pandemic crisis: Implications from COVID-19 of South Korea. _Research in social stratification and mobility_ 68: 100523.
* Malandrino et al. (2013) Malandrino, D.; Petta, A.; Scarano, V.; Serra, L.; Spinelli, R.; and Krishnamurthy, B. 2013. Privacy awareness about information leakage: Who knows what about me? In _Proceedings of the 12th ACM workshop on Workshop on privacy in the electronic society_ , 279–284.
* Nabity-Grover, Cheung, and Thatcher (2020) Nabity-Grover, T.; Cheung, C. M.; and Thatcher, J. B. 2020. Inside out and outside in: How the COVID-19 pandemic affects self-disclosure on social media. _International Journal of Information Management_ 55: 102188.
* Pepe et al. (2020) Pepe, E.; Bajardi, P.; Gauvin, L.; Privitera, F.; Lake, B.; Cattuto, C.; and Tizzoni, M. 2020. COVID-19 outbreak response, a dataset to assess mobility changes in Italy following national lockdown. _Scientific data_ 7(1): 1–7.
* Perez (2020) Perez, S. 2020. TikTok Engagement Among Kids Surges During the Pandemic. URL https://www.marketingcharts.com/demographics-and-audiences/teens-and-younger-113749.
* Ram and Gray (2020) Ram, N.; and Gray, D. 2020. Mass surveillance in the age of COVID-19. _Journal of Law and the Biosciences_ 7(1): lsaa023.
* ROSS (2020) ROSS, C. R. 2020. Will we give up privacy for security after Covid-19? URL https://www.statnews.com/2020/04/08/coronavirus-will-we-give-up-privacy-for-security/.
* Rubinstein (2014) Rubinstein, I. S. 2014. Voter privacy in the age of big data. _Wis. L. Rev._ 861.
* Singer and Sang-hun (2020) Singer, N.; and Sang-hun, C. 2020. As Coronavirus Surveillance Escalates, Personal Privacy Plummets. URL https://www.nytimes.com/2020/03/23/technology/coronavirus-surveillance-tracking-privacy.html.
* Turner (2020) Turner, J. 2020. Privacy vs. Security in the Post-Pandemic World. URL https://www.natlawreview.com/article/privacy-vs-security-post-pandemic-world.
* Zola et al. (2019) Zola, P.; Cortez, P.; Ragno, C.; and Brentari, E. 2019. Social media cross-source and cross-domain sentiment classification.
# Solitary waves and double layers in complex plasma media
A A Mamun^a,c and Abdul Mannan^a,b
^a Department of Physics, Jahangirnagar University, Savar, Dhaka-1342, Bangladesh
^b Institut für Mathematik, Martin Luther Universität Halle-Wittenberg, D-06099 Halle, Germany
^c Also at Wazed Miah Science Research Centre, Jahangirnagar University, Savar, Dhaka-1342, Bangladesh
Email: mamun-<EMAIL_ADDRESS>Corresponding author: Abdul Mannan. Email:<EMAIL_ADDRESS>Telephone: +4915210286280; Fax: +49-345-55-27005
###### Abstract
A complex plasma medium, containing a Cairns nonthermal electron species, an
adiabatically warm inertial ion species, and a stationary positively charged
dust (PCD) species (which makes the plasma system very complex), is considered. The
effects of PCD species, nonthermal electron species, and adiabatic ion-
temperature on ion-acoustic (IA) solitary waves (SWs) and double layers (DLs)
are investigated by the pseudo-potential approach, which is valid for the
arbitrary amplitude time-independent SWs and DLs. It is observed that the
presence of the PCD species reduces the phase speed of the IA waves, and
consequently supports the IA subsonic compressive SWs in such electron-ion-PCD
plasmas. On the other hand, the increase in both adiabatic ion-temperature and
the number of nonthermal or fast electrons causes to reduce the possibility
for the formation of the subsonic SWs, and thus convert subsonic SWs into
supersonic ones. It is also observed that above a certain value of the
nonthermal parameter, the IA supersonic SWs with both positive and negative
potentials as well as the DLs with only negative potential exist. The
applications of the work in space environments (viz. Earth’s mesosphere,
cometary tails, Jupiter’s magnetosphere, etc.) and laboratory devices, where
the warm ion and nonthermal electron species along with PCD species have been
observed, are briefly discussed.
###### keywords:
Positive dust; Non-thermal electrons; Subsonic and supersonic SWs; Double
layers
Article type: Original Article
## 1 Introduction
Nowadays, the existence of positively charged dust (PCD) species in electron-
ion plasmas has received renewed interest because of their vital role in
modifying existing features as well as introducing new features of linear and
nonlinear ion-acoustic (IA) waves propagating in many space plasma
environments [viz. Earth’s mesosphere [1, 2, 3], cometary tails [4, 5],
Jupiter’s surroundings [6], Jupiter’s magnetosphere [7], etc.] and laboratory
devices [8, 9, 10], where in addition to electron-ion plasmas, the PCD species
have been observed. Three principal mechanisms by which the dust species
becomes positively charged [11, 12, 13, 14] are as follows:
* •
The photoemission of electrons from the dust grain surface induced by the flux
of high energy photons [13].
* •
The thermionic emission of electrons from the dust grain surface by the
intense radiative or thermal heating [12].
* •
The secondary emission of electrons from the dust grain surface by the impact
of high energetic plasma particles like electrons or ions [11].
The dispersion relation for the IA waves in an electron-ion-PCD plasma system
(containing inertialess isothermal electron species, inertial cold ion
species, and stationary PCD species) is given by [15]
$\displaystyle\frac{\omega}{kC_{i}}=\frac{1}{\sqrt{1+\mu+k^{2}\lambda_{D}^{2}}},$
(1)
where $\omega=2\pi f$ and $k=2\pi/\lambda$ in which $f$ ($\lambda$) is the IA
wave frequency (wavelength); $C_{i}=(z_{i}k_{B}T_{e}/m_{i})^{1/2}$ is the IA
speed in which $k_{B}$ is the Boltzmann constant, $T_{e}$ is the electron
temperature, and $m_{i}$ is the ion mass; $\lambda_{D}=(k_{B}T_{e}/4\pi
z_{i}n_{i0}e^{2})^{1/2}$ is the IA wave-length scale in which $n_{i0}$
($z_{i}$) is the number density (charge state) of the ion species at
equilibrium, and $e$ is the magnitude of the charge of an electron;
$\mu=z_{d}n_{d0}/z_{i}n_{i0}$ with $n_{d0}$ ($z_{d}$) being the number density
(charge state) of the PCD species at equilibrium. This means that $\mu=0$
corresponds to the usual electron-ion plasma, and $\mu\rightarrow\infty$
corresponds to electron-dust plasma [5, 8, 9, 10]. Thus, $0<\mu<\infty$ is
valid for the electron-ion-PCD plasmas. The dispersion relation defined by (1)
for the long-wavelength limit (viz. $\lambda\gg\lambda_{D}$) becomes
$\displaystyle\frac{\omega}{kC_{i}}\simeq\sqrt{\frac{1}{1+\mu}}.$ (2)
The dispersion relation (2) indicates that the phase speed decreases with the
rise of the value of $\mu$. This new feature of the IA waves (continuous as
well as periodic compression and rarefaction or vice versa of the positive ion
fluid) is introduced due to the reduction of the space charge electric field
by the presence of PCD.
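As a quick numerical illustration (a minimal Python sketch of our own, not part of any analysis tool; function names are ours), the normalized phase speed of equations (1) and (2) can be evaluated directly:

```python
import math

def ia_phase_speed(k_lambda_D, mu):
    """Normalized IA phase speed omega/(k C_i) from the dispersion relation (1)."""
    return 1.0 / math.sqrt(1.0 + mu + k_lambda_D**2)

def ia_phase_speed_long_wl(mu):
    """Long-wavelength limit (2): omega/(k C_i) ~ (1 + mu)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 + mu)

# mu = 0 recovers the usual electron-ion plasma (phase speed -> C_i),
# while the phase speed decreases monotonically as mu grows.
```

For $\lambda\gg\lambda_{D}$ (i.e. $k\lambda_{D}\ll 1$) the two expressions agree, which is the long-wavelength reduction used in the text.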
Recently, based on this new linear feature, Mamun and Sharmin [15] and Mamun
[16] have shown the existence of subsonic shock waves and SWs, respectively,
under the assumption of a Maxwellian electron species and a cold ion
species. The IA waves in different plasma systems composed of ions and
electrons have also been studied by a number of authors [17, 18, 19]. However,
the reduction of the IA wave phase speed by the presence of PCD species can
also make the IA phase speed comparable with the ion thermal speed
$V_{Ti}=(k_{B}T_{i}/m_{i})^{1/2}$ (where $T_{i}$ is the ion-fluid temperature)
so that the effect of the ion-thermal pressure cannot be neglected. On the
other hand, the electron species in the space environments mentioned above does
not always follow the Maxwellian electron velocity distribution function. This
means that the linear dispersion relation (2), and the works of Mamun and
Sharmin [15] and Mamun [16] are valid for a cold ion fluid ($T_{i}=0$) limit
and for the Maxwell electron velocity distribution function, which can be
expressed in one dimensional (1D) normalized [normalized by $n_{e0}/V_{Te}$,
where $V_{Te}=(k_{B}T_{e}/m_{e})^{1/2}$ is the thermal speed of the electron
species, and $v$ is normalized by $V_{Te}$] form as
$f(v)=\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}(v^{2}-2\phi)\right],$ (3)
where $\phi$ is the IA wave potential normalized by $k_{B}T_{e}/e$.
To overcome these two limitations, we consider (i) adiabatically warm ion
fluid and (ii) nonthermal electron species following Cairns velocity
distribution function, which can be similarly expressed in 1D normalized form
as [20]
$f(v)=\frac{1+\alpha(v^{2}-2\phi)^{2}}{(1+3\alpha)\sqrt{2\pi}}\exp\left[-\frac{1}{2}(v^{2}-2\phi)\right],$
(4)
where $\alpha$ is a parameter determining the population of fast (energetic)
particles present in the plasma system under consideration. We note that
equation (4) is identical to equation (3) for $\alpha=0$. Thus, how the
nonthermal parameter $\alpha$ modifies the Maxwell distribution of the
electron species is shown mathematically by equation (4) and graphically by
the left panel of figure 1. On the other hand, including the effects of the
Cairns nonthermal electron distribution ($\alpha$) and the adiabatic ion-
temperature ($\sigma$), the dispersion relation for the long wavelength IA
waves can be expressed as
$\displaystyle\frac{\omega}{kC_{i}}=\sqrt{\frac{1+3\alpha}{(1+\mu)(1-\alpha)}+3\sigma},$
(5)
where $\sigma=T_{i0}/z_{i}T_{e}$, with $T_{i0}$ being the ion-temperature at
equilibrium. The dispersion relation (5) indicates that as $\alpha$ and
$\sigma$ increase, the phase speed of the IA waves increases. This is due to
the enhancement of the space charge electric field by the nonthermal electron
species and of the flexibility of the ion fluid by its temperature. The
variation of the phase speed of the IA waves [defined by equation (5)] with
$\alpha$ and $\sigma$ is shown in the right panel of figure 1.
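The $\alpha$-dependence of the distribution is easy to explore numerically; the short sketch below (ours, purely illustrative) encodes equations (3) and (4) and exhibits the hallmark of the Cairns distribution, namely enhanced high-$|v|$ wings and a depleted thermal core for $\alpha>0$:

```python
import math

def f_maxwell(v, phi):
    """Normalized 1D Maxwellian, equation (3)."""
    return math.exp(-0.5 * (v * v - 2.0 * phi)) / math.sqrt(2.0 * math.pi)

def f_cairns(v, phi, alpha):
    """Normalized 1D Cairns distribution, equation (4)."""
    w = v * v - 2.0 * phi
    return ((1.0 + alpha * w * w) / (1.0 + 3.0 * alpha)
            * math.exp(-0.5 * w) / math.sqrt(2.0 * math.pi))

# alpha = 0 recovers the Maxwellian exactly; alpha > 0 lifts the fast-particle
# wings relative to the core while keeping the distribution normalized.
```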
Figure 1: The left panel shows the curves representing the Cairns nonthermal
velocity distribution function [defined by equation (4)] for $\phi=0.5$ and
different values of $\alpha$, whereas the right panel shows how the normalized
phase speed ($\omega/kC_{i}$) of the IA waves [defined by equation (5)] varies
with $\sigma$ and $\alpha$ for $\mu=0.6$.
The aim of this work is to investigate the combined effects of positively
charged stationary dust species, Cairns nonthermal electron distribution and
adiabatic ion-temperature on the basic features of the IA solitary waves (SWs)
and double layers (DLs) in electron-ion-PCD plasma system by the pseudo-
potential approach [20, 21, 22].
The manuscript is structured as follows. The equations describing the
nonlinear dynamics of the IA waves in an electron-ion-PCD plasma are provided
in section 2. The combined effects of stationary PCD species, adiabatic ion-
temperature and nonthermally distributed electron species on IA SWs and DLs
are investigated by the pseudo-potential approach in section 3. A brief
discussion is finally presented in section 4.
## 2 Governing equations
To investigate the nonlinear propagation of the IA waves defined by the
equation (5), we consider an electron-ion-PCD plasma medium. The nonlinear
dynamics of the IA waves propagating in such an electron-ion-PCD plasma medium
is described by
$\displaystyle\frac{\partial n_{i}}{\partial t}+\frac{\partial}{\partial
x}(n_{i}u_{i})=0,$ (6) $\displaystyle\frac{\partial u_{i}}{\partial
t}+u_{i}\frac{\partial u_{i}}{\partial x}=-\frac{\partial\phi}{\partial
x}-\frac{\sigma}{n_{i}}\frac{\partial P_{i}}{\partial x},$ (7)
$\displaystyle\frac{\partial P_{i}}{\partial t}+u_{i}\frac{\partial
P_{i}}{\partial x}+\gamma P_{i}\frac{\partial u_{i}}{\partial x}=0,$ (8)
$\displaystyle\frac{\partial^{2}\phi}{\partial x^{2}}=(1+\mu)n_{e}-n_{i}-\mu,$
(9)
where $n_{i}$ is the ion number density normalized by $n_{i0}$; $u_{i}$ is the
ion fluid speed normalized by $C_{i}$; $P_{i}$ is the adiabatic ion-thermal
pressure normalized by $n_{i0}k_{B}T_{i0}$; $\gamma\,[=(2+{\cal N})/{\cal N}]$
is the ion fluid adiabatic index with ${\cal N}$ being the number of degrees
of freedom, which has the value $1$ ($3$) for the 1D (3D) case so that in our
present work ${\cal N}=1$ and $\gamma=3$; $t$ ($x$) is the time (space)
variable normalized by $\omega_{pi}^{-1}$ ($\lambda_{D}$); $n_{e}$ is the
nonthermal electron number density normalized by $n_{e0}$, and is determined
by integrating equation (4) with respect to $v$ from $-\infty$ to $+\infty$,
i.e. $n_{e}$ can be expressed as [21]
$\displaystyle n_{e}=(1-\beta\phi+\beta\phi^{2})\exp(\phi),$ (10)
with $\beta=4\alpha/(1+3\alpha)$. We note that for an isothermal ion species
($\gamma=1$, $T_{i}=T_{i0}$, and $P_{i}=n_{i}k_{B}T_{i}$), equations (6) and (8)
are identical.
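The closed form (10) follows from integrating (4) over $v$ using the Gaussian moments $\langle v^{2}\rangle=1$ and $\langle v^{4}\rangle=3$. A short numerical cross-check (our own sketch, not from the original work) confirms it:

```python
import math

def n_e_closed(phi, alpha):
    """Nonthermal electron density, equation (10), with beta = 4 alpha / (1 + 3 alpha)."""
    beta = 4.0 * alpha / (1.0 + 3.0 * alpha)
    return (1.0 - beta * phi + beta * phi * phi) * math.exp(phi)

def n_e_numeric(phi, alpha, vmax=12.0, n=100000):
    """Trapezoidal integral of the Cairns distribution (4) over v in [-vmax, vmax]."""
    def f(v):
        w = v * v - 2.0 * phi
        return ((1.0 + alpha * w * w) / (1.0 + 3.0 * alpha)
                * math.exp(-0.5 * w) / math.sqrt(2.0 * math.pi))
    h = 2.0 * vmax / n
    s = 0.5 * (f(-vmax) + f(vmax))
    for i in range(1, n):
        s += f(-vmax + i * h)
    return s * h
```

The Gaussian tails beyond $|v|=12$ are negligible, so the numerical integral reproduces equation (10) to high precision.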
## 3 SWs and DLs
To study arbitrary amplitude IA SWs and DLs, we employ the pseudo-potential
approach [20, 21, 22] by assuming that all dependent variables in equations
(6)–(9) depend only on a single variable $\xi=x-{\cal M}t$, where ${\cal M}$
is the Mach number (defined by $\omega/kC_{i}$). This transformation
($\xi=x-{\cal M}t$) along with the substitution of equation (10) into equation
(9) and $\gamma=3$ into equation (8) as well as the use of the steady state
condition allows us to write (6)–(9) as
$\displaystyle{\cal M}\frac{dn_{i}}{d\xi}-\frac{d}{d\xi}(n_{i}u_{i})=0,$ (11)
$\displaystyle{\cal
M}\frac{du_{i}}{d\xi}-u_{i}\frac{du_{i}}{d\xi}=\frac{d\phi}{d\xi}+\frac{\sigma}{n_{i}}\frac{dP_{i}}{d\xi},$
(12) $\displaystyle{\cal
M}\frac{dP_{i}}{d\xi}-u_{i}\frac{dP_{i}}{d\xi}-3P_{i}\frac{du_{i}}{d\xi}=0,$
(13)
$\displaystyle\frac{d^{2}\phi}{d\xi^{2}}=(1+\mu)\left(1-\beta\phi+\beta\phi^{2}\right)\exp(\phi)-n_{i}-\mu.$
(14)
The appropriate conditions (viz. $n_{i}\rightarrow 1$ and $u_{i}\rightarrow 0$
at $\xi\rightarrow\pm\infty$) reduce (11) to
$\displaystyle u_{i}={\cal M}\left(1-\frac{1}{n_{i}}\right),$ (15)
$\displaystyle n_{i}=\frac{{\cal M}}{{\cal M}-u_{i}}.$ (16)
The substitution of (15) into (13) gives rise to
$\displaystyle\frac{1}{n_{i}}\frac{dP_{i}}{d\xi}+3P_{i}\frac{d}{d\xi}\left(\frac{1}{n_{i}}\right)=0,$
(17)
which finally reduces to
$\displaystyle P_{i}=n_{i}^{3},$ (18)
where the integration constant is found to be $1$ under the conditions that
$P_{i}\rightarrow 1$ and $n_{i}\rightarrow 1$ at $\xi\rightarrow\pm\infty$.
Similarly, the substitution of (15) into equation (12) yields
$\displaystyle{\cal
M}\frac{du_{i}}{d\xi}-u_{i}\frac{du_{i}}{d\xi}-{\sigma}\frac{dP_{i}}{d\xi}+\frac{\sigma}{{\cal
M}}u_{i}\frac{dP_{i}}{d\xi}=\frac{d\phi}{d\xi}.$ (19)
Again, multiplying (13) by $\sigma/{\cal M}$ one can write
$\displaystyle\sigma\frac{dP_{i}}{d\xi}-\frac{\sigma}{{\cal
M}}u_{i}\frac{dP_{i}}{d\xi}-3P_{i}\frac{\sigma}{{\cal
M}}\frac{du_{i}}{d\xi}=0.$ (20)
Now, computing (20)$-2\times$(19) and integrating once with respect to $\xi$, we obtain
$\displaystyle 3\sigma(P_{i}-1)-\frac{3\sigma}{{\cal M}}(P_{i}u_{i})-2{\cal
M}u_{i}+u_{i}^{2}+2\phi=0,$ (21)
where the integration constant is found to be $3\sigma$ under the conditions
that $P_{i}\rightarrow 1$, $n_{i}\rightarrow 1$, $u_{i}\rightarrow 0$, and
$\phi\rightarrow 0$ at $\xi\rightarrow\pm\infty$. The substitution of
equations (15) and (18) into equation (21) yields
$\displaystyle 3\sigma n_{i}^{4}-({\cal M}^{2}+3\sigma-2\phi)n_{i}^{2}+{\cal
M}^{2}=0.$ (22)
This is a quadratic equation for $n_{i}^{2}$. Thus, $n_{i}$ can be expressed as
$\displaystyle
n_{i}=\frac{1}{\sqrt{6\sigma}}\left[\sqrt{\Psi-\sqrt{\Psi^{2}-12\sigma{\cal
M}^{2}}}\right],$ (23)
where $\Psi={\cal M}^{2}+3\sigma-2\phi$. Now, substituting equation (23) into
equation (14), we obtain
$\displaystyle\frac{d^{2}\phi}{d\xi^{2}}=(1+\mu)\left(1-\beta\phi+\beta\phi^{2}\right)\exp(\phi)-\frac{1}{\sqrt{6\sigma}}\left[\sqrt{\Psi-\sqrt{\Psi^{2}-12\sigma{\cal
M}^{2}}}\right]-\mu.$ (24)
Multiplying both sides of equation (24) by $d\phi/d\xi$ and integrating the
resulting equation with respect to $\phi$, we finally obtain
$\displaystyle\frac{1}{2}\left(\frac{d\phi}{d\xi}\right)^{2}+V(\phi,\mathcal{M})=0,$
(25)
which represents the energy integral of a pseudo-particle of unit mass, with
pseudo-time $\xi$ and pseudo-position $\phi$, where the pseudo-potential
$V(\phi,\mathcal{M})$ is defined by
$\displaystyle
V(\phi,\mathcal{M})=C_{0}+\mu\phi-(1+\mu)\left[1+\frac{4\alpha}{1+3\alpha}\left(3-3\phi+\phi^{2}\right)\right]\exp[\phi]$
$\displaystyle\hskip
56.9055pt-\frac{\sqrt{2}}{3\sqrt{3\sigma}}\left(\sqrt{\Psi-\sqrt{\Psi^{2}-12\sigma{\cal
M}^{2}}}\right)\left(\Psi+\frac{1}{2}\sqrt{\Psi^{2}-12\sigma{\cal
M}^{2}}\right),$ (26)
where
$\displaystyle
C_{0}=(1+\mu)\left[1+\frac{12\alpha}{1+3\alpha}\right]+\sigma+{\cal M}^{2}$
(27)
is the integration constant, and it is chosen in such a way that $V(\phi,{\cal
M})=0$ at $\phi=0$.
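The boundary conditions built into $C_{0}$ can be verified numerically. The sketch below (our own; it assumes $\Psi^{2}>12\sigma{\cal M}^{2}$, i.e. ${\cal M}^{2}>3\sigma$ near $\phi=0$, so the square roots are real) evaluates equations (26)-(27):

```python
import math

def sagdeev_V(phi, M, mu, alpha, sigma):
    """Pseudo-potential V(phi, M) of equations (26)-(27)."""
    Psi = M * M + 3.0 * sigma - 2.0 * phi
    root = math.sqrt(Psi * Psi - 12.0 * sigma * M * M)
    C0 = (1.0 + mu) * (1.0 + 12.0 * alpha / (1.0 + 3.0 * alpha)) + sigma + M * M
    electrons = (1.0 + mu) * (1.0 + 4.0 * alpha / (1.0 + 3.0 * alpha)
                              * (3.0 - 3.0 * phi + phi * phi)) * math.exp(phi)
    ions = (math.sqrt(2.0) / (3.0 * math.sqrt(3.0 * sigma))
            * math.sqrt(Psi - root) * (Psi + 0.5 * root))
    return C0 + mu * phi - electrons - ions
```

With this choice of $C_{0}$, $V(0,{\cal M})=0$ holds exactly, and a central difference confirms $V^{\prime}(0,{\cal M})\approx 0$, i.e. the equilibrium charge neutrality condition.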
It is clear that $V(0,{\cal M})=0$ is satisfied because of our choice of the
integration constant, and $V^{\prime}(0,{\cal M})=0$ is satisfied because of
the equilibrium charge neutrality condition, where the prime denotes the
derivative of $V(\phi,{\cal M})$ with respect to $\phi$. So, the conditions
for the existence of SWs and DLs are: (i) $V^{\prime\prime}(0,{\cal M})<0$ so
that the fixed point at the origin is unstable (i.e. the convexity condition
at the origin); (ii) $V^{\prime}(\phi_{m},{\cal M})>0$ for the SWs with
$\phi>0$; (iii) $V^{\prime}(\phi_{m},{\cal M})<0$ for the SWs with $\phi<0$;
(iv) $V^{\prime}(\phi_{m},\mathcal{M})=0$ for the DLs, where $\phi_{m}$ is the
amplitude of the SWs or DLs. Thus, SWs or DLs exist if and only if
$V^{\prime\prime}(0,{\cal M})<0$, i.e. ${\cal M}>{\cal M}_{c}$, where
${\cal M}_{c}=\sqrt{\frac{1+3\alpha}{(1+\mu)(1-\alpha)}+3\sigma}.$ (28)
We note that the expression for ${\cal M}_{c}$ [given by equation (28)] is
identical to equation (5). The phase speed of the IA waves decreases and the
possibility for the formation of subsonic IA SWs increases as the number of
PCD species increases. This is depicted in figure 1(a). On the other hand, the
possibility for the formation of subsonic (supersonic) IA SWs decreases
(increases) with the increase of the values of $\alpha$ and $\sigma$. This is
shown in figure 1(b). The ranges of the value of ${\cal M}$, viz.
$\mathcal{M}_{c}<\mathcal{M}<1$ and $\mathcal{M}>\mathcal{M}_{c}>1$ determine
the formation of subsonic and supersonic IA SWs, respectively. The variation
of $\mathcal{M}_{c}$ with $\mu$ and $\alpha$ for the fixed value of $\sigma$
is graphically shown in figure 2(a), where the shaded (non-shaded) area
represents the domain for the existence of subsonic (supersonic) SWs.
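Equation (28) itself is a one-liner; a minimal sketch (ours) makes the subsonic window $\mathcal{M}_{c}<\mathcal{M}<1$ explicit:

```python
import math

def mach_critical(mu, alpha, sigma):
    """Critical Mach number M_c, equation (28)."""
    return math.sqrt((1.0 + 3.0 * alpha) / ((1.0 + mu) * (1.0 - alpha)) + 3.0 * sigma)

# Subsonic SWs require M_c < M < 1, which is possible only when M_c < 1,
# i.e. when enough PCD is present to pull the IA phase speed below C_i;
# increasing alpha or sigma pushes M_c back up and closes the window.
```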
It is well known [20, 21] that the sign of
$V^{\prime\prime\prime}(0,\mathcal{M}_{c})=\frac{3(1-\alpha)^{2}(1+\mu)^{2}[1+3\alpha+4(1-\alpha)(1+\mu)\sigma]}{(1+3\alpha)^{3}}-(1+\mu)$
(29)
determines either the existence of the IA SWs with $\phi>0$ or the coexistence
of the IA SWs with $\phi>0$ and $\phi<0$. Thus, the IA SWs with $\phi>0$
[$\phi<0$ and $\phi>0$] will exist (coexist) if
$V^{\prime\prime\prime}(0,\mathcal{M}_{c})>0$
[$V^{\prime\prime\prime}(0,\mathcal{M}_{c})<0$].
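The polarity criterion (29) can likewise be checked numerically; the sketch below (our own, for illustration) shows that for $\alpha=\sigma=0$ the expression reduces to $(1+\mu)(2+3\mu)>0$, so only positive-potential SWs occur, while sufficiently nonthermal parameters drive it negative:

```python
def d3V_at_origin(mu, alpha, sigma):
    """Third derivative V'''(0, M_c), equation (29); its sign decides the SW polarity."""
    num = (3.0 * (1.0 - alpha) ** 2 * (1.0 + mu) ** 2
           * (1.0 + 3.0 * alpha + 4.0 * (1.0 - alpha) * (1.0 + mu) * sigma))
    return num / (1.0 + 3.0 * alpha) ** 3 - (1.0 + mu)

# > 0: only SWs with phi > 0 exist; < 0: SWs with phi > 0 and phi < 0 coexist.
```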
Figure 2: (a) The variation of $\mathcal{M}_{c}$ with $\mu$ for $\sigma=0.01$
and $\alpha=0$ (solid curve), $\alpha=0.05$ (dashed curve) and $\alpha=0.1$
(dot-dashed curve). The shaded area corresponds to the existence of subsonic
SWs; (b) The contour plot of $V^{\prime\prime\prime}(0,\mathcal{M}_{c})=0$ as
a function of $\alpha$ and $\sigma$ for different values of $\mu$, viz
$\mu=0.1$ (solid curve), $\mu=0.3$ (dashed curve) and $\mu=0.5$ (dot-dashed
curve).
Figure 3: The formation of the potential wells representing the subsonic SWs
(a) for $\alpha=0.05$, $\mu=0.7$ (solid curve), $\mu=0.75$ (dashed curve)
$\mu=0.8$ (dot-dashed curve); (b) for $\mu=0.75$, $\alpha=0$ (solid curve),
$\alpha=0.05$ (dashed curve) $\alpha=0.1$ (dot-dashed curve). The other
parameters, which are kept fixed, are ${\cal M}=0.985$ and $\sigma=0.01$.
Figure 4: The formation of the potential wells representing the subsonic SWs
(a) for $\sigma=0.01$, ${\cal M}=0.95$ (solid curve), ${\cal M}=0.97$ (dashed
curve), ${\cal M}=0.99$ (dot-dashed curve); (b) for ${\cal M}=0.99$,
$\sigma=0.01$ (solid curve), $\sigma=0.03$ (dashed curve) $\sigma=0.06$ (dot-
dashed curve). The other parameters, which are kept fixed, are $\mu=0.7$ and
$\alpha=0.05$.
Figure 5: The formation of the potential wells representing the coexistence of
supersonic SWs with $\phi>0$ and $\phi<0$ (a) for $\alpha=0.266$, $\mu=0.3$
(solid curve), $\mu=0.35$ (dashed curve) and $\mu=0.4$ (dot-dashed curve); (b)
for $\mu=0.357$, $\alpha=0.26$ (solid curve), $\alpha=0.27$ (dashed curve) and
$\alpha=0.28$ (dot-dashed curve). The other parameters, which are kept fixed,
are ${\cal M}=1.5934$ and $\sigma=0.2$.
Figure 6: The formation of the potential wells representing (a) the
coexistence of supersonic SWs with $\phi>0$ and $\phi<0$ for $\alpha=0.258$,
$\mu=0.36$, ${\cal M}=1.5868$, $\sigma=0.2$ (solid curve), $\sigma=0.22$
(dashed curve) and $\sigma=0.24$ (dot-dashed curve); (b) the existence of DLs
with $\phi<0$ for ${\cal M}=1.4648$, $\alpha=0.25$ (solid curve), ${\cal
M}=1.50362$, $\alpha=0.26$ (dashed curve), ${\cal M}=1.5452$, $\alpha=0.27$
(dot-dashed curve), $\mu=0.5$, and $\sigma=0.2$.
Figure 7: The formation of the potential wells representing the DLs with
$\phi<0$ (a) for ${\cal M}=1.4704$, $\sigma=0.15$ (solid curve), ${\cal
M}=1.50362$, $\sigma=0.2$ (dashed curve), and ${\cal M}=1.5367$, $\sigma=0.25$
(dot-dashed curve), and $\mu=0.5$; (b) for ${\cal M}=1.5682$, $\mu=0.4$ (solid
curve), ${\cal M}=1.50362$, $\mu=0.5$ (dashed curve), and ${\cal M}=1.4466$,
$\mu=0.6$ (dot-dashed curve), and $\sigma=0.2$. The value of $\alpha=0.26$ is
kept fixed for both cases.
Figure 2(b) shows how the parametric regimes for the existence of IA SWs with
$\phi>0$ and for the coexistence of SWs with $\phi>0$ and $\phi<0$ change
with the plasma parameters. It means that the SWs with $\phi>0$
($\phi<0$ and $\phi>0$) exist (coexist) for the complex plasma parameters satisfying
$V^{\prime\prime\prime}(0,\mathcal{M}_{c})>0$
[$V^{\prime\prime\prime}(0,\mathcal{M}_{c})<0$]. It is seen that the increase
in the number density of the PCD species enhances the regime for the existence
of the SWs with $\phi>0$. The possibility for the formation of SWs with
$\phi<0$ as well as $\phi>0$ increases as the population of fast/energetic
electrons increases. On the other hand, the rise of the ion-temperature
($\sigma$) increases (decreases) the possibility for existence of SWs with
$\phi>0$ ($\phi<0$ as well as $\phi>0$).
Figures 3-7 visualize the amplitude ($\phi_{m}$), which is the intercept on
the positive or negative $\phi$-axis, and the width
($\phi_{m}/\sqrt{|V_{m}|}$), where $|V_{m}|$ is the maximum depth of
$V(\phi)$ in the pseudo-potential wells formed on the positive or negative
$\phi$-axis.
Figures 3 and 4 indicate the formation of the pseudo-potential wells in the
positive $\phi$-axis, which corresponds to the formation of the subsonic IA
SWs only with $\phi>0$, i.e. subsonic IA SWs with $\phi<0$ do not exist
in the complex plasma system under consideration. The possibility for the
formation of subsonic solitary wave increases (decreases) with increasing the
value of $\mu$ ($\alpha$ and $\sigma$). It is seen that the amplitude (width)
of the subsonic IA SWs decreases (increases) as we decrease the value of
$\mu$. On the other hand, the amplitude (width) of subsonic SWs decreases
(increases) with increasing values of $\alpha$ and $\sigma$. It is worth
mentioning that a lower value of $\mu$ and higher values of $\alpha$ and
$\sigma$ convert the subsonic SWs into supersonic ones. It is seen in figures
5 and 6(a) that for $\mathcal{M}>\mathcal{M}_{c}$ the supersonic SWs with
$\phi>0$ and $\phi<0$ coexist. The amplitude (width) of supersonic SWs with
both $\phi>0$ and $\phi<0$ increases (decreases) with increasing the value of
$\mu$. On the other hand, the depth of potential wells (representing the
coexistence of supersonic SWs with $\phi>0$ and $\phi<0$) decreases with
increasing the values of $\sigma$ and $\alpha$.
The IA DLs, which exist only with negative potential, are formed for
$\mathcal{M}>\mathcal{M}_{c}$, as illustrated in figures 6(b) and 7. The rise
of the values of $\mu$ and $\sigma$ decreases (increases) the
amplitude (width) of the DLs (as shown in figure 7). On the other hand, in
figure 6, the potential wells on the negative $\phi$-axis become wider as the
nonthermal parameter $\alpha$ increases. It means that the amplitude of the DLs
is increased by the effect of the nonthermal parameter, but their width
decreases. It is noted here that the formation of DLs requires a larger
(smaller) value of the Mach number as the values of $\alpha$ and $\sigma$
($\mu$) increase.
## 4 Discussion
We have considered a complex plasma medium containing Cairns nonthermally
distributed electron species, adiabatically warm ion species, and PCD species,
and have investigated the IA SWs and DLs in such a plasma medium. We have
employed the pseudo-potential approach which is valid for arbitrary or large-
amplitude SWs and DLs. The results obtained from this theoretical work and
their applications can be briefly discussed as follows:
* •
The presence of the PCD species reduces the IA wave phase speed and supports
subsonic SWs only with positive potential. On the other hand, the Cairns
nonthermal electron distribution and the adiabatic ion-temperature enhance the
IA wave phase speed, reduce the possibility for the formation of the subsonic
SWs, and finally convert the subsonic SWs into supersonic ones.
* •
The amplitude (width) of the subsonic IA SWs increases (decreases) with the
rise of the values of $\mu$ and ${\cal M}$, but the amplitude (width) of the
subsonic IA SWs decreases (increases) with the rise of the values of $\alpha$
and $\sigma$. This is due to the fact that the phase speed of the IA waves
decreases with the rise of the value of $\mu$, but increases with the rise of
the values of $\alpha$ and $\sigma$.
* •
The supersonic IA SWs with $\phi>0$ and $\phi<0$ coexist due to the presence
of a sufficient number of nonthermal or fast electrons (above a certain value
of $\alpha$) in the plasma system under consideration. However, the increase
in the value of $\sigma$ and $\mu$ decreases the possibility for the formation
of the IA SWs with $\phi<0$.
* •
The amplitude (width) of the supersonic IA SWs (which coexist with $\phi>0$
and $\phi<0$) increases (decreases) as the values of $\mu$ and ${\cal M}$
increase, but it decreases (increases) as the values of $\alpha$ and $\sigma$
increase.
* •
The height (thickness) of the IA DLs (which exist only with $\phi<0$)
increases (decreases) as the values of both parameters of the set $\{{\cal
M},\,\alpha\}$ increase. On the other hand, it decreases (increases) with
the rise of the values of both parameters of the sets $\{{\cal M},\,\mu\}$
and $\{{\cal M},\,\sigma\}$.
The advantage of the pseudo-potential method [20, 21, 22] is that it is valid
for arbitrary amplitude SWs and DLs, but it does not allow us to observe the
time evolution of the SWs or DLs. To overcome these limitations, one has to
develop a numerical code to solve the basic equations (6)$-$(10) numerically.
This type of simulation will be able to show the time evolution of arbitrary
amplitude SWs and DLs. This is, of course, a challenging research problem of
recent interest, but beyond the scope of our present work.
To conclude, we hope that the results of our present investigation will also
be useful in understanding the basic features of the IA waves and associated
nonlinear structures like SWs and DLs in space environments (viz. Earth’s
mesosphere or ionosphere [1, 2, 3], cometary tails [4], Jupiter’s surroundings
[7, 6] and magnetosphere [7], etc.) and laboratory devices [23, 8, 9, 10].
## Data availability
Data sharing is not applicable to this article as no new data were created or
analyzed in this study.
## Disclosure statement
The authors declare that there is no conflict of interest.
## Acknowledgement
A. Mannan gratefully acknowledges the financial support of the Alexander von
Humboldt Stiftung (Bonn, Germany) through its post-doctoral research
fellowship.
## References
* [1] Havnes O, Trøim J, Blix T, et al. First detection of charged dust particles in the earth’s mesosphere. J Geophys Res. 1996;101(A5):10839.
* [2] Gelinas LJ, Lynch KA, Kelley MC, et al. First observation of meteoritic charged dust in the tropical mesosphere. Geophys Res Lett. 1998;25(21):4047–4050.
* [3] Mendis DA, Wong WH, Rosenberg M. On the observation of charged dust in the tropical mesosphere. Phys Scr. 2004;T113:141.
* [4] Horányi M. Charged dust dynamics in the solar system. Annu Rev Astron Astrophys. 1996;34:383.
* [5] Mamun AA, Shukla PK. Dust-acoustic mach cones in magnetized electron-dust plasmas of saturn. Geophys Res Lett. 2004;31(L06808):1–4.
* [6] Tsintikidis D, Gurnett DA, Kurth WS, et al. Micron-sized particles detected in the vicinity of jupiter by the voyager plasma wave instruments. Geophys Res Lett. 1996;23(9):997–1000.
* [7] Horanyi M, Morfill GE, Grün E. Mechanism for the acceleration and ejection of dust grains from jupiter’s magnetosphere. Nature. 1993;363:144–146.
* [8] Khrapak SA, Morfill G. Waves in two component electron-dust plasma. Phys Plasmas. 2001;8(6):2629.
* [9] Fortov VE, Nefedov AP, Vaulina OS, et al. Dynamics of dust grains in an electron–dust plasma induced by solar radiation under microgravity conditions. New J Phys. 2003;5:102.1–102.17.
* [10] Davletov AE, Kurbanov F, Mukhametkarimov YS. Chemical model for positively charged dust particles. Phys Plasmas. 2018;25(12):120701.
* [11] Chow VW, Mendis DA, Rosenberg M. Role of grain size and particle velocity distribution in secondary electron emission in space plasmas. J Geophys Res. 1993;98(A11):19065.
* [12] Rosenberg M, Mendis DA. Uv-induced coulomb crystallization in a dusty gas. IEEE Trans Plasma Sci. 1995;23(2):177.
* [13] Rosenberg M, Mendis DA, Sheehan D. Uv-induced coulomb crystallization of dust grains in high-pressure gas. IEEE Trans Plasma Sci. 1996;24(6):1422 – 1430.
* [14] Fortov VE, Nefedov AP, Vaulina OS, et al. Dusty plasma induced by solar radiation under microgravitational conditions: An experiment on board the mir orbiting space station. JETP. 1998;87:1087.
* [15] Mamun AA, Sharmin BE. Nonplanar ion-acoustic subsonic shock waves in dissipative electron-ion-pcd plasmas. AIP Advances. 2020;10(12):125317.
* [16] Mamun AA. Roles of positively charged dust in the formation of ion-acoustic subsonic solitary waves in electron-ion-pcd plasmas. Contrib Plasma Phys. 2021;61(1):0.
* [17] Zedan NA, Atteya A, El-Taibany WF, et al. Stability of ion-acoustic solitons in a multi-ion degenerate plasma with the effects of trapping and polarization under the influence of quantizing magnetic field. Waves in Random and Complex Media. 2020;0(0):1–15.
* [18] El-Monier SY, Atteya A. Dynamics of ion-acoustic waves in nonrelativistic magnetized multi-ion quantum plasma: the role of trapped electrons. Waves in Random and Complex Media. 2020;0(0):1–19.
* [19] Mehdipoor M. Characteristics of nonlinear ion-acoustic waves in collisional plasmas with ionization effects. Waves in Random and Complex Media. 2020;0(0):1–25.
* [20] Cairns RA, Mamun AA, Bingham R, et al. Electrostatic solitary structures in non‐thermal plasmas. Geophys Res Lett. 1995;22(20):2709.
* [21] Mamun AA. Effects of ion temperature on electrostatic solitary structures in nonthermal plasmas. Phys Rev E. 1997;55(2):1852.
* [22] Bernstein IB, Greene GM, Kruskal MD. Exact nonlinear plasma oscillations. Phys Rev. 1957;108(3):546.
* [23] Nakamura Y, Sarma A. Observation of ion-acoustic solitary waves in a dusty plasma. Phys Plasmas. 2001;8(9):3921.
[a]Sezen Sekmen, for the CMS Collaboration
# Digging deeper into SUSY parameter space with the CMS experiment
###### Abstract
The classic searches for supersymmetry have not given any strong indication
for new physics. Therefore CMS is designing dedicated searches to target the
more difficult and specific supersymmetry scenarios. This contribution presents
three such recent searches based on 13 TeV proton-proton collisions recorded
with the CMS detector in 2016, 2017 and 2018: a search for heavy gluinos
cascading via heavy next-to-lightest neutralino in final states with boosted Z
bosons and missing transverse momentum; a search for compressed supersymmetry
in final states with soft taus; and a search for compressed, long-lived
charginos in hadronic final states with disappearing tracks.
The Compact Muon Solenoid (CMS) Experiment [1] at the Large Hadron Collider
(LHC) has collected an unprecedented 137 fb-1 of data with proton-proton
collisions at a center-of-mass energy of 13 TeV, which is continuously
explored for traces of supersymmetry in a wide variety of searches. As of
2020, classical searches, such as those looking for gluinos and squarks in
inclusive SUSY final states with a large number of search bins, looking for
top squarks in hadronic, single lepton or dilepton final states, or looking
for charginos or neutralinos in single lepton, dilepton or trilepton final
states have not yet observed a deviation from the standard model (SM), and
have excluded parts of the SUSY parameter space. However, SUSY can still be
realized in many alternative well-motivated ways in hidden, remote corners of
the vast, multi-dimensional SUSY parameter space, to which the more standard
and inclusive searches may not be sensitive. Nowadays, CMS is enriching its
physics program by an increasing diversity of dedicated searches to probe such
corners and enhance the chances of discovery.
One example of a special scenario with a final state difficult to observe is
that of a compressed mass spectrum where masses of two accessible SUSY
partners are very close to each other. Here, decays of the heavier particle
lead to final states with low momentum (soft) objects and low missing
transverse momentum. Such final states are explored by searches that use soft
objects and a high momentum initial state radiation jet. Another consequence
of compressed spectra can be long-lived particles, for which an increasing
number of searches are being developed. On the opposite end, there are
scenarios with high mass SUSY partners and high mass differences, for which
several searches featuring objects with high Lorentz boost, leading to merged
decay products, and thus substructure, are being designed. Moreover, there
are dedicated searches for direct production of sleptons or staus, which are
hard to access due to low cross sections and challenges in triggering. Cascade
decays with Higgs boson are also explored by explicitly reconstructing the
Higgs boson and incorporating it into multi-object SUSY final states.
Additionally, signatures with special combinations of objects predicted by
certain SUSY scenarios, such as $\gamma+b$ jets or $\gamma+$ lepton are
investigated. Besides all these searches, which are mainly targeting
$R$-parity conserving models, a whole suite of analyses targeting variations
of $R$-parity violating SUSY scenarios exists or is in progress. This
contribution presents 3 examples of recent non-classical SUSY searches based
on CMS data collected in 2016, 2017 and 2018, aiming to dig deeper into the
SUSY parameter space.
Boosted $ZZ+p_{T}^{miss}$ search: The first search targets a scenario
motivated by naturalness, where pair-produced gluinos with $\sim$2-3 TeV mass
decay cascading via a massive $\tilde{\chi}^{0}_{2}$ to a light
$\tilde{\chi}^{0}_{1}$ and a $Z$ boson [2]. The large mass difference between
$\tilde{\chi}^{0}_{2}$ and $\tilde{\chi}^{0}_{1}$ gives the $Z$ bosons a large
Lorentz boost. The signature for a boosted $Z$ boson candidate is a wide-cone
jet having a measured mass compatible with the $Z$ boson mass. The analysis,
performed on 137 fb$^{-1}$ of 13 TeV data, selects events with 0 leptons, $\geq 2$
$Z$ bosons, $\geq 2$ jets, missing transverse momentum $p_{T}^{miss}>300$ GeV
and hadronic transverse momentum $H_{T}>400$ GeV. The $Z$ boson candidates are
selected among anti-$k_{T}$ jets with a size parameter of 0.8 (AK8 jets), and
are required to have $p_{T}>200$ GeV and a mass of $40$ GeV$<m_{AK8jet}<140$
GeV. The second-highest $p_{T}$ $Z$ boson candidate should be separated from any $b$-jet by
an angular distance of $\Delta R(Z_{2},b)>0.8$ to eliminate backgrounds.
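The baseline selection above can be sketched as a simple event filter (illustrative only: the thresholds mirror the text, but the event fields and function name are hypothetical, not CMS software, and the $\Delta R(Z_{2},b)$ requirement is omitted):

```python
def passes_baseline(event):
    """Sketch of the boosted ZZ + pT_miss baseline selection (not CMS code)."""
    # Z boson candidates: AK8 jets with pT > 200 GeV and 40 < mass < 140 GeV
    z_candidates = [
        j for j in event["ak8_jets"]
        if j["pt"] > 200.0 and 40.0 < j["mass"] < 140.0
    ]
    return (
        event["n_leptons"] == 0
        and len(z_candidates) >= 2
        and event["n_jets"] >= 2
        and event["pt_miss"] > 300.0   # GeV
        and event["ht"] > 400.0        # GeV
    )

event = {
    "ak8_jets": [{"pt": 450.0, "mass": 92.0}, {"pt": 310.0, "mass": 88.0}],
    "n_jets": 3,
    "n_leptons": 0,
    "pt_miss": 350.0,
    "ht": 900.0,
}
print(passes_baseline(event))  # True
```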
The dominant SM background in this final state is the
$Z(\rightarrow\nu\nu)+$jets process. SM backgrounds are estimated directly
from data, in control regions defined using the masses of the $Z$ candidate
jets as seen in Figure 1, top left. The mass sideband control regions are used
to fit the leading AK8 jet mass distribution for estimating the background
normalization integrated over $p_{T}^{miss}$, as seen in Figure 1, top right.
The $p_{T}^{miss}$ control region is used to derive the $p_{T}^{miss}$ shape,
based on the assumption that jet mass and $p_{T}^{miss}$ have minimal
correlation.
The estimated background yields are shown as a function of $p_{T}^{miss}$ and
compared to data, as shown in Figure 1, bottom left. No excess over the SM
expectation is observed. The results are interpreted using a simplified SUSY
model of gluino pair production in which the gluino decays to a low momentum
quark pair and $\tilde{\chi}^{0}_{2}$, and $\tilde{\chi}^{0}_{2}$ decays to a
boosted $Z+\tilde{\chi}^{0}_{1}$, where
$m_{\tilde{g}}-m_{\tilde{\chi}^{0}_{2}}=50$ GeV and
$m_{\tilde{\chi}^{0}_{1}}=1$ GeV, as shown in Figure 1, bottom right. For this
scenario, data exclude gluino masses below 1920 GeV at 95% confidence level.
Figure 1: Definition of the search and control regions in the plane of
subleading vs. leading jet mass (top left), leading AK8 jet mass shape fit in
the mass sidebands (top right), observed data and background prediction as
functions of $p_{T}^{miss}$ (bottom left), and the 95% CL upper limit on the
production cross section for the gluino signal model as a function of the
gluino mass (bottom right) in the boosted $ZZ+p_{T}^{miss}$ search [2].
Compressed SUSY search with soft taus: The second search targets directly or
indirectly produced staus with low $m_{\tilde{\tau}}-m_{\tilde{\chi}^{0}_{1}}$
($<50$ GeV), a compressed case favored by dark matter coannihilation
scenarios, where the coannihilation between the stau and the lightest
neutralino can generate the observed relic density. It is the first LHC search
for a signature of one soft, hadronically decaying $\tau$ lepton ($\tau_{h}$),
one energetic jet from initial-state radiation (ISR), and large transverse
momentum imbalance [3]. The search, using 77 fb$^{-1}$ of 13 TeV data, selects
events having exactly one $\tau_{h}$ with $20<p_{T}(\tau_{h})<40$ GeV, an ISR
jet with $p_{T}>100$ GeV, $p_{T}^{miss}>230$ GeV, angular separation between
the ISR jet and $p_{T}^{miss}$ $\Delta R(j_{ISR},p_{T}^{miss})>0.7$ and zero
$b$-jets. The analysis looks for an excess in the distribution of $\tau$
transverse mass
$m_{T}(\tau_{h},p_{T}^{miss})=\sqrt{2p_{T}^{miss}p_{T}(\tau_{h})(1-\cos\Delta\phi(\vec{p}_{T}^{miss},\tau_{h}))}$.
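For concreteness, this variable can be computed as follows (a minimal sketch; the function name and example values are mine, not from the analysis):

```python
import math

def transverse_mass(pt_miss, pt_tau, dphi):
    """m_T = sqrt(2 * pT_miss * pT(tau_h) * (1 - cos(delta phi)))."""
    return math.sqrt(2.0 * pt_miss * pt_tau * (1.0 - math.cos(dphi)))

# A tau_h back to back with pT_miss (dphi = pi) maximizes m_T:
print(round(transverse_mass(230.0, 30.0, math.pi), 1))  # 166.1 GeV
```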
The dominant SM backgrounds are $t\bar{t}+$jets and $W/Z+$jets. Their $m_{T}$ shapes
are estimated from control regions and yields are extrapolated from
simulation. Data-simulation agreement in control regions is used to validate
the modeling of the $\tau_{h}$ selections and to measure data-to-simulation
scale factors to correct the ISR jet and $p_{T}^{miss}$ modeling. For QCD
multijet backgrounds, both $m_{T}$ shape and yields are estimated from data
control regions. The resulting $m_{T}$ distribution is shown in Figure 2, top,
where data are seen to be consistent with the SM. Figure 2 (bottom left) shows
the interpretation of this result in a simplified SUSY model of
$\tilde{\chi}^{0}_{2}\tilde{\chi}^{\pm}_{1}/\tilde{\chi}_{1}^{+}\tilde{\chi}_{1}^{-}$
production. For a 100% wino $\tilde{\chi}^{0}_{2}/\tilde{\chi}^{\pm}_{1}$,
$m_{\tilde{\chi}^{\pm}_{1}}-m_{\tilde{\chi}^{0}_{1}}=50$ GeV,
$m_{\tilde{\tau}}=\frac{1}{2}(m_{\tilde{\chi}^{\pm}_{1}}+m_{\tilde{\chi}^{0}_{1}})$
and
BR($\tilde{\chi}^{\pm}_{1}\rightarrow\tilde{\tau}\nu_{\tau}\rightarrow\tau\tilde{\chi}^{0}_{1}\nu_{\tau})=100\%$,
$\tilde{\chi}^{0}_{2}/\tilde{\chi}^{\pm}_{1}$ masses up to 290 GeV are
excluded at 95% confidence level. This sensitivity exceeds that of all other
searches to date, including the LEP exclusion of $m_{\tilde{\chi}^{\pm}_{1}}>103.5$ GeV in
compressed scenarios [4]. Figure 2, bottom right shows the ratio of the 95% CL
upper limit on the direct $\tilde{\tau}$ production signal cross section to
the theoretical cross section as a function of $m_{\tilde{\tau}}$ and $\Delta
m(\tilde{\tau},\tilde{\chi}^{0}_{1})$. No sensitivity to direct stau pair
production is achieved yet.
Figure 2: The $m_{T}$ distribution of data, background prediction and signal
benchmarks in the signal region (top); the 95% CL upper limits on the
$\tilde{\chi}^{0}_{2}\tilde{\chi}^{\pm}_{1}/\tilde{\chi}_{1}^{+}\tilde{\chi}_{1}^{-}$
production cross sections as a function of $m(\tilde{\chi}_{1}^{\pm})$ (bottom
left); and ratio of the 95% CL upper limit on the direct $\tilde{\tau}$ pair
production cross section to the theory prediction as function of
$m(\tilde{\tau})$ and $\Delta m(\tilde{\tau},\tilde{\chi}^{0}_{1})$ (bottom
right) in the compressed SUSY search with soft taus [3].
Disappearing track search using $M_{T2}$: For compressed SUSY with
$m_{\tilde{\chi}^{\pm}_{1}}-m_{\tilde{\chi}^{0}_{1}}\sim O(100~\mathrm{MeV})$,
$\tilde{\chi}^{\pm}_{1}$ is long lived. It would decay in the CMS tracker to a
soft, undetectable pion and a $\tilde{\chi}^{0}_{1}$. This would lead to a
disappearing track $+E_{T}^{miss}$ signature. The final search presented here
looks in 137 fb$^{-1}$ of 13 TeV data for such compressed charginos in gluino or
squark decays by extending the classical inclusive hadronic search based on
the stransverse mass variable $M_{T2}$ with final states consisting of
disappearing tracks (DTs) and at least 2 jets and $M_{T2}>200$ GeV [5]. The
search explores categories of short and medium/long DT selections, which
consist of hits in the pixel or pixel $+$ strip tracking detectors of CMS,
respectively, in order to search for a wide range of lifetimes. Including DTs
in the search gives a possibility to loosen kinematic requirements without
accumulating large amounts of backgrounds. For instance, the $M_{T2}$
requirement is reduced from 400 to 200 GeV. The analysis categorizes events in
68 search bins defined in jet multiplicity, hadronic transverse momentum
$H_{T}$, DT length and DT $p_{T}$. Main sources of backgrounds are hadrons and
leptons poorly reconstructed in the tracker and tracks built out of incorrect
combinations of hits. They are estimated by calculating fake rates in data
control regions and applying these fake rates to DT candidates.
Figure 3: Exclusion limits at 95% CL for direct gluino pair production where
the gluinos decay to light-flavor quarks (top left), light squark pair
production (top center), and top squark pair production (top right) with
$c\tau_{0}(\tilde{\chi}^{\pm}_{1})=50$ cm. Exclusion limits on
$m_{\tilde{\chi}^{0}_{1}}$ with
$m_{\tilde{\chi}^{\pm}_{1}}=m_{\tilde{\chi}^{0}_{1}}+O(100~\mathrm{MeV})$ as a function
of $\tilde{\chi}^{\pm}_{1}$ proper decay length for gluino pair production
with $m_{\tilde{g}}=1900$ GeV (bottom left), squark pair production with
$m_{\tilde{q}}=1500$ GeV (bottom center), and top squark pair production with
$m_{\tilde{t}}=1000$ GeV (bottom right) in the disappearing track search using
$M_{T2}$ [5].
The search found no deviation in data from the SM expectation. Figure 3 shows
the interpretation of the search results in various simplified SUSY models.
The top row shows exclusion limits at 95% CL for direct gluino pair production
where the gluinos decay to light-flavor (u, d, s, c) quarks (top left), light
squark pair production (top center), and top squark pair production (top
right) for $c\tau_{0}(\tilde{\chi}^{\pm}_{1})=50$ cm. Extending the inclusive
$M_{T2}$ search with disappearing tracks increased $m_{\tilde{g}}$ reach from
$\sim$2 to $2.46$ TeV and $m_{\tilde{\chi}^{0}_{1}}$ reach from $\sim$1.2 to
$\sim$2 TeV for gluino pair production; $m_{\tilde{q}}$ reach from $\sim$1.8
to $\sim$2.1 TeV and $m_{\tilde{\chi}^{0}_{1}}$ reach from $\sim$0.8 to
$\sim$1.6 TeV for squark pair production, and $m_{\tilde{t}}$ reach from
$\sim$1.2 to 1.65 TeV and $m_{\tilde{\chi}^{0}_{1}}$ reach from $\sim$0.55 to
$\sim$1.25 TeV for stop pair production. In all cases, sensitivity in the
compressed region was significantly improved. The bottom row in Figure 3 shows
exclusion limits versus chargino decay length for selected gluino, squark and
stop masses.
In summary, 3 recent examples of dedicated CMS SUSY searches targeting
specific scenarios and exclusive signatures were presented, namely, a boosted
$ZZ+p_{T}^{miss}$ search, which extended the gluino mass reach to 1.9 TeV for
$m_{\tilde{g}}-m_{\tilde{\chi}^{0}_{2}}=50$ GeV; a soft hadronic
$\tau+p_{T}^{miss}+$ISR jet search for compressed staus motivated by dark
matter coannihilation models, which obtained a sensitivity for charginos
extending the LEP limits; and a search that added regions with disappearing
tracks to the inclusive hadronic $M_{T2}$ search, which increased gluino and
squark mass limits by 400-600 GeV and significantly improved sensitivity in
the compressed region. Other searches dedicated to specific final states have
been performed earlier, such as the search for $H\rightarrow\gamma\gamma$ using razor
and $M_{T2}$ variables [6]; searches for hadronic and semileptonic staus [7,
8]; $E_{T}^{miss}$ and boosted Higgs to $bb$ [9]; SUSY in vector boson fusion
channels [10]; RPV smuons [11]; selectrons and smuons [12]; the searches in
diphoton and $E_{T}^{miss}$ final states [13], $b$ jets and photons final
states [14], and photon, lepton and $E_{T}^{miss}$ final states [15]. More
searches are ongoing for soft opposite-sign dilepton signatures and stealth
and RPV stops.
Acknowledgements: I would like to thank my colleagues in the CMS
Collaboration for their hard work in producing the results in this
contribution, and the organizers of ICHEP 2020 for their efforts in
realizing this important conference virtually during the difficult Covid-19
period.
## References
* [1] S. Chatrchyan et al. [CMS], JINST 3 (2008), S08004
* [2] A. M. Sirunyan et al. [CMS], JHEP 09 (2020), 149 [arXiv:2008.04422 [hep-ex]].
* [3] A. M. Sirunyan et al. [CMS], Phys. Rev. Lett. 124 (2020) no.4, 041803 [arXiv:1910.01185 [hep-ex]].
* [4] A. Heister et al. [ALEPH], Phys. Lett. B 526 (2002), 206-220 [arXiv:hep-ex/0112011 [hep-ex]]; J. Abdallah et al. [DELPHI], Eur. Phys. J. C 31 (2003), 421-479 [arXiv:hep-ex/0311019 [hep-ex]]; P. Achard et al. [L3], Phys. Lett. B 580 (2004), 37-49 [arXiv:hep-ex/0310007 [hep-ex]]; G. Abbiendi et al. [OPAL], Eur. Phys. J. C 32 (2004), 453-473 [arXiv:hep-ex/0309014 [hep-ex]].
* [5] A. M. Sirunyan et al. [CMS], Eur. Phys. J. C 80 (2020) no.1, 3 [arXiv:1909.03460 [hep-ex]].
* [6] A. M. Sirunyan et al. [CMS], Eur. Phys. J. C 80 (2020) no.3, 189 [arXiv:1907.13179 [hep-ex]].
* [7] A. M. Sirunyan et al. [CMS], CMS-PAS-SUS-17-002.
* [8] A. M. Sirunyan et al. [CMS], JHEP 11 (2018), 151 [arXiv:1807.02048 [hep-ex]].
* [9] A. M. Sirunyan et al. [CMS], Phys. Rev. Lett. 120 (2018) no.24, 241801 [arXiv:1712.08501 [hep-ex]].
* [10] A. M. Sirunyan et al. [CMS], JHEP 08 (2019), 150 [arXiv:1905.13059 [hep-ex]].
* [11] A. M. Sirunyan et al. [CMS], Eur. Phys. J. C 79 (2019) no.4, 305 [arXiv:1811.09760 [hep-ex]].
* [12] A. M. Sirunyan et al. [CMS], Phys. Lett. B 790 (2019), 140-166 [arXiv:1806.05264 [hep-ex]].
* [13] A. M. Sirunyan et al. [CMS], JHEP 06 (2019), 143 [arXiv:1903.07070 [hep-ex]].
* [14] A. M. Sirunyan et al. [CMS], Eur. Phys. J. C 79 (2019) no.5, 444 [arXiv:1901.06726 [hep-ex]].
* [15] A. M. Sirunyan et al. [CMS], JHEP 01 (2019), 154 [arXiv:1812.04066 [hep-ex]].
# Regularizing (away) vacuum energy
Adam Koberinski, Department of Philosophy, University of Waterloo, Waterloo,
ON N2L 3G1, Canada.<EMAIL_ADDRESS>
(Forthcoming in Foundations of Physics)
###### Abstract
In this paper I formulate Minimal Requirements for Candidate Predictions in
quantum field theories, inspired by viewing the standard model as an effective
field theory. I then survey standard effective field theory regularization
procedures, to see if the vacuum expectation value of energy density
($\langle\rho\rangle$) is a quantity that meets these requirements. The
verdict is negative, leading to the conclusion that $\langle\rho\rangle$ is
not a physically significant quantity in the standard model. Rigorous
extensions of flat space quantum field theory eliminate $\langle\rho\rangle$
from their conceptual framework, indicating that it lacks physical
significance in the framework of quantum field theory more broadly. This
result has consequences for problems in cosmology and quantum gravity, as it
suggests that the correct solution to the cosmological constant problem
involves a revision of the vacuum concept within quantum field theory.
## 1 Introduction
The cosmological constant problem has been a major focus of physicists working
on theories of quantum gravity since at least the mid-1980s. The problem
originates with unpublished remarks by Pauli, while interest in the problem
increased in the 1980s due to inflation. [30] famously laid out the state
of the field in the late 1980s, and used anthropic considerations to place
bounds on the possible values of a cosmological constant in the Einstein field
equations. The problem arises in a semiclassical merging of quantum field
theory (QFT) and general relativity, where the stress-energy tensor for
classical matter is replaced by an expectation value of the stress-energy
tensor predicted by a particular model of QFT. When one does this, the vacuum
expectation values of energy densities for each field have the same form as a
cosmological constant term (i.e., a constant multiple of the metric), and so
should contribute to the observed cosmological constant. However, when one
takes a standard “prediction” of the combined vacuum energy densities from a
model of QFT, the result is dozens of orders of magnitude larger than what is
observed. Candidate solutions to the problem attempt to introduce new physics
to reconcile the semiclassical prediction with observation; the predominant
view in the physics literature is that an acceptable candidate for a theory of
quantum gravity must solve the cosmological constant problem. Though many toy
models have been proposed, there is no agreed upon solution pointing the way
to the correct theory of quantum gravity.
The stubborn persistence of the cosmological constant problem provides
motivation for a more detailed philosophical analysis of its assumptions.
Assuming the “old-fashioned” view of renormalization, [17] breaks down the
steps required to formulate the problem, and criticizes the justification
behind each step. One of these steps involves the assumption that models of
QFT predict the vacuum expectation value of energy density,
$\langle\rho\rangle$. The prediction is taken to indicate that
$\langle\rho\rangle$ is a physically significant quantity in the standard
model. However, the problem changes shape when one accounts for the fact that
the standard model is widely believed to be an effective field theory (EFT),
with a built-in energy scale at which it breaks down. The EFT approach to QFTs
makes sense of the old requirement of renormalizability, and uses the
renormalization group equations to understand renormalization
non-perturbatively. (For recent philosophical discussions of EFTs, see [29,
32, 11, 24, 19].)
As is well known, QFTs require renormalization in order to generate finite
predictions. Renormalization consists of two steps: first, one introduces
regulators to replace infinite quantities with quantities depending on an
arbitrary parameter. The regulator $\mu$ must be such that (i) the regularized
terms are rendered finite for all finite values of $\mu$, and (ii) the
original divergent term is recovered in the limit $\mu\rightarrow\infty$.
Next, one redefines some set of couplings such that the physically relevant
value is independent of the regulator. Then the regulator is smoothly removed
and the renormalized quantity remains finite. We say a model in QFT is
renormalizable if all of its S-matrix elements can be made finite with a
finite number of renormalized parameters. Even in a renormalizable model,
vacuum energy density can only be regularized, but not fully renormalized.
Since vacuum energy density is not a renormalizable quantity and plays no role
in the empirical success of the standard model, [17] argued that one should
not treat any regulator-dependent value as a valid candidate prediction.
If, instead of predicting a value for $\langle\rho\rangle$, we simply expect
the standard model to accommodate it as empirical input, the failure of
naturalness prevents even this weakened desideratum from being met. In quantum electrodynamics
(QED), for example, the electron mass and charge are renormalized to make the
theory predictive. The theory takes these quantities as empirical inputs and
therefore does not predict their values. Nevertheless, mass and charge are
physically significant quantities in QED, necessary to the empirical success
of the theory as a whole. Unfortunately, $\langle\rho\rangle$ cannot be input
as an empirical parameter in the same way, due to its radiative instability
order by order in perturbation theory. Further, since it plays no role in the
empirical success of the standard model, there is little reason for
$\langle\rho\rangle$ to play a central role analogous to mass and charge.
Thus, if QFTs don’t predict its value, it is best to understand vacuum energy
density as outside their domain, and therefore not physically significant to
QFT. ([19] provides a more sustained argument that the cosmological constant
problem signals a failure of naturalness for vacuum energy, in QFT and in
general relativity as an EFT. The solution proposed there is to embrace new
heuristics in theory construction, and to accept the limitations of the EFT
framework for understanding fundamental physics.)
In light of the EFT view of the standard model, full renormalizability loses
importance. If the standard model is an EFT, then (under the standard
interpretation) it comes equipped with a physically significant cutoff scale
and an infinite set of coupling constants consistent with the symmetries of
the fields. The new couplings with mass dimension greater than four (in four
dimensional spacetime) will be nonrenormalizable, but will have coupling
constants that are suppressed by the momentum cutoff:
$\alpha_{i}=g_{i}/\mu^{n}$. The explicit presence of the regulator in these
terms is not a problem, since the regulator $\mu$ is much larger than the
energy scales for which the effective theory is used. The renormalization
group flow indicates that, at energies $E\ll\mu$, only the renormalizable
terms have any appreciable effect. However, at higher energies, one may indeed
see small deviations from the purely renormalizable terms, and these may be
due to higher-order terms. Therefore, suitably regularized, nonrenormalizable
terms can be physically significant when suppressed appropriately by a
regulator. (Using precision tests of the standard model, one may find
deviations from the predictions made using only the renormalizable terms.
Examples of possible experimental tests include the anomalous magnetic moment
of the electron or muon [2, 18, 3] as well as the fine structure of
positronium and muonium [12]. In all of these cases, small deviations from the
predictions made using the renormalizable standard model may be accounted for
with higher-order couplings, suppressed by the physical cutoff scale.)
Renormalizability is no longer a requirement, so long as the effects of
nonrenormalizable terms become negligible at low energies.
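The scaling behind this claim is easy to see numerically. In the sketch below (with an assumed, hypothetical cutoff of $\mu=10$ TeV; the function name is mine), the relative contribution of a nonrenormalizable operator of mass dimension $4+n$ at probe energy $E$ goes as $(E/\mu)^{n}$:

```python
def suppression(E, mu, n):
    """Relative size (E/mu)**n of a dimension-(4+n) operator's contribution."""
    return (E / mu) ** n

mu = 10_000.0  # GeV; assumed cutoff scale, for illustration only
for E in (1.0, 100.0, 1000.0):       # probe energies in GeV
    print(E, suppression(E, mu, 2))  # a dimension-6 operator, n = 2
```

At $E=1$ GeV the dimension-6 term is suppressed by $10^{-8}$, which is why only the renormalizable terms matter at low energies, while at $E=1$ TeV the roughly $10^{-2}$ suppression can show up as the small deviations mentioned above.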
If a suitably regularized vacuum energy density meets the requirements of a
prediction in the EFT framework, then perhaps one is justified in claiming
that the standard model predicts its value. There exist several regularization
schemes for QFTs, and in general these will not agree on the algebraic form
for any quantities until the renormalization procedure has been completed.
Inspired by the EFT approach, and under the view that regulators are
arbitrary, a suitable weakening of the requirement of renormalizability must
satisfy the following requirements:
Minimal Requirements for Candidate Predictions: In order for a quantity within
a model of QFT to count as a candidate prediction of some corresponding
physical quantity, it must be the case that: (1) the quantity is largely
insensitive to the regularization procedure; and (2) it is largely insensitive
to changes to the value of the regulator.
These requirements are motivated as follows. Violation of (1) would entail
that different regularization schemes might be physically meaningful in that
they encode different ways of parameterizing/forgetting high energy effects,
and that for the quantity in question these differences matter. Supposing one
views regularization schemes in this way, we learn that the quantity in
question is sensitive to the physics at high-energies, and therefore does not
fall within the proper scope of the EFT. Under the alternative view of
regularization—as a formal tool used to render formally divergent terms
finite—the independence of the predicted quantity from regularization scheme
follows naturally. Under either interpretation, for an EFT to predict some
quantity, it must satisfy (1).
Even though an EFT comes equipped with a physically significant cutoff energy
scale, an important feature relevant to making predictions with EFTs is that
the low-energy physics is largely insensitive to the exact value of that
cutoff scale. In the context of the standard model, we are ignorant of the
exact scale at which it breaks down. Any “predictions” from within the
standard model that violate (2) are not true predictions at all; instead, they
signify either that the quantity is meaningless when restricted to the low-
energy EFT, or that it is highly sensitive to the details of the high-energy
theory. In either case, one cannot say that the EFT predicts its value. Under
the standard interpretation of EFTs, violation of (2) would signal that the
EFT is insufficient to understand the phenomena in question. I will argue that
the standard model $\langle\rho\rangle$ violates both minimal requirements,
and this is best understood in the context of EFTs.
Physically significant quantities in a theory must be consistently described
by that theory; if the standard model cannot provide a univocal, reasonable
candidate prediction for the expectation value of vacuum energy density, then
that failure is evidence that $\langle\rho\rangle$ is not physically
significant in the standard model. (By physical significance of vacuum energy
density, I mean the inference from a vacuum expectation value of an energy
density term within a model of QFT to a real physical quantity onto which that
value maps. One can believe that there is some real physical quantity, a
suitably averaged value of vacuum energy density, to which our best physical
theories don't accurately map (cf. [27]). The arguments in this paper
undermine taking values from QFT to map onto the world; they say nothing about
whether vacuum energy density exists.) Undermining the physical significance
of vacuum energy density for QFTs means that we should not trust our best
QFTs to accurately capture the relevant physics. Continuing the process
discussed in [25], a further revision of the vacuum concept in QFT may be
required, or perhaps even a full theory of quantum gravity. Borrowing a common
example from classical fluid mechanics in [29], we know that EFTs cannot
predict all possible quantities relevant to the low-energy, macroscopic
physics. In fluid mechanics, the formation of droplets and shock waves depends
on the microphysical details of the fluid. We cannot use the effective theory
of fluid mechanics to predict such behaviour, as the separability of scales
breaks down. The underlying microphysical theory is then needed. Droplet
formation and shock waves are physically real phenomena described by the
microphysics, though fluid mechanics fails to describe them. I claim that the
vacuum energy density $\langle\rho\rangle$ is a similar quantity that falls
outside the domain of QFT. Vacuum energy may be a physically real phenomenon,
and some future theory may describe it, but it is beyond the scope of our best
QFTs. The EFT framework helps to make this point more salient, because EFTs
are explicitly meant to be limited in scope of applicability. The failure of
$\langle\rho\rangle$ to satisfy either Minimal Requirement excludes it as a
candidate for physical significance in QFT. Thus we should think of the
cosmological constant problem as highlighting one limitation of our current
best EFT. Since we are currently ignorant of the underlying microphysical
theory to which the standard model is effective, there is little we can say
about vacuum energy at present. In a separate paper [19] I provide more
general arguments that would lead one to a similar conclusion, and extend to
the semiclassical merging of QFT and general relativity. My goal here is to
show that, from within QFT as an EFT, $\langle\rho\rangle$ fails to meet the
Minimal Requirements for a candidate prediction, and vacuum energy is
therefore ill-defined until the future microphysical theory is known.
Though this conclusion is easiest to see within the EFT framework, the
argument extends to QFT more broadly. [17] provides arguments for this
conclusion in the context of the standard model as a fully renormalizable
standalone QFT, and in Section 3 I argue that more rigorous extensions of QFT
eliminate $\langle\rho\rangle$ from their conceptual framework, thereby
supporting the conclusion that vacuum energy falls outside the domain of QFT,
in any of its guises.
The strategy for the remainder of the paper is as follows. I provide a
conceptual outline of two major regularization and renormalization procedures
that one might apply to extract a finite prediction of $\langle\rho\rangle$
from models of QFT, and discuss ways in which vacuum energy is removed in more
rigorous local formulations of QFT. In Sec. 2 I consider the mainstream
approaches to regularizing the standard model: lattice regularization and
dimensional regularization. In Sec. 3 I consider some more mathematically
rigorous approaches to QFT, and the ways that regularization and
renormalization are treated there. In each case, I arrive at a value of
$\langle\rho\rangle$ derived using that regularization scheme. Finally, in
Sec. 4, I compare the results to see if they satisfy the above requirements.
As I will show below, purely regularized values of $\langle\rho\rangle$
satisfy neither Minimal Requirement, and we have no reason to accept a one-
loop renormalized quantity as a candidate prediction either. Further, rigorous
extensions of QFT that aim to provide a local description of fields remove the
quantity $\langle\rho\rangle$ entirely, suggesting that vacuum energy falls
outside the scope of QFT and any merger of QFT and general relativity that
emphasizes local covariance.
## 2 Orthodox regularization of $\langle\rho\rangle$
Standard cutoff regularization schemes in QFT require the inclusion of two
momentum cutoffs: a lower bound to regulate the infrared divergences, and an
upper bound to regulate the ultraviolet divergences. In position space, this
is equivalent to defining the theory on a four-dimensional lattice in a box.
Under the orthodox reading of EFT, the upper bound gains physical significance
as the scale at which the effective theory breaks down. (The lower bound may
be interpreted as encoding the fact that QFTs are only used in local regions
of spacetime. Imposing some set of boundary conditions for long distances just
means that we don't expect the model to apply in all of spacetime.) This view
has recently been criticized [23], but is the dominant view of particle
physicists and is becoming more mainstream amongst philosophers [28, 31, 10].
Below (Sec. 2.1) I will outline the textbook approach to cutoff regularization
in more detail, and discuss the modifications made to this formalism by the
EFT view.
Historically, dimensional regularization was the favoured scheme for
renormalizing Yang-Mills gauge models of QFT, like the electroweak model and
quantum chromodynamics. Though it has received less philosophical attention
due to its more formal nature, dimensional regularization is a powerful tool,
and one that maintains Lorentz invariance. If one hopes to have a regularized
candidate prediction of the vacuum energy density from the standard model, it
should obey the correct equation of state that is required by the cosmological
constant. Dimensional regularization gives this equation of state and Lorentz
invariance, and the one-loop renormalized value
$\langle\tilde{\rho}_{dim}\rangle$ (Eq. (15)) calculated using dimensional
regularization thus provides the best claim to a prediction of vacuum energy
density from within the standard model. Thus, if any orthodox quantity serves
as a candidate prediction for vacuum energy density, it is
$\langle\tilde{\rho}_{dim}\rangle$. However, the instability of a one-loop
renormalized vacuum energy density under radiative corrections indicates that
naturalness fails here, and that vacuum energy may be sensitive to the details
of high-energy physics.
### 2.1 Momentum cutoffs and effective field theory
For simplicity, I will illustrate the regularization techniques using a free
scalar field theory, whose action is
$S[\phi,J]=-\int
d^{4}x\left(\frac{\eta^{\mu\nu}}{2}\partial_{\mu}\phi(x)\partial_{\nu}\phi(x)+\frac{m^{2}}{2}\phi^{2}(x)+J(x)\phi(x)\right),$
(1)
with $\eta_{\mu\nu}$ the Minkowski metric (here written with a $(-,+,+,+)$
signature), and the expression inside the integral is the Lagrangian density
$\mathcal{L}$ for the model, plus source term $J(x)\phi(x)$. One can define a
particular model of QFT with a built-in set of cutoffs, or one can impose
cutoffs on individual expressions as the need arises. The former accords more
closely with the EFT view, while the latter was standard in the early history
of quantum electrodynamics, and remains standard in most introductory texts.
Under the latter view, cutoffs are imposed in order to regulate divergences,
and are removed from the renormalized theory. (For a more detailed analysis
of the differences between the two approaches to renormalization, see [32,
22]. The latter argues that EFTs are best understood strictly under cutoff
regularization. However, as I show below for the vacuum energy density, many
features of QFTs are most easily understood under dimensional regularization.)
We start with the latter approach to illustrate the algebraic form for
expectation values of energy density and pressure.
In the case of calculating the energy density associated with the vacuum
state, we are looking for the vacuum expectation value of the Hamiltonian
density. In the case of the free scalar model, this is
$\langle\rho\rangle=\bra{0}\mathcal{H}\ket{0}=\frac{1}{2}\bra{0}\left((\partial_{t}\phi)^{2}+\delta^{ij}\partial_{i}\phi\partial_{j}\phi+m^{2}\phi^{2}\right)\ket{0}.$
(2)
Using the Fourier expansion of $\phi$, one can calculate this to be (cf. [20, Sec. IV.A, Eq. (68)])
$\langle\rho\rangle=\frac{1}{2(2\pi)^{3}}\int
d^{3}\mathbf{k}\omega_{\mathbf{k}},$ (3)
which diverges as $k^{4}$ for large $k$. Similarly, the pressure associated
with the vacuum energy is
$\langle p\rangle=\frac{1}{6(2\pi)^{3}}\int
d^{3}\mathbf{k}\frac{k^{2}}{\omega_{\mathbf{k}}}.$ (4)
This is where one can regularize by introducing a momentum cutoff $\mu$, above
which one no longer integrates. Doing so, one obtains the following
expressions for the energy density and pressure:
$\displaystyle\langle\rho\rangle$
$\displaystyle=\frac{\mu^{4}}{16\pi^{2}}\left[\sqrt{1+\frac{m^{2}}{\mu^{2}}}\left(1+\frac{m^{2}}{2\mu^{2}}\right)-\frac{m^{4}}{2\mu^{4}}\ln\left(\frac{\mu}{m}+\frac{\mu}{m}\sqrt{1+\frac{m^{2}}{\mu^{2}}}\right)\right],$
(5) $\displaystyle\langle p\rangle$
$\displaystyle=\frac{\mu^{4}}{48\pi^{2}}\left[\sqrt{1+\frac{m^{2}}{\mu^{2}}}\left(1-\frac{3m^{2}}{2\mu^{2}}\right)+\frac{3m^{4}}{2\mu^{4}}\ln\left(\frac{\mu}{m}+\frac{\mu}{m}\sqrt{1+\frac{m^{2}}{\mu^{2}}}\right)\right].$
(6)
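As a numerical sanity check (mine, not part of the original derivation; the function names are illustrative), the closed forms in Eqs. (5) and (6) can be verified against a direct numerical integration of Eq. (3) with the hard cutoff $\mu$:

```python
import math

def rho_closed(mu, m):
    # Eq. (5): cutoff-regularized vacuum energy density
    r2 = m**2 / mu**2
    log_term = math.log((mu / m) * (1 + math.sqrt(1 + r2)))
    return mu**4 / (16 * math.pi**2) * (
        math.sqrt(1 + r2) * (1 + r2 / 2) - (r2**2 / 2) * log_term
    )

def p_closed(mu, m):
    # Eq. (6): cutoff-regularized vacuum pressure
    r2 = m**2 / mu**2
    log_term = math.log((mu / m) * (1 + math.sqrt(1 + r2)))
    return mu**4 / (48 * math.pi**2) * (
        math.sqrt(1 + r2) * (1 - 3 * r2 / 2) + (3 * r2**2 / 2) * log_term
    )

def rho_numeric(mu, m, n=200_000):
    # Eq. (3) with a hard cutoff: (1/4 pi^2) * integral_0^mu k^2 omega_k dk
    h = mu / n
    f = [k * h and (k * h)**2 * math.sqrt((k * h)**2 + m**2) or 0.0
         for k in range(n + 1)]
    integral = h * (sum(f) - 0.5 * (f[0] + f[-1]))  # trapezoid rule
    return integral / (4 * math.pi**2)
```

For $\mu\gg m$ the ratio $\langle p\rangle/\langle\rho\rangle$ computed this way approaches $+1/3$, a radiation-like equation of state rather than the $-1$ required of a cosmological constant.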
There are two things to note here. First, to leading order, both regularized
terms depend on the cutoff scale to the fourth power. This regularization is
therefore highly sensitive to what one takes as the cutoff scale, violating
Reasonable Requirement (2). Under the old approach, one could renormalize
$\langle\rho\rangle$ by introducing counterterms to remove any
$\mu$-dependence. Unfortunately, the renormalized term does not carry over in
a straightforward way to a field theory with interactions. Though one could
simply define $\langle\rho_{physical}\rangle\equiv 0$ by subtracting off the
entirety of the “bare” prediction, such a procedure is not stable against
higher order quantum corrections. This holds true whether one subtracts off
the entire prediction, or just the leading order divergent terms. In
interacting theories, such as the scalar $\lambda\phi^{4}$ theory, the
coupling between vacuum and gravity will contain contributions proportional to
$\lambda$, $\lambda^{2}$, $\lambda^{3}$ and so on. If one defines
$\langle\rho_{physical}\rangle$ to be independent of the cutoff scale at order
$\lambda$, then equally large ($\sim\mu^{4}$) contributions spoil this
cancellation at order $\lambda^{2}$, and so on for higher orders. So the value
of $\langle\rho\rangle$ in Eq. (5) cannot be fully renormalized, and as it
stands depends too sensitively on the (supposedly arbitrary) cutoff scale to
count as a prediction.
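The instability can be made concrete with a deliberately schematic toy model (the coefficients below are invented for illustration and carry no physical significance): subtracting the cutoff dependence at order $\lambda$ leaves a residual dependence of order $\lambda^{2}\mu^{4}$.

```python
def rho_bare(mu, lam):
    # toy model: every order in the coupling contributes ~ mu^4
    # (coefficients 1.0, 0.1, 0.01 are invented for illustration)
    return mu**4 * (1.0 + 0.1 * lam + 0.01 * lam**2)

def rho_subtracted(mu, lam, mu0=1.0):
    # remove the cutoff dependence through O(lambda), matched at scale mu0
    return rho_bare(mu, lam) - (mu**4 - mu0**4) * (1.0 + 0.1 * lam)
```

Varying the cutoff from $\mu_{0}$ to $\mu$ still shifts the "renormalized" value by $0.01\,\lambda^{2}(\mu^{4}-\mu_{0}^{4})$, so the order-$\lambda$ subtraction is spoiled at order $\lambda^{2}$, exactly the pattern described above.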
Second, notice that the ratio $\langle p\rangle/\langle\rho\rangle\neq-1$, as
one would expect from a Lorentz-invariant vacuum. This is because the cutoff
procedure is itself not Lorentz-invariant. In order to obtain a vacuum energy
density that respects the Lorentz symmetry and reproduces the equation of
state required by a cosmological constant term, one must subtract the leading
order $\mu^{4}$ terms in each, which is only justified in the context of
modified minimal subtraction schemes using dimensional regularization.
The above discussion is framed in the old-fashioned context of ad-hoc
regularization. What changes when we think of QFTs as EFTs, where the cutoff
plays a more direct role? In the EFT framework, a QFT is defined with a built-
in UV cutoff. To make the overall theory finite, an IR regulator is often
used, though this may be smoothly removed at the end of the calculation to
return to a continuum theory. I start with both regulators, which effectively
places the field theory on a Euclidean lattice, converting the action integral into a discrete sum and the functional measure over field configurations into a finite product. For 4D lattice spacing $a$, placing the model in a hypercube of side length $L$, the generating
functional becomes
$\displaystyle\mathcal{Z}[J]$
$\displaystyle=\displaystyle\int_{\mu}\mathcal{D}\phi\exp\left(i\int
d^{4}x[\mathcal{L}(\phi(x))+J(x)\phi(x)]\right)$ (7)
$\displaystyle\equiv\int\displaystyle\prod_{l=1}^{N}d\phi_{l}\>\exp\left(ia^{4}\sum_{l=1}^{N}[\mathcal{L}(\phi_{l})+J_{l}\phi_{l}]\right),$
(8)
where $N=(L/a)^{4}$ and $\mu=2\pi/a$. The quantities $a$ and $L$ are built-in
ultraviolet and infrared regulators. Once a set of fields is specified, along
with the expected symmetries of the model, the Lagrangian is defined to
include all terms involving the chosen fields and respecting the symmetries;
this means that the Lagrangian is likely to be a formally infinite sum of
terms, each multiplied by its own coupling constant. As initially stated, this
would be a major problem; though the path integral has been IR and UV
regulated, we now have an infinite number of terms in the Lagrangian. There is
no a priori reason to expect that the bare coupling parameters decrease for
higher-order field contributions, and thus no indication of an appropriate
truncation of terms in the Lagrangian.
However, one uses the renormalization group transformations to rewrite the
generating functional in terms of a new, lower ultraviolet cutoff
$\mu^{\prime}=\mu-\delta\mu$. One separates the integral over field
configurations
$\int_{\mu}\mathcal{D}\phi\rightarrow\int_{\mu^{\prime}}\mathcal{D}\phi_{\mu^{\prime}}\int_{\delta\mu}\mathcal{D}\phi_{\delta\mu}$,
and integrates out the field modes $\phi_{\delta\mu}$. The amazing feature of
the renormalization group is that, when one does this, the new expression for
the Lagrangian retains the same form. All of the effects of the field modes
above the new cutoff can be absorbed into a redefinition of the coupling
constants in the Lagrangian. Since coupling constants will be dimensionful
quantities (the Lagrangian has units of $[\mathrm{energy}^{4}]$, and scalar
fields have dimensions of energy) redefinitions of coupling involve powers of
the new cutoff scale. If the cutoff scale is large compared to energy levels
of interest for the effective theory, then higher-order terms in the
Lagrangian will be suppressed by the new coupling constants $g_{i}\rightarrow
g_{i}/(\mu^{\prime})^{n}$. In the limit where energy scales of interest are
vanishingly small compared to the cutoff, all terms with high powers of fields
and their derivatives will be suppressed by inverse powers of the cutoff.
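This suppression is pure dimensional analysis, and can be sketched in a few lines (an illustration of the scaling argument above, not a calculation from the text): an operator of mass dimension $d>4$ enters low-energy amplitudes with a factor $(E/\mu^{\prime})^{d-4}$.

```python
def suppression(E, cutoff, dim):
    # dimensional-analysis estimate: an operator of mass dimension `dim`
    # contributes to amplitudes at energy E suppressed by (E/cutoff)**(dim - 4)
    return (E / cutoff) ** (dim - 4)
```

With a cutoff a thousand times the energy of interest, a dimension-6 operator is suppressed by $10^{-6}$ and a dimension-8 operator by $10^{-12}$, while marginal (dimension-4) operators are unsuppressed.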
Though there is much more to be said about the renormalization group and EFT,
there are two major points relevant to the discussion of regularizing vacuum
energy. First, one defines a model in EFT with built-in regulators.
Renormalization is no longer a primary focus, since the renormalization group
techniques indicate the irrelevance of most nonrenormalizable terms. Since
regulators are present in the definition of the theory, one needn’t worry
about regulators appearing in predictions. As long as the predictions do not
depend sensitively on the precise value of the cutoff—since the value of the
physically meaningful cutoff is unknown until a future successor theory is
developed—its presence is not a problem in the EFT framework. Thus, the EFT
framework motivates Minimal Requirement (2) discussed in the Introduction.
However, the vacuum energy is still a problem, since it depends sensitively on
the cutoff—as mentioned above, $\langle\rho\rangle\sim\mu^{4}$. The problem of
renormalization changes dramatically under the EFT view, since the presence of
$\mu$ in Eq. (5) is not in itself a problem. The momentum cutoff is standardly
taken to have physical significance for the future successor theory; there is
therefore no reason to renormalize by subtracting the $\mu^{4}$ term, and so
even an illusory insensitivity to $\mu$ is lost.
Second, by defining models of QFT with a built in lattice scale, issues of
Lorentz invariance may lose importance. If the lattice is to be physically
significant, then Lorentz invariance of EFTs only holds approximately.
Accordingly, one would not expect the vacuum energy density to be exactly
Lorentz invariant, and so the concern regarding the wrong equation of state
from Eqs. (5) and (6) is less pressing. However, the failure of exact Lorentz
invariance would undermine the motivation to subtract off only the $\mu^{4}$
term for a one-loop renormalization, and it would be much harder to input the
vacuum energy density into the Einstein field equations. If the regularized density were input into the field equations as is, one would obtain an entirely different equation of state for the cosmological constant. Given that the EFT
framework is predicated on the idea that physics at disparate energy scales
separates, it would be curious if a consequence of that framework was that
small scale violations of Lorentz invariance implied qualitative changes to
physics on cosmological scales. In any case, failure of Lorentz invariance
would undermine the standard motivations for the cosmological constant
problem, though the presence of an enormous vacuum energy density for the
standard model would remain.999The fact that Lorentz invariance is lost if the
lattice structure of effective field theories is taken literally should have
observable consequences. Incredibly sensitive tests have failed to detect
violation of Lorentz invariance at small scales [21]. Though outside the scope
of this paper, one might argue that a literal interpretation of the lattice is
therefore unmotivated from the point of view of both QFTs and general
relativity.
### 2.2 Dimensional regularization
Dimensional regularization has historically played an important role in the
development of the standard model. [1] first proved that Yang-Mills gauge
models are renormalizable by developing and employing dimensional
regularization. The method is often more powerful, since the symmetries of a
model—both gauge symmetries and spacetime symmetries—remain intact. It allows
for an easier identification of divergences than the momentum cutoff approach,
and naturally suggests a minimal subtraction (or, alternatively, modified
minimal subtraction) method of renormalization. Finally, this method also
removes infrared divergences associated with massless fields without
introducing a further regulator. The disadvantage is that a physical
interpretation for the regulator is rather opaque; the method is more clearly
formal than the momentum cutoff approach.101010This is only a disadvantage if
one expects a regulator to be physically significant. If regularization is
treated simply as a procedure for taming divergences, then the regulators need
not have a physical significance. Further, if the analogy between lattice
regularization in condensed matter physics and particle physics is misleading,
then the physical interpretation that lattice regularization provides may
actually lead to an unjustified physical interpretation (cf. [8, 9]).
In the case of the vacuum energy density one aims to include its expectation
value in the Einstein field equations. It is therefore important to ensure
that the Lorentz symmetry of the expression is maintained—since it is this
feature of $\langle\rho\rangle$ that justifies its interpretation as a
contribution to the cosmological constant. Dimensional regularization is best
suited for this purpose. I will outline the regularization technique for
vacuum energy for a scalar field. As [20, Sec. VII] demonstrates, the
calculations for fermions and gauge bosons proceed in a similar fashion,
though the leading multiplicative coefficients (of $\mathcal{O}(1)$) differ.
The integral for energy density in Eq. (3), in $D$-dimensional spacetime
becomes
$\displaystyle\langle\rho\rangle$
$\displaystyle=\frac{\mu^{4-D}}{2(2\pi)^{D-1}}\int
d^{D-1}\mathbf{k}\>\omega_{\mathbf{k}}$ (9)
$\displaystyle=\frac{\mu^{4-D}}{2(2\pi)^{D-1}}\displaystyle\int_{0}^{\infty}dk\>d^{D-2}\Omega\>k^{D-2}\omega_{\mathbf{k}},$
(10)
where $d^{D-2}\Omega$ is the volume element of the $(D-2)$-sphere, and the
$\mu$ is an arbitrary scale factor such that the equation has the right unit
dimensions.111111I use $\mu$ as an arbitrary scale factor here because it
appears in the formal expression for $\langle\tilde{\rho}_{dim}\rangle$ in the
same way that the (arbitrary) momentum cutoff appears in the lattice
regularized expression. The fact that these scales have different meanings
supports my argument that these terms differ significantly. The same term for
the regulator is used simply to aid algebraic comparison. Using the fact that
the general solution of angular integrals can be expressed in terms of gamma
functions, the solution to this integral is
$\langle\rho\rangle=\frac{\mu^{4}}{2(4\pi)^{(D-1)/2}}\frac{\Gamma(-D/2)}{\Gamma(-1/2)}\left(\frac{m}{\mu}\right)^{D}.$
(11)
Performing the same operation for the pressure, one obtains
$\displaystyle\langle p\rangle$
$\displaystyle=\frac{\mu^{(4-D)}}{2(D-1)(2\pi)^{D-1}}\int
d^{D-1}k\>\frac{k^{2}}{\omega_{\mathbf{k}}}$ (12)
$\displaystyle=\frac{\mu^{4}}{4(4\pi)^{(D-1)/2}}\frac{\Gamma(-D/2)}{\Gamma(1/2)}\left(\frac{m}{\mu}\right)^{D}.$
(13)
Since $\Gamma(-1/2)=-2\Gamma(1/2)$, we obtain the correct equation of state,
$\langle p\rangle/\langle\rho\rangle=-1$. If one expands the gamma functions
in the above expressions, and sets $D=4-\epsilon$, then the regularized
$\langle\rho\rangle$ and a one-loop renormalized expression
$\langle\tilde{\rho}_{dim}\rangle$ are
$\displaystyle\langle\rho\rangle$
$\displaystyle\approx-\frac{m^{4}}{64\pi^{2}}\left(\frac{2}{\epsilon}+\frac{3}{2}-\gamma-\ln\left[\frac{m^{2}}{4\pi\mu^{2}}\right]\right)+\cdots$
(14) $\displaystyle\langle\tilde{\rho}_{dim}\rangle$
$\displaystyle=\frac{m^{4}}{64\pi^{2}}\ln\left(\frac{m^{2}}{\mu^{2}}\right),$
(15)
where $\gamma\approx 0.5772$ is the Euler-Mascheroni constant (cf. [20,
IV.A], renormalized using modified minimal subtraction).121212This is a first-
order renormalized calculation. As [20, Sec. VI] highlights, this prediction
is largely unchanged under a Gaussian approximation to an interaction term
(i.e., to one loop). Since the expression remains the same, I refer to Eq.
(15) as a one-loop renormalized term. This expression actually agrees (up to
constants of $\mathcal{O}(1)$) with the leading order logarithmic term
predicted using the momentum cutoff approach in Eq. 5, after subtraction of
the $\mu^{4}$ term. [20] notes that “it is well-known that the dimensional
regularization scheme removes the power law terms,” (p. 13) so this is not a
surprising result. Like in the case of Yang-Mills gauge models, dimensional
regularization leaves the underlying symmetries of the model intact, and leads
to a correct regularization that respects those symmetries. We see that,
instead of a functional dependence on the fourth power of the cutoff, the
vacuum energy density for a given field depends on the fourth power of the
mass of that field. This means that massless fields (photons, gluons) do not
contribute to the dimensionally regularized or renormalized vacuum energy, at
least to leading order.
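Both the equation-of-state identity and the scale dependence of Eq. (15) are easy to check numerically (a sketch; the function names are mine). In the ratio of Eq. (13) to Eq. (11) every $D$-dependent factor cancels, leaving $\tfrac{1}{2}\,\Gamma(-1/2)/\Gamma(1/2)=-1$.

```python
import math

# Equation of state: the D-dependent factors cancel in p/rho, leaving
# (1/2) * Gamma(-1/2) / Gamma(1/2), which equals -1 since Gamma(-1/2) = -2 Gamma(1/2).
eos_ratio = 0.5 * math.gamma(-0.5) / math.gamma(0.5)

def rho_one_loop(m, mu):
    # Eq. (15): one-loop renormalized vacuum energy density (modified minimal subtraction)
    return m**4 / (64 * math.pi**2) * math.log(m**2 / mu**2)
```

The logarithm makes Eq. (15) only weakly sensitive to the scale: lowering $\mu$ by a factor of 10 shifts $\langle\tilde{\rho}_{dim}\rangle$ additively by $m^{4}\ln(100)/(64\pi^{2})$, rather than by powers of the regulator scale.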
It turns out that fermion fields and boson fields share this functional
dependence, though each contains a numerical factor $n_{i}$ to multiply
$\langle\rho_{ren}\rangle$. For the Higgs scalar, $n_{H}=1$; for fermions,
$n_{F}=-4$; for bosons, $n_{B}=3$. [20, Sec. IX, Eq. (516)] determines
the vacuum energy density coming from vacuum fluctuations (ignoring early
universe phase transitions) to be
$\langle\rho_{SM}\rangle=\sum\langle\tilde{\rho}_{dim}\rangle=-2\times
10^{8}GeV^{4},$ (16)
assuming a scale factor $\mu\approx 3\times 10^{-25}GeV$, though the
prediction is relatively insensitive to the exact value of $\mu$. This
therefore seems like an impressive renormalization and prediction of the
vacuum energy from the standard model. Since modified minimal subtraction is a
natural procedure for dimensional regularization, the renormalization method
is also justified. However, this term is renormalized to one loop; radiative
instability will spoil renormalization at higher orders, and thus naturalness
fails here as it does for lattice regularization. In general, the
contributions from next-to-leading order for
$\langle\tilde{\rho}_{dim}\rangle$ will be large enough to spoil the
renormalization performed at leading order. The functional form of Eq. (15)
hides the high sensitivity to the regulator that appears at higher orders.
If we treat the standard model as an EFT, we may be justified in trusting
predictions of some quantities only up to one-loop. As an example, the Fermi
theory of weak interactions is now known to be an effective approximation to
the electroweak model, valid for energies far less than the mass of W and Z
bosons.131313This example is discussed in more detail in Sec. 4. The Fermi
theory is well-behaved up to one-loop, but is nonrenormalizable and badly
divergent beyond this scale. The difference with the vacuum energy density is
that $\langle\rho\rangle$ displays the same types of nonrenormalizable
divergence at every order, while more severe divergences occur in the Fermi
theory only at higher order than the one-loop terms.
The proper focus of our attention should therefore be the regularized term
(Eq. (14)). As should be obvious by inspection, this value displays a
sensitive dependence on the regulator $\epsilon$, and differs markedly from
the lattice regularized quantity (Eq. (5)). Thus $\langle\rho\rangle$ fails to
satisfy either Minimal Requirement under orthodox approaches. One might argue
that this failure is worse in the EFT framework, since EFTs are explicitly
constructed to exclude contributions from certain energy scales. In the next
section, I use more rigorous extensions of standard QFT to show that, even
outside of the EFT framework, one should not expect QFTs to describe vacuum
energy.
## 3 Splitting hairs: splitting points
Outside of the mainstream work in QFT and particle physics, there has been
persistent effort to place the QFT formalism on more secure mathematical
footing. One major goal of this work is to be clear about the validity of
assumptions and algebraic manipulations standardly employed in particle
physics. Point-splitting procedures are used to track more carefully the ways
in which quantum fields—as operator-valued distributions—are multiplied
together at coincident points. The project of doing QFT on curved spacetimes
likewise demands a re-examination of the assumptions that go into constructing
QFTs in Minkowski spacetime. In this section I discuss the Epstein-Glaser
point-splitting procedure as a candidate regularization scheme, and consider
the modifications needed to put QFT on curved spacetimes, a project largely
pursued by Hollands and Wald. The modifications necessary indicate that
Minkowski spacetime is particularly special, and that significant alterations
to QFT may be needed even for a semiclassical merging with general relativity.
If one hopes for an extension of QFT beyond the EFT framework, approaches like
these are a likely first step. We see in both approaches that the vacuum
energy concept does not arise, indicating that $\langle\rho\rangle$ is not a
meaningful concept in QFT as a whole.
### 3.1 Minkowski background
Point splitting and other local approaches to regularization stem from Wilson’s early work on the operator product expansion [33], which is a formalism
for defining products of operator-valued distributions at coincident points.
Since we are concerned here with short distance behaviour of fields, the work
in this tradition uses the position space representation of quantum fields. In
ordinary approaches to QFT, distributions are not carefully handled, and this
leads to divergences in products of operators at the same point. Wilson
originally proposed an ansatz that two operators $A$ and $B$ defined at
coincident points should be described by
$A(x)B(x)=\displaystyle\lim_{\chi\rightarrow
0}A(x+\chi/2)B(x-\chi/2)=\lim_{\chi\rightarrow
0}\sum_{i=1}^{n}c_{i}(\chi,x)C_{i}(x)+D(x,\chi),$ (17)
with $C_{i}(x)$, $D(x,\chi)$ local operators without divergences, and
$c_{i}(\chi,x)$ coefficients that diverge in the limit $\chi\rightarrow 0$.
The original operator product is then replaced with the regularized product
$\left[A(x+\chi/2)B(x+\chi/2)-\displaystyle\sum_{i=1}^{n}c_{i}(\chi,x)C_{i}(x)\right]/c_{n}(\chi,x),$
(18)
which goes to zero as $\chi$ goes to zero.
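A toy numerical illustration (mine, not from the source) of the subtraction in Eq. (18): take a short-distance expansion with a single divergent coefficient $c_{1}(\chi)=1/\chi$; subtracting $c_{1}C_{1}$ and dividing by the leading divergent coefficient leaves a quantity that vanishes as $\chi\to 0$.

```python
def toy_product(chi):
    # toy "operator product": a divergent 1/chi piece plus regular terms
    return 1.0 / chi + 2.0 + 3.0 * chi

def regularized(chi):
    # Eq. (18)-style subtraction: remove c1*C1, then divide by the divergent c1
    c1 = 1.0 / chi   # divergent OPE coefficient
    C1 = 1.0         # finite local operator (toy value)
    return (toy_product(chi) - c1 * C1) / c1
```

Here the regularized product equals $(2+3\chi)\chi$, which goes smoothly to zero at coincident points even though the bare product diverges.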
Further work on the general properties of products of distributions—as
mathematical physicists came to understand that quantum fields are operator-
valued distributions—led to the Epstein-Glaser approach to regularizing and
renormalizing QFTs. The conceptual move here involves switching focus from
products of observables in neighbouring points to the products of fields at
coincident points.
[7] proved—through more careful analysis of the properties of the
S-matrix—that a renormalized perturbation theory could still obey
microcausality and unitarity. Though a more mathematically technical and
indirect regularization method, this approach tames many UV divergences
present in QFT, and therefore accomplishes renormalization in a similar way.
Essentially the n-point functions must be appropriately smeared with test
functions $f(x_{1},\ldots,x_{n})\equiv f(x)$. Infrared divergences are dealt
with by carefully removing the test functions in observable quantities; one
takes the adiabatic limit $f(x)\rightarrow 1$ after constructing appropriate
integrals.
Instead of treating point-splitting as a more mathematically elegant form of
renormalization, [26, 3.1,3.2] takes the causality condition for distributions
to point to the correct method for defining the n-point distributions
$T_{n}(x_{1},\ldots,x_{n})$ when the set of $\\{T_{m}|\;1\leq m\leq n-1\\}$
are known.141414The treatment of point-splitting in this section follows the
presentation in [26, Ch. 3]. These n-point distributions are related to the
perturbative construction of the S-matrix as follows:
$S(f)=\mathbf{1}+\displaystyle\sum_{n=1}^{\infty}\frac{1}{n!}\int
d^{4}x_{1}\cdots d^{4}x_{n}T_{n}(x_{1},\ldots,x_{n})f(x_{1})\cdots f(x_{n}),$
(19)
where $f$ is a complex-valued test function, and where the limit $f\rightarrow
1$ is taken at the end of the calculation. The causality condition is applied
to the test functions as follows. Suppose there exists a reference frame in
which $f_{1}$ and $f_{2}$ have disjoint supports in time,
$supp\>f_{1}\subset\\{x\in\mathbb{M}\>|\>x^{0}\in(-\infty,r)\\}\quad
supp\>f_{2}\subset\\{x\in\mathbb{M}\>|\>x^{0}\in(r,\infty)\\}.$ (20)
Then the causality condition is the requirement that
$S(f_{1}+f_{2})=S(f_{2})S(f_{1})$.
The $T_{n}(x_{1},\ldots,x_{n})$—operator-valued distributions—are constructed
by induction. One simplifies the procedure by decomposing the $T_{n}$ into
(normal-ordered) free fields and complex number-valued distributions
$T_{n}(x_{1},\ldots,x_{n})=\displaystyle\sum_{k}:\prod_{j}\bar{\psi}(x_{j})t_{n}^{k}(x_{1},\ldots,x_{n})\prod_{l}\psi(x_{l})::\prod_{m}A(x_{m}):,$
(21)
where $t_{n}^{k}$ is the momentum space numerical distribution. Now the
problem switches from defining an appropriate splitting procedure for the
$T_{n}$, to the simpler problem of defining a splitting procedure for the
$t_{n}^{k}$. The usual procedure—in standard versions of interacting
QFT—involves splitting with a series of $\Theta$ functions for each
$x_{i}\in\\{x_{n}\\}$, but this is discontinuous at $x_{i}=0$. If $t_{n}^{k}$
is singular for some $x_{i}=0$, then the product is not well defined, and UV
divergences appear. Instead, one introduces the concept of a scaling dimension
$\omega$, signalling the degree of divergence for the distribution. This
scaling dimension carries over to momentum space representations as well.
For QED in momentum space, distributions properly split have a series of free
parameters, being defined only up to a polynomial of rank $\omega$.151515It is
possible that $\omega$ will not be an integer for some distributions, though
this does not occur in QED. When $\omega$ is not an integer, the polynomial
will be rank $\omega^{\prime}$, the largest integer that is less than
$\omega$. The “regularized” distributions therefore take the form
$t(p)=t^{\prime}(p)+\displaystyle\sum_{|a|=0}^{\omega}C_{a}p^{a}$ (22)
after splitting, where $t^{\prime}(p)$ is defined by the causality condition.
The free parameters $\\{C_{a}\\}$ can be fixed by an appropriate choice of
regulator on the distribution, and this is why, for all practical purposes,
the causality condition is a more mathematically rigorous way to introduce
regulators into the theory. Though no UV divergent terms appear within this
formalism, one still has to introduce arbitrary parameters to regularize the
otherwise ill-defined distributions. Regarding the Minkowski vacuum energy,
one can see from Eq. (21) that the distributions are expanded in terms of
normal-ordered free fields, which implies a vanishing vacuum energy density,
regardless of the particular choice of renormalization of the distributions.
The normal ordering here may be thought of as removing $\langle\rho\rangle$ by
fiat. In light of its irrelevance to flat space calculations in QFT, and its
apparent sensitivity to high-energy physics, we should not be surprised that a
rigorous construction of QFT would consciously exclude vacuum energy.
### 3.2 QFT on curved spacetimes
Instead of altering the conceptual foundations of general relativity to fit
particle physics, some physicists have instead attempted to formulate QFTs on
a classical curved spacetime background. This provides a different “first
step” to unifying the two disciplines. One advantage to this approach is that
it is on much more sound mathematical footing than standard treatments of QFT.
The clarity that comes with mathematical rigour helps for understanding the
nature of assumptions that are needed for defining products of quantum fields.
In particular, careful attention should be paid to the splitting procedures
used for defining time-ordered products of operators. The downfall of such
rigour, however, is that realistic interactions cannot yet be formulated fully
as models of the axioms. A mix of methodology is therefore the clearest way
forward.
As discussed in the previous section, point-splitting procedures have been
successfully employed in the construction of quantum electrodynamics, and more
local modifications are currently used for generalizing QFT to generically
curved spacetimes. Many people are working on defining QFTs in curved
spacetimes, but the most demanding requirements of locality come from the work
of Hollands and Wald
(Hollands and Wald 2001, 2002, 2008, 2010). A key
procedure in their construction of local, covariant time-ordered products is a
modified version of the Epstein-Glaser point splitting prescription.
The Epstein Glaser approach to defining operator-valued distributions is more
local than the standard momentum space cutoff approaches, in that it can be
done in small neighbourhoods of coordinates in position space. Hollands and
Wald note, however, that
> the Epstein-Glaser method is not local in a strong enough sense for our
> purposes, since we need to ensure that the renormalized time ordered
> products will be local, covariant fields. A key step in the Epstein-Glaser
> regularization procedure is the introduction of certain “cutoff functions”
> of compact support in the “relative coordinates” that equal 1 in a
> neighborhood of [coincident points…These] will not depend only on the
> metric in an arbitrary small neighborhood of $p$ and, thus, will not depend
> locally and covariantly on the metric in the sense required by condition t1
> [of locality and general covariance]. There does not appear to be any
> straightforward way of modifying the Epstein-Glaser regularization procedure
> so that the resulting extension […] will satisfy property t1. In particular,
> serious convergence difficulties arise if one attempts to shrink the support
> of the cutoff functions (Hollands and Wald 2002, p. 322).
Since they aim to define quantum fields on generic globally hyperbolic
spacetimes, Hollands and Wald aim to respect the restrictions imposed by the
general covariance of general relativity, and therefore to define time-ordered
products only in terms of local neighbourhoods of points in the spacetime.
Their strategy is to use the equivalence principle to note that the
neighbourhood of a point in a generically curved spacetime looks “flat” to
leading order:
> Although it is true that the leading order divergences […] will be
> essentially the same as in flat spacetime, in general there will be sub-
> leading-order divergences that are sensitive to the presence of curvature
> and are different from the divergences occurring for the corresponding
> [condition] in flat spacetime. Nevertheless, we [show] that any local,
> covariant distribution that satisfies our scaling, smoothness, and
> analyticity conditions admits a “scaling expansion” about [coincident
> points]. This expansion expresses […] as a finite sum of terms plus a
> remainder term with the properties that (i) each term in the finite sum is a
> product of a curvature term times a distribution in the relative coordinates
> that corresponds to a Lorentz invariant distribution in Minkowski spacetime
> […] and (ii) the remainder term admits a unique, natural extension to the
> [coincident limit] (p. 323).
This results in a specific form of the operator product expansion discussed
above, where one first defines a short distant expansion of the c-number
distribution, and uses that in the overall definition of the local covariant
field operators. Due to the lack of symmetries in generically curved
spacetimes, QFTs cannot generically rely on the concepts of large scale
Lorentz covariance, a well-defined frequency splitting procedure, or a
privileged, Lorentz-invariant vacuum state. In the generic case of QFT on a
classical spacetime background, then, one must depend only on the highly local
properties of the fields, defined with respect to the spacetime metric in a
generally covariant manner. In this case, since there is no globally defined
Lorentz-invariant vacuum state, there is no issue of regularizing vacuum
energy in the standard way. In a later essay, [15] argue that a definition of
QFTs in terms of the operator product expansion coefficients—when placed in
appropriately symmetric spacetimes required to define a unique vacuum
state—will have nonzero vacuum expectation values. They speculate that
nonperturbative effects for interacting, non-Abelian QFTs may lead to
vanishingly small residue terms in the stress-energy vacuum expectation value,
which could explain the observed value of the cosmological constant [15].
Given the current state of defining QFTs on curved spacetimes, however, vacuum
expectation values play an unimportant role, and vacuum energy is only
renormalized to first order, depending on a free parameter as in the case of
dimensional regularization (cf. [15, Eq. (9)]). Certainly, the concept of a
globally well-defined, position-invariant vacuum energy density does not fit
with this framework.
## 4 Conclusions: Does QFT predict the value of the vacuum energy?
Since vacuum energy is not fully renormalizable, the “old-fashioned” view of
QFTs—as only well-defined if renormalizable—would lead one to believe that the
vacuum expectation value of energy is an ill-defined concept in this
framework.161616Technically, old demands of renormalizability were imposed on
the S-matrix of a model of QFT, believed to encode all physically meaningful
content of scattering amplitudes and other dynamics [6, 1]. The QFTs
comprising the standard model of particle physics are all renormalizable,
despite the fact that the vacuum energy for each is nonrenormalizable. If one
demands renormalizability of a model in terms of its S-matrix, additional
nonrenormalizable structure that can be extracted from the action should be
thought of as ill-defined surplus structure, about which the theory remains
silent. But with the interpretation of the standard model as an EFT, full
renormalizability is no longer a strict requirement. Using a Euclidean lattice
formulation of a particular model of QFT with a momentum regulator (cf.
Section 2.1), nonrenormalizable terms in the Lagrangian are suppressed by
powers of the cutoff. If the cutoff is taken to be a physically meaningful
quantity, then there is an accompanying physical interpretation that, at
energy scales far below the cutoff, nonrenormalizable terms will be heavily
suppressed and therefore of little relevance. These arguments are based on the
renormalization group analysis of irrelevant terms in the Lagrangian; marginal
terms are the ones found to play a role at all energy scales, while relevant
terms grow in relative importance at low energies.
Unfortunately for the standard EFT view, the vacuum energy is one of two
seemingly physically significant quantities in the standard model that are
relevant terms under renormalization group flow.17 The other, of course,
being the Higgs mass. In that case the physical significance is undeniable,
since the Higgs boson has been discovered, with a mass of about 125 GeV [5]. The
physical significance of vacuum energy is a bit less direct, and is subject to
criticism. Aside from the criticism raised in this paper, see [4]. The EFT
approach licenses taking nonrenormalizable terms to be physically significant,
but vacuum energy does not fit into the standard physical interpretation,
since it is not suppressed by powers of the cutoff. By insisting that the
vacuum energy is physically significant, this problem of nonrenormalizability
is one part of the cosmological constant problem. In response, one can reject
the assumption that the vacuum energy as predicted by the standard model is
physically meaningful, or one can weaken the demand of renormalizability to
understand what QFTs tell us about the value of the vacuum energy.
I have adopted this latter approach in this paper. By dropping the requirement
of renormalizability, we are left with either regularized, or one-loop
renormalized quantities describing vacuum energy density. In the Introduction,
I claimed that two minimal Reasonable Requirements for a quantity to count as
a candidate prediction are the following.
Minimal Requirements for Candidate Predictions: In order for a quantity within
a model of QFT to count as a candidate prediction of some corresponding
physical quantity, it must be the case that: (1) the quantity is largely
insensitive to the regularization procedure; and (2) it is largely insensitive
to changes to the value of the regulator.
Since regularization procedures in QFT are somewhat arbitrary, and usually the
regulator disappears from the final prediction of a physical quantity, one
might expect that full independence of the regularization technique be
required. This seems like too strict a condition, however, when one considers
that regularization changes the form of a model of QFT. Different changes will
lead to different regulators, and full renormalization is required to make
these different approaches agree. Under the standard EFT view, one can think
of the different regularization schemes as different ways of parameterizing
our ignorance of high-energy physics. One can only trust the predictions of an
EFT when these differences wash out, which happens when the Minimal
Requirements are satisfied.
For the orthodox regularization schemes discussed in Section 2, a purely
regularized vacuum energy density fails to meet either of the Minimal
Requirements. The lattice regularized expression depends on the large-momentum
cutoff $\mu$ as $\langle\rho\rangle\sim\mu^{4}$, while the dimensionally
regularized term depends on the small deviation from four dimensions
$\epsilon$ as $1/\epsilon$. Small changes to these regulators will lead to
large changes in $\langle\rho\rangle$. Further, the expressions in Eqs. (5)
and (14) are quite different, so the value of $\langle\rho\rangle$ is
sensitive to the regularization procedure. The two vacua described under these
procedures even differ in their equation of state.
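As a purely numerical illustration of this sensitivity (a sketch of ours, not a calculation from the text: the prefactor and the specific cutoff values below are illustrative assumptions), the quartic scaling $\langle\rho\rangle\sim\mu^{4}$ makes the regularized quantity respond violently to the regulator:

```python
import math

# Illustrative only: the quartic cutoff dependence <rho> ~ mu^4 quoted
# above makes the lattice-regularized vacuum energy density extremely
# sensitive to the choice and value of the regulator mu.

def rho_lattice(mu_gev, prefactor=1.0):
    """Schematic lattice-regularized vacuum energy density, <rho> ~ mu^4.

    The prefactor is model-dependent; only the scaling matters here."""
    return prefactor * mu_gev ** 4

# A 1% shift in the cutoff produces a ~4% shift in <rho> ...
mu = 1.0e19  # a Planck-scale cutoff in GeV (illustrative choice)
rel_change = (rho_lattice(1.01 * mu) - rho_lattice(mu)) / rho_lattice(mu)

# ... while moving the cutoff from the Planck scale to the electroweak
# scale (~100 GeV) changes <rho> by dozens of orders of magnitude.
orders = math.log10(rho_lattice(1.0e19) / rho_lattice(1.0e2))

print(round(rel_change, 3))  # 0.041
print(round(orders))         # 68
```

A one-percent shift in the regulator already moves the density by roughly four percent, and relocating the cutoff between scales shifts it by 68 orders of magnitude, which is why a purely regularized value fares so poorly against the Minimal Requirements.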
If one rejects requirement (1), and takes the one-loop renormalized value of
$\langle\rho_{SM}\rangle$ as a first order prediction, then one has a
candidate prediction for vacuum energy density that can be used to motivate a
cosmological constant problem. However, there are two issues here. First,
renormalized quantities in QFTs aren’t taken as predictions of some physical
quantity. After renormalization, the physical value is measured from
experiment and input into the theory. In this sense, Eq. (16) would not count
as a prediction of vacuum energy density, but would be tuned to give the
measured value. The instability of $\langle\rho\rangle$ under radiative
corrections makes this tuning impossible perturbatively; so the failure of
naturalness prevents a consistent tuning. Second, this prediction is not
straightforwardly compatible with EFT, which I have taken to justify the
search for a nonrenormalizable candidate prediction of $\langle\rho\rangle$.
To see this, consider the case of the Fermi model of weak interactions. This
is a model in which four fermions—a proton, neutron, electron, and neutrino—all
interact at a point. This model is not fully renormalizable, but it is one-
loop renormalizable. Physicists used this model to make predictions at the
one-loop level, even though higher order terms were known to diverge. The
success of the Fermi model can be explained by noting that it is an effective
theory of the electroweak model. Nonrenormalizable terms that appear above the
one-loop level are due to the absence in the Fermi model of the W boson to
mediate the four-fermion interaction. These divergent terms end up being
irrelevant under renormalization group flow, so in an effective modification
of the Fermi theory the W boson mass scale ($M_{W}\approx 80$GeV) suppresses
them. Successful use of Fermi theory for low-energy
($m_{F}\approx 10$MeV) predictions is justified by the EFT framework, since
$m_{F}\ll M_{W}$.
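The suppression at work here can be checked with one line of arithmetic (a sketch; taking the quadratic power as the leading suppression, as for a dimension-six four-fermion operator, is our assumption):

```python
# Quick check of the scale separation cited above: nonrenormalizable
# terms in the Fermi theory enter suppressed by powers of m_F / M_W.
m_F = 0.01   # ~10 MeV, expressed in GeV
M_W = 80.0   # W boson mass, in GeV

ratio = m_F / M_W
suppression = ratio ** 2  # assumed leading (quadratic) suppression

print(ratio)        # 1.25e-4
print(suppression)  # ~1.6e-8
```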
When looking at the standard model as an EFT, one might hope that a similar
story can be told for the vacuum energy density. In some successor theory, the
relevant energy scale there will suppress the extremely large value
$\langle\rho_{SM}\rangle$. This is one way of expressing the requirement that
vacuum energy be natural. However, $\langle\rho\rangle$ is relevant under
renormalization group flow, and should depend quartically on a cutoff supplied
by a theory to which the standard model is effective. Given that the quantity
$\langle\rho\rangle$ is so sensitive to the value of the regulator beyond one-
loop, I take this to disqualify it as a candidate prediction. From within the
standard model, we have reason to believe that $\langle\rho\rangle$ depends
sensitively on the details of high-energy physics, and therefore falls outside
the scope of EFT. Even if one rejects the Minimal Requirements and takes
$\langle\rho_{SM}\rangle$ as a candidate prediction, when factoring in all
fundamental fields in the standard model, the value $\langle\rho_{SM}\rangle$
is approximately 55 orders of magnitude too large. While much smaller than the
often quoted 120 orders of magnitude, this is still a remarkably bad
prediction. Given its independence from all predictions within orthodox QFT,
one should therefore be skeptical of such a prediction (cf. [19] for further
discussion).
If standard EFT methods do not provide a candidate prediction of
$\langle\rho\rangle$, should we expect more rigorous extensions of QFT to
incorporate vacuum energy? Normal ordering procedures—including the Epstein-
Glaser approach—define all vacuum expectation values to vanish, so in a sense
these approaches “renormalize” the vacuum energy density to zero. Normal
ordering is typically defined for free fields, and as we have seen for
orthodox approaches, the presence of interactions can spoil renormalizability.
The Epstein-Glaser point splitting approach treats regularization and
renormalization in a very different way, and relates UV divergences to ill-
defined products of distributions at singular points. By carefully splitting
distributions, one avoids divergent integrals. However, there is still freedom
in the definition of these distributions, and this amounts to renormalization
in a similar manner: free parameters in the theory must be fixed by
experiment. These numerical distributions are then used to define operator-
valued distributions, which include normal-ordered free fields. So in this
formalism, normal ordering is directly connected to meaningful time-ordered
products (equivalently, n-point functions), and so Epstein-Glaser point
splitting leads to a vanishing vacuum expectation value of all quantities,
energy density included.
Finally, the Hollands and Wald approach to QFTs in curved spacetime
significantly alters and extends the core concepts of perturbative QFT on
Minkowski spacetime. Their approach to merging QFT and general relativity is
to reformulate the principles of QFT to be compatible with the spacetime
structure of generic globally hyperbolic solutions to the Einstein field
equations. For QFTs on curved spacetimes, analogs to Lorentz covariance and
global frequency splitting—general covariance and the microlocal spectrum
condition—change the mathematical formalism significantly. Even more
significantly, vacuum states are generically ill-defined, and so vacuum
expectation values cannot be the primary building blocks of n-point functions.
[15] have suggested that the operator product expansion coefficients could be
used to define a model of QFT. In highly symmetric cases, one may recover a
vacuum state as a derived concept; it would then make sense to discuss vacuum
energy densities, but this would be highly dependent on the particular
spacetime chosen. A Lorentz-invariant vacuum energy density is not a generic
feature of local covariant QFT, and there is no guarantee that the Minkowski
prediction in this radically different formalism would agree with one of the
orthodox schemes. These extensions of the standard QFT framework support the
conclusion that QFTs (considered as EFTs or otherwise) do not properly include
vacuum energy density.
### 4.1 Verdict: No $\langle\rho\rangle$ from the standard model
Does QFT in general—or the standard model in particular—predict a vacuum
expectation value of energy density? According to the Minimal Requirements
motivated by viewing the standard model as an EFT, it does not. We have seen
that under the orthodox approaches to regularization, vacuum energy density
varies significantly with the choice of regularization scheme—lattice
regularization or dimensional regularization—and the “predicted” value of
$\langle\rho\rangle$ is sensitively dependent on the value of the regulator.
If we reject Requirement (1), then one might be in a position to pick the
dimensionally regularized quantity as a candidate prediction. In order to do
so, one must first acknowledge that $\langle\rho\rangle$ falls outside the
domain of typical quantities in EFTs. One of the remarkable features of
thinking of the standard model as an EFT is that “the details of physics below
the cutoff have almost no empirical consequences for large-scale physics” [29,
p. 10, emphasis original]. By rejecting Requirement (1), we are admitting
that, for some physically meaningful quantities in the EFT, the choice of
regularization scheme—of how to parameterize ignorance of high-energy
physics—makes a considerable difference to the predicted value of that
quantity within QFT. Moreover, this would also amount to claiming that
dimensional regularization is the correct way to do so in this instance.
Instead, one should acknowledge that the sensitivity to regularization scheme
is a sign that the quantity falls outside the scope of the EFT.
If one still insists on prioritizing dimensional regularization, then one must
renormalize the vacuum energy density at one-loop in order to satisfy
Requirement (2). Though the value $\langle\rho_{SM}\rangle=-2\times
10^{8}GeV^{4}$ appears insensitive to the regulator (Requirement (2)), this is
only because high sensitivities at higher orders are hidden by brute
truncation. The quantity is not perturbatively renormalizable, and new
sensitivities to the regulator $\epsilon$ will appear at each order. Further,
there is no principled reason to pick any given order at which to renormalize.
Since the divergences are of the same character at each order, and since the
regulator makes the same order of contributions at each order, the only
principled choice is to renormalize nonperturbatively. Since this cannot be
done with the vacuum energy density, there is no reason to renormalize
perturbatively at any particular order. If renormalization at, e.g., the one-loop
level yielded a sensible prediction, then there might be a post-hoc
justification. But since $\langle\rho_{SM}\rangle$ is still so far from the
observed value, this seems like an unjustified relaxation of the
Minimal Requirements, and indicates that the quantity
$\langle\rho_{SM}\rangle$ lacks physical significance.
I argue that both Minimal Requirements are needed for a quantity to count as a
candidate prediction of some corresponding physical quantity under the EFT
framework. This is a hallmark of all other predictions of QFTs, and is not
satisfied in the case of vacuum energy density. Since there is no direct
evidence necessitating a physically significant vacuum energy density in QFTs,
I do not think we have grounds for a candidate prediction.18 Cf. [17] for
an argument that the Casimir effect and Lamb shift do not license the
inference to a constant vacuum expectation value of energy. Under the standard
view, vacuum energy density should be treated as analogous to droplet
formation in fluid mechanics: outside the scope of the EFT, and requiring the
details of the high-energy theory in order to make sense. Just as we don’t
expect fluid mechanics to provide the details of droplet formation, we should
not expect the standard model to predict the value of vacuum energy density.
To be clear, I have not argued that the concept of vacuum energy density is
meaningless; it is simply outside the scope of EFT. An alternative approach is
to extend and modify QFT to better fit with the principles of general
relativity, as outlined in Sec. 3.2. In particular, the concept of the vacuum
will likely require significant revision. The absence of $\langle\rho\rangle$
from local extensions of QFT mentioned in Sec. 3 suggests further that vacuum
energy is not a proper part of the physical content of QFT. The cosmological
constant problem should be understood as indicating some inconsistency in
merging Minkowski QFTs with general relativity at the level of EFTs [19]. In
particular, the presence of a large effective cosmological constant undermines
the initial assumption that Minkowski spacetime is a good approximation to the
more realistic curved spacetime. The work of Hollands and Wald highlights how
much of the formalism may need to change if one wants to make QFTs
conceptually compatible with the general covariance and locality of general
relativity. Perhaps the resulting conceptual clarity will also serve to clear
up the concept of vacuum energy density as well.
The cosmological constant problem does require some sort of (dis)solution. By
investigating the foundations of QFT, it is increasingly clear that at least
part of the problem lies in accepting that the standard model provides a
candidate prediction of $\langle\rho\rangle$.
## Acknowledgements
I am grateful to Chris Smeenk, Robert Brandenberger, Doreen
Fraser, and the UCI Philosophy of Physics Research group for helpful feedback
on early drafts of this paper, as well as the comments from two anonymous
reviewers. This work was supported by the Social Sciences and Humanities
Research Council of Canada, and the John Templeton Foundation Grant 61048, New
Directions in Philosophy of Cosmology. The opinions expressed in this
publication are those of the author and do not necessarily reflect the views
of the John Templeton Foundation.
## References
* [1] Gerard ’t Hooft and Martinus Veltman “Regularization and renormalization of gauge fields” In _Nuclear Physics B_ 44, 1972, pp. 189–213
* [2] Tatsumi Aoyama, Masashi Hayakawa, Toichiro Kinoshita and Makiko Nio “Tenth-order QED contribution to the electron g- 2 and an improved value of the fine structure constant” In _Physical Review Letters_ 109.11 APS, 2012, pp. 111807
* [3] G. W. Bennett et al. “Measurement of the negative muon anomalous magnetic moment to 0.7 ppm” In _Physical Review Letters_ 92.16 APS, 2004, pp. 161802
* [4] Eugenio Bianchi and Carlo Rovelli “Why all these prejudices against a constant?” In _arXiv preprint arXiv:1002.3966_ , 2010
* [5] CMS-collaboration “A measurement of the Higgs boson mass in the diphoton decay channel”, 2019
* [6] Freeman J. Dyson “The S matrix in quantum electrodynamics” In _Physical Review_ 75.11, 1949, pp. 1736–1755
* [7] Henri Epstein and Vladimir Glaser “The role of locality in perturbation theory” In _Annales de l’IHP Physique théorique_ 19.3, 1973, pp. 211–295
* [8] Doreen Fraser “The development of renormalization group methods for particle physics: Formal analogies between classical statistical mechanics and quantum field theory” In _PhilSci-Archive Preprint_ , 2018
* [9] Doreen Fraser and Adam Koberinski “The Higgs mechanism and superconductivity: A case study of formal analogies” In _Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics_ 55 Elsevier, 2016, pp. 72–91
* [10] James Duncan Fraser “The Real Problem with Perturbative Quantum Field Theory” In _The British Journal for the Philosophy of Science_ , 2017
* [11] James Duncan Fraser “Renormalization and the formulation of scientific realism” In _Philosophy of Science_ 85.5 University of Chicago Press Chicago, IL, 2018, pp. 1164–1175
* [12] L. Gurung, T. Babij, S. Hogan and D. Cassidy “Precision Microwave Spectroscopy of the Positronium $n=2$ Fine Structure” In _Phys. Rev. Lett._ 125 American Physical Society, 2020, pp. 073002 DOI: 10.1103/PhysRevLett.125.073002
* [13] Stefan Hollands and Robert M Wald “Local Wick Polynomials and Time Ordered Products of Quantum Fields in Curved Spacetime” In _Communications in Mathematical Physics_ 223.2 Springer, 2001, pp. 289–326
* [14] Stefan Hollands and Robert M Wald “Existence of local covariant time ordered products of quantum fields in curved spacetime” In _Communications in mathematical physics_ 231.2 Springer, 2002, pp. 309–345
* [15] Stefan Hollands and Robert M Wald “Quantum field theory in curved space-time, the operator product expansion, and dark energy” In _International Journal of Modern Physics D_ 17.13n14 World Scientific, 2008, pp. 2607–2615
* [16] Stefan Hollands and Robert M Wald “Axiomatic quantum field theory in curved spacetime” In _Communications in Mathematical Physics_ 293.1 Springer, 2010, pp. 85
* [17] Adam Koberinski “Problems with the cosmological constant problem” Forthcoming, http://philsci-archive.pitt.edu/14244/ In _Philosophy Beyond Spacetime_ Oxford University Press, 2017
* [18] Adam Koberinski and Chris Smeenk “Q.E.D., QED” In _Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics_ 71, 2020, pp. 1–13 DOI: https://doi.org/10.1016/j.shpsb.2020.03.003
* [19] Adam Koberinski and Chris Smeenk “Effective field theory and the failure of naturalness in the cosmological constant problem” In _under review in Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics_ , 2021
* [20] Jerome Martin “Everything you always wanted to know about the cosmological constant problem (but were afraid to ask)” In _Comptes Rendus Physique_ 13.6-7 Elsevier, 2012, pp. 566–665
* [21] David Mattingly “Modern tests of Lorentz invariance” In _Living Reviews in Relativity_ 8.1 Springer, 2005, pp. 5
* [22] Sébastien Rivat “Renormalization scrutinized” In _Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics_ 68, 2019, pp. 23–39 DOI: https://doi.org/10.1016/j.shpsb.2019.04.006
* [23] Joshua Rosaler and Robert Harlander “Naturalness, Wilsonian renormalization, and “fundamental parameters” in quantum field theory” In _Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics_ 66 Elsevier, 2019, pp. 118–134
* [24] Laura Ruetsche “Perturbing realism” In _Scientific Realism and the Quantum_ Oxford University Press, USA, 2020, pp. 293
* [25] Simon Saunders “Is the Zero-Point Energy Real?” In _Ontological aspects of quantum field theory_ World Scientific, 2002, pp. 313–343
* [26] Gunter Scharf “Finite quantum electrodynamics: The causal approach” Springer, 1995
* [27] Mike D. Schneider “What’s the problem with the cosmological constant?” In _Philosophy of Science_ 87.1 The University of Chicago Press Chicago, IL, 2020, pp. 1–20
* [28] David Wallace “Taking particle physics seriously: A critique of the algebraic approach to quantum field theory” In _Studies in History and Philosophy of Modern Physics_ 42, 2011, pp. 116–125
* [29] David Wallace “The quantum theory of fields” In __forthcoming in_ The Routledge Companion to the Philosophy of Physics_ Routledge, 2018 URL: http://philsci-archive.pitt.edu/15296/
* [30] Steven Weinberg “The cosmological constant problem” In _Reviews of modern physics_ 61.1 APS, 1989, pp. 1
* [31] Porter Williams “Scientific realism made effective” In _The British Journal for the Philosophy of Science_ , 2017
* [32] Porter Williams “Renormalization Group Methods” In __forthcoming in_ The Routledge Companion to the Philosophy of Physics_ Routledge, 2018 URL: http://philsci-archive.pitt.edu/15346/
* [33] Kenneth G. Wilson “Non-Lagrangian models of current algebra” In _Physical Review_ 179.5 APS, 1969, pp. 1499
A Primal-Dual Approach to Constrained Markov Decision Processes
Yi Chen Department of Industrial Engineering & Management Sciences,
Northwestern University, Evanston, IL, 60208 Jing Dong Columbia Business
School, New York City, NY, 12007 Zhaoran Wang Department of Industrial
Engineering & Management Sciences, Northwestern University, Evanston, IL,
60208
In many operations management problems, we need to make decisions sequentially
to minimize the cost while satisfying certain constraints. One modeling
approach to study such problems is constrained Markov decision process (CMDP).
When solving the CMDP to derive good operational policies, there are two key
challenges: one is the prohibitively large state space and action space; the
other is the hard-to-compute transition kernel. In this work, we develop a
sampling-based primal-dual algorithm to solve CMDPs. Our approach
alternately applies regularized policy iteration to improve the policy and
subgradient ascent to maintain the constraints. Under mild regularity
conditions, we show that the algorithm converges at rate
$O(\log(T)/\sqrt{T})$, where $T$ is the number of iterations. When the CMDP
has a weakly coupled structure, our approach can substantially reduce the
dimension of the problem through an embedded decomposition. We apply the
algorithm to two important applications with weakly coupled structures: multi-
product inventory management and multi-class queue scheduling, and show that
it generates controls that outperform state-of-the-art heuristics.
Constrained Markov decision process, primal-dual algorithm, weakly coupled
Markov decision process
## 1 Introduction
In many sequential decision-making problems, a single utility might not
suffice to describe the real objectives faced by the decision-makers. A
natural approach to study such problems is to optimize one objective while
putting constraints on the others. In this context, the constrained Markov
decision process (CMDP) has become an important modeling tool for sequential
multi-objective decision-making problems under uncertainty. A CMDP aims to
minimize one type of cost while keeping the other costs below certain
thresholds. It has been successfully applied to analyze various important
applications, including admission control and routing in telecommunication
networks, scheduling for hospital admissions, and maintenance scheduling for
infrastructures (Altman 1999). Due to the complicated system dynamics and the
scale of the problem, exact optimal solutions to CMDPs can rarely be derived.
Instead, numerical approximations become the main workhorse to study CMDPs. In
this paper, we propose a sampling-based primal-dual algorithm that can
efficiently solve a wide range of CMDPs.
One basic approach to solve the CMDP is to use a linear programming (LP)
formulation based on the occupancy measure. This approach faces two key
challenges in implementations: it requires knowledge of the transition kernel
of the underlying dynamical system explicitly; it does not scale well as the
state space and action space get large. An alternative approach is to apply
the Lagrangian duality. In particular, by dualizing the constraints and
utilizing strong duality, we can translate the CMDP into a max-min problem,
where for a given Lagrangian multiplier, the inner minimization problem is
just a standard Markov decision process (MDP). This approach allows us to
solve the inner problem using standard dynamic programming based methods. It
does not require direct knowledge of the transition kernel as long as we can
estimate the value functions from simulated or empirical data. In
implementations, one would iteratively update the MDP policy and the
Lagrangian multiplier. The current development of this approach requires
solving the MDP to get the optimal policy for each updated Lagrangian
multiplier (see, for example, Le et al. (2019), Miryoosefi et al. (2019)),
which can be computationally costly. A more natural idea is to solve the MDP
only approximately at each iteration. In this paper, we investigate this idea
and show that at each iteration, we only need to do one iteration of policy
update to achieve the optimal convergence rate (in terms of the number of
primal-dual iterations). Compared to the existing algorithms utilizing
Lagrangian duality, our primal-dual algorithm can be run at a much lower cost
at each iteration. We also demonstrate that our algorithm can be easily
combined with many other approximate dynamic programming techniques, such as
Monte Carlo policy evaluation, TD-learning, and value function approximations
(Sutton and Barto 2018).
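Schematically, the Lagrangian route described above can be written as follows (the notation $J_{c}$, $J_{d_{i}}$, $b_{i}$ is introduced here for illustration and does not appear in the surrounding text):

```latex
% Sketch of the Lagrangian form of a CMDP (our notation). The CMDP
% minimizes a long-run cost J_c subject to constraint costs J_{d_i}:
\begin{align*}
  \min_{\pi}\; & J_{c}(\pi)
  \quad \text{s.t.} \quad J_{d_i}(\pi) \le b_i, \quad i = 1, \dots, m, \\
  \max_{\lambda \ge 0}\, \min_{\pi}\; & \mathcal{L}(\pi, \lambda)
  = J_{c}(\pi) + \sum_{i=1}^{m} \lambda_i \bigl( J_{d_i}(\pi) - b_i \bigr).
\end{align*}
% For fixed \lambda, the inner minimization is an unconstrained MDP with
% instantaneous cost c + \sum_i \lambda_i d_i, so standard dynamic
% programming methods apply.
```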
A key ingredient of our algorithm is regularized policy iteration. The
standard policy iteration includes two steps: policy evaluation and policy
improvement. The policy evaluation step calculates the action-value function
under a given policy. Then, the policy improvement step defines a new policy
by taking the action that minimizes the action-value function. Through a
Kullback-Leibler (KL) regularization term, the regularized policy iteration
modifies the policy improvement step by reweighing the probability of taking
each action via a softmax transformation of the action-value function. This
modification allows us to view the policy update step as running mirror
descent for the objective function in the policy space (Nemirovski 2012). In
addition, we update the Lagrangian multiplier using subgradient ascent, which
also belongs to the family of mirror descent methods. This unified viewpoint
makes the improved primal-dual algorithm possible. Noticeably, many recent
developments in reinforcement learning also benefit from regularization, which
has been shown to improve exploration and robustness. For example, Trust
Region Policy Optimization and Proximal Policy Optimization use KL divergence
between two consecutive policies as a penalty in policy improvement (Schulman
et al. 2015, 2017). Soft-Q-learning uses Shannon entropy as a penalty in value
iteration (Haarnoja et al. 2017). Geist et al. (2019) propose a unified
framework to analyze the above algorithms via regularized Bellman operator
(see also Liu et al. (2019a), Shani et al. (2019), Wang et al. (2019) for
convergence analysis of regularized policy iteration).
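A minimal sketch of one such primal-dual iteration on a made-up two-state discounted CMDP (the kernel, costs, budget, and step sizes below are all illustrative assumptions, not the paper's pseudocode): exact policy evaluation, a single softmax/KL-regularized improvement step, then projected subgradient ascent on the Lagrange multiplier.

```python
import math

gamma, eta_pi, eta_lam = 0.9, 1.0, 0.1   # discount and step sizes (made up)
S, A = 2, 2
P = {  # P[s][a] = next-state distribution (toy kernel, made up)
    0: {0: [0.9, 0.1], 1: [0.2, 0.8]},
    1: {0: [0.5, 0.5], 1: [0.1, 0.9]},
}
c = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 2.0}}   # objective cost
d = {0: {0: 0.0, 1: 1.0}, 1: {0: 1.0, 1: 0.0}}   # constraint cost
b = 2.0                                          # constraint budget
lam = 0.5                                        # Lagrange multiplier
pi = {s: [0.5, 0.5] for s in range(S)}           # uniform initial policy

def evaluate(pi, lam, iters=500):
    """Q-function of the Lagrangian cost c + lam*d under policy pi."""
    V = [0.0] * S
    for _ in range(iters):  # iterated policy evaluation
        V = [sum(pi[s][a] * (c[s][a] + lam * d[s][a]
                             + gamma * sum(P[s][a][t] * V[t] for t in range(S)))
                 for a in range(A))
             for s in range(S)]
    return {s: [c[s][a] + lam * d[s][a]
                + gamma * sum(P[s][a][t] * V[t] for t in range(S))
                for a in range(A)]
            for s in range(S)}

Q = evaluate(pi, lam)

# One KL-regularized improvement step: reweight each action's probability
# by exp(-eta * Q) and renormalize (the softmax/mirror-descent update).
for s in range(S):
    w = [pi[s][a] * math.exp(-eta_pi * Q[s][a]) for a in range(A)]
    z = sum(w)
    pi[s] = [wi / z for wi in w]

# Subgradient ascent on lam, projected onto lam >= 0. The exact
# subgradient is the constraint value minus b; here we use a crude
# one-step plug-in estimate from state 0, purely for illustration.
d_est = sum(pi[0][a] * d[0][a] for a in range(A)) / (1 - gamma)
lam = max(0.0, lam + eta_lam * (d_est - b))

print([round(p, 3) for p in pi[0]], round(lam, 3))
```

In a full run, policy evaluation would itself be replaced by Monte Carlo or TD estimates, which is what makes the scheme sampling-based.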
In terms of applications of the algorithm, we study an important class of
CMDPs which we refer to as weakly coupled CMDPs (Singh and Cohn 1998). A
weakly coupled CMDP comprises multiple sub-problems that are independent
except for a collection of coupling constraints. Due to the linking
constraints, the scale of the problem grows exponentially in the number of sub-
problems. Hence, even in the case where each sub-problem is computationally
tractable, it can be computationally prohibitive to solve the joint problem.
Our primal-dual algorithm naturally helps break the curse of dimensionality in
this case. In particular, the weakly coupled CMDP can be decomposed into
independent sub-problems in the policy iteration step. In this case, the
complexity only grows linearly with the number of sub-problems. We also
comment that the weakly coupled CMDP can be viewed as a Lagrangian relaxation
of the weakly coupled MDP (Adelman and Mersereau 2008). Even though there is a
relaxation gap between the two, as we will demonstrate in our numerical
experiments, the (modified) policy obtained via CMDP can perform very well for
the original MDP problem in the applications we considered.
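The decomposition can be sketched as follows (our notation; $K$ sub-problems with per-component costs $c_{k}$, $d_{k}$):

```latex
% Sketch of the weakly coupled decomposition (our notation). With state
% s = (s_1, ..., s_K), action a = (a_1, ..., a_K), and coupling
% constraints on sum_k d_k(s_k, a_k), dualizing gives
\begin{align*}
  \mathcal{L}(\pi, \lambda)
  &= \sum_{k=1}^{K} J_{c_k}(\pi_k)
     + \lambda^{\top} \Bigl( \sum_{k=1}^{K} J_{d_k}(\pi_k) - b \Bigr) \\
  &= \sum_{k=1}^{K} \Bigl( J_{c_k}(\pi_k)
     + \lambda^{\top} J_{d_k}(\pi_k) \Bigr) - \lambda^{\top} b,
\end{align*}
% so for fixed \lambda the inner minimization splits into K independent
% MDPs, and the per-iteration cost grows linearly rather than
% exponentially in K.
```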
We apply the primal-dual algorithm to solve two classical operations
management problems: inventory planning and queue scheduling. For the
inventory planning problem, we consider a multi-product newsvendor problem
with budget constraints (Turken et al. 2012). We formulate this problem as a
weakly coupled CMDP and study a small-scale instance where we can numerically
solve for the optimal policy. We show that our policy can indeed achieve
$O(\log(T)/\sqrt{T})$ convergence in this case, where $T$ is the number of
iterations. For the queue scheduling problem, we consider a multi-class multi-
pool parallel-server system where the decision-maker needs to route different
classes of customers to different pools of servers in order to minimize the
performance cost (holding cost plus overflow cost). We allow the service rates
to be both customer-class and server-pool dependent. Since each pool only has
a finite number of servers, the routing policy needs to satisfy the capacity
constraints. This optimal scheduling problem can be formulated as a weakly
coupled MDP. We consider instances where it is prohibitive to solve for the
optimal policy. Applying the Lagrangian relaxation, we solve the resulting
weakly coupled CMDP by combining our primal-dual algorithm with value function
approximation techniques. We show that our method generates comparable or even
better policies than state-of-the-art policies.
### 1.1 Literature review
In this section, we review some of the existing methods/results to solve
CMDPs. The goal is to clearly state the contribution of our work. Most
existing algorithms for CMDPs are adapted from methods for MDPs, and can be
roughly divided into three categories: LP based approaches, dynamic
programming based approaches (including policy iteration and value iteration),
and the policy gradient methods.
One LP based approach utilizes the occupation measure, which is the weighted
proportion of time the system spends at each state-action pair. The objective
and constraints can be written as the inner products of instantaneous cost
functions and occupation measure. The other LP based approach utilizes the
dynamic programming principle and treats the value function (defined on state
space) as the decision variables. In particular, the optimal value function of
an MDP is the largest super-harmonic function that satisfies certain linear
constraints determined by transition dynamics. For the CMDP, we obtain an LP
by combining the dynamic programming principle with the Lagrangian duality.
These two LP formulations are duals of each other (Altman 1999). Many works on
LP based approaches aim to find efficient ways to solve the LPs by exploiting
the structures of some specific MDPs/CMDPs. For example, Bertsimas and Orlin
(1994) use the ellipsoid method to derive efficient algorithms for problems
with side constraints, including the traveling salesman problem and the
vehicle routing problem. Neely (2011) studies a linear fractional programming
method to solve CMDPs. More recently, Caramanis et al. (2014) propose two
algorithms based on the column generation and the generalized experts
framework, respectively. These algorithms are shown to be as efficient as
value iteration. Another challenge of LP based approaches is that we need to
know the transition kernel up front and in explicit form. Some recent
developments aim to overcome this challenge. For example, Chen and Wang (2016)
reformulate the LP of an MDP as a saddle point problem and use stochastic
approximation to solve it. Chen and Wang (2016) combine the saddle point
formulation of LP with value function approximation and develop a proximal
stochastic mirror descent method to solve it. However, most existing
developments in this line focus on MDPs.
The policy/value iteration stems from the Bellman operator on the value
function, and converges to the optimal value function linearly. In
implementations, the value function can be estimated via simulation or data.
Thus this class of methods does not require knowledge of the transition kernel
up front. For example, in reinforcement learning, we learn the transition
dynamics of the system while solving for the optimal policy (Sutton and Barto
2018). There are many works that apply policy/value iteration to solve CMDPs.
For example, Gattami (2019) formulates the CMDP as a zero-sum game and uses
primal-dual $Q$-learning to solve the game. The paper proves almost sure
convergence of the algorithm, but does not establish a rate of convergence.
Le et al. (2019) study CMDPs in the offline learning setting, and combine
various dynamic programming techniques with Lagrangian duality to solve it.
Miryoosefi et al. (2019) extend the constraints of CMDPs to nonlinear forms
and propose to solve them via Lagrangian duality and $Q$-learning as well. An
$O(1/\sqrt{T})$ rate of convergence is obtained in both Le et al. (2019) and
Miryoosefi et al. (2019). However, their algorithms require solving for the
optimal policy at each updated Lagrangian multiplier. Our method can be viewed
as an improved version of their algorithms. In particular, our algorithm only
requires one policy iteration for each updated Lagrangian multiplier while
achieving the same convergence rate. We comment that Le et al. (2019) and
Miryoosefi et al. (2019) consider more complicated settings than the classical
CMDPs studied in this paper. It would be interesting to extend our primal-dual
algorithm to solve the more general problems.
Both LP based methods and policy/value iteration can suffer from the curse of
dimensionality when dealing with a large action space. The policy gradient
based approaches alleviate the dimensionality issue by approximating the
policy using a parametric family of functions. In this case, searching over
the policy space reduces to a finite dimensional optimization problem. Several
works combine this idea with Lagrangian duality to solve large-scale CMDPs.
For example, Borkar (2005) and Bhatnagar and Lakshmanan (2012) combine actor-
critic algorithms with policy function approximations. Tessler et al. (2018)
use two-timescale stochastic approximation. Beyond duality, several other
policy gradient based approaches to solve CMDPs have been developed. For
example, Achiam et al. (2017) propose a trust region method that focuses on
safe exploration. Liu et al. (2019b) develop an interior point method with
logarithmic barrier functions. Chow et al. (2018) propose to use Lyapunov
functions to handle constraints. However, the key challenge of policy gradient
based methods to solve CMDPs is that the corresponding optimization problems
are non-convex. In most cases, only convergence to a local minimum can be
guaranteed and the convergence rates are often hard to establish.
### 1.2 Organization of the paper and notations
The paper is organized as follows. We first introduce the CMDP and review some
classical results that are relevant to our subsequent development in Section
2. We then introduce our algorithm in Section 3, and show that the algorithm
achieves the optimal convergence rate in Section 4. In Section 5, we discuss
how our algorithm can be applied to (approximately) solve weakly coupled CMDPs
and weakly coupled MDPs. We then implement our algorithm to solve an inventory
planning problem and a queue scheduling problem in Sections 6 and 7
respectively. Lastly, we conclude the paper and discuss some interesting
future directions in Section 8.
The following notations are used throughout the paper. For a positive integer
$K$, we denote by $[K]$ the set $\\{1,2,\ldots,K\\}$. For a vector
$\lambda\in\mathbb{R}^{K}$, $[\lambda]_{k}$ denotes its $k$-th coordinate and
$\|\lambda\|=(\sum_{k=1}^{K}[\lambda]^{2}_{k})^{1/2}$ denotes its $L_{2}$
norm. Given two vectors $a,b\in\mathbb{R}^{K}$, we say $a\leq b$ if the
inequality holds for each coordinate, i.e., $[a]_{k}\leq[b]_{k}\ \forall
k\in[K]$. Given a vector $x\in{\mathbb{R}}^{K}$,
$[x]^{+}=(\max\\{[x]_{1},0\\},\ldots,\max\\{[x]_{K},0\\})$. Finally, given two
sequences of real numbers $\\{a_{n}\\}_{n\geq 1}$ and $\\{b_{n}\\}_{n\geq 1}$,
we say $b_{n}=O(a_{n})$, $b_{n}=\Omega(a_{n})$, and $b_{n}=\Theta(a_{n})$ if
there exist some constants $C,C^{\prime}>0$ such that $b_{n}\leq Ca_{n}$,
$b_{n}\geq C^{\prime}a_{n}$, and $C^{\prime}a_{n}\leq b_{n}\leq Ca_{n}$,
respectively. We also use the $\tilde{O}(\cdot)$ notation to suppress
logarithmic factors: for example, if $b_{n}\leq Ca_{n}\cdot\log(n)$, we write
$b_{n}=\tilde{O}(a_{n})$.
## 2 Constrained Markov Decision Process
We start by considering a discrete-time MDP characterized by the tuple
$({\mathcal{S}},{\mathcal{A}},P,\gamma,\mu_{0})$. Here, ${\mathcal{S}}$ and
${\mathcal{A}}$ denote the state and action spaces;
$P=\\{P(\cdot|s,a)\\}_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}}$ is the
collection of probability measures indexed by the state-action pair $(s,a)$.
For each $(s,a)$, $P(\cdot|s,a)$ characterizes the one-step transition
probability of the Markov chain conditional on being in state $s$ and taking
action $a$. The function
$c=\\{c(s,a)\\}_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}}$ is the expected
instantaneous cost where $c(s,a)$ is the cost incurred by taking action $a$ at
state $s$. Lastly, $\gamma\in(0,1)$ and
$\mu_{0}=\\{\mu_{0}(s)\\}_{s\in{\mathcal{S}}}$ are the discount rate and the
distribution of the initial state, respectively. Given an MDP
$({\mathcal{S}},{\mathcal{A}},P,\gamma,\mu_{0})$, a policy $\pi$ determines
what action to take at each state. We define the expected cumulative
discounted cost with initial state $s_{0}$ under policy $\pi$ as
$V^{\pi}(s_{0})=(1-\gamma)\cdot{\mathbb{E}}^{\pi}\big{[}\sum_{t=0}^{\infty}\gamma^{t}\cdot
c(s_{t},a_{t})\big{|}s_{0}\big{]},$ (1)
where $s_{t},a_{t}$ are the state and action at time $t$ and
${\mathbb{E}}^{\pi}$ denotes the expectation with respect to the transition
dynamics determined by policy $\pi$. We further weight the expected costs
according to the initial state distribution and define
$\displaystyle
C(\pi)={\mathbb{E}}_{s_{0}\sim\mu_{0}}\big{[}V^{\pi}(s_{0})\big{]}.$ (2)
Our goal is to minimize the cost $C(\pi)$ over a suitably defined class of
policies.
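When the transition kernel is available only through a simulator, the discounted cost in (1)–(2) can be estimated by truncated Monte Carlo rollouts. A minimal Python sketch, where the callables `step`, `cost`, `policy`, and `draw_s0` are our own placeholders for the model primitives:

```python
import random

def discounted_cost_mc(step, cost, policy, draw_s0, gamma=0.9,
                       n_episodes=100, horizon=200, seed=0):
    """Monte Carlo estimate of C(pi) in (1)-(2):
    (1 - gamma) * E[ sum_t gamma^t * c(s_t, a_t) ] with s_0 ~ mu_0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_episodes):
        s = draw_s0(rng)          # s_0 ~ mu_0
        ep = 0.0
        for t in range(horizon):  # truncate the infinite discounted sum
            a = policy(s, rng)
            ep += (gamma ** t) * cost(s, a)
            s = step(s, a, rng)
        total += (1.0 - gamma) * ep
    return total / n_episodes

# Sanity check: a single state with constant cost 1 gives C(pi) = 1,
# since (1 - gamma) * sum_t gamma^t = 1.
c_hat = discounted_cost_mc(step=lambda s, a, rng: 0,
                           cost=lambda s, a: 1.0,
                           policy=lambda s, rng: 0,
                           draw_s0=lambda rng: 0)
```

The $(1-\gamma)$ normalization keeps the estimate on the same scale as the instantaneous costs, matching the convention of (1).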
As an extension to MDP, the CMDP model optimizes one objective while keeping
others satisfying certain constraints. Specifically, in addition to the
original cost $c$, we introduce $K$ auxiliary instantaneous costs
$d_{k}=\\{d_{k}(s,a)\\}_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}},\forall\
k\in[K]$. The goal of a CMDP is to find the policy that minimizes the cost
defined in (2) while keeping the following constraints satisfied
$\displaystyle
D_{k}(\pi)=(1-\gamma)\cdot{\mathbb{E}}_{s_{0}\sim\mu_{0}}\Big{[}{\mathbb{E}}^{\pi}\big{[}\sum_{t=0}^{\infty}\gamma^{t}\cdot
d_{k}(s_{t},a_{t})\big{|}s_{0}\big{]}\Big{]}\leq q_{k},\ \forall\ k\in[K].$
(3)
In order to make the expression more concise, we define
$D(\pi):=(D_{1}(\pi),\ldots,D_{K}(\pi))^{\top}$,
$q:=(q_{1},\ldots,q_{K})^{\top}$, and write the constraints in (3) as
$D(\pi)\leq q$.
We remark that the CMDP is only one way to model problems with multiple
objectives/constraints. This particular modeling choice turns out to enjoy
considerable analytical and computational tractability, as we will discuss
next. The CMDP is also closely connected to an important class of MDPs – weakly
coupled MDP. In particular, CMDPs can be viewed as a relaxation of weakly
coupled MDPs (Adelman and Mersereau 2008). We will provide more discussions
about this in Section 5.
### 2.1 Policy Spaces
Solving a CMDP requires finding the optimal policy over a properly defined
policy space, which is a function space. Imposing suitable regularity
conditions on the policy space will facilitate the development of algorithms.
We next introduce a few classes of commonly used policies. It is natural to
require that all policies are non-anticipative, which means that the decision-
maker does not have access to future information. Define the history at time
$t$ to be the sequence of previous states and actions as well as the current
state, i.e., $h_{t}:=(s_{0},a_{0},\ldots,a_{t-1},s_{t})$. Then a non-
anticipative policy can be viewed as a mapping from $h_{t}$ and $t$ to the
action space. We refer to such a policy as a “behavior policy”. If a policy
only depends on the current state $s_{t}$ and time $t$ instead of the whole
history $h_{t}$, it is called a “Markov policy”. For a Markov policy, if it is
independent of the time index $t$, it is referred to as a “stationary policy”.
When a stationary policy is a deterministic mapping from the state space to
the action space, it becomes a “stationary deterministic policy”. We use
$\Pi$, $\Pi_{M}$, $\Pi_{S}$, $\Pi_{D}$ to denote the space of behavior,
Markov, stationary, and stationary deterministic policies, respectively.
Given an arbitrary policy space $U$, we can further generate a new type of
policy called “mixing policies” via an initial randomization. Specifically,
let $\rho$ be a probability measure on $U$. Under a mixing policy on $U$ with
mixing probability $\rho$, we first draw a policy, say $\pi_{g}$, from $U$
following the distribution $\rho$. Then $\pi_{g}$ is executed for
$t=0,1,2,\cdots$. We denote by ${\mathcal{M}}(U)$ the space of mixing policies
constructed from $U$. An important special case is ${\mathcal{M}}(\Pi_{S})$,
i.e., the space of mixing stationary policies. When allowing mixing operation,
we incorporate the randomness of the initial mixing into the calculation of
the accumulated cost. In particular, for $\pi\in{\mathcal{M}}(U)$ with initial
randomization $\rho$,
$\displaystyle
C(\pi)=(1-\gamma)\cdot{\mathbb{E}}_{s_{0}\sim\mu_{0}}\Big{[}\int_{U}{\mathbb{E}}^{\pi_{g}}\big{[}\sum_{t=0}^{\infty}\gamma^{t}\cdot
c(s_{t},a_{t})\big{|}s_{0}\big{]}\rho(\text{d}g)\Big{]}.$
By definition, we note that
$\Pi\supseteq\Pi_{M}\supseteq\Pi_{S}\supseteq\Pi_{D},\
\mathcal{M}(\Pi_{S})\supseteq\Pi_{S}.$
A class of policies $U$ is called a “dominating class” for a CMDP, if for any
policy $\pi\in\Pi$, there exists a policy $\bar{\pi}\in U$ such that
$C(\bar{\pi})\leq C(\pi),\ D_{k}(\bar{\pi})\leq D_{k}(\pi),\ \forall\
k\in[K].$
For CMDPs, when the instantaneous costs $c(\cdot,\cdot)$ and
$d_{k}(\cdot,\cdot)$ are uniformly bounded from below, $\Pi_{S}$ is dominating
(Altman 1999). The class of mixing stationary policies $\mathcal{M}(\Pi_{S})$
is also dominating in this case (Theorem 8.4 in (Altman 1999)).
### 2.2 Classical Approaches to Solve CMDPs
There are two classical approaches to CMDPs. We use CMDPs with finite state
and action spaces as examples. The first method utilizes the occupation
measure. Given a policy $\pi$, the occupation measure is defined as
$\displaystyle\nu^{\pi}(s,a):=(1-\gamma)\cdot{\mathbb{E}}_{s_{0}\sim\mu_{0}}\Big{[}\sum_{t=0}^{\infty}\gamma^{t}P^{\pi}(s_{t}=s,a_{t}=a|s_{0})\Big{]},$
(4)
where $P^{\pi}(\cdot,\cdot|s_{0})$ denotes the probability measure induced by
policy $\pi$ with initial state $s_{0}$. Note that the occupation measure is
the weighted long-run proportion of time that the system spends at each state-
action pair. We can then express the accumulated costs in (2) and (3) as
$\displaystyle C(\pi)$
$\displaystyle=\sum_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}}c(s,a)\cdot\nu^{\pi}(s,a)$
$\displaystyle D_{k}(\pi)$
$\displaystyle=\sum_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}}d_{k}(s,a)\cdot\nu^{\pi}(s,a),~{}\forall\
k\in[K].$
Let $\mathcal{Q}$ denote the set of feasible occupation measures, i.e., for
any occupation measure $\nu\in\mathcal{Q}$ there exists a policy $\pi$ that
leads to $\nu$. By Theorem 3.2 in Altman (1999), $\mathcal{Q}$ can be
represented by the collection of vectors
$\\{\nu(s,a)\\}_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}}$ that satisfies the
following system of linear equations:
$\displaystyle\sum_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}}\nu(s,a)\Big{(}\text{1}(s=s^{\prime})-\gamma
P(s^{\prime}|s,a)\Big{)}$ $\displaystyle=(1-\gamma)\cdot\mu_{0}(s^{\prime}),\
\forall\ s^{\prime}\in{\mathcal{S}},$ $\displaystyle\nu(s,a)\geq 0,\ \forall\
(s,a)\in{\mathcal{S}}\times{\mathcal{A}},$
where $\text{1}(\cdot)$ is the indicator function. Then we obtain the
following LP formulation of CMDP
$\displaystyle\min\sum_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}}c(s,a)\cdot\nu(s,a)$
(5) $\displaystyle\
\text{s.t.}\quad\big{\\{}\nu(s,a)\big{\\}}_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}}\in\mathcal{Q},$
$\displaystyle\quad\ \
\sum_{(s,a)\in{\mathcal{S}}\times{\mathcal{A}}}d_{k}(s,a)\cdot\nu(s,a)\leq
q_{k},\ \forall\ k\in[K].$
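As a concrete illustration of (4) and of the linear characterization of $\mathcal{Q}$, the occupation measure of a fixed stationary policy can be computed by iterating the discounted flow equation; by construction, the result satisfies the linear system above. A small pure-Python sketch (the array conventions are ours, not from the text):

```python
def occupation_measure(P, pi, mu0, gamma, iters=300):
    """Occupation measure nu^pi(s,a) of eq. (4), via the discounted flow
    fixed point  mu = (1-gamma)*mu0 + gamma * mu P_pi,  nu(s,a) = mu(s)*pi(a|s).

    P[s][a][sp] = P(sp|s,a), pi[s][a] = pi(a|s), mu0[s] = mu_0(s)."""
    S, A = len(mu0), len(pi[0])
    mu = list(mu0)
    for _ in range(iters):
        mu = [(1 - gamma) * mu0[sp]
              + gamma * sum(mu[s] * pi[s][a] * P[s][a][sp]
                            for s in range(S) for a in range(A))
              for sp in range(S)]
    return [[mu[s] * pi[s][a] for a in range(A)] for s in range(S)]

# Two states, one action: 0 -> 1 -> 1 ..., started in state 0, gamma = 0.5.
P = [[[0.0, 1.0]], [[0.0, 1.0]]]
pi = [[1.0], [1.0]]
nu = occupation_measure(P, pi, mu0=[1.0, 0.0], gamma=0.5)
# State 0 is visited only at t = 0, so nu(0, a0) = (1 - gamma) = 0.5 and
# nu(1, a0) = 0.5; the entries of nu always sum to one.
```

One can check by hand that this `nu` satisfies the flow constraints of Theorem 3.2 for both $s'=0$ and $s'=1$, and the objective in (5) is then the inner product of `nu` with the cost array.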
The second method utilizes the Lagrangian duality. Let
$\lambda\in\mathbb{R}^{K}$ denote the Lagrangian multiplier. Define
$\displaystyle
L(\pi,\lambda):=C(\pi)+\sum_{k=1}^{K}[\lambda]_{k}\cdot(D_{k}(\pi)-q_{k}).$
(6)
Then the CMDP can be equivalently formulated as
$\inf_{\pi\in\Pi_{S}}\sup_{\lambda\geq 0}L(\pi,\lambda).$ By Theorem 3.6 in
Altman (1999), we can exchange the order of inf and sup and obtain,
$\inf_{\pi\in\Pi_{S}}\sup_{\lambda\geq 0}L(\pi,\lambda)=\sup_{\lambda\geq
0}\inf_{\pi\in\Pi_{S}}L(\pi,\lambda)=\sup_{\lambda\geq
0}\inf_{\pi\in\Pi_{D}}L(\pi,\lambda),$
where the last equality holds because for each fixed $\lambda$, the inner
problem is an unconstrained MDP and the optimal policy is a stationary
deterministic policy. We emphasize that given the optimal solution
$\lambda^{*}$ to the dual problem, not every policy $\pi(\lambda^{*})$ that
minimizes $L(\pi,\lambda^{*})$ is the optimal policy to the original CMDP. A
necessary condition for $\pi(\lambda^{*})$ to be optimal for the original CMDP
is the complementary slackness: $[\lambda^{*}]_{k}\cdot
D_{k}(\pi(\lambda^{*}))=0,\forall k\in[K]$.
The dual problem $\sup_{\lambda\geq 0}\inf_{\pi\in\Pi_{D}}L(\pi,\lambda)$
leads to the following LP formulation:
$\displaystyle\max_{\phi,\lambda}\sum_{s\in{\mathcal{S}}}\mu_{0}(s)\phi(s)-\sum_{k=1}^{K}[\lambda]_{k}q_{k}$
(7) $\displaystyle\text{s.t.}\
\phi(s)\leq(1-\gamma)\Big{(}c(s,a)+\sum_{k=1}^{K}[\lambda]_{k}d_{k}(s,a)\Big{)}+\gamma\cdot\sum_{s^{\prime}\in{\mathcal{S}}}\phi(s^{\prime})P(s^{\prime}|s,a),$
where $\phi(s)$ denotes the value function with initial state $s$. Note that
(5) and (7) are duals of each other.
Various methods have been developed in the literature to solve the LPs (5) or
(7). There are two main obstacles to solving the LPs in practice. First, it
be computationally prohibitive when dealing with a large state space or a
large action space. Second, it requires explicit characterization of the
transition kernel $P$. To overcome these difficulties, we next develop a
sampling-based primal-dual algorithm to solve CMDPs.
## 3 The Primal-Dual Algorithm
Consider the Lagrangian dual problem
$\sup_{\lambda\geq 0}\inf_{\pi\in\Pi_{S}}L(\pi,\lambda).$ (8)
For each fixed $\lambda$, the inner problem is an unconstrained MDP. A natural
idea is to solve the unconstrained MDP via a sampling-based method and then
update the Lagrangian multipliers via subgradient ascent. Such an idea is
exploited in (Le et al. 2019). However, this method is computationally
expensive, since we need to solve a new MDP every time the Lagrangian
multipliers are updated. In contrast, our method only requires a single policy
update at each iteration, i.e., we do not need to solve for the corresponding
optimal policy at each iteration.
We develop the algorithm and analyze its convergence in
$\mathcal{M}(\Pi_{S})$, the space of mixing stationary policies, rather than
$\Pi_{S}$. The benefits of allowing the mixing are twofold. First, it
provides an intuitive way to understand strong duality:
$\displaystyle\inf_{\pi\in{\mathcal{M}}({\Pi_{S}})}\sup_{\lambda\geq
0}L(\pi,\lambda)=\sup_{\lambda\geq
0}\inf_{\pi\in{\mathcal{M}}({\Pi_{S}})}L(\pi,\lambda).$ (9)
With the mixing operation, we can treat $C(\pi)$ and $D(\pi)$ as infinite-
dimensional linear functions with respect to the distributions of initial
randomization of policies in $\Pi_{S}$. Hence, the Lagrangian $L(\pi,\lambda)$
is a bilinear function and strong duality follows from the minimax theorem
(Sion et al. 1958). Second, in primal-dual algorithms, we in general need to
take the average of the trajectories to obtain convergence (Nedić and Ozdaglar
2009). In our case, caution needs to be taken when defining the average. In
particular, note that the objective and constraints are inner products of the
cost functions and the occupation measures. Thus, what we need to average
across are the occupation measures. However, since the mapping from the policy
to the corresponding occupation measure is nonlinear, we cannot average the
policy $\pi(\cdot|s)$, i.e., the probability of taking each action at each
state, directly. The mixing operation provides a simple way to average the
occupation measures. In addition, given a mixing policy, under mild regularity
conditions, there exists a non-mixing stationary policy that has the same
occupation measure (Theorem 3.1 of Altman (1999)). In particular, for
$\pi\in{\mathcal{M}}(\Pi_{S})$, let $\nu^{\pi}(\cdot,\cdot)$ be the
corresponding occupation measure. Then, we can construct such a stationary
policy $\tilde{\pi}$ via
$\displaystyle\tilde{\pi}(a|s)=\frac{\nu^{\pi}(s,a)}{\sum_{a\in{\mathcal{A}}}\nu^{\pi}(s,a)}.$
(10)
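The conversion (10) is straightforward to implement. A short sketch (our own helper, with a uniform fallback for states of zero occupation mass, where any distribution works):

```python
def stationary_from_occupation(nu):
    """Eq. (10): stationary policy pi~(a|s) = nu(s,a) / sum_a' nu(s,a'),
    which reproduces the occupation measure of a mixing policy.
    nu[s][a] is the occupation measure of the mixing policy."""
    policy = []
    for row in nu:
        mass = sum(row)
        if mass > 0.0:
            policy.append([v / mass for v in row])
        else:  # state never visited: choose, e.g., the uniform distribution
            policy.append([1.0 / len(row)] * len(row))
    return policy

# Mixing an "always action 0" policy (weight 0.6) with an "always action 1"
# policy (weight 0.4) in a single-state MDP gives nu = [[0.6, 0.4]]; the
# equivalent stationary policy simply plays the actions with those probabilities.
tilde_pi = stationary_from_occupation([[0.6, 0.4]])
```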
Our algorithmic development is based on strong duality (9), which holds under
certain regularity conditions (see Section 4 for details). By the minimax
theorem, there exists a saddle point $(\pi^{*},\lambda^{*})$ such that
$\displaystyle L(\pi^{*},\lambda)\leq L(\pi^{*},\lambda^{*})\leq
L(\pi,\lambda^{*}),\ \forall\
\lambda\in{\mathbb{R}}^{K}_{+},\pi\in{\mathcal{M}}({\Pi_{S}}).$ (11)
Moreover, $\pi^{*}$ is an optimal solution to the primal problem and
$\lambda^{*}$ is an optimal solution to the dual problem. In addition,
$L(\pi^{*},\lambda^{*})$ equals the optimal cost of the CMDP. The saddle
point property (11) suggests that we can use iterative primal-dual updates to
find the saddle point.
We next introduce our actual algorithm. Note that for a fixed value of
$\lambda$, the inner inf-problem is an unconstrained MDP with modified
instantaneous cost
$c^{\lambda}(s,a):=c(s,a)+\sum_{k=1}^{K}[\lambda]_{k}(d_{k}(s,a)-q_{k})$. In
what follows, we refer to the inner problem
$\inf_{\pi\in{\mathcal{M}}({\Pi_{S}})}L(\pi,\lambda)$ as the modified
unconstrained MDP.
For a given policy $\pi$ and Lagrangian multiplier $\lambda$, define
$\displaystyle
Q^{\pi,\lambda}(s,a):=(1-\gamma)\cdot\Big{(}c^{\lambda}(s,a)+{\mathbb{E}}^{\pi}\Big{[}\sum_{t=1}^{\infty}\gamma^{t}c^{\lambda}(s_{t},a_{t})\big{|}s_{0}=s,a_{0}=a\Big{]}\Big{)},$
(12)
which is known as the action-value function or $Q$-function. Let $\pi_{m}$ and
$\lambda_{m}$ denote the policy and the Lagrangian multiplier obtained at
iteration $m$. For the policy update, we use KL divergence as the
regularization (Geist et al. 2019). In particular, the regularized policy
iteration is defined as
$\pi_{m}(a|s)=\argmin_{\pi(\cdot|s)\in\Delta_{{\mathcal{A}}}}\Big{\\{}\big{\langle}Q^{\pi_{m-1},\lambda_{m-1}}(s,\cdot),\pi(\cdot|s)\big{\rangle}+\eta_{m-1}^{-1}\cdot\text{KL}\big{(}\pi(\cdot|s)\|\pi_{m-1}(\cdot|s)\big{)}\Big{\\}},$
(13)
where $\eta_{m-1}>0$ is the stepsize that determines the strength of
regularization. Note that the regularized policy iteration (13) is defined
state-wise, i.e., for each $s\in{\mathcal{S}}$. The minimization is taken over
the probability simplex
$\Delta_{{\mathcal{A}}}:=\\{\pi(\cdot|s):0\leq\pi(a|s)\leq
1,\sum_{a\in{\mathcal{A}}}\pi(a|s)=1\\}$.
Let $\Lambda_{M}$ denote a suitably bounded domain that includes the dual
optimal solution $\lambda^{*}$. We will provide an explicit construction of
$\Lambda_{M}$ in Section 4. To update the Lagrangian multiplier, we use the
projected subgradient ascent:
$\lambda_{m}=\text{Proj}_{\Lambda_{M}}\Big{\\{}\lambda_{m-1}+\eta_{m-1}\cdot\partial_{\lambda}L(\pi_{m-1},\lambda_{m-1})\Big{\\}},$
(14)
where $\text{Proj}_{\Lambda_{M}}\\{\cdot\\}$ denotes the projection (in
$L^{2}$-norm) on $\Lambda_{M}$. We need such a projection to ensure the
boundedness of the “subgradient” in order to establish convergence.
By the definition of KL-divergence, the regularized policy iteration can be
re-written as
$\displaystyle\pi_{m}(\cdot|s)$
$\displaystyle=Z_{m-1}^{-1}\cdot\pi_{m-1}(\cdot|s)\cdot\exp\big{\\{}-\eta_{m-1}\cdot
Q^{\pi_{m-1},\lambda_{m-1}}(s,\cdot)\big{\\}},$ (15)
where $Z_{m-1}$ is some normalization constant. For the subgradient ascent
update, we have
$\big{[}\partial_{\lambda}L(\pi_{m-1},\lambda_{m-1})\big{]}_{k}=D_{k}(\pi_{m-1})-q_{k}.$
(16)
Both (15) and (16) can be evaluated/approximated using simulation. More
advanced approximation techniques for policy evaluation like TD-learning can
also be applied here.
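The equivalence between the state-wise minimization (13) and the exponentiated form (15) can be verified numerically for a single state. A small sketch (function names are ours); the grid check confirms that the closed form minimizes the regularized objective over the simplex:

```python
import math

def exponentiated_update(pi_prev, Q, eta):
    """Closed-form solution (15) of the KL-regularized step (13), for one
    state: pi_m proportional to pi_{m-1} * exp(-eta * Q)."""
    w = [p * math.exp(-eta * q) for p, q in zip(pi_prev, Q)]
    z = sum(w)  # the normalization constant Z_{m-1}
    return [x / z for x in w]

def step_objective(pi, pi_prev, Q, eta):
    """The objective minimized in (13): <Q, pi> + eta^{-1} * KL(pi || pi_prev)."""
    lin = sum(q * p for q, p in zip(Q, pi))
    kl = sum(p * math.log(p / pp) for p, pp in zip(pi, pi_prev) if p > 0.0)
    return lin + kl / eta

pi_prev, Q, eta = [0.5, 0.5], [1.0, 0.0], 1.0
pi_new = exponentiated_update(pi_prev, Q, eta)   # [1/(1+e), e/(1+e)]
f_new = step_objective(pi_new, pi_prev, Q, eta)
# Grid check: the closed form beats every point on a fine grid over the simplex.
assert all(f_new <= step_objective([p, 1.0 - p], pi_prev, Q, eta) + 1e-12
           for p in (i / 1000 for i in range(1, 1000)))
```

The update shifts probability mass away from actions with large $Q$-values, with $\eta_{m-1}$ controlling how aggressively the previous policy is deformed.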
Suppose that our algorithm runs $T-1$ iterations and generates a sequence
$\\{(\pi_{m},\lambda_{m})\\}_{0\leq m\leq T-1}$. Then, the algorithm outputs a
mixing policy and a Lagrangian multiplier by taking a weighted average of the
iterates:
$\displaystyle\bar{\pi}_{T}=\sum_{m=0}^{T-1}\tilde{\eta}_{m}\pi_{m},\
\bar{\lambda}_{T}=\sum_{m=0}^{T-1}\tilde{\eta}_{m}\lambda_{m},\mbox{ where
$\tilde{\eta}_{m}=\eta_{m}/\sum_{m=0}^{T-1}\eta_{m}$.}$ (17)
The averaging operation is required for convergence, since the objective
$L(\pi,\lambda)$ is bilinear and does not possess sufficient convexity. In
particular, there exist counter-examples in which the iterates fail to
converge without averaging.
The summation in the definition of $\bar{\pi}_{T}$ is interpreted as the
mixing operation, i.e., it mixes policies $(\pi_{0},\ldots,\pi_{T-1})$ with
initial randomization distribution
$(\tilde{\eta}_{0},\cdots,\tilde{\eta}_{T-1})$. Note that this essentially
takes the average of the occupation measures of $\pi_{m}$’s. From
$\bar{\pi}_{T}$, we can apply (10) to define a non-mixing stationary policy
that has the same occupation measure.
Putting everything together, our primal-dual algorithm is summarized in
Algorithm 1.
Algorithm 1 Primal-Dual Algorithm to CMDP
Input: pre-specified projection domain $\Lambda_{M}$, stepsizes
$\\{\eta_{m}\\}_{m\geq 0}$, initial policy $\pi_{0}$ and Lagrangian multiplier
$\lambda_{0}$
for $m=1,\ldots,T-1$ do
update Lagrangian multipliers and policy as
$\displaystyle\begin{cases}\lambda_{m}=\text{Proj}_{\Lambda_{M}}\Big{\\{}\lambda_{m-1}+\eta_{m-1}\cdot\partial_{\lambda}L(\pi_{m-1},\lambda_{m-1})\Big{\\}},\\\
\pi_{m}(\cdot|s)\propto\pi_{m-1}(\cdot|s)\cdot\exp\big{\\{}-\eta_{m-1}\cdot
Q^{\pi_{m-1},\lambda_{m-1}}(s,\cdot)\big{\\}}.\end{cases}$
end for
Output: mixing policy $\bar{\pi}_{T}=\sum_{m=0}^{T-1}\tilde{\eta}_{m}\pi_{m}$,
where $\tilde{\eta}_{m}={\eta}_{m}/\sum_{m=0}^{T-1}{\eta}_{m}$.
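To make the updates concrete, the following is a self-contained tabular sketch of Algorithm 1 with exact (iterative) policy evaluation standing in for the simulation-based estimates of (15)–(16); it is an illustration under our own conventions, not the authors' implementation. On a one-state, two-action toy CMDP whose optimal policy is randomized, the averaged occupation measure approaches the constrained optimum:

```python
import math

def primal_dual_cmdp(P, c, d, q, mu0, gamma, T=2500, lam_max=10.0, inner=60):
    """Tabular sketch of Algorithm 1. P[s][a][sp], c[s][a], d[k][s][a],
    q[k], mu0[s]. Returns the averaged occupation measure nu_bar of the
    output mixing policy (17)."""
    S, A, K = len(mu0), len(c[0]), len(q)

    def occupation(pi):
        # discounted state visitation, then nu(s,a) = mu(s) * pi(a|s), eq. (4)
        mu = list(mu0)
        for _ in range(inner):
            mu = [(1 - gamma) * mu0[sp]
                  + gamma * sum(mu[s] * pi[s][a] * P[s][a][sp]
                                for s in range(S) for a in range(A))
                  for sp in range(S)]
        return [[mu[s] * pi[s][a] for a in range(A)] for s in range(S)]

    def q_values(pi, lam):
        # Q^{pi,lambda} of (12), for the modified cost c^lambda
        cl = [[c[s][a] + sum(lam[k] * (d[k][s][a] - q[k]) for k in range(K))
               for a in range(A)] for s in range(S)]
        V = [0.0] * S
        for _ in range(inner):
            Q = [[(1 - gamma) * cl[s][a]
                  + gamma * sum(P[s][a][sp] * V[sp] for sp in range(S))
                  for a in range(A)] for s in range(S)]
            V = [sum(pi[s][a] * Q[s][a] for a in range(A)) for s in range(S)]
        return Q

    pi = [[1.0 / A] * A for _ in range(S)]
    lam = [0.0] * K
    nu_bar = [[0.0] * A for _ in range(S)]
    eta_sum = 0.0
    for m in range(1, T):
        eta = 1.0 / math.sqrt(m)
        nu = occupation(pi)
        Q = q_values(pi, lam)
        for s in range(S):                  # running average, eq. (17)
            for a in range(A):
                nu_bar[s][a] += eta * nu[s][a]
        eta_sum += eta
        # dual update (14), (16); coordinate-wise clipping stands in for
        # the L2 projection (the two coincide when K = 1)
        grad = [sum(d[k][s][a] * nu[s][a]
                    for s in range(S) for a in range(A)) - q[k]
                for k in range(K)]
        lam = [min(max(lam[k] + eta * grad[k], 0.0), lam_max)
               for k in range(K)]
        # primal update (15): exponentiated / KL-regularized step
        new_pi = []
        for s in range(S):
            w = [pi[s][a] * math.exp(-eta * Q[s][a]) for a in range(A)]
            z = sum(w)
            new_pi.append([x / z for x in w])
        pi = new_pi
    return [[v / eta_sum for v in row] for row in nu_bar]

# One state, two actions: c = (0, 1), d = (1, 0), constraint D(pi) <= 0.4.
# The optimal policy plays action 0 with probability 0.4 (C* = 0.6, lambda* = 1).
nu = primal_dual_cmdp(P=[[[1.0], [1.0]]], c=[[0.0, 1.0]],
                      d=[[[1.0, 0.0]]], q=[0.4], mu0=[1.0], gamma=0.5)
```

Because the primal and dual iterates orbit the saddle point rather than converge to it individually, only the averaged occupation measure is meaningful here, consistent with the discussion below (17).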
## 4 Convergence Analysis
In this section, we conduct detailed performance analysis of Algorithm 1. In
particular, we study the performance of policy $\bar{\pi}_{T}$ by analyzing
the values of the objective $C(\bar{\pi}_{T})$ and the constraints
$D(\bar{\pi}_{T})$. We show that the objective value $C(\bar{\pi}_{T})$
converges to the optimal $C^{*}:=C(\pi^{*})=L(\pi^{*},\lambda^{*})$ at a rate
of $O(\log(T)/\sqrt{T})$. In addition, even though $\bar{\pi}_{T}$ may be
infeasible, we show that the violation of constraints, which is measured by
$\displaystyle\big{\|}[D(\bar{\pi}_{T})-q]^{+}\big{\|}:=\left(\sum_{k=1}^{K}\big{(}[D_{k}(\bar{\pi}_{T})-q_{k}]^{+}\big{)}^{2}\right)^{1/2},$
(18)
converges to zero at a rate of $O(\log(T)/\sqrt{T})$. The analysis builds on a
combination of the subgradient method for saddle point problems and mirror
descent for regularized policy iteration.
Recall that our algorithmic development builds on the strong duality of CMDP.
For CMDPs with finite state and action spaces, the strong duality always holds
(Theorem 3.6 in (Altman 1999)). However, when the state space is countably
infinite, we need more regularity conditions to ensure the strong duality. One
sufficient condition is that the instantaneous costs of the CMDP are uniformly
bounded from below (see Definition 7.1, Theorem 9.9, and Chapter 10.3 in
(Altman 1999)). Specifically, we impose the following assumption.
[Lower Bound of Instantaneous Costs] There exists a constant $W$ such that for
all $s\in{\mathcal{S}}$, $a\in{\mathcal{A}}$, and $k=1,2,\ldots,K$,
$c(s,a)>W,\quad d_{k}(s,a)>W.$
To establish the convergence result, we also require Slater’s condition:
[Slater’s Condition] There exists some policy $\tilde{\pi}$ such that
$D_{k}(\tilde{\pi})<q_{k},\ \forall 1\leq k\leq K.$
Slater’s condition ensures the existence of finite and bounded optimal
Lagrangian multipliers
$\lambda^{*}=\argmax_{\lambda\geq
0}\Big{\\{}\inf_{\pi\in{\mathcal{M}}(\Pi_{S})}L(\pi,\lambda)\Big{\\}}.$
This condition is commonly assumed in the constrained optimization literature.
For many practical problems, Slater’s condition holds trivially.
Our last assumption is about the boundedness of the “subgradient”, which
regularizes the movement of policies and Lagrangian multipliers at each
iteration. Recall that in Algorithm 1, after applying the subgradient ascent
for Lagrangian multipliers, we project $\lambda$ onto a bounded domain
$\Lambda_{M}$, which takes the form
$\displaystyle\Lambda_{M}=\big{\\{}\lambda\in{\mathbb{R}}^{K}_{+}:\|\lambda\|\leq
M+r\big{\\}},$ (19)
where $M$ is an upper bound of $\|\lambda^{*}\|$ and $r>0$ is a slackness
constant.
[Bounded Subgradient] There exists some constant $G>0$ such that for any
$\lambda\in\Lambda_{M}$ and policy $\pi\in{\mathcal{M}}(\Pi_{S}),$
$\big{\|}\partial_{\lambda}L(\pi,\lambda)\big{\|}\leq G,\
\sup_{s\in{\mathcal{S}}}\sup_{a\in{\mathcal{A}}}\big{|}Q^{\pi,\lambda}(s,a)\big{|}\leq
G.$
Since $Q^{\pi,\lambda}(s,a)$ is linear in $\lambda$, it is necessary to
restrict $\lambda$ to a bounded domain $\Lambda_{M}$ for the bounded-subgradient
assumption to hold. That is why we need the projection step in updating
$\lambda$. Note that when the instantaneous cost functions $c(\cdot,\cdot)$ and
$d_{k}(\cdot,\cdot)$ are uniformly bounded, or when the state and action spaces
are finite, the bounded-subgradient assumption holds trivially.
Lastly, we comment that Slater’s condition not only
guarantees the existence and boundedness of $\lambda^{*}$, but also provides
an explicit upper bound of $\|\lambda^{*}\|$. In particular, let $\tilde{\pi}$
be a Slater point (a policy that satisfies the Slater’s condition), then we
have
$\displaystyle\|\lambda^{*}\|\leq-\frac{C(\tilde{\pi})-\tilde{c}}{\max_{1\leq
k\leq K}\big{\\{}D_{k}(\tilde{\pi})-q_{k}\big{\\}}},$ (20)
where $\tilde{c}\leq C(\pi^{*})$ is an arbitrary lower bound for the dual
problem. In many applications, it is possible to obtain a better upper bound
of $\|\lambda^{*}\|$ than (20) by exploiting the structure of the specific
problem.
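Given a Slater point, the bound (20) is a one-line computation. A tiny sketch (names are ours):

```python
def lambda_star_bound(C_slater, D_slater, q, dual_lower):
    """The upper bound (20) on ||lambda*|| obtained from a Slater point.

    C_slater, D_slater: objective and constraint values at the Slater point;
    dual_lower: any lower bound c~ on the optimal cost."""
    # Slater's condition makes every D_k - q_k strictly negative,
    # so the largest (least negative) slack is still below zero.
    worst_slack = max(Dk - qk for Dk, qk in zip(D_slater, q))
    return -(C_slater - dual_lower) / worst_slack

# A Slater point with cost 0.9 and slack 0.2 on a single constraint,
# together with the trivial lower bound 0, gives ||lambda*|| <= 4.5.
M = lambda_star_bound(C_slater=0.9, D_slater=[0.2], q=[0.4], dual_lower=0.0)
```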
Next, to establish the convergence, we need to construct an appropriate
potential function, which is also known as Bregman divergence in the
optimization literature. The potential function ensures that the regularized
policy iteration is equivalent to minimizing the sum of a linear approximation
of the objective function and the potential function. We next introduce this
potential function, which is essentially a weighted KL-divergence.
Consider the state occupation measure $\nu_{s}^{\pi}$ induced by a policy
$\pi\in\Pi_{S}$, i.e.,
$\nu_{s}^{\pi}(s):=(1-\gamma)\cdot{\mathbb{E}}_{s_{0}\sim\mu_{0}}\Big{[}\sum_{t=0}^{\infty}\gamma^{t}P^{\pi}(s_{t}=s|s_{0})\Big{]}.$
The KL-divergence between two stationary policies $\pi_{1}$ and $\pi_{2}$
weighted by $\nu_{s}^{\pi}$ is defined as
$\displaystyle\Phi^{\pi}(\pi_{1}\|\pi_{2})={\mathbb{E}}_{s\sim\nu_{s}^{\pi}}\Big{[}{\text{KL}}\big{(}\pi_{1}(\cdot|s)\|\pi_{2}(\cdot|s)\big{)}\Big{]}.$
(21)
When $\pi_{1}$ and $\pi_{2}$ are mixing policies, we first transform them to
the equivalent stationary policies via (10), and then define
$\Phi^{\pi}(\pi_{1}\|\pi_{2})$ as the weighted KL-divergence between the
equivalent stationary policies.
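The weighted KL-divergence (21) is straightforward to compute for tabular policies. A short sketch (our own helper):

```python
import math

def weighted_kl(pi1, pi2, nu_s):
    """Phi^pi(pi1 || pi2) of eq. (21): state-wise KL divergence between two
    stationary policies, weighted by a state occupation measure nu_s.
    pi1[s][a], pi2[s][a] are action probabilities; nu_s[s] are state weights."""
    return sum(
        nu_s[s] * sum(p * math.log(p / r)
                      for p, r in zip(pi1[s], pi2[s]) if p > 0.0)
        for s in range(len(nu_s))
    )

# Two states weighted 0.75 / 0.25; the policies differ only in state 0,
# so phi = 0.75 * KL((1,0) || (0.5,0.5)) = 0.75 * log 2.
phi = weighted_kl(pi1=[[1.0, 0.0], [0.5, 0.5]],
                  pi2=[[0.5, 0.5], [0.5, 0.5]],
                  nu_s=[0.75, 0.25])
```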
By definition, $\Phi^{\pi}(\pi_{1}\|\pi_{2})$ measures the discrepancy between
two policies weighted by a given state occupation measure. It connects the
regularized policy iteration in (13), which is defined state-wise, with a
single objective and serves as the Bregman divergence in mirror descent
analysis. Unlike the traditional analysis of mirror descent where the
potential function is fixed (Nemirovski 2012), in the analysis of regularized
policy iteration, we need to construct a policy-dependent potential function
and cannot fix the weight of KL-divergence. However, since policy updates are
defined state-wise, for an arbitrary weight, the regularized policy iteration
always takes the form of minimizing a linear approximation of the objective
function regularized by a certain potential function. Thus, the analysis of
mirror descent can be applied here with some modifications.
We are now ready to introduce the convergence results of our primal-dual
algorithm.
###### Theorem 4.1
(Convergence of Main Algorithm) Under the assumptions above, if the step size
$\eta_{m}=\Theta(1/\sqrt{m})$, then there exist positive constants
$\kappa_{1}$ and $\kappa_{2}$, such that
$\displaystyle\big{\|}[D(\bar{\pi}_{T})-q]^{+}\big{\|}\leq\Big{(}G^{2}\big{(}1+\frac{5}{8}\kappa_{2}\log(T)\big{)}+\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\Big{)}\frac{1}{2r(1-\gamma)\kappa_{1}\sqrt{T}},$
and
$\displaystyle C(\bar{\pi}_{T})-L(\pi^{*},\lambda^{*})$
$\displaystyle\leq\Big{(}\frac{5G^{2}}{8}\cdot\kappa_{2}\cdot\log(T)+\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})+\frac{\|\lambda_{0}\|^{2}}{2}\Big{)}\frac{1}{(1-\gamma)\kappa_{1}\sqrt{T}},$
$\displaystyle C(\bar{\pi}_{T})-L(\pi^{*},\lambda^{*})$
$\displaystyle\geq-\|\lambda^{*}\|\Big{(}G^{2}\big{(}1+\frac{5}{8}\kappa_{2}\log(T)\big{)}+\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\Big{)}\frac{1}{2r(1-\gamma)\kappa_{1}\sqrt{T}}.$
If the step size is constant $\eta_{m}=\eta$, then
$\displaystyle\big{\|}[D(\bar{\pi}_{T})-q]^{+}\big{\|}\leq\big{(}G^{2}+(1-\gamma)^{-1}\cdot\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\big{)}\frac{1}{2rT\eta}+\Big{(}\frac{1}{2}+\frac{1}{8(1-\gamma)}\Big{)}\frac{G^{2}\eta}{2r},$
and
$\displaystyle C(\bar{\pi}_{T})-L(\pi^{*},\lambda^{*})\leq$
$\displaystyle\left((1-\gamma)^{-1}\cdot\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})+\frac{\|\lambda_{0}\|^{2}}{2}\right)\frac{1}{T\eta}+\frac{5G^{2}\eta}{8(1-\gamma)},$
$\displaystyle C(\bar{\pi}_{T})-L(\pi^{*},\lambda^{*})\geq$
$\displaystyle-\|\lambda^{*}\|\big{(}G^{2}+(1-\gamma)^{-1}\cdot\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\big{)}\frac{1}{2rT\eta}-\|\lambda^{*}\|\Big{(}\frac{1}{2}+\frac{1}{8(1-\gamma)}\Big{)}\frac{G^{2}\eta}{2r},$
where $r$ is the slackness constant in (19).
Theorem 4.1 indicates that with decreasing step size,
$\eta_{m}=\Theta(1/\sqrt{m})$, our primal-dual algorithm achieves
$O(\log(T)/\sqrt{T})$ convergence. In particular,
$\big{\|}[D(\bar{\pi}_{T})-q]^{+}\big{\|}=O(\log(T)/\sqrt{T})\mbox{ and
}|C(\bar{\pi}_{T})-L(\pi^{*},\lambda^{*})|=O(\log(T)/\sqrt{T}).$
For constant step size, $\eta_{m}=\eta$, our primal-dual algorithm converges
to a neighborhood of the optimal at rate $O(1/T)$. In particular,
$\big{\|}[D(\bar{\pi}_{T})-q]^{+}\big{\|}=O(1/(\eta T)+\eta)\mbox{ and
}|C(\bar{\pi}_{T})-L(\pi^{*},\lambda^{*})|=O(1/(\eta T)+\eta).$
These convergence rates match those in Le et al. (2019), which requires
solving the modified unconstrained MDP to optimality at each iteration. We
also note that it is unlikely that the convergence rate can be improved beyond
$\Theta(1/\sqrt{T})$. This is because the dual problem is a finite-dimensional
concave optimization problem without strong concavity. The convergence rate of
the subgradient method in this case is lower bounded by $\Omega(1/\sqrt{T})$
(Bubeck 2014). The proof of Theorem 4.1 is deferred to the appendix.
We comment that in the bounds in Theorem 4.1, although the slackness constant
$r$ appears in denominators only, the constant $G$, which is an upper bound of
the subgradients, grows linearly in $r$. In particular, by Assumption 4, $G$
is determined by the shape of $\Lambda_{M}$. Hence, $r$ cannot be set too
large.
## 5 Weakly Coupled MDP and Weakly Coupled CMDP
One fundamental challenge in solving MDPs and CMDPs is the curse of
dimensionality. However, there is an important class of problems that has
certain decomposable structures. These problems, which are often referred to
as weakly coupled MDPs/CMDPs, contain multiple subproblems which are almost
independent of each other except for some linking constraints on the action
space (Singh and Cohn 1998). More precisely, for a weakly coupled MDP
consisting of $I$ sub-problems
$\\{({\mathcal{S}}^{i},{\mathcal{A}}^{i},P^{i},c^{i}(\cdot,\cdot),\gamma,\mu_{0}^{i})\\}_{i\in[I]}$,
we have the following structural properties: P1. Its state and action spaces
can be expressed in the form of Cartesian products, i.e.,
$\displaystyle\bm{s}$ $\displaystyle=(s^{1},\ldots,s^{I}),\
\bm{{\mathcal{S}}}={\mathcal{S}}^{1}\times{\mathcal{S}}^{2}\times\ldots\times{\mathcal{S}}^{I},$
$\displaystyle\bm{a}$ $\displaystyle=(a^{1},\ldots,a^{I}),\
\bm{{\mathcal{A}}}={\mathcal{A}}^{1}\times{\mathcal{A}}^{2}\times\ldots\times{\mathcal{A}}^{I}.$
P2. For each state $\bm{s}_{t}$ and action $\bm{a}_{t}$, the instantaneous
cost admits an additively separable form
$c(\bm{s}_{t},\bm{a}_{t})=\sum_{i=1}^{I}c^{i}(s_{t}^{i},a_{t}^{i}).$
P3. The joint initial distribution satisfies
$\bm{\mu}_{0}(\bm{s})=\mu^{1}_{0}(s^{1})\cdot\mu^{2}_{0}(s^{2})\cdot\ldots\cdot\mu^{I}_{0}(s^{I})$
and the one-step transition dynamics of the sub-MDPs are independent of each
other, i.e.,
$\displaystyle
P(\bm{s}_{t+1}|\bm{s}_{t},\bm{a}_{t})=\prod_{i=1}^{I}P^{i}(s^{i}_{t+1}|s_{t}^{i},a_{t}^{i}).$
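Property P3 means the joint chain can be simulated by advancing each sub-MDP independently. A minimal sketch, assuming each sub-kernel is given as a dictionary of transition distributions (the names `sample_joint_transition` and `sub_kernels` are ours, not from the paper):

```python
import random

def sample_joint_transition(state, action, sub_kernels, rng=random):
    """Sample the next joint state of a weakly coupled MDP.

    Property P3: the joint kernel factorizes as
    P(s'|s,a) = prod_i P^i(s'^i | s^i, a^i), so each sub-MDP can be
    advanced independently.  `sub_kernels[i]` is a (hypothetical) dict
    mapping (s_i, a_i) to a dict {next_sub_state: probability}.
    """
    next_state = []
    for i, (s_i, a_i) in enumerate(zip(state, action)):
        dist = sub_kernels[i][(s_i, a_i)]
        outcomes, probs = zip(*dist.items())
        next_state.append(rng.choices(outcomes, weights=probs)[0])
    return tuple(next_state)
```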
For the linking constraints, let
$b^{i}(\cdot,\cdot):{\mathcal{S}}^{i}\times{\mathcal{A}}^{i}\to\mathbb{R}^{K}$
be a $K$-dimensional real function, which can be interpreted as the resource
consumed by the $i$-th sub-problem, $i\in[I]$. Then, at each state $\bm{s}$,
the feasible actions need to satisfy
$\displaystyle b(\bm{s},\bm{a})=\sum_{i=1}^{I}b^{i}(s^{i},a^{i})\leq q,$ (22)
where $q\in\mathbb{R}^{K}$. Note that the linking constraint (22) is a hard
constraint and needs to be satisfied path-by-path almost surely. A weakly
coupled CMDP satisfies the same structural properties, P1-P3, as the weakly
coupled MDP. The only difference is that the linking constraint now takes the
form
$\displaystyle(1-\gamma)\cdot{\mathbb{E}}_{\bm{s}_{0}\sim\bm{\mu}_{0}}\Big{[}\sum_{t=0}^{\infty}\gamma^{t}\cdot\sum_{i=1}^{I}b^{i}(s^{i}_{t},a^{i}_{t})\big{|}\bm{s}_{0}\Big{]}\leq
q.$ (23)
The weakly coupled MDP and the weakly coupled CMDP are closely related to each
other. Let
$\bar{{\mathcal{A}}}(\bm{s})=\Big{\\{}\bm{a}=(a^{1},\ldots,a^{I})\in\bm{{\mathcal{A}}}:\
\sum_{i=1}^{I}b^{i}(s^{i},a^{i})\leq q\Big{\\}}$
be the (joint) action space of a weakly coupled MDP. Then, the Bellman
equation is
$\displaystyle
V^{*}_{\bm{\mu}_{0}}(\bm{s})=\min_{\bm{a}\in\bar{{\mathcal{A}}}(\bm{s})}\Big{\\{}\sum_{i=1}^{I}c^{i}(s^{i},a^{i})+\gamma\cdot\sum_{\bm{s}^{\prime}\in\bm{{\mathcal{S}}}}V^{*}_{\bm{\mu}_{0}}(\bm{s}^{\prime})\cdot
P(\bm{s}^{\prime}|\bm{s},\bm{a})\Big{\\}}.$
When the number of sub-MDPs $I$ is large, even if the scale of each subproblem
is small, the size of joint state space $\bm{{\mathcal{S}}}$ can be
prohibitively large. Hence, solving the MDP directly can be intractable. Two
decomposition schemes have been proposed to alleviate the curse of
dimensionality: LP-based ADP relaxation and Lagrangian relaxation (Adelman and
Mersereau 2008). Both of them lead to $I$ independent sub-LPs, which reduces
the complexity significantly. The LP-based ADP relaxation approximates the
value function with additively separable functions, i.e.,
$V^{*}_{\bm{\mu}_{0}}(\bm{s})\approx\sum_{i=1}^{I}V^{*}_{{\mu}^{i}_{0}}(s^{i}).$
The Lagrangian relaxation dualizes the constraints (22) based on the LP
representation of the Bellman equation. The latter relaxation translates the
weakly coupled MDP to a weakly coupled CMDP. It has been established that the
optimal cost of the relaxed CMDP provides a lower bound for the optimal cost
of the original MDP (Adelman and Mersereau 2008).
Many Operations Management problems can be formulated as weakly coupled
MDPs/CMDPs. Examples include inventory planning problems with multiple types
of inventories and budget constraints, and scheduling of parallel-server
queues with multiple classes of customers. We provide more details about these
problems in Sections 6 and 7, where we apply our primal-dual algorithms to
solve them.
When applying the primal-dual algorithm to solve weakly coupled CMDPs, it can
be easily adapted to exploit the decomposable structure. We call a policy $\bm{\pi}$
decomposable if it takes the product form:
$\bm{\pi}(\bm{a}|\bm{s})=\prod_{i=1}^{I}\pi^{i}(a^{i}|s^{i}).$
Since our algorithm converges with any initial policy, we shall start with a
decomposable policy. Let $\\{\bm{s}_{t}\\}_{t\geq
0}=\\{(s^{1}_{t},\ldots,s^{I}_{t})\\}_{t\geq 0}$ and $\\{\bm{a}_{t}\\}_{t\geq
0}=\\{(a^{1}_{t},\ldots,a^{I}_{t})\\}_{t\geq 0}$ be the trajectory of the CMDP
under policy $\bm{\pi}=(\pi^{1},\ldots,\pi^{I})$. To simplify the notations,
for each $i\in[I]$, we define
$\displaystyle
C^{i}(\pi^{i})=(1-\gamma)\cdot{\mathbb{E}}^{\pi^{i}}_{s^{i}_{0}\sim\mu^{i}_{0}}\Big{[}\sum_{t=0}^{\infty}\gamma^{t}\cdot
c^{i}(s^{i}_{t},a^{i}_{t})\big{|}s^{i}_{0}\Big{]},\
B^{i}(\pi^{i})=(1-\gamma)\cdot{\mathbb{E}}^{\pi^{i}}_{s^{i}_{0}\sim\mu^{i}_{0}}\Big{[}\sum_{t=0}^{\infty}\gamma^{t}\cdot
b^{i}(s^{i}_{t},a^{i}_{t})\big{|}s^{i}_{0}\Big{]}.$
Then, the CMDP can be written as
$\displaystyle\min_{(\pi^{1},\ldots,\pi^{I})}\sum_{i=1}^{I}C^{i}(\pi^{i}),\quad\text{s.t.}\sum_{i=1}^{I}B^{i}(\pi^{i})\leq
q.$ (24)
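Both $C^{i}$ and $B^{i}$ are normalized discounted expectations, so in practice they can be estimated with the same truncated Monte Carlo routine. A sketch under the assumption that the sub-MDP is accessible through hypothetical simulator callables (all argument names are ours):

```python
def estimate_discounted(simulate_step, policy, init_state, stage_fn, gamma,
                        horizon, n_reps, rng):
    """Monte Carlo estimate of (1-gamma) * E[sum_t gamma^t f(s_t, a_t)],
    the normalized discounted value used for both C^i (with f = c^i) and
    B^i (with f = b^i).  The infinite sum is truncated at `horizon`;
    `simulate_step`, `policy`, `init_state`, `stage_fn` are assumed
    callables for the sub-MDP at hand.
    """
    total = 0.0
    for _ in range(n_reps):
        s = init_state(rng)
        acc = 0.0
        for t in range(horizon):
            a = policy(s, rng)
            acc += gamma ** t * stage_fn(s, a)
            s = simulate_step(s, a, rng)
        total += acc
    return (1.0 - gamma) * total / n_reps
```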
When applying the primal-dual algorithm, if we start with a decomposable
policy, then the policies obtained in all subsequent iterations are
decomposable. To see this, we note that the Lagrangian function,
$\displaystyle
L(\bm{\pi},\lambda)=\sum_{i=1}^{I}\big{(}C^{i}(\pi^{i})+\lambda^{\top}B^{i}(\pi^{i})\big{)}-\lambda^{\top}q,$
can be decomposed into $I$ independent subproblems. If $\bm{\pi}_{m}$ is
decomposable,
$\displaystyle
Q^{\bm{\pi}_{m},\lambda}(\bm{s},\cdot)=\sum_{i=1}^{I}Q^{{\pi}^{i}_{m},\lambda}(s^{i},\cdot),$
where $Q^{{\pi}^{i}_{m},\lambda}(\cdot,\cdot)$ is the $Q$-function of the
$i$-th modified sub-MDP with instantaneous cost
$c^{i}(\cdot,\cdot)+\lambda^{\top}b^{i}(\cdot,\cdot)$. Here, we ignore the
constant $\lambda^{\top}q$, since subtracting a common constant in the
$Q$-function does not change the updates of regularized policy iteration. This
indicates that the regularized policy iteration, including policy evaluation
and improvement, can be implemented separately in parallel via
${\pi}^{i}_{m+1}(\cdot|s^{i})\propto{\pi}^{i}_{m}(\cdot|s^{i})\cdot\exp\Big{\\{}-\eta_{m}\cdot
Q^{{\pi}^{i}_{m},\lambda}(s^{i},\cdot)\Big{\\}},\ \forall i\in[I].$
Moreover, as the subgradient of Lagrangian multiplier takes form
$\partial_{\lambda}L(\bm{\pi}_{m},\lambda)=\sum_{i=1}^{I}B^{i}(\pi_{m}^{i})-q$,
it can be evaluated for the sub-MDPs in parallel as well. Above all, in this
case, the primal-dual algorithm improves the computational complexity from
depending exponentially on $I$ to linearly on $I$.
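The per-sub-MDP updates described above can be sketched as follows, assuming tabular policies and $Q$-functions stored as nested dictionaries (a simplified rendering of the exponentiated update and the dual subgradient step, not the authors' implementation):

```python
import math

def regularized_policy_step(pi_i, Q_i, eta):
    """Exponentiated (mirror-descent) policy update for one sub-MDP:
    pi_{m+1}(.|s) is proportional to pi_m(.|s) * exp{-eta * Q(s, .)},
    applied state-wise.  `pi_i[s]` and `Q_i[s]` are dicts mapping
    action -> probability and action -> Q-value, respectively."""
    new_pi = {}
    for s, dist in pi_i.items():
        w = {a: p * math.exp(-eta * Q_i[s][a]) for a, p in dist.items()}
        z = sum(w.values())
        new_pi[s] = {a: v / z for a, v in w.items()}
    return new_pi

def dual_step(lam, B_sum, q, eta, lam_max):
    """Projected subgradient ascent on the multipliers:
    lambda <- clip(lambda + eta * (sum_i B^i(pi^i) - q), [0, lam_max])."""
    return [min(max(l + eta * (b - qk), 0.0), lam_max)
            for l, b, qk in zip(lam, B_sum, q)]
```

With a decomposable initial policy, `regularized_policy_step` is applied to each sub-MDP in parallel, followed by one `dual_step` on the shared multipliers.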
## 6 Application to an Inventory Planning Problem
In this section, we apply the primal-dual algorithm to solve a multi-product
multi-period newsvendor problem with budget constraints.
Consider the inventory planning problem with $I$ distinct products. At the
beginning of each period, we need to decide the quantities to order based on
the current inventory levels. The orders are assumed to be fulfilled without
delay. After the inventory is replenished, a random demand is realized. We
assume the demands for each product are independent. Let $F_{i}$ denote the
cumulative distribution function of the demand for product $i$ in each period.
In particular, for each period, the demand for product $i$ is an independent
draw from the distribution $F_{i}$. For each product $i\in[I]$, we denote its
inventory level at the beginning of period $t$ by $s^{i}_{t}$, the quantity we
ordered by $a^{i}_{t}$, and the demand in period $t$ by $w_{t}^{i}$. For
product $i$ in period $t$, if the demand does not exceed the current inventory
level, i.e., $w^{i}_{t}\leq s^{i}_{t}+a^{i}_{t}$, all the demand is fulfilled
and the remaining inventory can be carried to the next period. Otherwise, only
$s^{i}_{t}+a^{i}_{t}$ units are fulfilled in the current period. The remaining
$(w^{i}_{t}-s^{i}_{t}-a^{i}_{t})$ units are carried to the next period as
backlog. We allow $s^{i}_{t}$’s to be negative to represent backlogs. For
product $i$, inventory incurs a holding cost of $h_{i}$ per unit per period
and backlog incurs a backlog cost of $b_{i}$ per unit per period. In addition,
product $i$ in inventory consumes $v_{i}$ resource per unit per period. For a
fixed $q>0$, we impose the following budget constraint
$\displaystyle(1-\gamma)\cdot{\mathbb{E}}\Big{[}\sum_{t=0}^{\infty}\sum_{i=1}^{I}\gamma^{t}\cdot[s^{i}_{t}+a^{i}_{t}]^{+}\cdot
v_{i}\Big{|}(s^{1}_{0},\ldots,s^{I}_{0})\Big{]}\leq q.$ (25)
The resource can be interpreted as, for example, the volume of each product.
In this case, the above constraint puts a restriction on the warehouse space.
The inventory planning problem can be formulated as a weakly coupled CMDP with
state $\bm{s}=(s^{1},\ldots,s^{I})$, action $\bm{a}=(a^{1},\ldots,a^{I})$, and
transition dynamics
$s^{i}_{t+1}=s^{i}_{t}+a^{i}_{t}-w^{i}_{t},\ w^{i}_{t}\sim F^{i}(w),\ \forall
i\in[I].$
As the demands are independent,
$P(\bm{s}_{t+1}|\bm{s}_{t},\bm{a}_{t})=\prod_{i=1}^{I}P({s}^{i}_{t+1}|{s}^{i}_{t},{a}^{i}_{t})$.
The instantaneous cost function and auxiliary cost function are
$\displaystyle c(\bm{s}_{t},\bm{a}_{t})$
$\displaystyle=\sum_{i=1}^{I}h_{i}\cdot[s^{i}_{t}+a^{i}_{t}-w^{i}_{t}]^{+}+b_{i}\cdot[w^{i}_{t}-s^{i}_{t}-a^{i}_{t}]^{+},$
$\displaystyle b(\bm{s}_{t},\bm{a}_{t})$
$\displaystyle=\sum_{i=1}^{I}[s^{i}_{t}+a^{i}_{t}]^{+}\cdot v_{i}.$
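A one-period simulation of these dynamics and costs can be sketched as follows (the function name and list-based state representation are ours; realized demands are passed in, so no distributional assumption is made):

```python
def inventory_step(s, a, w, h, b):
    """One period of the multi-product system: s' = s + a - w per product,
    with holding cost h_i on positive inventory and backlog cost b_i on
    negative inventory (backlogs), matching the dynamics in the text.
    Returns (next_state, period_cost)."""
    next_s, cost = [], 0.0
    for s_i, a_i, w_i, h_i, b_i in zip(s, a, w, h, b):
        sp = s_i + a_i - w_i
        cost += h_i * max(sp, 0) + b_i * max(-sp, 0)
        next_s.append(sp)
    return next_s, cost
```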
To verify the correctness of our convergence analysis, we consider a small-
scale instance of the problem with appropriate truncations. Such a truncation
makes the state and action spaces finite. In this case, the optimal cost can
be solved numerically (using the LP formulation). In particular, consider
$I=2$, and demands for the two products are both uniformly distributed on set
$\\{1,2,\ldots,10\\}$. We impose an upper bound $10$ and a lower bound $-10$
for the state space. In particular, when backlogs drop below $-10$, the excess
demands are lost without incurring any cost. For the other system parameters, we
set the holding costs $h_{1}=1,h_{2}=2$, backlog costs $b_{1}=2,b_{2}=3$,
resource consumptions $v_{1}=1.5,v_{2}=1$, threshold $q=10$, and discount rate
$\gamma=0.75$.
When implementing the primal-dual algorithm, we use the standard Monte Carlo
method to estimate the $Q$-function for a given policy. Since the system scale
is small, we can enumerate all the state-action pairs in policy evaluation,
i.e., no approximation of the value function is needed. The estimation of
$Q$-function is based on an average of $400$ independent replications of the
inventory process over $40$ periods of time. We implement two versions of the
algorithm, one with constant step sizes $\eta_{m}=0.2$, the other with
decreasing step size $\eta_{m}=0.2/\sqrt{m+1}$. In each experiment, we run
$500$ iterations in total and calculate the objective values for each
iteration. The results of numerical experiments are summarized in Figures 1
and 2. Figure 1 shows the trajectories of objective values and the constraint
violations for different iterations with constant step size. We observe that
after $500$ iterations, the averaged CMDP cost (without multiplying the
$(1-\gamma)$ factor) converges to $49.26$, which is close to the optimal value
$46.47$. In terms of feasibility, we calculate the violation of constraints,
which is the expected value of the auxiliary cost minus the budget threshold.
We observe that the averaged violation value converges to $0.1$ and many
policies in the last iterations do not violate the constraint at all. Figure 2
shows the relationship between $\sum_{t=0}^{T-1}\tilde{\eta}_{t}C(\pi_{t})$
and the reciprocal of the number of iterations (for constant step size) or the
reciprocal of square root of the number of iterations (for decreasing step
size). In both cases, we observe a straight line, which confirms the rates of
convergence developed in Theorem 4.1.
(a) $C(\pi_{T})$
(b) $\sum_{t=0}^{T-1}\tilde{\eta}_{t}C(\pi_{t})$
(c) $\|[\sum_{i=1}^{2}B^{i}(\pi_{T})-q]^{+}\|$
(d) $\sum_{t=0}^{T-1}\tilde{\eta}_{t}\|[\sum_{i=1}^{2}B(\pi_{t})-q]^{+}\|$
Figure 1: Trajectories of costs and constraints with constant step sizes
(a) $\eta_{m}=0.2$
(b) $\eta_{m}=0.2/\sqrt{m+1}$
Figure 2: Convergence rate with constant and decreasing step sizes
## 7 Application to Queueing Scheduling
In this section, we apply our primal-dual algorithm to a queue scheduling
problem, which is motivated by applications in service operations management.
Service systems often feature multiple classes of customers with different
service needs and multiple pools of servers with different skillsets.
Efficiently matching customers with compatible servers is critical to the
management of these systems. In this context, we consider a parallel-server
system (PSS) with multiple classes of customers and multiple pools (types) of
servers. Customers waiting in queue incur some holding costs and routing
customers to different pools leads to different routing costs. The goal is to
find a scheduling policy that minimizes the performance cost (holding cost
plus routing cost). This class of problems is known as the skill-based routing
problem and has been widely studied in the literature. We refer to (Chen et
al. 2020) for a comprehensive survey of related works.
In what follows, we first introduce the queueing model and some heuristic
policies adapted from policies developed in the literature. We then present
the implementation details of our primal-dual algorithm in this setting. Due
to the large state and action spaces, we combine our primal-dual algorithm
with several approximation techniques. Lastly, we compare the performance of
our policy with the benchmark policies numerically.
### 7.1 Model and Benchmarks
The multi-class multi-pool queueing network has $I$ classes of customers and
$J$ pools of servers. We consider a discrete time model. In each period, the
number of arrivals of class $i$ customers follows a Poisson distribution with
rate $\theta_{i}$. There are $N_{j}$ homogeneous servers in pool $j$,
$j\in[J]$. We assume that each customer can only be served by one server and
each server can only serve one customer at a time. If a class $i$ customer is
served by a server from pool $j$, its service time follows a geometric
distribution with success probability $\mu_{ij}$. When there is no
compatibility between customer class $i$ and server type $j$, $\mu_{ij}=0$.
Figure 3 provides a pictorial illustration of such a system.
Figure 3: Multi-class multi-pool queueing system
We consider non-preemptive scheduling policies. Let $A_{i}(t)$ denote the
number of new class $i$ arrivals in time period $t$, i.e., $A_{i}(t)$ follows
a Poisson distribution with rate $\theta_{i}$. Let $Z_{ij}(t)$ denote the
number of class $i$ customers in service in pool $j$ at the beginning of time
period $t$. We also denote $U_{ij}(t)$ as the number of class $i$ customers
assigned to pool $j$ for time period $t$. Note that $U_{ij}(t)$’s are
determined by our scheduling policy. Then the number of class $i$ departures
from pool $j$ at the end of time period $t$, $R_{ij}(t)$, follows a Binomial
distribution with parameter $Z_{ij}(t)+U_{ij}(t)$ and $\mu_{ij}$. Let
$X_{i}(t)$ denote the number of class $i$ customers waiting in queue at the
beginning of period $t$. Then we have the following system dynamics,
$\begin{split}&X_{i}(t+1)=X_{i}(t)+A_{i}(t)-\sum_{j=1}^{J}U_{ij}(t),~{}~{}\forall
i\in[I]\\\ &Z_{ij}(t+1)=Z_{ij}(t)+U_{ij}(t)-R_{ij}(t),~{}~{}\forall
i\in[I],j\in[J].\end{split}$ (26)
The state of the system is
$\bm{s}(t)=(X_{i}(t),Z_{ij}(t):i\in[I],j\in[J])\in\mathbb{N}^{I\times(J+1)}$.
The action is $\bm{a}(t)=(U_{ij}(t):i\in[I],j\in[J])\in\mathbb{N}^{I\times
J}$.
The routing policy needs to satisfy the following constraints
$\displaystyle U_{ij}(t)\in\mathbb{N},\ \sum_{j=1}^{J}U_{ij}(t)\leq X_{i}(t),\
\forall i\in[I],j\in[J],\ \forall t\geq 0.$ (27)
i.e., we cannot schedule more customers than there are waiting, and
$\displaystyle\sum_{i=1}^{I}Z_{ij}(t)+U_{ij}(t)\leq N_{j},\ \forall j\in[J],\
\forall t\geq 0,$ (28)
i.e., the number of customers in service cannot exceed the capacity. Note
that constraints (27)-(28) are hard constraints, i.e., they need to be
satisfied path-by-path.
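A minimal one-step simulator for the dynamics (26), with the hard constraints (27)-(28) checked before the update, can be sketched as follows (realized arrivals and departures are passed in, so the sketch is distribution-agnostic; the function name is ours):

```python
def queue_step(X, Z, U, A, R, N):
    """Advance the system dynamics (26) by one period, after checking the
    hard constraints (27)-(28) on the routing decision U.

    X[i]: queue lengths; Z[i][j]: in-service counts; U[i][j]: routing
    decisions; A[i]: realized arrivals; R[i][j]: realized departures;
    N[j]: pool capacities.
    """
    I, J = len(X), len(N)
    for i in range(I):
        # (27): cannot schedule more customers than are waiting
        assert sum(U[i]) <= X[i]
    for j in range(J):
        # (28): customers in service cannot exceed pool capacity
        assert sum(Z[i][j] + U[i][j] for i in range(I)) <= N[j]
    new_X = [X[i] + A[i] - sum(U[i]) for i in range(I)]
    new_Z = [[Z[i][j] + U[i][j] - R[i][j] for j in range(J)]
             for i in range(I)]
    return new_X, new_Z
```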
Each class $i$ customer waiting in queue incurs a holding cost of $h_{i}$ per
period. There is also a one-shot routing cost of $r_{ij}$ for scheduling a
class $i$ customer to a pool $j$ server. The overall cost for period $t$ is
given by
$c\big{(}\bm{s}(t),\bm{a}(t)\big{)}=\sum_{i=1}^{I}h_{i}X_{i}(t)+\sum_{i=1}^{I}\sum_{j=1}^{J}r_{ij}U_{ij}(t).$
Our goal is to minimize the cumulative discounted costs:
$(1-\gamma)\cdot{\mathbb{E}}^{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}\cdot
c(\bm{s}(t),\bm{a}(t))\right].$
The problem we consider here is a weakly coupled MDP with $I$ sub-problems,
where each sub-problem is an inverted-V model (i.e., a single customer class
and multiple server pools). In particular, for the $i$-th sub-problem, define
state and action as $s^{i}(t)=(X_{i}(t),Z_{i1}(t),\ldots,Z_{iJ}(t))$ and
$a^{i}(t)=(U_{i1}(t),\ldots,U_{iJ}(t))$. The transition dynamics of the $i$-th
sub-system follows
$X_{i}(t+1)=X_{i}(t)+A_{i}(t)-\sum_{j=1}^{J}U_{ij}(t),~{}~{}Z_{ij}(t+1)=Z_{ij}(t)+U_{ij}(t)-R_{ij}(t),~{}~{}\forall
j\in[J].$
Given $s^{i}(t)$, the corresponding action space is defined as
${\mathcal{A}}^{i}(s^{i}(t))=\Big{\\{}\big{\\{}U_{ij}(t)\big{\\}}_{j\in[J]}:U_{ij}(t)\in\mathbb{N},\
\sum_{j=1}^{J}U_{ij}(t)\leq X_{i}(t),\ Z_{ij}(t)+U_{ij}(t)\leq N_{j},\ \forall
j\in[J]\Big{\\}}.$
We also define the auxiliary cost function
$b^{i}(s^{i}(t),a^{i}(t))=\big{(}Z_{i1}(t)+U_{i1}(t),\ldots,Z_{iJ}(t)+U_{iJ}(t)\big{)}^{\top}\in\mathbb{N}^{J}.$
Then the capacity constraints (28) can be expressed as
$\displaystyle\sum_{i=1}^{I}b^{i}(s^{i}(t),a^{i}(t))\leq(N_{1},\ldots,N_{J})^{\top},$
which takes the same form as the linking constraint in (22).
There are three important features of the problem that we attempt to address
in this section: 1) non-preemptive routing; 2) class- and pool-dependent
service rates; 3) routing costs (overflow costs). The first two features require
us to keep track of a very high dimensional state space, i.e., $I(J+1)$. The
third feature has not been extensively studied in the literature.
We next introduce two heuristic policies adapted from policies developed in
the literature. For PSS with multiple classes of customers and multiple pools
of servers, a myopic policy called the $c\mu$-rule (or generalization of it),
has been shown to be asymptotically optimal in some systems, where the goal is
to minimize the holding cost (Mandelbaum and Stolyar 2004). The idea is to
minimize the instantaneous cost-reduction rate at each decision epoch. Another
policy is called the max-pressure policy, which is known to be throughput
optimal and asymptotically cost optimal for some forms of convex holding cost
(Stolyar et al. 2004, Dai et al. 2008). We next consider modified versions of
the above routing policies, which take the routing costs into account (Chen et
al. 2020). At each decision epoch $t$, we choose $U_{ij}(t)$’s that solve the
following optimization problem:
$\displaystyle\max_{U_{ij}(t)}$
$\displaystyle\quad\sum_{i=1}^{I}\sum_{j=1}^{J}\omega_{ij}(t)U_{ij}(t)$ s.t.
$\displaystyle\quad\sum_{j=1}^{J}U_{ij}(t)\leq X_{i}(t),\ \forall
i\in[I],j\in[J],\ \forall t\geq 0,$
$\displaystyle\quad\sum_{i=1}^{I}Z_{ij}(t)+U_{ij}(t)\leq N_{j},\ \forall
j\in[J],\ \forall t\geq 0,$ $\displaystyle\quad U_{ij}(t)\in\mathbb{N}\ \
\forall i\in[I],j\in[J],\ \forall t\geq 0,$
where the $\omega_{ij}(t)$’s are modified instantaneous costs that we introduce
next. We consider two different forms of $\omega_{ij}(t)$. The first sets
$\omega_{ij}(t)=h_{i}-r_{ij}$, which is adapted from the $c\mu$-rule; we refer
to this policy as the modified $c\mu$-rule. The second sets
$\omega_{ij}(t)=h_{i}X_{i}(t)-r_{ij}$, which is adapted from the max-pressure
policy; we refer to this policy as the modified max-pressure policy.
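The two weight choices, together with a simple greedy heuristic for the resulting routing problem, can be sketched as follows. Note that the exact problem above is an integer (transportation-type) program, so the greedy assignment here is an illustrative approximation, not an exact solver; all function names are ours.

```python
def modified_weights(h, r, X, rule):
    """Weights for the two myopic benchmarks: the modified c-mu rule uses
    omega_ij = h_i - r_ij; the modified max-pressure rule uses
    omega_ij = h_i * X_i(t) - r_ij."""
    I, J = len(h), len(r[0])
    if rule == "cmu":
        return [[h[i] - r[i][j] for j in range(J)] for i in range(I)]
    return [[h[i] * X[i] - r[i][j] for j in range(J)] for i in range(I)]

def greedy_routing(omega, X, free):
    """Greedy sketch for the routing problem: repeatedly fill the (i, j)
    pair with the largest positive weight, up to the queue length X[i]
    and the number of idle servers free[j]."""
    X, free = list(X), list(free)
    U = [[0] * len(free) for _ in X]
    pairs = sorted(((omega[i][j], i, j)
                    for i in range(len(X)) for j in range(len(free))),
                   reverse=True)
    for w, i, j in pairs:
        if w <= 0:
            continue
        k = min(X[i], free[j])
        U[i][j] += k
        X[i] -= k
        free[j] -= k
    return U
```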
### 7.2 Solution method
We consider the CMDP relaxation of the weakly coupled MDP:
$\begin{split}&\min_{\pi}\
(1-\gamma)\cdot{\mathbb{E}}^{\pi}\Big{[}\sum_{t=0}^{\infty}\gamma^{t}\cdot
c(\bm{s}(t),\bm{a}(t))\Big{]}\\\ \mbox{ s.t.
}&(1-\gamma)\cdot{\mathbb{E}}\Big{[}\sum_{i=1}^{I}\sum_{t=0}^{\infty}\gamma^{t}\cdot
b^{i}(s^{i}(t),a^{i}(t))\Big{]}\leq(N_{1},\ldots,N_{J})^{\top},\end{split}$
and apply the primal-dual algorithm to solve it. The decoupling allows us to
translate the original problem to $I$ sub-problems. In particular, in each
iteration, we use regularized policy iteration to update the scheduling policy
for a single-class multi-pool system with modified instantaneous cost:
$c_{\lambda}^{i}\big{(}s^{i}(t),a^{i}(t)\big{)}=h_{i}X_{i}(t)+\sum_{j=1}^{J}r_{ij}U_{ij}(t)+\sum_{j=1}^{J}\lambda_{j}\big{(}Z_{ij}(t)+U_{ij}(t)\big{)}$
for the $i$-th sub-problem.
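This modified cost is straightforward to compute; a minimal sketch for one sub-problem (the function and argument names are ours):

```python
def lagrangian_cost(h_i, r_i, lam, X_i, Z_i, U_i):
    """Modified instantaneous cost for sub-problem i:
    c_lambda = h_i * X_i + sum_j r_ij * U_ij
               + sum_j lambda_j * (Z_ij + U_ij)."""
    return (h_i * X_i
            + sum(r * u for r, u in zip(r_i, U_i))
            + sum(l * (z + u) for l, z, u in zip(lam, Z_i, U_i)))
```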
Even with the decomposition, the state and policy spaces are still too large
in this case. We next introduce some further approximations to reduce the
dimension of the problem. We shall omit the index $i$ in subsequent
discussions as the development focuses on each sub-problem.
Policy space reduction: For each sub-problem, the policy space is still
prohibitive. To see this, consider a system with $3$ pools and $30$ servers in
each pool. When the queue length is $90$ and all pools are empty, there are
roughly $30^{3}$ feasible actions. To overcome the challenge, we reduce the
action space to only include priority rules. State-dependent extreme policies
have been shown to be asymptotically optimal in the scheduling of PSS due to
the linear system dynamics and linear holding costs (Harrison and Zeevi 2004).
Denote $-1$ as the waiting option. The priority rule is denoted by a priority
list that ends with $-1$. For example, priority $(1,2,-1)$ means pool 1 is
preferred to pool 2, which is preferred to waiting. When following priority
$(1,2,-1)$, we first assign as many customers to pool 1 as possible. If there
are still customers waiting after pool 1 assignment, we start assigning them
to pool 2. After that, if there are still customers waiting, we keep them in
the queue. We denote this reduced policy space as $\tilde{\mathcal{A}}$.
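A priority list can be applied to a sub-problem state as follows (a sketch; representing idle servers as a dict `free` is our assumption about the state bookkeeping):

```python
def apply_priority(priority, x, free):
    """Apply a priority list ending in -1, e.g. (1, 2, -1): fill pool 1
    first, then pool 2, then keep the remaining customers in queue.
    `x` is the queue length; `free` maps pool index -> idle servers.
    Returns the per-pool assignments and the number left waiting."""
    U = {j: 0 for j in free}
    for p in priority:
        if p == -1:
            break
        k = min(x, free[p])
        U[p] = k
        x -= k
    return U, x
```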
Value function approximation: In our policy iteration step, given a policy
$\pi$, we need to estimate the function $Q^{\pi,\lambda}(s,a)$ for all
$s\in{\mathcal{S}}$, $a\in\tilde{\mathcal{A}}$, where the state
$s=(x,z_{1},\ldots,z_{J})$. Due to the large state space, we cannot enumerate
all the states to evaluate the value function. Instead, we use value function
approximation with quadratic basis. The idea is to find
$\theta^{\pi,a}\in\mathbb{R}^{(J+1)^{2}+1}$ such that
$Q^{\pi,\lambda}(s,a)\approx\langle\phi(s),\theta^{\pi,a}\rangle,$
where $\phi(s)$ is the quadratic basis. To obtain $\theta^{\pi,a}$ at each
iteration, we first randomly sample $M$ states $\\{s_{i}\\}_{i\in[M]}$ and use
Monte Carlo simulation to estimate $Q^{\pi,\lambda}(s_{i},a)$. Then, set
$\theta^{\pi,a}=\text{argmin}_{\theta}\Big{\\{}\frac{1}{M}\cdot\sum_{i=1}^{M}(Q^{\pi,\lambda}(s_{i},a)-\langle\phi(s_{i}),\theta\rangle)^{2}\Big{\\}}.$
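A sketch of this fitting step, using a basis consistent with the stated dimension $(J+1)^{2}+1$ (a constant plus the flattened outer product of the state with itself; the paper does not spell out the exact basis, so this choice is an assumption):

```python
import numpy as np

def quadratic_features(s):
    """One quadratic basis of dimension (J+1)^2 + 1: a constant plus the
    flattened outer product of the state with itself (an assumption; the
    exact basis is not specified in the text)."""
    s = np.asarray(s, dtype=float)
    return np.concatenate(([1.0], np.outer(s, s).ravel()))

def fit_theta(states, q_values):
    """Least-squares fit of theta minimizing
    (1/M) * sum_i (Q(s_i, a) - <phi(s_i), theta>)^2,
    solved via numpy's minimum-norm least squares."""
    Phi = np.stack([quadratic_features(s) for s in states])
    theta, *_ = np.linalg.lstsq(Phi, np.asarray(q_values, dtype=float),
                                rcond=None)
    return theta
```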
### 7.3 Experiment Results
For the numerical experiments, we consider a similar setting as that in Dai
and Shi (2019), which is motivated by hospital inpatient-flow management. In
particular, we consider a network with 3 classes of customers and 3 pools of
servers. Pool $i$ is considered the primary pool for class $i$ customers with
$r_{ii}=0$, $\forall i\in[I]$. The major difference between our model and the
model considered in Dai and Shi (2019) is that we allow the service rates to
vary for different server types, i.e., $\mu_{ij}$ depends on both $i$ and $j$.
This captures the potential slowdown effect due to off-service placement (Song
et al. 2020).
For the system parameters, we set the arrival rates
$(\theta_{1},\theta_{2},\theta_{3})=(12,16,20)$, the holding costs
$(h_{1},h_{2},h_{3})=(3,2,1)$, the pool sizes
$(N_{1},N_{2},N_{3})=(40,50,60)$, and the service rates
$(\mu_{11},\mu_{12},\mu_{13})=(0.3,0.25,0.2),(\mu_{21},\mu_{22},\mu_{23})=(0.15,0.3,0.2),(\mu_{31},\mu_{32},\mu_{33})=(0.25,0.1,0.4).$
We run two sets of experiments, corresponding to large routing/overflow costs:
$(r_{11},r_{12},r_{13})=(0,2,2),(r_{21},r_{22},r_{23})=(3,0,3),(r_{31},r_{32},r_{33})=(1,1,0),$
(29)
and small routing costs:
$(r_{11},r_{12},r_{13})=(0,0.2,0.2),(r_{21},r_{22},r_{23})=(0.3,0,0.3),(r_{31},r_{32},r_{33})=(0.1,0.1,0).$
(30)
Note that for class $i$ customers, the primary server pool $i$ has the largest
service rate and zero routing cost. For customer class $i$, we define its
nominal traffic intensity as $\rho_{i}=\theta_{i}/(N_{i}\mu_{ii})$. Then the
nominal traffic intensity of the three classes are $\rho_{1}=1$,
$\rho_{2}=16/15$, and $\rho_{3}=5/6$. This indicates that the first two
classes are unstable if we do not do any “overflow”.
We initialize the system with $X_{i}(0)=50$ and $Z_{11}(0)=20$,
$Z_{22}(0)=30$, $Z_{33}(0)=40$, and $Z_{ij}(0)=0$ for $i\neq j$, $i,j\in[3]$.
We compare the performance of our policy with the two benchmark policies for
problems with different routing costs and discount rates.
When constructing the policy space for our primal-dual algorithms, because
each customer class has a primary server pool with the fastest service rate
and zero routing cost, we always give the primary pool the highest priority.
In particular, the action spaces for three classes are defined as
$\begin{split}\mathcal{A}_{1}&=\\{(1,-1),(1,2,-1),(1,3,-1),(1,2,3,-1),(1,3,2,-1)\\},\\\
\mathcal{A}_{2}&=\\{(2,-1),(2,1,-1),(2,3,-1),(2,1,3,-1),(2,3,1,-1)\\},\mbox{
and }\\\
\mathcal{A}_{3}&=\\{(3,-1),(3,2,-1),(3,1,-1),(3,2,1,-1),(3,1,2,-1)\\},\mbox{
respectively.}\end{split}$
In our primal-dual update, we use the constant stepsize $0.1$. When using
simulation to estimate the value function, we truncate at $T=100,150,800$ for
$\gamma=0.9,0.95,0.99$ respectively. This ensures that $\gamma^{T}\approx
10^{-4}$, i.e., the truncation errors are almost negligible. When fitting the
parameters for the quadratic value function approximation, we sample $1000$
states and use simulation to estimate the $Q$-function at these states. For
each value of $\gamma$, we start with the Lagrangian multipliers
$\lambda_{0}=(10,10,10)$ and run the primal-dual algorithm for $30$ iterations,
and take the policy obtained in the last iteration. Note that this policy may
not be feasible to the original weakly coupled MDP. In order to obtain a
feasible policy, we adopt the following modification. In each period, for each
pool, when the number of scheduled customers exceeds the capacity, the primary
customers are prioritized for admission. We then admit the “overflowed”
customers uniformly at random until the capacity is reached. The customers who
are not admitted to service will be sent back to their corresponding queues
and wait for the next decision epoch. For example, suppose that there are $20$
servers available in pool 1 but the policy schedules $(15,5,5)$ customers from
the three classes to this pool. The modified policy first admits the $15$
customers from class $1$ and then randomly picks $5$ among the $10$ customers
of classes $2$ and $3$ to admit.
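This feasibility modification can be sketched per pool as follows (the function name is ours):

```python
import random

def admit_to_pool(scheduled, primary, capacity, rng=random):
    """Feasibility modification from the text: when the scheduled
    customers exceed a pool's capacity, admit the primary class first,
    then fill the remaining slots uniformly at random among the
    overflowed customers; everyone else returns to their queue.
    `scheduled[i]` is the number of class-i customers routed here."""
    if sum(scheduled) <= capacity:
        return list(scheduled)
    admitted = [0] * len(scheduled)
    admitted[primary] = min(scheduled[primary], capacity)
    slots = capacity - admitted[primary]
    # one entry per overflowed (non-primary) customer
    overflow = [i for i, k in enumerate(scheduled) if i != primary
                for _ in range(k)]
    for i in rng.sample(overflow, slots):
        admitted[i] += 1
    return admitted
```

For the worked example above, `admit_to_pool([15, 5, 5], 0, 20)` always admits the $15$ class-$1$ customers and then $5$ of the remaining $10$ at random.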
Given a policy, to evaluate its performance, we estimate the cumulative
discounted costs from $500$ independent replications of the system over $T$
periods of time. The results are summarized in Tables 1 and 2.
Table 1: Cumulative discounted costs under different policies with large routing costs (29) under different discount factors. (Numbers in bracket are the standard errors from simulation estimation.) | $\gamma=0.90$ | $\gamma=0.95$ | $\gamma=0.99$
---|---|---|---
modified $c\mu$-rule | $\quad 270.49\quad$ | $\quad 286.11\quad$ | $\quad 467.13\quad$
| $\quad(1.50)\quad$ | $\quad(2.07)\quad$ | $\quad(4.26)\quad$
modified max-pressure rule | $271.62$ | $269.19$ | $278.31$
| $(1.25)$ | $(1.74)$ | $(2.31)$
primal-dual algorithm | $\mathbf{252.77}$ | $\mathbf{243.87}$ | $\mathbf{218.95}$
| $(1.47)$ | $(2.26)$ | $(2.11)$
Table 2: Cumulative discounted costs under different policies with small routing costs (30) under different discount factors. (Numbers in bracket are the standard errors from simulation estimation.) | $\gamma=0.90$ | $\gamma=0.95$ | $\gamma=0.99$
---|---|---|---
modified $c\mu$-rule | $\quad 232.22\quad$ | $\quad 230.54\quad$ | $\quad 266.81\quad$
| $\quad(1.20)\quad$ | $\quad(1.71)\quad$ | $\quad(3.65)\quad$
modified max-pressure rule | $260.89$ | $266.83$ | $308.89$
| $(1.21)$ | $(1.77)$ | $(2.06)$
primal-dual algorithm | $\mathbf{253.53}$ | $\mathbf{251.20}$ | $\mathbf{210.34}$
| $(1.50)$ | $(2.40)$ | $(2.24)$
We observe that the policies obtained via the primal-dual algorithm perform
well. They outperform the two benchmark policies in most cases. When the
routing cost is large (Table 1), the cost under the modified $c\mu$-rule
increases substantially as the discount rate $\gamma$ increases. Taking a
closer look at the $\omega_{ij}(t)$’s, we note that in this case, $\omega_{21}(t)=-2.7$ and
$\omega_{23}(t)=-2.6$. This implies that the modified $c\mu$-rule would never
overflow class 2 customers. As a result, the system is unstable, i.e., the
class $2$ queue blows up as $t$ increases. (The cumulative discounted cost is
well-defined as the discount rate decays exponentially in $t$ while the queue
length grows linearly in $t$.) The modified max-pressure policy is able to achieve
reasonably good performance in this case. When $\gamma$ is small, our
algorithm achieves comparable (slightly better) performance to the max-
pressure policy. When $\gamma$ is large, i.e., $\gamma=0.99$, our policy is
able to achieve a substantially lower cost than the max-pressure policy, i.e.,
a 21% cost reduction. This is because the max-pressure policy only starts
overflowing when the queues are large enough. In this example where overflow
is necessary to achieve system stability, we need more aggressive overflow.
Our policy is able to “learn” this through the primal-dual training.
When the overflow cost is small (Table 2), the modified $c\mu$-rule is able
to achieve better performance than the modified max-pressure policy. Note that
in this case, all $w_{ij}(t)$’s are nonnegative for both the modified
$c\mu$-rule and the modified max-pressure policy (when $X_{i}(t)>0$). When
$\gamma$ is small, our policy achieves performance comparable to the modified
$c\mu$-rule; when $\gamma$ is large, i.e., $\gamma=0.99$, our policy can
achieve a $21\%$ cost reduction over the modified $c\mu$-rule. This suggests
that overflow needs to be exercised carefully.
We next discuss the structure of the policies obtained via the primal-dual
algorithm. We observe that our policies in general follow a threshold
structure: overflow customers only when the queue length exceeds some
threshold. However, the thresholds are highly dependent on the state of the
system. Take the scheduling policy for class $1$ and $2$ customers with
discount rate $\gamma=0.9$ as an example. In Figure 4, we plot the
threshold for starting to overflow for different values of $Z_{11}$ and
$Z_{22}$. We observe that, holding $Z_{12}$ and $Z_{13}$ fixed, as $Z_{11}$
increases, the threshold for overflow decreases. Similarly, holding $Z_{21}$
and $Z_{23}$ fixed, as $Z_{22}$ increases, the threshold for overflow also
decreases.
Figure 4: The threshold for starting to overflow for the class 1 and 2 queues. (a) Fix $Z_{12}=0,Z_{13}=0$, and vary $Z_{11}$. (b) Fix $Z_{21}=0,Z_{23}=0$, and vary $Z_{22}$.
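The state-dependent threshold behavior described above can be sketched in code. The threshold function below is purely hypothetical: the constants `base`, `sensitivity`, and `floor` are illustrative choices, not values learned by the algorithm, chosen only to mimic the observed monotonicity (the threshold decreases as the downstream-queue content grows):

```python
def overflow_threshold(z_downstream, base=8, sensitivity=0.5, floor=1):
    """Hypothetical state-dependent threshold: starts at `base` and
    decreases as the downstream-queue content z_downstream grows,
    but never drops below `floor`."""
    return max(floor, round(base - sensitivity * z_downstream))

def should_overflow(queue_length, z_downstream):
    """Overflow a customer only when the queue length exceeds the
    state-dependent threshold."""
    return queue_length > overflow_threshold(z_downstream)
```

For instance, with an empty downstream queue the threshold is 8, so a queue of length 5 is not overflowed; with 10 downstream customers the threshold drops to 3 and the same queue triggers overflow.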
## 8 Conclusion and Future Directions
In this work, we propose a sampling-based primal-dual algorithm to solve
CMDPs. Our approach alternately applies regularized policy iteration to
improve the policy and subgradient ascent to maintain the constraints. The
algorithm achieves an $O(\log(T)/\sqrt{T})$ convergence rate and requires
only one policy update at each primal-dual iteration. Our algorithm also enjoys
the decomposability property for weakly coupled CMDPs. We demonstrate the
applications of our algorithm to solve two important operations management
problems with weakly coupled structures: multi-product inventory management
and multi-class queue scheduling.
In Section 7, we also show the good empirical performance of our algorithm for
solving an important class of weakly coupled MDPs. This opens two directions for
future research. First, it would be important to theoretically quantify the
optimality gap between the weakly coupled MDP and its CMDP relaxation. The gap
can be large in some problems, as demonstrated in Adelman and Mersereau (2008).
It would be interesting to establish easy-to-verify conditions under which the
gap is small. Second, the policy obtained via the Lagrangian relaxation may
not satisfy the hard constraints in the original MDP. One approach to overcome
this issue is to use more stringent thresholds when defining constraints in the
CMDP relaxation (Balseiro et al. 2019). Another approach is to modify the
CMDP-based policies to construct good MDP policies. For example, Brown and
Smith (2020) study a dynamic assortment problem, propose an index heuristic
derived from the relaxed problem, and show that the policy achieves asymptotic
optimality. In Section 7, we apply a rather straightforward modification to
the CMDP-based policy in order to satisfy the hard constraints in the original
MDP. In general, how to “translate” a policy derived from the relaxed
problem back to the original MDP would be an interesting research direction.
## References
* Achiam et al. (2017) Achiam J, Held D, Tamar A, Abbeel P (2017) Constrained policy optimization. _arXiv preprint arXiv:1705.10528_ .
* Adelman and Mersereau (2008) Adelman D, Mersereau AJ (2008) Relaxations of weakly coupled stochastic dynamic programs. _Operations Research_ 56(3):712–727.
* Altman (1999) Altman E (1999) _Constrained Markov decision processes_ , volume 7 (CRC Press).
* Balseiro et al. (2019) Balseiro SR, Brown DB, Chen C (2019) Dynamic pricing of relocating resources in large networks. _ACM SIGMETRICS Performance Evaluation Review_ 47(1):29–30.
* Bertsekas and Scientific (2015) Bertsekas DP, Scientific A (2015) _Convex optimization algorithms_ (Athena Scientific Belmont).
* Bertsimas and Orlin (1994) Bertsimas D, Orlin JB (1994) A technique for speeding up the solution of the Lagrangian dual. _Mathematical Programming_ 63(1-3):23–45.
* Bhatnagar and Lakshmanan (2012) Bhatnagar S, Lakshmanan K (2012) An online actor–critic algorithm with function approximation for constrained Markov decision processes. _Journal of Optimization Theory and Applications_ 153(3):688–708.
* Borkar (2005) Borkar VS (2005) An actor-critic algorithm for constrained Markov decision processes. _Systems & control letters_ 54(3):207–213.
* Brown and Smith (2020) Brown DB, Smith JE (2020) Index policies and performance bounds for dynamic selection problems. _Management Science_ .
* Bubeck (2014) Bubeck S (2014) Convex optimization: Algorithms and complexity. _arXiv preprint arXiv:1405.4980_ .
* Caramanis et al. (2014) Caramanis C, Dimitrov NB, Morton DP (2014) Efficient algorithms for budget-constrained Markov decision processes. _IEEE Transactions on Automatic Control_ 59(10):2813–2817.
* Chen et al. (2020) Chen J, Dong J, Shi P (2020) A survey on skill-based routing with applications to service operations management. _Queueing Systems_ 1–30.
* Chen and Wang (2016) Chen Y, Wang M (2016) Stochastic primal-dual methods and sample complexity of reinforcement learning. _arXiv preprint arXiv:1612.02516_ .
* Chow et al. (2018) Chow Y, Nachum O, Duenez-Guzman E, Ghavamzadeh M (2018) A Lyapunov-based approach to safe reinforcement learning. _Advances in neural information processing systems_ , 8092–8101.
* Dai and Shi (2019) Dai J, Shi P (2019) Inpatient overflow: An approximate dynamic programming approach. _Manufacturing & Service Operations Management_ 21(4):894–911.
* Dai et al. (2008) Dai JG, Lin W, et al. (2008) Asymptotic optimality of maximum pressure policies in stochastic processing networks. _The Annals of Applied Probability_ 18(6):2239–2299.
* Gattami (2019) Gattami A (2019) Reinforcement learning for multi-objective and constrained Markov decision processes. _arXiv preprint arXiv:1901.08978_ .
* Geist et al. (2019) Geist M, Scherrer B, Pietquin O (2019) A theory of regularized Markov decision processes. _arXiv preprint arXiv:1901.11275_ .
* Haarnoja et al. (2017) Haarnoja T, Tang H, Abbeel P, Levine S (2017) Reinforcement learning with deep energy-based policies. _arXiv preprint arXiv:1702.08165_ .
* Harrison and Zeevi (2004) Harrison JM, Zeevi A (2004) Dynamic scheduling of a multiclass queue in the halfin-whitt heavy traffic regime. _Operations Research_ 52(2):243–257.
* Kakade and Langford (2002) Kakade S, Langford J (2002) Approximately optimal approximate reinforcement learning. _ICML_ , volume 2, 267–274.
* Le et al. (2019) Le HM, Voloshin C, Yue Y (2019) Batch policy learning under constraints. _arXiv preprint arXiv:1903.08738_ .
* Liu et al. (2019a) Liu B, Cai Q, Yang Z, Wang Z (2019a) Neural proximal/trust region policy optimization attains globally optimal policy. _arXiv preprint arXiv:1906.10306_ .
* Liu et al. (2019b) Liu Y, Ding J, Liu X (2019b) Ipo: Interior-point policy optimization under constraints. _arXiv preprint arXiv:1910.09615_ .
* Mandelbaum and Stolyar (2004) Mandelbaum A, Stolyar AL (2004) Scheduling flexible servers with convex delay costs: Heavy-traffic optimality of the generalized c$\mu$-rule. _Operations Research_ 52(6):836–855.
* Miryoosefi et al. (2019) Miryoosefi S, Brantley K, Daume III H, Dudik M, Schapire RE (2019) Reinforcement learning with convex constraints. _Advances in Neural Information Processing Systems_ , 14093–14102.
* Nedić and Ozdaglar (2009) Nedić A, Ozdaglar A (2009) Subgradient methods for saddle-point problems. _Journal of optimization theory and applications_ 142(1):205–228.
* Neely (2011) Neely MJ (2011) Online fractional programming for Markov decision systems. _2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_ , 353–360 (IEEE).
* Nemirovski (2012) Nemirovski A (2012) Tutorial: Mirror descent algorithms for large-scale deterministic and stochastic convex optimization. _Conference on Learning Theory (COLT)_.
* Schulman et al. (2015) Schulman J, Levine S, Abbeel P, Jordan M, Moritz P (2015) Trust region policy optimization. _International conference on machine learning_ , 1889–1897.
* Schulman et al. (2017) Schulman J, Wolski F, Dhariwal P, Radford A, Klimov O (2017) Proximal policy optimization algorithms. _arXiv preprint arXiv:1707.06347_ .
* Shani et al. (2019) Shani L, Efroni Y, Mannor S (2019) Adaptive trust region policy optimization: Global convergence and faster rates for regularized mdps. _arXiv preprint arXiv:1909.02769_ .
* Singh and Cohn (1998) Singh SP, Cohn D (1998) How to dynamically merge Markov decision processes. _Advances in neural information processing systems_ , 1057–1063.
* Sion et al. (1958) Sion M, et al. (1958) On general minimax theorems. _Pacific Journal of mathematics_ 8(1):171–176.
* Song et al. (2020) Song H, Tucker AL, Graue R, Moravick S, Yang JJ (2020) Capacity pooling in hospitals: The hidden consequences of off-service placement. _Management Science_ 66(9):3825–3842.
* Stolyar et al. (2004) Stolyar AL, et al. (2004) Maxweight scheduling in a generalized switch: State space collapse and workload minimization in heavy traffic. _The Annals of Applied Probability_ 14(1):1–53.
* Sutton and Barto (2018) Sutton RS, Barto AG (2018) _Reinforcement learning: An introduction_ (MIT press).
* Tessler et al. (2018) Tessler C, Mankowitz DJ, Mannor S (2018) Reward constrained policy optimization. _arXiv preprint arXiv:1805.11074_ .
* Turken et al. (2012) Turken N, Tan Y, Vakharia AJ, Wang L, Wang R, Yenipazarli A (2012) The multi-product newsvendor problem: Review, extensions, and directions for future research. _Handbook of newsvendor problems_ , 3–39 (Springer).
* Wang et al. (2019) Wang L, Cai Q, Yang Z, Wang Z (2019) Neural policy gradient methods: Global optimality and rates of convergence. _arXiv preprint arXiv:1909.01150_ .
## Proof of Main Results
The proof of Theorem 4.1 relies on the following lemma, which upper and lower
bounds the change in the Lagrangian after a single iteration/update of the
policy and the Lagrangian multipliers.
###### Lemma 8.1
Let $\\{(\pi_{m},\lambda_{m})\\}_{m\geq 0}$ be the sequences of stationary
policies and Lagrangian multipliers generated by Algorithm 1. Then for
arbitrary $\lambda\in{\mathbb{R}}^{K}_{+}$ and $\pi\in\Pi_{S}$, we have the
upper bound
$\displaystyle
L(\pi_{m},\lambda)-L(\pi_{m},\lambda_{m})\leq(2\eta_{m})^{-1}\cdot\big{(}\|\lambda-\lambda_{m}\|^{2}-\|\lambda-\lambda_{m+1}\|^{2}\big{)}+\eta_{m}/2\cdot\big{\|}\partial_{\lambda}L(\pi_{m},\lambda_{m})\big{\|}^{2},$
and the lower bound
$\displaystyle L(\pi,\lambda_{m})-L(\pi_{m},\lambda_{m})$
$\displaystyle\qquad\geq\big{(}(1-\gamma)\eta_{m}\big{)}^{-1}\Big{(}\Phi^{\pi}(\pi\|\pi_{m+1})-\Phi^{\pi}(\pi\|\pi_{m})\Big{)}-\frac{\eta_{m}}{8(1-\gamma)}\cdot\big{(}\sup_{s\in{\mathcal{S}}}\sup_{a\in{\mathcal{A}}}|Q^{\lambda_{m},\pi_{m}}(s,a)|\big{)}^{2}.$
Before we prove Lemma 8.1, we first present two auxiliary lemmas. The first
lemma (Lemma 8.2) is rather standard; a similar version of the result can be
found in Proposition 3.2.2 of Bertsekas and Scientific (2015). For
completeness, we still provide the proof here.
###### Lemma 8.2
Let $f$ be a proper convex function on a space $\Omega$ (not necessarily a
Euclidean space). Let ${\mathcal{C}}$ be an open set in $\Omega$, and
$\Psi_{\xi}(\cdot\|\cdot)$ be the Bregman divergence induced by a strictly
convex function $\xi$ on $\Omega$. For an arbitrary constant $\eta>0$ and a
point $x_{0}\in\Omega$, define
$x^{*}=\argmin_{x\in{\mathcal{C}}}\Big{\\{}f(x)+\frac{1}{\eta}\Psi_{\xi}\big{(}x\|x_{0}\big{)}\Big{\\}}.$
Then we have
$f(x)-f(x^{*})\geq\frac{1}{\eta}\Big{(}\Psi_{\xi}\big{(}x^{*}\|x_{0}\big{)}+\Psi_{\xi}\big{(}x\|x^{*}\big{)}-\Psi_{\xi}\big{(}x\|x_{0}\big{)}\Big{)},\
\forall\ x\in\Omega.$
By symmetry, for a concave function $g$ on $\Omega$ and
$\hat{x}^{*}=\argmax_{x\in{\mathcal{C}}}\Big{\\{}g(x)-\frac{1}{\eta}\Psi_{\xi}\big{(}x\|x_{0}\big{)}\Big{\\}},$
we have
$g(x)-g(\hat{x}^{*})\leq-\frac{1}{\eta}\Big{(}\Psi_{\xi}\big{(}\hat{x}^{*}\|x_{0}\big{)}+\Psi_{\xi}\big{(}x\|\hat{x}^{*}\big{)}-\Psi_{\xi}\big{(}x\|x_{0}\big{)}\Big{)},\
\forall\ x\in\Omega.$
###### Proof 8.3
Proof of Lemma 8.2 We first consider the minimization problem. Since $x^{*}$
minimizes the objective $f(x)+\eta^{-1}\cdot\Psi_{\xi}(x\|x_{0})$ on set
$\mathcal{C}$, there exists a subgradient of the form
$p^{*}=q^{*}+\eta^{-1}\cdot\partial_{x}\Psi_{\xi}(x^{*}\|x_{0})=q^{*}+\eta^{-1}\cdot\big{(}\nabla\xi(x^{*})-\nabla\xi(x_{0})\big{)}$
such that
$\langle p^{*},x-x^{*}\rangle\geq 0,\ \forall x\in\mathcal{C}.$
Here $q^{*}\in\partial_{x}f(x^{*})$ is some subgradient of $f(x)$ at $x^{*}$.
As a result, by the property of subgradient, for all $x\in\mathcal{C}$, we
have
$\displaystyle f(x)$ $\displaystyle\geq f(x^{*})+\langle q^{*},x-x^{*}\rangle$
$\displaystyle\geq
f(x^{*})+\eta^{-1}\cdot\langle\nabla\xi(x_{0})-\nabla\xi(x^{*}),x-x^{*}\rangle$
$\displaystyle=f(x^{*})+\eta^{-1}\cdot\Big{(}\Psi_{\xi}\big{(}x^{*}\|x_{0}\big{)}+\Psi_{\xi}\big{(}x\|x^{*}\big{)}-\Psi_{\xi}\big{(}x\|x_{0}\big{)}\Big{)},$
where the last equality follows from the definition of Bregman divergence,
i.e.,
$\Psi_{\xi}(x\|y)=\xi(x)-\xi(y)-\big{\langle}\nabla\xi(y),x-y\big{\rangle}.$
For the maximization problem, we only need to consider $-g$ and apply the
above result.
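As a one-dimensional sanity check of Lemma 8.2, take the squared-distance Bregman divergence $\Psi_{\xi}(x\|y)=\tfrac{1}{2}(x-y)^{2}$ (induced by $\xi(x)=\tfrac{1}{2}x^{2}$), $f(x)=|x-2|$, $\eta=1$, and $x_{0}=0$; the minimizer is the soft-thresholding point $x^{*}=1$, and the three-point inequality can be verified numerically on a grid (these concrete choices of $f$, $\eta$, and $x_{0}$ are illustrative, not from the paper):

```python
def bregman(x, y):
    # Psi(x||y) for the choice xi(x) = x^2 / 2
    return 0.5 * (x - y) ** 2

f = lambda x: abs(x - 2.0)
eta, x0 = 1.0, 0.0
x_star = 1.0  # argmin_x f(x) + bregman(x, x0)/eta (soft-thresholding point)

# Three-point inequality from Lemma 8.2, checked on a grid of x values.
for i in range(-50, 51):
    x = i / 10.0
    lhs = f(x) - f(x_star)
    rhs = (bregman(x_star, x0) + bregman(x, x_star) - bregman(x, x0)) / eta
    assert lhs >= rhs - 1e-12
```

Here the inequality reduces to $|x-2|-1\geq 1-x$, which holds with equality for all $x\leq 2$.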
The next lemma is Lemma 6.1 in Kakade and Langford (2002). Given two policies,
it characterizes the difference in expected accumulated costs as the inner
product of the advantage function of one policy and the occupation measure of
the other policy. Note that the value function $V^{\pi}$ and the action-value
function $Q^{\pi}$ of an MDP under policy $\pi$ are defined in (1) and (12).
###### Lemma 8.4
For arbitrary policies $\pi,\pi^{\prime}\in\Pi_{S}$,
${\mathbb{E}}_{s\sim\mu_{0}}\big{[}V^{\pi^{\prime}}(s)\big{]}-{\mathbb{E}}_{s\sim\mu_{0}}\big{[}V^{\pi}(s)\big{]}=\frac{1}{1-\gamma}{\mathbb{E}}_{(s,a)\sim\nu^{\pi^{\prime}}}\big{[}Q^{\pi}(s,a)-V^{\pi}(s)\big{]},$
where $\nu^{\pi^{\prime}}(\cdot,\cdot)$ is the occupation measure associated
with $\pi^{\prime}$.
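This performance-difference identity can be checked numerically on a toy MDP. Written with costs and the occupation measure of $\pi^{\prime}$, the identity holds in the form ${\mathbb{E}}_{\mu_{0}}[V^{\pi^{\prime}}]-{\mathbb{E}}_{\mu_{0}}[V^{\pi}]=\frac{1}{1-\gamma}{\mathbb{E}}_{\nu^{\pi^{\prime}}}[Q^{\pi}-V^{\pi}]$, which is the form used in (33). All numbers below (costs, transition kernel, policies, initial distribution) are arbitrary illustrative choices:

```python
# Toy 2-state, 2-action MDP; everything is computed with 2x2 linear solves.
gamma = 0.9
c = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.2, (1, 1): 2.0}   # costs c(s, a)
P = {(0, 0): [0.8, 0.2], (0, 1): [0.3, 0.7],               # kernel P(.|s, a)
     (1, 0): [0.5, 0.5], (1, 1): [0.9, 0.1]}
mu0 = [0.6, 0.4]                                           # initial distribution

def solve2(A, b):
    """Solve a 2x2 linear system A x = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def value(pi):
    """V^pi from the Bellman system (I - gamma * P_pi) V = c_pi."""
    Ppi = [[sum(pi[s][a] * P[(s, a)][t] for a in (0, 1)) for t in (0, 1)] for s in (0, 1)]
    cpi = [sum(pi[s][a] * c[(s, a)] for a in (0, 1)) for s in (0, 1)]
    A = [[(1 if s == t else 0) - gamma * Ppi[s][t] for t in (0, 1)] for s in (0, 1)]
    return solve2(A, cpi)

def occupation(pi):
    """State occupation measure nu_s^pi = (1-gamma) * mu0 (I - gamma P_pi)^{-1}."""
    Ppi = [[sum(pi[s][a] * P[(s, a)][t] for a in (0, 1)) for t in (0, 1)] for s in (0, 1)]
    # Row-vector system, so solve with the transposed matrix.
    A = [[(1 if s == t else 0) - gamma * Ppi[t][s] for t in (0, 1)] for s in (0, 1)]
    return solve2(A, [(1 - gamma) * m for m in mu0])

pi = [[0.7, 0.3], [0.4, 0.6]]    # policy pi(a|s)
pi2 = [[0.2, 0.8], [0.9, 0.1]]   # policy pi'

V = value(pi)
Q = {(s, a): c[(s, a)] + gamma * sum(P[(s, a)][t] * V[t] for t in (0, 1))
     for s in (0, 1) for a in (0, 1)}
V2, nu2 = value(pi2), occupation(pi2)

lhs = sum(mu0[s] * (V2[s] - V[s]) for s in (0, 1))
rhs = sum(nu2[s] * pi2[s][a] * (Q[(s, a)] - V[s])
          for s in (0, 1) for a in (0, 1)) / (1 - gamma)
assert abs(lhs - rhs) < 1e-9  # the performance-difference identity holds
```

The state occupation measure sums to one, and the two sides of the identity agree to numerical precision.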
###### Proof 8.5
Proof of Lemma 8.1 For the upper bound, note that because $L(\pi_{m},\lambda)$
is linear in $\lambda$,
$\lambda_{m+1}=\text{Proj}_{\Lambda_{M}}\big{\\{}\lambda_{m}+\eta_{m}\cdot\partial_{\lambda}L(\pi_{m},\lambda_{m})\big{\\}}$
is equivalent to
$\displaystyle\lambda_{m+1}=\argmax_{\lambda\in{\Lambda_{M}}}\Big{\\{}L(\pi_{m},\lambda)-\frac{1}{2\eta_{m}}\|\lambda-\lambda_{m}\|^{2}\Big{\\}}.$
Then, by Lemma 8.2, we have
$\displaystyle L(\pi_{m},\lambda)-L(\pi_{m},\lambda_{m+1})$
$\displaystyle\leq(2\eta_{m})^{-1}\big{(}\|\lambda-\lambda_{m}\|^{2}-\|\lambda-\lambda_{m+1}\|^{2}-\|\lambda_{m+1}-\lambda_{m}\|^{2}\big{)}$
$\displaystyle\leq(2\eta_{m})^{-1}\big{(}\|\lambda-\lambda_{m}\|^{2}-\|\lambda-\lambda_{m+1}\|^{2}\big{)}.$
Next,
$\displaystyle L(\pi_{m},\lambda)-L(\pi_{m},\lambda_{m})\leq$
$\displaystyle(2\eta_{m})^{-1}\big{(}\|\lambda-\lambda_{m}\|^{2}-\|\lambda-\lambda_{m+1}\|^{2}\big{)}+L(\pi_{m},\lambda_{m+1})-L(\pi_{m},\lambda_{m})$
$\displaystyle=$
$\displaystyle(2\eta_{m})^{-1}\big{(}\|\lambda-\lambda_{m}\|^{2}-\|\lambda-\lambda_{m+1}\|^{2}\big{)}+\big{\langle}\partial_{\lambda}L(\pi_{m},\lambda_{m}),\lambda_{m+1}-\lambda_{m}\big{\rangle}$
$\displaystyle\leq$
$\displaystyle(2\eta_{m})^{-1}\big{(}\|\lambda-\lambda_{m}\|^{2}-\|\lambda-\lambda_{m+1}\|^{2}\big{)}+\eta_{m}/2\cdot\big{\|}\partial_{\lambda}L(\pi_{m},\lambda_{m})\big{\|}^{2},$
where the last inequality follows from the definition of $\lambda_{m+1}$ and
the non-expansive property of the projection. Then we obtain the upper bound.
For the lower bound, recall that we update $\pi_{m}$ via
$\displaystyle\pi_{m+1}(\cdot|s)=\argmin_{\pi(\cdot|s)\in\Delta_{{\mathcal{A}}}}\Big{\\{}\big{\langle}Q^{\pi_{m},\lambda_{m}}(s,\cdot),\pi(\cdot|s)\big{\rangle}+\frac{1}{\eta_{m}}{\text{KL}}\big{(}\pi(\cdot|s)\|\pi_{m}(\cdot|s)\big{)}\Big{\\}},$
for each state $s\in{\mathcal{S}}$. Then, for an arbitrary stationary policy
$\pi^{\prime}\in\Pi_{S}$, we have
$\displaystyle\pi_{m+1}=\argmin_{\pi\in\Pi_{S}}\Big{\\{}{\mathbb{E}}_{s\sim\nu_{s}^{\pi^{\prime}}}\Big{[}\big{\langle}Q^{\pi_{m},\lambda_{m}}(s,\cdot),\pi(\cdot|s)\big{\rangle}+\frac{1}{\eta_{m}}{\text{KL}}\big{(}\pi(\cdot|s)\|\pi_{m}(\cdot|s)\big{)}\Big{]}\Big{\\}}$
where $\nu_{s}^{\pi^{\prime}}$ is the state occupation measure associated with
$\pi^{\prime}$.
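The per-state update above is a KL-regularized (mirror-descent) step over the simplex, and it admits the standard closed form $\pi_{m+1}(a|s)\propto\pi_{m}(a|s)\exp\big(-\eta_{m}Q^{\pi_{m},\lambda_{m}}(s,a)\big)$. A minimal sketch of this closed form (an illustration of the well-known formula, not code from the paper):

```python
import math

def kl_prox_update(pi_s, q_s, eta):
    """Closed-form minimizer of <q_s, pi> + KL(pi || pi_s)/eta over the
    probability simplex: a multiplicative-weights reweighting of pi_s."""
    m = min(q_s)  # shift by the minimum for numerical stability
    w = [p * math.exp(-eta * (q - m)) for p, q in zip(pi_s, q_s)]
    z = sum(w)
    return [x / z for x in w]
```

For example, starting from a uniform policy, actions with a large action-value (high cost) receive exponentially down-weighted probability, while a zero cost vector leaves the policy unchanged.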
Note that the space of the stationary policy, $\Pi_{S}$, can be represented as
the product space of simplex $\Delta_{{\mathcal{A}}}$. Consider
$\Omega:=\Pi_{S}=\big{(}\Delta_{{\mathcal{A}}}\big{)}^{\otimes|{\mathcal{S}}|}$
and let
$\displaystyle g(\pi)$
$\displaystyle:={\mathbb{E}}_{s\sim\nu_{s}^{\pi^{\prime}}}\Big{[}\big{\langle}Q^{\pi_{m},\lambda_{m}}(s,\cdot),\pi(\cdot|s)\big{\rangle}\Big{]},$
$\displaystyle\Psi_{\xi}(\pi)$
$\displaystyle:={\mathbb{E}}_{s\sim\nu_{s}^{\pi^{\prime}}}\Big{[}{\text{KL}}\big{(}\pi(\cdot|s)\|\pi_{m}(\cdot|s)\big{)}\Big{]}=\Phi^{\pi^{\prime}}(\pi\|\pi_{m}).$
where $\Phi^{\pi^{\prime}}$ is defined in (21). Since $g(\pi)$ is linear in
$\pi$, setting $\pi=\pi^{\prime}$, by Lemma 8.2, we obtain
$\displaystyle{\mathbb{E}}_{s\sim\nu_{s}^{\pi^{\prime}}}\Big{[}\big{\langle}Q^{\pi_{m},\lambda_{m}}(s,\cdot),\pi^{\prime}(\cdot|s)-\pi_{m+1}(\cdot|s)\big{\rangle}\Big{]}\geq\eta_{m}^{-1}\Big{(}\Phi^{\pi^{\prime}}(\pi_{m+1}\|\pi_{m})+\Phi^{\pi^{\prime}}(\pi^{\prime}\|\pi_{m+1})-\Phi^{\pi^{\prime}}(\pi^{\prime}\|\pi_{m})\Big{)},$
which can be equivalently written as
$\displaystyle\eta_{m}^{-1}\cdot\Big{(}\Phi^{\pi^{\prime}}(\pi^{\prime}\|\pi_{m+1})-\Phi^{\pi^{\prime}}(\pi^{\prime}\|\pi_{m})+\Phi^{\pi^{\prime}}(\pi_{m+1}\|\pi_{m})\Big{)}$
$\displaystyle\leq{\mathbb{E}}_{s\sim\nu_{s}^{\pi^{\prime}}}\Big{[}\big{\langle}Q^{\pi_{m},\lambda_{m}}(s,\cdot),\pi^{\prime}(\cdot|s)-\pi_{m}(\cdot|s)\big{\rangle}\Big{]}+{\mathbb{E}}_{s\sim\nu_{s}^{\pi^{\prime}}}\Big{[}\big{\langle}Q^{\pi_{m},\lambda_{m}}(s,\cdot),\pi_{m}(\cdot|s)-\pi_{m+1}(\cdot|s)\big{\rangle}\Big{]}.$
(31)
We next derive an upper bound for the right-hand side of inequality (31).
Let $\|\cdot\|_{\text{TV}}$ denote the total variation distance between
probability distributions. First, for each state $s\in{\mathcal{S}}$,
$\displaystyle\eta_{m}\cdot\big{\langle}Q^{\pi_{m},\lambda_{m}}(s,\cdot),\pi_{m}(\cdot|s)-\pi_{m+1}(\cdot|s)\big{\rangle}$
$\displaystyle\leq$
$\displaystyle\eta_{m}\cdot\sup_{a\in{\mathcal{A}}}\big{|}Q^{\pi_{m},\lambda_{m}}(s,a)\big{|}\cdot\big{\|}\pi_{m}(\cdot|s)-\pi_{m+1}(\cdot|s)\big{\|}_{\text{TV}}$
$\displaystyle\leq$
$\displaystyle\frac{\eta_{m}^{2}}{8}\cdot\Big{(}\sup_{s\in{\mathcal{S}}}\sup_{a\in{\mathcal{A}}}\big{|}Q^{\pi_{m},\lambda_{m}}(s,a)\big{|}\Big{)}^{2}+2\cdot\big{\|}\pi_{m+1}(\cdot|s)-\pi_{m}(\cdot|s)\big{\|}^{2}_{\text{TV}}$
$\displaystyle\leq$
$\displaystyle\frac{\eta_{m}^{2}}{8}\cdot\Big{(}\sup_{s\in{\mathcal{S}}}\sup_{a\in{\mathcal{A}}}\big{|}Q^{\pi_{m},\lambda_{m}}(s,a)\big{|}\Big{)}^{2}+{\text{KL}}\big{(}\pi_{m+1}(\cdot|s)\|\pi_{m}(\cdot|s)\big{)}\mbox{\
(by Pinsker's inequality)}.$
Hence, by taking the average, we obtain
$\begin{split}&{\mathbb{E}}_{s\sim\nu_{s}^{\pi^{\prime}}}\Big{[}\big{\langle}Q^{\pi_{m},\lambda_{m}}(s,\cdot),\pi_{m}(\cdot|s)-\pi_{m+1}(\cdot|s)\big{\rangle}\Big{]}\\\
\leq&\frac{\eta_{m}}{8}\cdot\Big{(}\sup_{s\in{\mathcal{S}}}\sup_{a\in{\mathcal{A}}}\big{|}Q^{\pi_{m},\lambda_{m}}(s,a)\big{|}\Big{)}^{2}+\eta^{-1}_{m}\cdot\Phi^{\pi^{\prime}}(\pi_{m+1}\|\pi_{m}).\end{split}$
(32)
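The chain above combines Young's inequality ($ab\leq a^{2}/8+2b^{2}$) with Pinsker's inequality, $2\|p-q\|_{\mathrm{TV}}^{2}\leq\mathrm{KL}(p\|q)$. The latter can be spot-checked numerically on random distributions:

```python
import math, random

def tv(p, q):
    """Total variation distance: half the L1 distance."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def kl(p, q):
    """KL divergence (assumes q > 0 wherever p > 0)."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

random.seed(0)
for _ in range(1000):
    p = [random.random() + 1e-3 for _ in range(4)]
    q = [random.random() + 1e-3 for _ in range(4)]
    p = [x / sum(p) for x in p]
    q = [x / sum(q) for x in q]
    # Pinsker's inequality: 2 * TV(p, q)^2 <= KL(p || q)
    assert 2 * tv(p, q) ** 2 <= kl(p, q) + 1e-12
```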
Second, recall that $\nu^{\pi}(s,a)=\nu_{s}^{\pi}(s)\cdot\pi(a|s)$ and
$V^{\pi}(s)=\langle Q^{\pi}(s,\cdot),\pi(\cdot|s)\rangle$. Then, by Lemma 8.4,
for the modified unconstrained MDP, we have
$\displaystyle{\mathbb{E}}_{s\sim\nu_{s}^{\pi^{\prime}}}\Big{[}\big{\langle}Q^{\pi_{m},\lambda_{m}}(s,\cdot),\pi^{\prime}(\cdot|s)-\pi_{m}(\cdot|s)\big{\rangle}\Big{]}$
$\displaystyle={\mathbb{E}}_{(s,a)\sim\nu^{\pi^{\prime}}}\big{[}Q^{\pi_{m},\lambda_{m}}(s,a)-V^{\pi_{m},\lambda_{m}}(s)\big{]}$
$\displaystyle=(1-\gamma)\cdot\big{(}L(\pi^{\prime},\lambda_{m})-L(\pi_{m},\lambda_{m})\big{)}.$
(33)
Finally, combining (31)-(33), we obtain
$\displaystyle L(\pi^{\prime},\lambda_{m})-L(\pi_{m},\lambda_{m})$
$\displaystyle\geq$
$\displaystyle\big{(}(1-\gamma)\eta_{m}\big{)}^{-1}\Big{(}\Phi^{\pi^{\prime}}(\pi^{\prime}\|\pi_{m+1})-\Phi^{\pi^{\prime}}(\pi^{\prime}\|\pi_{m})\Big{)}-\frac{\eta_{m}}{8(1-\gamma)}\big{(}\sup_{s\in{\mathcal{S}}}\sup_{a\in{\mathcal{A}}}|Q^{\pi_{m},\lambda_{m}}(s,a)|\big{)}^{2}.$
We are now ready to prove Theorem 4.1.
###### Proof 8.6
Proof of Theorem 4.1 We prove the bound for $D(\bar{\pi}_{T})-q$ first. For
this, we only need to establish an upper bound for
$\big{\|}[\partial_{\lambda}L(\bar{\pi}_{T},\lambda)]^{+}\big{\|}.$
Since $L(\pi_{m},\lambda)$ is linear in $\lambda$, we have
$\displaystyle
L(\pi_{m},\lambda_{m})-L(\pi_{m},\lambda^{*})=(\lambda_{m}-\lambda^{*})^{\top}\partial_{\lambda}L(\pi_{m},\lambda_{m}).$
(34)
By the first part of Lemma 8.1, for any $\lambda$, we have
$\displaystyle\eta_{m}\cdot(\lambda-\lambda_{m})^{\top}\partial_{\lambda}L(\pi_{m},\lambda_{m})$
$\displaystyle=\eta_{m}\cdot(L(\pi_{m},\lambda)-L(\pi_{m},\lambda_{m}))$
$\displaystyle\leq\big{(}\|\lambda_{m}-\lambda\|^{2}-\|\lambda_{m+1}-\lambda\|^{2}\big{)}/2+\eta_{m}^{2}G^{2}/2.$
(35)
On the other hand, by the saddle point property of $(\pi^{*},\lambda^{*})$, we
also have
$\displaystyle L(\pi_{m},\lambda^{*})\geq L(\pi^{*},\lambda^{*}).$ (36)
In the following, we denote $L^{*}:=L(\pi^{*},\lambda^{*})$. By combining
inequalities (34)-(36), we obtain
$\displaystyle\eta_{m}\cdot(\lambda-\lambda^{*})^{\top}\partial_{\lambda}L(\pi_{m},\lambda_{m})$
$\displaystyle=$
$\displaystyle\eta_{m}\cdot(\lambda-\lambda_{m})^{\top}\partial_{\lambda}L(\pi_{m},\lambda_{m})+\eta_{m}\cdot(\lambda_{m}-\lambda^{*})^{\top}\partial_{\lambda}L(\pi_{m},\lambda_{m})$
$\displaystyle\leq$
$\displaystyle\big{(}\|\lambda_{m}-\lambda\|^{2}-\|\lambda_{m+1}-\lambda\|^{2}\big{)}/2+\eta_{m}^{2}G^{2}/2+\eta_{m}\cdot\big{(}L(\pi_{m},\lambda_{m})-L^{*}\big{)}.$
By taking the telescoping sum of the above inequality, for any
$\lambda\in\Lambda_{M}$, we have
$\begin{split}&\sum_{m=0}^{T-1}\eta_{m}\cdot(\lambda-\lambda^{*})^{\top}\partial_{\lambda}L(\pi_{m},\lambda_{m})\\\
\leq&\big{(}\|\lambda_{0}-\lambda\|^{2}-\|\lambda_{T}-\lambda\|^{2}\big{)}/2+\Big{(}\sum_{m=0}^{T-1}\eta_{m}^{2}/2\Big{)}\cdot
G^{2}\\\
&+\sum_{m=0}^{T-1}\eta_{m}\cdot\big{(}L(\pi_{m},\lambda_{m})-L^{*}\big{)}.\end{split}$
(37)
For the left-hand side of (37), let
$\zeta_{T}:=\sum_{m=0}^{T-1}\eta_{m}\cdot\partial_{\lambda}L(\pi_{m},\lambda_{m})=\Big{(}\sum_{m=0}^{T-1}\eta_{m}\Big{)}\cdot\partial_{\lambda}L(\bar{\pi}_{T},\lambda),$
where the last equality follows from the definition of $\bar{\pi}_{T}$ and the
linearity of the value function under the mixing operation. If
$[\zeta_{T}]^{+}=0$, then the upper bound holds trivially. Otherwise, let
$\tilde{\lambda}=\lambda^{*}+r\cdot\frac{[\zeta_{T}]^{+}}{\big{\|}[\zeta_{T}]^{+}\big{\|}},$
where $r$ is the slackness constant in the definition of $\Lambda_{M}$ in
(19). Then it is easy to see that $\tilde{\lambda}\in\Lambda_{M}$. By (37), we
have
$\displaystyle(\tilde{\lambda}-\lambda^{*})^{\top}\zeta_{T}\leq\max_{\lambda\in\Lambda_{M}}\|\lambda-\lambda_{0}\|^{2}/2+\Big{(}\sum_{m=0}^{T-1}\eta_{m}^{2}/2\Big{)}\cdot
G^{2}+\sum_{m=0}^{T-1}\eta_{m}\cdot\big{(}L(\pi_{m},\lambda_{m})-L^{*}\big{)}.$
By the definition of $\tilde{\lambda}$, we also have
$(\tilde{\lambda}-\lambda^{*})^{\top}\zeta_{T}=r\cdot\frac{([\zeta_{T}]^{+})^{\top}\zeta_{T}}{\big{\|}[\zeta_{T}]^{+}\big{\|}}=r\cdot\big{\|}[\zeta_{T}]^{+}\big{\|}=r\cdot\Big{(}\sum_{m=0}^{T-1}\eta_{m}\Big{)}\cdot\big{\|}[\partial_{\lambda}L(\bar{\pi}_{T},\lambda)]^{+}\big{\|}.$
Hence,
$\displaystyle\big{\|}[\partial_{\lambda}L(\bar{\pi}_{T},\lambda)]^{+}\big{\|}$
$\displaystyle\leq\frac{\max_{\lambda\in\Lambda_{M}}\|\lambda-\lambda_{0}\|^{2}}{2r\cdot\sum_{m=0}^{T-1}\eta_{m}}+G^{2}\frac{\sum_{m=0}^{T-1}\eta_{m}^{2}/2}{2r\cdot\sum_{m=0}^{T-1}\eta_{m}}+\frac{\sum_{m=0}^{T-1}\eta_{m}\cdot\big{(}L(\pi_{m},\lambda_{m})-L^{*}\big{)}}{2r\cdot\sum_{m=0}^{T-1}\eta_{m}}.$
(38)
Next, recall that $\bar{\pi}_{T}=\sum_{m=0}^{T-1}\tilde{\eta}_{m}\pi_{m}$,
where $\tilde{\eta}_{m}={\eta_{m}}/({\sum_{m=0}^{T-1}\eta_{m}}),\
m=0,\ldots,T-1.$ Since $\lambda^{*}$ is the optimal solution of the dual
problem and $L(\pi^{*},\lambda^{*})\geq L(\pi^{*},\bar{\lambda}_{T})$, by the
saddle point property, we have
$\displaystyle\sum_{m=0}^{T-1}\tilde{\eta}_{m}\cdot\big{(}L(\pi_{m},\lambda_{m})-L^{*}\big{)}$
$\displaystyle=\sum_{m=0}^{T-1}\tilde{\eta}_{m}\cdot
L(\pi_{m},\lambda_{m})-L^{*}$
$\displaystyle\leq\sum_{m=0}^{T-1}\tilde{\eta}_{m}\cdot
L(\pi_{m},\lambda_{m})-L(\pi^{*},\bar{\lambda}_{T})$
$\displaystyle{=}\sum_{m=0}^{T-1}\tilde{\eta}_{m}\cdot\big{(}L(\pi_{m},\lambda_{m})-L(\pi^{*},{\lambda}_{m})\big{)}.$
(39)
Similarly, under Assumption 4, by the second part of Lemma 8.1, we have
$\displaystyle\sum_{m=0}^{T-1}\tilde{\eta}_{m}\cdot\big{(}L(\pi_{m},\lambda_{m})-L(\pi^{*},{\lambda}_{m})\big{)}$
$\displaystyle\leq\Big{(}(1-\gamma)\cdot\sum_{m=0}^{T-1}\eta_{m}\Big{)}^{-1}\cdot\Big{(}\frac{G^{2}}{8}\cdot\sum_{m=0}^{T-1}\eta^{2}_{m}+\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\Big{)},$
(40)
as the weighted KL divergence $\Phi^{\pi^{*}}(\cdot||\cdot)$ is nonnegative.
Lastly, combining inequalities (38)-(40), we have
$\displaystyle\big{\|}[\partial_{\lambda}L(\bar{\pi}_{T},\lambda)]^{+}\big{\|}\leq\frac{G^{2}}{2r\cdot\sum_{m=0}^{T-1}\eta_{m}}+\Big{(}\frac{1}{2}+\frac{1}{8(1-\gamma)}\Big{)}G^{2}\frac{\sum_{m=0}^{T-1}\eta_{m}^{2}}{2r\cdot\sum_{m=0}^{T-1}\eta_{m}}+\frac{(1-\gamma)^{-1}\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})}{2r\cdot\sum_{m=0}^{T-1}\eta_{m}}.$
If we set $\eta_{m}={\Theta}(1/\sqrt{m})$, there exist finite constants
$\kappa_{1}$ and $\kappa_{2}$ such that
$\sum_{m=0}^{T-1}\eta_{m}\geq\kappa_{1}\sqrt{T}\text{ and
}\sum_{m=0}^{T-1}\eta^{2}_{m}\leq\kappa_{2}\log(T).$
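The existence of such constants can be spot-checked numerically. Below, $\eta_{m}=1/\sqrt{m+1}$ is taken as one concrete instance of $\Theta(1/\sqrt{m})$ (shifted by one to avoid division by zero at $m=0$), and $\kappa_{1}=1$, $\kappa_{2}=2$ are illustrative constants that the check confirms for $T\geq 3$:

```python
import math

# With eta_m = 1/sqrt(m+1), verify the two sums used in the proof:
#   sum_{m<T} eta_m    >= kappa1 * sqrt(T)   (here kappa1 = 1)
#   sum_{m<T} eta_m^2  <= kappa2 * log(T)    (here kappa2 = 2, for T >= 3)
for T in range(3, 1001):
    s1 = sum(1.0 / math.sqrt(m + 1) for m in range(T))
    s2 = sum(1.0 / (m + 1) for m in range(T))
    assert s1 >= math.sqrt(T)
    assert s2 <= 2.0 * math.log(T)
```

The first sum behaves like $2\sqrt{T}$ and the second like $\log T$ (the harmonic number), so both bounds hold with room to spare for large $T$.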
Subsequently, we obtain
$\displaystyle\big{\|}[\partial_{\lambda}L(\bar{\pi}_{T},\lambda)]^{+}\big{\|}$
$\displaystyle\leq\Big{(}G^{2}\cdot\Big{(}1+\frac{5}{8}\kappa_{2}\log(T)\Big{)}+\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\Big{)}\frac{1}{2r(1-\gamma)\kappa_{1}\sqrt{T}}.$
Similarly, if we set $\eta_{m}=\eta$ (constant step size), then
$\displaystyle\big{\|}[\partial_{\lambda}L(\bar{\pi}_{T},\lambda)]^{+}\big{\|}$
$\displaystyle\leq\big{(}G^{2}+(1-\gamma)^{-1}\cdot\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\big{)}\frac{1}{2rT\eta}+\Big{(}\frac{1}{2}+\frac{1}{8(1-\gamma)}\Big{)}\frac{G^{2}\eta}{2r}.$
We next prove the bound for $C(\bar{\pi}_{T})-L^{*}$. We start with the upper
bound. By the definition of $\bar{\pi}_{T}$, we have
$\displaystyle
C(\bar{\pi}_{T})-L^{*}=\sum_{m=0}^{T-1}\tilde{\eta}_{m}\cdot\big{(}L(\pi_{m},\lambda_{m})-L^{*}\big{)}-\sum_{m=0}^{T-1}\tilde{\eta}_{m}\cdot\lambda_{m}^{\top}(D(\pi_{m})-q).$
(41)
From inequalities (39) and (40), we have
$\sum_{m=0}^{T-1}\tilde{\eta}_{m}\cdot\big{(}L(\pi_{m},\lambda_{m})-L^{*}\big{)}\leq\Big{(}(1-\gamma)\cdot\sum_{m=0}^{T-1}\eta_{m}\Big{)}^{-1}\cdot\Big{(}\frac{G^{2}}{8}\sum_{m=0}^{T-1}\eta^{2}_{m}+\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\Big{)}.$
Next, since $D(\pi_{m})-q=\partial_{\lambda}L(\pi_{m},\lambda_{m})$, setting
$\lambda=0$ in (35), we similarly obtain
$-\sum_{m=0}^{T-1}\tilde{\eta}_{m}\cdot\lambda_{m}^{\top}(D(\pi_{m})-q)\leq\frac{\|\lambda_{0}\|^{2}+G^{2}\cdot\sum_{m=0}^{T-1}\eta_{m}^{2}}{2\sum_{m=0}^{T-1}\eta_{m}}.$
Hence, if $\eta_{m}=\Theta(1/\sqrt{m})$, we have
$C(\bar{\pi}_{T})-L^{*}\leq\Big{(}\frac{5G^{2}}{8}\cdot\kappa_{2}\cdot\log(T)+\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})+\frac{\|\lambda_{0}\|^{2}}{2}\Big{)}\frac{1}{(1-\gamma)\kappa_{1}\sqrt{T}}.$
For the lower bound, by the saddle point property, we have
$C(\bar{\pi}_{T})=L(\bar{\pi}_{T},\lambda^{*})-(\lambda^{*})^{\top}D(\bar{\pi}_{T})\geq
L^{*}-(\lambda^{*})^{\top}D(\bar{\pi}_{T}).$
Since $\lambda^{*}\geq 0$ and $D(\bar{\pi}_{T})\leq[D(\bar{\pi}_{T})]^{+}$,
$\displaystyle C(\bar{\pi}_{T})-L^{*}$
$\displaystyle\geq-\|\lambda^{*}\|\big{\|}[D(\bar{\pi}_{T})]^{+}\big{\|}$
$\displaystyle\geq-\|\lambda^{*}\|\Big{(}G^{2}\Big{(}1+\frac{5}{8}\kappa_{2}\log(T)\Big{)}+\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\Big{)}\frac{1}{2r(1-\gamma)\kappa_{1}\sqrt{T}}.$
Similarly, if $\eta_{m}=\eta$, we have
$\displaystyle C(\bar{\pi}_{T})-L^{*}$
$\displaystyle\leq\big{(}(1-\gamma)^{-1}\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})+\|\lambda_{0}\|^{2}/2\big{)}\frac{1}{T\eta}+\frac{5G^{2}\eta}{8(1-\gamma)},$
$\displaystyle C(\bar{\pi}_{T})-L^{*}$
$\displaystyle\geq-\|\lambda^{*}\|\big{(}G^{2}+(1-\gamma)^{-1}\Phi^{\pi^{*}}(\pi^{*}\|\pi_{0})\big{)}\frac{1}{2rT\eta}-\|\lambda^{*}\|\Big{(}\frac{1}{2}+\frac{1}{8(1-\gamma)}\Big{)}\frac{G^{2}\eta}{2r}.$
# Artificial Intelligence
for Satellite Communication: A Review
Fares Fourati, Mohamed-Slim Alouini
Fares Fourati and Mohamed-Slim Alouini are with King Abdullah University of Science and Technology (KAUST), CEMSE Division, Thuwal, 23955-6900, KSA (e-mail:<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Satellite communication offers the prospect of service continuity over
uncovered and under-covered areas, service ubiquity, and service scalability.
However, several challenges must first be addressed to realize these benefits,
as the resource management, network control, network security, spectrum
management, and energy usage of satellite networks are more challenging than
that of terrestrial networks. Meanwhile, artificial intelligence (AI),
including machine learning, deep learning, and reinforcement learning, has
been steadily growing as a research field and has shown successful results in
diverse applications, including wireless communication. In particular, the
application of AI to a wide variety of satellite communication aspects has
demonstrated excellent potential, including beam hopping, anti-jamming,
network traffic forecasting, channel modeling, telemetry mining, ionospheric
scintillation detection, interference management, remote sensing, behavior
modeling, space-air-ground integration, and energy management. This work thus
provides a general overview of AI, its diverse sub-fields, and its state-of-
the-art algorithms. Several challenges facing diverse aspects of satellite
communication systems are then discussed, and their proposed and potential AI-
based solutions are presented. Finally, an outlook on the field is given, and
future steps are suggested.
###### Index Terms:
Satellite Communication, Artificial Intelligence, Machine Learning, Deep
Learning, Reinforcement Learning
## I Introduction
The remarkable advancement of wireless communication systems, the quickly
increasing demand for new services in various fields, and the rapid development
of intelligent devices have led to a growing demand for satellite communication
systems to complement conventional terrestrial networks by providing access
over uncovered and under-covered urban, rural, and mountainous areas, as well
as the seas.
There are three major types of satellites: geostationary Earth orbit (GEO)
satellites, also referred to as geosynchronous equatorial orbit satellites,
medium Earth orbit (MEO) satellites, and low Earth orbit (LEO) satellites.
This classification depends on three main features: altitude, beam footprint
size, and orbit. GEO, MEO, and LEO satellites orbit the Earth at altitudes of
35786 km, 7000–25000 km, and 300–1500 km, respectively. The beam footprint of
a GEO satellite ranges from 200 to 3500 km, while that of an MEO or LEO
satellite ranges from 100 to 1000 km. The orbital period of a GEO satellite
equals the Earth's rotational period, which makes it appear fixed to ground
observers, whereas LEO and MEO satellites have shorter periods; consequently,
many LEO and MEO satellites are required to offer continuous global coverage.
For example, Iridium NEXT has 66 LEO satellites and 6 spares, Starlink by
SpaceX plans to have 4425 LEO satellites plus some spares, and O3b has 20 MEO
satellites, including 3 on-orbit spares [1].
Satellite communication use cases can also be split into three categories: i)
service continuity, to provide network access over uncovered and under-covered
areas; ii) service ubiquity, to ameliorate the network availability in cases
of temporary outage or destruction of a ground network due to disasters; and
iii) service scalability, to offload traffic from the ground networks. In
addition, satellite communication systems could provide coverage to various
fields, such as the transportation, energy, agriculture, business, and public
safety fields [2].
Although satellite communication offers improved global coverage and increased
communication quality, it has several challenges. Satellites, especially LEO
satellites, have limited on-board resources and move quickly, bringing high
dynamics to the network access. The high mobility of the space segments, and
the inherent heterogeneity between the satellite layers (GEO, MEO, LEO), the
aerial layers (unmanned aerial vehicles (UAVs), balloons, airships), and the
ground layer make network control, network security, and spectrum management
challenging. In addition, achieving high energy efficiency for satellite
communication is more challenging than for terrestrial networks.
Several surveys have discussed different aspects of satellite communication
systems, such as handoff schemes [3], mobile satellite systems [4], MIMO over
satellite [5], satellites for the Internet of Remote Things [6], inter-
satellite communication systems [7], Quality of Service (QoS) provisioning
[8], space optical communication [9], space-air-ground integrated networks
[10], small satellite communication [11], physical space security [12],
CubeSat communications [13], and non-terrestrial networks [2]. Meanwhile,
interest in artificial intelligence (AI) has increased in recent years. AI,
including machine learning (ML), deep learning (DL), and reinforcement
learning (RL), has shown successful results in diverse applications in science
and engineering fields, such as electrical engineering, software engineering,
bioengineering, financial engineering, and medicine. Several researchers
have thus turned to AI techniques to solve various challenges in their
respective fields and have designed diverse successful AI-based applications
to overcome several challenges in the wireless communication field.
Many researchers have discussed AI and its applications to wireless
communication in general [14, 15, 16, 17]. Others have focused on the
application of AI to one aspect of wireless communication, such as wireless
communications in the Internet of Things (IoT) [18], network management [19],
wireless security [20], emerging robotics communication [21], antenna design
[22] and UAV networks [23, 24]. Vazquez et al. [25] briefly discussed some
promising use cases of AI for satellite communication, whereas Kato et al.
[26] discussed the use of AI for space-air-integrated networks. The use of DL
in space applications has also been addressed [27].
Figure 1: Applications of artificial intelligence (AI) for different satellite communication aspects
AE | Autoencoder
---|---
AI | Artificial intelligence
AJ | Anti-jamming
ARIMA | Auto regressive integrated moving average
ARMA | Auto regressive moving average
BH | Beam hopping
CNN | Convolutional neural network
DL | Deep learning
DNN | Deep neural network
DRL | Deep reinforcement learning
ELM | Extreme learning machine
EMD | Empirical mode decomposition
FARIMA | Fractional auto regressive integrated moving average
FCN | Fully convolutional network
FDMA | Frequency division multiple access
FH | Frequency hopping
GA | Genetic algorithms
GANs | Generative adversarial networks
GNSS | Global navigation satellite system
IoS | Internet of satellites
kNN | k-nearest neighbor
LRD | Long-range dependence
LSTM | Long short-term memory
MDP | Markov decision process
ML | Machine learning
MO-DRL | Multi-objective deep reinforcement learning
NNs | Neural networks
PCA | Principal component analysis
QoS | Quality of service
RFs | Random forests
RL | Reinforcement learning
RNNs | Recurrent neural networks
RS | Remote sensing
RSRP | Reference signal received power
SAGIN | Space-air-ground integrated network
SRD | Short-range dependence
SVM | Support vector machine
SVR | Support vector regression
SatIoT | Satellite Internet of Things
UE | User equipment
VAEs | Variational autoencoders
TABLE I: Acronyms and Abbreviations
Overall, several researchers have discussed wireless and satellite
communication systems, and some of these have discussed the use of AI for one
or a few aspects of satellite communication; however, an extensive survey of
AI applications in diverse aspects of satellite communication has yet to be
performed.
This work therefore aims to provide an introduction to AI, a discussion of
various challenges being faced by satellite communication and an extensive
survey of potential AI-based applications to overcome these challenges. A
general overview of AI, its diverse sub-fields and its state-of-the-art
algorithms are presented in Section II. Several challenges being faced by
diverse aspects of satellite communication systems and potential AI-based
solutions are then discussed in Section III; these applications are summarized
in Fig.1. For ease of reference, the acronyms and abbreviations used in this
paper are presented in Table I.
## II Artificial Intelligence (AI)
The demonstration of successful applications of AI in healthcare, finance,
business, industries, robotics, autonomous cars and wireless communication
including satellites has led it to become a subject of high interest in the
research community, industries, and media.
This section therefore aims to provide a brief overview of the world of AI,
ML, DL and RL. Sub-fields, commonly used algorithms, challenges, achievements,
and outlooks are also addressed.
### II-A Artificial Intelligence
Although AI sounds like a novel approach, it can be traced to the 1950s and
encompasses several approaches and paradigms. ML, DL, RL and their
intersections are all parts of AI, as summarized in Fig.2 [28]. Thus, a major
part of AI follows the learning approach, although approaches without any
learning aspects are also included. Overall, research into AI aims to make the
machine smarter, either by following some rules or by facilitating guided
learning. The former refers to symbolic AI; the latter refers to ML. Here
smarter indicates the ability to accomplish complex intellectual tasks
normally necessitating a human, such as classification, regression,
clustering, detection, recognition, segmentation, planning, scheduling, or decision
making. In the early days of AI, many believed that these tasks could be
achieved by transferring human knowledge to computers by providing an
extensive set of rules that encompasses the humans’ expertise. Much focus was
thus placed on feature engineering and implementing sophisticated handcrafted
commands to be explicitly used by the computers. Although this symbolic AI has
been suitable for many applications, it has shown various limitations in terms
of both precision and accuracy for more advanced problems that show more
complexity, less structure, and more hidden features such as computer-vision
and language-processing tasks. To address these limitations, researchers
turned to a learning approach known as ML.
Figure 2: Artificial Intelligence, Machine Learning, Deep Learning and
Reinforcement Learning
### II-B Machine Learning (ML)
Figure 3: Machine Learning Approach
ML, which encompasses DL and RL, is a subset of AI. In contrast to symbolic
AI, where the machine is provided with all the rules to solve a certain
problem, ML requires a learning approach. Thus, rather than giving the rules
to solve a problem, the machine is provided with the context to learn the
rules by itself to solve the issue, as shown in Fig.3 and best summarized by
the AI pioneer Alan Turing [29]: “An important feature of a learning machine
is that its teacher will often be very largely ignorant of quite what is going
on inside, although he may still be able to some extent to predict his pupil’s
behavior.” An ML system is trained rather than programmed with explicit rules.
The learning process requires data to extract patterns and hidden structures;
the focus is on finding optimal representations of the data to get closer to
the expected result by searching within a predefined space of possibilities
using guidance from a feedback signal, where representations of the data refer
to different ways to look at or encode the data. To achieve that, three things
are mandatory: input data, samples of the expected output, and a way to
measure the performance of the algorithm [28]. This simple idea of learning a
useful representation of data has been useful in multiple applications from
image classification to satellite communication.
ML algorithms are commonly classified as either deep or non-deep learning.
Although DL has gained higher popularity and attention, some classical non-
deep ML algorithms are more useful in certain applications, especially when
data is lacking. ML algorithms can also be classified as supervised, semi-
supervised, unsupervised, and RL classes, as shown in Fig.4. In this
subsection, only non-RL, non-deep ML approaches are addressed; RL and DL are
addressed in sections II.C and II.D, respectively.
#### II-B1 Supervised, Unsupervised and Semi-supervised Learning
Supervised, unsupervised and semi-supervised learning are all ML approaches
that can be employed to solve a broad variety of problems.
During supervised learning, all of the training data is labeled, i.e., tagged
with the correct answer. The algorithm is thus fully supervised, as it can
check its predictions are right or wrong at any point in the training process.
During image classification, for example, the algorithm is provided with
images of different classes and each image is tagged with the corresponding
class. The supervised model learns the patterns from the training data to then
be able to predict labels for non-labeled data during inferencing. Supervised
learning has been applied for classification and regression tasks.
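To make the supervised setting concrete, the following minimal Python sketch (toy data; `knn_predict` is an illustrative k-nearest-neighbor classifier, not an algorithm from the surveyed works) learns from labeled examples and predicts the label of a new point:

```python
import math
from collections import Counter

# Toy labeled dataset: (feature_1, feature_2) -> class label.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]

def knn_predict(x, k=3):
    """Predict the label of x by majority vote of its k nearest neighbors."""
    neighbors = sorted(train, key=lambda p: math.dist(x, p[0]))
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]
```

At inference time, `knn_predict((1.1, 0.9))` returns the label of the nearest training cluster, exactly as described above: patterns learned from labeled data are used to label unseen data.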
As labeling can be impossible due to a lack of information or infeasible due
to high costs, unsupervised learning employs an unlabeled data set during
training. Using unlabeled data, the model can extract hidden patterns or
structures in the data that may be useful to understand a certain phenomenon
or its output could be used as an input for other models. Unsupervised
learning has been commonly used for clustering, anomaly detection, association
and autoencoders (AEs).
As a middle ground between supervised and unsupervised learning, semi-
supervised learning uses training data with both labeled and unlabeled
portions. Semi-supervised learning is thus an excellent option when only
a small part of the data is labeled and/or the labeling process is either
difficult or expensive. An example of this technique is pseudo-labeling, which
has been used to improve supervised models [33].
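As an illustrative sketch of pseudo-labeling (toy one-dimensional data; `nearest_label` is a hypothetical stand-in for a trained base model), the labeled set is first used to tag the unlabeled points, which then augment the training set:

```python
import math

labeled = [((0.0,), "neg"), ((10.0,), "pos")]   # small labeled portion
unlabeled = [(1.0,), (9.0,), (8.5,)]            # larger unlabeled portion

def nearest_label(x, data):
    """1-nearest-neighbor label, standing in for a trained base model."""
    return min(data, key=lambda p: math.dist(x, p[0]))[1]

# Step 1: assign pseudo-labels to the unlabeled points.
pseudo = [(x, nearest_label(x, labeled)) for x in unlabeled]
# Step 2: retrain (here simply extend the training set) with pseudo-labels.
augmented = labeled + pseudo
```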
Figure 4: Machine Learning Sub-fields
#### II-B2 Probabilistic Modeling
Probabilistic modeling, as its name suggests, involves models that use
statistical techniques to analyze data and was one of the earliest forms of ML
[30]. A popular example is the Naive Bayes classifier, which uses Bayes’
theorem while assuming that all of the input features are independent; as they
generally are not, this is a naive assumption [28]. Another popular example is
logistic regression; as the algorithm for this classifier is simple, it is
commonly used in the data science community.
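A minimal Naive Bayes text classifier can be sketched as follows (toy corpus and add-one smoothing; purely illustrative, not from the cited works). Each class score is the log prior plus the sum of log word likelihoods, under the naive independence assumption described above:

```python
from collections import defaultdict
import math

# Tiny corpus: messages labeled "spam" or "ham".
docs = [("win money now", "spam"), ("free money", "spam"),
        ("meeting at noon", "ham"), ("lunch at noon", "ham")]

def train_nb(docs):
    """Count word frequencies per class and class frequencies."""
    word_counts = defaultdict(lambda: defaultdict(int))
    class_counts = defaultdict(int)
    for text, label in docs:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
    return word_counts, class_counts

def predict_nb(text, word_counts, class_counts):
    """Pick the class maximizing log P(class) + sum of log P(word|class)."""
    total = sum(class_counts.values())
    vocab = {w for c in word_counts for w in word_counts[c]}
    best, best_score = None, float("-inf")
    for c in class_counts:
        n_words = sum(word_counts[c].values())
        score = math.log(class_counts[c] / total)
        for w in text.split():
            # Add-one (Laplace) smoothing for unseen words.
            score += math.log((word_counts[c].get(w, 0) + 1)
                              / (n_words + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best
```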
#### II-B3 Support Vector Machine (SVM)
Kernel methods are a popular class of algorithms [31, 28], of which the most
well-known is the SVM, which aims to find a decision boundary to
classify data inputs. The algorithm maps the data into a high-dimensional
representation where the decision boundary is expressed as a hyperplane. The
hyperplane is then searched by trying to maximize the distance between the
hyperplane and the nearest data points from each class, in a process called
maximizing the margin. Although mapping the data into a high-dimensional space
is theoretically straightforward, it requires high computational resources.
The ’kernel trick’, which is based on kernel functions [32], is thus used to
compute the distance between points without explicitly computing their
coordinates in the high-dimensional space. SVMs were the state of the art for
classification for a fairly long time and have shown many successful
applications in several scientific and engineering areas [34]. However, SVMs
have shown limitations when applied to large datasets. Furthermore, when the
SVM is applied to perceptual problems, a feature engineering step is required
to enhance the performance because it is a shallow model; this requires human
expertise. Although the SVM has been surpassed by DL algorithms, it is still useful
because of its simplicity and interpretability.
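The kernel trick can be illustrated with a degree-2 polynomial kernel, whose direct evaluation matches the inner product under an explicit feature map, and with the RBF kernel, whose implicit feature space is infinite-dimensional (a sketch with hypothetical inputs):

```python
import math

def poly2_kernel(x, y):
    """(x . y)^2 computed directly -- the 'kernel trick'."""
    return sum(a * b for a, b in zip(x, y)) ** 2

def phi(x):
    """Explicit degree-2 feature map for 2-D input: (x1^2, x2^2, sqrt(2) x1 x2)."""
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1])

def rbf_kernel(x, y, gamma=0.5):
    """RBF kernel exp(-gamma * ||x - y||^2): an inner product in an
    infinite-dimensional space, computed without ever mapping there."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)
```

For `x = (1, 2)` and `y = (3, 4)`, `poly2_kernel(x, y)` and the explicit inner product `phi(x) . phi(y)` both equal 121, but the kernel avoids constructing the mapped coordinates.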
#### II-B4 Decision Trees
Figure 5: Decision Tree
A decision tree is a supervised learning algorithm that represents features of
the data as a tree by defining conditional control statements, as summarized
in Fig.5 [35, 36]. Given its intelligibility and simplicity, it is one of the
most popular algorithms in ML. Further, decision trees can be used for both
regression and classification, as decisions could be either continuous values
or categories. A more robust version of decision trees, random forests (RFs),
combines various decision trees to bring optimized results. This involves
building many different weak decision trees and then assembling their outputs
using bootstrap aggregating (bagging) [37, 38]. Another popular version of
decision trees, that is often more effective than RFs is a gradient boosting
machine; gradient boosting also combines various decision tree models but
differs from RFs by using gradient boosting [39], which is a way to improve ML
models by iteratively training new models that focus on the mistakes of the
previous models. The XGBoost [40, 41] library is an excellent implementation
of the gradient boosting algorithm that supports C++, Java, Python, R, Julia,
Perl, and Scala. RFs and gradient boosting machines are the most popular and
robust non-deep algorithms that have been widely used to win various data
science competitions on the Kaggle website [42].
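Bagging can be sketched as follows (toy one-dimensional data and decision stumps; purely illustrative): each weak learner is trained on a bootstrap resample of the data, and predictions are combined by majority vote:

```python
import random
from collections import Counter

random.seed(0)

# 1-D dataset: the label is "pos" exactly when the feature exceeds 5.
data = [(x, "pos" if x > 5 else "neg") for x in range(11)]

def train_stump(sample):
    """A decision stump: pick the threshold that best separates the sample."""
    best_t, best_err = 0, len(sample) + 1
    for t in range(11):
        err = sum((x > t) != (lab == "pos") for x, lab in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(x, data, n_trees=15):
    """Train each stump on a bootstrap resample; predict by majority vote."""
    votes = []
    for _ in range(n_trees):
        sample = [random.choice(data) for _ in data]  # bootstrap resample
        t = train_stump(sample)
        votes.append("pos" if x > t else "neg")
    return Counter(votes).most_common(1)[0][0]
```

Gradient boosting differs from this scheme by training each new tree on the residual errors of the previous ones instead of on independent resamples.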
#### II-B5 Neural Networks (NNs)
Figure 6: Neural Networks
NNs contain different layers of interconnected nodes, as shown in Fig.6, where
each node is a perceptron that feeds the signal produced by a multiple linear
regression to an activation function that may be nonlinear [43, 44]. A
nonlinear activation function is generally chosen to add more complexity to
the model by eliminating linearity. NNs can be used for regression by
predicting continuous values or for classification by predicting probabilities
for each class. In a NN, the features of one input (e.g., one image) are
assigned as the input layer. Then, according to a matrix of weights the next
hidden layers are computed using matrix multiplications (linear manipulations)
and then non linear activation functions. The training of NNs is all about
finding the best weights. To do so, a loss function is designed to compare the
output of the model and the ground truth for each output, to find the weights
that minimize that loss function. Backpropagation algorithms have been
designed to train chains of weights using optimization techniques such as
gradient descent [45]. NNs have been successfully used for both regression and
classification, although they are most efficient when dealing with a high
number of features (input parameters) and hidden layers, which has led to the
development of DL.
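The forward pass described above, a linear manipulation (matrix multiplication plus bias) followed by a nonlinear activation, can be sketched with hand-picked, purely illustrative weights:

```python
def relu(v):
    """Nonlinear activation applied element-wise."""
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    """One layer: matrix-vector product plus bias."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

x = [1.0, 2.0]                                              # input features
h = relu(dense(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]))   # hidden layer
y = dense(h, [[1.0, 1.0]], [0.1])                           # output layer
```

Training would consist of adjusting the weight matrices so that a loss function comparing `y` to the ground truth is minimized, typically via backpropagation and gradient descent.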
### II-C Deep Learning (DL)
In contrast to shallow models, this sub-field of ML requires high
computational resources [46, 28]. Recent computational advancements and the
automation of feature engineering have paved the way for DL algorithms to
surpass classical ML algorithms for solving complex tasks, especially
perceptual ones such as computer vision and natural language processing. Due
to their relative simplicity, shallow ML algorithms require human expertise
and intervention to extract valuable features or to transform the data to make
it easier for the model to learn. DL models minimize or eliminate these steps,
as these transformations are implicitly done within the deep networks.
#### II-C1 Convolutional Neural Networks (CNN)
CNNs [47, 48] are a common type of deep NNs (DNNs) that are composed of an
input layer, hidden convolution layers, and an output layer; they have been
commonly used in computer vision applications such as image classification
[50], object detection [51], and object tracking [52]. They have also shown
success in other fields, including speech and natural language processing [53].
As their name indicates, CNNs are based on convolutions. The hidden layers of
a CNN consist of a series of convolutional layers, each followed by an
activation function and further convolutions. CNN architectures
are defined by choosing the sizes, numbers, and positions of filters
(kernels) and the activation functions. Learning then involves finding the
best set of filters that can be applied to the input to extract useful
information and predict the correct output.
#### II-C2 Recurrent Neural Networks (RNNs)
Figure 7: Simplified Architecture of a Recurrent Neural Networks
RNNs [54] are another family of neural networks in which nodes form a directed
graph along a temporal sequence where previous outputs are used as inputs.
RNNs are specialized for processing a sequence of values x(0), x(1), x(2), …,
x(T). RNNs use their internal memory to process variable-length sequences of
inputs. Different architectures are designed based on the problem and the
data. In general, RNNs are designed as in Fig. 7, where for each time stamp
$t$, $x(t)$ represents the input at that time, $a(t)$ is the activation, and
$y(t)$ is the output; $W_{a}$, $W_{x}$, $W_{y}$, $b_{a}$ and $b_{y}$ are
coefficients that are shared across time steps, and $g_{1}$ and $g_{2}$ are
activation functions:
$a(t)=g_{1}(W_{a}\cdot a(t-1)+W_{x}\cdot x(t)+b_{a})$ (1)
$y(t)=g_{2}(W_{y}\cdot a(t)+b_{y})$ (2)
RNN models are most commonly used in the fields of natural language
processing, speech recognition and music composition.
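Equations (1) and (2) can be implemented directly. The scalar sketch below (hypothetical weights; $g_{1}=\tanh$, $g_{2}$ the identity) shows the same coefficients being shared across every time step of the sequence:

```python
import math

# Hypothetical shared coefficients from Eqs. (1)-(2).
Wa, Wx, Wy, ba, by_ = 0.5, 1.0, 2.0, 0.0, 0.0

def rnn_step(a_prev, x_t):
    """One recurrence step: Eq. (1) then Eq. (2)."""
    a_t = math.tanh(Wa * a_prev + Wx * x_t + ba)  # Eq. (1), g1 = tanh
    y_t = Wy * a_t + by_                          # Eq. (2), g2 = identity
    return a_t, y_t

# Process a sequence x(0), x(1), x(2); the state a carries the memory.
a = 0.0
outputs = []
for x_t in [1.0, 0.5, -1.0]:
    a, y = rnn_step(a, x_t)
    outputs.append(y)
```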
#### II-C3 Autoencoders (AEs)
Figure 8: Autoencoder
AEs are another type of NNs used to learn efficient data representation in an
unsupervised way [55]. AEs encode the data using the bottleneck technique,
which comprises dimensionality reduction to ignore the noise of the input data
and an initial data regeneration from the encoded data, as summarized in
Fig.8. The initial input and generated output are then compared to assess the
quality of the coding. AEs have been widely applied for dimensionality
reduction [56] and anomaly detection [57].
#### II-C4 Deep generative models
Deep generative models [58] are DL models that involve the automatic
discovering and learning of regularities in the input data in such a way that
new samples can be generated. These models have shown various applications,
especially in the field of computer vision. The most popular generative models
are variational AEs (VAEs) and generative adversarial networks (GANs).
Of these, VAEs learn complicated data distribution using unsupervised NNs
[59]. Although VAEs are a type of AEs, their encoding distribution is
regularized during the training to ensure that their latent space (i.e.,
representation of compressed data) has good properties for generating new
data.
Figure 9: Generative Adverserial Networks GANs
GANs are composed of two NNs in competition, where a generator network G
learns to capture the data distribution and generate new data and a
discriminator model D estimates the probability that a given sample came from
the generator rather than the initial training data, as summarized in Fig. 9
[60, 61]. The generator is thus used to produce misleading samples, and the
discriminator determines whether a given sample is fake or real. The
generator tries to fool the discriminator by generating almost-real samples,
while the discriminator counters the generator by improving its
discriminative capability.
### II-D Reinforcement Learning (RL)
This subset of ML involves a different learning method than those using
supervised, semi-supervised, or unsupervised learning [64]. RL is about
learning what actions to take to maximize a reward signal. The
agent must find which actions bring the most reward by trying each action,
as shown in Fig. 10. These actions can affect immediate rewards as well as
subsequent rewards. Some RL approaches require the introduction of DL; such
approaches are part of deep RL (DRL).
Figure 10: Reinforcement Learning
One of the challenges encountered during RL is balancing the trade-off between
exploration and exploitation. To get a maximum immediate reward, an RL agent
must perform exploitation, i.e., choose actions that it has explored
previously and found to be the best. To find such actions, it must explore the
solution space, i.e., try new actions.
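The exploration-exploitation trade-off can be illustrated with an epsilon-greedy agent on a two-armed bandit (hypothetical reward probabilities; a sketch, not a method from the surveyed works). With probability `eps` the agent explores a random action; otherwise it exploits the action with the highest estimated value:

```python
import random

random.seed(1)

# Two actions with unknown mean rewards the agent must discover.
true_means = [0.2, 0.8]

def pull(action):
    """Environment: a Bernoulli reward with the action's hidden mean."""
    return 1.0 if random.random() < true_means[action] else 0.0

def epsilon_greedy(n_steps=2000, eps=0.1):
    counts = [0, 0]
    values = [0.0, 0.0]  # running average reward per action
    for _ in range(n_steps):
        if random.random() < eps:
            a = random.randrange(2)                     # explore
        else:
            a = 0 if values[0] >= values[1] else 1      # exploit
        r = pull(a)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]        # incremental mean
    return values, counts

values, counts = epsilon_greedy()
```

After training, the agent's value estimates approach the true means and it pulls the better arm far more often, illustrating how exploration feeds exploitation.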
All RL agents have explicit goals, are aware of some aspects of their
environment, can take actions that impact their environments, and act despite
significant uncertainty about their environment. Other than the agent and the
environment, an RL system has four sub-elements: a policy, a reward signal, a
value function, and, sometimes, a model of the environment.
Here, learning involves the agent determining the best method to map states of
the environment to actions to be taken when in those states. After each
action, the environment sends the RL agent a reward signal, which is the goal
of the RL problem. Unlike a reward, which brings an immediate evaluation of the
action, a value function estimates the total amount of reward an agent can
anticipate collecting in the longer term. Finally, a model of the environment
mimics the behavior of the environment. These models can be used for planning
by allowing the agent to consider possible future situations before they
occur. Methods for solving RL problems that utilize models are called model-
based methods, whereas those without models are referred to as model-free
methods.
### II-E Discussion
#### II-E1 Model Selection
AI is a broad field that encompasses various approaches, each of which
encompasses several algorithms. AI could be based on predefined rules or on
ML. This learning can be supervised, semi-supervised, unsupervised, or
reinforcement learning; in each of these categories learning can be deep or
shallow. As each approach offers something different to the world of AI,
interest in each should depend on the given problem; a more-complex approach
or algorithm does not necessarily lead to better results. For example, a
common assumption is that DL is better than shallow learning. Although this
holds in several cases, especially for perceptual problems such as computer
vision problems, it is not always applicable, as DL algorithms require greater
computational resources and large datasets which are not always available.
Supervised learning is an effective approach when a fully labeled dataset is
available. However, this is not always the case, as labeling can be expensive,
difficult, or even impossible. Under these circumstances, semi-supervised or
unsupervised learning or RL is more applicable. Whereas unsupervised learning
can find hidden patterns in non-labeled data, RL learns the best policy to
achieve a certain task. Thus, unsupervised learning is a good tool to extract
information from data, whereas RL is better suited for decision-making tasks.
Therefore, the choice of an approach or an algorithm should not be based on
its perceived elegance, but by matching the method to characteristics of the
problem at hand, including the goal, the quality of the data, the
computational resources, the time constraints, and the prospective future
updates. Solving a problem may require a combination of more than one
approach.
After assessing the problem and choosing an approach, an algorithm must be
chosen. Although ML has mathematical foundations, it remains an empirical
research field. To choose the best algorithm, data science and ML researchers
and engineers empirically compare different algorithms for a given problem.
Algorithms are compared by splitting the data into a training set and a test
set. The training set is then used to train the model, whereas the test set is
used to compare the output between models.
In competitive data science, such as in Kaggle [42] competitions, where every
incremental improvement matters, models are often combined to improve their overall
results, and various ensemble techniques such as bagging [38], boosting [39],
and adaptive boosting [62] are used.
#### II-E2 Model Regularization
Figure 11: Training and test errors over the training time. Early stopping is
a common technique to reduce overfitting by stopping the training process at an
early stage, i.e., when the test error starts to increase noticeably
After the approach and algorithm have been selected, hyperparameter tuning is
generally done to improve the output of the algorithm. In most cases, ML
algorithms depend on many hyperparameters; choosing the best hyperparameters
for a given problem thus allows for higher accuracy. This step can be done
manually by intuitively choosing better hyperparameters, or automatically
using various methods such as grid search and stochastic methods [63].
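Grid search can be sketched as an exhaustive evaluation of every hyperparameter combination (the `validation_error` function below is a hypothetical stand-in for training a model and measuring its validation error):

```python
from itertools import product

def validation_error(lr, depth):
    """Stand-in for fitting a model with these hyperparameters and
    measuring validation error; here an artificial function whose
    minimum sits at lr=0.01, depth=4."""
    return (lr - 0.01) ** 2 + (depth - 4) ** 2 * 1e-4

grid = {"lr": [0.001, 0.01, 0.1], "depth": [2, 4, 8]}

# Exhaustively evaluate every combination and keep the best one.
best = min(product(grid["lr"], grid["depth"]),
           key=lambda combo: validation_error(*combo))
```

Stochastic methods replace the exhaustive `product` loop with random sampling of the hyperparameter space, which scales better when the grid is large.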
A common trap in ML is overfitting, during which the machine stops learning
(generalizing) and instead begins to memorize the data. When this occurs, the
model can achieve good results on seen data but fails when confronted with new
data, i.e., a decreasing training error and an increasing test error, as shown
in Fig.11. Overfitting can be discovered by splitting the data into
training, validation and testing sets, where neither the validation nor the
testing sets are used to train the model. The training set is used to train
the model, the validation set is used to verify the model predictions on
unseen data and for hyperparameter tuning, and the testing set is used for the
final testing of the model.
A variety of methods can be employed to reduce overfitting. It can be reduced
by augmenting the size of the dataset, which is commonly performed in the field
of computer vision. For example, image data could be augmented by applying
transformations to the images, such as rotating, flipping, adding noise, or
cutting parts of the images. Although useful, this technique is not always
applicable. Another method involves using cross-validation rather than
splitting the data into a training set and a validation set. Early stopping, as
shown in Fig.11, consists of stopping the learning process before the
algorithm begins to memorize the data. Ensemble learning is also commonly
used.
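Early stopping as depicted in Fig.11 can be sketched as follows (hypothetical validation-error curve; `patience` controls how many non-improving epochs to tolerate before halting):

```python
def train_with_early_stopping(val_errors, patience=2):
    """Stop when validation error has not improved for `patience` epochs;
    return the epoch index of the best model seen."""
    best_epoch, best_err, waited = 0, float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_epoch, best_err, waited = epoch, err, 0
        else:
            waited += 1
            if waited >= patience:
                break  # test error is rising: stop before memorization
    return best_epoch

# Validation error falls, then rises as the model starts to overfit.
errors = [0.9, 0.6, 0.4, 0.35, 0.4, 0.5, 0.7]
```

On this curve the procedure halts after epoch 5 and keeps the epoch-3 model, the point where the validation error was lowest.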
#### II-E3 The hype and the hope
Rapid progress has been made in AI research, including its various subfields,
over the last ten years as a result of exponentially increasing investments.
However, few substantial developments have been made to address real-world
problems; as such, many are doubtful that AI will have much influence on the
state of technology and the world. Chollet [28] compared the progress of AI
with that of the internet in 1995: at that time, the majority of people could
not foresee the true potential, consequences, and pertinence of the internet,
as it had yet to come to pass. As was the case with the overhyping and
subsequent funding crash of the early 2000s before the widespread
implementation and application of the internet, AI may also become an integral
part of global technologies. The authors thus believe that the inevitable
progress of AI is
likely to have long-term impacts and that AI will likely be a major part of
diverse applications across all scientific fields, from mathematics to
satellite communication.
## III Artificial Intelligence for Satellite Communication
### III-A Beam hopping
Figure 12: The demand–capacity mismatch among beams demonstrates the
limitation of using fixed and uniformly distributed resources across all beams
in a multi-beam satellite system Figure 13: Simplified architecture of beam
hopping (BH)
#### III-A1 Definition & limitations
Satellite resources are expensive and thus require efficient systems involving
optimizing and time-sharing. In conventional satellite systems the resources
are fixed and uniformly distributed across beams [65]. As a result,
conventional large multi-beam satellite systems have shown a mismatch between
the offered and requested resources; some spot beams have a higher demand than
the offered capacity, leaving the demand pending (i.e., hot-spots), while
others present a demand lower than the installed capacity, leaving the offered
capacity unused (i.e., cold-spots). Thus, to improve multi-beam satellite
communication, the on-board flexible allocation of satellite resources over
the service coverage area is necessary to achieve more efficient satellite
communication.
Beam hopping (BH) has emerged as a promising technique to achieve greater
flexibility in managing non-uniform and variant traffic requests throughout
the day, year and lifetime of the satellite over the coverage area [65], [66].
BH involves dynamically illuminating each cell with a small number of active
beams, as summarized in Fig. 13, thus using all available on-board satellite
resources to offer service to only a subset of beams. The selection of this
subset is time-variant and depends on the traffic demand, which is based on
the time-space dependent BH illumination pattern. The illuminated beams are
only active long enough to fill the request for each beam. Thus, the
challenging task in BH systems is to decide which beams should be activated
and for how long, i.e., the BH illumination pattern; this responsibility is
left to the resource manager who then forwards the selected pattern to the
satellite via telemetry, tracking and command [67].
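As a purely illustrative baseline (not one of the algorithms surveyed here), a greedy scheduler that illuminates, in each time slot, the beams with the largest remaining demand can be sketched as follows; all demands and capacities are hypothetical:

```python
def greedy_illumination(demand, n_active, n_slots):
    """Toy greedy beam-hopping scheduler: in each time slot, illuminate the
    n_active beams with the largest remaining demand, serving one demand
    unit per illuminated beam."""
    remaining = list(demand)
    pattern = []
    for _ in range(n_slots):
        # Rank beams by pending demand and activate the top n_active.
        order = sorted(range(len(remaining)), key=lambda b: -remaining[b])
        active = sorted(order[:n_active])
        for b in active:
            remaining[b] = max(0, remaining[b] - 1)
        pattern.append(active)
    return pattern, remaining

# Four beams with non-uniform demand; only two beams active per slot.
pattern, left = greedy_illumination(demand=[3, 1, 0, 2],
                                    n_active=2, n_slots=3)
```

Such a heuristic serves hot-spot beams first, but like the classical methods discussed above it offers no optimality guarantee, which is precisely the gap the learning-based designs below try to close.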
Of the various methods that researchers have provided to realize BH, most have
been based on classical optimization algorithms. For example, Angeletti et al.
[68] demonstrated several advantages to the performance of a system when
using BH and proposed the use of a genetic algorithm (GA) to design the BH
illumination pattern; Anzalchi et al. [69] also illustrated the merits of BH
and compared the performance between BH and non-hopped systems. Alberti et al.
[70] proposed a heuristic iterative algorithm to obtain a solution to the BH
illumination design. BH has also been used to decrease the number of
transponder amplifiers for Terabit/s satellites [71]. An iterative algorithm
has also been proposed to maximize the overall offered capacity under certain
beam demand and power constraints in a joint BH design and spectrum assignment
[72]. Alegre et al. [73] designed two heuristics to allocate capacity
resources based on the traffic request per beam, and then further discussed
the long- and short-term traffic variations and suggested techniques to deal
with both variations [74]. Liu et al. [75], studied techniques for controlling
the rate of the arriving traffic in BH systems. The QoS delay fairness
equilibrium has also been addressed in BH satellites [76]. Joint BH schemes
were proposed by Shi et al. [77] and Ginesi et al. [78] to further ameliorate
the efficiency of on-board resource allocation. To find the optimal BH
illumination design, Cocco et al. [79] used a simulated annealing algorithm.
Although employing optimization algorithms has achieved satisfactory results
in terms of flexibility and delay reduction of BH systems, some difficulties
remain. As the search space grows dramatically with the number of beams, an
inherent difficulty in designing the BH illumination pattern is finding the
optimal design rather than one of many local optima [72]. For satellites with
hundreds or thousands of beams, classical optimization algorithms may require
long computation times, which makes them impractical in many scenarios.
Additionally, classical optimization algorithms, including GAs and other
heuristics, require revision when the scenario changes even moderately; this leads
to a higher computational complexity, which is impractical for on-board
resource management.
#### III-A2 AI-based solutions
Seeking to overcome these limitations and enhance the performance of BH, some
researchers have proposed AI-based solutions. Some of these solutions have
been fully based on the learning approach, i.e., end-to-end learning, in which
the BH algorithm is a learning algorithm. Others have tried to improve
optimization algorithms by adding a learning layer, thus combining learning
and optimization.
To optimize the transmission delay and the system throughput in multibeam
satellite systems, Hu et al. [80] formulated an optimization problem and
modeled it as a Markov decision process (MDP). DRL is then used to solve the
BH illumination design and optimize the long-term accumulated rewards of the
modeled MDP. As a result, the proposed DRL-based BH algorithm can reduce the
transmission delay by up to 52.2% and increase the system throughput by up to
11.4% when compared with previous algorithms.
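As a minimal, purely illustrative stand-in for such an MDP formulation (the paper uses deep RL on a far richer state space; here the "system" is reduced to two beams, a one-bit backlog state, and invented dynamics), tabular Q-learning already shows how long-term accumulated rewards shape the illumination policy:

```python
import random

# Toy tabular Q-learning sketch of a beam-hopping MDP (all dynamics
# invented): state = which of two beams is backlogged, action = which
# beam to illuminate, reward = 1 for serving the backlogged beam.
random.seed(0)
N_STATES, N_ACTIONS = 2, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    # Reward 1 for illuminating the backlogged beam; the backlog then moves.
    reward = 1.0 if action == state else 0.0
    return 1 - state, reward

state = 0
for _ in range(2000):
    # Epsilon-greedy exploration over the two beams.
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    nxt, r = step(state, action)
    Q[state][action] += alpha * (r + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

# Greedy policy after learning: serve the backlogged beam in each state.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

In the real setting the state (per-beam queues, channel conditions) is far too large for a table, which is why [80] replaces the table with a deep network.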
To combine the advantages of end-to-end learning approaches and optimization
approaches, for a more efficient BH illumination pattern design, Lei et al.
[67] suggested a learning and optimization algorithm to deal with the beam
hopping pattern illumination selection, in which a learning approach, based on
fully connected NNs, was used to predict non-optimal BH patterns and thus
address the difficulties faced when applying an optimization algorithm to a
large search space. Thus, the learning-based prediction reduces the search
space, and the optimization can be reduced on a smaller set of promising BH
patterns.
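The prune-then-optimize idea can be sketched as follows (all functions are invented stand-ins: `surrogate_score` plays the role of the trained NN predictor and `true_objective` the costly exact evaluation):

```python
from itertools import combinations

# Sketch of "learn to prune, then optimize": a cheap surrogate score
# discards unpromising BH patterns so the expensive objective is only
# evaluated on a small candidate set.
demand = [4.0, 1.0, 3.0, 2.0]

def surrogate_score(pattern):            # stand-in for the trained NN
    return sum(demand[b] for b in pattern)

def true_objective(pattern):             # stand-in for the costly evaluation
    return sum(min(demand[b], 2.5) for b in pattern)

candidates = list(combinations(range(4), 2))           # all 2-beam patterns
# Keep only the 3 most promising patterns, then optimize over that subset.
pruned = sorted(candidates, key=surrogate_score, reverse=True)[:3]
best = max(pruned, key=true_objective)
```

The saving is in the number of `true_objective` calls: 3 instead of 6 here, and exponentially more for realistic beam counts.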
Researchers have also employed multi-objective DRL (MO-DRL) for the DVB-S2X
satellite. Under real conditions, Zhang et al. [81] demonstrated that the
low-complexity MO-DRL algorithm could ensure the fairness of each cell and
improve throughput over previous techniques, including DRL [79], by 0.172%.
In contrast, the complexity of a GA producing a similar result is about 110
times that of the MO-DRL model. Hu et al. [82] proposed a multi-action
selection technique based on double-loop learning and obtained a
multi-dimensional state using a DNN. Their results showed that the proposed
technique can achieve different objectives simultaneously, and can allocate
resources intelligently by adapting to user requirements and channel
conditions.
### III-B Anti-jamming
#### III-B1 Definition & limitations
Satellite communication systems are required to cover a wide area and provide
high-speed communication and high-capacity transmission. However, in tactical
communication systems using satellites, reliability and security are the prime
concerns; therefore, an anti-jamming (AJ) capability is essential. Jamming
attacks could be launched toward main locations and crucial devices in a
satellite network to reduce or even paralyze the throughput. Several AJ
methods have thus been designed to reduce possible attacks and guarantee
secure satellite communication.
The frequency-hopping (FH) spread spectrum method has been preferred in many
prior tactical communication systems using satellites [83, 84]. Using the
dehop–rehop transponder method employing FH-frequency division multiple access
(FH-FDMA) scenarios, Bae et al. [85] developed an efficient synchronization
method with an AJ capability.
Most prior AJ techniques are not based on learning and thus cannot deal with
clever jamming techniques that are capable of continuously adjusting the
jamming methodology through interaction and learning. Advances in AI give
attackers advanced tools to mount diverse and intelligent jamming attacks
based on learning approaches, which present a serious threat to satellite
communication reliability. In two such examples, a smart jamming formulation
automatically adjusted the jamming channel [86], whereas a smart jammer
maximized the jamming effect by adjusting both the jamming power and channel
[87]. In addition, attacks could be caused by multiple jammers simultaneously
implementing intelligent jamming attacks based on learning approaches.
Although this may be an unlikely scenario, it has not yet been seriously
considered. Further, most researchers have focused on defending against
jamming attacks in the frequency domain rather than on space-based AJ
techniques, such as AJ routing.
#### III-B2 AI-based solutions
By using a long short-term memory (LSTM) network, which is a DL RNN, to learn
the temporal trend of a signal, Lee et al. [88] demonstrated a reduction of
overall synchronization time in the previously discussed FH-FDMA scenario
[85]. Han et al. [89] proposed the use of a learning approach for AJ to block
smart jamming in the Internet of Satellites (IoS) using a space-based AJ
method, AJ routing, summarized in Fig.14. By combining game theory modeling
with RL and modeling the interactions between smart jammers and satellite
users as a Stackelberg AJ routing game, they demonstrated how to use DL to
deal with the large decision space caused by the high dynamics of the IoS and
RL to deal with the interplay between the satellites and the smart jamming
environment. DRL thus made it possible to solve the routing selection issue
for the heterogeneous IoS while preserving an available routing subset to
simplify the decision space for the Stackelberg AJ routing game. Based on this
routing subset, a popular RL algorithm, Q-Learning, was then used to respond
rapidly to intelligent jamming and adapt AJ strategies.
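A toy version of Q-learning for anti-jamming channel selection (illustrative only; [89] couples RL with a Stackelberg routing game over the IoS, and the sweep jammer below is invented) might look like:

```python
# Toy Q-learning for anti-jamming channel selection: a sweep jammer blocks
# channel (t mod 3) at time t, and the agent learns which channel is safe
# given the previously jammed channel. All dynamics are invented.
N_CH = 3
Q = [[0.0] * N_CH for _ in range(N_CH)]   # state = last jammed channel
alpha, gamma = 0.5, 0.8

for t in range(600):
    state = t % N_CH                       # channel jammed at the last step
    jammed_now = (t + 1) % N_CH            # sweep jammer moves one channel up
    for a in range(N_CH):                  # full sweep keeps the toy deterministic
        r = 0.0 if a == jammed_now else 1.0
        Q[state][a] += alpha * (r + gamma * max(Q[jammed_now]) - Q[state][a])

# Learned policy: in each state, transmit on a channel the jammer will miss.
policy = [max(range(N_CH), key=lambda a: Q[s][a]) for s in range(N_CH)]
```

Against an adaptive (rather than periodic) jammer the environment becomes a game, which is why [89, 90] wrap the RL agent in a Stackelberg formulation.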
Han et al. [90] later combined game theory modeling and RL to obtain AJ
policies according to the dynamic and unknown jamming environment in the
Satellite-Enabled Army IoT (SatIoT). Here, a distributed dynamic AJ coalition
formation game was examined to decrease the energy use in the jamming
environment, and a hierarchical AJ Stackelberg game was proposed to express
the confrontational interaction between jammers and SatIoT devices. Finally,
RL-based algorithms were utilized to obtain sub-optimal AJ policies according
to the jamming environment.
Figure 14: Space-based anti-jamming (AJ) routing. The red line represents the
detected jammed path, and the green line represents the suggested path [89].
### III-C Network Traffic Forecasting
#### III-C1 Definition & limitations
Network traffic forecasting is a proactive approach that aims to guarantee
reliable and high-quality communication, as the predictability of traffic is
important in many satellite applications, such as congestion control, dynamic
routing, dynamic channel allocation, network planning, and network security.
Satellite network traffic is self-similar and demonstrates long-range-
dependence (LRD) [91]. To achieve accurate forecasting, it is therefore
necessary to consider its self-similarity. However, forecasting models for
terrestrial networks based on self-similarity have a high computational
complexity; as the on-board satellite computational resources are limited,
terrestrial models are not suitable for satellites. An efficient traffic
forecasting design for satellite networks is thus required.
Several researchers have performed traffic forecasting for both terrestrial
and satellite networks; these techniques have included the Markov [92],
autoregressive moving average (ARMA) [93], autoregressive integrated moving
average (ARIMA) [94], and fractional ARIMA (FARIMA) [95] models. By using
empirical mode decomposition (EMD) to decompose the network traffic and then
applying the ARMA forecasting model, Gao et al. [96] demonstrated remarkable
improvement.
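A minimal autoregressive forecast fitted by least squares illustrates the ARMA-family approach cited above (the traffic trace is synthetic; real satellite traffic is LRD and needs richer models such as FARIMA):

```python
import numpy as np

# Fit a toy AR(p) model to synthetic periodic traffic and measure the
# in-sample one-step forecast error.
rng = np.random.default_rng(0)
t = np.arange(200)
traffic = 10 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, 200)

p = 3  # AR order
# Build the lag matrix: row j holds traffic[p+j-1], traffic[p+j-2], ...
X = np.column_stack([traffic[p - i - 1 : len(traffic) - i - 1]
                     for i in range(p)])
X = np.column_stack([X, np.ones(len(traffic) - p)])   # intercept column
y = traffic[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)          # least-squares fit

one_step = X @ coef                        # in-sample one-step forecasts
rmse = float(np.sqrt(np.mean((one_step - y) ** 2)))
```

For this short-range-dependent toy signal the one-step RMSE sits near the noise floor; under LRD traffic the AR error degrades, which motivates the FARIMA- and EMD-based designs above.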
The two major difficulties facing satellite traffic forecasting are the LRD of
satellite networks and the limited on-board computational resources. Due to
the LRD property of satellite networks, short-range-dependence (SRD) models
have failed to achieve accurate forecasting. Although previous LRD models have
achieved better results than SRD models, they suffer from high complexity. To
address these issues, researchers have turned to AI techniques.
#### III-C2 AI-based solutions
Katris and Daskalaki [95] combined FARIMA with NNs for internet traffic
forecasting, whereas Pan et al. [97] combined a differential evolution with
NNs for network traffic prediction. Due to the high complexity of classical
NNs, a least-squares SVM, which is an optimized version of an SVM, has also been
used for forecasting [98]. By applying principal component analysis (PCA), to
reduce the input dimensions and then a generalized regression NN, Ziluan and
Xin [99] achieved higher-accuracy forecasting with less training time. Zhenyu
et al. [100] used traffic forecasting as a part of their distributed routing
strategy for LEO satellite networks. An extreme learning machine (ELM) has
also been employed for traffic load forecasting of satellite nodes before routing
[101]. Bie et al. [91] used EMD to decompose the traffic of the satellite with
LRD into a series with SRD at a single frequency to decrease the prediction
complexity and increase the speed. Their combined EMD, fruit-fly optimization,
and ELM methodology achieved more accurate forecasting at a higher speed than
prior approaches.
### III-D Channel Modeling
#### III-D1 Definition & limitations
A channel model is a mathematical representation of the effect of a
communication channel through which wireless signals are propagated; it is
modeled as the impulse response of the channel in the frequency or time
domain.
A wireless channel presents a variety of challenges for reliable high-speed
communication, as it is vulnerable to noise, interference, and other channel
impediments, including path loss and shadowing. Of these, path loss is caused
by the dissipation of the power emitted by the transmitter and by propagation
channel effects, whereas shadowing is caused by obstacles between the
receiver and transmitter that absorb power [102].
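The standard textbook form of these effects is the log-distance path loss model with log-normal shadowing, PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma; a sketch with invented parameter values:

```python
import math

# Log-distance path loss with log-normal shadowing (parameter values are
# invented for illustration): PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma.
def path_loss_db(d, d0=1.0, pl0=40.0, n=3.5, shadowing_db=0.0):
    """d: distance in meters; n: path loss exponent; shadowing_db: a
    zero-mean Gaussian sample in dB (0 here for the deterministic part)."""
    return pl0 + 10.0 * n * math.log10(d / d0) + shadowing_db

pl_100m = path_loss_db(100.0)   # 40 + 35*log10(100) = 110 dB
```

The channel-parameter prediction work discussed below effectively learns the exponent n and the shadowing standard deviation from imagery instead of fitting them from drive-test measurements.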
Precise channel models are required to assess the performance of mobile
communication systems and therefore to enhance coverage for existing
deployments. Channel models may also be used to forecast propagation in
planned deployment layouts, which allows for assessment before deployment
and for optimizing the coverage and capacity of actual systems.
For a small number of possible transmitter positions, extensive outdoor
environment measurements can be performed to estimate the channel parameters
[103, 104]. As more advanced technologies have been adopted in wireless
communication, more advanced channel modeling has been required; this has
motivated the use of stochastic models that are computationally efficient
while providing satisfactory results [105].
Ray tracing is used for channel modeling, which requires 3D images that are
generally generated using computer vision methods including stereo-vision-
based depth estimation [106, 107], [108, 109].
A model proposed for an urban environment requires features including road
widths, street orientation angles, and building heights [110]. A simplified
model was then proposed by Fernandes and Soares [111] that required only the
proportion of building occupation between the receiver and transmitter, which
could be computed from segmented images manually or automatically [112].
Despite the satisfactory performance of some of the listed techniques, they
still have many limitations. For example, the 3D images required by ray
tracing are not generally available, and their generation is not
computationally efficient. Even when the images are available, ray tracing is
computationally costly and data exhaustive and therefore is not appropriate
for real-time coverage area optimization. Further, the detailed data required
for the model presented by Cichon and Kurner [110] is often unavailable.
#### III-D2 AI-based solutions
Some early applications of AI for path loss forecasting have been based on
classical ML algorithms such as SVM [113, 114], NNs [115, 116, 117, 118, 119,
120] and decision trees [121]. Interested readers are referred to a survey of
ML-based path loss prediction approaches for further details [122].
Figure 15: Channel parameters prediction. 2D aerial/satellite images are used
as input to a deep convolutional neural network (CNN) to predict channel
parameters. The model is trained separately for each parameter.
However, although previous ML efforts have shown great results, many require
3D images. Researchers have recently thus shifted their attention to using DL
algorithms with 2D satellite/aerial images for path loss forecasting. For
example, Ates et al. [123] approximated channel parameters, including the
standard deviation of shadowing and the path loss exponent, from satellite
images using deep CNN without the use of any added input parameters, as shown
in Fig.15.
By using a DL model on satellite images and other input parameters to predict
the reference signal received power (RSRP) for specific receiver locations in
a specific scenario/area, Thrane et al. [124] demonstrated a gain improvement
of $\approx 1$ and $\approx 4.7$ at 811 MHz and 2630 MHz respectively, over
previous techniques, including ray tracing. Similarly, Ahmadien et al. [125]
applied DL on satellite images for path loss prediction, although they focused
only on satellite images without any supplemental features and worked on more
generalized data. Despite the practicality of this method, as it only needs
satellite images to forecast the path loss distribution, 2D images will not
always be sufficient to characterize the 3D structure. In these cases, more
features (e.g., building heights) must be input into the model.
### III-E Telemetry Mining
#### III-E1 Definition & limitations
Telemetry is the process of recording and transferring measurements for
control and monitoring. In satellite systems, on-board telemetry helps mission
control centers track the platform’s status, detect abnormal events, and control
various situations.
Satellite failure can be caused by a variety of things; most commonly, failure
is due to the harsh environment of space, i.e., heat, vacuum, and radiation.
The radiation environment can affect critical components of a satellite,
including the communication system and power supply.
Telemetry processing enables tracking of the satellite’s behavior to detect
and minimize failure risks. Finding correlations, recognizing patterns,
detecting anomalies, classifying, forecasting, and clustering are applied to
the acquired data for fault diagnosis and reliable satellite monitoring.
One of the earliest and simplest techniques used in telemetry analysis is
limit checking. The method is based on setting a precise range for each
feature (e.g., temperature, voltage, and current), and then monitoring the
variation of each feature to detect out-of-range events. The main advantage
of this algorithm is its simplicity, as limits can be chosen and updated
easily to control spacecraft operation.
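Limit checking can be sketched in a few lines (the feature names and ranges below are invented for illustration):

```python
# Minimal limit-checking telemetry monitor: each feature has a fixed
# allowed range, and any out-of-range value raises an alarm. The feature
# names and limits here are invented examples.
LIMITS = {"temp_c": (-20.0, 60.0), "bus_v": (26.0, 34.0), "current_a": (0.0, 5.0)}

def check_limits(sample, limits=LIMITS):
    """Return the list of (feature, value) pairs that fall outside range."""
    return [(k, v) for k, v in sample.items()
            if k in limits and not (limits[k][0] <= v <= limits[k][1])]

alarms = check_limits({"temp_c": 72.5, "bus_v": 28.1, "current_a": 1.2})
```

The simplicity is also the weakness: fixed thresholds cannot capture correlated or slowly drifting anomalies, which is what the ML-based methods below target.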
Complicated spacecraft with complex and advanced applications challenge
current space telemetry systems. Narrow wireless bandwidth and fixed-length
frame telemetry make transmitting the rapidly growing telemetry volumes
difficult. In addition, the discontinuous short-term contacts between
spacecraft and ground stations limit the data transmission capability.
Analyzing, monitoring, and interpreting the huge number of telemetry
parameters can be infeasible due to the high complexity of the data.
#### III-E2 AI-based solutions
In recent years, AI techniques have been largely considered in space missions
with telemetry. Satellite health monitoring has been performed using
probabilistic clustering [126], dimensionality reduction and hidden Markov
models [127], and regression trees [128], whereas others have developed
anomaly detection methods using the k-nearest neighbor (kNN), SVM, and LSTM
algorithms, tested on the telemetry of Centre National d’Etudes Spatiales
spacecraft [129, 130, 131].
Further, space operation assistants have been developed in diverse
space applications using data-driven [132] and model-based [133] monitoring
methods. In their study of the use of AI for fault diagnosis in general and
for space utilization, Sun et al. [134] argued that the most promising
direction is the use of DL and suggested its use for fault diagnosis for
space utilization in China.
By comparing different ML algorithms using telemetry data from the Egyptsat-1
satellite, Ibrahim et al. [135] demonstrated the high prediction accuracy of
LSTM, ARIMA, and RNN models. They suggested simple linear regression for
forecasting critical satellite features for short-lifetime satellites (i.e.,
3–5 years) and NNs for long-lifetime satellites (15–20 years).
Unlike algorithms designed to operate on the ground in the mission control
center, Wan et al. [136] proposed a self-learning classification algorithm to
achieve on-board telemetry data classification with low computational
complexity and low time latency.
### III-F Ionospheric Scintillation Detecting
#### III-F1 Definition & limitations
Figure 16: Representation of ionospheric scintillation, where distortion
occurs during signal propagation. The blue, green, and red lines show the
line-of-sight signal paths from the satellite to the earth antennas, the
signal fluctuation, and the signal delay, respectively.
Signal transmission from satellites toward the Earth can be notably impacted
by propagation through the atmosphere, especially the ionosphere, the
ionized part of the upper atmosphere, which is distinguished by an elevated
density of free electrons (Fig.16). The potential irregularities and
gradients of ionization can distort the signal phase and amplitude, in a
process known as ionospheric scintillation.
In particular, propagation through the ionosphere can cause distortion of
global navigation satellite system (GNSS) signals, leading to significant
errors in the GNSS-based applications. GNSSs are radio-communication satellite
systems that allow a user to compute the local time, velocity, and position in
any place on the Earth by processing signals transferred from the satellites
and conducting trilateration [137]. GNSSs can also be used in a wide variety
of applications, such as scientific observations.
Because of the low-received power of GNSS waves, any errors significantly
threaten the accuracy and credibility of the positioning systems. GNSS signals
propagating through the ionosphere face the possibility of both a temporal
delay and scintillation. Although delay compensation methods are applied to
all GNSS receivers [137], scintillation is still a considerable issue, as its
quasi-random nature makes it difficult to model [138]. Ionospheric
scintillation thus remains a major limitation to high-accuracy applications of
GNSSs. The accurate detection of scintillation is thus required to improve
the credibility and quality of GNSSs [139]. To observe the signals, which are
a source of knowledge for interpreting and modeling the upper layers of the
atmosphere, and to raise caution and take countermeasures for GNSS-based
applications, networks of GNSS receivers have been installed at both high
and low latitudes, where scintillation is expected to occur [140, 141].
Robust receivers and proper scintillation-detection algorithms are thus both
required [142].
To evaluate the magnitude of scintillation impacting a signal, many
researchers have employed simple event triggers, based on the comparison of
the amplitude and phase of two signals over a defined interval [143]. Other
proposed alternatives have included using wavelet techniques [144],
decomposing the carrier-to-noise density power ratio via adaptive
frequency-time techniques [145], and assessing the histogram statistical
properties of collected samples [146].
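The common trigger statistic behind such amplitude-based detectors is the S4 index, S4^2 = (⟨I^2⟩ − ⟨I⟩^2) / ⟨I⟩^2, computed over the (detrended) signal intensity; a minimal implementation:

```python
import numpy as np

# Amplitude scintillation index S4: normalized standard deviation of the
# signal intensity over the observation interval. The sample values below
# are invented for illustration.
def s4_index(intensity):
    i = np.asarray(intensity, dtype=float)
    return float(np.sqrt((np.mean(i ** 2) - np.mean(i) ** 2)
                         / np.mean(i) ** 2))

calm = s4_index([1.0, 1.02, 0.98, 1.01, 0.99])      # weak fluctuation
disturbed = s4_index([1.0, 1.8, 0.3, 1.6, 0.4])      # strong fluctuation
```

A fixed threshold on S4 is exactly the "simple predefined threshold" criticized below: it cannot separate scintillation from multi-path or catch weak, high-variance events.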
Using simple predefined thresholds to evaluate the magnitude of scintillation
can be deceptive due to its complexity. The loss of the transient phases of
events could cause a delay in raising possible caution flags, and weak events
with high variance could be missed. Further, it can be difficult to
distinguish between signal distortions caused by other phenomena, including
multi-path. Meanwhile, the proposed alternatives depend on complex and
computationally costly operations or on customized receiver architectures.
#### III-F2 AI-based solutions
Recently, studies have shown that AI can be utilized for the detection of
scintillation. For example, Rezende et al. [147] presented a survey of data
mining methods that rely on observing and integrating GNSS receivers.
A technique based on the SVM algorithm has been suggested for amplitude
scintillation detection [148, 149], and then later expanded to phase
scintillation detection [150, 151].
By using decision trees and RF to systematically detect ionospheric
scintillation events impacting the amplitude of the GNSS signals, Linty et
al.’s [152] methodology outperformed state-of-the art methodologies in terms
of accuracy (99.7%) and F-score (99.4%), thus reaching the levels of a manual
human-driven annotation.
More recently, Imam and Dovis [153] proposed the use of decision trees, to
differentiate between ionospheric scintillation and multi-path in GNSS
scintillation data. Their model, which annotates the data as scintillated,
multi-path affected, or clean GNSS signal, demonstrated an accuracy of 96%.
### III-G Managing Interference
#### III-G1 Definition & limitations
Interference management is mandatory for satellite communication operators,
as interference negatively affects the communication channel, resulting in a
reduced QoS, lower operational efficiency, and loss of revenue [154].
Moreover, interference is a common event that is increasing with the growing
congestion of the satellite frequency band, as more countries are launching
satellites and more applications are expected. With the growing number of
users sharing the same frequency band, the possibility of interference
increases, as does the risk of intentional interference, as discussed in
section III.B.
Interference management is thus essential to preserve high-quality and
reliable communication systems; management includes the detection,
classification, and suppression of interference, as well as the application
of techniques to minimize its occurrence.
Interference detection is a well-studied subject that has been addressed in
the past few decades [155, 156], especially for satellite communication [154,
157].
However, researchers have commonly relied on the decision theory of
hypothesis testing, in which specific knowledge of the signal characteristics
and the channel model is needed. Due to the diversity of contemporary
wireless standards, designing a specific detector for each signal category is
a fruitless approach.
#### III-G2 AI-based solutions
Figure 17: Satellite selection and antenna adjustment
To minimize interference, Liu et al. [158] suggested the use of AI for moving
terminals and stations in satellite-terrestrial networks by proposing a
framework combining different AI approaches including SVM, unsupervised
learning and DRL for satellite selection, antenna pointing and tracking, as
summarized in Fig.17.
Another AI-based approach performs automatic real-time interference detection
based on forecasting the next signal spectrum to be received in the absence
of anomalies, using an LSTM trained on historical anomaly-free spectra
[159]. The predicted spectrum is then compared to the received signal using a
designed metric to detect anomalies.
Henarejos et al. [160] proposed the use of two AI-based approaches, DNN AEs
and LSTM, for detecting and classifying interference, respectively. In the
former, the AE is trained with interference-free signals and tested against
other signals without interference to obtain practical thresholds. The
difference in error in signals with and without interference is then exploited
to detect the presence of interference.
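The detection step of such an approach can be sketched as follows (the autoencoder itself is omitted; the reconstruction errors below are synthetic stand-ins, and the 99th-percentile rule is an invented example of how a "practical threshold" might be set):

```python
import numpy as np

# Threshold-based detection on autoencoder reconstruction errors: the
# threshold is calibrated on errors from interference-free signals, and
# test signals whose error exceeds it are flagged as interfered.
rng = np.random.default_rng(1)
clean_errors = rng.normal(0.10, 0.02, 500)       # stand-in errors, clean set
threshold = float(np.percentile(clean_errors, 99))

test_errors = np.array([0.09, 0.11, 0.35, 0.12, 0.50])  # two interfered signals
flags = test_errors > threshold
```

Because the AE never sees interference during training, interfered signals reconstruct poorly and their errors cleanly separate from the calibration distribution.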
### III-H Remote sensing (RS)
#### III-H1 Definition & limitations
RS is the process of extracting information about an area, object or
phenomenon by processing its reflected and emitted radiation at a distance,
generally from a satellite or aircraft.
RS has a wide range of applications in multiple fields including land
surveying, geography, geology, ecology, meteorology, oceanography, military
and communication. As RS offers the possibility of monitoring areas that are
dangerous, difficult, or impossible to access, including mountains, forests,
oceans, and glaciers, it is a popular and active research area.
#### III-H2 AI-based solutions
The revolution in computer vision capabilities caused by DL has spurred the
development of RS through the adoption of state-of-the-art DL algorithms on
satellite images; image classification for RS has become one of the most
popular tasks in computer vision. For example, Kussul et al. [161] used DL to
classify land
coverage and crop types using RS images from Landsat-8 and Sentinel-1A over a
test site in Ukraine. Zhang et al. [162] combined DNNs by using a gradient-
boosting random CNN for scene classification. More recently, Chirayath et al.
[163] proposed the combination of kNN and CNN to map coral reef marine
habitats worldwide with RS imaging. RS and AI have also been used in
communication theory applications, such as those discussed in section III.D
[123], [124] and [125].
Many object detection and recognition applications have been developed using
AI on RS images [164]. Recently, Zhou et al. [165] proposed the use of YOLOv3
[166, 167], a CNN-based object detection algorithm, for vehicle detection in
RS images. Others have proposed the use of DL for other object detection
tasks, such as building [168], airplane [169], cloud [170, 171, 172], ship
[173, 174], and military target [175] detection. AI has also been applied to
segment and restore RS images, e.g., in cloud restorations, during which
ground regions shadowed by clouds are restored.
Recently, Zheng et al. [176] proposed a two-stage cloud removal method in
which U-Net [177] and GANs are used to perform cloud segmentation and image
restoration, respectively.
AI has also been proposed for on-board scheduling of agile Earth-observing
satellites, as autonomy improves their performance and allows them to acquire
more images by relying on on-board scheduling for quick decision-making. By
comparing the use
of RF, NNs, and SVM to prior learning and non-learning-based approaches, Lu et
al. [178] demonstrated that RF improved both the solution quality and response
time.
### III-I Behavior Modeling
#### III-I1 Definition & limitations
Owing to the increasing numbers of active and inactive (debris) satellites of
diverse orbits, shapes, sizes, orientations and functions, it is becoming
infeasible for analysts to simultaneously monitor all satellites. Therefore,
AI, especially ML, could play a major role by helping to automate this
process.
#### III-I2 AI-based solutions
Mital et al. [179] discussed the potential of ML algorithms to model satellite
behavior. Supervised models have been used to determine satellite stability
[180], whereas unsupervised models have been used to detect anomalous
behavior and a satellite’s location [181], and an RNN has been used to
predict satellite maneuvers over time [182].
Accurate satellite pose estimation, i.e., identifying a satellite’s relative
position and attitude, is critical in several space operations, such as debris
removal, inter-spacecraft communication, and docking. The recent proposal for
satellite pose estimation from a single image via combined ML and geometric
optimization by Chen et al. [183] won first place in the recent Kelvins
pose estimation challenge organized by the European Space Agency [184].
The amount of space debris has increased immensely over the last few years
and poses a critical threat to space missions due to the high velocity of the
debris. It is thus essential to classify space objects and apply collision
avoidance techniques to protect active satellites. As such, Jahirabadkar et
al. [185] presented a survey of diverse AI methodologies for the
classification of space objects using light curves as a differentiating
property.
Yadava et al. [186] employed NNs and RL for on-board attitude determination
and control; their method effectively provided the needed torque to stabilize
a nanosatellite along three axes.
To avoid catastrophic events caused by battery failure, Ahmed et al. [187]
developed an on-board remaining-battery-life estimation system using ML and
logical-analysis-of-data approaches.
### III-J Space-Air-Ground Integrating
#### III-J1 Definition & limitations
Recently, notable advances have been made in ground communication systems to
provide users with higher-quality internet access. Nevertheless, due to the
restricted capacity and coverage area of networks, such services are not
possible everywhere at all times, especially for users in rural or disaster
areas.
Figure 18: Space-air-ground integrated networks (SAGINs) [26]
Although terrestrial networks have the most resources and highest throughput,
non-terrestrial communication systems have a much broader coverage area.
However, non-terrestrial networks have their own limitations; e.g., satellite
communication systems have a long propagation latency, and air networks have a
narrow capacity and unstable links.
To supply users with better and more-flexible end-to-end services by taking
advantage of the way the networks can complement each other, researchers have
suggested the use of space-air-ground integrated networks (SAGINs) [10], which
include the satellites in space, the balloons, airships, and UAVs in the air,
and the ground segment, as shown in Fig.18.
The multi-layered satellite communication system, which consists of GEO, MEO,
and LEO satellites, can use multicast and broadcast methods to improve the
network capacity, crucially easing the growing traffic burden [10, 26]. As
SAGINs allow packet transmission to destinations via multiple paths of
diverse qualities, they can offer different packet transmission methods to
meet diverse service demands [26].
However, the design and optimization of SAGINs is more challenging than that
of conventional ground communication systems owing to their inherent self-
organization, time-variability, and heterogeneity [10]. A variety of factors
that must be considered when designing optimization techniques have thus been
identified [10, 26]. For example, the diverse propagation mediums, the sharing
of frequency bands by different communication types, the high mobility of the
space and air segments, and the inherent heterogeneity between the three
segments, make the network control and spectrum management of SAGIN arduous.
The high mobility results in frequent handoffs, which makes safe routing more
difficult to realize, thus making SAGINs more exposed to jamming. Further, as
optimizing the energy efficiency is also more challenging than in standard
terrestrial networks, energy management algorithms are also required.
#### III-J2 AI-based solutions
In their discussion of challenges facing SAGINs, Kato et al. [26] proposed the
use of a CNN for the routing problem to optimize the SAGIN’s overall
performance using traffic patterns and the remaining buffer size of GEO and
MEO satellites.
Optimizing the satellite selection and the UAV location to maximize the
end-to-end data rate of the source-satellite-UAV-destination communication is
challenging due to the vast number of orbiting satellites and the resulting
time-varying network architecture. To address this problem, Lee et al. [188]
jointly optimized the source-satellite-UAV association and the location of the
UAV via DRL. Their suggested technique achieved up to a 5.74x higher average
data rate than a direct communication baseline in the absence of UAV and
satellite.
For offloading computation-intensive applications, a SAGIN edge/cloud
computing architecture has been developed in which satellites give access to
the cloud and UAVs allow near-user edge computing [189]. Here, a joint
resource allocation and task scheduling approach is used to allocate the
computing resources to virtual machines and schedule the offloaded tasks for
the UAV edge servers, whereas an RL-based computing offloading approach
handles the multidimensional SAGIN resources and learns the dynamic network
conditions. Simulation results confirmed the efficiency and convergence of the
suggested technique.
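The RL-based offloading idea can be sketched with a tiny tabular learner; the load states, offloading targets, and latency figures below are invented stand-ins under simplified one-step dynamics, not the actual DRL agent of [189]:

```python
import random

random.seed(0)

STATES = ["light", "heavy"]                         # toy network-load states
ACTIONS = ["local", "uav_edge", "satellite_cloud"]  # where to run the task

# Hypothetical mean task latencies (ms); the reward is their negative.
LATENCY = {
    ("light", "local"): 80, ("light", "uav_edge"): 30, ("light", "satellite_cloud"): 60,
    ("heavy", "local"): 80, ("heavy", "uav_edge"): 120, ("heavy", "satellite_cloud"): 70,
}

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, eps = 0.1, 0.2  # learning rate, exploration probability

for _ in range(5000):
    s = random.choice(STATES)
    # Epsilon-greedy selection over the current value estimates.
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    r = -LATENCY[(s, a)]                  # lower latency => higher reward
    Q[(s, a)] += alpha * (r - Q[(s, a)])  # one-step tabular update

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES}
# policy → {"light": "uav_edge", "heavy": "satellite_cloud"}
```

Under light load the learner offloads to the nearby UAV edge server; under heavy load, when the UAV is congested, it falls back to the satellite cloud.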
As the heterogeneous multi-layer network requires advanced capacity-management
techniques, Jiang and Zhu [190] suggested a low-complexity technique for
computing the capacity among satellites, together with an RL-based long-term
optimal capacity allocation model that maximizes the long-term utility of the
system.
By formulating the joint resources assignment problem as a joint optimization
problem and using a DRL approach, Qiu et al. [191] proposed a software-defined
satellite-terrestrial network to jointly manage caching, networking, and
computing resources.
### III-K Energy Managing
#### III-K1 Definition & limitations
Recent advances in the connection between ground, aerial, and satellite
networks such as SAGIN have increased the demand imposed on satellite
communication networks. This growing attention towards satellites has led to
increased energy consumption requirements. Satellite energy management thus
represents a hot research topic for the further development of satellite
communication.
Compared with a GEO satellite, an LEO satellite has restricted on-board
resources and moves quickly. Further, an LEO satellite has a limited energy
capacity owing to its small size [192]; as billions of devices need to be
served around the world [193], current satellite resource capability can no
longer satisfy demand. To address this shortage of satellite communication
resources, an efficient resource scheduling scheme that makes full use of the
limited resources must be designed. However, as current resource allocation
schemes have mostly been designed for GEO satellites, they do not consider
many LEO-specific concerns, such as the constrained energy, movement
attributes, or connection and transmission dynamics.
#### III-K2 AI-based solutions
Some researchers have thus turned to AI-based solutions for power saving. For
example, Kothari et al. [27] suggested the usage of DNN compression before
data transmission to improve latency and save power. In the absence of solar
light, satellites depend on battery energy, which places a heavy load on the
satellite battery and can shorten its lifetime, leading to increased costs for
satellite communication networks. To optimize the power allocation in
satellite-to-ground communication using LEO satellites and thus extend their
battery life, Tsuchida et al. [194] employed RL to share the workload of
overworked satellites with nearby satellites under a lower load. Similarly,
implementing DRL for energy-efficient channel allocation in SatIoT allowed for
a 67.86% reduction in energy consumption compared with previous models [195].
Mobile-edge-computing-enhanced SatIoT networks contain diverse satellites and
several satellite gateways, in which user association, offloading decisions,
and computing and communication resource allocation can be jointly optimized
to minimize latency and energy costs. In a recent example, a joint
user-association and offloading decision methodology with optimal resource
allocation based on DRL, proposed by Cui et al. [196], improved the long-term
latency and energy costs.
### III-L Other Applications
#### III-L1 Handoff Optimization
Link-layer handoff occurs when the change of one or more links is needed
between the communication endpoints due to the dynamic connectivity patterns
of LEO satellites. The management of handoff in LEO satellites varies
remarkably from that of terrestrial networks, since handoffs happen more
frequently due to the movement of satellites [3]. Many researchers have thus
focused on handoff management in LEO satellite networks.
In general, user equipment (UE) periodically measures the strength of
reference signals of different cells to ensure access to a strong cell, as the
handoff decision depends on the signal strength or some other parameters.
Moreover, the historical reference signal received power (RSRP) contains
information that helps avoid unnecessary handoffs.
Thus, Zhang [197] converted the handoff decision into a classification
problem. Although the historical RSRP is a time series, a CNN was employed
rather than an RNN because the feature map of historical RSRP has a strong
local spatial correlation, and the use of an RNN could lead to a series of
wrong decisions, as one decision largely impacts future decisions. With the
proposed AI-based method, the number of handoffs was decreased by more than
25% for more than 70% of the UEs, while the average RSRP was only reduced by
3% compared with the commonly used “strongest beam” method.
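The intuition behind applying a convolution to the locally correlated RSRP history can be sketched with a single hand-crafted 1D filter; the kernel, threshold, and RSRP traces below are hypothetical illustrations, not the trained CNN of [197]:

```python
import numpy as np

def handoff_score(rsrp, kernel=None):
    """Score a historical-RSRP window with one 1D convolution.

    A negative-slope kernel responds strongly to a sustained drop in
    signal strength; max-pooling the feature map yields a scalar score.
    """
    if kernel is None:
        kernel = np.array([1.0, 0.5, -0.5, -1.0])  # hypothetical drop detector
    # np.convolve with the kernel reversed computes a 1D cross-correlation.
    feat = np.convolve(rsrp, kernel[::-1], mode="valid")
    return float(feat.max())

def decide_handoff(rsrp, threshold=5.0):
    """Binary handoff decision from the pooled convolution score."""
    return handoff_score(rsrp) > threshold

falling = np.array([-80.0, -81, -83, -90, -97, -104])  # dBm, serving cell fading
stable = np.array([-85.0, -84, -86, -85, -84, -85])    # dBm, healthy cell

decide_handoff(falling)  # True: sustained drop detected
decide_handoff(stable)   # False: only small fluctuations
```

A real CNN learns many such filters from data; this sketch only shows why local patterns in the RSRP window, rather than long-range recurrence, carry the decisive information.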
#### III-L2 Heat Source Layout Design
The effective design of the heat sources used can enhance the thermal
performance of the overall system, and has thus become a crucial aspect of
several engineering areas, including integrated circuit design and satellite
layout design. With the increasingly small size of components and higher power
intensity, designing the heat-source layout has become a critical problem
[198]. Conventionally, the optimal design is acquired by exploring the design
space by repeatedly running the thermal simulation to compare the performance
of each scheme [199, 200, 201]. To avoid the extremely large computational
burden of traditional techniques, Sun et al. [202] employed an inverse design
method in which the layout of heat sources is directly generated from a given
expected thermal performance based on a DL model called Show, Attend, and Read
[203]. Their developed model was capable of learning the underlying physics of
the design problem and thus could efficiently forecast the design of heat
sources under a given condition without performing any simulations. Other DL
algorithms have been used in diverse design areas, such as mechanics [204],
optics [205], fluids [206], and materials [207].
#### III-L3 Reflectarray analysis and design
ML algorithms have been employed in the analysis and design of antennas [22],
including the analysis [208, 209] and design [210, 211] of reflectarrays. For
example, NNs were used by Shan et al. [212] to forecast the phase-shift,
whereas kriging was suggested to forecast the electromagnetic response of
reflectarray components [213]. Support vector regression (SVR) has been used
to accelerate the analysis [214] and to directly optimize narrowband
reflectarrays [215]. To speed up calculations without reducing their precision,
Prado et al. [216] proposed a wideband SVR-based reflectarray design method,
and demonstrated its ability to obtain wideband, dual-linear polarized,
shaped-beam reflectarrays for direct broadcast satellite applications.
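The surrogate-modeling idea behind these SVR approaches can be sketched with kernel ridge regression (a close relative of SVR) on invented data; the size-to-phase curve below is a toy stand-in, not a measured reflectarray response:

```python
import numpy as np

def rbf_kernel(A, B, gamma=30.0):
    """Gaussian (RBF) kernel matrix between row-vector sample sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelRidge:
    """Kernel ridge regression: a simple stand-in for an SVR surrogate."""
    def __init__(self, gamma=30.0, lam=1e-3):
        self.gamma, self.lam = gamma, lam
    def fit(self, X, y):
        K = rbf_kernel(X, X, self.gamma)
        self.X = X
        # Regularized solve: alpha = (K + lam*I)^-1 y
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self
    def predict(self, X):
        return rbf_kernel(X, self.X, self.gamma) @ self.alpha

# Toy surrogate target: phase shift (deg) as a smooth S-curve of a
# normalized patch size -- invented data, not a measured EM response.
size = np.linspace(0.0, 1.0, 40)[:, None]
phase = 360.0 / (1.0 + np.exp(-10 * (size[:, 0] - 0.5)))

model = KernelRidge().fit(size, phase)
pred = model.predict(np.array([[0.5]]))  # close to 180 deg at the midpoint
```

Once fitted on a modest set of simulated samples, such a surrogate replaces the expensive electromagnetic solver inside the design loop, which is the speed-up the SVR papers exploit.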
#### III-L4 Carrier Signal Detection
As each signal must be separated before classification, modulation,
demodulation, decoding and other signal processing, localization, and
detection of carrier signals in the frequency domain is a crucial problem in
wireless communication.
The algorithms used for carrier signal detection have been commonly based on
threshold values and required human intervention [217, 218, 219, 220, 221,
222], although several improvements have been made including the use of a
double threshold [223, 224]. Kim et al. [225] proposed the use of a slope-
tracing-based algorithm to separate the interval of signal elements based on
signal properties such as amplitude, slope, deflection width, or distance
between neighboring deflections.
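A minimal version of the classic threshold scheme can be sketched as follows; the synthetic spectrum and the mean-plus-k-sigma rule are simplified illustrations, not the double-threshold or slope-tracing variants cited above:

```python
import numpy as np

def detect_carriers(power, k=2.0):
    """Flag spectrum bins above mean + k*std and merge them into
    contiguous [start, stop) index intervals."""
    thresh = power.mean() + k * power.std()
    mask = power > thresh
    intervals, start = [], None
    for i, hit in enumerate(mask):
        if hit and start is None:
            start = i                      # interval opens
        elif not hit and start is not None:
            intervals.append((start, i))   # interval closes
            start = None
    if start is not None:
        intervals.append((start, len(mask)))
    return intervals

# Synthetic broadband power spectrum: a flat noise floor plus two
# hypothetical subcarrier humps.
rng = np.random.default_rng(1)
spectrum = rng.normal(0.0, 0.1, 512)
spectrum[100:120] += 5.0
spectrum[300:330] += 4.0

bands = detect_carriers(spectrum)  # → [(100, 120), (300, 330)]
```

The weakness that motivates the DL methods below is visible here: the result hinges on a hand-tuned threshold, which breaks down for weak carriers or a non-flat noise floor.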
More recently, DL has been applied to carrier signal detection; for example,
Morozov and Ovchinnikov [226] applied a fully connected NN for detection in
FSK signals, whereas Yuan et al. [227] used DL for the blind detection of
Morse signals in wideband spectrum data. Huang et al. [228] employed a fully
convolutional network (FCN) model to detect carrier signals in the broadband
power spectrum. An FCN is a DL method for semantic image segmentation; here,
the broadband power spectrum is regarded as a 1D image and each subcarrier as
a target object, which transforms the carrier detection problem on the
broadband into a semantic 1D image segmentation problem [229, 230, 231]. A 1D
deep-CNN-based FCN was designed to categorize each point on a broadband power
spectrum array into two categories (i.e., subcarrier or noise) and then locate
the subcarrier signals on the broadband power spectrum. After being trained
and validated using simulated and real satellite broadband power spectrum
datasets, respectively, the proposed deep CNN successfully detected the
subcarrier signals in the broadband power spectrum and achieved a higher
accuracy than the slope-tracing method.
## Conclusion
This review provided an overview of AI and its different sub-fields, including
ML, DL, and RL. Some limitations to satellite communication were then
presented and their proposed and potential AI-based solutions were discussed.
The application of AI has shown great results in a wide variety of satellite
communication aspects, including beam-hopping, AJ, network traffic
forecasting, channel modeling, telemetry mining, ionospheric scintillation
detecting, interference managing, remote sensing, behavior modeling, space-
air-ground integrating, and energy managing. Future work should aim to apply
AI to achieve more efficient, secure, reliable, and high-quality communication
systems.
## References
* [1] G. Maral, M. Bousquet, and Z. Sun, “Introduction,” in _Satellite Communications Systems: Systems, Techniques and Technology,_ 6th ed. Hoboken, NJ, USA: Wiley, 2020, ch. 1, sec. 3, pp. 3–11.
* [2] F. Rinaldi, H. L. Maattanen, J. Torsner, S. Pizzi, S. Andreev, A. Iera, Y. Koucheryavy, and G. Araniti “Non-Terrestrial Networks in 5G & Beyond: A Survey,” in _IEEE Access,_ vol. 8, pp. 165178-165200, 2020, doi: 10.1109/ACCESS.2020.3022981.
* [3] P. Chowdhury, M. Atiquzzaman, and W. Ivancic, “Handover schemes in satellite networks: State-of-the-art and future research directions,” _IEEE Commun. Surveys Tuts.,_ vol. 8, no. 4, pp. 2-14, Aug. 2006.
* [4] P. Chini, G. Giambene, and S. Kota, “A survey on mobile satellite systems,” _Int. J. Satell. Commun. Netw.,_ vol. 28, no. 1, pp. 29-57, Aug. 2009.
* [5] P.-D. Arapoglou, K. Liolis, M. Bertinelli, A. Panagopoulos, P. Cottis, and R. De Gaudenzi, “MIMO over satellite: A review,” _IEEE Commun. Surveys Tuts.,_ vol. 13, no. 1, pp. 27-51, 1st Quart. 2011.
* [6] M. De Sanctis, E. Cianca, G. Araniti, I. Bisio, and R. Prasad, “Satellite communications supporting Internet of remote things,” _IEEE Internet Things J.,_ vol. 3, no. 1, pp. 113-123, Feb. 2016.
* [7] R. Radhakrishnan, W. W. Edmonson, F. Afghah, R. M. Rodriguez-Osorio, F. Pinto, and S. C. Burleigh, “Survey of inter-satellite communication for small satellite systems: Physical layer to network layer view,” _IEEE Commun. Surveys Tuts.,_ vol. 18, no. 4, pp. 2442-2473, May 2016.
* [8] C. Niephaus, M. Kretschmer, and G. Ghinea, “QoS provisioning in converged satellite and terrestrial networks: A survey of the state-of-the-art,” _IEEE Commun. Surveys Tuts.,_ vol. 18, no. 4, pp. 2415-2441, Apr. 2016.
* [9] H. Kaushal and G. Kaddoum, “Optical communication in space: Challenges and mitigation techniques,” _IEEE Commun. Surveys Tuts.,_ vol. 19, no. 1, pp. 57-96, 1st Quart. 2017.
* [10] J. Liu, Y. Shi, Z. M. Fadlullah, and N. Kato, “Space-Air-Ground Integrated Network: A Survey,” _IEEE Communications Surveys & Tutorials,_ vol. 20, no. 4, pp. 2714-2741, Fourthquarter 2018, doi: 10.1109/COMST.2018.2841996.
* [11] S. C. Burleigh, T. De Cola, S. Morosi, S. Jayousi, E. Cianca, and C. Fuchs, “From connectivity to advanced Internet services: A comprehensive review of small satellites communications and networks,” _Wireless Commun. Mobile Comput.,_ vol. 2019, pp. 1-17, May 2019.
* [12] B. Li, Z. Fei, C. Zhou, and Y. Zhang, “Physical-layer security in space information networks: A survey,” _IEEE Internet Things J.,_ vol. 7, no. 1, pp. 33-52, Jan. 2020.
* [13] N. Saeed, A. Elzanaty, H. Almorad, H. Dahrouj, T. Y. Al-Naffouri, and M. -S. Alouini, “CubeSat Communications: Recent Advances and Future Challenges,” _IEEE Communications Surveys & Tutorials,_ vol. 22, no. 3, pp. 1839-1862, thirdquarter 2020, doi: 10.1109/COMST.2020.2990499.
* [14] O. Simeone, “A Very Brief Introduction to Machine Learning With Applications to Communication Systems,” _IEEE Transactions on Cognitive Communications and Networking,_ vol. 4, no. 4, pp. 648-664, Dec. 2018, doi: 10.1109/TCCN.2018.2881442.
* [15] M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Artificial Neural Networks-Based Machine Learning for Wireless Networks: A Tutorial,” _IEEE Communications Surveys & Tutorials,_ vol. 21, no. 4, pp. 3039-3071, Fourthquarter 2019, doi: 10.1109/COMST.2019.2926625.
* [16] Y. Qian, J. Wu, R. Wang, F. Zhu, and W. Zhang, “Survey on Reinforcement Learning Applications in Communication Networks,” _Journal of Communications and Information Networks,_ vol. 4, no. 2, pp. 30-39, June 2020, doi: 10.23919/JCIN.2019.8917870.
* [17] EC. Strinati, S. Barbarossa, JL. Gonzalez, D. Ktenas, N. Cassiau, L. Maret, and C. Dehos, “6G: The Next Frontier: From Holographic Messaging to Artificial Intelligence Using Subterahertz and Visible Light Communication,” _IEEE Vehicular Technology Magazine,_ vol. 14, no. 3, pp. 42-50, Sept. 2019, doi: 10.1109/MVT.2019.2921162.
* [18] J. Jagannath, N. Polosky, A. Jagannath, F. Restuccia, T. Melodia, “Machine learning for wireless communications in the Internet of Things: A comprehensive survey,” _Ad Hoc Networks,_ Volume 93, Jan. 2019, 101913, ISSN 1570-8705, [online] Available: https://doi.org/10.1016/j.adhoc.2019.101913.
* [19] G. P. Kumar and P. Venkataram, “Artificial intelligence approaches to network management: recent advances and a survey,” _Computer Communications,_ Volume 20, Issue 15, Dec. 1997, Pages 1313-1322, ISSN 0140-3664, [online] Available: https://doi.org/10.1016/S0140-3664(97)00094-7.
* [20] Y. Zou, J. Zhu, X. Wang, and L. Hanzo, “A Survey on Wireless Security: Technical Challenges, Recent Advances, and Future Trends,” _Proceedings of the IEEE,_ vol. 104, no. 9, pp. 1727-1765, Sept. 2016, doi: 10.1109/JPROC.2016.2558521.
* [21] S. H. Alsamhi, O. Ma, and M. S. Ansari, “Survey on artificial intelligence based techniques for emerging robotic communication,” _Telecommunication Systems,_ vol. 72, no. 3, pp. 483-503, Mar. 2019, doi: 10.1007/s11235-019-00561-z.
* [22] H. M. E. Misilmani and T. Naous, “Machine Learning in Antenna Design: An Overview on Machine Learning Concept and Algorithms,” _2019 International Conference on High Performance Computing & Simulation (HPCS),_ Dublin, Ireland, 2019, pp. 600-607, doi: 10.1109/HPCS48598.2019.9188224.
* [23] P. S. Bithas, E. T. Michailidis, N. Nomikos, D. Vouyioukas, and A. Kanatas “A survey on machine-learning techniques for UAV-based communications,” _Sensors 2019_ 19.23, Nov. 2019, [online] Available: https://doi.org/10.3390/s19235170.
* [24] M. A. Lahmeri, M. A. Kishk, and MS. Alouini. “Machine learning for UAV-Based networks.” _arXiv preprint,_ 2020, arXiv:2009.11522.
* [25] M. Á. Vázquez, P. Henarejos, A. I. Pérez-Neira, E. Grechi, A. Voight, JC. Gil, I. Pappalardo, FD. Credico, and R. M. Lancellotti, “On the Use of AI for Satellite Communications.” _arXiv preprint,_ 2020, arXiv:2007.10110.
* [26] N. Kato, ZM. Fadlullah, F. Tang, B. Mao, S. Tani, A. Okamura, and J. Liu, “Optimizing Space-Air-Ground Integrated Networks by Artificial Intelligence,” _IEEE Wireless Communications,_ vol. 26, no. 4, pp. 140-147, August 2019, doi: 10.1109/MWC.2018.1800365.
* [27] V. Kothari, E. Liberis, and N. D. Lane. “The Final Frontier: Deep Learning in Space,” _Proceedings of the 21st International Workshop on Mobile Computing Systems and Applications,_ pp. 45-49., 2020.
* [28] F. Chollet, “What is Deep Learning?” in _Deep Learning with Python,_ 1st ed. New York, NY, USA: Manning, 2017, ch. 1, pp. 3–24.
* [29] A. M. Turing, “Computing Machinery and Intelligence,” _Mind,_ vol. 59, no. 236, 1950, pp. 433–460.
* [30] C. M. Bishop, “Linear Models for Classification,” in _Pattern Recognition and Machine Learning, 1_ st ed. Berlin, Heidelberg, Germany: Springer-Verlag, 2006, ch. $4$, pp. _179–224._
* [31] C. M. Bishop, “Kernel Methods,” in _Pattern Recognition and Machine Learning, 1_ st ed. Berlin, Heidelberg, Germany: Springer-Verlag, 2006, ch. $6$, pp. _291–325._
* [32] B. E. Boser, I. M. Guyon, and V. N. Vapnik, “A training algorithm for optimal margin classifiers,” _In Proceedings of the fifth annual workshop on Computational learning theory (COLT ’92),_ New York, NY, USA, Association for Computing Machinery, pp. 144–152., 1992, [online] Available: https://doi.org/10.1145/130385.130401
* [33] F. Fourati, W. Souidene, and R. Attia, “An original framework for Wheat Head Detection using Deep, Semi-supervised and Ensemble Learning within Global Wheat Head Detection (GWHD) Dataset,” _arXiv preprint,_ 2020, arXiv:2009.11977.
* [34] J. Cervantesa, F. Garcia-Lamonta, L. Rodríguez-Mazahuab, and A. Lopezc, “A comprehensive survey on support vector machine classification: Applications, challenges and trends,” _Neurocomputing,_ 2020, 408, 189-215.
* [35] J. R. Quinlan, “Induction of decision trees.” Machine learning 1.1, 1986, _81–106_.
* [36] C. M. Bishop, “Graphical Models,” in _Pattern Recognition and Machine Learning, 1_ st ed. Berlin, Heidelberg, Germany: Springer-Verlag, 2006, ch. $8$, pp. _359–423_.
* [37] L. Breiman, “Random forests,” Machine learning 45.1, 2001, pp. _5-32_.
* [38] L. Breiman, “Bagging predictors,” Machine learning 24.2, 1996, pp. _123-140_.
* [39] J. H. Friedman “Greedy function approximation: a gradient boosting machine,” _Annals of statistics,_ 2001, pp._1189-1232_.
* [40] XGBoost Documentation. [online] Available: https://xgboost.readthedocs.io/en/latest/
* [41] T. Chen and T. He, “Xgboost: extreme gradient boosting.” Package Version: 1.3.2.1, Jan., 2021, [online] Available: https://cran.r-project.org/web/packages/xgboost/vignettes/xgboost.pdf
* [42] [online] Available: https://www.kaggle.com/
* [43] P. Baldi and K. Hornik, “Neural networks and principal component analysis: Learning from examples without local minima.” Neural networks 2.1, 1989, pp. _53–58_.
* [44] C. M. Bishop, “Neural Networks” in _Pattern Recognition and Machine Learning, 1_ st ed. Berlin, Heidelberg, Germany: Springer-Verlag, 2006, ch. $5$, pp. _225–290_.
* [45] R. Hecht-Nielsen, “Theory of the backpropagation neural network.” Neural networks for perception. Academic Press, 1992, pp. _65–93_.
* [46] I. Goodfellow, Y. Bengio, and A. Courville, “Introduction,” in _Deep learning,_ Cambridge: MIT press, 2016, ch. $1$, pp. _1–26._ [online] Available: https://www.deeplearningbook.org/
* [47] I. Goodfellow, Y. Bengio, and A. Courville, “Convolutional Networks,” in _Deep learning,_ Cambridge: MIT press, 2016, ch. $9$, pp. _326–366._ [online] Available: https://www.deeplearningbook.org/
* [48] S. Albawi, T. A. Mohammed, and S. Al-Zawi, “Understanding of a convolutional neural network,” International Conference on Engineering and Technology (ICET), Antalya, 2017, pp. _1–6_ , doi: 10.1109/ICEngTechnol.2017.8308186.
* [49] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi “You only look once: Unified, real-time object detection.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
* [50] T. He, Z. Zhang, H. Zhang, Z. Zhang, J. Xie, M. Li “Bag of tricks for image classification with convolutional neural networks.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
* [51] Z. Zou, Z. Shi, Y. Guo, and J. Ye, “Object detection in 20 years: A survey,” arXiv preprint arXiv:1905.05055, 2019.
* [52] Q. Chu, W. Ouyang, H. Li, X. Wang, B. Liu, and N. Yu “Online multi-object tracking using CNN-based single object tracker with spatial-temporal attention mechanism,” Proceedings of the IEEE International Conference on Computer Vision. 2017.
* [53] K. R. Chowdhary, “Natural language processing,” Fundamentals of Artificial Intelligence. Springer, New Delhi, 2020. pp. _603–649_.
* [54] I. Goodfellow, Y. Bengio, and A. Courville, “Sequence Modeling: Recurrent and Recursive Nets,” in _Deep learning,_ Cambridge: MIT press, 2016, ch. $10$, pp. _367–415._ [online] Available: https://www.deeplearningbook.org/
* [55] I. Goodfellow, Y. Bengio, and A. Courville, “Autoencoders,” in _Deep learning,_ Cambridge: MIT press, 2016, ch. $14$, pp. _499–523._ [online] Available: https://www.deeplearningbook.org/
* [56] Y. Wang, Y. Hongxun , and Z. Sicheng, “Auto-encoder based dimensionality reduction,” Neurocomputing 184, 2016, pp. _232–242_.
* [57] C. Zhou and RC. Paffenroth, “Anomaly detection with robust deep autoencoders,” Proceedings of the 23rd ACM Special Interest Group on Knowledge Discovery and Data Mining International Conference on Knowledge Discovery and Data Mining. 2017.
* [58] I. Goodfellow, Y. Bengio, and A. Courville, “Deep generative models,” in _Deep learning,_ Cambridge: MIT press, 2016, ch. $20$, pp. _651–716._ [online] Available: https://www.deeplearningbook.org/
* [59] C. Doersch, “Tutorial on variational autoencoders,” arXiv preprint arXiv:1606.05908. 2016.
* [60] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, Advances in neural information processing systems, 2014.
* [61] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath, “Generative adversarial networks: An overview,” IEEE Signal Processing Magazine 35.1. 2018. pp. _53-65_ .
* [62] DD. Margineantu, and TG. Dietterich,“Pruning adaptive boosting,” ICML. Vol. 97. 1997.
* [63] J. Snoek, H. Larochelle, and RP. Adams, “Practical bayesian optimization of machine learning algorithms,” Advances in neural information processing systems 25. 2012. _2951–2959_.
* [64] RS. Sutton and GB. Andrew, “Reinforcement Learning: An Introduction,” A Bradford Book, Cambridge, MA, USA. 2018.
* [65] J. Anzalchi, A. Couchman, P. Gabellini, G. Gallinaro, L. D’Agristina, N. Alagha, and P. Angeletti, “Beam hopping in multi-beam broadband satellite systems: System simulation and performance comparison with non-hopped systems,” in Proc. 5th Adv. Satell. Multimedia Syst. Conf. 11th Signal Process. Space Commun. Workshop, Sep. 2010, pp. _248–-255_.
* [66] A. Freedman, D. Rainish, and Y. Gat, “Beam hopping: How to make it possible,” in Proc. Broadband Commun. Conf., Oct. 2015, pp. _1–-6_.
* [67] L. Lei, E. Lagunas, Y. Yuan, M. G. Kibria, S. Chatzinotas, and B. Ottersten, “Beam Illumination Pattern Design in Satellite Networks: Learning and Optimization for Efficient Beam Hopping,” in IEEE Access, vol. 8, pp. 136655-136667, 2020, doi: 10.1109/ACCESS.2020.3011746.
* [68] P. Angeletti, D. Fernandez Prim, R. Rinaldo, “Beam hopping in multi-beam broadband satellite systems: system performance and payload architecture analysis,” The 24th AIAA Int. Communications Satellite Systems Conf., San Diego, June 2006
* [69] J. Anzalchi, A. Couchman, P. Gabellini, G. Gallinaro, L. D’Agristina, N. Alagha, and P. Angeletti, “Beam hopping in multibeam broadband satellite systems: system simulation and performance comparison with non-hopped systems,” The 2010 5th Advanced Satellite Multimedia Systems Conf. and the 11th Signal Processing for Space Communications Workshop, Cagliari, Italy, September 2010, pp. _248–-255_.
* [70] X. Alberti, J. M. Cebrian, A. Del Bianco, Z. Katona, J. Lei, M. A Vazquez-Castro, A. Zanus, L. Gilbert, and N. Alagha, “System capacity optimization in time and frequency for multibeam multi-media satellite systems,” in Proc. 11th Signal Process. Space Commun. Workshop, Sep. 2010, pp. _226–-233_.
* [71] B. Evans and P. Thompson, “Key issues and technologies for a Terabit/s satellite,” The 28th AIAA Int. Communications Satellite Systems Conf. (ICSSC 2010), Anaheim, California, USA, June 2010, p. 8713
* [72] J. Lei and M. Vazquez-Castro, “Multibeam satellite frequency/time duality study and capacity optimization,” in Proc. IEEE Int. Conf. Commun., Oct. 2011, vol. 13, no. 5, pp. _471-–480_.
* [73] R. Alegre, N. Alagha, MA. Vázquez, “Heuristic algorithms for flexible resource allocation in beam hopping multi-beam satellite systems,” The 29th AIAA Int. Communications Satellite Systems Conf. (ICSSC 2011), Nara, Japan, July 2011, p. 8001
* [74] R. Alegre, N. Alagha, MA. Vázquez, “Offered capacity optimization mechanisms for multi-beam satellite systems,” The 2012 IEEE Int. Conf. on Communications (ICC), Ottawa, ON, Canada, June 2012, pp. _3180–-3184_
* [75] H. Liu, Z. Yang, Z. Cao, “Max-min rate control on traffic in broadband multibeam satellite communications systems,” IEEE Commun. Lett., 2013, 17, (7), pp. _1396–-1399_
* [76] H. Han, X. Zheng, Q. Huang, and Y. Lin, “QoS-equilibrium slot allocation for beam hopping in broadband satellite communication systems,” Wirel. Netw., 2015, 21, (8), pp. _2617–-2630_
* [77] S. Shi, G. Li, Z. Li, H. Zhu, and B. Gao, “Joint power and bandwidth allocation for beamhopping user downlinks in smart gateway multibeam satellite systems,” Int. J. Distrib. Sensor Netw., 2017, 13, (5), doi: 1550147717709461
* [78] A. Ginesi, E. Re, and P.D. Arapoglou, “Joint beam hopping and precoding in HTS systems,” Int. Conf. on Wireless and Satellite Systems, OXFORD, GREAT BRITAIN, U.K, 2017, _43-–51_
* [79] G. Cocco, T. de Cola, M. Angelone, Z. Katona, and S. Erl, “Radio resource management optimization of flexible satellite payloads for DVB-S2 systems,” IEEE Trans. Broadcast., vol. 64, no. 2, Jun. 2018, pp. _266–-280_
* [80] X. Hu, S. Liu, X. Hu, Y. Wang, L. Xu, Y. Zhang, C. Wang, and W. Wang, “Deep reinforcement learning-based beam Hopping algorithm in multibeam satellite systems,” IET Communications. pp. _2485–91_ , Jan. 2019.
* [81] Y. Zhang, X. Hu, R. Chen, Z. Zhang, L. Wang, and W. Wang, “Dynamic Beam Hopping for DVB-S2X Satellite: A Multi-Objective Deep Reinforcement Learning Approach,” 2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS), Shenyang, China, 2019, pp. _164–169_ , doi: 10.1109/IUCC/DSCI/SmartCNS.2019.00056.
* [82] X. Hu, Y. Zhang, X. Liao, Z. Liu, W. Wang, and F. M. Ghannouchi, “Dynamic Beam Hopping Method Based on Multi-Objective Deep Reinforcement Learning for Next Generation Satellite Broadband Systems,” in IEEE Transactions on Broadcasting, vol. 66, no. 3, Sept. 2020, pp. _630–646_ , doi: 10.1109/TBC.2019.2960940.
* [83] M. K. Simon, J. K. Omura, and R. A. Sholtz “Spread spectrum communications,” vols. 1-3. Computer Science Press, Inc., 1985.
* [84] D. Torrieri, “Principles of spread-spectrum communication systems,” Vol. 1. Heidelberg: Springer, 2005.
* [85] S. Bae, S. Kim, and J. Kim, “Efficient frequency-hopping synchronization for satellite communications using dehop-rehop transponders,” in IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 1, pp. _261–274_ , Feb. 2016, doi: 10.1109/TAES.2015.150062.
* [86] F. Yao, L. Jia, Y. Sun, Y. Xu, S. Feng, and Y. Zhu, “A hierarchical learning approach to anti-jamming channel selection strategies,” Wirel. Netw., vol. 25, no. 1, Jan. 2019, pp. _201–213_.
* [87] C. Han and Y. Niu, “Cross-Layer Anti-Jamming Scheme: A Hierarchical Learning Approach,” IEEE Access, vol. 6, pp. 34874-34883, Jun. 2018.
* [88] S. Lee, S. Kim, M. Seo, and D. Har, “Synchronization of Frequency Hopping by LSTM Network for Satellite Communication System,” in IEEE Communications Letters, vol. 23, no. 11, Nov. 2019, pp. _2054–2058_ , , doi: 10.1109/LCOMM.2019.2936019.
* [89] C. Han, L. Huo, X. Tong, H. Wang, and X. Liu, “Spatial Anti-Jamming Scheme for Internet of Satellites Based on the Deep Reinforcement Learning and Stackelberg Game,” in IEEE Transactions on Vehicular Technology, vol. 69, no. 5, May 2020, pp. _5331–5342_ , doi: 10.1109/TVT.2020.2982672.
* [90] C. Han, A. Liu, H. Wang, L. Huo, and X. Liang, “Dynamic Anti-Jamming Coalition for Satellite-Enabled Army IoT: A Distributed Game Approach,” in IEEE Internet of Things Journal, vol. 7, no. 11, Nov. 2020, pp._10932–10944_ , doi: 10.1109/JIOT.2020.2991585.
* [91] Y. Bie, L. Wang, Y. Tian, and Z. Hu, “A Combined Forecasting Model for Satellite Network Self-Similar Traffic,” in IEEE Access, vol. 7, 2019, pp. _152004–152013_ , doi: 10.1109/ACCESS.2019.2944895.
* [92] L. Rossi, J. Chakareski, P. Frossard, and S. Colonnese, “A Poisson Hidden Markov Model for Multiview Video Traffic,” in IEEE/ACM Transactions on Networking, vol. 23, no. 2, April 2015, pp. _547–558_ , doi: 10.1109/TNET.2014.2303162.
# Influence of drug/lipid interaction on the entrapment efficiency of
isoniazid in liposomes for antitubercular therapy: a multi-faced
investigation.
Francesca Sciolla (CNR-ISC Sede Sapienza, Piazzale A. Moro 2, I-00185 Rome,
Italy), Domenico Truzzolillo (Laboratoire Charles Coulomb - UMR 5221,
Université de Montpellier et CNRS, Place E. Bataillon, Campus Triolet,
Bâtiment 11, cc 0026, 34095 Montpellier Cedex 05, France;
<EMAIL_ADDRESS>), Edouard Chauveau (Laboratoire Charles
Coulomb (L2C), University of Montpellier, CNRS, Montpellier, France), Silvia
Trabalzini (Dipartimento di Chimica e Tecnologie farmaceutiche, Università di
Roma, Piazzale A. Moro 5, I-00185 Rome, Italy), Luisa Di Marzio
(Dipartimento di Farmacia, Università G. d'Annunzio, Via dei Vestini, 66100
Chieti, Italy), Maria Carafa and Carlotta Marianecci (Dipartimento di Chimica
e Tecnologie farmaceutiche, La Sapienza Università di Roma, Piazzale A. Moro
2, I-00185 Rome, Italy), Angelo Sarra, Federico Bordi, and Simona Sennato
(CNR-ISC Sede Sapienza and Dipartimento di Fisica, La Sapienza Università di
Roma, Piazzale A. Moro 2, I-00185 Rome, Italy; +39 06 49913503;
<EMAIL_ADDRESS>)
###### Abstract
Hypothesis.
Isoniazid is one of the primary drugs used in tuberculosis treatment.
Isoniazid encapsulation in liposomal vesicles can improve the therapeutic
index of the drug and minimize its toxic and side effects. In this work, we
consider mixtures of hydrogenated soy phosphatidylcholine and
dipalmitoylphosphatidylglycerol (HSPC/DPPG) to obtain novel biocompatible
liposomes for isoniazid pulmonary delivery. Our goal is to understand whether
the entrapped drug affects the bilayer structure.
Experiments.
HSPC-DPPG unilamellar liposomes are prepared and characterized by dynamic
light scattering, $\zeta$-potential, fluorescence anisotropy and Transmission
Electron Microscopy. Isoniazid encapsulation is determined by UV and Laser
Transmission Spectroscopy. Calorimetry, light scattering and surface pressure
measurements are used to gain insight into the adsorption and thermodynamic
properties of the lipid bilayers in the presence of the drug.
Findings.
We find that the INH-lipid interaction can increase the entrapment capability
of the carrier through isoniazid adsorption. The preferential INH-HSPC
dipole-dipole interaction promotes modifications of lipid packing and
ordering, and favors the condensation of an HSPC-richer phase at a molar
excess of DPPG. Our findings highlight the importance of fundamental
investigations of drug-lipid interactions for the optimal design of liposomal
nanocarriers.
###### keywords:
unilamellar liposomes, isoniazid, drug-lipid interaction, laser transmission
spectroscopy, calorimetry, scattering techniques
## 1 Introduction
Tuberculosis (TB) is caused by Mycobacterium tuberculosis (MTB), a bacterium
that most often affects the lungs. The World Health Organization estimates
that about one-quarter of the world’s population carries a latent TB
infection, and that a total of 1.5 million people died from TB in 2019 [1].
The current TB treatment is usually associated with serious adverse effects,
resulting in poor compliance, which is one of the main reasons for the
appearance of multidrug-resistant strains and treatment failure [2]. In this
context, the encapsulation of anti-TB drugs in nanocarriers may provide the
modern answer for the development of innovative anti-TB strategies.
Nanocarriers can improve the
efficacy of the current TB treatments since they can be functionalized to bind
MTB-infected phagocytes via biological ligands, and used for inhalation
administration, or to enhance drug loading and pharmacokinetics, increasing
significantly the intracellular drug concentration. Since the early studies
proving macrophage-specific delivery of anti-TB drugs [3, 4], liposomal
vesicles have remained the most widely studied carrier system for anti-TB
drugs. Moreover, the possibility of nebulizing the liposomal dispersion
directly into the lungs offers a powerful route to overcome the several
limitations of oral and intravenous administration of anti-TB drugs [5].
Isoniazid (INH, pyridine-4-carbohydrazide) is one of the primary drugs used in
the TB treatment and is also well-known for its value in preventive therapy
[6]. Being a small hydrophilic molecule, INH is generally entrapped in
liposomes by using the film hydration method [7] which leads to drug loading
and retention in liposomes due to the very small drug partition coefficient
[8, 9]. In general, the amount of a drug that can be entrapped in a liposome
is difficult to predict, since it may depend on preparation method, physico-
chemical properties of the carrier (such as lipid composition, geometry and
size) and on the ionic strength and pH of the dispersing medium. In fact, for
any solvophilic drug, including hydrophilic ones, only a small fraction of
the drug molecules is encapsulated within the vesicles, due to the small
ratio between the internal volume of the liposomes and that of the external
medium.
Several attempts have been described to entrap INH in liposomes with high
efficiency, and since the first investigations lipid composition has been
recognized to be an important factor regulating drug loading, as well as the
accumulation of liposomes in the lungs [10]. It has been shown that
administration of sub-therapeutic INH doses entrapped in stealth liposomes
composed of a mixture of phosphatidylcholine, pegylated
distearoylphosphatidylethanolamine and cholesterol (PC-DSPE-PEG-Chol) with
the anionic dicetylphosphate (DCP) is more effective and curative than higher
concentrations of the free drug [3, 11]. Later, several other liposomal
formulations based on DPPC, DSPC, EggPC, crude soy lecithin [12, 13, 14, 15],
dioleoylphosphatidylethanolamine (DOPE) and DSPE-PEG [16] have been used
for the efficient loading of INH. It is worth remarking that all these
investigations considered both multilamellar and unilamellar liposomes;
since the two structures are very different, a rigorous comparison of the
results is difficult, especially as concerns the entrapment efficiency.
Still, some very general principles guiding the optimization of the
encapsulation process could be established. Chimote and Banerjee [14] were
the first to suggest that the observed high entrapment of INH ($\sim 37\%$)
could be attributed to the multilamellar nature of liposomes. Because the
encapsulation capability for a hydrophilic drug increases with the volume of
the aqueous compartment of a multilamellar vesicle, this system has been
largely investigated. A further boost to the use of multilamellar
vesicles has been given by the possibility to co-encapsulate two anti-TB
drugs, such as rifampicin and INH. The lipophilic rifampicin can be entrapped
in the
bilayers enclosing adjacent aqueous compartments, where INH is dispersed [13,
17]. Moreover, since the penetration depth of liposomes administered to the
lungs through inhalation depends on the size of the particles in the aerosol,
and particles with diameters ranging from 0.1 to 2 $\mu m$ can be effectively
transported to the alveoli [5, 14], multilamellar vesicles are still under
consideration.
However, despite all the aforementioned advantages offered by multilamellar
structures, these are far from being ideal carriers from a biotechnological
point of view, since their size and the number of their compartments cannot be
controlled at will, raising specific regulatory issues [18]. For this reason,
the development of optimal unilamellar liposomal vectors, able to entrap
hydrophilic drugs, is highly desirable and still represents an important goal.
Interestingly, since the earliest studies, it was argued that the presence of
a charged lipid could have a significant impact on the entrapment efficiency.
Wasserman and coworkers [19] showed that the addition of a low content of the
anionic Cardiolipin in a PC:Chol formulation yields a more efficient INH
liposomal loading, possibly due to a minimum of the bilayer permeability at an
optimal PC:Cardiolipin stoichiometric ratio [20, 21]. Conversely, increasing
cardiolipin molar fraction decreases vesicle stability [19]. Since
Wasserman’s study dealt with multilamellar vesicles, the efficient loading
found at low molar fraction of charged lipid has been explained by arguing
that negatively charged bilayers produce wider aqueous spaces between
lamellae, so as to increase the volume of the aqueous compartments available for
INH entrapment. Also, in anionic multilamellar liposomes containing DPPG and
HSPC lipids, a small amount of the anionic DPPG is able to confer stability to
the vesicles, without interfering with the encapsulation and retention of a
model drug during nebulization [22]. To improve mucoadhesion and nebulization
performances of liposomal nanocarriers for pulmonary administration of drugs,
a strategy based on polymer coating has been explored [23, 24]. Also in this
case, the authors report on the role of vesicle charge for optimizing the
polymeric coating by electrostatic interaction with the bilayer, the charge
conferring further mechanical stability to the liposomes during nebulization.
Within this framework, we focus our investigation on charged unilamellar
vesicles formed by HSPC mixed with the anionic DPPG, which have not been
explored as potential INH carriers so far. This formulation offers, at least,
two advantages: i) HSPC is already employed in several approved liposomal
drugs [25] and ii) the addition of DPPG gives the further possibility of
exploiting polymeric chitosan coatings to confer mucoadhesion properties,
which is relevant for pulmonary delivery [26].
In spite of the many different investigations on the preparation and the
characterization of liposomal systems and in vitro and in vivo liposomal INH
delivery, some crucial aspects concerning the physical-chemical properties of
the carrier are still scarcely explored. To the best of our knowledge,
insufficient attention has been paid until now to understanding the
interaction of INH with the lipid bilayer of the carrier. Only a few
biophysical investigations report on the interaction of INH with liposomes
mimicking a biological membrane, with the purpose of understanding how the
drug finds its way into a real membrane [27, 28, 29].
Our work leverages an extensive characterization of the interaction between
INH and mixed HSPC-DPPG liposomes designed to optimize a novel delivery
carrier for INH in anti-TB therapy. After a preliminary characterization of
liposomes by dynamic light scattering, transmission electron microscopy, UV
spectroscopy and laser transmission spectroscopy, which suggested the presence
of drug-lipid association, we focussed on the interaction of INH with the
lipid bilayers, taking advantage of differential scanning calorimetry,
static light scattering and surface pressure measurements on Langmuir
monolayers, through which we could unambiguously unveil the effect of the drug
on the thermodynamics of the mixed lipid membranes.
The paper is organized by presenting the results obtained by each technique in
a separate subsection. Our results support a scenario in which the interaction
of INH with charged liposomes at physiological pH is affected by the fraction
of the charged (anionic) component of the bilayers. We found evidence that
INH remains in proximity of the bilayer, with possible intrabilayer
insertion, which modifies the lipid arrangement and causes phase separation
at high DPPG molar fraction. This is the key finding of our investigation,
and we believe it represents an important step towards a rational design of
effective anti-TB liposomal nanocarriers based on the control of lipid-drug
interactions.
## 2 Materials and methods
### 2.1 Materials
The zwitterionic hydrogenated phosphatidylcholine from soybean (HSPC) with
molecular weight $M_{w}$=790 g/mol (Fig. 1-A) and the anionic 1,2-dipalmitoyl-
sn-glycero-3-
phosphorylglycerol sodium salt (DPPG) with molecular weight $M_{w}$=745 g/mol
(Fig. 1-B) were a kind gift from LIPOID. The typical fatty acid composition
(expressed in % of total fatty acids) of HSPC is: palmitic acid: (5.0% - 20.0
%) and stearic acid (80.0 % - 95.0 %).
Hepes salt [N-(2- hydroxyethyl), piperazine-N-(2-ethanesulphonic acid)] and
isoniazid (pyridine-4-carbohydrazide, nominal purity 99%, hereinafter INH),
were purchased from Sigma-Aldrich. Sephadex G-50TM was purchased from GE
Healthcare. The chemical structure of INH is shown in Fig. 1-C. From a
chemical point of view, INH has three pKa values, 1.8 for the basic pyridine
nitrogen, 3.5 for the hydrazine nitrogen and 10.8 for the hydrazine group and
it is neutral at physiological pH [29]. The drug was dissolved in 0.01 M
Hepes buffer at pH 7.4, prepared with Milli-Q grade water, with the pH
adjusted by NaOH addition.
Figure 1: Chemical structure of HSPC (A), DPPG-Na (B) and INH (C)
### 2.2 Preparation of liposomes
Lipids were dissolved in a known volume of chloroform/methanol/water (2/1/0.15
v/v/v) at varying DPPG molar fraction $X_{PG}=n_{PG}/(n_{HS}+n_{PG})$, where
$n_{HS}$ and $n_{PG}$ are the number of HSPC and DPPG moles, respectively. A
three-hour rotoevaporation of the solvent under vacuum and above the melting
temperature $T_{m}$ of both lipids resulted in the formation of a dried lipid
film. By rehydration of the lipid film in Hepes 0.01 M solution and pH=7.4,
through a uniform rotation at $T>T_{m}$ for one hour, a dispersion of
multilamellar liposomes was obtained. For calorimetry measurements,
multilamellar liposomes were prepared at a lipid concentration of 25 mg/mL.
In order to obtain unilamellar vesicles, the hydrated lipid suspension was
subsequently homogenized by 5 cycles of freeze-thaw and extruded 10 times
under nitrogen pressure through a 100 nm polycarbonate membrane (Whatman
Nucleopore) in a 2.5 mL extruder (Lipex Biomembranes, Vancouver, Canada) at 60
∘C, well above the main transition temperature of lipids. Unilamellar
liposomes for calorimetry and light scattering experiments have been prepared
at 10 mg/mL and 0.2 mg/mL, respectively.
To entrap INH, the dried lipid film has been hydrated using Hepes buffer
containing the drug dissolved at the target concentration. As for empty
liposomes, 5 freeze-thaw cycles have been applied, since this procedure is
also able to facilitate encapsulation of hydrophilic drugs [30]. The non-
entrapped INH was separated from the liposomes on a Sephadex G-50 gel column
hydrated in Hepes buffer after 24 hours of swelling. The amount of liposomal
solution to be purified was chosen depending on the concentration of the
entrapped drug and was set to 100-200 $\mu$L of liposome solution for 2.5 mL
of gel.
### 2.3 Dynamic light scattering and electrophoretic mobility measurements
The size and the size-distribution of liposome formulations entrapping INH
were analyzed via dynamic light scattering (DLS) measurements performed with a
NanoZetaSizer apparatus equipped with a 5 mW HeNe laser (Malvern Instrument,
UK). This instrument employs a backscatter detection, i.e. the scattered light
is collected at an angle of 173∘. The main advantage of this detection
geometry, compared to the more conventional 90∘ one, is that it is less
sensitive to multiple scattering effects [31]. Decay times $\tau$ were used to
determine the diffusion coefficients $D_{0}=1/(\tau q^{2})$ of the particles,
which in turn can be converted into apparent hydrodynamic radii $R_{h}$, using
the Stokes-Einstein relation $R_{h}=k_{B}T/6\pi\eta D_{0}$. In the above
relations $q=4\pi n\lambda^{-1}\sin(\theta/2)$ is the scattering vector, $n$
the solvent refractive index, $\lambda$ is the light wavelength, $\theta$ the
scattering angle, $k_{B}$ the Boltzmann constant, $T$ the absolute temperature
and $\eta$ is the solvent viscosity. The measured autocorrelation functions
were analyzed using the cumulant method to obtain the mean hydrodynamic size
and the polydispersity index (PDI) from the first and second moments of the
cumulant expansion, respectively [32]. Results are expressed as the average
of three different measurements, each averaged over at least 20 runs.
By using the same apparatus, the electrophoretic mobility of the liposomes
has been measured to determine their $\zeta$-potential. The mobility $\mu$ of
the liposomes is
converted into a $\zeta$-potential using the Smoluchowski equation
$\zeta=\mu\eta/\epsilon$, where $\epsilon$ and $\eta$ are respectively the
zero-frequency absolute dielectric permittivity and the viscosity of the
suspending medium.
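The chain of relations above (decay time, scattering vector, diffusion coefficient, hydrodynamic radius) can be sketched numerically as follows; the instrument parameters and the decay time are illustrative assumptions, not values from our measurements:

```python
# Minimal sketch (illustrative values): convert a DLS decay time into an
# apparent hydrodynamic radius via the Stokes-Einstein relation.
import math

def hydrodynamic_radius(tau_s, theta_deg, wavelength_m, n, T, eta):
    """R_h (m) from the decay time tau of the autocorrelation function."""
    # scattering vector q = 4*pi*n/lambda * sin(theta/2)
    q = 4.0 * math.pi * n / wavelength_m * math.sin(math.radians(theta_deg) / 2.0)
    # diffusion coefficient D0 = 1/(tau * q^2)
    D0 = 1.0 / (tau_s * q ** 2)
    kB = 1.380649e-23  # Boltzmann constant, J/K
    # Stokes-Einstein: R_h = kB*T / (6*pi*eta*D0)
    return kB * T / (6.0 * math.pi * eta * D0)

# Illustrative numbers: water at 25 C, HeNe laser, backscatter angle 173 deg
Rh = hydrodynamic_radius(tau_s=2.5e-4, theta_deg=173.0,
                         wavelength_m=632.8e-9, n=1.33, T=298.15, eta=0.89e-3)
print(f"R_h = {Rh * 1e9:.0f} nm")
```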
### 2.4 Laser Transmission Spectroscopy
The size and the absolute number concentration of the liposomal suspension
were determined using an innovative and customized apparatus implementing the
laser transmission spectroscopy (LTS) technique [33, 34]. Since it is
relatively new and probably unfamiliar to the readership, we will give here a
very brief account of this technique. By measuring the light transmittance
through the vesicle suspension as a function of the wavelength, the particle
density distribution $n(R)$ as a function of their size $R$ can be obtained
through the Beer-Lambert law once the Mie scattering cross section of the
vesicles, represented as shelled spheres, is known [33]. For this purpose we
used a pulsed laser tunable in the wavelength interval from 210 to 2600 nm.
Transmission data are analyzed and inverted by using a mean square root-based
algorithm, giving the particle size distribution in terms of their absolute
concentration. The integral of the density distribution provides the total
number of liposomes per milliliter of solution $N_{LTS}$ [34, 35]. The volume
fraction $\Phi_{in}$ of the liposomal dispersion available for encapsulation
can hence be calculated as $\Phi_{in}=N_{LTS}\cdot\frac{4}{3}\pi(R-d)^{3}$, where $d$
is the bilayer thickness.
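As a minimal numerical sketch of the relation above, with input values of the order of those measured in this work (radius and particle count comparable to the $X_{PG}$=0.33 sample of Table 2):

```python
# Minimal sketch: internal volume fraction available for encapsulation,
# Phi_in = N_LTS * (4/3) * pi * (R - d)^3, with R and d in nm and N in 1/mL.
import math

def internal_volume_fraction(n_per_ml, radius_nm, bilayer_nm=5.0):
    """Phi_in (dimensionless) for N liposomes/mL of outer radius R."""
    r_in_cm = (radius_nm - bilayer_nm) * 1e-7      # inner radius in cm
    v_in_ml = 4.0 / 3.0 * math.pi * r_in_cm ** 3   # inner volume, cm^3 = mL
    return n_per_ml * v_in_ml

# Order-of-magnitude check with values close to the X_PG = 0.33 sample
phi = internal_volume_fraction(n_per_ml=2.7e12, radius_nm=45.0)
print(f"Phi_in ~ {phi:.2e}")
```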
### 2.5 Transmission electron microscopy
Transmission electron microscopy (TEM) measurements were carried out by using
a FEI TECNAI 12 G2 Twin (Thermo Fisher Scientific - FEI Company, Hillsboro,
OR, USA), operating at 120 kV and equipped with an electron energy loss filter
and a slow-scan charge-coupled device camera (794 IF, Gatan Inc, Pleasanton,
CA, USA). 20 $\mu$l of the sample were deposited on a 300-mesh copper grid
covered by a thin amorphous carbon film. After 2 minutes the excess liquid was
removed by touching the grid to filter paper and 10 $\mu$l of 2 $\%$ aqueous
phosphotungstic acid (PTA) (pH-adjusted to 7.3 using 1 N NaOH) has been added
to stain the sample.
### 2.6 Lipid bilayer characterization by fluorescence anisotropy
Fluidity of liposomal bilayers was evaluated by inspecting fluorescence
anisotropy of the diphenylhexatriene (DPH) probe dissolved in the hydrophobic
region of the lipid bilayer. 250 $\mu$L of DPH (2 mM) was added to lipid
mixtures before vesicle preparation according to a protocol already discussed
in a previous study [36]. Afterwards, DPH-loaded vesicles were extruded as
done for empty liposomes. Fluorescence anisotropy was measured by a LS55
spectrofluorimeter (PerkinElmer, MA, USA) at $\lambda_{exc}$=400 nm and
$\lambda_{em}$= 425 nm, at room temperature. The fluorescence anisotropy (A)
of samples was calculated according to the following equation
$A=\frac{I_{vv}-GI_{vh}}{I_{vv}+2GI_{vh}}{,}$ (1)
where $I_{vv}$ and $I_{vh}$ are the intensities of the emitted fluorescence
(arbitrary units) parallel and perpendicular to the direction of the
vertically polarized excitation light, respectively. $G=I_{vh}/I_{hh}$ is a
correction factor, which is determined experimentally before each measurement
as the ratio between the vertically and the horizontally polarized emission
components, for a given horizontally polarized incident beam. The
fluorescence anisotropy A is inversely related to the membrane fluidity: high
values of A correspond to a high structural order and/or a large viscosity of
the membrane [37].
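A minimal sketch of the anisotropy calculation, using the standard denominator $I_{vv}+2GI_{vh}$; the intensities below are illustrative, not from our measurements:

```python
# Minimal sketch: fluorescence anisotropy from polarized intensities,
# A = (Ivv - G*Ivh) / (Ivv + 2*G*Ivh), with G = Ivh/Ihh measured beforehand.
def anisotropy(I_vv, I_vh, G):
    return (I_vv - G * I_vh) / (I_vv + 2.0 * G * I_vh)

# Illustrative intensities (arbitrary units)
A = anisotropy(I_vv=100.0, I_vh=35.0, G=1.05)
print(f"A = {A:.2f}")
```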
### 2.7 Entrapment efficiency
Quantification of INH loaded in liposomal formulations was carried out by a
UV-VIS Jasco spectrophotometer with 1 mm quartz cuvettes, at 20.00 ∘C. To
subtract the contribution of background scattering, spectra of empty liposomes
measured in a wide concentration range (1 to 20 mg/ml) have been collected,
with Hepes buffer as reference. Preliminarily, two different methods of
background subtraction have been tested: i) spectra of INH-loaded liposomes
measured with empty liposomes as reference, at the same lipid concentration;
ii) spectra of INH-loaded liposomes measured with Hepes as reference, from
which the spectra of empty liposomes, at the proper concentration and with
Hepes as reference, have been subtracted. Since the background-subtracted UV
spectra obtained with the two methods were equivalent, we chose method (ii)
for convenience.
The drug concentration before and after purification has been determined by
considering the INH absorption maximum at $\lambda\sim$ 264 nm present in the
background-subtracted spectra [38], and comparing the intensity with a
calibration curve obtained in Hepes buffer.
Entrapment efficiency (E.E.) has been calculated according to the following
equation
$E.E.(\%)=\frac{C^{f}_{INH}/C^{f}_{lipid}}{C^{0}_{INH}/C^{0}_{lipid}}\cdot
100{,}$ (2)
where the concentrations $C^{0}_{INH}$, $C^{0}_{lipid}$ and $C^{f}_{INH}$,
$C^{f}_{lipid}$ are referred to the molar concentration of INH or lipid,
before and after purification, respectively.
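A minimal sketch of eq. (2); the concentrations below are illustrative, not measured values:

```python
# Minimal sketch: entrapment efficiency (eq. 2) from INH and lipid molar
# concentrations before (0) and after (f) purification.
def entrapment_efficiency(c_inh_f, c_lip_f, c_inh_0, c_lip_0):
    return (c_inh_f / c_lip_f) / (c_inh_0 / c_lip_0) * 100.0

# Illustrative values: 200 mM INH per 20 mM lipid before purification
# (molar ratio 10), 2 mM INH per 16 mM lipid after purification
ee = entrapment_efficiency(c_inh_f=2.0, c_lip_f=16.0,
                           c_inh_0=200.0, c_lip_0=20.0)
print(f"E.E. = {ee:.2f} %")
```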
### 2.8 Differential scanning calorimetry
Differential scanning calorimetry (DSC) experiments were performed with a TA
Q2000 DSC calorimeter. The measurements were carried out under nitrogen flow.
A modulated temperature protocol with an amplitude of 0.3 ∘C over a period of
60 s was applied within heating/cooling ramps in the temperature range 10 ∘C
$\leq T\leq$ 70 ∘C at 2 ∘C/min. For each sample we performed 3 heating/cooling
cycles to erase any past thermal history of the sample and achieve stationary
thermograms. Excess molar calorimetric enthalpy was calculated after baseline
adjustment and normalization to the lipid concentration, by integrating the
peak areas. The actual transition temperatures ($T_{m}$) were determined at
the peak maxima of the heat capacity curves.
To investigate the interaction of INH with the lipid layer, liposomes with
different concentrations of INH have been prepared by mixing a fixed volume of
drug (10 $\mu$l) to 500 $\mu$l of pure unilamellar liposome suspension, for
each molar fraction $X_{PG}$. INH concentration was varied to obtain a final
INH/lipid molar ratio $\rho$ ranging from 0.5 to 25.
### 2.9 Static light scattering
The static light scattering (SLS) experiments have been performed using an
Amtec-goniometer equipped with a green laser with wavelength $\lambda$ = 532.5
nm. All measurements were performed at the same incident light intensity
(laser power set at 70 mW). The temperature $T$ of the cell was set by means
of a temperature-regulated bath with an accuracy of 0.1 ${}^{\circ}C$. We
measured the gyration radius of the liposomes by collecting the intensity
$I(q)$ scattered by dilute samples (0.2 mg/ml) at 60 scattering
angles $\theta$ between $24^{\circ}$ and $150^{\circ}$. From the time averaged
scattering intensity $I(q)$ the radius of gyration $R_{g}$ has been determined
by using the Guinier approximation $I(q)=I(0)\exp[-(qR_{g})^{2}/3]$ [39].
By fitting the $R_{g}(T)$ curves, or the time-averaged intensity at
$\theta$=$90^{\circ}$, with a Boltzmann-type sigmoid, it is possible to
determine the (optical) melting transition $T_{c}^{opt}$ [40]
$R_{g}(T)=R_{m}+\frac{R_{s}-R_{m}}{1+e^{\frac{T-T_{c}^{opt}}{\Delta
T}}}{,}$ (3)
where $R_{s}$ and $R_{m}$ represent the gyration radii in the solid and melted
state, respectively, and $\Delta T$ is the transition width.
To investigate the effect of INH on the lipid bilayer, a drug amount yielding
an INH/lipid molar ratio $\rho=5$ was added to each pure unilamellar liposome
suspension.
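Assuming SciPy is available, the sigmoid fit described above can be sketched on synthetic data as follows; the function, parameters and numbers are illustrative, not from our measurements:

```python
# Minimal sketch (synthetic data): extract the optical melting temperature
# T_c by fitting R_g(T) with a standard Boltzmann sigmoid,
# Rg(T) = Rm + (Rs - Rm) / (1 + exp((T - Tc)/dT)).
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, Rs, Rm, Tc, dT):
    return Rm + (Rs - Rm) / (1.0 + np.exp((T - Tc) / dT))

# Synthetic "measurement": Rs = 52 nm, Rm = 56 nm, Tc = 52 C, width 1.5 C
T = np.linspace(30.0, 70.0, 81)
rng = np.random.default_rng(0)
Rg = boltzmann(T, 52.0, 56.0, 52.0, 1.5) + rng.normal(0.0, 0.05, T.size)

popt, _ = curve_fit(boltzmann, T, Rg, p0=(50.0, 58.0, 50.0, 1.0))
Rs_fit, Rm_fit, Tc_fit, dT_fit = popt
print(f"T_c(opt) = {Tc_fit:.1f} C")
```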
### 2.10 Monolayer studies
Surface pressure measurements have been performed by a Minitrough (KSV
Instruments Ltd, Helsinki, Finland) equipped with Wilhelmy-type pressure
measuring system, enclosed in a plexiglas box to reduce surface contamination.
Hepes solution (0.01 M, pH=7.4) thermostatted at $25.0\pm 0.2$ ∘C has been
used as subphase.
Lipid monolayers with different molar fractions of HSPC and DPPG were prepared
at the air-water interface according to the Langmuir technique [41] as
described in previous investigations [42]. Lipids were dissolved in chloroform
at 1 mg/ml and an amount of $20\div 25\mu l$ was spread by a microsyringe onto
the aqueous subphase. After evaporation of the solvent, the monolayers were
compressed by the two barriers moving at a constant rate of 50 mm min$^{-1}$ to
record the pressure/area ($\Pi/A$) isotherm. The reported isotherms represent
the average over three different compression experiments.
For drug-lipid interaction studies, INH has been injected in the subphase
under an already formed lipid monolayer. INH has been dissolved in ethanol at
its maximum solubility concentration ($\sim$ 67 mM) and its concentration in
the subphase has been varied by changing the injected volume. Ethanol is less
dense than water and rather volatile, and it favors INH spreading at the
air-water interface. Control experiments were further performed by injecting
pure ethanol into the subphase under the monolayer, to quantify the surface
pressure increase due to pure ethanol alone as a function of the injected
volume. All the isotherms were recorded after waiting for the stabilization of
the surface pressure.
The miscibility of HSPC and DPPG can be analyzed by calculating the excess
free energy of mixing $\Delta G$ upon integration of the surface
pressure-area ($\Pi$-A) isotherms, from zero to a given value of the surface
pressure, according to the expression
$\Delta G=\int_{0}^{\Pi}(A_{12}-X_{1}A_{1}-X_{2}A_{2})d\Pi{,}$ (4)
where $A_{i}$ and $X_{i}$ are the area per molecule and the molar fraction of
component i, respectively, and $A_{12}$ is the area per molecule in the
mixture. In the absence of any interaction between the components, $\Delta
G=0$. Deviations from ideal behavior result in $\Delta G<0$ (attractive
interactions) or in $\Delta G>0$ (repulsive interactions), providing
information on whether the interaction is energetically favored or not [20].
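A minimal numerical sketch of eq. (4), integrating the excess area with the trapezoidal rule on synthetic isotherms (all numbers below are illustrative, not measured):

```python
# Minimal sketch: excess free energy of mixing from Pi-A isotherms,
# Delta G = integral of (A12 - X1*A1 - X2*A2) dPi, via trapezoidal rule.
import numpy as np

def excess_mixing_energy(pi, A12, A1, A2, X1):
    """Cumulative Delta G (units of Pi*A) from Pi = 0 up to each pressure."""
    X2 = 1.0 - X1
    excess_area = A12 - X1 * A1 - X2 * A2
    return np.concatenate(([0.0], np.cumsum(
        0.5 * (excess_area[1:] + excess_area[:-1]) * np.diff(pi))))

# Toy isotherms: ideal mixing (A12 = molar-weighted area) must give 0
pi = np.linspace(0.0, 30.0, 61)   # surface pressure, mN/m
A1 = 60.0 - 0.8 * pi              # area per molecule, A^2 (illustrative)
A2 = 50.0 - 0.6 * pi
X1 = 0.5
A12_ideal = X1 * A1 + (1 - X1) * A2
dG_ideal = excess_mixing_energy(pi, A12_ideal, A1, A2, X1)

# A uniformly smaller mixed area gives Delta G < 0 (attractive):
# constant -2 A^2 excess area integrated to 30 mN/m gives -60
dG_attr = excess_mixing_energy(pi, A12_ideal - 2.0, A1, A2, X1)
print(dG_ideal[-1], dG_attr[-1])
```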
## 3 Results
### 3.1 Basic characterization of HSPC-DPPG liposomes
Size, polydispersity, $\zeta$-potential and fluorescence anisotropy of empty
unilamellar liposomes have been measured for three different $X_{PG}$ molar
fractions: 0.33, 0.5 and 0.66 (Table 1). We obtain liposomes with size around
100 nm for all the different compositions and a low polydispersity index
(PDI$\leq$0.1). TEM microscopy confirms the presence of a homogeneous
population of unilamellar vesicles, as shown in Fig. 2 for liposomes prepared
with $X_{PG}$= 0.66.
Figure 2: TEM microscopy image obtained by PTA staining of unilamellar liposomes prepared at $X_{PG}$= 0.66. The bar length corresponds to 50 nm.
Table 1: Characterization of empty HSPC-DPPG unilamellar liposomes at $T$=25 ∘C and E.E. values for liposomes prepared at INH/lipid molar ratio=10 (200 mM INH). Results represent the mean values over three repeated measurements. No significant variation has been observed in the size and $\zeta$-pot values of INH-loaded liposomes (data not shown).
$X_{PG}$ | $R_{h}(nm)$ | PDI | $\zeta$-pot (mV) | Anisotropy | E.E. (%)
---|---|---|---|---|---
0.33 | 57 $\pm$ 3 | 0.08 $\pm$ 0.01 | -35 $\pm$ 4 | 0.37 $\pm$ 0.04 | $2.4\pm 0.2$
0.50 | 55 $\pm$ 2 | 0.07 $\pm$ 0.01 | -53 $\pm$ 6 | 0.35 $\pm$ 0.05 | $1.7\pm 0.2$
0.66 | 52 $\pm$ 2 | 0.11 $\pm$ 0.02 | -42 $\pm$ 4 | 0.37 $\pm$ 0.03 | $1.1\pm 0.1$
The values of $\zeta$-potential are negative for all the three formulations,
due to the presence of the anionic lipid DPPG. $\zeta$-potential does not show
a linear increase with increasing charged lipid content since it is not
connected with the stoichiometric charge but with the effective charge of the
system [42]. As it has been shown for other charged liposomal systems, while
the stoichiometric charge increases linearly with the charged lipid content,
the effective charge of liposomes displays a saturation due to the counterion
condensation which minimizes the repulsion between the nearby negative charges
[42]. Moreover, we expect that the $\zeta$-potential is affected by the dipolar
orientation of the zwitterionic head groups [43, 44], that are susceptible to
any variation of the local composition within the membrane. However, this
aspect, interesting _per se_ , goes well beyond the scope of our work and has
not been investigated further.
The observed high values of anisotropy indicate that all the formulations
have a rather rigid lipid bilayer, as do the pure DPPG, DSPC and DPPC
liposomes [45]. This is indeed expected, since both lipids are in a gel state
at 25 ∘C.
We performed a preliminary set of experiments to determine the best operative
condition for encapsulation of INH within the range $1\div 50$ of INH/lipid
molar ratio (see SI, section 1). In Table 1 we report the highest values of
entrapment efficiency, which have been found at an intermediate excess of
drug, i.e. at an INH/lipid ratio equal to 10. In general, we found low values
of the $E.E.\%$ parameter. The highest E.E. is found at the lowest content of
the charged lipid DPPG, and E.E. decreases with increasing molar fraction
$X_{PG}$. This could indicate a
worse retention capability of the vesicles due to the looser packing in more
charged bilayers. In fact, it is known that the presence of electrostatic
repulsion between charged lipids may alter the bilayer properties and increase
permeability to solutes [46]. However, this hypothesis does not appear
tenable, since the anisotropy values of HSPC-DPPG liposomes, close to 0.36,
indicate a rather rigid bilayer. Moreover, it has been shown that INH does not induce
any change in fluidity in DMPC and DMPG liposomes [27], which are more fluid
and with a lower melting temperature compared to the lipid used in the present
study.
From a purely operative point of view, the values of the E.E. parameter are
useful to select the most convenient mixture to get the highest amount of
entrapped INH, given a fixed amount of lipid mass and drug used to prepare
the samples. Such an optimal mixture is here reached for $X_{PG}$=0.33. On
the other hand, the determination of E.E. alone does not unveil how the
amount of entrapped drug is influenced by the chemical-physical properties of the
lipid bilayer, namely, by the size and total volume of the carriers available
for drug encapsulation, and by the lipid composition of the carriers.
### 3.2 LTS study of INH-loaded liposomes
To get further insight into the encapsulation properties of HSPC-DPPG liposomes,
we performed LTS experiments on INH-loaded liposomal suspensions prepared at
INH/lipid molar ratio equal to 10, where the highest values of drug entrapment
have been found regardless of lipid composition. The aim of this study is to
determine the radius $R$ and the total number of liposomes per milliliter of
solution $N_{LTS}$, and then calculate the liposome volume fraction available
for drug encapsulation $\Phi_{in}$, as described in section 2.4. Once the
volume fraction is obtained, it is possible to define a new parameter named
’Entrapment ratio’ (hereinafter E.R.) as the ratio between the concentration
of drug encapsulated in the vesicles (determined by UV) and the maximum amount
of drug which can be loaded in their internal volume
$E.R.=\frac{C^{f}_{INH}}{C^{0}_{INH}\cdot\Phi_{in}}=\frac{E.E.\%}{100\cdot\Phi_{in}}{.}$
(5)
Thanks to this definition, E.R. can give indications on the entrapment efficacy
of a lipid membrane with a specific lipid composition, independently of its
geometrical features. We will show that the E.R. is of valuable help for our
analysis.
The values of $E.R.$ are shown in Table 2 and have been calculated by assuming
a bilayer thickness $d$ of 5 nm for all the HSPC:DPPG mixtures, corresponding
to the average value reported for pure DSPC and DPPG bilayers [30].
Table 2: Results of the LTS study of INH-loaded liposomes, prepared at molar ratio $INH/lipid=10$ (200 mM INH). $R$ is the radius of the liposomes, $N_{LTS}$ is the total number of liposomes per milliliter of solution, $INH/lip$ is the liposomal amount of INH, $\Phi_{in}$ is the liposome volume fraction, and $E.R.$ is the entrapment ratio calculated according to eq. 5.
$X_{PG}$ | R (nm) | $N_{LTS}(part/ml)$ | $INH/lip\,(mM/part)$ | $\Phi_{in}$ | E.R.
---|---|---|---|---|---
0.33 | $45\pm 4$ | $(2.7\pm 0.2)\,10^{12}$ | $(1.8\pm 0.2)\,10^{-16}$ | $(0.73\pm 0.07)\,10^{-3}$ | $3.3\pm 0.3$
0.50 | $40\pm 4$ | $(1.0\pm 0.1)\,10^{13}$ | $(1.4\pm 0.2)\,10^{-16}$ | $(1.8\pm 0.2)\,10^{-3}$ | $3.8\pm 0.4$
0.66 | $37\pm 3$ | $(6.6\pm 0.6)\,10^{12}$ | $(0.9\pm 0.1)\,10^{-16}$ | $(0.91\pm 0.09)\,10^{-3}$ | $3.3\pm 0.3$
LTS measurements show that the radius of liposomes determined by this
technique follows the same trend as the hydrodynamic radius determined by the
DLS technique (see Table 1), but is shifted to smaller values. This is
expected, since the LTS method gives the distribution of the geometrical sizes
of the suspended particles and not their equivalent ’hydrodynamic’ radii
(i.e., the radius of a sphere with the same diffusion coefficient) [35].
Thanks to the determination of the particle concentration of the liposomal
suspension ($N_{LTS}$), we can go a step further and calculate the ’INH
liposomal amount’ ($INH/lip$), i.e. the amount of INH loaded in each liposome,
by dividing the total concentration of INH determined by UV measurements by
$N_{LTS}$, assuming a uniform drug repartition in the suspension.
We note that the values of $INH/lip$ decrease as the molar fraction of the
charged lipid DPPG increases; this could be due to the geometrical radius,
which decreases as well. At the single-nanocarrier level, we can conclude
that liposomes with the lowest content of the charged lipid are able to retain
more INH. Both lipid composition and liposome size could affect the entrapment
capability of the single nanocarrier.
This said, to filter out the effect of the geometrical features of the
liposomes, namely the liposomal volume and the number of liposomes in each
suspension, we can perform a further analysis of the properties of drug
encapsulation by considering the E.R. parameter. First, it is noteworthy that
$E.R.$ is always higher than unity, the value corresponding to the absence of
drug-lipid interaction and drug leakage [30]. Then, we note that the E.R.
values are rather similar, all between 3 and 4. Considering that INH is added
in large excess to the pre-purified liposomal dispersion, the high E.R. values
mean that vesicles can retain, around and/or within the lipid bilayers, about
three times more drug molecules than expected on the basis of the geometrical
volume alone. Moreover, our findings suggest a preferential interfacial
localization of INH.
In the equimolar mixture, the E.R. value is slightly higher than in the
asymmetrical formulations. By its definition, E.R. helps us to establish which
formulation gives the highest drug entrapment with respect to the available
volume of the suspension. In other words, the analysis of E.R. allows us to
compare the different liposomal suspensions as if they were composed of the
same number of identical vesicles, thus highlighting the influence of bilayer
properties on drug entrapment. The E.R. results give evidence that mixed
liposomal vesicles at the equimolar HSPC-DPPG ratio are the most effective for
INH entrapment.
To sum up, our findings indicate the presence of drug-lipid attractive
interactions, strongly suggesting the occurrence of INH adsorption at the
lipid interface [30], and point out the relevant role of lipid organization. As
argued by Truzzi et al. [17] in PC-Chol multilamellar vesicles, drug
adsorption could be originated by the presence of a certain drug-lipid
affinity. Hereafter we will show that this is indeed what occurs in our
formulations, corroborating this finding by an extensive characterization of
the organization of lipid layer and its interaction with INH.
## 4 Biophysical characterization of liposomes and of the effect of INH
interaction
### 4.1 DSC
#### 4.1.1 Bare HSPC-DPPG liposomes
The DSC investigation has been performed on multilamellar liposomes to get the
highest DSC signal and amplify the fine details of the local structure. Fig. 3
shows the excess molar heat capacity $C_{p}$ obtained after baseline
subtraction from the DSC thermograms of HSPC– DPPG multilamellar liposomes at
different PG molar fraction ($X_{PG}$).
Figure 3: Excess molar heat capacity of multilamellar liposome suspensions (25
mg/ml) for pure DPPG and HSPC liposomes and for different $X_{PG}$ molar ratio
(increasing from bottom to top). Curves are shifted by a constant value for
clarity. The inset shows the endothermic peak relative to the pretransition of
HSPC.
HSPC liposomes exhibit two endothermic peaks corresponding to the pre- and
main transition at 48.8 °C and 53.4 °C, respectively. The main endothermic
peak progressively shifts to lower temperatures with increasing $X_{PG}$; it
broadens down to $X_{PG}=0.50$ and sharpens again as the system approaches
pure DPPG membranes ($X_{PG}\geq 0.33$). At the same time the pre-transition
observed for pure HSPC multilamellar liposomes (magnified in the inset of Fig.
3, top panel) vanishes as the fraction of DPPG is raised. Such a pre-
transition has already been reported for HSPC membranes in other aqueous
environments [47] and has been attributed to the formation of periodic membrane
ripples. In the literature, these two transitions are usually regarded as
independent events, although recent models [48] suggest that both pre- and
main transition are caused by chain melting. The main endothermic transition
never shows peak splitting or two detectable separate peaks, while linearly
shifting to lower temperatures for increasing $X_{PG}$ (Fig. 4-A). This
feature supports the full miscibility of the two lipids that melt
cooperatively in the bilayers, and it is corroborated by the minimum of the
peak height occurring at the equimolar condition ($X_{PG}=0.50$), where the
broadness of the process is maximum. Finally, we point out that the beginning
and the end of the transition region in the mixtures deviate from the melting
temperatures of the pure compounds, confirming again that the melting of the
chains of each lipid is strongly affected by the presence of the other
species.
We can go further and analyze the thermograms in terms of a two-state (gel-
liquid) model [49], through which we determine the average size of the
cooperative unit. The latter corresponds to the number of lipids passing from
one state to the other simultaneously and it is given by [50]
$N_{0}=\frac{\Delta H_{VH}}{\Delta H_{0}}{,}$ (6)
where the van’t Hoff enthalpy, $\Delta H_{VH}$, at the midpoint of the phase
transition, $T_{m}$, is given by [51, 50]
$\Delta H_{VH}=\frac{4RT_{m}^{2}(\Delta C_{p})|_{T_{m}}}{\Delta H_{0}}{.}$ (7)
Here $\Delta H_{0}$ is the calorimetric enthalpy and it is determined by
integrating the DSC peak from the onset temperature, where the deviation from
the baseline starts, to where the signal returns to the baseline. $(\Delta
C_{p})|_{T_{m}}$ is the peak height of the excess heat capacity. Fig. 4 (B,C,D)
show $\Delta H_{0}$, $\Delta H_{VH}$ and $N_{0}$ for the investigated samples.
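Eqs. (6)-(7) combine into a short numerical routine. The sketch below uses illustrative input values (not the measured ones) just to show the order of magnitude of the cooperative unit:

```python
R_GAS = 8.314  # gas constant, J / (mol K)

def vant_hoff_enthalpy(t_m, delta_h0, cp_peak):
    """Eq. (7): dH_VH = 4 R T_m^2 (dC_p)|_Tm / dH_0, SI units."""
    return 4.0 * R_GAS * t_m**2 * cp_peak / delta_h0

def cooperative_unit(t_m, delta_h0, cp_peak):
    """Eq. (6): N_0 = dH_VH / dH_0."""
    return vant_hoff_enthalpy(t_m, delta_h0, cp_peak) / delta_h0

# Illustrative (not measured) values: T_m ~ 326 K, dH_0 = 30 kJ/mol,
# peak excess heat capacity 10 kJ/(mol K)
n0 = cooperative_unit(326.0, 30e3, 10e3)
print(round(n0))  # -> 39 with these illustrative inputs
```

Note that with these numbers $\Delta H_{VH}>\Delta H_{0}$, consistent with observation i) below.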
The calorimetric enthalpy $\Delta H_{0}$, reported in Figure 4-B, shows a weak,
albeit detectable, non-monotonic behavior as a function of the DPPG fraction
$X_{PG}$. We attribute the initial decrease of $\Delta H_{0}$ (from $X_{PG}=1$
to $X_{PG}=0.66$) to the effect of the increased translational entropy term
due to the addition of a small amount of longer alkyl-chains in membranes
mostly made of shorter chains. This tends to fluidize the membrane, increases
the susceptibility of the bilayer to compositional fluctuations [52] and
lowers the overall amount of energy necessary to melt the membrane. As
$X_{PG}$ is further decreased, the calorimetric excess enthalpy increases, as
expected when the fraction of the neutral lipid (HSPC) with longest tails
prevails. In this case more energy must be transferred to the suspensions in
order to melt membranes in which lipid-lipid Van der Waals attractive
interactions are not anymore counterbalanced by electrostatic repulsions. Fig.
4-C shows the van’t Hoff enthalpy $\Delta H_{VH}$ for the same systems. In
this regard we point out two important facts: i) $\Delta H_{VH}>\Delta H_{0}$
for all $X_{PG}$, precluding the existence of multistep transitions [53] and
ii) an evident minimum is present at the equimolar composition $X_{PG}=0.5$.
Indeed, under this condition and assuming complete miscibility, the system reaches its
maximum heterogeneity and we do expect the minimization of the transition
cooperativity as the amount of mixed HSPC-DPPG contacts is the largest
possible. This leads directly to the minimization of the temperature
dependence of the reaction constant determining the equilibrium between lipids
in the two states (melt and solid) [49]. Finally, Figure 4-D shows the size of
the cooperative units $N_{0}=\frac{\Delta H_{VH}}{\Delta H_{0}}$. This shows a
minimum close to the equimolar composition in agreement with previously
reported results for other mixed vesicles [54]. Given our experimental
uncertainty we are not able to discern possible asymmetries due to the
different cooperativity of the transitions in pure one-component liposomes.
Figure 4: Melting temperature $T_{m}$ (Panel A), calorimetric enthalpy $\Delta
H_{0}$ (Panel B), van’t Hoff enthalpy $\Delta H_{VH}$ (Panel C), cooperative
unit $N_{0}$ (Panel D) for multilamellar mixed HSPC-DPPG liposomes as a
function of $X_{PG}$. The dashed line in panel A is a linear fit of the data.
It is important to note that, on one hand, multilamellar vesicles are the
optimal system to study the thermodynamic properties of lipid arrangement
since the several enclosed bilayers result in a high DSC signal. On the other
hand, their large size and uncontrolled number of bilayers represent a
serious limit in the investigation of drug-lipid interaction in a drug
carrier, so that multilamellar vesicles are no longer a suitable model for
the scope of the present investigation. For a reliable modelling of the
conditions characterizing the carrier, unilamellar liposomal vesicles have
been considered (see SI, section 2). No detectable difference between multi-
and unilamellar vesicles has been observed as far as their calorimetric
behavior is concerned, corroborating once again the full miscibility of the
two lipid components.
This preliminary calorimetric analysis of bare liposomes is an essential
premise, summarizing the role of lipid composition, for the in-depth
understanding of the drug-liposome interaction discussed hereafter.
#### 4.1.2 Interaction with Isoniazid
Fig. 5 shows the excess molar heat capacity for the three investigated
formulations of mixed liposomes and selected INH/lipid molar ratio $\rho$.
Thermograms in absence of drug ($\rho$=0) are shown as reference in the top
panels. The vertical dashed lines mark the position of $T_{m}$ for bare
liposomes at the different compositions, pure HSPC formulations and INH-HSPC
segregated domains.
First we note that the shape and the position of the main transition peak are
not drastically altered by INH addition for any formulation. This confirms
liposome stability at all the investigated INH concentrations.
However, what is interesting and, in some respects, surprising, is the effect
of INH on lipid miscibility. For liposomes at $X_{PG}=0.5$ the thermograms do
not undergo any noteworthy variation, whereas INH has a relevant effect on
liposomes for the asymmetrical formulations. At $X_{PG}=0.33$ a shoulder on
the right side of the main peak appears, and at $X_{PG}=0.66$ this effect is even
more striking. Here, in fact, the evident peak splitting occurring at any INH
content is a clear signature of lipid segregation induced by drug-lipid
association.
Figure 5: Excess molar heat capacity of unilamellar liposome suspensions (10
mg/ml) for different DPPG molar ratio $X_{PG}$ and INH/lipid molar ratio
$\rho$ as indicated in the panels. Lines identifying the position of the
melting temperatures of the peaks have been drawn as a guide for the eye
(dashed lines: main peak at $\rho=0$; dash-dotted lines: $T_{m}$ of pure HSPC
liposomes; dotted lines: secondary peak appearing at $\rho\geq 0.5$).
To interpret this complex behavior, one has to consider that the chosen
protocol prescribes that INH is added to suspensions of previously formed
unilamellar liposomes (as described in section 2.8), so that INH molecules
interact only with the external lipid layer, while the inner layer remains
scarcely affected by the presence of the drug. The internal layer may in
principle be affected by any change in the local composition of the external
leaflet induced by the drug. In fact, the two leaflets of a lipid
membrane are coupled in some way either by the interdigitation of hydrocarbon
tails or through the rapid exchange of cholesterol units [55]. However, while
the former mechanism is presumably absent or negligible in our bilayers since
the mismatch between the alkyl chains is very small, the latter is strictly
absent. In this situation, i.e. in presence of a weak coupling, the outer and
the inner layer can show different thermodynamic phases [55]. That is indeed
what our DSC thermograms suggest for the asymmetric mixtures as a result of
INH-lipid interaction.
Figure 6: Schematic picture modelling the effect of INH addition on lipid
organization. HSPC and DPPG lipids have been identified by indication of their
electrostatic properties, INH molecule is drawn as a diamond.
On the basis of the observed DSC thermograms, in the outer lipid layer we can
hypothesize the formation of super-bound INH-HSPC-rich phases with a $T_{m}$
higher than pure HSPC, as visible in the secondary peaks occurring at
$\rho\geq 0.5$ at $X_{PG}=0.66$ (see dotted lines in Fig. 5). At $X_{PG}=0.33$
the fraction of super-bound INH-HSPC-rich phase is low and the mixed HSPC-DPPG
phase prevails, since a complete HSPC segregation would force the charged DPPG
molecules closer and closer, a configuration unfavored due to the high
energetic and entropic penalty. In excess of DPPG ($X_{PG}=0.66$),
the situation is even more complex since bare HSPC molecules segregate in a
distinct phase, as evidenced by the unambiguous presence of a process centered
at the melting temperature of pure HSPC that appears as a left-shoulder of the
secondary peak at high temperature. For both $X_{PG}=0.33$ and $X_{PG}=0.66$
the segregated phases coexist with mixed HSPC-richer and DPPG-richer phases
respectively, which shift to higher temperature due to the INH screening, as
the main DSC peaks indicate.
It is therefore evident that the presence of INH at lipid interface and its
interaction with lipids play a key role. Previous investigations suggested the
preferential surface location of INH in one-component zwitterionic DMPC or
ionic DMPG liposomes [8] and hypothesized that the interaction between drug
and the phosphate region of the lipid polar heads via van der Waals or
hydrogen bonding can modify the lipid packing in DPPC liposomes [17, 56]. At
physiological conditions, INH is a non-charged species [29]. Its local
electrostatic properties and its charge density distribution have been
considered relevant for its interaction with drug receptors and lipid
membranes [57]. In particular, INH has a larger dipole moment than water [58],
originating from the deformation of the electronic charge distribution in the
vicinity of O(1) and N(1) atoms, due to their large electronegative potential
[57].
In the presence of HSPC-DPPG mixed bilayers and in aqueous environments, it is
then reasonable to consider that a dipole-dipole interaction between INH
molecules with the zwitterionic HSPC lipids is favored. This is in line with
what has been already observed for other dipolar molecules such as anesthetics
[59, 60], while INH has less affinity for the ionic DPPG lipid. We speculate
that this preferential attractive interaction can promote the formation of
quadrupolar INH-HSPC complexes and increase the binding energy between HSPC
lipids. This mechanism can favor the segregation of HSPC in conditions of
molar excess of DPPG, as suggested by the onset of a secondary peak at a
temperature higher than that characterizing the melting of the bare HSPC
membranes. On the other hand, the positive region of INH dipole can interact
also with the anionic polar head of DPPG, thus stabilizing the DPPG-richer
phase by electrostatic screening. A simple naive scheme of this situation is
sketched in Fig. 6.
Lipid segregation or enhanced disorder typically gives rise to an interplay
between cooperativity change and shift of the melting transition temperature
of the single lipids, which reflects the formation or disruption of
homogeneous domains [61]. For this reason, a more detailed description of the
effect of INH addition can be captured by simultaneous analysis of i) the
shift of the melting temperature $\Delta T_{m}$, ii) the variation of the
calorimetric enthalpy and iii) the cooperative unit change for each
endothermic transition. $\Delta T_{m}$ is defined as the difference
$T_{m}^{\rho}-T_{m}^{\rho=0}$ between the melting temperature of lipid
membranes in presence of INH and that measured for $\rho=0$ (no added INH).
Analogously, the enthalpy and cooperative unit variations are calculated as
the ratios between $\Delta H_{0}$ and $N_{0}$ measured in the presence of INH
and the same quantities measured for bare liposomes ($\Delta H_{0}^{\rho=0}$
and $N_{0}^{\rho=0}$). Finally, to decouple endothermic peaks relative to
DPPG-rich and HSPC-rich domains for $X_{PG}=0.33$ and evaluate the
corresponding melting temperature and calorimetric enthalpy, we have further
performed a double Gaussian fit. The results are shown in Fig. 7.
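Such a double Gaussian decomposition can be sketched with `scipy.optimize.curve_fit` on synthetic data; the actual peak model and fitting ranges used for the measured thermograms are not reproduced here, so the peak positions and widths below are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, t1, w1, a2, t2, w2):
    """Sum of two Gaussian peaks modelling overlapping DPPG-rich and
    HSPC-rich melting endotherms."""
    return (a1 * np.exp(-((t - t1) / w1) ** 2 / 2)
            + a2 * np.exp(-((t - t2) / w2) ** 2 / 2))

# Synthetic thermogram: two overlapping peaks at 47 and 51 degC
t = np.linspace(35, 60, 300)
y = two_gaussians(t, 1.0, 47.0, 1.5, 0.5, 51.0, 1.2)
y_noisy = y + np.random.default_rng(0).normal(0, 0.01, t.size)

p0 = [1.0, 46.0, 2.0, 0.5, 52.0, 2.0]   # initial guesses
popt, _ = curve_fit(two_gaussians, t, y_noisy, p0=p0)
t_m1, t_m2 = sorted((popt[1], popt[4]))
print(round(t_m1, 1), round(t_m2, 1))   # recovered melting temperatures
```

The area of each fitted peak ($a\,w\sqrt{2\pi}$) then gives the calorimetric enthalpy of the corresponding process.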
Figure 7: Melting temperature shift (Panel A), normalized calorimetric
enthalpy (Panel B) and normalized cooperative unit (Panel C) as a function of
the INH/lipid molar ratio $\rho$ at different $X_{PG}$, as indicated in panel
A. The horizontal dashed lines correspond to the $\rho=0$ case.
For asymmetric mixtures, INH addition produces a positive shift of the melting
temperature for all the detected processes. This suggests that primarily INH
screens electrostatic repulsions between the phosphate groups decorating the
membrane surfaces, lowering the effective surface charge of the liposomes and
enhancing lipid local order [62]. As also observed by Pinheiro et al. [29] for
DPPC liposomes, the positive shift of $T_{m}$ indicates the presence of drug
at the lipid interface in the chain region (C1-C9) close to the polar heads.
Clearly, the largest increase is obtained for the HSPC-rich domains appearing
in the mixtures at $X_{PG}=0.33$ and $X_{PG}=0.66$, where lipid demixing may
take place [63], while for $X_{PG}=$0.5 this shift is barely detectable or
even weakly negative for large $\rho$ values, suggesting that the high mixing
entropy of the mixture in this case dominates over any enthalpic change due to
the INH insertion. The weak, albeit detectable, negative trend observed for
all the formulations in large excess of INH is presumably the signature of a
local disorder induced by the INH insertion in the bilayer, that facilitates
the melting transition.
At the same time we observe that the calorimetric enthalpy variation decreases
progressively as the amount of DPPG increases in the mixed liposomes. This is
indeed a quite remarkable result, since it unambiguously proves that
compositional differences in lipid membranes determine the amount of heat
needed to melt the bilayers in the presence of INH, the latter playing a
pivotal role in dictating the differences between the response of lipid
formulations. For the sake of clarity we remind here that the enthalpy
variation for $X_{PG}=$0.66 and $X_{PG}=$0.33 refers to the whole melting
process and not only to one of the two endothermic transitions observed for
such mixtures.
We now describe in more detail the calorimetric behavior of the suspensions.
The mixed HSPC-DPPG phases in the two asymmetric mixtures show a quite
mirror-like behavior, both in terms of calorimetric enthalpy variation,
(Figure 7-B) and normalized cooperative unit (Fig. 7-C). At $X_{PG}=0.33$ the
INH addition causes an overall increase of the energy necessary to bring all
the lipids into the melt state and a weak reduction of the cooperative unit.
This suggests that the screening of the surface charge increases the average
energy barrier between the melt and the solid state of the lipids, while
bringing the cooperative unit closer to the one obtained for pure HSPC
membranes, since the charge defects introduced by the phosphate groups of
DPPG are now presumably compensated by INH adsorption. By contrast, a large
cooperative unit variation characterizes the HSPC-rich domains, suggesting
that lipid-lipid correlations are enhanced by the INH-HSPC coupling. We refer
the reader to section 4.3 for a further discussion of the occurrence of lipid
demixing in asymmetric HSPC-DPPG formulations.
For $X_{PG}=0.66$ the decrease of the calorimetric enthalpy with respect to
the bare liposomes is accompanied by an increase of the cooperativity of both
the mixed HSPC-DPPG domains, which is generally attributed to a higher
solvation and reduced charge of the phospholipid head groups [64], and HSPC-
rich domains. In the latter case the large cooperativity of the melting
process is strictly related to demixing. As previously discussed, INH
addition is able to strongly destabilize the lipid mixture, inducing lipid
segregation and phase separation also for a relatively small amount of the
drug. A higher affinity between INH and HSPC, for which dipole-dipole
short-range interactions are large compared to the dipole-monopole interaction
between INH and DPPG, rationalizes this result. It’s worth stressing again that
unbalanced interactions between lipids and an external agent (INH here) are
very important in determining the lipid local order within the membrane, as
they typically favor segregation of one of the two species, as also reported
for DNA-cationic liposome complexes [61, 63, 65, 66, 67]. All in all, in
excess of PG headgroups INH addition enhances the formation of DPPG-rich
(mixed) and HSPC-rich (pure) domains, both characterized by high
cooperativity, since in HSPC-rich domains lipids are more strongly bound and,
at the same time, INH screens residual charges borne by PG headgroups.
It is also interesting to note that the two components of the demixed
membranes respond differently to an increase of INH content in the
$X_{PG}=0.66$ formulation. On the one hand, the cooperativity of the
transition of DPPG-rich domains first increases for $\rho\lesssim 2.5$,
presumably due to the electrostatic coupling with INH molecules, and then
weakly decreases for larger values of $\rho$, probably due to INH insertion in
the bilayer. On the other hand, the values of the cooperative unit of HSPC-
rich domains decreases progressively for increasing values of $\rho$, showing
that a large amount of INH molecules in the bilayer mainly degrades
molecular correlation and cooperativity in zwitterionic domains.
Actually, as observed also for nucleic acid-lipid complexes [61, 63], INH
molecules could affect more intimately the lipid bilayer structure than simply
enhancing electrostatic screening by molecular insertion. This may favor
disorder and nucleation of defects and facilitate the transition to the melt
state of the mixed bilayer.
Finally we note that for $\rho\gtrsim 2.5$ and for $X_{PG}=0.5$ local disorder
is enhanced, as reflected by a decreased melting temperature for high INH
content. The interpretation of the data for the symmetric mixture is
definitely more intricate as cooperativity stays basically unaffected by INH
addition, while the calorimetric enthalpy scatters and suggests a subtle
balance between the onset of demixing, electrostatic screening and INH
insertion within the membrane. This aspect deserves by all means a more
detailed investigation through techniques probing more directly the membrane
structure, including more refined calorimetric characterization, and will be
the subject of a future publication.
### 4.2 Static light scattering
By taking advantage of static light scattering (SLS) we measured the time-
averaged scattered intensity at different scattering vectors $q$ (see section
2), and hence the form factor and the gyration radius $R_{g}$ of the
liposomes. At the same time, it is interesting to investigate the intensity at
one fixed scattering angle (here $I_{90}$ measured at $\theta=90^{\circ}$) as
a function of temperature, since a change of the refractive index of the
vesicles induced by the membrane melting transition affects this observable
[40]. We have then characterized the melting transition of the lipid bilayers
via light scattering technique (results are reported in SI). Finally, we also
note that the melting transition of all the liposomes investigated here gives
rise, as expected, to a thinning of the lipid bilayers (see SI).
The gyration radii of all mixed liposomes in the absence of INH ($\rho=0$) and
at a selected drug concentration ($\rho=5$) are shown in Fig. 8. INH addition gives
rise to an increase of $R_{g}$ of the liposomes for all the $X_{PG}$
investigated and at all temperatures. This rules out 1) the simple
electrostatic screening effect due to the INH localization within a diffuse
layer around the liposomes that would facilitate the lipid compactness within
the bilayer and hence an overall decrease of the liposome size, and 2) an
osmotic effect due to the excess of solutes outside the liposomes giving rise
to the partial evacuation of water from the interior of the liposomes [68] and
hence to their shrinkage. On the other hand, the increased size suggests that
INH does accumulate on liposome surface possibly penetrating (even partially)
in the bilayer. As a matter of fact both superficial adsorption and/or partial
insertion of INH within the bilayer would give an increase of the liposome
size.
By fitting the $R_{g}(T)$ with a Boltzmann-type equation (eq. 3) we have
extracted the optical transition temperature $T_{c}^{opt}$ for bare liposomes
and in the presence of INH ($\rho=5$). Consistently with the DSC results at
$\rho=5$, we observe a net decrease of the transition temperature with respect
to the bare liposomes only in the case of equimolar mixtures ($X_{PG}=0.5$)
(see table 3), while for the other two mixtures the shift is positive
($X_{PG}=0.66$) or not detectable ($X_{PG}=0.33$) (see table 3).
Table 3: Melting temperatures ($T_{c}^{opt}$) and gyration radii ($R_{s}$, $R_{m}$) below and above the melting transition of unilamellar liposomes for the three $X_{PG}$ employed in this work, for bare liposomes ($\rho=$0) and at a fixed INH/lipid molar ratio ($\rho=$5). All values are obtained by a non-linear fit of $R_{g}(T)$ via equation 3.
$X_{PG}$ | $\rho$ | $R_{s}$ [nm] | $R_{m}$ [nm] | $T_{c}^{opt}$ [°C]
---|---|---|---|---
0.33 | 0 | 49.3 $\pm$ 0.5 | 58.6 $\pm$ 0.5 | 46.1 $\pm$ 0.5
0.33 | 5 | 51.5 $\pm$ 0.2 | 60.6 $\pm$ 0.4 | 46.1 $\pm$ 0.3
0.50 | 0 | 46.4 $\pm$ 0.5 | 56.4 $\pm$ 0.5 | 43.9 $\pm$ 1.0
0.50 | 5 | 49.1 $\pm$ 0.2 | 58.9 $\pm$ 0.4 | 38.7 $\pm$ 0.2
0.66 | 0 | 47.1 $\pm$ 0.7 | 55.3 $\pm$ 0.5 | 40.5 $\pm$ 0.9
0.66 | 5 | 49.1 $\pm$ 0.2 | 57.7 $\pm$ 0.4 | 42.8 $\pm$ 0.5
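Since eq. 3 is not reproduced in this section, the sketch below assumes a standard Boltzmann sigmoid for $R_{g}(T)$ and fits synthetic data mimicking the $X_{PG}=0.50$, $\rho=0$ entry of Table 3:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, r_s, r_m, t_c, dt):
    """Sigmoidal step from R_s (gel) to R_m (melt) centred at T_c.
    Assumed functional form; eq. 3 of the paper is not reproduced here."""
    return r_m + (r_s - r_m) / (1.0 + np.exp((t - t_c) / dt))

# Synthetic R_g(T) with R_s = 46.4 nm, R_m = 56.4 nm, T_c = 43.9 degC
t = np.linspace(30, 60, 60)
r = boltzmann(t, 46.4, 56.4, 43.9, 1.0)
r += np.random.default_rng(1).normal(0, 0.1, t.size)

popt, _ = curve_fit(boltzmann, t, r, p0=[45, 55, 45, 2])
print(round(popt[2], 1))   # fitted T_c^opt
```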
Once again, all the above-described results corroborate the scenario where
INH molecules strongly interact with the mixed HSPC-DPPG bilayer and affect
its internal structure by binding preferentially with one type of lipid rather
than with both components of the bilayer to the same extent.
Figure 8: Gyration radii of the unilamellar liposomes for bare liposomes
($\rho=0$, empty symbols) and in the presence of INH ($\rho=5$, full symbols)
for $X_{PG}=0.33$ (Panel A), $X_{PG}=0.50$ (Panel B), $X_{PG}=0.66$ (Panel C).
### 4.3 Monolayers
To further clarify the interaction of INH with HSPC and DPPG and unveil the
very nature of the interaction of INH with both the lipids we have studied the
isotherms of mixed HSPC-DPPG and pure (one-component) DPPG and HSPC lipid
monolayers at air-water interface in the presence of INH. The drug has been
dissolved in ethanol and injected in the subphase under the monolayers in the
liquid-expanded phase. The so-formed layers have then been compressed. The INH
insertion in this precise condition allows to observe the maximum extent of
drug-lipid interaction, since the low lipid density of the monolayer
facilitates drug-lipid association [28].
Preliminarily, we have tuned the nominal INH/lipid molar ratio $\rho$ by
injecting increasing volumes of INH, dissolved in ethanol at fixed
concentration (see SI). Isotherms obtained after injection of 200 $\mu$l of
INH solution at concentration 0.013 mM, i.e. the maximal amount used in
monolayer experiments, are shown in Fig. 9, for mixed (panels A-B-C) and one-
component monolayers (panels D-E). To evaluate the net effect of INH, isotherms
obtained after the injection of the same volume of pure ethanol are shown for
comparison. Ethanol is an amphiphilic surface-active compound and it may
interact with the lipid monolayers by insertion at the interface [69].
Injection of INH or ethanol causes a shift to larger areas per molecule, at a
given surface pressure. This is evident by comparing the different curves with
those obtained for bare monolayers, i.e. in the absence of injection (Panel
A-E of Fig. 9). This effect is commonly attributed to the adsorption of the
injected molecules at the interface and is enhanced by the interaction between
the drug and lipid layers, as claimed by Chimote et al. [28] and by Marques et
al. [56], who suggested an intercalation of INH close to the polar head of
DMPC liposomes. Since here INH is in large excess with respect to the number of
lipids, its insertion may occur, followed by a lipid realignment upon
compression due to the varying balance between the lateral
attractive/repulsive forces between lipids upon increasing their packing.
In fact, if the amount of molecules inserted in the lipid film does not change
during compression, an increase of the molecular areas in the condensed phase
due to the additional space needed for INH molecules is expected. This
’additional space’ cannot be neglected, and indicates that the drug is located
to some extent in the lipid layer. Here we observe that $\Pi$-A isotherms of
DPPG and HSPC in the condensed state have an almost-parallel course in the
presence and in the absence of INH. In mixed monolayers this effect decreases
upon decreasing the DPPG content. For $X_{PG}=0.33$, the isotherm recorded in
the presence of INH converges towards the isotherms of the bare monolayer at
high surface pressure. This could be attributed to a ’squeezing out’ of drug
during compression or to a peculiar rearrangement between the drug and the
lipids, which minimizes the overall molecular hindrance.
Fig. 9-F shows the excess free energy $\Delta G$ for mixed HSPC-DPPG
monolayers in the presence and the absence of INH and ethanol. The values are
calculated at $\Pi=35$ mN/m, i.e. the pressure corresponding to the packing
density of lipid bilayers [70, 71]. Mixed HSPC-DPPG monolayers ($\circ$) on
pure Hepes subphase show an almost ideal behavior with $\Delta G$ approaching
0 at low PG content and a slightly higher negative deviation for
$X_{PG}=0.66$, indicative of attractive interactions between the two lipids
which can stabilize the mixed film. This confirms the results obtained by DSC
indicating a full miscibility of the two lipids (see section 4.1) and it is in
agreement with simulations [52], showing that demixing does not occur when the
length mismatch between the alkyl chains of the two lipids in the mixture is
lower than 6 carbons.
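The excess free energy values discussed above follow from the isotherms via the standard construction $\Delta G_{exc}(\Pi)=\int_0^\Pi \left[A_{12}-x_1A_1-x_2A_2\right]\,d\Pi'$, where $A_{12}$ is the mean molecular area of the mixed film and $A_1$, $A_2$ those of the pure components. A minimal sketch of this integration (trapezoidal rule; the isotherm values below are illustrative, not measured data):

```python
# Excess free energy of mixing for a binary Langmuir monolayer.
# Isotherm values used in the example are illustrative only.

def excess_free_energy(pressures, a_mix, a1, a2, x1):
    """Integrate (A12 - x1*A1 - x2*A2) dPi by the trapezoidal rule.

    pressures : surface pressures Pi (mN/m), ascending from ~0
    a_mix     : mean molecular area of the mixed film at each Pi (A^2)
    a1, a2    : molecular areas of the pure components at each Pi (A^2)
    x1        : mole fraction of component 1
    Returns Delta G_exc in J/mol.
    """
    x2 = 1.0 - x1
    a_exc = [am - x1 * p - x2 * q for am, p, q in zip(a_mix, a1, a2)]
    integral = sum(
        0.5 * (a_exc[i] + a_exc[i + 1]) * (pressures[i + 1] - pressures[i])
        for i in range(len(pressures) - 1)
    )
    # (A^2)*(mN/m) = 1e-23 J per molecule; times Avogadro -> J/mol
    return integral * 1e-23 * 6.022e23

# Illustrative numbers: an ideally mixed film gives Delta G_exc = 0
pi_vals = [0, 10, 20, 35]
a1 = [60, 50, 45, 40]
a2 = [80, 65, 55, 48]
x1 = 0.5
a_ideal = [x1 * p + (1 - x1) * q for p, q in zip(a1, a2)]
print(excess_free_energy(pi_vals, a_ideal, a1, a2, x1))  # -> 0.0
```

A mixed film more condensed than the ideal one (smaller areas) yields negative $\Delta G_{exc}$, as found here for $X_{PG}=0.33$ in the presence of INH.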
Figure 9: Surface pressure isotherms of mixed HSPC-DPPG monolayers (A-B-C) and
pure lipids (D-E) on Hepes subphase ($\bigcirc$) and after injection of 200
$\mu$l of ethanolic INH or pure ethanol (full and dashed line, respectively).
Panel F shows the excess free energy of mixing $\Delta G$ calculated for mixed
HSPC-DPPG monolayers on pure Hepes subphase ($\bigcirc$) and after injection
of 200 $\mu$l of ethanolic INH solution ($\blacktriangle$) or pure ethanol
($\square$), with the calculated difference ($\ast=\blacktriangle-\square$) to
determine the effect of INH alone. $\Delta G$ is calculated at $\Pi$=35 mN/m.
The addition of polar compounds such as ethanol and INH ($\square$,
$\blacktriangle$, respectively) causes deviations from the ideal behavior,
indicating the onset of non-negligible interactions between the solutes added
in the subphase and the lipid film. The large negative deviations at
$X_{PG}=0.33$ in the presence of INH indicate that HSPC-DPPG bonds are
preferred with respect to HSPC-HSPC and DPPG-DPPG ones. It’s worth remarking
here that this is not in contrast with the onset of super-bound INH-HSPC
states observed in DSC thermograms for this lipid formulation since in large
excess of HSPC the presence of such states is statistically favored even
without lipid demixing.
Conversely, beyond equimolarity, the positive deviations of $\Delta G$
indicate that interactions between lipids of the same kind are favored and may
cause lipid segregation. The latter can indeed be induced in the presence of
positive values of $\Delta G$ of the order of $k_{B}T$ at T=298 K, as in our
system, in line with Monte Carlo simulations of binary lipid mixtures [72]. In
agreement with DSC results, this finding indicates that INH-lipid interaction
can modify the miscibility of HSPC and DPPG in a complex interplay with lipid
composition.
We have already discussed in section 4.1.2 how the attractive interaction
between the dipoles of INH and HSPC can promote the formation of quadrupolar
INH-HSPC complexes and increase the binding energy between HSPC lipids.
However, it’s worth stressing once again that this mechanism can favor the
segregation of a HSPC-rich phase in excess of DPPG, as indicated by the
positive values of $\Delta G$ at $X_{PG}=0.66$. While the condensation of the
relatively few HSPC lipids ’diluted’ in a DPPG-enriched matrix is entropically
and energetically costly in the absence of INH, the addition of INH is able to
modify this scenario via the formation of attractive complexes and screening
of electrostatic repulsion between DPPG molecules. Conversely, in excess of
HSPC, the addition of INH does not cause lipid demixing because the
condensation of the DPPG molecules, now acting as repulsive ’defects’, would
always imply a high energetic and entropic penalty. In this condition and in
excess of INH, the negative value of $\Delta G$ suggests a stabilization of
the mixed film by screening the electrostatic repulsions between DPPG
molecules.
All in all, these findings corroborate the scenario in which the strong lipid
demixing observed in unilamellar liposomes with $X_{PG}=0.66$ in the presence
of INH is driven by the modification of lipid-lipid interactions due to
INH-HSPC binding.
## 5 Conclusion
We have investigated the encapsulation of the antitubercular drug isoniazid
(INH) in charged unilamellar vesicles composed of mixtures of zwitterionic
HSPC and anionic DPPG lipids, and its interaction with the lipid bilayers. For
the first time, the amount of drug encapsulated in the vesicles, determined by
UV spectroscopy, has been compared with the one expected from geometrical
arguments, based on the determination of the liposome volume fraction by the
’Laser Transmission Spectroscopy’ (LTS) technique. We found that the
encapsulation of INH is much larger than expected, showing that drug-lipid
interaction is relevant. This finding is a central result of the present work
and has motivated a further in-depth investigation of drug-lipid interaction
by calorimetry, static light scattering and the Langmuir monolayer technique.
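The geometric expectation mentioned above can be made concrete: for purely passive entrapment, a water-soluble drug is encapsulated in proportion to the aqueous volume fraction enclosed by the vesicles, which is accessible once the number density and size of the liposomes are known (as provided by LTS). A minimal sketch under idealized assumptions (monodisperse, spherical, unilamellar vesicles; the numbers are illustrative, not our measured values):

```python
from math import pi

def expected_entrapped_fraction(n_per_ml, radius_nm, bilayer_nm=4.0):
    """Fraction of the sample volume enclosed by the aqueous cores of
    unilamellar vesicles, i.e. the fraction of a freely dissolved drug
    expected inside by purely geometric entrapment. Assumes monodisperse
    spheres and no drug-lipid interaction (illustrative only)."""
    r_inner = radius_nm - bilayer_nm              # aqueous core radius (nm)
    v_core_nm3 = (4.0 / 3.0) * pi * r_inner ** 3  # core volume (nm^3)
    v_core_ml = v_core_nm3 * 1e-21                # 1 nm^3 = 1e-21 mL
    return n_per_ml * v_core_ml

# e.g. 1e12 vesicles/mL of 60-nm radius -> roughly 7e-4 (about 0.07 %)
print(expected_entrapped_fraction(1e12, 60.0))
```

Encapsulation well above this geometric bound, as observed for INH, then signals an interfacial excess driven by drug-lipid interaction.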
INH can accumulate at the lipid interface, as indicated by the systematic
$\sim$2-nm increase of the gyration radius $R_{g}$ of liposomes in the
presence of INH, and it further modifies lipid miscibility in a complex
interplay between electrostatic screening, entropy, and lipid-lipid and
drug-lipid interactions, as shown by calorimetry. Surprisingly, we found that
INH can induce lipid segregation in asymmetric mixtures, which gives rise to a
clear phase separation at excess of the anionic species (DPPG). Conversely, at
excess of HSPC and at equimolar composition, the screening effect of INH
prevails and the lipid layer remains fully miscible, as in the absence of the
drug, with the maximum heterogeneity observed at the equimolar composition.
The results obtained on HSPC-DPPG Langmuir monolayers confirmed the
accumulation of the drug at the interface and the phase separation at DPPG
excess, and pointed out the modification of the lipid packing due to INH
insertion. At the equimolar composition, the maximal heterogeneity of the
lipid layer occurs and INH insertion in the bilayer could be favored, thus
explaining the slightly larger value of the entrapment ratio found for
INH-loaded liposomes.
Since INH is a small dipolar molecule with the amine group protruding out of
the molecular plane of the pyridine ring [57, 73], its peculiar structure and
electronic configuration could favor the electrostatic interaction with
zwitterionic lipids and its insertion in the bilayer. In a naive picture of
the INH-lipid interaction, it can be speculated that, thanks to its dipolar
nature at physiological pH, INH has a higher affinity for the zwitterionic
HSPC than for DPPG and can form a quadrupolar complex which increases the
binding energy between HSPC lipids. The condensation of these complexes and
the segregation
of a HSPC-rich phase occurs mainly at DPPG excess, where the entropic penalty
due to lipid segregation can be counterbalanced by the strong enthalpy gain
obtained by bringing together quadrupolar INH-HSPC complexes.
To the best of our knowledge, our investigation represents the first piece of
evidence on the effect of INH on lipid organization in charged PC-based
liposomes to be employed as anti-TB nanocarrier for pulmonary delivery. While
previous works dealing with uncharged lipid bilayers have shown the crucial
role of bilayer composition for targeting the biological membranes and
understanding the mechanism of action of INH, a comprehensive investigation on
the interaction between this drug and charged liposomal nanocarriers is
lacking. Only recently, a small-angle neutron-scattering (SANS) investigation
focused on the structure of neutral PC-Chol multilamellar vesicles loaded with
isoniazid and rifampicin and hypothesized an affinity between INH and PC [17].
Our work points out the importance of the investigation of the drug-lipid
interface to improve the design of a nanocarrier. The control of the transport
of materials across the bilayer, i.e. the release of entrapped cargo from
liposomes, is a critical element needed to harness the potential of lipid-
based vesicular carriers. A simple means of exerting control over efflux in
synthetic liposomes involves knowledge of the lipid structure and of the
lipid/drug interactions dictating self-assembly and permeability properties
[74].
Furthermore, the ability to understand the modification induced by drug/lipid
interaction could help in finding strategies to modulate bilayer stability and
semi-permeable properties and to stabilize the liposome bilayer during
circulation [75, 76]. A deeper understanding of drug-bilayer interactions may
lead to development of safer and more efficient drugs and drug delivery
systems.
## 6 Acknowledgments
This research was funded by Phospholipid Research Center (Grant n.
FBO-2017-051/1-1) and supported by Lipoid. F. S. and S. T. acknowledge support
from Torno Subito projects of Lazio Adisu-Regione Lazio; S. S. thanks S.
Casciardi for TEM microscopy and C. Bombelli and F. Ceccacci for use of
Minitrough and for scientific discussions.
## References
* [1] World Health Organization, Global Tuberculosis Report 2019, Tech. rep. (2019).
* [2] K. Xu, Z. C. Liang, X. Ding, H. Hu, S. Liu, M. Nurmik, S. Bi, F. Hu, Z. Ji, J. Ren, et al., Nanomaterials in the prevention, diagnosis, and treatment of Mycobacterium tuberculosis infections, Advanced healthcare materials 7 (2018) 1700509.
* [3] P. Deol, G. Khuller, K. Joshi, Therapeutic efficacies of isoniazid and rifampin encapsulated in lung-specific stealth liposomes against mycobacterium tuberculosis infection induced in mice, Antimicrobial agents and chemotherapy 41 (1997) 1211–1214.
* [4] D. C. Quenelle, J. K. Staas, G. A. Winchester, E. L. Barrow, W. W. Barrow, Efficacy of microencapsulated rifampin in mycobacterium tuberculosis-infected mice, Antimicrobial agents and chemotherapy 43 (1999) 1144–1151.
* [5] C. Marianecci, L. Di Marzio, F. Rinaldi, M. Carafa, F. Alhaique, et al., Pulmonary delivery: innovative approaches and perspectives, Journal of Biomaterials and Nanobiotechnology 2 (2011) 567–575.
* [6] P. Preziosi, Isoniazid: metabolic aspects and toxicological correlates, Current drug metabolism 8 (2007) 839–851.
* [7] A. D. Bangham, M. M. Standish, J. C. Watkins, Diffusion of univalent ions across the lamellae of swollen phospholipids, Journal of molecular biology 13 (1965) 238–252.
* [8] C. Rodrigues, P. Gameiro, S. Reis, J. Lima, B. de Castro, Spectrophotometric determination of drug partition coefficients in dimyristoyl-l-$\alpha$-phosphatidylcholine/water: a comparative study using phase separation and liposome suspensions, Analytica chimica acta 428 (2001) 103–109.
* [9] C. Becker, J. Dressman, G. Amidon, H. Junginger, S. Kopp, K. Midha, V. Shah, S. Stavchansky, D. Barends, Biowaiver monographs for immediate release solid oral dosage forms: Isoniazid, Journal of pharmaceutical sciences 96 (2007) 522–531.
* [10] R. Abra, C. A. Hunt, D. Lau, Liposome disposition in vivo VI: delivery to the lung, Journal of pharmaceutical sciences 73 (1984) 203–206.
* [11] R. Pandey, S. Sharma, G. Khuller, Liposome-based antitubercular drug therapy in a guinea pig model of tuberculosis, International journal of antimicrobial agents 23 (2004) 414–415.
* [12] O. R. Justo, Â. M. Moraes, Incorporation of antibiotics in liposomes designed for tuberculosis therapy by inhalation, Drug delivery 10 (2003) 201–207.
* [13] A. Gürsoy, E. Kut, S. Özkırımlı, Co-encapsulation of isoniazid and rifampicin in liposomes and characterization of liposomes by derivative spectroscopy, International journal of pharmaceutics 271 (2004) 115–123.
* [14] G. Chimote, R. Banerjee, In vitro evaluation of inhalable isoniazid-loaded surfactant liposomes as an adjunct therapy in pulmonary tuberculosis, Journal of Biomedical Materials Research Part B: Applied Biomaterials 94 (2010) 1–10.
* [15] C. I. Nkanga, R. W. Krause, X. S. Noundou, R. B. Walker, Preparation and characterization of isoniazid-loaded crude soybean lecithin liposomes, International journal of pharmaceutics 526 (2017) 466–473.
* [16] N. Kosa, B. Bocskei-Antal, K. Horváti, S. Bosze, L. Herenyi, I. Voszka, Investigation of encapsulated liposomal antituberculotics and effects on in vitro model systems, Biophysical Journal 110 (2016) 246a–247a.
* [17] E. Truzzi, F. Meneghetti, M. Mori, L. Costantino, V. Iannuccelli, E. Maretti, F. Domenici, C. Castellano, S. Rogers, A. Capocefalo, et al., Drugs/lamellae interface influences the inner structure of double-loaded liposomes for inhaled anti-TB therapy: an in-depth small-angle neutron scattering investigation, Journal of colloid and interface science 541 (2019) 399–406.
* [18] S. Bremer-Hoffmann, B. Halamoda-Kenzaoui, S. E. Borgos, Identification of regulatory needs for nanomedicines, Journal of Interdisciplinary Nanomedicine 3 (2018) 4–15.
* [19] M. Wasserman, R. M. Beltrán, F. O. Quintana, P. M. Mendoza, L. C. Orozco, G. Rodriguez, A simple technique for entrapping rifampicin and isoniazid into liposomes, Tubercle 67 (1986) 83–90.
* [20] S. Sennato, F. Bordi, C. Cametti, C. Coluzza, A. Desideri, S. Rufini, Evidence of domain formation in cardiolipin-glycerophospholipid mixed monolayers. A thermodynamic and AFM study, The Journal of Physical Chemistry B 109 (2005) 15950–15957.
* [21] S. Lupi, A. Perla, P. Maselli, F. Bordi, S. Sennato, Infrared spectra of phosphatidylethanolamine–cardiolipin binary system, Colloids and Surfaces B: Biointerfaces 64 (2008) 56–64.
* [22] R. W. Niven, H. Schreier, Nebulization of liposomes. i. Effects of lipid composition, Pharmaceutical research 7 (1990) 1127–1133.
* [23] M. L. Manca, C. Sinico, A. M. Maccioni, O. Diez, A. M. Fadda, M. Manconi, Composition influence on pulmonary delivery of rifampicin liposomes, Pharmaceutics 4 (2012) 590–606.
* [24] M. L. Manca, D. Valenti, O. D. Sales, A. Nacher, A. M. Fadda, M. Manconi, Fabrication of polyelectrolyte multilayered vesicles as inhalable dry powder for lung administration of rifampicin, International journal of pharmaceutics 472 (2014) 102–109.
* [25] Q. T. Zhou, S. S. Y. Leung, P. Tang, T. Parumasivam, Z. H. Loh, H.-K. Chan, Inhaled formulations and pulmonary drug delivery systems for respiratory infections, Advanced drug delivery reviews 85 (2015) 83–99.
* [26] T. M. M Ways, W. M. Lau, V. V. Khutoryanskiy, Chitosan and its derivatives for application in mucoadhesive drug delivery systems, Polymers 10 (2018) 267.
* [27] C. Rodrigues, P. Gameiro, M. Prieto, B. de Castro, Interaction of rifampicin and isoniazid with large unilamellar liposomes: spectroscopic location studies, Biochimica et Biophysica Acta (BBA) - General Subjects 1620 (2003) 151–159.
* [28] G. Chimote, R. Banerjee, Evaluation of antitubercular drug insertion into preformed dipalmitoylphosphatidylcholine monolayers, Colloids and Surfaces B: Biointerfaces 62 (2008) 258–264.
* [29] M. Pinheiro, A. S. Silva, S. Pisco, S. Reis, Interactions of isoniazid with membrane models: Implications for drug mechanism of action, Chemistry and physics of lipids 183 (2014) 184–190.
* [30] X. Xu, M. A. Khan, D. J. Burgess, Predicting hydrophilic drug encapsulation inside unilamellar liposomes, International journal of pharmaceutics 423 (2012) 410–418.
* [31] H. S. Dhadwal, R. R. Ansari, W. V. Meyer, A fiber-optic probe for particle sizing in concentrated suspensions, Review of scientific instruments 62 (1991) 2963–2968.
* [32] D. E. Koppel, Analysis of macromolecular polydispersity in intensity correlation spectroscopy: the method of cumulants, The Journal of Chemical Physics 57 (1972) 4814–4820.
* [33] F. Li, R. Schafer, C.-T. Hwang, C. E. Tanner, S. T. Ruggiero, High-precision sizing of nanoparticles by laser transmission spectroscopy, Applied optics 49 (2010) 6602–6611.
* [34] A. De Marcellis, A. Sarra, G. D. P. Stanchieri, F. Bruni, F. Bordi, E. Palange, P. Postorino, Balanced laser transmission spectroscopy based on a tunable gain double channel LIA for nanoparticles detection in biomedical applications, in: 2019 IEEE Biomedical Circuits and Systems Conference (BioCAS), IEEE, 2019, pp. 1–4.
* [35] M. De Robertis, A. Sarra, V. D’Oria, F. Mura, F. Bordi, P. Postorino, D. Fratantonio, Blueberry-derived exosome-like nanoparticles counter the response to TNF-$\alpha$-induced change on gene expression in EA-hy926 cells, Biomolecules 10 (2020) 742.
* [36] C. Marianecci, F. Rinaldi, L. Di Marzio, A. Ciogli, S. Esposito, M. Carafa, Polysorbate 20 vesicles as multi-drug carriers: in vitro preliminary evaluations, Letters in Drug Design & Discovery 10 (2013) 212–218.
* [37] M. Shinitzky, Y. Barenholz, Fluidity parameters of lipid regions determined by fluorescence polarization, Biochimica et Biophysica Acta (BBA)-Reviews on Biomembranes 515 (1978) 367–394.
* [38] N. Barsoum, M. S. Kamel, M. M. Diab, Spectrophotometric determination of isoniazid and rifampicin from pharmaceutical preparations and biological fluids, Research Journal of Agricultural and Biological Sciences 4 (2008) 471–484.
* [39] B. J. Berne, R. Pecora, Dynamic Light scattering with application to chemistry, biology, and physics, Dover Publications, Inc, 2000.
* [40] N. Michel, A.-S. Fabiano, A. Polidori, R. Jack, B. Pucci, Determination of phase transition temperatures of lipids by light scattering, Chemistry and physics of lipids 139 (2006) 11–19.
* [41] G. Roberts, Langmuir-Blodgett Films, Plenum Press, New York, 1990.
* [42] F. Bordi, C. Cametti, S. Sennato, B. Paoli, C. Marianecci, Charge renormalization in planar and spherical charged lipidic aqueous interfaces, The Journal of Physical Chemistry B 110 (2006) 4808–4814.
* [43] O. Szekely, A. Steiner, P. Szekely, E. Amit, R. Asor, C. Tamburu, U. Raviv, The structure of ions and zwitterionic lipids regulates the charge of dipolar membranes, Langmuir 27 (2011) 7419–7438.
* [44] R. Zimmermann, D. Küttner, L. Renner, M. Kaufmann, J. Zitzmann, M. Müller, C. Werner, Charging and structure of zwitterionic supported bilayer lipid membranes studied by streaming current measurements, fluorescence microscopy, and attenuated total reflection fourier transform infrared spectroscopy, Biointerphases 4 (2009) 1–6.
* [45] N. J. Zuidam, H. M. E. Gouw, Y. Barenholz, D. J. Crommelin, Physical (in)stability of liposomes upon chemical hydrolysis: the role of lysophospholipids and fatty acids, Biochimica et Biophysica Acta (BBA)-Biomembranes 1240 (1995) 101–110.
* [46] Y. Okahata, H.-j. Lim, S. Hachiya, G.-i. Nakamura, Bilayer-coated capsule membranes. IV.: Control of NaCl permeability by phase transition of synthetic bilayer coatings, depending on their hydrophilic head groups, Journal of Membrane Science 19 (1984) 237–247.
* [47] H. Kitayama, Y. Takechi, N. Tamai, H. Matsuki, C. Yomota, H. Saito, Thermotropic phase behavior of hydrogenated soybean phosphatidylcholine–cholesterol binary liposome membrane, Chemical and Pharmaceutical Bulletin 62 (2014) 58–63.
* [48] T. Heimburg, A model for the lipid pretransition: coupling of ripple formation with the chain-melting transition, Biophysical journal 78 (2000) 1154–1165.
* [49] D. Marsh, A. Watts, P. Knowles, Cooperativity of the phase transition in single-and multibilayer lipid vesicles, Biochimica et Biophysica Acta (BBA)-Biomembranes 465 (1977) 500–514.
* [50] R. Malcolmson, J. Higinbotham, P. Beswick, P. Privat, L. Saunier, DSC of DMPC liposomes containing low concentrations of cholesteryl esters or cholesterol, Journal of membrane science 123 (1997) 243–253.
* [51] A. Blume, Biological calorimetry: membranes, Thermochimica Acta 193 (1991) 299–347.
* [52] G. S. Longo, M. Schick, I. Szleifer, Stability and liquid-liquid phase separation in mixed saturated lipid bilayers, Biophysical journal 96 (2009) 3977–3986.
* [53] A. Saboury, A. Moosavi-Movahedi, Clarification of calorimetric and van’t hoff enthalpies for evaluation of protein transition states, Biochemical Education 22 (1994) 210–211.
* [54] P. Losada-Pérez, N. Mertens, B. De Medio-Vasconcelos, E. Slenders, J. Leys, M. Peeters, B. Van Grinsven, J. Gruber, C. Glorieux, H. Pfeiffer, et al., Phase transitions of binary lipid mixtures: a combined study by adiabatic scanning calorimetry and quartz crystal microbalance with dissipation monitoring, Advances in Condensed Matter Physics 2015 (2015).
* [55] G. G. Putzel, M. Schick, Phase behavior of a model bilayer membrane with coupled leaves, Biophysical journal 94 (2008) 869–877.
* [56] A. V. Marques, P. M. T. Júnior, S. Marques, T. Brum, E. Harte, M. O. Rodrigues, M. G. Montes D′Oca, P. A. da Silva, A. R. Pohlmann, I. D. Alves, et al., Isoniazid interaction with phosphatidylcholine-based membranes, Journal of Molecular Structure 1051 (2013) 237–243.
* [57] G. Rajalakshmi, B. Devipriya, A. R. Parameswari, A. D. Stephen, P. Kumaradhas, Understanding the N-N bond cleavage and the electrostatic properties of isoniazid drug molecule via theoretical charge density study, Computational and Theoretical Chemistry 966 (2011) 259–264.
* [58] A. k. Pandey, A. Bajpai, V. Baboo, A. Dwivedi, Structural, electronic, and vibrational properties of isoniazid and its derivative n-cyclopentylidenepyridine-4-carbohydrazide: a quantum chemical study, Journal of Theoretical Chemistry 2014 (2014).
* [59] S. A. Kane, S. D. Floyd, Interaction of local anesthetics with phospholipids in langmuir monolayers, Physical Review E 62 (2000) 8400.
* [60] R. Cseh, R. Benz, Interaction of phloretin with lipid monolayers: relationship between structural changes and dipole potential change, Biophysical journal 77 (1999) 1477–1488.
* [61] A. Kõiv, P. Mustonen, P. K. Kinnunen, Differential scanning calorimetry study on the binding of nucleic acids to dimyristoylphosphatidylcholine-sphingosine liposomes, Chemistry and physics of lipids 70 (1994) 1–10.
* [62] P. A. Forsyth Jr, S. Marčelja, D. J. Mitchell, B. W. Ninham, Phase transition in charged lipid membranes, Biochimica et Biophysica Acta (BBA)-Biomembranes 469 (1977) 335–344.
* [63] S. Giatrellis, G. Nounesis, Nucleic acid-lipid membrane interactions studied by dsc, Journal of Pharmacy and Bioallied Sciences 3 (2011) 70.
* [64] C. Nunes, G. Brezesinski, D. Lopes, J. L. Lima, S. Reis, M. Lúcio, Lipid–drug interaction: biophysical effects of tolmetin on membrane mimetic systems of different dimensionality, The Journal of Physical Chemistry B 115 (2011) 12615–12623.
* [65] A. Iglič, P. A. Beales, Advances in Planar Lipid Bilayers and Liposomes, Vol. 20, Elsevier, 2014.
* [66] D. Harries, S. May, W. M. Gelbart, A. Ben-Shaul, Structure, stability, and thermodynamics of lamellar DNA-lipid complexes, Biophysical journal 75 (1998) 159–173.
* [67] R. Bruinsma, J. Mashl, Long-range electrostatic interaction in DNA-cationic lipid complexes, EPL (Europhysics Letters) 41 (1998) 165.
* [68] J. Sabın, G. Prieto, J. M. Ruso, R. Hidalgo-Alvarez, F. Sarmiento, Size and stability of liposomes: a possible role of hydration and osmotic forces, The European Physical Journal E 20 (2006) 401–408.
* [69] M. A. Wilson, A. Pohorille, Adsorption and Solvation of Ethanol at the Water Liquid−Vapor Interface: A Molecular Dynamics Study, The Journal of Physical Chemistry B 101 (1997) 3130–3135.
* [70] D. Marsh, Lateral pressure in membranes, Biochimica et Biophysica Acta (BBA) - Reviews on Biomembranes 1286 (1996) 183–223.
* [71] M. Dahim, N. K. Mizuno, X.-M. Li, W. E. Momsen, M. M. Momsen, H. L. Brockman, Physical and photophysical characterization of a BODIPY Phosphatidylcholine as a membrane probe, Biophysical Journal 83 (2002) 1511–1524.
* [72] F. A. Heberle, G. W. Feigenson, Phase separation in lipid membranes, Cold Spring Harbor perspectives in biology 3 (2011) a004630.
* [73] N. Saikia, R. C. Deka, Density functional study on the adsorption of the drug isoniazid onto pristine and B-doped single wall carbon nanotubes, Journal of molecular modeling 19 (2013) 215–226.
* [74] J. Lou, M. D. Best, Strategies for altering lipid self-assembly to trigger liposome cargo release, Chemistry and Physics of Lipids (2020) 104966.
* [75] M. Pinheiro, J. Magalhães, S. Reis, Antibiotic interactions using liposomes as model lipid membranes, Chemistry and physics of lipids 222 (2019) 36–46.
* [76] I. M. Le-Deygen, A. A. Skuredina, A. S. Safronova, I. D. Yakimov, I. M. Kolmogorov, D. M. Deygen, T. V. Burova, N. V. Grinberg, V. Y. Grinberg, E. V. Kudryashova, Moxifloxacin interacts with lipid bilayer, causing dramatic changes in its structure and phase transitions, Chemistry and Physics of Lipids 228 (2020) 104891.
# PyFstat: a Python package for continuous gravitational-wave data analysis
David Keitel Rodrigo Tenorio Gregory Ashton Reinhard Prix
DOI: https://doi.org/10.21105/joss.03000
Software • Review: https://github.com/openjournals/joss-reviews/issues/3000 •
Repository: https://github.com/pyfstat/pyfstat •
Archive: https://doi.org/10.5281/zenodo.4660591
Editor: Daniel S. Katz (https://danielskatz.org/)
Reviewers: @RobertRosca (https://github.com/RobertRosca), @khanx169
(https://github.com/khanx169)
Submitted: 26 January 2021. Published: 06 April 2021.
License: Authors of papers retain copyright and release the work under a
Creative Commons Attribution 4.0 International License (CC BY 4.0:
http://creativecommons.org/licenses/by/4.0/).
## Summary
Gravitational waves in the sensitivity band of ground-based detectors can be
emitted by a number of astrophysical sources, including not only binary
coalescences, but also individual spinning neutron stars. The most promising
signals from such sources, although as of 2020 not yet detected, are the long-
lasting, quasi-monochromatic ‘Continuous Waves’ (CWs). Many search methods
have been developed and applied on LIGO (Aasi et al. 2015) and Virgo (Acernese
et al. 2015) data. See Prix (2009), Riles (2017), and Sieniawska and Bejger
(2019) for reviews of the field.
The PyFstat package provides tools to perform a range of CW data analysis
tasks. It revolves around the $\mathcal{F}$-statistic, first introduced by
Jaranowski, Krolak, and Schutz (1998): a matched-filter detection statistic
for CW signals described by a set of frequency evolution parameters and
maximized over amplitude parameters. This has been one of the standard methods
for LIGO-Virgo CW searches for two decades. PyFstat is built on top of
established routines in LALSuite (LIGO Scientific Collaboration 2018) but
through its more modern Python interface it enables a flexible approach to
designing new search strategies.
Classes for various search strategies and target signals are contained in
three main submodules:
* •
core: The basic wrappers to LALSuite’s $\mathcal{F}$-statistic algorithm. End-
users should rarely need to access these directly.
* •
grid_based_searches: Classes to search over regular parameter-space grids.
* •
mcmc_based_searches: Classes to cover promising parameter-space regions
through stochastic template placement with the Markov Chain Monte Carlo (MCMC)
sampler ptemcee (Vousden, Farr, and Mandel 2015).
Besides standard CWs from isolated neutron stars, PyFstat can also be used to
search for CWs from sources in binary systems (including the additional
orbital parameters), for CWs with a discontinuity at a pulsar glitch, and for
CW-like long-duration transient signals, e.g., from _after_ a pulsar glitch.
Specialized versions of both grid-based and MCMC-based search classes are
provided for these scenarios. Both fully-coherent and semi-coherent searches
(where the data is split into several segments for efficiency) are covered,
and an extension to the $\mathcal{F}$-statistic that is more robust against
single-detector noise artifacts (Keitel et al. 2014) is also supported. While
PyFstat’s grid-based searches do not compete with the sophisticated grid
setups and semi-coherent algorithms implemented in various LALSuite programs,
its main scientific use cases so far are for the MCMC exploration of
interesting parameter-space regions and for the long-duration transient case.
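The fully-coherent/semi-coherent distinction can be caricatured in a few lines: cut the data into segments, evaluate a per-segment detection statistic, and sum over segments. The per-segment statistic below is a deliberately trivial stand-in (mean power), not the $\mathcal{F}$-statistic that PyFstat actually computes:

```python
def semicoherent_sum(data, n_segments):
    """Sum a per-segment 'detection statistic' over equal-length segments.
    The per-segment statistic here is just the mean power of the segment,
    a placeholder for the per-segment F-statistic."""
    seg_len = len(data) // n_segments
    return sum(
        sum(x * x for x in data[k * seg_len:(k + 1) * seg_len]) / seg_len
        for k in range(n_segments)
    )

# For toy "data" of unit power, the summed statistic equals n_segments:
noise = [(-1) ** i for i in range(100)]
print(semicoherent_sum(noise, 5))  # -> 5.0
```

The efficiency gain of real semi-coherent searches comes from the coarser template spacing each short segment allows, at the cost of some sensitivity relative to a fully-coherent analysis of the same data.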
PyFstat was first introduced in Ashton and Prix (2018), which remains the main
reference for the MCMC-based analysis implemented in the package. The
extension to transient signals, which uses PyCUDA (Klöckner et al. 2012) for
speedup, is discussed in detail in Keitel and Ashton (2018), and the glitch-
robust search approaches in Ashton, Prix, and Jones (2018).
Additional helper classes, utility functions, and internals are included for
handling the common Short Fourier Transform (SFT) data format for LIGO data,
simulating artificial data with noise and injected signals, and plotting
results and diagnostics. Most of the underlying LALSuite functionality is
accessed through SWIG wrappings (Wette 2020), though for some parts, such as
the SFT handling, we still (as of the writing of this paper) call stand-alone
lalapps executables. Completing the backend migration to pure SWIG usage is
planned for the future.
The source of PyFstat is hosted on GitHub
(https://github.com/PyFstat/PyFstat/). The repository also contains an
automated test suite and a set of introductory example scripts. Issues with
the software can be submitted through GitHub, and pull requests are always
welcome. PyFstat can be installed through pip, conda or Docker containers.
Documentation in html and pdf formats is available from readthedocs.org
(https://readthedocs.org/projects/pyfstat/), and installation instructions can
be found there or in the README file
(https://github.com/PyFstat/PyFstat/blob/master/README.md). PyFstat is also
listed in the Astrophysics Source Code Library as ascl:2102.027
(https://ascl.net/2102.027).
## Statement of need
The sensitivity of searches for CWs and long-duration transient GWs is
generally limited by computational resources, as the required number of
matched-filter templates increases steeply for long observation times and wide
parameter spaces. The C-based LALSuite library (LIGO Scientific Collaboration
2018) contains many sophisticated search methods with a long development
history and high level of optimization, but is not very accessible for
researchers new to the field or for students; nor is it convenient for rapid
development and integration with modern technologies like GPUs or machine
learning. Hence, PyFstat serves a dual function of (i) making LALSuite CW
functionality more easily accessible through a Python interface, thus
facilitating the new user experience and, for developers, the exploratory
implementation of novel methods; and (ii) providing a set of production-ready
search classes for use cases not yet covered by LALSuite itself, most notably
for MCMC-based followup of promising candidates from wide-parameter-space
searches.
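As a rough illustration of that scaling (schematic, order-of-magnitude template spacings only, not the metric-based spacings that LALSuite or PyFstat actually use): the frequency resolution of a coherent search shrinks as $1/T$ and the spin-down resolution as $1/T^2$, so the template count of an $(f,\dot f)$ search grows roughly as $T^3$:

```python
# Schematic scaling of matched-filter template counts with observation
# time T for a coherent (f, fdot) search. Spacings are order-of-magnitude
# placeholders, not metric-based grid spacings.

def template_count(T_obs_s, band_hz, fdot_band_hzs):
    df = 1.0 / (2.0 * T_obs_s)          # frequency resolution ~ 1/T
    dfdot = 1.0 / (2.0 * T_obs_s ** 2)  # spindown resolution ~ 1/T^2
    return (band_hz / df) * (fdot_band_hzs / dfdot)

# Doubling T multiplies the count by ~2**3 = 8:
n1 = template_count(10 * 86400, 0.1, 1e-9)
n2 = template_count(20 * 86400, 0.1, 1e-9)
print(n2 / n1)  # -> ~8
```

This steep growth is what makes wide-parameter-space searches computationally limited and motivates semi-coherent methods and stochastic (MCMC) template placement.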
So far, PyFstat has been used for
* •
the original proposal of MCMC followup for CW candidates (Ashton and Prix
2018);
* •
developing glitch-robust CW search methods (Ashton, Prix, and Jones 2018);
* •
speeding up long-transient searches with GPUs (Keitel and Ashton 2018);
* •
followup of candidates from all-sky searches for CWs from sources in binary
systems, see Covas and Sintes (2020) and Abbott et al. (2021);
* •
studying the impact of neutron star proper motions on CW searches (Covas
2020).
## Acknowledgements
We acknowledge contributions to the package from Karl Wette, Sylvia Zhu and
Dan Foreman-Mackey; as well as helpful suggestions by John T. Whelan, Luca
Rei, and the LIGO-Virgo-KAGRA Continuous Wave working group. D.K. and R.T. are
supported by European Union FEDER funds; the Spanish Ministerio de Ciencia,
Innovación y Universidades and Agencia Estatal de Investigación grants
PID2019-106416GB-I00/AEI/10.13039/501100011033, RED2018-102661-T,
RED2018-102573-E, FPA2017-90687-REDC, FPU 18/00694, and BEAGAL 18/00148
(cofinanced by the Universitat de les Illes Balears); the Comunitat Autonoma
de les Illes Balears through the Direcció General de Política Universitaria i
Recerca with funds from the Tourist Stay Tax Law ITS 2017-006 (PRD2018/24) and
the Conselleria de Fons Europeus, Universitat i Cultura; the Generalitat
Valenciana (PROMETEO/2019/071); and EU COST Actions CA18108, CA17137, CA16214,
and CA16104. This paper has been assigned document number LIGO-P2100008.
## References
Aasi, J., B. P. Abbott, R. Abbott, and others. 2015. “Advanced LIGO.” _Class.
Quant. Grav._ 32: 074001. https://doi.org/10.1088/0264-9381/32/7/074001.
Abbott, R., T. D. Abbott, S. Abraham, and others. 2021. “All-sky search in
early O3 LIGO data for continuous gravitational-wave signals from unknown
neutron stars in binary systems.” _Phys. Rev. D_ 103 (6): 064017.
https://doi.org/10.1103/PhysRevD.103.064017.
Acernese, F., M. Agathos, K. Agatsuma, and others. 2015. “Advanced Virgo: a
second-generation interferometric gravitational wave detector.” _Class. Quant.
Grav._ 32 (2): 024001. https://doi.org/10.1088/0264-9381/32/2/024001.
Ashton, Gregory, and Reinhard Prix. 2018. “Hierarchical multistage MCMC
follow-up of continuous gravitational wave candidates.” _Phys. Rev. D_ 97
(10): 103020. https://doi.org/10.1103/PhysRevD.97.103020.
Ashton, Gregory, Reinhard Prix, and D.I. Jones. 2018. “A semicoherent glitch-
robust continuous-gravitational-wave search method.” _Phys. Rev. D_ 98 (6):
063011. https://doi.org/10.1103/PhysRevD.98.063011.
Covas, P. B. 2020. “Effects of proper motion of neutron stars on continuous
gravitational-wave searches.” _Mon. Not. Roy. Astron. Soc._ 500 (4): 5167–76.
https://doi.org/10.1093/mnras/staa3624.
Covas, P. B., and Alicia M. Sintes. 2020. “First all-sky search for continuous
gravitational-wave signals from unknown neutron stars in binary systems using
Advanced LIGO data.” _Phys. Rev. Lett._ 124 (19): 191102.
https://doi.org/10.1103/PhysRevLett.124.191102.
Jaranowski, Piotr, Andrzej Krolak, and Bernard F. Schutz. 1998. “Data analysis
of gravitational - wave signals from spinning neutron stars. 1. The Signal and
its detection.” _Phys. Rev. D_ 58: 063001.
https://doi.org/10.1103/PhysRevD.58.063001.
Keitel, David, and Gregory Ashton. 2018. “Faster search for long
gravitational-wave transients: GPU implementation of the transient
$\mathcal{F}$-statistic.” _Class. Quant. Grav._ 35 (20): 205003.
https://doi.org/10.1088/1361-6382/aade34.
Keitel, David, Reinhard Prix, Maria Alessandra Papa, Paola Leaci, and Maham
Siddiqi. 2014. “Search for continuous gravitational waves: Improving
robustness versus instrumental artifacts.” _Phys. Rev. D_ 89 (6): 064023.
https://doi.org/10.1103/PhysRevD.89.064023.
Klöckner, Andreas, Nicolas Pinto, Yunsup Lee, B. Catanzaro, Paul Ivanov, and
Ahmed Fasih. 2012. “PyCUDA and PyOpenCL: A Scripting-Based Approach to GPU
Run-Time Code Generation.” _Parallel Computing_ 38 (3): 157–74.
https://doi.org/10.1016/j.parco.2011.09.001.
LIGO Scientific Collaboration. 2018. “LIGO Algorithm Library - LALSuite.” free
software (GPL). https://doi.org/10.7935/GT1W-FZ16.
Prix, Reinhard. 2009. “Gravitational Waves from Spinning Neutron Stars.” In
_Neutron Stars and Pulsars_ , edited by Werner Becker, 357:651–85. Astrophys.
Space Sci. Lib. Berlin Heidelberg: Springer.
https://doi.org/10.1007/978-3-540-76965-1_24.
Riles, Keith. 2017. “Recent searches for continuous gravitational waves.”
_Mod. Phys. Lett. A_ 32 (39): 1730035.
https://doi.org/10.1142/S021773231730035X.
Sieniawska, Magdalena, and Michał Bejger. 2019. “Continuous gravitational
waves from neutron stars: current status and prospects.” _Universe_ 5 (11):
217. https://doi.org/10.3390/universe5110217.
Vousden, W. D., W. M. Farr, and I. Mandel. 2015. “Dynamic temperature
selection for parallel tempering in Markov chain Monte Carlo simulations.”
_Mon. Not. Roy. Astron. Soc._ 455 (2): 1919–37.
https://doi.org/10.1093/mnras/stv2422.
Wette, Karl. 2020. “SWIGLAL: Python and Octave interfaces to the LALSuite
gravitational-wave data analysis libraries.” _SoftwareX_ 12: 100634.
https://doi.org/10.1016/j.softx.2020.100634.
|
# New upper bounds for $(b,k)$-hashing
Stefano Della Fiore, Simone Costa, Marco Dalai Department of Information
Engineering, University of Brescia
{s.dellafiore001, simone.costa<EMAIL_ADDRESS>
###### Abstract
For fixed integers $b\geq k$, the problem of perfect $(b,k)$-hashing asks for
the asymptotic growth of largest subsets of $\\{1,2,\ldots,b\\}^{n}$ such that
for any $k$ distinct elements in the set, there is a coordinate where they all
differ.
An important asymptotic upper bound for general $b,k$ was derived by Fredman
and Komlós in the ’80s and improved for certain $b\neq k$ by Körner and Marton
and by Arikan. Only very recently better bounds were derived for the general
$b,k$ case by Guruswami and Riazanov, while stronger results for small values
of $b=k$ were obtained by Arikan, by Dalai, Guruswami and Radhakrishnan and by
Costa and Dalai.
In this paper, we both show how some of the latter results extend to $b\neq k$
and further strengthen the bounds for some specific small values of $b$ and
$k$. The method we use, which depends on the reduction of an optimization
problem to a finite number of cases, shows that further results might be
obtained by refined arguments at the expense of higher complexity.
###### Index Terms:
perfect hashing, list decoding, zero-error capacity
## I Introduction
Figure 1: A $4/2$ channel. Edges represent positive probabilities. Here, zero-
error communication is possible when decoding with list-size equal to $2$.
Let $b$, $k$ and $n$ be integers, with $b\geq k$, and let $\mathcal{C}$ be a
subset of $\\{1,2,\ldots,b\\}^{n}$ with the property that for any $k$ distinct
elements we can find a coordinate where they all differ. Such a set can be
interpreted, by looking at it coordinate-wise, as a family of $n$ hashing
functions on some universe of size $|\mathcal{C}|$. The required property then
says that the family is a perfect hash family, that is, any $k$ elements in
the universe are $k$-partitioned by at least one function. Alternatively
$\mathcal{C}$ can be interpreted as a code of rate
$\frac{1}{n}\log|\mathcal{C}|$ for communication over a channel with $b$
inputs. Assume that the channel is a $b/(k-1)$ channel, meaning that any
$k-1$ of the $b$ inputs share one output but no $k$ distinct inputs do (see
Figure 1). The required property for $\mathcal{C}$ is what is needed for the
code to be a zero-error code when list decoding with list-size $k-1$ is
allowed. We refer the reader to [8], [9], [13], [14] and [4] for an overview
of the more general context of this problem.
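As an aside (an illustrative sketch, not from the paper), the defining property can be checked directly by brute force; this is feasible only for tiny codes, since it enumerates all $\binom{|\mathcal{C}|}{k}$ subsets of codewords:

```python
from itertools import combinations

def is_hash_code(C, k):
    """Check whether the list of words C is a (b,k)-hash code:
    every k distinct words must share at least one coordinate at
    which all k symbols are pairwise different."""
    n = len(C[0])
    for words in combinations(C, k):
        if not any(len({w[i] for w in words}) == k for i in range(n)):
            return False
    return True

# The three "diagonal" words over {0,1,2}^2 are 3-partitioned at coordinate 0 ...
assert is_hash_code([(0, 0), (1, 1), (2, 2)], k=3)
# ... but adding (0,1) creates a triple that no coordinate 3-partitions.
assert not is_hash_code([(0, 0), (1, 1), (2, 2), (0, 1)], k=3)
```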
We will call any subset $\mathcal{C}$ of $\\{1,2,\ldots,b\\}^{n}$ with the
described property a $(b,k)$-hash code. For the reasons mentioned above,
bounding the size of $(b,k)$-hash codes is a combinatorial problem which has
been of interest both in computer science and information theory. It is known
that $(b,k)$-hash codes of exponential size in $n$ can be constructed and the
quantity of interest is usually the rate of such codes. We will thus study the
quantity
$R_{(b,k)}=\limsup_{n\to\infty}\frac{1}{n}\log|\mathcal{C}_{n}|\,,$ (1)
where the $\mathcal{C}_{n}$ are $(b,k)$-hash codes of length $n$ with maximal
rate. Note that, throughout, all logarithms are to base 2. Few lower bounds on
$R_{(b,k)}$ are known. First results in this sense were given by [9], [8] and
a better bound was derived in [12] for $(b,k)=(3,3)$. More recently, new lower
bounds were derived in [16] for infinitely many other values of $k$. The
first, landmark result concerning upper bounds was obtained by Fredman and
Komlós [9], who showed that
$R_{(b,k)}\leq\frac{b^{\underline{k-1}}}{b^{k-1}}\log(b-k+2)\,,$ (2)
where $b^{\underline{k-1}}=b(b-1)\cdots(b-k+2)$. Progress has since been
rare. A generalization of the bound given in equation (2) was derived by
Körner and Marton [12] in the form
$R_{(b,k)}\leq\min_{0\leq j\leq
k-2}\frac{b^{\underline{j+1}}}{b^{j+1}}\log\frac{b-j}{k-j-1}\,.$ (3)
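Evaluating this expression is straightforward. The throwaway sketch below (illustrative, not from the paper) minimizes over $0\leq j\leq k-2$, where the $j=0$ term is the elementary bound $\log\frac{b}{k-1}$; this reproduces the values and minimizing $j$ reported in the [12] column of Table I:

```python
import math

def falling(b, m):
    """Falling factorial b(b-1)...(b-m+1)."""
    out = 1
    for t in range(m):
        out *= b - t
    return out

def km_bound(b, k):
    """Körner-Marton-type bound: minimize over j the quantity
    (b^{underline{j+1}} / b^{j+1}) * log2((b-j)/(k-j-1)).
    Returns (value, minimizing j)."""
    return min(
        (falling(b, j + 1) / b ** (j + 1) * math.log2((b - j) / (k - j - 1)), j)
        for j in range(0, k - 1)
    )

val, j = km_bound(5, 5)
assert j == 3 and abs(val - 0.192) < 1e-9        # Table I: 0.19200(3)
val, j = km_bound(8, 6)
assert j == 4 and abs(val - 0.41015625) < 1e-9   # Table I: 0.41016(4)
```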
This was further improved for different values of $b$ and $k$ by Arikan [3].
In the case $b=k$, an improvement was first obtained for $k=4$ in [2] and then
in [6], [7]. It was proved only recently in [10] that the Fredman-Komlós bound
is not tight for any $k>3$; explicit better values were given there for
$k=5,6$, and for larger $k$ modulo a conjecture which is proved in [5], where
further improvements are also obtained for $k=5,6$.
In this paper, we develop a new strategy to attack some of the cases which
appear not to be optimally handled by those methods, obtaining new bounds for
$b=k=5,\ldots,8$. Furthermore, we also show that our procedure improves on the
existing literature for some $b\neq k$ cases, among which for example
$(b,k)=(6,5)$, $(9,8)$, $(10,9)$, $(11,10)$. In order to evaluate in a fair
way these $b\neq k$ cases, we first analyze the results (not derived in the
referenced papers) which are obtained when the methods of [6] and [5] are
extended to $b\neq k$, and compare them with the ones of [12], [3] and [10].
The generalization of the procedure used in [6] is rather easy (the
interested reader will find, upon inspection of the proof of Theorem 3 in
[6], that modulo using a hypergraph version of the Hansel Lemma, the only new
condition to check is that the upper bound given in (4) is greater than
$\log\frac{2b-2}{2b-3}$ for every $b\geq k\geq 4$), and it provides us with
the following bound
$R_{(b,k)}\leq\left(\frac{1}{\log
b}+\frac{b^{2}}{(b^{2}-3b+2)\log\frac{b-2}{k-3}}\right)^{-1}.$ (4)
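For concreteness, a throwaway evaluation of (4) (an illustrative sketch, not part of the paper) reproduces, for instance, the $(5,4)$ and $(6,4)$ entries of the [6]* column in Table I:

```python
import math

def bound4(b, k):
    """The generalized [6]-type bound of equation (4); logs are base 2."""
    return 1 / (1 / math.log2(b)
                + b ** 2 / ((b ** 2 - 3 * b + 2) * math.log2((b - 2) / (k - 3))))

assert round(bound4(5, 4), 5) == 0.57303   # Table I, [6]* column
assert round(bound4(6, 4), 5) == 0.77709   # Table I, [6]* column
```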
In Table I we give a comparison between the bounds (4) and (3), the bounds
from [3] and [10] and the generalized bound from [5] for different values of
$b$ and $k$. The integers in the parentheses for the bound (3) represent the
minimizing $j$; a parameter $j$ with the same role is involved in the other
bounds and it will be discussed later. For the bounds of [5], [3] and [10] it
is equal to $k-2$, while for the bound of [6] it is equal to $2$.
In Table II we compare our new bounds with the best known bounds for
$b=k=5,\ldots,8$ and for $(b,k)=(6,5)$, $(9,8)$, $(10,9)$, $(11,10)$.
TABLE I: Upper bounds on $R_{(b,k)}$. All numbers are rounded upwards.
$(b,k)$ | [5]* | [6]* | [3] | [10] | [12]
---|---|---|---|---|---
$(5,4)$ | 0.66126 | 0.57303 | 0.61142 | 0.74834 | 0.73697(0)
$(6,4)$ | 0.87963 | 0.77709 | 0.83904 | 1.09604 | 1.00000(0)
$(7,4)$ | 1.03711 | 0.94372 | 1.02931 | 1.40593 | 1.22239(0)
$(5,5)$ | 0.16964 | 0.25050 | 0.23560 | 0.19079 | 0.19200(3)
$(6,5)$ | 0.34597 | 0.45728 | 0.44149 | 0.43207 | 0.44027(3)
$(6,6)$ | 0.08760 | 0.21170 | 0.15484 | 0.09228 | 0.09260(4)
$(7,6)$ | 0.19897 | 0.38873 | 0.30554 | 0.23524 | 0.23765(4)
$(8,6)$ | 0.31799 | 0.53847 | 0.44888 | 0.40330 | 0.41016(4)
$(7,7)$ | 0.04379 | 0.18417 | 0.09747 | 0.04279 | 0.04284(5)
$(8,7)$ | 0.10865 | 0.34034 | 0.20340 | 0.12134 | 0.12189(5)
$(9,7)$ | 0.19054 | 0.47461 | 0.31204 | 0.22547 | 0.22761(5)
$(8,8)$ | 0.02077 | 0.16323 | 0.05769 | 0.01922 | 0.01923(6)
$(9,8)$ | 0.05686 | 0.30348 | 0.12874 | 0.06001 | 0.06013(6)
$(10,8)$ | 0.10791 | 0.42566 | 0.20754 | 0.12048 | 0.12096(6)
$(10,9)$ | 0.02889 | 0.27417 | 0.07668 | 0.02874 | 0.02876(7)
$(11,10)$ | 0.01407 | 0.25018 | 0.04289 | 0.01342 | 0.01343(8)
$*$ The generalized bound for the $(b,k)$ case
TABLE II: Upper bounds on $R_{(b,k)}$. All numbers are rounded upwards.
$(b,k)$ | This work | [5] | [6] | [3] | [10]
---|---|---|---|---|---
$(5,5)$ | 0.16894 | 0.16964 | 0.25050 | 0.23560 | 0.19079
$(6,5)$ | 0.34512 | 0.34597 | 0.45728 | 0.44149 | 0.43207
$(6,6)$ | 0.08475 | 0.08760 | 0.21170 | 0.15484 | 0.09228
$(7,7)$ | 0.04090 | 0.04379 | 0.18417 | 0.09747 | 0.04279
$(8,8)$ | 0.01889 | 0.02077 | 0.16323 | 0.05769 | 0.01922
$(9,8)$ | 0.05616 | 0.05686 | 0.30348 | 0.12874 | 0.06001
$(10,9)$ | 0.02773 | 0.02889 | 0.27417 | 0.07668 | 0.02874
$(11,10)$ | 0.01321 | 0.01407 | 0.25018 | 0.04289 | 0.01342
The paper is structured as follows. In Section II we give the general
structure of the method used in the mentioned recent series of works to find
upper bounds using the hypergraph version of Hansel's lemma. In Section III
we present the main new ingredient of this paper, which is a way to improve
the bounds derived in [5] by means of a more careful analysis of a quadratic
form that was also an object of that study. In Section IV, we show how this
idea can be effectively implemented after an appropriate reduction of the
problem to a list of cases that can be studied exhaustively.
## II Structure of the General Method
The best upper bounds on $R_{(b,k)}$ available in the literature can all be
seen as different applications of a central idea, which is the study of
$(b,k)$-hashing by comparison with combinations of binary partitions. This
main line of approach to the problem comes from the original work of Fredman
and Komlós [9]. A clear and productive formulation of the idea was given by
Radhakrishnan in terms of Hansel's lemma [15], which remained the main tool
used in all recent results [7], [10] and [5]. We state the Lemma here and
briefly revise for the reader's convenience how this was applied in those works.
###### Lemma 1 (Hansel for Hypergraphs [11], [14])
Let $K_{r}^{d}$ be a complete $d$-uniform hypergraph on $r$ vertices and let
$G_{1},\ldots,G_{m}$ be $c$-partite $d$-uniform hypergraphs on those same
vertices such that $\cup_{i}G_{i}=K_{r}^{d}$. Let $\tau(G_{i})$ be the number
of non-isolated vertices in $G_{i}$. Then
$\log\frac{c}{d-1}\sum_{i=1}^{m}\tau(G_{i})\geq\log\frac{r}{d-1}\,.$ (5)
The application to $(b,k)$-hashing relies on the following observation. Given
a $(b,k)$-hash code $C$, fix any $j$ elements $x_{1},x_{2},\ldots,x_{j}$ in
$C$, with $j=2,\ldots,k-2$. For any coordinate $i$ let
$G_{i}^{x_{1},\ldots,x_{j}}$ be the $(b-j)$-partite $(k-j)$-uniform hypergraph
with vertex set $C\setminus\{x_{1},x_{2},\ldots,x_{j}\}$ and edge set
$E=\big\{(y_{1},\ldots,y_{k-j}):x_{1,i},\ldots,x_{j,i},y_{1,i},\ldots,y_{k-j,i}\mbox{ are all distinct}\big\}\,.$ (6)
Since $C$ is a $(b,k)$-hash code, $\bigcup_{i}G_{i}^{x_{1},\ldots,x_{j}}$
is the complete $(k-j)$-uniform hypergraph on
$C\setminus\{x_{1},x_{2},\ldots,x_{j}\}$ and so
$\log\frac{b-j}{k-j-1}\sum_{i=1}^{n}\tau(G_{i}^{x_{1},\ldots,x_{j}})\geq\log\frac{|C|-j}{k-j-1}\,.$
(7)
This inequality allows one to upper bound $|C|$ by upper bounding the left
hand side. Inequality (7) holds for any choice of $x_{1},x_{2},\ldots,x_{j}$,
so the main goal is proving that the left hand side is not too large for all
possible choices of $x_{1},x_{2},\ldots,x_{j}$. The choice can be
deterministic or we can take the expectation over any random selection.
Note that if the $x_{1,i},x_{2,i},\ldots,x_{j,i}$ are not all distinct (let us
say that they “collide”) then the hypergraph in (6) is empty, that is the
corresponding $\tau$ in the left hand side of (7) is zero. So, using codewords
$x_{1},x_{2},\ldots,x_{j}$ which collide in many coordinates helps in upper
bounding $|\mathcal{C}|$. On the other hand, in a coordinate $i$ where the
codewords do _not_ collide, $\tau(G_{i}^{x_{1},\ldots,x_{j}})$ depends on what
fraction of the code uses the remaining $b-j$ symbols of the alphabet. This
can be made small “on average” if $x_{1},\ldots,x_{j}$ are picked randomly.
More precisely, let $f_{i}$ be the probability distribution of the $i$-th
coordinate of $C$, that is, $f_{i,a}$ is the fraction of elements of $C$ whose
$i$-th coordinate is $a$. Then, we have
$\tau(G_{i}^{x_{1},\ldots,x_{j}})=\begin{cases}0&x_{1},\ldots,x_{j}\mbox{ collide in coordinate }i\\ \left(\frac{|C|}{|C|-j}\right)\left(1-\sum_{h=1}^{j}f_{i,x_{hi}}\right)&\mbox{otherwise}\end{cases}\,.$ (8)
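Setting aside normalization issues, the counting behind the non-colliding case of (8) is elementary and can be verified exactly on a toy example (a throwaway sketch; the code, seed and parameters below are illustrative, not from the paper):

```python
import itertools
import random

random.seed(0)
b, n, j = 5, 4, 2
# a toy code: 60 distinct words over {0,...,b-1}^n (just words, not a hash code)
C = random.sample(list(itertools.product(range(b), repeat=n)), 60)

i = 0  # coordinate under consideration
# empirical distribution f_i of the i-th coordinate over the code
f = [sum(1 for c in C if c[i] == a) / len(C) for a in range(b)]

# pick j codewords that do NOT collide at coordinate i
xs = next(t for t in itertools.combinations(C, j)
          if len({w[i] for w in t}) == j)
S = {w[i] for w in xs}

# fraction of the remaining words whose i-th symbol avoids x_{1,i},...,x_{j,i}
direct = sum(1 for y in C if y not in xs and y[i] not in S) / (len(C) - j)
# the closed form appearing in the second case of (8)
closed = (len(C) / (len(C) - j)) * (1 - sum(f[a] for a in S))

assert abs(direct - closed) < 1e-12
```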
So, one can make the left hand side in (7) small by using $x_{1},\ldots,x_{j}$
which collide in many coordinates and at the same time have in the remaining
coordinates symbols $x_{hi}$ for which the $f_{i,x_{hi}}$ are not too small.
This can be obtained “on average” if $x_{1},\ldots,x_{j}$ are picked in some
random way over the code, since this will force values with large
$f_{i,x_{hi}}$ to appear frequently as the $i$-th coordinate in some of the
$x_{1},\ldots,x_{j}$. There are different ways to turn this into a precise
argument to bound the left hand side of (7). We refer the reader to [5] for a
detailed discussion, and we only discuss here the procedure as used there,
since it is the base for our current contribution.
The idea is to partition the code $\mathcal{C}$ in subcodes
$\mathcal{C}_{\omega}$, $\omega\in\Omega$. The only requirement is that each
subcode has size which grows unbounded with $n$ and uses in any of its first
$\ell$ coordinates only $(j-1)$ symbols. It can be shown, by an easy extension
of the method used for the case $b=k$ and $j=k-2$ in [5], that if the original
code has rate $R$, then for any $\epsilon>0$ one can do this with a choice of
$\ell=n(R-\epsilon)/\log\left(\frac{b}{j-1}\right)$ for $n$ large enough.
Given such a partition of our code, if we select codewords
$x_{1},\ldots,x_{j}$ within the same subcode $\mathcal{C}_{\omega}$, they will
collide in the first $\ell$ coordinates and the corresponding contribution to
the l.h.s. of (7) will be zero. We then add the randomization. We pick
randomly one of the subcodes $\mathcal{C}_{\omega}$ and randomly select the
codewords $x_{1},\ldots,x_{j}$ within $\mathcal{C}_{\omega}$. We then upper
bound the expected value of the left hand side of (7) under this random
selection to obtain an upper bound on $|\mathcal{C}|$, that is
$\log\frac{|C|-j}{k-j-1}\leq\log\frac{b-j}{k-j-1}\,\mathbb{E}_{\omega}\Big[\mathbb{E}\Big[\sum_{i\in[\ell+1,n]}\tau(G_{i}^{x_{1},x_{2},\dots,x_{j}})\,\Big|\,\omega\Big]\Big]=\log\frac{b-j}{k-j-1}\sum_{i\in[\ell+1,n]}\mathbb{E}_{\omega}\big[\mathbb{E}[\tau(G_{i}^{x_{1},x_{2},\dots,x_{j}})\,|\,\omega]\big]\,.$
(9)
Here, each subcode $\mathcal{C}_{\omega}$ is taken with probability
$\lambda_{\omega}=|\mathcal{C}_{\omega}|/|\mathcal{C}|$, and
$x_{1},\ldots,x_{j}$ are taken uniformly at random (without repetitions) from
$\mathcal{C}_{\omega}$.
As mentioned before, let $f_{i}$ be the probability distribution of the $i$-th
coordinate of $C$, and let instead $f_{i|\omega}$ be the distribution of the
$i$-th coordinate of the subcode $C_{\omega}$ (with components, say,
$f_{i,a|\omega}$). Then, for $i>\ell$, we can write
$\mathbb{E}[\tau(G_{i}^{x_{1},\ldots,x_{j}})\,|\,\omega]=\left(1+o(1)\right)\sum_{\substack{a_{1},\ldots,a_{j}\\ \text{distinct}}}f_{i,a_{1}|\omega}f_{i,a_{2}|\omega}\cdots f_{i,a_{j}|\omega}(1-f_{i,a_{1}}-\cdots-f_{i,a_{j}})$ (10)
where the $o(1)$ is meant as $n\to\infty$ and is due, under the assumption
that $C_{\omega}$ grows unbounded with $n$, to sampling without replacement
within $C_{\omega}$. Now, since
$\lambda_{\omega}=|\mathcal{C}_{\omega}|/|\mathcal{C}|$, $f_{i}$ is actually
the expectation of $f_{i|\omega}$ over the random $\omega$, that is, using a
different dummy variable $\mu$ to index the subcodes for convenience,
$f_{i}=\sum_{\mu}\lambda_{\mu}f_{i|\mu}\,.$
Using this in (10), one notices that when taking the further expectation over
$\omega$ it is possible to operate a symmetrization in $\omega$ and $\mu$. If
we denote by $\Psi$ the polynomial function defined for two probability
distributions $p=(p_{1},p_{2},\dots,p_{b})$ and $q=(q_{1},q_{2},\dots,q_{b})$
as
$\Psi(p,q)=\frac{1}{(b-j-1)!}\sum_{\sigma\in S_{b}}\big(p_{\sigma(1)}p_{\sigma(2)}\dots p_{\sigma(j)}q_{\sigma(j+1)}+q_{\sigma(1)}q_{\sigma(2)}\dots q_{\sigma(j)}p_{\sigma(j+1)}\big)\,,$ (11)
Then the expectation of (10) over $\omega$ can be written as
$\displaystyle\mathbb{E}[\tau(G_{i}^{x_{1},x_{2},\dots,x_{j}})]=\left(1+o(1)\right)\frac{1}{2}\sum_{\omega,\mu\in\Omega}\lambda_{\omega}\lambda_{\mu}\Psi(f_{i|\omega},f_{i|\mu}).$
(13)
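Since $\Psi$ is a plain polynomial, it can be evaluated numerically by brute force over permutations (an illustrative sketch, not from the paper). For $b=6$ and $j=4$ one recovers, for instance, the values $5/27\approx 0.185185$ at the uniform pair and $0.192$ at the pair $(1,0,\ldots,0;0,\frac{1}{5},\ldots,\frac{1}{5})$, which both reappear in Section IV:

```python
import math
from itertools import permutations

def Psi(p, q, j):
    """Brute-force evaluation of the symmetric kernel of equation (11)."""
    b = len(p)
    tot = 0.0
    for sig in permutations(range(b)):
        pp = qq = 1.0
        for t in range(j):
            pp *= p[sig[t]]
            qq *= q[sig[t]]
        tot += pp * q[sig[j]] + qq * p[sig[j]]
    return tot / math.factorial(b - j - 1)

b, j = 6, 4
u = [1 / b] * b
assert abs(Psi(u, u, j) - 5 / 27) < 1e-9        # uniform pair, = 0.185185...
e1 = [1] + [0] * (b - 1)
v = [0] + [1 / (b - 1)] * (b - 1)
assert abs(Psi(e1, v, j) - 0.192) < 1e-9        # the opposing unbalanced pair
```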
In [5], the global maximum of the function $\Psi(p,q)$, over arbitrary
distributions $p$ and $q$, say
$\Psi_{\max}=\max_{p,q}\Psi(p,q)\,,$ (14)
was used to deduce the inequality, valid for any $i>\ell$,
$\mathbb{E}[\tau(G_{i}^{x_{1},x_{2},\dots,x_{j}})]\leq(1+o(1))\frac{1}{2}\Psi_{\max}\,.$
(15)
Then
$\log{|C|}\leq(1+o(1))\frac{1}{2}(n-\ell)\Psi_{\max}\log\frac{b-j}{k-j-1}\,,$
(16)
from which, using the value of $\ell$ described above, one deduces
$\displaystyle
R\leq(1+o(1))\frac{1}{2}\left[1-\frac{R}{\log\left(\frac{b}{j-1}\right)}\right]\Psi_{\max}\log\frac{b-j}{k-j-1}.$
This gives the explicit bound
$\displaystyle
R_{(b,k)}\leq\frac{1}{\frac{2}{\Psi_{\max}\log\frac{b-j}{k-j-1}}+\frac{1}{\log\left(\frac{b}{j-1}\right)}}\,.$
(17)
A weakness in this bound comes from the fact that distributions $p$ and $q$
that maximize $\Psi(p,q)$ could exhibit some opposing asymmetries, in the
sense that they give higher probabilities to different symbols. When used as a
replacement for _each_ of the pairs of $f_{i|\omega}$ and $f_{i|\mu}$ in (13),
we have a rather conservative bound, because pairs $(p,q)$ which give high
values for $\Psi(p,q)$ will give low values for $\Psi(p;p)$ and $\Psi(q;q)$,
and equation (13) contains a weighted contribution from all pairings of
$f_{i|\omega}$ and $f_{i|\mu}$. In other words, observed that (13) is a
quadratic form in the distribution $\lambda$ with kernel $\Psi(p,q)$, if the
kernel has maximum value $\Psi_{\max}$ in some off-diagonal $(p,q)$-positions
to which there correspond small “in-diagonal” values at $(p,p)$ and $(q,q)$,
then using $\Psi_{\max}$ as a bound for the whole quadratic form can be quite
a conservative approach.
In this paper, we approach (13) more carefully by clustering the possible
distributions $f_{i|\omega}$ in different groups depending on how balanced or
unbalanced they are, and bounding $\Psi(f_{i|\omega},f_{i|\mu})$ for
$f_{i|\omega}$ and $f_{i|\mu}$ in those different groups. From this, we deduce
a bound on the quadratic form. Note that since in the problem under
consideration (that is, as $n\to\infty$) we have no limit in the granularity
of the distributions $f_{i|\omega}$, the quadratic form that we have to bound
might in principle have a limiting value which is only achieved with a
continuous distribution $\lambda$ over the simplex of $b$-dimensional
distributions $\mathcal{P}_{b}$. Still, once we consider a finite number of
clusters $r$ for the distributions $f_{i|\omega}$, our quadratic form is upper
bounded by a corresponding $r$-dimensional one. In our derivation, we will use
$b+1$ clusters with some symmetric structure which allows us to further reduce
the complexity to an equivalent four-dimensional form and then to a quadratic
in a single variable.
## III Bounding the quadratic form
Based on the discussion in the previous Section, we now enter the problem of
determining better upper bounds on the right hand side of (13). We simplify
here the notation and consider the quadratic form
$\sum_{p,q}\lambda_{p}\lambda_{q}\Psi(p,q)$ (18)
where $p$ and $q$ run over an arbitrary finite set of points in the simplex
$\mathcal{P}_{b}$ of $b$-dimensional probability distributions and $\lambda$ is
a probability distribution over such set. We consider partitions of
$\mathcal{P}_{b}$ in disjoint subsets to find upper bounds on the quadratic
form (18) in terms of simpler ones. If we have a partition
$\\{\mathcal{P}_{b}^{0},\mathcal{P}_{b}^{1},\ldots,\mathcal{P}_{b}^{r}\\}$ of
$\mathcal{P}_{b}$ and we define
$m_{i,h}=\sup_{p\in\mathcal{P}_{b}^{i},q\in\mathcal{P}_{b}^{h}}\Psi(p,q)\,,\qquad\eta_{i}=\sum_{p\in\mathcal{P}_{b}^{i}}\lambda_{p}\,,$
then clearly
$\displaystyle\sum_{p,q}\lambda_{p}\lambda_{q}\Psi(p,q)$
$\displaystyle\leq\sum_{i,h}\sum_{p\in\mathcal{P}_{b}^{i}}\sum_{q\in\mathcal{P}_{b}^{h}}\lambda_{p}\lambda_{q}m_{i,h}$
$\displaystyle\leq\sum_{i,h}\eta_{i}\eta_{h}m_{i,h}\,.$ (19)
This is a convenient simplification since we have now an $r$-dimensional
problem which we might be able to deal with in some computationally feasible
way. We will use this procedure with two different partitions in terms of how
balanced or unbalanced the distributions are. We take $b+1$ subsets with some
symmetry which allows us to further reduce the complexity.
Partition based on maximum value. We first consider a partition of
$\mathcal{P}_{b}$ in terms of the largest probability value which appears in a
distribution. We use a parameter $\epsilon<1/(b-1)$; all quantities will
depend on $\epsilon$ but we do not write this in order to avoid cluttering the
notation. We define $b$ sets of unbalanced distributions
$\widecheck{\mathcal{P}}_{b}^{i}=\left\\{p\in\mathcal{P}_{b}:p_{i}>1-\epsilon\right\\}\,$
for every $1\leq i\leq b$, and correspondingly a set of balanced distributions
$\widecheck{\mathcal{P}}_{b}^{0}=\left\\{p\in\mathcal{P}_{b}:p_{i}\leq
1-\epsilon\ \forall i\right\\}\,.$
Note that these are all disjoint sets since $\epsilon<1/(b-1)$. Following the
scheme mentioned above, we can consider the values $m_{i,h}$ and $\eta_{i}$
for this specific partition. However, due to symmetry, the values $m_{i,h}$
can be reduced to only four cases, depending on whether $p$ and $q$ are both
balanced, one balanced and one unbalanced, or both unbalanced, either on the
same coordinate or on different coordinates.
Assuming $1\leq i,h\leq b$ with $i\neq h$, the following quantities are then
well defined and independent of the specific values chosen for $i$ and $h$:
$\widecheck{M}_{1}=\sup_{p,q\in\widecheck{\mathcal{P}}_{b}^{0}}\Psi(p,q)\,,\qquad\widecheck{M}_{2}=\sup_{p\in\widecheck{\mathcal{P}}_{b}^{0},q\in\widecheck{\mathcal{P}}_{b}^{i}}\Psi(p,q)\,,\qquad\widecheck{M}_{3}=\sup_{p,q\in\widecheck{\mathcal{P}}_{b}^{i}}\Psi(p,q)\,,\qquad\widecheck{M}_{4}=\sup_{p\in\widecheck{\mathcal{P}}_{b}^{i},q\in\widecheck{\mathcal{P}}_{b}^{h}}\Psi(p,q)\,.$
(20)
These values can then be used in (19) in place of the values $m_{i,h}$.
Partition based on the minimum value. We also consider a partition of
$\mathcal{P}_{b}$ using constraints from below. Again we use a parameter
$\epsilon$ which will be then tuned. We assume here $\epsilon<1/b$. Consider
now the following disjoint sets of unbalanced distributions
$\widehat{\mathcal{P}}_{b}^{i}=\left\\{p\in\mathcal{P}_{b}:p_{i}<\epsilon\,,p_{h}\geq
p_{i}\ \forall h\,,p_{h}>p_{i}\ \forall h<i\right\\}\,$
for $1\leq i\leq b$, that is, distributions in $\widehat{\mathcal{P}}_{b}^{i}$
have a minimum component in the $i$-th coordinate, which is smaller than
$\epsilon$, and strictly smaller than any of the preceding components (unless
of course $i=1$). Correspondingly, define a set of balanced distributions as
$\widehat{\mathcal{P}}_{b}^{0}=\left\\{p\in\mathcal{P}_{b}:p_{i}\geq\epsilon\
\forall i\right\\}\,.$
The symmetry argument mentioned before also applies in this case and we can
continue in analogy replacing the $m_{i,h}$ of (19) with the following
quantities
$\widehat{M}_{1}=\sup_{p,q\in\widehat{\mathcal{P}}_{b}^{0}}\Psi(p,q)\,,\qquad\widehat{M}_{2}=\sup_{p\in\widehat{\mathcal{P}}_{b}^{0},q\in\widehat{\mathcal{P}}_{b}^{i}}\Psi(p,q)\,,\qquad\widehat{M}_{3}=\sup_{p,q\in\widehat{\mathcal{P}}_{b}^{i}}\Psi(p,q)\,,\qquad\widehat{M}_{4}=\sup_{p\in\widehat{\mathcal{P}}_{b}^{i},q\in\widehat{\mathcal{P}}_{b}^{h}}\Psi(p,q)\,,$
(21)
where again $1\leq i,h\leq b$ with $i\neq h$.
Applying the above scheme with the symmetric partitions we just defined, we
can now rewrite the upper bound of equation (19) in the form
$\sum_{p,q}\lambda_{p}\lambda_{q}\Psi(p,q)\leq\eta_{0}^{2}M_{1}+2\eta_{0}\sum_{i>0}\eta_{i}M_{2}+\sum_{i>0}\eta_{i}^{2}M_{3}+2\sum_{0<i<h}\eta_{i}\eta_{h}M_{4}\,,$ (22)
where the cross term with $\eta_{0}$ is counted twice since each pair
$(0,i)$, $i>0$, appears twice in the double sum of (19).
Let $M$ be the maximum value achieved by the right hand side of (22) over all
possible probability distributions $\eta=(\eta_{0},\eta_{1},\ldots,\eta_{b})$
(which will of course depend on whether we use the $\widehat{M}_{i}$ or the
$\widecheck{M}_{i}$ values in place of the $M_{i}$). The optimization of
(22), once the $M_{i}$ values are known, is easy using the standard Lagrange
multipliers method (or see Lemma 2 of [17]). We can then replace
$\Psi_{\max}$ in (17) with $M$ to derive the bound
$R_{(b,k)}\leq\frac{1}{\frac{2}{M\log\frac{b-j}{k-j-1}}+\frac{1}{\log\left(\frac{b}{j-1}\right)}}\,.$
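The optimization over $\eta$ can also be checked numerically: by symmetry the $b$ unbalanced clusters can be merged, leaving a one-variable problem in $\eta_{0}$ (note that the cross term with $\eta_{0}$ is counted twice, since each pair $(0,i)$ appears twice in the double sum of (19)). The sketch below (illustrative only; it hard-codes the rounded values tabulated in Proposition 2 of Section IV) reproduces the $(6,6)$ and $(5,5)$ values of Theorem 3 and the $(6,6)$ bound of Theorem 2:

```python
import math

def best_M(M1, M2, M3, M4, b, grid=200001):
    """Maximize eta0^2*M1 + 2*eta0*s*M2 + (unbalanced part) over
    distributions eta, with s = 1 - eta0.  For fixed s the unbalanced
    part is s^2*M3 when M3 >= M4 (all mass on one cluster) and
    s^2*(M3 + (b-1)*M4)/b otherwise (mass spread evenly over b clusters),
    leaving a one-variable grid search over eta0."""
    C = M3 if M3 >= M4 else (M3 + (b - 1) * M4) / b
    best = 0.0
    for t in range(grid):
        e0 = t / (grid - 1)
        s = 1.0 - e0
        best = max(best, e0 * e0 * M1 + 2 * e0 * s * M2 + s * s * C)
    return best

def rate_bound(M, b, k, j):
    """Equation (17) with Psi_max replaced by M; logs are base 2."""
    return 1 / (2 / (M * math.log2((b - j) / (k - j - 1)))
                + 1 / math.log2(b / (j - 1)))

# (b,k) = (6,6), j = 4, with the rounded values of Proposition 2:
M66 = best_M(0.185185, 0.178857, 0.140664, 0.192000, b=6)
assert abs(M66 - 0.185185) < 1e-6                      # Theorem 3: M = 5/27
assert abs(rate_bound(M66, 6, 6, 4) - 5 / 59) < 1e-5   # Theorem 2

# (b,k) = (5,5), j = 3, with the rounded values of Proposition 2:
M55 = best_M(0.384033, 0.389226, 0.374759, 0.389226, b=5)
assert abs(M55 - 0.3873676) < 1e-4                     # Theorem 3
```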
We will describe in the next Section our procedure to determine, or upper
bound the values $\widehat{M}_{i}$, $\widecheck{M}_{i}$ and the corresponding
$M$. Here we only state the obtained results.
Using the partition based on the maximum value
$\\{\widecheck{\mathcal{P}}_{b}^{i}\\}_{i=0,\ldots,b}$ we obtain the following
theorem.
###### Theorem 1
We have
$R_{(7,7)}\leq 0.0408975\,,\quad R_{(8,8)}\leq 0.0188887\,,\quad R_{(9,8)}\leq 0.0561537\,,$
$R_{(10,9)}\leq 0.0277279\,,\quad R_{(11,10)}\leq 0.0132033\,.$
Using the partition based on the minimum value
$\\{\widehat{\mathcal{P}}_{b}^{i}\\}_{i=0,\ldots,b}$ we obtain the following
theorem.
###### Theorem 2
We have
$R_{(5,5)}\leq 0.1689325\,,\quad R_{(6,5)}\leq 0.3451130\,,\quad R_{(6,6)}\leq\frac{5}{59}\approx 0.0847458\,.$
Based on the results in [7], on its generalization given in equation (4) and
on Theorem 2 when $(b,k)=(6,6)$, we are led to formulate the following
conjecture.
###### Conjecture 1
For $b\geq k>3$,
$R_{(b,k)}\leq\min_{2\leq j\leq
k-2}\left(\frac{1}{\log\frac{b}{j-1}}+\frac{b^{j+1}}{b^{\underline{j+1}}\log\frac{b-j}{k-j-1}}\right)^{-1}\,.$
Note that the conjectured expression can be seen as a modification of the
Körner-Marton bound in (3) which takes into account the effects of prefix-
based partitions.
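As a spot check (an illustrative sketch, not from the paper), the conjectured expression is easy to evaluate; at $(b,k)=(6,6)$ the minimum is attained at $j=4$ and matches the bound $5/59$ of Theorem 2 exactly:

```python
import math

def falling(b, m):
    """Falling factorial b(b-1)...(b-m+1)."""
    out = 1
    for t in range(m):
        out *= b - t
    return out

def conjectured_bound(b, k):
    """The expression of Conjecture 1, minimized over 2 <= j <= k-2."""
    return min(
        1 / (1 / math.log2(b / (j - 1))
             + b ** (j + 1) / (falling(b, j + 1) * math.log2((b - j) / (k - j - 1))))
        for j in range(2, k - 1)
    )

assert abs(conjectured_bound(6, 6) - 5 / 59) < 1e-12
```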
## IV Computation of $M$
Thanks to a straightforward generalization of some lemmas defined and proved
in [17], we have determined and inspected using Mathematica all the possible
maximum points (see the Appendices in [17]) at which each $\widecheck{M}_{i}$
(or $\widehat{M}_{i}$) can be attained, obtaining the following propositions.
###### Proposition 1
For $j=k-2$, we have that
$(b,k)$ | $\epsilon$ | $\widecheck{M}_{1}$ | $\widecheck{M}_{2}$ | $\widecheck{M}_{3}$ | $\widecheck{M}_{4}$
---|---|---|---|---|---
$(7,7)$ | $9/100$ | 0.085679 | 0.092593 | 0.000006 | 0.000107
$(8,8)$ | $3/25$ | 0.038453 | 0.042840 | 0.000002 | 0.000022
$(9,8)$ | $1/10$ | 0.075870 | 0.076905 | 0.000001 | 0.000015
$(10,9)$ | $1/15$ | 0.036289 | 0.037935 | $3.4\cdot 10^{-9}$ | $8.5\cdot 10^{-8}$
$(11,10)$ | $1/11$ | 0.016928 | 0.018144 | $1.4\cdot 10^{-9}$ | $2.7\cdot 10^{-8}$
$\widecheck{M}_{1}$ attained at
$(\frac{1}{b},\ldots,\frac{1}{b};\frac{1}{b},\ldots,\frac{1}{b})$
---
$\widecheck{M}_{2}$ attained at
$(1,0,\ldots,0;0,\frac{1}{b-1},\ldots,\frac{1}{b-1})$
$\widecheck{M}_{3}$ attained at
$(1-\epsilon,\frac{\epsilon}{b-1},\ldots,\frac{\epsilon}{b-1};1-\epsilon,\frac{\epsilon}{b-1},\ldots,\frac{\epsilon}{b-1})$
$\widecheck{M}_{4}$ attained at
$(1-\epsilon,\frac{\epsilon}{b-2},\ldots,\frac{\epsilon}{b-2},0;0,\frac{\epsilon}{b-2},\ldots,\frac{\epsilon}{b-2},1-\epsilon)$
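The tabulated maximum points can be cross-checked numerically. For instance, for the $(7,7)$ row of Proposition 1 (so $j=k-2=5$), brute-force evaluation of $\Psi$ at the first two listed points recovers $\widecheck{M}_{1}=10080/117649$ and $\widecheck{M}_{2}=720/7776$, matching the table up to its upward rounding (an illustrative sketch, not from the paper):

```python
import math
from itertools import permutations

def Psi(p, q, j):
    """Brute-force evaluation of the kernel Psi of equation (11)."""
    b = len(p)
    tot = 0.0
    for sig in permutations(range(b)):
        pp = qq = 1.0
        for t in range(j):
            pp *= p[sig[t]]
            qq *= q[sig[t]]
        tot += pp * q[sig[j]] + qq * p[sig[j]]
    return tot / math.factorial(b - j - 1)

b, j = 7, 5
u = [1 / b] * b
# the uniform pair: 10080/117649 = 0.08567...  (table entry: 0.085679)
assert abs(Psi(u, u, j) - 10080 / 117649) < 1e-9
# (1,0,...,0; 0,1/6,...,1/6): 720/7776 = 0.09259...  (table entry: 0.092593)
assert abs(Psi([1] + [0] * 6, [0] + [1 / 6] * 6, j) - 720 / 7776) < 1e-9
```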
###### Proposition 2
For $j=3$, $(b,k)=(5,5)$ and $\epsilon=\frac{1}{44}(4+\sqrt{5})$ we have that
$\widehat{M}_{i}$ | Attained at point $(p;q)$ | Values $\approx$
---|---|---
$\widehat{M}_{1}$ | $(\epsilon,\frac{1-\epsilon}{b-1},\ldots,\frac{1-\epsilon}{b-1};\gamma,\delta,\ldots,\delta),\delta\approx 0.185275$ | 0.384033
$\widehat{M}_{2}$ | $(0,\frac{1}{b-1},\ldots,\frac{1}{b-1};\gamma,\delta,\ldots,\delta),\delta=\epsilon$ | 0.389226
$\widehat{M}_{3}$ | $(\epsilon,\frac{1-\epsilon}{b-2},\ldots,\frac{1-\epsilon}{b-2},0;\epsilon,\alpha,\ldots,\alpha,\beta),\beta\approx 0.4542$ | 0.374759
$\widehat{M}_{4}$ | $(0,\frac{1}{b-1},\ldots,\frac{1}{b-1};\gamma,\delta,\ldots,\delta),\delta=\epsilon$ | 0.389226
For $j=3$, $(b,k)=(6,5)$ and $\epsilon=\frac{1}{10}$ we have that
$\widehat{M}_{i}$ | Attained at point $(p;q)$ | Values $\approx$
---|---|---
$\widehat{M}_{1}$ | $(\epsilon,\frac{1-\epsilon}{b-1},\ldots,\frac{1-\epsilon}{b-1};\gamma,\delta,\ldots,\delta),\delta\approx 0.153159$ | 0.555625
$\widehat{M}_{2}$ | $(0,\frac{1}{b-1},\ldots,\frac{1}{b-1};\gamma,\delta,\ldots,\delta),\delta\approx 0.130217$ | 0.558467
$\widehat{M}_{3}$ | $(\epsilon,\frac{1-\epsilon}{b-2},\ldots,\frac{1-\epsilon}{b-2},0;\epsilon,\alpha,\ldots,\alpha,\beta),\beta\approx 0.37693$ | 0.535106
$\widehat{M}_{4}$ | $(0,\frac{1}{b-1},\ldots,\frac{1}{b-1};\gamma,\delta,\ldots,\delta),\delta\approx 0.130217$ | 0.558467
For $j=4$, $(b,k)=(6,6)$ and $\epsilon=\frac{1}{20}$ we have that
$\widehat{M}_{i}$ | Attained at point $(p;q)$ | Values $\approx$
---|---|---
$\widehat{M}_{1}$ | $(\frac{1}{b},\ldots,\frac{1}{b};\frac{1}{b},\ldots,\frac{1}{b})$ | 0.185185
$\widehat{M}_{2}$ | $(\epsilon,\frac{1-\epsilon}{b-1},\ldots,\frac{1-\epsilon}{b-1};\gamma,\delta,\ldots,\delta),\delta\approx 0.147757$ | 0.178857
$\widehat{M}_{3}$ | $(\epsilon,0,\frac{1-\epsilon}{b-2},\ldots,\frac{1-\epsilon}{b-2};0,1,0,\ldots,0)$ | 0.140664
$\widehat{M}_{4}$ | $(1,0,\ldots,0;0,\frac{1}{b-1},\ldots,\frac{1}{b-1})$ | $0.192000$
The values reported for $\widehat{M}_{3}$ are not approximations of its exact
values but, rather, upper bounds.
###### Remark 1
We point out that the value $\widehat{M}_{1}$ for $(b,k)=(6,6)$ is only
attained for uniform distributions.
As a consequence of Propositions 1 and 2 and equation (22), we can evaluate
the values of $M$ for both the partitions
$\\{\widecheck{P}_{b}^{i}\\}_{i=0,\ldots,b}$ and
$\\{\widehat{P}_{b}^{i}\\}_{i=0,\ldots,b}$. We then state the following
theorem.
###### Theorem 3
Using the partition $\\{\widecheck{P}_{b}^{i}\\}_{i=0,\ldots,b}$ we get
* •
for $(b,k)=(7,7)$ we have that $M\approx 0.0861594$;
* •
for $(b,k)=(8,8)$ we have that $M\approx 0.0388599$;
* •
for $(b,k)=(9,8)$ we have that $M\approx 0.0758830$;
* •
for $(b,k)=(10,9)$ we have that $M\approx 0.0363565$;
* •
for $(b,k)=(11,10)$ we have that $M\approx 0.0170049$.
Using the partition $\\{\widehat{P}_{b}^{i}\\}_{i=0,\ldots,b}$ we get
* •
for $(b,k)=(5,5)$ we have that $M\approx 0.3873676$;
* •
for $(b,k)=(6,5)$ we have that $M\approx 0.5567010$;
* •
for $(b,k)=(6,6)$ we have that $M=\frac{5}{27}\approx 0.185185$.
It is interesting to note that, for the values of $(b,k)$ reported in Table I
(except the cases in which $k=5$, $b=k=6,7,8$, or $(b,k)=(9,8)$, $(10,9)$,
$(11,10)$), the bounds in bold (the generalized bounds [5] or [6]) are
achieved for uniform distributions. This means that, for these particular
cases, no new upper bound on the quadratic form in equation (13) can further
improve those bounds. However, for such globally balanced codes, one can use a
different argument, based on the minimum distance of the code, to obtain even
stronger upper bounds. A proof that $R_{(6,6)}<5/59$, based on the Aaltonen
bound [1], can be found in [17].
## References
* [1] M. Aaltonen, A new upper bound on nonbinary block codes, Discrete Mathematics, vol. 83, pp. 139-160, 1990.
* [2] E. Arikan, An upper bound on the zero-error list-coding capacity, IEEE Transactions on Information Theory, vol. 40, pp. 1237-1240, 1994.
* [3] E. Arikan, An improved graph-entropy bound for perfect hashing, IEEE International Symposium on Information Theory, 1994.
* [4] S. Bhandari and J. Radhakrishnan, Bounds on the zero-error list-decoding capacity of the q/(q-1) channel, 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, 2018, pp. 906-910.
* [5] S. Costa and M. Dalai, New bounds for perfect $k$-hashing, in press on Discrete Applied Mathematics, 2020.
* [6] M. Dalai, V. Guruswami, and J. Radhakrishnan, An improved bound on the zero-error list-decoding capacity of the 4/3 channel, IEEE International Symposium on Information Theory (ISIT), 2017, pp. 1658-1662.
* [7] M. Dalai, V. Guruswami, and J. Radhakrishnan, An improved bound on the zero-error list-decoding capacity of the 4/3 channel, IEEE Transactions on Information Theory, vol. 66, no. 2, pp. 749-756, Feb. 2020.
* [8] P. Elias, Zero error capacity under list decoding, IEEE Transactions on Information Theory, vol. 34, pp. 1070-1074, 1988.
* [9] M. L. Fredman and J. Komlós, On the size of separating systems and families of perfect hash functions, SIAM Journal on Algebraic Discrete Methods, vol. 5, pp. 61-68, 1984.
* [10] V. Guruswami and A. Riazanov, Beating Fredman-Komlos for perfect $k$-hashing, Leibniz International Proceedings in Informatics, 2019.
* [11] G. Hansel, Nombre minimal de contacts de fermature nécessaires pour réaliser une fonction booléenne symétrique de $n$ variables, C. R. Acad. Sci. Paris, pp. 6037-6040, 1964.
* [12] J. Korner and K. Marton, New bounds for perfect hashing via information theory, European Journal of Combinatorics, vol. 9, pp. 523-530, 1988.
* [13] J. Korner, Fredman-Komlós bounds and information theory, SIAM Journal on Algebraic Discrete Methods, vol. 7, pp. 560-570, 1986.
* [14] A. Nilli, Perfect hashing and probability, Combinatorics, Probability and Computing, vol. 3, pp. 407-409, 1994.
* [15] J. Radhakrishnan, Entropy and Counting, available at: http://www.tcs.tifr.res.in/~jaikumar/Papers/EntropyAndCounting.pdf.
* [16] C. Xing and C. Yuan, Beating the probabilistic lower bound on perfect hashing, arXiv preprint arXiv:1908.08792, 2019.
* [17] S. Della Fiore, S. Costa and M. Dalai, Further strengthening of upper bounds for perfect $k$-hashing, arXiv preprint arXiv:2012.00620, 2020.
# A Blockchain-based Trust System for Decentralised Applications: When
trustless needs trust
Nguyen Truong<EMAIL_ADDRESS>Data Science Institute, South Kensington
Campus, Imperial College London, London SW7 2AZ, United Kingdom Gyu Myoung
Lee Department of Computer Science, Liverpool John Moores University,
Liverpool L3 3AF, United Kingdom<EMAIL_ADDRESS>Kai Sun
<EMAIL_ADDRESS>Florian Guitton<EMAIL_ADDRESS>YiKe Guo
Department of Computer Science, Hong Kong Baptist University, Kowloon Tong,
Hong Kong<EMAIL_ADDRESS>
###### Abstract
Blockchain technology has been envisaged to commence an era of decentralised
applications and services (DApps) without the need for a trusted intermediary.
Such DApps open a marketplace in which services are delivered to end-users by
contributors, who are then incentivised with cryptocurrencies in an automated,
peer-to-peer, and trustless fashion. However, blockchain, consolidated by
smart contracts, only ensures on-chain data security, autonomy and integrity
of the business logic execution defined in smart contracts. It cannot
guarantee the quality of service of DApps, which entirely depends on the
services’ performance. Thus, there is a critical need for a trust system to
reduce the risk of dealing with fraudulent counterparts in a blockchain
network. These reasons motivate us to develop a fully decentralised trust
framework deployed on top of a blockchain platform, operating along with DApps
in the marketplace to deter deceptive entities while encouraging
trustworthy ones. The trust system works as an underlying decentralised
service providing a feedback mechanism for end-users and maintaining trust
relationships among them in the ecosystem accordingly. We believe this
research fortifies the DApps ecosystem by introducing a universal trust
middleware for DApps as well as shedding light on the implementation of a
decentralised trust system.
###### keywords:
Blockchain DApps Decentralised Ecosystem Reputation Trust System
* •
Introduce a novel concept and provision of a universal decentralised trust system that can be integrated into any DApps sharing the same Blockchain platform.
* •
Present a decentralised trust model with theoretical analysis, algorithms, and simulations.
* •
Provide the whole agenda of the trust system development, including technical solutions, implementation reference, as well as performance evaluation.
## 1 Introduction
The turn of the last decade brought us to the disruptive Blockchain technology
(BC) that provides a trusted infrastructure for enabling a variety of
decentralised applications and services (DApps) without the need for an
intermediary. To actualise this vision, Smart Contracts (SCs) technology is
consolidated into the BC-based infrastructure: SCs are programmed to perform
services’ business logic, compiled into byte-code, and deployed onto a BC
platform (i.e., replicated into full-nodes in the platform) so that a user can
create transactions to execute the business logic implemented in the SCs in a
decentralised fashion [9]. This infrastructural BC platform offers some
advanced features including immutability, transparency, trace-ability, and
autonomy, which are promising for effectively implementing a wealth of DApps,
from financial services (i.e., cryptocurrency trading) to numerous other
services such as digital asset management [1], provenance tracking in
logistics and supply chains [14, 21], and data sharing and processing in the Internet of Things
(IoT) [28, 22].
Indeed, various DApps have already been developed and deployed in the real
world. For instance, there are over $4000$ DApps deployed on top of the
Ethereum, Tron, and EOS platforms, serving about $150k$ active users daily in
2019 (https://cointelegraph.com/news/report-ethereum-tron-and-eos-dominated-dapp-ecosystem-in-2019).
This is a considerable ecosystem and a huge decentralised peer-to-peer (P2P)
marketplace. Although there are numerous challenges due to the limitations of
the current BC technology hindering the advancement of DApps, we believe that
“everything that can be decentralized, will be decentralized” (David A.
Johnston, http://www.johnstonslaw.org). The DApps ecosystem is just in its
preliminary state and will be the future of the next-generation Internet.
### 1.1 Features of DApps
There are different perspectives on the definition and system development of
DApps within the cryptocurrency space. Nonetheless, there is mutual agreement
that a DApp must satisfy some requirements: $(i)$ it is open source, so that
participants can audit the system; $(ii)$ application operations and data are
recorded and executed on a decentralised BC (e.g., using SCs); and $(iii)$ a
crypto token is used to access the service and to contribute to its operations
(e.g., token rewards) [18, 8]. Owing to these features, DApps ideally have the
ability to operate without human intervention and to be self-sustaining,
because the participation of stakeholders continuously strengthens the system.
According to Vitalik Buterin, DApps generally fall into two overlay
categories, namely fully anonymous DApps and reputation-based ones [8]. The
first category consists of DApps whose participants are essentially anonymous
and whose whole service business logic is autonomously executed by a series of
instant atomic operations. Pure financial services such as Bitcoin are
examples of this. Another example is digital-asset-trading DApps, e.g., for
software licenses, data, and digitised properties, in which ownership can be
impeccably transferred once a contract (defined and implemented using SCs) has
been performed [32].
The second category refers to DApps whose business logic requires a
reputation-like mechanism to keep track of participants' activities for trust-
related purposes. For instance, DApps for data storage and computation,
similar to $Dropbox$ and $Amazon$ $AWS$ in the centralised space, do require
maintaining reputation-like statistical records of peers for service-quality
and security-related purposes (e.g., anti-DDoS). This requirement of trust
cannot be met by BC technology itself, which ensures only data security (e.g.,
for distributed ledgers) and the autonomy and integrity of the business logic
execution programmed in the corresponding SCs. The quality of service (QoS) of
such a DApp also depends on the service itself (i.e., how well the service
handles the business logic defined in the SCs and caters to customers).
### 1.2 Necessity of a Trust System in DApps Ecosystem
DApps usage always comes with token movement from end-users to service
contributors as a result of an incentive scheme, which is crucial to
maintaining the service. However, due to its immutable nature, it is
practically impossible to revoke any transaction once it is settled on the BC.
Thus, a DApp has to make sure that end-users are dealing with trustworthy
counterparties before invoking any SC functions that can lead to a token
payment. Intuitively, end-users tend to look for an indication of 'assurance'
before using any service. Indeed, a variety of DApps share the same challenge
of lacking a unified decentralised framework to evaluate the trustworthiness
of participants; for instance, decentralised storage and computing (similar to
cloud storage like $Dropbox$ and $Amazon$ $AWS$), home-sharing (similar to
$Airbnb$), car-sharing (similar to $Uber$), or a hotel distribution and
reservation service (similar to $Booking.com$) backed by a BC platform.
Consequently, a trust middleware that supports DApps' end-users in transacting
with trustworthy counterparts is of paramount importance, as it penalises
deceptive participants while encouraging authentic ones. As illustrated in
Fig. 1, DApps built upon a BC platform empowered by a decentralised trust
system naturally build up trust with clients and create a virtuous cycle that
bolsters the growth of the whole DApps ecosystem.
Figure 1: A BC platform strengthened with a trust system creates a virtuous
cycle sustaining the DApps ecosystem growth
### 1.3 Objectives and Contributions
Our objective is to envision and develop a universal decentralised system that
operates alongside any DApp to evaluate trust relationships between entities
in the ecosystem. This trust system acts as middleware between a BC platform
and DApps, providing mechanisms for DApps' end-users to build up and maintain
a network of trust relationships among themselves. The operations of the
system are fully decentralised, transparent, and accessible to all
participants, and are autonomously and flawlessly executed in a trustless
fashion. The system is also expected to effectively prevent reputation attacks
(e.g., Sybil, white-washing, self-promoting, and bad- and good-mouthing) and
to dismiss masquerading hostile participants.
The main contributions of this paper are three-fold:
* •
Introduction of the concept and provision of a universal decentralised trust
system that can be integrated into any DApps sharing the same Blockchain
platform.
* •
A decentralised trust model with theoretical analysis, algorithms, and
simulations.
* •
Provision of the whole agenda of the real-world development of the system,
including technical solutions, implementation reference, as well as
performance evaluation.
The rest of the paper is organised as follows. Section II briefly presents
background and related work, and introduces the provision and conceptual model
of a decentralised trust system. Section III describes the system design with
a trust evaluation model for the proposed system. Section IV provides the
algorithms and the theoretical analysis of the trust evaluation model. Section
V discusses the technical solutions and the implementation reference for the
system development. Section VI is dedicated to the system analysis and
discussion. Section VII concludes our work along with future research
directions.
## 2 Decentralised Trust System Provision for DApps Ecosystem
To craft a BC platform into a mature DApp development environment, fundamental
elements must be incorporated such as an Identity Management (IdM), a name
registry, a wallet, a P2P messaging for end-users, a browser, and a
decentralised trust/reputation system [8]. These elements are core built-in
services of a BC-based infrastructure for DApps development.
### 2.1 Related Work
A large number of trust management mechanisms have been proposed in various
environments, including social networks [34], P2P or ad-hoc networks [2], and
IoT [37, 31, 30]. Those trust models could be adapted to different
scenarios including BC-related environment. However, as the emerging BC
technology is in the early stage, there is limited research on trust
management for DApps. Most of the related research is to develop a trust or
reputation management platform leveraging the advantages of BC such as
decentralisation, immutability, trace-ability, and transparency. In this
respect, researchers have proposed BC-based trust mechanisms to fortify
specific applications in various environments including vehicular networks and
intelligent transportation systems [38, 12], wireless sensor networks [24,
29], or IoT [13, 20]. For instance, W. She et al. in [29] have proposed a BC-
based trust model to detect malicious nodes in wireless sensor networks by
implementing a voting mechanism on-chain, ensuring the trace-ability and
immutability of voting information. M. Debe et al. have developed a
reputation-based trust model built on top of Ethereum platform for fog nodes
in a Fog-based architectural system [6]. The idea is similar in that a
reputation mechanism, comprising of several SCs, is implemented on top of
Ethereum platform so that clients can give feedback as ratings toward a Fog
node when using a service provided by such node. The reputation of a fog node
is simply accumulated on-chain from users’ ratings. Being executed on-chain,
such ratings and reputation values are immutably recorded in a decentralised
fashion, thus ensuring data integrity as well as preventing from Denial of
Service (DDoS) attack.
We, instead, look at a different angle of trust in BC-based applications, in
which a trust system is a complementary component of the BC platform that
cooperates with DApps to empower the ecosystem built on top of the platform.
We aim to develop a trust system for decentralised services in a BC ecosystem
(e.g., Ethereum) in which participants (clients and service providers)
interact with each other on-chain in a P2P manner. Our system serves as a
unified trust solution working with any DApps. Our previous research in [32]
presented an introductory concept of a unified trust system to strengthen a BC
platform. However, it came without detailed analysis, algorithms, and
technical solutions for the development of the decentralised trust system. In
this paper, we further explore the concept and the feasibility of a unified
trust system as middleware between a BC platform and DApps, and provide a
proof-of-concept of the decentralised trust system along with the system
design, algorithms, technical solutions, and implementation reference.
### 2.2 High-level architecture of BC-based infrastructure and Trust System
For a better understanding of the big picture of the whole BC-based
infrastructure including the proposed trust system, we represent the high-
level architecture of a full-stack IoT infrastructure by harmonising these
components to the IoT and Smart Cities & Communities reference
model333http://itu.int/en/ITU-T/studygroups/2017-2020/20/Pages/default.aspx.
As can be seen in Fig. 2, the BC platform is located in the Service Support
and Application Support layer, which lies between the Application and Network
layers in the IoT architecture. DApps are located in the Application layer.
Unlike client-server applications and services, whose reputation/trust
systems are separately developed, we envisage that DApps in the same ecosystem
could leverage a universal trust system, which serves as a fundamental service
for the BC-based infrastructure (Fig. 2). This trust middleware exists because
DApps’ end-users in an ecosystem are identified by the same IdM and a name
registry, and use the same cryptocurrency (e.g., provided by a BC platform) to
consume the services.
Figure 2: Functional model of a BC-based infrastructure comprising a trust
system and other elements in alignment with the IoT high-level architecture.
### 2.3 High-level Architecture of Trust System
In this sub-section, the fundamental elements of a decentralised trust
middleware between a BC platform and DApps are described. As can be seen in
Fig. 3, the proposed system consists of two basic components, named Data
Collection & Extraction and Trust Evaluation, that collect and aggregate the
necessary trust-related information and evaluate trust relationships,
respectively. These two components come along with North-bound and South-bound
APIs for providing trust-related services to DApps and for collecting data
from a BC or applications and services, respectively.
#### 2.3.1 Trust Evaluation Mechanism
We adapt the REK trust model proposed in [31, 30] to the DApps ecosystem
scenario, in which both $trustors$ and $trustees$ are end-users of DApps. In
the REK model, a trust relationship is evaluated by assembling three
indicators called Reputation (of the trustee), Experience and Knowledge (of
the trustor toward the trustee). In DApps scenarios, off-chain information
(i.e., information that is recorded outside the BC) about end-users, needed
for evaluating the Knowledge indicator, is of limited availability (or
difficult to obtain), as users' identities are normally pseudo-anonymised and
challenging to link to the outside world [23]. Instead, transactions between
end-users are immutably recorded (and publicly available) on-chain, which can
be leveraged for the Experience and Reputation evaluations. As a result, in
this paper, we employ an adaptation of the REK trust evaluation model, called
DER, which only utilises the two indicators Experience and Reputation in a
decentralised environment. Details of the DER trust system are described in
the next section.
Figure 3: Conceptual model of the proposed trust system
Generally, after each transaction between entities in a DApp, the trust system
enables a participant to give feedback toward its counterpart, thus
establishing and updating the Experience relationship between the two. By
doing this, the trust system maintains an Experience network among
participants, which is publicly recorded on-chain. This Experience network is
autonomously updated whenever an entity gives feedback to another. The
reputations of all participants are then calculated accordingly, following the
idea of Google's PageRank algorithm. Finally, the trust value between two
entities is calculated as a composition of Experience and Reputation.
#### 2.3.2 Data Collection and Extraction
By nature, a BC is a record of a continuously growing list of transactions
among end-users, which can be analysed to extract a network topology of end-
user interactions. Nonetheless, further information about QoS is required to be
collected and aggregated in order for the DER trust evaluation mechanism to be
performed. Therefore, a decentralised feedback mechanism associated with DApps
in a BC platform is required to reflect QoS once end-users (e.g., service
clients) successfully carry out transactions with their counterparts (e.g.,
DApp providers). This mechanism creates a distributed ledger that logs users’
feedback (toward a DApps service) along with the information about associated
transactions (e.g., end-user ID ($from$ address), counterpart ID ($to$
address), and $timestamp$). Feedback can be either implicit or explicit, and
may or may not require human participation [17]. The trust system then
extracts feedback and transactions information recorded in BCs as inputs for
the DER trust evaluation model (i.e., calculate the Experience and Reputation
indicators) in order to evaluate trust relationships between any two peers in
the decentralised ecosystem.
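To make the extraction step concrete, the following Python sketch groups ledger feedback by (trustor, trustee) pair, the input the DER model consumes. The record fields are hypothetical illustrations of the "from address, to address, timestamp, score" tuple the text describes; they are not part of any real DApp API.

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    # One entry of the hypothetical feedback ledger; field names are
    # illustrative, not taken from any real blockchain platform.
    from_addr: str   # end-user ID ("from" address)
    to_addr: str     # counterpart ID ("to" address)
    timestamp: int   # time of the associated transaction
    score: float     # feedback score, normalised to (0, 1)

ledger = [
    FeedbackRecord("0xA", "0xB", 1_000, 0.9),
    FeedbackRecord("0xA", "0xB", 1_050, 0.8),
    FeedbackRecord("0xC", "0xB", 1_100, 0.2),
]

# Group scores by (trustor, trustee) pair, the input the DER model needs
# for the Experience and Reputation indicators.
pairs = {}
for rec in ledger:
    pairs.setdefault((rec.from_addr, rec.to_addr), []).append(rec.score)

assert pairs[("0xA", "0xB")] == [0.9, 0.8]
```

In a real deployment this grouping would be done over records read from the chain rather than an in-memory list, but the shape of the data is the same.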
## 3 System Design and DER Trust Model
### 3.1 Use-cases
For better explanation and clarification, we scrutinise decentralised data
storage services (DDS), with regard to some projects being developed and
implemented in the real world, such as Storj (https://storj.io), Sia
(https://sia.tech), and Filecoin (https://filecoin.io), the last built on top
of the InterPlanetary File System (IPFS, https://ipfs.io). Decentralised
storage is a promising solution to complement, or even take over, conventional
centralised cloud storage: data is split into multiple chunks and distributed
to storage nodes across a P2P network. These storage nodes, as DDS providers,
are expected to reliably store the data as well as provide reasonable network
bandwidth with appropriate responsiveness for data owners to retrieve their
data. As a reward, such storage nodes are incentivised with crypto tokens. It
is worth noting that end-users in a DApps ecosystem can be both data owners
(DDS clients) and storage nodes (DDS providers). The decentralised storage
concept is similar to legacy P2P file sharing such as BitTorrent
(https://en.wikipedia.org/wiki/BitTorrent), but fortified with advanced
cryptography and encryption mechanisms as well as incentive schemes built upon
a BC platform. It is expected to solve the long-standing challenges of single-
point-of-control and single-point-of-failure in centralised data silos, and to
bring essential control of data back to the owners whilst discharging the full
control of cloud server managers.
Figure 4: Decentralised storage service built on top of a BC platform that
incentivizes storage nodes with crypto tokens.
The DDS deploys necessary SCs on top of a BC platform to execute the business
agreement between DDS clients (i.e., data owners) and DDS providers (i.e.,
storage nodes) such as storage space and period, guaranteed performance (e.g.,
availability, throughput, bandwidth, and latency), and the Incentive scheme
(i.e., Token Reward) (Fig. 4). Unfortunately, such SCs are unable to ensure
the QoS of the DDS service provided by a set of storage nodes because (i) it
is impractical for the SCs to monitor and enforce the performance of the DDS
providers, and (ii) the guaranteed performance can only be measured once the
SCs are already invoked. In this regard, a trust system that manages the
performance history of the storage nodes and ranks them in order of
trustworthiness (to provide high QoS) is of paramount importance.
### 3.2 DER Trust Model
In the proposed DER model, the trust relationship between two entities is a
compound of two elements: Experience (of the trustor toward the trustee) and
Reputation (of the trustee). This section describes the mechanisms used to
calculate these two elements.
#### 3.2.1 Experience mechanism
Experience is an asymmetric relationship from one entity to another, which is
built up from previous transactions between the two; it is an indicator of
trust [31]. For instance, an experience (denoted $Exp(A,B)$) is constituted
from a DDS client (i.e., a data owner, denoted $A$) toward a DDS provider
(i.e., a storage node, denoted $B$) once $A$ invokes an SC to use the storage
service offered by $B$. A higher $Exp(A,B)$ value represents a higher degree
of trust from $A$ to $B$. Essentially, $Exp(A,B)$ increases if $B$ provides
high-quality storage service to $A$ (which is reflected by a feedback score
$\vartheta_{t}$), and vice versa. It is worth noting that feedback can be
provided either by clients (e.g., $A$) or by an authorised third party
monitoring the performance of service providers (e.g., $B$). Also, $Exp(A,B)$
decays if no transactions take place after a period of time or if a
transaction is neutral (i.e., neither cooperative nor uncooperative). The
amounts of increase, decrease and decay depend on the intensity of
transactions, the feedback scores $\vartheta$, and the current value of
$Exp(A,B)$, which can be modelled by linear difference equations and a decay
function as follows (notations are given in Table 1) [31, 30]:
Table 1: NOTATIONS USED IN THE EXPERIENCE MODEL
Notation | Description
---|---
$Exp_{t}$ | Experience value at time $t$, ${Exp_{0}}$ is the initial value
$min_{Exp}$ | minimum $Exp$ value, $min_{Exp}=0$ if $Exp$ is normalised in [0,1]
$max_{Exp}$ | maximum $Exp$ value, $max_{Exp}=1$ if $Exp$ is normalised in [0,1]
$\vartheta_{t}$ | Feedback score at time $t$
$\alpha$ | Maximum increase value of $Exp$ in two consecutive transactions, $0<\alpha<max_{Exp}$
$\beta$ | Decrease rate, $\beta>1$
$\theta_{co}$ | Cooperative threshold for a feedback score $\vartheta_{t}$. A feedback is cooperative if $\vartheta_{t}\geq\theta_{co}$
$\theta_{unco}$ | Uncooperative threshold for a feedback score $\vartheta_{t}$. A feedback is uncooperative if $\vartheta_{t}\leq\theta_{unco}$
$\delta$ | Minimum Decay value ensuring any Experience relationship degenerates if it is not maintained
$\gamma$ | Decay rate controlling the amount of the decay
* •
Increase model
The current $Exp(A,B)$ (denoted $Exp_{t-1}$) increases when a cooperative
transaction occurs (at time $t$, indicated by the feedback score
$\vartheta_{t}\geq\theta_{co}$), following the linear difference equation:
$Exp_{t}=Exp_{t-1}+\vartheta_{t}{\Delta}Exp_{t}$ (1)
where ${\Delta}Exp_{t}$ is defined as follows:
${\Delta}Exp_{t}=\alpha(1-\frac{Exp_{t-1}}{max_{Exp}})$ (2)
* •
Decrease model
Similarly, $Exp(A,B)$ decreases if the transaction is uncooperative (indicated
by the feedback score $\vartheta_{t}\leq\theta_{unco}$), following the
equation:
$Exp_{t}=Max(min_{Exp},Exp_{t-1}-\beta(1-\vartheta_{t}){\Delta}Exp_{t})$ (3)
in which ${\Delta}Exp_{t}$ is specified in Equation (2). The decrease rate
$\beta>1$ implies that it is easier to lose the $Exp(A,B)$ value due to an
uncooperative transaction than to gain it (by a cooperative transaction).
* •
Decay model
$Exp(A,B)$ decays if there is no transaction after a period of time or a
feedback is neutral (i.e., $\theta_{unco}<\vartheta<\theta_{co}$) and the
decay rate is assumed to be inversely proportional to the strength of the
experience relationship (i.e., $Exp_{t}$ value) [27]. Based on these
observations, the Decay model is proposed as follows:
$Exp_{t}=Max(min_{Exp},Exp_{t-1}-\Delta{Decay_{t}})$ (4)
$\Delta{Decay_{t}}=\delta{(1+\gamma-\frac{Exp_{t-2}}{max_{Exp}})}$ (5)
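The three update rules above can be sketched in a few lines of Python. This is a minimal illustration of Eqs. (1)-(5) with everything normalised to $[0,1]$; the specific parameter values are our own illustrative choices, since the paper only constrains their ranges (Table 1), and the optional `exp_prev2` argument stands in for $Exp_{t-2}$ in Eq. (5).

```python
def exp_update(exp_prev, score, *, alpha=0.1, beta=2.0,
               th_co=0.7, th_unco=0.3, delta=0.01, gamma=0.1,
               exp_prev2=None):
    # One step of the Experience model (Eqs. (1)-(5)), normalised so that
    # max_Exp = 1 and min_Exp = 0. Parameter values are illustrative.
    d_exp = alpha * (1.0 - exp_prev)                  # Eq. (2)
    if score >= th_co:                                # cooperative: Eq. (1)
        return exp_prev + score * d_exp
    if score <= th_unco:                              # uncooperative: Eq. (3)
        return max(0.0, exp_prev - beta * (1.0 - score) * d_exp)
    # Neutral feedback (or no transaction): decay, Eqs. (4)-(5).
    # Exp_{t-2} defaults to Exp_{t-1} when no older value is available.
    prev2 = exp_prev if exp_prev2 is None else exp_prev2
    d_decay = delta * (1.0 + gamma - prev2)           # Eq. (5)
    return max(0.0, exp_prev - d_decay)

e = exp_update(0.5, 0.9)          # cooperative feedback: Exp grows
assert e > 0.5
assert exp_update(e, 0.1) < e     # uncooperative: faster loss, since beta > 1
```

Note how the asymmetry of the model is visible directly in the code: a single uncooperative transaction, amplified by $\beta>1$, undoes more than one cooperative transaction gains.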
#### 3.2.2 Reputation mechanism
The reputation of an entity represents the overall perception of a community
regarding the characteristic of the entity such as trustworthiness. In the
DApps ecosystem, the reputation of an end-user $U$ (denoted as $Rep(U)$) can
be calculated by aggregating $Exp(i,U)$, $\forall{i}$ are users who have
already been transacted with $U$. To calculate the reputation of end-users, we
utilise the model proposed in [31, 30] which is based on the standard PageRank
[7] and the weighted PageRank [35, 33].
Let $N$ be the number of end-users in the DApps ecosystem. A directed graph
$G(V,E)$ is constructed in which $V$ is the set of $N$ users and
$E\subseteq\\{(x,y)|(x,y)\in V^{2}\wedge x\neq y\\}$ is the set of edges
representing the experience relationships, $E(x,y)=Exp(x,y)$. If there is no
prior transaction between $x$ and $y$, then $E(x,y)=0$. To enable the
reputation model, $G(V,E)$ is divided into two sub-graphs: the positive-
experience graph $PG(V,PE)$, in which any edge $PE(x,y)=Exp(x,y)$ satisfies
$Exp(x,y)>\theta$, and the negative-experience graph $NG(V,NE)$, in which any
edge $NE(x,y)=Exp(x,y)$ satisfies $Exp(x,y)<\theta$, where $\theta$ is a
predefined threshold. The parameter $d$ is a damping factor ($0<d<1$)
introduced in the standard PageRank [7]. The reputation for each sub-graph is
then calculated as follows:
* •
Positive Reputation
$Rep_{Pos}(U)=\frac{1-d}{N}+d(\sum_{\forall{i}}Rep_{Pos}(i)\times\frac{PE(i,U)}{C_{Pos}(i)})$
(6)
in which $C_{Pos}(i)=\sum_{\forall{j}}{PE(i,j)}$ representing the sum of all
positive experience values that the end-user $i$ holds (toward other end-
users).
* •
Negative Reputation
$Rep_{Neg}(U)=\frac{1-d}{N}+d(\sum_{\forall{i}}Rep_{Neg}(i)\times\frac{1-NE(i,U)}{C_{Neg}(i)})$
(7)
in which $C_{Neg}(i)=\sum_{\forall{j}}{(1-NE(i,j))}$ representing the sum of
all complements of negative experience values (i.e., $1-NE(i,j)$) that the
end-user $i$ holds (toward other end-users).
* •
Overall Reputation
$Rep(U)$ is the aggregation of $Rep_{Pos}(U)$ and $Rep_{Neg}(U)$:
$Rep(U)=max(0,Rep_{Pos}(U)-Rep_{Neg}(U))$ (8)
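A compact way to realise Eqs. (6)-(8) is a fixed-point iteration over the two experience sub-graphs, exactly as in the standard PageRank computation. The sketch below is our own illustration, not the paper's implementation: the values $d=0.85$, $\theta=0.5$ and the iteration count are illustrative, and `exp` is a plain dictionary of observed $Exp(i,j)$ values (absent pairs mean no prior transaction, i.e. $Exp=0$).

```python
def reputation(users, exp, d=0.85, theta=0.5, iters=50):
    # Positive/negative reputation (Eqs. (6)-(8)) via fixed-point iteration.
    N = len(users)
    pe = {k: v for k, v in exp.items() if v > theta}      # PG(V, PE)
    ne = {k: v for k, v in exp.items() if v < theta}      # NG(V, NE)
    # Out-weights C_Pos(i) and C_Neg(i) of each user i.
    c_pos = {u: sum(v for (i, _), v in pe.items() if i == u) for u in users}
    c_neg = {u: sum(1 - v for (i, _), v in ne.items() if i == u) for u in users}
    rp = {u: 1.0 / N for u in users}                      # bootstrap at 1/N
    rn = {u: 1.0 / N for u in users}
    for _ in range(iters):
        rp = {u: (1 - d) / N + d * sum(rp[i] * pe[(i, j)] / c_pos[i]
                                       for (i, j) in pe if j == u)
              for u in users}                             # Eq. (6)
        rn = {u: (1 - d) / N + d * sum(rn[i] * (1 - ne[(i, j)]) / c_neg[i]
                                       for (i, j) in ne if j == u)
              for u in users}                             # Eq. (7)
    return {u: max(0.0, rp[u] - rn[u]) for u in users}    # Eq. (8)

users = ["A", "B", "C"]
rep = reputation(users, {("A", "B"): 0.9, ("C", "B"): 0.8, ("A", "C"): 0.2})
assert rep["B"] > rep["A"]   # only B receives positive experience
```

Because edge weights divide by the out-weight of their own source, the division is always well defined: a user appears as a source in `pe` (or `ne`) only if its out-weight there is strictly positive.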
#### 3.2.3 Trust Aggregation
Trust relationship between trustor $A$ and trustee $B$ is a composite of
$Exp(A,B)$ and $Rep(B)$:
$Trust(A,B)=w_{1}Rep(B)+w_{2}Exp(A,B)$ (9)
in which $w_{1}$ and $w_{2}$ are weighting factors satisfying $w_{1}+w_{2}=1$.
It is worth noting that any end-user, once signing up for a DApp, is assigned
a default reputation value at bootstrap (e.g., $\frac{1}{N}$). If $A$ and $B$
have no prior transaction, then $Exp(A,B)=0$; in this case, $w_{1}=1$ and
$w_{2}=0$, and thus $Trust(A,B)=Rep(B)$.
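The aggregation step of Eq. (9), including the fallback to reputation alone when no prior transaction exists, amounts to a few lines. The equal weights $w_1=w_2=0.5$ below are an illustrative choice; the paper only requires $w_1+w_2=1$.

```python
def trust(a, b, exp, rep, w1=0.5, w2=0.5):
    # Trust(A, B) = w1 * Rep(B) + w2 * Exp(A, B)  (Eq. (9)).
    # With no prior transaction (Exp(A, B) = 0) the model falls back to
    # reputation alone, i.e. w1 = 1 and w2 = 0.
    e = exp.get((a, b), 0.0)
    if e == 0.0:
        return rep[b]                 # Trust(A, B) = Rep(B)
    return w1 * rep[b] + w2 * e

rep = {"B": 0.4}
exp = {("A", "B"): 0.8}
assert trust("A", "B", exp, rep) == 0.5 * 0.4 + 0.5 * 0.8
assert trust("C", "B", exp, rep) == rep["B"]   # no prior transaction
```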
## 4 Trust Model: Evaluation and Simulation
This section provides a detailed evaluation of the DER trust model, including
an analysis of the model equations, algorithms, and simulations of the
Experience and Reputation models.
### 4.1 Experience Model
#### 4.1.1 Analysis
For simplicity, $Exp$ values and the feedback score $\vartheta$ are normalised to
the range $(0,1)$, with $max_{Exp}=1$, $min_{Exp}=0$, and initial value
$0<Exp_{0}<1$.
###### Lemma 4.1.
The Increase model defined in Equation 1 is (*) a monotonically increasing
function and (**) asymptotic to $1$.
###### Proof.
From Equations 1 and 2, with $max_{Exp}=1$, we have:
$Exp_{t}=Exp_{t-1}+(1-Exp_{t-1})\vartheta_{t}\ \alpha$ (10)
Subtracting both sides of Equation 10 from $1$:
$\displaystyle 1-Exp_{t}$
$\displaystyle=1-(Exp_{t-1}+(1-Exp_{t-1})\vartheta_{t}\ \alpha)$
$\displaystyle=(1-Exp_{t-1})(1-\vartheta_{t}\ \alpha)$
$\displaystyle=(1-Exp_{t-2})(1-\vartheta_{t}\ \alpha)(1-\vartheta_{t-1}\
\alpha)$ $\displaystyle=...$
$\displaystyle=(1-Exp_{0})\prod_{i=1}^{t}(1-\vartheta_{i}\ \alpha)$ (11)
As $0<Exp_{0}<1$, $0<\alpha<max_{Exp}=1$, and $0<\vartheta_{i}<1$
$\forall{i}$, Equation 11 gives $0<Exp_{t}<1$ $\forall{t}$. Therefore, the
$Exp_{t}$ function defined in Equation 1 is increasing, as the increment
between $Exp_{t-1}$ and $Exp_{t}$ is $\vartheta_{t}\times{\Delta}Exp_{t}$
where ${\Delta}Exp_{t}=\alpha(1-Exp_{t-1})>0$. Hence, part (*) of the lemma is proven.
Furthermore, as the Increase model applies to cooperative transactions, meaning that
$\vartheta_{i}\geq\theta_{co};\forall{i}\in\\{1,..,t\\}$, Equation 11 gives:
$0<1-Exp_{t}\leq(1-Exp_{0})(1-\theta_{co}\ \alpha)^{t}$ (12)
As $\theta_{co}$, $\alpha$, and $Exp_{0}$ are pre-defined parameters
in the range $(0,1)$, we have:
$\lim_{t\to\infty}(1-Exp_{0})(1-\theta_{co}\ \alpha)^{t}=0$ (13)
Applying the Squeeze theorem to (12) and (13), we then have:
$\lim_{t\to\infty}(1-Exp_{t})=0$ (14)
In other words, the monotonically increasing $Exp_{t}$ function is asymptotic
to $1$; hence part (**) of the lemma is proven. ∎
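The closed form in Equation 11 and the bound in Equation 12 can also be checked numerically. The following Python sketch, with illustrative parameter values and randomly drawn cooperative feedback scores, verifies both at every step:

```python
import random

alpha, theta_co, exp0 = 0.05, 0.7, 0.5   # illustrative preset values
random.seed(1)
scores = [random.uniform(theta_co, 1.0) for _ in range(200)]  # cooperative feedback

exp_t, prod = exp0, 1.0
for t, score in enumerate(scores, start=1):
    prev = exp_t
    exp_t = exp_t + (1 - exp_t) * score * alpha        # Equation 10
    prod *= (1 - score * alpha)
    assert exp_t > prev                                 # monotone increase (*)
    # closed form (Equation 11): 1 - Exp_t = (1 - Exp_0) * prod
    assert abs((1 - exp_t) - (1 - exp0) * prod) < 1e-12
    # bound (Equation 12), since every score >= theta_co
    assert 0 < 1 - exp_t <= (1 - exp0) * (1 - theta_co * alpha) ** t
```

After 200 cooperative transactions $Exp_{t}$ is within $10^{-3}$ of its limit $1$, illustrating claim (**).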
As the Increase model is monotonically increasing, it is clear that the
Decrease model defined in Equation 3, which is based on ${\Delta}Exp_{t}$ in
Equation 2, is decreasing. The decrements depend on the current $Exp_{t}$
value and the uncooperative feedback score $\vartheta_{t}$. The decrease rate
$\beta$ is the ratio of the decrements to the increments, and is normally greater than $1$, since the current experience $Exp_{t}$ is
“difficult to gain but easy to lose”.
The Decay model defined in Equation 4 ensures that an experience relationship
weakens if there are no or only neutral transactions over a period of time.
This is because the decay value $\Delta{Decay_{t}}$ specified in Equation 5 is
always $>0$, as $0<Exp_{t-2}<1$ $\forall{t\geq 2}$; moreover, it is inversely
proportional to $Exp_{t-2}$, implying that a strong relationship persists
longer than a weak one.
#### 4.1.2 Algorithm and Simulation
Based on the Experience model defined in Section $3.2.1$ and the above
analysis, the algorithm that calculates the experience value $Exp(A,B)$ of entity $A$
toward entity $B$ is presented in mathematical-style pseudo-code as
Algorithm 1. It is worth noting that the parameters controlling the Experience
model are preset for our demonstration and should be optimised for specific
scenarios.
1
Input : Current experience value $Exp_{t-1}$
Previous experience value $Exp_{t-2}$
Feedback score $\vartheta_{t}$
Output : Updated experience value $Exp_{t}$
2
3Parameters Preset
4 ${Exp_{0}}=0.5$; $\triangleright$ In case there is no prior transaction,
$Exp_{t-1}$ and $Exp_{t-2}$ are set to $Exp_{0}$;
5 $min_{Exp}=0$; $max_{Exp}=1$; $\triangleright$ Experience value is
normalised in the range [0,1];
6 $\theta_{co}=0.7$; $\theta_{unco}=0.5$;
7 $\alpha=0.05$; $\beta=1.6$;
8 $\delta=0.005$; $\gamma=0.005$
9
10Begin
11 if _$\vartheta_{t}\geq\theta_{co}$_ then
12 $\triangleright$ Increase Model;
13 $Exp_{t}=Exp_{t-1}+\vartheta_{t}\alpha(1-\frac{Exp_{t-1}}{max_{Exp}})$
14
15 else if _$0 <\vartheta_{t}\leq\theta_{unco}$_ then
16 $\triangleright$ Decrease Model;
17
$Exp_{t}=Max(min_{Exp},Exp_{t-1}-\beta(1-\vartheta_{t})\alpha(1-\frac{Exp_{t-1}}{max_{Exp}}))$
18
19 else
20 $\triangleright$ No transaction ($\vartheta_{t}=0$) or neutral
$\theta_{unco}<\vartheta_{t}<\theta_{co}$
21 $\triangleright$ Decay Model;
22
$Exp_{t}=Max(Exp_{0},Exp_{t-1}-\delta{(1+\gamma-\frac{Exp_{t-2}}{max_{Exp}})})$
23
24
Return $Exp_{t}$
Alg. 1 Experience Calculation Algorithm
Figure 5: Increase, Decrease, and Decay in Experience relationship
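A direct Python transcription of Algorithm 1 might look as follows; it uses the demo presets listed above, and the function name and signature are ours:

```python
def update_experience(exp_prev, exp_prev2, score,
                      exp0=0.5, min_exp=0.0, max_exp=1.0,
                      theta_co=0.7, theta_unco=0.5,
                      alpha=0.05, beta=1.6, delta=0.005, gamma=0.005):
    """One step of Algorithm 1 (parameter values are the demo presets)."""
    if score >= theta_co:
        # Increase model
        return exp_prev + score * alpha * (1 - exp_prev / max_exp)
    if 0 < score <= theta_unco:
        # Decrease model, floored at min_exp
        return max(min_exp,
                   exp_prev - beta * (1 - score) * alpha * (1 - exp_prev / max_exp))
    # No transaction (score == 0) or neutral score: Decay model, floored at exp0
    return max(exp0, exp_prev - delta * (1 + gamma - exp_prev2 / max_exp))
```

For example, a cooperative score of $0.8$ at $Exp_{t-1}=0.5$ raises the experience to $0.52$, while an uncooperative score of $0.4$ lowers it to $0.476$, reflecting the asymmetry introduced by $\beta$.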
For demonstration purposes, the algorithm is implemented in $Matlab$ with
different controlling-parameter settings. As depicted in Fig. 5, two parameter
configurations are considered, in which the increase parameter
$\alpha$ is either $0.05$ or $0.1$, the decrease rate $\beta$ is either
$1.6$ or $4.0$, and the parameter pair for the decay model ($\delta$,
$\gamma$) is either ($0.005$, $0.005$) or ($0.01$, $0.01$). The initial value
is preset to $Exp_{0}=0.5$. As can be seen in Fig. 5, both
increase-model curves are asymptotic to $1$, as proven in the
theoretical analysis, at rates depending on the controlling
parameter $\alpha$. The results also indicate that stronger experience
relationships require more cooperative transactions to achieve. For instance,
with $\alpha=0.05$, the experience value increases from $0.5$ to $0.7$ after $12$
consecutive transactions, whereas it increases from $0.9$ to just $0.94$ after
the same number of transactions.
The simulation results of the Decrease model show that experience
relationships are prone to uncooperative transactions, suggesting that a strong
tie is hard to attain but easy to lose, particularly with a higher decrease rate
$\beta$. For instance, with $\alpha=0.05$ and $\beta=4.0$, it takes $50$
consecutive cooperative transactions to increase the experience value from
$0.5$ to $0.9$ but only $22$ uncooperative transactions to drop it from $0.9$
to $0.5$. As can also be seen from the figure, the decrease and decay models
exhibit the same behaviour: a strong tie is more resistant to uncooperative
transactions and decay, whereas a weaker one is more susceptible. These
characteristics of the experience model mirror human social
relationships, demonstrating the practicability of the proposed model.
### 4.2 Reputation Model
#### 4.2.1 Analysis
Denote by $Rep$, $Rep_{Pos}$, and $Rep_{Neg}$ the $(N\times 1)$ column vectors whose
elements are the overall, positive, and negative reputations
of the $N$ end-users in the DApp ecosystem, respectively. As specified in Equation 6,
$Rep_{Pos}(U)$ of a user $U$ is calculated from the positive reputations
$Rep_{Pos}(i)$ of all users $i$ holding positive experience $PE(i,U)$ with $U$.
Consequently, the $N$ positive reputations are mutually dependent,
so $Rep_{Pos}$ might not exist or might be
ambiguous (i.e., more than one value per user could satisfy
Equation 6). The same could happen for $Rep_{Neg}$ and, as a consequence, for $Rep$.
###### Lemma 4.2.
The reputation vector $Rep$ exists and is unique.
###### Proof.
According to Equation 8, $Rep$ exists and is unique if both $Rep_{Pos}$ and
$Rep_{Neg}$ exist and are unique.
The positive-experience $N\times N$ matrix $PE$ is constructed as follows:
$PE(i,j)=\begin{cases}Exp(i,j)&\text{if }Exp(i,j)\geq\theta\\\ 0&\text{if
}Exp(i,j)<\theta\\\ \end{cases}$ (15)
Let us define an $N\times N$ diagonal matrix $\mathcal{M}$ whose diagonal
elements are $m_{i}=C_{Pos}(i),\forall{i}\in\\{1,..,N\\}$, and let
$\mathcal{J}$ be the $N{\times}N$ all-ones matrix.
Based on Equation 6, $Rep_{Pos}$ can be represented in matrix notation as
follows:
$Rep_{Pos}=(\frac{1-d}{N}{\times}\mathcal{J}+d\times{PE}{\times}\mathcal{M}^{-1}){\times}Rep_{Pos}$
(16)
Let us define the $A_{Pos}$ matrix as follows:
$A_{Pos}=\frac{1-d}{N}{\times}\mathcal{J}+d\times{PE}{\times}\mathcal{M}^{-1}$
(17)
Thus, Equation 16 can be re-written:
$Rep_{Pos}=A_{Pos}{\times}Rep_{Pos}$ (18)
From Equation 18, we can see that $Rep_{Pos}$ is an $eigenvector$ of the matrix
$A_{Pos}$ with $eigenvalue=1$. Let us define a matrix $P=A_{Pos}^{T}$;
thus $P^{T}=A_{Pos}$, and Equation 18 can be re-written as follows:
$Rep_{Pos}=P^{T}{\times}Rep_{Pos}$ (19)
Equation 19 implies that $Rep_{Pos}$ is the stationary distribution of a
$Markov$ chain whose transition probability matrix is $P$. Let us construct a
discrete-time $Markov$ chain with transition probability matrix
$P=A_{Pos}^{T}$, consisting of $N$ states, in which the probability of moving from
state $i$ to state $j$ is $P(i,j)$. Note that for all ${i,j}\in\\{1,..,N\\}$,
we have:
$P(i,j)=A_{Pos}^{T}(i,j)=A_{Pos}(j,i)=\frac{1-d}{N}+d\times\frac{PE(j,i)}{m(j)}$
(20)
The Markov chain can then be constructed as follows:
$P(i,j)=\begin{cases}\frac{1-d}{N}+d\times\frac{PE(j,i)}{m(j)}&\text{if
}Exp(j,i)\geq\theta\\\ 1-(\frac{1-d}{N}+d\times\frac{PE(j,i)}{m(j)})&\text{if
}Exp(j,i)<\theta\\\ \end{cases}$ (21)
where $\theta$ is the threshold differentiating positive and negative
experiences. This $Markov$ chain models a random surfer with random jumps
over the directed graph $G(V,E)$ of experience relationships [25, 5, 10]. The
graph $G(V,E)$ is strongly connected with no dangling nodes: any two nodes
$(x,y)$ with no prior transaction have $Exp(x,y)=0$, i.e., the edge has weight
$0$, which does not mean the connection is absent. This
random-surfer Markov chain is a weighted PageRank model; as a
result, its stationary distribution, $Rep_{Pos}$, exists and is unique [5, 10,
15].
Similarly, the $Rep_{Neg}$ vector exists and is unique. Therefore, the overall
reputation vector $Rep$ exists and is unique. ∎
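The uniqueness argument can be illustrated on a toy example. Building a positive, column-stochastic matrix of the form $\frac{1-d}{N}\mathcal{J}+d\,W$ from synthetic positive-experience weights (the weights below are random, for illustration only) and running power iteration from two different starting vectors yields the same fixed point:

```python
import random

N, d = 5, 0.85
random.seed(0)
# synthetic positive-experience weights PE(i, j) in (0, 1), no self-loops
PE = [[0.0 if i == j else random.random() for j in range(N)] for i in range(N)]
C = [sum(row) for row in PE]                      # C_Pos(i)

# A[j][i] = (1-d)/N + d * PE[i][j] / C_Pos(i); columns sum to 1 by construction
A = [[(1 - d) / N + d * PE[i][j] / C[i] for i in range(N)] for j in range(N)]

def power_iter(v, steps=200):
    """Repeatedly apply A and renormalise; converges to the Perron vector."""
    for _ in range(steps):
        v = [sum(A[j][i] * v[i] for i in range(N)) for j in range(N)]
        s = sum(v)
        v = [x / s for x in v]
    return v

u = power_iter([1.0 / N] * N)                     # uniform start
w = power_iter([1.0] + [0.0] * (N - 1))           # concentrated start
# two very different initial vectors reach the same stationary distribution
assert all(abs(a - b) < 1e-10 for a, b in zip(u, w))
```

Since every entry of the matrix is at least $\frac{1-d}{N}>0$, the Perron-Frobenius theorem guarantees a unique stationary vector, which is what the two runs recover.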
#### 4.2.2 Algorithm and Simulation
As existence and uniqueness are proven, the reputation vector $Rep$ of the
$N$ end-users in the DApp ecosystem can be calculated by solving the matrix
equations defined in Equations 6 and 7. The traditional algebraic method for solving
an $N\times N$ matrix equation (e.g., Equation 6 or Equation 7), whose
complexity is $\mathcal{O}(N^{3})$, is impractical when the DApp
ecosystem is large (e.g., millions of users). Instead, the reputations of the $N$
end-users can be approximated to a predefined accuracy tolerance
using an iterative method, which is much more efficient [3, 19]. Thus, the
latter approach is used to solve Equations 6 and 7, as demonstrated by the
following pseudo-code (Algorithm 2). As defined in Equation 8, the overall
reputation of the $N$ end-users (i.e., the $N\times 1$ column vector $Rep$) is then
obtained by combining the two vectors $Rep_{Pos}$ and $Rep_{Neg}$, which are
the outputs of Algorithm 2.
1
Input : $(N\times N)$ matrix $E$ (set of edges in the directed graph $G(V,E)$
of $N$ end-users)
Positive reputation $N\times 1$ column vector $Rep_{Pos}$
Negative reputation $N\times 1$ column vector $Rep_{Neg}$
2
Output : Updated $Rep_{Pos}$ and $Rep_{Neg}$
3
4Parameters Preset
5 $\mathscr{d}=0.85$; $\triangleright$ damping factor in standard PageRank
6 $tol=1e-5$; $\triangleright$ Error tolerance
7 $thres=0.5$; $\triangleright$ threshold for positive and negative experience
8
9Begin
10 $\triangleright$ Elicit matrices $PE$ and $NE$ from matrix $E$;
11 $NE=zeros(N,N)$; $\triangleright$ initialise zero matrix for $NE$
12 $PE=zeros(N,N)$; $\triangleright$ initialise zero matrix for $PE$
13for _$i\leftarrow 1$ to $N$_ do
14 for _$j\leftarrow 1$ to $N$_ do
15 if _$E(i,j)\geq thres$_ then
16 $PE(i,j)=E(i,j)$
17 else if _$0 <E(i,j)<thres$_ then
18 $NE(i,j)=1-E(i,j)$
19
20
21
22 $\triangleright$ Constitute $1\times N$ row vectors $C_{Pos}$ and
$C_{Neg}$;
23 $C_{Pos}=zeros(1,N)$; $\triangleright$ initialise zero vector for $C_{Pos}$
24 $C_{Neg}=zeros(1,N)$; $\triangleright$ initialise zero vector for $C_{Neg}$
25for _$i\leftarrow 1$ to $N$_ do
26 for _$j\leftarrow 1$ to $N$_ do
27 $C_{Pos}(1,i)=C_{Pos}(1,i)+PE(i,j)$;
28 $C_{Neg}(1,i)=C_{Neg}(1,i)+NE(i,j)$;
29
30
31 $\triangleright$ Constitute transition matrices of $PE$ and $NE$;
32 for _$i\leftarrow 1$ to $N$_ do
33 for _$j\leftarrow 1$ to $N$_ do
34 if _$PE(j,i) >0$_ then
35 $A_{Pos}(i,j)=\frac{PE(j,i)}{C_{Pos}(1,j)}$; $\triangleright$ Transition
matrix for PE
36 if _$NE(j,i) >0$_ then
37 $A_{Neg}(i,j)=\frac{NE(j,i)}{C_{Neg}(1,j)}$; $\triangleright$ Transition
matrix for NE
38
39
40
41 $\triangleright$ Update $Rep_{Pos}$ and $Rep_{Neg}$ based on Equations 6
and 7;
42
43$I=ones(N,1)$; $\triangleright$ create vector of all ones
44 $err=1$; $\triangleright$ Total error of the current iteration
45 while _$err\geq tol$_ do
46 $temp_{Pos}=\mathscr{d}\times A_{Pos}\times
Rep_{Pos}+\frac{(1-\mathscr{d})}{N}\times I$;
47 $temp_{Neg}=\mathscr{d}\times A_{Neg}\times
Rep_{Neg}+\frac{(1-\mathscr{d})}{N}\times I$;
48
49 $\triangleright$ update $err$, $\mathcal{N}(v)$ is the Euclidean norm of
vector $v$;
50 $err=\mathcal{N}(temp_{Pos}-Rep_{Pos})+\mathcal{N}(temp_{Neg}-Rep_{Neg})$;
51
52 $Rep_{Pos}=temp_{Pos}$; $\triangleright$ update $Rep_{Pos}$ vector
53 $Rep_{Neg}=temp_{Neg}$; $\triangleright$ update $Rep_{Neg}$ vector
54
55
Return [$Rep_{Pos}$, $Rep_{Neg}$]
Alg. 2 Reputation algorithm using iterative method
Figure 6: Convergence of the reputation algorithm using the iterative method with different sizes of DApp ecosystem
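The core iterative loop of Algorithm 2 can be sketched in Python as follows; the column-normalised matrix is assumed to be precomputed, as in the first half of the algorithm, and the names are ours:

```python
import math

def reputation(A, d=0.85, tol=1e-5):
    """Core loop of Algorithm 2: iterate Rep = d*A*Rep + (1-d)/N until the
    Euclidean norm of the update falls below tol.

    A: N x N column-normalised experience matrix (list of row lists).
    """
    N = len(A)
    rep = [1.0 / N] * N          # bootstrap value for every user
    err = 1.0
    while err >= tol:
        new = [d * sum(A[i][j] * rep[j] for j in range(N)) + (1 - d) / N
               for i in range(N)]
        # total error: Euclidean norm of the change between iterations
        err = math.sqrt(sum((a - b) ** 2 for a, b in zip(new, rep)))
        rep = new
    return rep

# tiny 3-user example with column-stochastic weights; user 1's entire
# weight points to user 0, so user 0 ends up with the largest reputation
A = [[0.0, 1.0, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.0, 0.0]]
rep = reputation(A)   # roughly [0.433, 0.333, 0.234]
```

Because the update is a contraction with factor $d$, the loop converges geometrically, which matches the rapid error decay seen in Fig. 6.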
The simulation of the proposed reputation calculation algorithm is conducted
for different DApp ecosystem sizes (i.e., $N=1000$, $4000$, $8000$ and
$16,000$) with error tolerance $tol=10^{-5}$, which is accurate enough to
rank the $N$ end-users in the DApp ecosystem. As depicted in Algorithm 2, the
total error $err$ is calculated as the Euclidean norm of the difference
between the $Rep$ vectors in two consecutive iterations. Fig. 6 illustrates the
convergence rate of the algorithm, showing a rapid reduction of the total
error as more iterations are carried out. As can be seen from the figure, the
algorithm converges in fewer than $70$ iterations (to be exact: $54$, $61$,
$64$, and $66$ iterations) for the four DApp ecosystem sizes $N=1000$, $4000$,
$8000$ and $16,000$, respectively. These results suggest that the reputation
model scales well to a huge network, as the number of iterations grows roughly
linearly in $\log{N}$.
## 5 Technical Solutions and Implementation
This section provides a real-world demonstration of the proposed
decentralised trust system and of how a decentralised storage service interacts
with it. The demonstration is carried out on top of the Ethereum
permissionless BC platform, in which system components, functionality,
technical challenges, and solutions are identified as an implementation
reference for developers who wish to build a similar system. The source code of
the demonstration can be found at
https://github.com/nguyentb/Decentralised_Trust_Eth_IPFS.git; the
Smart Contracts source code is in the $/packages/ethereum-core$ folder of the
repository.
### 5.1 System Setup
The DDS service and the proposed decentralised trust system are implemented on
top of the permissionless Ethereum platform, to which the fundamental elements for
developing a DApp have already been deployed. For instance, in our platform
setup, Ethereum $account$ and $address$ are leveraged for IdM,
$Metamask$ (https://metamask.io/) serves as the BC browser and wallet service,
and $web3/web3j$ (https://github.com/web3j/web3j) provide the DApp APIs for
interacting with the Ethereum network (e.g., SCs and end-users). SCs are
implemented in Solidity using the Truffle suite
framework (https://truffleframework.com) and deployed in Ethereum test-nets
(i.e., we use several test-nets including $Ropsten$, $Kovan$, $Rinkeby$,
and $Goerli$) for real-world experience. We assume that the IPFS storage nodes are
also clients of the DApp ecosystem (e.g., Ethereum clients in the $Ropsten$,
$Kovan$ or $Rinkeby$ test-nets) that get incentivised for providing storage
capability (e.g., IPFS storage nodes $host$ and $pin$ the hashes of requested
files from data owners).
The overall procedure of the system setup is illustrated in Fig. 7. As can
be seen in the sequence diagram, a client starts using the DDS service by
making a transaction to a DDS SC (step (1)), which invokes the enFeedback function
in the FeEx SC of the trust system to grant the client permission to give feedback
to the DDS nodes (steps (3) and (4)). Once feedback is received from the end-user
(step (5)), the experience relationships between the user and the DDS nodes are
updated on-chain by executing the expCal function in the FeEx SC (step (6)). In
contrast, as the reputation calculation is resource-intensive, it is
impractical to implement the algorithm (i.e., Algorithm 2) on-chain; instead,
only the results (i.e., the reputation values of entities) are publicly recorded
on-chain. This challenge can be circumvented by using the Oraclize service, as
demonstrated in steps (7-1), (7-2), and (7-3) in Fig. 7. For the same reason, the
Rep SC is not invoked whenever an experience relationship is updated; instead,
it is periodically self-executed, for example every 100 blocks.
Figure 7: Sequence diagram of how the decentralised trust system is
incorporated with the DDS service and how the proposed DER trust calculation
is performed
### 5.2 Feedback and Experience Smart Contract
This SC, denoted as FeEx, contains the feedback information and experience
relationship of an entity $A$ (i.e., a DDS client) toward an entity $B$ (an IPFS
storage node) with which a transaction has been carried out
(i.e., $A$ uses the DDS service provided by $B$, depicted by steps (1) and (2)
in Fig. 7). The $FeEx$ SC also provides functions for end-users to give feedback
and to update experience relationships accordingly. Note that $A$ and $B$ are
identified by their Ethereum $address$ in the ecosystem.
#### 5.2.1 Ledger Data Model
Necessary information about users’ feedback and experience relationships is
permanently recorded on-chain using state variables defined in the FeEx SC. These
state variables serve as a public distributed ledger comprising the full
history of state transitions of all experience relationships between any two
entities. It is convenient to obtain the latest information about any experience
relationship, as Ethereum supports a key-value data format and the latest state
of the ledger (recording the most recent experience-relationship information)
can be found in the most recent block.
FeEx SC stores a state variable, called FeExInfo, in its contract storage in
form of nested key-value pairs using Ethereum built-in $mapping$ type as
follows:
struct FeExStrut {
uint expValue;
uint fbScore;
bool perFlag;
}
mapping (address=>mapping (address=>FeExStrut))
public FeExInfo;
FeExInfo consists of information about the relationship from $A$ toward $B$,
specified in the FeExStrut data structure: (i) the $Exp(A,B)$ value, (ii) the feedback
score, and (iii) a flag indicating whether $A$ has permission to give $B$
feedback. Any party or SC can easily access the FeExInfo recorded on-chain to
obtain the desired information.
#### 5.2.2 Functionality
The FeEx SC contains two main functions: (i) enFeedback enables/revokes the
permission of a data owner $A$ to give feedback to a storage node $B$ by
updating the permission flag in FeExInfo with the associated transaction ID; and
(ii) $expCal$ calculates the $Exp(A,B)$ value and updates FeExInfo whenever $A$
gives feedback to $B$. The enFeedback function is called by an SC of the
DDS service once a transaction has been carried out (illustrated by step (3)
in Fig. 7).
The $expCal$ function implements the experience calculation following
Algorithm 1 proposed in Section 4.1. It is worth noting that there is no
global time server synchronised among nodes in the Ethereum BC platform, so
the implementation of the decay model is not straightforward. To
circumvent this challenge, $expCal$ determines $time$ in Algorithm 1 using the
block height (the $block.number$ property), so that $Exp(A,B)$ decays every fixed
number of blocks if no transaction has occurred between $A$ and $B$ during that
period.
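The block-height clock can be sketched as follows; the interval value and the function name are illustrative, not taken from the SC code:

```python
DECAY_INTERVAL = 100  # illustrative: one decay step per 100 blocks

def decay_steps(last_tx_block, current_block, interval=DECAY_INTERVAL):
    """How many decay steps to apply, using block height as the clock.

    Mirrors the idea of deriving 'time' in Algorithm 1 from block.number,
    since no synchronised wall clock is available on-chain.
    """
    elapsed = max(0, current_block - last_tx_block)
    return elapsed // interval
```

For instance, `decay_steps(1000, 1250)` returns `2`: two full 100-block intervals have elapsed with no transaction, so the decay model is applied twice.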
### 5.3 Reputation Smart Contract
#### 5.3.1 Ledger Data Model
The Reputation SC, denoted as $Rep$, records the positive and negative
reputations of all users (e.g., IPFS storage nodes) using two state variables,
RepPosInfo and RepNegInfo, respectively. The data model for the two state
variables is a mapping between a user’s address and a value:
mapping (address => uint)
public RepPosInfo;
mapping (address => uint)
public RepNegInfo;
These two state variables play the role of a public distributed ledger
permanently recording a full history of state transitions of the positive and
negative reputation of all users.
#### 5.3.2 Functionality
The reputation calculation algorithm (Algorithm 2) performs matrix
multiplication over numerous iterations, which requires a large number of
operations and local-variable manipulations. Consequently, the resource
consumption and the $gas$ cost of executing this algorithm on-chain are
extremely high, making it infeasible to implement in the $Rep$ SC. To bypass
this challenge, off-chain storage and calculation appear as a promising
solution. The idea is that high-volume data and resource-intensive
tasks should be stored and processed off-chain; only the results of the
off-chain tasks are piggybacked onto on-chain ledgers and/or calculations.
However, as an SC must be executed deterministically, there might be room
for ambiguity if SC executions rely on information from off-chain sources. In
addition, this practice could turn a decentralised system into a centralised
one due to the dependency on an external source of information. This dilemma
is known as the “Oracle problem” [36]. The following section
describes in detail how the $Rep$ SC accomplishes the off-chain reputation
calculation while mitigating the Oracle problem.
### 5.4 Off-chain Computation for Reputation
The Oracle problem can be mitigated by leveraging a decentralised trusted
provider to feed the required data into SCs. For instance,
Oraclize (https://docs.provable.xyz/) deploys an SC on the Ethereum platform as
an API for other SCs to interact with the outside
world (https://github.com/provable-things/ethereum-api/blob/master/oraclizeAPI_0.4.sol). The Oraclize SC works as a bearer that
gets the required data from an external source and delivers it to the
requesting SCs in a decentralised fashion. Furthermore, to alleviate
ambiguity, it provides an authenticity proof as an assurance of data
integrity. In the implementation, we follow this Oraclize solution to
calculate users’ reputations off-chain.
Assume that there is already an off-chain server, called RepCalService, that
implements Algorithm 2 to calculate positive and negative reputations and
provides an API (e.g., a REST API) to retrieve the calculation results. The
implementation of this off-chain service is straightforward: it queries the
Ethereum BC to obtain the experience relationships stored in FeExInfo and the
current reputation values from the RepPosInfo and RepNegInfo state variables as
inputs for Algorithm 2. The $Rep$ SC then periodically calls this service to
update the reputation values in a decentralised fashion using the Oraclize
solution. The implementation reference below shows how to execute these tasks.
Specifically, $Rep$ interacts with the Oraclize service by importing the
Oraclize SC (i.e., provableAPI.sol) and making a query to RepCalService using the
oraclizeQuery() function. A callback function also needs to be implemented in
order to receive the query results and to update RepPosInfo and
RepNegInfo accordingly.
import "./provableAPI.sol";
contract Rep is usingProvable {
function oraclizeQuery() {
// make an Oraclize query to the service using URL
oraclize_query("URL", RepCalService_API_URL);
}
function __callback(bytes32 _requestID, string _result) {
// only Oraclize is permitted to invoke the function
require (msg.sender == oraclize_cbAddress());
// update RepPosInfo & RepNegInfo
RepPosInfo[addr] = getRepPos(_result, addr);
RepNegInfo[addr] = getRepNeg(_result, addr);
}
}
### 5.5 Integration of DDS service and Trust System
Suppose the DDS service implements some SCs for the data-storage business
logic between data owners and storage nodes, which is out of the scope of this
paper. The main point is that once a transaction has been
accomplished between a client and an IPFS storage node, the enFeedback
function in $FeEx$ is invoked, which enables the owner to give feedback to
its counterpart and thereby establish experience and trust relationships (step
(2) in Fig. 7). For this reason, a DDS SC (i.e., the caller SC) defines an
interface of the $FeEx$ SC (i.e., the callee SC) and calls it with the callee’s
contract address, as demonstrated as follows:
contract DDS {
function ePayment(address _storageNode,
uint _amount, string _datahash) {
...
if (success) {
//call FeEx using deployed address scAddr
FeEx fe = FeEx(scAddr);
fe.enFeedback(msg.sender, _storageNode,
_transID);
}
}
}
contract FeEx {
function enFeedback(address _owner,
address _storageNode, string _transID);
function expCal(address _owner, uint _fbScore,
address _storageNode, string _transID);
}
Similarly, when a data owner gives feedback toward a storage node (with value
$fbScore$), DDS invokes the $expCal$ function, which calculates the experience
relationship between the two and updates FeExInfo accordingly. In the
demonstration, feedback scores are randomly generated; in real-world
scenarios, however, a function measuring DDS QoS should be implemented to
correctly reflect the service quality. As Solidity supports interactions
between SCs deployed on the Ethereum platform, the proposed trust system is
feasibly actualised: any DApp, including DDS, can be incorporated by invoking
public functions of, or accessing trust-related information from, the state
variables defined in the SCs of the proposed trust system.
Finally, to reinforce service quality for a client, the DDS service queries
FeExInfo, RepPosInfo, and RepNegInfo stored in the $FeEx$ and $Rep$ SCs,
respectively, to obtain the reputation and experience values related to this
client. The DDS service then aggregates this information to finalise the trust
values between the client and the storage nodes and provides the most
trustworthy counterparts to the client.
## 6 System Analysis and Discussion
The demonstration system presented in Section 5 is a proof-of-concept of a
universal decentralised trust system incorporated into a BC
infrastructure as an underlying service supporting DApps. This section
investigates and discusses the practicality, performance, and
security-related aspects of the proposed trust system.
### 6.1 Feasibility and Performance Evaluation
Practically, a variety of factors should be taken into account when deploying
the trust system for real-world usage. For instance, the $gas$ cost of SC
execution in the Ethereum Virtual Machine is high, as the SCs require
high-volume storage for the state variables, as well as numerous operations and
local-variable manipulations in the $FeEx$ SC, in addition to the cost of using
the Oraclize service in the $Rep$ SC. This calls for further research on SC
optimisation [11] and better off-chain storage and calculation solutions.
As most SCs, including the $FeEx$ and $Rep$ SCs, are dedicated to performing
critical tasks with minimal storage and computation, the performance of a DApp
depends heavily on the BC platform rather than on the application built on top of it.
At present, permissionless BC platforms offer limited performance in terms of
throughput and scalability. For instance, the Bitcoin and Ethereum
main-nets only handle about $7$ and $15$ transactions per
second, respectively (https://blockchain.info/charts/n-transactions). To
illustrate the real-world performance, we deploy our system to different BC
platforms, namely the Ethereum test-nets Ropsten, Kovan, Rinkeby, and
Goerli. We measure the latency of both READ and WRITE transactions
to the ledger FeExInfo in the FeEx SC in the four test-nets. The results are
shown in Fig. 8. The performance measurement script can also be found in the
same repository
(https://github.com/nguyentb/Decentralised_Trust_Eth_IPFS/tree/master/packages/performanceAnalysis).
Figure 8: Latency of READ and WRITE from/to Smart Contracts in Ethereum test-
nets
It is worth noting that for READ transactions the Ethereum platform does not
perform the consensus mechanism, whereas for WRITE transactions the consensus
mechanism (i.e., Proof-of-Work (Ethash) in Ropsten, Proof-of-Authority
(Authority Round) in Kovan, and Proof-of-Authority (Clique) in both Rinkeby and
Goerli) is carried out, as the state of the ledger changes. In detail,
WRITE transactions require further complex processes including block
formulation and mining, broadcasting the mined block to peers in the network,
block verification, and updating the ledger. This is why the latency of READ
transactions is much smaller than that of WRITE transactions, as confirmed by the
results in Fig. 8. As can be seen in the figure, the average latency of READ
transactions is roughly the same in all four test-nets, at around 350-420ms,
with relatively small standard deviations. This indicates consistency when
querying data from the ledger. Compared to READ transactions, the average
latency of WRITE transactions rises significantly, to $6013$, $10376$,
$16973$, and $17727$ $ms$ (i.e., $15$ to $42$ times higher) in Kovan,
Rinkeby, Goerli, and Ropsten, respectively. The standard deviations, however,
differ across the four test-nets: Ropsten and Goerli introduce considerably
higher WRITE latency than Kovan and Rinkeby ($2-3$ times), but their WRITE
transactions are more stable, as the standard deviations are small.
In the Rinkeby test-net, in particular, the standard deviation is substantially
high: the latency spreads over a wide range, from 4500 to 17350 ms.
The results also show the block latency (i.e., the number of blocks counted from
when a transaction is broadcast to the network until it is confirmed, that is,
written in the latest block) of WRITE transactions in the four test-nets. In
Kovan and Rinkeby, WRITE transactions are almost always appended and confirmed in the
next block, demonstrated by a block latency close to $1$, whereas in Goerli and
Ropsten it can take one or two more blocks before the transaction is
written into a new block. This is probably one of the reasons why the latency
in Goerli and Ropsten is higher than in Kovan and Rinkeby.
The system latency results indicate a technical barrier in the performance of
Ethereum-based systems, which limits the usability of the proposed decentralised
trust system to small-scale services. Note that, unlike the other
test-nets, Ropsten performs the Proof-of-Work consensus mechanism, similar to
the Ethereum main-net; thus, it best reproduces the Ethereum production
environment. Nevertheless, besides SC optimisation for individual DApps, system
performance immensely relies on the underlying BC network, which requires
further research on consensus mechanisms [40], off-chain [26] and sharding
solutions [39], etc., for a better DApp ecosystem.
### 6.2 System Security
The advanced capabilities of the BC platform play a key role in providing a secure
and trustworthy environment for DApps. Although current BC and SC technologies
still pose both performance limitations and security threats, we assume that
the decentralised nature of the BC ensures that no adversary can corrupt
the BC network and change the contents of the ledgers, as this would require
compromising the majority of the network’s resources. Moreover, no
adversary can impersonate another entity, as the public-key signatures
(e.g., the Elliptic Curve Digital Signature Algorithm (ECDSA) used in Ethereum)
cannot be forged.
Security threats to our proposed decentralised trust system come from typical
reputation-related attacks such as self-promoting, slandering (good/bad
mouthing), and whitewashing [16]. In our system, in order to provide
feedback, an entity is required to make a transaction toward the
counter-party, which costs at least the transaction fee. More importantly,
the proposed reputation mechanism itself can mitigate such attacks. For
instance, if a newly created entity (whose reputation value is therefore
minimal) makes a transaction and then gives bad/good feedback about a victim,
the contribution of this feedback to the victim's reputation value is
minimal. This is because the reputation value of the victim is calculated
from both the experience and the reputation score of the participants who
transact with the victim (as indicated in Equations (6) and (7)).
Conversely, if an entity is highly reputed (and thus probably not malicious),
its contribution to another's reputation is large. In general, our reputation
mechanism shares characteristics with the PageRank algorithm behind Google's
web-ranking engine: it is not easy to increase the ranking of a web page by
creating many new web pages that link to it [4]. By nature, no feedback-based
reputation system can fully prevent such attacks; however, we believe our
approach mitigates these behaviours well.
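The weighting principle described above can be made concrete with a small sketch. This is purely illustrative Python, not a reproduction of Equations (6) and (7): the function `update_reputation`, its parameters, and the update rule are hypothetical, and only the general idea that feedback is scaled by the rater's own reputation is taken from the text.

```python
def update_reputation(current, feedbacks, rate=0.1):
    """Toy reputation update: each feedback nudges the score toward
    the rater's judgement, scaled by the rater's own reputation.

    current:   victim's reputation in [0, 1]
    feedbacks: list of (rater_reputation, score) pairs, both in [0, 1]
    rate:      hypothetical learning-rate parameter
    """
    for rater_rep, score in feedbacks:
        # A freshly created rater (reputation ~ 0) barely moves the score,
        # which blunts self-promoting and slandering attacks.
        current += rate * rater_rep * (score - current)
        current = min(max(current, 0.0), 1.0)  # clamp to [0, 1]
    return current
```

For example, a slandering attack from a new entity (rater reputation 0.01) leaves a victim's reputation of 0.8 almost unchanged, while the same negative feedback from a highly reputed entity (0.9) lowers it noticeably.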
## 7 Conclusion
In this paper, we have provided a comprehensive concept, system model, and
design of a decentralised trust system for the DApp ecosystem, along with
detailed analysis, algorithms, and simulations that actualise the DER trust
model. Foremost, we have developed a proof-of-concept system implementing the
DER trust model on top of the Ethereum permissionless BC. The trust system is
then able to integrate with the DDS service to support data owners in
selecting trustworthy storage nodes.
We have also discussed technical difficulties, along with prospective
solutions and an implementation reference, in the development of the proposed
decentralised trust system. Existing technical barriers that require further
effort are also outlined. We believe our research contributes significantly
to further activities in trust-related research areas and opens future
research directions towards a trustworthy DApp ecosystem.
## Acknowledgement
This research was supported by the HNA Research Centre for Future Data
Ecosystems at Imperial College London and the Innovative Medicines Initiative
2 IDEA-FAST project under grant agreement No 853981.
## References
* Ali et al. [2016] Ali, M., Nelson, J., Shea, R., Freedman, M.J., 2016. Blockstack: A global naming and storage system secured by blockchains, in: USENIX Annual Technical Conference, pp. 181–194.
* Almenárez et al. [2011] Almenárez, F., Marín, A., Díaz, D., Cortés, A., Campo, C., García-Rubio, C., 2011. Trust management for multimedia p2p applications in autonomic networking. Ad Hoc Networks 9, 687–697.
* Arasu et al. [2002] Arasu, A., Novak, J., Tomkins, A., Tomlin, J., 2002. Pagerank computation and the structure of the web: Experiments and algorithms, in: Proceedings of the Eleventh International World Wide Web Conference, Poster Track, pp. 107–117.
* Avrachenkov and Litvak [2006] Avrachenkov, K., Litvak, N., 2006\. The effect of new links on google pagerank. Stochastic Models 22, 319–331.
* Blum et al. [2006] Blum, A., Chan, T.H., Rwebangira, M.R., 2006. A random-surfer web-graph model, in: 2006 Proceedings of the Third Workshop on Analytic Algorithmics and Combinatorics (ANALCO), SIAM. pp. 238–246.
* Bonomi et al. [2012] Bonomi, F., Milito, R., Zhu, J., Addepalli, S., 2012\. Fog computing and its role in the internet of things, in: Proceedings of the first edition of the MCC workshop on Mobile cloud computing, pp. 13–16.
* Brin and Page [2012] Brin, S., Page, L., 2012. Reprint of: The anatomy of a large-scale hypertextual web search engine. Computer networks 56, 3825–3833.
* Buterin [2014] Buterin, V., 2014. Daos, dacs, das and more: An incomplete terminology guide. Ethereum Blog 6, 2014.
* Buterin et al. [2014] Buterin, V., et al., 2014. A next-generation smart contract and decentralized application platform. White paper 3.
* Chebolu and Melsted [2008] Chebolu, P., Melsted, P., 2008. Pagerank and the random surfer model, in: SODA, pp. 1010–1018.
* Chen et al. [2017] Chen, T., Li, X., Luo, X., Zhang, X., 2017. Under-optimized smart contracts devour your money, in: 2017 IEEE 24th International Conference on Software Analysis, Evolution and Reengineering (SANER), IEEE. pp. 442–446.
* Chen et al. [2020] Chen, X., Ding, J., Lu, Z., 2020. A decentralized trust management system for intelligent transportation environments. IEEE Transactions on Intelligent Transportation Systems.
* Debe et al. [2019] Debe, M., Salah, K., Rehman, M.H.U., Svetinovic, D., 2019. IoT public fog nodes reputation system: A decentralized solution using Ethereum blockchain. IEEE Access 7, 178082–178093.
* Hackius and Petersen [2017] Hackius, N., Petersen, M., 2017. Blockchain in logistics and supply chain: trick or treat?, in: Proceedings of the Hamburg International Conference of Logistics (HICL), epubli. pp. 3–18.
* Haveliwala et al. [2003] Haveliwala, T., Kamvar, S., Jeh, G., 2003. An analytical comparison of approaches to personalizing pagerank. Technical Report. Stanford.
* Hoffman et al. [2009] Hoffman, K., Zage, D., Nita-Rotaru, C., 2009. A survey of attack and defense techniques for reputation systems. ACM Computing Surveys (CSUR) 42, 1–31.
* Jawaheer et al. [2014] Jawaheer, G., Weller, P., Kostkova, P., 2014. Modeling user preferences in recommender systems: A classification framework for explicit and implicit user feedback. ACM Transactions on Interactive Intelligent Systems (TiiS) 4, 1–26.
* Johnston et al. [2014] Johnston, D., Yilmaz, S.O., Kandah, J., Bentenitis, N., Hashemi, F., Gross, R., Wilkinson, S., Mason, S., 2014. The general theory of decentralized applications - dapps.
* Kamvar et al. [2003] Kamvar, S.D., Haveliwala, T.H., Manning, C.D., Golub, G.H., 2003. Extrapolation methods for accelerating pagerank computations, in: Proceedings of the 12th international conference on World Wide Web, pp. 261–270.
* Kochovski et al. [2019] Kochovski, P., Gec, S., Stankovski, V., Bajec, M., Drobintsev, P.D., 2019. Trust management in a blockchain based fog computing platform with trustless smart oracles. Future Generation Computer Systems 101, 747–759.
* Korpela et al. [2017] Korpela, K., Hallikas, J., Dahlberg, T., 2017. Digital supply chain transformation toward blockchain integration, in: proceedings of the 50th Hawaii international conference on system sciences.
* Li et al. [2018] Li, R., Song, T., Mei, B., Li, H., Cheng, X., Sun, L., 2018. Blockchain for large-scale internet of things data storage and protection. IEEE Transactions on Services Computing.
* Meiklejohn et al. [2013] Meiklejohn, S., Pomarole, M., Jordan, G., Levchenko, K., McCoy, D., Voelker, G.M., Savage, S., 2013. A fistful of bitcoins: characterizing payments among men with no names, in: Proceedings of the 2013 conference on Internet measurement conference, pp. 127–140.
* Moinet et al. [2017] Moinet, A., Darties, B., Baril, J.L., 2017. Blockchain based trust & authentication for decentralized sensor networks. arXiv preprint arXiv:1706.01730.
* Page et al. [1999] Page, L., Brin, S., Motwani, R., Winograd, T., 1999. The PageRank citation ranking: Bringing order to the web. Technical Report. Stanford InfoLab.
* Poon and Dryja [2016] Poon, J., Dryja, T., 2016. The bitcoin lightning network: Scalable off-chain instant payments.
* Roberts et al. [2009] Roberts, S.G., Dunbar, R.I., Pollet, T.V., Kuppens, T., 2009. Exploring variation in active network size: Constraints and ego characteristics. Social Networks 31, 138–146.
* Shafagh et al. [2017] Shafagh, H., Burkhalter, L., Hithnawi, A., Duquennoy, S., 2017\. Towards blockchain-based auditable storage and sharing of iot data, in: Proceedings of the 2017 on Cloud Computing Security Workshop, ACM. pp. 45–50.
* She et al. [2019] She, W., Liu, Q., Tian, Z., Chen, J.S., Wang, B., Liu, W., 2019. Blockchain trust model for malicious node detection in wireless sensor networks. IEEE Access 7, 38947–38956.
* Truong et al. [2019] Truong, N.B., Lee, G.M., Um, T.W., Mackay, M., 2019. Trust evaluation mechanism for user recruitment in mobile crowd-sensing in the internet of things. IEEE Transactions on Information Forensics and Security 14, 2705–2719.
* Truong et al. [2017] Truong, N.B., Um, T.W., Zhou, B., Lee, G.M., 2017. From personal experience to global reputation for trust evaluation in the social internet of things, in: GLOBECOM 2017-2017 IEEE Global Communications Conference, IEEE. pp. 1–7.
* Truong et al. [2018] Truong, N.B., Um, T.W., Zhou, B., Lee, G.M., 2018. Strengthening the blockchain-based internet of value with trust, in: 2018 IEEE International Conference on Communications (ICC), IEEE. pp. 1–7.
* Tyagi and Sharma [2012] Tyagi, N., Sharma, S., 2012. Weighted page rank algorithm based on number of visits of links of web page. International Journal of Soft Computing and Engineering (IJSCE), ISSN 2231–2307.
* Urena et al. [2019] Urena, R., Kou, G., Dong, Y., Chiclana, F., Herrera-Viedma, E., 2019. A review on trust propagation and opinion dynamics in social networks and group decision making frameworks. Information Sciences 478, 461–475.
* Xing and Ghorbani [2004] Xing, W., Ghorbani, A., 2004. Weighted pagerank algorithm, in: Proceedings. Second Annual Conference on Communication Networks and Services Research, 2004, IEEE. pp. 305–314.
* Xu et al. [2016] Xu, X., Pautasso, C., Zhu, L., Gramoli, V., Ponomarev, A., Tran, A.B., Chen, S., 2016. The blockchain as a software connector, in: 2016 13th Working IEEE/IFIP Conference on Software Architecture (WICSA), IEEE. pp. 182–191.
* Yan et al. [2014] Yan, Z., Zhang, P., Vasilakos, A.V., 2014. A survey on trust management for internet of things. Journal of network and computer applications 42, 120–134.
* Yang et al. [2018] Yang, Z., Yang, K., Lei, L., Zheng, K., Leung, V.C., 2018\. Blockchain-based decentralized trust management in vehicular networks. IEEE Internet of Things Journal 6, 1495–1505.
* Zamani et al. [2018] Zamani, M., Movahedi, M., Raykova, M., 2018. Rapidchain: Scaling blockchain via full sharding, in: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 931–948.
* Zheng et al. [2017] Zheng, Z., Xie, S., Dai, H., Chen, X., Wang, H., 2017. An overview of blockchain technology: Architecture, consensus, and future trends, in: 2017 IEEE international congress on big data (BigData congress), IEEE. pp. 557–564.
Dr. Nguyen B. Truong is currently a Research Associate at the Data Science
Institute, Imperial College London, United Kingdom. He received his PhD, MSc,
and BSc degrees from Liverpool John Moores University, United Kingdom, Pohang
University of Science and Technology, Korea, and Hanoi University of Science
and Technology, Vietnam, in 2018, 2013, and 2008, respectively. He was a
Software Engineer at DASAN Networks, a leading company in networking products
and services in South Korea, in 2012-2015. His research interests include,
but are not limited to, Data Privacy, Security, and Trust, Personal Data
Management, Distributed Systems, and Blockchain.
Dr. Gyu Myoung Lee received his BS degree from Hong Ik University and his MS
and PhD degrees from the Korea Advanced Institute of Science and Technology
(KAIST), Korea, in 1999, 2000, and 2007, respectively. He is a Professor at
the Department of Computer Science, Liverpool John Moores University, UK. He
is also with KAIST as an adjunct professor. His research interests include
Future Networks, IoT, and multimedia services. He has actively contributed to
standardization in ITU-T as a Rapporteur, oneM2M, and IETF. He is chair of
the ITU-T Focus Group on data processing and management to support IoT and
Smart Cities & Communities.
Dr. Kai Sun is the Operation Manager of the Data Science Institute at
Imperial College London. She received her MSc and PhD degrees in Computing
from Imperial College London, in 2010 and 2014, respectively. From 2014 to
2017, she was a Research Associate at the Data Science Institute at Imperial
College London, working on EU IMI projects including U-BIOPRED and eTRIKS,
responsible for translational data management and analysis. She was the
manager of the HNA Centre of Future Data Ecosystem in 2017-2018. Her research
interests include translational research management, network analysis, and
decentralised systems.
Mr. Florian Guitton received a BSc in Software Engineering from Epitech
(France) in 2011 and an MSc in Advanced Computing from the University of Kent
(United Kingdom) in 2012. In 2012 he joined the Discovery Sciences Group at
Imperial College London, where he became a Research Assistant working on the
iHealth, eTRIKS, and IDEA-FAST EU programs. He is currently a PhD candidate
at the Data Science Institute, Imperial College London, working on
distributed data collection and analysis pipelines in mixed-security
environments, with a focus on optimising user-facing experiences.
Dr. Yike Guo (FREng, MAE) is the director of the Data Science Institute at
Imperial College London and the Vice-President (Research and Development) of
Hong Kong Baptist University. He received his BSc degree in Computing Science
from Tsinghua University, China, in 1985 and his PhD in Computational Logic
from Imperial College London in 1993. He has been a Professor of Computing
Science in the Department of Computing at Imperial College London since 2002.
He is a fellow of the Royal Academy of Engineering and a member of the
Academia Europaea. His research interests are in the areas of data mining for
large-scale scientific applications, including distributed data mining
methods, machine learning, and informatics systems.
Università degli Studi di Milano
Corso di Dottorato in
Scienze Matematiche
Ciclo XXXIII
MAT-05: Analisi Matematica
Sorbonne Université
École doctorale de
Sciences Mathématiques
de Paris Centre
Specialité : Mathématiques
Dipartimento di Matematica
Centre d'analyse et de
mathématique sociales
Evolution equations
with applications
to population dynamics
Doctoral thesis in a joint program
Elisa Affili
Matr. R12038
Prof. Enrico Valdinoci
Prof. Luca Rossi
Coordinatore del Corso di Dottorato
Prof. Vieri Mastropietro
Directeur de l'école doctorale
Prof. Elisha Falbel
Anno accademico 2019-2020
CHAPTER: ABSTRACT
The main topic of this thesis is the analysis of evolution equations reflecting issues in ecology and population dynamics. In mathematical modelling, the impact of environmental elements and of the interactions between species is read into the role of heterogeneity in the equations and of the coupling in systems. In this direction, we investigate
three separate problems, each corresponding to a chapter of this thesis.
The first problem addresses the evolution of a single population living in an environment with a fast diffusion line.
From a mathematical point of view, this corresponds to a system of two coupled reaction-diffusion equations set on domains of different dimensions, which, following [20], is called a “road-field model”. We introduce a periodic dependence of the reaction term in the direction of the fast diffusion line; in the ecological interpretation, this corresponds to the presence of zones that are more or less favourable to the growth of the population.
Necessary and sufficient conditions for persistence or extinction of the population and the effects of the presence of the road are analysed through the study of a suitable generalised principal eigenvalue, originally defined in [16]. By comparison with the literature about reaction-diffusion equations in periodic media, we show that the presence of the road has no impact on the survival chances of the population, despite the deleterious effect that is expected from fragmentation.
The second investigation regards a model describing the competition between two populations in a situation of asymmetrically aggressive interactions – one is the attacker and the other the defender.
We derive a system of ODEs from basic principles, obtaining a modified Lotka-Volterra model relying on structural parameters such as the fitness of the populations and the frequency and effectiveness of the attacks.
The evolution progresses through two possible scenarios, where only one population survives.
Then, the interpretation of one of the parameters as the aggressiveness of the attacker population naturally raises questions of controllability. With the aid of geometrical arguments, we characterise the set of initial conditions leading to the victory of the attacker through a suitable (possibly time-dependent) strategy.
Indeed, we prove that bang-bang strategies are sufficient, and sometimes necessary, compared with constant controls.
Finally, we treat a time minimization question.
The third and last part of this thesis analyses the time decay of some evolution equations with classical and fractional time derivatives. Continuing an analysis started in [43], we deal with evolution equations with a possibly mixed Caputo and classical time derivative.
By using energy methods, we prove quantitative estimates of polynomial or exponential type; the different behaviour depends heavily on the choice of the time derivative.
The decay results apply to a large class of diffusion operators, including local, nonlocal, real, complex, and even nonlinear ones, of which we provide concrete examples.
CHAPTER: RIASSUNTO
Il principale argomento di questa tesi è l'analisi delle equazioni dell'evoluzione che riflettono questioni di ecologia e di dinamica della popolazione. Nell'ambito della modellizzazione matematica, l'impatto degli elementi ambientali e delle interazioni tra le specie viene studiato mediante il ruolo dell'eterogeneità nelle equazioni e nelle interazioni nei sistemi accoppiati. In questa direzione, indaghiamo
tre problemi distinti corrispondenti a tre capitoli di questa tesi.
Il primo problema riguarda l'evoluzione di una singola popolazione che vive in un ambiente con una linea di diffusione rapida.
Dal punto di vista matematico, lo studio riguarda un sistema di due equazioni di reazione-diffusione accoppiate, che lavorano su domini di dimensioni diverse, chiamato come in [20] un modello “campo-strada”. Introduciamo una dipendenza periodica in direzione della linea di diffusione per il termine di reazione, che nell'interpretazione ecologica corrisponde alla presenza di zone più e meno favorevoli alla crescita della popolazione.
Le condizioni necessarie e sufficienti per la persistenza o l'estinzione della popolazione e gli effetti della presenza della strada sono analizzati attraverso lo studio di un adeguato autovalore principale generalizzato, recentemente definito in [16]. Tramite il confronto con la letteratura in mezzi periodici, si mostra che la presenza della strada non ha alcun impatto sulle possibilità di sopravvivenza della popolazione, nonostante l'effetto deleterio che ci si aspetta dalla frammentazione.
La seconda indagine riguarda un modello che descrive la competizione tra due popolazioni in una situazione di aggressione asimmetrica, in cui una popolazione aggredisce una seconda.
Deriviamo un sistema di ODE da alcune assunzioni fondamentali, ottenendo un modello Lotka-Volterra modificato che si basa su parametri strutturali come la fitness della popolazione e la frequenza e l'efficacia degli attacchi.
L'analisi della dinamica mostra due possibili scenari, in cui una sola delle due popolazioni sopravvive.
Dopodiché, l'interpretazione di uno dei parametri come l'aggressività della prima popolazione solleva in modo naturale un problema di controllabilità. Tramite argomentazioni geometriche caratterizziamo l'insieme delle condizioni iniziali permettendo, con un'adeguata strategia eventualmente variabile nel tempo, la vittoria della popolazione che attacca. Infatti, dimostriamo che le funzioni di tipo bang-bang sono sufficienti a raggiungere l'obiettivo e talvolta sono necessarie rispetto a funzioni costanti.
Infine, trattiamo una questione di minimizzazione nel tempo.
La terza e ultima parte analizza il decadimento nel tempo in equazioni di evoluzione con una possibile derivata temporale frazionaria. Proseguendo un'analisi iniziata in [43], trattiamo equazioni d'evoluzione con una combinazione di derivata temporale di Caputo e classica.
Utilizzando metodi d'energia, dimostriamo stime quantitative di tipo polinomiale o esponenziale; il diverso comportamento dipende principalmente dalla scelta della derivata temporale.
I risultati di decadimento si applicano ad una vasta classe di operatori di diffusione, comprendendone alcuni locali, non locali, reali, complessi e anche non lineari, di cui forniamo esempi concreti.
CHAPTER: RÉSUMÉ
Le sujet principal de cette thèse est l'analyse des équations d'évolution reflétant les questions d'écologie et de dynamique des populations. En modélisation, la compréhension de l'impact des éléments environnementaux et de l'interaction entre les espèces dépend de la compréhension du rôle de l'hétérogénéité dans les équations et les interactions dans les systèmes couplés. Dans cette direction, nous étudions
trois problèmes indépendants correspondant à trois chapitres de cette thèse.
Le premier problème concerne l'évolution d'une seule population vivant dans un environnement avec une ligne de diffusion rapide.
L'analyse porte sur un système de deux équations de réaction-diffusion couplées, travaillant sur des domaines de dimensions différentes, qui est appelé comme dans [20] un modèle “champ-route”. Nous introduisons une dépendance périodique dans la direction de la ligne de diffusion pour le terme de réaction, qui, dans l'interprétation écologique, correspond à la présence de zones plus ou moins favorables à la croissance de la population.
Les conditions nécessaires et suffisantes pour la persistance ou l'extinction de la population et les effets de la présence de la route sont analysés par l'étude de la valeur propre principale généralisée appropriée, définie pour la première fois dans [16]. Par comparaison avec des études similaires dans des environnements périodiques, nous prouvons que la présence de la route n'a aucun impact sur les chances de persistance de la population, malgré l'effet délétère attendu lié à la fragmentation.
La deuxième étude porte sur un modèle décrivant l'interaction compétitive et agressive entre deux populations. Nous dérivons un système d'EDO à partir de principes de base, en obtenant un modèle Lotka-Volterra modifié reposant sur des paramètres structurels comme la fertilité de la population et la fréquence et l'efficacité des attaques.
L'analyse de la dynamique donne deux scénarios possibles, où une seule population survit.
Ensuite, l'interprétation d'un des paramètres comme étant l'agressivité de la première population soulève tout naturellement des questions de contrôlabilité. Grâce à des arguments géométriques, nous caractérisons l'ensemble des conditions initiales permettant la victoire de la première population avec une stratégie appropriée éventuellement dépendante du temps. En effet, nous prouvons que les stratégies de bang-bang sont suffisantes et parfois nécessaires face à des contrôles constants.
Enfin, nous traitons une question de minimisation du temps.
La troisième et dernière partie de la thèse analyse la décroissance dans le temps pour des solutions d'une classe d'équations d'évolution avec dérivées temporelles fractionnaires et classiques. Poursuivant une analyse commencée dans [43], nous traitons des équations d'évolution avec une combinaison linéaire des dérivées temporelles Caputo et classiques.
En utilisant des méthodes d'énergie, nous prouvons des estimations quantitatives de type polynomial ou exponentiel ; le comportement différent dépend fortement du choix de la dérivée temporelle.
Les résultats de la décroissance s'appliquent à une large classe d'opérateurs de diffusion, comprenant des opérateurs locaux, non locaux, réels, complexes et même non linéaires, dont nous fournissons des exemples concrets.
CHAPTER: RINGRAZIAMENTI
Per primi vorrei ringraziare i miei relatori, Luca Rossi ed Enrico Valdinoci, senza i quali questo lavoro non sarebbe stato possibile.
Luca mi segue già da molti anni e le esperienze positive con lui sono state determinanti sulla mia scelta di intraprendere il dottorato. Ha continuato a seguirmi con attenzione e meticolosità durante questi tre anni.
Enrico fin dal primo giorno ha avuto fiducia in me e mi ha suggerito le migliori possibilità per lo sviluppo della mia carriera, incoraggiandomi, finanziandomi e aiutandomi quando il compito mi risultava troppo difficile.
A entrambi devo i miei ringraziamenti più sinceri.
Un ringraziamento speciale va anche a Serena Dipierro, che è stata molto presente nel mio dottorato, come collaboratrice e quasi come un terzo relatore, si è impegnata a coinvolgermi e promuovermi nella comunità matematica.
Ringrazio moltissimo anche Henri Berestycki, per la grande ispirazione che mi ha dato, per i suoi preziosi consigli e per aver aiutato me e i miei relatori a formare l'accordo di cotutela.
I would like to thank the two anonymous referees for their precious time and their nice comments on the report. I am also happy to thank the members of the defence commission, Luis Almeida, Sepideh Mirrhaimi, Fabiana Leoni and again Henri Berestycki, for accepting the task and devoting their valuable time to me.
Ringrazio anche per l'accoglienza il Dipartimento di Matematica dell'Università degli Studi di Milano, in particolare nelle persone di Vieri Mastropietro, coordinatore del corso di dottorato, e di Stefania Leonardi. Sono molto riconoscente anche a Daniela Lipari per avermi aiutato a stringere l'accordo di cotutela. Un grande ringraziamento va anche ai miei colleghi dottorandi, per aver condiviso con me gioie e dolori del dottorato e per aver reso molto più divertente il tempo a Milano.
Dedico un pensiero particolare agli altri dottorandi (ormai dottori!) e postdoc di Enrico e Serena, che mi hanno accompagnato in numerose conferenze in giro per il mondo e nei due mesi in Australia, facendomi sentire sempre a casa: Pietro, Luca, Claudia, Giorgio, Matteo, Matteo, Julien.
Je suis reconnaissante également à Sorbonne Université et au laboratoire CAMS pour m'avoir accueillie, déjà depuis mon stage de M2.
Un grand merci à Sandrine Nadal, Nathalie Brusseaux, Jean-François Venuti, Corentin Lacombe et Patricia Zizzo pour leur travail administratif, et à toutes les personnes du labo pour la belle ambiance, les discussions passionnantes, et pour m'avoir appris beaucoup sur la culture et la langue française.
Ici aussi un grand groupe de doctorants et postdoc m'ont aidé avec leurs conseils et leur amitié, pendant ma thèse et le stage: merci à Romain, Samuel, Charles, Alessandro, Benedetta, Federico, Julien, François, Noemi, Imke, Jérémie, Milim, José, Elisa.
Vorrei ringraziare tutti gli amici anche al di fuori dell'università che mi hanno sostenuto in questi anni di viaggi, conferenze e traslochi sfrenati tra Padova, Milano, Parigi e Stoccolma, e tutti gli amici che c'erano da molto prima che il mio dottorato iniziasse.
Sono divisa tra la felicità di aver conosciuto così tante persone meravigliose e il rammarico di non aver passato abbastanza tempo con ciascuno.
Per la mia famiglia, il mondo dell'università e della matematica sono sempre stati estranei, ma questo non ha impedito loro di sostenermi e di avere fiducia in me. Grazie a mio fratello Andrea per avermi insegnato le sottrazioni e aver sopportato le mie domande sugli strani simboli che apparivano nei suoi libri di matematica. Grazie anche a Emanuela, Nicolò e Matteo per aver arricchito la nostra famiglia con tanta gioia e affetto. Grazie a mamma e papà per avermi dato tutto, senza mai chiedere niente.
Ad Andrea potrei dedicare intere pagine di ringraziamenti, ma credo sia meglio farglieli di persona.
CHAPTER: INTRODUCTION
The main motivation behind research is to enhance mankind's ability to predict and control natural and artificial processes.
To this purpose, mathematical models have proved to be a very compelling instrument. A mathematical model is a simplified representation of a phenomenon through a few meaningful, quantitative parameters evolving according to analytical laws. Once some faithful evolution equations are established, the role of mathematics is to provide as much information as possible on the solutions, even if often only qualitative properties can be derived. That is, mathematics does not study reality itself, but the language in which we read it.
On the other hand, given a model, the mathematical challenges it raises are often interesting in themselves. It is natural that questions on the mathematical tools arise, or that variations of the model are proposed and discussed.
In this way, mathematical knowledge expands, and more tools become available to write new models.
At present, the problem of climate change and the anthropization of the environment is a great concern for humankind. In order to activate effective countermeasures against biodiversity loss, it is important to understand as deeply as possible what conditions would entail such an event. These conditions depend on quantitative and qualitative properties of the environment where the species lives and on a population's resilience to changes, but also on its interactions with other species sharing the same habitat. We still know too little about the effects that these elements and their alteration have on the survival chances of species.
This thesis is far from giving a solution to these dreadful problems, but it aims to contribute to the field of evolution equations and systems with possible applications to population dynamics.
§.§.§ Topics and aims of the thesis
The thesis consists of three parts, each treating a different problem.
In the first part, corresponding to Chapter <ref>, we start from a reaction-diffusion model in a periodic environment with a fast diffusion line. The aim is to find conditions entailing survival or extinction of the population and to understand the influence of the line and the environment on the dynamics. Our analysis permits a comparison with the scenario where the fast diffusion line is absent, in the general case of a medium that is heterogeneous in one direction.
The content of Chapter <ref> reflects the paper [2] by the author of this thesis.
The second part, contained in Chapter <ref>, is devoted to a model of aggressive, asymmetric competition between two populations, derived from a Lotka-Volterra system.
The presence of the aggression term naturally leads to a control problem, where a population tries to prevail on the other using an appropriate strategy.
Hence, once the dynamics of the system is understood, we investigate conditions for the victory of the aggressive population, which, quite surprisingly, is not always possible.
Moreover, we find that, depending on the initial condition, either a bang-bang or a constant strategy leads to the desired scenario.
Chapter <ref> corresponds to the paper [3] by Serena Dipierro, Luca Rossi, Enrico Valdinoci and the author of this thesis.
The last part of this thesis deals with a more abstract and general problem; we investigate asymptotic behaviour for a class of evolution equations with both fractional and classical time derivatives.
Our setting consists of a homogeneous evolution equation on a bounded set.
The framework comprises both real and complex, local and nonlocal diffusion operators, and allows us to evaluate the impact of time derivatives on the decay of solutions.
Depending on the type of time derivative, polynomial or exponential decays are entailed.
The results of Chapter <ref> are presented in the paper [5] in collaboration with Enrico Valdinoci and the note [4] in collaboration with Serena Dipierro and Enrico Valdinoci.
§.§.§ Organisation of the manuscript
In this introductory chapter, we make the reader familiar with the problems we investigate and the framework in which they are set.
Following the historical path,
we start with a general introduction that then branches into three sections corresponding to the precise research niches of our problems. In each section, after an overview of the state of the art, we introduce the corresponding problem in detail and provide precise statements of our results.
As mentioned before, the rest of the manuscript consists of three chapters, corresponding respectively and in the same order to the topics we introduce in this introduction.
Each chapter is meant to be a self-standing script.
§ GENERAL HISTORIC BACKGROUND
For evident reasons of population control and resource management, population dynamics was one of the first themes to be treated through mathematical modelling.
The first example in this sense was given by Leonardo Fibonacci in the Liber Abaci, and treats the size of a population of rabbits.
Fibonacci supposed that each pair of rabbits older than one month gives birth to another pair of rabbits; calling $u_n$ the size of the population at the $n$-th month, under the previous hypothesis one deduces that
\begin{equation*}
u_{n+2}=u_{n+1}+ u_n.
\end{equation*}
Starting with $u_0=1$, it can be deduced that $u_n$ has an exponential behaviour [8].
This deduction corresponds to reality only as long as food is abundant for all individuals; moreover, the relation is involved and not easy to treat.
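As a quick illustrative check (ours, not part of the original discussion), one can iterate the recurrence and observe the exponential behaviour: the ratio $u_{n+1}/u_n$ approaches the golden ratio $\varphi=(1+\sqrt{5})/2$, so $u_n$ grows like $C\varphi^n$.

```python
import math

def fibonacci(n_terms, u0=1, u1=1):
    """First n_terms of the rabbit-population sequence u_{n+2} = u_{n+1} + u_n."""
    u = [u0, u1]
    while len(u) < n_terms:
        u.append(u[-1] + u[-2])
    return u

u = fibonacci(40)
phi = (1 + math.sqrt(5)) / 2   # golden ratio: the base of the exponential growth
ratio = u[-1] / u[-2]           # approaches phi as n grows
```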
Another discrete model was proposed by Euler in the treatise Introduction to the Analysis of the Infinite, published in 1748 [8]. He assumed the annual growth rate to be a fixed quantity $\alpha>0$. Then, calling $P_n$ the size of the population at the year $n$, one has that
\begin{equation*}
P_{n+1}=(1+\alpha) P_n,
\end{equation*}
so one derives, calling $P_0$ the population at the initial time,
\begin{equation*}
P_n=(1+\alpha)^{n} P_0.
\end{equation*}
The sequence $\{P_n\}_{n\in\N}$ is called a geometric sequence, and its behaviour is again exponential.
Thanks to these formulae, Euler treated some problems linked to the growth of urban populations and investigated the reliability of the biblical story of the Flood. However, his model involves many computations that were hard to perform before the introduction of computers.
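As a small numerical illustration (with made-up values of $\alpha$ and $P_0$), iterating the relation $P_{n+1}=(1+\alpha)P_n$ reproduces the closed form $P_n=(1+\alpha)^n P_0$:

```python
alpha, P0 = 0.03, 1000.0    # illustrative annual growth rate and initial population
P = P0
for n in range(50):
    P *= 1 + alpha           # one year of growth: P_{n+1} = (1 + alpha) P_n
closed_form = (1 + alpha) ** 50 * P0
```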
Thomas Malthus, in his work Essay on the Principle of Population [81], used a simpler relation to represent the evolution of a population size; he supposed the growth of a population to be proportional to its size, that is, the growth rate to be a fixed constant, $a>0$.
Moreover, as a simplification, he assumed the size of the population to evolve continuously with respect to time.
Under these hypotheses, the evolution of the population size $u$ follows the law
\begin{equation}\label{eq:malthus}
u'(t)= a u(t) \quad \text{for} \ t\geq 0.
\end{equation}
The solutions to equation (<ref>) are exponentials, in accordance with the result of Fibonacci.
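The exponential solution $u(t)=u(0)e^{at}$ can also be recovered numerically; the following sketch (with an arbitrary rate $a$, chosen for illustration) integrates $u'=au$ by the explicit Euler method and compares the result with the exact solution.

```python
import math

a, T, N = 0.5, 2.0, 100_000   # illustrative rate, time horizon, number of Euler steps
dt = T / N
u = 1.0                        # initial datum u(0) = 1
for _ in range(N):
    u += dt * a * u            # explicit Euler step for u' = a u
exact = math.exp(a * T)        # Malthus: u(T) = u(0) e^{aT}
```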
Again in [81], Malthus pointed out that the growth of a population is limited by the quantity of resources. This idea was incorporated into the equation by Verhulst [120]. He considered the number $k>0$ of individuals that the environment can support indefinitely with the available resources, called the carrying capacity of the environment. Then, he corrected Malthus's equation (<ref>) as follows:
\begin{equation}\label{logistic}
u'(t)= a u(t)\left( 1-\frac{u(t)}{k} \right) \quad \text{for} \ t\geq 0.
\end{equation}
Equation (<ref>) presents two equilibria: $u=0$, which is repulsive, and $u=k$, which is attractive.
In fact, for all $u(t)<k$, one has $u'(t)>0$, while for $u(t)>k$, it holds $u'(t)<0$; in both cases, the solution tends to get closer to the value $k$.
This means that, independently of the starting condition, as long as the initial datum is positive, the population size evolves approaching the value $k$, which is the maximum number of individuals that the environment can sustain.
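The attractiveness of $u=k$ can be observed numerically: with an explicit Euler scheme (all parameters below are illustrative), trajectories starting below and above the carrying capacity both approach $k$.

```python
a, k, dt = 1.0, 100.0, 0.01    # illustrative growth rate, carrying capacity, time step

def logistic_step(u):
    # one explicit Euler step of u' = a u (1 - u/k)
    return u + dt * a * u * (1 - u / k)

u_low, u_high = 1.0, 250.0     # initial data below and above the carrying capacity
for _ in range(5000):           # integrate up to time t = 50
    u_low = logistic_step(u_low)
    u_high = logistic_step(u_high)
# both trajectories end up close to k = 100
```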
The logistic model is much more realistic than the previous ones. It is considered the precursor of interesting mathematical branches, including Lotka-Volterra systems and reaction-diffusion equations.
§ THE ROAD-FIELD MODEL IN A PERIODIC MEDIUM
§.§ Reaction diffusion equations in the literature
One important feature that is not taken into account in the logistic equation is dependence on space.
The first effect to take into account for a space structured model is the fact that a population is subject to dispersion.
This is a result of the free movement for animals and of the dispersion of seeds for plants.
The first hypothesis in the literature was to consider the individuals as moving according to a random Brownian motion, like particles of a gas.
Without taking reproduction into account, calling $u(t, x)$ the density of a population and considering it in continuous dependence on time,
the dispersal would follow the well-known heat equation
\begin{equation}\label{eq1}
\partial_t u - \Delta u=0.
\end{equation}
Note that, when speaking of population densities and sizes, we only consider nonnegative solutions.
The first mathematicians who added a reaction term to equation (<ref>) were
Fisher [50] and Kolmogorov, Petrovsky and Piskunov [72].
They considered a function $u(t,x)$ representing the concentration of an advantageous gene in a population; it was supposed that the population lives in a one-dimensional environment and that the individuals move randomly. Under these hypotheses, once the gene is introduced, it spreads according to the equation
\begin{equation}\label{eq:KPP}
\partial_t u - \partial_{xx}^2 u = f(u)
\end{equation}
where $f$ is a function such that
\begin{equation}\label{0225}
f(0)=f(1)=0;
\end{equation}
moreover it is monostable, that is,
\begin{equation*}
f(u)>0 \ \text{for} \ u\in(0,1),
\end{equation*}
and respects the condition called KPP hypothesis
\begin{equation}\label{0024}
f(u) < f'(0)u \quad \text{for all} \ u\in(0,1).
\end{equation}
The function $f$ represents the birth-death rate of individuals carrying the gene. The fact that $f(0)=0$ is a very natural assumption: if no individuals are present, no new individual is generated. On the other hand, the choice $f(1)=0$ suggests a saturation at the size $u=1$.
The hypothesis (<ref>) reflects the fact that the growth rate decreases as the size of the population grows, as it is the case for the logistic equation (<ref>).
Actually, Fisher supposed $f(u)=au(1-u)$ for $a>0$, which is exactly the nonlinearity proposed by Verhulst, while Kolmogorov, Petrovsky and Piskunov selected $f(u)=a u(1-u)^2$.
For a large class of initial data, including Heaviside functions, the solutions to (<ref>) asymptotically converge to a function of the form
\begin{equation}\label{sol}
u(t,x )=U(z) \quad \text{for} \ z=x+ct.
\end{equation}
Solutions of the form (<ref>) are called travelling waves, and the quantity $c$ is called the speed of propagation of the travelling wave. The travelling wave found in [50] and [72] has speed $c_{KPP}=2\sqrt{ f'(0)}$; actually, a travelling wave exists for all $c\geq c_{KPP}$, and $c_{KPP}$ corresponds to the minimal speed.
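The minimal speed can be observed numerically. The following self-contained sketch (our own discretisation, with Fisher's nonlinearity $f(u)=au(1-u)$ and illustrative grid parameters) evolves a Heaviside-type initial datum with an explicit finite-difference scheme and estimates the speed of the level set $u=1/2$, which should be close to $c_{KPP}=2\sqrt{f'(0)}=2$ for $a=1$.

```python
a = 1.0             # growth rate, so c_KPP = 2*sqrt(a) = 2
dx, dt = 0.5, 0.1   # dt <= dx^2/2 keeps the explicit scheme stable and monotone
n = 600             # grid on [0, 300]
u = [1.0 if i * dx < 10 else 0.0 for i in range(n)]   # Heaviside-type datum

def step(u):
    # one explicit Euler step of u_t = u_xx + a u (1 - u), Dirichlet boundaries
    v = u[:]
    for i in range(1, n - 1):
        lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / (dx * dx)
        v[i] = u[i] + dt * (lap + a * u[i] * (1.0 - u[i]))
    return v

def front_position(u):
    # leftmost point where the (monotone) solution drops below 1/2
    return dx * next(i for i, ui in enumerate(u) if ui < 0.5)

for _ in range(600):          # integrate up to t = 60
    u = step(u)
x1 = front_position(u)
for _ in range(600):          # integrate up to t = 120
    u = step(u)
x2 = front_position(u)
speed = (x2 - x1) / 60.0      # should be close to c_KPP = 2
```

With this discretisation the measured speed lands within a few percent of $2$; the small deficit with respect to $c_{KPP}$ is consistent with the known logarithmic delay of the front.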
The main questions addressed in [50] and [72] have later been asked for larger and larger classes of nonlinearities. These questions concern the existence of stationary solutions, the existence of travelling fronts and the asymptotic speed of propagation for the Cauchy problem.
For the sake of completeness, we must name here two other important settings.
In [48] and in [7] for the multidimensional case, Fife and McLeod and Aronson and Weinberger treated equation (<ref>) in the case of a function $f$ satisfying the hypothesis (<ref>) and such that there exists a value $\theta$ for which
\begin{equation}\label{0044}
f(u)<0 \quad \text{if} \ u\in(0,\theta), \qquad f(u)>0 \quad \text{if} \ u\in(\theta, 1).
\end{equation}
A function satisfying (<ref>) is called bistable, from the fact that the related equation has two attractive states, $0$ and $1$. This type of nonlinearity is particularly interesting because it embodies an important phenomenon in population dynamics, called the Allee effect after the scientist who discovered it in the 1930s. In social animals, aggregation increases the survival rate of individuals; therefore, when the size of a population is below a certain threshold, the growth rate is negative, while once the group size passes the threshold, the growth rate becomes positive.
A third important setting is the combustion case, in which there exists a quantity $\theta\in(0,1)$ such that
\begin{equation*}
f(u)=0 \quad \text{if} \ u\in[0,\theta], \qquad f(u)>0 \quad \text{if} \ u\in(\theta, 1).
\end{equation*}
This type of nonlinearity is used for ignition models, where to activate the combustion process the temperature must pass a threshold.
As a matter of fact, Aronson and Weinberger investigated the equation
\begin{equation}\label{aw}
\partial_t u- \Delta u= f(u) \quad \text{for} \ x \in\R^n
\end{equation}
and asked under which conditions on the function $f$, other than (<ref>), and on the initial datum $u_0$ one has invasion or spreading, that is,
\begin{equation*}
u(t,x) \overset{t\to+\infty}{\longrightarrow} 1 \quad \text{locally uniformly in} \ x.
\end{equation*}
The opposite behaviour is called extinction, and it occurs when
\begin{equation*}
u(t,x) \overset{t\to+\infty}{\rightarrow} 0 \quad \text{uniformly in} \ x.
\end{equation*}
We point out that for extinction a uniform convergence is required, otherwise, in some scenarios, one could have a positive mass escaping further and further in space as $t$ goes to infinity.
The authors found that for a compactly supported initial datum which is “sufficiently large” (depending on the nonlinearity), invasion occurs if and only if
\begin{equation*}
\int_0^1 f(x)dx>0.
\end{equation*}
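For the classical cubic bistable nonlinearity $f(u)=u(u-\theta)(1-u)$ (a standard example; the computation below is ours), the criterion becomes explicit: $\int_0^1 f(u)\,du=\frac{1}{12}-\frac{\theta}{6}$, so invasion is predicted precisely when $\theta<1/2$. A quick numerical check:

```python
def f(u, theta):
    # cubic bistable nonlinearity with Allee threshold theta
    return u * (u - theta) * (1 - u)

def integral(theta, n=10_000):
    # midpoint rule for int_0^1 f(u) du; the exact value is 1/12 - theta/6
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h, theta) for i in range(n))

values = {theta: integral(theta) for theta in (0.3, 0.5, 0.7)}
# positive for theta < 1/2, zero at theta = 1/2, negative for theta > 1/2
```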
Let us give more details on the minimal requirements for the initial datum.
In the monostable case, it is sufficient for $u_0$ to be greater than a positive constant in $(0,1)$ on a large enough ball. Moreover, if $f'(0)>0$, then all solutions issued from a non-zero, non-negative initial datum converge to $1$ as $t$ goes to infinity; this is called the hair trigger effect.
In the bistable and combustion cases, the positive constant must necessarily be greater than the threshold $\theta$.
Equation (<ref>) was the first example of a whole class of PDEs, the reaction-diffusion equations. Since the initial works [50, 72, 48, 7], the literature on reaction-diffusion equations and on travelling waves has flourished. What we present here is a circumscribed selection, intended to provide context for our work.
§.§.§ Reaction-diffusion equations in periodic media
One of the other natural applications of equations (<ref>) and (<ref>) is of course population dynamics. Skellam [109] was one of the first to study the effects of random dispersion on a population subject to the Malthusian law, after noticing that the framework given by [50] and [72] could be adapted to this problem.
When studying the survival and the spatial distribution of a population, a homogeneous environment is not satisfactory, and one expects the growth of the population to vary according to the habitat conditions.
On the other hand, from a mathematical point of view, heterogeneity in the nonlinearity creates great difficulties. Many new techniques were required to overcome these obstacles.
A first analysis was carried out by Shigesada, Kawasaki and Teramoto
in [108, 107].
The authors observed that natural environments are a mosaic of different habitats, such as forests, meadows, brush, cultivated fields and villages.
This led them to consider an environment which consists of two periodically alternating homogeneous habitats, one favourable, $E^+$, and one unfavourable, $E^-$, for the considered species.
The heterogeneity of the living conditions is reflected by the birth-death rate, which they chose to be
\begin{equation*}
f(x, u)= \left\{
\begin{array}{ll}
u(\mu^+ -u), & \text{in} \ E^+, \\
u(\mu^- -u), & \text{in} \ E^-,
\end{array}
\right.
\end{equation*}
for some $\mu^+>\mu^-$.
Moreover, they also considered a possibly varying diffusivity, taking
\begin{equation*}
A(x)= \left\{
\begin{array}{ll}
A^+, & \text{in} \ E^+, \\
A^-, & \text{in} \ E^-.
\end{array}
\right.
\end{equation*}
This choice is due to the observation of increased speed in unfavourable environments; hence, we expect $A^+<A^-$ for a real population.
Then, the authors studied in [108] the equation
\begin{equation}\label{0325}
\partial_t u - \nabla \cdot (A(x) \nabla u) = f(x,u) \quad \text{for} \ x\in\R^n.
\end{equation}
This is known as the patch model; they investigated long time behaviour, convergence to travelling fronts and propagation speeds.
Actually, since $u=1$ is no longer an equilibrium for equation (<ref>), we have to modify our definition of species survival; from now on, we say that persistence occurs if $u(x,t)$ approaches a non-null stationary solution locally uniformly as $t$ tends to infinity.
By making use of numerical simulations, it was found that the stability of the trivial solution $u=0$ plays a key role in determining if the population survives or not.
It was already known (see [35]) that a negative or positive sign of the principal eigenvalue resulting from the linearisation around $u=0$ entails respectively stability or instability of the $0$ solution.
In [108], it was shown numerically that the stability of the trivial solution entails extinction, while its instability causes persistence of the population.
The authors also studied the sign of the eigenvalue depending on the values of $L$, the measures of $E^+$ and $E^-$ and the values of the parameters; this was possible because of the simplicity of the framework.
Equation (<ref>) was later considered in [70] and [18] for general $A(x)$ and $f(x,u)$ depending on $x$ in a continuous fashion and periodically with period $L$ for some $L\in\R^n$.
In this second article, Berestycki, Hamel and Roques showed that the extinction or persistence of the population depends on the sign of a periodic eigenvalue $\lambda_p(-\mathcal{L}', \R^n)$, that is, the unique real number such that the problem
\begin{equation}\label{sys:L_RN_p}
\left\{
\begin{array}{ll}
\mathcal{L'}(\psi) + \lambda \psi = 0, & x\in\R^n, \\
\psi> 0, & x\in\R^n, \\
|| \psi ||_{\infty}=1, \\
\psi \ \text{is periodic in $x$ of periods $L$},
\end{array}
\right.
\end{equation}
where $\mathcal{L'}$ is given by
\begin{equation*}\label{def:mathcal_L'}
\mathcal{L'}(\psi):= \nabla \cdot(A(x) \nabla \psi) + f_u(x,0)\psi,
\end{equation*}
has a solution $\psi_p\in W_{loc}^{2, 3}(\R^n)$. It was proved that when
$\lambda_p(-\mathcal{L}', \R^n)\geq 0$ extinction occurs. On the other hand,
when $\lambda_p(-\mathcal{L}', \R^n)<0$ there is persistence; moreover, there exists a unique stationary solution to (<ref>), which is periodic of period $L$ and attracts all solutions starting from a non-negative, non-zero bounded initial datum.
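To make the role of $\lambda_p$ concrete, here is a 1-D toy computation (our own sketch, not taken from the cited works): for $\mathcal{L'}\psi = d\psi'' + f_u(x,0)\psi$ with periodic boundary conditions, solutions of the linearised equation $\partial_t\psi=\mathcal{L'}\psi$ grow like $e^{-\lambda_p t}$, so $\lambda_p$ can be estimated from the long-time growth rate. The profile $f_u(x,0)=\mu(x)$ and all parameters below are illustrative.

```python
import math

d, ell, n = 1.0, 1.0, 40       # diffusivity, period, grid points (illustrative)
h = ell / n
dt = 0.2 * h * h / d            # stable explicit Euler time step

def growth_rate(mu, t_burn=1.0, t_meas=1.0):
    """Growth rate of psi_t = d psi'' + mu(x) psi with periodic BC;
    the periodic principal eigenvalue lambda_p is minus this rate."""
    def step(p):
        return [p[i] + dt * (d * (p[(i + 1) % n] - 2 * p[i] + p[(i - 1) % n]) / h**2
                             + mu[i] * p[i]) for i in range(n)]
    psi = [1.0] * n
    for _ in range(int(t_burn / dt)):      # burn-in: converge to the principal mode
        psi = step(psi)
        m = max(psi); psi = [p / m for p in psi]
    log_growth = 0.0
    for _ in range(int(t_meas / dt)):      # measure the exponential growth
        psi = step(psi)
        m = max(psi); log_growth += math.log(m)
        psi = [p / m for p in psi]
    return log_growth / (int(t_meas / dt) * dt)

mu_const = [1.0] * n
mu_per = [1.0 + 0.5 * math.cos(2 * math.pi * i * h / ell) for i in range(n)]
lam_const = -growth_rate(mu_const)   # for constant mu = m the value is -m
lam_per = -growth_rate(mu_per)
```

The sign of the output then predicts persistence ($\lambda_p<0$) or extinction ($\lambda_p\geq 0$) in this toy setting.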
The studies on the patch model [108, 107] and the ones on periodic media [70, 18] also evidenced the effect of fragmentation on the survival chances of a population. It was found that $\lambda_p(-\mathcal{L}', \R^n)$ decreases as the homogeneity increases, that is, a species has better survival chances when the environment is less fragmented.
§.§.§ The case of a changing climate
A new aspect that one may consider while studying ecological problems is a changing climate. If the environment changes in time, so does the fitness of a population. In this paragraph, we analyse the difficulties produced by the new type of nonlinearity and how they have been overcome.
A 1-dimensional model for population persistence under climate change was first proposed by Berestycki, Diekmann, Nagelkerke and Zegeling.
The authors first imagined that a population lives in a favourable region enclosed in a disadvantageous environment.
Assuming that a global warming is in place, and that the population lives in the Northern Hemisphere,
the authors imagined that the favourable region moves north, so that for every favourable area lost in the South, an equivalent favourable area is gained in the North.
The resulting equation is
\begin{equation*}
\partial_t u - \partial_{xx}^2 u=f(x-ct,u) \quad \text{for} \ x\in\R.
\end{equation*}
Later, in [21], Berestycki and Rossi presented a model for climate change in $\R^n$ and for a larger class of nonlinearites; they dealt with equation
\begin{equation}\label{1709}
\partial_t u - \Delta u=f(x-ct e,u) \quad \text{for} \ x\in\R^n,
\end{equation}
with $e$ a direction in $\mathbb{S}^{n-1}$ and $f: \R^n\times \R^+ \to \R$.
The authors focused on solutions
in the form of travelling waves $u(x,t)=U(x-cte)$, which solve the equation
\begin{equation}\label{eq:cc}
\partial_t U - \Delta U- c\,e\cdot \nabla U=f(x,U) \quad \text{for} \ x\in\R^n.
\end{equation}
This second equation is more tractable: indeed, the dependence on time of the nonlinearity, which poses many problems, is transformed into a transport term; now the equation has a nonlinearity depending only on space, and the techniques for this type of heterogeneity are more familiar.
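For the reader's convenience, let us sketch the elementary computation behind this change of variables: setting $\tilde{x}:=x-cte$ and $U(t,\tilde{x}):=u(t,\tilde{x}+cte)$, the chain rule gives
\begin{equation*}
\partial_t u = \partial_t U - c\, e\cdot \nabla U, \qquad \Delta u = \Delta U, \qquad f(x-cte,u)=f(\tilde{x},U),
\end{equation*}
so that, renaming $\tilde{x}$ back to $x$, the original equation becomes exactly the transported one above.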
The main question is whether the population keeps pace with the shifting climate, that is, whether a large enough group is able to migrate at the same speed as the climate. The answer to this question is positive if a solution to (<ref>) exists; as for the periodic equation (<ref>), this depends on the sign of the principal eigenvalue coming from the linearisation at $0$.
§.§.§ The road-field model
Spatial heterogeneity in natural environments may be the consequence not only of the diversity of the habitats, but also of the presence of obstacles or fast diffusion channels that affect the fitness and the mobility of individuals.
In recent years, human activity has caused drastic changes in the environment, causing different species to become invasive in areas where they were not present [107]. In the case of the Processionary pine tree caterpillar, the diffusion in France has been even faster than anticipated. It has been observed that the insect was incidentally transported by humans from town to town, and from these settlements it spread in the surroundings [103].
This is not the only example of ecological diffusion accelerated by fast diffusion lines. In Western Canadian forests, GPS observations on wolves proved that the animals exploit seismic lines, that is, straight roads used by the oil companies to test reservoirs, to move faster and therefore to increase their probability of meeting a prey [83].
Roads play a strong role also in the spreading of epidemics. The “black death” plague in the 14th century was one of the most devastating epidemics known in Europe. It is known that the plague was transported by animals and humans along the commercial routes of the silk road, and from there it spread all over Europe. More recently, a similar effect has been conjectured for the COVID-19 infection. By tracing the spreading in Northern Italy in early March 2020, it was found that the diffusion occurred first along highways and then spread in the surrounding territory [52].
Inspired by this behaviour, Berestycki, Roquejoffre and Rossi proposed in [20] a model of spreading in an environment presenting a fast diffusion channel. As a simplification, they considered the channel to be a straight line in $\R^2$, the $x$ axis $\R\times \{ y=0 \}$.
Their idea was to split the population into two groups: the first one, of density $u$, occupies the one-dimensional environment $\R\times \{ y=0 \}$ representing the road, and the second one, of density $v$, occupies the surrounding territory; by symmetry, they considered just one half of the plane, namely $\Omega:=\{(x, y) \in\R^2 : y>0 \}$, which they called “the field”. These two groups continuously exchange along the road: a fraction $\nu>0$ of the population in $\Omega$ at $y=0$ passes onto the road, and a fraction $\mu>0$ of the population on the road passes into the field.
The diffusivity is different in the two environments; its values are $D$ on the road and $d$ on the field, both positive.
Moreover, it is supposed that the population reproduces only in the field and that the environment is homogeneous; the corresponding function $f$ is required to satisfy (<ref>), $f'(0)>0$ and a stronger version of the KPP hypothesis, that is
\begin{equation*}
v \mapsto \frac{f(v)}{v} \quad \text{is decreasing}.
\end{equation*}
The resulting system, called road-field model, is
\begin{equation}\label{sys:rf}
\left\{
\begin{array}{ll}
\partial_t u(x,t) - D \partial_{xx}^2 u (x,t) = \nu v (x,0,t) - \mu u(x,t), & x\in \R, \ t > 0, \\
\partial_t v(x,y,t) - d \Delta v (x,y,t)= f(v), & (x,y) \in \Omega, \ t>0, \\
-d \partial_y v(x,0,t) = -\nu v(x,0,t) + \mu u(x,t), & x \in \R, \ t>0.
\end{array} \right.
\end{equation}
The authors of [20] found that invasion occurs for any non-negative, non-zero initial datum, so the hair trigger effect holds; solutions converge to the unique steady state $\left(\frac{\nu}{\mu},1 \right)$. Moreover, they studied the spreading speed and found that it is enhanced by the presence of the road.
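Indeed, one can verify directly that $\left(\frac{\nu}{\mu},1\right)$ is a steady state: for $u\equiv \frac{\nu}{\mu}$ and $v\equiv 1$ all derivatives vanish, and
\begin{equation*}
\nu \cdot 1 - \mu\,\frac{\nu}{\mu}=0, \qquad f(1)=0, \qquad -\nu\cdot 1 + \mu\,\frac{\nu}{\mu}=0,
\end{equation*}
so the exchange equation on the road, the reaction-diffusion equation in the field and the flux condition at $y=0$ are all satisfied.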
In a second paper [19], the same authors investigated system (<ref>) with a transport term and a reaction term on the line.
Many variations of the road-field model were proposed.
In [94, 95], the system was modified by introducing nonlocal exchanges between the road and the field.
The case of a general nonlocal diffusion has been treated in [14, 13].
Different geometric settings have also been considered; in [105], the model was extended in higher dimensions.
For a complete list, we refer to the chapter in [112] by Tellini.
Treating system (<ref>) poses some difficulties because of the interaction between functions living in different dimensions and the unusual boundary condition. Adding some heterogeneity in space increases the difficulties. This is why very few studies of this type have been carried out so far, apart from an article by Giletti, Monsaingeon and Zhou [58], where the authors considered the case of exchange terms depending periodically on $x$.
Recently, Berestycki, Ducasse and Rossi introduced in [16] a new generalised principal eigenvalue suited to road-field models with a possibly heterogeneous reaction term.
Hence, they considered the system
\begin{equation*}
\left\{
\begin{array}{ll}
\partial_t u(x,t) - D \partial_{xx}^2 u (x,t) -c \partial_x u(x,t)= \nu v (x,0,t) - \mu u(x,t), & x\in \R, \ t > 0, \\
\partial_t v(x,y,t) - d \Delta v (x,y,t)-c \partial_x v(x,y,t)= f(x,y,v), & (x,y) \in \Omega, \ t>0, \\
-d \partial_y v(x,0,t) = -\nu v(x,0,t) + \mu u(x,t), & x \in \R, \ t>0.
\end{array} \right.
\end{equation*}
Defining the operators
\begin{equation*}\label{sys:operators}
\left\{
\begin{array}{l}
\mathcal{R}(\phi, \psi):=D \phi''+c \phi'+\nu {\psi}|_{y=0}-\mu \phi, \\
\mathcal{L}(\psi):= d\Delta \psi +c \partial_x \psi -f_v(x,y,0)\psi, \\
B(\phi, \psi):=d \partial_y {\psi}|_{y=0}+\mu \phi- \nu {\psi}|_{y=0},
\end{array}
\right.
\end{equation*}
this eigenvalue is defined as
\begin{equation}\label{def:lambda1_S_Omega}
\begin{split}
\lambda_1( \Omega)=\sup \{ \lambda \in \R \ : \ \exists (\phi, \psi)\geq (0,0), \ (\phi, \psi) \not\equiv(0,0), \ \text{such that} \\ \mathcal{L}(\psi) + \lambda \psi \leq 0 \ \text{in} \ \Omega, \ \mathcal{R}(\phi, \psi) +\lambda \phi \leq 0
\ \text{and} \ B(\phi, \psi)\leq 0 \ \text{in} \ \R \},
\end{split}
\end{equation}
with $(\phi, \psi)$ belonging to $W_{loc}^{2,3}(\R)\times W_{loc}^{2,3}(\overline{\Omega})$. Together with the definition, many interesting properties and bounds were studied.
Thanks to that, the same authors were able to investigate the case of
a favourable ecological niche, possibly facing climate change, in [17]. It was proven that the sign of $\lambda_1( \Omega)$ characterises
the extinction or the persistence of the population; moreover, comparing the results with the ones found for the model without the road, in the absence of climate change a deleterious effect of the road on the survival chances was found.
On the other hand, if the ecological niche shifts, the road has in some cases a positive effect on the persistence.
§.§ A KPP model with a fast diffusion line in a periodic medium
We are now ready to introduce in details the first problem dealt with in this thesis.
We are going to investigate a road-field model in a periodic medium.
This problem combines the interests of studying the effect of a fast diffusion line with the one of treating a heterogeneous nonlinearity, that, as we pointed out before, reflects a natural territory in a more realistic way than a homogeneous term. From a technical point of view, it also combines the difficulties of the two settings.
§.§.§ The model
We have already presented the road-field model.
In our problem, we treat a road-field system with possible climate change and with a reaction term depending on the spatial variable $x$; in particular, we will focus on the case of periodic dependence.
There is no dependence on the variable $y$: the heterogeneity in that direction is due only to the presence of the road.
Keeping the notation used so far, the system we investigate reads
\begin{equation}\label{sys:fieldroad}
\left\{
\begin{array}{lr}
\partial_t u-D u '' -c u' - \nu v|_{y=0} + \mu u= 0, & x\in \R, \\
\partial_t v -d \Delta v-c\partial_x v =f(x,v), & (x, y)\in \Omega, \\
-d \partial_y{v}|_{y=0} + \nu v|_{y=0} -\mu u=0, & x\in\R.
\end{array}
\right.
\end{equation}
Recall that $D$, $d$, $\nu$, $\mu$ are positive constants and $c\geq 0$.
The function $f:\R\times \R_{\geq 0}\to \R $
is always supposed to be $\mathcal{C}^{1}$ in $x$, locally in $v$, and Lipschitz in $v$, uniformly in $x$; moreover we suppose that the value $v=0$ is an equilibrium, that is
\begin{equation}\label{hyp:0}
f(x,0)=0, \quad \text{for all} \ x\in \R,
\end{equation}
and that
\begin{equation}\label{hyp:M}
\exists M>0 \ \text{such that} \ f(x, v)<0 \quad \text{for all} \ v>M \ \text{and all} \ x\in \R,
\end{equation}
which indicates that there is a saturation level.
We will derive some inequalities on the generalised principal eigenvalue of (<ref>) for the general case of $f$ satisfying these hypotheses and
$c$ possibly nonzero.
The characterisation of extinction or persistence of the species is addressed in the case of $c=0$ and $f$ a periodic function, reflecting the periodicity of the environment in which the population diffuses, as we require with the forthcoming hypothesis.
We will analyse the case of a KPP nonlinearity, that is, we require that
\begin{equation}\label{hyp:KPP}
\frac{f(x,s_2)}{s_2}< \frac{f(x,s_1)}{s_1} \quad \text{for all} \ s_2>s_1>0 \ \text{and all} \ x\in\R.
\end{equation}
Then, we suppose that there exists $\ell> 0$ such that
\begin{equation}\label{hyp:per}
f(x+\ell, s)=f(x,s) \quad \text{for all} \ s >0 \ \text{and all} \ x\in\R.
\end{equation}
To study the effect of the line of fast diffusion, we will compare the behaviour of (<ref>) to the one of the system
\begin{equation}\label{sys:symmetric}
\left\{
\begin{array}{ll}
v_t-d\Delta v - c\partial_x v= f(x,v), & (x,y)\in\Omega,\\
-\partial_y v|_{y=0} =0, & x\in\R,
\end{array}
\right.
\end{equation}
whose solution is a function $v(x,y)$ that can be extended by symmetry to the whole plane.
It is natural to consider system (<ref>) as the counterpart of system (<ref>) in the case without the road, since it presents the same geometry, including the same boundary condition, except for the exchange terms that are in place in the case of a fast diffusion channel.
§.§ Our results
We are now ready to present the main results of this part of the thesis.
The case of a periodic $f(x,v)$.
Here, we consider the case of a nonlinearity that respects the KPP hypothesis and is periodic in the direction of the road. Moreover, we consider $c=0$.
We begin with the following result on the long time behaviour of solutions of system (<ref>). As already seen for similar problems, the key point lies in the stability of the $0$ solution. This is linked to the sign of the generalised principal eigenvalue for the road-field model, which we have defined in (<ref>). With this notation, we have the following:
Let $f$ satisfy (<ref>)-(<ref>) and $c=0$.
Then the following holds:
* if $\lambda_1( \Omega)\geq 0$, then extinction occurs.
* if $\lambda_1(\Omega)<0$, then persistence occurs and the positive stationary solution $(u_{\infty}, v_{\infty})$ is unique and periodic in $x$.
Next, we compare the behaviour of solutions of the system (<ref>) with the ones of (<ref>), or, equivalently, after extension by symmetry to the whole plane, of
\begin{equation}\label{eq:bhroques}
\partial_t v - d \Delta v = f(x,v), \quad \text{for} \ (x,y)\in\R^2.
\end{equation}
Recalling the results of [18], we know that the persistence or extinction of a population for a periodic equation in the whole $\R^2$ depends on the sign of the periodic eigenvalue $\lambda_p(-\mathcal{L}, \R^2)$, that was defined in (<ref>) for a general case.
We obtain the following:
Assume $f$ fulfils hypotheses (<ref>)-(<ref>) and let $c=0$. Then:
* if $\lambda_p(-\mathcal{L}, \R^2)<0$, then $\lambda_1( \Omega)<0$, that is, if persistence occurs for the system “without the road” (<ref>), then it occurs also for system “with the road” (<ref>).
* if $\lambda_p(-\mathcal{L}, \R^2)\geq 0$, then $\lambda_1( \Omega)\geq 0$, that is, if extinction occurs for the system “without the road” (<ref>), then it occurs also for system “with the road” (<ref>).
Theorem <ref> asserts that the road has no negative impact on the survival chances of the population in the case of a medium depending periodically on the variable in the direction of the road.
We recall that fragmentation lowers the survival chances of a species (see [18, 108]); also, even if we are not in the framework of an ecological niche, we recall from [17] that a road has a negative impact in the setting without climate change. For these reasons, the result in Theorem <ref> may be somewhat unexpected.
However, despite the fact that no reproduction takes place on the road, in the case of periodic media the presence of the fast diffusion channel does not interfere with the long time behaviour of the population, which depends only on the environment of a periodicity cell.
As seen in [18], where the dependence of persistence on the amplitude of fragmentation was studied, if the favourable zones are sufficiently large, the population will eventually spread in all of them; the presence of the road does not cause loss of favourable environment and consequently of persistence chances.
However, we expect the spreading speed to be influenced by the presence of the road, as it has been already proven in the case of homogeneous environment.
We point out that Theorem <ref> completes and is in accordance with the results on long time behaviour found in [20] for a homogeneous reaction function, which we can consider as a particular case of periodicity, satisfying a positive KPP condition (thanks to the hypothesis $f'(0)>0$). In [20], Theorem 4.1 states the convergence of any positive solution to the unique positive stationary solution of the system. Since it is well known that in the homogeneous case $\lambda_p(-\mathcal{L}, \R^2)=- f'(0)$, the hypothesis gives $\lambda_p(-\mathcal{L}, \R^2)<0$ and, as a consequence of Theorem <ref>, that persistence occurs.
If instead $f'(0)\leq0$, then we would be in the first case of Theorem <ref>, yielding extinction of the population.
Effects of the amplitude of heterogeneity.
One may ask if the presence of a road may alter the complex interplay between more favourable and less favourable zones; in particular, one could wonder if this could penalise persistence, since it was shown that populations prefer a less fragmented environment. Nevertheless, knowing from Theorem <ref> that the road has no effect on the survival chances of the species, we can recover all the results on the effect of fragmentation.
Take a parameter $\alpha>0$ and consider system (<ref>) with nonlinearity
\begin{equation}\label{1421}
\tilde{f}(x,v)=\alpha f(x,v).
\end{equation}
To highlight the dependence on $\alpha$, we will call $\lambda_1(\Omega, \alpha)$ the generalised principal eigenvalue defined in (<ref>) with nonlinearity $\tilde{f}$.
As a direct consequence of our Theorem <ref> and of Theorem 2.12 in [18], we have the following result on the amplitude of heterogeneity:
Assume $\tilde{f}$ is defined as in (<ref>), $f$ satisfies (<ref>)-(<ref>), and $c=0$. Then:
* if $ \int_{0}^{\ell} f_v(x,0)>0$, or if $ \int_{0}^{\ell} f_v(x,0)=0$ and $f\neq 0$, then for all $\alpha >0$ we have $\lambda_1(\Omega, \alpha )<0$.
* if $ \int_{0}^{\ell} f_v(x,0)<0$, then $\lambda_1(\Omega, \alpha )>0$ for $\alpha$ small enough; if moreover there exists $x_0\in[0,\ell]$ such that $f_v(x_0,0)>0$, then for all $\alpha$ large enough $\lambda_1(\Omega, \alpha )<0$.
A climate change setting for a general $f(x,v)$.
We now consider a general nonlinearity depending on the variable in the direction of the road. We stress that we do not assume any periodicity; the case of a periodic $f$ is a particular instance of this setting. Moreover, the following results are stated in the general framework of a possible climate change, so the parameter $c$ may differ from $0$.
Comparisons between the systems with and without the road are made, in the general case, through a comparison between $\lambda_1(\Omega)$ and the generalised principal eigenvalue of system (<ref>), given by
\begin{equation}\label{lambda:L_Omega}
\begin{split}
\lambda_1(-\mathcal{L}, \Omega)=\sup \{ \lambda \in \R \ : \ \exists \psi \geq 0, \psi \not\equiv 0 \ \text{such that} \\
\mathcal{L}(\psi) + \lambda \psi \leq 0 \ \text{on} \ \Omega, \ -\partial_y \psi|_{y=0}\leq 0 \ \text{on} \ \R \}
\end{split}
\end{equation}
for $\psi\in W_{loc}^{2,3}(\Omega)$. With this notation, we have the following:
Assume $\lambda_1(-\mathcal{L}, \R^2)$ as in (<ref>) and $\lambda_1(\Omega)$ as in (<ref>); then $\lambda_1(-\mathcal{L}, \R^2) \geq \lambda_1(\Omega)$.
In the special case $c=0$, some information on the relations between $\lambda_1(-\mathcal{L}, \R^2)$ and $\lambda_1(\Omega)$ was already available in [17]: Proposition 3.1 gives that if $\lambda_1(-\mathcal{L}, \R^2)\geq 0$ then $\lambda_1(\Omega)\geq 0$. Thanks to that and Theorem <ref>, the following result holds:
If $c=0$, we have $\lambda_1(-\mathcal{L}, \R^2)<0$ if and only if $\lambda_1(\Omega)<0$.
As already pointed out in [16], even for $c=0$ it is not true that $\lambda_1(-\mathcal{L}, \R^2) =\lambda_1(\Omega)$. In fact, it has been shown that $\lambda_1(\Omega) \leq \mu$, while, by tuning $f$, one can make $\lambda_1(-\mathcal{L}, \R^2)$ as large as desired. However, the fact that they have the same sign reveals that they are profoundly linked.
§.§.§ Perspectives
The next problem to tackle for system (<ref>) in a periodic medium regards the existence of travelling fronts and the study of their speed in all the direction of the plane.
We point out that, with respect to the classical case, there are great difficulties linked to the anisotropy of the space, due both to the road and to the periodicity of the medium.
An acceleration effect due to the presence of the road is expected to be found when $D>d$; however, the repercussions of the periodicity of the medium on the spreading speed in a general direction is hard to predict.
We also mention that it would be interesting to extend the current results to the case of heterogeneous exchange terms, periodic in $x$, as already treated in [58]. The key point for attacking that problem lies in generalising the definition of $\lambda_1(\Omega)$ to non-homogeneous coefficients.
§ A NEW MODEL FOR AGGRESSIVE COMPETITION
§.§ Lotka-Volterra models: a literature overview
Another issue that is overlooked in the logistic equation is the interaction of a species with the other ones living in the same environment. In the '20s, Lotka [78] and Volterra [121] independently observed some curious transitory oscillations in the concentrations of chemicals during a reaction and in the sizes of fish populations.
They formulated the following model; let $u$ be the quantity of a species of plants present in the environment and $v$ the size of a population of herbivores. It is supposed that the plants have a constant growth rate at all times, $a>0$. The herbivores feed exclusively on the observed plant and have limitless appetite.
The consumption of plants eaten by the animals is supposed to depend on the probability of a meeting of the two, represented by $uv$; the actual loss for the plants is $-buv$ and the gain for the herbivores is $duv$, with $b>d>0$, owing to the fact that some plants could be torn but not consumed.
Moreover, as in the Malthusian equation, the increase of a population is supposed to depend on its size.
It is also supposed that the environmental conditions are stable and that no mutation in the behaviour of the two species is possible.
Then, the model reads
\begin{equation}\label{model:lv}
\left\{
\begin{array}{llr}
\dot{u}&= au-buv, & {\mbox{ for }}t>0,\\
\dot{v}&= -cv+duv, & {\mbox{ for }}t>0.
\end{array}
\right.
\end{equation}
This system has two equilibria, $(0,0)$ and $\left( \frac{c}{d},\frac{a}{b} \right)$. If the initial datum is any point with positive coordinates distinct from the latter equilibrium, the population sizes oscillate in time, running along a closed curve in the phase portrait.
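The closed orbits can be seen numerically: along solutions of the system above, the quantity $H(u,v)=du-c\ln u+bv-a\ln v$ is a first integral, so every trajectory runs along a level set of $H$. A minimal sketch (the parameter values and the initial datum are illustrative):

```python
import numpy as np

def lv(state, a=1.0, b=1.0, c=1.0, d=1.0):
    """Classical Lotka-Volterra vector field: u = plants, v = herbivores."""
    u, v = state
    return np.array([a * u - b * u * v, -c * v + d * u * v])

def rk4_step(f, state, dt):
    """One step of the classical fourth-order Runge-Kutta scheme."""
    k1 = f(state)
    k2 = f(state + dt / 2 * k1)
    k3 = f(state + dt / 2 * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def first_integral(u, v, a=1.0, b=1.0, c=1.0, d=1.0):
    """H is constant along orbits, which forces them to be closed curves."""
    return d * u - c * np.log(u) + b * v - a * np.log(v)

state = np.array([1.5, 1.0])   # initial datum away from the equilibrium (1, 1)
H0 = first_integral(*state)
drift = 0.0
for _ in range(20000):         # integrate up to t = 20 with dt = 1e-3
    state = rk4_step(lv, state, 1e-3)
    drift = max(drift, abs(first_integral(*state) - H0))
print(drift)  # stays tiny: the orbit never leaves the level set of H
```

The smallness of `drift` is the numerical counterpart of the closed-orbit picture on the phase portrait.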
§.§.§ Competitive Lotka-Volterra models
Since these pioneering works, many studies on the interaction between populations have been carried out. In particular, after the studies of Gause [54], another model has been employed to investigate the dynamics between two populations in competition, that is, exploiting at least partly the same resources.
We propose here its construction using the example of two populations of squirrels, the grey one and the red one, following the work in [90].
These two species, one of which was recently introduced in Britain, both inhabit hardwood forests and rely on the same resources to live. Keeping in mind the derivation of the logistic equation, we realize that the resource term in this scenario depends on the sizes of both populations.
Moreover, we take into consideration the fact that, due to social organisation and sometimes segregation between competing species, the presence of individuals of the rival group may obstruct food collection; if this is the case, there is an additional decrease of the available resources for both populations.
Adding these corrections to the logistic of both groups, the Lotka-Volterra competitive system reads
\begin{equation}\label{lv}
\begin{cases}
\dot{u}=a_u u\left(1-\displaystyle
\frac{u+\alpha_{uv} v}{k_u} \right), & t>0,\\
\dot{v}
=a_v v\left(1- \displaystyle\frac{v+\alpha_{vu} u}{k_v} \right), & t>0,
\end{cases}
\end{equation}
where $a_u$, $a_v$, $\alpha_{uv}$, $\alpha_{vu}$, $k_u$ and $k_v$ are nonnegative real numbers.
The coefficients $a_u$ and $a_v$ are the intrinsic growth rates of the two populations; $k_u$ and $k_v$ represent the carrying capacities of the environment for the two groups.
The coefficients $\alpha_{uv}$ and $\alpha_{vu}$ represent the competition between individuals of different species, and indeed they appear multiplied by the term $uv$, which represents a probability of meeting.
Taking the example of the squirrels, we expect that $\alpha_{uv}, \alpha_{vu} >1$.
However, for other pairs of populations relying on only partially overlapping sets of resources, one could also have $\alpha_{uv}, \alpha_{vu} \leq 1$. If, finally, the first population feeds on a subset of the resources of the second one, one would have $\alpha_{uv} \geq1$ and $\alpha_{vu} <1$.
For the sake of completeness, we recall that in the case of species mutually benefiting from the presence of the other, which is not part of the competitive framework, the dynamics prescribes negative values for $\alpha_{uv}$ and $\alpha_{vu}$.
The dynamics of system (<ref>) depends indeed on the values of the interspecific competition terms:
if $\alpha_{uv}<1<\alpha_{vu}$, then the first species $u$
has an advantage over the second one $v$
and will eventually prevail; if $\alpha_{uv}, \ \alpha_{vu} >1$, then the first population that penetrates the environment (that is, the one that has a greater size at the initial time) will persist while the other will extinguish; if $\alpha_{uv}, \ \alpha_{vu} <1$, there exists an attractive coexistence steady state.
The fact that, if two populations' ecological niches completely overlap, then one of the two species gets extinct, is exactly the statement of the Gause principle, a well-established law in ecology.
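The three regimes can be checked concretely. As an illustrative example in the weak-competition case, take $a_u=a_v=k_u=k_v=1$ and $\alpha_{uv}=\alpha_{vu}=1/2$: the coexistence state solves a $2\times 2$ linear system, and the Jacobian there has negative eigenvalues, confirming that it is attractive. A sketch:

```python
import numpy as np

# Illustrative parameters in the weak-competition regime (alpha < 1).
au = av = ku = kv = 1.0
a_uv = a_vu = 0.5

# Coexistence state: u + a_uv*v = ku and v + a_vu*u = kv (a linear system).
A = np.array([[1.0, a_uv], [a_vu, 1.0]])
u_star, v_star = np.linalg.solve(A, np.array([ku, kv]))

def field(u, v):
    """Right-hand side of the competitive Lotka-Volterra system."""
    return np.array([au * u * (1 - (u + a_uv * v) / ku),
                     av * v * (1 - (v + a_vu * u) / kv)])

# Jacobian at the coexistence state, by central finite differences.
h = 1e-6
J = np.column_stack([
    (field(u_star + h, v_star) - field(u_star - h, v_star)) / (2 * h),
    (field(u_star, v_star + h) - field(u_star, v_star - h)) / (2 * h),
])
eigs = np.linalg.eigvals(J)
print(u_star, v_star)   # both equal to 2/3
print(eigs.real)        # both negative: the coexistence state is attractive
```

Repeating the computation with $\alpha_{uv},\alpha_{vu}>1$ would instead produce one positive eigenvalue, in line with the bistable scenario described above.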
The Lotka-Volterra systems of ODEs have been extended in many ways, and their applications range from technology substitution to business competition. In the stochastic analysis community, system (<ref>) with added noise terms has been widely studied [62].
Another branch where these systems have had a huge influence is, of course, reaction-diffusion equations.
In the next paragraph we spend a few words on the results for diffusive Lotka-Volterra competitive systems.
§.§.§ Competitive Lotka-Volterra model with diffusion
In the interaction between different populations, as already happens in the dynamics of a single species, spatial organisation plays an important role.
A vast literature has been devoted to the competitive Lotka-Volterra system with diffusion, that is, up to a rescaling,
\begin{equation}\label{model:diffusion}
\begin{cases}
\partial_t u - \Delta u= u\left(1-u-\alpha_{uv} v \right), & x\in\R^n, \ t>0,\\
\partial_t v - d\Delta v
=a v\left(1- v- \alpha_{vu} u \right), & x\in\R^n, \ t>0,
\end{cases}
\end{equation}
for some $d$, $a$, $\alpha_{uv}$, $\alpha_{vu}$ positive constants.
Richer dynamics and more questions naturally arise for system (<ref>) but, unsurprisingly, their study involves many more difficulties.
To give a flavour of these, we provide some details on the speed of propagation of travelling waves connecting the steady states $\left(1, 0\right)$ and $\left( 0, 1\right)$, to which many studies have been devoted. Following the works of Lewis, Li and Weinberger [75, 76], the minimal speed of propagation of a monotone wave is denoted by $c_{LLW}$. Even in the simplest case stated in (<ref>), the exact value of $c_{LLW}$ is still not known. Calling $c_{KPP}$ the minimal speed of propagation for the second equation under the assumption $u\equiv 0$, it holds that $c_{LLW}\geq c_{KPP}$, but the inequality may be strict depending on the parameters [64, 65].
Nevertheless, system (<ref>) is one of the simplest among many possibilities; in the literature one finds systems considering
nonlocal diffusion [91],
free boundary [44],
cross diffusion [79],
and many other variations.
§.§ A model of Lotka-Volterra type for aggressive competition and analysis of the strategies
Among the several models dealing with the dynamics of biological systems, the case of populations in open hostility seems to be rather unexplored.
Our model considers the case of two populations competing for the same resources, with one aggressive population attacking the other: concretely, one may think of a situation in which two populations live together in the same territory and share the same environmental resources, until one population tries to prevail and overwhelm the other.
We regard this situation as a “civil war”, since the two populations share land and resources.
§.§.§ The model
We now describe in further detail our model of conflict between the
two populations and the attack strategies pursued by the aggressive population.
Given the lack of reliable data related to civil wars,
the equations were derived by deduction
from the principles of population dynamics.
Our idea is to modify the Lotka-Volterra competitive system for two populations with
density $u$ and $v$,
adding to the usual competition for resources the fact that both populations suffer some losses as an outcome of the attacks.
The key point in our analysis
is that the clashes do not depend on the chance of meeting of the two populations, given by the quantity $uv$, as it happens in many other works in the literature, but they are sought by the first population and
depend only on the size $u$ of the first population and on its level of aggressiveness $a$.
The resulting model is
\begin{equation}\label{model}
\left\{
\begin{array}{llr}
\dot{u}&= u(1-u-v) - acu, & {\mbox{ for }}t>0,\\
\dot{v}&= \rho v(1-u-v) -au, & {\mbox{ for }}t>0,
\end{array}
\right.
\end{equation}
where $a$, $c$ and $\rho$ are nonnegative real numbers. Here, the coefficient $\rho$ models the fitness of the second population with respect to the first one, while the parameter $c$ stands for the ratio of endured to inflicted damage for the first population.
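A quick numerical experiment illustrates the mechanism behind the attack term: since the loss $-au$ of the second population does not vanish as $v\to 0$, the value $v=0$ can be reached in finite time. A minimal sketch (the parameter values and the initial datum are illustrative):

```python
def simulate(u0, v0, a, c, rho, dt=1e-4, t_max=50.0):
    """Euler integration of the conflict model; the dynamics stops when v
    reaches 0 (victory of the first population) or at t_max."""
    u, v, t = u0, v0, 0.0
    while t < t_max:
        du = u * (1 - u - v) - a * c * u
        dv = rho * v * (1 - u - v) - a * u
        u, v, t = u + dt * du, v + dt * dv, t + dt
        if v <= 0.0:
            return t, u, 0.0   # stopping time and sizes at the end of the war
    return float("inf"), u, v

# Second population small, first population large: the attack term -a*u
# dominates rho*v*(1-u-v), so v is driven to 0 while u stays positive.
Ts, u_end, v_end = simulate(u0=0.8, v0=0.05, a=0.8, c=0.5, rho=2.0)
print(Ts, u_end, v_end)
```

Here the war ends quickly ($T_s$ well below $1$) with the first population essentially intact, in contrast with the classical competitive system, where the axis $\{v=0\}$ is invariant.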
§.§.§ Behaviour of solutions
We denote by $(u(t), v(t))$ a solution of (<ref>) starting from a point $(u(0),v(0))\in [0,1] \times [0,1]$.
We will also refer to the orbit of $(u(0), v(0))$ as the collection of
points $(u(t), v(t))$ for $t\in \R$, thus both positive and negative times, while the trajectory is the collection of points $(u(t), v(t))$ for $t\geq0$.
From the equations in (<ref>), one promptly sees that $v=0$ is not an equilibrium; hence $v$ can reach the value $0$, and even negative values, in finite time.
From a modelling point of view, negative values of $v$ are not acceptable, since $v$ represents a population density.
Therefore, we suppose that the dynamics stops when the value $v=0$ is reached for the first time.
At this point, the conflict ends with the victory of the first population $u$, that can continue its evolution with a classical Lotka-Volterra equation of the form
\begin{equation*}
\dot{u}= u (1- u)
\end{equation*}
and that would certainly fall into the attractive equilibrium $u=1$.
In order to state our first result on the dynamics of the system (<ref>),
we first observe that, in a real-world situation, the value of $a$ would probably be non-constant and discontinuous, so we allow this coefficient to take values in the class $\mathcal{A}$
defined as follows:
\begin{equation}\begin{split}\label{DEFA}
\mathcal{A}&\; :=
\big\{a: [0, +\infty) \to [0, +\infty) {\mbox{ s.t.~$a$ is continuous}}\\
&\qquad \qquad {\mbox{except at most at a finite number of points}}\big\}.\end{split}\end{equation}
A solution related to a strategy $a(t)\in \mathcal{A}$ is a pair $(u(t), v(t)) \in C_0 (0,+\infty)\times C_0 (0,+\infty)$, which is $C^1$ outside the discontinuity points of $a(t)$ and
solves system (<ref>).
Moreover, once the initial datum is imposed, the solution is assumed to be
continuous at $t=0$.
In this setting, we establish the existence of the solutions to problem (<ref>)
and we classify their behavior with respect to the possible exit from the domain $[0,1]\times[0,1]$. Given $(u(0), v(0))\in [0,1] \times [0,1]$ and $a(t)\in\mathcal{A}$, two scenarios are possible for a solution $(u(t),v(t))$ with $a=a(t)$
of system (<ref>) starting at $(u(0), v(0))$:
(1) The solution $(u(t), v(t))$ issued from $(u(0), v(0))$ belongs to $ [0,1]\times (0,1]$ for all $t\geq 0$.
(2) There exists $T\geq0$ such that the solution $(u(t), v(t))$ issued from $(u(0), v(0))$ exists and is unique for all $t\leq T$, with $v(T)=0$ and $u(T)>0$.
As a consequence, we can define the stopping time of the solution $(u(t), v(t))$ as
\begin{equation}\label{def:T_s}
T_s (u(0), v(0)) =
\left\{
\begin{array}{ll}
+\infty & \text{if situation (1) occurs}, \\
T & \text{if situation (2) occurs}.
\end{array}
\right.
\end{equation}
From now on, we will implicitly consider solutions $(u(t),v(t))$ only for $t\leq T_s(u(0), v(0))$.
We call victory of the first population the scenario where $T_s < +\infty$, that corresponds to the case where $v(T_s)=0$ and $u(T_s)>0$.
On the other hand, we call victory of the second population the scenario where $(u,v)$ tends to $(0,1)$ as $t$ tends to infinity.
Now we are going to analyze the dynamics of (<ref>) with a particular focus on possible strategies. To do this, we now define the basins of attraction.
The first one is the basin of attraction of the point $(0,1)$, that is
\begin{equation}\label{DEFB}\begin{split}
\mathcal{B}&\;:= \Big\{ (u(0),v(0))\in [0,1]\times[0,1] \;{\mbox{ s.t. }}\;\\
&\qquad\qquad T_s (u(0), v(0)) = +\infty, \ (u(t),v(t)) \overset{t\to\infty}{\longrightarrow} (0,1) \Big\},
\end{split}\end{equation}
namely the set of the initial points for which the first population gets extinct (in infinite time) and the second one survives.
The other one is
\begin{equation}\label{DEFE}
\mathcal{E}:= \left\{ (u(0),v(0))\in ([0,1]\times[0,1])\setminus\{(0,0)\} \;{\mbox{ s.t. }}\; T_s(u(0),v(0))< + \infty \right\},
\end{equation}
the set of initial points for which we have the victory of the first population and the extinction of the second one.
§.§.§ A control problem
In a rational, or at least well-organised, population one may expect that the parameter $a$, representing aggressiveness, is subject to control: by performing premeditated attacks, a population may control its strategy in the conflict and choose the most appropriate one.
From now on, we may refer to the parameter $a$ as the strategy, that may also depend on time, and we will say that it is winning if it leads to victory of the first population.
We also notice that, with this choice, (<ref>) is a control-affine system.
The main problems that
we deal with are:
* The characterization of the initial conditions for which there exists a winning strategy.
* The success of the constant strategies, compared to all possible strategies.
* The construction of a winning strategy for a given initial datum.
* The existence of a single winning strategy independently of the initial datum.
* The existence of a winning strategy minimizing duration of the war.
The first question is a problem of target reachability for a control-affine system.
The second point regards the choice of a suitable functional space where to choose the strategy.
We also construct an actual winning strategy when victory is possible, answering the third and fourth question.
The last question turns out to be an optimisation problem.
§.§ Our results
§.§.§ Dynamics for a constant strategy
The first step towards understanding the dynamics of the system in (<ref>) is to analyze its behavior for constant coefficients.
To this end, we introduce some notation.
Following the terminology on pages 9-10 in [123],
we say that an equilibrium point (or fixed point) of the dynamics
is a (hyperbolic) sink
if all the eigenvalues of the linearized map have strictly
negative real parts, a (hyperbolic) source
if all the eigenvalues of the linearized map have strictly
positive real parts, and a (hyperbolic) saddle
if some of the eigenvalues of the linearized map have strictly
positive real parts
and some have negative real parts
(since in this problem we work in dimension $2$,
saddles correspond to linearized maps with one
eigenvalue with
strictly positive real part
and one eigenvalue with
strictly negative real part).
We also recall that
sinks are asymptotically stable (and sources are
asymptotically stable for the reversed-time dynamics), see e.g. Theorem 1.1.1
in [123].
With this terminology, we state the following theorem:
For $a > 0$, $c>0$ and ${\rho}> 0$ the system (<ref>) has the following features:
(i) When $0<ac<1$, the system has 3 equilibria: $(0,0)$ is a source, $(0,1)$ is
a sink, and
\begin{equation}\label{usvs}
(u_s, v_s):= \left( \frac{\rho c(1-ac)}{1+{\rho}c}, \frac{1-ac}{1+{\rho}c} \right) \in (0,1)\times (0,1)
\end{equation}
is a saddle.
(ii) When $ac>1$, the system has 2 equilibria: $(0,1)$ is a sink and $(0,0)$ is a saddle.
(iii) When $ac=1$, the system has 2 equilibria: $(0,1)$ is a sink, while at $(0,0)$ the linearized map has one strictly positive eigenvalue and one null eigenvalue.
(iv) We have
\begin{equation} \label{fml:division}
[0,1]\times [0,1] = \mathcal{B} \cup \mathcal{E} \cup \mathcal{M}
\end{equation}
where $\mathcal{B}~$ and $\mathcal{E}$ are defined in (<ref>)
and (<ref>), respectively, and $\mathcal{M}$ is a smooth curve.
(v) The trajectories starting in $\mathcal{M}$ tend to $(u_s,v_s)$ if $0<ac<1$,
and to $(0,0)$ if $ac\ge1$ as $t$ goes to $+\infty$.
More precisely, one can say that
the curve $\mathcal{M}$ in Theorem <ref> is the stable manifold of the saddle
point $(u_s,v_s)$ when $0<ac<1$, and of the
saddle point $(0,0)$ when $ac>1$. The case $ac=1$ needs a special treatment,
due to the degeneracy of one eigenvalue, and in this case the curve $\mathcal{M}$
corresponds to the center manifold of $(0,0)$, and an ad-hoc argument will
be exploited
to show that also in this degenerate case orbits that start in $\mathcal{M}$
are asymptotic in the future to $(0,0)$.
As a matter of fact, $\mathcal{M}$
acts as a dividing wall between the two basins of attraction $\mathcal{B}$ and $\mathcal{E}$, as described in (iv)
of Theorem <ref>.
From a modelling point of view, Theorem <ref> shows that, also for our model, the Gause principle of exclusion is respected; that is, in general, two competing populations cannot coexist in the same territory, see e.g. [47].
One peculiar feature of our system is that, if the aggressiveness is too strong, the equilibrium $(0,0)$ changes its “stability” properties, passing from a source (as in (i) of
Theorem <ref>)
to a saddle point (as in (ii) of
Theorem <ref>). This shows that the war may have self-destructive outcomes; it is therefore important for the first population to analyze the situation in order to choose a proper level of aggressiveness.
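The classification of the equilibria can be verified numerically: the Jacobian of the system at $(0,0)$ is triangular with eigenvalues $1-ac$ and $\rho$, while at $(0,1)$ the eigenvalues are $-ac$ and $-\rho$. A sketch with illustrative parameter values:

```python
import numpy as np

def jacobian(u, v, a, c, rho):
    """Jacobian of the vector field (u(1-u-v) - a*c*u, rho*v(1-u-v) - a*u)."""
    return np.array([[1 - 2 * u - v - a * c, -u],
                     [-rho * v - a, rho * (1 - u - 2 * v)]])

rho, c = 2.0, 0.5
for a in (0.8, 4.0):            # ac = 0.4 < 1 and ac = 2.0 > 1
    e00 = sorted(np.linalg.eigvals(jacobian(0.0, 0.0, a, c, rho)).real)
    e01 = sorted(np.linalg.eigvals(jacobian(0.0, 1.0, a, c, rho)).real)
    print(a * c, e00, e01)
# ac < 1: both eigenvalues at (0,0) are positive (source);
# ac > 1: one positive and one negative eigenvalue (saddle);
# (0,1) always has two negative eigenvalues (sink).
```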
Figure <ref> shows one example of dynamics for each case.
The dedicated chapter contains further results on the dependence of $\mathcal{B}$ and $\mathcal{E}$ on the parameters $\rho$ and $c$. The parameter $a$, apart from having a more intricate influence on the system, may be interpreted not as a biological constant but rather as a choice of the first population.
Therefore, we perform a deeper analysis, whose results are presented in the next paragraph.
The figures show a phase portrait for $a=0.8$, $c=0.5$, $\rho=2$ (left) and for $a=0.8$, $c=3$, $\rho=2$ (right). In blue, the orbits of the points; the red dots represent the equilibria. The images are realised with Python.
§.§.§ Dynamics for variable strategies and optimisation results
We now introduce some terminology.
Recalling (<ref>),
for any $\mathcal{T}\subseteq \mathcal{A}$, we set
\begin{equation}\label{DEFNU}
\mathcal{V}_{\mathcal{T}}:= \underset{a(\cdot)\in \mathcal{T}}{\bigcup} \mathcal{E}(a(\cdot)),
\end{equation}
where $\mathcal{E}(a(\cdot))$ denotes the set of initial data $(u_0,v_0)$
such that $T_s(u_0,v_0)< +\infty$, when the coefficient $a$ in (<ref>) is replaced by the function $a(t)$.
Namely, $\mathcal{V}_{\mathcal{T}}$ represents the set of initial conditions for which $u$ is able to win by choosing a suitable strategy in $\mathcal{T}$; we call $\mathcal{V}_{\mathcal{T}}$ the victory set with admissible strategies in $\mathcal{T}$.
We also say that $a(\cdot)$ is a winning strategy for the point $(u_0,v_0)$
if $(u_0,v_0)\in \mathcal{E}(a(\cdot) )$.
Moreover, we will call
\begin{equation}\label{u0v0}
(u_s^0, v_s^0):= \left(\frac{\rho c}{1+\rho c}, \frac{1}{1+\rho c}\right).
\end{equation}
Notice that $(u_s^0, v_s^0)$ is the limit point, as $a$ tends to $0$, of the family of saddle points $\{(u_s^a, v_s^a)\}_{a>0}$ defined in (<ref>).
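This limit is easy to verify numerically from the explicit formula for the saddle: the sketch below (with illustrative parameters) checks that $(u_s^a,v_s^a)$ annihilates the vector field for every small $a$ and approaches $(u_s^0,v_s^0)$ as $a\to 0$:

```python
import numpy as np

def field(u, v, a, c, rho):
    """Right-hand side of the conflict model."""
    return np.array([u * (1 - u - v) - a * c * u,
                     rho * v * (1 - u - v) - a * u])

def saddle(a, c, rho):
    """Interior saddle (u_s^a, v_s^a), defined for 0 < ac < 1."""
    vs = (1 - a * c) / (1 + rho * c)
    return rho * c * vs, vs

c, rho = 0.5, 2.0
for a in (0.5, 0.1, 1e-3, 1e-6):
    us, vs = saddle(a, c, rho)
    assert np.allclose(field(us, vs, a, c, rho), 0.0)   # it is an equilibrium
    print(a, us, vs)

# As a -> 0 the saddle tends to (rho*c/(1+rho*c), 1/(1+rho*c)) = (u_s^0, v_s^0).
u0, v0 = rho * c / (1 + rho * c), 1 / (1 + rho * c)
print(u0, v0)
```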
With this notation,
the first question that we address is for which initial configurations it is possible for the population $u$
to have a winning strategy, that is, to characterize the victory set. For this, we allow the strategy to take all the values in $[0, +\infty)$.
In this setting, we have the following result:
(i) For $\rho=1$, we have that
\begin{equation}\label{Vbound1}\begin{split}
\mathcal{V}_{\mathcal{A}} = \,&\Big\{ (u,v)\in[0,1] \times [0,1] \;
{\mbox{ s.t. }}\; v-\frac{u}{c}<0 \; {\mbox{ if }} u\in[0,c]\\
&\qquad\qquad\qquad {\mbox{ and }}\; v\le1 \; {\mbox{ if }} u\in(c,1]\Big\},
\end{split}\end{equation}
with the convention that the last line in (<ref>) is not present if $c\ge1$.
(ii) For $\rho<1$, we have that
\begin{equation}\label{bound:rho<1}
\begin{split}
\mathcal{V}_{\mathcal{A}} &\;= \Bigg\{ (u,v)\in[0,1] \times [0,1] \;{\mbox{ s.t. }}\;
v< \gamma_0(u) \ \text{if} \ u\in [0, u_s^0], \\
v< \frac{u}{c} + \frac{1-\rho}{1+\rho c} \ \text{if} \ u\in \left[u_s^0,
\frac{\rho c(c+1)}{1+\rho c}\right]\\
{\mbox{and }}\; v\le1\ \text{if} \ u\in \left(
\frac{\rho c(c+1)}{1+\rho c},1\right]
\Bigg\},
\end{split}
\end{equation}
where
\begin{equation*}
\gamma_0(u):= \frac{u^{\rho}}{\rho c(u_s^0)^{\rho-1}},
\end{equation*}
and we use the convention that the last line
in (<ref>) is not present if $ \frac{\rho c(c+1)}{1+\rho c}\ge1$.
(iii) For $\rho>1$, we have that
\begin{equation}\label{bound:rho>1}
\begin{split}
\mathcal{V}_{\mathcal{A}} &\;= \Bigg\{ (u,v)\in[0,1] \times [0,1]\;
{\mbox{ s.t. }}\; v< \frac{u}{c} \ \text{if} \ u\in [0, u_{\infty}],\\&\qquad
\qquad\qquad\qquad
v< \zeta(u) \ \text{if} \ u\in\left(u_{\infty}, \frac{c}{(c+1)^{\frac{\rho-1}\rho}}\right] \\&\qquad
\qquad\qquad\qquad
{\mbox{and }}\; v\le 1
\ \text{if} \ u\in\left(\frac{c}{(c+1)^{\frac{\rho-1}\rho}},1\right]
\Bigg\},
\end{split}
\end{equation}
where
\begin{equation}\label{ZETADEF}
u_{\infty}:= \frac{c}{c+1}
\quad {\mbox{ and }}\quad \zeta (u):= \frac{u^{\rho}}{c \, u_{\infty}^{\rho-1}},
\end{equation}
and we use the convention that the last line
in (<ref>) is not present if $ \frac{c}{(c+1)^{\frac{\rho-1}\rho}}\ge1$.
Theorem <ref> implies that the problem is not controllable, that is, for some initial conditions the first population is not able to reach its target.
In practice, constant strategies would certainly be easier to implement, and it is therefore natural to investigate whether or not it suffices to restrict the control to constant strategies without altering the possibility of victory.
The next result addresses this problem by showing that when $\rho=1$
constant strategies are as good as all strategies,
but instead when $\rho\ne 1$ victory cannot be achieved by only
exploiting constant strategies:
Let $\mathcal{K}\subset \mathcal{A}$ be the set of constant functions. Then the following holds:
(i) For $\rho= 1$, we have that $ \mathcal{V}_{\mathcal{A}}=\mathcal{V}_{\mathcal{K}}=\mathcal{E}(a)$ for all $a>0$;
(ii) For $\rho\neq 1$, we have that $\mathcal{V}_{\mathcal{K}} \subsetneq \mathcal{V}_{\mathcal{A}}$.
The result of Theorem <ref>, part (i),
reveals a special rigidity of the case $\rho=1$
in which the victory depends only on the initial conditions, but it is independent of the strategy $a(t)$.
Instead, as stated in
Theorem <ref>, part (ii),
for $\rho \neq 1$ the choice of $a(t)$ plays a crucial role in determining which population is going to win and constant strategies do not exhaust all the
possible winning scenarios.
We stress that $\rho=1$ plays also
a special role in the biological interpretation of the model, since in this case the two
populations have the same fitness to the environmental resource, and hence, in a sense,
they are indistinguishable, up to the possible aggressive behavior of the first population.
Next, we show that for all points in the set $\mathcal{V}_{\mathcal{A}}$ we can choose an appropriate piecewise constant strategy with at most one discontinuity; functions with these properties are called Heaviside functions.
There holds that $\mathcal{V}_{\mathcal{A}} = \mathcal{V}_{\mathcal{H}}$, where $\mathcal{H}$ is the set of Heaviside functions.
The proof of Theorem <ref> solves also the third question mentioned in the introduction. As a matter of fact, it proves that for each point we either have a constant winning strategy or
a winning strategy of type
\begin{equation*}
a(t) = \left\{
\begin{array}{lr}
a_1 &{\mbox{ if }} t<T ,\\
a_2 &{\mbox{ if }} t\geq T,
\end{array}
\right.
\end{equation*}
for some $T\in(0,T_s)$, and
for suitable values $a_1$, $a_2 \in (0,+\infty)$ such that one is very small and the other one very large, the order depending on $\rho$.
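Such a two-phase (Heaviside) strategy is straightforward to simulate; in the sketch below the switching time $T$ and the values $a_1$, $a_2$ are illustrative choices, not the ones produced by the proof:

```python
def heaviside_strategy(t, T=2.0, a1=0.1, a2=10.0):
    """Piecewise constant strategy with a single discontinuity at t = T."""
    return a1 if t < T else a2

def simulate(u0, v0, c, rho, strategy, dt=1e-4, t_max=50.0):
    """Euler integration of the conflict model with a time-dependent a(t);
    the dynamics stops when v reaches 0 (victory of the first population)."""
    u, v, t = u0, v0, 0.0
    while t < t_max:
        a = strategy(t)
        u, v = (u + dt * (u * (1 - u - v) - a * c * u),
                v + dt * (rho * v * (1 - u - v) - a * u))
        t += dt
        if v <= 0.0:
            return t, u
    return float("inf"), u

# Phase 1 (a small): u grows under mild competition; phase 2 (a large):
# the attack term -a*u overwhelms v, which hits 0 while u stays positive.
Ts, u_end = simulate(u0=0.5, v0=0.3, c=1.0, rho=0.5, strategy=heaviside_strategy)
print(Ts, u_end)
```

With these illustrative values, victory arrives shortly after the switch at $T=2$, reflecting the build-up-then-strike structure described above.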
The construction that we give also brings to light the fact that the choice of the strategy depends on the initial datum, answering our fourth question as well.
It is interesting to observe that the winning strategy that switches abruptly from a small to a large value
could be considered, in the optimal control terminology, as a “bang-bang” strategy.
Even in a target reachability problem, the structure predicted by Pontryagin's Maximum Principle is brought to light: the bounds of the set $\mathcal{V}_{\mathcal{A}}$, as given in Theorem <ref>, depend on the bounds that we impose on the strategy, that is, $a \in[0,+\infty)$.
It is natural to consider also the case
in which the level of aggressiveness
is constrained between a minimal and maximal threshold,
which corresponds to the setting $a\in[m,M]$ for suitable $M\geq m\geq 0$, with the hypothesis that $M>0$.
In this setting, we denote by $\mathcal{A}_{m,M}$ the class of piecewise continuous strategies $a(\cdot)$
in ${\mathcal{A}}$ such that $
m\leq a(t)\leq M$ for all $t>0$ and we let
\begin{equation}\label{SPE}
\mathcal{V}_{m,M}:=\mathcal{V}_{\mathcal{A}_{m,M}}=\underset{{a(\cdot)\in \mathcal{A}}\atop{m\leq a(t)\leq M}
}{\bigcup} \mathcal{E}(a(\cdot))=
\underset{{a(\cdot)\in \mathcal{A}}_{m,M}
}{\bigcup} \mathcal{E}(a(\cdot)).\end{equation}
Then we have the following:
Let $M$ and $m$ be two real numbers such that $M\geq m\geq 0$ and $M>0$. Then, for $\rho\neq 1$ we have the strict inclusion $\mathcal{V}_{{m,M}}\subsetneq \mathcal{V}_{\mathcal{A}}$.
Notice that for $\rho=1$, Theorem <ref> gives instead that $\mathcal{V}_{{m,M}}= \mathcal{V}_{\mathcal{A}}$
and we think that this is a nice feature, outlining a special role played by the parameter $\rho$
(roughly speaking, when $\rho=1$ constant strategies suffice
to detect all possible winning configurations, thanks to
Theorem <ref>, while when $\rho\ne1$ non-constant strategies are necessary to detect
all winning configurations).
Time minimizing strategy.
Once it is established that victory is possible starting from a certain initial condition, we are interested in knowing which of the possible strategies is best to choose. One criterion that may be taken into account is the duration of the war. This question can be written as a minimization problem with a proper functional to minimize, and therefore the classical Pontryagin theory applies.
To state our next result,
we recall the setting in (<ref>) and define
\begin{equation*}
\mathcal{S}(u_0, v_0) := \Big\{ a(\cdot)\in \mathcal{A}_{m,M}
\;\mbox{ s.t. }\; (u_0, v_0) \in \mathcal{E}(a(\cdot)) \Big\},
\end{equation*}
that is the set of all bounded strategies for which the trajectory starting at $(u_0, v_0)$ leads to the victory of the first population.
To each $a(\cdot)\in\mathcal{S}(u_0, v_0)$ we associate the stopping time defined in (<ref>), and we express its dependence on $a(\cdot)$ by writing $T_s(a(\cdot))$.
In this setting, we provide the following statement concerning the strategy leading
to the quickest possible victory for the first population:
Given a point $(u_0, v_0)\in \mathcal{V}_{m,M}$, there exists a winning strategy $\tilde{a}(t)\in
\mathcal{S}(u_0, v_0)$, and a trajectory $(\tilde{u}(t), \tilde{v}(t) )$ associated with $\tilde{a}(t)$,
for $t\in[0,T]$,
with $(\tilde{u}(0), \tilde{v}(0) )=(u_0,v_0)$, where $T$ is given by
\begin{equation*}
T = \underset{a(\cdot)\in\mathcal{S}(u_0, v_0)}{\min} T_s(a(\cdot)).
\end{equation*}
Moreover,
\begin{equation*}
\tilde{a}(t)\in \left\{m, \ M, \ a_s(t) \right\},
\end{equation*}
where
\begin{equation}\label{KSM94rt3rjjjdfe}
{a}_s(t) := \dfrac{(1-\tilde{u}(t)-\tilde{v}(t))[\tilde{u}(t) \, (2c+1-\rho c)+\rho c]}{\tilde{u}(t) \, 2c(c+1)}.
\end{equation}
The surprising fact given by Theorem <ref>
is that the
minimizing strategy is not only of bang-bang type, but it may assume some values along a singular arc, given by $a_s(t)$.
This possibility is realized in some concrete cases, as we verified by running some numerical simulations, whose results can be visualized in Figure <ref>.
The figure shows the result of a numerical simulation searching for a time-minimizing strategy $\tilde{a}(t)$ for the problem starting at $(0.5, 0.1875)$ with parameters $\rho=0.5$, $c=4.0$, $m=0$ and $M=10$. In blue, the value found for $\tilde{a}(t)$; in red, the value of $a_s(t)$ along the corresponding trajectory $(u(t), v(t))$. As one can observe, $\tilde{a}(t)\equiv a_s(t)$ over a long stretch.
The simulation was run using AMPL-Ipopt on the NEOS server, and the pictures were made with Python.
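For the record, the singular value $a_s$ at the initial point of the simulation can be computed directly from (<ref>); the following sketch (ours, using the parameter values quoted above) checks that the resulting value is admissible, i.e. lies within the bounds $[m,M]$:

```python
def a_singular(u, v, rho, c):
    # Singular-arc control from the formula in the theorem above.
    return (1 - u - v) * (u * (2 * c + 1 - rho * c) + rho * c) / (u * 2 * c * (c + 1))

# Parameter values quoted above for the simulation (our re-evaluation).
rho, c, m, M = 0.5, 4.0, 0.0, 10.0
a0 = a_singular(0.5, 0.1875, rho, c)
assert m <= a0 <= M  # the singular value is admissible at the initial point
```

For these data one finds $a_s \approx 0.086$, comfortably inside $[0,10]$, consistently with the fact that the numerical strategy can follow the singular arc.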
§.§.§ Perspectives
The system of ODEs is the cornerstone for the study of the reaction-diffusion system
\begin{equation}\label{model2}
\left\{
\begin{array}{lr}
\partial_t u- \partial_{xx}^2 u = u(1-u-v) - acu, & {\mbox{ for }} x\in\R, \ t>0,\\
\partial_t v- d\partial_{xx}^2 v = \rho v(1-u-v) -a \int_{\R} u, & {\mbox{ for }} x\in\R, \ t>0,
\end{array}
\right.
\end{equation}
for some $d>0$.
We expect solutions of this system to display very interesting behaviours. It is possible that the second population reaches the value $0$ only at some points of the domain, providing an example of the interesting phenomenon known as dead-core, see e.g. [61].
§ EVOLUTION EQUATIONS WITH CLASSICAL AND FRACTIONAL DERIVATIVES
§.§ Fractional derivatives in evolution equations
The idea of fractional calculus first appears in the discussions between Leibniz and De l'Hospital (see [104]); namely, given the classical derivative $\frac{d^n f(x)}{dx^n}$ for $n\in\N$, it is quite natural to ask whether it is possible to define a generalisation of this operator of non-integer order, that is $\frac{d^{\alpha} f(x)}{dx^{\alpha}}$ with $\alpha\in\R$.
However, it is only in the last few decades that a substantial number of mathematicians started to work on fractional calculus. One of the reasons for this interest is the fact that fractional derivatives can help in modelling processes with memory, or diffusion phenomena whose spreading behaviour differs from the one prescribed by Brownian motion. This has applications in charge carrier transport in semiconductors, nuclear magnetic resonance diffusometry, porous systems, and dynamics in polymeric systems (see [84] and the references therein).
Here, we introduce some fractional derivatives and operators and justify their meaning and relations with the previous material.
Providing an overview of the state of the art on existence, regularity and behaviour of evolution equations involving fractional operators is beyond our purposes, due to the complexity of the topic. For that, we refer to [6, 33, 34, 98, 106, 124] and the references therein.
§.§.§ The Caputo derivative
There are several ways of defining a fractional derivative. We choose to work with the Caputo derivative,
which was first proposed in a geology model by Caputo in [30].
The Caputo derivative of order $\alpha\in(0,1)$ is defined by
\begin{equation*}
D_t^{\alpha} f(t):= \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{\dot{f}(\tau)}{(t-\tau)^{\alpha}} d\tau
\end{equation*}
where $\Gamma$ is Euler's Gamma-function and $\dot{f}$ is the classical derivative of $f$.
For simplicity, we will omit the constant and work with
\begin{equation*} \label{def:caputo}
\partial_t^{\alpha} f(t):= \Gamma(1-\alpha) D_t^{\alpha} f(t).
\end{equation*}
Notice that $\partial_t^{\alpha} f(t)$ is defined for all $f\in\mathcal{C}^1([0, t])$ such that $\dot{f}\in L^1(0,t)$. It is also possible to define Caputo derivatives of higher order, that is with $m-1<\alpha<m$ for $m\in\N$, as follows:
\begin{equation*}
D_t^{\alpha} f(t):= \frac{1}{\Gamma(m -\alpha)} \int_{0}^{t} \frac{{f}^{(m)}(\tau)}{(t-\tau)^{1+\alpha-m}} d\tau.
\end{equation*}
The Caputo derivative describes a process “with memory”, in the sense that the history of the process is encoded in the derivative, although old events “count less” than recent ones, since they enter the integral with a smaller weight. Due to this memory property, the operator has
found several applications in models of hydrogeology, heat transmission and percolation.
The Caputo derivative is considered an operator of “fractional” order, as opposed to the integer order proper to classical derivatives.
The value $\alpha$ corresponds to the order of the derivative:
indeed, for a power of order $r>0$, it holds $\partial_t^{\alpha} t^{r}=C t^{r-\alpha}$ for some constant $C>0$.
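This scaling can be checked numerically; the following quick sketch (ours, with the arbitrary choices $r=2$ and $\alpha=1/2$) evaluates the Caputo integral by a midpoint rule and compares it with the known closed form $D_t^{\alpha} t^{r}=\frac{\Gamma(r+1)}{\Gamma(r+1-\alpha)}\, t^{r-\alpha}$:

```python
import math

def caputo_power(t, r, alpha, n=200_000):
    # Midpoint rule for D_t^alpha t^r
    #   = (1/Gamma(1-alpha)) * int_0^t r*tau^{r-1} * (t-tau)^{-alpha} dtau.
    # The singularity at tau = t is integrable for alpha < 1.
    h = t / n
    s = 0.0
    for k in range(n):
        tau = (k + 0.5) * h
        s += r * tau ** (r - 1) * (t - tau) ** (-alpha)
    return s * h / math.gamma(1 - alpha)

alpha, r = 0.5, 2.0
exact = lambda t: math.gamma(r + 1) / math.gamma(r + 1 - alpha) * t ** (r - alpha)
for t in (1.0, 2.0):
    # the quadrature matches the t^{r - alpha} law within 1%
    assert abs(caputo_power(t, r, alpha) - exact(t)) / exact(t) < 1e-2
```

In particular the ratio of the values at $t=2$ and $t=1$ is close to $2^{r-\alpha}$, confirming the $t^{r-\alpha}$ scaling.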
Among the other types of fractional derivatives, it is worth mentioning the Riemann-Liouville derivative because of its large diffusion. The Riemann-Liouville derivative is defined by
\begin{equation}
\mathcal{D}_t^{\alpha} f(t):= \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{0}^{t} \frac{f(\tau)}{(t-\tau)^{\alpha}} d\tau,
\end{equation}
and a direct computation shows that it differs from the Caputo derivative by the term $\frac{f(0)}{t^{\alpha}}$. One of the reasons for the popularity of the Riemann-Liouville derivative is that its limit as $\alpha\to1$ coincides with the classical derivative.
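The relation between the two derivatives can be checked in one line (a sketch, assuming $f\in\mathcal{C}^1$): splitting $f(\tau)=f(0)+(f(\tau)-f(0))$ in the Riemann-Liouville integral and integrating the second piece by parts gives
\begin{equation*}
\mathcal{D}_t^{\alpha} f(t)
= \frac{1}{\Gamma(1-\alpha)}\, \frac{d}{dt}\left[ f(0)\,\frac{t^{1-\alpha}}{1-\alpha} + \frac{1}{1-\alpha}\int_{0}^{t} \dot{f}(\tau)\, (t-\tau)^{1-\alpha}\, d\tau \right]
= \frac{f(0)}{\Gamma(1-\alpha)\, t^{\alpha}} + D_t^{\alpha} f(t),
\end{equation*}
so that, in the rescaled notation of (<ref>), the difference is exactly $\frac{f(0)}{t^{\alpha}}$.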
Evolution equations with Caputo time derivative.
Classical partial differential equations are often divided into three groups depending on the order of the time derivative: elliptic, parabolic, hyperbolic. Nevertheless, already in the Preface of Partial Differential Equations by Evans [45], the author states that this subdivision is artificial and “creates the false impression that there is some kind of general and useful classification scheme available”.
This subdivision is supposed to group together objects with similar behaviours, and classical theoretical results are usually stated for one of these clusters.
An evolution equation with the Caputo time derivative, for example
\begin{equation}\label{cap1}
\partial_t^{\alpha} u - \Delta u =0,
\end{equation}
is not part of any of these groups.
Thus, many results that one may want to use are not available and must be recovered.
However, relations with the classical objects are present.
Because of some behavioural similarities, evolution equations with the Caputo time derivative have often been compared to parabolic ones [84].
Recently, in [42], Dipierro and Valdinoci inspected a model of transmission in neural structures and derived equation (<ref>) from basic principles.
In doing so, they realized that it can be seen as a superposition of several hyperbolic equations acting with delay.
Despite this, the behaviour of solutions of (<ref>) is not similar to that of wave equations: in fact, in contrast with hyperbolic equations, (<ref>) has a regularising effect on initial data.
§.§.§ The fractional Laplacian
An operator that has become very popular in recent years is the fractional Laplacian, which is considered, in some sense, the fractional counterpart of its classical homonym.
Given $s\in(0,1)$, we define the fractional Laplacian as
\begin{equation}\label{0302}
- \left( \Delta\right)^s u(x) := \text{P.V.} \int_{\R^n} \frac{u(x)-u(y)}{|x-y|^{n+2s}} dy,
\end{equation}
where “P.V.” stands for Principal Value.
Curiously, many equivalent definitions of the fractional Laplacian are possible: apart from the one in (<ref>), in [1, 73] one can find a very exhaustive list, together with the most important properties of the operator known so far.
One of the reasons for the popularity of the fractional Laplacian is its connection with Lévy processes. We now give the idea behind the derivation of an evolution equation containing the fractional Laplacian from a discrete Lévy process, which is similar to the derivation of the heat equation from Brownian motion. Consider an infinite grid $h \Z^n$, for some step $h>0$, and a discrete evolution with time step $\tau=h^{2s}$. Imagine putting a particle at the origin. At each time step $t\in \tau \N$, the particle can jump to any vertex of the grid different from its current position, with a probability that depends on the length of the jump; namely, if the particle is at the position $hk$, with $k\in\Z^n$, the probability of jumping to the position $hj$, with $j\neq k$, is
\begin{equation*}
P_h(k,j)= \frac{C}{|j-k|^{n+2s}},
\end{equation*}
with $C$ a normalisation constant. Then, we call $u(t,x)$ the probability of finding the particle at $x\in h\Z^n$ at time $t\in\tau\N$. The function $u(t,x)$ evolves according to the jump probabilities; for example, the probability of finding the particle at the origin at time $t+\tau$ is
\begin{equation*}
u(t+\tau, 0)= \sum_{{j\in\Z^n\setminus 0}} P_h(0,j) u(t, j)= \sum_{{j\in\Z^n\setminus 0}} \frac{C }{|j|^{n+2s}}u(t,j).
\end{equation*}
By taking the limit as $h$ tends to $0$,
and by performing suitable manipulations,
the last equality becomes the evolution equation
\begin{equation*}
\partial_t u - (\Delta)^s u=0.
\end{equation*}
For all the details of the proof, we refer to [68].
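The jump mechanism above can also be sketched numerically. The following toy computation (our own illustration, with the arbitrary choices $n=1$, $s=1/2$ and a truncated grid whose rows are re-normalised to compensate for the truncation) builds the jump matrix and iterates the master equation, checking that probability is conserved and that arbitrarily long jumps occur in a single step:

```python
def levy_step_matrix(K, s):
    # Row-stochastic jump matrix on the 1-d grid {-K,...,K} (so n = 1):
    # P[a][b] proportional to |b - a|^{-(1+2s)} for b != a, with each row
    # normalised to compensate for the truncation of the infinite grid.
    size = 2 * K + 1
    P = []
    for a in range(size):
        w = [0.0 if b == a else abs(b - a) ** (-(1 + 2 * s)) for b in range(size)]
        Z = sum(w)
        P.append([x / Z for x in w])
    return P

K, s, steps = 200, 0.5, 5
size = 2 * K + 1
P = levy_step_matrix(K, s)
u = [0.0] * size
u[K] = 1.0  # particle starts at the origin
for _ in range(steps):
    # master equation: u(t+tau, b) = sum_a P[a][b] * u(t, a)
    u = [sum(P[a][b] * u[a] for a in range(size)) for b in range(size)]
assert abs(sum(u) - 1.0) < 1e-9  # probability is conserved
assert u[K + 50] > 0.0           # long jumps occur in a single step
```

In contrast with the nearest-neighbour walk behind the heat equation, here the mass spreads to distant sites immediately, which is the discrete signature of the nonlocality of $-(\Delta)^s$.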
Satellite-based measurements of animal movement performed in recent years have shown that Lévy processes are a better approximation of animal movement than Brownian motion. Some examples are provided by honey bee displacements and by the movement of marine predators when prey is scarce [10, 66, 101]. In general, it appears that Lévy flights are better-fitting hunting strategies than Brownian walks [77].
For this reason, the fractional Laplacian has been introduced in population dynamics models, see [11, 36, 55] and the references therein. However, the technical difficulties of dealing with such delicate operators have not been completely overcome.
§.§ Decay estimates for evolution equations with classical and fractional derivatives
Among the many open questions for fractional operators, we choose to study decay estimates for a class of evolution equations with possibly nonlocal or nonlinear diffusion operators.
In particular, we are going to study the decay in time of the Lebesgue norm of solutions to a Cauchy problem in a bounded domain.
We present some general results that apply to a wide class of evolution equations, namely all those involving a diffusion operator satisfying a certain ellipticity property, expressed through an “energy functional” that suits both local and nonlocal operators, possibly complex valued. The time derivative may be of two types: purely classical, or a linear combination of a classical derivative and a Caputo derivative.
§.§.§ The problem
We now set the problem.
Let $\lambda_1, \lambda_2 \geq 0$ be fixed nonnegative numbers.
We suppose, for concreteness,
\begin{equation*}
\lambda_1 + \lambda_2=1,
\end{equation*}
but, up to a rescaling of the operator, $\lambda_1, \lambda_2$ can be
any nonnegative numbers with positive sum.
Let $\Omega \subset \R^n$ be a
bounded open set and let $u_0\in L^{\infty}(\R^n)$ be such that $\text{supp} \,u_0 \subset \Omega$.
Consider the Cauchy problem
\begin{equation} \label{sys:generalform}
\left\{ \begin{array}{lr}
(\lambda_1 \partial_t^{\alpha} + \lambda_2 \partial_t) [u] + \mathcal{N}[u]=0, & {\mbox{for all }}x\in \Omega, \ t>0, \\
u(x,t)=0, & {\mbox{for all }}x\in \R^n \setminus \Omega , \ t>0, \\
u(x,0)=u_0(x), & {\mbox{for all }}x\in \R^n ,
\end{array} \right.
\end{equation}
where $\mathcal{N}$ is an operator, possibly involving fractional derivatives.
We underline that we consider smooth (often $C^1$, or $C^2$ if second derivatives also appear) and bounded solutions of problem (<ref>). In fact, we want to avoid convergence problems with the integrals appearing in the statements and in the proofs. However, for certain operators, weaker hypotheses may be assumed.
Let us recall that for a complex valued function $v:\Omega\to\C$ the Lebesgue norm is
\begin{equation*}
\Vert v \Vert_{L^s(\Omega)} = \left( \int_{\Omega} |v(x)|^s \; dx \right)^{\frac{1}{s}}
\end{equation*}
for any $s\in[1, +\infty)$. Also, we call $\Re \{ z\}$ the real part of $z\in\C$.
The main assumption we take is the following: there exist $\gamma \in (0,+\infty) $ and $C\in (0,+\infty)$ such that
\begin{equation} \label{cond:complexstr}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) }^{s-1+\gamma} \leq C \int_{\Omega} |u(x,t)|^{s-2} \Re \{ \bar{u}(x,t)\mathcal{N} [u](x,t)\} \; dx.
\end{equation}
The constants $\gamma$ and $C$, and their dependence on the parameters of the problem, may vary from case to case. The right-hand side of the inequality may be seen as an energy functional linked to the diffusion operator.
This inequality expresses, essentially, that the operator $\mathcal{N}$ is not too degenerate and that the energy of the solution controls a certain power of the solution itself; here $\gamma$ plays the role of the degree of ellipticity.
The inequality (<ref>) strongly depends on the validity of a Sobolev inequality for the solutions of the evolution equation.
To get an intuition of the roles of the various factors, take the case of the Laplacian with $s=2$; integrating by parts on the right-hand side one obtains $\Vert \nabla u (\cdot, t) \Vert_{L^{2}(\Omega) }^2$, that is the energy, which controls the $L^2$ norm of the solution by the Gagliardo-Nirenberg-Sobolev inequality.
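A concrete instance (a standard sketch for real-valued solutions, not claimed to be the computation used in the text): for $\mathcal{N}=-\Delta$ and $s=2$, integration by parts and the Poincaré inequality, with $\lambda_1(\Omega)>0$ the first Dirichlet eigenvalue of $-\Delta$ on $\Omega$, give
\begin{equation*}
\int_{\Omega} u\,\mathcal{N}[u]\, dx = \int_{\Omega} |\nabla u|^2\, dx \;\geq\; \lambda_1(\Omega) \int_{\Omega} u^2\, dx = \lambda_1(\Omega)\, \Vert u(\cdot,t) \Vert_{L^{2}(\Omega)}^{2},
\end{equation*}
that is, (<ref>) with $s=2$, $\gamma=1$ and $C=\lambda_1(\Omega)^{-1}$.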
In our setting, the structural inequality in (<ref>)
will be the cornerstone to obtain general energy estimates,
which, combined with appropriate barriers, in turn
produce time-decay estimates.
§.§ Our Results
Extending the method of [43], we obtain a power-law decay in time
of the $L^s$ norm with $s\geq 1$.
Moreover, in the case of classical time derivatives,
we obtain exponential decay in time. The difference between
polynomial and exponential decay in time is thus related to
the possible presence of a fractional derivative in the time variable.
§.§.§ Decay estimate theorems
First, we present the result in the more general setting, hence for a linear combination of the classical and the Caputo time derivatives. We have the following:
Let $u$ be a solution of the Cauchy problem (<ref>), with $\mathcal{N}$ possibly complex
valued. Suppose that there exist $s\in[1, +\infty)$, $\gamma\in(0,+\infty)$ and $C\in(0,+\infty)$ such that $u$ satisfies (<ref>). Then,
\begin{equation} \label{claim1gen}
(\lambda_1\partial_t^{\alpha} + \lambda_2\partial_t) \Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq -\dfrac{\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) }^{\gamma}}{C},
\qquad{\mbox{ for all }}t>0,\end{equation}
where $C$ and $\gamma$ are the constants appearing in (<ref>). As a consequence,
\begin{equation} \label{claim2gen}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq
\dfrac{C_*}{1+t^{\frac{\alpha}{\gamma}}},\qquad{\mbox{ for all }}t>0,
\end{equation}
for some $C_*>0$, depending only on $C$, $\gamma$, $\alpha$
and $\Vert u_0(\cdot) \Vert_{L^{s}(\R^n)}$.
A polynomial decay is a valuable piece of information on the solution, and we can expect it to be the best decay obtainable for some fractional evolution equations [56, 25]. However, there is also evidence that in classical settings better decays can be achieved. In fact, the following theorem holds:
Let $u$ be a solution of the Cauchy problem (<ref>) with only the classical time derivative (that is, $\lambda_1=0$) and $\mathcal{N}$ possibly complex
valued. Suppose that there exist $s\in[1, +\infty)$, $\gamma\in(0,+\infty)$ and $C\in(0,+\infty)$ such that $u$ satisfies (<ref>).
Then, for some $C_*>0$, depending only on the constants $C$ and $\gamma$
in (<ref>),
and on $\Vert u_0(\cdot) \Vert_{L^{s}(\R^n)}$, we have that:
1. if $0<\gamma \leq 1$, the solution $u$ satisfies
\begin{equation} \label{claim3}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq
C_* \, e^{-\frac{t}{C}},\qquad{\mbox{for all }}t>0;
\end{equation}
2. if $ \gamma>1$, the solution $u$ satisfies
\begin{equation} \label{claim4}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq
\dfrac{C_*}{1+t^{\frac{1}{\gamma-1}}},\qquad{\mbox{for all }}t>0.
\end{equation}
As we will see in the proofs
of these two theorems, the idea is to find a supersolution of (<ref>) and use a comparison principle in order to estimate the decay of the solution $u$. In the case of mixed derivatives, Vergara and Zacher [119] found both a supersolution and a subsolution decaying as $t^{-\frac{\alpha}{\gamma}}$. However, when $\alpha \to 1$, the subsolution tends to 0.
On the other hand, the classical equation $\partial_t e =- e^{\gamma}$ admits exponential supersolutions.
This allows possibly better decays, which are in fact proven.
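The dichotomy between the two theorems is already visible at the level of the comparison ODE. The following sketch (ours, with the arbitrary normalisation $C=1$ and initial datum $e_0=1$) evaluates the exact solutions of $\partial_t e = -e^{\gamma}/C$ and checks the exponential versus polynomial decay rates:

```python
import math

def energy(t, e0, gamma, C=1.0):
    # Exact solution of the comparison ODE  e'(t) = -e(t)**gamma / C,  e(0) = e0.
    if gamma == 1.0:
        return e0 * math.exp(-t / C)            # exponential decay
    return (e0 ** (1 - gamma) + (gamma - 1) * t / C) ** (-1 / (gamma - 1))

# gamma = 1: exponential decay, so log e(t) is linear in t
r1 = math.log(energy(10.0, 1.0, 1.0)) / math.log(energy(20.0, 1.0, 1.0))
assert abs(r1 - 0.5) < 1e-12
# gamma = 2: decay like t**(-1/(gamma-1)) = 1/t, so doubling t halves e(t)
r2 = energy(2000.0, 1.0, 2.0) / energy(1000.0, 1.0, 2.0)
assert abs(r2 - 0.5) < 1e-3
```

The case $\gamma>1$ already gives only the $t^{-\frac{1}{\gamma-1}}$ rate at the ODE level, matching the second claim of the theorem above.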
We point out that the case of an evolution equation with only the Caputo time derivative, i.e. $\lambda_2=0$, was treated in [43]. The authors found that in this case the supersolution is still asymptotic to $t^{-\frac{\alpha}{\gamma}}$ and the decay is of polynomial type.
It is interesting to notice the presence of a “decoupling” effect: for evolution equations with a classical time derivative and a fractional space derivative (take for example the fractional Laplacian, $-(\Delta)^{\sigma}u$, $\sigma\in(0,1)$, see [43]), the space derivative does not asymptotically interfere with the time derivative;
thus the polynomial decay, typical of fractional derivatives, does not appear, leaving room for the exponential decay given by the classical time derivative.
An example of this behaviour is found in [93], where a model inspired by atom dislocation was studied.
§.§.§ Applications
What makes Theorems <ref> and <ref> interesting is the fact that they may be applied to a wide range of equations. Indeed, the only hypothesis required in order to apply the theorems is the validity of inequality (<ref>) for suitable parameters $C$ and $\gamma$. In [43] and in our work, (<ref>) was verified for many operators, which we list here together with some references on their origins:
* the classic and fractional Laplacian, [27];
* the classic and fractional $p$-Laplacian, [32];
* the doubly nonlinear equation, [100];
* the classic and fractional porous medium equations, [118, 29] and [39];
* the classic and fractional mean curvature equation, [28];
* the classic and fractional Kirchhoff equation, [23] and [49];
* the classic and fractional magnetic operator, [67] and [38].
The list is not meant to be exhaustive; in fact, the aim is only to provide some examples of operators satisfying (<ref>), and to encourage other mathematicians looking for decay estimates to try the method on the operators they are struggling with.
CHAPTER: A FISHER-KPP MODEL WITH A FAST DIFFUSION LINE IN PERIODIC MEDIA
In this chapter, we treat a model of population dynamics in a periodic environment presenting a fast diffusion line. The
“road-field” model, introduced in [20], is a system of coupled reaction-diffusion equations set in domains of different dimensions. Here, we consider for the first time the case of a reaction term depending on a spatial variable in a periodic fashion, which is of great interest both for its mathematical difficulty and for its applications. We derive necessary and sufficient conditions for the survival of the species in terms of the sign of a suitable generalised principal eigenvalue, defined recently in [16]. Moreover, we compare with the long time behaviour of a population in the same environment without the fast diffusion line, finding that this element has no impact on the survival chances. This chapter corresponds to the paper [2].
§ SETTING AND MAIN RESULTS
This chapter investigates some effects of a fast diffusion line in an ecological dynamics problem.
Various examples in the literature have shown that, in the presence of roads or trade lines, some species or infections spread faster along these lines, and then diffuse into the surroundings. This was observed in the case of the Processionary caterpillar, whose spread in France and Europe has been accelerated by accidental human transport [103]. Another striking example was given in [52], where the authors point out that the COVID-19 epidemic in Northern Italy at the beginning of 2020 diffused faster along the highways.
A model for biological diffusion in a homogeneous medium presenting a fast diffusion line was proposed by Berestycki, Roquejoffre and Rossi in [20], and has since been called the road-field model. The authors proved an acceleration effect of the road on the spreading speed of an invading species.
Since then, a growing number of articles have treated variations of the same system, investigating in particular the effect of different types of diffusion or different geometries [13, 14, 105].
However, natural environments are usually far from homogeneous and, more often than not, territories are a composition of different habitats. Living conditions and heterogeneity have a strong impact on the survival chances of a species and on the equilibria at which the population can settle.
Road-field models in heterogeneous environments have been little studied so far, being more complex to treat. One of the few examples is the paper [58], for periodic exchange terms between the population on the road and the one in the field. Recently, Berestycki, Ducasse and Rossi introduced a notion of generalised principal eigenvalue for the road-field system in [16] and, thanks to it, they were able to treat the case of an ecological niche facing climate change in [17].
Here, we propose an analysis of the asymptotic behaviour of an invasive population under the assumption of spatial periodicity of the reaction term. Under this hypothesis we can investigate more deeply the dependence of the population on a natural-like environment and the effects of the road on this balance.
Under which conditions does the population survive in a periodic medium? And does the road play some role in the survival chances of a species, perturbing the environment and scattering the individuals, or rather allowing them to reach advantageous zones more easily?
These are the questions we are going to tackle.
§.§ The model
In this chapter, we study the reaction-diffusion model regulating the dynamics of a population living in a periodic environment with a fast diffusion channel. The equivalent of this model for homogeneous media was first introduced by Berestycki, Roquejoffre and Rossi in [20]. Consider the half plane $\Omega:=\R\times \R^+$, where we mean $\R^+=(0, +\infty)$.
The model prescribes that the species diffuses in $\Omega$ and that on $\partial \Omega=\R\times \{ y=0\}$ the population diffuses at a different speed.
We call $v(x,y,t)$ the density of the population at $(x,y)\in\Omega$, hence in the “field”, and $u(x,t)$ the density of the population at $x\in\R$, i.e. on the “road”; moreover, we take $D$, $d$, $\nu$, $\mu$ positive constants and $c\geq 0$. Then, the system we analyse reads
\begin{equation}\label{ch1sys:fieldroad}
\left\{
\begin{array}{lr}
\partial_t u-D u '' -c u' - \nu v|_{y=0} + \mu u= 0, & x\in \R, \\
\partial_t v -d \Delta v-c\partial_x v =f(x,v), & (x, y)\in \Omega, \\
-d \partial_y{v}|_{y=0} + \nu v|_{y=0} -\mu u=0, & x\in\R.
\end{array}
\right.
\end{equation}
In $\Omega$, the population evolves with a net birth-death rate represented by $f$, that depends on the variable $x$.
This embodies the heterogeneity of the medium: in fact, environments are typically not uniform and some zones are more favourable than others.
There is no dependence on the variable $y$, since the presence of the road itself creates enough heterogeneity in that direction.
The function $f:\R\times \R_{\geq 0}\to \R $
is always supposed to be $\mathcal{C}^{1}$ in $x$, locally in $v$, and Lipschitz in $v$, uniformly in $x$; moreover, we suppose that the value $v=0$ is an equilibrium, that is
\begin{equation}\label{ch1hyp:0}
f(x,0)=0, \quad \text{for all} \ x\in \R,
\end{equation}
and that
\begin{equation}\label{ch1hyp:M}
\exists M>0 \ \text{such that} \ f(x, v)<0 \quad \text{for all} \ v>M \ \text{and all} \ x\in \R.
\end{equation}
We will derive some inequalities on the generalised principal eigenvalue of (<ref>) in the general case of $f$ satisfying these hypotheses and
$c$ possibly nonzero.
The characterisation of extinction or persistence of the species is performed in the case $c=0$ and $f$ a periodic function, reflecting the periodicity of the environment in which the population diffuses.
We will analyse the case of a KPP nonlinearity, that is, we require that
\begin{equation}\label{ch1hyp:KPP}
\frac{f(x,s_2)}{s_2}< \frac{f(x,s_1)}{s_1} \quad \text{for all} \ s_2>s_1>0 \ \text{and all} \ x\in\R.
\end{equation}
Then, we suppose that there exists $\ell> 0$ such that
\begin{equation}\label{ch1hyp:per}
f(x+\ell, s)=f(x,s) \quad \text{for all} \ s >0 \ \text{and all} \ x\in\R.
\end{equation}
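A model nonlinearity fulfilling all of the above hypotheses, recalled here only for concreteness, is the periodic logistic one,
\begin{equation*}
f(x,v) = a(x)\, v\,(1-v), \qquad a\in\mathcal{C}^{1}(\R), \quad a>0, \quad a(\cdot+\ell)=a(\cdot),
\end{equation*}
for which $\frac{f(x,s)}{s} = a(x)(1-s)$ is strictly decreasing in $s>0$, so that (<ref>) holds, and one can take $M=1$ in (<ref>).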
To study the effect of the line of fast diffusion, we will compare the behaviour of (<ref>) to the one of the system
\begin{equation}\label{ch1sys:symmetric}
\left\{
\begin{array}{ll}
v_t-d\Delta v - c\partial_x v= f(x,v), & (x,y)\in\Omega,\\
-\partial_y v|_{y=0} =0, & x\in\R,
\end{array}
\right.
\end{equation}
whose solution is a function $v(x,y)$ that can be extended by symmetry to the whole plane, thanks to the Neumann boundary condition.
It is natural to consider system (<ref>) as the counterpart of system (<ref>) in the case without the road, since it presents the same geometry and the same boundary condition, except for the exchange terms that are in place in the case of a fast diffusion channel.
§.§ State of the art
We present here the background that led us to consider system (<ref>), together with some useful results known in the community.
The study of reaction-diffusion equations started with the works of Fisher [50] and of Kolmogorov, Petrowskii and Piskunov [72], who modelled the spatial diffusion of an advantageous gene in a population living in a one-dimensional environment through the equation
\begin{equation}\label{ch1eq:KPP}
\partial_t v -d \, \partial_{xx}^2 v = f(v)
\end{equation}
for $x\in\R$ and $t\geq 0$. For (<ref>), it is supposed that $d>0$ and that $f\geq 0$ is a $\mathcal{C}^1$ function satisfying $f(0)=f(1)=0$ and the KPP hypothesis $f(v)\leq f'(0)v$ for $v\in[0,1]$. The first example was a nonlinearity of logistic type, namely $f(v)= av(1-v)$ for some $a>0$.
It was shown that any solution $v$ issued from a nonnegative, not identically zero, initial datum $v_0$ converges to 1 as $t$ goes to infinity, locally uniformly in space; this long time behaviour is called invasion.
The generalisation of equation (<ref>) to higher dimensions was then used to study the spatial diffusion of animals, plants, bacteria and epidemics [109, 89].
A vast literature has originated from these pioneering works, studying various aspects of the homogeneous equation (<ref>), in particular concerning travelling fronts. These are solutions of the form $v(t,x)= V(x \cdot e +ct)$ with $V:\R\to[0,1]$, where $e$ is the direction of propagation and $c$ the speed of propagation of the travelling front. Besides this, researchers have investigated the asymptotic speed of propagation at which the level sets of a solution starting from $v_0$ expand. These topics arose already in [50] and [72], and their investigation was continued in many interesting articles, among which [48] and [7].
The correspondence of the theoretical results with actual data, as seen in [109], was encouraging; however, it was clear that natural environments, even at macroscopic levels, were not well represented by a homogeneous medium, due to the alternation of forests, cultivated fields, plains, scrubs and many other habitats, as well as roads, rivers and other barriers [70]. It was necessary to look at more sophisticated features, such as the effects of inhomogeneity, fragmentation, barriers and fast diffusion channels and, on top of that, climate change.
A first analysis was carried out
in [108, 107] and
in [70] for the so-called patch model. The authors considered a periodic mosaic of two different homogeneous habitats, one favourable and one unfavourable for the invading species.
In [70], the authors studied the long time behaviour of the population starting from any nonnegative initial datum. For further convenience, let us give the following definition:
For the equation of (<ref>) or the system (<ref>), we say that
* extinction occurs if any solution starting from a nonnegative bounded initial datum converges to $0$ or to $(0,0)$ uniformly as $t$ goes to infinity;
* persistence occurs if any solution starting from a nonnegative, nonzero, bounded initial datum converges to a positive stationary solution locally uniformly as $t$ goes to infinity.
In [70], it was empirically shown that the stability of the trivial solution $v=0$ determines the long time behaviour of the solutions. A solid mathematical framework for a general periodic environment was given in [18]. There, the authors considered the equation
\begin{equation}\label{ch1eq:bhroques}
\partial_t v - \nabla \cdot (A(x)\cdot \nabla v) = f(x, v)
\end{equation}
for $x\in \R^N$ and $t\geq 0$.
The diffusion matrix $A(x)$ is supposed to be $\mathcal{C}^{1, \alpha}$, uniformly elliptic and periodic; however, for our purposes we can suppose $A(x)=d\, I_{N}$, where $I_N$ is the identity matrix. The nonlinearity $f: \R^N \times \R_{\geq0} \to \R$ is supposed to be $\mathcal{C}^{1}$ in $x$, locally in $v$, and Lipschitz in $v$, uniformly in $x$, satisfying hypotheses (<ref>)-(<ref>) and such that for some $L=(L_1, \dots, L_N)$, with $L_i\geq 0$, it holds
\begin{equation}\label{ch1hyp:per'}
f(x+L,s)=f(x,s) \quad \text{for all} \ s\geq 0 \ \text{and all} \ x\in\R^N.
\end{equation}
The criterion for persistence or extinction is given via a notion of periodic principal eigenvalue, that is, the unique number $\lambda_p(-\mathcal{L'}, \R^N)$
such that there exists a solution $\psi\in W_{loc}^{2, p}(\R^N)$ to the system
\begin{equation}\label{ch1sys:L_RN_p}
\left\{
\begin{array}{ll}
\mathcal{L'}(\psi) + \lambda \psi = 0, & x\in\R^N, \\
\psi> 0, & x\in\R^N, \\
|| \psi ||_{\infty}=1, \\
\psi \ \text{is periodic in $x$ of periods $L$},
\end{array}
\right.
\end{equation}
where $\mathcal{L'}$ is given by
\begin{equation}\label{ch1def:mathcal_L'}
\mathcal{L'}(\psi):= d \Delta \psi + f_v(x,0)\psi.
\end{equation}
We point out that the existence and uniqueness of $\lambda_p(-\mathcal{L'}, \R^N)$ is guaranteed by Krein-Rutman theory. The long time behaviour result in [18] is the following:
Assume $f$ satisfies (<ref>)-(<ref>) and (<ref>). Then:
* If $\lambda_p(-\mathcal{L'}, \R^N)<0$, persistence occurs for (<ref>).
* If $\lambda_p(-\mathcal{L'}, \R^N)\geq 0$, extinction occurs for (<ref>).
To prove Theorem <ref>, the authors performed an analysis of $\lambda_p(-\mathcal{L'}, \R^N)$, proving that it coincides with the limit of the eigenvalues on a sequence of domains invading $\R^N$, so that it coincides with the generalised principal eigenvalue of the system “without the road” (<ref>). Nowadays, this and many other properties of this eigenvalue can be found as part of a broader framework in [22]. In Section <ref>, we will provide further comments on it.
Another important fact, highlighted both in the series [108, 107, 70] and in [18], is that the presence of multiple small unfavourable zones gives fewer chances of survival than a single large one of equal total surface.
A further difficulty that one may consider while studying ecological problems is, sadly, the issue of a changing climate. A 1-dimensional model in this sense was first proposed in
[15] and [97], and was later treated in higher dimension in [21].
The authors first imagined that a population lives in a favourable region enclosed in a disadvantageous environment; due to climate change, the favourable zone starts to move in one direction, while keeping the same surface. The resulting equation is
\begin{equation}\label{ch11709}
\partial_t v - \Delta v=f(x-ct e,v) \quad \text{for} \ x\in\R^N,
\end{equation}
with $e$ a direction in $\mathbb{S}^{N-1}$ and $f: \R^N\times \R_{\geq 0} \to \R$. It was observed that a solution to (<ref>) in the form of a travelling wave $v(x,t)=V(x-cte)$ solves the equation
\begin{equation}\label{ch1eq:cc}
\partial_t V - \Delta V- c\,e\cdot \nabla V=f(x,V) \quad \text{for} \ x\in\R^N,
\end{equation}
which is more tractable. The main question is whether the population keeps pace with the shifting climate, that is, whether the species is able to migrate at the same speed as the climate. The answer to this question is positive if a solution to (<ref>) exists; this depends on the value of $c$. We point out that already in [21] the authors considered the general case of a possibly periodic $f(x,v)$.
As mentioned before, another feature worth investigation is the effect of fast diffusion channels on the survival and the spreading of species. In fact, the propagation of invasive species as well as epidemics is influenced by the presence of roads [103, 52].
These observations led Berestycki, Roquejoffre and Rossi to propose a model for ecological diffusion in the presence of a fast diffusion channel in [20], the so-called road-field model. The field is modelled by the half plane $\Omega=\R \times \R_{+}$ and the line by the $x$-axis; the main idea is to use two different variables for the density of population, $u$ along the line and $v$ on the half plane. The system reads
\begin{equation*}
\left\{
\begin{array}{ll}
\partial_t u(x,t) - D \partial_{xx}^2 u (x,t) = \nu v (x,0,t) - \mu u(x,t), & x\in \R, t > 0, \\
\partial_t v(x,y,t) - d \Delta v (x,y,t)= f(v), & (x,y) \in \Omega, t>0, \\
-d \partial_y v(x,0,t) = -\nu v(x,0,t) + \mu u(x,t), & x \in \R, t>0,
\end{array} \right.
\end{equation*}
for $D$, $d$, $\nu$, $\mu$ positive constants; moreover, $f\in \mathcal{C}^1$ was assumed to satisfy
\begin{equation*}
f(0)=f(1)=0, \quad 0< f(s) < f'(0)s \ \text{for} \ s \in (0,1), \quad f(s)<0 \ \text{for} \ s>1.
\end{equation*}
The three equations describe, respectively, the dynamics on the line, the dynamics on the half plane, and the exchanges of population between the two. On the line, the diffusion is faster than in $\Omega$ if $D>d$.
In [20], the authors identify the unique positive stationary solution $\left(\frac{\nu}{\mu}, 1 \right)$ and prove the persistence of the population.
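To see where this stationary pair comes from, here is a direct check (our own computation, using only the system above):

```latex
% For constant states (u,v), the first two equations reduce to
%   0 = \nu v - \mu u   and   0 = f(v),
% and the assumptions on f give v = 1 as the unique positive zero, hence u = \nu/\mu.
% The exchange condition is then satisfied as well:
\begin{equation*}
-d\,\partial_y v\big|_{y=0} = 0 = -\nu\cdot 1 + \mu\cdot\frac{\nu}{\mu}\,.
\end{equation*}
```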
Moreover, they show that the presence of the line increases the spreading speed. Another version of the model, with a reaction term on the line, was presented by the same authors in [19], while many variations of the model were proposed by other authors: with nonlocal exchanges in the direction of the road [94, 95], with nonlocal diffusion [14, 13], and with different geometric settings [105]. For a complete list, we refer to [112].
The case of heterogeneous media for systems of road-field type has so far received little attention, due to its difficulties. A first road-field model with exchange terms that are periodic in the direction of the road was proposed in [58]. There, the authors
recovered the results on persistence and on the acceleration of the propagation speed due to the road known in the homogeneous case;
they also studied the spreading of solutions with exponentially decaying initial data and computed their speeds.
Recently, Berestycki, Ducasse and Rossi introduced in [16] a new generalised principal eigenvalue for road-field systems with a possibly heterogeneous reaction term; here, we give its definition directly for system (<ref>). Consider the operators
\begin{equation}\label{ch1sys:operators}
\left\{
\begin{array}{l}
\mathcal{R}(\phi, \psi):=D \phi''+c \phi'+\nu {\psi}|_{y=0}-\mu \phi, \\
\mathcal{L}(\psi):= d\Delta \psi +c \partial_x \psi +f_v(x,0)\psi, \\
B(\phi, \psi):=d \partial_y {\psi}|_{y=0}+\mu \phi- \nu {\psi}|_{y=0},
\end{array}
\right.
\end{equation}
Then, this eigenvalue is defined as
\begin{equation}\label{ch1def:lambda1_S_Omega}
\begin{split}
\lambda_1( \Omega)=\sup \{ \lambda \in \R \ : \ \exists (\phi, \psi)\geq (0,0), \ (\phi, \psi) \not\equiv(0,0), \ \text{such that} \\ \mathcal{L}(\psi) + \lambda \psi \leq 0 \ \text{in} \ \Omega, \ \mathcal{R}(\phi, \psi) +\lambda \phi \leq 0
\ \text{and} \ B(\phi, \psi)\leq 0 \ \text{in} \ \R \},
\end{split}
\end{equation}
with $(\phi, \psi)$ belonging to $W_{loc}^{2,3}(\R)\times W_{loc}^{2,3}(\overline{\Omega})$. Together with the definition, many interesting properties and bounds were studied; we will recall some of them later.
Thanks to that, the same authors were able to investigate the case of
a favourable ecological niche, possibly facing climate change, in [17]. It was proven that the sign of $\lambda_1( \Omega)$ characterises
the extinction or the persistence of the population; moreover, comparing the results with the ones found for the model without the road, a deleterious effect of the road on the survival chances is always found when there is no climate change. On the other hand, if the ecological niche shifts, the road has in some cases a positive effect on the persistence.
§.§ Main results
We are now ready to present the main results of this chapter.
§.§.§ The case of a periodic $f(x,v)$
Here, we consider the case of a nonlinearity that satisfies the KPP hypothesis and is periodic in the direction of the road. Moreover, in this part we always take $c=0$.
We begin with the following result on the long time behaviour of solutions to system (<ref>):
Assume that $f$ satisfies (<ref>)-(<ref>), that $c=0$, and let $\lambda_1(\Omega)$ be as in (<ref>).
Then the following holds:
* if $\lambda_1( \Omega)\geq 0$, then extinction occurs.
* if $\lambda_1(\Omega)<0$, then persistence occurs and the positive stationary solution $(u_{\infty}, v_{\infty})$ is unique and periodic in $x$.
Now, we compare the behaviour of solutions to the system (<ref>) with the ones of system (<ref>).
This allows us to highlight the effects of the fast diffusion channel on the survival chances of the population.
Actually, since solutions of (<ref>) can be extended by reflection to the whole plane, we can make the comparison with equation (<ref>) for $A(x)=d I_2$ and $L=(\ell, 0)$.
The comparison is performed thanks to the generalised principal eigenvalue $\lambda_1(\Omega)$ for system (<ref>) and the periodic eigenvalue $\lambda_p(-\mathcal{L}, \R^2)$, as defined in (<ref>), for the operator $\mathcal{L}$ in dimension 2.
We obtain the following:
Assume that $f$ satisfies hypotheses (<ref>)-(<ref>) and that $c=0$. Then:
* if $\lambda_p(-\mathcal{L}, \R^2)<0$, then $\lambda_1( \Omega)<0$, that is, if persistence occurs for the system “without the road” (<ref>), then it occurs also for system “with the road” (<ref>).
* if $\lambda_p(-\mathcal{L}, \R^2)\geq 0$, then $\lambda_1( \Omega)\geq 0$, that is, if extinction occurs for the system “without the road” (<ref>), then it occurs also for system “with the road” (<ref>).
Theorem <ref> says that the road has no negative impact on the survival chances of the population in the case of a periodic medium depending only on the variable in the direction of the road.
This is surprising if compared with the results obtained in [17] (precisely Theorem 1.5, part (ii)), where the authors find that the presence of the road is deleterious in the presence of an ecological niche, and even more counter-intuitive owing to the fact that fragmentation of the environment lessens the survival chances of a population, as shown in [18]. This means that, in the case of periodic media, the presence of the fast diffusion channel does not interfere with the persistence of the population, which depends only on the environment of a periodicity cell.
As seen in [18], where the dependence of persistence on the amplitude of fragmentation was studied, if the favourable zones are sufficiently large, the population will eventually spread in all of them; the presence of the road does not cause loss of favourable environment and consequently of persistence chances.
However, we expect the spreading speed to be influenced by the presence of the road, as it has been already proven in the case of homogeneous environment.
We point out that Theorem <ref> completes and is in accordance with the results on long time behaviour found in [20] for a homogeneous reaction term, which can be seen as a particular case of a periodic one satisfying the KPP hypothesis with $f'(0)>0$. In [20], Theorem 4.1 states the convergence of any positive solution to the unique positive stationary solution of the system. Since it is well known that in the homogeneous case $\lambda_1(-\mathcal{L}, \R^2)=- f'(0)$, the positivity hypothesis gives $\lambda_1(-\mathcal{L}, \R^2)<0$ and, as a consequence of Theorem <ref>, that the second case of our Theorem <ref> occurs.
If instead we asked for $f'(0)\leq0$, then we would be in the first case of Theorem <ref>, yielding extinction of the population.
Effects of amplitude of heterogeneity.
One may expect the presence of a road to alter the complex interaction between more and less favourable zones, in particular penalising persistence, since it was shown that populations prefer a less fragmented environment. However, the road does not interfere with that; as a consequence, also for environments presenting fast diffusion channels, some results of the analysis of the effect of fragmentation performed in [18] hold.
Take a parameter $\alpha>0$ and consider system (<ref>) with nonlinearity
\begin{equation}\label{ch11421}
\tilde{f}(x,v)=\alpha f(x,v).
\end{equation}
To highlight the dependence on $\alpha$, we will call $\lambda_1(\Omega, \alpha)$ the generalised principal eigenvalue defined in (<ref>) with nonlinearity $\tilde{f}$.
As a direct consequence of Theorem <ref> and Theorem 2.12 in [18], we have the following result on the amplitude of heterogeneity:
Assume $\tilde{f}$ is defined as in (<ref>), $f$ satisfies (<ref>)-(<ref>), and $c=0$. Then:
* if $ \int_{0}^{\ell} f_v(x,0)>0$, or if $ \int_{0}^{\ell} f_v(x,0)=0$ and $f\not\equiv 0$, then for all $\alpha >0$ we have $\lambda_1(\Omega, \alpha )<0$.
* if $ \int_{0}^{\ell} f_v(x,0)<0$, then $\lambda_1(\Omega, \alpha )>0$ for $\alpha$ small enough; if moreover there exists $x_0\in[0,\ell]$ such that $f_v(x_0,0)>0$, then for all $\alpha$ large enough $\lambda_1(\Omega, \alpha )<0$.
This result makes precise the fact that, in order to persist, a species must have a sufficiently large favourable zone available. If the territory is more advantageous than not, then the population persists. If however the environment is generally unfavourable, the population persists only if some contiguous advantageous zones are large enough; if instead the advantageous zones are fragmented, even if the total favourable territory is unlimited, the population goes extinct.
§.§.§ A climate change setting for a general $f(x,v)$
We now consider a general nonlinearity depending on the spatial variable in the direction of the road. We stress that we do not assume any periodicity; the case of a periodic $f$ is a particular case of this setting. Moreover, the following result holds in the general framework of a possible climate change, so the parameter $c$ may differ from $0$.
The comparison between the systems with and without the road is performed, in the general case, through the comparison between $\lambda_1(\Omega)$ and the generalised principal eigenvalue of system (<ref>), given by
\begin{equation}\label{ch1lambda:L_Omega}
\begin{split}
\lambda_1(-\mathcal{L}, \Omega)=\sup \{ \lambda \in \R \ : \ \exists \psi \geq 0, \psi \not\equiv 0 \ \text{such that} \\
\mathcal{L}(\psi) + \lambda \psi \leq 0 \ \text{on} \ \Omega, \ -\partial_y \psi|_{y=0}\leq 0 \ \text{on} \ \R \}
\end{split}
\end{equation}
for $\psi\in W_{loc}^{2,3}(\Omega)$. With this notation, we have the following:
Let $\lambda_1(-\mathcal{L}, \R^2)$ be as in (<ref>) and $\lambda_1(\Omega)$ be as in (<ref>); then $\lambda_1(-\mathcal{L}, \R^2) \geq \lambda_1(\Omega)$.
In the special case $c=0$, some information on the relations between $\lambda_1(-\mathcal{L}, \R^2)$ and $\lambda_1(\Omega)$ was already available in [17]: Proposition 3.1 yields that $\lambda_1(-\mathcal{L}, \R^2)\geq 0$ implies $\lambda_1(\Omega)\geq 0$. Thanks to that and Theorem <ref>, the following result holds:
If $c=0$, we have $\lambda_1(-\mathcal{L}, \R^2)<0$ if and only if $\lambda_1(\Omega)<0$.
As already pointed out in [16], even for $c=0$ it is not true that $\lambda_1(-\mathcal{L}, \R^2) =\lambda_1(\Omega)$. In fact, it is known that $\lambda_1(\Omega) \leq \mu$, while, by a suitable choice of $f$, one can make $\lambda_1(-\mathcal{L}, \R^2)$ as large as desired. However, the fact that the two eigenvalues have the same sign reveals that they are deeply linked.
§.§ Organisation of the chapter
In Section <ref>, we recall and discuss the properties of the eigenvalues $\lambda_1(\Omega)$, $\lambda_1(-\mathcal{L}, \R^2)$ and $\lambda_p(-\mathcal{L}, \R^2)$ already known in the literature.
Furthermore, a periodic eigenvalue for the system (<ref>) will be defined; because of the presence of the road, the periodicity is present only in the $x$ direction. As a consequence, it is useful to define an analogous generalised eigenvalue for the system without the road (<ref>) with periodicity only in the direction of the road.
In Section <ref>, one finds the proof of Theorem <ref> and Theorem <ref>. Moreover, the relations between the newly defined generalised periodic eigenvalues and the known ones are shown.
The last Section <ref> treats large time behaviour for solutions to (<ref>) with $c=0$ and periodic $f$; this includes the proof of Theorem <ref>.
§ GENERALISED PRINCIPAL EIGENVALUES AND THEIR PROPERTIES
Both road-field models and reaction-diffusion equations in periodic media have been treated in several papers.
In this section, we introduce some useful objects and recall their properties.
All along this section we will make repeated use of the operators $\mathcal{L}$, $\mathcal{R}$ and $B$, that were defined in (<ref>).
§.§ Eigenvalues in periodic media
Since $\mathcal{L}$ has periodic terms, it is natural to look for eigenfunctions that have the same property. However, to begin the discussion on the periodic eigenvalue for the operator $\mathcal{L}$ in $\R^2$, we consider its counterpart in $\R$.
We look for the unique number $\lambda_p(-\mathcal{L}, \R)\in\R$ such that there exists a function $\psi\in W_{loc}^{2, 3}(\R)$ solution to the problem
\begin{equation}\label{ch1sys:L_R_p}
\left\{
\begin{array}{ll}
d\psi''+ c\,\psi' +f_v(x, 0)\psi + \lambda \psi = 0, & x\in\R, \\
\psi> 0, & x\in\R, \\
|| \psi ||_{\infty}=1, \\
\psi \ \text{is periodic in $x$ of period $\ell$.}
\end{array}
\right.
\end{equation}
In (<ref>), the operator $\mathcal{L}$ has been replaced by its one-dimensional counterpart, the Laplacian being replaced by a second derivative.
Notice that the existence and uniqueness of the solution to (<ref>), which we call $(\lambda_p(-\mathcal{L}, \R), \psi_p)$, is guaranteed by Krein-Rutman theory.
Since the operator $\mathcal{L}$ has no dependence on the $y$ variable,
we have to introduce a fictitious periodicity in order to be able to use Krein-Rutman theory. Thus, fix $\ell'>0$ and consider the problem in $\R^2$ of finding the value $\lambda_p(-\mathcal{L}, \R^2)\in\R$ such that there exists a solution $\psi\in W_{loc}^{2, 3}(\R^2)$ to the system
\begin{equation}\label{ch1sys:L_R2_p}
\left\{
\begin{array}{ll}
\mathcal{L}(\psi) + \lambda \psi = 0, & (x,y)\in\R^2, \\
\psi> 0, & (x,y)\in\R^2, \\
|| \psi ||_{\infty}=1, \\
\psi \ \text{is periodic in $x$ and $y$ of periods $\ell$ and $\ell'$}.
\end{array}
\right.
\end{equation}
Again, we can use the Krein-Rutman Theorem to see that there exists a unique pair $(\lambda_p(-\mathcal{L}, \R^2), \psi_{\ell'})$ solving (<ref>). Now, with a slight abuse of notation, we consider the function $\psi_p(x,y)$ as the extension to $\R^2$ of the solution $\psi_p$ to (<ref>). We observe that the pair $(\lambda_p(-\mathcal{L}, \R), \psi_p)$ gives a solution to (<ref>). Hence, by the uniqueness of the positive eigenfunction, we get
\begin{equation}\label{ch1eq:-5}
\lambda_p(-\mathcal{L}, \R^2)=\lambda_p(-\mathcal{L}, \R) \quad \text{and} \quad \psi_p\equiv \psi_{\ell'}.
\end{equation}
This also implies that neither $\lambda_p(-\mathcal{L}, \R^2)$ nor $\psi_{\ell'}$ depends on the parameter $\ell'$ that was artificially introduced. From now on, we will use only $\psi_p$.
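Indeed, the verification is a one-line computation (our own), using that the one-dimensional operator is the restriction of $\mathcal{L}$ to $y$-independent functions:

```latex
% Since \psi_p = \psi_p(x) does not depend on y, it is trivially \ell'-periodic
% in y for any \ell' > 0, and
\begin{equation*}
\mathcal{L}(\psi_p)(x,y) = d\,\psi_p''(x) + c\,\psi_p'(x) + f_v(x,0)\,\psi_p(x)
 = -\lambda_p(-\mathcal{L}, \R)\,\psi_p(x),
\end{equation*}
% so the pair (\lambda_p(-\mathcal{L}, \R), \psi_p) solves the two-dimensional
% periodic eigenvalue problem, and uniqueness gives the claimed identities.
```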
The properties of the eigenvalue $\lambda_p(-\mathcal{L}, \R^2)$ were also studied in [22], where it is called $\lambda'$ and defined as
\begin{equation}\label{ch1def:lambdap_dim2}
\begin{split}
\lambda_p(-\mathcal{L}, \R^2)= &\inf \{ \lambda \in \R \ :\ \exists \varphi\in \mathcal{C}^2(\R^2)\cap L^{\infty}(\R^2), \ \varphi>0, \\ &\hspace{8em} \varphi \ \text{periodic in $x$ and $y$}, \ \mathcal{L}(\varphi)+\lambda \varphi \geq 0 \}.
\end{split}
\end{equation}
In particular, in Proposition 2.3 of [22] it is stated that the value found with (<ref>) coincides with the one defined in (<ref>).
§.§ Generalised principal eigenvalues for the system with and without the road and some properties
In this section, we are going to treat eigenvalues that are well defined also for non periodic reaction functions.
The generalised eigenvalue $\lambda_1( \Omega)$ for the system (<ref>), that we defined in (<ref>), was first introduced in [16]. Together with this, the authors also proved the interesting property that $\lambda_1( \Omega)$ coincides with the limit of principal
eigenvalues of the same system restricted to a sequence of invading domains.
They use the half ball domains defined as follows for $R>0$:
\begin{equation}\label{ch11722}
\Omega_R:=B_R\cap \Omega \quad \text{and} \quad I_R:=(-R, R).
\end{equation}
Then we have the following characterisation for $\lambda_1( \Omega)$:
For $R>0$,
there is a unique
$\lambda_1( \Omega_R) \in \R$
and a unique (up to multiplication by a positive scalar) positive
$(u_R, v_R) \in W^{2,3}(I_R) \times W^{2,3} (\Omega_R)$ that satisfy the eigenvalue problem
\begin{equation}\label{ch1sys:halfball}
\left\{
\begin{array}{ll}
\mathcal{R}(\phi, \psi) +\lambda\phi = 0, & x\in I_R, \\
\mathcal{L}(\psi) + \lambda \psi = 0, &(x,y)\in \Omega_R, \\
B(\phi, \psi)= 0, & x\in I_R, \\
\psi =0, & (x,y)\in (\partial\Omega_R ) \setminus (I_R\times \{0\}) \\
\phi(R)=\phi(-R)=0. &
\end{array}
\right.
\end{equation}
Moreover,
\begin{equation*}
\lambda_1( \Omega_R) \underset{R\to +\infty}{\searrow} \lambda_1( \Omega).
\end{equation*}
We also consider the principal eigenvalue on the truncated domains for the linear operator $\mathcal{L}(\psi)$.
To do that, for any $R>0$ we call $B_R^P$ the ball of centre $P=(x_P,y_P)$ and radius $R$. We define $\lambda_1(-\mathcal{L}, B_R^P)$ as the unique real number such that
the problem
\begin{equation}\label{ch1sys:L_BR}
\left\{
\begin{array}{ll}
\mathcal{L}(\psi_R) + \lambda_1(-\mathcal{L}, B_R^P) \psi_R = 0, & (x,y)\in B_R^P, \\
%\partial_y \psi_R= 0, & x\in I_R, \\
\psi_R=0, & (x,y)\in \partial B_R^P, \\ %\setminus (I_R\times \{0\})
\psi_R >0, & (x,y)\in B_R^P
\end{array}
\right.
\end{equation}
admits a solution $\psi_R\in W^{2,3}(B_R^P)$.
The existence and uniqueness of such a quantity and of its eigenfunction are well-known results derived via Krein-Rutman theory.
We also notice that, calling $B_R$ the ball with radius $R$ and center $O=(0,0)$, the pair $(\lambda_1(-\mathcal{L}, B_R), \psi_R)$ is also a solution to the problem
\begin{equation}\label{ch1sys:L_OmegaR}
\left\{
\begin{array}{ll}
\mathcal{L}(\psi) + \lambda \psi = 0, & (x,y)\in \Omega_R, \\
\partial_y \psi(x,0)= 0, & x\in I_R, \\
\psi=0, & (x,y)\in (\partial \Omega_R)\setminus (I_R\times \{0\}), \\
\psi >0, & (x,y)\in \Omega_R.
\end{array}
\right.
\end{equation}
The proof of this fact is very simple:
if $(\lambda, \psi)$ is the unique solution to (<ref>), then extending $\psi$ by symmetry to $B_R$ we get a solution to (<ref>); by the uniqueness of the solution to (<ref>), we get $\lambda=\lambda_1(-\mathcal{L}, B_R)$.
Similarly to what happens with $\lambda_1( \Omega_R)$, thanks to the fact that $\lambda_1(-\mathcal{L}, B_R)$ solves (<ref>), the sequence $\lambda_1(-\mathcal{L}, B_R)$ converges to the value $\lambda_1(-\mathcal{L}, \Omega)$ defined in (<ref>). This was precisely stated in [17] as:
We have that
\begin{equation}
\lambda_1(-\mathcal{L}, B_R) \underset{R\to +\infty}{\searrow} \lambda_1(-\mathcal{L}, \Omega).
\end{equation}
Another notion of generalised eigenvalue analysed in [22] is the quantity
\begin{equation}\label{ch1lambda:L_R2}
\begin{split}
\lambda_1(-\mathcal{L}, \R^2)=\sup \{ \lambda \in \R \ : \ \exists \psi \geq 0, \psi \not\equiv 0 \ \text{such that} \
\mathcal{L}(\psi) + \lambda \psi \leq 0 \ \text{a.e. on} \ \R^2 \}
\end{split}
\end{equation}
for test functions $\psi \in W_{loc}^{2,3}(\R^2)$. As stated in Proposition 2.2 of [22], we have
\begin{equation*}
\lambda_1(-\mathcal{L}, B_R) \underset{R\to +\infty}{\searrow} \lambda_1(-\mathcal{L}, \R^2).
\end{equation*}
By that and (<ref>), we have
\begin{equation*}
\lambda_1(-\mathcal{L}, \R^2)=\lambda_1(-\mathcal{L}, \Omega).
\end{equation*}
With this notation, we can report the following statements, derived from Theorem 1.7 in [22], for the case of a periodic reaction function:
Suppose $f$ satisfies (<ref>).
The following holds:
* It holds that $\lambda_p(-\mathcal{L}, \R^2)\leq \lambda_1(-\mathcal{L}, \Omega)$.
* If $\mathcal{L}$ is self-adjoint (i.e., if $c=0$), then $\lambda_p(-\mathcal{L}, \R^2)=\lambda_1(-\mathcal{L}, \Omega)$.
At last, we recall the following result on the signs of the eigenvalues for the systems with and without the road:
It holds that
\begin{equation*}
\lambda_1(-\mathcal{L}, \Omega) \geq 0 \quad \Rightarrow \quad \lambda_1( \Omega) \geq 0.
\end{equation*}
This is the result that, in combination with Theorem <ref>, gives Corollary <ref>.
§.§ The periodic generalised principal eigenvalue for the road-field system
We introduce here two new eigenvalues that will be useful in the following proofs. They are somehow of mixed type, in the sense that
they are periodic in $x$ but not in $y$; this derives from the fact that the domains in which they are defined are periodic in the variable $x$ and truncated in the variable $y$. Here, we require $f$ to be periodic as in hypothesis (<ref>).
Given $r>0$, let $(\lambda_p(-\mathcal{L}, \R\times(-r, r)), \psi_{r})$ be the unique pair solving the eigenvalue problem
\begin{equation}\label{ch1sys:bary2}
\left\{
\begin{array}{ll}
\mathcal{L}(\psi_{r}) + \lambda \psi_{r} = 0, \qquad(x,y)\in \R \times (-r, r), \\
\psi_{r} (x, \pm r)=0, \qquad x\in \R, \\
||\psi_{r}||_{\infty}=1, \ \psi_{r} \ \text{is periodic in} \ x.
\end{array}
\right.
\end{equation}
The existence and uniqueness of the solution to (<ref>) derives once again from Krein-Rutman theory.
We point out that $\lambda_p(-\mathcal{L}, \R\times(-r,r))$ is decreasing in $r$ by inclusion of domains. So,
there exists a well-defined value, which with a slight abuse of notation we call $\lambda_p (-\mathcal{L}, \Omega)$, such that
\begin{equation}\label{ch1eq:-2}
\lambda_p(-\mathcal{L}, \R\times(-r,r)) \underset{r\to+\infty}{\searrow} \lambda_p (-\mathcal{L}, \Omega).
\end{equation}
Given $r>0$, there exists a unique value $\lambda_p( \R\times(0, r))\in\R$ such that the problem
\begin{equation}\label{ch1sys:r}
\left\{
\begin{array}{ll}
\mathcal{R}(\phi, \psi) +\lambda\phi = 0, \qquad x\in \R, \\
\mathcal{L}(\psi) + \lambda \psi = 0, \qquad(x,y)\in \R \times (0, r), \\
B(\phi, \psi)= 0, \qquad x\in \R, \\
\psi (\cdot, r)=0, \\
\phi \ \text{and} \ \psi \ \text{are periodic in} \ x,
\end{array}
\right.
\end{equation}
has a solution.
The existence can be proved by adapting to periodic functions the proof of the existence of $\lambda_1( \Omega_R)$ found in the Appendix of [16].
Moreover, we define
\begin{equation*}
\begin{split}
\lambda_p ( \Omega)= \sup \{ \lambda \in \R \ : \ \exists (\phi,\psi)\geq (0,0), \ (\phi,\psi)\not\equiv(0,0), \ \text{periodic in} \ x, \ \text{such that} \\
\mathcal{R}(\phi, \psi) +\lambda \phi \leq 0,
\mathcal{L}(\psi) + \lambda \psi \leq 0, \ \text{and} \ B(\phi, \psi)\leq 0 \}
\end{split}
\end{equation*}
with test functions $(\phi,\psi) \in W_{loc}^{2,3}(\R)\times W_{loc}^{2,3}(\overline{\Omega})$.
Then, we have:
Suppose $f$ satisfies (<ref>).
We have that
\begin{equation}\label{ch1eq:-3}
\lambda_p( \R\times(0,r)) \underset{r\to+\infty}{\searrow} \lambda_p ( \Omega).
\end{equation}
Moreover, there exists a couple $(u_p, v_p)\in W_{loc}^{2,3}(\R)\times W_{loc}^{2,3}(\overline{\Omega})$ of positive functions, periodic in $x$, satisfying
\begin{equation}\label{ch1sys:upvp}
\left\{
\begin{array}{ll}
\mathcal{R}(u_p, v_p)+ \lambda_p ( \Omega)u_p=0, & x\in\R, \\
\mathcal{L}v_p+ \lambda_p ( \Omega) v_p=0, & (x,y)\in\Omega, \\
B(u_p, v_p)=0, & x\in\R.
\end{array}
\right.
\end{equation}
By inclusion of domains, one has that $\lambda_p( \R\times(0, r))$ is decreasing in $r$.
Let us call
\begin{equation*}
\bar{\lambda}:=\underset{r\to \infty}{\lim} \lambda_p( \R\times(0, r)).
\end{equation*}
Step 1.
We now want to show that there exists a couple $(\bar{\phi}, \bar{\psi})>(0,0)$, with $\bar{\phi}\in W_{loc}^{2,3}(\R)$ and $\bar{\psi}\in W_{loc}^{2,3}(\overline{\Omega})$, periodic in $x$, satisfying
\begin{equation}\label{ch11957}
\left\{
\begin{array}{ll}
\mathcal{R}(\bar{\phi}, \bar{\psi})+ \bar{\lambda} \bar{\phi}=0, & x\in\R, \\
\mathcal{L}( \bar{\psi})+ \bar{\lambda} \bar{\psi}=0, & (x,y)\in\Omega, \\
B(\bar{\phi}, \bar{\psi})=0, & x\in\R.
\end{array}
\right.
\end{equation}
Fix $M>0$.
First, for all $r>M+2$ consider the periodic eigenfunctions $(\phi_r, \psi_r)$ related to $\lambda_p( \R\times(0,r))$.
We normalize $(\phi_r, \psi_r)$ so that
\begin{equation*}
\phi_r(0)+ \psi_r(0,0)=1.
\end{equation*}
Then, from the Harnack estimate in Theorem 2.3 of [16], there exists $C>0$ such that
\begin{equation}\label{ch11809}
\max \{ \underset{I_{M+1}}{\sup} \phi_r, \ \underset{\Omega_{M+1}}{\sup} \psi_r \} \leq C \min \{ \underset{I_{M+1}}{\inf} \phi_r, \ \underset{\Omega_{M+1}}{\inf} \psi_r \} \leq C,
\end{equation}
where the last inequality comes from the normalization.
We can use the interior estimate for $\phi_r$ and get
\begin{equation*}
|| \phi_r ||_{W^{2,3}(I_M)} \leq C' ( || \phi_r ||_{L^{3}(I_{M+1})}+ || \psi_r ||_{L^{3}(\Omega_{M+1})} )
\end{equation*}
for some $C'$ depending on $M$, $\mu$, $\nu$, and $D$.
By that and (<ref>), we get
\begin{equation}\label{ch11810}
|| \phi_r ||_{W^{2,3}(I_M)} \leq C
\end{equation}
for a possibly different $C$.
For $\psi_r$, in order to have estimates up to the border $y=0$ of $\Omega_M$, we need to make a construction. Recall that, calling $L:= \mathcal{L}+ \lambda_p( \R\times(0,r))$, $\psi_r$ solves
\begin{equation*}
\left\{
\begin{array}{ll}
L \psi_r =0, & (x,y) \in \Omega_{M+1}, \\
-d \partial_y \psi_r |_{y=0} + \nu \psi_r|_{y=0}= \mu \phi_r, & x\in I_{M+1}.
\end{array}
\right.
\end{equation*}
We call
\begin{equation*}
\tilde{\psi}_r:= \psi_r e^{-\frac{\nu}{d}y}
\end{equation*}
and the conjugate operator
\begin{equation*}
\tilde{L}(w):= e^{-\frac{\nu}{d}y} L\left( e^{\frac{\nu}{d}y} w \right).
\end{equation*}
Now, we have
\begin{equation*}
\left\{
\begin{array}{ll}
\tilde{L}\tilde{\psi}_r =0, & (x,y) \in \Omega_{M+1}, \\
-d \partial_y \tilde{\psi}_r |_{y=0} = \mu \phi_r, & x\in I_{M+1}.
\end{array}
\right.
\end{equation*}
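For clarity, we record the elementary computation behind this conjugation (our own expansion); it also explains the extra zeroth-order term $\frac{\nu^2}{d}$ appearing in the operator introduced below. Writing $L = d\Delta + c\,\partial_x + \big(f_v(x,0)+\lambda_p(\R\times(0,r))\big)$, one has:

```latex
% Since \Delta(e^{\frac{\nu}{d}y} w) = e^{\frac{\nu}{d}y}\big(\Delta w
%   + \frac{2\nu}{d}\,\partial_y w + \frac{\nu^2}{d^2}\, w\big), we get
\begin{equation*}
\tilde{L}w = e^{-\frac{\nu}{d}y} L\big(e^{\frac{\nu}{d}y} w\big)
 = d\Delta w + c\,\partial_x w + 2\nu\,\partial_y w
 + \Big(f_v(x,0)+\lambda_p(\R\times(0,r))+\frac{\nu^2}{d}\Big)w.
\end{equation*}
% For the boundary condition, \partial_y \tilde{\psi}_r
%   = e^{-\frac{\nu}{d}y}\big(\partial_y \psi_r - \frac{\nu}{d}\,\psi_r\big),
% hence at y=0:
%   -d\,\partial_y \tilde{\psi}_r = -d\,\partial_y \psi_r + \nu\,\psi_r = \mu\,\phi_r.
```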
Next, calling
\begin{equation}
w_r(x,y)=\tilde{\psi}_r(x,y)+ \frac{\mu}{d} \phi_r(x) y,
\end{equation}
so that the boundary condition $-d \partial_y \tilde{\psi}_r |_{y=0} = \mu \phi_r$ yields $\partial_y {w_r}|_{y=0} = 0$, we have that
\begin{equation}\label{ch11919}
\left\{
\begin{array}{ll}
\tilde{L}w_r = \dfrac{\mu}{d} \tilde{L}( \phi_r(x) y), & (x,y) \in \Omega_{M+1}, \\
\partial_y {w_r}|_{y=0} = 0, & x\in I_{M+1}.
\end{array}
\right.
\end{equation}
Now we define in the open ball $B_{M+1}$ the function
\begin{equation}\label{ch11911}
\bar{w}_r(x,y):=w_r(x, |y|),
\end{equation}
that is the extension of $w_r$ by reflection; thanks to the Neumann condition in (<ref>) and the fact that ${w}_r \in W^{2,3}(\Omega_{M+1})$, we get that $\bar{w}_r \in W^{2,3}(B_{M+1})$. Also, we define the function
\begin{equation}\label{ch11912}
g(x,y)= -\frac{\mu}{d} \tilde{L}( \phi_r(x) |y|).
\end{equation}
We also take the operator
\begin{equation}\label{ch11913}
\bar{L}w := d \Delta w+ c \partial_x w + 2{\nu} \sigma(y)\partial_y w + \left( f_v(x,0)+ \lambda_p( \R\times(0,r)) + \frac{\nu^2}{d} \right) w
\end{equation}
where $\sigma(y)$ is the sign function given by
\begin{equation*}
\sigma(y) := \left\{
\begin{array}{ll}
1 & \text{if} \ y\geq 0, \\
-1 & \text{if} \ y<0.
\end{array}
\right.
\end{equation*}
Thanks to the definition (<ref>), (<ref>) and (<ref>), we get that $\bar{w}_r $ is a weak solution to the equation
\begin{equation}\label{ch11926}
- \bar{L} \bar{w}_r = g \quad \text{for} \ (x,y)\in B_{M+1}.
\end{equation}
Finally, we can apply the interior estimates and get
\begin{equation*}
|| \bar{w}_r ||_{W^{2,3}(B_M)} \leq C' ( || \bar{w}_r ||_{L^{\infty}(B_{M+1})}+ || g ||_{L^{3}(B_{M+1})})
\end{equation*}
for some $C'$ depending on $M$ and the coefficients of the equation (<ref>). But using the definition of $\bar{w}_r$ and the fact that $g$ is controlled by the norm of $\phi_r$, we get, for a possibly different $C'$,
\begin{equation*}
|| \bar{w}_r ||_{W^{2,3}(B_M)} \leq C' ( || \psi_r ||_{L^{\infty}(\Omega_{M+1})}+|| \phi_r ||_{L^{\infty}(I_{M+1})}+ || \phi_r ||_{W^{2,3}(I_{M+1})}).
\end{equation*}
Using (<ref>) and (<ref>),
we finally have
\begin{equation*}
|| \psi_r ||_{W^{2,3}(\Omega_M)} \leq C.
\end{equation*}
Thanks to that and (<ref>), we have that $(\phi_r, \psi_r)$ is uniformly bounded in $W^{2,3}(I_M)\times W^{2,3}(\Omega_M)$ for all $M>0$.
Hence, up to a diagonal extraction, $(\phi_r, \psi_r)$ converges weakly in $W_{loc}^{2,3}(\R)\times W_{loc}^{2,3}(\overline{\Omega})$ to some $(\bar{\phi}, \bar{\psi}) \in W_{loc}^{2,3}(\R)\times W_{loc}^{2,3}(\overline{\Omega})$. By the Morrey inequality, the convergence is strong in $\mathcal{C}_{loc}^{1, \alpha}(\R)\times \mathcal{C}_{loc}^{1, \alpha}(\overline{\Omega})$ for $\alpha<1/6$.
Moreover, $(\bar{\phi}, \bar{\psi})$ is periodic in $x$ since all the $(\phi_r, \psi_r)$ are.
Then, taking the limit of the equations in (<ref>), we obtain that $(\bar{\phi}, \bar{\psi})$ satisfy (<ref>), as wished.
Step 2. We now prove that
\begin{equation}\label{ch11402}
\bar{\lambda} \leq \lambda_p( \Omega).
\end{equation}
Take $\bar{\lambda}$ and
its associated periodic eigenfunctions couple $(\bar{\phi}, \bar{\psi})$ obtained in Step 1.
By definition, $\lambda_p( \Omega)$ is the supremum of the set
\begin{equation}\label{ch11747}
\begin{split}
\mathcal{A}:= \{ \lambda \in \R \ : \ \exists (\phi,\psi)\geq (0,0), \ (\phi,\psi)\not\equiv(0,0), \ \text{periodic in} \ x, \ \text{such that} \
\mathcal{R}(\phi, \psi) +\lambda \phi \leq 0, \\
\mathcal{L}(\psi) + \lambda \psi \leq 0, \ \text{and} \ B(\phi, \psi)\leq 0 \}.
\end{split}
\end{equation}
Then, using $(\bar{\phi}, \bar{\psi})$ as test functions, we obtain that $\bar{\lambda}$ is in the set $\mathcal{A}$ given in (<ref>). By the fact that $\lambda_p( \Omega)$ is the supremum of $\mathcal{A}$, we get (<ref>), as wished.
Step 3. We show
\begin{equation}\label{ch12006}
\lambda_p(\Omega) \leq \bar{\lambda}.
\end{equation}
Now, take any $\lambda\in\mathcal{A}$ together with an associated couple $(\phi, \psi)$. Then, by inclusion of domains, one gets that for all $r>0$ it holds
\begin{equation*}
\lambda \leq \lambda_p( \R\times (0,r)).
\end{equation*}
Hence, taking the supremum over $\lambda\in\mathcal{A}$ on the left-hand side and the infimum over $r$ on the right-hand side, we get (<ref>). By this and (<ref>), the equality is proven. Moreover, defining $(u_p, v_p)\equiv(\bar{\phi}, \bar{\psi})$, by (<ref>) we obtain the second statement of the proposition.
§ ORDERING OF THE EIGENVALUES
This section is dedicated to show some inequalities and relations between the aforementioned eigenvalues.
§.§ Proof of Theorem <ref>
We start by proving Theorem <ref>.
We stress that this is done in the general setting of a possibly nonzero $c$ and of an $f(x,v)$ which may not be periodic.
Let us start by proving the first part of the theorem.
For all $R>0$, there exist $R'>0$ and a point $C\in\R^2$ such that $B_R(C) \subset \Omega_{R'}$: it is sufficient to take $R'=3R$ and $C=(0, \frac{3}{2}R)$. We want to prove that
\begin{equation}\label{ch11533}
\lambda_1(-\mathcal{L}, B_R) \geq \lambda_1( \Omega_{R'}).
\end{equation}
Suppose by contradiction that (<ref>) does not hold.
Consider the eigenfunction $\psi_R$ related to $\lambda_1(-\mathcal{L}, B_R)$ and the eigenfunction $v_{R'}$ in the couple $(u_{R'},v_{R'})$ related to $\lambda_1( \Omega_{R'})$.
Since $\inf_{B_{R}(C)} v_{R'} >0$, and both eigenfunctions are bounded, there exists
\begin{equation*}
\theta^* := \sup \{ \theta\geq 0 \ : \ v_{R'}>\theta \psi_R \ \text{in} \ B_R(C) \} >0.
\end{equation*}
Since $\theta^*$ is a supremum, there exists $(x^*,y^*)\in \overline{B_R(C)}$ such that $v_{R'}(x^*, y^*)= \theta^* \psi_R (x^*, y^*)$.
Then, $(x^*,y^*)\in {B_R(C)}$ because $v_{R'}>0$ and $\psi_R=0$ in $\partial B_R(C)$.
Calling $\rho=v_{R'}-\theta^* \psi_R$, in a neighbourhood of $(x^*,y^*)$ we have that
\begin{equation}\label{ch11602}
-d \Delta\rho- c \cdot \nabla \rho - f_v(x,0)\rho=\lambda_1(-\mathcal{L}, B_R)\rho + (\lambda_1( \Omega_{R'}) - \lambda_1(-\mathcal{L}, B_R)) v_{R'}.
\end{equation}
We know that $\rho(x^*,y^*)=0$ and that $\rho \geq 0$ in $B_R(C)$. Then $(x^*,y^*)$ is a minimum point of $\rho$, so $\nabla \rho(x^*,y^*) =0$ and $\Delta \rho (x^*,y^*) \geq 0$.
Thus, the left-hand side of (<ref>) is nonpositive. But by the contradiction hypothesis we have $(\lambda_1( \Omega_{R'}) - \lambda_1(-\mathcal{L}, B_R)) v_{R'}>0$. This gives
\begin{equation*}
0 \geq -d \Delta\rho(x^*,y^*) = (\lambda_1( \Omega_{R'}) - \lambda_1(-\mathcal{L}, B_R)) v_{R'} (x^*,y^*)>0,
\end{equation*}
which is a contradiction. This proves (<ref>).
Notice that $\lambda_1(-\mathcal{L}, B_R(C))=\lambda_1(-\mathcal{L}, B_R)$, where $B_R$ is the ball centred at $(0,0)$: since $f(x,v)$ does not depend on $y$, system (<ref>) is the same on $B_R(C)$ and on $B_R$. As a consequence, the eigenfunctions also coincide.
Recall that both $\lambda_1(-\mathcal{L}, \R^2)$ and $\lambda_1( \Omega)$ are limits of eigenvalues on bounded domains, by (<ref>) and Proposition <ref>.
Since for all $R>0$ there exists $R'$ such that (<ref>) holds, passing to the limit we find the required inequality.
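Spelled out, with the choice $R'=3R$ made above (a one-line summary of the limit passage; the limits exist by the monotonicity in $R$ recalled in the text):

```latex
% With R' = 3R, the inequality just proved passes to the limit:
\lambda_1(-\mathcal{L}, \R^2)
  = \lim_{R\to\infty} \lambda_1(-\mathcal{L}, B_R)
  \geq \lim_{R\to\infty} \lambda_1(\Omega_{3R})
  = \lambda_1(\Omega).
```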
§.§ Further inequalities between the eigenvalues
In this section, we collect some results on the ordering of periodic and generalised eigenvalues for both system (<ref>) and equation (<ref>).
Here we require $f$ to be periodic as in (<ref>).
This first result is the analogue of Theorem <ref> for the system (<ref>):
Suppose $f$ satisfies hypothesis (<ref>). Then:
* It holds that $\lambda_1( \Omega)\geq \lambda_p( \Omega)$.
* If moreover $c=0$, then we have $\lambda_1( \Omega)= \lambda_p( \Omega)$.
By definition, $\lambda_p( \Omega)$ is the supremum of the set $\mathcal{A}$ given in (<ref>),
while $\lambda_1( \Omega)$ is the supremum of the set
\begin{equation*}
\begin{split}
\{ \lambda \in \R \ : \ \exists (\phi,\psi)\geq (0,0), \
\mathcal{R}(\phi, \psi) +\lambda \phi \leq 0, \\
\mathcal{L}(\psi) + \lambda \psi \leq 0, \ \text{and} \ B(\phi, \psi)\leq 0 \} \supseteq \mathcal{A}.
\end{split}
\end{equation*}
By inclusion of sets, we have the desired inequality.
We call
$$\mathcal{H}_R:= H_0^1(I_R)\times H_0^1(\Omega_R \cup (I_R\times \{0\}) ). $$
For $(u,v)\in \mathcal{H}_R$, we define
\begin{equation*}
Q_R(u,v):= \frac{ \mu \int_{I_R} D |u'|^2 + \nu \int_{\Omega_R} (d|\nabla v|^2-f_v(x,0)v^2) + \int_{I_R} (\mu u- \nu v|_{y=0})^2 }{\mu \int_{I_R}u^2 + \nu \int_{\Omega_R} v^2}.
\end{equation*}
Now we fix $r>0$ and consider $\lambda_p( \R \times (0,r) )$ and its periodic eigenfunctions $(\phi_{r}, \psi_{r})$. We consider $\psi_{r}$ to be extended by $0$ in $\Omega \setminus (\R\times (0,r))$. This way we have $\psi_{r}\in H^1(\Omega_R \cup (I_R\times \{0\}) )$.
Then for all $R>1$ we choose a $\mathcal{C}^2(\overline{\Omega})$ function $Y_R:\overline{\Omega}\to [0,1]$ such that
\begin{align*}
Y_R(x,y)=1 & \qquad \text{if} \ |(x,y)|<R-1; \\
Y_R(x,y)=0 & \qquad \text{if} \ |(x,y)|\geq R; \\
|\nabla Y_R|^2 \leq C; & \hspace{5em}
\end{align*}
where $C$ is a fixed constant independent of $R$. To simplify the notation later, we call $X_R(x):=Y_R(x,y)|_{y=0}$; we also have that $X_R\in\mathcal{C}^2(\R)$ and $|X_R''|\leq C$. We have that
\begin{equation*}
(\phi_{r} X_R, \psi_{r} Y_R) \in \mathcal{H}_R.
\end{equation*}
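For concreteness, one admissible choice of $Y_R$ (an illustrative construction; any $\mathcal{C}^2$ radial cutoff with derivatives bounded uniformly in $R$ works) is the following:

```latex
% Fix once and for all a nonincreasing \chi \in \mathcal{C}^2(\R;[0,1]) with
% \chi(s) = 1 for s \leq 0 and \chi(s) = 0 for s \geq 1, and set
Y_R(x,y) := \chi\big( |(x,y)| - (R-1) \big), \qquad X_R(x) := Y_R(x,0).
% Since \chi is constant near the origin of its argument, Y_R is C^2 for R > 1;
% moreover Y_R = 1 on \{|(x,y)| < R-1\}, Y_R = 0 on \{|(x,y)| \geq R\}, and
% |\nabla Y_R|^2 \leq \sup_{\R} |\chi'|^2 =: C, uniformly in R.
```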
Now we want to show that for a suitable diverging sequence $\{R_n\}_{n\in\N}$ we have
\begin{equation} \label{ch1Claim}
Q_{R_n} (\phi_{r} X_{R_n}, \psi_{r} Y_{R_n}) \overset{n\to \infty}{\longrightarrow} \lambda_p( \R \times (0,r) ).
\end{equation}
First, let us show a few useful rearrangements of the integrals that define $Q_R (\phi_{r} X_R, \psi_{r} Y_R)$. We have that
\begin{align*}
\int_{I_R}|(\phi_{r} X_R)'|^2 &= \int_{I_R} (\phi_{r} X_R)' \, \phi_{r} \, X_R ' + \int_{I_R} (\phi_{r} X_R)' \, \phi_{r} ' \, X_R, \\
& = \int_{I_R} (\phi_{r} X_R)' \, \phi_{r} \, X_R ' + \left[ (\phi_{r} X_R^2) \, \phi_{r} ' \right]_{-R}^R-\int_{I_R} (\phi_{r} X_R) \, \left( \phi_{r} '' \, X_R + \phi_{r} ' \, X_R' \right), \\
&= \int_{I_R} \phi_{r}^2 \, |X_R '|^2 + \left[ (\phi_{r} X_R^2) \, \phi_{r} ' \right]_{-R}^R -\int_{I_R} \phi_{r} '' \, \phi_{r} \, X_R^2 ,
\end{align*}
where we applied integration by parts in the second line and direct computations in the others.
Since $X_R(R)= X_R(-R)=0$ and $X_R '$ is supported only in $I_R \setminus I_{R-1}$, we get
\begin{equation}\label{ch1eq:parte2}
\mu D\int_{I_R}|(\phi_{r} X_R)'|^2 = -\mu D\int_{I_R} \phi_{r} '' \, \phi_{r} \, X_R^2 + \mu D \int_{I_R \setminus I_{R-1}} \phi_{r}^2 \, |X_R '|^2.
\end{equation}
With similar computations we get
\begin{equation}\label{ch1eq:parte1}
\int_{\Omega_R} d|\nabla (\psi_{r} \, Y_R)|^2 = - \int_{\Omega_R} d\Delta \psi_{r} \, \psi_{r} \, Y_R^2 - \int_{I_R} (d\partial_y \psi_{r}) \psi_{r} \, X_R^2 + \int_{\Omega_R \setminus {\Omega_{R-1} }} d|\nabla Y_R|^2 \psi_{r}^2.
\end{equation}
Then, we also have
\begin{equation} \label{ch1eq:parte3}
\int_{I_R} (\mu \phi_{r} \, X_R - \nu \psi_{r} \, X_R)^2 = \int_{I_R} \mu \phi_{r} \, X_R^2 (\mu \phi_{r}- \nu \psi_{r}) - \int_{I_R} \nu \psi_{r} \, X_R^2 (\mu \phi_{r}- \nu \psi_{r}).
\end{equation}
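The last identity is just the pointwise factorisation $(a-b)^2=a(a-b)-b(a-b)$ with $a=\mu\phi_r X_R$ and $b=\nu\psi_r X_R$:

```latex
(\mu \phi_r X_R - \nu \psi_r X_R)^2
  = \mu\phi_r X_R\,(\mu\phi_r X_R - \nu\psi_r X_R)
    - \nu\psi_r X_R\,(\mu\phi_r X_R - \nu\psi_r X_R)
  = \mu \phi_r\, X_R^2\,(\mu\phi_r - \nu\psi_r)
    - \nu \psi_r\, X_R^2\,(\mu\phi_r - \nu\psi_r).
```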
We now recall that $(\phi_{r}, \psi_{r})$ is an eigenfunction for the problem (<ref>).
Thanks to the third equation of (<ref>), the second term in (<ref>) cancels out with the second term in (<ref>). Moreover, we can sum the first term of (<ref>) and the first term of (<ref>) and get
\begin{equation*}
-\int_{I_R} \mu D \phi_{r} '' \, \phi_{r} \, X_R^2 + \int_{I_R} \mu \phi_{r} \, X_R^2 (\mu \phi_{r} - \nu \psi_{r}) = \int_{I_R} \mu \lambda_p( \R\times (0, r) ) \phi_{r}^2 \, X_R^2.
\end{equation*}
Moreover we have that
\begin{equation*}
- \int_{\Omega_R} d\Delta \psi_{r} \, \psi_{r} \, Y_R^2 - \int_{\Omega_R} f_v(x,0) \psi_{r}^2 \, Y_R^2= \int_{\Omega_R} \lambda_p( \R\times (0, r) ) \psi_{r}^2 \, Y_R^2 .
\end{equation*}
So, if we call
\begin{equation*}
P_R := \frac{ \mu \int_{I_R \setminus I_{R-1}} D\phi_{r}^2 \, |X_R '|^2 + \nu \int_{\Omega_R \setminus {\Omega_{R-1} }} d|\nabla Y_R|^2 \psi_{r}^2}{\mu \int_{I_R}(\phi_{r} X_R)^2 + \nu \int_{\Omega_R} (\psi_{r} Y_R)^2},
\end{equation*}
we have that
\begin{equation*}\label{ch10014}
Q_R (\phi_{r} X_R, \psi_{r} Y_R) = \lambda_p( \R\times (0, r) ) + P_R.
\end{equation*}
Proving (<ref>) is equivalent to showing that
\begin{equation} \label{ch11604}
P_{R_n} \overset{n\to \infty}{\longrightarrow} 0
\end{equation}
for some diverging sequence $\{R_n\}_{n\in \N}$.
Suppose, by contradiction, that (<ref>) does not hold.
First, by the fact that the derivatives of $X_R$ and $Y_R$ are bounded, for some positive constant $C$ we have that
\begin{equation*}
0 \leq P_R \leq C \frac{ \mu \int_{I_R \setminus I_{R-1}} \phi_{r}^2 + \nu \int_{\Omega_R \setminus {\Omega_{R-1} }} \psi_{r}^2}{\mu \int_{I_R}(\phi_{r} X_R)^2 + \nu \int_{\Omega_R} (\psi_{r} Y_R)^2}
\end{equation*}
By the contradiction hypothesis, we have that
\begin{equation} \label{ch11652}
\underset{R\to \infty}{\liminf} \, P_R = \xi >0.
\end{equation}
Now let us define for all $R\in \N$ the quantity
\begin{equation*}
\alpha_R:= \mu \int_{I_R\setminus I_{R-1}}\phi_{r} ^2 + \nu \int_{\Omega_R \setminus \Omega_{R-1}} \psi_{r}^2.
\end{equation*}
Since $\phi_r$ and $\psi_r$ are bounded from above, for some constant $k$ depending on $r$, $\mu$, and $\nu$ we have
\begin{equation}\label{ch1H}
\alpha_R \leq k R.
\end{equation}
For $R\in \N$ one has
\begin{equation*}
\mu \int_{I_R}(\phi_{r} X_R)^2 + \nu \int_{\Omega_R} (\psi_{r} Y_R)^2 = \sum_{n=1}^{R-1} \alpha_n + \mu \int_{I_R \setminus I_{R-1}}(\phi_{r} X_R)^2 + \nu \int_{\Omega_R \setminus \Omega_{R-1}} (\psi_{r} Y_R)^2.
\end{equation*}
By comparison with (<ref>), we have
\begin{equation*}
\underset{R\to \infty}{\liminf} \, \frac{\alpha_R}{\sum_{n=1}^{R-1} \alpha_n} \geq \underset{R\to \infty}{\liminf} \, \frac{ \alpha_R}{\sum_{n=1}^{R-1} \alpha_n + \mu \int_{I_R \setminus I_{R-1}}(\phi_{r} X_R)^2 + \nu \int_{\Omega_R \setminus \Omega_{R-1}} (\psi_{r} Y_R)^2} \geq \frac{\xi}{C},
\end{equation*}
so, for any $0<\varepsilon< \xi /C$ and all $R$ large enough, we have
\begin{equation}\label{ch1G}
\alpha_R > \varepsilon \sum_{n=1}^{R-1} \alpha_n.
\end{equation}
Thanks to (<ref>), we can now perform a chain of inequalities:
\begin{equation*}
\alpha_{R+1} > \varepsilon \sum_{n=1}^{R} \alpha_n = \varepsilon \left( \alpha_R + \sum_{n=1}^{R-1} \alpha_n \right) > \varepsilon(1+\varepsilon)\sum_{n=1}^{R-1} \alpha_n > \dots > (1+\varepsilon)^{R+1} \frac{\varepsilon \alpha_1}{(1+\varepsilon)^3} .
\end{equation*}
From this we derive that $\alpha_{R}$ diverges exponentially, in contradiction with the inequality in (<ref>). Hence (<ref>) holds, so (<ref>) is also valid.
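The chain of inequalities above can be organised through partial sums; this is the same argument, made explicit (with $R_0$ denoting any index from which the inequality holds onwards):

```latex
% Set S_R := \sum_{n=1}^{R-1} \alpha_n. The inequality \alpha_R > \varepsilon S_R yields
S_{R+1} = S_R + \alpha_R > (1+\varepsilon)\, S_R,
% so, iterating from a fixed R_0,
S_R > (1+\varepsilon)^{R-R_0}\, S_{R_0},
\qquad
\alpha_{R+1} > \varepsilon\, S_{R+1} > \varepsilon\,(1+\varepsilon)^{R+1-R_0}\, S_{R_0},
% i.e. \alpha_R grows at least geometrically, which is incompatible with the
% linear bound \alpha_R \leq k R.
```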
By Proposition 4.5 in [16], we have that
\begin{equation}\label{ch11207}
\lambda_1( \Omega_R) = \underset{ \substack{(u,v)\in \mathcal{H}_R, \\ (u,v)\neq (0,0)} }{\min} Q_R(u,v).
\end{equation}
Hence by (<ref>) we have that
\begin{equation*}
\lambda_1( \Omega_R) \leq Q_R (\phi_{r} X_R, \psi_{r} Y_R).
\end{equation*}
Since, for all $r>0$, (<ref>) holds along a diverging sequence of radii, we moreover have that
\begin{equation*}
\lambda_1( \Omega) \leq \lambda_p( \R\times (0, r) ).
\end{equation*}
Then, recalling Proposition <ref>, we get that
\begin{equation*}
\lambda_1( \Omega) \leq \lambda_p( \Omega ).
\end{equation*}
Since the reverse inequality was already established in the first part of the theorem, the proof is complete.
Finally, we prove the following proposition on the bounds for $\lambda_p(-\mathcal{L},\Omega)$.
Suppose $f$ satisfies (<ref>).
We have that
\begin{equation*}
\lambda_p(-\mathcal{L}, \R^2)
\leq \lambda_p(-\mathcal{L}, \Omega) \leq
\lambda_1(-\mathcal{L}, \Omega)
\end{equation*}
and if $c=0$ equality holds throughout.
Consider any $r>0$ and take $\lambda_p(-\mathcal{L}, \R\times(-r,r))$ and its eigenfunction $\psi_r$ solving (<ref>), which is periodic in $x$.
Then take $\lambda_p(-\mathcal{L}, \R^2)$ and its periodic eigenfunction $\psi_p$, which, as we have seen in (<ref>), does not depend on $y$; therefore it is bounded, has positive infimum, and solves (<ref>). Then, $\lambda_p(-\mathcal{L}, \R\times(-r,r))$ and $\lambda_p(-\mathcal{L}, \R^2)$ are eigenvalues for the same equation on two nested domains; hence, one gets that
\begin{equation}\label{ch11432}
\lambda_p(-\mathcal{L}, \R^2)\leq \lambda_p(-\mathcal{L}, \R\times(-r,r)).
\end{equation}
By using (<ref>), from (<ref>) we have
\begin{equation}\label{ch11429}
\lambda_p(-\mathcal{L}, \R^2)\leq \lambda_p(-\mathcal{L}, \Omega).
\end{equation}
Given $R<r$, we can repeat the same argument for $\lambda_1(-\mathcal{L}, B_R)$ and $\lambda_p(-\mathcal{L}, \R\times(-r,r))$ and get
\begin{equation}\label{ch11433}
\lambda_p(-\mathcal{L}, \R\times(-r,r)) \leq \lambda_1(-\mathcal{L}, B_R).
\end{equation}
By (<ref>) and by (<ref>), we get
\begin{equation*}
\lambda_p(-\mathcal{L}, \Omega) \leq \lambda_1(-\mathcal{L}, \Omega).
\end{equation*}
This and (<ref>) give the first statement of the proposition.
If $c=0$, by the second part of Theorem <ref> we get that $\lambda_p(-\mathcal{L}, \R^2) =\lambda_1(-\mathcal{L}, \Omega)$, hence we have
\begin{equation*}
\lambda_p(-\mathcal{L}, \R^2)= \lambda_p(-\mathcal{L}, \Omega) =\lambda_1(-\mathcal{L}, \Omega),
\end{equation*}
as wished.
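Schematically, the comparisons used in the proof are the following (a summary of the same argument):

```latex
% For 0 < R < r, domain inclusion gives
\lambda_p(-\mathcal{L}, \R^2)
  \leq \lambda_p(-\mathcal{L}, \R\times(-r,r))
  \leq \lambda_1(-\mathcal{L}, B_R);
% letting r \to \infty in the first inequality and R \to \infty in the second,
\lambda_p(-\mathcal{L}, \R^2)
  \leq \lambda_p(-\mathcal{L}, \Omega)
  \leq \lambda_1(-\mathcal{L}, \Omega).
```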
§.§ Proof of Theorem <ref>
Owing to Theorems <ref> and <ref>, together with the estimates on the eigenvalues proved in the last subsection, we are ready to prove Theorem <ref>.
By Theorem <ref>, we have that $\lambda_1(-\mathcal{L}, \R^2)=\lambda_p(-\mathcal{L}, \R^2)$. Then, by Corollary <ref>, if $\lambda_1(\Omega)<0$ then
$\lambda_p(-\mathcal{L}, \R^2)<0$, and if $\lambda_1(\Omega)\geq 0$ then
$\lambda_p(-\mathcal{L}, \R^2)\geq 0$.
Observe that, when $c=0$, choosing $N=2$ and $L=(\ell, 0)$, the operator $\mathcal{L'}$ defined in (<ref>) coincides with $\mathcal{L}$.
Then, the statements on the asymptotic behaviour of the solutions of the system with and without the road come from the characterisations in Theorems <ref> and <ref>.
§ LARGE TIME BEHAVIOUR FOR A PERIODIC MEDIUM AND $C=0$
We start by considering the long time behaviour of the solutions. As already stated in Theorem <ref>, the two possibilities for a population evolving through (<ref>) are persistence and extinction. We treat these two cases in separate subsections.
Before starting our analysis, we recall a comparison principle, which first appeared in [20] and is fundamental for treating
system (<ref>). We recall that a generalised subsolution (respectively, supersolution) is the supremum (resp. infimum) of two subsolutions (resp. supersolutions).
Let $(\underline{u}, \underline{v})$ and $(\overline{u}, \overline{v})$ be respectively a generalised subsolution bounded from
above and a generalised supersolution bounded from below of (<ref>) satisfying $\underline{u} \leq \overline{u}$ and $\underline{v} \leq \overline{v}$
at $t = 0$. Then, either $\underline{u} \leq \overline{u}$ and $\underline{v} \leq \overline{v}$ for all $t$, or there exists $T > 0$ such that
$(\underline{u}, \underline{v}) \equiv (\overline{u}, \overline{v})$ for $t\leq T$.
The original proof is given for the case of $f$ homogeneous in space; however, it can be adapted with minor changes, so we do not repeat it here.
Proposition <ref> gives us important information on the behaviour at the microscopic level. In fact, it asserts that if two pairs of population densities are “ordered” at an initial time, then the order is preserved during the evolution according to the equations in (<ref>).
§.§ Persistence
The aim of this section is to prove the second part of Theorem <ref>.
First, we are going to show a Liouville type result, that is Theorem <ref>, and then we will use that to derive the suited convergence.
We start with some technical lemmas.
Let $(u,v)$ be a bounded stationary solution to (<ref>) and let $\{(x_n, y_n) \}_{n\in\N}\subset \Omega$ be a sequence of points such that $\{ x_n\}_{n\in\N}$ modulo $\ell$ tends to some $x'\in[0,\ell]$.
* if $\{ y_n\}_{n\in\N}$ is bounded,
the sequence of functions $\{(u_n, v_n) \}_{n\in\N}$ defined as
\begin{equation}\label{ch11648}
(u_n(x), v_n(x, y))=(u(x+x_n), v(x+x_n, y))
\end{equation}
converges up to a subsequence to $(\tilde{u}, \tilde{v})$ in $\mathcal{C}_{loc}^2(\R\times\Omega)$ and $(\tilde{u}(x-x'), \tilde{v}(x-x',y))$ is a bounded stationary solution to (<ref>).
* if $\{ y_n\}_{n\in\N}$ is unbounded,
the sequence of functions $\{ v_n \}_{n\in\N}$ defined as
\begin{equation}\label{ch11649}
v_n(x, y)= v(x+x_n, y+y_n)
\end{equation}
converges, up to a subsequence, to $\tilde{v}$ in $\mathcal{C}_{loc}^2(\R^2)$, and $\tilde{v}(x-x', y)$ is a bounded stationary solution to the second equation in (<ref>) in $\R^2$.
Let us call $V=\max\{ \sup u, \sup v \}$.
For all $n\in\N$, there exists $x_n'\in[0,\ell)$ such that $x_n-x_n'\in\ell \Z$.
We start with the case of bounded $\{y_n\}_{n\in\N}$.
By the periodicity of $f$,
we have that $(u_n, v_n)$ defined in (<ref>) is a solution to
\begin{equation*}
\left\{
\begin{array}{lr}
-D u '' -c u' - \nu v|_{y=0} + \mu u= 0, & x\in \R, \\
-d \Delta v -c \partial_x v =f(x+ x_n',v), & (x, y)\in \Omega, \\
-d \partial_y{v}|_{y=0} + \nu v|_{y=0} -\mu u=0, & x\in\R,
\end{array}
\right.
\end{equation*}
Fix $p\geq 1$ and three numbers $j>h>k>0$; we use
the notation in (<ref>) for the sets $I_R$ and $\Omega_R$ for $R= k,\ h, \ j$.
By Agmon-Douglis-Nirenberg estimates (see for example Theorem 9.11 in [57]), we have
\begin{equation*}
\norm{u_n}_{W^{2,p}( I_h)} \leq C \left( \norm{u_n}_{L^p( I_j)} +\norm{v_n(x,0)}_{L^p( I_j)} \right).
\end{equation*}
To find the same estimate for the norm of $v_n$, we have to make the same construction used in the proof of Proposition <ref> to find the bound for $\psi_r$. In the same way, we get
\begin{equation*}
\begin{split}
\norm{v_n}_{W^{2,p}( \Omega_h)} \leq C \Big( \norm{u_n}_{L^p( I_j)} +\norm{v_n}_{L^p( \Omega_j)} + \norm{f}_{L^p( I_j \times (0, V) )} \Big) ,
\end{split}
\end{equation*}
where the constant $C$, possibly varying in each inequality, depends on $\nu$, $\mu$, $d$, $D$, $h$ and $j$.
Using the boundedness of $u$ and $v$, for a possibly different $C$ depending also on $f$ we get
\begin{align*}
\norm{u_n}_{W^{2,p}( I_h)} &\leq C V, \\
\norm{v_n}_{W^{2,p}( \Omega_h)} &\leq C V.
\end{align*}
Then, we apply the general Sobolev inequalities (see [45], Theorem 6 in 5.6) and get, for some $\alpha$ depending on $p$, that
\begin{align*}
\norm{u_n}_{\mathcal{C}^{\alpha}( I_h)} & \leq C \norm{u_n}_{W^{2,p}( I_h)} \leq CV, \\
\norm{v_n}_{\mathcal{C}^{\alpha}( \Omega_h)} &\leq C \norm{v_n}_{W^{2,p}( \Omega_h)} \leq CV.
\end{align*}
Now we can apply Schauder interior estimates for the oblique boundary condition (see for example Theorem 6.30 in [57]) and find that
\begin{align*}
\norm{u_n}_{\mathcal{C}^{2,\alpha}(I_k)} &\leq C \left( \norm{u_n}_{\mathcal{C}^{\alpha}(I_h)} +\norm{v_n(x,0)}_{\mathcal{C}^{\alpha}(I_h)} \right) \leq CV, \\
\norm{v_n}_{\mathcal{C}^{2,\alpha}(\Omega_k)} &\leq C \Big( \norm{u_n}_{\mathcal{C}^{\alpha}(I_h)}
+\norm{v_n}_{\mathcal{C}^{\alpha}(\Omega_h)} + \norm{f}_{\mathcal{C}^{\alpha}(I_h \times[0,V])} \Big) \leq CV.
\end{align*}
So the sequences $\{u_n\}_{n\in\N}$ and $\{v_n\}_{n\in\N}$ are bounded locally in space in $C^{2,\alpha}$. By compactness we can extract converging subsequences with limits $\tilde{u}(x)$ and $\tilde{v}(x,y)$. Moreover, since by hypothesis $x_n'\to x'$ as $n\to+\infty$, we have that $(\tilde{u}, \tilde{v})$ is a solution to
\begin{equation*}
\left\{
\begin{array}{lr}
-D u '' -c u' - \nu v|_{y=0} + \mu u= 0, & x\in \R, \\
-d \Delta v -c \partial_x v =f(x+ x',v), & (x, y)\in \Omega, \\
-d \partial_y{v}|_{y=0} + \nu v|_{y=0} -\mu u=0, & x\in\R,
\end{array}
\right.
\end{equation*}
This concludes the proof of the first statement.
Now suppose that $\{ y_n \}_{n\in\N}$ is unbounded; up to a subsequence, we can assume that
\begin{equation}\label{ch11827}
y_n \overset{n\to\infty}{\longrightarrow} +\infty.
\end{equation}
Then the function defined in (<ref>) solves the equation
\begin{equation*}
-d\Delta v_n -c \partial_x v_n = f(x+x_n', v_n) \quad \text{for} \ (x,y)\in\R\times(-y_n,+\infty)
\end{equation*}
with the boundary condition $-d\partial_y v_n(x, -y_n) + \nu v_n(x, -y_n)- \mu u(x+x_n)=0$.
Fix $p\geq 1$ and three numbers $j>h>k>0$; we denote by $B_R$ the open ball of $\R^2$ centred at $(0,0)$ with radius $R$, and we will consider $R=j, \ h, \ k$. Notice that by (<ref>) there exists $N\in\N$ such that $y_n>j$ for all $n\geq N$.
Hence, applying the previous estimates to $v_n$ for all $n\geq N$, we find that
\begin{equation*}
\begin{split}
\norm{v_n}_{W^{2,p}( B_h)} \leq C \Big( \norm{v_n}_{L^p( B_j)} + \norm{f}_{L^p( I_j \times (0, V) )} \Big) \leq CV
\end{split}
\end{equation*}
and then that
\begin{equation*}
\norm{v_n}_{\mathcal{C}^{2,\alpha}(B_k)} \leq C \Big( \norm{v_n}_{\mathcal{C}^{\alpha}(B_h)} + \norm{f}_{\mathcal{C}^{\alpha}(I_h \times[0,V])} \Big) \leq CV.
\end{equation*}
So the sequence $\{v_n\}_{n\in\N}$ is bounded locally in space in $C^{2,\alpha}(\R^2)$, and by compactness we can extract a converging subsequence with limit $\tilde{v}(x,y)$, which satisfies
\begin{equation*}
-d\Delta \tilde{v} -c \partial_x \tilde{v} = f(x+x', \tilde{v}) \quad \text{for} \ (x,y)\in\R^2,
\end{equation*}
which gives the claim.
The second lemma is similar to the first one, but treats shifts in time.
Let $(u,v)$ be a bounded solution to (<ref>) which is monotone in time and let $\{ t_n\}_{n\in\N}\subset \R_{\geq 0}$ be a diverging sequence. Then, the sequence $\{(u_n, v_n)\}_{n\in \N}$ defined by
\begin{equation}\label{ch11840}
(u_n(t,x), v_n(t,x, y))=(u(t+t_n,x), v(t+t_n,x, y))
\end{equation}
converges in $C_{loc}^{1,2,\alpha}$ to a couple of functions $(\tilde{u}, \tilde{v})$ which is a stationary solution to (<ref>).
We call $V=\max \{ \sup u, \sup v \}$.
For every fixed $x\in \R$, the sequence $u_n(t,x)$ is monotone and bounded; moreover, since $t_n\to+\infty$, its limit does not depend on $t$. Then, we can define a function $\tilde{u}(x)$ as
\begin{equation}\label{ch11720}
\tilde{u}(x) = \underset{n\to\infty}{\lim} {u_n(t,x)}
\end{equation}
and $0\leq \tilde{u}(x)\leq V$.
Analogously, for all $(x,y)\in \Omega$ we can define
\begin{equation}\label{ch11721}
\tilde{v}(x,y) = \underset{n\to\infty}{\lim} {v_n(t,x,y)}
\end{equation}
and $0 \leq \tilde{v}(x,y)\leq V$.
Fix $p\geq 1$, $T>0$ and three numbers $k<h<j$; we use
the notation in (<ref>) for the sets $I_R$ and $\Omega_R$ for $R= k,\ h, \ j$.
For $S$ an open subset of $\R^N$, in this proof we denote by $W_p^{1,2}(S)$ the space of functions with one weak derivative in time and two weak derivatives in space.
By Agmon-Douglis-Nirenberg estimates we have
\begin{equation*}
\norm{u_n}_{W^{1,2}_p( (0,T)\times I_h)} \leq C \left( \norm{u_n}_{L^p((0,T)\times I_j)} +\norm{v_n(t,x,0)}_{L^p((0,T)\times I_j)} \right) \leq CV.
\end{equation*}
To find the same estimate for the norm of $v_n$, we have to make the same construction used in the proof of Proposition <ref> to find the bound for $\psi_r$. In the same way, we get
\begin{equation*}
\begin{split}
\norm{v_n}_{W^{1,2}_p((0,T)\times \Omega_h)} \leq C \Big( \norm{u_n}_{L^p((0,T)\times I_j)}
+\norm{v_n}_{L^p((0,T)\times \Omega_j)} + \norm{f}_{L^p( I_j \times (0, V) )} \Big) \leq CV,
\end{split}
\end{equation*}
where the constant $C$, possibly varying in each inequality, depends on $\nu$, $\mu$, $d$, $D$, $T$, $h$ and $j$. Then, we apply the general Sobolev inequalities (see [45], Theorem 6 in 5.6) and get, for some $\alpha$ depending on $p$, that
\begin{align*}
\norm{u_n}_{\mathcal{C}^{\alpha}((0,T)\times I_h)} & \leq C \norm{u_n}_{W^{1,2}_p((0,T)\times I_h)} \leq CV, \\
\norm{v_n}_{\mathcal{C}^{\alpha}((0,T)\times \Omega_h)} &\leq C \norm{v_n}_{W^{1,2}_p((0,T)\times \Omega_h)} \leq CV.
\end{align*}
Moreover, since for $n\in \N$ the functions $u_n$ and $v_n$ are just time translations of the same functions $u$ and $v$, we also have that
\begin{align*}
\norm{u_n}_{\mathcal{C}^{\alpha}((0,+\infty)\times I_h)} &\leq CV, \\
\norm{v_n}_{\mathcal{C}^{\alpha}((0,+\infty)\times \Omega_h)} &\leq CV.
\end{align*}
Now we can apply Schauder interior estimates (see for example Theorem 10.1 in Chapter IV of [74]) and find that
\begin{align*}
\norm{u_n}_{\mathcal{C}^{1,2,\alpha}((0,+\infty)\times I_k)} &\leq C \left( \norm{u_n}_{\mathcal{C}^{\alpha}((0,+\infty)\times I_h)} +\norm{v_n(t,x,0)}_{\mathcal{C}^{\alpha}((0,+\infty)\times I_h)} \right) \leq CV, \\
\norm{v_n}_{\mathcal{C}^{1,2,\alpha}((0,+\infty)\times \Omega_k)} &\leq C \Big( \norm{u_n}_{\mathcal{C}^{\alpha}((0,+\infty)\times I_h)} + \\ &\hspace{5em}
+\norm{v_n}_{\mathcal{C}^{\alpha}((0,+\infty)\times \Omega_h)} + \norm{f}_{\mathcal{C}^{\alpha}(I_h \times[0,V])} \Big) \leq CV.
\end{align*}
So the sequences $\{u_n\}_{n\in\N}$ and $\{v_n\}_{n\in\N}$ are bounded locally in space in $C^{1,2,\alpha}$. By compactness we can extract converging subsequences with limits $q(t,x)$ and $p(t,x,y)$ that satisfy system (<ref>). But as said in (<ref>) and (<ref>), the sequences
$\{u_n\}$ and $\{v_n\}$ also converge pointwise to $\tilde{u}$ and $\tilde{v}$, which do not depend on time. Then, the couple $(\tilde{u}, \tilde{v})$ is a positive bounded stationary solution of system (<ref>).
The following lemma gives essential information on the stationary solutions, on which the uniqueness result of Theorem <ref> will rely.
Suppose that $c=0$, $f$ satisfies (<ref>)-(<ref>) and that
$\lambda_1( \Omega)<0$. Then, every stationary bounded solution $(u, v)\not\equiv(0,0)$ of system (<ref>) satisfies
\begin{equation*}
\underset{\R}{\inf} \, u >0, \quad \underset{\Omega}{\inf} v>0.
\end{equation*}
Step 1: sliding in $x$.
If $\lambda_1( \Omega)<0$, thanks to Proposition <ref> there exists $R>0$ such that $\lambda_1( \Omega_R)<0$.
Since $\lambda_1( \Omega_R)$ is monotonically decreasing in $R$,
we can suppose that $R> \ell$.
By a slight abuse of notation, let us call $(u_R,v_R)$ the eigenfunctions associated with $\lambda_1( \Omega_R)<0$, extended by $0$ to $\R \setminus I_R$ and $\Omega \setminus \Omega_R$, respectively.
We claim that there exists $\varepsilon>0$ such that $\varepsilon(u_R,v_R)$ is a subsolution for system (<ref>).
In fact, we have that
\begin{equation*}
\underset{v\to 0^+}{\lim} \dfrac{f(x,v)}{v} = f_v(x,0),
\end{equation*}
so for $\varepsilon$ small enough we have that
\begin{equation}\label{ch12220}
\dfrac{f(x,\varepsilon v_R)}{\varepsilon v_R} > f_v(x,0) + \lambda_1( \Omega_R).
\end{equation}
Then we have
\begin{equation}\label{ch12221}
\left\{
\begin{array}{ll}
-D \varepsilon u_R '' -c \varepsilon u_R' - \nu \varepsilon v_R|_{y=0} +\mu \varepsilon u_R=\lambda_1( \Omega_R ) \varepsilon u_R \leq 0, & x\in I_R, \\
-d \Delta \varepsilon v_R -c \partial_x \varepsilon v_R =(f_v(x,0)+ \lambda_1( \Omega_R)) \varepsilon v_R \leq f(x, \varepsilon v_R), & (x, y)\in \Omega_R, \\
-d \varepsilon\partial_y{v_R}|_{y=0} + \nu \varepsilon v_R|_{y=0} -\mu\varepsilon u_R=0, & x\in I_R,
\end{array}
\right.
\end{equation}
so $\varepsilon(u_R,v_R)$ is a subsolution.
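For clarity, here is the computation behind the middle relation in the second line of the system above (just unpacking the eigenvalue equation for $(u_R,v_R)$ and the bound on $f$ established before):

```latex
% By the eigenvalue equation for (u_R, v_R) in \Omega_R,
-d\,\Delta(\varepsilon v_R) - c\,\partial_x(\varepsilon v_R) - f_v(x,0)\,\varepsilon v_R
  = \lambda_1(\Omega_R)\,\varepsilon v_R,
% that is,
-d\,\Delta(\varepsilon v_R) - c\,\partial_x(\varepsilon v_R)
  = \big( f_v(x,0) + \lambda_1(\Omega_R) \big)\,\varepsilon v_R,
% and the smallness of \varepsilon gives
\big( f_v(x,0) + \lambda_1(\Omega_R) \big)\,\varepsilon v_R
  < \frac{f(x,\varepsilon v_R)}{\varepsilon v_R}\,\varepsilon v_R
  = f(x,\varepsilon v_R).
```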
Decreasing $\varepsilon$ if necessary, we have that $\varepsilon(u_R,v_R)<(u,v)$, because $u$ and $v$ are strictly positive at every point of the domain while $(u_R, v_R)$ has compact support. Now we translate $\varepsilon(u_R,v_R)$ in the variable $x$ by multiples of $\ell$; given $k\in \Z$, we call
\begin{align*}
u_{R, k}(x):= \varepsilon u_R(x-k\ell), \quad & I_{R,k}=(k\ell-R,k\ell+R), \\
v_{R, k}(x,y):=\varepsilon v_R(x-k\ell,y), \quad & \Omega_{R,k}= B_R(k\ell, 0) \cap \Omega.
\end{align*}
The couple $(u_{R,k}, v_{R,k})$ is still a subsolution to system (<ref>), because it is a translation of a subsolution by a multiple of the period of the coefficients in the equations.
Suppose, by contradiction, that there exists $k\in \Z$ such that $(u_{R,k}, v_{R,k})\not < (u,v)$.
Since $u$ and $v$ are strictly positive in all points of respectively $\R$ and $\Omega$, while $u_{R,k}$ and $v_{R,k}$ have compact support, by decreasing $\varepsilon$ if necessary, we have that $(u_{R,k}, v_{R,k})\leq (u,v)$ and either there exists $\bar{x}\in I_{R,k}$ such that $ u_{R,k}(\bar{x})=u(\bar{x})$ or there exists $(\bar{x}, \bar{y})\in \overline{\Omega}_{R,k} $ such that $v_{R,k}(\bar{x}, \bar{y})=v(\bar{x}, \bar{y})$.
Then, by the Comparison Principle, we have that $(u_{R,k}, v_{R,k})\equiv (u,v)$, which is absurd because $u_{R,k}$ and $v_{R,k}$ are compactly supported.
Therefore, we have
\begin{equation}\label{ch11811}
\begin{array}{rl}
u(x) > \varepsilon u_R(x+k\ell), &\quad \forall x\in\R, \ \forall k\in\Z, \\
v(x,y) > \varepsilon v_R(x+k\ell,y), &\quad \forall (x,y)\in\Omega, \ \forall k\in\Z.
\end{array}
\end{equation}
Fix $Y<\sqrt{R^2-\ell^2}$. Then, let us call
\begin{equation*}
\delta_Y := \min\{ \underset{[0, \ell]}{\min} \, \varepsilon u_R(x), \underset{[0,\ell]\times[0,Y] }{\min} \varepsilon v_R(x,y) \}.
\end{equation*}
Since $[0,\ell]\times(0,Y) \subset \Omega_R$ and $[0,\ell]\subset I_R$, we have that $\delta_Y>0$.
Then, (<ref>) implies that
\begin{equation}\label{ch11749}
\begin{array}{ll}
u(x)>\delta_Y, & \text{for} \ x\in \R, \\
v(x,y)>\delta_Y, & \text{for} \ x\in \R, \ y\in[0,Y].
\end{array}
\end{equation}
Step 2: sliding in $y$.
Recall that by Corollary <ref>, $\lambda_1( \Omega)<0$ implies $\lambda_1(-\mathcal{L},\Omega) <0$, and by Proposition <ref> it holds $\lambda_p(-\mathcal{L}, \R^2)\leq \lambda_1(-\mathcal{L},\Omega)<0$. By Proposition <ref>
and by (<ref>), we have that for some $r>0$ it holds $\lambda_p(-\mathcal{L},\R\times (-r,r))<0$. Then, let us call $v_r$ the eigenfunction related to $\lambda_p(-\mathcal{L},\R\times (-r,r))$, extended by $0$ outside its support; repeating the classical argument, one has that for some $\theta>0$ the function $\theta v_r$, extended by $0$ outside $\R\times (-r,r)$, is a subsolution for the second equation in system (<ref>).
For all $h>0$, let us now call $\varphi_h(x,y):=v_r(x, y+h)$.
Since $v_r$ is periodic in the variable $x$, we have that $v_r$ is uniformly bounded.
Now take $Y>2r$ and $h_0>r$ such that $Y>h_0+r$; by decreasing $\theta$ if necessary, we get that $\theta v_r < \delta_Y$.
Hence, we get
\begin{equation}\label{ch11756}
\theta \varphi_{h_0}(x,y) < v(x,y) \quad \text{for} \ x\in\R, \ y\geq 0.
\end{equation}
Now define
\begin{equation*}
h^*= \sup \{ h\geq h_0 \ : \ \theta \varphi_h(x,y) < v(x,y) \ \text{for} \ x\in\R, \ y\in[h-r, h+r] \}.
\end{equation*}
By (<ref>), we get that $h^* \geq h_0>r$.
We now take $\tilde{y}<h^*+r$ and define
\begin{equation*}
\tilde{h} = \left\{
\begin{array}{ll}
\tilde{y}, & \text{if} \ \tilde{y}\leq h^*, \\
\dfrac{\tilde{y}+h^*-r}{2}, & \text{if} \ h^* < \tilde{y} < h^*+r.
\end{array}
\right.
\end{equation*}
Then, $\tilde{h}\leq h^*$: if $\tilde{h}=\tilde{y}$ it is trivial; otherwise, one observes that $\tilde{y}-r < h^*$, so $\tilde{h}<h^*$.
Also, $\tilde{y}\in(\tilde{h}-r, \tilde{h}+r )$; in fact, that is obvious if $\tilde{h}=\tilde{y}$, otherwise we have that $\tilde{y}< h^*+r$ and
\begin{equation*}
\tilde{h}-r < h^*-r < \tilde{y} < \dfrac{\tilde{y}+h^*+r}{2}
= \tilde{h}+r.
\end{equation*}
Then, since $v_r$ and therefore $\varphi_{\tilde{h}}$ are periodic in $x$, we have that
\begin{equation} \label{ch11606}
v(x, \tilde{y}) > \theta \varphi_{\tilde{h}}(x, \tilde{y}) \geq \underset{[0,\ell]}{\min} \, \theta \varphi_{\tilde{h}}(\cdot, \tilde{y})>0,
\end{equation}
so $v(x,y)>0$ for all $y<h^*+r$, $x\in\R$ and moreover
\begin{equation}\label{ch11759}
v(x,y) > \theta \, \underset{[0, \ell]}{\min} \, v_r(x, 0) >0 \quad \text{for} \ x\in\R, \ y\leq h^*.
\end{equation}
Suppose, by contradiction, that $h^*<+\infty$. Then there exist a sequence $\{h_n\}_{n\in \N}$ and a sequence $\{(x_n, y_n)\}_{n\in\N}$ with $(x_n, y_n)\in \R\times[h_n-r, h_n+r]$, such that
$$\underset{n\to+\infty}{\lim} h_n=h^* \quad \text{and} \quad \underset{n\to+\infty}{\lim} {\theta\varphi_{h_n}(x_n, y_n)-v(x_n, y_n) }=0.$$
Up to a subsequence, $\{ y_n\}_{n\in\N} \subset [0, h^*+r] $ converges to some $\bar{y}\in[h^*-r, h^*+r]$ while $\{ x_n\}_{n\in\N}$ either converges to some $\bar{x}\in \R$ or goes to infinity.
For all $n\in \N$ there exist $x_n'\in[0,\ell)$ and $k_n\in\Z$ such that
\begin{equation}\label{ch12040}
x_n= x_n' + k_n\ell.
\end{equation}
Up to a subsequence,
\begin{equation}\label{ch11744}
x_n' \overset{n\to\infty}{\longrightarrow} x'\in[0,\ell].
\end{equation}
\begin{equation*}
(u_n(x), v_n(x, y)):=(u(x+x_n), v(x+x_n, y)).
\end{equation*}
Then, by Lemma <ref> we have that $\{(u_n,v_n)\}_{n\in\N}$ converges to some $(\tilde{u}, \tilde{v})$ such that
\begin{equation}\label{ch11959}
\mbox{$(\tilde{u}(x+x'), \tilde{v}(x+x', y))$ is a bounded stationary solution to \eqref{ch1sys:fieldroad}.}
\end{equation}
By (<ref>), we have
\begin{equation}\label{ch11955}
\tilde{v}(x,\tilde{y}) \geq \underset{x\in[0,\ell]}{\min} \theta \varphi_{\tilde{h}}(x, \tilde{y})>0 \quad \text{for} \ \tilde{y}<h^*+r.
\end{equation}
We notice that if $\tilde{v}(0, \bar{y})=0$ then, since $\tilde{v}\geq 0$ and (<ref>) holds,
by the maximum principle we get $\tilde{v}\equiv 0$ in $\Omega$. But since (<ref>) holds, this is not possible; hence $\tilde{v}(0, \bar{y})>0$. Therefore, $0<\tilde{v}(0, \bar{y})= \theta \varphi_{h^*}(0, \bar{y})$, so
\begin{equation}\label{ch12050}
\bar{y}\neq h^*\pm r.
\end{equation}
We have that $\theta \varphi_{h_n}$ is a subsolution for $ \mathcal{L}$ in $\R\times (h_n-r, h_n+r)$, since it is a translation of a subsolution. Moreover, thanks to the periodicity of $\varphi_{h_n}$ and the definition of $x_n'$ in (<ref>), we have
\begin{equation*}
\varphi_{h_n}(x+x_n, y)=\varphi_{h_n}(x+x_n', y).
\end{equation*}
It follows that the sequence $\varphi_{h_n}(x+x_n, y)$ converges to $\varphi_{h^*}(x+{x}', y)$.
Then, $\theta \varphi_{h^*}(x+{x}', y)$ is a subsolution for the second equation in (<ref>) in $\R\times(h^*-r, h^*+r)$, and by (<ref>) it holds that $(0, \bar{y})\in \R\times(h^*-r, h^*+r)\subset \Omega$. Hence, we can apply the comparison principle to $\tilde{v}(x,y)$ and $\theta \varphi_{h^*}(x+{x}', y)$: since $\tilde{v}(0,\bar{y})=\theta\varphi_{h^*}({x}', \bar{y})$, we get $\tilde{v}(x,y)\equiv\theta \varphi_{h^*}(x+{x}', y)$ on $\R\times(h^*-r, h^*+r)$. But then, by continuity, $\tilde{v}(x, h^*-r)=\theta \varphi_{h^*}(x+{x}', h^*-r)=0$, which contradicts (<ref>).
Hence $h^*=+\infty$. From this and (<ref>), the statement of the lemma follows.
Finally, we are ready to prove existence and uniqueness of a positive bounded stationary solution to (<ref>). The existence of such a pair of functions is crucial to obtain the persistence result of Theorem <ref>.
Suppose that $c=0$, that $f$ satisfies (<ref>)-(<ref>), and that
$\lambda_1( \Omega)<0$. Then, the following hold:
* There exists a unique positive bounded stationary solution $(u_{\infty}, v_{\infty})$ to system (<ref>).
* The functions $u_{\infty}$ and $v_{\infty}$ are periodic in the variable $x$ of period $\ell$.
Step 1: construction of a subsolution.
Since $\lambda_1( \Omega)<0$, by Theorem <ref> it holds that $\lambda_p( \Omega)<0$ and moreover by Proposition <ref>
there exists $r>1$ such that $\lambda_p( \R\times(0, r))<0$. Let us call $(\phi_r, \psi_r)$ the pair of eigenfunctions associated with $\lambda_p( \R\times (0,r))$.
We have that
\begin{equation*}
\underset{v\to 0^+}{\lim} \dfrac{f(x,v)}{v} = f_v(x,0),
\end{equation*}
so, since $\lambda_p( \R\times (0,r))<0$, there exists $\varepsilon>0$ such that
\begin{equation*}
\dfrac{f(x,\varepsilon\psi_r)}{\varepsilon\psi_r} > f_v(x,0) + \lambda_p( \R\times (0,r)).
\end{equation*}
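In more detail, the existence of such an $\varepsilon$ can be sketched as follows (assuming, as the periodicity and regularity of $f$ guarantee, that the limit above is uniform in $x$, and using that $\psi_r$ is bounded, being periodic in $x$ on a strip of finite width): since $\lambda_p( \R\times(0,r))<0$, there is $v_0>0$ such that
\begin{equation*}
\frac{f(x,v)}{v} > f_v(x,0) + \lambda_p( \R\times (0,r)) \quad \text{for all} \ 0<v\leq v_0,
\end{equation*}
and it then suffices to take $\varepsilon>0$ so small that $\varepsilon \sup \psi_r \leq v_0$.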
Hence,
\begin{equation}\label{ch11933}
\left\{
\begin{array}{ll}
-D \varepsilon \phi_r'' -c \varepsilon\phi_r' - \nu \varepsilon \psi_r\rvert_{y=0} + \mu \varepsilon \phi_r = \lambda_p ( \R\times(0,r) ) \varepsilon\phi_r < 0, & x\in\R, \\
-d \Delta \varepsilon\psi_r -c \partial_x \varepsilon\psi_r < f(x, \varepsilon \psi_r), & (x, y)\in \R\times (0, r), \\
-d \varepsilon\partial_y{\psi_r}|_{y=0} + \nu \varepsilon\psi_r|_{y=0} -\mu\varepsilon\phi_r=0, & x\in\R,
\end{array}
\right.
\end{equation}
so $\varepsilon(\phi_r, \psi_r)$ is a subsolution to system (<ref>).
Thanks to Corollary <ref>, $\lambda_1( \Omega)<0$ implies $\lambda_1(-\mathcal{L}, \R^2)<0$; then Proposition <ref> implies that $\lambda_p(-\mathcal{L}, \R^2)<0$. By (<ref>), also $\lambda_p(-\mathcal{L}, \R)<0$.
Consider the periodic positive eigenfunction $\psi_p(x)$ related to $\lambda_p(-\mathcal{L}, \R)$.
With a slight abuse of notation, we extend $\psi_p(x)$ to all of $\R^2$ by considering it constant with respect to the variable $y$.
Repeating the same arguments as before, we can prove that for some $\theta$ the function
$\theta \psi_p(x)$ is a subsolution for the second equation of system (<ref>) in $\R^2$.
Since $\psi_p(x)$ is bounded and $\varepsilon\psi_r$ has a positive minimum on $[0,\ell]\times[0,r-1]$, we can choose $\delta>0$ and $\varepsilon'\in(0, \theta)$ such that
\begin{equation}\label{ch11653}
\underset{[0,\ell]}{\max} \, \varepsilon' \psi_p(x) <\delta < \underset{\substack{[0,\ell]\times[0,r-1]}}{\min} \varepsilon\psi_r(x,y) .
\end{equation}
Then, let us define the functions
\begin{align*}
\underline{u}(x) & := \varepsilon\phi_r(x), \\
\underline{v}(x,y) &:= \max \{ \varepsilon\psi_r(x,y) , \, \varepsilon' \psi_p(x)\}.
\end{align*}
By (<ref>), for $y\in(0, r-1)$ it holds that $\underline{v}(x,y)=\varepsilon\psi_r(x,y)$. Hence, $(\underline{u}, \underline{v})$ is a subsolution for the first and third equations of (<ref>). Moreover, since $\varepsilon\psi_r(x,y)$ and $\varepsilon' \psi_p(x)$ are both subsolutions of the second equation of (<ref>), the maximum of the two is a generalised subsolution. We can therefore conclude that $(\underline{u}, \underline{v})$ is a generalised subsolution for the system (<ref>).
Since $\phi_r$ and $\psi_p$ are periodic in $x$ and independent of $y$, we get
$$\underset{\R}{\inf} \, \underline{u}(x)>0 \quad \text{and} \quad \underset{\Omega}{\inf} \, \underline{v}(x,y) >0.$$
So, $(\underline{u}, \underline{v})$ is a generalised subsolution for the system (<ref>), with positive infimum, and by the periodicity of $\phi_r$, $\psi_r$ and $\psi_p$, it is periodic in $x$ with period $\ell$.
Step 2: construction of a stationary solution.
Take the generalised subsolution $(\underline{u}, \underline{v})$. We want to show that the solution $(\tilde{u}(t,x), \tilde{v}(t,x,y))$ having $(\underline{u}(x), \underline{v}(x,y))$ as initial datum is increasing in time and converges to a stationary solution.
Since $(\underline{u}, \underline{v})$ is a subsolution, we have $(\underline{u}, \underline{v}) \leq (\tilde{u}, \tilde{v})$ for all $t\geq 0$. Now, for any $\tau>0$,
consider the solution $(z,w)$ starting at $t=\tau$ from the initial datum $(\underline{u}(x), \underline{v}(x,y))$. At $t=\tau$ we have $(\tilde{u}(\tau,x), \tilde{v}(\tau,x,y))\geq (z(\tau,x), w(\tau,x,y))$, so by the comparison principle <ref>, for all $t\geq \tau$ it holds that $(\tilde{u}(t,x), \tilde{v}(t,x,y))\geq (z(t,x), w(t,x,y))$. By the arbitrariness of $\tau$, we get that $(\tilde{u}(t,x), \tilde{v}(t,x,y))$ is increasing in time.
Moreover, consider
\begin{equation*}
V:= \max \left\{ M, \sup \underline{v}, \frac{\mu}{\nu} \sup \underline{u} \right\}, \quad U:= \frac{\nu}{\mu} V,
\end{equation*}
where $M>0$ is the threshold value defined in (<ref>).
One immediately checks that $(U,V)$ is a supersolution for the system (<ref>).
Also, we have that $(\underline{u}(x), \underline{v}(x,y)) \leq (U,V)$, so by the comparison principle <ref> it holds that
\begin{equation*}
(\tilde{u}(t,x), \tilde{v}(t,x,y)) \leq (U,V) \quad \text{for all} \ t>0.
\end{equation*}
Hence, $(\tilde{u}(t,x), \tilde{v}(t,x,y))$ is bounded.
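For completeness, the supersolution property of the constant pair $(U,V)$ can be sketched as follows (with $c=0$, as assumed in this statement, and assuming that the threshold $M$ satisfies $f(x,v)\leq 0$ for all $v\geq M$): since all derivatives of constants vanish, the three conditions reduce to
\begin{equation*}
-\nu V + \mu U = -\nu V + \mu \, \frac{\nu}{\mu} V = 0 \geq 0,
\qquad
0 \geq f(x,V),
\qquad
\nu V - \mu U = 0,
\end{equation*}
where the middle inequality holds because $V \geq M$.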
Now consider an increasing diverging sequence $\{t_n\}_{n\in \N}\subset \R^+$. Then, define
\begin{equation*}
u_n(t,x):=\tilde{u}(t+t_n, x), \quad v_n(t,x,y):=\tilde{v}(t+t_n, x, y),
\end{equation*}
which is a sequence of pairs of functions. By Lemma <ref>, $(u_n, v_n)$ converges in $\mathcal{C}_{loc}^{1,2,\alpha}$ to a stationary bounded solution to (<ref>), which we call $(u_{\infty}, v_{\infty})$. We point out that $(u_{\infty}, v_{\infty})\not\equiv(0,0)$ since
\begin{equation*}
(u_{\infty}, v_{\infty}) \geq (\underline{u}, \underline{v}) > (0,0).
\end{equation*}
Moreover, both functions are periodic of period $\ell$ in the variable $x$ since the initial datum is.
Step 3: uniqueness.
Suppose that there exists another positive bounded stationary solution $(q,p)$ to (<ref>). Then, define
\begin{equation*}
k^* := \sup \left\{ k>0 \ : \ u_{\infty}(x) > k q(x) \ \forall x\in\R, \ v_{\infty}(x,y)> kp(x,y) \ \forall(x,y)\in\Omega \right\}.
\end{equation*}
Since by Lemma <ref> the functions $u_{\infty}$ and $v_{\infty}$ have positive infimum and since $p$ and $q$ are bounded, we have that $k^*>0$.
We claim that
\begin{equation}\label{ch12337}
k^* \geq 1.
\end{equation}
By the definition of $k^*$, one of the following must hold: there exists
\begin{equation}\label{ch1caso1}
\begin{split}
\mbox{ either a sequence $\{x_n\}_{n\in\N}\subset \R$ such that $u_{\infty}(x_n) - k^* q(x_n) \overset{n\to\infty}{\longrightarrow} 0$,}
\end{split}
\end{equation}
\begin{equation}\label{ch1caso2}
\mbox{or a sequence $\{(x_n, y_n)\}_{n\in\N}\subset \Omega$ such that $v_{\infty}(x_n,y_n) - k^* p(x_n, y_n) \overset{n\to\infty}{\longrightarrow} 0$.}
\end{equation}
There exists a sequence $\{ {x}_n' \}_{n\in\N} \subset[0,\ell)$ such that
\begin{equation}\label{ch11807}
x_n-{x}_n' \in \ell \Z \quad \text{for all} \ n\in\N.
\end{equation}
Then, up to extraction of a converging subsequence, we have that there exists $ x'\in \R$ such that ${x}_n' \overset{n\to \infty}{\longrightarrow} x'$.
One can see that each couple of the sequence
\begin{equation*}
(q_n(x), p_n(x,y) ):= (q(x+x_n), p(x+x_n, y) )
\end{equation*}
is a stationary solution for (<ref>) with reaction function $f(x+ {x}_n', v)$. By Lemma <ref>, up to a subsequence, $(q_n, p_n)$ converges in $\mathcal{C}_{loc}^{2}$ to some $(q_{\infty}, p_{\infty})$, which is a stationary solution of (<ref>) with reaction function $f(x+ x', v)$.
We also notice that, thanks to the periodicity of $u_{\infty}$ and $v_{\infty}$, $(u_{\infty}(x+ x'), v_{\infty}(x+ x', y))$ is also a stationary solution of (<ref>) with reaction function $f(x+ x', v)$.
Define the functions
\begin{equation}
\begin{split}
\alpha(x)&:=u_{\infty}(x+ x') - k^*q_{\infty}(x), \\
\beta(x,y)&:= v_{\infty}(x+ x', y) - k^*p_{\infty}(x,y),
\end{split}
\end{equation}
and notice that $\alpha (x)\geq 0$ and $\beta(x, y)\geq 0$.
Now suppose that (<ref>) holds.
We have that
\begin{equation*}
\alpha(0)=u_{\infty}( x') - k^*q_{\infty}(0)=0.
\end{equation*}
Moreover, $\alpha(x)$ is a solution to the equation
\begin{equation*}
-D \alpha''-c\alpha' -\nu \beta|_{y=0} +\mu \alpha =0.
\end{equation*}
Since $\alpha(x)$ attains its minimum in the interior of the domain, the maximum principle yields $\alpha(x)\equiv \min \alpha =0$. Then, one would have $u_{\infty}(x+ x')\equiv k^* q_{\infty}(x)$ and, by the comparison principle <ref>, $v_{\infty}(x+ x', y)\equiv k^*p_{\infty}(x,y)$.
Subtracting the second equation of system (<ref>) for $p_{\infty}$ from the one for $v_{\infty} $ we get
\begin{equation}\label{ch11948}
0=f(x+ x', v_{\infty}(x+ x',y))-k^* f(x+ x',p_{\infty}(x,y)).
\end{equation}
If, by contradiction, $k^*<1$, then by the KPP hypothesis (<ref>) we have $k^*f(x+ x', p_{\infty}(x,y)) <f(x+ x', k^* p_{\infty}(x,y)) = f(x+ x', v_{\infty}(x+ x', y))$, so the right-hand side of (<ref>) is strictly positive, which is absurd since the left-hand side is $0$. We can conclude that, in the case of (<ref>), (<ref>) holds.
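The KPP inequality used above admits a one-line derivation, which we sketch assuming that the Fisher-KPP hypothesis (<ref>) states that $v\mapsto f(x,v)/v$ is strictly decreasing for $v>0$: for $0<k<1$ and $v>0$,
\begin{equation*}
\frac{f(x,kv)}{kv} > \frac{f(x,v)}{v} \quad \Longrightarrow \quad f(x,kv) > k f(x,v);
\end{equation*}
applying this with $k=k^*$ and $v=p_{\infty}(x,y)$ gives exactly $k^* f(x+ x', p_{\infty}(x,y)) < f(x+ x', k^* p_{\infty}(x,y))$.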
Suppose instead that (<ref>) is true. If $\{ y_n\}_{n\in\N}$ is bounded, then, up to a subsequence, we have $y_n \overset{n\to \infty}{\longrightarrow} y'$ for some $y'\geq 0$, and
\begin{equation}\label{ch12007}
\beta(0, y')= v_{\infty}( x', y')- k^* p_{\infty}(0,y' )=0.
\end{equation}
If, by contradiction, $k^*<1$, then by the Fisher-KPP hypothesis (<ref>) we have
\begin{equation}\label{ch12001}
\begin{split}
-d \Delta \beta(x,y) -c \partial_x \beta(x,y) &= f(x+ x', v_{\infty}(x+ x',y))- k^* f(x+ x', p_{\infty}(x,y)) \\
&> f(x+ x', v_{\infty}(x+ x',y))- f(x+ x', k^*p_{\infty}(x,y)).
\end{split}
\end{equation}
Since $f$ is locally Lipschitz continuous in the second variable, one infers from (<ref>) that there exists a bounded function $b(x)$ such that
\begin{equation}\label{ch12009}
-d \Delta \beta -c \partial_x \beta + b \beta >0.
\end{equation}
Given this, $\beta \geq 0$, and $\beta(0, y')=0$ by (<ref>), if $y'>0$ we can apply the strong maximum principle and obtain $\beta \equiv 0$.
If $y'=0$, we point out that, since $v_{\infty}$ and $p_{\infty}$ are solutions to (<ref>), it holds that
\begin{equation*}
d \partial_y \beta(x,0) = \nu (v_{\infty}(x+ x',0) - k^* p_{\infty}(x,0) ) - \mu (u_{\infty}(x+ x')- k^* q_{\infty}(x)) = \nu \beta(x,0) - \mu \alpha(x),
\end{equation*}
so that, at $x=0$, where $\beta(0,0)=0$, we get $d \partial_y \beta(0,0) = -\mu\alpha(0) \leq 0$.
By this, the inequality in (<ref>), $\beta \geq 0$, and $\beta(0, y')=0$, we can apply Hopf's lemma and again obtain $\beta \equiv 0$.
Then, for both $y'>0$ and $y'=0$, we have $v_{\infty}(x+ x', y)\equiv k^*p_{\infty}(x, y)$ and (<ref>) holds, but we have already seen that this is absurd. So, in the case of (<ref>), if $\{ y_n\}_{n\in\N}$ is bounded, (<ref>) is true.
Finally, if $\{ y_n\}_{n\in\N}$ is unbounded, we define
\begin{align*}
V_n(x,y)&:=v_{\infty}(x+x_n, y+y_n), \\
P_n(x,y)&:=p(x+x_n, y+y_n).
\end{align*}
By Lemma <ref>, up to subsequences, $V_n$ and $P_n$ converge in $\mathcal{C}_{loc}^{2}$ to some functions $V_{\infty}$ and $P_{\infty}$ solving
\begin{equation*}
- d \Delta v - c\partial_x v = f(x+ x', v) \quad \text{for} \ (x,y)\in\R^2 .
\end{equation*}
Moreover, if we suppose $k^*< 1$, by the Fisher-KPP hypothesis (<ref>) we have that
\begin{equation*}
k^*f(x+ x', P_{\infty})< f(x+ x', k^* P_{\infty})
\end{equation*}
and consequently, calling $\gamma:=V_{\infty} - k^* P_{\infty}$, we get
\begin{equation*}
- d \Delta\gamma - c\partial_x \gamma > f(x+ x', V_{\infty}) - f(x+ x', k^*P_{\infty}) .
\end{equation*}
Once again using the local Lipschitz continuity of $f$ in the second variable, for some bounded function $b$ we have that
\begin{equation}\label{ch12336}
- d \Delta\gamma - c\partial_x \gamma + b \gamma >0.
\end{equation}
Also, we have that
\begin{equation*}
\gamma(0,0)=V_{\infty}(0,0) - k^* P_{\infty} (0,0) = \underset{n\to \infty}{\lim} \left( v_{\infty}(x_n, y_n) - k^* p(x_n, y_n) \right)=0.
\end{equation*}
From this, $\gamma \geq 0$, and (<ref>), we can apply the strong maximum principle and obtain $\gamma \equiv 0$ in $\R^2$. Then, $V_{\infty}\equiv k^* P_{\infty}$ and
\begin{equation}
0=- d \Delta\gamma - c\partial_x \gamma = f(x+ x', k^*P_{\infty}) - k^* f(x+ x', P_{\infty}) >0,
\end{equation}
which is absurd. Since this was the last case to rule out, we can conclude that (<ref>) holds.
From (<ref>), we have that
\begin{equation}\label{ch12346}
(u_{\infty}, v_{\infty}) \geq (q,p).
\end{equation}
Now, we can repeat the whole argument exchanging the roles of $(u_{\infty}, v_{\infty})$ and $(q,p)$. We find
\begin{equation*}
h^* := \sup \left\{ h>0 \ : \ q(x) > h u_{\infty}(x) \ \forall x\in\R, \ p(x,y) >h v_{\infty}(x,y) \ \forall(x,y)\in\Omega \right\} \geq 1.
\end{equation*}
and hence
\begin{equation*}
(q,p) \geq (u_{\infty}, v_{\infty}).
\end{equation*}
By that and (<ref>), we have that $(u_{\infty}, v_{\infty}) \equiv (q,p)$. Hence, the uniqueness is proven.
Now we are ready to give a result on the persistence of the population.
Since $\lambda_1( \Omega)<0$, by Proposition <ref> there exists $R>0$ such that $\lambda_1( \Omega_R)<0$. Let us consider the pair of eigenfunctions $(u_R, v_R)$ associated with $\lambda_1( \Omega_R)$; then, by the argument already used in the proof of Lemma <ref> (precisely, in (<ref>) and (<ref>)), there exists a value $\varepsilon>0$ such that $(\varepsilon u_R, \varepsilon v_R)$ is a subsolution to (<ref>) in $\Omega_R$.
Observe also that $u_R(x)=0$ for $x\in\partial I_R$ and $v_R(x,y)=0$ for $(x,y)\in(\partial \Omega_R)\cap\Omega$. Then, we can extend $\varepsilon u_R$ and $ \varepsilon v_R$ outside respectively $I_R$ and $\Omega_R$, obtaining the generalised subsolution $(\varepsilon u_R, \varepsilon v_R)$.
Let us consider the solution $(u,v)$ issued from $(u_0, v_0)$.
Then, by the strong parabolic principle we have that
\begin{equation}\label{ch12225}
u(1, x)>0\quad \text{and} \quad v(1,x,y)>0.
\end{equation}
Recall that $(u_{\infty}, v_{\infty})$ is the unique stationary solution of (<ref>), and that by Lemma <ref> we have
\begin{equation}\label{ch12300}
u_{\infty}>0\quad \text{and} \quad v_{\infty}>0.
\end{equation}
By that and (<ref>), we have that
\begin{equation*}
\delta:=\min\{\underset{x\in I_R}{\min} \, u(1,x), \underset{x\in I_R}{\min} \, u_{\infty}(x), \underset{(x,y)\in \Omega_R}{\min} v(1,x,y), \underset{(x,y)\in \Omega_R}{\min} v_{\infty}(x,y) \} >0.
\end{equation*}
Without loss of generality, we can suppose
\begin{equation}\label{ch12301}
\varepsilon <\delta
\end{equation}
and thus by (<ref>), (<ref>), and (<ref>), we have
\begin{equation}\label{ch12306}
\begin{split}
u_{\infty}(x) &> \varepsilon u_R(x) \quad \text{for all} \ x\in \R, \\
v_{\infty}(x,y) &> \varepsilon v_R(x,y) \quad \text{for all} \ (x,y)\in \Omega.
\end{split}
\end{equation}
Now, consider the solution $(\underline{u}, \underline{v})$ issued from $(\varepsilon u_R, \varepsilon v_R)$.
We point out that, by the comparison principle, for all $t>0$ we have
\begin{equation}\label{ch10017}
(\underline{u}(t,x), \underline{v}(t,x,y)) \leq ({u}(t+1,x), {v}(t+1,x,y)).
\end{equation}
By the standard argument already used in the proof of Theorem <ref>, we have that $(\underline{u}, \underline{v})$ is increasing in time and by Lemma <ref> it converges in $\mathcal{C}_{loc}^2$ to a stationary function $(\underline{u_{\infty}}, \underline{v_{\infty}})$ as $t$ tends to infinity. Since $(\underline{u}, \underline{v})$ is increasing in time and $(\varepsilon u_R, \varepsilon v_R)\not \equiv (0,0)$, by the strong maximum principle we have $(\underline{u_{\infty}}, \underline{v_{\infty}}) > (0,0)$. By (<ref>), we also have
\begin{equation*}
(\underline{u_{\infty}}, \underline{v_{\infty}}) \leq ({u_{\infty}}, {v_{\infty}}).
\end{equation*}
Then, by the uniqueness of the bounded positive stationary solution proved in Theorem <ref>, we have $(\underline{u_{\infty}}, \underline{v_{\infty}}) \equiv ({u_{\infty}}, {v_{\infty}})$.
Next, take
\begin{equation}\label{ch10008}
V:= \max \left\{ M, \sup v_0, \frac{\mu}{\nu} \sup u_0, \sup v_{\infty}, \frac{\mu}{\nu} \sup u_{\infty} \right\}, \quad U:= \frac{\nu}{\mu} V,
\end{equation}
where $M>0$ is the threshold value defined in (<ref>).
Making use of the hypothesis (<ref>) on $f$, one easily checks that $(U, V)$ is a supersolution for (<ref>). Let us call $(\overline{u}, \overline{v})$ the solution to (<ref>) issued from $(U,V)$.
By definition, $(U,V)\geq (u_0, v_0)$, hence by the comparison principle for all $t>0$ we have
\begin{equation}\label{ch10012}
( u(t,x), v(t, x,y) ) \leq (\overline{u}(t,x), \overline{v}(t, x,y) ).
\end{equation}
Repeating the argument used in the proof of Theorem <ref>, we observe that $(\overline{u}, \overline{v})$ is decreasing in time and by Lemma <ref> it converges in $\mathcal{C}_{loc}^2$ to a stationary function $(\overline{u_{\infty}}, \overline{v_{\infty}})$ as $t$ tends to infinity.
We have $(\overline{u_{\infty}}, \overline{v_{\infty}}) \leq (U,V)$, so the stationary solution is bounded.
Moreover, since by the definition of $(U,V)$ in (<ref>) we have $ ( {u_{\infty}}, {v_{\infty}}) \leq (U, V) $, by the comparison principle <ref> we get
\begin{equation*}
( {u_{\infty}}, {v_{\infty}}) \leq (\overline{u_{\infty}}, \overline{v_{\infty}}).
\end{equation*}
Since $(\overline{u_{\infty}}, \overline{v_{\infty}})$ is a bounded positive stationary solution of (<ref>), by Theorem <ref> we have that $( {u_{\infty}}, {v_{\infty}}) \equiv (\overline{u_{\infty}}, \overline{v_{\infty}})$.
By the comparison principle <ref> and by (<ref>) and (<ref>), for all $t>1$ we have
\begin{equation*}
\begin{split}
\underline{u}(t-1, x) \leq u(t,x) \leq \overline{u}(t,x) \quad \text{for all} \ x\in\R, \\
\underline{v}(t-1, x,y) \leq v(t,x,y) \leq \overline{v}(t,x,y) \quad \text{for all} \ (x,y)\in\Omega.
\end{split}
\end{equation*}
Since both $(\underline{u}, \underline{v})$ and $(\overline{u}, \overline{v})$ converge to $( {u_{\infty}}, {v_{\infty}})$ locally as $t$ tends to infinity, by the sandwich theorem we have that $(u, v)$ also does. This is precisely the statement that we wanted to prove.
\subsection{Extinction}
The first step to prove extinction is to show that there is no positive bounded stationary solution to system (<ref>), that is, the only bounded stationary solution is $(0,0)$.
Suppose that $c=0$ and that $f$ satisfies (<ref>)-(<ref>).
If $\lambda_1( \Omega)\geq 0$, then there is no positive bounded stationary solution to system (<ref>).
Step 1: construction of a supersolution.
Observe that in this case, since $c=0$, by Theorem <ref> it holds $\lambda_p(\Omega)=\lambda_1(\Omega)\geq 0$.
We take the couple of eigenfunctions $(u_p, v_p)$ related to $\lambda_p(\Omega)$ as prescribed by Proposition <ref>; recall that $(u_p, v_p)$ are periodic in $x$.
Suppose $(q,p)$ is a positive bounded stationary solution to (<ref>). Then, there exists $\eta>0$ such that
\begin{equation}\label{ch12347}
q(0) > \eta u_p(0).
\end{equation}
We now choose a smooth function $\chi : \R_{\geq 0} \to \R_{\geq 0}$ such that $\chi(y)=0$ for $y\in[0,\ell]$, $\chi(y)=1$ for $y\in[ 2\ell, +\infty)$.
By (<ref>) and Theorem <ref>, we have $\lambda_p(-\mathcal{L}, \R)=\lambda_p(-\mathcal{L}, \R^2)=\lambda_1(-\mathcal{L}, \R^2)$. By that, Theorem <ref> and the fact that $\lambda_1(\Omega)\geq 0$, we get $\lambda_p(-\mathcal{L}, \R)\geq 0$.
We call $\psi_p$ the eigenfunction related to $\lambda_p(-\mathcal{L}, \R)$ and, with a slight abuse of notation, we extend it to $\R^2$ by considering it constant with respect to the variable $y$.
Take $\varepsilon>0$, to be fixed later, and define
\begin{equation*}
(\overline{u}(x), \overline{v}(x,y)):= (\eta u_p(x), \eta v_p(x,y) + \varepsilon \chi(y) \psi_p(x)).
\end{equation*}
Then, it holds that
\begin{equation}\label{ch12011}
\begin{split}
- d \Delta \overline{v}
&= -d \left(\eta \Delta v_p +\varepsilon \chi''\psi_p + \varepsilon \chi \psi_p'' \right) \\
&= \left( f_v(x,0)+\lambda_p( \Omega) \right) \eta v_p + ( f_v(x,0)+ \lambda_p(-\mathcal{L}, \R) ) \varepsilon \chi \psi_p - d \varepsilon \chi''\psi_p \\
& = f_v(x,0) \overline{v} + \lambda_p( \Omega) \eta v_p + \lambda_p(-\mathcal{L}, \R) \varepsilon \chi \psi_p - d \varepsilon \chi''\psi_p.
\end{split}
\end{equation}
Using the KPP hypothesis (<ref>) and the boundedness of $\chi''$, for $\varepsilon$ small enough we have
\begin{equation*}
f_v(x,0) \overline{v} - d \varepsilon \chi''\psi_p > f(x, \overline{v}).
\end{equation*}
By that, (<ref>) and the non negativity of $\lambda_p(\Omega)$ and $\lambda_p(-\mathcal{L}, \R)$, we have
\begin{equation*}
- d \Delta \overline{v} > f(x, \overline{v}).
\end{equation*}
This means that $\overline{v}$ is a supersolution for the second equation of (<ref>).
Since by definition for $y\leq \ell$ we have $\chi(y)=0$, it holds that
\begin{equation}\label{ch12010}
(\overline{u}(x), \overline{v}(x,y))\equiv (\eta u_p(x), \eta v_p(x,y)) \quad \text{for all} \ (x, y)\in \R\times (0, \ell).
\end{equation}
By the fact that $\lambda_p(\Omega ) \geq 0$, it is easy to check that $(\eta u_p(x), \eta v_p(x,y))$ is a supersolution for the first and third equations in (<ref>). By (<ref>), the same holds for $(\overline{u}(x), \overline{v}(x,y))$.
This, together with (<ref>), gives that $(\overline{u}(x), \overline{v}(x,y))$ is a supersolution to (<ref>).
Step 2: construction of a bounded supersolution.
Now we distinguish two cases.
If $ v_p$ is bounded, then we take
\begin{equation}\label{ch1case1super}
(\tilde{u}, \tilde{v}):= (\overline{u}, \overline{v}).
\end{equation}
Otherwise, we proceed as follows.
In this case, since $v_p$ is unbounded and periodic in $x$, there exists a sequence $\{(x_n, y_n)\}_{n\in\N}$ such that
\begin{equation}\label{ch11721b}
v_p(x_n, y_n) \to \infty, \ y_n\to \infty \quad \text{as} \ n\to\infty.
\end{equation}
Now, consider
\begin{equation}\label{ch11418}
V:= \max \left\{ \underset{[0,\ell]\times [0, 3\ell]}{\max} v_p +1, \ \underset{[0,\ell]}{\max} \, \frac{\mu}{\nu} u_p +1 , \ M \right\},
\end{equation}
where $M$ is the quantity defined in (<ref>).
Take the set $S:=(-\ell, \ell)\times(-\ell, \ell)$ and the constant $C$ of the Harnack inequality (see Theorem 5 in Chapter 6.4 of [45]) on the set $S$ for the operator $L(\psi)=\mathcal{L}(\psi)+\lambda_1(\Omega)\psi$. Then, by (<ref>), for some $N\in\N$ we have
\begin{equation*}
V \leq \frac{1}{C} v_p(x_N, y_N).
\end{equation*}
Then, using this and the Harnack inequality for $v_p(x+x_N,y+y_N)$ on the set $S$, we get
\begin{equation*}
V \leq \frac{1}{C} \, \underset{S}{\sup} \, v_p(x+x_N, y+y_N)
\leq \underset{S}{\inf} \, v_p(x+x_N, y+y_N).
\end{equation*}
Then, using the periodicity of $v_p$, we get
\begin{equation}\label{ch1comp1}
V \leq v_p(x, y_N) \quad \text{for all} \ x\in\R.
\end{equation}
Now, define
\begin{equation}\label{ch1case2superv}
\tilde{v}(x,y):= \left\{
\begin{array}{ll}
\min \{ V, \bar{v}(x,y) \} & \text{if} \ y \leq y_N, \\
V & \text{if} \ y > y_N.
\end{array}
\right.
\end{equation}
Also, we define
\begin{equation*}
U := \frac{\nu}{\mu} V
\end{equation*}
and
\begin{equation}\label{ch1case2superu}
\tilde{u}:= \min\{ U, u_p \}.
\end{equation}
By the definition of $V$ in (<ref>), one readily checks that $(U,V)$ is a supersolution for system (<ref>) and that
\begin{equation}\label{ch1comp2}
\tilde{u} = u_p \quad \text{and} \quad \tilde{v}(x,0)=v_p(x,0).
\end{equation}
We point out that, by the definition of $(\tilde{u}, \tilde{v})$, (<ref>) and (<ref>), for any subsolution $(\underline{u}, \underline{v})$ to system (<ref>) we will be able to apply
the generalised comparison principle, Proposition 3.3 in [20].
Moreover, $(\tilde{u}, \tilde{v})$ is bounded from above by $(U,V)$.
Since $(u_p, v_p)$ is a couple of generalised periodic eigenfunctions of (<ref>), by the strong maximum principle we have that
\begin{equation}\label{ch12341}
\begin{split}
\tilde{u}(x) &\geq \underset{[0,\ell]}{\min} \, \eta u_p(x') >0 \quad \text{for} \ x\in\R, \\
\tilde{v}(x,y) &\geq \underset{ [0,\ell]\times[0,2\ell]}{\min} \eta v_p(x',y') >0 \quad \text{for} \ (x,y)\in\R\times[0, 2\ell], \\
\tilde{v}(x,y) &\geq \min\{\underset{[0,\ell]}{\min} \, \varepsilon\psi_p(x'), V\} >0 \quad \text{for} \ (x,y)\in\R\times(2\ell, +\infty).
\end{split}
\end{equation}
Step 3: comparison with the stationary solution.
Next, define
\begin{equation*}
k^*:= \inf \{ k\geq 0 \ : \ k( \tilde{u}(x), \tilde{v}(x,y)) > (q,p) \ \text{for all} \ (x,y)\in\Omega \}.
\end{equation*}
Since by (<ref>) we have that $ \tilde{u}(x)$ and $ \tilde{v}(x,y)$ are bounded away from $0$, and since $(q,p)$ is bounded by hypothesis, we get that $k^*<+\infty$.
By (<ref>), we have that
\begin{equation}\label{ch12234}
k^* > 1.
\end{equation}
Then, either
\begin{equation}\label{ch1case1}
\mbox{there exists a sequence $\{x_n\}_{n\in\N}\subset \R$ such that $k^* \tilde{u}(x_n) - q(x_n) \overset{n\to\infty}{\longrightarrow} 0$,}
\end{equation}
or
\begin{equation}\label{ch1case2}
\mbox{
there exists a sequence $\{(x_n, y_n)\}_{n\in\N}\subset \Omega$ such that $k^* \tilde{v}(x_n,y_n) - p(x_n, y_n) \overset{n\to\infty}{\longrightarrow} 0$.}
\end{equation}
As usual, for all $n\in\N$ we take $x_n'\in[0,\ell)$ such that $x_n-x_n'\in\ell \Z$. Up to a subsequence, $\{ x_n'\}_{n\in\N}$ is convergent and we call
\begin{equation*}
x'= \underset{n\to\infty}{\lim} x_n' \in[0,\ell].
\end{equation*}
Step 4: $\{y_n\}_{n\in\N}$ is bounded.
If $\{y_n\}_{n\in\N}$ is bounded, consider a converging subsequence and call $y'= \underset{n\to \infty}{\lim} y_n$.
We define
\begin{equation*}
(q_n(x), p_n(x,y)):=(q(x+x_n), p(x+x_n,y)).
\end{equation*}
By Lemma <ref>, $(q_n, p_n)$ converges in $\mathcal{C}_{loc}^2$ to some $(q_{\infty}, p_{\infty})$ such that $(q_{\infty}(x-x'), p_{\infty}(x-x', y))$ solves (<ref>).
Define the functions
\begin{align*}
\alpha(x) &:= k^* \tilde{u}(x)-q_{\infty}(x-x'), \\
\beta(x,y)&:= k^* \tilde{v}(x,y)- p_{\infty}(x-x', y).
\end{align*}
If we are in the case of (<ref>), then by the periodicity of $\tilde{u}$ we get
\begin{equation*}
\alpha(x')= k^* \tilde{u}(x')-q_{\infty}(0)= \underset{n\to\infty}{\lim} ( k^* \tilde{u}(x_n)- q(x_n) )=0.
\end{equation*}
Moreover, by the definition of $k^*$, we have that $\alpha\geq 0$. Also, $\alpha$ satisfies
\begin{equation*}
-D \alpha '' -\nu \beta|_{y=0}+ \mu \alpha \geq 0.
\end{equation*}
Since $\alpha$ attains its minimum at $x=x'$, the strong maximum principle yields $\alpha\equiv0$. Then, by the comparison principle (Proposition 3.3 in [20]) we have that $\beta\equiv 0$, hence
\begin{equation}\label{ch12226}
0= -d \Delta \beta \geq k^*f(x, \tilde{v}) - f (x,p_{\infty}(x-x',y)).
\end{equation}
By (<ref>), we have that $k^* \tilde{v} > \tilde{v}$. Hence, by the Fisher-KPP hypothesis (<ref>), we have that
\begin{equation}\label{ch12249}
\frac{f(x, k^* \tilde{v})}{k^* \tilde{v}} < \frac{f(x, \tilde{v})}{ \tilde{v}}.
\end{equation}
Hence, again by the fact that $\beta\equiv 0$, we have $p_{\infty}(x-x',y)\equiv k^* \tilde{v}$; by that and by (<ref>), it holds
\begin{equation}\label{ch12257}
k^* f(x, \tilde{v})- f(x,p_{\infty}(x-x',y))=k^* f(x, \tilde{v})- f(x, k^* \tilde{v})>0.
\end{equation}
But this is in contradiction with (<ref>), hence this case is impossible.
If instead (<ref>) holds, we get that
\begin{equation}\label{ch12256}
\beta(x', y')= k^* \tilde{v}(x', y')- p_{\infty}(0, y')= \underset{n\to \infty}{\lim} k^* \tilde{v}(x_n, y_n)- p(x_n, y_n)=0.
\end{equation}
By the definition of $k^*$ we also have that $\beta \geq 0$. Moreover, we get that
\begin{equation*}
-d \Delta \beta \geq f(x,k^* \tilde{v}) - f (x,p_{\infty}(x-x',y))
\end{equation*}
where we used the fact that $ \tilde{v}(x,y)$ is a supersolution, that $p_{\infty}(x-x',y)$ is a solution, and (<ref>).
Since $f$ is Lipschitz in the second variable, uniformly with respect to the first one, there exists some function $b$ such that
\begin{equation*}
-d \Delta \beta - b \beta \geq 0.
\end{equation*}
If $y'>0$, using the strong maximum principle and owing to (<ref>), we have that $\beta \equiv 0$.
If instead $y'=0$, recall that it also holds
\begin{equation*}
-d \partial_y \beta|_{y=0} \geq \mu \alpha -\nu \beta.
\end{equation*}
Hence, in $(x,y)=(x', y')$, we get that $\partial_y \beta(x',y') \leq 0$. By Hopf's lemma, we get again that $\beta\equiv 0$.
But $\beta\equiv 0$ leads again to (<ref>) and (<ref>), giving a contradiction, hence this case is also impossible.
Step 5: $\{y_n\}_{n\in\N}$ is unbounded.
We are left with the case of $\{y_n\}_{n\in\N}$ unbounded.
Up to a subsequence, we can suppose that $\{y_n\}_{n\in\N}$ is increasing.
We define
\begin{equation*}
P_n(x,y):= p(x+x_n, y+y_n).
\end{equation*}
By Lemma <ref> we have that, up to a subsequence, $\{P_n\}_{n\in\N}$ converges in $\mathcal{C}_{loc}^{2,\alpha}(\R^2)$ to some function $P_{\infty}$ such that $P_{\infty}(x-x',y)$ is a solution to the second equation in (<ref>) in $\R^2$.
Now we have two cases depending on how $(\tilde{u}, \tilde{v})$ was constructed. If $v_p$ is bounded, we have defined the supersolution as in (<ref>). Then,
by defining
\begin{equation*}
v_n(x,y):=v_p(x+x_n, y+y_n)
\end{equation*}
and applying Lemma <ref>, we have that $v_n$ converges locally uniformly to a bounded function $v_{p, \infty}$ such that $v_{p, \infty}(x-x',y)$ satisfies
\begin{equation}\label{ch11630}
-d\Delta v_{p, \infty}(x-x',y) = (f_v(x,0)+\lambda_1(\Omega))v_{p, \infty}(x-x',y).
\end{equation}
In this case, we define
\begin{equation*}
v_{\infty}(x,y):=\eta v_{p, \infty}(x,y) + \varepsilon \psi_p(x+x').
\end{equation*}
We point out that $v_{\infty}(x-x',y)$ is a periodic supersolution of the second equation in (<ref>) by (<ref>) and (<ref>).
If instead $v_p$ is unbounded, by (<ref>) for $y>y_N$ we have $\tilde{v}=V$. In this case, we choose
\begin{equation*}
v_{\infty}:= V.
\end{equation*}
By the definition of $V$ in (<ref>), we have that $v_{\infty}$ is also a supersolution to (<ref>).
We call $\gamma(x,y):=k^* {v}_{\infty}(x-x',y) - P_{\infty}(x-x',y)$.
Hence, $\gamma(x,y)\geq 0$ and
\begin{equation}\label{ch12330}
\gamma(x', 0)=k^* {v}_{\infty}(0, 0) - P_{\infty}(0,0)= \underset{n\to\infty}{\lim} \left( k^* \tilde{v}(x_n,y_n) - p(x_n, y_n) \right)=0.
\end{equation}
Notice then that, since (<ref>) holds, by the Fisher-KPP hypothesis (<ref>) on $f$ we get
\begin{equation*}
\frac{f(x,k^* {v}_{\infty} )}{k^* {v}_{\infty}} < \frac{f(x, {v}_{\infty} )}{ {v}_{\infty}}.
\end{equation*}
Using that, the fact that $k^* {v}_{\infty}(x-x',y)$ is a supersolution, and the fact that $P_{\infty}(x-x',y)$ is a solution, we obtain
\begin{equation}\label{ch12333}
-d \Delta \gamma > f(x, k^*{v}_{\infty}(x-x',y)) - f (x,P_{\infty}(x-x',y)).
\end{equation}
Since $f$ is Lipschitz in the second variable, uniformly with respect to the first one, there exists some function $b$ such that
\begin{equation*}
-d \Delta \gamma - b \gamma \geq 0.
\end{equation*}
Since $\gamma\geq 0$ and (<ref>) holds, by the strong maximum principle we have that $\gamma\equiv 0$. As a consequence, from (<ref>) we have
\begin{equation*}
f(x,k^* {v}_{\infty}) - f (x,P_{\infty}) <0.
\end{equation*}
But it also holds that $k^* {v}_{\infty} \equiv P_{\infty}$, so the left-hand side is $0$, which is absurd.
Having ruled out all the possible cases, we can conclude that there exists no bounded positive stationary solution $(q,p)$ to (<ref>).
At last, we are ready to prove the first part of Theorem <ref>. Define
$$V:=\max \left\{ M, \sup v_0, \frac{\mu}{\nu} \sup u_0 \right\} \quad \text{and} \quad U:= \frac{\nu}{\mu} V.$$
It is easy to check that $(U, V)$ is a supersolution for (<ref>). Then take $(\overline{U}, \overline{V})$ to be the solution to (<ref>) with initial datum $(U,V)$.
Notice that by the comparison principle
\begin{equation}\label{ch10013}
(0,0)\leq (u(t,x), v(t,x,y)) \leq (\overline{U}(t,x), \overline{V}(t,x,y)) \quad \text{for all} \ t>0, \ (x,y)\in\Omega.
\end{equation}
Since $(U, V)$ is a supersolution, we have that
\begin{equation}\label{ch12317}
(\overline{U}, \overline{V}) \leq (U,V) \quad \text{for all} \ t\geq 0.
\end{equation}
Consider $\tau>0$ and call $(\tilde{U}, \tilde{V})$ the solution starting at $t=\tau$ with initial datum $(U, V)$. By (<ref>) we have that $(\overline{U}(\tau,x), \overline{V}(\tau,x, y)) \leq (U,V)$, hence by the comparison principle (<ref>) we have $(\overline{U}, \overline{V}) \leq (\tilde{U}, \tilde{V})$. By the arbitrariness of $\tau$, we get that $(\overline{U}, \overline{V})$ is decreasing in $t$.
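Spelled out: by uniqueness, $(\tilde{U}(t), \tilde{V}(t)) = (\overline{U}(t-\tau), \overline{V}(t-\tau))$ for $t\geq\tau$, so the inequality $(\overline{U}, \overline{V}) \leq (\tilde{U}, \tilde{V})$ amounts to
\begin{equation*}
\overline{U}(t,x)\leq \overline{U}(t-\tau,x) \quad \text{and} \quad \overline{V}(t,x,y)\leq \overline{V}(t-\tau,x,y) \quad \text{for all } t\geq\tau,
\end{equation*}
which is precisely the claimed monotonicity in time.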
By Lemma <ref>, $(\overline{U}, \overline{V})$ converges locally uniformly to a stationary solution $(q,p)$. But by Lemma <ref>, the only stationary solution is $(0,0)$.
By that and (<ref>), we have that $(u(t,x), v(t,x,y))$ converges locally uniformly to $(0,0)$ as $t$ goes to infinity.
Moreover, since $(U,V)$ is constant in $x$, and (<ref>) is periodic in $x$, $(\overline{U}, \overline{V})$ is periodic in $x$.
Hence, the convergence is uniform in $x$.
Now suppose by contradiction that the convergence is not uniform in $y$: then there exist some $\varepsilon>0$, an increasing sequence of times $\{t_n\}_{n\in\N}$, and points $(x_n, y_n)\in\Omega$ such that
\begin{equation}\label{ch12327}
\overline{V}(t_n, x_n, y_n) >\varepsilon.
\end{equation}
Since $\overline{V}$ is periodic in $x$, without loss of generality we can suppose $x_n\in[0,\ell]$ and that up to a subsequence $\{x_n\}_{n\in\N}$ converges to some $x'\in[0,\ell]$. If $\{y_n\}_{n\in\N}$ were bounded, by (<ref>) the local convergence to $0$ would be contradicted; hence $y_n$ is unbounded.
Then, define the sequence of functions
\begin{equation*}
V_n(t,x,y)=\overline{V}(t, x+x_n, y+y_n).
\end{equation*}
By (<ref>), we have that
\begin{equation}\label{ch10109}
V_n(t_n,0,0)>\varepsilon \quad \text{for all} \ n\in\N.
\end{equation}
Also, since $V_n$ is bounded, by arguments similar to the ones used in Lemma <ref> and Lemma <ref>
one can prove that, up to a subsequence,
$\{V_n\}_{n\in\N}$ converges in $\mathcal{C}_{loc}^2(\R^2)$ to a function $\tilde{V}$ that solves
\begin{equation}\label{ch10108}
\partial_t \tilde{V} - d\Delta \tilde{V} = f(x+x', \tilde{V}).
\end{equation}
Also by (<ref>), we have that
\begin{equation}\label{ch10110}
\tilde{V}(t_n,0,0 )>\varepsilon \quad \text{for all} \ n\in\N.
\end{equation}
Recall that, since $\lambda_1(\Omega)\geq 0$, by Corollary <ref> and Theorem <ref> we have $\lambda_p(-\mathcal{L}, \R^2)\geq 0$. Then by Theorem <ref> every solution to (<ref>) converges uniformly to $0$. But this contradicts (<ref>); hence no such $\varepsilon>0$ can exist, and the convergence of $\overline{V}$ to $0$ is uniform in space.
As a consequence, the convergence of $(u(t,x), v(t,x,y))$ to $(0,0)$ is uniform in space.
CHAPTER: CIVIL WARS: A NEW LOTKA-VOLTERRA COMPETITIVE SYSTEM AND ANALYSIS OF WINNING STRATEGIES
We introduce a new model in population dynamics that describes
two species sharing the same environmental resources in a situation of open hostility.
Our basic assumption is that one of the populations deliberately seeks hostility through "targeted attacks". Hence, the
interaction among these populations is described not in terms
of random encounters but via the
strategic decisions of one population
that can attack the other according to different levels of aggressiveness.
One of the features that distinguishes this model from usual competitive systems is that it allows one of the populations to go extinct in finite time.
This leads to a non-variational model for the two populations at war, taking into
account structural parameters such as the relative fit of the two populations with
respect to the available resources and the effectiveness of the attack strikes of the
aggressive population.
The analysis that we perform is rigorous and focuses
on the dynamical properties of the system, by detecting
and describing all the possible equilibria and their basins of attraction.
Moreover, we will analyze the strategies that may lead to the victory of the aggressive
population, i.e. the choices of the aggressiveness parameter,
in dependence of the structural constants of the system and possibly varying in time
in order to optimize the efficacy of the attacks, which take to the extinction in finite time
of the defensive population.
The model that we present is flexible enough to also include commercial competition models of companies
using aggressive policies against their competitors (such as misleading advertising, or releasing computer
viruses to drive rival companies out of the market).
This chapter corresponds to the paper [3] in collaboration with Serena Dipierro, Luca Rossi and Enrico Valdinoci.
§ INTRODUCTION
Among the several models dealing with the
dynamics of biological systems, the case of populations engaging into a mutual conflict
seems to be unexplored.
This chapter aims at laying the foundations of a new model describing
two populations competing for the same resource with one aggressive population
which may attack the other:
concretely, one may think of
a situation in which
two populations live together in the same territory and share the same
environmental resources,
until one population decides to prevail and tries to kill the other.
We consider this situation as a “civil war”, since the two populations share
land and resources; the two populations may be equally fit to the environment
(and, in this sense, they are “indistinguishable”, up to the aggressive attitude of
one of the populations), or they can have a different compatibility to the resources
(in which case one may think that the conflict could be motivated by the different
accessibility to environmental resources).
Given the lack of reliable data related to civil wars, a solid
mathematical theory for this type of conflict can only rest on the deduction
of the model from first principles: we follow this approach to obtain
the description of the problem in terms of a system of two
ordinary differential equations, each describing the evolution in time
of the density
of one of the two populations.
The method of analysis that we adopt is a combination
of techniques from different fields, including ordinary differential equations,
dynamical systems and optimal control.
This viewpoint will allow us to rigorously investigate the model,
with a special focus on a number of mathematical features of
concrete interest, such as the possible extinction of one of the two populations
and the analysis of the strategies that lead to the victory of the aggressive population.
In particular, we will analyze the dynamics of the system,
characterizing the equilibria and their features (including possible basins of attraction)
in terms of the different parameters
of the model (such as relative fitness to the environment, aggressiveness
and effectiveness of strikes). Also, we will study the initial configurations which
may lead to the victory of the aggressive population, also taking into account
different possible strategies to achieve the victory: roughly speaking,
we suppose that the aggressive population may adjust the parameter
describing the aggressiveness in order to either dim or
exacerbate the conflict with the aim of destroying the second population
(of course, the war has a cost in terms of life
for both the populations,
hence the aggressive population must select the appropriate strategy in terms
of the structural parameters of the system). We will show that the initial data
allowing the victory of the aggressive population
do not exhaust the whole space; namely, there exist initial configurations
for which the aggressive population cannot make the other extinct, regardless of the strategy adopted during the conflict.
Furthermore, for identical populations with the same fit to the environment,
constant strategies suffice for the aggressive population to possibly
achieve the victory: namely, if an initial configuration admits a piecewise continuous in time
strategy that leads to the victory of the aggressive population,
then it also admits a constant in time strategy that reaches the same objective
(and of course, for the aggressive population, the possibility of focusing
only on constant strategies would entail concrete practical advantages).
Conversely, for populations with different fit to the environment,
the constant strategies do not exhaust all the winning strategies:
that is, in this case, there are initial conditions which allow
the victory of the aggressive population only under the exploitation
of a strategy that is not constant in time.
In any case, we will also prove that strategies with at most one jump
discontinuity are sufficient for the aggressive population:
namely, independently from the relative fit to the environment,
if an initial condition allows the aggressive population to reach the victory
through a piecewise continuous in time
strategy, then the same goal can be reached using a “bang-bang”
strategy with at most one jump.
We will also discuss the winning strategies that minimize
the duration of the war: in this case, we will show that
jump discontinuous strategies may not be sufficient, and interpolating
arcs have to be taken into account.
We now describe in further detail our model of conflict between the
two populations and the attack strategies pursued by the aggressive population.
Our idea is to modify the Lotka-Volterra competitive system for two populations with
density $u$ and $v$,
adding to the usual competition for resources the fact that both populations suffer some losses as an outcome of the attacks.
The key point in our analysis
is that the clashes do not depend on the chance of meeting of the two populations, given by the quantity $uv$, as it happens in many other works in the literature (starting from the publications of Lotka and Volterra, [78] and [121]), but they are sought by the first population and
depend only on the size $u$ of the first population and on its level of aggressiveness $a$.
The resulting model is
\begin{equation}\label{ch2model}
\left\{
\begin{array}{llr}
\dot{u}&= u(1-u-v) - acu, & {\mbox{ for }}t>0,\\
\dot{v}&= \rho v(1-u-v) -au, & {\mbox{ for }}t>0,
\end{array}
\right.
\end{equation}
where $a$, $c$ and $\rho$ are nonnegative real numbers. Here, the coefficient $\rho$ models the fitness of the second population with respect to the first one when resources are abundant for both; it is linked with the exponential growth rates of the two species. The parameter $c$ stands for the ratio of damages endured to damages inflicted by the first population.
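As a purely numerical illustration (not part of the analysis carried out in this chapter, and with parameter values chosen only for exposition), a forward Euler integration of the system already exhibits the finite-time extinction of $v$:

```python
def simulate_cw(u0, v0, a, c, rho, dt=1e-3, t_max=50.0):
    # Forward-Euler sketch of the civil war system
    #   u' = u(1 - u - v) - a*c*u
    #   v' = rho*v*(1 - u - v) - a*u
    # Integration stops at the first time v <= 0 (victory of u)
    # or at t_max (no extinction observed within the time horizon).
    u, v, t = u0, v0, 0.0
    while t < t_max and v > 0.0:
        du = u * (1.0 - u - v) - a * c * u
        dv = rho * v * (1.0 - u - v) - a * u
        u, v, t = u + dt * du, v + dt * dv, t + dt
    return t, u, v

# Illustrative run: an aggressive strategy (a = 2) with cheap attacks
# (c = 0.1) and equal fitness (rho = 1) wipes out v in finite time
# while u survives.
t, u, v = simulate_cw(u0=0.5, v0=0.5, a=2.0, c=0.1, rho=1.0)
```

The extinction time returned by this sketch is only a crude approximation of the stopping time introduced below; no claim of quantitative accuracy is made.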
Deeper justifications to the model (<ref>) will be given in
Subsection <ref>.
Notice that the size of the second population $v$ may become negative in finite time while the first population is still alive. The situation where $v=0$ and $u>0$ represents the extinction of the second population and the victory of the first one.
To describe our results, for communication convenience (and in spite of our
personal, fully pacifist beliefs)
we take the perspective of the first population,
that is, the aggressive one; the objective
of this population is to win the war, and, to achieve that, it can influence the system by tuning the parameter $a$.
From now on, we may refer to the parameter $a$ as the strategy; it may also depend on time, and we will say that it is winning if it leads to the victory of the first population.
The main problems that
we deal with in this chapter are:
* The characterization of the initial conditions for which there exists a winning strategy.
* The success of the constant strategies, compared to all possible strategies.
* The construction of a winning strategy for a given initial datum.
* The existence of a single winning strategy independently of the initial datum.
We discuss all these topics in Subsection <ref>, presenting
concrete answers to each of these problems.
Also, since to our knowledge this is the first time that system (<ref>) is considered,
in Subsections <ref> and <ref> we will discuss the dynamics and some interesting results about the dependence of the basins of attraction on the other parameters.
It would also be extremely interesting to add the space component to our model, by considering a system of reaction-diffusion equations. This will be the subject of a further work.
§.§ Motivations and derivation of the model
The classic Lotka-Volterra equations were first introduced for modelling population dynamics between animals [121] and then used to model other phenomena involving competition, for example in technology substitution [85]. The competitive Lotka-Volterra system concerns the sizes $u_1(t)$ and $u_2(t)$ of two species competing for the same resources. The system that
the couple $(u_1(t), u_2(t))$ solves is
\begin{equation}\label{ch2lv}
\begin{cases}
\dot{u}_1=r_1 u_1\left(\sigma-\displaystyle
\frac{u_1+\alpha_{12} u_2}{k_1} \right), & t>0,\\
\dot{u}_2
=r_2 u_2\left(\sigma- \displaystyle\frac{u_2+\alpha_{21} u_1}{k_2} \right), & t>0,
\end{cases}
\end{equation}
where $r_1$, $r_2$, $\sigma$, $\alpha_{12}$, $\alpha_{21}$, $k_1$ and $k_2$ are nonnegative real numbers.
Here, the coefficients $\alpha_{12}$ and $\alpha_{21}$ represent the competition between individuals of different species, and indeed they
appear multiplied by the term $u_1 u_2$, which represents a probability of meeting.
The coefficient $r_i$ is the exponential growth rate of the $i-$th population, that is, the reproduction rate that is observed when the resources are abundant.
The parameters $k_i$ are called carrying capacities and represent the number of individuals of the $i-$th population that can be fed with the resources of the territory, which are quantified by $\sigma$.
It is however usual to rescale the system in order to reduce the number of parameters.
In general, $u_1$ and $u_2$ are rescaled so that they vary in the interval $[0,1]$,
thus describing densities of populations.
The behavior of the system depends substantially on the values of $\alpha_{12}$ and $\alpha_{21}$ with respect to the threshold given by the value $1$, see e.g. [12]: if $\alpha_{12}<1<\alpha_{21}$, then the first species $u_1$
has an advantage over the second one $u_2$
and will eventually prevail; if $\alpha_{12}$ and $\alpha_{21}$ are both strictly above the threshold, then the first population that penetrates the environment (that is, the one that has a greater size at the initial time) will persist while the other will go extinct.
Some modifications of the Lotka-Volterra model have been made in stochastic analysis by adding a noise term of the form $-f(t)u_i$ to the $i-$th equation, finding interesting phase transition phenomena,
see e.g. [62].
The ODE system
in (<ref>) is of course the cornerstone to study the case of two competitive populations that diffuse in space. Many different types of diffusion have been compared and one can find a huge literature on the topic, see [87, 37, 82] for some examples and [86] for a more general overview.
We point out that other dynamic systems presenting finite time extinction of one or more species have been generalised for heterogeneous environments, see for example the model in [53] for the predator-prey behaviour of cats and birds, that has been thereafter widely studied.
In this chapter,
we will focus not only on basic competition for resources, but also on situations of open hostility.
In social sciences, war models are in general little studied; indeed, the collection of data up to modern times is hard due to the lack of reliable sources.
Also, there is still much discussion about what factors are involved and how to quantify them: in general, the outcome of a war does not only depend on the availability of resources, but also on more subtle factors such as the commitment of the population and the knowledge of the battlefield,
see e.g. [114].
Instead, the causes of war were investigated by the statistician L.F. Richardson, who proposed some models for predicting the beginning of a conflict,
see [102].
In addition to human populations, hostile behavior between groups of the same species has been observed in chimpanzees. Other species with complex social behaviors are able to coordinate attacks against groups of different species: ants versus termites, agoutis versus snakes, small birds versus hawks and owls, see e.g. [116].
The model that
we present here
is clearly a simplification of reality. Nevertheless,
we tried to capture some important features of conflicts
between rational and strategic populations,
introducing in the
mathematical modeling the new idea that a conflict may be sought
and the parameters that influence its development may be
conveniently adjusted.
Specifically, in our model, the
interactions between populations are not merely driven
by chance and the strategic decisions of the population
play a crucial role in the final outcome of the conflict, and
we consider this perspective as an interesting novelty in the mathematical
description of competitive environments.
At a technical level, our aim is to introduce a model for conflict between
two populations $u$ and $v$,
starting from the model when the two populations compete for food and modifying it to add the information about the clashes.
We imagine that
each individual of the first population $u$ decides to attack an individual of the second population with some probability $a$ in a given period of time. We assume that hostilities take the form of “duels”, that is, one-to-one fights.
In each duel, the individual of the first population has a probability $\zeta_u$ of being killed and a probability $\zeta_v$ of killing his or her opponent;
notice that in some duel the two fighters might be both killed.
Thus, after one time-period, the casualties for the first and second populations
are $a\zeta_u u$ and $a\zeta_v u$, respectively.
The same conclusions are found if we imagine that the first population forms an army to attack the second, which tries to resist by recruiting an army of proportional size. At the end of each battle, a ratio of the total soldiers is dead, and this is again of the form $a\zeta_u u$ for the first population and $a\zeta_v u$ for the second one.
Another effect that
we take into account is the drop in the fertility of the population during wars.
This seems due to the fact that families suffer some income loss during war time, because of a lowering of the average productivity and lacking salaries only partially compensated by the state; another reason possibly discouraging couples to have children is the increased chance of death of the parents during war.
As pointed out in [117], in some cases the number of lost births during wars is comparable to the number of casualties.
However, it is not reasonable to think that this information should be included in the exponential growth rates $r_u$ and $r_v$, because the fertility drop really depends on the intensity of the war. For this reason,
we introduce the parameters $c_u\geq 0$ and $c_v\geq 0$ that are to be multiplied by $a u$ for both populations.
Moreover, for simplicity,
we also suppose that the clashes take place apart from inhabited zone, without having influence on the harvesting of resources.
Now we derive the system of equations from a microscopic analysis.
As in the Lotka-Volterra model, it is assumed that the change of the size of the population in an interval of time $\Delta t$ is proportional to the size of the population $u(t)$, that is
\begin{equation*}
u(t+\Delta t)-u(t) \approx u(t) f(u,v)
\end{equation*}
for some appropriate function $f(u,v)$. In particular, $f(u,v)$ should depend on resources that are available and reachable for the population.
The maximum number of individuals that can be fed with all the resources of the environment is $k$; taking into account all the individuals of the two populations, the available resources are
\begin{equation*}
\end{equation*}
Notice that we suppose here that each individual consumes the same amount of resources, independently of its belonging. In our
model, this assumption
is reasonable since
all the individuals belong to the same species.
Also, the competition for the resources depends only on the number of individuals, independently on their identity.
Furthermore, our model is sufficiently
general to take into account the fact that the growth rate of the populations can be possibly different. In practice,
this possible difference
could be the outcome of
a cultural distinction, or it may be also due to some slight genetic differentiation, as it happened for Homo Sapiens and Neanderthal,
see [51].
Let us call $r_u$ and $r_v$ the fertility of the first and second populations respectively. The contribution
to the population growth rate is given by
$$ f(u,v) := r_u \left(1-\frac{u+v}{k} \right),$$
and these effects
can be incorporated into a typical Lotka-Volterra system.
Instead, in our model, we also take into
account the possible death rate due to casualties.
In this way,
we obtain a term such as $-a\zeta_u$ to be added to $f(u,v)$.
The fertility losses give another term $-ac_u$ for the first population.
We also perform the same analysis for the second population, with the appropriate coefficients.
With these considerations, the system of
the equations that we obtain is
\begin{equation}\label{ch2model1}
\left\{
\begin{array}{llr}
\dot{u}&= r_u u\left(1- \dfrac{u+v}{k} \right) - a(c_u + \zeta_u)u, & t>0,\\
\dot{v}&=r_v v\left(1- \dfrac{v+u}{k} \right) - a(c_v + \zeta_v)u, & t>0.
\end{array}
\right.
\end{equation}
As usual in these kinds of models, we can rescale the variables and the coefficients in order to
find an equivalent model with fewer parameters. Hence, we perform the changes of variables
\begin{equation}\label{ch2changeofvar}\begin{split}
&\tilde{u}(\tilde t)= \dfrac{u(t)}{k}, \quad \tilde{v}(\tilde t)=\dfrac{v(t)}{k},
\quad {\mbox{ where }}\tilde{t}= r_u t,\\
\tilde{a}= \dfrac{a(c_v+\zeta_v)}{r_u}, \quad \tilde{c}= \dfrac{c_u+\zeta_u}{c_v+\zeta_v} \quad {\mbox{ and }}\quad \rho= \frac{r_v }{r_u },
\end{split}
\end{equation}
and, dropping the tildes for the sake of readability, we finally get the system
in (<ref>).
We will also refer to it as the civil war model (CW).
From the change of variables in (<ref>), we notice in particular that $a$ may now take values in $[0,+\infty)$.
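For completeness, let us verify the rescaling on the first equation (the second one is checked analogously): with the change of variables above,
\begin{equation*}
\frac{d\tilde{u}}{d\tilde{t}} = \frac{1}{k r_u}\,\frac{du}{dt} = \frac{1}{k r_u}\left[ r_u u\left(1-\frac{u+v}{k}\right) - a(c_u+\zeta_u)u\right] = \tilde{u}\left(1-\tilde{u}-\tilde{v}\right) - \frac{a(c_u+\zeta_u)}{r_u}\,\tilde{u},
\end{equation*}
and
\begin{equation*}
\frac{a(c_u+\zeta_u)}{r_u} = \frac{a(c_v+\zeta_v)}{r_u}\cdot\frac{c_u+\zeta_u}{c_v+\zeta_v} = \tilde{a}\,\tilde{c},
\end{equation*}
which is exactly the first equation of the civil war model.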
The competitive Lotka-Volterra system is already used to study some market phenomena as technology substitution, see e.g. [85, 24, 122], and our model aims at adding new features to such models.
Concretely, in the technological competition model, one can think that $u$ and $v$
represent the capitals of two computer companies.
In this setting, to start with, one can suppose that the first company produces a very successful product, say computers with a certain operating system, in an infinite market, reinvesting a proportion $r_u$ of the profits into the production of additional items, which are purchased by the market, and so on: in this way, one obtains a linear equation of the type $\dot u=r_u u$, with exponentially growing solutions.
The case in which the market is not infinite, but reaches a saturation threshold $k$, would correspond to the equation
$$\dot u=r_u u\left(1-\frac{u}{k}\right).$$
Then, when a second computer company comes into the business, selling computers with a different operating system to the same market, one obtains the competitive system of equations
$$ \begin{cases}
\dot u=r_u u\displaystyle\left(1-\frac{u+v}{k}\right),\\
\dot v=r_v v\displaystyle\left(1-\frac{v+u}{k}\right).
\end{cases}$$
At this stage, the first company may decide to use an “aggressive” strategy consisting in
spreading a virus attacking the other company's operating system,
with the aim of driving the other company out of the market (once the competition of the second company is removed, the first company can then exploit the market in a monopolistic regime).
To model this strategy, one can suppose that the first company
invests a proportion of its capital in the design and
diffusion of the virus, according to a quantifying parameter $a_u\ge0$, thus producing the equation
\begin{equation}\label{ch2MAR1} \dot u=r_u u\left(1-\frac{u+v}{k}\right)-a_u u.\end{equation}
This directly impacts the capital
of the second company in proportion to the diffusion of the virus,
since the second company has to spend money to design and release antivirus software,
as well as to repay unsatisfied customers,
hence resulting in a second equation of the form
\begin{equation}\label{ch2MAR2} \dot v=r_v v\left(1-\frac{v+u}{k}\right)-a_v u.\end{equation}
The case $a_u=a_v$ would correspond to an “even” effect in which the cost of producing the virus is in balance with
the damages that it causes.
It is also realistic to take into account the case $a_u<a_v$
(e.g., the first company manages to
produce and diffuse the virus at low cost,
with high impact on the functionality of the operating
system of the second company) as well as the case $a_u>a_v$ (e.g., the cost of producing and diffusing
the virus is high with respect to the damages caused).
We remark that equations (<ref>) and (<ref>) can be set into the form (<ref>), thus showing the interesting versatility of our model also
in financial mathematics.
§.§ Some notation and basic results on the dynamics of system (<ref>)
We denote by $(u(t), v(t))$ a solution of (<ref>) starting from a point $(u(0),v(0))\in [0,1] \times [0,1]$.
We will also refer to the orbit of $(u(0), v(0))$ as the collection of
points $(u(t), v(t))$ for $t\in \R$, that is, for both positive and negative times, while the trajectory is the collection of points $(u(t), v(t))$ for $t\geq0$.
As already mentioned in the discussion below formula (<ref>),
$v$ can reach the value $0$ and even negative values in finite time. However, we will suppose that the dynamics stops when the value $v=0$ is reached for the first time.
At this point, the conflict ends with the victory of the first population $u$, that can continue its evolution with a classical Lotka-Volterra equation of the form
\begin{equation*}
\dot{u}= u (1- u)
\end{equation*}
and that would certainly fall into the attractive equilibrium $u=1$.
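Indeed, this logistic equation can be integrated explicitly (a standard computation): writing $T$ for the first time at which $v$ vanishes and assuming $u(T)>0$, for $t\geq T$ we have
\begin{equation*}
u(t) = \frac{u(T)\,e^{t-T}}{1+u(T)\left(e^{t-T}-1\right)} \;\longrightarrow\; 1 \quad \text{as } t\to+\infty,
\end{equation*}
confirming the convergence to the equilibrium $u=1$.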
The only other possibility is that the solutions are constrained in the set $[0,1]\times(0,1]$.
In order to state our first result on the dynamics of the system (<ref>),
we first observe that, in a real-world situation, the value of $a$ would probably be non-constant and discontinuous, so we allow this coefficient to take values in the class $\mathcal{A}$
defined as follows:
\begin{equation}\begin{split}\label{ch2DEFA}
\mathcal{A}&\; :=
\big\{a: [0, +\infty) \to [0, +\infty) {\mbox{ s.t.~$a$ is continuous}}\\
&\qquad \qquad {\mbox{except at most at a finite number of points}}\big\}.\end{split}\end{equation}
A solution related to a strategy $a(t)\in \mathcal{A}$ is a pair $(u(t), v(t)) \in C_0 (0,+\infty)\times C_0 (0,+\infty)$ which is $C^1$ outside the discontinuity points of $a(t)$ and
solves system (<ref>).
Moreover, once the initial datum is imposed, the solution is assumed to be
continuous at $t=0$.
In this setting, we establish the existence of the solutions of problem (<ref>)
and we classify their behavior with respect to the possible exit from the domain $[0,1]\times[0,1]$:
Let $a(t)\in\mathcal{A}$.
Given $(u(0), v(0))\in [0,1] \times [0,1]$, there exists
a solution $(u(t),v(t))$ with $a=a(t)$
of system (<ref>) starting at $(u(0), v(0))$.
Furthermore, one of the two following situations occurs:
(1) The solution $(u(t), v(t))$ issued from $(u(0), v(0))$ belongs to $ [0,1]\times (0,1]$ for all $t\geq 0$.
(2) There exists $T\geq0$ such that the solution $(u(t), v(t))$ issued from $(u(0), v(0))$ exists and is unique for all $t\leq T$, with $v(T)=0$ and $u(T)>0$.
As a consequence of Proposition <ref>, we can define the
stopping time of the solution $(u(t), v(t))$ as
\begin{equation}\label{ch2def:T_s}
T_s (u(0), v(0)) =
\left\{
\begin{array}{ll}
+\infty & \text{if situation (1) occurs}, \\
T & \text{if situation (2) occurs}.
\end{array}
\right.
\end{equation}
From now on, we will implicitly consider solutions $(u(t),v(t))$ only for $t\leq T_s(u(0), v(0))$.
Now we are going to analyze the dynamics of (<ref>) with a particular focus on possible strategies. To do this, we define the basins of attraction.
The first one is the basin of attraction of the point $(0,1)$, that is
\begin{equation}\label{ch2DEFB}\begin{split}
\mathcal{B}&\;:= \Big\{ (u(0),v(0))\in [0,1]\times[0,1] \;{\mbox{ s.t. }}\;\\
&\qquad\qquad T_s (u(0), v(0)) = +\infty, \ (u(t),v(t)) \overset{t\to\infty}{\longrightarrow} (0,1) \Big\},
\end{split}\end{equation}
namely the set of the initial points for which the first population goes extinct (in infinite time) and the second one survives.
The other one is
\begin{equation}\label{ch2DEFE}
\mathcal{E}:= \left\{ (u(0),v(0))\in ([0,1]\times[0,1])\setminus(0,0) \;{\mbox{ s.t. }}\; T_s(u(0),v(0))< + \infty \right\},
\end{equation}
the set of initial points for which we have the victory of the first population and the extinction of the second one.
Of course, the sets $\mathcal{B}$ and $\mathcal{E}$ depend on the parameters $a$, $c$, and $\rho$; we will express this dependence by writing $\mathcal{B}(a,c,\rho)$
and $\mathcal{E}(a,c,\rho)$ when it is needed, and omit it otherwise for the sake of readability. The dependence on parameters will be carefully studied
in Subsection <ref>.
§.§ Dynamics of system (<ref>) for constant strategies
The first step towards the understanding of the dynamics of the system
in (<ref>) is
is to analyze the behavior of the system for constant coefficients.
To this end, we introduce some notation.
Following the terminology on pages 9-10 in [123],
we say that an equilibrium point (or fixed point) of the dynamics
is a (hyperbolic) sink
if all the eigenvalues of the linearized map have strictly
negative real parts, a (hyperbolic) source
if all the eigenvalues of the linearized map have strictly
positive real parts, and a (hyperbolic) saddle
if some of the eigenvalues of the linearized map have strictly
positive real parts
and some have negative real parts
(since in this chapter we work in dimension $2$,
saddles correspond to linearized maps with one
eigenvalue with
strictly positive real part
and one eigenvalue with
strictly negative real part).
We also recall that
sinks are asymptotically stable (and sources are
asymptotically stable for the reversed-time dynamics), see e.g. Theorem 1.1.1
in [123].
With this terminology, we state the following theorem:
For $a > 0$ and ${\rho}> 0$ the system (<ref>) has the following features:
(i) When $0<ac<1$, the system has 3 equilibria: $(0,0)$ is a source, $(0,1)$ is
a sink, and
\begin{equation}\label{ch2usvs}
(u_s, v_s):= \left( \frac{1-ac}{1+{\rho}c} {\rho}c, \frac{1-ac}{1+{\rho}c} \right) \in (0,1)\times (0,1)
\end{equation}
is a saddle.
(ii) When $ac>1$, the system has 2 equilibria: $(0,1)$ is a sink and $(0,0)$ is a saddle.
(iii) When $ac=1$, the system has 2 equilibria: $(0,1)$ is a sink and the linearization at $(0,0)$
has one strictly positive eigenvalue and one null eigenvalue.
(iv) We have
\begin{equation} \label{ch2fml:division}
[0,1]\times [0,1] = \mathcal{B} \cup \mathcal{E} \cup \mathcal{M}
\end{equation}
where $\mathcal{B}~$ and $\mathcal{E}$ are defined in (<ref>)
and (<ref>), respectively, and $\mathcal{M}$ is a smooth curve.
(v) The trajectories starting in $\mathcal{M}$ tend to $(u_s,v_s)$ if $0<ac<1$,
and to $(0,0)$ if $ac\ge1$ as $t$ goes to $+\infty$.
More precisely,
the curve $\mathcal{M}$ in Theorem <ref> is the stable manifold of the saddle
point $(u_s,v_s)$ when $0<ac<1$, and of the
saddle point $(0,0)$ when $ac>1$. The case $ac=1$ needs a special treatment,
due to the degeneracy of one eigenvalue: in this case the curve $\mathcal{M}$
corresponds to the center manifold of $(0,0)$, and an ad-hoc argument will
be employed
to show that also in this degenerate case orbits that start in $\mathcal{M}$
are asymptotic in the future to $(0,0)$.
As a matter of fact, $\mathcal{M}$
acts as a dividing wall between the two basins of attraction, as described in (iv)
of Theorem <ref> and in the forthcoming Proposition <ref>.
Moreover, in the forthcoming
Propositions <ref>
and <ref>
we will show that $\mathcal{M}$ can be written as the graph of a function. This is particularly useful because, by studying the properties of this function, we gain relevant pieces of information on the sets $\mathcal{B}$ and $\mathcal{E}$
in (<ref>) and (<ref>).
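To make the classification in Theorem <ref> concrete, here is a minimal Python sketch (all helper names are ours) that classifies the equilibria numerically via the trace and determinant of the Jacobian. It assumes that the system in (<ref>) reads $\dot{u}=u(1-u-v)-acu$, $\dot{v}=\rho v(1-u-v)-au$, as can be recovered from the Jacobian computed in Section <ref>.

```python
def jacobian(u, v, a, c, rho):
    # Jacobian of the assumed field (u*(1-u-v) - a*c*u, rho*v*(1-u-v) - a*u)
    return ((1 - 2*u - v - a*c, -u),
            (-rho*v - a, rho*(1 - u - 2*v)))

def classify(u, v, a, c, rho):
    # Trace/determinant test for a hyperbolic equilibrium of a planar system:
    # det < 0 gives a saddle; otherwise both real parts share the sign of the trace.
    (j11, j12), (j21, j22) = jacobian(u, v, a, c, rho)
    tr, det = j11 + j22, j11*j22 - j12*j21
    if det < 0:
        return "saddle"
    return "sink" if tr < 0 else "source"

a, c, rho = 0.8, 0.5, 2.0          # here a*c = 0.4 < 1: three equilibria
us = rho*c*(1 - a*c)/(1 + rho*c)   # the saddle (u_s, v_s) = (0.3, 0.3)
vs = (1 - a*c)/(1 + rho*c)
print(classify(0, 0, a, c, rho), classify(0, 1, a, c, rho), classify(us, vs, a, c, rho))
```

For these parameter values the script prints `source sink saddle`, matching (i) of Theorem <ref>.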
We point out that in Theorem <ref>
we find that the set of initial data $[0,1]\times[0,1]$ splits into three parts:
the set $\mathcal{E}$, given in (<ref>), made
of points going to the extinction of the second population in finite time; the set $\mathcal{B}$, given in (<ref>), which is the
basin of attraction of the equilibrium $(0,1)$;
and the set $\mathcal{M}$, which is
a manifold of dimension $1$ that separates $\mathcal{B}$ from $\mathcal{E}$.
In particular,
Theorem <ref> shows that, also for our model, the Gause principle of exclusion is respected; that is, in general, two competing populations cannot coexist in the same territory, see e.g. [47].
One peculiar feature of our system is that, if the aggressiveness is too strong, the equilibrium $(0,0)$ changes its “stability” properties, passing from a source (as in (i) of
Theorem <ref>)
to a saddle point (as in (ii) of
Theorem <ref>). This shows that the war may have self-destructive outcomes; it is therefore important for the first population to analyze the situation in order to choose a proper level of aggressiveness.
Figure <ref> shows one example of dynamics for each case.
The figures show a phase portrait for the indicated values of the coefficients: $a=0.8$, $c=0.5$, $\rho=2$ (left) and $a=0.8$, $c=3$, $\rho=2$ (right). In blue, the orbits of the points; the red dots represent the equilibria.
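The qualitative behavior in the phase portraits can be reproduced (without plotting) by direct numerical integration. The sketch below, which again assumes the vector field $\dot{u}=u(1-u-v-ac)$, $\dot{v}=\rho v(1-u-v)-au$ and uses thresholds and names of our own choosing, integrates with a classical Runge–Kutta step and reports whether an initial datum belongs to $\mathcal{E}$ (the second population goes extinct in finite time) or to $\mathcal{B}$ (the trajectory settles near the sink $(0,1)$).

```python
def field(u, v, a, c, rho):
    # assumed vector field: du/dt = u(1-u-v) - a*c*u, dv/dt = rho*v*(1-u-v) - a*u
    return u*(1 - u - v - a*c), rho*v*(1 - u - v) - a*u

def basin(u, v, a, c, rho, dt=0.01, T=200.0):
    # Classical RK4 integration; "E" if v reaches 0 in finite time,
    # "B" if the trajectory settles near the sink (0, 1).
    t = 0.0
    while t < T:
        k1u, k1v = field(u, v, a, c, rho)
        k2u, k2v = field(u + dt*k1u/2, v + dt*k1v/2, a, c, rho)
        k3u, k3v = field(u + dt*k2u/2, v + dt*k2v/2, a, c, rho)
        k4u, k4v = field(u + dt*k3u, v + dt*k3v, a, c, rho)
        u += dt*(k1u + 2*k2u + 2*k3u + k4u)/6
        v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += dt
        if v <= 0:
            return "E"                      # second population goes extinct
        if u < 1e-6 and v > 1 - 1e-3:
            return "B"                      # attracted by the sink (0, 1)
    return "undecided"

print(basin(0.5, 0.05, 0.8, 0.5, 2.0))      # starts in A_3: extinction
print(basin(0.05, 0.9, 0.8, 0.5, 2.0))      # starts in A_1: (0, 1) wins
```

The two sample points lie in the regions $\mathcal{A}_3$ and $\mathcal{A}_1$ of the forthcoming Corollary <ref>, so the expected answers are `E` and `B`, respectively.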
§.§ Dynamics of system (<ref>) for variable strategies
and optimal strategies for the first population
We now deal with the problem of choosing the strategy $a$ such that the first population wins; this is a problem of target reachability for a control-affine system. As we will see, the problem is not controllable, meaning that, starting from a given initial point, it is not always possible to reach a given target.
We now introduce some terminology that we will use throughout the chapter.
Recalling (<ref>),
for any $\mathcal{T}\subseteq \mathcal{A}$, we set
\begin{equation}\label{ch2DEFNU}
\mathcal{V}_{\mathcal{T}}:= \underset{a(\cdot)\in \mathcal{T}}{\bigcup} \mathcal{E}(a(\cdot)),
\end{equation}
where $\mathcal{E}(a(\cdot))$ denotes the set of initial data $(u_0,v_0)$
such that $T_s(u_0,v_0)< +\infty$, when the coefficient $a$ in (<ref>) is replaced by the function $a(t)$.
Namely, $\mathcal{V}_{\mathcal{T}}$ represents the set of initial conditions for which $u$ is able to win by choosing a suitable strategy in $\mathcal{T}$; we call $\mathcal{V}_{\mathcal{T}}$ the victory set with admissible strategies in $\mathcal{T}$.
We also say that $a(\cdot)$ is a winning strategy for the point $(u_0,v_0)$
if $(u_0,v_0)\in \mathcal{E}(a(\cdot) )$.
Moreover, we set
\begin{equation}\label{ch2u0v0}
(u_s^0, v_s^0):= \left(\frac{\rho c}{1+\rho c}, \frac{1}{1+\rho c}\right).
\end{equation}
Notice that $(u_s^0, v_s^0)$ is the limit point as $a$ tends to $0$ of the family of saddle points $\{(u_s^a, v_s^a)\}_{a>0}$
defined in (<ref>).
With this notation,
the first question that we address is for which initial configurations it is possible for the population $u$
to have a winning strategy, that is, to characterize the victory set. For this, we allow the strategy to take all the values in $[0, +\infty)$.
In this setting, we have the following result:
(i) For $\rho=1$, we have that
\begin{equation}\label{ch2Vbound1}\begin{split}
\mathcal{V}_{\mathcal{A}} = \,&\Big\{ (u,v)\in[0,1] \times [0,1] \;
{\mbox{ s.t. }}\; v-\frac{u}{c}<0 \; {\mbox{ if }} u\in[0,c]\\
&\qquad\qquad\qquad {\mbox{ and }}\; v\le1 \; {\mbox{ if }} u\in(c,1]\Big\},
\end{split}\end{equation}
with the convention that the last line in (<ref>) is not present if $c\ge1$.
(ii) For $\rho<1$, we have that
\begin{equation}\label{ch2bound:rho<1}
\begin{split}
\mathcal{V}_{\mathcal{A}} &\;= \Bigg\{ (u,v)\in[0,1] \times [0,1] \;{\mbox{ s.t. }}\;
v< \gamma_0(u) \ \text{if} \ u\in [0, u_s^0], \\
v< \frac{u}{c} + \frac{1-\rho}{1+\rho c} \ \text{if} \ u\in \left[u_s^0,
\frac{\rho c(c+1)}{1+\rho c}\right]\\
{\mbox{and }}\; v\le1\ \text{if} \ u\in \left(
\frac{\rho c(c+1)}{1+\rho c},1\right]
\Bigg\},
\end{split}
\end{equation}
where
\begin{equation*}
\gamma_0(u):= \frac{u^{\rho}}{\rho c(u_s^0)^{\rho-1}},
\end{equation*}
and we use the convention that the last line
in (<ref>) is not present if $ \frac{\rho c(c+1)}{1+\rho c}\ge1$.
(iii) For $\rho>1$, we have that
\begin{equation}\label{ch2bound:rho>1}
\begin{split}
\mathcal{V}_{\mathcal{A}} &\;= \Bigg\{ (u,v)\in[0,1] \times [0,1]\;
{\mbox{ s.t. }}\; v< \frac{u}{c} \ \text{if} \ u\in [0, u_{\infty}],\\&\qquad
\qquad\qquad\qquad
v< \zeta(u) \ \text{if} \ u\in\left(u_{\infty}, \frac{c}{(c+1)^{\frac{\rho-1}\rho}}\right] \\&\qquad
\qquad\qquad\qquad
{\mbox{and }}\; v\le 1
\ \text{if} \ u\in\left(\frac{c}{(c+1)^{\frac{\rho-1}\rho}},1\right]
\Bigg\},
\end{split}
\end{equation}
where
\begin{equation}\label{ch2ZETADEF}
u_{\infty}:= \frac{c}{c+1}
\quad {\mbox{ and }}\quad \zeta (u):= \frac{u^{\rho}}{c \, u_{\infty}^{\rho-1}},
\end{equation}
and we use the convention that the last line
in (<ref>) is not present if $ \frac{c}{(c+1)^{\frac{\rho-1}\rho}}\ge1$.
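Since the boundary of $\mathcal{V}_{\mathcal{A}}$ is assembled from several pieces, a natural sanity check is that consecutive pieces match at the junction. The sketch below (function names are ours) evaluates, for a sample choice of parameters with $\rho<1$, the first piece $\gamma_0$ and the affine piece in (<ref>), and verifies that both equal $v_s^0=1/(1+\rho c)$ at $u=u_s^0$.

```python
rho, c = 0.5, 4.0                 # an illustrative example with rho < 1
us0 = rho*c/(1 + rho*c)           # abscissa of the limit saddle (u_s^0, v_s^0)
vs0 = 1/(1 + rho*c)

def gamma0(u):
    # first piece of the boundary of V_A for rho < 1
    return u**rho / (rho*c*us0**(rho - 1))

def affine(u):
    # second piece of the boundary: v = u/c + (1-rho)/(1+rho*c)
    return u/c + (1 - rho)/(1 + rho*c)

# the two pieces glue continuously at u = u_s^0, where both equal v_s^0
print(gamma0(us0), affine(us0), vs0)
```

Indeed, $\gamma_0(u_s^0)=u_s^0/(\rho c)=1/(1+\rho c)$ and $u_s^0/c+(1-\rho)/(1+\rho c)=(\rho+1-\rho)/(1+\rho c)$, so both pieces pass through $(u_s^0,v_s^0)$.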
In practice,
constant strategies would certainly be easier to implement, and
it is therefore natural to investigate whether or not
it suffices to restrict to constant strategies
without altering the possibility of victory.
The next result addresses this problem by showing that when $\rho=1$
constant strategies are as good as all strategies,
but when $\rho\ne 1$ constant strategies do not
exhaust all possible winning strategies:
Let $\mathcal{K}\subset \mathcal{A}$ be the set of constant functions. Then the following holds:
(i) For $\rho= 1$, we have that $ \mathcal{V}_{\mathcal{A}}=\mathcal{V}_{\mathcal{K}}=\mathcal{E}(a)$ for all $a>0$;
(ii) For $\rho\neq 1$, we have that $\mathcal{V}_{\mathcal{K}} \subsetneq \mathcal{V}_{\mathcal{A}}$.
The result of Theorem <ref>, part (i),
reveals a special rigidity of the case $\rho=1$
in which, no matter which strategy $u$ chooses, the victory depends only on the initial conditions, but it is independent of the strategy $a(t)$.
Instead, as stated in
Theorem <ref>, part (ii),
for $\rho \neq 1$ the choice of $a(t)$ plays a crucial role in determining which population is going to win and constant strategies do not exhaust all the
possible winning strategies.
We stress that $\rho=1$ plays also
a special role in the biological interpretation of the model, since in this case the two
populations have the same fit to the environmental resource, and hence, in a sense,
they are indistinguishable, up to the possible aggressive behavior of the first population.
Next, we show that the set $\mathcal{V}_{\mathcal{A}}$ can be recovered if we use piecewise constant functions with at most one discontinuity, which we call Heaviside functions.
It holds that $\mathcal{V}_{\mathcal{A}} = \mathcal{V}_{\mathcal{H}}$, where $\mathcal{H}$ is the set of Heaviside functions.
The proof of Theorem <ref> also solves the third question mentioned in the Introduction. As a matter of fact, it proves that for each point we either have a constant winning strategy or
a winning strategy of type
\begin{equation*}
a(t) = \left\{
\begin{array}{lr}
a_1 &{\mbox{ if }} t<T ,\\
a_2 &{\mbox{ if }} t\geq T,
\end{array}
\right.
\end{equation*}
for some $T\in(0,T_s)$, and
for suitable values $a_1$, $a_2 \in (0,+\infty)$ such that one is very small and the other one very large, the order depending on $\rho$.
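The mechanics of such a Heaviside strategy can be illustrated with a short simulation sketch. The integrator below assumes the vector field $\dot{u}=u(1-u-v-a(t)c)$, $\dot{v}=\rho v(1-u-v)-a(t)u$; the parameter values, the switching time, and all names are purely illustrative choices of ours, not the values constructed in the proof.

```python
def make_heaviside(a1, a2, t_switch):
    # a strategy with a single jump at time t_switch, as in the display above
    return lambda t: a1 if t < t_switch else a2

def simulate(u, v, a_of_t, c, rho, dt=0.01, T=100.0):
    # Explicit Euler integration of the controlled system; returns ("first", t)
    # as soon as v reaches 0, and ("second", T) if v survives up to time T.
    t = 0.0
    while t < T:
        a = a_of_t(t)
        du = u*(1 - u - v - a*c)
        dv = rho*v*(1 - u - v) - a*u
        u, v, t = u + dt*du, v + dt*dv, t + dt
        if v <= 0:
            return "first", t
    return "second", T

# illustrative values only: mild aggressiveness at first, strong after t = 2
strategy = make_heaviside(0.2, 10.0, 2.0)
outcome, t_win = simulate(0.5, 0.05, strategy, 0.5, 2.0)
print(outcome, round(t_win, 2))
```

For this particular initial datum the first phase already drives $v$ to $0$ before the switch; the point of the sketch is only the machinery of feeding a time-dependent $a(\cdot)$ into the system.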
The construction that
we give also brings to light the fact that the choice of the strategy depends on the initial datum, thus answering our fourth question as well.
It is interesting to observe that the winning strategy that switches abruptly from a small to a large value
could be considered, in the optimal control terminology, as a “bang-bang” strategy.
Even in a target reachability problem, the structure predicted by Pontryagin's Maximum Principle is brought to light: the bounds of the set $\mathcal{V}_{\mathcal{A}}$, as
given in Theorem <ref>, depend on the bounds that
we impose on the strategy, that is, $a \in[0,+\infty)$.
It is natural to consider also the case
in which the level of aggressiveness
is constrained between a minimal and maximal threshold,
which corresponds to the setting $a\in[m,M]$ for suitable $M\geq m\geq 0$, with the hypothesis that $M>0$.
In this setting, we denote by $\mathcal{A}_{m,M}$ the class of piecewise continuous strategies $a(\cdot)$
in ${\mathcal{A}}$ such that $
m\leq a(t)\leq M$ for all $t>0$ and we let
\begin{equation}\label{ch2SPE}
\mathcal{V}_{m,M}:=\mathcal{V}_{\mathcal{A}_{m,M}}=\underset{{a(\cdot)\in \mathcal{A}}\atop{m\leq a(t)\leq M}
}{\bigcup} \mathcal{E}(a(\cdot))=
\underset{{a(\cdot)\in \mathcal{A}}_{m,M}
}{\bigcup} \mathcal{E}(a(\cdot)).\end{equation}
Then we have the following:
Let $M$ and $m$ be two real numbers such that $M\geq m\geq 0$. Then, for $\rho\neq 1$ we have the strict inclusion $\mathcal{V}_{{m,M}}\subsetneq \mathcal{V}_{\mathcal{A}}$.
Notice that for $\rho=1$, Theorem <ref> gives instead that $\mathcal{V}_{{m,M}}= \mathcal{V}_{\mathcal{A}}$
and we think that this is a nice feature, outlining a special role played by the parameter $\rho$
(roughly speaking, when $\rho=1$ constant strategies suffice
to detect all possible winning configurations, thanks to
Theorem <ref>, while when $\rho\ne1$ non-constant strategies are necessary to detect
all winning configurations).
§.§.§ Time minimizing strategy
Once it is established that it is possible to win starting from a certain initial condition, we are interested in knowing which of the possible strategies is best to choose. One criterion that may be taken into account is the duration of the war. This question can be written as a minimization problem with a proper functional to minimize, and therefore the classical Pontryagin theory applies.
To state our next result,
we recall the setting in (<ref>) and define
\begin{equation*}
\mathcal{S}(u_0, v_0) := \Big\{ a(\cdot)\in \mathcal{A}_{m,M}
\;\mbox{ s.t. }\; (u_0, v_0) \in \mathcal{E}(a(\cdot)) \Big\},
\end{equation*}
that is the set of all bounded strategies for which the trajectory starting at $(u_0, v_0)$ leads to the victory of the first population.
To each $a(\cdot)\in\mathcal{S}(u_0, v_0)$ we associate the stopping time defined in (<ref>), and we express its dependence on $a(\cdot)$ by writing $T_s(a(\cdot))$.
In this setting, we provide the following statement concerning the strategy leading
to the quickest possible victory for the first population:
Given a point $(u_0, v_0)\in \mathcal{V}_{m,M}$, there exists a winning strategy $\tilde{a}(t)\in
\mathcal{S}(u_0, v_0)$, and a trajectory $(\tilde{u}(t), \tilde{v}(t) )$ associated with $\tilde{a}(t)$,
for $t\in[0,T]$,
with $(\tilde{u}(0), \tilde{v}(0) )=(u_0,v_0)$, where $T$ is given by
\begin{equation*}
T = \underset{a(\cdot)\in\mathcal{S}(u_0, v_0)}{\min} T_s(a(\cdot)).
\end{equation*}
Moreover,
\begin{equation*}
\tilde{a}(t)\in \left\{m, \ M, \ a_s(t) \right\},
\end{equation*}
where
\begin{equation}\label{ch2KSM94rt3rjjjdfe}
{a}_s(t) := \dfrac{(1-\tilde{u}(t)-\tilde{v}(t))[\tilde{u}(t) \, (2c+1-\rho c)+\rho c]}{\tilde{u}(t) \, 2c(c+1)}.
\end{equation}
The surprising fact given by Theorem <ref>
is that the
minimizing strategy is not only of bang-bang type, but it may assume some values along a singular arc, given by $a_s(t)$.
This possibility is realized in some concrete cases, as we verified by running some numerical simulations, whose results can be visualized in Figure <ref>.
The figure shows the result of a numerical simulation searching for a time-minimizing strategy $\tilde{a}(t)$ for the problem starting at $(0.5, 0.1875)$ with
parameters $\rho=0.5$, $c=4.0$, $m=0$ and $M=10$. In blue, the value
found for $\tilde{a}(t)$; in red, the value of $a_s(t)$ for the corresponding trajectory $(u(t), v(t))$. As one can observe, $\tilde{a}(t)\equiv a_s(t)$ on a long stretch.
The simulation was done using AMPL-Ipopt on the NEOS server, and the pictures have been made with Python.
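The singular-arc value in (<ref>) is an explicit feedback of the current state, so it is straightforward to evaluate along a trajectory and to project onto the admissible range $[m,M]$. A minimal sketch (names are ours; the state below is the initial datum of the simulation in the caption, not a point on the singular arc itself):

```python
def singular_arc(u, v, c, rho):
    # the feedback value a_s from (ref): a_s = (1-u-v)[u(2c+1-rho c)+rho c]/(2c(c+1)u)
    return (1 - u - v)*(u*(2*c + 1 - rho*c) + rho*c)/(u*2*c*(c + 1))

rho, c, m, M = 0.5, 4.0, 0.0, 10.0      # parameters of the simulation above
a_s = singular_arc(0.5, 0.1875, c, rho)
a_admissible = min(M, max(m, a_s))      # clip onto the admissible range [m, M]
print(a_s, a_admissible)
```

At $(u,v)=(0.5,0.1875)$ this evaluates to $0.3125\cdot 5.5/20 = 0.0859375$, which already lies inside $[m,M]$.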
§.§ Organization of the chapter
In the forthcoming Section <ref>
we will exploit methods from ordinary differential equations and dynamical systems
to describe the equilibria of the system and their possible basins of attraction.
The dependence of the dynamics on the structural parameters,
such as fit to the environment, aggressiveness and efficacy of attacks,
is discussed in detail in Section <ref>.
Section <ref> is devoted to the analysis of the strategies
that allow the first population to eradicate the second one (this part needs
an original combination of methods from dynamical systems and optimal control theory).
§ FIRST RESULTS ON THE DYNAMICS
AND PROOFS OF
PROPOSITION <REF> AND THEOREM <REF>
In this section we provide some useful results on the behavior of the solutions
of (<ref>) and on the basin of attraction. In particular,
we provide the proofs
of Proposition <ref> and Theorem <ref>,
and we state a characterization of the sets $\mathcal{B}$
and $\mathcal{E}$ given in (<ref>) and (<ref>), respectively, see Proposition <ref>.
This material will be extremely useful for the analysis of the strategy that we operate later.
We start with some preliminary notation. Given a closed set $\mathcal{S} \subseteq [0,1]\times [0,1]$, we say that a trajectory $(u(t),v(t))$ originating in $\mathcal{S}$ exits the set $\mathcal{S}$ at some time $T\geq 0$ if
* $(u(t),v(t))\in \mathcal{S}$ for all $t\leq T$,
* $(u(T),v(T))\in \partial \mathcal{S}$,
* for any vector $\nu$ normal to $\partial \mathcal{S}$
at the point $(u(T),v(T))$, it holds that
$$(\dot{u}(T), \dot{v}(T)) \cdot \nu >0.$$
Now, we prove Proposition <ref>, which is fundamental for
the well-posedness of our model:
We consider the function $a(t)\in\mathcal{A}$, which is continuous except in a finite number of points $0<t_1< \dots< t_n$.
In all the intervals $(0, t_1)$, $(t_i, t_{i+1}]$, for $i\in\{1,\cdots,n-1\}$, and $(t_n, +\infty)$, the equations in (<ref>)
have smooth coefficients, and therefore a solution does exist.
Now, it is sufficient to consider $(u(t_i), v(t_i))$ as the initial datum for the dynamics
in $(t_i, t_{i+1}]$ to construct a solution $(u(t), v(t))$ for all $t>0$ satisfying
system (<ref>).
This is a rather classical result and we refer to [96] for more details.
Now, we prove that one of the possibilities in (1) and (2) must occur.
For this, by using the equation for $v$ in (<ref>),
we notice that for $v=1$ the inward-pointing normal derivative is
\begin{equation*}
-\dot{v}|_{v=1}=\left(-\rho v(1-u-v)+au\right)|_{v=1} = u(\rho+a) \geq 0.
\end{equation*}
This means that
no trajectory can exit $[0,1]\times[0,1]$ on the edge $v=1$. Similarly, using
the equation for $u$ in (<ref>), we see that
for $u=1$ the inward-pointing normal derivative is
\begin{equation*}
-\dot{u}|_{u=1}=\left(-u(1-u-v)+acu\right)|_{u=1} = v +ac \geq 0,
\end{equation*}
and therefore
no trajectory can exit $[0,1]\times[0,1]$ on the edge $u=1$.
Moreover, it is easy to see that
all points on the line $u=0$ go to the equilibrium $(0,1)$, thus trajectories do not cross the line $u=0$. The only remaining possibilities are that the trajectories stay
in $[0,1]\times(0,1]$, that is possibility (1), or they exit the square on the side $v=0$,
that is possibility (2).
Now, we give the proof of (i), (ii) and (iii) of Theorem <ref>.
We first consider equilibria with first coordinate $u=0$. In this case,
from the second equation in (<ref>), we have that
the equilibria must satisfy $\rho v(1-v)=0$, thus $v=0$ or $v=1$.
As a consequence, $(0,0)$ and $(0,1)$ are two equilibria of the system.
Now, we consider equilibria with first coordinate $u>0$.
Equilibria of this form must satisfy $\dot{u}=0$ with $u\neq 0$, and therefore,
from the first equation in (<ref>),
\begin{equation}\label{ch2curve:u'}
1-u-v-ac=0.
\end{equation}
Moreover from the condition $\dot{v}=0$ and the second equation in (<ref>),
we see that
\begin{equation} \label{ch2curve:v'}
\rho v(1-u-v)-au=0.
\end{equation}
Putting together (<ref>) and (<ref>), we obtain that
the intersection point must lie on the line $\rho c v-u=0$.
Since the equilibrium lies at the intersection of two non-parallel lines, it must be unique.
One can easily verify that the values given in (<ref>) satisfy (<ref>)
and (<ref>).
From now on, we distinguish the three situations in (i), (ii) and (iii) of
Theorem <ref>.
(i) If $0<ac<1$, we have that the point $(u_s,v_s)$ given in (<ref>)
lies in $(0,1)\times(0,1)$. As a result, in this case the system has $3$ equilibria,
given by $(0,0)$, $(0,1)$ and $(u_s,v_s)$.
Now, we observe that the Jacobian of the system (<ref>) is
\begin{equation} \label{ch2Jmatrix}
J(u,v)= \begin{pmatrix}
1-2u-v -ac & -u \\
-\rho v-a & \rho (1-u-2v)
\end{pmatrix}.
\end{equation}
At the point $(0,0)$, the matrix has eigenvalues $\rho >0$ and $1-ac >0$, thus $(0,0)$ is a source. At the point $(0,1)$, the Jacobian (<ref>) has eigenvalues $-ac <0$ and $-\rho <0$, thus $(0,1)$ is a sink. At the point $(u_s,v_s)$, by exploiting the relations (<ref>) and (<ref>) we have that
\begin{equation*}
J(u_s, v_s)= \begin{pmatrix}
-u_s & -u_s \\
-\rho v_s-a & \rho (ac-v_s)
\end{pmatrix},
\end{equation*}
which, by the change of basis given by the matrix
\begin{equation*}
\begin{pmatrix}
-\frac1{u_s} & 0 \\
-\frac1{u_s}\left[\left(\frac{u_s}{c}+a\right)\left(\frac{\rho c-c}{1+\rho c}\right)+ac \right] & \frac{\rho c-c}{1+\rho c}
\end{pmatrix},
\end{equation*}
is transformed into
\begin{equation}\label{ch2degieerhfdj}
\begin{pmatrix}
1 & 1 \\
ac & \rho ac
\end{pmatrix}. \end{equation}
The characteristic polynomial of the matrix
in (<ref>)
is $\lambda^2-\lambda(1+\rho a c)+\rho a c-ac$, that has two real roots, since its discriminant equals $(\rho ac-1)^2+4ac>0$. Hence, $J(u_s, v_s)$ has two real eigenvalues. Moreover,
the determinant of $J(u_s, v_s)$ is $-\rho ac u_s-au_s <0$, which implies
that $J(u_s, v_s)$ has one positive and one negative eigenvalue. These considerations
give that $(u_s, v_s)$ is a saddle point, as desired. This completes the proof
of (i) in Theorem <ref>.
(ii) and (iii) We assume that $ac\ge1$.
We observe that the equilibrium described by the
coordinates $(u_s,v_s)$ in (<ref>)
coincides with $(0,0)$ for $ac=1$,
and lies outside $[0,1]\times[0,1]$ for $ac>1$.
As a result, when $ac\ge1$ the system has $2$ equilibria, given by $(0,0)$ and $(0,1)$.
Looking at the Jacobian in (<ref>),
one sees that
at the point $(0,1)$, it has eigenvalues $-ac <0$ and $-\rho <0$,
and therefore $(0,1)$ is a sink when $ac\ge1$.
Furthermore, from (<ref>)
one finds that
if $ac>1$ then $J(0,0)$ has the positive eigenvalue $\rho$ and the negative
eigenvalue $1-ac$, thus $(0,0)$ is a saddle point.
If instead $ac=1$, then $J(0,0)$
has one positive eigenvalue and one null eigenvalue, as desired.
To complete the proof of Theorem <ref>,
we will deal with the cases $ac\neq1$ and $ac=1$ separately. This analysis will be performed
in the forthcoming Sections <ref> and <ref>,
and the proof of Theorem <ref> will then be
concluded in Section <ref>.
§.§ Characterization of $\mathcal{M}$ when $ac\ne1$
We consider here the case $ac\ne1$. The case $ac=1$ is degenerate
and it will be treated separately in Section <ref>.
We point out that
in the proof of (i) and (ii) in Theorem <ref> we found a saddle point
in both cases.
By the Stable Manifold Theorem (see for example [96]), the point $(u_s, v_s)$ in (<ref>) in the case $0<ac<1$ and the
point $(0,0)$ in the case $ac> 1$ have a stable manifold and an unstable manifold.
These manifolds are unique,
they have dimension $1$, and they are tangent to the eigenvectors of the linearized system.
We will denote by $\mathcal{M}$ the stable manifold associated
with these saddle points.
Since we are interested in the dynamics in the square $[0,1]\times[0,1]$, with a slight abuse of notation we will only consider the restriction of $\mathcal{M}$ to $[0,1]\times[0,1]$.
In order to complete the
proof of Theorem <ref>, we now analyze
some properties of $\mathcal{M}$:
For $ac\ne1$ the set $\mathcal{M}$ can be written as the graph of a unique increasing ${C}^2$ function $\gamma:[0,u_{\mathcal{M}}] \to [0, v_{\mathcal{M}}]$ for some $(u_{\mathcal{M}}, v_{\mathcal{M}}) \in
\big(\{1\}\times[0,1]\big)\cup
\big((0,1]\times\{1\}\big)$, such that $\gamma(0)=0$, $\gamma(u_{\mathcal{M}})=v_{\mathcal{M}}$ and
* if $0<ac<1$, $\gamma(u_s)=v_s$;
* if $ac> 1$, at $u=0$ the function $\gamma$ is tangent to the
line $(\rho-1+ac)v-au=0$.
As a byproduct of the proof of Proposition <ref>, we also
obtain some useful information on the structure of the stable manifold and
the basins of attraction, that we summarize here below:
Suppose that $0<ac<1$. Then,
the curves (<ref>) and (<ref>), loci of the points such that $\dot{u}=0$ and $\dot{v}=0$ respectively, divide the square $[0,1]\times[0,1]$ into four regions:
\begin{equation}\begin{split}\label{ch2DEFA1234}
\mathcal{A}_1 &\;:= \big\{ (u, v) \in [0,1]\times[0,1] \;{\mbox{ s.t }}\; \dot{u}\leq 0,\; \dot{v}\geq 0 \big\}, \\
\mathcal{A}_2 &\;:= \big\{ (u, v) \in [0,1]\times[0,1] \;{\mbox{ s.t }}\; \dot{u}\leq 0,\; \dot{v}\leq 0 \big\}, \\
\mathcal{A}_3 &\;:= \big\{ (u, v) \in [0,1]\times[0,1] \;{\mbox{ s.t }}\;\dot{u}\geq 0,\; \dot{v}\leq 0 \big\}, \\
\mathcal{A}_4 &\;:= \big\{ (u, v) \in [0,1]\times[0,1] \;{\mbox{ s.t }}\;\dot{u}\geq 0, \;\dot{v}\geq 0 \big\}.
\end{split}\end{equation}
Furthermore, the sets $\mathcal{A}_1\cup \mathcal{A}_4$
and $\mathcal{A}_2\cup\mathcal{A}_3$ are separated by the curve $\dot{v}=0$, given by the graph of the continuous function
\begin{equation}\label{ch2f:sigma}
\sigma(v):= 1- \frac{\rho v^2+a}{\rho v+a},
\end{equation}
that satisfies $\sigma(0)=0$, $\sigma(1)=0$, and $0<\sigma(v)<1$ for all $v\in (0,1)$.
In addition,
\begin{equation}\label{ch2aggiunto}
{\mbox{$\mathcal{M}\setminus \{(u_s,v_s) \}$
is contained in~$\mathcal{A}_2\cup\mathcal{A}_4$,}}
\end{equation}
\begin{equation}\label{ch2primaBIS}
(\mathcal{A}_3 \setminus \{ (0,0), (u_s, v_s) \} ) \subseteq \mathcal{E},
\end{equation}
\begin{equation}\label{ch2prima2BIS}
\mathcal{A}_1\setminus \{(u_s,v_s) \} \subset \mathcal{B},\end{equation}
where the notation in (<ref>) and (<ref>) has been utilized.
To visualize the statements in Corollary <ref>, one can see Figure <ref>.
Partition of $[0,1]\times[0,1]$ in the case $a=0.8$, $c=0.5$, $\rho=2$,
as given by (<ref>).
In red, the curve $\dot{u}=0$. In blue, the curve $\dot{v}=0$, parametrized
by the function $\sigma$ in (<ref>).
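The function $\sigma$ in (<ref>) can be checked directly against the (assumed) second equation $\dot{v}=\rho v(1-u-v)-au$: on the graph $u=\sigma(v)$ one has $\dot{v}=0$ identically, and the endpoint values $\sigma(0)=\sigma(1)=0$ hold. A short verification sketch (names are ours):

```python
def sigma(v, a, rho):
    # u-coordinate of the curve where dv/dt = 0, from (ref)
    return 1 - (rho*v*v + a)/(rho*v + a)

def v_dot(u, v, a, rho):
    # dv/dt of the assumed system; note that it does not involve c
    return rho*v*(1 - u - v) - a*u

a, rho = 0.8, 2.0
endpoints = (sigma(0.0, a, rho), sigma(1.0, a, rho))        # both vanish
residuals = [abs(v_dot(sigma(v, a, rho), v, a, rho)) for v in (0.1, 0.3, 0.5, 0.9)]
print(endpoints, max(residuals))
```

Algebraically, $\sigma(v)=\rho v(1-v)/(\rho v+a)$, so $\rho v(1-\sigma(v)-v)-a\sigma(v)=\rho v(1-v)-\sigma(v)(\rho v+a)=0$ exactly; the residuals above are zero up to rounding.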
Suppose that $ac>1$.
Then, we have that $\dot{u}\leq 0$ in $[0,1]\times [0,1]$,
and the curve (<ref>)
divides the square $[0,1]\times[0,1]$ into two regions:
\begin{equation}\begin{split}\label{ch2DEFA12}
\mathcal{A}_1 &\;:= \big\{ (u, v) \in [0,1]\times[0,1]\;{\mbox{ s.t. }}\; \dot{u}\leq 0,\; \dot{v}\geq 0 \big\}, \\
\mathcal{A}_2 &\;:= \big\{ (u, v) \in [0,1]\times[0,1]\;{\mbox{ s.t. }}\; \dot{u}\leq 0, \;
\dot{v}\leq 0 \big\}.
\end{split}\end{equation}
Furthermore, the sets $\mathcal{A}_1$
and $\mathcal{A}_2$ are separated by the curve $\dot{v}=0$, given by the graph of the continuous function $\sigma$ given in (<ref>).
In addition,
\begin{equation}\label{ch2aggiun2}
\mathcal{M}\subset \mathcal{A}_2.\end{equation}
Proposition <ref> and Corollaries <ref>
and <ref>
are a bit technical, but provide fundamental information
to obtain a characterization of the sets $\mathcal{E}$ and
$\mathcal{B}$, given in the forthcoming Proposition <ref>.
We now provide the proof of Proposition <ref>
(and, as a byproduct, of Corollaries <ref> and <ref>).
We treat separately the cases $0<ac<1$ and $ac> 1$.
We start with the case $0<ac<1$, and divide the proof in three steps.
Step 1: localizing $\mathcal{M}$.
With the notation introduced in (<ref>),
we prove that
\begin{equation}\label{ch2dkoegjerig94768}\begin{split}&
{\mbox{all trajectories starting in~$\mathcal{A}_3\setminus \{(0,0), (u_s,v_s) \}~$}}\\
&{\mbox{exit the
set~$\mathcal{A}_3$ on the side~$v=0$.}}\end{split}\end{equation}
To this aim, we first observe that
\begin{equation}\label{ch2pouyi86}
{\mbox{there are no cycles entirely contained in~$\mathcal{A}_3$,}}
\end{equation}
because $\dot{u}$ and $\dot{v}$ have a sign in~$\mathcal{A}_3$. Moreover,
\begin{equation}\label{ch2pouyi862}{\mbox{there are no equilibria to which a trajectory
in the interior
of~$\mathcal{A}_3$ can converge.}}\end{equation}
Indeed, no point in $\mathcal{A}_3$ with positive first coordinate can be mapped to $(0,0)$ without exiting the set, because $\dot{u}\geq 0$ in $\mathcal{A}_3$.
Also, for all $(u_0, v_0)\in \mathcal{A}_3 \setminus(u_s, v_s)$, we have that $v_0<v_s$.
On the other hand, $\dot{v}\leq 0$ in $\mathcal{A}_3$, so no trajectory that is entirely
contained in $\mathcal{A}_3$ can converge to $(u_s, v_s)$. These observations prove (<ref>).
As a consequence of (<ref>), (<ref>) and the
Poincaré-Bendixson Theorem (see e.g. [113]), we have that
all the trajectories in the interior of $\mathcal{A}_3$ must exit the set at some time.
We remark that the side connecting $(0,0)$ and $(u_s, v_s)$ can be written as the set
of points belonging to
$$\big\{ (u,v)\in [0,1]\times (0,v_s) \;{\mbox{ s.t. }}\;
u=\sigma(v) \big\},$$
where the function $\sigma$ is defined in (<ref>).
In this set, it holds that $\dot{v}=0$ and $\dot{u}>0$, thus the outward-pointing normal derivative is negative, so the trajectories cannot exit $\mathcal{A}_3$ passing through this side.
Furthermore, on the side connecting $(u_s, v_s)$ with $(1-ac, 0)$, that lies on the straight line $v=1-ac-u$, we have that $\dot{u}= 0$ and $\dot{v}<0$ for $(u,v)\neq (u_s,v_s)$, so also here the outer normal derivative is negative. Therefore, the trajectories cannot go outside $\mathcal{A}_3$ passing through this side either.
These considerations complete the proof of (<ref>).
Accordingly, recalling the definition of $ \mathcal{E}$ in (<ref>), we see that
\begin{equation}\label{ch2prima}
(\mathcal{A}_3 \setminus \{ (0,0), (u_s, v_s) \} ) \subseteq \mathcal{E}.
\end{equation}
In a similar way one can prove that all trajectories starting in $\mathcal{A}_1\setminus \{(u_s,v_s) \}$ must converge to $(0,1)$, which, recalling the definition of $\mathcal{B}$
in (<ref>), implies that
\begin{equation}\label{ch2prima2}
\mathcal{A}_1\setminus \{(u_s,v_s) \} \subset \mathcal{B}.\end{equation}
Thanks to (<ref>) and (<ref>),
we have that the stable manifold $\mathcal{M}$ has no intersection with $\mathcal{A}_1\setminus \{(u_s,v_s) \}~$ and $\mathcal{A}_3\setminus \{(0,0),(u_s,v_s) \}~$,
and therefore $\mathcal{M}$ must lie
in $\mathcal{A}_2\cup \mathcal{A}_4$.
Also, we know that $\mathcal{M}$ is tangent to an eigenvector in $(u_s, v_s)$,
and we observe that
\begin{equation}\label{ch2poui985004}
{\mbox{$(1, -1)$ is not an eigenvector of the linearized system.}}\end{equation}
Indeed, if $(1, -1)$ were an eigenvector, then
\begin{equation*}
\begin{pmatrix}
1-ac-2u_s-v_s & -u_s \\
-\rho v_s-a & \rho-\rho u_s-2\rho v_s
\end{pmatrix}\cdot
\begin{pmatrix}
1 \\ -1
\end{pmatrix} = \lambda\begin{pmatrix}
1 \\ -1
\end{pmatrix}
\end{equation*}
for some $\lambda\in\mathbb{R}$,
which implies that $1-ac-a-\rho=(u_s+v_s)(1-\rho)$. Hence, recalling (<ref>),
we obtain that $-a=\rho a c$, which is impossible. This
establishes (<ref>).
In light of (<ref>), we conclude that $\mathcal{M}\setminus \{(u_s,v_s) \}$
must have intersection with both $\mathcal{A}_2$ and $\mathcal{A}_4$.
Step 2: defining $\gamma(u)$.
Since $\dot{u}> 0$ and $\dot{v}>0$ in the interior of $\mathcal{A}_4$,
the portion of $\mathcal{M}$
in $\mathcal{A}_4$ can be described globally as the graph of a
monotone increasing smooth function $\gamma_1:U\to[0,v_s]$,
for a suitable interval $U\subseteq[0,u_s]$ with $u_s\in U$, and such that $\gamma_1(u_s)=v_s$.
We stress that, for $u>u_s$, the points $(u,v)\in \mathcal{M}$ belong
to $\mathcal{A}_2$.
Similarly, in the interior of $\mathcal{A}_2$ we have that $\dot{u}< 0$ and $\dot{v}<0$.
Therefore, we find that $\mathcal{M}$ can be represented in $\mathcal{A}_2$
as the graph of a monotone increasing smooth function $\gamma_2: V\to [v_s, 1]$, for a suitable interval $V\subseteq[u_s,1]$ with $u_s\in V$, and such that $\gamma_2(u_s)=v_s$. Notice that in the second case the trajectories and the parametrization run in opposite directions.
Now, we define
\begin{equation*}
\gamma(u) := \begin{cases}
\gamma_1(u) & {\mbox{ if }}u\in U, \\
\gamma_2(u) &{\mbox{ if }} u\in V,
\end{cases}
\end{equation*}
and we observe that it is an increasing smooth function locally
parametrizing $\mathcal{M}$ around $(u_s,v_s)$ (thanks to the
Stable Manifold Theorem).
We point out that, in light of the
Stable Manifold Theorem, the stable manifold $\mathcal{M}$ is globally
parametrized by
an increasing smooth function on a set $W\subset[0,1]$.
Step 3: $\gamma(0)=0$ and $\gamma(u_{\mathcal{M}})=v_{\mathcal{M}}$
for some $(u_{\mathcal{M}},v_{\mathcal{M}})\in\partial\big([0,1]\times[0,1]\big)$.
We first prove that
\begin{equation}\label{ch2r4gyghj}
\gamma(0)=0.\end{equation}
For this, we claim that
\begin{equation}\label{ch2098ouitdbnb}
{\mbox{orbits in the interior of~$\mathcal{A}_4$ do not come from
outside of~$\mathcal{A}_4$.}}
\end{equation}
Indeed, it is easy to see that points on the half axis $\{u=0\}$ converge to $(0,1)$,
and therefore a trajectory cannot enter $\mathcal{A}_4$ from this side.
As for the side connecting $(0,0)$ to $(u_s, v_s)$, here one has that $\dot{u}\geq0$ and $\dot{v}=0$, and so the inward pointing normal derivative is negative. Therefore,
no trajectory can enter $\mathcal{A}_4$ on this side.
Moreover, on the side connecting $(u_s, v_s)$ to $(0, 1-ac)$ the inward pointing normal derivative is negative, because $\dot{u}=0$ and $\dot{v}\ge0$, thus we have that no
trajectory can enter $\mathcal{A}_4$ on this side either.
These considerations prove (<ref>).
Furthermore, we have that
\begin{equation}\label{ch2098ouitdbnb2}
{\mbox{no cycles are allowed in~$\mathcal{A}_4$,}}
\end{equation}
because $\dot{u}\ge0$ and $\dot{v}\ge0$ in $\mathcal{A}_4$.
From (<ref>), (<ref>) and the
Poincaré-Bendixson Theorem (see e.g. [113]), we conclude that,
given a point $(\tilde u,\tilde v)\in\mathcal{M}$ in the
interior of $\mathcal{A}_4$, the $\alpha$-limit set of $(\tilde u,\tilde v)$,
that we denote by $\alpha_{(\tilde u,\tilde v)}$,
can be
\begin{equation}\label{ch209765gjkd}\begin{split}&
{\mbox{either an equilibrium or a union of (finitely many)}}\\
&{\mbox{equilibria and non-closed orbits connecting these equilibria.}}\end{split}
\end{equation}
We stress that, since $(\tilde u,\tilde v)$ lies in the
interior of $\mathcal{A}_4$, we have that
\begin{equation}\label{ch28yfe993vcem}
\tilde u<u_s.
\end{equation}
Now, we observe that
\begin{equation}\label{ch2degfiewgh}
{\mbox{$\alpha_{(\tilde u,\tilde v)}$ cannot contain the saddle point~$(u_s,v_s)$.}}\end{equation}
Indeed, suppose by contradiction that $\alpha_{(\tilde u,\tilde v)}$ does
contain $(u_s,v_s)$. Then,
we denote by $\phi_{(\tilde u,\tilde v)}(t)=\big(u_{(\tilde u,\tilde v)}(t),v_{(\tilde u,\tilde v)}(t)\big)$ the solution of (<ref>)
with $\phi_{(\tilde u,\tilde v)}(0)=(\tilde u,\tilde v)$, and we have that
there exists a sequence $t_j\to-\infty$ such
that $\phi_{(\tilde u,\tilde v)}(t_j)$ converges to $(u_s,v_s)$ as $j\to+\infty$.
In particular, in light of (<ref>),
there exists $j_0$ sufficiently large
such that
$$ u_{(\tilde u,\tilde v)}(0)=\tilde u<u_{(\tilde u,\tilde v)}(t_{j_0}).$$
Consequently, there exists $t_\star\in(t_{j_0},0)$ such that $\dot
u_{(\tilde u,\tilde v)}(t_\star)<0$.
As a result, it follows that $\phi_{(\tilde u,\tilde v)}(t_\star)\not\in\mathcal{A}_4$.
This, together with the fact that $\phi_{(\tilde u,\tilde v)}(0)\in\mathcal{A}_4$,
is in contradiction with (<ref>), and the proof of (<ref>)
is thereby complete.
Thus, from (<ref>) and (<ref>),
we deduce that $\alpha_{(\tilde u,\tilde v)}=\{(0,0)\}$.
This gives that $(0,0)$ lies on the stable manifold $\mathcal{M}$,
and therefore the proof of (<ref>) is complete.
Now, we show that
\begin{equation}\label{ch2ifregkjh0000}
{\mbox{there exists~$(u_{\mathcal{M}},v_{\mathcal{M}})\in\partial\big([0,1]\times[0,1]\big)$
such that~$\gamma(u_{\mathcal{M}})=v_{\mathcal{M}}$.}}
\end{equation}
To prove it, we first observe that
\begin{equation}\label{ch2ifregkjh0000pre}
{\mbox{orbits in~$\mathcal{A}_2$ converging to~$(u_s,v_s)$ come from
outside of~$\mathcal{A}_2$.}}
\end{equation}
Indeed, we suppose by contradiction that
\begin{equation}\label{ch20696u833687}
{\mbox{an orbit in~$\mathcal{A}_2$
converging to~$(u_s,v_s)$ stays confined in~$\mathcal{A}_2$.}}\end{equation}
We remark that, in this case,
\begin{equation}\label{ch2doeutoeru}
{\mbox{an orbit in~$\mathcal{A}_2$ cannot be a cycle,}}
\end{equation}
because $\dot{u}$
and $\dot{v}$ have a sign in $\mathcal{A}_2$.
Then, by the
Poincaré-Bendixson Theorem (see e.g. [113]), we conclude that,
given a point $(\tilde u,\tilde v)\in\mathcal{M}$ in the
interior of $\mathcal{A}_2$, the $\alpha$-limit set of $(\tilde u,\tilde v)$,
that we denote by $\alpha_{(\tilde u,\tilde v)}$,
can be either an equilibrium or a union of (finitely many)
equilibria and non-closed orbits connecting these equilibria.
We notice that the set $\alpha_{(\tilde u,\tilde v)}$ cannot contain $(0,1)$,
since it is a stable equilibrium.
We also claim that
\begin{equation}\label{ch2qewytriyb}
{\mbox{$\alpha_{(\tilde u,\tilde v)}$ cannot contain~$(u_s,v_s)$.}}\end{equation}
Indeed, we
suppose by contradiction that $\alpha_{(\tilde u,\tilde v)}$ does
contain $(u_s,v_s)$. We observe that, since $\dot{u}\le0$ in $\mathcal{A}_2$,
\begin{equation}\label{ch2koewtuyh}
\tilde u>u_s.\end{equation}
We denote by $\phi_{(\tilde u,\tilde v)}(t)=\big(u_{(\tilde u,\tilde v)}(t),v_{(\tilde u,\tilde v)}(t)\big)$ the solution of (<ref>)
with $\phi_{(\tilde u,\tilde v)}(0)=(\tilde u,\tilde v)$, and we have that
there exists a sequence $t_j\to-\infty$ such
that $\phi_{(\tilde u,\tilde v)}(t_j)$ converges to $(u_s,v_s)$ as $j\to+\infty$.
In particular, in light of (<ref>),
there exists $j_0$ sufficiently large
such that
$$ u_{(\tilde u,\tilde v)}(0)=\tilde u>u_{(\tilde u,\tilde v)}(t_{j_0}).$$
Consequently, there exists $t_\star\in(t_{j_0},0)$ such that $\dot
u_{(\tilde u,\tilde v)}(t_\star)>0$.
Accordingly, we have that $\phi_{(\tilde u,\tilde v)}(t_\star)\not\in\mathcal{A}_2$.
This and the fact that $\phi_{(\tilde u,\tilde v)}(0)\in\mathcal{A}_2$ give
a contradiction with (<ref>), and therefore this
establishes (<ref>).
These considerations complete the proof of (<ref>).
Now, we observe that
the inward pointing normal derivative
at every point in $\mathcal{A}_2 \cap \mathcal{A}_3 \setminus\{(u_s, v_s)\}$
is negative, since $\dot{u}=0$ and $\dot{v}\le0$. Hence, no trajectory can enter
from this side.
Similarly, the inward pointing normal derivative
at every point in $\mathcal{A}_1 \cap \mathcal{A}_2 \setminus\{(u_s, v_s)\}$
is negative, since $\dot{u}\le0$ and $\dot{v}=0$. Hence, no trajectory can enter
from this side either.
These observations and (<ref>) give the desired result
in (<ref>), and thus Proposition <ref>
is established in the case $ac<1$.
Now we treat the case $ac>1$, using the same ideas. In this setting,
$\mathcal{M}$ is the stable manifold associated with the saddle point $(0,0)$.
We point out that, in this case, for all points in $[0,1]\times [0,1]$ we have
that $\dot{u}\leq 0$. Hence, the curve of points satisfying $\dot{v}=0$, that was also given in (<ref>), divides the square $[0,1]\times[0,1]$ into two regions $
\mathcal{A}_1$ and $\mathcal{A}_2$, defined in (<ref>).
Now, one can repeat the arguments in Step 1 with obvious modifications,
to find that $
\mathcal{M}\subset \mathcal{A}_2$.
Since the derivatives of $u$ and $v$ have a sign in $\mathcal{A}_2$, and the
set $\mathcal{M}$ in this case is the trajectory of a point converging to $(0,0)$, the set $\mathcal{M}$ can be represented globally as the graph of a smooth increasing function $\gamma: U\to [0,1]$ for a suitable interval $U\subseteq[0,1]$ containing the origin.
As a consequence, the condition $\gamma(0)=0$ is trivially satisfied in this setting.
The existence of a suitable $(u_{\mathcal{M}},v_{\mathcal{M}})$ can be derived reasoning as in Step 3
with obvious modifications.
Now, we prove that
\begin{equation}\label{ch2ofriyty98579}
{\mbox{at~$u=0$ the function~$\gamma$ is tangent
to the line~$(\rho-1+ac)v-au=0$.}}\end{equation}
For this, we recall (<ref>) and we see, by inspection, that
the Jacobian matrix $J(0,0)$ has two eigenvectors, namely $(0,1)$ and $(\rho-1+ac, a)$. The first one is tangent to the line $u=0$, that is the unstable manifold of $(0,0)$,
as one can easily verify. Thus, the second eigenvector is the one tangent to $\mathcal{M}$, as prescribed by the Stable Manifold Theorem (see e.g. [96]). Hence, at $(0,0)$ the manifold $\mathcal{M}$ is tangent to the line $(\rho-1+ac)v-au=0$ and so is the function $\gamma$ at $u=0$. This proves (<ref>), and thus
Proposition <ref>
is established in the case $ac>1$ as well.
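The eigenvector computation can be spot-checked numerically. In the sketch below, the Jacobian $J(0,0)$ is entered by hand from the vector field $\dot u=u(1-u-v)-acu$, $\dot v=\rho v(1-u-v)-au$, and the parameters $a=0.8$, $c=3$, $\rho=2$ (so that $ac>1$) are illustrative.

```python
# Check numerically that the Jacobian at the origin has eigenvectors
# (0,1) (eigenvalue rho) and (rho-1+ac, a) (eigenvalue 1-ac), as claimed.
a, c, rho = 0.8, 3.0, 2.0  # ac = 2.4 > 1 (illustrative values only)

# J(0,0) of the field (u(1-u-v)-acu, rho v(1-u-v)-au), computed by hand:
J = [[1 - a*c, 0.0],
     [-a,      rho]]

def matvec(M, w):
    return [M[0][0]*w[0] + M[0][1]*w[1], M[1][0]*w[0] + M[1][1]*w[1]]

w1, lam1 = [0.0, 1.0], rho               # tangent to the line u = 0
w2, lam2 = [rho - 1 + a*c, a], 1 - a*c   # tangent to (rho-1+ac) v - a u = 0

r1 = matvec(J, w1)
r2 = matvec(J, w2)
err1 = max(abs(r1[i] - lam1*w1[i]) for i in range(2))
err2 = max(abs(r2[i] - lam2*w2[i]) for i in range(2))
print(err1, err2)  # both residuals should be at floating-point level
```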
§.§ Characterization of $\mathcal{M}$ when $ac=1$
Here we will prove the counterpart of Proposition <ref>
in the degenerate case $ac=1$.
To this end, looking at the velocity fields,
we first observe that
\begin{equation}\label{ch2NOEX}
\begin{split}&
{\mbox{trajectories starting in~$(0,1)\times(-\infty,1)$ at time~$t=0$}}\\&{\mbox{remain in~$(0,1)\times(-\infty,1)$ for all time~$t>0$.}}\end{split}
\end{equation}
We also point out that
\begin{equation}\label{ch2CALR}
\begin{split}&
{\mbox{trajectories entering the region~${\mathcal{R}}:=
\{u\in(0,1),\,u+v<0\}
$}}\\&{\mbox{at some time~$t_0\in\R$}}\\&{\mbox{remain in that region for all time~$t>t_0$,}}\end{split}
\end{equation}
since $\dot v=\rho v(1-u-v)-au=-\rho u-au<0$
along $\{u\in(0,1),\,u+v=0\}$.
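The invariance of $\mathcal{R}$ rests on the sign of the field along $\{u+v=0\}$; a quick numerical check (with the illustrative choice $a=0.5$, $c=2$, $\rho=2$, so that $ac=1$ holds exactly in floating point) confirms that $u+v$ strictly decreases there.

```python
# Quick check of the invariance of R = {u in (0,1), u+v < 0} when ac = 1:
# on the boundary segment {u+v = 0}, the quantity u+v strictly decreases.
a, c, rho = 0.5, 2.0, 2.0  # ac = 1 exactly (illustrative values)

def field(u, v):
    return u*(1 - u - v) - a*c*u, rho*v*(1 - u - v) - a*u

ok = True
for k in range(1, 100):
    u = k / 100.0
    du, dv = field(u, -u)                 # point on {u + v = 0}
    ok &= abs(du) < 1e-12                 # du/dt = -u(u+v) = 0 there
    ok &= abs(dv + (rho + a)*u) < 1e-12   # dv/dt = -(rho+a)u < 0
    ok &= (du + dv) < 0                   # so u+v decreases: orbits enter R
print(ok)  # prints: True
```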
Also, by the
Center Manifold Theorem
(see e.g. Theorem 1 on page 16
of [31] or pages 89-90 in [99]),
there exists a collection $\mathcal{M}_0$
of invariant curves, which are all
tangent at the origin to the eigenvector corresponding to the null eigenvalue,
that is the straight line $\rho v-au=0$. Then, we define $\mathcal{M}:=
\mathcal{M}_0\cap ([0,1]\times[0,1])$ and we observe that this
intersection is nonvoid, given the tangency property of $\mathcal{M}_0$
at the origin.
In what follows, for every $t\in\R$, we denote by $(u(t),v(t))=\phi_p(t)$ the orbit of $p\in\mathcal{M}\setminus\{(0,0)\}$. We start by providing an observation
related to negative times:
If $p\in\mathcal{M}\setminus\{(0,0)\}$
then $\phi_p(t)$ cannot approach the origin for negative values of $t$.
We argue by contradiction
and denote by $t_1,\dots,t_n,\dots$ a sequence of such negative values of $t$, for which $t_n\to-\infty$ and
$$ \lim_{n\to+\infty}\phi_p(t_n)=(0,0).$$
Up to a subsequence, we can also suppose that
\begin{equation}\label{ch2bejv0565etP}
u(t_{n+1})<u(t_n)\qquad{\mbox{for every }}n\ge1.
\end{equation}
In light of (<ref>), we have that, for all $T\le0$,
\begin{equation}\label{ch20okf3233}
\phi_p(T)\not\in{\mathcal{R}}.
\end{equation}
Indeed, if $\phi_p(T)\in{\mathcal{R}}$, we deduce from (<ref>)
that $\phi_p(t)\in{\mathcal{R}}$ for all $t\ge T$. In particular,
we can take $t=0\ge T$ and conclude that $p=\phi_p(0)\in{\mathcal{R}}$,
and this is in contradiction with the assumption that $p\in{\mathcal{M}}\setminus\{(0,0)\}$.
As a byproduct of (<ref>), we obtain that, for all $T\le0$,
$$ \phi_p(T)\in\{u\in(0,1),\,u+v\ge0\}\subseteq\{\dot u=-u(u+v)\le0\}.$$
In particular
$$ u(t_n)-u(t_{n+1})=\int_{t_{n+1}}^{t_n}\dot u(\tau)\,d\tau\le0,$$
which is in contradiction with (<ref>),
and consequently we have established the desired result.
Now we show that the $\omega$-limit of any point lying
on the global center manifold coincides with the origin, according to the next result:
If $p\in\mathcal{M}$, then its $\omega$-limit is $(0,0)$.
We observe that, for every $t>0$,
\begin{equation}\label{ch2T0z}
\phi_p(t)\in[0,1]\times[0,1].
\end{equation}
Indeed, by (<ref>), one
sees that, for $t>0$,
$\phi_p(t)$ cannot cross $\{0\}\times[0,1]$,
$\{1\}\times[0,1]$ and $[0,1]\times\{1\}$,
hence the only possible escape side is given by $[0,1]\times\{0\}$.
Therefore, to prove (<ref>), we suppose, by contradiction,
that there exists $t_0\ge0$ such that $\phi_p({t_0})\in[0,1]\times\{0\}$,
that is $v(t_0)=0$. Since $(0,0)$ is an equilibrium, it follows that $u(t_0)\ne0$.
In particular, $u(t_0)>0$ and accordingly $\dot v(t_0)=-au(t_0)<0$.
This means that $v(t_0+\varepsilon)<0$ for all $\varepsilon\in
(0,\varepsilon_0)$ for a suitable $\varepsilon_0>0$.
Looking again at the velocity fields, this entails that $\phi_p(t)\in(0,1)\times(-\infty,0)$
for all $t>\varepsilon_0$. Consequently,
$\phi_p(t)$ cannot approach the
straight line $\rho v-au=0$ for $t>\varepsilon_0$.
This, combined with Lemma <ref>, says that the trajectory
emanating from $p$ can never
approach the
straight line $\rho v-au=0$ at the origin, in contradiction with the definition
of ${\mathcal{M}}$,
and thus the proof of (<ref>)
is complete.
From (<ref>)
and the Poincaré-Bendixson Theorem (see e.g. [113]),
we deduce that the $\omega$-limit of $p$
can be either a cycle, or an equilibrium,
or a union of (finitely many)
equilibria and non-closed orbits connecting these equilibria.
We observe that the $\omega$-limit of $p$ cannot be a cycle, since $\dot u$
has a sign in $[0,1]\times[0,1]$. Moreover, it cannot contain the sink $(0,1)$, due to
Lemma <ref>. Hence, the only possibility is that
the $\omega$-limit of $p$ coincides with $(0,0)$,
which is the desired result.
As a consequence of Lemma <ref> and the fact
that $\dot u<0$ in $(0,1]\times[0,1]$, we obtain the following statement:
Every trajectory in $\mathcal{M}$ has
the form $\{\phi_p(t),\,t\in\R\}$, with $\lim_{t\to+\infty}\phi_p(t)=(0,0)$,
and there exists $t_p\in\R$ such that $\phi_p(t_p)\in\big(\{1\}\times[0,1]\big)
\cup\big([0,1]\times\{1\}\big)$.
The result in Corollary <ref> can be sharpened
in view of the following statement (which can be seen as the counterpart
of Proposition <ref>
in the degenerate case $ac=1$): namely, since the center manifold can in principle contain
many different trajectories
(see e.g. Figure 5.3 in [31]), we provide a
tailor-made argument that excludes this possibility in the specific case
that we deal with.
For $ac=1$,
$\mathcal{M}$ contains one, and only one, trajectory,
which is asymptotic to the origin as $t\to+\infty$ and can be written
as a graph $\gamma:[0,u_{\mathcal{M}}]\to[0,v_{\mathcal{M}}]$, for some $(u_{\mathcal{M}},v_{\mathcal{M}})\in\big(\{1\}\times[0,1]\big)\cup
\big((0,1]\times\{1\}\big)$, where $\gamma$ is an increasing $C^2$ function such that $\gamma(0)=0$,
$\gamma(u_{\mathcal{M}})=v_{\mathcal{M}}$ and the graph of $\gamma$ at the origin is tangent to the line $\rho v-au=0$.
First of all, we show that
\begin{equation}\label{ch2LIMSDD-0}
{\mbox{$\mathcal{M}$ contains one, and only one, trajectory.}}\end{equation}
Suppose, by contradiction, that ${\mathcal{M}}$
contains two different orbits, that we denote by ${\mathcal{M}}_-$
and ${\mathcal{M}}_+$.
Using Corollary <ref>,
we can suppose that ${\mathcal{M}}_+$
lies above ${\mathcal{M}}_-$, and that
\begin{equation}\label{ch2CQPSKD}
\begin{split}&
{\mbox{the region~${\mathcal{P}}\subset[0,1]\times[0,1]$ contained between~${\mathcal{M}}_+$
and~${\mathcal{M}}_-$}}\\&{\mbox{lies in~$\{\dot u<0\}$.}}\end{split}
\end{equation}
Consequently, for every $p\in{\mathcal{P}}$,
it follows that
\begin{equation}\label{ch29ikfjty} \lim_{t\to+\infty}\phi_p(t)=(0,0).\end{equation}
In particular, we can take an open ball $B\subset
{\mathcal{P}}$ in the vicinity of the origin, denote by $\mu(t)$ the Lebesgue measure of ${\mathcal{S}}(t):=\{\phi_p(t),\;
p\in B\}$, and write that $\mu(0)>0$ and
\begin{equation}\label{ch2LIMSDD} \lim_{t\to+\infty}\mu(t)=0.\end{equation}
We point out that ${\mathcal{S}}(t)$
lies in the vicinity of the origin for all $t\ge0$, thanks to (<ref>).
As a consequence, for all $t$, $\tau>0$, changing variables
$$ y:=\phi_{x}(\tau)=x+\int_0^\tau \frac{d\phi_x(\theta)}{d\theta}\,d\theta=
x+\tau \frac{d\phi_x(0)}{dt}+O(\tau^2),$$
we find that
\begin{eqnarray*}
\mu(t+\tau)&=&\int_{{\mathcal{S}}(t+\tau)}dy\\&=&
\int_{{\mathcal{S}}(t)}\big|\det \big(D_x \phi_x(\tau)\big)\big|\,dx\\&=&
\int_{{{\mathcal{S}}(t)}}\left|\det D_x \left(x+\tau \frac{d\phi_x(0)}{dt}
\right)\right|\,dx+O(\tau^2)\\&=&
\int_{{{\mathcal{S}}(t)}}\left( 1+\tau\,{\rm Tr}\left(D_x \frac{d\phi_x(0)}{dt}\right)
\right)\,dx+O(\tau^2)
\\&=&\mu(t)+\tau
\int_{{{\mathcal{S}}(t)}} {\rm Tr}\left(D_x \frac{d\phi_x(0)}{dt}\right)\,dx+O(\tau^2),
\end{eqnarray*}
where ${\rm Tr}$ denotes the trace of a $(2\times2)$-matrix.
As a consequence,
\begin{equation}\label{ch2090987t4oyyorfg4}
\frac{d\mu}{dt}(t)=
\int_{{{\mathcal{S}}(t)}} {\rm Tr}\left(D_x \frac{d\phi_x(0)}{dt}\right)\,dx.\end{equation}
Also, using the notation $x=(u,v)$, we can write (<ref>) when $ac=1$ in the form
$$ \frac{d\phi_x}{dt}(t)=
\dot x(t)=\left(
\begin{matrix}
\dot u(t)\\
\dot v(t)
\end{matrix}\right)=
\left(
\begin{matrix}
-u(t)\big(u(t)+v(t)\big)\\
\rho v(t)(1-u(t)-v(t))-au(t)
\end{matrix}\right).$$
Accordingly,
\begin{eqnarray*} D_x \frac{d\phi_x(0)}{dt}&=&
\left(
\begin{matrix}
\partial_u\big(-u(u+v)\big)&\partial_v\big(-u(u+v)\big)\\
\partial_u\big(\rho v (1-u-v)-au\big)&\partial_v\big(\rho v (1-u-v)-au\big)
\end{matrix}\right),
\end{eqnarray*}
and therefore
\begin{equation}\label{ch20oj476ytgf9846}
\begin{split}
{\rm Tr}\left(D_x \frac{d\phi_x(0)}{dt}\right)\,&=-\partial_u\big(u(u+v)\big)
+\partial_v\big(\rho v (1-u-v)-au\big)\\&=-2u-v+\rho(1-u-v)-\rho v\\&=\rho+O(|x|)
\end{split}
\end{equation}
for $x$ near the origin.
As a result, recalling (<ref>), we can take $t$ sufficiently large,
such that ${{{\mathcal{S}}(t)}}$ lies in a neighborhood of the origin, exploit (<ref>)
to write that ${\rm Tr}\left(D_x \frac{d\phi_x(0)}{dt}\right)\ge\frac\rho2$
and then (<ref>) to conclude that
$$ \frac{d\mu}{dt}(t)\ge \frac{\rho}2
\int_{{{\mathcal{S}}(t)}} dx=\frac{\rho}{2}\,\mu(t).$$
This implies that $\mu(t)$ diverges (exponentially fast)
as $t\to+\infty$, which is in contradiction with (<ref>).
The proof of (<ref>)
is thereby complete.
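The key trace expansion ${\rm Tr}=\rho+O(|x|)$ can be spot-checked by finite differences on the vector field; below is a sketch with the illustrative parameters $a=0.5$, $c=2$, $\rho=2$, so that $ac=1$.

```python
# Finite-difference check, for ac = 1, that the trace of the linearization of
# the field approaches rho at the origin.
a, c, rho = 0.5, 2.0, 2.0  # ac = 1 (illustrative values)

def field(u, v):
    return u*(1 - u - v) - a*c*u, rho*v*(1 - u - v) - a*u

def trace_jac(u, v, h=1e-6):
    """Trace of the Jacobian of the field at (u,v) via centered differences."""
    du_u = (field(u + h, v)[0] - field(u - h, v)[0]) / (2*h)
    dv_v = (field(u, v + h)[1] - field(u, v - h)[1]) / (2*h)
    return du_u + dv_v

x = (1e-4, 1e-4)
print(trace_jac(*x), rho)  # the two values should be close near the origin
```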
Now, we check the other claims in the statement of Proposition <ref>.
The asymptotic property as $t\to+\infty$ is a consequence of Corollary <ref>.
Also, the graphical property as well as the monotonicity
property of the graph follow from the fact that ${\mathcal{M}}\subset\{\dot u<0\}$.
The smoothness of the graph follows from the smoothness of the center manifold.
The fact that $\gamma(0)=0$ and
$\gamma(u_{\mathcal{M}})=v_{\mathcal{M}}$ follow also from
Corollary <ref>. The tangency property at the origin is a consequence of
the tangency property of the center manifold to the center eigenspace.
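As an exploratory illustration (not a proof), the trajectory in $\mathcal{M}$ can be approximated by integrating the flow backward in time from a point near the origin on the center eigenspace $\rho v-au=0$: backward in time, the transverse direction contracts, $u$ and $v$ increase, and the orbit reaches the top or right side of the square. The parameters, step size and starting point below are illustrative choices.

```python
# Exploratory sketch: approximate the trajectory in M for ac = 1 by backward
# integration from a point on the line rho*v - a*u = 0 close to the origin.
a, c, rho = 0.5, 2.0, 2.0  # ac = 1 (illustrative values)

def field(u, v):
    return u*(1 - u - v) - a*c*u, rho*v*(1 - u - v) - a*u

def rk4_back(u, v, h):
    """One fourth-order Runge-Kutta step for the time-reversed flow."""
    def f(u, v):
        du, dv = field(u, v)
        return -du, -dv
    k1 = f(u, v)
    k2 = f(u + 0.5*h*k1[0], v + 0.5*h*k1[1])
    k3 = f(u + 0.5*h*k2[0], v + 0.5*h*k2[1])
    k4 = f(u + h*k3[0], v + h*k3[1])
    return (u + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            v + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

u, v = 2e-3, 2e-3*a/rho   # on the line rho*v = a*u, close to (0,0)
monotone, steps = True, 0
while u < 1.0 and v < 1.0 and steps < 2_000_000:
    un, vn = rk4_back(u, v, 0.01)
    monotone &= (un >= u)  # u should increase backward (du/dt < 0 forward)
    u, v = un, vn
    steps += 1

exited = (u >= 1.0 or v >= 1.0)
print(exited, monotone, round(u, 2), round(v, 2))
```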
As a byproduct of the proof of Proposition <ref>
we also obtain the following information:
Suppose that $ac=1$.
Then, we have that $\dot{u}\leq 0$ in $[0,1]\times [0,1]$,
and the curve (<ref>)
divides the square $[0,1]\times[0,1]$ into two regions $\mathcal{A}_1$
and $\mathcal{A}_2$,
defined in (<ref>).
Furthermore, the sets $\mathcal{A}_1$
and $\mathcal{A}_2$ are separated by the curve $\dot{v}=0$, given by the graph of the continuous function $\sigma$
given in (<ref>).
In addition,
\begin{equation}\label{ch2aggiun2BIS}
\mathcal{M}\subset \mathcal{A}_2.\end{equation}
§.§ Completion of the proof of Theorem <ref>
We observe that, by the Stable Manifold Theorem
and the Center Manifold Theorem, the statement in (v) of Theorem <ref>
is obviously fulfilled.
Hence, to complete the proof of Theorem <ref>,
it remains to show that the statement in (iv) holds true.
To this aim, exploiting
the useful pieces of
information in Propositions <ref> and <ref>, we first give a characterization of the sets $\mathcal{E}$ and $\mathcal{B}$:
The following characterizations of the sets in (<ref>) and (<ref>)
are true:
\begin{equation}\label{ch2char:E}
\begin{split}
\mathcal{E}=\;&\Big\{ (u,v)\in [0,1]\times [0,1] \;{\mbox{ s.t. }}\; v<\gamma(u) \ \text{if} \ u\in[0,u_{\mathcal{M}}] \\
&\qquad\qquad\qquad \qquad\qquad
{\mbox{ and}} \;v\leq 1 \ \text{if} \ u\in(u_{\mathcal{M}}, 1] \Big\},
\end{split}
\end{equation}
\begin{equation}\label{ch2char:B}
\mathcal{B}=\Big\{ (u,v)\in [0,u_{\mathcal{M}}]\times [0,1]\;{\mbox{ s.t. }}\; v>\gamma(u) \ \text{if} \ u\in[0,u_{\mathcal{M}}] \Big\},
\end{equation}
for some $(u_{\mathcal{M}}, v_{\mathcal{M}}) \in \partial \left( [0,1]\times [0,1] \right)$.
One can visualize the appearance of the set $\mathcal{E}$ in (<ref>)
in two particular cases in
Figure <ref>.
[Figure: phase portraits for $a=0.8$, $c=0.5$, $\rho=2$ (left) and $a=0.8$, $c=3$, $\rho=2$ (right). In blue, the orbits of the points; the red dots show the equilibria; in violet, the set $\mathcal{E}$.]
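A rough numerical counterpart of this picture: integrating the flow with a standard fourth-order Runge-Kutta scheme and classifying starting points by whether the trajectory exits through $\{v=0\}$ (set $\mathcal{E}$) or reaches the sink $(0,1)$ (set $\mathcal{B}$). The parameters match the left panel; the two sample points, the step size and the tolerances are illustrative choices.

```python
# Classify two starting points for a = 0.8, c = 0.5, rho = 2 (ac < 1):
# points in E exit [0,1]x[0,1] through {v = 0} with u > 0; points in B
# converge to the sink (0,1).
a, c, rho = 0.8, 0.5, 2.0

def field(p):
    u, v = p
    return (u*(1 - u - v) - a*c*u, rho*v*(1 - u - v) - a*u)

def rk4_step(p, h):
    k1 = field(p)
    k2 = field((p[0] + 0.5*h*k1[0], p[1] + 0.5*h*k1[1]))
    k3 = field((p[0] + 0.5*h*k2[0], p[1] + 0.5*h*k2[1]))
    k4 = field((p[0] + h*k3[0], p[1] + h*k3[1]))
    return (p[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            p[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

def classify(p, h=0.01, t_max=200.0):
    t = 0.0
    while t < t_max:
        p = rk4_step(p, h)
        t += h
        if p[1] <= 0.0 and p[0] > 0.0:
            return "E"  # exits through the side (0,1] x {0}
        if p[0] < 1e-6 and abs(p[1] - 1.0) < 1e-6:
            return "B"  # reached the sink (0,1)
    return "?"

res_E = classify((0.9, 0.05))  # well below the stable manifold
res_B = classify((0.1, 0.8))   # well above the stable manifold
print(res_E, res_B)
```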
We let $\gamma$ be the parametrization of $\mathcal{M}$,
as given by Propositions <ref> (when $ac\neq1$)
and <ref> (when $ac=1$), and we
consider the sets
\begin{eqnarray*}
&&\mathcal{X}:= \big\{ (u,v)\in [0,1]\times [0,1]\;{\mbox{ s.t. }}\; v < \gamma(u) \big\}\\
{\mbox{and }}&&
\mathcal{Y}:= \big\{ (u,v)\in [0,1]\times [0,1] \;{\mbox{ s.t. }}\; v > \gamma(u) \big\}.
\end{eqnarray*}
The goal is to prove that $\mathcal{X}\equiv\mathcal{E}$ and $\mathcal{Y}\equiv\mathcal{B}$. We observe that, when $u_{\mathcal{M}}=1$, then $\mathcal{X} \cup \mathcal{Y} \cup \mathcal{M}=[0,1]\times[0,1]$.
When instead $u_{\mathcal{M}}\in(0,1)$, then $\mathcal{X} \cup \mathcal{Y} \cup \mathcal{M}\cup\big((u_{\mathcal{M}},1]\times[0,1]\big)=[0,1]\times[0,1]$.
Accordingly, if we show that
\begin{eqnarray}
&&\mathcal{X}\cup\big((u_{\mathcal{M}},1]\times[0,1]\big)\subseteq\mathcal{E} \label{ch21627} \\
{\mbox{and }}&& \mathcal{Y}\subseteq\mathcal{B}, \label{ch21628}
\end{eqnarray}
we are done.
Hence, we now focus on the proof of (<ref>).
Namely, recalling (<ref>),
we show that
\begin{equation}\label{ch2alltra}
{\mbox{all trajectories starting in~$\mathcal{X}$ exit the
set on the side~$(0,1]\times \{0\}$.}}\end{equation}
For this, we first notice that, gathering together (<ref>), (<ref>), (<ref>),
(<ref>) and (<ref>), we find that
\begin{equation}\label{ch207960789djiewf2}
{\mbox{no limit cycle exists in~$[0,1]\times[0,1]$}}\end{equation}
(in the case $0<ac<1$, and the same holds true in the case $ac\ge1$ since $\dot u$ has a sign).
In addition,
\begin{equation}\label{ch207960789djiewf}
{\mbox{the~$\omega$-limit of any point in~$\mathcal{X}$
cannot contain an equilibrium.}}\end{equation}
Indeed, by Propositions <ref> (when $ac\neq1$)
and <ref> (when $ac=1$),
we have that $\gamma(0)=0<1$, and therefore $(0,1)\notin \overline{\mathcal{X}}$.
Moreover, if $ac<1$,
a trajectory in $\mathcal{X}$ cannot converge to $(u_s, v_s)$, since $\mathcal{X}$ does not
contain points of the stable manifold $\mathcal{M}$, nor to $(0,0)$,
since this is a repulsive equilibrium and no trajectory converges here.
If instead $ac\geq 1$, then it cannot converge to $(0,0)$, since $\mathcal{X}$ does not
contain points of $\mathcal{M}$.
These observations complete the proof of (<ref>).
From (<ref>), (<ref>) and the
Poincaré-Bendixson Theorem (see e.g. [113]),
we have that
every trajectory starting in $\mathcal{X}$ leaves the set (possibly in infinite time).
If the trajectory leaves at $t=+\infty$, then it converges to some equilibrium on $\partial \mathcal{X}$, which is in contradiction with (<ref>).
As a consequence, a trajectory in $\mathcal{X}$ leaves the set in finite time.
Suppose that a trajectory leaves $\mathcal{X}$ at a point $(u,v)\in \partial \mathcal{X}$; then either $(u, v)\in \mathcal{M}$ or $(u, v)\in \partial ( [0,1]\times[0,1] )$.
The first possibility is ruled out, since otherwise the trajectory would converge to $(u_s, v_s)$. Hence, the only possibility is that the
trajectory leaves $\mathcal{X}$ at $(u, v)\in \partial ( [0,1]\times[0,1] )$.
By Proposition <ref>
this is possible only if $u>0$ and $v=0$,
which proves (<ref>). As a consequence of (<ref>) we obtain that
\begin{equation}\label{ch2po0584yiugherghegk}
\mathcal{X}\subseteq\mathcal{E}.\end{equation}
We now claim that
\begin{equation}\label{ch2po0584yiugherghegkbis}
\big((u_{\mathcal{M}},1]\times[0,1]\big)\subseteq\mathcal{E}.
\end{equation}
To this end, we observe that there are neither cycles nor equilibria in $(u_{\mathcal{M}},1]\times[0,1]$, and therefore
we can use the Poincaré-Bendixson Theorem (see e.g. [113]) to conclude that any trajectory starting in $(u_{\mathcal{M}},1]\times[0,1]$ must exit
the set. Also, the inward normal velocity along the sides $\{1\}\times
(0,1]$ and $(u_{\mathcal{M}},1)\times\{1\}$ is positive, and thus no trajectory can exit from these sides. Now, if a trajectory exits $(u_{\mathcal{M}},1]\times[0,1]$
from the side $\{u_{\mathcal{M}}\}\times(0,1)$, then it enters the set $\mathcal{X}$, and therefore (<ref>)
is a consequence of (<ref>) in this case.
If instead a trajectory exits $(u_{\mathcal{M}},1]\times[0,1]$
from the side $(0,1)\times\{0\}$, then we directly obtain (<ref>).
From (<ref>) and (<ref>) we obtain (<ref>),
as desired.
We now prove (<ref>), namely we show that
\begin{equation}\label{ch2089ghvbdflpoiuytr}
{\mbox{for all~$(u_0,v_0)\in \mathcal{Y}$ we have that~$(u(t), v(t))\to (0,1)$
as~$t\to +\infty$.}}\end{equation}
To this end, we observe that $(u_s, v_s)$ (if $0<ac<1$) and $(0,0)$ are not in $\mathcal{Y}$.
Moreover, no trajectory starting in $\mathcal{Y}$ converges to $(u_s, v_s)$ (if $0<ac<1$), nor to $(0,0)$,
since $\mathcal{Y}$ does not contain points on $\mathcal{M}$.
In addition, recalling (<ref>), we have that
there are no limit cycles in $\mathcal{Y}$.
As a consequence, by the
Poincaré-Bendixson Theorem (see e.g. [113]),
we have that
every trajectory starting in $\mathcal{Y}$ either goes to $(0,1)$ or exits the set at some point of $\partial \mathcal{Y}$.
In the latter case,
since no trajectory can cross $\mathcal{M}$, the only possibility is that
the trajectory exits $\mathcal{Y}$ at some
point $(u,v)\in\partial\big( [0,1]\times[0,1]\big)$.
We notice that,
since $\gamma$ is increasing, we have
that $\gamma(u)>0$ for all $u>0$. As a consequence,
\begin{equation}\label{ch2ptouy988787}
{\mbox{if~$(u,v)\in\mathcal{Y}$,
then~$v>\gamma(u)>0$ for all~$u>0$.}}\end{equation}
On the other hand, thanks to Proposition <ref>,
the only possibility for
a trajectory to exit $\mathcal{Y}$ at some
point $(u,v)\in\partial\big( [0,1]\times[0,1]\big)$ is that $u>0$ and $v=0$,
which would contradict (<ref>).
As a result, the only remaining possibility is that a trajectory
in $\mathcal{Y}$ converges to $(0,1)$, which proves (<ref>).
Hence, the proof of (<ref>) is complete as well.
With this, we are now able to complete the
proof of Theorem <ref>:
The statement in (iv) of Theorem <ref>
is a direct consequence of the parametrization
of the manifold $\mathcal{M}$,
as given by Proposition <ref> for $ac\neq 1$ and by Proposition <ref> for $ac=1$,
and the characterization of the sets $\mathcal{B}$
and $\mathcal{E}$, as given by Proposition <ref>.
§ DEPENDENCE OF THE DYNAMICS
ON THE PARAMETERS
In this section we discuss the dependence on the
parameters involved in the system (<ref>).
The dynamics of the system in (<ref>) depends qualitatively
only on $ac$, but of course the position of the saddle equilibrium
and the size and shape of the basins of attraction depend quantitatively
upon all the parameters. Here
we perform a detailed analysis of each parameter separately.
We notice that the system in (<ref>) does
not present a variational structure, due to the presence
of the
terms $-acu$ in the first equation and $-au$
in the second one, that are of first order in $u$.
Thus, the classical methods of the calculus of variations cannot
be used and we have to make use of ad-hoc arguments,
of geometrical flavour.
§.§ Dependence of the dynamics on the parameter $c$
We start by studying the dependence on $c$, that represents the losses
(soldier deaths and missing births) caused
by the war for the first population with respect to the second one.
In the following proposition, we will express the dependence on $c$
of the basin of attraction $\mathcal{E}$
in (<ref>) by writing explicitly $\mathcal{E}(c)$.
With the notation in (<ref>), we have that
(i) If $0< c_1 < c_2$, then
$\mathcal{E}(c_2) \subset \mathcal{E}(c_1)~$.
(ii) It holds that
\begin{equation}\label{ch2242}
\underset{c>0}{\bigcap} \, \mathcal{E}(c)= (0,1]\times \{0\}.
\end{equation}
We remark that the behavior for $c$ sufficiently
small is given by (i) of Theorem <ref>: in this case,
there is a saddle point inside the domain $[0,1]\times [0,1]$,
thus $\mathcal{E}(c)\neq (0,1]\times [0,1]$.
On the other hand, as $c\to +\infty$, the set $\mathcal{E}(c)$ gets
smaller and smaller until the first population has no
chance of victory if the second population has a positive size.
The parameter $c$ appears only in the first equation and it is
multiplied by $-au$, that is always negative in the domain we
are interested in. Thus, the dependence on $c$ is
independent of the other parameters.
As one would expect, Proposition <ref>
tells us that
the greater the cost of the war for the first population,
the fewer possibilities of victory there are for it.
(i) We take $c_2 > c_1 > 0$.
According to Theorem <ref>,
we denote by $(u_s^{2}, v_s^{2})$
the coexistence equilibrium for the parameter ${c_2}$ if $a{c_2}<1$,
otherwise we set $(u_s^{2}, v_s^{2})=(0,0)$;
similarly, we call $(u_s^{1}, v_s^{1})$ the coexistence equilibrium
for the parameter ${c_1}$ if $a{c_1}<1$, and in the other cases we
set $(u_s^{1}, v_s^{1})=(0,0)$.
We observe that
\begin{equation}\label{ch2poiuytrewq}
v_s^{2} \leq v_s^{1}.\end{equation}
Indeed, if $a{c_2}<1$ then also $a c_1<1$, and therefore,
using the characterization in (<ref>),
\begin{equation*}
\frac{\partial v_s}{\partial c} = \frac{-a(1+\rho {c}) -\rho
(1-a{c}) }{(1+\rho{c})^2} = \frac{-a-\rho}{(1+\rho{c})^2} <0,
\end{equation*}
which implies (<ref>) in this case.
If instead $a{c_2} \geq 1$ then the inequality in (<ref>)
is trivially satisfied, thanks to (i), (ii) and (iii)
of Theorem <ref>.
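The sign computation for $\partial v_s/\partial c$ can be checked by a centered finite difference, assuming the explicit expression $v_s=(1-ac)/(1+\rho c)$ that follows from solving $u+v=1-ac$ and $u=\rho c v$ at the coexistence equilibrium.

```python
# Finite-difference check of the sign of d v_s / d c, assuming
# v_s = (1-ac)/(1+rho c) at the coexistence equilibrium (valid while ac < 1).
a, rho = 0.8, 2.0  # illustrative values

def v_s(c):
    return (1 - a*c) / (1 + rho*c)

c, h = 0.5, 1e-6
fd = (v_s(c + h) - v_s(c - h)) / (2*h)   # centered difference
exact = (-a - rho) / (1 + rho*c)**2      # formula from the text
print(fd, exact)  # both negative and close to each other
```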
Now, in the notation
of Propositions <ref> (if $ac\neq1$)
and <ref> (if $ac=1$), thanks to the characterization
in (<ref>), if we prove that
\begin{equation}\label{ch21907}
\gamma_{c_1}(u)>\gamma_{c_2}(u)
\quad \text{for any} \ u\in (0,\min \{ u_{\mathcal{M}}^{c_1}, u_{\mathcal{M}}^{c_2} \}],
\end{equation}
then the inclusion in (i) is shown. Hence, we now
focus on the proof of (<ref>).
To this end, we observe that, since $\gamma_{c_1}$
is an increasing function, its inverse function $f_{c_1}:[0, v_{\mathcal{M}}^{c_1}]\to
[0,u_{\mathcal{M}}^{c_1}]$ is well defined and is increasing as well.
In an analogous fashion, we define $f_{c_2}(v)$ as the inverse
of $\gamma_{c_2}(u)$. We point out
that the inequality in (<ref>)
holds true if
\begin{equation}\label{ch2eq:f{c_1}<f{c_2}}
f_{c_1}(v)<f_{c_2}(v) \quad \text{for any} \ v\in (0,\min \{ v_{\mathcal{M}}^{c_1}, v_{\mathcal{M}}^{c_2} \}].
\end{equation}
Accordingly, we will show (<ref>)
in three steps.
First, in light of (<ref>),
we show that
\begin{equation}\label{ch2yoiyjgsdn}
{\mbox{the claim in~\eqref{ch2eq:f{c_1}<f{c_2}} is true
in the interval~$[v_s^2, v_s^1]\cap(0,+\infty)$.}}\end{equation}
For this, if $ac_1\geq 1$, then also $ac_2\ge1$, and therefore $v_s^1=
v_s^2=0$, thanks to (ii) and (iii)
in Theorem <ref>.
Accordingly, in this case the interval $[v_s^2, v_s^1]$
coincides with the singleton $\{ 0 \}$, and so there is nothing to prove.
Otherwise, we
recall that the curve $u=\sigma(v)$, given in (<ref>) and
representing the points where $\dot{v}=0$, is independent of $c$.
Moreover, thanks to formula (<ref>) in
Corollary <ref> if $ac<1$,
formula (<ref>) in
Corollary <ref>
if $ac>1$,
formula (<ref>) in Corollary <ref> if $ac=1$
(see also Figure <ref>),
we have that $f_{c_1}(v)< \sigma(v)$ for $v<v_s^1$
and $f_{c_2}(v)> \sigma(v)$ for $v> v_s^2$, which
proves (<ref>) in the open
interval $(v_s^2,v_s^1)$.
Moreover, it holds that
\begin{equation}\label{ch2po0954757yuiewshfa}
f_{c_1}(v_s^1)<f_{c_2}(v_s^1)
\end{equation}
and
(if $ac_2<1$, otherwise $v_s^2=0$ and
there is no need to perform this computation)
\begin{equation}\label{ch2856sagd}
f_{c_1}(v_s^2)<f_{c_2}(v_s^2).
\end{equation}
This completes the proof of (<ref>).
Next we show that
\begin{equation}\label{ch2yoiyjgsdn2}
{\mbox{the claim in~\eqref{ch2eq:f{c_1}<f{c_2}} is true
in the interval~$(0, v_s^{2})$.}}\end{equation}
If $ac_2\geq 1$, then $v_s^2=0$, and so the claim in (<ref>)
is trivial. Hence, we suppose that $ac_2<1$ and we argue by contradiction,
supposing that for some $ v\in (0, v_s^{2})$
it holds that $f_{c_1}( v) \geq f_{c_2}( v)$.
As a consequence, we can define
$$ \bar v:=\sup\big\{v\in (0, v_s^{2})\;{\mbox{ s.t. }}\;
f_{c_1}( v) \geq f_{c_2}( v)\big\}.$$
We observe that, by continuity, we have that $f_{c_1}(\bar{v})
= f_{c_2}(\bar{v})$, and therefore, by (<ref>), we
see that $\bar{v} < v_s^{2}$.
As a result, since $f_{c_1}(v) <f_{c_2}(v)$ for every $
v\in(\bar{v},v_s^{2}]$, it holds that
\begin{equation} \label{ch2eq:f{c_1}'<f{c_2}'}
\frac{d f_{c_1}}{d v}(\bar{v}) < \frac{d f_{c_2}}{d v}(\bar{v}).
\end{equation}
On the other hand, we can compute the derivatives by
exploiting the fact that $\gamma_{c_1}$ and $\gamma_{c_2}$
follow the flow of the system (<ref>).
Namely, setting $\bar{u}:=f_{c_1}(\bar{v})$,
we have that
\begin{eqnarray*}
&&\frac{d f_{c_1}}{d v}(\bar{v})
= \frac{\dot{u}}{\dot{v}} (\bar{v})
= \frac{\bar{u} (1- \bar{u} - \bar{v}) - a{c_1} \bar{u}}{\rho \bar{v} (1- \bar{u} - \bar{v}) - a \bar{u}}
\\
{\mbox{and }}&&
\frac{d f_{c_2}}{d v}(\bar{v})
= \frac{\dot{u}}{\dot{v}} (\bar{v})
= \frac{\bar{u} (1- \bar{u} - \bar{v}) - a{c_2} \bar{u}}{\rho\bar{v} (1- \bar{u} - \bar{v}) - a \bar{u}}.
\end{eqnarray*}
Now, since $\bar{v}\in[0,v_s^1)$, we have that $
\rho\bar{v} (1- \bar{u} - \bar{v}) - a \bar{u}>0$ (recall (<ref>)
and notice that $(\bar u,\bar v)\in \mathcal{A}_4$).
This and the fact that ${c_2}>{c_1}$ give that
$$ \frac{d f_{c_1}}{d v}(\bar{v})>
\frac{d f_{c_2}}{d v}(\bar{v}),$$
which is in contradiction with (<ref>),
thus establishing (<ref>).
Now we prove that
\begin{equation}\label{ch2yoiyjgsdn3}
{\mbox{the claim in~\eqref{ch2eq:f{c_1}<f{c_2}} is true in the
interval~$(v_s^{1},\min \{ v_{\mathcal{M}}^{c_1}, v_{\mathcal{M}}^{c_2} \}]$.}}\end{equation}
Indeed, if $ac_1< 1$, we argue towards a contradiction, supposing that
there exists $v>v_s^1$ such that $f_{c_1}(v) \ge f_{c_2}(v)$. Hence, we can define
$$\widehat v:= \inf \big\{ v>v_s^1\; {\mbox{ s.t. }}\;f_{c_1}(v) \ge f_{c_2}(v)
\big\},$$
and we deduce from (<ref>) that $\widehat v>v_s^1$.
By continuity, we see that $f_{c_1}(\widehat{v}) = f_{c_2}(\widehat{v})$.
Therefore, since $f_{c_1}(v) < f_{c_2}(v)~$ for any $ v < \widehat{v}$, we conclude that
\begin{equation} \label{ch2eq:f{c_1}'<f{c_2}'2BIS}
\frac{d f_{c_1}}{dv}(\widehat{v}) > \frac{d f_{c_2}}{d v}(\widehat{v}).
\end{equation}
On the other hand, setting $\widehat{u}:=f_{c_1}(\widehat{v})$ and
exploiting (<ref>), we get that
\begin{align*}
&\frac{d f_{c_1}}{d v}(\widehat{v})
= \frac{\dot{u}}{\dot{v}} (\widehat{v})
= \frac{\widehat{u} (1- \widehat{u} - \widehat{v}) - a{c_1}\widehat{u}}{\rho\widehat{v}
(1- \widehat{u} -\widehat{v}) - a\widehat{u}}
\\
{\mbox{ and }}\qquad& \frac{d f_{c_2}}{d v}(\widehat{v})
= \frac{\dot{u}}{\dot{v}} (\widehat{v})
= \frac{\widehat{u} (1- \widehat{u} -\widehat{v}) - a{c_2} \widehat{u}}{
\rho\widehat{v} (1- \widehat{u} -\widehat{v}) - a \widehat{u}}.
\end{align*}
Moreover, recalling (<ref>) and (<ref>),
we have that $(f_{c_1}(\widehat{v}),\widehat{v})$
and $(f_{c_2}(\widehat{v}),\widehat{v})$ belong to the interior
of $\mathcal{A}_2$,
and therefore $\rho\widehat{v} (1- \widehat{u} -\widehat{v}) - a \widehat{u} <0$.
This and the
fact that ${c_2}>{c_1}$ give that
$$\frac{d f_{c_1}}{d v}(\widehat{v})<
\frac{d f_{c_2}}{d v}(\widehat{v}),$$
which is in contradiction with (<ref>).
This establishes (<ref>) in this case.
If instead $ac_1\ge1~$, then also $ac_2\ge1$, and
therefore we have that $(u_s^{2}, v_s^{2})=(u_s^{1}, v_s^{1})=(0,0)$.
In this setting, we use Propositions <ref>
and <ref>
to say that at $v=0$ the function $f_{c_1}$ is tangent to the line $(\rho-1+ac_1)v-au=0$, while $f_{c_2}$ is tangent
to $(\rho-1+ac_2)v-au=0$.
Now, since
\begin{equation*}
\frac{\rho-1}{a} + c_1 < \frac{\rho-1}{a} + c_2,
\end{equation*}
we have that for positive $v$ the second line is above the first one.
Also, thanks to the fact that $f_{c_1}$ and $f_{c_2}$
are tangent to these lines, we conclude that
there exists $\varepsilon>0$
such that
\begin{equation}\label{ch22026}
f_{c_1}(v)<f_{c_2}(v) \quad \text{for any } \ v<\varepsilon.
\end{equation}
Now, we suppose by contradiction that there exists some $v> 0$
such that $f_{c_1}(v) \ge f_{c_2}(v)$. Hence, we can define
$$\tilde v:= \inf \big\{ v>0\; {\mbox{ s.t. }}\;f_{c_1}(v) \ge f_{c_2}(v)
\big\}.$$
In light of (<ref>), we have that $\tilde{v}\ge\varepsilon>0$.
Moreover, by continuity, we see that $f_{c_1}(\tilde{v}) = f_{c_2}(\tilde{v})$.
Accordingly, since $f_{c_1}(v) < f_{c_2}(v)$ for any $v < \tilde{v}$, it must hold that
\begin{equation} \label{ch2eq:f{c_1}'<f{c_2}'2}
\frac{d f_{c_1}}{dv}(\tilde{v}) > \frac{d f_{c_2}}{d v}(\tilde{v}).
\end{equation}
On the other hand, setting $\tilde{u}:=f_{c_1}(\tilde{v})$ and
exploiting (<ref>), we see that
\begin{align*}
&\frac{d f_{c_1}}{d v}(\tilde{v})
= \frac{\dot{u}}{\dot{v}} (\tilde{v})
= \frac{\tilde{u} (1- \tilde{u} - \tilde{v}) - a{c_1} \tilde{u}}{\rho\tilde{v} (1- \tilde{u} - \tilde{v}) - a \tilde{u}}
\\
{\mbox{ and }}\qquad& \frac{d f_{c_2}}{d v}(\tilde{v})
= \frac{\dot{u}}{\dot{v}} (\tilde{v})
= \frac{\tilde{u} (1- \tilde{u} - \tilde{v}) - a{c_2} \tilde{u}}{\rho\tilde{v} (1- \tilde{u} - \tilde{v}) - a \tilde{u}}.
\end{align*}
Now, thanks to (<ref>) and (<ref>),
we have that $(f_{c_1}(\tilde{v}),\tilde{v})$
and $(f_{c_2}(\tilde{v}),\tilde{v})$ belong to the interior
of $\mathcal{A}_2$,
and therefore $\rho\tilde{v} (1- \tilde{u} - \tilde{v}) - a \tilde{u} <0$. This and the
fact that ${c_2}>{c_1}$ give that
$$\frac{d f_{c_1}}{d v}(\tilde{v})<
\frac{d f_{c_2}}{d v}(\tilde{v}),$$
This completes the proof of (<ref>).
Gathering together (<ref>), (<ref>) and (<ref>),
we obtain (<ref>), as desired.
(ii) We first show that for all $\varepsilon>0$ there exists $c_{\varepsilon}>0$ such that for all $c\ge c_{\varepsilon}$ it holds that
\begin{equation} \label{ch2859}
\mathcal{E}(c) \subset \big\{
(u,v)\in [0,1]\times [0,1]\;{\mbox{ s.t. }}\; v < \varepsilon u \big\}.
\end{equation}
The inclusion in (<ref>) is also equivalent to
\begin{equation} \label{ch2255}
\big\{ (u,v)\in [0,1]\times [0,1]\;{\mbox{ s.t. }}\; v >
\varepsilon u \big\} \subset \mathcal{B}(c),
\end{equation}
and the strict inequality is justified by the fact that $\mathcal{E}(c)$
and $\mathcal{B}(c)$ are separated by $\mathcal{M}$, according
to Proposition <ref>.
We now establish the inclusion in (<ref>).
For this, let
\begin{equation}\label{ch2255BIS}
\mathcal{T}_{\varepsilon}:= \big\{
(u,v)\in [0,1]\times [0,1] \;{\mbox{ s.t. }}\; v > \varepsilon u \big\}. \end{equation}
Now, we can choose $c$ large enough such that the condition $ac\geq 1$
is fulfilled. In this way, thanks to (ii) and (iii)
of Theorem <ref>,
the only equilibria are the points $(0,0)$ and $(0,1)$.
Now, the component of the
velocity in the inward normal direction to $\mathcal{T}_{\varepsilon}$
on the side $\{v=\varepsilon u\}$ is given by
\begin{eqnarray*}
&&(\dot u,\dot v)\cdot \frac{(-\varepsilon,1)}{\sqrt{1+\varepsilon^2}}=
\frac{\dot{v}-\varepsilon \dot{u}}{\sqrt{1+\varepsilon^2}}
\\&&\qquad =\frac1{{\sqrt{1+\varepsilon^2}}}\big(
\rho v(1-u-v) -au -\varepsilon u(1-u-v) + \varepsilon acu \big)\\
&&\qquad =\frac1{{\sqrt{1+\varepsilon^2}}}\big[
(\rho v-\varepsilon u)(1-u-v) + (\varepsilon c -1)au\big]\\
&&\qquad =\frac1{{\sqrt{1+\varepsilon^2}}}\big[
(\rho \varepsilon u-\varepsilon u)(1-u-\varepsilon u) + (\varepsilon c -1)au\big] ,
\end{eqnarray*}
that is positive for
\begin{equation}\label{ch2possibly}
c > c_{\varepsilon} := \frac{2\varepsilon(1+\rho) +a}{\varepsilon a}.
\end{equation}
This says that no trajectory in $\mathcal{T}_{\varepsilon}$ can exit $\mathcal{T}_{\varepsilon}$ from the side $\{v=\varepsilon u\}$.
The other parts of $\partial \mathcal{T}_{\varepsilon}$ belong to $\partial\big([0,1]\times[0,1]\big)$,
but not to $[0,1]\times \{0 \}$.
As a consequence, by Proposition <ref>,
\begin{equation}\label{ch2po123097}
{\mbox{every trajectory in~$\mathcal{T}_{\varepsilon}$ belongs to~$\mathcal{T}_{\varepsilon}$
for all~$t\ge0$.}}\end{equation}
From this, (<ref>) and the
Poincaré-Bendixson Theorem (see e.g. [113]), we conclude that
the $\omega$-limit of any trajectory starting in $\mathcal{T}_{\varepsilon}$
can be either an equilibrium or a union of (finitely many)
equilibria and non-closed orbits connecting these equilibria.
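The positivity of the inward normal component on the side $\{v=\varepsilon u\}$ can be checked numerically. The sketch below is ours, not part of the argument; the velocity field is the one read off the displayed formulas, and the sample values of $a$, $\rho$, $\varepsilon$ are chosen by us.

```python
def inward_component(u, eps, a, c, rho):
    # component of (du/dt, dv/dt) along the inward normal (-eps, 1)/sqrt(1+eps^2)
    # of T_eps, evaluated on the side {v = eps*u}
    v = eps * u
    du = u * (1.0 - u - v) - a * c * u
    dv = rho * v * (1.0 - u - v) - a * u
    return (dv - eps * du) / (1.0 + eps**2) ** 0.5

a, rho, eps = 0.5, 2.0, 0.1
c_eps = (2.0 * eps * (1.0 + rho) + a) / (eps * a)   # threshold from the display above
c = c_eps + 1.0
# positive along the whole side, so no trajectory exits T_eps through it
assert all(inward_component(k / 100.0, eps, a, c, rho) > 0.0 for k in range(1, 101))
```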
Now, we claim that, possibly taking $c$ larger in (<ref>),
\begin{equation}\label{ch2po1230972}
\mathcal{M}\subset \big([0,1]\times[0,1]\big)\setminus\mathcal{T}_{\varepsilon}.
\end{equation}
Indeed, suppose by contradiction that there exists $(\tilde u,\tilde v)\in\mathcal{M}
\cap\mathcal{T}_{\varepsilon}$. Then, in light of (<ref>), a trajectory passing
through $(\tilde u, \tilde v)$ and converging to $(0,0)$ has to be entirely contained
in $\mathcal{T}_{\varepsilon}$.
On the other hand, by Propositions <ref> and <ref>,
we know that at $u=0$ the manifold $\mathcal{M}$ is tangent to the
line $(\rho-1+ac)v-au=0$.
Hence, if we choose $c$ large enough such that
$$ \frac{a}{\rho-1+ac}<\varepsilon,$$
we obtain that this line is below the line $v=\varepsilon u$, thus reaching
a contradiction. This establishes (<ref>).
From (<ref>), we deduce that,
given $(\tilde u,\tilde v)\in\mathcal{T}_{\varepsilon}$,
and denoting $\omega_{(\tilde u,\tilde v)}$ the $\omega$-limit of $(\tilde u,\tilde v)$,
\begin{equation}\label{ch2podnjewbf215}
\omega_{(\tilde u,\tilde v)}\neq \{(0,0)\},
\end{equation}
provided that $c$ is taken large enough.
Furthermore, $\omega_{(\tilde u,\tilde v)}$ cannot consist of the two equilibria $(0,0)$
and $(0,1)$ and non-closed orbits connecting these equilibria, since $(0,1)$
is a sink. As a consequence of this and (<ref>),
we obtain that $\omega_{(\tilde u,\tilde v)}=\{(0,1)\}$
for any $(\tilde u,\tilde v)\in\mathcal{T}_{\varepsilon}$, provided that $c$
is large enough.
Thus, recalling (<ref>) and (<ref>), this proves (<ref>),
and therefore (<ref>).
Now, using (<ref>), we see that for every $\varepsilon>0$,
$$\underset{c>0}{\bigcap} \mathcal{E}(c)\subseteq \mathcal{E}(c_{\varepsilon})
\subseteq \big\{
(u,v)\in [0,1]\times [0,1]\; {\mbox{ s.t. }}\; v < \varepsilon u \big \}.$$
Hence,
\begin{equation*}
\underset{c>0}{\bigcap} \mathcal{E}(c) \subseteq \underset{\varepsilon>0}{\bigcap} \big\{
(u,v)\in [0,1]\times [0,1]\; {\mbox{ s.t. }}\; v < \varepsilon u \big \} = (0,1] \times \{0\},
\end{equation*}
which implies (<ref>), as desired.
§.§ Dependence of the dynamics on the parameter $\rho$
Now we analyze the dependence of the dynamics on the parameter $\rho$, that is, the fitness of the second population $v$ relative to that of the first one $u$.
In the following proposition, we will make explicit the dependence on $\rho$ by
writing $\mathcal{E}(\rho)$ and $\mathcal{B}(\rho)$.
With the notation in (<ref>) and (<ref>), we have the following statements:
(i) When $\rho=0$, for any $v \in [0,1]$ the point $(0,v)$ is an equilibrium. If $v\in(1-ac,1]$, then it corresponds to a strictly
negative eigenvalue and a null one.
If instead $v\in[0,1-ac)$, then it corresponds to a strictly
positive eigenvalue and a null one. Moreover,
\begin{equation}\label{ch2first}
\mathcal{B}(0)= \varnothing,\end{equation}
and for any $\varepsilon< ac/2~$ and
any $\delta< \varepsilon c/2$ we have that
\begin{equation}\label{ch2first2}
[0,1]\times [0,1-ac) \subseteq \mathcal{E}(0) \subseteq \mathcal{T}_{\varepsilon, \delta} ,
\end{equation}
where
\begin{equation}\label{ch2TEPS}
\mathcal{T}_{\varepsilon, \delta}:=\big\{ (u,v)\in[0,1] \times [0,1]\;
{\mbox{ s.t. }}\; \delta v-\varepsilon u \leq \delta(1-\varepsilon) \big\}.
\end{equation}
(ii) For any $\varepsilon< ac/3~$ and any $\delta< \varepsilon c/2$ it holds that
\begin{equation*}
\underset{a>0}{\bigcap} \ \underset{{0<\rho<a/3}}{\bigcup} \mathcal{E}(\rho) \subseteq \mathcal{T}_{\varepsilon, \delta} ,
\end{equation*}
where $ \mathcal{T}_{\varepsilon, \delta}$ is defined in (<ref>).
(iii) It holds that
\begin{equation}\label{ch2ir4t4y4y}
\underset{\omega>0}{\bigcap} \, \underset{{\rho>\omega}}{\bigcup} \mathcal{E}(\rho) = (0,1] \times \{0\}.
\end{equation}
We point out that
the case $\rho =0$ is not covered by Theorem <ref>.
As a matter of fact, the dynamics in this case is qualitatively very different from all the other cases. Indeed, for $\rho =0$ the domain $[0,1] \times [0,1]$ is not divided into $\mathcal{E}$ and $\mathcal{B}$, since additional attracting equilibria appear on the line $\{0\}\times(0,1)$. Thus, even if the second population cannot grow, it still has some chance of victory.
As soon as $\rho~$ is positive, on the line $u=0$ only the equilibrium $(0,1)$ survives, and it attracts all the points that were going to the line $\{0\}\times(0,1)$ for $\rho =0$.
When $\rho \to +\infty$, the basin of attraction of $(0,1)$ tends to invade the domain, thus the first population tends to have almost no chance of victory and the second population tends to win.
However, the dependence on the parameter $\rho~$ is not monotone, as one might expect, at least not in $[0,+\infty)\times[0,+\infty)$.
Indeed, by performing some simulations, one can find values ${\rho}_1$ and ${\rho}_2$, with $0<{\rho}_1 < {\rho}_2$, and a point $(u^*, v^*)\in [0,+\infty)\times[0,+\infty)$ such that $(u^*, v^*) \notin \mathcal{E}({\rho}_1)$ and $(u^*, v^*) \in \mathcal{E}({\rho}_2)$, see Figure <ref>.
$a=0.2$, $c=0.1$, and $\rho=3$
$a=0.2$, $c=0.1$, and $\rho=7$
Figure (a) and Figure (b) show the trajectory starting from the point $(u_0,v_0)=(1.4045, 1.1)$ for $\rho=3$ and $\rho=7$ respectively. For $\rho=3$ the trajectory leads to the equilibrium $(0,1)$, so $(u_0,v_0)\notin \mathcal{E}(\rho=3)$, while for $\rho=7$ the second population goes to extinction in finite time, so $(u_0,v_0)\in \mathcal{E}(\rho=7)$.
This means that, sometimes, a large fitness value for the second population may lead to extinction, while a small value leads to victory. This is counterintuitive, but can be easily explained: the parameter $\rho~$ is multiplied by the term $1-u-v$, which is negative beyond the counterdiagonal of the square $[0,1]\times[0,1]$. So in the model (<ref>), as well as in any model of Lotka-Volterra type, the population that grows faster is also the one that suffers the most from overpopulation. Moreover, the usual dynamics of Lotka-Volterra models is altered by the presence of the term $-au$, and this leads to the lack of monotonicity that we observe.
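The experiment of the figure can be reproduced with a basic integrator. The sketch below is ours: the vector field $\dot u = u(1-u-v)-acu$, $\dot v = \rho v(1-u-v)-au$ is the one used throughout the computations above, the initial datum and the parameters are those of the figure caption, and integration is stopped at the (possible) extinction time of $v$.

```python
def field(u, v, a, c, rho):
    # du/dt = u(1-u-v) - a*c*u,  dv/dt = rho*v*(1-u-v) - a*u
    return u * (1.0 - u - v) - a * c * u, rho * v * (1.0 - u - v) - a * u

def trajectory(u, v, a, c, rho, dt=0.01, steps=5000):
    # classical fourth-order Runge-Kutta integration up to time dt*steps
    for _ in range(steps):
        k1u, k1v = field(u, v, a, c, rho)
        k2u, k2v = field(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v, a, c, rho)
        k3u, k3v = field(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v, a, c, rho)
        k4u, k4v = field(u + dt * k3u, v + dt * k3v, a, c, rho)
        u += dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        if v <= 0.0:          # extinction of the second population in finite time
            return u, 0.0
    return u, v

# the figure's experiment: same initial datum, two values of rho
print(trajectory(1.4045, 1.1, a=0.2, c=0.1, rho=3.0))
print(trajectory(1.4045, 1.1, a=0.2, c=0.1, rho=7.0))
```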
We now give the proof of Proposition <ref>:
For $\rho=0$, the equation $\dot{v}=0$ collapses to $u=0$. Since for $u=0$ also the equation $\dot{u}=0$ is satisfied,
each point on the line $u=0$ is an equilibrium.
Calculating the eigenvalues for the points $(0, \tilde{v})$, with $\tilde v\in[0,1]$,
using the Jacobian matrix in (<ref>),
one gets the values $0$ and $1-ac-\tilde{v}$.
Accordingly, this entails that, if $\tilde{v} < 1-ac$, the point $(0, \tilde{v})$ corresponds to a strictly positive eigenvalue
and a null one, while if $\tilde{v}>1-ac$
then $(0, \tilde{v})$ corresponds to a strictly negative
eigenvalue and a null one.
These considerations prove the first statement in (i).
We notice also that in the whole square $(0,1]\times[0,1]$ we have $\dot{v}= -au < 0$, hence there is no trajectory that can go to $(0,1)$, and there is no cycle.
In particular this implies (<ref>).
Now, we observe that
on the side $[0,1]\times \{1\}$ the inward normal derivative is
given by $-\dot{v}=au$, which is nonnegative, and therefore
a trajectory cannot exit the square on this side.
Similarly, along the side $\{1\}\times [0,1]$ the inward normal derivative is
given by $-\dot{u}=v+ac$, which is positive, hence
a trajectory cannot exit the square on this side either.
The side $\{0\}\times[0,1]$ is made of equilibrium points at which the first population $u$ is extinct, while on the side $(0,1]\times\{0\}$ we have extinction of the population $v$.
Thus a trajectory either converges to one of the equilibria on the
side $\{0\}\times[0,1]$, or exits $[0,1]\times[0,1]$ through the side $(0,1]\times\{0\}$.
In particular, since $\{0\}\times [0,1-ac)$ consists of repulsive equilibria, we have that
$$ [0,1]\times[0, 1-ac)\subseteq \mathcal{E}(0),~$$
that is, trajectories starting in $[0,1]\times[0, 1-ac)$ go to the extinction of $v$.
This proves the first inclusion in (<ref>).
To prove the second inclusion in (<ref>),
we first show that
\begin{equation}\label{ch2pot43 yb9 49y}
{\mbox{points in~$\big([0,1]\times[0,1]\big)\setminus\mathcal{T}_{\varepsilon, \delta}$
are mapped into~$\big([0,1]\times[0,1]\big)\setminus\mathcal{T}_{\varepsilon, \delta}$ itself.}}\end{equation}
Indeed, on the line $\{\delta v-\varepsilon u = \delta(1-\varepsilon)\}$ we have that the inward-pointing normal derivative is given by
\begin{equation}\begin{split}\label{ch2fagiano}
&\frac1{\sqrt{\varepsilon^2+\delta^2}} \big(\delta \dot{v}- \varepsilon \dot{u}\big)
=\frac1{\sqrt{\varepsilon^2+\delta^2}}\big( -\delta a u - \varepsilon u(1-u-v) +\varepsilon ac u\big)\\
&\qquad\qquad =\frac{ u}{\sqrt{\varepsilon^2+\delta^2}}\left[\varepsilon\left(-1+ac+u+\frac{\varepsilon}{\delta}u +1-\varepsilon\right)-\delta a\right] \\
&\qquad\qquad =\frac{ 1}{\sqrt{\varepsilon^2+\delta^2}}\left[u^2\varepsilon\left( 1+\frac{\varepsilon}{\delta}\right) + u(\varepsilon ac -\delta a -\varepsilon^2)\right].
\end{split}\end{equation}
The first term is always positive; the second one is positive for the choice
\begin{equation*}
\delta < \frac{\varepsilon c}{2} \quad {\mbox{ and }}\quad\varepsilon< \frac{ac}{2}.
\end{equation*}
Hence, under the assumption in (i), on the line $\{\delta v-\varepsilon u = \delta(1-\varepsilon)\}$ the inward-pointing normal derivative is positive, which implies
that no trajectories in $\big([0,1]\times[0,1]\big)\setminus\mathcal{T}_{\varepsilon, \delta}$ can exit
from $\big([0,1]\times[0,1]\big)\setminus\mathcal{T}_{\varepsilon, \delta}$. This establishes (<ref>).
As a consequence of (<ref>), we obtain also the second
inclusion (<ref>), as desired.
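The sign of this inward-pointing normal derivative can also be verified numerically. The sketch below is ours, not part of the proof; the $\rho=0$ field is the one used above, and the sample values of $a$ and $c$ are chosen by us.

```python
def normal_component(u, eps, delta, a, c):
    # rho = 0 here: du/dt = u(1-u-v) - a*c*u and dv/dt = -a*u.
    # Point on the line {delta*v - eps*u = delta*(1-eps)} with abscissa u:
    v = (eps / delta) * u + 1.0 - eps
    du = u * (1.0 - u - v) - a * c * u
    dv = -a * u
    return (delta * dv - eps * du) / (eps**2 + delta**2) ** 0.5

a, c = 0.8, 0.5
eps = 0.9 * a * c / 2.0        # eps < ac/2
delta = 0.9 * eps * c / 2.0    # delta < eps*c/2
# positive along the side (u in (0, delta], where the line stays in the square)
assert all(normal_component(k * delta / 100.0, eps, delta, a, c) > 0.0
           for k in range(1, 101))
```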
We claim that
\begin{equation}\label{ch2poi8562eq2dvfkgjlkuykyuasdawre}
\big([0,1]\times[0,1]\big)\setminus\mathcal{T}_{\varepsilon, \delta}
\subseteq \mathcal{B}(\rho),\end{equation}
for all $0<\rho< a/3$.
To this end, we observe that, in order to determine
the sign of the inward pointing normal derivative on the side $\{\delta v -\varepsilon u = \delta(1-\varepsilon)\}$, by (<ref>) we have to
check that $\delta \dot{v}- \varepsilon\dot{u}\ge0$. In order to simplify the calculation, we use the change of coordinates $x:=u$ and $y:=1-v$.
In this way, one needs to verify that $\delta \dot{y}+\varepsilon \dot{x} \leq 0$
on the line $\{\delta y + \varepsilon x = \delta \varepsilon\}$. For this, we compute
\begin{equation}
\label{ch2cneq0}
\begin{split}
\delta \dot{y}+\varepsilon \dot{x} &= \delta \rho (y-1)(y-x)+\delta a x + \varepsilon x (y-x) - \varepsilon acx, \\
& = -\delta \rho (1-y) y+x \big(
\delta \rho (1-y) +\delta a + \varepsilon (y-x) -\varepsilon ac \big) ,\\
& = -\delta \rho (1-y) y + x \big( \delta \rho -\delta \rho y
+\delta a + \varepsilon y-\varepsilon x -\varepsilon a c \big)\\
&\le x \big( \delta \rho -\delta \rho y
+\delta a + \varepsilon y-\varepsilon x -\varepsilon a c \big) .
\end{split}
\end{equation}
Now we choose $\delta<\varepsilon c / 2$ and we
recall that $\rho < a/3$. Moreover, we notice
$$y= \varepsilon-\frac{\varepsilon}{\delta}x\le\varepsilon,$$
and therefore $\varepsilon y \leq \varepsilon^2$. Thus, we have that
\begin{eqnarray*}
-\delta \rho y + \delta \rho +\delta a + \varepsilon y-\varepsilon x
-\varepsilon a c \le \frac{\varepsilon ac }{6} + \frac{\varepsilon ac }{2} +
\varepsilon^2
-\varepsilon a c= \varepsilon\left( \frac{2}{3} ac + \varepsilon
- a c\right)
\end{eqnarray*}
that is negative for $\varepsilon < ac/3$. Plugging this
information into (<ref>), we obtain
that $\delta \dot{y}+\varepsilon \dot{x} \leq 0~$, as desired.
This proves that
trajectories in $
\big([0,1]\times[0,1]\big)\setminus\mathcal{T}_{\varepsilon, \delta}$
cannot exit $
\big([0,1]\times[0,1]\big)\setminus\mathcal{T}_{\varepsilon, \delta}$.
This, the fact that there are no cycles in $[0,1]\times[0,1]$
and the Poincaré-Bendixson Theorem (see e.g. [113])
give that
trajectories in $\big([0,1]\times[0,1]\big)\setminus\mathcal{T}_{
\varepsilon, \delta}$ converge to $(0,1)$,
that is the only equilibrium in $\big([0,1]\times[0,1]\big)\setminus
\mathcal{T}_{\varepsilon, \delta}$. Hence, (<ref>)
is established.
From (<ref>)
we deduce that
$$ \mathcal{E}(\rho)\subseteq\mathcal{T}_{\varepsilon, \delta}$$
for all $0<\rho<a/3$, which implies the desired result
in (ii).
We consider $\varepsilon_1>\varepsilon_2 >0$ to be
taken sufficiently small in what follows,
and we show that
there exists $R>0$, depending on $\varepsilon_1$
and $\varepsilon_2$, such that for
all $\rho\geq R$ it holds that
\begin{equation}\label{ch2qeruyjy8790}
\mathcal{R}_{\varepsilon_1, \varepsilon_2}:=
[0, 1-\varepsilon_1]\times [\varepsilon_2,1] \subseteq \mathcal{B}(\rho).\end{equation}
For this, we first observe that
\begin{equation}\label{ch2po089egdgdkjfkghjighywrv58465v8}
{\mbox{no trajectory starting
in~$\mathcal{R}_{\varepsilon_1, \varepsilon_2}$
can exit the set.}}\end{equation}
Indeed, looking at the velocity fields on the sides $\{0\}\times
[\varepsilon_2, 1]$ and $[0,1-\varepsilon_1]\times\{1\}$,
one sees that no trajectory in $\mathcal{R}_{\varepsilon_1, \varepsilon_2}$ can exit from these sides.
Moreover, on the side $\{1-\varepsilon_1\} \times [\varepsilon_2, 1]$, the normal inward derivative is
\begin{equation*}
-\dot{u}=-[u(1-u-v)-acu] = -(1-\varepsilon_1)(\varepsilon_1-v-ac),
\end{equation*}
and this is positive for $\varepsilon_1\leq ac$ (which is fixed
from now on).
In addition, on the side $[0,1-\varepsilon_1]\times\{ \varepsilon_2 \}$, the inward normal derivative is
\begin{eqnarray*}
&& \dot{v}= [\rho v(1-u-v)-au] =
\rho \varepsilon_2(1-u-\varepsilon_2) - au\\&&\qquad
\ge \rho \varepsilon_2(\varepsilon_1-\varepsilon_2) - a(1-
\varepsilon_1),
\end{eqnarray*}
and this is positive for
\begin{equation}\label{ch2rhodef}
\rho > \frac{a(1-\varepsilon_1)}{\varepsilon_2(\varepsilon_1-\varepsilon_2)}.
\end{equation}
These observations complete the proof
of (<ref>).
From (<ref>),
and the Poincaré-Bendixson Theorem (see e.g. [113]), we have that
all the trajectories in the interior
of $\mathcal{R}_{\varepsilon_1, \varepsilon_2}$
must converge to
either an equilibrium or a union of (finitely many)
equilibria and non-closed orbits connecting these equilibria.
In addition, we claim that, if $0<ac<1$, recalling (<ref>)
and possibly enlarging $\rho$
in (<ref>),
\begin{equation}\label{ch2possibly2}
(u_s,v_s)\notin \mathcal{R}_{\varepsilon_1, \varepsilon_2}.
\end{equation}
Indeed, we have that $u_s \to 1-ac$ and $v_s \to 0$,
as $\rho \to +\infty$. Hence, we can choose $\rho$ large enough
such that the statement in (<ref>) is satisfied.
As a consequence of (<ref>),
we get that all the trajectories in the interior
of $\mathcal{R}_{\varepsilon_1, \varepsilon_2}$
must converge to
the equilibrium $(0,1)$,
and this establishes (<ref>).
Accordingly, (<ref>) entails that, for $\varepsilon_1>
\varepsilon_2>0$ sufficiently small, there exists $R>0$, depending on $\varepsilon_1$
and $\varepsilon_2$, such that for
all $\rho\geq R$
\begin{equation*}
\mathcal{E}(\rho)
\subset\big((0,1]\times[0,\varepsilon_2)\big)\cup
\big( (1-\varepsilon_1,1]\times(\varepsilon_2,1]\big).
\end{equation*}
This implies (<ref>), as desired.
§.§ Dependence of the dynamics on the parameter $a$
The consequences of the lack of variational structure
become even more extreme when we observe the dependence
of the dynamics on
the parameter $a$, that is the aggressiveness of the first population
towards the other.
Throughout this section, we take $\rho>0$ and $c>0$,
and we perform our analysis
taking into account the limit cases $a\to0$ and $a\to+\infty$.
We start analyzing the dynamics of (<ref>)
in the case $a=0$.
For $a=0$
the system (<ref>) has the following properties:
i) The system has the equilibrium $(0,0)$, which is a source,
and a straight line of equilibria $(u,1-u)$, for all $u\in[0,1]$,
which correspond to a strictly negative eigenvalue and a null one.
ii) Given any $(u(0), v(0))\in (0,1)\times(0,1)$ we have that
\begin{equation}\label{ch2form}
(u(t), v(t)) \to(\bar{u}, 1-\bar{u})\quad{\mbox{ as }}t\to+\infty,
\end{equation}
where $\bar{u}$ satisfies
\begin{equation}\label{ch21650}
\frac{v(0) }{u^{\rho}(0)}\bar{u}^{\rho} + \bar{u} -1=0.
\end{equation}
iii) The equilibrium $(u_s^0, v_s^0)$ given in (<ref>)
has a stable manifold, which can be written as the graph of an
increasing smooth function $\gamma_0:[0,u_{\mathcal{M}}^0]\to[0,v_{\mathcal{M}}^0]$,
for some $(u_{\mathcal{M}}^0,v_{\mathcal{M}}^0)\in\big(\{1\}\times[0,1]\big)\cup
\big((0,1]\times\{1\}\big)$, such that $\gamma_0(0)=0$
and $\gamma_0(u_{\mathcal{M}}^0)=v_{\mathcal{M}}^0$.
More precisely,
\begin{equation}\label{ch2def:gamma0}
\gamma_0 (u):= \frac{v_s^0}{(u_s^0)^{\rho}} u^{\rho} \quad
{\mbox{ and }}\quad
u_{\mathcal{M}}^0:=\min \left\{1, \frac{u_s^0}{(v_s^0)^{\frac{1}{{\rho}}}}\right\}, \end{equation}
being $(u_s^0,v_s^0)$ defined in (<ref>).
We point out that formula (<ref>)
says that for $a=0$ every point in the interior
of $[0,1]\times[0,1]$ tends to a coexistence equilibrium.
The shape of the trajectories depends on $\rho$, being
convex in the case $\rho>1$,
a straight line in the case $\rho=1$, and concave in the case $\rho< 1$. This means that if the second population $v$ is alive at the beginning, then it does not get extinct in finite time.
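The limit value $\bar u$ can be computed directly from the equation above, since its left-hand side is strictly increasing in $\bar u$, equal to $-1$ at $\bar u=0$ and positive at $\bar u=1$ whenever $v(0)>0$. A small sketch of ours, solving the equation by bisection for a sample initial datum chosen by us:

```python
def limit_u(u0, v0, rho, tol=1e-12):
    # solve (v0/u0^rho) * u^rho + u - 1 = 0 for u in (0,1) by bisection;
    # the left-hand side is increasing in u, negative at 0 and positive at 1
    K = v0 / u0**rho
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if K * mid**rho + mid - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ubar = limit_u(0.3, 0.2, rho=2.0)
# ubar solves the equation, and (ubar, 1-ubar) lies on the line u+v=1
assert abs((0.2 / 0.3**2) * ubar**2 + ubar - 1.0) < 1e-9
assert 0.0 < ubar < 1.0
```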
For $a=0$, we look for the equilibria of the system (<ref>)
by studying when $\dot{u}=0$ and $\dot{v}=0$. It is easy to see that
the point $(0,0)$ and all the points on the line $u+v=1$ are the only equilibria.
The Jacobian of the system (see (<ref>), with $a=0$) at the point $(0,0)$
has two positive eigenvalues, $1$ and $\rho~$, and therefore $(0,0)$ is a source.
Moreover, the characteristic polynomial at a point $(\tilde{u}, \tilde{v})$ on the line $u+v=1$
is given by
$$(\lambda+\tilde{u})(\lambda+\rho \tilde{v})-\rho \tilde{u}\tilde{v}
=\lambda(\lambda+\tilde{u} +\rho \tilde{v}),$$
and therefore, the eigenvalues are $0$ and $-\tilde{u} -\rho \tilde{v}<0$.
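This eigenvalue computation can be cross-checked numerically. The sketch below is ours, not part of the proof; it evaluates the Jacobian of the $a=0$ field at a sample point of our choosing on the line $u+v=1$ and verifies that the determinant vanishes (one null eigenvalue) while the trace equals $-\tilde u-\rho\tilde v$ (the other eigenvalue).

```python
def jacobian(u, v, rho):
    # Jacobian of the a = 0 field (u(1-u-v), rho*v*(1-u-v))
    return [[1.0 - 2.0 * u - v, -u],
            [-rho * v, rho * (1.0 - u - 2.0 * v)]]

u, rho = 0.35, 2.0
J = jacobian(u, 1.0 - u, rho)          # sample point on the line u+v=1
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
tr = J[0][0] + J[1][1]
assert abs(det) < 1e-12                # one null eigenvalue
assert abs(tr + u + rho * (1.0 - u)) < 1e-12   # the other is -(u + rho*v)
```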
(ii) We point out that
when $a=0$
\begin{equation}\label{ch2intprim678}
{\mbox{$\mu(t):=v(t)/u^{\rho} (t)$ is a prime integral for the system.}}
\end{equation}
Indeed,
\begin{equation*}
\mu'= \frac{\dot{v}u^\rho - {\rho} u^{{\rho}-1} \dot{u} v }{u^{2{\rho}}}=u^{{\rho}-1} \frac{{\rho}uv(1-u-v)- {\rho} uv(1-u-v) }{u^{2{\rho}}}=0.
\end{equation*}
As a result, the trajectory starting at a point $(u(0),v(0) )\in(0,1)\times(0,1)$ lies on the curve
\begin{equation}\label{ch2intprim}
v(t)=\frac{v(0)}{ u^{\rho}(0)}\, u^{\rho} (t).\end{equation}
Moreover, the trajectory starting at $(u(0),v(0) )$ is asymptotic as $t\to+\infty$
to an equilibrium on this curve. Since $(0,0)$ is a source, the only possibility
is that the trajectory starting at $(u(0),v(0) )$ converges to an
equilibrium $(\bar{u}, \bar{v})$
such that
$\bar{v}=1-\bar{u}$. This entails that
\begin{equation*}
1-\bar{u} =\bar{v}=(v(0)/ u^{\rho}(0)) \bar{u}^{\rho},
\end{equation*}
which is exactly equation (<ref>).
(iii) We observe that the point $(u_s^0, v_s^0)$ given in (<ref>)
lies on the straight line $u+v=1$, and therefore, thanks to (i), it is
an equilibrium of the system (<ref>), which corresponds
to a strictly negative eigenvalue $-u_s^0-\rho v_s^0$ and a null one.
Hence, by the Center Manifold Theorem
(see e.g. Theorem 1 on page 16
of [31]), the point $(u_s^0, v_s^0)$
has a stable manifold, which has dimension $1$ and
is tangent to the eigenvector of the linearized system associated to the
strictly negative eigenvalue $-u_s^0-\rho v_s^0$.
Also, the graphicality and the monotonicity
properties follow from the strict sign of $\dot{u}$
and $\dot{v}$. The smoothness of the graph follows from the smoothness
of the center manifold.
The fact that $\gamma_0(0)=0$ is a consequence of the monotonicity
property of ${u}$ and ${v}$, which ensures that the limit as $t\to-\infty$
exists, and of the fact that this limit has to lie on the prime integral
in (<ref>).
The fact that $\gamma_0(u_{\mathcal{M}}^0)=v_{\mathcal{M}}^0$
follows from formula (<ref>) and the monotonicity property.
Formula (<ref>)
follows from the fact that any trajectory has to lie on the prime integral
in (<ref>).
To state our next result concerning the dependence of the basin of
attraction $\mathcal{E}$ defined in (<ref>) on the parameter $a$,
we give some notation.
We will make explicit the dependence of the sets $\mathcal{E}$
and $\mathcal{B}$
on the parameter $a$, by writing
explicitly $\mathcal{E}(a)$ and $\mathcal{B}(a)$, and we will call
\begin{equation*}
\mathcal{E}_0:=\underset{a'>0}{\bigcap} \, \underset{a'>a>0}{\bigcup} \mathcal{E}(a)
\end{equation*}
and
\begin{equation}\label{ch2def:Einfty}
\mathcal{E}_{\infty}:=\underset{a'>0}{\bigcap} \, \underset{a>a'}{\bigcup} \mathcal{E}(a).
\end{equation}
In this setting, we have the following statements:
(i) We have that
\begin{equation} \label{ch2char:E0}
\begin{split}
& \big\{ (u,v)\in [0,1]\times [0,1]\;{\mbox{ s.t. }}\;
v < \gamma_{0}(u) \,\text{ if } \, u\in[0, u_{\mathcal{M}}^0]\\
&\qquad\qquad\qquad\qquad{\mbox{and }} \;
v \leq 1 \, \text{ if }\, u\in(u_{\mathcal{M}}^0, 1] \big\}\\&
\qquad \subseteq
\mathcal{E}_0 \subseteq \\&\big\{ (u,v)\in [0,1]\times [0,1]\;{\mbox{ s.t. }}\; v \le \gamma_{0}(u) \,\text{ if } \, u\in[0, u_{\mathcal{M}}^0]\\
&\qquad\qquad\qquad\qquad{\mbox{and }} \;
v \leq 1 \, \text{ if }\, u\in(u_{\mathcal{M}}^0, 1] \big\},
\end{split}
\end{equation}
where $\gamma_0$ and $u_{\mathcal{M}}^0$ are given in (<ref>).
(ii) It holds that
\begin{equation}\label{ch2asdfgert019283}
\mathcal{S}_c\subseteq
\mathcal{E}_{\infty} \subseteq\overline{\mathcal{S}_c},
\end{equation}
where
\begin{equation} \label{ch2def:S_c}
\mathcal{S}_c:=\left\{ (u,v)\in[0,1] \times [0,1]\;
{\mbox{ s.t. }}\; v-\frac{u}{c}<0 \right\}.
\end{equation}
We point out that the set $\mathcal{E}_0$ in (<ref>)
does not coincide with
the basin of attraction for the system (<ref>) when $a=0$.
Indeed, as already mentioned, formula (<ref>)
in Proposition <ref>
says that for $a=0$ every point in the interior
of $[0,1]\times[0,1]$ tends to a coexistence equilibrium and thus
if $v(0)\neq0$ then $v(t)$ does not get extinct in finite time.
Also, as $a\to+\infty$, we have that the set $\mathcal{E}_{\infty}$
is determined by $\mathcal{S}_c$, defined in (<ref>),
that depends only on the parameter $c$.
The statement in (i) of Proposition <ref>
will be a direct consequence of the following result.
Recalling the function $\gamma$ introduced in
Propositions <ref> and <ref>,
we express here the dependence on the parameter
$a$ by writing $\gamma_a$, $u_a$, $v_a$,
$u_s^a$, $u_{\mathcal{M}}^a$.
We will also denote by $\mathcal{M}^a$ the stable manifold
of the point $(u_s, v_s)$ in (<ref>), and by $\mathcal{M}^0$
the stable manifold
of the point $(u_s^0, v_s^0)$ in (<ref>).
The key lemma is the following:
For all $u\in[0,1]$, we have that $\gamma_a(u) \to \gamma_0(u)$
uniformly as $a\to0$, where $\gamma_0(u)$
is the function defined in (<ref>).
Since we are dealing with the limit as $a$ goes to zero,
throughout this proof we will always assume that
we are in the case $ac<1$.
Also, we denote by $\phi_p^{a}(t)$ the flow at time $t$
of the point $p\in[0,1]\times[0,1]$ associated with (<ref>),
and similarly by $\phi_p^{(0)}(t)$ the flow at time $t$
of the point $p$ associated with (<ref>) when $a=0$.
With a slight abuse of notation,
we will also write $\phi_p^{a}(t)=(u_a(t),v_a(t))$,
with $p=(u_a(0),v_a(0))$.
Let us start by proving that
\begin{equation}\label{ch2340}
\mathcal{M}^a\cap\big([0,u_s^0]\times[0,v_s^0]\big)\to
\mathcal{M}^0\cap\big([0,u_s^0]\times[0,v_s^0]\big) \quad {\mbox{ as }}a\to0.
\end{equation}
For this, we claim that, for every $\varepsilon>0$, if
\begin{equation}\label{ch2aqwzero}
(u_a(0))^2+(v_a(0))^2 \ge\frac{\varepsilon^2}{4}
\end{equation}
and $t>0$ is such that
\begin{equation}\label{ch2zz}
\big| (u_a(t),v_a(t) ) - (u_s^a, v_s^a)\big| > \frac{\varepsilon}{2},
\end{equation}
then
\begin{equation} \label{ch2z}
|\dot{u}_a(t)|^2 +|\dot{v}_a(t)|^2 > \frac{\varepsilon^{4}}{C_0},
\end{equation}
for some $C_0>0$, depending only on $\rho$ and $c$.
Indeed, by (v) of
Theorem <ref> and (<ref>),
the trajectory $(u_a(t), v_a(t))$ belongs to the set $[0, u_s^a]
\times [0, v_s^a] \setminus B_{\frac{\varepsilon}{2}}(u_s^a, v_s^a)~$.
Moreover, we claim that
\begin{equation}\label{ch2123456poi}
1-ac-u_a(t)-v_a(t)\ge \frac{\varepsilon \sqrt{2}}{4},
\end{equation}
for any $t>0$ such that (<ref>) is satisfied.
To prove this, we recall that $(u_s^a, v_s^a)$ lies on the straight
line $\ell$ given by $v=-u+1-ac$
when $0<ac<1$ (see (<ref>)).
Clearly, there is no point of
the set $[0, u_s^a]
\times [0, v_s^a] \setminus B_{\frac{\varepsilon}{2}}(u_s^a, v_s^a)~$
lying on $\ell$, and we notice that the points
in the set $[0, u_s^a]
\times [0, v_s^a] \setminus B_{\frac{\varepsilon}{2}}(u_s^a, v_s^a)~$
with minimal distance from $\ell$ are given by $p:=(u_s^a-\varepsilon/2,
v_s^a)$ and $q:=(u_s^a, v_s^a-\varepsilon/2)$.
Also, the distance of the point $p$ from the straight line $\ell$
is given by $\frac{\varepsilon}2\cdot \sin\frac\pi4=
\frac{\varepsilon \sqrt{2}}{4}$.
Thus, the distance between $(u_a(t),v_a(t) )$
and the line $\ell$ is greater than
$\frac{\varepsilon \sqrt{2}}{4}$,
and this implies (<ref>).
As a consequence of (<ref>), we obtain that
\begin{equation}\label{ch2pggdeyw087968754}
(\dot{u}_a(t))^2 =
\big(u_a(t)(1-ac-u_a(t)-v_a(t)) \big)^2 >
(u_a(t))^2\left(\frac{\varepsilon \sqrt{2}}{4}\right)^2
\end{equation}
and that
\begin{equation}\begin{split}\label{ch2pggdeyw087968754BIS}
(\dot{v}_a(t))^2 &= \big(\rho v_a(t)(1-u_a(t)-v_a(t))-au_a(t) \big)^2 \\
&\ge
\left(\rho v_a(t)\left(ac+\frac{\varepsilon \sqrt{2}}{4}\right)-au_a(t) \right)^2.
\end{split}\end{equation}
Now, if $u_a(t)\ge\rho cv_a(t)$, then from (<ref>)
and (<ref>)
we obtain that
\begin{eqnarray*}&&
(\dot{u}_a(t))^2+(\dot{v}_a(t))^2 \ge
(u_a(t))^2\left(\frac{\varepsilon \sqrt{2}}{4}\right)^2\\&&\qquad\qquad
\ge \frac{(u_a(t))^2}2\left(\frac{\varepsilon \sqrt{2}}{4}\right)^2
+\frac{(\rho cv_a(t))^2}2\left(\frac{\varepsilon \sqrt{2}}{4}\right)^2
\\&&\qquad\qquad
\ge \min \{1,\rho^2c^2\} \frac{\varepsilon^2}{16}
\big((u_a(t))^2+(v_a(t))^2\big)\\&&\qquad\qquad
\ge \min \{1,\rho^2c^2\} \frac{\varepsilon^2}{16}
\big((u_a(0))^2+(v_a(0))^2\big)\\&&\qquad\qquad
\ge \min \{1,\rho^2c^2\} \frac{\varepsilon^4}{64},
\end{eqnarray*}
which proves (<ref>) in this case.
If instead $u_a(t)<\rho cv_a(t)$, we use (<ref>)
to see that
\begin{eqnarray*}&&
(\dot{u}_a(t))^2+(\dot{v}_a(t))^2 \ge
\left(\rho v_a(t)\left(ac+\frac{\varepsilon \sqrt{2}}{4}\right)-au_a(t) \right)^2
\\&&\qquad\qquad =
\left(\frac{\varepsilon \sqrt{2}\rho v_a(t)}{4}+a\big(\rho cv_a(t)-u_a(t)\big) \right)^2
\ge
\left(\frac{\varepsilon \sqrt{2}\rho v_a(t)}{4}\right)^2\\&&\qquad\qquad
\ge
\frac12\left(\frac{\varepsilon \sqrt{2}\rho v_a(t)}{4}\right)^2
+\frac12\left(\frac{\varepsilon \sqrt{2} u_a(t)}{4c}\right)^2\\&&\qquad\qquad
\ge \min \left\{\rho^2,\frac1{c^2}\right\}\frac{\varepsilon^2}{16}
\big( (u_a(t))^2 +(v_a(t))^2\big)\\&&\qquad\qquad
\ge \min \left\{\rho^2,\frac1{c^2}\right\}\frac{\varepsilon^2}{16}
\big( (u_a(0))^2 +(v_a(0))^2\big)\\&&\qquad\qquad
\ge \min \left\{\rho^2,\frac1{c^2}\right\}\frac{\varepsilon^4}{64},
\end{eqnarray*}
which completes the proof of (<ref>).
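Both branches above use the elementary bound $x^2\ge\tfrac12 x^2+\tfrac12 y^2$, valid whenever $x\ge y\ge 0$; a quick numerical sanity check of this step (illustrative only, with random sample values):

```python
import random

def dominating_square_bound(x, y):
    # If x >= y >= 0, then x^2 >= x^2/2 + y^2/2, since x^2/2 >= y^2/2.
    return x**2 >= 0.5 * x**2 + 0.5 * y**2

random.seed(0)
for _ in range(10_000):
    y = random.uniform(0.0, 1.0)
    x = y + random.uniform(0.0, 1.0)    # enforce x >= y >= 0
    assert dominating_square_bound(x, y)
```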
Now, for any $\eta>0$, we define
$$\mathcal{P}_\eta:=\left\{ (u,v)\in[0,1]\times[0,1]
\;{\mbox{ s.t. }}\; v=\frac{v_s^0-\eta'}{(u_s^0+\eta')^\rho}u^\rho\;
{\mbox{ with }} |\eta'|\le\eta
\right\}.$$
Given $\varepsilon>0$, we define
\begin{equation}\label{ch2mettiin}
{\mbox{$\eta(\varepsilon)$ to be the smallest~$\eta$
for which~$\mathcal{P}_\eta \supset B_{\varepsilon}(u_s^0,v_s^0)$.}}
\end{equation}
We remark that
\begin{equation}\label{ch2ricorda}
\lim_{\varepsilon\to0}\eta(\varepsilon)=0.
\end{equation}
Also, given $\delta>0$, we define a tubular
neighborhood $\mathcal{U}_\delta$
of $\mathcal{M}^0$ as
$$ \mathcal{U}_\delta :=\bigcup_{q\in\mathcal{M}^0} B_\delta(q).$$
Furthermore, we define
\begin{equation}\label{ch2lometto}
{\mbox{$\delta(\varepsilon)$ the smallest~$\delta$ such that
$\mathcal{U}_\delta\supset \mathcal{P}_{\eta(\varepsilon)}$.}}
\end{equation}
Recalling (<ref>), we have that
\begin{equation}\label{ch2ricorda2}
\lim_{\varepsilon\to0}\delta(\varepsilon)=0.
\end{equation}
We remark that, as $a\to0$, the point $(u_s^a, v_s^a)$ in (<ref>),
which is a saddle point for the dynamics of (<ref>)
when $ac<1$ (recall Theorem <ref>),
tends to the point $(u_s^0,v_s^0)$ in (<ref>), that belongs
to the line $v+u=1$, which is an equilibrium point for the dynamics of (<ref>)
when $a=0$, according to Proposition <ref>.
As a consequence, for every $\varepsilon>0$, there exists $a_\varepsilon>0$
such that if $a\in(0,a_\varepsilon)$,
\begin{equation}\label{ch2qwertyuisdfghjxcvbn}
\big|(u_s^a,v_s^a)-(u_s^0,v_s^0)\big|\le\frac{\varepsilon}{8}.
\end{equation}
This gives that the intersection of $\mathcal{M}^a$ with $B_{\varepsilon/2}(u_s^0,v_s^0)$
is nonempty.
Furthermore, since $\gamma_a(0)=0$, in light of Proposition <ref>,
we have that the intersection of $\mathcal{M}^a$ with the ball $B_{\varepsilon/2}$
centered at the origin is also nonempty. Hence, since $\mathcal{M}^a$ is connected,
there exists $p_{\varepsilon,a}\in \mathcal{M}^a\cap
\partial B_{\varepsilon/2}$.
We also notice that
\begin{equation}\label{ch2swqdbvsdjvksdv097654}
\mathcal{M}^a=\phi_{p_{\varepsilon,a}}^{a}(\R).\end{equation}
In addition,
\begin{equation}\label{ch2a12ewgerheh}
\phi_{p_{\varepsilon,a}}^{a}\big((-\infty,0]\big)\subset B_{\varepsilon/2}.
\end{equation}
Also, since the origin belongs to $\mathcal{M}^0$, we have that $
B_{\varepsilon/2}\subset \mathcal{U}_\varepsilon$. From
this and (<ref>), we deduce that
\begin{equation}\label{ch2lsdgrdhtrjb yrweur748v6348900}
\phi_{p_{\varepsilon,a}}^{a}\big((-\infty,0]\big)
\subset \mathcal{U}_\varepsilon.\end{equation}
Now, we let $C_0$ be as in (<ref>) and
we claim that there exists $t_{\varepsilon,a}\in(0,3\sqrt{C_0}\varepsilon^{-2})$
such that
\begin{equation}\label{ch2esisteuntempo}
\phi_{p_{\varepsilon,a}}^{a}(t_{\varepsilon,a})\in\partial B_{3\varepsilon/4}(u_s^0,v_s^0).
\end{equation}
To check this, we argue by contradiction and we suppose that
$$ \phi_{p_{\varepsilon,a}}^{a}\big((0,3\sqrt{C_0}\varepsilon^{-2})\big)
\cap B_{3\varepsilon/4}(u_s^0,v_s^0)=\varnothing.$$
Then, for every $t\in(0,3\sqrt{C_0}\varepsilon^{-2})$,
recalling also (<ref>),
$$ \big|\phi_{p_{\varepsilon,a}}^{a}(t)-(u_s^a,v_s^a)\big|\ge
\big|\phi_{p_{\varepsilon,a}}^{a}(t)-(u_s^0,v_s^0)\big| -
\big|(u_s^a,v_s^a)-(u_s^0,v_s^0)\big|\ge
\frac{3\varepsilon}4-\frac\varepsilon8>\frac{\varepsilon}2, $$
and consequently (<ref>) is satisfied for every $t\in(0,3\sqrt{C_0}
\varepsilon^{-2})$.
Moreover, we observe that $p_{\varepsilon,a}$ satisfies (<ref>),
and therefore, by (<ref>),
$$ | \dot{u}_a(t)|^2 +| \dot{v}_a(t)|^2
> \frac{\varepsilon^{4}}{C_0},$$
for all $t\in(0,3\sqrt{C_0}\varepsilon^{-2})$,
where we used the notation ${\phi}_{p_{\varepsilon,a}}^{a}(t)=
(u_a(t),v_a(t))$, so that $p_{\varepsilon,a}=(u_a(0),v_a(0))$.
As a result, since $\dot u_a$ and $\dot v_a$ are nonnegative along this trajectory,
$$ \big( \dot{u}_a(t)+ \dot{v}_a(t)\big)^2\ge
(\dot{u}_a(t))^2+(\dot{v}_a(t))^2>\frac{\varepsilon^{4}}{C_0},$$
and thus
$$ \dot{u}_a(t)+ \dot{v}_a(t)>\frac{\varepsilon^{2}}{\sqrt{C_0}}.$$
This leads to
\begin{eqnarray*}&&
u_a\left(\frac{3\sqrt{C_0}}{\varepsilon^2}\right)+v_a\left(\frac{3\sqrt{C_0}}{\varepsilon^2}\right)
=u_a(0)+v_a(0)
+\int_0^{\frac{3\sqrt{C_0}}{\varepsilon^2}}\big( \dot{u}_a(t)+ \dot{v}_a(t)\big)\,dt
\\&&\qquad\quad
\ge u_a(0)+v_a(0)+
\int_0^{\frac{3\sqrt{C_0}}{\varepsilon^2}}\frac{\varepsilon^{2}}{\sqrt{C_0}}\,dt
=u_a(0)+v_a(0) +3\ge 3,
\end{eqnarray*}
which forces the trajectory to exit the region $[0,1]\times[0,1]$, where $u+v\le2$.
This is against the assumption that $p_{\varepsilon, a}\in\mathcal{M}^a$,
and therefore the proof of (<ref>) is complete.
In light of (<ref>), we can set $q_{\varepsilon,a}:=
\phi_{p_{\varepsilon,a}}^{a}(t_{\varepsilon,a})$,
and we deduce from (<ref>) that $q_{\varepsilon,a}\in\mathcal{P}_{\eta(\varepsilon)}$.
We also observe that the set $\mathcal{P}_\eta$ is invariant for the
flow with $a=0$, thanks to (<ref>). These observations
give that $\phi_{q_{\varepsilon,a}}^{0}(t)\in\mathcal{P}_{\eta(\varepsilon)}$
for all $t\in\R$.
As a result, using (<ref>), we conclude that
\begin{equation}\label{ch2ASDFGHJtergyfhgj}
\phi_{q_{\varepsilon,a}}^{0}(t)\in\mathcal{U}_{\delta(\varepsilon)}\quad
{\mbox{ for all }} t\in\R.
\end{equation}
In addition, by the continuous dependence of the flow
on the parameter $a$ (see e.g. Section 2.4
in [60],
or Theorem 2.4.2 in [63]),
$$ \big|\phi_{q_{\varepsilon,a}}^{0}(t)-\phi_{q_{\varepsilon,a}}^{a}(t)\big|\le\varepsilon$$
for all $t\in[-3\sqrt{C_0}\varepsilon^{-2},0]$, provided that $a$
is sufficiently small, possibly in dependence of $\varepsilon$.
This fact and (<ref>) entail that
$$ \phi_{q_{\varepsilon,a}}^{a}(t)\in\mathcal{U}_{\delta(\varepsilon)+\varepsilon}\quad
{\mbox{ for all }} t\in[-3\sqrt{C_0}\varepsilon^{-2},0].$$
In particular, for all $t\in[0,t_{\varepsilon,a}]$,
\begin{equation}\label{ch2andatosu}
\phi_{p_{\varepsilon,a}}^{a}(t)=\phi_{q_{\varepsilon,a}}^{a}(t-t_{\varepsilon,a})
\in\mathcal{U}_{\delta(\varepsilon)+\varepsilon}.\end{equation}
We now claim that
for all $t\ge t_{\varepsilon,a}$,
\begin{equation}\label{ch2qwertyuiop}
\phi_{p_{\varepsilon,a}}^{a}(t)\in B_{\varepsilon}(u_s^a,v_s^a).
\end{equation}
Indeed, this is true when $t=t_{\varepsilon,a}$ thanks to (<ref>)
and (<ref>). Hence, since
the trajectory $\phi_{p_{\varepsilon,a}}^{a}(t)$
is contained in the domain where $\dot{u}\ge0$ and $\dot{v}\ge0$,
thanks to (<ref>), we deduce that (<ref>) holds true.
From (<ref>) and (<ref>), we conclude that
$$ \phi_{p_{\varepsilon,a}}^{a}(t)\in B_{2\varepsilon}(u_s^0,v_s^0),$$
for all $t\ge t_{\varepsilon,a}$.
Using this, (<ref>)
and (<ref>), we obtain that
$$ \phi_{p_{\varepsilon,a}}^{a}(\R)\subset\mathcal{U}_{\delta(\varepsilon)+2\varepsilon}.$$
This and (<ref>) give that (<ref>) is satisfied, as desired.
One can also show that
\begin{equation}\label{ch2340BIS}
\mathcal{M}^a\cap\big([u_s^0, u_{\mathcal{M}}^0]\times[v_s^0,v_{\mathcal{M}}^0]\big)\to
\mathcal{M}^0\cap\big([u_s^0, u_{\mathcal{M}}^0]\times[v_s^0,v_{\mathcal{M}}^0]\big) \quad {\mbox{ as }}a\to0.
\end{equation}
The proof of (<ref>) is similar to that of (<ref>),
just replacing $p_{\varepsilon,a}$ with $(u_{\mathcal{M}}^a,v_{\mathcal{M}}^a)$ (in this case
the analysis near the origin is simply omitted since the trajectory
has only one limit point).
With (<ref>) and (<ref>)
the proof of Lemma <ref> is thereby complete.
Now we are ready to give the proof of Proposition <ref>:
(i) We call $\mathcal{G}$ the right-hand-side of (<ref>), that is
\begin{eqnarray*}
\mathcal{G}&:=& \big\{ (u,v)\in [0,1]\times [0,1]\;{\mbox{ s.t. }}\;
v < \gamma_{0}(u) \,\text{ if } \, u\in[0, u_{\mathcal{M}}^0]\\
&&\qquad\qquad\qquad\qquad{\mbox{and }} \;
v \leq 1 \, \text{ if }\, u\in(u_{\mathcal{M}}^0, 1] \big\},
\end{eqnarray*}
and we aim at proving that $\mathcal{G}\subseteq\mathcal{E}_0
\subseteq\overline{\mathcal{G}}$.
For this, we observe that, by Lemma <ref>,
$\gamma_a(u)$ converges to $\gamma_{0}(u)$ pointwise as $a\to0$.
In particular, $u_{\mathcal{M}}^a\to u_{\mathcal{M}}^0$ as $a\to0$.
Also, recalling (<ref>),
we notice that if $u_{\mathcal{M}}^0= u_s^0 / (v_s^0)^{\frac{1}{\rho}}<1$,
then $\gamma_0(u_{\mathcal{M}}^0)= 1$,
otherwise if $u_{\mathcal{M}}^0=1$
then $\gamma_0(u_{\mathcal{M}}^0)<1$, since $\gamma_0(u)$ is strictly
increasing.
Furthermore, thanks to Proposition <ref>,
we know that
the set $\mathcal{E}(a)$ is bounded from above by the graph of the function $\gamma_a(u)$ for $u\in [0, u_{\mathcal{M}}^a]$ and by the
straight line $v=1$ for $u\in(u_{\mathcal{M}}^a, 1]$ (the latter interval being nonempty when $u_{\mathcal{M}}^a<1$).
Now we claim that, for all $a'>0$,
\begin{equation}\label{ch2gocont123}
\mathcal{G} \subseteq \underset{0<a<a'}{\bigcup} \mathcal{E}(a).
\end{equation}
To show this, we take a point $(u,v)\in\mathcal{G}$.
Hence, in light of the considerations
above, we have that $(u,v)\in\mathcal{E}(a)$ for any $a$ sufficiently small,
which proves (<ref>).
From (<ref>), we deduce that
\begin{equation}\label{ch2gocont1232233}
\mathcal{G} \subseteq \underset{a'>0}{\bigcap} \, \underset{0<a<a'}{\bigcup} \mathcal{E}(a).
\end{equation}
Now we show that
\begin{equation}\label{ch2gocont12322}
\underset{a'>0}{\bigcap} \,\underset{0<a<a'}{\bigcup} \mathcal{E}(a)\subseteq
\overline{\mathcal{G}} .
\end{equation}
For this, we take
$$(\hat{u},\hat{v})\in \underset{a'>0}{\bigcap} \, \underset{0<a<a'}{\bigcup} \mathcal{E}(a),$$
then it must hold that for every $a'>0$
there exists $a<a'$ such that $(\hat{u},\hat{v})\in\mathcal{E}(a)$,
namely $\hat v < \gamma_{a}(\hat u)$ if $\hat u\in[0, u_{\mathcal{M}}^a]$ and $\hat v
\leq 1$ if $\hat u\in(u_{\mathcal{M}}^a, 1]$.
Thus, by the pointwise convergence,
we have that $ \hat{v} \le\gamma_0(\hat{u})~$ if $\hat u\in[0, u_{\mathcal{M}}^0]$ and $\hat v
\leq 1$ if $\hat u\in(u_{\mathcal{M}}^0, 1]$, which proves (<ref>).
From (<ref>) and (<ref>),
we conclude that
\begin{equation*}
\mathcal{G}\subseteq
\underset{a'>0}{\bigcap} \, \underset{0<a<a'}{\bigcup} \mathcal{E}(a) =\mathcal{E}_0\subseteq \overline{\mathcal{G}} ,
\end{equation*}
as desired.
(ii) Since we deal with the limit case as $a\to+\infty$, from now on
we suppose that $ac>1$.
We fix $\varepsilon>0$ and we consider the set
\begin{equation*}
\mathcal{S}_{\varepsilon^+} :=
\left\{ (u,v)\in [0,1]\times[0,1]\;{\mbox{ s.t. }}\;
v>u \left( \frac{1}{c}+\varepsilon \right) \right\}.
\end{equation*}
We claim that
\begin{equation}\label{ch2prova1}
\mathcal{S}_{\varepsilon^+} \subseteq \mathcal{B}(a)
\end{equation}
for $a$ big enough, possibly in dependence of $\varepsilon$.
For this,
we first analyze the component of the velocity in the inward normal directions
along the boundary of $\mathcal{S}_{\varepsilon^+}$.
On the side $\{0\}\times [0,1]$,
the trajectories cannot cross the boundary thanks to Proposition <ref>, and the same happens for the sides $[0,1]\times \{1\}$ and $\{1\} \times [\varepsilon + 1/c, 1]$.
Hence, it remains to check the sign of the normal derivative along the
side given by the straight line $v-u(\varepsilon +1/c )=0$.
We compute
\begin{align*}& (\dot{u},\dot{v})\cdot\left(-\left(\varepsilon+\frac1c\right),
1\right) =
\dot{v}- \dot{u}\left(\varepsilon+ \frac{1}{c} \right) \\
&\quad = {\rho}v(1-u-v)-au - \left(\varepsilon+ \frac{1}{c} \right)u(1-u-v) + \left(\varepsilon+ \frac{1}{c} \right) acu \\
&\quad= \Bigg[ {\rho}v - \left(\varepsilon+ \frac{1}{c} \right) u \Bigg] (1-u-v) +\varepsilon ac u .
\end{align*}
Thus, by using that $v-u(\varepsilon +1/c )=0$, we obtain that
\begin{align*}
(\dot{u},\dot{v})\cdot\left(-\left(\varepsilon+\frac1c\right),
1\right) \geq u\left[a\varepsilon c + ({\rho}-1)(1-u-v) \left( \varepsilon+ \frac{1}{c} \right) \right].
\end{align*}
Notice that $u\leq 1$ and $|1-u-v|\leq 2$, and therefore
$$ (\dot{u},\dot{v})\cdot\left(-\left(\varepsilon+\frac1c\right),
1\right) \geq u\left[a\varepsilon c -2 ({\rho}+1) \left( \varepsilon+ \frac{1}{c} \right) \right] .$$
Accordingly, the normal velocity is positive for $a \geq {a}_1$, where
\begin{equation*}
{a}_1:= 2({\rho}+1) \left( \varepsilon+ \frac{1}{c} \right)\frac{1}{\varepsilon c}.
\end{equation*}
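The role of the threshold $a_1$ can be illustrated numerically by sampling the normal component of the velocity along the line $v=u(\varepsilon+1/c)$ (a sketch, not part of the proof; the values of $\rho$, $c$, $\varepsilon$ below are arbitrary):

```python
# Normal component of the velocity field along v = u*(eps + 1/c), for the
# system  u' = u(1 - a c - u - v),  v' = rho v (1 - u - v) - a u.
rho, c, eps = 0.7, 2.0, 0.1            # arbitrary illustrative parameters
a1 = 2 * (rho + 1) * (eps + 1 / c) / (eps * c)

def normal_component(u, a):
    v = u * (eps + 1 / c)
    du = u * (1 - a * c - u - v)
    dv = rho * v * (1 - u - v) - a * u
    # inward normal direction (-(eps + 1/c), 1), up to normalization
    return dv - (eps + 1 / c) * du

a = a1 + 0.01                           # any a >= a1 should give a nonnegative component
assert all(normal_component(k / 100, a) >= 0.0 for k in range(0, 101))
```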
These considerations, together with the fact that there are no cycles
in $[0,1]\times[0,1]$ and the
Poincaré-Bendixson Theorem (see e.g. [113])
give that the $\omega$-limit set of any trajectory starting
in the interior of $\mathcal{S}_{\varepsilon^+}$
can be either an equilibrium or a union of (finitely many)
equilibria and non-closed orbits connecting these equilibria.
We remark that
\begin{equation}\label{ch2asdfgzxcv098t76re}
{\mbox{the~$\omega$-limit set of any trajectory cannot
be the equilibrium~$(0,0)$.}}\end{equation}
Indeed, if the $\omega$-limit of a trajectory
were $(0,0)$, then this trajectory would have to lie on the stable manifold of $(0,0)$,
and moreover it must be contained in $\mathcal{S}_{\varepsilon^+}$,
since no trajectory can exit $\mathcal{S}_{\varepsilon^+}$.
On the other hand, by Proposition <ref>,
we have that at $u=0$ the stable manifold is tangent to the
line through the origin described there.
Now, if we take $a$ sufficiently large, this line lies
below the line $v=u(1/c+\varepsilon)$, thus providing a contradiction.
Hence, the proof of (<ref>) is complete.
Accordingly, since $(0,1)$ is a sink, the only possibility is that
the $\omega$-limit set of any trajectory starting
in the interior of $\mathcal{S}_{\varepsilon^+}$ is the equilibrium $(0,1)$.
Namely, we have established (<ref>).
As a consequence of (<ref>), we deduce that for every $\varepsilon>0$
there exists $a_{\varepsilon}>0$ such that
\begin{equation}\label{ch2qwt5uktkjer464586897}
\underset{a\ge a_\varepsilon}{\bigcup} \mathcal{E}(a) \subseteq
\left\{ (u,v)\in [0,1]\times[0,1]\;{\mbox{ s.t. }}\;
v\le u \left( \frac{1}{c}+\varepsilon \right) \right\}.\end{equation}
In addition,
\begin{equation*}\begin{split}
& \underset{\varepsilon >0 }{\bigcap} \left\{ (u,v)\in [0,1]\times[0,1]\;{\mbox{ s.t. }}\;
v\le u \left( \frac{1}{c}+\varepsilon \right) \right\}\\&\qquad
=\left\{ (u,v)\in [0,1]\times[0,1]\;{\mbox{ s.t. }}\;
v\le \frac{u}{c} \right\}=\overline{\mathcal{S}_c}.\end{split}
\end{equation*}
This and (<ref>) entail that
\begin{equation*}
\underset{a'>0}{\bigcap} \, \underset{a>a'}{\bigcup}
\mathcal{E}(a)\subseteq \overline{\mathcal{S}_c},
\end{equation*}
which implies the second inclusion in (<ref>).
Now, to show the first inclusion in (<ref>),
for every $\varepsilon\in(0,1/c)$ we consider the set
\begin{equation*}
\mathcal{S}_{\varepsilon^-} := \left\{
(u,v)\in [0,1]\times[0,1] \;{\mbox{ s.t. }}\; v<u \left( \frac{1}{c}-\varepsilon \right) \right\}.
\end{equation*}
We claim that, for all $\varepsilon\in(0,1/c)$,
\begin{equation}\label{ch2chefus}
\mathcal{S}_{\varepsilon^-} \subseteq \mathcal{E}_{\infty}.
\end{equation}
For this, we first show that if $a$ is sufficiently large, possibly
in dependence of $\varepsilon$,
\begin{equation}\label{ch2forse33}\begin{split}&
{\mbox{every trajectory starting in the interior of~$\mathcal{S}_{\varepsilon^-}$}}\\
&{\mbox{can exit~$\mathcal{S}_{\varepsilon^-}$ only through the side~$[0,1]\times\{0\}$.}}
\end{split}\end{equation}
Indeed, on the side $\{1\}\times [0,1]$ the trajectory cannot exit the set,
thanks to Proposition <ref>.
On the side given by $v-(-\varepsilon+1/c)u=0$, the component of the velocity
in the direction of the
outward normal is
\begin{eqnarray*}&&
(\dot{u},\dot{v})\cdot\left(- \left( \frac{1}{c} -\varepsilon \right),1
\right)=
\dot{v} - \dot{u} \left( \frac{1}{c} -\varepsilon \right)\\
&& \qquad= \rho v(1-u-v) -au - \left( \frac{1}{c} -\varepsilon \right)u(1-u-v) + \left( \frac{1}{c} -\varepsilon \right)acu\\
&&\qquad=u\left[\left( \frac{1}{c} -\varepsilon \right)(\rho-1)(1-u-v) -
\varepsilon ac \right]\\
&&\qquad \le u\left[2\left( \frac{1}{c} -\varepsilon \right)(\rho+1) -
\varepsilon ac \right],
\end{eqnarray*}
which is negative if $a \geq {a}_2$, with
\begin{equation*}
{a}_2:= 2\left( \frac{1}{c} -\varepsilon \right)
\left( \rho+1 \right) \frac{1}{\varepsilon c} .
\end{equation*}
Hence, if $(u(0), v(0))\in \mathcal{S}_{\varepsilon^-}$, then
either $T_s(u(0), v(0)) <\infty$ or $(u(t), v(t))\in \mathcal{S}_{\varepsilon^-}$
for all $t\geq 0$, where the notation in (<ref>) has been used.
We also notice that,
for $a>1/c$, the points $(0,1)$ and $(0,0)$ are the only equilibria
of the system, and there are no cycles.
We have that $(0,1) \notin \overline{\mathcal{S}_{\varepsilon^-}}$
and $(0,0) \in \overline{\mathcal{S}_{\varepsilon^-}}$, thus if
\begin{equation}\label{ch2dweioterygvhsdjk}
{\mbox{$(u(t), v(t))\in
\mathcal{S}_{\varepsilon^-}$ for all~$t\geq 0$}}\end{equation}
then necessarily, as $t\to+\infty$,
\begin{equation}\label{ch2tendere}
(u(t), v(t)) \to (0,0).
\end{equation}
On the other hand, by Proposition <ref>,
we have that at $u=0$ the stable manifold is tangent to the
line through the origin described there,
and, if we take $a$ large enough, this line lies
above the line $v=u(1/c-\varepsilon)$. This says that, for sufficiently large $t$,
the trajectory must lie outside $ \mathcal{S}_{\varepsilon^-}$,
and this is in contradiction with (<ref>).
As a result of these considerations, we conclude
that if $(u(0), v(0))\in \mathcal{S}_{\varepsilon^-}$ then $T_s(u(0), v(0)) <\infty$,
which implies (<ref>).
As a consequence of (<ref>), we obtain that for every $\varepsilon\in(0,1/c)$
there exists $a_\varepsilon>0$ such that
$$\mathcal{S}_{\varepsilon^-}\subseteq\underset{a\ge a_\varepsilon}{\bigcap}
\mathcal{E}(a).$$
In particular, for all $\varepsilon\in(0,1/c)$ it holds that
\begin{equation*}
\mathcal{S}_{\varepsilon^-}\subseteq \underset{a'>0}{\bigcap} \,
\underset{a> a'}{\bigcup} \mathcal{E}(a)=\mathcal{E}_\infty,
\end{equation*}
which proves (<ref>), as desired.
Then, the first inclusion in (<ref>) plainly follows
from (<ref>).
§ ANALYSIS OF THE STRATEGIES FOR THE FIRST POPULATION
The main theorems on the winning strategy have
been stated in Subsection <ref>.
In particular, Theorem <ref> gives the characterization of the set
of points that have a winning strategy $\mathcal{V}_{\mathcal{A}}$
in (<ref>),
and Theorem <ref> establishes the non equivalence of constant
and non-constant strategies when $\rho\ne1$
(and their equivalence when $\rho=1$).
Nonetheless, in Theorem <ref> we state that
Heaviside functions are enough to construct a
winning strategy for every point in $\mathcal{V}_{\mathcal{A}}$.
In the following subsections we will give the proofs of these results.
§.§ Construction of winning non-constant strategies
We want to highlight the construction of non-constant winning
strategies for the points at which constant strategies fail.
For this, we recall the notation introduced in (<ref>),
(<ref>) and (<ref>), and we have the following statement:
Let $M>1$. Then we have:
1. For $\rho<1$, let $(u_0, v_0)$ be a point of the set
\begin{equation}\label{ch2PPDEFA}
\mathcal{P}:=\left\{ (u, v)\in [0,1]\times[0,1] \;{\mbox{ s.t. }}\; u\in [u_s^0, 1], \ \gamma_{0}(u) \leq v < \frac{u}{c} + \frac{1-\rho}{1+\rho c} \right\}. \end{equation}
Then there exist $a^*>M$, $a_*<\frac{1}{M}$,
and $T\ge0$, depending on $(u_0, v_0)$, $c$, and $\rho$, such that
the Heaviside strategy defined by
\begin{equation}\label{ch2NSJmldsf965to}
a(t) = \left\{
\begin{array}{lr}
a^*, & {\mbox{ if }} t<T, \\
a_*, & {\mbox{ if }} t\geq T,
\end{array}
\right.
\end{equation}
belongs to $\mathcal{V}_{\mathcal{A}}$.
2. For $\rho>1$, let $(u_0, v_0)$ be a point of the set
\begin{equation}\label{ch2DEFQ}
\mathcal{Q}:=\left\{ (u, v)\in [0,1]\times[0,1] \;{\mbox{ s.t. }}\;u\in [u_{\infty}, 1], \ \frac{u}{c} \leq v < \zeta(u) \right\}.
\end{equation}
Then there exist $a^*>M$, $a_*<\frac{1}{M}$, and $T\ge0$, depending on $(u_0, v_0)$, $c$, and $\rho$, such that the Heaviside strategy
defined by
\begin{equation*}
a(t) = \left\{
\begin{array}{lr}
a_*, &{\mbox{ if }} t<T, \\
a^*, &{\mbox{ if }} t\geq T,
\end{array}
\right.
\end{equation*}
belongs to $\mathcal{V}_{\mathcal{A}}$.
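For intuition only, a two-phase Heaviside strategy as in (<ref>) can be simulated with a forward Euler scheme; the parameters and initial datum below are arbitrary illustrations, not the values produced by the proposition:

```python
def simulate(u0, v0, rho, c, a_of_t, dt=1e-3, t_max=20.0):
    """Euler integration of u' = u(1 - a c - u - v), v' = rho v(1-u-v) - a u."""
    u, v, t = u0, v0, 0.0
    while t < t_max:
        a = a_of_t(t)
        u, v = u + dt * u * (1 - a * c - u - v), v + dt * (rho * v * (1 - u - v) - a * u)
        v = max(v, 0.0)  # the sink term -a u may push v slightly below 0 numerically
        t += dt
    return u, v

rho, c, a_star, a_low, T = 0.5, 2.0, 4.0, 0.05, 1.0   # hypothetical values
heaviside = lambda t: a_star if t < T else a_low
u0, v0 = 0.8, 0.5
uT, vT = simulate(u0, v0, rho, c, heaviside, t_max=T)
# During the first phase a^* c > 1, so u' <= -u < 0 and u must decrease.
assert uT < u0
```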
We start by proving the first claim in Proposition <ref>.
To this aim, we take $(\bar{u}, \bar{v})\in \mathcal{P}$, and we observe that
\begin{equation*}
\bar{v}-\frac{\bar{u}}{c}< \frac{1-\rho}{1+\rho c} = v_s^0-\frac{u_s^0}{c}.
\end{equation*}
Therefore, there exists $\xi>0$ such that
\begin{equation*}
\xi < \frac{v_s^0- \bar{v}-\frac{1}{c}(u_s^0-\bar{u})}{\bar{u}-u_s^0}.
\end{equation*}
Hence, setting
\begin{equation}\label{ch2VUESSE0}
v_S:=\left(\frac{1}{c} -\xi \right)(u_s^0-\bar{u}) + \bar{v},
\end{equation}
we see that
\begin{equation}\label{ch2VUESSE}v_S<v_s^0.\end{equation}
Now, we want to show that there exists $a^*>0$ such that, for any $a>a^*$ and $u>u_s^0$, we have that
\begin{equation}\label{ch21632}
\frac{\dot{v}}{\dot{u}} > \frac{1}{c}- \xi.
\end{equation}
To prove this, we first notice that
\begin{equation}\label{ch2questab}
{\mbox{if~$a>\displaystyle\frac2c$, then~$\dot{u}\le -u<0$.}}\end{equation}
Moreover, we set
$$a_1:=\frac{1+\rho c}{4c},$$
and we claim that
\begin{equation}\label{ch2questae}
{\mbox{if~$a>a_1$ and~$u>u_s^0$, then~$\dot{v}<0$.}}\end{equation}
Indeed, we recall that the function $\sigma$
defined in (<ref>) represents the points
in $[0,1]\times[0,1]$ where $\dot v=0$
and separates the points where $ \dot v>0$, which lie on the left of
the curve described by $\sigma$, from the points where $ \dot v<0$, which lie on the right of
the curve described by $\sigma$.
Therefore, in order to show (<ref>), it is sufficient to prove that
the curve described by $\sigma$
is contained in $\{u\le u_s^0\}$ whenever $a>a_1$. For this, one computes that, if $u=\sigma(v)$
and $a>a_1$, then
\begin{eqnarray*}&&
u-u_s^0=\sigma(v)-\frac{\rho c}{1+\rho c}=
1-\frac{\rho v^2+a}{\rho v+a}-\frac{\rho c}{1+\rho c}\\&&\qquad=
\frac{\rho v-\rho v^2}{\rho v+a}-\frac{\rho c}{1+\rho c}=\frac{\rho v(1-v)}{\rho v+a}-\frac{\rho c}{1+\rho c}
\\&&\qquad\le\frac{\rho }{4(\rho v+a)}-\frac{\rho c}{1+\rho c}\le
\frac{\rho }{4a}-\frac{\rho c}{1+\rho c}\\&&\qquad\le
\frac{\rho }{4a_1}-\frac{\rho c}{1+\rho c}\le0.
\end{eqnarray*}
This completes the proof of (<ref>).
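The inclusion of the curve $\{u=\sigma(v)\}$ in $\{u\le u_s^0\}$ for $a>a_1$ can be sampled numerically (an editorial sanity check with arbitrary values of $\rho$ and $c$):

```python
# sigma(v) = 1 - (rho v^2 + a)/(rho v + a); the computation above shows
# sigma(v) <= u_s^0 = rho c/(1 + rho c) whenever a >= a_1 = (1 + rho c)/(4 c).
rho, c = 0.8, 1.5                       # arbitrary illustrative parameters
u_s0 = rho * c / (1 + rho * c)
a1 = (1 + rho * c) / (4 * c)

def sigma(v, a):
    return 1 - (rho * v**2 + a) / (rho * v + a)

a = a1 * 1.001
assert all(sigma(k / 1000, a) <= u_s0 + 1e-12 for k in range(0, 1001))
```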
Now we define
\begin{equation*}
a_2:=\left( \rho+\frac{1}{c}+\xi \right) \frac{2}{u_s^0 c \xi},
\end{equation*}
and we claim that
\begin{equation}\label{ch2questaX}
{\mbox{if~$a>a_2$ and~$u>u_s^0$, then }}
\dot{v} < \left( \frac{1}{c}- \xi \right) \dot{u}.
\end{equation}
Indeed, under the assumptions of (<ref>),
we deduce that
\begin{eqnarray*}&&
\dot{v} -\left( \frac{1}{c}- \xi \right) \dot{u}
=\rho v(1-u-v)-au-\left( \frac{1}{c}- \xi \right)\Big(
u(1-u-v)-acu
\Big)\\
&&\qquad=(1-u-v)\left(
\rho v-\left( \frac{1}{c}- \xi \right)u\right)-ac \xi u
\le 2\left(
\rho v+ \frac{u}{c}+ \xi u\right)-ac \xi u\\&&\qquad< 2\left(
\rho + \frac{1}{c}+ \xi \right)-a_2\,c \xi {u_s^0}=0,
\end{eqnarray*}
and this establishes the claim in (<ref>).
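The exact cancellation behind the last equality, $2(\rho+1/c+\xi)-a_2\,c\xi u_s^0=0$, can be checked with rational arithmetic (an editorial sanity check; the values of $\rho$, $c$, $\xi$ are arbitrary):

```python
from fractions import Fraction as F

# With a_2 = (rho + 1/c + xi) * 2 / (u_s^0 c xi), the final bound above
# cancels exactly: 2(rho + 1/c + xi) - a_2 c xi u_s^0 = 0.
rho, c, xi = F(1, 2), F(3), F(1, 10)    # arbitrary rational parameters
u_s0 = rho * c / (1 + rho * c)           # u_s^0 = rho c / (1 + rho c)
a2 = (rho + 1 / c + xi) * 2 / (u_s0 * c * xi)
assert 2 * (rho + 1 / c + xi) - a2 * c * xi * u_s0 == 0
```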
Then, choosing
$$ a^*:=
\max\left\{\displaystyle\frac2c,a_1,a_2,M\right\},$$
we can exploit (<ref>), (<ref>) and (<ref>)
to deduce (<ref>), as desired.
Now we claim that, for any $a>a^*$, there exists $T\ge0$
such that the trajectory $(u(t), v(t))$ starting from $(\bar{u}, \bar{v})$ satisfies
\begin{equation}\label{ch2SM -kg}
{\mbox{$u(T)=u_s^0$ and~$v(T)< v_S$.}}
\end{equation}
Indeed, we define $T\ge0$ to be the first time for which $u(T)=u_s^0$.
This definition is well posed, since $u(0)=\bar{u}\ge u_s^0$
and, as long as $u\ge u_s^0$, the derivative $\dot u$ is negative
and bounded away from zero, thanks to (<ref>).
Then, we see that
\begin{eqnarray*}&&
v(T)=\bar{v}+\int_0^T \dot v(t)\,dt<
\bar{v}+\int_0^T\left(
\frac{1}{c}- \xi\right)\,\dot u(t)\,dt=
\bar{v}+\left(
\frac{1}{c}- \xi\right)(u(T)-u(0))\\&&\qquad\qquad=
\bar{v}+\left(
\frac{1}{c}- \xi\right)(u_s^0-\bar u)=v_S,
\end{eqnarray*}
thanks to (<ref>) and (<ref>), and this establishes (<ref>).
Now we observe that
$$ v(T)<v_S<v_s^0=\gamma_0(u_s^0)=\gamma_0(u(T))$$
due to (<ref>)
and (<ref>).
As a result, recalling Lemma <ref>, we can choose $a_*<1/M$
such that
$$ v(T)<\gamma_{a_*}(u(T)).$$
Accordingly, by Proposition <ref>,
we obtain that $(u(T),v(T))\in{\mathcal{E}}(a_*)$.
Hence, applying the strategy
in (<ref>),
we accomplish the desired result and complete the proof of the first claim
in Proposition <ref>.

Now we focus on the proof of the second claim in Proposition <ref>.
For this, let
\begin{equation}\label{ch28ygdw}
(u_0,v_0)\in \mathcal{Q},\end{equation}
and consider the trajectory $(u_0(t),v_0(t))$
starting from $(u_0,v_0)$ for the strategy $a=0$. In light of formula (<ref>)
of Proposition <ref>, we have that
\begin{equation}\label{ch2Ecijerrin}\begin{split}&
{\mbox{the trajectory~$(u_0(t),v_0(t))$
converges}}\\&{\mbox{to a point of the form~$(u_F, 1-u_F)$ as~$t\to+\infty$.}}\end{split}
\end{equation}
We define
\begin{equation}\label{ch21713}
v_F:=1-u_F, \quad v_{\infty}:=1-u_{\infty}=\frac{1}{c+1},
\end{equation}
where the last equality can be checked starting from the value of $u_{\infty}$ given in (<ref>).
Using the definition of $\zeta$ in (<ref>) and the information in (<ref>),
we also notice that the curve given by $v=\zeta(u)$ is a trajectory for $a=0$. Moreover
\begin{equation*}
\zeta(u_{\infty})= \frac{1}{c(u_{\infty})^{\rho-1}} u_{\infty}^{\rho}=\frac{c}{c(c+1)}=v_{\infty}
\end{equation*}
and, recalling (<ref>) and formula (<ref>) of Proposition <ref>, we get that
the graph of $\zeta$ is a trajectory for $a=0$ that converges to $(u_\infty,1-u_\infty)$ as $t\to+\infty$.
Also, by (<ref>), we have that $v_0 < \zeta(u_0)$. Thus, since by Cauchy's uniqueness result for ODEs, two orbits never intersect, we have that
\begin{equation}\label{ch2JS145DD-0}
{\mbox{the orbit~$(u_0(t),v_0(t))$ must lie below the graph of~$\zeta$.}}\end{equation}
Since both $(u_F, v_F)$ and $(u_{\infty}, v_{\infty})$ belong to the line given by $v=1-u$, from (<ref>) we get that
\begin{equation}\label{ch22304}
u_{\infty} < u_F
\end{equation}
and
\begin{equation}\label{ch22305}
v_{\infty} > v_F.
\end{equation}
Thanks to (<ref>) and (<ref>) and recalling the values of $u_{\infty}$ from (<ref>) and of $v_{\infty}$ from (<ref>), we get that
\begin{equation}\label{ch21524}
v_F < v_{\infty} = \frac{u_{\infty}}{c} < \frac{u_F}{c}.
\end{equation}
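The middle equality $v_\infty=u_\infty/c$ follows from $u_\infty=c/(c+1)$ and can be double-checked with exact rational arithmetic (an editorial sanity check; the tested values of $c$ are arbitrary):

```python
from fractions import Fraction as F

# With u_inf = c/(c+1) and v_inf = 1 - u_inf = 1/(c+1), the middle
# equality in the chain v_F < v_inf = u_inf/c < u_F/c holds exactly.
for c in (F(1, 2), F(1), F(3), F(7, 2)):
    u_inf = c / (c + 1)
    v_inf = 1 - u_inf
    assert v_inf == u_inf / c == 1 / (c + 1)
```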
As a consequence, since the inequality in (<ref>) is strict,
we find that there exists $T'>0$
such that
\begin{equation}\label{ch22334}
v_0(T') < \frac{u_0(T')}{c}.
\end{equation}
Moreover, since $\dot{u}<0$ for $v>1-u$ and $a=0$, we get that
$u_0(t)$ is decreasing in $t$, and therefore $u_F < u_0(T') < u_0$.
By the strict inequality in (<ref>), and
claim (ii) in Proposition <ref>,
we have that $(u_0(T'), v_0(T')) \in \mathcal{E}_{\infty}$, where $\mathcal{E}_{\infty}$ is defined in (<ref>).
In particular, we have that $(u_0(T'), v_0(T')) \in
\underset{a>a'}{\bigcup} \mathcal{E}(a)$, for every $a'>0$.
Consequently, there exists $a^*>M$ such that $(u_0(T'),v_0(T'))\in\mathcal{E}(a^*)$. Therefore, applying the strategy
\begin{equation*}
a(t) = \left\{
\begin{array}{lr}
0, & t<T', \\
a^*, & t\geq T',
\end{array}
\right.
\end{equation*}
we reach the victory.
§.§ Proof of Theorem <ref>
To avoid repeating passages in the proofs of Theorems <ref> and <ref>, we first state and prove the following lemma:
If $\rho=1$, then for all $a>0$ we have $\mathcal{E}(a)=\mathcal{S}_c$, where $\mathcal{S}_c$ was defined in (<ref>).
Let $(u(t),v(t))$ be a trajectory starting at a point
in $[0,1]\times[0,1]$.
For any $a>0$, we consider the function
$$\mu(t):= \frac{\displaystyle
v\left(\frac{t}{a}\right)}{\displaystyle u\left(\frac{t}{a}\right)}.$$
Notice that
\begin{equation}\label{ch2-1-e}
{\mbox{$(u(0),v(0))\in\mathcal{E}(a)$ if and only if there exists~$T>0$ such that }}\mu(T)=0.
\end{equation}
In addition, we observe that
\begin{equation} \label{ch2eq:A=1}\begin{split}
\dot{\mu}(t) \,&=
\frac{\displaystyle\dot v\left(\frac{t}{a}\right)u\left(\frac{t}{a}\right)-
v\left(\frac{t}{a}\right)\dot u\left(\frac{t}{a}\right)}{\displaystyle au^2\left(\frac{t}{a}\right)}
\\&=
\frac{\displaystyle -u^2 \left(\frac{t}{a}\right)+c u\left(\frac{t}{a}\right)v\left(\frac{t}{a}\right)}{\displaystyle u^2\left(\frac{t}{a}\right)}\\&=c\mu(t) -1.
\end{split}
\end{equation}
The equation in (<ref>) is integrable and leads to
\begin{equation*}
\mu(t)=\frac{e^{ct} \left( c\mu(0)-1 \right) +1}{c}.
\end{equation*}
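The closed-form expression for $\mu(t)$ can be cross-checked against a direct numerical integration of $\dot\mu=c\mu-1$ (an editorial sanity check, not part of the proof; the values of $c$, $\mu(0)$ and the time horizon are arbitrary):

```python
import math

# Closed-form solution of mu' = c*mu - 1, compared against forward Euler.
def mu_exact(t, mu0, c):
    return (math.exp(c * t) * (c * mu0 - 1) + 1) / c

def mu_euler(t, mu0, c, n=200_000):
    dt, mu = t / n, mu0
    for _ in range(n):
        mu += dt * (c * mu - 1)
    return mu

c, mu0, t = 2.0, 0.3, 1.0
assert abs(mu_exact(t, mu0, c) - mu_euler(t, mu0, c)) < 1e-3
```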
From this and (<ref>), we deduce that
\begin{equation*}
{\mbox{there exists~$T>0$ such that~$\mu(T)=0$ if and only if }}\,
c\mu(0)<1.
\end{equation*}
This leads to
\begin{equation*}
{\mbox{$(u(0),v(0))\in\mathcal{E}(a)$ if and only if }}\,
\frac{v(0)}{u(0)}<\frac1c,
\end{equation*}
which, recalling the definition of $\mathcal{S}_c$
in (<ref>), ends the proof.
Now we provide the proof of Theorem <ref>, exploiting the result obtained
in Section <ref>.
(i) Let $\rho=1$. For the sake of simplicity, we suppose that $c\ge1$, and therefore the second line in (<ref>) is not present
(the proof of (<ref>) when $c<1$ is similar, but one has to take into account also the set $(c,1]\times[0,1]$ and show that it is contained
in $\mathcal{V}_{\mathcal{A}}$ by checking the sign of the component
of the velocity field in the normal direction).
We claim that
\begin{equation}\label{ch2Thns932}
\mathcal{V}_{\mathcal{A}}=\mathcal{S}_c,
\end{equation}
where $\mathcal{S}_c$ was
defined in (<ref>)
(incidentally, $\mathcal{S}_c$
is precisely the right-hand-side of equation (<ref>)).
From Lemma <ref> we have that for $\rho=1$ and $a>0$ it holds $\mathcal{S}_c=\mathcal{E}(a)\subset \mathcal{V}_{\mathcal{A}}$. Thus, to show (<ref>) we just need to check that
\begin{equation}\label{ch2inturn}
\mathcal{V}_{\mathcal{A}} \subseteq \mathcal{S}_c,\end{equation}
which is equivalent to
\begin{equation}\label{ch2dotdot}
\mathcal{S}_c^C \subseteq \mathcal{V}_{\mathcal{A}}^C,
\end{equation}
where the superscript $C$ denotes the complement of the set in the topology of $[0,1]\times[0,1]$.
First, by definition we have that
\begin{equation}\label{ch21617}
\mathcal{S}_c^C \cap ((0,1]\times\{0\})=\varnothing.
\end{equation}
Now, we analyze the behavior of the trajectories at $\partial \mathcal{S}_c^C$. By Proposition <ref>, no trajectory can exit $\mathcal{S}_c^C$ from a point on $\partial ([0,1]\times[0,1]) \setminus((0,1]\times\{0\})$. Moreover,
$\partial \mathcal{S}_c^C \cap ((0,1]\times\{0\})=\varnothing$
thanks to (<ref>) and
the fact that $\mathcal{S}_c^C$ is closed in the topology of $[0,1]\times[0,1]$.
As a result,
\begin{equation}\label{ch21648}
{\mbox{no trajectory can exit~$\mathcal{S}_c^C$ from a point on~$\partial ([0,1]\times[0,1])$.}}
\end{equation}
Furthermore, it holds that
$$\partial \mathcal{S}_c^C \cap \big(
(0,1)\times(0,1)\big)= \left\{ (u,v)\in (0,1)\times(0,1) \;{\mbox{ s.t. }}\; v=\frac{u}{c} \right\}.$$
The velocity of a trajectory starting on the line $v=\frac{u}{c}$ in the orthogonal direction pointing inward $\mathcal{S}_c^C$ is
\begin{equation*}
(\dot{u}, \dot{v})\cdot\frac{(-1,c)}{\sqrt{c^2+1}}=\frac{1}{\sqrt{c^2+1}} (cv-u)(1-u-v)=0,
\end{equation*}
the last equality coming from the fact that $cv=u$ on $\partial \mathcal{S}_c^C \cap\big( (0,1)\times(0,1)\big)$.
This means that
\begin{equation}\label{ch21649}
{\mbox{no trajectory can exit~$\mathcal{S}_c^C$ from a point on the line~$v=\frac{u}{c}$.}}
\end{equation}
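The tangency computation above can be verified numerically for the field with $\rho=1$ (illustrative only; the values of $c$ and $a$ are arbitrary):

```python
# For rho = 1, the velocity along the line v = u/c is tangent to it:
# (u', v') . (-1, c) = (c v - u)(1 - u - v) = 0 when c v = u.
def tangential_defect(u, c, a):
    v = u / c
    du = u * (1 - a * c - u - v)
    dv = v * (1 - u - v) - a * u        # rho = 1
    return -du + c * dv

for a in (0.1, 1.0, 10.0):
    assert all(abs(tangential_defect(k / 50, 2.0, a)) < 1e-12 for k in range(0, 51))
```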
From (<ref>) and (<ref>), we get that no trajectory exits $\mathcal{S}_c^C$. Then, by (<ref>), no trajectory starting in $\mathcal{S}_c^C$ can reach the set $(0,1]\times\{0\}$, therefore $\mathcal{S}_c^C \cap \mathcal{V}_{\mathcal{A}}= \varnothing$ and this implies that (<ref>) is true.
As a result, the proof of (<ref>) is established and the proof is completed for $\rho=1$.
(ii) Let $\rho<1$. For the sake of simplicity, we suppose
that $\frac{\rho c(c+1)}{1+\rho c}\ge1$.
Let $\mathcal{Y}$ be the set in the right-hand-side of (<ref>), and
\begin{equation}\label{ch2qwertyuiolkjhgf}
\mathcal{F}_0:=
\big\{ (u,v)\in [0,1]\times [0,1]\;{\mbox{ s.t. }}\;
v < \gamma_{0}(u) \,\text{ if } \, u\in[0, 1] \big\}.\end{equation}
Notice that
\begin{equation}\label{ch28ujff994-p-1}
\mathcal{Y} = \mathcal{F}_0 \cup \mathcal{P},
\end{equation}
where $\mathcal{P}$ is the set defined in (<ref>).
Moreover, we have that
\begin{equation}\label{ch28ujff994-p-2}
\mathcal{P}\subseteq \mathcal{V}_{\mathcal{A}},
\end{equation}
thanks to
Proposition <ref>.
We also claim that
\begin{equation} \label{ch28ujff994-p-3BIS}\mathcal{F}_0\subseteq \mathcal{V}_{\mathcal{K}},\end{equation}
where $\mathcal{K}$ is the set of constant functions.
Indeed, if $(u,v)\in\mathcal{F}_0$, we have that $v < \gamma_{0}(u)$
and consequently $v < \gamma_{a}(u)$, as long as $a$ is small enough,
due to Lemma <ref>.
From this and Proposition <ref>, we deduce that $(u,v)$
belongs to ${\mathcal{E}}(a)$, as long as $a$ is small enough,
and this proves (<ref>).
From (<ref>) and the fact that $\mathcal{K}\subseteq\mathcal{A}$,
we obtain that
\begin{equation} \label{ch28ujff994-p-3}\mathcal{F}_0\subseteq \mathcal{V}_{\mathcal{A}}.\end{equation}
Then, as a consequence of (<ref>),
(<ref>) and (<ref>),
we get that $\mathcal{Y}\subseteq \mathcal{V}_{\mathcal{A}}$.
Hence, we are left with proving that
\begin{equation}\label{ch28iujdpp-1}
\mathcal{V}_{\mathcal{A}} \subseteq \mathcal{Y}.\end{equation}
For this, we show that
\begin{equation}\label{ch28iujdpp-2}
{\mbox{on~$\partial \mathcal{Y}\cap\big((0,1)\times(0,1)
\big)$ the outward normal derivative is nonnegative.}}
\end{equation}
To prove this,
we calculate the outward normal derivative on the part of $\partial \mathcal{Y}$ lying on the graph of $v=\gamma_0(u)$, that is
\begin{equation*}
\dot{v}-\frac{ u^{{\rho} -1}\dot{u}}{c(u^0_s)^{\rho-1} }={\rho} v(1-u-v)-a u -\frac{ u^{{\rho} }(1-u-v-ac)}{ c(u^0_s)^{\rho-1} }.
\end{equation*}
By substituting $v=\gamma_0(u)=\frac{u^\rho}{\rho c(u_s^0)^{\rho-1}}$ we get
\begin{eqnarray*}&&
\dot{v}-\frac{u^{{\rho} -1}\dot{u}}{ c(u^0_s)^{\rho-1} } =
\frac{u^\rho}{ c(u_s^0)^{\rho-1}}(1-u-v)-a u -\frac{ u^{{\rho} }(1-u-v-ac)}{ c(u^0_s)^{\rho-1} }\\
&&\qquad=-a u +\frac{ acu^{{\rho} }}{ c(u^0_s)^{\rho-1} }
=au^{\rho} \left( - u^{1-{\rho} } + \frac{1}{(u^0_s)^{\rho-1} } \right).
\end{eqnarray*}
As a result, since ${\rho} <1$, we have
\begin{equation}\label{ch2ygfbv7r9yty4}
\dot{v}-\frac{ u^{{\rho} -1}\dot{u}}{c(u^0_s)^{\rho-1} }\geq0 \quad \text{for} \ u\leq u_s^0.
\end{equation}
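The simplification above can be double-checked numerically. The following Python sketch (illustrative only, not part of the proof) uses the vector field $\dot u=u(1-u-v-ac)$, $\dot v=\rho v(1-u-v)-au$ read off from the displays above, together with arbitrary sample values of $\rho<1$, $c$ and $a$:

```python
# Illustrative numerical check (not part of the proof). Sample parameters
# with rho < 1; the vector field below is the one used in the displays above.
rho, c, a = 0.5, 2.0, 0.3
u_s0 = rho * c / (1 + rho * c)   # u_s^0 = rho*c/(1+rho*c)

def normal_component(u):
    # v' - u^(rho-1) * u' / (c * (u_s^0)^(rho-1)) on the curve v = gamma_0(u)
    v = u**rho / (rho * c * u_s0**(rho - 1))
    du = u * (1 - u - v - a * c)
    dv = rho * v * (1 - u - v) - a * u
    return dv - u**(rho - 1) * du / (c * u_s0**(rho - 1))

for k in range(1, 101):
    u = u_s0 * k / 100                                 # sample u in (0, u_s^0]
    closed = a * u**rho * (u_s0**(1 - rho) - u**(1 - rho))
    assert abs(normal_component(u) - closed) < 1e-12   # matches the closed form
    assert normal_component(u) >= -1e-12               # nonnegative for u <= u_s^0
```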
On the part of $\partial \mathcal{Y}$ contained on the line $v=\frac{u}{c} + \frac{1-{\rho} }{1+{\rho} c}$, the outward normal derivative is
\begin{equation}\label{ch27undws8uf8v}\begin{split}&
\dot{v}-\frac{\dot{u}}{c}= {\rho} v(1-u-v) -au -\frac{u(1-ac-u-v)}{c}=\left({\rho} v-\frac{u}{c}\right)(1-u-v)\\&\qquad\qquad=
\left(
\frac{\rho u}{c} + \frac{\rho(1-{\rho}) }{1+{\rho} c}-\frac{u}{c}\right)\left(1-u-
\frac{u}{c}- \frac{1-{\rho} }{1+{\rho} c}\right)\\&
\qquad\qquad=
\left(
\frac{(\rho-1) u}{c} + \frac{\rho(1-{\rho}) }{1+{\rho} c}\right)\left(-
\frac{u(c+1)}{c}+ \frac{\rho(1+c) }{1+{\rho} c}\right).
\end{split}
\end{equation}
We also observe that, when $u>u_s^0=\frac{\rho c}{1+\rho c}$, the condition $\rho<1$ gives that
\begin{eqnarray*}
\frac{(\rho-1) u}{c} + \frac{\rho(1-{\rho}) }{1+{\rho} c}<
\frac{\rho (\rho-1)}{1+\rho c}+ \frac{\rho(1-{\rho}) }{1+{\rho} c}=0
\end{eqnarray*}
and
\begin{eqnarray*}-\frac{u(c+1)}{c}+ \frac{\rho(1+c) }{1+{\rho} c}<
-\frac{\rho(c+1)}{1+\rho c}+ \frac{\rho(1+c) }{1+{\rho} c}=0.
\end{eqnarray*}
Therefore, when $u>u_s^0$, we deduce from (<ref>) that
$$\dot{v}-\frac{\dot{u}}{c}>0.$$
Combining this and (<ref>), we obtain (<ref>), as desired.
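The factorization $\dot v-\frac{\dot u}{c}=\left(\rho v-\frac{u}{c}\right)(1-u-v)$ and its sign for $u>u_s^0$ can also be checked numerically. The following sketch (illustrative only, with arbitrary sample parameters satisfying $\rho<1$) shows in particular that the factored form does not depend on $a$:

```python
# Illustrative check of the factorization on the line
# v = u/c + (1-rho)/(1+rho*c), with arbitrary sample parameters, rho < 1.
rho, c, a = 0.5, 2.0, 0.7
u_s0 = rho * c / (1 + rho * c)
m = min(rho * c * (c + 1) / (1 + rho * c), 1.0)

def line_normal(u):
    v = u / c + (1 - rho) / (1 + rho * c)
    du = u * (1 - u - v - a * c)
    dv = rho * v * (1 - u - v) - a * u
    return dv - du / c

for k in range(1, 51):
    u = u_s0 + (m - u_s0) * k / 50               # sample u in (u_s^0, m]
    v = u / c + (1 - rho) / (1 + rho * c)
    factored = (rho * v - u / c) * (1 - u - v)   # the parameter a cancels out
    assert abs(line_normal(u) - factored) < 1e-12
    assert line_normal(u) > 0                    # positive for u > u_s^0
```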
Now, by (<ref>), we have that, for any value of $a$, no trajectory starting in $\big([0,1]\times[0,1]
\big)\setminus\mathcal{Y}$ can enter in $\mathcal{Y}$,
and in particular no trajectory starting
in $\big([0,1]\times[0,1]
\big)\setminus\mathcal{Y}$
can hit $\{v=0\}$, which ends the proof of (<ref>).
(iii) Let $\rho>1$. For the sake of simplicity, we suppose
that $\frac{c}{(c+1)^\rho}\ge1$.
Let $\mathcal{X}$ be the right-hand-side of (<ref>).
We observe that
\begin{equation}\label{ch27hperpre923i5}
\mathcal{X}= \mathcal{S}_{c} \cup \mathcal{Q},
\end{equation}
where $\mathcal{S}_{c}$ was defined in (<ref>) and $\mathcal{Q}$ in (<ref>).
Thanks to Proposition <ref>,
one has that $\mathcal{S}_{c}\subseteq \underset{a>a'}{\bigcup} \mathcal{E}(a)$, for every $a'>0$, and therefore $\mathcal{S}_{c}\subseteq\mathcal{V}_{\mathcal{A}}$.
Moreover, by the second claim in
Proposition <ref>, one also has that $\mathcal{Q}\subseteq \mathcal{V}_{\mathcal{A}}$. Hence,
\begin{equation}\label{ch21923}
\mathcal{X}\subseteq \mathcal{V}_{\mathcal{A}}.
\end{equation}
Accordingly, to prove equality in (<ref>) and thus complete
the proof of (<ref>), we need to show that $\mathcal{V}_{\mathcal{A}} \subseteq \mathcal{X}$.
First, we prove that
\begin{equation}\label{ch21229}
(0,1]\times\{0\} \subseteq \mathcal{X}.
\end{equation}
Indeed, for $u\in (0, u_{\infty}]$ the boundary curve $v=\frac{u}{c}$ is strictly positive, therefore $(u, 0)\in \mathcal{X}$. Moreover, $\zeta$ is a positive power function, hence increasing and strictly positive on $(u_{\infty}, 1]$, so that $(u, 0)\in \mathcal{X}$ for $u\in ( u_{\infty}, 1]$ as well. These observations prove (<ref>).
We now prove that the component of the velocity field in the outward normal direction with respect to $\mathcal{X}$ is nonnegative on
\begin{multline*}
\partial\mathcal{X}\cap \partial( \mathcal{X}^C)= \\
\left\{ (u,v)\in(0,u_{\infty}]\times(0,1) \ : \ v=\frac{u}{c} \right\} \cup \left\{ (u,v)\in(u_{\infty},1)\times(0,1) \ : \ v=\zeta(u) \right\}.
\end{multline*}
To this end, we observe that on the line $v=\frac{u}{c}$, the outward normal derivative is
\begin{equation}\label{ch21853}
\dot{v}-\frac{1}{c}\dot{u}= \rho v(1-u-v)-au -\frac{u}{c}(1-ac-u-v)=\left(\rho v -\frac{u}{c}\right)(1-u-v).
\end{equation}
\end{equation}
The first factor is positive because for $\rho >1$ we have
\begin{equation*}
\rho v > v =\frac{u}{c}.
\end{equation*}
Moreover, for $u\leq u_{\infty}$ we have that
$$1-u-v\geq 0,$$
thanks to (<ref>).
Thus, the left hand side of (<ref>) is nonnegative,
which proves that the component of the velocity field in the outward normal direction is nonnegative
on $\partial\mathcal{X}\cap\left\{v=\frac{u}{c} \right\}$.
On the part of $\partial \mathcal{X}$ lying in the graph of $v=\zeta(u)$, the component of the velocity field in the outward normal direction is
given by
\begin{equation}\label{ch2po091326uthgbvfjf}
\dot{v}-\frac{\rho u^{{\rho} -1}\dot{u}}{\rho c(u_{\infty})^{\rho-1} } ={\rho} v(1-u-v)-a u -\frac{{\rho} u^{{\rho} }}{\rho c(u_{\infty})^{\rho-1} }(1-u-v-ac).
\end{equation}
Now we substitute $v=\zeta(u)= \frac{u^{{\rho} }}{\rho c(u_{\infty})^{\rho-1} }$ in (<ref>) and we get
\begin{align*}
\dot{v}-\frac{u^{{\rho} -1}\dot{u}}{ c(u_{\infty})^{\rho-1} } = au \left( - 1 + \frac{u^{\rho-1}}{(u_{\infty})^{\rho-1} } \right)
\end{align*}
which leads to
\begin{equation*}
\dot{v}-\frac{\rho u^{{\rho} -1}\dot{u}}{\rho c(u_{\infty})^{\rho-1} }>0 \quad \text{if} \ u>u_{\infty},
\end{equation*}
as desired.
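The analogous simplification on the graph of $v=\zeta(u)$ for $\rho>1$ admits a similar numerical sanity check (illustrative only). Here $u_\infty$ is treated as a free sample parameter, since its actual value is fixed elsewhere in the paper:

```python
# Illustrative check for rho > 1 on the curve v = zeta(u); u_inf is a
# free sample parameter here (its actual value is fixed in the paper).
rho, c, a, u_inf = 2.0, 3.0, 0.4, 0.3

def curve_normal(u):
    v = u**rho / (rho * c * u_inf**(rho - 1))
    du = u * (1 - u - v - a * c)
    dv = rho * v * (1 - u - v) - a * u
    return dv - u**(rho - 1) * du / (c * u_inf**(rho - 1))

for k in range(1, 51):
    u = u_inf + (1 - u_inf) * k / 50        # sample u in (u_inf, 1]
    closed = a * u * (u**(rho - 1) / u_inf**(rho - 1) - 1)
    assert abs(curve_normal(u) - closed) < 1e-12
    assert curve_normal(u) > 0              # positive for u > u_inf
```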
As a consequence of these considerations, we find that no trajectory starting in $\mathcal{X}^C$ can enter in $\mathcal{X}$ and therefore hit $\{v=0\}$, by (<ref>). Hence, we conclude that $\mathcal{V}_{\mathcal{A}}\subseteq \mathcal{X}$, which, together with (<ref>), establishes (<ref>).
\subsection{Proof of Theorem <ref>}
In order to prove Theorem <ref>,
we first establish a geometric lemma describing
the relative position of the function $\gamma$,
as given by Propositions <ref> and <ref>,
and the straight line on which the saddle equilibria lie. To emphasize the dependence of $\gamma$
on the parameter $a$, we will often use the notation $\gamma=\gamma_a$.
Moreover, we recall the notation of the saddle points $(u_s,v_s)$
defined in (<ref>) and of the points $(u_{\mathcal{M}},v_{\mathcal{M}})$ given by
Propositions <ref> and <ref>, with the convention
\begin{equation}\label{ch2usvs2}
{ \mbox{$(u_s,v_s)=(0,0)$ if~$ac\ge1$,} }
\end{equation}
and we state the following
If $\rho<1$, then
\begin{equation}\label{ch2gamma1>r}
\frac{u}{{\rho}c} \leq \gamma_a(u) \quad \text{ for } u\in[0, u_s]
\end{equation}
and
\begin{equation}\label{ch2gamma<r}
\gamma_a(u) \leq \frac{u}{{\rho}c} \quad \text{ for } u\in[u_s, u_{\mathcal{M}}].
\end{equation}
If instead ${\rho}>1$, then
\begin{equation}\label{ch2gamma1<r}
\gamma_a(u) \leq \frac{u}{{\rho}c} \quad \text{ for } u\in[0, u_s]
\end{equation}
and
\begin{equation}\label{ch2gamma>r}
\frac{u}{{\rho}c} \leq \gamma_a(u) \quad \text{ for } u\in[u_s, u_{\mathcal{M}}].
\end{equation}
Moreover equality holds in (<ref>) and (<ref>)
if and only if either $u=u_s$ or $u=0$. Also, strict inequality
holds in (<ref>)
and (<ref>) for $u\in(u_s, u_{\mathcal{M}})$.
We focus here on the proof of (<ref>), since
the other inequalities are proven in a similar way. Moreover,
we deal with the case $ac<1$, the case $ac\ge1$ being analogous
up to obvious modifications.
We suppose by contradiction that (<ref>) does not hold true.
Namely, we assume that there exists $\tilde{u}\in(u_s,u_{\mathcal{M}}]$ such that
$$ \gamma_a(\tilde u) > \frac{\tilde u}{{\rho}c}.$$
Since $\gamma_a$ is continuous thanks to Propositions <ref>,
we have that
$$ \gamma_a( u) > \frac{ u}{{\rho}c} \quad \mbox{in a neighborhood of~$\tilde u$. }$$
Hence, we consider the largest open
interval $(u_1,u_2)\subset(u_s,u_{\mathcal{M}}]$ containing $\tilde u$ and such that
\begin{equation} \label{ch2g>r}
\gamma_a(u) > \frac{u}{{\rho}c} \quad {\mbox{ for all }} u \in (u_1,u_2).
\end{equation}
Moreover, in light of (<ref>), we see that
\begin{equation}\label{ch2togdfgheter}
\gamma_a(u_s)=v_s= \frac{1-ac}{1+\rho c}=
\frac{u_s}{\rho c}.\end{equation}
Hence, by the continuity of $\gamma_a$, we have that $\gamma_a(u_1)=\frac{u_1}{\rho c}$ and
\begin{equation}\label{ch2doesnotcon}
{\mbox{either~$\gamma_a(u_2)=\displaystyle \frac{u_2}{\rho c}$ or~$u_2=u_{\mathcal{M}}$.}}\end{equation}
Now, we consider the set
\begin{equation*}
\mathcal{T}:= \left\{ (u,v)\in [u_1,u_2]\times[0,1] \;
{\mbox{ s.t. }}\; \frac{u}{{\rho}c} < v< \gamma_a(u) \right\},
\end{equation*}
which is nonempty, thanks to (<ref>).
We claim that
\begin{equation}\label{ch2lkjhgfds1234567}
{\mbox{for all~$(u(0), v(0))\in \mathcal{T}$, the~$\omega$-limit
of its trajectory is~$(u_s,v_s)$.}}\end{equation}
To prove this,
we analyze the normal derivative on
\begin{equation*}\begin{split}
&\partial \mathcal{T} =\mathcal{T}_1\cup\mathcal{T}_2\cup
\mathcal{T}_3,\\
{\mbox{where }}\quad &
\mathcal{T}_1:=\big\{ (u, \gamma_a(u)) \;{\mbox{ with }}
u \in (u_1,u_2) \big\},\\
&\mathcal{T}_2:= \left\{ \left( u, \frac{u}{{\rho}c} \right) \;{\mbox{ with }}
u \in (u_1,u_2) \right\}\\
{\mbox{and }}\quad &
\mathcal{T}_3:=\left\{ (u_2, v) \;{\mbox{ with }}
v \in \left( \frac{u_2}{{\rho}c},\min\{\gamma_a(u_2),1\}\right) \right\}
\end{split}\end{equation*}
with the convention that $\partial \mathcal{T}$ contains $\mathcal{T}_3$
only if the second possibility in (<ref>) occurs.
We notice that the set $\mathcal{T}_1$ is an orbit for the system, and
thus the component of the velocity in the normal direction is null.
On $\mathcal{T}_2$, we have that the sign of
the component of the velocity in the inward normal direction is given by
\begin{equation}\label{ch2der:r}
\begin{split}&
(\dot{u},\dot{v})\cdot\left(-\frac1{\rho c},1\right)=
\dot{v} - \frac{1}{{\rho}c} \dot{u} = {\rho} v(1-u-v)
-au - \frac{u}{{\rho}c}(1-u-v) + \frac{au}{\rho} \\
&\qquad\qquad = \frac{u}{c} \left( 1-u-\frac{u}{{\rho}c} \right)
\left( 1 - \frac{1}{{\rho}} \right) -au\left( 1-\frac{1}{\rho}
\right) \\
&\qquad\qquad = \frac{u}{c} \left(1-\frac{1}{\rho} \right) \left(
1-u-\frac{u}{{\rho}c} -ac \right) .
\end{split}
\end{equation}
Notice that for $u \geq u_s$ we have that
\begin{equation}\label{ch2acapo}
1-u-v -ac \leq0,
\end{equation}
thus the sign of the last term in (<ref>) is determined by
the factor $1-\frac{1}{\rho}$.
Consequently, since ${\rho}<1$,
the component of the velocity in the inward normal direction is positive.
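The identity in the display above can be verified numerically as well. The sketch below (illustrative only, with arbitrary sample values of $\rho<1$, $c$, $a$) uses the vector field $\dot u=u(1-u-v-ac)$, $\dot v=\rho v(1-u-v)-au$ employed throughout:

```python
# Illustrative check of the identity
#   v' - u'/(rho*c) = (u/c)*(1 - 1/rho)*(1 - u - u/(rho*c) - a*c)
# on the line v = u/(rho*c), with sample parameters (rho < 1, ac < 1).
rho, c, a = 0.6, 1.5, 0.2
u_s = rho * c * (1 - a * c) / (1 + rho * c)   # abscissa of the saddle

def inward_normal(u):
    v = u / (rho * c)
    du = u * (1 - u - v - a * c)
    dv = rho * v * (1 - u - v) - a * u
    return dv - du / (rho * c)

for k in range(1, 100):
    u = k / 100
    rhs = (u / c) * (1 - 1 / rho) * (1 - u - u / (rho * c) - a * c)
    assert abs(inward_normal(u) - rhs) < 1e-12   # identity holds
    if u >= u_s:
        assert inward_normal(u) >= -1e-12        # nonnegative for u >= u_s
```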
Furthermore, in the case in which the second possibility in (<ref>) occurs,
we also check the sign of the component of the velocity in the inward normal direction
along $\mathcal{T}_3$. In this case, if $\gamma_a(u_2)<1$ then $u_2=1$,
and therefore we find that
$$(\dot{u},\dot{v})\cdot\left(-1 ,0 \right)=-\dot{u}=-u(1-u-v)+acu=u(u+v+ac-1),$$
which is positive. If instead $\gamma_a(u_2)=1$, then
$$(\dot{u},\dot{v})\cdot\left(-1 ,0 \right)=-\dot{u}=-u(1-u-v)+acu=u(u+v+ac-1),$$
which is positive, thanks to (<ref>).
We also point out that there are no cycles in $\mathcal{T}$, since $\dot{u}$
has a definite sign there. These considerations and the
Poincaré-Bendixson Theorem (see e.g. [113])
give that the $\omega$-limit set of $(u(0),v(0))$
can be either an equilibrium or a union of (finitely many)
equilibria and non-closed orbits connecting these equilibria.
Since $(0,0)$ and $(0,1)$ do not belong to the closure of $\mathcal{T}$,
in this case the only possibility is that the $\omega$-limit is the equilibrium $(u_s,v_s)$.
Consequently, we have that $u_1=u_s$, and that (<ref>)
is satisfied.
Accordingly, in light of (<ref>), we have that the
set $\mathcal{T}$
is contained in the stable manifold of $(u_s,v_s)$, which is in contradiction
with the definition of $\mathcal{T}$.
Hence, (<ref>) is established, as desired.
Now we show that strict inequality holds true in (<ref>)
if $u\in(u_s,u_{\mathcal{M}})$.
To this end, we suppose by contradiction that
there exists $\bar{u}\in (u_s,u_{\mathcal{M}})$ such that
\begin{equation}\label{ch2equality}
\gamma_a(\bar{u})=\frac{\bar{u}}{{\rho}c}.
\end{equation}
Now, since (<ref>) holds true, we have that
the line $v-\frac{u}{{\rho}c}=0$ is tangent to the curve $
v=\gamma_a(u)$ at $(\bar{u}, \gamma_a(\bar{u}))$,
and therefore at this point the components of the velocity
along the normal directions to the curve and to the line coincide.
On the other hand,
the normal derivative at a point on
the line has a sign, as computed in (<ref>), while the normal derivative
to $v=\gamma_a(u)$ is $0$ because the curve is an orbit.
This, together with (<ref>), proves that equality
in (<ref>) holds true if $u=u_s$, but strict inequality holds true
for all $u\in(u_s,u_{\mathcal{M}})$,
and thus
the proof of Lemma <ref> is complete.
For each $a>0$, we define $(u_d^a, v_d^a)\in [0,1]\times[0,1]$ as the unique intersection of the graph of $\gamma_a$ with the line $\{v=1-u\}$, that is the solution of the system
\begin{equation}\label{ch2ki87yh556g}
\left\{
\begin{array}{l}
v_d^a=\gamma_a(u_d^a),\\
v_d^a=1- u_d^a.
\end{array}
\right.
\end{equation}
We recall that the above intersection is
unique since the function $\gamma_a$
is increasing. Also, by construction,
\begin{equation}\label{ch2CALM}
u_d^a\le u_{\mathcal{M}}.
\end{equation}
Now, recalling (<ref>) and making explicit the dependence on $a$ by writing $u_s^a$
(with the convention in (<ref>)), we give the following result:
We have that:
\begin{itemize}
\item For $\rho<1$, for all $a^*>0$ it holds that
\begin{equation}\label{ch21304b}
\gamma_a(u) \leq \gamma_{a^*}(u) \quad \text{for all} \ a > a^* \ \text{and for all} \ u\in[u_s^{a^*}, u_d^{a^*}].
\end{equation}
\item For $\rho>1$, for all $a^*>0$ it holds that
\begin{equation}\label{ch21819b}
\gamma_a(u) \leq \gamma_{a^*}(u) \quad\text{for all} \ a < a^* \ \text{and for all} \ u\in[u_s^{a^*}, u_d^{a^*}].
\end{equation}
\end{itemize}
We claim that
\begin{equation}\label{ch21306}
u_s^{a^*} < u_d^{a^*}.
\end{equation}
Indeed, when $a^* c\ge1$, we have that $u_s^{a^*} =0< u_d^{a^*}$
and thus (<ref>) holds true. If instead $
a^* c<1$, by (<ref>) and (<ref>) we have that
\begin{equation} \label{ch28uhj76tuyg6446r6f6}\gamma_{a^*}(u_s^{a^*})+u_s^{a^*}=1-a^* c< 1=
\gamma_{a^*}(u_d^{a^*})+u_d^{a^*}.\end{equation}
Also, since $\gamma_{a^*}$ is increasing,
we have that the map $r\mapsto \gamma_{a^*}(r)+r$
is strictly increasing. Consequently, we deduce from (<ref>) that (<ref>)
holds true in this case as well.
Now we suppose that $\rho<1$ and we prove (<ref>).
For this, we claim that, for every $a^*>0$ and every $a>a^*$,
\begin{equation}\label{ch2xcvbn881300}\gamma_{a}(
u_s^{a^*})\le\gamma_{a^*}( u_s^{a^*})\quad{\mbox{ with strict inequality when }}a^*c<1.
\end{equation}
To check this, we distinguish two cases.
If $a^*\in\left(0,\frac{1}{c}\right)$, then for all $a>a^*$
\begin{equation}\label{ch21300}
u_s^a=\max \left\{ 0, \rho c \frac{1-ac}{1+ \rho c} \right\} < \rho c \frac{1-a^*c}{1+ \rho c} =u_s^{a^*}.
\end{equation}
By (<ref>) and formula (<ref>) in Lemma <ref>, we have that
\begin{equation}\label{ch21647}
\gamma_a(u_s^{a^*}) < \frac{u_s^{a^*}}{\rho c} = \gamma_{a^*}(u_s^{a^*}) \quad \text{for all} \ a> a^*.
\end{equation}
If instead $a^*\geq \frac{1}{c}$, then $u_s^{a^*}=0$ and for all $a>a^*$ we have $u_s^a=0$. As a consequence,
\begin{equation}\label{ch21647b}
\gamma_{{a^*}}(u_s^{a^*})=\gamma_{{a}}(u_s^{a^*}) \quad \text{for all} \ a> a^* .
\end{equation}
The claim in (<ref>) thus follows from (<ref>) and (<ref>).
Furthermore, by Propositions <ref> and <ref>,
\begin{equation}\label{ch21641b}
\gamma_a'(0)= \frac{a}{\rho+ac-1} < \frac{{a^*}}{\rho+{a^*}c-1}=\gamma_{a^*}'(0) \quad \text{for all} \ a> a^*\ge\frac1c.
\end{equation}
Moreover, for all $a\ge a^*$ and $u>u_s^{a^*}$ it holds that, when $v=\gamma_{a^*}(u)$,
\begin{equation}\label{ch21623b}
\dot{u}=
u(1-u-\gamma_{a^*}(u)- ac) < u(1-u_s^{a^*}-v_s^{a^*}-ac)\le 0.
\end{equation}
Now, we establish that
\begin{equation}\label{ch2767675747372}
u(\rho c v-u)(1-u-v)(a-a^*) < 0 \ \ \text{for all} \ a > a^*, \ u
\in(u_s^{a^*}, u_d^{a^*}), \ v=\gamma_{a^*}(u).
\end{equation}
Indeed, for the values of $a$, $u$ and $v$ as in (<ref>) we have
that $v\le\gamma_{a^*}(u_d^{a^*})$ and hence
\begin{equation}\label{ch2767675747372-2}
(1-u-v)> (1-u_d^{a^*}-\gamma_{a^*}(u_d^{a^*}))=0.
\end{equation}
Moreover, by formula (<ref>) in Lemma <ref>,
for $u\in(u_s^{a^*}, u_d^{a^*})$ and $v=\gamma_{a^*}(u)$ and we have that
$$\rho c v -u =
\rho c \gamma_{a^*}(u) -u<0.$$
From this and (<ref>), we see that (<ref>) plainly follows, as desired.
As a consequence of (<ref>) and (<ref>), one deduces that,
for all $a > a^*$, $u\in(u_s^{a^*}, u_d^{a^*})$ and $v=\gamma_{a^*}(u)$,
\begin{equation}\label{ch21335}\begin{split}
&\frac{au- \rho v(1-u-v)}{acu-u(1-u-v)} - \frac{a^* u- \rho v(1-u-v)}{a^* cu-u(1-u-v)} \\=\,&
\frac{(a-a^*)c \rho uv(1-u-v)-(a-a^*) u^2(1-u-v)}{\big(a cu-u(1-u-v)\big)\big(a^* cu-u(1-u-v)\big)}
\\=\,&
\frac{(a-a^*)(1-u-v)u( c \rho v- u)}{\big(a cu-u(1-u-v)\big)\big(a^* cu-u(1-u-v)\big)}
\\ \le\,&0.
\end{split}
\end{equation}
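The purely algebraic identity behind the display above can be confirmed in exact rational arithmetic. The following Python sketch is sample-based and illustrative; it is of course not a substitute for the algebra in the text:

```python
# Exact-arithmetic check of the identity
#   (a*u-N)/(a*c*u-D) - (a**u-N)/(a**c*u-D)
#     = (a-a*)(1-u-v)*u*(c*rho*v-u) / product of the denominators,
# where N = rho*v*(1-u-v) and D = u*(1-u-v), over random rational samples.
from fractions import Fraction as F
import random

random.seed(0)
for _ in range(200):
    rho = F(random.randint(1, 30), 10)
    c = F(random.randint(5, 30), 10)
    a = F(random.randint(1, 20), 10)
    a_star = F(random.randint(1, 20), 10)
    u = F(random.randint(1, 9), 10)
    v = F(random.randint(1, 9), 10)
    N = rho * v * (1 - u - v)
    D = u * (1 - u - v)
    if a * c * u == D or a_star * c * u == D:
        continue                            # avoid vanishing denominators
    lhs = (a * u - N) / (a * c * u - D) - (a_star * u - N) / (a_star * c * u - D)
    rhs = (a - a_star) * (1 - u - v) * u * (c * rho * v - u) \
        / ((a * c * u - D) * (a_star * c * u - D))
    assert lhs == rhs                       # the identity holds exactly
```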
Now, we define
\begin{equation}\label{ch23456784jncdkc6knsbd vc83456789} {\mathcal{Z}}(u):=\gamma_a(u)-\gamma_{a^*}(u)\end{equation}
and we claim that
\begin{equation}\label{ch24jncdkc6knsbd vc8}
{\mbox{if~$u_o\in(u_s^{a^*}, u_d^{a^*})$ is such that~${\mathcal{Z}}(u_o)=0$,
then~${\mathcal{Z}}'(u_o)<0$.}}
\end{equation}
Indeed, since $\gamma_a$ is a trajectory for (<ref>),
if $(u_a(t),v_a(t))$ is a solution of (<ref>), we have that $
v_a(t)=\gamma_a(u_a(t))$, whence
\begin{equation}\label{ch2989u:SMNDnb csn44}
\begin{split}&\rho v_a(t)(1-u_a(t)-v_a(t)) -au_a(t)=
\dot v_a(t)=\gamma_a'(u_a(t))\,\dot u_a(t)\\&\qquad=
\gamma_a'(u_a(t))\big( u_a(t)(1-u_a(t)-v_a(t)) - acu_a(t)\big).
\end{split}
\end{equation}
Then, we let $v_o:=\gamma_a(u_o)$
and we notice that $v_o$ coincides also with $\gamma_{a^*}(u_o)$.
Hence, we
take trajectories of the system with parameter $a$
and $a^*$ starting at $(u_o,v_o)$,
and by (<ref>) we obtain that
\begin{eqnarray*}0
&>&\frac{au_o- \rho v_o(1-u_o-v_o)}{acu_o-u_o(1-u_o-v_o)}-
\frac{a^*u_o- \rho v_o(1-u_o-v_o)}{a^*cu_o-u_o(1-u_o-v_o)}\\&=&
\gamma'_a(u_o)-\gamma'_{a^*}(u_o)={\mathcal{Z}}'(u_o),
\end{eqnarray*}
which establishes (<ref>).
Now we claim that
\begin{equation}\label{ch2xx124ff469}\begin{split}&
{\mbox{there exists~$\underline{u}\in[u_s^{a^*}, u_d^{a^*}]$
such that~${\mathcal{Z}}(\underline{u})<0$}}\\&{\mbox{and~${\mathcal{Z}}(u)\le0$ for every~$u\in[u_s^{a^*},\underline{u}]$.}}
\end{split}\end{equation}
Indeed, if $a^*\in\left(0,\frac{1}{c}\right)$,
we deduce from (<ref>)
that ${\mathcal{Z}}( u_s^{a^*})<0$
and therefore (<ref>) holds true with $\underline{u}:=
u_s^{a^*}$. If instead $a^*\ge\frac{1}{c}$,
we have that $u_s^{a}=u_s^{a^*}=0$
and we deduce from (<ref>)
and (<ref>) that ${\mathcal{Z}}(u_s^{a^*})=0$
and ${\mathcal{Z}}'(u_s^{a^*})<0$, from which (<ref>)
follows by choosing $\underline{u}:=u_s^{a^*}+\epsilon$
with $\epsilon>0$ sufficiently small.
Now we claim that
\begin{equation}\label{ch2TBP-SP-EL-34}
{\mathcal{Z}}(u)\le0\qquad{\mbox{for every }}u\in[u_s^{a^*}, u_d^{a^*}].
\end{equation}
To prove this,
in light of (<ref>), it suffices to check that ${\mathcal{Z}}(u)\le0$
for every $u\in(\underline{u}, u_d^{a^*}]$.
Suppose not. Then there exists $u^\sharp\in(\underline{u}, u_d^{a^*}]$
such that ${\mathcal{Z}}(u)<0$ for all $u\in[\underline{u},u^\sharp)$
and ${\mathcal{Z}}(u^\sharp)=0$. This gives that ${\mathcal{Z}}'(u^\sharp)\ge0$.
But this inequality is in contradiction with (<ref>)
and therefore the proof of (<ref>)
is complete.
The desired claim in (<ref>) follows easily
from (<ref>), hence we focus now
on the proof of (<ref>).
To this end, we take $\rho>1$ and we
claim that, for every $a^*>0$ and every $a\in(0,a^*)$,
\begin{equation}\label{ch2xcvbn881300-ALT56}\gamma_{a}(
u_s^{a^*})\le\gamma_{a^*}( u_s^{a^*})\quad{\mbox{ with strict inequality when }}a^*c<1.
\end{equation}
To prove this, we first notice that,
if $a<a^*<\frac{1}{c}$, then
\begin{equation*}
u_s^{a^*}=\rho c \frac{1-a^*c}{1+\rho c} < \rho c \frac{1-ac}{1+\rho c} = u_s^a.
\end{equation*}
Hence by (<ref>) in
Lemma <ref> we have
\begin{equation*}
\gamma_a(u_s^{a^*}) < \frac{u_s^{a^*}}{\rho c} = \gamma_{a^*}(u_s^{a^*}) \quad \text{for} \ a<a^*<\frac{1}{c},
\end{equation*}
and this establishes (<ref>)
when $a^*\in\left(0,\frac{1}{c}\right)$.
Thus, we now focus on the case $a^*\geq \frac{1}{c}$.
In this situation, we have that $u_s^{a^*}=0$
and accordingly $\gamma_a(u_s^{a^*})=
\gamma_a(0)=\gamma_{a^*}(0) =
\gamma_{a^*}(u_s^{a^*})$, which completes the proof
of (<ref>).
In addition, by Propositions <ref> and <ref> we have that
\begin{equation}\label{ch21821}
\gamma_a'(0)= \frac{a}{\rho-1+ac} < \frac{{a^*}}{\rho-1+{a^*}c}=\gamma_{a^*}'(0) \quad \text{for} \ a\in\left[\frac{1}{c}, {a^*}\right).
\end{equation}
Moreover, for $u>u_s^a$, if $v=\gamma_a(u)$ we have that $v>\gamma_a(u_s^a)=v_s^a$, thanks to the monotonicity of $\gamma_a$,
and, as a result,
\begin{equation}\label{ch21804}
\dot{u}=u(1-u-v-ac)<u(1-u_s^a-v_s^a-ac)\le 0.
\end{equation}
Now we claim that, for all $ a< {a^*}$,
$u\in(u_s^{a^*}, u_d^{{a^*}})$ and $ v=\gamma_{{a^*}}(u)$,
we have
\begin{equation}\label{ch2473-bniu-1}
u(1-u-v)({a^*}-a)(u-\rho c v)<0.
\end{equation}
Indeed, by the monotonicity of $\gamma_{{a^*}}$,
in this situation we have that $v\le\gamma_{{a^*}}(u^{a^*}_d)$,
and therefore,
by (<ref>),
\begin{equation}\label{ch2473-bniu-2}
1-u-v >1-u_d^{a^*}-\gamma_{{a^*}}(u_d^{a^*})=1-u_d^{a^*}-1+u_d^{a^*}=0. \end{equation}
Moreover, by (<ref>)
in Lemma <ref>,
we have that $ \gamma_{a^*}(u) > \frac{u}{\rho c}$, and
hence $u-\rho c v< 0$. Combining this inequality with (<ref>),
we obtain (<ref>), as desired.
Now, by (<ref>),
for all $ a < {a^*}$,
$u\in(u_s^{a}, u_d^{{a^*}})$
and $v=\gamma_{{a^*}}(u)$,
$$0<-u(1-u-v-ac)=acu-u(1-u-v) <
{a^*} cu-u(1-u-v),$$
and then, by (<ref>),
\begin{equation}\label{ch21800}
\begin{split}&
\frac{au- \rho v(1-u-v)}{acu-u(1-u-v)} - \frac{{a^*} u- \rho v(1-u-v)}{{a^*} cu-u(1-u-v)} \\=\,&
\frac{u(1-u-v)({a^*}-a)(u-\rho c v)}{\big(acu-u(1-u-v)\big)
\big({a^*} cu-u(1-u-v)\big)}
\\ <\,&0.
\end{split}
\end{equation}
Now we recall the definition of ${\mathcal{Z}}$
in (<ref>)
and we claim that
\begin{equation}\label{ch2567890-4jncdkc6knsbd vc8}
{\mbox{if~$u_o\in(u_s^{a^*}, u_d^{a^*})$ is such that~${\mathcal{Z}}(u_o)=0$,
then~${\mathcal{Z}}'(u_o)<0$.}}
\end{equation}
To prove this, we let $v_o:=\gamma_a(u_o)$,
we notice that $v_o=\gamma_{a^*}(u_o)$, we
recall (<ref>)
and apply it to a trajectory starting at $(u_o,v_o)$,
thus finding that
\begin{eqnarray*}
&&\rho v_o(1-u_o-v_o) -au_o=
\gamma_a'(u_o)\big( u_o(1-u_o-v_o) - acu_o\big).
\end{eqnarray*}
This and (<ref>) yield that
\begin{eqnarray*}
0>\frac{au_o- \rho v_o(1-u_o-v_o)}{acu_o-u_o(1-u_o-v_o)} - \frac{{a^*} u_o- \rho v_o(1-u_o-v_o)}{{a^*} cu_o-u_o(1-u_o-v_o)} =\gamma_a'(u_o)-\gamma_{a^*}'(u_o)={\mathcal{Z}}'(u_o),
\end{eqnarray*}
which proves the desired claim in (<ref>).
We now point out that
\begin{equation}\label{ch26879977xx124ff469}\begin{split}&
{\mbox{there exists~$\underline{u}\in[u_s^{a^*}, u_d^{a^*}]$
such that~${\mathcal{Z}}(\underline{u})<0$}}\\&{\mbox{and~${\mathcal{Z}}(u)\le0$ for every~$u\in[u_s^{a^*},\underline{u}]$.}}
\end{split}\end{equation}
Indeed, if $a^*\in\left(0,\frac{1}{c}\right)$,
this claim follows directly from (<ref>)
by choosing $\underline{u}:=
u_s^{a^*}$, while if $a^*\ge\frac{1}{c}$,
the claim follows from (<ref>)
and (<ref>)
by choosing $\underline{u}:=u_s^{a^*}+\epsilon$
with $\epsilon>0$ sufficiently small.
Now we claim that
\begin{equation}\label{ch2jjjjdnfnfTBP-SP-EL-34}
{\mathcal{Z}}(u)\le0\qquad{\mbox{for every }}u\in[u_s^{a^*}, u_d^{a^*}].
\end{equation}
Indeed, by (<ref>),
we know that the claim is true for all $u\in[u_s^{a^*},\underline{u}]$.
Then, the claim for $u\in(\underline{u}, u_d^{a^*}]$
can be proved by contradiction,
supposing that there exists $u^\sharp\in(\underline{u}, u_d^{a^*}]$
such that ${\mathcal{Z}}(u)<0$ for all $u\in[\underline{u},u^\sharp)$
and ${\mathcal{Z}}(u^\sharp)=0$. This gives that $
{\mathcal{Z}}'(u^\sharp)\ge0$, which is in contradiction with (<ref>).
Having completed the proof of (<ref>),
one can use it to obtain the desired claim in (<ref>).
We now carry out the proof
of Theorem <ref>, analyzing separately the cases $\rho=1$, $\rho<1$
and $\rho>1$.
We notice that
\begin{equation}\label{ch21959}
\mathcal{V_{\mathcal{K}}}\subseteq
\mathcal{V_{\mathcal{A}}},
\end{equation}
since $\mathcal{K}\subset \mathcal{A}$.
Also, from Theorem <ref>, part (i), we get that $\mathcal{V}_{\mathcal{A}}=\mathcal{S}_c$, where $\mathcal{S}_c$ was defined in (<ref>).
On the other hand, by Lemma <ref>, we know that for $\rho=1$ and for all $a>0$ we have ${\mathcal{E}(a)}=\mathcal{S}_c$.
But since every constant function $a$ belongs to the set $\mathcal{K}$, we have $\mathcal{E}(a)\subseteq \mathcal{V}_{\mathcal{K}}$.
This shows that $\mathcal{V}_{\mathcal{A}} = {\mathcal{E}(a)}\subseteq \mathcal{V}_{\mathcal{K}}$, and together with (<ref>) concludes the proof.
We notice that
\begin{equation}\label{ch200--fg5996}
\mathcal{V_{\mathcal{K}}}\subseteq
\mathcal{V_{\mathcal{A}}},\end{equation}
since $\mathcal{K}\subset \mathcal{A}$.
To prove that the inclusion is strict, we aim to find a point $
(\bar{u}, \bar{v})\in \mathcal{V}_{\mathcal{A}} \setminus \mathcal{V}_{\mathcal{K}}$.
Namely, we have to prove that there exists $
(\bar{u}, \bar{v})\in \mathcal{V}_{\mathcal{A}}$ such that,
for all constant strategies $a>0$, we have that $(\bar{u}, \bar{v})\notin \mathcal{E}(a)$,
that is, by the characterization in Proposition <ref>,
it must hold true that $\bar{v} \geq \gamma_a(\bar{u})$ and $\bar{u}\leq u_{\mathcal{M}}^a$.
To do this, we define
\begin{equation}\label{ch2def:f}
f(u):= \frac{u}{c} + \frac{1-\rho}{1+\rho c}\quad {\mbox{ and }}\quad
m:= \min \left\{\frac{\rho c (c+1)}{1+\rho c}, 1 \right\}.
\end{equation}
By inspection, one can see that $(u, f(u))\in[0,1]\times[0,1]$ if and only
if $u\in [0,m]$.
We point out that, by (ii) of Theorem <ref>,
for $\rho <1$ and $u\in [u_s^0, m]$,
a point $({u}, {v})$ belongs to $\mathcal{V}_{\mathcal{A}}$
if and only if ${v} < f({u})$. Here $u_s^0$ is defined in (<ref>). We underline that the interval $[u_s^0, m]$ is nonempty since
\begin{equation}\label{ch22101}
u_s^0=\frac{\rho c}{1+\rho c}<\min \left\{\frac{\rho c (c+1)}{1+\rho c}, 1 \right\}= m.
\end{equation}
Now we point out that
\begin{equation}\label{ch21633}
m \leq u_{\mathcal{M}}^a .
\end{equation}
Indeed, by (<ref>) we already know that $m\leq 1$, thus if $u_{\mathcal{M}}^a=1$ the inequality in (<ref>) is true. On the other hand, when $u_{\mathcal{M}}^a<1$ we have that $(u_{\mathcal{M}}^a,1)\times(0,1)\subseteq{\mathcal{E}}(a)$.
This and (<ref>) give that $(u_{\mathcal{M}}^a,1)\times(0,1)\subseteq
\mathcal{V_{\mathcal{A}}}$. Hence, in view of (<ref>), we deduce that $\frac{\rho c(c+1)}{1+\rho c}\le u_{\mathcal{M}}^a$. In particular, we find that $m\le u_{\mathcal{M}}^a$,
and therefore (<ref>) is true also in this case.
With this notation, we claim the existence of a
value $\bar{v}\in(0,1]$
such that for all $a>0$ we have
$\gamma_a(m)\leq \bar{v} < f(m)$.
That is, we prove now that there
exists $\theta>0$ such that
\begin{equation}\label{ch20000}
\gamma_a(m)+ \theta < f(m) \quad {\mbox{ for all }} a>0.
\end{equation}
The strategy is to study two cases separately, namely we prove (<ref>)
for sufficiently small values of $a$ and then for the other
values of $a$.
To prove (<ref>) for small values of $a$, we start by
looking at
the limit function $\gamma_0$ defined in (<ref>).
One observes that
\begin{equation}\label{ch2wqffwoe3u8ry4}
\gamma_0(u_s^0) = v_s^0= \frac{1}{1+\rho c} = \frac{\rho c}{c(1+\rho c)}+\frac{1-\rho}{1+\rho c}=f(u_s^0).
\end{equation}
Moreover, for all $u\in(u_s^0, m]$,
we have that
$$\gamma_0'(u) =\frac{v_s^0}{(u_s^0)^\rho} \,\rho u^{\rho-1}<
\frac{v_s^0}{(u_s^0)^\rho} \,\rho (u_s^0)^{\rho-1}=\frac{\rho v_s^0}{u_s^0}=
\frac{1}{c}= f'(u).$$
Hence, applying the fundamental theorem of calculus to the functions $\gamma_{0}$ and $f$, we get
\begin{equation*}
\gamma_{0}(m)= \gamma_{0}(u_s^0) +\int_{u_s^0}^{m} \gamma_{0}'(u)\,du < f(u_s^0) +\int_{u_s^0}^{m} f'(u)\,du = f(m).
\end{equation*}
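The comparison between $\gamma_0$ and $f$ can be illustrated numerically, using the explicit formula $\gamma_0(u)=v_s^0\left(\frac{u}{u_s^0}\right)^\rho$ obtained from the derivative computed above. The sketch below uses arbitrary sample parameters with $\rho<1$ and is illustrative only:

```python
# Illustrative check that gamma_0(u_s^0) = f(u_s^0) and gamma_0(m) < f(m),
# using gamma_0(u) = v_s^0 * (u/u_s^0)^rho and sample parameters with rho < 1.
rho, c = 0.5, 2.0
u_s0 = rho * c / (1 + rho * c)
v_s0 = 1 / (1 + rho * c)
m = min(rho * c * (c + 1) / (1 + rho * c), 1.0)

def gamma0(u):
    return v_s0 * (u / u_s0)**rho

def f(u):
    return u / c + (1 - rho) / (1 + rho * c)

assert abs(gamma0(u_s0) - f(u_s0)) < 1e-12   # equality at u = u_s^0
assert gamma0(m) < f(m)                       # strict inequality at u = m
```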
Then, the quantity
$$\theta_1:= \frac{f(m)-\gamma_0(m)}{4}$$
is positive and
we have
\begin{equation}\label{ch21608}
\gamma_0(m)+ 2\theta_1 < f(m).
\end{equation}
Now, by the uniform convergence of $\gamma_a$ to $\gamma_0$
given by Lemma <ref>,
we know that there exists $\varepsilon\in \left(0,\frac1c\right)$ such that, if $a\in(0,\varepsilon]$,
\begin{equation}\label{ch2KJ444S}\underset{u\in [u_s^0,m]}{\sup } |\gamma_a(u)-\gamma_0(u)| < {\theta_1}.\end{equation}
By this and (<ref>), we obtain that
\begin{equation}\label{ch20000BIS}
\gamma_a(m) + {\theta_1} < f(m) \quad {\mbox{ for all }} a\in(0,\varepsilon] .
\end{equation}
We remark that formula (<ref>) will give the desired claim
in (<ref>) for conveniently small values of $a$.
We are now left with considering the case $a> \varepsilon$.
To this end, recalling (<ref>), (<ref>), by the first statement in
Lemma <ref>, used here with $a^*:=\varepsilon$,
we get
\begin{equation}\label{ch21304}
\gamma_a(u) \leq \gamma_{\varepsilon}(u) \quad \text{for all} \ a > \varepsilon \ \text{and for all} \ u\in[u_s^{\varepsilon}, u_d^{\varepsilon}].
\end{equation}
Now we observe that
\begin{equation}\label{ch28j8j8fb8i903-1}
u^a_d\ge u_s^\varepsilon.
\end{equation}
Indeed, suppose not, namely
\begin{equation}\label{ch28j8j8fb8i903-2}
u^a_d< u_s^\varepsilon.
\end{equation}
Then, by the monotonicity of $\gamma_a$, we have that $\gamma_a(u^a_d)\le\gamma_a( u_s^\varepsilon)$.
This and (<ref>) yield that $\gamma_a(u^a_d)\le\gamma_\varepsilon( u_s^\varepsilon)$. Hence,
the monotonicity of $\gamma_\varepsilon$ gives that $
\gamma_a(u^a_d)\le\gamma_\varepsilon( u_d^\varepsilon)$.
This and (<ref>) lead to $1-u^a_d\le1-u_d^\varepsilon$,
that is $u_d^\varepsilon\le u^a_d$. From this inequality,
using again (<ref>), we deduce that $u_d^\varepsilon<
u_s^\varepsilon$. This is in contradiction with (<ref>)
and thus the proof of (<ref>)
is complete.
We also notice that
\begin{equation}\label{ch28j8j8fb8i903-11}
u^a_d\ge u_d^\varepsilon.
\end{equation}
Indeed, suppose not, say
\begin{equation}\label{ch28j8j8fb8i903-12}
u^a_d< u_d^\varepsilon.\end{equation}
Then, by (<ref>), we have that $u^a_d\in[u_s^\varepsilon, u_d^\varepsilon]$ and therefore we can apply (<ref>)
to say that $
\gamma_a(u^a_d) \leq \gamma_{\varepsilon}(u^a_d)$.
Also, by the monotonicity of $\gamma_{\varepsilon}$,
we have that $\gamma_{\varepsilon}(u^a_d)\le \gamma_{\varepsilon}(u^\varepsilon_d)$.
With these items of information and (<ref>), we find that
$$ 1-u^a_d=\gamma_a(u^a_d) \leq
\gamma_{\varepsilon}(u^\varepsilon_d)=1-u^\varepsilon_d,$$
and accordingly $u^a_d\ge u^\varepsilon_d$.
This is in contradiction with (<ref>)
and establishes (<ref>).
Moreover, by (<ref>) and (<ref>),
we know that $u_s^0>u_s^{a^*}$, for every $a^*>0$.
Therefore, setting $\tilde u_d^{a^*}:=\min
\{u_d^{a^*},u_s^0\}$, we have that $\tilde u_d^{a^*}\in
[u_s^{a^*},u_d^{a^*}]$. Thus, we are in the position of
using the first statement
in Lemma <ref> with $a:=\varepsilon$
and deduce that
\begin{equation}\label{ch290i3883jj889203}
\gamma_\varepsilon (\tilde u_d^{a^*})\le\gamma_{a^*}(
\tilde u_d^{a^*})\qquad{\mbox{for all}}\quad a^*<\varepsilon.
\end{equation}
We also remark that
\begin{equation} \label{ch279ihkf843767676}u_d^{a^*}\to u_s^0\qquad{\mbox{as }}\;a^*\to0.\end{equation}
Indeed, up to a subsequence we can assume that $u_d^{a^*}\to \tilde u$ as $a^*\to0$, for some $\tilde u\in[0,1]$. Also, by (<ref>),
$$ \gamma_{a^*} (u_d^{a^*})=1-u_d^{a^*},$$
and then the uniform convergence of $ \gamma_{a^*}$
in Lemma <ref> yields that
$$ \gamma_{0} (\tilde u)=1-\tilde u.$$
This and (<ref>) lead to $\tilde u=u_d^0$. We also remark that
\begin{equation}\label{ch2SQund0-dis0}
u_d^0=u_s^0,
\end{equation}
in virtue of (<ref>).
We thus conclude that $\tilde u=u_s^0$
and the proof of (<ref>)
is thereby complete.
As a consequence of (<ref>),
we have that $\tilde u_d^{a^*}\to
u_s^0$ as $a^*\to0$. Hence,
using again the uniform convergence of $ \gamma_{a^*}$
in Lemma <ref>,
we obtain that $\gamma_{a^*}(
\tilde u_d^{a^*})\to\gamma_{0}(u^0_s)$.
From this and (<ref>), we conclude that
\begin{equation}\label{ch2SQund0-dis1}
\gamma_\varepsilon (u^0_s)\le\gamma_{0}(u^0_s).
\end{equation}
Now we claim that
\begin{equation}\label{ch29i9i9i78i9u-3934}
u_d^{\varepsilon} > u_s^0 .\end{equation}
Indeed, suppose, by contradiction,
\begin{equation}\label{ch29i9i9i78i9u-3934-0}u_d^{\varepsilon} \le u_s^0.\end{equation}
Then, the monotonicity
of $\gamma_{\varepsilon} $, together with (<ref>)
and (<ref>), gives that
$$ 1-u_d^{\varepsilon} =
\gamma_{\varepsilon} (u_d^{\varepsilon}) \le
\gamma_{\varepsilon} ( u_s^0)=1-u_s^0.$$
From this and (<ref>) we deduce that $u_d^{\varepsilon}=u_s^0$. In particular, we
have that $u^0_s\in(u_s^\varepsilon,u^\varepsilon_{\mathcal{M}})$.
Accordingly, by (<ref>),
$$ 1- u^0_s=
\gamma_{\varepsilon}(u_d^{\varepsilon})=
\gamma_{\varepsilon}(u^0_s)< \frac{u^0_s}{{\rho}c}.$$
As a consequence,
$$ u^0_s>\frac{\rho c}{1+\rho c},$$
and this is in contradiction with (<ref>).
The proof of (<ref>) is thereby complete.
As a byproduct of (<ref>)
and (<ref>), we have that
\begin{equation}\label{ch289ujfvdjjgjh599fghjkl6}
\gamma_{\varepsilon}(u^{\varepsilon}_d)=
1-u_d^{\varepsilon} <1- u_s^0=1- u_d^0=\gamma_0(u^0_d).
\end{equation}
Similarly, by means of (<ref>),
\begin{equation}\label{ch2Qcbvolr9fjevcanf9d4}
\gamma_a( u^a_d)= 1-u^a_d\le1- u_d^\varepsilon=
\gamma_\varepsilon(u_d^\varepsilon)=v_d^\varepsilon.
\end{equation}
In light of (<ref>), (<ref>),
(<ref>) and (<ref>),
we can write that
\begin{equation}\label{ch21731}
1>u_d^a \geq u_d^{\varepsilon} > u_s^0 >0 \quad \text{and} \quad 1> v_s^0 > v_d^{\varepsilon} \geq v_d^a >0.
\end{equation}
The figures illustrate the functions involved in the proof of Theorem <ref> for the case $\rho < 1$.
The two vertical lines correspond to the values $u_d^{\varepsilon}$ and $m$. The thick black line represents the boundary of $\mathcal{V}_{\mathcal{A}}$; the blue line is the graph of $\gamma_0(u)$; the dark violet lines delimit the area where $\gamma_{a}(u)$ for $a\leq\varepsilon$ might be; the red line is the upper limit of $\gamma_a(u)$ for $a>\varepsilon$. The image was realized using a simulation in Python for the values $\rho=0.35$ and $c=1.2$.
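A minimal sketch of the Python simulation mentioned in the caption is given below. The planar system for $(u(t),v(t))$ is not restated in this excerpt: the right-hand sides used here are reconstructed from the derivative formulas appearing in the proof (for instance, the expression for $\dot u$ and the computation of $\gamma_a'$), so they should be read as an assumption, not as the authors' verbatim definition.

```python
# Hedged sketch: the system below is reconstructed from the formulas in
# the text and is an assumption, not the authors' verbatim definition.
def vector_field(u, v, rho, c, a):
    du = u * (1.0 - u - v - a * c)        # reconstructed u-equation
    dv = rho * v * (1.0 - u - v) - a * u  # reconstructed v-equation
    return du, dv

def equilibrium(rho, c, a):
    # Setting du = dv = 0 with u > 0 gives 1 - u - v = ac and
    # v = u/(rho c), hence u_s = rho c (1 - ac)/(1 + rho c); this
    # matches the formula for u_s^m quoted later in the text.
    u_s = rho * c * (1.0 - a * c) / (1.0 + rho * c)
    return u_s, u_s / (rho * c)

def euler_orbit(u0, v0, rho, c, a, dt=1e-3, steps=20000):
    # Plain forward-Euler integration, enough to draw curves such as
    # gamma_a for a constant strategy a (the caption uses rho = 0.35
    # and c = 1.2).
    u, v = u0, v0
    path = [(u, v)]
    for _ in range(steps):
        du, dv = vector_field(u, v, rho, c, a)
        u, v = u + dt * du, v + dt * dv
        path.append((u, v))
    return path
```

As a consistency check, the vector field vanishes at the reconstructed equilibrium; for instance, with $\rho=0.35$, $c=1.2$ and the constant strategy $a=0.2$ one finds $(u_s,v_s)\approx(0.225,0.535)$.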
Now, to complete the proof of (<ref>) when $a>\varepsilon$, we consider two cases, depending on the order of $m$ and $u_d^{\varepsilon}$. If $u_d^{\varepsilon}\geq m$, by (<ref>) we have that $m<1$ and $f(m)=1$. Then,
\begin{equation}\label{ch21802}
\gamma_a(m) \leq \gamma_a(u_d^{\varepsilon}) \le
\gamma_{\varepsilon}(u_d^{\varepsilon})= v_d^{\varepsilon} < 1 = f(m),
\end{equation}
thanks to the monotonicity of $\gamma_a$, (<ref>) and (<ref>).
We define
$$ \theta_2:=\frac{1-v_d^{\varepsilon}}{2},$$
which is positive thanks to (<ref>). From (<ref>), we get that
\begin{equation}\label{ch21815}
\gamma_{a}(m) +\theta_2 \leq v_d^{\varepsilon} +\theta_2 <1= f(m).
\end{equation}
This formula proves the claim in (<ref>) for $a>\varepsilon$ and $u_d^{\varepsilon}\geq m$.
If instead $u_d^{\varepsilon}< m$, then we proceed as follows.
By (<ref>) we have
\begin{equation}\label{ch21722}
\gamma_a(u_d^{a}) =v_d^{a} \leq v_d^{\varepsilon} < v_s^0 = f(u_s^0).
\end{equation}
Now we set
$$ \theta_3:= \frac{f(u_d^{\varepsilon})-f(u_s^0)}{2}.$$
Using the definition of $f$ in (<ref>), we see that
$$ \theta_3
= \frac{u_d^{\varepsilon}-u_s^0}{2c} ,$$
and accordingly $\theta_3$ is positive, due to (<ref>).
From (<ref>) we have
\begin{equation}\label{ch21740}
\gamma_a(u_d^{a}) + \theta_3 < f(u_s^0) +\theta_3 < f(u_d^{\varepsilon}).
\end{equation}
Now we show that, on any trajectory $(u(t),v(t))$ lying
on the graph of $\gamma_{a}$, it holds that
\begin{equation} \label{ch21640}
\dot{v}(t) > \frac{\dot{u}(t)}{c} \quad \text{provided that} \ u(t)\in( u_d^a,u^a_{\mathcal{M}}) .
\end{equation}
To prove this, we first observe that $u(t)>u_d^{a}> u_s^{a}$,
thanks to (<ref>).
Hence, we can exploit formula (<ref>) of Lemma <ref> and get that
\begin{equation}\label{ch28yh78749in2fd9}
\gamma_a(u(t)) -
\frac{u(t)}{\rho c}<0.\end{equation}
Also, by the monotonicity of $\gamma_a$
and (<ref>),
$$\gamma_a(u(t))\ge \gamma_a(u_d^a) = 1-u_d^a > 1-u(t).$$
From this and (<ref>) it follows that
\begin{equation*}
\left(\dot{v}(t) - \frac{\dot{u}(t)}{c} \right)= \rho \left(\gamma_a(u(t)) -
\frac{u(t)}{\rho c} \right) (1-u(t)-\gamma_a(u(t))) > 0
\end{equation*}
provided that $ u(t)
\in( u_d^a,u^a_{\mathcal{M}}) $, and this proves (<ref>).
In addition, for such a trajectory $(u(t),v(t))$ we have that
\begin{equation*}\begin{split}&
\dot{u}(t)=u(t)\,(1-u(t)-\gamma_a(u(t))- ac) \\&\qquad\qquad< u(t)\,(1-u(t)-\gamma_a(u_d^a))=u(t)\,(1-u(t)-1+u_d^a)<0,\end{split}
\end{equation*}
provided that $ u(t)
\in( u_d^a,u^a_{\mathcal{M}}) $.
From this and (<ref>), we get
\begin{equation*}
\gamma_a'(u(t))= \frac{\dot{v}(t)}{\dot{u}(t)} < \frac{1}{c} = f'(u(t)),
\end{equation*}
provided that $ u(t)
\in( u_d^a,u^a_{\mathcal{M}}) $.
Consequently, taking as initial datum of the trajectory
an arbitrary point $(u,\gamma_a(u))$ with $u\in
( u_d^a,u^a_{\mathcal{M}}) $, we can write that, for all $u\in( u_d^a,u^a_{\mathcal{M}})$,
\begin{equation*}
\gamma_a'(u)< f'(u).
\end{equation*}
As a result, integrating and using (<ref>),
for all $u\in( u_d^a,u^a_{\mathcal{M}})$, we have
\begin{equation*}
\gamma_a(u)=
\gamma_{a}(u_d^a)+ \int_{u_d^a}^{u}\gamma_a'(\tau)\,d\tau
< \gamma_{a}(u_d^a)+ \int_{u_d^a}^{u}f'(\tau)\,d\tau=\gamma_{a}(u_d^a)+f(u)-f(u_d^a).
\end{equation*}
Then, making use of (<ref>), for $u\in( u_d^a,u^a_{\mathcal{M}})$,
\begin{equation}\label{ch28781jh98172omOS} \gamma_a(u) + \theta_3 < \gamma_{a}(u_d^a)+f(u)-f(u_d^a) + \theta_3\le
f(u_d^{\varepsilon})+f(u)-f(u_d^{a}).
\end{equation}
Also, recalling (<ref>)
and the monotonicity of $f$, we see that $f(u_d^{\varepsilon})\le
f(u_d^{a})$. Combining this and (<ref>),
we deduce that
\begin{equation}\label{ch28781jh98172omOS-0987654-PRE}
\gamma_a(u) + \theta_3 <f(u)\qquad{\mbox{for all }}u\in( u_d^a,u^a_{\mathcal{M}}).
\end{equation}
We also observe that if $u\in (u_d^\varepsilon, u_d^a]$,
then the monotonicity of $\gamma_a$ yields that $\gamma_a(u)\le
\gamma_a(u_d^a)$. It follows from this and (<ref>)
that $\gamma_a(u)+\theta_3 < f(u_d^{\varepsilon})$.
This and the monotonicity of $f$ give that
$$ \gamma_a(u)+\theta_3 < f(u)
\qquad{\mbox{for all }}u\in(u_d^\varepsilon, u_d^a].$$
Comparing this with (<ref>),
we obtain
\begin{equation*}
\gamma_a(u) + \theta_3 <f(u)\qquad{\mbox{for all }}u\in( u_d^\varepsilon,u^a_{\mathcal{M}})
\end{equation*}
Hence, by continuity,
\begin{equation}\label{ch28781jh98172omOS-0987654}
\gamma_a(u) + \theta_3 \le f(u)\qquad{\mbox{for all }}u\in[u_d^\varepsilon,u^a_{\mathcal{M}}].
\end{equation}
Now, in view of (<ref>), we have that $m\in[u_d^\varepsilon,u^a_{\mathcal{M}}]$.
Consequently, we can utilize (<ref>)
with $u:=m$ and find that
\begin{equation}\label{ch2a}
\gamma_a(m) + \theta_3 \le f(m)
\end{equation}
which gives (<ref>) in the case $a>\varepsilon$ and $u_d^{\varepsilon} < m$
(say, in this case with $\theta\le\theta_3/2$).
That is, by (<ref>), (<ref>) and (<ref>)
we obtain that (<ref>) holds true with
\begin{equation*}
\theta :=\frac12\, \min \left\{ \theta_1, \ \theta_2 , \ \theta_3 \right\}.
\end{equation*}
If we choose $\bar{v}:= f(m)-\frac{\theta}{2}$ we have that
\begin{equation}\label{ch21643}
0 < \gamma_{a}(m) \leq \bar{v} < f(m) \leq 1.
\end{equation}
This completes the proof of Theorem <ref> when $\rho<1$, in light of
the characterizations of $\mathcal{E}(a)$ and $\mathcal{V}_{\mathcal{A}}$ from Proposition <ref> and Theorem <ref>, respectively.
Now we focus on the case $\rho>1$.
As before, the inclusion $\mathcal{V_{\mathcal{K}}}\subseteq \mathcal{V_{\mathcal{A}}}$ is trivial since $\mathcal{K}\subset \mathcal{A}$.
To prove that it is strict, we aim to find a point $(\bar{u}, \bar{v})\in \mathcal{V}_{\mathcal{A}}$ such that $(\bar{u}, \bar{v})\notin \mathcal{V}_{\mathcal{K}}$.
Thus, we have to prove that there exists $
(\bar{u}, \bar{v})\in \mathcal{V}_{\mathcal{A}}$ such that,
for all constant strategies $a>0$, we have that $(\bar{u}, \bar{v})\notin \mathcal{E}(a)$.
To this end, using the characterizations given in Proposition <ref> and Theorem <ref>, we claim that
\begin{equation}\begin{split}\label{ch21219}
&\mbox{there exists a point~$(\bar{u}, \bar{v})\in[0,1]\times[0,1]$ } \\ &\mbox{satisfying~$u_{\infty}\leq\bar{u}\leq u_{\mathcal{M}}^a$ and~$\gamma_a(\bar{u}) \leq \bar{v} < \zeta (\bar{u})$ for all~$a>0$.}
\end{split}\end{equation}
For this, we let
\begin{equation*}
m:= \min\left\{1, \frac{c}{(c+1)^{\frac{\rho-1}\rho}} \right\} .
\end{equation*}
By (<ref>) one sees that
\begin{equation}\label{ch21657}
u_{\infty} < m.
\end{equation}
In addition, we point out that
\begin{equation}\label{ch21633-0987654gfhyf}
m \leq u_{\mathcal{M}}^a .
\end{equation}
Indeed, since $m\leq 1$, if $u_{\mathcal{M}}^a=1$ the desired inequality is obvious. If instead $u_{\mathcal{M}}^a<1$ we have that $(u_{\mathcal{M}}^a,1)\times(0,1)\subseteq{\mathcal{E}}(a)\subseteq\mathcal{V_{\mathcal{K}}}\subseteq
\mathcal{V_{\mathcal{A}}}$. Hence, by (<ref>),
it follows that $\frac{c}{(c+1)^{\frac{\rho-1}\rho}}\le u_{\mathcal{M}}^a$,
which leads to (<ref>), as desired.
Now we claim that there exists $\theta >0$ such that
\begin{equation}\label{ch20017}
\gamma_a(m) + \theta < \zeta(m) \quad {\mbox{for all }}\; a>0.
\end{equation}
We first show some preliminary facts for $\gamma_a(u)$.
For all $a>0$, we have that $\mathcal{E}(a)\subseteq \mathcal{V}_{\mathcal{A}}$. Owing
to the characterization of $\mathcal{E}(a)$ from Proposition <ref> and of $\mathcal{V}_{\mathcal{A}}$ from Theorem <ref> (which can be used here, thanks
to (<ref>) and (<ref>)), we get that
\begin{equation}\label{ch21826}
\gamma_a(u)\le \frac{u}{c} \quad \text{for all } \ u\in(0, u_{\infty}]\ \text{ and }\ a>0.
\end{equation}
This is true in particular for $u=u_{\infty}$.
We choose
\begin{equation}\label{ch2pthm3:def:delta}
\delta \in\left(0, \frac{\rho-1}{c}\right)\quad{\mbox{ and }}\quad M:= \max\left\{ \frac{1}{c}, \frac{\rho + \frac{1}{c}+\delta}{\delta c u_{\infty}} \right\} ,
\end{equation}
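The choice of $\delta$ and $M$ above can be checked numerically. In the sketch below, the value $u_\infty=c/(c+1)$ is inferred from the identity $1-u_\infty-\frac{u_\infty}{c}=0$ used later in the text, and the parameter values are illustrative choices with $\rho>1$; the script is an independent sanity check, not part of the argument.

```python
# Sanity check (not part of the proof) of the choice of delta and M.
# u_inf = c/(c+1) is inferred from 1 - u_inf - u_inf/c = 0 (an identity
# appearing later in the text); parameters are illustrative.
rho, c = 2.3, 1.3
u_inf = c / (c + 1.0)

delta = 0.5 * (rho - 1.0) / c          # any delta in (0, (rho-1)/c)
M = max(1.0 / c, (rho + 1.0 / c + delta) / (delta * c * u_inf))

# For every a > M, the lower bound used for the normal velocity,
# -(rho + 1/c + delta) + delta*a*c*u_inf, is strictly positive.
a = M + 0.1
bound = -(rho + 1.0 / c + delta) + delta * a * c * u_inf
```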
and we prove (<ref>) by treating separately the cases $a>M$ and $a\in(0, M]$.
We first consider the case $a>M$.
We let $(u(t),v(t))$ be a trajectory for (<ref>) lying on $\gamma_a$ and we show that
\begin{equation}\label{ch21721}
\dot{v}(t)- \left( \frac{1}{c}+\delta \right)\dot{u} (t)> 0 \quad \text{provided that } \ u(t)>u_{\infty}\
{\mbox{ and }}\ a>M.
\end{equation}
\end{equation}
To check this, we observe that
\begin{align*}
\dot{v}(t)- \left( \frac{1}{c}+\delta \right)\dot{u} (t)&= \left[ \rho \gamma_{a}(u(t))
-\left( \frac{1}{c}+\delta \right) u(t)\right](1-u(t)-\gamma_{a}(u(t)))+ \delta acu(t) \\
& \geq - \left\vert \rho + \frac{1}{c} + \delta \right\vert
+ \delta a c u_{\infty} >0,
\end{align*}
where the last inequality is true thanks to the hypothesis $a>M$ and the definition of $M$
in (<ref>). This proves (<ref>).
Moreover, for $a>M\geq \frac{1}{c}$ we have $\dot{u}<0$.
From this, (<ref>) and the invariance of $\gamma_a$ for the flow, we get
\begin{equation}\label{ch21905}
\gamma_a'(u(t))=\frac{\dot v(t)}{\dot u(t)}<
\frac{1}{c}+ \delta ,
\end{equation}
provided that $u(t)>u_{\infty}$ and $a>M$.
For this reason and (<ref>), we get
\begin{equation}\label{ch2pthm3:1643}
\gamma_a(u(t)) = \gamma_a(u_{\infty}) + \int_{u_{\infty}}^{u(t)} \gamma_a'(\tau)\,d\tau\le \frac{u_{\infty}}{c} + \left( \frac{1}{c}+\delta \right)(u(t)-u_{\infty}),
\end{equation}
provided that $u(t)>u_{\infty}$ and $a>M$.
Furthermore, thanks to the choice of $\delta$ in (<ref>), we have
\begin{equation*}
\zeta'(u)= \frac{\rho u^{\rho-1}}{c u_{\infty}^{\rho-1}}>\frac{\rho}{c} > \frac{1}{c}+\delta \quad \text{for all } \ u>u_{\infty}.
\end{equation*}
Since also $\zeta(u_{\infty})=\frac{u_{\infty}}{c}$, by (<ref>) we deduce that
\begin{equation}\label{ch21621-PRE}
\gamma_a(u(t))\leq \frac{u_{\infty}}{c} + \left( \frac{1}{c}+\delta \right)(u(t)-u_{\infty}) < \zeta(u_{\infty}) + \int_{u_{\infty}}^{u(t)} \zeta'(\tau)\, d\tau = \zeta(u(t)),
\end{equation}
provided that $u(t)>u_{\infty}$ and $a>M$.
In particular, given any $u>u_\infty$, we can take a trajectory starting at $(u,\gamma_a(u))$
and deduce from (<ref>) that
\begin{equation*}
\gamma_a(u)\leq \frac{u_{\infty}}{c} + \left( \frac{1}{c}+\delta \right)(u-u_{\infty}) < \zeta(u_{\infty}) + \int_{u_{\infty}}^{u} \zeta'(\tau)\, d\tau = \zeta(u),
\end{equation*}
whenever $a>M$. We stress that, in light of (<ref>), we can take $u:=m$
in the above chain of inequalities, concluding that
\begin{equation*}
\gamma_a(m)\le \frac{u_{\infty}}{c} + \left( \frac{1}{c}+\delta \right)(m-u_{\infty}) < \zeta(m).
\end{equation*}
We rewrite this in the form
\begin{equation}\label{ch21621}
\gamma_a(m)\le \left( \frac{1}{c}+\delta \right)m-{\delta}u_{\infty}< \zeta(m).
\end{equation}
We define
\begin{equation}\label{ch2pthm3:def:theta}
\theta_1:=\frac{1}{2}\left[ \zeta(m)-\left( \frac{1}{c}+\delta \right)m +{\delta}u_{\infty} \right],
\end{equation}
that is positive thanks to the last inequality in (<ref>). Then by the first inequality in (<ref>) we have
\begin{equation*}
\gamma_a(m)+\theta_1
\le \left( \frac{1}{c}+\delta \right)m-{\delta}u_{\infty}+\theta_1=
\frac12\left[ \left( \frac{1}{c}+\delta \right)m-{{\delta}u_{\infty}}\right]
+\frac{ \zeta(m)}2. \end{equation*}
Hence, using again the last inequality in (<ref>), we obtain that
\begin{equation}\label{ch22001}
\gamma_a(m)+\theta_1<\zeta(m),\end{equation}
which gives the claim in (<ref>) for the case $a>M$.
Now we treat the case $a\in(0, M]$.
We claim that
\begin{equation}\label{ch21204}
u_d^M > u_{\infty}.
\end{equation}
Here, we are using the notation $u_d^M$ to denote the point $u_d^a$ when $a:=M$. To prove (<ref>) we argue as follows.
Since $M\geq \frac{1}{c}$, by Propositions <ref> and <ref> we have
\begin{equation}\label{ch21719}
\gamma_M'(0) = \frac{M}{\rho-1+Mc} < \frac{1}{c}.
\end{equation}
Moreover, since the graph of $\gamma_M(u)$ is a parametrization of a trajectory for (<ref>) with $a=M$, we have that $ \dot{v}(t)= \gamma_M'(u(t)) \dot{u}(t)$.
Hence, at all points $(\bar{u}, \bar{v})$ with $\bar{u}\in(0, u_{\infty})$ and $\bar{v}=\gamma_M(\bar{u})$ we have
\begin{equation}\label{ch21734}
\gamma_M ' (\bar{u}) = \frac{M \bar{u} - \rho \bar{v} (1-\bar{u}-\bar{v}) }{Mc \bar{u} - \bar{u}(1-\bar{u}-\bar{v})}.
\end{equation}
We stress that the denominator in the right hand side of (<ref>)
is strictly positive, since $M\geq \frac{1}{c}$ and $\bar u>0$.
In addition, we have that
\begin{equation}\label{ch21757}
\begin{split}
\frac{1}{c} - \frac{M \bar{u} - \rho \bar{v} (1-\bar{u}-\bar{v}) }{Mc \bar{u} - \bar{u}(1-\bar{u}-\bar{v})} = \frac{(\rho c \bar{v}-\bar{u})(1-\bar{u}-\bar{v}) }{Mc^2 \bar{u} - c\bar{u}(1-\bar{u}-\bar{v})}.
\end{split}
\end{equation}
Now we point out that
$$ u_s^M=0<\bar{u}<u_\infty<m\le u^M_{\mathcal{M}},$$
thanks to (<ref>) and (<ref>).
Hence, we can exploit formula (<ref>) in Lemma <ref>
with the strict inequality, thus obtaining that
\begin{equation}\label{ch28ujINtdensnumeok3965}
\rho c \bar{v}-\bar{u}=\rho c\gamma_M(\bar{u})-\bar{u} >0.\end{equation}
Moreover, by (<ref>),
$$ 1-\bar{u}-\bar{v} =
1-\bar{u}-\frac{\bar{u}}{c} > 1-u_{\infty}- \frac{u_{\infty}}{c}=0. $$
Therefore, using the latter estimate and (<ref>)
into (<ref>), we get that
\begin{equation*}
\begin{split}
\frac{1}{c} - \frac{M \bar{u} - \rho \bar{v} (1-\bar{u}-\bar{v}) }{Mc \bar{u} - \bar{u}(1-\bar{u}-\bar{v})} > 0.
\end{split}
\end{equation*}
From this and (<ref>), we have that
\begin{equation*}
\gamma_M'(u) < \frac{1}{c} \quad \text{for all} \ u\in(0,u_{\infty}).
\end{equation*}
This, together with (<ref>) and the fact that $\gamma_M(0)=0$, gives
\begin{equation*}
\gamma_M(u)
=\gamma_M(u)-\gamma_M(0)=\int_0^u \gamma_M'(\tau)\,d\tau
< \frac{u}{c}
\end{equation*}
for all $ u\in(0,u_{\infty}]$.
This inequality yields that
\begin{equation}\label{ch21810}
\gamma_M(u_{\infty}) < \frac{u_{\infty}}{c}= 1-u_{\infty}.
\end{equation}
Now, to complete the proof of (<ref>) we argue by contradiction
and suppose that the claim in (<ref>) is false, hence
\begin{equation}\label{ch21814}
u_d^M \leq u_{\infty}.
\end{equation}
Thus, by (<ref>), the monotonicity of $\gamma_M(u)$ and the definition of $u_d^M$ given in (<ref>), we get
\begin{equation*}
1-u_d^M = \gamma_M(u_d^M) \le \gamma_M(u_{\infty}) < 1-u_{\infty}
\end{equation*}
which is in contradiction with (<ref>). Hence, (<ref>) holds true, as desired.
Also, by the second statement in Lemma <ref>, used here with $a^*:=M$,
\begin{equation}\label{ch21819}
\gamma_a(u) \leq \gamma_{M}(u) \quad \text{for all } \ u\in[0, u_d^M].
\end{equation}
We claim that
\begin{equation}\label{ch2181967890p67890-4567890456789}
u_d^M\le u^a_d.
\end{equation}
Indeed, suppose, by contradiction, that
\begin{equation}\label{ch2181967890p67890-4567890456789PRE}
u_d^M> u^a_d.
\end{equation}
Then, by the monotonicity of $\gamma_a$ and (<ref>), used here with $u:=u^M_d$,
we find that
$$ 1-u^a_d=\gamma_a(u^a_d)\le\gamma_a(u^M_d) \leq \gamma_{M}(u^M_d)=1-u^M_d.$$
This entails that $u^a_d\ge u^M_d$, which is in contradiction with (<ref>),
and thus establishes (<ref>).
We note in addition that
\begin{equation}\label{ch209876543988878-00-181967890p67890-4567890456789PRE}
v_d^M =\gamma_M(u_d^M)=1-u_d^M
\end{equation}
thanks to the definition of $(u_d^M, v_d^M)$ and (<ref>).
Similarly, by (<ref>),
\begin{equation}\label{ch209876543988878-00-181967890p67890-4567890456789PRE098-08}
v_d^a =\gamma_a(u_d^a)=1-u_d^a\le 1-u_d^M
= v_d^M.
\end{equation}
Collecting the pieces of information in (<ref>), (<ref>),
and (<ref>),
we thereby conclude that, for all $a\in(0,M]$,
\begin{equation}\label{ch21834}
0< u_{\infty} < u_d^M \leq u_d^a<1 \qquad \text{and} \qquad 0< v_d^a \leq v_d^M < 1-u_{\infty} =:v_\infty<1.
\end{equation}
Now we consider two cases depending on the order of $m$ and $u_d^M$. If $u_d^M\geq m$, by (<ref>) we have $m<1$ and $\zeta(m)=1$. Accordingly,
for $a\in(0,M]$, by (<ref>) and (<ref>) we have
\begin{equation*}
\gamma_a(m) \le \gamma_{a}(u_d^M) \leq \gamma_{M}(u_d^M)=v_d^M < 1=\zeta (m).
\end{equation*}
Hence, we can define
$$ \theta_2:=\frac{1-v_d^M}{2}$$
and observe that $\theta_2$ is positive by (<ref>), thus obtaining that
\begin{equation}\label{ch21916}
\gamma_{a}(m) + \theta_2 < \zeta(m).
\end{equation}
This is the desired claim in (<ref>) for $a\in(0,M]$ and $u_d^M\geq m$.
The figure illustrates the functions involved in the proof of Theorem <ref> for the case $\rho > 1$.
The two vertical lines correspond to the values $u_d^{M}$ and $m$. The thick black line represents the boundary of $\mathcal{V}_{\mathcal{A}}$; the blue line is the graph of the line $v=\frac{u}{c}$; the dark violet line is the upper bound for $\gamma_{a}(u)$ for $a>M$; the red line is $\phi(u)$. The image was realized using a simulation in Python for the values $\rho=2.3$ and $c=1.3$.
If instead $u_d^M<m$, we consider the function
\begin{equation*}
\phi(u) := v_d^M\,\left(\frac{u}{u_d^M}\right)^{\rho} , \quad{\mbox{ for }} u\in[u_d^M, m]
\end{equation*}
and we claim that
\begin{equation}\label{ch21730}
\gamma_a(u)\leq\phi(u) \quad \text{for all } \ a\in(0,M]\ {\mbox{ and }} \ u\in[u_d^M,m].
\end{equation}
To prove this, we recall (<ref>) and the fact that $\gamma_a$ is an increasing function
to see that
\begin{equation}\label{ch21946}\gamma_a(u_d^M)\le
\gamma_a(u_d^a) =v_d^a \leq v_d^M = \phi(u_d^M) .
\end{equation}
Now we remark that
$$ \gamma_M(u_d^M)+u_d^M=1>0=\gamma_M(u_s^M)+u_s^M,$$
and therefore $u_d^M>u_s^M$. Notice also that $u_d^M<m\le
u^M_{\mathcal{M}}$, thanks to (<ref>).
As a result, we find that $\rho c \gamma_M(u_d^M) > u_d^M$ by inequality (<ref>)
in Lemma <ref>. Therefore, if $u\ge u_d^M$ and $v=\phi(u)$, then
\begin{equation*}\begin{split}&
au \left( 1-\rho c \frac{v_d^M}{(u_d^M)^{\rho}} u^{\rho -1} \right)
= au \left( 1- \frac{\rho c\gamma_M(u_d^M) }{(u_d^M)^{\rho}} u^{\rho -1} \right)\\&\qquad<
au \left( 1- \left(\frac{u}{u_d^M} \right)^{\rho -1} \right)
\leq 0=\rho \left( v- \frac{v_d^M}{(u_d^M)^{\rho}} u^{\rho} \right) (1-u-v).\end{split}
\end{equation*}
Using this and (<ref>), we deduce that, if $a\in(0,M]$, $u\in[u_d^M, m]$ and $v=\phi(u)$,
\begin{equation}\label{ch21858}\begin{split}&
\frac{au-\rho v (1-u-v)}{acu - u(1-u-v)}-\frac{v_d^M}{(u_d^M)^{\rho}} \rho u^{\rho-1}
\\=\;&
\frac{au-\rho v (1-u-v)
-\big(acu - u(1-u-v) \big)\,\frac{v_d^M}{(u_d^M)^{\rho}} \rho u^{\rho-1}
}{acu - u(1-u-v)}\\=\;&
\frac{au\left(1-\rho c\frac{v_d^M}{(u_d^M)^{\rho}} u^{\rho-1}\right)-\rho (1-u-v)\left(v-
\frac{v_d^M}{(u_d^M)^{\rho}} u^{\rho}\right)
}{acu - u(1-u-v)}\\ <\;&0.
\end{split}
\end{equation}
Now we take $ a\in(0,M]$, $u\in[u_d^M, m]$ and
suppose that $v=\phi(u)=\gamma_a(u)$,
we consider an orbit $(u(t),v(t))$ lying on $\gamma_a$ with $(u(0),v(0))=(u,v)$,
and we notice that, by (<ref>) and (<ref>),
\begin{equation}\label{ch21951}\begin{split}&
\gamma_a'(u)= \gamma_a'(u(0))=\frac{\dot v(0)}{\dot u(0)}
= \frac{au(0)-\rho v(0)\, (1-u(0)-v(0))}{acu (0)- u(0)(1-u(0)-v(0))}\\&\qquad
= \frac{au-\rho v\, (1-u-v)}{acu - u(1-u-v)}
< \frac{v_d^M}{(u_d^M)^{\rho}} \rho u^{\rho-1}
= \phi'(u).
\end{split}\end{equation}
To complete the proof of (<ref>),
we define
$$ {\mathcal{H}}(u):=\gamma_a(u)-\phi(u)$$
and we claim that for every $a\in(0,M]$ there exists $\underline{u}\in[u_d^M, m]$
such that
\begin{equation}\label{ch298989898kjkjkjkdfbv}
{\mbox{${\mathcal{H}}(\underline u)<0$ and~${\mathcal{H}}(u)\le0$ for every~$u\in[u_d^M,\underline u]$.}}
\end{equation}
Indeed, by (<ref>), we know that ${\mathcal{H}}(u_d^M)\le0$.
Thus, if ${\mathcal{H}}(u_d^M)<0$ then we can choose $
\underline u:=u_d^M$ and obtain (<ref>).
If instead ${\mathcal{H}}(u_d^M)=0$, we have that $
\gamma_a(u_d^M)=\phi(u_d^M)$ and thus we can
exploit (<ref>) and find that ${\mathcal{H}}'(u_d^M)<0$, from which
we obtain (<ref>).
Now we claim that, for every $ a\in(0,M]$ and $u\in[u_d^M, m]$,
\begin{equation}\label{ch2KL:0ksf3566}
{\mathcal{H}}(u)\le0.
\end{equation}
For this, given $ a\in(0,M]$, we define
$$ {\mathcal{L}}:=\{ u_*\in [u_d^M, m]{\mbox{ s.t. }}{\mathcal{H}}(u)\le0 {\mbox{ for every }}u\in[u_d^M,u_*]\}
\qquad{\mbox{and}}\qquad
\overline u:=\sup {\mathcal{L}}.$$
We remark that $\underline u\in{\mathcal{L}}$, thanks to (<ref>)
and therefore $\overline u$ is well defined.
We have that
\begin{equation}\label{ch212jnSikjm239gfvhb37}
\overline u=m,
\end{equation}
otherwise we would have that ${\mathcal{H}}(\overline u)=0$
and thus ${\mathcal{H}}'(\overline u)<0$, thanks to (<ref>),
which would contradict the maximality of $\overline u$.
Now, the claim in (<ref>) plainly follows from (<ref>).
We notice that by the inequalities in (<ref>) we have
\begin{equation}\label{ch22007}
\zeta(u)= \frac{v_{\infty}}{(u_{\infty})^{\rho}} u^{\rho}>
\frac{v_d^M}{(u_d^M)^{\rho}} u^{\rho}
= \phi(u)\qquad{\mbox{for all }}u\in[u_d^M,m].
\end{equation}
Then, we define
\begin{equation}\label{ch21958}
\theta_3:= \frac{\zeta(m)-\phi(m)}{2},
\end{equation}
that is positive thanks to (<ref>).
We get that
\begin{equation}\label{ch21912}
\phi(m)+\theta_3 < \zeta(m).
\end{equation}
From this and (<ref>), we conclude that
\begin{equation}\label{ch20115}
\gamma_a(m) + \theta_3 \leq \phi(m) + \theta_3 < \zeta(m) \quad \ \text{for} \ a\in(0,M].
\end{equation}
By (<ref>), (<ref>) and (<ref>) we have that (<ref>) is true for $\theta = \min \{\theta_1, \ \theta_2, \ \theta_3 \}$.
This also establishes the claim in (<ref>), and the proof is completed.
§.§ Proof of Theorem <ref>
Now, we can complete the proof of Theorem <ref> by building on the previous work.
Since the class of Heaviside functions $\mathcal{H}$ is contained in the class of piecewise continuous functions $\mathcal{A}$, we have that
\begin{equation}
\mathcal{V}_{\mathcal{H}}\subseteq \mathcal{V}_{\mathcal{A}},
\end{equation}
hence we are left with proving the converse inclusion. We treat separately the cases $\rho=1$, $\rho<1$ and $\rho>1$.
If $\rho=1$, the desired
claim follows from Theorem <ref>, part (i).
If $\rho<1$, we deduce from (<ref>) and (<ref>) that
\begin{equation}\label{ch2iwfewuguew387627}
\mathcal{V}_{\mathcal{A}}= \mathcal{F}_0 \cup \mathcal{P},
\end{equation}
where $\mathcal{P}$ has been defined
in (<ref>)
and $\mathcal{F}_0$ in (<ref>).
Moreover, by (<ref>), we have that
\begin{equation}\label{ch2iwfewuguew38762722} \mathcal{F}_0\subseteq \mathcal{V}_{\mathcal{K}}\subseteq \mathcal{V}_{\mathcal{H}}. \end{equation}
Also, in Proposition <ref> we construct a Heaviside winning strategy for every point in $ \mathcal{P}$. Accordingly, it follows that $ \mathcal{P} \subseteq \mathcal{V}_{\mathcal{H}}$.
This, (<ref>) and (<ref>) entail that $ \mathcal{V}_{\mathcal{A}} \subseteq \mathcal{V}_{\mathcal{H}}$, which completes the proof of
Theorem <ref> when $\rho<1$.
Hence, we now focus on the case $\rho>1$. By (<ref>)
and (<ref>),
\begin{equation}\label{ch28877SA}
\mathcal{V}_{\mathcal{A}}= \mathcal{S}_{c} \cup \mathcal{Q},
\end{equation}
where $\mathcal{S}_{c}$ was defined in (<ref>) and $\mathcal{Q}$ in (<ref>).
For every point $(u_0, v_0)\in\mathcal{S}_{c}$ there exists a constant winning strategy $\bar{a}$ for $(u_0, v_0)$, thanks to Proposition <ref>,
therefore $\mathcal{S}_{c}\subseteq
\mathcal{V}_{\mathcal{H}}$.
Moreover, in Proposition <ref> we constructed a Heaviside winning strategy for every point $(u_0, v_0)\in \mathcal{Q}$, whence $ \mathcal{Q} \subseteq \mathcal{V}_{\mathcal{H}}$. In light of these observations
and (<ref>), we see that also in this case $ \mathcal{V}_{\mathcal{A}} \subseteq \mathcal{V}_{\mathcal{H}}$ and the proof is complete.
§.§ Bounds on winning initial positions under pointwise
constraints for the possible strategies
This subsection is dedicated to the analysis of $\mathcal{V}_{\mathcal{A}}$ when we put some constraints on $a(t)$. In particular, we consider $M\geq m \geq 0$ with $M>0$ and the set $\mathcal{A}_{m,M}$ of the functions $a(t)\in\mathcal{A}$ with $m\leq a(t)\leq M$ for all $t>0$, and we denote by $\mathcal{V}_{m,M}$ the corresponding set of winning initial positions. We will prove Theorem <ref> via
a technical proposition giving informative bounds on $\mathcal{V}_{{m,M}}$.
For this, we denote by $(u_s^m,v_s^m)$ the point $(u_s,v_s)$ introduced in (<ref>)
when $a(t)=m$ for all $t>0$ (this makes sense when $mc<1$; when $mc\ge1$ we use the convention that $(u_s^m,v_s^m)=(0,0)$).
In this setting, we have the following result, providing explicit
bounds on the favorable set $\mathcal{V}_{{m,M}}$:
Let $M\geq m\geq 0$ with $M>0$, and let
\begin{equation}\label{ch2RANGEEP}
\varepsilon\in\left(0,\,\min\left\{\frac{M(c+1)}{M+1},1\right\}\right).\end{equation}
(i) If $\rho<1$, we have
\begin{equation}\label{ch28uj6tg574tygh}
\begin{split}
\mathcal{V}_{{m,M}} \subseteq \Big\{ (u,v)\in[0,1] \times [0,1] \;{\mbox{ s.t. }}\; v< f_{\varepsilon}(u)\Big\}
\end{split}
\end{equation}
where $f_{\varepsilon} : [0, u_{\mathcal{M}}]\to [0,1]$ is the continuous function given by
\begin{equation*}
f_{\varepsilon}(u):=\left\{
\begin{array}{ll}
\displaystyle \frac{(u_s^m)^{1-\rho}u^{\rho}}{\rho c } & \text{if} \ u\in [0, u_s^m), \\
\displaystyle\frac{u}{\rho c} & \text{if} \ u\in [u_s^m, u_s^0), \\
\displaystyle\dfrac{u}{c}+\frac{1-\rho}{1+\rho c} & \text{if} \ u\in [u_s^0, u_1), \\
hu +p & \text{if} \ u\in [u_1, 1],
\end{array} \right.
\end{equation*}
with the convention that the first interval is empty if $m\geq \frac{1}{c}$, the second interval is empty if $m=0$,
and $h$, $u_1$ and $p$ take the following values:
\begin{align*}
&h := \frac{1}{c}\left(1-\dfrac{\varepsilon^2(1-\rho)}{M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)}\right), \\ &u_1:=\frac{c(\rho c+\rho+\varepsilon-\varepsilon \rho)}{(1+\rho c)(c+1-\varepsilon)}, \\
&p :=\frac{c+1-hc(\rho c+\rho+\varepsilon-\varepsilon \rho)}{(1+\rho c)(c+1-\varepsilon)}.
\end{align*}
(ii) If $\rho>1$, we have
\begin{equation*}
\begin{split}
\mathcal{V}_{{m,M}} \subseteq \Big\{ (u,v)\in[0,1] \times [0,1] \;{\mbox{ s.t. }}\; v< g_{\varepsilon}(u)\Big\}
\end{split}
\end{equation*}
where $g_{\varepsilon} : [0, u_{\mathcal{M}}] \to [0,1]$ is the continuous function given by
\begin{equation*}
g_{\varepsilon}(u):=
\begin{dcases}
k\,u & \text{if} \ u\in [0, u_2), \\
\displaystyle\dfrac{u}{c} + q & \text{if} \ u\in [u_2, u_3), \\
\displaystyle\dfrac{(1-u_3)u^{\rho}}{(u_3)^{\rho}} & \text{if} \ u\in [u_3, 1]
\end{dcases}
\end{equation*}
for the following values:
\begin{align*}&
k:= \frac{(c+1-\varepsilon)M}{(\rho -1)\varepsilon c + (c+1-\varepsilon) Mc}, \qquad q:=
\frac{(kc-1)(1-\varepsilon)}{c(k-k\varepsilon+1)}, \\& u_2:=\frac{1-\varepsilon}{k-k\varepsilon+1}\qquad {\mbox{and}}\qquad
u_3:=\frac{c+1-\varepsilon}{(c+1)(k-k\varepsilon +1)}.
\end{align*}
We observe that it might be that for some $u\in[0,1]$ we have $f_{\varepsilon}(u)>1$ or $g_{\varepsilon}(u)>1$. In this case, the above proposition would produce the trivial result that $\mathcal{V}_{{m,M}} \cap (\{u\}\times[0,1]) \subseteq \{ u\}\times [0,1]$. On the other hand, a suitable choice of $\varepsilon$ would lead to nontrivial
consequences entailing, in particular, the proof of
Theorem <ref>.
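Before entering the proof, one can check numerically that the branches of $f_\varepsilon$ and $g_\varepsilon$ match at their junction points. The script below is an independent sanity check, not part of the argument: it transcribes the formulas for $h$, $u_1$, $p$, $k$, $q$, $u_2$ and $u_3$ stated above, with illustrative parameter values compatible with (<ref>).

```python
# Numerical sanity check (illustrative parameters, not part of the proof):
# the branches of f_eps and g_eps defined above match at their junctions.

# Case rho < 1: junctions of f_eps at u_s^m, u_s^0 and u_1.
rho, c, eps, m, M = 0.35, 1.2, 0.3, 0.1, 1.0
u_s_m = max(0.0, rho * c * (1.0 - m * c) / (1.0 + rho * c))
u_s_0 = rho * c / (1.0 + rho * c)
h = (1.0 / c) * (1.0 - eps**2 * (1.0 - rho)
                 / (M * (1.0 + rho * c) * (c + 1.0 - eps)**2
                    + eps * (rho * c + rho + eps - eps * rho)))
u_1 = c * (rho * c + rho + eps - eps * rho) / ((1.0 + rho * c) * (c + 1.0 - eps))
p = (c + 1.0 - h * c * (rho * c + rho + eps - eps * rho)) \
    / ((1.0 + rho * c) * (c + 1.0 - eps))

gap_f_usm = u_s_m**(1.0 - rho) * u_s_m**rho / (rho * c) - u_s_m / (rho * c)
gap_f_us0 = u_s_0 / (rho * c) - (u_s_0 / c + (1.0 - rho) / (1.0 + rho * c))
gap_f_u1 = (u_1 / c + (1.0 - rho) / (1.0 + rho * c)) - (h * u_1 + p)

# Case rho > 1: junctions of g_eps at u_2 and u_3.
rho2, c2, eps2, M2 = 2.3, 1.3, 0.3, 1.0
k = (c2 + 1.0 - eps2) * M2 / ((rho2 - 1.0) * eps2 * c2 + (c2 + 1.0 - eps2) * M2 * c2)
q = (k * c2 - 1.0) * (1.0 - eps2) / (c2 * (k - k * eps2 + 1.0))
u_2 = (1.0 - eps2) / (k - k * eps2 + 1.0)
u_3 = (c2 + 1.0 - eps2) / ((c2 + 1.0) * (k - k * eps2 + 1.0))

gap_g_u2 = k * u_2 - (u_2 / c2 + q)
gap_g_u3 = (u_3 / c2 + q) - (1.0 - u_3) * u_3**rho2 / u_3**rho2
```

All the gaps vanish up to floating-point rounding, which is consistent with the continuity claims established in the proof below.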
We start by proving the claim in (i). For this, we will show that
\begin{equation}\label{ch21642}
\mathcal{D}:=\Big\{ (u,v)\in[0,1] \times [0,1] \;{\mbox{ s.t. }}\; v\geq f_{\varepsilon}(u)\Big\} \subseteq \mathcal{V}_{m, M}^C,
\end{equation}
\end{equation}
where $\mathcal{V}_{m, M}^C$ is the complement of $\mathcal{V}_{m, M}$ in the topology of $[0,1]\times[0,1]$.
We remark that once (<ref>) is established, then the desired claim in (<ref>)
plainly follows by taking the complement sets.
To prove (<ref>) we first show that
\begin{equation}\label{ch2rPjhnfvvcc}
0 \leq u_s^m < u_s^0 < u_1 < 1.\end{equation}
Notice, as a byproduct, that the above inequalities also give that $f_{\varepsilon}$ is well defined.
To prove (<ref>) we notice that, by (<ref>), (<ref>) and (<ref>),
\begin{equation*}
0\leq u_s^m=\max \left\{ 0, \frac{1-mc}{1+\rho c}\,\rho c\right\} < \frac{\rho c}{1+\rho c} =u_s^0
\end{equation*}
(and actually the first inequality is strict if $m<\frac{1}{c}$). Next, one can check that, since $\varepsilon>0$,
$$ u_s^0-u_1=\frac{\rho c}{1+\rho c}
-\frac{c(\rho c+\rho+\varepsilon-\varepsilon \rho)}{(1+\rho c)(c+1-\varepsilon)}=-\frac{c\varepsilon}{(1+\rho c)(c+1-\varepsilon)}<0.$$
Furthermore, since $\varepsilon<1$,
$$ \frac{c(\rho c+\rho+\varepsilon-\varepsilon \rho)}{(1+\rho c)(c+1-\varepsilon)}-1=\frac{(\varepsilon-1)(c+1)}{(1+\rho c)(c+1-\varepsilon)}<0.$$
These observations prove (<ref>), as desired.
Now we point out that
\begin{equation}\label{ch297896705689045-0}
{\mbox{$f_{\varepsilon}$ is a continuous function. }}\end{equation}
Indeed, we have that
\begin{equation}\label{ch297896705689045-1}
\frac{(u_s^m)^{1-\rho}}{\rho c} (u_s^m)^\rho = \frac{u_s^m}{\rho c}\qquad{\mbox{ and }}\qquad
\frac{u_s^0}{\rho c} = \frac{u_s^0}{ c}+ \frac{1-\rho}{1+\rho c}.\end{equation}
Furthermore, by the definitions of $p$ and $u_1$ we see that
\begin{equation}\label{ch2767thisbc0-i6yjh00}
\begin{split} p\,&=
\frac{c+1}{(1+\rho c)(c+1-\varepsilon)}
-\frac{hc(\rho c+\rho+\varepsilon-\varepsilon \rho)}{(1+\rho c)(c+1-\varepsilon)}\\&
=\frac{c+1}{(1+\rho c)(c+1-\varepsilon)}-hu_1.\end{split}\end{equation}
Moreover, from the definition of $u_1$,
$$ \frac{u_1}{c}+\frac{1-\rho}{1+\rho c} = \frac{c+1}{(1+\rho c)(c+1-\varepsilon)}.$$
Combining this and (<ref>), we deduce that
\begin{equation}\label{ch2indeh8idenf4596}
\frac{u_1}{c}+\frac{1-\rho}{1+\rho c} = h u_1+p.
\end{equation}
This observation and (<ref>)
entail the desired claim in (<ref>).
Next, we show that
\begin{equation}\label{ch21601}
f_{\varepsilon}(u)>0 \quad \text{for} \ u>0.
\end{equation}
To prove this, we note that for $u\in(0,u_s^m)$ the function is a positive power of $u$ times the positive constant $\frac{(u_s^m)^{1-\rho}}{\rho c}$, hence it is positive. If $u\in[u_s^m, u_s^0)$ then $f_{\varepsilon}(u)$ is a linear function and it is positive since $\rho c >0$. On $[u_s^0, u_1)$, $f_{\varepsilon}(u)$ coincides with a linear function with positive slope, hence we have
$$ f_{\varepsilon}(u) \geq \underset{u\in[u_s^0, u_1)}{\min} f_{\varepsilon}(u)= f_{\varepsilon}(u_s^0)= \frac{u_s^0}{\rho c} >0. $$
By inspection one can check that $h>0$. Hence,
in the interval $[u_1,1]$ we have
$$ f_{\varepsilon}(u) \geq \underset{u\in[u_1, 1]}{\min} f_{\varepsilon}(u)= f_{\varepsilon}(u_1)\geq \frac{u_s^0}{\rho c} >0. $$
This completes the proof of (<ref>).
Let us notice that, as a consequence of (<ref>),
\begin{equation}\label{ch22344}
\mathcal{D} \cap\big( (0,1]\times \{0\} \big)= \varnothing.
\end{equation}
Now we show that
\begin{equation}\label{ch22352}
{ \mbox{for any strategy~$a\in\mathcal{A}_{m, M}$, no trajectory starting in~$\mathcal{D}$ leaves~$\mathcal{D}$.} }
\end{equation}
To this end, we notice that, since $\partial \mathcal{D} \cap \{v=0\}= \{(0,0)\}$, and the origin is an equilibrium,
we already have that no trajectory can exit $\mathcal{D}$ by passing through the points in $\partial \mathcal{D}\cap \partial ([0,1]\times[0,1])$. Hence, we are left with
considering the possibility of leaving $\mathcal{D}$ through $\partial \mathcal{D}\cap ((0,1)\times(0,1))$.
To exclude this possibility, we compute the velocity of a trajectory in the inward normal direction at $\partial \mathcal{D}\cap ((0,1)\times(0,1))$.
For every $u\in[0, u_s^m)$ we have that this normal velocity is
\begin{equation}\begin{split}\label{ch21614}&
\dot{v}- \frac{(u_s^m)^{1-\rho} \rho (u)^{\rho-1} \dot{u}}{\rho c } \\
&\qquad =\rho \left( v- \frac{(u_s^m)^{1-\rho} \, u^{\rho} }{\rho c } \right) (1-u-v) -au\left(1- \frac{(u_s^m)^{1-\rho} }{u^{1-\rho}} \right).
\end{split}\end{equation}
Notice that the term $
v- \frac{(u_s^m)^{1-\rho} \, u^{\rho} }{\rho c } $
vanishes on $\partial \mathcal{D}\cap ((0,1)\times(0,1))$
when $u\in[0, u_s^m)$. Also, for all $u\in[0, u_s^m)$ we have
\begin{equation*}
1- \frac{(u_s^m)^{1-\rho} }{u^{1-\rho}}<0,
\end{equation*}
thus the left hand side in (<ref>) is positive. This observation
rules out the possibility of leaving $\mathcal{D}$ through $\partial \mathcal{D}\cap ((0,1)\times(0,1))$
at points where $u\in[0, u_s^m)$.
It remains to exclude egresses at points of $\partial \mathcal{D}\cap ((0,1)\times(0,1))$
with $u\in[ u_s^m,1)$. We first consider this type of points when $u\in(u_s^m,u_s^0)$.
At these points, we have that the
velocity in the
inward normal direction on $\{ v=\frac{u}{\rho c} \}$ is
\begin{equation*}
\dot{v}- \frac{\dot{u}}{\rho c}= \left( \rho v - \frac{u}{\rho c} \right)(1-u-v) + au\left( \frac{1}{\rho} -1\right).
\end{equation*}
Expressing $u$ with respect to $v$ on $\partial \mathcal{D}\cap ((0,1)\times(0,1))$
with $u\in( u_s^m,u_s^0)$, we have
\begin{equation}\begin{split}\label{ch2Moiuyted645JN}
\dot{v}- \frac{\dot{u}}{\rho c}&=v \left( \rho-1 \right)(1-\rho c v-v) + a \rho cv\,\frac{1-\rho}{\rho} \\
&= v(1-\rho)(\rho c v + v-1 +ac).\end{split}
\end{equation}
We also remark that, for these points,
\begin{equation*}
v>v_s^m= \frac{1-mc}{1+\rho c}\ge\frac{1-ac}{1+\rho c} ,\end{equation*}
thanks to (<ref>). This gives that the quantity in (<ref>)
is strictly positive and, as a consequence,
we have excluded the possibility of
exiting $\mathcal{D}$
at points of $\partial \mathcal{D}\cap ((0,1)\times(0,1))$
with $u\in(u_s^m,u_s^0)$.
It remains to consider the case $u\in \{u_s^m\}\cup[u_s^0,1)$.
We first focus on the range $u\in (u_s^0,u_1)$.
In this interval, the velocity of a trajectory starting at a point $(u,v)\in\partial \mathcal{D}\cap ((0,1)\times(0,1))$
lying on the line $v=\frac{u}{c}+ \frac{1-\rho}{1+\rho c}$
in the inward normal direction with respect to $\partial \mathcal{D}$ is given by
\begin{equation}\label{ch2tqwfe3857uvcjycer4cubt}
\dot{v}- \frac{1}{c}\dot{u}= \left(\rho v - \frac{u}{c} \right)(1-u-v).
\end{equation}
We also observe that, in light of (<ref>),
$$ u>u_s^0=\frac{\rho c}{1+\rho c}, $$
and therefore, for any $u\in( u_s^0,u_1)$ lying on the above line,
\begin{equation*}
1-u-v= 1-u-\frac{u}{c} - \frac{1-\rho}{1+\rho c} =(c+1)\left(\frac{\rho}{1+\rho c} -\frac{u}{c} \right) < 0
\end{equation*}
\begin{equation*}
and
\rho v - \frac{u}{c} = \frac{\rho u}{c}+ \frac{\rho (1-\rho)}{1+\rho c}- \frac{u}{c} =(1-\rho)\left( \frac{\rho}{1+\rho c} - \frac{u}{c} \right) < 0 .
\end{equation*}
Using these pieces of information in (<ref>), we conclude that the
inward normal velocity of a trajectory starting at a point $(u,v)\in\partial \mathcal{D}\cap ((0,1)\times(0,1))$
with $u\in (u_s^0,u_1)$ is strictly positive. This gives that no trajectory can exit $\mathcal{D}$
at this type of points, and we need to exclude the case $u\in \{u_s^m, u_s^0\}\cup[u_1,1)$.
We consider now the interval $[u_1,1)$.
In this interval, the component of the velocity of a trajectory at a point on the straight
line given by $v=hu+p$ in the orthogonal inward pointing direction is
\begin{equation}\label{ch28gqwfOJHNsmeoout43906}\begin{split}&
(\dot{u}, \dot{v}) \cdot \frac{(-h, 1)}{\sqrt{1+h^2}} = \frac{ (\rho v -h u)(1-u-v) -au(1-hc) }{\sqrt{1+h^2}}\\
&\qquad =
\frac{((1-\rho)hu-\rho p )(u+v-1) -au(1-hc)}{\sqrt{1+h^2}}.
\end{split}\end{equation}
We observe that, if $u\in[u_1,1)$,
\begin{equation}\label{ch2jd723u9007432yhgvythgkliew}\begin{split}&(1-\rho)hu-\rho p \ge
(1-\rho)hu_1-\rho p=hu_1-\rho (hu_1+p) \\
&\qquad = h u_1-\rho \left(\frac{u_1}{c}+\frac{1-\rho}{1+\rho c} \right) =
h u_1-\rho \left(
\frac{\rho c+\rho+\varepsilon-\varepsilon \rho}{(1+\rho c)(c+1-\varepsilon)}
+\frac{1-\rho}{1+\rho c} \right) \\&\qquad=
h u_1-\frac{\rho (c+1)}{(1+\rho c)(c+1-\varepsilon)},
\end{split}\end{equation}
thanks to (<ref>).
We also remark that
\begin{equation*}\begin{split}
h u_1\,&=\left(1-\dfrac{\varepsilon^2(1-\rho)}{M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)}\right)\,\frac{ \rho c+\rho+\varepsilon-\varepsilon \rho}{(1+\rho c)(c+1-\varepsilon)} \\&=
\frac{ \rho c+\rho+\varepsilon-\varepsilon \rho}{(1+\rho c)(c+1-\varepsilon)}-
\dfrac{\varepsilon^2(1-\rho)\big(\rho c+\rho+\varepsilon-\varepsilon \rho\big)}{\big(
M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)\big)(1+\rho c)(c+1-\varepsilon)},
\end{split}
\end{equation*}
and consequently
\begin{eqnarray*}&&
h u_1-\frac{\rho (c+1)}{(1+\rho c)(c+1-\varepsilon)}=
\frac{ \varepsilon(1- \rho)}{(1+\rho c)(c+1-\varepsilon)}-
\dfrac{\varepsilon^2(1-\rho)\big(\rho c+\rho+\varepsilon-\varepsilon \rho\big)}{\big(
M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)\big)(1+\rho c)(c+1-\varepsilon)}\\&&\qquad=
\frac{ \varepsilon(1- \rho)}{(1+\rho c)(c+1-\varepsilon)}\Bigg(1-
\dfrac{\varepsilon \big(\rho c+\rho+\varepsilon-\varepsilon \rho\big)}{
M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)}\Bigg)\\&&\qquad=
\frac{ \varepsilon(1- \rho)}{(1+\rho c)(c+1-\varepsilon)}\cdot
\dfrac{M (1+\rho c)(c+1-\varepsilon)^2}{
M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)}\\&&\qquad=
\dfrac{\varepsilon M(1- \rho)(c+1-\varepsilon)}{
M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)}.
\end{eqnarray*}
From this and (<ref>), we gather that
\begin{equation}\label{ch2ILpredmnow55}
(1-\rho)hu-\rho p\ge
\dfrac{\varepsilon M(1- \rho)(c+1-\varepsilon)}{
M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)}.
\end{equation}
Furthermore, we point out that, when $u\in[u_1, 1)$ and $v=hu+p$,
\begin{equation*}\begin{split}&
u+v-1\ge u_1+hu_1+p-1 =
u_1+\frac{u_1}c+\frac{1-\rho}{1+\rho c}-1\\&\qquad
=\frac{(c+1)(\rho c+\rho+\varepsilon-\varepsilon \rho)}{(1+\rho c)(c+1-\varepsilon)}
-\frac{\rho(c+1)}{1+\rho c}
=\frac{\varepsilon(c+1)}{(1+\rho c)(c+1-\varepsilon)} >\frac{\varepsilon}{c+1-\varepsilon},
\end{split}\end{equation*}
thanks to (<ref>).
Combining this inequality and (<ref>), we deduce that
\begin{equation*}
((1-\rho)hu-\rho p )(u+v-1) >
\dfrac{\varepsilon^2 M(1- \rho)}{
M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)}.
\end{equation*}
Therefore, noticing that $h<\frac{1}{c}$,
\begin{eqnarray*}&&
((1-\rho)hu-\rho p )(u+v-1) -au(1-hc)\\&>&
\dfrac{\varepsilon^2 M(1- \rho)}{
M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)}-Mu(1-hc)\\&=&
\dfrac{\varepsilon^2 M(1- \rho)(1-u)}{
M (1+\rho c)(c+1-\varepsilon)^2 + \varepsilon (\rho c +\rho + \varepsilon-\varepsilon \rho)},
\end{eqnarray*}
which is strictly positive.
Using this information in (<ref>),
we can thereby exclude the possibility of leaving ${\mathcal{D}}$ through $
\partial \mathcal{D}\cap ((0,1)\times(0,1))$
with $u\in [u_1,1)$.
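The algebraic identity behind this step, namely that $hu_1$ exceeds $\frac{\rho(c+1)}{(1+\rho c)(c+1-\varepsilon)}$ by exactly $\frac{\varepsilon M(1-\rho)(c+1-\varepsilon)}{M(1+\rho c)(c+1-\varepsilon)^2+\varepsilon(\rho c+\rho+\varepsilon-\varepsilon\rho)}$, can be checked symbolically; in this sketch the explicit values of $u_1$ and $h$ are assumptions recalled from their definitions (in particular, $1-hc=\varepsilon^2(1-\rho)/D$):

```python
import sympy as sp

c, rho, eps, M = sp.symbols('c rho varepsilon M', positive=True)

num = rho*c + rho + eps - eps*rho
D = M*(1 + rho*c)*(c + 1 - eps)**2 + eps*num   # recurring denominator
u1 = c*num/((1 + rho*c)*(c + 1 - eps))         # assumed value of u_1
h = (1 - eps**2*(1 - rho)/D)/c                 # assumed value of h, so 1-hc = eps^2(1-rho)/D

lhs = h*u1 - rho*(c + 1)/((1 + rho*c)*(c + 1 - eps))
rhs = eps*M*(1 - rho)*(c + 1 - eps)/D
print(sp.simplify(lhs - rhs))  # → 0
```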
As a result, it only remains to exclude the possibility
of an egress from ${\mathcal{D}}$ through $
\partial \mathcal{D}\cap ((0,1)\times(0,1))$
with $u\in \{u^m_s,u^0_s\} $.
For this, we perform a general argument of dynamics, as follows. We denote by $P^m_s$
and $P^0_s$ the points on $
\partial \mathcal{D}\cap ((0,1)\times(0,1))$
with $u=u^m_s$ and $u=u^0_s $, respectively (these points may also coincide,
as it happens when $m=0$). We stress that we already know by the previous arguments that
\begin{equation}\label{ch26tGSHuj2fw7545}
{\mbox{if a trajectory leaves~${\mathcal{D}}$ it must pass through~$\{P^m_s,P^0_s\}$.}}
\end{equation}
Our goal is to show that no trajectory leaves ${\mathcal{D}}$ and for this we argue by contradiction,
supposing that there exist $\bar P\in {\mathcal{D}}$ and $T>0$ such that $\phi^T(\bar P)$ lies in the complement
of ${\mathcal{D}}$ in $[0,1]\times[0,1]$. Here, we have denoted by $\phi^T$ the flow associated to (<ref>).
We let $\bar Q:=\phi^T(\bar P)$ and, since the complement of ${\mathcal{D}}$ is open in $[0,1]\times[0,1]$,
we can find $\delta>0$ such that $B_\delta(\bar Q)\cap([0,1]\times[0,1])$ is contained in
the complement of ${\mathcal{D}}$.
Also, from (<ref>), there exists $\bar t\in[0,T)$ such that $\phi^{\bar t}(\bar P)\in\{P^m_s,P^0_s\}$.
We suppose that $\phi^{\bar t}(\bar P)=P^m_s$ (the case $
\phi^{\bar t}(\bar P)=P^0_s$ being completely analogous).
We let $\bar T:=T-\bar t$ and we notice that $\phi^{\bar T}(P^m_s)= \phi^T(\bar P)=\bar Q$. Hence,
by continuity with respect to the data, we can find $r>0$ such that
$$ \phi^{\bar T}\big( B_r(P^m_s)\cap ([0,1]\times[0,1]) \big)\subseteq B_\delta(\bar Q)\cap([0,1]\times[0,1]).$$
We define ${\mathcal{U}}:=B_r(P^m_s)\cap{\mathcal{D}}$. We observe that
\begin{equation}\label{ch21fe45-90jhwg3rg rewt57}
{\mbox{${\mathcal{U}}$ has strictly positive Lebesgue measure,}}
\end{equation}
since $P^m_s\in\partial{\mathcal{D}}$ and ${\mathcal{D}}$ has boundary of Hölder class.
In addition,
$$ \phi^{\bar T}\big( {\mathcal{U}} \big)\subseteq B_\delta(\bar Q)\cap([0,1]\times[0,1])
\subseteq\big( [0,1]\times[0,1]\big)
\setminus{\mathcal{D}}.$$
This and (<ref>) give that for every $P\in{\mathcal{U}}$ there exists $t_P\in[0,\bar T]$ such that $
\phi^{t_P}(P)\in\{P^m_s,P^0_s\}$. In particular,
$$ P\in \phi^{-t_P} \{P^m_s,P^0_s\}\subseteq \big\{ \phi^t (P^m_s), \;t\in [-\bar T,0]\big\}
\cup\big\{ \phi^t (P^0_s), \;t\in [-\bar T,0]\big\}.$$
Since this is valid for every $P\in{\mathcal{U}}$, we conclude that
\begin{equation}\label{ch29iKCIMWSsiofnooer}
{\mathcal{U}}\subseteq\big\{ \phi^t (P^m_s), \;t\in [-\bar T,0]\big\}
\cup\big\{ \phi^t (P^0_s), \;t\in [-\bar T,0]\big\}.
\end{equation}
Now we remark that $\big\{ \phi^t (P^m_s), \;t\in [-\bar T,0]\big\}$ is an arc of a smooth curve,
whence it has null Lebesgue measure, and a similar statement holds true for $
\big\{ \phi^t (P^0_s), \;t\in [-\bar T,0]\big\}$.
Consequently, we deduce from (<ref>) that ${\mathcal{U}}$
has null Lebesgue measure, in contradiction with (<ref>).
In this way, we have shown that no trajectory can leave ${\mathcal{D}}$ and the proof
of (<ref>)
is complete.
By (<ref>) and (<ref>), no trajectory starting in $\mathcal{D}$ can reach $(0,1]\times\{0\}$ when the bound $m\leq a(t)\leq M$ holds, hence (<ref>) is true.
Therefore the statement (i) in Proposition <ref> is true.
Now we establish the claim in (ii). To this end,
we point out that claim (ii) is equivalent to
\begin{equation}\label{ch22359}
\mathcal{G}:=\Big\{ (u,v)\in[0,1] \times [0,1] \;{\mbox{ s.t. }}\; v\geq g_{\varepsilon}(u)\Big\} \subseteq \mathcal{V}_{m, M}^C,
\end{equation}
for all $\varepsilon\in\left(0, 1\right)$,
where $\mathcal{V}_{m, M}^C$ is the complement of $\mathcal{V}_{m, M}$ in the topology of $[0,1]\times[0,1]$.
First, we point out that
\begin{equation}\label{ch2Ov54io0v9ik4gfvh}
{\mbox{$g_{\varepsilon}$ is a well defined continuous function. }}\end{equation}
Indeed, one can easily check for $\varepsilon\in(0,1)$ that
\begin{equation}\label{ch21409}\begin{split}&
0 < u_2
=\frac{1-\varepsilon}{k-k\varepsilon+1}-\frac{c+1-\varepsilon}{(c+1)(k-k\varepsilon +1)}+u_3
=-\frac{c\varepsilon }{(c+1)(k-k\varepsilon +1)}+u_3\\&\qquad\qquad\qquad
<\frac{c+1}{(c+1)(k-k\varepsilon +1)}<1.\end{split}
\end{equation}
Then, one checks that
\begin{align*}
k u_2-\frac{u_2}{c}=u_2\,\frac{kc-1}{c}= \frac{(kc-1)(1-\varepsilon)}{c(k-k\varepsilon+1)}=q,
\end{align*}
hence $g_{\varepsilon}$ is continuous at the point $u_2$. In addition, one can check that $g_{\varepsilon}$ is continuous
at the point $u_3$ by observing that
\begin{equation}\label{ch2isceocessvcpoo}\begin{split}&
\frac{u_3}c+q-(1-u_3)=\frac{(c+1)u_3}{c}+q-1\\&\qquad=
\frac{c+1-\varepsilon}{c(k-k\varepsilon +1)}+\frac{(kc-1)(1-\varepsilon)}{c(k-k\varepsilon+1)}-1\\&\qquad=
\frac{c+1-\varepsilon+(kc-1)(1-\varepsilon)-c(k-k\varepsilon+1)}{c(k-k\varepsilon+1)}=0.
\end{split}\end{equation}
This completes the proof of (<ref>).
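The vanishing of the numerator in the last computation is a purely algebraic fact, which can be confirmed by a symbolic expansion:

```python
import sympy as sp

c, k, eps = sp.symbols('c k varepsilon')

# numerator in the continuity computation at u_3
numerator = (c + 1 - eps) + (k*c - 1)*(1 - eps) - c*(k - k*eps + 1)
print(sp.expand(numerator))  # → 0
```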
Now we show that
\begin{equation}\label{ch21411}
g_{\varepsilon}(u)>0 \quad \text{for every} \ u\in(0,1].
\end{equation}
We have that $k>0$ for every $\varepsilon<1$, and therefore $g_{\varepsilon}(u)>0$ for all $u\in(0, u_2)$.
Also, since $g_{\varepsilon}(u_2)=ku_2>0$ and $g_{\varepsilon}$ is linear in $(u_2, u_3)$, we have that $g_{\varepsilon}(u)>0$ for all $u\in(u_2, u_3)$. Moreover, in the interval $[u_3,1]$ we have that $g_{\varepsilon}$
equals $u^\rho$ multiplied by a positive constant, thanks to (<ref>), hence it is positive. These considerations prove (<ref>).
As a consequence of (<ref>), we have that
\begin{equation}\label{ch22360}
\mathcal{G} \cap \big((0,1]\times \{0\}\big) = \varnothing.
\end{equation}
Now we claim that
\begin{equation}\label{ch20002}
{ \mbox{for any strategy~$a\in\mathcal{A}_{m,M}$, no trajectory starting in~$\mathcal{G}$ leaves~$\mathcal{G}$.} }
\end{equation}
For this,
we observe that, in light of (<ref>), all the points on
$$\partial \mathcal{G} \setminus \{ (u,g_{\varepsilon}(u)) \
{\mbox{ with }}\ u\in[0,1] \}$$
belong to $\partial ([0,1]\times [0,1]) \setminus \{v=0 \}$,
and these three sides of the square do not allow the flow to exit.
Hence, to prove (<ref>)
it suffices to check that the trajectories starting on $\partial \mathcal{G}\cap\big( (0,1)\times(0,1)\big)$
enter ${\mathcal{G}}$. We do this
by showing that the velocity component in the inward normal direction is nonnegative, according to the computation below.
At a point on the line $v=k u$, the velocity of a trajectory in the direction that is orthogonal to $\partial \mathcal{G}$ for $u\in[0,u_2)$ and pointing inward is:
\begin{equation}\label{ch21741}
(\dot{u}, \dot{v})\cdot \frac{(-k, 1)}{\sqrt{1+k^2}} =\frac{(\rho v- ku)(1-u-v)-au(1-kc) }{\sqrt{1+k^2}} .
\end{equation}
We also note that
\begin{equation}\label{ch22018}
kc = \frac{(c+1-\varepsilon)M}{(\rho -1)\varepsilon + (c+1-\varepsilon)M },
\end{equation}
and therefore, at a point on $v=k u$ with $u\in[0, u_2)$,
\begin{equation*}\begin{split}&
1-u-v \geq 1-u_2-k u_2=
1- \frac{(1+k)(1-\varepsilon)}{k-k\varepsilon+1}
= \frac{\varepsilon}{k(1-\varepsilon)+1}\\&\qquad\qquad\qquad
=\frac{\varepsilon c}{kc(1-\varepsilon)+c}
> \frac{\varepsilon c}{1+c-\varepsilon}.\end{split}
\end{equation*}
This inequality entails that
\begin{equation*}
k= \frac{(1+c-\varepsilon)M}{(\rho-1)\varepsilon c+(1+c-\varepsilon)Mc }
=\frac{M}{\frac{(\rho-1)\varepsilon c}{1+c-\varepsilon}+Mc }
> \frac{M}{(\rho-1)(1-u-v)+Mc}.
\end{equation*}
\begin{equation*}
Therefore,
(\rho-1)(1-u-v)k > M (1-kc).
\end{equation*}
From this and (<ref>), one deduces that, for all $u\in(0, u_2)$, $a\leq M$, and $v=k u$,
\begin{equation*}\begin{split}&
(\dot{u}, \dot{v})\cdot \frac{(-k, 1)}{\sqrt{1+k^2}} =\frac{ku(\rho - 1)(1-u-v)-au(1-kc) }{\sqrt{1+k^2}} \\&\qquad\qquad >
\frac{Mu (1-kc)-au(1-kc) }{\sqrt{1+k^2}}\ge0.\end{split}
\end{equation*}
This (and the fact that the origin is an equilibrium)
rules out the possibility of exiting ${\mathcal{G}}$ from $\{ u\in[0, u_2){\mbox{ and }} v=k u\}$.
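The elementary identity $1-u_2-ku_2=\frac{\varepsilon}{k(1-\varepsilon)+1}$ used in the lower bound above can be checked symbolically, where the value of $u_2$ is taken from its definition in the text:

```python
import sympy as sp

k, eps = sp.symbols('k varepsilon')

u2 = (1 - eps)/(k - k*eps + 1)   # definition of u_2 from the text
gap = 1 - u2 - k*u2              # value of 1-u-v at the corner (u_2, k u_2)
print(sp.simplify(gap - eps/(k*(1 - eps) + 1)))  # → 0
```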
It remains to consider the portions of $\partial\mathcal{G}\cap((0,1)\times(0,1))$ given by
\begin{equation}\label{ch29u:9idkj:0oekdjfjfj81763yhrf}
\left\{ u\in[ u_2,u_3){\mbox{ and }} v=\frac{ u}c+q\right\}\end{equation}
and by
\begin{equation}\label{ch29u:9idkj:0oekdjfjfj81763yhrf2}\left\{ u\in[u_3,1]{\mbox{ and }} v=\frac{(1-u_3)u^\rho}{(u_3)^\rho}\right\}.\end{equation}
Let us deal with the case in (<ref>).
In this case,
the velocity of a trajectory in the direction orthogonal to $\partial \mathcal{G}$ for $u\in[u_2,u_3)$ and pointing inward is
\begin{equation}\label{ch22027}
(\dot{u}, \dot{v})\cdot \frac{(-1, c)}{\sqrt{1+c^2}}=\frac{(\rho c v -u)(1-u-v)}{\sqrt{1+c^2}}.
\end{equation}
Recalling (<ref>), we also observe that
\begin{equation}\label{ch21536}\begin{split}&k-
\frac{1}{\rho c}
=\frac{1}{c}\left(
\frac{(c+1-\varepsilon)M}{(\rho -1)\varepsilon + (c+1-\varepsilon) M}-\frac1\rho\right)\\&\qquad=
\frac{(\rho-1)\big((c+1-\varepsilon)M
-\varepsilon\big)}{\rho c\big( (\rho -1)\varepsilon + (c+1-\varepsilon) M\big)}.
\end{split}
\end{equation}
Thus, on the line given by $v=\frac{u}{c}+q$ we have that
\begin{equation}\label{ch2Cnodizeos80p4}\begin{split}&
\rho c v -u= (\rho-1)u +\rho c q
\ge (\rho-1)u_2 +\rho c q\\&\qquad=
\frac{(\rho-1)(1-\varepsilon)}{k-k\varepsilon+1}+\frac{\rho(kc-1)(1-\varepsilon)}{k-k\varepsilon+1}
= (1-\varepsilon)\frac{(\rho-1)+\rho(kc-1)}{k-k\varepsilon+1}=\frac{(1-\varepsilon)(\rho k c -1)}{k-k\varepsilon+1}>0,\end{split}
\end{equation}
where (<ref>) has been used in the latter inequality.
In addition, recalling (<ref>),
\begin{equation*}
1-u-v > 1-u_3 - \frac{u_3}{c} -q = 1-u_3-1+u_3=0.
\end{equation*}
From this and (<ref>), we gather that the velocity calculated in (<ref>) is positive in $[u_2, u_3)$ and this
excludes the possibility of exiting $\mathcal{G}$ from the boundary given in (<ref>).
Next, we focus
on the portion of the boundary described in (<ref>)
by considering $u\in[u_3, 1]$.
That is, we now compute the component of the velocity at a point on $\partial \mathcal{G}$ for $u\in[u_3, 1]$ in the direction that is orthogonal to $\partial \mathcal{G}$ and pointing inward, that is
\begin{equation}\label{ch21803}
\begin{split}&
(\dot{u}, \dot{v})\cdot \frac{(-\rho \frac{1-u_3}{(u_3)^{\rho}}u^{\rho-1}, 1)}{\sqrt{1+\rho^2\frac{(1-u_3)^2}{(u_3)^{2\rho}}u^{2\rho-2}}} \\=\,& \frac{\rho(1-u-v)\left(v- \frac{1-u_3}{(u_3)^{\rho}} u^{\rho} \right) - au\left( 1-\rho c \frac{1-u_3}{(u_3)^{\rho}} u^{\rho-1} \right) }{\sqrt{1+\rho^2\frac{(1-u_3)^2}{(u_3)^{2\rho}}u^{2\rho-2}}}
\\=\,& \frac{ au\left( \rho c \frac{1-u_3}{(u_3)^{\rho}} u^{\rho-1} -1 \right) }{\sqrt{1+\rho^2\frac{(1-u_3)^2}{(u_3)^{2\rho}}u^{2\rho-2}}}\\ \ge\,&
\frac{ au\left( \rho c \frac{1-u_3}{u_3} -1 \right) }{\sqrt{1+\rho^2\frac{(1-u_3)^2}{(u_3)^{2\rho}}u^{2\rho-2}}}.
\end{split}
\end{equation}
Now we notice that
\begin{eqnarray*}
&& \rho c (1-u_3) = \rho c \left(\frac{u_3}{c}+q \right) =
\rho u_3+ \rho c q=\rho u_3+\frac{\rho (kc-1)(1-\varepsilon)(c+1) u_3}{c+1-\varepsilon},
\end{eqnarray*}
thanks to (<ref>).
As a result, using (<ref>),
\begin{eqnarray*}
&& \rho c (1-u_3) >\rho u_3+\frac{ (1-\rho)(1-\varepsilon)(c+1) u_3}{c+1-\varepsilon}
\\&&\qquad
= \frac{ u_3}{c+1-\varepsilon}
\Big(\rho (c+1-\varepsilon)+(1-\rho)(1-\varepsilon)(c+1) \Big)\\&&\qquad=
\frac{ u_3\big( (1-\varepsilon)(c+1)+\varepsilon \rho c\big)}{c+1-\varepsilon} \\&&
\qquad= u_3+
\frac{ \varepsilon c u_3( \rho-1)}{c+1-\varepsilon}>u_3.
\end{eqnarray*}
This gives that the quantity in (<ref>) is positive.
Hence, we have ruled
out also the possibility of exiting $\mathcal{G}$ from the boundary given in (<ref>),
and this ends the proof of (<ref>).
Since no trajectory can exit $\mathcal{G}$ for any $a$ with $m\leq a \leq M$, we get that no point $(u,v)\in \mathcal{G}$ is mapped into $(0,1]\times\{0\}$ because of (<ref>), thus (<ref>) is true and the proof is complete.
We end this section with the proof of Theorem <ref>.
Since by definition $\mathcal{A}_{m,M}\subseteq \mathcal{A}$, we have that $\mathcal{V}_{{m,M}}\subseteq \mathcal{V}_{\mathcal{A}}$. Hence, we are left with proving that the latter inclusion is strict.
We start with the case $\rho<1$. We choose
\begin{equation}\label{ch21934567890-dfghjk4567890-fd11}
\varepsilon\in\left(0,\,\min\left\{ \frac{ \rho c(c+1)}{1+\rho c}, \frac{M(c+1)}{M+1},1 \right\} \right). \end{equation}
We observe that this choice is compatible with
the assumption on $\varepsilon$ in (<ref>). We note that
\begin{equation}\label{ch21911}
u_1 < \min\left\{ \frac{ \rho c(c+1)}{1+\rho c}, 1 \right\},
\end{equation}
thanks to (<ref>).
Moreover, by (<ref>) and the fact that $h<\frac1c$, it holds that
\begin{equation}\label{ch21933}
h u + p
=h (u-u_1)+hu_1 + p
=h (u-u_1)+\frac{u_1}{c}+\frac{1-\rho}{1+\rho c}<\frac{u}{c}+ \frac{1-\rho }{1+\rho c}
\end{equation}
for all $u>u_1$.
Now we choose
$$\bar{u}\in \left( u_1, \min\left\{ \frac{ \rho c(c+1)}{1+\rho c}, 1 \right\} \right),$$
which is possible thanks to (<ref>), and
\begin{equation}\label{ch21925}
\bar{v}: = \frac{1}{2}\left( h \bar{u} + p \right) + \frac{1}{2}\left( \frac{\bar{u} }{c}+ \frac{1-\rho }{1+\rho c} \right).
\end{equation}
By (<ref>) we get that
\begin{equation}\label{ch21937}
h \bar{u} + p <\frac12\left(h \bar{u} + p\right)
+ \frac{1}{2}\left( \frac{\bar{u} }{c}+ \frac{1-\rho }{1+\rho c} \right)=
\bar{v} < \frac{\bar{u}}{c}+ \frac{1-\rho }{1+\rho c}.
\end{equation}
Using Proposition <ref> and (<ref>), we deduce that $(\bar{u}, \bar{v})\not\in \mathcal{V}_{{m,M}}$. By Theorem <ref> and (<ref>) we obtain instead that $(\bar{u}, \bar{v})\in \mathcal{V}_{\mathcal{A}}$. Hence, the set
$\mathcal{V}_{{m,M}}$ is strictly included in $\mathcal{V}_{\mathcal{A}}$
when $\rho<1$.
Now we consider the case $\rho>1$, using again the notation of Proposition <ref>. We recall that $u_2>0$
and $ u_{\infty}>0$, due to (<ref>) and (<ref>), hence we can choose
$$ \bar{u} \in \left( 0, \min\{u_2, u_{\infty}\} \right) .$$
We also define
\begin{equation*}
\bar{v} := \frac12\left( \frac{1}{c} +k \right) \bar{u}.
\end{equation*}
By (<ref>), we get that
\begin{equation}\label{ch22031}
k \bar{u} < \frac{k\bar{u}}2+\frac{\bar{u}}{2c}
= \bar{v} < \frac{ \bar{u}}{c}.
\end{equation}
Exploiting this and the characterization in Proposition <ref>, it holds that $(\bar{u}, \bar{v})\not\in \mathcal{V}_{{m,M}}$. On the other hand, by Theorem <ref> and (<ref>) we have that $(\bar{u}, \bar{v})\in \mathcal{V}_{\mathcal{A}}$. As a consequence, the set $\mathcal{V}_{{m,M}}$
is strictly contained in $ \mathcal{V}_{\mathcal{A}}$ for $\rho>1$.
This concludes the proof of Theorem <ref>.
§.§ Minimization of war duration: proof of Theorem <ref>
We now deal with the strategies leading to the quickest possible victory of the first population.
Our aim is to establish the existence
of the strategy leading to the quickest possible victory
and to determine its range.
For this, we consider
the following minimization problem under constraints for $x(t):=(u(t), v(t))$:
\begin{equation}\label{ch2sys:min}
\left\{
\begin{array}{ll}
\dot{x}(t)=f(x(t), a(t) ), \\ x(0)=(u_0, v_0), \\ x(T_s)\in (0,1]\times\{0\}, \\
\displaystyle\min_{a(t)\in [m, M]} \displaystyle\int_{0}^{T_s} 1 \,dt,
\end{array}
\right.
\end{equation}
where
\begin{equation*}
f(x, a) := \Big( u(1-u-v-ac), \ \rho v(1-u-v) -au \Big).
\end{equation*}
Here $T_s$ corresponds to the exit time introduced in (<ref>), in dependence of the strategy $a(\cdot)$.
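For concreteness, the constrained dynamics can be explored numerically; the following is a minimal explicit-Euler sketch, in which the step size, the horizon and all parameter values are arbitrary illustrations rather than quantities from the text:

```python
import numpy as np

def simulate(u0, v0, strategy, rho, c, dt=1e-3, t_max=200.0):
    """Explicit-Euler sketch of du/dt = u(1-u-v-ac), dv/dt = rho v(1-u-v) - au.
    Returns an approximation of the exit time T_s (first time v reaches 0),
    or None if the second population is not extinguished within t_max."""
    u, v, t = u0, v0, 0.0
    while t < t_max:
        a = strategy(t)
        du = u*(1.0 - u - v - a*c)
        dv = rho*v*(1.0 - u - v) - a*u
        u, v, t = u + dt*du, v + dt*dv, t + dt
        if v <= 0.0:
            return t
    return None

# illustrative run with a constant strategy a(t) = 1
T_s = simulate(0.8, 0.2, lambda t: 1.0, rho=0.1, c=0.5)
```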
Theorem 6.15 in [115] ensures the existence of a minimizing solution $(\tilde{a}, \tilde{x})$ with $\tilde{a}(t)\in[m, M]$ for all $t\in[0,T]$, and $\tilde{x}(t)\in[0,1]\times[0,1]$ absolutely continuous, such that $\tilde{x}(T)=(\tilde u(T), 0)$ with $\tilde u(T)\in [0,1]$,
where $T$ is the exit time for $\tilde{a}$.
We now prove that
\begin{equation}\label{ch290o-045}
\tilde{u}(T)>0.\end{equation}
Indeed, if this were false, then $(\tilde{u}(T), \tilde{v}(T))=(0,0)$. Let us call $d(t):
= \tilde{u}^2(t)+ \tilde{v}^2(t)$. Then, we observe that
the function $d(t)$ satisfies the following differential inequality:
\begin{equation}\label{ch21955}
- \dot{d}(t) \le C d , \qquad \text{for} \quad C:=4+4\rho+2Mc+M.
\end{equation}
To check this, we compute that
\begin{align*}
- \dot{d} &= 2\left( -\tilde{u}^2(1-\tilde{u}-\tilde{v}-\tilde ac) - \tilde{v}^2 \rho(1-\tilde{u}-\tilde{v}) + \tilde{u}\tilde{v}\tilde a \right) \\
& \le2\tilde{u}^2(2+Mc) + 4\rho\tilde{v}^2 + (\tilde{u}^2+\tilde{v}^2)M \\
& \le C (\tilde{u}^2+\tilde{v}^2)\\&= C d,
\end{align*}
which proves (<ref>).
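The inequality $-\dot d\le Cd$ can also be spot-checked numerically on random admissible states; in this sketch the parameter values are arbitrary:

```python
import numpy as np

# Spot-check of -d'(t) <= C d(t) over random admissible states,
# with C = 4 + 4*rho + 2*M*c + M as in the text
# (the parameter values below are arbitrary illustrations):
rho, M, c = 0.5, 0.8, 1.0
C = 4 + 4*rho + 2*M*c + M

rng = np.random.default_rng(0)
worst = -np.inf
for _ in range(10_000):
    u, v = rng.uniform(0.0, 1.0, size=2)
    a = rng.uniform(0.0, M)
    d_dot = 2.0*(u**2*(1 - u - v - a*c) + rho*v**2*(1 - u - v) - a*u*v)
    worst = max(worst, -d_dot - C*(u**2 + v**2))

# 'worst' should never exceed 0 (up to rounding)
```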
From (<ref>), since $\frac{d}{dt}\big(e^{Ct} d(t)\big)=e^{Ct}\big(\dot d(t)+Cd(t)\big)\geq0$, one has that
\begin{equation*}
0<(u_0^2+v_0^2 ) e^{-CT} \le d(T)=\tilde{u}^2(T)+ \tilde{v}^2(T)=\tilde{u}^2(T),
\end{equation*}
and this leads to (<ref>), as desired.
We remark that, in this way, we have found a strategy $\tilde{a}$ which
leads to the victory of the first population in the shortest possible time.
Theorem 6.15 in [115] also ensures that $\tilde{a}(t)\in L^{1}[0,T]$, so $\tilde{a}(t)$ is measurable.
We have that the two vectorial functions $F$ and $G$, defined by
\begin{equation*}
F(u,v):= \left(
\begin{array}{c}
u (1-u-v)\\
\rho v (1-u-v)
\end{array}
\right)\qquad{\mbox{and}}\qquad G(u,v):= \left(
\begin{array}{c}
-cu\\
-u
\end{array}
\right),
\end{equation*}
and satisfying $f(x(t), a(t))= F(x(t))+a(t)G(x(t))$,
are analytic. Moreover, the set $\overline{\mathcal{V}}_{\mathcal{A}_{m,M}}$ is a compact subset of $\R^2$, and therefore it can be seen as a compact analytic manifold with boundary. For all $x_0\in{\mathcal{V}}_{\mathcal{A}_{m,M}}$ and $t>0$ we have that the trajectory starting from $x_0$ satisfies $x(\tau)\in\overline{\mathcal{V}}_{\mathcal{A}_{m,M}}$ for all $\tau\in[0,t]$. Then, by Theorem 3.1 in [111], there exists a couple $(\tilde{a}, \tilde{x})$, analytic apart from a finite number of points, such that $(\tilde{a}, \tilde{x})$ solves (<ref>).
Now, to study the range of $\tilde{a}$, we apply the Pontryagin Maximum Principle (see for example [115]). The Hamiltonian associated with system (<ref>) is
\begin{equation*}
H(x,p, p_0, a ): = p\cdot f(x,a) + p_0
\end{equation*}
where $p=(p_u, p_v)$ is the adjoint to $x=(u,v)$ and $p_0$ is the adjoint to the cost function identically
equal to $1$.
The Pontryagin Maximum Principle tells us that, since $\tilde{a}(t)$ and $\tilde{x}(t)=(\tilde{u}(t), \tilde{v}(t))$ give the optimal solution, there exist a vectorial function $\tilde p : [0, T] \to \R^2$ and a scalar $\tilde p_0\in(-\infty, 0]$ such that
\begin{equation}\label{ch2HJA}
\left\{
\begin{array}{ll}
\dfrac{d\tilde{x}}{dt} (t)= \dfrac{\partial H}{\partial p} (\tilde{x}(t), \tilde p(t), \tilde p_0, \tilde{a}(t) ), & \text{for a.a.} \ t\in[0, T], \\
\\
\dfrac{d \tilde{p}}{dt} (t)=- \dfrac{\partial H}{\partial x} (\tilde{x}(t), \tilde p(t), \tilde p_0, \tilde{a}(t) ), & \text{for a.a.} \ t\in[0, T],
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{ch22349}
H(\tilde{x}(t), \tilde p(t), \tilde p_0, \tilde{a}(t) ) = \underset{a(\cdot)\in[m,M]}{\max} H(\tilde{x}(t), \tilde p(t), \tilde p_0, a ) \quad \text{for a.a.} \ t\in[0, T].
\end{equation}
Moreover, since the final time is free, we have
\begin{equation}\label{ch21244}
H(\tilde{x}(T), \tilde p(T),\tilde p_0, \tilde{a}(T) ) =0.
\end{equation}
Also, since $H(x,p,p_0,a)$ does not depend on $t$, we get
\begin{equation}\label{ch22343}
H(\tilde{x}(t), \tilde p(t), \tilde p_0, \tilde{a}(t) ) ={\mbox{constant}}=0, \quad \text{for a.a.} \ t\in[0, T],
\end{equation}
where the value of the constant is given by (<ref>).
By substituting the values of $f(x,a)$ in $H(x,p,p_0,a)$ and using (<ref>), we get, for a.a. $
t\in[0, T]$,
\begin{equation*}
\tilde p_u \tilde{u}(1-\tilde{u}- \tilde{v}-\tilde{a}c)+ \tilde p_v\rho \tilde{v}(1-\tilde{u}- \tilde{v}) -\tilde p_v \tilde{a} \tilde{u} + \tilde p_0 =0,
\end{equation*}
where $\tilde p=(\tilde p_u,\tilde p_v)$.
Also, by (<ref>) we get that
\begin{equation}\label{ch20oskdfee}
\underset{a\in[m,M]}{\max} H(\tilde{x}(t), \tilde p(t), \tilde p_0, a )= \underset{a\in[m,M]}{\max} \Big[-a\tilde{u}(c\tilde p_u + \tilde p_v ) + \tilde p_u \tilde{u}(1-\tilde{u}- \tilde{v})+ \tilde p_v\rho \tilde{v}(1-\tilde{u}- \tilde{v})+\tilde p_0\Big].
\end{equation}
Thus, to maximize the term in the square brackets we must choose appropriately the value of $\tilde{a}$ depending on the sign of $\varphi(t):=c\tilde p_u(t)+\tilde p_v(t)$, that is
we choose
\begin{equation}\label{ch21631}
\tilde{a}(t):=
\left\{
\begin{array}{ll}
m &{\mbox{ if }} \varphi(t)>0, \\
M &{\mbox{ if }} \varphi(t)<0.
\end{array}
\right.
\end{equation}
When $\varphi(t)=0$, we are for the moment free
to choose $ \tilde{a}(t):=a_s(t)$ for every $a_s(\cdot)$
with range in $[m,M]$,
without affecting the maximization problem in (<ref>).
Our next goal is to determine that $a_s(t)$ has the expression stated in (<ref>) for
a.a. $t\in[0,T]\cap \{\varphi=0\}$.
To this end, we claim that
\begin{equation}\label{ch29id0-3rgjj}
{\mbox{$\dot\varphi(t)=0$ a.e.~$t\in[0,T]\cap \{\varphi=0\}$.}}
\end{equation}
Indeed, by (<ref>), we know that $\tilde p$ is Lipschitz continuous in $[0,T]$,
hence almost everywhere differentiable, and thus the same holds for $\varphi$.
Hence, up to a set of null measure, given $t\in[0,T]\cap \{\varphi=0\}$,
we can suppose that $t$ is not an isolated point in such a set,
and that $\varphi$ is differentiable at $t$. That is, there exists an infinitesimal sequence $h_j$
for which $\varphi(t+h_j)=0$ and
$$ \dot\varphi(t)=\lim_{j\to+\infty}\frac{\varphi(t+h_j)-\varphi(t)}{h_j}=0, $$
and this establishes (<ref>).
Consequently, in light of (<ref>), for a.a. $t\in[0,T]\cap \{\varphi=0\}$ we have
\begin{equation*}\begin{split}
& 0 =\dot\varphi(t)= c\frac{d\tilde p_u}{dt}(t)+ \frac{d\tilde p_v}{dt}(t) \\&\qquad = c\big[ -\tilde p_u(t)(1-2\tilde{u}(t)-\tilde{v}(t)-ca_s(t))+\tilde p_v(t) (\rho \tilde{v}(t)+a_s(t)) \big]\\&\qquad\qquad
+ \tilde p_u(t)\tilde u(t)-\tilde p_v(t) \rho(1-\tilde{u}(t)-2\tilde{v}(t)).\end{split}
\end{equation*}
Now, since $\varphi(t)=0$, we have that $ \tilde p_v(t)=- c\tilde p_u(t)$; inserting this information in the last equation, we get
\begin{equation}\label{ch20004}
0= -\tilde p_u c (1-2\tilde u-\tilde v-a_s c) -\tilde p_u \rho c^2 \tilde v - \tilde p_u a_s c^2 + \tilde p_u \tilde u+ \tilde p_u \rho c (1-\tilde u-2\tilde v).
\end{equation}
Notice that if $\tilde p_u=0$, then $\tilde p_v=-c \tilde p_u=0$; moreover, by (<ref>), one gets $\tilde p_0=0$. But by the Pontryagin Maximum Principle one cannot have $(\tilde p_u, \tilde p_v, \tilde p_0)=(0,0,0)$, therefore one obtains $\tilde p_u\neq 0$ in $\{ \varphi=0 \}$.
Hence, dividing (<ref>) by $\tilde p_u$ and rearranging the terms, one gets
\begin{equation}\label{ch20007}
\tilde{u}(2c+1-\rho c) + c\tilde{v}(1-\rho c-2\rho)+c(\rho-1)=0.
\end{equation}
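The cancellation of the $a_s$-terms in the passage from the previous display to this linear relation can be verified symbolically:

```python
import sympy as sp

u, v, c, rho, a_s, p_u = sp.symbols('u v c rho a_s p_u')

# dot-varphi = 0 condition after substituting p_v = -c p_u
expr = (-p_u*c*(1 - 2*u - v - a_s*c) - p_u*rho*c**2*v - p_u*a_s*c**2
        + p_u*u + p_u*rho*c*(1 - u - 2*v))

# the linear relation obtained after dividing by p_u (note that a_s cancels)
line = u*(2*c + 1 - rho*c) + c*v*(1 - rho*c - 2*rho) + c*(rho - 1)
print(sp.simplify(sp.expand(expr/p_u) - sp.expand(line)))  # → 0
```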
Differentiating the expression in (<ref>) with respect to time, we get
\begin{equation*}
\tilde{u} (2c+1-\rho c) (1-\tilde{u}-\tilde{v}-a_sc) + c(1-\rho c-2\rho) [ \rho \tilde{v} (1-\tilde{u}-\tilde{v}) -a_s\tilde{u} ]=0,
\end{equation*}
that yields
\begin{equation}
a_s = \frac{(1-\tilde{u}-\tilde{v}) ( \tilde{u} (2c+1-\rho c)+\rho c) }{2c\tilde{u}(c+1)},
\end{equation}
which is the desired expression.
By a slight abuse of notation, we define the function $a_s(t)= a_s(\tilde{u}(t), \tilde{v}(t))$ for $t\in[0,T]$.
Notice that since $\tilde{u}(t)>0$ for $t\in[0,T]$, $a_s(t)$ is continuous for $t\in[0,T]$.
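In feedback form, the singular strategy of the last display can be evaluated directly; a minimal sketch, in which the sample arguments are arbitrary admissible values:

```python
def a_singular(u, v, rho, c):
    """Feedback form of the singular strategy from the displayed formula;
    only meaningful for u > 0."""
    return (1.0 - u - v)*(u*(2.0*c + 1.0 - rho*c) + rho*c)/(2.0*c*u*(c + 1.0))

# example evaluation
val = a_singular(0.5, 0.25, 0.5, 1.0)  # → 0.21875
```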
CHAPTER: DECAY ESTIMATES FOR EVOLUTION EQUATIONS WITH CLASSICAL AND FRACTIONAL TIME-DERIVATIVES
Using energy methods, we prove some power-law and exponential decay estimates for classical and nonlocal evolutionary equations.
The results obtained are framed into a general setting, which
comprises, among others,
equations involving both standard and Caputo time-derivative, complex valued magnetic operators,
fractional porous media equations and
nonlocal Kirchhoff operators.
Both local and fractional space diffusion are taken into account, possibly in a nonlinear setting.
The different quantitative behaviors,
which distinguish polynomial decays from exponential ones,
depend heavily on the structure of the time-derivative involved in the equation.
The content of this chapter comes from the paper [5] in collaboration with Enrico Valdinoci and the paper [4] in collaboration with Serena Dipierro and Enrico Valdinoci.
§ INTRODUCTION AND MAIN RESULTS
§.§ Setting of the problem
Fractional calculus is becoming popular thanks both to the deep mathematics that it involves and to its adaptability to the modeling of several real-world phenomena. As a matter of fact, integro-differential operators can describe nonlocal interactions of various types and
diffusion by using suitable kernels or fractional time-derivatives, see e.g. [71].
Integro-differential equations and fractional derivatives have been involved in designing, for example, wave equations, magneto-thermoelastic heat conduction, hydrodynamics, quantum physics, porous medium equations.
A wide literature is devoted to the study of existence, uniqueness, regularity and asymptotic theorems. Here we study the behaviour of the Lebesgue norm of solutions of integro-differential equations on bounded domains, extending the method of [43]
to a very broad class of nonlocal equations
and obtaining a power-law decay in time
of the $L^s$ norm with $s\geq 1$.
Also, for the case of classical time-derivatives,
we obtain exponential decays in time. The difference between
polynomial and exponential decays in time is thus related to
the possible presence of a fractional derivative in the operator involving the time variable.
The setting in which we work
takes into account a
parabolic evolution of a function under the action of a spatial diffusive operator,
which possesses suitable “ellipticity” properties, can be either
classical or fractional, and can also be of nonlinear type.
We work in a very general framework that adapts to both local
and nonlocal operators. We also include in this analysis the case of
complex valued operators and of a combination
of fractional and classical time-derivatives.
The main assumption that we take is an “abstract” hypothesis
which extends a construction made in [43],
and which, roughly speaking, can be seen as a quantitative
counterpart of the uniform ellipticity of the spatial diffusive operators.
In [43], several time-decay estimates have been given
covering the cases in which the time-derivative is of fractional
type and the spatial operator is either the Laplacian,
the fractional Laplacian, the $p-$Laplacian or
the mean curvature equation. In this chapter,
we deal with the cases in which the time-derivative can be
either classical or fractional, or a convex combination of the two,
and we deal with new examples of spatial diffusive operators,
which include the case of
complex valued operators. In particular,
we present applications to the fractional porous medium equation,
to the classical and fractional Kirchhoff equations, to
the classical and fractional magnetic operators.
Referring to [43] for the corresponding results, we also present in Table <ref> the decay results for the $p-$Laplacian, the nonlinear diffusion operator, the graphical mean curvature operator, the fractional $p-$Laplacian, the anisotropic fractional $p-$Laplacian, a second version of fractional porous medium (unfortunately, two different operators are known under the same name), and the fractional graphical mean curvature.
We recall that the Caputo derivative of order $\alpha\in(0,1)$ is given by
\begin{equation*}
\partial_t^\alpha u(t) := \dfrac{d}{dt} \int_{0}^{t} \dfrac{u(\tau)-u(0)}{(t-\tau)^\alpha} d\tau
\end{equation*}
up to a normalizing constant (that we omit here for the sake of simplicity).
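As a side computational remark (the scheme below is the standard L1 discretization; the function name is our own choice), the Gamma-normalized Caputo derivative can be approximated by finite differences, and the scheme is exact on $u(t)=t$, whose Caputo derivative is $t^{1-\alpha}/\Gamma(2-\alpha)$:

```python
import math

def caputo_l1(u, t, alpha, n=2000):
    # L1 scheme for the (Gamma-normalized) Caputo derivative of order alpha:
    # the weights b_k come from integrating (t - tau)^(-alpha) exactly on each step
    h = t / n
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)]
    s = sum(b[k] * (u((n - k) * h) - u((n - k - 1) * h)) for k in range(n))
    return s / (math.gamma(2 - alpha) * h ** alpha)

# for u(t) = t the weights telescope, so the scheme reproduces
# the exact value t^(1-alpha)/Gamma(2-alpha)
alpha = 0.5
approx = caputo_l1(lambda t: t, 1.0, alpha)
exact = 1.0 / math.gamma(2 - alpha)
```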
Let also $\lambda_1, \lambda_2 \geq 0$ be fixed. We suppose, for concreteness,
$$\lambda_1 + \lambda_2=1,$$
but up to a rescaling of the operator we can take $\lambda_1, \lambda_2$ to be
any nonnegative numbers with positive sum. Let $\Omega \subset \R^n$ be a
bounded open set and let $u_0\in L^{\infty}(\R^n)$ be such that $\text{supp} \,u_0 \subset \Omega$. Consider the Cauchy problem
\begin{equation} \label{ch3sys:generalform}
\left\{ \begin{array}{lr}
(\lambda_1 \partial_t^{\alpha} + \lambda_2 \partial_t) [u] + \mathcal{N}[u]=0, & {\mbox{for all }}x\in \Omega, \ t>0, \\
u(x,t)=0, & {\mbox{for all }}x\in \R^n \setminus \Omega , \ t>0, \\
u(x,0)=u_0(x), & {\mbox{for all }}x\in \R^n ,
\end{array} \right.
\end{equation}
where $\mathcal{N}$ is a possibly nonlocal operator.
Given $s\in[1, +\infty)$ we want to find some estimates on the ${L}^s(\Omega)$
norm of $u$. To this end,
we exploit analytical techniques relying on
energy methods, including some
that have been recently developed in [69, 119, 43].
Namely, as in [43], we want to compare the $L^{s}$ norm of
the solution $u$ with an explicit function that has a power law decay,
and to do this we take advantage of a suitable comparison result
and of the study of auxiliary fractional parabolic equations as
in [69, 119].
§.§ Notation and structural assumptions
Let us recall that for a complex valued function $v:\Omega\to\C$ the Lebesgue norm is
\begin{equation*}
\Vert v \Vert_{L^s(\Omega)} = \left( \int_{\Omega} |v(x)|^s \; dx \right)^{\frac{1}{s}}
\end{equation*}
for any $s\in[1, +\infty)$. Also, we will call $\Re \{ z\}$ the real part of $z\in\C$.
The main assumption we take is the following: there exist $\gamma \in (0,+\infty) $ and $C\in (0,+\infty)$ such that
\begin{equation} \label{ch3cond:complexstr}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) }^{s-1+\gamma} \leq C \int_{\Omega} |u(x,t)|^{s-2} \Re \{ \bar{u}(x,t)\mathcal{N} [u](x,t)\} \; dx.
\end{equation}
The constants $\gamma$ and $C$ and their dependence on the parameters of the problem may vary from case to case. This structural assumption says, essentially, that $\mathcal{N}$ has
an elliptic structure and it is also related (via an integration by parts)
to a general form of the Sobolev inequality
(as it is apparent in the basic
case in which $u$ is real valued, $s:=2$ and $\mathcal{N}u:=-\Delta u$).
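As a simple sanity check in this basic case (with $u$ real valued, vanishing outside $\Omega$, $s=2$ and $\mathcal{N}[u]:=-\Delta u$), an integration by parts and the Poincaré inequality give
\begin{equation*}
\int_{\Omega} u\,\mathcal{N}[u]\,dx=\int_{\Omega}|\nabla u|^2\,dx\;\geq\;\frac{1}{C_P(\Omega)}\,\Vert u\Vert^2_{L^2(\Omega)},
\end{equation*}
where $C_P(\Omega)$ denotes the Poincaré constant of $\Omega$, so that (<ref>) holds with $\gamma=1$ and $C:=C_P(\Omega)$.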
In our setting, the structural inequality in (<ref>)
will be the cornerstone to obtain general energy estimates,
which, combined with appropriate barriers, in turn
produce time-decay estimates. The results
obtained in this way are set in a general framework, and then we
make concrete examples of operators that satisfy the structural
assumptions, which is sufficient to establish asymptotic bounds that fit
to the different cases of interest and take into account the peculiarities
of each example in a quantitative way.
Our general result also includes Theorem 1 of [43]
as a particular case, since, if $\mathcal{N}$ and $u$ are real valued,
(<ref>) boils down to hypothesis (1.3) of [43] (in any case, the applications and examples covered here
go beyond the ones presented in [43] both for
complex and for real valued operators).
§.§ Main results
The “abstract” result that we establish here is the following:
Let $u$ be a solution of the Cauchy problem (<ref>), with $\mathcal{N}$ possibly complex
valued. Suppose that there exist $s\in[1, +\infty)$, $\gamma\in(0,+\infty)$ and $C\in(0,+\infty)$ such that $u$ satisfies (<ref>). Then
\begin{equation} \label{ch3claim1gen}
(\lambda_1\partial_t^{\alpha} + \lambda_2\partial_t) \Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq -\dfrac{\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) }^{\gamma}}{C},
\qquad{\mbox{ for all }}t>0,\end{equation}
where $C$ and $\gamma$ are the constants appearing in (<ref>). As a consequence,
\begin{equation} \label{ch3claim2gen}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq
\dfrac{C_*}{1+t^{\frac{\alpha}{\gamma}}},\qquad{\mbox{ for all }}t>0,
\end{equation}
for some $C_*>0$, depending only on $C$, $\gamma$, $\alpha$
and $\Vert u_0(\cdot) \Vert_{L^{s}(\R^n)}$.
Theorem <ref> here
comprises previous results in [43],
extending their applicability
to a wider class of equations, which include the
cases of both standard and fractional
time-derivatives and complex valued operators.
We also recall that the power-law decay in (<ref>)
is due to
the behaviour of the solution of the equation
\begin{equation} \label{ch3mittagleffler}
\partial_t^{\alpha} e(t)=-e(t),
\end{equation}
for $t\in(0, +\infty)$. Indeed, the solution
of (<ref>) is explicit in terms of the Mittag-Leffler function and it is asymptotic to $\frac{1}{t^{\alpha}}$ as $t\rightarrow +\infty$ (see [80], [92]); notice that
the latter decay corresponds to the one
in (<ref>) when $\gamma=1$.
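This power-law tail can also be observed numerically. The following sketch (an implicit L1 discretization of equation (<ref>); the scheme and all names are our own choices) integrates $\partial_t^{\alpha}e=-e$ with $e(0)=1$ and $\alpha=1/2$, whose exact solution is $E_{1/2}(-\sqrt{t})\sim (\pi t)^{-1/2}$ as $t\to+\infty$, far above any exponential:

```python
import math

def mittag_leffler_decay(alpha=0.5, T=20.0, n=2000):
    # implicit L1 scheme for the (Gamma-normalized) Caputo equation d^alpha e = -e
    h = T / n
    c = math.gamma(2 - alpha) * h ** alpha
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)]
    e = [1.0]
    for m in range(1, n + 1):
        # memory term: weighted contributions of all past increments
        mem = sum(b[k] * (e[m - k] - e[m - k - 1]) for k in range(1, m))
        e.append((e[m - 1] - mem) / (1.0 + c))
    return e

e = mittag_leffler_decay()
# e[-1] approximates the solution at T = 20; it stays of order (pi*T)^(-1/2),
# many orders of magnitude above exp(-T)
```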
As pointed out in [69],
the power law decay for solutions of
time-fractional equations is, in general, unavoidable.
On the other hand, solutions of equations
driven by the standard time-derivative
of the type
$$ \partial_t v(t) + \mathcal{N}[v](t)=0$$
often have a faster decay in many concrete examples, for instance for $\mathcal{N}=-\Delta$ where exponential decay is attained. This particular feature of
the classical heat equation is in fact a special
case of a general phenomenon, described
in detail in the following result:
Let $u$ be a solution of the Cauchy problem (<ref>) with only classical derivative ($\lambda_1=0$) and $\mathcal{N}$ possibly complex
valued. Suppose that there exist $s\in[1, +\infty)$, $\gamma\in(0,+\infty)$ and $C\in(0,+\infty)$ such that $u$ satisfies (<ref>).
Then, for some $C_*>0$, depending only on the constants $C$ and $\gamma$
in (<ref>),
and on $\Vert u_0(\cdot ) \Vert_{L^{s}(\R^n)}$, we have that:
* if $0<\gamma \leq 1$ the solution $u$ satisfies
\begin{equation} \label{ch3claim3}
\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) } \leq
C_* \, e^{-\frac{t}{C}},\qquad{\mbox{for all }}t>0;
\end{equation}
* if $ \gamma>1$, the solution $u$ satisfies
\begin{equation} \label{ch3claim4}
\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) } \leq
\dfrac{C_*}{1+t^{\frac{1}{\gamma-1}}},\qquad{\mbox{for all }}t>0.
\end{equation}
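The dichotomy between the two regimes can be read off the model ODE $\partial_t u=-u^{\gamma}/C$, obtained from (<ref>) when $\lambda_1=0$. A small sketch (restricted to $\gamma\geq1$ for simplicity; the closed-form solutions are elementary and can be checked by differentiation):

```python
import math

def model_decay(u0, gamma, C, t):
    # exact solution of u' = -u**gamma / C with u(0) = u0 > 0, for gamma >= 1:
    # exponential decay when gamma = 1, power law t^(-1/(gamma-1)) when gamma > 1
    if gamma == 1.0:
        return u0 * math.exp(-t / C)
    return (u0 ** (1.0 - gamma) + (gamma - 1.0) * t / C) ** (-1.0 / (gamma - 1.0))

# gamma = 2 gives decay like 1/t, much slower than exp(-t) for large times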
We stress that Theorem <ref>
is valid for a very general class of diffusive
operators ${\mathcal{N}}$, including also
the ones which take into account
fractional derivatives in the space-variables.
In this sense, the phenomenon described in
Theorem <ref> is that:
* on the one hand,
the fractional behaviour induces power-law decays;
* on the other hand, for long times,
the interactions between different derivatives
“decouple”: for instance, a space-fractional
derivative, which would naturally induce
a polynomial decay, does not asymptotically “interfere”
with a classical time-derivative
in the setting of Theorem <ref>,
and the final result is that the decay in time is
of exponential, rather than polynomial, type.
The fact that long-time asymptotics
of mixed type (i.e. classical time-derivatives
versus fractional-space diffusion) reflect the
exponential decay of linear ordinary differential
equations was also observed in [93]
for equations inspired by the Peierls-Nabarro model
for atom dislocations in crystals. As we will see in the proof
of Theorem <ref>, the idea is to find a supersolution of (<ref>) and use a comparison principle in order to estimate the decay of the solution $u$. For the case of mixed derivatives, Vergara and Zacher [119] find both a supersolution and a subsolution decaying as $t^{-\frac{\alpha}{\gamma}}$. When $\alpha\rightarrow 1$, i.e. when the mixed derivative approaches the classical one, the subsolution tends to $0$; this leaves room for better decays, which are in fact proven: the supersolution gains some extra decay, possibly reaching an exponential one.
The optimality of the decay estimates
obtained in our results
and some further comparisons with the existing literature are discussed in Subsection <ref>.
§.§ Applications
We now present several applications of Theorem <ref> to some concrete examples.
The case of the fractional porous medium equation.
Let $0<\sigma<1$ and let
\begin{equation}\label{ch3kappa}
K:\R^n \rightarrow \R
\end{equation}
be the positive function
\begin{equation*}
K(x):= c(n,\sigma) |x|^{-(n-2\sigma)},
\end{equation*}
where $c(n,\sigma)$ is a normalizing constant.
The fractional[As a matter of fact,
as clearly explained in
the fractional porous medium equation
is “the name currently given to two very different equations”.
The one introduced in [39]
has been studied in detail in [43]
in terms of decay estimates. We focus here
on the equation introduced in [29].
As discussed in the above mentioned mediawiki page,
the two equations have very different structures and typically
exhibit different behaviors, so we think that it is a nice feature that,
combining the results here with those in [43],
it follows that a complete set of decay estimates is valid for both the
fractional porous medium equations at the same time.]
porous medium operator (as defined in [29]) is
\begin{equation} \label{ch3op:porous}
\mathcal{N}[u]:=-\nabla \cdot (u \nabla \mathcal{K}(u)), \qquad {\mbox{where}}\qquad\mathcal{K}(u):=u \star K
\end{equation}
where $\star$ denotes the convolution. This operator is used to describe the diffusion of a liquid under pressure in a porous environment in presence of memory effects and long-range
interactions, and it also has some applications in biological models, see [29]. In this framework, the following result holds:
Take $u_0(x) \in L^{\infty}(\R^n)$ and let $u$ be a solution in $\Omega \times (0, + \infty)$ to (<ref>) with $\mathcal{N}$ the fractional porous medium operator as in (<ref>). Then for all $s\in (1, +\infty)$ there exists $C_*>0$ depending on $n,\ s,\ \sigma,\ \Omega$ such that
\begin{equation*}
\Vert u(\cdot, t) \Vert_{L^{s}(\Omega) } \leq \dfrac{C_*}{1+t^{\alpha /2}}.
\end{equation*}
Also, in the case of only classical derivative ($\lambda_1=0$), we have
\begin{equation*}
\Vert u(\cdot, t) \Vert_{L^{s}(\Omega) } \leq \dfrac{C_*}{1+t}
\end{equation*}
where $C_*>0$, possibly different than before, depends on $n,\ s,\ \sigma,\ \Omega$.
The case of the Kirchhoff operator and the fractional Kirchhoff operator.
The Kirchhoff equation describes
the movement of an elastic string that is constrained at the extrema,
taking into account a possible growth of
the tension of the vibrating string in view of its extension. It was first introduced by
Gustav Robert Kirchhoff in 1876, see
and fully addressed from the mathematical
point of view in the 20th century, see [23].
Parabolic equations of Kirchhoff type have been widely studied during the '90s (see for example [56] and the references therein). Recently, a fractional counterpart of the Kirchhoff operator has been introduced by Fiscella and Valdinoci [49]. The setting that we consider here is the following.
Let $m:[0,+\infty)\to[0,+\infty)$ be a nondecreasing function. A typical example is
\begin{equation}\label{ch3def:m}
m(\xi)=m_0 +b\xi
\end{equation}
where $b> 0$ and $m_0 \geq 0$. We consider here both the cases[The case $m_0=0$ for (<ref>) is usually called the degenerate case and it presents several additional
difficulties with respect to the non-degenerate case. ] in which $m(0)>0$ and in which $m$ takes the form in (<ref>) with $m_0=0$. In this setting, the Kirchhoff operator that we take into account is
\begin{equation}\label{ch3KKOP} \mathcal{N}[u]:= m \left(\Vert \nabla u \Vert_{L^2(\Omega)}^2\right) (-\Delta)u. \end{equation}
Then, we obtain the following decay estimates:
Let $u$ be the solution of problem (<ref>) with $\mathcal{N}$ the Kirchhoff operator in (<ref>). Then
there exist $\gamma>0$ and $C>0$ depending on $n,\ s,\ \Omega,\ \inf m(t)$ such that
\begin{equation*}
\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) } \leq \dfrac{C}{1+t^{\frac{\alpha}{\gamma}}},\qquad{\mbox{for all }}t>0,
\end{equation*}
in the following cases:
(i) for all $s\in[1, +\infty)$ when $m$ is non-degenerate; in particular, in this case $\gamma=1$.
(ii) for all $s\in[1,+ \infty)$ when $m$ is degenerate and $n\leq 4$; in particular, in this case $\gamma=3$.
(iii) for $s\leq\frac{2n}{n-4}$ when $m$ is degenerate and $n>4$; in particular, in this case $\gamma=3$.
Moreover, if we take $\lambda_1=0$,
then there exists $C_*>0$, $C'>0$
depending on $n,\ s,\ \Omega,\ \inf m(t)$,
for which the following statements hold true:
* in case (i) we have
\begin{equation*}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq
C_* \, e^{-\frac{t}{C'}},\qquad{\mbox{for all }}t>0,
\end{equation*}
* in cases (ii) and (iii) we have
\begin{equation*}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq \dfrac{C_*}{1+t^\frac{1}{2}},\qquad{\mbox{for all }}t>0.
\end{equation*}
Next, we consider the case of the fractional Kirchhoff operator.
We take a nondecreasing positive function
$M:[0,+\infty)\rightarrow[0,+\infty)$. As for the classic Kirchhoff
operator, we consider either the case when $M(0)>0$
or the case $M(\xi)=b\xi$ with $b>0$.
We fix $\sigma\in(0,1)$. We define the norm
\begin{equation}\label{ch3FKPO-1}
\Vert u(\cdot , t) \Vert_{Z} = \left( \int_{\R^{2n}} \frac{|u(x,t)-u(y,t)|^2 }{|x-y|^{n+2\sigma}} \; dxdy \right)^{\frac{1}{2}} .
\end{equation}
Finally, the fractional Kirchhoff operator reads
\begin{equation}\label{ch3FKPO}
\mathcal{N}[u](x,t):= -M\left( \Vert u(\cdot , t)\Vert_{Z}^2 \right) \int_{\R^n} \frac{ u(x+y,t) + u(x-y,t) -2u(x,t)}{|y|^{n+2\sigma}} \; dy.
\end{equation}
In this setting, our result is the following:
Let $u$ be the solution of problem (<ref>) with $\mathcal{N}$ the fractional Kirchhoff operator in (<ref>). Then
there exist $\gamma>0$ and $C>0$, depending on $K$, $n$, $s$, $\Omega$
and $\inf M(\xi)$, such that
\begin{equation*}
\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) } \leq
\dfrac{C}{1+t^{\frac{\alpha}{\gamma}}}
,\qquad{\mbox{for all }}t>0,
\end{equation*}
in the following cases:
(i) for all $s\in[1, +\infty)$ when $M$ is non-degenerate; in particular, in this case $\gamma=1$.
(ii) for all $s\in[1,+ \infty)$ when $M$ is degenerate and $n\leq 4\sigma$; in particular, in this case $\gamma=3$.
(iii) for $s\leq\frac{2n}{n-4\sigma}$ when $M$ is degenerate and $n>4\sigma$; in particular, in this case $\gamma=3$.
Moreover, if we take $\lambda_1=0$,
then there exists $C_*>0$, depending on $n,\ s,\ \Omega,\ \inf M(t)$, such that:
* in case (i) we have
\begin{equation*}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq
C_* \, e^{-\frac{t}{C'}} ,\qquad{\mbox{for all }}t>0,
\end{equation*}
for some $C'>0$,
* in cases (ii) and (iii) we have
\begin{equation*}
\Vert u(\cdot,t) \Vert_{L^{s}(\Omega) } \leq \dfrac{C_*}{1+t^\frac{1}{2}},\qquad{\mbox{for all }}t>0.
\end{equation*}
It is interesting to remark that
the cases (i), (ii) and (iii) in Theorem <ref> formally
reduce to those in Theorem <ref>
when $\sigma\to1$.
The case of the magnetic operator and the fractional magnetic operator.
We consider here an operator similar to Schrödinger equation
with a magnetic potential (see e.g. [67] and the references therein), that is
\begin{equation}\label{ch3NuMAG}
\mathcal{N}[u]:= -(\nabla -iA)^2 u(x,t)= -\Delta u + |A|^2u +iA\cdot\nabla u +\nabla \cdot (iAu),
\end{equation}
where $A: \R^n \rightarrow \R^n$ has the physical meaning of a magnetic field
(in this case, one usually studies the three-dimensional case $n=3$, but our
approach is general).
The goal of these pages is to apply Theorem <ref>
to the magnetic operator in (<ref>), thus obtaining decay estimates in time in this framework.
It is interesting to remark that the operator in (<ref>)
is structurally very different from the linear Schrödinger operator,
which corresponds to the choice
\begin{equation}\label{ch3NuMAG:SC}
\mathcal{N}[u]= -i(\Delta +V)u.
\end{equation}
Indeed, for the operator in (<ref>) decay estimates
in time do not[Indeed, if $V\in\R$ and $u$
is a solution of the Schrödinger parabolic equation $
\partial_t u+i(\Delta +V)u=0$ in $\Omega$
with homogeneous data along $\partial\Omega$, the conjugated equation reads $
\partial_t \bar u-i(\Delta +V)\bar u=0$, and therefore
\begin{eqnarray*}&& \partial_t\int_\Omega |u(x,t)|^2\,dx=
\int_\Omega u(x,t)\,\partial_t\bar u(x,t)+\bar u(x,t)\,\partial_t u(x,t)\,dx\\&&\qquad
=i\int_\Omega u(x,t)\,\Delta\bar u(x,t)-\bar u(x,t)\,\Delta u(x,t)\,dx
\\&&\qquad=i\int_\Omega \nabla\cdot\big( u(x,t)\,\nabla\bar u(x,t)-\bar u(x,t)\,\nabla u(x,t)\big)\,dx=0,
\end{eqnarray*}
where the last identity follows from the Divergence Theorem and the boundary conditions.
This shows that decay estimates in time are in general
not possible in this setting, thus highlighting an interesting
difference between the Schrödinger operator in (<ref>)
and the magnetic operator in (<ref>).
This difference, as well as the computation above,
has a natural physical meaning,
since in the Schrödinger equation the squared modulus of
the solution represents
the probability density of a wave function,
whose total amount remains constant if no dissipative forces appear in the equation.] hold in general, not even in the case of classical
time-derivatives. The decay estimate for the classical magnetic operator is the following:
Let $u$ be the solution of problem (<ref>) with $\mathcal{N}$ the magnetic operator in (<ref>).
Then for all $s \in [1, +\infty)$ there exists $C_1>0$ depending on $A$, $n$, $s$ and $\sigma$ such that
\begin{equation*}
\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) } \leq \dfrac{C_1}{1+t^{{\alpha}}}\qquad{\mbox{for all }}t>0.
\end{equation*}
Moreover, in the case of classical derivatives ($\lambda_1=0$), we have
\begin{equation*}
\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) } \leq C_2\,e^{-\frac{t}{C_3}}\qquad{\mbox{for all }}t>0
\end{equation*}
for some $C_2$, $C_3>0$, depending on $A$, $n$, $ s$ and $\sigma$.
In [38] D'Avenia and Squassina introduced a fractional operator where a magnetic field $A: \R^n \rightarrow \R^n $ appears. Their aim was to study the behaviour of free particles interacting with a magnetic field. For a fixed $\sigma \in (0,1)$, such an operator in dimension $n$ reads
\begin{equation}\label{ch3SQ}
\mathcal{N}[u](x,t):= \int_{\R^n} \frac{u(x,t)-e^{i(x-y)\cdot A(\frac{x+y}{2})} u(y,t)}{|x-y|^{n+2\sigma}} \;dy.
\end{equation}
In the appropriate framework, the fractional magnetic operator in (<ref>)
recovers the classical magnetic operator in (<ref>)
as $\sigma\to1$, see [110] (see also [88]
for a general approach involving also nonlinear operators).
In the setting of the fractional magnetic operator, we present the following result:
Let $u$ be the solution of problem (<ref>) with $\mathcal{N}$ the
fractional magnetic operator in (<ref>). Then for all $s \in [1, +\infty)$ there exists $C_1>0$ depending on $n$, $s$ and $\sigma$ such that
\begin{equation*}
\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) } \leq \dfrac{C_1}{1+t^{{\alpha}}}\qquad{\mbox{for all }}t>0.
\end{equation*}
Moreover, in the case of classical derivatives ($\lambda_1=0$), we have
\begin{equation*}
\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) } \leq C_2\,e^{-\frac{t}{C_3}}\qquad{\mbox{for all }}t>0,
\end{equation*}
for some $C_2$, $C_3>0$ depending on $n$, $s$ and $\sigma$.
The magnetic operators present a crucial difference with respect to
the other operators considered in the previous applications, since they are complex valued operators.
Other operators.
We point out that condition (<ref>) has already been checked in many cases in [43]. We present here very briefly the operators treated there that may need an introduction. The list includes the cases of the classical $p$-Laplacian
and porous media diffusion (see [41, 118])
$$ \Delta_p u^m := {\rm div} (|\nabla u^m|^{p-2}\nabla u^m),
\qquad{\mbox{with $p\in(1,+\infty)$ and $m\in(0,+\infty)$,}}$$
the case of graphical mean curvature, given in formula (13.1) of [59],
$$ {\rm div}\left( \frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\right), $$
the case of the fractional $p$-Laplacian (see e.g. [26])
\begin{eqnarray*}&&(-\Delta)^s_pu(x):=
\int_{\R^n}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{n+sp}}\,dy,\\&&\qquad{\mbox{with $
p\in(1,+\infty)$ and $s\in(0,1)$,}}\end{eqnarray*}
and possibly even the sum of different nonlinear operator of this type, with coefficients $\beta_j>0$,
$$ \sum_{j=1}^N \beta_j (-\Delta)^{s_j}_{p_j} u, \qquad \text{with} \
p_j\in(1,+\infty) \ \text{and} \ s_j\in(0,1), $$
the case of the anisotropic fractional Laplacian, that is the sum of fractional directional derivatives in the directions of the space $e_j$, given by
$$(-\Delta_{\beta})^{\sigma} u(x)= \sum_{j=1}^{n} \beta_j (-\partial_{x_j}^2)^{\sigma_j} u(x) $$
for $\beta_j>0$, $\beta=(\beta_1, \dots, \beta_n)$ and $\sigma=(\sigma_1, \dots, \sigma_n)$, where
$$ (-\partial_{x_j}^2)^{\sigma_j} u(x) = \int_{\R} \frac{u(x)- u(x+\rho e_j)}{\rho^{1+2\sigma_j}} d\rho, $$
considered for example in [46].
The list of possible diffusion operators continues with
a fractional porous media operators (see [39])
\begin{equation*}
{\mathcal{P}}_{1,s}(u):=(-\Delta)^s u^m
\qquad{\mbox{with $s\in(0,1)$ and $m\in(0,+\infty)$,}}
\end{equation*}
and the graphical fractional mean curvature operator (see [9])
\begin{eqnarray*}&& {\mathcal{H}}^s(u)(x):=\int_{\R^n} F\left(\frac{u(x)-u(x+y)}{|y|}\right)\frac{dy}{|y|^{n+s}},\\&&\qquad\qquad{\mbox{with
$s\in(0,1)$ and }}F(r):=\int_0^r \frac{d\tau}{(1+\tau^2)^{\frac{n+1+s}{2}}}.\end{eqnarray*}
For the sake of brevity, we recall the corresponding results in Table <ref>.
Operator ${\mathcal{N}}$, values of $\lambda_1$, $\lambda_2$, and range of $\ell$ (for the corresponding decay rates $\Theta$, see [43]):
* Nonlinear classical diffusion $\Delta_p u^m$: $\lambda_1\in(0,1]$, $\lambda_2\in[0,1)$, $\ell\in[1,+\infty)$.
* Nonlinear classical diffusion $\Delta_p u^m$ with $(m,p)\neq(1,2)$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Classical diffusion $\Delta_2 u$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Graphical mean curvature $ {\rm div}\left( \frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\right)$: $\lambda_1\in(0,1]$, $\lambda_2\in[0,1)$, $\ell\in[1,+\infty)$.
* Graphical mean curvature $ {\rm div}\left( \frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\right)$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Fractional $p{\mbox{-}}$Laplacian $(-\Delta)^s_pu$: $\lambda_1\in(0,1]$, $\lambda_2\in[0,1)$, $\ell\in[1,+\infty)$.
* Fractional $p{\mbox{-}}$Laplacian $(-\Delta)^s_pu$, $p> 2$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Fractional $p{\mbox{-}}$Laplacian $(-\Delta)^s_pu$, $p\leq 2$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Superposition of fractional $p{\mbox{-}}$Laplacians $ \sum_{j=1}^N \beta_j (-\Delta)^{s_j}_{p_j} u$, $\beta_j>0$: $\lambda_1\in(0,1]$, $\lambda_2\in[0,1)$, $\ell\in[1,+\infty)$.
* Superposition of fractional $p{\mbox{-}}$Laplacians $ \sum_{j=1}^N \beta_j (-\Delta)^{s_j}_{p_j} u$, with $\beta_j>0$ and $p_{\max}>2$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Superposition of fractional $p{\mbox{-}}$Laplacians $ \sum_{j=1}^N \beta_j (-\Delta)^{s_j}_{p_j} u$, with $\beta_j>0$ and $p_{\max}\leq2$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Superposition of anisotropic fractional Laplacians $ \sum_{j=1}^N \beta_j (-\partial_{x_j}^2)^{s_j} u$, $\beta_j>0$: $\lambda_1\in(0,1]$, $\lambda_2\in[0,1)$, $\ell\in[1,+\infty)$.
* Superposition of anisotropic fractional Laplacians $ \sum_{j=1}^N \beta_j (-\partial_{x_j}^2)^{s_j} u$, $\beta_j>0$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Fractional porous media I $ {\mathcal{P}}_{1,s}(u)$: $\lambda_1\in(0,1]$, $\lambda_2\in[0,1)$, $\ell\in[1,+\infty)$.
* Fractional porous media I $ {\mathcal{P}}_{1,s}(u)$, $m>1$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Fractional porous media I $ {\mathcal{P}}_{1,s}(u)$, $m\leq1$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
* Fractional graphical mean curvature $ {\mathcal{H}}^s(u)$: $\lambda_1\in(0,1]$, $\lambda_2\in[0,1)$, $\ell\in[1,+\infty)$.
* Fractional graphical mean curvature $ {\mathcal{H}}^s(u)$: $\lambda_1=0$, $\lambda_2=1$, $\ell\in[1,+\infty)$.
Results from [43].
The examples provided here show that the “abstract”
structural hypothesis (<ref>) is reasonable and can be explicitly
checked in several cases of interest.
We are also confident that other interesting
examples fulfilling such an assumption
can be found, therefore Theorem <ref> turns out to play a pivotal
role in the asymptotics of real and complex valued,
possibly nonlinear, and possibly fractional, operators.
§.§.§ Comparison with the existing literature
In general, in problems of the type (<ref>) it is very difficult to provide
explicit solutions, and often the solution is not unique, see e.g. [25]. Therefore, even partial information on the solutions is important.
In the case of a Kirchhoff parabolic equation with purely classical time-derivative in the degenerate case $m(0)=0$, Ghisi and Gobbino [56] found the time-decay estimate
\begin{equation}\label{ch3GOBLA1} c (1+t)^{-1} \leq \Vert \nabla u(\cdot, t)
\Vert_{L^2(\Omega)}^2 \leq C (1+t)^{-1} \qquad{\mbox{for all }}t>0.\end{equation}
for some constants $C, c>0$ depending on the initial data. From this,
performing an integration of the gradient along paths[More precisely,
the fact that (<ref>) implies (<ref>)
can be seen as a consequence of the following observation:
for every $u\in C^\infty_0(\Omega)$,
\begin{equation}\label{ch3L2GRAs}
\|u\|_{L^2(\Omega)}\le
C\,\|\nabla u\|_{L^2(\Omega)},
\end{equation}
where $C>0$ depends on $n$ and $\Omega$.
Indeed, fix $x_0\in\R^n$ such that $B_1(x_0)\subset\R^n\setminus\Omega$
and $\Omega\subset B_R(x_0)$, for some $R>1$.
Then, for every $x\in\Omega$ we have that $|x-x_0|\in[1,R]$ and thus
\begin{eqnarray*}
&& |u(x)|^2=|u(x)-u(x_0)|^2=\left|
\int_0^1 \nabla u(x_0+t(x-x_0))\cdot(x-x_0)\,dt
\right|^2\\
&&\qquad\le |x-x_0|^2\,\int_0^1 |\nabla u(x_0+t(x-x_0))|^2\,dt\le
R^2\,\int_0^1 |\nabla u(x_0+t(x-x_0))|^2\,dt.
\end{eqnarray*}
On the other hand, if $t\in[0,1/R)$ we have that
$$ \big|t(x-x_0)\big|< \frac{|x-x_0|}{R}\le1$$
and so $x_0+t(x-x_0)\in B_1(x_0)\subset\R^n\setminus\Omega$,
which in turn implies that $\nabla u(x_0+t(x-x_0))=0$. This gives that
$$ |u(x)|^2\le R^2\,\int_{1/R}^1 |\nabla u(x_0+t(x-x_0))|^2\,dt.$$
Hence, using the substitution $x\mapsto y:=x_0+t(x-x_0)$, we conclude that
\begin{eqnarray*}
&&\int_\Omega |u(x)|^2\,dx\le
R^2\,\int_{1/R}^1 \left[\int_{\R^n}|\nabla u(x_0+t(x-x_0))|^2\,dx\right]\,dt\\&&\qquad=
R^2\,\int_{1/R}^1 \left[\int_{\R^n}|\nabla u(y)|^2\,\frac{dy}{t^n}\right]\,dt\\&&\qquad
\leq R^{n+2}\,\int_{1/R}^1 \left[\int_{\R^n}|\nabla u(y)|^2\,dy\right]\,dt
\leq
R^{n+2}\,\|\nabla u\|^2_{L^2(\R^n)}=R^{n+2}\,\|\nabla u\|^2_{L^2(\Omega)},\end{eqnarray*}
which proves (<ref>).], one can find the estimate
\begin{equation}\label{ch3GOBLA2}
\Vert u(\cdot, t) \Vert_{L^2(\Omega)} \leq C (1+t)^{-\frac{1}{2}}
\qquad{\mbox{for all }}t>0. \end{equation}
The latter is exactly the estimate we found
in Theorem <ref> as a particular case of our analysis.
The fractional porous medium equation with classical derivative has been studied
by Biler, Karch and Imbert
in [25], establishing some decay estimates of the $ L^s$ norm, such as
\begin{equation}\label{ch3bilerdecay}
\Vert u(\cdot, t) \Vert_{L^{s}(\Omega) } \leq t^{-\frac{n}{n+2-2\sigma} \left(1-\frac{1}{s} \right)}.
\end{equation}
As a matter of fact, this decay
is slower than what we find in Theorem <ref>, which is asymptotic to $t^{-1}$
(in this sense, Theorem <ref> here
can be seen as an improvement of the estimates
in [25]).
On the other hand, in [25] the authors also provide a weak solution that has exactly the decay in (<ref>),
thus showing the optimality of (<ref>)
in this generality,
while our result holds for strong solutions. Then, comparing Theorem <ref> here with the estimate in (<ref>),
we deduce that regular solutions satisfy a better decay
than the one which is valid also for irregular ones.
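For a quick quantitative comparison (a small script, with names of our own choosing), one can check that the exponent $\frac{n}{n+2-2\sigma}\left(1-\frac{1}{s}\right)$ in (<ref>) is always strictly smaller than the exponent $1$ of Theorem <ref> for classical derivatives, since both factors are strictly below $1$:

```python
def biler_exponent(n, s, sigma):
    # decay exponent in the L^s estimate of [25] (classical time-derivative):
    # n/(n+2-2*sigma) < 1 since sigma < 1, and (1 - 1/s) < 1 since s < infinity
    return n / (n + 2 - 2 * sigma) * (1 - 1 / s)

samples = [(n, s, sigma) for n in (1, 2, 3, 5)
           for s in (1.5, 2.0, 10.0) for sigma in (0.1, 0.5, 0.9)]
exponents = [biler_exponent(*p) for p in samples]
```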
§ PROOFS
This section contains the proofs of our main results. We start with the proof of Theorem <ref>.
In order to prove Theorem <ref>, we need a comparison result for the equation involving the mixed time-derivative. As a matter of fact,
comparison results
for the case of the Caputo derivative are available
in the literature, see e.g. Lemma 2.6 of [119]. In our arguments
we employ the differentiability of $u$ and the fact that $u$ is a strong solution, and we obtain:
Let $T\in(0,+\infty)\cup\{+\infty\}$
and $w, \ v: [0,T) \rightarrow [0,+\infty) $ be two Lipschitz continuous functions.
Assume that $w$ is a supersolution and $v$ is a subsolution at each differentiability point for the equation
\begin{equation}\label{ch30919}
\lambda_1 \partial_t^{\alpha} u(t) + \lambda_2 \partial_t u(t) =-ku^{\gamma}(t)
\end{equation}
with $\lambda_1$, $\lambda_2$, $\gamma$, $k >0$. If
\begin{equation}\label{ch3X00}
w(0)> v(0), \end{equation}
then
\begin{equation}\label{ch3X01}
w(t)>v(t)\qquad{\mbox{ for all }}t\in(0,T).\end{equation}
By contradiction, let us suppose that for some time $t \in(0,T)$ we have $w(t)=v(t)$, and let us call $\tau$ the first time for which the equality is reached. Then,
since $w$ is a supersolution and $v$ is a subsolution of (<ref>),
we obtain that
\begin{equation}\label{ch3QUA1}
\lambda_1 \partial_t^{\alpha} (w-v)(\tau) + \lambda_2 \partial_t (w-v)(\tau) \geq -k [w^{\gamma}(\tau) - v^{\gamma}(\tau)]=0.
\end{equation}
Now we distinguish two cases, depending on whether or not $w-v$ is differentiable at $\tau$.
To start with, suppose that $w-v$ is differentiable at $\tau$.
Since $w\ge v$ in $(0,\tau)$, we have that
\begin{equation*}
\partial_t (w-v)(\tau) \le 0.\end{equation*}
From this and (<ref>), we obtain that
\begin{eqnarray*}
0&\le&\partial_t^{\alpha} (w-v)(\tau) \\&=&
\frac{(w-v)(\tau) - (w-v)(0)}{\tau^{\alpha}} +\alpha \int_0^{\tau} \frac{(w-v)(\tau) - (w-v)(\rho)}{(\tau-\rho)^{1+\alpha}} d\rho\\&=&
-\frac{ (w-v)(0)}{\tau^{\alpha}} -\alpha \int_0^{\tau} \frac{ (w-v)(\rho)}{(\tau-\rho)^{1+\alpha}} d\rho\\&\le&
-\frac{ (w-v)(0)}{\tau^{\alpha}}.
\end{eqnarray*}
This is in contradiction with (<ref>) and so it proves (<ref>) in this case.
Now we focus on the case in which $w-v$ is not differentiable at $\tau$.
Since $w-v$ is Lipschitz continuous, it is differentiable almost everywhere; moreover, being $(w-v)(\tau)=0$ and $w-v\ge0$ in $(0,\tau)$, the derivative of $w-v$ cannot be almost everywhere positive in any left neighborhood of $\tau$. Hence, there exists a sequence $t_j\in(0,\tau)$
such that $w-v$ is differentiable at $t_j$, with $\partial_t(w-v)(t_j)\le 0$
and $t_j\to\tau$ as $j\to+\infty$.
Accordingly, since $w$ is a supersolution and $v$ is a subsolution of (<ref>),
we obtain that
\begin{equation}\label{ch3QUA2}
\begin{split}
&\frac{(w-v)(t_j) - (w-v)(0)}{t_j^{\alpha}} +\alpha \int_0^{t_j} \frac{(w-v)(t_j)
- (w-v)(\rho)}{(t_j-\rho)^{1+\alpha}} d\rho
\\ =\;&
\partial_t^{\alpha} (w-v)(t_j)
\\ \ge\;&
\partial_{t}^{\alpha} (w-v)(t_j) + \frac{\lambda_2}{\lambda_1} \partial_t (w-v)(t_j)
\\ \geq\;& -\frac{k}{\lambda_1}\, [w^{\gamma}(t_j) - v^{\gamma}(t_j)].
\end{split}
\end{equation}
Now we observe that if $f$ is a Lipschitz function and $t_j\to\tau>0$
as $j\to+\infty$, then
\begin{equation}\label{ch3VITALI}
\lim_{j\to+\infty}
\int_0^{t_j} \frac{f(t_j)
- f(\rho)}{(t_j-\rho)^{1+\alpha}} d\rho
=\int_0^{\tau} \frac{f(\tau)
- f(\rho)}{(\tau-\rho)^{1+\alpha}} d\rho.
\end{equation}
To check this, let
$$ F_j(\rho):=\chi_{(0,t_j)}(\rho)\,
\frac{f(t_j)
- f(\rho)}{(t_j-\rho)^{1+\alpha}},$$
and let $E\subset(0,+\infty)$ be a measurable set, with measure $|E|$ less than
a given $\delta>0$. Let also $q:=\frac{1+\alpha}{2\alpha}>1$
and denote by $p$ its conjugated exponent. Then, by Hölder inequality, for large $j$
we have that
\begin{eqnarray*}
\int_E| F_j(\rho)|\,d\rho&\le& |E|^{1/p}\,\left( \int_0^{+\infty} |F_j(\rho)|^q\,d\rho\right)^{1/q}\\
&\le& \delta^{1/p}\,\left( \int_0^{t_j}
\frac{|f(t_j) - f(\rho)|^q}{(t_j-\rho)^{(1+\alpha)q}}
\,d\rho\right)^{1/q}\\
&\le& L\,\delta^{1/p}\,\left( \int_0^{t_j}
\frac{d\rho}{(t_j-\rho)^{\alpha q}}\right)^{1/q}\\
&=& L\,\delta^{1/p}\,\left( \int_0^{t_j}
\frac{d\rho}{(t_j-\rho)^{(1+\alpha)/2}}\right)^{1/q}\\
&=& L\,\delta^{1/p}\,\left( \frac{2 t_j^{(1-\alpha)/2}}{1-\alpha}\right)^{1/q}
\\&\le&
L\,\left( \frac{2 (\tau+1)^{(1-\alpha)/2}}{1-\alpha}\right)^{1/q}
\,\delta^{1/p},
\end{eqnarray*}
where $L$ is the Lipschitz constant of $f$. Consequently, by the
Vitali Convergence Theorem,
we obtain that
$$ \lim_{j\to+\infty}\int_0^{+\infty} F_j(\rho)\,d\rho=
\int_0^{+\infty} \lim_{j\to+\infty}F_j(\rho)\,d\rho,$$
which gives (<ref>), as desired.
Now, we take the limit as $j\to+\infty$ in (<ref>),
exploiting (<ref>) and the fact that $w(\tau)=v(\tau)$. In this
way, we have that
$$ -\frac{ (w-v)(0)}{\tau^{\alpha}} -\alpha \int_0^{\tau} \frac{
(w-v)(\rho)}{(\tau-\rho)^{1+\alpha}} d\rho
\ge0.$$
Since $w\ge v$ in $(0,\tau)$, the latter inequality implies that
$$ -\frac{ (w-v)(0)}{\tau^{\alpha}}
\ge0.$$
This is in contradiction with (<ref>) and so it completes the proof of (<ref>).
It is also useful to observe that Lemma <ref>
holds true also for the classical derivative (i.e. when $\lambda_1=0$).
We give its statement and proof for the sake of completeness:
Let $T\in(0,+\infty)\cup\{+\infty\}$,
$w, \ v: [0,T) \rightarrow [0,+\infty)$ be two Lipschitz continuous functions.
Assume that $w$ is a supersolution and $v$ is a subsolution at each differentiability point for the equation
\begin{equation}\label{ch309192}
\partial_t u(t) =-ku^{\gamma}(t)
\end{equation}
with $\gamma$, $k >0$. If
\begin{equation}\label{ch3X002}
w(0)> v(0), \end{equation}
then
\begin{equation}\label{ch3X012}
w(t)>v(t)\qquad{\mbox{ for all }}t\in(0,T).\end{equation}
Suppose that (<ref>) is false. Then there exists $\tau\in(0,T)$
such that $w>v$ in $(0,\tau)$ and
\begin{equation}\label{ch3TAU0}w(\tau)=v(\tau).\end{equation}
We fix $\e>0$, to be taken as small as we wish in the sequel,
and define
\begin{equation}\label{ch3TAU3}
f(t):=w(t)-v(t)-\e\,(\tau-t).
\end{equation}
We observe that
$f(0)=w(0)-v(0)-\e\,\tau>0$
as long as $\e$ is sufficiently small,
and $f(\tau)=w(\tau)-v(\tau)=0$.
Therefore there exists $\tau_\e\in(0,\tau]$ such that
\begin{equation}\label{ch3TAU2}
{\mbox{$f>0$ in~$(0,\tau_\e)$ and~$f(\tau_\e)=0$.}}
\end{equation}
We claim that
\begin{equation}\label{ch3TAU}
\lim_{\e\to0^+}\tau_\e=\tau.
\end{equation}
Indeed, suppose, by contradiction, that, up to a subsequence, $\tau_\e$
converges to some $\tau_0\in[0,\tau)$ as $\e\to0^+$. Then we have that
$$ 0=\lim_{\e\to0^+} f(\tau_\e)=\lim_{\e\to0^+}
\big(w(\tau_\e)-v(\tau_\e)-\e\,(\tau-\tau_\e)\big)=w(\tau_0)-v(\tau_0).$$
This is in contradiction with the definition of $\tau$
and so (<ref>) is proved.
Now, from (<ref>), we know that
there exists a sequence $t_j\in(0,\tau_\e]$ such that $f$
is differentiable at $t_j$, $\partial_t f(t_j)\le0$
and $t_j\to\tau_\e$ as $j\to+\infty$.
Accordingly, we deduce from (<ref>) and (<ref>) that
$$ 0 \ge \partial_t f(t_j)=
\partial_t (w-v)(t_j)+\e
\ge-k\big( w^{\gamma}(t_j)-v^\gamma(t_j)\big)+\e.$$
Hence, taking the limit as $j\to+\infty$,
\begin{equation}\label{ch3TAY1}
\frac{\e}{k}\le w^{\gamma}(\tau_\e)-v^\gamma(\tau_\e)=
\big( v(\tau_\e)+\e\,(\tau-\tau_\e)\big)^\gamma-v^\gamma(\tau_\e).
\end{equation}
We claim that
\begin{equation}\label{ch3TAY2}\liminf_{\e\to0^+}
v(\tau_\e)>0.
\end{equation}
Indeed, if not, by (<ref>) and (<ref>),
\begin{equation}\label{ch3TAY3}
0=\liminf_{\e\to0^+} v(\tau_\e)=v(\tau)=w(\tau).
\end{equation}
We observe that this implies that
\begin{equation}\label{ch31GAMMA}
\gamma\in(0,1).
\end{equation}
Indeed, since $w$ is a supersolution of (<ref>),
we have that
\begin{eqnarray*}&&
w(t)\ge w(0)\,e^{-kt},\qquad{\mbox{when }}\gamma=1\\
{\mbox{and }}&&w(t)\ge\frac{1}{\left(
\frac1{w^{\gamma-1}(0)}+k(\gamma-1)t
\right)^{\frac1{\gamma-1}}},
\qquad{\mbox{when }}\gamma>1,\end{eqnarray*}
as long as $w(t)>0$, and so for all $t>0$.
In particular, we have that $w(\tau)>0$, in contradiction with (<ref>),
and this proves (<ref>).
Then, we use that $v$ is a subsolution of (<ref>) and (<ref>)
to write that, for any $t\in(0,\tau)$,
$$ \frac{v^{1-\gamma}(\tau)-v^{1-\gamma}(t)}{1-\gamma}=\frac1{1-\gamma}
\int_t^\tau \partial_\rho (v^{1-\gamma}(\rho))\,d\rho
=\int_t^\tau \frac{\partial_\rho v(\rho)}{v^\gamma(\rho)}\,d\rho\le-k(\tau-t).$$
Therefore, recalling (<ref>),
$$ v^{1-\gamma}(t)\ge k(1-\gamma)(\tau-t),$$
and thus
\begin{equation}\label{ch37ygfugv}
v(t)\ge \big( k(1-\gamma)(\tau-t)\big)^{1/(1-\gamma)}.\end{equation}
Similarly, using that $w$ is a supersolution of (<ref>) and (<ref>)
we obtain that, for any $t\in(0,\tau)$,
$$ w(t)\le \big( k(1-\gamma)(\tau-t)\big)^{1/(1-\gamma)}.$$
Comparing this and (<ref>), we conclude that
$$ w(0)\le\big( k(1-\gamma)\tau\big)^{1/(1-\gamma)}\le v(0),$$
which is in contradiction with (<ref>), and so the proof of (<ref>)
is complete.
Then, using (<ref>)
and (<ref>), a Taylor expansion gives that
\begin{eqnarray*}
\frac{1}{k}&\le& \frac{v^\gamma(\tau_\e)}{\e}\,\left[
\left( 1+\frac{\e\,(\tau-\tau_\e)}{v(\tau_\e)}\right)^\gamma-1\right]
\\&=&\frac{v^\gamma(\tau_\e)}{\e}\,\left[
\frac{\gamma\e\,(\tau-\tau_\e)}{v(\tau_\e)}+
O\left( \frac{\e^2\,(\tau-\tau_\e)^2}{v^2(\tau_\e)}\right)
\right]\\&=&
\frac{\gamma\,(\tau-\tau_\e)}{v^{1-\gamma}(\tau_\e)}+
O\left( \frac{\e\,(\tau-\tau_\e)^2}{v^{2-\gamma}(\tau_\e)}\right).
\end{eqnarray*}
Then, sending $\e\to0^+$ and recalling (<ref>)
and (<ref>), we conclude that $\frac1k\le0$.
This is a contradiction and the proof of (<ref>) is thereby complete.
With this preliminary work, we are in a position to prove the general
claim stated in Theorem <ref>.
First, notice that
\begin{equation} \label{ch30721}
{\partial_t |u|^{s}}= s |u|^{s-1} \left(\frac{\Re(u) \partial_{t} \Re(u)+ \Im(u) \partial_{t} \Im(u)}{|u|} \right) = s|u|^{s-2} \Re\{\bar{u} \, \partial_t u\}.
\end{equation}
Using (<ref>) and exchanging the order of the integral and the derivative, we have
\begin{equation}\label{ch3ineq:complex0}
\begin{split}
\int_{\Omega} |u|^{s-2} \Re\{\bar{u} \, \partial_t u\} \; dx &= \int_{\Omega} \frac{\partial_t |u|^{s}}{s} \; dx
=\frac{1}{s} \partial_t \int_{\Omega} |u|^s \; dx = \frac{1}{s} \partial_t \Vert u(\cdot, t) \Vert_{L^{s} (\Omega) }^{s} \\
& =\Vert u(\cdot, t) \Vert_{L^{s} (\Omega) }^{s-1} \partial_t \Vert u(\cdot, t) \Vert_{L^{s} (\Omega)}.
\end{split}
\end{equation}
Now we claim that
\begin{equation}\label{ch3ineq:complex}
\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) }^{s-1} \partial_t^{\alpha} (\Vert u(\cdot ,t) \Vert_{L^{s}(\Omega) }) \leq \int_{\Omega} |u(x,t)|^{s-2} \Re \{ \bar{u}(x,t) \partial_t^{\alpha} (u(x,t)) \} \, dx.
\end{equation}
This formula is similar to one given in Corollary 3.1 of [119]
for general kernels. In our setting,
we provide
a simpler proof for the case of the Caputo derivative, covering also the case
of complex-valued operators.
To prove (<ref>),
using the definition of Caputo derivative we see that
\begin{equation*}
\begin{split}
\int_{\Omega} & |u(x,t)|^{s-2} \Re \{ \bar{u}(x,t)\partial_t^{\alpha} u(x,t) \} \; dx \\
&=\int_{\Omega} |u(x,t)|^{s-2} \Re \left\{ \bar{u}(x,t) \left[
\dfrac{u(x,t)-u(x,0)}{t^{\alpha}} + \alpha \int_{0}^{t} \dfrac{u(x,t)-u(x,\tau)}{(t-\tau)^{1+\alpha}} \;d\tau
\right]
\right\} \; dx \\
&= \int_{\Omega} |u(x,t)|^{s-2} \bigg( \frac{|u(x,t)|^2 - \Re \{ \bar{u}(x,t) u(x,0) \} }{t^{\alpha}} \\
& \hspace{1em} + \alpha \int_{0}^{t} \frac{|u(x,t)|^2 - \Re \{ \bar{u}(x,t) u(x,\tau) \} }{(t-\tau)^{1+\alpha}} \; d\tau \bigg) dx.
\end{split}
\end{equation*}
Hence, by using the Hölder inequality, we get
\begin{equation*}
\begin{split}
\int_{\Omega} & |u(x,t)|^{s-2} \Re \{ \bar{u}(x,t)\partial_t^{\alpha} u(x,t) \} \; dx \\
& \geq \frac{ \Vert u(\cdot, t) \Vert_{L^s(\Omega)}^{s} - \Vert u(\cdot, t) \Vert_{L^s(\Omega)}^{s-1} \Vert u(\cdot, 0) \Vert_{L^s(\Omega)} }{t^{\alpha}} +\alpha \int_{0}^{t} \frac{\Vert u(\cdot, t) \Vert_{L^s(\Omega)}^{s}}{(t-\tau)^{1+\alpha}} \; d\tau \\
& \hspace{1em} - \alpha \int_{0}^{t} \dfrac{\Vert u(\cdot, t) \Vert_{L^s(\Omega)}^{s-1} \Vert u(\cdot, \tau) \Vert_{L^s(\Omega)}}{(t-\tau)^{1+\alpha}} \; d\tau \\
& = \Vert u(\cdot, t) \Vert_{L^s(\Omega)}^{s-1} \bigg[ \dfrac{\Vert u(\cdot, t) \Vert_{L^s(\Omega)}-\Vert u(\cdot, 0) \Vert_{L^s(\Omega)}}{t^{\alpha}} \\
& \hspace{1em} + \alpha \int_{0}^{t} \dfrac{\Vert u(\cdot, t) \Vert_{L^s(\Omega)} - \Vert u(\cdot, \tau) \Vert_{L^s(\Omega)}}{(t-\tau)^{1+\alpha}}\; d\tau \bigg] \\
& = \Vert u(\cdot, t) \Vert_{L^s(\Omega)}^{s-1} \partial_t^{\alpha} \Vert u(\cdot, t) \Vert_{L^s(\Omega)}.
\end{split}
\end{equation*}
This completes the proof of (<ref>).
Now, to make the notation simpler, we set $v(t):= \Vert u(\cdot, t) \Vert_{L^{s} (\Omega) }$. By combining (<ref>) and (<ref>), we find that
\begin{equation*}
v^{s-1}(t) \left( \lambda_1 \partial_t^{\alpha} v(t) + \lambda_2 \partial_t v(t) \right) \leq \int_{\Omega} |u|^{s-2}(x,t) \Re \left\{\bar{u}(x,t) \left( \lambda_1 \partial_t^{\alpha} u(x,t) +\lambda_2 \partial_t u(x,t) \right) \right\} dx
\end{equation*}
and so, using the fact that $u$ is a solution of (<ref>), we conclude that
\begin{equation*}
v^{s-1}(t) \left( \lambda_1 \partial_t^{\alpha} v(t) + \lambda_2 \partial_t v(t) \right) \leq - \int_{\Omega} |u|^{s-2}(x,t) \Re \{\bar{u}(x,t) \mathcal{N}[u](x,t)\} dx.
\end{equation*}
From this, we use the structural hypothesis (<ref>) and we obtain that
\begin{equation*}
v^{s-1}(t) \left( \lambda_1 \partial_t^{\alpha} v(t) + \lambda_2 \partial_t v(t) \right) \leq - \frac{v^{s-1+\gamma}(t)}{C}.
\end{equation*}
Hence, we have established
the claim in (<ref>)
for all $t>0$ such that $v(t)>0$.
Then, suppose that for some $\bar{t}>0$ we have $v(\bar{t})=0$. Since $v$ is nonnegative, $\bar{t}$ is an interior minimum point of $v$, and therefore
it follows that
\begin{equation}\label{ch3210718}
\partial_t v(\bar{t})=0.
\end{equation}
On the other hand, if $v(t)=0$, then
\begin{equation}\label{ch31321}
{ \partial_t^{\alpha} v(t)
\le 0},\end{equation}
since
\begin{equation*}
\partial_t^{\alpha} v(t)= \frac{v(t)-v(0)}{t^{\alpha}} + \alpha\int_{0}^{t} \frac{v(t)-v(\tau)}{(t-\tau)^{1+\alpha}} d\tau = -\frac{v(0)}{t^{\alpha}} - \alpha\int_{0}^{t} \frac{v(\tau)}{(t-\tau)^{1+\alpha}} d\tau \leq 0.
\end{equation*}
So, by (<ref>) and (<ref>),
$\left( \lambda_1 \partial_t^{\alpha} v(\bar t) + \lambda_2 \partial_t v(\bar t) \right) \leq 0$, which gives (<ref>) also in this case, as desired.
Now we exhibit a supersolution $w(t)$ of the equation $(\lambda_1 \partial_t^{\alpha} + \lambda_2 \partial_t) v(t) = -\nu v^{\gamma}(t)$, where $\nu:=\frac{1}{C}$.
For this, we recall
Section 7 of [119], and we have that the function
\begin{equation*}
w(t):= \left\{
\begin{array}{ll}
u_0 & {\mbox{if }} t\in [0,t_0], \\
Kt^{-\frac{\alpha}{\gamma}} & {\mbox{if }}t\geq t_0,
\end{array}
\right.
\end{equation*}
with $K:=u_0t_0^{\frac{\alpha}{\gamma}}$, is a supersolution of $
\partial_t^{\alpha} w(t) = -\nu w^{\gamma}(t)$
as long as
\begin{equation*}
t_0 \geq \dfrac{u_0^{1-\gamma}}{\nu} \left(\frac{2^{\alpha}}{\Gamma(1-\alpha)} + \frac{\alpha}{\gamma} \frac{2^{\alpha + \frac{\alpha}{\gamma}}}{\Gamma(2-\alpha)} \right).
\end{equation*}
We claim that $\partial_t w(t) \geq -\nu w^{\gamma} (t)$ for every $t>t_0$ (for $t\in(0,t_0)$ this is obvious, since there $\partial_t w(t)=0\ge -\nu w^{\gamma}(t)$).
To prove this, it is equivalent to check that
\begin{equation*}
\frac{\alpha}{\gamma} u_0 \, t_0^{\frac{\alpha}{\gamma}} \, t^{-\frac{\alpha}{\gamma}-1} \leq \nu \, u_0^{\gamma} \, t_0^{{\alpha}} \, t^{-\alpha},
\end{equation*}
which is in turn equivalent to
\begin{equation*}
\frac{\alpha}{\gamma \, \nu} u_0^{1-\gamma} \, t_0^{ \frac{\alpha}{\gamma} -\alpha} \leq t^{1+\frac{\alpha}{\gamma} -\alpha},
\end{equation*}
and the latter inequality holds for every $t\geq t_0$ if
$$t_0 \geq \max \left\{ 1, \frac{\alpha}{\gamma \nu} u_0^{1-\gamma} \right\}. $$
Therefore for $t_0$ big enough we have that $w(t)$ is a supersolution of the equation $(\lambda_1 \partial_t^{\alpha} + \lambda_2 \partial_t) v(t) = -\nu v^{\gamma}(t)$. Also, $w(t)$ satisfies
\begin{equation*}
w(t)\leq \frac{c}{1+t^{\frac{\alpha}{\gamma}}}
\end{equation*}
for some $c>0$ depending only on $\nu,\ \gamma, \ \alpha$ and $w(0)$. Hence by the comparison principle in Lemma <ref>, we infer that $v(t) \leq w(t)$, which
completes the proof of the desired result in (<ref>).
The proof is identical to the one of Theorem <ref>, apart from the construction of the supersolution
(and from the use of the comparison principle in Lemma <ref>
rather than in Lemma <ref>). Our aim is now to find a supersolution to the equation (<ref>) in the case $\lambda_1=0$, that we can write as
\begin{equation}\label{ch3CAS1}
v'(t) = -\frac{1}{C}v^{\gamma}(t)
\end{equation}
where $C$ is the constant given in the hypothesis. To construct this supersolution,
we distinguish the cases $0<\gamma \leq 1$
and $\gamma>1$.
We define
\begin{equation}\label{ch379}
w_0:=\Vert u_0(\cdot) \Vert_{L^{s} (\Omega) },
\end{equation}
\begin{equation}
t_0:=\left\{
\begin{matrix}
0 & {\mbox{if }}\gamma=1,\\ \max\left\{0, \ \frac{C}{1-\gamma}(w_0^{1-\gamma}-1) \right\}& {\mbox{if }}0<\gamma<1,
\end{matrix}
\right.\end{equation}
and
\begin{equation}\label{ch3theta1}
\theta_0:= \left(w_0^{1-\gamma}-\dfrac{(1-\gamma)}{C}t_0\right)^{\frac{1}{1-\gamma}}
\end{equation}
for $0<\gamma<1$, while $\theta_0:=w_0$ for $\gamma=1$ (in which case $t_0=0$).
Notice that, for $0<\gamma<1$
\begin{equation}\label{ch3theta}
\theta_0 \leq 1.
\end{equation}
In fact, by the definition of $t_0$,
\begin{equation*}
\frac{C}{1-\gamma} (w_0^{1-\gamma}-1) \leq t_0,
\end{equation*}
which implies that
\begin{equation*}
w_0^{1-\gamma} - \frac{(1-\gamma)}{C}t_0 \leq 1,
\end{equation*}
and that proves (<ref>).
Then, we see that the function
\begin{equation}\label{ch397}
w(t):= \left\{
\begin{array}{lr}
\left(w_0^{1-\gamma}-\dfrac{(1-\gamma)t}{C} \right)^{\frac{1}{1-\gamma}}, & {\mbox{if }}t\in[0,t_0] \\
\theta_0 \,e^{\frac{t_0-t}{C}}, & {\mbox{if }}t\in(t_0, +\infty)
\end{array}
\right.
\end{equation}
is a continuous and Lipschitz function; moreover, it is a solution of (<ref>)
in the case $\gamma=1$ and a supersolution of (<ref>) in the case $0 <\gamma<1$. Indeed, to check this, we observe that, for $t\in[0, t_0]$,
\begin{eqnarray*}
&& \hspace{-1em} w'(t)+\frac1{C}w^{\gamma}(t) \\
&& \hspace{3em} = -\dfrac{1}{C}\left( w_0^{1-\gamma} -\dfrac{(1-\gamma)t}{C} \right)^{\frac{\gamma}{1-\gamma}} + \dfrac1{C}\left( w_0^{1-\gamma} -\dfrac{(1-\gamma)t}{C} \right)^{\frac{\gamma}{1-\gamma}} \\
&& \hspace{3em} =0,
\end{eqnarray*}
and, for all $t>t_0$,
\begin{eqnarray*}
&& C\left( w'(t)+\frac1{C}w^\gamma(t)\right)=
-\theta_0 e^{\frac{(t_0-t)}{C}}
+\theta_0^\gamma e^{\frac{\gamma(t_0-t)}{C}}=
\theta_0^\gamma e^{\frac{\gamma(t_0-t)}{C}}\left(1
-\theta_0^{1-\gamma} e^{\frac{(1-\gamma)(t_0-t)}{C}}\right)\\&&\qquad\ge
\theta_0^\gamma e^{\frac{\gamma(t_0-t)}{C}}\left(1
-\theta_0^{1-\gamma} \right)\ge0,
\end{eqnarray*}
where the inequality holds thanks to (<ref>).
Notice also that the function $w$ is Lipschitz, since it is piecewise differentiable with bounded derivative and it is continuous at the point $t=t_0$ thanks to the definition of $\theta_0$ given in (<ref>).
These observations establish
the desired supersolution properties for the
function in (<ref>) for $0<\gamma\le1$.
From this and the comparison result
in Lemma <ref>, used here with $w(t)$ and $v(t):= \Vert u(\cdot, t) \Vert_{L^{s} (\Omega) } $, we obtain that
$v(t)\le w(t)$ for any $t\ge0$, and in particular,
\begin{equation}\label{ch399}
\Vert u(\cdot, t) \Vert_{L^{s} (\Omega) }\le
K e^{-\frac{t}{C}}
\qquad{\mbox{for any~$t>t_0$}}
\end{equation}
for $K:= \theta_0 e^{\frac{t_0}{C}}$.
This proves (<ref>).
Now we deal with the case $\gamma>1$. In this case, we set
$$ w_0:=\max \left\{\Vert u_0(\cdot) \Vert_{L^{s} (\Omega) }, \Big( \frac{C}{\gamma -1} \Big)^{\frac{1}{\gamma-1}} \right\}. $$
Then the function
\begin{equation}\label{ch3992}
w(t):= \left\{
\begin{array}{lr}
w_0 , & {\mbox{if }}t\in[0,1] \\
w_0 t^{-\frac{1}{\gamma-1}}, & {\mbox{if }}t>1
\end{array}
\right.
\end{equation}
is a supersolution of (<ref>). Indeed, if $t>1$,
\begin{eqnarray*}
C\left( w'(t)+\frac1{C}w^\gamma(t)\right)=
-\frac{C}{\gamma-1} w_0 t^{-\frac{\gamma}{\gamma-1}}
+w_0^\gamma t^{-\frac{\gamma}{\gamma-1}}=
w_0 t^{-\frac{\gamma}{\gamma-1}}\left(w_0^{\gamma-1}-\frac{C}{\gamma-1}
\right)\ge0,
\end{eqnarray*}
while, if $t\in(0,1)$,
$$ w'(t)+\frac1{C}w^\gamma(t)=\frac1{C}w^\gamma(t)\ge0.$$
This gives that the function in (<ref>) has the desired supersolution property and consequently we can apply the comparison result
in Lemma <ref> with $w(t)$ and $v(t):= \Vert u(\cdot, t) \Vert_{L^{s} (\Omega) } $. In this way, we obtain that for all $t\ge1$
$$ \Vert u(\cdot, t) \Vert_{L^{s} (\Omega) }\le
w_0 t^{-\frac{1}{\gamma-1}},$$
and so
the proof of (<ref>)
is complete.
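For the reader's convenience, we point out that the threshold $\left(\frac{C}{\gamma-1}\right)^{\frac{1}{\gamma-1}}$ appearing in the definition of $w_0$ is naturally suggested by the explicit solution of (<ref>): separating the variables, one finds
$$ v(t)=\left( v^{1-\gamma}(0)+\frac{(\gamma-1)\,t}{C}\right)^{-\frac{1}{\gamma-1}}\le \left(\frac{C}{(\gamma-1)\,t}\right)^{\frac{1}{\gamma-1}}\qquad{\mbox{for all }}t>0,$$
which exhibits the same decay rate $t^{-\frac{1}{\gamma-1}}$ as the supersolution in (<ref>).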
Now, we present the applications of the abstract results to the operators introduced in Section <ref>.
We start with the case of the fractional porous medium equation.
In order to prove Theorem <ref>, our strategy is to verify the
validity of inequality (<ref>) with $\gamma:=2$
for the porous medium operator,
which puts us in a position to exploit Theorems <ref>
and <ref>.
To this end, by elementary computations, up to changes of the positive constant $c$ depending on $n, \ s,$ and $ \sigma$, we see that
\begin{equation}\label{ch3110}
\begin{split}
\int_{\Omega} u^{s-1}(x,t)\mathcal{N}[u](x,t) \; dx &=\int_{\Omega} - u^{s-1} \nabla \cdot (u \nabla \mathcal{K} u)(x,t) \; dx \hspace{10em} \\
&= \int_{\Omega} (s-1) u^{s-1}(x,t) \nabla u(x,t) \cdot \nabla \mathcal{K} u (x,t) dx \\
&= \int_{\Omega} \nabla u^{s}(x,t) \cdot \nabla \mathcal{K} u (x,t) \,dx.
\end{split}
\end{equation}
Now, for $\e>0$, we define the regularized operator
\begin{equation}
\mathcal{K}_{\e}u= \int_{\Omega}c(n,\sigma) \frac{u(x-y,t)}{(|y|^2+\e^2)^{\frac{n-2\sigma}{2}}} dy,
\end{equation}
where $c(n,\sigma)$ is the same constant that appears in the definition of $\mathcal{K}$ in (<ref>).
Notice that, since $u$ is regular, we have
\begin{multline}\label{ch3conv}
\int_{\Omega} \nabla u^{s}(x,t) \cdot \nabla \mathcal{K}_{\e} u (x,t) \,dx \\
\leq \iint_{\R^n \times \R^n}
\frac{\chi_{\Omega}(x) \underset{x\in\Omega}{\sup} |\nabla u^s(x,t) | \, \chi_{\Omega}(x-y)\underset{(x-y)\in\Omega}{\sup} |\nabla u(x-y,t)|}{|y|^{n-2\sigma}} dxdy
\end{multline}
where $\chi$ is the characteristic function. Thus, thanks to (<ref>) we can apply the
Dominated Convergence Theorem and obtain
\begin{equation} \label{ch3limit}
\underset{\e\rightarrow 0}{\lim} \int_{\Omega} \nabla u^{s}(x,t) \cdot \nabla \mathcal{K}_{\e} u (x,t) \,dx = \int_{\Omega} \nabla u^{s}(x,t) \cdot \nabla \mathcal{K} u (x,t) \,dx.
\end{equation}
So, using (<ref>) and (<ref>), we have
\begin{equation}\label{ch3lim}
\begin{split}
\int_{\Omega} u^{s-1}(x,t)\mathcal{N}[u](x,t) \; dx &=\underset{\e\rightarrow 0}{\lim} \int_{\Omega} \nabla u^{s}(x,t) \cdot \nabla \mathcal{K}_{\e} u (x,t) \,dx \\
&=\underset{\e\rightarrow 0}{\lim} \int_{\Omega} \nabla u^{s}(x,t) \cdot \int_{\Omega} \frac{(-n+2\sigma)c(n,\sigma)u(y)}{(|x-y|^2+\e^2)^{\frac{n-2\sigma+2}{2}}} (x-y) dy \,dx \\
&=\underset{\e\rightarrow 0}{\lim} \iint_{\Omega\times\Omega} \dfrac{c(n,\sigma) u(y,t)\nabla u^{s}(x,t) \cdot (y-x) }{(|x-y|^2+\e^2)^{\frac{n-2\sigma+2}{2}}} \; dy \,dx,
\end{split}
\end{equation}
up to changes of the positive constant $c(n,\sigma)$.
Now we adapt a method that was introduced in [29] to obtain $L^p$ estimates. We exchange the order of integration and have that
\begin{equation*}
\begin{split}
\iint_{\R^n} c\, u(y,t) \dfrac{\nabla u^{s}(x,t) \cdot (y-x) }{(|x-y|^2+\e^2)^{\frac{n-2\sigma+2}{2}}} \; dx \,dy \\
& \hspace{-12em} = \iint_{\R^n} c\, u(y,t) \dfrac{\nabla (u^{s}(x,t)-u^s(y,t)) \cdot (y-x) }{(|x-y|^2+\e^2)^{\frac{n-2\sigma+2}{2}}} \; dx \,dy \\
& \hspace{-12em} = \iint_{\R^n} -c {(u^{s}(x,t)-u^s(y,t))u(y,t)} \Bigg[\dfrac{-n}{(|x-y|^2+\e^2)^{\frac{n-2\sigma+2}{2}}} \\ &
\hspace{-9em }+\frac{(n-2\sigma+2)|x-y|^2}{(|x-y|^2+\e^2)^{\frac{n-2\sigma+4}{2}}} \Bigg] dx \,dy \\
& \hspace{-12em} = \iint_{\R^n} c \frac{(u^{s}(x,t)-u^s(y,t))(u(x,t)-u(y,t))}2\Bigg[\dfrac{-n}{(|x-y|^2+\e^2)^{\frac{n-2\sigma+2}{2}}} \\ &
\hspace{-9em }+\frac{(n-2\sigma+2)|x-y|^2}{(|x-y|^2+\e^2)^{\frac{n-2\sigma+4}{2}}} \Bigg] dx \,dy.
\end{split}
\end{equation*}
We observe now that, since $(u^{s}(x,t)-u^s(y,t))(u(x,t)-u(y,t))$ is always nonnegative,
\begin{equation*}
\begin{split}
& \hspace{-3em}\iint_{\R^n} c \frac{(u^{s}(x,t)-u^s(y,t))(u(x,t)-u(y,t))}2\Bigg[\dfrac{-n}{(|x-y|^2+\e^2)^{\frac{n-2\sigma+2}{2}}} \\
&+\frac{(n-2\sigma+2)|x-y|^2}{(|x-y|^2+\e^2)^{\frac{n-2\sigma+4}{2}}} \Bigg] dx \,dy \\
& \leq \iint_{\R^n} c \frac{(u^{s}(x,t)-u^s(y,t))(u(x,t)-u(y,t))(2-2\sigma)}{2|x-y|^{n+2(1-\sigma)}} dx \,dy.
\end{split}
\end{equation*}
Thus, again by the Dominated Convergence Theorem, we can pass to the limit in (<ref>) and obtain
\begin{equation}\label{ch3111}
\begin{split}
&\hspace{-1.5em}\int_{\Omega} u^{s-1}(x,t)\mathcal{N}[u](x,t) \; dx \\
& \hspace{1.5em}=\iint_{\R^n} c \frac{(u^{s}(x,t)-u^s(y,t))(u(x,t)-u(y,t))(2-2\sigma)}{2|x-y|^{n+2(1-\sigma)}} dx \,dy.
\end{split}
\end{equation}
Now, we define $v(x,t)=u^{\frac{s+1}{2}}(x,t)$. Then, by inequality (2.15) of [43] we have, for some $C>0$,
\begin{equation*}
C (u^s(x,t)-u^s(y,t))(u(x,t)-u(y,t) ) \geq |v(x,t)-v(y,t)|^2.
\end{equation*}
From this, (<ref>) and (<ref>) we obtain that
\begin{equation}\label{ch3112}
\begin{split}&
C \int_{\Omega} u^{s-1}(x,t)\mathcal{N}[u](x,t) \; dx \\&\qquad=
\iint_{\R^n} c\,C\, \dfrac{2-2\sigma}{2} \dfrac{(u^{s}(x,t)-u^s(y,t))(u(x,t)-u(y,t))}{|x-y|^{n+2(1-\sigma)}} \;dx \,dy\\&\qquad
\ge
\iint_{\R^n} c\, \dfrac{2-2\sigma}{2} \dfrac{|v(x,t)-v(y,t)|^2}{|x-y|^{n+2(1-\sigma)}} \;dx \,dy.\end{split}\end{equation}
Now we set $z:=1-\sigma$; then $z\in(0,1)$ and $n\geq 2z$. Let also
$$p_z:= \dfrac{2n}{n-2z} \geq 2. $$
Then for any $q\in [2, p_z]$ we can apply the fractional Gagliardo-Sobolev-Slobodetskiĭ inequality (compare [40], Theorem 6.5) and obtain
\begin{equation}\label{ch3113}
\left( \int_{\Omega} u^{\frac{s+1}{2}q} \right)^{\frac{2}{q}}
= \Vert v \Vert_{L^{q} (\Omega) }^2
\leq C \iint \dfrac{|v(x,t)-v(y,t)|^2}{|x-y|^{n+2z}} \; dxdy
\end{equation}
with $C$ depending only on $\Omega,\ n,\ z$ and $q$.
In particular, choosing $q=2$, we deduce from (<ref>) that
\begin{equation}\label{ch3091a}
\Vert u(\cdot, t) \Vert_{L^{s+1}(\Omega) }^{s+1}
\leq C \iint \dfrac{|v(x,t)-v(y,t)|^2}{|x-y|^{n+2z}} \; dxdy.
\end{equation}
On the other hand, using the Hölder inequality, one has that
\begin{equation*}
\Vert u(\cdot, t) \Vert_{L^{s}(\Omega) }^{s+1} \leq \Vert u(\cdot, t) \Vert_{L^{s+1}(\Omega)}^{s+1} |\Omega|^{1/s}.
\end{equation*}
Combining this and (<ref>), we obtain
\begin{equation*}
\Vert u(\cdot, t) \Vert_{L^{s}(\Omega) }^{s+1}
\leq C \iint \dfrac{|v(x,t)-v(y,t)|^2}{|x-y|^{n+2z}} \; dxdy,
\end{equation*}
up to renaming $C>0$.
This and (<ref>) establish the validity
of (<ref>) for $\gamma:=2$, as desired.
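We also remark that, since here $\gamma=2$, in the case of the classical time-derivative the supersolution constructed in the proof of Theorem <ref> provides for the fractional porous medium equation a decay of order
$$ t^{-\frac{1}{\gamma-1}}=t^{-1},$$
consistently with the discussion at the beginning of this section.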
Now we focus on the Kirchhoff equation, first dealing with the case
of classical derivatives.
Our objective here is to verify the
validity of inequality (<ref>) for suitable values
of $\gamma$, and then make use of Theorems <ref>
and <ref>.
First we present the proof for the non-degenerate case, which occurs when $m(\xi)$ has a positive minimum. Setting $m_0:=\min m(\xi)$, we have
\begin{equation}\label{ch3331}
m \left(\Vert \nabla u \Vert_{L^2(\Omega)}\right) \int_{\Omega} |u|^{s-2}u (-\Delta)u \; dx \geq m_0 \int_{\Omega} |u|^{s-2}u (-\Delta)u \; dx.
\end{equation}
In Theorem 1.2 of [43], the case of the Laplacian was considered:
there it was found that, for some $C>0$ depending on $s,\ n,\ \Omega$,
\begin{equation*}
\int_{\Omega} |u|^{s-2}u (-\Delta)u \; dx \geq C \Vert u \Vert_{L^{s} (\Omega) }^s.
\end{equation*}
Combining this with (<ref>)
we see that (<ref>) holds true for $\gamma=1$ and $C> 0$ depending on $s,\ n,\ \Omega, \ \min m(\xi)$.
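We note explicitly the decays corresponding to this value of $\gamma$: with $\gamma=1$, the supersolutions constructed in the proofs of Theorems <ref> and <ref> provide, respectively, a decay of order $t^{-\frac{\alpha}{\gamma}}=t^{-\alpha}$ for the mixed time-derivative and an exponential decay of the type
$$ \Vert u(\cdot,t)\Vert_{L^s(\Omega)}\le K\,e^{-\frac{t}{C}}$$
for the classical one.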
Now we deal with
the degenerate case, which requires the use of
finer estimates. In this case, we have that
\begin{equation}\label{ch30909}\begin{split}
& \hspace{-3em} b \Vert \nabla u \Vert_{L^2(\Omega)}^2 \int_{\Omega} |u(x,t)|^{s-2} u(x,t)(-\Delta)u (x,t) \; dx\\
& \hspace{3em} = b \Vert \nabla u \Vert_{L^2(\Omega)}^2 \int_{\Omega} |u(x,t)|^{s-2} |\nabla u (x,t)|^2 \; dx \\
& \hspace{3em} \geq C \left(\int_{\Omega} |u(x,t)|^{\frac{s-2}{2}} |\nabla u(x,t)|^2 \; dx\right)^2,
\end{split}
\end{equation}
where the first identity follows from an integration by parts and the last inequality holds in view of the Cauchy-Schwarz inequality.
Now define
\begin{equation}\label{ch3992k}
v:=|u|^{\frac{s-2}{4}}\,u.
\end{equation}
We have that
$$ |\nabla v|^2 = \left( \frac{s+2}{4} \right)^2 |u|^{\frac{s-2}{2}} |\nabla u|^2. $$
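For completeness, we verify this identity: since $\frac{d}{dr}\big(|r|^{\frac{s-2}{4}}\,r\big)=\frac{s+2}{4}\,|r|^{\frac{s-2}{4}}$ for every $r\ne0$, the chain rule gives, almost everywhere,
$$ \nabla v=\frac{s+2}{4}\,|u|^{\frac{s-2}{4}}\,\nabla u,$$
where $v$ denotes $|u|^{\frac{s-2}{4}}\,u$, and the claimed identity follows by taking squares.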
This and (<ref>) give that
\begin{equation}\label{ch3781-b}
\begin{split}
&\left( \frac{s+2}{4} \right)^4
b \Vert \nabla u \Vert_{L^2(\Omega)}^2 \int_{\Omega} |u(x,t)|^{s-2} u(x,t)(-\Delta)u (x,t) \; dx\\
\ge\,&
C \left(\int_{\Omega} \left( \frac{s+2}{4} \right)^2|u(x,t)|^{\frac{s-2}{2}} |\nabla u(x,t)|^2 \; dx\right)^2\\
=\,&C \left(\int_{\Omega} |\nabla v(x,t)|^2 \; dx\right)^2.
\end{split}\end{equation}
We now use Sobolev injections (in the form
given, for instance, in formula (2.9) of [43]), remembering that $v$ is zero outside $\Omega$. The inequality
\begin{equation}\label{ch3781-a}
\Vert \nabla v \Vert_{L^2(\Omega)} \geq C \Vert v \Vert_{L^q(\Omega)} \end{equation}
holds
\begin{equation}\label{ch3PER781-a}
{\mbox{for all $q\geq 1$ if $n\in\{1, 2\}$,
and for all~$q\in\left[1,\displaystyle\frac{2n}{n-2}\right]$ if $n>2$.}}\end{equation}
Therefore, we set
\begin{equation}\label{ch3PER781-b} q:=\frac{4s}{s+2}.\end{equation}
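We observe that this choice of $q$ always fulfills the lower bound $q\geq2$: indeed, recalling that here $s\geq2$,
$$ q-2=\frac{4s-2(s+2)}{s+2}=\frac{2(s-2)}{s+2}\geq0.$$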
Recalling the ranges of $s$ in claim (iii) of Theorem <ref>,
when $n>2$ we have that
$$ (n-2) q-2n=\frac{4s(n-2)}{s+2}-2n=\frac{2}{s+2}\,\big(
s(n-4)-2n\big)\le0,$$
which shows that the definition in (<ref>)
fulfills the conditions in (<ref>), and so (<ref>)
is valid in this setting.
Hence, making use of (<ref>), (<ref>) and (<ref>), up to renaming $C$ line after line, we deduce that
\begin{eqnarray*}
&&b \Vert \nabla u \Vert_{L^2(\Omega)}^2 \int_{\Omega} |u(x,t)|^{s-2} u(x,t)(-\Delta)u (x,t) \; dx\\
&\ge& C \|\nabla v(\cdot,t)\|^4_{L^2(\Omega)}
\ge C\|v\|_{L^q(\Omega)}^4= C\|u\|_{L^s(\Omega)}^{{s+2}}.
\end{eqnarray*}
These observations imply that
condition (<ref>) is satisfied here
with $\gamma=3$ and $C$ depending on $s,\ m(\xi)$ and $\Omega$.
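Also in this case it is worth spelling out the corresponding decays: with $\gamma=3$, the supersolutions in the proofs of Theorems <ref> and <ref> yield decays of order
$$ t^{-\frac{\alpha}{\gamma}}=t^{-\frac{\alpha}{3}}\qquad{\mbox{and}}\qquad t^{-\frac{1}{\gamma-1}}=t^{-\frac12},$$
for the mixed and for the classical time-derivative, respectively.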
Now we deal with the case of the fractional Kirchhoff equation.
As in the case of classical space-derivatives
dealt with in the proof of
Theorem <ref>,
a quick proof for the non-degenerate case is available. Indeed,
\begin{equation*}
\int_{\Omega} |u|^{s-2} u \mathcal{N}[u] \, dx =
m \left(\Vert \nabla u\Vert_{L^{2} (\Omega) }^2 \right) \int_{\Omega} |u|^{s-2} u (-\Delta)^{\sigma}u \, dx \geq \int_{\Omega} m_0 |u|^{s-2}u(-\Delta)^{\sigma}u \, dx
\end{equation*}
and in [43] it was shown that
\begin{equation*}
\int_{\Omega} m_0 |u|^{s-2}u(-\Delta)^{\sigma}u \, dx \geq \Vert u \Vert_{L^{s} (\Omega) }^s.
\end{equation*}
Thus, the validity of inequality (<ref>) with $\gamma=1$
is established in this case.
We now deal with the degenerate case.
First of all, we have that
\begin{equation*}
-\dfrac{1}{2}\int_{\R^n}\frac{u(x+y,t) + u(x-y,t) -2u(x,t) }{|y|^{n+2\sigma}}
\; dy = \int_{\R^n} \frac{ u(x,t)-u(y,t) }{|x-y|^{n+2\sigma}} \; dy,
\end{equation*}
where the latter integral is intended in the principal value sense.
Next, we have that
\begin{equation*}
\begin{split}
&\int_{\R^n} \left( \int_{\R^n} \frac{u(x,t)-u(y,t)}{|x-y|^{n+2\sigma}}
\; dy \right) |u(x,t)|^{s-2}u(x,t) \; dx \\
=\,& \frac{1}{2} \iint_{\R^{2n}} \Bigg[
\frac{
(u(x,t)-u(y,t)) \,|u(x,t)|^{s-2}u(x,t)}{|x-y|^{n+2\sigma}} + \frac{(u(y,t)-u(x,t)) \,|u(y,t)|^{s-2}u(y,t)
}{|x-y|^{n+2\sigma}} \Bigg] dxdy\\
=\,& \frac{1}{2} \iint_{\R^{2n}}\frac{
(u(x,t)-u(y,t)) (|u(x,t)|^{s-2}u(x,t) -|u(y,t)|^{s-2}u(y,t))}{|x-y|^{n+2\sigma}}
\,dx\,dy.
\end{split}
\end{equation*}
We fix
\begin{equation}\label{ch3pge2si}
p\in[2, +\infty)\end{equation}
and we define
\begin{equation}\label{ch318}
r:= \frac{s+2}{2p}\qquad{\mbox{and}}\qquad v:=|u|^{r}.
\end{equation}
We claim that
\begin{equation} \label{ch3kirch:claim1}
\begin{split}
&|v(x,t)-v(y,t)|^p \\
& \hspace{2em}\leq c_0 |u(x,t)-u(y,t)|\sqrt{(u(x,t)-u(y,t)) (|u(x,t)|^{s-2}u(x,t) -|u(y,t)|^{s-2}u(y,t))}
\end{split}
\end{equation}
for some $c_0> 0$, independent of $u$. To prove this, we first observe
that the radicand in (<ref>) is well defined, since, for every $a$, $b\in\R$ we have that
\begin{equation}\label{ch3DO1}
(a-b) (|a|^{s-2}a -|b|^{s-2}b)\ge0.
\end{equation}
To check this, up to exchanging $a$ and $b$, we can suppose that $a\ge b$.
Then, we have three cases to take into account: either $a\ge b\ge0$,
or $a\ge 0\ge b$, or $0\ge a\ge b$.
If $a\ge b\ge0$, we have that
$$ |a|^{s-2}a -|b|^{s-2}b= a^{s-1} -b^{s-1}\ge0,$$
and so (<ref>) holds true. If instead $a\ge 0\ge b$, we have that
$$ |a|^{s-2}a -|b|^{s-2}b=|a|^{s-1} +|b|^{s-1}\ge0,$$
which gives (<ref>)
in this case. Finally, if $0\ge a\ge b$,
$$ |a|^{s-2}a -|b|^{s-2}b=-|a|^{s-1} +|b|^{s-1}\ge0,$$
again since $-|a|=a\ge b=-|b|$, thus completing the proof of (<ref>).
Then, by (<ref>), we have that (<ref>)
is equivalent to
\begin{equation}\label{ch3kirch:claimE}
|v(x,t)-v(y,t)|^{2p} \leq c_1 (u(x,t)-u(y,t))^3{ (|u(x,t)|^{s-2}u(x,t) -|u(y,t)|^{s-2}u(y,t))}.
\end{equation}
We also note that when $u(x,t)=u(y,t)$ the inequality in (<ref>)
is trivially satisfied.
Hence, without loss of generality we can suppose that
\begin{equation}\label{ch3WAG01la}
{\mbox{$|u(x,t)|>|u(y,t)|$,\; for fixed $x,\ y \in \R^n$.}}\end{equation}
We define
the function
\begin{equation}\label{ch3EA2}
(-1,1)\ni\lambda\mapsto g (\lambda)=\frac{(1- |\lambda|^{\frac{s+2}{2p}})^{2p}}{(1-\lambda)^3(1-|\lambda|^{s-2}\lambda)}
\end{equation}
and we claim that
\begin{equation}\label{ch3EA20}
\sup_{(-1,1)}g(\lambda) < +\infty.
\end{equation}
To this end, we point out that
$g$ is regular
for all $\lambda\in (-1,1)$, so, to establish (<ref>), we only have to study the
limits of $g$ for $\lambda \rightarrow -1^+$ and $\lambda \rightarrow 1^-$.
When $\lambda \rightarrow -1^+$, the limit is immediate and equals $0$,
since the numerator tends to $0$ while the denominator tends to $16$. On the other hand,
when $\lambda \rightarrow 1^-$, we see that
\begin{equation*}
\begin{split}
\underset{\lambda\rightarrow 1^-}{\lim} g(\lambda) &= \underset{\varepsilon\rightarrow 0^+}{\lim} \frac{(1- (1-\varepsilon)^{\frac{s+2}{2p}})^{2p}}{(1-(1-\varepsilon))^3(1-(1-\varepsilon)^{s-1})} \\
&= \underset{\varepsilon\rightarrow 0^+}{\lim} \frac{\left(
\frac{s+2}{2p}\varepsilon+O(\varepsilon^2)\right)^{2p}}{\varepsilon^3((s-1)\varepsilon + O(\varepsilon^2))} \\
&=\underset{\varepsilon\rightarrow 0^+}{\lim}
\frac{\varepsilon^{2p-4}\,\left(
\frac{s+2}{2p}+O(\varepsilon)\right)^{2p}}{(s-1 + O(\varepsilon))},
\end{split}
\end{equation*}
which is finite, thanks to (<ref>).
Then (<ref>) holds true, as desired.
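As a numerical sanity check of this boundedness claim (an illustrative sketch: the parameter values $s=4$, $p=3$ are arbitrary samples satisfying $p\ge2$, not values from the text):

```python
import numpy as np

def g(lam, s, p):
    # the auxiliary function defined above on (-1, 1)
    num = (1 - np.abs(lam) ** ((s + 2) / (2 * p))) ** (2 * p)
    den = (1 - lam) ** 3 * (1 - np.abs(lam) ** (s - 2) * lam)
    return num / den

s, p = 4.0, 3.0  # arbitrary sample parameters (assumption: p >= 2)
lam = np.linspace(-0.99999, 0.99999, 200_001)
vals = g(lam, s, p)
print(np.all(np.isfinite(vals)), float(vals.max()))
```

The grid stops just short of the endpoints, where the limits computed above guarantee finiteness.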
Then, using (<ref>) with $\lambda:=\frac{b}{a}$, we have that
\begin{equation}\label{ch3WAG01la2}
\begin{split}
&{\mbox{for any~$a$, $b\in\R$ with $|a|>|b|$,}}\\
&\qquad\frac{\left(
\left|a\right|^{\frac{s+2}{2p}}
- \left|b\right|^{\frac{s+2}{2p}}\right)^{2p}}{
\left(a-b\right)^3\left(
\left|a\right|^{s-2}a
-\left|b\right|^{s-2}b\right)}
\;=\;
\frac{|a|^{s+2}}{|a|^{s-2}\;a^4}\cdot \frac{\left(1- \left|\frac{b}{a}\right|^{\frac{s+2}{2p}}\right)^{2p}}{
\left(1-\frac{b}{a}\right)^3\left(1-\left|\frac{b}{a}\right|^{s-2}
\frac{b}{a}\right)}\\&\qquad
\; =\;
\frac{(1- |\lambda|^{\frac{s+2}{2p}})^{2p}}{(1-\lambda)^3\left(1-|\lambda|^{s-2}\lambda\right)}
\; =\;g(\lambda)\le C,
\end{split}\end{equation}
for some $C>0$. Then, in view of (<ref>),
we can exploit (<ref>)
with $a:=u(x,t)$ and $b:=u(y,t)$, from which we obtain that
\begin{eqnarray*}&&
\left|
\left|u(x,t)\right|^{\frac{s+2}{2p}}
- \left|u(y,t)\right|^{\frac{s+2}{2p}}\right|^{2p}\;=\;
\left(
\left|u(x,t)\right|^{\frac{s+2}{2p}}
- \left|u(y,t)\right|^{\frac{s+2}{2p}}\right)^{2p}\\ &&\qquad\;\leq \;C\,
\left(u(x,t)-u(y,t)\right)^3\left(
\left|u(x,t)\right|^{s-2}u(x,t)
-\left|u(y,t)\right|^{s-2}u(y,t)\right).
\end{eqnarray*}
This and (<ref>)
imply (<ref>), as desired.
Now, fixed
$p$ as in (<ref>),
we set
\begin{equation}\label{ch38iswdjc8383}
z:=\frac{ 2\sigma}{p}\in(0,\sigma]\subset(0,1).\end{equation}
We apply the Gagliardo-Sobolev-Slobodetskiĭ fractional embedding (for instance, in the version given
in formula (2.18) of [43]) to $v$.
In this way,
\begin{equation}\label{ch3202020}
{\mbox{for all $q\in[1,+\infty)$ when $n\le zp$, and for all $q\in\left[1,
\displaystyle\dfrac{np}{n-zp}\right]$ when~$n>zp$,}}
\end{equation}
we have that
\begin{equation}\label{ch320}\begin{split}
\Vert u(\cdot,t) \Vert_{L^{\frac{(s+2)q}{2p}}(\Omega)}^{\frac{s+2}2}=
\Vert v(\cdot,t) \Vert_{L^q(\Omega)}^p &\,\leq C \iint_{\R^{2n}}
\frac{|v(x,t)-v(y,t)|^p}{|x-y|^{n+zp}} dxdy\\&=
C \iint_{\R^{2n}}
\frac{|v(x,t)-v(y,t)|^p}{|x-y|^{n+2\sigma}} dxdy,\end{split}
\end{equation}
where the first equality comes from (<ref>) and the
latter equality is a consequence of (<ref>).
Now we choose
\begin{equation}\label{ch3LAq}
p:=\max\left\{ 2,\,\frac{s+2}{2}\right\}\qquad{\mbox{and}}\qquad
q:=\frac{2ps}{s+2}.
\end{equation}
Notice that condition (<ref>) is fulfilled in this setting.
Furthermore, recalling (<ref>) and
the assumptions in point (iii)
of Theorem <ref>, when $n>2\sigma=zp$,
we have
\begin{eqnarray*}
&&\frac{2(n-2\sigma)sp}{s+2}-np=
\frac{p}{s+2}\,\big(
2(n-2\sigma)s-n(s+2)
\big)\\
&&\qquad=\frac{p}{s+2}\,\big(
(n-4\sigma)s-2n
\big)\le0.
\end{eqnarray*}
As a consequence, we have that condition (<ref>)
is fulfilled in the setting prescribed by (<ref>),
hence we can exploit (<ref>) in this framework.
Then, from (<ref>) we have that
$$ \frac{(s+2)q}{2p}=s,$$
and so (<ref>) gives that
$$ \Vert u(\cdot,t) \Vert_{L^{s}(\Omega)}^{\frac{s+2}2}\le
C \iint_{\R^{2n}}
\frac{|v(x,t)-v(y,t)|^p}{|x-y|^{n+2\sigma}} dxdy.$$
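The exponent bookkeeping in this choice of $p$ and $q$ can be double-checked symbolically; the sketch below keeps $p$ as a free positive symbol and takes $q=2ps/(s+2)$, which is the value forced by the identity $(s+2)q/(2p)=s$ displayed above:

```python
import sympy as sp

n, s, sigma, p = sp.symbols('n s sigma p', positive=True)
q = 2 * p * s / (s + 2)  # forced by (s+2)q/(2p) = s

# the Lebesgue exponent of u produced by the embedding is indeed s
assert sp.simplify((s + 2) * q / (2 * p) - s) == 0

# the sign condition q <= n*p/(n - 2*sigma) reduces, after clearing
# denominators, to the displayed quantity being nonpositive:
diff = 2 * (n - 2 * sigma) * s * p / (s + 2) - n * p
assert sp.simplify(diff - p * ((n - 4 * sigma) * s - 2 * n) / (s + 2)) == 0
print("exponent identities verified")
```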
Hence, recalling (<ref>), up to renaming $C>0$,
we have that
\begin{equation}\label{ch3COM234520Aiekwd}
\begin{split} &\Vert u(\cdot,t) \Vert_{L^{s}(\Omega)}^{s+2}\\ \le\,&
C\left( \iint_{\R^{2n}}
\frac{|u(x,t)-u(y,t)|\sqrt{(u(x,t)-u(y,t))
(|u(x,t)|^{s-2}u(x,t) -|u(y,t)|^{s-2}u(y,t))}}{|x-y|^{n+2\sigma}} \,dx\,dy\right)^2\\
\le\,& C
\iint_{\R^{2n}}
\frac{|u(x,t)-u(y,t)|^2}{|x-y|^{n+2\sigma}} \,dx\,dy\\
&\hspace{2em}\times
\iint_{\R^{2n}}
\frac{{(u(x,t)-u(y,t))
(|u(x,t)|^{s-2}u(x,t) -|u(y,t)|^{s-2}u(y,t))}}{|x-y|^{n+2\sigma}} \,dx\,dy,
\end{split}\end{equation}
where in the last step we used the Cauchy--Schwarz inequality.
Notice also that, in the degenerate case, we deduce from (<ref>)
and (<ref>) that
\begin{equation}\label{ch3ghUAJ:a9ok01}
\begin{split}
&\int_{\R^n} {\mathcal{N}}[u](x,t)\,|u(x,t)|^{s-2}\,u(x,t)\,dx\\
=\;&-\frac{M_u}{2}\iint_{\R^{2n}}
\Big( u(x+y,t) + u(x-y,t) -2u(x,t) \Big)
\,|u(x,t)|^{s-2}\,u(x,t)\,\frac{dx\,dy}{|y|^{n+2\sigma}}\\
=\;&-M_u\iint_{\R^{2n}}
\big( u(y,t)-u(x,t) \big)
\,|u(x,t)|^{s-2}\,u(x,t)\,\frac{dx\,dy}{|x-y|^{n+2\sigma}}\\
=\;&M_u\iint_{\R^{2n}}
\big( u(x,t)-u(y,t) \big)
\,|u(x,t)|^{s-2}\,u(x,t)\,\frac{dx\,dy}{|x-y|^{n+2\sigma}}\\
=\;&\frac{M_u}{2}\iint_{\R^{2n}}
\big( u(x,t)-u(y,t) \big)
\,\big(|u(x,t)|^{s-2}\,u(x,t)-|u(y,t)|^{s-2}\,u(y,t)\big)\,\frac{dx\,dy}{|x-y|^{n+2\sigma}}.
\end{split}\end{equation}
Moreover,
\begin{equation}\label{ch3ghUAJ:a9ok02}\begin{split} M_u\,&:=
M\left( \iint_{\R^{2n}}\frac{ |u(x,t)-u(y,t)|^2 }{|x-y|^{n+2\sigma}}
\,dx\,dy \right)\\&\ge
b\,\iint_{\R^{2n}}\frac{ |u(x,t)-u(y,t)|^2 }{|x-y|^{n+2\sigma}}\,dx\,dy,
\end{split}\end{equation}
with $b>0$.
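The symmetrization step used in the chain of identities above (swapping the roles of $x$ and $y$ under a symmetric kernel) has an elementary discrete analogue, sketched here with a random symmetric matrix playing the role of the kernel $|x-y|^{-n-2\sigma}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 60
u = rng.normal(size=m)      # stands for u(x_i)
phi = rng.normal(size=m)    # stands for |u|^{s-2} u evaluated at x_i
K = rng.random((m, m))
K = K + K.T                 # symmetric kernel: K_ij = K_ji

du = u[:, None] - u[None, :]
dphi = phi[:, None] - phi[None, :]
lhs = np.sum(K * du * phi[:, None])
rhs = 0.5 * np.sum(K * du * dphi)
print(abs(lhs - rhs))  # agreement up to rounding
```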
Then, from (<ref>)
and (<ref>),
\begin{eqnarray*}
&&\int_{\R^n} {\mathcal{N}}[u](x,t)\,|u(x,t)|^{s-2}\,u(x,t)\,dx
\ge \frac{b}{2}\,\iint_{\R^{2n}}\frac{ |u(x,t)-u(y,t)|^2 }{|x-y|^{n+2\sigma}}\,dx\,dy\\
&&\qquad\times
\iint_{\R^{2n}}
\big( u(x,t)-u(y,t) \big)
\,\big(|u(x,t)|^{s-2}\,u(x,t)-|u(y,t)|^{s-2}\,u(y,t)\big)\,\frac{dx\,dy}{|x-y|^{n+2\sigma}}.
\end{eqnarray*}
Comparing this with (<ref>), we conclude that
$$ \Vert u(\cdot,t) \Vert_{L^{s}(\Omega)}^{s+2}\le C\int_{\R^n} {\mathcal{N}}[u](x,t)\,|u(x,t)|^{s-2}\,u(x,t)\,dx,$$
up to renaming $C$.
This gives that hypothesis (<ref>) is fulfilled
in this case with $\gamma=3$.
Now we deal with the case of the magnetic operators.
We start with the case of classical space-derivatives.
For this, we exploit an elementary, but useful, inequality,
stated in the following auxiliary result:
Let $a$, $b\in\R$, and $\alpha$, $\beta$, $t\in\R^n$. Then,
\begin{equation}\label{ch3ST:00}
(a^2+b^2)\Big(|a t-\beta|^2 + |bt+\alpha|^2 \Big)\ge
|a\alpha+b\beta|^2.
\end{equation}
For any $t\in\R^n$, we define
\begin{equation} \label{ch3872wj2xz}
f(t):=(a^2+b^2)\Big(|a t-\beta|^2 + |bt+\alpha|^2 \Big)-
|a\alpha+b\beta|^2.
\end{equation}
We observe that
\begin{equation}\label{ch3ST:01}
\begin{split}
&(a^2+b^2)(\alpha^2+\beta^2) - |a\alpha+b\beta|^2\\ &=
a^2\alpha^2+a^2\beta^2+b^2\alpha^2+b^2\beta^2
- ( a^2\alpha^2+b^2\beta^2 +2ab\alpha\beta)
\\ &=
a^2\beta^2+b^2\alpha^2
- 2ab\alpha\beta\\
&= |a\beta-b\alpha|^2.
\end{split}\end{equation}
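This is a scalar instance of the Lagrange identity, which can be verified symbolically (a sketch, with $\alpha$ and $\beta$ taken scalar for simplicity):

```python
import sympy as sp

a, b, al, be = sp.symbols('a b alpha beta', real=True)
lhs = (a**2 + b**2) * (al**2 + be**2) - (a*al + b*be)**2
rhs = (a*be - b*al)**2
print(sp.expand(lhs - rhs))  # expands to 0
```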
Moreover, we point out that
\begin{equation}\label{ch3ST:02}
\lim_{|t|\to+\infty} f(t)=\left\{
\begin{matrix}
+\infty & {\mbox{ if }} a^2+b^2>0,\\
0 & {\mbox{ otherwise.}}
\end{matrix}
\right.
\end{equation}
Now we claim that
\begin{equation}\label{ch3ST:03}
f(t)\ge0
\end{equation}
for all $t\in\R^n$. To prove (<ref>) we argue by contradiction
and assume that
$$ \inf_{\R^n} f<0.$$
Then, in view of (<ref>) and (<ref>), we have that
\begin{equation}\label{ch3ST:04}
f(\bar t)=\inf_{\R^n} f<0,\end{equation}
for some $\bar t\in \R^n$. As a consequence,
$$ 0=\nabla f(\bar t)=
2(a^2+b^2)\Big(a(a \bar t-\beta) + b(b\bar t+\alpha) \Big)=
2(a^2+b^2)\Big((a^2+b^2) \bar t-a\beta+b\alpha \Big),$$
which implies that
$$ \bar t=
\frac{a\beta-b\alpha}{a^2+b^2}.$$
Thus, we substitute this information into (<ref>)
and we obtain that
\begin{eqnarray*}
f(\bar t) &=&
(a^2+b^2)\left(\left|
\frac{a^2\beta-ab\alpha}{a^2+b^2}-\beta\right|^2 +
\left|\frac{ab\beta-b^2\alpha}{a^2+b^2}+\alpha\right|^2 \right)-
|a\alpha+b\beta|^2\\
&=&(a^2+b^2)\left(\left|
\frac{b^2\beta+ab\alpha}{a^2+b^2}\right|^2 +
\left|\frac{ab\beta+a^2\alpha}{a^2+b^2}\right|^2 \right)-
|a\alpha+b\beta|^2\\
&=&(a^2+b^2)\left(b^2\left|
\frac{b\beta+a\alpha}{a^2+b^2}\right|^2 +
a^2\left|\frac{b\beta+a\alpha}{a^2+b^2}\right|^2 \right)-
|a\alpha+b\beta|^2\\
&=&(a^2+b^2)^2\left|
\frac{b\beta+a\alpha}{a^2+b^2}\right|^2 -
|a\alpha+b\beta|^2
\\&=&0.\end{eqnarray*}
This is in contradiction with (<ref>) and so it proves (<ref>),
which in turn implies (<ref>), as desired.
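In fact, with scalar $\alpha$, $\beta$ and $t$ (the vector case is analogous, working with norms), $f$ is a perfect square, $f(t)=\big((a^2+b^2)t-(a\beta-b\alpha)\big)^2$, which makes both $f\ge0$ and $f(\bar t)=0$ transparent; the symbolic check below is a sketch added here, not part of the original argument:

```python
import sympy as sp

a, b, al, be, t = sp.symbols('a b alpha beta t', real=True)
f = (a**2 + b**2) * ((a*t - be)**2 + (b*t + al)**2) - (a*al + b*be)**2
square = ((a**2 + b**2)*t - (a*be - b*al))**2
assert sp.expand(f - square) == 0          # f is a perfect square

tbar = (a*be - b*al) / (a**2 + b**2)       # the critical point found above
assert sp.simplify(f.subs(t, tbar)) == 0   # and f vanishes there
print("verified")
```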
With this, we are now in the position of completing the proof
of Theorem <ref> and obtain the desired decay estimates
for the classical magnetic operator.
We want to prove inequality (<ref>) for the classical magnetic operator in order to apply Theorem <ref>.
To this end,
we aim at proving that
\begin{equation}\label{ch3FU:MAGN}
\Re\big\{ \bar u{\mathcal{N}} u\big\}+|u|\Delta|u|\ge0.
\end{equation}
To check this, we observe[For an alternative proof based
on fractional arguments, see the forthcoming footnote <ref>.]
that we can make the computations
in the vicinity of a point $x$ for which $|u(x)|>0$.
Indeed, if (<ref>) holds true at $\{|u|>0\}$,
we can fix $\epsilon>0$ and consider the function $u_\epsilon:=u+\epsilon$.
In this way, $u_\epsilon(x)=\epsilon>0$, hence we can apply (<ref>)
to $u_\epsilon$ and conclude that
\begin{equation}\label{ch390:91}
\begin{split}
0 \,&\le \Re\big\{ \bar u_\epsilon(x){\mathcal{N}} u_\epsilon(x)\big\}+|u_\epsilon(x)|\Delta|u_\epsilon(x)|\\
=\,&\Re\big\{ (\bar u(x)+\epsilon){\mathcal{N}}u_\epsilon(x)\big\}+
|u(x)+\epsilon|\,\Delta|u(x)+\epsilon|.
\end{split}\end{equation}
Notice that, for any test function $\varphi\in C^\infty_0(\Omega)$,
we have that
$$ \lim_{\epsilon\to0}\int_\Omega
\Delta|u_\epsilon(y)|\,\varphi(y)\,dy=
\lim_{\epsilon\to0}\int_\Omega
|u_\epsilon(y)|\,\Delta\varphi(y)\,dy=
\int_\Omega
|u(y)|\,\Delta\varphi(y)\,dy,$$
and so (in the distributional sense)
$$ \lim_{\epsilon\to0} \Delta|u_\epsilon|=
\Delta|u|.$$
Hence, we can pass to the limit in (<ref>)
and obtain (<ref>).
Accordingly, to prove (<ref>), from now on we will
focus on the case in which $|u|>0$. We write $u=a+ib$ and
we observe that
\begin{equation}\label{ch3Bvah}
\begin{split}
&\Re \{ -\bar{u}(\nabla-iA)^2 u \} \\
=\;& \Re \left\{ -\bar{u}(\Delta u - |A|^2 u -iA \cdot \nabla u -\nabla \cdot (iAu) ) \right\} \\
=\;& \Re \left\{ -\bar{u} \Delta u + |A|^2 |u|^2 +2\bar{u}iA \cdot \nabla u +i(\nabla \cdot A)|u|^2 \right\} \\
=\;& \Re \left\{ (-a+ib)( \Delta a+i\Delta b)
+ |A|^2 (a^2+b^2) +2(b+ia)A \cdot( \nabla a+i\nabla b)
+i(\nabla \cdot A)|u|^2 \right\}\\
=\;& -a\Delta a-b\Delta b
+ |A|^2 (a^2+b^2) +2b\nabla a\cdot A -2a\nabla b\cdot A
\end{split}\end{equation}
where we used the fact that $A$ is
real valued.
On the other hand, at points where $|u|\ne0$,
\begin{eqnarray*}&& \Delta |u|^2=2|u|\Delta |u|+2|\nabla |u||^2\\
{\mbox{and }}&&\nabla |u|= \frac{a \nabla a + b \nabla b}{|u|},
\end{eqnarray*}
and therefore
\begin{eqnarray*}
|u|\Delta |u|&=&\frac12\, \Delta |u|^2-|\nabla |u||^2\\
&=& \frac12\,\Delta(a^2+b^2)-\frac{|a \nabla a + b \nabla b|^2}{|u|^2}\\
&=& a\Delta a+b\Delta b+|\nabla a|^2+|\nabla b|^2-\frac{|a \nabla a + b \nabla b|^2}{a^2+b^2}.
\end{eqnarray*}
From this and (<ref>), we conclude that
\begin{equation}\label{ch39384-0348}
\begin{split}&
\Re\big\{ \bar u{\mathcal{N}} u\big\}+|u|\Delta|u|
\\ =\;& |\nabla a|^2+|\nabla b|^2-\frac{|a \nabla a + b \nabla b|^2}{a^2+b^2}
+ |A|^2 (a^2+b^2) +2b\nabla a\cdot A -2a\nabla b\cdot A
\\ =\;& \big| aA-\nabla b\big|^2+\big| bA+\nabla a\big|^2
-\frac{|a \nabla a + b \nabla b|^2}{a^2+b^2},
\end{split}\end{equation}
and the latter term is nonnegative, thanks to (<ref>)
(applied here with $t:=A$, $\alpha:=\nabla a$ and $\beta:=\nabla b$).
This completes the proof of (<ref>).
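The algebraic identity behind the last display can be checked symbolically in one space dimension, writing $\mathtt{da}$, $\mathtt{db}$ for the (scalar) gradients of $a$, $b$ and $A$ for the scalar magnetic potential (a sketch):

```python
import sympy as sp

a, b, da, db, A = sp.symbols('a b da db A', real=True)
# left-hand side: |grad a|^2 + |grad b|^2 + |A|^2(a^2+b^2)
#                 + 2 b grad a . A - 2 a grad b . A
lhs = da**2 + db**2 + A**2 * (a**2 + b**2) + 2*b*da*A - 2*a*db*A
# right-hand side: |aA - grad b|^2 + |bA + grad a|^2
rhs = (a*A - db)**2 + (b*A + da)**2
print(sp.expand(lhs - rhs))  # expands to 0
```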
Then, from (<ref>) here and [43]
(see in particular the formula before (2.12) in [43],
exploited here with $p:=2$ and $m:=2$),
\begin{eqnarray*}
&& \int_\Omega |u|^{s-2}
\Re\big\{ \bar u{\mathcal{N}} u\big\}\,dx\ge
-\int_\Omega |u|^{s-1}\Delta|u|\,dx\\
&&\qquad=\int_\Omega\nabla |u|^{s-1}\cdot\nabla|u|\,dx
\ge C\,\Vert u \Vert_{L^s{(\Omega)}}^{s},
\end{eqnarray*}
for some $C>0$.
This establishes inequality (<ref>) in this case,
with $\gamma=1$. Hence, Theorem <ref> follows
from Theorems <ref> and <ref>.
Now we deal with the fractional magnetic operator.
We have to verify the structural hypothesis (<ref>). We already know
that the desired inequality holds for the fractional Laplacian $(-\Delta)^{\sigma} v$ for $\sigma \in (0,1)$ and $v\geq 0$ (compare Theorem 1.2 of [43]). We notice that
\begin{equation}\label{ch3FU:MAGN2}
\begin{split}
\Re& \left\{ \frac{\bar{u}(x,t) \left( u(x,t) - e^{i(x-y)A(\frac{x+y}{2})}u(y,t) \right)}{|x-y|^{n+2\sigma}} \right\} \\
& \hspace{3em} = \frac{|{u}(x,t)|^2 - \Re \left\{ e^{i(x-y)A(\frac{x+y}{2})}u(y,t) \bar{u}(x,t )\right\} }{|x-y|^{n+2\sigma}} \\
& \hspace{3em} \geq |u(x,t)| \frac{|{u}(x,t)| - |u(y,t)| }{|x-y|^{n+2\sigma}},
\end{split}
\end{equation}
and therefore[Interestingly,
integrating and taking the limit as $\sigma\to1$ in (<ref>),
one obtains an alternative (and conceptually simpler)
proof of (<ref>).
This is a nice example of analysis in a nonlocal setting
which carries useful information to the classical case.]
\begin{equation}\label{ch38i9ik9iok92}
\int_{\Omega} |u(x,t)|^{s-2} \Re \{ \bar{u}(x,t)\mathcal{N} [u](x,t)\} \; dx
\geq \int_{\Omega} |u(x,t)|^{s-1} (-\Delta)^{\sigma}|u|(x,t) \; dx.\end{equation}
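The pointwise bound used above boils down to $\Re\,z\le|z|$ applied to $z=e^{i(x-y)A(\frac{x+y}{2})}u(y,t)\bar u(x,t)$; a quick numerical sanity check on random complex samples (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
ux = rng.normal(size=N) + 1j * rng.normal(size=N)   # samples of u(x,t)
uy = rng.normal(size=N) + 1j * rng.normal(size=N)   # samples of u(y,t)
theta = rng.uniform(0, 2*np.pi, size=N)             # the magnetic phase

lhs = np.abs(ux)**2 - np.real(np.exp(1j*theta) * uy * np.conj(ux))
rhs = np.abs(ux) * (np.abs(ux) - np.abs(uy))
print(np.all(lhs >= rhs - 1e-10))
```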
Also, since $|u|$ is a real and positive function, we can exploit
formula (2.25) in [43] (used here with $p:=2$) and write that
$$ \int_{\Omega} |u(x,t)|^{s-1} (-\Delta)^{\sigma}|u|(x,t) \; dx\ge
{C} \Vert u \Vert_{L^{s}(\Omega) }^{s}.$$
From this and (<ref>) we infer that
condition (<ref>) is satisfied in this case
with $\gamma=1$.
Then, the desired conclusion in Theorem <ref>
follows from Theorems <ref>
and <ref>.
[1]
N. Abatangelo and E. Valdinoci.
Getting acquainted with the fractional Laplacian.
In Contemporary research in elliptic PDEs and related topics,
pages 1–105. Springer, 2019.
[2]
E. Affili.
A Fisher-KPP model with a fast diffusion line in periodic media.
arXiv preprint arXiv:2009.14760, 2020.
[3]
E. Affili, S. Dipierro, L. Rossi, and E. Valdinoci.
Civil wars: A new Lotka-Volterra competitive system and analysis of
winning strategies.
arXiv preprint arXiv:2009.14707, 2020.
[4]
E. Affili, S. Dipierro, and E. Valdinoci.
Decay estimates in time for classical and anomalous diffusion.
In 2018 MATRIX Annals, pages 167–182. Springer, 2020.
[5]
E. Affili and E. Valdinoci.
Decay estimates for evolution equations with classical and fractional
time-derivatives.
Journal of Differential Equations, 266(7):4027–4060, 2019.
[6]
W. Arendt and J. Prüss.
Vector-valued Tauberian theorems and asymptotic behavior of linear
Volterra equations.
SIAM Journal on Mathematical Analysis, 23(2):412–448, 1992.
[7]
D. G. Aronson and H. F. Weinberger.
Multidimensional nonlinear diffusion arising in population genetics.
Advances in Mathematics, 30(1):33–76, 1978.
[8]
N. Bacaër.
A short history of mathematical population dynamics.
Springer Science & Business Media, 2011.
[9]
B. B. Barrera, A. Figalli, and E. Valdinoci.
Bootstrap regularity for integro-differential operators and its
application to nonlocal minimal surfaces.
Annali della Scuola Normale Superiore di Pisa. Classe di
scienze, 13(3):609–639, 2014.
[10]
F. Bartumeus.
Behavioral intermittence, Lévy patterns, and randomness in animal
movement.
Oikos, 118(4):488–494, 2009.
[11]
P. W. Bates and G. Zhao.
Existence, uniqueness and stability of the stationary solution to a
nonlocal evolution equation arising in population dispersal.
Journal of mathematical analysis and applications,
332(1):428–440, 2007.
[12]
A. D. Bazykin.
Nonlinear dynamics of interacting populations, volume 11 of
World Scientific Series on Nonlinear Science. Series A: Monographs and
World Scientific Publishing Co., Inc., River Edge, NJ, 1998.
With a biography of the author by Elena P. Kryukova, Yegor A. Bazykin
and Dmitry A. Bazykin, Edited and with a foreword by Alexander I. Khibnik and
Bernd Krauskopf.
[13]
H. Berestycki, A.-C. Coulon, J.-M. Roquejoffre, and L. Rossi.
Speed-up of reaction-diffusion fronts by a line of fast diffusion.
Séminaire Laurent Schwartz-EDP et applications, pages
1–25, 2014.
[14]
H. Berestycki, A.-C. Coulon, J.-M. Roquejoffre, and L. Rossi.
The effect of a line with nonlocal diffusion on Fisher-KPP
propagation.
Mathematical Models and Methods in Applied Sciences,
25(13):2519–2562, 2015.
[15]
H. Berestycki, O. Diekmann, C. J. Nagelkerke, and P. A. Zegeling.
Can a species keep pace with a shifting climate?
Bulletin of mathematical biology, 71(2):399, 2009.
[16]
H. Berestycki, R. Ducasse, and L. Rossi.
Generalized principal eigenvalues for heterogeneous road-field
systems.
arXiv preprint arXiv:1810.13180, 2018.
[17]
H. Berestycki, R. Ducasse, and L. Rossi.
Influence of a road on a population in an ecological niche facing
climate change.
arXiv preprint arXiv:1903.02221, 2019.
[18]
H. Berestycki, F. Hamel, and L. Roques.
Analysis of the periodically fragmented environment model: I–species
persistence.
Journal of Mathematical Biology, 51(1):75–113, 2005.
[19]
H. Berestycki, J.-M. Roquejoffre, and L. Rossi.
Fisher-KPP propagation in the presence of a line: further effects.
Nonlinearity, 26(9):2623, 2013.
[20]
H. Berestycki, J.-M. Roquejoffre, and L. Rossi.
The influence of a line with fast diffusion on Fisher-KPP
propagation.
Journal of mathematical biology, 66(4-5):743–766, 2013.
[21]
H. Berestycki and L. Rossi.
Reaction-diffusion equations for population dynamics with forced
speed i-the case of the whole space.
Discrete & Continuous Dynamical Systems-A, 21(1):41, 2008.
[22]
H. Berestycki and L. Rossi.
Generalizations and properties of the principal eigenvalue of
elliptic operators in unbounded domains.
Communications on Pure and Applied Mathematics,
68(6):1014–1065, 2015.
[23]
S. Bernstein.
Sur une classe d'équations fonctionnelles aux dérivées
partielles.
Bull. Acad. Sci. URSS. Sér. Math. [Izvestia Akad. Nauk SSSR],
4:17–26, 1940.
[24]
S. C. Bhargava.
Generalized Lotka–Volterra equations and the mechanism of
technological substitution.
Technological Forecasting and Social Change, 35(4):319–326,
[25]
P. Biler, C. Imbert, and G. Karch.
The nonlocal porous medium equation: Barenblatt profiles and other
weak solutions.
Arch. Ration. Mech. Anal., 215(2):497–529, 2015.
[26]
L. Brasco, E. Lindgren, and A. Schikorra.
Higher Hölder regularity for the fractional p-Laplacian in the
superquadratic case.
Advances in Mathematics, 338:782–846, 2018.
[27]
C. Bucur and E. Valdinoci.
Nonlocal diffusion and applications, volume 20 of Lecture
Notes of the Unione Matematica Italiana.
Springer, [Cham]; Unione Matematica Italiana, Bologna, 2016.
[28]
L. Caffarelli, J.-M. Roquejoffre, and O. Savin.
Nonlocal minimal surfaces.
Communications on Pure and Applied Mathematics,
63(9):1111–1144, 2010.
[29]
L. Caffarelli and J. L. Vazquez.
Nonlinear porous medium flow with fractional potential pressure.
Arch. Ration. Mech. Anal., 202(2):537–565, 2011.
[30]
M. Caputo.
Linear models of dissipation whose Q is almost frequency
independent–II.
Geophysical Journal International, 13(5):529–539, 1967.
[31]
J. Carr.
Applications of centre manifold theory, volume 35 of Applied Mathematical Sciences.
Springer-Verlag, New York-Berlin, 1981.
[32]
W. Chen, S. Mosconi, and M. Squassina.
Nonlocal problems with critical Hardy nonlinearity.
Journal of Functional Analysis, 275(11):3065 – 3114, 2018.
[33]
G. Ciraolo, A. Figalli, F. Maggi, and M. Novaga.
Rigidity and sharp stability estimates for hypersurfaces with
constant and almost-constant nonlocal mean curvature.
Journal für die reine und angewandte Mathematik,
2018(741):275–294, 2018.
[34]
P. Clément and J. Nohel.
Abstract linear and nonlinear volterra equations preserving
SIAM Journal on Mathematical Analysis, 10(2):365–388, 1979.
[35]
E. A. Coddington and N. Levinson.
Theory of ordinary differential equations.
Tata McGraw-Hill Education, 1955.
[36]
J. Coville and L. Dupaigne.
On a non-local equation arising in population dynamics.
Proceedings of the Royal Society of Edinburgh: Section A
Mathematics, 137(4):727–755, 2007.
[37]
E. C. M. Crooks, E. N. Dancer, D. Hilhorst, M. Mimura, and H. Ninomiya.
Spatial segregation limit of a competition-diffusion system with
Dirichlet boundary conditions.
Nonlinear Anal. Real World Appl., 5(4):645–665, 2004.
[38]
P. D'Avenia and M. Squassina.
Ground states for fractional magnetic operators.
ESAIM: Control, Optimisation and Calculus of Variations,
24(1):1–24, 2018.
[39]
A. de Pablo, F. Quirós, A. Rodríguez, and J. L. Vázquez.
A fractional porous medium equation.
Adv. Math., 226(2):1378–1409, 2011.
[40]
E. Di Nezza, G. Palatucci, and E. Valdinoci.
Hitchhiker's guide to the fractional Sobolev spaces.
Bull. Sci. Math., 136(5):521–573, 2012.
[41]
E. DiBenedetto.
Degenerate parabolic equations.
Springer Science & Business Media, 2012.
[42]
S. Dipierro and E. Valdinoci.
A simple mathematical model inspired by the purkinje cells: from
delayed travelling waves to fractional diffusion.
Bulletin of mathematical biology, 80(7):1849–1870, 2018.
[43]
S. Dipierro, E. Valdinoci, and V. Vespri.
Decay estimates for evolutionary equations with fractional
time-diffusion.
arXiv preprint arXiv:1707.08278, 2017.
[44]
Y. Du, M. Wang, and M. Zhou.
Semi-wave and spreading speed for the diffusive competition model
with a free boundary.
Journal de Mathématiques Pures et Appliquées,
107(3):253–287, 2017.
[45]
L. C. Evans.
Partial differential equations. graduate studies in mathematics.
American mathematical society, 2:1998, 1998.
[46]
A. Farina and E. Valdinoci.
Regularity and rigidity theorems for a class of anisotropic nonlocal
manuscripta mathematica, 153(1-2):53–70, 2017.
[47]
B. D. Fath.
Encyclopedia of ecology.
Elsevier, 2018.
[48]
P. C. Fife and J. B. McLeod.
The approach of solutions of nonlinear diffusion equations to
travelling front solutions.
Archive for Rational Mechanics and Analysis, 65(4):335–361,
[49]
A. Fiscella and E. Valdinoci.
A critical Kirchhoff type problem involving a nonlocal operator.
Nonlinear Anal., 94:156–170, 2014.
[50]
R. A. Fisher.
The wave of advance of advantageous genes.
Annals of eugenics, 7(4):355–369, 1937.
[51]
J. Flores.
A mathematical model for Neanderthal extinction.
J. Theoret. Biol., 191(3):295–298, 1998.
[52]
M. Gatto, E. Bertuzzo, L. Mari, S. Miccoli, L. Carraro, R. Casagrandi, and
A. Rinaldo.
Spread and dynamics of the COVID-19 epidemic in Italy: Effects of
emergency containment measures.
Proceedings of the National Academy of Sciences,
117(19):10484–10491, 2020.
[53]
S. Gaucel and M. Langlais.
Some remarks on a singular reaction-diffusion system arising in
predator-prey modeling.
Discrete & Continuous Dynamical Systems-B, 8(1):61, 2007.
[54]
G. Gause.
Ecology and some problems of species origin.
Ecology and theory of evolution. Articles of science research,
Leningrad, pages 5–105, 1984.
[55]
S. Genieys, V. Volpert, and P. Auger.
Pattern and waves for a model in population dynamics with nonlocal
consumption of resources.
Mathematical Modelling of Natural Phenomena, 1(1):63–80, 2006.
[56]
M. Ghisi and M. Gobbino.
Hyperbolic-parabolic singular perturbation for mildly degenerate
Kirchhoff equations: time-decay estimates.
J. Differential Equations, 245(10):2979–3007, 2008.
[57]
D. Gilbarg and N. S. Trudinger.
Elliptic partial differential equations of second order.
springer, 2015.
[58]
T. Giletti, L. Monsaingeon, and M. Zhou.
A KPP road–field system with spatially periodic exchange terms.
Nonlinear Analysis, 128:273–302, 2015.
[59]
E. Giusti and G. H. Williams.
Minimal surfaces and functions of bounded variation, volume 80.
Springer, 1984.
[60]
J. R. Graef, J. Henderson, L. Kong, and X. S. Liu.
Ordinary differential equations and boundary value problems.
Vol. I, volume 7 of Trends in Abstract and Applied Analysis.
World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2018.
Advanced ordinary differential equations.
[61]
J.-S. Guo and P. Souplet.
Fast rate of formation of dead-core for the heat equation with strong
absorption and applications to fast blow-up.
Mathematische Annalen, 331(3):651–667, 2005.
[62]
W. Horsthemke.
Noise induced transitions.
In Nonequilibrium dynamics in chemical systems (Bordeaux,
1984), volume 27 of Springer Ser. Synergetics, pages 150–160.
Springer, Berlin, 1984.
[63]
S.-B. Hsu.
Ordinary differential equations with applications, volume 21 of
Series on Applied Mathematics.
World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, second
edition, 2013.
[64]
W. Huang.
Problem on minimum wave speed for a Lotka–Volterra
reaction–diffusion competition model.
Journal of Dynamics and Differential Equations, 22(2):285–297,
[65]
W. Huang and M. Han.
Non-linear determinacy of minimum wave speed for a Lotka–Volterra
competition model.
Journal of Differential Equations, 251(6):1549–1561, 2011.
[66]
N. E. Humphries, N. Queiroz, J. R. Dyer, N. G. Pade, M. K. Musyl, K. M.
Schaefer, D. W. Fuller, J. M. Brunnschweiler, T. K. Doyle, J. D. Houghton,
et al.
Environmental context explains Lévy and Brownian movement
patterns of marine predators.
Nature, 465(7301):1066–1069, 2010.
[67]
T. Ikebe and T. Kato.
Uniqueness of the self-adjoint extension of singular elliptic
differential operators.
Arch. Rational Mech. Anal., 9:77–92, 1962.
[68]
N. Jacob.
Pseudo Differential Operators & Markov Processes, volume 3.
Imperial College Press, 2005.
[69]
J. Kemppainen, J. Siljander, V. Vergara, and R. Zacher.
Decay estimates for time-fractional and other non-local in time
subdiffusion equations in $\Bbb{R}^d$.
Math. Ann., 366(3-4):941–979, 2016.
[70]
N. Kinezaki, K. Kawasaki, F. Takasu, and N. Shigesada.
Modeling biological invasions into periodically fragmented
Theoretical population biology, 64(3):291–302, 2003.
[71]
A. N. Kochubei.
Distributed order calculus and equations of ultraslow diffusion.
J. Math. Anal. Appl., 340(1):252–281, 2008.
[72]
A. Kolmogorov, I. Petrovsky, and N. Piskunov.
Investigation of the equation of diffusion combined with increasing
of the substance and its application to a biology problem.
Bull. Moscow State Univ. Ser. A: Math. Mech, 1(6):1–25, 1937.
[73]
M. Kwaśnicki.
Ten equivalent definitions of the fractional Laplace operator.
Fractional Calculus and Applied Analysis, 20(1):7–51, 2017.
[74]
O. A. Ladyzhenskaia, V. A. Solonnikov, and N. N. Ural'tseva.
Linear and quasi-linear equations of parabolic type, volume 23.
American Mathematical Soc., 1968.
[75]
M. A. Lewis, B. Li, and H. F. Weinberger.
Spreading speed and linear determinacy for two-species competition
Journal of mathematical biology, 45(3):219–233, 2002.
[76]
B. Li, H. F. Weinberger, and M. A. Lewis.
Spreading speeds as slowest wave speeds for cooperative systems.
Mathematical biosciences, 196(1):82–98, 2005.
[77]
M. A. Lomholt, K. Tal, R. Metzler, and K. Joseph.
Lévy strategies in intermittent search processes are
advantageous.
Proceedings of the National Academy of Sciences,
105(32):11055–11059, 2008.
[78]
A. J. Lotka.
Elements of physical biology.
Science Progress in the Twentieth Century (1919-1933),
21(82):341–343, 1926.
[79]
Y. Lou, W.-M. Ni, and S. Yotsutani.
On a limiting system in the Lotka–Volterra competition with
cross-diffusion.
Discrete & Continuous Dynamical Systems-A, 10(1&2):435, 2004.
[80]
F. Mainardi.
On some properties of the Mittag-Leffler function
$E_\alpha(-t^\alpha)$, completely monotone for $t>0$ with $0<\alpha<1$.
Discrete Contin. Dyn. Syst. Ser. B, 19(7):2267–2278, 2014.
[81]
T. R. Malthus.
An essay on the principle of population as it affects the future
improvement of society, with remarks on the speculations of mr godwin, m.
Condorcet, and other writers. London: J. Johnson, 1798.
[82]
A. Massaccesi and E. Valdinoci.
Is a nonlocal diffusion strategy convenient for biological
populations in competition?
J. Math. Biol., 74(1-2):113–147, 2017.
[83]
H. W. McKenzie, E. H. Merrill, R. J. Spiteri, and M. A. Lewis.
How linear features alter predator movement and the functional
Interface focus, 2(2):205–216, 2012.
[84]
R. Metzler and J. Klafter.
The random walk's guide to anomalous diffusion: a fractional dynamics
Physics reports, 339(1):1–77, 2000.
[85]
S. A. Morris and D. Pratt.
Analysis of the Lotka–Volterra competition equations as a
technological substitution model.
Technological Forecasting and Social Change, 70(2):103–133,
[86]
J. Murray.
Mathematical biology ii: Spatial models and biomedical applications.
[87]
T. Namba and M. Mimura.
Spatial distribution of competing populations.
J. Theoret. Biol., 87(4):795–814, 1980.
[88]
H.-M. Nguyen, A. Pinamonti, M. Squassina, and E. Vecchi.
New characterizations of magnetic Sobolev spaces.
Adv. Nonlinear Anal., 7(2):227–245, 2018.
[89]
A. Okubo et al.
Diffusion and ecological problems: mathematical models.
[90]
A. Okubo, P. K. Maini, M. H. Williamson, and J. D. Murray.
On the spatial spread of the grey squirrel in Britain.
Proceedings of the Royal Society of London. B. Biological
Sciences, 238(1291):113–125, 1989.
[91]
S. Pan and G. Lin.
Invasion traveling wave solutions of a competitive system with
Boundary Value Problems, 2012(1):120, 2012.
[92]
R. B. Paris.
Exponential asymptotics of the Mittag-Leffler function.
R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci.,
458(2028):3041–3052, 2002.
[93]
S. Patrizi and E. Valdinoci.
Long-time behavior for crystal dislocation dynamics.
Math. Models Methods Appl. Sci., 27(12):2185–2228, 2017.
[94]
A. Pauthier.
Uniform dynamics for Fisher-KPP propagation driven by a line of fast
diffusion under a singular limit.
Nonlinearity, 28(11):3891, 2015.
[95]
A. Pauthier.
The influence of nonlocal exchange terms on Fisher-KPP propagation
driven by a line of fast diffusion.
Communications in Mathematical Sciences, 14(2):535–570, 2016.
[96]
L. Perko.
Differential equations and dynamical systems, volume 7.
Springer Science & Business Media, 2013.
[97]
A. B. Potapov and M. A. Lewis.
Climate and competition: the effect of moving range boundaries on
habitat invasibility.
Bulletin of Mathematical Biology, 66(5):975–1008, 2004.
[98]
J. Prüss.
Evolutionary integral equations and applications, volume 87.
Birkhäuser, 2013.
[99]
S. N. Rasband.
Chaotic dynamics of nonlinear systems.
A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York,
[100]
P. Raviart.
Sur la résolution de certaines équations paraboliques non
linéaires.
Journal of Functional Analysis, 5(2):299 – 328, 1970.
[101]
A. M. Reynolds and C. J. Rhodes.
The Lévy flight paradigm: random search patterns and mechanisms.
Ecology, 90(4):877–887, 2009.
[102]
L. F. Richardson.
Arms and insecurity: A mathematical study of the causes and
origins of war.
Edited by Nicolas Rashevsky and Ernesto Trucco. The Boxwood Press,
Pittsburgh, Pa.; Quadrangle Books, Chicago, Ill., 1960.
[103]
C. Robinet, C.-E. Imbert, J. Rousselet, D. Sauvard, J. Garcia, F. Goussard, and
A. Roques.
Human-mediated long-distance jumps of the pine processionary moth in
Biological invasions, 14(8):1557–1569, 2012.
[104]
B. Ross.
The development of fractional calculus 1695–1900.
Historia Mathematica, 4(1):75–89, 1977.
[105]
L. Rossi, A. Tellini, and E. Valdinoci.
The effect on Fisher-KPP propagation in a cylinder with fast
diffusion on the boundary.
SIAM Journal on Mathematical Analysis, 49(6):4595–4624, 2017.
[106]
J. Sánchez and V. Vergara.
Long-time behavior of nonlinear integro-differential evolution
Nonlinear Analysis: Theory, Methods & Applications, 91:20–31,
[107]
N. Shigesada and K. Kawasaki.
Biological invasions: theory and practice.
Oxford University Press, UK, 1997.
[108]
N. Shigesada, K. Kawasaki, and E. Teramoto.
Traveling periodic waves in heterogeneous environments.
Theoretical Population Biology, 30(1):143–160, 1986.
[109]
J. G. Skellam.
Random dispersal in theoretical populations.
Biometrika, 38(1/2):196–218, 1951.
[110]
M. Squassina and B. Volzone.
Bourgain-Brézis-Mironescu formula for magnetic operators.
C. R. Math. Acad. Sci. Paris, 354(8):825–831, 2016.
[111]
H. Sussmann.
Regular synthesis for time-optimal control of single-input real
analytic systems in the plane.
SIAM journal on control and optimization, 25(5):1145–1162,
[112]
A. Tellini.
Comparison among several planar Fisher-KPP road-field systems.
In Contemporary Research in Elliptic PDEs and Related Topics,
pages 481–500. Springer, 2019.
[113]
G. Teschl.
Ordinary differential equations and dynamical systems, volume
140 of Graduate Studies in Mathematics.
American Mathematical Society, Providence, RI, 2012.
[114]
M. D. Toft.
The state of the field: Demography and war.
Population and Conflict: Exploring the Links, (11):25–28,
[115]
E. Trélat.
Contrôle optimal: théorie & applications.
Vuibert Paris, 2005.
[116]
J. M. G. van der Dennen.
The origin of war: The evolution of a male-coalitional
reproductive strategy.
Origin Press, 1995.
[117]
G. Vandenbroucke.
Fertility and wars: the case of world war i in france.
American Economic Journal: Macroeconomics, 6(2):108–36, 2014.
[118]
J. L. Vázquez.
The porous medium equation: mathematical theory.
Oxford University Press, 2007.
[119]
V. Vergara and R. Zacher.
Optimal decay estimates for time-fractional and other nonlocal
subdiffusion equations via energy methods.
SIAM J. Math. Anal., 47(1):210–239, 2015.
[120]
P. Verhulst.
La loi d'accroissement de la population.
Nouveaux Memories de l'Académie Royale des Sciences et
Belles-Lettres de Bruxelles, 18:14–54, 1845.
[121]
V. Volterra.
Principes de biologie mathématique.
Acta biotheoretica, 3(1):1–36, 1937.
[122]
C. Watanabe, R. Kondo, N. Ouchi, and H. Wei.
A substitution orbit model of competitive innovations.
Technological Forecasting and Social Change, 71(4):365–390,
[123]
S. Wiggins.
Introduction to applied nonlinear dynamical systems and chaos,
volume 2 of Texts in Applied Mathematics.
Springer-Verlag, New York, 1990.
[124]
R. Zacher.
Weak solutions of abstract evolutionary integro-differential
equations in hilbert spaces.
Funkcialaj Ekvacioj, 52(1):1–18, 2009.
|
Current E-mail<EMAIL_ADDRESS>
Long-term E-mail<EMAIL_ADDRESS>
# Progress of the CHARA/SPICA project
Pannetier C Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS,
Laboratoire Lagrange, France ONERA/DOTA, Université Paris Saclay, 92322
Châtillon, France Mourard D Université Côte d’Azur, Observatoire de la Côte
d’Azur, CNRS, Laboratoire Lagrange, France Berio P Université Côte d’Azur,
Observatoire de la Côte d’Azur, CNRS, Laboratoire Lagrange, France Cassaing F
ONERA/DOTA, Université Paris Saclay, 92322 Châtillon, France Allouche F
Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire
Lagrange, France Anugu N Steward Observatory, Department of Astronomy,
University of Arizona, Tucson, USA University of Michigan, Ann Arbor, MI
48109, US School of Physics and Astronomy, University of Exeter, Exeter,
Stocker Road, EX4 4QL, UK Bailet C Université Côte d’Azur, Observatoire de
la Côte d’Azur, CNRS, Laboratoire Lagrange, France ten Brummelaar T The
CHARA Array, Mount Wilson Observatory, Mount Wilson, CA 91023 Dejonghe J
Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire
Lagrange, France Gies D The CHARA Array, Mount Wilson Observatory, Mount
Wilson, CA 91023 Jocou L Institut de Planetologie et d’Astrophysique de
Grenoble, Grenoble 38058, France Kraus S School of Physics and Astronomy,
University of Exeter, Exeter, Stocker Road, EX4 4QL, UK Lacour S LESIA,
Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Univ. Paris
Diderot, Sorbonne Paris Cité, 5 place Jules Janssen, 92195 Meudon, France
Lagarde S Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS,
Laboratoire Lagrange, France Le Bouquin J.B Institut de Planetologie et
d’Astrophysique de Grenoble, Grenoble 38058, France Lecron D Université Côte
d’Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire Lagrange, France
Monnier J University of Michigan, Ann Arbor, MI 48109, US Nardetto N
Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire
Lagrange, France Patru F Université Côte d’Azur, Observatoire de la Côte
d’Azur, CNRS, Laboratoire Lagrange, France Perraut K Institut de
Planetologie et d’Astrophysique de Grenoble, Grenoble 38058, France Petrov R
Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire
Lagrange, France Rousseau S Université Côte d’Azur, Observatoire de la Côte
d’Azur, CNRS, Laboratoire Lagrange, France Stee P Université Côte d’Azur,
Observatoire de la Côte d’Azur, CNRS, Laboratoire Lagrange, France Sturmann J
The CHARA Array, Mount Wilson Observatory, Mount Wilson, CA 91023 Sturmann L
The CHARA Array, Mount Wilson Observatory, Mount Wilson, CA 91023
###### Abstract
CHARA/SPICA (Stellar Parameters and Images with a Cophased Array) is currently
being developed at Observatoire de la Côte d’Azur. It will be installed at the
visible focus of the CHARA Array by the end of 2021. It has been designed to
perform a large survey of fundamental stellar parameters with, where possible,
detailed imaging of the surface or environment of stars. To reach the
required precision and sensitivity, CHARA/SPICA combines a low spectral
resolution mode $R=140$ in the visible and single-mode fibers fed by the AO
stages of CHARA. This setup generates additional needs before the
interferometric combination: the compensation of atmospheric refraction and
longitudinal dispersion, and the fringe stabilization. In this paper, we
present the main features of the six-telescope fibered visible beam combiner
(SPICA-VIS) together with the first laboratory and on-sky results of the
fringe tracker (SPICA-FT). We also describe the new fringe-tracker simulator
developed in parallel with SPICA-FT.
###### keywords:
long baseline interferometry, fringe-tracking, CHARA
## 1 Introduction
### 1.1 Scientific rationale
Measuring the angular diameter of stars is critical for constraining
fundamental stellar and planetary parameters[1, 2, 3]. In addition to its high
interest for exoplanet characterisation since the first discovery in 1995[4],
stellar physics has seen a recent reawakening with the discovery of
oscillating processes in more than a thousand stars with WIRE[5], MOST[6],
CoRoT[7] and Kepler[8], and measuring their fundamental properties (diameter,
temperature, mass, age) has become critical in many domains of astronomy.
Indirectly, stellar angular diameters are also used to derive the distance of
eclipsing binaries in the Large Magellanic Cloud[9] and the Small Magellanic
Cloud[10] through the so-called Surface-Brightness Color Relations (SBCR).
Many such relations currently exist in the literature, derived from different
subsets of the JMMC Measured Stellar Diameters Catalog (JMDC), a catalog[11]
that gathers all directly measured stellar diameters. When these relations are
used to derive diameters of stars of magnitude V=6, the uncertainty lies
between 2% for V-K=3 and 9% for both early-type (V-K=0) and late-type (V-K=5)
stars[12]. With its capability of resolving such distant objects, stellar
interferometry has long been used for calibrating these relations. However,
due to sensitivity and accuracy limitations, among the 1500 star diameters of
the JMDC, only 11% are known with an accuracy better than 1%, which partly
explains the SBCR dispersion. The three other causes of this dispersion are
that the JMDC diameters come from heterogeneous measurement techniques and
different star selection criteria[13], and that circumstellar material (e.g.
winds), binarity or rotation are probably not taken into account when
analysing the data.
Thanks to the combination of fringe tracking, spatial filtering, low spectral
resolution and the use of modern EMCCD detectors, more than 7000 star
diameters could be measurable with a precision better than 1% by
CHARA/SPICA[14], a new visible instrument that will replace the VEGA[15, 16]
instrument on the CHARA[17] Array. CHARA/SPICA has many similarities with the
NPOI/VISION[18] instrument, but will benefit from the 1 m telescopes of CHARA.
The CHARA/SPICA observing program aims at measuring the diameters of a
thousand of these stars, distributed over the Hertzsprung-Russell (HR) diagram
for spectral types from O to M, over almost 70 nights per year for 3 years. It
will greatly enlarge the JMDC with homogeneous and high-precision measurements.
These measurements will be crucial to derive fundamental parameters of stars
and planets, improve evolutionary models, and constrain the SBCR all over the
HR diagram. To complete the spatial measurements provided by the low spectral
resolution $R=140$, medium and high spectral resolution modes ($R=3000$,
$R=10000$) will allow spectro-differential interferometry on selected targets,
providing crucial information on dynamical processes such as rotation
velocity. With its high angular resolution of tenths of milliarcseconds,
CHARA/SPICA can also measure the apparent orbit of binaries. When combined
with the spectroscopic orbit, the knowledge of their three-dimensional orbit
makes it possible to derive their respective masses. The improved precision of
the stellar diameters will ultimately improve the estimation of distances in
the Universe, in particular the distances of neighbouring galaxies[12].
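The mass determination mentioned above is simple once the apparent orbit is
converted to linear units: 1 mas at 1 pc subtends $10^{-3}$ AU, and Kepler's
third law in solar units gives the total mass. A minimal sketch, with
illustrative values that are not actual SPICA measurements:

```python
def total_mass_msun(a_mas, dist_pc, period_yr):
    """Total mass of a binary (solar masses) from the angular semi-major
    axis [mas], the distance [pc] and the period [yr], using Kepler's
    third law in solar units: M = a^3 / P^2 with a in AU."""
    a_au = a_mas * 1e-3 * dist_pc  # 1 mas at 1 pc subtends 1e-3 AU
    return a_au ** 3 / period_yr ** 2

# Hypothetical binary: a = 50 mas at 20 pc with a 1-year period.
M = total_mass_msun(50.0, 20.0, 1.0)  # 1 AU orbit in 1 yr -> 1 Msun total
```

Splitting this total mass into the two component masses then requires the
spectroscopic mass ratio, as stated in the text.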
For stars hosting transiting planets, the ratio $R_{p}/R_{\star}$ of the
planet radius to the stellar radius is usually estimated by space missions
such as CoRoT, Kepler, K2, TESS or PLATO. Precisely measuring the stellar
radius $R_{\star}$, by combining its angular diameter measured with
interferometric techniques and its distance measured by parallax (Gaia),
finally allows us to estimate the planet radius $R_{p}$ and to derive
properties such as its density and its position with respect to the habitable
zone of the parent star. Furthermore, the high sensitivity and the large
coverage of spatial frequencies permitted by the 6 telescopes, together with
the broadness of the measured waveband, from 0.6 to 0.9 µm, will provide
precise limb-darkening profiles and imaging of surface features when possible.
A precise knowledge of the stellar surface luminance distribution is critical
for the characterisation of the atmospheres of these planets. The large survey
planned with CHARA/SPICA will thus be of major interest for current space
missions, in particular the upcoming PLATO survey, whose main objective is to
characterise the fundamental parameters of transiting planets by measuring
their radii with a precision of 2%.
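The radius chain described above is pure geometry: 1 mas at 1 pc subtends
$10^{-3}$ AU, so the linear radius follows from the angular diameter and the
distance, and the relative error adds both contributions in quadrature. A
minimal sketch with illustrative (made-up) numbers:

```python
import math

AU_M = 1.495978707e11  # 1 astronomical unit [m]
R_SUN_M = 6.957e8      # solar radius [m]

def stellar_radius_rsun(theta_mas, dist_pc):
    """Linear radius [solar radii] from an angular diameter [mas] and a
    distance [pc]: R = (theta / 2) * d, with 1 mas * 1 pc = 1e-3 AU."""
    return 0.5 * theta_mas * dist_pc * 1e-3 * AU_M / R_SUN_M

def relative_error(sig_theta, theta, sig_d, d):
    """First-order error propagation on R (quadrature sum)."""
    return math.hypot(sig_theta / theta, sig_d / d)

# Illustrative star: 1.00 +/- 0.01 mas at 10.0 +/- 0.05 pc.
R = stellar_radius_rsun(1.00, 10.0)           # ~1.08 R_sun
err = relative_error(0.01, 1.00, 0.05, 10.0)  # ~1.1% on the radius
# A transit depth giving Rp/Rstar = 0.10 then yields Rp ~ 0.11 R_sun.
Rp = 0.10 * R
```

This makes explicit why a 1% angular diameter, as targeted by the survey,
directly translates into a ~1% stellar and planetary radius when the Gaia
parallax error is subdominant.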
### 1.2 Guiding principles for CHARA/SPICA
Designing an interferometric instrument means not only considering the
scientific rationale and its translation into technical specifications, but
also adapting the very particular entrance pupil plane of a ground-based
interferometer to the needs of the cophased focus. In the previous
descriptions of SPICA[19, 14] we have shown that reaching the required
performance calls for spatial filtering of the beams through single-mode
fibers and a low-resolution spectrograph. Active control of the injection into
the fibers and of the fringe stabilization is mandatory to reach the required
limiting magnitude.
These main considerations led to the general design of SPICA-VIS on the basis
of two optical benches, shown in Fig. 1. The first one is called the Injection
Table (IT) and holds all the modules required for the injection into the
fibers: picking optics, atmospheric refraction correction, pupil-plane and
image-plane alignment, fast tip/tilt correction for the injection, pupil-plane
and image-plane sensing, and injection optics. This first table ends with the
6 fibers glued on a V-groove. This piece is the entrance of the second optical
bench, the Spectrograph Table, dedicated to the formation of the image-plane
dispersed fringes at different spectral resolutions, the photometric channels
for the calibration of the complex visibility, and the science detector.
Moreover, based on the experience of combined VEGA[20] and CLIMB[21]
operation, it has been decided to install the fast fringe sensor SPICA-FT in
the near-infrared H band at 1.65 $\mu$m, keeping the whole visible band for
science with SPICA-VIS. It is based on a 6T-ABCD integrated-optics beam
combiner fed
by the MIRC-X fibers and installed in front of the MIRC-X spectrograph [22]
and the C-RED ONE detector[23]. The control loops (group-delay and phase-
delay) use the main CHARA delay lines for the fringe stabilization.
### 1.3 General implementation and CHARA interface
As stated above, SPICA-FT is integrated inside the MIRC-X instrument and
benefits from all its functionalities: the internal delay lines, the off-axis
parabola for the injection, and the compensation of the birefringence of the
fibers. It can be illuminated either by the beams coming from the telescopes
or by the new Six-Telescope Simulator (STS[22]), coaligned and cophased on the
CHARA reference source.
Concerning SPICA-VIS, it has been decided to reuse the two existing VEGA
tables, both for economic reasons and for darkness reasons in this part of the
focal laboratory of CHARA. A new 6-beam periscopic device will replace the old
VEGA periscope and will be installed on the CHARA visible table. With such an
installation, SPICA-VIS can receive light from the 6 telescopes but also from
the STS, thanks to the addition of 6 dichroic plates mimicking the CHARA
beam-sampler for the STS beams. Fig. 1 presents the general layout of the
CHARA laboratory with the new SPICA systems.
Figure 1: General implementation of the three main SPICA elements in the CHARA
Beam Combination Laboratory: the SPICA-VIS feeding optics, the IT and the
spectrograph (green boxes). The light-blue boxes represent the two new LDC
modules.
As will be described in Ref. 24, it has been decided to improve the
longitudinal dispersion compensators (LDC) of CHARA[25]. The main reasons are
the need to improve their transmission in the infrared bands and to permit a
better correction in the low spectral resolution mode of SPICA-VIS. The
combination of the old LDC (but with new glass), an additional LDC in the
visible beams, and the internal delay lines of the different instruments makes
it possible to reach high fringe contrast and transmission in all bands. This
development will ease the simultaneous use of the CHARA instruments, ideally
covering the R+I, J, H, and K bands.
## 2 SPICA-VIS: design study
### 2.1 Injection Table
As briefly presented in the introduction, the interferometric combination of
beams collected by separated telescopes requires some preparation of each
individual beam. This involves not only the transport of the beams and the
equalization of the optical path lengths: the beams also have to be controlled
in terms of alignment and polarization to reach the highest performance.
Coupling the beams into single-mode fibers is now classical in optical
interferometry[26, 27]. It presents the great advantage of clearly separating
the functions before and after the injection. After the injection, the beams
are transported by the fibers and can easily be adapted to the requirements of
the science instrument. This part is described in sub-section 2.2.
It has been demonstrated[14] that, given the performance of the CHARA adaptive
optics stages[28] in the visible (Strehl below 25%), it is critical to add a
fast tip/tilt correction stage before the injection into the fibers. This
stage not only maximizes the injected flux but also strongly reduces the
fraction of frames with flux below a given threshold[29].
The IT (Figure 2) is built around this fast tip/tilt stage. The ideal location
of this tip/tilt mirror is in a pupil plane, and the entrance optics of the
table image the distant pupil plane into the CHARA laboratory. An intermediate
image plane is used to allow a correct centering of the pupil. Two slow-motion
mirrors (M2 in the periscope and M3 in the image plane before the tip/tilt
mirror M4) make it possible to conjugate the CHARA beams and the injection
modules with the required performance (lateral positioning better than 5% of
the pupil diameter, residual tip/tilt better than 10 seconds of arc (lab
units)). A fraction of the light (10%) is used to perform pupil and image
control on each of the 6 beams. The reference positions on the control
detector are recorded with a laser retro-feeding the fibers, sent to the
control detector by retroreflectors after reflection on the beam splitters. It
should also be noted that this table contains, for each beam, a module
compensating for the differential polarization between beams caused by the
inhomogeneities of the fibers[29].
Figure 2: 3D model of the CHARA/SPICA injection table. The six CHARA beams
arrive from the upper-right part of the figure, after being picked up by the
SPICA periscope (mirrors M1 and M2, M2 being dedicated to the slow alignment
of the image plane). Each beam successively encounters the visible shutter,
the polarisation compensator (PDC), the imaging lens, the M3 mirror (in an
image plane, permitting the alignment of the pupil plane), the fast tip/tilt
mirror M4 (M4/TT), the collimating lens, the beam splitter (BS) sending 10% of
the flux to the control detector, and finally the injection module installed
in the bottom-left corner of the table. In the middle and bottom parts of the
table, six retroreflectors (CC) send the retro-feeding laser to the control
camera for the recording of the reference positions.
As described in Sec. 1.1, a major part of the science programs of SPICA will
be carried out in low resolution mode. This means that the whole band between
600 nm and 900 nm has to be injected into the single-mode fibers. We chose the
PM630-HP single-mode fibers from Nufern
(https://www.nufern.com/pam/optical_fibers/960/PM630-HP/), suited to this
waveband. The correction of the atmospheric refraction is therefore important
to consider. In Fig. 3, we plot the injection factor as a function of
wavelength for different values of an absolute displacement. From this we can
deduce that the chromatic error on the correction of the refraction has to be
kept below 10 mas (sky units) to limit the loss on the injection factor to
less than 1%. A simple computation of the differential refraction between 700
nm (the reference wavelength) and 600 nm or 850 nm gives a transverse
dispersion of 300 mas. It is thus mandatory to introduce a correction of the
differential refraction. Moreover, because of the field rotation of the CHARA
Coudé beams, these compensators have to be continuously aligned with the field
rotation, with a precision of 0.5°. Finally, our computations show that the
influence of the field rotation and the change of refraction over 10 min do
not generate a transverse dispersion larger than 10 mas (field rotation) or 15
mas (change of refraction in the extreme cases). We therefore decided to
rotate the compensators only when slewing.
Figure 3: Injection factor as a function of the wavelength for different
amplitudes of spatial displacement.
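The order of magnitude of the transverse dispersion quoted above can be
checked with a flat-atmosphere model, $R(\lambda)\simeq(n(\lambda)-1)\tan z$,
and an Edlén-type fit of the air refractivity. The coefficients and the 45°
zenith angle below are illustrative assumptions, not the computation actually
used for SPICA:

```python
import math

def n_air_minus_one(lam_um):
    """Approximate refractivity of standard air (Edlen-type dispersion
    fit, wavenumber sigma = 1/lambda in 1/um); adequate for an order of
    magnitude of the differential refraction."""
    s2 = (1.0 / lam_um) ** 2
    return 1e-8 * (8342.54 + 2406147.0 / (130.0 - s2) + 15998.0 / (38.9 - s2))

def differential_refraction_mas(lam_um, ref_um, zenith_deg):
    """Transverse offset [mas] of lam relative to the reference
    wavelength, flat-atmosphere model R(lambda) ~ (n - 1) * tan(z)."""
    dz = math.tan(math.radians(zenith_deg))
    d_n = n_air_minus_one(lam_um) - n_air_minus_one(ref_um)
    return abs(d_n * dz) * math.degrees(1.0) * 3600.0 * 1000.0

# At a 45 deg zenith angle, relative to the 700 nm reference:
blue = differential_refraction_mas(0.60, 0.70, 45.0)  # a few hundred mas
red = differential_refraction_mas(0.85, 0.70, 45.0)   # a few hundred mas
```

Both offsets come out at a few hundred mas, consistent with the ~300 mas
figure in the text and far above the 10 mas tolerance, which is why the
refraction compensators are mandatory.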
Finally, an important development concerns the injection module. It is well
known that the theoretical coupling into a single-mode fiber is limited to
$81.8\%$ (in the case of a beam without central obstruction). Taking the
central obstruction into account brings this down to $72\%$. In theory, a
perfectly focused and aligned off-axis parabola could reach this level with no
chromatism. However, surface aberrations and misalignment would quickly
degrade the performance. We studied a lens system made of an achromatic
doublet and a plano-convex lens (Fig. 4). The two optics are maintained at
their adequate positions by construction, while focus is obtained through fine
adjustment of the distance between the second lens and the fiber. All
centerings are nominal and not adjustable, as the tolerance on this aspect is
not very tight. We developed a test bench for measuring the coupling
efficiency of this module. The light coming from a collimated source is
divided into 2 parts by a beamsplitter. The first part is directly imaged on
the detector; the second part is injected into the fiber. A motorized tip/tilt
system is placed just before the injection module in order to optimize the
coupling. With this prototype, we reached, in the lab, a coupling of $75\%$,
compared with the theoretical coupling of $81.8\%$.
Figure 4: The opto-mechanical setup of the injection module developed for
SPICA.
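The coupling limits quoted above can be reproduced with a standard pupil-plane
overlap integral between a uniform (possibly annular) pupil and the Gaussian
fiber mode, maximised over the mode radius. This is a sketch, not the SPICA
analysis: the exact clear-aperture value depends on conventions (this model
gives about 81%, close to the 81.8% quoted), and the 25% obstruction ratio
used below is an illustrative assumption:

```python
import numpy as np

def max_coupling(obstruction=0.0):
    """Maximum coupling of a uniform annular pupil (outer radius 1,
    inner radius eps) into a Gaussian fiber mode, from the pupil-plane
    overlap integral:
        eta(w) = 2 w^2 (exp(-eps^2/w^2) - exp(-1/w^2))^2 / (1 - eps^2),
    maximised numerically over the mode radius w."""
    eps2 = obstruction ** 2
    w = np.linspace(0.3, 3.0, 20000)
    eta = 2 * w**2 * (np.exp(-eps2 / w**2) - np.exp(-1.0 / w**2))**2 / (1 - eps2)
    return float(eta.max())

eta_clear = max_coupling(0.0)   # ~0.81: the classical clear-pupil limit
eta_obst = max_coupling(0.25)   # ~0.70: reduced by the central obstruction
```

The obstructed value is comparable to the 72% quoted in the text, and the gap
between the measured 75% and the clear-pupil limit gives a feel for the
aberration and alignment budget of the module.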
### 2.2 Spectrograph
At the entrance of the spectrograph, the 6 fibers (FOP) are linearly arranged
on a silicon V-groove with a pitch of 250 µm. As illustrated in Fig. 5, 8
additional fibers, 4 single-mode (FCM) and 4 multimode (FCS), are aligned on
free positions to carry the internal source necessary for the spectral
calibration of the spectrometer. The 6 FOP are aligned according to the most
compact non-redundant configuration, which makes it possible to distinguish
the baselines in Fourier space while keeping an efficient sampling of the
fringe pattern on the detector. A microlens array (MLA) is glued at the output
of the V-groove such that each microlens, of 218 µm diameter with $NA=0.12$,
collimates the Gaussian beam of each fiber.
Figure 5: Arrangement of the scientific and calibration fibers on the V-groove
in front of the MLA. The 6 main scientific fibers (FOP) follow a non-redundant
alignment, while the single-mode (FCM) and multimode (FCS) fibers are
distributed according to their different spectral calibration functions. The
black rectangles are unusable positions.
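The non-redundancy condition described above is easy to state: all 15 pairwise
fiber separations must be distinct, so that each baseline maps to a unique
fringe frequency in Fourier space. A minimal check, using a hypothetical
6-element layout (a perfect Golomb ruler of order 6) that is not the actual
SPICA configuration:

```python
from itertools import combinations

def is_non_redundant(positions):
    """A linear array is non-redundant when all pairwise separations are
    distinct, so every baseline occupies its own fringe frequency."""
    seps = [abs(a - b) for a, b in combinations(positions, 2)]
    return len(seps) == len(set(seps))

# Hypothetical layout in V-groove pitch units (perfect Golomb ruler of
# order 6); NOT the actual SPICA fiber configuration.
layout = [0, 1, 4, 10, 12, 17]
assert is_non_redundant(layout)                   # 15 distinct baselines
assert not is_non_redundant([0, 1, 2, 4, 8, 16])  # separation 1 repeats
```

Among non-redundant layouts, the most compact one is preferred because the
longest separation sets the highest fringe frequency that the detector
sampling must resolve.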
To estimate the absolute visibility of each pair of telescopes, it is
necessary to calibrate the photometric fluctuations resulting from the
imperfect fiber injection[30]. Contrary to a pairwise recombination, the
all-in-one recombination chosen for SPICA does not permit the estimation of
the individual intensities from the interferometric data. These intensities
are thus directly measured with a dedicated photometric channel. As
illustrated in Fig. 6, a beam splitter separates the light right after the
microlenses: 10% is sent to the photometric channel, which reimages the
V-groove on the detector, whereas 90% is combined by the interferometric
channel on a common focus and creates the fringe pattern.
Figure 6: Optical principle of the SPICA spectrograph. The red rays symbolize
the diffracting rays of an individual fiber. The two cylindrical lenses (FCL
and SCL) collimate the beam in the spectral direction; the photometric channel
does the same with its three achromatic doublets. The blue rays symbolize the
paraxial rays of each individual fiber output beam. The figure clearly shows,
side by side on the detector, the reimaging of the V-groove by the photometric
channel and the focusing at the common focus by the interferometric channel.
For clarity, only 3 of the 6 FOP have been drawn.
The interferometric channel is made of two cylindrical lenses performing the
anamorphosis of factor $AF\simeq 57$ necessary for reaching the sampling
criteria in the spectral and spatial directions. In the spectral direction,
the V-groove is reimaged by the first cylindrical lens (FCL) onto the object
focal plane of the second cylindrical lens (SCL). The SCL and the camera
achromatic doublet (CAO) play the role of the collimating optics of the
spectrograph in the spectral direction with the suitable magnification ratio
for the final spectral resolution. In the spatial direction, the six aligned
beams propagate without being affected, except from their natural diffraction,
until the CAO combines them at its focal plane to create the dispersed fringe
pattern.
Thanks to the presence of three different dispersing components on a
turntable, SPICA offers three different spectral resolution modes. For
achieving the low spectral resolution $R=140$, a mirror reflects the light
towards a pair of prisms made of F2. For reaching the two high spectral
resolutions $R=3000$ and $R=10000$, the light is dispersed by diffraction
gratings with 300 grooves/mm and 900 grooves/mm respectively.
Both the interferometric and photometric channels are dispersed by the same
component. Finally, the CAO images the dispersed fringe pattern of the
interferometric channel next to the six dispersed beams of the photometric
channel on the fast and low-noise EMCCD detector Andor iXon 888. The shortest
interfringe, at 0.650 µm, is sampled on 3 pixels, and each spectral resolution
element is sampled over 2 pixels. The dispersed fringe pattern of the low
spectral resolution mode is spread over an area of $400\times 1024$ pixels.
The dispersed image of the V-groove lies on $100\times 1024$ pixels next to
the fringe pattern. Only half of the detector ($500\times 80$ in $R=140$ and
$500\times 1024$ in $R=3000$ and $R=10000$) is used. This is required for
reaching the smallest detector integration time (20 ms), which guarantees
fringe acquisitions shorter than the atmospheric coherence time, typically 20
ms, when the fringe tracker is not operating.
## 3 SPICA-FT
We have already shown[19, 14] that reaching the limiting magnitude (mV around
8 or 9) required for the science program presented in Section 1.1 supposes the
possibility of single exposures longer than the usual value of 20 ms chosen to
correctly freeze the atmospheric piston at Mount Wilson. This opened the way
to the development of the fringe tracker SPICA-FT. Moreover, as mentioned
before, it has been decided to use the H band for SPICA-FT. It is also worth
noting that this choice benefits from higher visibilities in the fringe
tracker because of the longer wavelength.
This general idea was turned into an actual implementation of an
integrated-optics device combining the 6 CHARA beams on the ABCD principle.
The design was guided by the work done on VLTI/GRAVITY[31], with an adaptation
to 6 beams, to the H band, and to the CHARA Array. It was therefore decided to
simplify the project by using the H-band injection systems of the MIRC-X
instrument[22] as well as its new detector[23] in the low spectral resolution
mode ($R=20$, i.e. 5-6 spectral channels in the H band). With simple
calculations based on the GRAVITY-FT performance, and considering the
important reduction of the number of pixels used in the ABCD setup versus the
all-in-one combination of MIRC-X, it is anticipated that SPICA-FT may exhibit
a better sensitivity. Knowing that MIRC-X equipped with its new detector has
already reached mH=8.5, we expect to be able to achieve the required
performance ($\lambda/8$ at mH=8) very soon.
The IO device is used as the sensor of the fringe-tracking loop; a dedicated
optical path difference controller has been developed, and the loop is closed
on the existing fast stages of the CHARA main delay lines, also refurbished to
permit a fast dialog.
In Fig. 7, we present the IO device realized for the purpose of SPICA-FT. The
entrance of the chip is directly glued on the output side of a V-groove with
the 6 single-mode fibers connected to the injection systems of MIRC-X. A
dedicated microlens array (pitch 80 µm) is glued at the output side of the
chip to collimate each of the 60 beams independently. The beams are then
dispersed and reimaged with a 1x magnification on the C-RED ONE detector, so
that two outputs are separated by exactly 5 pixels on the detector. During the
first commissioning run in January 2020, we succeeded in getting signals with
the 5 telescopes available at that time, but we were not ready to close the
loops on the delay lines.
Figure 7: Left: design of the 6-beam ABCD combiner realized by VLC Photonics
(https://www.vlcphotonics.com/) on the design developed by the group on the
basis of the initial idea of P. Labeye[32]. Each entrance beam (on the left)
is divided into 5 equal parts through different splitter functions (60/40,
50/50, 66/33). The 15 pairs of beams are then combined through the ABCD cells
performing the adequate dephasing. The 60 outputs are then directed to the
right side of the chip. Right: actual on-sky image of the 60 dispersed outputs
of the SPICA-FT chip (dispersion is vertical in the figure and covers the H
band).
The fringe-tracker loop is based on the architecture described for the
GRAVITY[33] fringe tracker. The images of the detector are stored in a shared
memory and are accessible for recording and for processing[22]. The
phase-sensor process estimates in real time the 6 fluxes and the 15 complex
coherent fluxes, as well as their related variances, and stores them in a
second shared memory. From these quantities the optical path difference (OPD)
controller estimates the 15 group delays and the 15 phase delays, as well as
the 10 best closure phases and the 10 best quantities called group-delay
closure phases. The variances of these quantities are used in real time for
the decision process. The required OPDs are then computed, and the commands
for the group-delay loop and for the phase-delay loop are obtained by
comparing the actual measurements with the values of the reference matrix,
accounting for the internal or stellar closure phases. The guiding principle
of the state machine is to start the phase-delay closed loop as soon as the
group delay ensures that the delay is within the central fringe. The current
control loop is based on a simple integrator with one gain for the group delay
and one gain for the phase delay. In Fig. 8, we present the actual performance
reached on our testbench in Nice.
Figure 8: Closed-loop operation (group delay in the first part, phase delay in
the second part) obtained in September 2020 on the Nice testbed. Measurements
are made on 10 spectral channels over the H band, with a detector integration
time
of 8 ms, no perturbations except the lab turbulence. The group-delay gain has
been set to 0.05 while the phase-delay gain is set to 0.1. A residual piston
noise of 22 nm is obtained on any baseline.
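The core of the phase-delay loop described above, an ABCD phase estimator
feeding a pure integrator, can be sketched in a few lines. This is an
idealised single-baseline, noiseless illustration (ideal 0/90/180/270°
outputs, unit flux and visibility), not the SPICA-FT real-time code:

```python
import math

def abcd_phase(a, b, c, d):
    """Phase of the coherent flux from four outputs in quadrature
    (0, 90, 180, 270 deg): C = (A - C) + i (B - D)."""
    return math.atan2(b - d, a - c)

def fringes(phase):
    """Ideal ABCD intensities for unit flux and unit visibility."""
    return [1 + math.cos(phase - k * math.pi / 2) for k in range(4)]

# Close a pure-integrator phase-delay loop on a static piston.
lam = 1.65    # H-band wavelength [um]
piston = 0.30 # disturbance [um], within the central fringe
gain = 0.1    # integrator gain, as in the simple controller above
command = 0.0
for _ in range(200):
    residual_phase = 2 * math.pi * (piston - command) / lam
    est = abcd_phase(*fringes(residual_phase))
    command += gain * est * lam / (2 * math.pi)  # integrator update
# The command converges geometrically towards the 0.30 um piston.
```

The same structure, with the group-delay estimator and the state machine
deciding when the phase loop may close, is what runs against the CHARA delay
lines.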
Although the conditions of the experiment in the lab in Nice were not
representative of the sky, this result demonstrates the excellent behaviour of
the group-delay and phase-delay loops and made it possible to qualify the IO
chip. In the phase-delay loop we reached a residual rms of less than 22 nm,
which is very encouraging. We noticed residual internal closure phases at the
level of 4 $\mu$m that will be corrected in the final fabrication of the
chips. We also noticed some flux imbalance between the outputs of the ABCD
chips. In principle each entrance beam illuminates 20 outputs (5 ABCD cells),
and thus each output should receive 5% of the flux. We typically see flux
variations from 3 to 7%. Thanks to the different photometric measurements made
on the chips, it has been understood that the 66/33 splitter function was in
fact providing a flux repartition close to 80/20, and the 50/50 one close to
60/40. The new fabrication planned for the first half of 2021 will correct
this through an adaptation of the design of these individual functions.
## 4 FT simulator
The fringe-tracking optical bench cannot simulate every disturbance scheme
expected on-sky. That is why we developed a fringe-tracker simulator that
eases the understanding of the fringe-tracking servo loop. It is a
Python-adapted version of the IDL code developed by E. Choquet[34] that was
used to optimise the GRAVITY fringe tracker[33]. Furthermore, this simulator
benefits from the SPICA-FT experience and will be enriched with many different
interferometer configurations and optimised servo loops in order to study new
fringe-tracking logics and fringe-sensing techniques.
### 4.1 Design
As explained before, the fringe sensor provides two levels of measurement.
First, the phase-delay estimator is precise but wrapped over one wavelength.
Using this estimator, the fringe tracker calculates, with a simple integrator
controller, a precise command confined within one wavelength. Second, the
group-delay estimator is noisier but sensitive to the phase jumps exceeding
one wavelength. It makes it possible to compute, with another simple
integrator controller, a command that is less precise but able to restore the
tracking reference position, the zero group delay. Combining these two levels
of correction is possible only by means of a non-linear logic that smartly
synchronises them. To get as much freedom as possible when developing this
non-linear logic, we chose a temporal simulator rather than the more standard
frequency-domain simulation tools suitable for linear commands.
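Why the group delay disambiguates OPDs beyond one wavelength can be seen with
a small sketch: across several spectral channels, the phase slope versus
wavenumber yields an estimator whose unambiguous range is $1/\Delta\sigma$,
tens of microns here, far beyond one wavelength. The channel layout and OPD
below are illustrative assumptions, not the SPICA-FT parameters:

```python
import cmath

# Six illustrative spectral channels across the H band, uniform in
# wavenumber sigma = 1/lambda [1/um].
NW = 6
sig_min, sig_max = 1.0 / 1.80, 1.0 / 1.50
d_sig = (sig_max - sig_min) / (NW - 1)
sigmas = [sig_min + k * d_sig for k in range(NW)]

opd = 3.2  # um: more than one H-band wavelength, so the phase delay wraps

# Noiseless coherent fluxes for this OPD.
C = [cmath.exp(2j * cmath.pi * s * opd) for s in sigmas]

# Phase delay at a central channel: precise but wrapped modulo lambda.
phase_delay = cmath.phase(C[NW // 2])

# Group delay: phase of the cross-spectrum between adjacent channels,
# unambiguous over 1/d_sig (~45 um with this layout).
P = sum(C[k + 1] * C[k].conjugate() for k in range(NW - 1))
group_delay = cmath.phase(P) / (2 * cmath.pi * d_sig)  # recovers ~3.2 um
```

This is exactly the division of labour described above: the group delay
restores the zero-OPD reference over a wide capture range, and the phase delay
then provides the fine, sub-wavelength correction.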
Fig. 9 illustrates the structure of the fringe-tracking loop as modeled by the
simulator. It is designed as a modular package where each module accounts for
a distinct function (fringe sensing, fringe tracking, noise, delay-line model,
…), making it possible to test many different component architectures.
Figure 9: Structure of the fringe-tracking simulator. The respective coherence
matrices $\Phi_{o}$ and $\Phi_{d}$ of an object and of a realistic disturbance
pattern are created for $NW$ wavelengths and $NB$ baselines. During the
simulation, $\Phi_{o}$ is static whereas $\Phi_{d}$ varies to simulate the
atmosphere. At a time $t$, they are coherently multiplied and propagated into
a fringe sensor that detects the signal on a spectrograph with $NP$ pixels and
$MW$ spectral channels. An estimated image is processed based on a noise
model, and the coherence matrix $\Phi_{est}$ is estimated. From this, the
fringe tracker derives the piston commands $U_{p}$ for the delay lines. The
final coherence matrix $\Phi_{c}$ associated with the delay-line correction is
calculated using its unitary response and is coherently multiplied with the
matrices $\Phi_{o}$ and $\Phi_{d}$ at time $t+1$. The matrix $\Phi_{true}$ of
the new residual coherences is then propagated.
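The loop of Fig. 9 can be reduced to the following skeleton, with the coherence propagation collapsed to per-baseline phases. All function names, the one-step command hold, and the pure-integrator controller are illustrative assumptions, not the simulator's actual interface.

```python
import numpy as np

def run_loop(n_steps, n_baselines, disturbance, sense, track, correct):
    """Skeleton of the temporal servo loop.

    disturbance(t) -> disturbance phases at time t,
    sense(phi)     -> (noisy) phase estimates per baseline,
    track(phi_est) -> command increments,
    correct(u)     -> correction phases applied by the delay lines.
    The (static) object term is omitted for brevity.
    """
    u = np.zeros(n_baselines)            # piston commands, held one step
    residuals = np.empty((n_steps, n_baselines))
    for t in range(n_steps):
        phi_true = disturbance(t) - correct(u)   # residual after last command
        u = u + track(sense(phi_true))           # integrate new correction
        residuals[t] = phi_true
    return residuals
```

With an identity sensor and a simple gain, a constant disturbance is driven geometrically toward zero, which is the closed-loop behaviour the temporal simulator reproduces with realistic sensing and noise models in place.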
### 4.2 Results
We simulate fringe tracking with SPICA-FT of an unresolved target at CHARA for
typical atmospheric conditions. According to previous measurements of the
atmosphere behaviour at CHARA[35, 36], the outer scale $L_{0}$ is estimated at
25 m. This allows the realistic assumption that the disturbances on all
telescopes are totally decorrelated, even though a correlation is sometimes
observed on the shortest baselines. We thus generate 6 independent temporal
pistons representing the typical conditions of the 80th percentile of the
summer seeing at Mount Wilson ($r_{0}=15$ cm and $t_{0}=10$
ms). The power spectral density of a disturbance with average wind speed $W=5$
m/s above telescopes of diameter $d=1$ m shows three regimes[37, 38, 39, 36].
At low frequency, the atmospheric disturbance is proportional to $\nu$ until
reaching the low-frequency cut-off $\nu_{0}=0.2W/L_{0}\simeq 0.04$ Hz, above
which it decreases as $\nu^{-2/3}$. Above the high-frequency cut-off
$\nu_{1}=0.3W/d\simeq 1.5$ Hz, which represents the filtering of the highest
frequencies by each individual telescope, it becomes proportional to
$\nu^{-8.5/3}$.
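This three-regime shape can be sketched piecewise, with continuity enforced at the two cut-offs. The overall normalisation here is arbitrary (the true PSD level depends on $r_0$ and the Kolmogorov constant, which are not modelled).

```python
import numpy as np

def atm_psd(nu, W=5.0, L0=25.0, d=1.0):
    """Three-regime shape of the atmospheric disturbance PSD
    (arbitrary normalisation; continuous at both cut-offs)."""
    nu0 = 0.2 * W / L0            # low-frequency cut-off, ~0.04 Hz here
    nu1 = 0.3 * W / d             # telescope filtering cut-off, ~1.5 Hz
    nu = np.asarray(nu, dtype=float)
    return np.where(nu < nu0, nu / nu0,                        # ∝ ν
           np.where(nu < nu1, (nu / nu0) ** (-2.0 / 3.0),      # ∝ ν^(-2/3)
                    (nu1 / nu0) ** (-2.0 / 3.0)
                    * (nu / nu1) ** (-8.5 / 3.0)))             # ∝ ν^(-8.5/3)
```

A curve generated this way can be used to colour white noise when synthesising the 6 independent temporal pistons mentioned above.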
The electromagnetic field of an unresolved object is propagated through the
disturbed atmosphere and the CHARA delay lines. The final flux received by the
detector accounts for a typical total coherent throughput of 2% in H band,
leading to an irradiance per telescope $N=1.66\cdot 10^{5}$ ph/s for $H=7$.
The C-RED ONE camera of SPICA-FT uses an electron avalanche photo-diode
detector of $320\times 256$ pixels and is expected to work most of the time
with the spectral resolution 22, meaning 4 spectral channels between 1.45 and
1.75 µm. According to Lanthermann et al.[40], we can model it with the excess
noise factor $ENF=1.47$ and the dark current and readout noise gathered within
an additive Gaussian noise of dispersion $\sigma_{tot}=0.5$ e-/pix/frame when
used at 300 Hz, its optimal working mode. A latency of 2 frames is chosen.
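The quoted $H=7$ irradiance can be scaled to other magnitudes and folded with the stated noise terms as follows. The per-pixel SNR formula, and in particular the way the excess noise factor inflates the photon-noise variance, are illustrative assumptions rather than the simulator's exact noise model.

```python
import math

def photons_per_frame(H, frame_rate=300.0, n7=1.66e5):
    """Photons per telescope per frame, scaled from the quoted
    N = 1.66e5 ph/s at H = 7 (300 Hz frame rate by default)."""
    return n7 * 10 ** (-0.4 * (H - 7.0)) / frame_rate

def pixel_snr(n_ph, enf=1.47, sigma_ro=0.5):
    """Toy SNR: photon-noise variance inflated by the excess noise
    factor, plus the additive Gaussian term (0.5 e-/pix/frame)."""
    return n_ph / math.sqrt(enf * n_ph + sigma_ro ** 2)
```

This makes explicit why the magnitude 7.5 result below is a sensitivity trade-off: each half-magnitude removes roughly 37% of the photons per frame.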
Fig. 10(a) shows the residual OPD on three of the five baselines involving the
telescope E1, after running a 30-second simulation on a target of magnitude 6
with all the parameters given above. The gains of the phase-delay and
group-delay integrators have been optimised to 0.2 and 0.5 respectively, the
group-delay loop working with an integration time of 40 frames. We see that
the group-delay command occasionally jumps, corresponding to moments when the
OPD variation induced by the atmospheric disturbance is too fast. Setting
aside these group-delay jumps, the residuals remain below 50 nm RMS on all
baselines, i.e. $\lambda/33$ at 1.65 µm and $\lambda/15$ at 750 nm. This is
still within the SPICA requirement for single integrations longer than 200 ms.
We observe in Fig. 10(b) that it is possible to reach magnitude 7.5 at the
cost of a 20% degradation of the transfer function.
This result has been obtained with the model of the first version of the
fringe tracker, based on two smartly synchronised integrated commands. Yet the
experience with GRAVITY[33] demonstrated the interest of a Kalman controller,
which brings more robustness against predictable fringe losses.
(a) Residual optical path delay after a simulated correction with SPICA-FT on
6 telescopes in good observing conditions. The residual OPD RMS is given on
the right.
(b) Evolution of the correction performance with the target magnitude. The
variance of the OPD, in µm, is the average of the variances of all 50 ms
temporal samples in the last third of the simulation.
Figure 10: Simulations results of SPICA-FT in typical observing conditions.
### 4.3 Perspectives
In addition to the short-term usage of the simulator for optimising SPICA-FT,
it is intended to become a tool for further investigations on the ideal fringe
tracker associated with interferometers with $N\geq 6$ telescopes. The
development of more sensitive fringe sensing techniques and their associated
servo loops could push the capacities of fringe trackers on fainter objects.
SPICA-FT will of course be the first beneficiary of potential improvements
brought by this study.
With the recent generalisation of the spatial filtering offered by optical
fibers, fringe tracking is partly limited by the photometric drops caused by
imperfect injection. These drops play a role in two open questions. First,
although an $N$-telescope interferometer involves only $N-1$ OPDs to correct,
it provides $N(N-1)/2$ independent measurements of them. The question of using
only a subset of the baselines to maximise sensitivity has often been posed in
the past[41]. But the photometric drops regularly reduce or null the
visibility of the different baselines. Based on the experience with GRAVITY,
SPICA-FT currently follows the conservative approach of using all the
baselines for robustness. However, with its 5 OPDs and 15 baselines, the gap
is even bigger than for GRAVITY, which tracks 3 OPDs with 6 baselines. This
question will therefore be further investigated.
Second, these drops take the fringe tracker out of its linear regime,
demanding a non-linear response to be corrected. Both GRAVITY and SPICA-FT are
equipped with non-linear controls, but other logics remain to be studied. The
temporal-domain simulator makes it possible to test these new schemes.
Furthermore, to get the information on the phase, SPICA-FT uses either the
MIRC-X all-in-one configuration or the integrated-optics component encoding
the fringes following the ABCD principle. The simulator enables us to study
more sensitive fringe-demodulation techniques by getting closer to the Nyquist
criterion limit.
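The ABCD principle mentioned above amounts to sampling the fringe at four quadrature points. The textbook estimator is shown below; a real integrated-optics component requires calibration of phase shifts, throughputs, and photometry, none of which is modelled here.

```python
import math

def abcd_phase(a, b, c, d):
    """Fringe phase from four samples taken 90 degrees apart:
    A - C isolates V*cos(phi), D - B isolates V*sin(phi)."""
    return math.atan2(d - b, a - c)
```

For instance, with samples $I_k = 1 + V\cos(\varphi + k\pi/2)$, $k=0..3$, the estimator returns $\varphi$ independently of the visibility $V$.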
## 5 Conclusion
We have given an overview of the developments of the two main parts of the
CHARA/SPICA instrument made at Observatoire de la Côte d’Azur.
SPICA-FT is based on a 6T-ABCD integrated-optics beam combiner for the
encoding of the 15 baselines and a servo logic inspired by the fringe tracker
of VLTI/GRAVITY. The preliminary tests on-sky and in the laboratory give
confidence in its capacity to track the fringes with residuals lower than
$\lambda/33$ at 1.65 µm up to magnitude 7-8. New architectures and servo
logics may improve its performance in the coming years.
SPICA-VIS is designed to reach the high accuracy on faint targets and the low
visibilities necessary for the direct diameter measurement of a large number
of stars, ranging from M to O spectral types. The low spectral resolution
$R=140$ over the 0.6-0.9 µm bandwidth brings the sensitivity necessary to
reach stars of magnitude 8 and an interesting coverage of the (u,v) plane for
surface imaging of suited stars. The higher spectral modes $R=3000$ and
$R=10\,000$ will give access to important knowledge of stellar activity. It
will measure visibilities of 0.1 with an accuracy of 1%. The high
signal-to-noise ratio necessary to reach these performances is made possible
by the new fringe tracker and the spatial filtering properties of single-mode
fibers. It is expected to be on sky at the end of 2021 and to start science
operations by mid-2022.
###### Acknowledgements.
The CHARA/SPICA instrument is funded by CNRS, Université Côte d’Azur,
Observatoire de la Côte d’Azur, and by the Région Sud. The CHARA Array is
supported by the National Science Foundation under Grant No. AST-1636624 and
AST-1715788. Institutional support has been provided from the GSU College of
Arts and Sciences and the GSU Office of the Vice President for Research and
Economic Development. The postdoc fellowship of FP is funded through the
European H2020 OPTICON program, under grant agreement n° 730890. FC thanks
support from Onera’s Direction Scientifique Générale.
## References
* [1] Perraut, K., Cunha, M., Romanovskaya, A., Shulyak, D., Ryabchikova, T., Hocdé, V., Nardetto, N., Mourard, D., Meilland, A., Morand, F., Tallon-Bosc, I., Farrington, C., and Lanthermann, C., “Benchmarking the fundamental parameters of Ap stars with optical long-baseline interferometric measurements,” 13 (2020).
* [2] Ligi, R., Dorn, C., Crida, A., Lebreton, Y., Creevey, O., Borsa, F., Mourard, D., Nardetto, N., Tallon-Bosc, I., Morand, F., and Poretti, E., “From the stellar properties of HD 219134 to the internal compositions of its transiting exoplanets,” 12 (2019).
* [3] Creevey, O. L., Thévenin, F., Berio, P., Heiter, U., Kervella, P., Morel, P., Pichon, B., Chiavassa, A., Nardetto, N., Perraut, K., Meilland, A., and Alister, H. A. M., “Benchmark stars for Gaia Fundamental properties of the Population II star HD 140283 from interferometric, spectroscopic, and photometric data,” 18 (2015).
* [4] Mayor, M. and Queloz, D., “A Jupiter-mass companion to a solar-type star,” Nature 378, 355–359 (Nov. 1995).
* [5] Bruntt, H., “Asteroseismology with the WIRE satellite,” Communications in Asteroseismology 150, 326 (June 2007).
* [6] Matthews, J., Kuschnig, R., Lanting, T., and Walker, G., “Ultraprecise photometry from space: simulations of the MOST space telescope performance.,” JRASC 93, 184 (Aug. 1999).
* [7] Baglin, A., Auvergne, M., Catala, C., Michel, E., Goupil, M. J., Samadi, R., Popielsky, B., and COROT Team, “The COROT Mission and its Seismology Programme (invited paper),” in [IAU Colloq. 185: Radial and Nonradial Pulsationsn as Probes of Stellar Physics ], Aerts, C., Bedding, T. R., and Christensen-Dalsgaard, J., eds., Astronomical Society of the Pacific Conference Series 259, 626 (2002).
* [8] Borucki, W. J., Koch, D., Basri, G., Batalha, N., Brown, T., Caldwell, D., Caldwell, J., Christensen-Dalsgaard, J., Cochran, W. D., DeVore, E., Dunham, E. W., Dupree, A. K., Gautier, T. N., Geary, J. C., Gilliland, R., Gould, A., Howell, S. B., Jenkins, J. M., Kondo, Y., Latham, D. W., Marcy, G. W., Meibom, S., Kjeldsen, H., Lissauer, J. J., Monet, D. G., Morrison, D., Sasselov, D., Tarter, J., Boss, A., Brownlee, D., Owen, T., Buzasi, D., Charbonneau, D., Doyle, L., Fortney, J., Ford, E. B., Holman, M. J., Seager, S., Steffen, J. H., Welsh, W. F., Rowe, J., Anderson, H., Buchhave, L., Ciardi, D., Walkowicz, L., Sherry, W., Horch, E., Isaacson, H., Everett, M. E., Fischer, D., Torres, G., Johnson, J. A., Endl, M., MacQueen, P., Bryson, S. T., Dotson, J., Haas, M., Kolodziejczak, J., Van Cleve, J., Chandrasekaran, H., Twicken, J. D., Quintana, E. V., Clarke, B. D., Allen, C., Li, J., Wu, H., Tenenbaum, P., Verner, E., Bruhweiler, F., Barnes, J., and Prsa, A., “Kepler Planet-Detection Mission: Introduction and First Results,” Science 327, 977 (Feb. 2010).
* [9] Pietrzyński, G., Graczyk, D., Gallenne, A., Gieren, W., Thompson, I. B., Pilecki, B., Karczmarek, P., Górski, M., Suchomska, K., Taormina, M., Zgirski, B., Wielgórski, P., Kołaczkowski, Z., Konorski, P., Villanova, S., Nardetto, N., Kervella, P., Bresolin, F., Kudritzki, R. P., Storm, J., Smolec, R., and Narloch, W., “A distance to the Large Magellanic Cloud that is precise to one per cent,” Nature 567, 200–203 (Mar. 2019).
* [10] Graczyk, D., Pietrzynski, G., Thompson, I. B., Gieren, W., Zgirski, B., Villanova, S., Gorski, M., Wielgorski, P., Karczmarek, P., Narloch, W., Pilecki, B., Taormina, M., Smolec, R., Suchomska, K., Gallenne, A., Nardetto, N., Storm, J., Kudritzki, R.-P., Kaluszynski, M., and Pych, W., “A distance determination to the Small Magellanic Cloud with an accuracy of better than 2 percent based on late-type eclipsing binary stars,” arXiv:2010.08754 [astro-ph] (Oct. 2020). arXiv: 2010.08754.
* [11] Chelli, A., Duvert, G., Bourgès, L., Mella, G., Lafrasse, S., Bonneau, D., and Chesneau, O., “Pseudomagnitudes and differential surface brightness: Application to the apparent diameter of stars,” Astronomy & Astrophysics 589, A112 (May 2016).
* [12] Nardetto, N., “Pulsating stars and eclipsing binaries as distances indicators in the universe,” (2018).
* [13] Salsi, A., Nardetto, N., Mourard, D., Creevey, O., Huber, D., White, T. R., Hocdé, V., Morand, F., Tallon-Bosc, I., Farrington, C. D., Chelli, A., and Duvert, G., “Precise calibration of the dependence of surface brightness-colour relations on colour and class for late-type stars,” Astronomy & Astrophysics 640, A2 (Aug. 2020). arXiv: 2007.01906.
* [14] Mourard, D., Berio, P., Clausse, J.-M., Martinod, M.-A., Nardetto, N., Perraut, K., Bailet, C., Bresson, Y., Cassaing, F., Dejonghe, J., Lagarde, S., Michau, V., Petit, C., Tallon, M., Tallon-Bosc, I., and ten Brummelaar, T., “SPICA, a new 6T visible beam combiner for CHARA: science, design and interfaces,” in [Optical and Infrared Interferometry and Imaging VI ], Mérand, A., Creech-Eakman, M. J., and Tuthill, P. G., eds., Proc. SPIE 10701, 55, SPIE, Austin, United States (July 2018).
* [15] Mourard, D., Clausse, J. M., Marcotto, A., Perraut, K., Tallon-Bosc, I., Bério, P., Blazit, A., Bonneau, D., Bosio, S., Bresson, Y., Chesneau, O., Delaa, O., Hénault, F., Hughes, Y., Lagarde, S., Merlin, G., Roussel, A., Spang, A., Stee, P., Tallon, M., Antonelli, P., Foy, R., Kervella, P., Petrov, R., Thiebaut, E., Vakili, F., McAlister, H., ten Brummelaar, T., Sturmann, J., Sturmann, L., Turner, N., Farrington, C., and Goldfinger, P. J., “VEGA: Visible spEctroGraph and polArimeter for the CHARA array: principle and performance,” Astronomy & Astrophysics 508, 1073–1083 (Dec. 2009).
* [16] Mourard, D., Bério, P., Perraut, K., Ligi, R., Blazit, A., Clausse, J. M., Nardetto, N., Spang, A., Tallon-Bosc, I., Bonneau, D., Chesneau, O., Delaa, O., Millour, F., Stee, P., Bouquin, J. B. L., Goldfinger, P. J., and Monnier, J. D., “Spatio-spectral encoding of fringes in optical long-baseline interferometry,” 531, 9 (2011).
* [17] ten Brummelaar, T. A., McAlister, H. A., Ridgway, S. T., Bagnuolo, Jr., W. G., Turner, N. H., Sturmann, L., Sturmann, J., Berger, D. H., Ogden, C. E., Cadman, R., Hartkopf, W. I., Hopper, C. H., and Shure, M. A., “First Results from the CHARA Array. II. A Description of the Instrument,” The Astrophysical Journal 628, 453–465 (July 2005).
* [18] Garcia, E. V., Muterspaugh, M. W., van Belle, G., Monnier, J. D., Stassun, K. G., Ghasempour, A., Clark, J. H., Zavala, R. T., Benson, J. A., Hutter, D. J., Schmitt, H. R., Baines, E. K., Jorgensen, A. M., Strosahl, S. G., Sanborn, J., Zawicki, S. J., Sakosky, M. F., and Swihart, S., “VISION: A Six-Telescope Fiber-Fed Visible Light Beam Combiner for the Navy Precision Optical Interferometer,” Publications of the Astronomical Society of the Pacific 128, 055004 (May 2016). arXiv: 1601.00036.
* [19] Mourard, D., Bério, P., Perraut, K., Clausse, J.-M., Creevey, O., Martinod, M.-A., Meilland, A., Millour, F., and Nardetto, N., “SPICA, Stellar Parameters and Images with a Cophased Array: a 6T visible combiner for the CHARA array,” Journal of the Optical Society of America A 34, A37 (May 2017).
* [20] Mourard, D., Challouf, M., Ligi, R., Bério, P., Clausse, J.-M., Gerakis, J., Bourges, L., Nardetto, N., Perraut, K., Tallon-Bosc, I., McAlister, H., ten Brummelaar, T., Ridgway, S., Sturmann, J., Sturmann, L., Turner, N., Farrington, C., and Goldfinger, P. J., “Performance, results, and prospects of the visible spectrograph VEGA on CHARA,” 84450K, SPIE, Amsterdam, Netherlands (Sept. 2012).
* [21] ten Brummelaar, T. A., Sturmann, J., Ridgway, S. T., Sturmann, L., Turner, N. H., McAlister, H. A., Farrington, C. D., Beckmann, U., Weigelt, G., and Shure, M., “The CLASSIC/CLIMB beam combiner at the CHARA array,” Journal of Astronomical Instrumentation 02, 1340004 (Dec. 2013).
* [22] Anugu, N., Le Bouquin, J.-B., Monnier, J. D., Kraus, S., Setterholm, B. R., Labdon, A., Davies, C. L., Lanthermann, C., Gardner, T., Ennis, J., et al., “MIRC-X: a highly-sensitive six telescope interferometric imager at the CHARA array,” The Astronomical Journal 160(4), 158 (2020).
* [23] Lanthermann, C., Le Bouquin, J.-B., Anugu, N., Monnier, J., and Kraus, S., “Astronomical interferometry with near-IR e-APD at CHARA: characterization, optimization and on-sky operation,” in [High Energy, Optical, and Infrared Detectors for Astronomy VIII ], Proc. SPIE 10709, 1070914, International Society for Optics and Photonics (2018).
* [24] Pannetier, C., Mourard, D., Cassaing, F., Lagarde, S., Le Bouquin, J., Sturmann, J., and ten Brummelaar, T., “Compensation of differential dispersion: application to multiband stellar interferometry,” Monthly Notices of the Royal Astronomical Society (to be submitted).
* [25] Berger, “Preliminary results from the longitudinal dispersion compensation system for the CHARA array,” (2003).
* [26] V. Coudé du Foresto, Ridgway, S., and Mariotti, J.-M., “Deriving object visibilities from interferograms obtained with a fiber stellar interferometer,” Astron. Astrophys. Suppl. Ser. 121(2), 379–392 (1997).
* [27] Kraus, S., Monnier, J. D., Anugu, N., Bouquin, J.-B. L., Davies, C. L., Ennis, J., Labdon, A., Lanthermann, C., Setterholm, B., and Brummelaar, T. t., “The MIRC-X 6-telescope imager: Key science drivers, instrument design and operation,” arXiv:1807.03794 [astro-ph] (July 2018). arXiv: 1807.03794.
* [28] Narsireddy, A., et al., “CHARA array adaptive optics: complex operational software and performance,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 259 (Dec. 2020).
* [29] Martinod, M. A., Mourard, D., Bério, P., Perraut, K., Meilland, A., Bailet, C., Bresson, Y., ten Brummelaar, T., Clausse, J. M., Dejonghe, J., Ireland, M., Millour, F., Monnier, J. D., Sturmann, J., Sturmann, L., and Tallon, M., “Fibered visible interferometry and adaptive optics: FRIEND at CHARA,” Astronomy & Astrophysics 618, A153 (Oct. 2018).
* [30] Martinod, M. A., Berio, P., Mourard, D., Perraut, K., Meilland, A., Millour, F., Clausse, J. M., Spang, A., Bresson, Y., Dejonghe, J., Bailet, C., Tallon-Bosc, I., and Tallon, M., “Long baseline interferometry in the visible: first results of the FRIEND project,” Proc. SPIE 9907, 99071H (Aug. 2016).
* [31] Perraut, K., Jocou, L., Berger, J. P., Chabli, A., Cardin, V., Chamiot-Maitral, G., Delboulbé, A., Eisenhauer, F., Gambérini, Y., Gillessen, S., Guieu, S., Guerrero, J., Haug, M., Hausmann, F., Joulain, F., Kervella, P., Labeye, P., Lacour, S., Lanthermann, C., Lapras, V., Le Bouquin, J. B., Lippa, M., Magnard, Y., Moulin, T., Noël, P., Nolot, A., Patru, F., Perrin, G., Pfuhl, O., Pocas, S., Poulain, S., Scibetta, C., Stadler, E., Templier, R., Ventura, N., Vizioz, C., Amorim, A., Brandner, W., and Straubmeier, C., “Single-mode waveguides for GRAVITY: I. The cryogenic 4-telescope integrated optics beam combiner,” Astronomy & Astrophysics 614, A70 (June 2018).
* [32] Labeye, P., Composants optiques intégrés pour l’Interférométrie astronomique, theses, Institut National Polytechnique de Grenoble - INPG (Feb. 2008).
* [33] Lacour, S., Dembet, R., Abuter, R., Fédou, P., Perrin, G., Choquet, É., Pfuhl, O., Eisenhauer, F., Woillez, J., Cassaing, F., Wieprecht, E., Ott, T., Wiezorrek, E., Tristram, K. R. W., Wolff, B., Ramírez, A., Haubois, X., Perraut, K., Straubmeier, C., Brandner, W., and Amorim, A., “The GRAVITY fringe tracker,” Astronomy & Astrophysics 624, A99 (Apr. 2019).
* [34] Choquet, É., Menu, J., Perrin, G., Cassaing, F., Lacour, S., and Eisenhauer, F., “Comparison of fringe-tracking algorithms for single-mode near-infrared long-baseline interferometers,” Astronomy & Astrophysics 569, A2 (Sept. 2014). arXiv: 1406.4391.
* [35] Berger, D. H., Monnier, J. D., Millan-Gabet, R., ten Brummelaar, T. A., Anderson, M., Blum, J. L., Blasius, T., Pedretti, E., and Thureau, N., “CHARA Michigan phase-tracker (CHAMP): a preliminary performance report,” Proc. SPIE 7013, 701319 (July 2008).
* [36] Colavita, M. M., Shao, M., and Staelin, D. H., “Atmospheric phase measurements with the Mark III stellar interferometer,” Applied Optics 26, 4106 (Oct. 1987).
* [37] Conan, J.-M., Rousset, G., and Madec, P.-Y., “Wave-front temporal spectra in high-resolution imaging through turbulence,” Journal of the Optical Society of America A 12, 1559 (July 1995).
* [38] Avila, R., Ziad, A., Borgnino, J., Martin, F., Agabi, A., and Tokovinin, A., “Theoretical spatiotemporal analysis of angle of arrival induced by atmospheric turbulence as observed with the grating scale monitor experiment,” Journal of the Optical Society of America A 14, 3070 (Nov. 1997).
* [39] Buscher, D. F., Young, J. S., Baron, F., and Haniff, C. A., “Fringe tracking and spatial filtering: phase jumps and dropouts,” in [Optical and Infrared Interferometry ], Proc. SPIE 7013, 70131D, International Society for Optics and Photonics (2008).
* [40] Lanthermann, C., Anugu, N., Bouquin, J.-B. L., Monnier, J. D., Kraus, S., and Perraut, K., “Modeling the e-APD SAPHIRA/C-RED ONE camera at low flux level: an attempt of photon counting in the near-infrared with the MIRC-X interferometric combiner,” Astronomy & Astrophysics 625, A38 (May 2019). arXiv: 1903.08444.
* [41] Houairi, K., Cassaing, F., Perrin, G., Eisenhauer, F., Brandner, W., Straubmeier, C., and Gillessen, S., “Fringe tracking optimization with 4 beams: application to GRAVITY,” 70131B (July 2008).
# Attention Can Reflect Syntactic Structure
_(If You Let It)_
Vinit Ravishankar†∗  Artur Kulmizev‡∗  Mostafa Abdou§  Anders Søgaard§  Joakim Nivre‡
† Language Technology Group, Department of Informatics, University of Oslo
‡ Department of Linguistics and Philology, Uppsala University
§ Department of Computer Science, University of Copenhagen
†<EMAIL_ADDRESS>‡<EMAIL_ADDRESS>§<EMAIL_ADDRESS>
∗ Equal contribution. Order was decided by a coin toss.
###### Abstract
Since the popularization of the Transformer as a general-purpose feature
encoder for NLP, many studies have attempted to decode linguistic structure
from its novel multi-head attention mechanism. However, much of such work
focused almost exclusively on English — a language with rigid word order and a
lack of inflectional morphology. In this study, we present decoding
experiments for multilingual BERT across 18 languages in order to test the
generalizability of the claim that dependency syntax is reflected in attention
patterns. We show that full trees can be decoded above baseline accuracy from
single attention heads, and that individual relations are often tracked by the
same heads across languages. Furthermore, in an attempt to address recent
debates about the status of attention as an explanatory mechanism, we
experiment with fine-tuning mBERT on a supervised parsing objective while
freezing different series of parameters. Interestingly, in steering the
objective to learn explicit linguistic structure, we find much of the same
structure represented in the resulting attention patterns, with interesting
differences with respect to which parameters are frozen.
## 1 Introduction
In recent years, the attention mechanism proposed by Bahdanau et al. (2014)
has become an indispensable component of many NLP systems. Its widespread
adoption was, in part, heralded by the introduction of the Transformer
architecture (Vaswani et al., 2017a), which constrains a soft alignment to be
learned across discrete states in the input (self-attention), rather than
across input and output (e.g., Xu et al., 2015; Rocktäschel et al., 2015). The
Transformer has, by now, supplanted the popular LSTM (Hochreiter and
Schmidhuber, 1997) as NLP’s feature-encoder-of-choice, largely due to its
compatibility with parallelized training regimes and ability to handle long-
distance dependencies.
Certainly, the nature of attention as a distribution over tokens lends itself
to a straightforward interpretation of a model’s inner workings. Bahdanau et
al. (2014) illustrate this nicely in the context of seq2seq machine
translation, showing that the attention learned by their models reflects
expected cross-lingual idiosyncrasies between English and French, e.g.,
concerning word order. With self-attentive Transformers, interpretation
becomes slightly more difficult, as attention is distributed across words
within the input itself. This is further compounded by the use of multiple
layers and heads, each combination of which yields its own alignment,
representing a different (possibly redundant) view of the data. Given the
similarity of such attention matrices to the score matrices employed in arc-
factored dependency parsing (McDonald et al., 2005a, b), a salient question
concerning interpretability becomes: Can we expect some combination of these
parameters to capture linguistic structure in the form of a dependency tree,
especially if the model performs well on NLP tasks? If not, can we relax the
expectation and examine the extent to which subcomponents of the linguistic
structure, such as subject-verb relations, are represented? This prospect was
first posed by Raganato et al. (2018) for MT encoders, and later explored by
Clark et al. (2019) for BERT. Ultimately, the consensus of these and other
studies (Voita et al., 2019; Htut et al., 2019; Limisiewicz et al., 2020) was
that, while there appears to exist no “generalist” head responsible for
extracting full dependency structures, standalone heads often specialize in
capturing individual grammatical relations.
Unfortunately, most such studies focused their experiments entirely on
English, which is typologically favored to succeed in such scenarios due to
its rigid word order and lack of inflectional morphology. It remains to be
seen whether the attention patterns of such models can capture structural
features across typologically diverse languages, or if the reported
experiments on English merely mistake local positional heuristics for
syntactic structure. Furthermore, though previous work has investigated how attention
patterns might change after fine-tuning on different tasks (Htut et al.,
2019), a recent debate about attention as an explanatory mechanism (Jain and
Wallace, 2019; Wiegreffe and Pinter, 2019) has cast the entire enterprise in
doubt. Indeed, it remains to be seen whether fine-tuning on an explicit
structured prediction task, e.g. dependency parsing, can force attention to
represent the structure being learned, or if the patterns observed in
pretrained models are not altered in any meaningful way.
To address these issues, we investigate the prospect of extracting linguistic
structure from the attention weights of multilingual Transformer-based
language models. In light of the surveyed literature, our research questions
are as follows:
1. Can we decode dependency trees for some languages better than others?
2. Do the same layer–head combinations track the same relations across languages?
3. How do attention patterns change after fine-tuning with explicit syntactic annotation?
4. Which components of the model are involved in these changes?
In answering these questions, we believe we can shed further light on the
(cross-)linguistic properties of Transformer-based language models, as well as
address the question of attention patterns being a reliable representation of
linguistic structure.
## 2 Attention as Structure
#### Transformers
The focus of the present study is mBERT, a multilingual variant of the
exceedingly popular language model (Devlin et al., 2019). BERT is built upon
the Transformer architecture (Vaswani et al., 2017b), which is a self-
attention-based encoder-decoder model (though only the encoder is relevant to
our purposes). A Transformer takes a sequence of vectors
$\mathbf{x}=[\mathbf{x_{1}},\mathbf{x_{2}},...\mathbf{x_{n}}]$ as input and
applies a positional encoding to them, in order to retain the order of words
in a sentence. These inputs are then transformed into query ($Q$), key ($K$),
and value ($V$) vectors via three separate linear transformations and passed
to an attention mechanism. A single attention head computes scaled dot-product
attention between $K$ and $Q$, outputting a weighted sum of $V$:
$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_{k}}}\right)V$
(1)
For multihead attention (MHA), the same process is repeated for $k$ heads,
allowing the model to jointly attend to information from different
representation subspaces at different positions (Vaswani et al., 2017b).
Ultimately, the output of all heads is concatenated and passed through a
linear projection $W^{O}$:
$H_{i}=\mathrm{Attention}\left(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}\right)$ (2)
$\mathrm{MHA}(Q,K,V)=\mathrm{concat}(H_{1},H_{2},...,H_{k})W^{O}$ (3)
Every layer also consists of a feed-forward network ($\mathrm{FFN}$),
consisting of two Dense layers with ReLU activation functions. For each layer,
therefore, the output of $\mathrm{MHA}$ is passed through a LayerNorm with
residual connections, passed through $\mathrm{FFN}$, and then through another
LayerNorm with residual connections.
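Eqs. (1)-(3) can be written compactly as follows. This NumPy sketch handles a single sequence and omits masking, biases, LayerNorm, and the positional encoding; the function names and the list-of-matrices representation of the per-head projections are our own conventions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, Eq. (1)."""
    d_k = Q.shape[-1]
    scores = softmax(Q @ K.T / np.sqrt(d_k))   # (n, n) alignment matrix
    return scores @ V, scores

def multi_head(x, Wq, Wk, Wv, Wo):
    """Eqs. (2)-(3): per-head projections, concatenation, then W^O.
    Wq, Wk, Wv are lists of (d_model, d_head) matrices, one per head."""
    heads = [attention(x @ wq, x @ wk, x @ wv)[0]
             for wq, wk, wv in zip(Wq, Wk, Wv)]
    return np.concatenate(heads, axis=-1) @ Wo
```

The `scores` matrix returned by `attention` is exactly the per-head alignment that the decoding experiments below treat as a candidate arc-score matrix.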
#### Searching for structure
Often, the line of inquiry regarding interpretability in NLP has been
concerned with extracting and analyzing linguistic information from neural
network models of language (Belinkov and Glass, 2019). Recently, such
investigations have targeted Transformer models Hewitt and Manning (2019);
Rosa and Mareček (2019); Tenney et al. (2019), at least in part because the
self-attention mechanism employed by these models offers a possible window
into their inner workings. With large-scale machine translation and language
models being openly distributed for experimentation, several researchers have
wondered if self-attention is capable of representing syntactic structure,
despite not being trained with any overt parsing objective.
In pursuit of this question, Raganato et al. (2018) applied a maximum-
spanning-tree algorithm over the attention weights of several trained MT
models, comparing them with gold trees from Universal Dependencies (Nivre et
al., 2016, 2020). They found that, while the accuracy was not comparable to
that of a supervised parser, it was nonetheless higher than several strong
baselines, implying that some structure was consistently represented. Clark et
al. (2019) corroborated the same findings for BERT when decoding full trees,
but observed that individual dependency relations were often tracked by
specialized heads and were decodable with much higher accuracy than some
fixed-offset baselines. Concurrently, Voita et al. (2019) made a similar
observation about heads specializing in specific dependency relations,
proposing a coarse taxonomy of head attention functions: positional, where
heads attend to adjacent tokens; syntactic, where heads attend to specific
syntactic relations; and rare words, where heads point to the least frequent
tokens in the sentence. Htut et al. (2019) followed Raganato et al. (2018) in
decoding dependency trees from BERT-based models, finding that fine-tuning on
two classification tasks did not produce syntactically plausible attention
patterns. Lastly, Limisiewicz et al. (2020) modified UD annotation to better
represent attention patterns and introduced a supervised head-ensembling
method for consolidating shared syntactic information across heads.
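As a concrete illustration of this decoding setup, one can extract an undirected maximum spanning tree from a single head's attention matrix. This is a simplification (the cited work decodes rooted, directed trees and handles subword-to-word alignment); the symmetrisation step and Prim's algorithm below are just one reasonable instantiation.

```python
import numpy as np

def decode_tree(att):
    """Undirected maximum spanning tree (Prim's algorithm) over a
    head's symmetrised attention weights. Returns tree edges (i, j);
    no head/dependent direction is assigned."""
    sym = att + att.T          # merge the two directions of attention
    n = sym.shape[0]
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # Attach the vertex reachable by the heaviest edge from the tree.
        w, i, j = max((sym[i, j], i, j)
                      for i in in_tree for j in range(n)
                      if j not in in_tree)
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

Comparing such decoded edges against gold Universal Dependencies trees (as unlabeled, undirected attachment accuracy) is the kind of evaluation the studies above report.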
#### Does attention have explanatory value?
Though many studies have yielded insight about how attention behaves in a
variety of models, the question of whether it can be seen as a “faithful”
explanation of model predictions has been subject to much recent debate. For
example, Jain and Wallace (2019) present compelling arguments that attention
does not offer a faithful explanation of predictions. Primarily, they
demonstrate that there is little correlation between standard feature
importance measures and attention weights. Furthermore, they contend that
there exist counterfactual attention distributions, which are substantially
different from learned attention weights but that do not alter a model’s
predictions. Using a similar methodology, Serrano and Smith (2019) corroborate
that attention does not provide an adequate account of an input component’s
importance.
In response to these findings, Wiegreffe and Pinter (2019) question the
assumptions underlying such claims. Attention, they argue, is not a primitive,
i.e., it cannot be detached from the rest of a model’s components as is done
in the experiments of Jain and Wallace (2019). They propose a set of four
analyses to test whether a given model’s attention mechanism can provide
meaningful explanation and demonstrate that the alternative attention
distributions found via adversarial training methods do, in fact, perform
poorly compared to standard attention mechanisms. On a theoretical level, they
argue that, although attention weights do not give an exclusive “faithful”
explanation, they do provide a meaningful plausible explanation.
This discussion is relevant to our study because it remains unclear whether or
not attending to syntactic structure serves, in practice, as a plausible
explanation for model behavior, or whether or not it is even capable of
serving as such. Indeed, the studies of Raganato et al. (2018) and Clark et
al. (2019) paint a convincing but incomplete picture: tree decoding accuracy
just marginally exceeds baselines and various relations tend to be tracked
across varying heads and layers. Thus, our fine-tuning experiments (detailed
in the following section) serve to enable an “easy” setting wherein we
explicitly inform our models of the same structure that we are trying to
extract. We posit that, if, after fine-tuning, syntactic structures were still
_not_ decodable from the attention weights, one could safely conclude that
these structures are being stored via a non-transparent mechanism that may not
even involve attention weights. Such an insight would allow us to conclude
that attention weights cannot provide even a plausible explanation for models
relying on syntax.
## 3 Experimental Design
To examine the extent to which we can decode dependency trees from attention
patterns, we run a tree decoding algorithm over mBERT’s attention heads —
before and after fine-tuning via a parsing objective. We surmise that doing so
will enable us to determine if attention can be interpreted as a reliable
mechanism for capturing linguistic structure.
### 3.1 Model
We employ mBERT111https://github.com/google-research/bert in our experiments,
which has been shown to perform well across a variety of NLP tasks (Hu et al.,
2020; Kondratyuk and Straka, 2019a) and capture aspects of syntactic structure
cross-lingually (Pires et al., 2019; Chi et al., 2020). mBERT features 12
layers with 768 hidden units and 12 attention heads, with a joint WordPiece
sub-word vocabulary across languages. The model was trained on the
concatenation of WikiDumps for the top 104 languages with the largest
Wikipedias, where principled sampling was employed to enforce a balance between
high- and low-resource languages.
### 3.2 Decoding Algorithm
For decoding dependency trees, we follow Raganato et al. (2018) in applying
the Chu-Liu-Edmonds maximum spanning tree algorithm (Chu, 1965) to every
layer/head combination available in mBERT ($12\times 12=144$ in total). In
order for the matrices to correspond to gold treebank tokenization, we remove
the cells corresponding to the BERT delimiter tokens ([CLS] and [SEP]). In
addition to this, we sum the columns and average the rows corresponding to the
constituent subwords of gold tokens, respectively (Clark et al., 2019).
Lastly, since attention patterns across heads may differ in whether they
represent heads attending to their dependents or vice versa, we take our input
to be the element-wise product of a given attention matrix and its transpose
($A\circ A^{\top}$). We liken this to the joint probability of a head
attending to its dependent and a dependent attending to its head, similarly to
Limisiewicz et al. (2020). On this point, we also follow Htut et al. (2019)
in evaluating the decoded trees via Undirected Unlabeled Attachment Score
(UUAS) — the percentage of undirected edges recovered correctly. Since we
discount directionality, this is effectively a less strict measure than UAS,
but one that has a long tradition in unsupervised dependency parsing since
Klein and Manning (2004).
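The preprocessing and decoding steps above can be sketched as follows. All function names are illustrative; since $A\circ A^{\top}$ is symmetric and UUAS ignores edge direction, an undirected maximum spanning tree stands in for the directed Chu-Liu-Edmonds algorithm in this sketch.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def merge_subwords(att, groups):
    """Map a subword-level attention matrix to gold tokenization.

    att: (n_subwords, n_subwords) attention matrix; groups: list of
    subword-index lists, one per gold token. Columns (attention *to* a
    token's subwords) are summed; rows (attention *from* them) averaged.
    """
    cols = np.stack([att[:, g].sum(axis=1) for g in groups], axis=1)
    return np.stack([cols[g].mean(axis=0) for g in groups], axis=0)

def decode_uuas(att, gold_edges):
    """Symmetrize via A * A.T, decode a spanning tree, score UUAS."""
    sym = att * att.T
    # A *maximum* spanning tree is the minimum spanning tree of the
    # negated weights; note scipy drops exact-zero entries as absent edges.
    mst = minimum_spanning_tree(-sym).toarray()
    pred = {frozenset((i, j)) for i, j in zip(*np.nonzero(mst))}
    gold = {frozenset(e) for e in gold_edges}
    return len(pred & gold) / len(gold)
```

A full reimplementation of the directed setting would substitute a proper Chu-Liu-Edmonds routine; for undirected evaluation the two coincide on symmetric inputs.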
### 3.3 Data
For our data, we employ the Parallel Universal Dependencies (PUD) treebanks,
as collected in UD v2.4 (Nivre et al., 2019). PUD was first released as part
of the CoNLL 2017 shared task (Zeman et al., 2018), containing 1000 parallel
sentences, which were (professionally) translated from English, German,
French, Italian, and Spanish to 14 other languages. The sentences are taken
from two domains, news and Wikipedia, the latter implying some overlap with
mBERT’s training data (though we did not investigate this). We include all PUD
treebanks except Thai.222Thai is the only treebank that does not have a non-
PUD treebank available in UD, which we need for our fine-tuning experiments.
### 3.4 Fine-Tuning Details
 | ar | cs | de | en | es | fi | fr | hi | id | it | ja | ko | pl | pt | ru | sv | tr | zh
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Baseline | 50 | 40 | 36 | 36 | 40 | 42 | 40 | 46 | 47 | 40 | 43 | 55 | 45 | 41 | 42 | 39 | 52 | 41
Pre | 53 | 53 | 49 | 47 | 50 | 48 | 41 | 48 | 50 | 41 | 45 | 64 | 52 | 50 | 51 | 51 | 55 | 42
 | 7-6 | 10-8 | 10-8 | 10-8 | 9-5 | 10-8 | 2-3 | 2-3 | 9-5 | 6-4 | 2-3 | 9-2 | 10-8 | 9-5 | 10-8 | 10-8 | 3-8 | 2-3
None | 76 | 78 | 76 | 71 | 77 | 66 | 45 | 72 | 75 | 58 | 42 | 64 | 75 | 76 | 75 | 74 | 55 | 38
 | 11-10 | 11-10 | 11-10 | 10-11 | 10-11 | 10-11 | 11-10 | 11-10 | 11-10 | 11-10 | 11-10 | 11-10 | 11-10 | 11-10 | 10-8 | 10-8 | 3-8 | 2-3
Key | 62 | 64 | 58 | 53 | 59 | 56 | 41 | 54 | 59 | 47 | 44 | 62 | 64 | 58 | 61 | 59 | 55 | 41
 | 10-8 | 10-8 | 11-12 | 10-8 | 11-12 | 10-8 | 7-12 | 10-8 | 10-8 | 9-2 | 2-3 | 10-8 | 10-8 | 11-12 | 10-8 | 12-10 | 3-12 | 2-3
Query | 69 | 74 | 70 | 66 | 73 | 63 | 42 | 62 | 67 | 54 | 45 | 65 | 72 | 70 | 70 | 68 | 56 | 42
 | 11-4 | 10-8 | 11-4 | 11-4 | 11-4 | 10-8 | 11-4 | 11-4 | 11-4 | 11-4 | 2-3 | 10-8 | 11-4 | 11-4 | 10-8 | 11-4 | 10-8 | 2-3
KQ | 71 | 76 | 70 | 65 | 74 | 62 | 43 | 64 | 69 | 55 | 44 | 64 | 73 | 73 | 69 | 69 | 55 | 41
 | 11-4 | 11-4 | 11-4 | 11-4 | 11-4 | 11-4 | 10-11 | 11-4 | 11-4 | 11-4 | 2-3 | 11-4 | 11-4 | 11-4 | 11-4 | 11-4 | 11-4 | 2-3
Value | 75 | 72 | 72 | 64 | 76 | 59 | 45 | 63 | 73 | 55 | 45 | 66 | 73 | 74 | 69 | 65 | 57 | 42
 | 12-5 | 12-5 | 12-5 | 12-5 | 12-5 | 12-5 | 12-5 | 12-5 | 12-5 | 12-5 | 2-3 | 10-8 | 12-5 | 12-5 | 12-5 | 12-5 | 12-5 | 3-8
Dense | 68 | 71 | 65 | 60 | 67 | 61 | 42 | 65 | 66 | 49 | 44 | 64 | 70 | 64 | 67 | 64 | 55 | 40
 | 11-10 | 11-10 | 11-10 | 10-8 | 12-10 | 11-10 | 10-8 | 11-10 | 11-10 | 9-5 | 3-12 | 11-10 | 11-10 | 12-5 | 11-10 | 11-10 | 11-10 | 3-12
Table 1: Adjacent-branching baseline and maximum UUAS decoding accuracy per
PUD treebank, expressed as best score and best layer/head combination for UUAS
decoding. Pre refers to the basic mBERT model before fine-tuning, while all cells
below correspond to the different fine-tuned models described in Section 3.4. Best
score indicated in bold.
In addition to exploring pretrained mBERT’s attention weights, we are also
interested in how attention might be guided by a training objective that
learns the exact tree structure we aim to decode. To this end, we employ the
graph-based decoding algorithm of the biaffine parser introduced by Dozat and
Manning (2016). We replace the standard BiLSTM encoder for this parser with
the entire mBERT network, which we fine-tune with the parsing loss. The full
parser decoder consists of four dense layers, two for head/child
representations for dependency arcs (dim. 500) and two for head/child
representations for dependency labels (dim. 100). These are transformed into
the label space via a bilinear transform.
After training the parser, we can decode the fine-tuned mBERT parameters in
the same fashion as described in Section 3.2. We surmise that, if attention
heads are capable of tracking hierarchical relations between words in any
capacity, it is precisely in this setting that this ability would be attested.
In addition to this, we are interested in what individual components of the
mBERT network are capable of steering attention patterns towards syntactic
structure. We believe that addressing this question will help us not only in
interpreting decisions made by BERT-based neural parsers, but also in aiding
us developing syntax-aware models in general (Strubell et al., 2018;
Swayamdipta et al., 2018). As such — beyond fine-tuning all parameters of the
mBERT network (our basic setting) — we perform a series of ablation
experiments wherein we update only one set of parameters per training cycle,
e.g. the Query weights $W_{i}^{Q}$, and leave everything else frozen. This
gives us a set of 6 models, which are described below. For each model, all
non-BERT parser components are always left unfrozen.
* •
Key: only the $K$ components of the transformer are unfrozen; these are the
representations of tokens being paid attention to.
* •
Query: only the $Q$ components are unfrozen; these, conversely, are the
representations of tokens that are paying attention to other tokens.
* •
KQ: both keys and queries are unfrozen.
* •
Value: semantic value vectors per token ($V$) are unfrozen; they are composed
after being weighted with attention scores obtained from the $K$/$Q$ matrices.
* •
Dense: the dense feed-forward networks in the attention mechanism; all three
per layer are unfrozen.
* •
None: The basic setting with nothing frozen; all parameters are updated with
the parsing loss.
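The freezing scheme above amounts to a name-matching rule over mBERT's parameters. A minimal sketch follows; the parameter-name substrings are assumptions based on HuggingFace-style BERT naming (e.g. `encoder.layer.3.attention.self.key.weight`), not taken from our codebase.

```python
# Which parameter-name substrings remain trainable per ablation setting.
# These patterns are an assumption (HuggingFace-style naming), for illustration.
UNFROZEN = {
    "Key":   ("attention.self.key",),
    "Query": ("attention.self.query",),
    "KQ":    ("attention.self.key", "attention.self.query"),
    "Value": ("attention.self.value",),
    "Dense": ("attention.output.dense", "intermediate.dense", "output.dense"),
}

def is_trainable(param_name, setting):
    """Decide whether a named mBERT parameter is updated under an ablation."""
    if setting == "None":  # basic setting: nothing frozen
        return True
    return any(pat in param_name for pat in UNFROZEN[setting])
```

In a PyTorch training loop one would then set `param.requires_grad = is_trainable(name, setting)` over the encoder's named parameters, while parser-decoder parameters are always left trainable.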
We fine-tune each of these models on a concatenation of all PUD treebanks for
20 epochs, which effectively makes our model multilingual. We do so in order
to 1) control for domain and annotation confounds, since all PUD sentences are
parallel and are natively annotated (unlike converted UD treebanks, for
instance); 2) increase the number of training samples for fine-tuning, as each
PUD treebank features only 1000 sentences; and 3) induce a better parser
through multilinguality, as in Kondratyuk and Straka (2019b). Furthermore, in
order to gauge the overall performance of our parser across all ablated
settings, we evaluate on the test set of the largest non-PUD treebank
available for each language, since PUD only features test partitions. When
training, we employ a combined dense/sparse Adam optimiser, at a learning rate
of $3\times 10^{-5}$. We rescale gradients to have a maximum norm of 5.
## 4 Decoding mBERT Attention
Figure 1: UUAS of MST decoding per layer and head, across languages. Heads
(y-axis) are sorted by accuracy for easier visualization.

Figure 2: Left: UUAS per relation across languages (best layer/head
combination indicated in cell). Right: best UUAS as a function of the best
positional baseline (derived from the treebank), for selected relations.
The second row of Table 1 (Pre) depicts the UUAS after running our decoding
algorithm over mBERT attention matrices, per language. We see a familiar
pattern to that in Clark et al. (2019) among others — namely that attention
patterns extracted directly from mBERT appear to be incapable of decoding
dependency trees beyond a threshold of 50–60% UUAS accuracy. However, we also
note that, in all languages, the attention-decoding algorithm outperforms a
Baseline (row 1) that draws an (undirected) edge between any two adjacent
words in linear order, which implies that some non-linear structures are
captured with regularity. Indeed, head 8 in layer 10 appears to be
particularly strong in this regard, returning the highest UUAS for 7
languages. Interestingly, the accuracy patterns across layers depicted in
Figure 1 tend to follow an identical trend for all languages, with nearly all
heads in layer 7 returning high within-language accuracies.
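The adjacent-word baseline in row 1 of Table 1 admits a very short sketch (function name ours): since the baseline tree is simply the chain linking each word to its linear neighbor, its UUAS is the fraction of gold edges whose endpoints are adjacent.

```python
def adjacent_baseline_uuas(gold_edges):
    """UUAS of a baseline that draws an undirected edge between every
    pair of linearly adjacent words: a gold edge (head, dep) is
    recovered exactly when |head - dep| == 1."""
    hits = sum(1 for h, d in gold_edges if abs(h - d) == 1)
    return hits / len(gold_edges)
```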
It appears that attention for some languages (Arabic, Czech, Korean, Turkish)
is comparatively easier to decode than others (French, Italian, Japanese,
Chinese). A possible explanation for this result is that dependency relations
between content words, which are favored by the UD annotation, are more likely
to be adjacent in the morphologically rich languages of the first group
(without intervening function words). This assumption seems to be corroborated
by the high baseline scores for Arabic, Korean and Turkish (but not Czech).
Conversely, the low baseline scores and the likewise low decoding accuracies
for the latter four languages are difficult to characterize. Indeed, we could
not identify what factors — typological, annotation, tokenization or otherwise
— would set French and Italian apart from the remaining languages in terms of
score. However, we hypothesize that the tokenization and our treatment of
subword tokens plays a part in attempting to decode attention from Chinese and
Japanese representations. Per the mBERT
documentation,333https://github.com/google-
research/bert/blob/master/multilingual.md Chinese and Japanese Kanji character
spans within the CJK Unicode range are character-tokenized. This lies in
contrast with all other languages (Korean Hangul and Japanese Hiragana and
Katakana included), which rely on whitespace and WordPiece (Wu et al., 2016).
It is thus possible that the attention distributions for these two languages
(at least where CJK characters are relevant) are devoted to composing words,
rather than structural relations, which will distort the attention matrices
that we compute to correspond with gold tokenization (e.g. by summing columns
and averaging rows, as described in Section 3.2).
#### Relation analysis
We can disambiguate what sort of structures are captured with regularity by
looking at the UUAS returned per dependency relation. Figure 2 (left) shows
that adjectival modifiers (amod, mean UUAS = $85\pm 12$) and determiners
(det, $88\pm 6$) are among the easiest relations to decode across languages.
Indeed, words that are connected by these relations are often adjacent to each
other and may be simple to decode if a head is primarily concerned with
tracking linear order. To verify the extent to which this might be happening,
we plot the aforementioned decoding accuracy as a function of select
relations’ positional baselines in Figure 2 (right). The positional baselines,
in this case, are calculated by picking the most frequent offset at which a
dependent occurs with respect to its head, e.g., $-$1 for det in English,
meaning one position to the left of the head. Interestingly, while we observe
significant variation across the positional baselines for amod and det, the
decoding accuracy remains quite high.
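The positional baselines described above can be computed directly from treebank statistics; a minimal sketch (names ours) over (relation, head index, dependent index) triples:

```python
from collections import Counter

def positional_baselines(edges):
    """Per-relation positional baseline.

    For each relation, pick the most frequent signed offset of the
    dependent relative to its head (e.g. -1 = one position to the left);
    the baseline's accuracy is the fraction of instances at that offset.
    """
    offsets = {}
    for rel, head, dep in edges:
        offsets.setdefault(rel, Counter())[dep - head] += 1
    return {
        rel: (c.most_common(1)[0][0], c.most_common(1)[0][1] / sum(c.values()))
        for rel, c in offsets.items()
    }
```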
In slight contrast to this, the core subject (nsubj, $58\pm 16$) and object
(obj, $64\pm 13$) relations prove to be more difficult to decode. Unlike the
aforementioned relations, nsubj and obj are much more sensitive to the word
order properties of the language at hand. For example, while a language like
English, with Subject-Verb-Object (SVO) order, might have the subject
frequently appear to the left of the verb, an SOV language like Hindi might
have it several positions further away, with an object and its potential
modifiers intervening. Indeed, the best positional baseline for English nsubj
is 39 UUAS, while it is only 10 for Hindi. Despite this variation, the
relation seems to be tracked with some regularity by the same head (layer 3,
head 9), returning 60 UUAS for English and 52 for Hindi. The same can largely
be said for obj, where the positional baselines return $51\pm 18$. In this
latter case, however, the heads tend to be much differently distributed across
languages. Finally, the results for the obj relation provide some support for
our earlier explanation concerning morphologically rich languages, as Arabic,
Czech, Korean and Turkish all have among the highest accuracies (as well as
positional baselines).
## 5 Fine-Tuning Experiments
Figure 3: (Top) best scores across all heads, per language; (bottom) mean
scores across all heads, per language. The languages (hidden from the X-axis
for brevity) are, in order: _ar, cs, de, en, es, fi, fr, hi, id, it, ja, ko,
pl, pt, ru, sv, tr, zh_.

Figure 4: Mean UAS and LAS when evaluating different models on
language-specific treebanks (Korean excluded due to annotation differences).
mBERT refers to models where the entire mBERT network is frozen as input to
the parser.
Next, we investigate the effect fine-tuning has on UUAS decoding. Row 3 in
Table 1 (None) indicates that fine-tuning does result in large improvements to
UUAS decoding across most languages, often by margins as high as $\sim 30\%$.
This shows that with an explicit parsing objective, attention heads are
capable of serving as explanatory mechanisms for syntax; syntactic structure
can be made to be transparently stored in the heads, in a manner that does not
require additional probe fitting or parameterized transformation to extract.
Given that we do manage to decode reasonable syntactic trees, we can then
refine our question — what components are capable of learning these trees? One
obvious candidate is the key/query component pair, given that attention
weights are a scaled softmax of a composition of the two. Figure 3 (top) shows
the difference between pretrained UUAS and fine-tuned UUAS per layer, across
models and languages. Interestingly, the best parsing accuracies do not appear
to vary much depending on what component is frozen. We do see a clear trend,
however, in that decoding the attention patterns of the fine-tuned model
typically yields better UUAS than the pretrained model, particularly in the
highest layers. Indeed, the lowest layer at which fine-tuning appears to
improve decoding is layer 7. This implies that, regardless of which component
remains frozen, the parameters facing any sort of significant and positive
update tend to be those appearing towards the higher-end of the network,
closer to the output.
For the frozen components, the best improvements in UUAS are seen at the final
layer in Value, which is also the only model that shows consistent
improvement, as well as the highest average improvement in mean scores444The
inner average is over all heads; the outer is over all languages. for the last
few layers. Perhaps most interestingly, the mean UUAS (Figure 3 (bottom)) for
our “attentive” components – keys, queries, and their combination – does not
appear to have improved by much after fine-tuning. In contrast, the maximum
does show considerable improvement; this seems to imply that although all
components appear to be more or less equally capable of learning decodable
heads, the attentive components, when fine-tuned, appear to sharpen fewer
heads.
Note that the only difference between keys and queries in an attention
mechanism is that keys are transposed to index attention from/to
appropriately. Surprisingly, Key and Query appear to act somewhat differently,
with Query being almost uniformly better than Key with the best heads, whilst
Key is slightly better with averages, implying distinctions in how both store
information. Furthermore, allowing both keys and queries seems to result in an
interesting contradiction – the ultimate layer, which has reasonable maximums
and averages for both Key and Query, now seems to show a UUAS drop almost
uniformly. This is also true for the completely unfrozen encoder.
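The key/query relationship discussed above is visible directly in the attention computation: the weights are a row-wise softmax of $QK^{\top}/\sqrt{d_k}$, so swapping the two projections exactly transposes the pre-softmax score matrix. A numpy sketch (all names ours):

```python
import numpy as np

def attention_scores(X, W_q, W_k):
    """Pre-softmax scaled dot-product scores: (X W_q)(X W_k)^T / sqrt(d_k)."""
    Q, K = X @ W_q, X @ W_k
    return Q @ K.T / np.sqrt(K.shape[-1])

def attention_weights(X, W_q, W_k):
    """Row-stochastic attention matrix: softmax over each query's scores."""
    s = attention_scores(X, W_q, W_k)
    e = np.exp(s - s.max(axis=-1, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=-1, keepdims=True)
```

Note that `attention_scores(X, W_k, W_q)` equals `attention_scores(X, W_q, W_k).T`, which is the "keys are transposed to index attention from/to appropriately" observation; after the row-wise softmax, however, the two matrices are no longer simple transposes of one another.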
#### Supervised Parsing
In addition to decoding trees from attention matrices, we also measure
supervised UAS/LAS on a held-out test set.555Note that the test set in our
scenario is from the actual, non-parallel language treebank; as such, we left
Korean out of this comparison due to annotation differences. Based on Figure
4, it is apparent that all settings result in generally the same UAS. This is
somewhat expected; Lauscher et al. (2020) see better results on parsing with
the entire encoder frozen, implying that the task is easy enough for a
biaffine parser to learn, given frozen mBERT representations.666Due to
training on concatenated PUD sets, however, our results are not directly
comparable. The LAS distinction is, however, rather interesting: there is a
marked difference between how important the dense layers are, as opposed to
the attentive components. This is likely not reflected in our UUAS probe as,
strictly speaking, labelling arcs is not equivalent to searching for structure
in sentences, but more akin to classifying pre-identified structures. We also
note that Dense appears to be better than None on average, implying that non-
dense components might actually be hurting labelling capacity.
In brief, consolidating the two sets of results above, we can draw three
interesting conclusions about the components:
1. 1.
Value vectors are best aligned with syntactic dependencies; this is reflected
both in the best head at the upper layers, and the average score across all
heads.
2. 2.
Dense layers appear to have moderate informative capacity, but appear to have
the best learning capacity for the task of arc labelling.
3. 3.
Perhaps most surprisingly, Key and Query vectors do not appear to make any
outstanding contributions, save for sharpening a smaller subset of heads.
Our last result is especially surprising for UUAS decoding. Keys and queries,
fundamentally, combine to form the attention weight matrix, which is precisely
what we use to decode trees. One would expect that allowing these components
to learn from labelled syntax would result in the best improvements to
decoding, but all three have surprisingly negligible mean improvements. This
indicates that we need to further improve our understanding of how attentive
structure and weighting really works.
#### Cross-linguistic observations
We notice no clear cross-linguistic trends here across different component
sets; however, certain languages do stand out as being particularly hard to
decode from the fine-tuned parser. These include Japanese, Korean, Chinese,
French and Turkish. For the first three, we hypothesise that tokenization
clashes with mBERT’s internal representations may play a role. Indeed, as we
hypothesized in Section 4, it could be the case that the composition of CJK
characters into gold tokens for Chinese and Japanese may degrade the
representations (and their corresponding attention) therein. Furthermore, for
Japanese and Korean specifically, it has been observed that tokenization
strategies employed by different treebanks could drastically influence the
conclusions one may draw about their inherent hierarchical structure (Kulmizev
et al., 2020). Turkish and French are admittedly more difficult to diagnose.
Note, however, that we fine-tuned our model on a concatenation of all PUD
treebanks. As such, any deviation from PUD’s annotation norms is therefore
likely to be heavily penalised, by virtue of signal from other languages
drowning out these differences.
## 6 Conclusion
In this study, we revisited the prospect of decoding dependency trees from the
self-attention patterns of Transformer-based language models. We elected to
extend our experiments to 18 languages in order to gain better insight about
how tree decoding accuracy might be affected in the face of (modest)
typological diversity. Surprisingly, across all languages, we were able to
decode dependency trees from attention patterns more accurately than an
adjacent-linking baseline, implying that some structure was indeed being
tracked by the mechanism. In looking at specific relation types, we
corroborated previous studies in showing that particular layer-head
combinations tracked the same relation with regularity across languages,
despite typological differences concerning word order, etc.
In investigating the extent to which attention can be guided to properly
capture structural relations between input words, we fine-tuned mBERT as input
to a dependency parser. This, we found, yielded large improvements over the
pretrained attention patterns in terms of decoding accuracy, demonstrating
that the attention mechanism was learning to represent the structural
objective of the parser. In addition to fine-tuning the entire mBERT network,
we conducted a series of experiments, wherein we updated only select
components of model and left the remainder frozen. Most surprisingly, we
observed that the Transformer parameters designed for composing the attention
matrix, $K$ and $Q$, were only modestly capable of guiding the attention
towards resembling the dependency structure. In contrast, it was the Value
($V$) parameters, which are used for computing a weighted sum over the
$KQ$-produced attention, that yielded the most faithful representations of the
linguistic structure via attention.
Though prior work (Kovaleva et al., 2019; Zhao and Bethard, 2020) seems to
indicate that there is a lack of a substantial change in attention patterns
after fine-tuning on syntax- and semantics-oriented classification tasks, the
opposite effect has been observed with fine-tuning on negation scope
resolution, where a more explanatory attention mechanism can be induced (Htut
et al., 2019). Our results are similar to the latter, and we demonstrate that
given explicit syntactic annotation, attention weights do end up storing more
transparently decodable structure. It is, however, still unclear which sets of
transformer parameters are best suited for learning this information and
storing it in the form of attention.
## Acknowledgements
Our experiments were run on resources provided by UNINETT Sigma2 - the
National Infrastructure for High Performance Computing and Data Storage in
Norway, under the NeIC-NLPL umbrella. Mostafa and Anders were funded by a
Google Focused Research Award. We would like to thank Daniel Dakota and Ali
Basirat for some fruitful discussions and the anonymous reviewers for their
excellent feedback.
## References
* Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_.
* Belinkov and Glass (2019) Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. _Transactions of the Association for Computational Linguistics_ , 7:49–72.
* Chi et al. (2020) Ethan A. Chi, John Hewitt, and Christopher D. Manning. 2020. Finding universal grammatical relations in multilingual BERT. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5564–5577, Online. Association for Computational Linguistics.
* Chu (1965) Yoeng-Jin Chu. 1965. On the shortest arborescence of a directed graph. _Scientia Sinica_ , 14:1396–1400.
* Clark et al. (2019) Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT’s attention. In _Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 276–286, Florence, Italy. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186.
* Dozat and Manning (2016) Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. _arXiv preprint arXiv:1611.01734_.
* Hewitt and Manning (2019) John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word representations. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4129–4138.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. _Neural computation_ , 9(8):1735–1780.
* Htut et al. (2019) Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R Bowman. 2019. Do attention heads in bert track syntactic dependencies? _arXiv preprint arXiv:1911.12246_.
* Hu et al. (2020) Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization. _arXiv:2003.11080 [cs]_. ArXiv: 2003.11080.
* Jain and Wallace (2019) Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3543–3556.
* Klein and Manning (2004) Dan Klein and Christopher D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In _Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL)_ , pages 479–486.
* Kondratyuk and Straka (2019a) Dan Kondratyuk and Milan Straka. 2019a. 75 languages, 1 model: Parsing universal dependencies universally. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2779–2795.
Figure 5: Positional scores across relations for all languages.
Figure 6: Decoding UUAS as a function of best positional baselines.
Figure 7: Parsing scores across components and languages.
††institutetext: School of Physics, Xi’an Jiaotong University, Xi’an 710049,
People’s Republic of China
# Revisit on two-dimensional self-gravitating kinks: superpotential formalism
and linear stability
Yuan Zhong (corresponding author)<EMAIL_ADDRESS>
###### Abstract
Self-gravitating kink solutions of a two-dimensional dilaton gravity are
revisited in this work. Analytical kink solutions are derived from a concise
superpotential formalism of the dynamical equations. A general analysis of the
linear stability is conducted for an arbitrary static solution of the model.
After gauge fixing, a Schrödinger-like equation with factorizable Hamiltonian
operator is obtained, which ensures the linear stability of the solution.
###### Keywords:
2D Gravity; Solitons Monopoles and Instantons
††arxiv: 2101.10928
## 1 Introduction
During the past decades, two-dimensional (2D) gravitational models have
continued to attract the attention of theorists for a variety of reasons. First of all,
the field equations obtained in many 2D gravity models are simple enough to
allow a rigorous analysis of some difficult issues of gravitational theory,
such as the quantization of gravity Henneaux1985 ; Alwis1992 , gravitational
collapse VazWitten1994 ; VazWitten1996 , black hole evaporation
CallanGiddingsHarveyStrominger1992 ; BilalCallan1993 ;
RussoSusskindThorlacius1992 ; RussoSusskindThorlacius1992a ;
RussoSusskindThorlacius1993 , see Brown1988 ; Thorlacius1995 ;
GrumillerKummerVassilevich2002 for comprehensive reviews on early works.
Second, a number of very different approaches to quantum gravity all hint that
at very short distances space-time becomes effectively two-dimensional
AmbjornJurkiewiczLoll2005 ; Horava2009b ; MureikaStojkovic2011 ;
AnchordoquiDaiFairbairnLandsbergEtAl2012 ; Stojkovic2013 ; Loll2020 . Here,
the dimensions that are reduced can be effective, spectral, topological or the
usual dimensions Carlip2017 . Recently, studies of the Sachdev-Ye-Kitaev
(SYK) model SachdevYe1993 ; Kitaev2015 have also led to a resurgence of interest
in 2D gravity AlmheiriPolchinski2015 ; MaldacenaStanfordYang2016 ;
MaldacenaStanford2016 ; Jensen2016 , see Rosenhaus2018 ; Sarosi2018 ;
Trunin2020 for pedagogical introductions.
Since the Einstein tensor vanishes identically in two dimensions, the
Einstein-Hilbert action cannot be used to describe 2D gravity. An economical
solution to this problem is to introduce a dilaton field. Many different 2D
dilaton gravity models have been proposed and studied so far. The simplest
action for 2D dilaton gravity is the Jackiw-Teitelboim (JT) action Jackiw1985
; Teitelboim1983
$\displaystyle S_{JT}=\frac{1}{\kappa}\int d^{2}x\sqrt{-g}\varphi(R+\Lambda),$
(1)
where the dilaton $\varphi$ plays the role of a Lagrangian multiplier.
$\kappa$ and $\Lambda$ are the gravitational coupling and the cosmological
constant, respectively. Two other famous actions for 2D dilaton gravity are
the Mann-Morsink-Sikkema-Steele (MMSS) action, which generalizes the JT action
by giving the dilaton a kinetic term MannMorsinkSikkemaSteele1991
$\displaystyle
S_{\textrm{MMSS}}=\frac{1}{\kappa}\int{d^{2}x}\sqrt{-g}\left[-\frac{1}{2}(\nabla\varphi)^{2}+\varphi
R+\Lambda\right],$ (2)
and the Callan-Giddings-Harvey-Strominger (CGHS) action
CallanGiddingsHarveyStrominger1992 :
$\displaystyle S_{\mathrm{CGHS}}=\frac{1}{2\pi}\int
d^{2}x\sqrt{-g}\left\\{e^{-2\varphi}\left[R+4(\nabla\varphi)^{2}+4\Lambda^{2}\right]-\frac{1}{2}(\nabla\phi)^{2}\right\\},$
(3)
where $\phi$ is a massless scalar matter field. A comprehensive review of 2D
dilaton gravity models and their applications in black hole physics and
quantum gravity can be found in Ref. GrumillerKummerVassilevich2002 .
It is natural to extend the discussion of 2D dilaton gravity to other
classical solutions such as topological solitons, which could be produced by
cosmic phase transitions VilenkinShellard2000 . As the simplest topological
soliton solution, the kink (or domain wall) has been extensively studied in 4D
cosmology Vachaspati2006 and in 5D thick brane world models
DzhunushalievFolomeevMinamitsuji2010 ; Liu2018 . In the case of two
dimensions, previous works have revealed close connections between kinks and
2D black holes ShinSoh1995 ; JohngShinSoh1996 ; GegenbergKunstatter1998 ;
Cadoni1998 , or naked singularities VazWitten1994 ; VazWitten1996 ;
VazWitten1995 ; YanQiu1998 ; YanWangTao2001 .
In 1995, an exact 2D self-gravitating sine-Gordon kink solution without
curvature singularity was found by Stötzel in the MMSS gravity model
Stoetzel1995 . In addition to the kink configuration of the scalar field, the
metric solution Stoetzel1995 describes an asymptotically anti-de Sitter (AdS2)
geometry. This property reminds us of the thick brane solutions found in
asymptotically AdS5 geometry SkenderisTownsend1999 ;
DeWolfeFreedmanGubserKarch2000 ; Gremm2000 . The aim of the present work is to
reveal similarities between 2D self-gravitating kinks and 5D thick brane
worlds.
The organization of the paper is as follows. In Sec. 2, we give a brief review
of Stötzel’s model and show that, for static solutions, the field equations
can be written as a group of first-order differential equations by introducing
the so-called superpotential. With the superpotential formalism, one can
easily generate exact self-gravitating kink solutions by choosing proper
superpotentials. We will discuss two analytical solutions in Sec. 3. Then, in
Sec. 4 we give a complete analysis of the linear stability of the solutions.
To our knowledge, no such analysis has been done before. In a recent work
IzquierdoFuertesGuilarte2020 , the authors considered the linear perturbations
around self-gravitating kink solutions in 2D MMSS gravity. However, they
expanded the metric around the Minkowski metric rather than the asymptotically
AdS2 metric solution. Finally, we offer in Sec. 5 some concluding remarks.
## 2 The model and the superpotential formalism
The action of Stötzel’s model Stoetzel1995 contains an MMSS gravity part
along with a canonical real scalar $\phi$:
$\displaystyle
S=\frac{1}{\kappa}\int{d^{2}x}\sqrt{-g}\left[-\frac{1}{2}\partial^{\mu}\varphi\partial_{\mu}\varphi+\varphi
R+\Lambda+\kappa\mathcal{L}_{\text{m}}\right],$ (4)
where
$\displaystyle\mathcal{L}_{\text{m}}=-\frac{1}{2}\partial^{\mu}\phi\partial_{\mu}\phi-V(\phi)$
(5)
is the Lagrangian density of the scalar field.
After variation, one immediately obtains the Einstein equations
$\displaystyle\nabla_{\mu}\varphi\nabla_{\nu}\varphi+2\nabla_{\mu}\nabla_{\nu}\varphi-\frac{1}{2}g_{\mu\nu}\left(\nabla_{\lambda}\varphi\nabla^{\lambda}\varphi+4\nabla_{\lambda}\nabla^{\lambda}\varphi-2\Lambda\right)=-\kappa
T_{\mu\nu},$ (6)
the dilaton equation
$\displaystyle\nabla_{\lambda}\nabla^{\lambda}\varphi+R=0,$ (7)
and the scalar field equation
$\displaystyle\nabla^{\mu}\nabla_{\mu}\phi-\frac{dV}{d\phi}=0.$ (8)
The energy-momentum tensor in Eq. (6) is defined as
$\displaystyle T_{\mu\nu}$ $\displaystyle=$ $\displaystyle
g_{\mu\nu}\mathcal{L}_{\text{m}}-2\frac{\delta\mathcal{L}_{\text{m}}}{\delta
g^{\mu\nu}}$ (9) $\displaystyle=$
$\displaystyle\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\left(\partial^{\alpha}\phi\partial_{\alpha}\phi+2V\right).$
To obtain a self-gravitating kink solution, Stötzel used the following metric
$\displaystyle ds^{2}=-e^{2A(x)}dt^{2}+dx^{2}.$ (10)
A similar metric ansatz is also used in 5D brane world models with non-
factorizable geometry RandallSundrum1999a ; RandallSundrum1999 ; therefore, we
will follow the terminology of brane world theory and call the function $A(x)$
the warp factor. As a convention, the derivative with respect to $x$ will
always be denoted as a subscript, for example, $\phi_{x}\equiv d\phi/dx.$
Substituting metric (10) into the Einstein equations (6), one obtains
$\displaystyle 2A_{x}\varphi_{x}-2\varphi_{xx}-\varphi_{x}^{2}$
$\displaystyle=$ $\displaystyle\kappa\phi_{x}^{2},$ (11) $\displaystyle
A_{x}\varphi_{x}+\varphi_{xx}$ $\displaystyle=$ $\displaystyle\Lambda-\kappa
V.$ (12)
The equations of motion for the dilaton and the scalar fields read
$\displaystyle-2A_{xx}-2A_{x}^{2}+\varphi_{xx}+A_{x}\varphi_{x}=0.$ (13)
and
$\displaystyle A_{x}\phi_{x}+\phi_{xx}=\frac{dV}{d\phi},$ (14)
respectively. Note that only three of the above equations are independent. For
example, Eq. (14) can be derived by using Eqs. (11)-(13). At first glance,
Eqs. (11)-(14) constitute a complicated nonlinear differential system, and
finding their solutions seems to be a formidable task. But the study of brane
world models has taught us a lesson on how to solve such systems by means of
the superpotential method, which rewrites second-order differential equations,
such as Eqs. (11)-(14), into first-order ones SkenderisTownsend1999 ;
DeWolfeFreedmanGubserKarch2000 ; Gremm2000 .
To construct a superpotential formalism for the present model, we first note
that the combination of Eqs. (12) and (13) leads to an expression of $V$ in
terms of the cosmological constant and the warp factor:
$\displaystyle\kappa V=\Lambda-2A_{xx}-2A_{x}^{2}.$ (15)
Taking the derivative of the above equation and eliminating $dV/d\phi$ by
using Eq. (14), one obtains a relation between $A$ and $\phi$:
$\displaystyle
A_{xxx}+2A_{x}A_{xx}=-\frac{1}{2}\kappa(A_{x}\phi_{x}^{2}+\phi_{xx}\phi_{x}).$
(16)
The superpotential method starts with the assumption that the first-order
derivative of $\phi$ equals a function of $\phi$ itself, namely the
superpotential $W(\phi)$, via the following equation:
$\displaystyle\phi_{x}=\frac{dW}{d\phi}.$ (17)
Under this assumption, one can verify that Eq. (16) admits a very simple
special solution:
$\displaystyle A_{x}=-\frac{1}{4}\kappa W.$ (18)
Then, Eq. (15) enables us to write $V$ in terms of superpotential:
$\displaystyle
V=\frac{1}{2}\left(\frac{dW}{d\phi}\right)^{2}-\frac{1}{8}\kappa
W^{2}+\frac{\Lambda}{\kappa}.$ (19)
Finally, the general solution of Eq. (13) gives a simple relation between
dilaton and warp factor:
$\displaystyle\varphi=2A+\beta\int e^{-A}dx+\varphi_{0},$
where $\beta$ and $\varphi_{0}$ are two integration constants. Since the
field equations contain only the derivatives of the dilaton, the value of
$\varphi_{0}$ is irrelevant to the solution of the other variables, and can be
taken as $\varphi_{0}=0$. Besides, to be consistent with Eq. (11), $\beta$ must
be set to zero, so
$\displaystyle\varphi=2A.$ (20)
Eqs. (17)-(20) constitute the first-order superpotential formalism of the
present model. Exact kink solutions can be derived by choosing proper
superpotentials. The freedom of choosing a superpotential comes from the fact
that there are four unknown variables ($A,\phi,\varphi$ and $V$) but only
three independent equations. Taking a superpotential amounts to specifying one
of the four unknown variables.
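The internal consistency of the formalism can be checked symbolically: substituting Eqs. (17) and (18) into Eq. (15) must reproduce the potential (19) for an arbitrary superpotential. A minimal sketch, assuming the SymPy library (the code is illustrative, not part of the original derivation):

```python
import sympy as sp

# Symbolic check that Eqs. (17)-(19) reproduce Eq. (15),
# kappa V = Lambda - 2 A_xx - 2 A_x^2, for a generic W(phi).
p, kappa, Lam = sp.symbols('phi kappa Lambda')
W = sp.Function('W')(p)
Wp = sp.diff(W, p)

phi_x = Wp                                  # Eq. (17)
A_x = -kappa*W/4                            # Eq. (18)
V = Wp**2/2 - kappa*W**2/8 + Lam/kappa      # Eq. (19)

# On the first-order flow, A_xx = (dA_x/dphi) * phi_x
A_xx = sp.diff(A_x, p)*phi_x
residual = sp.expand(kappa*V - (Lam - 2*A_xx - 2*A_x**2))
print(residual)   # 0
```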
## 3 Exact solutions
In this section, we show how to use the superpotential formalism to derive
exact self-gravitating kink solutions. We first reproduce Stötzel’s solution
and then report a new solution.
### 3.1 Reproducing Stötzel’s solution
In fact, the superpotential formalism presented in the last section was
derived and used, although implicitly, by Stötzel Stoetzel1995 . Instead of
choosing a superpotential $W(\phi)$, Stötzel started with the sine-Gordon
potential
$\displaystyle V(\phi)=2m^{2}\sin^{2}\frac{\phi}{2}.$ (21)
He observed that when $\kappa=\frac{\lambda}{4m^{2}-\lambda}$, Eq. (19)
supports two solutions of the superpotential:
$\displaystyle W_{\pm}=\pm
2\sqrt{4m^{2}-\lambda}\cos\left(\frac{\phi}{2}\right),$ (22)
where $0<\lambda\equiv\frac{2\Lambda}{\kappa}<4m^{2}$. The solution of
$\phi(x)$ corresponding to $W_{-}$ can be obtained by integrating Eq. (17),
and the result turns out to be the sine-Gordon kink Stoetzel1995 :
$\displaystyle\phi_{K}(x)=4\arctan\left(e^{M(x-x_{0})}\right).$ (23)
Here $x_{0}$ is an integration constant that represents the position of the
kink, and will be set to zero from now on. The constant $M$ is defined as
$M\equiv\frac{1}{2}\sqrt{4m^{2}-\lambda}$. Obviously, $M\in(0,m)$. The
solution corresponding to $W_{+}$ is an antikink
$\displaystyle\phi_{\bar{K}}(x)=4\arctan\left(e^{-Mx}\right),$ (24)
which is similar to the kink in many aspects. Thus, we will focus on the kink
solution only, and drop the subscript $K$ from now on.
Plugging the solutions of $W(\phi)$ and $\phi(x)$ into Eq. (18), one
immediately obtains the expression of the warp factor:
$\displaystyle A(x)=A_{0}-\frac{\lambda}{4M^{2}}\ln(2\cosh(Mx)),$ (25)
which further reduces to Stoetzel1995
$\displaystyle A(x)$ $\displaystyle=$
$\displaystyle-\frac{\lambda}{4M^{2}}\ln\cosh(Mx)$ (26) $\displaystyle=$
$\displaystyle-\kappa\ln\cosh(Mx)$
after taking the integration constant $A_{0}=\frac{\lambda}{4M^{2}}\ln 2$. Obviously,
this warp factor describes an asymptotic AdS2 geometry. Finally, the dilaton
field reads
$\displaystyle\varphi(x)$ $\displaystyle=$ $\displaystyle
2A(x)=-2\kappa\ln\cosh(Mx).$ (27)
The profiles of $\phi$, $A$ and $\varphi$ are plotted in Fig. 1.
Figure 1: The shapes of some important variables in Stötzel’s solution,
including (a) the scalar field, (b) the warp factor and the dilaton field. The
parameters are taken as $\kappa=1$, $m=\sqrt{2}$ and $\lambda=4$, therefore
$M=1$ and $\Lambda=2$.
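Stötzel’s solution can also be spot-checked against the first-order equations (17) and (18). The sketch below assumes the SymPy library and sample parameter values ($M=\kappa=1$, as in Fig. 1); it is an illustration, not part of the original paper:

```python
import sympy as sp

# Numerical spot-check of Stotzel's solution against Eqs. (17)-(18).
# M and kappa below are sample values chosen for illustration.
x = sp.symbols('x', real=True)
M, kappa = 1, 1

phi = 4*sp.atan(sp.exp(M*x))        # sine-Gordon kink, Eq. (23)
A = -kappa*sp.log(sp.cosh(M*x))     # warp factor, Eq. (26)
W = -4*M*sp.cos(phi/2)              # W_- of Eq. (22), using 4m^2 - lambda = 4M^2
dWdphi = 2*M*sp.sin(phi/2)          # dW/dphi evaluated on the solution

eq17 = sp.diff(phi, x) - dWdphi     # should vanish identically
eq18 = sp.diff(A, x) + kappa*W/4    # should vanish identically
residual = max(abs(float(e.subs(x, v)))
               for e in (eq17, eq18) for v in (-2.0, 0.0, 1.5))
print(residual < 1e-12)   # True
```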
### 3.2 A polynomial superpotential
As shown repeatedly in the study of 5D thick brane models, it is quite easy to
construct exact self-gravitating kink solutions once the superpotential
formalism is established. In the following discussions, we will take
$\Lambda=0$ for simplicity, as it can be absorbed into the definition of
$V(\phi)$.
Consider a simple polynomial superpotential with a parameter $c$ EtoSakai2003 ;
TakamizuMaeda2006 ; BazeiaMenezesRocha2014
$\displaystyle W=c+\phi\left(1-\frac{\phi^{2}}{3}\right).$ (28)
Its derivative vanishes at $\phi_{\pm}=\pm 1$, where
$W(\phi_{\pm})=\pm\frac{2}{3}+c$. With this superpotential, one obtains
BazeiaMenezesRocha2014
$\displaystyle\phi(x)$ $\displaystyle=$ $\displaystyle\tanh(x),$ (29)
$\displaystyle\varphi(x)$ $\displaystyle=$ $\displaystyle 2A(x),$ (30)
$\displaystyle A(x)$ $\displaystyle=$
$\displaystyle\frac{1}{24}\kappa\left[-6cx+\text{sech}^{2}(x)-4\ln(\cosh(x))-1\right],$
(31) $\displaystyle V(\phi)$ $\displaystyle=$
$\displaystyle-\frac{1}{72}\kappa\left(-3c+\phi^{3}-3\phi\right)^{2}+\frac{1}{2}\left(\phi^{2}-1\right)^{2}.$
(32)
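The same kind of spot-check applies to the polynomial solution (29)-(31); in the sketch below (again assuming SymPy), the values of $\kappa$ and $c$ are illustrative assumptions:

```python
import sympy as sp

# Numerical spot-check of the polynomial-superpotential solution,
# Eqs. (28)-(31), against Eqs. (17)-(18). kappa and c are sample values.
x = sp.symbols('x', real=True)
kappa, c = 1, sp.Rational(1, 3)

phi = sp.tanh(x)                              # Eq. (29)
W = c + phi*(1 - phi**2/3)                    # Eq. (28)
A = kappa*(-6*c*x + sp.sech(x)**2
           - 4*sp.log(sp.cosh(x)) - 1)/24     # Eq. (31)

eq17 = sp.diff(phi, x) - (1 - phi**2)         # phi_x = dW/dphi
eq18 = sp.diff(A, x) + kappa*W/4              # A_x = -kappa W/4
residual = max(abs(float(e.subs(x, v)))
               for e in (eq17, eq18) for v in (-1.5, 0.0, 2.0))
print(residual < 1e-12)   # True
```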
The asymptotic behaviors of the warp factor and the scalar potential are
$\displaystyle A_{\pm}(x)$ $\displaystyle=$ $\displaystyle-\frac{1}{4}\kappa
W(\phi_{\pm})x=-\frac{1}{4}\kappa(\frac{2}{3}\pm c)|x|,$ (33) $\displaystyle
V_{\pm}$ $\displaystyle=$ $\displaystyle-\frac{1}{72}(3c\pm 2)^{2}\kappa.$
(34)
Depending on the value of $c$, there are four different situations
BazeiaMenezesRocha2014 :
1. 1.
$c=0$: In this case, the kink connects two equivalent AdS2 spaces
symmetrically, and $V_{+}=V_{-}=-\frac{1}{18}\kappa$.
2. 2.
$0<|c|<\frac{2}{3}$: The kink connects two distinct AdS2 spaces.
3. 3.
$|c|=\frac{2}{3}$: The kink connects an AdS2 space and a 2D Minkowski space
(M2) asymmetrically. This situation is of particular interest when
considering kink collisions in asymptotically AdS space-time TakamizuMaeda2006 ;
OmotaniSaffinLouko2011 .
4. 4.
$|c|>\frac{2}{3}$: The warp factor diverges at one side of the kink.
The behavior of $e^{A}$ for different values of $c$ has been plotted in Fig.
2. Obviously, for $c\neq 0$, the warp factor is asymmetric.
Figure 2: Plots of warp factor $e^{A(x)}$ of the polynomial model with
$\kappa=1$.
## 4 Linear stability analysis
In this section, we discuss the linear stability of the self-gravitating kink
solutions against small perturbations. This issue has been studied extensively
in 5D brane world models DeWolfeFreedmanGubserKarch2000 ; Giovannini2001a ;
Giovannini2002 ; Giovannini2003 ; ZhongLiu2013 , but remains untouched in the
case of 2D. The reduction of dimensions and the introduction of the dilaton
field make it impossible to analyze the linear stability of 2D self-gravitating
kinks by simply copying the stability analysis of 5D thick branes. For example,
there are no vector or tensor perturbations in 2D, so the traditional scalar-
vector-tensor decomposition Giovannini2002 ; ZhongLiu2013 is no longer needed.
Besides, in 2D there is no way to eliminate the non-minimal gravity-dilaton
coupling by a conformal transformation.
It is convenient to discuss the linear stability in the conformally flat
coordinates
$\displaystyle ds^{2}=e^{2A(r)}\eta_{\mu\nu}dx^{\mu}dx^{\nu},$ (35)
where $r$ is defined through $dr\equiv e^{-A(x)}dx$. For simplicity, we use a
prime and an overdot to represent the derivatives with respect to $r$ and $t$,
respectively.
In these coordinates, the Einstein equations take the following form:
$\displaystyle\kappa\phi^{\prime 2}$ $\displaystyle=$ $\displaystyle
4A^{\prime}\varphi^{\prime}-2\varphi^{\prime\prime}-\varphi^{\prime 2},$ (36)
$\displaystyle\varphi^{\prime\prime}$ $\displaystyle=$ $\displaystyle
e^{2A}(\Lambda-\kappa V).$ (37)
The equations of motion for the scalar and dilaton fields are
$\displaystyle\phi^{\prime\prime}$ $\displaystyle=$ $\displaystyle
e^{2A}\frac{dV}{d\phi},$ (38)
and
$\displaystyle\varphi^{\prime\prime}$ $\displaystyle=$ $\displaystyle
2A^{\prime\prime},$ (39)
respectively. Obviously, the general solution of Eq. (39) is $\varphi=2A+\beta
r+\varphi_{0}$, but as stated before, we will take $\beta=0=\varphi_{0}$.
Equation (16) becomes
$\displaystyle
2A^{\prime\prime\prime}-4A^{\prime}A^{\prime\prime}+\kappa\phi^{\prime}\phi^{\prime\prime}=0,$
(40)
which, after integration, gives
$\displaystyle
A^{\prime\prime}-{A^{\prime}}^{2}+\frac{1}{4}\kappa{\phi^{\prime}}^{2}=0,$
(41)
where the integration constant has been taken to be zero.
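That Eq. (41) is a first integral of Eq. (40) can be confirmed symbolically for arbitrary functions $A(r)$ and $\phi(r)$; a sketch assuming SymPy:

```python
import sympy as sp

# Check that the derivative of Eq. (41) reproduces Eq. (40)
# for arbitrary functions A(r), phi(r).
r, kappa = sp.symbols('r kappa')
A = sp.Function('A')(r)
phi = sp.Function('phi')(r)

eq41 = sp.diff(A, r, 2) - sp.diff(A, r)**2 + kappa*sp.diff(phi, r)**2/4
eq40 = (2*sp.diff(A, r, 3) - 4*sp.diff(A, r)*sp.diff(A, r, 2)
        + kappa*sp.diff(phi, r)*sp.diff(phi, r, 2))

residual = sp.expand(2*sp.diff(eq41, r) - eq40)
print(residual)   # 0
```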
Now, let us consider small field perturbations around an arbitrary static
background solution $\\{\varphi(r),\phi(r),g_{\mu\nu}(r)\\}$:
$\displaystyle\varphi(r)+\delta\varphi(r,t),\quad\phi(r)+\delta\phi(r,t),\quad
g_{\mu\nu}(r)+\delta g_{\mu\nu}(r,t).$ (42)
We also define
$\displaystyle\delta g_{\mu\nu}(r,t)\equiv e^{2A(r)}h_{\mu\nu}(r,t),$ (43)
for convenience.
In the linear perturbation analysis of cosmological or brane world models, one
usually decomposes $h_{\mu\nu}$ into scalar, vector and tensor sectors
MukhanovFeldmanBrandenberger1992 ; KodamaSasaki1984 . Each sector can be
discussed independently. In the present case, we have only one spatial
dimension and no such decomposition is needed. So we will directly deal with
the components of the metric perturbation
$\displaystyle h_{\mu\nu}=\left(\begin{array}[]{cc}h_{00}(r,t)&\Phi(r,t)\\\
\Phi(r,t)&h_{rr}(r,t)\\\ \end{array}\right),$ (46)
where we have renamed $h_{01}=h_{10}$ as $\Phi$, and $h_{11}$ as $h_{rr}$.
To the first order, the perturbation of the metric inverse is given by
$\displaystyle\delta g^{\mu\nu}=-e^{-2A}h^{\mu\nu}.$ (47)
Note that the indices of $h$ are always raised or lowered with
$\eta_{\mu\nu}$, thus,
$\displaystyle
h^{\mu\nu}\equiv\eta^{\mu\rho}\eta^{\nu\sigma}h_{\rho\sigma}=\left(\begin{array}[]{cc}h_{00}&-\Phi\\\
-\Phi&h_{rr}\\\ \end{array}\right).$ (50)
After linearization, the Einstein equations (6) lead to three nontrivial
perturbation equations, namely, the $(0,0)$ component:
$\displaystyle
2A^{\prime}\delta\varphi^{\prime}-2A^{\prime}\varphi^{\prime}h_{rr}-2\delta\varphi^{\prime\prime}-\delta\varphi^{\prime}\varphi^{\prime}+h_{rr}^{\prime}\varphi^{\prime}$
(51) $\displaystyle+$ $\displaystyle
2h_{rr}\varphi^{\prime\prime}+\frac{1}{2}h_{rr}\varphi^{\prime
2}=\kappa\left(\phi^{\prime}\delta\phi^{\prime}+\phi^{\prime\prime}\delta\phi-\frac{1}{2}\phi^{\prime
2}h_{rr}\right),$
the $(0,1)$ or $(1,0)$ components:
$\displaystyle
2A^{\prime}\delta\varphi-2\delta\varphi^{\prime}-\varphi^{\prime}\delta\varphi+\varphi^{\prime}{h_{rr}}=\kappa\phi^{\prime}\delta\phi,$
(52)
and the $(1,1)$ component:
$\displaystyle
2A^{\prime}\delta\varphi^{\prime}-2A^{\prime}\varphi^{\prime}h_{rr}-\delta\varphi^{\prime}\varphi^{\prime}-2\ddot{\delta\varphi}+\frac{1}{2}h_{rr}\varphi^{\prime
2}+\Xi\varphi^{\prime}=\kappa\left(\phi^{\prime}\delta\phi^{\prime}-\phi^{\prime\prime}\delta\phi-\frac{1}{2}\phi^{\prime
2}h_{rr}\right).$ (53)
Here we have defined a new variable $\Xi\equiv 2\dot{\Phi}-{h}_{00}^{\prime}$.
One can verify that, after using the background equations (36)-(39), Eq. (51)
reduces to Eq. (52).
Another independent equation comes from the perturbation of the scalar
equation of motion:
$\displaystyle-\ddot{\delta\phi}+\delta\phi^{\prime\prime}+2A^{\prime}\frac{\phi^{\prime\prime}}{\phi^{\prime}}\delta\phi-\frac{\phi^{\prime\prime\prime}}{\phi^{\prime}}\delta\phi-\frac{1}{2}\phi^{\prime}h_{rr}^{\prime}-\phi^{\prime\prime}h_{rr}+\frac{1}{2}\phi^{\prime}\Xi=0.$
(54)
One can also linearize the dilaton equation (7), but it offers no further
information.
Therefore, we have three independent perturbation equations, i.e., (52)-(54).
But one should note that the perturbation variables are not all independent.
The invariance of the dynamical equations under coordinate transformations
$\displaystyle x^{\mu}\to\tilde{x}^{\mu}=x^{\mu}+\xi^{\mu}(r,t)$ (55)
induces an invariance of the linear perturbation equations (52)-(54) under the
following gauge transformations:
$\displaystyle\Delta h_{\mu\nu}$ $\displaystyle\equiv$
$\displaystyle\widetilde{h}_{\mu\nu}-h_{\mu\nu}=-2\xi_{(\mu,\nu)}-2\eta_{\mu\nu}A^{\prime}\xi^{1},$
(56) $\displaystyle\Delta\delta\phi$ $\displaystyle\equiv$
$\displaystyle\widetilde{\delta\phi}-\delta\phi=-\phi^{\prime}\xi^{1},$ (57)
$\displaystyle\Delta\delta\varphi$ $\displaystyle\equiv$
$\displaystyle\widetilde{\delta\varphi}-\delta\varphi=-\varphi^{\prime}\xi^{1}.$
(58)
The components of $h_{\mu\nu}$ transform as
$\displaystyle\Delta h_{00}$ $\displaystyle=$ $\displaystyle
2\partial_{t}\xi^{0}+2A^{\prime}\xi^{1},$ (59) $\displaystyle\Delta\Phi$
$\displaystyle=$ $\displaystyle-\partial_{t}\xi^{1}+\partial_{r}\xi^{0},$ (60)
$\displaystyle\Delta h_{rr}$ $\displaystyle=$
$\displaystyle-2\partial_{r}\xi^{1}-2A^{\prime}\xi^{1},$ (61)
which means that the variable $\Xi=2\dot{\Phi}-{h}_{00}^{\prime}$ transforms as
$\displaystyle\Delta\Xi=-2\left[\ddot{\xi}^{1}+\left(A^{\prime}\xi^{1}\right)^{\prime}\right].$
(62)
We see that the gauge degree of freedom $\xi^{0}$ has been canceled.
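Explicitly, combining (59) and (60),
$\displaystyle\Delta\Xi=2\partial_{t}\Delta\Phi-\left(\Delta h_{00}\right)^{\prime}=2\left(-\ddot{\xi}^{1}+\dot{\xi}^{0\prime}\right)-2\dot{\xi}^{0\prime}-2\left(A^{\prime}\xi^{1}\right)^{\prime}=-2\left[\ddot{\xi}^{1}+\left(A^{\prime}\xi^{1}\right)^{\prime}\right],$
where the two $\dot{\xi}^{0\prime}$ terms cancel.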
The residual gauge degree of freedom in $\xi^{1}$ allows us to eliminate one
of the perturbation variables. Here we simply take $\delta\varphi=0$, with
which Eq. (52) reduces to
$\displaystyle\varphi^{\prime}{h_{rr}}=\kappa\phi^{\prime}\delta\phi,$ (63)
and Eq. (53) becomes
$\displaystyle-2A^{\prime}\varphi^{\prime}h_{rr}+\frac{1}{2}h_{rr}\varphi^{\prime
2}+\Xi\varphi^{\prime}=\kappa\left(\phi^{\prime}\delta\phi^{\prime}-\phi^{\prime\prime}\delta\phi-\frac{1}{2}\phi^{\prime
2}h_{rr}\right).$ (64)
After eliminating $h_{rr}$ and $\Xi$, Eq. (54) can be written as a wave
equation for $\delta\phi$:
$\displaystyle\ddot{\delta\phi}-\delta\phi^{\prime\prime}+V_{\text{eff}}(r)\delta\phi=0,$
(65)
where the effective potential reads
$\displaystyle
V_{\text{eff}}(r)=4A^{\prime\prime}-2A^{\prime}\frac{\phi^{\prime\prime}}{\phi^{\prime}}-\varphi^{\prime\prime}+2\left(\frac{\varphi^{\prime\prime}}{\varphi^{\prime}}\right)^{2}-2\frac{\varphi^{\prime\prime\prime}}{\varphi^{\prime}}+\frac{\phi^{\prime\prime\prime}}{\phi^{\prime}}.$
(66)
Using Eqs. (39)-(41), one can obtain a useful identity:
$\displaystyle\varphi^{\prime\prime}=\frac{\varphi^{\prime\prime\prime}}{\varphi^{\prime}}+\frac{\phi^{\prime\prime}}{\phi^{\prime}}\varphi^{\prime}-2\frac{\phi^{\prime\prime}}{\phi^{\prime}}\frac{\varphi^{\prime\prime}}{\varphi^{\prime}},$
(67)
which enables us to rewrite the effective potential as
$\displaystyle
V_{\text{eff}}=\frac{\phi^{\prime\prime\prime}}{\phi^{\prime}}-2\frac{\phi^{\prime\prime}}{\phi^{\prime}}\frac{\varphi^{\prime\prime}}{\varphi^{\prime}}+2\left(\frac{\varphi^{\prime\prime}}{\varphi^{\prime}}\right)^{2}-\frac{\varphi^{\prime\prime\prime}}{\varphi^{\prime}},$
(68)
or, in a more compact form
$\displaystyle
V_{\text{eff}}=\frac{f^{\prime\prime}}{f},\quad\textrm{with}\quad
f\equiv\frac{\phi^{\prime}}{\varphi^{\prime}}.$ (69)
If we take $\delta\phi=\psi(r)e^{iwt}$, Eq. (65) becomes a Schrödinger-like
equation for $\psi(r)$:
$\displaystyle-\psi^{\prime\prime}+V_{\text{eff}}\psi=w^{2}\psi.$ (70)
It is interesting to note that the Hamiltonian operator is factorizable:
$\displaystyle\hat{H}=-\frac{d^{2}}{dr^{2}}+V_{\text{eff}}=\hat{\mathcal{A}}\hat{\mathcal{A}}^{\dagger},$
(71)
with
$\displaystyle\mathcal{A}=\frac{d}{dr}+\frac{{f}^{\prime}}{f},\quad\mathcal{A}^{\dagger}=-\frac{d}{dr}+\frac{{f}^{\prime}}{f}.$
(72)
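The factorization can be checked directly: acting on any $\psi$,
$\displaystyle\hat{\mathcal{A}}\hat{\mathcal{A}}^{\dagger}\psi=\left(\frac{d}{dr}+\frac{f^{\prime}}{f}\right)\left(-\psi^{\prime}+\frac{f^{\prime}}{f}\psi\right)=-\psi^{\prime\prime}+\left[\left(\frac{f^{\prime}}{f}\right)^{\prime}+\left(\frac{f^{\prime}}{f}\right)^{2}\right]\psi=-\psi^{\prime\prime}+\frac{f^{\prime\prime}}{f}\psi,$
which reproduces the Hamiltonian of Eq. (70) with $V_{\text{eff}}=f^{\prime\prime}/f$.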
According to the theory of supersymmetric quantum mechanics
CooperKhareSukhatme1995 , the eigenvalues of a factorizable Hamiltonian
operator are positive semi-definite, namely, $w^{2}\geq 0$. Therefore, static
kink solutions are stable against linear perturbations. The zero mode
($w_{0}=0$) satisfies $\mathcal{A}^{\dagger}\psi_{0}(r)=0$, and the solution
reads
$\displaystyle\psi_{0}(r)\propto
f=\frac{\phi^{\prime}}{\varphi^{\prime}}=\frac{\phi^{\prime}}{2A^{\prime}}.$
(73)
Obviously, for any solution with a non-monotonic warp factor, $\psi_{0}(r)$
diverges at the extrema of $A$, and would be unnormalizable. Since it is not
always possible to obtain the explicit expression of $x(r)$, it is useful to
transform $V_{\text{eff}}$ back to the $x$-coordinates:
$\displaystyle V_{\text{eff}}(x)$ $\displaystyle=$ $\displaystyle
e^{2A}\left(A_{x}\frac{f_{x}}{f}+\frac{f_{xx}}{f}\right),$ (74)
with $f(x)=\phi_{x}/\varphi_{x}$.
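If the two coordinates are related by $\partial_{r}=e^{A}\partial_{x}$ (i.e., $dr=e^{-A}dx$, as the form of (74) implies), the transformed potential follows from the chain rule:
$\displaystyle f^{\prime\prime}=e^{A}\partial_{x}\left(e^{A}f_{x}\right)=e^{2A}\left(A_{x}f_{x}+f_{xx}\right),\qquad V_{\text{eff}}=\frac{f^{\prime\prime}}{f}=e^{2A}\left(A_{x}\frac{f_{x}}{f}+\frac{f_{xx}}{f}\right).$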
It should be noted that the stability analysis presented so far is rather
general: it does not depend on the specific form of the solution, but only on
the general form of the metric (35) and of the action (4).
Now, we move on to the specific solutions. For Stötzel’s sine-Gordon model and
the polynomial model, the effective potentials read
$\displaystyle V_{\text{eff}}(x)$ $\displaystyle=$ $\displaystyle
M^{2}\cosh^{-2\kappa}(Mx)\left[\kappa+2\text{csch}^{2}(Mx)+1\right],$ (75)
and
$\displaystyle V_{\text{eff}}(x)$ $\displaystyle=$
$\displaystyle\frac{\exp\left[\frac{1}{12}\left(-6cx+\text{sech}^{2}(x)-1\right)\right]}{{12\sqrt[3]{\cosh(x)}\left[3c+\tanh(x)\left(\text{sech}^{2}(x)+2\right)\right]^{2}}}\left\\{-\text{sech}^{2}(x)\left[296\right.\right.$
(76) $\displaystyle+$
$\displaystyle\left.\left.702c^{2}+\left(27c^{2}-424\right)\text{sech}^{2}(x)+118\text{sech}^{4}(x)+\text{sech}^{6}(x)+\text{sech}^{8}(x)\right]\right.$
$\displaystyle+$
$\displaystyle\left.18c\tanh(x)\left[3c^{2}+23\text{sech}^{4}(x)-32\text{sech}^{2}(x)+36\right]+540c^{2}+208\right\\},$
respectively. For the latter case, we have taken $\kappa=1$ for simplicity.
The profiles of the $V_{\text{eff}}(x)$ are depicted in Fig. 3. For Stötzel’s
model, we take $m=\sqrt{2}$, $\Lambda=2\kappa$ such that
$M\equiv\frac{1}{2}\sqrt{4m^{2}-\frac{2\Lambda}{\kappa}}=1$, while keep
$\kappa$ as a free parameter. We see that $V_{\text{eff}}$ is positive and
divergent at $x=0$ for $\kappa=0.2$, 1 and 3.
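As a quick numerical illustration (our own sketch, not code from the paper), Eq. (75) can be evaluated directly; away from the $x=0$ singularity the values confirm that the sine-Gordon potential is positive and grows without bound as $x\to 0$:

```python
import numpy as np

def v_eff_sg(x, M=1.0, kappa=1.0):
    """Effective potential of Eq. (75) for Stoetzel's sine-Gordon model."""
    csch2 = 1.0 / np.sinh(M * x) ** 2
    return M**2 * np.cosh(M * x) ** (-2.0 * kappa) * (kappa + 2.0 * csch2 + 1.0)

# sample the potential away from the x = 0 singularity
x = np.linspace(0.05, 5.0, 400)
v02, v1, v3 = (v_eff_sg(x, kappa=k) for k in (0.2, 1.0, 3.0))
```

With $M=1$ this reproduces the qualitative behavior seen in Fig. 3: positivity for all sampled $\kappa$, divergence near the origin, and exponential decay at large $|x|$.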
For the polynomial model, we take $c=0$, 1/3, 2/3 and 1 as examples. We see
that $V_{\text{eff}}(x)$ diverges at $x=0$ for both $c=0$ and $c=1/3$, blows
up as $x\to-\infty$ for $c=1$, but remains finite for $c=2/3$.
Figure 3: Plots of $V_{\text{eff}}(x)$. For the polynomial model with $c=2/3$,
$V_{\text{eff}}(x)$ remains finite and approaches
$4\sqrt[3]{2}e^{-\frac{1}{12}}\approx 4.637$ as $x\to-\infty$.
It is worth mentioning that in many 5D thick brane models the effective
potentials of the scalar perturbation also have singularities, and the
corresponding scalar zero modes are usually unnormalizable. Without
normalizable scalar zero modes, these models are free of the problem of long
range scalar fifth force Giovannini2002 ; Giovannini2001a ; ZhongLiu2013 . For
the 2D self-gravitating kink solutions considered in this paper, however, we
find an unusual situation where the zero mode might be normalizable, namely,
the polynomial model with $c>2/3$. In this case, the zero mode reads
$\displaystyle\psi_{0}(x)=\mathcal{N}\frac{\phi_{x}}{2A_{x}}=-\mathcal{N}\frac{6\text{sech}^{2}(x)}{3c+\tanh(x)\left(\text{sech}^{2}(x)+2\right)},$
(77)
where $\mathcal{N}$ is the normalization constant, and we have taken
$\kappa=1$. The normalization of the zero mode requires
$\displaystyle 1$ $\displaystyle=$
$\displaystyle\int_{-\infty}^{+\infty}dr\psi_{0}^{2}(r)=\mathcal{N}^{2}\int_{-\infty}^{+\infty}dxe^{-A}\left(\frac{\phi_{x}}{2A_{x}}\right)^{2}.$
(78)
The integration can be done numerically, for instance, taking $c=1$, 1.2 and
1.5 we obtain $|\mathcal{N}|\approx$ 0.334, 0.446 and 0.598, respectively.
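The normalization integral (78) is straightforward to evaluate numerically. The sketch below is our own illustration (the trapezoidal rule and the toy Gaussian check are not from the paper); the warp factor and zero mode are passed in as callables:

```python
import numpy as np

def normalization_constant(A, psi0, x_min=-20.0, x_max=20.0, n=4001):
    """Return N from 1 = N^2 * integral of e^{-A(x)} psi0(x)^2 dx (cf. Eq. (78))."""
    x = np.linspace(x_min, x_max, n)
    g = np.exp(-A(x)) * psi0(x) ** 2
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x))  # trapezoidal rule
    return 1.0 / np.sqrt(integral)

# Toy check: A = 0 and a Gaussian "zero mode" give integral e^{-x^2} dx = sqrt(pi),
# hence N = pi^(-1/4).
N = normalization_constant(lambda x: np.zeros_like(x), lambda x: np.exp(-x**2 / 2))
```

For the polynomial model one would substitute the explicit $A(x)$ and $\psi_{0}(x)$ of Eq. (77).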
Plots of $\psi_{0}(x)$ are depicted in Fig. 4.
Figure 4: Plots of $\psi_{0}(x)$ for the polynomial model with $\kappa=1$,
$c=1$, 1.2 and 1.5.
## 5 Summary and outlook
In this work, we revisited smooth self-gravitating kink solutions of a type of
2D dilaton gravity proposed by Mann et al. MannMorsinkSikkemaSteele1991 . We
first showed that exact kink solutions can be constructed with the aid of a
first-order superpotential formalism (17)-(20) of the dynamical equations.
This formalism was already derived and used by Stötzel in 1995 for the 2D
self-gravitating sine-Gordon model Stoetzel1995 , but its virtue was not
completely appreciated until the advent of 5D thick brane world models. After
reproducing Stötzel’s solution Stoetzel1995 , we reported another kink
solution generated by a polynomial superpotential used in some 5D brane world
models EtoSakai2003 ; TakamizuMaeda2006 ; BazeiaMenezesRocha2014 .
The main contribution of the present work, however, is a general analysis on
the stability of static kink solutions under small linear perturbations. After
eliminating the redundant gauge degrees of freedom, we derived a Schrödinger-
like equation for the physical perturbation. We found that the Hamiltonian
operator can be factorized as
$\hat{H}=\hat{\mathcal{A}}\hat{\mathcal{A}}^{\dagger}$, which implies the
stability of the solutions. Besides, the zero mode takes the form
$\psi_{0}(r)\propto
f\equiv\frac{\phi^{\prime}}{\varphi^{\prime}}=\frac{\phi^{\prime}}{2A^{\prime}}$,
which diverges at the extrema of $A$. For Stötzel’s model, the zero mode is
not normalizable, because the symmetric solution of the warp factor
corresponds to a singularity of $\psi_{0}(r)$ at $r=0$. For the polynomial
model, however, the zero mode is normalizable provided $c>2/3$.
It is natural to ask if the superpotential formalism and stability analysis of
the present work can also be extended to other 2D dilaton gravity models, such
as the CGHS model CallanGiddingsHarveyStrominger1992 or other more general
models IkedaIzawa1993 ; TakahashiKobayashi2019 . The superpotential formalism
also makes it possible to discuss the application of the present model to the
study of holographic RG flow BianchiFreedmanSkenderis2001 ;
KiritsisNittiSilvaPimenta2017 . We leave these questions for future work.
## Acknowledgements
This work was supported by the National Natural Science Foundation of China
(Grant Nos. 11847211, 11605127), Fundamental Research Funds for the Central
Universities (Grant No. xzy012019052), and China Postdoctoral Science
Foundation (Grant No. 2016M592770).
## References
* (1) M. Henneaux, _Quantum Gravity in Two-Dimensions: Exact Solution of The Jackiw Model_ , _Phys. Rev. Lett._ 54 (1985) 959.
* (2) S. de Alwis, _Quantization of a theory of 2-d dilaton gravity_ , _Phys. Lett. B_ 289 (1992) 278 [hep-th/9205069].
* (3) C. Vaz and L. Witten, _Formation and evaporation of a naked singularity in 2-d gravity_ , _Phys. Lett. B_ 325 (1994) 27 [hep-th/9311133].
* (4) C. Vaz and L. Witten, _Do naked singularities form?_ , _Class. Quant. Grav._ 13 (1996) L59 [gr-qc/9511018].
* (5) C. G. Callan, Jr., S. B. Giddings, J. A. Harvey and A. Strominger, _Evanescent black holes_ , _Phys. Rev. D_ 45 (1992) R1005 [hep-th/9111056].
* (6) A. Bilal and C. G. Callan, _Liouville models of black hole evaporation_ , _Nucl. Phys. B_ 394 (1993) 73 [hep-th/9205089].
* (7) J. G. Russo, L. Susskind and L. Thorlacius, _Black hole evaporation in (1+1)-dimensions_ , _Phys. Lett. B_ 292 (1992) 13 [hep-th/9201074].
* (8) J. G. Russo, L. Susskind and L. Thorlacius, _The Endpoint of Hawking radiation_ , _Phys. Rev. D_ 46 (1992) 3444 [hep-th/9206070].
* (9) J. G. Russo, L. Susskind and L. Thorlacius, _Cosmic censorship in two-dimensional gravity_ , _Phys. Rev. D_ 47 (1993) 533 [hep-th/9209012].
* (10) J. Brown, _Lower Dimensional Gravity_. World Scientific Publishing Co. Pte. Ltd., 1988.
* (11) L. Thorlacius, _Black hole evolution_ , _Nucl. Phys. B Proc. Suppl._ 41 (1995) 245 [hep-th/9411020].
* (12) D. Grumiller, W. Kummer and D. Vassilevich, _Dilaton gravity in two-dimensions_ , _Phys. Rept._ 369 (2002) 327 [hep-th/0204253].
* (13) J. Ambjorn, J. Jurkiewicz and R. Loll, _The Spectral Dimension of the Universe is Scale Dependent_ , _Phys. Rev. Lett._ 95 (2005) 171301 [hep-th/0505113].
* (14) P. Horava, _Spectral dimension of the universe in quantum gravity at a lifshitz point_ , _Phys. Rev. Lett._ 102 (2009) 161301 [0902.3657].
* (15) J. R. Mureika and D. Stojkovic, _Detecting Vanishing Dimensions Via Primordial Gravitational Wave Astronomy_ , _Phys. Rev. Lett._ 106 (2011) 101101 [1102.3434].
* (16) L. Anchordoqui, D. C. Dai, M. Fairbairn, G. Landsberg and D. Stojkovic, _Vanishing Dimensions and Planar Events at the LHC_ , _Mod. Phys. Lett. A_ 27 (2012) 1250021 [1003.5914].
* (17) D. Stojkovic, _Vanishing dimensions: A review_ , _Mod. Phys. Lett. A_ 28 (2013) 1330034 [1406.2696].
* (18) R. Loll, _Quantum Gravity from Causal Dynamical Triangulations: A Review_ , _Class. Quant. Grav._ 37 (2020) 013002 [1905.08669].
* (19) S. Carlip, _Dimension and Dimensional Reduction in Quantum Gravity_ , _Class. Quant. Grav._ 34 (2017) 193001 [1705.05417].
* (20) S. Sachdev and J. Ye, _Gapless spin fluid ground state in a random, quantum Heisenberg magnet_ , _Phys. Rev. Lett._ 70 (1993) 3339 [cond-mat/9212030].
* (21) A. Kitaev, _A simple model of quantum holography_ , 2015, http://online.kitp.ucsb.edu/online/entangled15/.
* (22) A. Almheiri and J. Polchinski, _Models of AdS 2 backreaction and holography_, _JHEP_ 11 (2015) 014 [1402.6334].
* (23) J. Maldacena, D. Stanford and Z. Yang, _Conformal symmetry and its breaking in two dimensional Nearly Anti-de-Sitter space_ , _PTEP_ 2016 (2016) 12C104 [1606.01857].
* (24) J. Maldacena and D. Stanford, _Remarks on the Sachdev-Ye-Kitaev model_ , _Phys. Rev. D_ 94 (2016) 106002 [1604.07818].
* (25) K. Jensen, _Chaos in AdS 2 Holography_, _Phys. Rev. Lett._ 117 (2016) 111601 [1605.06098].
* (26) V. Rosenhaus, _An introduction to the SYK model_ , 1807.03334.
* (27) G. Sárosi, _AdS 2 holography and the SYK model_, _PoS_ Modave2017 (2018) 001 [1711.08482].
* (28) D. A. Trunin, _Pedagogical introduction to SYK model and 2D Dilaton Gravity_ , 2002.12187.
* (29) R. Jackiw, _Lower Dimensional Gravity_ , _Nucl. Phys. B_ 252 (1985) 343.
* (30) C. Teitelboim, _Gravitation and Hamiltonian Structure in Two Space-Time Dimensions_ , _Phys. Lett. B_ 126 (1983) 41.
* (31) R. B. Mann, S. Morsink, A. Sikkema and T. Steele, _Semiclassical gravity in (1+1)-dimensions_ , _Phys. Rev. D_ 43 (1991) 3948.
* (32) A. Vilenkin and E. P. S. Shellard, _Cosmic Strings and Other Topological Defects_. Cambridge University Press, 2000.
* (33) T. Vachaspati, _Kinks And Domain Walls_. Cambridge University Press, 2006.
* (34) V. Dzhunushaliev, V. Folomeev and M. Minamitsuji, _Thick brane solutions_ , _Rept. Prog. Phys._ 73 (2010) 066901 [0904.1775].
* (35) Y.-X. Liu, _Introduction to Extra Dimensions and Thick Braneworlds_ , 1707.08541.
* (36) H.-S. Shin and K.-S. Soh, _Black hole formation by Sine-Gordon solitons in two-dimensional dilaton gravity_ , _Phys. Rev. D_ 52 (1995) 981 [hep-th/9501045].
* (37) H.-M. Johng, H.-S. Shin and K.-S. Soh, _Sine-gordon solitons coupled with dilaton gravity in two-dimensional spacetime_ , _Phys. Rev. D_ 53 (1996) 801.
* (38) J. Gegenberg and G. Kunstatter, _Geometrodynamics of sine-gordon solitons_ , _Phys. Rev. D_ 58 (1998) 124010.
* (39) M. Cadoni, _2-D extremal black holes as solitons_ , _Phys. Rev. D_ 58 (1998) 104001 [hep-th/9803257].
* (40) C. Vaz and L. Witten, _Soliton induced singularities in 2-d gravity and their evaporation_ , _Class. Quant. Grav._ 12 (1995) 2607 [gr-qc/9504037].
* (41) J. Yan and X. Qiu, _Sinh-Gordon matter field and a solvable model in two-dimensional gravity_ , _Gen. Rel. Grav._ 30 (1998) 1319.
* (42) J. Yan, S.-J. Wang and B.-Y. Tao, _A solvable model in two-dimensional gravity coupled to a nonlinear matter field_ , _Commun. Theor. Phys._ 35 (2001) 19.
* (43) B. Stötzel, _Two-dimensional gravitation and Sine-Gordon solitons_ , _Phys. Rev. D_ 52 (1995) 2192 [gr-qc/9501033].
* (44) K. Skenderis and P. K. Townsend, _Gravitational stability and renormalization-group flow_ , _Phys. Lett. B_ 468 (1999) 46 [hep-th/9909070].
* (45) O. DeWolfe, D. Z. Freedman, S. S. Gubser and A. Karch, _Modeling the fifth dimension with scalars and gravity_ , _Phys. Rev. D_ 62 (2000) 046008 [hep-th/9909134].
* (46) M. Gremm, _Four-dimensional gravity on a thick domain wall_ , _Phys. Lett. B_ 478 (2000) 434 [hep-th/9912060].
* (47) A. A. Izquierdo, W. G. Fuertes and J. M. Guilarte, _Self-gravitating kinks in two-dimensional pseudo-riemannian universes_ , _Phys. Rev. D_ 101 (2020) 036020.
* (48) L. Randall and R. Sundrum, _An alternative to compactification_ , _Phys. Rev. Lett._ 83 (1999) 4690 [hep-th/9906064].
* (49) L. Randall and R. Sundrum, _A large mass hierarchy from a small extra dimension_ , _Phys. Rev. Lett._ 83 (1999) 3370 [hep-ph/9905221].
* (50) M. Eto and N. Sakai, _Solvable models of domain walls in N = 1 supergravity_ , _Phys. Rev. D_ 68 (2003) 125001 [hep-th/0307276].
* (51) Y.-i. Takamizu and K.-i. Maeda, _Collision of domain walls in asymptotically anti de Sitter spacetime_ , _Phys. Rev. D_ 73 (2006) 103508 [hep-th/0603076].
* (52) D. Bazeia, R. Menezes and R. da Rocha, _A Note on Asymmetric Thick Branes_ , _Adv. High Energy Phys._ 2014 (2014) 276729 [1312.3864].
* (53) J. Omotani, P. M. Saffin and J. Louko, _Colliding branes and big crunches_ , _Phys. Rev. D_ 84 (2011) 063526 [1107.3938].
* (54) M. Giovannini, _Gauge invariant fluctuations of scalar branes_ , _Phys. Rev. D_ 64 (2001) 064023 [hep-th/0106041].
* (55) M. Giovannini, _Localization of metric fluctuations on scalar branes_ , _Phys. Rev. D_ 65 (2002) 064008 [hep-th/0106131].
* (56) M. Giovannini, _Scalar normal modes of higher dimensional gravitating kinks_ , _Classical Quantum Gravity_ 20 (2003) 1063 [gr-qc/0207116].
* (57) Y. Zhong and Y.-X. Liu, _Linearization of thick K-branes_ , _Phys. Rev. D_ 88 (2013) 024017 [1212.1871].
* (58) V. F. Mukhanov, H. A. Feldman and R. H. Brandenberger, _Theory of cosmological perturbations_ , _Phys. Rep._ 215 (1992) 203.
* (59) H. Kodama and M. Sasaki, _Cosmological perturbation theory_ , _Progr. Theoret. Phys. Suppl._ 78 (1984) 1.
* (60) F. Cooper, A. Khare and U. Sukhatme, _Supersymmetry and quantum mechanics_ , _Phys. Rep._ 251 (1995) 267 [hep-th/9405029].
* (61) N. Ikeda and K. I. Izawa, _General form of dilaton gravity and nonlinear gauge theory_ , _Prog. Theor. Phys._ 90 (1993) 237 [hep-th/9304012].
* (62) K. Takahashi and T. Kobayashi, _Generalized 2D dilaton gravity and kinetic gravity braiding_ , _Class. Quant. Grav._ 36 (2019) 095003 [1812.08847].
* (63) M. Bianchi, D. Z. Freedman and K. Skenderis, _How to go with an RG flow_ , _J. High Energy Phys._ 0108 (2001) 041 [hep-th/0105276].
* (64) E. Kiritsis, F. Nitti and L. Silva Pimenta, _Exotic RG Flows from Holography_ , _Fortsch. Phys._ 65 (2017) 1600120 [1611.05493].
# Inferred Linear Stability of Parker Solar Probe Observations using One- and
Two-Component Proton Distributions
K.G. Klein University of Arizona, Tucson, AZ, USA J.L. Verniero Space
Sciences Laboratory, University of California, Berkeley, CA 94720-7450, USA
B. Alterman Space Science and Engineering, Southwest Research Institute, San
Antonio, TX, USA S. Bale Physics Department, University of California,
Berkeley, CA 94720-7300, USA Space Sciences Laboratory, University of
California, Berkeley, CA 94720-7450, USA The Blackett Laboratory, Imperial
College London, London, SW7 2AZ, UK School of Physics and Astronomy, Queen
Mary University of London, London E1 4NS, UK A. Case Smithsonian
Astrophysical Observatory, Cambridge, MA, USA J.C. Kasper Smithsonian
Astrophysical Observatory, Cambridge, MA, USA K. Korreck Smithsonian
Astrophysical Observatory, Cambridge, MA, USA D. Larson Space Sciences
Laboratory, University of California, Berkeley, CA 94720-7450, USA E. Lichko
University of Arizona, Tucson, AZ, USA R. Livi Space Sciences Laboratory,
University of California, Berkeley, CA 94720-7450, USA M. McManus Space
Sciences Laboratory, University of California, Berkeley, CA 94720-7450, USA
M. Martinović University of Arizona, Tucson, AZ, USA A. Rahmati Space
Sciences Laboratory, University of California, Berkeley, CA 94720-7450, USA
M. Stevens Smithsonian Astrophysical Observatory, Cambridge, MA, USA P.
Whittlesey Space Sciences Laboratory, University of California, Berkeley, CA
94720-7450, USA
###### Abstract
The hot and diffuse nature of the Sun’s extended atmosphere allows it to
persist in non-equilibrium states for long enough that wave-particle
instabilities can arise and modify the evolution of the expanding solar wind.
Determining which instabilities arise, and how significant a role they play in
governing the dynamics of the solar wind, has been a decades-long process
involving in situ observations at a variety of radial distances. With new
measurements from Parker Solar Probe (PSP), we can study what wave modes are
driven near the Sun, and calculate what instabilities are predicted for
different models of the underlying particle populations. We model two hours-
long intervals of PSP/SPAN-i measurements of the proton phase-space density
during PSP’s fourth perihelion with the Sun using two commonly used
descriptions for the underlying velocity distribution. The linear stability
and growth rates associated with the two models are calculated and compared.
We find that both selected intervals are susceptible to resonant
instabilities, though the growth rates and kind of modes driven unstable vary
depending on if the protons are modeled using one or two components. In some
cases, the predicted growth rates are large enough to compete with other
dynamic processes, such as the nonlinear turbulent transfer of energy, in
contrast with relatively slower instabilities at larger radial distances from
the Sun.
solar wind — plasmas — instabilities — Sun: corona
## 1 Introduction
Wave-particle interactions are suspected of affecting the evolution of the
solar wind as it is accelerated from the Sun’s surface and expands into the
heliosphere; c.f. reviews in Matteini et al. (2012); Yoon (2017); Verscharen
et al. (2019). Such instabilities are driven by departures from local
thermodynamic equilibrium (LTE) that are frequently modeled using velocity
distributions with anisotropic temperatures $T_{\perp,j}$ and
$T_{\parallel,j}$ with respect to local magnetic field $\mathbf{B}$, relative
field-aligned drifts between constituent plasma populations $\Delta
v_{i,j}=(\mathbf{V}_{i}-\mathbf{V}_{j})\cdot\mathbf{B}/|\mathbf{B}|$, and
temperature disequilibrium between species $T_{i}\neq T_{j}$. The simultaneous
effects of multiple sources of free energy can complicate a simple linear
analysis; for instance, it has been found that the free energy contributions
to unstable behavior from different ion and electron species can be non-
negligible (Chen et al., 2016). To address this difficulty, previous works
have applied a numerical implementation of the Nyquist instability criterion
(Nyquist, 1932; Klein et al., 2017) to selected solar wind observations from
the Wind (Klein et al., 2018) and Helios (Klein et al., 2019) missions,
finding that a majority of intervals were unstable, including many intervals
that simple parametric models accounting for a single source of free energy
would have predicted to be stable.
Given the complexity of phase-space distributions typically found in weakly
collisional plasmas, a number of different schemes for modeling the underlying
velocity-space structure are frequently used; for instance, it is common to
treat the protons as a single, anisotropic bi-Maxwellian or kappa
distribution, or as a linear combination of core and relatively drifting beam
distributions, each with distinct parallel and perpendicular temperatures; see
the introduction of Alterman et al. (2018) for a review of solar wind
observations of secondary ion populations.
In this work, we select two hours-long time intervals observed by the SPAN-i
instrument from the SWEAP instrument suite (Kasper et al., 2015) on Parker
Solar Probe (PSP) (Fox et al., 2015) during its fourth encounter with the Sun,
where significant ion-scale wave activity is observed, similar to activity
previously reported in Bowen et al. (2020) and Verniero et al. (2020). We
generate both a one-component and two-component model for each measurement of
the proton velocity distribution, calculating and comparing the associated
linear stability. Using the different models produces significantly different
instabilities, either in the robustness of the associated growth rates or the
kinds of waves driven unstable. The two-component model generally predicts
ion-scale waves with characteristics more in line with the observed wave
activity than models using a single proton component. This suggests that using
overly simplistic models for ion distributions may neglect essential kinetic-
scale processes responsible for the generation of these waves, even if these
models capture macroscopic departures from LTE.
## 2 Data and Methodology
### 2.1 Parker Solar Probe Data
We select two hours-long sections from the outbound pass of PSP’s fourth
encounter with the Sun, when SPAN-i had sufficient coverage of the proton
velocity distribution to model $f_{p}(\mathbf{v})$, specifically Selection A:
2020/01/30 11:00-13:30 (SA, Fig. 1) and Selection B: 2020/02/01 00:10-02:00
(SB, Fig. 2). During both selections, ion-scale electromagnetic waves are
observed by the FIELDS instrument suite (Bale et al., 2016). Figs. 1 and 2
show the vector magnetic field components, as well as the trace power spectral
density normalized to an ansatz power-law distribution for the background
turbulent spectrum of $f^{-5/3}$, and the polarization of the transverse
components of the magnetic fields, where red (blue) indicates right-handed
(left-handed) circular polarization in the spacecraft frame. In SA, we see an
abundance of power above a $f^{-5/3}$ spectrum persist for several hours near
$3$ Hz. At the same frequencies, we see a clear signature (red) of right-hand
polarization persist for nearly the entire duration of the more than two-hour
selection.
Unlike SA, SB shows no persistent signature of ion-scale waves of a single
handedness at a nearly constant frequency; both left-handed (blue)
and right-handed (red) polarized waves are observed. There are also times
during SB where no enhanced wave activity near ion frequencies is observed.
Figure 1: Magnetic field characteristics observed by FIELDS/PSP during
Selection A, 2020/01/30 11:00-13:30. Top row: vector components of
$\mathbf{B}$. Second row: Trace power spectral density normalized by
$f^{-5/3}$ power law. Third row: Polarization of transverse magnetic field
components, where red indicates right-handed circular polarization in the
spacecraft frame.
Figure 2: Magnetic field characteristics observed by FIELDS/PSP during
Selection B, 2020/02/01 00:10-02:00, organized in the same fashion as Fig. 1.
### 2.2 One- and Two-Component Proton Distributions
For each $\approx 7$ second measurement where a significant fraction of the
thermal proton distribution is in the SPAN-i field of view, a two-component
fit of the observed proton energy and angle spectra is attempted, modeling the
protons as a combination of two relatively drifting bi-Maxwellian
distributions,
$f_{p}^{\textrm{2-comp.}}(v_{\perp},v_{\parallel})=\sum\limits_{j=c,b}\frac{n_{j}}{\pi^{3/2}w_{\perp,j}^{2}w_{\parallel,j}}\exp\left[-\frac{v_{\perp}^{2}}{w_{\perp,j}^{2}}-\frac{\left(v_{\parallel}-V_{j}\right)^{2}}{w_{\parallel,j}^{2}}\right].$
(1)
Parallel and perpendicular are defined with respect to the local mean-magnetic
field direction, $n_{j}$ is the component density, $V_{j}$ the component bulk
speed, and $w_{\perp,\parallel;j}=\sqrt{2T_{\perp,\parallel;j}/m_{j}}$ the
component thermal velocities. This fit represents our two-component model. To
mitigate the partial FOV coverage of SPAN-i, all fitted densities were
calibrated to QTN densities. All calculations using this model are performed
in the proton center-of-mass frame.
To obtain a model with the same macroscopic thermodynamic quantities, i.e.,
the total proton density and the parallel and perpendicular thermal pressures,
but without the beam-and-core structure of the protons observed in the inner
heliosphere, we construct a one-component model as
$f_{p}^{\textrm{1-comp.}}(v_{\perp},v_{\parallel})=\frac{n_{p}}{\pi^{3/2}w_{\perp,p}^{2}w_{\parallel,p}}\\\
\exp\left[-\frac{v_{\perp}^{2}}{w_{\perp,p}^{2}}-\frac{v_{\parallel}^{2}}{w_{\parallel,p}^{2}}\right].$
(2)
Here, the proton density is $n_{p}=n_{c}+n_{b}$ and the total thermal
velocities are $w_{\perp,\parallel;p}=\sqrt{2T_{\perp,\parallel;p}/m_{p}}$. We
have defined the perpendicular proton temperature as
$T_{\perp,p}=\frac{n_{c}T_{\perp,c}+n_{b}T_{\perp,b}}{n_{c}+n_{b}}$ (3)
and the parallel proton temperature as
$T_{\parallel,p}=\frac{n_{c}T_{\parallel,c}+n_{b}T_{\parallel,b}+\left(\frac{n_{c}n_{b}}{n_{c}+n_{b}}\right)m_{p}\Delta
v_{cb}^{2}}{n_{c}+n_{b}}.$ (4)
We emphasize that this is not equivalent to fitting the measured proton VDF
with a single bi-Maxwellian distribution. Our method is employed so that both
models have the same macroscopic perpendicular and parallel proton pressures,
which would not necessarily be the case for a single bi-Maxwellian fit of
protons with a significant secondary population.
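The reduction from the two-component fit to the one-component moments, Eqs. (3) and (4), is a density-weighted average plus a drift contribution to the parallel pressure. A minimal sketch (our own illustration, in normalized units):

```python
def one_component_temps(n_c, n_b, T_perp_c, T_perp_b, T_par_c, T_par_b,
                        dv_cb, m_p=1.0):
    """Effective one-component temperatures of Eqs. (3)-(4), preserving the
    total perpendicular and parallel proton pressures of the two-component fit."""
    n_p = n_c + n_b
    T_perp = (n_c * T_perp_c + n_b * T_perp_b) / n_p
    T_par = (n_c * T_par_c + n_b * T_par_b
             + (n_c * n_b / n_p) * m_p * dv_cb**2) / n_p
    return T_perp, T_par
```

With zero drift and identical components the two models coincide; a finite beam drift $\Delta v_{cb}$ inflates only the parallel temperature, which is why the one-component model can differ markedly from a direct bi-Maxwellian fit.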
The parameters from both models, along with measurements of the magnetic field
strength averaged to the SPAN-i measurement cadence, are combined into the
dimensionless parameters used as inputs for the Nyquist instability analysis.
We will see that the significant differences in the underlying proton phase-
space densities for the two models lead to significant differences in the
predicted unstable behavior.
### 2.3 Instability Analysis
We employ a numerical implementation of the Nyquist instability criterion
(Nyquist, 1932; Klein et al., 2017) for the hot plasma dispersion relation for
an arbitrary number of relatively drifting bi-Maxwellian components as
determined by the PLUME numerical dispersion solver (Klein & Howes, 2015). The
Nyquist criterion determines the stability of a linear system of equations
through a conformal mapping of the contour integral of a dispersion relation
$\mathcal{D}(\omega,\mathbf{k},\mathcal{P})$ over the upper-half of the
complex frequency plane. This integral counts the number of normal mode
solutions that are unstable, having $\gamma>0$, for a specific wavevector
$\mathbf{k}$ and set of dimensionless parameters $\mathcal{P}$;
$\omega_{\textrm{r}}$ and $\gamma$ are the real and imaginary components of
the complex frequency $\omega$. Iterating this process for multiple contours
with increasing values of $\gamma$ enables the determination of the maximum
growth rate and associated characteristics of the fastest growing mode
supported by a particular $\mathbf{k}$. We have set $\gamma=10^{-4}\Omega_{p}$
as the minimum growth rate for a wavevector to be considered unstable. We
repeat this process over a log-spaced grid in wavevector space
$k_{\perp}\rho_{p}\in[10^{-3},3]$ and $k_{\parallel}\rho_{p}\in[10^{-2},3]$,
enabling the determination of the fastest growing mode for all wavevectors
given a particular parameter set $\mathcal{P}$.
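The winding-number logic can be illustrated with a toy dispersion relation (our own sketch; the hot-plasma $\mathcal{D}$ evaluated by PLUME is far more involved). The accumulated change of $\arg\mathcal{D}$ around a contour bounding part of the upper half plane counts the enclosed unstable roots:

```python
import numpy as np

def unstable_mode_count(D, R=10.0, H=10.0, n=20000):
    """Count zeros of D(omega) with Im(omega) > 0 inside the rectangle
    [-R, R] x [0, H] via the winding number of D along its boundary."""
    t = np.linspace(0.0, 1.0, n)
    bottom = -R + 2.0 * R * t                 # along the real axis
    right = R + 1j * H * t
    top = R - 2.0 * R * t + 1j * H
    left = -R + 1j * H * (1.0 - t)
    contour = np.concatenate([bottom, right, top, left])  # counterclockwise
    phase = np.unwrap(np.angle(D(contour)))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

# toy dispersion relation: one growing root (gamma = +0.5) and one damped root
D_toy = lambda w: (w - (1.0 + 0.5j)) * (w - (-1.0 - 0.3j))
```

Repeating the count with the contour's lower edge shifted to $\gamma=\gamma_{0}>0$ isolates modes growing faster than $\gamma_{0}$, which is how the maximum growth rate is bracketed in the procedure described above.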
For the one-component model, the set of dimensionless plasma parameters is
$\mathcal{P}_{\textrm{1-comp}}=\left(\beta_{\parallel,p},\frac{w_{\parallel,p}}{c},\frac{T_{\perp,p}}{T_{\parallel,p}}\right)$
(5)
while for the two-component model, the dimensionless plasma parameters are
$\displaystyle\mathcal{P}_{\textrm{2-comp}}=$
$\displaystyle\left(\beta_{\parallel,c},\frac{w_{\parallel,c}}{c},\frac{T_{\perp,c}}{T_{\parallel,c}},\frac{T_{\perp,b}}{T_{\parallel,b}},\right.$
$\displaystyle\left.\frac{n_{b}}{n_{c}},\frac{T_{\parallel,b}}{T_{\parallel,c}},\frac{\Delta
v_{b,c}}{v_{Ac}}\right),$ (6)
where we define the thermal-to-magnetic pressure ratio
$\beta_{\parallel,j}=8\pi n_{j}T_{\parallel,j}/B^{2}$, the core-proton Alfvén
velocity as $v_{Ac}=B/\sqrt{4\pi m_{p}n_{c}}$, and the speed of light $c$.
Frequencies are normalized to the proton gyrofrequency
$\Omega_{p}=q_{p}B/m_{p}c$. For this study, we neglect the contribution of
alphas and other minor ions and treat the electrons as a single isotropic
distribution with density and velocity necessary to enforce quasi-neutrality
and zero net current. The impact of the non-proton components on stability
will be the focus of future study.
Figure 3: SPAN-i observation of the proton velocity distribution for the
interval under analysis in Fig. 4 as a function of $v_{z}$ and $v_{r}$ (top)
and $v_{y}$ and $v_{r}$ (bottom), in SPAN-i instrument coördinates where
$v_{r}=\sqrt{v_{x}^{2}+v_{y}^{2}}$. Diamonds represent the central values of
the instrument’s velocity space bins, color the proton distribution phase-
space density, and the arrow magnetic field orientation with the length
representing the Alfvén speed. Figure 4: Comparison of linear stability and
resonances for the one- and two-component models, left and right columns,
associated with the SPAN-i observation shown in Fig. 3. Top row: Fastest
growing mode calculated by the Nyquist method as a function of
$k_{\perp}d_{p}$ and $k_{\parallel}d_{p}$. Second row: linear dispersion
relation $\omega_{\textrm{r}}(k_{\parallel}d_{p})/\Omega_{p}$ for the four
weakly damped, parallel propagating linear modes. Third: Normalized growth
(solid) or damping (dashed) rates $\gamma/\omega_{\textrm{r}}$ for the same
modes. Fourth: Cyclotron resonant velocities normalized to $v_{A}$. Bottom:
Illustration of phase (dashed vertical) and resonant (solid) velocities for
the four modes at the wavevector associated with the maximum growth rate (dot-
dashed line in middle panels), the associated curves of constant energy in the
wave-frame, and the phase-space densities associated with the one- and two-
component models (grey-scale).
Given an example SPAN-i measurement of $f_{p}(\mathbf{v})$, shown in Fig. 3,
both the one-component and two-component models are constructed, producing the
sets of dimensionless parameters $\mathcal{P}_{\textrm{1-comp}}$ and
$\mathcal{P}_{\textrm{2-comp}}$. For the selected example, starting at
11:16:22 on 01/30/2020, these sets are:
$\displaystyle\mathcal{P}_{\textrm{1-comp}}=$
$\displaystyle\left(\beta_{\parallel,p}=1.0646,\frac{w_{\parallel,p}}{c}=2.786\times
10^{-4}\right.,$ (7)
$\displaystyle\left.\frac{T_{\perp,p}}{T_{\parallel,p}}=0.389\right)$
and
$\displaystyle\mathcal{P}_{\textrm{2-comp}}=$
$\displaystyle\left(\beta_{\parallel,c}=0.410,\frac{w_{\parallel,c}}{c}=1.861\times
10^{-4},\right.$
$\displaystyle\left.\frac{T_{\perp,c}}{T_{\parallel,c}}=0.770,\frac{T_{\perp,b}}{T_{\parallel,b}}=0.620,\frac{n_{b}}{n_{c}}=0.157,\right.$
$\displaystyle\left.\frac{T_{\parallel,b}}{T_{\parallel,c}}=2.465,\frac{\Delta
v_{b,c}}{v_{Ac}}=-1.350\right).$
Given these sets, we calculated
$\gamma^{\textrm{max}}(\mathbf{k}d_{p})/\Omega_{p}$ using the Nyquist method,
shown in the top two panels in Fig. 4, which in turn allows the calculation of
$\gamma^{\textrm{max}}/\Omega_{p}$ over the entire wavevector range, as well
as the associated $\omega_{\textrm{r}}^{\textrm{max}}/\Omega_{p}$,
$k^{\textrm{max}}d_{p}$, $\theta_{kB}^{\textrm{max}}$, and other
eigenfunctions of the unstable modes. For this measurement and associated
models, $\gamma^{\textrm{max}}/\Omega_{p}$ is significantly larger for the
two-component model and the wavevector region supporting unstable modes is
broader compared to the one-component model, though both models predict the
same mode, the parallel propagating firehose/fast-magnetosonic wave, to be
linearly unstable.
For validation, we compare these predicted properties to the normal mode
solutions for the forward and backward parallel propagating Alfvén and fast-
magnetosonic waves numerically calculated using the PLUME dispersion
solver (Klein & Howes, 2015). The central rows of Fig. 4 show the real
component of the normal mode frequency
$\omega_{\textrm{r}}(k_{\parallel}d_{p})/\Omega_{p}$ for fixed
$k_{\perp}d_{p}=10^{-3}$, the normalized growth or damping rates
$\gamma(k_{\parallel}d_{p})/|\omega_{\textrm{r}}|$, and the normalized $n=\pm
1$ cyclotron resonant velocities,
$\frac{v_{\textrm{res}}(k_{\parallel})}{v_{A}}=\frac{\omega_{\textrm{r}}(k_{\parallel})-n\Omega_{p}}{k_{\parallel}v_{A}}$
(9)
where the choice of sign of $n$ is determined by the wave’s polarization and
direction of propagation; $n=+1$ for the forward Alfvén and backward fast
modes and $n=-1$ for the backward Alfvén and forward fast modes. For these
nearly parallel modes, there is no significant $n=0$ contribution to the wave-
particle interaction. We find good agreement with the kinds of modes and
region of wavevectors predicted to be stable and unstable from both the
Nyquist and traditional dispersion calculation.
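The resonant velocity of Eq. (9) is a one-line calculation once the normal mode frequency is known. The numbers below are assumed illustrative values, not outputs of the dispersion solver.

```python
def v_res(omega_r, k_par, n, Omega_p=1.0, v_A=1.0):
    """Normalized cyclotron resonant velocity, Eq. (9):
    v_res / v_A = (omega_r - n * Omega_p) / (k_par * v_A).
    With frequencies in units of Omega_p and k_par in units of 1/d_p,
    the identity v_A = Omega_p * d_p makes the default normalization
    consistent."""
    return (omega_r - n * Omega_p) / (k_par * v_A)

# Assumed example: a forward mode with omega_r = 0.4 Omega_p at
# k_par d_p = 0.5, resonating through the n = +1 cyclotron resonance.
v = v_res(omega_r=0.4, k_par=0.5, n=+1)
# (0.4 - 1.0) / 0.5 = -1.2: resonant protons move against the wave.
```

Because $\omega_{\textrm{r}}<\Omega_p$ for these modes, the $n=+1$ resonant velocity is always anti-parallel to the wave propagation, which is why the sign convention above pairs each resonance with the appropriate branch.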
Both models are unstable to the parallel firehose instability for this
interval, but there are significant differences, illustrated in the bottom
panels of Fig. 4, in the resonant coupling between the protons and the
electric field. The wave-phase velocity for each of the four parallel
propagating modes at a fixed wavevector $k_{\parallel}d_{p}$, set to be
$|k^{\textrm{max}}|d_{p}$ for the one- or two-component model, is illustrated
as a dashed vertical line compared to the model phase-space density
$f_{p}(v_{\perp}/v_{A},v_{\parallel}/v_{A})$. The $n=\pm 1$ cyclotron resonant
velocity is shown as a solid vertical line, and contours of constant energy in
the wave-frame are illustrated as colored half-circles. The sign of the pitch
angle gradient of $f_{p}$ where the resonant velocity meets the contours of
constant energy determines if energy is transferred from the wave to the
protons, leading to damping of the wave, or from the protons to the wave,
leading to excitation and instability. For this interval, the fitting of a
secondary proton population leads to the suppression of the unstable anti-
beam-aligned fast mode and the enhancement of the beam-aligned fast mode’s
growth rate. The beam component also significantly increases the damping rate
of the anti-beam aligned Alfvén mode, leading it to switch propagation
directions at $k_{\parallel}d_{p}\approx 0.3$.
## 3 Inferred Stability Across Selections
The Nyquist instability analysis described in §2.3 is performed over the
entirety of SA, Fig. 5, and SB, Fig. 6, for both the one- and two-component
models (red and blue).
Figure 5: Dimensionless parameters from one- and two-component (red and blue)
models for SA, (a-f) and calculated instability characteristics, (g-j). (a):
thermal-to-magnetic pressure ratio $\beta_{\parallel,p}$ or
$\beta_{\parallel,c}$, (b): thermal-speed ratio $w_{\parallel,p}/c$ or
$w_{\parallel,c}/c$, (c): temperature anisotropy $T_{\perp,p}/T_{\parallel,p}$
or $T_{\perp,c}/T_{\parallel,c}$ ($T_{\perp,b}/T_{\parallel,b}$ in teal), (d):
temperature disequilibrium $T_{\parallel,b}/T_{\parallel,c}$, (e): density
ratio $n_{b}/n_{c}$, (f): relative drift velocity $\Delta v_{bc}/v_{A}$. (g):
maximum growth rate $\gamma^{\textrm{max}}/\Omega_{p}$ (h): normal mode real
frequency $\omega_{\textrm{r}}^{\textrm{max}}/\Omega_{p}$ (i & j): Amplitude
and angle, $|k|^{\textrm{max}}d_{p}$ and $\theta_{kB}^{\textrm{max}}$ of the
wavevector associated with fastest growing mode. Figure 6: Dimensionless
parameters and calculated instability characteristics from one- and two-
component (red and blue) models for SB organized in the same format as Fig. 5.
For both selections, we see different predicted unstable behavior for the two
models. Using the one-component model for SA, only $40.5\%$ of the intervals
are found to be unstable, and of those most have relatively weak growth rates,
with a median value of
$\bar{\gamma}^{\textrm{max}}_{1-\textrm{comp}}=2.33^{3.41}_{1.57}\times
10^{-4}\Omega_{p}$. The sub- and super-scripts represent the
$25^{\textrm{th}}$ and $75^{\textrm{th}}$ percentiles of the unstable mode
growth rate distribution. These are parallel firehose instabilities, where
sufficiently extreme parallel-to-perpendicular thermal pressure ratios,
manifest in a one-component proton distribution, change the sign of the
velocity gradient at the cyclotron resonant velocity such that energy is
extracted from the protons to drive an unstable fast-magnetosonic mode. Due to
the symmetry of the one-component model, both forward and backward propagating
modes are driven. No other kinds of unstable modes are supported by the one-
component model during SA.
For the two-component model, $99.9\%$ of the intervals in SA are found to be
unstable, with a median growth rate of
$\bar{\gamma}^{\textrm{max}}_{2-\textrm{comp}}=2.54^{3.24}_{1.92}\times
10^{-2}\Omega_{p}$, two orders of magnitude larger than for the one component
model. All of the unstable intervals are associated with parallel propagating
fast-magnetosonic modes with $|k|^{\textrm{max}}d_{p}\approx 0.5$. Unlike the
symmetrically emitted unstable waves from the one-component model, the
unstable modes from the two-component model only propagate in the same
direction as the secondary proton population. (We define the radial component
of our coördinate system to align with the mean magnetic field. In both SA and
SB, PSP was in a region of Sunward magnetic polarity, meaning that the anti-
Sunward propagating secondary proton populations have a negative velocity with
respect to the primary proton population.) The maximum growth rate of the
unstable fast mode is enhanced due to an increased phase-space density
associated with the secondary proton population, while the anti-beam aligned
fast-mode resonance is effectively starved of protons with which to interact,
leading to damping rather than instability for this mode.
We find differences in the kinds of instabilities predicted for the two models
in SB. Ninety-nine percent of the intervals are predicted to be linearly
unstable to the parallel propagating firehose instability for the one-
component model, with a median growth rate of
$\bar{\gamma}^{\textrm{max}}_{1-\textrm{comp}}=9.26^{13.3}_{6.12}\times
10^{-4}\Omega_{p}$. This is not the case for the two-component model. The
median growth rate for the two-component model is similar,
$\bar{\gamma}^{\textrm{max}}_{2-\textrm{comp}}=6.43^{15.6}_{3.30}\times
10^{-4}\Omega_{p}$, however only $55.7\%$ of the intervals are found to be
unstable and the associated fastest growing mode oscillates between a beam-
aligned, parallel propagating firehose mode and an oblique instability. This
demonstrates that fitting a secondary component does not universally enhance
the predicted growth rate and that more sophisticated treatments of velocity-
space structure can lead to the generation of different kinds of unstable
modes.
As seen in Fig. 7, $\gamma^{\textrm{max}}/\Omega_{p}$ is generally larger for
the two-component model than for the one-component model for SA. This is not
the case for SB, where more of the one-component intervals are unstable, while
the variance in the growth rate for the two-component model is larger. When
re-normalized to the normal mode frequency
$\omega_{\textrm{r}}^{\textrm{max}}$, Fig. 7b, we see an enhancement in the
growth rates for the two-component model in SB, while the other growth
rates remain relatively unaffected.
Other time scales of potential interest include an estimate for the non-linear
cascade rate at the wavevector of fastest growth,
$\displaystyle\gamma^{\textrm{max}}\tau_{nl}=$
$\displaystyle\left(\frac{\gamma^{\textrm{max}}}{v_{A}}\right)\left(k_{\textrm{break}}\right)^{-1/3}(|\mathbf{k}^{\textrm{max}}|)^{-2/3}$
(10) $\displaystyle=$
$\displaystyle\left(\frac{\gamma^{\textrm{max}}}{\Omega_{p}}\right)\left(\frac{2\pi
f_{\textrm{break}}}{\Omega_{p}}\frac{v_{A}}{v_{sw}}\right)^{-1/3}\left(|\mathbf{k}^{\textrm{max}}d_{p}|\right)^{-2/3}$
where we approximate the transition from the injection to the inertial ranges
of turbulence as $k_{\textrm{break}}=2\pi f_{\textrm{break}}/v_{sw}$ with
$f_{\textrm{break}}$ found to be approximately $10^{-3}$ Hz when constructing
trace power-spectral density curves for either SA or SB, not shown. These
values are in rough agreement with the results reported in Chen et al. (2020).
The cascade time is estimated as the critically balanced nonlinear cascade
rate, $\tau_{nl}\sim\omega_{\textrm{Alfv\'{e}n}}^{-1}$ (Goldreich & Sridhar,
1995; Mallet et al., 2015). Previous analysis between 0.3 and 0.7 au (Klein et
al., 2019) found that $\gamma^{\textrm{max}}$ never exceeded the estimated
nonlinear cascade rate, though the two rates were found to be within an order
of magnitude, with $50\%$ of the intervals having
$\gamma^{\textrm{max}}\tau_{nl}\gtrsim 0.2$. For the two-component model in
SA, the maximum growth rate is of the same order as $\tau_{nl}^{-1}$, with a
median value of
$\bar{\gamma}^{\textrm{max}}_{2-\textrm{comp}}\tau_{nl}=0.618^{0.813}_{0.463}$,
indicating that these predicted instabilities operate on similar timescales as
the nonlinear transport of energy through these spatial scales. Importantly,
while $\bar{\gamma}^{\textrm{max}}_{2-\textrm{comp}}\sim\tau_{nl}^{-1}$, the
median value of $\bar{\gamma}^{\textrm{max}}_{1-\textrm{comp}}\tau_{nl}$ is
$4.70^{6.90}_{3.14}\times 10^{-3}$ for the same interval. This emphasizes that
our choice of different models for the proton phase-space density will lead to
drastically different interpretations of the importance of different physical
processes. The impact of these instabilities, especially when the ions are
modeled as multiple components, on the turbulent transport of energy must be
considered in future modeling efforts. The median values of
$\gamma^{\textrm{max}}\tau_{nl}$ are comparable for SB, with
$\bar{\gamma}^{\textrm{max}}_{1-\textrm{comp}}\tau_{nl}=2.08^{3.04}_{1.45}\times
10^{-2}$ and
$\bar{\gamma}^{\textrm{max}}_{2-\textrm{comp}}\tau_{nl}=3.55^{6.81}_{1.74}\times
10^{-2}$, again showing that the two-component model does not universally
enhance growth rates compared to the one-component model. To remove variations
associated with the normalization by $\Omega_{p}$ due to changes in
$|\mathbf{B}|$ as a function of time, we also plot the growth rate in Hertz,
Fig. 7d, and see a distribution of growth rates similar to that seen in panel
a.
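The comparison of growth and cascade rates in Eq. (10) reduces to a product of three dimensionless groups. A minimal sketch, with assumed illustrative inputs (a ~100 nT field gives $\Omega_p\sim 9.6$ rad/s; $f_{\textrm{break}}\sim 10^{-3}$ Hz as quoted in the text):

```python
import math

def gamma_tau_nl(gamma_over_Omega, f_break_Hz, Omega_p, vA_over_vsw, k_max_dp):
    """Eq. (10): gamma^max * tau_nl, with the spectral break
    k_break = 2*pi*f_break / v_sw rewritten in dimensionless form,
    k_break * d_p = (2*pi*f_break / Omega_p) * (v_A / v_sw),
    using v_A = Omega_p * d_p."""
    k_break_dp = (2 * math.pi * f_break_Hz / Omega_p) * vA_over_vsw
    return gamma_over_Omega * k_break_dp ** (-1 / 3) * k_max_dp ** (-2 / 3)

# Assumed inputs of the same order as the SA two-component medians.
x = gamma_tau_nl(gamma_over_Omega=2.5e-2, f_break_Hz=1e-3,
                 Omega_p=9.6, vA_over_vsw=0.4, k_max_dp=0.5)
```

With these assumed inputs the product is of order unity, consistent with the claim that the two-component instabilities in SA compete with the nonlinear cascade.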
Figure 7: Comparison of normalized growth rates for the two models for SA
(teal) and SB (gold), with the abscissa and ordinate mapping the one- and two-
component rates. In panels a,b,c, and d, $\gamma^{\textrm{max}}$ is normalized
to $\Omega_{p}$, $\omega_{\textrm{r}}^{\textrm{max}}$, $\tau_{nl}^{-1}$ and
$1$ Hz respectively. Black dots and bars correspond to medians and
$25^{\textrm{th}}$ and $75^{\textrm{th}}$ percentiles associated with the
unstable intervals.
By design, the one- and two-component models have the same parallel and
perpendicular thermal pressures for a given interval, which can be
characterized by the firehose (Kunz et al., 2015)
$\Lambda_{F}=\frac{\beta_{\parallel}-\beta_{\perp}}{2}+\frac{\sum_{j}n_{j}m_{j}|\Delta\tilde{v}_{j}|^{2}}{\sum_{j}(n_{j}m_{j})v_{A}^{2}}$
(11)
or mirror (Hellinger, 2007)
$\Lambda_{M}=\sum_{j}\beta_{\perp,j}\left(\frac{\beta_{\perp,j}}{\beta_{\parallel,j}}-1\right)-\frac{\left(\sum_{j}q_{j}n_{j}\frac{\beta_{\perp,j}}{\beta_{\parallel,j}}\right)^{2}}{2\sum_{j}\frac{(q_{j}n_{j})^{2}}{\beta_{\parallel,j}}}$
(12)
criterion, where $\Delta\tilde{v}_{j}$ is the difference between the bulk
speed of component $j$ and the center-of-mass velocity. When either criterion
exceeds unity, large-scale firehose or mirror instabilities are generated. For
both SA and SB, neither criterion exceeds $\sim 0.5$ in amplitude for
either model; therefore, it is the resonances between the proton distribution
and the associated electromagnetic fields and not the excess macroscopic
parallel or perpendicular pressures that drives the predicted unstable wave
modes.
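Eqs. (11) and (12) are straightforward sums over the fitted components. The sketch below evaluates both criteria for an assumed core/beam pair of the same order as the fits in the text; the drifts are chosen so the density-weighted mean is near zero, approximating the center-of-mass frame.

```python
def firehose_mirror(comps, v_A=1.0):
    """Multi-component firehose (Eq. 11) and mirror (Eq. 12) criteria.
    Each component is a dict with keys beta_par, beta_perp, n, m, q, dv,
    where dv is the drift relative to the center-of-mass velocity in
    units of v_A. Lambda > 1 signals instability. Conventions follow
    Kunz et al. (2015) and Hellinger (2007) as quoted in the text."""
    beta_par = sum(c['beta_par'] for c in comps)
    beta_perp = sum(c['beta_perp'] for c in comps)
    ram = sum(c['n'] * c['m'] * c['dv'] ** 2 for c in comps)
    inertia = sum(c['n'] * c['m'] for c in comps)
    L_F = (beta_par - beta_perp) / 2 + ram / (inertia * v_A ** 2)
    aniso = sum(c['beta_perp'] * (c['beta_perp'] / c['beta_par'] - 1)
                for c in comps)
    cross = sum(c['q'] * c['n'] * c['beta_perp'] / c['beta_par']
                for c in comps)
    screen = sum((c['q'] * c['n']) ** 2 / c['beta_par'] for c in comps)
    L_M = aniso - cross ** 2 / (2 * screen)
    return L_F, L_M

# Assumed illustrative core/beam values (normalized n and q to the core).
core = dict(beta_par=0.41, beta_perp=0.32, n=1.0, m=1.0, q=1.0, dv=0.18)
beam = dict(beta_par=0.16, beta_perp=0.10, n=0.157, m=1.0, q=1.0, dv=-1.17)
L_F, L_M = firehose_mirror([core, beam])
```

For these assumed values both criteria stay well below unity, mirroring the observation that the large-scale thresholds are not crossed in SA or SB.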
Slight changes in the relative drift speed between the two proton populations
and their densities can have a significant impact on the kind of unstable mode
predicted to be generated. This is illustrated in Fig. 8, where nine
sequential illustrations of contours of constant
$\gamma^{\textrm{max}}(\mathbf{k})$ are shown for the one- and two-component
models for SPAN-i observations from near the beginning of SB. Throughout these
two minutes both the maximum growth rates and regions of unstable wavevectors
are largely unchanged for the one-component model. This is expected given that
$T_{\perp,p}/T_{\parallel,p}$ and $\beta_{\parallel,p}$ are relatively
constant over this time, remaining consistent with a parallel propagating
firehose instability. For the two-component model, oblique modes are initially
driven. A minute into the sequence, the maximum growth rate transitions to a
parallel propagating wavevector, and then transitions back to an oblique
instability. These transitions correspond to a temporary dip in the relative
density of the beam component and an increase in the relative drift speed.
Given that many kinds of waves are observed in this section of data, it
appears plausible that these transitions between parallel and oblique
instabilities may be real, but are not properly accounted for in overly
simplistic models of the protons as a single anisotropic distribution, which
only drive one kind of unstable mode.
Figure 8: Left: Contours of constant $\gamma^{\textrm{max}}/\Omega_{p}$ as a
function of $k_{\perp}\rho_{c}$ and $k_{\parallel}\rho_{c}$ for the one- and
two-component models (red and blue) for nine intervals at the start of SB.
Right: Temporal variation of $T_{\perp,p}/T_{\parallel,p}$ and
$\beta_{\parallel,p}$ from the one-component model (top) and of the relative
drifts and densities of the two-component model (bottom).
We note that there is no simple parametric dependence on $n_{b}/n_{c}$ and
$\Delta v_{b,c}/v_{A}$ alone that divides the parallel unstable
modes from the oblique modes. In Fig. 9, we plot the angle of the fastest
growing mode $\theta_{kB}^{\textrm{max}}$ for the two-component model for SB
as a function of these two parameters. Generally, the larger the relative
drift, the more likely the model is predicted to generate an oblique unstable
mode, with the transition between parallel and oblique modes arising at lower
drifts for larger relative beam densities. However, we find many stable
intervals with very similar drifts and densities to the intervals unstable to
the generation of both parallel and oblique unstable modes. This can be
understood by recalling that the variation of the temperatures and
anisotropies of the individual proton components will have a significant
impact on the predicted stability of the system that is not captured in this
reduced parameter space. Due to this complexity, we do not attempt to offer a
simple parametric prescription for this transition between parallel and
oblique instabilities in this work, but do note again that if this
distribution is treated as a single proton population, the only instability
supported is the parallel-propagating, fast/magnetosonic firehose instability.
Figure 9: Wavevector angle $\theta_{kB}^{\textrm{max}}$ of fastest growing
mode during SB calculated using the two-component model, indicated by color,
as a function of relative densities $n_{b}/n_{c}$ and drift velocities $\Delta
v_{b,c}/v_{A}$. Grey squares indicate intervals predicted to be linearly
stable.
## 4 Conclusions
In this work, we have selected two hours-long intervals where in situ
measurements of the local plasma conditions have been made during PSP’s fourth
perihelion orbit. These measurements coincide with significant ion-scale wave
activity as observed by the FIELDS magnetometers. The proton phase-space
densities have been modeled as either a single anisotropic population, or two
relatively drifting anisotropic populations. The linear stability of both
models was calculated, with strikingly different predictions for the supported
linear modes. In the first selection, both models produce the same kind of
unstable mode, but the two-component model drives instabilities that grow
nearly two orders of magnitude faster, fast enough to potentially act on the
same timescales as the local nonlinear turbulent transfer of energy.
Additionally, the two-component model for SA only drives instabilities
propagating in a single direction, as opposed to the one-component model where
waves are driven both Sunward and anti-Sunward due to the enforced symmetry of
the simplified description of the protons. For the second selection, modeling
the protons using two components does not make the plasma more unstable, but
does change the kind of unstable modes driven, leading to an oscillation
between the production of parallel and oblique propagating waves.
As future lines of inquiry, we intend to extend this work to investigate
the predicted growth rates and waves concurrently observed with other plasma
parameters and solar wind conditions, such as intervals where the total
parallel proton pressure is exceeded by the total perpendicular pressure. We
will also include additional sources of free energy associated with minor ions
and electrons, to determine if they act to enhance or stabilize these growing
modes. This work will help to ascertain which models suffice, and under what
conditions, to properly describe kinetic processes. Importantly, as the
instabilities under consideration are resonant, we must also consider the
impact of departures from bi-Maxwellian distributions, either using other
analytic prescriptions, e.g. kappa (Livadiotis, 2015) or flattop
distributions (Klein & Chandran, 2016; Wilson et al., 2020), or via a direct
numerical integration of the observed phase-space density (Verscharen et al.,
2018).
The SWEAP Investigation and this publication are supported by the PSP mission
under NASA contract NNN06AA01C. K.G.K. is supported by NASA ECIP Grant
80NSSC19K0912. An allocation of computer time from the UA Research Computing
High Performance Computing at the University of Arizona is gratefully
acknowledged.
## References
* Alterman et al. (2018) Alterman, B. L., Kasper, J. C., Stevens, M. L., & Koval, A. 2018, Astrophys. J., 864, 112
* Bale et al. (2016) Bale, S. D., Goetz, K., Harvey, P. R., et al. 2016, Space Sci. Rev., doi:10.1007/s11214-016-0244-5
* Bowen et al. (2020) Bowen, T. A., Mallet, A., Huang, J., et al. 2020, Astrophys. J. Supp., 246, 66
* Chen et al. (2016) Chen, C. H. K., Matteini, L., Schekochihin, A. A., et al. 2016, Astrophys. J. Lett., 825, L26
* Chen et al. (2020) Chen, C. H. K., Bale, S. D., Bonnell, J. W., et al. 2020, Astrophys. J. Supp., 246, 53
* Fox et al. (2015) Fox, N. J., Velli, M. C., Bale, S. D., et al. 2015, Space Sci. Rev., doi:10.1007/s11214-015-0211-6
* Goldreich & Sridhar (1995) Goldreich, P., & Sridhar, S. 1995, Astrophys. J., 438, 763
* Hellinger (2007) Hellinger, P. 2007, Phys. Plasmas, 14, 082105
* Kasper et al. (2015) Kasper, J. C., Abiad, R., Austin, G., et al. 2015, Space Sci. Rev., 1
* Klein et al. (2018) Klein, K. G., Alterman, B. L., Stevens, M. L., Vech, D., & Kasper, J. C. 2018, Phys. Rev. Lett., 120, 205102
* Klein & Chandran (2016) Klein, K. G., & Chandran, B. D. G. 2016, Astrophys. J., 820, 47
* Klein & Howes (2015) Klein, K. G., & Howes, G. G. 2015, Phys. Plasmas, 22, 032903
* Klein et al. (2017) Klein, K. G., Kasper, J. C., Korreck, K. E., & Stevens, M. L. 2017, Journal of Geophysical Research (Space Physics), 122, 9815
* Klein et al. (2019) Klein, K. G., Martinović, M., Stansby, D., & Horbury, T. S. 2019, Astrophys. J., 887, 234
* Kunz et al. (2015) Kunz, M. W., Schekochihin, A. A., Chen, C. H. K., Abel, I. G., & Cowley, S. C. 2015, Journal of Plasma Physics, 81, 325810501
* Livadiotis (2015) Livadiotis, G. 2015, Journal of Geophysical Research (Space Physics), 120, 1607
* Mallet et al. (2015) Mallet, A., Schekochihin, A. A., & Chandran, B. D. G. 2015, Mon. Not. Roy. Astron. Soc., 449, L77
* Matteini et al. (2012) Matteini, L., Hellinger, P., Landi, S., Trávníček, P. M., & Velli, M. 2012, Space Sci. Rev., 172, 373
* Nyquist (1932) Nyquist, H. 1932, Bell system technical journal, 11, 126
* Verniero et al. (2020) Verniero, J. L., Larson, D. E., Livi, R., et al. 2020, Astrophys. J. Supp., 248, 5
* Verscharen et al. (2018) Verscharen, D., Klein, K. G., Chandran, B. D. G., et al. 2018, Journal of Plasma Physics, 84, 905840403
* Verscharen et al. (2019) Verscharen, D., Klein, K. G., & Maruca, B. A. 2019, Living Rev. Solar Phys., 16, 5
* Wilson et al. (2020) Wilson, Lynn B., I., Chen, L.-J., Wang, S., et al. 2020, Astrophys. J., 893, 22
* Yoon (2017) Yoon, P. H. 2017, Reviews of Modern Plasma Physics, 1, 4
# Non-minimality of spirals in sub-Riemannian manifolds
Roberto Monti<EMAIL_ADDRESS>Università di Padova, Dipartimento di
Matematica “Tullio Levi-Civita”, via Trieste 63, 35121 Padova, Italy and
Alessandro Socionovo<EMAIL_ADDRESS>Università di Padova,
Dipartimento di Matematica “Tullio Levi-Civita”, via Trieste 63, 35121 Padova,
Italy
###### Abstract.
We show that in analytic sub-Riemannian manifolds of rank 2 satisfying a
commutativity condition spiral-like curves are not length minimizing near the
center of the spiral. The proof relies upon the delicate construction of a
competing curve.
## 1\. Introduction
The regularity of geodesics (length-minimizing curves) in sub-Riemannian
geometry has been an open problem for forty years. Its difficulty is due to the
presence of singular (or abnormal) extremals, i.e., curves where the
differential of the end-point map is singular (it is not surjective). There
exist singular curves that are in fact length-minimizing. The
first example was discovered in [9], and other classes of examples (regular
abnormal extremals) are studied in [13]. All such examples are smooth curves.
When the end-point map is singular, the minimizer is constrained to a nonsmooth
set and it is not possible to deduce the Euler-Lagrange equations, with their
regularizing effect. Moreover, in the case of singular extremals the
necessary conditions given by Optimal Control Theory (Pontryagin Maximum
Principle) do not provide in general any further regularity beyond the
starting one, absolute continuity or Lipschitz continuity of the curve.
The most elementary kind of singularity for a Lipschitz curve is of the
corner-type: at a given point, the curve has a left and a right tangent that
are linearly independent. In [8] and [3] it was proved that length minimizers
cannot have singular points of this kind. These results have been improved in
[11]: at any point, the tangent cone to a length-minimizing curve contains at
least one line (a half line, for extreme points), see also [4]. The uniqueness
of this tangent line for length minimizers is an open problem. Indeed, there
exist other types of singularities related to the non-uniqueness of the
tangent. In particular, there exist spiral-like curves whose tangent cone at
the center contains many and in fact all tangent lines, see Example 2.5 below.
These curves may appear as Goh extremals in Carnot groups, see [6] and [7,
Section 5]. For these reasons, the results of [11] are not enough to prove the
nonminimality of spiral-like extremals. The goal of this paper is to show that
curves with this kind of singularity are not length-minimizing.
Let $M$ be an $n$-dimensional, $n\geq 3$, analytic manifold endowed with a
rank 2 analytic distribution $\mathscr{D}\subset TM$ that is bracket
generating (Hörmander condition). An absolutely continuous curve $\gamma\in
AC([0,1];M)$ is horizontal if $\dot{\gamma}\in\mathscr{D}(\gamma)$ almost
everywhere. The length of $\gamma$ is defined fixing a metric tensor $g$ on
$\mathscr{D}$ and letting
$L(\gamma)=\int_{[0,1]}g_{\gamma}(\dot{\gamma},\dot{\gamma})^{1/2}dt.$ (1.1)
The curve $\gamma$ is a length-minimizer between its end-points if for any
other horizontal curve $\bar{\gamma}\in AC([0,1];M)$ such that
$\bar{\gamma}(0)=\gamma(0)$ and $\bar{\gamma}(1)=\gamma(1)$ we have
$L(\gamma)\leq L(\bar{\gamma})$.
Our notion of horizontal spiral in a sub-Riemannian manifold of rank 2 is
fixed in Definition 2.4. We will show that spirals are not length-minimizing
when the horizontal distribution $\mathscr{D}$ satisfies the following
commutativity condition. Fix two vector fields $X_{1},X_{2}\in\mathscr{D}$
that are linearly independent at some point $p\in M$. For $k\in\mathbb{N}$ and
for a multi-index $J=(j_{1},\dots,j_{k})$, with $j_{i}\in\{1,2\}$, we denote
by $X_{J}=[X_{j_{1}},[\dots,[X_{j_{k-1}},X_{j_{k}}]\cdots]]$ the iterated
commutator associated with $J$. We define its length as
$\mathrm{len}(X_{J})=k$. Let $\mathscr{D}_{k}(p)$ be the $\mathbb{R}$-linear
span of $\{X_{J}(p)\,|\,\mathrm{len}(X_{J})\leq k\}\subset T_{p}M$. In a
neighborhood of the center of the spiral, we will assume the following
condition
$[\mathscr{D}_{i},\mathscr{D}_{j}]=\{0\}\quad\textrm{for all $i,j\geq 2$}.$
(1.2)
Our main result is the following
###### Theorem 1.1.
Let $(M,\mathscr{D},g)$ be an analytic sub-Riemannian manifold of rank 2
satisfying (1.2). Any horizontal spiral $\gamma\in AC([0,1];M)$ is not length-
minimizing near its center.
Differently from [8, 3, 11, 4] and similarly to [10], the proof of this
theorem cannot be reduced to the case of Carnot groups, the infinitesimal
models of equiregular sub-Riemanian manifolds. This is because the blow-up of
the spiral could be a horizontal line, that is indeed length-minimizing.
The nonminimality of spirals combined with the necessary conditions given by
Pontryagin Maximum Principle is likely to give new regularity results on
classes of sub-Riemannian manifolds, in the spirit of [1]. We think, however,
that the main interest of Theorem 1.1 is in the deeper understanding that it
provides on the loss of minimality caused by singularities.
The proof of Theorem 1.1 consists in constructing a competing curve shorter
than the spiral. The construction uses exponential coordinates of the second
type and our first step is a review of Hermes’ theorem on the structure of
vector-fields in such coordinates. In this situation, the commutativity
condition (1.2) has a clear meaning explained in Theorem 2.2, that may be of
independent interest. Even though our definition of “horizontal spiral” is
given in coordinates of the second type, see Definition 2.4, it is actually
coordinate-independent, see Remark 2.6.
In Section 3, we start the construction of the competing curve. Here we use
the specific structure of a spiral. The gain of length is obtained by cutting
one spire near the center. The adjustment of the end-point will be obtained
modifying the spiral at a certain number of locations, adding “devices”
depending on a set of parameters. The horizontal coordinates of the spiral
trace a planar curve intersecting the positive $x_{1}$-axis infinitely many times.
The possibility of adding devices at such locations arbitrarily close to the
origin will be a crucial fact.
In Section 4, we develop an integral calculus on monomials that is used to
estimate the effect of cut and devices on the end-point of the modified
spiral. Then, in Section 5, we fix the parameters of the devices in such a way
that the end-point of the modified curve coincides with the end-point of the
spiral. This is done in Theorem 5.1 by a linearization argument. Sections 3–5
contain the technical core of the paper.
We use the specific structure of the length-functional in Section 6, where we
prove that the modified curve is shorter than the spiral, provided that the
cut is sufficiently close to the origin. This will be the conclusion of the
proof of Theorem 1.1.
We briefly comment on the assumptions made in Theorem 1.1. The analyticity of
$M$ and $\mathscr{D}$ is needed only in Section 2. In the analytic case, it is
known that length-minimizers are smooth in an open and dense set, see [12].
See also [2] for a $C^{1}$-regularity result when $M$ is an analytic manifold
of dimension $3$.
The assumption that the distribution $\mathscr{D}$ has rank 2 is natural when
considering horizontal spirals. When the rank is higher there is room for more
complicated singularities in the horizontal coordinates, raising challenging
questions about the regularity problem.
Dropping the commutativity assumption (1.2) is a major technical problem:
getting sharp estimates from below for the effect produced by cut and devices
on the end-point seems extremely difficult when the coefficients of the
horizontal vector fields depend also on nonhorizontal coordinates, see Remark
4.3.
## 2\. Exponential coordinates at the center of the spiral
In this section, we introduce in $M$ exponential coordinates of the second
type centered at a point $p\in M$, that will be the center of the spiral.
Let $X_{1},X_{2}\in\mathscr{D}$ be linearly independent at $p$. Since the
distribution $\mathscr{D}$ is bracket-generating we can find vector-fields
$X_{3},\ldots,X_{n}$, with $n=\mathrm{dim}(M)$, such that each $X_{i}$ is an
iterated commutator of $X_{1},X_{2}$ with length $w_{i}=\mathrm{len}(X_{i})$,
$i=3,\ldots,n$, and such that $X_{1},\ldots,X_{n}$ at $p$ are a basis for
$T_{p}M$. By continuity, there exists an open neighborhood $U$ of $p$ such
that $X_{1}(q),\dots,X_{n}(q)$ form a basis for $T_{q}M$, for any $q\in U$. We
call $X_{1},\ldots,X_{n}$ a stratified basis of vector-fields in $M$.
Let $\varphi\in C^{\infty}(U;\mathbb{R}^{n})$ be a chart such that
$\varphi(p)=0$ and $\varphi(U)=V$, with $V\subset\mathbb{R}^{n}$ open
neighborhood of $0\in\mathbb{R}^{n}$. Then
$\widetilde{X}_{1}=\varphi_{*}X_{1},\ldots,\widetilde{X}_{n}=\varphi_{*}X_{n}$
is a system of point-wise linearly independent vector fields in
$V\subset\mathbb{R}^{n}$. Since our problem has a local nature, we can without
loss of generality assume that $M=V=\mathbb{R}^{n}$ and $p=0$.
After these identifications, we have a stratified basis of vector-fields
$X_{1},\dots,X_{n}$ in $\mathbb{R}^{n}$. We say that
$x=(x_{1},\dots,x_{n})\in\mathbb{R}^{n}$ are exponential coordinates of the
second type associated with the vector fields $X_{1},\dots,X_{n}$ if we have
$x=\Phi_{x_{1}}^{X_{1}}\circ\dots\circ\Phi_{x_{n}}^{X_{n}}(0),\quad
x\in\mathbb{R}^{n}.$ (2.1)
We are using the notation $\Phi_{s}^{X}=\exp(sX)$, $s\in\mathbb{R}$, to denote
the flow of a vector-field $X$. From now on, we assume that
$X_{1},\ldots,X_{n}$ are complete and induce exponential coordinates of the
second type.
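As a concrete illustration, not needed in the sequel, the composition (2.1) can be checked numerically in the first Heisenberg group, where all the flows are explicit. The choice of group and the coordinates below are an assumption of this sketch, not part of the argument.

```python
# Illustrative sketch: exponential coordinates of the second type (2.1)
# in the first Heisenberg group, where the flows are explicit:
#   X1 = d/dx1,  X2 = d/dx2 + x1 d/dx3,  X3 = [X1, X2] = d/dx3,
# so that w1 = w2 = 1 and w3 = len(X3) = 2.

def flow_X1(s, p):  # exp(s X1): translate x1
    x1, x2, x3 = p
    return (x1 + s, x2, x3)

def flow_X2(s, p):  # exp(s X2): x3 grows linearly with coefficient x1
    x1, x2, x3 = p
    return (x1, x2 + s, x3 + x1 * s)

def flow_X3(s, p):  # exp(s X3): translate x3
    x1, x2, x3 = p
    return (x1, x2, x3 + s)

def second_type_coords(x):
    # Composition (2.1): Phi^{X1}_{x1} o Phi^{X2}_{x2} o Phi^{X3}_{x3}(0)
    p = flow_X3(x[2], (0.0, 0.0, 0.0))
    p = flow_X2(x[1], p)
    return flow_X1(x[0], p)

# Every x in R^3 is reproduced by its own flow composition.
assert second_type_coords((0.7, -1.3, 2.1)) == (0.7, -1.3, 2.1)
```

In these coordinates $X_{1}=\partial_{x_{1}}$ and $X_{2}=\partial_{x_{2}}+x_{1}\partial_{x_{3}}$, in agreement with the normal form (2.3) below, with $a_{3}(x)=x_{1}$.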
We define the homogeneous degree of the coordinate $x_{i}$ of $\mathbb{R}^{n}$
as $w_{i}=\mathrm{len}(X_{i})$. We introduce the $1$-parameter group of
dilations $\delta_{\lambda}:\mathbb{R}^{n}\to\mathbb{R}^{n}$, $\lambda>0$,
$\delta_{\lambda}(x)=(\lambda^{w_{1}}x_{1},\dots,\lambda^{w_{n}}x_{n}),\qquad
x\in\mathbb{R}^{n},$
and we say that a function $f:\mathbb{R}^{n}\to\mathbb{R}$ is
$\delta$-homogeneous of degree $w\in\mathbb{N}$ if
$f(\delta_{\lambda}(x))=\lambda^{w}f(x)$ for all $x\in\mathbb{R}^{n}$ and
$\lambda>0$. An example of $\delta$-homogeneous function of degree $1$ is the
pseudo-norm
$\|x\|=\sum_{i=1}^{n}{|x_{i}|^{1/w_{i}}},\quad x\in\mathbb{R}^{n}.$ (2.2)
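A quick numerical sanity check, with sample weights not tied to any specific distribution, confirms the degree-$1$ homogeneity of the pseudo-norm (2.2):

```python
# Check that the pseudo-norm (2.2) is delta-homogeneous of degree 1
# for sample weights w = (1, 1, 2, 3) (an illustrative choice).
w = (1, 1, 2, 3)

def dilate(lam, x):
    # delta_lambda(x) = (lam^{w_1} x_1, ..., lam^{w_n} x_n)
    return tuple(lam ** wi * xi for wi, xi in zip(w, x))

def pseudo_norm(x):
    # ||x|| = sum_i |x_i|^{1/w_i}
    return sum(abs(xi) ** (1.0 / wi) for wi, xi in zip(w, x))

x = (0.5, -2.0, 3.0, -0.25)
lam = 1.7
assert abs(pseudo_norm(dilate(lam, x)) - lam * pseudo_norm(x)) < 1e-12
```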
The following theorem is proved in [5] in the case of general rank.
###### Theorem 2.1.
Let $\mathscr{D}=\mathrm{span}\\{X_{1},X_{2}\\}\subset TM$ be an analytic
distribution of rank 2. In exponential coordinates of the second type around a
point $p\in M$ identified with $0\in\mathbb{R}^{n}$, the vector fields $X_{1}$
and $X_{2}$ have the form
$\begin{split}&X_{1}(x)=\partial_{x_{1}},\\\
&X_{2}(x)=\partial_{x_{2}}+\sum_{j=3}^{n}{a_{j}(x)\partial_{x_{j}}},\end{split}$
(2.3)
for $x\in U$, where $U$ is a neighborhood of $0$. The analytic functions
$a_{j}\in C^{\infty}(U)$, $j=3,\ldots,n$, have the structure
$a_{j}=p_{j}+r_{j}$, where:
* (i)
$p_{j}$ are $\delta$-homogeneous polynomials of degree $w_{j}-1$ such that
$p_{j}(0,x_{2},\dots,x_{n})=0$;
* (ii)
$r_{j}\in C^{\infty}(U)$ are analytic functions such that, for some constants
$C_{1},C_{2}>0$ and for $x\in U$,
$|r_{j}(x)|\leq
C_{1}\|x\|^{w_{j}}\quad\textrm{and}\quad|\partial_{x_{i}}r_{j}(x)|\leq
C_{2}\|x\|^{w_{j}-w_{i}}.$ (2.4)
###### Proof.
The proof that $a_{j}=p_{j}+r_{j}$ where $p_{j}$ are polynomials as in (i) and
the remainders $r_{j}$ are real-analytic functions such that $r_{j}(0)=0$ can
be found in [5]. The proof of (ii) is also implicitly contained in [5]. Here,
we add some details. The Taylor series of $r_{j}$ has the form
$r_{j}(x)=\sum_{\ell=w_{j}}^{\infty}r_{j\ell}(x)=\sum_{\ell=w_{j}}^{\infty}\sum_{\alpha\in\mathscr{A}_{\ell}}c_{\alpha\ell}x^{\alpha},$
where
$\mathscr{A}_{\ell}=\\{\alpha\in\mathbb{N}^{n}:\alpha_{1}w_{1}+\ldots+\alpha_{n}w_{n}=\ell\\}$,
$x^{\alpha}=x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}}$ and
$c_{\alpha\ell}\in\mathbb{R}$ are constants. Here and in the following,
$\mathbb{N}=\\{0,1,2,\ldots\\}$. The series converges absolutely in a small
homogeneous cube $Q_{\delta}=\\{x\in\mathbb{R}^{n}:\|x\|\leq\delta\\}$ for
some $\delta>0$, and in particular
$\sum_{\ell=w_{j}}^{\infty}\delta^{\ell}\sum_{\alpha\in\mathscr{A}_{\ell}}|c_{\alpha\ell}|<\infty.$
Using the inequality $|x^{\alpha}|\leq\|x\|^{\ell}$ for
$\alpha\in\mathscr{A}_{\ell}$, for $x\in Q_{\delta}$ we get
$|r_{j}(x)|\leq C_{1}\|x\|^{w_{j}},\quad\textrm{with }C_{1}=\sum_{\ell=w_{j}}^{\infty}\delta^{\ell-w_{j}}\sum_{\alpha\in\mathscr{A}_{\ell}}|c_{\alpha\ell}|<\infty.$
The estimate for the derivatives of $r_{j}$ is analogous. Indeed, we have
$\partial_{x_{i}}r_{j}(x)=\sum_{\ell=w_{j}}^{\infty}\sum_{\alpha\in\mathscr{A}_{\ell}}\alpha_{i}c_{\alpha\ell}x^{\alpha-\text{e}_{i}},$
where $\alpha-\text{e}_{i}\in\mathscr{A}_{\ell-w_{i}}$ whenever
$\alpha\in\mathscr{A}_{\ell}$. Thus the leading term in the series has
homogeneous degree $w_{j}-w_{i}$ and repeating the argument above we get the
estimate $|\partial_{x_{i}}r_{j}(x)|\leq C_{2}\|x\|^{w_{j}-w_{i}}$ for $x\in
Q_{\delta}$.
∎
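The key inequality $|x^{\alpha}|\leq\|x\|^{\ell}$ for $\alpha\in\mathscr{A}_{\ell}$ follows from $|x_{i}|\leq\|x\|^{w_{i}}$; a numerical sketch, with sample weights and a sample multi-index of our choosing, illustrates it:

```python
# Sanity check of |x^alpha| <= ||x||^ell for alpha in A_ell, with
# sample weights w and a sample multi-index alpha (illustrative choices).
w = (1, 1, 2, 3)
alpha = (2, 0, 1, 1)  # ell = 2*1 + 0*1 + 1*2 + 1*3 = 7
ell = sum(ai * wi for ai, wi in zip(alpha, w))

def pseudo_norm(x):
    return sum(abs(xi) ** (1.0 / wi) for wi, xi in zip(w, x))

def monomial(x):
    # x^alpha = x_1^{alpha_1} ... x_n^{alpha_n}
    out = 1.0
    for xi, ai in zip(x, alpha):
        out *= xi ** ai
    return out

samples = [(0.3, -0.1, 0.2, -0.4), (1.5, 2.0, -0.7, 0.1), (-2.0, 0.5, 3.0, 1.0)]
for x in samples:
    assert abs(monomial(x)) <= pseudo_norm(x) ** ell + 1e-12
```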
When the distribution $\mathscr{D}$ satisfies the commutativity assumption
(1.2) the coefficients $a_{j}$ appearing in the vector-field $X_{2}$ in (2.3)
enjoy additional properties.
###### Theorem 2.2.
If $\mathscr{D}\subset TM$ is an analytic distribution of rank 2 satisfying
(1.2) then the functions $a_{3},\ldots,a_{n}$ of Theorem 2.1 depend only on
the variables $x_{1}$ and $x_{2}$.
###### Proof.
Let $\Gamma:\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}^{n}$ be the map
$\Gamma(t,x)=\Phi_{t}^{X_{2}}(x),$ where $x\in\mathbb{R}^{n}$ and
$t\in\mathbb{R}$. Here, we are using the exponential coordinates (2.1). In the
following we omit the composition sign $\circ$. Defining
$\Theta:\mathbb{R}^{3}\times\mathbb{R}^{n}\to\mathbb{R}^{n}$ as the map
$\Theta_{t,x_{1},x_{2}}(p)=\Phi_{-(x_{2}+t)}^{X_{2}}\Phi_{-x_{1}}^{X_{1}}\Phi_{t}^{X_{2}}\Phi_{x_{1}}^{X_{1}}\Phi_{x_{2}}^{X_{2}}(p),$
we have
$\Gamma(t,x)=\Phi_{x_{1}}^{X_{1}}\Phi_{x_{2}+t}^{X_{2}}\Theta_{t,x_{1},x_{2}}\Phi_{x_{3}}^{X_{3}}\dots\Phi_{x_{n}}^{X_{n}}(0).$
We claim that there exists a $C>0$, independent of $t$, such that, as $t\to 0$,
$|\Theta_{t,x_{1},x_{2}}\Phi_{s}^{X_{j}}-\Phi_{s}^{X_{j}}\Theta_{t,x_{1},x_{2}}|\leq
Ct^{2}.$ (2.5)
We will prove claim (2.5) in Lemma 2.3 below. From (2.5) it follows that there
exist mappings $R_{t}\in C^{\infty}(\mathbb{R}^{n},\mathbb{R}^{n})$ such that
$\Gamma(t,x)=\Phi_{x_{1}}^{X_{1}}\Phi_{x_{2}+t}^{X_{2}}\Phi_{x_{3}}^{X_{3}}\dots\Phi_{x_{n}}^{X_{n}}\Theta_{t,x_{1},x_{2}}(0)+R_{t}(x),$
(2.6)
and such that $|R_{t}|\leq Ct^{2}$ as $t\to 0$.
By the structure (2.3) of the vector fields $X_{1}$ and $X_{2}$ and since
$\Theta_{t,x_{1},x_{2}}$ is the composition of $C^{\infty}$ maps, there exist
$C^{\infty}$ functions $f_{j}=f_{j}(t,x_{1},x_{2})$ such that
$\Theta_{t,x_{1},x_{2}}(0)=\big{(}0,0,f_{3}(t,x_{1},x_{2}),\dots,f_{n}(t,x_{1},x_{2})\big{)}=\exp\Big{(}\sum_{j=3}^{n}f_{j}(t,x_{1},x_{2})X_{j}\Big{)}(0).$
(2.7)
By (1.2), from (2.6) and (2.7) we obtain
$\displaystyle\Gamma(t,x)$
$\displaystyle=\Phi_{x_{1}}^{X_{1}}\Phi_{x_{2}+t}^{X_{2}}\exp\Big{(}\sum_{j=3}^{n}(x_{j}+f_{j}(t,x_{1},x_{2}))X_{j}\Big{)}(0)+R_{t}(x)$
$\displaystyle=\big{(}x_{1},x_{2}+t,x_{3}+f_{3}(t,x_{1},x_{2}),\dots,x_{n}+f_{n}(t,x_{1},x_{2})\big{)}+R_{t}(x),$
and we conclude that
$X_{2}(x)=\frac{d}{dt}\Gamma(t,x)\Big{|}_{t=0}=\partial_{2}+\sum_{j=3}^{n}\frac{d}{dt}f_{j}(t,x_{1},x_{2})\Big{|}_{t=0}\partial_{j}.$
Thus the coefficients
$a_{j}(x_{1},x_{2})=\frac{d}{dt}f_{j}(t,x_{1},x_{2})|_{t=0}$, $j=3,\ldots,n$,
depend only on the first two variables, completing the proof. ∎
In the following lemma, we prove our claim (2.5).
###### Lemma 2.3.
Let $\mathscr{D}\subset TM$ be an analytic distribution satisfying (1.2). Then
for any $j=3,\ldots,n$ the claim in (2.5) holds.
###### Proof.
Let $X=X_{j}$ for any $j=3,\ldots,n$ and define the map
$T_{t,x_{1},x_{2};s}^{X}=\Theta_{t,x_{1},x_{2}}\Phi_{s}^{X}-\Phi_{s}^{X}\Theta_{t,x_{1},x_{2}}$.
For $t=0$ the map $\Theta_{0,x_{1},x_{2}}$ is the identity and thus
$T_{0,x_{1},x_{2};s}^{X}=0$. So, claim (2.5) follows as soon as we show that
$\dot{T}_{0,x_{1},x_{2};s}^{X}=\frac{\partial}{\partial
t}\Big{|}_{t=0}T_{t,x_{1},x_{2};s}^{X}=0,$
for any $s\in\mathbb{R}$ and for all $x_{1},x_{2}\in\mathbb{R}$.
We first compute the derivative of $\Theta_{t,x_{1},x_{2}}$ with respect to
$t$. Letting
$\Psi_{t,x_{1}}=\Phi_{-x_{1}}^{X_{1}}\Phi_{t}^{X_{2}}\Phi_{x_{1}}^{X_{1}}$ we
have
$\Theta_{t,x_{1},x_{2}}=\Phi_{-(x_{2}+t)}^{X_{2}}\Psi_{t,x_{1}}\Phi_{x_{2}}^{X_{2}},$
and, thanks to [5, Appendix A], the derivative of $\Psi_{t,x_{1}}$ at $t=0$ is
$\dot{\Psi}_{0,x_{1}}=\sum_{\nu=0}^{\infty}c_{\nu,x_{1}}W_{\nu},$
where $W_{\nu}=[X_{1},[\cdots,[X_{1},X_{2}]\cdots]]$ with $X_{1}$ appearing
$\nu$ times and $c_{\nu,x_{1}}=(-1)^{\nu}x_{1}^{\nu}/\nu!$. In particular, we
have $c_{0,x_{1}}=1$. Then the derivative of $\Theta_{t,x_{1},x_{2}}$ at $t=0$
is
$\displaystyle\dot{\Theta}_{0,x_{1},x_{2}}$
$\displaystyle=-X_{2}+d\Phi_{-x_{2}}^{X_{2}}\big{(}\dot{\Psi}_{0,x_{1}}(\Phi_{x_{2}}^{X_{2}})\big{)}$
$\displaystyle=-X_{2}+\sum_{\nu=0}^{\infty}c_{\nu,x_{1}}d\Phi_{-x_{2}}^{X_{2}}\big{(}W_{\nu}(\Phi_{x_{2}}^{X_{2}})\big{)}$
$\displaystyle=\sum_{\nu=1}^{\infty}c_{\nu,x_{1}}d\Phi_{-x_{2}}^{X_{2}}\big{(}W_{\nu}(\Phi_{x_{2}}^{X_{2}})\big{)},$
because the term in the sum with $\nu=0$ is
$d\Phi_{-x_{2}}^{X_{2}}\big{(}X_{2}(\Phi_{x_{2}}^{X_{2}})\big{)}=X_{2}$.
Inserting this formula for $\dot{\Theta}_{0,x_{1},x_{2}}$ into
$\dot{T}_{0,x_{1},x_{2};s}^{X}=\dot{\Theta}_{0,x_{1},x_{2}}(\Phi_{s}^{X})-d\Phi_{s}^{X}(\dot{\Theta}_{0,x_{1},x_{2}}),$
(2.8)
we obtain
$\displaystyle\dot{T}_{0,x_{1},x_{2};s}^{X}=$
$\displaystyle\sum_{\nu=1}^{\infty}c_{\nu,x_{1}}d\Phi_{-x_{2}}^{X_{2}}\big{(}W_{\nu}(\Phi_{x_{2}}^{X_{2}}\Phi_{s}^{X})\big{)}-d\Phi_{s}^{X}\sum_{\nu=1}^{\infty}c_{\nu,x_{1}}d\Phi_{-x_{2}}^{X_{2}}\big{(}W_{\nu}\big{(}\Phi_{x_{2}}^{X_{2}})\big{)}$
$\displaystyle=$ $\displaystyle
d\Phi_{s}^{X}\sum_{\nu=1}^{\infty}c_{\nu,x_{1}}\Big{(}d\Phi_{-s}^{X}d\Phi_{-x_{2}}^{X_{2}}\big{(}W_{\nu}(\Phi_{x_{2}}^{X_{2}}\Phi_{s}^{X})\big{)}-d\Phi_{-x_{2}}^{X_{2}}\big{(}W_{\nu}(\Phi_{x_{2}}^{X_{2}})\big{)}\Big{)}.$
In order to prove that $\dot{T}_{0,x_{1},x_{2};s}^{X}$ vanishes for all
$x_{1},x_{2}$ and $s$, we have to show that
$g(x_{2},s):=d\Phi_{-s}^{X}d\Phi_{-x_{2}}^{X_{2}}\big{(}W_{\nu}(\Phi_{x_{2}}^{X_{2}}\Phi_{s}^{X})\big{)}-d\Phi_{-x_{2}}^{X_{2}}\big{(}W_{\nu}(\Phi_{x_{2}}^{X_{2}})\big{)}=0,$
(2.9)
for any $\nu\geq 1$ and for any $x_{2}$ and $s$. From
$\Phi_{0}^{X}=\mathrm{id}$ it follows that $g(x_{2},0)=0$. Then, our claim
(2.9) is implied by
$h(x_{2},s):=\frac{\partial}{\partial s}g(x_{2},s)=0.$ (2.10)
This is in fact a Lie derivative; namely,
$\displaystyle h(x_{2},s)$
$\displaystyle=-d\Phi_{-s}^{X}\big{[}X,d\Phi_{-x_{2}}^{X_{2}}\big{(}W_{\nu}(\Phi_{x_{2}}^{X_{2}})\big{)}\big{]}.$
Notice that $h(0,s)=-d\Phi_{-s}^{X}[X,W_{\nu}]=0$ by our assumption (1.2). In
a similar way, for any $k\in\mathbb{N}$ we have
$\frac{\partial^{k}}{\partial
x_{2}^{k}}h(0,s)=(-1)^{k+1}d\Phi_{-s}^{X}[X,[X_{2},\cdots[X_{2},W_{\nu}]\cdots]]=0,$
with $X_{2}$ appearing $k$ times. Since the function $x_{2}\mapsto h(x_{2},s)$
is analytic, our claim (2.10) follows.
∎
From now on, we assume that $a_{j}(x)=a_{j}(x_{1},x_{2})$ are functions of the
variables $x_{1},x_{2}$.
A curve $\gamma\in AC([0,1];M)$ is horizontal if
$\dot{\gamma}(t)\in\mathscr{D}(\gamma(t))$ for a.e. $t\in[0,1]$. In
exponential coordinates we have $\gamma=(\gamma_{1},\ldots,\gamma_{n})$ where,
for $j=3,\ldots,n$, the coordinates satisfy the following integral identities
$\gamma_{j}(t)=\gamma_{j}(0)+\int_{0}^{t}a_{j}(\gamma_{1}(s),\gamma_{2}(s))\dot{\gamma}_{2}(s)ds,\quad
t\in[0,1].$ (2.11)
When $\gamma(0)$ and $\gamma_{1},\gamma_{2}$ are given, these formulas
uniquely determine the whole horizontal curve $\gamma$. We call
$\kappa\in AC([0,1];\mathbb{R}^{2})$, $\kappa=(\gamma_{1},\gamma_{2})$, the
horizontal coordinates of $\gamma$.
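The lifting formula (2.11) is straightforward to discretize. The sketch below assumes the Heisenberg-type coefficient $a_{3}(x_{1},x_{2})=x_{1}$, an illustrative choice rather than the general case, and lifts the unit circle (a test curve, not a spiral), for which $\gamma_{3}(2\pi)=\int_{0}^{2\pi}\cos^{2}s\,ds=\pi$, the enclosed area:

```python
import math

# Riemann-sum lift of the third coordinate via (2.11), assuming the
# illustrative coefficient a_3(x1, x2) = x1 (Heisenberg-type normal form).
def lift_gamma3(kappa1, kappa2_dot, T, n=100_000):
    h = T / n
    # left-endpoint sum of int_0^T a_3(kappa(s)) kappa2'(s) ds
    return h * sum(kappa1(i * h) * kappa2_dot(i * h) for i in range(n))

# Horizontal coordinates: the unit circle kappa(t) = (cos t, sin t),
# so kappa2'(t) = cos t and gamma_3(2 pi) equals the enclosed area pi.
g3 = lift_gamma3(math.cos, math.cos, 2 * math.pi)
assert abs(g3 - math.pi) < 1e-4
```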
###### Definition 2.4 (Spiral).
We say that a horizontal curve $\gamma\in AC([0,1];M)$ is a _spiral_ if, in
exponential coordinates of the second type centered at $\gamma(0)$, the
horizontal coordinates $\kappa\in AC([0,1];\mathbb{R}^{2})$ are of the form
$\kappa(t)=t\mathrm{e}^{i\varphi(t)},\quad t\in]0,1],$ (2.12)
where $\varphi\in C^{1}(]0,1];\mathbb{R}^{+})$ is a function, called _phase_
of the spiral, such that $|\varphi(t)|\to\infty$ and
$|\dot{\varphi}(t)|\to\infty$ as $t\to 0^{+}$.
Without loss of generality, we shall focus our attention on spirals that are
oriented clockwise, i.e., with a phase satisfying $\varphi(t)\to\infty$ and
$\dot{\varphi}(t)\to-\infty$ as $t\to 0^{+}$. Such a phase is decreasing near
$0$. Notice that if $\varphi(t)\to\infty$ and $\dot{\varphi}(t)$ has a limit
as $t\to 0^{+}$ then this limit must be $-\infty$.
###### Example 2.5.
An interesting example of horizontal spiral is the double-logarithm spiral,
the horizontal lift of the curve $\kappa$ in the plane of the form (2.12) with
phase $\varphi(t)=\log(-\log t)$, $t\in(0,1/2]$. In this case, we have
$\dot{\varphi}(t)=\frac{1}{t\log t},\quad t\in(0,1/2],$
and clearly $\varphi(t)\to\infty$ and $\dot{\varphi}(t)\to-\infty$ as $t\to
0^{+}$. In fact, we also have $t\dot{\varphi}\in L^{\infty}(0,1/2)$, which
means that $\kappa$, and thus $\gamma$, is Lipschitz continuous. This spiral has
the following additional properties:
* i)
for any $v\in\mathbb{R}^{2}$ with $|v|=1$ there exists an infinitesimal
sequence of positive real numbers $(\lambda_{n})_{n\in\mathbb{N}}$ such that
$\kappa(\lambda_{n}t)/\lambda_{n}\to tv$ locally uniformly, as $n\to\infty$;
* ii)
for any infinitesimal sequence of positive real numbers
$(\lambda_{n})_{n\in\mathbb{N}}$ there exists a subsequence and a
$v\in\mathbb{R}^{2}$ with $|v|=1$ such that
$\kappa(\lambda_{n_{k}}t)/\lambda_{n_{k}}\to tv$ as $k\to\infty$, locally
uniformly.
This means that the tangent cone of $\kappa$ at $t=0$ consists of all half-
lines in $\mathbb{R}^{2}$ emanating from $0$.
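The limits and the Lipschitz bound claimed for the double-logarithm spiral can be checked numerically; this is only a sanity check, since all three facts follow directly from the explicit formula for $\dot{\varphi}$:

```python
import math

# Checks for the double-logarithm spiral: phi(t) = log(-log t) on (0, 1/2].
def phi(t):
    return math.log(-math.log(t))

def phi_dot(t):
    return 1.0 / (t * math.log(t))

ts = [10.0 ** (-k) for k in range(2, 40, 4)]  # decreasing sample times
# phi -> +infinity and phi' -> -infinity along ts as t -> 0+
assert all(phi(a) < phi(b) for a, b in zip(ts, ts[1:]))
assert all(phi_dot(a) > phi_dot(b) for a, b in zip(ts, ts[1:]))
# t * phi'(t) = 1/log(t) stays bounded, so kappa is Lipschitz
assert all(abs(t * phi_dot(t)) <= 1.0 for t in ts)
```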
###### Remark 2.6.
We show that Definition 2.4 of a horizontal spiral does not in fact depend on
the chosen coordinates.
Let $F\in C^{\infty}(\mathbb{R}^{n};\mathbb{R}^{n})$ be a diffeomorphism such
that $F(0)=0$ and
$d_{0}F(\mathbb{R}^{2}\times\\{0\\})=\mathbb{R}^{2}\times\\{0\\}$, where
$d_{0}F$ is the differential of $F$ at $0$. In the new coordinates, the spiral
$\gamma$ becomes $\zeta(t)=F(\gamma(t))$ with horizontal coordinates
$\xi(t)=(F_{1}(\gamma(t)),F_{2}(\gamma(t)))$. We claim that after a
reparameterization $\xi$ is of the form (2.12), with a phase $\omega$
satisfying $|\omega|\to\infty$ and $|\dot{\omega}|\to\infty$. In particular,
we will show that $|\dot{\omega}|\to\infty$.
The function $s(t)=|\xi(t)|=|(F_{1}(\gamma(t)),F_{2}(\gamma(t)))|$ satisfies
$0<c_{0}\leq\dot{s}(t)\leq c_{1}<\infty,\quad t\in(0,1].$ (2.13)
Define the function $\omega\in C^{1}((0,1])$ by letting
$\xi(t)=s(t)\mathrm{e}^{i\omega(s(t))}$. Then, differentiating the identity
obtained by inverting
$\tan\\!\big{(}\omega(s(t))\big{)}=\frac{F_{2}(\gamma(t))}{F_{1}(\gamma(t))},\quad
t\in(0,1],$
we obtain
$\dot{s}(t)\dot{\omega}(s(t))=\frac{1}{s(t)^{2}}\langle\Phi(\gamma(t)),\dot{\gamma}(t)\rangle,\qquad
t\in(0,1],$ (2.14)
where the function $\Phi(x)=F_{1}(x)\nabla F_{2}(x)-F_{2}(x)\nabla F_{1}(x)$
has the Taylor development as $x\to 0$
$\begin{split}\Phi(x)=&\langle\nabla F_{1}(0),x\rangle\nabla
F_{2}(0)-\langle\nabla F_{2}(0),x\rangle\nabla
F_{1}(0)+O(|x|^{2}).\end{split}$
Observe that from (2.11) it follows that $|\dot{\gamma}_{j}(t)|=O(t)$ for
$j\geq 3$. Denoting by $\bar{\nabla}$ the gradient in the first two variables,
we deduce that as $t\to 0^{+}$ we have
$\langle\Phi(\gamma),\dot{\gamma}\rangle=\langle
F_{1}(\gamma)\bar{\nabla}F_{2}(\gamma)-F_{2}(\gamma)\bar{\nabla}F_{1}(\gamma),\dot{\kappa}\rangle+O(t^{2})$
(2.15)
with
$F_{1}(\gamma)\bar{\nabla}F_{2}(\gamma)-F_{2}(\gamma)\bar{\nabla}F_{1}(\gamma)=\langle\bar{\nabla}F_{1}(0),\kappa\rangle\bar{\nabla}F_{2}(0)-\langle\bar{\nabla}F_{2}(0),\kappa\rangle\bar{\nabla}F_{1}(0)+O(t^{2}).$
Inserting the last identity and
$\dot{\kappa}=\mathrm{e}^{i\varphi}+it\dot{\varphi}\mathrm{e}^{i\varphi}$ into
(2.15), after some computations we obtain
$\langle\Phi(\gamma),\dot{\gamma}\rangle=\dot{\varphi}t^{2}\det(d_{0}\bar{F}(0))+O(t^{2}),$
where $\det(d_{0}\bar{F}(0))\neq 0$ is the Jacobian determinant at
$x_{1}=x_{2}=0$ of the mapping
$(x_{1},x_{2})\mapsto(F_{1}(x_{1},x_{2},0),F_{2}(x_{1},x_{2},0))$. Now the
claim $|\dot{\omega}(s)|\to\infty$ as $s\to 0^{+}$ easily follows from (2.13),
(2.14) and from $|\dot{\varphi}(t)|\to\infty$ as $t\to 0^{+}$.
## 3\. Cut and correction devices
In this section, we begin the construction of the competing curve. Let
$\gamma$ be a spiral with horizontal coordinates $\kappa$ as in (2.12). We can
assume that $\varphi$ is decreasing and that $\varphi(1)=1$ and we denote by
$\psi:[1,\infty)\to(0,1]$ the inverse function of $\varphi$. For
$k\in\mathbb{N}$ and $\eta\in[0,2\pi)$ we define $t_{k\eta}\in(0,1]$ as the
unique solution to the equation $\varphi(t_{k\eta})=2\pi k+\eta$, i.e., we let
$t_{k\eta}=\psi(2\pi k+\eta)$. The times
$t_{k}=t_{k0}=\psi(2\pi k),\quad k\in\mathbb{N},$ (3.1)
will play a special role in our construction. The points $\kappa(t_{k})$ lie
on the positive $x_{1}$-axis.
For a fixed $k\in\mathbb{N}$, we cut the curve $\kappa$ in the interval
$[t_{k+1},t_{k}]$, replacing the path of $\kappa$ with the line segment
joining $\kappa(t_{k+1})$ to $\kappa(t_{k})$, while leaving the remaining part
of the path unchanged. We call this new curve $\kappa_{k}^{\mathrm{cut}}$
and, namely, we let
$\begin{split}\kappa_{k}^{\mathrm{cut}}(t)&=\kappa(t)\quad\text{for}\quad
t\in[0,t_{k+1}]\cup[t_{k},1],\\\
\kappa_{k}^{\mathrm{cut}}(t)&=(t,0)\quad\text{for}\quad
t\in[t_{k+1},t_{k}].\end{split}$
We denote by $\gamma_{k}^{\mathrm{cut}}\in AC([0,1];M)$ the horizontal curve
with horizontal coordinates $\kappa_{k}^{\mathrm{cut}}$ and such that
$\gamma_{k}^{\mathrm{cut}}(0)=\gamma(0)$. For $t\in[0,t_{k+1}]$, we have
$\gamma_{k}^{\mathrm{cut}}(t)=\gamma(t)$. To correct the errors produced by
the cut on the end-point, we modify the curve $\kappa_{k}^{\mathrm{cut}}$
using a certain number of devices. The construction is made by induction.
We start with the base construction. Let $\mathscr{E}=(h,\eta,\varepsilon)$ be
a triple such that $h\in\mathbb{N}$, $0<\eta<\pi/4$, and
$\varepsilon\in\mathbb{R}$. Starting from a curve
$\kappa:[0,1]\to\mathbb{R}^{2}$, we define the curve
$\mathrm{D}(\kappa;\mathscr{E}):[0,1+2|\varepsilon|]\to\mathbb{R}^{2}$ in the
following way:
$\mathrm{D}(\kappa;\mathscr{E})(t)=\left\\{\begin{array}[]{ll}\kappa(t)&t\in[0,t_{h\eta}]\\\
\kappa(t_{h\eta})+(\text{sgn}(\varepsilon)(t-t_{h\eta}),0)&t\in[t_{h\eta},t_{h\eta}+|\varepsilon|]\\\
\kappa(t-|\varepsilon|)+(\varepsilon,0)&t\in[t_{h\eta}+|\varepsilon|,t_{h}+|\varepsilon|]\\\
\kappa(t_{h})+(2\varepsilon+\text{sgn}(\varepsilon)(t_{h}-t),0)&t\in[t_{h}+|\varepsilon|,t_{h}+2|\varepsilon|]\\\
\kappa(t-2|\varepsilon|)&t\in[t_{h}+2|\varepsilon|,1+2|\varepsilon|].\end{array}\right.$
(3.2)
We denote by $\mathrm{D}(\gamma;\mathscr{E})$ the horizontal curve with
horizontal coordinates $\mathrm{D}(\kappa;\mathscr{E})$. We let
$\dot{\mathrm{D}}(\gamma;\mathscr{E})=\frac{d}{dt}\mathrm{D}(\gamma;\mathscr{E})$
and we indicate by $\mathrm{D}_{i}(\gamma;\mathscr{E})$ the i-th coordinate of
the corrected curve in exponential coordinates.
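A direct implementation of (3.2) confirms that the five pieces glue continuously at the four junction times. The phase $\varphi(t)=1/t$ (so $\psi(\vartheta)=1/\vartheta$) and the parameters $h,\eta,\varepsilon$ below are sample choices for this sketch:

```python
import math

# Sketch of the correction device (3.2) on the spiral with phase
# phi(t) = 1/t; h, eta, eps are sample values (illustrative choices).
def kappa(t):
    return (t * math.cos(1 / t), t * math.sin(1 / t))

h, eta, eps = 2, math.pi / 6, 1e-3
t_heta = 1.0 / (2 * math.pi * h + eta)   # t_{h eta} = psi(2 pi h + eta)
t_h = 1.0 / (2 * math.pi * h)            # t_h = psi(2 pi h)

def D(t):
    # The five branches of (3.2).
    s = math.copysign(1.0, eps)
    if t <= t_heta:
        return kappa(t)
    if t <= t_heta + abs(eps):
        x, y = kappa(t_heta)
        return (x + s * (t - t_heta), y)
    if t <= t_h + abs(eps):
        x, y = kappa(t - abs(eps))
        return (x + eps, y)
    if t <= t_h + 2 * abs(eps):
        x, y = kappa(t_h)
        return (x + 2 * eps + s * (t_h - t), y)
    return kappa(t - 2 * abs(eps))

# The pieces match at the four junction times.
junctions = (t_heta, t_heta + abs(eps), t_h + abs(eps), t_h + 2 * abs(eps))
for tj in junctions:
    assert math.dist(D(tj - 1e-9), D(tj + 1e-9)) < 1e-6
```

Note that the second and fourth branches are horizontal segments, on which the second coordinate is constant; this is what allows the cancellation described next.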
In the lifting formula (2.11), the intervals where $\dot{\gamma}_{2}=0$ do not
contribute to the integral. For this reason, in (3.2) we may cancel the second
and fourth lines, where $\dot{\mathrm{D}}_{2}(\gamma;\mathscr{E})=0$, and then
reparameterize the curve on $[0,1]$. Namely, we define the discontinuous curve
$\overline{\mathrm{D}}(\kappa;\mathscr{E}):[0,1]\to\mathbb{R}^{2}$ as
$\overline{\mathrm{D}}(\kappa;\mathscr{E})(t)=\left\\{\begin{array}[]{ll}\kappa(t)&t\in[0,t_{h\eta}]\\\
\kappa(t)+(\varepsilon,0)&t\in[t_{h\eta},t_{h}]\\\
\kappa(t)&t\in[t_{h},1],\end{array}\right.$ (3.3)
and then we consider the “formal” i-th coordinate
$\overline{\mathrm{D}}_{i}(\gamma;\mathscr{E})(t)=\int_{0}^{t}a_{i}(\overline{\mathrm{D}}(\kappa;\mathscr{E})(s))\dot{\kappa}_{2}(s)ds.$
The following identities can be checked by an elementary computation (for
$\varepsilon>0$)
$\overline{\mathrm{D}}(\gamma;\mathscr{E})(t)=\left\\{\begin{array}[]{ll}\mathrm{D}(\gamma;\mathscr{E})(t)&t\in[0,t_{h\eta}]\\\
\mathrm{D}(\gamma;\mathscr{E})(t+\varepsilon)&t\in[t_{h\eta},t_{h}]\\\
\mathrm{D}(\gamma;\mathscr{E})(t+2\varepsilon)&t\in[t_{h},1].\end{array}\right.$
(3.4)
With this notation, the final error produced on the i-th coordinate by the
correction device $\mathscr{E}$ is
$\gamma_{i}(1)-\mathrm{D}_{i}(\gamma;\mathscr{E})(1+2|\varepsilon|)=\int_{0}^{1}\big{\\{}a_{i}(\kappa(s))-a_{i}(\overline{\mathrm{D}}(\kappa;\mathscr{E})(s))\big{\\}}\dot{\kappa}_{2}(s)ds.$
(3.5)
The proof of this formula is an elementary computation and is omitted.
We will iterate the above construction a certain number of times, depending on
a collection of triples $\mathscr{E}$. We first fix the number of triples and
iterations.
For $i=3,\dots,n$, let
$\mathscr{B}_{i}=\\{(\alpha,\beta)\in\mathbb{N}^{2}\,:\,\alpha+\beta=w_{i}-2\\}$,
where $w_{i}\geq 2$ is the homogeneous degree of the coordinate $x_{i}$. Then,
the polynomials $p_{i}$ given by Theorem 2.1 and Theorem 2.2 are of the form
$p_{i}(x_{1},x_{2})=\sum_{(\alpha,\beta)\in\mathscr{B}_{i}}c_{\alpha\beta}\,x_{1}^{\alpha+1}x_{2}^{\beta},$
(3.6)
for suitable constants $c_{\alpha\beta}\in\mathbb{R}$. We set
$\ell=\sum_{i=3}^{n}\mathrm{Card}(\mathscr{B}_{i}),$ (3.7)
and we consider an $(\ell-2)$-tuple of triples
$\bar{\mathscr{E}}=(\mathscr{E}_{3},\ldots,\mathscr{E}_{\ell})$ such that
$h_{\ell}<h_{\ell-1}<\ldots<h_{3}<k$. Each triple is used to correct one
monomial.
Without loss of generality, we simplify the construction in the following way.
In the sum (3.6), we can assume that $c_{\alpha\beta}=0$ for all
$(\alpha,\beta)\in\mathscr{B}_{i}$ but one. Namely, we can assume that
$p_{i}(x_{1},x_{2})=x_{1}^{\alpha_{i}+1}x_{2}^{\beta_{i}}\quad\textrm{with}\quad\alpha_{i}+\beta_{i}=w_{i}-2,$
(3.8)
and with $c_{\alpha_{i}\beta_{i}}=1$. In this case, we have $\ell=n$ and we
will use $n-2$ devices associated with the triples
$\mathscr{E}_{3},\ldots,\mathscr{E}_{n}$ to correct the coordinates
$i=3,\ldots,n$. By the bracket generating property of the vector fields
$X_{1}$ and $X_{2}$ and by the stratified basis property for
$X_{1},\ldots,X_{n}$, the pairs $(\alpha_{i},\beta_{i})$ satisfy the following
condition
$(\alpha_{i},\beta_{i})\neq(\alpha_{j},\beta_{j})\quad\textrm{for}\quad i\neq
j.$ (3.9)
From now on in the rest of the paper we will assume that the polynomials
$p_{i}$ are of the form (3.8) with (3.9).
Now we clarify the inductive step of our construction. Let
$\mathscr{E}_{3}=(h_{3},\eta_{3},\varepsilon_{3})$ be a triple such that
$h_{3}<k$. We define the curve
$\kappa^{(3)}=\mathrm{D}(\kappa_{k}^{\mathrm{cut}};\mathscr{E}_{3})$. Given a
triple $\mathscr{E}_{4}=(h_{4},\eta_{4},\varepsilon_{4})$ with $h_{4}<h_{3}$
we then define $\kappa^{(4)}=\mathrm{D}(\kappa^{(3)};\mathscr{E}_{4})$. By
induction on $\ell\in\mathbb{N}$, given a triple
$\mathscr{E}_{\ell}=(h_{\ell},\eta_{\ell},\varepsilon_{\ell})$ with
$h_{\ell}<h_{\ell-1}$, we define
$\kappa^{(\ell)}=\mathrm{D}(\kappa^{(\ell-1)};\mathscr{E}_{\ell})$. When
$\ell=n$ we stop.
We define the planar curve $\mathrm{D}(\kappa;k,{\bar{\mathscr{E}}})\in
AC([0,1+2\bar{\varepsilon}];\mathbb{R}^{2})$ as
$\mathrm{D}(\kappa;k,{\bar{\mathscr{E}}})=\kappa^{(n)}$ according to the
inductive construction explained above, where
$\bar{\varepsilon}=|\varepsilon_{3}|+\ldots+|\varepsilon_{n}|$. Then we call
$\mathrm{D}(\gamma;k,\bar{\mathscr{E}})\in AC([0,1+2\bar{\varepsilon}];M)$,
the horizontal lift of $\mathrm{D}(\kappa;k,\bar{\mathscr{E}})$ with
$\mathrm{D}(\gamma;k,\bar{\mathscr{E}})(0)=\gamma(0)$, the modified curve of
$\gamma$ associated with $\bar{\mathscr{E}}$ and with cut of parameter
$k\in\mathbb{N}$. There is a last adjustment to do. In
$[0,1+2\bar{\varepsilon}]$ there are $2(n-2)$ subintervals where
$\dot{\kappa}_{2}^{(n)}=0$. On each of these intervals the coordinates
$\mathrm{D}_{j}(\gamma;k,\bar{\mathscr{E}})$ are constant. According to the
procedure explained in (3.2)–(3.4), we erase these intervals and we
parametrize the resulting curve on $[0,1]$. We denote this curve by
$\bar{\gamma}=\overline{\mathrm{D}}(\gamma;k,\bar{\mathscr{E}})$.
###### Definition 3.1 (Adjusted modification of $\gamma$).
We call the curve
$\bar{\gamma}=\overline{\mathrm{D}}(\gamma;k,\bar{\mathscr{E}}):[0,1]\to M$
the adjusted modification of $\gamma$ relative to the collection of devices
$\bar{\mathscr{E}}=(\mathscr{E}_{3},\ldots,\mathscr{E}_{n})$ and with cut of
parameter $k$.
Our next task is to compute the error produced by cut and devices on the end-
point of the spiral. For $i=3,\ldots,n$ and for $t\in[0,1]$ we let
$\Delta_{i}^{\gamma}(t)=a_{i}(\kappa(t))\dot{\kappa}_{2}(t)-a_{i}(\bar{\kappa}(t))\dot{\bar{\kappa}}_{2}(t).$
(3.10)
When $t<t_{k+1}$ or $t>t_{k}$ we have
$\dot{\kappa}_{2}=\dot{\bar{\kappa}}_{2}$ and so the definition above reads
$\Delta_{i}^{\gamma}(t)=\big{(}a_{i}(\kappa(t))-a_{i}(\bar{\kappa}(t))\big{)}\dot{\kappa}_{2}(t).$
By the recursive application of the argument used to obtain (3.5), we get the
following formula for the error at the final time $\bar{t}=t_{h_{n}}$:
$\begin{split}E_{i}^{k,\bar{\mathscr{E}}}&=\gamma_{i}(\bar{t})-\bar{\gamma}_{i}(\bar{t})=\int_{t_{k+1}}^{\bar{t}}\Delta_{i}^{\gamma}(t)dt\\\
&=\int_{F_{k}}\Delta_{i}^{\gamma}(t)dt+\sum_{j=3}^{n}\Big{(}\int_{A_{j}}\Delta_{i}^{\gamma}(t)dt+\int_{B_{j}}\Delta_{i}^{\gamma}(t)dt\Big{)}.\end{split}$
(3.11)
In (3.11) and in the following, we use the following notation for the
intervals:
$F_{k}=[t_{k+1},t_{k}],\quad A_{j}=[t_{h_{j-1}},t_{h_{j}\eta_{j}}],\quad
B_{j}=[t_{h_{j}\eta_{j}},t_{h_{j}}],$ (3.12)
with $t_{h_{2}}=t_{k}$. We also used the fact that on $[0,t_{k+1}]$ we have
$\gamma=\bar{\gamma}$.
On the interval $F_{k}$ we have $\dot{\bar{\kappa}}_{2}=0$ and thus
$\int_{F_{k}}\Delta_{i}^{\gamma}\,dt=\int_{F_{k}}\big{\\{}p_{i}(\kappa)+r_{i}(\kappa)\big{\\}}\dot{\kappa}_{2}dt.$
(3.13)
On the intervals $A_{j}$ we have $\kappa=\bar{\kappa}$ and thus
$\int_{A_{j}}\Delta_{i}^{\gamma}dt=0,$ (3.14)
because the functions $a_{i}$ depend only on $\kappa$. Finally, on the
intervals $B_{j}$ we have $\bar{\kappa}_{1}=\kappa_{1}+\varepsilon_{j}$ and
$\kappa_{2}=\bar{\kappa}_{2}$ and thus
$\int_{B_{j}}\Delta_{i}^{\gamma}\,dt=\int_{B_{j}}\\{p_{i}(\kappa)-p_{i}(\kappa+(\varepsilon_{j},0))\\}\dot{\kappa}_{2}dt+\int_{B_{j}}\\{r_{i}(\kappa)-r_{i}(\kappa+(\varepsilon_{j},0))\\}\dot{\kappa}_{2}dt.$
(3.15)
Our goal is to find $k\in\mathbb{N}$ and devices $\bar{\mathscr{E}}$ such that
$E_{i}^{k,\bar{\mathscr{E}}}=0$ for all $i=3,\ldots,n$ and such that the
modified curve $\mathrm{D}(\gamma;k,\bar{\mathscr{E}})$ is shorter than
$\gamma$.
## 4\. Effect of cut and devices on monomials and remainders
Let $\gamma$ be a horizontal spiral with horizontal coordinates $\kappa\in
AC([0,1];\mathbb{R}^{2})$ of the form (2.12). We prove some estimates about
the integrals of the polynomials (3.8) along the curve $\kappa$. These
estimates are preliminary to the study of the errors introduced in (3.11).
For $\alpha,\beta\in\mathbb{N}$, we associate with the monomial
$p_{\alpha\beta}(x_{1},x_{2})=x_{1}^{\alpha+1}x_{2}^{\beta}$ the function
$\gamma_{\alpha\beta}$ defined for $t\in[0,1]$ by
$\begin{split}\gamma_{\alpha\beta}(t)&=\int_{\kappa|_{[0,t]}}{p_{\alpha\beta}(x_{1},x_{2})dx_{2}}=\int_{0}^{t}{p_{\alpha\beta}(\kappa(s))\dot{\kappa}_{2}(s)ds}.\end{split}$
When $p_{i}=p_{\alpha\beta}$, the function $\gamma_{\alpha\beta}$ is the
leading term in the i-th coordinate of $\gamma$ in exponential coordinates. In
this case, the problem of estimating $\gamma_{i}(t)$ reduces to the estimate
of integrals of the form
$I_{\omega\eta}^{\alpha\beta}=\int_{t_{\eta}}^{t_{\omega}}{\kappa_{1}(t)^{\alpha+1}\kappa_{2}(t)^{\beta}\dot{\kappa}_{2}(t)dt},$
(4.1)
where $\omega\leq\eta$ are angles, $t_{\omega}=\psi(\omega)$ and
$t_{\eta}=\psi(\eta)$. These integrals are related to the integrals
$J_{\omega\eta}^{\alpha\beta}=\int_{\omega}^{\eta}{t_{\vartheta}^{\alpha+\beta+2}\cos^{\alpha}(\vartheta)\sin^{\beta}(\vartheta)d\vartheta}.$
(4.2)
In the following, we will use the short notation
$D^{\alpha\beta}_{\omega}=\cos^{\alpha+1}(\omega)\sin^{\beta+1}(\omega)$.
###### Lemma 4.1.
For any $\alpha,\beta\in\mathbb{N}$ and $\omega\leq\eta$ we have the identity
$\begin{split}(\alpha+\beta+2)I_{\omega\eta}^{\alpha\beta}=&t_{\omega}^{\alpha+\beta+2}D^{\alpha\beta}_{\omega}-t_{\eta}^{\alpha+\beta+2}D^{\alpha\beta}_{\eta}-(\alpha+1)J_{\omega\eta}^{\alpha\beta}.\end{split}$
(4.3)
###### Proof.
Inserting into $I_{\omega\eta}^{\alpha\beta}$ the identities
$\kappa_{1}(t)=t\cos(\varphi(t))$, $\kappa_{2}(t)=t\sin(\varphi(t))$, and
$\dot{\kappa}_{2}(t)=\sin(\varphi(t))+t\cos(\varphi(t))\dot{\varphi}(t)$ we
get
$I_{\omega\eta}^{\alpha\beta}=\int_{t_{\eta}}^{t_{\omega}}{t^{\alpha+\beta+1}D^{\alpha\beta}_{\varphi(t)}dt}+\int_{t_{\eta}}^{t_{\omega}}{t^{\alpha+\beta+2}\cos^{\alpha+2}(\varphi(t))\sin^{\beta}(\varphi(t))\dot{\varphi}(t)dt},$
and, integrating by parts in the first integral, this identity reads
$\displaystyle I_{\omega\eta}^{\alpha\beta}=$
$\displaystyle\left[\frac{t^{\alpha+\beta+2}D^{\alpha\beta}_{\varphi(t)}}{\alpha+\beta+2}\right]^{t_{\omega}}_{t_{\eta}}+\frac{\alpha+1}{\alpha+\beta+2}\int_{t_{\eta}}^{t_{\omega}}{t^{\alpha+\beta+2}\cos^{\alpha}(\varphi(t))\sin^{\beta+2}(\varphi(t))\dot{\varphi}(t)dt}$
$\displaystyle-\frac{\beta+1}{\alpha+\beta+2}\int_{t_{\eta}}^{t_{\omega}}{t^{\alpha+\beta+2}\cos^{\alpha+2}(\varphi(t))\sin^{\beta}(\varphi(t))\dot{\varphi}(t)dt}$
$\displaystyle+\int_{t_{\eta}}^{t_{\omega}}{t^{\alpha+\beta+2}\cos^{\alpha+2}(\varphi(t))\sin^{\beta}(\varphi(t))\dot{\varphi}(t)dt}.$
Grouping the trigonometric terms and then performing the change of variable
$\varphi(t)=\vartheta$, we get
$\displaystyle I_{\omega\eta}^{\alpha\beta}=$
$\displaystyle\left[\frac{t_{\vartheta}^{\alpha+\beta+2}D^{\alpha\beta}_{\vartheta}}{\alpha+\beta+2}\right]^{\omega}_{\eta}+\frac{\alpha+1}{\alpha+\beta+2}\int_{\eta}^{\omega}{t_{\vartheta}^{\alpha+\beta+2}\cos^{\alpha}(\vartheta)\sin^{\beta}(\vartheta)d\vartheta}.$
This is our claim. ∎
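Identity (4.3) can be verified numerically for a concrete spiral. The sketch below uses the sample phase $\varphi(t)=1/t$, hence $\psi(\vartheta)=1/\vartheta$, and the sample exponents $\alpha=1$, $\beta=2$:

```python
import math

# Numerical check of identity (4.3) for the spiral with phase phi(t) = 1/t
# (so t_theta = 1/theta), with sample exponents alpha = 1, beta = 2.
a, b = 1, 2
omega, eta = 2 * math.pi, 2 * math.pi + math.pi / 6   # omega <= eta
t_om, t_et = 1 / omega, 1 / eta

def trapz(f, lo, hi, n=100_000):
    # composite trapezoid rule on [lo, hi]
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

# I: integral (4.1) in the time variable, with
# kappa2'(t) = sin(1/t) + t cos(1/t) * phi'(t) and phi'(t) = -1/t^2.
def integrand_I(t):
    c, s = math.cos(1 / t), math.sin(1 / t)
    k2dot = s + t * c * (-1 / t ** 2)
    return (t * c) ** (a + 1) * (t * s) ** b * k2dot

I = trapz(integrand_I, t_et, t_om)

# J: integral (4.2) in the angle variable, with t_theta = 1/theta.
J = trapz(lambda th: th ** (-(a + b + 2))
          * math.cos(th) ** a * math.sin(th) ** b, omega, eta)

def Dab(th):
    # D^{alpha beta}_theta = cos^{alpha+1} sin^{beta+1}
    return math.cos(th) ** (a + 1) * math.sin(th) ** (b + 1)

lhs = (a + b + 2) * I
rhs = (t_om ** (a + b + 2) * Dab(omega)
       - t_et ** (a + b + 2) * Dab(eta)
       - (a + 1) * J)
assert abs(lhs - rhs) < 1e-10
```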
For $\alpha,\beta\in\mathbb{N}$, $h\in\mathbb{N}$ and $\eta\in(0,\pi/4)$ we
let
$j^{\alpha\beta}_{h\eta}=\eta^{\beta}\int_{2h\pi}^{2h\pi+\eta}t_{\vartheta}^{\alpha+\beta+2}\,d\vartheta=\int_{t_{h\eta}}^{t_{h}}t^{\alpha+\beta+2}|\dot{\varphi}(t)|dt,$
(4.4)
where in the second equality we let $\vartheta=\varphi(t)$.
###### Corollary 4.2.
There exist constants $0<c_{\alpha\beta}<C_{\alpha\beta}$ depending on
$\alpha,\beta\in\mathbb{N}$ such that for all $h\in\mathbb{N}$ and
$\eta\in(0,\pi/4)$ we have
$c_{\alpha\beta}j^{\alpha\beta}_{h\eta}\leq|I^{\alpha\beta}_{2h\pi,2h\pi+\eta}|\leq
C_{\alpha\beta}j^{\alpha\beta}_{h\eta}.$ (4.5)
###### Proof.
From (4.3) with $D^{\alpha\beta}_{2h\pi}=0$ we obtain
$(\alpha+\beta+2)|I_{2h\pi,2h\pi+\eta}^{\alpha\beta}|=t_{2h\pi+\eta}^{\alpha+\beta+2}D^{\alpha\beta}_{\eta}+(\alpha+1)J_{2h\pi,2h\pi+\eta}^{\alpha\beta},$
where $c_{\alpha\beta}\eta^{\beta+1}\leq
D^{\alpha\beta}_{\eta}\leq\eta^{\beta+1}$, because $\eta\in(0,\pi/4)$, and
$c_{\alpha\beta}\eta^{\beta+1}t_{2h\pi+\eta}^{\alpha+\beta+2}\leq
c_{\alpha\beta}\eta^{\beta}\int_{2h\pi}^{2h\pi+\eta}t_{\vartheta}^{\alpha+\beta+2}d\vartheta\leq
J_{2h\pi,2h\pi+\eta}^{\alpha\beta}\leq\eta^{\beta}\int_{2h\pi}^{2h\pi+\eta}t_{\vartheta}^{\alpha+\beta+2}d\vartheta.$
The claim follows. ∎
###### Remark 4.3.
We will use the estimates (4.5) in the proof of the solvability of the end-
point equations. In particular, the computations above are possible thanks to
the structure of the monomials $p_{i}$: here, their dependence only on the
variables $x_{1}$ and $x_{2}$, ensured by (1.2), is crucial. When the
coefficients $a_{i}$ depend on all the variables $x_{1},\ldots,x_{n}$,
repeating the same computations seems difficult. Indeed, in the integrals
(4.1) and (4.2) there are also the coordinates $\gamma_{3},\ldots,\gamma_{n}$.
Then, the new identity (4.3) becomes more complicated because other addends
appear after the integration by parts, owing to the derivatives of
$\gamma_{3},\ldots,\gamma_{n}$. Owing to these new terms, the estimates from
below in (4.5) become difficult, while the estimates from above remain
possible.
We denote by $\kappa_{\varepsilon}$ the rigid translation by
$\varepsilon\in\mathbb{R}$ in the $x_{1}$ direction of the curve $\kappa$.
Namely, we let $\kappa_{\varepsilon,1}=\kappa_{1}+\varepsilon$ and
$\kappa_{\varepsilon,2}=\kappa_{2}$. Recall the notation $t_{h}=\psi(2\pi h)$
and $t_{h\eta}=\psi(2\pi h+\eta)$, for $h\in\mathbb{N}$ and $\eta>0$. In
particular, when we take $\varepsilon_{j}$, $h_{j}$ and $\eta_{j}$ related to
the $j$-th correction-device, we have
$\kappa_{\varepsilon_{j}}|_{B_{j}}=\bar{\kappa}|_{B_{j}}$.
In the study of the polynomial part of integrals in (3.15) we need estimates
for the quantities
$\Delta_{h\eta\varepsilon}^{\alpha\beta}=\int_{\kappa_{\varepsilon}|_{[t_{h\eta},t_{h}]}}{p_{\alpha\beta}(x_{1},x_{2})dx_{2}}-\int_{\kappa|_{[t_{h\eta},t_{h}]}}{p_{\alpha\beta}(x_{1},x_{2})dx_{2}}.$
###### Lemma 4.4.
We have
$\begin{split}\Delta_{h\eta\varepsilon}^{\alpha\beta}=(\alpha+1)\varepsilon
I_{2h\pi,2h\pi+\eta}^{\alpha-1,\beta}+O(\varepsilon^{2}),\end{split}$ (4.6)
where $O(\varepsilon^{2})/\varepsilon^{2}$ is bounded as $\varepsilon\to 0$.
###### Proof.
The proof is an elementary computation:
$\begin{split}\Delta_{h\eta\varepsilon}^{\alpha\beta}&=\int_{t_{h\eta}}^{t_{h}}\dot{\kappa}_{2}(t)\kappa_{2}(t)^{\beta}\big{[}(\kappa_{1}(t)+\varepsilon)^{\alpha+1}-\kappa_{1}(t)^{\alpha+1}\big{]}dt\\\
&=\sum_{i=0}^{\alpha}\binom{\alpha+1}{i}\varepsilon^{\alpha+1-i}\int_{t_{h\eta}}^{t_{h}}{\dot{\kappa}_{2}(t)\kappa_{1}(t)^{i}\kappa_{2}(t)^{\beta}dt}\\\
&=\sum_{i=0}^{\alpha}\binom{\alpha+1}{i}\varepsilon^{\alpha+1-i}I_{2h\pi,2h\pi+\eta}^{i-1,\beta}\\\
&=(\alpha+1)\varepsilon
I_{2h\pi,2h\pi+\eta}^{\alpha-1,\beta}+O(\varepsilon^{2}).\end{split}$ (4.7)
∎
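The binomial expansion in (4.7) and the extraction of its $\varepsilon$-linear term rest on the pointwise identity $(x+\varepsilon)^{\alpha+1}-x^{\alpha+1}=\sum_{i=0}^{\alpha}\binom{\alpha+1}{i}\varepsilon^{\alpha+1-i}x^{i}$. A quick numerical sanity check (an illustrative Python sketch, not part of the proof):

```python
from math import comb, isclose

# (x + e)^(a+1) - x^(a+1) = sum_{i=0}^{a} C(a+1, i) * e^(a+1-i) * x^i,
# whose e-linear term (i = a) is (a+1) * e * x^a, as used in (4.7).
def lhs(x, e, a):
    return (x + e) ** (a + 1) - x ** (a + 1)

def rhs(x, e, a):
    return sum(comb(a + 1, i) * e ** (a + 1 - i) * x ** i for i in range(a + 1))

for a in range(5):
    for x, e in [(0.7, 0.01), (-1.3, 0.2), (2.0, -0.5)]:
        assert isclose(lhs(x, e, a), rhs(x, e, a), rel_tol=1e-9, abs_tol=1e-12)
    # leading term: lhs / e -> (a+1) * x^a as e -> 0
    assert isclose(lhs(0.7, 1e-6, a) / 1e-6, (a + 1) * 0.7 ** a, rel_tol=1e-4)
```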
We estimate the terms in (3.13). The quantities $\Delta_{i}^{\gamma}$ are
introduced in (4.6).
###### Lemma 4.5.
Let $\gamma$ be a horizontal spiral with phase $\varphi$. For all
$i=3,\ldots,n$ and for all $k\in\mathbb{N}$ large enough we have
$\Big{|}\int_{F_{k}}\Delta_{i}^{\gamma}dt\Big{|}\leq\int_{F_{k}}t^{\alpha_{i}+\beta_{i}+2}|\dot{\varphi}|dt.$
(4.8)
###### Proof.
By (4.3) with vanishing boundary contributions, we obtain
$\begin{split}\Big{|}\int_{F_{k}}p_{i}(\kappa)\dot{\kappa}_{2}dt\Big{|}&=|I_{2k\pi,2(k+1)\pi}^{\alpha_{i}\beta_{i}}|=\frac{\alpha_{i}+1}{\alpha_{i}+\beta_{i}+2}|J_{2k\pi,2(k+1)\pi}^{\alpha_{i}\beta_{i}}|\\\
&\leq\frac{\alpha_{i}+1}{\alpha_{i}+\beta_{i}+2}\int_{F_{k}}t^{\alpha_{i}+\beta_{i}+2}|\dot{\varphi}|dt,\end{split}$
so we are left with the estimate of the integral of $r_{i}$. Using
$\kappa_{2}=t\sin(\varphi(t))$ we get
$\begin{split}\int_{F_{k}}r_{i}(\kappa)\dot{\kappa}_{2}dt&=\int_{F_{k}}r_{i}(\kappa)(\sin(\varphi)+t\cos(\varphi)\dot{\varphi})dt\\\
&=\int_{F_{k}}(tr_{i}(\kappa)-R_{i})\cos(\varphi)\dot{\varphi}dt,\end{split}$
where we let
$R_{i}(t)=\int_{t_{k+1}}^{t}r_{i}(\kappa)ds.$
From (2.12), we have $|\kappa(t)|\leq t$ for all $t\in[0,1]$. By part (ii) of
Theorem 2.1 we have $|r_{i}(x)|\leq C\|x\|^{w_{i}}$ for all
$x\in\mathbb{R}^{n}$ near $0$, with $w_{i}=\alpha_{i}+\beta_{i}+2$. It follows
that $|r_{i}(\kappa(t))|\leq Ct^{w_{i}}$ for all $t\in[0,1]$, and
$|R_{i}(t)|\leq Ct^{w_{i}+1}$. We deduce that
$\Big{|}\int_{F_{k}}r_{i}(\kappa)\dot{\kappa}_{2}dt\Big{|}\leq
C\int_{F_{k}}t^{\alpha_{i}+\beta_{i}+3}|\dot{\varphi}|dt,$
and the claim follows. ∎
Now we study the integrals in (3.15). Let us introduce the following notation
$\Delta_{r_{i}}^{\gamma}=\big{(}r_{i}(\kappa)-r_{i}(\bar{\kappa})\big{)}\dot{\kappa}_{2}\qquad\textrm{and}\qquad\delta_{r_{i}}^{\gamma}=r_{i}(\kappa)-r_{i}(\bar{\kappa}).$
###### Lemma 4.6.
Let $\gamma$ be a horizontal spiral with phase $\varphi$. Then for any
$i,j=3,\dots,n$ and for $|\varepsilon_{j}|<t_{h_{j}\eta_{j}}$, we have
$\Big{|}\int_{B_{j}}\Delta_{r_{i}}^{\gamma}(t)dt\Big{|}\leq
C|\varepsilon_{j}|\int_{B_{j}}t^{w_{i}}|\dot{\varphi}(t)|dt,$ (4.9)
where $C>0$ is a constant.
###### Proof.
For $t\in B_{j}$ we have $\kappa_{2}(t)=\bar{\kappa}_{2}(t)$ and
$\bar{\kappa}_{1}(t)=\kappa_{1}(t)+\varepsilon_{j}$. By Lagrange Theorem it
follows that
$\delta_{r_{i}}^{\gamma}(t)=\varepsilon_{j}\partial_{1}r_{i}(\kappa^{*}(t)),$
where $\kappa^{*}(t)=(\kappa_{1}^{*}(t),\kappa_{2}(t))$ and
$\kappa_{1}^{*}(t)=\kappa_{1}(t)+\delta_{j}$, with $0<|\delta_{j}|<|\varepsilon_{j}|$.
By Theorem 2.1 we have $|\partial_{1}r_{i}(x)|\leq C\|x\|^{w_{i}-1}$ and so,
also using $|\delta_{j}|<|\varepsilon_{j}|<t$,
$|\partial_{1}r_{i}(\kappa^{*}(t))|\leq
C\|\kappa^{*}(t)\|^{w_{i}-1}=C\Big{(}|\kappa_{1}(t)+\delta_{j}|+|\kappa_{2}(t)|\Big{)}^{w_{i}-1}\leq
Ct^{w_{i}-1}.$
This implies $|\delta_{r_{i}}^{\gamma}(t)|\leq C|\varepsilon_{j}|t^{w_{i}-1}$.
Now, the integral we have to study is
$\displaystyle\int_{B_{j}}\Delta_{r_{i}}^{\gamma}dt=\int_{B_{j}}\delta_{r_{i}}^{\gamma}\dot{\kappa}_{2}dt=\int_{B_{j}}\delta_{r_{i}}^{\gamma}\sin\varphi
dt+\int_{B_{j}}\delta_{r_{i}}^{\gamma}t\dot{\varphi}\cos\varphi dt.$
We integrate by parts the integral without $\dot{\varphi}$, getting
$\int_{B_{j}}\delta_{r_{i}}^{\gamma}\sin\varphi
dt=\Big{[}\sin\varphi(t)\int_{t_{h_{j}\eta_{j}}}^{t}\delta_{r_{i}}^{\gamma}ds\Big{]}_{t=t_{h_{j}\eta_{j}}}^{t=t_{h_{j}}}-\int_{B_{j}}\Big{\\{}\dot{\varphi}\cos\varphi\int_{t_{h_{j}\eta_{j}}}^{t}\delta_{r_{i}}^{\gamma}ds\Big{\\}}dt.$
Since the boundary term is 0, we obtain
$\int_{B_{j}}\delta_{r_{i}}^{\gamma}\dot{\kappa}_{2}dt=\int_{B_{j}}\Big{\\{}t\delta_{r_{i}}^{\gamma}-\int_{t_{h_{j}\eta_{j}}}^{t}\delta_{r_{i}}^{\gamma}ds\Big{\\}}\dot{\varphi}\cos\varphi
dt,$
and thus
$\displaystyle\Big{|}\int_{B_{j}}\delta_{r_{i}}^{\gamma}\dot{\kappa}_{2}dt\Big{|}$
$\displaystyle\leq\int_{B_{j}}\Big{\\{}t|\delta_{r_{i}}^{\gamma}|+\int_{t_{h_{j}\eta_{j}}}^{t}|\delta_{r_{i}}^{\gamma}|ds\Big{\\}}|\dot{\varphi}|dt\leq
C|\varepsilon_{j}|\int_{B_{j}}t^{w_{i}}|\dot{\varphi}|dt.$
∎
###### Remark 4.7.
We stress again that, when the coefficients $a_{i}$ depend on all the
variables $x_{1},\ldots,x_{n}$, the computations above become less
transparent. Indeed, the devices interact in a non-commutative way through the
varying coordinates $\gamma_{3},\ldots,\gamma_{n}$, and this interaction
modifies the coefficients of the parameters $\varepsilon_{j}$.
## 5\. Solution to the end-point equations
In this section we solve the system of equations
$E_{i}^{k,\bar{\mathscr{E}}}=0$, $i=3,\ldots,n$. The homogeneous polynomials
$p_{j}$ are of the form
$p_{j}(x_{1},x_{2})=x_{1}^{\alpha_{j}+1}x_{2}^{\beta_{j}}$, as in (3.8).
The quantities (3.13), (3.14) and (3.15) are, respectively,
$\begin{split}&\int_{F_{k}}\Delta_{i}^{\gamma}dt=I^{\alpha_{i}\beta_{i}}_{k}+\int_{F_{k}}r_{i}(\kappa(t))\dot{\kappa}_{2}(t)dt,\\\
&\int_{A_{j}}\Delta_{i}^{\gamma}dt=0,\\\
&\int_{B_{j}}\Delta_{i}^{\gamma}dt=-\Delta^{\alpha_{i}\beta_{i}}_{{h_{j}}\eta_{j}\varepsilon_{j}}+\int_{B_{j}}\Delta_{r_{i}}^{\gamma}dt,\end{split}$
(5.1)
where we used the shorthand notation
$I^{\alpha_{i}\beta_{i}}_{k}=I^{\alpha_{i}\beta_{i}}_{2\pi k,2\pi(k+1)}$. So
the end-point equations $E_{i}^{k,\bar{\mathscr{E}}}=0$ read
$f_{i}(\varepsilon)=b_{i},\quad i=3,\ldots,n,$ (5.2)
with
$f_{i}(\varepsilon)=\sum_{j=3}^{n}\Big{(}\Delta^{\alpha_{i}\beta_{i}}_{{h_{j}}\eta_{j}\varepsilon_{j}}-\int_{B_{j}}\Delta_{r_{i}}^{\gamma}dt\Big{)}\quad\textrm{and}\quad
b_{i}=\int_{F_{k}}\Delta_{i}^{\gamma}dt.$
We will regard $k$, ${h_{j}}$, and $\eta_{j}$ as parameters and we will solve
the system of equations (5.2) in the unknowns
$\varepsilon=(\varepsilon_{3},\ldots,\varepsilon_{n})$. The functions
$f_{i}:\mathbb{R}^{n-2}\to\mathbb{R}$ are analytic and the data $b_{i}$ are
estimated from above by (4.8):
$|b_{i}|\leq\int_{F_{k}}t^{w_{i}}|\dot{\varphi}|dt.$ (5.3)
###### Theorem 5.1.
There exist real parameters $\eta_{3},\ldots,\eta_{n}>0$ and integers
$h_{3}>\ldots>h_{n}$ such that for all $k\in\mathbb{N}$ large enough the
system of equations (5.2) has a unique solution
$\varepsilon=(\varepsilon_{3},\ldots,\varepsilon_{n})$ satisfying
$|\varepsilon|\ \leq C\sum_{i=3}^{n}|b_{i}|,$ (5.4)
for a constant $C>0$ independent of $k$.
###### Proof.
We will use the inverse function theorem. Let
$A=\big{(}a_{ij}\big{)}_{i,j=3,\ldots,n}\in M_{n-2}(\mathbb{R})$ be the
Jacobian matrix of $f=(f_{3},\ldots,f_{n})$ in the variables
$\varepsilon=(\varepsilon_{3},\ldots,\varepsilon_{n})$ computed at
$\varepsilon=0$. By (4.6) and Lemma 4.6 we have
$a_{ij}=\frac{\partial
f_{i}(0)}{\partial\varepsilon_{j}}=(\alpha_{i}+1)I_{h_{j}\eta_{j}}^{\alpha_{i}-1,\beta_{i}}+o(I_{h_{j}\eta_{j}}^{\alpha_{i}-1,\beta_{i}}).$
(5.5)
Here, we are using the fact that for $h_{j}\to\infty$ we have
$\int_{B_{j}}t^{w_{i}}|\dot{\varphi}|dt=o\Big{(}\int_{B_{j}}t^{w_{i}-1}|\dot{\varphi}|dt\Big{)}.$
The proof of Theorem 5.1 will be complete if we show that the matrix $A$ is
invertible.
We claim that there exist real parameters $\eta_{3},\ldots,\eta_{n}>0$ and
positive integers $h_{3}>\ldots>h_{n}$ such that
$\det(A)\neq 0.$ (5.6)
The proof is by induction on $n$. When $n=3$, the matrix $A$ boils down to the
real number $a_{33}$. From (5.5) and (4.5) we deduce that for any
$\eta_{3}\in(0,\pi/4)$ we have
$\begin{split}|a_{33}|&\geq\frac{1}{2}(\alpha_{3}+1)|I_{h_{3}\eta_{3}}^{\alpha_{3}-1,\beta_{3}}|\geq
c_{\alpha\beta}j^{\alpha_{3}-1,\beta_{3}}_{h_{3}\eta_{3}}>0.\end{split}$ (5.7)
We can choose $h_{3}\in\mathbb{N}$ as large as we wish.
Now we prove the inductive step. We assume that (5.6) holds when $A$ is an
$(n-3)\times(n-3)$ matrix, $n\geq 4$. We expand $\det(A)$ along the first
column using the Laplace formula:
$\begin{split}\det(A)=\sum_{i=3}^{n}(-1)^{i+1}a_{i3}P_{i},\end{split}$ (5.8)
where
$P_{i}=P_{i}(a_{43},\dots,a_{4n},\dots,\hat{a}_{i3},\dots,\hat{a}_{in},\dots,a_{n3},\dots,a_{nn})$
are the determinants of the minors. By the inductive assumption, there exist
$\eta_{4},\ldots,\eta_{n}\in(0,\pi/4)$ and integers $h_{4}>\dots>h_{n}$ such
that $|P_{i}|>0$. By (4.5), for any $\eta_{3}\in(0,\pi/4)$ we have the
estimates
$c_{0}j^{\alpha_{i}-1,\beta_{i}}_{h_{3}\eta_{3}}\leq|a_{i3}|\leq
C_{0}j^{\alpha_{i}-1,\beta_{i}}_{h_{3}\eta_{3}},$ (5.9)
for absolute constants $0<c_{0}<C_{0}$. The leading (i.e., largest) term
$|a_{i3}|$ can be identified in the following way. On the set
$\mathscr{A}=\\{(\alpha_{i},\beta_{i})\in\mathbb{N}\times\mathbb{N}:i=3,\ldots,n\\}$
we introduce the order $(\alpha,\beta)<(\alpha^{\prime},\beta^{\prime})$
defined by the conditions $\alpha+\beta<\alpha^{\prime}+\beta^{\prime}$, or
$\alpha+\beta=\alpha^{\prime}+\beta^{\prime}$ and $\beta<\beta^{\prime}$. We
denote by $(\alpha_{\iota},\beta_{\iota})\in\mathscr{A}$, for some
$\iota=3,\ldots,n$, the minimal element with respect to this order relation.
We claim that, given $\varepsilon_{0}>0$, for all $h_{3}>h_{4}$ large enough
and for some $0<\eta_{3}<\pi/4$ the following inequalities hold:
$|a_{i3}||P_{i}|\leq\varepsilon_{0}|a_{\iota
3}P_{\iota}|,\quad\textrm{for}\quad i\neq\iota.$ (5.10)
In the case when $i=3,\ldots,n$ is such that
$\alpha_{i}+\beta_{i}=\alpha_{\iota}+\beta_{\iota}$, then we have
$\beta_{i}>\beta_{\iota}$. By (5.9) and (4.4), inequality (5.10) is implied by
$\eta_{3}^{\beta_{i}-\beta_{\iota}}|P_{i}|\leq\varepsilon_{0}|P_{\iota}|$,
possibly for a smaller $\varepsilon_{0}$. So we fix $\eta_{3}\in(0,\pi/4)$
independently from $h_{3}$ such that
$0<\eta_{3}\leq\min\Big{\\{}\Big{(}\frac{\varepsilon_{0}|P_{\iota}|}{|P_{i}|}\Big{)}^{1/(\beta_{i}-\beta_{\iota})}:i\neq\iota\Big{\\}}.$
In the case when $i=3,\ldots,n$ is such that
$\alpha_{i}+\beta_{i}>\alpha_{\iota}+\beta_{\iota}$, inequality (5.10) is
implied by
$\int_{B_{3}}t^{\alpha_{i}+\beta_{i}}|\dot{\varphi}(t)|dt\leq\varepsilon_{0}\eta_{3}^{\beta_{\iota}-\beta_{i}}\frac{|P_{\iota}|}{|P_{i}|}\int_{B_{3}}t^{\alpha_{\iota}+\beta_{\iota}}|\dot{\varphi}(t)|dt.$
This holds for all $h_{3}\in\mathbb{N}$ large enough.
Now we can estimate from below the determinant of $A$ using (5.10). We have
$|\det(A)|\geq|a_{\iota
3}P_{\iota}|-\sum_{i\neq\iota}|a_{i3}||P_{i}|\geq\frac{1}{2}|a_{\iota
3}P_{\iota}|$
and the last inequality holds for all $h_{3}\in\mathbb{N}$ large enough, after
fixing $\eta_{3}>0$. This ends the proof of the theorem. ∎
## 6\. Nonminimality of the spiral
In this section we prove Theorem 1.1. Let $\gamma\in AC([0,1];M)$ be a
horizontal spiral of the form (2.12). We work in exponential coordinates of
the second type centered at $\gamma(0)$.
We fix on $\mathscr{D}$ the metric $g$ making orthonormal the vector fields
$X_{1}$ and $X_{2}$ spanning $\mathscr{D}$. This is without loss of
generality, because any other metric is equivalent to this one in a
neighborhood of the center of the spiral. With this choice, the length of
$\gamma$ is the standard length of its horizontal coordinates and for a spiral
as in (2.12) we have
$L(\gamma)=\int_{0}^{1}{|\dot{\kappa}(t)|dt}=\int_{0}^{1}{\sqrt{1+t^{2}\dot{\varphi}(t)^{2}}}dt.$
(6.1)
In particular, $\gamma$ is rectifiable precisely when $t\dot{\varphi}\in
L^{1}(0,1)$, and $\kappa$ is a Lipschitz curve in the plane precisely when
$t\dot{\varphi}\in L^{\infty}(0,1)$.
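Formula (6.1) follows from differentiating the horizontal coordinates $\kappa(t)=(t\cos\varphi(t),t\sin\varphi(t))$: the cross terms cancel and leave $|\dot{\kappa}|^{2}=1+t^{2}\dot{\varphi}^{2}$. An illustrative numerical check of this identity (not part of the argument):

```python
from math import cos, sin

# For kappa(t) = (t cos(phi(t)), t sin(phi(t))):
#   kappa_1' = cos(phi) - t sin(phi) phi',  kappa_2' = sin(phi) + t cos(phi) phi',
# and the cross terms cancel, leaving |kappa'|^2 = 1 + t^2 (phi')^2.
def speed_sq(t, p, dp):
    dk1 = cos(p) - t * sin(p) * dp
    dk2 = sin(p) + t * cos(p) * dp
    return dk1 * dk1 + dk2 * dk2

for t, p, dp in [(0.1, 0.3, 50.0), (0.5, 2.0, -7.0), (0.9, 4.0, 1.0)]:
    assert abs(speed_sq(t, p, dp) - (1.0 + t * t * dp * dp)) < 1e-9
```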
For $k\in\mathbb{N}$ and
$\bar{\mathscr{E}}=(\mathscr{E}_{3},\ldots,\mathscr{E}_{n})$, we denote by
$\mathrm{D}(\gamma;k,\bar{\mathscr{E}})$ the curve constructed in Section 3.
The devices $\mathscr{E}_{j}=(h_{j},\eta_{j},\varepsilon_{j})$ are chosen in
such a way that the parameters $h_{j},\eta_{j}$ are fixed as in Theorem 5.1
and $\varepsilon_{3},\ldots,\varepsilon_{n}$ are the unique solutions to the
system (5.2), for $k$ large enough. In this way the curves $\gamma$ and
$\mathrm{D}(\gamma;k,\bar{\mathscr{E}})$ have the same initial point and the
same end-point.
We claim that for $k\in\mathbb{N}$ large enough the length of
$\mathrm{D}(\gamma;k,\bar{\mathscr{E}})$ is less than the length of $\gamma$.
We denote by $\Delta L(k)=L(\gamma)-L(\mathrm{D}(\gamma;k,\bar{\mathscr{E}}))$
the gain of length, namely,
$\begin{split}\Delta
L(k)&=\int_{F_{k}}\sqrt{1+t^{2}\dot{\varphi}(t)^{2}}dt-\Big{(}t_{k}-t_{k+1}+2\sum_{j=3}^{n}|\varepsilon_{j}|\Big{)}\\\
&=\int_{F_{k}}\frac{t^{2}\dot{\varphi}(t)^{2}}{\sqrt{1+t^{2}\dot{\varphi}(t)^{2}}+1}dt-2\sum_{j=3}^{n}|\varepsilon_{j}|.\end{split}$
(6.2)
By (5.4), there exists a constant $C_{1}>0$ independent of $k$ such that the
solution $\varepsilon=(\varepsilon_{3},\ldots,\varepsilon_{n})$ to the end-
point equations (5.2) satisfies
$|\varepsilon|\leq C_{1}\sum_{i=3}^{n}|I_{k}^{\alpha_{i}\beta_{i}}|\leq
C_{2}\sum_{i=3}^{n}\int_{F_{k}}t^{w_{i}}|\dot{\varphi}(t)|dt\leq
C_{3}\int_{F_{k}}t^{2}|\dot{\varphi}(t)|dt.$ (6.3)
We used (4.5) and the fact that $w_{i}\geq 2$. The new constants $C_{2},C_{3}$
do not depend on $k$.
By (6.2) and (6.3), the inequality $\Delta L(k)>0$ is implied by
$\int_{F_{k}}\frac{t^{2}\dot{\varphi}(t)^{2}}{\sqrt{1+t^{2}\dot{\varphi}(t)^{2}}+1}dt>C_{4}\int_{F_{k}}t^{2}|\dot{\varphi}(t)|dt,$
(6.4)
where $C_{4}$ is a large constant independent of $k$. For any
$k\in\mathbb{N}$, we split the interval $F_{k}=F_{k}^{+}\cup F_{k}^{-}$ where
$F_{k}^{+}=\\{t\in F_{k}:|t\dot{\varphi}(t)|\geq 1\\}\quad\textrm{and}\quad
F_{k}^{-}=\\{t\in F_{k}:|t\dot{\varphi}(t)|<1\\}.$
On the set $F_{k}^{+}$ we have
$\begin{split}\int_{F_{k}^{+}}\frac{t^{2}\dot{\varphi}(t)^{2}}{\sqrt{1+t^{2}\dot{\varphi}(t)^{2}}+1}dt\geq\frac{1}{3}\int_{F_{k}^{+}}{t|\dot{\varphi}(t)|}dt\geq
C_{4}\int_{F_{k}^{+}}{t^{2}|\dot{\varphi}(t)|}dt,\end{split}$ (6.5)
where the last inequality holds for all $k\in\mathbb{N}$ large enough,
namely as soon as $3C_{4}t_{k}<1$. On the set $F_{k}^{-}$ we have
$\begin{split}\int_{F_{k}^{-}}\frac{t^{2}\dot{\varphi}(t)^{2}}{\sqrt{1+t^{2}\dot{\varphi}(t)^{2}}+1}dt\geq\frac{1}{3}\int_{F_{k}^{-}}{t^{2}|\dot{\varphi}(t)|^{2}}dt\geq
C_{4}\int_{F_{k}^{-}}{t^{2}|\dot{\varphi}(t)|}dt,\end{split}$ (6.6)
where the last inequality holds for all $k\in\mathbb{N}$ large enough, by our
assumption on the spiral
$\lim_{t\to 0^{+}}|\dot{\varphi}(t)|=\infty.$
Now (6.5) and (6.6) imply (6.4) and thus $\Delta L(k)>0$. This ends the proof
of Theorem 1.1.
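The final step rests on two elementary bounds for $g(x)=x^{2}/(\sqrt{1+x^{2}}+1)$, with $x=t|\dot{\varphi}(t)|$, used in (6.5) and (6.6), together with the algebraic identity behind the second line of (6.2). They can be checked numerically (an illustrative Python sketch):

```python
from math import sqrt

# Identity used in (6.2): sqrt(1 + a) - 1 = a / (sqrt(1 + a) + 1).
for a in [0.0, 0.1, 1.0, 7.5, 100.0]:
    assert abs((sqrt(1 + a) - 1) - a / (sqrt(1 + a) + 1)) < 1e-12

def g(x):
    # Integrand of the length gain, with x = t * |phi'(t)|.
    return x * x / (sqrt(1 + x * x) + 1)

# Bound behind (6.5): for x >= 1, g(x) >= x / 3.
assert all(g(x) >= x / 3 for x in [1.0, 1.5, 2.0, 10.0, 1e3])
# Bound behind (6.6): for 0 <= x < 1, g(x) >= x**2 / 3.
assert all(g(x) >= x * x / 3 for x in [0.0, 0.1, 0.5, 0.99])
```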
I describe a method for estimating agents' perceived returns to investments that relies on cross-sectional data containing binary choices and prices, where prices may be imperfectly known to agents.
This method identifies the scale of perceived returns by assuming agent knowledge of an identity that relates profits, revenues, and costs rather than by eliciting or assuming agent beliefs about structural parameters that are estimated by researchers.
With this assumption, modest adjustments to standard binary choice estimators enable consistent estimation of perceived returns when using price instruments that are uncorrelated with unobserved determinants of agents' price misperceptions as well as other unobserved determinants of their perceived returns.
I demonstrate the method, and the importance of using price variation that is known to agents, in a series of data simulations.
JEL Codes: C31, D84, D61
Keywords: Biased Beliefs, Returns to Investments, Revealed Preference, Subsidies, Taxes
§ INTRODUCTION
In this paper I describe a method for estimating distributions of perceived private returns to binary investments. These structural estimates describe distributions, conditional on observables, of agents' compensating variation associated with a binary choice. This method complements program evaluation methods that estimate effects of specific policy shocks on binary choices by allowing for predictions of counterfactual policies that differ from past policies in magnitude or targeted population.
For instance, <cit.> applies this method to estimate perceived returns to college, allowing for counterfactual predictions of targeted college attendance subsidies (and taxes) for diverse groups of individuals.
Identification is achieved by assuming common agent knowledge of an identity that relates prices to returns, while also using instruments that are de facto known to agents, in the sense that they shift perceived prices the same amount that they shift actual prices, in addition to satisfying the traditional exclusion restriction.
This paper presents a special case of a general method for identifying the scale of binary choice models by assuming agent beliefs about a variable observed by the researcher and agent beliefs about the mapping between that variable and the perceived return latent variable. Existing work that makes such assumptions includes <cit.>, who assume agent knowledge of their lifetime pecuniary return to college insofar as it is attributable to explanatory variables observed by the researcher, and <cit.>, who assume partial agent knowledge of trade revenues and agent knowledge of an estimated demand elasticity parameter. The present paper assumes partial agent knowledge of prices in the sense of <cit.> while assuming agent knowledge that prices causally decrease returns dollar for dollar in accordance with an identity that relates profits, revenues, and costs. The use of this identity imposes a theoretical restriction on a structural parameter (the coefficient on price in the binary choice latent variable equation) without requiring its estimation by researchers or agents. Avoiding the assumption that agents obtain the same estimate of a parameter as researchers improves robustness to the concerns articulated by <cit.> about the pitfalls of making incorrect assumptions on agents' knowledge of structural models.
The method in the present paper avoids assuming rational expectations on any model objects, instead assuming that the variation in prices associated with chosen instruments is known to agents regardless of whether agents are correct about prices on average. This makes it particularly attractive in applications where rational expectations assumptions are generally suspect, but the researcher can credibly argue that a particular price shock is nonetheless known to agents. Considering the example of college attendance, exogenous policy shocks may shift prices more than they shift perceived prices, as with Pell grants <cit.>; they may shift perceived prices more than they shift prices, as with the Michigan HAIL policy <cit.>; or they may shift prices and perceived prices by the same amount, as with the Social Security Student Benefit termination <cit.>. Of these sources of variation, only the last would be appropriate for estimating the model presented in this paper. In addition to college attendance, attractive targets for this method include healthcare, home purchases, R&D, and export decisions due to the substantial information frictions on prices in these settings.
In addition to considerations regarding the relative credibility of different assumptions on agent beliefs, applications also differ in data availability. The method described in this paper relies on cross-sectional data that contains binary choices on investments and prices associated with those investments. Methods that rely on rational expectations on ex post returns to investments require longitudinal data (without requiring data on prices), as in <cit.> and related research surveyed by <cit.>. Meanwhile, inferring beliefs by eliciting them directly from agents requires surveys that contain this information, as in <cit.>, <cit.>, and <cit.>. The method described in this paper is thus useful in settings where there is no clear winner in terms of assumption validity, but where longitudinal data and data on agent perceptions are unavailable.
I describe how to estimate perceived returns when prices are known to agents and exogenous, and how to overcome violations of these conditions using instrumental variables. I compare performance of these methods with valid and invalid instruments across data generating processes that differ in the assumptions on agent knowledge of prices. In the most realistic settings, methods that make no use of instruments, or which use instruments that are correlated with agent misperceptions, perform poorly compared to those that use instruments that are de facto known to agents.
The plan of the rest of this paper is as follows. Section <ref> introduces the empirical model. Section <ref> describes the econometric strategy and the assumptions required for identification. Section <ref> evaluates the robustness of various methods and instruments to various empirical challenges in a series of simulated data exercises. Section <ref> concludes.
§ MODEL
I assume that agents choose whether to make an investment based on their
beliefs about discounted net incomes and costs associated with choices, which I present as a two-sector generalized Roy model.
Agents choose either to make the investment, $S_i=1$, or not, $S_i=0$; this choice is observed by the researcher. I define $\widetilde{Y}_{1,i}$ as agent $i$'s perceived discounted present value of lifetime income associated with choosing the investment and $\widetilde{Y}_{0,i}$ as their perceived discounted present value of lifetime income associated with not doing so. I further define $\widetilde{C}_i$ as their perceived net present value cost of making the investment, which includes prices paid and nonpecuniary costs expressed in monetary values. Unlike in common applications of the Roy model, none of $\widetilde{Y}_{1,i}$, $\widetilde{Y}_{0,i}$, and $\widetilde{C}_i$ are observed by the researcher for any individual because they represent agent perceptions.
I express the perceived potential incomes and costs for individual $i$ with the following linear-in-parameters production functions,
\begin{equation}\label{potential_outcomes}
\begin{split}
\widetilde{Y}_{1,i} = &
X_i\beta_1+\tilde{\epsilon}_{1,i} \\
\widetilde{Y}_{0,i} = &
X_i\beta_0+\tilde{\epsilon}_{0,i} \\
\widetilde{C}_{i} = &
X_i\beta_C+\widetilde{Price}_i+\tilde{\epsilon}_{Ci}
\end{split}
\end{equation}
Here, $X_i$ are variables observed by the researcher that determine potential incomes and costs. The parameters $\{\beta\}$ capture the extent to which these variables drive beliefs about potential outcomes regardless of whether they are known to agents. $\widetilde{Price}_i$ is the agent's perceived price for the investment, which is known to agents but not to researchers. Importantly, it is assumed to only affect costs and has a coefficient that is normalized to unity.
Finally, $\tilde{\epsilon}_{1,i}$, $\tilde{\epsilon}_{0,i}$, and $\tilde{\epsilon}_{Ci}$ represent idiosyncratic perceived returns to investment that are known to agents but not to the researcher.
I assume that agents maximize expected wealth independently of how they consume it, as in the case of perfect credit markets. It follows that the perceived net return/profit, $\widetilde{\pi}_i$,
is sufficient to determine agents' decisions in accordance with the rule
\begin{equation}\label{Selection_general}
S_i =
\begin{cases}
1, & \mbox{if } \widetilde{\pi}_i \geq 0, \\
0, & \mbox{otherwise.}
\end{cases}
\end{equation}
I further assume that the definition of profit, $\pi_i \equiv Revenue_i-Cost_i$, is known to agents in the sense that it holds for their beliefs as well, such that
\begin{equation}\label{profit_identity}
\begin{split}
\widetilde{\pi}_i &= \widetilde{Revenue}_i-\widetilde{Cost}_i \\
&= \widetilde{Y}_{1,i}-(\widetilde{Y}_{0,i}+\widetilde{C}_i),
\end{split}
\end{equation}
where $\widetilde{Revenue}_i$ denotes the agent's perceived income and $\widetilde{Cost}_i$ denotes the agent's perceived opportunity cost, which includes $\widetilde{Y}_{0,i}$.
[I avoid denoting agents' beliefs with conditional expectations over realized values, as is common in the literature, to avoid the implication of rational expectations which follows from the law of iterated expectations.]
It follows that the agent's decision rule can be expressed in terms of potential outcomes as
\begin{equation}\label{Selection}
S_i =
\begin{cases}
1, & \mbox{if } \widetilde{Y}_{1,i}-\widetilde{Y}_{0,i}-\widetilde{C}_i \geq 0, \\
0, & \mbox{otherwise.}
\end{cases}
\end{equation}
Defining the net marginal effects $\beta \equiv \beta_1-\beta_0-\beta_C$ and the net idiosyncratic component of perceived outcomes $\tilde{\epsilon}_i \equiv \tilde{\epsilon}_{1,i}-\tilde{\epsilon}_{0,i}-\tilde{\epsilon}_{Ci}$, we can combine (<ref>) with (<ref>) to write the perceived return latent variable as
\begin{equation}\label{perceivedreturns}
\widetilde{\pi}_i = X_{i}\beta-\widetilde{Price}_i{}+\tilde{\epsilon}_{i}.
\end{equation}
Importantly, the assumptions given result in the latent variable being linear in perceived prices, with a marginal effect ($-1$) that is known to both agents and the researcher.
[The researcher constraining the price coefficient to the value used by agents is key to identification, not the researcher or agents being correct about its value.]
The expression of perceived returns as a latent variable in a binary choice problem with a single known marginal effect is the starting point of the estimation procedures described below.
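To fix ideas, the model can be simulated directly from its primitives. The following sketch generates choices from the Roy-model equations and checks that the decision rule reduces to the latent variable above; all parameter values and noise scales are hypothetical and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Observables, realized prices, and coefficients (all values illustrative).
X = rng.normal(size=(n, 2))
price = 2.0 + rng.normal(size=n)
beta1 = np.array([1.0, 0.5])
beta0 = np.array([0.2, 0.1])
betaC = np.array([0.3, -0.2])

eps1 = rng.normal(size=n)
eps0 = rng.normal(size=n)
epsC = rng.normal(size=n)
nu = rng.normal(scale=0.3, size=n)       # idiosyncratic price misperception
perceived_price = price + nu             # known to agents, not to the researcher

Y1 = X @ beta1 + eps1                    # perceived income if investing
Y0 = X @ beta0 + eps0                    # perceived income if not investing
C = X @ betaC + perceived_price + epsC   # perceived cost; price coefficient = 1

pi = Y1 - (Y0 + C)                       # profit identity
S = (pi >= 0).astype(int)                # decision rule
```

The sanity check below confirms that the simulated profit equals the reduced-form latent variable $X_i\beta-\widetilde{Price}_i+\tilde{\epsilon}_i$ with $\beta=\beta_1-\beta_0-\beta_C$.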
§ EMPIRICAL STRATEGY
It follows from the model that latent perceived returns are identified by $\beta$, $\widetilde{Price}_i$, and $\tilde{\epsilon}_i$, given the observed $X_i$. The lack of observation of $\tilde{\epsilon}_i$
is a common problem that will be addressed with commonly used binary choice estimation techniques. In this section I will describe adjustments to these estimators that leverage the assumptions described above
to permit identification of $\beta$ and the scale of the distribution of $\tilde{\epsilon}_i$ in the context of the researcher's failure to observe agents' perceived prices. To preface, these adjustments address challenges that arise due to perceived costs having a causal effect on perceived returns in the identity given in (<ref>).
The econometric methods described below establish conditions under which the assumed coefficient on perceived prices from (<ref>) exactly determines the marginal effect of realized prices on perceived returns in a binary choice model. Omitted variable bias and measurement error in prices as measures of perceived prices threaten the validity of this assumption. It follows that methods which address omitted variable bias and measurement error will validate the assumption on the marginal effect of realized prices on perceived returns. To clarify, consider the expression of agents' beliefs about prices used throughout this paper,
\begin{equation}\label{belief_restriction}
\widetilde{Price}_i = {Price}_i+X_i\alpha+\nu_i,
\end{equation}
where the realized price, $Price_i$, is observed by the researcher, $\alpha$ gives the effect of explanatory variables on price misperceptions, and $\nu_i$ is the idiosyncratic component of agent $i$'s misperception of prices. Here, realized prices are assumed to increase agents' beliefs about prices at a known marginal rate of unity insofar as they are known to agents.
This expression allows us to present an empirically tractable version of perceived returns,
\begin{equation}\label{perceivedreturns_empirical}
\begin{split}
\widetilde{\pi}_i &= X_{i}\beta-\widetilde{Price}_i+\tilde{\epsilon}_{i} \\
&= X_{i}\beta-Price_i-X_i\alpha-\nu_i+\tilde{\epsilon}_{i},
\end{split}
\end{equation}
by substituting prices observed by the researcher for agents' unobserved perceived prices and defining $\theta=\beta-\alpha$.
[The distinction between the extent to which each control contributes to misperceptions in prices, $\alpha$, and to other components of perceived returns, $\beta$, is presented to emphasize that the methods in this paper are robust to systematic bias in perceptions associated with explanatory variables, even though they are not separately identified.] This representation presents the unexplained price misperception as an omitted variable, which will produce problems if $Price_i$ is correlated with $\nu_i$. Natural examples of problematic correlations between prices and price misperceptions include agents systematically over-reacting or under-reacting to price predictors that are unobserved by the researcher. The extreme case of under-reaction is that in which an unobserved predictor of realized price variation is ignored by or unknown to agents altogether, which amounts to classical measurement error in realized prices as measures of perceived prices.
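The attenuation caused by classical measurement error in prices can be illustrated in a small simulation. In the sketch below (all magnitudes hypothetical), agents perceive either all or only half of the realized price variation; a probit of choices on realized prices, implemented by hand with Fisher scoring so the sketch is self-contained, recovers the scale $\sigma=-1/\hat{b}_{price}$ in the first case and overstates it in the second:

```python
import numpy as np
from math import erf, sqrt, pi

def Phi(x):
    # Standard normal CDF, vectorized via math.erf (no SciPy dependency).
    return 0.5 * (1.0 + np.vectorize(erf)(x / sqrt(2.0)))

def phi(x):
    # Standard normal density.
    return np.exp(-0.5 * x * x) / sqrt(2.0 * pi)

def probit(S, Z, iters=25):
    # Probit MLE by Fisher scoring; returns the coefficient vector.
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        idx = Z @ b
        F = np.clip(Phi(idx), 1e-9, 1.0 - 1e-9)
        f = phi(idx)
        w = f * f / (F * (1.0 - F))
        score = Z.T @ ((S - F) * f / (F * (1.0 - F)))
        b = b + np.linalg.solve(Z.T @ (w[:, None] * Z), score)
    return b

rng = np.random.default_rng(1)
n = 20_000
sigma = 2.0                              # true scale of the composite unobservable
price = 5.0 + rng.normal(size=n)         # realized prices (hypothetical units)

def choices(share_known):
    # Agents perceive only a fraction `share_known` of the price variation.
    perceived = 5.0 + share_known * (price - 5.0)
    return (4.0 - perceived + rng.normal(scale=sigma, size=n) >= 0).astype(float)

Z = np.column_stack([np.ones(n), price])
b_full = probit(choices(1.0), Z)[1]      # close to -1/sigma = -0.5
b_half = probit(choices(0.5), Z)[1]      # attenuated toward zero (about -0.25)
```

With these hypothetical magnitudes, the implied scale roughly doubles when half of the price variation is unperceived, even though realized prices are measured without error by the researcher.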
In what follows, I first consider a benchmark case in which unobserved components of price misperceptions are mean independent of realized prices and prices are uncorrelated with unobserved determinants of perceived returns. Though agents may be mistaken about prices, actual prices can stand in for perceived prices because any systematic price misperceptions are accounted for by observables. Second, I consider the case in which prices are correlated with unobserved price misperceptions and unobserved components of perceived returns. In this setting, instruments for observed prices that are uncorrelated with unobserved components of perceived returns will be needed to identify perceived returns. This case emphasizes the importance of choosing instruments that are de facto known to agents in addition to being exogenous for constructing credible counterfactuals relating to price changes.
§.§ Estimation with Known, Exogenous Prices
Here, I describe a benchmark procedure for estimating perceived returns with a simple adjustment to a common binary choice method. This procedure will provide consistent estimates of the perceived returns distribution under two assumptions that are likely to be violated in applications. First, this method assumes that prices and the unobserved component of perceived returns are uncorrelated. Second, it assumes that unobserved components of price misperceptions are mean independent of prices conditional on $X_i$, the simplest case of which is agents having perfect information on prices.
With the decision rule in (<ref>) and the expression of perceived returns in (<ref>), an assumption on the distribution of $-\nu_i{} +\tilde{\epsilon}_{i}$ is sufficient to consistently estimate perceived returns by maximum likelihood. I assume the composite unobserved component of perceived returns in (<ref>) is normally distributed as
\begin{equation}
-\nu_i{} +\tilde{\epsilon}_{i}|X_i,Price_i \sim \mathcal{N}(0,\sigma^2).
\end{equation}
The assumption of normality is chosen for convenience, and is not necessary for the estimation procedures in this paper. Defining $(\beta^*,\theta^*,{\gamma}^*)=(\frac{\beta}{\sigma},\frac{\theta}{\sigma}, \frac{1}{\sigma})$ for notational convenience, the probability of selection is given by
\begin{equation}
Pr(S_i=1|X_i,Price_i) = \Phi\Big(X_i\theta^*-{Price_i}{\gamma}^*\Big),
\end{equation}
where $\Phi(\cdot)$ denotes the standard normal CDF.
The parameters $(\theta^*,{\gamma}^*)$ are the values that maximize the log-likelihood
\begin{equation}\label{L-likelihood}
\begin{gathered}
\mathcal{L}(\theta^*,{\gamma}^* | X_i,Price_i) = \\
\sum_i S_i\log\Bigg[\Phi\Big(X_i\theta^*-{Price_i}{\gamma}^*\Big)\Bigg]
+ (1-S_i)\log\Bigg[1-\Phi\Big(X_i\theta^*-{Price_i}{\gamma}^*\Big)\Bigg].
\end{gathered}
\end{equation}
The estimates of perceived returns are then given by
\begin{equation}\label{probitdist}
\hat{\widetilde{\pi}}_i|X_i, Price_i \sim \mathcal{N}(X_{i}\hat{\theta}-Price_i{},\hat{\sigma}^2),
\end{equation}
where imposing the constraint $\gamma^*=\frac{1}{\sigma}$ (rather than the standard constraint $\sigma=1$) is the only difference from a standard probit. Importantly, the assumption that $\gamma^*=\frac{1}{\sigma}$ is only valid under the assumptions described in Section <ref> when realized prices are uncorrelated with unobserved components of price misperceptions and perceived returns conditional on $X_i$. As this generally will not be the case, this assumption is not an innocuous normalization.
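To make the scale-recovery step concrete, the following minimal sketch (my own illustration, not code from the paper) fits this probit by maximum likelihood on synthetic data; the variable names and the data-generating values are assumptions for the example. Because the coefficient on $Price_i$ is $-\gamma^*=-1/\sigma$, the latent scale is recovered as $\hat{\sigma}=1/\hat{\gamma}^*$ rather than normalized to one.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 20_000
price = rng.normal(5.0, 2.0, n)      # realized prices, here fully known to agents
theta, sigma = 6.0, 1.5              # X_i is just a constant in this sketch
e = rng.normal(0.0, sigma, n)        # composite error -nu_i + eps_i
S = (theta - price + e > 0).astype(float)  # selection from latent perceived returns

def negll(par):
    t_star, g_star = par             # theta/sigma and gamma* = 1/sigma
    p = np.clip(norm.cdf(t_star - g_star * price), 1e-12, 1 - 1e-12)
    return -np.sum(S * np.log(p) + (1 - S) * np.log(1 - p))

res = minimize(negll, x0=[0.5, 0.3], method="BFGS")
sigma_hat = 1.0 / res.x[1]           # scale recovered from the price coefficient
theta_hat = res.x[0] * sigma_hat
```

The only departure from a canned probit routine is the final two lines, which undo the scaling using the unit coefficient on prices instead of the $\sigma=1$ normalization.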
§.§ Estimation with Endogenous, Unknown Prices
Here, I describe a control function approach that addresses correlation between prices and unobserved components of perceived returns as well as arbitrary correlation between prices and misperceptions of prices. In Appendix <ref>, I discuss a method developed by <cit.> that performs well in this model when agents under-react to price variation, such as when they form rational expectations on prices based on known price predictors and only a subset of price predictors are known to them. The method in this section uses an established estimator, but adds the assumption that instruments are uncorrelated with unobserved components of price misperceptions in addition to the more commonly invoked assumption that instruments are uncorrelated with other unobserved idiosyncratic components of perceived returns. This additional assumption contributes to the credibility of predictions of responses to counterfactual price changes that are known to agents, without changing the asymptotic or finite sample properties of the estimator.
The control function approach uses the following system of equations, with reference to the expression of perceived returns in (<ref>),
\begin{equation}\label{Price_Z}
\begin{gathered}
\widetilde{\pi}_i = X_i\theta-{Price_i}{}-\nu_i{}+\tilde{\epsilon}_i \\
Price_i = Z_i\delta +u_i,
\end{gathered}
\end{equation}
where I have left unobserved price misperceptions and other unobserved components of perceived returns separate for clarity. Here, I introduce the instruments, $Z_i$, where $X_i \subset Z_i$, that are assumed to be conditionally uncorrelated with $-\nu_i+\tilde{\epsilon}_i$ and strongly correlated with observed prices. With some loss of generality, I will refer to instruments that satisfy this condition as “known and exogenous” for brevity.
[It is not necessary that agents know the instruments in $Z_i$, but only that they know the variation in prices that is attributable to $Z_i$. For example, agents need not know about a tax or subsidy shock to the price of investment, so long as they are aware of the change in price that arises from the policy shock. Furthermore, the language that instruments are known and exogenous suggests that $Cov(Z_i,\nu_i)=Cov(Z_i,\tilde{\epsilon}_i)=0$, while these are sufficient but not necessary for the less intuitive condition $Cov(Z_i,\tilde{\epsilon}_i-\nu_i)=0$, which accommodates the knife-edge case of the two sources of bias cancelling out.]
With valid instruments, the price residual $u_i$ contains all components of prices that are correlated with idiosyncratic components of price misperceptions or other unobserved components of perceived returns.
Given the above, I estimate the following equation,
\begin{equation}\label{Perceived_Returns_CF}
\begin{split}
\widetilde{\pi}_i
=& X_i\theta-{Price_i}{}-\nu_i{}+\tilde{\epsilon}_i \\
=& X_i\theta-{Price_i}{}+u_i\rho+\xi_i \\
=& X_i\theta-{Price_i}{}+\hat u_i\rho+\zeta_i.
\end{split}
\end{equation}
The first line follows directly from the representation of perceived returns in (<ref>).
The second line substitutes in the linear projection of the composite error $-\nu_i{}+\tilde{\epsilon}_i$ on the first stage error $u_i$, wherein $\rho = \mathbb{E}[u_i(-\nu_i{}+\tilde{\epsilon}_i)]/\mathbb{E}[u_i^2]$ and $\xi_i$ is the residual when controlling for $u_i$. The third line substitutes the estimated residuals from the first stage regression of $Price_i$ on $Z_i$ in for their unobserved true values, generating a new error, $\zeta_i = \xi_i+(u_i-\hat u_i)\rho$. This new error will converge asymptotically to $\xi_i$, but will differ in small samples due to sampling error in the estimation of the residual from the first stage, $\hat u_i$.
To estimate perceived returns, I assume that the new error in the perceived returns control function expression is normally distributed,
\begin{equation}\label{error_CF}
\zeta_i|X_i,Price_i,\hat u_i \sim \mathcal{N}(0,\sigma_\zeta^2),
\end{equation}
noting that the variance of $\zeta_i$ will differ from that of $\tilde{\epsilon}_i$ if $\rho\neq0$.
I estimate perceived returns using two-stage conditional maximum likelihood, following <cit.>, while correcting for the inclusion of estimated regressors, following <cit.>, though other estimators will also provide consistent estimates.
Defining $(\theta^*_\zeta,{\gamma}^*_\zeta,\rho^*_\zeta)=(\frac{\theta}{\sigma_\zeta},\frac{{1}}{\sigma_\zeta}, \frac{\rho}{\sigma_\zeta})$, the log-likelihood for the second stage of the control function approach is given by
[As a closely-related alternative, one could estimate an instrumental variables probit to obtain identical estimates of $\theta$. The control function method has the advantage of conditioning on the variation in prices that is not used in identifying the effect on perceived returns, which permits more precise counterfactual predictions for policies that are targeted on observables.]
\begin{equation}
\begin{gathered}
\mathcal{L}\Big(\theta^*_\zeta,{\gamma}^*_\zeta,\rho^*_\zeta | X_i, Price_i, \hat u_i \Big) = \\
\sum_i S_i\log\Bigg[\Phi\Big(X_i\theta^*_\zeta-{Price_i}{\gamma}^*_\zeta+\hat u_i \rho^*_\zeta\Big)\Bigg] \\
+ (1-S_i)\log\Bigg[1-\Phi\Big(X_i\theta^*_\zeta-{Price_i}{\gamma}^*_\zeta+\hat u_i\rho^*_\zeta\Big)\Bigg].
\end{gathered}
\end{equation}
Estimates of perceived returns are obtained by plugging the estimated parameters and the assumed coefficient on perceived prices into the latent variable equation,
\begin{equation}\label{CF_dist}
\hat{\widetilde{\pi}}_i|X_i,Price_i,\hat u_i \sim \mathcal{N}\Big(X_i\hat{\theta}-{Price_i}{}+\hat u_i\hat{\rho},\hat{\sigma}_{\zeta}^2\Big).
\end{equation}
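As an illustration of the two-stage procedure just described (a sketch under assumed instrument and error distributions, not the paper's code), the snippet below computes first-stage OLS residuals and then maximizes the control-function probit likelihood, recovering $\hat{\sigma}_\zeta=1/\hat{\gamma}^*_\zeta$, $\hat{\theta}$, and $\hat{\rho}$. The correction of standard errors for estimated regressors is omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 20_000
z = rng.normal(0.0, 3.0, n)                        # known, exogenous instrument
u, e = rng.multivariate_normal([0, 0], [[4.0, 2.0], [2.0, 4.0]], n).T
price = 5.0 + z + u                                # first stage with delta = 1
theta = 6.0
S = (theta - price + e > 0).astype(float)          # e = -nu + eps, corr. with u

# First stage: OLS of price on (1, z); keep residuals as the control
Zmat = np.column_stack([np.ones(n), z])
u_hat = price - Zmat @ np.linalg.lstsq(Zmat, price, rcond=None)[0]

def negll(par):
    t, g, r = par                                  # theta*, gamma*, rho* (scaled)
    p = np.clip(norm.cdf(t - g * price + r * u_hat), 1e-12, 1 - 1e-12)
    return -np.sum(S * np.log(p) + (1 - S) * np.log(1 - p))

res = minimize(negll, x0=[0.5, 0.3, 0.0], method="BFGS")
sigma_z_hat = 1.0 / res.x[1]                       # sigma_zeta = 1/gamma*
theta_hat = res.x[0] * sigma_z_hat
rho_hat = res.x[2] * sigma_z_hat
```

In this toy setup the population values are $\rho=Cov(u,e)/Var(u)=0.5$ and $\sigma_\zeta=\sqrt{Var(e)-\rho^2 Var(u)}=\sqrt{3}$, so the output can be checked directly.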
§ SIMULATIONS
In this section I apply the methods described above to simulated datasets to compare their performance. The important considerations involve agent beliefs about prices, price endogeneity, and instruments being known and/or exogenous to agents. Because the estimators used are standard, I stop short of performing full Monte Carlo simulations, instead comparing the performance of instruments according to whether they are known or exogenous to agents within individual simulations. For additional simulations which compare the methods of this paper to the method of <cit.>, see Appendix <ref>.
For the simulations, I use the following DGP,
\begin{equation}
\begin{gathered}
\widetilde{\pi}_i = X_i\beta-\widetilde{Price}_i{}+\tilde{\epsilon}_i \\
\widetilde{Price}_i = {Price}_i+\nu_i=Z_i\delta +u_i +\nu_i,
\end{gathered}
\end{equation}
where the nature of the covariance of $(Z_i,u_i,\nu_i,\tilde{\epsilon}_i)$ will determine the performance of various estimation approaches.
Both the probit and the control function method will obtain estimates of $\beta$; the probit will additionally estimate
\begin{equation}
\sigma = \sqrt{Var(-\nu_i+\tilde{\epsilon}_i)}
\end{equation}
and the control function method will estimate
\begin{equation}
\begin{gathered}
\rho = \mathbb{E}[u_i(-\nu_i{}+\tilde{\epsilon}_i)]/\mathbb{E}[u_i^2],\\
\sigma_\zeta = \sqrt{Var(\zeta_i)} = \sqrt{Var(-\nu_i{}+\tilde{\epsilon}_i-\hat u_i\rho)}.
\end{gathered}
\end{equation}
Each DGP comprises $N=10,000$ observations of agents whose decisions are governed by their perceived returns to investment.
§.§ Simulation with Known, Exogenous Prices
I begin with a well-behaved benchmark DGP that corresponds to the setting described in Section <ref>.
I generate data according to
\begin{equation}
\begin{bmatrix}
z_{1,i} \\
u_i \\
\nu_i \\
\tilde{\epsilon}_i
\end{bmatrix}
\sim
\mathcal{N}(\textbf{0},\Sigma);
\quad \Sigma =
\begin{bmatrix}
4 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 2 & 0 \\
0 & 0 & 0 & 2
\end{bmatrix}.
\end{equation}
I construct the instrument vector as $Z_i = [X_i \mbox{ } z_{1,i}]$ where $X_i$ includes only a constant, and $\alpha=0$ such that $\theta=\beta$. Finally, I set $\beta=1$ and $\delta=[0 \mbox{ } 1]'$. Although I set $Var(\nu_i) = 2$, I describe prices as known in this setting because the price misperception is uncorrelated with prices.[This setting is one in which agents are wrong about prices in ways that are unrelated to price determinants. This sort of price misperception is plausible in cases where prices change frequently according to a distribution that is de facto known to agents, such as frequently repeated investments.]
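This DGP can be reproduced in a few lines (a sketch of the data generation only, with an illustrative seed; this is not the Stata code behind the reported tables). With independent $\nu_i$ and $\tilde{\epsilon}_i$ of variance 2 each, the probit target scale is $\sigma=\sqrt{Var(-\nu_i+\tilde{\epsilon}_i)}=2$.

```python
import numpy as np

rng = np.random.default_rng(1234)
n = 10_000
# Independent draws matching the simulation-1 covariance matrix (diag 4, 1, 2, 2)
z   = rng.normal(0, 2.0, n)          # instrument, Var = 4
u   = rng.normal(0, 1.0, n)          # first-stage error, Var = 1
nu  = rng.normal(0, np.sqrt(2), n)   # price misperception, Var = 2
eps = rng.normal(0, np.sqrt(2), n)   # other perceived-returns shock, Var = 2

beta = 1.0
price = z + u                         # delta = [0 1]', X_i only a constant
perceived_price = price + nu          # mistaken but unbiased beliefs
pi = beta - perceived_price + eps     # latent perceived returns
S = (pi > 0).astype(float)            # observed investment decision

# The probit targets sigma = sd(-nu + eps), equal to 2 in this design
sigma_true = np.sqrt(nu.var() + eps.var())
```

Because the misperception $\nu_i$ is uncorrelated with $Price_i$, feeding `price` and `S` to the probit of the previous subsection recovers the perceived returns distribution despite agents being wrong about prices.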
Table <ref> shows perceived returns estimates for one simulation of this DGP using the methods from Section <ref> and Section <ref>. Figure <ref> shows the distributions implied by the estimates for each method. In this case, the lack of correlation between prices and unobserved components of perceived returns, including price misperceptions, means that both methods will provide consistent estimates of perceived returns.
Simulation 1, Perceived Returns Estimates
Notes: Standard errors in parentheses, corrected for the inclusion of estimated regressors following <cit.> in the case of the control function.
Parameters are in monetary units.
Estimates relate to expressions (<ref>) and (<ref>), respectively.
All data is generated in Stata using random seed 1234.
Simulation 1, Implied Perceived Returns Distributions
Notes: Estimated densities of perceived returns given by the probit method using expression (<ref>), and the control function method using expression (<ref>).
§.§ Simulation with Unknown, Endogenous Prices
In this simulation, I consider a DGP that corresponds to the setting described in Section <ref> in which agents systematically misperceive prices in ways that are not accounted for by observables, and prices are correlated with unobserved components of perceived returns. I also compare the performance of an instrument that is exogenous but unknown to one that is both known and exogenous.
I generate data according to
\begin{equation}
\begin{bmatrix}
z_{1,i} \\
z_{2,i} \\
u_i \\
\nu_i \\
\tilde{\epsilon}_i
\end{bmatrix}
\sim
\mathcal{N}(\textbf{0},\Sigma);
\quad \Sigma =
\begin{bmatrix}
9 & 0 & 0 & -4 & 0 \\
0 & 9 & 0 & 0 & 0 \\
0 & 0 & 27 & -5 & 9 \\
-4 & 0 & -5 & 9 & 0 \\
0 & 0 & 9 & 0 & 16
\end{bmatrix}.
\end{equation}
I construct the instrument vector as $Z_i = [X_i \mbox{ } z_{1,i} \mbox{ } z_{2,i}]$ where $X_i$ includes only a constant, and $\alpha=0$ such that $\theta=\beta$. Finally, I set $\beta=1$ and $\delta=[0 \mbox{ } 1 \mbox{ } 1]'$.
In this case, there is positive correlation between $u_i$ and $\tilde{\epsilon}_i$ such that individuals who face idiosyncratically high prices also have high perceived returns, as may occur with price discrimination. Additionally, there is negative correlation between $u_i$ and $\nu_i$ such that individuals systematically underestimate the extent to which their price deviates from the average, as may occur if agents form rational expectations on prices conditional on an incomplete set of price determinants. Finally, this DGP includes two potential instruments: $z_{1,i}$, which is exogenous but not fully known to agents, as in the case of a poorly publicized policy shock, and $z_{2,i}$, which is both exogenous and known to agents.
Because $z_{1,i}$ is correlated with $\nu_i$, it is not a valid instrument for the purposes of this paper. For the control function estimates of $\rho$ and $\sigma_{\zeta}$, I use $u_{1,i}$ in place of $u_i$, where $u_{1,i} = z_{1,i}\delta_1+u_i$. In applications with many valid instruments, including different combinations of instruments will result in different estimates $\hat u_i$, $\hat \rho$ and $\hat \sigma_\zeta$, while nonetheless all returning consistent estimates of perceived returns. For comparisons between instruments, the complete distribution of perceived returns (succinctly described by the figures) and the estimated coefficients on $X_i$ will be correct for all valid instruments.
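The distinction can be seen numerically (an illustrative check, not the paper's simulation code): drawing from the $\Sigma$ above, $z_{1,i}$ covaries with the composite error $-\nu_i+\tilde{\epsilon}_i$ through $Cov(z_{1,i},\nu_i)=-4$, while $z_{2,i}$ does not.

```python
import numpy as np

rng = np.random.default_rng(1234)
n = 100_000
Sigma = np.array([
    [ 9, 0,  0, -4,  0],
    [ 0, 9,  0,  0,  0],
    [ 0, 0, 27, -5,  9],
    [-4, 0, -5,  9,  0],
    [ 0, 0,  9,  0, 16],
], dtype=float)
z1, z2, u, nu, eps = rng.multivariate_normal(np.zeros(5), Sigma, n).T
price = z1 + z2 + u                # delta = [0 1 1]', X_i only a constant

# A "known and exogenous" instrument must be uncorrelated with the
# composite selection-equation error -nu_i + eps_i
m1 = np.cov(z1, -nu + eps)[0, 1]   # ~ +4: z1 fails (agents do not know it)
m2 = np.cov(z2, -nu + eps)[0, 1]   # ~  0: z2 is known and exogenous
```

Using $z_{1,i}$ as an instrument therefore extracts price variation that agents do not perceive, which is exactly the failure the control function cannot repair.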
Table <ref> shows the estimates for one simulation of this DGP using both methods, and also using each instrumental variable individually. Figure <ref> shows the distributions implied by the estimates for each method. Because $z_{1,i}$ is correlated with misperceptions, it is not a valid instrument, and results in an estimated perceived returns distribution that is no better than that obtained when using no instruments.[For estimating instrument-specific intent to treat effects of prices on investment, which would be sufficient for determining the performance of a particular policy in the context of its actual implementation, instruments such as $z_{1,i}$ are valid. They nonetheless fail to provide credible insight into counterfactual policy changes that are well-publicized.]
Simulation 2, Perceived Returns Estimates
Notes: Standard errors in parentheses, corrected for the inclusion of estimated regressors following <cit.> in the case of the control function.
Parameters are in monetary units.
Estimates relate to expressions (<ref>) and (<ref>), respectively.
All data is generated in Stata using random seed 1234.
Simulation 2, Implied Perceived Returns Distributions
Notes: Estimated densities of perceived returns given by the probit method using expression (<ref>), and the control function method using expression (<ref>). The unknown IV is $z_{1,i}$ and the valid IV is $z_{2,i}$, where each IV is excluded from the estimation model when the other is used.
§ CONCLUSIONS
In this paper I describe how to estimate perceived returns to investments by assuming agent knowledge of an intuitive identity and modestly altering common estimation techniques. The assumption on agent knowledge may be preferable to rational expectations or related assumptions in applications. I further describe the econometric challenges that arise from the assumption and how to overcome them with careful choice of instruments that are not only exogenous to agents, but are also de facto known to them.
This method is relevant in many empirical questions, especially those subject to substantial information frictions on prices, such as college attendance, firm R&D, automobile purchases, home purchases, and healthcare. While the estimation techniques used in this paper are restricted to a probit and a control function probit, the general insights are relevant to more sophisticated models that involve responses to prices. Implementation of the identity relating perceived returns and prices used in this paper in the context of more sophisticated models, such as <cit.> and its extensions, is left to future work.
In terms of policy implications, the methods described in this paper are relevant for constructing credible counterfactuals for well-publicized price changes, which are relevant for taxes and subsidies on investments including those associated with education and healthcare. The general insight is to avoid being too quick to assume that agents have rational expectations on model objects when alternative assumptions may be more defensible. Relatedly, the insights here also caution against extrapolating effects of counterfactual policies when the policy effects are estimated using a source of variation in prices that may not be known to agents. In practice, applied researchers should justify that sources of variation used for estimating treatment effects are known to agents just as they justify that they are exogenous to agents when making counterfactual predictions.
Andrews, D. W. K., and G. Soares (2010): “Inference for Parameters Defined by Moment Inequalities Using Generalized Moment Selection,” Econometrica, 78(1), 119–157.
Berry, S., J. Levinsohn, and A. Pakes (1995): “Automobile Prices in Market Equilibrium,” Econometrica, pp. 841–890.
Bleemer, Z., and B. Zafar (2018): “Intended College Attendance: Evidence from an Experiment on College Returns and Costs,” Journal of Public Economics, 157, 184–211.
Cunha, F., J. Heckman, and S. Navarro (2005): “Separating Uncertainty from Heterogeneity in Life Cycle Earnings,” Oxford Economic Papers, 57(2), 191–261.
Cunha, F., and J. J. Heckman (2007): “Identifying and estimating the distributions of ex post and ex ante returns to schooling,” Labour Economics, 14(6), 870–893.
Dickstein, M. J., and E. Morales (2018): “What Do Exporters Know?,” The Quarterly Journal of Economics, 133(4).
Dynarski, S., C. Libassi, K. Michelmore, and S. Owen (2018): “Closing the gap: The effect of a targeted, tuition-free promise on college choices of high-achieving, low-income students,” Discussion paper, National Bureau of Economic Research.
Dynarski, S. M. (2003): “Does Aid Matter? Measuring the Effect of Student Aid on College Attendance and Completion,” American Economic Review, 93(1), 279–288.
Hansen, W. L. (1983): “Impact of student financial aid on access,” Proceedings of the Academy of Political Science, 35(2), 84–96.
Harris, C. M. (2020): “Estimating the perceived returns to college,” Available at SSRN 3577816.
Jensen, R. (2010): “The (Perceived) Returns to Education and the Demand for Schooling,” The Quarterly Journal of Economics, 125(2).
Kane, T. J. (1995): “Rising public college tuition and college entry: How well do public subsidies promote access to college?,” Discussion paper, National Bureau of Economic Research.
Manski, C. F. (1993): “Adolescent econometricians: How do youth infer the returns to schooling?,” in Studies of Supply and Demand in Higher Education, pp. 43–60. University of Chicago Press.
Manski, C. F. (2004): “Measuring expectations,” Econometrica, 72(5), 1329–1376.
Murphy, K. M., and R. H. Topel (1985): “Least Squares with Estimated Regressors,” Journal of Business and Economic Statistics.
Rivers, D., and Q. H. Vuong (1988): “Limited Information Estimators and Exogeneity Tests for Simultaneous Probit Models,” Journal of Econometrics, 39(3), 347–366.
Roy, A. D. (1951): “Some Thoughts on the Distribution of Earnings,” Oxford Economic Papers, 3(2), 135–146.
Wiswall, M., and B. Zafar (2015): “How Do College Students Respond to Public Information about Earnings?,” Journal of Human Capital, 9(2), 117–169.
Yatchew, A., and Z. Griliches (1985): “Specification error in probit models,” The Review of Economics and Statistics.
# Least Squares Monte Carlo applied to Dynamic Monetary Utility Functions
Hampus Engsner111Corresponding author. Department of Mathematics, Stockholm
University, SE-10691 Stockholm, Sweden. e-mail<EMAIL_ADDRESS>
###### Abstract
In this paper we explore ways of numerically computing recursive dynamic
monetary risk measures and utility functions. Computationally, this problem
suffers from the curse of dimensionality and nested simulations are unfeasible
if there are more than two time steps. The approach considered in this paper
is to use a Least Squares Monte Carlo (LSM) algorithm to tackle this problem,
a method which has been primarily considered for valuing American derivatives,
or more general stopping time problems, as these also give rise to backward
recursions with corresponding challenges in terms of numerical computation. We
give some overarching consistency results for the LSM algorithm in a general
setting as well as explore numerically its performance for recursive Cost-of-
Capital valuation, a special case of a dynamic monetary utility function.
keywords: Monte Carlo algorithms, least-squares regression, multi-period
valuation, dynamic utility functions
## 1 Introduction
Dynamic monetary risk measures and utility functions, as described for
instance in [1] and [4], are time consistent if and only if they satisfy a
recursive relationship (see for instance [5], [19]). In the case of time-
consistent valuations of cash flows, often in an insurance setting, (e.g.
[12], [13], [17], [19], [20], [18], [11]), analogous recursions also appear.
Recursive relationships also occur as properties of solutions to optimal
stopping problems, of which valuation of American derivatives is a special
case. It is well known that numerical solutions to these kinds of recursions
suffer from “the curse of dimensionality”: as the underlying stochastic
process generating the flow of information grows high dimensional, direct
computations of solutions of these recursions prove unfeasible.
In order to make the objective of this paper clearer, consider a probability
space $(\Omega,\mathcal{F},\mathbb{P})$, a $d$-dimensional Markov chain
$(S_{t})_{t=0}^{T}$ in $L^{2}(\mathcal{F})$ and its natural filtration
$(\mathcal{F}_{t})_{t=0}^{T}$. We are interested in computing $V_{0}$ given as
the solution to the following recursion
$\displaystyle V_{t}=\varphi_{t}(f(S_{t+1})+V_{t+1}),\quad V_{T}=0,$ (1)
where, for each $t$, $\varphi_{t}:L^{2}(\mathcal{F}_{t+1})\to
L^{2}(\mathcal{F}_{t})$ is a given law-invariant mapping (see Section 2 and
definition 3). Recursions such as (1) arise when describing time-consistent
dynamic monetary risk measures/utility functions (see e.g. [4]).
Alternatively, we may be interested in computing $V_{0}$ given as the solution
to the following recursion
$\displaystyle V_{t}=f(S_{t})\lor\mathbb{E}[V_{t+1}\mid\mathcal{F}_{t}],\quad
V_{T}=f(S_{T}),$ (2)
where $f:\mathbb{R}^{d}\to\mathbb{R}$ is a given function. Recursions such as
(2) arise when solving discrete-time optimal stopping problems or valuing
American-style derivatives (see e.g. [14] and [16]). In this article we will
focus on the recursive expression (1). In either case, due to the Markovian
assumptions, we expect $V_{t}$ to be determined by some deterministic function
of the state $S_{t}$ at time $t$. The curse of dimensionality can now be
succinctly put as the statement that as the dimension $d$ grows, direct
computation of $V_{t}$ often becomes unfeasible. Additionally, brute-force
valuation via a nested Monte Carlo simulation, discussed in [3] and [2], is
only a feasible option when $T=2$, as the number of required simulations would
grow exponentially with $T$. One approach to tackle this problem is the Least
Squares Monte Carlo (LSM) algorithm, notably used in [16] to value American-
style derivatives, and consists of approximating $V_{t+1}$ in either (1) or
(2) as a linear combination basis functions of the state $S_{t+1}$ via least-
squares regression. While most often considered for optimal stopping problems
([16], [22], [7], [21], [14], [23], [24], [25]), it has also been used
recently in [18] for the purpose of actuarial valuation, with respect to a
recursive relationship in line with (1).
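As a minimal illustration of the LSM idea (my own sketch, not an algorithm from this paper), take $\varphi_t$ to be the conditional expectation, the simplest law-invariant mapping, a one-dimensional Gaussian random walk for $S$, and $f(s)=s^2$. Then (1) has the closed-form solution $V_0=\sum_{t=1}^{T}\mathbb{E}[S_t^2]=T(T+1)/2$, against which the regression-based backward recursion can be checked.

```python
import numpy as np

rng = np.random.default_rng(7)
n, T = 50_000, 5
inc = rng.normal(0.0, 1.0, (n, T))
S = np.hstack([np.zeros((n, 1)), np.cumsum(inc, axis=1)])  # S[:, t] = S_t, S_0 = 0

f = lambda s: s ** 2                   # per-period cash flow f(S_t)

# Backward recursion V_t = phi_t(f(S_{t+1}) + V_{t+1}) with phi_t the
# conditional expectation, approximated by quadratic least squares in S_t
V_next = np.zeros(n)                   # terminal condition V_T = 0
for t in range(T - 1, -1, -1):
    y = f(S[:, t + 1]) + V_next
    B = np.column_stack([np.ones(n), S[:, t], S[:, t] ** 2])  # basis functions
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    V_next = B @ coef                  # fitted approximation of V_t(S_t)

V0 = V_next.mean()                     # S_0 is constant, so V_0 is a number
# Closed form for this toy model: V_0 = T(T+1)/2 = 15
```

Here the quadratic basis happens to contain the true value functions, so the only error is sampling noise; for a general $\varphi_t$ such as (9) below, the conditional law itself must be approximated, which is where the analysis of the following sections comes in.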
The paper is organized as follows. In Section 2 we introduce the mathematical
definitions and notation that will allow us to describe the LSM algorithm in
our setting mathematically, as well as to formulate theoretical results.
Section 3 contains consistency results with respect to computing (1) both in a
general setting, only requiring an assumption of continuity in $L^{2}$ norm,
and for the special case of a Cost-of-Capital valuation, studied in [12] and
[13], under the assumption that capital requirements are given by the risk
measure Value-at-Risk, in line with Solvency II. The lack of convenient continuity properties of Value-at-Risk poses certain challenges that are
handled. Section 4 investigates the numerical performance of the LSM algorithm
on valuation problems for a set of models for liability cash flows. Here some
effort is also put into evaluating and validating the LSM algorithm’s
performance, as this is not trivial for the considered cases.
## 2 Mathematical setup
Consider a probability space $(\Omega,\mathcal{F},\mathbb{P})$. On this space
we consider two filtrations $(\mathcal{F}_{t})_{t=0}^{T}$, with
$\mathcal{F}_{0}=\{\emptyset,\Omega\}$, and $(\mathcal{H}_{t})_{t=0}^{T}$.
The latter filtration is an initial expansion of the former: take
$\mathcal{D}\subset\mathcal{F}$ and set
$\mathcal{H}_{t}:=\mathcal{F}_{t}\vee\mathcal{D}$. $\mathcal{D}$ will later
correspond to the $\sigma$-field generated by initially simulated data needed
for numerical approximations. Define $L^{2}(\mathcal{H}_{t})$ as the space of
$\mathcal{H}_{t}$ measurable random variables $Z$ with
$\mathbb{E}[Z^{2}]<\infty$. The subspace $L^{2}(\mathcal{F}_{t})\subset
L^{2}(\mathcal{H}_{t})$ is defined analogously. All equalities and
inequalities between random variables should be interpreted in the
$\mathbb{P}$-almost sure sense.
We assume that the probability space supports a Markov chain
$S=(S_{t})_{t=0}^{T}$ on $(\mathbb{R}^{d})^{T}$, where $S_{0}$ is constant,
and an iid sequence $D=(S^{(i)})_{i\in\mathbb{N}}$, independent of $S$, where,
for each $i$, $S^{(i)}=(S^{(i)}_{t})_{t=0}^{T}$ has independent components
with $\mathcal{L}(S^{(i)}_{t})=\mathcal{L}(S_{t})$ (equal in distribution).
$D$ will represent possible initially simulated data and we set
$\mathcal{D}=\sigma(D)$. The actual simulated data will be a finite sample and
we write $D_{n}:=(S^{(i)})_{i=1}^{n}$. For $Z\in L^{2}(\mathcal{F})$ we write
$\|Z\|_{2}:=\mathbb{E}[|Z|^{2}\mid\mathcal{H}_{0}]^{\frac{1}{2}}$. Notice that
$\|Z\|_{2}$ is a nonrandom number if $Z$ is independent of $D$.
The mappings $\rho_{t}$ and $\varphi_{t}$ appearing in Definitions 1 and 2
below can be defined analogously as mappings from $L^{p}(\mathcal{H}_{t+1})$
to $L^{p}(\mathcal{H}_{t})$ for $p\neq 2$. However, $p=2$ will be the relevant
choice for the applications treated subsequently.
###### Definition 1.
A dynamic monetary risk measure is a sequence $(\rho_{t})_{t=0}^{T-1}$ of
mappings $\rho_{t}:L^{2}(\mathcal{H}_{t+1})\to L^{2}(\mathcal{H}_{t})$
satisfying
$\displaystyle\textrm{if }\lambda\in L^{2}(\mathcal{H}_{t})\textrm{ and }Y\in
L^{2}(\mathcal{H}_{t+1}),\textrm{ then
}\rho_{t}(Y+\lambda)=\rho_{t}(Y)-\lambda,$ (3) $\displaystyle\textrm{if
}Y,\widetilde{Y}\in L^{2}(\mathcal{H}_{t+1})\textrm{ and
}Y\leq\widetilde{Y},\textrm{ then }\rho_{t}(Y)\geq\rho_{t}(\widetilde{Y}),$
(4) $\displaystyle\rho_{t}(0)=0.$ (5)
The elements $\rho_{t}$ of the dynamic monetary risk measure
$(\rho_{t})_{t=0}^{T-1}$ are called conditional monetary risk measures.
###### Definition 2.
A dynamic monetary utility function is a sequence $(\varphi_{t})_{t=0}^{T-1}$
of mappings $\varphi_{t}:L^{2}(\mathcal{H}_{t+1})\to L^{2}(\mathcal{H}_{t})$
satisfying
$\displaystyle\textrm{if }\lambda\in L^{2}(\mathcal{H}_{t})\textrm{ and }Y\in
L^{2}(\mathcal{H}_{t+1}),\textrm{ then
}\varphi_{t}(Y+\lambda)=\varphi_{t}(Y)+\lambda,$ (6) $\displaystyle\textrm{if
}Y,\widetilde{Y}\in L^{2}(\mathcal{H}_{t+1})\textrm{ and
}Y\leq\widetilde{Y},\textrm{ then
}\varphi_{t}(Y)\leq\varphi_{t}(\widetilde{Y}),$ (7)
$\displaystyle\varphi_{t}(0)=0.$ (8)
Note that if $(\rho_{t})_{t=0}^{T-1}$ is a dynamic monetary risk measure,
$(\rho_{t}(-\cdot))_{t=0}^{T-1}$ is a dynamic monetary utility function. In
what follows we will focus on dynamic monetary utility function of the form
$\displaystyle\varphi_{t}(Y)=\rho_{t}(-Y)-\frac{1}{1+\eta_{t}}\mathbb{E}\big{[}\big{(}\rho_{t}(-Y)-Y\big{)}^{+}\mid\mathcal{H}_{t}\big{]},$
(9)
where $(\rho_{t})_{t=0}^{T-1}$ is a dynamic monetary risk measure in the sense
of Definition 1 and $(\eta_{t})_{t=0}^{T-1}$ is a sequence of nonrandom
numbers in $(0,1)$. We may consider a more general version of this dynamic
monetary utility function by allowing $(\eta_{t})_{t=0}^{T-1}$ to be an
$(\mathcal{F}_{t})_{t=0}^{T-1}$ adapted sequence, however we choose the
simpler version here. That $(\varphi_{t})_{t=0}^{T-1}$ is indeed a dynamic
monetary utility function is shown in [12].
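For a one-period, unconditional version of (9), the valuation can be computed directly by Monte Carlo. The sketch below assumes $\rho$ is Value-at-Risk at level $\alpha$, so that the capital requirement $\rho(-Y)$ is estimated by the empirical $(1-\alpha)$-quantile of $Y$; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000
Y = rng.normal(0.0, 1.0, n)        # one-period discounted cash flow
eta, alpha = 0.06, 0.005           # cost-of-capital rate, VaR level

# Capital requirement R = VaR_alpha(-Y), i.e. the (1 - alpha)-quantile of Y
R = np.quantile(Y, 1 - alpha)
# phi(Y) = R - E[(R - Y)^+] / (1 + eta), the one-period analogue of (9)
phi = R - np.mean(np.maximum(R - Y, 0.0)) / (1 + eta)
```

For standard normal $Y$ one has $R=z_{0.995}\approx 2.576$ and, since $\mathbb{E}[(R-Y)^{+}]=R\,\Phi(R)+\phi(R)$ for the standard normal density $\phi$, the value is $\approx 0.144$, which the Monte Carlo estimate reproduces.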
We will later consider conditional monetary risk measures that are
conditionally law invariant in the sense of Definition 3 below. Conditional
law invariance will then be inherited by $\varphi_{t}$ in (9).
###### Definition 3.
A mapping $\varphi_{t}:L^{2}(\mathcal{H}_{t+1})\to L^{2}(\mathcal{H}_{t})$ is
called law invariant if $\varphi_{t}(X)=\varphi_{t}(Y)$ whenever
$\mathbb{P}(X\in\cdot\mid\mathcal{H}_{t})=\mathbb{P}(Y\in\cdot\mid\mathcal{H}_{t})$.
We now define the value process corresponding to a dynamic monetary utility
function $(\varphi_{t})_{t=0}^{T-1}$ in the sense of Definition 2 with respect
to the filtration $(\mathcal{F}_{t})_{t=0}^{T-1}$ instead of
$(\mathcal{H}_{t})_{t=0}^{T-1}$. The use of the smaller filtration is due to the fact that a value process of the sort appearing in Definition 4 is the theoretical
object that we aim to approximate well by the methods considered in this
paper.
###### Definition 4.
Let $L:=(L_{t})_{t=1}^{T}$ with $L_{t}\in L^{2}(\mathcal{F}_{t})$ for all $t$.
Let $(\varphi_{t})_{t=0}^{T-1}$ be a dynamic monetary utility function with
respect to $(\mathcal{F}_{t})_{t=0}^{T-1}$. Let
$\displaystyle V_{t}(L,\varphi):=\varphi_{t}(L_{t+1}+V_{t+1}(L,\varphi)),\quad
V_{T}(L,\varphi):=0.$ (10)
We refer to $V_{t}(L,\varphi)$ as the time $t$ $\varphi$-value of $L$.
Whenever it will cause no confusion, we will suppress the argument $\varphi$
in $V_{t}(L,\varphi)$ in order to make the expressions less notationally
heavy.
###### Remark 1.
Letting $\rho$ be a dynamic monetary risk measure and letting $\varphi$ be a
dynamic monetary utility function, $-\sum_{s=1}^{t}L_{s}-V_{t}(-L,-\rho)$ will
be a conditional monetary risk measure on the cash flow $L$ in the sense of
[4] and likewise $\sum_{s=1}^{t}L_{s}+V_{t}(L,\varphi)$ will be a conditional
monetary utility function in the sense of [4], with $L$ being interpreted as a
process of incremental cash flows. If $L$ is a liability cash flow, we may
write the risk measure of $L$ as $\sum_{s=1}^{t}L_{s}+V_{t}(L,\rho)$.
Importantly, any time-consistent dynamic monetary utility function/risk
measure may be written in this way (see e.g. [5]). Often convexity or
subadditivity is added to the list of desired properties in Definitions 1 and
2 (see e.g. [1], [4], [5]).
### 2.1 The approximation framework
For $t=1,\dots,T$, consider a sequence of functions
$\\{1,\Phi_{t,1},\Phi_{t,2},\dots\\}$, where for each $i\in\mathbb{N}$,
$\Phi_{t,i}:\mathbb{R}^{d}\to\mathbb{R}$ has the property
$\Phi_{t,i}(S_{t})\in L^{p}(\mathcal{F}_{t})$ and the set
$\\{1,\Phi_{t,1}(S_{t}),\Phi_{t,2}(S_{t}),\dots\\}$ makes up a.s. linearly
independent random variables. We define the approximation space
$\mathcal{B}_{t,N}$ and its corresponding $L^{2}$ projection operator
$P_{\mathcal{B}_{t,N}}:L^{2}(\mathcal{H}_{t})\to\mathcal{B}_{t,N}$ as follows:
for $N\in\mathbb{N}$ and $t\in\\{0,\dots,T\\}$,
$\displaystyle\mathcal{B}_{t,N}:=\mathrm{span}\\{1,\Phi_{t,1}(S_{t}),\dots,\Phi_{t,N}(S_{t})\\},$
(11) $\displaystyle
P_{\mathcal{B}_{t,N}}Z_{t}:=\arg\inf_{B\in\mathcal{B}_{t,N}}\|Z_{t}-B\|_{2}.$
(12)
Defining
$\mathbf{\Phi}_{t,N}:=(1,\Phi_{t,1}(S_{t}),\dots,\Phi_{t,N}(S_{t}))^{\mathrm{T}}$,
note that the unique minimizer in (12) is given by
$P_{\mathcal{B}_{t,N}}Z_{t}:=\beta_{t,N,Z_{t}}^{\mathrm{T}}\mathbf{\Phi}_{t,N}$,
with
$\displaystyle\beta_{t,N,Z_{t}}=\mathbb{E}\big{[}\mathbf{\Phi}_{t,N}\mathbf{\Phi}_{t,N}^{{\mathrm{T}}}\mid\mathcal{H}_{0}\big{]}^{-1}\mathbb{E}\big{[}\mathbf{\Phi}_{t,N}Z_{t}\mid\mathcal{H}_{0}\big{]},$
(13)
where the expected value of a vector or matrix is interpreted elementwise.
Note that if $Z_{t}$ in (13) is independent of the initial data $D$, then
$\beta_{t,N,Z_{t}}$ is a nonrandom vector. Indeed, we will only apply the
operator $P_{\mathcal{B}_{t,N}}$ to random variables $Z_{t}$ independent of
$D$.
For each $t$, consider a nonrandom function $z_{t}$ such that
$Z_{t}=z_{t}(D_{M},S_{t})\in L^{2}(\mathcal{H}_{t})$. For $M\in\mathbb{N}$,
let
$\displaystyle\mathbf{\Phi}^{(M)}_{t,N}:=\left(\begin{array}[]{cccc}1&\Phi_{t,1}(S^{(1)}_{t})&\dots&\Phi_{t,N}(S^{(1)}_{t})\\\
\vdots&\dots&\dots&\vdots\\\
1&\Phi_{t,1}(S^{(M)}_{t})&\dots&\Phi_{t,N}(S^{(M)}_{t})\end{array}\right),$
$\displaystyle
Z_{t}^{(M)}:=\left(\begin{array}[]{c}z_{t}(D_{M},S^{(1)}_{t})\\\ \vdots\\\
z_{t}(D_{M},S^{(M)}_{t})\end{array}\right)$
and define
$\displaystyle\widehat{\beta}^{(M)}_{t,N,Z_{t}}$
$\displaystyle:=\Big{(}\big{(}\mathbf{\Phi}^{(M)}_{t,N}\big{)}^{\mathrm{T}}\mathbf{\Phi}^{(M)}_{t,N}\Big{)}^{-1}\big{(}\mathbf{\Phi}^{(M)}_{t,N}\big{)}^{\mathrm{T}}Z_{t}^{(M)},$
(14) $\displaystyle P_{\mathcal{B}_{t,N}}^{(M)}Z_{t}$
$\displaystyle:=(\widehat{\beta}^{(M)}_{t,N,Z_{t}})^{\mathrm{T}}\mathbf{\Phi}_{t,N}.$
(15)
Notice that $\widehat{\beta}^{(M)}_{t,N,Z_{t}}$ is independent of $S$ and is
the standard OLS estimator of $\beta_{t,N,Z_{t}}$ in (13). Notice also that
$\mathbf{\Phi}_{t,N}$ is independent of $D$. With the above definitions we can
define the Least Squares Monte Carlo (LSM) algorithm for approximating the
value $V_{0}(L,\varphi)$ given by Definition 4.
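Since (14) is an ordinary least squares estimator, the coefficient computation can be sketched directly in the one-dimensional case. The monomial basis and all parameter values below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# M simulated states S_t^(1..M) (here 1-dimensional) and responses Z_t.
M, N = 10_000, 3
s = rng.normal(size=M)
z = 1.0 + 2.0 * s + 0.5 * s**2 + rng.normal(scale=0.1, size=M)

# Design matrix Phi^(M)_{t,N}: a constant plus N illustrative basis
# functions Phi_{t,i}(s) = s^i (monomials are an assumption).
phi = np.column_stack([np.ones(M)] + [s**i for i in range(1, N + 1)])

# beta-hat^(M)_{t,N,Z}: the standard OLS estimator in (14).
beta_hat, *_ = np.linalg.lstsq(phi, z, rcond=None)

# The fitted projection P^(M)_{B_{t,N}} Z_t evaluated at a fresh state.
def projection(s_new, beta=beta_hat):
    basis = np.array([1.0] + [s_new**i for i in range(1, N + 1)])
    return float(beta @ basis)

assert np.allclose(beta_hat[:3], [1.0, 2.0, 0.5], atol=0.05)
```

Consistent with the remark after (13), when the response is independent of the initial data $D$ the fitted coefficient vector estimates a nonrandom target.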
Let $(\varphi_{t})_{t=0}^{T-1}$ be a sequence of law-invariant mappings
$\varphi_{t}:L^{2}(\mathcal{H}_{t+1})\to L^{2}(\mathcal{H}_{t})$. Consider a
stochastic process $L:=(L_{t})_{t=1}^{T}$, where $L_{t}=g_{t}(S_{t})\in
L^{2}(\mathcal{F}_{t})$ for all $t$ for some nonrandom functions
$g_{t}:\mathbb{R}^{d}\to\mathbb{R}$. The goal is to estimate the values
$(V_{t}(L))_{t=0}^{T}$ given by Definition 4. Note that the sought values
$V_{t}(L)$ are independent of $D$ and thus, by the law-invariance property and
the Markov property, $V_{t}(L)$ is a function of $S_{t}$ for each $t$. Now we
may describe the LSM algorithm with respect to $N$ basis functions and
simulation sample size $M$. The LSM algorithm corresponds to the following
recursion:
$\displaystyle\widehat{V}^{(M)}_{N,t}(L):=P^{(M)}_{\mathcal{B}_{t,N}}\varphi_{t}\big{(}L_{t+1}+\widehat{V}^{(M)}_{N,t+1}(L)\big{)},\quad\widehat{V}^{(M)}_{N,T}(L):=0.$
(16)
Notice that $\widehat{V}^{(M)}_{N,t}$ is a function of the random variables
$S_{t}$ and $S_{u}^{(i)}$ for $1\leq i\leq M$ and $t+1\leq u\leq T$. In
particular, $\widehat{V}^{(M)}_{N,t}\in L^{2}(\mathcal{H}_{t})$. In the
section below, we will investigate when, and in what manner,
$\widehat{V}^{(M)}_{N,t}\in L^{2}(\mathcal{H}_{t})$ may converge to $V_{t}\in
L^{2}(\mathcal{F}_{t})\subset L^{2}(\mathcal{H}_{t})$. For this purpose, we
make the additional useful definition:
$\displaystyle\widehat{V}_{N,t}(L):=P_{\mathcal{B}_{t,N}}\varphi_{t}\big{(}L_{t+1}+\widehat{V}_{N,t+1}(L)\big{)},\quad\widehat{V}_{N,T}(L):=0.$
(17)
$\widehat{V}_{N,t}(L)$ is to be interpreted as an idealized LSM estimate,
where we make a least-squares optimal estimate in each iteration. Note that
this quantity is independent of $D$.
## 3 Consistency results
In this section we prove what are essentially two consistency results
for the LSM estimator $\widehat{V}^{(M)}_{N,t}(L)$, along with
conditions for these to hold. These consistency results are analogous to
Theorems 3.1 and 3.2 in [7]. The first and simplest result, Lemma 1, is that
if we have a flexible enough class of basis functions, $\widehat{V}_{N,t}(L)$
will asymptotically approach the true value $V_{t}(L)$. The second consistency
result, Theorem 1, is that when $N$ is kept fixed, then
$\widehat{V}^{(M)}_{N,t}(L)$ will approach the least-squares optimal
$\widehat{V}_{N,t}(L)$ for each $t$ as $M$ grows to infinity. Hence, we show
that the LSM estimator for a fixed number of basis functions is consistent in
the sense that the simulation-based projection operator
$P^{(M)}_{\mathcal{B}_{t,N}}$ will approach $P_{\mathcal{B}_{t,N}}$ even in
the presence of errors in a multiperiod setting. Lemma 7 and Theorem 3
furthermore extends these results to the case of a Cost-of-Capital valuation,
studied in [12] and [13], which here is dependent on the non-continuous risk
measure Value-at-Risk. Note from Section 2 that these results presume
simulated data not to be path dependent, in contrast to the results in [25].
We should note that these results do not give a rate of convergence, which is
done in the optimal stopping setting in for instance [14] and [23].
In particular, these papers provide a joint convergence rate in which $M$ and $N$
simultaneously go to infinity, something which is not done here. There are
three main reasons for this. First of all, in this paper the purpose is to
investigate LSM methods given by standard OLS regression, i.e. we do not want
to involve a truncation operator as we believe this would not likely be
implemented in practice. The use of truncation operators is necessary for the
results in [14] and [23], although one can handle the case of unbounded cash
flows by letting the bound based on the truncation operator suitably go to
infinity along with $N$ and $M$. Secondly, we believe that the bounds involved
in the rates of convergence would be quite large in our case if we repeatedly
applied the procedure in [14] or [23] (see Remark 3). Thirdly, we want to
consider mappings which are $L^{2}$-continuous (Definition 5) but not
necessarily Lipschitz. In this case it is not clear how convergence can be
established other than at some unspecified rate.
### 3.1 General convergence results
We first define a useful mode of continuity that we will require to show our
first results on the convergence of the LSM algorithm.
###### Definition 5.
The mapping $\varphi_{t}:L^{2}(\mathcal{H}_{t+1})\to L^{2}(\mathcal{H}_{t})$
is said to be $L^{2}$-continuous if
$\|X-X_{n}\|_{2}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}0\text{
implies
}\|\varphi_{t}(X)-\varphi_{t}(X_{n})\|_{2}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}0$.
Notice that if $(X_{n})_{n=1}^{\infty}$ and $X$ are independent of $D$, the
convergence in probability may be replaced by convergence of real numbers.
We are now ready to formulate our first result on the convergence of the LSM
algorithm. The first result essentially says that if we make the best possible
estimation in each recursion step, using $N$ basis functions, then, for each
$t$, the estimator of $V_{t}$ will converge in $L^{2}$ to $V_{t}$ as
$N\to\infty$. This result is not affected by the initial data $D$, as it does
not require any simulation-based approximation.
###### Lemma 1.
For $t=0,\dots,T-1$, let the mappings $\varphi_{t}:L^{2}(\mathcal{H}_{t+1})\to
L^{2}(\mathcal{H}_{t})$ be $L^{2}$-continuous and law invariant. For
$t=1,\dots,T$, let $\bigcup_{n\in\mathbb{N}}\mathcal{B}_{t,n}$ be dense in the
set $\\{h(S_{t})\mid h:\mathbb{R}^{d}\to\mathbb{R},h(S_{t})\in
L^{2}(\mathcal{F}_{t})\\}$. Then, for $t=0,\dots,T-1$,
$\displaystyle\|\widehat{V}_{N,t}(L)-V_{t}(L)\|_{2}\in\mathbb{R}\text{ and
}\lim_{N\to\infty}\|\widehat{V}_{N,t}(L)-V_{t}(L)\|_{2}=0.$
The second result uses the independence assumptions on $D$ to prove a somewhat
technical result on when $P_{\mathcal{B}_{t,N}}^{(M)}$ given by (15)
asymptotically approaches the projection $P_{\mathcal{B}_{t,N}}$ given by
(12).
###### Lemma 2.
Let $Z_{t}=z_{t}(S_{t})\in L^{2}(\mathcal{F}_{t})$. For each $M\in\mathbb{N}$,
let $Z_{M,t}=z_{M,t}(D_{M},S_{t})\in L^{2}(\mathcal{H}_{t})$, where
$z_{M,t}(D_{M},\cdot)$ only depends on $D_{M}$ through $\\{S_{u}^{(i)}:1\leq
i\leq M,u\geq t+1\\}$, i.e.
$\displaystyle\mathcal{L}\big{(}z_{M,t}(D_{M},S_{t}^{(i)})\big{)}=\mathcal{L}\big{(}z_{M,t}(D_{M},S_{t})\big{)}.$
(18)
Then,
$\|Z_{t}-Z_{M,t}\|_{2}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}0$ as
$M\to\infty$ implies
$\displaystyle\|P_{\mathcal{B}_{t,N}}Z_{t}-P^{(M)}_{\mathcal{B}_{t,N}}Z_{M,t}\|_{2}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}0\quad\text{and}\quad\widehat{\beta}^{(M)}_{t,N,Z_{M,t}}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}\beta_{t,N,Z_{t}}\quad\text{
as }M\to\infty.$
###### Remark 2.
Notice that $\widehat{V}_{N,t}^{(M)}(L)$ satisfies (18) due to the backwards
recursive structure of the LSM algorithm (16).
Lemma 2 essentially provides the induction step of the argument used to prove
the following result:
###### Theorem 1.
For $t=0,\dots,T-1$, let the mappings $\varphi_{t}:L^{2}(\mathcal{H}_{t+1})\to
L^{2}(\mathcal{H}_{t})$ be $L^{2}$-continuous and law invariant, let
$\widehat{V}_{N,t}^{(M)}(L)$ be given by (16), and let $\widehat{V}_{N,t}(L)$
be given by (17). Then, for $t=0,\dots,T-1$,
$\displaystyle\|\widehat{V}_{N,t}(L)-\widehat{V}_{N,t}^{(M)}(L)\|_{2}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}0\text{
as }M\to\infty.$
To summarize, Lemma 1 says that we can theoretically/asymptotically achieve
arbitrarily accurate approximations, even when applying the approximation
recursively, and Theorem 1 says that we may approach this theoretical best
approximation in practice, with enough simulated non-path-dependent data.
###### Lemma 3.
If the mapping $\varphi_{t}:L^{2}(\mathcal{H}_{t+1})\to
L^{2}(\mathcal{H}_{t})$ is Lipschitz continuous in the sense that there exists
a constant $K>0$ such that
$\displaystyle|\varphi_{t}(X)-\varphi_{t}(Y)|\leq
K\mathbb{E}[|X-Y|\mid\mathcal{H}_{t}]\quad\text{for all }X,Y\in
L^{2}(\mathcal{H}_{t+1}),$ (19)
then $\varphi_{t}$ is $L^{2}$-continuous in the sense of Definition 5.
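For completeness, the argument behind Lemma 3 is a one-line application of conditional Jensen's inequality (here we read $\|\cdot\|_{2}$ as the unconditional $L^{2}$-norm, an assumption on how the norms in Definition 5 are evaluated):

```latex
\|\varphi_t(X)-\varphi_t(X_n)\|_2
  \;\le\; K\,\big\|\,\mathbb{E}\big[|X-X_n|\mid\mathcal{H}_t\big]\big\|_2
  \;\le\; K\,\|X-X_n\|_2,
```

where the second inequality is conditional Jensen, $\mathbb{E}\big[\mathbb{E}[Z\mid\mathcal{H}_{t}]^{2}\big]\leq\mathbb{E}[Z^{2}]$; hence $\|X-X_{n}\|_{2}\to 0$ forces $\|\varphi_{t}(X)-\varphi_{t}(X_{n})\|_{2}\to 0$.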
###### Lemma 4.
If the conditional monetary risk measure $\rho_{t}:L^{2}(\mathcal{H}_{t+1})\to
L^{2}(\mathcal{H}_{t})$ is Lipschitz continuous in the sense of (19) with
Lipschitz constant $K$, then so is the mapping $\varphi_{t}$ given by (9) with
Lipschitz constant $2K$.
The large class of (conditional) spectral risk measures is in fact Lipschitz
continuous. These conditional monetary risk measures can be expressed as
$\displaystyle\rho_{t,m}(Y)=-\int_{0}^{1}F^{-1}_{t,Y}(u)m(u)\mathrm{d}u,$ (20)
where $m$ is a probability density function that is decreasing, bounded and
right continuous, and $F^{-1}_{t,Y}(u)$ is the conditional quantile function
$\displaystyle F^{-1}_{t,Y}(u):=\operatorname*{ess\,inf}\\{y\in
L^{0}(\mathcal{H}_{t}):\mathbb{P}(Y\leq y\mid\mathcal{H}_{t})\geq u\\}.$
It is well known that spectral risk measures are coherent and include
expected shortfall as a special case.
###### Lemma 5.
If $m$ is a probability density function that is decreasing, bounded and right
continuous, then $(\rho_{t,m})_{t=0}^{T-1}$ is a dynamic monetary risk measure
in the sense of Definition 1. Moreover, each $\rho_{t,m}$ is law invariant in
the sense of Definition 3 and also Lipschitz continuous with constant $m(0)$.
###### Remark 3.
Assume $\varphi_{t}$ is Lipschitz continuous. Then
$\displaystyle\|\widehat{V}_{t}(L)-V_{t}(L)\|_{2}$
$\displaystyle\quad\leq\|\widehat{V}_{t}(L)-\varphi_{t}(L_{t+1}+\widehat{V}_{t+1}(L))\|_{2}$
$\displaystyle\quad\quad+\|\varphi_{t}(L_{t+1}+V_{t+1}(L))-\varphi_{t}(L_{t+1}+\widehat{V}_{t+1}(L))\|_{2}$
$\displaystyle\quad\leq\|\widehat{V}_{t}(L)-\varphi_{t}(L_{t+1}+\widehat{V}_{t+1}(L))\|_{2}+K\|V_{t+1}(L)-\widehat{V}_{t+1}(L)\|_{2}.$
Repeating this argument gives
$\displaystyle\|\widehat{V}_{t}(L)-V_{t}(L)\|_{2}\leq\sum_{s=t}^{T}K^{s-t}\|\widehat{V}_{s}(L)-\varphi_{s}(L_{s+1}+\widehat{V}_{s+1}(L))\|_{2}.$
This bound is analogous to that in [24] (Lemma 2.3; see also Remark 3.4 for
how this ties in with the main result), with the exception that the constant
$K^{s-t}$ appears instead of $2$. As $K$ may be quite large, this is one of the
reasons for not seeking to determine the exact rate of convergence, as is done
in for instance [25] and [14]. This observation also discourages judging the
accuracy of the LSM algorithm purely by estimating the out-of-sample one-step
estimation errors of the form
$\|\widehat{V}_{s}(L)-\varphi_{s}(L_{s+1}+\widehat{V}_{s+1}(L))\|_{2}$, as
these need to be quite small in order to obtain a satisfactory error bound.
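The amplification by $K^{s-t}$ in the bound of Remark 3 can be made tangible with a small numerical illustration (the error level, constant and horizon below are arbitrary illustrative values):

```python
def global_bound(eps, K, steps):
    # sum_{s=t}^{T} K^{s-t} * eps, with a constant one-step error eps
    # and horizon T - t = steps, as in the bound of Remark 3.
    return sum(eps * K**s for s in range(steps + 1))

# With one-step errors of 0.01 over a 10-step horizon:
assert abs(global_bound(0.01, 1.0, 10) - 0.11) < 1e-9  # K = 1: errors just add up
assert global_bound(0.01, 3.0, 10) > 885.0             # K = 3: the bound explodes
```

This is why small out-of-sample one-step errors alone do not certify a small global error when the Lipschitz constant is large.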
### 3.2 Convergence results using Value-at-Risk
In this section, we will focus on mappings
$\varphi_{t,\alpha}:L^{2}(\mathcal{H}_{t+1})\to L^{2}(\mathcal{H}_{t})$ given
by
$\displaystyle\varphi_{t,\alpha}(Y):=\operatorname{VaR}_{t,\alpha}(-Y)-\frac{1}{1+\eta_{t}}\mathbb{E}[(\operatorname{VaR}_{t,\alpha}(-Y)-Y)^{+}\mid\mathcal{H}_{t}]$
(21)
for some $\alpha\in(0,1)$ and nonnegative constants $(\eta_{t})_{t=0}^{T-1}$,
and where
$\displaystyle\operatorname{VaR}_{t,\alpha}(-Y)$
$\displaystyle:=F^{-1}_{t,Y}(1-\alpha)$
$\displaystyle:=\operatorname*{ess\,inf}\\{y\in
L^{0}(\mathcal{H}_{t}):\mathbb{P}(Y\leq y\mid\mathcal{H}_{t})\geq
1-\alpha\\}$
is the conditional version of Value-at-Risk. Note that $\varphi_{t,\alpha}$ is
a special case of mappings $\varphi$ in (9).
$(\operatorname{VaR}_{t,\alpha})_{t=0}^{T-1}$ is a dynamic monetary risk
measure in the sense of Definition 1, and $\operatorname{VaR}_{t,\alpha}$ is
law invariant in the sense of Definition 3. Since
$\operatorname{VaR}_{t,\alpha}$ is in general not Lipschitz continuous,
$\varphi_{t,\alpha}$ cannot be guaranteed to be so, without further regularity
conditions. The aim of this section is to find results analogous to Lemma 1
and Theorem 1.
We will use the following Lemma and especially its corollary in lieu of
$L^{2}$-continuity for Value-at-Risk:
###### Lemma 6.
For any $X,Z\in\mathcal{H}_{t+1}$ and any $\delta\in(0,1-\alpha)$,
$\displaystyle\operatorname{VaR}_{t,\alpha}(-(X+Z))\leq\operatorname{VaR}_{t,\alpha+\delta}(-X)+\operatorname{VaR}_{t,1-\delta}(-Z),$
(22)
$\displaystyle\operatorname{VaR}_{t,\alpha}(-(X+Z))\geq\operatorname{VaR}_{t,\alpha-\delta}(-X)-\operatorname{VaR}_{t,1-\delta}(-Z).$
(23)
We get an interesting corollary from this lemma:
###### Corollary 1.
Let $\alpha\in(0,1)$ and let $\delta\in(0,1-\alpha)$ with $\delta<1/2$. Then,
for any $X,Y\in L^{1}(\mathcal{H}_{t+1})$,
$\displaystyle\inf_{|\epsilon|<\delta}|\operatorname{VaR}_{t,\alpha+\epsilon}(X)-\operatorname{VaR}_{t,\alpha}(Y)|\leq\frac{1}{\delta}\mathbb{E}[|X-Y|\mid\mathcal{H}_{t}].$
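Inequality (22) can be checked numerically on samples. Here we read VaR at level $u$ as the $u$-quantile of the loss, matching the empirical $\alpha$-quantile used in Algorithm 1 (an assumption about conventions on our part); the distribution-free union-bound argument behind the lemma then applies verbatim to empirical distributions, provided generalized-inverse quantiles (`method="inverted_cdf"`) are used rather than interpolated ones:

```python
import numpy as np

# For the empirical measure of paired samples of X and Z the analogue of (22),
#   q_{X+Z}(alpha) <= q_X(alpha + delta) + q_Z(1 - delta),
# holds exactly: P(X+Z > a+b) <= P(X > a) + P(Z > b) applied with
# a = q_X(alpha+delta) and b = q_Z(1-delta).
rng = np.random.default_rng(2)
alpha, delta = 0.9, 0.05
for _ in range(100):
    x = rng.standard_t(df=3, size=500)     # heavy-tailed component
    z = 0.5 * x + rng.normal(size=500)     # dependent second component
    lhs = np.quantile(x + z, alpha, method="inverted_cdf")
    rhs = (np.quantile(x, alpha + delta, method="inverted_cdf")
           + np.quantile(z, 1 - delta, method="inverted_cdf"))
    assert lhs <= rhs + 1e-12
```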
Using these Lipschitz-like results, we can show a Lipschitz-like result for
$\varphi_{t,\alpha}(\cdot)$.
###### Theorem 2.
Let $\alpha\in(0,1)$ and let $\delta\in(0,1-\alpha)$ with $\delta<1/2$. Then,
for any $X,Y\in L^{1}(\mathcal{H}_{t+1})$,
$\displaystyle\inf_{|\epsilon|<\delta}|\varphi_{t,\alpha+\epsilon}(X)-\varphi_{t,\alpha}(Y)|\leq\frac{2}{\delta}\mathbb{E}[|X-Y|\mid\mathcal{H}_{t}]$
(24)
and
$\displaystyle\varphi_{t,\alpha-\delta}(X)-\frac{2}{\delta}\mathbb{E}[|X-Y|\mid\mathcal{H}_{t}]$
$\displaystyle\leq\varphi_{t,\alpha}(Y)$
$\displaystyle\leq\varphi_{t,\alpha+\delta}(X)+\frac{2}{\delta}\mathbb{E}[|X-Y|\mid\mathcal{H}_{t}],$
(25)
and (24) and (25) are equivalent.
Theorem 2 enables us to prove $L^{2}$-continuity of $\varphi_{t,\alpha}$ under
a continuity assumption.
###### Corollary 2.
Consider $X,X_{n}\in L^{2}(\mathcal{H}_{t+1})$, $n\geq 1$, with $X_{n}\to X$
in $L^{2}$. Assume that $(0,1)\ni u\mapsto\operatorname{VaR}_{t,u}(-X)$ is
a.s. continuous at $u=\alpha$. Then
$\varphi_{t,\alpha}(X_{n})\to\varphi_{t,\alpha}(X)$ in $L^{2}$.
The following remark illustrates that even the stronger requirement of a.s.
continuous time-$t$ conditional distributions should not be a great hindrance
in practice:
###### Remark 4.
If we add to our cash flow $(L_{t})_{t=1}^{T}$ an adapted process
$(\epsilon_{t})_{t=1}^{T}$, independent of $(L_{t})_{t=1}^{T}$, such that for
each $t$, $\epsilon_{t}$ is independent of $\mathcal{F}_{t-1}$ and has a
continuous distribution function, then the assumptions in Corollary 2 will be
satisfied.
We are now ready to formulate a result analogous to Lemma 1.
###### Lemma 7.
Let $\alpha\in(0,1)$ and let $\delta\in(0,1-\alpha)$ with $\delta<1/2$. Let
$(\varphi_{t,\alpha})_{t=0}^{T-1}$ be defined by (21) and let
$\displaystyle\widehat{V}_{N,t,\alpha}(L):=P_{\mathcal{B}_{t,N}}\varphi_{t,\alpha}\big{(}L_{t+1}+\widehat{V}_{N,t+1,\alpha}(L)\big{)},\quad\widehat{V}_{N,T,\alpha}(L):=0.$
(26)
Let $\bigcup_{n\in\mathbb{N}}\mathcal{B}_{t,n}$ be dense in the set
$\\{h(S_{t})\mid h:\mathbb{R}^{d}\to\mathbb{R},h(S_{t})\in
$L^{2}(\mathcal{F}_{t})\\}$ and assume that $(0,1)\ni
u\mapsto\operatorname{VaR}_{t,u}(-L_{t+1}-\widehat{V}_{N,t+1,\alpha}(L))$ is
a.s. continuous at $u=\alpha$ for all $N\in\mathbb{N}$ and $t=0,\dots,T-1$. Then,
for $t=0,\dots,T-1$,
$\displaystyle\|\widehat{V}_{N,t,\alpha}(L)-V_{t,\alpha}(L)\|_{2}\in\mathbb{R}\text{
and }\lim_{N\to\infty}\|\widehat{V}_{N,t,\alpha}(L)-V_{t,\alpha}(L)\|_{2}=0.$
###### Lemma 8.
Let $\alpha\in(0,1)$ and let $(0,1)\ni
u\mapsto\operatorname{VaR}_{t,u}(v^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})$ be a.s.
continuous at $u=\alpha$ for any $v\in\mathbb{R}^{N}$. Then
$\displaystyle\beta_{n}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}\beta\quad\text{implies}\quad\big{\|}\varphi_{t,\alpha}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha}(\beta_{n}^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})\big{\|}_{2}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}0.$
###### Remark 5.
Lemma 8 can be extended to show the convergence
$\displaystyle\big{\|}\varphi_{t,\alpha}(L_{t+1}+\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha}(L_{t+1}+\beta_{n}^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})\big{\|}_{2}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}0$
since the vector of basis functions $\mathbf{\Phi}_{t+1,N}$ could contain
$L_{t+1}$ as an element. The requirement for convergence is that
$u\mapsto\operatorname{VaR}_{t,u}(-L_{t+1}-v^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})$
is a.s. continuous at $u=\alpha$. This requirement could be replaced by the
stronger requirement that
$x\mapsto\mathbb{P}(L_{t+1}+v^{\mathrm{T}}\mathbf{\Phi}_{t+1,N}\leq x\mid\mathcal{F}_{t})$
is a.s. continuous.
We have now fitted $\varphi_{t,\alpha}$ into the setting of Theorem 1.
###### Theorem 3.
Let
$u\mapsto\operatorname{VaR}_{t,u}(-L_{t+1}-v^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})$
be a.s. continuous at $u=\alpha$ for any $v\in\mathbb{R}^{N}$. For any
$N\in\mathbb{N}$ and $t=0,\dots,T$, let $\widehat{V}_{N,t,\alpha}(L)$ be given by (26) and
define
$\displaystyle\widehat{V}^{(M)}_{N,t,\alpha}(L):=P^{(M)}_{\mathcal{B}_{t,N}}\varphi_{t,\alpha}\big{(}L_{t+1}+\widehat{V}^{(M)}_{N,t+1,\alpha}(L)\big{)},\quad\widehat{V}^{(M)}_{N,T,\alpha}(L):=0.$
Then, for $t=0,1,\dots,T-1$,
$\|\widehat{V}_{N,t,\alpha}(L)-\widehat{V}_{N,t,\alpha}^{(M)}(L)\|_{2}\stackrel{{\scriptstyle{\small\mathbb{P}}}}{{\to}}0$
as $M\to\infty$.
## 4 Implementing and validating the LSM algorithm
In this section we will test the LSM algorithm empirically for the special
case of the mappings $\varphi$ being given by
$(\varphi_{t,\alpha})_{t=0}^{T-1}$ in (21). The LSM algorithm described below,
Algorithm 1, will differ slightly from the one described previously, in the
sense that it will contain the small inefficiency of having two regression
steps: one for the $\operatorname{VaR}$ term of the mapping and one for the
expected value term.
The reason for introducing this split is that it will significantly simplify
the validation procedures of the algorithm. Heuristically, we will be able to
run a forward simulation where we may test the accuracy of both the
$\operatorname{VaR}$ term and the expected value term.
Let $\mu_{t,t+1}(\cdot,\cdot)$ be the transition kernel from time $t$ to $t+1$
of the Markov process $(S_{t})_{t=0}^{T}$ so that
$\mu_{t,t+1}(S_{t},\cdot)=\mathbb{P}(S_{t+1}\in\cdot\mid S_{t})$. In order to
perform the LSM algorithm below, the only requirements are the ability to
efficiently sample a variate $s$ from the unconditional law
$\mathcal{L}(S_{t})$ of $S_{t}$ and from the conditional law
$\mu_{t,t+1}(s,\cdot)$. Recall that the liability cash flow
$(L_{t})_{t=1}^{T}$ is assumed to be given by $L_{t}:=g_{t}(S_{t})$, for known
functions $(g_{t})_{t=1}^{T}$.
Algorithm 1 LSM Algorithm
Set $\widehat{\beta}^{(M)}_{T,N,V}:=0$
for $t=T-1:0$ do
  Draw independent variables $S^{(1)}_{t},\dots,S^{(M)}_{t}$ from $\mathcal{L}(S_{t})$
  for $i=1:M$ do
    Draw independent variables $S^{(i,1)}_{t+1},\dots,S^{(i,n)}_{t+1}$ from $\mu_{t,t+1}(S^{(i)}_{t},\cdot)$
    Set $Y^{(i,j)}_{t+1}:=g_{t+1}(S^{(i,j)}_{t+1})+(\widehat{\beta}^{(M)}_{t+1,N,V})^{{\mathrm{T}}}\mathbf{\Phi}_{t+1,N}(S^{(i,j)}_{t+1})$, $j=1,\dots,n$
    Let $\widehat{F}^{(i)}_{t}(y):=\frac{1}{n}\sum_{j=1}^{n}I\\{Y^{(i,j)}_{t+1}\leq y\\}$ (empirical cdf)
    Set $R^{(i)}_{t}:=\min\\{y:\widehat{F}^{(i)}_{t}(y)\geq\alpha\\}$ (empirical $\alpha$-quantile)
    Set $E^{(i)}_{t}:=\frac{1}{n}\sum_{j=1}^{n}(R^{(i)}_{t}-Y^{(i,j)}_{t+1})_{+}$
  end for
  Set $\widehat{\beta}^{(M)}_{t,N,R}$ as in (14) by regressing $(R_{t}^{(i)})_{i=1}^{M}$ onto $(\mathbf{\Phi}_{t,N}(S^{(i)}_{t}))_{i=1}^{M}$
  Set $\widehat{\beta}^{(M)}_{t,N,E}$ as in (14) by regressing $(E_{t}^{(i)})_{i=1}^{M}$ onto $(\mathbf{\Phi}_{t,N}(S^{(i)}_{t}))_{i=1}^{M}$
  Set $\widehat{\beta}^{(M)}_{t,N,V}:=\widehat{\beta}^{(M)}_{t,N,R}-\frac{1}{1+\eta}\widehat{\beta}^{(M)}_{t,N,E}$
end for
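A single backward step ($t=T-1$, so $\widehat{\beta}^{(M)}_{T,N,V}=0$) of Algorithm 1 can be sketched for a toy model. The lognormal state dynamics, the basis $(1,s,s^{2})$ and all parameter values below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(3)
M, n, alpha, eta = 1000, 1000, 0.995, 0.06

def basis(s):
    # Illustrative basis (1, s, s^2); not the paper's prescribed choice.
    return np.column_stack([np.ones_like(s), s, s**2])

s_t = rng.lognormal(mean=0.0, sigma=0.2, size=M)       # outer states S_t^(i)
R, E = np.empty(M), np.empty(M)
for i in range(M):
    # Inner samples from mu_{t,t+1}(S_t^(i), .): a toy lognormal step,
    # with Y^(i,j) = L_{t+1} since beta_{T,N,V} = 0.
    y = s_t[i] * np.exp(0.1 * rng.standard_normal(n))
    R[i] = np.quantile(y, alpha, method="inverted_cdf")  # empirical alpha-quantile
    E[i] = np.mean(np.maximum(R[i] - y, 0.0))

phi = basis(s_t)
beta_R, *_ = np.linalg.lstsq(phi, R, rcond=None)       # regress R onto the basis
beta_E, *_ = np.linalg.lstsq(phi, E, rcond=None)       # regress E onto the basis
beta_V = beta_R - beta_E / (1.0 + eta)                 # combine as in Algorithm 1

# Fitted value at a fresh state s = 1; it should lie between the expected
# next-step cash flow (about 1.0 here) and its 99.5% quantile (about 1.29).
v_hat = (basis(np.array([1.0])) @ beta_V).item()
assert 0.95 < v_hat < 1.35
```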
We may assess the accuracy of the LSM implementation by computing root mean-
squared errors (RMSE) of quantities appearing in Algorithm 1. For each index
pair $(t,i)$ set $V^{(i)}_{t}:=R^{(i)}_{t}-\frac{1}{1+\eta}E^{(i)}_{t}$.
Define the RMSE and the normalized RMSE by
$\displaystyle\text{RMSE}_{Z,t}$
$\displaystyle:=\bigg{(}\frac{1}{M}\sum_{i=1}^{M}\Big{(}Z^{(i)}_{t}-(\widehat{\beta}^{(M)}_{t,N,Z})^{\mathrm{T}}\mathbf{\Phi}_{t,N}\big{(}S^{(i)}_{t}\big{)}\Big{)}^{2}\bigg{)}^{1/2},$
(27) $\displaystyle\text{NRMSE}_{Z,t}$
$\displaystyle:=\text{RMSE}_{Z,t}\times\bigg{(}\frac{1}{M}\sum_{i=1}^{M}{Z^{(i)}_{t}}^{2}\bigg{)}^{-1/2},$
(28)
where $Z$ is a placeholder for $R$, $E$ or $V$.
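The diagnostics (27)–(28) amount to the following, computed here for a small placeholder fitted vector (the data values are illustrative):

```python
import numpy as np

# Out-of-sample diagnostics (27)-(28); z plays the role of (Z_t^(i))_i for
# Z in {R, E, V}, and "fitted" stands for (beta-hat)^T Phi_{t,N}(S_t^(i)).
def rmse(z, fitted):
    return float(np.sqrt(np.mean((z - fitted) ** 2)))

def nrmse(z, fitted):
    return rmse(z, fitted) / float(np.sqrt(np.mean(z ** 2)))

z = np.array([10.0, 12.0, 9.0, 11.0])
fitted = np.array([10.5, 11.5, 9.5, 10.5])
assert abs(rmse(z, fitted) - 0.5) < 1e-12
```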
For each index pair $(t,i)$ consider the actual non-default probability and
actual return on capital given by
$\displaystyle\text{ANDP}^{(i)}_{t}$
$\displaystyle:=\widehat{F}^{(i)}_{t}\big{(}(\widehat{\beta}^{(M)}_{t,N,R})^{{\mathrm{T}}}\mathbf{\Phi}_{t,N}(S^{(i)}_{t})\big{)},$
(29) $\displaystyle\text{AROC}^{(i)}_{t}$
$\displaystyle:=(1+\eta)E^{(i)}_{t}\times\big{(}(\widehat{\beta}^{(M)}_{t,N,E})^{{\mathrm{T}}}\mathbf{\Phi}_{t,N}(S^{(i)}_{t})\big{)}^{-1},$
(30)
and note that these random variables are expected to be centered around
$\alpha$ and $1+\eta$, respectively, if the implementation is accurate. All
validation procedures in this paper are performed out-of-sample, i.e. we must
perform a second validation run to get these values.
### 4.1 Models
In this section we will introduce two model types in order to test the
performance of the LSM algorithm. The first model type, introduced in Section
4.1.1, is not motivated by a specific application but is simply a sufficiently
flexible and moderately complex time series model. The second model type,
introduced in Section 4.1.2, aims to describe the cash flow of a life
insurance portfolio paying both survival and death benefits.
#### 4.1.1 AR(1)-GARCH(1,1) models
The first model to be evaluated assumes that the liability cash flow
$(L_{t})_{t=1}^{T}$ is given by an AR(1) model with GARCH(1,1) residuals,
with dynamics given by:
$\displaystyle
L_{t+1}=\alpha_{0}+\alpha_{1}L_{t}+\sigma_{t+1}\epsilon_{t+1},\quad\sigma^{2}_{t+1}=\alpha_{2}+\alpha_{3}\sigma^{2}_{t}+\alpha_{4}L^{2}_{t},\quad
L_{0}=0,\sigma_{1}=1.$
Here $\epsilon_{1},\dots,\epsilon_{T}$ are assumed to be i.i.d. standard
normally distributed and $\alpha_{0},\dots,\alpha_{4}$ are known model
parameters. If we put $S_{t}=(L_{t},\sigma_{t+1})$ for $t=0,\dots,T$, we see
that $S_{t}$ will form a time-homogeneous Markov chain.
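A minimal simulator for these dynamics can be written as follows; the parameter values are illustrative, not those used in the experiments:

```python
import numpy as np

def simulate_ar_garch(T, a0=0.1, a1=0.5, a2=0.05, a3=0.8, a4=0.1, rng=None):
    """Simulate the AR(1)-GARCH(1,1) dynamics with L_0 = 0, sigma_1 = 1,
    returning (L_0..L_T, sigma_0..sigma_{T+1}); sigma[0] is unused padding,
    so the state S_t = (L_t, sigma_{t+1}) reads off as (L[t], sigma[t+1])."""
    rng = rng if rng is not None else np.random.default_rng()
    L = np.zeros(T + 1)                  # L[t] = L_t, with L_0 = 0
    sig2 = np.zeros(T + 2)
    sig2[1] = 1.0                        # sigma_1 = 1
    for t in range(T):
        L[t + 1] = a0 + a1 * L[t] + np.sqrt(sig2[t + 1]) * rng.standard_normal()
        # GARCH recursion sigma^2_{s+1} = a2 + a3 sigma^2_s + a4 L_s^2, s = t+1.
        sig2[t + 2] = a2 + a3 * sig2[t + 1] + a4 * L[t + 1] ** 2
    return L, np.sqrt(sig2)

L, sigma = simulate_ar_garch(T=10, rng=np.random.default_rng(4))
assert L[0] == 0.0 and sigma[1] == 1.0
assert np.all(sigma[1:] > 0)             # conditional volatility stays positive
```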
In order to contrast this model with a more complex model, we also investigate
the case where the process $(L_{t})_{t=1}^{T}$ is given by a sum of
independent AR(1)-GARCH(1,1)-processes of the above type:
$L_{t}=\sum_{i=1}^{10}L_{t,i}$, where
$\displaystyle
L_{t+1,i}=\alpha_{0,i}+\alpha_{1,i}L_{t,i}+\sigma_{t+1,i}\epsilon_{t+1,i},\quad\sigma^{2}_{t+1,i}=\alpha_{2,i}+\alpha_{3,i}\sigma^{2}_{t,i}+\alpha_{4,i}L^{2}_{t,i}.$
The motivation for these choices of toy models is as follows: Firstly, a
single AR(1)-GARCH(1,1) process is sufficiently low-dimensional that we may
compare a brute-force approximation with that of the LSM algorithm, thus
getting a real sense of the performance of the LSM approach. Secondly, despite
being low-dimensional, it still seems to have a sufficiently complex
dependence structure as not to be easily valued other than by numerical means. The
motivation for looking at a sum of AR(1)-GARCH(1,1) processes is simply to
investigate whether model performance is severely hampered by an increase in
dimensionality, provided a certain amount of independence of the sources of
randomness.
#### 4.1.2 Life insurance models
In order to investigate a set of models more closely resembling an insurance
cash flow, we also consider an example closely inspired by that in [9].
Essentially, we will assume the liability cash flow to be given by life
insurance policies where we take into account age cohorts and their sizes at
each time, along with financial data relevant to the contract payouts.
We consider two risky assets $Y$ and $F$, given by the log-normal dynamics
$\displaystyle\mathrm{d}Y_{t}=\mu_{Y}Y_{t}\mathrm{d}t+\sigma_{Y}Y_{t}\mathrm{d}W^{Y}_{t},\quad
0\leq t\leq T,\quad Y_{0}=y_{0},$
$\displaystyle\mathrm{d}F_{t}=\mu_{F}F_{t}\mathrm{d}t+\sigma_{F}F_{t}\mathrm{d}W^{F}_{t},\quad
0\leq t\leq T,\quad F_{0}=f_{0}.$
$W^{Y}_{t}$ and $W^{F}_{t}$ are two correlated Brownian motions, which we may
rewrite as
$\displaystyle W^{Y}_{t}=W^{1}_{t},\quad W^{F}_{t}=\rho
W^{1}_{t}+\sqrt{1-\rho^{2}}W^{2}_{t},\quad 0\leq t\leq T,$
where $W^{1}$ and $W^{2}$ are two standard, uncorrelated Brownian motions.
Here, $F$ will represent the index associated with unit-linked contracts and
$Y$ will represent assets owned by the insurance company. Furthermore, we
assume that an individual of age $a$ has the probability $1-p_{a}$ of reaching
age $a+1$, where the probabilities $p_{a}$ for $a=0,1,\dots$ are assumed to be
nonrandom and known. All deaths are assumed to be independent of each other.
We will consider $k$ age-homogeneous cohorts of sizes $n_{1},\dots,n_{k}$ at
time $t=0$ and ages $a_{1},\dots,a_{k}$ at time $t=0$. We assume that all
insured individuals have bought identical contracts. If death occurs at time
$t$, the contract pays out the death benefit $\max(D^{*},F_{t})$, where
$D^{*}$ is a nonrandom guaranteed amount. If an insured person survives until
time $T$, the survival benefit $\max(S^{*},F_{T})$ is paid, where again $S^{*}$
is a nonrandom amount. We finally assume that the insurance company holds the
nominal amount $c(n_{1}+\dots+n_{k})$ in the risky asset $Y$ and that they will
sell off these assets proportionally to the amount of deaths as they occur,
and sell off the entire remaining amount at time $T$. Let $N^{i}_{t}$ denote
the number of people alive in cohort $i$ at time $t$, with the following
dynamics:
$\displaystyle N^{i}_{t+1}\sim\text{Bin}(N^{i}_{t},1-p_{a_{i}+t}),\quad
t=0,\dots,T-1.$
These are the same dynamics as the life insurance example in Section 5 of
[12]. Thus, the liability cash flow we consider here is given by
$\displaystyle L_{t}$
$\displaystyle=\big{(}\max(D^{*},F_{t})-cY_{t}\big{)}\sum_{i=1}^{k}(N^{i}_{t-1}-N^{i}_{t})$
$\displaystyle\quad+\mathbb{I}\\{t=T\\}\big{(}\max(S^{*},F_{T})-cY_{T}\big{)}\sum_{i=1}^{k}N^{i}_{T}.$
If we write $S_{t}=(Y_{t},F_{t},N^{1}_{t},\dots,N^{k}_{t})$, then
$S:=(S_{t})_{t=0}^{T}$ will be a Markov chain with dynamics outlined above.
Note that depending on the number $k$ of cohorts, $S$ might be a fairly high-
dimensional Markov chain. Note that in addition to the obvious risk factors of
mortality and the contractual payout amounts, there is also the risk of the
value of the insurance company’s risky asset $Y$ depreciating in value,
something which is of course a large risk factor of insurance companies in
practice. Here we will consider the case of $k=4$ cohorts, referred to as the
small life insurance model and the case $k=10$ cohorts, referred to as the
large life insurance model.
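The cohort dynamics $N^{i}_{t+1}\sim\text{Bin}(N^{i}_{t},1-p_{a_{i}+t})$ above can be simulated directly; the mortality curve $p_{a}$ below is a made-up toy curve, not the one used in the experiments:

```python
import numpy as np

def simulate_cohorts(n0, ages, T, rng=None):
    """Simulate cohort sizes N^i_t, t = 0..T, with independent deaths:
    N^i_{t+1} ~ Bin(N^i_t, 1 - p_{a_i + t})."""
    rng = rng if rng is not None else np.random.default_rng()
    p = lambda a: min(1.0, 0.002 * 1.09 ** a)   # toy Gompertz-type death probability
    N = np.zeros((T + 1, len(n0)), dtype=int)
    N[0] = n0
    for t in range(T):
        for i, a in enumerate(ages):
            N[t + 1, i] = rng.binomial(N[t, i], 1.0 - p(a + t))
    return N

N = simulate_cohorts(n0=[1000, 800], ages=[40, 65], T=5,
                     rng=np.random.default_rng(5))
assert np.all(np.diff(N, axis=0) <= 0)   # cohort sizes are nonincreasing
```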
### 4.2 Choice of basis functions
So far, the choice of basis functions has not been addressed. As we are trying
to numerically calculate some unknown functions we do not know the form of,
the approach used here will be a combination of standard polynomial functions,
completed with functions that in some ways bear resemblance to the underlying
liability cash flow. A similar approach for the valuation of American
derivatives is taken in for instance [16] and [3], where in the latter it is
explicitly advised (see p. 1082) to use the value of related, simpler,
derivatives as basis functions to price more exotic ones.
In these examples, we will not be overly concerned with model sparsity,
covariate significance or efficiency, but rather take the machine-learning
approach of simply evaluating models based on out-of-sample performance. This
is feasible due to the availability of simulated data for both fitting and
out-of-sample validation.
#### 4.2.1 AR(1)-GARCH(1,1) models
Since the AR(1)-GARCH(1,1) models can be considered toy models, generic basis
functions were chosen. For a single AR(1)-GARCH(1,1) model, the choice of
basis functions was all polynomials of the form $L_{t}^{i}\sigma_{t+1}^{j}$
for all $0<i+j\leq 2$. For the sum of $10$ independent AR(1)-GARCH(1,1) models
we denote by $L_{t},\sigma_{t+1}$ the aggregated liability cash flow and
standard deviation at time $t$ and $t+1$, respectively. Then we consider the
basis functions consisting of the state vector
$(L_{t,i},\sigma_{t,i})_{i=1}^{10}$ along with $L_{t}^{i}\sigma_{t+1}^{j}$ for
all $0<i+j\leq 2$, omitting the case of $i=1$, $j=0$ to avoid collinearity.
Note that the number of basis functions grows linearly with the dimensionality of the state space, rather than quadratically.
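As an illustration, the basis vector for a single AR(1)-GARCH(1,1) model can be assembled as follows; the leading constant is an assumed intercept term, since the text does not say whether one is included.

```python
import numpy as np

def garch_basis(L, sigma):
    """Basis functions L_t^i * sigma_{t+1}^j for all 0 < i + j <= 2,
    preceded by an (assumed) constant intercept term."""
    return np.array([1.0, L, sigma, L**2, L * sigma, sigma**2])
```

For the sum of $10$ models one would prepend the $20$ state coordinates $(L_{t,i},\sigma_{t,i})_{i=1}^{10}$ and drop the aggregated $L_{t}$ term ($i=1$, $j=0$) to avoid collinearity.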
#### 4.2.2 Life insurance models
For the state $S_{t}=(Y_{t},F_{t},N^{1}_{t},\dots,N^{k}_{t})$, let
$p^{i}_{t+1}$ be the probability of death during $(t,t+1)$ for an individual
in cohort $i$, with $q^{i}_{t+1}:=1-p^{i}_{t+1}$. We then introduce the state-
dependent variables
$\displaystyle\mu_{t+1}:=\sum_{i=1}^{k}N^{i}_{t}p^{i}_{t+1},\quad\sigma_{t+1}:=\Big{(}\sum_{i=1}^{k}N^{i}_{t}p^{i}_{t+1}q^{i}_{t+1}\Big{)}^{1/2},\quad
N_{t}:=\sum_{i=1}^{k}N^{i}_{t}.$
The first two quantities are the mean and standard deviation of the number of deaths during $(t,t+1)$, and the third is simply the total number of people
alive at time $t$. The basis functions we choose consist of the state vector
$Y_{t},F_{t},N^{1}_{t},\dots,N^{k}_{t}$ together with all products of two
factors where the first factor is an element of the set
$\\{\mu_{t+1},\sigma_{t+1},N_{t}\\}$ and the other factor is an element of the
set
$\displaystyle\big{\\{}Y_{t},F_{t},Y^{2}_{t},F^{2}_{t},F_{t}^{3},Y_{t}F_{t},Y_{t}F_{t}^{2},(F_{t}-K_{j})_{+},(F_{t}-K_{j})_{+}Y_{t},$
$\displaystyle\quad
C(F_{t},S^{*},T,t),C(F_{t},D^{*},t+1,t),C(F_{t},S^{*},T,t)Y_{t},C(F_{t},D^{*},t+1,t)Y_{t}\big{\\}}.$
$K_{j}$ can take values in $\\{200,162,124,103\\}$ depending on which
covariates of the form $(F_{t}-K_{j})_{+}$ had the highest $R^{2}$-value at
time $T=5$. Here the $R^{2}$-values were calculated based on the residuals after performing linear regression with respect to all basis functions not containing elements of the form $(F_{t}-K_{j})_{+}$. While this is a somewhat ad hoc approach that could be refined, it is a simple, easy-to-implement example of basis functions. Again note that the number of basis functions grows linearly with the dimensionality of the state space, rather than quadratically.
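A sketch of how this basis vector might be assembled is given below. The option-price covariates $C(\cdot)$ are omitted (their pricing formula is not specified in this section), and all four strikes $K_{j}$ are kept rather than $R^{2}$-selected, so this is an illustration rather than the exact basis used in the experiments.

```python
import numpy as np

def life_basis(Y, F, N, p, K=(200.0, 162.0, 124.0, 103.0)):
    """Sketch of the Section 4.2.2 basis vector.

    N, p: cohort counts and one-step death probabilities.  The
    option-price covariates C(.) are omitted, and all four strikes
    K_j are kept instead of being R^2-selected."""
    q = 1.0 - p
    mu = np.sum(N * p)                      # mean number of deaths
    sd = np.sqrt(np.sum(N * p * q))         # std dev of number of deaths
    n_tot = np.sum(N)
    second = [Y, F, Y**2, F**2, F**3, Y * F, Y * F**2]
    second += [max(F - k, 0.0) for k in K]
    second += [max(F - k, 0.0) * Y for k in K]
    state = [Y, F, *N]
    products = [a * b for a in (mu, sd, n_tot) for b in second]
    return np.array(state + products)
```

With two cohorts the vector has $4+3\cdot 15=49$ entries; the length still grows linearly in the number of cohorts.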
#### 4.2.3 Run specifications
For Algorithm 1, $M=5\cdot 10^{4}$ and $n=10^{5}$ were chosen for the life
insurance models and $M=10^{4}$ and $n=10^{5}$ for the AR(1)-GARCH(1,1)
models. Terminal time $T=6$ was used in all cases. For the validation run,
$M=10^{4}$ and $n=10^{5}$ were chosen for all models. Due to the extreme
quantile level involved, and also based on empirical observations, it was
deemed necessary to keep $n$ around this order of magnitude. Similarly, in part due to the number of basis functions involved, it was observed that performance seemed to increase with $M$. The chosen orders of magnitude for $M$ and $n$ were thus necessary for good model performance, and were also the largest that were computationally feasible given the computing power available.
For the AR(1)-GARCH(1,1) model, the chosen parameters were
$\alpha_{0}=1,\quad\alpha_{1}=1,\quad\alpha_{2}=0.1,\quad\alpha_{3}=0.1,\quad\alpha_{4}=0.1.$
The same choice was used for each of the terms in the sum of $10$
AR(1)-GARCH(1,1) processes, making the model a sum of i.i.d. processes.
For the life insurance models, the choice of parameters of the risky assets
was $\mu_{Y}=\mu_{F}=0.03,\sigma_{Y}=\sigma_{F}=0.1,\rho=0.4,y_{0}=f_{0}=100$.
The benefit lower bounds were chosen as $D^{*}=100,S^{*}=110$. The death/survival probabilities were calculated using the Makeham formula (for males):
$\displaystyle p_{a}=\exp\Big{\\{}-\int_{a}^{a+1}\mu_{x}\mathrm{d}x\Big{\\}},\quad\mu_{x}:=0.001+0.000012\exp\\{0.101314x\\},$
where $p_{a}$ denotes the one-year survival probability at age $a$.
These numbers correspond to the Swedish mortality table M90 for males (the
formula for females is identical, but adjusted backwards by $6$ years to
account for the greater longevity in the female population). For the case of
$4$ cohorts, starting ages (for males) were $50-80$ in $10$-year increments
and for the case of $10$ cohorts the starting ages were $40-85$ with $5$-year
increments.
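The Makeham integral has a closed form, so these survival probabilities can be computed directly; a minimal sketch, in which the $6$-year female adjustment is interpreted as an age shift (an assumption about the exact form of the adjustment):

```python
import math

A, B, C = 0.001, 0.000012, 0.101314   # M90 Makeham parameters (males)

def survival_prob(a, shift=0.0):
    """One-year survival probability p_a = exp(-int_a^{a+1} mu_x dx),
    with the Makeham integral evaluated in closed form.  Interpreting
    the female adjustment as an age shift, use shift=-6 for females."""
    x = a + shift
    integral = A + (B / C) * (math.exp(C * (x + 1.0)) - math.exp(C * x))
    return math.exp(-integral)
```

For example, `survival_prob(50)` and `survival_prob(80)` give the one-year survival probabilities for the youngest and oldest male cohorts of the small model.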
The algorithms were run on a computer with 8 Intel(R) Core(TM) i7-4770S
3.10GHz processors, and parallel programming was implemented in the nested
simulation steps in both Algorithm 1 and the validation algorithm.
### 4.3 Numerical results
The RMSE:s and NRMSE:s of the LSM models can be seen in Table 1 (RMSE:s) and
Table 2 (NRMSE:s). The ANDP:s and AROC:s of the LSM models can be seen in
Table 3. The tables display quantile ranges with respect to the $2.5\%$ and
$97.5\%$ quantiles of the data.
Model | RMSE V | RMSE R | RMSE E
---|---|---|---
one single AR(1)-GARCH(1,1) | 0.0114, 0.0118, 0.0115, 0.0098, 0.0061 | 0.0533, 0.0556, 0.0553, 0.0455, 0.0285 | 0.0521, 0.0542, 0.0544, 0.0444, 0.0279
a sum of $10$ AR(1)-GARCH(1,1) | 0.0172, 0.0130, 0.0120, 0.0100, 0.0061 | 0.0525, 0.0552, 0.0546, 0.0467, 0.0278 | 0.0536, 0.0544, 0.0535, 0.0458, 0.0273
Life model with 4 cohorts | 134.4, 120.3, 134.8, 85.1, 75.5 | 760.3, 682.7, 901.6, 535.9, 575.1 | 742.4, 665.0, 856.8, 536.0, 571.3
Life model with 10 cohorts | 331.9, 307.4, 330.8, 226.2, 219.3 | 1730.1, 1719.4, 2148.3, 1431.0, 1928.3 | 1689.2, 1672.8, 2049.8, 1429.4, 1910.3
Table 1: RMSE values for the quantities $V,R,E$ as defined in (27). The five values in each cell are for times $t=1,2,3,4,5$, in that order.
Model | NRMSE V (%) | NRMSE R (%) | NRMSE E (%)
---|---|---|---
one single AR(1)-GARCH(1,1) | 0.0498, 0.0583, 0.0685, 0.0797, 0.0901 | 0.1705, 0.1931, 0.2170, 0.2333, 0.2544 | 0.5810, 0.5962, 0.5930, 0.5747, 0.5885
a sum of $10$ AR(1)-GARCH(1,1) | 0.0758, 0.0642, 0.0715, 0.0813, 0.0912 | 0.1693, 0.1913, 0.2158, 0.2393, 0.2493 | 0.6049, 0.5963, 0.5880, 0.5949, 0.5779
Life model with 4 cohorts | 0.2567, 0.2225, 0.2603, 0.1711, 0.1647 | 0.5443, 0.5646, 0.8722, 0.5924, 0.7403 | 0.7109, 0.7535, 1.1381, 0.8306, 1.0271
Life model with 10 cohorts | 0.2505, 0.2247, 0.2454, 0.1720, 0.1733 | 0.4911, 0.5592, 0.7948, 0.5952, 0.9056 | 0.6475, 0.7452, 1.0499, 0.8351, 1.2580
Table 2: NRMSE values for the quantities $V,R,E$ as defined in (28). The five values in each cell are for times $t=1,2,3,4,5$, in that order.
Model | QR ANDP ($2.5\%$, $97.5\%$) | QR AROC ($2.5\%$, $97.5\%$)
---|---|---
one single AR(1)-GARCH(1,1) | (0.457, 0.544), (0.456, 0.545), (0.457, 0.545), (0.458, 0.545), (0.457, 0.543) | (4.79, 7.22), (4.76, 7.25), (4.78, 7.22), (4.84, 7.24), (4.80, 7.20)
a sum of $10$ AR(1)-GARCH(1,1) | (0.458, 0.545), (0.456, 0.545), (0.457, 0.545), (0.456, 0.546), (0.457, 0.544) | (4.74, 7.29), (4.77, 7.26), (4.77, 7.23), (4.79, 7.25), (4.81, 7.21)
Life model with 4 cohorts | (0.454, 0.548), (0.443, 0.565), (0.385, 0.622), (0.436, 0.571), (0.387, 0.603) | (4.59, 7.43), (4.32, 7.82), (2.71, 9.05), (4.10, 7.93), (2.69, 8.49)
Life model with 10 cohorts | (0.457, 0.546), (0.444, 0.560), (0.391, 0.611), (0.435, 0.569), (0.394, 0.605) | (4.66, 7.37), (4.33, 7.68), (2.94, 8.97), (4.08, 7.89), (2.96, 8.58)
Table 3: Quantile ranges for the samples
$(1-\text{ANDP}_{t}^{(i)})_{i=1}^{M}$ and $(\text{AROC}_{t}^{(i)})_{i=1}^{M}$,
as defined in (29) and (30). The quantiles considered are $2.5\%$ and
$97.5\%$. The five intervals in each cell are for times $t=1,2,3,4,5$, in that
order.
Below, in Figure 1 we also present some histograms of the actual returns and
risks of ruin, in order to get a sense of the spread of these values.
Figure 1: The top two figures correspond to the AR(1)-GARCH(1,1) model. The
bottom two figures correspond to the life insurance model with 10 cohorts.
From these we can observe that the quantity representing the actual returns seems to be quite sensitive to model errors, given the rather small size of the RMSE values.
Model | Running time valuation (HH:MM) | Running time validation (HH:MM)
---|---|---
one single AR(1)-GARCH(1,1) | 00:06 | 00:10
a sum of 10 AR(1)-GARCH(1,1) | 00:33 | 00:39
Life model with 4 cohorts | 12:48 | 02:30
Life model with 10 cohorts | 13:29 | 02:44
Table 4: Run time of each model in hours and minutes. Run specifications are described in Section 4.2.3.
Table 4 displays the running times of each model. As far as is known, the main factor determining the running time of Algorithm 1 is the repeated calculation of the basis functions inside the nested simulation (required to calculate the quantities $Y_{t+1}^{i,j}$ in the inner for-loop). As there are many of these calculations for models with high-dimensional state spaces, we see that running times increase accordingly. It should be noted that Algorithm 1 was not implemented to run as
fast as possible for any specific model, other than the implementation of
parallel programming. Speed could potentially be gained by adapting Algorithm
1 for specific models of interest.
Some conclusions can be drawn from the numerical results. Firstly, we can see that, from a mean-squared-error point of view, the LSM model seems to work well in capturing the dynamics of the multiperiod cost-of-capital valuation.
It should be noted that the (N)RMSE of the value $V$ is lower than those of $R$ and $E$ across the board, for all models and times. Since the expression for $E$ depends heavily on $R$, we can suspect that the estimation errors of $R$ and $E$ are positively correlated, and thus that $V=R-\frac{1}{1+\eta}E$ gets lower mean squared errors as a result.
We can see that increasing model complexity for the AR(1)-GARCH(1,1) and life insurance models seems to have no significant effect on LSM performance. It should be noted that model complexity in both cases is increased by introducing independent stochastic factors: a sum of i.i.d. processes in the AR(1)-GARCH(1,1) case and the addition of (independent) cohorts in the life insurance case. Thus the de facto model complexity might not have increased much, even though the state space of the Markov process is larger.
When we look at the ANDP and AROC quantities, we see that these seem to vary
more than do the (N)RMSE:s. Especially AROC, which is defined via a quotient,
seems to be sensitive to model error.
One important thing to note with regard to the sensitivity of ANDP and AROC is the presence of errors introduced by the need to use Monte Carlo simulation to calculate samples of $\operatorname{VaR}_{t,1-\alpha}(-\cdot)$ and $\mathbb{E}[(\cdot)_{+}]$. This can be seen in the AR(1)-GARCH(1,1) case: if we investigate what the value $V_{5}(L)$ should be, we see that in this case it has a closed form (using positive homogeneity and translation invariance):
$\displaystyle\varphi_{5}(L_{6}+V_{6}(L))=\varphi_{5}(\alpha_{0}+\alpha_{1}L_{5}+\sigma_{6}\epsilon_{6})=\alpha_{0}+\alpha_{1}L_{5}+\sigma_{6}\varphi_{5}(\epsilon_{6}).$
$\varphi_{5}(\epsilon_{6})$ is deterministic due to law invariance. Since $L_{t}$ and $\sigma_{t}$ are included in the basis functions for the AR(1)-GARCH(1,1) model, we would expect the fit in this case to be perfect. Since it is not, we conclude that errors may still appear even when optimal basis functions are among our selection of basis functions.
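The affine identity behind this closed form also holds exactly for empirical quantiles, since $u\mapsto a+bu$ with $b>0$ is increasing; a quick numerical illustration of the VaR part (parameter values hypothetical):

```python
import numpy as np

# Empirical quantiles commute with increasing affine maps, which is the
# translation-invariance / positive-homogeneity step used in the display.
rng = np.random.default_rng(1)
eps6 = rng.standard_normal(10_000)
alpha0, alpha1, L5, sigma6 = 1.0, 1.0, 2.0, 0.5
lhs = np.quantile(alpha0 + alpha1 * L5 + sigma6 * eps6, 0.99)
rhs = alpha0 + alpha1 * L5 + sigma6 * np.quantile(eps6, 0.99)
assert np.isclose(lhs, rhs)
```

Any residual error in the fitted $V_{5}$ must therefore come from the regression and nested-simulation steps, not from the quantile estimator's affine behavior.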
Finally, if we recall that the main purpose of these calculations is to
calculate the quantity $V_{t}(L)$, a good approach for validation might be to
re-balance the LSM estimates of $R^{(i)}_{t}$ and $E^{(i)}_{t}$ so that the
LSM estimate of the value $V_{t}$ remains unchanged, but the LSM estimates are
better fitted to $R^{(i)}_{t}$ and $E^{(i)}_{t}$. This re-balancing would not be problematic in the economic model in which this validation scheme is carried out. However, in this paper we were also interested in how the LSM model
captures both the VaR term and the expected value term, so the quantities ANDP
and AROC remain relevant to look at.
## 5 Conclusion
We have studied the performance of the LSM algorithm to numerically compute
recursively defined objects such as $(V_{t}(L,\varphi))_{t=0}^{T}$ given in Definition 4, where the mappings $\varphi$ are either $L^{2}$-continuous or are given by (21). As a part of this study, Lipschitz-like results and
conditions for $L^{2}$-continuity were established for Value-at-Risk and the
associated operator $\varphi_{t,\alpha}$ in Theorem 2 and Corollary 2.
Important basic consistency results have been obtained, showing the convergence of the LSM estimator both as the number of basis functions goes to infinity, in Lemmas 1 and 7, and as the size of the simulated data goes to infinity for a fixed number of basis functions, in Theorems 1 and 3. Furthermore, these
results are applicable to a large class of conditional monetary risk measures,
utility functions and various actuarial multi-period valuations, the only
requirement being $L^{2}$-continuity or a property like that established in
Theorem 2. We also apply and evaluate the LSM algorithm with respect to multi-
period cost-of-capital valuation considered in [12] and [13], and in doing
this also provide insight into practical considerations concerning
implementation and validation of the LSM algorithm.
## 6 Proofs
###### Proof of Lemma 1.
Note that the quantities defined in (10) and (17) are independent of $D$,
hence all norms below are a.s. constants. Define
$\epsilon_{t}:=\widehat{V}_{N,t}-V_{t}$. We will now show via backwards induction starting from time $t=T$ that $||\epsilon_{t}||_{2}\to 0$. The
induction base is trivial, since $\widehat{V}_{N,T}(L)=V_{T}(L)=0$. Now assume
that $||\epsilon_{t+1}||_{2}\to 0$. Then
$\displaystyle||\epsilon_{t}||_{2}$
$\displaystyle\leq||\widehat{V}_{N,t}-\varphi_{t}(L_{t+1}+\widehat{V}_{N,t+1}(L))||_{2}$
$\displaystyle\quad+||\varphi_{t}(L_{t+1}+\widehat{V}_{N,t+1}(L))-\varphi_{t}(L_{t+1}+V_{t+1}(L))||_{2}$
By the induction assumption and the continuity assumption, we know that the
second summand goes to $0$. We now need to show that
$||\widehat{V}_{N,t}-\varphi_{t}(L_{t+1}+\widehat{V}_{N,t+1}(L))||_{2}\to 0$.
Now we simply note, by the definition of the projection operator and denseness
of the approximating sets,
$\displaystyle||\widehat{V}_{N,t}-\varphi_{t}(L_{t+1}+\widehat{V}_{N,t+1}(L))||_{2}$
$\displaystyle\quad=\inf_{B\in\mathcal{B}_{N}}||B-\varphi_{t}(L_{t+1}+\widehat{V}_{N,t+1}(L))||_{2}$
$\displaystyle\quad\leq\inf_{B\in\mathcal{B}_{N}}||B-\varphi_{t}(L_{t+1}+V_{t+1})||_{2}$
$\displaystyle\quad\quad+||\varphi_{t}(L_{t+1}+\widehat{V}_{N,t+1}(L))-\varphi_{t}(L_{t+1}+V_{t+1})||_{2}$
$\displaystyle\quad=\Big{[}\inf_{B\in\mathcal{B}_{N}}||B-\varphi_{t}(L_{t+1}+V_{t+1})||_{2}\Big{]}$
$\displaystyle\quad\quad+||\varphi_{t}(L_{t+1}+\widehat{V}_{N,t+1}(L))-\varphi_{t}(L_{t+1}+V_{t+1})||_{2}.$
By our assumptions, both these terms go to zero, as $\varphi_{t}(L_{t+1}+V_{t+1}(L))$ is a function of the state $S_{t}$ and hence lies in $L^{2}(\mathcal{F}_{t})$. ∎
###### Proof of Lemma 2.
We first note that if
$\widehat{\beta}^{(M)}_{t,N,Z_{M,t}}\to\beta_{t,N,Z_{t}}$ in probability, then
$||(\beta_{t,N,Z_{t}})^{\mathrm{T}}\mathbf{\Phi}_{t,N}-(\widehat{\beta}^{(M)}_{t,N,Z_{M,t}})^{\mathrm{T}}\mathbf{\Phi}_{t,N}||_{2}\to
0$ in probability, since $\widehat{\beta}^{(M)}_{t,N,Z_{M,t}}$ is independent
of $\mathbf{\Phi}_{t,N}$ and $\mathcal{F}_{0}$-measurable, while
$\mathbf{\Phi}_{t,N}$ is independent of $D$. Hence it suffices to show that
$\widehat{\beta}^{(M)}_{t,N,Z_{M,t}}\to\beta_{t,N,Z_{t}}$ in probability. Now, recalling the definition of $\widehat{\beta}^{(M)}_{t,N,Z_{M,t}}$, we re-write (14) as
$\displaystyle\widehat{\beta}^{(M)}_{t,N,Z_{M,t}}=\Big{(}\frac{1}{M}\big{(}\mathbf{\Phi}^{(M)}_{t,N}\big{)}^{\mathrm{T}}\mathbf{\Phi}^{(M)}_{t,N}\Big{)}^{-1}\frac{1}{M}\big{(}\mathbf{\Phi}^{(M)}_{t,N}\big{)}^{\mathrm{T}}Z_{t}^{(M)}.$
Furthermore recall the form of $\beta_{t,N,Z_{t}}$ given by (13). We first
note that since, by the law of large numbers,
$\frac{1}{M}\big{(}\mathbf{\Phi}^{(M)}_{t,N}\big{)}^{\mathrm{T}}\mathbf{\Phi}^{(M)}_{t,N}\to\mathbb{E}_{0}\big{[}\mathbf{\Phi}_{t,N}\mathbf{\Phi}_{t,N}^{{\mathrm{T}}}\big{]}$
almost surely and thus in probability, it suffices to show that
$\displaystyle\frac{1}{M}\big{(}\Phi^{(M)}_{t,j}(S_{t}^{(i)})\big{)}_{1\leq
i\leq M}^{\mathrm{T}}Z_{t}^{(M)}\to\mathbb{E}_{0}[\Phi_{t,j}(S_{t})Z_{t}]$
in probability for each $j=1,\dots,N$. We first note that, letting $\epsilon_{M}^{(i)}:=Z_{M,t}^{(i)}-z_{t}(S_{t}^{(i)})$,
$\displaystyle\Big{|}\frac{1}{M}\sum_{i=1}^{M}\Phi_{t,j}(S_{t}^{(i)})Z_{M,t}^{(i)}-\mathbb{E}_{0}[\Phi_{t,j}(S_{t})Z_{t}]\Big{|}$
$\displaystyle\quad\leq\Big{|}\frac{1}{M}\sum_{i=1}^{M}\Phi_{t,j}(S_{t}^{(i)})z_{t}(S_{t}^{(i)})-\mathbb{E}_{0}[\Phi_{t,j}(S_{t})Z_{t}]\Big{|}+\Big{|}\frac{1}{M}\sum_{i=1}^{M}\Phi_{t,j}(S_{t}^{(i)})\epsilon_{M}^{(i)}\Big{|}$
The first summand goes to zero in probability by the law of large numbers.
Thus, we investigate the second summand using Hölder’s inequality:
$\displaystyle\Big{|}\frac{1}{M}\sum_{i=1}^{M}\Phi_{t,j}(S_{t}^{(i)})\epsilon_{M}^{(i)}\Big{|}\leq\Big{(}\frac{1}{M}\sum_{i=1}^{M}(\Phi_{t,j}(S_{t}^{(i)}))^{2}\Big{)}^{1/2}\Big{(}\frac{1}{M}\sum_{i=1}^{M}(\epsilon_{M}^{(i)})^{2}\Big{)}^{1/2}$
We see that, again by the law of large numbers, the first factor converges to
$\mathbb{E}[(\Phi_{t,j}(S_{t}))^{2}]$ in probability. Now we look at the
second factor. By our independence assumption,
$\epsilon_{M}^{(i)}\overset{d}{=}Z_{t,M}-Z_{t}$ and thus
$\displaystyle\operatorname{Var}\Big{(}\Big{(}\frac{1}{M}\sum_{i=1}^{M}(\epsilon_{M}^{(i)})^{2}\Big{)}^{1/2}\Big{|}\mathcal{F}_{0}\Big{)}\leq\mathbb{E}_{0}\Big{[}\frac{1}{M}\sum_{i=1}^{M}(\epsilon_{M}^{(i)})^{2}\Big{]}=||Z_{t,M}-Z_{t}||^{2}_{2},$
which, by assumption, goes to $0$ in probability, hence the expression goes to
zero in probability. This concludes the proof. ∎
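The estimator in the display above is simply the ordinary-least-squares coefficient vector written via the normal equations, and its law-of-large-numbers convergence can be checked numerically; a sketch with synthetic data (all numbers hypothetical):

```python
import numpy as np

# Synthetic check that the normal-equations form of (14) coincides with
# ordinary least squares, and recovers the population coefficients.
rng = np.random.default_rng(2)
M, N = 5000, 4
Phi = rng.standard_normal((M, N))            # simulated basis matrix
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
Z = Phi @ beta_true + 0.1 * rng.standard_normal(M)

beta_hat = np.linalg.solve(Phi.T @ Phi / M, Phi.T @ Z / M)
beta_ls, *_ = np.linalg.lstsq(Phi, Z, rcond=None)
assert np.allclose(beta_hat, beta_ls)        # same estimator
assert np.allclose(beta_hat, beta_true, atol=0.05)
```

The $1/M$ factors cancel algebraically; they are kept to match the sample-average form used in the law-of-large-numbers argument.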
###### Proof of Theorem 1.
We prove the statement by backwards induction, starting from time $t=T$. As
before, the induction base follows immediately from our assumptions. Now
assume $||\widehat{V}_{N,t+1}(L)-\widehat{V}_{N,t+1}^{(M)}(L)||_{2}\to 0$ in
probability, as $M\to\infty$. By $L^{2}$-continuity we get that
$||\varphi_{t}(L_{t+1}+\widehat{V}_{N,t+1}(L))-\varphi_{t}(L_{t+1}+\widehat{V}_{N,t+1}^{(M)}(L))||_{2}\to
0$ in probability. But then by Lemma 2 we immediately get that
$||\widehat{V}_{N,t}(L)-\widehat{V}_{N,t}^{(M)}(L)||_{2}\to 0$ in probability.
∎
###### Proof of Lemma 3.
Note that
$\displaystyle||\varphi_{t}(X)-\varphi_{t}(Y)||^{2}_{2}$
$\displaystyle=\mathbb{E}_{0}[|\varphi_{t}(X)-\varphi_{t}(Y)|^{2}]\leq\mathbb{E}_{0}[K^{2}\mathbb{E}_{t}[|X-Y|]^{2}]$
$\displaystyle\leq
K^{2}\mathbb{E}_{0}[\mathbb{E}_{t}[|X-Y|^{2}]]=K^{2}\mathbb{E}_{0}[|X-Y|^{2}]$
$\displaystyle=K^{2}||X-Y||^{2}_{2}.$
Here we have used Jensen’s inequality and the tower property of the conditional expectation at the second inequality and the following equality, respectively. From this, $L^{2}$-continuity immediately follows. ∎
###### Proof of Lemma 4.
By Lemma 9, to construct upper and lower bounds for a quantity given by
$\varphi_{t,\alpha}(\xi)$ we may find upper and lower bounds for
$\rho_{t}(-\xi)$ and insert them into the expression for
$\varphi_{t,\alpha}(\xi)$. Now take $X,Y\in L^{p}(\mathcal{F}_{t+1})$ and let $Z:=Y-X$. By monotonicity we get that
$\displaystyle\varphi_{t}(X-|Z|)\leq\varphi_{t}(Y)\leq\varphi_{t}(X+|Z|).$
We now observe that
$\displaystyle\rho_{t}(-(X+|Z|))\leq\rho_{t}(-X)+K\mathbb{E}_{t}[|Z|]$
$\displaystyle\rho_{t}(-(X-|Z|))\geq\rho_{t}(-X)-K\mathbb{E}_{t}[|Z|]$
We use this to also observe that, by the subadditivity of the
$()_{+}$-operation,
$\displaystyle-\mathbb{E}_{t}[(\rho_{t}(-X)+K\mathbb{E}_{t}[|Z|]-X-|Z|)_{+}]$
$\displaystyle\quad\leq-\mathbb{E}_{t}[(\rho_{t}(-X)-X)_{+}]+\mathbb{E}_{t}[(\rho_{t}(-|Z|)-|Z|)_{+}]$
$\displaystyle\quad\leq-\mathbb{E}_{t}[(\rho_{t}(-X)-X)_{+}]+\rho_{t}(-|Z|)$
$\displaystyle\quad\leq-\mathbb{E}_{t}[(\rho_{t}(-X)-X)_{+}]+K\mathbb{E}_{t}[|Z|].$
Similarly, we have that
$\displaystyle-\mathbb{E}_{t}[(\rho_{t}(-X)-K\mathbb{E}_{t}[|Z|]-X+|Z|)_{+}]$
$\displaystyle\quad\geq-\mathbb{E}_{t}[(\rho_{t}(-X)-X)_{+}]-\mathbb{E}_{t}[(\rho_{t}(-|Z|)-|Z|)_{+}]$
$\displaystyle\quad\geq-\mathbb{E}_{t}[(\rho_{t}(-X)-X)_{+}]-\rho_{t}(-|Z|)$
$\displaystyle\quad\geq-\mathbb{E}_{t}[(\rho_{t}(-X)-X)_{+}]-K\mathbb{E}_{t}[|Z|].$
From this we get that
$\displaystyle\varphi_{t}(Y)\leq\varphi_{t}(X+|Z|)\leq\varphi_{t}(X)+K\mathbb{E}_{t}[|Z|]+\frac{1}{1+\eta}K\mathbb{E}_{t}[|Z|],$
$\displaystyle\varphi_{t}(Y)\geq\varphi_{t}(X-|Z|)\geq\varphi_{t}(X)-K\mathbb{E}_{t}[|Z|]-\frac{1}{1+\eta}K\mathbb{E}_{t}[|Z|],$
from which Lipschitz continuity with constant $2K$ immediately follows. ∎
###### Proof of Lemma 5.
By subadditivity we have that
$\displaystyle\rho_{t,M}(Y)-\rho_{t,M}(X)\leq\rho_{t,M}(Y-X)\leq\rho_{t,M}(-|Y-X|),$
$\displaystyle\rho_{t,M}(X)-\rho_{t,M}(Y)\leq\rho_{t,M}(X-Y)\leq\rho_{t,M}(-|Y-X|).$
Now we simply note that
$\displaystyle\rho_{t,M}(-|Y-X|)$
$\displaystyle=\int_{0}^{1}F^{-1}_{t,|Y-X|}(1-u)m(u)\mathrm{d}u$
$\displaystyle\leq m(0)\int_{0}^{1}F^{-1}_{t,|Y-X|}(1-u)\mathrm{d}u$
$\displaystyle=m(0)\mathbb{E}_{t}[|Y-X|].$
This concludes the proof. ∎
###### Proof of Lemma 6.
We begin by showing (22). Let
$E=\\{Z\leq\operatorname{VaR}_{1-\delta}(-Z)\\}$. Then:
$\displaystyle\mathbb{P}_{t}(X+Z\leq y)$
$\displaystyle\geq\mathbb{P}_{t}(E\cap\\{X+Z\leq y\\})$
$\displaystyle\geq\mathbb{P}_{t}(E\cap\\{X+\operatorname{VaR}_{1-\delta}(-Z)\leq
y\\})$
$\displaystyle\geq\mathbb{P}_{t}(X+\operatorname{VaR}_{1-\delta}(-Z)\leq
y)-\mathbb{P}_{t}(E^{\complement})$ $\displaystyle\geq\mathbb{P}_{t}(X\leq
y-\operatorname{VaR}_{1-\delta}(-Z))-\delta$
Putting
$y=\operatorname{VaR}_{\alpha+\delta}(-X)+\operatorname{VaR}_{1-\delta}(-Z)$
yields
$\displaystyle\mathbb{P}_{t}(X\leq
y-\operatorname{VaR}_{1-\delta}(-Z))-\delta\geq\alpha+\delta-\delta=\alpha$
Hence
$\operatorname{VaR}_{t,\alpha}(-(X+Z))\leq\operatorname{VaR}_{\alpha+\delta}(-X)+\operatorname{VaR}_{1-\delta}(-Z)$.
We now prove (23) by applying (22):
$\displaystyle\operatorname{VaR}_{t,\alpha-\delta}(-X)$
$\displaystyle=\operatorname{VaR}_{t,\alpha-\delta}(-(X+Z+(-Z)))$
$\displaystyle\leq\operatorname{VaR}_{t,\alpha}(-(X+Z))+\operatorname{VaR}_{1-\delta}(Z),$
from which we get (23). ∎
###### Proof of Corollary 1.
Let $Z=Y-X$. Now we simply note that, for any $\delta$
$\displaystyle\operatorname{VaR}_{t,\alpha}(-(X+Z))$
$\displaystyle\leq\operatorname{VaR}_{t,\alpha}(-(X+|Z|))$
$\displaystyle\leq\operatorname{VaR}_{t,\alpha+\delta}(-X)+\operatorname{VaR}_{t,1-\delta}(-|Z|)$
By Markov’s inequality, we may bound the latter summand:
$\displaystyle\operatorname{VaR}_{t,1-\delta}(-|Z|)\leq\frac{1}{\delta}\mathbb{E}_{t}[|Z|]$
Now for the lower bound, we similarly note
$\displaystyle\operatorname{VaR}_{t,\alpha}(-(X+Z))$
$\displaystyle\geq\operatorname{VaR}_{t,\alpha}(-(X-|Z|))$
$\displaystyle\geq\operatorname{VaR}_{t,\alpha-\delta}(-X)+\operatorname{VaR}_{t,1-\delta}(|Z|)$
where again we may bound the second summand using Markov’s inequality:
$\displaystyle\operatorname{VaR}_{t,1-\delta}(|Z|)\geq-\frac{1}{1-\delta}\mathbb{E}_{t}[|Z|]\geq-\frac{1}{\delta}\mathbb{E}_{t}[|Z|],$
since we have assumed $\delta<1/2$. This yields that, almost surely,
$\displaystyle\operatorname{VaR}_{t,\alpha}(-Y)\in\Big{[}\operatorname{VaR}_{t,\alpha-\delta}(-X)-\frac{1}{\delta}\mathbb{E}_{t}[|X-Y|],\operatorname{VaR}_{t,\alpha+\delta}(-X)+\frac{1}{\delta}\mathbb{E}_{t}[|X-Y|]\Big{]},$
from which the desired result immediately follows. ∎
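Corollary 1 can be sanity-checked empirically with plain quantiles, reading $\operatorname{VaR}_{u}(-X)$ as the $u$-quantile of $X$ (the convention used in the proof of Lemma 6) and replacing the conditional quantities by unconditional ones; a sketch:

```python
import numpy as np

# Empirical check of the Corollary 1 sandwich for a perturbed sample:
#   VaR_a(-Y) lies in
#   [VaR_{a-d}(-X) - E|X-Y|/d,  VaR_{a+d}(-X) + E|X-Y|/d].
rng = np.random.default_rng(3)
X = rng.standard_normal(100_000)
Y = X + 0.05 * rng.standard_normal(100_000)   # perturbed version of X
alpha, delta = 0.99, 0.005
slack = np.mean(np.abs(X - Y)) / delta
lo = np.quantile(X, alpha - delta) - slack
hi = np.quantile(X, alpha + delta) + slack
assert lo <= np.quantile(Y, alpha) <= hi
```

The $\mathbb{E}_{t}[|X-Y|]/\delta$ slack is generous for small $\delta$, which is why the bound is only useful when the perturbation is small relative to $\delta$.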
###### Lemma 9.
For any $X\in L^{1}(\mathcal{F}_{t+1})$ and $R_{1},R_{2}\in
L^{0}(\mathcal{F}_{t})$ with $R_{1}\leq R_{2}$ a.s.,
$\displaystyle R_{1}-\frac{1}{1+\eta}\mathbb{E}_{t}[(R_{1}-X)_{+}]\leq
R_{2}-\frac{1}{1+\eta}\mathbb{E}_{t}[(R_{2}-X)_{+}]\quad a.s.$
###### Proof of Lemma 9.
Let $R_{1}\leq R_{2}$ a.s. and let $A_{1}=\\{R_{1}-X\geq 0\\}$ and
$A_{2}=\\{R_{2}-X\geq 0\\}$. Note that $A_{1}\subseteq A_{2}$ almost surely,
i.e. $\mathbb{P}_{t}(A_{1}\setminus A_{2})=0$ a.s. We now note that:
$\displaystyle
R_{1}-\frac{1}{1+\eta}\mathbb{E}_{t}[(R_{1}-X)_{+}]=\big{(}1-\frac{1}{1+\eta}\mathbb{P}_{t}(A_{1})\big{)}R_{1}+\frac{1}{1+\eta}\mathbb{E}_{t}[\mathbb{I}_{A_{1}}X]$
$\displaystyle
R_{2}-\frac{1}{1+\eta}\mathbb{E}_{t}[(R_{2}-X)_{+}]=\big{(}1-\frac{1}{1+\eta}\mathbb{P}_{t}(A_{2})\big{)}R_{2}+\frac{1}{1+\eta}\mathbb{E}_{t}[\mathbb{I}_{A_{2}}X]$
We look at the expectation in the first expression:
$\displaystyle\mathbb{E}_{t}[\mathbb{I}_{A_{1}}X]=\mathbb{E}_{t}[\mathbb{I}_{A_{2}}X]-\mathbb{E}_{t}[X\mathbb{I}_{A_{2}\setminus
A_{1}}]\leq\mathbb{E}_{t}[\mathbb{I}_{A_{2}}X]-\mathbb{P}_{t}(A_{2}\setminus
A_{1})R_{1}$
We now see that
$\displaystyle\big{(}1-\frac{1}{1+\eta}\mathbb{P}_{t}(A_{1})\big{)}R_{1}+\frac{1}{1+\eta}\mathbb{E}_{t}[\mathbb{I}_{A_{1}}X]$
$\displaystyle\quad\leq\big{(}1-\frac{1}{1+\eta}\mathbb{P}_{t}(A_{1})\big{)}R_{1}+\frac{1}{1+\eta}\mathbb{E}_{t}[\mathbb{I}_{A_{2}}X]-\frac{1}{1+\eta}\mathbb{P}_{t}(A_{2}\setminus
A_{1})R_{1}$
$\displaystyle\quad=\big{(}1-\frac{1}{1+\eta}\mathbb{P}_{t}(A_{2})\big{)}R_{1}+\frac{1}{1+\eta}\mathbb{E}_{t}[\mathbb{I}_{A_{2}}X]$
$\displaystyle\quad\leq\big{(}1-\frac{1}{1+\eta}\mathbb{P}_{t}(A_{2})\big{)}R_{2}+\frac{1}{1+\eta}\mathbb{E}_{t}[\mathbb{I}_{A_{2}}X]$
This concludes the proof. ∎
###### Proof of Theorem 2.
Let $Z=Y-X$. Note that
$\displaystyle\varphi_{t,\alpha}(Y)=\varphi_{t,\alpha}(X+Z)\leq\varphi_{t,\alpha}(X+|Z|).$
As for the $\operatorname{VaR}$-part of $\varphi_{t,\alpha}$, including the one appearing inside the expectation, we note that by Lemma 6,
$\operatorname{VaR}_{t,\alpha}(-(X+|Z|))\leq\operatorname{VaR}_{t,\alpha+\delta}(-X)+\operatorname{VaR}_{t,1-\delta}(-|Z|)$.
We now note that, by subadditivity of $x\mapsto(x)_{+}$,
$\displaystyle-\mathbb{E}_{t}[(\operatorname{VaR}_{t,\alpha+\delta}(-X)+\operatorname{VaR}_{t,1-\delta}(-|Z|)-X-|Z|)_{+}]$
$\displaystyle\quad\leq-\mathbb{E}_{t}[(\operatorname{VaR}_{t,\alpha+\delta}(-X)-X)_{+}]+\mathbb{E}_{t}[(\operatorname{VaR}_{t,1-\delta}(-|Z|)-|Z|)_{+}]$
$\displaystyle\quad\leq-\mathbb{E}_{t}[(\operatorname{VaR}_{t,\alpha+\delta}(-X)-X)_{+}]+\operatorname{VaR}_{t,1-\delta}(-|Z|).$
Hence
$\displaystyle\varphi_{t,\alpha}(X+|Z|)$
$\displaystyle\leq\varphi_{t,\alpha+\delta}(X)+\frac{2+\eta}{1+\eta}\operatorname{VaR}_{t,1-\delta}(-|Z|)$
$\displaystyle\leq\varphi_{t,\alpha+\delta}(X)+\frac{2}{\delta}\mathbb{E}_{t}[|Z|].$
Here we have used the Markov’s inequality bound from Corollary 1.
We now similarly construct a lower bound for $\varphi_{t,\alpha}(Y)$:
$\displaystyle\varphi_{t,\alpha}(Y)=\varphi_{t,\alpha}(X+Z)\geq\varphi_{t,\alpha}(X-|Z|)$
Again, for the $\operatorname{VaR}$-part, we note that by Lemma 6
$\operatorname{VaR}_{t,\alpha}(-(X-|Z|))\geq\operatorname{VaR}_{t,\alpha-\delta}(-X)+\operatorname{VaR}_{t,1-\delta}(|Z|)$.
We now analyze the resulting expected value part, using subadditivity of $x\mapsto(x)_{+}$:
$\displaystyle-\mathbb{E}_{t}[(\operatorname{VaR}_{t,\alpha-\delta}(-X)+\operatorname{VaR}_{t,1-\delta}(|Z|)-X+|Z|)_{+}]$
$\displaystyle\quad\geq-\mathbb{E}_{t}[(\operatorname{VaR}_{t,\alpha-\delta}(-X)-X)_{+}]-\mathbb{E}_{t}[(\operatorname{VaR}_{t,1-\delta}(|Z|)+|Z|)_{+}]$
$\displaystyle\quad\geq-\mathbb{E}_{t}[(\operatorname{VaR}_{t,\alpha-\delta}(-X)-X)_{+}]-\mathbb{E}_{t}[|Z|].$
Hence we get the lower bound
$\displaystyle\varphi_{t,\alpha}(X-|Z|)$
$\displaystyle\geq\varphi_{t,\alpha-\delta}(X)+\operatorname{VaR}_{t,1-\delta}(|Z|)-\frac{1}{1+\eta}\mathbb{E}_{t}[|Z|]$
$\displaystyle\geq\varphi_{t,\alpha-\delta}(X)-\frac{2}{\delta}\mathbb{E}_{t}[|Z|].$
Here, again, we have used the Markov’s inequality bound from Corollary 1.
Hence we have shown that
$\displaystyle\varphi_{t,\alpha}(Y)\in\Big{[}\varphi_{t,\alpha-\delta}(X)-\frac{2}{\delta}\mathbb{E}_{t}[|X-Y|],\varphi_{t,\alpha+\delta}(X)+\frac{2}{\delta}\mathbb{E}_{t}[|X-Y|]\Big{]},$
from which (24) immediately follows. ∎
###### Proof of Corollary 2.
Choose a sequence $\delta_{n}\to 0$ such that
$\frac{2}{\delta_{n}^{2}}\mathbb{E}[|X-X_{n}|^{2}]\to 0$ with
$\alpha+\delta_{n}<1$ and $\delta_{n}<1/2$. We now use the following
inequality, which follows from the monotonicity of $\varphi_{t,\alpha}$ in
$\alpha$:
$\displaystyle|\varphi_{t,\alpha}(X)-\varphi_{t,\alpha}(X_{n})|$
$\displaystyle\quad\leq\varphi_{t,\alpha+\delta_{n}}(X)-\varphi_{t,\alpha-\delta_{n}}(X)+\inf_{|\epsilon|<\delta_{n}}|\varphi_{t,\alpha+\epsilon}(X)-\varphi_{t,\alpha}(X_{n})|$
$\displaystyle\quad\leq\varphi_{t,\alpha+\delta_{n}}(X)-\varphi_{t,\alpha-\delta_{n}}(X)+\frac{2}{\delta_{n}}\mathbb{E}[|X-X_{n}|\mid\mathcal{H}_{t}]$
By $L^{2}$-convergence and our choice of $\delta_{n}$, the last term clearly goes to $0$.
As for the first summand, we see that for any sequence $\delta_{n}\to 0$,
$\varphi_{t,\alpha-\delta_{n}}(X)-\varphi_{t,\alpha+\delta_{n}}(X)\to 0$
almost surely (by the continuity assumption of $\operatorname{VaR}_{t,u}$ at
$\alpha$) and furthermore it is a decreasing sequence of nonnegative random
variables in $L^{2}(\mathcal{H}_{t})$. Hence by Lebesgue’s monotone
convergence theorem
$||\varphi_{t,\alpha-\delta_{n}}(X)-\varphi_{t,\alpha+\delta_{n}}(X)||_{2}\to
0$. This concludes the proof. ∎
###### Proof of Lemma 7.
By Corollary 2, $\varphi_{t,\alpha}$ is $L^{2}$-continuous with respect to limit objects with a.s. continuous $t$-conditional distributions. Hence the proof is completely analogous to that of Lemma 1. ∎
###### Proof of Lemma 8.
Fix $\epsilon>0$. We want to show that
$\mathbb{P}(||\varphi_{t,\alpha}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha}(\beta_{n}^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})||_{2}>\epsilon)\to
0$ as $n\to\infty$. We first note that, for any $\delta\in(0,1-\alpha)$ with
$\delta<1/2$, we have an inequality similar to that in the proof of Corollary
2:
$\displaystyle||\varphi_{t,\alpha}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha}(\beta_{n}^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})||_{2}$
$\displaystyle\quad\leq||\varphi_{t,\alpha-\delta}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha+\delta}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})||_{2}$
$\displaystyle\quad+\inf_{|\xi|\leq\delta}||\varphi_{t,\alpha+\xi}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha}(\beta_{n}^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})||_{2}$
If we look at the first summand, we see that for any sequence $\delta_{n}\to
0$,
$\varphi_{t,\alpha-\delta_{n}}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha+\delta_{n}}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})\to
0$ almost surely (by a.s. continuity) and furthermore it is a decreasing
sequence of nonnegative random variables. Hence by Lebesgue’s monotone
convergence theorem
$||\varphi_{t,\alpha-\delta_{n}}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha+\delta_{n}}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})||_{2}\to
0$ as a sequence of constants, since the expression is independent of $D$.
We now apply Theorem 2 to the second term to see that
$\displaystyle\inf_{|\xi|\leq\delta}||\varphi_{t,\alpha+\xi}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha}(\beta_{n}^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})||_{2}$
$\displaystyle\leq\frac{2}{\delta}||(\beta-\beta_{n})^{\mathrm{T}}\mathbf{\Phi}_{t+1,N}||_{2}$
$\displaystyle\leq\frac{2}{\delta}||\beta-\beta_{n}||_{\infty}||\mathbf{\Phi}_{t+1,N}||_{2}$
Note that $K_{\Phi}:=||\mathbf{\Phi}_{t+1,N}||_{2}$ is just a constant. We now note that, as $||\beta-\beta_{n}||_{\infty}\to 0$ in probability, for any
fixed $\epsilon>0$, it is possible to choose a sequence $\delta_{n}\to 0$ such
that
$\mathbb{P}(\frac{2K_{\Phi}}{\delta_{n}}||\beta-\beta_{n}||_{\infty}>\epsilon)\to
0$. Hence, for any fixed $\epsilon>0$,
$\mathbb{P}(||\varphi_{t,\alpha}(\beta^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})-\varphi_{t,\alpha}(\beta_{n}^{\mathrm{T}}\mathbf{\Phi}_{t+1,N})||_{2}>\epsilon)\to
0$ as $n\to\infty$. ∎
###### Proof of Theorem 3.
We prove the statement by backwards induction, starting from time $t=T$. The
induction base is trivial. Now assume that the statement holds for time $t+1$.
But then, by Lemma 8,
$||\varphi_{t,\alpha}(L_{t+1}+\widehat{V}_{N,t,\alpha}(L))-\varphi_{t,\alpha}(L_{t+1}+\widehat{V}^{(M)}_{N,t,\alpha}(L))||_{2}\to
0$ in probability. Hence, we get immediately by Lemma 2 that
$||\widehat{V}_{N,t,\alpha}(L)-\widehat{V}_{N,t,\alpha}^{(M)}(L)||_{2}\to 0$
in probability. This concludes the proof. ∎
# Linear and nonlinear Hall conductivity in presence of interaction and
disorder
Raffaele Resta, Istituto Officina dei Materiali IOM-CNR,
Strada Costiera 11, 34151 Trieste, Italy; Donostia International Physics
Center, 20018 San Sebastián, Spain
###### Abstract
The theory of the nonlinear Hall effect has been established by I. Sodemann
and L. Fu [Phys. Rev. Lett. 115, 216806 (2015)] in a semiclassical framework:
therein, the effect appears as a geometrical property of Bloch electrons,
originating from their anomalous velocity. Here I present a more general
theory, addressing correlated and/or noncrystalline systems as well, where the
expressions of both linear and nonlinear Hall conductivities originate from
the many-electron anomalous velocity. The independent-electron results are
retrieved as special cases.
It has long been known that transverse dc conductivity is allowed—to
linear order in the field—only in materials which spontaneously break time-
reversal (T) symmetry: it then goes under the name of anomalous Hall
conductivity (AHC) Nagaosa10 . More recently it has been pointed out that
second-order transverse dc conductivity can be nonzero even in T-symmetric
materials, provided that inversion (I) symmetry is absent: the quadratic dc
response is then called nonlinear Hall conductivity (NHC); the theory so far
is based on geometrical concepts at the independent-electron level for
crystalline systems, and the relevant expressions are obtained semiclassically
Sodemann15 ; Matsyshyn19 ; Nandy19 . In this Letter I show how to formulate
the theory at a much more general level, encompassing correlated and/or
disordered systems as well. Even in the present case the theory is based on
geometrical concepts, although in a many-body framework: in particular on the
many-body Berry curvature, whose root is in a seminal paper by Niu and
Thouless Niu84 . The known independent-electron NHC formula Sodemann15 ;
Matsyshyn19 will be retrieved as a special case; a few other known results
will be also presented en passant, obtained here via somewhat unconventional
proofs.
The independent-electron geometrical theory for a pristine crystal only
provides the intrinsic AHC term; extrinsic terms are necessarily present in
the case of metals Nagaosa10 . The present formulation allows in principle for
the inclusion of disorder and accounts therefore for a part of the extrinsic
effects as well, thus generalizing a previous work at the independent-electron
level rap149 .
An outstanding qualitative difference exists between AHC and NHC. In the
former case the geometrical intrinsic term is nondissipative: it yields a dc
current without any mechanism accounting for dissipation (e.g. relaxation
times); in the latter case, instead, the geometrical expressions yield a
transverse free acceleration; one gets a dc current only after some
dissipation mechanism is accounted for. NHC can therefore be assimilated to a
skewed nonlinear Drude-like conductivity.
The starting point of the present theory is a milestone paper published by
Kohn in 1964 Kohn64 . Following him, we consider a system of $N$ interacting
$d$-dimensional electrons in a cubic box of volume $L^{d}$, and the family of
many-body Hamiltonians parametrized by $\kappa$, called “flux” or “twist”:
$\hat{H}_{\mbox{\boldmath$\kappa$}}=\frac{1}{2m}\sum_{i=1}^{N}\left[{\bf
p}_{i}+\frac{e}{c}{\bf A}^{(\rm micro)}({\bf
r}_{i})+\hbar\mbox{\boldmath$\kappa$}\right]^{2}+\hat{V},$ (1)
where $\hat{V}$ includes the one-body potential (possibly disordered) and
electron-electron interaction, while the microscopic vector potential ${\bf
A}^{(\rm micro)}({\bf r})$ summarizes all the intrinsic T-breaking terms, as
e.g. those due to spin-orbit coupling to a background of local moments. We
assume the system to be macroscopically homogeneous; the eigenstates
$|\Psi_{n\mbox{\boldmath$\kappa$}}\rangle$ are normalized to one in the
hypercube of volume $L^{Nd}$. The thermodynamic limit $N\rightarrow\infty$,
$L\rightarrow\infty$, $N/L^{d}=n$ constant, is understood throughout this
Letter. In order to simplify notations we will set $\hat{H}_{0}\equiv\hat{H}$,
$|\Psi_{n0}\rangle\equiv|\Psi_{n}\rangle$ , $E_{n0}\equiv E_{n}$.
We assume Born-von-Kármán periodic boundary conditions (PBCs): the many-body
wavefunctions are periodic with period $L$ over each electron coordinate ${\bf
r}_{i}$ independently; the potential $\hat{V}$ and the intrinsic vector
potential ${\bf A}^{(\rm micro)}({\bf r})$ enjoy the same periodicity. The
flux $\kappa$—cast into inverse-length dimensions for convenience—corresponds
to perturbing the Hamiltonian with a vector potential ${\bf A}=\hbar
c\mbox{\boldmath$\kappa$}/e$, constant in space. Kohn only considered a time-
independent $\kappa$, which amounts to a pure gauge transformation; the latter
has nontrivial effects, given that PBCs violate gauge-invariance in the
conventional sense Kohn64 . Here we additionally consider even a time-
dependent flux, which amounts to perturbing the Hamiltonian with the
macroscopic field $\mbox{\boldmath${\cal E}$}(t)=-\dot{\bf
A}(t)/c=-\hbar\dot{\mbox{\boldmath$\kappa$}}(t)/e$.
The kinetic-energy term in Eq. (1) defines the extensive many-electron
velocity as
$\hat{{\bf v}}_{\mbox{\boldmath$\kappa$}}=\frac{1}{m}\sum_{i=1}^{N}\left[{\bf
p}_{i}+\frac{e}{c}{\bf A}^{\rm(micro)}({\bf
r}_{i})+\hbar\mbox{\boldmath$\kappa$}\right]=\frac{1}{\hbar}\partial_{\mbox{\boldmath$\kappa$}}\hat{H}_{\mbox{\boldmath$\kappa$}}.$
(2)
When $\kappa$ is adiabatically varied in time the instantaneous current
density is the sum of two terms: the expectation value of the current
operator, and the Niu-Thouless adiabatic current Niu84 ; Xiao10 . Their
expression is cast as:
$\displaystyle j_{\alpha}$ $\displaystyle=$ $\displaystyle-\frac{e}{\hbar
L^{d}}\langle\Psi_{0\mbox{\boldmath$\kappa$}}|\partial_{\kappa_{\alpha}}\hat{H}_{\mbox{\boldmath$\kappa$}}|\Psi_{0\mbox{\boldmath$\kappa$}}\rangle$
(3) $\displaystyle+$
$\displaystyle\frac{ie}{L^{d}}(\langle\partial_{\kappa_{\alpha}}{\Psi}_{0\mbox{\boldmath$\kappa$}}|\dot{\Psi}_{0\mbox{\boldmath$\kappa$}}\rangle-\langle\dot{\Psi}_{0\mbox{\boldmath$\kappa$}}|\partial_{\kappa_{\alpha}}\Psi_{0\mbox{\boldmath$\kappa$}}\rangle)$
$\displaystyle=$
$\displaystyle-\frac{e}{L^{d}}\left(\frac{1}{\hbar}\partial_{\kappa_{\alpha}}E_{0\mbox{\boldmath$\kappa$}}-\Omega_{\alpha\beta}(\mbox{\boldmath$\kappa$})\dot{\kappa}_{\beta}\right)\;,$
where the sum over repeated Cartesian indices is understood, and
$\Omega_{\alpha\beta}(\mbox{\boldmath$\kappa$})$ is the many-body Berry
curvature
$\Omega_{\alpha\beta}(\mbox{\boldmath$\kappa$})=-2\,\mbox{Im
}\langle\partial_{\kappa_{\alpha}}\Psi_{0\mbox{\boldmath$\kappa$}}|\partial_{\kappa_{\beta}}\Psi_{0\mbox{\boldmath$\kappa$}}\rangle.$
(4)
The extensive quantity
$\Omega_{\alpha\beta}(\mbox{\boldmath$\kappa$})\dot{\kappa}_{\beta}$ is the
many-electron anomalous velocity. In the static case
($\dot{\mbox{\boldmath$\kappa$}}=0$) no dc current may flow through an
insulating sample, ergo the ground-state energy
$E_{0\mbox{\boldmath$\kappa$}}=E_{0}$ is $\kappa$-independent; in metals,
instead, $E_{0\mbox{\boldmath$\kappa$}}$ does depend on $\kappa$ Kohn64 .
The linear conductivity is by definition
$\sigma_{\alpha\beta}(\omega)=\frac{\partial j_{\alpha}(\omega)}{\partial{\cal
E}_{\beta}(\omega)}=\frac{\partial j_{\alpha}(\omega)}{\partial
A_{\beta}(\omega)}\frac{dA(\omega)}{d{\cal E}(\omega)};$ (5)
since ${\cal E}(\omega)=i\omega A(\omega)/c$, causal inversion yields the last
factor as Scalapino92
$\frac{dA(\omega)}{d{\cal
E}(\omega)}=-\frac{ic}{\omega+i\eta}=-c\left[\pi\delta(\omega)+\frac{i}{\omega}\right].$
(6)
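The last equality in Eq. (6) is the Sokhotski–Plemelj distributional limit; spelled out (my addition, for completeness):

```latex
\lim_{\eta\to 0^{+}}\frac{1}{\omega+i\eta}
=\lim_{\eta\to 0^{+}}\frac{\omega-i\eta}{\omega^{2}+\eta^{2}}
=\mathrm{P}\,\frac{1}{\omega}-i\pi\delta(\omega),
```

since $\omega/(\omega^{2}+\eta^{2})\to\mathrm{P}(1/\omega)$ and $\eta/(\omega^{2}+\eta^{2})\to\pi\delta(\omega)$. Multiplying by $-ic$ gives $-c[\pi\delta(\omega)+i\,\mathrm{P}(1/\omega)]$, i.e. Eq. (6) with the principal value left implicit.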
At finite $\omega$, the linear response $\partial j_{\alpha}(\omega)/\partial
A_{\beta}(\omega)$ is provided by time-dependent perturbation theory (i.e.
Kubo formulæ suppl ); here instead we only address the response to a dc
macroscopic field. The physical perturbation is therefore static; it enters
the Hamiltonian as a dynamical one in the adiabatic limit, owing to the
vector-potential gauge, mandatory within PBCs nota2 . Hence we set
$\frac{\partial j_{\alpha}(\omega)}{\partial
A_{\beta}(\omega)}\doteq\frac{\partial j_{\alpha}(0)}{\partial A_{\beta}(0)},$
(7)
where the symbol “$\doteq$” means “equal in the dc limit”.
We chose the perturbing vector potential in the form ${\bf A}(t)={\bf
A}(\omega){\rm e}^{-i\omega t}$, ergo we set
$\mbox{\boldmath$\kappa$}(t)=\frac{e}{\hbar c}{\bf A}(\omega){\rm e}^{-i\omega
t},\quad\quad\dot{\mbox{\boldmath$\kappa$}}(t)=-\frac{ie\omega}{\hbar c}{\bf
A}(\omega){\rm e}^{-i\omega t},$ (8)
whence (to lowest nonvanishing order in $\omega$):
$\mbox{\boldmath$\kappa$}\doteq\frac{e}{\hbar c}{\bf
A}(0),\quad\dot{\mbox{\boldmath$\kappa$}}\doteq-\frac{ie\omega}{\hbar c}{\bf
A}(0).$ (9)
From Eqs. (3) and (9) it follows that
$\frac{\partial j_{\alpha}(\omega)}{\partial
A_{\beta}(\omega)}\doteq\frac{\partial j_{\alpha}(0)}{\partial
A_{\beta}(0)}=-\frac{e^{2}}{\hbar
cL^{d}}\left(\frac{1}{\hbar}\frac{\partial^{2}E_{0}}{\partial{\kappa_{\alpha}}\partial{\kappa_{\beta}}}-i\omega\Omega_{\alpha\beta}(0)\right).$
(10)
The product of Eq. (10) times Eq. (6) yields the real parts of symmetric
(longitudinal) and antisymmetric (transverse) dc conductivities as:
$\displaystyle\mbox{Re }\sigma_{\alpha\beta}^{(+)}(\omega)$
$\displaystyle\doteq$ $\displaystyle\frac{\pi
e^{2}}{\hbar^{2}L^{d}}\frac{\partial^{2}E_{0}}{\partial{\kappa_{\alpha}}\partial{\kappa_{\beta}}}\delta(\omega)=D_{\alpha\beta}\delta(\omega);$
(11) $\displaystyle\mbox{Re }\sigma_{\alpha\beta}^{(-)}(\omega)$
$\displaystyle\doteq$ $\displaystyle\mbox{Re
}\sigma_{\alpha\beta}^{(-)}(0)=-\frac{e^{2}}{\hbar
L^{d}}\Omega_{\alpha\beta}(0).$ (12)
Neither of these equations is new; both can be alternatively obtained from the
standard sum-over-states Kubo formulæ suppl in the $\omega\rightarrow 0$
limit.
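As a consistency check of Eq. (11) (my addition, not part of the Letter), consider $N$ noninteracting free electrons, for which the flux simply shifts every momentum:

```latex
E_{0\mbox{\boldmath$\kappa$}}=\sum_{i=1}^{N}\frac{\hbar^{2}({\bf k}_{i}+\mbox{\boldmath$\kappa$})^{2}}{2m}
\;\Longrightarrow\;
\frac{\partial^{2}E_{0}}{\partial\kappa_{\alpha}\,\partial\kappa_{\beta}}=\frac{N\hbar^{2}}{m}\,\delta_{\alpha\beta},
\qquad
D_{\alpha\beta}=\frac{\pi e^{2}n}{m}\,\delta_{\alpha\beta},
```

which is the standard Drude weight with $n=N/L^{d}$.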
The present unconventional derivation has the virtue of being easily
generalizable to nonlinear dc conductivity, which is the major focus of the
present work. Eq. (11) is Kohn’s milestone expression for the Drude term in
longitudinal conductivity Kohn64 ; the derivation given here is inspired by
Ref. Scalapino92 . As for Eq. (12), it holds for either insulators or metals,
for either $d=2$ or $d=3$, and yields the geometric (or intrinsic) term in the
AHC; extrinsic effects are discussed in the final part of the present Letter.
AHC is nonzero only if the Hamiltonian of Eq. (1) breaks T-symmetry at
$\mbox{\boldmath$\kappa$}=0$ (see also the discussion below about symmetry).
The case of a two-dimensional insulator deserves a separate discussion.
Transverse conductivity is quantized:
$\sigma_{xy}^{(-)}(0)=-\frac{e^{2}}{h}C_{1},$ (13)
where $C_{1}\in{\mathbb{Z}}$ is a Chern number. This famous relationship was
first established at the independent-electron level, where $C_{1}$ is also
known as TKNN invariant Thouless82 ; it was later generalized by Niu,
Thouless, and Wu, who provided the many-body expression for $C_{1}$ Niu85 .
Following Ref. Xiao10 (Sec. III.C) the same invariant is conveniently recast
as
$C_{1}=\frac{1}{2\pi}\int_{0}^{\frac{2\pi}{L}}\\!\\!d\kappa_{x}\int_{0}^{\frac{2\pi}{L}}\\!\\!d\kappa_{y}\;{\sf\Omega}_{xy}(\mbox{\boldmath$\kappa$});$
(14)
Eq. (14) is quantized because it is equivalent to the integral over a torus.
In order to show this, we recall that in insulators the ground-state energy
$E_{0\mbox{\boldmath$\kappa$}}$ is $\kappa$-independent, and we observe that
whenever the components of
$\mbox{\boldmath$\kappa$}-\mbox{\boldmath$\kappa$}^{\prime}$ are integer
multiples of $2\pi/L$, then the state ${\rm
e}^{i(\mbox{\boldmath$\kappa$}-\mbox{\boldmath$\kappa$}^{\prime})\cdot\hat{{\bf
r}}}|\Psi_{0\mbox{\boldmath$\kappa$}}\rangle$ is eigenstate of
$\hat{H}_{\mbox{\boldmath$\kappa$}^{\prime}}$ with the same eigenvalue as
$|\Psi_{0\mbox{\boldmath$\kappa$}}\rangle$. The eigenstates which define
${\sf\Omega}_{xy}(\mbox{\boldmath$\kappa$})$ have therefore the required
toroidal periodicity:
$|\Psi_{0\mbox{\boldmath$\kappa$}^{\prime}}\rangle={\rm
e}^{i(\mbox{\boldmath$\kappa$}-\mbox{\boldmath$\kappa$}^{\prime})\cdot\hat{{\bf
r}}}|\Psi_{0\mbox{\boldmath$\kappa$}}\rangle.$ (15)
Since ${\sf\Omega}_{xy}(\mbox{\boldmath$\kappa$})$ is gauge-invariant, an
arbitrary $\kappa$-dependent phase factor may relate the two members of Eq.
(15). It is worth stressing that in the topological case a globally smooth
periodic gauge does not exist; in other words we can enforce Eq. (15) as it
stands (with no extra phase factor) only locally, not globally; we also notice
that Eq. (15) may be regarded as the many-body analogue of the periodic gauge
in band-structure theory Vanderbilt .
Eq. (14) is independent of the $L$ value, and its integrand is extensive:
therefore in the large-$L$ limit the integration domain contracts to a point:
$C_{1}=\frac{1}{2\pi}\left(\frac{2\pi}{L}\right)^{2}\Omega_{xy}(0).$ (16)
By comparing this to Eq. (12) for $d=2$, Eq. (13) is immediately retrieved.
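Eqs. (13)–(14) can be checked numerically at the independent-electron level. The sketch below (my addition, not from the Letter) computes $C_{1}$ for the two-band Qi–Wu–Zhang lattice model by summing the Berry flux of the occupied band over a discretized torus, the gauge-invariant lattice analogue of Eq. (14) due to Fukui, Hatsugai and Suzuki; the model and grid size are illustrative choices.

```python
import numpy as np

def qwz(kx, ky, m):
    """Qi-Wu-Zhang Bloch Hamiltonian h(k) = sin(kx) sx + sin(ky) sy + M sz."""
    M = m + np.cos(kx) + np.cos(ky)
    return np.array([[M, np.sin(kx) - 1j * np.sin(ky)],
                     [np.sin(kx) + 1j * np.sin(ky), -M]])

def chern_number(m, N=40):
    """Lattice Berry flux of the lower band summed over the BZ torus."""
    ks = 2 * np.pi * np.arange(N) / N
    # Lower-band eigenvector at each grid point; arbitrary eigenvector phases
    # are harmless because the plaquette product below is gauge invariant.
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(qwz(kx, ky, m))[1][:, 0]
    flux = 0.0
    for i in range(N):
        for j in range(N):
            u1, u2 = u[i, j], u[(i + 1) % N, j]
            u3, u4 = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            flux += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                             * np.vdot(u3, u4) * np.vdot(u4, u1))
    return flux / (2 * np.pi)

print(chern_number(1.0), chern_number(3.0))  # |C1| = 1 in the topological phase, 0 otherwise
```

The sum of plaquette phases is an integer multiple of $2\pi$ whenever the grid is fine enough, mirroring the exact quantization of Eq. (14).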
Next we move on to deal with nonlinear conductivity; for the symmetric
longitudinal term we adopt the same definitions as in Refs. Watanabe20 ; suppl
. The same logic as adopted above yields
$\sigma_{\alpha\beta\gamma}^{(+)}(\omega_{1},\omega_{2})\doteq\frac{e^{3}}{\hbar^{3}L^{d}}\frac{\partial^{3}E_{0}}{\partial\kappa_{\alpha}\,\partial\kappa_{\beta}\,\partial\kappa_{\gamma}}\;\frac{i}{\omega_{1}+i\eta}\frac{i}{\omega_{2}+i\eta}:$
(17)
not surprisingly, this is indeed identical to the recent finding of Ref.
Watanabe20 .
In order to address the antisymmetric second-order term, we expand the many-
electron anomalous velocity as
$\Omega_{\alpha\beta}(\mbox{\boldmath$\kappa$})\dot{\kappa}_{\beta}\simeq\Omega_{\alpha\beta}(0)\dot{\kappa}_{\beta}+\partial_{\kappa_{\gamma}}\Omega_{\alpha\beta}(0)\;\dot{\kappa}_{\beta}{\kappa}_{\gamma}.$
(18)
The first term yields the AHC, Eq. (12); we focus on the second term in the
following, and we evaluate it in the adiabatic limit. Eq. (3) yields, to
second order,
$\displaystyle j^{(2)}_{\alpha}(\omega)$ $\displaystyle\doteq$
$\displaystyle\frac{e}{L^{d}}\partial_{\kappa_{\gamma}}\Omega_{\alpha\beta}(0)\;\dot{\kappa}_{\beta}{\kappa}_{\gamma}$
(19) $\displaystyle\doteq$ $\displaystyle-\frac{e^{2}}{\hbar
L^{d}}\partial_{\kappa_{\gamma}}\Omega_{\alpha\beta}(0)\;{\cal
E}_{\beta}\,{\kappa}_{\gamma},$
where the second equality owes to $\mbox{\boldmath${\cal
E}$}(t)=-\hbar\dot{\mbox{\boldmath$\kappa$}}(t)/e$ in the dc limit; the
$\kappa_{\gamma}$ factor is dealt with in the same way as in Eqs. (6) and (8),
i.e.
$\kappa_{\gamma}(t)=\frac{e}{\hbar c}A_{\gamma}(\omega){\rm e}^{-i\omega
t}=-\frac{i}{\omega+i\eta}\frac{e}{\hbar}{\cal E}_{\gamma}(\omega){\rm
e}^{-i\omega t}.$ (20)
Therefore, to leading order in $\omega$,
$\displaystyle j^{(2)}_{\alpha}(\omega)$ $\displaystyle\doteq$
$\displaystyle\frac{e^{3}}{\hbar^{2}L^{d}}\partial_{\kappa_{\gamma}}\Omega_{\alpha\beta}(0)\frac{i}{\omega+i\eta}{\cal
E}_{\beta}\,{\cal E}_{\gamma}$ $\displaystyle\doteq$
$\displaystyle\frac{i}{\omega+i\eta}\chi_{\alpha\beta\gamma}{\cal
E}_{\beta}{\cal
E}_{\gamma},\quad\chi_{\alpha\beta\gamma}=\frac{e^{3}}{\hbar^{2}L^{d}}\partial_{\kappa_{\gamma}}\Omega_{\alpha\beta}(0).$ (21)
This is the major result of the present work: the sought-for general NHC
formula, Eq. (21), which also applies to cases with interaction and/or
disorder. For a crystalline system of independent electrons, Eq. (21)
converges—in the large-sample limit—to the original Sodemann-Fu formula: see
Eq. (25) below.
The real part of the $\omega$-dependent factor in Eq. (21) equals
$\pi\delta(\omega)$: the many-electron system undergoes a transverse free
acceleration. One gets a dc current upon replacement of the infinitesimal
$\eta$ with an inverse relaxation time $1/\tau$. This is in stark contrast
with AHC, Eq. (12), accounting for a $\tau$-independent dc current (some
extrinsic AHC contributions are $\tau$-dependent; see below).
As for the symmetry properties of Eq. (21), we recall that in the presence of
T-symmetry
$\Omega_{\alpha\beta}(\mbox{\boldmath$\kappa$})=-\Omega_{\alpha\beta}(-\mbox{\boldmath$\kappa$})$,
while in the presence of I-symmetry
$\Omega_{\alpha\beta}(\mbox{\boldmath$\kappa$})=\Omega_{\alpha\beta}(-\mbox{\boldmath$\kappa$})$
Xiao10 : therefore in a T-symmetric system $\Omega_{\alpha\beta}(0)=0$ and the
AHC vanishes. In the case of NHC the parity is swapped: the gradient of
$\Omega_{\alpha\beta}(\mbox{\boldmath$\kappa$})$ is even in T-symmetric
systems, and odd in I-symmetric systems. Therefore the NHC requires breaking
of I-symmetry; in the special case of a T-symmetric and I-breaking system,
nonzero transverse conductivity appears at second order only.
Since the responses to ${\cal E}_{\beta}{\cal E}_{\gamma}$ and to ${\cal
E}_{\gamma}{\cal E}_{\beta}$ coincide, $\chi_{\alpha\beta\gamma}$ is
symmetrical in the $\beta,\gamma$ indices, while instead it is antisymmetrical
in the $\alpha,\beta$ and $\alpha,\gamma$ indices. Therefore the current is
always orthogonal to the field: if—for an arbitrary ${\cal E}$ orientation—we
set the $x$-axis along ${\cal E}$, then $j_{x}\propto\chi_{xxx}=0$, while
$j_{y}\propto\chi_{yxx}$ and $j_{z}\propto\chi_{zxx}$ are not constrained to
be zero by (this) symmetry.
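The geometric origin of this orthogonality is simply the antisymmetry of $\Omega_{\alpha\beta}$, and hence of $\partial_{\kappa_{\gamma}}\Omega_{\alpha\beta}(0)$, in $(\alpha,\beta)$. A minimal numerical check (illustrative random tensor, not a physical model; prefactors dropped):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 3))
# Stand-in for the curvature gradient: antisymmetric in its first two indices.
grad_omega = A - A.transpose(1, 0, 2)
E = rng.standard_normal(3)
# Second-order current j_a = chi_{abg} E_b E_g, up to prefactors.
j = np.einsum('abg,b,g->a', grad_omega, E, E)
print(np.dot(j, E))  # vanishes (to machine precision): j is orthogonal to E
```

The contraction $\chi_{\alpha\beta\gamma}E_{\alpha}E_{\beta}E_{\gamma}$ pairs an antisymmetric index pair with the symmetric product $E_{\alpha}E_{\beta}$, so it vanishes identically.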
At the independent-electron level (either Hartree-Fock or Kohn-Sham) the many-
electron wavefunction is a Slater determinant of Bloch orbitals $|\psi_{j{\bf
k}}\rangle={\rm e}^{i{\bf k}\cdot{\bf r}}|u_{j{\bf k}}\rangle$; we normalize
them to one over the crystal cell. The Berry curvature of band $j$ is
Vanderbilt
$\tilde{\Omega}_{j,\alpha\beta}({\bf k})=-2\,\mbox{Im
}\langle\partial_{k_{\alpha}}u_{j{\bf k}}|\partial_{k_{\beta}}u_{j{\bf
k}}\rangle,$ (22)
and the many-body curvature per unit volume is suppl
$\frac{1}{L^{d}}\Omega_{\alpha\beta}(0)=\sum_{j}\int_{\rm BZ}\frac{d{\bf
k}}{(2\pi)^{d}}f_{j}({\bf k})\,\tilde{\Omega}_{j,\alpha\beta}({\bf k}),$ (23)
where BZ is the Brillouin zone, and $f_{j}({\bf k})$ is the Fermi factor at
$T=0$.
The equality holds in the $L\rightarrow\infty$ limit. The convergence of Eq.
(23) with $1/L$ has indeed been investigated by means of tight-binding
simulations in the simple case of a Chern insulator, where the r.h.s. is
quantized: Fig. 2 in Ref. rap135 .
It is then easy to prove suppl that
$\frac{1}{L^{d}}\partial_{\kappa_{\gamma}}\Omega_{\alpha\beta}(0)=\sum_{j}\int_{\rm
BZ}\frac{d{\bf k}}{(2\pi)^{d}}f_{j}({\bf
k})\;\partial_{k_{\gamma}}\tilde{\Omega}_{j,\alpha\beta}({\bf k}),$ (24)
whence
$\chi_{\alpha\beta\gamma}=\frac{e^{3}}{\hbar^{2}}\sum_{j}\int_{\rm
BZ}\frac{d{\bf k}}{(2\pi)^{d}}f_{j}({\bf
k})\;\partial_{k_{\gamma}}\tilde{\Omega}_{j,\alpha\beta}({\bf k}).$ (25)
This is equivalent—in the single-band case—to the semiclassical expression
which first appeared in the founding NHC paper by Sodemann and Fu Sodemann15 .
The current induced by a monochromatic field of frequency $\omega$ has a dc
(i.e. rectifying) component and a second-harmonic component. The adiabatic
limit of the two terms is considered separately in Ref. Sodemann15 , hence a
factor $1/2$ in each of them nota .
In the final part of this Letter I revert to AHC in order to comment on the
extrinsic effects. First of all I stress the quite different role of
impurities in the AHC of metals versus the quantized AHC of 2$d$ insulators:
in the former case extrinsic effects are necessarily present, while in the
latter case they are ruled out. In fact—as a basic tenet of
topology—impurities have no effect on linear Hall conductivity insofar as the
system remains insulating.
In a pristine metal the dc longitudinal conductivity is infinite: the Drude
term is proportional to $\delta(\omega)$. Extrinsic mechanisms are necessary
to warrant Ohm’s law, and are accounted for by relaxation time(s) $\tau$; in
the absence of T-symmetry, extrinsic effects contribute to AHC as well. Two
distinct mechanisms have been identified: they go under the name of “side
jump” and “skew scattering” Nagaosa10 . The side-jump term is nondissipative
(independent of $\tau$). Since a crystal with impurities actually is a (very)
dilute alloy, it was previously argued rap149 that the sum of the intrinsic
and side-jump terms can be regarded as the intrinsic (geometrical) term of the
dirty sample, whose AHC is given by Eq. (12) as it stands, provided that the
potential $\hat{V}$ includes the effect of the impurities. At the independent-
electron level, the same effect can in principle be retrieved from the
complementary real-space formulation of AHC rap153 . The other extrinsic term
(skew scattering) is dissipative, proportional to $\tau$ in the single-
relaxation-time approximation, and presumably cannot be explained by means of
geometrical concepts. Remarkably, NHC is also proportional to $\tau$, yet it
is a geometrical effect.
In this Letter I have started addressing linear dc conductivity (longitudinal
and transverse), showing that their many-body expressions can be retrieved in
an alternative way, making no use of the standard sum-over-states Kubo
formulæ; in this formulation AHC owes to the many-electron generalization of
the anomalous velocity. Then I have adopted the same logic to second order in
the field. In the longitudinal case the present approach retrieves the same
result as in Ref. Watanabe20 ; in the transverse case the quadratic expansion
of the anomalous velocity yields a compact generalization of the semiclassical
NHC formula of Ref. Sodemann15 . Even in the presence of electron-electron
interaction and/or disorder, NHC is dominated by the quantum geometry of the
electronic system. Finally, it is worth observing that—as often happens
when dealing with transport phenomena Xiao10 ; rap157 —the semiclassical NHC
coincides with the exact one at the independent-electron level.
I thank Gabriele Bellomia and Ivo Souza for illuminating discussions, and for
bringing some relevant papers to my attention. Work supported by the Office of
Naval Research (USA) Grant No. N00014-17-1-2803.
## References
* (1) N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Rev. Mod. Phys. 82, 1539 (2010).
* (2) I. Sodemann and L. Fu, Phys. Rev. Lett. 115, 216806 (2015).
* (3) O. Matsyshyn and I. Sodemann, Phys. Rev. Lett. 123, 246602 (2019).
* (4) S. Nandy and I. Sodemann, Phys. Rev. B 100, 195117 (2019).
* (5) Q. Niu and D. J. Thouless, J. Phys A 17, 2453 (1984).
* (6) R. Bianco, R. Resta, and I. Souza, Phys. Rev. B 90, 125153 (2014).
* (7) W. Kohn, Phys. Rev. 133, A171 (1964).
* (8) D. Xiao, M.-C. Chang, and Q. Niu, Rev. Mod. Phys. 82, 1959 (2010).
* (9) D. J. Scalapino, S. R. White, and S. C. Zhang, Phys. Rev. Lett. 68, 2830 (1992).
* (10) See Supplemental Material.
* (11) The scalar potential $\phi({\bf r})=-\mbox{\boldmath${\cal E}$}\cdot{\bf r}$ is incompatible with PBCs.
* (12) D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. 49, 405 (1982).
* (13) Q. Niu, D. J. Thouless, and Y. S. Wu, Phys. Rev. B 31, 3372 (1985).
* (14) D. Vanderbilt, Berry Phases in Electronic Structure Theory (Cambridge University Press, Cambridge, 2018).
* (15) H. Watanabe and M. Oshikawa, Phys. Rev. B 102, 165137 (2020).
* (16) D. Ceresoli and R. Resta, Phys. Rev. B 76, 012405 (2007).
* (17) The relationships between the vector and tensor forms of a Berry curvature are $\Omega_{\alpha\beta}=\varepsilon_{\alpha\beta\gamma}\Omega_{\gamma}$, $\Omega_{\gamma}=\frac{1}{2}\varepsilon_{\alpha\beta\gamma}\Omega_{\alpha\beta}$.
* (18) A. Marrazzo and R. Resta, Phys. Rev. B 95, 121114(R) (2017).
* (19) R. Resta, J. Phys. Condens. Matter 30, 414001 (2018).
# “Laughing at you or with you”: The Role of Sarcasm
in Shaping the Disagreement Space
Debanjan Ghosh1, Ritvik Shrivastava*2,3 and Smaranda Muresan3
1Educational Testing Service
2MindMeld, Cisco Systems
3Data Science Institute, Columbia University
*Equal Contribution.
###### Abstract
Detecting arguments in online interactions is useful to understand how
conflicts arise and get resolved. Users often use figurative language, such as
sarcasm, either as persuasive devices or to attack the opponent by an ad
hominem argument. To further our understanding of the role of sarcasm in
shaping the disagreement space, we present a thorough experimental setup using
a corpus annotated with both argumentative moves (agree/disagree) and sarcasm.
We exploit joint modeling in terms of (a) applying discrete features that are
useful in detecting sarcasm to the task of argumentative relation
classification (agree/disagree/none), and (b) multitask learning for
argumentative relation classification and sarcasm detection using deep
learning architectures (e.g., dual Long Short-Term Memory (LSTM) with
hierarchical attention and Transformer-based architectures). We demonstrate
that modeling sarcasm improves the argumentative relation classification task
(agree/disagree/none) in all setups.
## 1 Introduction
User-generated conversational data such as discussion forums provide a wealth
of naturally occurring arguments. The ability to automatically detect and
classify argumentative relations (e.g., agree/disagree) in threaded
discussions is useful to understand how collective opinions form, how conflict
arises and is resolved (van Eemeren et al., 1993; Abbott et al., 2011; Walker
et al., 2012b; Misra and Walker, 2013; Ghosh et al., 2014; Rosenthal and
McKeown, 2015; Stede and Schneider, 2018).
Arg. Rel. | Turn Pairs
---|---
$Agree$ | Prior Turn: Today, no informed creationist would deny natural selection. Current Turn: Seeing how this was proposed over a century and a half ago by Darwin, what took the creationists so long to catch up?
$Disagree$ | Prior Turn: Personally I wouldn’t own a gun for self defense because I am just not that big of a sissy. Current Turn: Because taking responsibility for ones own safety is certainly a sissy thing to do?
$Disagree$ | Prior Turn: I’m not surprised that no one on your side of the debate would correct you, but wolves and dogs are both members of the same species. The Canid species. Current Turn: Wow, you’re even wrong when you get away from your precious Bible and try to sound scientific.
$None$ | Prior Turn: The hand of God kept me from serious harm. Maybe He has a plan for me. Current Turn: You better hurry up. Aren’t you like 113 years old?
Table 1: Sarcastic turns that disagree, agree, or have no argumentative
relation with their prior turns.
Linguistic and argumentation theories have thoroughly studied the use of
sarcasm in argumentation, including its effectiveness as a persuasive device
or as a means to express an ad hominem fallacy (attacking the opponent instead
of her/his argument) Tindale and Gough (1987); van Eemeren and Grootendorst
(1992); Gibbs and Izett (2005); Averbeck (2013). We propose an experimental
setup to further our understanding of the role of sarcasm in shaping the
disagreement space in online interactions. The disagreement space, defined in
the context of the dialogical perspective on argumentation, is seen as the
speech acts initiating the difference of opinions that argumentation is
intended to resolve Jackson (1992); van Eemeren et al. (1993). Our study is
based on the Internet Argument Corpus (IAC) introduced by Abbott et al. (2011)
that contains online discussions annotated for the presence/absence and the
type of an argumentative move (agree/disagree/none) as well as the
presence/absence of sarcasm. Consider the dialogue turns from IAC in Table 1,
where the current turn (henceforth, $ct$) is a sarcastic response to the prior
turn (henceforth, $pt$). These dialogue moves can be argumentative
(agree/disagree) or not argumentative (none). The argumentative move can
express agreement (first example) or disagreement (the second example is an
undercutter, while the third example is an ad hominem attack). The fourth
example, although sarcastic, is not argumentative. Note that none of the
current turns contains explicit lexical terms that would signal an
argumentative relation with the prior turn; instead, the argumentative move is
expressed implicitly through sarcasm.
We study whether modeling _sarcasm_ can improve the detection and
classification of _argumentative relations_ in online discussions. We propose
a thorough experimental setup to answer this question using feature-based
machine learning approaches and deep learning models. For the former, we show
that _combining_ features that are useful to detect sarcasm Joshi et al.
(2015); Muresan et al. (2016); Ghosh and Muresan (2018) with state-of-the-art
argument features leads to better performance for the argumentative relation
classification task (agree/disagree/none) (Section 5). For the deep learning
approaches, we hypothesize that _multitask learning_ , which allows
representations to be shared between multiple tasks (here, the tasks of
argumentative relation classification and sarcasm detection), leads to better
generalization. We investigate the impact of multitask learning for a dual
Long Short-Term Memory (LSTM) Network with hierarchical attention Ghosh et al.
(2017) (Section 4.2) and BERT (Bidirectional Encoder Representations from
Transformers) Devlin et al. (2019), including an optional joint multitask
learning objective with uncertainty-based weighting of task-specific losses
Kendall et al. (2018) (Section 4.3). We demonstrate that multitask learning
improves the performance of the argumentative relation classification task for
all settings (Section 5). We provide a detailed qualitative analysis (Section
5.1) to give insights into when and how modeling sarcasm helps. We make the
code from our experiments publicly
available.111https://github.com/ritvikshrivastava/multitask_transformers The
Internet Argument Corpus ($IAC$) Walker et al. (2012b) is publicly available
here:222https://nlds.soe.ucsc.edu/iac2
## 2 Related Work
Argument mining is a growing area of research in computational linguistics,
focusing on the detection of argumentative structures in a text (see Stede and
Schneider (2018) for an overview). This paper focuses on two subtasks:
argumentative relation identification and classification (i.e.,
agree/disagree/none). Some of the earlier work on argumentative relation
identification and classification has relied on feature-based machine learning
models, focusing on online discussions Abbott et al. (2011); Walker et al.
(2012b); Misra and Walker (2013); Ghosh et al. (2014); Wacholder et al. (2014)
and monologues Stab and Gurevych (2014, 2017); Persing and Ng (2016); Ghosh et
al. (2016). Stab and Gurevych (2014) proposed a set of lexical, syntactic,
semantic, and discourse features to classify argumentative relations in
persuasive essays. On the same essay dataset,
Nguyen and Litman (2016) utilized contextual information to improve the
accuracy. Both Stab and Gurevych (2017) and Persing and Ng (2016) used Integer
Linear Programming (ILP) based joint modeling to detect argument components
and relations. Rosenthal and McKeown (2015) introduced sentence similarity and
accommodation features, whereas Menini and Tonelli (2016) presented how
entailment between text pairs can discover argumentative relations. Our
argumentative features in the feature-based model are based on the above works
(Section 4.1). We show that additional features that are useful in sarcasm
detection Joshi et al. (2015); Ghosh and Muresan (2018) enhance the
performance on the argumentative relation identification and classification
tasks.
In addition to feature-based models, deep learning models have been recently
used for these tasks. Potash et al. (2017) proposed a pointer network, and Hou
and Jochim (2017) offered LSTM+Attention network to predict argument
components and relations jointly, whereas Chakrabarty et al. (2019) exploited
adaptive pretraining Gururangan et al. (2020) for BERT to identify argument
relations. We use two multitask learning objectives (argumentative relation
identification/classification and sarcasm detection), as our goal is to
investigate whether identifying sarcasm can help in modeling the disagreement
space. Majumder et al. (2019) and Chauhan et al. (2020) used multitask
learning for sarcasm and sentiment, and for sarcasm, sentiment, and emotion,
respectively, where a direct link between the corresponding tasks is evident.
Finally, analyzing the role of sarcasm and verbal irony in argumentation has a
long history in linguistics Tindale and Gough (1987); Gibbs and Izett (2005);
Averbeck (2013); van Eemeren and Grootendorst (1992). We propose joint
modeling of argumentative relation detection and sarcasm detection to
empirically validate sarcasm’s role in shaping the disagreement space in
online conversations.
While the focus of our paper is not to provide a state-of-the-art sarcasm
detection model, our feature-based models, along with the deep learning models
for sarcasm detection are based on state-of-the-art approaches. We implemented
discrete features such as pragmatic features González-Ibáñez et al. (2011);
Muresan et al. (2016), diverse sarcasm markers Ghosh and Muresan (2018), and
incongruity detection features Riloff et al. (2013); Joshi et al. (2015). The
LSTM models are influenced by Ghosh and Veale (2017); Ghosh et al. (2018),
where the function of contextual knowledge is used to detect sarcasm. Lastly,
transformer models such as BERT and RoBERTa have been used in the winning
entries for the recent shared task on sarcasm detection Ghosh et al. (2020).
In our research, for both kinds of deep-learning models, the best results are
obtained by using the multitask setup, showing that multitask learning indeed
helps improve both tasks.
## 3 Data
Our $training$ and $test$ data are collected from the Internet Argument Corpus
($IAC$) Walker et al. (2012a). This corpus consists of posts from
conversations in online forums on a range of controversial political and
social topics such as Evolution, Abortion, Gun Control, and Gay Marriage
Abbott et al. (2011, 2016). Multiple versions of $IAC$ corpora are publicly
available, and we use a particular subset, marked as $IAC_{orig}$, collected
from Abbott et al. (2011). This consists of around 10K pairs of conversation
turns (i.e., prior turn $pt$ and the current turn $ct$) that were annotated
using Mechanical Turk for argumentative relations (agree/disagree/none) and
other characteristics such as sarcasm/non-sarcasm, respect/insult,
nice/nastiness. Median Cohen’s $\kappa$ is 0.5 across all topics.
Arg. Rel. | Sarcasm | # of turns
---|---|---
$A$ | $S$ | 315 (33%)
$A$ | $NS$ | 638 (67%)
$D$ | $S$ | 2207 (57%)
$D$ | $NS$ | 1696 (43%)
$N$ | $S$ | 2285 (44%)
$N$ | $NS$ | 2841 (56%)
Table 2: Dataset statistics; A (Agree), D (Disagree), N (None); S (Sarcasm), NS (Non-Sarcasm).
For the agree/disagree/none relations, the annotation was a scalar judgment on
an 11-point scale $[-5,5]$, where “-5” indicates a strong disagreement move,
“0” indicates no relation, and “5” denotes a strong agreement move. We
converted the scalar values to three categories: disagree ($D$) for values in
$[-5,-2]$, none ($N$) for values in $[-1,1]$, and agree ($A$) for values in
$[2,5]$; these partition boundaries follow prior work with $IAC$ Misra and
Walker (2013); Rosenthal and McKeown (2015).
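The scalar-to-category conversion can be sketched as follows (a minimal illustration assuming integer judgments; the function name is ours, not taken from the released code):

```python
def to_category(score):
    """Map an IAC scalar agreement judgment in [-5, 5] to a label,
    following the partition described above."""
    if -5 <= score <= -2:
        return "D"  # disagree
    if -1 <= score <= 1:
        return "N"  # none (no clear argumentative move)
    if 2 <= score <= 5:
        return "A"  # agree
    raise ValueError("score outside the [-5, 5] annotation scale")
```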
Each “current turn” that is part of a $<$pt,ct$>$ pair is also labeled with a
Sarcasm ($S$) or Non-Sarcasm ($NS$) label. Table 2 shows the data statistics
in terms of argumentative relations ($A$/$D$/$N$) and sarcasm ($S$/$NS$). We
split the dataset into $training$ (80%; 7,982 turn pairs), $test$ (10%; 999
turn pairs), and $dev$ (10%; 999 turn pairs) sets where each set contains a
proportional number of instances (i.e., 80% of 315 (=252) sarcastic turns
($S$) with argument relation label $A$ (agree) appears in the training set).
The $dev$ set is used for parameter tuning.
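The proportional split can be sketched as below (a simplified illustration of the procedure described above; `stratified_split` and its arguments are our own naming, and a real pipeline would typically shuffle within each stratum first):

```python
from collections import defaultdict

def stratified_split(instances, key, fractions=(0.8, 0.1, 0.1)):
    """Split instances into train/test/dev so that each stratum
    (e.g., each (arg-relation, sarcasm) combination) is represented
    proportionally, as in the 80/10/10 split described above."""
    strata = defaultdict(list)
    for x in instances:
        strata[key(x)].append(x)
    train, test, dev = [], [], []
    for group in strata.values():
        cut1 = int(len(group) * fractions[0])
        cut2 = cut1 + int(len(group) * fractions[1])
        train += group[:cut1]
        test += group[cut1:cut2]
        dev += group[cut2:]
    return train, test, dev
```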
## 4 Experimental Setup
We present the computational approaches to investigate whether modeling
_sarcasm_ can help detect argumentative relations. As our goal is to provide a
comprehensive empirical investigation of sarcasm’s role in argument mining
rather than propose new models, we explore three separate machine learning
approaches well-established for studying argumentation and figurative
language. First, we implement a Logistic Regression method that exploits a
combination of state-of-the-art features to detect argumentative relations as
well as sarcasm (Section 4.1). Second, we present a dual LSTM architecture
with hierarchical attention and its multitask learning setup (Section 4.2).
Third, we discuss experiments using the pre-trained BERT models and our
multitask learning architectures based on it (Section 4.3).
### 4.1 Logistic Regression with Discrete Features
We use a Logistic Regression (LR) model that uses both argument-relevant
($ArgF$) and sarcasm-relevant ($SarcF$) features. Unless mentioned, all
features were extracted from the current turn $ct$.
#### Argument-relevant features ($ArgF$).
We first evaluate the features that are reported as being useful for
identifying and classifying argumentative relations: (a) _n-grams_ (e.g.,
unigram, bigram, trigram) created based on the full vocabulary of the $IAC$
corpus; (b) _argument lexicons_ : two lists of twenty words representing
agreement (e.g., “agree”, “accord”) and disagreement (e.g., “differ”,
“oppose”), respectively Rosenthal and McKeown (2015) (c) _sentiment lexicons_
such as MPQA Wilson et al. (2005) and opinion lexicon Hu and Liu (2004) to
identify sentiment in the turns; (d) _hedge features_ , since they are often
used to mitigate speaker’s commitment Tan et al. (2016); (e) _PDTB discourse
markers_ because _claims_ often start with discourse markers such as
_therefore_ , _so_. We discard markers from the temporal relation; (f) _modal
verbs_ because they signal the degree of certainty when expressing a claim
Stab and Gurevych (2014); (g) _pronouns_ , since they dialogically point to
the previous speaker’s stance; (h) _textual entailment_ : captures whether a
position expressed in the prior turn is accepted in the current turn Cabrio
and Villata (2012); Menini and Tonelli (2016)333We used the textual entailment
toolkit (AllenNLP) Gardner et al. (2017).; (i) _lemma overlap_ to determine
topical alignment between the prior and current turn Somasundaran and Wiebe
(2010). We compute lemma overlap of noun, verbs, and adjectives between the
turns, and (j) _negation_ to extract explicit negation cues (e.g., “not”,
“don’t”) that often signal disagreement.
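As an illustration of feature (i), a lemma-overlap score can be computed roughly as follows (a sketch only: the paper restricts overlap to lemmas of nouns, verbs, and adjectives, whereas here plain lowercase tokens stand in for lemmas):

```python
def lemma_overlap(prior, current):
    """Jaccard-style overlap between turns -- a simplified stand-in
    for feature (i); real lemmatization and POS filtering omitted."""
    a = set(prior.lower().split())
    b = set(current.lower().split())
    return len(a & b) / max(len(a | b), 1)
```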
#### Sarcasm-relevant features ($SarcF$).
As sarcasm-relevant features we use: (a) Linguistic Inquiry Word Count
_(LIWC)_ Pennebaker et al. (2001) features to capture the linguistic, social,
individual, and psychological processes; (b) measuring _sentiment incongruity_
, that is, capturing the number of times the difference in sentiment polarity
between the prior turn $pt$ and the current turn $ct$ occurs and number of
positive and negative sentiment words in turns Joshi et al. (2015); (c)
_sarcasm markers_ used by Ghosh and Muresan (2018), such as _capitalization_ ,
_quotation marks_ , _punctuation_ , _exclamations_ that emphasize a sense of
surprisal, _tag questions_ , _interjections_ because they seem to undermine a
literal evaluation, _hyperbole_ because users frequently overstate the
magnitude of an event in sarcasm, and _emoticons_ & _emojis_ , since they
often emphasize the sarcastic intent.
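Feature (b), sentiment incongruity between turns, can be sketched as below; the toy lexicon is purely illustrative, whereas the paper relies on full sentiment lexicons (MPQA, opinion lexicon):

```python
# Toy polarity lexicon for illustration only.
POS = {"good", "great", "wonderful", "agreed"}
NEG = {"bad", "nonsense", "biased", "wrong"}

def polarity(text):
    """Return +1, -1, or 0 for the dominant polarity of a turn."""
    toks = text.lower().split()
    pos = sum(t in POS for t in toks)
    neg = sum(t in NEG for t in toks)
    return (pos > neg) - (neg > pos)

def sentiment_incongruity(pt, ct):
    """Flag a polarity flip between the prior and the current turn,
    a simplified version of the incongruity feature."""
    return polarity(pt) * polarity(ct) == -1
```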
We use _SKLL_ , an open-source Python package that wraps around the Scikit-
learn tool Pedregosa et al. (2011). 444https://pypi.org/project/skll/ We
perform the feature-based experiment using the Logistic Regression model from
Scikit-learn.
In the experimental runs, LRArgF (i.e., model that uses just the $ArgF$
features) denotes the _individual_ model and LRArgF+SarcF (i.e., model that
uses both $ArgF$ and $SarcF$ features) is the _joint_ model.
### 4.2 Dual LSTM and Multitask Learning
LSTMs are able to learn long-term dependencies Hochreiter and Schmidhuber
(1997) and have been shown to be effective in Natural Language Inference (NLI)
research, where the task is to establish the _relationship_ between multiple
inputs Rocktäschel et al. (2015). This type of architecture is often denoted
as the _dual architecture_ since one LSTM models the premise and the other
models the hypothesis (in Recognizing Textual Entailment (RTE) tasks). Ghosh et
al. (2018) used the dual LSTM architecture with hierarchical attention (HAN)
Yang et al. (2016) for sarcasm detection to model the conversation context,
and we use their approach in this paper to model the current turn $ct$ and the
prior turn $pt$. HAN implements attention both at the word level and sentence
level. The distinctive characteristic of this attention is that word and
sentence representations are weighted by their similarity to a word-level and
a sentence-level context vector, respectively; these context vectors are
randomly initialized and jointly learned during training Yang et al. (2016). We compute
the vector representation for the current turn $ct$ and prior turn $pt$ and
concatenate vectors from the two LSTMs for the final softmax decision (i.e.,
$A$, $D$ or $N$ for argumentative relation detection). Henceforth, this dual
LSTM architecture is denoted as $LSTM_{attn}$.
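The word-level attention step can be sketched in plain Python as follows (a sketch under assumptions: HAN first projects each hidden state through tanh(Wh + b) before scoring against the context vector, which we omit here, and the function name is ours):

```python
import math

def context_attention(hidden_states, context):
    """HAN-style attention: score each hidden state against a learned
    context vector, softmax the scores over positions, and return the
    attention-weighted sum of the hidden states plus the weights."""
    scores = [sum(h_i * u_i for h_i, u_i in zip(h, context))
              for h in hidden_states]
    m = max(scores)                       # stabilized softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    pooled = [sum(w * h[d] for w, h in zip(weights, hidden_states))
              for d in range(len(hidden_states[0]))]
    return pooled, weights
```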
Figure 1: Sentence-level Multitask Attention Network for prior turn $pt$ and
current turn $ct$. Figure is inspired by Yang et al. (2016).
To measure the impact of _sarcasm_ in argumentative relation detection, we use
a multitask learning approach. Multitask learning aims to leverage useful
information in multiple related tasks to improve each task’s performance
Caruana (1997); Liu et al. (2019). We use a simple hard parameter sharing
network. The architecture is a replica of the $LSTM_{attn}$, with a
modification of employing two loss functions, one for sarcasm detection (i.e.,
training using the $S$ and $NS$ labels) and another for the argumentative
relation classification task (i.e., training using the $A$, $D$, and $N$
labels).
Figure 1 shows the high-level architecture of the dual LSTM and multitask
learning ($LSTM_{MT}$). The prior turn $pt$ (left) and the current turn $ct$
(right) are read by two separate LSTMs (i.e., $LSTM_{pt}$ and $LSTM_{ct}$). In
the case of $LSTM_{MT}$, the concatenation of the resulting turn
representations $v_{pt}$ and $v_{ct}$ is passed through a dense+Softmax layer
for the multitask learning, as shown in Figure 1. Similar to the
$LR$ models, $LSTM_{attn}$ now represents the _individual_ model (i.e.,
predicts only the argumentative relation) whereas $LSTM_{MT}$ represents the
_joint_ model.
#### Dynamic Multitask Loss.
In addition to simply adding the two losses, we also employed _dynamic
weighting_ of task-specific losses during the training process, based on the
homoscedastic uncertainty of tasks, as proposed in Kendall et al. (2018):
$L=\sum_{t}\frac{1}{2\sigma^{2}_{t}}L_{t}+\log\sigma^{2}_{t}$ (1)
where $L_{t}$ and $\sigma_{t}$ denote the task-specific loss and its
homoscedastic uncertainty, respectively, over training instances. We denote
this variation as LSTM${}_{{MT}_{uncert}}$.
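Eq. (1) can be sketched in plain Python as follows (in practice $\log\sigma^{2}_{t}$ is a learnable parameter updated by backpropagation; here we only evaluate the combined loss, and the function name is ours):

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine task losses following Eq. (1): each loss L_t is scaled
    by 1 / (2 * sigma_t^2) and a log(sigma_t^2) term is added, so
    high-uncertainty tasks are down-weighted but sigma_t cannot grow
    without penalty."""
    total = 0.0
    for loss, log_var in zip(task_losses, log_vars):
        total += loss / (2.0 * math.exp(log_var)) + log_var
    return total
```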
### 4.3 Pretrained BERT and Multitask Learning
BERT Devlin et al. (2019), a bidirectional transformer model, has achieved
state-of-the-art performance for many NLP tasks. BERT is initially trained on
masked token prediction and next sentence prediction tasks over large corpora
(English Wikipedia and Book Corpus). During its training, a special token
“[CLS]” is added to the beginning of each training instance, and “[SEP]”
tokens are added to mark the end of each utterance and to separate the two
utterances when a pair is given (e.g., $pt$ and $ct$). During the evaluation, the learned
representation for the “[CLS]” token is processed by an additional layer with
nonlinear activation. In its standard form, pre-trained BERT (“bert-base-
uncased”) can be used for transfer learning by fine-tuning on a downstream
task, i.e., argument relation detection where training instances are labeled
as $A$, $D$, and $N$. We denote the BERT baseline model as $BERT_{orig}$ that
is fine-tuned over the $training$ partition of only the argumentative relation
data (i.e., individual task training). Unless mentioned otherwise, we use the
BERT predictions available via the “[CLS]” token. We propose two variations
of the multitask learning setting, briefly described in the following
sections.
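For a turn pair, the input sequence is assembled as sketched below (the standard BERT packing of two segments; WordPiece tokenization is omitted and the helper name is ours):

```python
def bert_pair_input(pt, ct):
    """Pack a (prior turn, current turn) pair in the standard BERT
    two-segment format; the pooled [CLS] representation is later fed
    to the classification layer."""
    return "[CLS] " + pt + " [SEP] " + ct + " [SEP]"
```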
Figure 2: Alternating mini-batch training based on the task type
($BERT_{ALT}$).
#### Multitask Learning with BERT.
The first model we use for multitask learning is denoted as $BERT_{MT}$ (i.e.,
BERT Multitask Learning). Here, we pass the BERT output embeddings to two
classification heads - one for each task (i.e., detection of argumentative
relation and sarcasm), and the relevant gold labels are passed to them. Each
classification head is a linear layer (size=3 and 2 for # of labels for
argumentative relation and sarcasm detection, respectively) applied on top of
the pooled BERT output. The losses from these individual heads are added and
propagated back through the model. This allows BERT to model the nuances of
both tasks and their interdependence simultaneously.
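The loss combination can be sketched as below (a sketch under assumptions: in the model the two heads are linear layers over the pooled BERT output; here we take the heads' logits as given and show only how the two cross-entropy losses are summed):

```python
import math

def softmax(logits):
    m = max(logits)                      # stabilized softmax
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, gold):
    return -math.log(softmax(logits)[gold])

def multitask_loss(arg_logits, arg_gold, sarc_logits, sarc_gold):
    """Summed loss of the two classification heads (3 argumentative
    labels, 2 sarcasm labels); in training this joint loss is
    backpropagated through the shared encoder."""
    return cross_entropy(arg_logits, arg_gold) + cross_entropy(sarc_logits, sarc_gold)
```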
Dynamic Loss: Similar to the LSTM architecture, here, too, we experiment with
dynamic multitask loss. We denote this variation as BERT${}_{{MT}_{uncert}}$.
#### Alternate Multitask Learning.
We employ another multitask learning technique where we attempt to enrich the
learning with fine-tuning of labeled _additional_ material from the sarcasm
detection task. Notably, we exploit “sarcasm V2”, a sarcasm detection dataset
that was also curated from the original corpus of $IAC$ and was released by
Oraby et al. (2016). We pre-process the “sarcasm V2” dataset by removing
duplicates that appear in $IAC_{orig}$, ending up with 3513 $training_{v2}$
instances and 423 $dev_{v2}$ instances balanced between the S/NS categories,
and merge them with the sarcasm data ($training$ and $dev$, respectively)
from $IAC_{orig}$. Note, unlike the original
multitask setting, this time we have more sarcastic instances (a total of
11,495) than instances labeled with argumentative roles (7,982 instances as
before) for the training purpose, while keeping the $test$ set from
$IAC_{orig}$ unchanged.
Since the $training$ data is now unequal between the two tasks of
argumentative relation and sarcasm detection, we create mini-batches so that
each batch consists of instances with only one task label (i.e., either
argumentative labels or sarcasm labels). The batches from the two tasks are
interleaved uniformly, i.e., in each iteration the BERT output is passed to
only one of the two task-specific classification heads, and the corresponding
loss is used to update the parameters. This way, the model trains on both
tasks but alternates between them per mini-batch iteration, while the extra
batches of sarcasm data from the “sarcasm V2” dataset are processed together
at the end. This model is denoted as $BERT_{ALT}$ (see Figure 2).
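The alternating schedule can be sketched as below (the function name is ours; in training, each scheduled batch updates only the corresponding classification head):

```python
def alternating_schedule(arg_batches, sarc_batches):
    """Interleave task-specific mini-batches one-for-one; once the
    smaller task is exhausted, the surplus batches (here the extra
    "sarcasm V2" data) are appended at the end, as in BERT_ALT."""
    schedule = []
    for a, s in zip(arg_batches, sarc_batches):
        schedule.append(("arg", a))
        schedule.append(("sarcasm", s))
    n = min(len(arg_batches), len(sarc_batches))
    schedule += [("arg", a) for a in arg_batches[n:]]
    schedule += [("sarcasm", s) for s in sarc_batches[n:]]
    return schedule
```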
For brevity, all models’ parameter tuning description (e.g., Logistic
Regression, Dual LSTM, BERT) is in the supplemental material.
## 5 Results and Discussion
Model | $F1_{micro}$ | $A$ | $D$ | $N$
---|---|---|---|---
LRArgF | 53.5 | 22.4 | 57.2 | 56.3
LRArgF+SarcF | 56.4${}^{\alpha{{}^{*}}}$ | 31.0 | 58.4 | 58.9
LSTMAttn | 51.8 | 28.0 | 49.4 | 59.2
LSTMMT | 53.1 | 30.0 | 53.2 | 56.5
LSTM${}_{{MT}_{uncert}}$ | 54.6 ${}^{\alpha{{}^{*}}}$ | 33.1 | 54.5 | 58.5
BERTorig | 62.2 | 41.8 | 63.3 | 64.4
BERTMT | 63.2 | 44.5 | 64.1 | 65.4
BERT${}_{{MT}_{uncert}}$ | 65.3${}^{\alpha{{}^{*}}}$ | 44.6 | 66.2 | 67.5
BERTALT | 63.4 | 40.1 | 62.2 | 66.9
Table 3: Results for argumentative relation detection ($F1_{micro}$ and
per-category F1 scores) on the $test$ set of $IAC_{orig}$.
${}^{\alpha{{}^{*}}}$ denotes significance at $p\leq 0.05$ (measured via
McNemar’s test) against the corresponding individual model (e.g., LRArgF,
LSTMAttn, BERTorig, respectively). Highest scores per group of models are in
bold.
Table 3 presents the classification results on the $test$ set. We report F1
scores for each class ($A$, $D$, and $N$) and the overall Micro-F1 score
(F1micro), which accounts for the multi-class setting and class imbalance.
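For reference, with exactly one label per instance the micro-averaged F1 reduces to accuracy, since every wrong prediction is simultaneously a false positive for one class and a false negative for another:

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 for single-label multi-class predictions.
    Micro precision and micro recall both equal tp / N here, so the
    micro F1 equals plain accuracy."""
    tp = sum(g == p for g, p in zip(gold, pred))
    return tp / len(gold)
```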
The LR model using both the $SarcF$ and $ArgF$ features performs better than
the model that uses $ArgF$ features alone, improving the overall performance
by an absolute 2.9% F1micro, with the largest gain on the agreement class
($A$) (8.6% absolute improvement). Table 4 shows the _top_ discrete features
for argumentative relation identification. From $ArgF$ features (first
column), we notice discourse expansion (“particularly”), contrast (“although”)
and agree/disagree lexicon getting high feature weights. We also notice
_pronouns_ receive large feature weights because argumentative text often
refers to personal stance (e.g., “you think”, “I believe”). However, when
analyzing ${ArgF+SarcF}$ features we find various sarcasm markers, such as tag
questions, hyperbole, multiple punctuation, or sarcasm characteristics such as
sentiment incongruity receive the highest weights.
LRArgF | LRArgF+SarcF
---|---
_pronouns_ : I, my (both $A$), your(s) ($D$); _discourse_ : so, because, for (all $A$), incidentally, particularly, although (all $D$); _disagree_lexicon_ : disagree, differ (both $D$); _agree_lexicon_ : agreed ($A$); _entailment relation_ ; _negation_ ($D$) | _pronouns_ : mine, my (both $A$), you ($D$); _discourse_ : then ($A$), though, however (both $D$); _modal_ : will ($A$); _punctuation_ : multiple question marks (both $A$ and $D$); _tag question_ : “are you”, “do you” (both $D$); _hyperbole_ : wonderful ($A$), nonsense, biased (both $D$); _LIWC dimensions_ : anxiety, assent, certainty (all $D$); _sentiment incongruity_ ($D$); _interj_ : so, agreed (both $A$)
Table 4: Top discrete features from LRArgF and LRArgF+SarcF models,
respectively. $A$ and $D$ depict the argumentative relations (agree and
disagree) for the particular feature.
For the LSTM models, we see that multitask learning helps, with
LSTM${}_{{MT}_{uncert}}$ showing a statistically significant 2.8% improvement
over the single-task model LSTMAttn. Moreover, the improvement for both the
agree ($A$) and disagree ($D$) classes is 5.1%, with just a small reduction
for the none ($N$) class (0.7%).
For BERT, we notice better results when performing multitask learning, while
the best performing model is obtained from BERT${}_{{MT}_{uncert}}$ where we
experimented with the dynamic weighting of task-specific losses during the
training process Kendall et al. (2018). The performance increase is consistent
across all three classes. The difference in performance among each setup is
statistically significant, as shown in Table 3. Moreover,
BERT${}_{{MT}_{uncert}}$ model improves the $F1_{micro}$ by a large margin
when compared to the LR and the LSTM models. However, adding more data for
the auxiliary task (i.e., sarcasm detection) as in $BERT_{ALT}$ did not
provide any significant improvement, only a 0.2 gain in $F1_{micro}$ over
$BERT_{MT}$ (though it does improve over the single-task model). The reason
could be that although “sarcasm V2” is a subset of the original $IAC$ corpus,
it was annotated by a different set of Turkers than $IAC_{orig}$, with
different annotation guidelines.
Among the three classes ($A$, $D$, and $N$), we observe the lowest
performance on the $A$ class. This is unsurprising, given the highly
unbalanced $training$ data ($A$ occurs in less than 10% of the turns in
$IAC_{orig}$, see Table 2).
In sum, these improvements through multitask learning over single task
argumentative relation detection indicate that modeling sarcasm is useful in
modeling the disagreement space in online discussions. This provides an
empirical justification to existing theories that study sarcasm’s impact in
modeling argumentation, persuasion, and argument fallacies such as ad hominem
attacks. Finally, we notice that multitask learning also improves the
performance on the sarcasm detection task (results are presented in the
Appendix).
### 5.1 Qualitative Analysis
Figure 3: Attention heatmap of a particular turn pair from $LSTM_{attn}$
(left) and LSTM${}_{{MT}_{uncert}}$ (right), showing higher weights on
sarcasm markers such as “Oops” and “!!” for LSTM${}_{{MT}_{uncert}}$
(disagree relation)
To further investigate the effect of multitask learning, we present
qualitative analysis studies to:
1. Understand the models’ performance by looking at the turns correctly
classified by the multitask models and misclassified by the corresponding
individual single-task model. We analyze the turns in terms of sarcastic
characteristics, i.e., whether they depict incongruity, humor, or sarcasm
indicators (markers).
2. Understand when both the multitask and the individual model made incorrect
predictions.
We compare the predictions between the multitask and the individual models for
different settings to address the first issue. For example,
$BERT_{{MT}_{uncert}}$ correctly identifies 6 $A$, 50 $D$, and 60 $N$
instances more than $BERT_{Orig}$ (out of 91, 398 and 510 instances,
respectively). Two of the authors independently investigated a random sample
of 100 instances ($qual$ set) chosen from the union of the $test$ instances
that are correctly predicted only by the multitask models (LRArgF+SarcF,
$LSTM_{{MT}_{uncert}}$, $BERT_{{MT}_{uncert}}$, and $BERT_{ALT}$) and not by
the corresponding individual models (LRArgF, $LSTM_{attn}$, and
$BERT_{Orig}$). For both Transformer and LSTM-based models, we explore how
attention heads behave and whether common patterns exist (e.g., attending
words with opposite meaning when incongruity occurs). We display the heat maps
of the attention weights for a pair of prior and current turns (LSTM-based
models) (Figure 3) whereas for BERT we display word-to-word attentions
(Figures 4, 5, 6, 7, and 8) using visualization tools Vig (2019); Yang and
Zhang (2018).555Clark et al. (2019) have probed different layers and attention
heads in BERT to find patterns, e.g., whether a token consistently attends a
fixed token in a specific layer. To avoid confusion and bias, we select
attention examples from only the middle (layer=6) layer. All the examples
presented in this section are argumentative moves (i.e., turns with $A$ or
$D$) correctly identified by our multitask learning models but wrongly
predicted as none ($N$) by the individual models. Moreover, the multitask
learning models also correctly predict that these turns are instances of
sarcasm.
Figure 4: $BERT_{{MT}_{uncert}}$ (right) attending contrasting words more in
word-level attention in comparison to $BERT_{Orig}$ (left) (disagree relation)
Figure 5: $BERT_{ALT}$ (right) attending only contrasting words in comparison
to $BERT_{Orig}$ (left) (disagree relation). However, the strength of the
contrast in the case of $BERT_{ALT}$ is lower than $BERT_{{MT}_{uncert}}$ for
the same example turns.
#### Incongruity between prior turn and current turn.
Semantic incongruity, which can appear between conversation context $pt$ and
the current turn $ct$, is an inherent characteristic of sarcasm Joshi et al.
(2015). This characteristic highlights the inconsistency between
_expectations_ and _reality_ , making sarcasm or irony highly effective in
persuasive communication Gibbs and Izett (2005).
In the case of BERT, Figure 4 presents the turns “evolution can’t prove the
book of genesis false” ($pt$) $\leftrightarrow$ “ignorant of science think
evolution has anything to do with the bible” ($ct$). Here,
$BERT_{{MT}_{uncert}}$ shows more attention between incongruous terms
(“genesis” $\leftrightarrow$ “science”, “evolution”) as well as to the mocking
word “ignorant”. Likewise, Figure 6 presents two turns “you are quite anti
religious it seems” ($pt$) $\leftrightarrow$ “anti ignorance and superstition
…this is religion” ($ct$). We notice the word “religious” is attending “anti”
and “ignorance” with high weights in case of $BERT_{{MT}_{uncert}}$ (from $pt$
to $ct$) whereas $BERT_{Orig}$ only attends to the word “religious” from the
$pt$ to $ct$ turn. By modeling sarcasm, the multitask learning models can
better predict argumentative moves that are expressed implicitly.
We also evaluate the $BERT_{ALT}$ model for the examples presented in Figure 4
and Figure 6. Figure 5 shows that although $BERT_{ALT}$ is attending (from
$pt$ to $ct$) incongruous terms “genesis” $\leftrightarrow$ “evolution”, the
strength of the relation (i.e., attention weight) is comparatively lower than
$BERT_{{MT}_{uncert}}$ (see Figure 4). On the contrary, comparing Figure 6
and Figure 7, the $BERT_{{MT}_{uncert}}$ model attends multiple words in $ct$
from the word “religion” in $pt$, but the $BERT_{ALT}$ model attends only two
words, “anti” and “ignorance”, with high weights from “religion” ($pt$ to
$ct$).
Figure 6: $BERT_{{MT}_{uncert}}$ (right) attending contrasting words more than
$BERT_{Orig}$ (left) (disagree relation)
Figure 7: $BERT_{ALT}$ (right) attending only the contrasting words in
comparison to $BERT_{Orig}$ (left) (disagree relation)
#### Humor by word repetition.
Often the current turn $ct$ sarcastically taunts the prior turn $pt$ through
word repetition and rhyme, creating a humorous comic effect, also regarded as
the phonetic style of humor Yang et al. (2015). For the pair, “genetics has
nothing to do with it” ($pt$) $\leftrightarrow$ “are saying that genetics has
nothing to do with genetics?” ($ct$), we notice in $BERT_{{MT}_{uncert}}$ the
token “it” in $pt$ correctly attends to both occurrences of “genetics” in $ct$
where the second occurrence is the co-reference of “it” (Figure 8), which is
missed by the individual model $BERT_{Orig}$.
Figure 8: $BERT_{{MT}_{uncert}}$ (right) attending co-referenced words in a
humorous example missed by the $BERT_{Orig}$ model (left) (disagree relation)
#### Role of sarcasm markers.
Sarcasm markers are indicators that signal that an utterance is sarcastic
Attardo (2000). Comparing the LRArgF+SarcF and LRArgF logistic regression
models, we observe that markers such as multiple punctuation (“???”), tag
questions (“are you”), and upper case (“NOT”) receive the highest feature
weights (Table 4). In Figure 3, while the individual model $LSTM_{attn}$
attends the words almost equally, we notice in the multitask variation several
sarcasm markers such as “ya”, “oops”, and numerous exclamations (“!!”) receive
larger attention weights.
Addressing the second issue (i.e., when both the multitask and single-task
models make wrong predictions), we notice that over 100 examples of the none
($N$) class were classified as argumentative by both $BERT_{{MT}_{uncert}}$
and $BERT_{Orig}$. For the none ($N$) class, one of the most common instances of
wrong predictions is when the current turn $ct$ sarcastically takes a
“different stance” on a topic from $pt$ in a narrow context but the whole turn
is not argumentative. In the following example: “does he just say the opposite
of everything $<$name$>$ says?” ($pt$) $\leftrightarrow$ “using $<$name$>$ as
a 180 compass is just fine by me” ($ct$), $BERT_{{MT}_{uncert}}$,
$BERT_{Orig}$, LSTM${}_{{MT}_{uncert}}$, and $LSTM_{attn}$ models make
disagree $D$ prediction (since $ct$ is sarcastic on “$<$name$>$”) where the
gold label is none $N$. Looking closely at this pair of turns, it seems that
the $ct$ presents a case of ad hominem attack (on the person’s “$<$name$>$”)
rather than a none relation.
In the case of argumentative turns (agree and disagree) that are wrongly
classified as none by all models, we found two common patterns: the use of
concessions (e.g., “it’s a consideration, _but_ I doubt we should be promoting
this …”) and arguments with uncommitted beliefs (e.g., “it is _possible_
that”, “that could _probably_ be”, “ _possibly_ , I must admit”).
## 6 Conclusion and Future Work
Linguistic and argumentation theories have studied the use of sarcasm in
argumentation, including its effectiveness as a persuasive device or as a
means to express an ad hominem fallacy. We present a comprehensive
experimental study for argumentative relation identification and
classification using sarcasm detection as an additional task. First, in
discrete feature space, we show that sarcasm-related features, in addition to
argument-related features, improve the accuracy of the argumentative relation
identification/classification task by 3%. Next, we show that multitask
learning using both a dual LSTM framework and BERT helps improve performance
compared to the corresponding single model by a statistically significant
margin. In both cases, the dynamic weighting of task-specific losses performs
best. We provide a detailed qualitative analysis by investigating a large
sample manually and show what characteristics of sarcasm are attended to,
which might have guided the correct prediction on the identification of the
argumentative relation/classification task. In the future, we aim to study
this synergy further by looking at sarcasm as well as the persuasive
strategies (e.g., ethos, pathos, logos), and argument fallacies (e.g., ad
hominem attack that was also noticed by Habernal et al. (2018)).
## Acknowledgements
The authors thank the anonymous reviewers and Tuhin Chakrabarty for helpful
comments.
## References
* Abbott et al. (2016) Rob Abbott, Brian Ecker, Pranav Anand, and Marilyn Walker. 2016. Internet argument corpus 2.0: An sql schema for dialogic social media and the corpora to go with it. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , pages 4445–4452.
* Abbott et al. (2011) Rob Abbott, Marilyn Walker, Pranav Anand, Jean E Fox Tree, Robeson Bowmani, and Joseph King. 2011. How can you say such things?!?: Recognizing disagreement in informal political argument. In _Proceedings of the Workshop on Languages in Social Media_ , pages 2–11. Association for Computational Linguistics.
* Attardo (2000) Salvatore Attardo. 2000. Irony markers and functions: Towards a goal-oriented theory of irony and its processing. _Rask_ , 12(1):3–20.
* Averbeck (2013) Joshua M. Averbeck. 2013. Comparisons of ironic and sarcastic arguments in terms of appropriateness and effectiveness in personal relationships. _Argumentation and Advocacy_ , 50(1):47–57.
* Cabrio and Villata (2012) Elena Cabrio and Serena Villata. 2012. Combining textual entailment and argumentation theory for supporting online debates interactions. In _Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 208–212.
* Caruana (1997) Rich Caruana. 1997. Multitask learning. _Machine learning_ , 28(1):41–75.
* Chakrabarty et al. (2019) Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, and Alyssa Hwang. 2019. AMPERSAND: Argument mining for PERSuAsive oNline discussions. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2933–2943, Hong Kong, China. Association for Computational Linguistics.
* Chauhan et al. (2020) Dushyant Singh Chauhan, SR Dhanush, Asif Ekbal, and Pushpak Bhattacharyya. 2020\. Sentiment and emotion help sarcasm? a multi-task learning framework for multi-modal sarcasm, sentiment and emotion analysis. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4351–4360.
* Clark et al. (2019) Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT’s attention. In _Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 276–286, Florence, Italy. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* van Eemeren and Grootendorst (1992) Frans H. van Eemeren and Rob Grootendorst. 1992. _Argumentation, communication, and fallacies: a pragma-dialectical perspective_. Lawrence Erlbaum Associates, Inc.
* van Eemeren et al. (1993) Frans Hendrik van Eemeren, Rob Grootendorst, Sally Jackson, Scott Jacobs, et al. 1993. _Reconstructing argumentative discourse._ University of Alabama Press.
* Gardner et al. (2017) Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform.
* Ghosh and Veale (2017) Aniruddha Ghosh and Tony Veale. 2017. Magnets for sarcasm: Making sarcasm detection timely, contextual and very personal. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 482–491.
* Ghosh et al. (2018) Debanjan Ghosh, Alexander R Fabbri, and Smaranda Muresan. 2018. Sarcasm analysis using conversation context. _Computational Linguistics_ , 44(4):755–792.
* Ghosh et al. (2017) Debanjan Ghosh, Alexander Richard Fabbri, and Smaranda Muresan. 2017. The role of conversation context for sarcasm detection in online interactions. In _Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue_ , pages 186–196, Saarbrücken, Germany. Association for Computational Linguistics.
* Ghosh et al. (2016) Debanjan Ghosh, Aquila Khanam, Yubo Han, and Smaranda Muresan. 2016. Coarse-grained argumentation features for scoring persuasive essays. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 549–554, Berlin, Germany. Association for Computational Linguistics.
* Ghosh and Muresan (2018) Debanjan Ghosh and Smaranda Muresan. 2018. “with 1 follower i must be awesome :p”. exploring the role of irony markers in irony recognition. _Proceedings of ICWSM_.
* Ghosh et al. (2014) Debanjan Ghosh, Smaranda Muresan, Nina Wacholder, Mark Aakhus, and Matthew Mitsui. 2014. Analyzing argumentative discourse units in online interactions. In _Proceedings of the First Workshop on Argumentation Mining_ , pages 39–48.
* Ghosh et al. (2020) Debanjan Ghosh, Avijit Vajpayee, and Smaranda Muresan. 2020. A report on the 2020 sarcasm detection shared task. In _Proceedings of the Second Workshop on Figurative Language Processing_ , pages 1–11, Online. Association for Computational Linguistics.
* Gibbs and Izett (2005) Raymond W Gibbs and Christin Izett. 2005. Irony as persuasive communication. _Figurative language comprehension: Social and cultural influences_ , pages 131–151.
* González-Ibáñez et al. (2011) Roberto González-Ibáñez, Smaranda Muresan, and Nina Wacholder. 2011\. Identifying sarcasm in twitter: A closer look. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_ , pages 581–586, Portland, Oregon, USA. Association for Computational Linguistics.
* Gururangan et al. (2020) Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don’t stop pretraining: Adapt language models to domains and tasks. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8342–8360, Online. Association for Computational Linguistics.
* Habernal et al. (2018) Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. Before name-calling: Dynamics and triggers of ad hominem fallacies in web argumentation. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 386–396, New Orleans, Louisiana. Association for Computational Linguistics.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. _Neural computation_ , 9(8):1735–1780.
* Hou and Jochim (2017) Yufang Hou and Charles Jochim. 2017. Argument relation classification using a joint inference model. In _Proceedings of the 4th Workshop on Argument Mining_ , pages 60–66, Copenhagen, Denmark. Association for Computational Linguistics.
* Hu and Liu (2004) Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In _Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining_ , pages 168–177. ACM.
* Jackson (1992) S Jackson. 1992. Virtual standpoints’ and the pragmatics of conversational argument. _Argumentation illuminated_ , page 260–269.
* Joshi et al. (2015) Aditya Joshi, Vinita Sharma, and Pushpak Bhattacharyya. 2015. Harnessing context incongruity for sarcasm detection. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_ , pages 757–762, Beijing, China. Association for Computational Linguistics.
* Joulin et al. (2016) Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. 2016. Fasttext. zip: Compressing text classification models. _arXiv preprint arXiv:1612.03651_.
* Kendall et al. (2018) Alex Kendall, Yarin Gal, and Roberto Cipolla. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 7482–7491.
* Liu et al. (2019) Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4487–4496, Florence, Italy. Association for Computational Linguistics.
* Majumder et al. (2019) Navonil Majumder, Soujanya Poria, Haiyun Peng, Niyati Chhaya, Erik Cambria, and Alexander Gelbukh. 2019. Sentiment and sarcasm classification with multitask learning. _IEEE Intelligent Systems_ , 34(3):38–43.
* Menini and Tonelli (2016) Stefano Menini and Sara Tonelli. 2016. Agreement and disagreement: Comparison of points of view in the political domain. In _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers_ , pages 2461–2470.
* Misra and Walker (2013) Amita Misra and Marilyn A Walker. 2013. Topic independent identification of agreement and disagreement in social media dialogue. In _Proceedings of the SIGDIAL 2013 Conference_ , pages 41–50. Association for Computational Linguistics.
* Muresan et al. (2016) Smaranda Muresan, Roberto Gonzalez-Ibanez, Debanjan Ghosh, and Nina Wacholder. 2016\. Identification of nonliteral language in social media: A case study on sarcasm. _Journal of the Association for Information Science and Technology_.
* Nguyen and Litman (2016) Huy Nguyen and Diane Litman. 2016. Context-aware argumentative relation mining. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1127–1137.
* Oraby et al. (2016) Shereen Oraby, Vrindavan Harrison, Lena Reed, Ernesto Hernandez, Ellen Riloff, and Marilyn Walker. 2016. Creating and characterizing a diverse corpus of sarcasm in dialogue. In _17th Annual Meeting of the Special Interest Group on Discourse and Dialogue_ , page 31.
* Pedregosa et al. (2011) Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. _Journal of machine learning research_ , 12(Oct):2825–2830.
* Pennebaker et al. (2001) James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: Liwc 2001. _Mahway: Lawrence Erlbaum Associates_ , 71:2001.
* Persing and Ng (2016) Isaac Persing and Vincent Ng. 2016. Modeling stance in student essays. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 2174–2184, Berlin, Germany. Association for Computational Linguistics.
* Potash et al. (2017) Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. Here’s my point: Joint pointer architecture for argument mining. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 1364–1373, Copenhagen, Denmark. Association for Computational Linguistics.
* Riloff et al. (2013) Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_ , pages 704–714. Association for Computational Linguistics.
* Rocktäschel et al. (2015) Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiskỳ, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. _arXiv preprint arXiv:1509.06664_.
* Rosenthal and McKeown (2015) Sara Rosenthal and Kathleen McKeown. 2015. I couldn’t agree more: The role of conversational structure in agreement and disagreement detection in online discussions. In _16th Annual Meeting of the Special Interest Group on Discourse and Dialogue_ , page 168.
* Somasundaran and Wiebe (2010) Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological on-line debates. In _Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text_ , pages 116–124. Association for Computational Linguistics.
* Stab and Gurevych (2014) Christian Stab and Iryna Gurevych. 2014. Identifying argumentative discourse structures in persuasive essays. In _EMNLP_ , pages 46–56.
* Stab and Gurevych (2017) Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. _Computational Linguistics_ , 43(3):619–659.
* Stede and Schneider (2018) Manfred Stede and Jodi Schneider. 2018. Argumentation mining. _Synthesis Lectures on Human Language Technologies_ , 11(2):1–191.
* Tan et al. (2016) Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2016\. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In _Proceedings of WWW_.
* Tindale and Gough (1987) Christopher W Tindale and James Gough. 1987. The use of irony in argumentation. _Philosophy & rhetoric_, pages 1–17.
* Vig (2019) Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations_ , pages 37–42, Florence, Italy. Association for Computational Linguistics.
* Wacholder et al. (2014) Nina Wacholder, Smaranda Muresan, Debanjan Ghosh, and Mark Aakhus. 2014. Annotating multiparty discourse: Challenges for agreement metrics. _LAW VIII_ , page 120.
* Walker et al. (2012a) Marilyn A Walker, Pranav Anand, Robert Abbott, and Ricky Grant. 2012a. Stance classification using dialogic properties of persuasion. In _Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 592–596. Association for Computational Linguistics.
* Walker et al. (2012b) Marilyn A Walker, Jean E Fox Tree, Pranav Anand, Rob Abbott, and Joseph King. 2012b. A corpus for research on deliberation and debate. In _LREC_ , pages 812–817.
* Wilson et al. (2005) Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In _Proceedings of the conference on human language technology and empirical methods in natural language processing_ , pages 347–354. Association for Computational Linguistics.
* Yang et al. (2015) Diyi Yang, Alon Lavie, Chris Dyer, and Eduard Hovy. 2015. Humor recognition and humor anchor extraction. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 2367–2376.
* Yang and Zhang (2018) Jie Yang and Yue Zhang. 2018. Ncrf++: An open-source neural sequence labeling toolkit. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics_.
* Yang et al. (2016) Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016\. Hierarchical attention networks for document classification. In _Proceedings of NAACL-HLT_ , pages 1480–1489.
## 7 Appendix
### 7.1 Parameter Tuning
#### Logistic Regression (LR) experiment:
A logistic regression model with an $L_{2}$ penalty is employed, with class
weights proportional to the number of instances of the $A$, $D$, and $N$
classes. The regularization strength $C$ is searched over a grid using the
$dev$ data; the following values were tried: [.0001, .001, .01, .1, 1, 10,
100, 1000, 10000].
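The selection protocol above can be sketched as a plain grid-search loop. Here `train_fn` and `eval_fn` are stand-ins for fitting the $L_{2}$-penalized model and scoring it on the $dev$ split; the dummy objective exists only to exercise the loop and is not the paper's actual setup.

```python
import math

def grid_search_C(train_fn, eval_fn, grid):
    """Return the regularization strength C with the best dev score."""
    best_C, best_score = None, float("-inf")
    for C in grid:
        score = eval_fn(train_fn(C))  # fit with this C, score on dev
        if score > best_score:
            best_C, best_score = C, score
    return best_C, best_score

grid = [1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100, 1000, 10000]
# Dummy stand-ins: "training" just returns C, and the "dev score"
# peaks at C = 1, so the loop should select that value.
best_C, best_score = grid_search_C(lambda C: C,
                                   lambda C: -abs(math.log10(C)),
                                   grid)
```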
#### Dual LSTM and Multi-task Learning experiment:
For the LSTM-based experiments we searched the hyperparameters over the $dev$
set. In particular, we experimented with different mini-batch sizes (e.g., 8,
16, 32), dropout values (e.g., 0.3, 0.5, 0.7), numbers of epochs (e.g., 40,
50), hidden states of different sizes (100, 300), and the Adam optimizer
(learning rate of 0.01). Embeddings were generated using FastText vectors (300
dimensions) Joulin et al. (2016). Any token occurring fewer than five times
was replaced by a special UNK token, where the UNK vector is created from
random samples drawn from a normal (Gaussian) distribution between 0.0 and
0.17. After tuning we use the following hyperparameters for the $test$ set:
mini-batch size of 16, hidden state of size 300, number of epochs = 50, and
dropout value of 0.5. The task-specific loss weights for the dynamic multitask
version were learned during training.
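As a sketch of this dynamic weighting, the uncertainty-based combination of Kendall et al. (2018) scales each task loss by a learned log-variance term. The formula is shown below as a plain function; in the actual models the $s_i$ scalars are trainable parameters updated by backpropagation, which this sketch does not implement.

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with homoscedastic-uncertainty weights.

    Each loss L_i is scaled by exp(-s_i) and regularized by +s_i, where
    s_i = log(sigma_i^2) is a trainable scalar (Kendall et al., 2018).
    """
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# With equal log-variances of 0 the combination reduces to a plain sum.
total = uncertainty_weighted_loss([0.7, 1.3], [0.0, 0.0])
```

A task whose learned variance grows is automatically down-weighted, while the $+s_i$ term keeps the variances from growing without bound.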
#### BERT based models:
We use the $dev$ partition for hyperparameter tuning, e.g., different mini-
batch sizes (8, 16, 32, 48), numbers of epochs (3, 5, 6), and learning rates
(e.g., 3e-5), and we optimize the networks with the Adam optimizer. The models
were fine-tuned on the training partition for 5 epochs with batch size = 16.
Each training epoch took roughly 9 minutes (08:46) on a K-80 GPU with 48GB
vRAM.
### 7.2 Results on the Sarcasm Detection Task
Although improving sarcasm detection is not the focus of our paper, we observe
that multitask learning improves performance on this task as well, compared to
the single-task model. We present results for the deep learning models in
Table 5. The multitask models (both LSTM and BERT) outperform the
corresponding single-task models (by 6.9 F1 and 6.4 F1 for the LSTM and BERT
models, respectively). We note that the results on this particular dataset are
much lower than on other datasets used for sarcasm detection. For example,
LSTMAttn, the best model used by Ghosh et al. (2018), obtained only a 52.9 F1
score on this dataset, while it obtained 70.34 F1 on Sarcasm V2 (also derived
from IAC but using different annotation guidelines), 74.96 F1 on a Twitter
dataset, and 75.41 F1 on a Reddit dataset Ghosh et al. (2018).
Model | Precision | Recall | F1
---|---|---|---
LSTMAttn | 52.9 | 52.8 | 52.9
LSTMMT | 59.5 | 59.3 | 59.4
BERTorig | 57.4 | 57.4 | 57.4
BERTMT | 61.8 | 61.7 | 61.8
BERT${}_{{MT}_{uncert}}$ | 64.1 | 63.5 | 64.0
Table 5: Evaluations of sarcasm detection on the $test$ set of $IAC_{orig}$.
# Scrutinizing new physics of $B_{d}\to\phi(\eta^{(^{\prime})},\pi,\omega)$
decay modes
Manas K. Mohapatra<EMAIL_ADDRESS>Department of Physics, IIT
Hyderabad, Kandi - 502285, India
###### Abstract
We inspect the exclusive hadronic decay modes
$B_{d}\to\phi(\eta^{(^{\prime})},\pi,\omega)$, induced by the quark level
transition $b\to d$ $(\Delta S=0)$, in the vector-like down quark model. These
decay modes are highly suppressed in the standard model, with predicted
branching fractions of $\mathcal{O}(10^{-9})$, which makes them a natural
place to scrutinize physics beyond it. We constrain the new parameter space
using experimental limits on the leptonic decays
$B_{d}\to\ell\ell(\ell=e,\mu,\tau)$ and the nonleptonic decay modes
$B_{d}\to\eta^{\prime}\pi^{0}$ and $B_{u}\to\rho^{-}\eta^{\prime}$. We then
show that the new physics contributions can have a significant impact on the
branching ratios of the $B_{d}\to\phi(\eta^{(^{\prime})},\pi,\omega)$
processes.
###### pacs:
13.30.-a,14.20.Mr, 14.80.Sv
## I Introduction
The Standard Model (SM) of particle physics, one of the biggest achievements
of twentieth-century science, encompasses the fundamental particles and their
interactions, governed by the strong, weak, and electromagnetic forces.
Despite its spectacular success, it has some important gaps it cannot fill,
such as the dominance of matter over antimatter in the present universe, dark
matter and dark energy, the hierarchy problem, and neutrino masses. The source
of the matter-antimatter asymmetry is the violation of the combined discrete
symmetry of charge conjugation (C) and parity (P), for which the
Cabibbo–Kobayashi–Maskawa (CKM) matrix is the main cornerstone.
Among various indirect searches, the study of B decay modes provides a way to
test the SM and to explore the possible existence of new physics (NP) beyond
it. In particular, it is of interest to study the potential implications of
flavor changing neutral current (FCNC) transitions at the electroweak scale,
which in the SM occur only at loop level.
In this paper, we study the decay modes
$B_{d}\to\phi(\eta^{(^{\prime})},\pi,\omega)$, induced by the $b\to d$ quark
level transition. These modes are suppressed in the SM, making them an ideal
place to search for new physics effects. We recompute the CP-averaged
branching ratios in the framework of the QCD factorization approach, including
next-to-leading order (NLO) contributions; earlier predictions were given in
Cheng and Chua (2009); Beneke and Neubert (2003); Beneke et al. (2007). The
experimental upper limits on the branching fractions of these decay modes are
Zyla et al. (2020)
$\displaystyle Br(B_{d}^{0}\to\phi\eta)<5\times
10^{-7},Br(B_{d}^{0}\to\phi\eta^{\prime})<5\times 10^{-7},$ $\displaystyle
Br(B_{d}^{0}\to\phi\pi)<1.5\times 10^{-7},Br(B_{d}^{0}\to\phi\omega)<7\times
10^{-7}.$ (1)
whereas the SM predictions are of $\mathcal{O}(10^{-9})$ with no CPV (CP
violation) observables. Motivated by this gap, we probe these modes in the
presence of the vector-like down quark (VLDQ) model, in which an extra
$SU(2)_{L}$-singlet down-type quark is added to the SM, and observe its
significant impact on the branching fractions. The SM includes three
generations of quarks, but there may be a heavier exotic quark in an
additional generation; such a fermion can appear in $E_{6}$ grand unified
theories and in models with large extra dimensions. The addition of this
particle to the SM spectrum modifies the CKM matrix, which is then no longer
unitary. This induces tree-level FCNC transitions mediated by the Z boson in
the down quark sector, and we study the effect of this model on the above
decay modes.
The leptonic decays $B_{d}\to\ell\ell(\ell=e,\mu,\tau)$ are also induced by
$b\to d$ quark level transitions. The SM and experimental results are shown
in Table 1. The deviations between the predictions and the experimental
values motivate a modification of the branching fractions in the search for
NP.
Table 1: Branching fractions (leptonic) induced by the $b\to d$ transition
Decay processes | Predicted Br | Experimental values/upper limits Zyla et al. (2020)
---|---|---
$B_{d}^{0}\to ee$ | $(2.23\pm 0.21)\times 10^{-15}$ | $<0.83\times 10^{-8}$
$B_{d}^{0}\to\mu\mu$ | $(0.95\pm 0.09)\times 10^{-10}$ | $(1.1\begin{subarray}{c}+1.4\\\ -1.3\end{subarray})\times 10^{-10}$
$B_{d}^{0}\to\tau\tau$ | $(2.00\pm 0.19)\times 10^{-8}$ | $<2.1\times 10^{-3}$
Table 2: CP-averaged branching fractions (nonleptonic) induced by the $b\to d$ transition
Decay modes | Our results | Previous results Cheng and Chua (2009) | Experimental value Zyla et al. (2020)
---|---|---|---
$\bar{B}_{d}\to\pi^{0}\eta^{\prime}$ | $(0.50\pm 0.57\pm 0.03)\times 10^{-6}$ | $(0.42\begin{subarray}{c}+0.21+0.18\\\ -0.09-0.12\end{subarray})\times 10^{-6}$ | $(1.2\pm 0.6)\times 10^{-6}$
$B^{-}\to\rho^{-}\eta^{\prime}$ | $(6.26\pm 1.70\pm 0.34)\times 10^{-6}$ | $(5.6\begin{subarray}{c}+0.9+0.8\\\ -0.7-0.5\end{subarray})\times 10^{-6}$ | $(9.7\pm 2.2)\times 10^{-6}$
On the other side, the decay channels with an $\eta^{\prime}$ in the final
state play a vital role; in our investigation of the nonleptonic
$B_{d}\to\eta^{\prime}\pi^{0}$ and $B_{u}\to\rho^{-}\eta^{\prime}$ decay modes
we take the $\eta-\eta^{\prime}$ mixing effect into account. The deviations
between the SM and experimental results given in Table 2 leave room to search
for physics beyond the SM. The new couplings arising in the VLDQ model can be
constrained using the experimental limits on both the leptonic and the
nonleptonic decay modes. Using the allowed parameter space, we then scrutinize
the new physics impact on the aforementioned decay processes.
The layout of this work is organized as follows. In Section II, we briefly
review the effective Hamiltonian responsible for the quark level transition
$b\to d$ in the $B_{d}\to\phi(\eta^{(^{\prime})},\pi,\omega)$ decay modes. We
discuss the amplitudes for these nonleptonic decay modes, along with the
numerical results for the branching fractions in the SM with all necessary
input parameters. In Section III, we constrain the new parameter space using
the existing experimental limits on the branching fractions of both the
leptonic decays $B_{d}\to\ell\ell$ and the nonleptonic decay modes
$B_{d}\to\eta^{\prime}\pi^{0}$ and $B_{u}\to\rho^{-}\eta^{\prime}$ discussed
above in the presence of the VLDQ model. We then apply the model to the decay
modes $B_{d}\to\phi(\eta^{(^{\prime})},\pi,\omega)$ using the new coupling
parameters. Finally, we conclude with a brief summary of our results in
Section IV.
## II Standard Model Predictions
The weak effective Hamiltonian for decay modes with the quark level
transition $b\to d$ can be written as Buchalla et al. (1996)
$\displaystyle\mathcal{H}_{eff}=\frac{G_{F}}{\sqrt{2}}\bigg{[}V_{ub}V_{ud}^{*}(C_{1}(\mu)O_{1}(\mu)+C_{2}(\mu)O_{2}(\mu))-V_{tb}V_{td}^{*}\sum_{i=3}^{10}C_{i}(\mu)O_{i}(\mu)\bigg{]},$
(2)
where the six-dimensional four-quark operators $O_{i}(i=1,...,10)$ given in
the above effective Hamiltonian are specified as below:
$\displaystyle O_{1}$
$\displaystyle=(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\beta}).(\bar{u}_{\beta}\gamma_{\mu}Lb_{\alpha}),$
$\displaystyle
O_{6}=(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\beta}).(\sum_{q}\bar{q}_{\beta}\gamma_{\mu}Rq_{\alpha}),$
$\displaystyle O_{2}$
$\displaystyle=(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\alpha}).(\bar{u}_{\beta}\gamma_{\mu}Lb_{\beta}),$
$\displaystyle
O_{7}=\frac{3}{2}(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\alpha}).(\sum_{q}e_{q}\bar{q}_{\beta}\gamma_{\mu}Rq_{\beta}),$
$\displaystyle O_{3}$
$\displaystyle=(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\alpha}).(\sum_{q}\bar{q}_{\beta}\gamma_{\mu}Lq_{\beta}),$
$\displaystyle
O_{8}=\frac{3}{2}(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\beta}).(\sum_{q}e_{q}\bar{q}_{\beta}\gamma_{\mu}Rq_{\alpha}),$
$\displaystyle O_{4}$
$\displaystyle=(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\beta}).(\sum_{q}\bar{q}_{\beta}\gamma_{\mu}Lq_{\alpha}),$
$\displaystyle
O_{9}=\frac{3}{2}(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\alpha}).(\sum_{q}e_{q}\bar{q}_{\beta}\gamma_{\mu}Lq_{\beta}),$
$\displaystyle O_{5}$
$\displaystyle=(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\alpha}).(\sum_{q}\bar{q}_{\beta}\gamma_{\mu}Rq_{\beta}),$
$\displaystyle
O_{10}=\frac{3}{2}(\bar{d}_{\alpha}\gamma^{\mu}Lu_{\beta}).(\sum_{q}e_{q}\bar{q}_{\beta}\gamma_{\mu}Lq_{\alpha}),$
(3)
where $G_{F}$ is the Fermi coupling constant, the $V_{ab}$ are CKM matrix
elements, $e_{q}$ is the electromagnetic charge of the quark field $q$, L (R)
is the left (right) handed projection operator, and $\alpha,\beta$ are color
indices. The quark field $q$ runs over the active flavors at the scale
$\mu=m_{b}$, i.e., $q\in\\{u,d,c,s,b\\}$. The operators with $i=1,2$ are
current-current operators, those with $i=3,\dots,6$ are QCD penguin operators,
and those with $i=7,\dots,10$ are electroweak penguin operators. The
corresponding coupling constants, the so-called Wilson coefficients $C_{i}$
$(i=1,\dots,10)$, are evaluated at next-to-leading order (NLO) at the scale
$O(m_{b})$ in order to cancel the $\mu$ dependence of the amplitude.
## $B_{d}\to\phi\eta^{(^{\prime})}$
The weak decay amplitude in the presence of QCDF approach Beneke and Neubert
(2003) can be written in the form as
$\displaystyle\langle\phi\eta^{(^{\prime})}|O_{\mathit{i}}|B_{d}\rangle=\langle\phi\eta^{(^{\prime})}|O_{\mathit{i}}|B_{d}\rangle_{fact}\big{[}1+\sum
r_{n}\alpha_{s}^{n}+\mathit{O}(\frac{\Lambda_{QCD}}{m_{b}})\big{]},$ (4)
where the factorized matrix element of the four-quark operators
$\langle\phi\eta^{(^{\prime})}|O_{\mathit{i}}|B_{d}\rangle_{fact}$ includes
form factors and decay constants. The second term in the parenthesis
corresponds to higher-order radiative (gluon) corrections, and the third term
to power corrections, which contain a troublesome end-point divergence. The
decay mode $B_{d}\to\phi\eta$ can be produced from the $B_{d}\to\omega\eta$
process through $\omega-\phi$ mixing with mixing angle $\delta=3.3^{\circ}$
(Cheng and Chua, 2009; Benayoun et al., 1999; Kucukarslan and Meissner, 2006;
Benayoun et al., 2008; Qian and Ma, 2008). The amplitude for the decay mode
$B_{d}\to\omega\eta$ is given by (Cheng and Chua, 2009)
$\displaystyle\mathit{A}(B_{d}\to\omega\eta)\approx\frac{1}{2}\lambda_{p}\bigg{[}A_{\omega\eta_{q}}\big{\\{}\delta_{pu}(\alpha_{2}+\beta_{1})+2\alpha_{3}^{p}+\hat{\alpha}_{4}^{p}\big{\\}}+A_{\eta_{q}\omega}\big{\\{}\delta_{pu}(\alpha_{2}+\beta_{1})+2\alpha_{3}^{p}+\hat{\alpha}_{4}^{p}\big{\\}}\bigg{]},$
(5)
where the parameter $\lambda_{p}$ denotes the relevant product of CKM matrix
elements, summed over $p=u,c$. The required parameters $\alpha_{i}^{p}$,
$\beta_{i}^{p}$, and $\hat{\alpha}_{i}^{p}$ and the factorized matrix elements
are given in Appendix A. The two branching fractions are related by (Cheng and
Chua, 2009)
$\displaystyle
Br(B_{d}\to\phi\eta)=Br(B_{d}\to\omega\eta)\times\rm\sin^{2}\delta$ (6)
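Numerically, the $\sin^{2}\delta$ factor with $\delta=3.3^{\circ}$ amounts to a suppression of roughly three parts in a thousand. The short check below applies Eq. (6); the input value for $Br(B_{d}\to\omega\eta)$ is purely illustrative, not a result of this work.

```python
import math

delta_deg = 3.3                                    # omega-phi mixing angle
suppression = math.sin(math.radians(delta_deg)) ** 2

br_omega_eta = 1.0e-6        # hypothetical input, order-of-magnitude only
br_phi_eta = br_omega_eta * suppression            # Eq. (6)
```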
Similarly, we can proceed for the final states $\omega(\phi)\eta^{\prime}$,
with $\eta$ replaced by $\eta^{\prime}$ in the previous decay mode. For B
decay modes with an $\eta$ or $\eta^{\prime}$ in the final state, we take the
$\eta-\eta^{\prime}$ mixing effect into account in the study of the physical
observables. Owing to the different flavor states of $\eta^{(^{\prime})}$
($\eta_{q}^{(^{\prime})}$, $\eta_{s}^{(^{\prime})}$, and
$\eta_{c}^{(^{\prime})}$), the corresponding decay constants $f_{q}$ and
$f_{s}$ are related by a mixing angle $\theta$:
$\displaystyle\begin{pmatrix}f_{\eta}^{q}&f_{\eta}^{s}\\\
f_{\eta^{\prime}}^{q}&f_{\eta^{\prime}}^{s}\end{pmatrix}=\begin{pmatrix}\cos\theta&-\sin\theta\\\
\sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}f_{q}&0\\\
0&f_{s}\end{pmatrix},$ (7)
where the $\eta_{q}-\eta_{s}$ mixing angle is $\theta=(39.3\pm 1.0)^{\circ}$
(Feldmann et al., 1998), the mixing with $\eta_{c}$ is neglected, and
the parameters $f_{q}$ and $f_{s}$ are given by
$\displaystyle f_{q}=(1.07\pm 0.02)f_{\pi},\hskip 14.22636ptf_{s}=(1.34\pm
0.06)f_{\pi}.$ (8)
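Eqs. (7)–(8) determine the flavor decay constants by a plain two-dimensional rotation; a quick numerical sketch (central values only) reproduces the entries quoted later in Table 4 to within rounding:

```python
import math

def eta_decay_constants(theta_deg, f_q, f_s):
    """Eq. (7): rotate (f_q, f_s) by the eta_q - eta_s mixing angle theta
    to obtain the flavor decay constants of eta and eta'."""
    c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    # (f_eta^q, f_eta^s), (f_eta'^q, f_eta'^s)
    return (c * f_q, -s * f_s), (s * f_q, c * f_s)

f_pi = 130.2                                  # MeV
f_q, f_s = 1.07 * f_pi, 1.34 * f_pi           # Eq. (8), central values
(f_eta_q, f_eta_s), (f_etap_q, f_etap_s) = eta_decay_constants(39.3, f_q, f_s)
# f_eta^q ~ 108 MeV, f_eta^s ~ -110 MeV, f_eta'^q ~ 88 MeV, f_eta'^s ~ 135 MeV
```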
## $B_{d}\to\phi\pi^{0}$
Similar to the previous decay mode, the process $B^{-}\to\phi\pi^{-}$ can also
be produced from the channel $B^{-}\to\omega\pi^{-}$ in the presence of the
$\omega-\phi$ mixing effect. The amplitude for the decay mode
$B^{-}\to\pi^{-}\omega$ is given by (Beneke and Neubert, 2003)
$\displaystyle A(B^{-}\to\pi^{-}\omega)$
$\displaystyle\approx\frac{1}{\sqrt{2}}\lambda_{p}\bigg{[}A_{\pi\omega}\big{\\{}\delta_{pu}(\alpha_{2}+\beta_{2}+2\beta_{S2})+2\alpha_{3}^{p}+\alpha_{4}^{p}+\frac{1}{2}\alpha_{3,EW}^{p}-\frac{1}{2}\alpha_{4,EW}^{p}$
(9)
$\displaystyle+\beta_{3}^{p}+\beta_{3,EW}^{p}+2\beta_{S3}^{p}+2\beta_{S3,EW}^{p}\big{\\}}$
$\displaystyle+A_{\omega\pi}\big{\\{}\delta_{pu}(\alpha_{1}+\beta_{2})+\alpha_{4}^{p}+\alpha_{4,EW}^{p}+\beta_{3}^{p}+\beta_{3,EW}^{p}\big{\\}}\bigg{]},$
where $\lambda_{p}$ is the CKM factor and the other contributions in the
above amplitude are given in Appendix A. The mixing relation between
the decay modes $B^{-}\to\omega\pi^{-}$ and $B^{-}\to\phi\pi^{-}$ is given by
(Cheng and Chua, 2009)
$\displaystyle Br(B^{-}\to\phi\pi^{-})_{{\phi-\omega}\hskip
2.84544ptmixing}=Br(B^{-}\to\omega\pi^{-})\times\rm\sin^{2}\delta,$ (10)
and the decay amplitude of the $B_{d}\to\phi\pi^{0}$ mode follows from the
relation (Beneke and Neubert, 2003)
$\displaystyle A(B^{-}\to\pi^{-}\phi)=-\sqrt{2}A(B_{d}\to\phi\pi^{0})$ (11)
For our calculations of the above modes
$B_{d}\to\phi(\pi^{0},\eta^{(\prime)})$, we take the $\phi-\omega$
mixing angle $\delta=(3.32\pm 0.09)^{\circ}$ from Ambrosino et al. (2009).
## $B_{d}\to\phi\omega$
From the effective Hamiltonian (2), the matrix element of the four-quark
operators is given by
$\displaystyle\big{\langle}V_{1}(\lambda_{1})V_{2}(\lambda_{2})|(\bar{q}_{2}q_{3})_{V-A}(\bar{q}_{1}b)_{V-A}|\bar{B}_{d}\big{\rangle},$
(12)
where $\lambda_{1},\lambda_{2}$ are the helicities of the final-state vector
mesons $V_{1}$ and $V_{2}$, respectively. The amplitude of the penguin-dominated
decay mode $B_{d}\to\phi\omega$ is expressed as (Beneke et al., 2007)
$\displaystyle
A(B_{d}\to\phi\omega)=A_{\omega\phi}\big{[}\lambda_{p}(\alpha_{3}^{p}-\frac{1}{2}\alpha_{3,EW}^{p})\big{]}$
(13)
The details of the parameters used in the above process are provided in
Appendix B. The helicity amplitudes of this decay mode are
$A_{0},A_{+}$ and $A_{-}$, with the hierarchy
$\displaystyle
A_{0}:A_{-}:A_{+}=1:\frac{\Lambda_{QCD}}{m_{b}}:\bigg{(}\frac{\Lambda_{QCD}}{m_{b}}\bigg{)}^{2},$
(14)
where the transverse amplitudes $A_{+}$ and $A_{-}$ are suppressed relative to
the longitudinal one $A_{0}$.
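The hierarchy in Eq. (14) is simple power counting in $\Lambda_{QCD}/m_{b}$; numerically (with an illustrative $\Lambda_{QCD}\simeq 0.225$ GeV, not fixed by the text, and $m_{b}=4.2$ GeV from Table 4):

```python
LAMBDA_QCD = 0.225   # GeV, illustrative value (not fixed by the text)
M_B_QUARK = 4.2      # GeV, m_b(m_b) from Table 4

r = LAMBDA_QCD / M_B_QUARK
# Eq. (14): A0 : A- : A+ = 1 : r : r^2, so the transverse amplitudes
# contribute to the rate only at the sub-percent level.
hierarchy = (1.0, r, r**2)
print(hierarchy)
```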
All the nonleptonic decay modes discussed in this section involve the
factorized matrix elements $A_{VP}$ and $A_{VV}$ (P = pseudoscalar meson, V =
vector meson) as well as higher-order corrections such as vertex
corrections, hard spectator interactions, penguin contractions and
annihilation contributions. All the amplitudes can be written symbolically in
the parameterized form
$\displaystyle A(B\to VP)$ $\displaystyle=\lambda_{u}A_{u}+\lambda_{c}A_{c}$
(15)
$\displaystyle=\lambda_{c}A_{c}\big{[}1+rae^{i(\delta_{1}-\gamma)}\big{]},$
where $V=\phi$ and $P=\eta^{(\prime)},\pi$. Here $\lambda_{u,c}$ are the CKM elements,
$A_{u,c}$ are the amplitudes associated with the $u$ and $c$ quarks,
$a=|\lambda_{u}/\lambda_{c}|$, $r=|A_{u}/A_{c}|$, $\gamma$ is the weak phase
of the CKM element $V_{ub}$, and $\delta_{1}$ is the relative strong phase. The
CP-averaged branching fraction is given by
$\displaystyle\mathfrak{B}=\frac{1}{2}\left[Br(\mathcal{A}_{B_{d}\to
M_{1}M_{2}})+Br(\mathcal{\bar{A}}_{B_{d}\to M_{1}M_{2}})\right].$ (16)
For all the nonleptonic decay modes
$B_{d}\to\phi(\eta^{(\prime)},\pi)$, it can be written as
$\displaystyle\mathfrak{B}=\frac{p_{cm}\tau_{B}}{8\pi
m_{B}^{2}}|\lambda_{c}A_{c}|^{2}\big{\\{}1+r^{2}a^{2}+2ra\rm\cos\delta_{1}\rm\cos\gamma\big{\\}},$
(17)
where the center-of-mass momentum in the $B_{d}$ rest frame is given by
$\displaystyle
p_{cm}=\frac{1}{2m_{B_{d}}}\sqrt{(m_{B_{d}}^{2}-(m_{1}+m_{2})^{2})(m_{B_{d}}^{2}-(m_{1}-m_{2})^{2})}\,,$
(18)
where $m_{1}$ and $m_{2}$ are the masses of the final-state mesons. Similarly,
the CP-averaged branching fraction for $B_{d}\to\phi\omega$ is given by
$\displaystyle\mathfrak{B}_{(B_{d}\to\phi\omega)}=\frac{\tau_{B_{d}}p_{cm}}{8\pi m_{B}^{2}}(|A_{0}|^{2}+|A_{-}|^{2}+|A_{+}|^{2}).$
(19)
Here, the CP-averaged branching fraction corresponding to each individual
helicity amplitude can be written in analogy with the expressions for the
$B_{d}\to VP$ decay modes.
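As a numerical sketch of Eqs. (17)–(18), the two-body kinematics and the interference structure can be coded directly (all inputs in GeV units; the amplitude value is left symbolic since a full QCDF evaluation is beyond this sketch):

```python
import math

def p_cm(m_B, m1, m2):
    """Center-of-mass momentum of the two-body final state in the B_d
    rest frame (standard Kallen kinematics, cf. Eq. (18))."""
    lam = (m_B**2 - (m1 + m2)**2) * (m_B**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * m_B)

def br_cp_averaged(tau_B, m_B, m1, m2, lam_c_Ac, r, a, delta1, gamma):
    """CP-averaged branching fraction, Eq. (17); tau_B in GeV^-1, masses
    in GeV, phases in radians, lam_c_Ac = lambda_c * A_c."""
    pref = tau_B * p_cm(m_B, m1, m2) / (8.0 * math.pi * m_B**2)
    bracket = 1 + (r * a)**2 + 2 * r * a * math.cos(delta1) * math.cos(gamma)
    return pref * abs(lam_c_Ac)**2 * bracket

# Kinematics of B_d -> phi eta (PDG masses in GeV):
p = p_cm(5.2797, 1.0195, 0.5479)   # ~ 2.5 GeV
```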
Now the CP-averaged branching ratios can be calculated for the considered
nonleptonic $B_{d}\to\phi(\eta^{(\prime)},\pi,\omega)$ decay modes. For the
numerical predictions we use the input parameters of the S4 scenario of the
QCDF approach Beneke and Neubert (2003). The Wilson coefficients in the NDR
scheme at NLO are taken at the $m_{b}$ scale from de Groot et al.
(2003), and the relevant input parameters are collected in
Table 4. These modes have been studied previously in Refs. Beneke and Neubert (2003);
Cheng and Chua (2009); Beneke et al. (2007); Zhang and Xiao (2008); Cheng et
al. (2015). We repredict the SM values of the CP-averaged branching fractions of
the $B_{d}\to\phi(\eta^{(\prime)},\pi,\omega)$ decay modes, which are given in
TABLE 3 along with the previous results. Here the first theoretical error
corresponds to the uncertainties due to the quark masses, form factors, decay
constants, Gegenbauer moments, the wave function of the $B_{d}^{0}$ meson and the
$\phi-\omega$ mixing angle, whereas the uncertainties due to weak annihilation and
hard spectator interactions are lumped into the second error. Following
Ref. (Cheng and Chua, 2009), we assign uncertainties of $0.1$ and $20^{\circ}$ to
the annihilation parameters $\rho_{A}$ and $\phi_{A}$, respectively.
Table 3: SM predictions of CP-averaged branching fractions (nonleptonic) induced by the $b\to d$ transition. Decay modes | Our results | Previous results Beneke and Neubert (2003); Bao et al. (2008) | Expt. values Zyla et al. (2020)
---|---|---|---
$B_{d}\to\phi\eta$ | $(1.18\pm 0.84\pm 0.03)\times 10^{-9}$ | $0.001\times 10^{-6}$ | $<5\times 10^{-7}$
$B_{d}\to\phi\eta^{\prime}$ | $(2.26\pm 1.8\pm 0.09)\times 10^{-9}$ | $0.003\times 10^{-6}$ | $<5\times 10^{-7}$
$B_{d}\to\phi\pi^{0}$ | $(6.91\pm 1.23\pm 0.03)\times 10^{-9}$ | $0.004\times 10^{-6}$ | $<1.5\times 10^{-7}$
$B_{d}\to\phi\omega$ | $(3.16\pm 1.23\pm 0.006)\times 10^{-9}$ | $0.0017\times 10^{-6}$ | $<7\times 10^{-7}$
Table 4: Input parameters used in the numerical analysis. Running quark masses and coupling constants: | Ref.
---|---
$G_{F}=1.166\times 10^{-5}$ GeV${}^{-2}$; $\alpha_{em}=1/129$ | (Zyla et al., 2020)
$\alpha_{s}(M_{Z})=0.1185$; $\tau_{B_{d}}=(1.52\pm 0.004)\times 10^{-12}\hskip 2.84544pts$ | (Zyla et al., 2020)
$m_{b}(m_{b})=4.2\hskip 2.84544pt\rm GeV$; $m_{c}(m_{b})=0.91\hskip 2.84544pt\rm GeV$; $m_{c}^{\rm pole}/m_{b}^{\rm pole}=0.3$ | (Cheng and Chua, 2009)
$m_{u}(2\hskip 0.28436pt\rm GeV)=2.15\pm 0.15$ MeV; $m_{d}(2\hskip 0.28436pt\rm GeV)=4.7\pm 0.2$ MeV | Lü et al. (2019)
$m_{s}(2\hskip 0.28436pt\rm GeV)=93.8\pm 1.3\pm 1.9$ MeV | Lü et al. (2019)
CKM parameters: |
$V_{ub}=(3.82\pm 0.24)\times 10^{-3}$; $V_{ud}=0.97370\pm 0.00014$; $V_{cb}=(41\pm 1.4)\times 10^{-3}$ | Zyla et al. (2020)
$V_{cd}=0.221\pm 0.004$; $V_{td}=(8.0\pm 0.3)\times 10^{-3}$; $V_{tb}=1.013\pm 0.030$ | Zyla et al. (2020)
$\gamma=(72.1\begin{subarray}{c}+4.1\\\ -4.5\end{subarray})^{\circ}$; $\sin 2\beta_{d}=0.699\pm 0.017$ | Zyla et al. (2020)
Form factors and decay constants: |
$F_{B\to\eta}(0)=0.168\begin{subarray}{c}+0.041\\\ -0.047\end{subarray}$; $F_{B\to\eta^{\prime}}(0)=0.130\begin{subarray}{c}+0.036\\\ -0.032\end{subarray}$; | Duplancic and Melic (2015)
$F_{B\to\pi}(0)=0.21\pm 0.07$; $A_{B\to\rho}(0)=0.356\pm 0.042$; | Gubernari et al. (2019); Bharucha et al. (2016)
$A^{0}_{B\to\omega}(0)=0.328\pm 0.048$; $A^{1}_{B\to\omega}(0)=0.243\pm 0.031$; $V_{B\to\omega}(0)=0.304\pm 0.038$; $f_{\omega}=(187\pm 5)$ MeV; $f^{\perp}_{\omega}=(151\pm 9)$ MeV | Bharucha et al. (2016); Cheng and Chua (2009)
$f_{\eta}^{q}=107$ MeV; $f_{\eta}^{s}=-112$ MeV; $f_{\eta^{\prime}}^{q}=89$ MeV; $f_{\eta^{\prime}}^{s}=137$ MeV | Cheng and Chua (2009)
$f_{B_{d}}=(190.5\pm 1.3)$ MeV; $f_{\pi}=(130.2\pm 1.4)$ MeV; | Aoki et al. (2020); Gubernari et al. (2019)
$f_{\rho}=(216\pm 3)$ MeV; $f_{\rho}^{\perp}=(165\pm 9)$ MeV | Cheng and Chua (2009)
$f_{\phi}=(215\pm 5)$ MeV; $f_{\phi}^{\perp}=(186\pm 9)$ MeV | Cheng and Chua (2009)
Gegenbauer moments: |
$a_{1}^{\phi}=0$; $a_{2}^{\phi}=0.18\pm 0.08$; $a_{1}^{\perp,\phi}=0$; $a_{2}^{\perp,\phi}=0.14\pm 0.06$ | Cheng and Chua (2009)
$a_{1}^{\rho}=0$; $a_{2}^{\rho}=0.15\pm 0.07$; $a_{1}^{\perp,\rho}=0$; $a_{2}^{\perp,\rho}=0.14\pm 0.07$ | (Cheng and Chua, 2009)
$a_{1}^{\pi}=0$; $a_{2}^{\pi}=0.25\pm 0.15$ | (Cheng and Chua, 2009)
$a_{1}^{\omega}=0$; $a_{2}^{\omega}=0.15\pm 0.07$; $a_{1}^{\perp,\omega}=0$; $a_{2}^{\perp,\omega}=0.14\pm 0.06$; $\lambda_{B}=300\pm 100$ MeV | (Cheng and Chua, 2009)
Annihilation and hard spectator parameters: |
PP mode: $\rho_{A}=1.1$; $\phi_{A}=-50^{\circ}$ | (Cheng and Chua, 2009)
PV mode: $\rho_{A}=0.87$; $\phi_{A}=-30^{\circ}$ | (Cheng and Chua, 2009)
VP mode: $\rho_{A}=1.07$; $\phi_{A}=-70^{\circ}$; $X_{H}=2.4\pm 0.024$ | (Cheng and Chua, 2009)
## III New-Physics Contributions
In the standard model, flavor-changing neutral currents (FCNC) occur only at
loop level and are strongly suppressed because of the intermediate light-quark
contributions, which makes them a promising place to explore new physics
beyond the SM. In this work we adopt a self-consistent framework: a
minimal extension of the standard model in which the matter
sector is enlarged by an extra isosinglet vector-like down quark, so that
the $Z$ boson mediates FCNC transitions at tree level. Due to the
addition of the down-type quark, the interaction Lagrangian of the $Z$ boson in the
weak eigenstate basis can be written as (Deshpande et al., 2004)
$\displaystyle
L_{Z}=-\frac{g}{2c_{W}}\big{[}\bar{U}_{L}^{0}\gamma^{\mu}U_{L}^{0}-\bar{D}_{L}^{0}\gamma^{\mu}D_{L}^{0}-2s_{W}^{2}(Q_{u}\bar{U}^{0}\gamma^{\mu}U^{0}+Q_{d}\bar{D}^{0}\gamma^{\mu}D^{0}+Q_{d}\bar{D}^{0^{\prime}}\gamma^{\mu}D^{0^{\prime}}\big{]}Z_{\mu},$
(20)
where $Q_{u,d}$ are the electric charges of the up- and down-type quarks. The up-type
quarks $U^{0}$ and the down-type quarks $D^{0}$ comprise the three SM quark
generations, and the additional down-type quark is denoted by
$D^{0^{\prime}}=d^{0^{\prime}}$. Because of the extra down-type
quark, the down-quark mass matrix is diagonalized by a
$4\times 4$ matrix while the up-quark mass matrix is diagonalized by a $3\times 3$
matrix. The corresponding interaction Lagrangian mediated by the $Z$ is then given by (Giri and Mohanta, 2003)
$\mathcal{L}_{Z}=\frac{g}{2c_{W}}\big{[}\bar{\mathit{U}}_{L\mathit{i}}\gamma^{\mu}\mathit{U}_{L\mathit{i}}-\bar{D}_{L\alpha}U_{\alpha\beta}\gamma^{\mu}D_{L\beta}-2s^{2}_{W}J_{em}^{\mu}\big{]}Z_{\mu},$
where $i$ ($\alpha,\beta$) denotes the generation indices for up (down)-type
quarks and $L$ indicates left-chiral fields. The key point is the
second term, where $U_{\alpha\beta}$ is a $4\times 4$ matrix given by
$\displaystyle
U_{\alpha\beta}=\sum_{\mathit{i}=\mathit{u,c,t}}V_{\alpha\mathit{i}}^{\dagger}V_{\mathit{i}\beta}=\delta_{\alpha\beta}-V_{4\alpha}^{*}V_{4\beta}.$
(21)
This is the distinctive feature of this model. The corresponding CKM
matrix for the charged-current interaction is
$V=V_{u}^{L\dagger}V_{d}^{L}$, a $3\times 4$ matrix, which in general differs
from the unitary CKM matrix of the standard model. Since
$U_{\alpha\beta}\neq 0$ for $\alpha,\beta=d,s,b$, it motivates the study of FCNC
mediated by the $Z$ boson at tree level (Buras and Lindner, 1998; Alok et al., 2016;
Mohapatra, 2020; Giri and Mohanta, 2004).
The non-unitary matrix $V$ arises due to the addition of the extra fourth
quark to the SM sector, providing a new handle to scrutinize physics
beyond the SM. We now constrain the new parameter space using
both leptonic and nonleptonic modes.
### III.1 Constraints from the leptonic modes
$B_{d}\to\ell^{+}\ell^{-}$ $(\ell=e,\mu,\tau)$ and the nonleptonic modes
$B_{d}\to\eta^{\prime}\pi^{0}$ and $B_{u}\to\rho^{-}\eta^{\prime}$:
The leptonic modes $B_{d}\to\ell^{+}\ell^{-}$ $(\ell=e,\mu,\tau)$ are suppressed
in the SM, but they can still be investigated in the new-physics scenario
provided by the VLDQ model. The branching fraction of $B_{d}\to\ell^{+}\ell^{-}$
in the $Z$-mediated VLDQ model is given by (Chen et al., 2010)
$\displaystyle\mathfrak{B}_{(B_{d}\to\ell^{+}\ell^{-})}=\frac{G_{F}^{2}\alpha^{2}m_{B_{d}}m_{\ell}^{2}f_{B_{d}}^{2}\tau_{B_{d}}}{16\pi^{3}}|V_{tb}V_{td}^{*}|^{2}\sqrt{1-4(\frac{m_{\ell}^{2}}{m_{B_{d}}^{2}})}\left|C_{10}^{\rm
tot}\right|^{2},$ (22)
where
$\displaystyle C_{10}^{\rm
tot}=C_{10}-\frac{\pi}{\alpha}\frac{U_{bd}}{V_{tb}V_{td}^{*}}\,.$ (23)
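Eqs. (22)–(23) are straightforward to evaluate numerically; the sketch below does so in natural units, with illustrative inputs from Table 4 and an assumed SM value $C_{10}\simeq-4.2$ (a common benchmark, not quoted in the text):

```python
import math

HBAR_GEV_S = 6.582e-25   # hbar in GeV*s, to convert the lifetime

def br_leptonic(m_B, m_l, f_B, tau_B_s, Vtb_Vtd, GF, alpha, C10_tot):
    """Br(B_d -> l+ l-) of Eq. (22); masses and f_B in GeV, lifetime in s."""
    tau_B = tau_B_s / HBAR_GEV_S                       # lifetime in GeV^-1
    pref = GF**2 * alpha**2 * m_B * m_l**2 * f_B**2 * tau_B / (16.0 * math.pi**3)
    phase_space = math.sqrt(1.0 - 4.0 * m_l**2 / m_B**2)
    return pref * abs(Vtb_Vtd)**2 * phase_space * abs(C10_tot)**2

# SM-like point for the muon mode (U_bd = 0, so C10_tot -> C10):
br_mumu = br_leptonic(5.2797, 0.10566, 0.1905, 1.52e-12,
                      1.013 * 8.0e-3, 1.166e-5, 1.0 / 129.0, -4.2)
# br_mumu comes out at the 1e-10 level, consistent with the SM expectation.
```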
Here $U_{bd}$ is the coupling parameter of the tree-level $Z$-mediated
$b\to d$ transition; because of quark mixing it may
be complex, with weak phase $\phi_{d}$. The amplitude of the nonleptonic
$B^{-}\to\rho^{-}\eta^{\prime}$ process is given by (Beneke and Neubert,
2003),
$\displaystyle-\sqrt{2}\mathcal{A}_{B^{-}\to\rho\eta^{\prime}}$
$\displaystyle=$ $\displaystyle
A_{\rho\eta^{\prime}_{q}}\big{[}\delta_{pu}(\alpha_{2}-\beta_{2}+2\beta_{S2})+2\alpha_{3}^{p}+\alpha_{4}^{p}+\frac{1}{2}\alpha_{3,EW}^{p}-\frac{1}{2}\alpha_{4,EW}^{p}$
(24) $\displaystyle+$
$\displaystyle\beta_{3}^{p}+\beta_{3,EW}^{p}+2\beta_{S3}^{p}+2\beta_{S3,EW}^{p}\big{]}$
$\displaystyle+$
$\displaystyle\sqrt{2}A_{\rho\eta^{\prime}_{s}}\big{[}\delta_{pu}\beta_{S2}+\alpha_{3}^{p}-\frac{1}{2}\alpha_{3,EW}^{p}+\beta_{S3}^{p}+\beta_{S3,EW}^{p}\big{]}$
$\displaystyle+$
$\displaystyle\sqrt{2}A_{\rho\eta_{c}}\big{[}\delta_{pc}\alpha_{2}+\alpha_{3}^{p}\big{]}$
$\displaystyle+$ $\displaystyle
A_{\eta^{\prime}_{q}\rho}\big{[}\delta_{pu}(\alpha_{1}+\beta_{2})+\alpha_{4}^{p}+\alpha_{4,EW}^{p}+\beta_{3}^{p}+\beta_{3,EW}^{p}\big{]}.$
The amplitude of the decay mode $B_{d}\to\pi^{0}\eta^{\prime}$ is given by
(Beneke and Neubert, 2003),
$\displaystyle-2\mathcal{A}_{\bar{B}_{d}\to\pi^{0}\eta^{\prime}}$
$\displaystyle=$ $\displaystyle
A_{\pi\eta_{q}}\big{[}\delta_{pu}(\alpha_{2}-\beta_{1}-2\beta_{S1})+2\alpha_{3}^{p}+\alpha_{4}^{p}+\frac{1}{2}\alpha_{3,EW}^{p}-\frac{1}{2}\alpha_{4,EW}^{p}$
(25) $\displaystyle+$
$\displaystyle\beta_{3}^{p}-\frac{1}{2}\beta_{3,EW}^{p}-\frac{3}{2}\beta_{4,EW}^{p}+2\beta_{S3}^{p}-\beta_{S3,EW}^{p}-3\beta_{S4,EW}^{p}\big{]}$
$\displaystyle+$
$\displaystyle\sqrt{2}A_{\pi\eta_{s}}\big{[}-\delta_{pu}\beta_{S1}+\alpha_{3}^{p}-\frac{1}{2}\alpha_{3,EW}^{p}+\beta_{S3}^{p}-\frac{1}{2}\beta_{S3,EW}^{p}-\frac{3}{2}\beta_{S3,EW}^{p}\big{]}$
$\displaystyle+$
$\displaystyle\sqrt{2}A_{\pi\eta_{c}}\big{[}\delta_{pc}\alpha_{2}+\alpha_{3}^{p}\big{]}$
$\displaystyle+$ $\displaystyle
A_{\eta_{q}\pi}\big{[}\delta_{pu}(-\alpha_{2}-\beta_{1})+\alpha_{4}^{p}-\frac{3}{2}\alpha_{3,EW}^{p}-\frac{1}{2}\beta_{4,EW}^{p}+\beta_{3}^{p}-\frac{1}{2}\beta_{3,EW}^{p}-\frac{3}{2}\beta_{4,EW}^{p}\big{]},$
where the above amplitudes are multiplied by the CKM factor
$\lambda_{p}$ and summed over $p=u,c$. The required parameters are given in
Appendix A. The effective Hamiltonian corresponding to the new interaction
describing the quark-level transition $b\to d$ can be written as
$\mathcal{H}_{eff}^{Z}=-\frac{G_{F}}{\sqrt{2}}V_{tb}V_{td}^{*}\big{[}\tilde{C}_{3}O_{3}+\tilde{C}_{7}O_{7}+\tilde{C}_{9}O_{9}\big{]},$
where the new Wilson coefficients at the $M_{Z}$ scale are given as Atwood and
Hiller (2003); Deshpande and Ghosh (2004)
$\displaystyle\tilde{C}_{3}(M_{Z})$ $\displaystyle=$
$\displaystyle\frac{1}{6}\frac{U_{bd}}{V_{tb}V_{td}^{*}},$
$\displaystyle\tilde{C}_{7}(M_{Z})$ $\displaystyle=$
$\displaystyle\frac{2}{3}\frac{U_{bd}}{V_{tb}V_{td}^{*}}\sin^{2}\theta_{W}\,,$
$\displaystyle\tilde{C}_{9}(M_{Z})$ $\displaystyle=$
$\displaystyle-\frac{2}{3}\frac{U_{bd}}{V_{tb}V_{td}^{*}}(1-\sin^{2}\theta_{W})\,$
(26)
and the new Wilson coefficients evolved down to the $m_{b}$ scale can be found in Mawlong et
al. (2008). From the unitarity condition (21), we get
$\displaystyle\lambda_{u}+\lambda_{c}+\lambda_{t}=U_{bd}\,.$ (27)
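Eq. (26) fixes the three new Wilson coefficients through the common factor $U_{bd}/(V_{tb}V_{td}^{*})$; a minimal sketch (the numerical value of $\sin^{2}\theta_{W}$ is an assumed standard input, not quoted in the text):

```python
def new_wilson_coefficients(U_bd, Vtb_Vtd, sin2_thetaW=0.231):
    """New Wilson coefficients at the M_Z scale in the VLDQ model, Eq. (26)."""
    ratio = U_bd / Vtb_Vtd
    C3 = ratio / 6.0
    C7 = (2.0 / 3.0) * ratio * sin2_thetaW
    C9 = -(2.0 / 3.0) * ratio * (1.0 - sin2_thetaW)
    return C3, C7, C9

# A benchmark point of the kind used later in the numerical analysis:
C3, C7, C9 = new_wilson_coefficients(6e-5, 1.013 * 8.0e-3)
```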
Now the amplitude in the presence of new physics can be parameterized as,
$\displaystyle\mathcal{A}$ $\displaystyle=$
$\displaystyle\lambda_{u}\mathcal{A}_{u}+\lambda_{c}\mathcal{A}_{c}-U_{bd}\mathcal{A}_{NP}$
(28) $\displaystyle=$
$\displaystyle\lambda_{c}A_{c}\Big{[}1+are^{\mathit{i}(\delta_{1}-\gamma)}-a^{\prime}r^{\prime}e^{\mathit{i}(\delta^{\prime}+\phi_{d})}\Big{]}\,,$
where
$\displaystyle a=|\frac{\lambda_{u}}{\lambda_{c}}|,\hskip
8.53581ptr=|\frac{A_{u}}{A_{c}}|,\hskip
8.53581pta^{\prime}=|\frac{U_{bd}}{\lambda_{c}}|,\hskip
8.53581ptr^{\prime}=|\frac{A_{NP}}{A_{c}}|.$ (29)
Here $\gamma$ is the weak phase arising from the CKM matrix element $V_{ub}$,
while $\delta_{1}$ and $\delta^{\prime}$ are the strong phases of $A_{u}$
and $A_{NP}$ relative to $A_{c}$; the subscripts $u$ and $c$ denote the
amplitudes associated with the up and charm quarks. The new coupling
parameter $U_{bd}$ may carry the complex phase $\phi_{d}$. From the amplitude
in Eq. (28), the CP-averaged branching ratio can be written as
$\displaystyle\mathfrak{B}$ $\displaystyle=$
$\displaystyle\frac{\tau_{B_{d}}p_{c}}{8\pi
m_{B_{d}}^{2}}|\lambda_{c}A_{c}|^{2}\bigg{[}\mathcal{G}+2ra\cos\delta_{1}\cos\gamma-2r^{\prime}a^{\prime}\cos\delta^{\prime}\cos\phi_{d}$
(30)
$\displaystyle-2rr^{\prime}aa^{\prime}\cos(\delta_{1}-\delta^{\prime})\cos(\gamma+\phi_{d})\bigg{]}\,,$
where $\mathcal{G}=1+(ra)^{2}+(r^{\prime}a^{\prime})^{2}$. Combining the
leptonic and nonleptonic modes with the experimental and
theoretical values given in Tables 1 and 2, the allowed parameter space
$U_{bd}-\phi_{d}$ within the $1\sigma$ limit is shown in FIG. 1. The allowed
parameter ranges are
$\displaystyle 1.60\times 10^{-6}\leq|U_{bd}|\leq 2.22\times 10^{-4},\hskip
14.22636pt92.70^{\circ}\leq\phi_{d}\leq 360^{\circ},$ $\displaystyle
1.32\times 10^{-5}\leq|U_{bd}|\leq 1.69\times 10^{-4},\hskip
14.22636pt211.55^{\circ}\leq\phi_{d}\leq 360^{\circ}.$ (31)
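The interference structure of Eq. (30) can be checked directly in code: switching off the new-physics amplitude ($a^{\prime}r^{\prime}=0$) must recover the SM bracket of Eq. (17). The overall prefactor is kept symbolic in this sketch:

```python
import math

def rate_factor_np(r, a, rp, ap, delta1, deltap, gamma, phi_d):
    """The square bracket of Eq. (30): the CP-averaged rate divided by
    the prefactor tau_B p_c |lambda_c A_c|^2 / (8 pi m_B^2)."""
    G = 1.0 + (r * a)**2 + (rp * ap)**2
    return (G
            + 2 * r * a * math.cos(delta1) * math.cos(gamma)
            - 2 * rp * ap * math.cos(deltap) * math.cos(phi_d)
            - 2 * r * rp * a * ap * math.cos(delta1 - deltap) * math.cos(gamma + phi_d))

# Consistency check: with ap = 0 the SM bracket of Eq. (17) is recovered.
sm = 1 + (0.5 * 0.3)**2 + 2 * 0.5 * 0.3 * math.cos(0.7) * math.cos(1.26)
assert abs(rate_factor_np(0.5, 0.3, 0.0, 0.0, 0.7, 1.1, 1.26, 2.0) - sm) < 1e-12
```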
Figure 1: The allowed region of the new coupling parameter space $U_{bd}-\phi_{d}$
obtained from the branching fractions of the leptonic
$B_{d}\to\ell^{+}\ell^{-}$ $(\ell=e,\mu,\tau)$ and nonleptonic
$B_{d}\to\eta^{\prime}\pi^{0}$ and $B_{u}\to\rho^{-}\eta^{\prime}$ processes.
### III.2 Impact on the nonleptonic modes:
#### $B_{d}\to\phi\eta$:
Using the allowed parameter space, we present the variation of the CP-averaged
branching ratio $\mathfrak{B}$ with the weak phase $\phi_{d}$ for
three benchmark values of $|U_{bd}|$, namely $2\times
10^{-5}$, $4\times 10^{-5}$ and $6\times 10^{-5}$, in the left
panel of FIG. 2. The central black dotted line corresponds to the SM value, whereas the
red dot-dashed lines delimiting the yellow band represent its $1\sigma$
uncertainty. The figure shows that, as the weak
phase varies, the observable for the benchmark value $|U_{bd}|=2\times 10^{-5}$ (blue line)
deviates significantly from its SM contribution in the region
$0\leq\phi_{d}\leq 240^{\circ}$. Similarly, for the
other two benchmark values (purple and green lines), the CP-averaged branching
fraction also departs appreciably from its standard model prediction.
Moreover, within the constrained regions of $U_{bd}$ and
$\phi_{d}$ given in Eq. (31), the observable receives a constructive
contribution relative to the standard model value. In the right panel we show the
variation of the observable (in units of $10^{-8}$) over all
allowed values of the parameter $|U_{bd}|$ and the weak phase
$\phi_{d}$.
Figure 2: $B_{d}\to\phi\eta$: Variation of the CP-averaged branching ratio (in
units of $10^{-8}$) ($\mathit{i}$) for the benchmark points of $U_{bd}$,
$2\times 10^{-5}$ (blue), $4\times 10^{-5}$ (purple) and $6\times 10^{-5}$
(green), as a function of the new weak phase $\phi_{d}$, where the dashed black line
represents the SM value and the red dot-dashed lines with the yellow
band denote its $1\sigma$ uncertainty (left panel), and ($\mathit{ii}$)
for all allowed values of $U_{bd}$ and $\phi_{d}$ (right panel).
#### $B_{d}\to\phi\eta^{\prime}$:
Similarly, in FIG. 3 we display the impact of the new coupling parameter on the
variation of the CP-averaged branching ratio with the weak phase $\phi_{d}$
for the decay mode $B_{d}\to\phi\eta^{\prime}$. We study three different
values of $|U_{bd}|$, namely $5\times 10^{-5},8\times
10^{-5}$ and $1\times 10^{-4}$, corresponding to the blue, cyan and red
lines in the left panel, respectively. The
black dotted line and the magenta dot-dashed lines (enclosing the green
band) denote the SM value and its $1\sigma$ error, respectively. One can
see clearly in the left panel that, for the benchmark point
corresponding to the blue line, the observable moves toward
the $1\sigma$ range of the SM in the region $240^{\circ}\leq\phi_{d}\leq
360^{\circ}$, while it lies above the SM elsewhere.
Moreover, it can be significantly enhanced over the SM value for
the other two benchmark points, and it receives sizeable contributions in the
constrained regions of the NP parameters given in Eq. (31). The right panel
shows the impact of all allowed values of the new-physics
parameters on the CP-averaged branching ratio (in units of $10^{-8}$).
Figure 3: $B_{d}\to\phi\eta^{\prime}$: Variation of the CP-averaged branching
ratio (in units of $10^{-8}$) with the new weak phase $\phi_{d}$
($\mathit{i}$) for the benchmark points of $U_{bd}$, $5\times
10^{-5}$ (blue), $8\times 10^{-5}$ (cyan) and $1\times 10^{-4}$ (red) (left
panel), where the dashed black line represents the SM value along with its
$1\sigma$ error (green band), and ($\mathit{ii}$) for all allowed values
of $U_{bd}$ and $\phi_{d}$ (right panel).
#### $B_{d}\to\phi\pi^{0}$:
Here we investigate the CP-averaged branching ratio of the process
$B_{d}\to\phi\pi^{0}$ as a function of the weak phase $\phi_{d}$. The $Z$-$b$-$d$
coupling parameter $U_{bd}$ contributes significantly to the observable in
the NP scenario, as displayed in the left panel of FIG. 4
for three benchmark inputs. The black dotted line corresponds to the SM
prediction, whereas the light blue band delimited by the red dot-dashed
lines denotes its $1\sigma$ deviation. For the three different inputs of
$|U_{bd}|$, we observe that the observable deviates significantly from the
SM result, with larger deviations as the coupling
parameter $|U_{bd}|$ increases. In the range $0^{\circ}\leq\phi_{d}\leq
230^{\circ}$, the CP-averaged branching ratio departs markedly from
the standard model value. In addition, the right panel shows the
variation of the observable over all allowed values of the new parameter and the phase
$\phi_{d}$; the observable
is significantly affected in the regions of the new coupling parameters given in
Eq. (31).
Figure 4: $B_{d}\to\phi\pi^{0}$: Variation of the CP-averaged branching ratio (in
units of $10^{-8}$) ($\mathit{i}$) for three benchmark points of $U_{bd}$,
$5\times 10^{-5}$ (magenta), $9\times 10^{-5}$ (green) and $2\times
10^{-4}$ (cyan), as a function of the new weak phase $\phi_{d}$ (left panel), where the
dashed black line represents the SM value along with the shaded band of
$1\sigma$ uncertainty, and ($\mathit{ii}$) for all allowed values of
$U_{bd}$ and $\phi_{d}$ (right panel).
#### $B_{d}\to\phi\omega$:
In the study of the $B_{d}\to\phi\omega$ process, the new-physics parameter
contributes substantially to the variation of the CP-averaged branching ratio with
respect to the phase $\phi_{d}$. The left panel of FIG. 5
shows that, for the benchmark values of
$|U_{bd}|$, this decay mode receives sizeable contributions beyond its standard
model value. The central black dotted line, together with the grey dot-dashed
lines enclosing the cyan band, indicates the SM prediction and its $1\sigma$ uncertainty.
A closer look at the contribution for the input value of
$|U_{bd}|=4\times 10^{-5}$ $(9\times 10^{-5})$ shows that the observable deviates less in the ranges
$0^{\circ}\leq\phi_{d}\leq 95^{\circ}$ and $255^{\circ}\leq\phi_{d}\leq 360^{\circ}$
($0^{\circ}\leq\phi_{d}\leq 110^{\circ}$ and $240^{\circ}\leq\phi_{d}\leq
360^{\circ}$), while the largest benchmark departs from the $1\sigma$ range more strongly than
the other two contributions in the spans
$0^{\circ}\leq\phi_{d}\leq 155^{\circ}$ and $200^{\circ}\leq\phi_{d}\leq
360^{\circ}$. As for the other decay modes discussed above, we
vary the observable over all allowed values of the new-physics parameters in
the right panel of the figure. Furthermore, in the regions of the
sizeable parameters given in Eq. (31), the physical observable is
significantly affected in the presence of the VLDQ model.
Figure 5: $B_{d}\to\phi\omega$: Variation of the CP-averaged branching ratio (in
units of $10^{-8}$) ($\mathit{i}$) for the benchmark points of $U_{bd}$,
$4\times 10^{-5}$ (blue), $9\times 10^{-5}$ (red) and $2.2\times 10^{-4}$
(purple), as a function of the new weak phase $\phi_{d}$ (left panel), where the dashed
black line represents the SM value, and ($\mathit{ii}$) for all allowed values
of $U_{bd}$ and $\phi_{d}$ (right panel).
## IV Conclusion
We have scrutinized the decay modes
$B_{d}\to\phi(\eta^{(\prime)},\pi,\omega)$, induced by the $b\to d$ quark-level
transition, beyond the standard model. As the new-physics scenario, we have
considered the vector-like down quark model, in which a new quark is
added to the SM, consequently allowing $Z$-mediated
FCNC interactions at tree level. Since the leptonic modes
$B_{d}\to\ell^{+}\ell^{-}$ $(\ell=e,\mu,\tau)$ and the nonleptonic modes
$B_{d}\to\pi\eta^{\prime}$ and $B_{u}\to\rho^{-}\eta^{\prime}$ show
discrepancies between the SM and the experimental values, we have investigated them in the
VLDQ model. In the presence of new physics, we constrained the
parameter space associated with the tree-level “$Z-b-d$” interaction.
We found that a sizeable new coupling parameter $U_{bd}$,
constrained from both the leptonic and nonleptonic modes, provides
significant contributions to the $B_{d}\to\phi(\eta^{(\prime)},\pi,\omega)$
processes.
###### Acknowledgements.
MKM would like to thank the Department of Science and Technology (DST),
Inspire Fellowship Division, Government of India, for financial support through
Grant No. IF160303. MKM also acknowledges Prof. Anjan Giri for his support and
useful discussions.
## References
* Cheng and Chua (2009) H.-Y. Cheng and C.-K. Chua, Phys. Rev. D 80, 114008 (2009), eprint 0909.5229.
* Beneke and Neubert (2003) M. Beneke and M. Neubert, Nucl. Phys. B675, 333 (2003), eprint hep-ph/0308039.
* Beneke et al. (2007) M. Beneke, J. Rohrer, and D. Yang, Nucl. Phys. B 774, 64 (2007), eprint hep-ph/0612290.
* Zyla et al. (2020) P. Zyla et al. (Particle Data Group), PTEP 2020, 083C01 (2020).
* Buchalla et al. (1996) G. Buchalla, A. J. Buras, and M. E. Lautenbacher, Rev. Mod. Phys. 68, 1125 (1996), eprint hep-ph/9512380.
* Benayoun et al. (1999) M. Benayoun, L. DelBuono, S. Eidelman, V. Ivanchenko, and H. B. O’Connell, Phys. Rev. D 59, 114027 (1999), eprint hep-ph/9902326.
* Kucukarslan and Meissner (2006) A. Kucukarslan and U.-G. Meissner, Mod. Phys. Lett. A 21, 1423 (2006), eprint hep-ph/0603061.
* Benayoun et al. (2008) M. Benayoun, P. David, L. DelBuono, O. Leitner, and H. O’Connell, Eur. Phys. J. C 55, 199 (2008), eprint 0711.4482.
* Qian and Ma (2008) W. Qian and B.-Q. Ma, Phys. Rev. D 78, 074002 (2008), eprint 0809.4411.
* Feldmann et al. (1998) T. Feldmann, P. Kroll, and B. Stech, Phys. Rev. D 58, 114006 (1998), eprint hep-ph/9802409.
* Ambrosino et al. (2009) F. Ambrosino et al., JHEP 07, 105 (2009), eprint 0906.3819.
* de Groot et al. (2003) N. de Groot, W. N. Cottingham, and I. B. Whittingham, Phys. Rev. D68, 113005 (2003), eprint hep-ph/0308269.
* Zhang and Xiao (2008) Z.-Q. Zhang and Z.-J. Xiao (2008), eprint 0807.2024.
* Cheng et al. (2015) H.-Y. Cheng, C.-W. Chiang, and A.-L. Kuo, Phys. Rev. D 91, 014011 (2015), eprint 1409.5026.
* Bao et al. (2008) S.-S. Bao, F. Su, Y.-L. Wu, and C. Zhuang, Phys. Rev. D 77, 095004 (2008), eprint 0801.2596.
* Lü et al. (2019) C.-D. Lü, Y.-L. Shen, Y.-M. Wang, and Y.-B. Wei, JHEP 01, 024 (2019), eprint 1810.00819.
* Duplancic and Melic (2015) G. Duplancic and B. Melic, JHEP 11, 138 (2015), eprint 1508.05287.
* Gubernari et al. (2019) N. Gubernari, A. Kokulu, and D. van Dyk, JHEP 01, 150 (2019), eprint 1811.00983.
* Bharucha et al. (2016) A. Bharucha, D. M. Straub, and R. Zwicky, JHEP 08, 098 (2016), eprint 1503.05534.
* Aoki et al. (2020) S. Aoki et al. (Flavour Lattice Averaging Group), Eur. Phys. J. C80, 113 (2020), eprint 1902.08191.
* Deshpande et al. (2004) N. Deshpande, D. K. Ghosh, and X.-G. He, Phys. Rev. D 70, 093003 (2004), eprint hep-ph/0407021.
* Giri and Mohanta (2003) A. K. Giri and R. Mohanta, Phys. Rev. D 68, 014020 (2003), eprint hep-ph/0306041.
* Buras and Lindner (1998) A. Buras and M. Lindner, eds., _Heavy flavours II_ , vol. 15 (WSP, Singapore, 1998).
* Alok et al. (2016) A. K. Alok, S. Banerjee, D. Kumar, and S. Uma Sankar, Nucl. Phys. B 906, 321 (2016), eprint 1402.1023.
* Mohapatra (2020) M. K. Mohapatra, Phys. Rev. D 101, 075033 (2020), eprint 1910.14510.
* Giri and Mohanta (2004) A. K. Giri and R. Mohanta, Phys. Lett. B 594, 196 (2004), eprint hep-ph/0404091.
* Chen et al. (2010) C.-H. Chen, C.-Q. Geng, and W. Wang, JHEP 11, 089 (2010), eprint 1006.5216.
* Atwood and Hiller (2003) D. Atwood and G. Hiller (2003), eprint hep-ph/0307251.
* Deshpande and Ghosh (2004) N. Deshpande and D. K. Ghosh, Phys. Lett. B 593, 135 (2004), eprint hep-ph/0311332.
* Mawlong et al. (2008) B. Mawlong, R. Mohanta, and A. Giri, Phys. Lett. B 668, 116 (2008), eprint 0804.1231.
## Appendix A The parameters used in the nonleptonic $B\to PP,PV,VP$ decay
modes
We need the factorized matrix elements for the decay modes $B\to M_{1}M_{2}$,
which are given by
$\displaystyle
A_{M_{1}M_{2}}=i\frac{G_{F}}{\sqrt{2}}\begin{cases}m_{B}^{2}F_{0}^{B\to
M_{1}}(0)f_{M_{2}};&{\rm for}\hskip 5.69046ptM_{1},M_{2}=\rm Pseudoscalar,\\\
-2m_{V}\epsilon^{*}_{M_{1}}\cdot p_{B}\,A_{0}^{B\to M_{1}}(0)f_{M_{2}};&{\rm
for}\hskip 5.69046ptM_{1}=\rm Vector,M_{2}=\rm Pseudoscalar,\\\
-2m_{V}\epsilon^{*}_{M_{2}}\cdot p_{B}\,F_{+}^{B\to M_{1}}(0)f_{M_{2}};&{\rm
for}\hskip 5.69046ptM_{1}=\rm Pseudoscalar,M_{2}=\rm Vector.\end{cases}$ (32)
Here $F_{+}$ and $F_{0}$ denote the $B\to P$ (pseudoscalar) transition form
factors and $A_{0}$ the $B\to V$ (vector) one, whereas $f_{P}$ and $f_{V}$ denote
the decay constants of the pseudoscalar and vector mesons, respectively.
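The case structure of Eq. (32) can be sketched as a small dispatch (the function signature and its inputs are illustrative; the product $\epsilon^{*}_{M}\cdot p_{B}$ must be supplied from the kinematics):

```python
import math

G_F = 1.166e-5  # Fermi constant in GeV^-2

def factorized_amplitude(mode, m_B, form_factor, f_M2, m_V=None, eps_dot_pB=None):
    """A_{M1 M2} of Eq. (32) by final-state type; `form_factor` is
    F_0(0) for PP, A_0(0) for VP and F_+(0) for PV."""
    pref = 1j * G_F / math.sqrt(2.0)
    if mode == "PP":
        return pref * m_B**2 * form_factor * f_M2
    if mode in ("VP", "PV"):   # VP: M1 is the vector; PV: M2 is the vector
        return pref * (-2.0 * m_V * eps_dot_pB * form_factor * f_M2)
    raise ValueError(f"unsupported mode: {mode}")

# e.g. the PP normalization with F_0^{B->pi}(0) = 0.21, f_pi = 0.1302 GeV:
a_pp = factorized_amplitude("PP", 5.2797, 0.21, 0.1302)
```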
The expressions for the flavor operators in the QCD factorization approach are
as follows:
$\displaystyle\alpha_{1}(M_{1}M_{2})$ $\displaystyle=$ $\displaystyle
a_{1}(M_{1}M_{2}),$ $\displaystyle\alpha_{2}(M_{1}M_{2})$ $\displaystyle=$
$\displaystyle a_{2}(M_{1}M_{2}),$ $\displaystyle\alpha_{3}^{p}(M_{1}M_{2})$
$\displaystyle=$
$\displaystyle\begin{cases}a_{3}(M_{1}M_{2})-a_{5}(M_{1}M_{2});&{\rm
for}\hskip 5.69046ptM_{1}M_{2}=PP,VP,\\\
a_{3}(M_{1}M_{2})+a_{5}(M_{1}M_{2});&{\rm for}\hskip
5.69046ptM_{1}M_{2}=VV,PV,\end{cases}$
$\displaystyle\alpha_{4}^{p}(M_{1}M_{2})$ $\displaystyle=$
$\displaystyle\begin{cases}a_{4}^{p}(M_{1}M_{2})+r_{\chi}^{M_{2}}a_{6}^{p}(M_{1}M_{2});&{\rm
for}\hskip 5.69046ptM_{1}M_{2}=PP,PV,\\\
a_{4}^{p}(M_{1}M_{2})-r_{\chi}^{M_{2}}a_{6}^{p}(M_{1}M_{2});&{\rm for}\hskip
5.69046ptM_{1}M_{2}=VV,VP,\end{cases}$
$\displaystyle\alpha_{3,EW}^{p}(M_{1}M_{2})$ $\displaystyle=$
$\displaystyle\begin{cases}a_{9}(M_{1}M_{2})-a_{7}(M_{1}M_{2});&{\rm
for}\hskip 5.69046ptM_{1}M_{2}=PP,VP,\\\
a_{9}(M_{1}M_{2})+a_{7}(M_{1}M_{2});&{\rm for}\hskip
5.69046ptM_{1}M_{2}=VV,PV,\end{cases}$
$\displaystyle\alpha_{4,EW}^{p}(M_{1}M_{2})$ $\displaystyle=$
$\displaystyle\begin{cases}a_{10}^{p}(M_{1}M_{2})+r_{\chi}^{M_{2}}a_{8}^{p}(M_{1}M_{2});&{\rm
for}\hskip 5.69046ptM_{1}M_{2}=PP,PV,\\\
a_{10}^{p}(M_{1}M_{2})-r_{\chi}^{M_{2}}a_{8}^{p}(M_{1}M_{2});&{\rm for}\hskip
5.69046ptM_{1}M_{2}=VV,VP,\end{cases}$ (33)
where
$\displaystyle a_{i}^{p}(M_{1}M_{2})=\bigg{(}C_{i}+\frac{C_{i\pm
1}}{N_{c}}\bigg{)}N_{i}(M_{2})+\frac{C_{i\pm
1}}{N_{c}}\frac{C_{F}\alpha_{s}}{4\pi}\bigg{[}V_{i}(M_{2})+\frac{4\pi^{2}}{N_{c}}H_{i}(M_{1}M_{2})\bigg{]}+P_{i}^{p}(M_{2}),$
(34)
and $\hat{\alpha}_{4}^{p}=\alpha_{4}^{p}+\beta_{3}^{p}$ with the superscript
$p=u,c$. The details of the parameter $\beta_{3}^{p}$ are given below. The
quantity $N_{i}(M_{2})$ reads
$\displaystyle N_{i}(M_{2})=\begin{cases}0;&i=6,8,\\\ 1;&{\rm
otherwise},\end{cases}$ (35)
and $i$ runs from 1 to 10. The lower and upper signs correspond to even and
odd values of $i$, whereas the $C_{i}$'s and $C_{F}$ are the Wilson
coefficients and the color factor (with $N_{c}=3$), respectively. The
relevant contributions $V_{i}(M_{2})$ and $H_{i}(M_{1}M_{2})$ are the vertex
corrections and hard spectator interactions, whereas the term
$P_{i}^{p}(M_{2})$ represents the penguin contractions. The explicit
expressions are given below.
* •
Vertex corrections:
$\displaystyle
V_{i}(M_{2})=\begin{cases}\int_{0}^{1}dx{\rm\Phi}_{M_{2}}(x)\big{[}12{\rm
ln}\frac{m_{b}}{\mu}-18+g(x)\big{]};&{\rm for}\hskip 5.69046pti=1-4,9,10,\\\
\int_{0}^{1}dx{\rm\Phi}_{M_{2}}(x)\big{[}-12{\rm
ln}\frac{m_{b}}{\mu}+6-g(1-x)\big{]};&{\rm for}\hskip 5.69046pti=5,7,\\\
\int_{0}^{1}dx{\rm\Phi}_{m_{2}}(x)\big{[}-6+h(x)\big{]};&{\rm for}\hskip
5.69046pti=6,8,\end{cases}$
where
$\displaystyle g(x)$ $\displaystyle=$ $\displaystyle
3\bigg{(}\frac{1-2x}{1-x}{\rm ln}x-i\pi\bigg{)}$ $\displaystyle+$
$\displaystyle\bigg{[}2Li_{2}(x)-{\rm ln}^{2}x+\frac{2{\rm ln}x}{1-x}-(3+2\pi
i){\rm ln}x-(x\leftrightarrow 1-x)\bigg{]},$ $\displaystyle h(x)$
$\displaystyle=$ $\displaystyle 2Li_{2}(x)-{\rm ln}^{2}x-(1+2\pi i){\rm
ln}x-(x\leftrightarrow 1-x).$ (36)
The functions $\Phi_{P,V}(x)$ and $\Phi_{p,v}(x)$ appearing in the above
expressions are the leading-twist and twist-3 light-cone distribution
amplitudes, respectively Beneke and Neubert (2003).
* •
Hard spectator interactions:
$\displaystyle
H_{i}(M_{1}M_{2})=\frac{B_{M_{1}M_{2}}}{A_{M_{1}M_{2}}}\frac{m_{B}}{\lambda_{B}}\int_{0}^{1}dx\int_{0}^{1}dy\bigg{[}\frac{\Phi_{M_{2}}(x)\Phi_{M_{1}}(y)}{\bar{x}\bar{y}}+r_{\chi}^{M_{1}}\frac{\Phi_{M_{2}}(x)\Phi_{m_{1}}(y)}{x\bar{y}}\bigg{]}$
(37)
for $i=1-4,9,10,$
$\displaystyle
H_{i}(M_{1}M_{2})=-\frac{B_{M_{1}M_{2}}}{A_{M_{1}M_{2}}}\frac{m_{B}}{\lambda_{B}}\int_{0}^{1}dx\int_{0}^{1}dy\bigg{[}\frac{\Phi_{M_{2}}(x)\Phi_{M_{1}}(y)}{x\bar{y}}+r_{\chi}^{M_{1}}\frac{\Phi_{M_{2}}(x)\Phi_{m_{1}}(y)}{\bar{x}\bar{y}}\bigg{]}$
(38)
for $i=5,7$, and $H_{i}(M_{1}M_{2})=0$ for $i=6,8$, where we take
$\lambda_{B}=300$ MeV.
* •
Penguin contractions:
These terms at the order of $\alpha_{s}$ are given as
$\displaystyle P_{4}^{p}(M_{2})$ $\displaystyle=$
$\displaystyle\frac{C_{F}\alpha_{s}}{4\pi
N_{c}}\bigg{\\{}C_{1}\bigg{[}\frac{4}{3}{\rm
ln}\frac{m_{b}}{\mu}+\frac{2}{3}-G_{M_{2}}(s_{p})\bigg{]}+C_{3}\bigg{[}\frac{8}{3}{\rm
ln}\frac{m_{b}}{\mu}+\frac{4}{3}-G_{M_{2}}(0)-G_{M_{2}}(1)\bigg{]}\bigg{\\}}$
(39) $\displaystyle+$ $\displaystyle(C_{4}+C_{6})\bigg{[}\frac{4n_{f}}{3}{\rm
ln}\frac{m_{b}}{\mu}-(n_{f}-2)G_{M_{2}}(0)-G_{M_{2}}(s_{c})-G_{M_{2}}(1)\bigg{]}$
$\displaystyle-$ $\displaystyle 2C_{8g^{\rm
eff}}\int_{0}^{1}\frac{dx}{1-x}\Phi_{M_{2}}(x),$ $\displaystyle
P_{6}^{p}(M_{2}=P)$ $\displaystyle=$ $\displaystyle\frac{C_{F}\alpha_{s}}{4\pi
N_{c}}\bigg{\\{}C_{1}\bigg{[}\frac{4}{3}{\rm
ln}\frac{m_{b}}{\mu}+\frac{2}{3}-\hat{G}_{M_{2}}(s_{p})\bigg{]}+C_{3}\bigg{[}\frac{8}{3}{\rm
ln}\frac{m_{b}}{\mu}+\frac{4}{3}-\hat{G}_{M_{2}}(0)-\hat{G}_{M_{2}}(1)\bigg{]}$
$\displaystyle+$ $\displaystyle(C_{4}+C_{6})\bigg{[}\frac{4n_{f}}{3}{\rm
ln}\frac{m_{b}}{\mu}-(n_{f}-2)\hat{G}_{M_{2}}(0)-\hat{G}_{M_{2}}(s_{c})-\hat{G}_{M_{2}}(1)\bigg{]}-2C_{8g}^{\rm
eff}\bigg{\\}},$ $\displaystyle P_{6}^{p}(M_{2}=V)$ $\displaystyle=$
$\displaystyle-\frac{C_{F}\alpha_{s}}{4\pi
N_{c}}\bigg{\\{}C_{1}\bigg{[}\hat{G}_{M_{2}}(s_{p})\bigg{]}+C_{3}\bigg{[}\hat{G}_{M_{2}}(0)-\hat{G}_{M_{2}}(1)\bigg{]}$
$\displaystyle+$
$\displaystyle(C_{4}+C_{6})\bigg{[}(n_{f}-2)\hat{G}_{M_{2}}(0)+\hat{G}_{M_{2}}(s_{c})+\hat{G}_{M_{2}}(1)\bigg{]}\bigg{\\}},$
$\displaystyle P_{8}^{p}(M_{2}=P)$ $\displaystyle=$
$\displaystyle\frac{\alpha}{9\pi
N_{c}}\bigg{\\{}(C_{1}+N_{c}C_{2})\bigg{[}\frac{4}{3}{\rm
ln}\frac{m_{b}}{\mu}+\frac{2}{3}-\hat{G}_{M_{2}}(s_{p})\bigg{]}-3C_{7\gamma}^{\rm
eff}\bigg{\\}},$ $\displaystyle P_{8}^{p}(M_{2}=V)$ $\displaystyle=$
$\displaystyle-\frac{\alpha}{9\pi
N_{c}}(C_{1}+N_{c}C_{2})\hat{G}_{M_{2}}(s_{p}),$ (40) $\displaystyle
P_{10}^{p}$ $\displaystyle=$ $\displaystyle\frac{\alpha}{9\pi
N_{c}}\bigg{\\{}(C_{1}+N_{c}C_{2})\bigg{[}\frac{4}{3}{\rm
ln}\frac{m_{b}}{\mu}+\frac{2}{3}-G_{M_{2}}(s_{p})\bigg{]}-3C_{7\gamma}^{\rm
eff}\int_{0}^{1}\frac{dx}{1-x}\Phi_{M_{2}}(x)\bigg{\\}},$ (41)
where $n_{f}=5$, $s_{u}=(\frac{m_{u}}{m_{b}})^{2}\approx 0$ and
$s_{c}=(\frac{m_{c}}{m_{b}})^{2}$. The parameters $\alpha_{s}$ and $\alpha$
are the strong and electromagnetic coupling constants, respectively. The
functions $G_{M_{2}}(s)$ and $\hat{G}_{M_{2}}(s)$ are defined in Beneke and
Neubert (2003). In addition, the power-suppressed weak annihilation
contributions are given by
* •
Annihilation contribution:
$\displaystyle\beta_{i}^{p}(M_{1}M_{2})=\frac{if_{B}f_{M_{1}}f_{M_{2}}}{A_{M_{1}M_{2}}}b_{i}^{p},$
(42)
where
$\displaystyle b_{1}=\frac{C_{F}}{N_{c}^{2}}C_{1}A_{1}^{i},\hskip
5.69046ptb_{3}=\frac{C_{F}}{N_{c}^{2}}\big{[}C_{3}A_{1}^{i}+C_{5}(A_{3}^{i}+A_{3}^{f})+N_{c}C_{6}A_{3}^{f}\big{]},$
$\displaystyle b_{2}=\frac{C_{F}}{N_{c}^{2}}C_{2}A_{1}^{i},\hskip
5.69046ptb_{4}=\frac{C_{F}}{N_{c}^{2}}\big{[}C_{4}A_{1}^{i}+C_{6}A_{2}^{f}\big{]},$
$\displaystyle
b_{3,EW}^{p}=\frac{C_{F}}{N_{c}^{2}}\big{[}C_{9}A_{1}^{i}+C_{7}(A_{3}^{i}+A_{3}^{f})+N_{c}C_{8}A_{3}^{f}\big{]},$
$\displaystyle
b_{4,EW}^{p}=\frac{C_{F}}{N_{c}^{2}}\big{[}C_{10}A_{1}^{i}+C_{8}A_{2}^{i}\big{]}.$
(43)
The expressions of the quantities $A_{n}^{i,f}$ are given as follows.
Case - I ($M_{1}=M_{2}=P$):
$\displaystyle A_{1}^{i}\approx A_{2}^{i}\approx
2\pi\alpha_{s}\big{[}9(X_{A}-4+\frac{\pi^{2}}{3})+r_{\chi}^{M_{1}}r_{\chi}^{M_{2}}X_{A}^{2}\big{]},$
$\displaystyle A_{3}^{i}\approx
6\pi\alpha_{s}(r_{\chi}^{M_{1}}-r_{\chi}^{M_{2}})\big{(}X_{A}^{2}-2X_{A}+\frac{\pi^{2}}{3}\big{)},$
$\displaystyle A_{3}^{f}\approx
6\pi\alpha_{s}(r_{\chi}^{M_{1}}+r_{\chi}^{M_{2}})(2X_{A}^{2}-X_{A}),$
$\displaystyle A_{1}^{f}=A_{2}^{f}=0.$ (44)
Case - II ($M_{1}=V,M_{2}=P$):
$\displaystyle A_{1}^{i}\approx-A_{2}^{i}\approx
6\pi\alpha_{s}\big{[}3(X_{A}-4+\frac{\pi^{2}}{3})+r_{\chi}^{M_{1}}r_{\chi}^{M_{2}}(X_{A}^{2}-2X_{A})\big{]},$
$\displaystyle A_{3}^{i}\approx
6\pi\alpha_{s}\bigg{[}-3r_{\chi}^{M_{1}}\big{(}X_{A}^{2}-2X_{A}+\frac{\pi^{2}}{3}+4\big{)}+r_{\chi}^{M_{2}}\big{(}X_{A}^{2}-2X_{A}+\frac{\pi^{2}}{3}\big{)}\bigg{]},$
$\displaystyle A_{3}^{f}\approx
6\pi\alpha_{s}\big{[}3r_{\chi}^{M_{1}}(2X_{A}-1)(2-X_{A})-r_{\chi}^{M_{2}}(2X_{A}^{2}-X_{A})\big{]},$
$\displaystyle A_{1}^{f}=A_{2}^{f}=0.$ (45)
Case - III ($M_{1}=P,M_{2}=V$):
$\displaystyle A_{1}^{i}\approx-A_{2}^{i}\approx
6\pi\alpha_{s}\big{[}3(X_{A}-4+\frac{\pi^{2}}{3})+r_{\chi}^{M_{2}}r_{\chi}^{M_{1}}(X_{A}^{2}-2X_{A})\big{]},$
$\displaystyle A_{3}^{i}\approx
6\pi\alpha_{s}\bigg{[}-3r_{\chi}^{M_{2}}\big{(}X_{A}^{2}-2X_{A}+\frac{\pi^{2}}{3}+4\big{)}+r_{\chi}^{M_{1}}\big{(}X_{A}^{2}-2X_{A}+\frac{\pi^{2}}{3}\big{)}\bigg{]},$
$\displaystyle
A_{3}^{f}\approx-6\pi\alpha_{s}\big{[}3r_{\chi}^{M_{2}}(2X_{A}-1)(2-X_{A})-r_{\chi}^{M_{1}}(2X_{A}^{2}-X_{A})\big{]},$
$\displaystyle A_{1}^{f}=A_{2}^{f}=0.$ (46)
$A_{n}^{i,f}$: $n=1,2,3$ correspond to the operator structures
$(V-A)(V-A),(V-A)(V+A)$ and $(S-P)(S+P)$, respectively, whereas the
superscripts $i,f$ denote gluon emission from the initial and final states.
The chiral factor $r_{\chi}$ is given by
$\displaystyle
r_{\chi}^{P}(\mu)=\frac{2m_{P}^{2}}{m_{b}(\mu)(m_{1}+m_{2})(\mu)},\hskip
5.69046ptr_{\chi}^{V}(\mu)=\frac{2m_{V}}{m_{b}(\mu)}\frac{f_{V}^{\perp}(\mu)}{f_{V}}.$
(47)
The end-point divergence is parameterized as
$\displaystyle X_{A}={\rm
ln}\frac{m_{B}}{\Lambda_{QCD}}(1+{\rm\rho_{A}}e^{i\phi_{A}}),$ (48)
where ${\rm\rho_{A}}$ and $\phi_{A}$ are taken from Cheng and Chua (2009).
Modes | $\rm\rho_{A}$ | $\rm\phi_{A}$
---|---|---
$B\to PP$ | 1.10 | $-50^{\circ}$
$B\to PV$ | 0.87 | $-30^{\circ}$
$B\to VP$ | 1.07 | $-70^{\circ}$
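For concreteness, the end-point parameter $X_{A}$ of (48) can be evaluated for the table values above. The sketch below assumes illustrative numerical inputs $m_{B}=5.279$ GeV and $\Lambda_{QCD}=0.225$ GeV, which are not taken from the text:

```python
import cmath
import math

def end_point_divergence(rho_A, phi_A_deg, m_B=5.279, lambda_QCD=0.225):
    """X_A = ln(m_B / Lambda_QCD) * (1 + rho_A * exp(i * phi_A)), Eq. (48).

    m_B and Lambda_QCD (in GeV) are illustrative values, not from the text.
    """
    phi_A = math.radians(phi_A_deg)
    return math.log(m_B / lambda_QCD) * (1 + rho_A * cmath.exp(1j * phi_A))

# Table values for the three classes of modes
modes = {"PP": (1.10, -50.0), "PV": (0.87, -30.0), "VP": (1.07, -70.0)}
for mode, (rho, phi) in modes.items():
    X = end_point_divergence(rho, phi)
    print(f"{mode}: X_A = {X.real:.3f} {X.imag:+.3f}i")
```

Since $\phi_{A}<0$ for all three modes, the imaginary part of $X_{A}$ is negative in each case.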
## Appendix B The parameters used in the nonleptonic $B\to VV$ decay mode
For the decay process $B\to VV$ the helicity amplitudes depend upon the
factorized matrix elements as Beneke et al. (2007)
$\displaystyle X_{B_{d}\to
V_{1},V_{2}}=\big{\langle}V_{2}|(\bar{q}_{2}q_{3})_{V-A}|0\big{\rangle}\big{\langle}V_{1}|(\bar{q}_{1}b)_{V-A}|\bar{B}_{d}\big{\rangle},$
(49)
where the form factor and the decay constants are defined by
$\displaystyle\big{\langle}V(p,\epsilon^{*})|\bar{q}\gamma_{\mu}q^{\prime}|0\big{\rangle}=$
$\displaystyle-if_{V}m_{V}\epsilon_{\mu}^{*},$
$\displaystyle\big{\langle}V(p,\epsilon^{*})|\bar{q}\gamma_{\mu}(1-\gamma_{5})b|\bar{B}_{d}(p_{B})\big{\rangle}=$
$\displaystyle-\epsilon_{\mu}^{*}(m_{B}+m_{V})A_{1}^{B_{d}V}(q^{2})+(p_{B}+p)_{\mu}(\epsilon^{*}.p_{B})\frac{A_{2}^{B_{d}V}(q^{2})}{m_{B}+m_{V}}$
(50)
$\displaystyle+q_{\mu}(\epsilon^{*}.p_{B})\frac{2m_{V}}{q^{2}}[A_{3}^{B_{d}V}(q^{2})-A_{0}^{B_{d}V}(q^{2})]$
$\displaystyle-i\mathcal{E}_{\mu\nu\alpha\beta}\epsilon^{*\nu}p_{B}^{\alpha}p^{\beta}\frac{2V^{B_{d}V}(q^{2})}{m_{B}+m_{V}},$
where $q=p_{B}-p$. The expressions of the helicity amplitudes are given as
$\displaystyle
X^{0}_{V_{1}V_{2}}=i\frac{G_{F}}{\sqrt{2}}m_{B}^{2}f_{V_{2}}A_{0}^{B\to
V_{1}}(0),\hskip
8.5359ptX_{V_{1}V_{2}}^{\pm}=\frac{G_{F}}{\sqrt{2}}m_{B}m_{2}f_{V_{2}}F_{\pm}^{B\to
V_{1}}(0),$ (51)
where the form factor $F_{\pm}^{B\to V_{1}}$ is defined as
$\displaystyle F_{\pm}^{B\to V_{1}}(q^{2})=(1+\frac{m_{1}}{m_{B}})A_{1}^{B\to
V_{1}}(q^{2})\mp(1-\frac{m_{1}}{m_{B}})V^{B\to V_{1}}(q^{2}).$ (52)
The assembled form of the coefficients $a_{i}$ is given as
$\displaystyle a_{i}^{p,h}(M_{1}M_{2})=\bigg{(}C_{i}+\frac{C_{i\pm
1}}{N_{c}}\bigg{)}N_{i}^{h}(M_{2})+\frac{C_{i\pm
1}}{N_{c}}\frac{C_{F}\alpha_{s}}{4\pi}\bigg{[}V_{i}^{h}(M_{2})+\frac{4\pi^{2}}{N_{c}}H_{i}^{h}(M_{1}M_{2})\bigg{]}+P_{i}^{h,p}(M_{2}),$
(53)
where the upper (lower) sign applies for $i$ odd (even). The superscript $p$
corresponds to the penguin contributions and is omitted for $i=1,2$. The
parameters for $h=0$ correspond to those given in Appendix A, with P replaced
by V in the final states PV. Since the contributions from positive helicity
are suppressed, we consider only the negative-helicity amplitudes (Beneke et
al., 2007); the LO parameter $N_{i}$ is given by
$\displaystyle N_{i}^{-}(M_{2})=\begin{cases}0;&i=\big{\\{}6,8\big{\\}}\\\
1;&{\rm otherwise},\end{cases}$ (54)
The vertex corrections are given as
$\displaystyle
V_{i}^{-}(M_{2})=\begin{cases}\int_{0}^{1}dx{\rm\Phi_{b2}}(x)\big{[}12{\rm
ln}\frac{m_{b}}{\mu}-18+g_{T}(x)\big{]};&{\rm for}\hskip
5.69046pti=\big{\\{}1-4,9,10\big{\\}}\\\
\int_{0}^{1}dx{\rm\Phi_{a2}}_{M_{2}}(x)\big{[}-12{\rm
ln}\frac{m_{b}}{\mu}+6-g_{T}(1-x)\big{]};&{\rm for}\hskip
5.69046pti=\big{\\{}5,7\big{\\}}\\\
\int_{0}^{1}dx{\rm\Phi}_{m_{2}}(x)\big{[}-6+h(x)\big{]};&{\rm for}\hskip
5.69046pti=\big{\\{}6,8\big{\\}}\end{cases}$
The parameter $g_{T}(x)$ is given as
$\displaystyle g_{T}(x)=g(x)+\frac{\ln x}{1-x},$ (55)
where $g(x)$ is given in Appendix A. The hard spectator contributions are
given by
$\displaystyle H_{i}^{-}$
$\displaystyle=-\frac{2f_{B}f_{V_{1}}^{\perp}}{m_{B}m_{b}F_{-}^{B\to
V_{1}}(0)}\frac{m_{b}}{\lambda_{B}}\int_{0}^{1}dxdy\frac{\phi_{1}^{\perp}(x)\phi_{b2}(y)}{\bar{x}^{2}y},\hskip
28.45274pti=\big{\\{}1-4,9,10\big{\\}},$ $\displaystyle H_{i}^{-}$
$\displaystyle=\frac{2f_{B}f_{V_{1}}^{\perp}}{m_{B}m_{b}F_{-}^{B\to
V_{1}}(0)}\frac{m_{b}}{\lambda_{B}}\int_{0}^{1}dxdy\frac{\phi_{1}^{\perp}(x)\phi_{a2}(y)}{\bar{x}^{2}\bar{y}},\hskip
28.45274pti=\big{\\{}5,7\big{\\}},$ $\displaystyle H_{i}^{-}$
$\displaystyle=\frac{2f_{B}f_{V_{1}}}{m_{B}m_{b}F_{-}^{B\to
V_{1}}(0)}\frac{m_{b}m_{1}}{m_{2}^{2}}\frac{m_{b}}{\lambda_{B}}\int_{0}^{1}dxdy\frac{\phi_{a1}(x)\phi_{2}^{\perp}(y)}{y\bar{x}\bar{y}},\hskip
28.45274pti=\big{\\{}6,8\big{\\}},$ (56)
where the divergent integral is rendered finite through the defining
parameterization of Beneke et al. (2007)
$\displaystyle\int_{0}^{1}dx\frac{\phi_{1}^{\perp}}{\bar{x}^{2}}=\bigg{[}\lim_{u\to
1}\frac{\phi_{1}^{\perp}}{\bar{u}}\bigg{]}X_{H}^{V_{1}}+\int_{0}^{1}\frac{dx}{1-x}\bigg{[}\frac{\phi_{1}^{\perp}(x)}{1-x}-\bigg{(}\lim_{u\to
1}\frac{\phi_{1}^{\perp}(u)}{\bar{u}}\bigg{)}\bigg{]},$ (57)
where the asymptotic distribution amplitudes are given by
$\phi^{\perp}(x)=6x(1-x)$, $\phi_{a}(x)=3(1-x)^{2}$, and
$\phi_{b}(x)=3x^{2}$.
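As a quick numerical sanity check (a sketch of ours, not part of the original analysis), each of the three asymptotic distribution amplitudes above is normalized to unity on $[0,1]$:

```python
def midpoint_integral(f, a=0.0, b=1.0, n=100_000):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

phi_perp = lambda x: 6.0 * x * (1.0 - x)   # phi_perp(x) = 6 x (1 - x)
phi_a = lambda x: 3.0 * (1.0 - x) ** 2     # phi_a(x) = 3 (1 - x)^2
phi_b = lambda x: 3.0 * x ** 2             # phi_b(x) = 3 x^2

for name, phi in [("phi_perp", phi_perp), ("phi_a", phi_a), ("phi_b", phi_b)]:
    print(name, "integrates to", round(midpoint_integral(phi), 6))
```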
# The hyperbolic Anderson model: Moment estimates of the Malliavin derivatives
and applications111Dedicated to Professor István Gyöngy on the occasion of his
seventieth birthday
Raluca M. Balan222University of Ottawa, Department of Mathematics and
Statistics, STEM Building, 150 Louis-Pasteur Private, Ottawa, ON, K1N 6N5,
Canada. E-mail<EMAIL_ADDRESS>Research supported by a grant from the
Natural Sciences and Engineering Research Council of Canada., David
Nualart333Department of Mathematics, University of Kansas, 405 Snow Hall,
Lawrence, KS, 66045, USA. Email<EMAIL_ADDRESS>Supported by NSF Grant DMS
1811181., Lluís Quer-Sardanyons444Departament de Matemàtiques, Universitat
Autònoma de Barcelona, 08193, Cerdanyola del Vallès, Catalonia, Spain. E-mail:
<EMAIL_ADDRESS>Supported by the grant PGC2018-097848-B-I00 (Ministerio de
Economía y Competitividad).,
Guangqu Zheng555Corresponding author. School of Mathematics, The University of
Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh,
EH9 3FD, United Kingdom. Email<EMAIL_ADDRESS>
###### Abstract
In this article, we study the hyperbolic Anderson model driven by a space-time
_colored_ Gaussian homogeneous noise with spatial dimension $d=1,2$. Under
mild assumptions, we provide $L^{p}$-estimates of the iterated Malliavin
derivative of the solution in terms of the fundamental solution of the wave
equation. To achieve this goal, we rely heavily on the _Wiener chaos
expansion_ of the solution.
Our first application consists of _quantitative central limit theorems_ for
spatial averages of the solution to the hyperbolic Anderson model, where the
rates of convergence are described by the total variation distance. These
quantitative results have been elusive so far because the temporal correlation
of the noise prevents the use of Itô calculus. A _novel_ ingredient to
overcome this difficulty is the _second-order Gaussian Poincaré inequality_, coupled with the
application of the aforementioned $L^{p}$-estimates of the first two Malliavin
derivatives. Besides, we provide the corresponding functional central limit
theorems.
As a second application, we establish the absolute continuity of the law for
the hyperbolic Anderson model. The $L^{p}$-estimates of Malliavin derivatives
are crucial ingredients to verify a local version of the Bouleau-Hirsch criterion
for absolute continuity. Our approach substantially simplifies the arguments
for the one-dimensional case, which has been studied in the recent work by
Balan, Quer-Sardanyons and Song (2019).
Mathematics Subject Classifications (2010): 60H15; 60H07; 60G15; 60F05.
Keywords: Hyperbolic Anderson model; Wiener chaos expansion; Malliavin
calculus; Second-order Poincaré inequality; Quantitative central limit
theorem; Riesz kernel; Dalang’s condition.
## 1 Introduction
One of the main tools of modern stochastic analysis is Malliavin calculus. In
short, this is a differential calculus on a Gaussian space that represents an
infinite-dimensional generalization of the usual analytical concepts on a
Euclidean space. The Malliavin calculus (also known as the
stochastic calculus of variations) was initiated by Paul Malliavin [21] to
give a probabilistic proof of Hörmander’s “sum of squares” theorem. It has
been further developed by Stroock, Bismut, Watanabe and others. One of the
main applications of Malliavin calculus is the study of regularity properties
of probability laws, for example, the laws of the solutions to certain
stochastic differential equations and stochastic partial differential
equations (SPDEs), see _e.g._ [27, Chapter 2]. The Malliavin calculus is also
useful in formulating and interpreting stochastic (partial) differential
equations when the solution is not adapted to a Brownian filtration, which is
the case of SPDEs driven by a Gaussian noise that is colored in time.
Recently, the Malliavin calculus has found another important application in
the work of Nualart and Ortiz-Latorre [28], which paved the road for _Stein to
meet Malliavin_. The authors of [28] applied the Malliavin calculus (notably
the integration by parts formula) to characterize the convergence in law of a
sequence of multiple Wiener integrals, and they were able to give new proofs
for the fourth moment theorems of Nualart, Peccati and Tudor [30, 37]. Soon
after the work [28], Nourdin and Peccati combined Malliavin calculus and
Stein’s method of normal approximation to quantify the fourth moment theorem.
Their work [24] marked the birth of the so-called Malliavin-Stein approach.
This combination works admirably well, partially because one of the
fundamental ingredients in Stein’s method—the so-called Stein’s lemma
(2.6)—that characterizes the normal distribution, is nothing else but a
particular case of the integration by parts formula (2.5) in Malliavin
calculus. We refer interested readers to [44, Section 1.2] for a friendly
introduction to this approach.
The central object of study in this paper is the stochastic wave equation with
_linear_ Gaussian multiplicative noise (in _Skorokhod sense_):
$\displaystyle\begin{cases}\dfrac{\partial^{2}u}{\partial t^{2}}=\Delta
u+u\dot{W}\\\ u(0,x)=1,\quad\dfrac{\partial u}{\partial
t}(0,x)=0\end{cases}~{}\text{on $\mathbb{R}_{+}\times\mathbb{R}^{d}$ for
$d\in\\{1,2\\}$,}$ (1.1)
where $\Delta$ is the Laplacian in space variables and the Gaussian noise
$\dot{W}$ has the following correlation structure
$\mathbb{E}\big{[}\dot{W}(t,x)\dot{W}(s,y)\big{]}=\gamma_{0}(t-s)\gamma(x-y),$
with the following standing assumptions:
* (i)
$\gamma_{0}:\mathbb{R}\to[0,\infty]$ is locally integrable and non-negative
definite;
* (ii)
$\gamma$ is a non-negative and non-negative definite measure on
$\mathbb{R}^{d}$ whose spectral measure $\mu$666The spectral measure $\mu$ of
$\gamma$ is a tempered measure on $\mathbb{R}^{d}$ such that
$\gamma=\mathcal{F}\mu$, that is, $\gamma$ is the Fourier transform of $\mu$,
and its existence is guaranteed by the Bochner-Schwarz theorem. satisfies
_Dalang’s condition_ :
$\displaystyle\qquad\qquad\quad\int_{\mathbb{R}^{d}}\frac{1}{1+|\xi|^{2}}\mu(d\xi)<\infty,$
(1.2)
where $|\xi|$ denotes the Euclidean norm of $\xi\in\mathbb{R}^{d}$.
An important example of the temporal correlation is the Riesz kernel
$\gamma_{0}(t)=|t|^{-\alpha_{0}}$ for some $\alpha_{0}\in(0,1)$ (with
$\gamma_{0}(0)=\infty$).
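For orientation, Dalang's condition (1.2) can also be checked directly for a spatial Riesz kernel $\gamma(x)=|x|^{-\beta}$ with $\beta\in(0,d)$, whose spectral measure is $\mu(d\xi)=c_{d,\beta}|\xi|^{\beta-d}\,d\xi$ for some constant $c_{d,\beta}>0$; the following computation is a standard sketch, passing to polar coordinates:

```latex
\int_{\mathbb{R}^{d}}\frac{\mu(d\xi)}{1+|\xi|^{2}}
  = c_{d,\beta}\int_{\mathbb{R}^{d}}\frac{|\xi|^{\beta-d}}{1+|\xi|^{2}}\,d\xi
  \asymp \int_{0}^{1}r^{\beta-1}\,dr+\int_{1}^{\infty}r^{\beta-3}\,dr
  <\infty \iff 0<\beta<2.
```

In particular, for $d=2$ every kernel $|x|^{-\beta}$ with $\beta\in(0,2)$ satisfies (1.2).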
Equation (1.1) is also known in the literature as the _hyperbolic Anderson
model_ , by analogy with the parabolic Anderson model in which the wave
operator is replaced by the heat operator. The noise $\dot{W}$ can be formally
realized as an isonormal Gaussian process $W=\\{W(\phi):\phi\in\mathcal{H}\\}$
and here $\mathcal{H}$ is a Hilbert space that is the completion of the set
$C^{\infty}_{c}\big{(}\mathbb{R}_{+}\times\mathbb{R}^{d})$ of infinitely
differentiable functions with compact support under the inner product
$\displaystyle\langle\phi,\psi\rangle_{\mathcal{H}}$
$\displaystyle=\int_{\mathbb{R}_{+}^{2}\times\mathbb{R}^{2d}}\phi(t,x)\psi(s,y)\gamma_{0}(t-s)\gamma(x-y)dtdxdsdy$
(1.3)
$\displaystyle=\int_{\mathbb{R}_{+}^{2}}dtds\gamma_{0}(t-s)\int_{\mathbb{R}^{d}}dx\phi(t,x)\big{[}\psi(s,\bullet)\ast\gamma\big{]}(x),$
(1.4)
where we write $\gamma(x)$ for the density of $\gamma$ if it exists and we
shall use the definition (1.4) instead of (1.3) when $\gamma$ is a measure. In
(1.4), $\ast$ denotes the convolution in the space variable and
$\gamma_{0}(t)=\gamma_{0}(-t)$ for $t<0$. We denote by $\mathcal{H}^{\otimes
p}$ the $p$th tensor product of $\mathcal{H}$ for $p\in\mathbb{N}^{\ast}$, see
Section 2 for more details.
As mentioned before, the existence of a temporal correlation $\gamma_{0}$
prevents us from defining equation (1.1) in the Itô sense due to a lack of the
martingale structure. In the recent work [3] by Balan and Song, the following
results are established using Malliavin calculus. Let $G_{t}$ denote the
fundamental solution to the corresponding deterministic wave equation, that
is, for $(t,z)\in(0,\infty)\times\mathbb{R}^{d}$,
$\displaystyle
G_{t}(z):=\begin{cases}\dfrac{1}{2}\mathbf{1}_{\\{|z|<t\\}}\quad&\text{if
$d=1$};\\\
\dfrac{1}{2\pi\sqrt{t^{2}-|z|^{2}}}\mathbf{1}_{\\{|z|<t\\}}\quad&\text{if
$d=2$}.\end{cases}$ (1.5)
To ease the notation, we will stick to the convention that
$G_{t}(z)=0$ when $t\leq 0$. (1.6)
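A useful sanity check on (1.5), sketched below in Python (our illustration, not part of the original text): for both $d=1$ and $d=2$, the total mass $\int_{\mathbb{R}^{d}}G_{t}(z)\,dz$ equals $t$, which is what keeps the iterated kernels in the chaos expansion integrable.

```python
import math

def mass_d1(t):
    # d = 1: G_t(z) = 1/2 on |z| < t, so the integral is t exactly.
    return 0.5 * (2.0 * t)

def mass_d2(t, n=100_000):
    # d = 2: integrate 1/(2*pi*sqrt(t^2 - |z|^2)) over the disk |z| < t in
    # polar coordinates; the substitution r = t*sin(theta) removes the
    # boundary singularity, leaving int_0^{pi/2} t*sin(theta) d(theta) = t.
    h = (math.pi / 2) / n
    return sum(t * math.sin((k + 0.5) * h) for k in range(n)) * h

print(mass_d1(1.5), mass_d2(1.5))
```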
###### Definition 1.1.
Fix $d\in\\{1,2\\}$. We say that a square-integrable process
$u=\\{u(t,x):(t,x)\in\mathbb{R}_{+}\times\mathbb{R}^{d}\\}$ is a _mild
Skorokhod solution_ to the hyperbolic Anderson model (1.1) if $u$ has a
jointly measurable modification $($still denoted by $u$$)$ such that
$\sup\\{\mathbb{E}[u(t,x)^{2}]:(t,x)\in[0,T]\times\mathbb{R}^{d}\\}<\infty$
for any finite $T$; and for any $t>0$ and $x\in\mathbb{R}^{d}$, the following
equality holds in $L^{2}(\Omega)$:
$u(t,x)=1+\int_{0}^{t}\int_{\mathbb{R}^{d}}G_{t-s}(x-y)u(s,y)W(ds,dy),$
where the above stochastic integral is understood in the _Skorokhod sense_ and
the process
$(s,y)\in\mathbb{R}_{+}\times\mathbb{R}^{d}\longmapsto\mathbf{1}_{(0,t)}(s)G_{t-s}(x-y)u(s,y)$
is Skorokhod integrable. See Definition 5.1 in [3] and Definition 1.1 in [2].
It has been proved in [3, Section 5] that equation (1.1) admits a unique mild
Skorokhod solution $u$ with the following Wiener chaos expansion:
$\displaystyle u(t,x)=1+\sum_{n\geq
1}I_{n}\big{(}\widetilde{f}_{t,x,n}\big{)},$ (1.7)
where $I_{n}$ denotes the $n$th multiple Wiener integral associated to the
isonormal Gaussian process $W$ (see Section 2 for more details),
$f_{t,x,n}\in\mathcal{H}^{\otimes n}$ is defined by (with the convention (1.6)
in mind)
$f_{t,x,n}(t_{1},x_{1},\dots,t_{n},x_{n}):=G_{t-t_{1}}(x-x_{1})G_{t_{1}-t_{2}}(x_{1}-x_{2})\cdots
G_{t_{n-1}-t_{n}}(x_{n-1}-x_{n}),$ (1.8)
and $\widetilde{f}_{t,x,n}$ is the canonical symmetrization of
$f_{t,x,n}\in\mathcal{H}^{\otimes n}$ given by
$\widetilde{f}_{t,x,n}(t_{1},x_{1},\dots,t_{n},x_{n}):=\frac{1}{n!}\sum_{\sigma\in\mathfrak{S}_{n}}f_{t,x,n}(t_{\sigma(1)},x_{\sigma(1)},\dots,t_{\sigma(n)},x_{\sigma(n)}),$
(1.9)
where the sum in (1.9) runs over $\mathfrak{S}_{n}$, the set of permutations
on $\\{1,2,\dots,n\\}$. For example,
$f_{t,x,1}(t_{1},x_{1})=G_{t-t_{1}}(x-x_{1})$ and
$\widetilde{f}_{t,x,2}(t_{1},x_{1},t_{2},x_{2})=\frac{1}{2}\Big{(}G_{t-t_{1}}(x-x_{1})G_{t_{1}-t_{2}}(x_{1}-x_{2})+G_{t-t_{2}}(x-x_{2})G_{t_{2}-t_{1}}(x_{2}-x_{1})\Big{)}.$
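The symmetrization (1.9) is easy to implement directly. The following Python sketch (our illustration, taking $d=1$ and the kernel $G$ of (1.5)-(1.6)) checks that $\widetilde{f}_{t,x,2}$ is symmetric in its arguments while $f_{t,x,2}$ itself vanishes unless the times decrease along the chain.

```python
import itertools

def G(t, z):
    # d = 1 fundamental solution (1.5), with convention (1.6): G_t = 0 for t <= 0.
    return 0.5 if (t > 0 and abs(z) < t) else 0.0

def f(t, x, points):
    # f_{t,x,n} of (1.8): product of kernels along the chain (t,x) -> (t1,x1) -> ...
    times = [t] + [s for s, _ in points]
    space = [x] + [y for _, y in points]
    prod = 1.0
    for k in range(len(points)):
        prod *= G(times[k] - times[k + 1], space[k] - space[k + 1])
    return prod

def f_tilde(t, x, points):
    # Canonical symmetrization (1.9): average of f over all permutations.
    perms = list(itertools.permutations(points))
    return sum(f(t, x, list(p)) for p in perms) / len(perms)

p1, p2 = (0.7, 0.1), (0.3, 0.2)        # (time, space) pairs with 0 < t2 < t1 < t
print(f(1.0, 0.0, [p1, p2]))           # 0.25: the time-ordered chain survives
print(f(1.0, 0.0, [p2, p1]))           # 0.0: reversed times give a zero kernel
print(f_tilde(1.0, 0.0, [p1, p2]))     # 0.125, independent of the ordering
```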
We would like to point out that in the presence of temporal correlation, there
is no developed solution theory for the nonlinear wave equation (replacing
$u\dot{W}$ in (1.1) by $\sigma(u)\dot{W}$ for some deterministic Lipschitz
function $\sigma:\mathbb{R}\to\mathbb{R}$). We regard this as a totally
different problem.
Now let us introduce the following hypothesis when $d=2$:
$\displaystyle{\bf(H1)}\begin{cases}&\text{({a}) $\gamma\in
L^{\ell}(\mathbb{R}^{2})$ for some $\ell\in(1,\infty)$,}\\\ &\text{({b})
$\gamma(x)=|x|^{-\beta}$ for some $\beta\in(0,2)$,}\\\ &\text{({c})
$\gamma(x_{1},x_{2})=\gamma_{1}(x_{1})\gamma_{2}(x_{2})$, where
$\gamma_{i}(x_{i})=|x_{i}|^{-\beta_{i}}$ or $\gamma_{i}\in
L^{\ell_{i}}(\mathbb{R})$ }\\\ &\qquad\text{for some
$0<\beta_{i}<1<\ell_{i}<+\infty$, $i=1,2$. }\end{cases}$
###### Remark 1.2.
(i) Note that condition (a) for $d=2$ is slightly stronger than Dalang’s
condition (1.2). In fact, when $d=2$, the paper [18] pointed out that Dalang’s
condition (1.2) is equivalent to
$\displaystyle\int_{|x|\leq 1}\ln(|x|^{-1})\gamma(x)dx<\infty;$ (1.10)
indeed, let $\ell^{\star}=\frac{\ell}{\ell-1}$ and $0<\varepsilon<1/\ell^{\star}$;
then there are some $\delta\in(0,1)$ and a constant $C_{\varepsilon}$ such that
$\ln(|x|^{-1})\leq C_{\varepsilon}|x|^{-\varepsilon}$ for any $|x|\leq\delta$,
from which we deduce that
$\displaystyle\int_{|x|\leq 1}\ln(|x|^{-1})\gamma(x)dx$
$\displaystyle\leq\ln(\delta^{-1})\int_{\delta<|x|\leq
1}\gamma(x)dx+C_{\varepsilon}\int_{|x|\leq\delta}|x|^{-\varepsilon}\gamma(x)dx$
$\displaystyle\leq\ln(\delta^{-1})\int_{\delta<|x|\leq
1}\gamma(x)dx+C_{\varepsilon}\|\gamma\|_{L^{\ell}(\mathbb{R}^{2})}\left(\int_{|x|\leq\delta}|x|^{-\varepsilon\ell^{\star}}dx\right)^{1/\ell^{\star}}<\infty.$
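The constant $C_{\varepsilon}$ above can be made explicit: on $(0,1]$ the function $x\mapsto x^{\varepsilon}\ln(1/x)$ is maximized at $x=e^{-1/\varepsilon}$, so one may take $C_{\varepsilon}=1/(e\varepsilon)$. A quick numerical check of this elementary claim (our addition, not part of the original argument):

```python
import math

eps = 0.1
# Sup of x**eps * ln(1/x) over (0, 1], attained at x = exp(-1/eps).
C_eps = 1.0 / (math.e * eps)

worst = max((k / 2000.0) ** eps * math.log(2000.0 / k) for k in range(1, 2001))
print(worst, "<=", C_eps)
```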
(ii) The case (c) in Hypothesis ${\bf(H1)}$ is a mixture of cases (a) and (b).
Accordingly, more examples of the noise $\dot{W}$ arise. In the space
variables, $W$ can behave like a fractional Brownian sheet with Hurst indices
greater than $1/2$ in both directions, i.e.
$\gamma(x_{1},x_{2})=|x_{1}|^{2H_{1}-2}|x_{2}|^{2H_{2}-2}$ for some
$H_{1},H_{2}\in(1/2,1)$.
(iii) For $d=1$ we just assume that $\gamma$ is a non-negative and non-
negative definite measure on $\mathbb{R}$. In this case (see, for instance,
Remark 10 of [11]) Dalang’s condition is always satisfied.
Under Hypothesis $\bf(H1)$, we will state our first main result — the
$L^{p}(\Omega)$ estimates of the Malliavin derivatives of $u(t,x)$. The first
Malliavin derivative $Du(t,x)$ is a random element in the Hilbert space
$\mathcal{H}$, the completion of
$C^{\infty}_{c}\big{(}\mathbb{R}_{+}\times\mathbb{R}^{d})$ under the inner
product (1.3); as the space $\mathcal{H}$ contains generalized functions, it
is not clear at first sight whether $(s,y)\longmapsto D_{s,y}u(t,x)$ is a
(random) function. The higher-order Malliavin derivative $D^{m}u(t,x)$ is a
random element in $\mathcal{H}^{\otimes m}$ for $m\geq 1$, see Section 2 for
more details.
Let us first fix some notation.
Notation A. (1) We write $a\lesssim b$ to mean $a\leq Kb$ for some immaterial
constant $K>0$.
(2) We write $\|X\|_{p}=\big{(}\mathbb{E}[|X|^{p}]\big{)}^{1/p}$ to denote the
$L^{p}(\Omega)$-norm of $X$ for $p\in[1,\infty)$.
(3) When $p$ is a positive integer, we often write
$\boldsymbol{z_{p}}=(z_{1},\dots,z_{p})$ for points in $\mathbb{R}_{+}^{p}$ or
$\mathbb{R}^{dp}$, and $d\boldsymbol{z_{p}}=dz_{1}\cdots dz_{p}$,
$\mu(d\boldsymbol{z_{p}})=\mu(dz_{1})\cdots\mu(dz_{p})$. For a function
$h:(\mathbb{R}_{+}\times\mathbb{R}^{d})^{p}\rightarrow\mathbb{R}$ with $p\geq
2$, we often write
$h(\boldsymbol{s_{p}},\boldsymbol{y_{p}})=h(s_{1},\dots,s_{p},y_{1},\dots,y_{p})=h(s_{1},y_{1},\dots,s_{p},y_{p}),$
which shall not cause any confusion. For $m\in\\{1,\dots,p-1\\}$ and
$(\boldsymbol{s_{m}},\boldsymbol{y_{m}})\in\mathbb{R}_{+}^{m}\times\mathbb{R}^{dm}$,
the expression $h(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)$ stands for
the function
$(t_{1},x_{1},\dots,t_{p-m},x_{p-m})\mapsto
h(s_{1},y_{1},\dots,s_{m},y_{m},t_{1},x_{1},\dots,t_{p-m},x_{p-m})=h(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\boldsymbol{t_{p-m}},\boldsymbol{x_{p-m}}).$
Now, with the above notation in mind, we are in the position to state the
first main result777In higher dimensions $(d\geq 3)$, the fundamental
solution of the wave equation is a uniform measure supported on certain
surfaces, so the Malliavin derivative $Du(t,x)$ is expected to be merely a random measure
instead of being a random function. In this case, the expression
$D_{s,y}u(t,x)$ does not make sense; see also the recent article [34] for
related discussions. .
###### Theorem 1.3.
Let $d\in\\{1,2\\}$ and suppose that Hypothesis $\bf(H1)$ holds if $d=2$.
Then, for any $(t,x)\in\mathbb{R}_{+}\times\mathbb{R}^{d}$, the random
variable $u(t,x)$ belongs to $\mathbb{D}^{\infty}$ $($see Section 2.1$)$.
Moreover, for any integer $m\geq 1$, the $m$th Malliavin derivative
$D^{m}u(t,x)$ is a random symmetric function denoted by
$(\boldsymbol{s_{m}},\boldsymbol{y_{m}})=(s_{1},y_{1},\dots,s_{m},y_{m})\longmapsto
D_{s_{1},y_{1}}D_{s_{2},y_{2}}\ldots
D_{s_{m},y_{m}}u(t,x)=D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x),$
and for any $p\in[2,\infty)$, we have, for almost all
$(\boldsymbol{s_{m}},\boldsymbol{y_{m}})\in[0,t]^{m}\times\mathbb{R}^{md}$,
$\displaystyle
m!\widetilde{f}_{t,x,m}(\boldsymbol{s_{m}},\boldsymbol{y_{m}})\leq\big{\|}D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)\big{\|}_{p}\lesssim\widetilde{f}_{t,x,m}(\boldsymbol{s_{m}},\boldsymbol{y_{m}}),$
(1.11)
where the constant in the upper bound only depends on
$(p,t,\gamma_{0},\gamma,m)$ and is increasing in $t$. Moreover, $D^{m}u(t,x)$
has a measurable modification.
Throughout this paper, we will work with the measurable modifications of
$Du(t,x)$ and $D^{2}u(t,x)$ given by Theorem 1.3, which are still denoted by
$Du(t,x),D^{2}u(t,x)$ respectively.
In this paper, we will present two applications of Theorem 1.3. Our first
application consists of _quantitative central limit theorems_ (CLTs) for the
spatial averages of the solution to (1.1), which have been elusive so far
because the temporal correlation of the noise prevents the use of the Itô calculus approach.
A _novel_ ingredient to overcome this difficulty is the so-called _second-
order Gaussian Poincaré inequality_ in an improved form. We will address these
CLT results in Section 1.1. In Section 1.2, as the second application, we
establish the absolute continuity of the law of the solution to equation
(1.1), using the $L^{p}$-estimates of Malliavin derivatives, which are crucial
to establish a local version of the Bouleau-Hirsch criterion [5].
### 1.1 Gaussian fluctuation of spatial averages
Spatial averages of SPDEs have recently attracted considerable interest. It
was Huang, Nualart and Viitasaari who first studied the fluctuation of spatial
statistics and established a central limit theorem for a nonlinear SPDE in
[15]. More precisely, they considered the following one-dimensional stochastic
heat equation
$\displaystyle\frac{\partial u}{\partial t}=\frac{1}{2}\Delta
u+\sigma(u)\dot{W}$ (1.12)
on $\mathbb{R}_{+}\times\mathbb{R}$, where $\dot{W}$ is a space-time Gaussian
white noise, with constant initial condition $u(0,\bullet)=1$ and the
nonlinearity $\sigma:\mathbb{R}\to\mathbb{R}$ is a Lipschitz function. In view
of the localization property of its mild formulation (in the Walsh sense
[43]),
$u(t,x)=1+\int_{0}^{t}\int_{\mathbb{R}}p_{t-s}(x-y)\sigma\big{(}u(s,y)\big{)}W(ds,dy),$
(1.13)
with $p_{t}$ denoting the heat kernel888$p_{t}(x)=(2\pi
t)^{-d/2}e^{-|x|^{2}/(2t)}$ for $t>0$ and $x\in\mathbb{R}^{d}$; in (1.13),
$d=1$., one can regard $u(t,x)$ and $u(t,y)$ as weakly dependent random
variables for $x,y$ far apart so that the integral
$\int_{-R}^{R}\big{[}u(t,x)-1\big{]}dx$
can be roughly understood as a sum of weakly dependent random variables.
Therefore, it is very natural to expect Gaussian fluctuations when $R$ tends
to infinity.
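This heuristic can be illustrated with a toy simulation (ours, not from the paper): replacing $u(t,\cdot)-1$ by a simple stationary moving-average field, the normalized spatial averages are approximately centered Gaussian with a stable variance as $R$ grows.

```python
import numpy as np

# Toy illustration (ours, not from the paper): a stationary moving-average
# field stands in for u(t, .) - 1; its normalized spatial average behaves
# like a normalized sum of weakly dependent variables, hence approximately
# Gaussian for large R.
rng = np.random.default_rng(1)
R, n_rep, window = 2000, 4000, 5
samples = np.empty(n_rep)
for i in range(n_rep):
    noise = rng.standard_normal(R + window - 1)
    # moving average: field[k] depends only on noise[k..k+window-1]
    field = np.convolve(noise, np.ones(window) / window, mode="valid")
    samples[i] = field.sum() / np.sqrt(R)  # CLT-type normalization
print(round(float(np.mean(samples)), 3), round(float(np.var(samples)), 3))
```

The empirical mean is close to $0$ and the empirical variance close to $1$, in line with the expected Gaussian fluctuation.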
Let us pause now to briefly fix some notation that will facilitate our discussion.
Notation B. (1) For $t>0$, we define, with
$B_{R}:=\\{x\in\mathbb{R}^{d}:|x|\leq R\\}$,
$\displaystyle F_{R}(t):=\int_{B_{R}}\big{[}u(t,x)-1\big{]}dx\quad{\rm
and}\quad\sigma_{R}(t)=\sqrt{\text{Var}\big{(}F_{R}(t)\big{)}}.$ (1.14)
(2) We write $f(R)\sim g(R)$ to mean that $f(R)/g(R)$ converges to some
positive constant as $R\to\infty$.
(3) For two real random variables $X,Y$ with distribution measures $\mu,\nu$
respectively, the total variation distance between $X,Y$ (or $\mu,\nu$) is
defined to be
$\displaystyle d_{\rm TV}(X,Y)=\sup_{B}\big{|}\mu(B)-\nu(B)|,$ (1.15)
where the supremum runs over all Borel sets $B\subset\mathbb{R}$. The total
variation distance is well known to induce a stronger topology than that of
convergence in distribution, see [25, Appendix C].
(4) We define the following quantities for future reference:
$\displaystyle\omega_{1}=2,\quad\omega_{2}=\pi,\quad{\rm
and}\quad\kappa_{\beta,d}:=\int_{\mathbb{R}^{2d}}dxdy|x-y|^{-\beta}\mathbf{1}_{B_{1}}(x)\mathbf{1}_{B_{1}}(y)~{}\text{for
$\beta\in(0,d)$}.$ (1.16)
(5) For an integer $m\geq 1$ and $p\in[1,\infty)$, we say
$F\in\mathbb{D}^{m,p}$ if $F$ is an $m$-times Malliavin differentiable random
variable in $L^{p}(\Omega)$ and
$\mathbb{E}\big{[}\|D^{j}F\|_{\mathcal{H}^{\otimes j}}^{p}\big{]}<\infty$ for
every $j=1,\dots,m$; see Section 2.1 for more details.
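As an informal numerical sanity check (ours, not part of the paper), the constant $\kappa_{\beta,d}$ from (1.16) can be evaluated by Monte Carlo; in dimension $d=1$, where $B_{1}=[-1,1]$, a direct computation gives the closed form $\kappa_{\beta,1}=2^{3-\beta}/((1-\beta)(2-\beta))$.

```python
import numpy as np

# Informal Monte Carlo evaluation (ours, not from the paper) of the constant
# kappa_{beta,d} in (1.16) for d = 1, where B_1 = [-1, 1] and the double
# integral has the closed form 2^(3-beta) / ((1-beta)(2-beta)).
rng = np.random.default_rng(0)
beta = 0.25
n = 1_000_000
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
# kappa = (area of [-1,1]^2) * E[|x - y|^{-beta}] for independent uniform x, y
kappa_mc = 4.0 * float(np.mean(np.abs(x - y) ** (-beta)))
kappa_exact = 2.0 ** (3 - beta) / ((1 - beta) * (2 - beta))
print(kappa_mc, kappa_exact)  # both close to 5.13
```

The singularity $|x-y|^{-\beta}$ is integrable for $\beta\in(0,1)$, so the plain Monte Carlo average converges; for $\beta$ close to $1$ a variance-reduction scheme would be needed.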
Now let us illustrate the strategy in [15] (where $d=1$):
* •
The authors first rewrite $F_{R}(t)=\delta(V_{t,R})$ with the random kernel
$V_{t,R}(s,y)=\sigma(u(s,y))\int_{B_{R}}p_{t-s}(x-y)dx,$
where $\delta$ denotes the Skorokhod integral, the adjoint of the Malliavin
derivative $D$.
* •
By standard computations, they obtained $\sigma^{2}_{R}(t)\sim R$.
* •
If $F=\delta(v)\in\mathbb{D}^{1,2}$ is a centered random variable with
variance one, for some $v$ in the domain of $\delta$, the (univariate)
Malliavin-Stein bound (see [15, Proposition 2.2]) ensures that $d_{\rm
TV}(F,Z)\leq 2\sqrt{\text{Var}(\langle DF,v\rangle_{\mathcal{H}})}$ for $Z\sim
N(0,1)$.
* •
Combining the above points, one sees that obtaining a quantitative CLT
reduces to computing $\text{Var}(\langle
DF_{R}(t),V_{t,R}\rangle_{\mathcal{H}})$.
Because the driving noise is white in time as considered in [15], tools from
Itô calculus (Clark-Ocone formula, Burkholder’s inequality, _etc._) are used
to estimate the above variance term. It is proved in [15] that $d_{\rm
TV}(F_{R}(t)/\sigma_{R}(t),Z)\lesssim R^{-1/2}$. Meanwhile, a multivariate
Malliavin-Stein bound and similar computations lead to the convergence of the
finite-dimensional distributions, which coupled with the tightness property
gives a functional CLT for $\\{R^{-1/2}F_{R}(t):t\in\mathbb{R}_{+}\\}$.
The above general strategy has been adapted to various settings, see [9, 10,
16, 19, 20, 38] for the study of stochastic heat equations and see [4, 12, 35]
for the study of stochastic wave equations. All these references consider a
Gaussian noise that is white in time. Nevertheless, when the Gaussian noise is
colored in time, the mild formulation (1.13) cannot be interpreted in the
Walsh-Itô sense. In this situation, only in the case $\sigma(u)=u$ can the
stochastic heat equation (1.12) (also known as the _parabolic Anderson model_)
be properly solved, using Wiener chaos expansions, so that $F_{R}(t)$,
defined in (1.14), can be expressed as an infinite sum of multiple Wiener
integrals. With this well-known fact in mind, Nualart and Zheng [33]
considered the parabolic Anderson model (_i.e._ (1.12) with $\sigma(u)=u$) on
$\mathbb{R}_{+}\times\mathbb{R}^{d}$ with $d\geq 1$, a constant initial
condition, and assumptions (i)-(ii) in force (see page 1). The main result of
[33] is a chaotic CLT based on the fourth moment theorems [30, 37]. When,
additionally, $\gamma$ is a finite measure, the authors of [33] established
$\sigma_{R}(t)\sim R^{d/2}$ and a functional CLT for the process
$R^{-d/2}F_{R}$; they also considered the case where
$\gamma(x)=|x|^{-\beta}$, for some $\beta\in(0,2\wedge d)$, is the Riesz
kernel, and obtained the corresponding CLT results. As pointed out in the
paper [33], due to the homogeneity of the underlying Gaussian noise, the
solution $u$ to (1.12) can be regarded as a functional of a stationary
Gaussian random field, so that, with the Breuer-Major theorem [6] in mind, it
is natural to study Gaussian fluctuations for the problems (1.12) and (1.1).
Note that the constant initial condition makes the solution stationary in
space and, in fact, spatially ergodic (see [10, 36]). Finally, let us mention
the paper [32], in which a chaotic CLT was used to study the parabolic
Anderson model driven by a colored Gaussian noise that is rough in space.
However, let us point out that the aforementioned methods fail to provide a
rate of convergence when the noise is colored in time.
In this paper, we bring in a novel ingredient, the _second-order Gaussian
Poincaré inequality_, to reach quantitative CLT results for the hyperbolic
Anderson model (1.1). (The use of the second-order Gaussian Poincaré
inequality for obtaining CLTs on a Gaussian space is one of the central
techniques in the Malliavin-Stein approach; for example, in the recent paper
[13], Dunlap _et al._ used this Poincaré inequality to investigate the
Gaussian fluctuation of the KPZ equation in dimension three and higher. We
remark that we cannot directly apply this inequality because of the
complicated correlation structure of the underlying Gaussian homogeneous
noise, while the underlying Gaussian noise in [13] is white in time and
smooth in space, so that they can directly apply the version from [26]. In
this article, we establish a quite involved variant of the second-order
Poincaré inequality, which is tailor-made for our applications.) Let us first
state our main result.
###### Theorem 1.4.
Let $u$ denote the solution to the hyperbolic Anderson model (1.1) and recall
the definition of $F_{R}(t)$ and $\sigma_{R}(t)$ from (1.14). Let $Z\sim
N(0,1)$ be the standard normal random variable. We assume that $\gamma_{0}$ is
not identically zero, in the sense that
$\displaystyle\|\gamma_{0}\|_{L^{1}([0,\varepsilon])}>0~{}\text{ for any
$\varepsilon\in(0,1)$.}$ (1.17)
Then the following statements hold true:
(1) Suppose that $0<\gamma(\mathbb{R}^{d})<\infty$ if $d=1$ and $\gamma\in
L^{1}(\mathbb{R}^{d})\cap L^{\ell}(\mathbb{R}^{d})$ for some $\ell>1$ if
$d=2$. Then,
$\sigma_{R}(t)\sim R^{d/2}$ and $d_{\rm
TV}\big{(}F_{R}(t)/\sigma_{R}(t),Z\big{)}\lesssim R^{-d/2}.$
Moreover, as $R\to\infty$, the process
$\big{\\{}R^{-d/2}F_{R}(t):t\in\mathbb{R}_{+}\big{\\}}$ converges weakly in
the space of continuous functions $C(\mathbb{R}_{+})$ to a centered Gaussian
process $\mathcal{G}$ with covariance structure
$\displaystyle\mathbb{E}\big{[}\mathcal{G}(t)\mathcal{G}(s)\big{]}=\omega_{d}\sum_{p\geq
1}p!\int_{\mathbb{R}^{d}}\big{\langle}\widetilde{f}_{t,x,p},\widetilde{f}_{s,0,p}\big{\rangle}_{\mathcal{H}^{\otimes
p}}dx,$ (1.18)
for $t,s\in\mathbb{R}_{+}$. Here $\omega_{1}=2$, $\omega_{2}=\pi$ and
$\widetilde{f}_{t,x,p}$ are introduced in (1.16) and (1.9), respectively. The
convergence of the series in (1.18) is part of the conclusion.
(2) Suppose $d\in\\{1,2\\}$ and $\gamma(x)=|x|^{-\beta}$ for some
$\beta\in(0,2\wedge d)$. Then,
$\sigma_{R}(t)\sim R^{d-\frac{\beta}{2}}$ and $d_{\rm
TV}\big{(}F_{R}(t)/\sigma_{R}(t),Z\big{)}\lesssim R^{-\beta/2}.$
Moreover, as $R\to\infty$, the process
$\big{\\{}R^{-d+\frac{\beta}{2}}F_{R}(t):t\in\mathbb{R}_{+}\big{\\}}$
converges weakly in the space $C(\mathbb{R}_{+})$ to a centered Gaussian
process $\mathcal{G}_{\beta}$ with the covariance structure
$\displaystyle\mathbb{E}\big{[}\mathcal{G}_{\beta}(t)\mathcal{G}_{\beta}(s)\big{]}=\kappa_{\beta,d}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime}),$
(1.19)
for $t,s\in\mathbb{R}_{+}$. Here the quantity $\kappa_{\beta,d}$ is introduced
in (1.16).
(3) Suppose $d=2$ and $\gamma(x_{1},x_{2})=\gamma_{1}(x_{1})\gamma_{2}(x_{2})$
such that one of the following two conditions holds:
$\displaystyle\begin{cases}&\text{\rm($a^{\prime}$)}~{}\gamma_{i}(x_{i})=|x_{i}|^{-\beta_{i}}~{}\text{for
some $\beta_{i}\in(0,1)$, $i=1,2$;}\\\
&\text{\rm($b^{\prime}$)}~{}\gamma_{1}\in L^{\ell}(\mathbb{R})\cap
L^{1}(\mathbb{R})~{}\text{and $\gamma_{2}(x_{2})=|x_{2}|^{-\beta}$ for some
$0<\beta<1<\ell<\infty$.}\end{cases}$ (1.20)
Then,
$\displaystyle\begin{cases}\sigma_{R}(t)\sim
R^{2-\frac{1}{2}(\beta_{1}+\beta_{2})}\quad\text{and}\quad d_{\rm
TV}\big{(}F_{R}(t)/\sigma_{R}(t),Z\big{)}\lesssim
R^{-(\beta_{1}+\beta_{2})/2}~{}&\text{in case {\rm$(a^{\prime})$}},\\\
\sigma_{R}(t)\sim R^{(3-\beta)/2}\quad\text{and}\quad d_{\rm
TV}\big{(}F_{R}(t)/\sigma_{R}(t),Z\big{)}\lesssim R^{-(\beta+1)/2}~{}&\text{in
case {\rm$(b^{\prime})$}}.\end{cases}$
Moreover, as $R\to\infty$, in case $(a^{\prime})$, the process
$\big{\\{}R^{-2+\frac{\beta_{1}+\beta_{2}}{2}}F_{R}(t):t\in\mathbb{R}_{+}\big{\\}}$
converges weakly in the space $C(\mathbb{R}_{+})$ to a centered Gaussian
process $\mathcal{G}_{\beta_{1},\beta_{2}}$ with the covariance structure
$\displaystyle\mathbb{E}\big{[}\mathcal{G}_{\beta_{1},\beta_{2}}(t)\mathcal{G}_{\beta_{1},\beta_{2}}(s)\big{]}=K_{\beta_{1},\beta_{2}}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime}),$
(1.21)
for $t,s\in\mathbb{R}_{+}$, where
$\displaystyle K_{\beta_{1},\beta_{2}}:$
$\displaystyle=\int_{\mathbb{R}^{4}}\mathbf{1}_{\\{x_{1}^{2}+x_{2}^{2}\leq
1\\}}\mathbf{1}_{\\{y_{1}^{2}+y_{2}^{2}\leq
1\\}}|x_{1}-y_{1}|^{-\beta_{1}}|x_{2}-y_{2}|^{-\beta_{2}}dx_{1}dx_{2}dy_{1}dy_{2};$
(1.22)
and in case $(b^{\prime})$, the process
$\big{\\{}R^{\frac{\beta-3}{2}}F_{R}(t):t\in\mathbb{R}_{+}\big{\\}}$ converges
weakly in the space $C(\mathbb{R}_{+})$ to a centered Gaussian process
$\widehat{\mathcal{G}}_{\beta}$ with the covariance structure
$\displaystyle\mathbb{E}\big{[}\widehat{\mathcal{G}}_{\beta}(t)\widehat{\mathcal{G}}_{\beta}(s)\big{]}=\gamma_{1}(\mathbb{R})\mathcal{L}_{\beta}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime})$
(1.23)
for $t,s\in\mathbb{R}_{+}$, where
$\displaystyle\mathcal{L}_{\beta}:=\int_{\mathbb{R}^{3}}dx_{1}dx_{2}dx_{3}\mathbf{1}_{\\{x_{1}^{2}+x_{2}^{2}\leq
1\\}}\mathbf{1}_{\\{x_{1}^{2}+x_{3}^{2}\leq 1\\}}|x_{2}-x_{3}|^{-\beta}.$
(1.24)
For the above functional convergences, we specify that the space
$C(\mathbb{R}_{+})$ is equipped with the topology of uniform convergence on
compact sets.
###### Remark 1.5.
(i) Note that the case when $\gamma(x)=\gamma_{1}(x_{1})\gamma_{2}(x_{2})$
with $\gamma_{i}\in L^{\ell_{i}}(\mathbb{R})\cap L^{1}(\mathbb{R})$ for some
$\ell_{i}>1$, $i=1,2$, is covered in part (1). Indeed, suppose that
$\ell_{1}\geq\ell_{2}$; then, by Hölder's inequality, $\gamma_{1}\in
L^{\ell_{1}}(\mathbb{R})\cap L^{1}(\mathbb{R})$ implies $\gamma_{1}\in
L^{\ell_{2}}(\mathbb{R})\cap L^{1}(\mathbb{R})$ and hence $\gamma\in
L^{\ell_{2}}(\mathbb{R}^{2})\cap L^{1}(\mathbb{R}^{2})$.
(ii) The rate of convergence can also be described using other common
distances such as the Wasserstein distance and the Kolmogorov distance; see
[25, Appendix C].
(iii) The variance orders and the rates in parts (1) and (2) of Theorem 1.4
are consistent with previous work on stochastic wave equations, see [4, 12,
35]. The setting in part (3) is new. As we will see shortly, our strategy is
quite different from that in these papers.
Now, let us briefly explain our strategy, beginning with the Gaussian Poincaré
inequality. For $F\in\mathbb{D}^{1,2}$, the Gaussian Poincaré inequality (see
_e.g._ [14] or (2.12)) ensures that
$\text{Var}(F)\leq\mathbb{E}\big{[}\|DF\|_{\mathcal{H}}^{2}\big{]}~{}\text{with
equality if and only if $F$ is Gaussian},$
that is, if $DF$ is small, then the random variable $F$ necessarily has small
fluctuations. In the paper [8], Chatterjee pointed out that for
$F=f(X_{1},\dots,X_{d})$ with $X_{1},\dots,X_{d}$ i.i.d. $N(0,1)$ and $f$
twice differentiable, $F$ is, roughly speaking, close in total variation
distance to a normal distribution with matched mean and variance whenever the
Hessian matrix $\text{Hess}f(X_{1},\dots,X_{d})$ is negligible. This is
known as the second-order Gaussian Poincaré inequality. In what follows, we
state the infinite-dimensional version of this inequality due to Nourdin,
Peccati and Reinert; see the paper [26] as well as the book [25]. (Note that
there is a typo in equation (5.3.2) of [25]: it should read
$E[\|DF\|_{\mathcal{H}}^{4}]^{1/4}$ instead of
$E[\|D^{2}F\|_{\mathcal{H}}^{4}]^{1/4}$.)
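As an informal numerical check (ours, not part of the paper) of the first-order inequality above, take the non-Gaussian variable $F=\sin(X)$ with $X\sim N(0,1)$: exactly, $\mathrm{Var}(\sin X)=(1-e^{-2})/2$ while $\mathbb{E}[\cos^{2}X]=(1+e^{-2})/2$, so the inequality is strict.

```python
import numpy as np

# Informal check (ours, not from the paper) of the Gaussian Poincare
# inequality Var(f(X)) <= E[f'(X)^2] for X ~ N(0,1), with f = sin.
# Exact values: Var(sin X) = (1 - e^{-2})/2 and E[cos^2 X] = (1 + e^{-2})/2,
# so the inequality is strict, as sin(X) is not Gaussian.
rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)
var_f = float(np.var(np.sin(x)))         # left-hand side
energy = float(np.mean(np.cos(x) ** 2))  # right-hand side
print(var_f, energy)  # ~0.432 < ~0.568
```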
###### Proposition 1.6.
Let $F$ be a centered element of $\mathbb{D}^{2,4}$ such that
$\mathbb{E}[F^{2}]=\sigma^{2}>0$ and let $Z\sim N(0,\sigma^{2})$. Then,
$\displaystyle d_{\rm
TV}(F,Z)\leq\frac{3}{\sigma^{2}}\left(\mathbb{E}\Big{[}\big{\|}D^{2}F\otimes_{1}D^{2}F\big{\|}^{2}_{\mathcal{H}^{\otimes
2}}\Big{]}\right)^{1/4}\left(\mathbb{E}\big{[}\|DF\|_{\mathcal{H}}^{4}\big{]}\right)^{1/4},$
(1.25)
where $D^{2}F\otimes_{1}D^{2}F$ denotes the 1-contraction between $D^{2}F$ and
itself $($see (2.10)$)$.
It is known that this inequality usually gives a sub-optimal rate. In the
recent work [42], Vidotto provided an improved version of the above
inequality, in which she considered an $L^{2}$-based Hilbert space
$\mathcal{H}=L^{2}(A,\nu)$ with $\nu$ a diffusive measure (nonnegative,
$\sigma$-finite and non-atomic) on some measurable space $A$. Let us state
this result for the convenience of the readers.
###### Theorem 1.7 (Theorem 2.1 in [42]).
Let $F\in\mathbb{D}^{2,4}$ with mean zero and variance $\sigma^{2}>0$ and let
$Z\sim N(0,\sigma^{2})$. Suppose $\mathcal{H}=L^{2}(A,\nu)$ with $\nu$ a
diffusive measure on some measurable space $A$. Then,
$d_{\rm TV}\big{(}F,Z\big{)}\leq\frac{4}{\sigma^{2}}\left[\int_{A\times
A}\sqrt{\mathbb{E}\big{[}\big{(}D^{2}F\otimes_{1}D^{2}F\big{)}^{2}(x,y)\big{]}\times\mathbb{E}\big{[}(DF)^{2}(x)(DF)^{2}(y)\big{]}}\nu(dx)\nu(dy)\right]^{\frac{1}{2}}.$
The proof of the above inequality follows from the general Malliavin-Stein
bound
$\displaystyle d_{\rm
TV}\big{(}F,Z\big{)}\leq\frac{2}{\sigma^{2}}\mathbb{E}\left(\big{|}\sigma^{2}-\langle
DF,-DL^{-1}F\rangle_{\mathcal{H}}\big{|}\right)$ (1.26)
(see [25, equation (5.1.4)]; unlike in [25], we do not assume $F$ to have a
density: in fact, it suffices to use [44, Proposition 2.1.1] and [25,
(5.1.1)] to establish [25, equation (5.1.4)]) and Vidotto’s new bound of
$\qquad\qquad\mathbb{E}\big{[}(\text{Cov}(F,G)-\langle
DF,-DL^{-1}G\rangle_{\mathcal{H}})^{2}\big{]}~{}\text{for centered
$F,G\in\mathbb{D}^{2,4}$}$
(see [42, Proposition 3.2]), where $L^{-1}$ is the pseudo-inverse of the
Ornstein-Uhlenbeck operator $L$; see Section 2.1 for the definitions.
Recall that our Hilbert space $\mathcal{H}$ is the completion of
$C^{\infty}_{c}(\mathbb{R}_{+}\times\mathbb{R}^{d})$ under the inner product
(1.3). The Hilbert space $\mathcal{H}$ contains generalized functions, but
fortunately the objects $D^{2}u(t,x)$, $Du(t,x)$ are random functions in view
of Theorem 1.3. By adapting Vidotto’s proof to our setting, we have the
following version of the second-order Gaussian Poincaré inequality. Note that
we write $f\in|\mathcal{H}^{\otimes p}|$ to mean that $f$ is a real-valued
function such that $\bullet\mapsto|f(\bullet)|$ belongs to $\mathcal{H}^{\otimes p}$.
###### Proposition 1.8.
If $F\in\mathbb{D}^{2,4}$ has mean zero and variance $\sigma^{2}\in(0,\infty)$
such that with probability 1, $DF\in|\mathcal{H}|$ and
$D^{2}F\in|\mathcal{H}^{\otimes 2}|$, then
$d_{\rm TV}\big{(}F,Z\big{)}\leq\frac{4}{\sigma^{2}}\sqrt{\mathcal{A}},$
where $Z\sim N(0,\sigma^{2})$ and
$\displaystyle\mathcal{A}:$
$\displaystyle=\int_{\mathbb{R}_{+}^{6}\times\mathbb{R}^{6d}}drdr^{\prime}dsds^{\prime}d\theta
d\theta^{\prime}dzdz^{\prime}dydy^{\prime}dwdw^{\prime}\gamma_{0}(\theta-\theta^{\prime})\gamma_{0}(s-s^{\prime})\gamma_{0}(r-r^{\prime})$
$\displaystyle\quad\times\gamma(z-z^{\prime})\gamma(w-w^{\prime})\gamma(y-y^{\prime})\|D_{r,z}D_{\theta,w}F\|_{4}\|D_{s,y}D_{\theta^{\prime},w^{\prime}}F\|_{4}\|D_{r^{\prime},z^{\prime}}F\|_{4}\|D_{s^{\prime},y^{\prime}}F\|_{4}.$
As mentioned before, Proposition 1.8 will follow from the Malliavin-Stein
bound (1.26) and the Cauchy-Schwarz inequality, taking into account that, by
the duality relation (2.5), we have $\mathbb{E}\left(\langle
DF,-DL^{-1}F\rangle_{\mathcal{H}}\right)=\mathbb{E}[F^{2}]=\sigma^{2}$.
Indeed, we can write
$\displaystyle d_{\rm TV}(F,Z)$
$\displaystyle\leq\frac{2}{\sigma^{2}}\mathbb{E}\left(\big{|}\sigma^{2}-\langle
DF,-DL^{-1}F\rangle_{\mathcal{H}}\big{|}\right)\leq\frac{2}{\sigma^{2}}\sqrt{\text{Var}\big{(}\langle
DF,-DL^{-1}F\rangle_{\mathcal{H}}\big{)}}$
$\displaystyle\leq\frac{4}{\sigma^{2}}\sqrt{\mathcal{A}}\quad\text{by
Proposition \ref{propAV} below.}$
###### Proposition 1.9.
If $F,G\in\mathbb{D}^{2,4}$ have mean zero such that with probability one,
$DF,DG\in|\mathcal{H}|$ and $D^{2}F,D^{2}G\in|\mathcal{H}^{\otimes 2}|$, then
$\displaystyle{\rm Var}\Big{(}\langle
DF,-DL^{-1}G\rangle_{\mathcal{H}}\Big{)}=\mathbb{E}\big{[}(\text{\rm
Cov}(F,G)-\langle DF,-DL^{-1}G\rangle_{\mathcal{H}})^{2}\big{]}\leq
2A_{1}+2A_{2},$ (1.27)
where
$\displaystyle A_{1}:$
$\displaystyle=\int_{\mathbb{R}_{+}^{6}\times\mathbb{R}^{6d}}drdr^{\prime}dsds^{\prime}d\theta
d\theta^{\prime}dzdz^{\prime}dydy^{\prime}dwdw^{\prime}\gamma_{0}(\theta-\theta^{\prime})\gamma_{0}(s-s^{\prime})\gamma_{0}(r-r^{\prime})$
$\displaystyle\quad\times\gamma(z-z^{\prime})\gamma(w-w^{\prime})\gamma(y-y^{\prime})\|D_{r,z}D_{\theta,w}F\|_{4}\|D_{s,y}D_{\theta^{\prime},w^{\prime}}F\|_{4}\|D_{r^{\prime},z^{\prime}}G\|_{4}\|D_{s^{\prime},y^{\prime}}G\|_{4}$
and $A_{2}$ is defined by switching the positions of $F,G$ in the definition
of $A_{1}$.
For the sake of completeness, we sketch the proof of Proposition 1.9 in
Appendix A.2. Once we have the information on the growth order of
$\sigma_{R}(t)$, we can apply Theorem 1.3 and Proposition 1.9 to obtain the
error bounds in Theorem 1.4. The proof of Theorem 1.4 will be given in Section
4: In Section 4.1, we will establish the limiting covariance structure, which
will be used to obtain the quantitative CLTs in Section 4.2; Proposition 1.9,
combined with a multivariate Malliavin-Stein bound (see _e.g._ [25, Theorem
6.1.2]), also gives us easy access to the convergence of finite-dimensional
distributions (_f.d.d. convergence_) for part (1), while in the other parts,
the _f.d.d._ convergence follows easily from the dominance of the first
chaotic component of $F_{R}(t)$; finally in Section 4.3, we establish the
functional CLT by showing the required tightness, which will follow by
verifying the well-known criterion of Kolmogorov-Chentsov (see _e.g._ [17,
Corollary 16.9]).
### 1.2 Absolute continuity of the law of the solution to equation (1.1)
In this part, we fix the following extra hypothesis on the correlation kernels
$\gamma_{0},\gamma$.
$\displaystyle{\bf(H2)}\begin{cases}\text{ $\gamma_{0}=\mathcal{F}\mu_{0}$ and
$\gamma=\mathcal{F}\mu$, where $\mu_{0},\mu$ are nonnegative tempered
measures}\\\ \text{ and have strictly positive densities with respect to the
Lebesgue measure. }\end{cases}$
The following is the main result of this section.
###### Theorem 1.10.
Let $d\in\\{1,2\\}$ and assume that Hypothesis ${\bf(H2)}$ holds. In addition,
assume that Hypothesis ${\bf(H1)}$ holds if $d=2$. Let $u$ be the solution to
(1.1). For any $t>0$ and $x\in\mathbb{R}^{d}$, the law of $u(t,x)$ restricted
to the set $\mathbb{R}\setminus\\{0\\}$ is absolutely continuous with respect
to the Lebesgue measure on $\mathbb{R}\setminus\\{0\\}$.
Let us sketch the proof of Theorem 1.10. In view of the Bouleau-Hirsch
criterion for absolute continuity (see [5]), it suffices to prove that for
each $m\geq 1$,
$\|Du(t,x)\|_{\mathcal{H}}>0\quad\mbox{a.s. on}\ \Omega_{m},$ (1.28)
where $\Omega_{m}=\\{|u(t,x)|\geq 1/m\\}$. Notice that
$\|Du(t,x)\|^{2}_{\mathcal{H}}=\int_{0}^{t}\int_{0}^{t}\gamma_{0}(r-s)\langle
D_{r,\bullet}u(t,x),D_{s,\bullet}u(t,x)\rangle_{0}drds,$
where $\langle\cdot,\cdot\rangle_{0}$ is the inner product introduced in
(2.1) and $\mathcal{P}_{0}$ denotes the completion of
$C^{\infty}_{c}(\mathbb{R}^{d})$ with respect to it. The usual approach to
show the positivity of this norm is to get a lower bound for this integral by
integrating over a small square $[t-\delta,t]^{2}$ and using that, for $r$
close to $t$, $D_{r,y}u(t,x)$ behaves
as $G_{t-r}(x-y)u(r,y)$ (see, e.g., [31]). However, for $r\not=s$, the inner
product $\langle D_{r,\bullet}u(t,x),D_{s,\bullet}u(t,x)\rangle_{0}$ is not
necessarily non-negative. Our strategy to overcome this difficulty consists in
making use of Hypothesis ${\bf(H2)}$ in order to show that
$\int_{0}^{t}\|D_{r,\bullet}u(t,x)\|_{0}^{2}dr>0~{}~{}\text{implies ~{}
$\|Du(t,x)\|_{\mathcal{H}}>0$ (see Lemma \ref{pos-norm}).}$
This allows us to reduce the problem to the non-degeneracy of
$\int_{t-\delta}^{t}\|D_{r,\bullet}u(t,x)\|_{0}^{2}dr$ for $\delta$ small
enough, which can be handled by the usual arguments. At this point, we will
make use of the estimates provided in Theorem 1.3.
For $d=1$, Theorem 1.10 was proved in [2] under stronger assumptions on the
covariance structure. The result in Theorem 1.10 for $d=2$ is new. Indeed, the
study of the existence (and smoothness) of the density for the stochastic wave
equation has been extensively revisited over the last three decades. We refer
the readers to [7, 23, 22, 39, 40, 31, 41]. In all these articles, the authors
considered a stochastic wave equation of the form
$\frac{\partial^{2}u}{\partial t^{2}}(t,x)=\Delta
u(t,x)+b(u(t,x))+\sigma(u(t,x))\dot{\mathfrak{X}}(t,x),$
on $\mathbb{R}_{+}\times\mathbb{R}^{d}$, with $d\geq 1$. Here,
$\dot{\mathfrak{X}}$ denotes a space-time white noise in the case $d=1$, or a
Gaussian noise that is white in time and has a spatially homogeneous
correlation (slightly more general than that of $W$) in the case $d\geq 2$.
The functions $b,\sigma$ are usually assumed to be globally Lipschitz, and
such that the following non-degeneracy condition is fulfilled:
$|\sigma(z)|\geq C>0$, for all $z\in\mathbb{R}$. The noise
$\dot{\mathfrak{X}}$ being white in time made it possible to interpret the
solution in the classical Dalang-Walsh sense, making use of all the needed
martingale techniques.
The first attempt to consider a Gaussian noise that is colored in time was
made in the paper [2], which considered the hyperbolic Anderson model in
spatial dimension one. As mentioned above, in that paper the existence of a
density was proved under a slightly stronger assumption than Hypothesis ${\bf(H2)}$.
The rest of this paper is organized as follows. Section 2 contains
preliminary results, and the proofs of our main results (Theorems 1.3, 1.4
and 1.10) are given in Sections 3, 4 and 5, respectively.
Acknowledgement. The authors would like to thank Wangjun Yuan for carefully
proofreading the manuscript and providing a list of typos.
## 2 Preliminary results
This section is devoted to presenting some basic elements of the Malliavin
calculus and collecting some preliminary results that will be needed in the
sequel.
### 2.1 Basic Malliavin calculus
Recall that the Hilbert space $\mathcal{H}$ is the completion of
$C^{\infty}_{c}(\mathbb{R}_{+}\times\mathbb{R}^{d})$ under the inner product
(1.3) that can be written as
$\displaystyle\big{\langle}\psi,\phi\big{\rangle}_{\mathcal{H}}=\int_{\mathbb{R}_{+}^{2}}dsdt\gamma_{0}(t-s)\big{\langle}\psi(t,\bullet),\phi(s,\bullet)\big{\rangle}_{0}\quad\text{for
$\psi,\phi\in C^{\infty}_{c}(\mathbb{R}_{+}\times\mathbb{R}^{d})$,}$
where
$\langle
h,g\rangle_{0}=\int_{\mathbb{R}^{2d}}dzdz^{\prime}\gamma(z-z^{\prime})h(z)g(z^{\prime}).$
(2.1)
As defined in Section 1.2, we denote by $\mathcal{P}_{0}$ the completion of
$C^{\infty}_{c}(\mathbb{R}^{d})$ with respect to the inner product $\langle
h,g\rangle_{0}$. Let $|\mathcal{P}_{0}|$ be the set of measurable functions
$h:\mathbb{R}^{d}\to\mathbb{R}$ such that
$\int_{\mathbb{R}^{2d}}dzdz^{\prime}\gamma(z-z^{\prime})|h|(z)|h|(z^{\prime})<\infty.$
(2.2)
Then $|\mathcal{P}_{0}|\subset\mathcal{P}_{0}$ and for
$h\in|\mathcal{P}_{0}|$,
$\|h\|^{2}_{0}=\int_{\mathbb{R}^{2d}}dzdz^{\prime}\gamma(z-z^{\prime})h(z)h(z^{\prime})$.
We define the space $|\mathcal{H}|$ in a similar way. For $h,g\in
C^{\infty}_{c}(\mathbb{R}^{d})$ we can express (2.1) using the Fourier
transform:
$\langle
h,g\rangle_{0}=\int_{\mathbb{R}^{d}}\mu(d\xi)\mathcal{F}h(\xi)\overline{\mathcal{F}g(\xi)}.$
(2.3)
The Parseval-type relation (2.3) also holds for functions $h,g\in
L^{1}(\mathbb{R}^{d})\cap|\mathcal{P}_{0}|$.
For every integer $p\geq 1$, $\mathcal{H}^{\otimes p}$ and $\mathcal{H}^{\odot
p}$ denote the $p$th tensor product of $\mathcal{H}$ and its symmetric
subspace, respectively. For example, $f_{t,x,n}$ in (1.8) belongs to
$\mathcal{H}^{\otimes n}$ and $\widetilde{f}_{t,x,n}\in\mathcal{H}^{\odot n}$;
we also have $f\otimes g\in\mathcal{H}^{\otimes(n+m)}$, provided
$f\in\mathcal{H}^{\otimes m}$ and $g\in\mathcal{H}^{\otimes n}$; see [25,
Appendix B] for more details.
Fix a probability space $(\Omega,\mathcal{B},\mathbb{P})$, on which we can
construct the isonormal Gaussian process associated to the Gaussian noise
$\dot{W}$ in (1.1) that we denote by $\\{W(\phi):\phi\in\mathcal{H}\\}$. That
is, $\\{W(\phi):\phi\in\mathcal{H}\\}$ is a _centered Gaussian family_ of
real-valued random variables defined on $(\Omega,\mathcal{B},\mathbb{P})$ such
that $\mathbb{E}[W(\psi)W(\phi)]=\langle\psi,\phi\rangle_{\mathcal{H}}$ for
any $\psi,\phi\in\mathcal{H}$. We will take $\mathcal{B}$ to be the
$\sigma$-algebra $\sigma\\{W\\}$ generated by the family of random variables
$\\{W(h):h\in C^{\infty}_{c}(\mathbb{R}_{+}\times\mathbb{R}^{d})\\}$.
In the sequel, we recall some basics on Malliavin calculus from the books [25,
27].
Let $C^{\infty}_{\text{poly}}(\mathbb{R}^{n})$ denote the space of smooth
functions which, together with all their partial derivatives, have at most
polynomial growth at infinity, and let $\mathcal{S}$ denote the set of simple
smooth functionals
of the form
$F=f\big{(}W(h_{1}),\dots,W(h_{n})\big{)}$ for $f\in
C^{\infty}_{\text{poly}}(\mathbb{R}^{n})$ and $h_{i}\in\mathcal{H}$, $1\leq
i\leq n$.
For such a random variable $F$, its Malliavin derivative $DF$ is the
$\mathcal{H}$-valued random variable given by
$DF=\sum_{i=1}^{n}\frac{\partial f}{\partial
x_{i}}\big{(}W(h_{1}),\dots,W(h_{n})\big{)}h_{i}.$
Similarly, its $m$th Malliavin derivative $D^{m}F$ is the
$\mathcal{H}^{\otimes m}$-valued random variable given by
$\displaystyle
D^{m}F=\sum_{i_{1},\dots,i_{m}=1}^{n}\frac{\partial^{m}f}{\partial
x_{i_{1}}\cdots\partial
x_{i_{m}}}\big{(}W(h_{1}),\dots,W(h_{n})\big{)}h_{i_{1}}\otimes\cdots\otimes
h_{i_{m}},$ (2.4)
which is an element in $L^{p}(\Omega;\mathcal{H}^{\odot m})$ for any
$p\in[1,\infty)$. It is known that the space $\mathcal{S}$ is dense in
$L^{p}(\Omega,\sigma\\{W\\},\mathbb{P})$ and
$D^{m}:\mathcal{S}\longrightarrow L^{p}(\Omega;\mathcal{H}^{\odot m})$
is closable for any $p\in[1,\infty)$; see _e.g._ Lemma 2.3.1 and Proposition
2.3.4 in [25]. Let $\mathbb{D}^{m,p}$ be the closure of $\mathcal{S}$ under
the norm
$\big{\|}F\big{\|}_{\mathbb{D}^{m,p}}=\Big{(}\mathbb{E}\big{[}|F|^{p}\big{]}+\mathbb{E}\big{[}\|DF\|^{p}_{\mathcal{H}}\big{]}+\cdots+\mathbb{E}\big{[}\|D^{m}F\|^{p}_{\mathcal{H}^{\otimes
m}}\big{]}\Big{)}^{1/p}~{}\text{and let $\mathbb{D}^{\infty}:=\bigcap_{m,p\geq
1}\mathbb{D}^{m,p}.$}$
Now, let us introduce the adjoint of the derivative operator $D^{m}$. Let
$\text{Dom}(\delta^{m})$ be the set of random variables $v\in
L^{2}(\Omega;\mathcal{H}^{\otimes m})$ such that there is a constant $C_{v}>0$
for which
$\Big{|}\mathbb{E}\big{[}\langle D^{m}F,v\rangle_{\mathcal{H}^{\otimes
m}}\big{]}\Big{|}\leq C_{v}\|F\|_{2}\quad\text{for all $F\in\mathcal{S}$}.$
By the _Riesz representation theorem_, there is a unique random variable, denoted
by $\delta^{m}(v)$, such that the following duality relationship holds:
$\displaystyle\mathbb{E}\big{[}F\delta^{m}(v)\big{]}=\mathbb{E}\big{[}\langle
D^{m}F,v\rangle_{\mathcal{H}^{\otimes m}}\big{]}.$ (2.5)
Equality (2.5) holds for all $v\in\text{Dom}(\delta^{m})$ and all
$F\in\mathbb{D}^{m,2}$. In the simplest case when $F=f(W(h))$ with
$h\in\mathcal{H}$ and $f\in C^{1}_{\text{poly}}(\mathbb{R})$, we have
$\delta(h)=W(h)\sim N(0,\|h\|_{\mathcal{H}}^{2})$ and equality (2.5) reduces
to
$\mathbb{E}\big{[}f(W(h))W(h)\big{]}=\mathbb{E}\big{[}f^{\prime}(W(h))\big{]}\|h\|_{\mathcal{H}}^{2},$
which is exactly part of Stein’s lemma, recalled below: For
$\sigma\in(0,\infty)$ and an integrable random variable $Z$, Stein’s lemma
(see _e.g._ [25, Lemma 3.1.2]) asserts that
$\displaystyle Z\sim N(0,\sigma^{2})~{}\text{if and only
if}~{}\mathbb{E}[Zf(Z)]=\sigma^{2}\mathbb{E}[f^{\prime}(Z)],$ (2.6)
for any differentiable function $f:\mathbb{R}\to\mathbb{R}$ such that the
above expectations are finite. The operator $\delta$ is often called the
_Skorokhod integral_ since in the case of the Brownian motion, it coincides
with an extension of the Itô integral introduced by Skorokhod, see _e.g._
[29]. Thus, we can regard $\text{Dom}(\delta^{m})$ as the space of
Skorokhod-integrable random variables with values in $\mathcal{H}^{\otimes m}$.
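As an informal Monte Carlo illustration (ours, not part of the paper) of the identity (2.6) with $\sigma=1$ and $f=\sin$: by Gaussian integration by parts, both sides equal $\mathbb{E}[\cos Z]=e^{-1/2}$.

```python
import numpy as np

# Informal Monte Carlo illustration (ours, not from the paper) of Stein's
# identity (2.6) with sigma = 1 and f = sin: integrating by parts,
# E[Z sin(Z)] = E[cos(Z)] = e^{-1/2} for Z ~ N(0,1).
rng = np.random.default_rng(3)
z = rng.standard_normal(1_000_000)
lhs = float(np.mean(z * np.sin(z)))  # E[Z f(Z)]
rhs = float(np.mean(np.cos(z)))      # sigma^2 E[f'(Z)]
print(lhs, rhs)  # both ~ 0.607
```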
The Wiener-Itô chaos decomposition theorem asserts that
$L^{2}(\Omega,\sigma\\{W\\},\mathbb{P})$ can be written as a direct sum of
mutually orthogonal subspaces:
$L^{2}(\Omega,\sigma\\{W\\},\mathbb{P})=\bigoplus_{n\geq
0}\mathbb{C}_{n}^{W},$
where $\mathbb{C}_{0}^{W}$, identified as $\mathbb{R}$, is the space of
constant random variables and
$\mathbb{C}_{n}^{W}=\\{\delta^{n}(h):h\in\mathcal{H}^{\otimes n}~{}\text{is
deterministic}\\}$, for $n\geq 1$, is called the $n$th _Wiener chaos_
associated to $W$. Note that the first Wiener chaos consists of centered
Gaussian random variables. When $h\in\mathcal{H}^{\otimes n}$ is
deterministic, we write $I_{n}(h)=\delta^{n}(h)$ and we call it the $n$th
multiple integral of $h$ with respect to $W$. By the symmetry in (2.4) and the
duality relation (2.5), $\delta^{n}(h)=\delta^{n}(\widetilde{h})$ with
$\widetilde{h}$ the canonical symmetrization of $h$, so that we have
$I_{n}(h)=I_{n}(\widetilde{h})$ for any $h\in\mathcal{H}^{\otimes n}$. The
above decomposition can be rephrased as follows. For any $F\in
L^{2}(\Omega,\sigma\\{W\\},\mathbb{P})$,
$\displaystyle F=\mathbb{E}[F]+\sum_{n\geq 1}I_{n}(f_{n}),$ (2.7)
with $f_{n}\in\mathcal{H}^{\odot n}$ uniquely determined for each $n\geq 1$.
Moreover, the (modified) isometry property holds
$\displaystyle\mathbb{E}\big{[}I_{p}(f)I_{q}(g)\big{]}=p!\mathbf{1}_{\\{p=q\\}}\big{\langle}\widetilde{f},\widetilde{g}\big{\rangle}_{\mathcal{H}^{\otimes
p}},$ (2.8)
for any $f\in\mathcal{H}^{\otimes p}$ and $g\in\mathcal{H}^{\otimes q}$. We
have the following _product formula_ : For $f\in\mathcal{H}^{\odot p}$ and
$g\in\mathcal{H}^{\odot q}$,
$\displaystyle I_{p}(f)I_{q}(g)=\sum_{r=0}^{p\wedge
q}r!\binom{p}{r}\binom{q}{r}I_{p+q-2r}(f\otimes_{r}g),$ (2.9)
where $f\otimes_{r}g$ is the $r$-contraction between $f$ and $g$, which is an
element in $\mathcal{H}^{\otimes(p+q-2r)}$ defined as follows. Fix an
orthonormal basis $\\{e_{i},i\in\mathcal{O}\\}$ of $\mathcal{H}$. Then, for
$1\leq r\leq p\wedge q$,
$\displaystyle f\otimes_{r}g$
$\displaystyle:=\sum_{i_{1},\dots,i_{p},j_{1},\dots,j_{q}\in\mathcal{O}}\langle
f,e_{i_{1}}\otimes\cdots\otimes e_{i_{p}}\rangle_{\mathcal{H}^{\otimes
p}}\langle g,e_{j_{1}}\otimes\cdots\otimes
e_{j_{q}}\rangle_{\mathcal{H}^{\otimes q}}\mathbf{1}_{\\{i_{k}=j_{k},\forall
k=1,\dots,r\\}}$ $\displaystyle\qquad\times e_{i_{r+1}}\otimes\cdots\otimes
e_{i_{p}}\otimes e_{j_{r+1}}\otimes\cdots\otimes e_{j_{q}}.$ (2.10)
In the particular case when $f,g$ are real-valued functions, we can write
$\displaystyle(f\otimes_{r}g)(\boldsymbol{t_{p-r}},\boldsymbol{x_{p-r}},\boldsymbol{t^{\prime}_{q-r}},\boldsymbol{x^{\prime}_{q-r}})$
$\displaystyle=\int_{\mathbb{R}_{+}^{2r}\times\mathbb{R}^{2rd}}d\boldsymbol{s_{r}}d\boldsymbol{s^{\prime}_{r}}d\boldsymbol{y_{r}}d\boldsymbol{y^{\prime}_{r}}\left(\prod_{j=1}^{r}\gamma_{0}(s_{j}-s^{\prime}_{j})\gamma(y_{j}-y^{\prime}_{j})\right)$
$\displaystyle\quad\times
f(\boldsymbol{s_{r}},\boldsymbol{t_{p-r}},\boldsymbol{y_{r}},\boldsymbol{x_{p-r}})g(\boldsymbol{s^{\prime}_{r}},\boldsymbol{t^{\prime}_{q-r}},\boldsymbol{y^{\prime}_{r}},\boldsymbol{x^{\prime}_{q-r}}),$
provided the above integral exists. For $F\in\mathbb{D}^{m,2}$ with the
representation (2.7) and $m\geq 1$, we have
$\displaystyle D^{m}_{\bullet}F=\sum_{n\geq
m}\frac{n!}{(n-m)!}I_{n-m}\big{(}f_{n}(\bullet,\ast)\big{)}~{}\text{with
convergence in $L^{2}(\Omega;\mathcal{H}^{\otimes m})$},$ (2.11)
where $I_{n-m}\big{(}f_{n}(\bullet,\ast)\big{)}$ is understood as the
$(n-m)$th multiple integral of
$f_{n}(\bullet,\ast)\in\mathcal{H}^{\otimes(n-m)}$ for fixed $\bullet$. We can
write
$\displaystyle D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}F=\sum_{n\geq
m}\frac{n!}{(n-m)!}I_{n-m}\big{(}f_{n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\ast)\big{)},$
whenever the above series makes sense and converges in $L^{2}(\Omega)$. With
the decomposition (2.11) in mind, we have the following Gaussian Poincaré
inequality: For $F\in\mathbb{D}^{1,2}$, it holds that
$\displaystyle\text{Var}(F)\leq\mathbb{E}\big{[}\|DF\|_{\mathcal{H}}^{2}\big{]}.$
(2.12)
In fact, if $F$ has the representation (2.7), then
$\text{Var}(F)=\sum_{n\geq 1}n!\|f_{n}\|_{\mathcal{H}^{\otimes
n}}^{2}\quad\mbox{and}\quad\mathbb{E}\big{[}\|DF\|_{\mathcal{H}}^{2}\big{]}=\sum_{n\geq
1}nn!\|f_{n}\|_{\mathcal{H}^{\otimes n}}^{2},$
which gives us (2.12) and, moreover, indicates that the equality in (2.12)
holds only when $F\in\mathbb{C}^{W}_{0}\oplus\mathbb{C}^{W}_{1}$, that is,
only when $F$ is a real Gaussian random variable.
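For instance, if $F=I_{2}(f)$ with $f\in\mathcal{H}^{\odot 2}$ and $f\neq 0$, the above identities give
$\displaystyle\text{Var}(F)=2\|f\|_{\mathcal{H}^{\otimes 2}}^{2}\quad\text{while}\quad\mathbb{E}\big{[}\|DF\|_{\mathcal{H}}^{2}\big{]}=4\|f\|_{\mathcal{H}^{\otimes 2}}^{2},$
so the inequality (2.12) is strict as soon as $F$ has a nonzero component outside the first chaos.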
Now let us mention the particular case when the Gaussian noise is white in
time, which is used in the reduction step in Section 3.2. First, let us denote
$\mathcal{H}_{0}:=L^{2}\big{(}\mathbb{R}_{+};\mathcal{P}_{0}\big{)}$
and point out that the following inequality reduces many calculations to the
case of the white noise in time. For any nonnegative function
$f\in\mathcal{H}_{0}^{\otimes n}$ that vanishes outside
$([0,t]\times\mathbb{R}^{d})^{n}$,
$\|f\|_{\mathcal{H}^{\otimes
n}}^{2}\leq\Gamma_{t}^{n}\|f\|_{\mathcal{H}_{0}^{\otimes n}}^{2},$ (2.13)
where
$\Gamma_{t}=2\int_{0}^{t}\gamma_{0}(s)ds\quad{\rm and}\quad\|f\|_{\mathcal{H}_{0}^{\otimes n}}^{2}=\int_{[0,t]^{n}}\|f(t_{1},\cdot,\ldots,t_{n},\cdot)\|_{\mathcal{P}_{0}^{\otimes n}}^{2}dt_{1}\cdots dt_{n};$
whenever no ambiguity arises, we write $\|f\|_{0}:=\|f\|_{\mathcal{P}_{0}^{\otimes n}}$ so that
$\|f\|_{\mathcal{H}_{0}^{\otimes n}}^{2}=\int_{[0,t]^{n}}\|f(\boldsymbol{t_{n}},\bullet)\|_{0}^{2}d\boldsymbol{t_{n}}.$
For the sake of completeness, we sketch a proof of (2.13). Given such a function $f\in\mathcal{H}_{0}^{\otimes n}$,
$\displaystyle\|f\|_{\mathcal{H}^{\otimes n}}^{2}$
$\displaystyle=\int_{[0,t]^{2n}}d\boldsymbol{s_{n}}d\boldsymbol{t_{n}}\big{\langle}f(\boldsymbol{s_{n}},\bullet),f(\boldsymbol{t_{n}},\bullet)\big{\rangle}_{\mathcal{P}_{0}^{\otimes n}}\prod_{j=1}^{n}\gamma_{0}(s_{j}-t_{j})$
$\displaystyle\leq\int_{[0,t]^{2n}}d\boldsymbol{s_{n}}d\boldsymbol{t_{n}}\frac{1}{2}\Big{(}\big{\|}f(\boldsymbol{s_{n}},\bullet)\big{\|}_{\mathcal{P}_{0}^{\otimes n}}^{2}+\big{\|}f(\boldsymbol{t_{n}},\bullet)\big{\|}_{\mathcal{P}_{0}^{\otimes n}}^{2}\Big{)}\prod_{j=1}^{n}\gamma_{0}(s_{j}-t_{j})\leq\Gamma_{t}^{n}\|f\|_{\mathcal{H}_{0}^{\otimes n}}^{2},$
where the first inequality uses Cauchy-Schwarz together with $ab\leq\frac{1}{2}(a^{2}+b^{2})$.
Let $\dot{\mathfrak{X}}$ denote the Gaussian noise that is white in time and
has the same spatial correlation as $W$. More precisely,
$\\{\mathfrak{X}(f):f\in\mathcal{H}_{0}\\}$ is a centered Gaussian family with
covariance
$\mathbb{E}[\mathfrak{X}(f)\mathfrak{X}(g)]=\langle
f,g\rangle_{\mathcal{H}_{0}},\quad\mbox{for any $f,g\in\mathcal{H}_{0}$}.$
Denote by $I^{\mathfrak{X}}_{p}$ the $p$-th multiple stochastic integral with
respect to $\mathfrak{X}$. The product formula (2.9) still holds with $W$
replaced by the noise $\mathfrak{X}$. Moreover, if $f\in\mathcal{H}^{\otimes
p}$ and $g\in\mathcal{H}^{\otimes q}$ have disjoint temporal
supports, meaning that $f=0$ outside $(J\times\mathbb{R}^{d})^{p}$ and $g=0$ outside $(J^{c}\times\mathbb{R}^{d})^{q}$ for some set $J\subset\mathbb{R}_{+}$ (we will apply this formula to the functions $f=f_{t,x,j}^{(j)}(r,z;\bullet)$ and $g=f_{r,z,n-j}$ given in Section 3.1, in which case $J=(r,t)$), then we have $f\otimes_{r}g=0$ for $r=1,\dots,p\wedge
q$ and the product formula (2.9) reduces to
$\displaystyle
I^{\mathfrak{X}}_{p}(f)I^{\mathfrak{X}}_{q}(g)=I^{\mathfrak{X}}_{p+q}(f\otimes
g).$ (2.14)
In this case, the random variables $I^{\mathfrak{X}}_{p}(f)$ and $I^{\mathfrak{X}}_{q}(g)$ are independent by the Üstünel-Zakai-Kallenberg criterion (see Exercise 5.4.8 of [25]); note also that we do not need to assume $f,g$ to be symmetric in (2.14).
Now let us introduce the Ornstein-Uhlenbeck operator $L$ that can be defined
as follows. We say that $F$ belongs to $\text{Dom}(L)$ if
$F\in\mathbb{D}^{1,2}$ and $DF\in\text{Dom}(\delta)$; in this case, we let
$LF=-\delta DF$. For $F\in L^{2}(\Omega)$ of the form (2.7),
$F\in\text{Dom}(L)$ if and only if $\sum_{n\geq
1}n^{2}n!\|f_{n}\|_{\mathcal{H}^{\otimes n}}^{2}<\infty.$ In this case, we
have $LF=\sum_{n\geq 1}-nI_{n}(f_{n})$. Using the chaos expansion, we can also
define the Ornstein-Uhlenbeck semigroup
$\\{P_{t}=e^{tL},t\in\mathbb{R}_{+}\\}$ and the pseudo-inverse $L^{-1}$ of the
Ornstein-Uhlenbeck operator $L$ as follows. For $F\in L^{2}(\Omega)$ having
the chaos expansion (2.7),
$P_{t}F:=\sum_{n\geq 0}e^{-nt}I_{n}(f_{n})\quad{\rm and}\quad
L^{-1}F=\sum_{n\geq 1}-\frac{1}{n}I_{n}(f_{n}).$
Observe that for any centered random variable $F\in
L^{2}(\Omega,\sigma\\{W\\},\mathbb{P})$, $LL^{-1}F=F$ and for any
$G\in\text{Dom}(L)$, $L^{-1}LG=G-\mathbb{E}[G].$ The above expression and the
modified isometry property (2.8) give us the contraction property of $P_{t}$
on $L^{2}(\Omega)$, that is, for $F\in
L^{2}(\Omega,\sigma\\{W\\},\mathbb{P})$, $\|P_{t}F\|_{2}\leq\|F\|_{2}$.
Moreover, $P_{t}$ is a contraction operator on $L^{q}(\Omega)$ for any
$q\in[1,\infty)$; see [25, Proposition 2.8.6].
Finally, let us recall Nelson’s _hypercontractivity property_ of the Ornstein-
Uhlenbeck semigroup: For $F\in L^{q}(\Omega,\sigma\\{W\\},\mathbb{P})$ with
$q\in(1,\infty)$, it holds for each $t\geq 0$ that
$\|P_{t}F\|_{q_{t}}\leq\|F\|_{q}$ with $q_{t}=1+(q-1)e^{2t}$. In this paper,
we need one of its consequences – a moment inequality comparing
$L^{q}(\Omega)$-norms on a fixed chaos:
If $F\in\mathbb{C}^{W}_{n}$ and $p\in[2,\infty)$, then
$\|F\|_{p}\leq(p-1)^{n/2}\|F\|_{2}$; (2.15)
see _e.g._ [25, Corollary 2.8.14].
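As a quick illustration of (2.15), take $F=I_{1}(f)$ in the first chaos, so that $F\sim N(0,\sigma^{2})$ with $\sigma=\|f\|_{\mathcal{H}}$: for $p=4$, since $\mathbb{E}[Z^{4}]=3$ for $Z\sim N(0,1)$, one has
$\displaystyle\|F\|_{4}=3^{1/4}\sigma\leq\sqrt{3}\,\sigma=(p-1)^{1/2}\|F\|_{2},$
in agreement with (2.15) for $n=1$.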
### 2.2 Inequalities
Let us first present a few inequalities, which will be used in Section 3.
###### Lemma 2.1.
Fix an integer $d\geq 1$. Suppose that one of the following conditions holds:
(a) $\gamma\in L^{\ell}(\mathbb{R}^{d})$ for some $\ell\in(1,\infty)$;
(b) $\gamma(x)=|x|^{-\beta}$ for some $\beta\in(0,d)$.
Define
$\displaystyle q=\begin{cases}\ell/(2\ell-1)&\text{in case \rm(a)}\\\
d/(2d-\beta)&\text{in case \rm(b).}\end{cases}$
Then, for any $f,g\in L^{2q}(\mathbb{R}^{d})$,
$\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}f(x)g(y)\gamma(x-y)dxdy\leq
C_{\gamma}\|f\|_{L^{2q}(\mathbb{R}^{d})}\|g\|_{L^{2q}(\mathbb{R}^{d})},$
where $C_{\gamma}=\|\gamma\|_{L^{\ell}(\mathbb{R}^{d})}$ in case (a), and
$C_{\gamma}=C_{d,\beta}$ is the constant $($depending on $d,\beta)$ that
appears in the Hardy-Littlewood-Sobolev inequality (2.16) below, in case (b).
###### Proof.
In the case $d=2$, this result was essentially proved on page 15 of [35] in
case (a), and on page 6 of [4] in case (b). We reproduce the arguments here
for the sake of completeness.
In case (a), we apply Hölder’s inequality and _Young’s convolution inequality_:
$\int_{\mathbb{R}^{d}}f(x)(g*\gamma)(x)dx\leq\|f\|_{L^{\frac{2\ell}{2\ell-1}}(\mathbb{R}^{d})}\|g*\gamma\|_{L^{2\ell}(\mathbb{R}^{d})}\leq\|f\|_{L^{\frac{2\ell}{2\ell-1}}(\mathbb{R}^{d})}\|g\|_{L^{\frac{2\ell}{2\ell-1}}(\mathbb{R}^{d})}\|\gamma\|_{L^{\ell}(\mathbb{R}^{d})}.$
In case (b), we apply Hölder’s inequality and the _Hardy-Littlewood-Sobolev inequality_:
$\int_{\mathbb{R}^{d}}f(x)(g*\gamma)(x)dx\leq\|f\|_{L^{\frac{2d}{2d-\beta}}(\mathbb{R}^{d})}\|g*\gamma\|_{L^{2d/\beta}(\mathbb{R}^{d})}\leq
C_{d,\beta}\|f\|_{L^{\frac{2d}{2d-\beta}}(\mathbb{R}^{d})}\|g\|_{L^{\frac{2d}{2d-\beta}}(\mathbb{R}^{d})}.$
(2.16)
This concludes the proof. ∎
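For a concrete instance of case (b), take $d=2$ and $\gamma(x)=|x|^{-1}$ (so $\beta=1$): then
$\displaystyle q=\frac{d}{2d-\beta}=\frac{2}{3}\in(1/2,1),$
and the lemma controls the $\gamma$-weighted bilinear form by the $L^{4/3}(\mathbb{R}^{2})$ norms of $f$ and $g$.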
To deal with case (c) in ${\bf(H1)}$, we need the following modification of
Lemma 2.1.
###### Lemma 2.2.
Suppose that $\gamma(x_{1},\ldots,x_{d})=\prod_{i=1}^{d}\gamma_{i}(x_{i})$,
where for each $i\in\\{1,\ldots,d\\}$,
$\mbox{\rm(M1) $\gamma_{i}\in L^{\ell_{i}}(\mathbb{R})$ for some
$\ell_{i}\in(1,\infty)$ \quad or \quad(M2) $\gamma_{i}(x)=|x|^{-\beta_{i}}$
for some $\beta_{i}\in(0,1)$}.$
Let $q_{i}=\ell_{i}/(2\ell_{i}-1)$ in case (M1) and $q_{i}=1/(2-\beta_{i})$ in
case (M2). Let $q=\max\\{q_{i}:i=1,\dots,d\\}$.
If $f,g\in L^{2q}(\mathbb{R}^{d})$ satisfy $f(x)=g(x)=0$ for
$x\not\in\prod_{i=1}^{d}[a_{i},b_{i}]$ for some real numbers
$a_{i}<b_{i}$ (for instance, this lemma applies to the function $y\in\mathbb{R}^{2}\mapsto G_{t-s}(x-y)$, whose support is contained in $\\{y\in\mathbb{R}^{2};|x-y|<t-s\\}$, so that we can choose $\Lambda=2t-2s$), then
$\displaystyle\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}f(x)g(y)\gamma(x-y)dxdy\leq\Lambda^{\nu}C_{\gamma}\|f\|_{L^{2q}(\mathbb{R}^{d})}\|g\|_{L^{2q}(\mathbb{R}^{d})},$
(2.17)
with $\Lambda=\max\\{b_{i}-a_{i};i=1,\ldots,d\\}$,
$C_{\gamma}=\prod_{i=1}^{d}C_{\gamma_{i}}$ and
$\nu=\sum_{i=1}^{d}(q_{i}^{-1}-q^{-1})$. In particular, when $q_{i}=q$ for all
$i\in\\{1,\ldots,d\\}$, we have
$\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}f(x)g(y)\gamma(x-y)dxdy\leq
C_{\gamma}\|f\|_{L^{2q}(\mathbb{R}^{d})}\|g\|_{L^{2q}(\mathbb{R}^{d})}.$
The constants $C_{\gamma_{i}}$ are defined as in Lemma 2.1.
###### Proof.
By Lemma 2.1, inequality (2.17) holds for $d=1$ with $\nu=0$. Now let us
consider $d\geq 2$ and prove inequality (2.17) by induction. Suppose (2.17)
holds for $d\leq k-1$ $(k\geq 2)$. We use the notation
$x=(x_{1},\ldots,x_{k})=:\boldsymbol{x_{k}}.$ Without loss of generality,
we assume $q_{1}\geq q_{2}\geq\cdots\geq q_{k}$, so that $q=q_{1}$. Applying
the initial step $(d=1)$ yields
$\displaystyle\int_{\mathbb{R}^{2k}}d\boldsymbol{x_{k}}d\boldsymbol{y_{k}}f(\boldsymbol{x_{k}})g(\boldsymbol{y_{k}})\prod_{i=1}^{k}\gamma_{i}(x_{i}-y_{i})$
$\displaystyle\quad\leq
C_{\gamma_{k}}\int_{\mathbb{R}^{2(k-1)}}d\boldsymbol{x_{k-1}}d\boldsymbol{y_{k-1}}\big{\|}f(\boldsymbol{x_{k-1}},\bullet)\big{\|}_{L^{2q_{k}}(\mathbb{R})}\big{\|}g(\boldsymbol{y_{k-1}},\bullet)\big{\|}_{L^{2q_{k}}(\mathbb{R})}\prod_{i=1}^{k-1}\gamma_{i}(x_{i}-y_{i}).$
(2.18)
By the induction hypothesis, we can bound the right-hand side of (2.18) by
$\displaystyle\left(\prod_{i=1}^{k}C_{\gamma_{i}}\right)\Lambda^{\nu^{\ast}}\left(\int_{\mathbb{R}^{k-1}}\big{\|}f(\boldsymbol{x_{k-1}},\bullet)\big{\|}_{L^{2q_{k}}(\mathbb{R})}^{2q}d\boldsymbol{x_{k-1}}\right)^{\frac{1}{2q}}\left(\int_{\mathbb{R}^{k-1}}\big{\|}g(\boldsymbol{y_{k-1}},\bullet)\big{\|}_{L^{2q_{k}}(\mathbb{R})}^{2q}d\boldsymbol{y_{k-1}}\right)^{\frac{1}{2q}},$
with $\nu^{\ast}=\sum_{i=1}^{k-1}(q_{i}^{-1}-q^{-1})$. By Hölder’s inequality,
$\displaystyle\left(\int_{\mathbb{R}^{k-1}}\big{\|}f(\boldsymbol{x_{k-1}},\bullet)\big{\|}_{L^{2q_{k}}(\mathbb{R})}^{2q}d\boldsymbol{x_{k-1}}\right)^{\frac{1}{2q}}$
$\displaystyle=\left(\int_{\mathbb{R}^{k-1}}\left[\int_{a_{k}}^{b_{k}}\big{|}f(\boldsymbol{x_{k-1}},x_{k})\big{|}^{2q_{k}}dx_{k}\right]^{\frac{2q}{2q_{k}}}d\boldsymbol{x_{k-1}}\right)^{\frac{1}{2q}}$
$\displaystyle\leq\Lambda^{\frac{1}{2q_{k}}-\frac{1}{2q}}\left(\int_{\mathbb{R}^{k-1}}\int_{a_{k}}^{b_{k}}\big{|}f(\boldsymbol{x_{k-1}},x_{k})\big{|}^{2q}dx_{k}d\boldsymbol{x_{k-1}}\right)^{\frac{1}{2q}}.$
A similar inequality holds for $g$. Since
$\nu^{\ast}+(q_{k}^{-1}-q^{-1})=\sum_{i=1}^{k}(q_{i}^{-1}-q^{-1})$, inequality
(2.17) holds for $d=k$. ∎
We will need the following generalization of Lemma 2.1 and Lemma 2.2.
###### Lemma 2.3.
(1) Under the conditions of Lemma 2.1, for any $f,g\in
L^{2q}(\mathbb{R}^{md})$
$\displaystyle\int_{\mathbb{R}^{2md}}f(\boldsymbol{x_{m}})g(\boldsymbol{y_{m}})\prod_{j=1}^{m}\gamma(x_{j}-y_{j})d\boldsymbol{x_{m}}d\boldsymbol{y_{m}}\leq
C_{\gamma}^{m}\|f\|_{L^{2q}(\mathbb{R}^{md})}\|g\|_{L^{2q}(\mathbb{R}^{md})},$
(2.19)
where $C_{\gamma}$ is the same constant as in Lemma 2.1. Here
$\boldsymbol{x_{m}}=(x_{1},\dots,x_{m})$ with $x_{i}\in\mathbb{R}^{d}$.
(2) Let $\gamma,C_{\gamma}$ and $q$ be given as in Lemma 2.2. If $f,g\in
L^{2q}(\mathbb{R}^{md})$ satisfy
$f(\boldsymbol{x_{md}})=g(\boldsymbol{x_{md}})=0$ for
$\boldsymbol{x_{md}}\notin\prod_{i=1}^{md}[a_{i},b_{i}]$ for some real numbers
$a_{i}<b_{i}$, then inequality (2.19) holds with $C_{\gamma}$ replaced by
$\Lambda^{\nu}C_{\gamma}$, where $\Lambda=\max\\{b_{i}-a_{i}:i=1,\dots,md\\}$
and $\nu=\sum_{i=1}^{d}(q_{i}^{-1}-q^{-1})$. Here
$\boldsymbol{x_{md}}=(x_{1},\dots,x_{md})$ with $x_{i}\in\mathbb{R}$.
###### Proof.
The proof will be done by induction on $m$ simultaneously for both cases (1)
and (2). Let $C=C_{\gamma}$ in case (1) and $C=\Lambda^{\nu}C_{\gamma}$ in
case (2). The results are true for $m=1$ by Lemma 2.1 and Lemma 2.2. Assume
that the results hold for $m-1$. Applying the inequality for $m=1$ yields
$\displaystyle\quad\int_{\mathbb{R}^{2dm}}f(\boldsymbol{x_{m}})g(\boldsymbol{y_{m}})\prod_{j=1}^{m}\gamma(x_{j}-y_{j})d\boldsymbol{x_{m}}d\boldsymbol{y_{m}}$
$\displaystyle\leq
C\int_{\mathbb{R}^{2d(m-1)}}\|f(\boldsymbol{x_{m-1}},\bullet)\|_{L^{2q}(\mathbb{R}^{d})}\|g(\boldsymbol{y_{m-1}},\bullet)\|_{L^{2q}(\mathbb{R}^{d})}\prod_{j=1}^{m-1}\gamma(x_{j}-y_{j})d\boldsymbol{x_{m-1}}d\boldsymbol{y_{m-1}}.$
By the induction hypothesis, the latter term can be bounded by
$\displaystyle
C^{m}\left(\int_{\mathbb{R}^{d(m-1)}}\|f(\boldsymbol{x_{m-1}},\bullet)\|^{2q}_{L^{2q}(\mathbb{R}^{d})}d\boldsymbol{x_{m-1}}\right)^{\frac{1}{2q}}\left(\int_{\mathbb{R}^{d(m-1)}}\|g(\boldsymbol{x_{m-1}},\bullet)\|^{2q}_{L^{2q}(\mathbb{R}^{d})}d\boldsymbol{x_{m-1}}\right)^{\frac{1}{2q}},$
which completes the proof. ∎
Let us return to the three cases of Hypothesis ${\bf(H1)}$. Lemma 2.1
indicates that $L^{2q}(\mathbb{R}^{2})$ is continuously embedded into
$\mathcal{P}_{0}$, with $q\in(1/2,1)$ given by
$\displaystyle q=\begin{cases}\ell/(2\ell-1)&\mbox{in case \rm({a})},\\\
2/(4-\beta)&\mbox{in case \rm({b})}.\end{cases}$ (2.20)
Recall that $\mathcal{P}_{0}$ has been defined at the beginning of Section
2.1. Moreover, for any $f,g\in L^{2q}(\mathbb{R}^{2})$,
$\int_{\mathbb{R}^{4}}\big{|}f(x)g(y)\big{|}\gamma(x-y)dxdy\leq
D_{\gamma}\|f\|_{L^{2q}(\mathbb{R}^{2})}\|g\|_{L^{2q}(\mathbb{R}^{2})},$
(2.21)
where
$\displaystyle
D_{\gamma}=\begin{cases}\|\gamma\|_{L^{\ell}(\mathbb{R}^{2})}&\mbox{in case
\rm({a})},\\\ C_{2,\beta}&\mbox{in case \rm({b})}.\end{cases}$ (2.22)
For case (c) of Hypothesis ${\bf(H1)}$, we consider three sub-cases:
$\displaystyle\begin{cases}&{\rm(i)}~{}\gamma_{i}\in
L^{\ell_{i}}(\mathbb{R})~{}\text{for some $\ell_{i}>1$, $i=1,2$;}\\\
&{\rm(ii)}~{}\gamma_{i}(x_{i})=|x_{i}|^{-\beta_{i}}~{}\text{for some
$\beta_{i}\in(0,1)$, $i=1,2$;}\\\ &{\rm(iii)}~{}\gamma_{1}\in
L^{\ell}(\mathbb{R})~{}\text{for some $\ell\in(1,\infty)$ and
$\gamma_{2}(x_{2})=|x_{2}|^{-\beta}$ for some $\beta\in(0,1)$.}\end{cases}$
Lemma 2.2 implies that, for any $f,g\in L^{2q}(\mathbb{R}^{2})$ with
$\displaystyle q=\begin{cases}\max\\{\ell_{i}/(2\ell_{i}-1):i=1,2\\}&\mbox{in
case \rm(i)}\\\ \max\\{1/(2-\beta_{i}):i=1,2\\}&\mbox{in case \rm(ii)}\\\
\max\\{\ell/(2\ell-1),1/(2-\beta)\\}&\mbox{in case \rm(iii)}\end{cases},$
(2.23)
that vanish outside a box with side lengths bounded by $\Lambda$, inequality (2.21) still holds with
$\displaystyle
D_{\gamma}=\begin{cases}\|\gamma_{1}\|_{L^{\ell_{1}}(\mathbb{R})}\|\gamma_{2}\|_{L^{\ell_{2}}(\mathbb{R})}\Lambda^{|\frac{1}{\ell_{1}}-\frac{1}{\ell_{2}}|}&\mbox{in
case \rm(i)}\\\
C_{1,\beta_{1}}C_{1,\beta_{2}}\Lambda^{|\beta_{1}-\beta_{2}|}&\mbox{in case
\rm(ii)}\\\
C_{1,\beta}\|\gamma_{1}\|_{L^{\ell}(\mathbb{R})}\Lambda^{|\frac{1}{\ell}-\beta|}&\mbox{in
case \rm(iii)}\end{cases},$ (2.24)
where the constants $C_{1,\beta_{i}}$ are given as in Lemma 2.1.
From Lemma 2.3, we deduce that in cases (a) and (b),
$\|f\|_{\mathcal{H}_{0}^{\otimes n}}^{2}\leq
D_{\gamma}^{n}\int_{[0,t]^{n}}\|f(\boldsymbol{t_{n}},\bullet)\|_{L^{2q}(\mathbb{R}^{2n})}^{2}d\boldsymbol{t_{n}},$
(2.25)
for any measurable function
$f:(\mathbb{R}_{+}\times\mathbb{R}^{2})^{n}\to\mathbb{R}$ such that $f$
vanishes outside $([0,t]\times\mathbb{R}^{2})^{n}$; in case (c), inequality
(2.25) holds true for any measurable function
$f:(\mathbb{R}_{+}\times\mathbb{R}^{2})^{n}\to\mathbb{R}$ such that
$f(t_{1},x_{1},\dots,t_{n},x_{n})=f(\boldsymbol{t_{n}},\boldsymbol{x_{n}})=0~{}\text{whenever $\boldsymbol{t_{n}}\notin[0,t]^{n}$ or $\boldsymbol{x_{n}}\notin\prod_{i=1}^{2n}[a_{i},b_{i}]$}$
with $\Lambda:=\max\\{b_{i}-a_{i}:i=1,\dots,2n\\}<\infty$.
Let us present a few facts on the fundamental solution $G$. When $d=2$,
$\|G_{t}\|_{L^{p}(\mathbb{R}^{2})}=\left(\frac{(2\pi)^{1-p}}{2-p}\right)^{1/p}t^{\frac{2}{p}-1}\quad\mbox{for all}~{}p\in(0,2),$ (2.26)
$G_{t}^{p}(x)\leq(2\pi t)^{q-p}G_{t}^{q}(x)\quad\mbox{for all}~{}p<q,$ (2.27)
and
$\mathbf{1}_{\\{|x|<t\\}}\leq 2\pi tG_{t}(x).$ (2.28)
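These three facts follow from the explicit formula $G_{t}(x)=\frac{1}{2\pi}\big{(}t^{2}-|x|^{2}\big{)}^{-1/2}\mathbf{1}_{\\{|x|<t\\}}$ for the fundamental solution of the wave equation in $d=2$. For (2.26), passing to polar coordinates,
$\displaystyle\int_{\mathbb{R}^{2}}G_{t}^{p}(x)dx=(2\pi)^{1-p}\int_{0}^{t}r(t^{2}-r^{2})^{-p/2}dr=\frac{(2\pi)^{1-p}}{2-p}\,t^{2-p},$
which is finite precisely for $p\in(0,2)$; (2.27) and (2.28) follow from the lower bound $G_{t}(x)\geq(2\pi t)^{-1}$ on the support $\\{|x|<t\\}$.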
We will also use the following estimate.
###### Lemma 2.4 (Lemma 4.3 of [4]).
For any $q\in(1/2,1)$ and $d=2$,
$\int_{r}^{t}(G_{t-s}^{2q}*G_{s-r}^{2q})^{1/q}(z)ds\leq
A_{q}(t-r)^{\frac{1}{q}-1}G_{t-r}^{2-\frac{1}{q}}(z),$
where $A_{q}>0$ is a constant depending on $q$.
Finally, we record the expression of the Fourier transform of $G_{t}$ for
$d\in\\{1,2\\}$:
$\displaystyle\mathcal{F}G_{t}(\xi)=\int_{\mathbb{R}^{d}}e^{-i\xi\cdot
x}G_{t}(x)dx=\frac{\sin(t|\xi|)}{|\xi|}=:\widehat{G}_{t}(\xi).$ (2.29)
Note that (see e.g. (3.4) of [3])
$\displaystyle\big{|}\widehat{G}_{t}(\xi)\big{|}^{2}\leq 2(t^{2}\vee
1)\frac{1}{1+|\xi|^{2}}.$ (2.30)
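For completeness, (2.30) can be seen from the elementary bound $\sin^{2}(t|\xi|)/|\xi|^{2}\leq t^{2}\wedge|\xi|^{-2}$:
$\displaystyle\big{|}\widehat{G}_{t}(\xi)\big{|}^{2}\leq t^{2}\wedge\frac{1}{|\xi|^{2}}\leq\frac{2(t^{2}\vee 1)}{1+|\xi|^{2}},$
where the last step distinguishes the cases $|\xi|\leq 1$ (then $1+|\xi|^{2}\leq 2$) and $|\xi|>1$ (then $1+|\xi|^{2}\leq 2|\xi|^{2}$).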
In Section 4, we need the following two results.
###### Lemma 2.5.
For $d\in\\{1,2\\}$, let $\gamma_{0}$ satisfy the assumption (i) on page 1 and
let $\mu_{p}$ be a symmetric measure on $(\mathbb{R}^{d})^{p}$, for some
integer $p\geq 1$. Then, with $0<s\leq t$ and
$\Delta_{p}(t)=\\{\boldsymbol{s_{p}}\in\mathbb{R}_{+}^{p}:t=s_{0}>s_{1}>\cdots>s_{p}>0\\}$,
$\displaystyle\quad\sum_{\sigma\in\mathfrak{S}_{p}}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{[0,s]^{p}}d\boldsymbol{\tilde{s}_{p}}\mathbf{1}_{\\{s>\tilde{s}_{\sigma(1)}>\cdots>\tilde{s}_{\sigma(p)}>0\\}}\left(\prod_{j=1}^{p}\gamma_{0}(s_{j}-\tilde{s}_{j})\right)\int_{\mathbb{R}^{pd}}\mu_{p}(d\boldsymbol{\xi_{p}})$
$\displaystyle\qquad\qquad\times
g(s_{1},\xi_{1},\dots,s_{p},\xi_{p})g(\tilde{s}_{\sigma(1)},\xi_{\sigma(1)},\dots,\tilde{s}_{\sigma(p)},\xi_{\sigma(p)})$
$\displaystyle\leq\Gamma_{t}^{p}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd}}\mu_{p}(d\boldsymbol{\xi_{p}})g(s_{1},\xi_{1},\dots,s_{p},\xi_{p})^{2},\quad\text{with}~{}\Gamma_{t}:=\int_{-t}^{t}\gamma_{0}(a)da,$
for any measurable function
$g:(\mathbb{R}_{+}\times\mathbb{R}^{d})^{p}\to\mathbb{R}_{+}$ for which the
above integral is finite.
###### Proof.
After applying $|ab|\leq\frac{a^{2}+b^{2}}{2}$ and using the symmetry of
$\mu_{p}$, we have that the left-hand side quantity is bounded by
$\displaystyle\frac{1}{2}\sum_{\sigma\in\mathfrak{S}_{p}}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{[0,s]^{p}}d\boldsymbol{\tilde{s}_{p}}\mathbf{1}_{\\{s>\tilde{s}_{\sigma(1)}>\cdots>\tilde{s}_{\sigma(p)}>0\\}}h(\boldsymbol{s_{p}})\prod_{j=1}^{p}\gamma_{0}(s_{j}-\tilde{s}_{j})$
(2.31)
$\displaystyle\quad+\frac{1}{2}\sum_{\sigma\in\mathfrak{S}_{p}}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{[0,s]^{p}}d\boldsymbol{\tilde{s}_{p}}\mathbf{1}_{\\{s>\tilde{s}_{\sigma(1)}>\cdots>\tilde{s}_{\sigma(p)}>0\\}}h\big{(}\tilde{s}_{\sigma(1)},...,\tilde{s}_{\sigma(p)}\big{)}\prod_{j=1}^{p}\gamma_{0}(s_{j}-\tilde{s}_{j})$
(2.32)
with
$\displaystyle
h(s_{1},\dots,s_{p}):=\begin{cases}{\displaystyle\int_{\mathbb{R}^{pd}}\mu_{p}(d\boldsymbol{\xi_{p}})g(s_{1},\xi_{1},\dots,s_{p},\xi_{p})^{2},}\quad&\text{for
$\boldsymbol{s_{p}}\in\Delta_{p}(t)$}\\\ 0,&\text{otherwise.}\end{cases}$
Putting
$\mathcal{I}_{s}(s_{1},\dots,s_{p}):=\mathbf{1}_{\\{s>s_{1}>\cdots>s_{p}>0\\}}$
and letting $\widetilde{\mathcal{I}}_{s}(s_{1},\dots,s_{p})$ be its canonical
symmetrization (so that
$\big{|}\widetilde{\mathcal{I}}_{s}\big{|}\leq(p!)^{-1}$), we can rewrite the
term in (2.31) as
$\displaystyle\frac{p!}{2}\int_{\Delta_{p}(t)}\int_{[0,s]^{p}}d\boldsymbol{s_{p}}d\boldsymbol{\tilde{s}_{p}}h(\boldsymbol{s_{p}})\widetilde{\mathcal{I}}_{s}(\boldsymbol{\tilde{s}_{p}})\prod_{j=1}^{p}\gamma_{0}(s_{j}-\tilde{s}_{j})$
$\displaystyle\leq\frac{1}{2}\int_{\Delta_{p}(t)}\int_{[0,s]^{p}}d\boldsymbol{s_{p}}d\boldsymbol{\tilde{s}_{p}}h(\boldsymbol{s_{p}})\prod_{j=1}^{p}\gamma_{0}(s_{j}-\tilde{s}_{j})$
$\displaystyle\leq\frac{1}{2}\Gamma_{t}^{p}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}h(\boldsymbol{s_{p}}),$
using also the bound
$\sup\\{\int_{0}^{s}\gamma_{0}(r-r^{\prime})dr^{\prime}:r\in[0,t]\\}\leq\Gamma_{t}$.
For the other term (2.32), we argue in the same way: With
$(\mathcal{I}_{s}\cdot
h)(s_{1},...,s_{p})=\mathcal{I}_{s}(s_{1},\dots,s_{p})h(s_{1},...,s_{p})$, we
rewrite the term (2.32) as
$\displaystyle\quad\frac{p!}{2}\int_{[0,t]^{p}}d\boldsymbol{s_{p}}\int_{[0,s]^{p}}d\boldsymbol{\tilde{s}_{p}}\mathcal{I}_{t}(\boldsymbol{s_{p}})\times\widetilde{(\mathcal{I}_{s}\cdot
h)}(\boldsymbol{\widetilde{s}_{p}})\prod_{j=1}^{p}\gamma_{0}(s_{j}-\tilde{s}_{j})=\frac{p!}{2}\big{\langle}\mathcal{I}_{t},\widetilde{\mathcal{I}_{s}\cdot
h}\big{\rangle}_{\mathcal{H}^{\otimes p}}$
$\displaystyle=\frac{p!}{2}\big{\langle}\widetilde{\mathcal{I}_{t}},\mathcal{I}_{s}\cdot h\big{\rangle}_{\mathcal{H}^{\otimes p}}\leq\frac{1}{2}\int_{[0,t]^{p}}d\boldsymbol{s_{p}}\int_{\Delta_{p}(s)}d\boldsymbol{\widetilde{s}_{p}}h(\boldsymbol{\widetilde{s}_{p}})\prod_{j=1}^{p}\gamma_{0}(s_{j}-\widetilde{s}_{j})\leq\frac{1}{2}\Gamma_{t}^{p}\int_{\Delta_{p}(s)}d\boldsymbol{s_{p}}h(\boldsymbol{s_{p}}),$
since $h\geq 0$ and $\big{|}\widetilde{\mathcal{I}}_{t}\big{|}\leq(p!)^{-1}$.
This concludes the proof. ∎
###### Lemma 2.6.
For $d\in\\{1,2\\}$ let $\gamma,\mu$ satisfy the assumption (ii) on page 1.
Then, for any _nonnegative_ function $h\in\mathcal{P}_{0}\cap
L^{1}(\mathbb{R}^{d})$,
$\sup_{z\in\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mu(d\xi)|\mathcal{F}h(\xi+z)|^{2}\leq\int_{\mathbb{R}^{d}}\mu(d\xi)|\mathcal{F}h(\xi)|^{2}.$
As a consequence, for any integer $p\geq 1$ and $w_{1},\dots,w_{p}\in[0,t]$,
$\sup_{\boldsymbol{w_{p}}\in[0,t]^{p}}\sup_{\boldsymbol{z_{p}}\in\mathbb{R}^{dp}}\int_{\mathbb{R}^{dp}}\mu(d\boldsymbol{\xi_{p}})\prod_{j=1}^{p}\big{|}\widehat{G}_{w_{j}}(\xi_{j}+z_{j})\big{|}^{2}\leq\left(2(t^{2}\vee
1)\int_{\mathbb{R}^{d}}\frac{\mu(d\xi)}{1+|\xi|^{2}}\right)^{p}.$ (2.33)
###### Proof.
Since $h\geq 0$, using the fact that $\mathcal{F}h(\xi+z)=\mathcal{F}(e^{-iz\cdot}h)(\xi)$ together with $|e^{-iz\cdot(x-y)}|=1$, we get
$\displaystyle\int_{\mathbb{R}^{d}}\mu(d\xi)\big{|}\mathcal{F}h(\xi+z)\big{|}^{2}=\int_{\mathbb{R}^{2d}}e^{-iz\cdot(x-y)}h(x)h(y)\gamma(x-y)dxdy\leq\int_{\mathbb{R}^{2d}}h(x)h(y)\gamma(x-y)dxdy,$
which is exactly
$\int_{\mathbb{R}^{d}}\mu(d\xi)\big{|}\mathcal{F}h(\xi)\big{|}^{2}.$ In
particular, by (2.30),
$\sup_{z\in\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mu(d\xi)\big{|}\widehat{G}_{s}(\xi+z)\big{|}^{2}\leq\int_{\mathbb{R}^{d}}\mu(d\xi)\big{|}\widehat{G}_{s}(\xi)\big{|}^{2}\leq
2(s^{2}\vee 1)\int_{\mathbb{R}^{d}}\frac{\mu(d\xi)}{1+|\xi|^{2}},$
which is finite due to Dalang’s condition (1.2). Applying this inequality
several times yields
$\displaystyle\int_{\mathbb{R}^{dp}}\mu(d\boldsymbol{\xi_{p}})\prod_{j=1}^{p}\big{|}\widehat{G}_{w_{j}}(\xi_{j}+z_{j})\big{|}^{2}\leq\left(2(t^{2}\vee
1)\int_{\mathbb{R}^{d}}\frac{\mu(d\xi)}{1+|\xi|^{2}}\right)^{p},$
which is a uniform bound over
$(\boldsymbol{z_{p}},\boldsymbol{w_{p}})\in\mathbb{R}^{dp}\times[0,t]^{p}$. ∎
## 3 $L^{p}$ estimates for Malliavin derivatives
This section is mainly devoted to the proof of Theorem 1.3. The proof will be
done in several steps organized in Sections 3.1, 3.2, 3.3, 3.4 and 3.5. In
Section 3.6, we record a few consequences of Theorem 1.3 that will be used in
the proof of Theorem 1.10 in Section 5.
### 3.1 Step 1: Preliminaries
Let us first introduce some handy notation. Recall that for
$\boldsymbol{t_{n}}:=(t_{1},\ldots,t_{n})$ and
$\boldsymbol{x_{n}}:=(x_{1},\ldots,x_{n})$, we defined in (1.8)
$f_{t,x,n}(\boldsymbol{t_{n}},\boldsymbol{x_{n}})=G_{t-t_{1}}(x-x_{1})G_{t_{1}-t_{2}}(x_{1}-x_{2})\cdots
G_{t_{n-1}-t_{n}}(x_{n-1}-x_{n}),$
with the convention (1.6), and we denote by $\widetilde{f}_{t,x,n}$ the
symmetrization of $f_{t,x,n}$; see (1.9). We treat the time-space variables
$(t_{i},x_{i})$ as one coordinate and we write
$f_{t,x,n}(r,z;\boldsymbol{t_{n-1}},\boldsymbol{x_{n-1}}):=f_{t,x,n}(r,z,t_{1},x_{1},\ldots,t_{n-1},x_{n-1})$
as in Notation A-(3). Recall that the solution $u(t,x)$ has the Wiener chaos
expansion
$u(t,x)=1+\sum_{n=1}^{\infty}I_{n}(f_{t,x,n}),$
where the kernel $f_{t,x,n}$ is not symmetric and in this case, by definition,
$I_{n}(f_{t,x,n})=I_{n}\big{(}\widetilde{f}_{t,x,n}\big{)}$.
Our first goal is to show that, for any fixed
$(r,z)\in[0,t]\times\mathbb{R}^{d}$ and for any $p\in[2,\infty)$, the series
$\displaystyle\sum_{n\geq
1}nI_{n-1}\big{(}\widetilde{f}_{t,x,n}(r,z;\bullet)\big{)}$ (3.1)
converges in $L^{p}(\Omega)$, and the sum, denoted by $D_{r,z}u(t,x)$,
satisfies the $L^{p}$ estimates (1.11).
The first term of the series (3.1) is
$\widetilde{f}_{t,x,1}(r,z)=G_{t-r}(x-z)$. In general, for any $n\geq 1$,
$\widetilde{f}_{t,x,n}(r,z;\bullet)=\frac{1}{n}\sum_{j=1}^{n}h^{(j)}_{t,x,n}(r,z;\bullet),$
(3.2)
where $h^{(j)}_{t,x,n}(r,z;\bullet)$ is the symmetrization of the function
$(\boldsymbol{t_{n-1}},\boldsymbol{x_{n-1}})\mapsto f^{(j)}_{t,x,n}(r,z;\boldsymbol{t_{n-1}},\boldsymbol{x_{n-1}})$, which is obtained from $f_{t,x,n}$ by placing $r$ in position $j$ among the time instants, and $z$ in position $j$ among the space points: with the convention
(1.6),
$\displaystyle f^{(j)}_{t,x,n}(r,z;\boldsymbol{t_{n-1}},\boldsymbol{x_{n-1}})$
$\displaystyle\quad=G_{t-t_{1}}(x-x_{1})\cdots
G_{t_{j-1}-r}(x_{j-1}-z)G_{r-t_{j}}(z-x_{j})\cdots
G_{t_{n-2}-t_{n-1}}(x_{n-2}-x_{n-1}).$ (3.3)
That is,
$\displaystyle
f^{(j)}_{t,x,n}(r,z;\bullet)=f_{t,x,j}^{(j)}(r,z;\bullet)\otimes f_{r,z,n-j},$
(3.4)
with the convention $f_{r,z,0}=1$. For example, $f^{(1)}_{t,x,1}(r,z;\bullet)=G_{t-r}(x-z)$
and
$f^{(1)}_{t,x,n}(r,z;\boldsymbol{t_{n-1}},\boldsymbol{x_{n-1}})=G_{t-r}(x-z)f_{r,z,n-1}(\boldsymbol{t_{n-1}},\boldsymbol{x_{n-1}})$.
By the definition of the symmetrization,
$h^{(j)}_{t,x,n}(r,z;\boldsymbol{t_{n-1}},\boldsymbol{x_{n-1}})=\frac{1}{(n-1)!}\sum_{\sigma\in\mathfrak{S}_{n-1}}f_{t,x,n}^{(j)}(r,z;t_{\sigma(1)},x_{\sigma(1)},\ldots,t_{\sigma(n-1)},x_{\sigma(n-1)}).$
(3.5)
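To illustrate (3.2) and (3.3) in the simplest nontrivial case $n=2$: with the convention (1.6),
$\displaystyle f^{(1)}_{t,x,2}(r,z;t_{1},x_{1})=G_{t-r}(x-z)G_{r-t_{1}}(z-x_{1})\quad\text{and}\quad f^{(2)}_{t,x,2}(r,z;t_{1},x_{1})=G_{t-t_{1}}(x-x_{1})G_{t_{1}-r}(x_{1}-z),$
and $\widetilde{f}_{t,x,2}(r,z;\bullet)$ is one half of their sum; here each $f^{(j)}_{t,x,2}(r,z;\bullet)$ depends on a single time-space pair, so it coincides with its symmetrization $h^{(j)}_{t,x,2}(r,z;\bullet)$.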
Similarly, for $\boldsymbol{s_{m}}\in[0,t]^{m}$ and
$\boldsymbol{y_{m}}\in\mathbb{R}^{dm}$, and for any $p\in[2,\infty)$, we will
show that
$\displaystyle
D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x):=\sum_{n\geq
m}\frac{n!}{(n-m)!}I_{n-m}\big{(}\widetilde{f}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\big{)}$
(3.6)
converges in $L^{p}(\Omega)$. Note that if the series (3.6) converges in
$L^{p}(\Omega)$, we can see that almost surely, the function
$(\boldsymbol{s_{m}},\boldsymbol{y_{m}})\mapsto
D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)$
is _symmetric_ , meaning that for any $\sigma\in\mathfrak{S}_{m}$,
$D_{s_{1},y_{1}}D_{s_{2},y_{2}}\cdots
D_{s_{m},y_{m}}u(t,x)=D_{s_{\sigma(1)},y_{\sigma(1)}}D_{s_{\sigma(2)},y_{\sigma(2)}}\cdots
D_{s_{\sigma(m)},y_{\sigma(m)}}u(t,x).$
_From now on_, we assume $t>s_{1}>\cdots>s_{m}>0$, without loss of generality.
Note that like (3.2), we can write
$\displaystyle\frac{n!}{(n-m)!}\widetilde{f}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)=\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}h^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet),$
(3.7)
where $\boldsymbol{i_{m}}\in\Delta_{n,m}$ means $1\leq
i_{1}<i_{2}<\cdots<i_{m}\leq n$ and
$h^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)$
is the symmetrization of the function
$f^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)$
that is defined by
$\displaystyle
f^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)$
(3.8) $\displaystyle=f^{(i_{1})}_{t,x,i_{1}}(s_{1},y_{1};\bullet)\otimes
f^{(i_{2}-i_{1})}_{s_{1},y_{1},i_{2}-i_{1}}(s_{2},y_{2};\bullet)\otimes\cdots\otimes
f^{(i_{m}-i_{m-1})}_{s_{m-1},y_{m-1},i_{m}-i_{m-1}}(s_{m},y_{m};\bullet)\otimes
f_{s_{m},y_{m},n-i_{m}},$
which is a generalization of (3.4).
### 3.2 Step 2: Reduction to white noise in time
Let $\dot{\mathfrak{X}}$ denote the Gaussian noise that is white in time and
has the same spatial correlation as $W$ and let
$\\{\mathfrak{X}(f):f\in\mathcal{H}_{0}\\}$ denote the resulting isonormal
Gaussian process; see Section 2.1.
For any $p\in[2,\infty)$, we deduce from (3.6) and (3.7) that
$\displaystyle\big{\|}D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)\big{\|}_{p}$
$\displaystyle\leq\sum_{n\geq
m}\left\|I_{n-m}\left(\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}h^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\right)\right\|_{p}\quad\text{by
triangle inequality}$ $\displaystyle\leq\sum_{n\geq
m}(p-1)^{\frac{n-m}{2}}\left\|I_{n-m}\left(\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}h^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\right)\right\|_{2}\quad\text{by
\eqref{hyper}}.$
The function
$\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}h^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)$
vanishes outside $\big{(}[0,t]\times\mathbb{R}^{d}\big{)}^{n-m}$, thus we
deduce from (2.13) that
$\displaystyle\quad\left\|I_{n-m}\left(\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}h^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\right)\right\|_{2}^{2}=(n-m)!\left\|\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}h^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\right\|_{\mathcal{H}^{\otimes(n-m)}}^{2}$
$\displaystyle\leq\Gamma_{t}^{n-m}(n-m)!\left\|\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}h^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\right\|_{\mathcal{H}_{0}^{\otimes(n-m)}}^{2}=\Gamma_{t}^{n-m}\left\|I^{\mathfrak{X}}_{n-m}\left(\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}h^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\right)\right\|_{2}^{2}.$
Therefore, we get
$\displaystyle\big{\|}D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)\big{\|}_{p}$
$\displaystyle\leq\sum_{n\geq
m}\big{[}(p-1)\Gamma_{t}\big{]}^{\frac{n-m}{2}}\left\|\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}I^{\mathfrak{X}}_{n-m}\big{(}f^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\big{)}\right\|_{2}.$
(3.9)
This leads to
$\displaystyle\big{\|}D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)\big{\|}_{p}\leq\sum_{n\geq
m}\big{[}(p-1)\Gamma_{t}\big{]}^{\frac{n-m}{2}}\sqrt{\mathcal{Q}_{m,n}},$
(3.10)
with
$\displaystyle\mathcal{Q}_{m,n}:$
$\displaystyle=\mathbb{E}\left[\left(\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}I^{\mathfrak{X}}_{n-m}\big{(}f^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\big{)}\right)^{2}\right]\leq\binom{n}{m}\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}\mathbb{E}\left(I^{\mathfrak{X}}_{n-m}\big{(}f^{(\boldsymbol{i_{m}})}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)\big{)}^{2}\right).$
(3.11)
The product formula (2.14) and the decomposition (3.8) yield, with
$(i_{0},s_{0},y_{0})=(0,t,x)$,
$\displaystyle\mathcal{Q}_{m,n}\leq\binom{n}{m}\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}\mathbb{E}\left(I^{\mathfrak{X}}_{n-i_{m}}\big{(}f_{s_{m},y_{m},n-i_{m}}\big{)}^{2}\prod_{j=1}^{m}I^{\mathfrak{X}}_{i_{j}-i_{j-1}-1}\Big{(}f^{(i_{j}-i_{j-1})}_{s_{j-1},y_{j-1},i_{j}-i_{j-1}}(s_{j},y_{j};\bullet)\Big{)}^{2}\right)$
$\displaystyle=\binom{n}{m}\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}\big{\|}I^{\mathfrak{X}}_{n-i_{m}}\big{(}f_{s_{m},y_{m},n-i_{m}}\big{)}\big{\|}^{2}_{2}\times\prod_{j=1}^{m}\Big{\|}I^{\mathfrak{X}}_{i_{j}-i_{j-1}-1}\Big{(}f^{(i_{j}-i_{j-1})}_{s_{j-1},y_{j-1},i_{j}-i_{j-1}}(s_{j},y_{j};\bullet)\Big{)}\Big{\|}^{2}_{2},$
(3.12)
where the last equality follows from the independence of the random variables
inside the expectation. It remains to estimate two typical terms:
$\displaystyle\big{\|}I^{\mathfrak{X}}_{j}(f_{r,z,j})\big{\|}_{2}^{2}\quad{\rm
and}\quad\Big{\|}I^{\mathfrak{X}}_{j-1}\big{(}f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}\Big{\|}_{2}^{2}~{}\text{for
$1\leq j\leq n$ and $t>r$}.$ (3.13)
The first term in (3.13) can be estimated as follows. Using Fourier transform
in space (see (2.29)), we have, with $t_{0}=r$,
$\displaystyle\big{\|}I^{\mathfrak{X}}_{j}(f_{r,z,j})\big{\|}_{2}^{2}$
$\displaystyle=j!\big{\|}\widetilde{f}_{r,z,j}\big{\|}_{\mathcal{H}_{0}^{\otimes
j}}^{2}=\int_{[0,r]^{j}}\big{\|}f_{r,z,j}(\boldsymbol{t_{j}},\bullet)\big{\|}_{0}^{2}d\boldsymbol{t_{j}}$
(3.14)
$\displaystyle=\int_{r>t_{1}>\cdots>t_{j}>0}\int_{\mathbb{R}^{dj}}\big{|}\mathcal{F}f_{r,z,j}(\boldsymbol{t_{j}},\boldsymbol{\xi_{j}})\big{|}^{2}\mu(d\boldsymbol{\xi_{j}})d\boldsymbol{t_{j}}$
$\displaystyle=\int_{r>t_{1}>\cdots>t_{j}>0}\left(\int_{\mathbb{R}^{dj}}\prod_{k=0}^{j-1}\big{|}\mathcal{F}G_{t_{k}-t_{k+1}}(\xi_{k+1}+\cdots+\xi_{j})\big{|}^{2}\mu(d\xi_{k})\right)d\boldsymbol{t_{j}}.$
By Lemma 2.6,
$\displaystyle\big{\|}I^{\mathfrak{X}}_{j}(f_{r,z,j})\big{\|}_{2}^{2}$
$\displaystyle\leq\frac{C^{j}}{j!},$ (3.15)
where $C=2(t^{2}+1)\int_{\mathbb{R}^{d}}(1+|\xi|^{2})^{-1}\mu(d\xi)$.
###### Remark 3.1.
By the arguments that lead to (3.9), we can also get, for any
$p\in[2,\infty)$,
$\big{\|}u(t,x)\big{\|}_{p}\leq 1+\sum_{n\geq
1}\big{\|}I_{n}(f_{t,x,n})\big{\|}_{p}\leq 1+\sum_{n\geq
1}\big{[}(p-1)\Gamma_{t}\big{]}^{n/2}\big{\|}I^{\mathfrak{X}}_{n}(f_{t,x,n})\big{\|}_{2}$
and then the estimate (3.15) implies $u(t,x)\in L^{p}(\Omega)$. Moreover,
$\displaystyle\sup_{(s,y)\in[0,t]\times\mathbb{R}^{d}}\|u(s,y)\|_{p}<+\infty~{}\text{for
any $t\in\mathbb{R}_{+}$.}$ (3.16)
This holds under Dalang’s condition (1.2) only, and the case $p=2$ provides
another proof of [3, Theorem 4.4] when $d=1,2$.
In what follows, we estimate the second term in (3.13) separately for the
cases $d=1$ and $d=2$. As usual, we will use $C$ to denote a generic constant
that may vary from line to line.
#### 3.2.1 Estimation of
$\Big{\|}I^{\mathfrak{X}}_{j-1}(f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}\Big{\|}_{2}^{2}$
when $d=1$
When $d=1$, $G_{t}(x)=\frac{1}{2}\mathbf{1}_{\\{|x|<t\\}}$. For $j=1$,
$I^{\mathfrak{X}}_{j-1}(f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}=G_{t-r}(x-z)$ with
the convention (1.6). For $j\geq 2$, it follows from the (modified) isometry
property (2.8) that
$\displaystyle\Big{\|}I^{\mathfrak{X}}_{j-1}(f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}\Big{\|}_{2}^{2}=(j-1)!\Big{\|}h^{(j)}_{t,x,j}(r,z;\bullet)\Big{\|}_{\mathcal{H}_{0}^{\otimes(j-1)}}^{2}=\int_{[r,t]^{j-1}}\big{\|}f^{(j)}_{t,x,j}(r,z;\boldsymbol{t_{j-1}},\bullet)\big{\|}_{0}^{2}d\boldsymbol{t_{j-1}},$
where we recall that $h^{(j)}_{t,x,j}(r,z;\bullet)$ is the symmetrization of
$f^{(j)}_{t,x,j}(r,z;\bullet)$; see (3.5). Then, taking advantage of the
simple form of $G_{t}(x)$ for $d=1$, we get
$0\leq
f^{(j)}_{t,x,j}(r,z;\boldsymbol{t_{j-1}},\bullet)\leq\frac{1}{2}\mathbf{1}_{\\{|x-z|<t-r\\}}f_{t,x,j-1}(\boldsymbol{t_{j-1}},\bullet),$
from which we further get
$\displaystyle\Big{\|}I^{\mathfrak{X}}_{j-1}(f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}\Big{\|}_{2}^{2}$
$\displaystyle\leq
G^{2}_{t-r}(x-z)\int_{[r,t]^{j-1}}\big{\|}f_{t,x,j-1}(\boldsymbol{t_{j-1}},\bullet)\big{\|}_{0}^{2}d\boldsymbol{t_{j-1}}$
$\displaystyle\leq\frac{C^{j-1}}{(j-1)!}G^{2}_{t-r}(x-z),$ (3.17)
where the last inequality follows from (3.15) and (3.14).
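As a quick numerical sanity check (purely illustrative, not part of the proof), one can verify that the $d=1$ kernel $G_{t}(x)=\frac{1}{2}\mathbf{1}_{\\{|x|<t\\}}$ used above indeed has the Fourier transform $\widehat{G}_{t}(\xi)=\sin(t|\xi|)/|\xi|$ recalled in (2.29); the values of $t$, $\xi$ and the grid size below are arbitrary choices.

```python
import math

def G_hat_numeric(t, xi, N=200_000):
    """Midpoint-rule approximation of \\int_R (1/2) 1_{|x|<t} cos(xi*x) dx
    (the imaginary part of the transform vanishes by symmetry)."""
    h = 2 * t / N
    total = 0.0
    for k in range(N):
        x = -t + (k + 0.5) * h
        total += 0.5 * math.cos(xi * x) * h
    return total

t, xi = 2.0, 1.3
exact = math.sin(t * xi) / xi          # closed form sin(t|xi|)/|xi|
approx = G_hat_numeric(t, xi)
assert abs(approx - exact) < 1e-6
```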
#### 3.2.2 Estimation of
$\Big{\|}I^{\mathfrak{X}}_{j-1}(f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}\Big{\|}_{2}^{2}$
when $d=2$
Let $q$ be as in (2.20) and (2.23); we fix such a $q$ _throughout
this subsection_. For $j=1$,
$I^{\mathfrak{X}}_{j-1}(f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}=G_{t-r}(x-z)$ with
the convention (1.6). For $j\geq 2$, we begin with
$\displaystyle\Big{\|}I^{\mathfrak{X}}_{j-1}(f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}\Big{\|}_{2}^{2}$
$\displaystyle=\int_{[r,t]^{j-1}}\big{\|}f^{(j)}_{t,x,j}(r,z;\boldsymbol{t_{j-1}},\bullet)\big{\|}_{0}^{2}d\boldsymbol{t_{j-1}},$
$\displaystyle\leq
C^{j-1}\int_{t>t_{1}>\cdots>t_{j-1}>r}\big{\|}f^{(j)}_{t,x,j}(r,z;\boldsymbol{t_{j-1}},\bullet)\big{\|}_{L^{2q}(\mathbb{R}^{2j-2})}^{2}d\boldsymbol{t_{j-1}}=C^{j-1}\mathcal{T}_{j},$
where we applied Lemma 2.3 for the inequality above (note that the function
$\boldsymbol{x_{j-1}}\mapsto f_{t,x,j}^{(j)}(\boldsymbol{t_{j-1}},\boldsymbol{x_{j-1}})=G_{t-t_{1}}(x-x_{1})G_{t_{1}-t_{2}}(x_{1}-x_{2})\cdots G_{t_{j-1}-r}(x_{j-1}-z)$
has support contained in
$\\{\boldsymbol{x_{j-1}}\in\mathbb{R}^{2(j-1)}:|x_{i}-x|<t-t_{i}\ \mbox{for
all}\ i=1,\ldots,j-1\\}$), and we denote
$\displaystyle\mathcal{T}_{j}:=\int_{t>t_{1}>\cdots>t_{j-1}>r}d\boldsymbol{t_{j-1}}\left(\int_{\mathbb{R}^{2(j-1)}}G^{2q}_{t-t_{1}}(x-x_{1})\cdots
G^{2q}_{t_{j-1}-r}(x_{j-1}-z)d\boldsymbol{x_{j-1}}\right)^{1/q}.$ (3.18)
Note that we can choose $C$ to depend only on $(t,\gamma,q)$ and be increasing
in $t$.
Case $j=2$. In this case, we deduce from Lemma 2.4 and (2.27) that
$\mathcal{T}_{2}=\int_{r}^{t}dt_{1}(G^{2q}_{t-t_{1}}\ast
G^{2q}_{t_{1}-r})^{1/q}(x-z)\leq CG_{t-r}^{2-\frac{1}{q}}(x-z)\leq
CG^{2}_{t-r}(x-z).$ (3.19)
Case $j\geq 3$. In this case, we use the Minkowski inequality with respect to
the norm in $L^{1/q}([t_{2},t],dt_{1})$ to get
$\displaystyle\mathcal{T}_{j}$
$\displaystyle\leq\int_{t>t_{2}>\cdots>t_{j-1}>r}\Bigg{(}\int_{\mathbb{R}^{2(j-2)}}\left[\int_{t_{2}}^{t}\big{(}G_{t-t_{1}}^{2q}\ast
G_{t_{1}-t_{2}}^{2q}\big{)}^{1/q}(x-x_{2})dt_{1}\right]^{q}$
$\displaystyle\qquad\times G^{2q}_{t_{2}-t_{3}}(x_{2}-x_{3})\cdots
G^{2q}_{t_{j-1}-r}(x_{j-1}-z)dx_{2}\cdots dx_{j-1}\Bigg{)}^{1/q}dt_{2}\cdots
dt_{j-1}.$
Applying Lemma 2.4 yields
$\displaystyle\mathcal{T}_{j}$ $\displaystyle\leq
A_{q}\int_{t>t_{2}>\cdots>t_{j-1}>r}(t-t_{2})^{\frac{1}{q}-1}\Bigg{(}\int_{\mathbb{R}^{2(j-2)}}G^{2q-1}_{t-t_{2}}(x-x_{2})$
$\displaystyle\qquad\times G^{2q}_{t_{2}-t_{3}}(x_{2}-x_{3})\cdots
G^{2q}_{t_{j-1}-r}(x_{j-1}-z)dx_{2}\cdots dx_{j-1}\Bigg{)}^{1/q}dt_{2}\cdots
dt_{j-1}.$ (3.20)
If $j=3$, we have
$\displaystyle\mathcal{T}_{3}$ $\displaystyle\leq
A_{q}\int_{r}^{t}(t-t_{2})^{\frac{1}{q}-1}\Bigg{(}\int_{\mathbb{R}^{2}}G^{2q-1}_{t-t_{2}}(x-x_{2})G^{2q}_{t_{2}-r}(x_{2}-z)dx_{2}\Bigg{)}^{1/q}dt_{2}.$
Owing to (2.27), we can bound $G^{2q-1}_{t-t_{2}}(x-x_{2})$ by
$(2\pi)(t-t_{2})G^{2q}_{t-t_{2}}(x-x_{2})$, and then we apply again Lemma 2.4
and (2.27) to conclude that
$\mathcal{T}_{3}\leq
A_{q}^{2}(2\pi)^{\frac{1}{q}}(t-r)^{\frac{3}{q}-2}G_{t-r}^{2-\frac{1}{q}}(x-z)\leq
CG^{2}_{t-r}(x-z).$ (3.21)
For $j\geq 4$, we continue with the estimate (3.20). We first apply the
Minkowski inequality with respect to the norm
$L^{1/q}\big{(}[t_{4},t_{2}],dt_{3}\big{)}$ and then apply Lemma 2.4 to obtain
$\displaystyle\mathcal{T}_{j}$ $\displaystyle\leq
A_{q}^{2}\int_{t>t_{2}>t_{4}>\cdots>t_{j-1}>r}dt_{2}dt_{4}\cdots
dt_{j-1}(t-t_{2})^{\frac{1}{q}-1}(t_{2}-t_{4})^{\frac{1}{q}-1}\Bigg{(}\int_{\mathbb{R}^{2(j-3)}}G^{2q-1}_{t-t_{2}}(x-x_{2})$
$\displaystyle\qquad\times
G^{2q-1}_{t_{2}-t_{4}}(x_{2}-x_{4})G^{2q}_{t_{4}-t_{5}}(x_{4}-x_{5})\cdots
G^{2q}_{t_{j-1}-r}(x_{j-1}-z)dx_{2}dx_{4}\cdots dx_{j-1}\Bigg{)}^{1/q}.$
(3.22)
Note that
$G^{2q-1}_{t-t_{2}}(x-x_{2})G^{2q-1}_{t_{2}-t_{4}}(x_{2}-x_{4})\leq\mathbf{1}_{\\{|x-x_{4}|\leq
t-t_{4}\\}}G^{2q-1}_{t-t_{2}}(x-x_{2})G^{2q-1}_{t_{2}-t_{4}}(x_{2}-x_{4}).$
Then, by the Cauchy-Schwarz inequality and (2.26), we infer that
$\displaystyle\int_{\mathbb{R}^{2}}G^{2q-1}_{t-t_{2}}(x-x_{2})G^{2q-1}_{t_{2}-t_{4}}(x_{2}-x_{4})dx_{2}$
$\displaystyle\leq\mathbf{1}_{\\{|x-x_{4}|\leq
t-t_{4}\\}}\|G^{2q-1}_{t-t_{2}}\|_{L^{2}(\mathbb{R}^{2})}\|G^{2q-1}_{t_{2}-t_{4}}\|_{L^{2}(\mathbb{R}^{2})}$
$\displaystyle=c_{1}(t-t_{2})^{2-2q}(t_{2}-t_{4})^{2-2q}\mathbf{1}_{\\{|x-x_{4}|\leq
t-t_{4}\\}},$
where $c_{1}=\frac{(2\pi)^{3-4q}}{4-4q}$. Thus, substituting this estimate
into (3.22), we end up with
$\displaystyle\mathcal{T}_{j}$ $\displaystyle\leq
A_{q}^{2}c_{1}^{1/q}\int_{t>t_{2}>t_{4}>\cdots>t_{j-1}>r}dt_{2}dt_{4}\cdots
dt_{j-1}(t-t_{2})^{\frac{3}{q}-3}(t_{2}-t_{4})^{\frac{3}{q}-3}$
$\displaystyle\qquad\times\left(\int_{\mathbb{R}^{2(j-4)}}\mathbf{1}_{\\{|x-x_{4}|\leq
t-t_{4}\\}}G^{2q}_{t_{4}-t_{5}}(x_{4}-x_{5})\cdots
G^{2q}_{t_{j-1}-r}(x_{j-1}-z)dx_{4}\cdots dx_{j-1}\right)^{1/q}.$
Since the indicator $\mathbf{1}_{\\{|x-x_{4}|\leq t-t_{4}\\}}$, combined with
the supports of the remaining kernels, forces $|x-z|\leq t-r$ by the triangle
inequality, the right-hand side of this estimate can be bounded by
$\displaystyle A_{q}^{2}c_{1}^{1/q}\mathbf{1}_{\\{|x-z|\leq
t-r\\}}\int_{t>t_{2}>t_{4}>\cdots>t_{j-1}>r}dt_{2}dt_{4}\cdots
dt_{j-1}(t-t_{2})^{\frac{3}{q}-3}(t_{2}-t_{4})^{\frac{3}{q}-3}$
$\displaystyle\quad\times\left(\int_{\mathbb{R}^{2(j-4)}}G^{2q}_{t_{4}-t_{5}}(x_{4}-x_{5})\cdots
G^{2q}_{t_{j-1}-r}(x_{j-1}-z)dx_{4}\cdots dx_{j-1}\right)^{1/q}.$
For $j=4$, using (2.28), we have
$\mathcal{T}_{4}\leq
A_{q}^{2}c_{1}^{1/q}(t-r)^{\frac{6}{q}-6}\mathbf{1}_{\\{|x-z|\leq t-r\\}}\leq
CG_{t-r}^{2}(x-z).$ (3.23)
Now for $j\geq 5$, we integrate in each of the variables
$x_{4},\dots,x_{j-1}$ (in this order) so that, thanks to (2.26), we end up
with
$\displaystyle\mathcal{T}_{j}$ $\displaystyle\leq
A_{q}^{2}c_{1}^{1/q}c_{2}^{j-4}\mathbf{1}_{\\{|x-z|\leq
t-r\\}}\int_{t>t_{2}>t_{4}>\cdots>t_{j-1}>r}dt_{2}dt_{4}\cdots dt_{j-1}$
$\displaystyle\qquad\times(t-t_{2})^{\frac{3}{q}-3}(t_{2}-t_{4})^{\frac{3}{q}-3}(t_{4}-t_{5})^{\frac{2}{q}-2}\cdots(t_{j-1}-r)^{\frac{2}{q}-2}\quad\text{with
$c_{2}=\left(\dfrac{(2\pi)^{1-2q}}{2-2q}\right)^{2}$}$ $\displaystyle\leq
A_{q}^{2}c_{1}^{1/q}c_{2}^{j-4}\frac{(t-r)^{j-3}}{(j-3)!}(t-r+1)^{j(\frac{2}{q}-2)}\mathbf{1}_{\\{|x-z|\leq
t-r\\}},$
where we used the rough estimate $a^{\nu}\leq(b+1)^{\nu}$ for $0<a\leq b$ and
$\nu>0$. Thus, using (2.28) we obtain:
$\mathcal{T}_{j}\leq\frac{C^{j-3}}{(j-3)!}G^{2}_{t-r}(x-z)\quad\text{for
any}~{}j\geq 5.$ (3.24)
Hence, combining the estimates (3.19), (3.21), (3.23) and (3.24) and taking
into account that
$I^{\mathfrak{X}}_{0}\big{(}f^{(1)}_{t,x,1}(r,z;\bullet)\big{)}=G_{t-r}(x-z)$, we
can write
$\displaystyle\Big{\|}I^{\mathfrak{X}}_{j-1}(f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}\Big{\|}_{2}^{2}\leq\begin{cases}CG_{t-r}^{2}(x-z)&\text{for
$j=1,2,3,4$}\\\ \dfrac{C^{j}}{(j-3)!}G_{t-r}^{2}(x-z)&\text{for $j\geq
5$}\end{cases},$
where the constant $C>1$ depends on $(t,\gamma,q)$ and is increasing in $t$.
For $1\leq j\leq n$, since $j!/(j-3)!=j(j-1)(j-2)\leq n^{3}$ for $j\geq 5$, we
obtain the unified bound
$\displaystyle\Big{\|}I^{\mathfrak{X}}_{j-1}\big{(}f^{(j)}_{t,x,j}(r,z;\bullet)\big{)}\Big{\|}_{2}^{2}\leq\frac{C^{j}}{j!}n^{3}G_{t-r}^{2}(x-z).$ (3.25)
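The constant $c_{1}=\frac{(2\pi)^{3-4q}}{4-4q}$ used in the Cauchy-Schwarz step above comes from the identity $\|G_{s}^{2q-1}\|_{L^{2}(\mathbb{R}^{2})}^{2}=c_{1}s^{4-4q}$, which can be checked numerically from the explicit $d=2$ kernel $G_{s}(x)=\frac{1}{2\pi}(s^{2}-|x|^{2})^{-1/2}\mathbf{1}_{\\{|x|<s\\}}$ by integrating in polar coordinates (an illustrative sketch, not part of the argument; the test values $s$ and $q\in(1/2,1)$ are arbitrary choices).

```python
import math

def G_power_sq_norm(s, q, N=400_000):
    """Midpoint rule in polar coordinates for \\int_{R^2} G_s(x)^{2(2q-1)} dx,
    where G_s(x) = (2*pi)^{-1} (s^2 - |x|^2)^{-1/2} on |x| < s (d = 2 wave kernel)."""
    alpha = 2 * q - 1          # G_s^{2(2q-1)} = (2*pi)^{-(4q-2)} (s^2 - r^2)^{-alpha}
    h = s / N
    radial = 0.0
    for k in range(N):
        r = (k + 0.5) * h      # midpoints stay strictly inside (0, s)
        radial += r * (s * s - r * r) ** (-alpha) * h
    return (2 * math.pi) ** (2 - 4 * q) * 2 * math.pi * radial

s, q = 1.5, 0.7
c1 = (2 * math.pi) ** (3 - 4 * q) / (4 - 4 * q)
assert abs(G_power_sq_norm(s, q) - c1 * s ** (4 - 4 * q)) < 1e-2
```

The integrable singularity at $r=s$ is handled by the midpoint rule, which never evaluates the integrand at the boundary.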
### 3.3 Step 3: Proof of (1.11)
Let us first consider the lower bound in (1.11) for $d\in\\{1,2\\}$. For
$p\in[2,\infty)$, we deduce from the modified isometry (2.8) that
$\big{\|}D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)\big{\|}_{p}\geq\big{\|}D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)\big{\|}_{2}\geq
m!\widetilde{f}_{t,x,m}(\boldsymbol{s_{m}},\boldsymbol{y_{m}}).$
Now let us establish the upper bound in (1.11). By symmetry, we can assume
$t>s_{1}>\cdots>s_{m}>0$. First we consider the case where $d=2$. Recall the
definition of $\mathcal{Q}_{m,n}$ from (3.11), and then plugging the estimates
(3.15) and (3.25) into (3.12) yields, with $(i_{0},s_{0},y_{0})=(0,t,x)$,
$\displaystyle\mathcal{Q}_{m,n}$
$\displaystyle\leq\binom{n}{m}\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}\frac{C^{n-i_{m}}}{(n-i_{m})!}\times\prod_{j=1}^{m}\frac{n^{3}C^{i_{j}-i_{j-1}}}{(i_{j}-i_{j-1})!}G^{2}_{s_{j-1}-s_{j}}(y_{j-1}-y_{j})$
$\displaystyle\leq(2C)^{n}n^{3m}\left(\sum_{\boldsymbol{i_{m}}\in\Delta_{n,m}}\frac{1}{i_{1}!(i_{2}-i_{1})!\cdots(i_{m}-i_{m-1})!(n-i_{m})!}\right)f^{2}_{t,x,m}(\boldsymbol{s_{m}},\boldsymbol{y_{m}}),$
where we used the rough bound $\binom{n}{m}\leq 2^{n}$. The sum in the above
display is equal to
$\frac{1}{n!}\sum_{\begin{subarray}{c}a_{1}+...+a_{m+1}=n\\\
a_{i}\in\mathbb{N},\forall
i\end{subarray}}\binom{n}{a_{1},...,a_{m+1}}=\frac{(m+1)^{n}}{n!},$
by the multinomial theorem. Hence,
$\mathcal{Q}_{m,n}\leq\frac{\big{[}C(m+1)\big{]}^{n}n^{3m}}{n!}f^{2}_{t,x,m}(\boldsymbol{s_{m}},\boldsymbol{y_{m}}),$
which, together with the estimate (3.10), implies the upper bound in (1.11),
when $d=2$.
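The multinomial identity invoked above, $\sum_{a_{1}+\cdots+a_{m+1}=n}\frac{1}{a_{1}!\cdots a_{m+1}!}=\frac{(m+1)^{n}}{n!}$, can be verified by brute-force enumeration of the compositions for small $n$ and $m$ (a purely illustrative sketch):

```python
import math
from itertools import product

def composition_sum(n, m):
    """Sum of 1/(a_1! * ... * a_{m+1}!) over nonnegative integers
    with a_1 + ... + a_{m+1} = n."""
    total = 0.0
    for a in product(range(n + 1), repeat=m + 1):
        if sum(a) == n:
            denom = 1
            for ai in a:
                denom *= math.factorial(ai)
            total += 1.0 / denom
    return total

for n in range(7):
    for m in range(4):
        assert abs(composition_sum(n, m) - (m + 1) ** n / math.factorial(n)) < 1e-10
```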
The case $d=1$ can be done in the same way by noticing that the bound in
(3.17) can be replaced by $n\frac{C^{j}}{j!}G_{t-r}^{2}(x-z)$ for $1\leq j\leq
n$. Then, as in the case $d=2$, we get, for
$t>s_{1}>\cdots>s_{m}>0$,
$\mathcal{Q}_{m,n}\leq\frac{\big{[}C(m+1)\big{]}^{n}n^{m}}{n!}f^{2}_{t,x,m}(\boldsymbol{s_{m}},\boldsymbol{y_{m}}),$
which together with the estimate (3.10) implies the upper bound in (1.11),
when $d=1$. This completes the proof of the estimate (1.11).
Notice that the upper bound also shows the convergence in $L^{p}$ for any
$p\in[2,\infty)$ of the series (3.6), for any _fixed_
$\boldsymbol{s_{m}}\in[0,t]^{m}$ and $\boldsymbol{y_{m}}\in\mathbb{R}^{dm}$.
### 3.4 Step 4: Existence of a measurable version
We claim that there is a random field $Y$ such that
$Y(\boldsymbol{s_{m}},\boldsymbol{y_{m}})=D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)$
almost surely for almost all
$(\boldsymbol{s_{m}},\boldsymbol{y_{m}})\in[0,t]^{m}\times\mathbb{R}^{md}$ and
the mapping
$(\omega,\boldsymbol{s_{m}},\boldsymbol{y_{m}})\in\Omega\times[0,t]^{m}\times\mathbb{R}^{md}\longmapsto
Y(\omega,\boldsymbol{s_{m}},\boldsymbol{y_{m}})\in\mathbb{R}$
is jointly measurable. This fact is rather standard and we will sketch the
proof only in the case $d=2$. From the explicit form of the kernels
$f_{t,x,n}$ given in (1.8), it follows that the mapping
$(\boldsymbol{s_{m}},\boldsymbol{y_{m}})\rightarrow\widetilde{f}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet)$
(3.26)
is measurable from $[0,t]^{m}\times\mathbb{R}^{2m}$ to
$L^{2}([0,t]^{n-m};L^{2q}(\mathbb{R}^{2(n-m)}))$. Because
$L^{2}([0,t]^{n-m};L^{2q}(\mathbb{R}^{2(n-m)}))$ is continuously embedded into
$\mathcal{H}^{\otimes(n-m)}$ (see (2.13) and (2.25)),
we deduce that the map (3.26) is measurable from
$[0,t]^{m}\times\mathbb{R}^{2m}$ into $\mathcal{H}^{\otimes(n-m)}$. This
implies that the mapping
$(\boldsymbol{s_{m}},\boldsymbol{y_{m}})\rightarrow
I_{n-m}(\widetilde{f}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet))$
(3.27)
is measurable from $[0,t]^{m}\times\mathbb{R}^{2m}$ to $L^{2}(\Omega)$. The
upper bound in (1.11) implies that the mapping (3.27) belongs to the space
$L^{2q}([0,t]^{m}\times\mathbb{R}^{2m};L^{2}(\Omega))\subset
L^{2q}([0,t]^{m}\times\mathbb{R}^{2m}\times\Omega).$
From this, it follows that we can find a measurable modification of the
process
$\\{I_{n-m}(\widetilde{f}_{t,x,n}(\boldsymbol{s_{m}},\boldsymbol{y_{m}};\bullet))(\omega):(\omega,\boldsymbol{s_{m}},\boldsymbol{y_{m}})\in\Omega\times[0,t]^{m}\times\mathbb{R}^{2m}\\}.$
Finally, by standard arguments we deduce the existence of a measurable
modification of the series (3.6).
### 3.5 Step 5: Proof of $u(t,x)\in\mathbb{D}^{\infty}$
We have already seen in Remark 3.1 that $u(t,x)\in L^{p}(\Omega)$ for any
$p\in[2,\infty)$. Then, it remains to show that the function
$D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)$ defined as the limit of
the series (3.6) coincides with the $m$th Malliavin derivative of $u(t,x)$. To
do this, it suffices to show that
$\mathbb{E}\big{[}\|D^{m}u(t,x)\|_{\mathcal{H}^{\otimes m}}^{p}\big{]}<\infty$
for any $m\geq 1$. By Fubini’s theorem and the upper bound (1.11), we
write
$\displaystyle\Big{(}\mathbb{E}\big{[}\|D^{m}u(t,x)\|_{\mathcal{H}^{\otimes
m}}^{p}\big{]}\Big{)}^{2/p}$
$\displaystyle=\left\|\int_{[0,t]^{2m}\times\mathbb{R}^{2md}}d\boldsymbol{s_{m}}d\boldsymbol{s^{\prime}_{m}}d\boldsymbol{y_{m}}d\boldsymbol{y^{\prime}_{m}}\big{(}D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)\big{)}\big{(}D^{m}_{\boldsymbol{s^{\prime}_{m}},\boldsymbol{y^{\prime}_{m}}}u(t,x)\big{)}\prod_{j=1}^{m}\gamma_{0}(s_{j}-s_{j}^{\prime})\gamma(y_{j}-y_{j}^{\prime})\right\|_{p/2}$
$\displaystyle\leq\int_{[0,t]^{2m}\times\mathbb{R}^{2md}}d\boldsymbol{s_{m}}d\boldsymbol{s^{\prime}_{m}}d\boldsymbol{y_{m}}d\boldsymbol{y^{\prime}_{m}}\big{\|}D^{m}_{\boldsymbol{s_{m}},\boldsymbol{y_{m}}}u(t,x)\big{\|}_{p}\big{\|}D^{m}_{\boldsymbol{s^{\prime}_{m}},\boldsymbol{y^{\prime}_{m}}}u(t,x)\big{\|}_{p}\prod_{j=1}^{m}\gamma_{0}(s_{j}-s_{j}^{\prime})\gamma(y_{j}-y_{j}^{\prime})$
$\displaystyle\lesssim\big{\|}\widetilde{f}_{t,x,m}\big{\|}^{2}_{\mathcal{H}^{\otimes
m}}<\infty.$
This shows $u(t,x)\in\mathbb{D}^{\infty}$ and completes the proof of Theorem
1.3.
###### Remark 3.2.
When $d=2,p=2,m=1$ and for the cases (a), (b) in Hypothesis ${\bf(H1)}$, the
upper bound in (1.11) can be proved in a much simpler way for almost all
$(r,z)\in[0,t]\times\mathbb{R}^{2}$. Let $v_{\lambda}$ be the solution to the
stochastic wave equation
$\begin{cases}{\displaystyle\frac{\partial^{2}v_{\lambda}}{\partial
t^{2}}=\Delta v_{\lambda}+\lambda v_{\lambda}\dot{\mathfrak{X}}}\\\
v_{\lambda}(0,\bullet)=1,\quad\dfrac{\partial v_{\lambda}}{\partial
t}(0,\bullet)=0,\end{cases}$
where $\lambda>0$ and $\dot{\mathfrak{X}}$ is given as before. This solution
has the chaos expansion $v_{\lambda}(t,x)=\sum_{n\geq
0}\lambda^{n}I_{n}^{\mathfrak{X}}(f_{t,x,n})$ and its Malliavin derivative has
the chaos expansion
$D_{r,z}v_{\lambda}(t,x)=\sum_{n\geq
1}\lambda^{n}I_{n-1}^{\mathfrak{X}}\left(\sum_{j=1}^{n}h_{t,x,n}^{(j)}(r,z;\bullet)\right);$
see (3.1) and (3.2). From this, we infer that for any
$(\lambda,t,x)\in(0,\infty)^{2}\times\mathbb{R}^{2}$ and for _almost every_
$(r,z)\in[0,t]\times\mathbb{R}^{2}$,
$\big{\|}D_{r,z}v_{\lambda}(t,x)\big{\|}_{2}^{2}=\sum_{n\geq
1}(n-1)!\,\lambda^{2n}\Big{\|}\sum_{j=1}^{n}h_{t,x,n}^{(j)}(r,z;\bullet)\Big{\|}_{\mathcal{H}_{0}^{\otimes(n-1)}}^{2}\leq
C_{\lambda,t,\gamma}G_{t-r}^{2}(x-z),$ (3.28)
where $C_{\lambda,t,\gamma}>0$ is a constant depending on $(\lambda,t,\gamma)$
and is increasing in $t$. The inequality above follows from Theorem 1.3 of [35]
in case (a) and from Theorem 1.2 of [4] in case (b). Therefore,
$\displaystyle\big{\|}D_{r,z}u(t,x)\big{\|}_{2}^{2}$
$\displaystyle=\sum_{n\geq
1}(n-1)!\,\big{\|}\sum_{j=1}^{n}h_{t,x,n}^{(j)}(r,z;\bullet)\big{\|}_{\mathcal{H}^{\otimes(n-1)}}^{2}$
$\displaystyle\leq\sum_{n\geq
1}(n-1)!\,\Gamma_{t}^{n-1}\big{\|}\sum_{j=1}^{n}h_{t,x,n}^{(j)}(r,z;\bullet)\big{\|}_{\mathcal{H}_{0}^{\otimes(n-1)}}^{2}~{}\text{by
\eqref{white-ineq}}.$
Thus, using (3.28) with $\lambda=\sqrt{\Gamma_{t}}$, we get
$\big{\|}D_{r,z}u(t,x)\big{\|}_{2}^{2}\leq
C_{\Gamma_{t},t,\gamma}G_{t-r}^{2}(x-z)$.
### 3.6 Consequences of Theorem 1.3
We will establish two estimates that will be useful in Section 5.
###### Corollary 3.3.
Let $d=1,2$. Then, for any finite $T>0$,
$\sup_{(t,x)\in[0,T]\times\mathbb{R}^{d}}\,\sup_{r\in[0,t]}\mathbb{E}\Big{[}\big{\|}|D_{r,\bullet}u(t,x)|\big{\|}_{0}^{2}\Big{]}<\infty.$
(3.29)
In particular, $D_{r,\bullet}u(t,x)(\omega)\in|\mathcal{P}_{0}|$ for almost
every $(\omega,r)\in\Omega\times[0,t]$, where $|\mathcal{P}_{0}|$ is defined
in (2.2).
###### Proof.
We work with a jointly measurable version of
$\\{D_{r,z}u(t,x):(r,z)\in[0,t]\times\mathbb{R}^{d}\\}$. By Fubini’s theorem and
the Cauchy-Schwarz inequality, we have
$\displaystyle\mathbb{E}\Big{[}\big{\|}|D_{r,\bullet}u(t,x)|\big{\|}_{0}^{2}\Big{]}$
$\displaystyle\leq\mathbb{E}\int_{\mathbb{R}^{2d}}|D_{r,z}u(t,x)||D_{r,z^{\prime}}u(t,x)|\gamma(z-z^{\prime})dzdz^{\prime}$
$\displaystyle\leq\int_{\mathbb{R}^{2d}}\|D_{r,z}u(t,x)\|_{2}\|D_{r,z^{\prime}}u(t,x)\|_{2}\gamma(z-z^{\prime})dzdz^{\prime}$
$\displaystyle\leq
C\int_{\mathbb{R}^{2d}}G_{t-r}(x-z)G_{t-r}(x-z^{\prime})\gamma(z-z^{\prime})dzdz^{\prime}\quad\text{by
Theorem \ref{MR1}}$
$\displaystyle=C\int_{\mathbb{R}^{d}}\mu(d\xi)\big{|}\widehat{G}_{t-r}(\xi)\big{|}^{2}\quad\text{using
Fourier transform}$ $\displaystyle\leq 2C(t^{2}\vee
1)\int_{\mathbb{R}^{d}}\frac{\mu(d\xi)}{1+|\xi|^{2}}~{}\text{by
\eqref{ineq1}},$
where $C$ is a constant depending on $\gamma_{0},\gamma,t$ and is increasing
in $t$. The above (uniform) bound implies (3.29). Hence,
$D_{r,\bullet}u(t,x)(\omega)\in|\mathcal{P}_{0}|$ for almost all
$(\omega,r)\in\Omega\times[0,t]$. ∎
The space $|\mathcal{H}\otimes\mathcal{P}_{0}|$ appearing in the next
corollary is defined as the set of measurable functions
$h:\mathbb{R}_{+}\times\mathbb{R}^{2d}\to\mathbb{R}$ such that
$\int_{\mathbb{R}_{+}^{2}\times\mathbb{R}^{4d}}|h(r,w,z)||h(r^{\prime},w^{\prime},z^{\prime})|\gamma_{0}(r-r^{\prime})\gamma(w-w^{\prime})\gamma(z-z^{\prime})dwdw^{\prime}dzdz^{\prime}drdr^{\prime}<\infty.$
Then,
$|\mathcal{H}\otimes\mathcal{P}_{0}|\subset\mathcal{H}\otimes\mathcal{P}_{0}$.
###### Corollary 3.4.
Let $d=1,2$. For almost all $(\omega,r)\in\Omega\times[0,t]$,
$DD_{r,\bullet}u(t,x)(\omega)\in|\mathcal{H}\otimes\mathcal{P}_{0}|$ and for
any finite $T>0$,
$\displaystyle\sup_{(t,x)\in[0,T]\times\mathbb{R}^{d}}\sup_{r\in[0,t]}\mathbb{E}\left(\Big{\|}\big{|}DD_{r,\bullet}u(t,x)\big{|}\Big{\|}_{\mathcal{H}\otimes\mathcal{P}_{0}}^{2}\right)<+\infty.$
(3.30)
###### Proof.
Using Theorem 1.3, Cauchy-Schwarz inequality and the estimate (1.11), we can
write
$\displaystyle\mathbb{E}\left(\Big{\|}\big{|}DD_{r,\bullet}u(t,x)\big{|}\Big{\|}_{\mathcal{H}\otimes\mathcal{P}_{0}}^{2}\right)=\mathbb{E}\Bigg{(}\int_{[0,t]^{2}}\int_{\mathbb{R}^{4d}}|D_{(\theta,w),(r,z)}^{2}u(t,x)||D_{(\theta^{\prime},w^{\prime}),(r,z^{\prime})}^{2}u(t,x)|$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\gamma_{0}(\theta-\theta^{\prime})\gamma(w-w^{\prime})\gamma(z-z^{\prime})dwdw^{\prime}dzdz^{\prime}d\theta
d\theta^{\prime}\Bigg{)}$
$\displaystyle\leq\int_{[0,t]^{2}}\int_{\mathbb{R}^{4d}}\big{\|}D_{(\theta,w),(r,z)}^{2}u(t,x)\big{\|}_{2}\big{\|}D_{(\theta^{\prime},w^{\prime}),(r,z^{\prime})}^{2}u(t,x)\big{\|}_{2}$
$\displaystyle\qquad\qquad\qquad\quad\quad\times\gamma_{0}(\theta-\theta^{\prime})\gamma(w-w^{\prime})\gamma(z-z^{\prime})dwdw^{\prime}dzdz^{\prime}d\theta
d\theta^{\prime}$ $\displaystyle\leq
C\int_{[0,t]^{2}}\int_{\mathbb{R}^{4d}}\widetilde{f}_{t,x,2}(r,z,\theta,w)\widetilde{f}_{t,x,2}(r,z^{\prime},\theta^{\prime},w^{\prime})\gamma_{0}(\theta-\theta^{\prime})\gamma(w-w^{\prime})\gamma(z-z^{\prime})dwdw^{\prime}dzdz^{\prime}d\theta
d\theta^{\prime}.$
As a consequence,
$\displaystyle\mathbb{E}\left(\Big{\|}\big{|}DD_{r,\bullet}u(t,x)\big{|}\Big{\|}_{\mathcal{H}\otimes\mathcal{P}_{0}}^{2}\right)$
$\displaystyle\leq
C\int_{\mathbb{R}^{2d}}\|\widetilde{f}_{t,x,2}(r,z;\bullet)\|_{\mathcal{H}}\|\widetilde{f}_{t,x,2}(r,z^{\prime};\bullet)\|_{\mathcal{H}}\gamma(z-z^{\prime})dzdz^{\prime}.$
By the arguments used in the proof of Theorem 1.3, it follows that
$\|\widetilde{f}_{t,x,2}(r,z;\bullet)\|_{\mathcal{H}}\leq CG_{t-r}(x-z).$
Therefore,
$\mathbb{E}\left(\Big{\|}\big{|}DD_{r,\bullet}u(t,x)\big{|}\Big{\|}_{\mathcal{H}\otimes\mathcal{P}_{0}}^{2}\right)\leq
C\int_{\mathbb{R}^{2d}}\gamma(z-z^{\prime})G_{t-r}(x-z)G_{t-r}(x-z^{\prime})dzdz^{\prime}$
and the same argument as in the proof of Corollary 3.3 ends our proof. ∎
###### Remark 3.5.
Note that for any finite $T>0$,
$\mathbb{E}\big{(}\big{\|}|D^{2}u(t,x)|\big{\|}_{\mathcal{H}^{\otimes
2}}^{2}\big{)}<\infty$ for any $(t,x)\in[0,T]\times\mathbb{R}^{d}$.
## 4 Gaussian fluctuation: Proof of Theorem 1.4
Recall that
$F_{R}(t)=\int_{B_{R}}\big{[}u(t,x)-1\big{]}dx$
and $\sigma_{R}(t)=\sqrt{\text{Var}\big{(}F_{R}(t)\big{)}}$. First, we need to
obtain the limiting covariance structure, which is the content of Proposition
4.1. It will give us the growth order of $\sigma_{R}(t)$. Then, in Section
4.2, we apply the second-order Gaussian Poincaré inequality to establish the
quantitative CLT for $F_{R}(t)/\sigma_{R}(t)$. Finally, we will prove the
functional CLT by showing the convergence of the finite-dimensional
distributions and the tightness.
### 4.1 Limiting covariance
###### Proposition 4.1.
Let $u$ denote the solution to the hyperbolic Anderson model (1.1) and assume
that the non-degeneracy condition (1.17) holds. Then, the following results
hold true:
(1) Suppose $d\in\\{1,2\\}$ and $\gamma(\mathbb{R}^{d})\in(0,\infty)$. Then,
for any $t,s\in(0,\infty)$,
$\displaystyle\lim_{R\to\infty}R^{-d}\mathbb{E}\big{[}F_{R}(t)F_{R}(s)\big{]}=\omega_{d}\sum_{p\geq
1}p!\int_{\mathbb{R}^{d}}\big{\langle}\widetilde{f}_{t,x,p},\widetilde{f}_{s,0,p}\big{\rangle}_{\mathcal{H}^{\otimes
p}}dx,$ (4.1)
see also (1.18). In particular, $\sigma_{R}(t)\sim R^{d/2}$.
(2) Suppose $d\in\\{1,2\\}$ and $\gamma(x)=|x|^{-\beta}$ for some
$\beta\in(0,2\wedge d)$. Then, for any $t,s\in(0,\infty)$,
$\displaystyle\lim_{R\to\infty}R^{\beta-2d}\mathbb{E}\big{[}F_{R}(t)F_{R}(s)\big{]}=\kappa_{\beta,d}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime}),$
(4.2)
where $\kappa_{\beta,d}=\int_{B_{1}^{2}}dxdy|x-y|^{-\beta}$ is introduced in
(1.16). In particular, $\sigma_{R}(t)\sim R^{d-\frac{\beta}{2}}$.
(3) Suppose $d=2$ and $\gamma(x_{1},x_{2})=\gamma_{1}(x_{1})\gamma_{2}(x_{2})$
satisfies one of the following conditions:
$\displaystyle\begin{cases}(c_{1})&\gamma_{i}(x_{i})=|x_{i}|^{-\beta_{i}}~{}\text{for
some $\beta_{i}\in(0,1)$, $i=1,2$;}\\\ (c_{2})&\gamma_{1}\in
L^{1}(\mathbb{R})~{}{\rm and}~{}\gamma_{2}(x)=|x|^{-\beta}~{}\text{for some
$\beta\in(0,1)$.}\end{cases}$ (4.3)
For any $s,t\in(0,\infty)$, the following results hold true:
1. $(r_{1})$
In $(c_{1})$, we have
$\displaystyle\lim_{R\to\infty}R^{\beta_{1}-\beta_{2}-4}\mathbb{E}\big{[}F_{R}(t)F_{R}(s)\big{]}$
$\displaystyle=K_{\beta_{1},\beta_{2}}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime}),$
(4.4)
where $K_{\beta_{1},\beta_{2}}$ is defined in (1.22).
2. $(r_{2})$
In $(c_{2})$, we have
$\displaystyle\lim_{R\to\infty}R^{\beta-3}\mathbb{E}\big{[}F_{R}(t)F_{R}(s)\big{]}=\gamma_{1}(\mathbb{R})\mathcal{L}_{\beta}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime}),$
(4.5)
where $\mathcal{L}_{\beta}$ is defined in (1.24).
#### 4.1.1 Proof of part (1) in Proposition 4.1
##### Preparation.
In the following, we will denote by $\varphi$ the density of $\mu$. For
$0<s\leq t<\infty$ and $x,y\in\mathbb{R}^{d}$, we have
$\displaystyle\mathbb{E}\big{[}u(t,x)u(s,y)\big{]}-1$
$\displaystyle=\sum_{p\geq
1}p!\big{\langle}\widetilde{f}_{t,x,p},\widetilde{f}_{s,y,p}\big{\rangle}_{\mathcal{H}^{\otimes
p}}$ $\displaystyle=:\sum_{p\geq 1}\frac{1}{p!}\Phi_{p}(t,s;x-y),$
where $\widetilde{f}_{t,x,p}\in\mathcal{H}^{\otimes p}$ is defined as in
(1.8)-(1.9) and $\Phi_{p}(t,s;x-y)$, defined in the obvious manner, depends
only on the difference $x-y$. To see this dependence and to prepare for later
computations, we rewrite $\Phi_{p}(t,s;x-y)$ using the Fourier transform in
space:
$\displaystyle\Phi_{p}(t,s;x-y)=(p!)^{2}\big{\langle}f_{t,x,p},\widetilde{f}_{s,y,p}\big{\rangle}_{\mathcal{H}^{\otimes
p}}$
$\displaystyle=p!\sum_{\sigma\in\mathfrak{S}_{p}}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{[0,s]^{p}}d\boldsymbol{\tilde{s}_{p}}\left(\prod_{j=1}^{p}\gamma_{0}(s_{j}-\tilde{s}_{j})\right)\int_{\mathbb{R}^{2pd}}d\boldsymbol{y_{p}}d\boldsymbol{\tilde{y}_{p}}\left(\prod_{j=1}^{p}\gamma(y_{j}-\tilde{y}_{j})\right)$
$\displaystyle\qquad\times\left(\prod_{j=0}^{p-1}G_{s_{j}-s_{j+1}}(y_{j}-y_{j+1})\right)\left(\prod_{j=0}^{p-1}G_{\tilde{s}_{\sigma(j)}-\tilde{s}_{\sigma(j+1)}}(\widetilde{y}_{\sigma(j)}-\widetilde{y}_{\sigma(j+1)})\right)$
(4.6)
$\displaystyle=p!\sum_{\sigma\in\mathfrak{S}_{p}}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{[0,s]^{p}}d\boldsymbol{\tilde{s}_{p}}\left(\prod_{j=1}^{p}\gamma_{0}(s_{j}-\tilde{s}_{j})\right)\int_{\mathbb{R}^{pd}}d\boldsymbol{\xi_{p}}\left(\prod_{j=1}^{p}\varphi(\xi_{j})\right)e^{-i(x-y)\cdot(\xi_{1}+\cdots+\xi_{p})}$
$\displaystyle\qquad\times\left(\prod_{j=0}^{p-1}\widehat{G}_{s_{j}-s_{j+1}}(\xi_{p}+\cdots+\xi_{j+1})\right)\left(\prod_{j=0}^{p-1}\widehat{G}_{\tilde{s}_{\sigma(j)}-\tilde{s}_{\sigma(j+1)}}(\xi_{\sigma(p)}+\cdots+\xi_{\sigma(j+1)})\right),$
(4.7)
where $\Delta_{p}(t)=\\{\boldsymbol{s_{p}}:t>s_{1}>\cdots>s_{p}>0\\}$,
$(s_{0},y_{0},\tilde{s}_{\sigma(0)},\tilde{y}_{\sigma(0)})=(t,x,s,y)$,
$\widehat{G}_{t}(\xi)=\frac{\sin(t|\xi|)}{|\xi|}$ is introduced in (2.29) and
we have used again the convention $G_{t}(z)=0$ for $t\leq 0$.
Relation (4.6) shows that $\Phi_{p}(t,s;x-y)$ is always nonnegative and
equality (4.7) indicates that $\Phi_{p}(t,s;x-y)$ indeed depends only on the
difference $x-y$, so that we can write
$\displaystyle\Phi_{p}(t,s;z)=(p!)^{2}\big{\langle}\widetilde{f}_{t,z,p},\widetilde{f}_{s,0,p}\big{\rangle}_{\mathcal{H}^{\otimes
p}}.$ (4.8)
Note that $\Phi_{p}(t,t;0)$ coincides with $\alpha_{p}(t)$ given in [3,
Equation (4.11)]. Moreover, applying Lemma 2.5 with
$\mu_{p}(d\boldsymbol{\xi_{p}})=\varphi(\xi_{1})\cdots\varphi(\xi_{p})d\xi_{1}\cdots
d\xi_{p}$ and
$g(s_{1},\xi_{1},\dots,s_{p},\xi_{p})=\prod_{j=0}^{p-1}|\widehat{G}_{s_{j}-s_{j+1}}(\xi_{p}+\cdots+\xi_{j+1})|,$
we get (with $s\leq t$)
$\displaystyle\Phi_{p}(t,s;z)\leq\Gamma_{t}^{p}p!\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd}}\mu(d\boldsymbol{\xi_{p}})\prod_{j=0}^{p-1}\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\xi_{p}+\cdots+\xi_{j+1})\Big{|}^{2},$
(4.9)
where we recall that $\Gamma_{t}=\int_{-t}^{t}\gamma_{0}(a)da$ and point out
that the right-hand side of (4.9) is finite by applying Lemma 2.6 with
$z_{j}=\xi_{j+1}+\cdots+\xi_{p}$ and $z_{p}=0$.
Now we are ready to show (4.1).
###### Proof of (4.1).
Let us begin with
$\displaystyle\frac{\mathbb{E}\big{[}F_{R}(t)F_{R}(s)\big{]}}{R^{d}}$
$\displaystyle=\int_{B_{R}^{2}}dxdy\frac{\mathbb{E}\big{[}u(t,x)u(s,y)\big{]}-1}{R^{d}}=\sum_{p\geq
1}\frac{\omega_{d}}{p!}\int_{\mathbb{R}^{d}}\frac{\text{Leb}\big{(}B_{R}\cap
B_{R}(-z)\big{)}}{\text{Leb}(B_{R})}\Phi_{p}(t,s;z)dz,$
where $\omega_{1}=2$, $\omega_{2}=\pi$ and $\text{Leb}(A)$ stands for the
Lebesgue measure of $A\subset\mathbb{R}^{d}$. We claim that
$\displaystyle\sum_{p\geq
1}\frac{1}{p!}\int_{\mathbb{R}^{d}}\Phi_{p}(t,s;z)dz<\infty,$ (4.10)
which, together with the dominated convergence theorem, implies
$\displaystyle\lim_{R\to\infty}R^{-d}\mathbb{E}\big{[}F_{R}(t)F_{R}(s)\big{]}=\omega_{d}\sum_{p\geq
1}\frac{1}{p!}\int_{\mathbb{R}^{d}}\Phi_{p}(t,s;z)dz.$ (4.11)
We remark that, by the monotone convergence theorem and the fact that
$\Phi_{p}(t,s;z)\geq 0$ for all $z\in\mathbb{R}^{d}$, the claim (4.10) is
equivalent to
$\displaystyle\sup_{\varepsilon>0}\sum_{p\geq
1}\frac{1}{p!}\int_{\mathbb{R}^{d}}\Phi_{p}(t,s;z)e^{-\frac{\varepsilon}{2}|z|^{2}}dz<\infty.$
(4.12)
Let us show the claim (4.12).
For $p=1$, we integrate directly with respect to
$z$, $y$, $\tilde{y}$ (in this order) to obtain
$\displaystyle\int_{\mathbb{R}^{d}}\Phi_{1}(t,s;z)dz$
$\displaystyle=\int_{\mathbb{R}^{d}}\left(\int_{0}^{t}dr\int_{0}^{s}d\tilde{r}\gamma_{0}(r-\tilde{r})\int_{\mathbb{R}^{2d}}dyd\tilde{y}G_{t-r}(y-z)G_{s-\tilde{r}}(\tilde{y})\gamma(y-\tilde{y})\right)dz$
$\displaystyle=\gamma(\mathbb{R}^{d})\int_{0}^{t}\int_{0}^{s}\gamma_{0}(r-\tilde{r})(t-r)(s-\tilde{r})d\tilde{r}dr\leq\gamma(\mathbb{R}^{d})t^{3}\Gamma_{t},$
(4.13)
where $\int_{\mathbb{R}^{d}}\Phi_{1}(t,s;z)dz>0$ due to the non-degeneracy
assumption (1.17) on $\gamma_{0}$. This implies in particular that
$\sigma_{R}(t)>0$ for large enough $R$.
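The bound in (4.13) simply combines $(t-r)(s-\tilde{r})\leq t^{2}$ (for $s\leq t$) with $\int_{0}^{t}\int_{0}^{s}\gamma_{0}(r-\tilde{r})d\tilde{r}dr\leq t\Gamma_{t}$. A quick numerical illustration with the sample (hypothetical) choice $\gamma_{0}(a)=e^{-|a|}$, for which $\Gamma_{t}=2(1-e^{-t})$:

```python
import math

def double_integral(t, s, gamma0, N=600):
    """Midpoint rule for \\int_0^t \\int_0^s gamma0(r - r') (t - r)(s - r') dr' dr."""
    hr, hs = t / N, s / N
    total = 0.0
    for i in range(N):
        r = (i + 0.5) * hr
        for j in range(N):
            rp = (j + 0.5) * hs
            total += gamma0(r - rp) * (t - r) * (s - rp) * hr * hs
    return total

gamma0 = lambda a: math.exp(-abs(a))   # a sample nonnegative correlation kernel
t, s = 2.0, 1.0
Gamma_t = 2 * (1 - math.exp(-t))       # \int_{-t}^{t} e^{-|a|} da
assert double_integral(t, s, gamma0) <= t ** 3 * Gamma_t
```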
Next we consider $p\geq 2$. Using the expression (4.7) and applying Fubini’s
theorem with the dominance condition (4.9), we can write
$\displaystyle\mathcal{T}_{p,\varepsilon}:=(2\pi)^{-d}\int_{\mathbb{R}^{d}}\Phi_{p}(t,s;z)e^{-\frac{\varepsilon}{2}|z|^{2}}dz=p!\sum_{\sigma\in\mathfrak{S}_{p}}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{[0,s]^{p}}d\boldsymbol{\tilde{s}_{p}}\prod_{j=1}^{p}\gamma_{0}(s_{j}-\tilde{s}_{j})\int_{\mathbb{R}^{pd}}d\boldsymbol{\xi_{p}}$
$\displaystyle\quad\times
p_{\varepsilon}(\xi_{1}+\cdots+\xi_{p})\prod_{j=0}^{p-1}\varphi(\xi_{j+1})\widehat{G}_{s_{j}-s_{j+1}}(\xi_{p}+\cdots+\xi_{j+1})\widehat{G}_{\tilde{s}_{\sigma(j)}-\tilde{s}_{\sigma(j+1)}}(\xi_{\sigma(p)}+\cdots+\xi_{\sigma(j+1)})$
$\displaystyle\leq\Gamma_{t}^{p}p!\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd}}d\boldsymbol{\xi_{p}}\left(\prod_{j=1}^{p}\varphi(\xi_{j})\right)p_{\varepsilon}\left(\sum_{j=1}^{p}\xi_{j}\right)\prod_{j=0}^{p-1}\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\xi_{p}+\cdots+\xi_{j+1})\Big{|}^{2},$
(4.14)
where
$p_{\varepsilon}(\xi)=(2\pi\varepsilon)^{-d/2}e^{-|\xi|^{2}/(2\varepsilon)}$
for $\xi\in\mathbb{R}^{d}$ and we applied Lemma 2.5 with
$\mu_{p}(d\boldsymbol{\xi_{p}})=\varphi(\xi_{1})\cdots\varphi(\xi_{p})p_{\varepsilon}(\xi_{1}+\cdots+\xi_{p})d\xi_{1}\cdots
d\xi_{p}$.
Next, we make the change of variables
$\eta_{j}=\xi_{p}+\cdots+\xi_{j}~{}\text{with the convention $\eta_{p+1}=0$},$
and the bound (4.14) becomes
$\displaystyle\mathcal{T}_{p,\varepsilon}$
$\displaystyle\leq\Gamma_{t}^{p}p!\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd}}d\boldsymbol{\eta_{p}}\left(\prod_{j=1}^{p}\varphi(\eta_{j}-\eta_{j+1})\right)p_{\varepsilon}(\eta_{1})\prod_{j=0}^{p-1}\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\eta_{j+1})\Big{|}^{2}$
$\displaystyle\leq\Gamma_{t}^{p}p!\|\varphi\|_{\infty}t^{2}\int_{\mathbb{R}^{d}}d\eta_{1}p_{\varepsilon}(\eta_{1})\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd-d}}d\eta_{2}\cdots
d\eta_{p}\left(\prod_{j=2}^{p}\varphi(\eta_{j}-\eta_{j+1})\right)$
$\displaystyle\quad\times\Big{|}\widehat{G}_{s_{1}-s_{2}}(\eta_{2})\widehat{G}_{s_{2}-s_{3}}(\eta_{3})\cdots\widehat{G}_{s_{p-1}-s_{p}}(\eta_{p})\Big{|}^{2}=\Gamma_{t}^{p}p!\|\varphi\|_{\infty}t^{2}\int_{\mathbb{R}^{d}}d\eta_{1}p_{\varepsilon}(\eta_{1})Q_{p-1},$
(4.15)
where we used $|\widehat{G}_{t-s_{1}}(\xi)|\leq t$, and
$\varphi(\eta_{1}-\eta_{2})\leq\|\varphi\|_{\infty}$ (which is finite because
$\gamma(\mathbb{R}^{d})<\infty$) to obtain (4.15), and
$\displaystyle
Q_{p-1}:=\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd-d}}\prod_{j=2}^{p}\varphi(\eta_{j}-\eta_{j+1})\big{|}\widehat{G}_{s_{j-1}-s_{j}}(\eta_{j})\big{|}^{2}d\eta_{j}.$
(4.16)
Observe that $Q_{p-1}$ does not depend on $\eta_{1}$, thus for any $p\geq 2$
$\displaystyle\mathcal{T}_{p,\varepsilon}\leq\Gamma_{t}^{p}p!\|\varphi\|_{\infty}t^{2}Q_{p-1}.$
(4.17)
By Lemma 2.6, we have for any $p\geq 2$
$Q_{p-1}\leq\left(2(t^{2}\vee
1)\int_{\mathbb{R}^{d}}\frac{\mu(d\xi)}{1+|\xi|^{2}}\right)^{p-1}\frac{t^{p}}{p!}\leq\frac{C^{p}}{p!}.$
Now, plugging the above estimate and (4.17) into (4.12), and using (4.13) for
$p=1$, we have
$\sup_{\varepsilon>0}\sum_{p\geq
1}\frac{1}{p!}\int_{\mathbb{R}^{d}}\Phi_{p}(t,s;z)e^{-\frac{\varepsilon}{2}|z|^{2}}dz\leq\gamma(\mathbb{R}^{d})t^{3}\Gamma_{t}+(2\pi)^{d}\|\varphi\|_{\infty}t^{2}\sum_{p\geq
2}\frac{\Gamma_{t}^{p}C^{p}}{p!}<+\infty.$
This shows the claim (4.12) and hence the claim (4.10), which confirms the
limiting covariance structure (4.11). This completes the proof of (4.1). ∎
#### 4.1.2 Proof of part (2) in Proposition 4.1
In this case, the corresponding spectral density is given by
$\varphi(\xi)=c_{d,\beta}|\xi|^{\beta-d}$, for some constant $c_{d,\beta}$
that only depends on $d$ and $\beta$.
Now, let us recall the chaos expansion (1.7) of $u(t,x)$, from which we can
obtain the following chaos expansion of $F_{R}(t)$:
$F_{R}(t)=\sum_{p\geq 1}\mathbf{J}_{p,R}(t),$
where $\mathbf{J}_{p,R}(t):=I_{p}\left(\int_{|x|\leq
R}\widetilde{f}_{t,x,p}dx\right)$ is the projection of $F_{R}(t)$ onto the
$p$th Wiener chaos, with $\widetilde{f}_{t,x,p}$ given as in (1.9).
Using the orthogonality of Wiener chaoses of different orders, we have
$\sigma^{2}_{R}(t)=\text{Var}\big{(}F_{R}(t)\big{)}=\sum_{p\geq
1}\text{Var}\big{(}\mathbf{J}_{p,R}(t)\big{)}.$
Let us first consider the variance of $\mathbf{J}_{1,R}(t)$. With
$B_{R}=\\{x\in\mathbb{R}^{d}:|x|\leq R\\}$, we can write
$\displaystyle\quad\text{Var}\big{(}\mathbf{J}_{1,R}(t)\big{)}=\int_{B_{R}^{2}}dxdx^{\prime}\langle
G_{t-\bullet}(x-\ast),G_{t-\bullet}(x^{\prime}-\ast)\rangle_{\mathcal{H}}$
$\displaystyle=\int_{B_{R}^{2}}dxdx^{\prime}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})\int_{\mathbb{R}^{d}}d\xi\varphi(\xi)e^{-i(x-x^{\prime})\cdot\xi}\widehat{G}_{t-s}(\xi)\widehat{G}_{t-s^{\prime}}(\xi).$
(4.18)
Then, making the change of variables
$(x,x^{\prime},\xi)\to(Rx,Rx^{\prime},\xi/R)$, we get
$\displaystyle\text{Var}\big{(}\mathbf{J}_{1,R}(t)\big{)}=R^{2d-\beta}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})\int_{B_{1}^{2}}dxdx^{\prime}\int_{\mathbb{R}^{d}}d\xi\varphi(\xi)e^{-i(x-x^{\prime})\cdot\xi}\widehat{G}_{t-s}(\xi/R)\widehat{G}_{t-s^{\prime}}(\xi/R).$
Note that $\widehat{G}_{t}(\xi/R)$ is uniformly bounded and convergent to $t$
as $R\to\infty$; observe also that
$\ell_{R}(\xi):=\int_{B_{R}^{2}}dxdx^{\prime}e^{-i(x-x^{\prime})\cdot\xi}=\big{|}\mathcal{F}\mathbf{1}_{B_{R}}\big{|}^{2}(\xi)\in[0,\infty).$
(4.19)
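Both properties of $\widehat{G}$ just invoked — the uniform bound and the convergence $\widehat{G}_{t}(\xi/R)\to t$ — can be sanity-checked numerically if one takes the wave-kernel Fourier formula $\widehat{G}_{t}(\xi)=\sin(t|\xi|)/|\xi|$, which is consistent with the scaling $G_{t}(Rz)=R^{1-d}G_{tR^{-1}}(z)$ used in Section 4.2 but is an assumption of this sketch:

```python
import numpy as np

def G_hat(t, xi):
    # Assumed Fourier transform of the wave kernel: sin(t|xi|) / |xi|,
    # extended by continuity to the value t at xi = 0.
    r = np.abs(xi)
    return np.where(r > 0, np.sin(t * r) / np.where(r > 0, r, 1.0), t)

t = 1.7
xi = np.linspace(-50.0, 50.0, 100_001)

# Uniform bound |G_hat_t(xi)| <= t, from |sin(t r)| <= t r.
assert np.all(np.abs(G_hat(t, xi)) <= t + 1e-12)

# Pointwise convergence G_hat_t(xi / R) -> t as R grows.
errs = [np.max(np.abs(G_hat(t, xi / R) - t)) for R in (10.0, 100.0, 1000.0)]
assert errs[0] > errs[1] > errs[2]
```

These two facts are exactly what make the dominated convergence argument work for the covariance limits (4.20) and (4.21).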
Thus we deduce from the dominated convergence theorem that, with
$\kappa_{\beta,d}:=\int_{B_{1}^{2}}dxdx^{\prime}|x-x^{\prime}|^{-\beta}$,
$\displaystyle\frac{\text{Var}\big{(}\mathbf{J}_{1,R}(t)\big{)}}{R^{2d-\beta}}\xrightarrow{R\to\infty}$
$\displaystyle\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})(t-s)(t-s^{\prime})\int_{\mathbb{R}^{d}}d\xi\varphi(\xi)\big{|}\mathcal{F}\mathbf{1}_{B_{1}}\big{|}^{2}(\xi)$
$\displaystyle=\kappa_{\beta,d}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})ss^{\prime}.$
(4.20)
In the same way, we can get
$\displaystyle\frac{\mathbb{E}\big{[}\mathbf{J}_{1,R}(t)\mathbf{J}_{1,R}(s)\big{]}}{R^{2d-\beta}}\xrightarrow{R\to\infty}$
$\displaystyle\kappa_{\beta,d}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime})$
(4.21)
In what follows, we will show that as $R\to\infty$,
$\displaystyle\sum_{p\geq
2}\text{Var}\big{(}\mathbf{J}_{p,R}(t)\big{)}=o(R^{2d-\beta}).$ (4.22)
In view of the orthogonality again, the above claim (4.22) and the results
(4.20)-(4.21) imply that the first chaos of $F_{R}(t)$ is dominant and
$\frac{\mathbb{E}\big{[}F_{R}(t)F_{R}(s)\big{]}}{R^{2d-\beta}}\xrightarrow{R\to\infty}\kappa_{\beta,d}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime}),$
which gives us the desired limiting covariance structure. Moreover, we obtain
immediately that the process
$\big{\\{}R^{-d+\frac{\beta}{2}}F_{R}(t):t\in\mathbb{R}_{+}\big{\\}}$
converges in finite-dimensional distributions to the centered Gaussian process
$\mathcal{G}_{\beta}$, whose covariance structure is given by (1.19).
The rest of Section 4.1.2 is then devoted to proving (4.22). We point out that
the strategy of Section 4.1.1 cannot be used directly here, because $\varphi$ is
not uniformly bounded.
###### Proof of Claim (4.22).
We begin by writing (with $s_{0}=\tilde{s}_{\sigma(0)}=t$ and
$B_{R}=\\{x:|x|\leq R\\}$)
$\displaystyle\quad\text{Var}\big{(}\mathbf{J}_{p,R}(t)\big{)}=p!\int_{B_{R}^{2}}dxdx^{\prime}\big{\langle}\widetilde{f}_{t,x,p},\widetilde{f}_{t,x^{\prime},p}\big{\rangle}_{\mathcal{H}^{\otimes
p}}=p!\int_{B_{R}^{2}}dxdx^{\prime}\big{\langle}f_{t,x,p},\widetilde{f}_{t,x^{\prime},p}\big{\rangle}_{\mathcal{H}^{\otimes
p}}$
$\displaystyle=c_{d,\beta}^{p}\sum_{\sigma\in\mathfrak{S}_{p}}\int_{B_{R}^{2}}dxdx^{\prime}\int_{[0,t]^{2p}}d\boldsymbol{s_{p}}d\boldsymbol{\tilde{s}_{p}}\prod_{k=1}^{p}\gamma_{0}(s_{k}-\tilde{s}_{k})\int_{\mathbb{R}^{pd}}\left(\prod_{j=1}^{p}d\xi_{j}|\xi_{j}|^{\beta-d}\right)$
$\displaystyle\quad\times
e^{-i(x-x^{\prime})\cdot(\xi_{p}+\cdots+\xi_{1})}\prod_{j=0}^{p-1}\widehat{G}_{s_{j}-s_{j+1}}(\xi_{p}+\cdots+\xi_{j+1})\widehat{G}_{\tilde{s}_{\sigma(j)}-\tilde{s}_{\sigma(j+1)}}(\xi_{\sigma(p)}+\cdots+\xi_{\sigma(j+1)}),$
where we recall the convention that $G_{t}(z)=0$ for $t\leq 0$. Then,
recalling definition (4.19) of $\ell_{R}(\xi)$, we can apply Lemma 2.5 with
$\mu(d\boldsymbol{\xi_{p}})=\varphi(\xi_{1})\cdots\varphi(\xi_{p})\ell_{R}(\xi_{1}+\cdots+\xi_{p})d\xi_{1}\cdots
d\xi_{p}$
to bound $\text{Var}\big{(}\mathbf{J}_{p,R}(t)\big{)}$ by
$\displaystyle
c_{d,\beta}^{p}\Gamma_{t}^{p}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd}}\left(\prod_{j=1}^{p}d\xi_{j}|\xi_{j}|^{\beta-d}\right)\ell_{R}(\xi_{1}+\cdots+\xi_{p})\prod_{j=0}^{p-1}\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\xi_{p}+\cdots+\xi_{j+1})\Big{|}^{2}.$
(4.23)
Making the change of variables
${\rm(i)}~{}\eta_{j}=\xi_{p}+\cdots+\xi_{j}~{}\text{with }\eta_{p+1}=0,\quad{\rm(ii)}~{}(x,x^{\prime},\eta_{1})\to(Rx,Rx^{\prime},\eta_{1}R^{-1}),$
we obtain
$\displaystyle\text{Var}\big{(}\mathbf{J}_{p,R}(t)\big{)}$ $\displaystyle\leq
c_{d,\beta}^{p}\Gamma_{t}^{p}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd}}\left(\prod_{j=1}^{p}d\eta_{j}|\eta_{j}-\eta_{j+1}|^{\beta-d}\right)$
$\displaystyle\qquad\times\left(\int_{B_{R}^{2}}dxdx^{\prime}e^{-i(x-x^{\prime})\cdot\eta_{1}}\right)\prod_{j=0}^{p-1}\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\eta_{j+1})\Big{|}^{2}$
$\displaystyle=c_{d,\beta}^{p}\Gamma_{t}^{p}R^{2d-\beta}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd}}d\eta_{1}|\eta_{1}-\eta_{2}R|^{\beta-d}\left(\prod_{j=2}^{p}d\eta_{j}|\eta_{j}-\eta_{j+1}|^{\beta-d}\right)$
$\displaystyle\qquad\times\left(\int_{B_{1}^{2}}dxdx^{\prime}e^{-i(x-x^{\prime})\cdot\eta_{1}}\right)\Big{|}\widehat{G}_{t-s_{1}}(\eta_{1}/R)\Big{|}^{2}\prod_{j=1}^{p-1}\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\eta_{j+1})\Big{|}^{2}$
$\displaystyle\leq
t^{2}c_{d,\beta}^{p-1}\Gamma_{t}^{p}R^{2d-\beta}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{pd-d}}\left(\prod_{j=2}^{p}d\eta_{j}|\eta_{j}-\eta_{j+1}|^{\beta-d}\right)$
$\displaystyle\qquad\times\left(\int_{B_{1}^{2}}dxdx^{\prime}|x-x^{\prime}|^{-\beta}e^{-i(x-x^{\prime})\cdot\eta_{2}R}\right)\prod_{j=1}^{p-1}\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\eta_{j+1})\Big{|}^{2},$
where in the last inequality we used $|\widehat{G}_{t}|\leq t$ and the
following Fourier transform:
$\displaystyle\int_{B_{1}^{2}}dxdx^{\prime}c_{d,\beta}\int_{\mathbb{R}^{d}}d\eta_{1}|\eta_{1}-\eta_{2}R|^{\beta-d}e^{-i(x-x^{\prime})\cdot\eta_{1}}$
$\displaystyle=c_{d,\beta}\int_{\mathbb{R}^{d}}d\eta_{1}|\eta_{1}-\eta_{2}R|^{\beta-d}\big{|}\mathcal{F}\mathbf{1}_{B_{1}}\big{|}^{2}(\eta_{1})$
$\displaystyle=\int_{B_{1}^{2}}dxdx^{\prime}|x-x^{\prime}|^{-\beta}e^{-i(x-x^{\prime})\cdot\eta_{2}R}.$
Note that the integral
$\int_{B_{1}^{2}}dxdx^{\prime}|x-x^{\prime}|^{-\beta}e^{-i(x-x^{\prime})\cdot\eta_{2}R}$
is uniformly bounded by $\kappa_{\beta,d}$ and, by the Riemann-Lebesgue lemma,
converges to zero as $R\to\infty$ for every $\eta_{2}\neq 0$. Taking into
account the definition (4.16) of $Q_{p-1}$, we then have
$R^{\beta-2d}\text{Var}\big{(}\mathbf{J}_{p,R}(t)\big{)}\leq
t^{2}\kappa_{\beta,d}\Gamma_{t}^{p}Q_{p-1},$
which is summable over $p\geq 2$ by the arguments in the previous section.
Hence by the dominated convergence theorem, we get
$R^{\beta-2d}\sum_{p\geq
2}\text{Var}\big{(}\mathbf{J}_{p,R}(t)\big{)}\xrightarrow{R\to\infty}0.$
This proves the claim (4.22). ∎
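The Riemann-Lebesgue step above — an oscillatory integral against a fixed integrable (possibly singular) density vanishes as the frequency grows — can be illustrated in one dimension; the density $|u|^{-1/2}$ and the midpoint discretization below are illustrative choices, not part of the proof:

```python
import numpy as np

def osc_integral(R, beta=0.5, n=200_000):
    # Midpoint approximation of \int_{-1}^{1} |u|^{-beta} e^{-i R u} du,
    # which equals 2 \int_0^1 u^{-beta} cos(R u) du by symmetry.
    u = (np.arange(n) + 0.5) / n  # midpoints in (0, 1), avoiding u = 0
    return 2.0 * np.sum(u ** (-beta) * np.cos(R * u)) / n

vals = [abs(osc_integral(R)) for R in (1.0, 10.0, 100.0, 1000.0)]
# The magnitudes decrease toward zero as the frequency R grows.
assert vals[0] > vals[1] > vals[2] > vals[3]
```

The density $|u|^{-\beta}$ is singular at the origin yet integrable, which is precisely the situation of $|x_{2}-x_{3}|^{-\beta}$ with $\beta\in(0,d)$ in the proof.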
#### 4.1.3 Proof of part (3) in Proposition 4.1
Recall the two cases from (4.3):
$\displaystyle\begin{cases}(c_{1})&\gamma_{i}(x_{i})=|x_{i}|^{-\beta_{i}}~{}\text{for
some $\beta_{i}\in(0,1)$, $i=1,2$,}\\\ (c_{2})&\gamma_{1}\in
L^{1}(\mathbb{R})~{}{\rm and}~{}\gamma_{2}(x)=|x|^{-\beta}~{}\text{for some
$\beta\in(0,1)$.}\end{cases}$
In $(c_{1})$, the spectral density is
$\varphi(\xi_{1},\xi_{2})=c_{1,\beta_{1}}c_{1,\beta_{2}}|\xi_{1}|^{\beta_{1}-1}|\xi_{2}|^{\beta_{2}-1}$
for $(\xi_{1},\xi_{2})\in\mathbb{R}^{2}$, where $c_{1,\beta}$ is a constant
that only depends on $\beta$. Now, using the notation from Section 4.1.2, we
write
$\displaystyle\text{Var}\big{(}\mathbf{J}_{1,R}(t)\big{)}=\int_{B_{R}^{2}}dxdx^{\prime}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})\int_{\mathbb{R}^{2}}d\xi\varphi(\xi)e^{-i(x-x^{\prime})\cdot\xi}\widehat{G}_{t-s}(\xi)\widehat{G}_{t-s^{\prime}}(\xi)\quad\text{(see (4.18))}$
$\displaystyle=R^{4-\beta_{1}-\beta_{2}}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})\int_{\mathbb{R}^{2}}d\xi\varphi(\xi_{1},\xi_{2})\int_{B_{1}^{2}}dxdx^{\prime}e^{-i(x-x^{\prime})\cdot\xi}\widehat{G}_{t-s}(\xi/R)\widehat{G}_{t-s^{\prime}}(\xi/R),$
where the last equality is obtained by the change of variables
$(x,x^{\prime},\xi_{1},\xi_{2})$ to $(Rx,Rx^{\prime},\xi_{1}/R,\xi_{2}/R)$.
Thus, by exactly the same arguments that lead to (4.20), we can get
$\frac{\text{Var}\big{(}\mathbf{J}_{1,R}(t)\big{)}}{R^{4-\beta_{1}-\beta_{2}}}\xrightarrow{R\to\infty}K_{\beta_{1},\beta_{2}}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})ss^{\prime},$
with $K_{\beta_{1},\beta_{2}}$ introduced in (1.22). Similar to (4.21), we
also have
$\displaystyle\frac{\mathbb{E}\big{[}\mathbf{J}_{1,R}(t)\mathbf{J}_{1,R}(s)\big{]}}{R^{4-\beta_{1}-\beta_{2}}}\xrightarrow{R\to\infty}K_{\beta_{1},\beta_{2}}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime}).$
(4.24)
To obtain the result $(r_{1})$, it remains to show
$\displaystyle\sum_{p\geq
2}\text{Var}\big{(}\mathbf{J}_{p,R}(t)\big{)}=o\big{(}R^{4-\beta_{1}-\beta_{2}}\big{)}.$
(4.25)
Its proof is _verbatim_ the same as that of (4.22), so we omit the details
here.
Finally, let us look at the more interesting case $(c_{2})$ where
$\gamma_{1}\in L^{1}(\mathbb{R})$ and $\gamma_{2}(x)=|x|^{-\beta}$ for some
fixed $\beta\in(0,1)$. In this case, the corresponding spectral density is
$\varphi(\xi_{1},\xi_{2})=\varphi_{1}(\xi_{1})\varphi_{2}(\xi_{2})$, where
$\displaystyle\begin{cases}{\rm(i)}&\text{ $\gamma_{1}=\mathcal{F}\varphi_{1}$
and $\varphi_{1}$ is uniformly continuous and bounded, }\\\ {\rm(ii)}&\text{
$\varphi_{2}(\xi_{2})=c_{1,\beta}|\xi_{2}|^{\beta-1}$ for some constant
$c_{1,\beta}$ that only depends on $\beta$.}\end{cases}$ (4.26)
Let us begin with (4.18) and make the usual change of variables
$(x,x^{\prime},\xi)\to(Rx,Rx^{\prime},\xi/R)$ to obtain
$\displaystyle\text{Var}\big{(}\mathbf{J}_{1,R}(t)\big{)}=\int_{B_{R}^{2}}dxdx^{\prime}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})\int_{\mathbb{R}^{2}}d\xi\varphi_{1}(\xi_{1})\varphi_{2}(\xi_{2})e^{-i(x-x^{\prime})\cdot\xi}\widehat{G}_{t-s}(\xi)\widehat{G}_{t-s^{\prime}}(\xi)$
$\displaystyle=R^{3-\beta}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})\int_{\mathbb{R}^{2}}d\xi\varphi_{1}(\xi_{1}/R)\varphi_{2}(\xi_{2})\left(\int_{B_{1}^{2}}dxdx^{\prime}e^{-i(x-x^{\prime})\cdot\xi}\right)\widehat{G}_{t-s}(\xi/R)\widehat{G}_{t-s^{\prime}}(\xi/R)$
$\displaystyle=R^{3-\beta}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})\int_{\mathbb{R}^{2}}d\xi\varphi_{1}(\xi_{1}/R)\varphi_{2}(\xi_{2})\big{|}\mathcal{F}\mathbf{1}_{B_{1}}\big{|}^{2}(\xi)\widehat{G}_{t-s}(\xi/R)\widehat{G}_{t-s^{\prime}}(\xi/R).$
Recall that $\varphi_{1}$, $\widehat{G}_{t-s}$ and
$\widehat{G}_{t-s^{\prime}}$ are uniformly bounded and continuous. Note that,
applying Plancherel’s theorem and the Parseval-type relation (2.3), we have
$\displaystyle\int_{\mathbb{R}^{2}}d\xi\varphi_{2}(\xi_{2})\big{|}\mathcal{F}\mathbf{1}_{B_{1}}\big{|}^{2}(\xi)$
$\displaystyle=2\pi\int_{\mathbb{R}^{2}}dx_{1}d\xi_{2}\varphi_{2}(\xi_{2})\left|\mathcal{F}\mathbf{1}_{B_{1}}(x_{1},\bullet)(\xi_{2})\right|^{2}$
$\displaystyle=2\pi\int_{\mathbb{R}^{3}}dx_{1}dx_{2}dx_{3}\mathbf{1}_{\\{x_{1}^{2}+x_{2}^{2}\leq
1\\}}\mathbf{1}_{\\{x_{1}^{2}+x_{3}^{2}\leq
1\\}}|x_{2}-x_{3}|^{-\beta}<\infty.$
Therefore, by the dominated convergence theorem and the fact that
$\varphi_{1}(0)=\frac{1}{2\pi}\gamma_{1}(\mathbb{R})$, we get
$\displaystyle\frac{\text{Var}\big{(}\mathbf{J}_{1,R}(t)\big{)}}{R^{3-\beta}}\xrightarrow{R\to\infty}$
$\displaystyle\varphi_{1}(0)\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})(t-s)(t-s^{\prime})\int_{\mathbb{R}^{2}}d\xi\varphi_{2}(\xi_{2})\big{|}\mathcal{F}\mathbf{1}_{B_{1}}\big{|}^{2}(\xi)$
$\displaystyle=$
$\displaystyle\gamma_{1}(\mathbb{R})\mathcal{L}_{\beta}\int_{[0,t]^{2}}dsds^{\prime}\gamma_{0}(s-s^{\prime})ss^{\prime},$
where $\mathcal{L}_{\beta}$ is defined in (1.24). In the same way, we get for
$s,t\in(0,\infty)$,
$\displaystyle\frac{\mathbb{E}\big{[}\mathbf{J}_{1,R}(t)\mathbf{J}_{1,R}(s)\big{]}}{R^{3-\beta}}\xrightarrow{R\to\infty}\gamma_{1}(\mathbb{R})\mathcal{L}_{\beta}\int_{0}^{t}dr\int_{0}^{s}dr^{\prime}\gamma_{0}(r-r^{\prime})(t-r)(s-r^{\prime}).$
(4.27)
Now we claim that the other chaoses are negligible, that is, as $R\to\infty$,
$\displaystyle\sum_{p\geq
2}\text{Var}\big{(}\mathbf{J}_{p,R}(t)\big{)}=o(R^{3-\beta}).$ (4.28)
Note that the desired limiting covariance structure follows from (4.27) and
the above claim (4.28). The rest of this section is devoted to proving claim
(4.28).
###### Proof of Claim (4.28).
By the same arguments that lead to the estimate (4.23), we can obtain
$\displaystyle\text{Var}\big{(}\mathbf{J}_{p,R}(t))\leq\Gamma_{t}^{p}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{2p}}d\boldsymbol{\xi_{p}}\varphi_{p}(\boldsymbol{\xi_{p}})\prod_{j=0}^{p-1}\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\xi_{p}+\cdots+\xi_{j+1})\Big{|}^{2}~{}\text{with
$s_{0}=t$},$
where
$\varphi_{p}(\boldsymbol{\xi_{p}})=\varphi(\xi_{1})\cdots\varphi(\xi_{p})\ell_{R}(\xi_{1}+\cdots+\xi_{p})$
for $\xi_{j}=(\xi_{j}^{(1)},\xi_{j}^{(2)})\in\mathbb{R}^{2}$, $j=1,\dots,p$
and $\ell_{R}$ is defined in (4.19). Recall that in the current case,
$\varphi(\xi)=\varphi_{1}(\xi^{(1)})\varphi_{2}(\xi^{(2)})$ for
$\xi=(\xi^{(1)},\xi^{(2)})\in\mathbb{R}^{2}$ and $\varphi_{1},\varphi_{2}$
satisfy the conditions in (4.26). Then, the following change of variables
$\eta_{j}=\xi_{j}+\xi_{j+1}+\cdots+\xi_{p}$ with $\eta_{p+1}=0$
yields
$\displaystyle\text{Var}\big{(}\mathbf{J}_{p,R}(t))$
$\displaystyle\leq\Gamma_{t}^{p}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{2p}}d\boldsymbol{\eta_{p}}\ell_{R}(\eta_{1})\prod_{j=0}^{p-1}\varphi(\eta_{j+1}-\eta_{j+2})\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\eta_{j+1})\Big{|}^{2}.$
In view of (4.19), we have $\ell_{R}(\eta_{1}/R)=R^{4}\ell_{1}(\eta_{1})$.
Thus, by changing $\eta_{1}$ to $\eta_{1}/R$, we write
$\displaystyle\text{Var}\big{(}\mathbf{J}_{p,R}(t))$ $\displaystyle\leq
R^{2}\Gamma_{t}^{p}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{2p}}d\boldsymbol{\eta_{p}}\ell_{1}(\eta_{1})\varphi(\eta_{1}R^{-1}-\eta_{2})\Big{|}\widehat{G}_{t-s_{1}}(\eta_{1}/R)\Big{|}^{2}$
$\displaystyle\qquad\times\prod_{j=1}^{p-1}\varphi(\eta_{j+1}-\eta_{j+2})\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\eta_{j+1})\Big{|}^{2}$
$\displaystyle\leq
R^{3-\beta}\Gamma_{t}^{p}\|\varphi_{1}\|_{\infty}t^{2}\int_{\Delta_{p}(t)}d\boldsymbol{s_{p}}\int_{\mathbb{R}^{2p-2}}d\eta_{2}\cdots d\eta_{p}\left(\int_{\mathbb{R}^{2}}d\eta_{1}\ell_{1}(\eta_{1})c_{1,\beta}\big{|}\eta^{(2)}_{1}-\eta_{2}^{(2)}R\big{|}^{\beta-1}\right)$
$\displaystyle\qquad\times\prod_{j=1}^{p-1}\varphi(\eta_{j+1}-\eta_{j+2})\Big{|}\widehat{G}_{s_{j}-s_{j+1}}(\eta_{j+1})\Big{|}^{2},$
where we used $|\widehat{G}_{t-s_{1}}(\eta_{1}/R)|^{2}\leq t^{2}$. Observe
that with $\eta=(\eta^{(1)},\eta^{(2)})$, we deduce from the fact
$\ell_{1}(\eta)=\big{|}\mathcal{F}\mathbf{1}_{B_{1}}\big{|}^{2}(\eta^{(1)},\eta^{(2)})$
that
$\displaystyle\int_{\mathbb{R}^{2}}d\eta\ell_{1}(\eta)\varphi_{2}(\eta^{(2)}-xR)$
$\displaystyle=\int_{\mathbb{R}^{2}}d\eta^{(1)}d\eta^{(2)}\big{|}\mathcal{F}\mathbf{1}_{B_{1}}\big{|}^{2}(\eta^{(1)},\eta^{(2)}+xR)\varphi_{2}(\eta^{(2)})$
$\displaystyle=2\pi\int_{\mathbb{R}^{3}}\mathbf{1}_{\\{x_{1}^{2}+x_{2}^{2}\leq
1\\}}\mathbf{1}_{\\{x_{1}^{2}+x_{3}^{2}\leq
1\\}}e^{-i(x_{2}-x_{3})xR}|x_{2}-x_{3}|^{-\beta}dx_{1}dx_{2}dx_{3},$
by inverting the Fourier transform. The above quantity is uniformly bounded by
$2\pi\mathcal{L}_{\beta}$ with $\mathcal{L}_{\beta}$ given in (1.24) and
convergent to zero as $R\to\infty$ for every $x\neq 0$ in view of the
Riemann-Lebesgue lemma. Thus, $R^{\beta-3}\text{Var}\big{(}\mathbf{J}_{p,R}(t))$
is uniformly bounded by
$2\pi\mathcal{L}_{\beta}\Gamma_{t}^{p}\|\varphi_{1}\|_{\infty}t^{2}Q_{p-1}$,
with $Q_{p-1}$ given by (4.16), and converges to zero as $R\to\infty$. Since
$Q_{p-1}\leq C^{p}/p!$, we have
$\sum_{p\geq 2}\Gamma_{t}^{p}Q_{p-1}<\infty,$
and the dominated convergence theorem implies (4.28). ∎
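The scaling $\ell_{R}(\eta_{1}/R)=R^{4}\ell_{1}(\eta_{1})$ used above is the $d=2$ instance of the general identity $\ell_{R}(\xi/R)=R^{2d}\ell_{1}(\xi)$, which follows from $\mathcal{F}\mathbf{1}_{B_{R}}(\xi)=R^{d}\,\mathcal{F}\mathbf{1}_{B_{1}}(R\xi)$. In $d=1$, where $\mathcal{F}\mathbf{1}_{[-R,R]}(\xi)=2\sin(R\xi)/\xi$ in closed form, the identity can be verified directly (the one-dimensional reduction is purely illustrative):

```python
import numpy as np

def ell(R, xi):
    # d = 1: ell_R(xi) = |F 1_{[-R,R]}|^2(xi) = (2 sin(R xi) / xi)^2
    return (2.0 * np.sin(R * xi) / xi) ** 2

xi = np.linspace(0.1, 5.0, 1_000)
for R in (2.0, 7.0, 50.0):
    lhs = ell(R, xi / R)           # ell_R evaluated at xi / R
    rhs = R ** 2 * ell(1.0, xi)    # R^{2d} ell_1(xi) with d = 1
    assert np.allclose(lhs, rhs)
```

The same rescaling is what produces the factor $R^{2d-\beta}$ (respectively $R^{3-\beta}$) in front of the variance bounds.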
###### Remark 4.2.
Under the assumptions of Proposition 4.1, we point out that $\sigma_{R}(t)>0$
for large enough $R$ so that the renormalized random variable
$F_{R}(t)/\sigma_{R}(t)$ is well-defined for large $R$.
### 4.2 Quantitative central limit theorems (QCLT) and f.d.d. convergence
In this section, we prove the quantitative CLTs that are stated in Theorem 1.4
and, as an easy consequence, we are also able to show the convergence of
finite-dimensional distributions in Theorem 1.4. We first consider part (1)
and then treat parts (2) and (3).
#### 4.2.1 Part (1)
We will first show the estimate
$\displaystyle d_{\rm TV}\big{(}F_{R}(t)/\sigma_{R}(t),Z\big{)}\lesssim
R^{-d/2},$ (4.29)
where $Z\sim N(0,1)$. By Proposition 1.8 applied to
$\frac{1}{\sigma_{R}(t)}F_{R}(t)$, we have
$d_{\rm
TV}\big{(}F_{R}(t)/\sigma_{R}(t),Z\big{)}\leq\frac{4}{\sigma^{2}_{R}(t)}\sqrt{\mathcal{A}_{R}},$
(4.30)
where
$\displaystyle\mathcal{A}_{R}$
$\displaystyle=\int_{\mathbb{R}_{+}^{6}\times\mathbb{R}^{6d}}drdr^{\prime}dsds^{\prime}d\theta
d\theta^{\prime}dzdz^{\prime}dydy^{\prime}dwdw^{\prime}\gamma_{0}(\theta-\theta^{\prime})\gamma_{0}(s-s^{\prime})\gamma_{0}(r-r^{\prime})\gamma(z-z^{\prime})\gamma(w-w^{\prime})$
$\displaystyle\quad\times\gamma(y-y^{\prime})\|D_{r,z}D_{\theta,w}F_{R}(t)\|_{4}\|D_{s,y}D_{\theta^{\prime},w^{\prime}}F_{R}(t)\|_{4}\|D_{r^{\prime},z^{\prime}}F_{R}(t)\|_{4}\|D_{s^{\prime},y^{\prime}}F_{R}(t)\|_{4}.$
Recall from Section 4.1.1 that $\sigma^{2}_{R}(t)\sim R^{d}$. Therefore, in
order to show (4.29) it suffices to prove the estimate
$\mathcal{A}_{R}\lesssim R^{d}.$ (4.31)
Using Minkowski’s inequality, we can write
$\|D_{r,z}D_{\theta,w}F_{R}(t)\|_{4}=\left\|\int_{B_{R}}D_{r,z}D_{\theta,w}u(t,x)dx\right\|_{4}\leq\int_{B_{R}}\big{\|}D_{r,z}D_{\theta,w}u(t,x)\big{\|}_{4}dx.$
Then, it follows from our fundamental estimates in Theorem 1.3 that
$\displaystyle\|D_{r,z}D_{\theta,w}F_{R}(t)\|_{4}\lesssim\int_{B_{R}}\widetilde{f}_{t,x,2}(r,z,\theta,w)dx,$
(4.32)
with
$\widetilde{f}_{t,x,2}(r,z,\theta,w)=\frac{1}{2}\left[G_{t-r}(x-z)G_{r-\theta}(z-w)\mathbf{1}_{\\{r>\theta\\}}+G_{t-\theta}(x-w)G_{\theta-r}(z-w)\mathbf{1}_{\\{r<\theta\\}}\right];$
and, in the same way, we have
$\displaystyle\|D_{r,z}F_{R}(t)\|_{4}\lesssim\int_{B_{R}}G_{t-r}(x-z)dx,$
(4.33)
where the implicit constants in (4.32)-(4.33) do not depend on
$(R,r,z,\theta,w)$ and are increasing in $t$. Now, plugging (4.32)-(4.33) into
the expression of $\mathcal{A}_{R}$, we get
$\displaystyle\mathcal{A}_{R}\lesssim\int_{[0,t]^{6}\times\mathbb{R}^{6d}}drdr^{\prime}dsds^{\prime}d\theta
d\theta^{\prime}dzdz^{\prime}dydy^{\prime}dwdw^{\prime}\gamma_{0}(r-r^{\prime})\gamma_{0}(s-s^{\prime})\gamma_{0}(\theta-\theta^{\prime})\gamma(z-z^{\prime})\gamma(w-w^{\prime})$
$\displaystyle\quad\times\gamma(y-y^{\prime})\int_{B_{R}^{4}}\widetilde{f}_{t,x_{1},2}(r,z,\theta,w)\widetilde{f}_{t,x_{2},2}(s,y,\theta^{\prime},w^{\prime})G_{t-r^{\prime}}(x_{3}-z^{\prime})G_{t-s^{\prime}}(x_{4}-y^{\prime})d\boldsymbol{x_{4}}=:\sum_{j=1}^{4}\mathcal{A}_{R,j}.$
The four terms $\mathcal{A}_{R,1},\dots,\mathcal{A}_{R,4}$ are defined
according to whether $r>\theta$ or $r<\theta$, and whether $s>\theta^{\prime}$
or $s<\theta^{\prime}$. For example, the term $\mathcal{A}_{R,1}$ corresponds
to $r>\theta$ and $s>\theta^{\prime}$:
$\displaystyle\mathcal{A}_{R,1}$
$\displaystyle=\frac{1}{4}\int_{[0,t]^{6}\times\mathbb{R}^{6d}}drdr^{\prime}dsds^{\prime}d\theta
d\theta^{\prime}dzdz^{\prime}dydy^{\prime}dwdw^{\prime}\gamma_{0}(r-r^{\prime})\gamma_{0}(s-s^{\prime})\gamma_{0}(\theta-\theta^{\prime})$
$\displaystyle\quad\times\gamma(w-w^{\prime})\gamma(y-y^{\prime})\gamma(z-z^{\prime})G_{r-\theta}(z-w)G_{s-\theta^{\prime}}(y-w^{\prime})$
$\displaystyle\quad\times\int_{B_{R}^{4}}d\boldsymbol{x_{4}}G_{t-r}(x_{1}-z)G_{t-s}(x_{2}-y)G_{t-r^{\prime}}(x_{3}-z^{\prime})G_{t-s^{\prime}}(x_{4}-y^{\prime}).$
(4.34)
The term $\mathcal{A}_{R,2}$ corresponds to $r>\theta$ and
$s<\theta^{\prime}$, the term $\mathcal{A}_{R,3}$ corresponds to $r<\theta$
and $s>\theta^{\prime}$ and the term $\mathcal{A}_{R,4}$ corresponds to
$r<\theta$ and $s<\theta^{\prime}$. In the following, we estimate
$\mathcal{A}_{R,j}$ for $j=1,2,3,4$ by a constant times $R^{d}$, which yields
(4.31).
To get the bound for $\mathcal{A}_{R,1}$, it suffices to perform the
integration with respect to $dx_{1},dx_{2},dx_{4}$,
$dy^{\prime},dy,dw^{\prime},dw$, $dz,dz^{\prime},dx_{3}$ one by one, by taking
into account the following facts:
$\sup_{z\in\mathbb{R}^{d}}\int_{B_{R}}G_{t-r}(x-z)dx\leq t\quad{\rm
and}\quad\sup_{y^{\prime}\in\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\gamma(y-y^{\prime})dy=\|\gamma\|_{L^{1}(\mathbb{R}^{d})}.$
To get the bound for $\mathcal{A}_{R,2}$, it suffices to perform the
integration with respect to $dx_{1},dx_{3},dz^{\prime},dz$,
$dx_{2},dw,dw^{\prime},dy,dy^{\prime},dx_{4}$. To get the bound for
$\mathcal{A}_{R,3}$, it suffices to perform the integration with respect to
$dx_{4},dy^{\prime},dx_{2},dy,dw^{\prime},dx_{1},dw,dz,dz^{\prime},dx_{3}$ one
by one. To get the bound for $\mathcal{A}_{R,4}$, it suffices to perform the
integration with respect to
$dx_{1},dx_{3},dx_{2},dz^{\prime},dz,dw,dw^{\prime},dy,dy^{\prime},dx_{4}$ one
by one. This completes the proof of (4.29).
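The two facts driving these successive integrations can be checked explicitly in $d=1$, where the fundamental solution has the closed form $G_{t}(x)=\frac{1}{2}\mathbf{1}_{\{|x|<t\}}$ (this formula is an assumption of the sketch): the integral $\int_{\mathbb{R}}G_{t-r}(x-z)dx=t-r\leq t$ is independent of the shift $z$, and the second fact is simply translation invariance of $\|\gamma\|_{L^{1}(\mathbb{R}^{d})}$.

```python
import numpy as np

t, r = 2.0, 0.6

def G(s, x):
    # Assumed d = 1 wave kernel: G_s(x) = (1/2) 1_{|x| < s}
    return 0.5 * (np.abs(x) < s)

x = np.linspace(-10.0, 10.0, 400_001)
dx = x[1] - x[0]

# \int G_{t-r}(x - z) dx equals t - r <= t, uniformly over the shift z.
for z in (-3.0, 0.0, 1.7):
    integral = np.sum(G(t - r, x - z)) * dx
    assert abs(integral - (t - r)) < 1e-3
```

Since each integration step consumes one kernel or one copy of $\gamma$ and costs at most a constant, only the four $B_{R}$-integrals contribute the volume factor, which is how the bound $\mathcal{A}_{R,j}\lesssim R^{d}$ arises.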
In the second part of this subsection, we show the f.d.d. convergence in
Theorem 1.4-(1).
Fix an integer $m\geq 1$ and choose $t_{1},\dots,t_{m}\in(0,\infty)$. Put
$\mathbf{F}_{R}=\big{(}F_{R}(t_{1}),\dots,F_{R}(t_{m})\big{)}$. Then, by the
result on limiting covariance structure from Section 4.1.1, we have that the
covariance matrix of $R^{-d/2}\mathbf{F}_{R}$, denoted by $\mathcal{C}_{R}$,
converges to the matrix $\mathcal{C}=(\mathcal{C}_{ij}:1\leq i,j\leq m)$, with
$\mathcal{C}_{ij}=\omega_{d}\sum_{p\geq
1}p!\int_{\mathbb{R}^{d}}\big{\langle}\widetilde{f}_{t_{i},x,p},\widetilde{f}_{t_{j},0,p}\big{\rangle}_{\mathcal{H}^{\otimes
p}}dx.$
Since $F_{R}(t)=\delta(-DL^{-1}F_{R}(t))$, according to [25, Theorem 6.1.2]
(note that there is a typo in Theorem 6.1.2 of [25]: in (6.1.3) of [25], one
should read $d/2$ instead of $1/2$), for any twice differentiable function
$h:\mathbb{R}^{m}\to\mathbb{R}$ with bounded second partial derivatives,
$\displaystyle\Big{|}\mathbb{E}\big{[}h(R^{-d/2}\mathbf{F}_{R})-h(\mathbf{Z})\big{]}\Big{|}\leq\Big{|}\mathbb{E}\big{[}h(R^{-d/2}\mathbf{F}_{R})-h(\mathbf{Z}_{R})\big{]}\Big{|}+\Big{|}\mathbb{E}\big{[}h(\mathbf{Z})-h(\mathbf{Z}_{R})\big{]}\Big{|}$
$\displaystyle\leq\frac{m}{2R^{d}}\|h^{\prime\prime}\|_{\infty}\sqrt{\sum_{i,j=1}^{m}{\rm
Var}\Big{(}\big{\langle}DF_{R}(t_{i}),-DL^{-1}F_{R}(t_{j})\big{\rangle}_{\mathcal{H}}\Big{)}}+\Big{|}\mathbb{E}\big{[}h(\mathbf{Z})-h(\mathbf{Z}_{R})\big{]}\Big{|},$
(4.35)
with $\mathbf{Z}_{R}\sim N\big{(}0,\mathcal{C}_{R}\big{)}$, $\mathbf{Z}\sim
N\big{(}0,\mathcal{C}\big{)}$ and
$\|h^{\prime\prime}\|_{\infty}=\sup\big{\\{}\big{|}\frac{\partial^{2}}{\partial
x_{i}\partial x_{j}}h(x)\big{|}:x\in\mathbb{R}^{m},i,j=1,\dots,m\big{\\}}$. It
is clear that the second term in (4.35) tends to zero as $R\to\infty$. For the
variance term in (4.35), taking advantage of Proposition 1.9 applied to
$F=F_{R}(t_{i})$ and $G=F_{R}(t_{j})$ and using arguments analogous to those
employed to derive (4.31), we obtain
${\rm
Var}\Big{(}\big{\langle}DF_{R}(t_{i}),-DL^{-1}F_{R}(t_{j})\big{\rangle}_{\mathcal{H}}\Big{)}\lesssim
R^{d}.$
Thus, the first term in (4.35) is $O(R^{-d/2})$, implying that
$\mathbb{E}\big{[}h(R^{-d/2}\mathbf{F}_{R})-h(\mathbf{Z})\big{]}$ converges to
zero as $R\to\infty$. This shows the convergence of the finite-dimensional
distributions of $\\{R^{-d/2}F_{R}(t):t\in\mathbb{R}_{+}\\}$ to those of the
centered Gaussian process $\mathcal{G}$, whose covariance structure is given
by
$\mathbb{E}\big{[}\mathcal{G}(t)\mathcal{G}(s)\big{]}=\omega_{d}\sum_{p\geq
1}p!\int_{\mathbb{R}^{d}}\big{\langle}\widetilde{f}_{t,x,p},\widetilde{f}_{s,0,p}\big{\rangle}_{\mathcal{H}^{\otimes
p}}dx,\;\text{for $s,t\in[0,\infty)$}.$
This concludes the proof of part (1) in Theorem 1.4. $\square$
#### 4.2.2 Proofs of parts (2) and (3)
In part (2), in view of the dominance of the first chaos, we have already
obtained in Section 4.1.2 that the finite-dimensional distributions of the
process $\big{\\{}R^{-d+\frac{\beta}{2}}F_{R}(t):t\in\mathbb{R}_{+}\big{\\}}$
converge to those of a centered Gaussian process
$\\{\mathcal{G}_{\beta}(t)\\}_{t\in\mathbb{R}_{+}}$, whose covariance
structure is given by (1.19). For the same reason, the convergence of the
finite-dimensional distributions in part (3) follows from (4.24), (4.25),
(4.27) and (4.28).
In this section, we show that:
$\displaystyle d_{\rm
TV}\big{(}F_{R}(t)/\sigma_{R}(t),Z\big{)}\lesssim\begin{cases}R^{-\beta/2}&\text{in
part (2)},\\\ R^{-\frac{1}{2}(\beta_{1}+\beta_{2})}&\text{in part (3) case
$(a^{\prime})$},\\\ R^{-(1+\beta)/2}&\text{in part (3) case
$(b^{\prime})$,}\end{cases}$ (4.36)
where $Z\sim N(0,1)$. Taking into account (4.30) and the variance estimates in
Section 4.1.2 and Section 4.1.3, in order to get (4.36) it suffices to show
that, for $j\in\\{1,2,3,4\\}$ and for $R\geq t$,
$\displaystyle\mathcal{A}_{R,j}\lesssim\begin{cases}R^{4d-3\beta}&\text{in
part (2)},\\\ R^{8-3(\beta_{1}+\beta_{2})}&\text{in case $(a^{\prime})$ of
part (3),}\\\ R^{5-3\beta}&\text{in case $(b^{\prime})$ of part
(3).}\end{cases}$ (4.37)
Since the total-variation distance is always bounded by one, the bound (4.36)
still holds for $R<t$ by choosing the implicit constant large enough.
The rest of this section is then devoted to proving (4.37) for $R\geq t$ and
for $j\in\\{1,2,3,4\\}$.
###### Proof of (4.37).
Let us first consider the term $\mathcal{A}_{R,1}$, which can be expressed as
$\displaystyle\mathcal{A}_{R,1}$
$\displaystyle=\int_{[0,t]^{6}}drdr^{\prime}dsds^{\prime}d\theta
d\theta^{\prime}\gamma_{0}(r-r^{\prime})\gamma_{0}(s-s^{\prime})\gamma_{0}(\theta-\theta^{\prime})\mathbf{S}_{1,R}.$
with
$\displaystyle\mathbf{S}_{1,R}:$
$\displaystyle=\int_{\mathbb{R}^{6d}}dzdz^{\prime}dydy^{\prime}dwdw^{\prime}\gamma(w-w^{\prime})\gamma(y-y^{\prime})\gamma(z-z^{\prime})\int_{B_{R}^{4}}d\boldsymbol{x_{4}}G_{t-r}(x_{1}-z)$
$\displaystyle\quad\times
G_{r-\theta}(z-w)G_{t-s}(x_{2}-y)G_{s-\theta^{\prime}}(y-w^{\prime})G_{t-r^{\prime}}(x_{3}-z^{\prime})G_{t-s^{\prime}}(x_{4}-y^{\prime}).$
From now on, when $d=2$, we write
$(w,w^{\prime},y,y^{\prime},z,z^{\prime})=(w_{1},w_{2},w^{\prime}_{1},w^{\prime}_{2},y_{1},y_{2},y^{\prime}_{1},y^{\prime}_{2},z_{1},z_{2},z^{\prime}_{1},z^{\prime}_{2})$
and then $dy=dy_{1}dy_{2}$; note also that $x_{1},\dots,x_{4}$ denote the
dummy variables in $\mathbb{R}^{d}$. By making the following change of
variables
$\displaystyle(z,z^{\prime},y,y^{\prime},w,w^{\prime},x_{1},x_{2},x_{3},x_{4})\to
R(z,z^{\prime},y,y^{\prime},w,w^{\prime},x_{1},x_{2},x_{3},x_{4})$ (4.38)
and using the scaling property $G_{t}(Rz)=R^{1-d}G_{tR^{-1}}(z)$ for
$d\in\\{1,2\\}$, we get
$\displaystyle\mathbf{S}_{1,R}=R^{6+4d}\int_{[-2,2]^{6d}}dzdz^{\prime}dydy^{\prime}dwdw^{\prime}\gamma(Rw-
Rw^{\prime})\gamma(Ry-Ry^{\prime})\gamma(Rz-
Rz^{\prime})\int_{B_{1}^{4}}d\boldsymbol{x_{4}}$ $\displaystyle\quad\times
G_{\frac{t-r}{R}}(x_{1}-z)G_{\frac{r-\theta}{R}}(z-w)G_{\frac{t-s}{R}}(x_{2}-y)G_{\frac{s-\theta^{\prime}}{R}}(y-w^{\prime})G_{\frac{t-r^{\prime}}{R}}(x_{3}-z^{\prime})G_{\frac{t-s^{\prime}}{R}}(x_{4}-y^{\prime}).$
(4.39)
Note that we have replaced the integral domain $\mathbb{R}^{6d}$ by
$[-2,2]^{6d}$ in (4.39) without changing the value of $\mathbf{S}_{1,R}$,
because, for example, $x_{1}\in B_{1}$ and $|x_{1}-z|\leq(t-r)/R$ implies
$|z|\leq 1+tR^{-1}\leq 2$ while $|z-w|\leq(r-\theta)/R$ and
$|x_{1}-z|\leq(t-r)/R$ imply $|w|\leq(t-\theta)R^{-1}+1\leq 2$.
In view of the expression of $\gamma$ in part (2) and part (3), we write, for
$z\in\mathbb{R}^{d}$ ($z=(z_{1},z_{2})\in\mathbb{R}^{2}$ when $d=2$),
$\displaystyle\gamma(Rz)=\begin{cases}R^{-\beta}\gamma(z)&\text{in part
(2)},\\\ R^{-\beta_{1}-\beta_{2}}\gamma(z)&\text{in case $(a^{\prime})$ of
part (3)},\\\ R^{-\beta}\gamma_{1}(Rz_{1})\gamma_{2}(z_{2})&\text{in case
$(b^{\prime})$ of part (3)},\end{cases}$
and it is easy to see that
$\displaystyle\sup_{z^{\prime}\in[-2,2]^{d}}\int_{[-2,2]^{d}}\gamma(Rz-
Rz^{\prime})dz\leq\begin{cases}{\displaystyle
R^{-\beta}\int_{[-4,4]^{d}}\gamma(z)dz<\infty}&\text{in part (2)},\\\ \quad\\\
{\displaystyle
R^{-\beta_{1}-\beta_{2}}\int_{[-4,4]^{d}}\gamma(z)dz<\infty}&\text{in case
$(a^{\prime})$ of part (3)},\\\ \quad\\\ {\displaystyle
R^{-\beta-1}\gamma_{1}(\mathbb{R})\int_{-4}^{4}\gamma_{2}(s)ds<\infty}&\text{in
case $(b^{\prime})$ of part (3)}.\end{cases}$
To ease the notation, we just rewrite the above estimates as
$\displaystyle\sup_{z^{\prime}\in[-2,2]^{d}}\int_{[-2,2]^{d}}\gamma(Rz-
Rz^{\prime})dz\lesssim R^{-\alpha}$ (4.40)
with $\alpha=\beta$ in part (2), $\alpha=\beta_{1}+\beta_{2}$ in case
$(a^{\prime})$ of part (3), and $\alpha=1+\beta$ in case $(b^{\prime})$ of
part (3).
To estimate $\mathcal{A}_{R,1}$, we can use (4.40) and (4.41) to perform the
integration with respect to $dx_{1},dx_{2},dx_{4}$,
$dy^{\prime},dy,dw^{\prime},dw$,
$dz,dz^{\prime},dx_{3}$ successively. More precisely, performing the
integration with respect to $dx_{1},dx_{2},dx_{4}$ and using the fact
$\displaystyle\sup_{(s,z^{\prime})\in[0,t]\times\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}G_{s/R}(z-z^{\prime})dz=t/R$
(4.41)
gives us
$\displaystyle\mathbf{S}_{1,R}$ $\displaystyle\leq
R^{3+4d}t^{3}\int_{[-2,2]^{6d}}dzdz^{\prime}dydy^{\prime}dwdw^{\prime}\gamma(Rw-
Rw^{\prime})\gamma(Ry-Ry^{\prime})\gamma(Rz-Rz^{\prime})\int_{B_{1}}dx_{3}$
$\displaystyle\qquad\times
G_{\frac{r-\theta}{R}}(z-w)G_{\frac{s-\theta^{\prime}}{R}}(y-w^{\prime})G_{\frac{t-r^{\prime}}{R}}(x_{3}-z^{\prime})$
$\displaystyle\lesssim
R^{3+4d}R^{-\alpha}\int_{[-2,2]^{5d}}dzdz^{\prime}dydwdw^{\prime}\gamma(Rw-
Rw^{\prime})\gamma(Rz-Rz^{\prime})\int_{B_{1}}dx_{3}$
$\displaystyle\quad\times
G_{\frac{r-\theta}{R}}(z-w)G_{\frac{s-\theta^{\prime}}{R}}(y-w^{\prime})G_{\frac{t-r^{\prime}}{R}}(x_{3}-z^{\prime})\quad\text{by
integrating out $dy^{\prime}$ and using (4.40)}$ $\displaystyle\lesssim
R^{2+4d-\alpha}\int_{[-2,2]^{4d}}dzdz^{\prime}dwdw^{\prime}\gamma(Rw-
Rw^{\prime})\gamma(Rz-Rz^{\prime})\int_{B_{1}}dx_{3}$
$\displaystyle\quad\times
G_{\frac{r-\theta}{R}}(z-w)G_{\frac{t-r^{\prime}}{R}}(x_{3}-z^{\prime})\quad\text{by
integrating out $dy$ and using (4.41)}$ $\displaystyle\lesssim
R^{2+4d-2\alpha}\int_{[-2,2]^{3d}}dzdz^{\prime}dw\gamma(Rz-
Rz^{\prime})\int_{B_{1}}dx_{3}G_{\frac{r-\theta}{R}}(z-w)G_{\frac{t-r^{\prime}}{R}}(x_{3}-z^{\prime})$
by integrating out $dw^{\prime}$ and using (4.40); then, using (4.41) to
integrate out $dw$ $\displaystyle\lesssim
R^{1+4d-2\alpha}\int_{[-2,2]^{2d}}dzdz^{\prime}\gamma(Rz-
Rz^{\prime})\int_{B_{1}}dx_{3}G_{\frac{t-r^{\prime}}{R}}(x_{3}-z^{\prime})\lesssim
R^{4d-3\alpha}$
where the last inequality is obtained by integrating out $dz,dz^{\prime}$,
$dx_{3}$ one by one and using (4.40) and (4.41). The bound
$\mathbf{S}_{1,R}\lesssim R^{4d-3\alpha}=\begin{cases}R^{4d-3\beta}&\text{in
part (2)},\\\ R^{8-3\beta_{1}-3\beta_{2}}&\text{in case $(a^{\prime})$ of part
(3)},\\\ R^{5-3\beta}&\text{in case $(b^{\prime})$ of part (3)}\end{cases}$
is uniform over
$(r,r^{\prime},s,s^{\prime},\theta,\theta^{\prime})\in[0,t]^{6}$, and hence we
obtain (4.37) for $j=1$. For the other terms
$\mathcal{A}_{R,2},\mathcal{A}_{R,3}$ and $\mathcal{A}_{R,4}$, the arguments
are the same: We first go through the same change of variables (4.38) to
obtain terms $\mathbf{S}_{j,R}$ similar to $\mathbf{S}_{1,R}$ in (4.39), and
then use the facts (4.40) and (4.41) to perform one-by-one integration with
respect to the variables
$\begin{cases}dx_{1},dx_{3},dz^{\prime},dz,dx_{2},dw,dw^{\prime},dy,dy^{\prime},dx_{4}\quad\text{for
estimating $\mathcal{A}_{R,2}$}\\\
dx_{4},dy^{\prime},dx_{2},dy,dw^{\prime},dx_{1},dw,dz,dz^{\prime},dx_{3}\quad\text{for
estimating $\mathcal{A}_{R,3}$}\\\
dx_{1},dx_{3},dx_{2},dz^{\prime},dz,dw,dw^{\prime},dy,dy^{\prime},dx_{4}\quad\text{for
estimating $\mathcal{A}_{R,4}$}\end{cases}.$
This concludes the proof of (4.37) and hence completes the proof of (4.36). ∎
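The unit-mass fact (4.41), used repeatedly above, can be checked directly in $d=1$. The sketch below is an illustration only; it again assumes the kernel $G_{u}(z)=\tfrac12\mathbf{1}_{\{|z|<u\}}$, which integrates to $u$, so that the supremum over $s\in[0,t]$ in (4.41) equals $t/R$.

```python
# Hedged check of (4.41): ∫ G_{s/R}(z) dz = s/R for the assumed d = 1 kernel
# G_u(z) = (1/2) * 1_{|z| < u}; the sup over s in [0, t] is attained at s = t.
def kernel_mass(u, n=100000):
    """Riemann-sum approximation of ∫ G_u(z) dz over [-u, u]."""
    h = 2.0 * u / n
    return sum(0.5 * h for _ in range(n))

t, R = 3.0, 10.0
masses = {s: kernel_mass(s / R) for s in (0.5, 1.5, 3.0)}
for s, m in masses.items():
    assert abs(m - s / R) < 1e-9
assert abs(max(masses.values()) - t / R) < 1e-9  # sup over s equals t/R
```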
### 4.3 Tightness
This section is devoted to establishing the tightness in Theorem 1.4. This,
together with the results in Sections 4.1 and 4.2, will conclude the proof of
Theorem 1.4. To get the tightness, we appeal to the Kolmogorov-Chentsov
criterion (see _e.g._ [17, Corollary 16.9]). Put
$\displaystyle\sigma_{R}=\begin{cases}R^{d/2}&\text{in part (1) of Theorem
1.4}\\\ R^{d-\frac{\beta}{2}}&\text{in part (2) of Theorem 1.4}\\\
R^{2-\frac{1}{2}(\beta_{1}+\beta_{2})}&\text{in part (3)-$(a^{\prime})$ of
Theorem 1.4}\\\ R^{(3-\beta)/2}&\text{in part (3)-$(b^{\prime})$ of
Theorem 1.4}\end{cases}$ (4.42)
and we will show, for any fixed $T>0$, that the following inequality holds for
any integer $k\geq 2$ and any $0<s<t\leq T\leq R$:
$\displaystyle\big{\|}F_{R}(t)-F_{R}(s)\big{\|}_{k}\lesssim(t-s)\sigma_{R},$
(4.43)
where the implicit constant does not depend on $R,s$ or $t$. This moment
estimate (4.43) ensures the tightness of
$\big{\\{}\sigma_{R}^{-1}F_{R}(t):t\in[0,T]\big{\\}}$ for any fixed $T>0$ and,
therefore, the desired tightness on $\mathbb{R}_{+}$ holds.
To show the above moment estimate (4.43) for the increment
$F_{R}(t)-F_{R}(s)$, we begin with the chaos expansion
$F_{R}(t)-F_{R}(s)=\sum_{n\geq
1}I_{n}\left(\int_{B_{R}}dx[f_{t,x,n}-f_{s,x,n}]\right)=\sum_{n\geq
1}I_{n}\big{(}g_{n,R}\big{)},$
where $s,t$ are fixed, so we leave them out of the subscript of the kernel
$g_{n,R}$ and
$\displaystyle
g_{n,R}(\boldsymbol{s_{n}},\boldsymbol{y_{n}})=\Big{[}\varphi_{t,R}(s_{1},y_{1})-\varphi_{s,R}(s_{1},y_{1})\Big{]}\prod_{j=1}^{n-1}G_{s_{j}-s_{j+1}}(y_{j}-y_{j+1})$
(4.44)
with $\prod_{j=1}^{0}=1$ and $\varphi_{t,R}(r,y):=\int_{B_{R}}G_{t-r}(x-y)dx$.
The rest of this section is then devoted to proving (4.43).
###### Proof of (4.43).
By the triangle inequality and using the moment estimate (2.15), we get, for
any $k\in[2,\infty)$,
$\displaystyle\big{\|}F_{R}(t)-F_{R}(s)\big{\|}_{k}\leq\sum_{n\geq
1}(k-1)^{n/2}\left\|I_{n}\left(g_{n,R}\right)\right\|_{2}.$
Note that the kernel $g_{n,R}=0$ outside $[0,t]^{n}\times\mathbb{R}^{dn}$.
Then, using (2.8) and (2.13), we can write
$\displaystyle\big{\|}F_{R}(t)-F_{R}(s)\big{\|}_{k}\leq\sum_{n\geq
1}\big{[}\Gamma_{t}(k-1)\big{]}^{n/2}\Big{(}n!\|\widetilde{g}_{n,R}\|_{\mathcal{H}_{0}^{\otimes
n}}^{2}\Big{)}^{1/2},$
where $\widetilde{g}_{n,R}$ is the canonical symmetrization of $g_{n,R}$:
$\widetilde{g}_{n,R}(\boldsymbol{s_{n}},\boldsymbol{y_{n}})=\frac{1}{n!}\sum_{\sigma\in\mathfrak{S}_{n}}\Big{[}\varphi_{t,R}(s_{\sigma(1)},y_{\sigma(1)})-\varphi_{s,R}(s_{\sigma(1)},y_{\sigma(1)})\Big{]}\prod_{j=1}^{n-1}G_{s_{\sigma(j)}-s_{\sigma(j+1)}}(y_{\sigma(j)}-y_{\sigma(j+1)}).$
With the convention (1.6) in mind, we can write
$\displaystyle n!\|\widetilde{g}_{n,R}\|_{\mathcal{H}_{0}^{\otimes
n}}^{2}=\int_{t>s_{1}>\cdots>s_{n}>0}d\boldsymbol{s_{n}}\int_{\mathbb{R}^{2nd}}\Big{[}\varphi_{t,R}(s_{1},y_{1})-\varphi_{s,R}(s_{1},y_{1})\Big{]}\left(\prod_{j=1}^{n-1}G_{s_{j}-s_{j+1}}(y_{j}-y_{j+1})\right)$
$\displaystyle\qquad\qquad\qquad\times\Big{[}\varphi_{t,R}(s_{1},y^{\prime}_{1})-\varphi_{s,R}(s_{1},y^{\prime}_{1})\Big{]}\left(\prod_{j=1}^{n-1}G_{s_{j}-s_{j+1}}(y^{\prime}_{j}-y^{\prime}_{j+1})\right)\prod_{j=1}^{n}\gamma(y_{j}-y_{j}^{\prime})dy_{j}dy_{j}^{\prime}.$
Then, using Fourier transform, we can rewrite
$n!\|\widetilde{g}_{n,R}\|_{\mathcal{H}_{0}^{\otimes n}}^{2}$ as follows:
$\displaystyle n!\|\widetilde{g}_{n,R}\|_{\mathcal{H}_{0}^{\otimes
n}}^{2}=\int_{t>s_{1}>\cdots>s_{n}>0}d\boldsymbol{s_{n}}\int_{\mathbb{R}^{nd}}\mu(d\boldsymbol{\xi_{n}})\big{|}\mathcal{F}\mathbf{1}_{B_{R}}\big{|}^{2}(\xi_{1}+\cdots+\xi_{n})$
$\displaystyle\qquad\qquad\times\big{|}\widehat{G}_{t-s_{1}}(\xi_{1}+\cdots+\xi_{n})-\widehat{G}_{s-s_{1}}(\xi_{1}+\cdots+\xi_{n})\big{|}^{2}\prod_{j=1}^{n-1}\big{|}\widehat{G}_{s_{j}-s_{j+1}}\big{|}^{2}(\xi_{j+1}+\cdots+\xi_{n}).$
(4.45)
Recall the expression (2.29) $\widehat{G}_{t}(\xi)=\frac{\sin(t|\xi|)}{|\xi|}$
and note that it is a $1$-Lipschitz function in the variable $t$, uniformly
over $\xi\in\mathbb{R}^{d}$. Then
$\big{|}\widehat{G}_{t-s_{1}}(\xi_{1}+\cdots+\xi_{n})-\widehat{G}_{s-s_{1}}(\xi_{1}+\cdots+\xi_{n})\big{|}^{2}\leq(t-s)^{2}.$
Therefore, plugging this inequality into (4.45) and then applying Lemma 2.6
yields
$\displaystyle n!\|\widetilde{g}_{n,R}\|_{\mathcal{H}_{0}^{\otimes n}}^{2}$
$\displaystyle\leq(t-s)^{2}\int_{t>s_{1}>\cdots>s_{n}>0}d\boldsymbol{s_{n}}\left(\int_{\mathbb{R}^{d}}\mu(d\xi)\big{|}\mathcal{F}\mathbf{1}_{B_{R}}\big{|}^{2}(\xi)\right)\prod_{j=1}^{n-1}\int_{\mathbb{R}^{d}}\mu(d\xi_{j})\big{|}\widehat{G}_{s_{j}-s_{j+1}}\big{|}^{2}(\xi_{j})$
$\displaystyle\leq(t-s)^{2}\frac{t^{n}}{n!}\left(2(t^{2}\vee
1)\int_{\mathbb{R}^{d}}\frac{\mu(d\xi)}{1+|\xi|^{2}}\right)^{n-1}\int_{\mathbb{R}^{d}}\mu(d\xi)\big{|}\mathcal{F}\mathbf{1}_{B_{R}}\big{|}^{2}(\xi),$
which is finite since $\mathbf{1}_{B_{R}}\in\mathcal{P}_{0}$. Using Fourier
transform, we can write
$\displaystyle\int_{\mathbb{R}^{d}}\mu(d\xi)\big{|}\mathcal{F}\mathbf{1}_{B_{R}}\big{|}^{2}(\xi)=\int_{\mathbb{R}^{2d}}\mathbf{1}_{B_{R}}(x)\mathbf{1}_{B_{R}}(y)\gamma(x-y)dxdy.$
Now let us consider the cases in (4.42).
In part (1) where $\gamma\in L^{1}(\mathbb{R}^{d})$,
$\int_{\mathbb{R}^{2d}}\mathbf{1}_{B_{R}}(x)\mathbf{1}_{B_{R}}(y)\gamma(x-y)dxdy\leq\gamma(\mathbb{R}^{d})\omega_{d}R^{d}\lesssim\sigma_{R}^{2}.$
In the other cases, we can make the change of variables $(x,y)\to R(x,y)$ to
obtain
$\displaystyle\int_{\mathbb{R}^{2d}}\mathbf{1}_{B_{R}}(x)\mathbf{1}_{B_{R}}(y)\gamma(x-y)dxdy$
$\displaystyle=R^{2d}\int_{\mathbb{R}^{2d}}\mathbf{1}_{B_{1}}(x)\mathbf{1}_{B_{1}}(y)\gamma(Rx-
Ry)dxdy$ $\displaystyle\lesssim R^{2d-\alpha}=\sigma_{R}^{2},$
using (4.40) with $\alpha=\beta$ in part (2), $\alpha=\beta_{1}+\beta_{2}$ in
case $(a^{\prime})$, and $\alpha=1+\beta$ in case $(b^{\prime})$.
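As a consistency check of the normalization (no new input; only the exponents of (4.42) and the values of $\alpha$ above, with $d=2$ in part (3)):

```latex
\sigma_R^2=R^{2d-\alpha}:\qquad
\begin{cases}
\big(R^{d-\beta/2}\big)^{2}=R^{2d-\beta}, & \alpha=\beta \text{ (part (2))},\\[2pt]
\big(R^{2-\frac{1}{2}(\beta_1+\beta_2)}\big)^{2}=R^{4-\beta_1-\beta_2}, & \alpha=\beta_1+\beta_2 \text{ (case $(a')$)},\\[2pt]
\big(R^{(3-\beta)/2}\big)^{2}=R^{3-\beta}=R^{4-(1+\beta)}, & \alpha=1+\beta \text{ (case $(b')$)}.
\end{cases}
```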
As a consequence, we get
$n!\|\widetilde{g}_{n,R}\|_{\mathcal{H}_{0}^{\otimes
n}}^{2}\leq\frac{C^{n}}{n!}\sigma_{R}^{2}(t-s)^{2},$
and therefore,
$\big{\|}F_{R}(t)-F_{R}(s)\big{\|}_{k}\leq|t-s|\sigma_{R}\sum_{n\geq
1}\big{[}C\Gamma_{t}(k-1)\big{]}^{n/2}\frac{1}{\sqrt{n!}},$
which leads to (4.43). ∎
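The $1$-Lipschitz bound on $t\mapsto\widehat{G}_{t}(\xi)=\sin(t|\xi|)/|\xi|$ used in the proof can be sanity-checked numerically; the sketch below is an illustration only, with the conventional extension $\widehat{G}_{t}(0):=t$ by continuity.

```python
import math

# Check |Ĝ_t(ξ) − Ĝ_s(ξ)| ≤ |t − s| for Ĝ_t(ξ) = sin(t|ξ|)/|ξ|,
# uniformly over ξ: the t-derivative is cos(t|ξ|), bounded by 1.
def G_hat(t, xi):
    return math.sin(t * abs(xi)) / abs(xi) if xi else t

for xi in (0.0, 0.01, 1.0, 7.3, 100.0):
    for s, t in ((0.2, 0.5), (1.0, 1.001), (0.0, 2.0)):
        assert abs(G_hat(t, xi) - G_hat(s, xi)) <= abs(t - s) + 1e-12
```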
## 5 Proof of Theorem 1.10
We argue as in the proof of Theorem 1.2 of [2]. As we explained in the
introduction, it suffices to show that for each $m\geq 1$,
$\|Du(t,x)\|_{\mathcal{H}}>0\quad\mbox{a.s. on}\ \Omega_{m},$
where $\Omega_{m}=\\{|u(t,x)|\geq 1/m\\}$.
We claim that, almost surely, the function $(s,y)\mapsto D_{s,y}u(t,x)$
satisfies the assumptions of Lemma A.1. Indeed, for $d=2$, by Minkowski’s
inequality and the estimate (1.11), we have
$\displaystyle\mathbb{E}\left(\int_{0}^{t}ds\left(\int_{\mathbb{R}^{2}}|D_{s,y}u(t,x)|^{2q}dy\right)^{1/q}\right)$
$\displaystyle\leq\int_{0}^{t}ds\left(\int_{\mathbb{R}^{2}}\Big{|}\mathbb{E}\big{[}|D_{s,y}u(t,x)|^{2}\big{]}\Big{|}^{q}dy\right)^{1/q}$
$\displaystyle\leq
C\int_{0}^{t}ds\left(\int_{\mathbb{R}^{2}}G^{2q}_{t-s}(x-y)dy\right)^{1/q}<\infty.$
For $d=1$, again by the estimate (1.11),
$\mathbb{E}\left(\int_{0}^{t}ds\left(\int_{\mathbb{R}}|D_{s,y}u(t,x)|^{2}dy\right)\right)\leq
C\int_{0}^{t}ds\int_{\mathbb{R}}G^{2}_{t-s}(x-y)dy<\infty.$
Moreover, $(s,y)\mapsto D_{s,y}u(t,x)$ has compact support on $[0,t]\times
B_{M}$ for some $M>0$. As a consequence, by Lemma A.1, it suffices to prove
that
$\int_{0}^{t}\|D_{r,\bullet}u(t,x)\|_{0}^{2}dr=\int_{0}^{t}\int_{\mathbb{R}^{2d}}D_{r,z}u(t,x)D_{r,z^{\prime}}u(t,x)\gamma(z-z^{\prime})dzdz^{\prime}dr>0~{}\mbox{a.s.
on $\Omega_{m}$}.$ (5.1)
As in the proof of Lemma 5.1 of [2], Corollaries 3.3 and 3.4 allow us to infer
that the $\mathcal{H}\otimes\mathcal{P}_{0}$-valued process $K^{(r)}$ defined
by
$K^{(r)}(s,y,z)=G_{t-s}(x-y)D_{r,z}u(s,y)$
belongs to the space $\mathbb{D}^{1,2}(\mathcal{H}\otimes\mathcal{P}_{0})$.
This is because, using Corollary 3.3, we can write
$\displaystyle\mathbb{E}\big{(}\|K^{(r)}\|_{\mathcal{H}\otimes\mathcal{P}_{0}}^{2}\big{)}$
$\displaystyle=\int_{[r,t]^{2}}\int_{\mathbb{R}^{2d}}G_{t-s}(x-y)G_{t-s^{\prime}}(x-y^{\prime})\mathbb{E}\Big{(}\big{\langle}D_{r,\bullet}u(s,y),D_{r,\bullet}u(s^{\prime},y^{\prime})\big{\rangle}_{0}\Big{)}$
$\displaystyle\qquad\times\gamma_{0}(s-s^{\prime})\gamma(y-y^{\prime})dydy^{\prime}dsds^{\prime}$
$\displaystyle\leq
C\int_{[r,t]^{2}}\int_{\mathbb{R}^{2d}}G_{t-s}(x-y)G_{t-s^{\prime}}(x-y^{\prime})\gamma_{0}(s-s^{\prime})\gamma(y-y^{\prime})dydy^{\prime}dsds^{\prime}<\infty,$
and in the same way, using Corollary 3.4 we can show that
$\mathbb{E}\big{(}\|DK^{(r)}\|_{\mathcal{H}\otimes\mathcal{H}\otimes\mathcal{P}_{0}}^{2}\big{)}<\infty$.
Therefore, the process $K^{(r)}$ belongs to the domain of the
$\mathcal{P}_{0}$-valued Skorokhod integral, denoted by $\overline{\delta}$.
Then, using the same arguments as in the proof of Proposition 5.2 of [2],
replacing $L^{2}(\mathbb{R})$ by $\mathcal{P}_{0}$, we can show that for any
$r\in[0,t]$, the following equation holds in $L^{2}(\Omega;\mathcal{P}_{0})$:
$D_{r,\bullet}u(t,x)=G_{t-r}(x-\bullet)u(r,\bullet)+\int_{r}^{t}\int_{\mathbb{R}^{d}}G_{t-s}(x-y)D_{r,\bullet}u(s,y)W(\overline{\delta}s,\overline{\delta}y).$
(5.2)
Let $\delta\in(0,t\wedge 1)$ be arbitrary. Due to relation (5.2) we have,
almost surely,
$\displaystyle\int_{0}^{t}\|D_{r,\bullet}u(t,x)\|^{2}_{0}\,dr$
$\displaystyle\geq\int_{t-\delta}^{t}\|D_{r,\bullet}u(t,x)\|^{2}_{0}\,dr\geq\frac{1}{2}\int_{t-\delta}^{t}\|G_{t-r}(x-\bullet)u(r,\bullet)\|^{2}_{0}\,dr-I(\delta),$
(5.3)
where
$\displaystyle I(\delta)$
$\displaystyle=\int_{t-\delta}^{t}\left\|\int_{r}^{t}\int_{\mathbb{R}^{d}}G_{t-s}(x-y)D_{r,\bullet}u(s,y)W(\overline{\delta}s,\overline{\delta}y)\right\|^{2}_{0}\,dr$
$\displaystyle=\int_{t-\delta}^{t}\left\|\int_{t-\delta}^{t}\int_{\mathbb{R}^{d}}G_{t-s}(x-y)D_{r,\bullet}u(s,y)W(\overline{\delta}s,\overline{\delta}y)\right\|^{2}_{0}\,dr.$
On the event $\Omega_{m}=\\{|u(t,x)|\geq 1/m\\}$, we have
$\displaystyle\int_{t-\delta}^{t}\|G_{t-r}(x-\bullet)u(r,\bullet)\|_{0}^{2}dr=\int_{t-\delta}^{t}\int_{\mathbb{R}^{2d}}G_{t-r}(x-z)G_{t-r}(x-z^{\prime})u(r,z)u(r,z^{\prime})\gamma(z-z^{\prime})dzdz^{\prime}dr$
$\displaystyle=\int_{t-\delta}^{t}\int_{\mathbb{R}^{2d}}G_{t-r}(x-z)G_{t-r}(x-z^{\prime})u(t,x)^{2}\gamma(z-z^{\prime})dzdz^{\prime}dr$
$\displaystyle\quad-\int_{t-\delta}^{t}\int_{\mathbb{R}^{2d}}G_{t-r}(x-z)G_{t-r}(x-z^{\prime})\big{[}u(t,x)^{2}-u(r,z)u(r,z^{\prime})\big{]}\gamma(z-z^{\prime})dzdz^{\prime}dr$
$\displaystyle\geq\frac{1}{m^{2}}\psi_{0}(\delta)-J(\delta),$
where
$\displaystyle\psi_{0}(\delta)$
$\displaystyle:=\int_{t-\delta}^{t}\int_{\mathbb{R}^{2d}}G_{t-r}(x-z)G_{t-r}(x-z^{\prime})\gamma(z-z^{\prime})dzdz^{\prime}dr$
$\displaystyle=\int_{0}^{\delta}\int_{\mathbb{R}^{2d}}G_{r}(z)G_{r}(z^{\prime})\gamma(z-z^{\prime})dzdz^{\prime}dr$
and
$\displaystyle J(\delta)$
$\displaystyle:=\int_{t-\delta}^{t}\int_{\mathbb{R}^{2d}}G_{t-r}(x-z)G_{t-r}(x-z^{\prime})\gamma(z-z^{\prime})\Big{(}u(t,x)^{2}-u(r,z)u(r,z^{\prime})\Big{)}dzdz^{\prime}dr.$
Coming back to (5.3), we can write
$\int_{0}^{t}\|D_{r,\bullet}u(t,x)\|_{0}^{2}dr\geq\frac{1}{2m^{2}}\psi_{0}(\delta)-\frac{1}{2}J(\delta)-I(\delta)\quad\mbox{on}\quad\Omega_{m}.$
(5.4)
We now give upper bounds for the first moments of $J(\delta)$ and $I(\delta)$.
We will use the following facts, which were proved in [3]:
$\displaystyle C_{t}^{*}$
$\displaystyle:=\sup_{(s,y)\in[0,t]\times\mathbb{R}^{d}}\|u(s,y)\|_{2}<\infty\qquad(\text{see
also \eqref{calsoRem31} in Remark \ref{rem_Lp}})$ $\displaystyle
g_{t,x}(\delta)$
$\displaystyle:=\sup_{|t-s|<\delta}\sup_{|x-y|<\delta}\|u(t,x)-u(s,y)\|_{2}\to
0\quad\mbox{as}\ \delta\to 0.$
We first treat $J(\delta)$. By the Cauchy-Schwarz inequality, for any $r\in[0,t]$
and $z,z^{\prime}\in\mathbb{R}^{d}$,
$\displaystyle\mathbb{E}\big{[}|u(t,x)^{2}-u(r,z)u(r,z^{\prime})|\big{]}$
$\displaystyle\leq\|u(t,x)\|_{2}\|u(t,x)-u(r,z)\|_{2}+\|u(r,z)\|_{2}\|u(t,x)-u(r,z^{\prime})\|_{2}$
$\displaystyle\leq
C_{t}^{*}\Big{(}\|u(t,x)-u(r,z)\|_{2}+\|u(t,x)-u(r,z^{\prime})\|_{2}\Big{)}.$
Since $G_{t-r}(x-z)$ contains the indicator of the set $\\{|x-z|<t-r\\}$, we
obtain:
$\displaystyle\mathbb{E}(|J(\delta)|)$ $\displaystyle\leq
2C_{t}^{*}\int_{t-\delta}^{t}\int_{\mathbb{R}^{2d}}G_{t-r}(x-z)G_{t-r}(x-z^{\prime})\gamma(z-z^{\prime})\|u(t,x)-u(r,z)\|_{2}dzdz^{\prime}dr$
$\displaystyle\leq
2C_{t}^{*}\int_{t-\delta}^{t}\int_{\mathbb{R}^{2d}}G_{t-r}(x-z)G_{t-r}(x-z^{\prime})\gamma(z-z^{\prime})\sup_{\begin{subarray}{c}t-\delta<s<t\\\
|x-y|<\delta\end{subarray}}\|u(t,x)-u(s,y)\|_{2}dzdz^{\prime}dr.$
It follows that
$\mathbb{E}(|J(\delta)|)\leq 2C_{t}^{*}g_{t,x}(\delta)\psi_{0}(\delta).$ (5.5)
Next, we treat $I(\delta)$. Applying Proposition 6.2 of [1] to the
$\mathcal{P}_{0}$-valued process
$U(s,y)=\mathbf{1}_{[t-\delta,t]}(s)G_{t-s}(x-y)D_{r,\bullet}u(s,y)$
we obtain
$\mathbb{E}(\|\overline{\delta}(U)\|_{0}^{2})\leq\mathbb{E}(\|U\|_{\mathcal{H}\otimes\mathcal{P}_{0}}^{2})+\mathbb{E}(\|DU\|_{\mathcal{H}\otimes\mathcal{H}\otimes\mathcal{P}_{0}}^{2}).$
We have,
$\displaystyle\mathbb{E}(\|U\|_{\mathcal{H}\otimes\mathcal{P}_{0}}^{2})$
$\displaystyle=\mathbb{E}\Bigg{(}\int_{[t-\delta,t]^{2}}\int_{\mathbb{R}^{2d}}G_{t-s}(x-y)G_{t-s^{\prime}}(x-y^{\prime})\gamma_{0}(s-s^{\prime})\gamma(y-y^{\prime})$
$\displaystyle\qquad\qquad\times\big{\langle}D_{r,\bullet}u(s,y),D_{r,\bullet}u(s^{\prime},y^{\prime})\big{\rangle}_{0}dydy^{\prime}dsds^{\prime}\Bigg{)}$
and
$\displaystyle\mathbb{E}(\|DU\|_{\mathcal{H}\otimes\mathcal{H}\otimes\mathcal{P}_{0}}^{2})=\mathbb{E}\Bigg{(}\int_{[t-\delta,t]^{2}}\int_{[0,r]^{2}}\int_{\mathbb{R}^{4d}}G_{t-s}(x-y)G_{t-s^{\prime}}(x-y^{\prime})\gamma_{0}(s-s^{\prime})\gamma(y-y^{\prime})$
$\displaystyle\qquad\times\big{\langle}D^{2}_{(\theta,w),(r,\bullet)}u(s,y),D^{2}_{(\theta^{\prime},w^{\prime}),(r,\bullet)}u(s^{\prime},y^{\prime})\big{\rangle}_{0}\,\gamma_{0}(\theta-\theta^{\prime})\gamma(w-w^{\prime})dwdw^{\prime}dydy^{\prime}d\theta
d\theta^{\prime}dsds^{\prime}\Bigg{)}$
$\displaystyle\quad=\mathbb{E}\Bigg{(}\int_{[t-\delta,t]^{2}}\int_{\mathbb{R}^{2d}}G_{t-s}(x-y)G_{t-s^{\prime}}(x-y^{\prime})\gamma_{0}(s-s^{\prime})\gamma(y-y^{\prime})$
$\displaystyle\qquad\qquad\times\big{\langle}DD_{r,\bullet}u(s,y),DD_{r,\bullet}u(s^{\prime},y^{\prime})\big{\rangle}_{\mathcal{H}\otimes\mathcal{P}_{0}}dydy^{\prime}dsds^{\prime}\Bigg{)}.$
Hence, $\mathbb{E}(I(\delta))\leq I_{1}(\delta)+I_{2}(\delta)$, where
$\displaystyle I_{1}(\delta)$
$\displaystyle:=\mathbb{E}\Bigg{(}\int_{[t-\delta,t]^{3}}\int_{\mathbb{R}^{2d}}G_{t-s}(x-y)G_{t-s^{\prime}}(x-y^{\prime})\gamma_{0}(s-s^{\prime})\gamma(y-y^{\prime})$
$\displaystyle\qquad\times\big{\langle}D_{r,\bullet}u(s,y),D_{r,\bullet}u(s^{\prime},y^{\prime})\big{\rangle}_{0}dydy^{\prime}dsds^{\prime}dr\Bigg{)}$
and
$\displaystyle I_{2}(\delta)$
$\displaystyle:=\mathbb{E}\Bigg{(}\int_{[t-\delta,t]^{3}}\int_{\mathbb{R}^{2d}}G_{t-s}(x-y)G_{t-s^{\prime}}(x-y^{\prime})\gamma_{0}(s-s^{\prime})\gamma(y-y^{\prime})$
$\displaystyle\qquad\times\langle
DD_{r,\bullet}u(s,y),DD_{r,\bullet}u(s^{\prime},y^{\prime})\rangle_{\mathcal{H}\otimes\mathcal{P}_{0}}dydy^{\prime}dsds^{\prime}dr\Bigg{)}.$
Using the Cauchy-Schwarz inequality and Corollaries 3.3 and 3.4, we obtain:
$\mathbb{E}\Big{(}\big{|}\langle
D_{r,\bullet}u(s,y),D_{r,\bullet}u(s^{\prime},y^{\prime})\rangle_{0}\big{|}\Big{)}\leq
C_{t}\quad\mbox{and}\quad\mathbb{E}\Big{(}\big{|}\langle
DD_{r,\bullet}u(s,y),DD_{r,\bullet}u(s^{\prime},y^{\prime})\rangle_{\mathcal{H}\otimes\mathcal{P}_{0}}\big{|}\Big{)}\leq
C_{t}^{\prime\prime}.$
Hence,
$\mathbb{E}[I(\delta)]\leq(C_{t}+C_{t}^{\prime\prime})\delta\phi(\delta),$
(5.6)
where
$\displaystyle\phi(\delta):$
$\displaystyle=\int_{[t-\delta,t]^{2}}\int_{\mathbb{R}^{2d}}G_{t-s}(x-y)G_{t-s^{\prime}}(x-y^{\prime})\gamma_{0}(s-s^{\prime})\gamma(y-y^{\prime})dydy^{\prime}dsds^{\prime}$
$\displaystyle=\int_{[0,\delta]^{2}}\int_{\mathbb{R}^{2d}}G_{s}(y)G_{s^{\prime}}(y^{\prime})\gamma_{0}(s-s^{\prime})\gamma(y-y^{\prime})dydy^{\prime}dsds^{\prime}.$
(5.7)
Using (5.4), (5.5) and (5.6), we conclude the proof as follows. For any $n\geq
1$,
$\displaystyle\quad\mathbb{P}\left(\left\\{\int_{0}^{t}\|D_{r,\bullet}u(t,x)\|_{0}^{2}\,dr<\frac{1}{n}\right\\}\cap\Omega_{m}\right)\leq\mathbb{P}\left(I(\delta)+\frac{1}{2}J(\delta)>\frac{1}{2m^{2}}\psi_{0}(\delta)-\frac{1}{n}\right)$
$\displaystyle\leq\left(\frac{1}{2m^{2}}\psi_{0}(\delta)-\frac{1}{n}\right)^{-1}\Big{(}\mathbb{E}[I(\delta)]+\frac{1}{2}\mathbb{E}[|J(\delta)|]\Big{)}\leq\frac{(C_{t}+C_{t}^{\prime\prime})\delta\phi(\delta)+C_{t}^{*}g_{t,x}(\delta)\psi_{0}(\delta)}{\frac{1}{2m^{2}}\psi_{0}(\delta)-\frac{1}{n}}.$
Letting $n\to\infty$, we obtain:
$\mathbb{P}\left(\left\\{\int_{0}^{t}\|D_{r,\bullet}u(t,x)\|_{0}^{2}dr=0\right\\}\cap\Omega_{m}\right)\leq
2m^{2}\Big{(}(C_{t}+C_{t}^{\prime\prime})\delta\frac{\phi(\delta)}{\psi_{0}(\delta)}+C_{t}^{*}g_{t,x}(\delta)\Big{)}.$
Note that using Fourier transform and the expression (2.29), we can rewrite
(5.7) as
$\displaystyle\phi(\delta)$
$\displaystyle=\int_{[0,\delta]^{2}}\int_{\mathbb{R}^{d}}\widehat{G}_{s}(\xi)\widehat{G}_{s^{\prime}}(\xi)\gamma_{0}(s-s^{\prime})\mu(d\xi)dsds^{\prime}$
$\displaystyle\leq\int_{[0,\delta]^{2}}\int_{\mathbb{R}^{d}}\frac{1}{2}\Big{[}\widehat{G}_{s}(\xi)^{2}+\widehat{G}_{s^{\prime}}(\xi)^{2}\Big{]}\gamma_{0}(s-s^{\prime})\mu(d\xi)dsds^{\prime}\leq\Gamma_{\delta}\int_{[0,\delta]}\int_{\mathbb{R}^{d}}\widehat{G}_{s}(\xi)^{2}\mu(d\xi)ds,$
where $\Gamma_{\delta}=2\int_{0}^{\delta}\gamma_{0}(s)ds$. That is, we have
$\phi(\delta)\leq\Gamma_{\delta}\psi_{0}(\delta)$. Finally taking $\delta\to
0$ proves (5.1), since $g_{t,x}(\delta)\to 0$ and
$\delta\frac{\phi(\delta)}{\psi_{0}(\delta)}\leq\delta\Gamma_{\delta}\to 0$ as
$\delta\to 0$. ∎
## Appendix A Appendix
### A.1 Auxiliary Results
Let $d=2$ and assume Hypothesis ${\bf(H1)}$. Suppose that
$S:\mathbb{R}_{+}\times\mathbb{R}^{2}\to\mathbb{R}$ is a measurable function
such that $S\in L^{2}(\mathbb{R}_{+};L^{2q}(\mathbb{R}^{2}))$, where $q$ is
given in (2.20) in cases (a) and (b), and in (2.23) in case (c). We
assume also that $S$ has support in $[0,T]\times B_{M}$ for some $M>0$. We
claim that $S$ belongs to $\mathcal{H}$ and the following estimates hold true:
$\|S\|_{\mathcal{H}}\leq\sqrt{\Gamma_{T}}\|S\|_{\mathcal{H}_{0}}\leq\sqrt{\Gamma_{T}D_{\gamma}}\|S\|_{L^{2}(\mathbb{R}_{+};L^{2q}(\mathbb{R}^{2}))}.$
Indeed, the first inequality is due to (2.13) and the second one follows from
(2.25).
For $d=1$, if $S\in L^{2}(\mathbb{R}_{+}\times\mathbb{R})$ has support in
$[0,T]\times B_{M}$ for some $M>0$, then $S\in\mathcal{H}$ and the following
estimates hold true:
$\|S\|_{\mathcal{H}}\leq\sqrt{\Gamma_{T}}\|S\|_{\mathcal{H}_{0}}\leq\sqrt{\Gamma_{T}\|\gamma\mathbf{1}_{B_{2M}}\|_{L^{1}(\mathbb{R})}}\|S\|_{L^{2}(\mathbb{R}_{+}\times\mathbb{R})}.$
Indeed, the first inequality is due to (2.13) and the second one follows from
$\displaystyle\|S\|_{\mathcal{H}_{0}}^{2}$
$\displaystyle=\int_{0}^{T}\int_{\mathbb{R}^{2}}S(t,y)S(t,y^{\prime})\gamma(y-y^{\prime})dydy^{\prime}dt\leq\int_{0}^{T}\int_{\mathbb{R}^{2}}\frac{S^{2}(t,y)+S^{2}(t,y^{\prime})}{2}\gamma(y-y^{\prime})dydy^{\prime}dt$
and
$\sup_{y^{\prime}\in
B_{M}}\int_{B_{M}}\gamma(y-y^{\prime})dy\leq\int_{B_{2M}}\gamma(y)dy.$
Let us recall the Hypothesis ${\bf(H2)}$: The measures $\mu_{0}$ and $\mu$
such that $\gamma_{0}=\mathcal{F}\mu_{0}$ and $\gamma=\mathcal{F}\mu$ are
absolutely continuous with respect to the Lebesgue measures with strictly
positive densities.
###### Lemma A.1.
Fix $d\in\\{1,2\\}$ and assume that the Hypothesis ${\bf(H2)}$ holds. Let the
Hypothesis ${\bf(H1)}$ hold if in addition $d=2$. Suppose that the function
$S:\mathbb{R}_{+}\times\mathbb{R}^{d}\to\mathbb{R}$ has support in
$[0,T]\times B_{M}$ for some $M>0$ and $S\in
L^{2}\big{(}\mathbb{R}_{+};L^{2q}(\mathbb{R}^{d})\big{)}$, where
$\displaystyle\begin{cases}\text{$q$ is given by (2.20) in cases
{\rm({a})} and {\rm({b})} and by (2.23) in case {\rm({c})} if $d=2$,
}\\\ \text{$q=1$ if $d=1$.}\end{cases}$
If
$I:=\int_{0}^{T}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}S(t,x)S(t,y)\gamma(x-y)dxdydt>0,$
(A.1)
then $\|S\|_{\mathcal{H}}>0$.
###### Proof.
Suppose that $\|S\|_{\mathcal{H}}=0$. There exists a sequence of smooth
functions $(\psi_{k})_{k\geq 1}$ in
$C^{\infty}(\mathbb{R}_{+}\times\mathbb{R}^{d})$, with support in $[0,T]\times
B_{M}$, which converges to $S$ in
$L^{2}(\mathbb{R}_{+};L^{2q}(\mathbb{R}^{d}))$. Then,
$0=\|S\|_{\mathcal{H}}^{2}=\lim_{k\rightarrow\infty}\|\psi_{k}\|^{2}_{\mathcal{H}}=\lim_{k\rightarrow\infty}\int_{\mathbb{R}_{+}\times\mathbb{R}^{d}}|\mathcal{F}\psi_{k}(\tau,\xi)|^{2}\mu_{0}(d\tau)\mu(d\xi),$
where $\gamma_{0}=\mathcal{F}\mu_{0}$, $\gamma=\mathcal{F}\mu$ and
$\mathcal{F}\psi_{k}$ stands for the Fourier transform of $\psi_{k}$ in space-
time variables _in this proof_. By choosing a subsequence $(k_{j})_{j\geq 1}$
we have that
$\lim_{j\rightarrow\infty}\mathcal{F}\psi_{k_{j}}(\tau,\xi)=0$
for $\mu_{0}\otimes\mu$-almost all $(\tau,\xi)$. On the other hand, keeping in
mind that the supports of $S,\psi_{k}$ are contained in $[0,T]\times B_{M}$,
we have
$\big{\|}\psi_{k}-S\big{\|}_{L^{1}(\mathbb{R}_{+}\times\mathbb{R}^{2})}\leq(\pi
M^{2}T)^{1-\frac{1}{2q}}\big{\|}\psi_{k}-S\big{\|}_{L^{2q}(\mathbb{R}_{+}\times\mathbb{R}^{2})}\leq(\pi
M^{2})^{1-\frac{1}{2q}}T^{\frac{1}{2}}\big{\|}\psi_{k}-S\big{\|}_{L^{2}(\mathbb{R}_{+};L^{2q}(\mathbb{R}^{2}))},$
from which we deduce that $(\psi_{k})_{k\geq 1}$ converges in
$L^{1}([0,T]\times B_{M})$ to $S$. Thus $\mathcal{F}\psi_{k}(\tau,\xi)$
converges to $\mathcal{F}S(\tau,\xi)$ for all $(\tau,\xi)$ and the convergence
is uniform. As a consequence, $\mathcal{F}S(\tau,\xi)=0$ for
$\mu_{0}\otimes\mu$-almost all
$(\tau,\xi)\in\mathbb{R}_{+}\times\mathbb{R}^{d}$ and by Hypothesis
${\bf(H2)}$, we obtain $\mathcal{F}S(\tau,\xi)=0$ for almost all
$(\tau,\xi)\in\mathbb{R}_{+}\times\mathbb{R}^{d}$ with respect to the Lebesgue
measure.
Hence $S(t,x)=0$ for almost all $t>0$ and $x\in\mathbb{R}^{d}$, i.e. there
exists a Borel set $N\subset\mathbb{R}_{+}\times\mathbb{R}^{d}$ with
$\lambda_{d+1}(N)=0$ such that $S(t,x)=0$ for all $(t,x)\not\in N$. Here
$\lambda_{k}$ denotes the Lebesgue measure on $\mathbb{R}^{k}$. Therefore,
$I=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}1_{A}(t,x,y)S(t,x)S(t,y)\gamma(x-y)dxdydt,$
where
$A:=\\{(t,x,y)\in\mathbb{R}_{+}\times\mathbb{R}^{d}\times\mathbb{R}^{d};(t,x)\in
N,(t,y)\in N\\}$.
Let $N_{t}=\\{x\in\mathbb{R}^{d};(t,x)\in N\\}$ be the section of the set $N$
at point $t>0$. By Fubini’s theorem,
$\lambda_{d+1}(N)=\int_{0}^{\infty}\lambda_{d}(N_{t})dt$. Since
$\lambda_{d+1}(N)=0$, we infer that $\lambda_{d}(N_{t})=0$ for almost all
$t>0$. Note that the section of the set $A$ at point $t$ is
$A_{t}=\\{(x,y)\in\mathbb{R}^{d}\times\mathbb{R}^{d};(t,x,y)\in
A\\}=N_{t}\times N_{t}$, and its Lebesgue measure is
$\lambda_{2d}(A_{t})=\lambda_{d}^{2}(N_{t})=0$ for almost all $t>0$. By
applying Fubini again, we infer that
$\lambda_{2d+1}(A)=\int_{0}^{\infty}\lambda_{2d}(A_{t})dt=0$. This shows
$I=0$, which contradicts (A.1). ∎
### A.2 Proof of Proposition 1.9
In this section, we only sketch the proof of Proposition 1.9 as the main body
of the proof is almost identical to that in [42, Proposition 3.2].
###### Proof of (1.27).
Using the duality relation (2.5) and the identity $L=-\delta D$, we have
$\mathbb{E}\big{[}\langle
DF,-DL^{-1}G\rangle_{\mathcal{H}}\big{]}=\mathbb{E}\big{[}F(-\delta
D)L^{-1}G\big{]}=\mathbb{E}[FLL^{-1}G]=\mathbb{E}[FG]=\text{Cov}(F,G),$
which shows the equality in (1.27). Then, applying the Gaussian Poincaré
inequality (2.12) and using Lemma 3.2 of [26], we can bound the variance
appearing in the left-hand side of (1.27) by
$\displaystyle\mathbb{E}\Big{[}\|D\langle
DF,-DL^{-1}G\rangle_{\mathcal{H}}\|_{\mathcal{H}}^{2}\Big{]}\leq
2\mathbb{E}\Big{[}\|\langle
D^{2}F,-DL^{-1}G\rangle_{\mathcal{H}}\|_{\mathcal{H}}^{2}\Big{]}+2\mathbb{E}\Big{[}\|\langle
DF,-D^{2}L^{-1}G\rangle_{\mathcal{H}}\|_{\mathcal{H}}^{2}\Big{]}.$
We will show that the first expectation term is bounded by $A_{1}$; the
other one can be estimated in the same way and is bounded by $A_{2}$. Using the
representation (see _e.g._ [25, Proposition 2.9.3])
$-DL^{-1}G=\int_{0}^{\infty}dte^{-t}P_{t}DG,$
with $\\{P_{t},t\geq 0\\}$ the Ornstein-Uhlenbeck semigroup, we can write
$\displaystyle\langle
D^{2}F,-DL^{-1}G\rangle_{\mathcal{H}}=\int_{0}^{\infty}dte^{-t}\langle
D^{2}F,P_{t}DG\rangle_{\mathcal{H}}.$ (A.2)
Note that if $(\mathcal{M},\mathfrak{M},\nu)$ is a probability space on which
$s\in\mathcal{M}\longmapsto V_{s}\in|\mathcal{H}|$ is
$\mathfrak{M}$-measurable such that
$\int_{\mathcal{M}}\big{\|}|V_{s}|\big{\|}_{\mathcal{H}}^{2}\nu(ds)<\infty$,
then by Fubini’s theorem and Cauchy-Schwarz inequality,
$\displaystyle\left\|\int_{\mathcal{M}}V_{s}\nu(ds)\right\|_{\mathcal{H}}^{2}$
$\displaystyle=\int_{\mathcal{M}^{2}}\langle
V_{s},V_{s^{\prime}}\rangle_{\mathcal{H}}\nu(ds)\nu(ds^{\prime})$
$\displaystyle\leq\int_{\mathcal{M}^{2}}\frac{\|V_{s}\|^{2}_{\mathcal{H}}+\|V_{s^{\prime}}\|^{2}_{\mathcal{H}}}{2}\nu(ds)\nu(ds^{\prime})=\int_{\mathcal{M}}\|V_{s}\|_{\mathcal{H}}^{2}\nu(ds).$
Using the above inequality on $(\mathbb{R}_{+},e^{-t}dt)$, we deduce from
(A.2) that
$\displaystyle\big{\|}\langle
D^{2}F,-DL^{-1}G\rangle_{\mathcal{H}}\big{\|}_{\mathcal{H}}^{2}\leq\int_{0}^{\infty}dte^{-t}\big{\|}\langle
D^{2}F,P_{t}DG\rangle_{\mathcal{H}}\big{\|}^{2}_{\mathcal{H}}.$
Observe that $\langle D^{2}F,P_{t}DG\rangle_{\mathcal{H}}$ is nothing else but
the one-contraction $D^{2}F\otimes_{1}P_{t}DG$, so that
$\displaystyle\big{\|}\langle
D^{2}F,P_{t}DG\rangle_{\mathcal{H}}\big{\|}^{2}_{\mathcal{H}}$
$\displaystyle=\langle
D^{2}F\otimes_{1}P_{t}DG,D^{2}F\otimes_{1}P_{t}DG\rangle_{\mathcal{H}}$
$\displaystyle=\big{\langle}D^{2}F\otimes_{1}D^{2}F,(P_{t}DG)\otimes(P_{t}DG)\big{\rangle}_{\mathcal{H}^{\otimes
2}},$
where the last equality follows from the definition of contractions.
Therefore, we have
$\displaystyle\mathbb{E}[\|\langle
D^{2}F,-DL^{-1}G\rangle_{\mathcal{H}}\|_{\mathcal{H}}^{2}]$
$\displaystyle\quad\leq\mathbb{E}\int_{0}^{\infty}dt~{}e^{-t}\int_{\mathbb{R}_{+}^{6}\times\mathbb{R}^{6d}}drdr^{\prime}dsds^{\prime}d\theta
d\theta^{\prime}dzdz^{\prime}dydy^{\prime}dwdw^{\prime}\gamma_{0}(\theta-\theta^{\prime})\gamma_{0}(s-s^{\prime})\gamma_{0}(r-r^{\prime})$
$\displaystyle\qquad\quad\times\gamma(z-z^{\prime})\gamma(w-w^{\prime})\gamma(y-y^{\prime})\times\big{[}D_{r,z}D_{\theta,w}F\big{]}\big{[}D_{s,y}D_{\theta^{\prime},w^{\prime}}F\big{]}P_{t}(D_{r^{\prime},z^{\prime}}G)P_{t}(D_{s^{\prime},y^{\prime}}G)$
and thus we end our estimation of $\mathbb{E}[\|\langle
D^{2}F,-DL^{-1}G\rangle_{\mathcal{H}}\|_{\mathcal{H}}^{2}]$ by using Hölder
inequality and the contraction property of $P_{t}$ on $L^{4}(\Omega)$, that
is, using
$\|P_{t}(D_{r^{\prime},z^{\prime}}G)\|_{4}\leq\|D_{r^{\prime},z^{\prime}}G\|_{4}$.
To estimate the other expectation-term $\mathbb{E}[\|\langle
DF,-D^{2}L^{-1}G\rangle_{\mathcal{H}}\|_{\mathcal{H}}^{2}]$, one can begin
with
$-D^{2}L^{-1}G=\int_{0}^{\infty}dte^{-2t}P_{t}D^{2}G$
and then follow the same arguments. ∎
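The Hilbert-space inequality $\big\|\int_{\mathcal{M}}V_{s}\,\nu(ds)\big\|_{\mathcal{H}}^{2}\leq\int_{\mathcal{M}}\|V_{s}\|_{\mathcal{H}}^{2}\,\nu(ds)$ invoked in the proof above admits a quick finite-dimensional sanity check. The sketch below is an illustration only: $\nu$ is taken uniform on $m$ atoms and $\mathcal{H}=\mathbb{R}^{k}$ with the Euclidean norm.

```python
import random

# Discrete check of || ∫ V dν ||² ≤ ∫ ||V||² dν for a probability measure ν
# (uniform on m atoms) and vectors V_s in R^k with the Euclidean norm.
random.seed(0)
m, k = 50, 4
V = [[random.gauss(0.0, 1.0) for _ in range(k)] for _ in range(m)]
avg = [sum(v[i] for v in V) / m for i in range(k)]   # ∫ V dν
lhs = sum(c * c for c in avg)                        # ||∫ V dν||²
rhs = sum(sum(c * c for c in v) for v in V) / m      # ∫ ||V||² dν
assert lhs <= rhs + 1e-12
```

This is exactly the Jensen/Cauchy-Schwarz argument in the display: the cross terms $\langle V_{s},V_{s^{\prime}}\rangle$ are dominated by $\tfrac12(\|V_{s}\|^{2}+\|V_{s^{\prime}}\|^{2})$.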
## References
* [1] Balan, R. M. (2012): The stochastic wave equation with multiplicative fractional noise: a Malliavin calculus approach. Potential Anal. 36, 1-34.
* [2] Balan, R.M., Quer-Sardanyons, L. and Song, J. (2019): Existence of density for the stochastic wave equation with space-time homogeneous Gaussian noise. Electron. J. Probab. 24, no. 106, 1-43.
* [3] Balan, R. M. and Song, J. (2017): Hyperbolic Anderson Model with space-time homogeneous Gaussian noise. ALEA Lat. Am. J. Probab. Math. Stat. 14, 799-849.
* [4] Bolaños-Guerrero, R., Nualart, D. and Zheng, G. (2021): Averaging 2D stochastic wave equation. _Electron. J. Probab._ 26 (102): 1-32.
* [5] Bouleau N. and Hirsch, F. (1986): Propriété d’absolue continuité dans les espaces de Dirichlet et applications aux équations différentielles stochastiques. _Séminaire de Probabilités_ XX: 12, 131-161, LNM 1204.
* [6] Breuer P. and Major P. (1983): Central limit theorems for non-linear functionals of Gaussian fields. _J. Multivariate Anal._ 13, 425-441.
* [7] Carmona, R. and Nualart, D. (1988): Random non-linear wave equations: smoothness of the solutions. _Probab. Theory Related Fields_ 79, 469-508.
* [8] Chatterjee C. (2009): Fluctuation of eigenvalues and second order Poincaré inequalities. _Probab. Theory Related Fields_ 143, 1-40.
* [9] Chen L. , Khoshnevisan D., Nualart D. and Pu F. (2022): Poincaré inequality, and central limit theorems for parabolic stochastic partial differential equations. To appear in: _Ann. Inst. Henri Poincaré Probab. Stat._ arXiv: 1912.01482
* [10] Chen L. , Khoshnevisan D., Nualart D. and Pu F. (2021): Central limit theorems for spatial averages of the stochastic heat equation via Malliavin-Stein’s method. Stoch. Partial Differ. Equ. Anal. Comput.
* [11] R. C. Dalang (1999): Extending the martingale measure stochastic integral with applications to spatially homogeneous S.P.D.E.’s. _Electron. J. Probab._ 4, no. 6, 1-29
* [12] Delgado-Vences, F., Nualart, D. and Zheng G. (2020): A central limit theorem for the stochastic wave equation with fractional noise. _Ann. Inst. Henri Poincaré Probab. Stat._ , 56, 4, 3020-3042.
* [13] Dunlap, A., Gu, Y., Ryzhik, L. and Zeitouni, O. (2020): Fluctuations of the solutions to the KPZ equation in dimensions three and higher. _Probab.Theory Related Fields_ 176.
* [14] Houdré C. and Pérez-Abreu V. (1995): Covariance identities and inequalities for functionals on Wiener and Poisson spaces. _Ann. Probab._ 23, 400-419.
* [15] Huang J., Nualart D. and Viitasaari L. (2020): A central limit theorem for the stochastic heat equation. _Stochastic Process. Appl._ 130, no. 12, 7170-7184.
* [16] Huang J., Nualart D., Viitasaari L. and Zheng G. (2020): Gaussian fluctuations for the stochastic heat equation with colored noise. _Stoch. Partial Differ. Equ. Anal. Comput._ 8, 402-421.
* [17] Kallenberg O. (2002): _Foundations of Modern Probability_. Second edition. Probability and Its Applications, Springer.
* [18] Karkzewska A. and Zabczyk J. (1999): _Stochastic PDE’s with function-valued solutions._ In: _Infinite-dimensional stochastic analysis_ (Clément Ph., den Hollander F., van Neerven J. & de Pagter B., eds), pp. 197–216, Proceedings of the Colloquium of the Royal Netherlands Academy of Arts and Sciences, Amsterdam.
* [19] Khoshnevisan D., Nualart D. and Pu F. (2021): Spatial stationarity, ergodicity and CLT for parabolic Anderson model with delta initial condition in dimension $d\geq 1$. _SIAM J. Math. Anal._ 53 no. 2, 2084-2133.
* [20] Kim K. and Yi, J. (2022): Limit theorems for time-dependent averages of nonlinear stochastic heat equations. _Bernoulli_ 28 (1): 214-238.
* [21] Malliavin, P. (1978): Stochastic calculus of variations and hypoelliptic operators. _Proceedings of the International Symposium on Stochastic Differential Equations_ (Res. Inst. Math. Sci., Kyoto Univ., Kyoto, 1976). New York: Wiley. pp. 195-263.
* [22] Márquez-Carreras, D., Mellouk, M., Sarrà, M. (2001): On stochastic partial differential equations with spatially correlated noise: smoothness of the law. _Stochastic Process. Appl._ 93, 269-284.
* [23] Millet, A. and Sanz-Solé, M. (1999): A stochastic wave equation in two space dimension: smoothness of the law. _Ann. Probab._ 27, 803-844.
* [24] Nourdin I. and Peccati G. (2009): Stein’s method on Wiener chaos. _Probab. Theory Related Fields_ 145, no. 1, 75-118.
* [25] Nourdin I. and Peccati G. (2012): _Normal approximations with Malliavin calculus: from Stein’s method to universality._ Cambridge Tracts in Mathematics 192, Cambridge University Press.
* [26] Nourdin I., Peccati G. and Reinert G. (2009): Second order Poincaré inequalities and CLTs on Wiener space. _J. Funct. Anal._ 257, 593-609.
* [27] Nualart D. (2006): _The Malliavin Calculus and Related Topics_ , second edition. Probability and Its Applications, Springer-Verlag Berlin Heidelberg.
* [28] Nualart D. and Ortiz-Latorre S. (2008): Central limit theorems for multiple stochastic integrals and Malliavin calculus, _Stochastic Process. Appl._ 118 no 4, 614-628.
* [29] Nualart D. and Pardoux É. (1988): Stochastic calculus with anticipating integrands. _Probab. Theory Related Fields_ 78, 535-581.
* [30] Nualart D. and Peccati G. (2005): Central limit theorems for sequences of multiple stochastic integrals. _Ann. Probab._ 33 no. 1, 177-193.
* [31] Nualart, D. and Quer-Sardanyons, L. (2007): Existence and smoothness of the density for spatially homogeneous SPDEs. _Potential Anal._ 27, 281-299.
* [32] Nualart, D., Song X. M. and Zheng, G. (2021): Spatial averages for the Parabolic Anderson model driven by rough noise. _ALEA Lat. Am. J. Probab. Math. Stat._ 18, 907-943.
* [33] Nualart, D. and Zheng, G. (2020): Averaging Gaussian functionals. Electron. J. Probab. 25, no. 48, 1-54.
* [34] Nualart, D., Xia, P. and Zheng, G. (2021): Quantitative central limit theorems for the parabolic Anderson model driven by colored noises. arXiv:2109.03875
* [35] Nualart, D. and Zheng, G. (2021): Central limit theorems for stochastic wave equations in dimensions one and two. _Stoch. Partial Differ. Equ. Anal. Comput._
* [36] Nualart, D. and Zheng, G. (2020): Spatial ergodicity of stochastic wave equations in dimensions 1, 2 and 3. _Electron. Commun. Probab._ 25, no. 80, pages 1-11.
* [37] Peccati G. and Tudor C. A. (2005): Gaussian limits for vector-valued multiple stochastic integrals. _Séminaire de Probabilités_ XXXVIII. pp 247-262.
* [38] Pu F. (2021): Gaussian fluctuation for spatial average of parabolic Anderson model with Neumann/Dirichlet/periodic boundary conditions. Trans. Amer. Math. Soc.
* [39] Quer-Sardanyons, L., Sanz-Solé, M. (2004): Absolute continuity of the law of the solution to the 3-dimensional stochastic wave equation. _J. Funct. Anal._ 206, no. 1, 1-32.
* [40] Quer-Sardanyons, L., Sanz-Solé, M. (2004): A stochastic wave equation in dimension 3: Smoothness of the law. _Bernoulli_ 10, no. 1, 165-186.
* [41] Sanz-Solé, M. and Süss, A. (2013): The stochastic wave equation in high dimensions: Malliavin differentiability and absolute continuity. _Electron. J. Probab._ 18, no. 64, 1-28
* [42] Vidotto A. (2020): An improved second-order Poincaré inequality for functionals of Gaussian fields. _J. Theoret. Probab._ 33, 396-427.
* [43] Walsh J.B. (1986): An Introduction to Stochastic Partial Differential Equations. In: École d’été de probabilités de Saint-Flour, XIV—1984, 265–439. Lecture Notes in Math. 1180, Springer, Berlin.
* [44] Zheng G. (2018): _Recent developments around the Malliavin-Stein approach — fourth moment phenomena via exchangeable pairs._ Ph.D thesis, Université du Luxembourg. Available at http://hdl.handle.net/10993/35536
# On the two-parameter Erdős-Falconer distance problem
over finite fields
Clément Francois, ETH Zurich, Switzerland. Email: <EMAIL_ADDRESS>
Hossein Nassajian Mojarrad, Courant Institute, New York University. Email:
<EMAIL_ADDRESS> Supported by Swiss National Science Foundation grant
P2ELP2-178313.
Duc Hiep Pham, University of Education, Vietnam National University, Hanoi.
Email: <EMAIL_ADDRESS>
Chun-Yen Shen, Department of Mathematics, National Taiwan University. Email:
<EMAIL_ADDRESS>
###### Abstract
Given $E\subseteq\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}$, with the finite
field $\mathbb{F}_{q}$ of order $q$ and the integer $d\geq 2$, we define the
two-parameter distance set as
$\Delta_{d,d}(E)=\left\\{\left(\|x_{1}-y_{1}\|,\|x_{2}-y_{2}\|\right):(x_{1},x_{2}),(y_{1},y_{2})\in
E\right\\}$. Birklbauer and Iosevich (2017) proved that if $|E|\gg
q^{\frac{3d+1}{2}}$, then $|\Delta_{d,d}(E)|=q^{2}$. For the case of $d=2$,
they showed that if $|E|\gg q^{\frac{10}{3}}$, then $|\Delta_{2,2}(E)|\gg
q^{2}$. In this paper, we present extensions and improvements of these
results.
2010 Mathematics Subject Classification: 52C10 (11T99)
Keywords: Erdős-Falconer distance problem, finite fields.
## 1 Introduction
The general Erdős distance problem asks to determine the number of distinct
distances spanned by a finite set of points. In the Euclidean space, it is
conjectured that for any finite set $E\subset\mathbb{R}^{d}$, $d\geq 2$, we
have $|\Delta(E)|\gtrapprox|E|^{\frac{2}{d}}$, where
$\Delta(E)=\\{\|x-y\|:x,y\in E\\}$. Here and throughout, $X\ll Y$ means that
there exists $C>0$ such that $X\leq CY$, and $X\lessapprox Y$ with the
parameter $N$ means that for any $\varepsilon>0$, there exists
$C_{\varepsilon}>0$ such that $X\leq C_{\varepsilon}N^{\varepsilon}Y$.
The finite field analogue of the distance problem was first studied by
Bourgain, Katz, and Tao [3] over prime fields. In this setting, the Euclidean
distance among any two points
$\boldsymbol{x}=(x_{1},\ldots,x_{d}),\boldsymbol{y}=(y_{1},\ldots,y_{d})\in\mathbb{F}_{q}^{d}$,
the $d$-dimensional vector space over the finite field of order $q$, is
defined as
$\|\boldsymbol{x}-\boldsymbol{y}\|=\displaystyle\sum_{i=1}^{d}(x_{i}-y_{i})^{2}\in\mathbb{F}_{q}$.
For prime fields $\mathbb{F}_{p}$ with $p\equiv-1\pmod{4}$, they showed that
if $E\subset\mathbb{F}_{p}^{2}$ with $|E|=p^{\delta}$ for some $0<\delta<2$,
then the distance set satisfies $|\Delta(E)|\gg|E|^{\frac{1}{2}+\varepsilon}$,
for some $\varepsilon>0$ depending only on $\delta$.
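As a concrete illustration (not part of the original argument), the finite-field distance set can be computed by brute force for small moduli; the sketch below assumes a prime $p$ and uses the quadratic "distance" defined above.

```python
from itertools import product

def dist(x, y, p):
    # Finite-field "distance": sum of squared coordinate differences mod p.
    return sum((a - b) ** 2 for a, b in zip(x, y)) % p

def distance_set(E, p):
    # All distances spanned by pairs of points of E inside F_p^d.
    return {dist(x, y, p) for x in E for y in E}

# Example: the full plane F_p^2 for a small prime p spans every distance.
p = 5
E = list(product(range(p), repeat=2))
print(len(distance_set(E, p)))  # -> 5, i.e. all of F_5
```

For small $p$ this makes it easy to experiment with how quickly $\Delta(E)$ saturates $\mathbb{F}_p$ as $|E|$ grows.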
This bound does not hold in general for arbitrary finite fields
$\mathbb{F}_{q}$ as shown by Iosevich and Rudnev [7]. In this general setting,
they considered the Erdős-Falconer distance problem to determine how large
$E\subset\mathbb{F}_{q}^{d}$ needs to be so that $\Delta(E)$ spans all
possible distances or at least a positive proportion of them. More precisely,
they proved that $\Delta(E)=\mathbb{F}_{q}$ if $|E|>2q^{\frac{d+1}{2}}$, where
the exponent is sharp for odd $d$. It is conjectured that in even dimensions,
the optimal exponent will be $\frac{d}{2}$. As a relaxed fractional variant
for $d=2$, it was shown in [4] that if $E\subseteq\mathbb{F}_{q}^{2}$
satisfies $|E|\gg q^{\frac{4}{3}}$, then $|\Delta(E)|\gg q$. A recent series
of other improvements and generalizations on the Erdős-Falconer distance
problem can be found in [6, 9, 11, 12, 13].
Using Fourier analytic techniques, a two-parameter variant of the Erdős-
Falconer distance problem for the Euclidean distance was studied by Birklbauer
and Iosevich in [2]. More precisely, given
$E\subseteq\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}$, where $d\geq 2$,
define the two-parameter distance set as
$\Delta_{d,d}(E)=\left\\{\left(\|x_{1}-y_{1}\|,\|x_{2}-y_{2}\|\right):(x_{1},x_{2}),(y_{1},y_{2})\in
E\right\\}\subseteq\mathbb{F}_{q}\times\mathbb{F}_{q}.$
They proved the following results.
###### Theorem 1.1.
Let $E$ be a subset in $\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}$. If
$|E|\gg q^{\frac{3d+1}{2}}$, then $|\Delta_{d,d}(E)|=q^{2}$.
###### Theorem 1.2.
Let $E$ be a subset in $\mathbb{F}_{q}^{2}\times\mathbb{F}_{q}^{2}$. If
$|E|\gg q^{\frac{10}{3}}$, then $|\Delta_{2,2}(E)|\gg q^{2}$.
In this short note, we provide an extension and an improvement of these
results. Compared to the method in [2], our arguments are considerably more elementary.
For
$\boldsymbol{x}=(x_{1},\ldots,x_{d}),\boldsymbol{y}=(y_{1},\ldots,y_{d})\in\mathbb{F}_{q}^{d}$
and for an integer $s\geq 2$, we introduce
$\|\boldsymbol{x}-\boldsymbol{y}\|_{s}:=\sum_{i=1}^{d}a_{i}(x_{i}-y_{i})^{s},$
where $a_{i}\in\mathbb{F}_{q}$ with $a_{i}\neq 0$ for $i=1,\ldots,d$. For any
set $E\subset\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}$, define
$\Delta_{d,d}^{s}(E)=\left\\{\left(\|x_{1}-y_{1}\|_{s},\|x_{2}-y_{2}\|_{s}\right):(x_{1},x_{2}),(y_{1},y_{2})\in
E\right\\}.$
Our first result reads as follows.
###### Theorem 1.3.
Let $E$ be a subset in $\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}$. If
$|E|\gg q^{\frac{3d+1}{2}}$, then $|\Delta_{d,d}^{s}(E)|\gg q^{2}$.
It is worth mentioning that our method also works for the multi-parameter
distance set defined for $E\subseteq\mathbb{F}_{q}^{d_{1}+\dots+d_{k}}$, but
we do not discuss such extensions herein. For the case of $d=2$, we get an
improved version of Theorem 1.2 for the usual distance function over prime
fields.
###### Theorem 1.4.
Let $E\subseteq\mathbb{F}_{p}^{2}\times\mathbb{F}_{p}^{2}$. If $|E|\gg
p^{\frac{13}{4}}$, then $|\Delta_{2,2}(E)|\gg p^{2}$.
We note that continuous versions of Theorems 1.3 and 1.4 have been studied
in [5, 8]. However, the authors do not know whether the method in this paper
can be extended to that setting. Moreover, it follows from our approach that
the conjectured exponent $\frac{d}{2}$ for the (one-parameter) distance problem
would imply the sharp exponent for the two-parameter analogue, namely
$\frac{3d}{2}$, in even dimensions. We refer the reader to [2] for
constructions and further discussion.
## 2 Proof of Theorem 1.3
The following lemma plays a key role in our proof for Theorem 1.3.
###### Lemma 2.1 (Theorem 2.3, [14]).
Let $X,Y\subseteq\mathbb{F}_{q}^{d}$. Define
$\Delta^{s}(X,Y)=\\{\|x-y\|_{s}\colon x\in X,y\in Y\\}$. If $|X||Y|\gg
q^{d+1}$, then $|\Delta^{s}(X,Y)|\gg q$.
###### Proof of Theorem 1.3.
By assumption, we have $|E|\geq Cq^{d+\frac{d+1}{2}}$ for some constant $C>0$.
For $y\in\mathbb{F}_{q}^{d}$, let
$E_{y}:=\left\\{x\in\mathbb{F}_{q}^{d}:(x,y)\in E\right\\}$, and define
$Y:=\left\\{y\in\mathbb{F}_{q}^{d}:~{}|E_{y}|>\frac{C}{2}q^{\frac{d+1}{2}}\right\\}.$
We first show that $|Y|\geq\frac{C}{2}q^{\frac{d+1}{2}}$. Note that
$|E|=\sum_{y\in Y}|E_{y}|+\sum_{y\in\mathbb{F}^{d}_{q}\setminus
Y}|E_{y}|~{}\leq~{}q^{d}|Y|+\sum_{y\in\mathbb{F}^{d}_{q}\setminus Y}|E_{y}|,$
where the last inequality holds since $|E_{y}|\leq q^{d}$ for
$y\in\mathbb{F}_{q}^{d}$. Combining it with the assumption on $|E|$ gives the
lower bound $\sum_{y\in\mathbb{F}^{d}_{q}\setminus Y}|E_{y}|\geq
Cq^{d+\frac{d+1}{2}}-q^{d}|Y|.$ On the other hand, by definition, we have
$|E_{y}|\leq\frac{C}{2}q^{\frac{d+1}{2}}$ for $y\in\mathbb{F}^{d}_{q}\setminus
Y$ yielding the upper bound $\sum_{y\in\mathbb{F}^{d}_{q}\setminus
Y}|E_{y}|\leq\frac{C}{2}q^{d+\frac{d+1}{2}}$. Thus, these two bounds
altogether give
$Cq^{d+\frac{d+1}{2}}-q^{d}|Y|\leq\frac{C}{2}q^{d+\frac{d+1}{2}}$, proving the
claimed bound $|Y|\geq\frac{C}{2}q^{\frac{d+1}{2}}$.
In particular, Lemma 2.1 implies $|\Delta^{s}(Y,Y)|\gg q$, as $|Y||Y|\gg
q^{d+1}$. On the other hand, for each $u\in\Delta^{s}(Y,Y)$, there are $z,t\in
Y$ such that $\|z-t\|_{s}=u$. One has $|E_{z}|,|E_{t}|\gg q^{\frac{d+1}{2}}$,
therefore, again by Lemma 2.1, $|\Delta^{s}(E_{z},E_{t})|\gg q$. Furthermore,
for $v\in\Delta^{s}(E_{z},E_{t})$, there are $x\in E_{z}$ and $y\in E_{t}$
satisfying $\|x-y\|_{s}=v$. Note that $x\in E_{z}$ and $y\in E_{t}$ mean that
$(x,z),(y,t)\in E$. Thus,
$(v,u)=(\|x-y\|_{s},\|z-t\|_{s})\in\Delta_{d,d}^{s}(E)$. From this, we
conclude that $|\Delta_{d,d}^{s}(E)|\gg q|\Delta^{s}(Y,Y)|\gg q^{2}$, which
completes the proof. ∎
## 3 Proof of Theorem 1.4
To improve the exponent over prime fields $\mathbb{F}_{p}$, we strengthen
Lemma 2.1 as follows. Theorem 1.4 then follows by repeating the proof of
Theorem 1.3 with Lemma 3.1 below in place of Lemma 2.1.
###### Lemma 3.1.
Let $X,Y\subseteq\mathbb{F}_{p}^{2}$. If $|X|,|Y|\gg p^{\frac{5}{4}}$, then
$|\Delta(X,Y)|\gg p$.
###### Proof.
It is clear that if $X^{\prime}\subseteq X$ and $Y^{\prime}\subseteq Y$, then
$\Delta(X^{\prime},Y^{\prime})\subseteq\Delta(X,Y)$. Thus, without loss of
generality, we may assume that $|X|=|Y|=N$ with $N\gg p^{\frac{5}{4}}$. Let
$Q$ be the number of quadruples $(x,y,x^{\prime},y^{\prime})\in X\times
Y\times X\times Y$ such that $\|x-y\|=\|x^{\prime}-y^{\prime}\|$. It follows
easily from the Cauchy-Schwarz inequality that
$|\Delta(X,Y)|\gg\frac{|X|^{2}|Y|^{2}}{Q}.$
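For completeness, here is the standard counting argument behind this step, written with the counting function $r(u)$ (notation introduced here, not in the original):

```latex
\[
|X||Y| \;=\; \sum_{u\in\Delta(X,Y)} r(u)
\;\le\; |\Delta(X,Y)|^{1/2}\Big(\sum_{u} r(u)^{2}\Big)^{1/2}
\;=\; |\Delta(X,Y)|^{1/2}\,Q^{1/2},
\]
```

where $r(u)=\#\{(x,y)\in X\times Y:\|x-y\|=u\}$, so that squaring both sides gives $|\Delta(X,Y)|\geq |X|^{2}|Y|^{2}/Q$.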
Let $T$ be the number of triples $(x,y,y^{\prime})\in X\times Y\times Y$ such
that $\|x-y\|=\|x-y^{\prime}\|$. By the Cauchy-Schwarz inequality again, one
gets $Q\ll|X|\cdot T$. Next, we need to bound $T$. For this, denote $Z=X\cup
Y$, then $N\leq|Z|\leq 2N$. Let $T^{\prime}$ be the number of triples
$(a,b,c)\in Z\times Z\times Z$ such that $\|a-b\|=\|a-c\|$. Obviously, one
gets $T\leq T^{\prime}$. On the other hand, it was recently proved (see [10,
Theorem 4]) that
$T^{\prime}\ll\frac{|Z|^{3}}{p}+p^{2/3}|Z|^{5/3}+p^{1/4}|Z|^{2},$
which gives
$T\ll\frac{N^{3}}{p}+p^{2/3}N^{5/3}+p^{1/4}N^{2},$
and then $T\ll\dfrac{N^{3}}{p}$ (since $N\gg p^{\frac{5}{4}}$). Putting all
bounds together we obtain
$\dfrac{N^{3}}{|\Delta(X,Y)|}=\dfrac{|X||Y|^{2}}{|\Delta(X,Y)|}\ll\dfrac{Q}{|X|}\ll
T\ll\dfrac{N^{3}}{p},$
or equivalently, $|\Delta(X,Y)|\gg p$, as required. ∎
## Acknowledgment
The authors would like to thank Thang Pham for sharing insights and new ideas.
## References
* [2] P. Birklbauer and A. Iosevich, _A two-parameter finite field Erdős-Falconer distance problem_ , Bull. Hellenic Math. Soc. 61 (2017), 21–30.
* [3] J. Bourgain, N. Katz, and T. Tao, _A sum-product estimate in finite fields, and applications_ , Geometric & Functional Analysis 14 (2004) 27–57.
* [4] J. Chapman, M. Erdogan, D. Hart, A. Iosevich, and D. Koh, _Pinned distance sets, $k$-simplices, Wolff’s exponent in finite fields and sum-product estimates_, Math Z. 271 (2012), 63–93.
* [5] K. Hambrook, A. Iosevich, A. Rice, _Group actions and a multi-parameter Falconer distance problem_ , (2017), https://arxiv.org/abs/1705.03871
* [6] D. Hieu and T. Pham, _Distinct distances on regular varieties over finite fields_ , Journal of Number Theory, 173 (2017), 602–613.
* [7] A. Iosevich and M. Rudnev, _Erdős distance problem in vector spaces over finite fields_ , Transactions of the American Mathematical Society 359 (2007), 6127–6142.
* [8] A. Iosevich, M. Janczak, J. Passant, _A multi-parameter variant of the Erdős distance problem_ , (2017), https://arxiv.org/abs/1712.04060
* [9] D. Koh, T. Pham, and V. Le, _Extension theorems and a connection to the Erdős-Falconer distance problem over finite fields_ , (2020), https://arxiv.org/abs/1809.08699
* [10] B. Murphy, G. Petridis, T. Pham, M. Rudnev, and S. Stevens, _On the pinned distances problem over finite fields_ , (2020), https://arxiv.org/abs/2003.00510
* [11] T. Pham, N. Phuong, N. Sang, C. Valculescu, and L. Vinh, _Distinct distances between points and lines in $\mathbb{F}_{q}^{2}$_, Forum Mathematicum. 30 (2018), no. 4, 799–808.
* [12] T. Pham and A. Suk, _Structures of distance sets over prime fields_ , Proceedings of the American Mathematical Society 148 (2020), 3209–3215.
* [13] T. Pham and V. Le, _Distribution of distances in vector spaces over prime fields_ , Pacific Journal of Mathematics 309 (2020), 437–451.
* [14] L. A. Vinh, _On the generalized Erdős-Falconer distance problems over finite fields_ , Journal of Number Theory 133 (2013), 2939–2947.
# [C II] and CO Emission Along the Bar and Counter-Arms of NGC 7479111Based on
SOFIA observations with FIFI-LS.
Dario Fadda SOFIA Science Center, USRA, NASA Ames Research Center, M.S.
N232-12 Moffett Field, CA 94035, USA Seppo Laine IPAC, Mail Code 314-6,
Caltech, 1200 E. California Blvd., Pasadena, CA 91125, USA Philip N. Appleton
IPAC, Mail Code 314-6, Caltech, 1200 E. California Blvd., Pasadena, CA 91125,
USA
(Received Dec 8, 2020; Revised Jan 20, 2021; Accepted Jan 25, 2021)
###### Abstract
We present new SOFIA [C II] and ALMA CO $J=1\rightarrow 0$ observations of the nearby
asymmetric barred spiral galaxy NGC 7479. The data, which cover the whole bar
of the galaxy and the counter-arms visible in the radio continuum, are
analyzed in conjunction with a wealth of existing visible, infrared, radio,
and X-ray data. As in most normal galaxies, the [C II] emission is generally
consistent with emission from cooling gas excited by photoelectric heating in
photo-dissociation regions. However, anomalously high [C II]/CO ratios are
seen at the two ends of the counter-arms. Both ends show shell-like
structures, possibly bubbles, in H$\alpha$ emission. In addition, the southern
end has [C II] to infrared emission ratios inconsistent with normal star
formation. Because there is little H I emission at this location, the [C II]
emission probably originates in warm shocked molecular gas heated by the
interaction of the radio jet forming the counter-arms with the interstellar
medium in the galaxy. At two other locations, the high [C II]/CO ratios
provide evidence for the existence of patches of CO-dark molecular gas. The [C
II] and CO observations also reveal resolved velocity components along the
bar. In particular, the CO emission can be separated into two components
associated to gas along the leading edge of the bar and gas trailing the bar.
The trailing gas component that amounts to approximately 40% of the gas around
the bar region may be related to a minor merger.
Infrared galaxies (790) – Barred spiral galaxies (136) – Molecular gas (1073)
Journal: ApJ. Facilities: SOFIA (FIFI-LS), Spitzer (IRAC, MIPS), Herschel
(PACS, SPIRE), ALMA, GALEX, Chandra, SDSS, 2MASS. Software: astropy (Astropy
Collaboration et al., 2013), sospex (Fadda & Chambers (2018),
http://www.github.com/darioflute/sospex), mopex (Makovoz & Marleau (2005),
https://irsa.ipac.caltech.edu/data/SPITZER/docs/dataanalysistools/tools/mopex/),
stinytim (John Krist,
https://irsa.ipac.caltech.edu/data/SPITZER/docs/dataanalysistools/tools/contributed/general/stinytim/),
CIAO (Fruscione et al. (2006), https://cxc.harvard.edu/ciao4.12/), magphys (da
Cunha et al. (2008), http://www.iap.fr/magphys/)
## 1 Introduction
Bars are common features in spiral galaxies. Recent infrared studies estimate
that in the local Universe approximately 70% of the spiral galaxies have bars
(see, e.g., Eskridge et al., 2000). Since this percentage declines at higher
redshifts, the presence of a bar has been seen as a sign of galaxies reaching
full maturity after several episodes of merging (Sheth et al., 2008). Since
most galaxies have companions (Zaritsky et al., 1997), minor mergers (with
galaxy masses 10 to 20 times smaller than the main galaxy) are likely common
events during the life of a galaxy (see, e.g., Jogee et al., 2009). Such
mergers can play a major role in triggering a stellar bar that efficiently
channels gas into the nucleus (Mihos & Hernquist, 1994), and have been
proposed as a mechanism for the formation of active galactic nuclei (AGNs, see
e.g., Taniguchi, 1999; Kendall et al., 2003; Kaviraj, 2014). In a merger, the
loss of angular momentum in the gaseous component of the interstellar medium
(ISM) leads to an inflow of gas toward the galactic nucleus (Barnes &
Hernquist, 1991; Blumenthal & Barnes, 2018) causing the growth of the central
black hole. Once the central black hole reaches a critical mass (Ishibashi &
Fabian, 2012), the AGN starts energizing the surrounding medium via winds,
jets, and radiation. This triggers star formation in the surrounding gas and
suppresses further gas inflow by blowing the gas out, a process generally
known as AGN feedback (see, e.g., Silk, 2013).
Table 1: Log of FIFI-LS observations
Observation Date [UT] | Flight Number | AOR ID | Starting Time [UT] | Exposure Time [min] | Barometric Altitude [feet] | Zenithal Angle [degs] | Zenithal Water Vapor [$\mu$m]
---|---|---|---|---|---|---|---
2019 05 14 | 570 | 07_0154_6 | 09:57:25 | 38 | 42000 | 68.4 – 61.7 | 3.0 [start]
2019 05 14 | 570 | 07_0154_7 | 10:36:19 | 38 | 43000 | 61.2 – 54.0 | 3.1 [start] – 3.2 [end]
2019 05 15 | 571 | 07_0154_6 | 10:25:12 | 35 | 43000 | 65.9 – 59.4 | 3.2 [start]
2019 05 15 | 571 | 07_0154_7 | 11:01:39 | 36 | 43000 | 59.3 – 52.1 | 3.7 [end]
Note. — Observations on the same date were made during one flight leg. Water
vapor measurements were taken at the beginning and end of each observation,
except for 2019 May 14 when a change of altitude occurred between the AORs,
and the water vapor measurement was done after the altitude change. The
zenithal angle varied linearly during the observations between the two
reported values.
Several observations support the hypothesis that bars act as channels to drive
gas from the arms to the nucleus. Barred galaxies have shallower metallicity
gradients than unbarred ones (Martin & Roy, 1994), suggesting that gas is
radially transported along the bar. Enhanced star formation is common in the
central regions of many barred galaxies (Ho et al., 1997) but not in all of
them. An anti-correlation between the presence of a bar and the central atomic
gas content could be interpreted as depletion of gas in barred galaxies,
possibly resulting from enhanced star formation (Laine & Gottesman, 1998;
Masters et al., 2012). Although the correlation between bars and AGNs is not
clear, there is a definitive correlation between the gaseous absorbing column
density towards type 2 Seyfert nuclei and the presence of stellar bars
(Maiolino et al., 1999), suggesting that bars are effective in driving gas
inward to enshroud galactic nuclei. Finally, molecular gas along bars has been
directly observed through CO observations. Sakamoto et al. (1999) and Sheth et
al. (2005) found that barred spirals have higher molecular gas concentrations
in the central kiloparsec than unbarred galaxies, which is consistent with
radial inflow driven by bars.
Figure 1: Coverage of the FIFI-LS and ALMA observations over a two-color
visible image from HST archival observations (combination of bands F555W and
F814W). The green contour defines the region covered by FIFI-LS with at least
500 seconds of on-source integration. The filled green circle corresponds to
the beam of FIFI-LS at 159 $\mu$m, the redshifted wavelength of [C II]
emission from NGC 7479. The yellow contour shows the coverage of the ALMA CO
observations. The yellow ellipse corresponds to the beam of the ALMA CO
observations in the configuration used.
Observing molecular gas is an excellent way to study the gas inflow along
bars. Because of the depletion due to enhanced star formation or a phase
transition into molecular gas, neutral atomic hydrogen along bars can be hard
to detect or non-existent (Masters et al., 2012; Laine & Gottesman, 1998).
Moreover, since the H$\alpha$ emission in the ISM arises from diffuse ionized
gas or in HII regions, it will be necessarily biased by regions of active star
formation, and is subject to extinction by dust. Many studies have traced
molecular gas in galaxy bars through CO emission (see, e.g., Sakamoto et al.,
1999; Laine et al., 1999; Regan et al., 1999; Sheth et al., 2005).
Another excellent tracer of diffuse gas that is not significantly affected by
extinction in the ISM is the [C II] far-IR line at 157.741 $\mu$m. Although
observed in many galaxies with Herschel (see, e.g., Herrera–Camus et al.,
2015) and SOFIA (see, e.g., Pineda et al., 2018; Bigiel et al., 2020), it has
never been used for detailed studies of galaxy bars. The only published study
with Herschel data is limited to the circumnuclear region of the barred spiral
NGC 1097 (Beirão et al., 2012).
[C II] emission can complement CO studies of the distribution and state of the
molecular gas because it most commonly arises in photodissociation regions
(PDRs) surrounding star formation locations. In most observations of nearby
galaxies [C II] generally traces a mix of warm molecular gas heated by the
photoelectric ejection of electrons from polycyclic aromatic hydrocarbons
(PAH) and small grains, and ionized gas (Draine, 1978; Tielens & Hollenbach,
1985; Bakes & Tielens, 1998). For this reason [C II] emission is commonly used
in normal galaxies as a star formation rate indicator (see, e.g. Stacey et
al., 1991; Malhotra et al., 2001; De Looze et al., 2011; Díaz–Santos et al.,
2014; Herrera–Camus et al., 2015). Care must be taken to blindly use [C II]
observations as a proxy for star formation. For example, in addition to
correcting for the fraction of ionized gas, this line is sensitive to purely
neutral atomic gas (Croxall et al., 2017). It can also reveal the presence of
molecular gas in regions of low metallicity that are usually CO-dark (Wolfire
et al., 2010; Jameson et al., 2018; Madden et al., 2020; Chevance et al.,
2020) and may form a significant fraction of the diffuse ISM in our own Galaxy
(Pineda et al., 2013).
In addition, warm molecular gas heated by turbulence and shocks has been shown
to emit significant amounts of [C II] in areas devoid of significant star
formation, but in regions with diffuse UV radiation and very strong mid-IR
pure-rotational signatures of warm H2 (Appleton et al., 2013; Peterson et al.,
2018). Such observations of shock-heated intergalactic warm H2 also exhibit
very broad [C II] line widths (400–600 km s-1) and unusually high [C II]/FIR
and [C II]/PAH ratios. Models of warm molecular gas shocks (Appleton et al.,
2017) in Stephan’s Quintet show that low velocity magnetic shocks are likely
responsible for the very strong H2 and [C II] emission. Recently, further
evidence of shock-enhanced [C II] emission in a different environment was
discovered near the ends of a radio jet in NGC 4258, where large quantities of
warm H2 were detected (Appleton et al., 2018). The gas was found to correlate
not only with warm mid-IR H2 emission, but also with soft X-ray emission
relating to the activity of the jet in the inner regions of the galaxy. These
results are very relevant to this paper, since NGC 7479, like NGC 4258, also
shows a large-scale radio jet that may be interacting with its own ISM.
We present the analysis of new [C II] and CO observations of the nearby
strongly barred galaxy NGC 7479 ($cz=2381$ km s-1). The galaxy shows a clear
signature of a minor merger, visible along the bar of the galaxy (Quillen et
al., 1995; Laine & Heller, 1999; Martin et al., 2000), and hosts an AGN. NGC
7479 also exhibits an S-shaped 10-kpc scale radio continuum structure
emanating from the nucleus. These counter-arms, likely caused by a radio-jet
originating from the nucleus, were discovered by Laine & Beck (2008) and have
polarization vectors aligned along the main ridge-line of the structure. We
present the first X-ray detection of this jet-like structure using archival
Chandra data. The radio continuum structure is remarkably similar to the
ghostly counter-arms in the nearby galaxy NGC 4258 (Appleton et al., 2018). In
NGC 4258, about 40% of the [C II] emission in the central region comes from
molecular gas excited by shocks and turbulence due to the jet propagating near
the plane of the disk. The jet collides with dense clumps of gas in the thick
disk and changes direction and dissipates its energy over a wide area of the
galaxy. A similar scenario was invoked by Laine & Beck (2008) to explain the
jet-like structure in NGC 7479.
Here we present intriguing evidence of [C II] emission that is spatially
coincident with the jet emission, as well as [C II] emission emanating from
the speculated location of the merged companion about 17″ north of the
nucleus, and elsewhere along the bar.
Throughout this paper, we use $H_{0}$ = 70 km s-1 Mpc-1, $\Omega_{\rm m}$ =
0.3, and $\Omega_{\rm\Lambda}$ = 0.7. We also refer to the ground rotational
transition $J=1\rightarrow 0$ of the most common 12C16O isotopologue as simply
CO.
## 2 Observations and Data
### 2.1 SOFIA Observations
The new SOFIA observations were part of the SOFIA Cycle 7 observing program
07_0154. The Field Imaging Far-Infrared Line Spectrometer (FIFI-LS, Fischer et
al., 2018; Colditz et al., 2018) was used to map the [C II] 157.741 $\mu$m
(rest frame) line. For NGC 7479 that line corresponds to the observer frame
wavelength of 159 $\mu$m. The spectral resolving power of FIFI-LS is 1167,
meaning that an unresolved line has a FWHM of 257 km s-1. The spatial
resolution of the instrument is 15.6 arcseconds, corresponding to 2.5 kpc at
the distance (34.2 Mpc) of our galaxy. FIFI-LS is a dual channel instrument.
Parallel observations were obtained at 88.3 $\mu$m. Unfortunately, those
observations had an insufficient signal-to-noise ratio to derive any science
results, and consequently they are not discussed in this paper.
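The quoted instrument numbers can be checked with two one-line relations: the velocity FWHM of an unresolved line is $\Delta v = c/R$, and the physical beam size follows from the small-angle approximation. The snippet below is an illustrative sanity check, not part of the observing pipeline.

```python
# Sanity check of the quoted FIFI-LS numbers (illustrative only).
C_KM_S = 299792.458          # speed of light [km/s]
R = 1167                     # FIFI-LS spectral resolving power near 159 um
fwhm_kms = C_KM_S / R        # velocity width of an unresolved line
print(round(fwhm_kms))       # -> 257

beam_arcsec = 15.6           # FIFI-LS spatial resolution at 159 um
dist_mpc = 34.2              # adopted distance of NGC 7479
ARCSEC_PER_RAD = 206264.8
# Projected beam size in kpc; ~2.6 kpc, in line with the ~2.5 kpc quoted.
scale_kpc = beam_arcsec / ARCSEC_PER_RAD * dist_mpc * 1e3
print(round(scale_kpc, 1))   # -> 2.6
```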
The data were acquired in two consecutive flights (2019 May 14 and 15) for a
total of approximately 2.5 hr of flight time (see Table 1). For each flight
two Astronomical Observation Requests (AORs) were observed during the same
leg, one AOR to cover the northern and another AOR to cover the southern part
of the bar. The two flight legs were almost identical, and the observations
were acquired in very similar atmospheric conditions, resulting in a
homogeneous data set.
Figure 1 shows the extent of the region covered on top of a visible image and
the exposure map of ALMA CO observations. The observations were performed in
the chop–nod mode with the secondary mirror chopping between the galaxy and
reference fields on the two sides of the galaxy, each at a 200 arcsecond
distance from the center of the galaxy. Since the instantaneous field of view
of FIFI-LS in the red array is approximately $60\arcsec\times 60\arcsec$, we
covered the total mapped field with two pointings.
Some dithering was performed to reduce the effect of bad pixels and to improve
the recovery of the point spread function (PSF) in the images, since the size
of the spatial pixel of FIFI-LS (12″) is not small enough to recover the shape
of the PSF. The data were reduced using the FIFI-LS pipeline (Vacca, 2020). In
particular, the data were corrected for atmospheric transmission using the
ATRAN model (Lord, 1992), and the values of the zenithal water vapor burden
were estimated during the observations.
The reduced data were projected into spectral cubes with a fine grid of 3″
sampling using a Gaussian spectral kernel with a dispersion equal to 1/4 of
the spectral resolution and a Gaussian spatial kernel with a dispersion equal
to 1/2 the spatial resolution. These parameters produced a data cube that
conserves the instrumental spectral and spatial resolutions.
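The spatial part of this projection can be illustrated as Gaussian-kernel gridding of irregularly sampled fluxes onto the regular 3″ grid; a simplified sketch (the real pipeline also applies the spectral kernel, and the function name is ours):

```python
import numpy as np

def grid_gaussian(x, y, flux, grid_x, grid_y, sigma):
    """Weighted-mean projection of irregular samples (x, y, flux) onto a
    regular grid, using a Gaussian spatial kernel of dispersion `sigma`
    (same units as the coordinates)."""
    out = np.zeros((len(grid_y), len(grid_x)))
    wsum = np.zeros_like(out)
    for xi, yi, fi in zip(x, y, flux):
        w = np.exp(-((grid_x[None, :] - xi) ** 2 + (grid_y[:, None] - yi) ** 2)
                   / (2.0 * sigma ** 2))
        out += w * fi
        wsum += w
    return np.where(wsum > 0, out / wsum, np.nan)
```

A single sample is spread smoothly over neighboring grid points; overlapping samples are combined as a kernel-weighted mean.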
Figure 2: Comparison between the archived MIPS 24 $\mu$m observations (left)
and our reduction after subtraction of the nuclear PSF (right), shown with
three different top brightness cuts (2, 4, and 20 MJy/sr). Artefacts such as
residual “jail bars,” regularly alternating columns with higher fluxes, and
ghost sources, due to memory effects and the dithering pattern of the
observation, are visible in the top-left panel. The different brightness cut
levels show how the wings of the PSF of the nucleus affect the flux
measurements in different parts of the galaxy. The image is oriented according
to the MIPS array direction to better show the instrumental features.
### 2.2 Spitzer Observations
The galaxy has been observed with the IRAC (Fazio et al., 2004) and MIPS
(Rieke et al., 2004) instruments onboard the Spitzer Space Telescope (Werner
et al., 2004). We retrieved the relevant data from the Spitzer Heritage
Archive, and found that the IRAC archival data were directly usable, while the
MIPS 24 $\mu$m image still contained artefacts, and was dominated by the point
spread function (PSF) of the bright nucleus of the galaxy. We therefore
produced another mosaic starting from the basic calibrated data (BCDs).
Figure 3: Multiwavelength panorama of NGC 7479’s bar. The selected apertures
which cover interesting parts of the bar with a diameter equal to the spatial
resolution of FIFI-LS are marked in the different images. The green circle at
the lower left corner of each panel shows the spatial resolution of the
corresponding observation.
In particular, we removed a pattern of “jail bars,” a variation in brightness
that repeats every four columns in the BCDs, and is due to the reading mode of
the detector. The pattern is visible in the top-left panel of Figure 2, as
well as in a residual gradient in the background. To remove these artefacts,
we coadded every fourth column in each BCD and fitted a third-degree Chebyshev
polynomial. This average pattern was then subtracted from the respective
columns in the original BCDs.
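The correction just described can be sketched as follows; this is an illustration of the idea only (the function name and NumPy usage are ours, not the pipeline code):

```python
import numpy as np
from numpy.polynomial import chebyshev

def remove_jail_bars(bcd, period=4, deg=3):
    """Remove a column pattern that repeats every `period` columns by
    coadding the columns of each phase, fitting a degree-`deg` Chebyshev
    polynomial along the rows, and subtracting the smooth pattern."""
    corrected = bcd.astype(float).copy()
    rows = np.arange(bcd.shape[0], dtype=float)
    for phase in range(period):
        profile = np.nanmean(bcd[:, phase::period], axis=1)  # coadded columns
        coeffs = chebyshev.chebfit(rows, profile, deg)
        pattern = chebyshev.chebval(rows, coeffs)            # smooth average
        corrected[:, phase::period] -= pattern[:, None]
    return corrected
```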
Other visible artefacts in Figure 2 include two symmetric sources at the top
and bottom of the image. These ghost sources are due to latencies in the
detector response. Two other ghosts are not visible in the combined image
since they fall close to the nucleus of the galaxy. To remove these spurious
sources we subtracted the previous four BCDs from each BCD, scaled as in Fadda
et al. (2006, section 4.5).
Finally, to remove the wings of the PSF of the bright central source, we used
STinyTim
(https://irsa.ipac.caltech.edu/data/SPITZER/docs/dataanalysistools/tools/contributed/general/stinytim/)
to produce synthetic PSFs. For each BCD, we generated a PSF at the position of
the nucleus in the BCD with an oversampling factor of ten. Since this
observation has been obtained in the “compact source” mode (see chapter 3.1.1
of the MIPS handbook,
https://irsa.ipac.caltech.edu/data/SPITZER/docs/mips/mipsinstrumenthandbook/),
we used the predicted positions on the sky recorded in the header as CSM_SKY
as inputs for STinyTim for the displacement of the focal plane along the scan
direction. We computed a stack of 100 point source realizations (PSR) by
integrating the synthetic PSF in MIPS pixels centered at 100 different
positions in the brightest central pixel of the galactic nucleus. To find the
best approximation, we maximized the cross-correlation between each BCD and
the PSRs. Finally, we minimized the sum of the squares of the differences
between each BCD and the optimal PSR to compute the normalization factor.
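The matching and normalization steps can be summarized in a short sketch (an illustration under our own naming; the actual fits were done per BCD as described above):

```python
import numpy as np

def best_psr_and_scale(bcd, psrs):
    """Select the point-source realization (PSR) that maximizes the
    cross-correlation with a BCD, then compute the amplitude minimizing
    sum((bcd - s * psr)**2), i.e. s = <bcd, psr> / <psr, psr>."""
    scores = [np.sum(bcd * p) / np.sqrt(np.sum(p ** 2)) for p in psrs]
    best = psrs[int(np.argmax(scores))]
    s = np.sum(bcd * best) / np.sum(best ** 2)
    return best, s
```

The least-squares scale has the closed form above because the model is linear in the single amplitude parameter.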
After subtracting these PSRs from all the BCDs, we made a new mosaic using the
MOPEX software (Makovoz & Marleau, 2005). As shown in Figure 2, there are
parts of the first and second Airy rings which are slightly over-subtracted.
This is due to the limitations of the STinyTim model. The fine details of the
PSF depend in fact on the parameters of the optical system that are based on
the design rather than the performance of the instrument. On the other hand,
empirical PSFs work well only if derived from many point sources in the same
observation, such as those available in a wide-field survey. Experiments with
empirical PSFs from other MIPS observations did not yield a better
subtraction.
Nevertheless, in NGC 7479, the subtraction of the synthetic PSF made it
possible to obtain more accurate flux density measurements in the parts of the
galaxy covered by our new FIFI-LS SOFIA observations. To give an idea of how
much flux is contained in the artefacts and wings of the PSF, we measured the
intensity of the ghost sources at the bottom and top of the image. The ghost
fluxes are 0.65% of that of the central source. These ghost sources, because
of the dithering pattern of the observations, appear also in the central part
of the galaxy. The first Airy ring contains 30% of the total flux, while the
knots in the secondary Airy ring have 10% of the total flux. Without removing
artefacts and subtracting the PSF of the central source, measurements of the
24 $\mu$m flux along the bar would be severely biased.
Table 2: Properties of galaxy regions
Region | Center | L(FIR) | L([C II]) | L(CO) | L(PAH) | Z | $T_{\rm ISM}$ | $M_{*}$ | sSFR
---|---|---|---|---|---|---|---|---|---
Label | [J2000] | [$10^{9}\,L_{\odot}$] | [$10^{6}\,L_{\odot}$] | [$10^{3}\,L_{\odot}$] | [$10^{8}\,L_{\odot}$] | [$Z_{\odot}$] | [K] | [$10^{9}\,M_{\odot}$] | [$10^{-10}\,{\rm yr}^{-1}$]
A | 23:04:56.89 +12:20:14.9 | 0.87 | 6.19 $\pm$ 0.87 | 1.52 $\pm$ 0.11 | 0.93 | 0.8 | 22.1${}^{+1.6}_{-1.2}$ | 0.88${}^{+0.14}_{-0.01}$ | 1.86${}^{+0.36}_{-0.03}$
B | 23:04:57.37 +12:20:05.5 | 0.90 | 4.70 $\pm$ 0.99 | 2.50 $\pm$ 0.16 | 0.89 | 1.1 | 21.9${}^{+2.4}_{-0.4}$ | 1.08${}^{+0.01}_{-0.85}$ | 0.52${}^{+0.50}_{-0.01}$
C | 23:04:56.48 +12:20:03.6 | 0.90 | 6.30 $\pm$ 0.98 | 0.75 $\pm$ 0.03 | 0.91 | 0.8 | 21.8${}^{+2.3}_{-1.4}$ | 1.06${}^{+0.29}_{-0.15}$ | 0.74${}^{+0.36}_{-0.59}$
D | 23:04:56.89 +12:19:52.6 | 1.02 | 7.97 $\pm$ 1.93 | 2.49 $\pm$ 0.03 | 1.02 | 1.1 | 22.3${}^{+2.6}_{-0.0}$ | 1.77${}^{+0.01}_{-0.72}$ | 0.48${}^{+0.46}_{-0.00}$
E | 23:04:55.82 +12:19:48.7 | 0.63 | 3.64 $\pm$ 0.80 | 0.44 $\pm$ 0.02 | 0.61 | 1.2 | 21.2${}^{+0.7}_{-1.0}$ | 0.72${}^{+0.09}_{-0.00}$ | 0.89${}^{+0.13}_{-0.30}$
F | 23:04:56.75 +12:19:38.8 | 2.19 | 6.77 $\pm$ 0.72 | 4.72 $\pm$ 0.21 | 1.72 | 1.3 | 22.5${}^{+1.3}_{-2.1}$ | 2.83${}^{+0.21}_{-0.01}$ | 0.61${}^{+0.01}_{-0.07}$
G | 23:04:56.65 +12:19:23.2 | 1.36 | 9.66 $\pm$ 0.46 | 13.12 $\pm$ 0.35 | 1.25 | 0.8 | 23.0${}^{+2.5}_{-0.3}$ | 1.83${}^{+0.44}_{-0.45}$ | 0.40${}^{+0.06}_{-0.20}$
H | 23:04:56.61 +12:19:07.2 | 2.41 | 12.66 $\pm$ 0.74 | 4.70 $\pm$ 0.18 | 2.46 | 1.2 | 23.9${}^{+0.7}_{-0.9}$ | 3.48${}^{+0.69}_{-0.29}$ | 0.31${}^{+0.01}_{-0.03}$
I | 23:04:57.34 +12:18:57.9 | 0.79 | 6.99 $\pm$ 0.08 | 0.51 $\pm$ 0.01 | 0.74 | 1.1 | 22.1${}^{+1.6}_{-0.4}$ | 0.94${}^{+0.01}_{-0.01}$ | 0.52${}^{+0.01}_{-0.01}$
K | 23:04:55.19 +12:18:57.7 | 0.48 | 3.88 $\pm$ 0.38 | 0.25 $\pm$ 0.01 | 0.44 | 0.4 | 21.8${}^{+2.0}_{-0.6}$ | 0.85${}^{+0.28}_{-0.20}$ | 0.15${}^{+0.10}_{-0.33}$
L | 23:04:56.41 +12:18:54.2 | 1.36 | 6.91 $\pm$ 0.67 | 2.55 $\pm$ 0.08 | 1.25 | 0.8 | 23.0${}^{+2.5}_{-0.3}$ | 1.83${}^{+0.44}_{-0.45}$ | 0.40${}^{+0.06}_{-0.20}$
M | 23:04:56.22 +12:18:35.6 | 0.92 | 4.34 $\pm$ 0.26 | 1.35 $\pm$ 0.03 | 0.96 | 0.9 | 21.9${}^{+1.5}_{-0.9}$ | 0.89${}^{+0.05}_{-0.22}$ | 0.91${}^{+0.23}_{-0.05}$
Note. — Each region consists of a circular aperture with a 15$\farcs$6
diameter and the reported center. All images were degraded to the angular
resolution of the [C II] observations. The FIR luminosity was computed by
integrating the best-fitting model between 8 and 1000 $\mu$m. [C II] and CO
luminosities were computed by fitting the line inside the aperture, while the
PAH luminosity was computed by subtracting the stellar component from the IRAC
8 $\mu$m photometry with the help of synthetic photometry from the best-
fitting MagPhys model. Errors of FIR and PAH luminosities are less than 5%.
The metallicity Z is the value of the MagPhys model that best fits the
photometric data. Temperature of the ISM, stellar mass, and specific star
formation rate are MagPhys outputs with 95% confidence interval uncertainties.
### 2.3 Herschel Observations
NGC 7479 was observed by Herschel with the PACS and SPIRE instruments. In this
paper we will consider only the PACS observations obtained at 70, 100, and 160
$\mu$m and the 250 $\mu$m image obtained with SPIRE. We do not consider the
other SPIRE images, since their spatial resolution is too low in comparison to
our SOFIA data, and they do not allow us to discern the emission from the
different parts of the bar. Moreover, the emission at those wavelengths is
well beyond the peak of the infrared emission and not useful to constrain the
total infrared emission. We did not reprocess the images for our work since
the quality of the archived products in the Herschel Science Archive is
adequate for our analysis.
### 2.4 H$\alpha$ Data
We present data from H$\alpha$ Fabry–Perot observations, kindly provided by
Stuart Vogel and Michael Regan. These observations were made with the
Maryland–Caltech Fabry–Perot Spectrometer attached to the Cassegrain focus of
the 1.5 m telescope at the Palomar Observatory (Vogel et al., 1995). The data
were obtained on 1994 September 29–30. Forty exposures were taken, each with a
500 s integration time and a pixel scale of 1$\farcs$88. To improve the
signal-to-noise ratio, these data have been smoothed to 3$\farcs$6 resolution.
The velocity planes are separated by 12.1 km s-1.
### 2.5 UV, visible, Near-IR Data
SEDs (spectral energy distributions) were created for parts of NGC 7479. We
made use of observations in the two GALEX bands (FUV and NUV), SDSS images in
the five Sloan bands ($u^{\prime}$, $g^{\prime}$, $r^{\prime}$, $i^{\prime}$,
$z^{\prime}$), and 2MASS images in the $J$, $H$, and $K_{\rm s}$ bands. The
GALEX FUV observations are from the Nearby Galaxy Atlas survey. A deep NUV
observation, taken in 2009 to observe the supernova SN 2009JF, was also used.
Finally, we also analyzed a spectrum of the galaxy nucleus available in the
SDSS archive. Images and spectra were retrieved from the respective archives:
MAST, SDSS, and IRSA.
Figure 4: SEDs, [C II], and CO spectra for the 12 apertures considered. The
spectral coverages of the various images used to estimate the SEDs (left
panels) are shaded in different colors. The best fits obtained with MagPhys
(da Cunha et al., 2008) are presented in red (attenuated distribution) and
blue (unattenuated distribution). The [C II] and CO spectra (middle and right
panels, respectively) are plotted in blue. The red lines show the fits of
continuum plus a combination of pseudo-Voigt functions. The green shaded
spectra are the H$\alpha$ lines rescaled to the [C II] and CO lines,
respectively. The velocity shift along the bar is visible across the different
apertures, from the top (North) to the bottom (South).
### 2.6 Chandra X-Ray Data
The X-ray observations were retrieved from the Chandra archive. Two
observations exist in the archive. They were obtained on 2009 August 11 to
study the remnants of the supernova SN 1990U and, as a target of opportunity,
two months later (November 24) to follow up the more recent supernova SN
2009JF. We reprocessed the data using the latest Chandra calibration (CALDB
4.9.3) with CIAO, version 4.12.1. To obtain an image we used the most recent
observation (ID 11230) which has the longest exposure time (25 ks). For
spectral extractions we considered also the other observation (ID 10120) which
has an exposure time of 10 ks. The image, including photons between 0.3 and 8
keV, has been smoothed with an adaptive Gaussian kernel using the CIAO tool
dmimgadapt. This routine smooths each pixel on an optimal scale, in our case
between 1 and 10 pixels, chosen so that the counts under the convolution
kernel reach a desired threshold, here 10 counts. The resulting spatial
resolution varies according to the counts, and it is better than 5 arcsec over
the entire image. Because of the low photon statistics, we extracted spectra
from the two observations and studied the combined spectrum after a separate
background subtraction. We used the specextract tool to extract X-ray spectra
in the 0.3–8.0 keV range for the region centered on the nucleus and the
regions on the counter-arms. The background was evaluated in a region close to
the galaxy without X-ray emission in the same chip. The tool automatically
scales the ratio to the same area when subtracting. We filtered the events
based on the energy range and then grouped them to a minimum of 10 counts per
bin prior to modeling the spectrum. Fluxes were estimated based on the count
rates and the best fitting models.
### 2.7 ALMA CO Data
We used unpublished archival observations of the $^{12}$CO $J=1\rightarrow 0$
line in the 2.61–2.63 mm wavelength range taken with ALMA in band 3
(program ID: 2016.2.00195.S, P.I. Tanaka). The data were taken on 2017
September 21 and cover the whole region observed with SOFIA (see Fig. 1). NGC
7479 was observed for a total of 5136 s with the 7 m array for an expected
line sensitivity of 17.6 mJy/beam. The spatial beam is an ellipse with axes of
15$\farcs$6 and 8$\farcs$0 and major axis oriented along the R.A. direction.
The spectral resolution is 1.27 km/s, roughly ten times better than that of
the data used by Laine et al. (1999).
Figure 5: [C II] vs CO relationship for local galaxies from Madden et al.
(2020) with a grid of PDR models as a function of gas density $n$ and the
strength of the incident FUV field $G_{0}$ (Kaufman et al., 1999). Regions
with [C II] associated with PDRs are shaded in blue. The region shaded in
yellow has either a [C II] excess (e.g., due to excitation by shocks) or
anomalously low CO emission (e.g., in sub-solar metallicity regions). The
regions of NGC 7479 discussed in this paper are marked with black circles. The
nucleus of NGC 7479 (region G) is less luminous in [C II] than a typical star
formation region. Regions I, E, K, and C emit more [C II] than normal star
formation regions and fall in the region populated by dwarf and low-
metallicity galaxies.
## 3 Results and Discussion
### 3.1 Origin And Distribution of Gas Emission
In this section we study the distribution of the CO and [C II] emission, and
we try to relate the emission to the mechanism that produced it. To achieve
this we compare the [C II], CO, and H$\alpha$ emissions at different
locations in the galaxy to the broad-band emission from radio to X-ray
wavelengths. In Fig. 3 we show the emission at different key wavelengths
compared to the integrated emission from [C II], CO, and H$\alpha$.
From Laine & Gottesman (1998) we know that there is very little neutral atomic
hydrogen along the bar of NGC 7479. Also, the H$\alpha$ emission is associated
with star forming regions, as can be seen from the close spatial
correspondence between the H$\alpha$ emission and UV emission intensities of
the galaxy. On the other hand, a quick look at the integrated emission of CO
and [C II] presents a different picture of the galaxy, not directly correlated
with the unobscured star formation.
To study the relationship between gas emission intensities at different
wavelengths, we defined 12 different apertures with a diameter of 15.6
arcseconds, equivalent to the spatial resolution of our [C II] map. Each
aperture covers a different part of the bar and is centered either on a peak
of the far-IR emission, or on a region with 20 cm radio continuum emission. At
the top end of the bar we defined two apertures since the peak of the CO
emission is displaced to the East with respect to the FIR emission (apertures
B and C). Apertures A and M are situated at the locations where the bar meets
the arms of the spiral galaxy. Aperture K is the only one defined outside of
the bar in a region with H$\alpha$ and [C II] emission, but with very low CO
emission. The apertures are marked in Fig. 3.
For each one of these apertures we measured the flux densities in the two
GALEX bands, five SDSS bands, four IRAC bands, the MIPS 24 $\mu$m image, the
PACS 70, 100, and 160 $\mu$m images and the SPIRE 250 $\mu$m image. To compare
the emission at these bands with the [C II] emission, we degraded the spatial
resolution of each image to that of the [C II] spectral cube. The spectral
energy distributions (SEDs) are presented in Figure 4. Each SED has been fitted
with the MagPhys code (da Cunha et al., 2008). In the figure, the two lines
represent the best fit with the attenuated and unattenuated distributions in
red and blue respectively. In the same figure, the middle panel shows the [C
II] line from the apertures fitted with one or two pseudo-Voigt functions (red
lines). The right panel, finally, shows the CO lines fitted again with a
combination of pseudo-Voigt functions. The spatial resolution of the CO
spectral cube has been degraded to the resolution of the [C II] cube to have
meaningful comparisons. The profile of the H$\alpha$ line, rescaled to the [C
II] and CO lines, is shown with a green shade.
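Degrading an image to a coarser beam is commonly approximated by convolving with a Gaussian whose FWHM is the quadrature difference of the two resolutions (assuming near-Gaussian PSFs); a sketch with illustrative numbers, not the actual processing script:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_to(img, fwhm_native, fwhm_target, pixscale):
    """Convolve `img` (native resolution `fwhm_native`, in arcsec) down to
    `fwhm_target` with a Gaussian of quadrature-difference width.
    `pixscale` is the pixel size in arcsec."""
    if fwhm_target <= fwhm_native:
        raise ValueError("target resolution must be coarser than native")
    fwhm_kernel = np.sqrt(fwhm_target ** 2 - fwhm_native ** 2)
    sigma_pix = fwhm_kernel / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pixscale
    return gaussian_filter(img, sigma_pix)
```

Because the convolution kernel is normalized, the total flux in each aperture is preserved while point sources are spread to the target beam.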
The measurements of the intensity of the CO and [C II] lines in the different
apertures, as well as the main outputs from the code are reported in Table 2.
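The line profile used in these fits is the standard pseudo-Voigt, a linear mix of a Gaussian and a Lorentzian of common FWHM; a minimal sketch (the parameter values in the actual fits come from the data):

```python
import numpy as np

def pseudo_voigt(v, amp, v0, fwhm, eta):
    """Pseudo-Voigt profile with peak `amp` at `v0`: `eta` weights the
    Lorentzian part, (1 - eta) the Gaussian part, both sharing `fwhm`."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((v - v0) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((v - v0) / (fwhm / 2.0)) ** 2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss)
```

For any `eta`, the profile reaches `amp` at `v0` and half that at `v0 ± fwhm/2`, which is what makes the FWHM a well-defined fit parameter.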
### 3.2 Relative Strength of CO and [C II] Emissions
When comparing the [C II] and CO emissions, the main difference is the low
luminosity in [C II] at the nucleus of the galaxy (aperture G). This is
especially evident when comparing the integrated emission shown in Figure 3.
The [C II] emission also extends toward the southern end (aperture I) of
the S-shaped radio continuum emission, unlike CO. Figure 5 shows the
relationship between the [C II] and CO emission in the different apertures,
normalized to the far-IR emission. This ratio can be used as a star formation
diagnostic.
Figure 6: Ratios of [C II] over far-IR emission as a function of the infrared
surface brightness for the 12 apertures considered. The blue symbols
correspond to the comparison sample from the GOALS project (Díaz–Santos et
al., 2017). The shaded region is the fitted curve with 1-$\sigma$ uncertainty.
Region I, corresponding to the radio/X-ray hot spot at the southern end of the
jet, shows an excess of [C II] emission.
We show, for reference, a grid of PDR models from Kaufman et al. (1999). The
region shaded in blue corresponds to emission possible with pure PDR models.
The region shaded in yellow cannot be explained in terms of pure PDR emission,
but requires either an excess of [C II] emission or a deficit of CO emission.
Most of the normal and star forming local galaxies lie in the blue region,
while higher ratios are measured for dwarf galaxies and lower metallicity
galaxies (Madden et al., 2020).
The regions at the ends of the radio continuum jet (apertures I and E) have a
ratio higher than normal star forming regions. The same is true for region C,
which we already pointed out as having very little CO emission, and for region
K, which lies outside of the bar.
Regions I and E are especially interesting because they seem to have only very
faint and narrow H$\alpha$ and CO emission, whereas the [C II] emission is
double-peaked in the case of Region E, and broad in the case of Region I.
These regions correspond to the ends of the radio counter-arms. In this case
there are strong similarities with NGC 4258 (Appleton et al., 2018), where
evidence was presented that the enhanced [C II] emission traces the
dissipation of mechanical energy through shocks and turbulence as the jet
interacts with the surrounding ISM. Lesaffre et al. (2013) showed that even
quite low-velocity shocks, passing through a mildly UV-irradiated diffuse
($10^{2}$–$10^{3}$ cm-3) molecular medium, can produce strong [C II] emission,
comparable to other powerful ISM coolants, like mid-IR H$_{2}$ emission. Models of
this sort were used to explain the powerful H$_{2}$, [C II], and H$_{2}$O emission
detected by Spitzer and Herschel in the shocked filament in Stephan’s Quintet
(Guillard et al., 2009; Appleton et al., 2013, 2017). A similar mechanism was
put forward to explain the clear association of [C II] emission with warm H$_{2}$
and faint soft X-ray emission associated with the end of the southern radio
jet and anomalous radio arms in NGC 4258 (Appleton et al., 2018). The
necessary mild UV radiation field required to ionize the carbon is provided by
the general galactic stellar background. In NGC 7479, although we do not have
direct evidence of shock-heated warm molecular gas, the association with the
X-ray emission (see Fig. 9), and the unusual [C II]/FIR ratios discussed in
the next subsection, are consistent with this picture.
Figure 7: Ratio of [C II] over PAH 7.7 $\mu$m emission as a function of the
far-IR slope for the 12 apertures considered. The comparison sample consists
of regions in NGC 1097 (blue) and NGC 4559 (green) from Croxall et al. (2012),
and in NGC 6946 (red) from Bigiel et al. (2020). The shaded part of the plot
contains 99% of the regions in the three comparison galaxies. The highest
ratio in NGC 7479 is found in region I that corresponds to the radio hot spot
at the southern end of the radio continuum jet.
### 3.3 Infrared Diagnostics
The relationship between [C II] and star formation can be tested using mid-
and far-IR diagnostics. The far-IR emission (between 8 and 1000 $\mu$m) has
been shown to be a good estimator of star formation (Kennicutt, 1998).
Infrared surveys, such as the GOALS survey (Díaz–Santos et al., 2017), found a
good correlation between the [C II] emission strength and the total far-IR
luminosity. The ratio is fairly constant for normal galaxies, while ultra-
luminous infrared galaxies have a deficit of [C II] emission. The same
relationship can be used to explore various regions of a galaxy to see if the
[C II] emission is related to star formation, or if there is an excess or
deficit of such emission with respect to the star formation rate measured. For
each aperture, we computed the FIR emission by integrating the spectral energy
distribution of the best fitting MagPhys model (see Fig. 4). In Figure 6 we
compare the values of the [C II]/FIR ratios in the different apertures on NGC
7479 with values from the GOALS sample. Most of the apertures lie in the
region of normal galaxies, except for aperture I, which lies on the
southern end of the radio continuum jet emission. The [C II]/FIR ratio is
anomalously high at this location.
Another quantity that correlates very well with the star formation rate is the
emission at 7.7 $\mu$m from the polycyclic aromatic hydrocarbons (PAHs,
Peeters et al., 2004). Most of the [C II] emission in normal galaxies
originates from cooling of neutral gas in photo-dissociation regions (Croxall
et al., 2017). On the other hand, the main mechanism to heat the neutral gas
is via photoelectric heating by interstellar PAHs (Draine, 1978; Tielens &
Hollenbach, 1985; Hollenbach & McKee, 1989) which explains the good
correlation between PAH and [C II] emissions. Therefore, this ratio is very
sensitive to the [C II] emission mechanism. The 7.7 $\mu$m PAH luminosity is
estimated by subtracting the stellar emission from the IRAC 8 $\mu$m image.
The stellar emission estimate was computed through synthetic photometry of the
unattenuated flux from the best-fitting MagPhys model.
By comparing the values in different regions of normal star-forming galaxies
(Croxall et al., 2012; Bigiel et al., 2020), we see that regions I and K have
anomalously high ratios. Region K, which lies outside of the bar, seems to
have some [C II] excess. It is possible that this region may be an example of
“CO-dark” molecular hydrogen. This idea is supported by the observation that
this region has the lowest best-fitting SED metallicity of all the regions
observed (see Table 2), a condition that may be conducive to “CO-dark” molecular gas
(Wolfire et al., 2010). Recent studies of low metallicity regions in galaxies
(Madden et al., 2020) show that most of CO is dissociated in these
environments, making [C II] a better tracer of molecular gas in these
particular regions. Region I, which lies close to the end of the continuum
radio jet in the South, will be discussed in the next subsection in the
context of possible ISM heating by the jet. We note that the corresponding
region E (at the tip of the northern jet) does not seem to have anomalous
ratios in these two diagnostics. However, we caution that there are two
velocity components to the [C II] emission from Region E, and so any excess in
this ratio in one component may be diluted by normal PDR emission from the
other.
### 3.4 The Counter-arm Structure
NGC 7479 is known to host an active nucleus. Ho et al. (1997) classified its
nucleus as a Seyfert 1.9, although previously it was classified as a LINER
(Keel, 1983). A recent SDSS spectrum of the nucleus (observed in March 2012,
see Fig. 8) shows clearly that the galaxy can be classified as a Seyfert
according to the BPT diagrams by Kewley et al. (2006). From the line ratio
H$\beta$/[OIII]5008Å $=0.22\pm 0.1$ the galaxy can be classified as Sy 1.8,
according to the scheme of Winkler (1992). The SDSS spectrum shows a high
extinction ($A_{V}=8.4$ mag, from the Balmer ratio decrement) typical of
Seyfert type 1.8–2 galaxies. Moreover, blue wings are visible in narrow lines
such as [OIII]5008Å and [SIII]9068Å (see insets in the top panel of Fig. 8).
Such asymmetric line profiles are usually associated with outflows of gas
(see, e.g., Schmidt et al., 2018).
Figure 8: Top: SDSS spectrum of the nucleus of NGC 7479 with main spectral
features identified. The asymmetric profiles of the [OIII]5008Å and
[SIII]9531Å lines are shown in the insets. Bottom: Kewley et al. (2006)
diagnostic diagrams for the nucleus of NGC 7479. The galaxy, blue dot with
errorbars in the figures, can be safely classified as a Seyfert.
Figure 9: On the left, contours of the 0.5–8 keV X-ray emission over the 20
cm radio continuum map (Laine & Beck, 2008). The contours follow a pattern
similar to that of the radio intensity, confirming the nuclear origin of the
radio emission. In particular, the southern end of the jet-like structure is
clearly detected in the X-ray image, although it peaks at a slightly different
location. On the right, spectra extracted in the apertures traced with dotted
circles in the image. The models fitting the data are traced with orange
lines, while the residuals of the fits are shown at the bottom of each
spectrum.
Radio observations at 20 cm by Laine & Beck (2008) showed evidence of the
existence of a jet-like structure with arms opening in a direction opposite to
the optical arms. They were not able to see this structure at any other
wavelength and they speculated that this radio emission could be linked to a
jet emanating from the nucleus of the galaxy.
Our reprocessing of Chandra archival data shows that the X-ray emission
follows the same pattern as the counter-arms visible in the 20 cm radio
continuum. As shown in Fig. 9, the X-ray contours follow the same orientation
as the radio structure. Moreover, the southern end of the structure, which has
the strongest radio emission, is also a hot spot in the X-ray observations.
NGC 7479 is considered to harbor an AGN on the basis of optical spectra of its
nucleus. In Fig. 9 we show the X-ray spectrum extracted from aperture G that
shows a prominent Fe K$\alpha$ feature at 6.4 keV. We used the sherpa package
of CIAO to model the spectrum. We obtained a good fit with a combination of a
power law and apec thermal models from the sherpa library considering the
intrinsic absorption as a free parameter. The Fe K$\alpha$ line at 6.4 keV was
fitted with a Gaussian function. The spectrum is typical of an AGN: it has a
high hardness ratio ($HR=0.8\pm 0.5$; the hardness ratio is defined as
$HR=\frac{H-S}{H+S}$, with $H$ and $S$ the hard, 2–8 keV, and soft, 0.5–2 keV,
fluxes, respectively), and the Fe K$\alpha$ line at 6.4 keV is clearly detected.
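For reference, the hardness ratio used here can be computed as follows (with the conventional band assignment, soft 0.5–2 keV and hard 2–8 keV):

```python
def hardness_ratio(hard_flux, soft_flux):
    """HR = (H - S) / (H + S); ranges from -1 (all soft) to +1 (all hard)."""
    return (hard_flux - soft_flux) / (hard_flux + soft_flux)

# HR = 0.8 (as measured for the nucleus) implies the hard band carries
# nine times the soft-band flux: (9 - 1) / (9 + 1) = 0.8
assert abs(hardness_ratio(9.0, 1.0) - 0.8) < 1e-12
```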
Moreover, the high H I absorption required for a good fit
(N${}_{H}=0.8^{+0.1}_{-0.3}\times 10^{22}$ cm-2) and the large width of the
6.4 keV line (FWHM$=0.9^{+0.2}_{-0.7}$ keV) suggest that the galaxy harbors at
its center a heavily obscured active nucleus. This analysis confirms previous
results obtained with 13 ks XMM observations (Wang et al., 2010). The ratio of
our estimated values of optical extinction ($A_{V}$) and H I absorption (N${}_{H}$)
are not far from the Galactic standard ratio (see Fig.3 in Burtscher et al.,
2016). The rest of the X-ray emission is much weaker as shown in the two other
spectra in Fig. 9. Also, the hardness ratios of the X-ray emission in the
other apertures are much lower than in the nucleus ($HR=0.0$ and $-0.2$ for
apertures I and F+H, respectively). We fitted the other two spectra either
with apec thermal models or a combination of power-law and apec thermal
models. In both cases we also assumed a Galactic intrinsic absorption,
obtaining similar results. With these models we can estimate the 0.5–8 keV
flux inside the southern X-ray hot spot (aperture I) and the average flux
inside the two other apertures along the bar (F and H). The flux inside
aperture I is $5_{-2}^{+2}\times 10^{-15}$ erg s-1cm-2, while the average flux
inside apertures F and H is $5_{-1}^{+2}\times 10^{-15}$ erg s-1cm-2. By
considering the relationship between X-ray emission and the total IR flux in
Mineo et al. (2012, their equation 23) for normal star-forming galaxies, we
find that aperture I has a ratio of 1.4 between the expected IR emission
(based on the X-ray emission) and the measured IR emission. For our apertures
that lie on the bar (F and H), the ratio is 0.5, i.e., approximately one third
of the value found in aperture I. If we take into account the fact that most
of the IR emission in aperture I is located close to the bar while the X-ray
emission peaks on the opposite side, we conclude that at least some of the
X-ray emission cannot be explained with star formation only. It is therefore
reasonable to assume that the S-like structure detected in the radio and X-ray
is associated with emission from the AGN. Similarly to another galaxy showing
radio counter-arms (NGC 4258, see Appleton et al., 2018), this structure can
be explained by invoking the existence of a jet originating inside the
nucleus, and colliding with dense clumps of gas along the bar (Plante et al.,
1991; Daigle & Roy, 2001; Mukherjee et al., 2016). If the jet is emitted at an
angle with respect to the bar, then during the collisions the jet transfers
momentum to the clouds of gas along the direction of the bar, gradually
changing the jet direction as its velocity component along the bar
decreases. As the jet exits the bar and enters the less dense disk region, the
direction of the jet remains constant. This scenario can explain the shape of
the counter-arms.
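The IR-versus-X-ray consistency check described above can be sketched in a few lines. The scaling constant `k_xir` stands in for the Mineo et al. (2012) X-ray-to-IR relation and, together with the measured IR flux used below, is a hypothetical illustrative value rather than a number taken from the data.

```python
# Sketch of the X-ray/star-formation consistency check discussed above.
# k_xir is a hypothetical stand-in for the Mineo et al. (2012) scaling;
# the measured IR flux below is illustrative, not a value from the data.
def ir_excess_ratio(f_x, f_ir_measured, k_xir):
    """Ratio of the IR flux predicted from the X-ray flux to the measured
    IR flux; a ratio above 1 hints at X-rays in excess of star formation."""
    return (k_xir * f_x) / f_ir_measured

# Aperture I: 0.5-8 keV flux of 5e-15 erg/s/cm^2 (from the text), paired
# with an illustrative k_xir and measured IR flux chosen to give 1.4.
ratio_I = ir_excess_ratio(5e-15, 1e-13, 28.0)
```

A ratio of 1.4 in aperture I versus 0.5 along the bar is what motivates attributing part of the southern X-ray emission to the AGN rather than to star formation.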
Figure 10: End locations of the radio jet-like emission in NGC 7479. The
middle panel shows the HST two-color image with 20 cm radio continuum
logarithmic contours from 0.1 to 4 mJy/beam. The top and bottom panels show
the regions at the end of the radio jet-like emission. The cyan circles are
the apertures E and I considered in this paper. In the top panel, the tip of
the radio emission is surrounded by bright young stars. The bottom panel shows
that the southern end of the radio jet has new stars on the top and below the
radio emission peak. The jet likely exits the disk since no star formation is
visible at its end.
Figure 10 shows the comparison between visible and radio emission at the ends
of the jet-like structure and the apertures used to measure the CO and [C II]
emission in this paper. As shown in the previous sections, the ratio of the
[C II] to CO emission is anomalously high in these two regions with respect to
regions of normal star formation. However, when using mid-IR and far-IR
diagnostics, the northern region (E) seems compatible with star formation,
while the southern region (I) shows an excess of [C II] emission.
Figure 11: Region with remnants of a possible minor merger (nucleus of a
merging galaxy, tail of star formation, horizontal dust lane) in the two-color
HST image. The contours of the total [C II] emission are shown in green in the
left panel. The two velocity components of the CO emission are shown in the
right panel. While the higher velocity component (red, $\Delta v\approx-50$
km/s) follows the merging structure with the maximum on the nucleus, the lower
velocity (blue, $\Delta v\approx-150$ km/s) component shows an extension
aligned with the horizontal dust lane crossing the bar. The [C II] emission
also shows two velocity peaks, but the spectral resolution is not sufficient
to distinguish between the two components across the entire image.
To better understand the reason for this behavior, it is instructive to look
at the clusters of young stars located at the ends of the radio continuum
arms. At the northern end, the radio emission points to a region which is
opaque in visual wavelengths, and is surrounded by a region full of clusters
of bright blue stars. The same region (region E) is also bright in the
ultraviolet (see Fig. 3). This morphology is consistent with at least the
majority of the [C II] emission originating in star-forming regions, which
might explain why the [C II] emission correlates very well with mid-IR and
far-IR estimators of star formation. On the other hand, in this region we cannot rule
out some fraction of the [CII] emission originating in warm molecular gas
heated by the jet, since the [CII] line profile is double-peaked, but the CO
emission is not.
At the southern end, on the contrary, the jet is only partially surrounded by
star clusters. The northwestern edge of the southern jet end has a front of
bright young stars, and just in front of the maximum radio emission there is
an arch-like cluster of blue stars. Beyond these stars, there seems to be
little star formation associated with the jet. This is exactly where the peak
of the X-ray emission is located. The impression in this case is that the jet
is coming out of the disk and therefore cannot interact with
the dense gas in the disk. Going out of the disk, the jet is still able to
interact with lower density molecular gas in the halo, which probably triggers
a more intense X-ray radiation. Even if the halo gas density is not high
enough to trigger star formation, the energy of the jet dissipates by shocking
the molecular gas that later cools down, emitting [C II]. This scenario would
explain the excess [C II] emission with respect to the lower PAH and FIR
emission. The lack of H$\alpha$ and H I emission in the region supports this
hypothesis.
### 3.5 Merging Remnants
Laine & Heller (1999) were able to explain the shape and features of NGC 7479
with a minor merger model. In such a model, a region north of the nucleus
contains visible remnants of the merging process. In particular, what is left
of the nucleus of the less massive galaxy captured by NGC 7479 is still
visible just north of the nucleus. As shown in the HST image in Fig. 11, the
bright elongated nucleus is followed by a trail of forming stars, likely a
residual of an arm. Moreover, a thick dust lane across the bar could also be a
residual of the merged galaxy. The merging left the northern part of the
galaxy in a much more turbulent state than its southern part. In Fig. 11 the
contours of the [C II] and CO emission are overlaid on an HST image. The
region trailing the merging nucleus appears very bright in [C II]. As
discussed in the next section, the velocity structure of [C II] shows a double
peak over this region.
Unfortunately, the spectral resolution of FIFI-LS is not good enough to
clearly separate the two components.
Figure 12: On the left, slices in declination of the H$\alpha$ emission (grey
shaded) with overlaid logarithmic contours of the CO (green) and [C II]
(orange) emissions showing the distribution of the atomic and molecular gas in
velocity and R.A. across the bar. The image on the right shows the intensity
of the 3.6 $\mu$m radiation with overlaid contours of the near-UV (green)
and 20 cm radio continuum (white) emissions and a vertical grid of angular
distance in arcseconds from the central position. Each slice corresponds to a
horizontal segment on the image. The distance in declination of each slice
from the nucleus is marked on the right side of each segment on the image and
on the bottom left corner of each subplot. The normalized intensity profile of
the UV and IR emission along the slices is shown at the bottom of each subplot
in blue and red, respectively.
Figure 13: Displacement in R.A. with respect to the center (left) and line-of-
sight velocity relative to the systemic velocity (right) for the H$\alpha$,
CO, and [C II] components. The assumed values are: 23:04:56.63 +12:19:22.7
(J2000) for the galaxy center and 2381 km/s for the systemic velocity. In the
left panels, the peak of the far-IR emission at 70 $\mu$m is traced with a
wide light blue line and the locations of UV emission with a thin purple line.
In the right panels, the projected rotational velocity of 5.6 km s$^{-1}$ arcsec$^{-1}$
of the two sides of the bar is traced with a broad blue line to highlight
the rigid rotation of the bar. The dots mark the peak emission of the
components, while the horizontal bar shows their extents. Peaks more than 4
arcsec from the far-IR peak are marked with darker colors. The declination
range of the nucleus and of the merging remnants are indicated on the right.
The CO emission also presents two clear peaks in velocity (see next section).
In this case, thanks to the high spectral resolution of ALMA, it is possible
to obtain the integrated intensities of the two components by fitting two
velocity components over the entire spectral cube. In the region with merging
remnants, the component with the velocity of $-50$ km s-1 with respect to the
systemic velocity follows the dust lane of the bar. The emission is all over
the merging structure and peaks on the merging nucleus. The other component,
approximately at $-150$ km/s from the systemic velocity, traces a cloud of
molecular gas which is limited by the horizontal dust lane. It is possible
that molecular gas flows towards the bar following this dust lane. We conclude
that the turbulent velocity profile of the [C II] emission and the presence of
a cloud of molecular gas along the dust lane are other possible tracers of an
ongoing minor merger event.
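The per-pixel two-component decomposition described above can be sketched with a double-Gaussian line model fitted by least squares; the synthetic spectrum below (components near $-50$ and $-150$ km/s) is illustrative, not real ALMA data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, a1, v1, s1, a2, v2, s2):
    """Sum of two Gaussian line profiles in velocity space (km/s)."""
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

# Synthetic spectrum mimicking two blended components at -50 and -150 km/s
v = np.linspace(-300.0, 100.0, 400)
spec = two_gauss(v, 1.0, -50.0, 25.0, 0.6, -150.0, 20.0)

# Fit both components simultaneously, as done pixel by pixel on the cube
p0 = (0.8, -40.0, 30.0, 0.5, -140.0, 30.0)
popt, _ = curve_fit(two_gauss, v, spec, p0=p0)
```

Collecting the fitted centroids and amplitudes over all spatial pixels yields the two separated velocity components shown later in Fig. 15.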
Figure 14: 3D view of the molecular gas emission along the galaxy bar detected
with the CO line. Isocontours of sections in declination of the CO spectral
cube at 20, 30, 60, 80 $\mu$Jy/pixel are shown in yellow, red, green, and
blue, respectively.
### 3.6 Gas Kinematics
A way to visualize the kinematics of the gas in NGC 7479’s bar region is to
consider sections in declination of the bar, since the bar is almost aligned
along the north–south direction. By plotting the intensity of the spectral
cube in the velocity–R.A. space, it is possible to identify different
components of the emission. In Fig. 12 we display 12 different sections of the
spectral cube of H$\alpha$, CO, and [C II] emissions corresponding to
interesting regions along the bar. The declinations at which we sliced the
spectral cubes are displayed in the right column panel of the figure as
horizontal segments over a near-IR image (the IRAC map at 3.6 $\mu$m) with
green contours of the near-UV emission (GALEX image) and white contours of the
20 cm continuum (VLA). The image summarizes the main components of the bar:
old stars (3.6 $\mu$m), new stars (near-UV), and radio counter-arms. For each
declination, the normalized intensity of the IR and UV images as a function of
the R.A. is displayed at the bottom of the corresponding spectral subplot. The
[C II] observations (in orange contours) clearly show the limited spectral
resolution of FIFI-LS since each component is elongated along the velocity
axis. Nevertheless, there are two regions where two components are clearly
visible. The first one is between 0 and -16 arcsec south of the nucleus, a
region which happens to be bright also in the near-UV. The second one is
between 24 and 37 arcsec north of the nucleus in the region which has remnants
of a past minor merger, as discussed in Section 3.5. The CO emission (green
contours) shows at least two components in each section. The velocity
dispersion in the nucleus (between 9 and -8 arcsec) is much higher than the
spectral resolution. The weaker component, far from the major axis of the bar,
also has a lower velocity dispersion than the component on the bar. Finally,
we notice that there is a component of the UV emission north and south of the
nucleus which is displaced with respect to the position of the bar. In the
northern part the spiral arms are separated into two branches, as visible in
the near-IR twin peaks. The brightest UV emission comes from the branch to the
west of the bar, a location which precedes the bar in the sense of the galaxy
rotation. The major axis [C II] and CO emissions peak at the same location
that also coincides with the peak of the UV emission. In the southern part of
the bar, the situation is symmetric. A ridge of UV emission is visible east of
the bar, again preceding the bar in the sense of galaxy rotation, but in this
case there is no secondary branch in the infrared. The peak of the [C II]
emission is shifted towards the east with respect to the CO emission below -8
arcseconds south of the nucleus. There is some H$\alpha$ emission associated
with the same region. The fact that H II regions (outlined by the UV ridges)
are displaced with respect to the position of the molecular gas has been also
observed in other galaxy bars (Sheth et al., 2002).
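The slicing procedure can be illustrated with a minimal sketch; the cube dimensions and the (velocity, declination, R.A.) axis ordering are assumptions made for the example, since real data would be read from a FITS cube.

```python
import numpy as np

# Hypothetical spectral cube with axes (velocity, declination, R.A.);
# a real cube would be loaded from FITS, e.g. with astropy.io.fits.
nv, ndec, nra = 60, 40, 50
cube = np.random.rand(nv, ndec, nra)

def pv_slice(cube, dec_index, width=1):
    """Average a few declination rows into an intensity map in the
    velocity--R.A. plane, i.e. one horizontal segment as in Fig. 12."""
    lo = max(dec_index - width, 0)
    hi = min(dec_index + width + 1, cube.shape[1])
    return cube[:, lo:hi, :].mean(axis=1)

pv = pv_slice(cube, dec_index=20)  # intensity in the velocity-R.A. plane
```

Each such slice, plotted as contours over velocity and R.A., corresponds to one subplot of Fig. 12.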
Figure 15: The CO spectral cube separated into two kinematic components. This
figure shows the logarithmic contours of the integrated emission, velocity,
and velocity dispersion, for the two components over the HST image of the
galaxy. The top and bottom panels show the high and low velocity components,
respectively. Levels are in W/m$^{2}$/pixel for intensity, and in km/s for velocity
and velocity dispersion. The green polygons on the intensity plots are the
regions we considered when estimating the amount of molecular gas along and
outside the bar.
For each declination we identified the main components of the gas emission and
fitted them with 2D Gaussians. Fig. 13 shows the position and velocity of the
peak of each component. The dispersion in velocity and R.A. is shown with
horizontal bars. In the same figure (left panels) we report the location of
the peak of the far-IR emission (at 70 $\mu$m) that accurately traces the
location of the dust lane of the bar with a thick light blue line. Thin purple
lines identify the locations where the UV emission peaks. The bulk of the
molecular gas traced by the [C II] and CO emission is found along the
locations traced by the far-IR emission. The atomic gas traced by H$\alpha$
shows some emission in the outer regions. In particular, it traces the ridge
of UV emission in the middle of the bar better than CO and [C II] emissions.
In the velocity plots (right panels), we traced a light-blue line with a slope
of 5.6 km s$^{-1}$ arcsec$^{-1}$, which marks the rotational velocity of the gas along
the bar dust lanes. To identify the spatial structures in the velocity plot,
components more than 4 arcsec from the major axis of the bar are plotted with
a darker color. The CO velocity plot shows that the component along the bar
has the typical profile of a galaxy bar that is similar to a rigid rotator. In
the nuclear region (between $-15$ and $+15$ arcsec), the fast rotating nuclear
disk (Laine et al., 1999) distorts the velocity profile. Finally, the two
fainter components in darker green appear to slow down until they reach the
bar speed as they go out of the nucleus. The velocity plot of H$\alpha$
contains more components located outside of the bar. Nevertheless, all these
components fall on the same pattern traced by the CO components. In
particular, we notice that the UV ridges have some H$\alpha$ emission but
their velocities show that these regions are linked to the bar. Finally, the
[C II] velocity plot shows two regions with two peaks: the southern part of
the nucleus and the merging region. The less disturbed parts (ends of the bar)
are again aligned along the velocity of the bar.
### 3.7 An Interpretation of the Molecular Gas Flow
The structure of the CO emission can be visualized in three dimensions to
better show the distribution of the gas along the bar. In Fig. 14 we show the
iso-contours of the CO emission for several sections in declination along the
bar. At practically every declination it is possible to distinguish at least
two components distinct in velocity and with slightly different spatial
locations. The strongest component is aligned with the bar, as shown in Fig.
13. The weaker component detaches from the nucleus and rejoins the bar at its
two ends. Because of the substantial difference in velocity of
these two components with respect to the spectral resolution of the ALMA
observations, it is possible to fit two lines for each spatial pixel in the CO
spectral cube. In this way, we are able to separate the emission from the two
velocity components. Fig. 15 shows the integrated emission, velocity, and
velocity dispersion of the two CO components. It is evident that the higher
velocity component, in the top panels, includes most of the gas funnelled by
the bar toward the galactic nucleus. The iso-velocity lines north of the
nucleus are perpendicular to the dust lane, there is a steep gradient across
the nucleus, and then they split in two diverging directions. Finally, south
of the nucleus, the emission is mainly outside of the bar in a position
trailing the rotation of the galaxy. A symmetrical situation is visible in the
lower velocity component. This time, most of the emission is associated with
the southern part of the bar, while north of the bar there is an accumulation
of gas that seems to be limited by the horizontal dust lane in the bar. Again,
the pattern of lines in the velocity fields is the same as in the northern
component. We notice that the region where gas trails the bar is more extended
in the low velocity component and also that there are several dust lanes
associated with it, although not as sharp as the horizontal dust lane in the
northern part of the bar.
It is natural to think that most of the gas in the stronger two velocity
components on the opposite sides of the nucleus is simply gas flowing along
the leading dust lanes of the bar towards the nucleus. This part exhibits the
usual behavior of rigid rotation found in most barred galaxies, as the inflow
velocity component is presumably much smaller than the component reflecting
the tumbling of the bar. The weaker gas component, offset in velocity and
trailing the bar, indicates that some gas may be falling into the bar, thus
having a velocity component corresponding to the rotation curve of the galaxy
in addition to the velocity component due to the participation in the tumbling
of the bar. The origin of this gas is not clear. Since this structure has not
been seen in any previous CO observations of galaxy bars, a possibility exists
that such gas is linked to the minor merger, and corresponds at least partly
to the “anomalous dust lanes” that intersect the bar almost at right angles.
If the merging companion galaxy’s orbit is almost perpendicular to the bar,
some molecular gas from the disrupted galaxy could have escaped the bar, and
rotated around the central part of the galaxy to be eventually recaptured by
the bar in a later phase of the minor merger. In addition, some of this
trailing gas forms naturally in a bar forming process (that may have been
triggered by the minor merger), as seen in Figure 3 (left) of Laine et al.
(1998), who simulated the bar pattern speed in NGC 7479 by matching the gas
and dust morphology to the observations. We estimated the relative quantity of
cold molecular gas trailing the bar by measuring the CO flux in the apertures
drawn in Fig. 15. The apertures are traced far enough from the nucleus to
avoid contamination from the nuclear emission. The fluxes in the apertures on
the bar are $(42\pm 2)$ Jy km/s north of the nucleus and $(31\pm 3)$ Jy km/s
south of the nucleus. The fluxes in the regions beyond the nucleus are
$(18.3\pm 0.3)$ Jy km/s in the north and $(10\pm 1)$ Jy km/s in the south. So,
considering the total amount of gas along the bar, the ratio between the gas
trailing the bar and the gas flowing towards the nucleus is around 40%.
Therefore, there is a non-negligible amount of gas that is not directly
flowing along the bar. In particular, the cloud north of the nucleus located
close to the horizontal dust lane has a flux of $(8.6\pm 0.3)$ Jy km/s. By
using the relation between CO luminosity and molecular mass from Bolatto et
al. (2013, equation 3) and a luminosity distance of 34.2 Mpc (assuming
cz$=2381$ km/s), the mass of the cloud is approximately $10^{8}\ M_{\odot}$,
larger than a typical giant molecular cloud. However, this is clearly not a
stable cloud; it is probably formed by gas channeled to the bar through the
horizontal dust lane.
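The two numbers above can be reproduced with a short calculation. The mass uses the CO(1-0) form of the Bolatto et al. (2013) relation (their eq. 3) with a Galactic conversion factor; applying it directly to the ALMA flux is an assumption, since a different observed transition would require a line-ratio correction.

```python
# Sketch of the two estimates quoted above. Assumes the CO(1-0) form of
# Bolatto et al. (2013, eq. 3) with a Galactic X_CO; applying it directly
# to the ALMA flux is an assumption (no line-ratio correction applied).
def molecular_mass(s_co_dv, d_l_mpc, z, x_co=2.0e20):
    """Molecular gas mass (M_sun) from an integrated CO flux (Jy km/s),
    luminosity distance (Mpc), and redshift."""
    return 1.05e4 * (x_co / 2.0e20) * s_co_dv * d_l_mpc**2 / (1.0 + z)

z = 2381.0 / 299792.458                  # redshift from the systemic velocity
m_cloud = molecular_mass(8.6, 34.2, z)   # cloud on the horizontal dust lane

# Trailing-to-bar flux ratio from the four apertures of Fig. 15
ratio = (18.3 + 10.0) / (42.0 + 31.0)    # ~0.39, i.e. around 40%
```

With these inputs the cloud mass comes out near $10^{8}\ M_{\odot}$ and the trailing fraction near 40%, matching the values in the text.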
Refined simulations of the minor merging in NGC 7479, including gas and
extending the work of Laine & Heller (1999), are needed to shed light on the
mechanism responsible for the complex kinematics of the CO emission in this
galaxy, but they are beyond the scope of the present paper.
## 4 Summary and Conclusions
We presented the analysis of new SOFIA and ALMA observations of the [CII] and
CO emission from the bar of the spiral galaxy NGC 7479. These observations
have been compared to a wealth of archival photometric and spectroscopic
observations, including unpublished Chandra observations. The main conclusions
of this work can be summarized as follows.
* •
We confirm the nuclear origin of the S-like structure found by Laine &
Gottesman (1998) by showing that the X-ray emission follows the same pattern
as the 20 cm radio continuum. The X-ray observations confirm that the galaxy
harbors a Compton-thick active nucleus. The spectrum extracted in the nuclear
region has a high hardness ratio, and a broad Fe K$\alpha$ line at 6.4 keV is
detected. The X-ray flux in the southern hot-spot exceeds the possible
emission from pure star forming regions.
* •
Most of the [C II] emission corresponds to CO emission in the bar, showing that
the majority of the [C II] emission is due to cooling of gas excited through
photoelectric heating by emission of young stars in PDRs. There are a few
exceptions. At the ends of the X-ray/radio jet-like structure, the [C II]
emission is higher than expected from the CO emission. However, infrared
diagnostics show that the [C II] emission in the northern end (region E) is
mainly compatible with star formation. On the contrary, the southern end
(region I) has an excess of [C II] emission unrelated to star formation. We
attribute this excess to the cooling of molecular gas shocked by the jet.
Another location along the bar (region C) and one location external to the bar
(region K) have very low CO/[C II] ratios. These could be locations of CO-dark
molecular clouds. Region K appears to have a low metallicity, and is the best
candidate for CO-dark molecular gas.
* •
The high spectral resolution and sensitivity of the CO observations allowed us
to separate the CO emission into two distinct kinematic components. Each
velocity component consists of strong emission along the bar on one side with
respect to the nucleus and of weak emission on the other side that trails the
bar in the sense of the rotation of the galaxy. The gas trailing the bar is
approximately 40% of the gas along the bar, excluding the emission around the
nucleus. In particular, a large cloud of molecular gas (mass of approximately
$10^{8}$ M⊙) is found on the location of a thick dust lane crossing the bar
north of the nucleus, a feature probably related to a past minor merger. The
origin of the gas trailing the bar is not clear. It could be related to the
proposed minor merger in NGC 7479 where the companion has been mostly
disrupted.
A higher spectral resolution for the [C II] observations would allow the
separation of different kinematic components along the bar, thus enabling a
better comparison with the ALMA CO data. Future observations of NGC 7479 with
the GREAT spectrograph on SOFIA, which has a spectral resolution similar to
that of the existing ALMA observations, are planned.
We thank Lauranne Lanz and Isaac Shlosman for illuminating discussions and
suggestions. This research is based on data and software from the following
projects. The NASA/DLR Stratospheric Observatory for Infrared Astronomy
(SOFIA) jointly operated by USRA, under NASA contract NNA17BF53C, and DSI
under DLR contract 50 OK 0901 to the University of Stuttgart. The ALMA
observatory, operated by ESO, AUI/NRAO, and NAOJ, is a partnership of ESO, NSF
(USA), and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan),
and KASI (Rep. of Korea), in cooperation with the Rep. of Chile. Herschel, an
ESA space observatory with science instruments provided by European-led P.I.
consortia and with important NASA participation. The Spitzer Space Telescope,
operated by JPL, Caltech under a contract with NASA. The SDSS survey, funded
by the A. P. Sloan Foundation, the Participating Institutions, NSF, the U.S.
Dep. of Energy, NASA, the Japanese Monbukagakusho, the Max Planck Society, and
the Higher Education Funding Council for England. The Two Micron All Sky
Survey (2MASS), a joint project of the University of Massachusetts and
IPAC/Caltech, funded by NASA and NSF. The GALEX archive hosted by the High
Energy Astrophysics Science Archive Research Center (HEASARC), which is a
service of the Astrophysics Science Division at NASA/GSFC. The Chandra Data
Archive and the CIAO software provided by the Chandra X-ray Center. Financial
support for the project was provided by NASA through award # SOF-06-0124
issued by USRA.
## References
* Appleton et al. (2013) Appleton, P. N., Guillard, P., Boulanger, F., et al. 2013, ApJ, 777, 66, doi: 10.1088/0004-637X/777/1/66
* Appleton et al. (2017) Appleton, P. N., Guillard, P., Togi, A., et al. 2017, ApJ, 836, 76, doi: 10.3847/1538-4357/836/1/76
* Appleton et al. (2018) Appleton, P. N., Diaz-Santos, T., Fadda, D., et al. 2018, ApJ, 869, 61, doi: 10.3847/1538-4357/aaed2a
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Bakes & Tielens (1998) Bakes, E. L. O., & Tielens, A. G. G. M. 1998, ApJ, 499, 258, doi: 10.1086/305625
* Barnes & Hernquist (1991) Barnes, J. E., & Hernquist, L. E. 1991, ApJ, 370, L65, doi: 10.1086/185978
* Beirão et al. (2012) Beirão, P., Armus, L., Helou, G., et al. 2012, ApJ, 751, 144, doi: 10.1088/0004-637X/751/2/144
* Bigiel et al. (2020) Bigiel, F., de Looze, I., Krabbe, A., et al. 2020, ApJ, 903, 30, doi: 10.3847/1538-4357/abb677
* Blumenthal & Barnes (2018) Blumenthal, K. A., & Barnes, J. E. 2018, MNRAS, 479, 3952, doi: 10.1093/mnras/sty1605
* Bolatto et al. (2013) Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, ARA&A, 51, 207, doi: 10.1146/annurev-astro-082812-140944
* Burtscher et al. (2016) Burtscher, L., Davies, R. I., Graciá-Carpio, J., et al. 2016, A&A, 586, A28, doi: 10.1051/0004-6361/201527575
* Chevance et al. (2020) Chevance, M., Madden, S. C., Fischer, C., et al. 2020, MNRAS, 494, 5279, doi: 10.1093/mnras/staa1106
* Colditz et al. (2018) Colditz, S., Beckmann, S., Bryant, A., et al. 2018, Journal of Astronomical Instrumentation, 7, 1840004, doi: 10.1142/S2251171718400044
* Croxall et al. (2012) Croxall, K. V., Smith, J. D., Wolfire, M. G., et al. 2012, ApJ, 747, 81, doi: 10.1088/0004-637X/747/1/81
* Croxall et al. (2017) Croxall, K. V., Smith, J. D., Pellegrini, E., et al. 2017, ApJ, 845, 96, doi: 10.3847/1538-4357/aa8035
* da Cunha et al. (2008) da Cunha, E., Charlot, S., & Elbaz, D. 2008, MNRAS, 388, 1595, doi: 10.1111/j.1365-2966.2008.13535.x
* Daigle & Roy (2001) Daigle, A., & Roy, J.-R. 2001, ApJ, 552, 144, doi: 10.1086/320437
* De Looze et al. (2011) De Looze, I., Baes, M., Bendo, G. J., Cortese, L., & Fritz, J. 2011, MNRAS, 416, 2712, doi: 10.1111/j.1365-2966.2011.19223.x
* Díaz–Santos et al. (2014) Díaz–Santos, T., Armus, L., Charmandaris, V., et al. 2014, ApJ, 788, L17, doi: 10.1088/2041-8205/788/1/L17
* Díaz–Santos et al. (2017) —. 2017, ApJ, 846, 32, doi: 10.3847/1538-4357/aa81d7
* Draine (1978) Draine, B. T. 1978, ApJS, 36, 595, doi: 10.1086/190513
* Eskridge et al. (2000) Eskridge, P. B., Frogel, J. A., Pogge, R. W., et al. 2000, AJ, 119, 536, doi: 10.1086/301203
* Fadda & Chambers (2018) Fadda, D., & Chambers, E. T. 2018, in American Astronomical Society Meeting Abstracts, Vol. 231, American Astronomical Society Meeting Abstracts #231, 150.11
* Fadda et al. (2006) Fadda, D., Marleau, F. R., Storrie-Lombardi, L. J., et al. 2006, AJ, 131, 2859, doi: 10.1086/504034
* Fazio et al. (2004) Fazio, G. G., Hora, J. L., Allen, L. E., et al. 2004, ApJS, 154, 10, doi: 10.1086/422843
* Fischer et al. (2018) Fischer, C., Beckmann, S., Bryant, A., et al. 2018, Journal of Astronomical Instrumentation, 7, 1840003, doi: 10.1142/S2251171718400032
* Fruscione et al. (2006) Fruscione, A., McDowell, J. C., Allen, G. E., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6270, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. D. R. Silva & R. E. Doxsey, 62701V, doi: 10.1117/12.671760
* Guillard et al. (2009) Guillard, P., Boulanger, F., Pineau Des Forêts, G., & Appleton, P. N. 2009, A&A, 502, 515, doi: 10.1051/0004-6361/200811263
* Herrera–Camus et al. (2015) Herrera–Camus, R., Bolatto, A. D., Wolfire, M. G., et al. 2015, ApJ, 800, 1, doi: 10.1088/0004-637X/800/1/1
* Ho et al. (1997) Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1997, ApJ, 487, 591, doi: 10.1086/304643
* Hollenbach & McKee (1989) Hollenbach, D., & McKee, C. F. 1989, ApJ, 342, 306, doi: 10.1086/167595
* Ishibashi & Fabian (2012) Ishibashi, W., & Fabian, A. C. 2012, MNRAS, 427, 2998, doi: 10.1111/j.1365-2966.2012.22074.x
* Jameson et al. (2018) Jameson, K. E., Bolatto, A. D., Wolfire, M., et al. 2018, ApJ, 853, 111, doi: 10.3847/1538-4357/aaa4bb
* Jogee et al. (2009) Jogee, S., Miller, S. H., Penner, K., et al. 2009, ApJ, 697, 1971, doi: 10.1088/0004-637X/697/2/1971
* Kaufman et al. (1999) Kaufman, M. J., Wolfire, M. G., Hollenbach, D. J., & Luhman, M. L. 1999, ApJ, 527, 795, doi: 10.1086/308102
* Kaviraj (2014) Kaviraj, S. 2014, MNRAS, 440, 2944, doi: 10.1093/mnras/stu338
* Keel (1983) Keel, W. C. 1983, ApJS, 52, 229, doi: 10.1086/190866
* Kendall et al. (2003) Kendall, P., Magorrian, J., & Pringle, J. E. 2003, MNRAS, 346, 1078, doi: 10.1111/j.1365-2966.2003.06776.x
* Kennicutt (1998) Kennicutt, Robert C., J. 1998, ApJ, 498, 541, doi: 10.1086/305588
* Kewley et al. (2006) Kewley, L. J., Groves, B., Kauffmann, G., & Heckman, T. 2006, MNRAS, 372, 961, doi: 10.1111/j.1365-2966.2006.10859.x
* Laine & Beck (2008) Laine, S., & Beck, R. 2008, ApJ, 673, 128, doi: 10.1086/523960
* Laine & Gottesman (1998) Laine, S., & Gottesman, S. T. 1998, MNRAS, 297, 1041, doi: 10.1046/j.1365-8711.1998.01513.x
* Laine & Heller (1999) Laine, S., & Heller, C. H. 1999, MNRAS, 308, 557, doi: 10.1046/j.1365-8711.1999.02712.x
* Laine et al. (1999) Laine, S., Kenney, J. D. P., Yun, M. S., & Gottesman, S. T. 1999, ApJ, 511, 709, doi: 10.1086/306709
* Laine et al. (1998) Laine, S., Shlosman, I., & Heller, C. H. 1998, MNRAS, 297, 1052, doi: 10.1046/j.1365-8711.1998.01512.x
* Lesaffre et al. (2013) Lesaffre, P., Pineau des Forêts, G., Godard, B., et al. 2013, A&A, 550, A106, doi: 10.1051/0004-6361/201219928
* Lord (1992) Lord, S. D. 1992, A new software tool for computing Earth’s atmospheric transmission of near- and far-infrared radiation, NASA Technical Memorandum 103957
* Madden et al. (2020) Madden, S. C., Cormier, D., Hony, S., et al. 2020, A&A, 643, A141, doi: 10.1051/0004-6361/202038860
* Maiolino et al. (1999) Maiolino, R., Risaliti, G., & Salvati, M. 1999, A&A, 341, L35. https://arxiv.org/abs/astro-ph/9811237
* Makovoz & Marleau (2005) Makovoz, D., & Marleau, F. R. 2005, PASP, 117, 1113, doi: 10.1086/432977
* Malhotra et al. (2001) Malhotra, S., Kaufman, M. J., Hollenbach, D., et al. 2001, ApJ, 561, 766, doi: 10.1086/323046
# Ratio of flavour non-singlet and singlet scalar density renormalisation
parameters in $N_{\mathrm{f}}=3$ QCD
with Wilson quarks
Jochen Heitger, Fabian Joswig, Pia L. J. Petrak, Anastassios Vladikas
###### Abstract
We determine non-perturbatively the normalisation factor $r_{\rm m}\equiv
Z_{\rm S}/Z_{\rm S}^{0}$, where $Z_{\rm S}$ and $Z_{\rm S}^{0}$ are the
renormalisation parameters of the flavour non-singlet and singlet scalar
densities, respectively. This quantity is required in the computation of quark
masses with Wilson fermions and, for instance, in the renormalisation of nucleon
matrix elements of scalar densities. Our calculation involves simulations of
finite-volume lattice QCD with the tree-level Symanzik-improved gauge action,
$N_{\rm f}=3$ mass-degenerate ${\rm O}(a)$ improved Wilson fermions and
Schrödinger functional boundary conditions. The slope of the current quark
mass as a function of the subtracted Wilson quark mass is extracted both in a
unitary setup (where nearly chiral valence and sea quark masses are
degenerate) and in a non-unitary setup (where all valence flavours are chiral
and the sea quark masses are small). These slopes are then combined with
$Z\equiv Z_{\rm P}/(Z_{\rm S}Z_{\rm A})$ in order to obtain $r_{\rm m}$. A
novel chiral Ward identity is employed for the calculation of the
normalisation factor $Z$. Our results cover the range of gauge couplings
corresponding to lattice spacings below $0.1\,$fm, for which $N_{\rm f}=2+1$
QCD simulations in large volumes with the same lattice action are typically
performed.
## 1 Introduction
Scalar and pseudoscalar flavour singlet and non-singlet dimension-3 bilinear
operators have the same anomalous dimension, since they belong to the same
chiral multiplet. The same is true for their renormalisation parameters,
provided that the regularisation does not break chiral symmetry. Otherwise,
the renormalisation parameters of the chiral multiplet components differ by
finite terms. This is the case for the lattice regularisation with Wilson
fermions. For example, the renormalisation parameters of the non-singlet
scalar and pseudoscalar densities (denoted as $Z_{\rm S}$ and $Z_{\rm P}$,
respectively) have a finite ratio which is a polynomial of the bare gauge
coupling $g_{0}$. This ratio can be determined by chiral Ward identities; see
Refs. [1, 2]. (In practice, distinct chiral Ward identities are used for the
computation of the ratio $Z_{\rm S}/(Z_{\rm P}Z_{\rm A})$ and of $Z_{\rm A}$;
the two results are subsequently multiplied to give $Z_{\rm S}/Z_{\rm P}$.) Since
$Z_{\rm P}$ and $Z_{\rm S}$ are scale dependent, imposing a renormalisation
scheme is necessary to fix one of them, and the other can be obtained using
the scheme-independent ratio $Z_{\rm S}/Z_{\rm P}$. (Examples of
renormalisation schemes are $\overline{\rm MS}$, RI/(S)MOM [3, 4], the
Schrödinger functional (SF) [5] and the chirally rotated Schrödinger
functional ($\chi$SF) [6].) In this way the
renormalised scalar and pseudoscalar densities are defined consistently in the
same scheme, with the same anomalous dimension and renormalisation group (RG)
running, and chiral symmetry is restored in the continuum limit. The ratio
$Z_{\rm S}/Z_{\rm P}$ has been computed for several gauge and Wilson fermion
actions (standard, improved etc.) in the quenched approximation [2, 7, 8, 9,
10, 11], with two dynamical quarks ($N_{\rm f}=2$ QCD) [12], and with three
dynamical quarks ($N_{\rm f}=3$ QCD) [13, 14, 15, 16].
Far less progress has been made on the computation of the ratio of the
renormalisation parameters of the non-singlet and singlet scalar densities,
$r_{\rm m}\equiv Z_{\rm S}/Z_{\rm S}^{0}$. For chirally symmetric
regularisations $r_{\rm m}=1$ holds, while for Wilson fermions $r_{\rm m}$ is
a (finite) polynomial of the gauge coupling, arising from sea quark loops in
the quark propagator. In the quenched approximation, $r_{\rm m}=1$.
As explained in Ref. [17], the lowest-order non-trivial perturbative
contribution to this quantity is a two-loop effect; i.e., $r_{\rm m}=1+{\rm
O}(g_{0}^{4})$. In Ref. [18] the ${\rm O}(g_{0}^{4})$ perturbative term has
been calculated for several lattice actions. Non-perturbative estimates of
this quantity have been reported in Ref. [13] at two values of the gauge
coupling for $N_{\rm f}=2+1$ QCD with the tree-level Symanzik-improved gauge
action [19] and the non-perturbatively improved Wilson-clover fermion action
[20]. This is the regularisation chosen by the CLS (Coordinated Lattice
Simulations) initiative which carries out QCD simulations with $N_{\rm f}=2+1$
flavours, on large physical volumes, for a range of bare couplings
corresponding to a hadronic regime [21, 22, 13, 23]. These CLS ensembles are
suitable for the computation of correlation functions, from which low-energy
hadronic quantities can be evaluated. In parallel, our group is performing
$N_{\rm f}=3$ simulations in the same range of bare gauge couplings, but for
small-volume lattices with Schrödinger functional boundary conditions and
nearly-chiral quark masses. These ensembles are used for the numerical
determination of the necessary renormalisation parameters and Symanzik
improvement coefficients, see Refs. [24, 15, 25, 26, 27, 28, 14] that have
various applications in lattice QCD when using this discretisation of Wilson
fermions. The present work provides high-precision estimates of $r_{\rm m}$
obtained in the same computational framework.
As seen from eq. (2.2) below, $(r_{\rm m}-1)$ contributes an ${\rm
O}(g_{0}^{4})$ term to the renormalisation of the quark masses [17]. This is
expected to be a small effect. Symanzik $\mathrm{O}(a)$ counterterms
containing $r_{\rm m}$ are often neglected in light quark mass determinations;
cf. Ref. [29]. In practical computations, however, $r_{\mathrm{m}}$ can be
relevant at $\mathrm{O}(a)$, especially when dealing with heavy flavours, and
should be taken into account in order to achieve full $\mathrm{O}(a)$
improvement; see, for example, eq. (2.13) in Ref. [30]. Another application
where $r_{\rm m}$ plays a prominent rôle is the nucleon sigma-term, which is
defined in terms of nucleon matrix elements of flavour singlet scalar
densities; see Refs. [31, 32] for example and [33, 34, 35] for more recent
works. A direct determination of $Z_{\rm S}^{0}$ is not as straightforward as
that of $Z_{\rm S}$, the former also requiring the computation of two-boundary
(“disconnected”) quark diagrams. This problem is circumvented by extracting
$Z_{\rm S}^{0}$ as the ratio $Z_{\rm S}/r_{\rm m}$, in accordance with the
definition $r_{\rm m}\equiv Z_{\rm S}/Z_{\rm S}^{0}$.
Our computation of $r_{\rm m}$ is based on the relation between the current
(PCAC) mass $m$ and the subtracted quark mass $m_{\rm q}$. Close to the chiral
limit, $m(m_{\rm q})$ is a linear function with a slope that depends on the
details of the QCD model being simulated. In a unitary theory with degenerate
sea and valence quark masses, the slope of $m(m_{\rm q})$ is $Zr_{\rm m}$,
where $Z\equiv Z_{\rm P}/(Z_{\rm S}Z_{\rm A})$ and $Z_{\rm A}$ is the non-
singlet axial current normalisation. On the other hand, in a non-unitary
theory with chiral valence subtracted quark masses ($m_{\rm q}^{\rm val}=0$)
and small degenerate sea quark masses $m_{\rm q}^{\rm sea}\neq 0$, the slope
of $m(m_{\rm q}^{\rm sea})$ is $Z(r_{\rm m}-1)$. The two slopes are accessible
from two distinct sets of measurements at several common values of the bare
coupling $g_{0}$. The results are combined to give estimates of $r_{\rm
m}(g_{0}^{2})$. This approach is described in Section 2.
Alternatively, each of the two slopes $Zr_{\rm m}$ and $Z(r_{\rm m}-1)$ may be
combined with an independent estimate of $Z$, such as the results of Refs.
[14, 15]. In the present work we prefer to use a novel determination of $Z$,
relying on a chiral Ward identity which differs from the one of Ref. [15].
This identity is derived in Section 3.
In Section 4 we present our simulation setup for $N_{\rm f}=3$ QCD with
lattices of small physical volumes and Schrödinger functional boundary
conditions; these serve to numerically implement the strategies outlined in
the foregoing section. Most of our gauge field ensembles were already
generated in the context of previous works; cf. Refs. [15, 25, 26, 27, 28,
14]. Some new ensembles have also been generated, in order to cover the region
close to the origin of the function $m(m_{\rm q})$ more evenly and assess its
slope reliably.
Our results for $r_{\rm m}$, based on various combinations of $Zr_{\rm m}$,
$Z(r_{\rm m}-1)$, and $Z$ are discussed in Section 5. Different determinations
of $r_{\rm m}$ are compared, allowing us to settle for a conservative final
estimate with reliable systematic errors. Our final result is that of eq.
(5.5). In Table 5 we also list $r_{\rm m}(g_{0}^{2})$ for the
$g_{0}^{2}$-values at which CLS simulations are being performed for the
computations of hadronic quantities in $N_{\rm f}=2+1$ QCD.
In the final section we sum up our results and their uses in lattice QCD. More
detailed calculations and definitions of the correlation functions employed
can be found in Appendices A and B. A comparison of $Z$ determinations and
corresponding scaling tests can be found in Appendix C.
## 2 Wilson quark masses
In this section we recapitulate the basic quark mass definitions, namely
subtracted and current (PCAC) quark masses, and discuss how to obtain the
products $Zr_{\mathrm{m}}$ and $Z(r_{\mathrm{m}}-1)$ from relations between
the two. For any unexplained notation we refer to Ref. [14]. The starting
point is the subtracted bare quark mass of flavour $i=1,\ldots,N_{\rm f}$,
$m_{{\rm q},i}\equiv m_{0,i}-m_{\rm
crit}=\dfrac{1}{2a}\Big{(}\dfrac{1}{\kappa_{i}}-\dfrac{1}{\kappa_{\rm
crit}}\Big{)}\,,$ (2.1)
where $\kappa_{i}$ is the hopping parameter for flavour $i$, $\kappa_{\rm
crit}$ its value in the chiral limit, and $a$ is the lattice spacing. In terms
of the subtracted masses $m_{{\rm q},i}$, the corresponding renormalised quark
masses are given by
$\displaystyle m_{i,\rm R}=Z_{\rm m}\Bigg{[}m_{{\rm q},i}\,+\,(r_{\rm
m}-1)\dfrac{{\rm Tr}M_{\rm q}}{N_{\rm f}}\Bigg{]}+{\rm O}(a)\,,$ (2.2)
where $M_{\rm q}={\rm diag}(m_{{\rm q},1},\ldots,m_{{\rm q},N_{\rm f}})$ is
the $N_{\rm f}\times N_{\rm f}$ bare quark mass matrix.
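As a concrete illustration of eq. (2.2), the sketch below evaluates the renormalised masses for hypothetical values of $Z_{\rm m}$, $r_{\rm m}$ and the subtracted masses; none of these numbers come from the simulations of this paper.

```python
import numpy as np

# Illustrative evaluation of eq. (2.2); all numbers are placeholders.
Z_m = 1.5                              # hypothetical Z_m(g0^2, a*mu)
r_m = 1.05                             # hypothetical r_m(g0^2)
m_q = np.array([0.003, 0.003, 0.080])  # bare subtracted masses, N_f = 3

N_f = len(m_q)
singlet_term = (r_m - 1.0) * m_q.sum() / N_f  # (r_m - 1) * Tr(M_q) / N_f
m_R = Z_m * (m_q + singlet_term)              # eq. (2.2), up to O(a)
```

Note that even a flavour with $m_{{\rm q},i}=0$ would acquire a non-zero renormalised mass as long as ${\rm Tr}M_{\rm q}\neq 0$, which is precisely the consequence of eq. (2.2) stated above.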
We recall in passing that the renormalisation parameter $Z_{\rm
m}(g_{0}^{2},a\mu)$ depends on the renormalisation scale $\mu$ and diverges
logarithmically in the ultraviolet. It is the inverse of $Z_{\rm
S}(g_{0}^{2},a\mu)$, the renormalisation parameter of the flavour non-singlet
scalar density operator. A mass independent renormalisation scheme is implied
throughout this work. In such a scheme operator renormalisation parameters
(e.g. $Z_{\rm P},Z_{\rm m},Z_{\rm S}$), current normalisations (i.e. $Z_{\rm
A},Z_{\rm V}$) and $r_{\rm m}$ are functions of the squared bare gauge
coupling $g_{0}^{2}$. In a non-perturbative determination at non-zero quark
mass, they are affected by ${\rm O}(am_{{\rm q},i})$, ${\rm O}(a{\rm Tr}M_{\rm
q})$, and ${\rm O}(a\Lambda_{\rm QCD})$ discretisation effects, which are part
of their operational definition. As pointed out in Ref. [17], the term
$(r_{\rm m}-1)$ multiplies ${\rm Tr}M_{\rm q}$, so it arises from a mass
insertion in a quark loop. In perturbation theory it is a two-loop effect,
contributing at ${\rm O}(g_{0}^{4})$. Its non-perturbative determination is
the main purpose of this paper. An important consequence of eq. (2.2) is that
a renormalised mass $m_{i,\rm R}$ goes to the chiral limit only when all
subtracted masses $m_{{\rm q},1},\dots,m_{{\rm q},N_{\rm f}}$ vanish.
Alternatively, a bare current (PCAC) quark mass $m_{ij}$ can be defined
through the following relation:
$({\widetilde{\partial}_{\mu}})_{x}\big{\langle}(A_{{\rm
I}})^{ij}_{\mu}(x)\,\mathcal{O}^{ji}\big{\rangle}=2m_{ij}{\big{\langle}{P}^{ij}(x)\,\mathcal{O}^{ji}\big{\rangle}}\,.$
(2.3)
The quantity $m_{ij}$ is distinct from the subtracted bare quark masses, but
it is related to the mass average $(m_{{\rm q},i}+m_{{\rm q},j})/2$; see eq.
(2.9) below. The flavour non-singlet bare axial current and the pseudoscalar
density are given by
$A_{\mu}^{ij}(x)\equiv\bar{\psi}_{i}(x)\,\gamma_{\mu}\gamma_{5}\,\psi_{j}(x)\,,\qquad
P^{ij}(x)\equiv\bar{\psi}_{i}(x)\,\gamma_{5}\,\psi_{j}(x)\,,$ (2.4)
with indices $i,j$ denoting two distinct flavours ($i\neq j$). The
pseudoscalar density $P^{ij}$ and the current $(A_{{\rm I}})^{ij}_{\mu}\equiv
A^{ij}_{\mu}+ac_{\rm A}{\widetilde{\partial}_{\mu}}P^{ij}$ are Symanzik-
improved in the chiral limit, with the improvement coefficient $c_{\rm
A}(g_{0}^{2})$ being in principle only a function of the gauge coupling. In
these definitions, ${\widetilde{\partial}_{\mu}}$ denotes the average of the
usual forward and backward derivatives, defined as $a\partial_{\mu}f(x)\equiv
f(x+a\hat{\mu})-f(x)$ and $a\partial_{\mu}^{\ast}f(x)\equiv
f(x)-f(x-a\hat{\mu})$, respectively. The source
operator $\mathcal{O}^{ji}$ is defined in a region of space-time that does not
include the point $x$, so as to avoid contact terms. In the ${\rm O}(a)$
improved theory, the renormalised axial current and pseudoscalar density are
$\displaystyle(A_{{\rm R}})^{ij}(x)$ $\displaystyle=Z_{\rm
A}(g_{0}^{2})(A_{\rm I})_{\mu}^{ij}(x)+{\rm O}(am_{\mathrm{q}},a^{2})\,,$
(2.5) $\displaystyle(P_{{\rm R}})^{ij}(x)$ $\displaystyle=Z_{\rm
P}(g_{0}^{2},a\mu)P^{ij}(x)+{\rm O}(am_{\mathrm{q}},a^{2})\,.$ (2.6)
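The forward, backward, and symmetrised derivatives entering the improved current can be sketched numerically. This is a minimal illustration in lattice units ($a=1$); the periodic wrap-around of `np.roll` is an assumption made purely for brevity, whereas the Schrödinger functional actually has fixed temporal boundaries.

```python
import numpy as np

# Lattice derivatives in lattice units a = 1; periodic wrap-around assumed.
def d_fwd(f, axis=0):
    return np.roll(f, -1, axis=axis) - f   # a*d_mu f(x) = f(x+a) - f(x)

def d_bwd(f, axis=0):
    return f - np.roll(f, 1, axis=axis)    # a*d*_mu f(x) = f(x) - f(x-a)

def d_sym(f, axis=0):
    # average of forward and backward derivatives: the symmetrised derivative
    return 0.5 * (d_fwd(f, axis) + d_bwd(f, axis))
```

For a field that is linear in the coordinate, the symmetrised derivative is exact at interior points.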
The normalisation of the axial current $Z_{\rm A}(g_{0}^{2})$ is scale
independent, depending only on the squared gauge coupling $g_{0}^{2}$. The
renormalisation parameter $Z_{\rm P}(g_{0}^{2},a\mu)$ (determined, say in the
Schrödinger functional scheme of Ref. [5]) additionally depends on the
renormalisation scale $\mu$ and diverges logarithmically in the ultraviolet.
The PCAC relation, expressed by renormalised fields,
$\displaystyle({\widetilde{\partial}_{\mu}})_{x}\big{\langle}\,(A_{{\rm
R}})^{ij}_{\mu}(x)\ \mathcal{O}^{ji}\,\big{\rangle}$ $\displaystyle=(m_{{\rm
R},i}+m_{{\rm R},j})\,\big{\langle}\,(P_{\mathrm{R}})^{ij}(x)\
\mathcal{O}^{ji}\,\big{\rangle}\,,$ (2.7)
valid up to discretisation effects in the continuum, combined with eqs.
(2.3)–(2.6), implies that
$\dfrac{m_{i,\rm R}+m_{j,\rm R}}{2}=\dfrac{Z_{\rm A}}{Z_{\rm P}}m_{ij}+{\rm
O}(am_{\mathrm{q}},a^{2})\,.$ (2.8)
If we calculate the average mass $(m_{i,\rm R}+m_{j,\rm R})/2$ from eq. (2.2)
and equate the result to the r.h.s. of eq. (2.8), we obtain an expression which
relates subtracted and PCAC bare masses:
$m_{ij}=Z\Bigg{[}\dfrac{(m_{{\rm q},i}+m_{{\rm q},j})}{2}+(r_{\rm
m}-1)\dfrac{{\rm Tr}M_{\rm q}}{N_{\rm f}}\Bigg{]}+{\rm
O}(am_{\mathrm{q}},a^{2})\,,$ (2.9)
where the product of the renormalisation parameters $Z(g_{0}^{2})\equiv Z_{\rm
P}(g_{0}^{2},\mu)/(Z_{\rm S}(g_{0}^{2},\mu)Z_{\rm A}(g_{0}^{2}))$ is scale
independent. We now exploit eq. (2.9) in two ways:
(1) In a theory with mass-degenerate quarks ($m_{{\rm q},i}=m_{{\rm q},j}={\rm
Tr}M_{\rm q}/N_{\rm f}$), it reduces to
$\displaystyle m$ $\displaystyle=Zr_{\rm m}m_{\rm q}+{\rm
O}(am_{\mathrm{q}},a^{2})$ (2.10) $\displaystyle=Zr_{\rm
m}\dfrac{1}{2a}\Big{(}\dfrac{1}{\kappa}-\dfrac{1}{\kappa_{\rm
crit}}\Big{)}+{\rm O}(am_{\mathrm{q}},a^{2})\,.$ (2.11)
In the above equation, flavour indices have been dropped from the quark masses
$m_{ij},m_{{\rm q},i}$ and the hopping parameter $\kappa_{i}$. This
simplification of notation will be adopted on most occasions below. Thus,
modelling the current quark mass $am$ as a function of $1/\kappa$ for values
of $\kappa$ close to $\kappa_{\rm crit}$, we obtain the latter as the root of
the function $am(1/\kappa)$ and the combination $Zr_{\rm m}$ as the slope of
the same curve.
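A minimal numerical sketch of this step follows, with a synthetic data set generated from eq. (2.11); the values of $Zr_{\rm m}$ and $\kappa_{\rm crit}$ used to generate the data are invented placeholders. Fitting $am$ against $1/(2\kappa)$ makes the slope directly equal to $Zr_{\rm m}$.

```python
import numpy as np

# Synthetic am(1/kappa) data from eq. (2.11); Zr_m and kappa_crit are invented.
Zr_m_true, kappa_crit_true = 1.3, 0.13700
kappa = np.array([0.13650, 0.13670, 0.13690, 0.13695])
am = 0.5 * Zr_m_true * (1.0 / kappa - 1.0 / kappa_crit_true)

# Linear fit am = s * (1/(2*kappa)) + c:
#   slope s = Z*r_m;  root of the fitted line gives kappa_crit = -s/(2c).
s, c = np.polyfit(0.5 / kappa, am, 1)
kappa_crit = -s / (2.0 * c)
```

In the actual analysis the fit is of course performed on measured PCAC masses with errors, but the algebra relating slope, intercept, $Zr_{\rm m}$ and $\kappa_{\rm crit}$ is the same.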
(2) Once the critical hopping parameter $\kappa_{\rm crit}$ is available from
the previous step (1), we use a non-unitary setup where valence and sea quarks
of the same flavour have different bare subtracted masses $m_{{\rm q},i}^{\rm
val}\neq m_{{\rm q},i}^{\rm sea}$. In eq. (2.9), masses $m_{{\rm q},i}$ and
$m_{{\rm q},j}$ on the r.h.s. are valence quark contributions, while ${\rm
Tr}M_{\rm q}$ stands for the trace of sea quark masses; see Refs. [17, 14] for
detailed explanations. In particular, we set $\kappa^{\rm val}=\kappa_{\rm
crit}$, so as to ensure $m_{\rm q}^{\rm val}=0$ for all valence flavours.
Moreover sea quark masses are taken to be small, degenerate, and non-zero
(i.e. $\kappa^{\rm sea}\neq\kappa_{\rm crit}$, ensuring $m_{\rm q}^{\rm
sea}\neq 0$ for all sea flavours). With these conditions, the current quark
mass of eq. (2.9) reduces to
$m=Z(r_{\rm m}-1)m_{\rm q}^{\rm sea}+{\rm O}(am_{\mathrm{q}},a^{2})\,.$ (2.12)
It is remarkable that, with non-zero bare subtracted sea quark masses (i.e.
$m_{\rm q}^{\rm sea}\neq 0$), none of the current quark masses in this setup is
chiral (i.e. $m_{ij}\neq 0,\forall i,j$), even though all subtracted valence
quark masses vanish (i.e. $m_{{\rm q},i}^{\rm val}=0,\forall i$). From eq. (2.12) we
see that, if we compute $am$ as a function of $am_{\rm q}^{\rm sea}$ for
several sea masses, the slope of the functions gives an estimate of $Z(r_{\rm
m}-1)$.
The two slopes $Zr_{\rm m}$ and $Z(r_{\rm m}-1)$, computed in the two
different settings described above, but at the same gauge couplings
$g_{0}^{2}$, can be combined yielding estimates of $r_{\rm m}(g_{0}^{2})$; see
Subsection 5.3 for details. We stress that the above discussion concerns
relations which suffer from ${\rm O}(a)$ discretisation effects. For the quark
masses, such effects may be removed by introducing Symanzik counterterms,
leaving us with ${\rm O}(a^{2})$ discretisation errors. These counterterms
have been worked out in Refs. [36, 17]. In Ref. [17] (see also eq. (2.10) of
Ref. [14]) the full ${\rm O}(am_{\rm q})$ contributions, omitted in eq. (2.9)
above, are written down explicitly. Such contributions are complicated and
taking them all into account could compromise the numerical stability of our
procedure to extract the quantities in question. We prefer a simpler and more
robust strategy, consisting of working with small quark masses so that ${\rm
O}(a)$-effects in eq. (2.9) may be safely dropped. This must of course be
checked a posteriori, by ensuring that the function $m(m_{\rm q})$ is linear
close to the origin, where our simulations are performed. The only improvement
coefficients used in this work are $c_{\mathrm{sw}}$ of the clover action and
$c_{\rm A}$ of the axial current (entering the PCAC mass).
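The algebra that turns the two slopes into $r_{\rm m}$ (and, as a by-product, $Z$) is elementary; a sketch with hypothetical slope values:

```python
# Combining the two measured slopes (illustrative numbers only):
#   s1 = Z * r_m        from the unitary setup, eq. (2.11)
#   s2 = Z * (r_m - 1)  from the non-unitary setup, eq. (2.12)
s1, s2 = 1.30, 0.26

Z = s1 - s2            # Z = Z_P / (Z_S * Z_A), scale independent
r_m = s1 / (s1 - s2)   # ratio of slopes eliminates Z
```

In practice the statistical correlation between the two slope determinations must be propagated, e.g. by resampling, but the central-value algebra is exactly this.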
There is an important subtlety concerning results obtained with Wilson
fermions in a Symanzik-improved setup: the bare parameters of the theory (i.e.
the gauge coupling $g_{0}^{2}$ and the $N_{\rm f}=2+1$ quark masses) are to be
varied, while staying on lines of constant physics within systematic
uncertainties of ${\rm O}(a^{2})$. In particular, if the improved bare gauge
coupling [36]
$\tilde{g}_{0}^{2}\equiv g_{0}^{2}\Big{(}1+\dfrac{1}{N_{\rm
f}}b_{g}(g_{0}^{2})a{\rm Tr}M_{\rm q}\Big{)}$ (2.13)
is kept fixed in the simulations, so is the lattice spacing, with fluctuations
being attributed to ${\rm O}(a^{2})$ effects [21]. This implies that, once
$\kappa_{\rm crit}$ has been evaluated as a function of $g_{0}^{2}$,
(re)normalisation parameters and improvement coefficients should be treated as
functions of $\tilde{g}_{0}^{2}$, rather than $g_{0}^{2}$; e.g. $Z_{\rm
A}(\tilde{g}_{0}^{2}),Z_{\rm
P}(\tilde{g}_{0}^{2},a\mu),Z(\tilde{g}_{0}^{2}),r_{\rm m}(\tilde{g}_{0}^{2})$
etc. To the extent that we are working in the chiral limit, or very close to
it (i.e. very light quark masses), this difference is immaterial. This is why
in the present work we always express our results as functions of $g_{0}^{2}$.
However, when they are to be used away from the chiral limit at low-energy
scales (see Refs. [22, 29]), this difference must be taken into account
properly. We shall elaborate further on this point when summarising our work
in Section 6.
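For illustration, eq. (2.13) amounts to a one-line multiplicative correction of the bare coupling; the values of $b_{g}$ and $a{\rm Tr}M_{\rm q}$ below are invented placeholders, not determinations from this work.

```python
# Sketch of eq. (2.13): improved bare coupling kept fixed along an LCP.
N_f = 3
g0_sq = 1.8            # bare coupling g0^2 (illustrative)
b_g = 0.012            # hypothetical value of b_g(g0^2)
a_TrMq = 0.05          # a * Tr(M_q) in lattice units (illustrative)

g0_tilde_sq = g0_sq * (1.0 + b_g * a_TrMq / N_f)   # eq. (2.13)
```

Close to the chiral limit $a{\rm Tr}M_{\rm q}$ is tiny, so $\tilde{g}_{0}^{2}\approx g_{0}^{2}$, which is why the distinction is immaterial for the nearly-chiral ensembles of this work.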
## 3 Ward identity determination of $Z$
In the previous section we have shown how the quantities $Zr_{\rm m}$ and
$Z(r_{\rm m}-1)$ can be estimated from relations between suitably chosen
current and subtracted Wilson quark masses. They may then straightforwardly be
combined to give $r_{\rm m}$ and $Z$. The latter quantity has already been
measured in our setup ($N_{\rm f}=3$ lattice QCD with Schrödinger functional
boundary conditions) in two ways: either by using appropriate combinations of
current and subtracted quark masses with different flavours [14], or from
chiral Ward identities [15] via $Z\equiv Z_{\rm P}/(Z_{\rm A}Z_{\rm S})$. Here
we will describe yet another direct method, based on a new Ward identity, very
similar to the one of Ref. [15]. The reader is referred to that work for
details, notation etc.
We consider a product of two composite operators ${\cal O}\equiv S^{b}(y){\cal
O}^{c}$, defined as
$\displaystyle S^{b}(y)$ $\displaystyle\equiv$
$\displaystyle\mathrm{i}\bar{\psi}(y)T^{b}\psi(y)$ $\displaystyle{\cal O}^{c}$
$\displaystyle\equiv$ $\displaystyle\mathrm{i}\dfrac{a^{6}}{L^{3}}\sum_{\bf
u,v}\bar{\zeta}({\bf u})\gamma_{5}T^{c}\zeta({\bf v})\,,$ (3.1)
where $T^{b}$ and $T^{c}$ are generators of $SU(N_{\mathrm{f}})$. The former
operator is the flavour non-singlet scalar density, located in the bulk of
space-time, while the latter resides at the $x_{0}=0$ Dirichlet time boundary
of the Schrödinger functional. (For reasons of convenience, we have adopted a
slightly different notation in this section: the flavour content of operators
like $S^{b}$ or ${\cal O}^{c}$ is determined by a single flavour index $b$ or
$c$, corresponding to its flavour matrix $T^{b}$ or $T^{c}$. The fermion
fields $\psi$ and $\bar{\psi}$ of these operators are columns in flavour
space. This is to be contrasted with the notation of Section 2, where we have
introduced operators like $P^{ij}$ and $\mathcal{O}^{ji}$, which carry
explicit indices referring to the flavour of the fields
$\psi_{j},\bar{\psi}_{i}$ etc.) The
Ward identity of interest is obtained by performing axial variations on ${\cal
O}$ in a region $R$, chosen to be the space-time volume between the hyper-
planes at $t_{1}$ and $t_{2}$ where $t_{1}<t_{2}$. With ${\cal O}^{c}$ lying
outside $R$, we have $\delta_{\rm A}{\cal O}=[\delta_{\rm A}S^{b}(y)]{\cal
O}^{c}$ and
$\delta_{\rm
A}S^{b}(x)=\epsilon^{a}\Big{[}d^{abe}P^{e}(x)+\dfrac{\delta^{ab}}{N_{\rm
f}}\bar{\psi}(x)\psi(x)\Big{]}\,.$ (3.2)
In what follows we simplify matters by always working with $a\neq b$, so as to
eliminate the second contribution on the r.h.s. of the above expression. In
analogy to the derivation exposed in Ref. [15], we arrive at the formal
continuum Ward identity
$\displaystyle\int\mathrm{d}^{3}{\bf y}\int\mathrm{d}^{3}{\bf
x}\Big{\langle}\Big{[}A_{0}^{a}(t_{2};{\bf x})-A_{0}^{a}(t_{1};{\bf
x})\Big{]}S^{b}(y_{0};{\bf y}){\cal O}^{c}\Big{\rangle}$
$\displaystyle-2m\int\mathrm{d}^{3}{\bf y}\int\mathrm{d}^{3}{\bf
x}\int_{t_{1}}^{t_{2}}\mathrm{d}x_{0}\langle P^{a}(x_{0};{\bf
x})S^{b}(y_{0};{\bf y}){\cal O}^{c}\rangle$ (3.3)
$\displaystyle=-d^{abe}\int\mathrm{d}^{3}{\bf y}\,\,\langle P^{e}(y){\cal
O}^{c}\rangle\,.$
Next we adapt the previous formal manipulations to the lattice regularisation
with Schrödinger functional boundary conditions. The pseudoscalar operator
${\cal O}^{c}$ is defined on the $x_{0}=0$ time boundary. Ward identity (3.3)
then becomes:
$\displaystyle Z_{\rm A}Z_{\rm S}a^{6}\Bigg\{\sum_{{\bf x},{\bf
y}}\,\left\langle\Big[(A_{\rm I})^{a}_{0}(t_{2};{\bf x})-(A_{\rm
I})^{a}_{0}(t_{1};{\bf x})\Big]\,S^{b}(y_{0};{\bf y})\,{\cal
O}^{c}\right\rangle$ $\displaystyle-2am\sum_{{\bf x},{\bf
y}}\sum_{x_{0}=t_{1}}^{t_{2}}w(x_{0})\,\langle P^{a}(x_{0};{\bf
x})\,S^{b}(y_{0};{\bf y})\,{\cal O}^{c}\rangle\Bigg\}$ (3.4)
$\displaystyle=-d^{abe}Z_{\rm P}\,\,a^{3}\sum_{\bf y}\langle\,P^{e}(y)\,{\cal
O}^{c}\rangle+{\rm O}(am,a^{2})\,.$
In this expression repeated flavour indices $e$ are summed, as usual. The
weight factor is $w(x_{0})=1/2$ for $x_{0}\in\{t_{1},t_{2}\}$ and
$w(x_{0})=1$ otherwise. It is introduced in order to implement the trapezoidal
rule for discretising integrals. Quark masses are degenerate and $m$ is the
current quark mass.
The last step is to perform the Wick contractions in Ward identity (3.4). How
this is done is explained in Appendix B; eventually, flavour factors drop out
and we are left with a Ward identity that translates into traces of products
of quark propagators and $\gamma$-matrices, graphically depicted in Fig. 1.
Solving for $Z$ we get
$\displaystyle Z\equiv\dfrac{Z_{\rm P}}{Z_{\rm A}Z_{\rm S}}=-\dfrac{f_{\rm
AS}^{\mathrm{I}}(t_{2},y_{0})-f_{\rm
AS}^{\mathrm{I}}(t_{1},y_{0})-2am\tilde{f}_{\rm PS}(t_{2},t_{1},y_{0})}{f_{\rm
P}(y_{0})}+{\rm O}(am,a^{2})\,,$ (3.5)
where dependencies are suppressed on the l.h.s. Assuming that we work in the
chiral limit (or with nearly-vanishing quark masses, so that ${\rm O}(am)$
effects may be safely neglected), the above Ward identity is valid up to ${\rm
O}(a^{2})$ discretisation errors in lattice QCD with Wilson quarks. In this
spirit, terms proportional to Symanzik $b$-coefficients may also be safely
ignored. (This is even true for light (up/down, strange) non-chiral quark
masses, as explicitly demonstrated in Ref. [29], using the $b$-coefficients of
Ref. [14].) The renormalisation factor of the external source ${\cal O}^{c}$ is
not taken into consideration, as it cancels out in the ratio (3.5). The term
proportional to the current quark mass $m$ may also be dropped close to the
chiral limit, but since we are working with masses which are not strictly
zero, it could be advantageous to keep it in practice. In fact, it was found
in Refs. [26, 15] that this term stabilizes the chiral extrapolation leading
to smaller errors. This turns out to be true also in our case, as we will show
in Subsection 5.2 and Fig. 5.
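Numerically, the estimator of eq. (3.5) is a simple ratio of measured correlators. The sketch below uses invented stand-in values for the Monte Carlo averages of $f_{\rm AS}^{\mathrm{I}}$, $\tilde{f}_{\rm PS}$ and $f_{\rm P}$ at fixed $(t_{1},t_{2},y_{0})$.

```python
# Sketch of the Z estimator of eq. (3.5); all correlator values are invented.
f_AS_t2 = -0.412       # f_AS^I(t2, y0)
f_AS_t1 = -0.385       # f_AS^I(t1, y0)
f_PS    =  0.071       # tilde f_PS(t2, t1, y0), the mass-term correlator
f_P     =  0.035       # f_P(y0)
am      =  0.0004      # current quark mass in lattice units

# eq. (3.5), up to O(am, a^2) discretisation errors
Z = -(f_AS_t2 - f_AS_t1 - 2.0 * am * f_PS) / f_P
```

The $2am\tilde{f}_{\rm PS}$ term is tiny near the chiral limit but is kept, mirroring the observation above that it stabilises the chiral extrapolation.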
(a) Diagram $f_{\rm P}$
(b) Diagram $f_{\rm AS;1}$
(c) Diagram $f_{\rm AS;2}$
Figure 1: The trace diagrams contributing to the expectation values of $f_{\rm
P}$, defined in eq. (B.1) (diagram (a)) and $f_{\rm AS}$, defined in eq. (B.3)
(diagrams (b) and (c)). The wall represents the time slice $x_{0}=0$ with a
$\gamma_{5}$ Dirac matrix between circles. The squares in the bulk represent
either the insertions of a pseudoscalar operator $P(y)$ (diagram (a)) or a
scalar operator $S(y)$ (diagrams (b) and (c)). The diamonds stand for an axial
operator $A_{0}(x)$. The open circles correspond to the boundary fields
$\zeta$, while the filled circles denote $\bar{\zeta}$. The diagrams
schematically represent traces, formed by starting from any point and
following the lines (quark propagators) until we close the loop. The time
ordering of points $x$ and $y$ is left unspecified in these diagrams.
It is interesting to compare Ward identity (3.3) with those of Ref. [15]:
* •
In Ref. [15] the flavour factors gave rise to a multitude of identities, which
were combined in order to increase the signal-to-noise ratio, while here we
only have one identity. On these grounds one could expect that the numerical
results of Ref. [15] are more precise than the ones from the Ward identity
introduced here.
* •
On the other hand, the identities of Ref. [15] involved: (i) correlation
functions with one operator insertion in the bulk of the lattice and one wall
source at each time slice; cf. Fig. 1 in that work; (ii) correlation functions
with two operator insertions in the bulk and one wall source at each time
slice; cf. Fig. 2 in that work. Here we have: (i) a correlation function with
one operator insertion in the bulk and one wall source; (ii) correlation
functions with two operator insertions in the bulk and one wall source. These
somewhat simpler correlation functions illustrated in Fig. 1 above are
expected to have less statistical fluctuations. From this point of view, the
results of the present work are expected to gain in accuracy.
Thus, one of our aims is to establish which of the two approaches leads to
more accurate results. This is discussed in Subsection 5.2 and Appendix C.
## 4 Numerical setup
$(L/a)^{3}\times T/a$ | $\beta$ | $\kappa$ | #REP | #MDU | ID | $a$ (in fm)
---|---|---|---|---|---|---
$12^{3}\times 17$ | 3.3 | 0.13652 | 20 | 20480 | A1k1 | $0.1045(18)$
| | 0.13648 | 5 | 6876 | A1k3 |
| | 0.13650 | 20 | 96640 | A1k4 |
$12^{3}\times 18$ | | $0.13612$ | 4 | 41600 | A3k1 |
| | $0.13627$ | 4 | 41600 | A3k2 |
| | $0.13593$ | 4 | 41600 | A3k3 |
| | $0.136444$ | 4 | 41600 | A3k4 |
| | $0.136575$ | 4 | 41600 | A3k5 |
| | $0.136385$ | 4 | 41600 | A3k6 |
$14^{3}\times 21$ | 3.414 | 0.13690 | 32 | 38400 | E1k1 | $0.08381(68)$
| | 0.13695 | 48 | 57600 | E1k2 |
$14^{3}\times 20$ | | 0.13656 | 18 | 60480 | E2k1 |
| | 0.13675 | 18 | 60480 | E2k2 |
$16^{3}\times 23$ | 3.512 | 0.13700 | 2 | 20480 | B1k1 | $0.06954(43)$
| | 0.13703 | 1 | 8192 | B1k2 |
| | 0.13710 | 2 | 16384 | B1k3 |
| | 0.13714 | 1 | 27856 | B1k4 |
$16^{3}\times 24$ | | 0.13677 | 1 | 25904 | B3k1 |
$20^{3}\times 29$ | 3.676 | 0.13680 | 1 | 7848 | C1k1 | $0.05170(42)$
| | 0.13700 | 4 | 15232 | C1k2 |
| | 0.13719 | 4 | 15472 | C1k3 |
$24^{3}\times 35$ | 3.810 | 0.13711875582 | 5 | 8416 | D1k1∗ | $0.04175(70)$
| | 0.13701 | 2 | 6424 | D1k2 |
| | 0.137033 | 8 | 85008 | D1k4 |
Table 1: Simulation parameters $L$, $T$, $\beta$, $\kappa$, the number of
replica #REP and the number of molecular dynamics units #MDU for the ensembles
labelled by ID. Ensembles highlighted in italics were newly generated for this
study while the remaining ones were already used in previous investigations
(see, for example Ref. [14]). The ensemble D1k1 marked by an asterisk is only
used for the determination of the PCAC masses. The lattice spacings $a$ are
obtained by interpolating the results of Ref. [22] with a polynomial fit. All
configurations are separated by 8 MDU’s except for the ensembles A1k3 (4
MDU’s) and D1k4 (16 MDU’s).
We employ the tree-level Symanzik-improved gauge action and $N_{\mathrm{f}}=3$
mass-degenerate $\mathrm{O}(a)$ improved Wilson fermions. For the
corresponding improvement coefficient $c_{\mathrm{sw}}$ we use the non-
perturbative determination of Ref. [37]. As already indicated, we impose
Schrödinger functional boundary conditions at the temporal boundaries of the
lattice. The Schrödinger functional setup is highly suitable for massless
renormalisation schemes, since nearly-vanishing quark masses are accessible in
numerical calculations due to the spectral gap of the Dirac operator. This gap
is imposed by the boundaries, so that the quark mass dependence can be mapped
out reliably in the vicinity of the chiral point. The generation of the gauge
field configurations is performed with the openQCD code [38] which employs the
RHMC algorithm [39, 40] for the third quark.
All gauge field ensembles used in this study are summarized in Table 1 and lie
on a line of constant physics (LCP), defined by a fixed spatial extent of
$L\approx 1.2\,\mathrm{fm}$ and $T/L\approx 3/2$. The tuning was
guided by the two-loop beta-function; see Ref. [27]. Provided that this
perturbative approximation is satisfactory in the case at hand, this ensures
that our estimates of $r_{\rm m}$ and $Z$ become smooth functions of the
lattice spacing, with higher-order ambiguities vanishing monotonically. In
Ref. [25] it was explicitly shown that $L$ is constant up to ${\rm O}(a)$ cut-
off effects across the coupling range also considered in the present work. We
thus expect our final results for $r_{\rm m}$ and $Z$ to only be affected by
${\rm O}(a^{2})$ effects. These are beyond the order we are interested in and
they are treated as an ambiguity that extrapolates to zero in the continuum
limit.666More precisely, the results on scale setting for our lattice action
from Ref. [22] have been used in Ref. [25] in order to demonstrate numerically
that the deviation from a constant value of $L$ in physical units is
proportional to the lattice spacing $a$. As the latter work uses the
configuration ensembles and range of non-perturbative bare couplings used also
in the present paper, our simulation parameters define an LCP up to ${\rm
O}(a)$ lattice artefacts, so that the discretisation effects of $r_{\rm m}$
and $Z$ are ${\rm O}(a^{2})$ in the ${\rm O}(a)$ improved theory.
The gauge ensembles highlighted in italics were newly generated for this
study, while the remaining ones were already used in previous investigations;
see Refs. [24, 15, 25, 26, 27, 28, 14].777In a setup with heavy sea quarks and
very light valence quarks we approach a quenched-like situation in which
exceptional configurations are to be expected; cf. Ref. [41] where a similar
situation is discussed. In a careful analysis we identified only one gauge
field configuration in the ensemble E2k1, with an exceptionally small
eigenvalue of the massless Dirac operator. This leads to very large values of
the correlation functions $f_{\mathrm{P}}$ and $f_{\mathrm{A}}$. We have
discarded this exceptional configuration. These additional ensembles allow for
a more even and wider spread of bare quark masses around the chiral point for
each value of $\beta$, which enables a more precise extraction of the slopes
corresponding to $Zr_{\mathrm{m}}$ and $Z\left(r_{\mathrm{m}}-1\right)$ as
explained in Section 2. Since a newer version of the openQCD code was utilised
for the generation of the ensembles, the time extent $T/a$, which was odd in
the pre-existing ensembles, is even for the new ones. For all ensembles we use
tree-level boundary $\mathrm{O}(a)$ improvement for both the gauge and fermion
fields (i.e. the appropriate $c_{\rm t},\widetilde{c}_{\rm t}$ values) as if
the time extents were even. The fact that an odd time extent alters the tree-
level value of $c_{\rm t}$, depending on the definition of the line of
constant physics [42], affects the current quark masses below the precision
achieved here, as explicitly demonstrated in Ref. [27].
All Schrödinger functional correlation functions required for our numerical
investigations are $\mathrm{O}(a)$ improved. In this context we only require
the improvement coefficient $c_{\mathrm{A}}$, non-perturbatively known from
Ref. [27]. Since the Markov chain Monte Carlo sampling of the gauge field
configurations suffers from critical slowing down of the topological charge
for smaller lattice spacings (see Ref. [43]), we project our data to the
trivial topological sector as suggested in Ref. [44], in order to account for
the insufficient sampling of all topological sectors. For the analysis of the
statistical errors we employ the $\Gamma$-method [45]. We account for the
remaining critical slowing down of the Monte Carlo algorithm by attaching a
tail to the autocorrelation function, as suggested in Ref. [46]. The
corresponding slowest mode is estimated from the autocorrelation time of the
boundary-to-boundary correlation function $F_{1}^{ij}$, defined in Appendix A.
The error analysis is carried out with a python implementation of the
$\Gamma$-method, using automatic differentiation for the error propagation as
proposed in Ref. [47].
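The core of the $\Gamma$-method [45] can be sketched in a few lines: estimate the autocovariance function of the Monte Carlo time series, sum it up to a window to obtain the integrated autocorrelation time $\tau_{\rm int}$, and inflate the naive error accordingly. The following minimal pure-Python sketch (function name and fixed window are ours) omits the automatic windowing of Ref. [45] and the tail attachment of Ref. [46] used in the actual analysis:

```python
import math

def gamma_method_error(series, w_max=50):
    """Minimal Gamma-method error estimate for the mean of a Monte Carlo
    time series: sum the normalised autocovariance Gamma(t) up to a fixed
    window w_max to obtain tau_int, then inflate the naive error of the
    mean by sqrt(2 * tau_int)."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    # autocovariance Gamma(t) = < dev_i dev_{i+t} >
    gamma = [sum(dev[i] * dev[i + t] for i in range(n - t)) / (n - t)
             for t in range(w_max + 1)]
    tau_int = 0.5 + sum(g / gamma[0] for g in gamma[1:])
    tau_int = max(tau_int, 0.5)  # guard against negative statistical noise
    err = math.sqrt(2.0 * tau_int * gamma[0] / n)
    return mean, err, tau_int
```

For an AR(1) series with autoregression coefficient $\rho$, the exact value is $\tau_{\rm int}=(1+\rho)/(2(1-\rho))$, which the sketch reproduces on synthetic data.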
## 5 Analysis details and results
In the following we present our analysis which eventually leads to several
estimates for the ratio of the renormalisation parameters of the non-singlet
and singlet scalar densities, $r_{\rm m}$. We will first describe how we
obtain $Zr_{\mathrm{m}}$, $Z(r_{\mathrm{m}}-1)$, and $Z$ individually and then
discuss several ways of combining the three into $r_{\mathrm{m}}$. As a final
result we provide an interpolation formula for $r_{\mathrm{m}}$ and extract
its value at the bare couplings of large-volume CLS simulations [21, 13, 23].
### 5.1 Quark mass slopes
Figure 2: PCAC mass $am$ as a function of time $x_{0}/a$ for ensembles B3k1
(left) and D1k2 (right). Squares are results obtained in the unitary setup,
while diamonds are results obtained in the non-unitary setup. The final
estimate for $m$ is obtained by averaging results in the time interval
$[T/3,2T/3]$, indicated by the dashed vertical lines.
As described in Section 2, the quantities $Zr_{\mathrm{m}}$ and
$Z(r_{\mathrm{m}}-1)$ can be extracted from quark mass slopes. Our results are
based on the determination of the $\mathrm{O}(a)$ improved PCAC masses via
$\displaystyle m(x_{0})=\frac{{\widetilde{\partial}_{0}}f_{\rm
A}^{ij}(x_{0})+ac_{\mathrm{A}}\partial_{0}^{\ast}\partial_{0}f_{\rm
P}^{ij}(x_{0})}{2f_{\rm P}^{ij}(x_{0})}\,,$ (5.1)
where $f_{\mathrm{A}}^{ij}$ and $f_{\mathrm{P}}^{ij}$ are Schrödinger
functional correlation functions. In order to improve the signal, these
correlation functions are symmetrised with their $T$-symmetric counterparts
$g_{\rm A}^{ij}(T-x_{0})$ and $g_{\rm P}^{ij}(T-x_{0})$, which are constructed
from the same operators $(A_{{\rm I}})^{ij}_{0}(x)$ and ${P}^{ij}(x)$ in the
bulk but the pseudoscalar wall with operator $\mathcal{O^{\prime}}^{ji}$
positioned at the time boundary $x_{0}=T$. For exact definitions see Appendix
A.
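In lattice units ($a=1$), the derivatives entering eq. (5.1) are the symmetric difference $\widetilde{\partial}_{0}$ and the standard second difference $\partial_{0}^{\ast}\partial_{0}$. A minimal sketch of the estimator and of the plateau average over $[T/3,2T/3]$ follows; the function names and array conventions are ours, not those of the analysis code:

```python
def pcac_mass(f_A, f_P, c_A, x0):
    """O(a)-improved PCAC mass of eq. (5.1) in lattice units a = 1:
    symmetric derivative of f_A plus c_A times the standard second
    derivative of f_P, divided by 2 f_P(x0)."""
    d_fA = 0.5 * (f_A[x0 + 1] - f_A[x0 - 1])             # tilde-partial_0
    dd_fP = f_P[x0 + 1] - 2.0 * f_P[x0] + f_P[x0 - 1]    # partial*_0 partial_0
    return (d_fA + c_A * dd_fP) / (2.0 * f_P[x0])

def plateau_average(f_A, f_P, c_A, T):
    """Average m(x0) over the central third [T/3, 2T/3] of the lattice."""
    lo, hi = round(T / 3), round(2 * T / 3)
    vals = [pcac_mass(f_A, f_P, c_A, x0) for x0 in range(lo, hi + 1)]
    return sum(vals) / len(vals)
```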
ID | $am$ | ${Z^{\\{T/3\\}}}$ | ${Z^{\\{T/4\\}}}$
---|---|---|---
| $\kappa^{\mathrm{val}}_{i}=\kappa_{i}^{\mathrm{sea}}$ | $\kappa^{\mathrm{val}}_{i}=\kappa_{\mathrm{crit}}$ | |
A3k3 | $\phantom{-}0.12143(82)$ | $\phantom{-}0.10759(77)$ | |
A3k1 | $\phantom{-}0.07316(184)$ | $\phantom{-}0.06440(193)$ | |
A3k2 | $\phantom{-}0.03070(85)$ | $\phantom{-}0.02588(89)$ | |
A3k6 | $\phantom{-}0.01246(54)$ | $\phantom{-}0.01029(54)$ | |
A3k4 | $\phantom{-}0.00465(56)$ | $\phantom{-}0.00370(57)$ | |
A1k3 | $\phantom{-}0.00095(93)$ | $\phantom{-}0.00074(93)$ | 0.8195(93) | 0.7454(94)
A1k4 | $-0.00119(33)$ | $-0.00100(33)$ | 0.8101(43) | 0.7520(58)
A1k1 | $-0.00287(61)$ | $-0.00229(61)$ | 0.7892(67) | 0.7189(79)
A3k5 | $-0.00952(50)$ | $-0.00864(49)$ | |
| $\phantom{-}0.0$ | $\phantom{-}0.0$ | 0.8184(77) | 0.7588(143)
E2k1 | $\phantom{-}0.02083(19)$ | $\phantom{-}0.01117(27)$ | |
E2k2 | $\phantom{-}0.01072(16)$ | $\phantom{-}0.00592(17)$ | |
E1k1 | $\phantom{-}0.00265(22)$ | $\phantom{-}0.00153(23)$ | 0.8990(47) | 0.8619(54)
E1k2 | $-0.00022(19)$ | $-0.00017(19)$ | 0.8987(47) | 0.8580(64)
| $\phantom{-}0.0$ | $\phantom{-}0.0$ | 0.8987(43) | 0.8583(59)
B3k1 | $\phantom{-}0.01502(16)$ | $\phantom{-}0.00552(22)$ | |
B1k1 | $\phantom{-}0.00552(19)$ | $\phantom{-}0.00232(18)$ | 0.9972(45) | 0.9760(53)
B1k2 | $\phantom{-}0.00435(28)$ | $\phantom{-}0.00168(30)$ | 0.9963(73) | 0.9756(94)
B1k3 | $\phantom{-}0.00157(18)$ | $\phantom{-}0.00024(20)$ | 0.9839(48) | 0.9643(52)
B1k4 | $-0.00056(16)$ | $-0.00035(16)$ | 1.0004(50) | 0.9690(73)
| $\phantom{-}0.0$ | $\phantom{-}0.0$ | 0.9935(38) | 0.9654(50)
C1k1 | $\phantom{-}0.01322(17)$ | $\phantom{-}0.00304(21)$ | 1.0593(46) | 1.0446(42)
C1k2 | $\phantom{-}0.00601(11)$ | $\phantom{-}0.00148(11)$ | 1.0615(30) | 1.0517(35)
C1k3 | $-0.00110(11)$ | $-0.00029(11)$ | 1.0617(47) | 1.0542(42)
| $\phantom{-}0.0$ | $\phantom{-}0.0$ | 1.0621(36) | 1.0544(34)
D1k2 | $\phantom{-}0.00073(15)$ | $\phantom{-}0.00012(15)$ | 1.0896(89) | 1.0868(52)
D1k4 | $-0.00007(3)$ | $-0.00001(3)$ | 1.0908(12) | 1.0849(13)
D1k1 | $-0.00295(11)$ | $-0.00040(9)$ | |
| $\phantom{-}0.0$ | $\phantom{-}0.0$ | 1.0907(13) | 1.0850(12)
Table 2: For each ensemble, identified in the first column by an ID label, we
list our results for the PCAC mass $am$ for simulations with
$\kappa^{\mathrm{val}}=\kappa^{\mathrm{sea}}$ (second column) and
$\kappa^{\mathrm{sea}}\neq\kappa^{\mathrm{val}}=\kappa_{\mathrm{crit}}$ (third
column). The last two columns contain $Z$ results obtained from the Ward
identity (3.5). The final results are those extrapolated to the chiral limit
at each $\beta=6/g_{0}^{2}$ (last line of each data grouping). The labels
${Z^{\\{T/3\\}}}$ and ${Z^{\\{T/4\\}}}$ refer to different choices of time
slices with operator insertions in the correlation functions (see text for
details).
We first determine the required correlation functions in a unitary setup,
$\kappa^{\mathrm{val}}=\kappa^{\mathrm{sea}}$. From these we can obtain
$\kappa_{\mathrm{crit}}$ as will be detailed below. In a second step we
compute the same correlation functions in a non-unitary setup where
$\kappa^{\mathrm{sea}}\neq\kappa^{\mathrm{val}}=\kappa_{\mathrm{crit}}$. In
Fig. 2 we show the temporal dependence of the current quark mass $m(x_{0})$
for both of these setups for the representative ensembles B3k1 and D1k2 and
demonstrate that they form well-defined plateaux as a function of time, away
from the Dirichlet boundaries. Our final estimate for the PCAC masses is
obtained by averaging $m(x_{0})$ over the central third of the temporal extent
of the lattice. This choice is motivated by the coarsest lattices; the
plateaux for the finer ones also extend closer to the boundary before lattice
artefacts become relevant as can be seen in Fig. 2. The plateau range is
adapted according to the time extent for each value of $\beta$, so as to
preserve the line of constant physics. Our PCAC mass estimates in both setups
are listed in Table 2 for all ensembles.
Figure 3: PCAC masses $am$ fitted linearly in $1/(2\kappa)$, for all simulated
$\beta$ values (i.e. for decreasing lattice spacings from top to bottom). Open
squares and filled diamonds are results in the unitary and non-unitary setups,
respectively. Note that horizontal and vertical axes are identical for all
values of $\beta$, so as to highlight the different ranges of $\kappa$ and the
change of $\kappa_{\mathrm{crit}}$ marked by the vertical dashed lines.
In order to extract $Zr_{\mathrm{m}}$ and $Z(r_{\mathrm{m}}-1)$ from the
slopes of the current quark masses with respect to the bare quark masses, we
plot $am$ against the inverse hopping parameter $1/(2\kappa)$ for both the
unitary and the non-unitary setup, as demonstrated in Fig. 3. We generally
observe that $m$ behaves linearly as a function of $1/(2\kappa)$ in the range
$-0.1\lesssim Lm\lesssim 0.3$. For the ensembles A3k1, A3k2, and A3k3 (not
displayed in Fig. 3), which correspond to $Lm\gtrsim 0.3$, linearity is lost.
Figure 4: PCAC masses $am$, at $\beta=3.3$, fitted with a quadratic polynomial
in $1/(2\kappa)$. Squares and diamonds are results in the unitary and non-
unitary setups, respectively. Note that the two rightmost points (A3k1, A3k3)
are not included in the fit, while A3k2 (at $1/(2\kappa)=3.6692$) is. The
vertical dashed line is positioned at $\kappa_{\mathrm{crit}}$ from the linear
fit; see Table 3. The $Zr_{\mathrm{m}}$ and $Z(r_{\mathrm{m}}-1)$ values shown
in the legend are obtained from the linear term of the quadratic polynomial.
Results from these ensembles have thus not been included in the linear fits.
The good linear behaviour of the data from the remaining ensembles is
justified a posteriori by the small $\chi^{2}/\mathrm{d.o.f.}$ of our fits,
as shown in Fig. 3.
We also probe the non-linear regime in both setups for $\beta=3.3$ by
performing a quadratic fit, in the presence of the ensembles A3k1, A3k2, and
A3k3, as displayed in Fig. 4. For both setups, fits confirm the presence of
${\rm O}((am_{\mathrm{q}})^{2})$ effects in this case. The two rightmost
points (A3k1, A3k3) have not been included in these fits. Including them would
result in a very large value of $\chi^{2}/\mathrm{d.o.f.}$ This may also be
related to the fact that no clear-cut plateaux are seen in the current quark
mass data for these ensembles. This could be explained by the fact that
(boundary) cut-off effects for these comparatively large masses (in lattice
units) are substantial. Estimates of $Zr_{\mathrm{m}}$ and
$Z(r_{\mathrm{m}}-1)$, obtained as the linear coefficient of the quadratic
fits around $\kappa_{\mathrm{crit}}$, are compatible with those from linear
fits. The influence of the quadratic term on our final result is therefore
negligible. This ensures that our results are not affected by ${\rm
O}((am_{\rm q})^{2})$ systematic errors at $\beta=3.3$, which is our coarsest
lattice. The same conclusion holds for the finer lattices, since also for them
$am_{\rm q}$ is small and linear fits have small $\chi^{2}/\mathrm{d.o.f.}$
As implied by eq. (2.11), $\kappa_{\mathrm{crit}}$ and $Zr_{\mathrm{m}}$ are
assessed as the intercept and the slope of the linear fit to the unitary data.
Similarly, eq. (2.12) tells us that $Z(r_{\mathrm{m}}-1)$ can be estimated
from the slope of the linear fit to the non-unitary data. Our final findings
for $Zr_{\mathrm{m}}$, $Z(r_{\mathrm{m}}-1)$, and $\kappa_{\mathrm{crit}}$ are
listed in Table 3.
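Extracting $\kappa_{\mathrm{crit}}$ and $Zr_{\mathrm{m}}$ thus amounts to a weighted straight-line fit of $am$ against $1/(2\kappa)$, with the slope estimating $Zr_{\mathrm{m}}$ and the zero of the line giving $\kappa_{\mathrm{crit}}$. A closed-form sketch (function names ours; correlations between data points, which the full $\Gamma$-method analysis accounts for, are ignored here):

```python
def weighted_line_fit(x, y, yerr):
    """Weighted least-squares fit y = A + B*x with errors on y only,
    via the closed-form normal equations."""
    w = [1.0 / e**2 for e in yerr]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx**2
    A = (Sxx * Sy - Sx * Sxy) / delta
    B = (S * Sxy - Sx * Sy) / delta
    return A, B

def kappa_crit_and_slope(kappas, am, am_err):
    """Fit am vs 1/(2 kappa): the slope estimates Z r_m in the unitary
    setup, and the zero of the line, 1/(2 kappa_crit) = -A/B, gives
    kappa_crit = -B/(2A)."""
    x = [1.0 / (2.0 * k) for k in kappas]
    A, B = weighted_line_fit(x, am, am_err)
    return -B / (2.0 * A), B
```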
$\beta$ | $Zr_{\mathrm{m}}$ | $Z(r_{\mathrm{m}}-1)$ | $\kappa_{\mathrm{crit}}$
---|---|---|---
3.3 | 4.240(134) | 3.621(133) | 0.1364904(18)
3.414 | 2.015(24) | 1.092(29) | 0.1369478(26)
3.512 | 1.561(21) | 0.603(26) | 0.1371320(26)
3.676 | 1.383(19) | 0.329(20) | 0.1371611(25)
3.81 | 1.263(47) | 0.173(40) | 0.1370310(9)
Table 3: Results from the PCAC mass analyses. The second and fourth column
show results obtained in a unitary setup; the third column refers to the non-
unitary setup.
### 5.2 Renormalisation constant $Z$
As the next step in our analysis, we extract the renormalisation constant
$Z\equiv Z_{\rm P}/(Z_{\rm S}Z_{\rm A})$ from the ratio (3.5), using the subset
of gauge field ensembles listed in Table 1 which are not emphasised in italic
font.888As explained in Section 4, the ensembles in italics have been
generated for the purpose of performing reliable fits of the data in Fig. 3
and 4, in order to accurately measure their slopes. These extra ensembles have
not been used for the computation of $Z$, as they do not increase the accuracy
of the result. D1k1 (marked by an asterisk) is also not taken into account.
The correlation functions in eq. (3.5) are computed for two choices of $t_{1}$
and $t_{2}$. Our first choice is $t_{1}\approx T/3$ and $t_{2}\approx 2T/3$,
and the results obtained in this fashion are denoted as ${Z^{\\{T/3\\}}}$.
Alternatively, choosing $t_{1}\approx T/4$ and $t_{2}\approx 3T/4$ yields a
second $Z$ estimate denoted as ${Z^{\\{T/4\\}}}$. When $T/3$ and $T/4$ are not
integers, $t_{1}$ and $t_{2}$ are rounded up/down to the nearest integer.
Figure 5: Left: Ward identity estimates of $Z$, plotted against time
$y_{0}/T$, for one representative ensemble for each lattice spacing (except
for $\beta=3.3$, corresponding to the coarsest lattice). The dashed vertical
lines bracket the two central time slices that determine the final value of
$Z$. Right: Chiral extrapolation of $Z$ at fixed $\beta$ obtained from the
Ward identity with the mass term (squares) and without it (diamonds). In the
massless case, a possible linear range in $am$ is illustrated by the dashed
line joining the two leftmost points. In the massive case, no significant
quark mass dependence is observed; the dashed line through the squares is a
linear fit where the slope vanishes within its uncertainty. Note that the
errors of the PCAC masses are also displayed and taken into account in the
fits via orthogonal distance regression [48].
In the left part of Fig. 5, we depict ${Z^{\\{T/3\\}}}$ as a function of
$y_{0}/T$ for several representative ensembles (we remind the reader that $T$
is approximately constant in physical units). Contrary to the PCAC masses in
Fig. 2, these local estimators of $Z$ do not exhibit plateau-like behaviour;
this was also observed for a similar Ward identity adopted to compute the
improvement coefficient of the vector current in Ref. [25]. Note, however,
that this is not problematic; since $Z$ is obtained from a Ward identity, its
value at any time slice qualifies as a well-defined estimate. We prefer to err
on the side of caution and quote the average of the two central time slices as
our best $Z$ estimate. Results for the two determinations of $Z$ are collected
in Table 2, where we see that ${Z^{\\{T/3\\}}}$ and ${Z^{\\{T/4\\}}}$ are not
compatible, indicating the presence of lattice artefacts that also differ
noticeably. We consider ${Z^{\\{T/3\\}}}$ the more reliable estimate because
the operator insertions in this case, being further from $x_{0}=0$ and
$x_{0}=T$, are expected to lead to less contamination through cut-off effects
induced by the boundaries.
Since the Ward identity (3.5) is only valid up to lattice artefacts of
$\mathrm{O}(am,a^{2})$, we have to interpolate our data to the chiral point,
in order to eliminate the $\mathrm{O}(am)$-effects and be left with
$\mathrm{O}(a^{2})$ only. As an additional cross-check we also compute $Z$
without the “mass term” $2am\tilde{f}_{\rm PS}(t_{2},t_{1})$ in the Ward
identity (3.5), where $am$ is the PCAC mass from the unitary setup discussed
in the previous section. This chiral interpolation is demonstrated for
$\beta=3.676$ in the right part of Fig. 5. While the data including the “mass
term” shows a very flat behaviour with respect to the current quark mass
(where the associated fit parameter even vanishes within its uncertainty
except for the coarsest lattice spacing), the truncated Ward identity results
in a considerably larger slope. If we exclude the rightmost data point for the
identity without the “mass term”, linear fits to both datasets still agree in
the chiral limit. This situation resembles closely what was observed in Ref.
[15], where $Z$ was measured employing a different Ward identity. We note that
the linear fit is based on the orthogonal distance regression method [48],
taking into account both the error of dependent and independent variables. The
final results for ${Z^{\\{T/3\\}}}$ and ${Z^{\\{T/4\\}}}$ at the chiral point
are also listed in Table 2. Compared to the indirect Ward identity
determination of Ref. [15], they have considerably smaller errors. This
confirms the expectation that the simpler structure of the correlation
functions building the Ward identity (3.4) is preferable from a numerical
perspective; see the discussion at the end of Section 3. On the other hand,
compared to the so-called ‘LCP-0’ determination of Ref. [14], our results are
of similar accuracy across the bare couplings investigated. We will use our
results (Table 2) for a precise estimation of $r_{\mathrm{m}}$ in the
following. More details on the relative cut-off effects between the present
determination of $Z$ and the results obtained in Refs. [14, 15, 13] can be
found in Appendix C.
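The chiral interpolation above uses full orthogonal distance regression [48] to propagate the PCAC-mass errors on the abscissa. To illustrate the idea without that dependency, the special case of equal error variances on both axes (Deming regression with ratio $\delta=1$) admits a closed-form solution. The sketch below is a simplified stand-in for the analysis code, not the code itself:

```python
import math

def deming_fit(x, y, delta=1.0):
    """Straight-line fit y = A + B*x accounting for errors on both axes
    (Deming regression; delta = var(y-errors)/var(x-errors)).
    delta = 1 corresponds to orthogonal regression."""
    n = len(x)
    xb = sum(x) / n
    yb = sum(y) / n
    sxx = sum((xi - xb) ** 2 for xi in x) / n
    syy = sum((yi - yb) ** 2 for yi in y) / n
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / n
    B = (syy - delta * sxx
         + math.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
         ) / (2.0 * sxy)
    A = yb - B * xb
    return A, B
```

For data lying exactly on a line the fit recovers slope and intercept exactly, independently of $\delta$.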
### 5.3 Results for $r_{\mathrm{m}}$
In the final step of our analysis we combine the values of $Zr_{\mathrm{m}}$
obtained in a unitary setup, $Z(r_{\mathrm{m}}-1)$ in a non-unitary setup, and
$Z$ from a chiral Ward identity, in order to arrive at different estimates for
$r_{\mathrm{m}}$. Combining the first two, we construct
${r_{\mathrm{m}}^{\\{\mathrm{u,nu}\\}}}$, defined as
$\displaystyle{r_{\mathrm{m}}^{\\{\mathrm{u,nu}\\}}}$
$\displaystyle=\bigg{(}1-\left[\frac{Z(r_{\text{m}}-1)}{Zr_{\text{m}}}\right]\bigg{)}^{-1}\,,$
(5.2) where the superscripts “u” and “nu” stand for “unitary” and “non-
unitary”, respectively. Combining $Zr_{\mathrm{m}}$ and $Z$, results in
${r_{\mathrm{m}}^{\\{\mathrm{u;}Z\\}}}$, defined as
$\displaystyle{r_{\mathrm{m}}^{\\{\mathrm{u;}Z\\}}}$
$\displaystyle=\frac{Zr_{\text{m}}}{Z}\,.$ (5.3) As mentioned above, this
comes in two versions, ${r_{\mathrm{m}}^{\\{\mathrm{u;}Z,T/3\\}}}$ and
${r_{\mathrm{m}}^{\\{\mathrm{u;}Z,T/4\\}}}$. Moreover, from the second and
third result we gain ${r_{\mathrm{m}}^{\\{\mathrm{nu;}Z\\}}}$ given by
$\displaystyle{r_{\mathrm{m}}^{\\{\mathrm{nu;}Z\\}}}$
$\displaystyle=\frac{Z(r_{\text{m}}-1)}{Z}+1\,,$ (5.4)
which is again worked out for two cases,
${r_{\mathrm{m}}^{\\{\mathrm{nu;}Z,T/3\\}}}$ and
${r_{\mathrm{m}}^{\\{\mathrm{nu;}Z,T/4\\}}}$. All our results for
$r_{\mathrm{m}}$ from these different determinations just outlined are
gathered in Table 4.
$\beta$ | ${r_{\mathrm{m}}^{\\{\mathrm{u,nu}\\}}}$ | ${r_{\mathrm{m}}^{\\{\mathrm{u;}Z,T/3\\}}}$ | ${r_{\mathrm{m}}^{\\{\mathrm{nu;}Z,T/3\\}}}$ | ${r_{\mathrm{m}}^{\\{\mathrm{u;}Z,T/4\\}}}$ | ${r_{\mathrm{m}}^{\\{\mathrm{nu;}Z,T/4\\}}}$
---|---|---|---|---|---
3.3 | 6.848(569) | 5.181(172) | 5.424(169) | 5.588(207) | 5.772(199)
3.414 | 2.183(44) | 2.242(26) | 2.215(32) | 2.348(32) | 2.272(35)
3.512 | 1.629(32) | 1.571(21) | 1.607(26) | 1.617(22) | 1.625(27)
3.676 | 1.312(20) | 1.303(15) | 1.309(19) | 1.312(16) | 1.312(19)
3.81 | 1.158(37) | 1.158(44) | 1.158(37) | 1.164(44) | 1.159(37)
Table 4: Results for $r_{\mathrm{m}}$, obtained via eqs. (5.2) to (5.4).
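Eqs. (5.2) to (5.4) are elementary algebraic combinations. As a numerical sanity check (central values only, error correlations ignored), the $\beta=3.676$ entries of Tables 2 and 3 reproduce the corresponding row of Table 4 within rounding:

```python
def rm_u_nu(Zrm, Zrm_minus_1):
    """Eq. (5.2): r_m from the unitary and non-unitary slopes alone."""
    return 1.0 / (1.0 - Zrm_minus_1 / Zrm)

def rm_u_Z(Zrm, Z):
    """Eq. (5.3): r_m from the unitary slope and the Ward-identity Z."""
    return Zrm / Z

def rm_nu_Z(Zrm_minus_1, Z):
    """Eq. (5.4): r_m from the non-unitary slope and Z."""
    return Zrm_minus_1 / Z + 1.0
```

With $Zr_{\rm m}=1.383$, $Z(r_{\rm m}-1)=0.329$ and $Z^{\{T/3\}}=1.0621$ at $\beta=3.676$, all three combinations give $r_{\rm m}\approx 1.31$, as in Table 4.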
In principle, the different estimates can differ by $\mathrm{O}(a^{2})$
ambiguities. In Fig. 6 (left) the three determinations
${r_{\mathrm{m}}^{\\{\mathrm{u,nu}\\}}}$,
${r_{\mathrm{m}}^{\\{\mathrm{u;}Z,T/3\\}}}$, and
${r_{\mathrm{m}}^{\\{\mathrm{nu;}Z,T/3\\}}}$ are plotted against the bare
coupling squared; to be able to distinguish between the different estimates,
the data points corresponding to the coarsest lattice spacing ($\beta=3.3$)
are omitted as they exhibit large cut-off effects and are thus well out of the
range displayed here. Results are compatible within their respective
$1\sigma$-errors. In Fig. 6 (right) we take a closer look at this behaviour by
plotting ratios of different $r_{\mathrm{m}}$ estimates as functions of the
lattice spacing squared; the corresponding lattice spacings can be found in
Table 1. Since the ratios have been computed on a line of constant physics,
and assuming that we are in a scaling region where Symanzik’s effective theory
of cut-off effects applies, they are expected to be polynomials in the lattice
spacing, tending to 1 in the continuum limit. In this context we introduce an
additional determination,
$r_{\mathrm{m}}^{\\{\mathrm{u},\mathrm{nu};\mathrm{impr}\\}}$, which only
differs from ${r_{\mathrm{m}}^{\\{\mathrm{u,nu}\\}}}$ by an improved version
of the derivative ${\widetilde{\partial}_{0}}$ in eq. (5.1).999The improved
derivative is defined as
$a\partial_{\mu}f(x)\equiv\frac{1}{12}[-f(x+2a\hat{\mu})+8f(x+a\hat{\mu})-8f(x-a\hat{\mu})+f(x-2a\hat{\mu})]$
and its corresponding second derivative by
$a^{2}\partial_{\mu}^{*}\partial_{\mu}f(x)\equiv\frac{1}{12}[-f(x+2a\hat{\mu})+16f(x+a\hat{\mu})-30f(x)+16f(x-a\hat{\mu})-f(x-2a\hat{\mu})]$
as shown by eq. (B.4) in Ref. [14]. These ratios are very close to one except
for one of the data points at $\beta=3.3$, for which the ratio is
significantly larger. Even though it would be sufficient to demonstrate that
these ratios of $r_{\mathrm{m}}$ approach unity with a rate $\propto a^{2}$ or
higher in our particular line of constant physics framework, such ambiguities
appear to be nearly absent for $a<0.1\,\mathrm{fm}$.
We tried to model the data sets with and without the $\beta=3.3$ points, using
polynomials in the lattice spacing, constrained to one in the continuum limit.
When a linear term is included, we obtain unsatisfactory fits with
$\chi^{2}/\mathrm{d.o.f.}>3$. We thus conclude that our results are compatible
with the theoretical expectation of $\mathrm{O}(a^{2})$ lattice artefacts or
higher (see also Appendix C).
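A fit of the ratios constrained to unity in the continuum limit, keeping only the $a^{2}$ term, reduces to a one-parameter weighted least-squares problem with a closed-form solution; a minimal sketch (function name ours):

```python
def fit_a2_coefficient(a, ratio, err):
    """One-parameter weighted fit of ratio(a) = 1 + k * a**2, with the
    continuum limit constrained to 1. Returns k and chi^2/dof."""
    w = [1.0 / e**2 for e in err]
    num = sum(wi * ai**2 * (ri - 1.0) for wi, ai, ri in zip(w, a, ratio))
    den = sum(wi * ai**4 for wi, ai in zip(w, a))
    k = num / den
    chi2 = sum(wi * (ri - 1.0 - k * ai**2) ** 2
               for wi, ai, ri in zip(w, a, ratio))
    return k, chi2 / (len(a) - 1)
```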
Figure 6: Left: Results for different $r_{\mathrm{m}}$ estimates as reported
in Table 4. The results for $\beta=3.3$ are not shown. Right: Ratio of
different $r_{\mathrm{m}}$ determinations as a function of the squared lattice
spacing. The dashed horizontal line indicates the expected continuum result.
As our preferred determination of $r_{\mathrm{m}}$ we advocate
${r_{\mathrm{m}}^{\\{\mathrm{nu;}Z,T/3\\}}}$ because of its small statistical
errors in our range of bare couplings and the poorer scaling behaviour of the
other estimators at the coarsest lattice spacing. In Fig. 7 we show this
result including the two-loop perturbative prediction of Ref. [18]. An
important observation is that the non-perturbative estimates strongly deviate
from the perturbative prediction in this region of strong couplings. A similar
behaviour was also observed in several studies of renormalisation factors for
which one-loop perturbative predictions are available (see, e.g. [49, 25]).
Here, we confirm this finding also for two-loop perturbation theory. We also
compare our results with those of other works. In Ref. [13], $r_{\mathrm{m}}$
was determined for two values of the bare coupling, from an alternative
renormalization condition. As inferred by Fig. 7 this result agrees with ours
at the smaller coupling, while it deviates notably at the larger coupling,
most likely due to $\mathrm{O}(a^{2})$ ambiguities (or higher).
Figure 7: Non-perturbative determination of
${r_{\mathrm{m}}^{\\{\mathrm{nu;}Z,T/3\\}}}$ (open circles), compared to the
results of Ref. [13] (filled diamonds) and those of two-loop perturbation
theory [18] (horizontal dotted line). The dashed line is the interpolation
(5.5) and the vertical dotted lines correspond to the bare couplings used in
CLS simulations.
Our final result consists of a continuous interpolation formula for
$r_{\mathrm{m}}={r_{\mathrm{m}}^{\\{\mathrm{nu;}Z,T/3\\}}}$. Our data is best
described by a Padé ansatz, constrained to the two-loop prediction of Ref.
[18] for small couplings, of the form
$r_{\text{m}}(g_{0}^{2})=1.0+0.004630\,g_{0}^{4}\times\left\\{\frac{1+c_{1}\,g_{0}^{2}+c_{2}\,g_{0}^{4}}{1+c_{3}\,g_{0}^{2}}\right\\}\,,$
(5.5a) where $\quad c_{i}=\left(-7.86078,5.49175,-0.54078\right)\,,$ (5.5b)
and
$\mathrm{cov}(c_{i},c_{j})=\left(\begin{array}[]{lll}\phantom{-}3.699760\phantom{\times
10^{-1}}&-2.198586\phantom{\times 10^{-1}}&-1.476913\times 10^{-3}\\\
-2.198586\phantom{\times 10^{-1}}&\phantom{-}1.306512\phantom{\times
10^{-1}}&\phantom{-}8.776569\times 10^{-4}\\\ -1.476913\times
10^{-3}&\phantom{-}8.776569\times 10^{-4}&\end{array}\right)\,,$ (5.5c)
which is also displayed in Fig. 7. The fit function describes our data with
$\chi^{2}/{\rm d.o.f.}=1.37$ and provides errors of a size comparable to the
fitted data points.
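The interpolation formula (5.5) is straightforward to evaluate; its central values reproduce the entries of Table 5 below (propagating the uncertainties requires the covariance (5.5c) and is omitted in this sketch):

```python
def rm_interp(g0_sq):
    """Interpolation formula (5.5) for r_m as a function of the squared
    bare coupling g_0^2 = 6/beta (central values only)."""
    c1, c2, c3 = -7.86078, 5.49175, -0.54078
    g4 = g0_sq ** 2
    pade = (1.0 + c1 * g0_sq + c2 * g4) / (1.0 + c3 * g0_sq)
    return 1.0 + 0.004630 * g4 * pade
```

For instance, `rm_interp(6.0 / 3.4)` gives $r_{\rm m}\approx 2.335$, matching Table 5.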
The interpolation formula can now be used in order to determine
$r_{\mathrm{m}}$ at the couplings used in CLS simulations for the computation
of hadronic quantities [21, 13, 23]. Since the CLS coupling $\beta=3.85$ lies
outside the range of our $r_{\mathrm{m}}$ computations, we perform a short
extrapolation in order to provide a value for $r_{\mathrm{m}}(\beta=3.85)$. A
systematic error, estimated as the difference between the lower error bar of
our data point at $\beta=3.81$ and the extrapolated value at $\beta=3.85$
($\sigma_{\mathrm{syst}}=0.027$) is added to the statistical error
($\sigma_{\mathrm{stat}}=0.018$) in quadrature. Our final $r_{\mathrm{m}}$
results at the CLS couplings are collected in Table 5.
$\beta$ | 3.4 | 3.46 | 3.55 | 3.7 | 3.85
---|---|---|---|---|---
$r_{\mathrm{m}}$ | 2.335(31) | 1.869(19) | 1.523(14) | 1.267(16) | 1.149(18)(27)[33]
Table 5: Values for $r_{\mathrm{m}}$ at the couplings used in CLS simulations,
obtained from the interpolation formula (5.5). As mentioned in the text, an
additional systematic error was added to the $\beta=3.85$ result. The errors
are displayed in this way:
$(\sigma_{\mathrm{stat}})(\sigma_{\mathrm{syst}})[\sigma_{\mathrm{total}}]$.
## 6 Summary
With the non-perturbative computation of the ratio of the renormalisation
constants of non-singlet and singlet scalar densities, $r_{\rm m}\equiv Z_{\rm
S}/Z_{\rm S}^{0}$, presented in this paper we have addressed a quantity which
not only enters the renormalisation pattern of quark masses in lattice QCD
with Wilson fermions, but also constitutes an important ingredient in
calculations of renormalised nucleon (and other baryon) matrix elements of
singlet scalar densities, known as sigma terms.
Our strategy to calculate $r_{\rm m}$ merges the functional dependences of the
PCAC quark mass in terms of the subtracted quark mass, evaluated in a unitary
as well as a non-unitary setting with respect to the choice of sea and valence
quark masses. In the vicinity of the chiral limit, these dependences are found
to be linear, so that $r_{\rm m}$ can be obtained through the associated quark
mass slopes with confidence and superior control of statistical and systematic
errors. The finite-volume numerical simulations of ${\rm O}(a)$ improved QCD
with Schrödinger functional boundary conditions that enter the analysis
realise a line of constant physics by working in a volume of spatial extent
$L\approx 1.2\,$fm and thereby fixing all other relevant length scales in
physical units. This guarantees that $r_{\rm m}$ becomes a smooth function of
the bare gauge coupling as the lattice spacing is varied, where any
potentially remaining intrinsic ambiguities disappear monotonically towards
the continuum limit at a rate that stays beyond the sensitivity of the ${\rm
O}(a)$ improved theory.
Our central results, which hold for a lattice discretisation of QCD with three
flavours of non-perturbatively ${\rm O}(a)$ improved Wilson-clover sea quarks
and tree-level Symanzik-improved gluons, are the continuous parameterisation
of $r_{\rm m}$ as a function of the squared bare gauge coupling
$g_{0}^{2}=6/\beta$ in eq. (5.5), as well as its values in Table 5 at the
specific strong-coupling $\beta$ values of large-volume CLS simulations [21,
22, 13, 23].
Along with the numerical implementation of our strategy to extract $r_{\rm
m}$, we have also developed a new method to determine the scale independent
combination $Z=Z_{\rm P}/(Z_{\rm S}Z_{\rm A})$ of renormalisation parameters
of quark bilinears in the pseudoscalar, (non-singlet) scalar and axial vector
channel, respectively. It relies upon a Ward identity that, according to our
knowledge, has not yet appeared explicitly in the literature. Since, as
explained in Sections 2 and 3, the renormalisation factor $Z$ is actually
required to isolate $r_{\rm m}$ from the unitary and non-unitary quark mass
slopes, we have employed the estimates on $Z$ from this approach in our final
results of $r_{\rm m}$. However, this was primarily done for practical reasons
and served the purpose of demonstrating the feasibility of the Ward identity
method for $Z$. In fact, it is apparent from the discussion in Appendix C and
Figure 8 that these new values for $Z$ are fully compatible with the earlier
determinations available from Refs. [14, 15] and are neither superior in
statistical precision nor in systematics regarding lattice artefacts.
Nevertheless we give an interpolation formula for the present $Z$ (Table 6)
for completeness.
Finally we recall the subtlety discussed in Section 2: away from the chiral
limit, the dependence of (re)normalisation parameters should be
$Z(\tilde{g}_{0}^{2}),r_{\rm m}(\tilde{g}_{0}^{2})$, with $\tilde{g}_{0}^{2}$
defined in eq. (2.13). In order to be able to combine our results with CLS
low-energy quantities such as those of Refs. [22, 29], we should use the
expansion
$Z(\tilde{g}_{0}^{2})=Z(g_{0}^{2})\Big[1+\dfrac{\partial\ln Z(g_{0}^{2})}{\partial g_{0}^{2}}\,\dfrac{1}{N_{\rm f}}\,b_{g}(g_{0}^{2})\,g_{0}^{2}\,a\,{\rm Tr}\,M_{\rm q}\Big]\,,$ (6.1)
see also Ref. [50], and similarly for $r_{\rm m}(\tilde{g}_{0}^{2})$. At
present, $b_{g}$ is only known in perturbation theory [51], where
$b_{g}=0.012N_{\rm f}g_{0}^{2}$. The correction $\partial\ln Z/\partial g_{0}^{2}$ as well as
$\partial\ln r_{\rm m}/\partial g_{0}^{2}$ , computed at CLS values of the
inverse coupling $\beta$, can be found in Table 7.
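As a rough numerical illustration (ours, not from the paper's data), the size of the mass-dependent shift in eq. (6.1) can be sketched in Python, using the one-loop $b_{g}$ of Ref. [51] and the $\partial\ln Z/\partial g_{0}^{2}$ value from Table 7 at $\beta=3.4$; the value of $a\,{\rm Tr}\,M_{\rm q}$ below is a hypothetical placeholder.

```python
# Sketch of the quark-mass correction to Z in eq. (6.1).
# a_TrMq is a hypothetical placeholder value, not taken from the paper.
N_f = 3
beta = 3.4
g0_sq = 6.0 / beta                  # bare coupling, g0^2 = 6/beta
b_g = 0.012 * N_f * g0_sq           # one-loop b_g, Ref. [51]
dlnZ_dg0sq = -3.241                 # Table 7, beta = 3.4
a_TrMq = 0.01                       # hypothetical magnitude of a Tr M_q

# Z(g~0^2) = Z(g0^2) * [1 + dlnZ/dg0^2 * (1/N_f) * b_g * g0^2 * a Tr M_q]
rel_shift = dlnZ_dg0sq * (1.0 / N_f) * b_g * g0_sq * a_TrMq
print(f"relative shift in Z: {rel_shift:.2e}")
```

For these inputs the shift is at the per-mille level, consistent with it being a small correction to $Z(g_{0}^{2})$.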
Acknowledgements. This work is supported by the Deutsche
Forschungsgemeinschaft (DFG) through the Research Training Group “GRK 2149:
Strong and Weak Interactions – from Hadrons to Dark Matter” (J. H., F. J. and
P. L. J. P.). We acknowledge the computer resources provided by the WWU IT,
formerly ‘Zentrum für Informationsverarbeitung (ZIV)’, of the University of
Münster (PALMA-II HPC cluster) and thank its staff for support.
## Appendix A Schrödinger functional correlation functions
The Schrödinger functional correlation functions employed in this work are
defined as
$f_{\mathrm{P}}^{ij}=-\frac{1}{2}\frac{a^{9}}{L^{3}}\sum_{{\bf x},{\bf u},{\bf v}}\left\langle\bar{\psi}_{i}(x)\gamma_{5}\psi_{j}(x)\cdot\bar{\zeta}_{j}({\bf v})\gamma_{5}\zeta_{i}({\bf u})\right\rangle\,,$ (A.1)
$g_{\mathrm{P}}^{ij}=-\frac{1}{2}\frac{a^{9}}{L^{3}}\sum_{{\bf x},{\bf u},{\bf v}}\left\langle\bar{\psi}_{i}(x)\gamma_{5}\psi_{j}(x)\cdot\bar{\zeta}^{\prime}_{j}({\bf u})\gamma_{5}\zeta^{\prime}_{i}({\bf v})\right\rangle\,,$ (A.2)
$f_{\mathrm{A}}^{ij}=-\frac{1}{2}\frac{a^{9}}{L^{3}}\sum_{{\bf x},{\bf u},{\bf v}}\left\langle\bar{\psi}_{i}(x)\gamma_{0}\gamma_{5}\psi_{j}(x)\cdot\bar{\zeta}_{j}({\bf u})\gamma_{5}\zeta_{i}({\bf v})\right\rangle\,,$ (A.3)
$g_{\mathrm{A}}^{ij}=-\frac{1}{2}\frac{a^{9}}{L^{3}}\sum_{{\bf x},{\bf u},{\bf v}}\left\langle\bar{\psi}_{i}(x)\gamma_{0}\gamma_{5}\psi_{j}(x)\cdot\bar{\zeta}^{\prime}_{j}({\bf u})\gamma_{5}\zeta^{\prime}_{i}({\bf v})\right\rangle\,,$ (A.4)
$F_{1}^{ij}=-\frac{1}{2}\frac{a^{12}}{L^{6}}\sum_{{\bf u}^{\prime},{\bf v}^{\prime},{\bf u},{\bf v}}\left\langle\bar{\zeta}^{\prime}_{i}({\bf u}^{\prime})\gamma_{5}\zeta^{\prime}_{j}({\bf v}^{\prime})\cdot\bar{\zeta}_{j}({\bf u})\gamma_{5}\zeta_{i}({\bf v})\right\rangle\,.$ (A.5)
They refer to the general case of two distinct, i.e. not necessarily
mass-degenerate, quark flavours $i,j$. Summation over the indices $i$ and $j$
is not implied. The space-time point $x$ lies in the lattice bulk, i.e.
$0<x_{0}<T$. The Dirichlet boundary fields $\bar{\zeta}_{j}({\bf u})$ and
$\zeta_{i}({\bf v})$ live on time slice $x_{0}=0$, while
$\bar{\zeta}^{\prime}_{j}({\bf u}^{\prime})$ and
$\zeta^{\prime}_{i}({\bf v}^{\prime})$ live on time slice $x_{0}=T$; the
boundary fields were introduced in Ref. [36].
## Appendix B Wick contractions of correlation functions
In this appendix we briefly explain how to obtain eq. (3.5) from eq. (3.4).
The idea is to perform the Wick contractions of the correlation functions,
arriving at expressions which are traces of flavour matrices, multiplying
traces of products of quark propagators and $\gamma$-matrices. This procedure
has been described in full detail in Ref. [15], which deals with more
complicated Ward identities; we refer the reader to that work for unexplained
notation. Here we will only present the main features of the proof.
We start with the r.h.s. of eq. (3.4). The Wick contractions result in
$-d^{abe}\,a^{3}\sum_{\bf y}\langle P^{e}(y)\,{\cal O}^{c}\rangle=-d^{abe}\,{\rm Tr}[T^{e}T^{c}]\,\dfrac{a^{9}}{L^{3}}\sum_{\bf y}\sum_{\bf u,v}\Bigg\langle\,{\rm tr}\,\Big\{[\psi(y)\bar{\zeta}({\bf u})]_{\rm F}\,\gamma_{5}\,[\zeta({\bf v})\bar{\psi}(y)]_{\rm F}\,\gamma_{5}\Big\}\Bigg\rangle=d^{abc}\,f_{\rm P}(y_{0})\,,$ (B.1)
where the second equality implicitly defines $f_{\rm P}$ (see also eq. (A.1)
and Appendix B of Ref. [14]). The left-hand-side consists of correlation
functions with one boundary operator and two insertions in the bulk. So the
Wick contractions of such a correlation function give:
$\displaystyle a^{6}\sum_{\bf x,y}\langle A_{0}^{a}(x)S^{b}(y){\cal O}^{c}\rangle$
$\displaystyle=\dfrac{\mathrm{i}a^{12}}{L^{3}}{\rm Tr}[T^{a}T^{b}T^{c}]\sum_{\bf x,y}\sum_{\bf u,v}\Bigg\langle\,{\rm tr}\,\Big\{\gamma_{0}\gamma_{5}[\psi(x)\bar{\psi}(y)]_{\rm F}[\psi(y)\bar{\zeta}({\bf u})]_{\rm F}\gamma_{5}[\zeta({\bf v})\bar{\psi}(x)]_{\rm F}\Big\}\Bigg\rangle$
$\displaystyle\phantom{=}+\dfrac{\mathrm{i}a^{12}}{L^{3}}{\rm Tr}[T^{c}T^{b}T^{a}]\sum_{\bf x,y}\sum_{\bf u,v}\Bigg\langle\,{\rm tr}\,\Big\{\gamma_{0}\gamma_{5}[\psi(x)\bar{\zeta}({\bf u})]_{\rm F}\gamma_{5}[\zeta({\bf v})\bar{\psi}(y)]_{\rm F}[\psi(y)\bar{\psi}(x)]_{\rm F}\Big\}\Bigg\rangle$
$\displaystyle=\mathrm{i}\,{\rm Tr}[T^{a}T^{b}T^{c}]\,f_{\rm AS;1}(x_{0},y_{0})+\mathrm{i}\,{\rm Tr}[T^{c}T^{b}T^{a}]\,f_{\rm AS;2}(x_{0},y_{0})$
$\displaystyle=\dfrac{1}{2}\Big[-d^{abc}\,{\rm Re}\,f_{\rm AS;1}(x_{0},y_{0})+f^{abc}\,{\rm Im}\,f_{\rm AS;1}(x_{0},y_{0})\Big]\,.$ (B.2)
The second equality in the above string of equations implicitly defines the
two traces of quark propagators (devoid of flavour structure) as $f_{\rm
AS;1}(x_{0},y_{0})$ and $f_{\rm AS;2}(x_{0},y_{0})$. In the last equality we
have used the fact that the two traces of propagators are complex conjugates
of each other, which is a consequence of the $\gamma_{5}$-Hermiticity property
of Wilson fermion propagators. Finally, the
fact that the above correlation function is invariant under charge conjugation
leads to the vanishing of the term proportional to $f^{abc}$ in the last
expression. Hence, we obtain
$a^{6}\sum_{\bf x,y}\langle A_{0}^{a}(x)S^{b}(y){\cal O}^{c}\rangle=-\dfrac{1}{2}d^{abc}\,{\rm Re}\,f_{\rm AS;1}(x_{0},y_{0})=-d^{abc}f_{\rm AS}(x_{0},y_{0})\,,$ (B.3)
which implicitly defines $f_{\rm AS}(x_{0},y_{0})$. The correlation functions
$f_{\rm P}$ and $f_{\rm AS}$ are schematically drawn in Fig. 1. Analogously,
from the mass dependent term of the Ward identity we also define
$\tilde{f}_{\rm PS}(x_{0},y_{0})$; the summation over all times from $t_{1}$
up to $t_{2}$ (see eq. (3.4)) is included in its definition. It is important
to note that $d^{abc}$ appears in both eqs. (B.1) and (B.3). Therefore, it
cancels out in the Ward identity, which becomes a relation between traces of
propagators, without any flavour indices. Putting everything together, we
eventually obtain eq. (3.5).
## Appendix C Comparison of $Z$ determinations and scaling tests
In this appendix we present more details on our $Z$ results, listed in Table
2. In Fig. 8 and Table 6, our preferred determination of $Z$, namely
${Z^{\\{T/3\\}}}$, is compared to Ref. [14] (de Divitiis et al.), to $Z$
determined at two values of the gauge coupling in Ref. [32] (Bali et al.), and
to a $Z$ estimate that we work out from the results of Refs. [49] and [15]
(Heitger et al.). In particular, we extract the axial current normalisation
$Z_{\mathrm{A}}$ at our couplings from the interpolation formula of Ref. [49]
and combine it with the ratio $Z_{\mathrm{S}}/Z_{\mathrm{P}}$ of the
pseudoscalar and scalar renormalisation constants from Ref. [15]. In addition,
we give an interpolation formula for our preferred determination for $Z$ (also
displayed in Fig. 8).
Our result agrees with the other determinations at weaker bare couplings,
while disagreements are seen at stronger couplings. These are attributed to
lattice artefacts associated with intrinsic ambiguities of ${\rm O}(a^{2})$ or
higher between different determinations. Agreement is generally better between
our results and those of Ref. [14] (de Divitiis et al.).
Figure 8: $Z$ results, obtained with different methods, as a function of the squared bare coupling $g_{0}^{2}$. The preferred determination of this work is $Z={Z^{\\{T/3\\}}}$ (pentagons). The squares are obtained by combining results from Refs. [49] and [24] (Heitger et al.). The two $Z$ estimates determined in Ref. [32] (Bali et al.) are depicted by triangles. The circles correspond to the $Z$ results from Ref. [14] (de Divitiis et al.). One-loop perturbation theory is illustrated by the dotted line, Ref. [14]. The dashed line shows the interpolation (C.1a) of ${Z^{\\{T/3\\}}}$ (excluding the coarsest lattice spacing from the fit). The vertical dotted lines correspond to the bare couplings used in CLS simulations.
$\beta$ | $Z={Z^{\\{T/3\\}}}$ this work | $Z$, LCP-0 de Divitiis et al. | $Z$, LCP-1 de Divitiis et al. | $1/Z_{\mathrm{A}}\cdot Z_{\mathrm{P}}/Z_{\mathrm{S}}$ Heitger et al.
---|---|---|---|---
3.3 | 0.8184(77) | 0.7462(56) | 0.7896(36) | 0.884(26)
3.414 | 0.8987(43) | 0.8762(40) | 0.8992(26) | 0.990(12)
3.512 | 0.9935(38) | 0.9764(33) | 0.9861(23) | 1.0396(80)
3.676 | 1.0621(36) | 1.0588(31) | 1.0611(23) | 1.0901(89)
3.81 | 1.0907(13) | 1.0882(11) | 1.0884(8) | 1.1029(61)
Table 6: Comparison of our preferred $Z$ determination with results from Ref.
[14] (de Divitiis et al.) and the combination of results from Refs. [49] and
[24] (Heitger et al.).
In order to confirm this claim of consistency (leaving aside higher cut-off
effects) we construct ratios of different determinations and investigate their
behaviour as a function of the lattice spacing. Interestingly, rather than
${\rm O}(a^{2})$, leading cut-off effects of ${\rm O}(a^{3})$ can be
identified in the ratio ${Z^{\\{T/3\\}}}/{Z^{\\{T/4\\}}}$, as seen in Fig. 9
(left). The scaling behaviour of our results compared to those of previous
works is shown in Fig. 9 (right). All ratios are fitted with an ansatz
$1+ca^{3}$, excluding the coarsest lattice spacing. When adding a term linear
in the lattice spacing, its fit parameter vanishes within its uncertainty in
all cases. In conclusion, these scaling tests indicate that our results for
$Z$ are in accordance with the theoretical expectation of ${\rm O}(a^{2})$
ambiguities or higher which, by virtue of the imposed line of constant
physics, decrease monotonically towards the continuum limit.
Figure 9: Left: Ratio of our results ${Z^{\\{T/3\\}}}/{Z^{\\{T/4\\}}}$, fitted
as a function of the lattice spacing. Right: Ratios of ${Z^{\\{T/3\\}}}$ and
$Z$ computed in previous works, fitted as a function of the lattice spacing.
The coarsest lattice spacing is excluded from the fits. The dashed lines are
the fits while the horizontal dotted lines indicate the expected continuum
results.
In addition, we interpolate our $Z$ data using a Padé ansatz, constrained to
the one-loop prediction of Ref. [52] for small couplings; see eq. (C.1) and
Fig. 8. Owing to sizeable higher-order cut-off effects, shown in the right
panel of Fig. 9, we do not include the coarsest lattice spacing in the fit.
Note also that the coarsest lattice spacing lies well outside the range of
CLS couplings. We obtain
$Z(g_{0}^{2})=1+0.0703169\cdot g_{0}^{2}\times\frac{1+d_{1}g_{0}^{4}}{1+d_{2}g_{0}^{2}}\,,$ (C.1a)
where
$d_{i}=\left(-0.34504,\,-0.52309\right)\,,$ (C.1b)
and
$\mathrm{cov}(d_{i},d_{j})=\left(\begin{array}[]{ll}\phantom{-}2.798505\times 10^{-7}&\phantom{-}6.577477\times 10^{-7}\\ \phantom{-}6.577477\times 10^{-7}&\end{array}\right).$ (C.1c)
Our $Z$ results at the CLS couplings are gathered in Table 7 and compared to
those of Ref. [14] for two different LCP conditions. The two outermost CLS
$\beta$ values ($3.4$ and $3.85$) lie outside the range of our fitted $Z$
estimates, so they are obtained by extrapolation. Their systematic errors are
estimated from the statistical uncertainty of the nearest ${Z^{\\{T/3\\}}}$
data point: the systematic error of ${Z^{\\{T/3\\}}}(\beta=3.4)$ is the
statistical error of ${Z^{\\{T/3\\}}}(\beta=3.414)$ and that of
${Z^{\\{T/3\\}}}(\beta=3.85)$ is the statistical error of
${Z^{\\{T/3\\}}}(\beta=3.81)$. These systematic errors are added to the
statistical ones in quadrature.
$\beta$ | $Z={Z^{\\{T/3\\}}}$ interpolated, this work | $Z$, LCP-0 de Divitiis et al. | $Z$, LCP-1 de Divitiis et al. | $\partial\ln Z/\partial g_{0}^{2}$ | $\partial\ln r_{\mathrm{m}}/\partial g_{0}^{2}$
---|---|---|---|---|---
3.4 | $\phantom{-}0.8798(47)(43)[64]$ | $\phantom{-}0.8758(52)$ | $\phantom{-}0.8981(35)$ | $-3.241(144)$ | $\phantom{-}8.975(195)$
3.46 | $\phantom{-}0.9507(25)$ | $\phantom{-}0.9320(50)$ | $\phantom{-}0.9468(35)$ | $-1.974(62)$ | $\phantom{-}5.915(179)$
3.55 | $\phantom{-}1.0147(15)$ | $\phantom{-}0.9937(42)$ | $\phantom{-}1.0015(30)$ | $-1.104(23)$ | $\phantom{-}3.647(149)$
3.7 | $\phantom{-}1.0696(13)$ | $\phantom{-}1.0591(23)$ | $\phantom{-}1.0612(17)$ | $-0.522(7)$ | $\phantom{-}1.962(90)$
3.85 | $\phantom{-}1.0961(12)(13)[18]$ | $\phantom{-}1.0975(25)$ | $\phantom{-}1.0971(18)$ | $-0.278(3)$ | $\phantom{-}1.190(49)$
Table 7: $Z$ values at the couplings used in CLS simulations, obtained from
the interpolation formula (C.1a), excluding the coarsest lattice spacing from
the fit. As explained in the text, an additional systematic error is added to
the $\beta=3.85$ and $\beta=3.4$ results and the errors are displayed as:
$(\sigma_{\mathrm{stat}})(\sigma_{\mathrm{syst}})[\sigma_{\mathrm{total}}]$.
In the last two columns we list $\partial\ln Z/\partial g_{0}^{2}$ for
${Z^{\\{T/3\\}}}$, obtained by differentiating eq. (C.1a) as well as
$\partial\ln r_{\rm m}/\partial g_{0}^{2}$ via differentiating eq. (5.5).
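The interpolation (C.1a) is straightforward to evaluate; as a sketch (using only the central fit values, with no error propagation), the following Python snippet reproduces the central interpolated $Z$ values of Table 7:

```python
# Evaluate the Pade interpolation of eq. (C.1a) at CLS couplings,
# using only the central fit parameters d1, d2 (no error propagation).
d1, d2 = -0.34504, -0.52309

def Z_interp(g0_sq: float) -> float:
    """Z(g0^2) = 1 + 0.0703169 * g0^2 * (1 + d1*g0^4) / (1 + d2*g0^2)."""
    return 1.0 + 0.0703169 * g0_sq * (1.0 + d1 * g0_sq**2) / (1.0 + d2 * g0_sq)

for beta in (3.46, 3.55, 3.7):
    print(beta, round(Z_interp(6.0 / beta), 4))
```

For $\beta=3.46$, $3.55$ and $3.7$ this yields $0.9507$, $1.0147$ and $1.0696$, matching the central values in Table 7.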
The interested reader may also use our interpolation formula for $Z$ (from eq.
(C.1)) and $r_{\rm m}$ (from eq. (5.5)) and the covariance between the fit
parameters of the two different interpolations,
$\mathrm{cov}(d_{i},c_{j})=\left(\begin{array}[]{lll}\phantom{-}3.502738\times 10^{-4}&-2.149363\times 10^{-4}&-1.303033\times 10^{-7}\\ \phantom{-}1.464606\times 10^{-3}&-8.876895\times 10^{-4}&\end{array}\right),$ (C.2)
to construct combinations of the two such as $Zr_{\mathrm{m}}$.
## References
* [1] M. Bochicchio, L. Maiani, G. Martinelli, G. C. Rossi and M. Testa, _Chiral Symmetry on the Lattice with Wilson Fermions_ , _Nucl. Phys. B_ 262 (1985) 331.
* [2] L. Maiani, G. Martinelli, M. L. Paciello and B. Taglienti, _Scalar Densities and Baryon Mass Differences in Lattice QCD With Wilson Fermions_ , _Nucl. Phys. B_ 293 (1987) 420.
* [3] G. Martinelli, C. Pittori, C. T. Sachrajda, M. Testa and A. Vladikas, _A General method for nonperturbative renormalization of lattice operators_ , _Nucl. Phys. B_ 445 (1995) 81, [hep-lat/9411010].
* [4] C. Sturm, Y. Aoki, N. H. Christ, T. Izubuchi, C. T. C. Sachrajda and A. Soni, _Renormalization of quark bilinear operators in a momentum-subtraction scheme with a nonexceptional subtraction point_ , _Phys. Rev. D_ 80 (2009) 014501, [0901.2599].
* [5] S. Capitani, M. Lüscher, R. Sommer and H. Wittig, _Non-perturbative quark mass renormalization in quenched lattice QCD_ , _Nucl. Phys. B_ 544 (1999) 669, [hep-lat/9810063]. [Erratum: Nucl. Phys. B 582, 762 (2000)].
* [6] M. Dalla Brida, S. Sint and P. Vilaseca, _The chirally rotated Schrödinger functional: theoretical expectations and perturbative tests_ , _JHEP_ 08 (2016) 102, [1603.00046].
* [7] G. M. de Divitiis and R. Petronzio, _Nonperturbative renormalization constants on the lattice from flavor nonsinglet Ward identities_ , _Phys. Lett. B_ 419 (1998) 311, [hep-lat/9710071].
* [8] T. Bhattacharya, S. Chandrasekharan, R. Gupta, W.-J. Lee and S. R. Sharpe, _Nonperturbative renormalization constants using Ward identities_ , _Phys. Lett. B_ 461 (1999) 79, [hep-lat/9904011].
* [9] T. Bhattacharya, R. Gupta, W.-J. Lee and S. R. Sharpe, _Order a improved renormalization constants_ , _Phys. Rev. D_ 63 (2001) 074505, [hep-lat/0009038].
* [10] M. Guagnelli, R. Petronzio, J. Rolf, S. Sint, R. Sommer and U. Wolff, _Non-perturbative results for the coefficients $b_{\mathrm{m}}$ and $b_{\mathrm{a}}-b_{\mathrm{P}}$ in ${\rm O}(a)$ improved lattice QCD_, _Nucl. Phys. B_ 595 (2001) 44, [hep-lat/0009021].
* [11] T. Bhattacharya, R. Gupta, W. Lee and S. R. Sharpe, _Scaling behavior of discretization errors in renormalization and improvement constants_ , _Phys. Rev. D_ 73 (2006) 114507, [hep-lat/0509160].
* [12] P. Fritzsch, J. Heitger and N. Tantalo, _Non-perturbative improvement of quark mass renormalization in two-flavour lattice QCD_ , _JHEP_ 08 (2010) 074, [1004.3978].
* [13] G. S. Bali, E. E. Scholz, J. Simeth and W. Söldner, _Lattice simulations with $N_{\mathrm{f}}=2+1$ improved Wilson fermions at a fixed strange quark mass_, _Phys. Rev. D_ 94 (2016) 074501, [1606.09039].
* [14] G. M. de Divitiis, P. Fritzsch, J. Heitger, C. C. Köster, S. Kuberski and A. Vladikas, _Non-perturbative determination of improvement coefficients $b_{\mathrm{m}}$ and $b_{\mathrm{A}}-b_{\mathrm{P}}$ and normalisation factor $Z_{\mathrm{m}}Z_{\mathrm{P}}/Z_{\mathrm{A}}$ with $N_{\mathrm{f}}=3$ Wilson fermions_, _Eur. Phys. J. C_ 79 (2019) 797, [1906.03445].
* [15] J. Heitger, F. Joswig and A. Vladikas, _Ward identity determination of $Z_{\mathrm{S}}/Z_{\mathrm{P}}$ for $N_{\mathrm{f}}=3$ lattice QCD in a Schrödinger functional setup_, _Eur. Phys. J. C_ 80 (2020) 765, [2005.01352].
* [16] G. Bali, S. Bürger, S. Collins, M. Göckeler, M. Gruber, S. Piemonte et al., _Nonperturbative Renormalization in Lattice QCD with three Flavors of Clover Fermions: Using Periodic and Open Boundary Conditions_ , 2012.06284.
* [17] T. Bhattacharya, R. Gupta, W. Lee, S. R. Sharpe and J. M. Wu, _Improved bilinears in lattice QCD with non-degenerate quarks_ , _Phys. Rev. D_ 73 (2006) 034504, [hep-lat/0511014].
* [18] M. Constantinou, M. Hadjiantonis, H. Panagopoulos and G. Spanoudes, _Singlet versus nonsinglet perturbative renormalization of fermion bilinears_ , _Phys. Rev. D_ 94 (2016) 114513, [1610.06744].
* [19] M. Lüscher and P. Weisz, _On-Shell Improved Lattice Gauge Theories_ , _Commun. Math. Phys._ 97 (1985) 59. [Erratum: Commun. Math. Phys. 98, 433 (1985)].
* [20] B. Sheikholeslami and R. Wohlert, _Improved Continuum Limit Lattice Action for QCD with Wilson Fermions_ , _Nucl. Phys. B_ 259 (1985) 572.
* [21] M. Bruno et al., _Simulation of QCD with $N_{\mathrm{f}}=2+1$ flavors of non-perturbatively improved Wilson fermions_, _JHEP_ 02 (2015) 043, [1411.3982].
* [22] M. Bruno, T. Korzec and S. Schaefer, _Setting the scale for the CLS $2+1$ flavor ensembles_, _Phys. Rev. D_ 95 (2017) 074504, [1608.08900].
* [23] D. Mohler, S. Schaefer and J. Simeth, _CLS $2+1$ flavor simulations at physical light- and strange-quark masses_, _EPJ Web Conf._ 175 (2018) 02010, [1712.04884].
* [24] J. Heitger, F. Joswig, A. Vladikas and C. Wittemeier, _Non-perturbative determination of $c_{V},Z_{V}$ and $Z_{S}/Z_{P}$ in $N_{f}=3$ lattice QCD_, _EPJ Web Conf._ 175 (2018) 10004, [1711.03924].
* [25] J. Heitger and F. Joswig, _The renormalised $\mathrm{O}(a)$ improved vector current in three-flavour lattice QCD with Wilson quarks_, _Eur. Phys. J. C_ 81 (2021) 254, [2010.09539].
* [26] J. Bulava, M. Della Morte, J. Heitger and C. Wittemeier, _Nonperturbative renormalization of the axial current in $N_{\mathrm{f}}=3$ lattice QCD with Wilson fermions and a tree-level improved gauge action_, _Phys. Rev. D_ 93 (2016) 114513, [1604.05827].
* [27] J. Bulava, M. Della Morte, J. Heitger and C. Wittemeier, _Non-perturbative improvement of the axial current in $N_{\mathrm{f}}=3$ lattice QCD with Wilson fermions and tree-level improved gauge action_, _Nucl. Phys. B_ 896 (2015) 555, [1502.04999].
* [28] L. Chimirri, P. Fritzsch, J. Heitger, F. Joswig, M. Panero, C. Pena et al., _Non-perturbative renormalization of $O(a)$ improved tensor currents_, _PoS_ LATTICE2019 (2020) 212, [1910.06759].
* [29] M. Bruno, I. Campos, P. Fritzsch, J. Koponen, C. Pena, D. Preti et al., _Light quark masses in $N_{\mathrm{f}}=2+1$ lattice QCD with Wilson fermions_, _Eur. Phys. J. C_ 80 (2020) 169, [1911.08025].
* [30] J. Heitger, F. Joswig and S. Kuberski, _Determination of the charm quark mass in lattice QCD with $2+1$ flavours on fine lattices_, 2101.02694.
* [31] G. S. Bali et al., _The strange and light quark contributions to the nucleon mass from Lattice QCD_ , _Phys. Rev. D_ 85 (2012) 054502, [1111.1600].
* [32] G. S. Bali, S. Collins, D. Richtmann, A. Schäfer, W. Söldner and A. Sternbeck, _Direct determinations of the nucleon and pion $\sigma$ terms at nearly physical quark masses_, _Phys. Rev. D_ 93 (2016) 094504, [1603.00827].
* [33] S. Aoki et al., _FLAG Review 2019: Flavour Lattice Averaging Group (FLAG)_ , _Eur. Phys. J. C_ 80 (2020) 113, [1902.08191].
* [34] K. Ottnad, _Excited states in nucleon structure calculations_ , in _38th International Symposium on Lattice Field Theory_ , 11, 2020. 2011.12471.
* [35] J. Green, _Systematics in nucleon matrix element calculations_ , _PoS_ LATTICE2018 (2018) 016, [1812.10574].
* [36] M. Lüscher, S. Sint, R. Sommer and P. Weisz, _Chiral symmetry and $\mathrm{O}(a)$ improvement in lattice QCD_, _Nucl. Phys. B_ 478 (1996) 365, [hep-lat/9605038].
* [37] J. Bulava and S. Schaefer, _Improvement of $N_{\mathrm{f}}=3$ lattice QCD with Wilson fermions and tree-level improved gauge action_, _Nucl. Phys. B_ 874 (2013) 188, [1304.7093].
* [38] M. Lüscher and S. Schaefer. http://luscher.web.cern.ch/luscher/openQCD.
* [39] A. D. Kennedy, I. Horvath and S. Sint, _A New exact method for dynamical fermion computations with nonlocal actions_ , _Nucl. Phys. Proc. Suppl._ 73 (1999) 834, [hep-lat/9809092].
* [40] M. A. Clark and A. D. Kennedy, _Accelerating dynamical fermion computations using the rational hybrid Monte Carlo (RHMC) algorithm with multiple pseudofermion fields_ , _Phys. Rev. Lett._ 98 (2007) 051601, [hep-lat/0608015].
* [41] M. Lüscher, S. Sint, R. Sommer, P. Weisz and U. Wolff, _Non-perturbative O(a) improvement of lattice QCD_ , _Nucl. Phys. B_ 491 (1997) 323, [hep-lat/9609035].
* [42] P. Perez-Rubio, S. Sint and S. Takeda, _An O(a) modified lattice set-up of the Schrödinger functional in SU(3) gauge theory_ , _JHEP_ 07 (2011) 116, [1105.0110].
* [43] L. Del Debbio, H. Panagopoulos and E. Vicari, _$\theta$ dependence of SU(N) gauge theories_, _JHEP_ 08 (2002) 044, [hep-th/0204125].
* [44] P. Fritzsch, A. Ramos and F. Stollenwerk, _Critical slowing down and the gradient flow coupling in the Schrödinger functional_ , _PoS_ Lattice2013 (2014) 461, [1311.7304].
* [45] U. Wolff, _Monte Carlo errors with less errors_ , _Comput. Phys. Commun._ 156 (2004) 143, [hep-lat/0306017]. [Erratum: Comput. Phys. Commun. 176, 383 (2007)].
* [46] S. Schaefer, R. Sommer and F. Virotta, _Critical slowing down and error analysis in lattice QCD simulations_ , _Nucl. Phys. B_ 845 (2011) 93, [1009.5228].
* [47] A. Ramos, _Automatic differentiation for error analysis of Monte Carlo data_ , _Comput. Phys. Commun._ 238 (2019) 19, [1809.01289].
* [48] P. T. Boggs and J. E. Rogers, _Orthogonal distance regression_ , tech. rep., National Institute of Standards and Technology, Gaithersburg, MD, 1989. 10.6028/NIST.IR.89-4197.
* [49] M. Dalla Brida, T. Korzec, S. Sint and P. Vilaseca, _High precision renormalization of the flavour non-singlet Noether currents in lattice QCD with Wilson quarks_ , _Eur. Phys. J. C_ 79 (2019) 23, [1808.09236].
* [50] A. Gerardin, T. Harris and H. B. Meyer, _Nonperturbative renormalization and $O(a)$-improvement of the nonsinglet vector current with $N_{f}=2+1$ Wilson fermions and tree-level Symanzik improved gauge action_, _Phys. Rev. D_ 99 (2019) 014519, [1811.08209].
* [51] S. Sint and P. Weisz, _Further results on O(a) improved lattice QCD to one loop order of perturbation theory_ , _Nucl. Phys. B_ 502 (1997) 251, [hep-lat/9704001].
* [52] S. Aoki, K.-i. Nagai, Y. Taniguchi and A. Ukawa, _Perturbative renormalization factors of bilinear quark operators for improved gluon and quark actions in lattice QCD_ , _Phys. Rev. D_ 58 (1998) 074505, [hep-lat/9802034].
Research partially supported by MTA Rényi “Lendület” Groups and Graphs
Research Group, by ERC Consolidator Grant 648017 and by NKFIH grants K 124152
and KKP 139502.
Keywords: Quantum channel, noise, classical simulation, signalling dimension.
In memoriam Katalin Marton
# Classical simulations of communication channels
Péter E. Frenkel Eötvös Loránd University, Pázmány Péter sétány 1/C,
Budapest, 1117 Hungary
and Rényi Institute, Budapest, Reáltanoda u. 13-15, 1053 Hungary
<EMAIL_ADDRESS>
###### Abstract
We investigate whether certain non-classical communication channels can be
simulated by a classical channel with a given number of states and a given
‘amount’ of noise. It is proved that any noisy quantum channel can be
simulated by a corresponding classical channel with ‘the same amount’ of
noise. Classical simulations of general probabilistic channels are also
studied.
## Introduction
A communication protocol with $l$ possible inputs and $k$ possible outputs can
be described by a _transition matrix_ $A=(a_{ij})\in[0,1]^{k\times l}$, where
$a_{ij}$ is the conditional probability of output $i$ if the input is $j$.
This is a _stochastic_ matrix: for all $j$, we have $\sum_{i=1}^{k}a_{ij}=1$.
A _communication channel_ can be described by the set of transition matrices
that it affords. Channel Q _can be simulated_ by channel C if all transition
matrices afforded by Q are convex combinations of transition matrices afforded
by C. Such convex combinations occur naturally in information theory; they
correspond to the sender and receiver having access to (unlimited) shared
randomness. The relation ‘can be simulated by’ is obviously reflexive and
transitive. Two channels are _equivalent_ if each can be simulated by the
other.
The _classical channel with $n$ states_ affords stochastic 0-1 matrices with
at most $n$ nonzero rows. The _quantum channel of level $n$_ affords channel
matrices of the form $(\operatorname{tr}E_{i}\rho_{j})$, where
$\rho_{1},\dots,\rho_{l}\in M_{n}(\mathbb{C})$ are _density matrices_ , and
$E_{1},\dots,E_{k}\in M_{n}(\mathbb{C})$ is a _positive operator valued
measure (POVM)_. It is easy to see that the classical channel with $n$ states
can be simulated by the quantum channel of level $n$. By [4, Theorem 3] of
Weiner and the present author, the converse also holds. The present paper is
about variants of this theorem for general probabilistic channels (Section 1)
and for noisy quantum channels (Section 2). In Section 3, we discuss noiseless
classical simulations of noisy channels. Section 4 contains an open problem
tentatively linking classical simulations of quantum channels to the more
traditional way of comparing efficiency of classical and quantum
communication, involving von Neumann entropy, mutual information and Holevo’s
inequality. The reader who is interested in quantum information theory but not
in general probabilistic theory can safely skip Section 1.
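To make the definitions concrete, here is a small numerical sketch (our example, not from the paper) of a transition matrix $(\operatorname{tr}E_{i}\rho_{j})$ afforded by the quantum channel of level 2, using two hypothetical qubit states and a computational-basis measurement:

```python
import numpy as np

# Two qubit density matrices (hypothetical example states):
# rho1 = |0><0|, rho2 = |+><+|
rho1 = np.array([[1, 0], [0, 0]], dtype=complex)
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)

# POVM: projective measurement in the computational basis
E1 = np.array([[1, 0], [0, 0]], dtype=complex)
E2 = np.array([[0, 0], [0, 1]], dtype=complex)

# Transition matrix a_ij = tr(E_i rho_j)
A = np.real(np.array([[np.trace(E @ rho) for rho in (rho1, rho2)]
                      for E in (E1, E2)]))
print(A)              # columns are probability distributions
print(A.sum(axis=0))  # each column sums to 1, so A is stochastic
```

Since the POVM elements sum to the identity and each $\rho_{j}$ has trace 1, every column of the resulting matrix is automatically a probability distribution.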
Notations and terminology. The set $\\{1,\dots,k\\}$ is denoted by $[k]$. For
a real number $a$, we write $a_{+}=\max(a,0)$. The indicator of an event $A$
is written $\mathbb{1}(A)$. A _convex body_ is a convex compact set with
nonempty interior.
A matrix is stochastic if all entries are nonnegative reals and each column
sums to 1. The set of $n$-square matrices with complex entries is written
$M_{n}(\mathbb{C})$. The identity matrix is $\bf 1$. A complex matrix $A$ is
psdh if it is positive semi-definite Hermitian, written $A\geq 0$. A _positive
operator valued measure (POVM)_ is a sequence $E_{1}$, …, $E_{k}$ of psdh
matrices summing to $\bf 1$. A density matrix is a psdh matrix with trace 1.
For $0\leq\delta\leq 1$, the _$\delta$ -noisy classical channel with $n$
states_ affords transition matrices of the form $EX\in[0,1]^{k\times l}$,
where $E\in\\{0,1\\}^{k\times n}$ is a stochastic 0-1 matrix and $X$ is an
$n\times l$ stochastic matrix with each column containing $n-1$ entries equal
to $\delta/n$. Here $E$ can be interpreted as a classical decoding map
$[n]\to[k]$, and the columns of $X$ can be interpreted as extremal
$\delta$-noisy classical states. The presence of noise impedes the exact
transmission of pure states: the pure state chosen by the sender is
transmitted unchanged with probability $1-(n-1)\delta/n$, but turns into each
one of the other $n-1$ pure states with probability $\delta/n$. Note that the
0-noisy classical channel with $n$ states is the same as the classical channel
with $n$ states defined previously.
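As an illustrative sketch (the names below are ours, not the paper's), the extremal $\delta$-noisy states and a resulting transition matrix $EX$ can be built as follows:

```python
import numpy as np

def noisy_state(n: int, j: int, delta: float) -> np.ndarray:
    """Extremal delta-noisy classical state: pure state j kept with
    probability 1-(n-1)*delta/n, each other state receiving delta/n."""
    x = np.full(n, delta / n)
    x[j] = 1.0 - (n - 1) * delta / n
    return x

n, delta = 3, 0.3
# X: columns are extremal delta-noisy states (here l = n inputs)
X = np.column_stack([noisy_state(n, j, delta) for j in range(n)])
# E: a 0-1 stochastic decoding map [n] -> [k] with k = 2 outputs,
# sending states 0, 1 to output 0 and state 2 to output 1
E = np.array([[1, 1, 0],
              [0, 0, 1]])
A = E @ X
print(X.sum(axis=0))  # each column of X sums to 1
print(A.sum(axis=0))  # EX is again stochastic
```

Each column of $X$ sums to $1-(n-1)\delta/n+(n-1)\delta/n=1$, and multiplying by the 0-1 stochastic matrix $E$ preserves this, so $EX$ is a valid transition matrix.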
Following the terminology of [2, 3], the _signalling dimension_
$\operatorname{sign.\\!dim}\mathrm{Q}$ of a channel $\mathrm{Q}$ is the
smallest positive integer $n$ such that $\mathrm{Q}$ can be simulated by the
(noiseless) classical channel with $n$ states.
## 1 General probabilistic theory
Let $S$ be a convex body in a finite dimensional real affine space. Let $E$ be
the cone of _effects_ , i.e., affine linear functions $e:S\to[0,\infty)$. A
_partition of unity_ is a sequence $e_{1},\dots,e_{k}\in E$ of effects such
that $e_{1}+\dots+e_{k}=1$ (the constant 1 function). The _channel with state
space $S$_ affords transition matrices of the form
$(e_{i}(x_{j}))\in[0,1]^{k\times l}$, where $x_{1},\dots,x_{l}\in S$, and
$e_{1}$, …, $e_{k}$ is a partition of unity.
### 1.1 Signalling dimension vs. information storability
Following terminology introduced in [2], the _signalling dimension_
$\operatorname{sign.\\!dim}S$ of $S$ is the signalling dimension of the
channel with state space $S$, i.e., the smallest positive integer $n$ such
that the channel with state space $S$ can be simulated by the classical
channel with $n$ states. By [4, Theorem 3] mentioned in the Introduction, the
signalling dimension of the set of $n$-square density matrices is $n$.
Calculating, or even efficiently estimating, the signalling dimension of a
given convex body seems to be a difficult problem, and strong general theorems
are yet to be found. In this section, we start with weak general
results and work our way towards deeper results for special cases.
The _affine dimension_ $\operatorname{aff.\\!dim}S$ of $S$ is the minimal
dimension of an affine space containing $S$. Adding 1, we get the _linear
dimension_ $\operatorname{lin.\\!dim}S$ of $S$, i.e., the dimension of the
vector space of affine linear functions on $S$. For example, the affine
dimension of the set of $n$-square density matrices is $n^{2}-1$, while its
linear dimension is $n^{2}$.
A partition of unity is _extremal_ if it cannot be written as a convex
combination of two partitions of unity in a nontrivial way. The nonzero
effects appearing in an extremal partition of unity need not lie on extremal
rays of the cone $E$ of effects. When they do, a characterization of extremal
partitions of unity is given in [2, Theorem 2]. We now give a necessary
condition of extremality for a general partition of unity. Although this is
implicitly contained in the paper cited above (see the proof given there), we
include a proof.
###### Proposition 1.1.
The nonzero effects in an extremal partition of unity are linearly
independent. Thus, their number is $\leq$ the linear dimension of $S$.
###### Proof.
Let $e_{1}$, …, $e_{k}$ be an extremal partition of unity. If
$\lambda_{1}e_{1}+\dots+\lambda_{k}e_{k}=0$ and $|\epsilon|\leq
1/\max\\{|\lambda_{i}|:\lambda_{i}\neq 0\\}$, then
$(1\pm\epsilon\lambda_{1})e_{1}$, …, $(1\pm\epsilon\lambda_{k})e_{k}$ is also
a partition of unity, which must coincide with $e_{1}$, …, $e_{k}$ because of
extremality. Thus $\lambda_{i}e_{i}=0$ for all $i$. ∎
Consider the transition matrix $A=(a_{ij})\in[0,1]^{k\times l}$ of some
communication protocol, where $a_{ij}$ is the conditional probability of
output $i\in[k]$ if the input was $j\in[l]$. Let us try to guess the input
based on the output, using a function $G:[k]\to[l]$. If input $j$ occurs with
probability $q_{j}$, then the probability of success will be
$\sum_{j=1}^{l}q_{j}\sum_{i=1}^{k}a_{ij}\mathbb{1}(G(i)=j)=\sum_{i=1}^{k}q_{G(i)}a_{i,G(i)}.$
Choosing the best possible guessing function $G$, the probability of success
is
$\sum_{i=1}^{k}\max_{j}q_{j}a_{ij}.$
Without any communication, the probability of successfully guessing the input,
with the optimal strategy, is $\max_{j}q_{j}$. The ratio
$\sum_{i=1}^{k}\max_{j}q_{j}a_{ij}/\max_{j}q_{j}$ is maximized when
$q_{j}=1/l$ for all $j$, in which case it simplifies to
$\sum_{i=1}^{k}\max_{j}a_{ij}$. Motivated by these considerations, and
following [7] by Matsumoto and Kimura, the _information storability_
$\operatorname{inf.\\!stor}S$ of $S$ is defined to be the maximum of
$\sum_{i=1}^{k}\max_{j}a_{ij}$ over all transition matrices $(a_{ij})$
afforded by $S$, or, equivalently, the maximum of
$\sum_{i=1}^{k}\max_{S}e_{i}$ over all partitions of unity $e_{1}$, …,
$e_{k}$. When taking these maxima, it suffices to consider extremal partitions
of unity. Then Proposition 1.1 and a simple compactness argument show that
these maxima are attained.
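To make the guessing computation above concrete, here is a small numeric sketch; the $2\times 3$ transition matrix and the uniform prior are hypothetical choices, not taken from the text.

```python
# Columns are inputs j, rows are outputs i; each column sums to 1.
A = [[0.9, 0.2, 0.5],
     [0.1, 0.8, 0.5]]
q = [1/3, 1/3, 1/3]  # uniform input distribution

k, l = len(A), len(A[0])

# Best guessing function G: for each output i, guess the input j
# maximizing q_j * a_ij; the success probability is sum_i max_j q_j a_ij.
p_success = sum(max(q[j] * A[i][j] for j in range(l)) for i in range(k))

# With uniform q this equals (1/l) * sum_i max_j a_ij, so the
# "sum of row maxima" quantity from the text is l * p_success.
row_max_sum = sum(max(A[i]) for i in range(k))
assert abs(row_max_sum - l * p_success) < 1e-12

# Without communication, the best success probability is max_j q_j = 1/3;
# the ratio p_success / max(q) here equals row_max_sum.
assert abs(p_success / max(q) - row_max_sum) < 1e-12
```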
As a simple example, let $S=[0,1]$. Then we can choose the partition of unity
$1=x+(1-x)$ to show that
$\operatorname{inf.\\!stor}S\geq\max_{0\leq x\leq 1}x+\max_{0\leq x\leq
1}(1-x)=1+1=2.$
On the other hand, as any affine linear function on $S$ takes its maximum at 0
or 1, we have
$\sum_{i=1}^{k}\max_{S}e_{i}=\sum_{e_{i}(0)\geq
e_{i}(1)}e_{i}(0)+\sum_{e_{i}(0)<e_{i}(1)}e_{i}(1)\leq 1+1=2$
for any partition of unity $e_{1}$, …, $e_{k}$ on $S$, whence
$\operatorname{inf.\\!stor}S=2$. This is the easiest special case of [7,
Theorems 1 and 4], cited below in relation to Theorem 1.2 and Proposition 1.3.
By [7, Theorem 4],
$\operatorname{inf.\\!stor}S\leq\operatorname{lin.\\!dim}S$. We refine this
inequality as follows.
###### Theorem 1.2.
1. $\operatorname{inf.\\!stor}S\leq\operatorname{sign.\\!dim}S\leq\operatorname{lin.\\!dim}S$.
2. If $\operatorname{inf.\\!stor}S\leq\operatorname{aff.\\!dim}S$, then
$\operatorname{sign.\\!dim}S\leq\operatorname{aff.\\!dim}S$.
This theorem is closely related to [3, Theorem 1(i)].
###### Proof.
(1) Let $n=\operatorname{sign.\\!dim}S$. Any transition matrix afforded by $S$
is a convex combination of transition matrices afforded by the classical
channel with $n$ states. Such a matrix has $\leq n$ nonzero rows and therefore
sum of row-maxima $\leq n$. This property is preserved when taking convex
combinations. This proves the first inequality.
Any transition matrix afforded by $S$ is a convex combination of transition
matrices of the form $(e_{i}(x_{j}))$, where $e_{1}$, …, $e_{k}$ is an
_extremal_ partition of unity, and $x_{j}\in S$. By Proposition 1.1, such a
matrix has $\leq\operatorname{lin.\\!dim}S$ nonzero rows, and therefore is a
convex combination of matrices afforded by the classical channel with
$\operatorname{lin.\\!dim}S$ states. This proves the second inequality.
(2) Let $\operatorname{inf.\\!stor}S\leq\operatorname{aff.\\!dim}S=n$. Any
transition matrix afforded by $S$ is a convex combination of matrices of the
form $A=(a_{ij})\in[0,1]^{k\times l}$, where $a_{ij}=e_{i}(x_{j})$, $e_{1}$,
…, $e_{k}$ is an _extremal_ partition of unity, and $x_{j}\in S$. We shall
show that such an $A$ is always a convex combination of transition matrices
afforded by the classical channel with $n$ states. Using Proposition 1.1, we
may assume that $k=n+1$. Set $m_{i}=\max_{S}e_{i}\in[0,1]$ for each $i\in[k]$.
Note that $\sum_{i=1}^{k}(1-m_{i})\geq n+1-\operatorname{inf.\\!stor}S\geq 1$.
Choose a probability distribution $p_{1}$, …, $p_{k}$ such that $p_{i}\leq
1-m_{i}$ for all $i$. Then
$p_{i}\leq 1-a_{ij}=\sum_{i^{\prime}\neq i}a_{i^{\prime}j}$
for all $i$ and $j$, and
$\sum_{i\in T}p_{i}\leq 1=\sum_{i=1}^{k}a_{ij}$
for all $T\subseteq[k]$.
For any fixed $j$, put supply $a_{ij}$ and demand $p_{i}$ at each node $i$ of
the complete (but loopless) graph on $k$ nodes. Then, for the total supply at
the neighbors of any subset $T\subseteq[k]$, we have
$\sum_{i\in N(T)}a_{ij}\geq\sum_{i\in T}p_{i}.$
By the Supply–Demand Theorem [6, 2.1.5. Corollary], the demands can be met:
there exist stochastic column vectors $b_{j}(1)$, …, $b_{j}(k)$ such that the
$i$-th entry of $b_{j}(i)$ is zero for all $i$, and
$\sum_{i=1}^{k}p_{i}b_{j}(i)$ is the $j$-th column of $A$. Now let $B(i)$ be
the matrix with columns $b_{1}(i)$, …, $b_{l}(i)$. Then the $i$-th row of
$B(i)$ is zero, so $B(i)$ has $\leq k-1=n$ nonzero rows, so $B(i)$ is a convex
combination of transition matrices afforded by the classical channel with $n$
states. Then so is $A$, since
$A=\sum_{i=1}^{k}p_{i}B(i).$
∎
For the remainder of this section, assume that $S$ is not just a point. A
_chord_ of $S$ is a segment $AB$ whose endpoints $A$ and $B$ belong to the
boundary of $S$. We write $AOB$ for a chord $AB$ with a distinguished point
$O$ on the chord. The convex body $S$ is _centrally symmetric_ if there exists
a point $O\in S$ such that for any chord $AOB$ of $S$, we have $|OA|=|OB|$ for
the lengths of the segments $OA$ and $OB$. The _Minkowski measure of
asymmetry_ $\operatorname{asymm}S$ of $S$ is the smallest real number $m\geq
1$ such that there exists a point $O\in S$ such that for any chord $AOB$ of
$S$, we have $|OB|\leq m|OA|$.
By [7, Theorem 1] of Matsumoto and Kimura, the information storability is
related to the Minkowski measure of asymmetry as follows.
###### Proposition 1.3.
$\operatorname{inf.\\!stor}S=\operatorname{asymm}S+1$
Although this is a known statement, we include a sketch of a geometric proof
for the convenience of the reader.
###### Proof.
$\leq$: There exists a point $O\in S$ such that for any chord $AOB$ of $S$, we
have $|OB|\leq(\operatorname{asymm}S)|OA|$. Let $n=\operatorname{asymm}S+1$.
Then $e(x)\leq ne(O)$ for all $e\in E$ and $x\in S$, whence
$\sum_{i=1}^{k}\max_{S}e_{i}\leq n\sum_{i=1}^{k}e_{i}(O)=n$
for all partitions of unity $e_{1}$, …, $e_{k}$.
$\geq$: Let $n=\operatorname{inf.\\!stor}S$. Then
$\sum_{i=1}^{k}\max_{S}e_{i}\leq n$ for all partitions of unity $e_{1}$, …,
$e_{k}$. When $k$ is the linear dimension of $S$, this tells us that for any
simplex $\Delta$ containing $S$, there exists a point each of whose
barycentric coordinates with respect to $\Delta$ is at least $1/n$ times the
maximum value of that barycentric coordinate on $S$. Using Helly’s theorem, we
see that there exists a point $O$ that divides the distance between any two
parallel supporting hyperplanes of $S$ in a ratio at least as equitable as
$1:(n-1)$. Then, for any chord $AOB$ of $S$ with $|AO|\leq|OB|$, considering
the supporting hyperplane of $S$ at $A$ and the parallel supporting
hyperplane, we get that $|OB|\leq(n-1)|OA|$. ∎
###### Corollary 1.4.
For the regular octahedron, we have $\operatorname{asymm}=1$,
$\operatorname{inf.\\!stor}=2$,
$\operatorname{sign.\\!dim}=\operatorname{aff.\\!dim}=3$, and
$\operatorname{lin.\\!dim}=4$.
###### Proof.
The regular octahedron is centrally symmetric, which means that
$\operatorname{asymm}=1$. By Proposition 1.3, we have
$\operatorname{inf.\\!stor}=\operatorname{asymm}+1=2$. Obviously,
$\operatorname{aff.\\!dim}=3$ and
$\operatorname{lin.\\!dim}=\operatorname{aff.\\!dim}+1=4$.
By Theorem 1.2(2), we have $\operatorname{sign.\\!dim}\leq 3$. To prove the
converse inequality, let
$X=\begin{pmatrix}1&-1&&&&\\\ &&1&-1&&\\\ &&&&1&-1\end{pmatrix}$
be the matrix whose columns are the vertices of the octahedron (the entries
not shown are zero). Let
$V=\begin{pmatrix}1&1&1\\\ 1&-1&-1\\\ -1&1&-1\\\ -1&-1&1\end{pmatrix},$
then
$VX=\begin{pmatrix}1&-1&1&-1&1&-1\\\ 1&-1&-1&1&-1&1\\\ -1&1&1&-1&-1&1\\\
-1&1&-1&1&1&-1\end{pmatrix}.$
Adding 1 to each entry and dividing by 4, we get the stochastic matrix
$A=\frac{1}{2}\begin{pmatrix}1&0&1&0&1&0\\\ 1&0&0&1&0&1\\\ 0&1&1&0&0&1\\\
0&1&0&1&1&0\end{pmatrix},$
which is therefore a transition matrix afforded by the octahedron. Since any
two rows of $A$ have a 1/2 at the same position, we have
$\sum_{1\leq i<i^{\prime}\leq 4}\max_{1\leq j\leq
6}(a_{ij}+a_{i^{\prime}j})={4\choose 2}=6.$
On the other hand, any $4\times 6$ transition matrix afforded by the classical
channel with 2 states has at least $4-2=2$ zero rows, so the sum above would
be $\leq{4\choose 2}-{{4-2}\choose 2}=5$ — note that this is a special case of
[4, inequality (3.6)]. This inequality is preserved under convex combinations.
Therefore, the octahedron cannot be simulated by the classical 2-state
channel, hence its signalling dimension is (at least) 3. ∎
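The matrix computations in this proof can be checked mechanically. The sketch below rebuilds $VX$, forms $A$ by adding 1 to each entry and dividing by 4 as in the proof, and verifies both that $A$ is a transition matrix and that the pairwise-max sum equals 6.

```python
import itertools

# Columns of X are the vertices of the regular octahedron; V is the
# 4x3 sign matrix from the proof above.
X = [[1, -1, 0, 0, 0, 0],
     [0, 0, 1, -1, 0, 0],
     [0, 0, 0, 0, 1, -1]]
V = [[1, 1, 1],
     [1, -1, -1],
     [-1, 1, -1],
     [-1, -1, 1]]

VX = [[sum(V[i][t] * X[t][j] for t in range(3)) for j in range(6)]
      for i in range(4)]
# Entrywise (VX + 1)/4: a 4x6 matrix with entries 0 or 1/2.
A = [[(VX[i][j] + 1) / 4 for j in range(6)] for i in range(4)]

# Each column of A sums to 1, so A is a transition matrix.
assert all(abs(sum(A[i][j] for i in range(4)) - 1) < 1e-12 for j in range(6))

# Any two rows share a position where both entries are 1/2, so the
# pairwise-max sum is binom(4,2) = 6, exceeding the bound 5 that a
# classical 2-state channel would have to satisfy.
s = sum(max(A[i][j] + A[ip][j] for j in range(6))
        for i, ip in itertools.combinations(range(4), 2))
assert abs(s - 6) < 1e-12
```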
### 1.2 Noisy balls
If an origin is chosen in $S$, and $0\leq\delta\leq 1$, then the
_$\delta$-noisy channel with state space $S$_ affords the transition matrices
$(e_{i}(x_{j}))$, where $e_{1}$, …, $e_{k}$ is a partition of unity and
$x_{j}\in(1-\delta)S$ for all $j$. This is analogous to the partial
depolarization channel in quantum information theory, cf. Subsection 3.1. Note
that $e_{i}\geq 0$ is required on all of $S$.
It is easy to see that if $S^{\prime}=f(S)$ is an affine image of $S$, then
$S^{\prime}$ can be simulated by $S$. If, in addition, $O^{\prime}=f(O)$, then
$\delta$-noisy $S^{\prime}$ can be simulated by $\delta$-noisy $S$. In
particular, a classical bit can be simulated by $S$ unless $S$ is just a
point, and a $\delta$-noisy classical bit can be simulated by any
$\delta$-noisy $S\neq\\{O\\}$ that is symmetric with respect to $O$.
###### Theorem 1.5.
Let $n$ be an even positive integer. Put
$S=\\{x\in\mathbb{R}^{d}:\|x\|_{n/(n-1)}\leq 1\\},$
the unit ball of the $n/(n-1)$-norm. Let $0\leq\delta\leq 1$.
1. The $\delta$-noisy channel with state space $S$ can be simulated by the
$\delta$-noisy classical channel with $n$ states.
2. The signalling dimension of $S$ is $\leq n$.
3. For an ellipsoid of arbitrary affine dimension $\geq 1$, the signalling
dimension is $2$. A $\delta$-noisy ellipsoid can be simulated by a
$\delta$-noisy classical bit.
The proof below is similar to that of [4, Theorem 3]. However, the mixed
discriminant used there (and used in Section 2 of the present paper) must be
replaced by a different $n$-linear symmetric function
$\\{\cdot,\dots,\cdot\\}$.
To introduce $\\{\cdot,\dots,\cdot\\}$, we can think of an affine linear
function $e:S\to\mathbb{R}$ as a formal sum of a number and a vector:
$e=c+v\in\mathbb{R}\oplus\mathbb{R}^{d}=\mathbb{R}^{d+1}$, meaning that
$e(x)=c+vx$ for $x\in S$, where $vx$ is the usual inner product. For an effect
$e\in E$, the condition $e\geq 0$ translates to $\|v\|_{n}\leq c$ because
$(n/(n-1))^{-1}+n^{-1}=1.$
Given $e_{1},\dots,e_{n}\in\mathbb{R}^{d+1}$, where $e_{i}=c_{i}+v_{i}$, we
define
$\\{e_{1},\dots,e_{n}\\}=c_{1}\cdots c_{n}-v_{1}\cdots v_{n},$
where $v_{1}\cdots v_{n}$ means that we take the coordinatewise product and
then add up the coordinates (which is an $n$-linear generalization of the
usual inner product). For $n=2$, $\\{\cdot,\cdot\\}$ is the Lorentzian
indefinite symmetric bilinear product well known from the special theory of
relativity. For general $n$, $\\{\cdot,\dots,\cdot\\}$ is symmetric,
multilinear and $\\{1,\dots,1\\}=1$. When $e_{1},\dots,e_{n}\in E$, we have
$\\{e_{1},\dots,e_{n}\\}\geq 0$ by repeated application of Hölder’s
inequality. Further, if $0\leq e\leq 1$ holds pointwise on $S$, then writing
$e=c+v$ and $a=\|v\|_{n}$, we have $0\leq a\leq\min(c,1-c)$ and therefore
$\displaystyle\\{e,\dots,e\\}=c^{n}-v^{n}\overset{*}{=}c^{n}-a^{n}=$
$\displaystyle=(c-a)(c^{n-1}+c^{n-2}a+\dots+ca^{n-2}+a^{n-1})\leq$
$\displaystyle\leq(c-a)(c+(1-c))^{n-1}=c-a=\min_{x\in S}e(x).$
Note that the equality marked by a * holds because $n$ is even.
We are now ready to start the proof of Theorem 1.5.
###### Proof.
(1) Let $A\in[0,1]^{k\times l}$ be a $\delta$-noisy transition matrix afforded
by $S$, i.e.,
$a_{ij}=e_{i}((1-\delta)x_{j}),$
where $x_{1},\dots,x_{l}\in S$, $e_{i}\in E$, and $e_{1}+\dots+e_{k}=1$. We
shall prove that $A$ is a convex combination of $\delta$-noisy $n$-state
classical transition matrices.
If $e_{i}=c_{i}+v_{i}$ as before, then $c_{1}+\dots+c_{k}=1$,
$v_{1}+\dots+v_{k}=0$, and
$a_{ij}=c_{i}+(1-\delta)v_{i}x_{j}=\delta c_{i}+(1-\delta)e_{i}(x_{j}),$
so $A=\delta C+(1-\delta)A^{\prime}$, where $C$ is the matrix with entries
$c_{ij}=c_{i}$ not depending on $j$, and $A^{\prime}$ is the matrix with
entries $a^{\prime}_{ij}=e_{i}(x_{j})$.
For $I=(i_{1},\dots,i_{n})\in[k]^{n}$, put
$p_{I}=\\{e_{i_{1}},\dots,e_{i_{n}}\\}.$
We have $p_{I}\geq 0$ for all $I$. Thus, we get a measure $P$ on $[k]^{n}$
defined by
$P(T)=\sum_{I\in T}p_{I}.$
Using the multilinearity of the bracket and the assumption that $e_{1}$, …,
$e_{k}$ is a partition of unity, we see that
$P([k]^{n})=\\{1,\dots,1\\}=1,$
so $P$ is a probability measure.
Let $D(I)$ be the matrix with entries $d(I)_{ij}=m(i,I)/n$ not depending on
$j$, where $m(i,I)$ is the number of occurrences of $i$ in the sequence $I$.
Then $\int D\mathrm{d}P=C$ because
$\int
d(I)_{ij}\,\mathrm{d}P(I)=\sum_{I\in[k]^{n}}p_{I}m(i,I)/n=\\{e_{i},1,\dots,1\\}=c_{i}=c_{ij}.$
For any $R\subseteq[k]$, we may put $e_{R}=\sum_{i\in R}e_{i}$, and then we
have
$P(R^{n})=\\{e_{R},\dots,e_{R}\\}\leq\min_{x\in S}e_{R}(x)\leq e_{R}(x_{j})$
for all $j$ since $0\leq e_{R}\leq 1$. The right hand side here is
$A^{\prime}_{j}(R)$, where $A^{\prime}_{j}$ is the probability measure on
$[k]$ given by the numbers $e_{i}(x_{j})$. So we have
$A^{\prime}_{j}(R)\geq P(R^{n})\qquad\textrm{ for all }R\subseteq[k].$
Let us connect $I\in[k]^{n}$ to $i\in[k]$ by an edge if $i$ occurs in $I$.
This gives us a bipartite graph. The neighborhood of any set
$T\subseteq[k]^{n}$ is the set $R\subseteq[k]$ of indices occurring in some
element of $T$. We always have $T\subseteq R^{n}$, whence
$A^{\prime}_{j}(R)\geq P(R^{n})\geq P(T).$
Thus, by the Supply–Demand Theorem [6, 2.1.5. Corollary], and using the fact
that both $A^{\prime}_{j}$ and $P$ are probability measures, there exists a
probability measure $P_{j}$ on $[k]^{n}\times[k]$ which is supported on the
edges of the graph and has marginals $P$ and $A^{\prime}_{j}$. Whenever
$p_{I}\neq 0$, let $B^{\prime}(I)$ be the $k\times l$ stochastic matrix whose
$j$-th column is given by the conditional distribution $P_{j}|I$ on $[k]$. We
have $A^{\prime}=\int B^{\prime}\mathrm{d}P$.
Now $B(I)=\delta D(I)+(1-\delta)B^{\prime}(I)$ is a convex combination of
$\delta$-noisy $n$-state classical transition matrices, and, in turn, $A=\int
B\mathrm{d}P$ is a convex combination of the $B(I)$, as desired.
(2) Set $\delta=0$ in (1).
(3) The signalling dimension of an ellipsoid is the same as that of the
Euclidean unit ball. This is $\leq 2$ by (2), and is $\geq 2$ because the unit
ball is not a point. The noisy claim follows from (1). ∎
## 2 Noisy quantum channels
Let
$K\subseteq\Delta_{n}=\\{(\xi_{1},\dots,\xi_{n}):\xi_{i}\geq 0\;\textrm{ for
all }\;i,\;\xi_{1}+\dots+\xi_{n}=1\\}$
be a convex set of probability distributions that is invariant under all
permutations of the $n$ coordinates. The _$K$-noisy classical channel_
affords transition matrices of the form $EX\in[0,1]^{k\times l}$, where $X\in
K^{l}$ is an $n\times l$ matrix with all columns in $K$, and $E$ is a $k\times
n$ stochastic 0-1 matrix. A density matrix is _$K$-noisy_ if the sequence of
its eigenvalues is in $K$. The _$K$-noisy quantum channel_ affords transition
matrices of the form $(\operatorname{tr}E_{i}\rho_{j})$, where $E_{1}$, …,
$E_{k}$ is a POVM and $\rho_{j}$ is a $K$-noisy density matrix for
$j=1,\dots,l$.
It is easy to see that the $K$-noisy classical channel can be simulated by the
$K$-noisy quantum channel. Our goal is to prove the converse, which is a far-
reaching generalization of [4, Theorem 3] mentioned in the Introduction.
In fact, we may generalize further. Let $K_{j}\subseteq\Delta_{n}$
($j=1,\dots,l$) be convex sets, each of them invariant under all permutations
of the $n$ coordinates. The _$(K_{1},\dots,K_{l})$-noisy classical channel_
affords transition matrices of the form $EX\in[0,1]^{k\times l}$, where $X\in
K_{1}\times\dots\times K_{l}$ is an $n\times l$ matrix with $j$-th column in
$K_{j}$, and $E$ is a $k\times n$ stochastic 0-1 matrix. The
_$(K_{1},\dots,K_{l})$-noisy quantum channel_ affords transition matrices of
the form $(\operatorname{tr}E_{i}\rho_{j})$, where $E_{1}$, …, $E_{k}$ is a
POVM and $\rho_{j}$ is a $K_{j}$-noisy density matrix for $j=1,\dots,l$.
It is easy to see that the $(K_{1},\dots,K_{l})$-noisy classical channel can
be simulated by the $(K_{1},\dots,K_{l})$-noisy quantum channel. We shall
prove the converse.
As in [4], our main tool is the _mixed discriminant_, the unique symmetric
$n$-linear function $D$ on $M_{n}(\mathbb{C})$ such that $D(E,\dots,E)=\det E$
for all $E\in M_{n}(\mathbb{C})$. Explicitly, if
$E_{i}=\left[e_{i}^{1},\dots,e_{i}^{n}\right]$ are the columns, then
$D(E_{1},\dots,E_{n})=\frac{1}{n!}\sum_{\pi\in\mathfrak{S}_{n}}\det\left[e_{\pi(1)}^{1},\dots,e_{\pi(n)}^{n}\right].$
(2.1)
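Definition (2.1) is easy to verify numerically for small $n$. The sketch below, with hypothetical random $3\times 3$ matrices, checks that $D(E,\dots,E)=\det E$ and that $D$ is symmetric in its arguments.

```python
import itertools
import math
import random

def det(M):
    """Determinant by Leibniz expansion (adequate for small n)."""
    n = len(M)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        inversions = sum(p > q for p, q in itertools.combinations(perm, 2))
        total += (-1) ** inversions * math.prod(M[i][perm[i]] for i in range(n))
    return total

def mixed_disc(*Es):
    """Formula (2.1): average over permutations pi of the determinant of
    the matrix whose j-th column is the j-th column of E_{pi(j)}."""
    n = len(Es)
    total = 0.0
    for pi in itertools.permutations(range(n)):
        M = [[Es[pi[j]][i][j] for j in range(n)] for i in range(n)]
        total += det(M)
    return total / math.factorial(n)

random.seed(0)
n = 3
E = [[random.random() for _ in range(n)] for _ in range(n)]
F = [[random.random() for _ in range(n)] for _ in range(n)]

# D(E,...,E) = det E, and D is symmetric in its arguments.
assert abs(mixed_disc(E, E, E) - det(E)) < 1e-9
assert abs(mixed_disc(E, F, E) - mixed_disc(E, E, F)) < 1e-9
```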
We shall need the following inequalities.
###### Lemma 2.1.
For $\lambda_{1},\dots,\lambda_{n}\in[0,1]$ and $r=1,2,\dots,n$, we have
$\sum_{Q\subseteq[n]}(r-|Q|)_{+}\prod_{m\notin Q}\lambda_{m}\prod_{m\in
Q}(1-\lambda_{m})\leq\lambda_{1}+\dots+\lambda_{r},$ (2.2)
where $a_{+}=\max(a,0)$.
###### Proof.
We have
$(r-|Q|)_{+}\leq\left|[r]\setminus Q\right|=\sum_{s=1}^{r}{\mathbb{1}}(s\notin
Q)$
for all $Q$. Thus, the left hand side of (2.2) is
$\leq\sum_{s=1}^{r}\sum_{Q\subseteq[n]\setminus\\{s\\}}\prod_{m\notin
Q}\lambda_{m}\prod_{m\in
Q}(1-\lambda_{m})=\sum_{s=1}^{r}\lambda_{s}\prod_{m\neq
s}(\lambda_{m}+(1-\lambda_{m}))=\lambda_{1}+\dots+\lambda_{r}.$
∎
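Inequality (2.2) can be spot-checked numerically; the sketch below evaluates the left hand side by enumerating all subsets $Q\subseteq[n]$ and tests it against random $\lambda\in[0,1]^{n}$.

```python
import itertools
import random

def lhs(lams, r):
    """Left hand side of inequality (2.2), summed over all Q in [n]."""
    n = len(lams)
    total = 0.0
    for mask in itertools.product([0, 1], repeat=n):
        # sum(mask) = |Q|; the factor is (r - |Q|)_+ times the product.
        term = float(max(r - sum(mask), 0))
        for m in range(n):
            term *= (1 - lams[m]) if mask[m] else lams[m]
        total += term
    return total

random.seed(1)
for _ in range(100):
    n = random.randint(1, 6)
    lams = [random.random() for _ in range(n)]
    for r in range(1, n + 1):
        # Inequality (2.2): LHS <= lambda_1 + ... + lambda_r.
        assert lhs(lams, r) <= sum(lams[:r]) + 1e-9
```

Note that equality holds, for instance, when all $\lambda_{m}=1$: only $Q=\varnothing$ contributes, giving exactly $r$.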
###### Lemma 2.2.
For an $n$-square Hermitian matrix $0\leq E\leq\bf 1$ with eigenvalues
$\lambda_{1}$, …, $\lambda_{n}$, and $r=1,2,\dots,n$, we have
$\sum_{q=0}^{r-1}(r-q){n\choose
q}D(\underbrace{E,\dots,E}_{n-q},\underbrace{{\bf 1}-E,\dots,{\bf
1}-E}_{q})\leq\lambda_{1}+\dots+\lambda_{r}.$
###### Proof.
Since the spectrum and the mixed discriminant are both invariant under unitary
conjugation, we may assume that $E$ is a diagonal matrix. Then (2.1) reduces
Lemma 2.2 to Lemma 2.1. ∎
By Bapat’s [1, Lemma 2(vi)], if $E_{1}$, …, $E_{n}$ are all positive
semidefinite Hermitian matrices, then
$D(E_{1},\dots,E_{n})\geq 0.$ (2.3)
Given a POVM $E_{1},\dots,E_{k}\in M_{n}(\mathbb{C})$, we define
$p_{I}=D(E_{i_{1}},\dots,E_{i_{n}})$ (2.4)
for all $I=(i_{1},\dots,i_{n})\in[k]^{n}$. By multilinearity and (2.3), this
defines a probability distribution on $[k]^{n}$.
###### Lemma 2.3.
If $E_{1},\dots,E_{k}\in M_{n}(\mathbb{C})$ is a POVM, $u_{1}$, …, $u_{k}$ are
real numbers, and $\lambda_{1}\leq\dots\leq\lambda_{n}$ are the eigenvalues of
$E=\sum_{i=1}^{k}u_{i}E_{i}$, then
$\sum_{I\in[k]^{n}}p_{I}\min\left\\{\sum_{m\in
S}u_{i_{m}}:S\subseteq[n],|S|=r\right\\}\leq\lambda_{1}+\dots+\lambda_{r}$
(2.5)
for all $r=1,2,\dots,n$.
###### Proof.
We may assume that all $u_{i}\geq 0$ because adding $u$ to all $u_{i}$ adds
$ru$ to both sides of (2.5). We may assume $u_{1}\geq\dots\geq u_{k}$. Put
$u_{k+1}=0$. Write $E=\sum_{i=1}^{k}v_{i}F_{i}$, where $v_{i}=u_{i}-u_{i+1}$
and $F_{i}=E_{1}+\dots+E_{i}$.
Let $\sigma_{i}$ be the sum of the $r$ smallest eigenvalues of $F_{i}$. Since
the sum of the $r$ smallest eigenvalues is a concave, positively homogeneous
(hence superadditive) function of a Hermitian matrix, and all $v_{i}\geq 0$,
$\sum_{i=1}^{k}v_{i}\sigma_{i}\leq\lambda_{1}+\dots+\lambda_{r}.$ (2.6)
As $0\leq F_{i}\leq\bf 1$, we have
$\sum_{q=0}^{r-1}(r-q)\binom{n}{q}D(\underbrace{F_{i},\dots,F_{i}}_{n-q},\underbrace{{\bf
1}-F_{i},\dots,{\bf 1}-F_{i}}_{q})\leq\sigma_{i}$ (2.7)
for all $i$, by Lemma 2.2.
On the other hand, since $u_{i}=v_{i}+\dots+v_{k}$, we have
$\min\left\\{\sum_{m\in
S}u_{i_{m}}:S\subseteq[n],|S|=r\right\\}=\sum_{i=1}^{k}v_{i}\left(r-|\\{m\in[n]:i_{m}>i\\}|\right)_{+}.$
It remains to check that
$\displaystyle\sum_{I\in[k]^{n}}p_{I}\left(r-|\\{m\in[n]:i_{m}>i\\}|\right)_{+}=$
$\displaystyle=\sum_{q=0}^{r-1}(r-q){n\choose
q}D(\underbrace{F_{i},\dots,F_{i}}_{n-q},\underbrace{{\bf 1}-F_{i},\dots,{\bf
1}-F_{i}}_{q})$
for all $i\in[k]$. This follows from
$\displaystyle\sum\left(p_{I}:I\in[k]^{n},|\\{m\in[n]:i_{m}>i\\}|=q\right)=$
$\displaystyle={n\choose
q}D(\underbrace{F_{i},\dots,F_{i}}_{n-q},\underbrace{{\bf 1}-F_{i},\dots,{\bf
1}-F_{i}}_{q}),$
which is clear from the definitions of $p_{I}$ and $F_{i}$, and from the
symmetry and multilinearity of $D$. ∎
We are ready for the main result of this paper.
###### Theorem 2.4.
The $(K_{1},\dots,K_{l})$-noisy quantum channel can be simulated by the
$(K_{1},\dots,K_{l})$-noisy classical channel. In particular, the $K$-noisy
quantum channel can be simulated by the $K$-noisy classical channel.
###### Proof.
It suffices to prove that for any POVM $E_{1}$, …, $E_{k}$, and any $K$-noisy
density matrix $\rho$, there exist points $x_{I}=(x_{I,1},\dots,x_{I,n})\in K$
for each $I=(i_{1},\dots,i_{n})\in[k]^{n}$ such that
$\operatorname{tr}E_{i}\rho=\sum_{I\in[k]^{n}}p_{I}\sum(x_{I,m}:m\in[n],i_{m}=i)$
(2.8)
for each $i\in[k]$. Here the $p_{I}$ are defined as in (2.4).
Let the eigenvalues of $\rho$ be $0\leq\mu_{1}\leq\dots\leq\mu_{n}$; we have
$\mu_{1}+\dots+\mu_{n}=1$. Since $\rho$ is $K$-noisy, we have
$\mu=(\mu_{1},\dots,\mu_{n})\in K$. Since $K$ is convex and invariant with
respect to permutations, any convex combination of permutations of $\mu$ is in
$K$. Thus, if $x\in[0,1]^{n}$ is a stochastic vector, and any $r$ distinct
coordinates of $x$ sum to $\geq\mu_{1}+\dots+\mu_{r}$ for each
$r=1,2,\dots,n$, then $x\in K$. If we require
* these $2^{n}$ inequalities for each $x_{I}$, together with
* $x_{I,m}\geq 0$ for all $I$ and $m$, and
* (2.8) for all $i$,
then each $x_{I}$ will be a stochastic vector since setting $r=n$ yields
$x_{I,1}+\dots+x_{I,n}\geq\mu_{1}+\dots+\mu_{n}=1,$
while summing (2.8) for $i=1,2,\dots,k$ yields
$1=\sum_{I\in[k]^{n}}p_{I}(x_{I,1}+\dots+x_{I,n}),$
forcing equality whenever $p_{I}\neq 0$.
Therefore, it suffices to prove that the system of $(2^{n}+n)k^{n}$
inequalities and $k$ equations above has a solution. By the well-known Farkas
Lemma, this is equivalent to saying that a linear combination of the
inequalities and equations in the system cannot lead to the contradictory
inequality $0\geq 1$. That is, it suffices to prove that if nonnegative
numbers $w_{I,H}$ $(I\in[k]^{n},H\subseteq[n])$ and real numbers $u_{1}$, …,
$u_{k}$ satisfy
$\sum(w_{I,H}:H\subseteq[n],H\ni m)\leq p_{I}u_{i_{m}}$ (2.9)
for all $I\in[k]^{n}$ and all $m\in[n]$, then
$\sum_{I\in[k]^{n}}\sum_{H\subseteq[n]}w_{I,H}(\mu_{1}+\dots+\mu_{|H|})\leq\sum_{i=1}^{k}u_{i}\operatorname{tr}E_{i}\rho.$
(2.10)
Let $\lambda_{1}\leq\dots\leq\lambda_{n}$ be the eigenvalues of
$u_{1}E_{1}+\dots+u_{k}E_{k}$. By von Neumann’s inequality, the right hand
side of (2.10) is
$\geq\lambda_{1}\mu_{n}+\dots+\lambda_{n}\mu_{1}.$
The coefficient of any $\mu_{t}$ on the left hand side of (2.10) is
$\sum_{I\in[k]^{n}}\sum_{|H|\geq t}w_{I,H},$
so it suffices to prove that
$\sum_{t=n-r+1}^{n}\sum_{I\in[k]^{n}}\sum_{|H|\geq
t}w_{I,H}\leq\lambda_{1}+\dots+\lambda_{r}$
for $r=1,\dots,n$. In view of Lemma 2.3, this follows if
$\sum_{t=n-r+1}^{n}\sum_{|H|\geq t}w_{I,H}\leq p_{I}\sum_{m\in S}u_{i_{m}}$
for all $I\in[k]^{n}$ and all $S\subseteq[n]$ with $|S|=r$. This follows from
(2.9) and the fact that
$\sum_{n-r<t\leq|H|}1=(|H|+r-n)_{+}\leq|S\cap H|=\sum_{m\in S\cap H}1$
for all $H,S\subseteq[n]$ with $|S|=r$. ∎
## 3 Simulation of a noisy channel by a noiseless one
Given the $K$-noisy channel, we might try to determine its signalling
dimension, i.e., simulate it by a noiseless classical channel with as few
states as possible. In view of Theorem 2.4, it makes no difference whether the
given channel is classical or quantum.
###### Theorem 3.1.
The $K$-noisy classical (or, equivalently, quantum) channel can be simulated
by the noiseless $d$-state classical (or, equivalently, level $d$ quantum)
channel if and only if we have
$\mu_{1}+\dots+\mu_{r}\geq\binom{r}{d}\bigg{/}\binom{n}{d}$ (3.1)
for all $\mu=(\mu_{1}\leq\dots\leq\mu_{n})\in K$ and all integers $d\leq r\leq
n$.
###### Proof.
‘Only if’: Let $\mu=(\mu_{1}\leq\dots\leq\mu_{n})\in K$. Let $A=(a_{ij})$ be
an $n\times n!$ stochastic matrix whose columns are the $n!$ permutations of
$\mu$. Then $A$ is a transition matrix afforded by the $K$-noisy channel, thus
also by the noiseless $d$-state channel. By [4, Section 3], we then have
$\binom{n}{r}(\mu_{1}+\dots+\mu_{r})=\sum_{|S|=r}\min_{j\in[l]}\sum_{i\in
S}a_{ij}\geq\binom{n-d}{n-r}$
for all $d\leq r\leq n$, which is equivalent to (3.1).
‘If’: Let $S$ be a uniform random $d$-element subset of $[n]$. It suffices to
prove that, for any $\mu\in K$, there is a random element $m$ of $S$ whose
distribution is given by $\mathbb{P}(m=r)=\mu_{r}$ for all $r=1,\dots,n$. We
may assume $\mu_{1}\leq\dots\leq\mu_{n}$. Let $\nu_{r}=\mathbb{P}(\max S=r)$,
then
$\nu_{1}+\dots+\nu_{r}=\mathbb{P}(\max S\leq
r)=\binom{r}{d}\bigg{/}\binom{n}{d}\leq\mu_{1}+\dots+\mu_{r},$
so $\mu$ is a convex combination of the permutations of $\nu$. But $\nu$ is
the distribution of the greatest element of $S$, so each permutation of $\nu$
is the distribution of an element of $S$, thus $\mu$ is the distribution of a
random element of $S$, as claimed. ∎
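The distributional identity behind the 'If' direction can be verified exactly. The sketch below, with the hypothetical choice $n=5$, $d=2$, checks that the partial sums of $\nu$ equal $\binom{r}{d}/\binom{n}{d}$.

```python
import itertools
import math
from fractions import Fraction

n, d = 5, 2

# nu_r = P(max S = r) for a uniform random d-element subset S of [n].
subsets = list(itertools.combinations(range(1, n + 1), d))
nu = [Fraction(sum(1 for S in subsets if max(S) == r), len(subsets))
      for r in range(1, n + 1)]

# Partial sums match the proof: nu_1 + ... + nu_r = C(r,d)/C(n,d).
for r in range(1, n + 1):
    assert sum(nu[:r]) == Fraction(math.comb(r, d), math.comb(n, d))
```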
For $0\leq\delta\leq 1$, the _$\delta$-noisy quantum channel of level $n$_
affords transition matrices of the form $(\operatorname{tr}E_{i}\rho_{j})$,
where $E_{1},\dots,E_{k}\in M_{n}(\mathbb{C})$ is a POVM and
$\rho_{1},\dots,\rho_{l}\in M_{n}(\mathbb{C})$ are density matrices with all
eigenvalues $\geq\delta/n$. This channel is equivalent to the $\delta$-noisy
classical channel with $n$ states. This is a special case of Theorem 2.4.
Alternatively, it can be shown by combining ideas from the proofs of Theorem
1.5(1) and [4, Theorem 3].
###### Corollary 3.2.
Let $0\leq\delta\leq 1$. The signalling dimension of the $\delta$-noisy
$n$-state classical (or, equivalently, $n$-level quantum) channel is
$\lceil(1-\delta)n+\delta\rceil$.
###### Proof.
The $\delta$-noisy $n$-state classical channel can be simulated by the
noiseless $d$-state classical channel if and only if we have
$r\delta/n\geq\binom{r}{d}\bigg{/}\binom{n}{d}$ (3.2)
for all integers $d\leq r\leq n-1$ — note that both sides of (3.1) are 1 for
$r=n$. In inequality (3.2), the left hand side is linear in $r$, while the
right hand side is convex for $r=0,1,\dots$. Also, the inequality holds for
$r=0,1,\dots,d-1$. Therefore, it holds for all integers $d\leq r\leq n-1$ if
and only if it holds for $r=n-1$, i.e., $(n-1)\delta/n\geq(n-d)/n$, or,
equivalently, $d\geq(1-\delta)n+\delta$. ∎
### 3.1 Partial replacer quantum channels
The usual mathematical model for a noisy quantum channel is given in terms of
a completely positive trace-preserving map $\mathcal{N}:M_{m}(\mathbb{C})\to
M_{n}(\mathbb{C})$. Let $\operatorname{ran}\mathcal{N}$ stand for the set of
density matrices $\mathcal{N}(\sigma)\in M_{n}(\mathbb{C})$, where $\sigma\in
M_{m}(\mathbb{C})$ is a density matrix. The channel affords transition
matrices of the form $(\operatorname{tr}E_{i}\rho_{j})\in[0,1]^{k\times l}$,
where $E_{1}$, …, $E_{k}$ is a POVM in $M_{n}(\mathbb{C})$ and each $\rho_{j}$
is contained in $\operatorname{ran}\mathcal{N}$. The _signalling dimension_
$\operatorname{sign.\\!dim}\mathcal{N}$ of $\mathcal{N}$ is the signalling
dimension of this channel, i.e., the smallest $d$ such that the channel can be
simulated by the noiseless classical channel with $d$ states. If the spectrum
of every $\rho\in\operatorname{ran}\mathcal{N}$ is contained in a given
permutation-invariant set $K\subseteq\Delta_{n}$, then every transition matrix
afforded by $\mathcal{N}$ is also afforded by the $K$-noisy quantum channel,
so we can use Theorem 2.4 to show that $\mathcal{N}$ can be simulated by
the $K$-noisy classical channel. Then Theorem 3.1 can be used to give an upper
bound on the signalling dimension of $\mathcal{N}$.
An important special case is given by _partial replacer channels_. Let $m\leq
n$. We embed $M_{m}(\mathbb{C})$ into $M_{n}(\mathbb{C})$ as the set of
matrices that are zero outside of the upper left $m$-square block. We fix a
density matrix $\rho\in M_{n}(\mathbb{C})$. The _replacer channel_
$\mathcal{N}_{\rho}:M_{m}(\mathbb{C})\to M_{n}(\mathbb{C})$ is given by
$\mathcal{N}_{\rho}(X)=(\operatorname{tr}X)\rho$. Given $0\leq\delta\leq 1$,
the _partial replacer channel_
$\mathcal{N}_{\rho}(\delta):M_{m}(\mathbb{C})\to M_{n}(\mathbb{C})$ is given
by $\mathcal{N}_{\rho}(\delta)(X)=(1-\delta)X+\delta(\operatorname{tr}X)\rho$.
In [3, Theorem 3] by Doolittle and Chitambar, it is shown that
$\lceil(1-\delta)m+\delta\rceil\leq\operatorname{sign.\\!dim}\mathcal{N}_{\rho}(\delta)\leq\min\\{m,\lceil(1-\delta)m+1\rceil\\},$
(3.3)
and the upper bound is tight for the _partial erasure channel_ given by the
_erasure flag_ $\rho$ which has entry 1 at position $(m+1,m+1)$ and zero
elsewhere. Note that the difference between the upper and the lower bound in
(3.3) is at most 1.
We shall now prove that the lower bound is tight if $m=n$ and $\rho$ is
sufficiently mixed, in particular, if $\rho=\mathbf{1}/n$ is the _maximally
mixed state_, yielding the _partial depolarization channel_
$\mathcal{N}(\delta)(X)=(1-\delta)X+(\delta/n)(\operatorname{tr}X)\mathbf{1}.$
From now on, we let $m=n$. Let $d=\lceil(1-\delta)n+\delta\rceil$ stand for
the lower bound in (3.3). Let $\mu_{1}\leq\dots\leq\mu_{n}$ stand for the
eigenvalues of a fixed density matrix $\rho$.
###### Proposition 3.3.
1. If $\delta(\mu_{1}+\dots+\mu_{r})\geq\binom{r}{d}/\binom{n}{d}$ holds for
$r=d,\dots,n-1$, then $\operatorname{sign.\\!dim}\mathcal{N}_{\rho}(\delta)=d$.
2. The partial depolarization channel is equivalent to the $\delta$-noisy
classical channel with $n$ states.
3. The signalling dimension of the partial depolarization channel is $d$.
###### Proof.
(1) The eigenvalues $\mu_{1}^{\prime}\leq\dots\leq\mu_{n}^{\prime}$ of
$\mathcal{N}_{\rho}(\delta)(\sigma)=(1-\delta)\sigma+\delta\rho\geq\delta\rho$
satisfy
$\mu_{1}^{\prime}+\dots+\mu_{r}^{\prime}\geq\delta(\mu_{1}+\dots+\mu_{r})$ for
any density matrix $\sigma\in M_{m}(\mathbb{C})$ and any $r=1,\dots,n$. Thus,
$\mu_{1}^{\prime}+\dots+\mu_{r}^{\prime}\geq\binom{r}{d}/\binom{n}{d}$ for
$r=1,\dots,n-1$, but also, trivially, for $r=n$. The claim now follows from
Theorem 3.1 together with the first inequality in (3.3).
(2) The range $\operatorname{ran}\mathcal{N}(\delta)$ is the set of density
matrices with all eigenvalues $\geq\delta/n$, so the claim follows from
Theorem 2.4.
(3) follows from (2) together with Corollary 3.2. ∎
## 4 Future research
It is well known that quantum communication can outperform classical
communication if entanglement is used cleverly. On the other hand, in certain
scenarios not involving entanglement, it can be proved that passing from
classical to quantum cannot increase efficiency.
A fundamental result in this direction is the Holevo bound [5] which we now
recall. For any stochastic matrix $A=(a_{ij})\in[0,1]^{k\times l}$ and input
probabilities $q_{j}\geq 0$ $(j=1,\dots,l)$ summing to 1, we define the
_mutual information_
$\operatorname{Info}(A,q)=H(j)+H(i)-H(i,j).$
Here $H$ stands for the Shannon entropy of a random variable, and the joint
distribution of the random pair $(i,j)$ is given by the probabilities
$q_{j}a_{ij}$. Now, for any density matrices $\rho_{j}\in M_{n}(\mathbb{C})$
and any POVM $E_{1},\dots,E_{k}\in M_{n}(\mathbb{C})$, the Holevo inequality
reads
$\operatorname{Info}(A,q)\leq\chi,$ (4.1)
where $a_{ij}=\operatorname{tr}E_{i}\rho_{j}$ and the Holevo quantity $\chi$
is defined by
$\chi=S\left(\sum_{j=1}^{l}q_{j}\rho_{j}\right)-\sum_{j=1}^{l}q_{j}S(\rho_{j}),$
where $S$ is von Neumann entropy, i.e., the Shannon entropy of the spectrum.
If all $\rho_{j}$ with $q_{j}>0$ commute, then a POVM $E_{1}$, …, $E_{k}$ can be
found so that equality holds in (4.1). Otherwise, the inequality is strict for
any POVM.
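For commuting (diagonal) states measured in their common eigenbasis, the Holevo quantity reduces to the classical mutual information, so the equality case of (4.1) can be seen numerically. The channel below is a hypothetical example, with the columns of $A$ playing the role of the spectra of two commuting states.

```python
import math

def H(ps):
    """Shannon entropy in bits, with the convention 0 log 0 = 0."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

# Hypothetical classical channel: columns are the output distributions
# of two commuting (diagonal) states measured in their eigenbasis.
A = [[0.7, 0.2],
     [0.3, 0.8]]
q = [0.5, 0.5]

# Info(A, q) = H(j) + H(i) - H(i, j), as defined in the text.
joint = [q[j] * A[i][j] for i in range(2) for j in range(2)]
p_out = [sum(q[j] * A[i][j] for j in range(2)) for i in range(2)]
info = H(q) + H(p_out) - H(joint)

# For commuting states, the Holevo quantity reduces to
# chi = H(sum_j q_j col_j) - sum_j q_j H(col_j), and equality holds in (4.1).
cols = [[A[i][j] for i in range(2)] for j in range(2)]
chi = H(p_out) - sum(q[j] * H(cols[j]) for j in range(2))
assert abs(info - chi) < 1e-12
```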
Another result in the above mentioned direction is [4, Theorem 3]: the
$n$-level quantum channel can be simulated by the $n$-state classical channel.
It would be nice to unify these two results. Let a probability distribution
$q_{1}$, …, $q_{l}$ be given. Can every quantum transition matrix
$A=(a_{ij})=(\operatorname{tr}E_{i}\rho_{j})\in[0,1]^{k\times l}$, where
$E_{1},\dots,E_{k}\in M_{n}(\mathbb{C})$ is a POVM, and
$\rho_{1},\dots,\rho_{l}\in M_{n}(\mathbb{C})$ are density matrices, be
written as a convex combination $A=\sum p_{I}A_{I}$ of stochastic matrices
$A_{I}$, each with $\leq n$ nonzero rows, and each satisfying
$\operatorname{Info}(A_{I},q)\leq\chi$? Can the proof of Theorem 2.4 be
modified to yield this result and thus, maybe, a new proof of Holevo’s
inequality?
Acknowledgement. I am grateful to Mihály Weiner for useful conversations.
## References
* [1] R. B. Bapat: Mixed discriminants of positive semidefinite matrices. Linear Algebra Appl. 126 (1989), 107–124. https://doi.org/10.1016/0024-3795(89)90009-8
* [2] Michele Dall’Arno, Sarah Brandsen, Alessandro Tosini, Francesco Buscemi, and Vlatko Vedral: No-Hypersignaling Principle, Phys. Rev. Lett. 119 (2017), 020401. https://doi.org/10.1103/PhysRevLett.119.020401
* [3] Brian Doolittle, Eric Chitambar: Certifying the Classical Simulation Cost of a Quantum Channel, Phys. Rev. Research 3, 043073. https://doi.org/10.1103/PhysRevResearch.3.043073
* [4] P. E. Frenkel and M. Weiner: Classical information storage in an $n$-level quantum system, Communications in Mathematical Physics 340 (2015), 563–574. https://doi.org/10.1007/s00220-015-2463-0
* [5] A. S. Holevo: Bounds for the Quantity of Information Transmitted by a Quantum Communication Channel, Probl. Peredachi Inf., 9:3 (1973), 3–11; Problems Inform. Transmission, 9:3 (1973), 177–183.
* [6] L. Lovász and M. D. Plummer: Matching Theory. North-Holland, 1986.
* [7] Keiji Matsumoto, Gen Kimura: Information-induced asymmetry of state space in view of general probabilistic theories. https://doi.org/10.48550/arXiv.1802.01162
# Deformations of varieties of general type
János Kollár
###### Abstract.
We prove that small deformations of a projective variety of general type are
also projective varieties of general type, with the same plurigenera.
Our aim is to prove the following.
###### Theorem 1.
Let $g:X\to S$ be a flat, proper morphism of complex analytic spaces. Fix a
point $0\in S$ and assume that the fiber $X_{0}$ is projective, of general
type, and with canonical singularities. Then there is an open neighborhood
$0\in U\subset S$ such that
1. (1.1)
the plurigenera of $X_{s}$ are independent of $s\in U$ for every $r$, and
2. (1.2)
the fibers $X_{s}$ are projective for every $s\in U$.
Here the $r$th plurigenus of $X_{s}$ is $h^{0}(Y_{s},\omega_{Y_{s}}^{r})$,
where $Y_{s}\to X_{s}$ is any resolution of $X_{s}$. By [Nak04, VI.5.2] (see
also (10.2)) $X_{s}$ has canonical singularities, so this is the same as
$h^{0}(X_{s},\omega_{X_{s}}^{[r]})$, where $\omega_{X_{s}}^{[r]}$ denotes the
double dual of the $r$th tensor power $\omega_{X_{s}}^{\otimes r}$.
Comments 1.3. Many cases of this have been proved, but I believe that the
general result is new, even for $X_{0}$ smooth and $S$ a disc.
For smooth surfaces proofs are given in [KS58, Iit69], and for 3-folds with
terminal singularities in [KM92, 12.5.1]. If $g$ is assumed projective, then
of course all fibers are projective, and deformation invariance of plurigenera
was proved by [Siu98] for $X_{0}$ smooth, and by [Nak04, Chap.VI] when $X_{0}$
has canonical singularities. However, frequently $g$ is not projective; see
Example 4 for some smooth, 2-dimensional examples. Many projective varieties
have deformations that are not projective, not even algebraic in any sense; K3
and elliptic surfaces furnish the best known examples.
In Example 3 we construct a deformation of a projective surface with a
quotient singularity and ample canonical class, whose general fibers are non-
algebraic, smooth surfaces of Kodaira dimension 0. Thus canonical is likely
the largest class of singularities where Theorem 1 holds. See also Example 5
for surfaces with simple elliptic singularities.
The projectivity of $X_{0}$ is essential in our proof, but (1.1) should hold
whenever $X_{0}$ is a proper algebraic space of general type with canonical
singularities. Such results are proved in [RT20], provided one assumes that
either $X_{0}$ is smooth and all fibers are Moishezon, or almost all fibers
are of general type.
Our main technical result says that the Minimal Model Program works for
$g:X\to S$. For $\dim X_{0}=2$ and $X_{0}$ smooth, this goes back to [KS58].
For $\dim X_{0}=3$ and terminal singularities, this was proved in [KM92,
12.4.4]. The next result extends these to all dimensions.
###### Theorem 2.
Let $g:X\to S$ be a flat, proper morphism of reduced, complex analytic spaces.
Fix a point $0\in S$ and assume that $X_{0}$ is projective and has canonical
singularities. Then every sequence of MMP-steps $X_{0}=X_{0}^{0}\dasharrow
X_{0}^{1}\dasharrow X_{0}^{2}\dasharrow\cdots$ (see Definition 7) extends to a
sequence of MMP-steps
$X=X^{0}\dasharrow X^{1}\dasharrow X^{2}\dasharrow\cdots,$
over some open neighborhood $0\in U\subset S$.
The proof is given in Paragraph 8 when $S$ is a disc ${\mathbb{D}}$, and in
Paragraph 12 in general. The assumption that $X_{0}$ has canonical
singularities is necessary, as shown by semistable 3-fold flips [KM92].
Extending MMP steps from divisors with canonical singularities is also studied
in [AK19].
If $X_{0}$ is of general type, then a suitable MMP for $X_{0}$ terminates with
a minimal model $X_{0}^{\rm m}$ by [BCHM10], which then extends to $g^{\rm
m}:X^{\rm m}_{U}\to U$ by Theorem 2. For minimal models of varieties of
general type, deformation invariance of plurigenera is easy, leading to a
proof of (1.1) in Paragraph 13. This also implies that all fibers are
bimeromorphic to a projective variety.
If $X_{0}$ is smooth, then it is Kähler, and the $X_{s}$ are also Kähler by
[KS58]. A Kähler variety that is bimeromorphic to an algebraic variety is
projective by [Moi66].
However, there are families of surfaces with simple elliptic singularities
$g:X\to S$ such that $K_{X_{0}}$ is ample, all fibers are bimeromorphic to an
algebraic surface, yet the projective fibers correspond to a countable, dense
set on the base; see Example 5.
We use Theorem 14—taken from [Kol21b, Thm.2]—to obtain the projectivity of the
fibers and complete the proof of Theorem 1 in Paragraph 13.
## 1\. Examples and consequences
The first example shows that Theorem 1 fails very badly for surfaces with non-
canonical quotient singularities.
###### Example 3.
We give an example of a flat, proper morphism of complex analytic spaces
$g:X\to{\mathbb{D}}$, such that
1. (3.1)
$X_{0}$ is a projective surface with a quotient singularity and ample
canonical class, yet
2. (3.2)
$X_{s}$ is smooth, non-algebraic, and of Kodaira dimension 0 for very general
$s\in{\mathbb{D}}$.
Let us start with a K3 surface $Y_{0}\subset{\mathbb{P}}^{3}$ with a
hyperplane section $C_{0}\subset Y_{0}$ that is a rational curve with 3 nodes.
We blow up the nodes $Y^{\prime}_{0}\to Y_{0}$ and contract the birational
transform of $C_{0}$ to get a surface $\tau_{0}:Y^{\prime}_{0}\to X_{0}$. Let
$E_{1},E_{2},E_{3}\subset X_{0}$ be the images of the 3 exceptional curves of
the blow-up.
By explicit computation, we get a quotient singularity of type
${\mathbb{C}}^{2}/\frac{1}{8}(1,1)$, $(E_{i}^{2})=-\frac{1}{2}$ and
$(E_{i}\cdot E_{j})=\frac{1}{2}$ for $i\neq j$. Furthermore,
$E:=E_{1}+E_{2}+E_{3}\sim K_{X_{0}}$ and it is ample by the Nakai-Moishezon
criterion. (Note that $(E\cdot E_{i})=\frac{1}{2}$ and $X_{0}\setminus E\cong
Y_{0}\setminus C_{0}$ is affine.)
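For convenience, the Nakai–Moishezon verification can be spelled out from the intersection numbers just listed:

```latex
(E\cdot E_i)=(E_i^2)+\sum_{j\neq i}(E_j\cdot E_i)
   =-\tfrac12+\tfrac12+\tfrac12=\tfrac12>0,
\qquad
(E^2)=\sum_{i=1}^{3}(E\cdot E_i)=3\cdot\tfrac12=\tfrac32>0.
```

Any irreducible curve not among the $E_{i}$ must meet $E$, since $X_{0}\setminus E$ is affine and so contains no complete curves; hence $(E\cdot C)>0$ for every curve $C\subset X_{0}$.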
Take now a deformation $Y\to{\mathbb{D}}$ of $Y_{0}$ whose very general fibers
are non-algebraic K3 surfaces that contain no proper curves. Take 3 sections
$B_{i}\subset Y$ that pass through the 3 nodes of $C_{0}$. Blow them up and
then contract the birational transform of $C_{0}$; cf. [MR71]. In general
[MR71] says that the normalization of the resulting central fiber is $X_{0}$,
but in our case the central fiber is isomorphic to $X_{0}$ since
$R^{1}(\tau_{0})_{*}{\mathcal{O}}_{Y^{\prime}_{0}}=0$. The contraction is an
isomorphism on very general fibers since there are no curves to contract. We
get $g:X\to{\mathbb{D}}$ whose central fiber is $X_{0}$ and all other fibers
are K3 surfaces blown up at 3 points.
In general, it is very unclear which complex varieties occur as deformations
of projective varieties; see [KLS21] for some of their properties.
###### Example 4.
[Ati58] Let $S_{0}:=(g=0)\subset{\mathbb{P}}^{3}_{\mathbf{x}}$ and
$S_{1}:=(f=0)\subset{\mathbb{P}}^{3}_{\mathbf{x}}$ be surfaces of the same
degree. Assume that $S_{0}$ has only ordinary nodes, $S_{1}$ is smooth,
$\operatorname{Pic}(S_{1})$ is generated by the restriction of
${\mathcal{O}}_{{\mathbb{P}}^{3}}(1)$ and $S_{1}$ does not contain any of the
singular points of $S_{0}$. Fix $m\geq 2$ and consider
$X_{m}:=(g-t^{m}f=0)\subset{\mathbb{P}}^{3}_{\mathbf{x}}\times{\mathbb{A}}^{1}_{t}.$
The singularities are locally analytically of the form $xy+z^{2}-t^{m}=0$.
Thus $X_{m}$ is locally analytically factorial if $m$ is odd. If $m$ is even
then $X_{m}$ is factorial since the general fiber has Picard number 1, but it
is not locally analytically factorial; blowing up $(x=z-t^{m/2}=0)$ gives a
small resolution. Thus we get that
1. (4.1)
$X_{m}$ is bimeromorphic to a proper, smooth family of projective surfaces iff
$m$ is even, but
2. (4.2)
$X_{m}$ is not bimeromorphic to a smooth, projective family of surfaces.
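Spelling out the even case (our own elaboration of the computation above): since $z^{2}-t^{m}$ factors for $m$ even, the local equation becomes

```latex
xy+z^{2}-t^{m}\;=\;xy+\bigl(z-t^{m/2}\bigr)\bigl(z+t^{m/2}\bigr)\;=\;0,
```

which is the standard $3$-fold ordinary double point in the form $xy=uv$. The Weil divisor $(x=z-t^{m/2}=0)$ is then not Cartier, and blowing it up yields the small resolution mentioned above.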
###### Example 5.
Let $E\subset{\mathbb{P}}^{2}$ be a smooth cubic and take $r$ general lines
$L_{i}\subset{\mathbb{P}}^{2}$. To get $S_{0}$, blow up all singular points of
$E+\textstyle{\sum}L_{i}$ and then contract the birational transform of
$E+\textstyle{\sum}L_{i}$. A somewhat tedious computation shows that
$K_{S_{0}}$ is ample for $r\geq 6$. It has 1 simple elliptic singularity
(coming from $E$) and $r$ quotient singularities (coming from the $L_{i}$).
Deform this example by moving the $3r$ points $E\cap\textstyle{\sum}L_{i}$
into general position $p^{1}_{t},\dots,p^{3r}_{t}\in E$ and the points
$L_{i}\cap L_{j}$ into general position on ${\mathbb{P}}^{2}$. Blow up these
points and then contract the birational transform of $E$ to get the surfaces
$S_{t}$. Each $S_{t}$ has only 1 simple elliptic singularity (coming from $E$).
We get a flat family of surfaces with central fiber $S_{0}$ and general fibers
$S_{t}$. Let $L$ denote the restriction of the line class on
${\mathbb{P}}^{2}$ to $E$.
It is easy to see that such a surface $S_{t}$ is non-projective if the
$p^{i}_{t}$ and $L$ are linearly independent in $\operatorname{Pic}(E)$. Thus
$S_{t}$ is not projective for very general $t$ and has Kodaira dimension 0.
The next result is the scheme-theoretic version of Theorem 1. Ideally it
should be proved by the same argument. However, some of the references we use,
especially [Nak04], are worked out for analytic spaces, not for general
schemes. So for now we proceed in a somewhat roundabout way.
###### Corollary 6.
Let $S$ be a noetherian, excellent scheme over a field of characteristic 0.
Let $g:X\to S$ be a flat, proper algebraic space. Fix a point $0\in S$ and
assume that $X_{0}$ is projective, of general type and with canonical
singularities. Then there is an open neighborhood $0\in S^{\circ}\subset S$
such that, for every $s\in S^{\circ}$,
1. (6.1)
the plurigenera $h^{0}(X_{s},\omega_{X_{s}}^{[r]})$ are independent of $s$ for
every $r$, and
2. (6.2)
the fiber $X_{s}$ is projective.
###### Proof.
A proper algebraic space $Y$ over a field $k$ is projective iff $Y_{K}$ is
projective over $K$ for some field extension $K\supset k$. Noetherian
induction then shows that it is enough to prove the claims for the generic
points of the completions (at the point $0\in S$) of irreducible subvarieties
$0\in T\subset S$. Since the defining equations of $\hat{T}$ and of
$X\times_{S}\hat{T}$ involve only countably many coefficients, we may assume
that the residue field is ${\mathbb{C}}$.
Consider now the local universal deformation space $\operatorname{Def}(X_{0})$
of $X_{0}$ in the complex analytic category; see [Bin87]. It is the germ of a
complex analytic space and there is a complex analytic universal family
$G:{\mathbf{X}}\to\operatorname{Def}(X_{0}).$ Since a deformation over an
Artin scheme is automatically complex analytic, we see that the formal
completion $\hat{G}:\hat{\mathbf{X}}\to\widehat{\operatorname{Def}}(X_{0})$ is
the universal formal deformation of $X_{0}$. In particular,
$X\times_{S}\hat{T}$ is the pull-back of
$\hat{G}:\hat{\mathbf{X}}\to\widehat{\operatorname{Def}}(X_{0})$ by a morphism
$\hat{T}\to\widehat{\operatorname{Def}}(X_{0})$. Thus Theorem 1 implies both
claims. ∎
## 2\. Relative MMP
See [KM98] for a general introduction to the minimal model program.
###### Definition 7 (MMP-steps and their extensions).
Let $X\to S$ be a proper morphism of complex analytic spaces with irreducible
fibers. Assume that $K_{X/S}$ is ${\mathbb{Q}}$-Cartier. By an MMP-step for
$X$ over $S$ we mean a diagram
$\begin{array}[]{lcr}X&\stackrel{{\scriptstyle\pi}}{{\dasharrow}}&X^{+}\\\
\phi\searrow&&\swarrow\phi^{+}\\\ &Z&\end{array}$
where all morphisms are bimeromorphic and proper over $S$, $-K_{X/S}$ is ample
over $Z$, $K_{X^{+}/S}$ is ample over $Z$ and $\phi^{+}$ is small (that is,
without exceptional divisors).
If $X$ is ${\mathbb{Q}}$-factorial and the relative Picard number of $X/Z$ is
1, then there are 2 possible MMP steps:
* •
Divisorial: $\phi$ contracts a single divisor and $\phi^{+}$ is the identity.
* •
Flipping: both $\phi$ and $\phi^{+}$ are small.
However, in general there is a more complicated possibility:
* •
Mixed: $\phi$ contracts (possibly several) divisors and $\phi^{+}$ is small.
For our applications we only need to know that, by [KM98, 3.52], $X^{+}$
exists iff $\oplus_{r\geq 0}\ \omega_{Z/S}^{[r]}$ (which is equal to
$\oplus_{r\geq 0}\phi_{*}\omega_{X/S}^{[r]}$) is a finitely generated sheaf of
${\mathcal{O}}_{Z}$-algebras, and then
$X^{+}=\operatorname{Proj}_{Z}\oplus_{r\geq 0}\ \omega_{Z/S}^{[r]}.$ (7.2)
We index a sequence of MMP-steps by setting $X^{0}:=X$ and
$X^{i+1}:=(X^{i})^{+}$.
Fix a point $s\in S$ and let $X_{s}$ denote the fiber over $S$. We say that a
sequence of MMP-steps (over $S$) $X^{0}\dasharrow X^{1}\dasharrow
X^{2}\dasharrow\cdots$ extends a sequence of MMP-steps (over $s$)
$X_{s}^{0}\dasharrow X_{s}^{1}\dasharrow X_{s}^{2}\dasharrow\cdots$ if, for
every $i$,
$\begin{array}[]{rcl}X_{s}^{i}&\stackrel{{\scriptstyle\pi_{s}^{i}}}{{\dasharrow}}&\quad
X_{s}^{i+1}\\\ \phi_{s}^{i}\searrow&&\swarrow(\phi_{s}^{i})^{+}\\\
&Z_{s}^{i}&\end{array}\quad\begin{array}[]{cc}\mbox{is the fiber}\\\
\mbox{over $s$
of}\end{array}\quad\begin{array}[]{rcl}X^{i}&\stackrel{{\scriptstyle\pi^{i}}}{{\dasharrow}}&\quad
X^{i+1}\\\ \phi^{i}\searrow&&\swarrow(\phi^{i})^{+}\\\ &Z^{i}&\end{array}$
###### 8. Proof of Theorem 2 for $S={\mathbb{D}}$, the disc.
Since MMP-steps preserve canonical singularities, by induction it is enough to
prove the claim for one MMP step. So we drop the upper index $i$ and identify
$K_{X/{\mathbb{D}}}$ with $K_{X}$.
Let $\phi_{0}:X_{0}\to Z_{0}$ be an extremal contraction. By [MR71] (see rather
[KM92, 11.4]), it extends to a contraction $\phi:X\to Z$,
where $Z$ is flat over ${\mathbb{D}}$ with central fiber $Z_{0}$ since
$R^{1}(\phi_{0})_{*}{\mathcal{O}}_{X_{0}}=0$. Note that $K_{X}$ is
${\mathbb{Q}}$-Cartier by (10.1), and $\phi$ is projective since $-K_{X}$ is
$\phi$-ample.
If $\phi_{0}$ is a divisorial contraction, then $K_{Z_{0}}$ is
${\mathbb{Q}}$-Cartier, and so is $K_{Z}$ by (10.1). Thus $X^{+}=Z$.
If $\phi_{0}$ is a flipping or mixed contraction, then $K_{Z}$ is not
${\mathbb{Q}}$-Cartier. By (7.2),
$X^{+}=\operatorname{Proj}_{Z}\oplus_{r\geq 0}\ \omega_{Z}^{[r]},$ (8.1)
provided $\oplus_{r\geq 0}\ \omega_{Z}^{[r]}$ is a finitely generated sheaf of
${\mathcal{O}}_{Z}$-algebras. (We have identified $\omega_{Z}$ with
$\omega_{Z/{\mathbb{D}}}$.)
Functoriality works better if we twist by the line bundle
${\mathcal{O}}_{Z}(Z_{0})$ and write it as
$X^{+}=\operatorname{Proj}_{Z}\oplus_{r\geq 0}\ \omega_{Z}^{[r]}(rZ_{0}).$
Let $\tau:Y\to X$ be a projective resolution of $X$ (that is, $\tau$ is
projective) such that $Y_{0}$, the bimeromorphic transform of $X_{0}$, is also
smooth. Set $g:=\phi\circ\tau$.
The hardest part of the proof is Nakayama’s theorem (9) which gives a
surjection
$\oplus_{r\geq 0}g_{*}\omega_{Y}^{r}(rY_{0})\twoheadrightarrow\oplus_{r\geq 0}(g_{0})_{*}\omega_{Y_{0}}^{r}.$ (8.2)
Since $X_{0}$ has canonical singularities
$\tau_{*}\omega_{Y_{0}}^{r}=\omega_{X_{0}}^{[r]}$, and hence
$g_{*}\omega_{Y_{0}}^{r}=\omega_{Z_{0}}^{[r]}$. We also have a natural
inclusion
$g_{*}\omega_{Y}^{r}(rY_{0})\lhook\joinrel\to\omega_{Z}^{[r]}(rZ_{0})$. Thus
pushing forward (8.2) we get a surjection
$\oplus_{r\geq 0}g_{*}\omega_{Y}^{r}(rY_{0})\to\oplus_{r\geq 0}\ \omega_{Z}^{[r]}(rZ_{0})\twoheadrightarrow\oplus_{r\geq 0}\ \omega_{Z_{0}}^{[r]}.$ (8.3)
Note that $\oplus_{r\geq 0}\ \omega_{Z_{0}}^{[r]}$ is a finitely generated
sheaf of ${\mathcal{O}}_{Z_{0}}$-algebras, defining the MMP-step of $X_{0}\to
Z_{0}$.
Now (11) says that $\oplus_{r\geq 0}\ \omega_{Z}^{[r]}(rZ_{0})$ is also a
finitely generated sheaf of ${\mathcal{O}}_{Z}$-algebras, at least in some
neighborhood of the compact $Z_{0}$. ∎
Next we discuss various results used in the proof.
###### Theorem 9.
[Nak04, VI.3.8] Let $\pi:Y\to S$ be a projective, bimeromorphic morphism of
analytic spaces, $Y$ smooth and $S$ normal. Let $D\subset Y$ be a smooth, non-
exceptional divisor. Then the restriction map
$\pi_{*}\omega_{Y}^{m}(mD)\to\pi_{*}\omega_{D}^{m}\quad\mbox{is surjective for
$m\geq 1$.}\quad\qed$
This is a special case of [Nak04, VI.3.8] applied with $\Delta=0$ and
$L=K_{Y}+D$.
Warning. The assumptions of [Nak04, VI.3.8] are a little hard to find. They
are outlined 11 pages earlier in [Nak04, VI.2.2]. It talks about varieties,
which usually suggest algebraic varieties, but [Nak04, p.231, line 13]
explicitly states that the proofs work with analytic spaces; see also [Nak04,
p.14]. (The statements of [Nak04] allow for a boundary $\Delta$. However,
$K_{Y}+D+\Delta$ should be ${\mathbb{Q}}$-linearly equivalent to a
${\mathbb{Z}}$-divisor and $\lfloor{\Delta}\rfloor=0$ is assumed on [Nak04,
p.231]. There seem to be few cases when both of these can be satisfied.)
###### Lemma 10.
[Nak04, VI.5.2] Let $g:X\to S$ be a flat morphism of complex analytic spaces.
Assume that $X_{0}$ has a canonical singularity at a point $x\in X_{0}$. Then
there is an open neighborhood $x\in X^{*}\subset X$ such that
1. (10.1)
$K_{X^{*}/S}$ is ${\mathbb{Q}}$-Cartier, and
2. (10.2)
all fibers of $g|_{X^{*}}:X^{*}\to S$ have canonical singularities.
###### Proof.
(1) is proved in [Kol83, 3.2.2]; see also [Kol95, 12.7] and [Kol21a, 2.8]. The
harder part is (2), proved in [Nak04, VI.5.2]. ∎
Remark 10.3. If $S$ is smooth then $X^{*}$ has canonical singularities. By
induction, it is enough to prove this when $S={\mathbb{D}}$. Then the proof of
[Nak04, VI.5.2] shows that even the pair $(X^{*},X_{0}\cap X^{*})$ has
canonical singularities.
###### Lemma 11.
Let $\pi:X\to S$ be a proper morphism of normal, complex spaces. Let $L$ be a
line bundle on $X$ and $W\subset S$ a Zariski closed subset. Assume that
${\mathcal{O}}_{W}\otimes_{S}\bigl{(}\oplus_{r\geq 0}\pi_{*}L^{r}\bigr{)}$ is
a finitely generated sheaf of ${\mathcal{O}}_{W}$-algebras.
Then every compact subset $W^{\prime}\subset W$ has an open neighborhood
$W^{\prime}\subset U\subset S$ such that
${\mathcal{O}}_{U}\otimes_{S}\bigl{(}\oplus_{r\geq 0}\pi_{*}L^{r}\bigr{)}$ is
a finitely generated sheaf of ${\mathcal{O}}_{U}$-algebras.
###### Proof.
The question is local on $S$, so we may as well assume that $W$ is a single
point. We may also assume that
${\mathcal{O}}_{W}\otimes_{S}\bigl{(}\oplus_{r\geq 0}\pi_{*}L^{r}\bigr{)}$ is
generated by $\pi_{*}L$. After suitable blow-ups we are reduced to the case
when the base locus of $L$ is a Cartier divisor $D$. By passing to a smaller
neighborhood, we may assume that every irreducible component of $D$ intersects
$\pi^{-1}(W)$. By the Nakayama lemma, the base locus of $L^{r}$ is a subscheme
of $rD$ that agrees with it along $rD\cap\pi^{-1}(W)$. Thus $rD$ is the
base locus of $L^{r}$ for every $r$. We may thus replace $L$ by $L(-D)$ and
assume that $L$ is globally generated.
Thus $L$ defines a morphism $X\to\operatorname{Proj}_{S}\oplus_{r\geq
0}\pi_{*}L^{r}$; let $\pi^{\prime}:X^{\prime}\to S$ be its Stein
factorization. Then $L$ is the pull-back of a line bundle $L^{\prime}$ that is
ample on $X^{\prime}\to S$ and $\oplus_{r\geq 0}\pi_{*}L^{r}=\oplus_{r\geq
0}\pi^{\prime}_{*}{L^{\prime}}^{r}$ is finitely generated. ∎
###### 12. Proof of Theorem 2 for general $S$.
As in Paragraph 8, it is enough to prove the claim for one MMP step, so let
$\phi_{0}:X_{0}\to Z_{0}$ be an extremal contraction and $\phi:X\to Z$ its
extension. As before, $Z$ is flat over $S$ with central fiber $Z_{0}$.
We claim that, for every $r$,
1. (12.1)
$\omega_{Z/S}^{[r]}$ is flat over $S$, and
2. (12.2)
$\omega_{Z/S}^{[r]}|_{Z_{0}}\cong\omega_{Z_{0}}^{[r]}$.
In the language of [Kol08] or [Kol21a, Chap.9], this says that
$\omega_{Z/S}^{[r]}$ is its own relative hull. There is an issue with precise
references here, since [Kol21a, Chap.9] is written in the algebraic setting.
However, [Kol21a, 9.72] considers hulls over the spectra of complete local
rings. Thus we get that there is a unique largest subscheme
$\hat{S}^{u}\subset\hat{S}$ (the formal completion of $S$ at $0$) such that
(1–2) hold after base change to $\hat{S}^{u}$.
By Paragraph 8 we know that (1–2) hold after base change to any disc
${\mathbb{D}}\to S$, which implies that $\hat{S}^{u}=\hat{S}$. That is, (1–2)
hold for $\hat{S}$. Since both properties are invariant under formal
completion, we are done.
Now we know that
$X^{+}:=\operatorname{Proj}_{Z}\oplus_{r\geq 0}\ \omega_{Z/S}^{[r]},$ (12.3)
is flat over $S$ and its central fiber is $X^{+}_{0}$. Thus it gives the
required extension of the flip of $X_{0}\to Z_{0}$. ∎
## 3\. Proof of Theorem 1
We give a proof using only the $S={\mathbb{D}}$ case of Theorem 2.
###### 13.
Fix $r\geq 2$ and assume first that $S={\mathbb{D}}$. Since $X_{0}$ is of
general type, a suitable MMP for $X_{0}$ ends with a minimal model $X_{0}^{\rm
m}$, and, by Theorem 2, $X_{0}\dasharrow X_{0}^{\rm m}$ extends to a fiberwise
bimeromorphic map $X\dasharrow X^{\rm m}$. We have $g^{\rm m}:X^{\rm
m}\to{\mathbb{D}}$. (From now on, we replace ${\mathbb{D}}$ with a smaller
disc whenever necessary.) Since $K_{X_{0}^{\rm m}}$ is nef and big, the higher
cohomology groups of $\omega_{X_{0}^{\rm m}}^{[r]}$ vanish for $r\geq 2$. Thus
$s\mapsto H^{0}(X^{\rm m}_{s},\omega_{X^{\rm m}_{s}}^{[r]})$ is locally
constant at the origin.
By (10.2) $X_{s}$ and $X^{\rm m}_{s}$ both have canonical singularities, so
they have the same plurigenera. Therefore $s\mapsto
H^{0}(X_{s},\omega_{X_{s}}^{[r]})$ is also locally constant at the origin. By
Serre duality, the deformation invariance of $H^{0}(X_{s},\omega_{X_{s}})$ is
equivalent to the deformation invariance of
$H^{n}(X_{s},{\mathcal{O}}_{X_{s}})$. In fact, all the
$H^{i}(X_{s},{\mathcal{O}}_{X_{s}})$ are deformation invariant. For this the
key idea is in [DJ74], which treats deformations of varieties with normal
crossing singularities. The method works for varieties with canonical (even
log canonical) singularities; this is worked out in [Kol21a, Sec.2.5].
For arbitrary $S$, note that $s\mapsto H^{0}(X_{s},\omega_{X_{s}}^{[r]})$ is a
constructible function on $S$, thus locally constant at $0\in S$ iff it is
locally constant on every disc ${\mathbb{D}}\to S$. Once $s\mapsto
H^{0}(X_{s},\omega_{X_{s}}^{[r]})$ is locally constant at $0\in S$, Grauert’s
theorem guarantees that $g_{*}\omega_{X/S}^{[r]}$ is locally free at $0\in S$
and commutes with base changes.
In principle it could happen that for each $r$ we need a smaller and smaller
neighborhood, but the same neighborhood works for all $r\geq 1$ by Lemma 11.
Thus the plurigenera are deformation invariant, all fibers are of general
type, and $g$ is fiberwise bimeromorphic to the relative canonical model
$X^{\rm c}:=\operatorname{Proj}_{S}\oplus_{r\geq 0}g^{\rm m}_{*}\omega_{X^{\rm
m}/S}^{[r]},$
which is projective over $S$. The projectivity of all fibers now follows from
the more precise Theorem 14. ∎
The following is a special case of [Kol21b, Thm.2].
###### Theorem 14.
Let $g:X\to S$ be a flat, proper morphism of complex analytic spaces whose
fibers have rational singularities only. Assume that $g$ is bimeromorphic to a
projective morphism $g^{\rm p}:X^{\rm p}\to S$, and $X_{0}$ is projective for
some $0\in S$.
Then there is a Zariski open neighborhood $0\in U\subset S$ and a locally
closed, Zariski stratification $S=\cup_{i}S_{i}$ such that each
$g|_{X_{i}}:X_{i}:=g^{-1}(S_{i})\to S_{i}\quad\mbox{is
projective.}\quad\hfill\qed$
## 4\. Open problems
For deformations of varieties of general type, the following should be true.
###### Conjecture 15.
Let $X_{0}$ be a projective variety of general type with canonical
singularities. Then its universal deformation space
$\operatorname{Def}(X_{0})$ has a representative ${\mathbf{X}}\to S$ where $S$
is a scheme of finite type and ${\mathbf{X}}$ is an algebraic space.
For varieties of non-general type, the following is likely true [RT20, 1.10].
###### Conjecture 16.
Let $g:X\to S$ be a flat, proper morphism of complex analytic spaces. Assume
that $X_{0}$ is projective and with canonical singularities. Then the
plurigenera $h^{0}(X_{s},\omega_{X_{s}}^{[r]})$ are independent of $s\in S$
for every $r$, in some neighborhood of $0\in S$.
Comments. One can try to follow the proof of Theorem 1. If $X_{0}$ is not of
general type, we run into several difficulties in relative dimensions $\geq
4$. The MMP is not known to terminate, and even if we get a minimal model, abundance
is not known. If we have a good minimal model, then we run into the following.
###### Conjecture 17.
Let $X$ be a complex space and $g:X\to S$ a flat, proper morphism. Assume that
$X_{0}$ is projective, has canonical singularities and $\omega_{X_{0}}^{[r]}$
is globally generated for some $r>0$. Then the plurigenera are locally
constant at $0\in S$.
Comments. More generally, the same may hold if $X_{0}$ is Moishezon (that is,
bimeromorphic to a projective variety), Kähler or in Fujiki’s class
$\mathcal{C}$ (that is, bimeromorphic to a compact Kähler manifold; see
[Uen83] for an introduction).
A positive answer is known in many cases. [KM92, 12.5.5] proves this if
$X_{0}$ is projective and has terminal singularities. However, the proof works
for the Moishezon and class $\mathcal{C}$ cases as well.
The projective case with canonical singularities is discussed in [Nak04,
VI.3.15–16]; I believe that the projectivity assumption is very much built
into the proof given there; see [Nak04, VI.3.11].
###### Acknowledgments.
I thank D. Abramovich, F. Campana, J.-P. Demailly, O. Fujino, A. Landesman, S.
Mori, T. Murayama, V. Tosatti, D. Villalobos-Paz, C. Voisin and C. Xu for
helpful comments and corrections. Partial financial support was provided by
the NSF under grant number DMS-1901855.
## References
* [AK19] Florin Ambro and János Kollár, _Minimal models of semi-log-canonical pairs_ , Moduli of K-stable Varieties, Springer INdAM Ser., vol. 31, Springer, Cham, 2019, pp. 1–13.
* [Ati58] M. F. Atiyah, _On analytic surfaces with double points_ , Proc. Roy. Soc. London. Ser. A 247 (1958), 237–244. MR MR0095974 (20 #2472)
* [BCHM10] Caucher Birkar, Paolo Cascini, Christopher D. Hacon, and James McKernan, _Existence of minimal models for varieties of log general type_ , J. Amer. Math. Soc. 23 (2010), no. 2, 405–468.
* [Bin87] Jürgen Bingener, _Lokale Modulräume in der analytischen Geometrie I–II_ , Aspects of math., vol. 302, Vieweg, Braunschweig, 1987.
* [DJ74] Philippe Dubois and Pierre Jarraud, _Une propriété de commutation au changement de base des images directes supérieures du faisceau structural_ , C. R. Acad. Sci. Paris Sér. A 279 (1974), 745–747. MR 0376678 (51 #12853)
* [Iit69] Shigeru Iitaka, _Deformations of compact complex surfaces I_ , Global Analysis (Papers in Honor of K. Kodaira), Univ. Tokyo Press, Tokyo, 1969, pp. 267–272. MR MR0260746 (41 #5369)
* [KLS21] Matt Kerr, Radu Laza, and Morihiko Saito, _Deformation of rational singularities and Hodge structure_ , 2021.
* [KM92] János Kollár and Shigefumi Mori, _Classification of three-dimensional flips_ , J. Amer. Math. Soc. 5 (1992), no. 3, 533–703. MR 1149195 (93i:14015)
* [KM98] by same author, _Birational geometry of algebraic varieties_ , Cambridge Tracts in Mathematics, vol. 134, Cambridge University Press, Cambridge, 1998, With the collaboration of C. H. Clemens and A. Corti, Translated from the 1998 Japanese original.
* [Kol83] János Kollár, _Toward moduli of singular varieties, Ph.D. thesis_ , 1983.
* [Kol95] by same author, _Flatness criteria_ , J. Algebra 175 (1995), no. 2, 715–727. MR 1339664 (96j:14010)
* [Kol08] by same author, _Hulls and husks_ , arXiv:0805.0576, 2008.
* [Kol21a] by same author, _Moduli of varieties of general type_ , (book in preparation, https://web.math.princeton.edu/~kollar/FromMyHomePage/modbook.pdf), 2021.
* [Kol21b] by same author, _Seshadri’s criterion and openness of projectivity_ , math.AG: 2105.06242, 2021.
* [KS58] K. Kodaira and D. C. Spencer, _On deformations of complex analytic structures. I, II_ , Ann. of Math. (2) 67 (1958), 328–466. MR MR0112154 (22 #3009)
* [Moi66] Boris Moishezon, _On $n$-dimensional compact varieties with $n$ algebraically independent meromorphic functions, I, II and III (in Russian)_, Izv. Akad. Nauk SSSR Ser. Mat. 30 (1966), 133–174, 345–386, 621–656.
* [MR71] A. Markoe and H. Rossi, _Families of strongly pseudoconvex manifolds_ , Symposium on Several Complex Variables (Park City, Utah, 1970), Springer, Berlin, 1971, pp. 182–207. Lecture Notes in Math., Vol. 184. MR 0304703 (46 #3835)
* [Nak04] Noboru Nakayama, _Zariski-decomposition and abundance_ , MSJ Memoirs, vol. 14, Mathematical Society of Japan, Tokyo, 2004.
* [RT20] Sheng Rao and I-Hsun Tsai, _Invariance of plurigenera and Chow-type lemma_ , math.AG: 2011.03306, 2020.
* [Siu98] Yum-Tong Siu, _Invariance of plurigenera_ , Invent. Math. 134 (1998), no. 3, 661–673. MR 1660941 (99i:32035)
* [Uen83] Kenji Ueno, _Introduction to the theory of compact complex spaces in the class $\mathcal{C}$_, Algebraic Varieties and Analytic Varieties, Advanced Studies in Pure Mathematics, vol. 1, 1983, pp. 219–230.
Princeton University, Princeton NJ 08544-1000,
# The Exact Completion for Regular Categories enriched in Posets
Vasileios Aravantinos-Sotiropoulos
###### Abstract.
We construct an exact completion for regular categories enriched in the
cartesian closed category $\mathsf{Pos}$ of partially ordered sets and
monotone functions by employing a suitable calculus of relations. We then
characterize the embedding of any regular category into its completion and use
this to obtain examples of concrete categories which arise as such
completions. In particular, we prove that the exact completion in this
enriched sense of both the categories of Stone and Priestley spaces is the
category of compact ordered spaces of L. Nachbin. Finally, we consider the
relationship between the enriched exact completion and categories of internal
posets in ordinary categories.
## 1\. Introduction
The notions of regularity and (Barr-)exactness have been fundamental in
Category Theory for quite some time. Exactness was introduced by Barr [4] in
1970 and motivated by a result of Tierney which essentially exhibited the
notion as the non-additive part of the definition of abelian category. From
another perspective, it is the basic property which is common to both abelian
categories and elementary toposes. Regularity is a weaker property which can
be viewed as the requirement that the category affords a good calculus of
internal relations. Alternatively, from the perspective of Categorical Logic,
regular categories are those which correspond to the fragment of first-order
Logic on the operations $\land,\top,\exists$.
In this paper we look at these notions in an enriched setting. More precisely,
we work with versions of them that apply to categories enriched over the
cartesian closed category $\mathsf{Pos}$ of partially ordered sets and
monotone functions. Our main motivation comes from the paper [14] by A. Kurz
and J. Velebil, where $\mathsf{Pos}$-enriched regularity and exactness were
first explicitly considered. The authors employ these notions to obtain
categorical characterizations of (quasi-)varieties of ordered algebras in the
sense of Bloom & Wright [5], very much along the lines of the corresponding
characterizations for ordinary (quasi-)varieties of Universal Algebra. Broadly
speaking, varieties turn out to be the exact categories possessing a “nice”
generator, while quasivarieties can be characterized in a similar fashion by
replacing exactness with the weaker regularity.
Recall here that _ordered algebras_ in the sense of [5] are algebras over some
signature $\Sigma$ which consist of a poset $X$ together with a monotone map
$[\sigma]\colon X^{n}\to X$, for each specified $n$-ary operation $\sigma$. A
_homomorphism_ of such algebras is a monotone map which preserves the
operations. Then a _variety_ in this context is defined as a class of ordered
algebras satisfying a set of formal inequalities $s\leq t$, where $s,t$ are
$\Sigma$-terms. A _quasivariety_ is a class defined by more general formal
implications of the form $\bigwedge\limits_{i\in I}(s_{i}\leq t_{i})\implies
s\leq t$, where again the $s_{i},t_{i},s,t$ are $\Sigma$-terms.
The categories $\mathsf{OrdSGrp}$ and $\mathsf{OrdMon}$ of ordered semigroups
and ordered monoids respectively are both examples of varieties which play an
important role in the theory of automata. More generally, any quasivariety of
ordinary algebras gives rise to a quasivariety of ordered algebras defined by
the same axioms. A different example of quasivariety is given by the
_cancellative_ ordered monoids $\mathsf{OrdMon_{can}}$, i.e. the ordered
monoids $(M,\cdot,\leq)$ satisfying the implications $x\cdot z\leq y\cdot
z\implies x\leq y$ and $z\cdot x\leq z\cdot y\implies x\leq y$ for all
$x,y,z\in M$. A further source of examples is furnished by ordinary varieties
whose axioms contain those of semi-lattices, since they can be equipped with
the equationally definable order $x\leq y\iff x\vee y=y$. Yet more examples of
quasivarieties are given by the _Kleene algebras_ of Logic.
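The equationally definable order mentioned above is easy to test concretely. The following minimal Python check (the carrier, finite sets under union, is our own illustrative choice) verifies that $x\leq y\iff x\vee y=y$ is indeed a partial order on a join-semilattice.

```python
from itertools import product

# Join-semilattice: finite subsets of {1, 2, 3} under union (join = |).
elems = [frozenset(s) for s in [(), (1,), (2,), (1, 2), (1, 2, 3)]]

def leq(x, y):
    return x | y == y            # x <= y  iff  x v y = y

assert all(leq(x, x) for x in elems)                       # reflexive
assert all(x == y for x, y in product(elems, repeat=2)
           if leq(x, y) and leq(y, x))                     # antisymmetric
assert all(leq(x, z) for x, y, z in product(elems, repeat=3)
           if leq(x, y) and leq(y, z))                     # transitive
```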
While (quasi-)varieties of ordered algebras are a central source of examples
of $\mathsf{Pos}$-enriched categories, there are other interesting examples
that will appear in the present paper. For one, we have the category
$S$-$\mathsf{Pos}$ [8] of monotone actions of an ordered monoid $S$ on a poset
and monotone equivariant maps between them. Furthermore, there are categories
of ordered topological spaces, such as Priestley spaces or the _compact
ordered spaces_ of Nachbin. These are all examples of categories which are
either themselves monadic over $\mathsf{Pos}$ or reflective in a monadic
category.
The thread of this paper can be seen as one continuation of the ideas
developed in [14] and is in part suggested by the authors at the end of the
latter paper. At the same time it is part of a growing recent interest in the
categorical treatment of ordered algebras, as for example in the recent
preprints [1] and [2]. Our main contribution here is a construction of the
_exact completion of a regular category_ for $\mathsf{Pos}$-categories which
employs a suitably enriched version of the _calculus of relations_. We then
identify varieties of ordered algebras which occur as such completions of
corresponding (quasi-)varieties of ordered or unordered algebras. Furthermore,
we prove that the exact completion of the category of _Priestley spaces_ is
precisely the category of _Nachbin spaces_. This provides an ordered version
of the folklore result which identifies the category of compact Hausdorff
spaces as the exact completion (in the ordinary sense) of the regular category
of Stone spaces. In fact, it will follow by the same token that the exact
completion of Stone spaces in the enriched sense is also the category of
Nachbin spaces.
### Organization of the paper
In section 2 we collect some preliminaries involving regularity for categories
enriched over $\mathsf{Pos}$, mostly for the convenience of the reader. There
is only one original contribution here, Proposition 2.16, which provides a simplification of
the definition of regularity that was presented in [14]. More precisely, we
prove that one of the defining conditions is a consequence of the other three
and can thus be omitted.
Section 3 discusses the main aspects of the calculus of relations which is
available in any regular category. The main result in this section
identifies the morphisms of a regular category as the left adjoints in a
suitable bicategory of relations.
Section 4 represents the crux of the paper and is where we construct the exact
completion of a regular category. After the initial definition, we prove in a
sequence of steps that our proposed construction indeed satisfies the required
properties, culminating in the section’s main theorem. The arguments here make extensive use of the
calculus of relations relying on the previous section.
In section 5 we characterize the embedding of a regular category into its
completion. This is subsequently used to obtain examples of categories which
arise as exact completions of one or more of their regular subcategories. In
particular, we show that the category of Nachbin spaces is exact and can be
obtained as the completion of either the category of Priestley or Stone
spaces.
Finally, in section 6 we examine the relationship between the process of exact
completion and that of taking internal posets in an ordinary category. We
prove that, in a suitable sense, these two commute.
## 2\. Preliminaries on Regularity
In this section we collect some preliminaries concerning the notion of
regularity for categories enriched over the cartesian closed category
$\mathsf{Pos}$ of posets and order-preserving functions, as defined by Kurz
and Velebil in [14]. After recalling some basic facts about finite limits, we
reexamine the definition of regularity and observe that one of the conditions
therein is in fact redundant.
Throughout the paper by ‘a category $\mathcal{C}$’ we shall always mean a
category that is enriched over the cartesian closed category $\mathsf{Pos}$ of
partially ordered sets and monotone functions. Explicitly, this means that
$\mathcal{C}$ is a category such that each $\mathop{\rm
Hom}_{\mathcal{C}}(X,Y)$ is equipped with a partial order relation and such
that composition of morphisms is order-preserving in each variable. If we wish
to refer to categories in the usual non-enriched sense we will always use the
adjective ‘ordinary’.
A functor $F\colon\mathcal{C}\to\mathcal{D}$ will always mean a
$\mathsf{Pos}$-functor, i.e. an ordinary functor that furthermore preserves
the order of morphisms.
Similarly, whenever we speak of limits or colimits in a category
$\mathcal{C}$, these will always mean _weighted_ (co-)limits (also called
_indexed_ (co-)limits in [13]). We know from [13] that completeness of a
category $\mathcal{C}$, i.e. the existence of all small weighted limits, is
equivalent to the existence in $\mathcal{C}$ of all small conical limits and
all powers. The former of these can in turn be constructed via products and
equalizers, so that $\mathcal{C}$ is complete if and only if it possesses
products, equalizers and powers. Recall here that the power of an object
$X\in\mathcal{C}$ to a poset $P$ is an object $X^{P}\in\mathcal{C}$ for which
there exists a natural isomorphism
$\mathcal{C}(C,X^{P})\cong\mathrm{Hom}_{\mathsf{Pos}}(P,\mathcal{C}(C,X))$
When the base of enrichment is locally finitely presentable as a monoidal
category[11], as in our case with $\mathsf{Pos}$, there is also a useful
notion of _finite_ weighted limit. In particular, we have that $\mathcal{C}$
is finitely complete if and only if it has finite products, equalizers and
finite powers. By _finite power_ here we mean a power object $X^{P}$ where $P$
is a finitely presentable object in $\mathsf{Pos}$, i.e. a finite poset.
We begin by recalling some basic notions, most of which can also be found in
[14]. First, the notion of monomorphism that is more appropriate in the
ordered context and which will form part of the factorization system leading
to the notion of regularity for $\mathsf{Pos}$-categories.
* 2.1 Definition.
A morphism $m\colon X\to Y$ in a category $\mathcal{C}$ is called an
_$\mathsf{ff}$ -morphism_ (or _representably fully faithful_ , or an _order-
monomorphism_) if for every $Z\in\mathcal{C}$ the monotone map
$\mathcal{C}(Z,m)\colon\mathcal{C}(Z,X)\to\mathcal{C}(Z,Y)$ in $\mathsf{Pos}$
also reflects the order.
We shall use the terms “$\mathsf{ff}$-morphism” and “order-monomorphism”
interchangeably throughout the paper. Furthermore, we will use the term
‘order-epimorphism’ for the dual notion.
Explicitly, $m\colon X\to Y$ is an $\mathsf{ff}$-morphism when for every
$f,g\colon Z\to X$ the implication $mf\leq mg\implies f\leq g$ holds. In
$\mathsf{Pos}$, $m$ is an $\mathsf{ff}$-morphism precisely when it is an
order-embedding, i.e. a map which preserves and reflects the order. Any such
map is of course a monomorphism, but the converse is not true.
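The gap between monomorphisms and $\mathsf{ff}$-morphisms can be made concrete in $\mathsf{Pos}$. In the sketch below (the encoding of a poset as a carrier list plus a `leq` predicate is our own convention), the identity on points, from a two-element antichain to the two-element chain, is a monotone monomorphism that fails to reflect the order.

```python
# In Pos an ff-morphism is an order-embedding.  The identity map from a
# two-element antichain to the two-element chain is a monotone
# monomorphism, but not an order-embedding.
X = ["a", "b"]
leq_X = lambda x, y: x == y                            # antichain: a, b incomparable
leq_Y = lambda x, y: x == y or (x, y) == ("a", "b")    # chain: a <= b
m = lambda x: x                                        # identity on points

preserves = all(leq_Y(m(x), m(y)) for x in X for y in X if leq_X(x, y))
reflects = all(leq_X(x, y) for x in X for y in X if leq_Y(m(x), m(y)))

assert preserves and not reflects   # monotone (and mono), yet not ff
```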
The shift from monomorphisms to $\mathsf{ff}$-morphisms is also essentially
the difference between the notion of conical (weighted) limit in a category
$\mathcal{C}$ and the ordinary limit of the same type in the underlying
ordinary category $\mathcal{C}_{0}$. For example, consider two objects
$X,Y\in\mathcal{C}$. Then a diagram $X\xleftarrow{\;\pi_{X}\;}X\times Y\xrightarrow{\;\pi_{Y}\;}Y$
is a product diagram if the usual unique factorization property is satisfied,
along with the following additional condition: given any two morphisms
$u,v\colon Z\to X\times Y$, the pair of inequalities $\pi_{X}u\leq\pi_{X}v$
and $\pi_{Y}u\leq\pi_{Y}v$ together imply that $u\leq v$. In other words, the
pair of projections $\pi_{X},\pi_{Y}$ must be jointly _order_ -monomorphic,
rather than just jointly monomorphic. This stems from the fact that the
universal property is a natural isomorphism $\mathop{\rm Hom}(Z,X\times
Y)\cong\mathop{\rm Hom}(Z,X)\times\mathop{\rm Hom}(Z,Y)$ in $\mathsf{Pos}$,
rather than in $\mathsf{Set}$. A similar observation applies to colimits in
$\mathcal{C}$.
Let us record below a few basic properties of $\mathsf{ff}$-morphisms familiar
for monomorphisms in an ordinary category.
* 2.2 Lemma.
Consider morphisms $f\colon X\to Y$ and $g\colon Y\to Z$ in a category
$\mathcal{C}$. Then:
1. (1)
If $f,g$ are $\mathsf{ff}$-morphisms, then so is $gf$.
2. (2)
If $gf$ is an $\mathsf{ff}$-morphism, then so is $f$.
3. (3)
$\mathsf{ff}$-morphisms are stable under pullback.
###### Proof.
Perhaps only item (3) needs some details, so consider the following pullback
square in $\mathcal{C}$ where $f$ is an $\mathsf{ff}$-morphism and assume that
$u,v\colon A\to P$ are such that $qu\leq qv$.
$\begin{array}{ccc}P&\xrightarrow{\;p\;}&X\\ {\scriptstyle q}\big\downarrow&&\big\downarrow{\scriptstyle f}\\ Y&\xrightarrow{\;g\;}&Z\end{array}$
We then have $gqu\leq gqv\implies fpu\leq fpv\implies pu\leq pv$, since $f$ is
an $\mathsf{ff}$-morphism. Then, because we have both $pu\leq pv$ and $qu\leq
qv$, we conclude by the limit property of the pullback that $u\leq v$. ∎
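Item (3) can be watched in action in $\mathsf{Pos}$, where the pullback of $f\colon X\to Z$ along $g\colon Y\to Z$ may be computed as $\{(x,y)\mid f(x)=g(y)\}$ with the componentwise order. The sketch below (all carriers and maps are our own illustrative choices) checks that the projection obtained by pulling back an order-embedding is again an order-embedding.

```python
# Pullback stability of ff-morphisms, checked in Pos.
X = [0, 2]                           # sub-chain of Z = {0, 1, 2}
f = lambda x: x                      # the inclusion, an order-embedding
Yc = ["p", "q"]
leq_Y = lambda a, b: a == b or (a, b) == ("p", "q")   # chain p <= q
g = {"p": 0, "q": 2}                 # monotone map Y -> Z

# Pullback P = {(x, y) | f(x) = g(y)}, ordered componentwise.
P = [(x, y) for x in X for y in Yc if f(x) == g[y]]
leq_P = lambda s, t: s[0] <= t[0] and leq_Y(s[1], t[1])
q_proj = lambda s: s[1]              # the pulled-back morphism q: P -> Y

# q preserves AND reflects the order, i.e. it is again an ff-morphism.
assert all(leq_P(s, t) == leq_Y(q_proj(s), q_proj(t)) for s in P for t in P)
```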
We recall next two particular examples of weighted limits which are not
conical and which play an important role in the context of
$\mathsf{Pos}$-categories. See also [14].
The _comma object_ of an ordered pair of morphisms $(f,g)$ with common
codomain is a square
$\begin{array}{ccc}C&\xrightarrow{\;c_{1}\;}&Y\\ {\scriptstyle c_{0}}\big\downarrow&{\scriptstyle\leq}&\big\downarrow{\scriptstyle g}\\ X&\xrightarrow{\;f\;}&Z\end{array}$
such that $fc_{0}\leq gc_{1}$ and which is universal with this property, the
latter meaning precisely the following two properties:
1. (1)
Given $u_{0}\colon W\to X$ and $u_{1}\colon W\to Y$ in $\mathcal{C}$ such that
$fu_{0}\leq gu_{1}$, there exists a $u\colon W\to C\in\mathcal{C}$ such that
$c_{0}u=u_{0}$ and $c_{1}u=u_{1}$.
2. (2)
The pair $(c_{0},c_{1})$ is jointly order-monomorphic.
Note in particular that the factorization given in (1) will be unique by (2).
We will usually denote the comma object $C$ by $f/g$. In $\mathsf{Pos}$, the
comma object is given by $f/g=\\{(x,y)\in X\times Y|f(x)\leq g(y)\\}$ with the
order induced from the product.
The _inserter_ of an ordered pair $(f,g)$ of parallel morphisms
$f,g\colon X\rightrightarrows Y$ is a morphism $e\colon E\to
X\in\mathcal{C}$ such that $fe\leq ge$ and universal in the following sense:
1. (1)
If $h\colon Z\to X\in\mathcal{C}$ is such that $fh\leq gh$, then there exists
a $u\colon Z\to E$ such that $eu=h$.
2. (2)
$e$ is an $\mathsf{ff}$-morphism.
Again, note that the factorization posited in (1) is unique by property (2).
In $\mathsf{Pos}$, the inserter is precisely $E=\\{x\in X|f(x)\leq g(x)\\}$
with the order induced from $X$.
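In $\mathsf{Pos}$ both constructions can be computed pointwise from the two formulas just given. The Python sketch below (the three-element chain and the maps $f,g$ are our own illustrative choices) does exactly that.

```python
# The comma object f/g and the inserter of (f, g), computed in Pos
# from the formulas in the text.
X = [0, 1, 2]                              # the chain 0 <= 1 <= 2
f = lambda x: x
g = lambda x: min(x + 1, 2)                # both maps are monotone

comma = [(x, y) for x in X for y in X if f(x) <= g(y)]   # f/g, sub-poset of X x X
inserter = [x for x in X if f(x) <= g(x)]                # E, sub-poset of X

assert (0, 0) in comma and (2, 0) not in comma   # f(2) = 2 > g(0) = 1
assert inserter == [0, 1, 2]                     # every x has x <= min(x+1, 2)
```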
It will be convenient for us to have an alternative way of constructing all
finite weighted limits using inserters along with some conical limits. This is
then the content of the following proposition, which should be well-known and
in any case follows from more general facts about 2-categorical limits. We
nevertheless include a proof for the sake of completeness.
* 2.3 Proposition.
A category $\mathcal{C}$ is finitely complete if and only if it has finite
products and inserters.
###### Proof.
Suppose that $\mathcal{C}$ has finite products and inserters. To have all
finite conical limits it suffices to construct equalizers. So consider any
parallel pair of morphisms
$f,g\colon X\rightrightarrows Y$
in $\mathcal{C}$. Let $m\colon M\to X$ be the inserter of the pair $(f,g)$ and
then let $n\colon E\to M$ be the inserter of $(gm,fm)$. We claim that now
$e\coloneqq mn\colon E\to X$ is the desired equalizer. Indeed, note first that
$gmn\leq fmn$ and also $fm\leq gm\implies fmn\leq gmn$, so that $fmn=gmn$.
Then suppose that $h\colon Z\to X$ is such that $fh=gh$. Since in particular
$fh\leq gh$, there exists a unique $u\colon Z\to M$ such that $mu=h$. Now $u$
is such that $gmu=gh\leq fh=fmu$, so we have a unique $v\colon Z\to E$ such
that $nv=u$. So $ev=mnv=mu=h$. Finally, it is clear that $e$ is order-
monomorphic, since both $m,n$ are so.
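This first half of the proof is easy to test concretely in $\mathsf{Pos}$: composing the two inserters really does cut out $\{x\mid f(x)=g(x)\}$. A minimal sketch, with a chain and maps of our own choosing:

```python
# The equalizer of (f, g) built from two inserters, as in the proof:
# first the inserter M of (f, g), then the inserter of (g, f) inside M.
X = [0, 1, 2, 3]
f = lambda x: x
g = lambda x: x if x != 2 else 3          # monotone: sends 0, 1, 2, 3 to 0, 1, 3, 3

M = [x for x in X if f(x) <= g(x)]        # inserter of (f, g)
E = [x for x in M if g(x) <= f(x)]        # inserter of (g, f) restricted to M

assert E == [x for x in X if f(x) == g(x)]   # E = {x | f(x) = g(x)}
```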
Second, we need to construct finite powers, so let $P$ be a finite poset and
consider any $X\in\mathcal{C}$. Consider the product $\prod\limits_{a\in P}X$
(i.e. the ordinary power) and for every pair of elements $a,b\in P$ with
$a\leq b$ form the inserter of $(\pi_{a},\pi_{b})$, say
$E_{ab}\xrightarrow{\;e_{ab}\;}\prod\limits_{a\in P}X\overset{\pi_{a}}{\underset{\pi_{b}}{\rightrightarrows}}X$
in $\mathcal{C}$. Set $E\coloneqq\prod\limits_{a\leq b}E_{ab}$. We claim that
$E$ is the power $X^{P}$.
To show this, consider any family of morphisms $(f_{a}\colon C\to X)_{a\in P}$
such that $a\leq b\implies f_{a}\leq f_{b}$, i.e. a monotone map
$P\to\mathcal{C}(C,X)$ in $\mathsf{Pos}$. There is then a unique $f\colon
C\to\prod\limits_{a\in P}X$ such that $\pi_{a}f=f_{a}$ for all $a\in P$. Now
for each pair of elements $a,b\in P$ with $a\leq b$ we have $f_{a}\leq f_{b}$,
which is to say $\pi_{a}f\leq\pi_{b}f$. Hence, there is a unique $u_{ab}\colon
C\to E_{ab}$ with $e_{ab}u_{ab}=f$. This in turn induces a unique $u\colon
C\to E$ such that $\pi_{ab}u=u_{ab}$ whenever $a,b\in P$ with $a\leq b$.
Finally, let us show that this assignment is order-preserving and
order-reflecting. So consider another family $(g_{a}\colon C\to X)_{a\in P}$, with
$g\colon C\to\prod\limits_{a\in P}X$ and $v\colon C\to E$ corresponding to $f$
and $u$ as defined above for $(f_{a})_{a\in P}$. If $(f_{a})_{a\in
P}\leq(g_{a})_{a\in P}$, then $f_{a}\leq g_{a}$ for all $a\in P$, i.e.
$\pi_{a}f\leq\pi_{a}g$ for all $a\in P$ and hence $f\leq g$ by the universal
property of the product. This in turn means that whenever $a\leq b$ we have
$e_{ab}u_{ab}=f\leq g=e_{ab}v_{ab}$ and so $u_{ab}\leq v_{ab}$, hence $u\leq
v$. It is clear that these implications can also be reversed. ∎
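In $\mathsf{Pos}$ itself this construction is easy to check by brute force: the sub-poset of the ordinary power cut out by the inserters consists exactly of the monotone maps $P\to X$. A small sketch with a V-shaped $P$ and the two-element chain, both our own choices:

```python
from itertools import product

# The finite power X^P carved out of the ordinary power by inserters,
# following the proof: tuples (x_a) with x_a <= x_b whenever a <= b in P.
P_elems = ["a", "b", "c"]
P_leq = {("a", "b"), ("a", "c")}          # a <= b and a <= c (plus reflexivity)
X = [0, 1]                                # the 2-chain

def comp(t, e):
    return t[P_elems.index(e)]

power = [t for t in product(X, repeat=len(P_elems))
         if all(comp(t, a) <= comp(t, b) for (a, b) in P_leq)]

# Elements of X^P are exactly the monotone maps P -> X: five of them here.
assert len(power) == 5
assert (0, 1, 1) in power and (1, 0, 0) not in power
```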
If a category $\mathcal{C}$ has comma objects, then in particular for any
morphism $f\colon X\to Y\in\mathcal{C}$ we can form the comma of the pair
$(f,f)$. This comma measures the extent to which $f$ fails to be an
$\mathsf{ff}$-morphism and ultimately connects with the notions of regularity
and exactness to be introduced shortly.
* 2.4 Definition.
Given any morphism $f\colon X\to Y$ in a category $\mathcal{C}$, the comma
object $f/f$ is called the _kernel congruence_ of $f$.
* 2.5 Lemma.
For any morphism $f\colon X\to Y$ with kernel congruence
$f/f\overset{f_{0}}{\underset{f_{1}}{\rightrightarrows}}X$
in a category $\mathcal{C}$, the following are equivalent:
1. (1)
$f$ is an $\mathsf{ff}$-morphism.
2. (2)
$f_{0}\leq f_{1}$.
3. (3)
The canonical morphism $\iota_{f}\colon 1_{X}/1_{X}\to f/f$ is an isomorphism.
###### Proof.
If $f$ is an $\mathsf{ff}$-morphism, then $ff_{0}\leq ff_{1}\implies f_{0}\leq
f_{1}$.
If $f_{0}\leq f_{1}$, then $1_{X}f_{0}\leq 1_{X}f_{1}$ implies that
$(f_{0},f_{1})$ must factor through $1_{X}/1_{X}$ via a morphism which is then
easily seen to be inverse to $1_{X}/1_{X}\rightarrow f/f$.
Finally, assume that $f/f\cong 1_{X}/1_{X}$ and let $u_{0},u_{1}\colon Z\to X$
be such that $fu_{0}\leq fu_{1}$. Then $(u_{0},u_{1})$ factors through $f/f$
and so through $1_{X}/1_{X}$. But the latter means precisely that $u_{0}\leq
u_{1}$. ∎
As we have mentioned already, the class of $\mathsf{ff}$-morphisms will be the
“mono part” of a factorization system for regular categories. The other class
of morphisms is taken to be the class of morphisms orthogonal to all
$\mathsf{ff}$-morphisms, as has to be the case in any orthogonal factorization
system. Let us first recall the definition of orthogonality in this enriched
context.
* 2.6 Definition.
Given morphisms $e\colon A\to B$ and $m\colon X\to Y$ in a category
$\mathcal{C}$, we say that $e$ is _left orthogonal_ to $m$ and write $e\perp
m$ if the square
$\begin{array}{ccc}\mathcal{C}(B,X)&\xrightarrow{\;-\circ e\;}&\mathcal{C}(A,X)\\ {\scriptstyle m\circ -}\big\downarrow&&\big\downarrow{\scriptstyle m\circ -}\\ \mathcal{C}(B,Y)&\xrightarrow{\;-\circ e\;}&\mathcal{C}(A,Y)\end{array}$
is a pullback in $\mathsf{Pos}$.
To make things more explicit, the statement $e\perp m$ means two things:
1. (1)
The usual diagonal fill-in property.
2. (2)
Given two commutative squares
$\begin{array}{ccc}A&\xrightarrow{\;e\;}&B\\ {\scriptstyle u_{1}}\big\downarrow&\overset{d_{1}}{\swarrow}&\big\downarrow{\scriptstyle v_{1}}\\ X&\xrightarrow{\;m\;}&Y\end{array}\qquad\begin{array}{ccc}A&\xrightarrow{\;e\;}&B\\ {\scriptstyle u_{2}}\big\downarrow&\overset{d_{2}}{\swarrow}&\big\downarrow{\scriptstyle v_{2}}\\ X&\xrightarrow{\;m\;}&Y\end{array}$
in $\mathcal{C}$ with $u_{1}\leq u_{2}$ and $v_{1}\leq v_{2}$, the diagonal
fill-ins must also satisfy $d_{1}\leq d_{2}$.
Then, as in [14] we introduce the following class of morphisms, which in some
sense are the $\mathsf{Pos}$-enriched analogue of strong epimorphisms for
ordinary categories.
* 2.7 Definition.
A morphism $e\colon A\to B$ is called an _$\mathsf{so}$ -morphism_ (or
_surjective on objects_) if $e\perp m$ for every $\mathsf{ff}$-morphism
$m\colon X\to Y\in\mathcal{C}$.
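In $\mathsf{Pos}$ itself, the $\mathsf{so}$-morphisms can be identified with the surjective monotone maps (we take this as a working assumption here, suggested by the name "surjective on objects"); under that assumption, the ($\mathsf{so}$,$\mathsf{ff}$)-factorization of a monotone map is surjection onto the image followed by the embedding of the image. A minimal sketch:

```python
# (so, ff)-factorization in Pos, sketched under the assumption that the
# so-morphisms of Pos are exactly the surjective monotone maps.  We
# factor f through its image, ordered as a sub-poset of the codomain.
X = [0, 1, 2, 3]                          # the chain 0 <= 1 <= 2 <= 3
f = lambda x: min(x, 2)                   # monotone, not surjective

image = sorted({f(x) for x in X})         # carrier of the middle object I
p = f                                     # so-part: X ->> I, surjective
i = lambda y: y                           # ff-part: the embedding I >-> X

assert all(i(p(x)) == f(x) for x in X)                # i . p = f
assert all(any(p(x) == y for x in X) for y in image)  # p is surjective
assert all((a <= b) == (i(a) <= i(b))                 # i reflects the order
           for a in image for b in image)
```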
* 2.8 Remark.
We note here that, in the special case of checking a condition $e\perp m$ with
$m$ an $\mathsf{ff}$-morphism, the 2-dimensional part of the definition of
orthogonality (property 2 above) follows for free. Indeed, in the notation of
the above diagrams, the fact that $md_{1}=v_{1}\leq v_{2}=md_{2}$ then implies
$d_{1}\leq d_{2}$.
We record here two basic properties of $\mathsf{so}$-morphisms which will be
used throughout the paper. These follow simply because the class of
$\mathsf{so}$-morphisms is defined by a left orthogonality condition.
* 2.9 Lemma.
Consider morphisms $f\colon X\to Y$ and $g\colon Y\to Z$ in a category
$\mathcal{C}$. Then:
1. (1)
If $f,g$ are $\mathsf{so}$-morphisms, then so is $gf$.
2. (2)
If $gf$ is an $\mathsf{so}$-morphism, then so is $g$.
The $\mathsf{so}$-morphisms will be the “epimorphism part” of the
factorization system on regular categories. Indeed, given the existence of
some limits, every $\mathsf{so}$-morphism is an order-epimorphism.
* 2.10 Lemma.
If $\mathcal{C}$ has inserters, then every $\mathsf{so}$-morphism in
$\mathcal{C}$ is an
order-epimorphism.
###### Proof.
Let $e\colon A\to B$ be an $\mathsf{so}$-morphism and consider $f,g\colon B\to
C$ such that $fe\leq ge$. Let $m\colon M\to B$ be the inserter of $(f,g)$.
Then there exists a unique $u\colon A\to M$ such that $mu=e$. But we have
$e\perp m$ and so we obtain a $v\colon B\to M$ such that $ve=u$, $mv=1_{B}$.
Now $m$ is both a split epimorphism and a monomorphism, hence an isomorphism
and so $fm\leq gm\implies f\leq g$. ∎
The notion of regularity for ordinary categories, as is well known, can be
defined either in terms of strong epimorphisms or in terms of regular
epimorphisms. In fact, one can argue that a significant part of the power of
regularity is that it forces these two classes of epimorphisms to coincide.
From another perspective, the regular epimorphisms are the categorical notion
of quotient that is usually appropriate in ordinary categories. For
$\mathsf{Pos}$-categories the corresponding notion is that of _coinserter_ ,
which is dual to the notion of inserter defined earlier. Intuitively, whereas
taking a coequalizer corresponds to adding new equalities, constructing a
coinserter should be thought of as adding new _inequalities_.
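The slogan "adding new inequalities" can be made concrete in $\mathsf{Pos}$: the coinserter of $(f_{0},f_{1})\colon C\rightrightarrows X$ is obtained by adjoining the inequalities $f_{0}(c)\leq f_{1}(c)$ to the order of $X$, closing under transitivity, and then quotienting by the induced equivalence to restore antisymmetry. A brute-force sketch (carriers and maps are our own choices):

```python
from itertools import product

# Coinserter of (f0, f1) in Pos: freely add the inequalities
# f0(c) <= f1(c), close under transitivity (a preorder), then identify
# x ~ y when x <= y and y <= x both hold.
X = [0, 1, 2]
rel = {(a, b) for a in X for b in X if a <= b}       # the chain 0 <= 1 <= 2
C = ["c"]
f0, f1 = {"c": 2}, {"c": 0}                          # impose 2 <= 0

rel |= {(f0[c], f1[c]) for c in C}
changed = True
while changed:                                       # transitive closure
    changed = False
    for (a, b), (b2, d) in product(list(rel), repeat=2):
        if b == b2 and (a, d) not in rel:
            rel.add((a, d))
            changed = True

# The quotient poset: equivalence classes of the generated preorder.
classes = {a: frozenset(b for b in X if (a, b) in rel and (b, a) in rel)
           for a in X}
assert len(set(classes.values())) == 1               # everything collapses to a point
```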
Now, just as every regular epimorphism is always strong, so too we have the
following.
* 2.11 Lemma.
Every coinserter is an $\mathsf{so}$-morphism.
###### Proof.
Consider the following commutative diagram, where $e$ is assumed to be the
coinserter of $(f_{0},f_{1})$ and $m$ is an $\mathsf{ff}$-morphism.
$C\overset{f_{0}}{\underset{f_{1}}{\rightrightarrows}}A\xrightarrow{\;e\;}B\qquad\begin{array}{ccc}A&\xrightarrow{\;e\;}&B\\ {\scriptstyle u}\big\downarrow&&\big\downarrow{\scriptstyle v}\\ X&\xrightarrow{\;m\;}&Y\end{array}$
Now we have $muf_{0}=vef_{0}\leq vef_{1}=muf_{1}$, so by virtue of $m$ being
an $\mathsf{ff}$-morphism we get $uf_{0}\leq uf_{1}$. Then by the coinserter
property there exists a unique $d\colon B\to X$ such that $de=u$. It follows
also that $md=v$ because $e$ is an order-epimorphism. ∎
* 2.12 Definition.
A morphism $q\colon X\to Y$ is called an _effective_ (epi-)morphism if it is
the coinserter of some pair of parallel morphisms.
The notion of regularity of a category is essentially an exactness property
relating kernel congruences and coinserters. There are however some exactness
properties involving these notions that always hold, regardless of regularity.
This is the content of the following proposition, which again should be
compared to the corresponding facts involving kernel pairs and coequalizers in
an ordinary category (see [6]).
* 2.13 Proposition.
1. (1)
If an effective epimorphism has a kernel congruence, then it must be the
coinserter of that kernel congruence.
2. (2)
If a kernel congruence has a coinserter, then it is also the kernel congruence
of that coinserter.
###### Proof.
1. (1)
Suppose
$X\overset{f_{0}}{\underset{f_{1}}{\rightrightarrows}}Y\xrightarrow{\;q\;}Q$
is a coinserter diagram and assume $q$ has a kernel congruence
$q/q\overset{q_{0}}{\underset{q_{1}}{\rightrightarrows}}Y$
in $\mathcal{C}$. Then $qf_{0}\leq qf_{1}\implies(\exists!u\colon X\to
q/q)q_{0}u=f_{0},q_{1}u=f_{1}$, by the universal property of kernel
congruence.
Now let $g\colon Y\to Z$ be such that $gq_{0}\leq gq_{1}$. Then $gq_{0}u\leq
gq_{1}u\implies gf_{0}\leq gf_{1}$ and so $(\exists!v\colon Q\to Z)vq=g$.
Finally, $q$ is already an order-epimorphism by virtue of being a coinserter
of some pair of morphisms.
2. (2)
Suppose that
$R\overset{r_{0}}{\underset{r_{1}}{\rightrightarrows}}X\xrightarrow{\;q\;}Q$
is a coinserter diagram and that $(r_{0},r_{1})$ is the kernel congruence of
some $f\colon X\to Y$. Then $fr_{0}\leq fr_{1}$ implies the existence of a
unique $u\colon Q\to Y$ such that $uq=f$.
Now let $a,b\colon A\to X$ be such that $qa\leq qb$. Then also $uqa\leq uqb$,
i.e. $fa\leq fb$ and so $(\exists!v\colon A\to R)r_{0}v=a,r_{1}v=b$.
∎
We now come to the definition of regularity for $\mathsf{Pos}$-enriched
categories, as presented by Kurz and Velebil in [14]. We label it as
‘provisional’ in the context of this paper for reasons that will be justified
shortly.
* 2.14 Definition.
(provisional) A category $\mathcal{C}$ is _regular_ if it satisfies the
following:
(R1) $\mathcal{C}$ has all finite (weighted) limits.
(R2) $\mathcal{C}$ has ($\mathsf{so}$,$\mathsf{ff}$)-factorizations.
(R3) $\mathsf{so}$-morphisms are stable under pullback in $\mathcal{C}$.
(R4) Every $\mathsf{so}$-morphism is effective in $\mathcal{C}$.
A main feature of this definition is that it posits the existence of a stable
($\mathsf{so}$,$\mathsf{ff}$) factorization system in $\mathcal{C}$. The
authors go on to state that the ‘gist of the definition’ is property (R4),
i.e. the assumption that $\mathsf{so}$-morphisms and effective morphisms
coincide, which amounts to requiring the stable factorization system to be
(effective, $\mathsf{ff}$). However, we shall show
that condition (R4) in fact follows from the first three conditions, much like
in the case of ordinary regularity one can state the definition equivalently
either in terms of regular epimorphisms or strong epimorphisms. In fact, the
proof is essentially a direct adaptation of the corresponding one in the
ordinary context (see [6]).
Before making good on our claim, we need a preparatory result on the pasting
of a pullback square with a comma square. This is well-known from the realm of
2-categories, but we include its proof for the sake of making this paper more
self-contained.
* 2.15 Lemma.
Consider the following diagram in a category $\mathcal{C}$, where the right-
hand square is a comma square and the left-hand square commutes.
$\begin{array}{ccccc}P&\xrightarrow{\;p_{1}\;}&Q&\xrightarrow{\;q_{1}\;}&Z\\ {\scriptstyle p_{0}}\big\downarrow&&{\scriptstyle q_{0}}\big\downarrow&{\scriptstyle\leq}&\big\downarrow{\scriptstyle g}\\ X^{\prime}&\xrightarrow{\;x\;}&X&\xrightarrow{\;f\;}&Y\end{array}$
Then the outer rectangle is a comma square if and only if the left-hand square
is a pullback.
###### Proof.
Assume first that the left-hand square is a pullback. Note that by our
assumptions on the diagram we have $fxp_{0}=fq_{0}p_{1}\leq gq_{1}p_{1}$. Now
suppose that $u\colon A\to X^{\prime}$ and $v\colon A\to Z$ are such that
$fxu\leq gv$. Since the right-hand square is a comma, $(\exists!w\colon A\to
Q)q_{0}w=xu,q_{1}w=v$. The first of these equalities by virtue of the pullback
property gives that $(\exists!z\colon A\to P)p_{0}z=u,p_{1}z=w$. Then we have
$q_{1}p_{1}z=q_{1}w=v$ as well.
Finally, assume that $z,z^{\prime}\colon A\to P$ are such that $p_{0}z\leq
p_{0}z^{\prime}$ and $q_{1}p_{1}z\leq q_{1}p_{1}z^{\prime}$. Then we also have
$xp_{0}z\leq xp_{0}z^{\prime}\implies q_{0}p_{1}z\leq q_{0}p_{1}z^{\prime}$,
so that by the universal property of the comma square we get $p_{1}z\leq
p_{1}z^{\prime}$. The latter inequality together with $p_{0}z\leq
p_{0}z^{\prime}$ yield $z\leq z^{\prime}$ by the universal property of the
pullback.
Conversely, assume that the outer square is a comma and let $u\colon A\to
X^{\prime}$ and $v\colon A\to Q$ be such that $xu=q_{0}v$. Then we have
$fxu=fq_{0}v\leq gq_{1}v$, so the outer rectangle being a comma says that
$(\exists!w\colon A\to P)p_{0}w=u,q_{1}p_{1}w=q_{1}v$. But since also
$q_{0}p_{1}w=xp_{0}w=xu=q_{0}v$ and $q_{0},q_{1}$ are jointly monomorphic, we
obtain also that $p_{1}w=v$. Finally, it is clear that $p_{0},p_{1}$ are
jointly order-monomorphic because $p_{0},q_{1}p_{1}$ are so. ∎
Now we can prove that condition (R4) in the definition of regularity is
superfluous. The proof that follows is almost identical to that of Proposition
2.2.2 in [6], concerning ordinary regularity, where we replace some uses of
the familiar lemma on pasting of pullback squares with Lemma 2.15.
* 2.16 Proposition.
If $\mathcal{C}$ is a category satisfying conditions (R1), (R2) and (R3) of Definition 2.14,
then it also satisfies (R4).
###### Proof.
Let $f\colon X\to Y$ be an $\mathsf{so}$-morphism and consider its kernel
congruence
$f/f\overset{k_{0}}{\underset{k_{1}}{\rightrightarrows}}X$,
which exists in $\mathcal{C}$ by (R1). We want to show that $f$ is the
coinserter of $(k_{0},k_{1})$.
So let $g\colon X\to Z\in\mathcal{C}$ be such that $gk_{0}\leq gk_{1}$. We can
consider then the induced morphism $\langle f,g\rangle\colon X\to Y\times Z$.
By (R2) we can factor this morphism as an $\mathsf{so}$-morphism followed by
an $\mathsf{ff}$-morphism, say
$X\xrightarrow{\;p\;}I\xrightarrow{\;i\;}Y\times Z$. We then form the following diagram in $\mathcal{C}$, where we begin by
forming the bottom right-hand square as a comma square and then the remaining
three squares are pullbacks.
$\begin{array}{ccccc}P&\xrightarrow{\;v_{1}\;}&P_{1}&\xrightarrow{\;x_{1}\;}&X\\ {\scriptstyle v_{0}}\big\downarrow&&{\scriptstyle u_{1}}\big\downarrow&&\big\downarrow{\scriptstyle p}\\ P_{0}&\xrightarrow{\;u_{0}\;}&C&\xrightarrow{\;c_{1}\;}&I\\ {\scriptstyle x_{0}}\big\downarrow&&{\scriptstyle c_{0}}\big\downarrow&{\scriptstyle\leq}&\big\downarrow{\scriptstyle\pi_{Y}i}\\ X&\xrightarrow{\;p\;}&I&\xrightarrow{\;\pi_{Y}i\;}&Y\end{array}$
By an application of 2 and its order-dual as well as the usual pullback gluing
lemma we deduce that the big outer square resulting from the pasting of all
four smaller ones is also a comma square. Then, since $\pi_{Y}ip=f$, we have
$P\cong f/f$ and we can assume that $x_{0}v_{0}=k_{0}$ and $x_{1}v_{1}=k_{1}$.
Also, observe that by (R3) we have that $u_{0},u_{1}$ and then also
$v_{0},v_{1}$ are $\mathsf{so}$-morphisms.
Now we want to show that $\pi_{Y}i$ is an iso. Since $(\pi_{Y}i)p=f$ is an
$\mathsf{so}$-morphism, we already know, by 2, that $\pi_{Y}i$ is an
$\mathsf{so}$-morphism as well. Thus, it suffices to show that it is also an
$\mathsf{ff}$-morphism, which is equivalent to showing $c_{0}\leq c_{1}$.
Since $u_{0}v_{0}=u_{1}v_{1}$ is an $\mathsf{so}$-morphism, so in particular
an order-epimorphism, the latter inequality is equivalent to having
$c_{0}u_{0}v_{0}\leq c_{1}u_{0}v_{0}$. This is in turn equivalent to
$ic_{0}u_{0}v_{0}\leq ic_{1}u_{0}v_{0}$, because $i$ is an
$\mathsf{ff}$-morphism. To prove this last inequality we now observe the
following:
$\pi_{Y}ic_{0}u_{0}v_{0}=\pi_{Y}ipx_{0}v_{0}=fx_{0}v_{0}=fk_{0}\leq
fk_{1}=fx_{1}v_{1}=\pi_{Y}ipx_{1}v_{1}=\pi_{Y}ic_{1}u_{0}v_{0}$
$\pi_{Z}ic_{0}u_{0}v_{0}=\pi_{Z}ipx_{0}v_{0}=gx_{0}v_{0}=gk_{0}\leq
gk_{1}=gx_{1}v_{1}=\pi_{Z}ipx_{1}v_{1}=\pi_{Z}ic_{1}u_{0}v_{0}$
Then the universal property of the product yields the desired inequality.
Finally, we now have a morphism $\pi_{Z}i(\pi_{Y}i)^{-1}\colon Y\to Z$ such
that $\pi_{Z}i(\pi_{Y}i)^{-1}f=\pi_{Z}i(\pi_{Y}i)^{-1}\pi_{Y}ip=\pi_{Z}ip=g$.
Furthermore, we know already that $f$ is an order-epimorphism because it is an
$\mathsf{so}$-morphism by assumption. ∎
Thus, we can officially strike condition (R4) from the definition of
regularity and henceforth adopt the following more economical one.
* 2.17 Definition.
A category $\mathcal{C}$ will be called _regular_ if it satisfies the
following:
(R1) $\mathcal{C}$ has all finite (weighted) limits.
(R2) $\mathcal{C}$ has ($\mathsf{so}$,$\mathsf{ff}$)-factorizations.
(R3) $\mathsf{so}$-morphisms are stable under pullback in $\mathcal{C}$.
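In the concrete case $\mathcal{E}=\mathsf{Pos}$, the $\mathsf{so}$-morphisms are the surjective monotone maps and the $\mathsf{ff}$-morphisms the order-embeddings, so (R2) amounts to factoring a monotone map through its image, with the order inherited from the codomain. A minimal sketch for finite posets (the helper `so_ff_factorize` and the example data are ours, purely for illustration):

```python
# Illustrative only: (so, ff)-factorization in Pos for finite posets.
# so = surjective monotone map, ff = order-embedding (the order inherited
# from the codomain makes the inclusion of the image an embedding).

def so_ff_factorize(X, f, leq_Y):
    """Factor a monotone f: X -> Y as a surjection p onto the image I,
    followed by the inclusion i: I -> Y, an order-embedding."""
    I = sorted({f(x) for x in X})       # underlying set of the image
    p = lambda x: f(x)                  # surjective onto I by construction
    i = lambda y: y                     # inclusion into Y
    leq_I = lambda a, b: leq_Y(a, b)    # order restricted from Y
    return I, p, i, leq_I

X = [0, 1, 2, 3]                        # the chain 0 < 1 < 2 < 3
f = lambda x: x // 2                    # a monotone map into the same chain
I, p, i, leq_I = so_ff_factorize(X, f, lambda a, b: a <= b)
assert I == [0, 1]                      # image of f
assert all(i(p(x)) == f(x) for x in X)  # f = i . p
```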
We can now also establish another equivalent characterization of regularity,
in terms of the existence of quotients for kernel congruences.
* 2.18 Proposition.
A finitely complete category $\mathcal{C}$ is regular if and only if the
following hold:
1. (1)
Every kernel congruence in $\mathcal{C}$ has a coinserter.
2. (2)
Effective morphisms are stable under pullback in $\mathcal{C}$.
###### Proof.
If $\mathcal{C}$ is regular, then it is easy to see that it satisfies the two
conditions above by definition and by an appeal to part (2.) of 2.
Conversely, let us assume that $\mathcal{C}$ satisfies the two conditions in
the statement. Consider any $f\colon X\to Y\in\mathcal{C}$, its kernel
congruence
$f_{0},f_{1}\colon f/f\rightrightarrows X$
and the coinserter $q\colon X\to Q$ in $\mathcal{C}$ of the latter, which
exists by condition 1. Since $ff_{0}\leq ff_{1}$, there exists a unique
$m\colon Q\to Y$ such that $f=mq$. It now suffices to show that $m$ is an
$\mathsf{ff}$-morphism.
For this, consider the kernel congruence
$m_{0},m_{1}\colon m/m\rightrightarrows Q$
and form the following diagram where the bottom right-hand square is a comma
and the other three are pullbacks.
$\begin{array}{ccccc}
P & \xrightarrow{v_{1}} & P_{1} & \xrightarrow{x_{1}} & X\\
{\scriptstyle v_{0}}\big\downarrow & & {\scriptstyle u_{1}}\big\downarrow & & \big\downarrow{\scriptstyle q}\\
P_{0} & \xrightarrow{u_{0}} & m/m & \xrightarrow{m_{1}} & Q\\
{\scriptstyle x_{0}}\big\downarrow & & {\scriptstyle m_{0}}\big\downarrow & {\scriptstyle\leq} & \big\downarrow{\scriptstyle m}\\
X & \xrightarrow{q} & Q & \xrightarrow{m} & Y
\end{array}$
Similarly to the proof of 2, we have $P\cong f/f$ and we can assume that
$x_{0}v_{0}=f_{0}$ and $x_{1}v_{1}=f_{1}$. Furthermore, by the assumed
stability of effective morphisms under pullback, we can deduce that
$u_{0}v_{0}=u_{1}v_{1}$ is an order-epimorphism. Now we have that
$m_{0}u_{0}v_{0}=qx_{0}v_{0}=qf_{0}\leq qf_{1}=qx_{1}v_{1}=m_{1}u_{1}v_{1}$
whence we deduce that $m_{0}\leq m_{1}$ and so that $m$ is an
$\mathsf{ff}$-morphism. ∎
To end this section, let us list a few examples of regular categories. For
more details on most of these one can consult [14].
* 2.19 Example.
1. (1)
$\mathsf{Pos}$ is regular as a $\mathsf{Pos}$-category. So is any category of
enriched presheaves $[\mathcal{C}^{op},\mathsf{Pos}]$ for $\mathcal{C}$ a
small category.
2. (2)
Any ordinary regular category $\mathcal{C}$ is also regular in the
$\mathsf{Pos}$-enriched sense when equipped with the discrete order on its
Hom-sets. Indeed, in this case $\mathsf{ff}$-morphisms coincide with
monomorphisms and $\mathsf{so}$-morphisms with strong epimorphisms. Note that
$\mathsf{Pos}$ is an example of a category which is not regular in the
ordinary sense, but is regular as an enriched category.
3. (3)
_Quasivarieties of ordered algebras_ in the sense of Bloom and Wright [5] are
regular categories [14]. As particular examples here we have the categories
$\mathsf{OrdMon}$ of ordered monoids, $\mathsf{OrdSGrp}$ of ordered semi-
groups, $\mathsf{OrdCMon}$ of commutative ordered monoids and
$\mathsf{OrdMon_{0}}$ of ordered monoids with the neutral element $0$ of the
monoid operation as the minimum element for the order. These are all in fact
varieties. An example of a quasivariety which is not a variety is the category
$\mathsf{OrdMon_{can}}$ of cancellative ordered monoids, i.e. those ordered monoids
$(M,\cdot,\leq)$ satisfying the implications $x\cdot z\leq y\cdot z\implies
x\leq y$ and $z\cdot x\leq z\cdot y\implies x\leq y$ for all $x,y,z\in M$.
4. (4)
The categories $\mathsf{Nach}$ of _Nachbin_ spaces (or compact ordered spaces)
and $\mathsf{Pries}$ of _Priestley_ spaces with continuous order-preserving
functions in both instances are examples of regular categories. We shall have
more to say on these in section 5.
5. (5)
If $\mathcal{C}$ is monadic over $\mathsf{Pos}$ for a monad
$T\colon\mathsf{Pos}\to\mathsf{Pos}$ which preserves $\mathsf{so}$-morphisms
(i.e. surjections), then $\mathcal{C}$ is regular. An example of this kind is
given by the category $S$-$\mathsf{Pos}$ of $S$-posets for any ordered monoid
$S$ (see [8]). The objects in the latter category are monoid actions $S\times
X\to X$ on a poset $X$ which are monotone in both variables, while the
morphisms are the monotone equivariant functions.
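The cancellation law defining $\mathsf{OrdMon_{can}}$ in item (3) is easy to test mechanically. The following sketch (our own illustration, not from the text) checks it on a finite window of $(\mathbb{N},+,\leq)$ and exhibits its failure for the ordered monoid $(\{0,1\},\max,\leq)$:

```python
# Illustrative check of the cancellation laws defining OrdMon_can:
# x*z <= y*z implies x <= y, and z*x <= z*y implies x <= y.

def left_right_cancellative(M, op, leq):
    for x in M:
        for y in M:
            for z in M:
                if leq(op(x, z), op(y, z)) and not leq(x, y):
                    return False
                if leq(op(z, x), op(z, y)) and not leq(x, y):
                    return False
    return True

# (N, +, <=), tested on a finite window, satisfies both implications ...
assert left_right_cancellative(range(5), lambda a, b: a + b,
                               lambda a, b: a <= b)
# ... while ({0,1}, max, <=) does not: max(1,1) <= max(0,1) but 1 <= 0 fails.
assert not left_right_cancellative([0, 1], max, lambda a, b: a <= b)
```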
## 3\. Calculus of Relations and Exactness
Recall that in an ordinary regular category $\mathcal{C}$, the existence of a
stable (regular epi,mono) factorization system allows for a well-behaved
calculus of relations in $\mathcal{C}$. More precisely, the existence of the
factorization system allows one to define the composition of two internal
relations and then the stability of regular epimorphisms under pullback is
precisely equivalent to the associativity of this composition. Essentially the
same facts hold also in our $\mathsf{Pos}$-enriched setting.
If $\mathcal{E}$ is a regular category, then by a _relation_ in $\mathcal{E}$
we shall mean an order-subobject $R\rightarrowtail X\times Y$, i.e. a
subobject of a product represented by an $\mathsf{ff}$-morphism. We shall
write $R\colon X\looparrowright Y$ to denote that $R$ is a relation from $X$
to $Y$ in $\mathcal{E}$. The factorization system
($\mathsf{so}$,$\mathsf{ff}$) in $\mathcal{E}$ and the stability of
$\mathsf{so}$-morphisms under pullback allow us to have a well-defined
composition of relations in $\mathcal{E}$: given $R\colon X\looparrowright Y$
and $S\colon Y\looparrowright Z$ in $\mathcal{E}$, the relation $S\circ
R\colon X\looparrowright Z$ is defined by first constructing the pullback
square below
$\begin{array}{ccc}
T & \xrightarrow{t_{1}} & S\\
{\scriptstyle t_{0}}\big\downarrow & & \big\downarrow{\scriptstyle s_{0}}\\
R & \xrightarrow{r_{1}} & Y
\end{array}$
where $\langle r_{0},r_{1}\rangle\colon R\rightarrowtail X\times Y$ and $\langle s_{0},s_{1}\rangle\colon S\rightarrowtail Y\times Z$ represent the two relations,
and then taking the ($\mathsf{so}$,$\mathsf{ff}$) factorization of $\langle
r_{0}t_{0},s_{1}t_{1}\rangle\colon T\to X\times Z$, say
$T\overset{q}{\twoheadrightarrow}M\overset{\langle m_{0},m_{1}\rangle}{\rightarrowtail}X\times Z$
In other words, in our notation $S\circ R$ is the relation represented by the
$\mathsf{ff}$-morphism $\langle m_{0},m_{1}\rangle$.
This leads to the locally posetal bicategory (a.k.a $\mathsf{Pos}$-category)
$\mathrm{Rel}(\mathcal{E})$, whose objects are those of $\mathcal{E}$ and
whose morphisms are the relations $R\colon X\looparrowright Y$ in
$\mathcal{E}$. The identity morphism on the object $X$ in
$\mathrm{Rel}(\mathcal{E})$ is the diagonal relation $\langle
1_{X},1_{X}\rangle\colon\Delta_{X}\rightarrowtail X\times X$ and composition
of morphisms is given by composition of relations in $\mathcal{E}$.
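For finite posets the pullback-then-image recipe specializes to ordinary relational composition of the underlying sets of pairs. A sketch (the helper `compose` and the sample relations are ours):

```python
# Illustrative only: for finite carriers, the composite S o R obtained by
# forming the pullback T and then the image of <r0 t0, s1 t1> is plain
# relational composition.

def compose(S, R):
    """Composite S o R of R: X |-> Y and S: Y |-> Z, both given as sets
    of pairs: pullback (match middle components), then take the image."""
    T = [(r, s) for r in R for s in S if r[1] == s[0]]   # the pullback T
    return {(r[0], s[1]) for (r, s) in T}                # image in X x Z

R = {(1, 'a'), (2, 'b')}
S = {('a', 'u'), ('b', 'v'), ('b', 'w')}
assert compose(S, R) == {(1, 'u'), (2, 'v'), (2, 'w')}
```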
If we forget about the 2-dimensional nature of the properties that define the
two classes of morphisms in the factorization system
($\mathsf{so}$,$\mathsf{ff}$), then the structure and basic properties of
$\mathrm{Rel}(\mathcal{E})$ are essentially the calculus of relations
_relative to a stable factorization system_, as explicated by Meisen [17],
Richter [20], Kelly [12] and others. Thus, we shall feel free to take for
granted many of the basic facts concerning the structure of
$\mathrm{Rel}(\mathcal{E})$ without proving them here. As an exception to this
rule, we include the proof of the following lemma because it describes a way
in which one can argue about relations in a regular category using generalized
elements which will be particularly useful to us in subsequent proofs. Recall
here that, given a relation $R\colon X\looparrowright Y$ and generalized
elements $x\colon A\to X$, $y\colon A\to Y$ in $\mathcal{E}$, we write
$(x,y)\in_{A}R$ to indicate that $\langle x,y\rangle\colon A\to X\times Y$
factors through $\langle r_{0},r_{1}\rangle$. Observe also that, given an
$\mathsf{so}$-morphism $q\colon B\twoheadrightarrow A$ in $\mathcal{E}$, we
have that $(x,y)\in_{A}R$ if and only if $(xq,yq)\in_{B}R$. Indeed, while the
“only if” direction is obvious, for the converse note that $(xq,yq)\in_{B}R$
means the existence of a commutative square of the following form.
$\begin{array}{ccc}
B & \longrightarrow & R\\
{\scriptstyle q}\big\downarrow & & \big\downarrow{\scriptstyle\langle r_{0},r_{1}\rangle}\\
A & \xrightarrow{\langle x,y\rangle} & X\times Y
\end{array}$
Then by the orthogonality between $\mathsf{so}$ and $\mathsf{ff}$-morphisms we
have an induced diagonal $A\to R$ exhibiting $(x,y)\in_{A}R$.
* 3.1 Lemma.
Let $R\colon X\looparrowright Y$ and $S\colon Y\looparrowright Z$ be relations
in a regular category $\mathcal{E}$ and consider any generalized elements
$x\colon P\to X$ and $z\colon P\to Z$. Then $(x,z)\in_{P}S\circ R$ if and only
if there exists an effective epimorphism $q\colon Q\twoheadrightarrow P$ and a
generalized element $y\colon Q\to Y$ such that $(xq,y)\in_{Q}R$ and
$(y,zq)\in_{Q}S$.
###### Proof.
Consider the diagram below, where the square is a pullback.
$\begin{array}{ccc}
T & \xrightarrow{t_{1}} & S\\
{\scriptstyle t_{0}}\big\downarrow & & \big\downarrow{\scriptstyle s_{0}}\\
R & \xrightarrow{r_{1}} & Y
\end{array}$
with $\langle r_{0},r_{1}\rangle$ and $\langle s_{0},s_{1}\rangle$ representing $R$ and $S$, and legs $r_{0}\colon R\to X$, $s_{1}\colon S\to Z$.
Then $S\circ R$ is given by the following image factorization
$\langle r_{0}t_{0},s_{1}t_{1}\rangle\colon T\overset{e}{\twoheadrightarrow}I\overset{\langle i_{0},i_{1}\rangle}{\rightarrowtail}X\times Z.$
Assume first that $(x,z)\in_{P}S\circ R$, i.e. there exists a morphism
$u\colon P\to I$ such that $\langle i_{0},i_{1}\rangle u=\langle x,z\rangle$.
We then form the pullback square below.
$\begin{array}{ccc}
Q & \xrightarrow{v} & T\\
{\scriptstyle q}\big\downarrow & & \big\downarrow{\scriptstyle e}\\
P & \xrightarrow{u} & I
\end{array}$
Note that $q$ is an effective epimorphism because $e$ is such. Now set
$y\coloneqq r_{1}t_{0}v=s_{0}t_{1}v\colon Q\to Y$. Then $\langle
xq,y\rangle=\langle i_{0}uq,r_{1}t_{0}v\rangle=\langle
i_{0}ev,r_{1}t_{0}v\rangle=\langle r_{0}t_{0}v,r_{1}t_{0}v\rangle=\langle
r_{0},r_{1}\rangle t_{0}v$ and $\langle y,zq\rangle=\langle
s_{0}t_{1}v,i_{1}uq\rangle=\langle s_{0}t_{1}v,i_{1}ev\rangle=\langle
s_{0}t_{1}v,s_{1}t_{1}v\rangle=\langle s_{0},s_{1}\rangle t_{1}v$, so that
$(xq,y)\in_{Q}R$ and $(y,zq)\in_{Q}S$.
Conversely, assume that $(xq,y)\in_{Q}R$ and $(y,zq)\in_{Q}S$ for some
$y\colon Q\to Y$ and effective epimorphism $q\colon Q\twoheadrightarrow P$.
This means that there exist morphisms $u\colon Q\to R$ and $v\colon Q\to S$
such that $\langle r_{0},r_{1}\rangle u=\langle xq,y\rangle$ and $\langle
s_{0},s_{1}\rangle v=\langle y,zq\rangle$. Since $r_{1}u=y=s_{0}v$, there
exists a unique $w\colon Q\to T$ such that $t_{0}w=u$ and $t_{1}w=v$. Then
$\langle i_{0},i_{1}\rangle ew=\langle r_{0}t_{0},s_{1}t_{1}\rangle w=\langle
r_{0}u,s_{1}v\rangle=\langle xq,zq\rangle$, so that $(xq,zq)\in_{Q}SR$. Since
$q$ is an $\mathsf{so}$-morphism, we can conclude that also $(x,z)\in_{P}SR$.
∎
The above lemma can actually be used to prove many of the fundamental
properties of $\rm{Rel}(\mathcal{E})$. In general, $\rm{Rel}(\mathcal{E})$ is
a _tabular allegory with a unit_ (see e.g. [10], [9]), where the anti-involution
$(-)^{\circ}\colon\mathrm{Rel}(\mathcal{E})^{op}\to\mathrm{Rel}(\mathcal{E})$
is given by taking the _opposite_ relation. In particular, we have that
Freyd’s _Modular Law_ holds in $\mathrm{Rel}(\mathcal{E})$, i.e.
$QP\cap S\subseteq Q(P\cap Q^{\circ}S)$
for any relations $P\colon X\looparrowright Y$, $Q\colon Y\looparrowright Z$
and $S\colon X\looparrowright Z$ in $\mathcal{E}$. The presence of the anti-
involution $(-)^{\circ}$ implies that the Modular Law is equivalent to its
dual form, namely the inclusion
$QP\cap S\subseteq(Q\cap SP^{\circ})P$
for any relations $P,Q,S$ as above.
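Since the Modular Law is an inclusion between composites and intersections, it can be checked exhaustively on small carriers. The following script (an illustration of ours, treating a two-element set discretely) verifies it over all $16^{3}$ triples of relations:

```python
# Illustrative brute-force check of Freyd's Modular Law
#   QP ∩ S ⊆ Q(P ∩ Q°S)
# over all relations on a two-element carrier.

from itertools import product

U = [0, 1]
pairs = list(product(U, U))

def subsets(xs):
    out = [frozenset()]
    for x in xs:
        out += [s | {x} for s in out]
    return out

def comp(Q, P):                 # composite Q ∘ P (P applied first)
    return frozenset((x, z) for (x, y) in P for (y2, z) in Q if y == y2)

def op(R):                      # opposite relation R°
    return frozenset((y, x) for (x, y) in R)

rels = subsets(pairs)           # all 16 relations on U
for P, Q, S in product(rels, repeat=3):
    assert comp(Q, P) & S <= comp(Q, P & comp(op(Q), S))
```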
Every morphism $f\colon X\to Y\in\mathcal{E}$ defines a relation
$X\looparrowright Y$ represented by the $\mathsf{ff}$-morphism $\langle
1_{X},f\rangle\colon X\to X\times Y$, which we call its _graph_ and denote by
the same letter. This assignment defines a faithful ordinary functor
$\mathcal{E}_{0}\to\mathrm{Rel}(\mathcal{E})$ on the underlying ordinary
category of $\mathcal{E}$ which is the identity on objects. Furthermore, in
$\mathrm{Rel}(\mathcal{E})$ we have an adjunction $f\dashv f^{\circ}$, which
means that the inclusions $f^{\circ}f\supseteq\Delta_{X}$ and
$ff^{\circ}\subseteq\Delta_{Y}$ hold. We say then that the morphisms of
$\mathcal{E}$ are _maps_ in the bicategory $\mathrm{Rel}(\mathcal{E})$. We
should perhaps stress here that taking the graph of a morphism does not define
a functor $\mathcal{E}\to\mathrm{Rel}(\mathcal{E})$ because the order of
morphisms is not preserved. In fact, since $\mathrm{Rel}(\mathcal{E})$ is an
allegory, the anti-involution $(-)^{\circ}$ forces any inclusion $f\subseteq
g$ for morphisms $f,g\colon X\to Y$ to be an equality (see e.g. A3.2.3 in
[10]).
The modular law also implies some restricted versions of distributivity of
composition over binary intersections in $\mathrm{Rel}(\mathcal{E})$ (see e.g.
A3.1.6 of [10]). Two particular instances of this which we would like to
explicitly record for future reference are the following:
$(R\cap S)f=Rf\cap Sf\qquad g^{\circ}(R\cap S)=g^{\circ}R\cap g^{\circ}S$
where $R,S$ are relations $Y\looparrowright Z$ and $f\colon X\to Y$ and
$g\colon X\to Z$ are morphisms in $\mathcal{E}$.
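These two restricted distributivity laws are likewise finitely checkable; the sketch below (our own illustration) lets $f$ and $g$ range over graphs of maps on a two-element carrier:

```python
# Illustrative brute-force check of the restricted distributivity laws
#   (R ∩ S)f = Rf ∩ Sf   and   g°(R ∩ S) = g°R ∩ g°S
# where f, g are (graphs of) maps on a two-element carrier.

from itertools import product

U = [0, 1]
pairs = list(product(U, U))

def subsets(xs):
    out = [frozenset()]
    for x in xs:
        out += [s | {x} for s in out]
    return out

def comp(Q, P):
    return frozenset((x, z) for (x, y) in P for (y2, z) in Q if y == y2)

def op(R):
    return frozenset((y, x) for (x, y) in R)

funcs = [frozenset({(0, a), (1, b)}) for a in U for b in U]  # graphs of maps
for R, S in product(subsets(pairs), repeat=2):
    for f in funcs:
        assert comp(R & S, f) == comp(R, f) & comp(S, f)
    for g in funcs:
        assert comp(op(g), R & S) == comp(op(g), R) & comp(op(g), S)
```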
While $\mathrm{Rel}(\mathcal{E})$ is a very useful category in terms of
performing calculations with relations in $\mathcal{E}$, it does not really
capture the enriched nature of $\mathcal{E}$. As we have mentioned earlier,
$\mathrm{Rel}(\mathcal{E})$ in some sense only involves an ordinary category
with a stable factorization system. In order to also capture the
$\mathsf{Pos}$-enriched aspects of a regular category $\mathcal{E}$ we will
also need to work in a different bicategory of relations. In terms of our
goals in this paper, this need is related to our desire to identify the
morphisms of $\mathcal{E}$ as the maps in a certain bicategory of relations,
thus generalizing a familiar fact for ordinary regular categories. Indeed,
while certainly any morphism $f\colon X\to Y\in\mathcal{E}$ has as its right
adjoint in $\mathrm{Rel}(\mathcal{E})$ the opposite relation $f^{\circ}$, this
is not a complete characterization of (graphs of) morphisms. As can be deduced
by arguments essentially contained in [12], being a map in
$\mathrm{Rel}(\mathcal{E})$ is a weaker property than being the graph of a
morphism in $\mathcal{E}$.
Another reason for moving to a different bicategory of relations is dictated
by the form of _congruences_ in this enriched setting and the way in which
exactness is defined. This will become apparent a little bit later.
* 3.2 Definition.
A relation $R\colon X\looparrowright Y$ in the regular category $\mathcal{E}$
is called _weakening_ or _weakening-closed_ if, whenever $x,x^{\prime}\colon
A\to X$ and $y,y^{\prime}\colon A\to Y$ are generalized elements in
$\mathcal{E}$, the following implication holds
$x^{\prime}\leq x\ \wedge\ (x,y)\in_{A}R\ \wedge\ y\leq y^{\prime}\implies(x^{\prime},y^{\prime})\in_{A}R$
* 3.3 Remark.
The property of a relation being weakening-closed can be viewed as an order-
compatibility condition, especially if one thinks of a relation $R\colon
X\looparrowright Y$ in this poset-enriched context as specifying that certain elements of $X$
are less than or equal to certain elements of $Y$. If one follows the
terminology of 2-category theory, as the authors of [14] do, this property
would be referred to by saying that $R$ is a _two-sided discrete fibration_.
Since we have generally chosen to not really stress the 2-categorical
viewpoint in this paper, we have accordingly adopted the above terminology
which is inspired from logic.
For any given object $X\in\mathcal{E}$, there is a weakening-closed relation
$I_{X}\colon X\looparrowright X$ given by the comma $I_{X}\coloneqq
1_{X}/1_{X}$. Then it is easy to see that a relation $R\colon X\looparrowright
Y$ is weakening-closed if and only if we have $R=I_{Y}RI_{X}$ in
$\mathrm{Rel}(\mathcal{E})$. In particular, the relations $I_{X}$ act as
identity elements for composition of weakening-closed relations. Thus, we can
define a bicategory $\mathrm{Rel}_{w}(\mathcal{E})$ where the objects are
again those of $\mathcal{E}$ but now the morphisms are the weakening-closed
relations.
* 3.4 Remark.
Perhaps we should note here that, although every morphism of
$\mathrm{Rel}_{w}(\mathcal{E})$ is also a morphism in
$\mathrm{Rel}(\mathcal{E})$ and composition in both categories is the same,
this is not a functorial inclusion
$\mathrm{Rel}_{w}(\mathcal{E})\hookrightarrow\mathrm{Rel}(\mathcal{E})$
because identity morphisms are not preserved.
Now to any given morphism $f\colon X\to Y\in\mathcal{E}$ we can canonically
associate two weakening-closed relations $f_{*}\colon X\looparrowright Y$ and
$f^{*}\colon Y\looparrowright X$ via the following commas: $f_{*}\coloneqq
f/1_{Y}$ and $f^{*}\coloneqq 1_{Y}/f$. We sometimes call $f_{*}$ and $f^{*}$
the _hypergraph_ and _hypograph_ of $f$ respectively. In terms of generalized
elements $x\colon A\to X$ and $y\colon A\to Y$ in $\mathcal{E}$ we have
$(x,y)\in_{A}f_{*}\iff fx\leq y$ and $(y,x)\in_{A}f^{*}\iff y\leq fx$.
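On finite chains these generalized-element descriptions translate directly into set comprehensions. A sketch (the map $f$ and carriers are our own example):

```python
# Illustrative only: hypergraph and hypograph of a monotone map between
# finite chains, read off from the descriptions
#   (x, y) in f_*  iff  f(x) <= y      (y, x) in f^*  iff  y <= f(x).

X = [0, 1, 2]
Y = [0, 1, 2, 3]
f = lambda x: x + 1                     # a monotone map X -> Y

f_lower = {(x, y) for x in X for y in Y if f(x) <= y}   # f_* = f/1_Y
f_upper = {(y, x) for y in Y for x in X if y <= f(x)}   # f^* = 1_Y/f

assert (0, 1) in f_lower and (0, 0) not in f_lower
assert (1, 0) in f_upper and (2, 0) not in f_upper
```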
The following are then easy to see, for example by arguing with generalized
elements.
* 3.5 Lemma.
Let $\mathcal{E}$ be a regular category. Then for any $f\colon X\to Y$ and
$g\colon Y\to Z$ in $\mathcal{E}$ we have
1. (1)
$(gf)_{*}=g_{*}f_{*}$.
2. (2)
$(gf)^{*}=f^{*}g^{*}$.
It is also easy to see that, for any $f,g\colon X\to Y\in\mathcal{E}$, we have
equivalences
$f\leq g\iff g_{*}\subseteq f_{*}\iff f^{*}\subseteq g^{*}$
We can thus define two fully order-faithful functors
$(-)_{*}\colon\mathcal{E}^{co}\hookrightarrow\mathrm{Rel}_{w}(\mathcal{E})$
and
$(-)^{*}\colon\mathcal{E}^{op}\hookrightarrow\mathrm{Rel}_{w}(\mathcal{E})$,
where $\mathcal{E}^{co}$ denotes the “order-dual” category, i.e. the category
obtained from $\mathcal{E}$ by reversing the order on morphisms. Similarly,
arguing with generalized elements we easily deduce the following.
* 3.6 Lemma.
For any $f\colon X\to Y$ in a regular category $\mathcal{E}$ we have
$f^{*}f_{*}=f/f$ as relations in $\mathcal{E}$.
In particular, we see that $f\colon X\to Y$ is an $\mathsf{ff}$-morphism in
$\mathcal{E}$ if and only if $f^{*}f_{*}=I_{X}$ in
$\mathrm{Rel}_{w}(\mathcal{E})$. Similarly, a pair of morphisms
$Y\xleftarrow{f}X\xrightarrow{g}Z$ is jointly $\mathsf{ff}$
precisely when $f^{*}f_{*}\cap g^{*}g_{*}=I_{X}$.
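Lemma 3.6 and the resulting $\mathsf{ff}$-criterion are easy to confirm on a small example. The following sketch (ours, with a map that deliberately fails to be an embedding) computes $f^{*}f_{*}$ and compares it with $f/f$ and with $I_{X}$:

```python
# Illustrative check of Lemma 3.6 on a finite example: f^* f_* equals the
# kernel congruence f/f = {(x, x') : f(x) <= f(x')}, and f is ff exactly
# when this composite collapses to the identity relation I_X.

def comp(S, R):
    return frozenset((a, c) for (a, b) in R for (b2, c) in S if b == b2)

X = [0, 1, 2]
Y = [0, 1]
f = lambda x: min(x, 1)                 # monotone but not an embedding

f_lower = frozenset((x, y) for x in X for y in Y if f(x) <= y)   # f_*
f_upper = frozenset((y, x) for y in Y for x in X if y <= f(x))   # f^*

kernel = frozenset((x, x2) for x in X for x2 in X if f(x) <= f(x2))  # f/f
I_X = frozenset((x, x2) for x in X for x2 in X if x <= x2)

assert comp(f_upper, f_lower) == kernel   # Lemma 3.6
assert comp(f_upper, f_lower) != I_X      # f identifies 1 and 2, so not ff
```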
Now just as the graph of every morphism $f\colon X\to Y$ induces an adjunction
$f\dashv f^{\circ}$ in $\rm{Rel}(\mathcal{E})$, so do the hypergraph and
hypograph of that morphism form an adjunction in the bicategory
$\rm{Rel}_{w}(\mathcal{E})$.
* 3.7 Lemma.
For any $f\colon X\to Y$ in a regular category $\mathcal{E}$, there is an
adjunction $f_{*}\dashv f^{*}$ in $\mathrm{Rel}_{w}(\mathcal{E})$.
###### Proof.
We saw above that $f^{*}f_{*}$ is precisely the kernel congruence of $f$, so
clearly we have $I_{X}\subseteq f^{*}f_{*}$. To form the composition
$f_{*}f^{*}$ we consider the diagram below, where the top square is a pullback
and the other two are commas.
$\begin{array}{ccccc}
Q & \xrightarrow{q_{2}} & f_{*} & \xrightarrow{r_{2}} & Y\\
{\scriptstyle q_{1}}\big\downarrow & & {\scriptstyle r_{1}}\big\downarrow & {\scriptstyle\leq} & \big\|\\
f^{*} & \xrightarrow{s_{2}} & X & \xrightarrow{f} & Y\\
{\scriptstyle s_{1}}\big\downarrow & {\scriptstyle\leq} & \big\downarrow{\scriptstyle f} & &\\
Y & = & Y & &
\end{array}$
Now by definition we have that $f_{*}f^{*}$ is given by the image of $\langle
s_{1}q_{1},r_{2}q_{2}\rangle$. But $s_{1}q_{1}\leq fs_{2}q_{1}=fr_{1}q_{2}\leq
r_{2}q_{2}$, so that $\langle s_{1}q_{1},r_{2}q_{2}\rangle$ factors through
$I_{Y}=1_{Y}/1_{Y}$. This yields the inclusion $f_{*}f^{*}\subseteq I_{Y}$. ∎
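The two adjunction inequalities of Lemma 3.7 can be confirmed exhaustively for small posets. The following sketch (our illustration) checks them for every monotone endomap of the three-element chain:

```python
# Illustrative check of the adjunction inequalities
#   I_X ⊆ f^* f_*   and   f_* f^* ⊆ I_Y
# for every monotone endomap of the chain {0 < 1 < 2}.

from itertools import product

X = Y = [0, 1, 2]

def comp(S, R):
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

I_X = {(a, b) for a in X for b in X if a <= b}
I_Y = {(a, b) for a in Y for b in Y if a <= b}

for vals in product(Y, repeat=len(X)):
    if any(vals[i] > vals[i + 1] for i in range(len(X) - 1)):
        continue                                 # keep only monotone maps
    f = dict(zip(X, vals))
    f_lower = {(x, y) for x in X for y in Y if f[x] <= y}   # f_*
    f_upper = {(y, x) for y in Y for x in X if y <= f[x]}   # f^*
    assert I_X <= comp(f_upper, f_lower)         # unit of f_* -| f^*
    assert comp(f_lower, f_upper) <= I_Y         # counit of f_* -| f^*
```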
In other words, every morphism $f\colon X\to Y$ in a regular category
$\mathcal{E}$ is a map in $\mathrm{Rel}_{w}(\mathcal{E})$ via its hypergraph
$f_{*}$. Our first result in this paper is that in any regular category
$\mathcal{E}$ this is now indeed a complete characterization of morphisms,
i.e. every map in $\rm{Rel}_{w}(\mathcal{E})$ is of the form $f_{*}$ for a
(necessarily unique) morphism $f\colon X\to Y\in\mathcal{E}$.
* 3.8 Theorem.
If $\phi\colon X\looparrowright Y\in\rm{Rel}_{w}(\mathcal{E})$ has a right
adjoint in $\rm{Rel}_{w}(\mathcal{E})$, then there exists a (necessarily
unique) morphism $f\colon X\to Y\in\mathcal{E}$ such that $\phi=f_{*}$.
###### Proof.
Let $\psi\colon Y\looparrowright X\in\mathrm{Rel}_{w}(\mathcal{E})$ be the
right adjoint, so that we have $I_{X}\subseteq\psi\phi$ and $\phi\psi\subseteq
I_{Y}$. Suppose also that $\phi$ and $\psi$ are represented respectively by
the $\mathsf{ff}$-morphisms $\langle\phi_{0},\phi_{1}\rangle\colon T\rightarrowtail
X\times Y$ and $\langle\psi_{0},\psi_{1}\rangle\colon
T^{\prime}\rightarrowtail Y\times X$. We next form the pullback square below,
so that
$\langle\phi_{0}u,\phi_{1}u\rangle=\langle\psi_{1}u^{\prime},\psi_{0}u^{\prime}\rangle\colon
S\rightarrowtail X\times Y$ represents the relation
$\phi\cap\psi^{\circ}\colon X\looparrowright Y\in\rm{Rel}(\mathcal{E})$. We
first want to show that $\phi_{0}u=\psi_{1}u^{\prime}$ is an isomorphism, in
which case we will have $\phi\cap\psi^{\circ}=f$ for the morphism
$f\coloneqq\phi_{1}u(\phi_{0}u)^{-1}\colon X\to Y$.
$\begin{array}{ccc}
S & \xrightarrow{u} & T\\
{\scriptstyle u^{\prime}}\big\downarrow & & \big\downarrow{\scriptstyle\langle\phi_{0},\phi_{1}\rangle}\\
T^{\prime} & \xrightarrow{\langle\psi_{1},\psi_{0}\rangle} & X\times Y
\end{array}$
First of all, since $I_{X}\subseteq\psi\phi$, we have
$(1_{X},1_{X})\in_{X}\psi\phi$ and hence there exist an effective epimorphism
$e\colon P\twoheadrightarrow X$ and a $y\colon P\to Y$ such that
$(e,y)\in_{P}\phi$ and $(y,e)\in_{P}\psi$. Then we have
$(e,y)\in_{P}\phi\cap\psi^{\circ}$ and so there exists a $v\colon P\to S$ such
that $\langle\phi_{0}u,\phi_{1}u\rangle v=\langle e,y\rangle$. In particular,
since $\phi_{0}uv=e$ is an effective epimorphism, we deduce that $\phi_{0}u$
is an effective epimorphism as well.
Now it suffices to show that $\phi_{0}u$ is also an $\mathsf{ff}$-morphism. So
let $a,b\colon A\to S$ be such that $\phi_{0}ua\leq\phi_{0}ub$. Then
$(\phi_{0}ua,\phi_{0}ub)\in_{A}I_{X}$ and hence
$(\phi_{0}ua,\phi_{0}ub)\in_{A}\psi\phi$, so there is an effective epimorphism
$e\colon P\twoheadrightarrow A$ and a $z\colon P\to Y$ such that
$(\phi_{0}uae,z)\in_{P}\phi$ and $(z,\phi_{0}ube)\in_{P}\psi$. Since we also
clearly have
$(\phi_{1}uae,\phi_{0}uae)=(\psi_{0}u^{\prime}ae,\psi_{1}u^{\prime}ae)\in_{P}\psi,$
we get $(\phi_{1}uae,z)\in_{P}\phi\psi$, which implies
$(\phi_{1}uae,z)\in_{P}I_{Y}$, i.e. that $\phi_{1}uae\leq z$.
Similarly, since $(\phi_{0}ube,\phi_{1}ube)\in_{P}\phi$ and
$(z,\phi_{0}ube)\in\psi$, we consequently have that
$(z,\phi_{1}ube)\in_{P}\phi\psi$ and hence $(z,\phi_{1}ube)\in_{P}I_{Y}$,
which is to say that $z\leq\phi_{1}ube$. We now have $\phi_{1}uae\leq
z\leq\phi_{1}ube$, hence $\phi_{1}uae\leq\phi_{1}ube$, which in turn implies
that $\phi_{1}ua\leq\phi_{1}ub$, because $e$ is an order-epimorphism. Since
also $\phi_{0}ua\leq\phi_{0}ub$, we obtain $a\leq b$ because
$\langle\phi_{0}u,\phi_{1}u\rangle$ is an $\mathsf{ff}$-morphism.
We now claim that $\phi=f_{*}$. To see this, observe first that
$f_{*}=I_{Y}f=I_{Y}(\phi\cap\psi^{\circ})\subseteq I_{Y}\phi=\phi$, where the
last equality follows because $\phi$ is weakening-closed. Thus,
$f_{*}\subseteq\phi$. But by an application of the modular law 3 in
$\mathrm{Rel}(\mathcal{E})$ we also have $\psi
f=\psi(\phi\cap\psi^{\circ})\supseteq\psi\phi\cap\Delta_{X}\supseteq
I_{X}\cap\Delta_{X}=\Delta_{X}$ and so that $\phi\psi f\supseteq\phi$. Then
$f_{*}=I_{Y}f\supseteq\phi\psi f\supseteq\phi$ and so we conclude that
$f_{*}=\phi$.
Finally, for the uniqueness claim let us assume that $f,g\colon X\to Y$ are
such that $f_{*}=g_{*}$. By an earlier observation we know that the inclusion
$f_{*}\subseteq g_{*}$ is equivalent to the inequality $g\leq f$. Then we have
both $f\leq g$ and $g\leq f$, whence $f=g$. ∎
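In the concrete case $\mathcal{E}=\mathsf{Pos}$ with finite posets, Theorem 3.8 has a very tangible reading: a left adjoint weakening-closed relation $\phi$ is of the form $f_{*}$, and $f$ can be read back as $f(x)=\min\{y:(x,y)\in\phi\}$. A sketch (our own example data):

```python
# Illustrative reading of Theorem 3.8 in E = Pos (finite): starting from
# phi = f_*, the map f is recovered as f(x) = least y with (x, y) in phi.

X = [0, 1, 2]
Y = [0, 1, 2, 3]
f = lambda x: 2 * x if x < 2 else 3             # some monotone map X -> Y

phi = {(x, y) for x in X for y in Y if f(x) <= y}      # phi = f_*

recovered = {x: min(y for (x2, y) in phi if x2 == x) for x in X}
assert all(recovered[x] == f(x) for x in X)
```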
Next, let us comment here on how the calculus of relations in a regular
category $\mathcal{E}$ can be used to express various limit properties
therein. For example, the statement that a pair of morphisms
$X\xleftarrow{f}Z\xrightarrow{g}Y$ represents a given relation
$R\colon X\looparrowright Y$ is equivalent to the following two equalities
between relations:
1. (1)
$R=gf^{\circ}$.
2. (2)
$f^{*}f_{*}\cap g^{*}g_{*}=I_{Z}$.
Note here that we cannot replace condition (1.) by $R=g_{*}f^{*}$, even if $R$
is weakening-closed. Similarly, we cannot replace (2.) by $f^{\circ}f\cap
g^{\circ}g=\Delta_{Z}$, because the latter means that $(f,g)$ are only jointly
monomorphic instead of jointly order-monomorphic.
Based on the above observation, now consider the statement that a commutative
square
$\begin{array}{ccc}
P & \xrightarrow{p_{1}} & Y\\
{\scriptstyle p_{0}}\big\downarrow & & \big\downarrow{\scriptstyle g}\\
X & \xrightarrow{f} & Z
\end{array}$
is a pullback. It is easy to see that this is equivalent to the following pair
of equalities:
1. (1)
$g^{\circ}f=p_{1}p_{0}^{\circ}$.
2. (2)
$p_{1}^{*}p_{1*}\cap p_{0}^{*}p_{0*}=I_{P}$.
Similarly, the statement that the square
$\begin{array}{ccc}
P & \xrightarrow{p_{1}} & Y\\
{\scriptstyle p_{0}}\big\downarrow & {\scriptstyle\leq} & \big\downarrow{\scriptstyle g}\\
X & \xrightarrow{f} & Z
\end{array}$
is a comma is equivalent to:
1. (1)
$g^{*}f_{*}=p_{1}p_{0}^{\circ}$.
2. (2)
$p_{1}^{*}p_{1*}\cap p_{0}^{*}p_{0*}=I_{P}$.
Now we turn to discussing exactness for $\mathsf{Pos}$-categories. First, let
us recall the definition of congruence relation from [14]. This can be seen as
the ordered analogue of equivalence relations in an ordinary category. It is
also a special case of a more general notion of congruence for 2-categories.
* 3.9 Definition.
Let $X$ be an object of the regular category $\mathcal{E}$. A _congruence_ on
$X$ is a relation $E\colon X\looparrowright X\in\mathrm{Rel}_{w}(\mathcal{E})$
which is reflexive and transitive.
We say that the congruence $E$ is _effective_ if there exists a morphism
$f\colon X\to Y\in\mathcal{E}$ such that $E=f/f$.
Equivalently, we can say that $E\colon X\looparrowright X$ is a congruence if
it is a transitive relation such that $E\supseteq I_{X}$. In essence, a
congruence is a pre-order relation on $X$ which is compatible with the
canonical order relation on $X$, the compatibility being expressed by the
requirement that it is weakening-closed. We think of a congruence as imposing additional
inequalities on $X$, just as an equivalence relation corresponds to the idea
of imposing new equalities.
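To keep a concrete example in mind (a sketch for $\mathcal{E}=\mathsf{Pos}$, using the standard identification of weakening-closed relations there): a congruence on a poset $(X,\leq)$ amounts to a pre-order $\sqsubseteq$ on the underlying set of $X$ containing $\leq$. Every such congruence is effective, witnessed by the quotient map $p\colon X\to X/{\sim}$, where $x\sim y$ iff $x\sqsubseteq y$ and $y\sqsubseteq x$ and the classes are ordered by $[x]\leq[y]$ iff $x\sqsubseteq y$; one then checks directly that
$E=p/p,\qquad\text{i.e.}\qquad x\sqsubseteq y\iff p(x)\leq p(y).$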
With the notion of congruence in hand, we are led naturally to the notion of
(Barr-)exactness for $\mathsf{Pos}$-categories as considered by Kurz-Velebil
in [14].
* 3.10 Definition.
A regular category $\mathcal{E}$ is called _exact_ if every congruence in
$\mathcal{E}$ is effective.
* 3.11 Example.
1. (1)
$\mathsf{Pos}$ is exact and so is any presheaf category
$[\mathcal{C},\mathsf{Pos}]$ for any small category $\mathcal{C}$.
2. (2)
The locally discrete category $\mathsf{Set}$ is an example of a category which
is regular but not exact (see [14]).
3. (3)
Generalizing the case of $\mathsf{Pos}$, any _variety of ordered algebras_,
always in the sense of Bloom & Wright [5], is an exact category. Particular
examples here are furnished by the categories $\mathsf{OrdSGrp}$,
$\mathsf{OrdMon}$, $\mathsf{OrdMon_{0}}$ and $\mathsf{OrdCMon}$. On the other
hand, the quasivariety $\mathsf{OrdCMon_{t.f.}}$ is not exact.
4. (4)
There are also examples of exact categories which are not varieties, but are
monadic over $\mathsf{Pos}$. One such, which will appear in more detail in
section 5, is the category $\mathsf{Nach}$ of Nachbin spaces. Another is given
by the category $S$-$\mathsf{Pos}$ for any ordered monoid $S$.
A congruence in a regular category $\mathcal{E}$, being a reflexive and
transitive relation, is an idempotent when considered as a morphism in either
of $\mathrm{Rel}(\mathcal{E})$ and $\mathrm{Rel}_{w}(\mathcal{E})$. When it is
moreover effective, then it actually is a _split_ idempotent in the latter
bicategory. Indeed, if $E=f/f$ for some $f\colon X\to Y$, then we can assume
that $f$ is actually the coinserter of $E$ by 2. Then we have
$f^{*}f_{*}=f/f=E$ and $f_{*}f^{*}=I_{Y}$. The next proposition shows that
this splitting actually characterizes effective congruences. This is analogous
to a familiar fact for ordinary regular categories, where an equivalence
relation splits in the bicategory of relations if and only if it occurs as a
kernel pair.
* 3.12 Proposition.
Let $E\colon X\looparrowright X$ be a congruence in the regular category
$\mathcal{E}$. Then $E$ is effective if and only if it splits as an idempotent
in $\mathrm{Rel}_{w}(\mathcal{E})$.
###### Proof.
Suppose that $E=\psi\phi$ for some $\phi\colon X\looparrowright Y$ and
$\psi\colon Y\looparrowright X$ in $\mathrm{Rel}_{w}(\mathcal{E})$ with
$\phi\psi=I_{Y}$. Since $E$ is reflexive and weakening-closed, we have
$\psi\phi=E\supseteq I_{X}$. Since also trivially $\phi\psi\subseteq I_{Y}$,
we have $\phi\dashv\psi$ in $\mathrm{Rel}_{w}(\mathcal{E})$ and so by 3 we
have that $\phi=f_{*}$ and $\psi=f^{*}$ for some morphism $f\colon X\to
Y\in\mathcal{E}$. Thus, we obtain $E=\psi\phi=f^{*}f_{*}=f/f$. ∎
In the following section we will embark on the goal of constructing the exact
completion $\mathcal{E}_{ex/reg}$ of a regular category $\mathcal{E}$ by a
process of splitting idempotents in a bicategory of relations and then taking
maps in the resulting bicategory. This is essentially an attempt to mimic the
construction of the ordinary exact completion of a regular category, as
initially described by Lawvere in [15] and then with more details for example
in [21], [9], [10], motivated by the combination of the two results contained
in 3 and 3.
However, even for a reader unfamiliar with the construction of the splitting
of idempotents, the next proposition can serve as a heuristic for arriving at
the definition of morphisms in the completion $\mathcal{E}_{ex/reg}$. It will
also be of practical use a little later.
Recall here that in a regular category we say
$\textstyle{E\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{e_{0}}$$\scriptstyle{e_{1}}$$\textstyle{X\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{p}$$\textstyle{P}$
is an _exact sequence_ if $(e_{0},e_{1})$ is the kernel congruence of $p$ and
also $p$ is the coinserter of $(e_{0},e_{1})$. In terms of the calculus of
relations, exactness of the sequence is equivalent to the equalities
$p^{*}p_{*}=E$ and $p_{*}p^{*}=I_{P}$.
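For instance, in $\mathsf{Pos}$ (a sketch): any monotone surjection $p\colon X\to P$ gives rise to an exact sequence by taking $E\coloneqq p/p$, that is,
$x\mathrel{E}x'\iff p(x)\leq p(x').$
The equality $p^{*}p_{*}=E$ then holds by construction, while $p_{*}p^{*}=I_{P}$ follows from surjectivity: given $y\leq y'$ in $P$, one may pick $x$ with $p(x)=y$, so that $y\leq p(x)\leq y'$.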
* 3.13 Proposition.
Let
$\textstyle{E\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{X\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{p}$$\textstyle{P}$
and
$\textstyle{F\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{Y\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{q}$$\textstyle{Q}$
be exact sequences in the regular category $\mathcal{E}$. Then there is an
order-reversing bijection between the following:
1. (1)
Morphisms $P\to Q$ in $\mathcal{E}$.
2. (2)
Relations $R_{*}\colon X\looparrowright Y\in\mathrm{Rel}_{w}(\mathcal{E})$ for
which there exists another relation $R^{*}\colon Y\looparrowright
X\in\mathrm{Rel}_{w}(\mathcal{E})$ such that the following are satisfied:
* –
$FR_{*}E=R_{*}$ and $ER^{*}F=R^{*}$.
* –
$R^{*}R_{*}\supseteq E$ and $R_{*}R^{*}\subseteq F$.
###### Proof.
Consider first a morphism $r\colon P\to Q\in\mathcal{E}$. Set $R_{*}\coloneqq
q^{*}r_{*}p_{*}$ and $R^{*}\coloneqq p^{*}r^{*}q_{*}$. We have
$FR_{*}E=q^{*}q_{*}q^{*}r_{*}p_{*}p^{*}p_{*}=q^{*}r_{*}p_{*}=R_{*}$ and
similarly $ER^{*}F=p^{*}p_{*}p^{*}r^{*}q_{*}q^{*}q_{*}=p^{*}r^{*}q_{*}=R^{*}$.
In addition,
$R^{*}R_{*}=p^{*}r^{*}q_{*}q^{*}r_{*}p_{*}=p^{*}r^{*}I_{Q}r_{*}p_{*}=p^{*}r^{*}r_{*}p_{*}\supseteq
p^{*}p_{*}=E$ and $R_{*}R^{*}=q^{*}r_{*}p_{*}p^{*}r^{*}q_{*}\subseteq
q^{*}r_{*}r^{*}q_{*}\subseteq q^{*}q_{*}=F$.
Conversely, consider a relation $R_{*}$ as in (2.). Set $\phi\coloneqq
q_{*}R_{*}p^{*}$ and then also $\psi\coloneqq p_{*}R^{*}q^{*}$. Then for these
weakening-closed relations we have
$\psi\phi=p_{*}R^{*}q^{*}q_{*}R_{*}p^{*}\supseteq
p_{*}R^{*}R_{*}p^{*}\supseteq p_{*}Ep^{*}=I_{P}$
$\phi\psi=q_{*}R_{*}p^{*}p_{*}R^{*}q^{*}=q_{*}R_{*}ER^{*}q^{*}=q_{*}R_{*}R^{*}q^{*}\subseteq
q_{*}Fq^{*}=I_{Q}$
Thus, by 3 we have $\phi=r_{*}$ for a (unique) morphism $r\colon P\to Q$.
The fact that these two assignments are inverse to each other is expressed by
the two equalities $q_{*}q^{*}r_{*}p_{*}p^{*}=r_{*}$ and
$q^{*}q_{*}R_{*}p^{*}p_{*}=FR_{*}E=R_{*}$. ∎
Before ending this section, let us make a couple more observations on the
assignment $R_{*}\mapsto r$ from the last proposition. Note that, as the
notation suggests and the above proof exhibits, the relation $R_{*}$
corresponds to the hypergraph $r_{*}$ of a morphism $r\colon P\to Q$.
Specifically, $r$ is uniquely determined from $R_{*}$ by the equality
$r_{*}=q_{*}R_{*}p^{*}$. We would like to also record here a relation $R$ that
in some sense corresponds directly to the (graph of the) morphism $r$.
Given $R_{*}\colon X\looparrowright Y$ as in 3, set $R\coloneqq
R_{*}\cap(R^{*})^{\circ}\colon X\looparrowright Y$. Then we claim that
$r=qRp^{\circ}$. To see this, observe first that $qRp^{\circ}$ is also a map
in $\mathrm{Rel}(\mathcal{E})$ because
$(qRp^{\circ})^{\circ}qRp^{\circ}=pR^{\circ}q^{\circ}qRp^{\circ}\supseteq
pR^{\circ}Rp^{\circ}\supseteq p(E\cap E^{\circ})p^{\circ}=\Delta_{P}$
$(qRp^{\circ})(qRp^{\circ})^{\circ}=qRp^{\circ}pR^{\circ}q^{\circ}=qR(E\cap
E^{\circ})R^{\circ}q^{\circ}=qRR^{\circ}q^{\circ}\subseteq q(F\cap
F^{\circ})q^{\circ}=\Delta_{Q}$
Here we used the fact that in an exact sequence
$\textstyle{E\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{X\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{p}$$\textstyle{P}$
we have $E\cap E^{\circ}=p^{*}p_{*}\cap(p^{*}p_{*})^{\circ}=p^{\circ}p$, i.e.
$E\cap E^{\circ}$ is precisely the kernel pair of $p$. In addition, the
inclusions $R^{\circ}R\supseteq E\cap E^{\circ}$ and $RR^{\circ}\subseteq
F\cap F^{\circ}$ were used, the first of which is clear and the second follows
by the modular law (see the next section for details).
Now, finally, we have
$r=r_{*}\cap(r^{*})^{\circ}=q_{*}R_{*}p^{*}\cap(q^{*})^{\circ}(R^{*})^{\circ}(p_{*})^{\circ}\supseteq
q(R_{*}\cap(R^{*})^{\circ})p^{\circ}=qRp^{\circ}.$
But since we have an inclusion between two maps in the allegory
$\mathrm{Rel}(\mathcal{E})$, these maps must be equal. Hence, we conclude that
$r=qRp^{\circ}$.
## 4\. Exact Completion
In this section we come to the heart of this paper, which is the construction
of the exact completion of a regular $\mathsf{Pos}$-category $\mathcal{E}$.
The main idea is to try to perform a construction that mimics one of the ways
in which one can define the exact completion of an ordinary regular category
$\mathcal{C}$. Let us thus quickly recall this construction, as originally
suggested by Lawvere in [15]. We will also very much be drawing inspiration
from the presentation of Succi-Cruciani[21].
Given a regular ordinary category $\mathcal{C}$, one first performs a
_splitting of idempotents_ in the bicategory of relations
$\mathrm{Rel}(\mathcal{C})$. More precisely, one splits the class of
equivalence relations, which are indeed idempotent as morphisms in
$\mathrm{Rel}(\mathcal{C})$. This step yields a bicategory which, a
posteriori, is identified as the bicategory of relations
$\mathrm{Rel}(\mathcal{C}_{ex/reg})$ of the completion. The second step is
then to identify the completion itself, which can be done by taking the
category of maps in the bicategory produced by the first step.
We note that the idea for this construction of the exact completion can be
traced back to the following two observations, valid in any ordinary regular
category $\mathcal{C}$:
1. (1)
An equivalence relation $E$ on an object $X\in\mathcal{C}$ is effective if and
only if it splits as an idempotent in $\mathrm{Rel}(\mathcal{C})$.
2. (2)
The morphisms $f\colon X\to Y\in\mathcal{C}$ are precisely the maps in
$\mathrm{Rel}(\mathcal{C})$.
Accordingly, our hope to perform a $\mathsf{Pos}$-enriched version of this
construction hinges on the validity of enriched versions of the two
observations above, as contained respectively in 3 and 3. Hence, in our
context, we should first look at $\mathrm{Rel}_{w}(\mathcal{E})$, split the
idempotents therein which are congruences in $\mathcal{E}$, then finally take
the category of maps in the resulting bicategory.
The fact that we need to work with $\mathrm{Rel}_{w}(\mathcal{E})$ rather than
$\mathrm{Rel}(\mathcal{E})$ already presents some issues. As we have mentioned
earlier, $\mathrm{Rel}(\mathcal{E})$ has the structure of an allegory and it
is this fact that facilitates many computations. Furthermore, the theory of
allegories is well developed and in fact there is a precise correspondence
between ordinary regular and exact categories on the one hand and certain
classes of allegories on the other (see e.g. [10],[9]). On the contrary, the
structure of $\mathrm{Rel}_{w}(\mathcal{E})$ is not as rich. Fundamentally,
the process of taking the opposite of a relation does not restrict to
$\mathrm{Rel}_{w}(\mathcal{E})$. Thus, in our quest to construct the
$\mathsf{Pos}$-enriched exact completion as indicated above, we cannot simply
rely on the general theory of allegories. While this creates a complication,
at the same time it is in some sense to be expected. Indeed, allegories are in
some aspects too simple for our enriched context. For example, the only
inclusions between maps in an allegory are equalities and it is precisely this
fact that does not allow us to recover the order relation on morphisms from
$\mathrm{Rel}(\mathcal{E})$.
Motivated by the above, we embark towards our goal by first defining a
category $Q_{w}(\mathcal{E})$ by splitting the idempotents in
$\mathrm{Rel}_{w}(\mathcal{E})$ which are congruences in $\mathcal{E}$.
Explicitly, $Q_{w}(\mathcal{E})$ is defined as follows:
* •
Objects of $Q_{w}(\mathcal{E})$ are pairs $(X,E)$, where $X$ is an object of
$\mathcal{E}$ and $E\colon X\looparrowright X$ is a congruence relation in
$\mathcal{E}$.
* •
Morphisms $\Phi\colon(X,E)\to(Y,F)$ in $Q_{w}(\mathcal{E})$ are (weakening-
closed) relations $\Phi\colon X\looparrowright Y$ in $\mathcal{E}$ such that
$\Phi E=\Phi=F\Phi$ or equivalently $\Phi=F\Phi E$.
Composition in $Q_{w}(\mathcal{E})$ is composition of relations in
$\mathcal{E}$, while the identity morphism on $(X,E)\in Q_{w}(\mathcal{E})$ is
the relation $E$ itself. The morphisms are locally ordered by inclusion and it
is clear that $Q_{w}(\mathcal{E})$ also has binary infima of morphisms given
by intersection of relations in $\mathcal{E}$. Then we define a category
$\mathcal{E}_{ex/reg}$ by taking the maps in $Q_{w}(\mathcal{E})$.
* 4.1 Definition.
$\mathcal{E}_{ex/reg}\coloneqq\mathrm{Map}(Q_{w}(\mathcal{E}))$.
Explicitly, $\mathcal{E}_{ex/reg}$ has the same objects as
$Q_{w}(\mathcal{E})$, while its morphisms are those
$R_{*}\colon(X,E)\to(Y,F)\in Q_{w}(\mathcal{E})$ for which there exists an
$R^{*}\colon(Y,F)\to(X,E)\in Q_{w}(\mathcal{E})$ such that
$R^{*}R_{*}\supseteq E$ and $R_{*}R^{*}\subseteq F$.
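To keep a concrete model in mind (a sketch for $\mathcal{E}=\mathsf{Pos}$): an object $(X,E)$ may be thought of as a presentation of the quotient poset $X/E$, obtained by identifying $x$ and $y$ whenever $x\mathrel{E}y$ and $y\mathrel{E}x$ and ordering the classes by $E$. A morphism $R_{*}\colon(X,E)\to(Y,F)$ then plays the role of the hypergraph of a monotone map $r\colon X/E\to Y/F$, in the sense that
$x\mathrel{R_{*}}y\iff r([x])\leq[y],$
with the right adjoint $R^{*}$ given by $y\mathrel{R^{*}}x\iff[y]\leq r([x])$.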
Note that $\mathcal{E}_{ex/reg}$ is not merely an ordinary category, but can
be made into a legitimate $\mathsf{Pos}$-category by defining for any
$R_{*},S_{*}\colon(X,E)\to(Y,F)$ in $\mathcal{E}_{ex/reg}$
$R_{*}\leq S_{*}\vcentcolon\Leftrightarrow R_{*}\supseteq S_{*}$
the inclusion on the right-hand side being that of relations in $\mathcal{E}$.
This is clearly a partial order relation on Homs that is preserved by
composition. Observe furthermore that we can equivalently define the order
$R_{*}\leq S_{*}$ by requiring $R^{*}\subseteq S^{*}$ for the right adjoints.
We also have a canonical functor
$\Gamma\colon\mathcal{E}\to\mathcal{E}_{ex/reg}$ defined by mapping an object
$X\in\mathcal{E}$ to $(X,I_{X})$ and a morphism $f\colon X\to Y\in\mathcal{E}$
to its hypergraph $f_{*}\colon X\looparrowright Y$ considered as a morphism
$(X,I_{X})\to(Y,I_{Y})$. Note that $\Gamma$ is order-preserving and reflecting
by definition of the order on morphisms in $\mathcal{E}_{ex/reg}$.
* 4.2 Remark.
We will consistently denote morphisms of $\mathcal{E}_{ex/reg}$ by a capital
letter with a lower asterisk and their right adjoint in $Q_{w}(\mathcal{E})$
by the same letter with an upper asterisk. This notation represents our
intuition that $R_{*}$ is the hypergraph of the morphism $R$ and in some
sense we are working towards making this a precise statement. In particular,
we will denote the identity morphism $(X,E)\to(X,E)$ by $1_{(X,E)*}$, where as
relations in $\mathcal{E}$ we have $1_{(X,E)*}=E$ and $1_{(X,E)}^{*}=E$.
Our goal now in this section is to show that the category
$\mathcal{E}_{ex/reg}$ as defined above is the _exact completion_ of
$\mathcal{E}$ as a regular category. The category $Q_{w}(\mathcal{E})$ will be
seen in the end to be precisely the category of weakening-closed relations in
$\mathcal{E}_{ex/reg}$. Accordingly, the proofs of the various statements
about $\mathcal{E}_{ex/reg}$ later on in this section will be motivated by the
description of the various limit and exactness properties in terms of the
calculus of relations in a regular category. However, we know that such a
description cannot be achieved with only weakening-closed relations. Thus, it
will be convenient to construct also at the same time what will turn out to be
the bicategory of all relations in $\mathcal{E}_{ex/reg}$.
This leads us to define another bicategory $Q(\mathcal{E})$ as follows:
* •
Objects of $Q(\mathcal{E})$ are again pairs $(X,E)$, where $E\colon
X\looparrowright X$ is a congruence relation in $\mathcal{E}$.
* •
Morphisms $(X,E)\to(Y,F)$ in $Q(\mathcal{E})$ are relations $\Phi\colon
X\looparrowright Y$ in $\mathcal{E}$ such that $\Phi(E\cap
E^{\circ})=\Phi=(F\cap F^{\circ})\Phi$ or equivalently $(F\cap
F^{\circ})\Phi(E\cap E^{\circ})=\Phi$.
The composition of morphisms is that of relations in $\mathcal{E}$ while the
identity on $(X,E)\in Q(\mathcal{E})$ is $E\cap E^{\circ}$.
In other words, $Q(\mathcal{E})$ is the locally ordered bicategory obtained
from $\mathrm{Rel}(\mathcal{E})$ by splitting those idempotents of the form
$E\cap E^{\circ}$ for a congruence $E$ in $\mathcal{E}$. Notice that
idempotents of this form are equivalence relations in $\mathcal{E}$ and in
particular are symmetric. It then follows (see for example Theorem 3.3.4 in
[10]) that $Q(\mathcal{E})$ is an allegory, where the opposite of a morphism
is given by taking the opposite relation in $\mathcal{E}$.
Now we make some important observations regarding the connection between
morphisms of $Q_{w}(\mathcal{E})$ and $Q(\mathcal{E})$ and between maps in
these two bicategories. If the reader keeps in mind the intuition that these
two categories should respectively be $\mathrm{Rel}_{w}(\mathcal{E}_{ex/reg})$
and $\mathrm{Rel}(\mathcal{E}_{ex/reg})$, then these observations are to be
expected.
First, note that every morphism $\Phi\colon(X,E)\to(Y,F)$ in
$Q_{w}(\mathcal{E})$ can also be considered as a morphism in $Q(\mathcal{E})$,
since
$\Phi=\Delta_{Y}\Phi\Delta_{X}\subseteq(F\cap F^{\circ})\Phi(E\cap
E^{\circ})\subseteq F\Phi E=\Phi$
However, it is important to note that this assignment is not functorial, as
it does not preserve identity morphisms.
Second, to any map in $Q_{w}(\mathcal{E})$, i.e. to any morphism of
$\mathcal{E}_{ex/reg}$, we can associate in a natural way a map in
$Q(\mathcal{E})$ as follows.
* 4.3 Lemma.
Consider any $R_{*}\colon(X,E)\to(Y,F)\in\mathcal{E}_{ex/reg}$ and define the
relation $\mathfrak{gr}(R_{*})\coloneqq R_{*}\cap(R^{*})^{\circ}\colon
X\looparrowright Y$ in $\mathcal{E}$. Then $\mathfrak{gr}(R_{*})$ is a map
$(X,E)\to(Y,F)$ in $Q(\mathcal{E})$.
###### Proof.
First, it is easy to see that $(F\cap F^{\circ})\mathfrak{gr}(R_{*})(E\cap
E^{\circ})=\mathfrak{gr}(R_{*})$. Indeed, we have
$\mathfrak{gr}(R_{*})=\Delta_{Y}\mathfrak{gr}(R_{*})\Delta_{X}\subseteq(F\cap
F^{\circ})\mathfrak{gr}(R_{*})(E\cap
E^{\circ})\subseteq(F\mathfrak{gr}(R_{*})E)\cap(F^{\circ}\mathfrak{gr}(R_{*})E^{\circ})$
$\subseteq FR_{*}E\cap
F^{\circ}(R^{*})^{\circ}E^{\circ}=R_{*}\cap(R^{*})^{\circ}=\mathfrak{gr}(R_{*})$
Second, to see that $\mathfrak{gr}(R_{*})$ is indeed a map in $Q(\mathcal{E})$
we argue as follows:
$\mathfrak{gr}(R_{*})\mathfrak{gr}(R_{*})^{\circ}=(R_{*}\cap(R^{*})^{\circ})((R_{*})^{\circ}\cap
R^{*})\subseteq R_{*}R^{*}\cap(R^{*})^{\circ}(R_{*})^{\circ}\subseteq F\cap
F^{\circ}$ $\displaystyle\mathfrak{gr}(R_{*})^{\circ}\mathfrak{gr}(R_{*})$
$\displaystyle=$ $\displaystyle(R_{*}^{\circ}\cap
R^{*})(R_{*}\cap(R^{*})^{\circ})=(R_{*}^{\circ}\cap
R^{*})((R_{*}\cap(R^{*})^{\circ})\cap R_{*})$ $\displaystyle=$
$\displaystyle(R_{*}^{\circ}\cap R^{*})((R_{*}^{\circ}\cap
R^{*})^{\circ}(E\cap E^{\circ})\cap R_{*})$
$\displaystyle\stackrel{{\scriptstyle\ref{Modular Law}}}{{\supseteq}}$
$\displaystyle(E\cap E^{\circ})\cap(R_{*}^{\circ}\cap R^{*})R_{*}=(E\cap
E^{\circ})\cap(E^{\circ}R_{*}^{\circ}\cap R^{*})R_{*}$
$\displaystyle\stackrel{{\scriptstyle\ref{Modular Law*}}}{{\supseteq}}$
$\displaystyle(E\cap E^{\circ})\cap E^{\circ}\cap R^{*}R_{*}\supseteq(E\cap
E^{\circ})\cap E^{\circ}\cap E$ $\displaystyle=$ $\displaystyle E\cap
E^{\circ}$
where for establishing the first and second inclusions we used the modular law
in $\mathrm{Rel}(\mathcal{E})$. ∎
Given a morphism $R_{*}\colon(X,E)\to(Y,F)\in\mathcal{E}_{ex/reg}$, we call
the relation $\mathfrak{gr}(R_{*})$ defined above the _graph_ of $R_{*}$.
Observe furthermore that $\mathfrak{gr}(R_{*})$ satisfies the following two
basic equalities:
$F\circ\mathfrak{gr}(R_{*})=R_{*}$ $\mathfrak{gr}(R_{*})^{\circ}\circ F=R^{*}$
Indeed, on the one hand, clearly $F\circ\mathfrak{gr}(R_{*})\subseteq
FR_{*}=R_{*}$. On the other hand, we have
$\displaystyle F\circ\mathfrak{gr}(R_{*})$ $\displaystyle\supseteq
R_{*}R^{*}\mathfrak{gr}(R_{*})=R_{*}R^{*}(R_{*}\cap(R^{*})^{\circ})$
$\displaystyle\stackrel{{\scriptstyle\ref{Modular
Law}}}{{\supseteq}}R_{*}(R^{*}R_{*}\cap E^{\circ})\supseteq R_{*}(E\cap
E^{\circ})\supseteq R_{*}$
The second equality follows in a similar fashion.
In fact, these two equalities characterize $\mathfrak{gr}(R_{*})$ in the
following sense: if the morphism $\Phi\colon(X,E)\to(Y,F)$ is a map in
$Q(\mathcal{E})$ with $F\Phi=R_{*}$ and $\Phi^{\circ}F=R^{*}$, then
$\Phi=\mathfrak{gr}(R_{*})$. Indeed,
$\Phi\subseteq(F\cap F^{\circ})\Phi\subseteq F\Phi\cap
F^{\circ}\Phi=F\Phi\cap(\Phi^{\circ}F)^{\circ}=R_{*}\cap(R^{*})^{\circ}=\mathfrak{gr}(R_{*})$
and so $\Phi=\mathfrak{gr}(R_{*})$ because $Q(\mathcal{E})$ is an allegory and
hence the inclusion of maps is discrete.
Finally, the assignment $R_{*}\mapsto\mathfrak{gr}(R_{*})$ is functorial.
Given $R_{*}\colon(X,E)\to(Y,F)$ and $S_{*}\colon(Y,F)\to(Z,G)$ we have
$G(\mathfrak{gr}(S_{*})\mathfrak{gr}(R_{*}))=(G\mathfrak{gr}(S_{*}))\mathfrak{gr}(R_{*})=S_{*}\mathfrak{gr}(R_{*})=S_{*}F\mathfrak{gr}(R_{*})=S_{*}R_{*}$
$(\mathfrak{gr}(S_{*})\mathfrak{gr}(R_{*}))^{\circ}G=\mathfrak{gr}(R_{*})^{\circ}\mathfrak{gr}(S_{*})^{\circ}G=\mathfrak{gr}(R_{*})^{\circ}S^{*}=\mathfrak{gr}(R_{*})^{\circ}FS^{*}=R^{*}S^{*},$
so we conclude by the above observation that
$\mathfrak{gr}(S_{*})\mathfrak{gr}(R_{*})=\mathfrak{gr}(S_{*}R_{*})$. Also,
$\mathfrak{gr}(1_{(X,E)*})=1_{(X,E)*}\cap(1_{(X,E)}^{*})^{\circ}=E\cap
E^{\circ}$ and the latter is an identity morphism in
$\mathrm{Map}(Q(\mathcal{E}))$.
* 4.4 Remark.
(on notation) Henceforth, to ease the notation, we shall often denote the
graph $\mathfrak{gr}(R_{*})$ of a morphism
$R_{*}\colon(X,E)\to(Y,E)\in\mathcal{E}_{ex/reg}$ simply by $R$, i.e. we will
just drop the lower asterisk. This should not present much risk of confusion,
as $R_{*}$ will always have appeared before $R$.
We now begin our work on proving that $\mathcal{E}_{ex/reg}$ is indeed the
desired exact completion of the regular category $\mathcal{E}$. This is broken
down into a sequence of more bite-sized pieces. First, we establish a
fundamental result asserting the existence of certain canonical
representations for morphisms in the bicategories $Q_{w}(\mathcal{E})$ and
$Q(\mathcal{E})$. Following this, we repeatedly employ this representation to
establish step by step that $\mathcal{E}_{ex/reg}$ has the desired finite
limit and exactness properties making it an exact category. The arguments here
are essentially motivated by the description of these properties in terms of
the calculus of relations. Finally, we show that we have indeed constructed
the exact completion by establishing the relevant universal property.
To begin with, we record a small lemma concerning jointly order-monomorphic
pairs of morphisms in $\mathcal{E}_{ex/reg}$.
* 4.5 Lemma.
If
$\textstyle{(Y,F)}$$\textstyle{(X,E)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{R_{*}}$$\scriptstyle{S_{*}}$$\textstyle{(Z,G)}$
is a pair of morphisms in the category $\mathcal{E}_{ex/reg}$ such that
$R^{*}R_{*}\cap S^{*}S_{*}=E$, then this pair is jointly order-monomorphic in
$\mathcal{E}_{ex/reg}$.
###### Proof.
Assume that $R^{*}R_{*}\cap S^{*}S_{*}=E$ and let
$H_{*},K_{*}\colon(A,T)\to(X,E)\in\mathcal{E}_{ex/reg}$ be such that
$R_{*}H_{*}\leq R_{*}K_{*}$ and $S_{*}H_{*}\leq S_{*}K_{*}$ i.e.
$R_{*}H_{*}\supseteq R_{*}K_{*}$ and $S_{*}H_{*}\supseteq S_{*}K_{*}$. Then we
have
$\displaystyle K_{*}H^{*}$ $\displaystyle=$ $\displaystyle
EK_{*}H^{*}=(R^{*}R_{*}\cap S^{*}S_{*})K_{*}H^{*}\subseteq
R^{*}R_{*}K_{*}H^{*}\cap S^{*}S_{*}K_{*}H^{*}$ $\displaystyle\subseteq$
$\displaystyle R^{*}R_{*}H_{*}H^{*}\cap S^{*}S_{*}H_{*}H^{*}\subseteq
R^{*}R_{*}\cap S^{*}S_{*},$
hence $K_{*}H^{*}\subseteq E$. But recall that by definition of morphisms we
have an adjunction $H_{*}\dashv H^{*}$ in $Q_{w}(\mathcal{E})$. Thus,
$K_{*}H^{*}\subseteq E\iff K_{*}\subseteq H_{*}$, so that we obtain $H_{*}\leq
K_{*}$. ∎
The result that follows will be of central importance in establishing all the
desired properties of $\mathcal{E}_{ex/reg}$ throughout the remainder of this
section. It says that any morphism of $Q(\mathcal{E})$ (hence also of
$Q_{w}(\mathcal{E})$) can be expressed in a suitable way via morphisms of
$\mathcal{E}_{ex/reg}$. This should be compared to the fact that in any
regular category $\mathcal{C}$ every relation $R\colon X\looparrowright Y$ can
be written as $R=gf^{\circ}$, where $f\colon Z\to X$ and $g\colon Z\to Y$ are
morphisms in $\mathcal{C}$ with $f^{*}f_{*}\cap g^{*}g_{*}=I_{Z}$. Actually,
our goal is to show in the end that it is _precisely_ this fact, since we will
prove that $Q(\mathcal{E})$ is exactly the bicategory of relations of
$\mathcal{E}_{ex/reg}$, while $Q_{w}(\mathcal{E})$ will be that of weakening-
closed relations.
* 4.6 Proposition.
Let $\Phi\colon(X,E)\to(Y,F)$ be a morphism of $Q(\mathcal{E})$. Then there
exists a pair of morphisms
$\textstyle{(X,E)}$$\textstyle{(Z,T)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{R_{0*}}$$\scriptstyle{R_{1*}}$$\textstyle{(Y,F)}$
in $\mathcal{E}_{ex/reg}$ such that
1. (1)
$\Phi=R_{1}R_{0}^{\circ}$.
2. (2)
$R_{0}^{*}R_{0*}\cap R_{1}^{*}R_{1*}=T$.
Moreover, any pair $(R_{0*},R_{1*})$ with these two properties has the
following universal property:
Given any morphisms
$\textstyle{(X,E)}$$\textstyle{(C,G)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{S_{0*}}$$\scriptstyle{S_{1*}}$$\textstyle{(Y,F)}$
in $\mathcal{E}_{ex/reg}$ such that $S_{1}S_{0}^{\circ}\subseteq\Phi$, there
exists a unique morphism $H_{*}\colon(C,G)\to(Z,T)\in\mathcal{E}_{ex/reg}$
with $R_{0*}H_{*}=S_{0*}$ and $R_{1*}H_{*}=S_{1*}$.
###### Proof.
Suppose that $\Phi$ is represented by the $\mathsf{ff}$-morphism $\langle
r_{0},r_{1}\rangle\colon Z\rightarrowtail X\times Y$ in $\mathcal{E}$. We set
$T\coloneqq r_{0}^{\circ}Er_{0}\cap r_{1}^{\circ}Fr_{1}$ and $R_{0*}\coloneqq
Er_{0}$, $R_{1*}\coloneqq Fr_{1}$. The relations $r_{0}^{\circ}Er_{0}$ and
$r_{1}^{\circ}Fr_{1}$ are inverse images along $r_{0},r_{1}$ respectively of
the congruences $E,F$, hence are themselves congruences. Thus, so is their
intersection $T$.
Also, we claim that $R_{0*}$ and $R_{1*}$ as defined above are morphisms
$\textstyle{(X,E)}$$\textstyle{(Z,T)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{R_{0*}}$$\scriptstyle{R_{1*}}$$\textstyle{(Y,F)}$
in $\mathcal{E}_{ex/reg}$. Let’s check this for $R_{0*}$: first, we have
$ER_{0*}=EEr_{0}=Er_{0}=R_{0*}$. Furthermore,
$R_{0*}T=Er_{0}(r_{0}^{\circ}Er_{0}\cap r_{1}^{\circ}Fr_{1})\subseteq
Er_{0}r_{0}^{\circ}Er_{0}\subseteq E\Delta_{X}Er_{0}=EEr_{0}=Er_{0}=R_{0*},$
hence $R_{0*}T=R_{0*}$. So $R_{0*}$ is at least a morphism in
$Q_{w}(\mathcal{E})$. To show that it is actually a map, define
$R_{0}^{*}\coloneqq r_{0}^{\circ}E$. We then similarly have
$TR_{0}^{*}=(r_{0}^{\circ}Er_{0}\cap
r_{1}^{\circ}Fr_{1})r_{0}^{\circ}E\subseteq
r_{0}^{\circ}Er_{0}r_{0}^{\circ}E\subseteq
r_{0}^{\circ}EE=r_{0}^{\circ}E=R_{0}^{*}\implies TR_{0}^{*}=R_{0}^{*}$
and $R_{0}^{*}E=r_{0}^{\circ}EE=r_{0}^{\circ}E=R_{0}^{*}$. And finally,
$R_{0}^{*}R_{0*}=r_{0}^{\circ}EEr_{0}=r_{0}^{\circ}Er_{0}\supseteq T$ and
$R_{0*}R_{0}^{*}=Er_{0}r_{0}^{\circ}E\subseteq EE=E$.
Similarly, it follows that $R_{1*}$ is a morphism
$(Z,T)\to(Y,F)\in\mathcal{E}_{ex/reg}$ whose right adjoint in
$Q_{w}(\mathcal{E})$ is $R_{1}^{*}\coloneqq r_{1}^{\circ}F$.
Just by the definitions, we have
$R_{0}^{*}R_{0*}\cap R_{1}^{*}R_{1*}=r_{0}^{\circ}EEr_{0}\cap
r_{1}^{\circ}FFr_{1}=r_{0}^{\circ}Er_{0}\cap r_{1}^{\circ}Fr_{1}=T.$
In addition,
$R_{0}=R_{0*}\cap(R_{0}^{*})^{\circ}=Er_{0}\cap(r_{0}^{\circ}E)^{\circ}=Er_{0}\cap
E^{\circ}r_{0}\stackrel{{\scriptstyle\ref{Map distributivity}}}{{=}}(E\cap
E^{\circ})r_{0}$ and similarly $R_{1}=(F\cap F^{\circ})r_{1}$, so that
$R_{1}R_{0}^{\circ}=(F\cap F^{\circ})r_{1}r_{0}^{\circ}(E\cap
E^{\circ})=(F\cap F^{\circ})\Phi(E\cap E^{\circ})=\Phi.$
where the last equality holds because $\Phi\in Q(\mathcal{E})$.
Next, we have to prove the stated universality property, so let
$\textstyle{(X,E)}$$\textstyle{(C,G)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{S_{0*}}$$\scriptstyle{S_{1*}}$$\textstyle{(Y,F)}$
in $\mathcal{E}_{ex/reg}$ be such that $S_{1}S_{0}^{\circ}\subseteq\Phi$.
Set $H_{*}\coloneqq R_{0}^{*}S_{0*}\cap R_{1}^{*}S_{1*}$ and $H^{*}\coloneqq
S_{0}^{*}R_{0*}\cap S_{1}^{*}R_{1*}$. Since both $H_{*}$ and $H^{*}$ are binary
intersections of compositions of morphisms in $Q_{w}(\mathcal{E})$, it is
immediate that they are both themselves morphisms in that bicategory. We will
show that $H_{*}$ is moreover a map with $H^{*}$ as its right adjoint.
First of all, we have that
$H_{*}H^{*}\subseteq R_{0}^{*}S_{0*}S_{0}^{*}R_{0*}\cap
R_{1}^{*}S_{1*}S_{1}^{*}R_{1*}\subseteq R_{0}^{*}ER_{0*}\cap
R_{1}^{*}FR_{1*}=R_{0}^{*}R_{0*}\cap R_{1}^{*}R_{1*}=T$
For the other inclusion we argue as follows:
$\displaystyle H^{*}H_{*}$ $\displaystyle=$ $\displaystyle(S_{0}^{*}R_{0*}\cap
S_{1}^{*}R_{1*})(R_{0}^{*}S_{0*}\cap
R_{1}^{*}S_{1*})\supseteq(S_{0}^{\circ}R_{0}\cap
S_{1}^{\circ}R_{1})(R_{0}^{\circ}S_{0}\cap R_{1}^{\circ}S_{1})$
$\displaystyle=$ $\displaystyle(S_{0}^{\circ}R_{0}\cap
S_{1}^{\circ}R_{1})((R_{0}^{\circ}S_{0}\cap R_{1}^{\circ}S_{1})(G\cap
G^{\circ})\cap R_{0}^{\circ}S_{0})$
$\displaystyle\stackrel{{\scriptstyle\ref{Modular Law}}}{{\supseteq}}$
$\displaystyle(G\cap G^{\circ})\cap(S_{0}^{\circ}R_{0}\cap
S_{1}^{\circ}R_{1})R_{0}^{\circ}S_{0}$ $\displaystyle=$ $\displaystyle(G\cap
G^{\circ})\cap((G\cap G^{\circ})S_{0}^{\circ}R_{0}\cap
S_{1}^{\circ}R_{1})R_{0}^{\circ}S_{0}$
$\displaystyle\stackrel{{\scriptstyle\ref{Modular Law*}}}{{\supseteq}}$
$\displaystyle(G\cap G^{\circ})\cap(G\cap G^{\circ})\cap
S_{1}^{\circ}R_{1}R_{0}^{\circ}S_{0}$
But now, using adjunction properties in $Q(\mathcal{E})$ together with the
assumption that $S_{1}S_{0}^{\circ}\subseteq\Phi$, we observe that
$S_{1}S_{0}^{\circ}\subseteq\Phi=R_{1}R_{0}^{\circ}\implies S_{1}\subseteq
R_{1}R_{0}^{\circ}S_{0}\implies G\cap G^{\circ}\subseteq
S_{1}^{\circ}R_{1}R_{0}^{\circ}S_{0},$
from which we deduce $H^{*}H_{*}\supseteq(G\cap G^{\circ})\cap
S_{1}^{\circ}R_{1}R_{0}^{\circ}S_{0}=G\cap G^{\circ}$. Then, finally,
$H^{*}H_{*}G\supseteq(G\cap G^{\circ})G$, which implies $H^{*}H_{*}\supseteq
G$.
Thus, $H_{*}$ is indeed a morphism in $\mathcal{E}_{ex/reg}$. Furthermore,
$R_{0*}H_{*}=R_{0*}(R_{0}^{*}S_{0*}\cap R_{1}^{*}S_{1*})\subseteq
R_{0*}R_{0}^{*}S_{0*}\subseteq ES_{0*}=S_{0*}$
$R_{0*}H_{*}=R_{0*}(R_{0}^{*}S_{0*}\cap R_{1}^{*}S_{1*})\supseteq
R_{0}(R_{0}^{\circ}S_{0}\cap
R_{1}^{\circ}S_{1})\stackrel{{\scriptstyle\ref{Modular
Law}}}{{\supseteq}}S_{0}\cap R_{0}R_{1}^{\circ}S_{1}=S_{0}$
where for the last inclusion we made use of the fact that
$S_{1}S_{0}^{\circ}\subseteq\Phi=R_{1}R_{0}^{\circ}$ implies $S_{0}\subseteq
R_{0}R_{1}^{\circ}S_{1}$ by the adjunction property in $Q(\mathcal{E})$. Now
we have $ER_{0*}H_{*}\supseteq ES_{0}$, which implies $R_{0*}H_{*}\supseteq
S_{0*}$. Thus, we deduce that $R_{0*}H_{*}=S_{0*}$ and similarly one obtains
$R_{1*}H_{*}=S_{1*}$. Finally, uniqueness is clear because, as we’ve proved in
the previous lemma, the equality $R_{0}^{*}R_{0*}\cap R_{1}^{*}R_{1*}=T$
implies that $R_{0*},R_{1*}$ are jointly order-monomorphic in
$\mathcal{E}_{ex/reg}$. ∎
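The calculation above leans repeatedly on the allegory-style calculus of relations: composition, converse, the modular law, and the adjunction between a map and its converse. As an informal sanity check (not part of the formal development), these laws can be verified for relations between small finite sets; all sets, relations and the random seed below are arbitrary illustrative choices, with juxtaposition $RS$ modelled by `comp(R, S)`, applying $S$ first.

```python
import itertools
import random

def comp(R, S):
    """Composite written RS in the text: apply S first, then R."""
    return {(a, c) for (a, b) in S for (b2, c) in R if b2 == b}

def conv(R):
    """Converse R° of a relation."""
    return {(b, a) for (a, b) in R}

def random_rel(dom, cod, rng):
    return {p for p in itertools.product(dom, cod) if rng.random() < 0.4}

A, B, C = range(3), range(3), range(3)
rng = random.Random(0)
for _ in range(500):
    S = random_rel(A, B, rng)   # S : A -> B
    R = random_rel(B, C, rng)   # R : B -> C
    T = random_rel(A, C, rng)   # T : A -> C
    # modular law: RS ∩ T ⊆ (R ∩ TS°)S
    assert comp(R, S) & T <= comp(R & comp(T, conv(S)), S)
    # adjunction for a map f: fU ⊆ V  ⟺  U ⊆ f°V
    f = {(b, b % 2) for b in B}          # graph of a function B -> {0, 1}
    U = random_rel(A, B, rng)
    V = random_rel(A, range(2), rng)
    assert (comp(f, U) <= V) == (U <= comp(conv(f), V))
```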
A pair of morphisms $(R_{0*},R_{1*})$ in $\mathcal{E}_{ex/reg}$ with the
properties established in the proposition above will be called a _tabulation_ of $\Phi\colon(X,E)\to(Y,F)\in
Q(\mathcal{E})$. This terminology is borrowed from the theory of allegories.
Note incidentally that the latter theory could have been directly applied to
$Q(\mathcal{E})$, but this approach would not work for us: we cannot identify
the morphisms of the would-be completion
$\mathcal{E}_{ex/reg}$ as maps in this allegory, but only in
$Q_{w}(\mathcal{E})$. Thus, we need both of these bicategories at the same
time: $Q_{w}(\mathcal{E})$ to identify the morphisms of $\mathcal{E}_{ex/reg}$
and $Q(\mathcal{E})$ to express the existence of tabulations and perform
calculations more freely.
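In the motivating example $\mathrm{Rel}(\mathbf{Set})$, a tabulation of a relation $\Phi\subseteq X\times Y$ is nothing but the span of projections out of the graph of $\Phi$. The following informal sketch, with an arbitrary illustrative relation, checks the two defining equations, $\Phi=R_{1}R_{0}^{\circ}$ and $R_{0}^{\circ}R_{0}\cap R_{1}^{\circ}R_{1}=\Delta$:

```python
def comp(R, S):
    """Composite written RS in the text: apply S first, then R."""
    return {(a, c) for (a, b) in S for (b2, c) in R if b2 == b}

def conv(R):
    return {(b, a) for (a, b) in R}

X, Y = range(4), range(3)
Phi = {(x, y) for x in X for y in Y if (x + y) % 2 == 0}  # illustrative relation

Z = sorted(Phi)                  # tabulating object: the graph of Phi
r0 = {(z, z[0]) for z in Z}      # first projection  Z -> X
r1 = {(z, z[1]) for z in Z}      # second projection Z -> Y

assert comp(r1, conv(r0)) == Phi                        # Phi = r1 r0°
diag = {(z, z) for z in Z}
assert comp(conv(r0), r0) & comp(conv(r1), r1) == diag  # jointly monic span
```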
Furthermore, since as we’ve noted earlier every morphism
$\Phi\colon(X,E)\to(Y,F)\in Q_{w}(\mathcal{E})$ can be considered as a
morphism in $Q(\mathcal{E})$, we also have tabulations for morphisms of
$Q_{w}(\mathcal{E})$. In this case the inclusion
$S_{1}S_{0}^{\circ}\subseteq\Phi$ in the universal property of tabulations is
equivalent to $S_{1*}S_{0}^{*}\subseteq\Phi$. Indeed,
$S_{1}S_{0}^{\circ}\subseteq\Phi$ implies $FS_{1}S_{0}^{\circ}E\subseteq F\Phi
E$, which is to say $S_{1*}S_{0}^{*}\subseteq\Phi$. Similarly, for the
tabulation $(R_{0*},R_{1*})$ we have
$\Phi=R_{1}R_{0}^{\circ}=R_{1*}R_{0}^{*}$.
As we’ve already claimed, the existence of tabulations will be a fundamental
tool for establishing results about $\mathcal{E}_{ex/reg}$. As a first example
of this, we can now completely characterize what it means for a pair of
morphisms to be jointly order-monomorphic in $\mathcal{E}_{ex/reg}$.
* 4.7 Corollary.
A pair of morphisms
$(Y,F)\xleftarrow{R_{*}}(X,E)\xrightarrow{S_{*}}(Z,G)$
is jointly order-monomorphic in $\mathcal{E}_{ex/reg}$ if and only if
$R^{*}R_{*}\cap S^{*}S_{*}=E$.
###### Proof.
We’ve already proven sufficiency earlier, so assume conversely that
$R_{*},S_{*}$ are jointly order-monomorphic. By the proposition above, there
exists a tabulation
$(X,E)\xleftarrow{U_{*}}(A,T)\xrightarrow{V_{*}}(X,E)$
for the morphism $R^{*}R_{*}\cap S^{*}S_{*}\in Q_{w}(\mathcal{E})$. Then we
have
$V_{*}U^{*}=R^{*}R_{*}\cap S^{*}S_{*}\subseteq R^{*}R_{*}\implies
R_{*}V_{*}U^{*}\subseteq R_{*}\implies R_{*}V_{*}\subseteq R_{*}U_{*}$
and similarly we obtain $S_{*}V_{*}\subseteq S_{*}U_{*}$. Thus, we have
$R_{*}V_{*}\geq R_{*}U_{*}$ and $S_{*}V_{*}\geq S_{*}U_{*}$, and hence, since
$R_{*},S_{*}$ are jointly order-monomorphic,
$V_{*}\geq U_{*}$, which is to say that $V_{*}\subseteq U_{*}$. But now we
have
$R^{*}R_{*}\cap S^{*}S_{*}=V_{*}U^{*}\subseteq U_{*}U^{*}\subseteq E$
Since the reverse inclusion always holds, we conclude that $R^{*}R_{*}\cap
S^{*}S_{*}=E$. ∎
Before beginning to prove the basic finite limit and exactness properties of
$\mathcal{E}_{ex/reg}$, we will need some information on $\mathsf{ff}$- and
$\mathsf{so}$-morphisms therein.
* 4.8 Lemma.
Let $R_{*}\colon(X,E)\to(Y,F)$ be a morphism in $\mathcal{E}_{ex/reg}$. Then:
1. (1)
$R_{*}$ is an $\mathsf{ff}$-morphism in $\mathcal{E}_{ex/reg}$ if and only if
$R^{*}R_{*}=E$.
2. (2)
$R_{*}$ is an iso if and only if $R^{*}R_{*}=E$ and $R_{*}R^{*}=F$.
3. (3)
If $R_{*}R^{*}=F$, then $R_{*}$ is an $\mathsf{so}$-morphism in
$\mathcal{E}_{ex/reg}$.
###### Proof.
1. (1)
This follows immediately from the previous corollary.
2. (2)
Clear.
3. (3)
Consider a commutative square in $\mathcal{E}_{ex/reg}$ as below, where
$M_{*}$ is an $\mathsf{ff}$-morphism.
The square consists of $R_{*}\colon(X,E)\to(Y,F)$, $V_{*}\colon(X,E)\to(Z,G)$, $S_{*}\colon(Y,F)\to(W,H)$ and $M_{*}\colon(Z,G)\to(W,H)$, with $M_{*}V_{*}=S_{*}R_{*}$.
By (1), we know that $M^{*}M_{*}=G$. Set $P_{*}\coloneqq V_{*}R^{*}$. First,
we claim that $P_{*}$ is a morphism $(Y,F)\to(Z,G)$ in $\mathcal{E}_{ex/reg}$
with $P^{*}=R_{*}V^{*}$. Indeed, we have
$P^{*}P_{*}=R_{*}V^{*}V_{*}R^{*}\supseteq R_{*}R^{*}=F$. Observe also that
$P_{*}=M^{*}S_{*}$, because $M_{*}V_{*}=S_{*}R_{*}\implies
M^{*}M_{*}V_{*}=M^{*}S_{*}R_{*}\implies V_{*}=M^{*}S_{*}R_{*}\implies
V_{*}R^{*}=M^{*}S_{*}R_{*}R^{*}=M^{*}S_{*}$. Then we can argue that
$P_{*}P^{*}=M^{*}S_{*}R_{*}V^{*}=M^{*}M_{*}V_{*}V^{*}=V_{*}V^{*}\subseteq G$.
Finally, clearly $P_{*}R_{*}=M^{*}S_{*}R_{*}=M^{*}M_{*}V_{*}=V_{*}$ and also
$M_{*}P_{*}=M_{*}V_{*}R^{*}=S_{*}R_{*}R^{*}=S_{*}$.
∎
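For intuition, in the unordered case $\mathcal{E}=\mathbf{Set}$ the two equations of the lemma reduce to familiar facts about functions: $f^{\circ}f=\Delta$ says that $f$ is injective, while $ff^{\circ}=\Delta$ says that $f$ is surjective. A quick informal check with arbitrary illustrative functions:

```python
def comp(R, S):
    """Composite written RS in the text: apply S first, then R."""
    return {(a, c) for (a, b) in S for (b2, c) in R if b2 == b}

def conv(R):
    return {(b, a) for (a, b) in R}

def graph(f, dom):
    return {(x, f(x)) for x in dom}

X, Y = range(3), range(4)
inj = graph(lambda x: x + 1, X)       # injective but not surjective X -> Y
dX = {(x, x) for x in X}
dY = {(y, y) for y in Y}
assert comp(conv(inj), inj) == dX     # f°f = Δ_X : the "ff" equation holds
assert comp(inj, conv(inj)) < dY      # ff° is a proper subdiagonal: not "so"

surj = graph(lambda y: y % 2, Y)      # surjective Y -> {0, 1}
d2 = {(i, i) for i in range(2)}
assert comp(surj, conv(surj)) == d2   # pp° = Δ : the "so" equation holds
```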
* 4.9 Proposition.
$\mathcal{E}_{ex/reg}$ has finite limits and
$\Gamma\colon\mathcal{E}\to\mathcal{E}_{ex/reg}$ preserves them.
###### Proof.
Let’s first construct the inserter of a pair
$(X,E)\overset{R_{*}}{\underset{S_{*}}{\rightrightarrows}}(Y,F)$.
To this end, consider a tabulation
$(X,E)\xleftarrow{\Phi_{0*}}(A,T)\xrightarrow{\Phi_{1*}}(X,E)$
of $S^{*}R_{*}\cap(E\cap E^{\circ})$ as a morphism $(X,E)\to(X,E)$ in
$Q(\mathcal{E})$. Then we observe that
$\displaystyle\Phi_{1}$ $\displaystyle=$ $\displaystyle\Phi_{1}(T\cap
T^{\circ})=\Phi_{1}(\Phi_{0}^{\circ}\Phi_{0}\cap\Phi_{1}^{\circ}\Phi_{1})\subseteq\Phi_{1}\Phi_{0}^{\circ}\Phi_{0}=(S^{*}R_{*}\cap
E\cap E^{\circ})\Phi_{0}$ $\displaystyle\subseteq$ $\displaystyle(E\cap
E^{\circ})\Phi_{0}=\Phi_{0}$
and hence $\Phi_{1}=\Phi_{0}$, because the inclusion order on maps in
$Q(\mathcal{E})$ is discrete. So we have $\Phi_{1*}=\Phi_{0*}$, which we
henceforth denote simply by $\Phi_{*}$. To prove that
$\Phi_{*}\colon(A,T)\to(X,E)$ is the inserter of $(R_{*},S_{*})$ it now
suffices, due to the universal property of tabulations, to show that for every
$H_{*}\colon(Z,G)\to(X,E)$ we have $R_{*}H_{*}\leq S_{*}H_{*}$ if and only if
$HH^{\circ}\subseteq S^{*}R_{*}\cap E\cap E^{\circ}$. Indeed, we have
$\displaystyle R_{*}H_{*}\leq S_{*}H_{*}$ $\displaystyle\iff$ $\displaystyle
S_{*}H_{*}\subseteq R_{*}H_{*}\iff H_{*}\subseteq S^{*}R_{*}H_{*}\iff
H_{*}H^{*}\subseteq S^{*}R_{*}$ $\displaystyle\iff$ $\displaystyle
HH^{\circ}\subseteq S^{*}R_{*}\iff HH^{\circ}\subseteq S^{*}R_{*}\cap E\cap
E^{\circ}$
Next, let us construct the product of a pair of objects $(X,E)$ and $(Y,F)$ in
$\mathcal{E}_{ex/reg}$. Observe that the maximal relation $X\looparrowright Y$
given by the product is clearly a morphism $(X,E)\to(Y,F)$ in
$Q(\mathcal{E})$. Therefore, by the tabulation proposition above it has a tabulation. In fact, looking back
at the proof of the latter proposition we can easily see that the tabulation
thus constructed is given by the pair of morphisms
$(X,E)\xleftarrow{\Pi_{(X,E)*}}(X\times Y,E\times F)\xrightarrow{\Pi_{(Y,F)*}}(Y,F)$, where
$\Pi_{(X,E)*}=E\pi_{X}$ and $\Pi_{(Y,F)*}=F\pi_{Y}$. In any case, the
universal property of tabulations gives precisely the universal property of a
product diagram in $\mathcal{E}_{ex/reg}$.
Finally, it is easy to check that $\Gamma 1$ is a terminal object for
$\mathcal{E}_{ex/reg}$. ∎
Even though the above proposition tells us that $\mathcal{E}_{ex/reg}$
inherits all finite weighted limits from $\mathcal{E}$, we shall need some
more specific information as well, namely on the construction of comma squares
and pullbacks.
Consider any morphisms $R_{*}\colon(X,E)\to(Z,G)$ and
$S_{*}\colon(Y,F)\to(Z,G)$ in $\mathcal{E}_{ex/reg}$. First, we construct the
comma square below:
It consists of $P_{0*}\colon(W,T)\to(X,E)$ and $P_{1*}\colon(W,T)\to(Y,F)$, together with $R_{*}\colon(X,E)\to(Z,G)$, $S_{*}\colon(Y,F)\to(Z,G)$ and a $2$-cell $R_{*}P_{0*}\leq S_{*}P_{1*}$.
For this, we take $(P_{0*},P_{1*})$ to be a tabulation of
$S^{*}R_{*}\colon(X,E)\to(Y,F)\in Q_{w}(\mathcal{E})$. To prove that this
square is indeed a comma, it suffices to prove that, given any
$U_{*}\colon(A,H)\to(X,E)$ and $V_{*}\colon(A,H)\to(Y,F)$, we have
$V_{*}U^{*}\subseteq S^{*}R_{*}$ if and only if $R_{*}U_{*}\leq S_{*}V_{*}$.
But indeed, using properties of adjunctions we have equivalences
$R_{*}U_{*}\leq S_{*}V_{*}\iff S_{*}V_{*}\subseteq R_{*}U_{*}\iff
V_{*}\subseteq S^{*}R_{*}U_{*}\iff V_{*}U^{*}\subseteq S^{*}R_{*}$
For pullbacks let us take now $(P_{0*},P_{1*})$ to be a tabulation of the
relation $S^{\circ}R\in Q(\mathcal{E})$. Then we claim that the following
square is a pullback.
It consists of $P_{0*}\colon(W,T)\to(X,E)$ and $P_{1*}\colon(W,T)\to(Y,F)$, with $R_{*}P_{0*}=S_{*}P_{1*}$ into $(Z,G)$.
Indeed, in this case we have for any $U_{*}\colon(A,H)\to(X,E)$ and
$V_{*}\colon(A,H)\to(Y,F)$ that $R_{*}U_{*}=S_{*}V_{*}$ if and only if
$RU=SV$, which in the allegory $Q(\mathcal{E})$ is equivalent to the inclusion
$SV\subseteq RU$. By adjunction conditions, the latter is in turn equivalent
to $SVU^{\circ}\subseteq R$ and then to $VU^{\circ}\subseteq S^{\circ}R$.
Thus, the universal property of the pullback is identified with that of the
tabulation.
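In $\mathbf{Set}$ this construction is the usual one: for functions $r\colon X\to Z$ and $s\colon Y\to Z$, the set $\{(x,y):r(x)=s(y)\}$ with its two projections tabulates $s^{\circ}r$. An informal sketch with arbitrary illustrative functions:

```python
def comp(R, S):
    """Composite written RS in the text: apply S first, then R."""
    return {(a, c) for (a, b) in S for (b2, c) in R if b2 == b}

def conv(R):
    return {(b, a) for (a, b) in R}

X, Y, Z = range(4), range(5), range(3)
r = {(x, x % 3) for x in X}          # a function X -> Z (as a graph)
s = {(y, (2 * y) % 3) for y in Y}    # a function Y -> Z

fr, fs = dict(r), dict(s)
W = sorted((x, y) for x in X for y in Y if fr[x] == fs[y])  # set pullback
p0 = {(w, w[0]) for w in W}          # projection W -> X
p1 = {(w, w[1]) for w in W}          # projection W -> Y

assert comp(p1, conv(p0)) == comp(conv(s), r)   # p1 p0° = s° r
```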
Next, we prove that $\mathcal{E}_{ex/reg}$ admits the required factorization
system.
* 4.10 Proposition.
$\mathcal{E}_{ex/reg}$ has ($\mathsf{so}$,$\mathsf{ff}$)-factorizations.
###### Proof.
Consider a morphism $R_{*}\colon(X,E)\to(Y,F)\in\mathcal{E}_{ex/reg}$. Then
$RR^{\circ}\in Q(\mathcal{E})$ and so it admits a tabulation
$(Y,F)\xleftarrow{S_{0*}}(Z,G)\xrightarrow{S_{1*}}(Y,F)$.
Since tautologically $R_{*}$ is such that $RR^{\circ}\subseteq
S_{1}S^{\circ}_{0}$, there exists a unique
$Q_{*}\colon(X,E)\to(Z,G)\in\mathcal{E}_{ex/reg}$ such that
$S_{0*}Q_{*}=R_{*}=S_{1*}Q_{*}$.
Now observe that in $Q(\mathcal{E})$ we have $S_{1}\subseteq
S_{1}S_{0}^{\circ}S_{0}=RR^{\circ}S_{0}\subseteq S_{0}$ and so we deduce that
$S_{1}=S_{0}$ and hence $S_{1*}=S_{0*}$. We denote this morphism now simply by
$S_{*}$. Then we have $S^{*}S_{*}=G$ by the tabulation property and this tells
us that $S_{*}$ is an $\mathsf{ff}$-morphism. It suffices now to show that
$Q_{*}Q^{*}=G$, so that $Q_{*}$ will be an $\mathsf{so}$-morphism. For this we
argue as follows:
$\displaystyle SQQ^{\circ}S^{\circ}$ $\displaystyle=$ $\displaystyle
RR^{\circ}=SS^{\circ}\implies$ $\displaystyle S^{\circ}SQQ^{\circ}S^{\circ}S$
$\displaystyle=$ $\displaystyle S^{\circ}SS^{\circ}S\implies$
$\displaystyle(G\cap G^{\circ})QQ^{\circ}(G\cap G^{\circ})$ $\displaystyle=$
$\displaystyle G\cap G^{\circ}\implies$ $\displaystyle QQ^{\circ}$
$\displaystyle=$ $\displaystyle G\cap G^{\circ}$
Then $Q_{*}Q^{*}=GQQ^{\circ}G=G(G\cap G^{\circ})G=G$. ∎
* 4.11 Remark.
For a morphism $R_{*}\colon(X,E)\to(Y,F)\in\mathcal{E}_{ex/reg}$ we have
$R_{*}R^{*}=F$ if and only if $RR^{\circ}=F\cap F^{\circ}$. We showed the “if”
direction in the course of the above proof. For the converse, assume that
$R_{*}R^{*}=F$ and argue as follows:
$\displaystyle RR^{\circ}$ $\displaystyle=R(R^{\circ}(F\cap F^{\circ})\cap
R^{*})\stackrel{{\scriptstyle\ref{Modular Law}}}{{\supseteq}}(F\cap
F^{\circ})\cap RR^{*}=(F\cap F^{\circ})\cap(R_{*}\cap(R^{*})^{\circ})R^{*}$
$\displaystyle\stackrel{{\scriptstyle\ref{Modular Law*}}}{{\supseteq}}(F\cap
F^{\circ})\cap R_{*}R^{*}\cap F^{\circ}=(F\cap F^{\circ})\cap F\cap
F^{\circ}=F\cap F^{\circ}$
* 4.12 Corollary.
A morphism $R_{*}\colon(X,E)\to(Y,F)$ is an $\mathsf{so}$-morphism in
$\mathcal{E}_{ex/reg}$ if and only if $R_{*}R^{*}=F$, if and only if
$RR^{\circ}=F\cap F^{\circ}$.
###### Proof.
If $R_{*}R^{*}=F$, then we know that $R_{*}$ is an $\mathsf{so}$-morphism by
the lemma above. In addition, by the above remark we know that $R_{*}R^{*}=F$ is equivalent
to $RR^{\circ}=F\cap F^{\circ}$.
Now assume that $R_{*}$ is an $\mathsf{so}$-morphism. From the proof of the factorization proposition above we
know that $R_{*}$ can be factored as
$(X,E)\xrightarrow{Q_{*}}(Z,G)\xrightarrow{S_{*}}(Y,F)$, where
$S_{*}$ is an $\mathsf{ff}$-morphism and $Q_{*}$ satisfies $Q_{*}Q^{*}=G$.
Then $S_{*}$ is also an $\mathsf{so}$-morphism, since $R_{*}$ is such, and
hence must be an iso. It then follows immediately that $R_{*}$ also satisfies
$R_{*}R^{*}=F$. ∎
With this equational characterization of $\mathsf{so}$-morphisms in hand, we
are now in a position to prove their stability under pullback.
* 4.13 Proposition.
$\mathsf{so}$-morphisms are stable under pullback in $\mathcal{E}_{ex/reg}$.
###### Proof.
Consider the following pullback square in $\mathcal{E}_{ex/reg}$ where we
assume that $R_{*}$ is an $\mathsf{so}$-morphism, so that we have
$R_{*}R^{*}=G$ or equivalently $RR^{\circ}=G\cap G^{\circ}$.
The square consists of $P_{*}\colon(W,T)\to(X,E)$ and $Q_{*}\colon(W,T)\to(Y,F)$, with $R_{*}P_{*}=S_{*}Q_{*}$ into $(Z,G)$.
By the construction of pullbacks we know that $(P_{*},Q_{*})$ is a tabulation
of $S^{\circ}R$, so that $QP^{\circ}=S^{\circ}R$. Thus, we have
$\displaystyle QQ^{\circ}$ $\displaystyle=$ $\displaystyle Q(T\cap
T^{\circ})Q^{\circ}=Q(P^{\circ}P\cap
Q^{\circ}Q)Q^{\circ}\stackrel{{\scriptstyle\ref{Modular
Law}}}{{\supseteq}}(QP^{\circ}P\cap Q)Q^{\circ}$
$\displaystyle\stackrel{{\scriptstyle\ref{Modular Law*}}}{{\supseteq}}$
$\displaystyle QP^{\circ}PQ^{\circ}\cap(F\cap
F^{\circ})=S^{\circ}RR^{\circ}S\cap(F\cap F^{\circ})$ $\displaystyle=$
$\displaystyle S^{\circ}(G\cap G^{\circ})S\cap(F\cap
F^{\circ})=S^{\circ}S\cap(F\cap F^{\circ})=F\cap F^{\circ}$
Hence, $QQ^{\circ}=F\cap F^{\circ}$ or equivalently $Q_{*}Q^{*}=F$ and hence
$Q_{*}$ is an $\mathsf{so}$-morphism. ∎
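In $\mathbf{Set}$ the proposition specializes to the familiar fact that pulling back a surjection along any map yields a surjection; an informal check with arbitrary illustrative maps:

```python
X, Y, Z = range(6), range(4), range(3)
r = lambda x: x % 3      # a surjection X ->> Z
s = lambda y: y % 3      # an arbitrary map Y -> Z

# pullback of r along s, with its projection to Y
W = [(x, y) for x in X for y in Y if r(x) == s(y)]
q = {y for (_, y) in W}

assert q == set(Y)       # the pulled-back map W -> Y is again surjective
```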
Putting together what we have proved so far, we have the following.
* 4.14 Corollary.
$\mathcal{E}_{ex/reg}$ is a regular category and
$\Gamma\colon\mathcal{E}\to\mathcal{E}_{ex/reg}$ is a fully order-faithful
regular functor.
We next would like to prove that $\mathcal{E}_{ex/reg}$ is exact. To
accomplish this we first make good on promises made much earlier. Namely, we
identify $Q_{w}(\mathcal{E})$ as the bicategory of weakening-closed relations
in $\mathcal{E}_{ex/reg}$ and, before that, $Q(\mathcal{E})$ as the bicategory
of all relations in $\mathcal{E}_{ex/reg}$.
* 4.15 Proposition.
There is an equivalence $\mathrm{Rel}(\mathcal{E}_{ex/reg})\simeq
Q(\mathcal{E})$.
###### Proof.
We will define a functor
$\mathfrak{F}\colon\mathrm{Rel}(\mathcal{E}_{ex/reg})\to Q(\mathcal{E})$
by letting it be the identity on objects and mapping a relation represented by
any jointly order-monomorphic pair
$(X,E)\xleftarrow{R_{0*}}(Z,T)\xrightarrow{R_{1*}}(Y,F)$
in $\mathcal{E}_{ex/reg}$ to the morphism
$R_{1}R_{0}^{\circ}\colon(X,E)\to(Y,F)\in Q(\mathcal{E})$.
To show that this assignment is functorial, consider first the diagonal
relation on the object $(X,E)$ in $\mathrm{Rel}(\mathcal{E}_{ex/reg})$, i.e.
the relation represented by the jointly order-monomorphic pair
$(X,E)\xleftarrow{1_{(X,E)*}}(X,E)\xrightarrow{1_{(X,E)*}}(X,E)$.
Then the image of this relation under $\mathfrak{F}$ is
$1_{(X,E)}1_{(X,E)}^{\circ}=(E\cap E^{\circ})(E\cap E^{\circ})^{\circ}=E\cap
E^{\circ}$ and so $\mathfrak{F}$ preserves identity morphisms.
Next, we consider two relations $\mathscr{R},\mathscr{S}$ in
$\mathcal{E}_{ex/reg}$, say represented respectively by the jointly order-
monomorphic pairs
$(X,E)\xleftarrow{R_{0*}}(A,T)\xrightarrow{R_{1*}}(Y,F)$
and
$(Y,F)\xleftarrow{S_{0*}}(B,T^{\prime})\xrightarrow{S_{1*}}(Z,G)$.
To calculate the composition of these two relations we form the following
pullback square in $\mathcal{E}_{ex/reg}$
(the pullback of $R_{1*}\colon(A,T)\to(Y,F)$ along $S_{0*}\colon(B,T^{\prime})\to(Y,F)$, with projections $\Pi_{0*}\colon(C,\Omega)\to(A,T)$ and $\Pi_{1*}\colon(C,\Omega)\to(B,T^{\prime})$)
and then the image factorization of $\langle
R_{0*}\Pi_{0*},S_{1*}\Pi_{1*}\rangle$, say
$(C,\Omega)\xrightarrow{Q_{*}}(D,\Theta)\xrightarrow{\langle U_{0*},U_{1*}\rangle}(X,E)\times(Z,G)$
By construction of pullbacks in $\mathcal{E}_{ex/reg}$ we know that
$\Pi_{1}\Pi_{0}^{\circ}=S_{0}^{\circ}R_{1}$. Also, by definition
$\mathfrak{F}$ maps the composition of the two relations to
$\mathfrak{F}(\mathscr{S}\mathscr{R})=U_{1}U_{0}^{\circ}$. But now we have
that
$U_{1}U_{0}^{\circ}=U_{1}QQ^{\circ}U_{0}^{\circ}=S_{1}\Pi_{1}\Pi_{0}^{\circ}R_{0}^{\circ}=S_{1}S_{0}^{\circ}R_{1}R_{0}^{\circ}=\mathfrak{F}(\mathscr{S})\mathfrak{F}(\mathscr{R})$
Finally, the fact that $\mathfrak{F}$ preserves the order of morphisms and is
fully (order-)faithful is precisely the existence of tabulations proved above.
Thus, $\mathfrak{F}$ is an equivalence of bicategories. ∎
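The composite computed in the proof, namely a pullback over the middle object followed by an image factorization, agrees in $\mathbf{Set}$ with the usual formula $\{(x,z):\exists y,\ (x,y)\in R\ \text{and}\ (y,z)\in S\}$. An informal sketch with arbitrary illustrative relations:

```python
def comp(R, S):
    """Composite written RS in the text: apply S first, then R."""
    return {(a, c) for (a, b) in S for (b2, c) in R if b2 == b}

R = {(0, 1), (0, 2), (1, 0), (2, 2)}   # a relation X -> Y (X = Y = {0, 1, 2})
S = {(1, 0), (2, 1), (2, 2)}           # a relation Y -> Z

A, B = sorted(R), sorted(S)            # the tabulating spans (graphs)
C = [(a, b) for a in A for b in B if a[1] == b[0]]   # pullback over Y
image = {(a[0], b[1]) for (a, b) in C}               # image of <R0 Pi0, S1 Pi1>

assert image == comp(S, R)             # agrees with direct composition
```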
* 4.16 Proposition.
There is an equivalence $\mathrm{Rel}_{w}(\mathcal{E}_{ex/reg})\simeq
Q_{w}(\mathcal{E})$.
###### Proof.
We define a functor
$\mathfrak{F}\colon\mathrm{Rel}_{w}(\mathcal{E}_{ex/reg})\to Q_{w}(\mathcal{E})$ exactly as in the proof of the previous proposition. Then
the main observation to make here is the following: a relation
$\mathscr{R}\colon(X,E)\looparrowright(Y,F)$ represented by the jointly order-
monomorphic pair
$(X,E)\xleftarrow{R_{0*}}(Z,T)\xrightarrow{R_{1*}}(Y,F)$
in $\mathcal{E}_{ex/reg}$ is weakening-closed if and only if
$R_{1*}R_{0}^{*}=R_{1}R_{0}^{\circ}$ as relations in $\mathcal{E}$.
Recall that $\mathscr{R}$ is a weakening-closed relation precisely if
$I_{(Y,F)}\mathscr{R}I_{(X,E)}=\mathscr{R}$ in
$\mathrm{Rel}(\mathcal{E}_{ex/reg})$. To compute the composition
$I_{(Y,F)}\mathscr{R}I_{(X,E)}$ one has to form the following diagram in
$\mathcal{E}_{ex/reg}$ where the top square is a pullback and the bottom two
are commas and then take the image factorization of the morphism $\langle
U_{0*}W_{0*},V_{1*}W_{1*}\rangle$.
The two comma squares have projections $U_{0*}\colon(A,\Phi)\to(X,E)$, $U_{1*}\colon(A,\Phi)\to(Z,T)$ (over $1_{(X,E)*}$ and $R_{0*}$) and $V_{0*}\colon(B,\Psi)\to(Z,T)$, $V_{1*}\colon(B,\Psi)\to(Y,F)$ (over $R_{1*}$ and $1_{(Y,F)*}$), while the top square is the pullback of $U_{1*}$ along $V_{0*}$, with projections $W_{0*}\colon(C,X)\to(A,\Phi)$ and $W_{1*}\colon(C,X)\to(B,\Psi)$.
Note that by the various limit constructions in $\mathcal{E}_{ex/reg}$ we know
that we must have $R_{0}^{*}=U_{1}U_{0}^{\circ}$, $R_{1*}=V_{1}V_{0}^{\circ}$
and $V_{0}^{\circ}U_{1}=W_{1}W_{0}^{\circ}$.
If $I_{(Y,F)}\mathscr{R}I_{(X,E)}=\mathscr{R}$, then there is a factorization
in $\mathcal{E}_{ex/reg}$
$(C,X)\xrightarrow{Q_{*}}(Z,T)\xrightarrow{\langle R_{0*},R_{1*}\rangle}(X,E)\times(Y,F)$
with $Q_{*}$ an $\mathsf{so}$-morphism, so that $QQ^{\circ}=T\cap T^{\circ}$.
Then we have that
$R_{1*}R_{0}^{*}=V_{1}V_{0}^{\circ}U_{1}U_{0}^{\circ}=V_{1}W_{1}W_{0}^{\circ}U_{0}^{\circ}=R_{1}QQ^{\circ}R_{0}^{\circ}=R_{1}R_{0}^{\circ}$
Conversely, assume that $R_{1*}R_{0}^{*}=R_{1}R_{0}^{\circ}$ and let us
consider four morphisms
$S_{0*},U_{*}\colon(C,G)\to(X,E)$ and $S_{1*},V_{*}\colon(C,G)\to(Y,F)$
such that $(S_{0*},S_{1*})$ factors through $(R_{0*},R_{1*})$ and $U_{*}\leq
S_{0*}$ and $S_{1*}\leq V_{*}$. Then we respectively have
$S_{1*}S_{0}^{*}\subseteq R_{1*}R_{0}^{*}$ and $U^{*}\subseteq S_{0}^{*}$ and
$V_{*}\subseteq S_{1*}$. Hence, $V_{*}U^{*}\subseteq S_{1*}S_{0}^{*}\subseteq
R_{1*}R_{0}^{*}=R_{1}R_{0}^{\circ}$ and so $(U_{*},V_{*})$ must also factor
through $(R_{0*},R_{1*})$ by the universal property of tabulations.
With this observation in hand, one can run the same proof as in the previous
proposition to show that
$\mathfrak{F}\colon\mathrm{Rel}_{w}(\mathcal{E}_{ex/reg})\to
Q_{w}(\mathcal{E})$ thus defined is an equivalence. The only point of minor
difference is in the proof that $\mathfrak{F}$ preserves identity morphisms.
For this, just recall that the identity on an object $(X,E)$ in
$\mathrm{Rel}_{w}(\mathcal{E}_{ex/reg})$ is the relation given by the
following comma square
It is the comma square of $1_{(X,E)*}$ with itself, with projections $R_{0*},R_{1*}\colon(A,T)\to(X,E)$ and $2$-cell $1_{(X,E)*}R_{0*}\leq 1_{(X,E)*}R_{1*}$,
so that by construction of commas we have that
$R_{1}R_{0}^{\circ}=1_{(X,E)}^{*}1_{(X,E)*}=EE=E$. ∎
Now from this last proposition one can immediately deduce that
$\mathcal{E}_{ex/reg}$ is indeed an exact category.
* 4.17 Corollary.
The category $\mathcal{E}_{ex/reg}$ is exact.
###### Proof.
Using the equivalence $\mathrm{Rel}_{w}(\mathcal{E}_{ex/reg})\simeq
Q_{w}(\mathcal{E})$, we can see that a congruence on an object
$(X,E)\in\mathcal{E}_{ex/reg}$ corresponds precisely to a congruence $R$ on
the object $X\in\mathcal{E}$ with $R\supseteq E$.
Indeed, consider a congruence $\mathscr{R}$ on the object
$(X,E)\in\mathcal{E}_{ex/reg}$, represented by the jointly order-monomorphic
pair $(X,E)\xleftarrow{R_{0*}}(Y,F)\xrightarrow{R_{1*}}(X,E)$.
Consider the functor
$\mathfrak{F}\colon\mathrm{Rel}_{w}(\mathcal{E}_{ex/reg})\to
Q_{w}(\mathcal{E})$ providing the equivalence and let
$R\coloneqq\mathfrak{F}(\mathscr{R})$. Since $\mathfrak{F}$ is an equivalence,
we have that $\mathscr{R}\mathscr{R}\subseteq\mathscr{R}$ if and only if
$RR\subseteq R$ in $Q_{w}(\mathcal{E})$, i.e. that transitivity of
$\mathscr{R}$ is equivalent to the same property for $R$ as a relation in
$\mathcal{E}$. Similarly, the inclusion $\mathscr{R}\supseteq I_{(X,E)}$ is
equivalent to $R\supseteq\mathfrak{F}(I_{(X,E)})=E$. In particular,
$R\supseteq I_{X}$ and so $R$ is a congruence on $X\in\mathcal{E}$.
The idempotent morphism $\mathfrak{F}(\mathscr{R})=R\colon(X,E)\to(X,E)$ in
$Q_{w}(\mathcal{E})$ now splits by construction, namely as
$(X,E)\xrightarrow{R}(X,R)\xrightarrow{R}(X,E)$. Thus,
$\mathscr{R}$ splits as an idempotent in
$\mathrm{Rel}_{w}(\mathcal{E}_{ex/reg})$ and hence we have shown that
$\mathcal{E}_{ex/reg}$ is exact. ∎
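Concretely, in $\mathbf{Set}$ a congruence containing the diagonal is an equivalence relation; it is idempotent as a relation and splits through the quotient map, recovering the relation as the kernel pair $q^{\circ}q$. An informal sketch with an arbitrary illustrative equivalence relation:

```python
def comp(R, S):
    """Composite written RS in the text: apply S first, then R."""
    return {(a, c) for (a, b) in S for (b2, c) in R if b2 == b}

def conv(R):
    return {(b, a) for (a, b) in R}

X = range(6)
R = {(x, y) for x in X for y in X if x % 2 == y % 2}   # equivalence relation

assert comp(R, R) == R                 # R is idempotent in Rel(Set)

cls = {x: frozenset(y for y in X if (x, y) in R) for x in X}
q = {(x, cls[x]) for x in X}           # quotient map X -> X/R (as a graph)

assert comp(conv(q), q) == R           # the splitting: R = q° q
```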
It remains to prove that $\mathcal{E}_{ex/reg}$, or more precisely
$\Gamma\colon\mathcal{E}\to\mathcal{E}_{ex/reg}$ satisfies the required
universal property. Before doing this, we observe in the proposition that
follows that every object of $\mathcal{E}_{ex/reg}$ appears as a quotient of a
congruence coming from $\mathcal{E}$ in a canonical way.
* 4.18 Proposition.
For every object $(X,E)\in\mathcal{E}_{ex/reg}$ there exists an exact sequence
$\Gamma E\overset{\Gamma e_{0}}{\underset{\Gamma e_{1}}{\rightrightarrows}}\Gamma X\xrightarrow{E_{*}}(X,E)$,
where $\langle e_{0},e_{1}\rangle\colon E\rightarrowtail X\times X$ is an
$\mathsf{ff}$-morphism representing the congruence $E$ in $\mathcal{E}$.
###### Proof.
Observe that $E$ indeed defines a morphism $E_{*}\colon\Gamma X\to(X,E)$ in
$\mathcal{E}_{ex/reg}$ which is in fact effective because $EI_{X}=E=EE$,
$E^{*}E_{*}=EE=E\supseteq I_{X}$, $E_{*}E^{*}=EE=E$. Now it suffices to show
that the square below is a comma square in $\mathcal{E}_{ex/reg}$.
The square has projections $\Gamma e_{0},\Gamma e_{1}\colon\Gamma E\to\Gamma X$, with $E_{*}\colon\Gamma X\to(X,E)$ on both sides and the $2$-cell $E_{*}\Gamma e_{0}\leq E_{*}\Gamma e_{1}$.
But by the construction of comma squares in $\mathcal{E}_{ex/reg}$ this is
equivalent to having $(\Gamma e_{0})^{*}\Gamma e_{0}\cap(\Gamma
e_{1})^{*}\Gamma e_{1}=I_{E}$ and $\Gamma e_{1}(\Gamma
e_{0})^{\circ}=E^{*}E_{*}$ i.e. $e_{0}^{*}e_{0*}\cap e_{1}^{*}e_{1*}=I_{E}$
and $e_{1}e_{0}^{\circ}=EE$ as relations in $\mathcal{E}$, both of which hold.
∎
Now we at last come to the proof that $\mathcal{E}_{ex/reg}$ satisfies the
universal property that exhibits it as the exact completion of the regular
category $\mathcal{E}$. Before proceeding, let us make a couple of quick
observations that will be used in the course of our calculations in the proof
that follows below.
Consider an exact fork
$E\rightrightarrows X\xrightarrow{p}P$
in the regular category $\mathcal{E}$. Recall that in the calculus of
relations this means that $E=p^{*}p_{*}$ and $p_{*}p^{*}=I_{P}$, where the
second equality can equivalently be replaced by $pp^{\circ}=\Delta_{P}$. In
addition, the kernel pair $p^{\circ}p$ of $p$ can be written as
$p^{*}p_{*}\cap(p^{*}p_{*})^{\circ}$, and hence we also have $p^{\circ}p=E\cap
E^{\circ}$. We now observe that the following equalities must hold:
* •
$pE=p_{*}$.
* •
$Ep^{\circ}=p^{*}$.
* •
$p_{*}Ep^{*}=I_{P}$.
Indeed, for the first of these we have $pE\subseteq
p_{*}E=p_{*}p^{*}p_{*}=p_{*}$ and also $pE=pp^{*}p_{*}\supseteq
pp^{\circ}p_{*}=\Delta_{P}p_{*}=p_{*}$. The second one follows similarly.
Finally, for the last one we have
$p_{*}Ep^{*}=p_{*}p^{*}p_{*}p^{*}=I_{P}I_{P}=I_{P}$.
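In the unordered case $\mathcal{E}=\mathbf{Set}$ (where $p_{*}$ is just $p$ and $p^{*}$ is $p^{\circ}$), all of these identities can be checked directly for a quotient map and its kernel pair; the surjection below is an arbitrary illustrative choice:

```python
def comp(R, S):
    """Composite written RS in the text: apply S first, then R."""
    return {(a, c) for (a, b) in S for (b2, c) in R if b2 == b}

def conv(R):
    return {(b, a) for (a, b) in R}

X, P = range(6), range(3)
p = {(x, x % 3) for x in X}            # a surjection X ->> P
E = comp(conv(p), p)                   # the kernel pair p°p

assert comp(p, conv(p)) == {(c, c) for c in P}           # pp° = Δ_P
assert comp(p, E) == p                                   # pE  = p
assert comp(E, conv(p)) == conv(p)                       # Ep° = p°
assert comp(p, comp(E, conv(p))) == {(c, c) for c in P}  # pEp° = Δ_P
```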
* 4.19 Theorem.
Let $F\colon\mathcal{E}\to\mathcal{F}$ be a regular functor with $\mathcal{F}$
an exact category. Then there is a unique (up to iso) regular functor
$\overline{F}\colon\mathcal{E}_{ex/reg}\to\mathcal{F}$ such that
$\overline{F}\circ\Gamma\cong F$.
###### Proof.
If $\overline{F}$ is to be regular, then by the previous proposition we must
define it on any object $(X,E)\in\mathcal{E}_{ex/reg}$ as the following
coinserter in $\mathcal{F}$
$F(E)\overset{Fe_{0}}{\underset{Fe_{1}}{\rightrightarrows}}FX\xrightarrow{p_{(X,E)}}\overline{F}(X,E)$
which exists because regular functors preserve congruences and $\mathcal{F}$
is exact.
Also, given any morphism $R_{*}\colon(X,E)\to(Y,G)\in\mathcal{E}_{ex/reg}$, we
have relations $F(R_{*})\colon FX\looparrowright FY$ and $F(R^{*})\colon
FY\looparrowright FX$ in $\mathcal{F}$ satisfying the following equations:
$\displaystyle F(G)F(R_{*})F(E)=F(GR_{*}E)=F(R_{*})$ $\displaystyle
F(E)F(R^{*})F(G)=F(ER^{*}G)=F(R^{*})$ $\displaystyle
F(R^{*})F(R_{*})=F(R^{*}R_{*})\supseteq F(E)$ $\displaystyle
F(R_{*})F(R^{*})=F(R_{*}R^{*})\subseteq F(G)$
where we used the fact that regular functors preserve the compositions and
inclusions of relations. Thus, by 3 we can define $\overline{F}(R_{*})$ to be
the uniquely associated morphism between quotients
$\overline{F}(X,E)\to\overline{F}(Y,G)$. More explicitly,
$\overline{F}(R_{*})$ is the morphism uniquely determined by the equality
$\overline{F}(R_{*})_{*}=p_{(Y,G)*}F(R_{*})p_{(X,E)}^{*}$ in
$\mathrm{Rel}(\mathcal{F})$. It is immediate that this defines a functor
$\mathcal{E}_{ex/reg}\to\mathcal{F}$ and clearly $\overline{F}\circ\Gamma\cong
F$. Note that by the discussion following 3 we also have
$\overline{F}(R_{*})=p_{(Y,G)}(F(R_{*})\cap
F(R^{*})^{\circ})p_{(X,E)}^{\circ}=p_{(Y,G)}F(R)p_{(X,E)}^{\circ}$ as
relations in $\mathcal{F}$.
Now let’s show that $\overline{F}$ preserves finite limits. First, it is clear
that it preserves the terminal object $\Gamma 1$, since $\overline{F}\Gamma
1=F1$ and $F$ preserves the terminal object. Second, the preservation of
binary products follows from the fact that exact sequences in any regular
category are stable under binary products. It suffices then to prove the
preservation of inserters. So suppose that
$(A,T)\xrightarrow{\Phi_{*}}(X,E)\overset{R_{*}}{\underset{S_{*}}{\rightrightarrows}}(Y,G)$
is an inserter diagram in $\mathcal{E}_{ex/reg}$. Recall that, by the
construction of inserters in $\mathcal{E}_{ex/reg}$ above, this means that
$\Phi\Phi^{\circ}=S^{*}R_{*}\cap(E\cap E^{\circ})$ and $\Phi^{*}\Phi_{*}=T$ as
relations in $\mathcal{E}$. Then first of all we have
$\displaystyle\overline{F}(\Phi_{*})^{*}\overline{F}(\Phi_{*})_{*}$
$\displaystyle=$ $\displaystyle
p_{(A,T)*}F(\Phi^{*})p_{(X,E)}^{*}p_{(X,E)*}F(\Phi_{*})p_{(A,T)}^{*}$
$\displaystyle=$ $\displaystyle
p_{(A,T)*}F(\Phi^{*})F(E)F(\Phi_{*})p_{(A,T)}^{*}$ $\displaystyle=$
$\displaystyle p_{(A,T)*}F(\Phi^{*}E\Phi_{*})p_{(A,T)}^{*}$ $\displaystyle=$
$\displaystyle p_{(A,T)*}F(\Phi^{*}\Phi_{*})p_{(A,T)}^{*}$ $\displaystyle=$
$\displaystyle p_{(A,T)*}F(T)p_{(A,T)}^{*}=I_{\overline{F}(A,T)}$
which tells us that $\overline{F}(\Phi_{*})$ is an $\mathsf{ff}$-morphism in
$\mathcal{F}$. Second, we have the following sequence of calculations:
$\displaystyle\overline{F}(S_{*})^{*}\overline{F}(R_{*})_{*}\cap\Delta_{\overline{F}(X,E)}=$
$\displaystyle=p_{(X,E)}p_{(X,E)}^{\circ}(\overline{F}(S_{*})^{*}\overline{F}(R_{*})_{*}\cap\Delta_{\overline{F}(X,E)})p_{(X,E)}p_{(X,E)}^{\circ}$
$\displaystyle=p_{(X,E)}p_{(X,E)}^{\circ}(p_{(X,E)*}F(S^{*})p_{(Y,G)}^{*}p_{(Y,G)*}F(R_{*})p_{(X,E)}^{*}\cap\Delta_{\overline{F}(X,E)})p_{(X,E)}p_{(X,E)}^{\circ}$
$\displaystyle=p_{(X,E)}p_{(X,E)}^{\circ}(p_{(X,E)}F(E)F(S^{*})F(G)F(R_{*})F(E)p_{(X,E)}^{\circ}\cap\Delta_{\overline{F}(X,E)})p_{(X,E)}p_{(X,E)}^{\circ}$
$\displaystyle=p_{(X,E)}p_{(X,E)}^{\circ}(p_{(X,E)}F(ES^{*}GR_{*}E)p_{(X,E)}^{\circ}\cap\Delta_{\overline{F}(X,E)})p_{(X,E)}p_{(X,E)}^{\circ}$
$\displaystyle=p_{(X,E)}p_{(X,E)}^{\circ}(p_{(X,E)}F(S^{*}R_{*})p_{(X,E)}^{\circ}\cap\Delta_{\overline{F}(X,E)})p_{(X,E)}p_{(X,E)}^{\circ}$
$\displaystyle=$ $\displaystyle
p_{(X,E)}[p_{(X,E)}^{\circ}p_{(X,E)}F(S^{*}R_{*})p_{(X,E)}^{\circ}p_{(X,E)}\cap
p_{(X,E)}^{\circ}p_{(X,E)}]p_{(X,E)}^{\circ}\quad\text{(by map distributivity)}$ $\displaystyle=p_{(X,E)}[F(E\cap
E^{\circ})F(S^{*}R_{*})F(E\cap E^{\circ})\cap F(E\cap
E^{\circ})]p_{(X,E)}^{\circ}$ $\displaystyle=p_{(X,E)}(F(S^{*}R_{*})\cap
F(E\cap E^{\circ}))p_{(X,E)}^{\circ}$ $\displaystyle=p_{(X,E)}F(S^{*}R_{*}\cap
E\cap E^{\circ})p_{(X,E)}^{\circ}$
$\displaystyle=p_{(X,E)}F(\Phi\Phi^{\circ})p_{(X,E)}^{\circ}$
$\displaystyle=p_{(X,E)}F(\Phi)F(T\cap
T^{\circ})F(\Phi^{\circ})p_{(X,E)}^{\circ}$
$\displaystyle=p_{(X,E)}F(\Phi)p_{(A,T)}^{\circ}p_{(A,T)}F(\Phi^{\circ})p_{(X,E)}^{\circ}$
$\displaystyle=\overline{F}(\Phi_{*})\overline{F}(\Phi_{*})^{\circ}$
These tell us that
$\overline{F}(A,T)\xrightarrow{\;\overline{F}(\Phi_{*})\;}\overline{F}(X,E)\overset{\overline{F}(R_{*})}{\underset{\overline{F}(S_{*})}{\rightrightarrows}}\overline{F}(Y,G)$
is an inserter diagram in $\mathcal{F}$.
Next, consider an $\mathsf{so}$-morphism
$R_{*}\colon(X,E)\twoheadrightarrow(Y,G)\in\mathcal{E}_{ex/reg}$. This means
that $R_{*}R^{*}=G$ and then we have
$\displaystyle\overline{F}(R_{*})_{*}\overline{F}(R_{*})^{*}$ $\displaystyle=$
$\displaystyle p_{(Y,G)*}F(R_{*})p_{(X,E)}^{*}p_{(X,E)*}F(R^{*})p_{(Y,G)}^{*}$
$\displaystyle=$ $\displaystyle p_{(Y,G)*}F(R_{*})F(E)F(R^{*})p_{(Y,G)}^{*}$
$\displaystyle=$ $\displaystyle
p_{(Y,G)*}F(R_{*}R^{*})p_{(Y,G)}^{*}=p_{(Y,G)*}F(G)p_{(Y,G)}^{*}$
$\displaystyle=$ $\displaystyle I_{\overline{F}(Y,G)}$
from which we obtain that $\overline{F}(R_{*})$ is an $\mathsf{so}$-morphism
in $\mathcal{F}$. Thus, we have proved that $\overline{F}$ is a regular
functor.
Finally, for any regular functor $H\colon\mathcal{E}_{ex/reg}\to\mathcal{F}$,
for every object $(X,E)\in\mathcal{E}_{ex/reg}$ we must have an exact sequence
$H\Gamma E\rightrightarrows H\Gamma X\twoheadrightarrow H(X,E)$ in
$\mathcal{F}$. If $H\Gamma\cong F$, this forces $H\cong\overline{F}$. ∎
## 5\. A Characterization of the Exact Completion and Priestley Spaces
Having established the universal property of the exact completion, in this
section we present a result which identifies the situation in which an exact
category is the exact completion of a given regular category $\mathcal{E}$.
More precisely, we will characterize the canonical functor
$\Gamma\colon\mathcal{E}\to\mathcal{E}_{ex/reg}$ as the unique up to
equivalence functor from $\mathcal{E}$ into an exact category which satisfies
some simple properties. This will in turn allow us to easily deduce some
examples of categories which arise as exact completions of some familiar
regular subcategory.
The main example that we aim to cover here involves the category of
_Priestley_ spaces. Indeed, the latter is regular as a $\mathsf{Pos}$-category
and we prove that its exact completion is the category of _compact ordered
spaces_ (or _Nachbin_ spaces). This provides an ordered version of the
folklore result which identifies the category of compact Hausdorff spaces as
the exact completion (in the ordinary sense) of the regular category of
_Stone_ spaces (see e.g. [3]).
But first, we need some preliminaries. We will say that a functor
$F\colon\mathcal{C}\to\mathcal{D}$ is _order-faithful_ if, for every
$f,g\colon X\to Y\in\mathcal{C}$ we have $Ff\leq Fg\implies f\leq g$. In other
words, $F$ is order-faithful if for every $X,Y\in\mathcal{C}$ the morphism
$\mathcal{C}(X,Y)\to\mathcal{D}(FX,FY)$ is an $\mathsf{ff}$-morphism in
$\mathsf{Pos}$. Note in particular that such a functor is faithful in the
ordinary sense or, in more appropriate language, the underlying functor
between ordinary categories is faithful. In fact, if $\mathcal{C}$ has
inserters which are preserved by $F\colon\mathcal{C}\to\mathcal{D}$, then the
two notions coincide.
The crux of the work now consists of establishing that certain properties of a
regular functor $F\colon\mathcal{E}\to\mathcal{F}$ into an exact category
$\mathcal{F}$ translate to corresponding properties of the induced
$\overline{F}\colon\mathcal{E}_{ex/reg}\to\mathcal{F}$.
* 5.1 Lemma.
Let $F\colon\mathcal{E}\to\mathcal{F}$ be a regular functor with $\mathcal{F}$
an exact category. If $F$ is fully order-faithful, then
$\overline{F}\colon\mathcal{E}_{ex/reg}\to\mathcal{F}$ is order-faithful.
###### Proof.
Let $R_{*},S_{*}\colon(X,E)\to(Y,G)\in\mathcal{E}_{ex/reg}$ be such that
$\overline{F}(R_{*})\leq\overline{F}(S_{*})$ in $\mathcal{F}$. Then using the
definition of $\overline{F}$ we have
$\displaystyle\overline{F}(R_{*})_{*}\supseteq\overline{F}(S_{*})_{*}$
$\displaystyle\implies$ $\displaystyle
p_{(Y,G)}^{*}\overline{F}(R_{*})_{*}p_{(X,E)*}\supseteq
p_{(Y,G)}^{*}\overline{F}(S_{*})_{*}p_{(X,E)*}$ $\displaystyle\implies$
$\displaystyle
p_{(Y,G)}^{*}p_{(Y,G)*}F(R_{*})p_{(X,E)}^{*}p_{(X,E)*}\supseteq
p_{(Y,G)}^{*}p_{(Y,G)*}F(S_{*})p_{(X,E)}^{*}p_{(X,E)*}$
$\displaystyle\implies$ $\displaystyle F(G)F(R_{*})F(E)\supseteq
F(G)F(S_{*})F(E)$ $\displaystyle\implies$ $\displaystyle F(R_{*})\supseteq
F(S_{*})$
But since $F$ is fully (order-) faithful, it reflects inclusions of subobjects
and hence we obtain $R_{*}\supseteq S_{*}$, i.e. $R_{*}\leq S_{*}$ in
$\mathcal{E}_{ex/reg}$. ∎
We introduce some further properties of functors that will be of interest. Our
choice of terminology follows the literature of Categorical Logic (e.g. [16]).
* 5.2 Definition.
A functor $F\colon\mathcal{C}\to\mathcal{D}$ is called _covering_ if, for
every object $Y\in\mathcal{D}$, one can find an object $X\in\mathcal{C}$ and
an effective epimorphism $FX\twoheadrightarrow Y$.
We say that $F$ is _full on subobjects_ if, for every $\mathsf{ff}$-morphism
$B\rightarrowtail FX$ in $\mathcal{D}$, there exists an $\mathsf{ff}$-morphism
$A\rightarrowtail X$ in $\mathcal{C}$ such that $FA\cong B$ in
$\mathrm{Sub}_{\mathcal{D}}(FX)$.
The following basic observation (valid even for ordinary categories) seems not
to have appeared explicitly in the literature. Since we will need it below, we
give its easy proof.
* 5.3 Lemma.
Let $F\colon\mathcal{C}\to\mathcal{D}$ be a regular functor between regular
categories. If $F$ is full and covering, then it is full on subobjects.
###### Proof.
Consider an $\mathsf{ff}$-morphism $v\colon D\rightarrowtail FY$ in
$\mathcal{D}$. Since $F$ is covering, there exists some $\mathsf{so}$-morphism
$q\colon FX\twoheadrightarrow D$ in $\mathcal{D}$. Now consider the
composition of the two,
$FX\xrightarrow{\;q\;}D\xrightarrow{\;v\;}FY$.
Since $F$ is full, there is a morphism $f\colon X\to Y$ in $\mathcal{C}$ such
that $Ff=vq$.
Since $\mathcal{C}$ is regular, we can factor $f$ as an $\mathsf{so}$-morphism
followed by an $\mathsf{ff}$-morphism, say
$X\overset{p}{\twoheadrightarrow}I\overset{u}{\rightarrowtail}Y$ with $f=up$.
But $F$ is a regular functor, so
$FX\overset{Fp}{\twoheadrightarrow}FI\overset{Fu}{\rightarrowtail}FY$
is the ($\mathsf{so}$,$\mathsf{ff}$) factorization of $Ff=vq$. By uniqueness
of such factorizations in the regular category $\mathcal{D}$, we deduce that
$FI\cong D$ as subobjects of $FY$. ∎
* 5.4 Proposition.
Let $F\colon\mathcal{E}\to\mathcal{F}$ be a regular functor with $\mathcal{F}$
an exact category. If $F$ is fully order-faithful and covering, then
$\overline{F}\colon\mathcal{E}_{ex/reg}\to\mathcal{F}$ is fully order-faithful
and covering.
###### Proof.
We saw earlier that $F$ being fully order-faithful implies that $\overline{F}$
is order-faithful. Furthermore, it is immediate that $F$ being covering
implies the same property for $\overline{F}$, since we have
$\overline{F}\Gamma\cong F$.
Now consider any morphism
$g\colon\overline{F}(X,E)\to\overline{F}(Y,G)\in\mathcal{F}$. Let $S_{*}\colon
FX\looparrowright FY$ and $S^{*}\colon FY\looparrowright FX$ denote the
relations corresponding to this morphism via the bijection of 3, i.e. the
relations $S_{*}=p_{(Y,G)}^{*}g_{*}p_{(X,E)*}$ and
$S^{*}=p_{(X,E)}^{*}g^{*}p_{(Y,G)*}$. Now since $F$ is a full and covering
regular functor, we know by the previous lemma that it is also full on
subobjects and so there exist relations $R_{*}\colon X\looparrowright Y$ and
$R^{*}\colon Y\looparrowright X$ in $\mathcal{E}$ such that $F(R_{*})=S_{*}$
and $F(R^{*})=S^{*}$. Furthermore, we have the following:
$F(GR_{*}E)=F(G)F(R_{*})F(E)=F(G)S_{*}F(E)=S_{*}=F(R_{*})$
$F(R^{*}R_{*})=F(R^{*})F(R_{*})=S^{*}S_{*}\supseteq F(E)$
$F(R_{*}R^{*})=F(R_{*})F(R^{*})=S_{*}S^{*}\subseteq F(G)$
But now because $F$ is fully (order-) faithful it reflects inclusions of
subobjects. Thus, we deduce that $GR_{*}E=R_{*}$, $R^{*}R_{*}\supseteq E$ and
$R_{*}R^{*}\subseteq G$, so that $R_{*}$ is a morphism
$(X,E)\to(Y,G)\in\mathcal{E}_{ex/reg}$.
Finally, we have by definition of the functor $\overline{F}$ that
$\displaystyle\overline{F}(R_{*})_{*}$
$\displaystyle=p_{(Y,G)*}F(R_{*})p_{(X,E)}^{*}=p_{(Y,G)*}S_{*}p_{(X,E)}^{*}=p_{(Y,G)*}p_{(Y,G)}^{*}g_{*}p_{(X,E)*}p_{(X,E)}^{*}$
$\displaystyle=I_{\overline{F}(Y,G)}g_{*}I_{\overline{F}(X,E)}=g_{*}$
and hence $\overline{F}(R_{*})=g$. ∎
The final ingredient we need is the $\mathsf{Pos}$-enriched analogue of Lemma
1.4.9 from [16].
* 5.5 Lemma.
Let $F\colon\mathcal{E}\to\mathcal{F}$ be a regular functor between regular
categories where moreover $\mathcal{E}$ is exact. If $F$ is fully order-
faithful and covering, then it is an equivalence.
###### Proof.
It suffices to show that $F$ is essentially surjective on objects. So let
$Y\in\mathcal{F}$. By the assumption that $F$ is covering, we can find a
coinserter $q\colon FX\twoheadrightarrow Y$ in $\mathcal{F}$ for some object
$X\in\mathcal{E}$. Consider the kernel congruence
$q_{0},q_{1}\colon q/q\rightrightarrows FX$
in $\mathcal{F}$. Since $F$ is a covering and full regular functor, we know
that it must be also full on subobjects. In particular, there is a relation
$e_{0},e_{1}\colon E\rightrightarrows X$
in $\mathcal{E}$ such that $F(E)=q/q$ as relations in $\mathcal{F}$.
Now because $F$ is order-faithful, $F(E)=q/q$ being a congruence in
$\mathcal{F}$ implies that $E$ is a congruence on $X$ in $\mathcal{E}$. Since
$\mathcal{E}$ is assumed to be exact, $E$ has a coinserter, say $p\colon X\to
P$, and is the kernel congruence of that coinserter. Now by regularity of the
functor $F$ we have that $Fp\colon FX\twoheadrightarrow FP$ is the coinserter
of $F(E)=q/q$. Since $q\colon FX\to Y$ is also a coinserter of $q/q$, we
deduce that there exists an iso $FP\cong Y$. ∎
Now putting everything together we have proved the following result.
* 5.6 Theorem.
Let $F\colon\mathcal{E}\to\mathcal{F}$ be a regular functor with $\mathcal{F}$
an exact category. Then $\overline{F}\colon\mathcal{E}_{ex/reg}\to\mathcal{F}$
is an equivalence if and only if $F$ is fully order-faithful and covering.
In other words, if we have found a regular functor
$F\colon\mathcal{E}\to\mathcal{F}$ into an exact category $\mathcal{F}$ and
$F$ is fully order-faithful and covering, then we have identified
$\mathcal{F}$ as $\mathcal{E}_{ex/reg}$. This can be immediately applied to
produce some examples of categories which are exact completions of regular
categories.
* 5.7 Example.
1. (1)
Consider $\mathsf{Set}$ viewed as $\mathsf{Pos}$-category with discrete hom-
sets. We have seen that it is regular but not exact. The discrete poset
functor $D\colon\mathsf{Set}\to\mathsf{Pos}$ is clearly fully order-faithful
and regular. It is also trivially covering, since for any poset $(X,\leq)$ we
have an order-preserving surjection $(X,=)\twoheadrightarrow(X,\leq)$. Thus,
by 5 we have that $\mathsf{Set}_{ex/reg}\simeq\mathsf{Pos}$.
2. (2)
Consider the category $\mathsf{OrdMon}$ of ordered monoids, which is a variety
of ordered algebras in the sense of Bloom & Wright [5] and hence is an exact
category [14]. Similarly, the category $\mathsf{OrdMon}_{can}$ of cancellative
ordered monoids is regular, since it is an ordered quasivariety. Then by 5 we
see that $(\mathsf{OrdMon}_{can})_{ex/reg}\simeq\mathsf{OrdMon}$. Indeed,
every ordered monoid admits a surjective homomorphism from a free one and
clearly every free ordered monoid is cancellative. Recall here that the free
ordered monoid $F(X,\leq)$ on a poset $(X,\leq)$ has elements all finite lists
$(x_{1},x_{2},...,x_{n})$ of elements of $X$, with
$(x_{1},x_{2},...,x_{n})\leq(y_{1},y_{2},...,y_{m})$ in $F(X,\leq)$ if and
only if $m=n$ and $x_{i}\leq y_{i}$ for all $i\in\{1,2,\dots,n\}$.
3. (3)
The ordinary category $\mathsf{Mon}$ of monoids is regular, hence also regular
as a locally discrete $\mathsf{Pos}$-category. The inclusion functor
$\mathsf{Mon}\hookrightarrow\mathsf{OrdMon}$ is regular and any ordered monoid
$(M,\leq)$ admits a surjective homomorphism $(M,=)\twoheadrightarrow(M,\leq)$.
It follows then from 5 that $\mathsf{Mon}_{ex/reg}\simeq\mathsf{OrdMon}$.
4. (4)
It is easy to see that in the above example there is nothing special about the
variety of monoids. Indeed, any ordinary quasivariety gives rise to a
corresponding quasivariety of ordered algebras defined by the same set of
axioms. The ordinary (unordered) version sits inside the ordered one as the
discrete ordered algebras and as such is a regular subcategory. It follows
then that its exact completion qua $\mathsf{Pos}$-category yields precisely
the corresponding ordered quasivariety. Thus, for example, in the case of
semigroups we similarly have $\mathsf{SGrp}_{ex/reg}\simeq\mathsf{OrdSGrp}$.
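For completeness, the cancellativity claim used in example (2) above admits a direct verification; the following sketch (our addition) uses the list description of free ordered monoids recalled there.

```latex
% Multiplication in F(X,\leq) is concatenation of lists. For right
% cancellation: if uw = vw with u = (x_1,\dots,x_n) and v = (y_1,\dots,y_m),
% comparing lengths gives n + |w| = m + |w|, hence n = m, and comparing the
% first n entries gives x_i = y_i for all i. Thus
\[
u\,w = v\,w \;\Longrightarrow\; u = v,
\qquad
w\,u = w\,v \;\Longrightarrow\; u = v \ \text{(symmetrically)},
\]
% so every free ordered monoid is a cancellative ordered monoid.
```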
The examples presented so far of exact completions have all been varieties of
ordered algebras which appear as completions of certain corresponding
quasivarieties. However, the main example we would like to present in this
section is order-topological in nature and involves the category of Priestley
spaces. Let us thus first recall some terminology.
We will say that a triple $(X,\tau,\leq)$ with $\tau$ a topology and $\leq$ a
partial order relation on the set $X$ is an _ordered topological space_.
A _compact ordered space_ is an ordered topological space $(X,\tau,\leq)$ such
that $(X,\tau)$ is compact and $\leq$ is closed as a subspace of $X\times X$.
This class of spaces was introduced and developed by L. Nachbin in [18] as an
ordered analogue of compact Hausdorff spaces, and so we will also call these
_Nachbin_ spaces. Together with the continuous order-preserving functions
between them they form a category which we denote by $\mathsf{Nach}$. Note
that, under the assumption of compactness, the condition that the order
relation be closed in the product space $X\times X$ means the following:
whenever $x,y\in X$ with $x\nleq y$, there exist an open upper set $U$ and an
open lower set $V$ such that $x\in U$, $y\in V$ and $U\cap V=\emptyset$.
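For a concrete instance (our addition, standard): the unit interval with its Euclidean topology and usual order is a Nachbin space, and the separation just described is easy to exhibit.

```latex
% X = [0,1], \tau the Euclidean topology, \leq the usual order.
% If x \nleq y, i.e. y < x, choose c with y < c < x; then
\[
U = (c,1] \ \text{is an open upper set with } x \in U,
\qquad
V = [0,c) \ \text{is an open lower set with } y \in V,
\]
% and U \cap V = \emptyset; equivalently, \leq is closed in X \times X.
```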
Inside $\mathsf{Nach}$ sits the very interesting full subcategory
$\mathsf{Pries}$ of _Priestley_ spaces. This is a class of ordered topological
spaces introduced by H. A. Priestley [19] in order to provide an extension of
Stone duality to distributive lattices. In other words there is an equivalence
of categories $\mathsf{DLat}^{op}\simeq\mathsf{Pries}$, where $\mathsf{DLat}$
denotes the category of (bounded) distributive lattices and lattice
homomorphisms.
Recall then that an ordered topological space $(X,\tau,\leq)$ is a _Priestley
space_ if $(X,\tau)$ is compact and the following is satisfied: whenever
$x\nleq y$, there exists a clopen upper set $U$ such that $x\in U$ and
$y\notin U$. Ordered spaces satisfying the latter condition are often called
_totally order-separated_. It is immediate that every Priestley space is
indeed a Nachbin space. It is furthermore clear that the underlying
topological space of a Priestley space is a Stone space. In fact, the category
$\mathsf{Stone}$ of Stone spaces is embedded in $\mathsf{Pries}$ as the full
subcategory on the objects for which the order relation is discrete.
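The inclusion $\mathsf{Pries}\subseteq\mathsf{Nach}$ is strict; here is a standard witness (our addition).

```latex
\[
([0,1],\tau_{\mathrm{eucl}},\leq)\ \in\ \mathsf{Nach}\setminus\mathsf{Pries}.
\]
% Indeed [0,1] with its usual order and topology is a Nachbin space, but it
% is connected, so its only clopen subsets are \emptyset and [0,1]; hence
% for y < x no clopen upper set can contain x while missing y. In
% particular, the underlying space of [0,1] is not a Stone space.
```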
* 5.8 Proposition.
The category $\mathsf{Nach}$ is exact, while the category $\mathsf{Pries}$ is
regular.
###### Proof.
Observe that the coinserter $E\rightrightarrows X\overset{q}{\twoheadrightarrow}X/E\cap E^{\circ}$
of any internal congruence $E\rightarrowtail X\times X$ in $\mathsf{Nach}$ is
constructed by equipping the set $X/E\cap E^{\circ}$ with the quotient
topology and the order relation induced by the preorder $E$. Since $E$ is
closed in $X\times X$, so is the equivalence relation $E\cap E^{\circ}$ and so
at the level of spaces we know that the quotient will be a compact Hausdorff
space. It is then a Nachbin space because the order relation is by definition
equal to $(q\times q)[E]$ and the map $q\times q$ is closed.
It now follows that the effective epimorphisms in $\mathsf{Nach}$ are
precisely the continuous monotone surjections. Indeed, if $f\colon X\to
Y\in\mathsf{Nach}$ is surjective then on the level of spaces it is a
continuous surjection between compact Hausdorff spaces and hence it is a
quotient map. This means that the induced $\bar{f}\colon X/R\to Y$ is a
homeomorphism, where $R=\{(x,x^{\prime})\mid f(x)=f(x^{\prime})\}=E\cap
E^{\circ}$, for $E\coloneqq f/f$. But $\bar{f}$ also preserves and reflects
the order because by definition we have
$\bar{f}([x])\leq\bar{f}([x^{\prime}])\iff f(x)\leq
f(x^{\prime})\iff(x,x^{\prime})\in E\iff[x]\leq[x^{\prime}]$. Thus, $f$ is the
coinserter of its kernel congruence.
This shows that $\mathsf{Nach}$ is regular, since the continuous monotone
surjections are clearly stable under pullback. To see that $\mathsf{Pries}$ is
regular, it suffices to observe that the latter is closed under finite limits
and subobjects in $\mathsf{Nach}$.
Finally, consider any internal congruence $E\rightarrowtail X\times X$ in
$\mathsf{Nach}$ and construct its coinserter $q$ as we did above. It is then
immediate by the construction that $E=q/q$ and so we have proved that
$\mathsf{Nach}$ is exact. ∎
We can now deduce an ordered version of the folklore result which identifies
the exact completion of $\mathsf{Stone}$ as the category of compact Hausdorff
spaces. The latter, to the best of the author’s knowledge, seems to have first
appeared in print in [3] where a similar argument involving the ordinary exact
completion was invoked.
* 5.9 Corollary.
$\mathsf{Pries}_{ex/reg}\simeq\mathsf{Nach}\simeq\mathsf{Stone}_{ex/reg}$.
###### Proof.
The inclusions
$\mathsf{Stone}\hookrightarrow\mathsf{Pries}\hookrightarrow\mathsf{Nach}$ are
both regular functors. Furthermore, if $X\in\mathsf{Nach}$, then $X$ is in
particular a compact Hausdorff space and so admits a continuous surjection
$\beta(X)\twoheadrightarrow X$ from a Stone space $\beta(X)$, the latter being
the Stone–Čech compactification of the discrete set $X$. Equipping $\beta(X)$
with the equality relation this becomes a continuous monotone surjection in
$\mathsf{Nach}$. Thus, both inclusion functors are also covering and the
result follows from 5. ∎
Before ending this section, let us record a small observation that generalizes
some of the examples we have seen so far. To this effect, recall that some
varieties of ordered algebras were described as exact completions of certain
ordinary varieties which appeared as the objects with discrete order relation.
For example, we had $\mathsf{Mon}_{ex/reg}\simeq\mathsf{OrdMon}$ for the
category of ordered monoids. Similarly, in the context of the above corollary
we could have included the equivalence
$\mathsf{CHaus}_{ex/reg}\simeq\mathsf{Nach}$, where $\mathsf{CHaus}$ denotes
the locally discrete category of compact Hausdorff spaces.
More generally now, consider any regular category $\mathcal{E}$ and define an
object $X\in\mathcal{E}$ to be _discrete_ if for every $f,g\colon A\to
X\in\mathcal{E}$ we have that $f\leq g\implies f=g$. If we denote by
$\mathsf{Dis}(\mathcal{E})$ the full subcategory on the discrete objects, then
it is plain that $\mathsf{Dis}(\mathcal{E})$ is a locally discrete category
which is closed under finite limits and subobjects in $\mathcal{E}$. Thus,
$\mathsf{Dis}(\mathcal{E})$ is a regular category as well. We will say that
$\mathcal{E}$ has _enough discrete objects_ if for every object
$X\in\mathcal{E}$ there exists an $\mathsf{so}$-morphism $D\twoheadrightarrow
X$ in $\mathcal{E}$ with $D\in\mathsf{Dis}(\mathcal{E})$. By another
application of 5 we now deduce the following:
* 5.10 Corollary.
Let $\mathcal{E}$ be an exact category with enough discrete objects. Then,
$\mathcal{E}\simeq\mathsf{Dis}(\mathcal{E})_{ex/reg}$.
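To connect the corollary with example 5.7(1), one can check (sketch, our addition) that in $\mathsf{Pos}$ the discrete objects are exactly the posets with discrete order.

```latex
% If x < x' in a poset X, the constant maps f, g : 1 \to X with f(*) = x and
% g(*) = x' satisfy f \leq g in the pointwise order but f \neq g, so X is
% not discrete. Conversely, if the order on X is equality, then for any
% poset A and any f, g : A \to X,
\[
f \leq g \;\Longrightarrow\; f(a) \leq g(a)\ \forall a \in A
\;\Longrightarrow\; f = g,
\]
% so \mathsf{Dis}(\mathsf{Pos}) \simeq \mathsf{Set}. Since \mathsf{Pos} is
% exact (being itself an exact completion) with enough discrete objects, the
% corollary recovers \mathsf{Set}_{ex/reg} \simeq \mathsf{Pos}.
```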
## 6\. Internal Posets and Exact Completion
In this final section we consider the process of taking internal posets in an
ordinary category and how the ordinary and enriched notions of regularity and
exactness are related through said process. Furthermore, we prove a type of
commutation between this construction of internal posets and that of exact
completion.
To begin with, suppose that $\mathcal{C}$ is any finitely complete ordinary
category. We can then define a category $\mathsf{Ord}(\mathcal{C})$ as
follows:
* •
Objects: are pairs $(X,\leq_{X})$, where $X$ is an object of $\mathcal{C}$ and
$\leq_{X}\rightarrowtail X\times X$ is a partial order relation in
$\mathcal{C}$.
* •
Morphisms: A morphism
$f\colon(X,\leq_{X})\to(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{C})$ is a morphism
$f\colon X\to Y\in\mathcal{C}$ such that $f(\leq_{X})\subseteq\leq_{Y}$.
The condition $f(\leq_{X})\subseteq\leq_{Y}$ means that there is a commutative
diagram in $\mathcal{C}$ of the form
$\begin{array}{ccc}\leq_{X}&\longrightarrow&\leq_{Y}\\ \big\downarrow&&\big\downarrow\\ X\times X&\xrightarrow{\;f\times f\;}&Y\times Y\end{array}$
Composition of morphisms and identities are those of $\mathcal{C}$.
Furthermore, given morphisms
$f,f^{\prime}\colon(X,\leq_{X})\to(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{C})$,
we define $f\leq f^{\prime}$ to mean that $\langle f,f^{\prime}\rangle\colon X\to Y\times Y$ factors through the subobject $\leq_{Y}\rightarrowtail Y\times Y$.
Now it is easy to see that this order relation on morphisms of
$\mathsf{Ord}(\mathcal{C})$ is preserved by composition. For example, if
$f,f^{\prime}\colon(X,\leq_{X})\to(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{C})$
with $f\leq f^{\prime}$ and $g\colon(Y,\leq_{Y})\to(Z,\leq_{Z})$, we have
$gf\leq gf^{\prime}$: composing the factorization $\langle f,f^{\prime}\rangle\colon X\to\leq_{Y}$ with the morphism $\leq_{Y}\to\leq_{Z}$ witnessing $g(\leq_{Y})\subseteq\leq_{Z}$ yields a factorization of $\langle gf,gf^{\prime}\rangle=(g\times g)\langle f,f^{\prime}\rangle$ through $\leq_{Z}$.
Thus, $\mathsf{Ord}(\mathcal{C})$ is enriched in $\mathsf{Pos}$. Our first
observation below is that finite completeness of the ordinary category
$\mathcal{C}$ implies the existence of all finite weighted limits in
$\mathsf{Ord}(\mathcal{C})$.
* 6.1 Proposition.
If $\mathcal{C}$ is finitely complete, then $\mathsf{Ord}(\mathcal{C})$ has
finite weighted limits.
###### Proof.
It is easy to see that $(X,\leq_{X})\xleftarrow{\;\pi_{X}\;}(X\times Y,\leq_{X}\times\leq_{Y})\xrightarrow{\;\pi_{Y}\;}(Y,\leq_{Y})$
is a product diagram for every
$(X,\leq_{X}),(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{C})$. Let us show how to
construct the inserter of a pair of morphisms
$f,g\colon(X,\leq_{X})\rightrightarrows(Y,\leq_{Y})$. For this,
form the following pullback square in $\mathcal{C}$
$\begin{array}{ccc}E&\xrightarrow{\;e^{\prime}\;}&\leq_{Y}\\ {\scriptstyle e}\big\downarrow&&\big\downarrow\\ X&\xrightarrow{\;\langle f,g\rangle\;}&Y\times Y\end{array}$
Let $\leq_{E}$ be the restriction of $\leq_{X}$ to the subobject
$E\rightarrowtail X$, i.e. $\leq_{E}=(E\times E)\cap\leq_{X}$ as subobjects of
$X\times X$. It is easy to see that $\leq_{E}$ is itself an internal partial
order relation on $E\in\mathcal{C}$ so that we have a morphism
$e\colon(E,\leq_{E})\to(X,\leq_{X})\in\mathsf{Ord}(\mathcal{C})$. Also, by
commutativity of the pullback square above we have $fe\leq ge$.
Now let $h\colon(Z,\leq_{Z})\to(X,\leq_{X})$ be such that $fh\leq gh$. This
means that $\langle fh,gh\rangle=\langle f,g\rangle h$ factors through
$\leq_{Y}\rightarrowtail Y\times Y$, say via $u\colon Z\to\leq_{Y}$, so then
by the pullback property there exists a unique $v\colon Z\to E$ satisfying
$ev=h$, $e^{\prime}v=u$.
Finally, $e$ is an $\mathsf{ff}$-morphism in $\mathsf{Ord}(\mathcal{C})$ by
definition of $\leq_{E}$. Indeed, for any
$h,h^{\prime}\colon(Z,\leq_{Z})\rightrightarrows(E,\leq_{E})$,
the inequality $eh\leq eh^{\prime}$ means that $(e\times e)\langle
h,h^{\prime}\rangle$ factors through $\leq_{X}$, which implies $\langle
h,h^{\prime}\rangle$ factors through $\leq_{E}=(E\times E)\cap\leq_{X}$, i.e.
that $h\leq h^{\prime}$. ∎
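In the basic case $\mathcal{C}=\mathsf{Set}$ (a remark of ours): $\mathsf{Ord}(\mathsf{Set})$ is precisely $\mathsf{Pos}$ with the pointwise order on monotone maps, and the inserter constructed in the proof is the familiar one.

```latex
% For f, g : (X,\leq_X) \to (Y,\leq_Y) in Ord(Set) = Pos, the pullback of
% \langle f,g \rangle along \leq_Y \rightarrowtail Y \times Y is
\[
E = \{\, x \in X \mid f(x) \leq_Y g(x) \,\},
\qquad
\leq_E \;=\; \leq_X \cap\, (E \times E),
\]
% and e : (E,\leq_E) \hookrightarrow (X,\leq_X) is the inclusion of this
% sub-poset: the usual inserter of a pair of monotone maps.
```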
Since it will be needed later, let us also record here how to construct comma
squares
$\begin{array}{ccc}(C,\leq_{C})&\xrightarrow{\;c_{1}\;}&(Y,\leq_{Y})\\ {\scriptstyle c_{0}}\big\downarrow&{\scriptstyle\leq}&\big\downarrow{\scriptstyle g}\\ (X,\leq_{X})&\xrightarrow{\;f\;}&(Z,\leq_{Z})\end{array}$
in $\mathsf{Ord}(\mathcal{C})$. This is accomplished by constructing the
following pullback square in $\mathcal{C}$
$\begin{array}{ccc}C&\longrightarrow&\leq_{Z}\\ {\scriptstyle\langle c_{0},c_{1}\rangle}\big\downarrow&&\big\downarrow\\ X\times Y&\xrightarrow{\;f\times g\;}&Z\times Z\end{array}$
and then setting $\leq_{C}\coloneqq(C\times C)\cap(\leq_{X}\times\leq_{Y})$.
Before moving on, let us also discuss $\mathsf{ff}$-morphisms in
$\mathsf{Ord}(\mathcal{C})$. We saw in the course of the previous proof that
an $m\colon(X,\leq_{X})\to(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{C})$ with
$m\colon X\to Y\in\mathcal{C}$ monomorphic and $\leq_{X}=(X\times
X)\cap\leq_{Y}$ is an $\mathsf{ff}$-morphism in $\mathsf{Ord}(\mathcal{C})$.
It is in fact not too hard to see that this completely characterizes
$\mathsf{ff}$-morphisms in $\mathsf{Ord}(\mathcal{C})$. Indeed, assume that
$m\colon(X,\leq_{X})\to(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{C})$ is an
$\mathsf{ff}$-morphism. If $f,g\colon Z\to X\in\mathcal{C}$ are such that
$mf=mg$, then we can also consider them as morphisms
$f,g\colon(Z,\Delta_{Z})\to(X,\leq_{X})$ in $\mathsf{Ord}(\mathcal{C})$ and
hence deduce that $f=g$. This proves $m$ must be a monomorphism in
$\mathcal{C}$. Now arguing with generalized elements one can easily deduce
that $\leq_{X}$ is indeed the restriction of $\leq_{Y}$ along $m\colon
X\rightarrowtail Y$.
* 6.2 Lemma.
If $f\colon(X,\leq_{X})\to(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{C})$ is such
that the underlying $f\colon X\to Y$ is a strong epimorphism in $\mathcal{C}$,
then $f$ is an $\mathsf{so}$-morphism in $\mathsf{Ord}(\mathcal{C})$.
###### Proof.
Consider the following commutative square in $\mathsf{Ord}(\mathcal{C})$,
where we assume $m\colon(M,\leq_{M})\to(Z,\leq_{Z})$ is an
$\mathsf{ff}$-morphism.
$\begin{array}{ccc}(X,\leq_{X})&\xrightarrow{\;f\;}&(Y,\leq_{Y})\\ {\scriptstyle h}\big\downarrow&&\big\downarrow{\scriptstyle g}\\ (M,\leq_{M})&\xrightarrow{\;m\;}&(Z,\leq_{Z})\end{array}$
In particular, by the preceding discussion we know that $m$ is monomorphic in
$\mathcal{C}$ and so by the property of $f$ as a strong epimorphism in the
latter category we deduce the existence of a $u\colon Y\to M$ such that $uf=h$
and $mu=g$. The fact that $u$ is actually a morphism in
$\mathsf{Ord}(\mathcal{C})$ follows because $mu=g$ is such a morphism and
because $m$ being an $\mathsf{ff}$-morphism also means that $\leq_{M}=(M\times
M)\cap\leq_{Z}$. ∎
We can now prove that ordinary regularity of $\mathcal{C}$ implies (enriched)
regularity for $\mathsf{Ord}(\mathcal{C})$. Note that this result is in some
sense a special case of Proposition 62 in [7], where the authors prove a form
of 2-categorical regularity for the 2-category $\mathsf{Cat}(\mathcal{E})$ of
internal categories in a regular (1-)category $\mathcal{E}$. Nevertheless, we
include a proof here in order to make the paper more self-contained.
* 6.3 Proposition.
If $\mathcal{C}$ is an ordinary regular category, then
$\mathsf{Ord}(\mathcal{C})$ is regular.
###### Proof.
Consider any $f\colon(X,\leq_{X})\to(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{C})$
along with its (regular epi,mono) factorization
$X\overset{p}{\twoheadrightarrow}M\overset{m}{\rightarrowtail}Y$ in $\mathcal{C}$. Then by
what we have already established earlier, upon setting
$\leq_{M}\coloneqq(M\times M)\cap\leq_{Y}$, we obtain an
($\mathsf{so}$,$\mathsf{ff}$) factorization
$(X,\leq_{X})\overset{p}{\twoheadrightarrow}(M,\leq_{M})\overset{m}{\rightarrowtail}(Y,\leq_{Y})$
in $\mathsf{Ord}(\mathcal{C})$.
Now the existence of these factorizations together with the previous lemma
imply that any
$f\colon(X,\leq_{X})\to(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{C})$ is an
$\mathsf{so}$-morphism in $\mathsf{Ord}(\mathcal{C})$ if and only if $f\colon
X\to Y$ is a regular (= strong) epimorphism in $\mathcal{C}$. For the “only if”
direction, suppose that $f\colon(X,\leq_{X})\to(Y,\leq_{Y})$ is an
$\mathsf{so}$-morphism and consider its factorization
$(X,\leq_{X})\overset{p}{\twoheadrightarrow}(M,\leq_{M})\overset{m}{\rightarrowtail}(Y,\leq_{Y})$
as constructed above. Then we have that $m\colon(M,\leq_{M})\to(Y,\leq_{Y})$
is both an $\mathsf{so}$ and $\mathsf{ff}$-morphism, hence is an isomorphism
in $\mathsf{Ord}(\mathcal{C})$. In particular, $m$ is an isomorphism in
$\mathcal{C}$ and so $p$ being a strong epimorphism in $\mathcal{C}$ implies
the same for $f=mp$.
Finally, since pullbacks in $\mathsf{Ord}(\mathcal{C})$ are constructed by
simply taking the pullback of the underlying morphisms in $\mathcal{C}$,
pullback-stability of regular epimorphisms in $\mathcal{C}$ implies pullback-
stability of $\mathsf{so}$-morphisms in $\mathsf{Ord}(\mathcal{C})$. ∎
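To see the ($\mathsf{so}$,$\mathsf{ff}$) factorization concretely, one can compute it in $\mathsf{Ord}(\mathsf{Set})$, i.e. for finite ordered sets. The following Python sketch (the helper names `factorize` and `is_monotone` are ours, not from any library) builds the image $M$ and the restricted order $\leq_{M}=(M\times M)\cap\leq_{Y}$ exactly as in the proof above.

```python
# Ordered objects in Ord(Set): a pair (elements, leq), where leq is the
# set of related pairs. The (so, ff) factorization of a monotone map f
# restricts the order of the codomain to the image, as in Proposition 6.3.

def is_monotone(leq_src, leq_tgt, g):
    """Check that the map g (a dict) preserves the order relation."""
    return all((g[a], g[b]) in leq_tgt for (a, b) in leq_src)

def factorize(X, leq_X, Y, leq_Y, f):
    """Factor f = m o p with p surjective (so) and m a full
    order-embedding (ff); leq_M is the restriction of leq_Y to M."""
    M = {f[x] for x in X}
    leq_M = {(a, b) for (a, b) in leq_Y if a in M and b in M}
    p = {x: f[x] for x in X}          # surjective part X ->> M
    m = {a: a for a in M}             # ff inclusion M >-> Y
    return M, leq_M, p, m

# A small example: a 3-chain mapped non-injectively into another 3-chain.
X = {0, 1, 2}
leq_X = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)}
Y = {'a', 'b', 'c'}
leq_Y = {('a', 'a'), ('b', 'b'), ('c', 'c'),
         ('a', 'b'), ('b', 'c'), ('a', 'c')}
f = {0: 'a', 1: 'a', 2: 'b'}
M, leq_M, p, m = factorize(X, leq_X, Y, leq_Y, f)
```

Both factors are monotone by construction, and $m$ reflects the order since $\leq_{M}$ is defined as a restriction.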
Similarly, ordinary exactness of $\mathcal{C}$ implies $\mathsf{Pos}$-enriched
exactness of $\mathsf{Ord}(\mathcal{C})$. Again, this is a special case of
Proposition 63 in [7].
* 6.4 Proposition.
If $\mathcal{C}$ is an ordinary exact category, then
$\mathsf{Ord}(\mathcal{C})$ is exact.
###### Proof.
Suppose that the relation
$\langle{e_{0},e_{1}}\rangle\colon(E,\leq_{E})\rightarrowtail(X,\leq_{X})\times(X,\leq_{X})$
is a congruence on $(X,\leq_{X})\in\mathsf{Ord}(\mathcal{C})$. Then we have a
monomorphism $E\rightarrowtail X\times X\in\mathcal{C}$, i.e. a relation $E$
on $X$ in $\mathcal{C}$. Reflexivity and transitivity of the congruence in
$\mathsf{Ord}(\mathcal{C})$ imply the same properties for the relation
$E\colon X\looparrowright X$ in $\mathcal{C}$. Then $R\coloneqq E\cap
E^{\circ}$ is an equivalence relation on $X$ in $\mathcal{C}$.
Since $\mathcal{C}$ is exact, there exists an exact sequence
$R\rightrightarrows X\xrightarrow{\,q\,}Q$ in $\mathcal{C}$, which means that $q$ is the
coequalizer of $R$ and $R$ is the kernel pair of $q$. Let $\leq_{Q}\coloneqq
q(E)$ be the image of $E$ along $q$ in $\mathcal{C}$. Note that in the
calculus of relations in the ordinary regular category $\mathcal{C}$ we can
write $q(E)=qEq^{\circ}$, while exactness of the sequence is equivalent to
$q^{\circ}q=R$ and $qq^{\circ}=\Delta_{Q}$.
Now observe that $\leq_{Q}$ is indeed an internal partial order relation in
$\mathcal{C}$. It is reflexive because $E$ is so and $q$ is a regular
epimorphism. For transitivity we argue as follows:
$\leq_{Q}\circ\leq_{Q}=qEq^{\circ}qEq^{\circ}=qEREq^{\circ}=qE(E\cap E^{\circ})Eq^{\circ}=qEq^{\circ}=\leq_{Q}.$
Finally, for anti-symmetry we have
$\displaystyle\leq_{Q}\cap\leq_{Q}^{\circ}=qq^{\circ}(\leq_{Q}\cap\leq_{Q}^{\circ})qq^{\circ}=q(q^{\circ}\leq_{Q}q\,\cap\,q^{\circ}\leq_{Q}^{\circ}q)q^{\circ}=q(E\cap E^{\circ})q^{\circ}=qq^{\circ}qq^{\circ}=\Delta_{Q}.$
Now we claim that
$e_{0},e_{1}\colon(E,\leq_{E})\rightrightarrows(X,\leq_{X})$ is
the kernel congruence of the morphism $q\colon(X,\leq_{X})\to(Q,\leq_{Q})$ in
$\mathsf{Ord}(\mathcal{C})$. So let
$g_{0},g_{1}\colon(Z,\leq_{Z})\to(X,\leq_{X})$ be such that $qg_{0}\leq
qg_{1}$. In terms of generalized elements in $\mathcal{C}$ this means that
$(qg_{0},qg_{1})\in_{Z}\leq_{Q}$, which in turn is equivalent to
$(g_{0},g_{1})\in_{Z}q^{-1}(\leq_{Q})$. But now observe that
$q^{-1}(\leq_{Q})=q^{-1}(q(E))=q^{\circ}qEq^{\circ}q=RER=(E\cap
E^{\circ})E(E\cap E^{\circ})=E$
Thus, $(g_{0},g_{1})\in_{Z}E$ as desired. ∎
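The construction in the proof can likewise be carried out by hand in $\mathsf{Ord}(\mathsf{Set})$ for a finite preorder. The sketch below (our own illustrative code, no library involved) computes $R=E\cap E^{\circ}$, the quotient $q\colon X\twoheadrightarrow Q=X/R$, and the induced order $\leq_{Q}=q(E)$, which one can check is antisymmetric.

```python
# A finite instance of the construction in Proposition 6.4, in Ord(Set):
# from a preorder E on X, form the equivalence R = E ∩ E°, the quotient
# map onto Q = X/R, and the induced partial order leq_Q = q(E).

def posetal_quotient(X, E):
    R = {(x, y) for (x, y) in E if (y, x) in E}               # E ∩ E°
    cls = {x: frozenset(y for y in X if (x, y) in R) for x in X}
    Q = set(cls.values())                                     # X / R
    leq_Q = {(cls[x], cls[y]) for (x, y) in E}                # q E q°
    return Q, leq_Q, cls

# Example: a preorder identifying 0 and 1, with both below 2.
X = {0, 1, 2}
E = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0), (0, 2), (1, 2)}
Q, leq_Q, cls = posetal_quotient(X, E)
```

Collapsing the symmetric part of $E$ is precisely what makes $\leq_{Q}$ antisymmetric, mirroring the relational computation $\leq_{Q}\cap\leq_{Q}^{\circ}=\Delta_{Q}$ in the proof.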
In particular, if $\mathcal{C}$ is an ordinary regular category and
$\mathcal{C}_{oex/reg}$ is its ordinary exact completion, then
$\mathsf{Ord}(\mathcal{C}_{oex/reg})$ is an exact $\mathsf{Pos}$-category. In
the remainder of this paper we want to prove that the latter category is
equivalent to the exact completion in the enriched sense of both
$\mathsf{Ord}(\mathcal{C})$ and $\mathcal{C}$ itself.
Consider a regular functor $F\colon\mathcal{C}\to\mathcal{D}$ between ordinary
regular categories. Since $F$ preserves internal partial order relations, we
have an induced functor
$\mathsf{Ord}(F)\colon\mathsf{Ord}(\mathcal{C})\to\mathsf{Ord}(\mathcal{D})$
defined on objects by $(X,\leq_{X})\mapsto(FX,F(\leq_{X}))$. By the
construction of finite weighted limits in $\mathsf{Ord}(\mathcal{C})$, the
fact that $F$ preserves finite limits implies the same for the enriched
functor $\mathsf{Ord}(F)$. Similarly, $F$ preserving regular epimorphisms
translates to the fact that $\mathsf{Ord}(F)$ preserves
$\mathsf{so}$-morphisms. Thus, $\mathsf{Ord}(F)$ is a regular functor between
regular $\mathsf{Pos}$-categories.
We now turn to discussing how the properties of $F$ being fully faithful and
covering translate to properties of $\mathsf{Ord}(F)$.
* 6.5 Lemma.
Let $F\colon\mathcal{C}\to\mathcal{D}$ be an ordinary regular functor which is
fully faithful. Then
$\mathsf{Ord}(F)\colon\mathsf{Ord}(\mathcal{C})\to\mathsf{Ord}(\mathcal{D})$
is fully order-faithful.
###### Proof.
It is clear that $\mathsf{Ord}(F)$ is faithful, since its action on morphisms
is that of $F$ itself. Since $\mathsf{Ord}(\mathcal{C})$ has finite limits and
$\mathsf{Ord}(F)$ preserves them, this is equivalent to order-faithfulness.
Now consider any $h\colon(FX,F(\leq_{X}))\to(FY,F(\leq_{Y}))$. By fullness of
$F$, there exists an $f\colon X\to Y\in\mathcal{C}$ such that $Ff=h$. It
suffices then to show that $f$ is order-preserving. For this, observe that
$Ff=h$ being a morphism in $\mathsf{Ord}(\mathcal{D})$ means that there is a
commutative square in $\mathcal{D}$ in which the top arrow is a morphism $F(\leq_{X})\to F(\leq_{Y})$ lying over $Ff\times Ff\colon F(X\times X)\cong FX\times FX\to FY\times FY\cong F(Y\times Y)$.
By full faithfulness of $F$ this is then reflected to a commutative diagram in
$\mathcal{C}$ which exhibits $f$ as a morphism $(X,\leq_{X})\to(Y,\leq_{Y})$
in $\mathsf{Ord}(\mathcal{C})$. ∎
* 6.6 Lemma.
Let $F\colon\mathcal{C}\to\mathcal{D}$ be an ordinary regular functor which is
covering. Then
$\mathsf{Ord}(F)\colon\mathsf{Ord}(\mathcal{C})\to\mathsf{Ord}(\mathcal{D})$
is covering.
###### Proof.
Consider any object $(Y,\leq_{Y})\in\mathsf{Ord}(\mathcal{D})$. Since $F$ is
covering, we can find a regular epimorphism $q\colon FX\twoheadrightarrow Y$
in $\mathcal{D}$. We can then consider $FX$ with the discrete order and so we
have an object $(FX,\Delta_{FX})\in\mathsf{Ord}(\mathcal{D})$. Since $q$ is a
regular epimorphism in $\mathcal{D}$, we have an $\mathsf{so}$-morphism
$q\colon(FX,\Delta_{FX})\twoheadrightarrow(Y,\leq_{Y})$ in
$\mathsf{Ord}(\mathcal{D})$. That is to say, we have an $\mathsf{so}$-morphism
$q\colon\mathsf{Ord}(F)(X,\Delta_{X})\twoheadrightarrow(Y,\leq_{Y})$ and so
$\mathsf{Ord}(F)$ is covering. ∎
Putting everything together we now obtain the main result of this section.
* 6.7 Proposition.
For any regular ordinary category $\mathcal{C}$ there is an equivalence of
$\mathsf{Pos}$-categories
$\mathsf{Ord}(\mathcal{C}_{oex/reg})\simeq\mathsf{Ord}(\mathcal{C})_{ex/reg}\simeq\mathcal{C}_{ex/reg}$,
where $\mathcal{C}_{oex/reg}$ denotes the exact completion of $\mathcal{C}$ as
an ordinary category.
###### Proof.
The ordinary regular functor $\Gamma\colon\mathcal{C}\to\mathcal{C}_{oex/reg}$
is fully faithful and covering. By the preceding lemmas we have then that
$\mathsf{Ord}(\Gamma)\colon\mathsf{Ord}(\mathcal{C})\to\mathsf{Ord}(\mathcal{C}_{oex/reg})$
satisfies the same properties in the enriched sense. Furthermore, the category
$\mathsf{Ord}(\mathcal{C}_{oex/reg})$ is exact by Proposition 6.4, and thus from the results of Section 5 we deduce that
$\mathsf{Ord}(\mathcal{C}_{oex/reg})\simeq\mathsf{Ord}(\mathcal{C})_{ex/reg}$.
Similarly, the composite functor
$\mathcal{C}\to\mathsf{Ord}(\mathcal{C})\to\mathsf{Ord}(\mathcal{C}_{oex/reg})$
is regular, fully faithful and covering, being a composition of two functors
satisfying these properties. Again by the results of Section 5 we conclude that
$\mathcal{C}_{ex/reg}\simeq\mathsf{Ord}(\mathcal{C}_{oex/reg})$. ∎
### Acknowledgements
I would like to thank my PhD advisors, Marino Gran and Panagis Karazeris, for
their guidance during the time in which the research contained in this article
was conducted. I am also indebted to Pierre-Alain Jacqmin for reading an
earlier version of this paper and providing invaluable comments and feedback
which improved the quality of the work. I furthermore thank Christina
Vasilakopoulou and Konstantinos Tsamis for useful conversations regarding the
topics of this paper. Special thanks are due to the anonymous referee, whose
useful comments improved the presentation. The present work was financially
supported by the Conseil de Recherche of the Université Catholique de Louvain
in the form of a “Fonds Spéciaux de Recherche” grant, for which I express my
sincere gratitude.
## References
* [1] J. Adámek, M. Dostál, J. Velebil, _A categorical view of varieties of ordered algebras_ , arXiv preprint:2011.13839 (2020).
* [2] J. Adámek, C. Ford, S. Milius, L. Schröder, _Finitary monads on the category of posets_ , arXiv preprint:2011.14796 (2020).
* [3] V. Aravantinos-Sotiropoulos, P. Karazeris, _A property of effectivization and its uses in categorical logic_ , Theory and Applications of Categories, 32 (2017), pp. 769-779.
* [4] M. Barr, _Exact categories and categories of sheaves_ , Lecture Notes in Mathematics, vol. 236 Springer (1970).
* [5] S. L. Bloom, J. B. Wright, _P-varieties: A signature independent characterization of varieties of ordered algebras_ , Journal of Pure and Applied Algebra, 29 (1983), pp. 13-58.
* [6] F. Borceux, _Handbook of Categorical Algebra 2_ vol. 51 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge (1994).
* [7] J. Bourke, R. Garner, _Two-dimensional regularity and exactness_ , Journal of Pure and Applied Algebra, 218 (2014), 1346-1371.
* [8] S. Bulman-Fleming, M. Mahmoudi, _The Category of S-Posets_ , Semigroup Forum, 71 (2005), pp. 443-461.
* [9] P. J. Freyd, A. Scedrov, _Categories, allegories_ , Mathematical Library Vol 39, North-Holland (1990).
* [10] P.T. Johnstone, _Sketches of an Elephant: A Topos Theory Compendium_ , vol 43 & 44 of Oxford Logic Guides, The Clarendon Press, New York (2002).
* [11] G. M. Kelly, _Structures defined by finite limits in the enriched context_ , Cah. Top. Geom. Diff Categoriques, tome 23, no1 (1982), p. 3-42
* [12] G. M. Kelly, _A note on relations relative to a factorization system_ , Proceedings of the conference “Category Theory ’90” held in Como, Italy, July 22-28 1990.
* [13] G. M. Kelly, _Basic Concepts of Enriched Category Theory_ , London Mathematical Society Lecture Note Series 64, Cambridge University Press 1982.
* [14] A. Kurz, J. Velebil, _Quasivarieties and varieties of ordered algebras: regularity and exactness_ , Mathematical Structures in Computer Science, 27 (2017), pp. 1153-1194.
* [15] F.W. Lawvere, _Teoria delle categorie sopra un topos di base_ , Lecture Notes, University of Perugia, (1972).
* [16] M. Makkai, G. E. Reyes, _First Order Categorical Logic_ , Lecture Notes in Mathematics 611, Springer-Verlag, Berlin 1977.
* [17] J. Meisen, _On bicategories of relations and pullback spans_ , Communications in Algebra, (1974), pp. 377-401.
* [18] L. Nachbin, _Topology and Order_ , no. 4 in van Nostrand Mathematical Studies, Princeton, NJ, 1965.
* [19] H. A. Priestley, _Ordered topological spaces and the representation of distributive lattices_ , Proceedings of the London Mathematical Society, 3 (1972), pp. 507-530.
* [20] G. Richter, _Mal’cev conditions for categories_ , Categorical Topology, Proc. Conference Toledo, Ohio 1983 (Heldermann Verlag, Berlin, 1984), pp. 453-469.
* [21] R. Succi Cruciani, _La teoria delle relazioni nello studio di categorie regolari e di categorie esatte_ , Riv. Mat. Univ. Parma (4) 1 (1975), pp. 143-158.
We introduce the Python package FDApy, an implementation of functional data. This package provides modules for the analysis of such data. It includes classes for functional data of different dimensions as well as for irregularly sampled functional data. A simulation toolbox is also provided; it can be used to simulate different clusters of functional data. Some methodologies to handle these data are implemented, such as dimension reduction and clustering, and new methods can easily be added. The package is publicly available on the Python Package Index and GitHub.
§ INTRODUCTION
With a large number of applications ranging from sports to the automotive industry and healthcare, more and more phenomena produce observations in the form of sequences of possibly vector-valued measurements recorded intermittently at several discrete points in time. Functional data analysis (FDA) considers such data as values of realizations of a stochastic process, recorded with some error at discrete random times. The purpose of FDA is to study such trajectories, also called curves or functions. The concept of functional data is related to time series data, seen as dense, usually regular, samples of potentially non-smooth functions; to longitudinal data, seen as sparse and irregular samples of smooth functions; and even to image data, which can be represented as functions on two-dimensional domains.
In order to apply FDA to a real dataset, there is a need for appropriate software with up-to-date methodological implementations that allows new theoretical developments to be easily added. Currently, the most widely known software for FDA is the R package of [Ramsay et al., 2020], based on the work presented in [Ramsay and Silverman, 2005, Ramsay et al., 2009]. Usually, packages for FDA are specific to one method: one may cite [Brockhaus et al., 2020] and [Goldsmith et al., 2020] for regression and classification, [Bouveyron, 2015], [Schmutz et al., 2019] and [Bouveyron et al., 2020] for clustering, or [Tucker, 2020] and [Parodi et al., 2015] for functional data registration. Most of these packages are built upon R. However, in most of them, the functional data are restricted to univariate data that are well described by their coefficients in a given basis of functions. The recently released package of [Happ-Kurz, 2020] aims to provide a unified framework to handle univariate and multivariate functional data defined on domains of different dimensions; sparse functional data are also considered. Built on top of that framework, it implements multivariate functional principal component analysis (MFPCA) for data defined on different dimensional domains [Happ and Greven, 2018].
Concerning the Python community, only a few packages are related to FDA. One may cite [Löning et al., 2019] and [Tavenard et al., 2020], which provide tools for the analysis of time series through a compatible API; they implement time-series-specific methods such as DTW-based ones or shapelet learning. The only package that develops methods specific to FDA is the one of [Carreno et al., 2020]. In particular, it implements diverse registration techniques as well as statistical data depths for functional data. However, most of its methods are for one-dimensional data, and it only accepts multivariate functional data defined on the same unidimensional domain.
The FDApy package implements methods to handle functional data in Python based on an object-oriented approach. In particular, it provides classes to manipulate dense, irregularly sampled and multivariate functional data defined on one- or higher-dimensional domains. A large simulation toolbox, based on basis decompositions, is provided; it allows the parameters of different simulated clusters within the data to be configured. An implementation of MFPCA for data defined on different domains, as described in [Happ and Greven, 2018], is included. Moreover, the $\texttt{fCUBT}$ algorithm [Golovkine et al., 2020], used to build a partition of the data, is also available. All methods are implemented using the defined classes. The package is publicly available on GitHub[<https://github.com/StevenGolovkine/FDApy>] and the Python Package Index[<https://pypi.org/project/FDApy/>].
In the general case, the data consist of independent trajectories of a vector-valued stochastic process $X = (X^{(1)}, \dots, X^{(P)})^\top$, $P\geq 1$. For each $1\leq p \leq P$, let $\TT_p \subset \RR^{d_p}$ with $d_p\geq 1$, as for instance, $\TT_p = [0,1]^{d_p}$. The realizations of each coordinate $X^{(p)}:\TT_p \rightarrow \RR$ are assumed to belong to $\sLp{2}{\TT_p}$, the Hilbert space of squared-integrable, real-valued functions defined on $\TT_p$. Thus, $X$ is a stochastic process indexed by $\pointt = (t_1,\ldots,t_P)$ belonging to the $P-$fold Cartesian product $\TT \coloneqq \TT_1 \times \cdots\times \TT_P$ and taking values in the $P-$fold Cartesian product space $\HH \coloneqq \sLp{2}{\TT_1} \times \dots \times \sLp{2}{\TT_P}$. In practice, realizations of functional data are only obtained on a finite grid and possibly with noise. Let us consider $N$ curves $X_1, \dots, X_n, \dots, X_N$ generated as a random sample of the $P$-dimensional stochastic process $X$ with continuous trajectories. For each $1 \leq n \leq N$, and given a vector of positive integers $\boldsymbol{M}_n = (M_n^{(1)}, \dots, M_n^{(P)}) \in \mathbb{R}^P$, let $T_{n, \boldsymbol{m}} = (T_{n, m_1}^{(1)}, \dots, T_{n, m_P}^{(P)}), 1 \leq m_p \leq M_n^{(p)}, 1 \leq p \leq P$, be the random observation times for the curve $X_n$. These times are obtained as independent copies of a variable $\boldsymbol{T}$ taking values in $\TT$. The vectors $\boldsymbol{M}_1, \dots, \boldsymbol{M}_N$ represent an independent sample of an integer-valued random vector $\boldsymbol{M}$ with expectation $\boldsymbol{\mu}_{\boldsymbol{M}}$. We assume that the realizations of $X$, $\boldsymbol{M}$ and $\boldsymbol{T}$ are mutually independent. 
The observations associated with a curve, or trajectory, $X_n$ consist of the pairs $(Y_{n, \boldsymbol{m}}, T_{n, \boldsymbol{m}}) \in \mathbb{R}^P \times \mathcal{T}$ where $\boldsymbol{m} = (m_1, \dots, m_P), 1 \leq m_p \leq M_n^{(p)}, 1 \leq p \leq P$, and $Y_{n, \boldsymbol{m}}$ is defined as
\begin{equation}\label{eq:model}
Y_{n, \boldsymbol{m}} = X_n(T_{n, \boldsymbol{m}}) + \varepsilon_{n, \boldsymbol{m}}, \quad 1 \leq n \leq N,
\end{equation}
and $\varepsilon_{n, \boldsymbol{m}}$ are independent copies of a centered random vector $\varepsilon \in \RR^P$ with finite variance. We use the notation $X_n(\boldsymbol{t})$ for the value at $\boldsymbol{t}$ of the realization $X_n$ of $X$. Univariate functional data refers to the case where $P = 1$.
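The observation model above can be mimicked in a few lines of numpy; the process $X$ below is a toy choice of ours for illustration, not part of the package.

```python
import numpy as np

# Toy instance of the observation model with P = 1: each curve X_n is
# observed at M_n random times T_{n,m} in [0, 1] with additive noise.
rng = np.random.default_rng(42)
N = 5

def X(n, t):                                    # hypothetical realizations
    return np.sin(2 * np.pi * t) + 0.1 * n

observations = []
for n in range(N):
    M_n = int(rng.integers(5, 15))              # random number of times
    T_n = np.sort(rng.uniform(0, 1, size=M_n))  # observation times
    eps = rng.normal(0.0, 0.05, size=M_n)       # centered, finite variance
    Y_n = X(n, T_n) + eps                       # Y_{n,m} = X_n(T_{n,m}) + eps
    observations.append((T_n, Y_n))
```

Each entry of `observations` is one pair $(T_{n,\boldsymbol{m}}, Y_{n,\boldsymbol{m}})$-sequence, with its own number and location of sampling points.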
The remainder of the paper is organized as follows. In Section <ref>, we introduce the classes for an object-oriented implementation of functional data. Section <ref> describes the datasets used as examples. In Section <ref>, we present the creation and manipulation of functional data objects. Sections <ref>, <ref> and <ref> then demonstrate some of the methods that the package implements: the estimation of components, multivariate functional principal component analysis, and the algorithm used to find a partition of the sampled data.
§ CLASSES OF FUNCTIONAL DATA
The representation of functional data is done using two classes that both extend an abstract class FunctionalData:
* The DenseFunctionalData class represents dense functional data of arbitrary dimension (one for curves, two for images, etc.) sampled on a common set of observation points $t_1, \dotsc, t_M$ for all observations. It may have missing values within the data.
* The IrregularFunctionalData class represents irregularly sampled data of arbitrary dimension on different sets of observation points. The number and the location of the sampling points vary between observations. It must not have missing values within the data.
Finally, the implementation of the MultivariateFunctionalData class is different because it does not extend the FunctionalData class but the UserList one. Thus, an instance of MultivariateFunctionalData is defined as a list of $P$ elements from the DenseFunctionalData and/or IrregularFunctionalData classes that may be defined on different dimensional domains (e.g. curves and images). A diagram of the classes is given in Figure <ref>.
[Figure <ref>: class diagram — DenseFunctionalData and IrregularFunctionalData extend the abstract FunctionalData class; MultivariateFunctionalData extends UserList.]
Representation of the main classes
In practice, the distinction between dense and irregularly sampled functional data can be subtle. By design, dense functional data are assumed to be sampled on the complete grid $\mathcal{T} = \{t_1, \dotsc, t_M\}$, and measurement errors may exist. Taking data from sensors as an example, observations are recorded at a given sampling rate and are timestamped, but some anomalies may happen during the recording process. For irregularly sampled functional data, in contrast, we assume that the curves are observed at different sampling points, with potentially different numbers of points per curve. This is usually the case in medical studies, such as growth curve analysis, because one cannot expect the individuals to be measured at the exact same times.
The DenseFunctionalData and IrregularFunctionalData classes represent the data in a similar way: the instance variable argvals contains the sampling points, while the instance variable values contains the data. In the case of dense functional data, argvals is a dictionary where each entry contains a numpy array representing the common sampling points for a given dimension, while values is a numpy array containing the observations. For one-dimensional data sampled on a grid with $M$ points, argvals contains only one entry, an array of shape $(M,)$, and values is an array of dimension $(N, M)$ where each row is an observation. For two-dimensional observations with $M^{(1)} \times M^{(2)}$ sampling points, argvals contains two entries, the first being an array of shape $(M^{(1)},)$ and the second an array of shape $(M^{(2)},)$, and values is an array of dimension $(N, M^{(1)}, M^{(2)})$ where the first coordinate indexes the observation. Higher dimensional data are represented by adding an entry to the dictionary and a dimension to the array. For irregularly sampled functional data, both argvals and values are dictionaries. The entries of argvals are dictionaries whose entries consist of the sampling points of a particular observation; in a similar way, each entry of the values dictionary represents an observation. For one-dimensional irregularly sampled functional data, argvals contains one entry, which is a dictionary of size $N$ containing the sampling points as arrays of shape $(M_n,), 1 \leq n \leq N$, and values is a dictionary with $N$ entries containing the observations as arrays of shape $(M_n,), 1 \leq n \leq N$. For higher dimensions, each entry of the argvals dictionary represents a dimension of the process and contains another dictionary with $N$ entries for the sampling points. Likewise, the values dictionary has $N$ entries, each of them being an array of shape $(M^{(1)}_n, M^{(2)}_n, \dotsc), 1 \leq n \leq N$.
Finally, the MultivariateFunctionalData class inherits from the UserList class, and thus gathers $P$ instances of DenseFunctionalData and/or IrregularFunctionalData as a list. As a result, this class has access to all the methods applicable to lists, such as append, extend or pop. Given a specific dataset, instances of the three classes will generically be referred to as functional data objects in the following.
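Under the layout just described, toy dense and irregular data could be assembled as follows (illustrative arrays built by hand, not a real dataset):

```python
import numpy as np

# Dense layout: one common grid per input dimension, values as (N, M).
argvals_dense = {'input_dim_0': np.linspace(0, 1, 11)}      # M = 11 points
values_dense = np.random.rand(4, 11)                        # N = 4 curves

# Irregular layout: per-observation grids and values, keyed by index.
argvals_irr = {'input_dim_0': {0: np.array([0.0, 0.5, 1.0]),
                               1: np.array([0.2, 0.8])}}
values_irr = {0: np.array([1.0, 1.5, 0.7]),
              1: np.array([0.3, 0.9])}
```

The dense arrays share a single grid, while the irregular dictionaries keep one grid per observation, with matching lengths.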
§ DATA USED IN THE EXAMPLES
We will consider two datasets in the code examples. The first one is the Canadian weather data, presented in the textbook of [Ramsay and Silverman, 2005] and available in their R package [Ramsay et al., 2020]. The second dataset is the CD4 cell count dataset, used in [Goldsmith et al., 2013] and available in the package of [Goldsmith et al., 2020]. As both examples are one-dimensional, higher dimensional datasets, in particular image ones, will be simulated using the simulation toolbox provided in the package.
The Canadian weather dataset contains daily recordings of the temperature (in degrees Celsius) and the precipitation (in millimeters) for $N = 35$ Canadian cities spread across the country, averaged over the years 1960 to 1994. The daily temperature data will be used as an example of DenseFunctionalData defined on a one-dimensional domain. We will add the daily precipitation records to the temperature ones in order to create a MultivariateFunctionalData object with elements defined on different one-dimensional domains ($\mathcal{T}_1 = [1, 364]$ for the temperature and $\mathcal{T}_2 = [1, 363]$ for the precipitation).
From the MACS (Multicenter AIDS Cohort Study), the CD4 cell count dataset collects the number of CD4 cells per milliliter of blood of $N = 366$ participants. CD4 cells are a particular type of white blood cell and a key component of the immune system. HIV attacks the CD4 cells in the patient's blood, so the CD4 cell count can be viewed as a measure of disease progression. For this dataset, the number of CD4 cells was measured roughly twice a year and centered at the time of seroconversion, which is the time at which HIV becomes detectable. For each individual, the number of measurements varies between $1$ and $11$ over a period from $18$ months before to $42$ months after seroconversion. The sampling points differ between observations. We will use this dataset as an example of IrregularFunctionalData.
§ MANIPULATION OF FUNCTIONAL DATA OBJECTS
With the help of the two example datasets, this section presents how to create and manipulate a functional data object. In particular, we review the different instance variables used to extract information from the data. We also present methods to modify and plot functional data objects. General methods for MultivariateFunctionalData objects, such as the computation of the mean or covariance, usually call the corresponding methods for each univariate element and concatenate the results appropriately. For all the code examples, we assume that the required functions from the FDApy package are loaded, as well as the packages numpy and pandas, using the following code snippet:
import numpy as np
import pandas as pd
§.§ Creation of objects
Assuming the Canadian temperature data are stored in a temperature.csv file and the Canadian precipitation data in a precipitation.csv file, the following code loads the data into pandas dataframes and creates DenseFunctionalData instances from them. We explicitly name the dimension of the observations.
temperature = pd.read_csv('temperature.csv', index_col=0)
argvals = pd.factorize(temperature.columns)[0]
values = np.array(temperature)
dailyTemp = DenseFunctionalData({'input_dim_0': argvals}, values)
precipitation = pd.read_csv('precipitation.csv', index_col=0)
argvals = pd.factorize(precipitation.columns)[0]
values = np.array(precipitation)
dailyPrec = DenseFunctionalData({'input_dim_0': argvals}, values)
Given multiple univariate functional data objects, a MultivariateFunctionalData instance is created by passing a list of objects to the constructor method.
canadWeather = MultivariateFunctionalData([dailyTemp, dailyPrec])
The construction of an IrregularFunctionalData instance is similar, except that the dictionaries for argvals and values must contain an entry for each observation of the data. We consider that the CD4 cell count data are stored in a cd4.csv file containing a matrix of the CD4 counts for each patient on the common grid of all sampling points, with missing values coded as np.nan. Thus, the following code extracts only the non-missing values for each patient and constructs an instance of IrregularFunctionalData.
cd4 = pd.read_csv('cd4.csv', index_col=0)
all_argvals = cd4.columns.astype(np.int64)
argvals = {idx: np.array(all_argvals[~np.isnan(row)])
           for idx, row in enumerate(cd4.values)}
values = {idx: row[~np.isnan(row)]
          for idx, row in enumerate(cd4.values)}
cd4counts = IrregularFunctionalData({'input_dim_0': argvals}, values)
Two loaders are included within the package. They can be used to load already well formatted data from csv or ts files; ts files are used, in particular, in the UEA $\&$ UCR Time Series Classification Repository[<http://www.timeseriesclassification.com/index.php>]. These functions are wrappers around the above code snippets. Nonetheless, multivariate functional data cannot be imported in this way.
Basic information about a functional data object is printed to the standard output when the object is evaluated in the command line. For example, for the temperature dataset, the output is:
Univariate functional data object with 35 observations on a 1-dimensional
The outputs are similar for instances of the other types of functional data objects.
For dense and irregular functional data objects, a subset of the data can be extracted using the convenient subscripting syntax provided by Python. For example, in order to get observations $5$ to $12$ from the temperature data, we may write dailyTemp[5:13], which prints:
Univariate functional data object with 8 observations on a 1-dimensional
Note that this will not work with MultivariateFunctionalData instances: subscripting such an object instead returns the corresponding univariate functional data in the list. However, an iterator over the observations of the multivariate functional data is provided.
With regard to Remark <ref>, we implement functions to convert DenseFunctionalData instances into IrregularFunctionalData instances and to do the reverse operation. The missing values are coded with np.nan. The code is thus written:
dailyTemp.as_irregular() # dense to irregular
cd4counts.as_dense() # irregular to dense
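The idea behind the dense-to-irregular conversion can be sketched as follows; `dense_to_irregular` is our own illustration of the principle, not FDApy's implementation.

```python
import numpy as np

def dense_to_irregular(argvals, values):
    """Sketch of the dense -> irregular conversion: drop the np.nan
    entries of each observation, keeping a per-observation grid."""
    grid = argvals['input_dim_0']
    keep = {i: ~np.isnan(row) for i, row in enumerate(values)}
    new_argvals = {'input_dim_0': {i: grid[k] for i, k in keep.items()}}
    new_values = {i: values[i][k] for i, k in keep.items()}
    return new_argvals, new_values

# Toy dense data with one missing value per observation.
vals = np.array([[1.0, np.nan, 3.0], [np.nan, 2.0, 4.0]])
a, v = dense_to_irregular({'input_dim_0': np.array([0.0, 0.5, 1.0])}, vals)
```

The reverse operation amounts to padding each observation back onto the union grid, reintroducing np.nan where a point was not observed.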
§.§ Access to the instance variables
The functional data classes come with multiple instance variables, which are accessed in Python using the usual dot notation. We present some of them in the following. Note that some variables cannot be accessed directly for multivariate functional data and have to be retrieved by looping through the univariate elements.
Of course, the sampling points and observed values are accessible through argvals and values, which hold what the user gave to the object constructor. Furthermore, we provide a variable with the same shape as argvals but containing normalized sampling points. Other instance variables are the following:
* The number of observations in the object.
* The number of sampling points in the object for each dimension, as a dictionary. For a multivariate functional data object, it is a list of $P$ entries. For irregularly sampled data, the returned number is the mean number of sampling points per observation.
* The input dimension of the functional data (one for curves, two for images, etc.). For MultivariateFunctionalData objects, it is expressed as a list.
* The minimum and maximum values of the observations, as a tuple.
* The minimum and maximum values of the sampling points, as a tuple. The calculation is based on the normalized sampling points.
§.§ Plotting
Basic plotting methods for functional data objects are provided in the package. They are built upon the matplotlib package, which we assume is loaded with
import matplotlib.pyplot as plt
The plot method returns an Axes instance from the matplotlib library. Thus, all the plotting options relative to ticks, frames and so on are modifiable through this Axes instance. Customization of the graph parameters, such as colors, line types or line widths, can be made by passing the arguments as inputs to the function. The following snippet plots the temperature curves of all the Canadian weather stations (represented as a DenseFunctionalData object),
_ = plot(dailyTemp)
plt.title('Daily Temperature Data')
while a plot of the CD4 cell counts for $10$ patients on the log-scale (represented as an IrregularFunctionalData object) is given by
_ = plot(cd4counts[5:15])
plt.xlabel('Month since seroconversion')
plt.ylabel('CD4 cell counts (log-scale)')
plt.title('CD4 counts for individual 5-14')
The plots are shown in Figure <ref>.
Results of the plot method for functional data objects.
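A minimal version of such a plotting routine, written directly with matplotlib and returning the Axes for further customization, might look as follows (`plot_dense` is a hypothetical helper of ours, not the package's plot function):

```python
import matplotlib
matplotlib.use('Agg')          # non-interactive backend for scripted use
import matplotlib.pyplot as plt
import numpy as np

def plot_dense(grid, values, **kwargs):
    """Plot each observation (row of `values`) against the common grid
    and return the matplotlib Axes, so the caller can customize it."""
    _, ax = plt.subplots()
    for row in values:
        ax.plot(grid, row, **kwargs)
    return ax

grid = np.linspace(0, 1, 50)
curves = np.sin(2 * np.pi * grid * np.arange(1, 4)[:, None])  # 3 curves
ax = plot_dense(grid, curves, linewidth=1)
ax.set_title('Toy dense functional data')
```

Returning the Axes, rather than calling plt.show() internally, is what makes post-hoc customization (titles, labels, ticks) possible.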
§.§ Data simulation
Simulation functions are implemented in order to test new methodological developments. The data can be simulated using a truncated version of the Karhunen-Loève representation (class KarhunenLoeve) as well as diverse Brownian motions (class Brownian); both inherit from the Simulation class (see Figure <ref>). An element of the Simulation class has two principal instance variables: one containing the basis used and one containing the simulated observations (filled after running the new method).
[Figure <ref>: class diagram — Brownian and KarhunenLoeve extend the abstract Simulation class.]
Links between classes in the simulation toolbox.
For Brownian motions, three types are implemented. For example, we can simulate $N = 10$ realizations of a fractional Brownian motion on the one-dimensional observation grid $\{0, 0.01, \dotsc, 1\}$ with a Hurst parameter equal to $0.7$ using
brownian = Brownian(name='fractional')
brownian.new(n_obs=10, argvals=np.linspace(0, 1, 101),
The process $X$ admits a Karhunen-Loève decomposition. Each of its realizations can be represented using this decomposition, truncated at $J$ terms:
$$X_n(t) = \mu(t) + \sum_{j = 1}^{J}\xi_{j, n}\phi_j(t), \quad t \in \mathcal{T},~ n = 1, \dotsc, N,$$
with a common mean function $\mu$ and an orthonormal basis of functions $\{\phi_j\}_{j=1, \dotsc, J}$. The coefficients $\xi_{j, n}$ are realizations of random Gaussian variables $\xi_{j}$ such that $\EE(\xi_j) = 0$ and $\Var(\xi_j) = \lambda_j$, with eigenvalues $\lambda_j \geq 0$ that decrease towards $0$. Multiple orthonormal bases are implemented: Legendre polynomials, eigenfunctions of a Wiener process, Fourier series and B-splines bases. The variance of the coefficients can have a linear or exponential decrease, or be set to the eigenvalues of a Wiener process; users may also supply their own. New bases can easily be added.
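The truncated expansion above is straightforward to reproduce outside the package. The following plain-numpy sketch (all names are ours, not the package's API) draws $N$ zero-mean curves using an orthonormal sine basis on $[0, 1]$ and exponentially decreasing eigenvalues:

```python
# Illustrative sketch (assumed names): simulate curves from a truncated
# Karhunen-Loeve expansion with zero mean, an orthonormal sine basis on
# [0, 1], and exponentially decreasing eigenvalues.
import numpy as np

def simulate_kl(n_obs, argvals, n_functions, rng=None):
    rng = np.random.default_rng(rng)
    j = np.arange(1, n_functions + 1)
    eigenvalues = np.exp(-(j + 1) / 2)           # exponential decrease towards 0
    # phi_j(t) = sqrt(2) sin(j pi t) is orthonormal in L^2([0, 1])
    basis = np.sqrt(2) * np.sin(np.outer(j, np.pi * argvals))
    # xi_{j, n} ~ N(0, lambda_j), independent across j and n
    scores = rng.normal(0.0, np.sqrt(eigenvalues), size=(n_obs, n_functions))
    return scores @ basis                         # shape (n_obs, len(argvals))

curves = simulate_kl(n_obs=10, argvals=np.linspace(0, 1, 101), n_functions=5)
```

Each row of `curves` is one realization $X_n$ evaluated on the grid.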
For example, we can simulate $N = 10$ curves on $\mathcal{T} = [0, 1]$, using $5$ eigenfunctions from a B-splines basis on $\mathcal{T}$ and eigenvalues with exponential decrease:
kl = KarhunenLoeve(name='bsplines', n_functions=5)
kl.new(n_obs=10, argvals=np.linspace(0, 1, 101))
Example of simulated data. (a) Brownian motion. (b) Karhunen-Loève expansion
Figure <ref> presents a plot of the simulated Brownian motions and those from the Karhunen-Loève decomposition. The simulation of two dimensional data is based on the tensor product of basis functions. Simulation for higher dimensional data is not implemented.
We also added methods to generate noisy observations as well as sparse data. Note that these functions are only implemented for instances of the simulation classes. The add_noise function adds pointwise noise to the observations. Both homoscedastic and heteroscedastic noise are implemented: if a single scalar is given as a parameter, homoscedastic noise is simulated; for the heteroscedastic case, a lambda function or a vector of appropriate size can be supplied by the user. The noisy data are stored in a dedicated instance variable. For example, to add random noise with variance $\sigma^2 = 0.05$, we run
kl.add_noise(var_noise=0.05)
and, for heteroscedastic noise with variance defined by $x \rightarrow \sqrt{1 + \lvert x \rvert}$,
kl.add_noise(var_noise=lambda x: np.sqrt(1 + np.abs(x)))
The sparsify function randomly removes sampling points from the observations. Precisely, we randomly generate the number of sampling points to retain for each observation and then randomly select the sampling points to remove. The sparse data are stored in a dedicated instance variable. For example, to randomly remove about $50\%$ of the sampling points (within $\pm 5\%$) from the simulated Brownian data, we run
brownian.sparsify(percentage=0.5, epsilon=0.05)
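The sparsification logic can be sketched in a few lines (function and argument names mirror the call above but the implementation is our own, not the package's):

```python
# Hypothetical re-implementation of the sparsification step: for each curve,
# draw the retained fraction uniformly in [percentage - epsilon,
# percentage + epsilon], then keep a random subset of the sampling points.
import numpy as np

def sparsify(curves, percentage=0.5, epsilon=0.05, rng=None):
    rng = np.random.default_rng(rng)
    n_obs, n_points = curves.shape
    sparse = []
    for n in range(n_obs):
        frac = rng.uniform(percentage - epsilon, percentage + epsilon)
        n_keep = max(1, int(round(frac * n_points)))
        idx = np.sort(rng.choice(n_points, size=n_keep, replace=False))
        sparse.append((idx, curves[n, idx]))      # (kept indices, kept values)
    return sparse

demo = np.arange(20.0).reshape(2, 10)
out = sparsify(demo, percentage=0.5, epsilon=0.0, rng=0)
```

The result is a list of (indices, values) pairs, one per curve, which is a natural storage format for irregularly sampled data.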
Figure <ref> presents a plot of the noisy and sparse versions of the Karhunen-Loève simulated data.
Results of the add_noise and sparsify functions on the Karhunen-Loève simulated data. (a) Noisy data. (b) Sparse data.
§.§.§ Clusters simulation
Let $K$ be a positive integer, and let $Z$ be a discrete random variable taking values in the range $\{1, \dotsc, K\}$ such that
$$\mathbb{P}(Z = k) = p_k \quad\text{with}\quad p_k > 0 \quad\text{and}\quad \sum_{k=1}^K p_k = 1.$$
The variable $Z$ represents the cluster membership of the realizations of the process. We consider that the stochastic process follows a functional mixture model with $K$ components, that is, it allows for the following decomposition:
$$X(t) = \sum_{k = 1}^{K}\mu_k(t)\1_{\{Z = k\}} + \sum_{j \geq 1}\xi_j\phi_j(t), \quad t \in \mathcal{T},$$
where:
* $\mu_1, \dotsc, \mu_K$ are the mean curves per cluster.
* $\{\phi_j\}_{j \geq 1}$ is an orthonormal basis of functions.
* $\xi_j, j \geq 1$ are real-valued random variables which are conditionally independent given $Z$. For each $1 \leq k \leq K$, $\xi_j \vert Z = k \sim \mathcal{N}(0, \sigma_{kj}^2)$.
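A minimal numpy sketch of this mixture model (all names are assumptions, not the package's API) first draws the labels $Z$ and then the curves:

```python
import numpy as np

def simulate_mixture(n_obs, argvals, means, sigma, p, rng=None):
    """means: list of K mean functions; sigma: (K, J) array of standard
    deviations sigma_kj; p: mixing weights (p_1, ..., p_K)."""
    rng = np.random.default_rng(rng)
    n_clusters, n_functions = sigma.shape
    j = np.arange(1, n_functions + 1)
    basis = np.sqrt(2) * np.sin(np.outer(j, np.pi * argvals))  # orthonormal basis
    labels = rng.choice(n_clusters, size=n_obs, p=p)           # memberships Z
    curves = np.empty((n_obs, argvals.size))
    for n, k in enumerate(labels):
        xi = rng.normal(0.0, sigma[k])      # xi_j | Z = k ~ N(0, sigma_kj^2)
        curves[n] = means[k](argvals) + xi @ basis
    return curves, labels

argvals = np.linspace(0, 1, 51)
means = [lambda t: np.zeros_like(t), lambda t: 5 * t]
sigma = np.array([[0.1, 0.2], [0.3, 0.1]])
curves, labels = simulate_mixture(100, argvals, means, sigma, p=[0.4, 0.6], rng=0)
```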
For example, we can generate $N = 10$ realizations of two clusters using $3$ eigenfunctions with given coefficients:
N = 10
n_features = 3
n_clusters = 2
centers = np.array([[2, -1], [-0.5, 1.5], [0, 0]])
cluster_std = np.array([[2, 1], [0.5, 1], [1, 1]])
simu = KarhunenLoeve('wiener', n_functions=n_features)
simu.new(n_obs=N, n_clusters=n_clusters,
centers=centers, cluster_std=cluster_std)
Figure <ref> shows the plot of the simulated data corresponding to the previous code snippet.
Simulation of data with two clusters. Each color represents a cluster.
§ PARAMETERS ESTIMATION
§.§ Curves denoising
Considering the model defined in (<ref>), we assume that $P = 1$ and, for the sake of readability, omit the superscript. The objective is to estimate the function $X_n(\cdot)$ using the available sample points. To this end, we consider local polynomial smoothers [Fan and Gijbels, 1996]. This type of estimator crucially depends on a tuning parameter, the bandwidth.
Let $\degree \geq 0$ be an integer and $\T \in \TT$ be the evaluation point for the estimation of $X_n$. For any $u \in \RR$, we consider the vector $U(u) = (1, u, \dotsc, u^{\degree}/\degree!)$ and set $U_h(\cdot) = U(\cdot / h)$. Let $K: \RR \rightarrow \RR$ be a positive kernel and define $K_h(\cdot) = h^{-1}K(\cdot/h)$. Moreover, we define:
\begin{equation}\label{eq:loc-poly-min}
\vartheta_{M_n,h} \coloneqq
\argmin_{\vartheta\in\RR^{\degree+1}}
\sum_{m = 1}^{M_n}\left\{ Y_{n, m} - \vartheta^\top
U_h\left(T_{n, m} - \T \right)\right\}^2
K_h\left(T_{n, m} - \T \right),
\end{equation}
where $h$ is the bandwidth.
The vector $\vartheta_{M_n,h}$ satisfies the normal equations $A \vartheta_{M_n,h} = a$ with
\begin{align}
A = A_{M_n,h} &= \frac{1}{M_n} \sum_{m=1}^{M_n} U_h\left( T_{n,m} -\T \right)U_h^\top\left( T_{n, m} - \T \right)K_h\left( T_{n, m} - \T \right)\label{eq:Anstar}\\
a = a_{M_n,h} &= \frac{1}{M_n} \sum_{m=1}^{M_n}
Y_{n,m} U_h\left( \Tnm - \T \right) K_h\left( \Tnm - \T \right).\label{eq:anstar}
\end{align}
Let ${\lambda}$ be the smallest eigenvalue of the matrix ${A}$ and note that, whenever ${\lambda} > 0$, we have ${\vartheta}_{M_n,h} = A^{-1} {a}$.
Given an estimate $\widehat h$ of the bandwidth, the local polynomial estimator $\hatXp{n}(\T)$ of degree $\degree$ is given by:
\begin{equation}\label{eq:loc-poly}
\hatXp{n}(\T) = U^\top(0) \widehat{\vartheta}, \quad\text{where}\quad \widehat{\vartheta} = \vartheta_{M_n,\widehat h}.
\end{equation}
If $\degree = 0$, we are in the particular case of the Nadaraya-Watson estimator. The Gaussian, Epanechnikov, tri-cube and bi-square kernels are implemented and others can be added in a modular way. We propose an estimate of the bandwidth $h$ that is based on the regularity of the underlying function [Golovkine et al., 2020].
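As a concrete illustration, the estimator above can be written in a few lines of numpy. This is a sketch with assumed names, using the Epanechnikov kernel and a fixed bandwidth rather than the estimated one:

```python
import math
import numpy as np

def local_poly(T, Y, t0, h, degree=1):
    """Degree-`degree` local polynomial fit of the pairs (T_m, Y_m) at the
    point t0, with bandwidth h and the Epanechnikov kernel."""
    u = (T - t0) / h
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0) / h  # K_h(T - t0)
    fact = np.array([math.factorial(d) for d in range(degree + 1)])
    U = u[:, None] ** np.arange(degree + 1) / fact    # rows are U_h(T_m - t0)
    A = (U * K[:, None]).T @ U / len(T)               # normal-equation matrix A
    a = U.T @ (K * Y) / len(T)                        # right-hand side a
    theta = np.linalg.solve(A, a)                     # theta = A^{-1} a
    return theta[0]                                   # U(0)^T theta = fit at t0

# With degree=0 this reduces to the Nadaraya-Watson estimator.
T = np.linspace(0, 1, 201)
Y = 2.0 + 3.0 * T
est = local_poly(T, Y, t0=0.5, h=0.2, degree=1)       # exact for linear data
```

On noiseless linear data, the degree-one (local linear) fit recovers the function exactly, which is a convenient sanity check for the normal equations.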
For example, if we want to smooth the daily temperature curves using a local polynomial smoother with an estimate of bandwidth at $t_0 = 0.5$ and a neighborhood of $2$ points, we run:
dailyTemp_smooth = dailyTemp.smooth(points=0.5,
                                    neighborhood=2)
Figure <ref> presents the plot of the smoothed temperature data compared to the original ones.
(a) Curve and (b) smoothed estimation for the Canadian Temperature data.
§.§ Mean and covariance estimation
In this section, we develop estimators for the mean and the covariance functions of a component $X^{(p)}, 1 \leq p \leq P$, of the process $X$. These estimators can then be used to compute estimates of the eigenvalues and eigenfunctions of $X^{(p)}$ for the Karhunen-Loève expansion.
Let $\widehat{X}_n^{(p)}$ be a suitable nonparametric estimator of the curve $X_n^{(p)}$ built from the $M_n^{(p)}$ pairs $(Y_{n, m_p}^{(p)}, T_{n, m_p}^{(p)}), n = 1, \dots, N_0$, for instance the local polynomial estimator presented in the previous subsection. With the $\widehat{X}_n$'s at hand, tuned for the mean function estimation, we define
\begin{equation}\label{eq:est_mean}
\widehat \mu_{N}^{(p)}(t_p) = \frac{1}{N} \sum_{n=1}^{N} \widehat{X}_n^{(p)}(t_p), \quad t_p \in \TT_p.
\end{equation}
For example, the code snippet for the estimation of the mean curve of the daily temperature curves, using a local linear smoother with bandwidth equal to $0.05$, is
mean_temp = dailyTemp.mean(smooth='LocalLinear',
                           bandwidth=0.05)
For the covariance function, following [Yao et al., 2005], we distinguish the diagonal from the non-diagonal points. With the $\widehat{X}_n^{(p)}$'s at hand, tuned for the covariance function estimation, we define
\begin{equation}\label{eq:est_cov1}
\widehat{C}_{p, p}(s_p,t_p) = \frac{1}{N} \sum_{n=1}^{N} \widehat{X}_n^{(p)}(s_p)\widehat{X}_n^{(p)}(t_p) - \widehat{\mu}_{N}^{(p)}(s_p)\widehat{\mu}_{N}^{(p)}(t_p) ,\quad s_p, t_p \in \TT_p, \;\;s_p \neq t_p.
\end{equation}
The diagonal of the covariance is then estimated using two-dimensional kernel smoothing with $\widehat{C}_{p, p}(s_p,t_p), s_p \neq t_p$, as input data; see [Yao et al., 2005] for the details. For example, the covariance of the daily temperature curves, smoothed using a GAM, is computed with
cov_temp = dailyTemp.covariance(smooth='GAM')
(a) Mean and (b) covariance estimation for the Canadian Temperature data.
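Assuming the smoothed curves $\widehat{X}_n$ are evaluated on a common grid, the two estimators above reduce to elementary array operations (a plain-numpy sketch, not the package's code):

```python
import numpy as np

def mean_covariance(X_hat):
    """X_hat: (N, M) array of smoothed curves on a common grid.
    Returns mu_hat(t) = N^{-1} sum_n X_n(t) and
    C_hat(s, t) = N^{-1} sum_n X_n(s) X_n(t) - mu_hat(s) mu_hat(t)."""
    N = X_hat.shape[0]
    mu = X_hat.mean(axis=0)
    C = X_hat.T @ X_hat / N - np.outer(mu, mu)
    return mu, C

X = np.array([[1.0, 2.0], [3.0, 4.0]])
mu, C = mean_covariance(X)
```

The diagonal $C(t, t)$ would then be discarded and re-estimated by two-dimensional smoothing, as described above.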
§ MFPCA
The package implements MFPCA for data defined on potentially different domains, developed by [Happ and Greven, 2018]. The implementation of the method is built upon the functional data classes defined in the package. After giving a short review of the methodology in Section <ref>, we explain how to effectively use it in Section <ref>. For theoretical details, please refer to [Happ and Greven, 2018].
§.§ Methodological background
Following [Happ and Greven, 2018], the multivariate components of $X$ are computed by plugging in the univariate components computed from each component $X^{(p)}$. These estimations are done as follows.
* Perform a univariate fPCA on each of the components of $X$ separately. For a component $X^{(p)}$, the eigenfunctions and eigenvalues are computed from an eigenanalysis of the estimated covariance $\widehat{C}_{p, p}$. This results in a set of eigenfunctions $\left(\widehat{\rho}_1^{(p)}, \ldots, \widehat{\rho}_{J^{(p)}}^{(p)}\right)$ associated with a set of eigenvalues $\left(\widehat{\lambda}_1^{(p)}, \ldots, \widehat{\lambda}_{J^{(p)}}^{(p)}\right)$ for a given truncation integer $J^{(p)}$. The univariate scores for a realization $X_n^{(p)}$ of $X^{(p)}$ are then given by $\widehat{\mathbf{c}}_{j, n}^{(p)} = \inLp{\widehat{X}_n^{(p)}, \widehat{\rho}_j^{(p)}}, ~1 \leq j \leq J^{(p)}$.
* Define the matrix $\mathcal{Z} \in \mathbb{R}^{N_0 \times J_+}, J_+ = \sum_{p=1}^{P} J^{(p)}$, where each row stacks the scores of all components for a single observation, $\left(\widehat{\mathbf{c}}_{1, n}^{(1)}, \ldots, \widehat{\mathbf{c}}_{J^{(1)}, n}^{(1)}, \ldots, \widehat{\mathbf{c}}_{1, n}^{(P)}, \ldots, \widehat{\mathbf{c}}_{J^{(P)}, n}^{(P)}\right)$. Define $\mathbf{Z} \in \mathbb{R}^{J_+ \times J_+}$ such that $\mathbf{Z} = (N_0 - 1)^{-1}\mathcal{Z}^\top\mathcal{Z}$.
* An eigenanalysis of the matrix $\mathbf{Z}$ is performed and leads to the eigenvectors $\widehat{\boldsymbol{v}}_j$ and eigenvalues $\widehat{\lambda}_j$.
* Finally, the multivariate eigenfunctions are estimated with
\begin{equation*}
\widehat{\varphi}_j^{(p)}(t_p) = \sum\nolimits_{j^\prime = 1}^{J^{(p)}}[\widehat{\boldsymbol{v}}_j]_{j^\prime}^{(p)}\widehat{\rho}_{j^\prime}^{(p)}(t_p),\quad t_p \in \TT_p,~ 1 \leq j \leq J_+,~ 1 \leq p \leq P.
\end{equation*}
and the multivariate scores with
$$\widehat{\mathfrak{c}}_{j, n} = \mathcal{Z}_{{n,\cdot}}\widehat{\boldsymbol{v}}_j, \quad 1 \leq n \leq N_0, \quad 1 \leq j \leq J_+.$$
The multivariate Karhunen-Loève expansion of the process $X$ is thus
\begin{equation}\label{eq:KL_estim}
\widehat{X}_{n}(\pointt) = \widehat{\mu}_{N_0}(\pointt) + \sum_{j = 1}^J \widehat{\mathfrak{c}}_{j, n}\widehat{\varphi}_j(\pointt), \quad \pointt \in \mathcal{T}.
\end{equation}
where $\widehat{\mu}_{N_0}(\cdot) = \left(\widehat \mu_{N_0}^{(1)}(\cdot), \ldots, \widehat \mu_{N_0}^{(P)}(\cdot)\right)$ is the vector of the estimated mean functions.
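Steps 2-4 of the procedure are essentially an eigendecomposition of the stacked score matrix; the following plain-numpy sketch (assumed names, not the package's implementation) makes this explicit:

```python
import numpy as np

def mfpca_from_scores(score_blocks):
    """score_blocks: list of P arrays of shape (N0, J^(p)) holding the
    univariate scores. Returns the eigenvalues, the eigenvectors v_j
    (as columns), and the multivariate scores c_{j,n} = Zcal[n, :] v_j."""
    Zcal = np.hstack(score_blocks)           # (N0, J_+), one row per observation
    N0 = Zcal.shape[0]
    Z = Zcal.T @ Zcal / (N0 - 1)             # Z = (N0 - 1)^{-1} Zcal^T Zcal
    eigval, eigvec = np.linalg.eigh(Z)
    order = np.argsort(eigval)[::-1]         # sort eigenvalues decreasingly
    eigval, eigvec = eigval[order], eigvec[:, order]
    scores = Zcal @ eigvec                   # multivariate scores
    return eigval, eigvec, scores

rng = np.random.default_rng(0)
blocks = [rng.normal(size=(20, 3)), rng.normal(size=(20, 2))]
eigval, eigvec, scores = mfpca_from_scores(blocks)
```

The multivariate eigenfunctions are then reassembled from the blocks of each $\widehat{\boldsymbol{v}}_j$ as in the last step above.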
§.§ Implementation
The implementation of the MFPCA is based on the MFPCA class. Hence, we construct an object of this class specifying the number of eigencomponents that we want. The computation of the eigenelements is performed using the fit method, and the scores are then calculated using the transform method. Given scores, the inverse transformation back to the functional space is done using the inverse_transform method. This fit/transform/inverse_transform triptych follows the implementation choices of scikit-learn.
§.§.§ MFPCA for the Canadian Weather data
In this example, we perform a MFPCA of the bivariate Canadian Weather data. We expand each univariate element using a univariate FPCA with a number of components that explains $99\%$ of the variance within the data. The numbers of components are specified as a list in the constructor. The method parameter of fit indicates how the univariate scores are computed; here, we use numerical integration to derive them.
fpca = MFPCA(n_components=[0.99, 0.99])
fpca.fit(canadWeather, method='NumInt')
The scores are computed using the transform function:
scores = fpca.transform(data=canadWeather)
Results of the MFPCA for the Canadian Weather data. The first row represents the mean functions of the temperature (left) and precipitation (right) data. The second row corresponds to the bivariate eigenfunctions found for $99\%$ of explained variance.
The eigenvalues are stored as instance variables. We remark the rapid decrease of the eigenvalues; hence, only a few eigencomponents are needed to explain most of the variance within the data.
array([4.36e+01, 4.62e+00, 1.20e+00, 5.76e-01, 1.14e-01, 2.61e-02])
§.§.§ Implemented univariate basis expansion
The MFPCA is based on a univariate basis expansion of each of the components of the process. Currently, only two basis expansions are implemented; new bases can easily be added to the package. All univariate basis expansions should implement three methods: one to compute the elements of the basis, one to compute the scores of the observations within the basis, and one to return the observations in the functional space given their scores. The implemented bases are:
* Univariate Functional Principal Components Analysis, for data on one-dimensional domains. This basis was used in the Canadian Weather example. Multiple smoothing methods are implemented for the estimation of the mean and the covariance (see Section <ref>), such as local polynomial estimation or GAM with penalized B-splines. The scores are computed using numerical integration. For sparse functional data, one may also use the PACE algorithm [Yao et al., 2005]. The main argument to build an instance of the class is $\texttt{n\_comp}$, which is either the proportion of variance explained by the principal components, if $\texttt{n\_comp} < 1$, or the number of principal components to compute, if $\texttt{n\_comp} \geq 1$.
* Functional Candecomp/Parafac Tensor Power Algorithm, for data on two-dimensional domains. This algorithm is used to find a basis decomposition of image data. Considering $N$ realizations of a stochastic process $X$ defined on $S_x \times S_y$, the data can be represented as a tensor $\mathbf{X}$ in $\RR^{N \times S_x \times S_y}$. A Candecomp/Parafac representation of the data is assumed:
\begin{equation}\label{eq:cp_decomp}
\mathbf{X} = \sum_{j = 1}^J \lambda_j u_j \otimes v_j \otimes w_j,
\end{equation}
where $\lambda_j$ is a scalar, $u_j \in \RR^N, v_j \in \RR^{S_x}$ and $w_j \in \RR^{S_y}$ are vectors, and $\otimes$ denotes the outer product. The outer product $v_j \otimes w_j$ can be interpreted as the $j$th eigenimage evaluated on the same grid points as the original data, and the vector $\lambda_j u_j$ is the score vector gathering the projections of the observations onto the eigenimage $v_j \otimes w_j$. Our implementation is adapted from the corresponding function of the R package [Happ-Kurz, 2020]. The main argument to build an instance of the class is the number of principal components to compute.
§ FCUBT
The package implements fCUBT for the clustering of functional data objects defined on potentially different domains, developed by [Golovkine et al., 2020]. The implementation of the method is built upon the functional data classes defined in the package. After giving a short review of the methodology in Section <ref>, we explain how to effectively use it in Section <ref>. For a detailed description, please refer to [Golovkine et al., 2020].
§.§ Methodological background
Let $\mathcal{S}$ be a sample of realizations of the process $X$. We consider the problem of learning a partition $\mathcal{U}$ such that every element $U$ of $\mathcal{U}$ gathers similar elements of $\mathcal{S}$. The partition $\mathcal{U}$ is built as a tree $\mathfrak{T}$ defined using a top-down procedure by recursive splitting. Each node of the tree $\mathfrak{T}$ is denoted by $\mathfrak{S}_{\mathfrak{d, j}}$.
§.§.§ Growing
At each stage, a node $(\mathfrak{d, j})$ is possibly split into two subnodes in a four step procedure:
* A MFPCA, with $\mathtt{n_{comp}}$ components, is conducted on the elements of $\mathfrak{S}_{\mathfrak{d, j}}$. It results in a set of eigenvalues $\Lambda_{\mathfrak{d, j}}$ associated with a set of eigenfunctions $\Phi_{\mathfrak{d, j}}$.
* The matrix of scores $C_{\mathfrak{d, j}}$ is then defined, whose columns are the projections of the elements of $\mathfrak{S}_{\mathfrak{d, j}}$ onto the elements of $\Phi_{\mathfrak{d, j}}$.
* For each $K = 1, \dots, K_{max}$, we fit a GMM to the columns of the matrix $C_{\mathfrak{d, j}}$. The resulting models are denoted as $\{\mathcal{M}_1, \dots, \mathcal{M}_{K_{max}}\}$. Considering the BIC, we determine
\begin{equation}\label{eq:K_hat}
\widehat{K}_{\mathfrak{d, j}} = \argmax_{K = 1, \dots, K_{max}} \text{BIC}(\mathcal{M}_K)
\end{equation}
* If $\widehat{K}_{\mathfrak{d, j}} > 1$, we split $\mathfrak{S}_{\mathfrak{d, j}}$ using the model $\mathcal{M}_2$, which is a mixture of two Gaussian vectors. Otherwise, the node is considered to be a terminal node and the construction of the tree is stopped for this node.
The recursive procedure continues downwards until one of the following stopping rules is satisfied: there are fewer than $\mathtt{minsize}$ observations in the node, or the estimation $\widehat{K}_{\mathfrak{d, j}}$ of the number of clusters in the node is equal to $1$. When the algorithm ends, a label is assigned to each leaf (terminal node). The resulting tree is referred to as the maximal binary tree.
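The model-selection step in (<ref>) can be sketched with scikit-learn's `GaussianMixture` (the helper name is ours). Note that scikit-learn's `bic` method returns the lower-is-better form of the criterion, so the argmax in the text becomes an argmin here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def choose_k(scores, k_max=5, seed=0):
    """Fit GMMs with K = 1..k_max to the score matrix (one row per
    observation) and return the number of components selected by BIC."""
    models = [GaussianMixture(n_components=k, random_state=seed).fit(scores)
              for k in range(1, k_max + 1)]
    bics = [m.bic(scores) for m in models]   # lower is better in scikit-learn
    return int(np.argmin(bics)) + 1          # the node splits iff this is > 1

rng = np.random.default_rng(0)
two_blobs = np.vstack([rng.normal(0, 1, size=(100, 2)),
                       rng.normal(8, 1, size=(100, 2))])
k_hat = choose_k(two_blobs, k_max=4)
```

On two well-separated clusters of scores, BIC selects two components, which is the condition that triggers a split.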
§.§.§ Joining
In this step, the idea is to join terminal nodes which do not necessarily share the same direct ancestor.
* Build the graph $\mathcal{G} = (V, E)$ where
$$V = \{\mathfrak{S}_{\mathfrak{d, j}}, 0 \leq j < 2^\mathfrak{d}, 0 \leq \mathfrak{d} < \mathfrak{D} \mathbin{\vert} \mathfrak{S}_{\mathfrak{d, j}} \;\text{is a terminal node}\}, \quad\text{and}$$
\begin{equation}\label{eq:set_edges}
E = \left\{(\mathfrak{S}_{\mathfrak{d, j}}, \mathfrak{S}_{\mathfrak{d^\prime, j^\prime}}) \mathbin{\vert} \mathfrak{S}_{\mathfrak{d, j}}, \mathfrak{S}_{\mathfrak{d^\prime, j^\prime}} \in V,\; \mathfrak{S}_{\mathfrak{d, j}} \neq \mathfrak{S}_{\mathfrak{d^\prime, j^\prime}} \;\text{and}\; \widehat{K}_{(\mathfrak{d, j}) \cup (\mathfrak{d^\prime, j^\prime})} = 1\right\}.
\end{equation}
* Let $(\mathfrak{S}_{\mathfrak{d, j}}, \mathfrak{S}_{\mathfrak{d^\prime, j^\prime}})$ be the edge with the maximum BIC value. Remove this edge and replace the two associated vertices by $\mathfrak{S}_{\mathfrak{d, j}} \cup \mathfrak{S}_{\mathfrak{d^\prime, j^\prime}}$.
* Continue the procedure by applying step 1 with $\{V \setminus \{\mathfrak{S}_{\mathfrak{d, j}}, \mathfrak{S}_{\mathfrak{d^\prime, j^\prime}}\}\} \cup \{\mathfrak{S}_{\mathfrak{d, j}} \cup \mathfrak{S}_{\mathfrak{d^\prime, j^\prime}}\}$.
The procedure continues until the set $V$ is reduced to a unique element or the set $E$ is the empty set.
§.§ Implementation
The implementation of the fCUBT algorithm is based on the FCUBT class. Hence, we construct an object of this class specifying the root node of the tree, which contains a sample of data. The growth of the tree is performed using the grow function, with the number of eigencomponents to keep at each node as a parameter. Once the tree has grown, the joining step is made using the join function. The prediction of the class of a new observation is possible through a dedicated prediction function (which can also return the probabilities of belonging to each class).
§.§.§ Example on the Canadian Weather data
In this example, we perform a clustering of the univariate Canadian Temperature data extracted from the bivariate Canadian Weather data. We build the root node containing all the observations within the dataset. The constructor is called.
root_node = Node(dailyTemp, is_root=True)
fcubt = FCUBT(root_node=root_node)
To grow the tree, we choose to consider a number of components that explains $95\%$ of the variance of the remaining observations at each node. Moreover, the construction of a branch is stopped if there are fewer than $5$ observations in a node. Figure <ref> presents the results of the clustering. A dedicated plotting function of the class allows us to show the maximal tree once the data have been fitted (currently, only for univariate data objects). This representation is particularly useful for understanding the clustering results. One might also cut the tree at a given height. For example, considering Figure <ref>,
fcubt.grow(n_components=0.95, min_size=5)
Plot of the Canadian Temperature dataset. Each color represents a different cluster.
Grown tree $\mathfrak{T}$ illustration for the Canadian Temperature dataset.
The joining step is performed using the join function. We choose to consider $95\%$ of the explained variance of the observations when deciding whether to join two nodes.
§ CONCLUSION
The package is publicly available on GitHub[<https://github.com/StevenGolovkine/FDApy>] and the Python Package Index[<https://pypi.org/project/FDApy/>]. Documentation, including examples, is available with the package. Tests are also implemented using unittest, the unit testing framework provided with Python.
§ ACKNOWLEDGMENT
The author wishes to thank Groupe Renault and the ANRT (French National Association for Research and Technology) for their financial support via the CIFRE convention no. 2017/1116.
[Bouveyron, 2015]
C. Bouveyron.
funFEM: Clustering in the Discriminative Functional Subspace,
R package version 1.1.
[Bouveyron et al., 2020]
C. Bouveyron, J. Jacques, and A. Schmutz.
funLBM: Model-Based Co-Clustering of Functional Data, 2020.
R package version 2.1.
[Brockhaus et al., 2020]
S. Brockhaus, D. Rügamer, and S. Greven.
Boosting functional regression models with FDboost.
Journal of Statistical Software, 94(10):1–50, 2020.
[Carreno et al., 2020]
C. R. Carreno, hzzhyj, Pablo, D. G. Fernandez, P. Marcos, pedrorponga, M. C.
Berrocal, amandaher, D. Tucker, and S. Johnsen.
Gaa-uam/scikit-fda: Version 0.5, Dec. 2020.
[Fan and Gijbels, 1996]
J. Fan and I. Gijbels.
Local polynomial modelling and its applications.
Number 66 in Monographs on Statistics and Applied Probability. Chapman & Hall, London, 1996.
[Goldsmith et al., 2013]
J. Goldsmith, S. Greven, and C. Crainiceanu.
Corrected Confidence Bands for Functional Data Using
Principal Components.
Biometrics, 69(1):41–51, 2013.
[Goldsmith et al., 2020]
J. Goldsmith, F. Scheipl, L. Huang, J. Wrobel, C. Di, J. Gellar, J. Harezlak,
M. W. McLean, B. Swihart, L. Xiao, C. Crainiceanu, and P. T. Reiss.
refund: Regression with Functional Data, 2020.
R package version 0.1-22.
[Golovkine et al., 2020]
S. Golovkine, N. Klutchnikoff, and V. Patilea.
Clustering multivariate functional data using unsupervised binary trees.
arXiv:2012.05973 [cs, stat], Dec. 2020a.
[Golovkine et al., 2020]
S. Golovkine, N. Klutchnikoff, and V. Patilea.
Learning the smoothness of noisy curves with application to online
curve estimation.
arXiv:2009.03652 [math, stat], Sept. 2020b.
[Happ and Greven, 2018]
C. Happ and S. Greven.
Multivariate Functional Principal Component Analysis for
Data Observed on Different (Dimensional) Domains.
Journal of the American Statistical Association, 113(522):649–659, Apr. 2018.
[Happ-Kurz, 2020]
C. Happ-Kurz.
Object-Oriented Software for Functional Data.
Journal of Statistical Software, 93(1):1–38, Apr. 2020a.
[Happ-Kurz, 2020]
C. Happ-Kurz.
MFPCA: Multivariate Functional Principal Component Analysis for
Data Observed on Different Dimensional Domains, 2020b.
R package version 1.3-6.
[Löning et al., 2019]
M. Löning, A. Bagnall, S. Ganesh, V. Kazakov, J. Lines, and F. J. Király.
sktime: A Unified Interface for Machine Learning with
Time Series.
arXiv:1909.07872 [cs, stat], Sept. 2019.
[Parodi et al., 2015]
A. Parodi, M. Patriarca, L. Sangalli, P. Secchi, S. Vantini, and V. Vitelli.
fdakma: Functional Data Analysis: K-Mean Alignment, 2015.
R package version 1.2.1.
[Ramsay and Silverman, 2005]
J. Ramsay and B. W. Silverman.
Functional Data Analysis.
Springer Series in Statistics. Springer-Verlag, New York, 2
edition, 2005.
[Ramsay et al., 2009]
J. O. Ramsay, G. Hooker, and S. Graves.
Functional Data Analysis with R and MATLAB.
Use R! Springer-Verlag, New York, 2009.
[Ramsay et al., 2020]
J. O. Ramsay, S. Graves, and G. Hooker.
fda: Functional Data Analysis, 2020.
R package version 5.1.5.1.
[Schmutz et al., 2019]
A. Schmutz, J. Jacques, and C. Bouveyron.
funHDDC: Univariate and Multivariate Model-Based Clustering in
Group-Specific Functional Subspaces, 2019.
R package version 2.3.0.
[Tavenard et al., 2020]
R. Tavenard, J. Faouzi, G. Vandewiele, F. Divo, G. Androz, C. Holtz, M. Payne,
R. Yurchak, M. Rußwurm, K. Kolar, and E. Woods.
Tslearn, A Machine Learning Toolkit for Time Series Data.
Journal of Machine Learning Research, 21(118):1–6, 2020.
[Tucker, 2020]
J. D. Tucker.
fdasrvf: Elastic Functional Data Analysis, 2020.
R package version 1.9.4.
[Yao et al., 2005]
F. Yao, H.-G. Müller, and J.-L. Wang.
Functional Data Analysis for Sparse Longitudinal Data.
Journal of the American Statistical Association, 100(470):577–590, June 2005.
a Department of Physics & Brown Theoretical Physics Center, Brown University, Providence, RI, 02912, USA
b Illinois Center for Advanced Studies of the Universe & Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
c Department of Physics, Harvard University, Cambridge, MA, 02138, USA
# Spillway Preheating
JiJi Fan (a), Kaloian D. Lozanov (b), and Qianshu Lu (c)<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
In traditional models only an order one fraction of energy is transferred from
the inflaton to radiation through nonperturbative resonance production in
preheating immediately after inflation, due to backreaction effects. We
propose a particle production mechanism that could improve the depletion of
the inflaton energy density by up to four orders of magnitude. The improvement
comes from the fast perturbative decays of resonantly produced daughter
particles. They act as a “spillway” to drain these daughter particles,
reducing their backreaction on the inflaton and keeping the resonant
production effective for a longer period. Thus we dub the scenario “spillway
preheating”. We also show that the fraction of energy density remaining in the
inflaton has a simple inverse power-law scaling in the scenario. In general,
spillway preheating is a much more efficient energy dissipation mechanism,
which may have other applications in model building for particle physics.
## 1 Introduction
Over the past decades, cosmological observations have provided compelling
evidence for an inflationary phase in the early Universe and a hot big bang
phase after it. Yet it is highly nontrivial to connect these two phases. It is
generally believed that the phase transition is achieved through processes of
(p)reheating, during which the inflaton energy is transferred to the thermal
energies of other particles. The thermal particles could be produced either
through perturbative decays of the inflaton Abbott:1982hn ; Dolgov:1982th ;
Albrecht:1982mp , or through non-perturbative and out-of-equilibrium dynamics
Traschen:1990sw ; Dolgov:1989us ; Shtanov:1994ce ; Kofman:1994rk ;
Boyanovsky:1995ud ; Yoshimura:1995gc ; Kaiser:1995fb ; Kofman:1997yn ;
Allahverdi:2010xz ; Amin:2014eta . While the first possibility is called
“reheating”, the latter possibility is often referred to as “preheating”,
since it usually happens much faster and earlier than reheating.[1] At the end
of preheating the daughter particles are not necessarily in thermal
equilibrium (unlike in the case of reheating). However, they are in a
‘prethermal’ state which has no memory of the initial conditions for
preheating, set at the end of inflation Allahverdi:2010xz ; Amin:2014eta ;
PhysRevLett.93.142002 ; Micha:2004bv .
Compared to reheating, preheating contains intriguing rich dynamics that is
beyond the reach of perturbative calculations and calls for a better
understanding. It could also lead to interesting direct or indirect
observables such as a shift of inflation observables (scalar tilt and tensor-
to-scalar ratio) Liddle:2003as ; Dai:2014jja ; Munoz:2014eqa ; Martin:2016oyk
; Hardwick:2016whe ; Lozanov:2016hid ; Lozanov:2017hjm ; Antusch:2020iyq , a
stochastic gravitational wave background at high frequencies Khlebnikov:1997di
; Easther:2006vd ; Easther:2006gt ; GarciaBellido:2007af ; Dufaux:2007pt ;
Dufaux:2008dn ; Dufaux:2010cf ; Bethke:2013vca ; Adshead:2018doq ;
Kitajima:2018zco ; Bartolo:2016ami ; Figueroa:2017vfa ; Caprini:2018mtu ;
Bartolo:2018qqn ; Lozanov:2019ylm ; Adshead:2019igv ; Adshead:2019lbr as well
as non-Gaussianities Lyth:2001nq ; Kofman:2003nx ; Dvali:2003em ;
Chambers:2007se ; Chambers:2008gu ; Bond:2009xx ; Leung:2012ve ; Leung:2013rza
; Imrith:2019njf ; Fan:2020xgh . Yet most preheating mechanisms that have been
studied in the literature, such as parametric resonance Kofman:1997yn and
tachyonic resonance Dufaux:2006ee , could at most transfer an order one
fraction of the inflaton energy to radiation.[2] The only known exception to
this rule of thumb is tachyonic gauge preheating with a scalar, $\varphi$, or
a pseudo-scalar, $a$, inflaton coupled to the gauge field via
$f(\varphi)F^{2}$ Deskins:2013dwa ; Adshead:2017xll or $aF\tilde{F}$
Adshead:2015pva ; Cuissa:2018oiw interaction terms, respectively. Such
scenarios can boost the depletion of the inflaton energy density by up to two
orders of magnitude.[3] An analytical argument for the order one fraction of
energy transfer, based on effective field theory, is given in Ref.
Giblin:2017qjp . In other words, they could not complete the phase transition
from inflation to the thermal big bang. Perturbative reheating still needs to
happen at a (much) later time to finish the transition. The central question,
which is the focus of this paper, is then: could there exist a new preheating
mechanism to improve the efficiency of the energy transfer from the inflaton
to radiation?
In this article, we propose a new preheating mechanism that could improve the
depletion of the inflaton energy density by orders of magnitude, compared to
the well-known mechanisms. The bottleneck of non-perturbative particle
production is the backreaction effects. Once the (direct) daughter particles,
e.g., scalars denoted as $\chi$’s, are copiously produced through various
instabilities, they backreact on the inflaton, $\phi$, and pause the particle
production processes. As a result, the inflaton releases at most about half of
its energy to radiation, as realized in the tachyonic resonance preheating
Dufaux:2006ee . One possible method to reduce the backreaction is to provide a
“spillway” to the daughter particles so that they could be drained after being
produced abundantly and particle production could keep going without much
backreaction. This could be realized through having the daughter particles
decay perturbatively to second-generation daughter particles, e.g., fermions
denoted as $\psi$’s. We will argue that the cascade decays,
$\phi\to\chi\to\psi$ with the first step being non-perturbative and the second
step being perturbative, could improve the depletion of the inflaton energy
density by up to four orders of magnitude, within the range of parameters we
could simulate numerically. We dub this new mechanism spillway preheating.
Alert readers may wonder: why not simply have the inflaton decay perturbatively, as in the simple reheating scenario, instead of combining non-perturbative particle production and perturbative decays into a more complicated setup? There are a couple of motivations to consider the
spillway preheating: a) in this scenario, the perturbative decays
$\chi\to\psi$ could happen on a much shorter time scale and thus during the
preheating stage due to a larger coupling between $\chi$ and $\psi$, while the
perturbative decays of the inflaton $\phi$ happens on a much longer time scale
since the inflaton’s couplings to other particles are usually suppressed
(otherwise the inflaton’s potential is unprotected during inflation; one
could construct models where the inflaton’s couplings vary during and after
inflation Kofman:2003nx ; Dvali:2003em ; Bernardeau:2004zz , but we will not
explore this possibility further); b) in a more realistic (p)reheating model
containing the standard model (SM) beyond the simplified model we study
involving only three species, one could easily envision such cascade decay
processes: the inflaton first produces some SM particles through preheating,
e.g., $W$, $Z$ or $h$, which subsequently decay into other SM particles.
Readers who are familiar with the preheating literature will note that
spillway preheating has similar ingredients to a known scenario, instant
preheating Felder:1998vq . Yet our studies bear several important differences.
In instant preheating, which we will review in more detail, perturbative
decays of $\chi$’s occur during the first oscillation of the inflaton after
inflation and end the preheating stage. It could again at most transfer an
order one fraction of the inflaton energy to the daughter particles. Its main
advantage is that it could be used to produce heavy particles which could not
be generated through perturbative decays. As argued before, our goal is to
improve the energy transfer efficiency of the preheating mechanism. We do not
require the perturbative decays to happen during the first oscillation of the
inflaton. Instead they kick in after ${\cal{O}}(1-10)$ oscillations when an
order one fraction of energy in $\phi$ is transferred to $\chi$ and
backreaction starts to become important. They convert $\chi$ into radiation
that does not directly backreact on the inflaton. In this way, particle
production without much backreaction could continue for a while until the
driven instability disappears. As a net result, only a tiny fraction, as small
as $10^{-4}$ of the total energy density remains in the inflaton while the
dominant fraction is transferred to radiation.
The effects of perturbative decays had been studied in the framework of Higgs
inflation in GarciaBellido:2008ab and Repond:2016sol . In those studies, the
SM Higgs field is the inflaton and resonantly generates $W$ and $Z$ gauge
bosons after inflation. The massive gauge bosons decay to SM fermions
perturbatively. It was found that the fraction of Higgs inflaton energy
density remained at the end of preheating reduces from $26\%$ to $2\%$ when
the gauge boson decays are turned on. Our approach is similar to
Repond:2016sol in general, as will be explained later when we describe our
model and numerical method. Yet instead of specifying the inflationary model
and fixing relevant couplings as in Repond:2016sol , we use a simplified model
containing only the potential after inflation and allow the parameters to
vary. It will be easier to derive and understand in the simplified model: 1)
the constraints on the parameters for the spillway mechanism to work, and 2)
the dependences of the results on the parameters. These results could be
applied to particle production beyond the Higgs inflation scenario. In
addition, we find that there exist regions of parameter space in which the
remaining inflaton energy density can be reduced to as small as $0.01\%$.
The paper is organized as follows. In Sec. 2, we review the two preheating
mechanisms in the literature which are most relevant to our scenario. In Sec.
3, we describe our model and the numerical approach to study the evolution of
the system. In Sec. 4, we present our results. We demonstrate how the energy
transfer efficiency is improved, study the parametric conditions for several
key assumptions to hold and validate the results using different choices of
parameters. We conclude in Sec. 5.
## 2 Non-perturbative particle production
In this section, we will review two of the non-perturbative particle
production mechanisms, tachyonic resonance Dufaux:2006ee and instant
preheating Felder:1998vq , which are most relevant to our study. We will
summarize their key ingredients and features. Readers who are familiar with
the subject could skip this section.
### 2.1 Tachyonic resonance preheating
In the model of tachyonic resonance preheating Dufaux:2006ee , the potential
after inflation is given by
$V_{\text{tach}}=\frac{1}{2}m^{2}\phi^{2}+\frac{1}{2}\frac{M^{2}}{f}\phi\chi^{2}+\frac{1}{4}\lambda\chi^{4},$
(1)
where $\phi$ is the inflaton and $\chi$ is the scalar field that could be
produced by $\phi$’s decays, either perturbatively or non-perturbatively.
There are multiple mass and energy scales involved in the model. $m$ is the
inflaton mass. Throughout the paper, we fix $m=10^{-6}M_{\text{pl}}$ unless
specified otherwise, with the reduced Planck scale $M_{\text{pl}}\approx
2.4\times 10^{18}$ GeV, which is chosen to be consistent with the CMB
constraints on inflation Akrami:2018odb . The high energy scale $f$ that
suppresses the coupling between $\phi$ and $\chi$ will be taken to be the
Planck scale, $f=M_{\text{pl}}$. After inflation, $\phi$ starts to oscillate
around the minimum of its potential with an oscillation amplitude $\Phi$. The
initial amplitude is taken to be $\Phi_{0}=f$. As $\phi$ oscillates, $\chi$
obtains an effective mass through its coupling to $\phi$, which is of order
$M$ initially. When embedding this toy model in a UV completion, e.g., a
supersymmetric scenario, $M$ and $m$ both originate from SUSY breaking and are
of the same order without tuning in the simplest case. Yet for the particle
production to happen, we need $M/m\gtrsim{\cal{O}}(10)$, which we will explain
below. This requires either tuning in the UV theory or a more complicated
model, e.g., one in which the SUSY-breaking contribution to $m$ is sequestered
relative to that to $M$. $\lambda$ is the self-coupling of $\chi$, and the
self-coupling term is needed for the potential to be bounded from below.
The potential is sketched in Fig. 1. As $\phi$ oscillates around the minimum
of its potential, the effective mass squared of $\chi$, $M^{2}\phi/f$, also
oscillates. When $\phi$ passes through the origin from the positive to the
negative side, the sign of the mass squared term flips and triggers a
tachyonic instability, which could drive an exponentially fast production of
$\chi$. At the initial stage of particle production when only a small fraction
of energy is transferred from $\phi$ to $\chi$, one could study the system
using linearized equations of motion and, e.g., carry out a Floquet analysis.
Such a stability analysis shows that for the tachyonic instability to develop, we
need
$q_{0}=\frac{M^{2}}{m^{2}}\gg 1.$ (2)
A more intuitive way to understand this requirement is by comparing two
different time scales. The oscillation period of the inflaton is $\sim 1/m$
while the time scale associated with the change of $\chi$’s potential is $\sim
1/M$. In order for $\chi$ to respond to the change of its potential within one
oscillation of the inflaton, i.e., to be excited non-adiabatically, we need
$1/m\gg 1/M$, which is equivalent to Eq. (2).
Once there are comparable energies in $\phi$ and $\chi$, the backreaction from
$\chi$ to $\phi$ could no longer be ignored and the linear approximation
breaks down. One way to see this is that when $\langle\chi^{2}\rangle$ is
sufficiently large, the self-coupling term of $\chi$, which is ignored in the
linear analysis, generates a positive effective mass term,
$\lambda\langle\chi^{2}\rangle\chi^{2}$, which becomes increasingly important
and always counteracts the trilinear coupling when $\phi$ flips to the
tachyonic side. The larger $\lambda$ is, the stronger the backreaction becomes
and the less efficient particle production is. The detailed evolution of the
system has to be studied by numerical simulations. Yet the final energy
transfer efficiency is roughly controlled by a single parameter, the
backreaction efficiency parameter,
$b\equiv\frac{1}{4}\left(\frac{\frac{1}{2}\frac{M^{2}}{f}\phi\chi^{2}}{\frac{1}{2}m^{2}\phi^{2}}\right)\left(\frac{\frac{1}{2}\frac{M^{2}}{f}\phi\chi^{2}}{\frac{1}{4}\lambda\chi^{4}}\right)=\frac{M^{4}}{2\lambda
m^{2}f^{2}}.$ (3)
We need $b\leq 1$ for the potential to be bounded from below (so that
$\lambda$ cannot be zero or arbitrarily small), and $b\approx 1$ for
tachyonic resonance to be efficient.
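As a quick numerical illustration of the conditions in Eqs. (2) and (3), the following sketch evaluates $q_{0}$ and $b$ for sample parameter values (the value of $\lambda$ below is an illustrative choice made so that $b$ is close to 1, not a benchmark from the text):

```python
def q0_param(M, m):
    """Resonance parameter, Eq. (2): need q0 >> 1 for tachyonic instability."""
    return (M / m) ** 2

def b_param(M, m, f, lam):
    """Backreaction efficiency parameter, Eq. (3)."""
    return M ** 4 / (2.0 * lam * m ** 2 * f ** 2)

# Sample inputs in units of the reduced Planck scale (M_pl = 1):
m, f = 1e-6, 1.0         # m = 1e-6 M_pl and f = M_pl, as in the text
M = 200 ** 0.5 * m       # chosen so that q0 = 200
lam = 2.2e-8             # illustrative value, picked so that b is near 1

q0, b = q0_param(M, m), b_param(M, m, f, lam)
assert q0 > 10 and b <= 1.0   # Eq. (2) and boundedness from below
```

Raising $\lambda$ at fixed $M$, $m$, $f$ lowers $b$ and weakens the resonance, in line with the discussion above.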
When $q_{0}\gg 1$ and $b\approx 1$, the maximum fraction of energy transferred
from $\phi$ to $\chi$ is about $50\%$. The equation-of-state of the coupled
system reaches a plateau with $w\in(0.2,0.3)$, signaling an exotic and
intriguing mixed matter-radiation epoch. For more detailed discussions on the
parametric conditions in Eq. (2) and (3) as well as numerical analyses, see
Refs. Amin:2018kkg and Fan:2020xgh .
Figure 1: The shape of the potential $V_{\text{tach}}$ in Eq. (1).
### 2.2 Instant preheating
In instant preheating Felder:1998vq , particle production is achieved almost
instantaneously within the first oscillation of the inflaton. In the original
model, the potential is given by
$V_{\rm{ins}}=\frac{1}{2}m^{2}\phi^{2}+\frac{g^{2}}{2}\phi^{2}\chi^{2}+y\chi\bar{\psi}\psi,$
(4)
where $\psi$ is a heavy fermion field and $y$ is the Yukawa coupling.
When $\phi$ crosses the origin in field space during the first oscillation,
particle production of $\chi$ occurs, just as in the well-known parametric
resonance preheating Kofman:1997yn . The effective mass of
$\chi$, proportional to $\phi^{2}$, is initially small when being produced and
keeps growing as $\left|\phi\right|$ increases. Thus the energy density stored
in $\chi$ grows as well. When $\left|\phi\right|$ gets close to the maximum
value (the initial oscillation amplitude), $\chi$ decays to $\psi$’s and dumps
all its energy into the fermions. Note that in order to achieve this
mechanism, one must finely tune $y$ so that the perturbative decay lifetime of
$\chi$ matches the (quarter of the) oscillation period of $\phi$ Felder:1998vq
:
$y^{2}g\approx 5\times 10^{-4}.$ (5)
We want to emphasize that this simplest model could still at most transfer an
order-one fraction of energy from the inflaton to the daughter fields, which
is only achieved when the quartic coupling $g$ is of order one. If this order-
one quartic coupling is present during inflation, it will spoil the flatness
of the inflaton potential in the absence of fine tuning. To avoid tuning the
inflaton potential, one then needs to construct models in which the coupling
is absent during inflation and is only effective after inflation. In short,
from the point of view on the efficiency of energy transfer, instant
preheating does not improve over other preheating mechanisms, i.e., tachyonic
resonance preheating. Its main advantage is that it could produce very heavy
particles (e.g., heavy $\psi$’s in Eq. (4)) with masses which could be close
to the Planck scale.
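The tuning condition in Eq. (5) can be inverted numerically: given a quartic coupling $g$, it fixes the Yukawa $y$ that matches the $\chi$ lifetime to a quarter of the inflaton oscillation period. A minimal sketch, with the sample $g$ values below chosen purely for illustration:

```python
# Instant-preheating tuning, Eq. (5): y^2 g ~ 5e-4. Solve for y given g.
def yukawa_for_instant_preheating(g, target=5e-4):
    return (target / g) ** 0.5

for g in (0.1, 0.5, 1.0):                    # illustrative couplings
    y = yukawa_for_instant_preheating(g)
    assert abs(y ** 2 * g - 5e-4) < 1e-12    # Eq. (5) is satisfied
```

Note how small $y$ must be even for $g$ of order one, which is what makes the tuning delicate.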
## 3 Model and approach
In this section, we will present our model and describe the approach to
simulate the evolution of the system. In our model, the potential after
inflation is given by
$V_{\rm{spillway}}=\frac{1}{2}m^{2}\phi^{2}+\frac{1}{2}\frac{M^{2}}{f}\phi\chi^{2}+\frac{1}{4}\lambda\chi^{4}+y\chi\bar{\psi}\psi,$
(6)
where the notation is the same as in the previous section: $\phi$ is the
inflaton, $\chi$ is a scalar, and $\psi$ is a fermion.
At first glance, the model is simply a hybrid of the tachyonic resonance and
instant preheating. The motivation to consider this model is as follows. While
tachyonic resonance is very efficient (more efficient than parametric
resonance) in energy transfer, it could still at most transfer about half of
the inflaton energy to the daughter fields due to backreaction once a large
number of $\chi$’s are produced. The reason for the addition of the
perturbative decays $\chi\to\bar{\psi}\psi$ is to drain $\chi$’s to reduce the
backreaction from $\chi$ to $\phi$ and to keep the tachyonic production going.
It is then expected that the cascade decays $\phi\to\chi\to\psi$ could improve
the energy transfer from the inflaton to radiation. Indeed as we will show in
the next section, the evolution of the system behaves quite differently, with
significantly improved energy transfer efficiency in at least part of the
parameter space of this model, compared to the two models reviewed in the
previous section.
The Yukawa coupling generates a nonzero $\chi$ decay width, which we
approximate as
$\Gamma_{\chi}=\frac{y^{2}}{8\pi}m_{\chi}(\phi),$ (7)
where the effective mass of $\chi$, $m_{\chi}$, is $\phi$-dependent due to the
trilinear coupling between $\phi$ and $\chi$. To ensure that $m_{\chi}$ is
real and positive at all times, we define $m_{\chi}^{2}$ as the curvature
at the minimum of the $\chi$ potential, which gives
$\Gamma_{\chi}=\begin{cases}\frac{y^{2}}{8\pi}\sqrt{\frac{M^{2}}{f}\phi},&\phi>0\\\
\frac{y^{2}}{8\pi}\sqrt{\frac{2M^{2}}{f}\absolutevalue{\phi}},&\phi<0.\end{cases}$
(8)
For the same $|\phi|$, $m_{\chi}$ is larger by a factor of $\sqrt{2}$ for
$\phi<0$ because $V_{\text{tach}}$ has a higher curvature at the minimum of
the double-valley than the single-valley as shown in Fig. 1.
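The piecewise width of Eqs. (7)-(8) is straightforward to code; a sketch, with $M^{2}$ and $f$ as sample inputs in units $m=1$:

```python
import math

# Field-dependent chi decay width, Eqs. (7)-(8):
# Gamma_chi = (y^2 / 8 pi) * m_chi(phi), with m_chi from the curvature
# at the minimum of the chi potential. Units m = 1; inputs are samples.
def m_chi(phi, M2, f):
    """Effective chi mass; larger by sqrt(2) for phi < 0 (double-valley side)."""
    if phi >= 0.0:
        return math.sqrt(M2 / f * phi)
    return math.sqrt(2.0 * M2 / f * abs(phi))

def gamma_chi(phi, M2, f, y2_over_8pi):
    return y2_over_8pi * m_chi(phi, M2, f)

# The factor-of-sqrt(2) jump across phi = 0 noted in the text:
M2, f = 200.0, 1.0e6
ratio = gamma_chi(-1.0e5, M2, f, 0.1) / gamma_chi(+1.0e5, M2, f, 0.1)
assert abs(ratio - math.sqrt(2.0)) < 1e-12
```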
Conventionally, numerical study of the preheating system is implemented by
solving the equations of motion for the classical fields and Friedmann
equations on a spatial lattice. The classical field equations are good
approximations when the occupation numbers of the fields are high. But our
system contains a fermion field, whose evolution may not be well approximated
by its classical equation of motion. Moreover, the classical field equation
for $\chi$ with the potential above will not generate the correct behavior for
a field with a nonzero perturbative decay width. For a decaying field, the
occupation number with momentum $k$ should decrease exponentially as
$n_{\chi}(k)\sim\exp[-\frac{\Gamma_{\chi}t}{\gamma}],$ (9)
where $\gamma$ is the boost factor associated with the momentum $k$. In
particular, the rate of the exponential decay is dependent only on
$m_{\chi}(\phi)$, while in the classical equation of motion for $\chi$, the
Yukawa term will contribute a $\psi$-dependent term instead.
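The exponential suppression in Eq. (9) can be sketched mode by mode. Here the boost factor is taken to be $\gamma=E_{k}/m_{\chi}=\sqrt{1+(k/m_{\chi})^{2}}$, an interpretation of the text stated as an assumption:

```python
import math

# Mode-by-mode decay of the chi occupation number, Eq. (9).
# gamma is taken as the relativistic boost factor E_k / m_chi for momentum k
# (an assumption consistent with the text's description).
def n_chi(n0, k, m_chi, Gamma, t):
    boost = math.sqrt(1.0 + (k / m_chi) ** 2)
    return n0 * math.exp(-Gamma * t / boost)

# Higher-k (more relativistic) modes decay more slowly due to time dilation:
assert n_chi(1.0, k=10.0, m_chi=1.0, Gamma=0.5, t=2.0) > \
       n_chi(1.0, k=0.0,  m_chi=1.0, Gamma=0.5, t=2.0)
```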
To overcome the difficulties above, we adopt the strategy used in
Repond:2016sol . The equation of motion for the inflaton, $\phi$, is not
affected by the Yukawa coupling and its induced decays:
$\ddot{\phi}+3H\dot{\phi}-\frac{1}{a^{2}}\nabla^{2}\phi+m^{2}\phi+\frac{M^{2}}{2f}\chi^{2}=0,$
(10)
where $a$ is the scale factor. Then, to mimic the effect of $\chi$ decays, we
add a friction term, $\Gamma_{\chi}\dot{\chi}$, to the equation of motion for
$\chi$:
$\ddot{\chi}+3H\dot{\chi}-\frac{1}{a^{2}}\nabla^{2}\chi+\frac{M^{2}}{f}\phi\chi+\lambda\chi^{3}+\Gamma_{\chi}\dot{\chi}=0,$
(11)
where $\Gamma_{\chi}$ is given in Eq. (7). The fermionic decay products are
modeled as a homogeneous, radiation-like perfect fluid with energy density
$\rho_{\psi}$, which is independent of position. The time evolution equation
for $\rho_{\psi}$ is derived by the conservation of the stress-energy tensor
of the entire system including $\phi$, $\chi$ and the fermionic fluid,
combined with Eqs. (10) and (11):
$\nabla_{\mu}T^{\mu
0}=0\quad\Rightarrow\quad\dot{\rho}_{\psi}+4H\rho_{\psi}-\langle\Gamma_{\chi}\dot{\chi}^{2}\rangle=0,$
(12)
where in the last term, $\langle\cdots\rangle$ refers to the spatial average.
We see that the $\dot{\chi}^{2}$ term will act as a perpetual source of
$\rho_{\psi}$, which is what we expect since $\chi$ decays to $\psi$’s. But
because $\chi$ is not a homogeneous field, forcing its decay products to be a
homogeneous fluid means that the conservation equations $\nabla_{\mu}T^{\mu
i}=0$ are violated. In other words, the gradient energy of $\chi$ is lost when
it decays to $\rho_{\psi}$. Whether this is a large effect or not can be
checked with self-consistency in the background evolution of the Universe.
Evolution of the scale factor is governed by
$\frac{\ddot{a}}{a}=-\frac{4\pi
G}{3}\expectationvalue{\rho_{\text{tot}}+3p_{\text{tot}}},\quad\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi
G}{3}\expectationvalue{\rho_{\text{tot}}},$ (13)
where $\rho_{\text{tot}}$ and $p_{\text{tot}}$ are the total energy density
and pressure density, including contribution from the fermion fluid. These two
equations are redundant when combined with the two equations for $\phi$ and
$\chi$ as well as the evolution equation of the fermionic fluid. We will use
the second scale factor equation as a consistency check for “energy
conservation”. In all simulations, energy conservation is satisfied at the
level of $10^{-3}$ or better. This means that the approximation of
$\rho_{\psi}$ as a homogeneous fluid does not generate a large loss of gradient
energy.
To solve Eqs. (10), (11), (12) and (13), we use the public LatticeEasy package
Felder:2000hq , but with the integrator modified to the 4th order Runge-Kutta
method.
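As an illustration of this scheme, the following toy sketch integrates only the homogeneous modes of Eqs. (10)-(12) with a 4th-order Runge-Kutta step, with $H$ fixed each step by the Friedmann constraint and gradients dropped. The parameter values, step size, and small $\chi$ seed are illustrative assumptions; the actual results require the full lattice simulation.

```python
import numpy as np

# Toy homogeneous-mode version of Eqs. (10)-(12). Units m = 1, so
# M_pl = 1e6 since m = 1e-6 M_pl. Seed and step size are sample choices.
Mpl = 1.0e6
f, Phi0 = Mpl, Mpl
q0, b, y2_8pi = 200.0, 0.9, 0.1
M2 = q0                                  # M^2 in units of m^2
lam = M2 ** 2 / (2.0 * b * f ** 2)       # inverted from Eq. (3)

def gamma_chi(phi):                      # Eqs. (7)-(8)
    mchi = np.sqrt(M2 / f * phi) if phi > 0 else np.sqrt(2.0 * M2 / f * abs(phi))
    return y2_8pi * mchi

def energy(phi, dphi, chi, dchi, rho_psi):
    V = 0.5 * phi ** 2 + 0.5 * (M2 / f) * phi * chi ** 2 + 0.25 * lam * chi ** 4
    return 0.5 * dphi ** 2 + 0.5 * dchi ** 2 + V + rho_psi

def rhs(s):
    phi, dphi, chi, dchi, rho_psi, a = s
    H = np.sqrt(energy(phi, dphi, chi, dchi, rho_psi) / (3.0 * Mpl ** 2))
    G = gamma_chi(phi)
    ddphi = -3.0 * H * dphi - phi - 0.5 * (M2 / f) * chi ** 2   # dV/dphi of Eq. (6)
    ddchi = (-3.0 * H * dchi - (M2 / f) * phi * chi
             - lam * chi ** 3 - G * dchi)                       # Eq. (11), with friction
    drho = -4.0 * H * rho_psi + G * dchi ** 2                   # Eq. (12)
    return np.array([dphi, ddphi, dchi, ddchi, drho, a * H])

def rk4_step(s, dt):                     # the modified LatticeEasy integrator
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

state = np.array([Phi0, 0.0, 1.0e-3 * Phi0, 0.0, 0.0, 1.0])    # small chi seed
for _ in range(5000):                    # evolve to m t = 5
    state = rk4_step(state, 1.0e-3)
```

Even in this crude homogeneous limit, the friction term drains $\chi$ kinetic energy into $\rho_{\psi}$ while the Friedmann constraint keeps the background expansion consistent.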
If the perturbative decay products of $\chi$ were scalars instead of fermions,
we would have a different phenomenology. For example, if a scalar field,
$\varphi$, is coupled to $\chi$ through the trilinear interaction $V_{\rm
int}=\sigma\chi\varphi^{2}$, where $\sigma$ is a dimensionful coupling
constant, the perturbative decay rate is
$\Gamma_{\chi\rightarrow\varphi\varphi}=\sigma^{2}/(8\pi m_{\chi}(\phi))$
Kofman:1997yn ; Dufaux:2006ee . Note that the dependence of the decay rate on
the $\chi$ effective mass, $\propto m_{\chi}^{-1}$, is opposite to the one in
the case of a fermionic daughter field, see Eq. (7). Hence, $\chi$ would be
most likely to decay when it has a vanishing mass, which coincides with the
zero-crossings of $\phi$. At these moments, $\chi$ evolves non-adiabatically.
Its time-dependent ground and excited states do not coincide with the ones of
the free theory in flat spacetime and thus its decays into pairs of
$\varphi$’s cannot be captured with the phenomenological friction term. On the
other hand, in the fermion case the perturbative decays of $\chi$ occur in the
adiabatic regime (when $\phi$ is near an extremum or evolves slowly) and can
be described as the exponential damping of the amplitude of the excited $\chi$
(whose ground and excited states are the ones of the free theory in flat
spacetime). Nevertheless, the non-perturbative decays of $\chi$ into $\varphi$
pairs can be studied with classical lattice simulations with all the scalar
fields included Dufaux:2006ee . However, our numerical simulations show that
this scenario does not lead to an improved energy transfer efficiency to the
daughter fields.
Alternatively, we can consider a scalar coupling of the form
$V_{\text{int}}\supset y\chi\varphi^{3}$. The decay width of $\chi$ will then
be proportional to its mass, $\Gamma_{\chi\to\varphi\varphi}\propto
y^{2}m_{\chi}(\phi)$, similar to the decays through the Yukawa coupling in our
model. But compared to the $\phi$-$\chi$-$\psi$ system, the stability
condition of the three-scalar system is more complicated, because both $\chi$
and $\varphi$ directions are potentially unstable depending on the sizes of
their individual quartic self-interaction strengths. The stability condition
can no longer be defined using a single parameter like we do in Eq. (3). In
order to reduce the number of moving parts, we only consider fermionic
perturbative decay products of $\chi$ in this paper.
## 4 Results
We will present and discuss the results of the numerical simulations for our
model in this section. We will first show the key results in one class of
benchmark parameters and compare them with those of tachyonic resonances
without perturbative decays. We will then discuss one key assumption of our
simulations, that is, ignoring the backreaction of the fermionic fluid on the
scalar sub-system. Based on analytical arguments, we provide parametric
relations between different parameters of the model for the assumption to
hold. Lastly, we revisit our benchmark parameter choices and discuss the
alternative choice as well as the validity of the results.
### 4.1 Enhanced energy transfer
As reviewed in Sec. 2.1, to have efficient particle production in the
tachyonic resonance scenario, we want to satisfy $q_{0}\gg 1$ and $b\sim 1$.
Motivated by these relations, we first choose
$q_{0}=\frac{M^{2}}{m^{2}}=200,\quad b=\frac{M^{4}}{2\lambda m^{2}f^{2}}=0.9.$
(14)
We also take $m=10^{-6}M_{\mathrm{pl}}$ and the initial amplitude of the
inflaton $\Phi_{0}=f=M_{\mathrm{pl}}$, as explained before. Note that this set
of parameters corresponds to a tiny quartic coupling $\lambda\approx 3.3\times
10^{-8}$.
We simulate the system in a box of side length $L=2m^{-1}$ with $128^{3}$
points, for $y^{2}/(8\pi)=0$ and $0.1$. We also put a UV cutoff on the initial power
spectra of $\phi$ and $\chi$ to keep only modes that are excited by the linear
tachyonic resonance. In other words, we cut off the initial spectrum of $\phi$
at $k_{\phi,\text{max}}/m=0$ and we cut off the spectrum of $\chi$ at
$k_{\chi,\text{max}}/m=2\sqrt{q_{0}}$. The time evolutions of energy densities
of $\phi$, $\chi$, and fermionic fluid, in the comoving volume, are shown in
Fig. 2. We use $\rho_{\phi}/\rho_{\text{tot}}$ as a measure of the efficiency
of energy transfer, where $\rho_{\text{tot}}$ is the total energy density of
the system. The time evolution of $\rho_{\phi}/\rho_{\text{tot}}$ is shown in
Fig. 3.
(a) $y^{2}/8\pi=0$
(b) $y^{2}/8\pi=0.1$
Figure 2: Time evolution of $\phi$, $\chi$, and fermionic fluid energy density
for $b=0.9$, $m=10^{-6}M_{\mathrm{pl}}$, $\Phi_{0}=f=M_{\mathrm{pl}}$,
$q_{0}=200$ and $y^{2}/8\pi=0$ or 0.1. When $y$ is nonzero, energy transfer
continues to happen after $\rho_{\chi}$ becomes comparable with $\rho_{\phi}$.
This second stage of energy transfer stops eventually when the system leaves
the tachyonic resonance band. The smaller $y$ is, the longer it takes to reach
this endpoint. The total comoving energy density is not conserved here because the
equation of state of the system quickly deviates from being matter-like, as
shown in Fig. 5.
(a) $y^{2}/8\pi=0$
(b) $y^{2}/8\pi=0.1$
Figure 3: Time evolution of $\rho_{\phi}/\rho_{\text{tot}}$ for $b=0.9$,
$m=10^{-6}M_{\mathrm{pl}}$, $\Phi_{0}=f=M_{\mathrm{pl}}$, $q_{0}=200$ and
$y^{2}/8\pi=0$ or 0.1. The initial rapid decrease is due to tachyonic
production of $\chi$. When $y\neq 0$, the $\chi\to\bar{\psi}\psi$ decay
alleviates $\chi$’s backreaction on $\phi$, and
$\rho_{\phi}/\rho_{\text{tot}}$ continues to decrease. After the second stage
of energy transfer ends, $\rho_{\phi}/\rho_{\text{tot}}$ slowly increases
since $\phi$ is matter-like and redshifts slower than the rest of the system,
which is radiation-like.
From the simulation results, we see that the $y=0$ and $y\neq 0$ systems
exhibit qualitatively different features in the time evolution of energy
densities. To understand the differences, let’s first check what happens when
$y=0$. In the beginning, $\rho_{\chi}\ll\rho_{\phi}$, and the system is in the
linear regime. Energy rapidly transfers from $\phi$ to $\chi$ by tachyonic
resonance production, and $\rho_{\chi}$ becomes comparable with $\rho_{\phi}$
within $mt\sim\mathcal{O}(1)$. Then the two fields evolve nonlinearly for a
long time while maintaining $\rho_{\chi}\approx\rho_{\phi}$. Thus
$\rho_{\phi}/\rho_{\text{tot}}$ stays at $\approx 0.5$, as shown in Fig.
3(a). The fact that $\rho_{\phi}/\rho_{\text{tot}}$ stays relatively constant
after the system enters the non-linear regime does not mean that energy
transfer from the inflaton stops. If the energy transfer $\phi\to\chi$ were
completely shut off, $\rho_{\phi}/\rho_{\text{tot}}$ would increase since
$\rho_{\phi}$ is matter-like and redshifts slower than $\rho_{\chi}$, which is
radiation-like. As soon as the expansion of the Universe reduces the ratio
$\rho_{\chi}/\rho_{\phi}$, tachyonic resonance quickly transfers energy to
increase the ratio to reach $\rho_{\chi}\approx\rho_{\phi}$ again. In other
words, when $\rho_{\chi}$ becomes comparable with $\rho_{\phi}$, the
backreaction of $\chi$ does not terminate the tachyonic resonance, it simply
pauses it. This distinction is important for understanding the system with
$y\neq 0$: the system with the cascade decays $\phi\to\chi\to\psi$ precisely
exploits the fact that rapid $\phi\to\chi$ transfer will happen
again once we reduce the energy density of $\chi$. This is evident from the
evolution of energy densities shown in Fig. 2(b). After the initial rapid
growth of $\rho_{\chi}$ to reach $\rho_{\chi}\approx\rho_{\phi}$, the
$\chi\to\bar{\psi}\psi$ decays continuously reduce $\rho_{\chi}$, alleviate
backreaction of $\chi$, and restart $\phi\to\chi$ until the energy
equipartition between $\phi$ and $\chi$ is achieved again. Indeed we see that
$\rho_{\phi}$ closely tracks $\rho_{\chi}$ for a while before decoupling.
Thanks to the continuous draining of $\rho_{\chi}$ from
$\chi\to\bar{\psi}\psi$ decay, the energy transfer from $\phi$ to $\chi$
remains rapid during the nonlinear regime, which improves the depletion of the
inflaton energy density by two orders of magnitude compared to the $y=0$ case.
For both cases, the tachyonic resonance will eventually be terminated when the
value of $\phi$ is too small to drive the resonance. The value of $\phi$ could
be reduced by both the redshift due to the expansion of the Universe and
decays to the daughter fields. When there is no longer resonant production, no
more energy will be transferred out of $\phi$, and the time evolution of
$\rho_{\phi}$ decouples from that of $\rho_{\chi}$. The ratio
$\rho_{\phi}/\rho_{\text{tot}}$ reaches the minimum value at this point. After
$\phi$ decouples, $\rho_{\phi}$ redshifts slower than the rest of the system
and $\rho_{\phi}/\rho_{\text{tot}}$ gradually increases from the minimum
value. This general picture of what happens after the tachyonic resonance
terminates applies to both the $y=0$ and $y\neq 0$ systems, but when the
termination happens can be drastically different. For the system with $y=0$, once
the backreaction is effective, $\phi$ and $\chi$ are always coupled together,
and there is no significant decrease in $a^{3}\rho_{\phi}$. The decoupling may
not happen until well beyond the time range we could simulate. However, for
$y\neq 0$, the rapid decrease in $a^{3}\rho_{\phi}$ implies a quick reduction
in $a^{3/2}\phi$, and tachyonic resonance can potentially end much earlier.
Indeed as shown in Fig. 2(b), for $y^{2}/(8\pi)=0.1$, $\phi$ decouples from
$\chi$ at around $mt\approx 100$. This is also confirmed in Fig. 3(b):
$\rho_{\phi}/\rho_{\text{tot}}$ decreases rapidly initially and then reaches a
minimum value at around $mt\approx 100$. After decoupling, it increases slowly
due to the redshift effects.
When $y\neq 0$, even though redshift effects can slowly increase the fraction
of inflaton energy again, it is important to understand how the minimum
value of $\rho_{\phi}/\rho_{\rm tot}$ achieved scales with different
parameters in the model. A full analytic understanding of the relationship is
difficult given the nonlinearity of the evolution. Yet we could still learn
something useful from the linear analysis. In the Floquet analysis, the
tachyonic instability exists only when
$q=\frac{M^{2}}{m^{2}}\frac{\Phi}{f}=q_{0}\frac{\Phi}{f}\gg 1,$ (15)
where $\Phi$ is the coherent oscillation amplitude of $\phi$. For a given
$q_{0}$, tachyonic resonance terminates when $\Phi/f\sim 1/q_{0}$. The
fraction of inflaton energy density at this point is
$\frac{\rho_{\phi}}{\rho_{\text{tot}}}\sim\frac{\Phi^{2}}{f^{2}}\sim\frac{1}{q_{0}^{2}}.$
(16)
This linear analysis will not be a precise description of the evolution, but
we expect the conclusion to hold generally: when $q_{0}$ is
larger, more energy is transferred from $\phi$, and a smaller
$(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$ value is achieved.
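The scaling in Eqs. (15)-(16) amounts to a one-line estimate; as the text stresses, it is an order-of-magnitude guide only, and the simulated scaling in Fig. 4 follows a fitted power law rather than exactly $1/q_{0}^{2}$:

```python
# Linear-analysis estimate, Eqs. (15)-(16): the resonance needs
# q = q0 * Phi / f >> 1, so it shuts off near Phi/f ~ 1/q0, leaving
# rho_phi/rho_tot ~ (Phi/f)^2 ~ 1/q0^2 in the inflaton sector.
def min_inflaton_fraction_estimate(q0):
    return 1.0 / q0 ** 2

for q0 in (50, 200, 1000):
    print(f"q0 = {q0:5d}: rho_phi/rho_tot ~ {min_inflaton_fraction_estimate(q0):.1e}")
```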
Figure 4: $(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$ as a function of
$q_{0}$ for different choices of $y$. The blue points are the simulation
results, and the black line is the best fit with a power law $\propto
q_{0}^{x}$. Each panel also shows in dashed gray the power law best fit for
the $y=0$ case, which is flat at
$(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}\approx 0.5$. We fix $b=0.9$,
$m=10^{-6}M_{\mathrm{pl}}$ and $\Phi_{0}=f=M_{\mathrm{pl}}$.
This intuition based on the linear analysis is indeed verified by simulation
results. We conduct simulations with $q_{0}=$ 50, 100, 200, 500, 1000, 2000,
and $y^{2}/(8\pi)=$ 0.01, 0.05, 0.10, 0.15. $m$, $\Phi_{0}$ and $b$ are fixed
at the values specified at the beginning of this section. We use $N=128$ and
$L=2m^{-1}$ for all simulations. The $\phi$ and $\chi$ initial power spectra
are again cut off at $k_{\phi,\text{max}}/m=0$ and
$k_{\chi,\text{max}}/m=2\sqrt{q_{0}}$. For each parameter choice, the
simulation is run for a sufficiently long time until $\rho_{\phi}$ has
completely decoupled from $\rho_{\chi}$ and $\rho_{\psi}$, so we can read off
the value of $(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$. Fig. 4 shows how
$(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$ scales with $q_{0}$ for
different choices of $y$. For every given $y$, we see that
$(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$ scales with $q_{0}$ by a simple
power law, and is improved by several orders of magnitude compared to the
$y=0$ case which is flat at
$(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}\approx 0.5$. For greater values
of $q_{0}$ beyond our simulation results, we expect the power law improvement
to continue. However a definite statement is difficult given the limits of
both numerical and analytical understanding of a nonlinear system.
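The power-law fits of Fig. 4 can be reproduced procedurally with a least-squares fit in log-log space. The data points below are synthetic placeholders generated from an assumed exponent of $-1.5$ with scatter, used only to illustrate the fitting procedure; they are not simulation results:

```python
import numpy as np

# Fit (rho_phi/rho_tot)_min ~ A * q0^x in log-log space, as done for Fig. 4.
# The "measurements" are synthetic: an assumed power law (exponent -1.5)
# with multiplicative scatter. They are NOT the paper's simulation data.
rng = np.random.default_rng(0)
q0 = np.array([50.0, 100.0, 200.0, 500.0, 1000.0, 2000.0])
true_x = -1.5
ratio_min = 0.5 * q0 ** true_x * np.exp(0.1 * rng.standard_normal(q0.size))

x_fit, logA = np.polyfit(np.log(q0), np.log(ratio_min), 1)
assert abs(x_fit - true_x) < 0.2   # the fit recovers the assumed exponent
```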
For fixed $q_{0}$, Fig. 4 shows that energy transfer efficiency improves (or
equivalently, $(\rho_{\phi}/\rho_{\rm tot})_{\rm min}$ decreases) as
$y^{2}/(8\pi)$ increases. However, we do not expect this improvement to
continue to arbitrarily large values of $y$; the energy transfer efficiency
should deteriorate in both the $y\to 0$ and $y\gg 1$ limits. Preheating can
only amplify a field when its value is nonzero, so when $y$ is so large that
$\chi\to\bar{\psi}\psi$ depletes $\chi$ faster than production from
preheating, preheating will shut off, and there will be little energy
transferred out of the inflaton sector to begin with. (We note that our
classical lattice simulations do not account for the perturbative decay of
$\phi$ into pairs of $\chi$. We are allowed to ignore this inherently quantum
process here, since it is not efficient during the time interval of our
simulation, $\Gamma_{\phi\rightarrow\chi\chi}\sim(M^{2}/f)^{2}/m=2b\lambda
m\ll H$, for the parameters chosen here.) On the other hand, when $y\to 0$, the
decay $\chi\to\psi$ has little effect on the evolution of the system and we
get back to the usual tachyonic resonance scenario, in which
$(\rho_{\phi}/\rho_{\rm tot})_{\rm min}\sim 0.5$ when $q_{0}\gg 1$. Given what
we expect at the two extreme limits, the energy transfer efficiency must be
optimal at some intermediate $y$. The precise optimal value of $y$ could be
beyond the range of our simulations.
The smaller $(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$ is, the more
radiation-like the Universe will be at the end of preheating. Fig. 5 shows the
time evolution of the equation of state $w$ for systems with $y^{2}/(8\pi)=0$
and 0.1. When $y=0$, there is still a significant energy density left in the
inflaton sector. $w$ is in the range between 0 and $1/3$, which corresponds to
a mixed matter and radiation state, as reviewed in Sec. 2.1. When $y\neq 0$,
the efficiency to transfer energy from inflaton to radiation is improved
dramatically, and $w$ converges rapidly to $1/3$. While there is still a tiny
fraction of energy density left in the inflaton, it takes much longer for the
inflaton to dominate the energy density again due to the slower redshift.
(a) $y^{2}/8\pi=0$
(b) $y^{2}/8\pi=0.1$
Figure 5: Time evolution of the equation of state $w$ for $b=0.9$,
$m=10^{-6}M_{\mathrm{pl}}$, $\Phi_{0}=f=M_{\mathrm{pl}}$, $q_{0}=200$ and
$y^{2}/8\pi=0$ or 0.1. The gray curves are the raw simulation results, and the
blue curves are the time average to remove the rapid oscillations. The dashed
orange horizontal line is drawn at $w=1/3$. When $y\neq 0$, the system energy
density is dominated by the radiation-like fermion fluid, thus $w$ rapidly
approaches $1/3$.
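The time-averaged curves in Fig. 5 can be produced with a simple running mean over the raw $w(t)$ samples. The sketch below is an illustrative implementation; the window length and the use of a flat kernel are our own assumptions, not the paper's exact procedure.

```python
import numpy as np

def smooth_eos(w, window):
    """Running time-average of the equation of state w(t), removing the
    rapid field oscillations while preserving the slow drift of w.
    `window` (number of samples averaged) is an illustrative choice."""
    kernel = np.ones(window) / window
    return np.convolve(w, kernel, mode="same")
```

A window covering several oscillation periods suffices; near the ends of the time series the `same`-mode convolution is biased and those samples should be discarded.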
Another distinctive feature of the system with perturbative decays is its much
slower power propagation to the UV end of the spectra during the nonlinear
stage. Fig. 6 shows the time evolution of
the power spectra of $\phi$ and $\chi$ for $q_{0}=200$ and $y^{2}/(8\pi)=0$ or
$0.1$. For both $y^{2}/(8\pi)=0$ and 0.1, there is an initial exponential
growth in the power of $\chi$ due to tachyonic resonance. For $y=0$, the
system quickly enters the nonlinear stage with
$\rho_{\chi}\approx\rho_{\phi}$, and power spectra keep growing due to
rescattering. Higher $k$ modes are excited and power gradually propagates to
the UV end. However, for $y^{2}/(8\pi)=0.1$, $\phi$ is much more depleted than
in the $y^{2}/(8\pi)=0$ case, hence there is little backreaction/rescattering
between $\phi$ and $\chi$ and less power is transferred to the UV. The power
spectrum of $\phi$ is mostly undisturbed at the later stage of the evolution,
while the power spectrum of $\chi$ decreases in magnitude due to the
perturbative decays.
(a) $y^{2}/8\pi=0$
(b) $y^{2}/8\pi=0.1$
Figure 6: The time evolution of field power spectra for $b=0.9$,
$m=10^{-6}M_{\mathrm{pl}}$, $\Phi_{0}=f=M_{\mathrm{pl}}$, $q_{0}=200$, and
$y^{2}/8\pi=0$ or 0.1. For each field $F$, $P_{F}$ is the comoving power
$P_{F}\equiv(\mathrm{d}/\mathrm{d}\ln k)\overline{a^{3}F(x)^{2}}$ in
units of $M_{\mathrm{pl}}$. Time evolves from red to blue. For
$y^{2}/(8\pi)=0.1$, there is little propagation of power to the UV compared to
the $y=0$ case.
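The comoving power defined in the Fig. 6 caption can be computed from a lattice snapshot by binning the discrete Fourier modes in $\ln k$. The following is a minimal sketch; the normalization convention (integrating $P_F$ over $\ln k$ recovers the volume-averaged $a^{3}\langle F^{2}\rangle$), the binning, and the function name are our own illustrative choices, not LatticeEasy's.

```python
import numpy as np

def comoving_power_spectrum(field, a=1.0, box_len=1.0, n_bins=16):
    """Binned comoving power P_F(k) ~ d<a^3 F^2>/d ln k from a 3D field.

    Convention (an assumption): sum(P_F * dlnk) over the nonzero modes
    equals a^3 times the variance of the field."""
    n = field.shape[0]
    fk = np.fft.fftn(field) / field.size          # normalized Fourier amplitudes
    power = (a**3 * np.abs(fk)**2).ravel()        # each mode's share of a^3 <F^2>
    kfreq = 2 * np.pi * np.fft.fftfreq(n, d=box_len / n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    nonzero = kmag > 0                            # drop the homogeneous (k=0) mode
    log_k = np.log(kmag[nonzero])
    edges = np.linspace(log_k.min(), log_k.max() + 1e-12, n_bins + 1)
    dlnk = edges[1] - edges[0]
    idx = np.clip(np.digitize(log_k, edges) - 1, 0, n_bins - 1)
    pk = np.bincount(idx, weights=power[nonzero], minlength=n_bins) / dlnk
    k_centers = np.exp(0.5 * (edges[:-1] + edges[1:]))
    return k_centers, pk, dlnk
```

By Parseval's theorem the binned power integrates back to the field variance, which makes the normalization easy to verify.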
### 4.2 Constraints on the parameters
Results presented in the last section show that having perturbative decays of
$\chi$ can greatly reduce $(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$ and
improve the efficiency to transfer the inflaton energy density to radiation.
However, there are two important implicit assumptions in our simulation.
Firstly, we ignore the backreaction of the fermionic fluid to the scalar
system and treat the fluid as an infinite energy sink. Secondly, we ignore
Pauli repulsion effects which can potentially prohibit the decays
$\chi\to\psi$ from happening. In this section, we present analytical arguments
on the parametric relations required for these assumptions to be valid.
We start with estimating the fermionic backreaction. There are two ways to
derive the conditions under which the backreaction becomes non-negligible.
The first way is to check the effective fermion mass generated by
a nonzero value of $\chi$. In our simulations, we take the fermions to be
massless by modeling them as a radiation-like fluid. Yet the Yukawa coupling,
$y\chi\bar{\psi}\psi$, generates an effective fermion mass:
$m_{\psi}\sim y\langle|\chi|\rangle\sim y\sqrt{\frac{M^{2}|\Phi|}{\lambda
f}}\sim y\frac{M}{\sqrt{\lambda}},$ (17)
where $\langle|\chi|\rangle$ is set to be its value at the minimum of the
potential when $\phi$ oscillates to the tachyonic side. We ignore the time
evolution of $\Phi$ in the estimate above. For the decays
$\chi\to\bar{\psi}\psi$ to happen, we need $E_{\chi}\gtrsim m_{\psi}$. Both
analytical and numerical analyses show that the characteristic value of
$\chi$’s energy is $E_{\chi}\sim M$. Therefore, the kinematic constraint,
$E_{\chi}\gtrsim m_{\psi}$, translates into
$\frac{y}{\sqrt{\lambda}}\lesssim 1.$ (18)
This is a rough upper bound since we ignore the time evolution of relevant
quantities.
Once there is a large number of fermions around, they will induce a tadpole
term to the potential of $\chi$, $y\chi\langle\bar{\psi}\psi\rangle$. Thus the
other way to check the importance of backreaction is to compare the tadpole
term with the other terms, i.e., the term that drives the tachyonic resonance
production, $(M^{2}/f)\phi\chi^{2}$, in the scalar potential. When the
backreaction is sub-dominant, we should have
$y\chi\langle\bar{\psi}\psi\rangle\lesssim\frac{M^{2}}{f}\phi\chi^{2}.$
(19)
The fermion condensate can be approximated as
$\langle\bar{\psi}\psi\rangle\sim\frac{\rho_{\psi}}{E_{\psi}}\sim\frac{m^{2}f^{2}}{M},$
(20)
where we estimate $\rho_{\psi}$ to be the initial energy density of $\phi$
since we expect most of the energy is transferred from $\phi$ to $\psi$
eventually. We also approximate $E_{\psi}\sim E_{\chi}\sim M$, $\phi\sim f$,
and $\chi^{2}\sim M^{2}/\lambda$. The condition becomes
$y\frac{m^{2}f^{2}}{M}\lesssim\frac{M^{3}}{\sqrt{\lambda}}\quad\Rightarrow\quad\frac{y}{\sqrt{\lambda}}\lesssim
1,$ (21)
where in the last step, we use the fact that for efficient tachyonic resonance
production, $b=M^{4}/(2\lambda m^{2}f^{2})$ has to be close to one. Note that
this is the same as Eq. (18). Both arguments from somewhat different points of
view lead to the same parametric relation for the fermion’s backreaction to be
negligible.
Now we consider how to avoid Pauli blocking. Pauli blocking will prevent
$\chi\to\psi$ decays if the phase space is not big enough to accommodate the
fermionic decay products. To check that, we need to estimate the occupation
number of $\psi$. The number density of $\psi$, when the fermion fluid becomes
the dominant component of the energy density, could be estimated as
$n_{\psi}\sim\frac{\rho_{\psi}}{E_{\psi}}\sim\frac{m^{2}f^{2}}{M}.$ (22)
The occupation number of $\psi$ is then
$\frac{n_{\psi}}{k_{\psi}^{3}}\sim\frac{m^{2}f^{2}}{M^{4}}\sim\frac{1}{\lambda},$
(23)
where the range of momenta of $\psi$, $k_{\psi}$, is set by the typical energy
of $\chi$ produced from tachyonic resonance, $k_{\chi}\sim M$. In the last
step, we again use the fact that $b$ has to be close to one.
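As a numerical check of Eqs. (22)-(23): with $b=M^{4}/(2\lambda m^{2}f^{2})$ close to one, the estimate $n_{\psi}/k_{\psi}^{3}\sim m^{2}f^{2}/M^{4}$ indeed reduces to $\sim 1/\lambda$ up to an order-one factor. The sketch below only encodes the paper's order-of-magnitude algebra; the function names are ours.

```python
def psi_occupation(m, f, M):
    """Order-of-magnitude occupation number of psi, Eqs. (22)-(23):
    n_psi / k_psi^3 ~ (rho_psi / E_psi) / k_psi^3 ~ m^2 f^2 / M^4,
    with k_psi ~ M set by the typical energy of chi."""
    return m**2 * f**2 / M**4

def lam_from_b(m, f, M, b=1.0):
    """Invert the resonance parameter b = M^4 / (2 lambda m^2 f^2)."""
    return M**4 / (2.0 * b * m**2 * f**2)
```

For $b=1$ the occupation number equals exactly $1/(2\lambda)$, i.e. $\sim 1/\lambda$ as stated in the text.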
At first glance, in order for the occupation number of $\psi$ to be smaller
than 2 so that the decays are free from Pauli blocking, we simply need
$\lambda$ to be of order one. Yet this choice is problematic. The phase space
volume of $\psi$ is similar to that of $\chi$ and the occupation number of
$\chi$ could be estimated in the same way to be $1/\lambda$. An order one
$\lambda$ then implies that $\chi$ is on the borderline of being treated as a
classical field. More importantly, the perturbative decays of the inflaton
through the trilinear coupling $M^{2}\phi\chi^{2}/f$ happen on the time scale of
$\Gamma^{-1}_{\phi}\sim\frac{8\pi f^{2}m}{M^{4}}\sim\frac{8\pi}{\lambda m}.$
(24)
Since the preheating happens on the time scale of ${\cal O}(1-10)m^{-1}$, an
order one $\lambda$ indicates that the time scales of perturbative reheating
and preheating are about the same and there is no need to consider preheating
in this case. Thus in the parameter space where preheating matters and occurs
before reheating, $\lambda\ll 1$. In order to enhance the energy transfer
efficiency without Pauli blocking in our model, we need to introduce $N_{f}$
species of fermions, $\psi_{i=1\cdots N_{f}}$. For simplicity, we assume that
the Yukawa couplings of all the fermions have the same value,
$y_{i}=y^{\prime}$. The occupation number of each fermion species is then
$1/(N_{f}\lambda)$, which tells us that to circumvent Pauli blocking, we need
$N_{f}\lambda\gtrsim 1.$ (25)
While too many fermion species imply a too low cutoff of our toy model as a
valid effective field theory, it is reasonable to consider a (not too) small
$\lambda$, e.g., $\lambda\sim{\cal O}(0.01)$ and $N_{f}\sim{\cal O}(100)$.
From the simulation point of view, it makes no difference implementing one
fermion fluid or many fermion fluids, if we do not explicitly include Pauli
blocking terms. For simplicity, we proceed with simulations with a single
fermion fluid, with the implicit understanding that this is an approximation
of an $N_{f}$-fermion system with $N_{f}\gtrsim 1/\lambda$. $\rho_{\psi}$ is
then the total energy density of the $N_{f}$ fermions, and the decay width
$\Gamma_{\chi}=y^{2}m_{\chi}/(8\pi^{2})$ is the sum of the decay widths to
individual fermions, $\Gamma_{\chi\to\psi_{i}}=y^{\prime
2}m_{\chi}/(8\pi^{2})$. In other words, the Yukawa couplings are related by
$y^{2}=N_{f}y^{\prime 2}$. When we include multiple fermions in the system,
the condition for negligible fermion backreaction in Eq. (18) and Eq. (21)
should be understood as constraints on $y^{\prime}$, the coupling of $\chi$ to
an individual fermion,
$y^{\prime 2}=\frac{y^{2}}{N_{f}}\lesssim\lambda\Rightarrow y^{2}\lesssim
N_{f}\lambda.$ (26)
Combining the analytic estimates above with results in the previous section,
we find that in order to make the spillway preheating a much more efficient
preheating mechanism, there are multiple requirements on the parameters
involved:
1. 1.
Have efficient tachyonic resonance production: $q_{0}\gg 1$ and $b\sim 1$ or
equivalently $M\gg m$ and $M^{2}\sim\sqrt{\lambda}mf$.
2. 2.
Tachyonic resonance is the dominant mechanism of energy transfer
$\phi\to\chi$, or equivalently perturbative decays of $\phi$ is inefficient
during the preheating stage: $\lambda\ll 1$.
3. 3.
Perturbative decays of $\chi$ happen around the time when the energy
transferred to $\chi$ through tachyonic particle production becomes comparable
to $\rho_{\phi}$: $\Gamma_{\chi}^{-1}\sim{\cal O}(1-10)m^{-1}$. Equivalently,
$y^{2}/(8\pi)=N_{f}y^{\prime 2}/(8\pi)\sim m/M$ up to some order one numerical
factor.
4. 4.
Satisfy the CMB constraint, i.e., the normalization of the scalar
perturbation, on the inflaton mass scale: $m\sim 10^{-6}M_{\mathrm{pl}}$ for
quadratic chaotic inflation.
5. 5.
Free from Pauli blocking of fermions: there are $N_{f}$ species of fermions
with similar Yukawa couplings and $N_{f}\gtrsim 1/\lambda$.
6. 6.
Free from backreaction of fermions: $y^{\prime 2}=y^{2}/N_{f}\lesssim\lambda$.
The system that satisfies all the requirements would have
$m\sim 10^{-6}M_{\mathrm{pl}},\;\;m\ll M\ll f\sim
M_{\mathrm{pl}},\;\;y^{2}\sim 8\pi\frac{m}{M}\lesssim
N_{f}\lambda,\;\;\sqrt{mf}\gg
M\gtrsim\left(m^{3}f^{2}/N_{f}\right)^{1/5},\;\;N_{f}\gtrsim\frac{1}{\lambda}.$
(27)
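The requirements summarized in Eq. (27) can be bundled into a rough consistency check on a candidate parameter set. All thresholds below are illustrative order-of-magnitude choices (the paper only states parametric relations with $\sim$, $\ll$, and $\gtrsim$), and the function name and slack factor `tol` are our own assumptions.

```python
import math

MPL = 1.0  # work in units of M_pl

def spillway_conditions(m, M, f, lam, y, Nf, tol=10.0):
    """Order-of-magnitude check of the requirements behind Eq. (27).

    `tol` is an assumed one-decade slack on each relation; the thresholds
    are illustrative, not the paper's precise statements."""
    b = M**4 / (2 * lam * m**2 * f**2)  # tachyonic-resonance parameter
    return {
        "tachyonic (q0 >> 1, b ~ 1)": M / m > tol and 1 / tol < b < tol,
        "slow phi decay (lambda << 1)": lam < 1 / tol,
        "spillway timing (y^2/8pi ~ m/M)":
            1 / tol < (y**2 / (8 * math.pi)) / (m / M) < tol,
        "CMB normalization (m ~ 1e-6 Mpl)": 1 / tol < m / (1e-6 * MPL) < tol,
        "no Pauli blocking (Nf >~ 1/lambda)": Nf * lam >= 1 / tol,
        "no fermion backreaction (y^2/Nf <~ lambda)": y**2 / Nf <= tol * lam,
    }
```

With the Sec. 4.1 benchmark ($m=10^{-6}M_{\rm pl}$, $f=M_{\rm pl}$, $q_0=200$, $b\simeq 0.9$, $y^2/8\pi=0.1$) and $N_f=10^{8}$, all six conditions pass; setting $\lambda\sim 4$ instead (as in Sec. 4.3) violates the slow-decay condition, as the text discusses.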
The simulations shown in the previous section satisfy all conditions, as long
as $N_{f}\gtrsim 1/\lambda\sim 10^{6}-10^{9}$, where systems with a smaller
$q_{0}$ require a larger $N_{f}$. Given that the required value of $N_{f}$ is
large, we need to make sure that the cutoff of our effective field theory
(EFT) is not too low: the scale at which gravity becomes strongly coupled and
the EFT description breaks down is $M_{\text{pl,eff}}\sim
M_{\text{pl}}/\sqrt{N_{f}}\sim(10^{-4.5}-10^{-3})M_{\text{pl}}$ Dvali:2007wp .
The maximum comoving momentum excited in the simulations is typically on the
same order as that shown in Fig. 6, with $k\sim 100m=10^{-4}M_{\text{pl}}$.
Taking into account the expansion of the universe, the physical momentum
excited in the system is $k_{\text{phys}}\sim k/a\sim 10^{-5}M_{\text{pl}}$.
This is somewhat close to the gravitational cutoff, but smaller, so the EFT
description is still safe.
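The comparison between the excited physical momenta and the species-scale gravitational cutoff $M_{\text{pl}}/\sqrt{N_{f}}$ can be expressed compactly. This is a sketch with order-one factors dropped; the numbers in the test follow the estimates quoted in the text.

```python
def eft_safe(k_phys, Nf, Mpl=1.0):
    """True if the excited physical momentum lies below the gravitational
    species cutoff M_pl / sqrt(N_f) (Dvali:2007wp); order-one factors
    are dropped, so this is only an order-of-magnitude check."""
    return k_phys < Mpl / Nf**0.5
```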
A system with a smaller $N_{f}$ (thus larger $\lambda$) with the same $m$ and
$f$ would require a larger $M$, which is computationally more expensive to
simulate. The maximum $k$-mode excited by the tachyonic resonance scales as
$k_{\text{max}}/m=\sqrt{q_{0}}=M/m$, and the number of gridpoints required to
cover such a $k$-range increases as $N\sim k_{\max}/m$. CPU-time needed grows
at least as $N^{3}$, and potentially more because higher $k$ modes require
smaller time steps to resolve.
However, a system with a smaller $N_{f}$ is numerically feasible in an
alternative part of the parameter space, where the three mass scales $m$, $M$,
$f$ are close to each other. In the next section, we will consider $f\sim{\cal
O}(10)M$ and $M\sim{\cal{O}}(10)m$. This allows
$y\sim\lambda\sim\mathcal{O}(1)$ without making the simulation computationally
infeasible. A system with this choice of parameters violates the second
requirement (slow perturbative decay of $\phi$) and the fourth requirement (CMB
constraint), but the essential features of the spillway preheating mechanism
are intact. Moreover, the minimum required $N_{f}$ is $N_{f}\gtrsim
1/\lambda\sim\mathcal{O}(1)$, much smaller than what is needed in the
previous section. This will be an independent check of the results obtained in
the previous section in a qualitatively different region of the parameter
space.
### 4.3 Alternative simulations
We consider a suite of alternative simulations based on the following
parameters
$f=M_{\mathrm{pl}},\quad m=1.3\times 10^{-2}M_{\mathrm{pl}},\quad
q_{0}=\frac{M^{2}}{m^{2}}=200,\quad{\rm and}\quad b=0.9,$ (28)
which corresponds to $\lambda=4.05$. As in Sec. 4.1, we simulate the system on
a box of length $L=2m^{-1}$ with $128^{3}$ points and $y^{2}/(8\pi)=0$ and
0.1. We also put UV cutoffs on the initial power spectra:
$k_{\phi,\text{max}}/m=0$ for $\phi$ and $k_{\chi,\text{max}}/m=2\sqrt{q_{0}}$
for $\chi$. (The default initial field fluctuations set by LatticeEasy make
$\rho_{\phi}(t=0)\approx\rho_{\chi}(t=0)$. We manually decrease the magnitude
of the initial fluctuations of $\chi$ by a factor of $10^{3}$ to make
$\rho_{\chi}(t=0)\ll\rho_{\phi}(t=0)$, so that it is easier to observe the
interplay between tachyonic resonance and the $\chi\rightarrow\psi\psi$
decays.)
The time evolution of the system with either $y^{2}/(8\pi)=0$ or $0.1$ is
shown in Fig. 7. Apart from short-term oscillations of the energy densities,
the system evolution for both $y=0$ and $y^{2}/(8\pi)=0.1$ is qualitatively
similar to what we have observed in Sec. 4.1. When $y=0$, $\rho_{\chi}$
quickly builds up due to tachyonic resonance. But once
$\rho_{\chi}\approx\rho_{\phi}$, their ratio stays constant for a long time.
For $y^{2}/(8\pi)=0.1$, there is a significant enhancement of energy transfer
out of the inflaton due to the $\phi\to\chi\to\psi$ cascade decays.
(a) $y^{2}/8\pi=0$
(b) $y^{2}/8\pi=0.1$
Figure 7: Time evolution of $\phi$, $\chi$, and fermion fluid energy density
for $b=0.9$, $m=1.3\times 10^{-2}M_{\mathrm{pl}}$,
$\Phi_{0}=f=M_{\mathrm{pl}}$, $q_{0}=200$ and $y^{2}/8\pi=0$ or 0.1.
We also check if there is a power-law dependence of
$(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$ on $q_{0}$, similar to what we
found in Sec. 4.1. We study the same choices of parameters as in Sec. 4.1,
$y^{2}/(8\pi)=0.01$, 0.05, 0.1, and 0.15 for $q_{0}$ = 50, 100, 200, 500,
1000, and 2000. For a given value of $q_{0}$, we fix $f=M_{\mathrm{pl}}$,
$\lambda=4.05$, and $b=0.9$, which sets the values of $m$ and $M$. This means
that for larger values of $q_{0}$, $m$ and $M$ are further apart and smaller
compared to $f$. For all simulations, we use a lattice with $128^{3}$ points
and $L=2m^{-1}$. The results are shown in Fig. 8. Again we observe a power law
scaling of $(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$ with $q_{0}$, with
quantitatively similar features to those presented in Sec. 4.1: the exponents
have similar values, and the energy transfer efficiency also improves with
greater values of $y$. As discussed before, we expect the energy transfer
efficiency to deteriorate when $y\ll 1$ or $y\gg 1$, which are beyond the
range of our simulations.
In summary, the results in Sec. 4.1 and this section show quantitatively
similar patterns of energy transfer for two quite different mass hierarchies
in spillway preheating. The common features they share and the net result of
enhanced energy transfer efficiency due to the perturbative decays serving as
a spillway are expected to persist in the numerically infeasible parameter
space where all conditions in Eq. (27) are satisfied with a smaller $N_{f}$
(and larger $\lambda$) than that in Sec. 4.1.
Figure 8: $(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}$ as a function of
$q_{0}$ and $y$ for $b=0.9$, $\lambda=4.05$, $\Phi_{0}=f=M_{\mathrm{pl}}$. The
blue points are the simulation results, and the black line is the best fit
with a power law $q_{0}^{x}$. Each panel also shows in gray the power law best
fit for the $y=0$ case, which is flat at
$(\rho_{\phi}/\rho_{\text{tot}})_{\text{min}}\approx 0.5$.
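The power-law fits shown in Fig. 8 amount to a least-squares line in log-log space. A minimal sketch follows; the data in the test are synthetic, and the function name is ours (the paper's actual fitted exponents are read off from the figure).

```python
import numpy as np

def fit_power_law(q0, rho_min):
    """Fit rho_min = A * q0**x by linear least squares in log-log space;
    returns (A, x)."""
    x, log_a = np.polyfit(np.log(q0), np.log(rho_min), 1)
    return np.exp(log_a), x
```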
## 5 Conclusions and Outlook
In this article, we have studied a preheating scenario featuring non-
perturbative decays of the inflaton, $\phi$, into a daughter scalar, $\chi$,
and a perturbative fermionic decay channel $\chi\rightarrow\psi\psi$. We show
that in the cases where the perturbative decays of $\chi$ into fermions become
efficient after $\chi$ has been significantly excited by the oscillating
$\phi$, but before the backreaction of $\chi$ on $\phi$ kicks in, up to
$99.99\%$ of the inflaton energy can be transferred into the daughter species.
This new class of preheating scenario is unmatched in terms of energy transfer
efficiency. We dub it spillway preheating.
We employ classical lattice simulations to explore the non-perturbative decays
of the $\phi$-condensate into the daughter $\chi$ bosons. To incorporate the
inherently quantum perturbative decays of $\chi$ into pairs of fermionic
$\psi$ particles, we add a phenomenological friction term to the classical
Klein-Gordon equations of motion governing the evolution of $\chi$. The
fermions are added to the lattice as a homogeneous radiation fluid,
$\rho_{\psi}$. The simulations are carried out in an FRW background, expanding
in a self-consistent manner. The evolution of the scale factor was determined
by the energies and equations of state of the effective $\phi$, $\chi$ and
$\psi$ fluids. The excellent energy conservation, better than one part in a
thousand even when the energy budget is dominated by the fermionic fluid, is a
strong indication for the validity of our effective description of the theory.
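The phenomenological friction term can be illustrated with a toy homogeneous-mode integrator: a damped oscillator for $\chi$ whose dissipated energy is booked into the fermion fluid, so that the total stays conserved. This sketches only the bookkeeping (no expansion, no gradients, and a generic kick-drift-kick stepper rather than the paper's scheme); the function name is ours.

```python
def evolve_chi_with_friction(chi0, chidot0, m_chi, gamma, dt, n_steps):
    """Toy model of the friction term used for chi -> psi psi:
    chi'' + gamma*chi' + m^2*chi = 0, with the drained power gamma*chi'^2
    accumulated in a homogeneous fermion fluid rho_psi.
    Returns (chi, rho_chi, rho_psi) at the final time."""
    chi, chidot, rho_psi = chi0, chidot0, 0.0
    for _ in range(n_steps):
        acc = -gamma * chidot - m_chi**2 * chi
        chidot_half = chidot + 0.5 * dt * acc        # half kick
        chi += dt * chidot_half                      # drift
        acc_new = -gamma * chidot_half - m_chi**2 * chi
        chidot = chidot_half + 0.5 * dt * acc_new    # half kick
        rho_psi += dt * gamma * chidot_half**2       # energy moved to the fluid
    rho_chi = 0.5 * chidot**2 + 0.5 * m_chi**2 * chi**2
    return chi, rho_chi, rho_psi
```

Checking that $\rho_{\chi}+\rho_{\psi}$ stays equal to the initial energy mirrors the energy-conservation diagnostic the paper uses to validate its effective description.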
We also provide a parametric understanding of the energy transfer efficiency.
We show that the minimum fraction of energy density remaining in the inflaton
scales as a simple power law in a parameter $q_{0}$, which is the ratio of the
squared mass scales of the daughter scalar and the inflaton. The larger the
mass hierarchy between $\chi$ and $\phi$ is, the more efficient the energy
transfer becomes.
There is much more to be explored in spillway preheating, e.g.,
* •
With the computational resources we have, we simulate $q_{0}$ up to 2000 and
show that the depletion of the inflaton energy density could be improved by
four orders of magnitude, compared to traditional preheating scenarios. Will
the simple power-law scaling we find persist for even larger $q_{0}$’s and
what could be the maximum energy transfer efficiency achievable in the
scenario? Could this preheating scenario alone be sufficient to complete the
phase transition from inflation to the thermal big bang?
* •
We only consider non-perturbative decays of $\phi$ into $\chi$ due to a
tachyonic instability. It would be interesting to see if the results change
for resonant instabilities coming from, e.g., $\phi^{2}\chi^{2}$ interactions.
We leave the investigation of the effects of the form of the inflaton
couplings on spillway preheating for future work.
* •
What are the effects on the cosmological observables, such as the inflationary
observables and gravitational waves? Spillway preheating speeds up the
transition to a radiation-dominated state of expansion ($w=1/3$), which can
reduce the theoretical uncertainties in $n_{\rm s}$ and $r$ significantly
Lozanov:2016hid ; Lozanov:2017hjm ; Antusch:2020iyq . We defer the study of
such observational effects for the future.
* •
Could this very efficient dissipation mechanism and its variants be applied to
solve other interesting problems in particle physics, e.g., solve the
cosmological moduli problem Giblin:2017wlo or expand the parameter space of
dark photon dark matter Agrawal:2018vin ; Co:2018lka ; Dror:2018pdh ; Bastero-
Gil:2018uel ?
## Acknowledgments
We thank Mustafa A. Amin for collaboration in the early stage of the project.
We thank Matt Reece, Jean-Samuel Roux and Scott Watson for useful feedback on
the manuscript. JF is supported by the DOE grant DE-SC-0010010 and NASA grant
80NSSC18K1010. The work of KL is supported in part by the US Department of
Energy through grant DE-SC0015655. QL is supported by the DOE Grant DE-
SC-0013607 and the NASA Grant 80NSSC20K0506. This research was conducted using
computational resources and services at the Center for Computation and
Visualization, Brown University.
## References
* (1) L. Abbott, E. Farhi, and M. B. Wise, “Particle Production in the New Inflationary Cosmology,” Phys. Lett. B 117 (1982) 29.
* (2) A. Dolgov and A. D. Linde, “Baryon Asymmetry in Inflationary Universe,” Phys. Lett. B 116 (1982) 329.
* (3) A. Albrecht, P. J. Steinhardt, M. S. Turner, and F. Wilczek, “Reheating an Inflationary Universe,” Phys. Rev. Lett. 48 (1982) 1437.
* (4) J. H. Traschen and R. H. Brandenberger, “Particle Production During Out-of-equilibrium Phase Transitions,” Phys. Rev. D 42 (1990) 2491–2504.
* (5) A. Dolgov and D. Kirilova, “On particle creation by a time dependent scalar field,” Sov. J. Nucl. Phys. 51 (1990) 172–177.
* (6) Y. Shtanov, J. H. Traschen, and R. H. Brandenberger, “Universe reheating after inflation,” Phys. Rev. D 51 (1995) 5438–5455, arXiv:hep-ph/9407247.
* (7) L. Kofman, A. D. Linde, and A. A. Starobinsky, “Reheating after inflation,” Phys. Rev. Lett. 73 (1994) 3195–3198, arXiv:hep-th/9405187.
* (8) D. Boyanovsky, M. D’Attanasio, H. de Vega, R. Holman, D.-S. Lee, and A. Singh, “Reheating the postinflationary universe,” arXiv:hep-ph/9505220.
* (9) M. Yoshimura, “Catastrophic particle production under periodic perturbation,” Prog. Theor. Phys. 94 (1995) 873–898, arXiv:hep-th/9506176.
* (10) D. I. Kaiser, “Post inflation reheating in an expanding universe,” Phys. Rev. D 53 (1996) 1776–1783, arXiv:astro-ph/9507108.
* (11) L. Kofman, A. D. Linde, and A. A. Starobinsky, “Towards the theory of reheating after inflation,” Phys. Rev. D 56 (1997) 3258–3295, arXiv:hep-ph/9704452.
* (12) R. Allahverdi, R. Brandenberger, F.-Y. Cyr-Racine, and A. Mazumdar, “Reheating in Inflationary Cosmology: Theory and Applications,” Ann. Rev. Nucl. Part. Sci. 60 (2010) 27–51, arXiv:1001.2600 [hep-th].
* (13) M. A. Amin, M. P. Hertzberg, D. I. Kaiser, and J. Karouby, “Nonperturbative Dynamics Of Reheating After Inflation: A Review,” Int. J. Mod. Phys. D 24 (2014) 1530003, arXiv:1410.3808 [hep-ph].
* (14) J. Berges, S. Borsányi, and C. Wetterich, “Prethermalization,” Phys. Rev. Lett. 93 (Sep, 2004) 142002. https://link.aps.org/doi/10.1103/PhysRevLett.93.142002.
* (15) R. Micha and I. I. Tkachev, “Turbulent thermalization,” Phys. Rev. D 70 (2004) 043538, arXiv:hep-ph/0403101.
* (16) A. R. Liddle and S. M. Leach, “How long before the end of inflation were observable perturbations produced?,” Phys. Rev. D 68 (2003) 103503, arXiv:astro-ph/0305263.
* (17) L. Dai, M. Kamionkowski, and J. Wang, “Reheating constraints to inflationary models,” Phys. Rev. Lett. 113 (2014) 041302, arXiv:1404.6704 [astro-ph.CO].
* (18) J. B. Munoz and M. Kamionkowski, “Equation-of-State Parameter for Reheating,” Phys. Rev. D 91 no. 4, (2015) 043521, arXiv:1412.0656 [astro-ph.CO].
* (19) J. Martin, C. Ringeval, and V. Vennin, “Information Gain on Reheating: the One Bit Milestone,” Phys. Rev. D 93 no. 10, (2016) 103532, arXiv:1603.02606 [astro-ph.CO].
* (20) R. J. Hardwick, V. Vennin, K. Koyama, and D. Wands, “Constraining Curvatonic Reheating,” JCAP 08 (2016) 042, arXiv:1606.01223 [astro-ph.CO].
* (21) K. D. Lozanov and M. A. Amin, “Equation of State and Duration to Radiation Domination after Inflation,” Phys. Rev. Lett. 119 no. 6, (2017) 061301, arXiv:1608.01213 [astro-ph.CO].
* (22) K. D. Lozanov and M. A. Amin, “Self-resonance after inflation: oscillons, transients and radiation domination,” Phys. Rev. D 97 no. 2, (2018) 023533, arXiv:1710.06851 [astro-ph.CO].
* (23) S. Antusch, D. G. Figueroa, K. Marschall, and F. Torrenti, “Energy distribution and equation of state of the early Universe: matching the end of inflation and the onset of radiation domination,” Phys. Lett. B 811 (2020) 135888, arXiv:2005.07563 [astro-ph.CO].
* (24) S. Khlebnikov and I. Tkachev, “Relic gravitational waves produced after preheating,” Phys. Rev. D 56 (1997) 653–660, arXiv:hep-ph/9701423.
* (25) R. Easther, J. Giblin, John T., and E. A. Lim, “Gravitational Wave Production At The End Of Inflation,” Phys. Rev. Lett. 99 (2007) 221301, arXiv:astro-ph/0612294.
* (26) R. Easther and E. A. Lim, “Stochastic gravitational wave production after inflation,” JCAP 04 (2006) 010, arXiv:astro-ph/0601617.
* (27) J. Garcia-Bellido, D. G. Figueroa, and A. Sastre, “A Gravitational Wave Background from Reheating after Hybrid Inflation,” Phys. Rev. D 77 (2008) 043517, arXiv:0707.0839 [hep-ph].
* (28) J. F. Dufaux, A. Bergman, G. N. Felder, L. Kofman, and J.-P. Uzan, “Theory and Numerics of Gravitational Waves from Preheating after Inflation,” Phys. Rev. D 76 (2007) 123517, arXiv:0707.0875 [astro-ph].
* (29) J.-F. Dufaux, G. Felder, L. Kofman, and O. Navros, “Gravity Waves from Tachyonic Preheating after Hybrid Inflation,” JCAP 03 (2009) 001, arXiv:0812.2917 [astro-ph].
* (30) J.-F. Dufaux, D. G. Figueroa, and J. Garcia-Bellido, “Gravitational Waves from Abelian Gauge Fields and Cosmic Strings at Preheating,” Phys. Rev. D 82 (2010) 083518, arXiv:1006.0217 [astro-ph.CO].
* (31) L. Bethke, D. G. Figueroa, and A. Rajantie, “On the Anisotropy of the Gravitational Wave Background from Massless Preheating,” JCAP 06 (2014) 047, arXiv:1309.1148 [astro-ph.CO].
* (32) P. Adshead, J. T. Giblin, and Z. J. Weiner, “Gravitational waves from gauge preheating,” Phys. Rev. D 98 no. 4, (2018) 043525, arXiv:1805.04550 [astro-ph.CO].
* (33) N. Kitajima, J. Soda, and Y. Urakawa, “Gravitational wave forest from string axiverse,” JCAP 10 (2018) 008, arXiv:1807.07037 [astro-ph.CO].
* (34) N. Bartolo et al., “Science with the space-based interferometer LISA. IV: Probing inflation with gravitational waves,” JCAP 12 (2016) 026, arXiv:1610.06481 [astro-ph.CO].
* (35) D. G. Figueroa and F. Torrenti, “Gravitational wave production from preheating: parameter dependence,” JCAP 10 (2017) 057, arXiv:1707.04533 [astro-ph.CO].
* (36) C. Caprini and D. G. Figueroa, “Cosmological Backgrounds of Gravitational Waves,” Class. Quant. Grav. 35 no. 16, (2018) 163001, arXiv:1801.04268 [astro-ph.CO].
* (37) N. Bartolo, V. Domcke, D. G. Figueroa, J. García-Bellido, M. Peloso, M. Pieroni, A. Ricciardone, M. Sakellariadou, L. Sorbo, and G. Tasinato, “Probing non-Gaussian Stochastic Gravitational Wave Backgrounds with LISA,” JCAP 11 (2018) 034, arXiv:1806.02819 [astro-ph.CO].
* (38) K. D. Lozanov and M. A. Amin, “Gravitational perturbations from oscillons and transients after inflation,” Phys. Rev. D 99 no. 12, (2019) 123504, arXiv:1902.06736 [astro-ph.CO].
* (39) P. Adshead, J. T. Giblin, M. Pieroni, and Z. J. Weiner, “Constraining Axion Inflation with Gravitational Waves across 29 Decades in Frequency,” Phys. Rev. Lett. 124 no. 17, (2020) 171301, arXiv:1909.12843 [astro-ph.CO].
* (40) P. Adshead, J. T. Giblin, M. Pieroni, and Z. J. Weiner, “Constraining axion inflation with gravitational waves from preheating,” Phys. Rev. D 101 no. 8, (2020) 083534, arXiv:1909.12842 [astro-ph.CO].
* (41) D. H. Lyth and D. Wands, “Generating the curvature perturbation without an inflaton,” Phys. Lett. B 524 (2002) 5–14, arXiv:hep-ph/0110002.
* (42) L. Kofman, “Probing string theory with modulated cosmological fluctuations,” arXiv:astro-ph/0303614.
* (43) G. Dvali, A. Gruzinov, and M. Zaldarriaga, “A new mechanism for generating density perturbations from inflation,” Phys. Rev. D 69 (2004) 023505, arXiv:astro-ph/0303591.
* (44) A. Chambers and A. Rajantie, “Lattice calculation of non-Gaussianity from preheating,” Phys. Rev. Lett. 100 (2008) 041302, arXiv:0710.4133 [astro-ph]. [Erratum: Phys.Rev.Lett. 101, 149903 (2008)].
* (45) A. Chambers and A. Rajantie, “Non-Gaussianity from massless preheating,” JCAP 08 (2008) 002, arXiv:0805.4795 [astro-ph].
* (46) J. Bond, A. V. Frolov, Z. Huang, and L. Kofman, “Non-Gaussian Spikes from Chaotic Billiards in Inflation Preheating,” Phys. Rev. Lett. 103 (2009) 071301, arXiv:0903.3407 [astro-ph.CO].
* (47) G. Leung, E. R. Tarrant, C. T. Byrnes, and E. J. Copeland, “Reheating, Multifield Inflation and the Fate of the Primordial Observables,” JCAP 09 (2012) 008, arXiv:1206.5196 [astro-ph.CO].
* (48) G. Leung, E. R. Tarrant, C. T. Byrnes, and E. J. Copeland, “Influence of Reheating on the Trispectrum and its Scale Dependence,” JCAP 08 (2013) 006, arXiv:1303.4678 [astro-ph.CO].
* (49) S. V. Imrith, D. J. Mulryne, and A. Rajantie, “Primordial curvature perturbation from lattice simulations,” Phys. Rev. D 100 no. 4, (2019) 043543, arXiv:1903.07487 [astro-ph.CO].
* (50) J. Fan and Z.-Z. Xianyu, “A Cosmic Microscope for the Preheating Era,” arXiv:2005.12278 [hep-ph].
* (51) J. F. Dufaux, G. N. Felder, L. Kofman, M. Peloso, and D. Podolsky, “Preheating with trilinear interactions: Tachyonic resonance,” JCAP 07 (2006) 006, arXiv:hep-ph/0602144.
* (52) J. Deskins, J. T. Giblin, and R. R. Caldwell, “Gauge Field Preheating at the End of Inflation,” Phys. Rev. D 88 no. 6, (2013) 063530, arXiv:1305.7226 [astro-ph.CO].
* (53) P. Adshead, J. T. Giblin, and Z. J. Weiner, “Non-Abelian gauge preheating,” Phys. Rev. D 96 no. 12, (2017) 123512, arXiv:1708.02944 [hep-ph].
* (54) P. Adshead, J. T. Giblin, T. R. Scully, and E. I. Sfakianakis, “Gauge-preheating and the end of axion inflation,” JCAP 12 (2015) 034, arXiv:1502.06506 [astro-ph.CO].
# The Phantom of RAMSES user guide for galaxy simulations using Milgromian and Newtonian gravity

S. T. Nagesh, Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, 53121 Bonn, Germany.
I. Banik, Helmholtz-Institut für Strahlen- und Kernphysik, Universität Bonn, Nussallee 14-16, 53115 Bonn, Germany.
I. Thies, Helmholtz-Institut für Strahlen- und Kernphysik, Universität Bonn, Nussallee 14-16, 53115 Bonn, Germany and Astronomical Institute, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, CZ-180 00 Praha, Czech Republic.
B. Famaey, Université de Strasbourg, CNRS UMR 7550, Observatoire astronomique de Strasbourg, 11 rue de l’Université, 67000 Strasbourg, France.
N. Wittenburg, Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany and Helmholtz-Institut für Strahlen- und Kernphysik, Universität Bonn, Nussallee 14-16, 53115 Bonn, Germany.
P. Kroupa, R. Parziale, M. Haslbauer
###### Abstract
This document describes the general process of setting up, running, and
analysing disc galaxy simulations using the freely available program phantom
of ramses (por). This implements Milgromian Dynamics (MOND) with a patch to
the ramses grid-based $N$-body and hydrodynamical code that uses adaptive mesh
refinement. We discuss the procedure of setting up isolated and interacting
disc galaxy initial conditions for por, running the simulations, and analysing
the results. This manual also concisely documents all previously developed
MOND simulation codes and the results obtained with them.
## 1 Introduction
Milgromian Dynamics (MOND) is an extension of Newtonian dynamics to encompass
the observed dynamics in the Solar System as well as in galaxies without
postulating invisible haloes around them [1]. MOND computes the gravitational
potential of galaxies using only the distribution of baryons. It has been very
successful in this regard, especially because it predicted some very tight
scaling relations which were subsequently observed [2, 3]. These are a
consequence of Milgrom’s formula
$\displaystyle g\ =\ \sqrt{g_{\mathrm{N}}a_{{}_{0}}}\quad\textrm{for}\quad g_{\mathrm{N}}\ll a_{{}_{0}}=1.2\times 10^{-10}\,\textrm{m s}^{-2},$ (1)
where $a_{{}_{0}}$ is Milgrom’s constant, $g$ is the strength of the true
gravity, and $g_{\mathrm{N}}$ is that of the Newtonian gravity. To achieve a
generalization of gravity applicable in non-spherical systems, MOND requires a
generalized Poisson equation derived from a Lagrangian. Two classical variants
have been proposed, one with an aquadratic Lagrangian [AQUAL; 4], and one with
a Lagrangian making use of an auxiliary field, which is called the quasi-
linear formulation of MOND [QUMOND; 5]. MOND may be a consequence of the
quantum vacuum [6, 7, 8, 9]. Reviews of MOND can be found in [2, 10].
phantom of ramses [por; 11] is a numerical implementation of QUMOND, whose
field equation for the potential $\Phi$ is
$\displaystyle\nabla^{2}\Phi\ \equiv\ -\nabla\cdot\bm{g}\ =\ -\nabla\cdot\left(\nu\bm{g}_{\mathrm{N}}\right)\,,$ (2)
where $\nu$ is the MOND interpolating function with argument $y\equiv
g_{\mathrm{N}}/a_{{}_{0}}$, with $\bm{g}$ and $\bm{g}_{\mathrm{N}}$ being the
true and Newtonian gravitational acceleration vectors, respectively, and
$v\equiv\left|\bm{v}\right|$ for any vector $\bm{v}$. The current version of
por uses the simple form of the interpolating function (e.g. equation 5 of
[12])
$\displaystyle\nu\left(y\right)\ =\ \frac{1}{2}+\sqrt{\frac{1}{4}+\frac{1}{y}}\,.$ (3)
$\bm{g}_{\mathrm{N}}$ is found from the baryonic density $\rho_{b}$ using the
standard Poisson equation
$\displaystyle\nabla\cdot\bm{g}_{\mathrm{N}}\ =\ -4\mathrm{\pi}G\rho_{b}\,.$ (4)
The boundary condition for the MOND potential far from an isolated matter
distribution is
$\displaystyle\Phi\ =\ \sqrt{GMa_{{}_{0}}}\ln R\,,$ (5)
where $M$ is the mass within the simulation box, and $R$ is the distance from
its barycentre in the simulation unit of length.
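The regimes encoded in Equations (1)–(3) can be checked numerically. The sketch below (plain Python, nothing por-specific) evaluates the simple interpolating function and confirms that $\nu(y)\,g_{\mathrm{N}}$ reduces to $\sqrt{g_{\mathrm{N}}a_{0}}$ in the deep-MOND regime and to $g_{\mathrm{N}}$ in the Newtonian regime:

```python
import math

A0 = 1.2e-10  # Milgrom's constant in m/s^2

def nu(y):
    """Simple interpolating function, Equation (3): nu(y) = 1/2 + sqrt(1/4 + 1/y)."""
    return 0.5 + math.sqrt(0.25 + 1.0 / y)

def g_mond(g_newton):
    """True gravity g = nu(g_N/a0) * g_N, as in the QUMOND field equation (2)."""
    return nu(g_newton / A0) * g_newton

# Deep-MOND regime (g_N << a0): g -> sqrt(g_N * a0), Equation (1)
g_n = 1e-13
assert abs(g_mond(g_n) / math.sqrt(g_n * A0) - 1.0) < 0.05

# Newtonian regime (g_N >> a0): nu -> 1, so g -> g_N
g_n = 1e-6
assert abs(g_mond(g_n) / g_n - 1.0) < 0.01
```

The 5% tolerance in the deep-MOND check reflects that Equation (1) is only the asymptotic limit of Equation (3).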
A handful of Milgromian $N$-body codes were developed before por to handle
MOND computations, and these have been applied to various problems. The first
multi-grid, Milgromian $N$-body code was developed by [13] to investigate the
stability of disc galaxies. This was later extended to simulate how they might
warp due to the external field effect [EFE; 14]. Another $N$-body solver which
implemented the AQUAL formulation of MOND was developed and used to study the
evolution of spiral galaxies using pure stellar discs [15]. Gas dynamics was
later included using a sticky particle scheme at low resolution [16]. n-mody
was developed to solve the Milgromian Poisson equation in spherical
coordinates [17] and used to investigate dynamical friction [18], orbit
instabilities [19], and stellar kinematics [20, 21, 22]. Milgromian $N$-body
codes tailored to cosmological simulations have also been developed [23, 24,
25, 26]. Another $N$-body solver called raymond [27] was developed to
implement both the AQUAL [4] and QUMOND [5] formulations of MOND. raymond has
been applied to cosmological [28], galaxy cluster [29], and other problems.
However, not all the aforementioned $N$-body codes can be applied to generic
scenarios simultaneously involving particles, gas dynamics, and star
formation.
The Fortran-based por code was developed by Fabian Lüghausen [11]. It is a
customized version of ramses [30], which exclusively uses Newtonian dynamics
to compute gravity. por can compute gravity using MOND by numerically solving
Equation 2. Since por is a patch to ramses, it inherits use of the adaptive
mesh refinement (AMR) technique. por is equipped to handle particles, gas
dynamics, and star formation, and can be applied to diverse problems. It
allows the user to compute gravity in both Milgromian and Newtonian frameworks
[11].
This document serves as a tutorial/manual for the general use of por, with
some suggestions for the specific case of setting up and simulating a disc
galaxy. Most of the steps and parameters described here are specific to por,
except the installation of ramses. For a detailed description of individual
parameters, it is always recommended to read the ramses manual (https://bitbucket.org/rteyssie/ramses/src/master/). Most of the
parameters and files described here can be edited safely without disturbing
the core algorithms. Before changing parameters or files that are not
mentioned here, it is important to fully understand the workings and
consequences of the change.
In Section 2, we explain the installation procedure of ramses and por. In
Section 3, we explain how to set up MOND disc templates using Disk Initial
Conditions Environment (dice), and thereby generate rotating disc initial
conditions for por. In Section 4, we describe the workings of por, focusing on
particle-only and hydrodynamical runs with and without star formation. In
Section 5, random turbulence generation is briefly discussed. Section 6
discusses the extract_por tool used to analyse particle data in ramses
simulation outputs. In Section 7, we mention all publications based on por. We
conclude in Section 8.
## 2 Installation and setup of the code
The por patch by Fabian Lüghausen is rated to work with the 2015 version of
ramses, which has since been modified (the latest ramses version is available
at https://bitbucket.org/rteyssie/ramses/src/master/). Later versions are not
compatible with por, so it is recommended to use the 2015 version of ramses
with por. The jointly tested version of ramses and por is available at
https://bitbucket.org/SrikanthTN/bonnpor/src/master/, in the PoR_hydro
folder.
The following steps describe the installation and compilation of ramses and
por. These procedures are adapted from the ramses manual, where they are
described further.
1. 1.
The main folder needed for compilation of ramses is bin. In the bin folder,
there is a makefile. Now, do:
⬇
$ cd ~/PoR_hydro/ramses/bin/
2. 2.
In the makefile, certain flags need to be changed:
⬇
# Compilation time parameters
NVECTOR = 32
NDIM = 3
NPRE = 8
NVAR = 6
NENER = 0
SOLVER = hydro
#PATCH = ../patch/phantom_units
#PATCH = ../patch/phantom_staticparts (particle-only run)
#PATCH = ../patch/hydro/phantom_merger
PATCH = ../patch/hydro/phantom_extfield (hydro run)
EXEC = RAMSES
3. 3.
All these flags are explained in the ramses manual. F90 and FFLAGS should be
set carefully:
⬇
F90 = mpif90 -frecord-marker=4 -O3 -ffree-line-length-none -g -fbacktrace
FFLAGS = -x f95-cpp-input $(DEFINES)
4. 4.
F90 sets the Fortran compiler and FFLAGS is used to specify the required MPI
libraries, which are mainly used for parallel computing. This is important
given the likely high computational cost. The default makefile uses the
above-mentioned F90 and FFLAGS. If one’s computer is not compatible with these
default parameters, they can be changed in the makefile.
5. 5.
Once all the required flags are set, compile the code:
⬇
$ make
After compilation, one can test the installation as described in section 2.3
of the ramses manual.
6. 6.
To make the files again, go to the bin folder and execute:
⬇
$ make clean
$ make
The ramses manual was written in 2002 and has not been updated since, so the
parameter file may have changed in ways the manual does not document. One
must use the phantom patch to run simulations in MOND. For Newtonian
simulations, it is recommended to use the same patch and simply set the mond
flag to .false. in the namelist.
### 2.1 Compilation of the code with the por patch
We now describe the procedure to link the por patch and re-compile ramses. To
activate the por patch, the following parameters must be specified in the
makefile (# means a comment):
⬇
NDIM = 3
PATCH = ../patch/phantom_staticparts
#PATCH = ../patch/phantom
#PATCH = ../patch/hydro/phantom_extfield
By default, ramses uses periodic boundary conditions, but the por patch
specifies boundary conditions appropriate to isolated galaxy simulations. The
2015 version of ramses in the bonnpor repository contains a staticpart patch
and a hydrodynamical patch with compulsory additional merger and EFE patches
whose effects can be disabled (Section 4.2). These patches are task-specific
customizations of por. For a particle-only run, phantom_staticparts can be
used, while for a hydrodynamical run, phantom_extfield can be used. Only one
patch can be used at a time. One must change the path to the user’s directory
before making the file. After specifying the parameters, make the file again.
## 3 Disk Initial Conditions Environment (DICE)
Any galaxy or cosmological simulation needs initial conditions. The ramses
user guide refers to two websites for these, but they are for cosmological
runs. The music code (https://bitbucket.org/ohahn/music/src/master/) also
provides initial conditions for cosmological runs, and is recommended for
them. The setup of cosmological MOND simulations will be discussed elsewhere.
This guide focuses on galaxy simulations, for which we generate initial
conditions with an adapted version of the Disk Initial Conditions Environment
[dice; 31]. The original version of dice can be found at
https://bitbucket.org/vperret/dice/src/master/. It is not compatible with
MOND or the 2015 version of ramses, so we used a modified version of dice
available in the bonnpor repository. This has two versions, one for
particle-only runs (the dice_particle folder, hereafter p-dice) and the other
for hydrodynamical runs (in dice_gas, hereafter h-dice). These algorithms
were developed by Graeme Candlish, Roy Truelove, Indranil Banik, and Ingo
Thies. Both are equipped to initialize disc galaxies in MOND, but in
principle other methods could be used and advanced with the por patch. Before
installing dice, CMake, GSL, and FFTW must be installed. If this is not
already the case, installation instructions are provided in the bonnpor
repository.
### 3.1 Installation and setup
As mentioned above, the folders dice_particle and dice_gas contain p-dice and
h-dice, respectively. Extract them to /local in one’s home directory. In
dice_particle, the disc folder is required for disc galaxy simulations. Now,
in disc, the bin folder contains the makefile needed for compilation, while
the example folder contains the parameter files. To compile p-dice, execute:
⬇
$ cd dice_particle
$ cd disc
$ mkdir build
$ cd build
$ cmake ..
$ make
$ make install
To compile h-dice, execute:
⬇
$ cd dice_gas
$ mkdir build
$ cd build
$ cmake ..
$ make
$ make install
h-dice does not contain an additional disc folder like p-dice.
### 3.2 Running DICE
The dice_gas and disc (in dice_particle) folders contain four sub-folders:
1. 1.
cmake should not be altered,
2. 2.
build will contain the executable,
3. 3.
src contains the source files which encode the physics required for
computation, and
4. 4.
example contains files required to generate the initial conditions, with task-
specific configuration files like M31, M33, and generic scenarios like a disc
galaxy, disc with a bulge etc. Only the Milky Way (MW), M31, and M33 cases are
rated to work.
We used the test_mw.config configuration file for our disc galaxy:
⬇
Redshift 3.0
Galaxy ../../example/params_files/testMilkyWay.params
Filename dice_highz
ICformat Gadget2
Nthreads 32
In the .config file, specify the path to the parameter file. The redshift is
unused in our MONDified dice. We used the testMilkyWay.params parameter file,
though other params files exist in the /example/params_files folder, and
custom templates can be created from them. There are mainly three types of
parameters in testMilkyWay.params: global parameters, outer disc, and inner
disc. For both p-dice and h-dice, once the parameter file is set, go one
directory up and execute:
⬇
$ cd bin
$ ./dice ../../example/test_mw.config
After execution, two output files named Milky_Way_output_p2_k0.txt and
Milky_Way_rotation_curve.txt will be created in the bin folder. The rotation
curve is only required for hydrodynamical simulations.
## 4 Running por
After compilation of ramses with the required por patch, one can customize the
namelist file available in the PoR_namelist folder to meet a scientific goal.
PoR_namelist consists of all the namelist files we have used for our runs.
PoR.nml is a general template which can be customized. PoR-static.nml is the
file we used for our particle-only run, while Test_hydro_mw_NSFR.nml was used
for the hydrodynamical run without star formation. There is a general namelist
folder which consists of .nml files that can be used to test e.g. the
installation of ramses.
### 4.1 Particle-only run (staticpart patch)
Use the /patch/phantom_staticparts patch and PoR-static.nml, taking care with
the boundary conditions:
⬇
&RUN_PARAMS
poisson=.true.
pic=.true.
mond=.true.    ! activates the MOND Poisson solver
nrestart=0     ! used to restart a run from any output
/
&AMR_PARAMS
levelmin=7
levelmax=12
ngridmax=2000000
boxlen=1024.0
npartmax=2000000
/
ngridmax and npartmax should be of order $10^{6}$ to avoid memory errors.
⬇
&OUTPUT_PARAMS
foutput=8000     ! frequency of outputs in terms of coarse steps
noutput=100      ! number of outputs to be generated
delta_tout=100.  ! interval between outputs in Myr
tend=10000.      ! simulation end time, in this case 10 Gyr
/
&INIT_PARAMS
filetype='ascii'
initfile='../path/to/Milky_Way_output_p2_k0.txt'  ! the input file
/
&POISSON_PARAMS
a0_ms2=1.2e-10
m_threshold=1.e+30  ! critical part, set it based on usage
gravity_type=0
cg_levelmin=999
/
&BOUNDARY_PARAMS
nboundary=6
ibound_min=-1, 1, 0, 0, 0, 0,
ibound_max=-1, 1, 0, 0, 0, 0,
jbound_min= 0, 0,-1, 1, 0, 0,
jbound_max= 0, 0,-1, 1, 0, 0,
kbound_min= 0, 0, 0, 0,-1, 1,
kbound_max= 0, 0, 0, 0,-1, 1,
bound_type= 1, 1, 1, 1, 1, 1,
/
There are some parameters which are mandatory in all runs, such as
&Run_Params, &AMR_Params, &Output_Params, &Init_Params. Others vary based on
specific requirements. Most of these parameters are detailed in the ramses
manual, so we only stress those specific to por. Parameters not shown here
should not be changed unless required.
The staticpart patch integrates particles below a certain mass m_threshold,
while more massive particles are kept static but are considered when
evaluating $\bm{g}_{\mathrm{N}}$ in Equation 4. This method is an effective
way to save computation time. If one wants to evolve all the stellar particles
in a particle-only simulation, then m_threshold should be set to a suitably
large value, e.g. $10^{30}M_{\odot}$. The units from the dice output are the
same as required by por for input (i.e. $M_{\odot}$, kpc, and km/s), while
units.f90 has the units used by por in which $G=1$.
One can modify &Output_Params and &AMR_Params, but it is not recommended to
tamper with other blocks. In the above example, the Milky_Way_output_p2_k0.txt
obtained from p-dice is given as the input file, with the rotation curve
unused. Once all parameters are set in the namelist file, the simulation can
be started by executing:
⬇
$ mpiexec -n 32 ../ramses3d ../filename.nml
This calls the simulation to run on 32 CPUs using parallel computing (the
number can be changed). Regardless of the directory of execution, one must
specify full paths to the ramses3d and namelist files. To run these
simulations without parallel computing, simply execute:
⬇
$ ../bin/ramses3d ../filename.nml
Users should check the computing capacity before running simulations without
parallel computing.
After starting the simulation, it might terminate with the error message
“SIGSEGV - invalid memory reference”. This is a memory error, which can be
solved by increasing npartmax up to $10^{7}$ and ngridmax up to $8\times
10^{6}$ (the codes are not rated for larger values). The npartmax variable
must be at least equal to the number of particles in the dice template.
Turning off the movie may help. CPU and memory errors can look alarming, but
are easily overcome and should not be a big concern for beginners.
The simulation will produce output folders for each snapshot. During the run,
if the memory allocated is too small, the simulation will stop and ask to
increase the number of grid cells. One must then go back to the namelist file
and increase ngridmax or npartmax based on what is asked. The restart protocol
is rated to work, so restart the run from the last output file by setting
nrestart to the desired output number (the default of 0 means to start from
scratch). If the run stops before finalising output_..45, set nrestart = 45
and resume the run by executing the above-mentioned command.
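When resuming interrupted runs repeatedly, picking the right nrestart by hand is error-prone. The helper below is a hypothetical convenience script (not part of por or ramses) that scans a run directory for snapshot folders and returns the number to use; it assumes the standard ramses naming convention output_NNNNN (e.g. output_00045):

```python
import os
import re

def latest_output_number(run_dir):
    """Return the highest snapshot number among output_NNNNN folders in
    run_dir, or 0 if none exist (nrestart=0 means start from scratch).
    Assumes the standard ramses naming convention, e.g. output_00045."""
    pattern = re.compile(r"^output_(\d+)$")
    numbers = [int(m.group(1))
               for name in os.listdir(run_dir)
               if (m := pattern.match(name))]
    return max(numbers, default=0)

# A run that stopped after writing output_00045 would give 45,
# i.e. set nrestart=45 in the namelist before relaunching.
```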
### 4.2 Hydrodynamical run without star formation
We performed this run with the /patch/hydro/phantom_extfield patch, which is a
modification to por. The EFE and merger scenarios are included using the MOND
Poisson solver, but both features can be turned off.
#### 4.2.1 DICE with gas component
dice is again used to set up the initial conditions. Since the gas component
is included, we used h-dice in the dice_gas folder. To include a gas
component, the test_MilkyWay.params was slightly modified:
⬇
###################
# Global parameters
###################
# Virial velocity of the galaxy [km/s]
v200 200.0
# Virial mass of the galaxy [1e10 Msol]
# Overrides the v200 parameter
m200 9.15 #8.4 old
Gas_fraction 0.2
Gas_T 50000.0
The Gas_fraction and Gas_T lines are the new additions to the h-dice
template; they specify the gas component parameters. The gas fraction depends
on the galaxy; here a 20% gas fraction was used for the MW. The gas
temperature should be set equal to another parameter called T2_ISM, which is
present in the namelist file of por. One must be careful that the gas
fraction is greater than the mass fraction of the outer component, in this
particular case more than 18%. This is because the distribution of gas in
h-dice is done in a particular way, so one should be cautious when setting
the gas fraction [32]. The template has a default mass fraction of 17.64% for
the outer disc component, with the remaining 82.36% for the inner disc [33].
This version is only rated for two exponential disc components. For a
beginner, it is recommended to seek advice at this point before proceeding
further.
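The gas-fraction constraint above is easy to violate silently when editing the template. A small hypothetical pre-flight check (not part of h-dice) that encodes the rule, using the template's default outer-disc mass fraction of 17.64% from the text:

```python
OUTER_DISC_FRACTION = 0.1764  # default outer-disc mass fraction of the template

def check_gas_fraction(gas_fraction, outer_fraction=OUTER_DISC_FRACTION):
    """Raise if Gas_fraction does not exceed the outer disc's mass fraction,
    as required by the way h-dice distributes the gas."""
    if gas_fraction <= outer_fraction:
        raise ValueError(
            f"Gas_fraction={gas_fraction} must exceed the outer-disc "
            f"mass fraction of {outer_fraction}")
    return True

check_gas_fraction(0.20)  # the Milky Way setup above passes
```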
h-dice can be started the same way as p-dice. After starting the dice run, one
might notice a message in the terminal “Gas is too cold to satisfy the Toomre
condition everywhere. Increase $T$ by a factor of …”, and/or “WARNING: only
writing data for component 1”. The first message is just a warning about the
global disc stability and has no impact on the results, so it can be ignored.
Temperature here is used as a measure of velocity dispersion including
turbulence, so it is not the true gas temperature. The second message can
also be ignored: it indicates that the second component defined in dice is
treated as stars and used for calculating the potential, but not printed in
the output, as the gas will be added in por [32]. The particle data written
to the disc template file contains only the stellar component.
The rotation curve file has columns for the gas disc scale height and its
radial gradient. This is critical as the gas component is created in por
itself (in the merger and extfield patch) by reading in the gas data from the
rotation curve file and some parameters to be set in the namelist file, e.g.
gas mass and temperature. Care is needed to ensure compatibility of the
parameters used for dice and por.
#### 4.2.2 por with merger and external field patch
In the hydrodynamical case, we did not explicitly set up an isolated disc
galaxy, but instead adapted the merger template condinit
(http://www.physics.usyd.edu.au/~tepper/codes/ramses/trunk/doc/html/patch_2hydro_2merger_2condinit_8f90_source.html).
This sets up two disc galaxies in the simulation box, so we switched off the
second galaxy by setting its mass and velocity to zero and placing it outside
the simulation box. The namelist file for each run should be customized as
required; we show part of Test_hydro_mw_NSFR.nml as an example:
⬇
&RUN_PARAMS
mond=.true.
Activate_g_ext=.false.
&INIT_PARAMS
filetype='ascii'
initfile(1)='path/to/the/DICE/output/'
&MERGER_PARAMS
rad_profile='double_exp'
z_profile='sech_sq'
Mgas_disc1=45.75
Mgas_disc2=0
IG_density_factor=1.0e-2
T2_ISM=40.d3
scale_a2=1.
Vcirc_dat_file1='Milky_Way_rotation_curve.txt'
Vcirc_dat_file2='Milky_Way_rotation_curve.txt'
ic_part_file_gal1='Milky_Way_output_p2_k0.txt'
ic_part_file_gal2='Milky_Way_output_p2_k0.txt'
gal_center1= 0.,0.,0.
gal_center2= 2000.,0.,0.
Vgal1=0.,0.,0.
Vgal2=0.,0.,0.
The namelist has other parameters, but we show only those critical to the
simulation. If one were to use a similar setup, the following suggestions are
helpful:
1. 1.
In the ramses makefile, one must provide the path to the external field patch
and recompile ramses.
2. 2.
The por patch can accommodate both MOND and Newtonian physics. The latter is
used if one sets mond = .false., allowing simulations with both gravity
theories using por.
3. 3.
Setting Activate_g_ext to false turns the EFE off. Hydrodynamical simulations
with the EFE are discussed further in [32].
4. 4.
For the initfile(1), one must give the path to the directory where
Milky_Way_output_p2_k0.txt and Milky_Way_rotation_curve.txt are present. These
files should be specified for ic_part_file_gal1 and Vcirc_dat_file1,
respectively. The same path and files can be given for the second galaxy,
which is unused here.
5. 5.
The main things that need attention are the &Merger_Params. The gas mass of
the galaxy is in units of $10^{9}M_{\odot}$. If simulating interacting
galaxies, they should not start too close together. We switched the second
galaxy off by setting its gas mass and velocity to 0 and placing it outside
the box, e.g. box size = 500 kpc, gal_center2 = (2000, 0, 0) kpc. The first
galaxy was placed at the box centre.
6. 6.
For isolated simulations, both galaxies should have zero initial velocity,
i.e. Vgal1 and Vgal2 should be zero. gal_axis defines the disc’s spin axis.
For standard isolated simulations, use $\left(0,0,1\right)$ for
counter-clockwise rotation around the $z$-axis, or $\left(0,0,-1\right)$ for
clockwise rotation.
7. 7.
The T2_ISM parameter in the namelist and Gas_T in the h-dice template should
be equal. The temperature floor T2_star should be set to a slightly lower
value than T2_ISM. We used T2_ISM = 40,000 K and T2_Star = 30,000 K. The
simulation is not rated to work with T2_ISM or T2_Star below 25,000 K.
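The temperature rules in point 7 couple a dice parameter to two por namelist parameters, so mismatches are easy to introduce. A hypothetical consistency check (not part of either code) encoding those rules:

```python
def check_temperatures(gas_t, t2_ism, t2_star):
    """Encode the consistency rules from the text: Gas_T (h-dice) must equal
    T2_ISM (por namelist), the floor T2_star should sit below T2_ISM, and the
    simulation is not rated to work below 25,000 K."""
    if gas_t != t2_ism:
        raise ValueError("Gas_T in h-dice must equal T2_ISM in the por namelist")
    if not t2_star < t2_ism:
        raise ValueError("T2_star should be slightly lower than T2_ISM")
    if min(t2_ism, t2_star) < 25000:
        raise ValueError("not rated to work below 25,000 K")
    return True

check_temperatures(gas_t=40000, t2_ism=40000, t2_star=30000)  # values used here
```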
After taking care of all these parameters, one can start the run, leading to
creation of the output folders in due course. The simulations are RAM and
disk intensive: a hydrodynamical disc galaxy advanced for 1.5 Gyr on a 4-core
laptop could take a week or two depending on the RAM, and might occupy up to
100 GB of hard disk space. These estimates vary depending on the parameters
and machine used; see section 4.2 of the ramses manual for more details.
### 4.3 Hydrodynamical run with star formation
Converting gas into stars requires careful treatment of baryons, for which
ramses is well equipped and tested [34]. Since por is just a modification to
the Poisson solver, it does not affect the baryon treatment, inheriting that
of standard ramses.
To activate star formation, one has to include the &Physics_Params in the
namelist file. One can add &Physics_Params to the Test_hydro_mw_NSFR.nml and
activate star formation. Alternatively, one can use the MW_hydro_SFR.nml
provided in the bonnpor repository, which we used for our star formation run.
⬇
&PHYSICS_PARAMS
cooling=.true.
g_star=1.6666D0
n_star=0.1D0
eps_star=0.0D0
t_star=3.0d0   ! star formation timescale in Gyr
T2_star=4.0d4
/
All the above parameters are described in the ramses manual. The t_star
parameter is the star formation timescale in Gyr. Setting it to a finite, non-
zero value activates star formation. One can add other parameters as per
requirements.
## 5 Random turbulence generation
To allow for initial turbulence and (optionally) density fluctuations, a
random perturbation algorithm has been included based on the square-square
subdivision [35]. It is similar to the well-known diamond-square algorithm
widely used for the generation of random terrains, but provides a higher
quality of randomness and fewer artefacts. The algorithm first applies random
perturbations on a $2\times 2\times 2$ cubic array. At each subsequent step,
the cube cells are subdivided into $2\times 2\times 2$ arrays and perturbed
again, while the magnitude of the perturbation is reduced $2\times$ (unless
the user chooses a different value). Additional factors can be applied to the
magnitude of each step, following a user-defined power spectrum. The resulting
random noise is then multiplied by the density and/or the three velocity
components to generate turbulence.
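The subdivide-and-halve recursion described above can be sketched as follows. This is a deliberately simplified illustration using numpy: it omits the corner interpolation that gives the square-square method its smoothness and lower artefact level, and the deviation scale 0.1, reduction factor 0.5, and seed 309562 simply mirror the example qqm3d.par values discussed below:

```python
import numpy as np

def subdivide_perturb(levels, scale=0.1, reduce=0.5, seed=309562):
    """Illustrative sketch of the square-square recursion: start from a
    2x2x2 cube of random perturbations, then repeatedly split each cell
    into 2x2x2 children and add fresh noise whose magnitude shrinks by
    `reduce` per refinement level.  The real qqm3d code additionally
    interpolates corner values, which this sketch skips."""
    rng = np.random.default_rng(seed)
    field = rng.normal(0.0, scale, size=(2, 2, 2))
    for _ in range(1, levels):
        # split every cell into a 2x2x2 block of children
        field = field.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)
        scale *= reduce
        field += rng.normal(0.0, scale, size=field.shape)
    return field

noise = subdivide_perturb(levels=4)          # 16^3 grid of fractional noise
perturbed_density = 1.0 * (1.0 + noise)      # mean level 1, deviation scale 0.1
```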
The algorithm requires some additional variables to be set in the
&Merger_Params in the namelist file, and an extra parameter file qqm3d.par.
The extra lines in the namelist are:
⬇
&MERGER_PARAMS
…
flg_qqm3d=-1       ! master switch (-1 means off)
devflat_dens=1.0   ! density mean unperturbed level
devscal_dens=0.1   ! density deviation scale
devflat_vel=1.0    ! same, for velocities
devscal_vel=0.1
scale_objsize=1.0  ! size of the perturbation mask
The master switch controls the overall usage of the random perturbation
algorithm. “-1” means “off”, other modes are:
* •
0 or 10: only density is perturbed,
* •
1: only adds absolute perturbation to velocities,
* •
2: combines modes 0 and 1,
* •
11 and 12: like 1 and 2, but with velocity perturbations scaled by the
circular velocity (recommended),
* •
21 and 22: like 1 and 2, but with velocity perturbation relative to actual
velocities (experimental).
The parameter file qqm3d.par contains:
⬇
**** Setup parameters for qqm4ramses ****
8 2.5 4. 1      nsize,fsize,scalh ini,balance values
1 1 1.          init mode, deviate (0:lin, 1:Gauss), power
0.0             initial corner master values
10 0 1.0 1.0    hr_mode, stop rnd h after n iter (<=0:off), hreduce iter factor+power
309562 -1       seed, seed initialization mode
---- Corner perturbation scaling ----
0.2             scalh00
---- Feature power spectrum ----
0.1             scalh01
…
----- Corner initial values -----
0.              x01
…
Only the lines most relevant for beginners are shown. Other lines are mostly
experimental and should be left as they are, unless the user looks at the
source code for more details about their purpose. The most relevant values for
hr_mode are:
* •
4: uses the lines from the power spectrum block as weightings. The magnitude
of the first non-zero perturbation is equal to devscal and will be reduced by
hreduce (typically $1/2$) for each refinement level.
* •
10: uses a flat power spectrum with starting level fsize. Non-integer values
are used via an interpolation scheme.
The other hr_modes should not be used for scientific runs. For details, see
the source file qqm4ramses.f90 and its subroutine init_qqm3d.
## 6 Extraction of data with extract_por
For the extraction of particle data, a tool called extract_por was developed
by Ingo Thies and used here. Now including additional features related to star
formation, extract_por_sfr is available in the bonnpor repository. This is a user-friendly tool
that does not require much time to learn. After the tool is downloaded, it can
be extracted to /home/local. Installation is done by executing:
⬇
$ make xpordata
Inside extract_por, fmtRAMSES.par is the parameter file where the extraction
parameters can be set. It contains:
⬇
'path/to/your/outputfiles'
38         Output No.
32         Number of CPU threads
0          COM reset
---- RADIAL BINNING SECTION ----
10. 500    binning radius (in system units), nbins
---- Image params ----
1 1 0      flgimage
250 250    imagecenter
200 200    image width
500 500    nbx,nby
1.5 1.5    hx,hy smoothing (pixel units)
Again, not all the parameters are detailed here, the file itself being very
well commented. Only the parameters that might be important for a beginner are
shown.
1. 1.
The path should only specify where the output folders are located, not the
output folders themselves. Thus, ../../output_0001 will not be recognised.
2. 2.
COM reset subtracts the center of mass position and velocity. It could be used
if the object of interest lies outside the field of view.
To just extract the particle positions and velocities, only parameters until
the Partial COM section are important. After setting the parameters, execute:
⬇
$ ./xpordata
Based on the number of output files selected, the corresponding number of
part.asc and sfr.dat files will be created. The part.asc files contain data in
ascii format with the following column meanings:
⬇
1-3: position, 4-6: velocity, 7: mass,
8: particle ID
The sfr.dat file contains:
⬇
1: time interval in Myr, 2: SFR in M_Sun/Myr
The extraction algorithm calculates the total stellar mass in a given
snapshot, and evaluates the difference in stellar mass between two snapshots.
One can resolve the SFR better by changing delta_tout in the namelist file, or
by extracting particle birth times.
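For readers who analyse the output in Python, a minimal sketch of reading these files follows. It assumes only what is stated above: part.asc is whitespace-separated ascii with columns 1-3 position, 4-6 velocity, 7 mass, 8 particle ID, and sfr.dat holds the time interval in Myr and the SFR in M_Sun/Myr. These helpers are illustrative, not part of extract_por:

```python
import numpy as np

def load_particles(path):
    """Read a part.asc snapshot into named arrays (columns as listed above)."""
    data = np.loadtxt(path)
    return {
        "pos":  data[:, 0:3],   # positions
        "vel":  data[:, 3:6],   # velocities
        "mass": data[:, 6],     # particle masses
        "id":   data[:, 7].astype(int),
    }

def centre_of_mass(particles):
    """Mass-weighted centre of mass of a snapshot."""
    return np.average(particles["pos"], axis=0, weights=particles["mass"])

def mean_sfr(path):
    """Average SFR over a sfr.dat file (column 2, in M_Sun/Myr)."""
    data = np.loadtxt(path)
    return float(data[:, 1].mean())
```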
Any tool can be used to extract and plot the results from the part.asc files.
Even extract_por can be used for plotting, in which case all the sections
below Image_Params can be helpful. These sections can be used to set the
projected density, resolution etc. To use extract_por for plotting, the
following suggestions might be helpful:
1. In Image_params, unless one has a special case like [36], using 2:rgb or 3:rgbw is not helpful. Set it to 1:gray; this works, and one can then set the required projected density.
2. The binning radius can be critical for resolution. In case of poor resolution, increase the number of bins. To increase the pixel resolution and zoom in, reduce the image field of view in fmtRAMSES.par, i.e. reduce the bin sizes.
3. Users can set the box width equal to the simulation box size and locate the galaxy manually.
4. The hx and hy parameters smooth the image; change them as needed.
5. All parameters below hx, hy smoothing should not be changed.
Run extract_por and expect two output files to be produced:
1. part.asc
2. image.dat
The image.dat has the data required for plotting the image (particle
positions). The simplest way is to use gnuplot:
$ gnuplot
gnuplot> plot "image.dat" with image
## 7 Tests and publications using por
Since its development in 2015, por has been applied to a variety of problems.
A first implementation showed that the observed dynamics in polar ring
galaxies is explained naturally in MOND [37]. [38] compared Antennae-like
galaxy encounters in MOND and in dark matter models, studying the evolution
towards merging and the triggering of star formation in both models. The
Galactic tidal streams of Sagittarius [39] and Palomar 5 [40] were
investigated as gravitational experiments, with the latter’s asymmetry
interpreted as evidence for the EFE. [41] showed that the satellite galaxy
planes of the MW and M31 might arise from a past encounter between them. [42]
showed that exponential disc galaxies form naturally in MOND out of collapsing
post-Big Bang gas clouds. [32] simulated M33, finding that its long-term
evolution is well understood in MOND, especially its weak bar and lack of a
bulge. Their work also details some of the numerical methods, especially in
h-dice and the extfield patch.
## 8 Conclusions
por [11] is a general-purpose $N$-body and hydrodynamical solver for MOND. It
is based on adapting ramses, whose modern version is not compatible with the
por patch. It is recommended to use por from here2. This manual is a generic
outline with which one can understand the basics required to set up, run, and
analyse por simulations. The above-mentioned files like the namelist and
patches like staticpart and hydro are custom-made for a specific purpose, so
care should be taken before using them for a different application. All the
algorithms and tools mentioned in this guide are available here2, and are verified to work.
## Acknowledgements
IB is supported by an Alexander von Humboldt Foundation postdoctoral research
fellowship. BF acknowledges funding from the Agence Nationale de la Recherche
(ANR project ANR-18-CE31-0006 and ANR-19-CE31-0017) and from the European
Research Council (ERC) under the European Union’s Horizon 2020 research and
innovation programme (grant agreement No. 834148). On a historical note, when
PK joined the HISKP at Bonn University in 2013, financial means became
available that allowed Fabian Lüghausen to be hired as a PhD student co-
supervised by PK and BF, to program the por patch for ramses, and to buy
computer servers for MOND simulations. This led to the development of the por
code [11]. The authors would like to thank Jan Pflamm-Altenburg and the
referees for comments which helped to clarify this guide.
## References
* Milgrom [1983] M. Milgrom. A modification of the Newtonian dynamics as a possible alternative to the hidden mass hypothesis. _ApJ_ , 270:365–370, July 1983. 10.1086/161130.
* Famaey and McGaugh [2012] B. Famaey and S. S. McGaugh. Modified Newtonian Dynamics (MOND): Observational Phenomenology and Relativistic Extensions. _Living Reviews in Relativity_ , 15:10, September 2012. 10.12942/lrr-2012-10.
* Lelli et al. [2017] F. Lelli, S. S. McGaugh, J. M. Schombert, and M. S. Pawlowski. One Law to Rule Them All: The Radial Acceleration Relation of Galaxies. _ApJ_ , 836:152, February 2017. 10.3847/1538-4357/836/2/152.
* Bekenstein and Milgrom [1984] J. Bekenstein and M. Milgrom. Does the missing mass problem signal the breakdown of Newtonian gravity? _ApJ_ , 286:7–14, November 1984. 10.1086/162570.
* Milgrom [2010] M. Milgrom. Quasi-linear formulation of MOND. _MNRAS_ , 403:886–895, April 2010. 10.1111/j.1365-2966.2009.16184.x.
* Milgrom [1999] M. Milgrom. The modified dynamics as a vacuum effect. _Phys. Lett. A_ , 253:273–279, March 1999. 10.1016/S0375-9601(99)00077-8.
* Pazy [2013] E. Pazy. Quantum statistical modified entropic gravity as a theoretical basis for MOND. _Phys. Rev. D_ , 87(8):084063, April 2013. 10.1103/PhysRevD.87.084063.
* Verlinde [2016] E. P. Verlinde. Emergent Gravity and the Dark Universe. _SciPost Physics_ , 2:16, November 2016. 10.21468/SciPostPhys.2.3.016.
* Smolin [2017] L. Smolin. MOND as a regime of quantum gravity. _Physical Review D_ , 96(8):083523, October 2017. 10.1103/PhysRevD.96.083523.
* Milgrom [2015] M. Milgrom. MOND theory. _Canadian Journal of Physics_ , 93(2):107–118, February 2015. 10.1139/cjp-2014-0211.
* Lüghausen et al. [2015] F. Lüghausen, B. Famaey, and P. Kroupa. Phantom of RAMSES (POR): A new Milgromian dynamics N-body code. _Canadian Journal of Physics_ , 93:232–241, February 2015. 10.1139/cjp-2014-0168.
* Banik and Zhao [2018a] I. Banik and H. Zhao. Testing gravity with wide binary stars like $\alpha$ Centauri. _MNRAS_ , 480:2660–2688, October 2018a. 10.1093/mnras/sty2007.
* Brada and Milgrom [1999] R. Brada and M. Milgrom. Stability of Disk Galaxies in the Modified Dynamics. _ApJ_ , 519:590–598, July 1999. 10.1086/307402.
* Brada and Milgrom [2000] R. Brada and M. Milgrom. The Modified Dynamics is Conducive to Galactic Warp Formation. _ApJL_ , 531(1):L21–L24, March 2000. 10.1086/312510.
* Tiret and Combes [2007] O. Tiret and F. Combes. Evolution of spiral galaxies in modified gravity. _A &A_, 464:517–528, March 2007. 10.1051/0004-6361:20066446.
* Tiret and Combes [2008] O. Tiret and F. Combes. Evolution of spiral galaxies in modified gravity. II. Gas dynamics. _A &A_, 483:719–726, June 2008. 10.1051/0004-6361:200809357.
* Londrillo and Nipoti [2009] P. Londrillo and C. Nipoti. N-MODY: a code for collisionless N-body simulations in modified Newtonian dynamics. _Memorie della Societa Astronomica Italiana Supplementi_ , 13:89, January 2009.
* Nipoti et al. [2008] C. Nipoti, L. Ciotti, J. Binney, and P. Londrillo. Dynamical friction in modified Newtonian dynamics. _MNRAS_ , 386(4):2194–2198, June 2008. 10.1111/j.1365-2966.2008.13192.x.
* Nipoti et al. [2011] C. Nipoti, L. Ciotti, and P. Londrillo. Radial-orbit instability in modified Newtonian dynamics. _MNRAS_ , 414(4):3298–3306, July 2011. 10.1111/j.1365-2966.2011.18632.x.
* Wu and Kroupa [2013] X. Wu and P. Kroupa. The dynamical phase transitions of stellar systems and the corresponding kinematics. _MNRAS_ , 435(1):728–742, October 2013. 10.1093/mnras/stt1332.
* Wu and Kroupa [2018] X Wu and P Kroupa. Gas Expulsion in MOND: The Possible Origin of Diffuse Globular Clusters and Ultra-faint Dwarf Galaxies. _ApJ_ , 853(1):60, January 2018. 10.3847/1538-4357/aaa081.
* Wu and Kroupa [2019] X. Wu and P. Kroupa. The kinematics of star clusters undergoing gas expulsion in Newtonian and Milgromian dynamics. _MNRAS_ , 487(3):4012–4024, August 2019. 10.1093/mnras/stz1519.
* Llinares et al. [2008] C. Llinares, A. Knebe, and H. Zhao. Cosmological structure formation under MOND: a new numerical solver for Poisson’s equation. _MNRAS_ , 391:1778–1790, December 2008. 10.1111/j.1365-2966.2008.13961.x.
* Llinares [2011] C Llinares. _On the linear and non-linear evolution of dust density perturbations with MOND_. PhD thesis, Rijksuniversiteit Groningen, January 2011.
* Angus et al. [2011] G. W. Angus, A. Diaferio, and P. Kroupa. Using dwarf satellite proper motions to determine their origin. _MNRAS_ , 416:1401–1409, September 2011. 10.1111/j.1365-2966.2011.19138.x.
* Angus et al. [2013] G. W. Angus, A. Diaferio, B. Famaey, and K. J. van der Heyden. Cosmological simulations in MOND: the cluster scale halo mass function with light sterile neutrinos. _MNRAS_ , 436(1):202–211, November 2013. 10.1093/mnras/stt1564.
* Candlish et al. [2015] G. N. Candlish, R. Smith, and M. Fellhauer. RAyMOND: an N-body and hydrodynamics code for MOND. _MNRAS_ , 446:1060–1070, January 2015. 10.1093/mnras/stu2158.
* Candlish [2016] G. N. Candlish. The velocity field in MOND cosmology. _MNRAS_ , 460:2571–2585, August 2016. 10.1093/mnras/stw1130.
* Candlish et al. [2018] G. N. Candlish, R. Smith, Y. Jaffé, and A. Cortesi. Consequences of the external field effect for MOND disc galaxies in galaxy clusters. _MNRAS_ , 480:5362–5379, November 2018. 10.1093/mnras/sty2228.
* Teyssier [2002] R. Teyssier. Cosmological hydrodynamics with adaptive mesh refinement. A new high resolution code called RAMSES. _A &A_, 385:337–364, April 2002. 10.1051/0004-6361:20011817.
* Perret et al. [2014] V. Perret, F. Renaud, B. Epinat, P. Amram, F. Bournaud, T. Contini, R. Teyssier, and J.-C. Lambert. Evolution of the mass, size, and star formation rate in high redshift merging galaxies. MIRAGE - A new sample of simulations with detailed stellar feedback. _A &A_, 562:A1, February 2014. 10.1051/0004-6361/201322395.
* Banik et al. [2020] I. Banik, I. Thies, G. Candlish, B. Famaey, R. Ibata, and P. Kroupa. The global stability of M33 in MOND. _ApJ_ , 905(2):135, December 2020. 10.3847/1538-4357/abc623.
* Banik and Zhao [2018b] I. Banik and H. Zhao. The escape velocity curve of the Milky Way in modified Newtonian dynamics. _MNRAS_ , 473:419–430, January 2018b. 10.1093/mnras/stx2350.
* Rasera and Teyssier [2006] Y. Rasera and R. Teyssier. The history of the baryon budget: Cosmic logistics in a hierarchical universe. _A &A_, 445(1):1–27, January 2006. 10.1051/0004-6361:20053116.
* Miller [1986] G. S. P. Miller. The definition and rendering of terrain maps. _SIGGRAPH Comput. Graph._ , 20(4):39–48, August 1986. ISSN 0097-8930. 10.1145/15886.15890.
* Oehm et al. [2017] W. Oehm, I. Thies, and P. Kroupa. Constraints on the dynamical evolution of the galaxy group M81. _MNRAS_ , 467:273–289, May 2017. 10.1093/mnras/stw3381.
* Lüghausen et al. [2013] F. Lüghausen, B. Famaey, P. Kroupa, G. Angus, F. Combes, G. Gentile, O. Tiret, and H. Zhao. Polar ring galaxies as tests of gravity. _MNRAS_ , 432(4):2846–2853, July 2013. 10.1093/mnras/stt639.
* Renaud et al. [2016] F. Renaud, B. Famaey, and P. Kroupa. Star formation triggered by galaxy interactions in modified gravity. _MNRAS_ , 463:3637–3652, December 2016. 10.1093/mnras/stw2331.
* Thomas et al. [2017] G. F. Thomas, B. Famaey, R. Ibata, F. Lüghausen, and Pavel Kroupa. Stellar streams as gravitational experiments. I. The case of Sagittarius. _A &A_, 603:A65, July 2017. 10.1051/0004-6361/201730531.
* Thomas et al. [2018] G. F. Thomas, B. Famaey, R. Ibata, F. Renaud, N. F. Martin, and P. Kroupa. Stellar streams as gravitational experiments. II. Asymmetric tails of globular cluster streams. _A &A_, 609:A44, January 2018. 10.1051/0004-6361/201731609.
* Bílek et al. [2018] M. Bílek, I. Thies, P. Kroupa, and B. Famaey. MOND simulation suggests the origin of some peculiarities in the Local Group. _A &A_, 614:A59, June 2018. 10.1051/0004-6361/201731939.
* Wittenburg et al. [2020] N. Wittenburg, P. Kroupa, and B. Famaey. The formation of exponential disk galaxies in MOND. _ApJ_ , 890, February 2020. 10.3847/1538-4357/ab6d73.
# The cosmology dependence of galaxy clustering and lensing from a hybrid
$N$-body–perturbation theory model
Nickolas Kokron1,2 , Joseph DeRose3,4, Shi-Fan Chen3, Martin White3,5, Risa H.
Wechsler1,2
1 Kavli Institute for Particle Astrophysics and Cosmology and Department of
Physics, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305, USA
2 Kavli Institute for Particle Astrophysics and Cosmology, SLAC National
Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025, USA
3 Department of Physics, University of California, Berkeley, 366 LeConte Hall,
Berkeley, CA 94720, USA
4 Santa Cruz Institute for Particle Physics, University of California, Santa
Cruz, CA 95064, USA
5 Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 93720,
USA
Contact e-mail<EMAIL_ADDRESS>
###### Abstract
We implement a model for the two-point statistics of biased tracers that
combines dark matter dynamics from $N$-body simulations with an analytic
Lagrangian bias expansion. Using Aemulus, a suite of $N$-body simulations
built for emulation of cosmological observables, we emulate the cosmology
dependence of these nonlinear spectra from redshifts $z=0$ to $z=2$. We
quantify the accuracy of our emulation procedure, which is sub-per cent at
$k=1\,h{\rm Mpc}^{-1}$ for the redshifts probed by upcoming surveys and
improves at higher redshifts. We demonstrate its ability to describe the
statistics of complex tracer samples, including those with assembly bias and
baryonic effects, reliably fitting the clustering and lensing statistics of
such samples at redshift $z\simeq 0.4$ to scales of $k_{\rm max}\approx
0.6\,h\mathrm{Mpc}^{-1}$. We show that the emulator can be used for unbiased
cosmological parameter inference in simulated joint clustering and
galaxy–galaxy lensing analyses with data drawn from an independent $N$-body
simulation. These results indicate that our emulator is a promising tool that
can be readily applied to the analysis of current and upcoming datasets from
galaxy surveys.
###### keywords:
cosmology: theory – large-scale structure of Universe – methods: statistical –
methods: computational
pubyear: 2021
## 1 Introduction
We are entering a golden era for studying the large-scale structure of the
Universe. Over the next decade, ambitious imaging surveys will map out large
swathes of the sky to unprecedented depths, imaging billions of galaxies and
their shapes (Ivezić et al., 2019; Laureijs et al., 2011; Doré et al., 2015,
2019), enabling studies of weak gravitational lensing by the intervening
distribution of matter (Bartelmann & Schneider, 2001; Mandelbaum, 2018). Weak
lensing has only recently begun to contribute competitive cosmological
constraints on dark matter and dark energy (Abbott et al., 2018; Heymans et
al., 2020), but is one of the most promising future directions to pursue.
Meanwhile, spectroscopic surveys will observe tens of millions of radial
positions of galaxies (Takada et al., 2014; Aghamousa et al., 2016), enabling
unparalleled understanding of the spatial distribution of galaxies in our
Universe. The cross-correlation between positions and lensing, galaxy–galaxy
lensing, is and will continue to be a key driver of cosmological constraints
from galaxy surveys.
The quality and quantity of these upcoming datasets impose a significant challenge for their analysis. Even now, models for summary statistics such as
correlation functions and power spectra are inadequate across the full range
of scales probed by such surveys (Krause et al., 2017; Nishimichi et al.,
2020). Either a large amount of the data must be discarded, or mitigation
schemes must be developed to prevent contamination from scales where the
models are insufficiently calibrated or constrained (MacCrann et al., 2020;
Park et al., 2020). Models for clustering and lensing must be substantially
improved if we are to extract the maximal information about the Universe we
live in, from surveys that are already ongoing or planned. To date, two
separate approaches have been developed to build models for the observables of
cosmic surveys: analytically, through perturbative techniques, or numerically,
using non-linear $N$-body simulations.
Perturbation theory provides a systematic, analytic way to compute $N$-point summary statistics to successively higher precision and smaller scales (Bernardeau et al., 2002). Below the nonlinear scale, the effects of nonlinearities can be tamed and parametrized within the framework of effective theories (Baumann et al., 2012; Carrasco et al., 2012; Vlah et al., 2015).
This increased precision, however, comes at the cost of very large
inaccuracies beyond the nonlinear scale at which the self-gravitating dark
matter fluid ceases to be perturbative (Blas et al., 2014; McQuinn & White,
2016). In addition, perturbative frameworks provide a rigorous, first-
principles approach to include physics beyond the standard $\Lambda$CDM model
in large-scale structure observables such as neutrinos, baryonic effects and
more exotic early-universe scenarios (Lewandowski et al., 2015; Senatore &
Zaldarriaga, 2017; Aviles & Banerjee, 2020; Chen et al., 2020c; Laguë et al.,
2020; Ivanov et al., 2020; 2020arXiv200612420D; Aviles et al., 2020).
Understanding the domain of applicability of perturbation theory is still an
active field of research (Baldauf et al., 2016b; Nishimichi et al., 2020; Chen
et al., 2020a).
The other approach, simulation-based modelling, involves numerically solving
the equations of motion for an initial distribution of matter (Hockney &
Eastwood, 1988; Bagla, 2005; Kuhlen et al., 2012). The resulting catalogs can
be analysed in a way analogous to data to obtain predictions of cosmological
observables across a wide range of scales at the cosmological parameters of
the simulation.
However, a limiting factor in simulation-based analyses is that $N$-body
simulations require significant computational resources for a single
realization. Thus, standard inference procedures such as Markov Chain Monte
Carlo (MCMC) become prohibitively expensive when using models derived from
simulations.
In order to ameliorate the issues with simulation-based inference, recent
developments in statistical learning have popularized so-called emulators as
models (Heitmann et al., 2010, 2009; Lawrence et al., 2010). Emulators combine
a set of simulations that representatively sample cosmological parameter space
with sophisticated regression techniques to ‘fill in the blanks’ across
parameter space. Once trained, an emulator provides rapid evaluations of a
model which can be seamlessly integrated in analysis pipelines. For example,
recent emulators for the nonlinear matter power spectrum (Knabenhans et al.,
2019) have runtimes with negligible overhead compared to the underlying
Boltzmann codes used for linear predictions.
While galaxy surveys observe luminous tracers of the underlying dark matter
density distribution, most suites of $N$-body simulations used to construct
emulators deal only with the dark matter component. Thus, emulators for galaxy
survey observables are presented with the additional challenge of capturing
the relationship between the galaxy distribution and the underlying dark
matter. Understanding the details of this relationship, known as the
galaxy–halo connection, is an active field of research (see e.g. Wechsler &
Tinker, 2018, for a recent review). Even for well-studied samples of galaxies,
there are no consensus models to describe this relationship. For any given
model of the galaxy–halo connection, an entirely new emulator has to be
trained (Kwan et al., 2015; Wibking et al., 2019; Zhai et al., 2019;
McLaughlin et al., 2021). Emulation of models with a large number of free
parameters is also a challenging task, with techniques such as Gaussian
processes scaling as $\mathcal{O}(N^{3})$ with $N$ training points and a
substantially larger set of training data being required as one increases the
dimensionality of the model. The simplest forms of galaxy–halo connections
such as halo occupation distributions have five free parameters (Zheng et al.,
2005), and it is expected that for more complex selections of galaxy samples
the number will grow considerably (Guo et al., 2019; Yuan et al., 2018; Favole
et al., 2020; Zu, 2020).
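To make the 'fill in the blanks' idea concrete, here is a toy Gaussian-process emulator in pure numpy, trained on a few "simulations" of a 1-D function and queried at intermediate parameter values. This is a generic sketch, not the paper's actual emulation scheme; the kernel width and training grid are arbitrary choices for illustration:

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel between two sets of 1-D points."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# A few "simulations": the observable evaluated at sampled parameter points.
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train)

# Noise-free GP interpolation: solve K alpha = y once, then predictions
# at new parameter values are cheap kernel evaluations.
K = rbf(x_train, x_train) + 1e-10 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

x_test = np.array([0.3, 0.65])
y_pred = rbf(x_test, x_train) @ alpha  # GP mean prediction
```

The $\mathcal{O}(N^{3})$ scaling mentioned above is the `np.linalg.solve` step, which is why large training sets and high-dimensional models make GP emulation expensive.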
In comparison, modern perturbation theory approaches to galaxy clustering
operate at the field level via so-called bias expansions, which encode the
response of small-scale galaxy physics (e.g. the galaxy–halo connection) to
large-scale structure via a series of bias coefficients (see e.g. Desjacques
et al. 2018 for a recent review). A key advantage of bias models is that while
their dependence on parameters is simple and analytic, they should describe
the statistics of a broad range of galaxy (and halo) samples as long as they
are formed by processes that respect the symmetries of the underlying
processes of structure and galaxy formation, namely rotational and Galilean
invariance and the equivalence principle. Indeed, it was recently shown that
the bias expansion can be directly derived by generating all possible
dynamical terms and eliminating combinations not allowed by these symmetries
(Fujita & Vlah, 2020).
The challenges in using bias models come, instead, from the aforementioned
limitations of perturbation theory models themselves. Similarly to
perturbation theories for the clustering of dark matter, bias models are not
expected to hold across all scales. Instead, they are expected to be valid at
scales larger than or comparable to the Lagrangian size of haloes. This regime
is where one is insensitive to the internal structure of haloes (McDonald &
Roy, 2009; Fujita et al., 2020; Lazeyras & Schmidt, 2019; Vlah et al., 2016).
It is worth noting, however, that the nonlinear and halo scales are not
identical and scale differently with redshift — at higher redshifts
perturbative models may be more limited by the larger Lagrangian radii of
(typically more luminous or massive) samples than dynamical nonlinearities,
and vice versa at lower redshifts. This distinction is particularly apparent
in the Lagrangian basis (Matsubara, 2008; Vlah et al., 2016), in which galaxy
clustering due to dynamics and biasing are explicitly disentangled. Recently
Modi et al. (2020) suggested a way to combine the generality of bias
expansion-based models with $N$-body simulations in a manner that is
particularly suited for emulation, particularly in the regime where dynamics
become nonlinear on scales larger than the halo scales of interest. Since
higher-order Lagrangian biases have been found in simulations to be small for
low and intermediate mass haloes (Abidi & Baldauf, 2018; Lazeyras & Schmidt,
2018), this scheme keeps the dynamical nonlinearities from $N$-body
simulations to all orders while including Lagrangian bias only up to second
order.
In the remainder of this work we concern ourselves with the construction of an
emulator for the halo–halo and halo–matter correlations with analytic
dependence on bias parameters, extending the method presented in Modi et al.
(2020) to a generic cosmological parameter dependence which can then be
readily used for cosmological clustering analyses. The structure is as
follows: in section 2 we briefly review the Lagrangian description of galaxy
bias. In section 3 we describe the hybrid technique which combines
displacements obtained from $N$-body simulations with Lagrangian bias. Section
4 describes the Aemulus suite of simulations (DeRose et al., 2019b), which we
use to build the training data for the emulator. The measurements of the
‘basis spectra’ of the hybrid Lagrangian bias model, and their emulation, are
outlined in section 5. Section 6 concerns itself with assessing the
performance of the emulator. Specifically, sub-section 6.1 addresses the scale
and redshift-dependent error for each of the ten basis functions that span the
model. Subsection 6.2 assesses how well the model describes the statistics of
complicated galaxy samples, including those possessing concentration and spin
secondary biases, as well as the effect of baryons at small scales. Our final
test, subsection 6.3, pits the emulator against a series of increasingly
complex simulated likelihood analyses, in order to assess potential biases in
inferred cosmological parameters using our emulator and their origin.
## 2 Lagrangian bias expansion
In the Lagrangian approach to bias formulated in Matsubara (2008), the
observed clustering of galaxies is obtained through first weighting fluid
elements by a local functional $F[\delta(\textbf{q})]$ at their initial
(Lagrangian) positions q and then advecting these weights to their observed
positions via fluid trajectories $\textbf{x}=\textbf{q}+\mathbf{\Psi}$, where
$\mathbf{\Psi}(\textbf{q},t)$ is the Lagrangian displacement. As discussed in
the introduction, the bias functional $F$ is obtained by summing up all scalar
terms allowed by Galilean invariance and the equivalence principle up to a
given order in the initial conditions; up to quadratic order we have (Vlah et
al., 2016)
$\displaystyle F(\bm{q})\approx\,1+$ $\displaystyle
b_{1}\delta_{L}(\bm{q})+\frac{b_{2}}{2!}(\delta_{L}^{2}(\bm{q})-\langle\delta_{L}^{2}\rangle)\,+$
(1) $\displaystyle b_{s^{2}}(s_{L}^{2}(\bm{q})-\langle
s_{L}^{2}\rangle)+\,b_{\nabla^{2}}\nabla^{2}\delta_{L}(\bm{q})+\,\epsilon(\bm{q}),$
where $s^{2}=s_{ij}s_{ij}$ is the scalar contraction of the tidal shear tensor $s_{ij}$. The bias expansion is
local above the halo scale and the initial fields in the above functional are
to be interpreted as smoothed; any ‘nonlocal’ effects as we approach this
scale, as well as dependences on smoothing, are parametrized to lowest order
by the derivative bias $b_{\nabla^{2}}$. Modes below the halo scale,
uncorrelated with the large scales of interest, are represented by the
stochastic noise $\epsilon$.
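As a concrete illustration, the mean-subtracted quadratic operators $\delta_L^2$ and $s_L^2$ of Eq. (1) can be built from a linear density grid with FFTs. The following is a minimal sketch on a periodic cubic grid (the function name and grid conventions are ours, not from the paper's pipeline):

```python
import numpy as np

def lagrangian_fields(delta_L, boxsize):
    """Mean-subtracted quadratic Lagrangian operators delta^2 and s^2
    from a linear density grid delta_L, following Eq. (1).
    Minimal sketch on a periodic cubic grid."""
    n = delta_L.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # avoid 0/0; the zero mode is zeroed below

    dk = np.fft.fftn(delta_L)
    # Tidal tensor s_ij(k) = (k_i k_j / k^2 - delta_ij / 3) delta_L(k);
    # accumulate the scalar s^2 = s_ij s_ij in configuration space.
    s2 = np.zeros_like(delta_L)
    kvec = (kx, ky, kz)
    for i in range(3):
        for j in range(3):
            sij_k = (kvec[i] * kvec[j] / k2 - (i == j) / 3.0) * dk
            sij_k[0, 0, 0] = 0.0
            s2 += np.fft.ifftn(sij_k).real ** 2

    d2 = delta_L**2
    return d2 - d2.mean(), s2 - s2.mean()
```

The derivative operator $\nabla^{2}\delta_{L}$ follows the same pattern, multiplying $\delta_L(k)$ by $-k^{2}$ before transforming back.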
From the weighting $F(\textbf{q})$, the observed clustering is given via
number conservation to be
$1+\delta_{\alpha}(\textbf{x},z)=\int
d^{3}q\,\delta^{D}(\textbf{x}-\textbf{q}-\mathbf{\Psi}(\textbf{q},z))F(\bm{q}),$
(2)
where the Lagrangian displacement $\mathbf{\Psi}$ denotes the movement of the
fluid element relative to its initial position. At any given order, the
Lagrangian galaxy overdensity above can be mapped onto e.g. the Eulerian basis
of McDonald & Roy (2009) by Taylor expanding $\mathbf{\Psi}$. However, keeping
the nonlinear mapping in the integral above will generate a tower of Eulerian
bias parameters even if only a few of the Lagrangian bias parameters are
nonzero (see e.g. Abidi & Baldauf, 2018). We will treat the bias values,
$b_{\alpha}$, as free parameters. Ab initio prediction of the $b_{\alpha}$ for general tracer populations is a harder problem, and an active area of current research.
## 3 Lagrangian bias and simulations
Recently, it has been proposed that one can combine the fully resolved dark
matter dynamics of an $N$-body simulation with the analytic perturbative bias
techniques we outlined in the previous section (Modi et al., 2020). The use of
dynamics from an $N$-body simulation means this hybrid model circumvents the
need for perturbative calculations related to the equations of motion of the
dark matter fluid itself. Additionally, $N$-body simulations are relatively
inexpensive (compared to hydrodynamical simulations) and well-controlled,
well-defined limits for observables exist so that convergence of measured
quantities can be assessed systematically (e.g. Power et al., 2016; Mansfield
& Avestruz, 2020; Joyce et al., 2020). As such, this hybrid model combines two
techniques with solid theoretical foundations, ensuring robustness of its
predictions. We will briefly describe the technique and how one implements it
below, but refer the reader to Modi et al. (2020) for a more complete
discussion.
When creating initial conditions of an $N$-body simulation, one starts from a
noiseless linear cosmological density field, $\delta_{L}(\bm{x})$.
Traditionally, this density is only used to sample initial displacements, which imprint a cosmological signal on a set of pre-initial conditions. First-order displacements using the Zeldovich approximation,
$\Psi(\bm{q})=\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\bm{k}\cdot\bm{q}}\frac{i\bm{k}}{k^{2}}\delta_{L}(\bm{k}),$
(3)
result in so-called 1LPT initial conditions. However, higher order initial
conditions (Crocce et al., 2006; Garrison et al., 2016; Michaux et al., 2020)
are now ubiquitous in modern simulations.
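Eq. (3) is a single multiplication in Fourier space; a minimal sketch of the Zeldovich displacement field on a periodic cubic grid (function name and conventions are ours):

```python
import numpy as np

def zeldovich_displacement(delta_L, boxsize):
    """First-order (Zeldovich) displacement Psi(q) from a linear density
    grid via Eq. (3): Psi(k) = (i k / k^2) delta_L(k). Minimal sketch."""
    n = delta_L.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # avoid 0/0; the zero mode carries no displacement

    dk = np.fft.fftn(delta_L)
    psi = np.empty((3, n, n, n))
    for i, ki in enumerate((kx, ky, kz)):
        pk = 1j * ki / k2 * dk
        pk[0, 0, 0] = 0.0
        psi[i] = np.fft.ifftn(pk).real
    return psi  # (3, n, n, n): Psi_x, Psi_y, Psi_z on the q-grid
```

A quick consistency check is a single plane wave: for $\delta_L = \cos(k_0 q_x)$ the sketch returns $\Psi_x = -\sin(k_0 q_x)/k_0$, so that $\nabla\cdot\mathbf{\Psi} = -\delta_L$ at first order.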
The noiseless initial density field can also be used to construct the
different component fields of the Lagrangian bias expansion of the initial
conditions:
${O}_{L}\supset{1,\delta_{L},\delta_{L}^{2},s_{L}^{2},\nabla^{2}\delta_{L},\cdots},$
(4)
where the subscript $L$ indicates these are the Lagrangian fields. Advecting
$N$-body particles weighted by $\mathcal{O}_{L}$ to a specific snapshot
results in bias-weighted fields,
$\delta_{\mathcal{O}_{L}}(\textbf{x})\equiv\int d^{3}\textbf{q}\
\mathcal{O}_{L}(\textbf{q})\
\delta_{D}(\textbf{x}-\textbf{q}-\Psi(\textbf{q})),$ (5)
which trace the non-linear dark matter distribution. In Fig. 1 (middle panel)
we show an example of the different bias-weighted fields produced by this
procedure. These fields are similar to the ‘Eulerian-shifted’ operator basis
of Schmittfull et al. (2019). A notable difference is that in our case the
displacements are fully resummed, while the Eulerian-shifted basis of
Schmittfull et al. (2019) only resums the Zeldovich displacement (1LPT).
Higher order displacements ($n$LPT) are Taylor-expanded up to third order as
part of their bias expansion. The difference is because our aim in this paper
is to attempt to model scales beyond the reach of standard one-loop
perturbation theory, whereas the goal of Schmittfull et al. (2019) was to
validate one-loop perturbation theory at the field level (see also Taruya et
al. 2018).
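A discrete version of the advection in Eq. (5) is a weighted particle deposit: each Lagrangian grid cell carries its weight $\mathcal{O}_L(\textbf{q})$ to $\textbf{x}=\textbf{q}+\mathbf{\Psi}(\textbf{q})$. The sketch below uses nearest-grid-point assignment for brevity; production pipelines typically use cloud-in-cell deposition, and the function name is ours:

```python
import numpy as np

def advect_ngp(weights, psi, boxsize):
    """Deposit Lagrangian weights O_L(q) at x = q + Psi(q) on the same
    grid (nearest-grid-point assignment): a discrete Eq. (5)."""
    n = weights.shape[0]
    cell = boxsize / n
    q = np.indices((n, n, n)) * cell          # Lagrangian grid positions
    x = (q + psi) % boxsize                   # advected, periodic wrap
    idx = np.rint(x / cell).astype(int) % n   # nearest grid point
    field = np.zeros((n, n, n))
    # np.add.at handles repeated indices correctly (unbuffered add).
    np.add.at(field, (idx[0].ravel(), idx[1].ravel(), idx[2].ravel()),
              weights.ravel())
    return field
```

With zero displacement the deposit returns the weights unchanged, and a uniform one-cell shift rolls the field by one cell, which makes the routine easy to unit-test.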
The power spectrum of any combination of tracers can then generically be
written as ($X,\,Y\equiv{\delta_{\mathcal{O}_{L}}}$)
$P^{ab}(k)=\sum_{X,Y}b_{X}^{a}b^{b}_{Y}P_{XY}(k)+P_{SN},$ (6)
where $P_{XY}$ is the cross-power spectrum at a fixed cosmology between the
different fields at a given redshift. For example, the unweighted spectrum,
$P_{11}$, is the non-linear matter power spectrum.
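Given measured (or emulated) basis spectra, assembling the tracer spectra of Eq. (6) is a simple bilinear sum over bias coefficients. A toy sketch (the operator labels and random stand-in spectra are ours; in practice $P_{XY}$ comes from the simulations):

```python
import numpy as np

# Operator labels: 1, delta, delta^2, s^2, grad^2 delta.
ops = ["1", "d", "d2", "s2", "n2d"]
nk = 4
rng = np.random.default_rng(0)
P_XY = {}  # toy symmetric table of basis spectra P_XY(k)
for i, X in enumerate(ops):
    for Y in ops[i:]:
        P_XY[(X, Y)] = P_XY[(Y, X)] = rng.random(nk)

def P_tracer(bias_a, bias_b, P_SN=0.0):
    """P^ab(k) = sum_XY b_X^a b_Y^b P_XY(k) + P_SN, with bias['1'] = 1."""
    return sum(bias_a[X] * bias_b[Y] * P_XY[(X, Y)]
               for X in ops for Y in ops) + P_SN

b_h = {"1": 1.0, "d": 1.2, "d2": 0.3, "s2": -0.1, "n2d": 0.0}
b_m = {"1": 1.0, "d": 0.0, "d2": 0.0, "s2": 0.0, "n2d": 0.0}  # matter

P_hh = P_tracer(b_h, b_h, P_SN=100.0)  # auto-spectrum with shot noise
P_hm = P_tracer(b_h, b_m)              # tracer-matter cross-spectrum
```

Setting the second sample's biases to zero except $b_1^m$ reduces the sum to the $P_{X1}$ terms, which is exactly the tracer–matter prescription described below.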
This Lagrangian bias model can handle cross-correlations of arbitrary tracers.
However, we also note that given a set of bias parameters for a single tracer
sample $\alpha$, $\\{b_{X}^{\alpha},\,X\in\mathcal{O}_{L}\\}$, one can also
self-consistently predict the tracer–matter cross-correlation by taking the
second sample to have $b_{Y}^{m}=0$ except for $Y=1$. In this case there are
only $P_{X1}$ terms. The tracer–matter cross-correlation is the primary cosmic
contribution to the signal of galaxy–galaxy lensing, one of the key
cosmological observables of current and upcoming galaxy surveys (Prat et al.,
2018; Yoo et al., 2006; Wibking et al., 2020; Mandelbaum, 2018). The
tracer–matter cross-correlation is also the primary contribution to the cross-
correlation between galaxy positions and lensing of the cosmic microwave
background (CMB), one of the most powerful and complementary statistics that
is measured between galaxy and CMB surveys (Bianchini et al., 2015; Pullen et
al., 2016; DiPompeo et al., 2017; Peacock & Bilicki, 2018; Omori et al., 2019;
Singh et al., 2019; Krolewski et al., 2020). For notational convenience,
throughout the remainder of this paper we will refer to the tracer–tracer
correlation as $P^{hh}(k)$ and the tracer–matter correlation as $P^{hm}(k)$.
This hybrid approach of combining $N$-body simulations with Lagrangian bias
can fit the power spectrum of tracers to significantly smaller scales than
standard Lagrangian perturbation theory (Modi et al., 2020). While the
dependence on the Lagrangian bias parameters $b_{X}$ is analytic in this
model, one still requires an $N$-body simulation to measure the basis spectra.
An $N$-body simulation at a given point of cosmological parameter space then
provides a measurement of the basis spectra at that point. With $N$-body
simulations that sufficiently sample parameter space one can estimate the
cosmological dependence of these basis functions across the entire space. This
is precisely the goal of this work.
Figure 1: Visualization of the methodology implemented in this paper, from the
advection process to the measurements of the basis spectra. Our emulation
scheme approximates the cosmology and redshift dependence of each spectrum in
the ten panels in the lower part of the figure. The top panel has each
Lagrangian field scaled to have equal variance, in order to highlight the
qualitative differences between the fields. The middle panel shows the
bias-weighted fields that result from the advection process. Different weights
highlight qualitatively different aspects of the matter density. The cross-
spectra of these fields give the spectra shown in the lower panel.
## 4 The Aemulus simulations
In order to properly emulate the cosmology dependence of the basis spectra
$P_{XY}(k)$, the underlying suite of $N$-body simulations used for
measurements of observables must be constructed carefully.
The Aemulus suite of $N$-body simulations (DeRose et al., 2019b) has been
purpose-built for precise emulation of cosmological observables measured in
galaxy surveys. The suite is composed of a set of 75 simulations that span 47
points in the $w$CDM parameter space allowed by a combination of modern CMB,
BAO and type Ia supernova experiments.
Each Aemulus box has a size $L_{\rm box}=1050\,h^{-1}$Mpc with $N=1400^{3}$
particles, corresponding to a mass resolution of $3.51\times
10^{10}\left(\frac{\Omega_{m}}{0.3}\right)h^{-1}M_{\odot}$. The Aemulus
simulations have undergone rigorous convergence and validation tests for
several observables. There are 10 particle snapshots spanning $0<z<3$,
allowing for measurements of the redshift dependence of the non-linear basis
spectra.
Aemulus’ halo mass function emulator has sufficient accuracy, over the defined
cosmological parameter space, to remain valid through the Rubin Observatory’s
Y1 LSST survey (McClintock et al., 2019), while the galaxy correlation
function can predict the clustering of massive galaxy samples, such as those
observed by DESI, to within 1 per cent down to scales of $r\approx
1\,h^{-1}$Mpc (Zhai et al., 2019).
Thus, Aemulus represents an appropriate setting to construct an emulator for
the Lagrangian bias basis spectra described in section 3. The only missing
component is that the initial conditions code used in Aemulus, 2LPTIC (Crocce
et al., 2012), does not output the noiseless linear density fields. We patched
the code to read out this field and re-generated the initial conditions.
Figure 2: Ratio of the measured basis spectra compared to LPT predictions for
one of the cosmologies in the Aemulus test set. The mean of the five
independent boxes in the test set is shown, and the shaded band represents one
standard deviation as inferred from the boxes. The dashed vertical line at
$k=0.1\,h\,{\rm Mpc}^{-1}$ shows the point where we revert to predictions of LPT.
As discussed in the text, we find some small multiplicative differences at
large scales for most basis spectra, which are larger for the basis spectra
built from higher powers of the density field. This is most likely due to
discrepancies in growth factors obtained between linear theory and $N$-body
simulations.
## 5 Emulating the basis spectra
### 5.1 Measuring basis spectra
We now describe in detail our implementation of the hybrid Lagrangian biasing
scheme described in Section 3. Schematically, the process of obtaining
measurements of the basis spectra from an $N$-body box can be broken down into
four steps:
1.
Compute the Lagrangian bias fields: given the noiseless density field
$\delta_{L}$ one constructs the other weight fields $\mathcal{O}_{L}$ by
applying the appropriate transformations.
2.
Advect particles to a given snapshot: every particle ID can be associated with
a grid cell $\\{i,j,k\\}$ in the fields $\mathcal{O}_{L}$. Every particle in a
snapshot receives a weight
$\left(\frac{D(z)}{D(z_{0})}\right)^{n}\times\mathcal{O}_{L}[i,j,k]$, where
$\left(\frac{D(z)}{D(z_{0})}\right)$ is the ratio of growth factors between
the snapshot and initial conditions, and $n$ is the number of powers in the
linear density field that make up $\mathcal{O}_{L}$.
3.
Paint the weighted particles to a grid, to form the late-time bias fields.
4.
Measure the basis spectra: the painted bias fields are cross-correlated with
each other to measure the basis spectra $P_{XY}$ for that given cosmology and
redshift.
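The four steps above can be sketched with toy numpy arrays. This is a minimal illustration, not the production pipeline; the grid size, particle count, and growth-factor ratio are all assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                     # toy grid size (assumed)
delta_L = rng.normal(size=(n, n, n))       # stand-in for the noiseless linear field

# Step 1: build the Lagrangian weight fields O_L (here just 1, delta, delta^2).
fields = {"1": np.ones_like(delta_L), "d": delta_L, "d2": delta_L**2}
n_powers = {"1": 0, "d": 1, "d2": 2}       # powers of delta_L in each field

# Step 2: each particle inherits the weight of its initial grid cell {i,j,k},
# rescaled by (D(z)/D(z0))**n for a field containing n powers of delta_L.
i, j, k = rng.integers(0, n, size=(3, 1000))   # toy particle -> cell map
D_ratio = 0.5                                  # assumed D(z)/D(z0)
weights = {X: D_ratio**n_powers[X] * fields[X][i, j, k] for X in fields}

# Step 3: paint the weighted particles back to a grid (nearest grid point).
painted = {}
for X, w in weights.items():
    g = np.zeros((n, n, n))
    np.add.at(g, (i, j, k), w)
    painted[X] = g

# Step 4: cross-correlating the painted fields (e.g. in Fourier space)
# would yield the basis spectra P_XY for this snapshot.
```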
This procedure imposes some additional storage requirements. While a particle
catalog normally has seven entries for every particle, $(ID,\,\bm{x},\bm{v})$,
each bias field weight will add an additional entry. Naively saving component
weights at every snapshot will lead to a 57 per cent increase in catalog size.
However, the time evolution of the weights is determined entirely by the
linear growth function and can be determined on the fly. Thus, the fractional
increase in catalog size will only be of order $\sim(1/7)(N_{b}/N_{z})$, where
$N_{b}$ is the number of bias-weighted fields computed and $N_{z}$ is the
number of snapshots used. For the second order basis of
$\mathcal{O}=\\{1,\delta_{L},\delta_{L}^{2},s_{L}^{2},\nabla^{2}\delta_{L}\\}$
this represents a fractional increase in catalog size of 6 per cent. Even if
the weights are not stored, all of the steps outlined above can be carried out
on the fly when needed.
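The storage bookkeeping above is simple arithmetic; a quick check of the quoted numbers, assuming the weights are stored only once rather than at every snapshot:

```python
def catalog_overhead(n_bias_fields, n_snapshots, entries_per_particle=7):
    """Fractional increase in catalog size ~ (1/7) * (N_b / N_z): seven
    entries per particle (ID, x, v), N_b stored weight fields, and N_z
    snapshots over which that one-time cost is amortized."""
    return n_bias_fields / (entries_per_particle * n_snapshots)

# Second-order basis {1, delta, delta^2, s^2, nabla^2 delta}: four
# non-trivial weights, ten snapshots.
print(f"{catalog_overhead(4, 10):.0%}")        # -> 6%
# Naively storing each weight at all ten snapshots instead:
print(f"{catalog_overhead(4 * 10, 10):.0%}")   # -> 57%
```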
In Fig. 2 we show a comparison between the predictions of one-loop Lagrangian
perturbation theory and the basis spectra averaged across five Aemulus boxes
with the same cosmology, from the test suite. For all basis spectra we recover
the LPT result at large scales to within a few per cent. While one would
expect the agreement at large scales to be exact, it is well known that
$N$-body simulations struggle to correctly recover linear growth at large
scales (Heitmann et al., 2010; Schneider et al., 2016; Garrison et al., 2016)
due to transients from the grid that particles are initialized on, and the
discrete nature of the kick-drift-kick operators used in time-stepping. This
discrepancy is also present in Aemulus, as can be seen in fig. 13 of DeRose et
al. (2019b). The Aemulus simulations have a 1 per cent mismatch in growth at
large scales, which is redshift independent at the largest scales. Differences
in growth between linear theory and the simulations would then be amplified
for the basis spectra built from multiple fields. In Appendix C we explore the
$k\to 0$ differences between LPT and our emulator, present prescriptions for
enforcing consistency and discuss the small impact they have on parameter
inference.
At small scales, we see that non-linear structure formation imbues significant
differences between LPT and the simulations. At the highest redshift shown,
$z=2$, the agreement for the three spectra that dominate the signal
($\langle 1,1\rangle$, $\langle 1,\delta\rangle$, and
$\langle\delta,\delta\rangle$) is close throughout all scales probed in our
simulation. Thus, for the scales
under consideration, we find no need to extend the emulator to $z>2$.
Above $z=1$, the Aemulus simulations only have snapshots at $z=2$ and $z=3$,
and thus any attempt to emulate redshift evolution between these snapshots is
too poorly sampled for the emulator to achieve our desired performance. For
$z\geq 2$ the emulator reverts to predictions from velocileptors (Chen et al.,
2020b), a public code to predict LPT power spectra and correlation functions
to one loop order. This agrees quite well with most basis spectra, as seen in
Fig. 2. When reverting to LPT at $z>2$, our implementation includes an additional
free parameter. This parameter corresponds to the $k^{2}$ counterterm for
matter that takes into account the effects of small-scale physics not captured
by perturbation theory (Vlah et al., 2015). We note that there are no specific
impediments to measuring basis spectra, or the emulation scheme adopted, at
higher redshifts. Given simulations that are sufficiently well sampled in
time, out to the furthest bin one wishes to include, the techniques described
here should apply.
The LPT predictions shown in Fig. 2 are a limit of a more complete theory that
includes redshift-space distortions (Chen et al., 2020a, b). The agreement
between $N$-body simulations and this subset of LPT at large scales implies
the bias parameters in the full theory and our hybrid model are equivalent; a
set of bias parameters obtained from fitting the emulator to a sample can then
be used in tandem with RSD measurements analysed purely with perturbation
theory at a slightly more restrictive $k_{\mathrm{max}}$. Since the RSD
measurements are done in 3D, rather than in projection, one can achieve small
measurement errors even at a more restrictive $k_{\rm max}$, making this
combination an efficient one, e.g. for testing general relativity (Alam et al., 2017;
Zhang et al., 2020).
We note that we omit results for the basis spectra $\langle
X,\nabla^{2}\delta\rangle$. The initial weight field $\nabla^{2}\delta_{L}$
has a large amount of power at very small scales, making its Fourier transform
unwieldy due to the presence of an explicit smoothing scale of $k\sim L_{\rm
grid}^{-1}$. As a result, we find the basis spectra as measured through the
advection procedure have a cosmology-dependent amplitude mismatch when
compared to LPT predictions at large scales. Therefore we adopt the
approximation $\langle X,\nabla^{2}\delta\rangle\approx-k^{2}\langle
X,1\rangle$ in the actual emulation scheme. Since these higher derivative bias
contributions most closely correspond to the effects of baryonic physics and
finite-size effects for haloes, we check that the approximation performs
similarly in Section 6.2. Specifically, in Fig. 8 we explicitly show the
differences between the measured $P_{1\nabla^{2}}$ and the approximation
employed. We also note the approximation lowers the complexity of the
emulation scheme, reducing the full set of basis functions at second order to
be emulated from 15 to 10.
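The substitution used for the higher derivative spectra is a one-liner; a sketch with an assumed toy spectrum:

```python
import numpy as np

def nabla2_spectrum(k, P_X1):
    """Approximation <X, nabla^2 delta> ~= -k^2 <X, 1> adopted in the
    emulation scheme (sketch; P_X1 stands in for the measured <X, 1>
    basis spectrum)."""
    return -k**2 * P_X1

k = np.linspace(0.1, 1.0, 50)
P_11 = 1e4 * k**-1.5                 # assumed toy <1, 1> spectrum
P_1n2 = nabla2_spectrum(k, P_11)     # approximate <1, nabla^2 delta>
```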
### 5.2 Principal components of non-linear spectra
Once the basis spectra have been measured across all boxes, the emulator is
built by adopting a suitable interpolation scheme between the different
spectra. While other emulators using the Aemulus simulations have been
constructed using Gaussian processes (GPs), we adopt a different approach
here, similar to that used in the Euclid emulator (Knabenhans et al., 2019),
using a combination of principal component analysis and polynomial chaos
expansions (PCE) (Xiu, 2010).
We prefer PCE to GP emulation for a few practical reasons. GPs are more
difficult to train, requiring explicit choices for kernels and tuning of
real-valued hyperparameters. Additionally, the run time for evaluating a trained
GP scales with the amount of data used for training, while the run-time of a
PCE model evaluation scales only with the order of the PCE. Furthermore, the
polynomial nature of PCEs means that they have fast, analytic gradients,
making them easy to integrate with sampling techniques such as Hamiltonian
Monte Carlo (Hoffman & Gelman, 2011), although we have not done so in this
work. GPs may still be preferred when the model being emulated is highly
complex, but, as we show in the following sections, we are able to attain a
nearly optimal emulator performance with the simpler and faster PCE scheme.
To begin, we compute one-loop LPT predictions for each basis spectrum at every
cosmology and redshift in the Aemulus training design, which we will refer to
as $P_{\rm XY}^{\rm LPT}(k,\mathbf{\Omega})$, where $\mathbf{\Omega}$ denotes
the cosmology and redshift in question. To do this we make use of the
velocileptors code (Chen et al., 2020b).
We then compute the ratio between the LPT predictions and the measured basis
spectra, $P_{XY}^{\rm NL}(k,\mathbf{\Omega})$, from each snapshot. These
ratios are thus consistent with unity at small wavenumbers, and while they
deviate significantly from unity at high $k$ they have significantly less
dynamic range than the basis spectra. In order to de-noise these ratios, we
apply a Savitzky–Golay (Savitzky & Golay, 1964) filter of order three using an
11-point window in $k$. Doing so dramatically reduces the amount of noise in
the spectra, and is a simple alternative to reduce noise at high $k$, where
techniques such as fixed amplitude, paired phase simulations do little to
reduce variance (Angulo & Pontzen, 2016; Villaescusa-Navarro et al., 2018;
Chuang et al., 2019). As a final preprocessing step, we also take the base-10
logarithm of these smoothed ratios in order to further decrease the dynamic
range. This yields the quantity that we emulate, which we call
$\Gamma^{XY}(k,\mathbf{\Omega})$,
Figure 3: The first two principal components of the log-ratios between
$N$-body and LPT spectra, $\Gamma^{XY}$, for each basis spectrum. The
principal components are very smooth compared to the raw basis spectrum
measurements from the simulations. Two principal components are sufficient to
explain greater than 99 per cent of the variance in all spectra as a function
of redshift and cosmology.
$\Gamma^{XY}(k,\mathbf{\Omega})\equiv\log_{10}\left(\frac{P_{XY}^{\rm
NL}(k,\mathbf{\Omega})}{P_{XY}^{\rm LPT}(k,\mathbf{\Omega})}\right)$ (7)
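The pre-processing chain (ratio to LPT, order-3 Savitzky–Golay smoothing in an 11-point window, then the base-10 logarithm) can be sketched as follows; the toy spectra are assumptions for illustration:

```python
import numpy as np
from scipy.signal import savgol_filter

def gamma_xy(P_nl, P_lpt):
    """Gamma^XY of Eq. (7): log10 of the smoothed N-body/LPT ratio."""
    ratio = P_nl / P_lpt
    smoothed = savgol_filter(ratio, window_length=11, polyorder=3)
    return np.log10(smoothed)

# Toy spectra: an LPT-like shape plus a smooth small-scale boost and 1% noise.
k = np.linspace(0.1, 1.0, 100)
P_lpt = 1e4 * k**-1.5
noise = 1 + 0.01 * np.random.default_rng(1).normal(size=k.size)
P_nl = P_lpt * (1 + k**2) * noise
gamma = gamma_xy(P_nl, P_lpt)
```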
After these pre-processing steps, we proceed by constructing a principal
component basis for these spectra. At this point we restrict ourselves to
$0.1<k<1\,h\,{\rm Mpc}^{-1}$ and $0<z<2$; however, we note that in principle
there are no issues extending to broader scales and redshifts if simulations
allow for it.
Let $\mathbf{X}_{XY}$ be the $N\times M$ array containing $\Gamma^{XY}$, where
$N=N_{\rm cosmo}\times N_{z}$, $N_{\rm cosmo}$ is the number of cosmologies in
our training set, $N_{z}$ is the number of redshift outputs per cosmology in
our training set and $M$ is the number of $k$ values under consideration. Then
a basis of principal components can be constructed by computing the
eigenvectors of the covariance matrix of $\mathbf{X}_{XY}$:
$\displaystyle\mathbf{C}_{XY}=\mathbf{X}_{XY}^{\rm
T}\mathbf{X}_{XY}=\mathbf{W}_{XY}\mathbf{\Lambda}_{XY}\mathbf{W}_{XY}^{\rm T},$ (8)
where the rows of $\mathbf{W}_{XY}$ are the eigenvectors, i.e., the principal
components, in question and $\mathbf{\Lambda}_{XY}$ is a diagonal matrix of
the eigenvalues, which are equal to the variance of the data described by each
eigenvector. In all cases, greater than 99 per cent of the variance in each
basis spectrum is described by the first two principal components, shown in
Figure 3. We thus disregard all other principal components for the duration of
this work. Given the results discussed in Section 6, we deem this to be
sufficient. Having computed the principal components, we then determine the
projection of them onto each measured $\Gamma^{XY}$ via:
$\displaystyle\mathbf{A}_{XY}=\mathbf{X}_{XY}\mathbf{W}_{XY},$ (9)
where $\mathbf{A}_{XY}$ is an $N\times 2$ matrix containing the principal
component coefficients $\alpha^{XY}_{i}(\mathbf{\Omega})$ for each cosmology
and redshift in our training set. It is the dependence of these coefficients
on cosmology and redshift that we build a surrogate model for using polynomial
chaos expansions (Wiener, 1938).
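Eqs. (8)–(9) amount to a standard (uncentered) PCA; a self-contained numpy sketch on synthetic $\Gamma^{XY}$ data, where the two smooth underlying modes and the noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 400, 100                 # (cosmologies x redshifts) samples, k bins (toy)
k = np.linspace(0.1, 1.0, M)

# Toy Gamma^XY matrix: two smooth underlying modes plus small noise.
amps = rng.normal(size=(N, 2))
modes = np.stack([k, k**2])
X = amps @ modes + 0.01 * rng.normal(size=(N, M))

# Eq. (8): eigendecomposition of C = X^T X.
C = X.T @ X
eigval, W = np.linalg.eigh(C)   # eigenvalues in ascending order
W = W[:, ::-1][:, :2]           # keep the leading two principal components

# Eq. (9): project onto the PCs to get the coefficient matrix A (N x 2).
A = X @ W

explained = eigval[::-1][:2].sum() / eigval.sum()
```

Since the toy data are rank two up to noise, the first two components capture essentially all of the variance, mirroring the 99 per cent figure quoted for the real spectra.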
### 5.3 Emulating cosmology dependence with polynomial chaos
With our principal components in hand, every point in cosmological parameter
space sampled by the training set has coefficients for the approximation
$\displaystyle\Gamma^{XY}(k,\mathbf{\Omega})\approx\sum_{i}\alpha_{i}^{XY}(\mathbf{\Omega})\mathrm{PC}_{i}^{XY}(k).$
(10)
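Given emulated coefficients and the stored principal components, inverting the pre-processing of Eq. (7) recovers the non-linear spectrum; a minimal sketch (the names and toy components are illustrative):

```python
import numpy as np

def predict_pnl(P_lpt, alphas, pcs):
    """Undo Eq. (7) using the approximation of Eq. (10):
    P_NL(k) = P_LPT(k) * 10**(sum_i alpha_i PC_i(k))."""
    gamma = sum(a * pc for a, pc in zip(alphas, pcs))
    return P_lpt * 10.0**gamma

k = np.linspace(0.1, 1.0, 5)
pcs = [np.ones_like(k), k]          # toy principal components (assumed)
P_lpt = 1e4 * k**-1.5
# With all coefficients zero the prediction reduces to LPT.
P_nl = predict_pnl(P_lpt, [0.0, 0.0], pcs)
```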
The problem of emulating the cosmology dependence of the $\Gamma^{XY}$
functions is now reduced to that of figuring out the cosmology dependence of
the PC coefficients $\alpha_{i}(\mathbf{\Omega})$. A polynomial chaos
expansion (PCE) (of order $N$) of this dependence is the decomposition of the
$\alpha_{i}$ onto a basis of products of orthogonal polynomials
$\Phi_{\mathbf{i}}(\mathbf{\Omega})$ organized by a multi-index $\mathbf{i}$
(Xiu, 2010):
$\displaystyle\alpha(\mathbf{\Omega})=\sum_{|\mathbf{i}|\leq
N}c_{\mathbf{i}}\Phi_{\mathbf{i}}(\mathbf{\Omega}).$ (11)
Each component of the multi-index $\mathbf{i}=(i_{1},\cdots,i_{d})$, denotes
the order of the polynomial for that cosmological parameter, e.g.,
$\displaystyle\Phi_{\mathbf{i}}(\mathbf{\Omega})=\phi_{i_{1}}(\Omega_{1})\cdots\phi_{i_{d}}(\Omega_{d}),$
(12)
and so $\phi_{i_{d}}(\Omega_{d})$ is a univariate orthogonal polynomial of
order $i_{d}$.
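Eqs. (11)–(12) can be made concrete with Legendre polynomials as the univariate family. This is an assumption for illustration: the appropriate orthogonal family depends on the prior measure over parameters, and here the parameters are taken to be rescaled to $[-1,1]$:

```python
import numpy as np
from numpy.polynomial import legendre

def pce_basis(omega, orders):
    """One tensor-product basis function, Eq. (12):
    Phi_i(Omega) = prod_d phi_{i_d}(Omega_d), with phi taken to be
    Legendre polynomials (orthogonal on [-1, 1])."""
    out = 1.0
    for x, i_d in zip(omega, orders):
        coeffs = np.zeros(i_d + 1)
        coeffs[i_d] = 1.0
        out *= legendre.legval(x, coeffs)   # phi_{i_d}(x)
    return out

def pce_eval(omega, terms):
    """alpha(Omega) = sum_i c_i Phi_i(Omega), Eq. (11), for a list of
    (coefficient, multi-index) terms."""
    return sum(c * pce_basis(omega, mi) for c, mi in terms)

# Toy expansion in two parameters: 1 + 0.5*P1(x0) + 0.2*P1(x0)*P2(x1).
terms = [(1.0, (0, 0)), (0.5, (1, 0)), (0.2, (1, 2))]
val = pce_eval((0.3, -0.4), terms)
```

In practice the coefficients $c_{\mathbf{i}}$ are obtained by regression (e.g. with Chaospy), not written down by hand as above.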
While this is in principle a decomposition into a combinatorially large space
of coefficients $c_{\mathbf{i}}$, it is known to be a sparse representation
(Blatman & Sudret, 2008, 2011), and there exist many algorithms (and numerical
libraries) optimized to perform regression over this space and obtain values
for the coefficients. We use the package Chaospy (Feinberg & Langtangen, 2015;
Feinberg et al., 2018) to perform the decomposition and subsequent regression.
Note that since the parameter dependence of the principal components is given
by a combination of polynomials, our model in principle has an analytic
dependence on cosmology, redshift, and bias. Since the coefficients are
determined via regression, a PCE emulator does not recover the input data
exactly. However, the tests conducted in section 6 indicate that this drawback
is not an issue.
In total, the hyperparameters in the model are:
1.
The number of principal components used, $N_{\mathrm{PC}}$.
2.
The maximum order of the multi-index $|\mathbf{i}|$. In practice we separately
optimize over the maximum polynomial order of each individual parameter
$i_{d}$, with $i_{d}\leq 4$.
As mentioned previously, we restrict ourselves to $N_{\mathrm{PC}}=2$, as this
is sufficient to capture over 99 per cent of the variance in each basis
spectrum. To optimize over the polynomial orders $i_{d}$, we run a simple grid
search across the aforementioned values for the seven $w$CDM parameters
$\mathbf{\Omega}=(\Omega_{b}h^{2},\Omega_{c}h^{2},\sigma_{8},H_{0},n_{s},N_{\mathrm{eff}},w)$
and evaluate our results on the Aemulus test suite. We select the set of
orders that minimizes global error across all test boxes and snapshots. We
describe the tests of this optimized emulator below.
Figure 4: Coefficients of $\mathrm{PC}_{1}^{XY}(k)$ for the first three basis
spectra as a function of $\sigma_{8}$, colored by redshift. The coefficients
vary smoothly for all redshifts as $\sigma_{8}$ is varied. It is the
dependence of these coefficients that we emulate via PCE as a function of
cosmology and redshift. The panels look similar for the remaining basis
spectra. Figure 5: Emulation residuals for basis spectra. _Lower left
triangle:_ the fractional error obtained for each basis spectrum when compared
to the measurements averaged from each set of boxes in the test suite. _Upper
right triangle:_ the relative size of the emulator residuals compared to the
total halo–halo spectrum measured for a fiducial halo sample. In each panel,
the dark blue curves are the mean residuals across all redshifts and test
boxes, the red curves report the median residual error across the test suite
as a function of redshift, and the black curves report the expected sample
variance at the volume of an Aemulus training box.
## 6 Results
### 6.1 Analysis of emulator residuals
A crucial step in producing viable emulators of cosmological observables is
characterizing the accuracy of the emulation scheme. We use the Aemulus set of
test boxes to assess the performance of the scheme described in the previous
section. The test boxes span seven points in cosmological parameter space,
each with five independent realizations of that cosmology. We use the basis
spectra averaged over the five realizations at each test cosmology as
reference quantities to understand the errors induced in the emulation
procedure as a function of
scale, across parameter space.
We report the accuracy of our optimized PCE emulator for the basis spectra
over the range $0.1\leq k\leq 1.0\,h\,{\rm Mpc}^{-1}$ in the lower left panel
of Fig. 5. Across most redshift bins in the test suite and for most basis
spectra we achieve better than 1 per cent accuracy in the test set. At $z=0$
we observe worse performance, however this can be attributed to numerical
difficulties in computing the LPT spectra at $z=0$ at small scales, as can be
seen in Fig. 2. As there is little cosmological information in the very low
redshift universe, we do not consider this to be a significant issue. Indeed,
our additional validation tests support that the model has sufficient accuracy
to analyse current survey data.
Adopting a fiducial set of bias parameters corresponding to a halo sample of
$12.5\leq\log_{10}\left(\frac{M}{h^{-1}M_{\odot}}\right)\leq 13$, we compute
the emulator residuals for each basis spectrum relative to the _total_
$P^{hh}(k)$. The results are shown in the upper right triangle of Fig. 5. The
individual basis spectrum error rarely exceeds one per mille of the total power.
This implies that the slightly larger errors for cubic basis spectra shown in
Fig. 5 are sub-leading relative to the total signal we expect to model.
### 6.2 Fitting assembly bias and baryons
Beyond samples of fixed halo mass, the general bias expansion in Eq. 1 should
also be able to describe the clustering statistics of more complex tracer
populations. It is well known that haloes of a fixed mass bin exhibit
different clustering properties depending on whether they are sub-selected on
certain properties. This effect, originally discovered in the context of
assembly history, and generally known as assembly bias or secondary bias, has
been observed for selections on concentration, occupation, local environment,
spin, and other secondary halo properties (Wechsler et al., 2002; Gao et al.,
2005; Wechsler et al., 2006; Dalal et al., 2008; Mao et al., 2018; Salcedo et
al., 2018; Mansfield & Kravtsov, 2020).
Figure 6: Emulator predictions at fixed cosmology for halo samples exhibiting
concentration (top panels) and spin (bottom panels) assembly bias. Central
panels show the signal from the halo sample with no selection on a secondary
parameter. The left and right panels show samples split on the lowest and
highest quartiles of the relevant secondary bias parameter, respectively.
Shaded bands show the regions where residuals are within 2 per cent and 1 per
cent respectively, while the dashed envelope shows the expected cosmic
variance for a sample with $V\approx 5.8(h^{-1}{\rm Gpc})^{3}$. The spectra
are measured at $z=0.7$ and the fit is performed with the data vector out to
$k_{\rm max}=0.6\,h{\rm Mpc}^{-1}$.
As a test of our model, we construct halo catalogs with different amounts of
concentration and spin secondary bias, splitting the sample by quartile. The
magnitude of the effect varies differently as a function of mass for each
secondary bias parameter. Thus, we adopt separate halo mass bins for each
parameter, in a regime where we have both reliable estimates of the secondary
quantities and know that the secondary bias effect is not drastic, following
fig. 4 of Sato-Polito et al. (2019). The mass range
$12\leq\log_{10}\left(\frac{M}{h^{-1}M_{\odot}}\right)\leq 12.5$ was used to
build samples contaminated with concentration bias, and
$12.5\leq\log_{10}\left(\frac{M}{h^{-1}M_{\odot}}\right)\leq 13$ for spin
bias. We consider the highest and lowest quartile samples in both
concentration and spin, as well as a sample with no secondary bias, sub-
sampled to the same number density as the samples contaminated with secondary
bias. We additionally do not subtract the shot-noise contribution from
measured spectra, and opt instead to include it in our covariance matrix as
detailed in Eqn. 14.
Using the emulator for the basis spectra evaluated at the cosmology of these
test boxes, we jointly fit the halo–halo and halo–matter spectra
$\\{P_{hh},P_{hm}\\}$ with five parameters:
$b_{i}=\\{b_{1},b_{2},b_{s^{2}},b_{\nabla^{2}},\bar{n}^{-1}\\}$. We minimize
the $\chi^{2}$ between the model and the mean of five simulations, assuming a
disconnected covariance for the observables as described in Eq. 14, with a
combined volume of $V=5\times(1.05\,h^{-1}{\rm Gpc})^{3}$. The resulting fits
are shown in
Fig. 6.
We fit the spectra to a maximum scale of $k_{\rm max}=0.6\,h\,{\rm Mpc}^{-1}$.
For most panels, we see that the hybrid $N$-body/Lagrangian bias model can
jointly describe the clustering and lensing spectra to within 1 per cent down
to scales even smaller than employed for the model fit. At large scales, the
lowest spin assembly bias bin seems to be systematically higher by at most 10
per cent. Changing the $k_{\rm max}$ of the fit down to $0.2\,h\,{\rm
Mpc}^{-1}$ does not qualitatively alleviate the large-scale discrepancies. We
observe similar behavior if the average of the basis spectra from this
cosmology are used instead of the emulator, implying this is not an issue of
the emulator and could perhaps be attributed to large-scale noise. Another
possibility is that a second-order Lagrangian bias model is unable to fully
capture the effects of spin secondary bias, but we leave this investigation to
future work.
In Fig. 7 we show the reduced $\chi^{2}$ for the fits to the samples split on
concentration. We see that the goodness of fit degrades significantly past
$k\simeq 0.6h\,{\rm Mpc}^{-1}$ for some subsamples. The fits to smaller
$k_{\rm max}$ have $\chi^{2}/{\rm d.o.f.}\lesssim 1.5$. Note that in these
tests we use the emulator at a volume that is significantly larger than the
boxes it was trained on, and the covariance matrices do not have any
contributions due to the emulator uncertainty. If we instead use the mean
basis spectra, the $\chi^{2}/{\rm d.o.f.}$ crosses the $\chi^{2}/{\rm d.o.f.}\sim
1$ threshold at $k_{\rm max}\sim 0.6h\,{\rm Mpc}^{-1}$ and grow significantly
afterwards, signalling a potential breakdown of the applicability of this
Lagrangian bias model to these samples.
Figure 7: The goodness of fit $\chi^{2}/{\rm d.o.f.}$ from increasing $k_{\rm
max}$ for the halo sample selected on concentration quartiles, using the
emulator as a model. Note the significant degradation of the goodness of fit
for the subsample split on the lowest quartile after $k_{\rm max}=0.6\,h\,{\rm Mpc}^{-1}$.
Baryonic physics is known to impact the statistics of biased tracers at the
scales we are considering (White, 2004; Zhan & Knox, 2004; Chisari et al.,
2019; van Daalen et al., 2020). In our model, the $\langle
1,\nabla^{2}\delta\rangle$ basis spectrum should have the scale dependence
required to capture the first-order impacts of baryons (Lewandowski et al.,
2015). In order to test this, we produce mock ‘baryonified’ spectra using the
fitting function of van Daalen et al. (2020), which is obtained from analysis
of a comprehensive suite of hydrodynamic simulations. We compare the fitting
function to two parametrizations for the impact of baryons:
1.
Including terms that scale as the basis functions $b_{\nabla^{2}}\langle
1,\nabla^{2}\delta\rangle$ and
$b_{1}b_{\nabla^{2}}\langle\delta,\nabla^{2}\delta\rangle$.
2.
Same as above, but substituting the basis functions with the approximation
$\langle X,\nabla^{2}\delta\rangle\simeq-k^{2}\langle X,1\rangle$.
The results of this test are shown in Figure 8. While the baryonic suppression
factors presented by the two parametrizations differ, in the bottom panel we
see that both capture the effects of baryons to within 1 per cent out to
$k\approx 0.8\,h\,\mathrm{Mpc}^{-1}$, whereas not including the contributions
leads to errors larger than 1 per cent at $k\approx
0.2\,h\,\mathrm{Mpc}^{-1}$.
Additionally, our framework can simultaneously treat the effects of finite
halo size and baryonic physics. As both are captured by the same basis
spectra, this corresponds to treating the halo tracer as having one higher
derivative coefficient $b_{\nabla^{2}}$ and the matter tracer in the $P^{hm}$
correlation as having a separate coefficient $b^{\prime}_{\nabla^{2}}$, while
keeping all other bias parameters equal to zero.
Figure 8: Higher derivative bias terms and their comparison to the baryonic
physics fitting function of van Daalen et al. (2020). The top panel shows the
fitting function, the basis spectrum as measured in the $N$-body simulations
and the approximation we employ in the text. In the lower panel we show
residuals between the different treatments and the fitting function. The blue
curve in the lower panel shows the difference between the unprocessed dark
matter power spectrum and the fitting function. The green curve is the
approximation to the higher derivative fitting functions that is implemented
in our analyses.
### 6.3 Recovering input cosmology
In this section we present an increasingly complex series of tests to ensure
our emulator can be used for cosmological inference, i.e., to demonstrate that
it can recover input cosmological parameters in an unbiased way. The general
structure of the analyses we run is as follows.
The input data-vectors will be the joint halo–halo and halo–matter power
spectra $\mathbf{d}=\\{P_{hh}(k),P_{hm}(k)\\}$. We assume a Gaussian
likelihood in the residuals between $\mathbf{d}$ and the emulator prediction
at a cosmology $\mathbf{x}(\mathbf{\Omega})$:
$\log\mathcal{L}(d|\mathbf{\Omega})\propto-(\mathbf{d}-\mathbf{x}(\mathbf{\Omega}))^{T}\mathbf{C}^{-1}(\mathbf{d}-\mathbf{x}(\mathbf{\Omega})).$
(13)
We adopt a baseline covariance matrix that includes only dependence on the
two-point functions of the tracer density field, known as the disconnected
contribution (Li et al., 2019). The result is a block-diagonal matrix with
format
$\mathbf{C}(k,k^{\prime})\equiv\frac{2\pi^{2}\delta_{k,k^{\prime}}}{k^{2}\Delta
k\,V}\times\begin{cases}2P_{hh}^{2}(k),&\text{ for }hh\times hh\\\
2P_{hh}(k)P_{hm}(k),&\text{ for }hh\times hm\\\
P_{hh}(k)P_{mm}(k)+P_{hm}^{2}(k),&\text{ for }hm\times hm\end{cases}$ (14)
for each sub-block. We use non-linear power spectra and $P_{hh}$ includes the
shot-noise contribution. At the smaller scales we probe in the resulting
analyses, the purely disconnected approximation is known to fail and off-
diagonal (connected) components become increasingly important (Meiksin &
White, 1999; Scoccimarro et al., 1999; Cooray & Hu, 2001; Mohammed et al.,
2016; Lacasa, 2018). The intent of this paper is not to conclusively quantify
the information content available at small scales. Rather, we would like to
ensure that the emulator is an unbiased model when pushing to such small
scales. Therefore, we consider the form of the covariance in Eqn. 14 to be a
sufficient baseline to carry out our analyses. We assess its performance in
more detail in Appendix A. As we will discuss in more detail in section 6.3.2,
the approximation of taking only the disconnected contribution neglects two
forms of error: that arising from the connected contribution, and model error
from the emulator itself. We discuss the contribution to the covariance from
emulator error in Appendix A, and find that in the regime under which our
tests are carried out, its inclusion is important in achieving unbiased
constraints.
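Eqs. (13)–(14) translate directly into code; a sketch building the block-diagonal disconnected covariance and the Gaussian log-likelihood for toy spectra (the amplitudes and volume are assumed values):

```python
import numpy as np

def disconnected_cov(k, dk, V, P_hh, P_hm, P_mm):
    """Disconnected covariance of Eq. (14) for the joint data vector
    {P_hh(k), P_hm(k)} (sketch; P_hh is taken to include shot noise)."""
    pref = 2 * np.pi**2 / (k**2 * dk * V)
    n = k.size
    C = np.zeros((2 * n, 2 * n))
    idx = np.arange(n)
    C[idx, idx] = pref * 2 * P_hh**2                       # hh x hh block
    C[idx, idx + n] = pref * 2 * P_hh * P_hm               # hh x hm block
    C[idx + n, idx] = C[idx, idx + n]
    C[idx + n, idx + n] = pref * (P_hh * P_mm + P_hm**2)   # hm x hm block
    return C

def log_like(d, x, C):
    """Gaussian log-likelihood of Eq. (13), up to a constant."""
    r = d - x
    return -r @ np.linalg.solve(C, r)

k = np.linspace(0.1, 0.6, 25)
dk = k[1] - k[0]
P_hh, P_hm, P_mm = 2e4 * k**-1.5, 1e4 * k**-1.5, 6e3 * k**-1.5
C = disconnected_cov(k, dk, 1050.0**3, P_hh, P_hm, P_mm)
```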
We sample the posterior distributions of the model parameters via Markov Chain
Monte Carlo (MCMC), using emcee (Goodman & Weare, 2010; Foreman-Mackey et al.,
2013). Chains are run with either $N=64$ or $N=128$ walkers across 8000 (4000)
steps respectively. We checked that these values ensure converged chains for
the simulated likelihood analyses we run; the posteriors are not altered
significantly by doubling the length or number of walkers. We adopt wide
uniform priors on the bias parameters,
$b_{i}\sim U(-5,5),$ (15)
and uniform priors surrounding the boundaries of the Aemulus training suite,
specified in Table 1.
Parameter | Range
---|---
$\Omega_{b}h^{2}$ | [0.0207 , 0.0237]
$\Omega_{c}h^{2}$ | [0.101 , 0.132]
$w_{0}$ | [-1.399 , -0.566]
$n_{s}$ | [0.928 , 0.997]
$\sigma_{8}$ | [0.575 , 0.964]
$H_{0}$ | [61.69 , 74.77]
$N_{\mathrm{eff}}$ | [2.62 , 4.28]
Table 1: Boundaries of the cosmological parameters of simulations spanned by
the Aemulus training suite. These are the values used as flat priors for
cosmological parameters.
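A minimal sketch of the flat priors of Eq. (15) and Table 1, as they would enter an MCMC log-probability (the dictionary keys are just labels; the emcee machinery itself is omitted):

```python
import numpy as np

# Flat prior boundaries: Table 1 for cosmology; biases get b_i ~ U(-5, 5).
COSMO_PRIORS = {
    "Obh2": (0.0207, 0.0237),
    "Och2": (0.101, 0.132),
    "w0": (-1.399, -0.566),
    "ns": (0.928, 0.997),
    "sigma8": (0.575, 0.964),
    "H0": (61.69, 74.77),
    "Neff": (2.62, 4.28),
}
BIAS_PRIOR = (-5.0, 5.0)

def log_prior(theta, bounds):
    """Sum of uniform log-priors: 0 inside the box, -inf outside (up to
    a constant normalization)."""
    for x, (lo, hi) in zip(theta, bounds):
        if not lo <= x <= hi:
            return -np.inf
    return 0.0

bounds = list(COSMO_PRIORS.values()) + [BIAS_PRIOR] * 5
```

The full log-probability passed to the sampler would add the Gaussian log-likelihood of Eq. (13) to this prior whenever the prior is finite.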
#### 6.3.1 Synthetic Data
As a first test of the emulator, we perform a simulated likelihood analysis on
a noiseless data vector drawn from the emulator itself. We fit the basis
spectra to a halo sample of mass $12\leq\log_{10}M_{h}/M_{\odot}\leq 12.5$
from one of Aemulus’ test boxes. The cosmology and best-fitting bias values
are used as inputs to the emulator to produce a mock noiseless data-vector. As
the data in this test are not a random draw from a distribution, the exact
form of the covariance matrix does not matter. However, we use the block-
diagonal disconnected covariance of Eqn. 14 with
$V=(1050\,h^{-1}\mathrm{Mpc})^{3}$ so as to replicate an analysis on an
individual Aemulus test box.
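For orientation, the diagonal of a disconnected (Gaussian) auto-spectrum covariance of the kind entering Eqn. 14 can be sketched as follows; the power-spectrum shape and number density are illustrative assumptions, not the paper's inputs.

```python
import numpy as np

# Diagonal Gaussian (disconnected) covariance sketch for a binned auto
# spectrum: Cov(k) = 2 (P(k) + 1/nbar)^2 / N_modes(k).  All numbers below
# are illustrative; Eqn 14 in the text is the full block form.
V = 1050.0 ** 3                 # (h^-1 Mpc)^3, matching one Aemulus test box
nbar = 1e-4                     # assumed tracer number density, (h/Mpc)^3

k_edges = np.linspace(0.1, 1.0, 19)
k = 0.5 * (k_edges[1:] + k_edges[:-1])
dk = np.diff(k_edges)

P = 1e4 * (k / 0.1) ** -1.5                   # toy halo power spectrum
N_modes = V * k**2 * dk / (2.0 * np.pi**2)    # Fourier modes per k bin
cov_diag = 2.0 * (P + 1.0 / nbar) ** 2 / N_modes
```

The $1/N_{\mathrm{modes}}$ scaling is what makes sample variance shrink toward small scales, eventually dropping below the emulator error discussed in section 6.3.2.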
The results of this first mock analysis are shown in Fig. 9. The three-
parameter analysis constrains all cosmological and bias parameters in an
unbiased fashion, indicating that there are no issues in fitting the emulator
to itself at this volume. We also conduct a seven-parameter analysis over the
full set of $w$CDM parameters. This returns unbiased posteriors relative to
the true input values; however, it is hard to constrain all $w$CDM parameters
using a single halo sample at the volume of a single Aemulus box. For this
reason, several of the cosmological parameters simply saturate the priors and
remain unconstrained.
#### 6.3.2 Halo samples from the test suite
Figure 9: Cosmological parameter inference using the emulator where the data
are a noiseless draw from itself. We vary the subset of parameters
$\omega_{c},\,\sigma_{8},$ and $H_{0}$, using a Gaussian likelihood and purely
disconnected covariance with volume $V=(1.05)^{3}(h^{-1}\mathrm{Gpc})^{3}$.
The fiducial values used to generate the data vector are shown in the dashed
lines. The bias parameter posteriors are equally unbiased and Gaussian, but
omitted from the figure for aesthetic purposes. Figure 10: Cosmological
parameter inference using the emulator fit to the mean of five realizations at
the seven Aemulus test cosmologies. We vary the cosmological parameters
$\omega_{c},\,\sigma_{8},$ and $H_{0}$, using a disconnected covariance with
volume $V=(1.05)^{3}(h^{-1}\mathrm{Gpc})^{3}$, including a contribution
arising from correlated emulator residuals. The contours are shown in the
space of differences relative to the true cosmology of each box.
A subsequent test we perform is inference on halo catalogs drawn from the
Aemulus test suite. We refer to Aemulus I (DeRose et al., 2019b) for details
on how the halo finding procedure was done. This fiducial halo sample contains
the mass bin $13\leq\log_{10}\left(\frac{M_{h}}{h^{-1}M_{\odot}}\right)\leq
13.5$ at $z=0.4$.
We run a suite of chains for this halo sample, to assess emulator performance
in terms of inferring cosmological parameters. We measure the halo–halo and
halo–matter power spectra for each independent test box across the seven
different test cosmologies. The data vector is averaged over the five
independent realizations from the test suite. Averaging over realizations
allows us to assess emulator biases in a manner less susceptible to projection
effects. It also lets us study the cosmology dependence of the emulator and
the interplay between the bias coefficients of our model and the cosmological
parameters.
Using solely the purely disconnected covariance matrix in Eqn. 14 leads to
strong biases in inferred cosmological parameters, despite all residuals being
smaller than 1 per cent as a function of scale. This can be understood by the
fact that sample variance at small scales will eventually become smaller than
the $1-2$ per cent emulator error observed in Fig. 5 (see also Fig. 13).
However, the aforementioned figure allows us to estimate the emulator
uncertainty as a function of scale. This can then be included as a separate
contribution to the covariance matrix. We detail how this is done in Appendix
A. (All contour plots shown from this point forward include the effects of
emulator error unless stated otherwise.)
The result of the test is shown in Fig. 10, with the full set of contours
shown in Fig. 17. The inferred values of $\omega_{c}$ and $\sigma_{8}$ scatter
around their true values, while $H_{0}$ is recovered to within one standard
deviation for most cosmologies, albeit biased slightly high. However,
we note these tests are conservative, as they neglect the contribution to the
covariance matrix arising from shape noise, the lensing equivalent of shot-
noise that would contribute to the $hm\,\times\,hm$ term of the covariance
matrix. Given the conservative nature of this test we deem the emulator
performance to be sufficient and continue with the final and most stringent
test we consider in this work.
#### 6.3.3 A redMaGiC sample from an independent simulation
$\log M_{\mathrm{min}}$ | $\sigma_{\log M}$ | $f_{c}$ | $\log M_{0}$ | $\log M_{1}^{{}^{\prime}}$ | $\alpha$
---|---|---|---|---|---
12.1 | 0.4 | 0.13 | 11.45 | 13.73 | 1.48
Table 2: HOD parameters used to populate the redMaGiC sample described in
section 6.3.3.
So far, we have reported tests performed on samples that originate either from
the emulator itself or from the same suite of simulations used to construct
it. It is also important that the model is useful for inference on spectra
measured from tracer samples generated by independent methods, both in how
halo samples are defined and the underlying $N$-body simulation used. For
example, in Modi et al. (2020) it was shown that this hybrid Lagrangian bias
model can successfully fit galaxy power spectra produced from a halo
occupation distribution (HOD; see e.g. Zheng et al. 2005).
We perform a final test: a simulated likelihood analysis with spectra produced
from populating an independent $N$-body simulation with an HOD that matches
the density and clustering properties of redMaGiC galaxies (Rozo et al.,
2016). redMaGiC galaxies are the primary photometric Luminous Red Galaxy
sample used in current and future weak lensing surveys (Elvin-Poole et al.,
2018).
The HOD parametrization we adopt is an extension of the model presented in
Zheng et al. (2007), allowing for the central occupation at high mass to be
less than unity:
$\displaystyle\langle
N_{\mathrm{cen}}(M)\rangle=\frac{f_{c}}{2}\left[1+\mathrm{erf}\left(\frac{\log
M-\log M_{\mathrm{min}}}{\sigma_{\log M}}\right)\right],$ (16)
$\displaystyle\langle
N_{\mathrm{sat}}(M)\rangle=\frac{1}{2}\left[1+\mathrm{erf}\left(\frac{\log
M-\log M_{\mathrm{min}}}{\sigma_{\log
M}}\right)\right]\left(\frac{M-M_{0}}{M_{1}^{\prime}}\right)^{\alpha}.$
(17)
The HOD parameters corresponding to the redMaGiC samples used can be found in
Table 2, and are derived from a redMaGiC sample selected from simulations
similar to those presented in DeRose et al. (2019a).
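The occupation functions of Eqns. 16 and 17 can be evaluated directly with the Table 2 parameters; this is a sketch using Python's `math.erf`, with the satellite occupation set to zero below $M_{0}$ (an assumption about the intended cutoff behavior).

```python
from math import erf, log10

# HOD parameters from Table 2 (redMaGiC sample): Zheng et al. (2007)-style
# occupation with a central incompleteness factor f_c.
log_Mmin, sigma_logM, f_c = 12.1, 0.4, 0.13
log_M0, log_M1p, alpha = 11.45, 13.73, 1.48

def n_cen(M):
    """Mean central occupation, Eqn 16; M in h^-1 M_sun."""
    return 0.5 * f_c * (1.0 + erf((log10(M) - log_Mmin) / sigma_logM))

def n_sat(M):
    """Mean satellite occupation, Eqn 17; set to zero below M0 (assumed)."""
    M0, M1p = 10.0 ** log_M0, 10.0 ** log_M1p
    if M <= M0:
        return 0.0
    return (0.5 * (1.0 + erf((log10(M) - log_Mmin) / sigma_logM))
            * ((M - M0) / M1p) ** alpha)
```

At high mass the central occupation saturates at $f_{c}=0.13$ rather than unity, which is the extension beyond the original Zheng et al. (2007) parametrization.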
We paint redMaGiC galaxies onto halo catalogs measured from the UNIT
simulations (Chuang et al., 2019) at $z\approx 0.59$, a redshift different
from the Aemulus snapshots. A UNIT realization boasts a comparable volume to
Aemulus of $V=1\,(h^{-1}\mathrm{Gpc})^{3}$ at a significantly higher number of
particles, $N=(4096)^{3}$. Every UNIT simulation has two realizations with
opposite phases and fixed amplitudes. Averaging two-point statistics measured
from these paired–fixed realizations leads to very high sample variance
suppression at large scales, comparable to averaging $\sim 150$ simulations of
the same volume.
The cosmological parameter constraints corresponding to this test are shown in
Fig. 11. The emulator recovers the input cosmology of UNIT within its $68$ per
cent contours. Although this test is idealized, the constraints inferred are
promising if they translate even moderately well to a realistic analysis: a
2.5 per cent constraint on $\omega_{c}$, a 0.5 per cent constraint on
$\sigma_{8}$, and a 1.6 per cent constraint on $H_{0}$. In a realistic lensing
analysis one would expect these quantities to be degraded due to the inclusion
of shape noise and only having access to two-dimensional lensing maps instead
of the 3D matter field. Nevertheless, even a 100 per cent degradation of these
constraints due to the aforementioned complications would still result in
highly competitive measurements of these parameters. Note we adopt no priors
beyond the (moderately informative) priors set by the boundaries of the
Aemulus suite.
Figure 11: Cosmological parameter constraints from the redMaGiC sample
constructed from the UNIT simulations. The true cosmological parameters and
the best-fit bias parameters assuming the true cosmology are shown in the
dashed lines. All parameters are recovered to well within the one-sigma
errors.
The simulated likelihood analysis performed on this sample additionally allows
us to quantify both model and emulator errors in a space closer to the
observations that will be carried out in the near future. As redMaGiC galaxies
are commonly used as lens samples in galaxy–galaxy lensing analyses, we can
translate the $P^{hh},P^{hm}$ residuals to those in the observables
$C_{\ell}^{gg},C_{\ell}^{g\kappa}$. We assume a redshift distribution $n(z)$
for redMaGiC galaxies consistent with data (Elvin-Poole et al., 2018) and
fiducial parametrizations for the source sample that are consistent with those
that will be achieved in future imaging surveys (Mandelbaum et al., 2018). For
a redMaGiC sample spanning $z=[0.45,0.6]$ we present the results in Fig. 12.
The harmonic space observables are calculated assuming the Limber
approximation, with the additional approximation that the residuals between 3D
power spectra do not evolve as a function of redshift. The residuals stay
within one per cent out to $\ell\approx 1000$. If we instead use residuals
from fitting the emulator at fixed cosmology to the same sample out to $k_{\rm
max}=1.0\,h{\rm Mpc}^{-1}$ the residuals remain within ten per cent out to
$\ell_{\rm max}=2000$, at the cost of worse performance at large scales. This
indicates that the combined emulator and model error remain well under control
for the analysis of current galaxy–galaxy lensing datasets.
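The Limber projection used above can be sketched in a few lines; the redshift kernel, distance range, and toy power spectrum here are assumptions for illustration, not the paper's $n(z)$ or emulator spectra.

```python
import numpy as np

# Limber-approximation sketch for a galaxy auto spectrum:
#   C_ell = int dchi [dn/dchi]^2 / chi^2 * P(k = (ell + 1/2)/chi).
# Kernel, distances, and P(k) below are illustrative assumptions.
def limber_cl(ells, chi, dndchi, P_of_k):
    dchi = chi[1] - chi[0]                    # uniform grid spacing
    cls = []
    for ell in ells:
        k = (ell + 0.5) / chi
        integrand = dndchi**2 / chi**2 * P_of_k(k)
        cls.append(np.sum(integrand) * dchi)  # simple quadrature
    return np.array(cls)

chi = np.linspace(1100.0, 1500.0, 200)        # comoving distance, h^-1 Mpc
dndchi = np.exp(-0.5 * ((chi - 1300.0) / 80.0) ** 2)
dndchi /= np.sum(dndchi) * (chi[1] - chi[0])  # normalize the kernel
P_toy = lambda k: 1e4 * (k / 0.1) ** -1.5     # toy 3D power spectrum
ells = np.arange(50, 2001, 50)
cls = limber_cl(ells, chi, dndchi, P_toy)
```

The same mapping $k=(\ell+1/2)/\chi$ is what allows fractional residuals in $P(k)$ to be carried over to residuals in $C_{\ell}$ under the approximation that they do not evolve with redshift.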
Figure 12: Residuals of the emulator fit to the redMaGiC sample in the space
of a projected analysis. Residuals are shown for the range $\ell\in[50,2000]$.
The redshift distributions of this analysis are consistent with those of
current and upcoming surveys. The dashed envelope corresponds to the sample
variance contribution in the absence of noise with sky coverage consistent
with upcoming surveys and angular binning of $\Delta\ell=50$. That is,
shot/shape noise will only increase the size of this envelope. The light gray
and dark gray bands correspond to 2 and 1 per cent error bands, respectively.
## 7 Conclusions
In this work we have built an emulator to study the cosmology dependence of
the model of Modi et al. (2020) for the two-point statistics of biased
tracers. The model combines $N$-body simulations with a symmetries-based bias
expansion to provide accurate predictions beyond the regime of validity of
standard perturbative approaches.
Specifically, we built an emulator for the cosmology and redshift dependence
of the ten non-linear basis functions that span this model. We use
measurements from the Aemulus suite of simulations, which has been designed to
enable the construction of emulators that satisfy the modelling requirements
of upcoming cosmic surveys. The model and emulation techniques used are
general; there are no limitations to extending the range of validity given the
availability of an improved suite of simulations.
We find that:
1.
The emulator recovers each basis spectrum to $\lesssim$ 1 per cent accuracy
across a wide range of scales, $0.1<k/\left(h^{-1}{\rm Mpc}\right)\leq 1.0$,
and redshifts, $0\leq z\leq 2$.
2.
The Lagrangian bias model is capable of capturing the clustering and lensing
statistics of samples imbued with non-trivial amounts of secondary bias and
contamination from baryonic physics.
3.
The test set used to validate the emulator can also be used to calibrate its
‘theoretical uncertainty’. This allows us to include contributions to the
covariance matrix of an analysis related to model error, which cannot be
neglected when pushing to small scales.
4.
The emulator, as constructed, can recover unbiased cosmological parameters
from realistic simulated likelihood analyses.
These findings indicate that our emulator is a robust tool that can be readily
applied to analyses of current and even upcoming datasets. The code will be
made publicly available on GitHub and can be integrated with modern sampling
packages such as Cobaya (Torrado & Lewis, 2020). We also point out a few
further directions to be investigated as a result of this work.
First, while the simulations used here are sufficient to obtain per cent level
emulator accuracy, improved simulations will be important for maximizing the
applicability of this model. The biggest immediate limitation of this emulator
is the extent of the cosmological parameter space that it is trained on. We
plan on running simulations over a broader parameter space, including massive
neutrinos, in the near future. Another limiting factor in the current emulator
construction is our ability to match the basis spectra measured from our
simulations to their perturbation theory analogs at low $k$. Running larger
simulation volumes, or implementing a method for sample variance mitigation
such as that presented in Chartier et al. (2020), would ameliorate this issue
by reducing noise in the $N$-body measurements. This will allow them to be
matched more easily to the perturbation theory predictions at scales that are
still safely within perturbative reach. Mismatches in the linear growth
predictions from $N$-body simulations also limit the accuracy of the large
scale matching. Simulations with more stringent time-stepping criteria would
reduce these inaccuracies, at the cost of increased run-time. For this reason,
methods that explicitly enforce linear growth on large scales may be worth
exploring in the future (Feng et al., 2016; Howlett et al., 2015). Finally,
the accuracy of the model for redshift evolution of the basis spectra in the
current emulator is limited by the number of snapshots saved in the Aemulus
suite. For this reason, saving snapshots with finer redshift resolution out to
higher redshifts will be a priority when running future simulations to upgrade
the current emulator.
While in this paper we have restricted ourselves to predictions of survey
observables in Fourier space, one could use this same field-level approach to
measure configuration-space correlation statistics instead. The model employed
should also be able to describe the statistics of biased tracers at the field
level, beyond two-point statistics. This includes both field-level
characterizations of the Lagrangian bias model similarly to what was
investigated in Schmittfull et al. (2019) and higher order functions such as
the bispectrum or the collapsed tri-spectra that form the connected component
of covariance matrices.
The field-level approach to bias modelling described in Schmittfull et al.
(2019) was recently extended to redshift space (Schmittfull et al., 2020). For
our emulator to be used to describe the statistics of 3D galaxy clustering in
spectroscopic galaxy surveys, it would need to be extended to redshift space
in a similar manner. Alternatively, we note that the bias parameters in this
model are equivalent to those of the Lagrangian perturbation theory of Chen et
al. (2020a, b). This suggests one could perform a joint analysis that combines
perturbation theory for describing the redshift-space clustering, where the 3D
nature of the measurements allow tight constraints even on quasi-linear
scales, and an emulator for describing projected statistics, which need to
extend to smaller scales in order to beat down sample variance. In addition to
providing a large dynamic range and sensitivity to both metric potentials, the
combination of measurements would help to break bias parameter degeneracies
and thus improve cosmological constraints.
The second release of the Aemulus suite, Aemulus-$\nu$, will include two-fluid
simulations that capture the effects of massive neutrinos on the matter
density field. The techniques described in this paper can be translated to
this new set of simulations to construct an emulator that can be used to
constrain the sum of neutrino masses, one of the key science drivers of
ongoing and future cosmological surveys.
We leave these extensions to future work.
## Acknowledgements
We thank Simone Ferraro and Anže Slosar for helpful comments on a draft of the
paper and Sean McLaughlin for many helpful discussions. We are grateful to the
Aemulus collaboration for making the simulation suite used here publicly
available. This work was supported in part by U.S. Department of Energy
contracts to SLAC (DE-AC02-76SF00515) and by Stanford University. N.K. thanks
the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF
Cybertraining Grant #1829740, the Brinson Foundation, and the Moore
Foundation. S.C. is supported by the National Science Foundation Graduate
Research Fellowship (Grant No. DGE 1106400) and by the UC Berkeley Theoretical
Astrophysics Center Astronomy and Astrophysics Graduate Fellowship. M.W. is
supported by the U.S. Department of Energy and the NSF. This research has made
use of NASA’s Astrophysics Data System and the arXiv preprint server.
Some of the computing for this project was performed on the Sherlock cluster.
We would like to thank Stanford University and the Stanford Research Computing
Center for providing computational resources and support that contributed to
these research results.
Calculations and figures in this work have been made using nbodykit (Hand et
al., 2018), GetDist (Lewis, 2019), and the SciPy Stack (Harris et al., 2020;
Virtanen et al., 2020; Hunter, 2007).
## Data Availability
The data underlying this article are available in the Aemulus Project’s
website.
## References
* Abbott et al. (2018) Abbott T., et al., 2018, Phys. Rev. D, 98, 043526
* Abidi & Baldauf (2018) Abidi M. M., Baldauf T., 2018, JCAP, 07, 029
* Aghamousa et al. (2016) Aghamousa A., et al., 2016, arXiv e-prints
* Alam et al. (2017) Alam S., Miyatake H., More S., Ho S., Mandelbaum R., 2017, Mon. Not. Roy. Astron. Soc., 465, 4853
* Angulo & Pontzen (2016) Angulo R. E., Pontzen A., 2016, Mon. Not. Roy. Astron. Soc., 462, L1
* Aviles & Banerjee (2020) Aviles A., Banerjee A., 2020, J. Cosmology Astropart. Phys., 2020, 034
* Aviles et al. (2020) Aviles A., Valogiannis G., Rodriguez-Meza M. A., Cervantes-Cota J. L., Li B., Bean R., 2020, arXiv e-prints, p. arXiv:2012.05077
* Bagla (2005) Bagla J. S., 2005, Curr. Sci., 88, 1088
* Baldauf et al. (2016a) Baldauf T., Mirbabayi M., Simonović M., Zaldarriaga M., 2016a, arXiv e-prints
* Baldauf et al. (2016b) Baldauf T., Schaan E., Zaldarriaga M., 2016b, JCAP, 03, 007
* Bartelmann & Schneider (2001) Bartelmann M., Schneider P., 2001, Phys. Rept., 340, 291
* Baumann et al. (2012) Baumann D., Nicolis A., Senatore L., Zaldarriaga M., 2012, J. Cosmology Astropart. Phys., 2012, 051
* Bernardeau et al. (2002) Bernardeau F., Colombi S., Gaztanaga E., Scoccimarro R., 2002, Phys. Rept., 367, 1
* Bianchini et al. (2015) Bianchini F., et al., 2015, The Astrophysical Journal, 802, 64
* Blas et al. (2014) Blas D., Garny M., Konstandin T., 2014, J. Cosmology Astropart. Phys., 2014, 010
* Blatman & Sudret (2008) Blatman G., Sudret B., 2008, Comptes Rendus Mecanique, 336, 518
* Blatman & Sudret (2011) Blatman G., Sudret B., 2011, Journal of Computational Physics, 230, 2345
* Carrasco et al. (2012) Carrasco J. J. M., Hertzberg M. P., Senatore L., 2012, JHEP, 09, 082
* Chartier et al. (2020) Chartier N., Wandelt B., Akrami Y., Villaescusa-Navarro F., 2020, arXiv e-prints, p. arXiv:2009.08970
* Chen et al. (2020a) Chen S.-F., Vlah Z., Castorina E., White M., 2020a, Redshift-Space Distortions in Lagrangian Perturbation Theory (arXiv:2012.04636)
* Chen et al. (2020b) Chen S.-F., Vlah Z., White M., 2020b, JCAP, 07, 062
* Chen et al. (2020c) Chen S.-F., Vlah Z., White M., 2020c, JCAP, 11, 035
* Chisari et al. (2019) Chisari N. E., et al., 2019, Open J. Astrophys., 2, 4
* Chuang et al. (2019) Chuang C.-H., et al., 2019, Mon. Not. Roy. Astron. Soc., 487, 48
* Chudaykin et al. (2020) Chudaykin A., Ivanov M. M., Simonović M., 2020, arXiv e-prints
* Cooray & Hu (2001) Cooray A., Hu W., 2001, The Astrophysical Journal, 554, 56–66
* Crocce et al. (2006) Crocce M., Pueblas S., Scoccimarro R., 2006, Mon. Not. Roy. Astron. Soc., 373, 369
* Crocce et al. (2012) Crocce M., Pueblas S., Scoccimarro R., 2012, 2LPTIC: 2nd-order Lagrangian Perturbation Theory Initial Conditions (ascl:1201.005)
* Dalal et al. (2008) Dalal N., White M., Bond J. R., Shirokov A., 2008, Astrophys. J., 687, 12
* DeRose et al. (2019a) DeRose J., et al., 2019a, arXiv e-prints, p. arXiv:1901.02401
* DeRose et al. (2019b) DeRose J., et al., 2019b, Astrophys. J., 875, 69
* Desjacques et al. (2018) Desjacques V., Jeong D., Schmidt F., 2018, Phys. Rept., 733, 1
* DiPompeo et al. (2017) DiPompeo M. A., Hickox R. C., Eftekharzadeh S., Myers A. D., 2017, MNRAS, 469, 4630
* Doré et al. (2015) Doré O., et al., 2015, Cosmology with the SPHEREX All-Sky Spectral Survey (arXiv:1412.4872)
* Doré et al. (2019) Doré O., et al., 2019, WFIRST: The Essential Cosmology Space Observatory for the Coming Decade (arXiv:1904.01174)
* Elvin-Poole et al. (2018) Elvin-Poole J., et al., 2018, Phys. Rev. D, 98, 042006
* Favole et al. (2020) Favole G., et al., 2020, Mon. Not. Roy. Astron. Soc., 497, 5432
* Feinberg & Langtangen (2015) Feinberg J., Langtangen H. P., 2015, Journal of Computational Science, 11, 46
* Feinberg et al. (2018) Feinberg J., Eck V. G., Langtangen H. P., 2018, SIAM Journal on Scientific Computing, 40, A199
* Feng et al. (2016) Feng Y., Chu M.-Y., Seljak U., McDonald P., 2016, MNRAS, 463, 2273
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
* Fujita & Vlah (2020) Fujita T., Vlah Z., 2020, J. Cosmology Astropart. Phys., 2020, 059
* Fujita et al. (2020) Fujita T., Mauerhofer V., Senatore L., Vlah Z., Angulo R., 2020, JCAP, 01, 009
* Gao et al. (2005) Gao L., Springel V., White S. D., 2005, Mon. Not. Roy. Astron. Soc., 363, L66
* Garrison et al. (2016) Garrison L. H., Eisenstein D. J., Ferrer D., Metchnik M. V., Pinto P. A., 2016, Mon. Not. Roy. Astron. Soc., 461, 4125
* Goodman & Weare (2010) Goodman J., Weare J., 2010, Communications in Applied Mathematics and Computational Science, 5, 65
* Guo et al. (2019) Guo H., et al., 2019, Astrophys. J., 871, 147
* Hand et al. (2018) Hand N., Feng Y., Beutler F., Li Y., Modi C., Seljak U., Slepian Z., 2018, The Astronomical Journal, 156, 160
* Harris et al. (2020) Harris C. R., et al., 2020, Nature, 585, 357–362
* Heitmann et al. (2009) Heitmann K., Higdon D., White M., Habib S., Williams B. J., Wagner C., 2009, Astrophys. J., 705, 156
* Heitmann et al. (2010) Heitmann K., White M., Wagner C., Habib S., Higdon D., 2010, Astrophys. J., 715, 104
* Heymans et al. (2020) Heymans C., et al., 2020, arXiv e-prints, p. arXiv:2007.15632
* Hockney & Eastwood (1988) Hockney R., Eastwood J., 1988, Computer Simulation Using Particles. CRC Press, https://books.google.com/books?id=nTOFkmnCQuIC
* Hoffman & Gelman (2011) Hoffman M. D., Gelman A., 2011, arXiv e-prints, p. arXiv:1111.4246
* Howlett et al. (2015) Howlett C., Manera M., Percival W. J., 2015, Astronomy and Computing, 12, 109
* Hunter (2007) Hunter J. D., 2007, Computing in Science Engineering, 9, 90
* Ivanov et al. (2020) Ivanov M. M., McDonough E., Hill J. C., Simonović M., Toomey M. W., Alexander S., Zaldarriaga M., 2020, Phys. Rev. D, 102, 103502
* Ivezić et al. (2019) Ivezić Ž., et al., 2019, Astrophys. J., 873, 111
* Joyce et al. (2020) Joyce M., Garrison L., Eisenstein D., 2020, Mon. Not. Roy. Astron. Soc.
* Knabenhans et al. (2019) Knabenhans M., et al., 2019, Mon. Not. Roy. Astron. Soc., 484, 5509
* Krause et al. (2017) Krause E., et al., 2017, arXiv e-prints
* Krolewski et al. (2020) Krolewski A., Ferraro S., Schlafly E. F., White M., 2020, J. Cosmology Astropart. Phys., 2020, 047
* Kuhlen et al. (2012) Kuhlen M., Vogelsberger M., Angulo R., 2012, Phys. Dark Univ., 1, 50
* Kwan et al. (2015) Kwan J., Heitmann K., Habib S., Padmanabhan N., Finkel H., Lawrence E., Frontiere N., Pope A., 2015, Astrophys. J., 810, 35
* Lacasa (2018) Lacasa F., 2018, Astronomy & Astrophysics, 615, A1
* Laguë et al. (2020) Laguë A., Bond J. R., Hložek R., Marsh D. J., Söding L., 2020, arXiv e-prints
* Laureijs et al. (2011) Laureijs R., et al., 2011, Euclid Definition Study Report (arXiv:1110.3193)
* Lawrence et al. (2010) Lawrence E., Heitmann K., White M., Higdon D., Wagner C., Habib S., Williams B., 2010, The Astrophysical Journal, 713, 1322–1331
* Lazeyras & Schmidt (2018) Lazeyras T., Schmidt F., 2018, J. Cosmology Astropart. Phys., 2018, 008
* Lazeyras & Schmidt (2019) Lazeyras T., Schmidt F., 2019, JCAP, 11, 041
* Lewandowski et al. (2015) Lewandowski M., Perko A., Senatore L., 2015, JCAP, 05, 019
* Lewis (2019) Lewis A., 2019, GetDist: a Python package for analysing Monte Carlo samples (arXiv:1910.13970)
* Li et al. (2019) Li Y., Singh S., Yu B., Feng Y., Seljak U., 2019, Journal of Cosmology and Astroparticle Physics, 2019, 016–016
* MacCrann et al. (2020) MacCrann N., Blazek J., Jain B., Krause E., 2020, Mon. Not. Roy. Astron. Soc., 491, 5498
* Mandelbaum (2018) Mandelbaum R., 2018, Ann. Rev. Astron. Astrophys., 56, 393
* Mandelbaum et al. (2018) Mandelbaum R., et al., 2018, arXiv e-prints
* Mansfield & Avestruz (2020) Mansfield P., Avestruz C., 2020, Mon. Not. Roy. Astron. Soc.
* Mansfield & Kravtsov (2020) Mansfield P., Kravtsov A. V., 2020, Mon. Not. Roy. Astron. Soc., 493, 4763
* Mao et al. (2018) Mao Y.-Y., Zentner A. R., Wechsler R. H., 2018, Mon. Not. Roy. Astron. Soc., 474, 5143
* Matsubara (2008) Matsubara T., 2008, Phys. Rev. D, 78, 083519
* McClintock et al. (2019) McClintock T., et al., 2019, Astrophys. J., 872, 53
* McDonald & Roy (2009) McDonald P., Roy A., 2009, Journal of Cosmology and Astroparticle Physics, 2009, 020–020
* McLaughlin et al. (2021) McLaughlin S., Wechsler R. H., Banerjee A., DeRose J., Mao Y.-Y., Tinker J. L., Zhai Z., 2021, arXiv e-prints, p. To appear
* McQuinn & White (2016) McQuinn M., White M., 2016, J. Cosmology Astropart. Phys., 2016, 043
* Meiksin & White (1999) Meiksin A., White M., 1999, Monthly Notices of the Royal Astronomical Society, 308, 1179–1184
* Michaux et al. (2020) Michaux M., Hahn O., Rampf C., Angulo R. E., 2020, Mon. Not. Roy. Astron. Soc., 500, 663
* Modi et al. (2017) Modi C., White M., Vlah Z., 2017, J. Cosmology Astropart. Phys., 2017, 009
* Modi et al. (2020) Modi C., Chen S.-F., White M., 2020, Mon. Not. Roy. Astron. Soc., 492, 5754
* Mohammed et al. (2016) Mohammed I., Seljak U., Vlah Z., 2016, Monthly Notices of the Royal Astronomical Society, 466, 780–797
* Nishimichi et al. (2020) Nishimichi T., D’Amico G., Ivanov M. M., Senatore L., Simonović M., Takada M., Zaldarriaga M., Zhang P., 2020, arXiv e-prints
* Omori et al. (2019) Omori Y., et al., 2019, Physical Review D, 100
* Park et al. (2020) Park Y., Rozo E., Krause E., 2020, arXiv e-prints
* Peacock & Bilicki (2018) Peacock J. A., Bilicki M., 2018, MNRAS, 481, 1133
* Power et al. (2016) Power C., Robotham A. S. G., Obreschkow D., Hobbs A., Lewis G. F., 2016, Monthly Notices of the Royal Astronomical Society, 462, 474–489
* Prat et al. (2018) Prat J., et al., 2018, Physical Review D, 98
* Pullen et al. (2016) Pullen A. R., Alam S., He S., Ho S., 2016, MNRAS, 460, 4098
* Rozo et al. (2016) Rozo E., et al., 2016, Mon. Not. Roy. Astron. Soc., 461, 1431
* Salcedo et al. (2018) Salcedo A. N., Maller A. H., Berlind A. A., Sinha M., McBride C. K., Behroozi P. S., Wechsler R. H., Weinberg D. H., 2018, Monthly Notices of the Royal Astronomical Society, 475, 4411–4423
* Sato-Polito et al. (2019) Sato-Polito G., Montero-Dorta A. D., Abramo L. R., Prada F., Klypin A., 2019, Mon. Not. Roy. Astron. Soc., 487, 1570
* Savitzky & Golay (1964) Savitzky A., Golay M. J. E., 1964, Analytical Chemistry, 36, 1627
* Schmittfull et al. (2019) Schmittfull M., Simonović M., Assassi V., Zaldarriaga M., 2019, Physical Review D, 100
* Schmittfull et al. (2020) Schmittfull M., Simonović M., Ivanov M. M., Philcox O. H. E., Zaldarriaga M., 2020, Modeling Galaxies in Redshift Space at the Field Level (arXiv:2012.03334)
* Schneider et al. (2016) Schneider A., et al., 2016, Journal of Cosmology and Astroparticle Physics, 2016, 047–047
* Scoccimarro et al. (1999) Scoccimarro R., Zaldarriaga M., Hui L., 1999, The Astrophysical Journal, 527, 1–15
* Senatore & Zaldarriaga (2017) Senatore L., Zaldarriaga M., 2017, arXiv e-prints, p. arXiv:1707.04698
* Singh et al. (2019) Singh S., Mandelbaum R., Seljak U., Rodríguez-Torres S., Slosar A., 2019, Monthly Notices of the Royal Astronomical Society, 491, 51–68
* Takada et al. (2014) Takada M., et al., 2014, PASJ, 66, R1
* Taruya et al. (2018) Taruya A., Nishimichi T., Jeong D., 2018, Phys. Rev. D, 98, 103532
* Torrado & Lewis (2020) Torrado J., Lewis A., 2020, Cobaya: Code for Bayesian Analysis of hierarchical physical models (arXiv:2005.05290)
* Villaescusa-Navarro et al. (2018) Villaescusa-Navarro F., et al., 2018, Astrophys. J., 867, 137
* Virtanen et al. (2020) Virtanen P., et al., 2020, Nature Methods, 17, 261
* Vlah et al. (2015) Vlah Z., White M., Aviles A., 2015, J. Cosmology Astropart. Phys., 2015, 014
* Vlah et al. (2016) Vlah Z., Castorina E., White M., 2016, Journal of Cosmology and Astroparticle Physics, 2016, 007–007
* Wechsler & Tinker (2018) Wechsler R. H., Tinker J. L., 2018, Ann. Rev. Astron. Astrophys., 56, 435
* Wechsler et al. (2002) Wechsler R. H., Bullock J. S., Primack J. R., Kravtsov A. V., Dekel A., 2002, Astrophys. J., 568, 52
* Wechsler et al. (2006) Wechsler R. H., Zentner A. R., Bullock J. S., Kravtsov A. V., 2006, Astrophys. J., 652, 71
* White (2004) White M. J., 2004, Astropart. Phys., 22, 211
* Wibking et al. (2019) Wibking B. D., et al., 2019, Mon. Not. Roy. Astron. Soc., 484, 989
* Wibking et al. (2020) Wibking B. D., Weinberg D. H., Salcedo A. N., Wu H.-Y., Singh S., Rodríguez-Torres S., Garrison L. H., Eisenstein D. J., 2020, Mon. Not. Roy. Astron. Soc., 492, 2872
* Wiener (1938) Wiener N., 1938, American Journal of Mathematics, 60, 897
* Xiu (2010) Xiu D., 2010, Numerical Methods for Stochastic Computations: A Spectral Method Approach. Princeton University Press, USA
* Yoo et al. (2006) Yoo J., Tinker J. L., Weinberg D. H., Zheng Z., Katz N., Dave R., 2006, The Astrophysical Journal, 652, 26–42
* Yuan et al. (2018) Yuan S., Eisenstein D. J., Garrison L. H., 2018, Mon. Not. Roy. Astron. Soc., 478, 2019
* Zhai et al. (2019) Zhai Z., et al., 2019, Astrophys. J., 874, 95
* Zhan & Knox (2004) Zhan H., Knox L., 2004, Astrophys. J. Lett., 616, L75
* Zhang et al. (2020) Zhang Y., et al., 2020, Mon. Not. Roy. Astron. Soc.
* Zheng et al. (2005) Zheng Z., et al., 2005, Astrophys. J., 633, 791
* Zheng et al. (2007) Zheng Z., Coil A. L., Zehavi I., 2007, Astrophys. J., 667, 760
* Zu (2020) Zu Y., 2020, arXiv e-prints
* van Daalen et al. (2020) van Daalen M. P., McCarthy I. G., Schaye J., 2020, Mon. Not. Roy. Astron. Soc., 491, 2424
## Appendix A Including emulator error in the covariance matrix
As seen in Fig. 5, there is a scale-dependent error associated with our
emulation scheme. This error is small, on the order of $\sim$ 1 per cent, and
within the accuracy requirements for the next generation of surveys. However,
at the smallest scales we would like to test this model, $k\simeq
0.6\,h\,\mathrm{Mpc}^{-1}$, it will often be larger than the combined cosmic
variance and shot noise (in the absence of shape noise) of our tests. In this
regime, the combination of using the average of only five boxes as our data,
the approximate disconnected form of the covariance in Eqn. 14 and failing to
include model uncertainty in an analysis could then lead to biased inference
on cosmological parameters (Baldauf et al., 2016a; Chudaykin et al., 2020).
Since the Aemulus test suite is composed of 35 simulations, at seven distinct
points of cosmological parameter space, we can use the emulator residuals at
these points to construct a model for the theoretical uncertainty. In this
appendix we discuss our procedure to construct this model and study its impact
when employed in inference.
Let $\bar{P}_{XY}(k,\Omega_{i})$ be the mean basis spectrum measured from five
Aemulus boxes at the cosmology $\Omega_{i}$. For a given box, we define
normalized emulator residuals as
$\hat{r}^{XY}(k)=\frac{\hat{P}_{XY}(k,\Omega_{i})-P^{\mathrm{Emu}}_{XY}(k,\Omega_{i})}{\bar{P}_{XY}(k,\Omega_{i})},$
(18)
where $P^{\mathrm{Emu}}_{XY}$ is the emulator prediction at the same
cosmology. Normalized this way, we assume the residuals are cosmology
independent. At each redshift we have 35 sets of residuals. With these
measurements we can build an estimate of the residual correlation matrix
$\mathrm{Corr}^{\mathrm{Emu}}(k,k^{\prime})=\frac{\mathrm{Cov}[\hat{r}^{XY}(k),\hat{r}^{XY}(k^{\prime})]}{\sqrt{\mathrm{Cov}(k,k)\mathrm{Cov}(k^{\prime},k^{\prime})}}$
(19)
which captures how correlated the emulator residuals are across the test set
as a function of scale. The quantities in the numerator and denominator of
Eqn. 19 are the same, but we apply the shorthand
$\mathrm{Cov}(k,k)\equiv\mathrm{Cov}[\hat{r}^{XY}(k),\hat{r}^{XY}(k)]$ to not
overload the expression. We proceed to define an emulator floor,
$f_{\mathrm{Emu}}$, specifying what fraction of the signal is of the order
emulator error. From Fig. 5, the dominant source of uncertainty will come from
the error in the $P_{11}$ spectrum. This implies $f_{\mathrm{Emu}}\simeq 0.01$
at small scales for redshifts $z>0$. We then estimate that the emulator error
will scale as
$\mathrm{Cov}^{\mathrm{Err}}(k,k^{\prime})=(f_{\mathrm{Emu}}P_{hh,hm}(k))^{2}\times\mathrm{Corr}^{\mathrm{Emu}}(k,k^{\prime}),$
(20)
where $P_{hh}$ or $P_{hm}$ is used depending on whether we are including this
contribution in the block corresponding to the halo-halo correlation or the
halo-matter correlation. We then add this contribution in quadrature to Eq. 14
$\mathrm{Cov}(k,k^{\prime})=\mathrm{Cov}^{G}(k,k^{\prime})+\mathrm{Cov}^{\mathrm{Err}}(k,k^{\prime}).$
(21)
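The residual-based error model of Eqs. 18-21 can be sketched in a few lines of numpy. The function name and array shapes below are our own illustrative choices, not the paper's pipeline, and Eq. 20 is symmetrized via the outer product $f_{\mathrm{Emu}}P(k)\,f_{\mathrm{Emu}}P(k^{\prime})$:

```python
import numpy as np

def emulator_error_cov(residuals, P_signal, f_emu=0.01):
    """Build Cov^Err(k, k') from normalized emulator residuals (Eqs. 19-20).

    residuals: array of shape (n_tests, n_k), one row of r^XY(k) per test box.
    P_signal:  array of shape (n_k,), the P_hh or P_hm spectrum entering Eq. 20.
    """
    cov_r = np.cov(residuals, rowvar=False)        # Cov[r(k), r(k')]
    sig = np.sqrt(np.diag(cov_r))
    corr = cov_r / np.outer(sig, sig)              # Eq. 19
    # Eq. 20, symmetrized as (f_emu * P(k)) * (f_emu * P(k')) * Corr(k, k')
    return np.outer(f_emu * P_signal, f_emu * P_signal) * corr

# Eq. 21: the result is added to the Gaussian (disconnected) covariance,
# cov_total = cov_G + emulator_error_cov(residuals, P_hh)
```

By construction the diagonal of the result is $(f_{\mathrm{Emu}}P(k))^{2}$, i.e., the stated $\sim 1$ per cent floor on the signal.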
We run chains with the covariance in Eq. 21, as well as chains including only
the diagonal contribution due to uncertainty, which we will call the ‘floor’
covariance. The contours for cosmological parameters are shown in Fig. 15.
While this is clearly an approximate treatment, we observe that including this
contribution helps prevent significant biases in cosmological parameter
inference arising from the combination of a noisy input data vector and the
very low noise level assumed in the fit.
Figure 13: Comparison of our model for emulator uncertainty to the
disconnected component of the covariance matrix. The left panel corresponds to
the $P_{hh}P_{hh}$ contribution and the right panel to the $P_{hm}P_{hm}$
contribution.
Figure 14: Correlation matrix of emulator residuals described in Appendix A.
We see that at small scales, past $k\simeq 0.4\,h\,{\rm Mpc}^{-1}$, the
emulator residuals are significantly correlated.
Figure 15: UNIT contours with the different covariance forms discussed in
Appendix A. Chains are run with the standard scale cuts of $k_{\rm
max}=0.6\,h\,{\rm Mpc}^{-1}$.
## Appendix B Subsets of the bias model
A common critique of EFT-based models is that they are over-parametrized, and
can fit to any signal due to the large number of free parameters. For
perturbative Lagrangian bias models, this question has been previously
explored in the context of CMB lensing cross-correlations. In Modi et al.
(2017), it was shown that significant biases are obtained in $\sigma_{8}$ in
these analyses if one uses a simplified model with linear galaxy bias and non-
linear matter power spectra. To address whether this holds for our model, we
run a series of tests of the emulator, with differing subsets of the bias
parameters set to zero. The full set we adopt is
1. ‘All $b_{i}$’s’, the full bias parametrization.
2. ‘$b_{1}$ only’, where $b_{2}=b_{s^{2}}=b_{\nabla^{2}}=0$.
3. ‘$b_{1},\,b_{\nabla^{2}}$’, where $b_{2}=b_{s^{2}}=0$.
4. ‘No $b_{s^{2}}$’, where $b_{s^{2}}=0$.
5. ‘No $b_{2}$’, where $b_{2}=0$.
Figure 16: Posteriors for varying subsets of the bias model in Eqn. 1, for two
different scale cut configurations.
A contribution due to shot-noise is included in all of the chains. All chains
in Fig. 16 are run with the same data vector and covariance matrices, and the
$k_{\rm max}$ cuts highlighted in each row. We observe significant biases for
every subset of bias parameters, except for the complete parameterization
which recovers the input cosmological parameters as previously discussed in
section 6.3.3. This implies, at least in this simplified analysis, that the
full set of bias parameters is required to achieve unbiased inference with
this model.
To check the scale-dependence of the importance of the full parameterization,
the second row of Fig. 16 repeats this test limiting ourselves to
$k_{\mathrm{max}}=0.4h{\rm Mpc}^{-1}$. The full bias model and the subset
including only linear, quadratic and higher derivative biases perform
comparatively well.
## Appendix C The $k\to 0$ limit of the emulator
In this appendix we investigate how the failure of $N$-body simulations to
correctly recover large-scale linear growth, highlighted in §1, affects the
emulator. We implement two different ways of enforcing consistency with linear
theory at large scales:
* •
Strictly reverting to LPT at $k<k_{\rm min}$. This introduces a ‘kink’ in the
basis spectra predicted by the emulator.
* •
Extrapolating the principal component predictions out to $k<k_{\rm min}$, but
with a filter to enforce linear growth.
The filter is applied to the $\Gamma^{XY}(k)$ that we use to build the
emulator,
$\displaystyle\Gamma^{XY}(k,{\bf\Omega})\to F(k)\Gamma^{XY}(k,{\bf\Omega}).$
(22)
With this filtering approach, we recover LPT at large scales by construction
without the discontinuity introduced by simply forcing LPT after some
transition. The functional form we adopted for $F(k)$ is
$\displaystyle
F(k)=\frac{1}{2}\left[1+\tanh\left(\alpha\frac{k-k_{*}}{k_{*}}\right)\right].$
(23)
This quantity asymptotes to 0 at large scales, ensuring the $\Gamma^{XY}$ are
0, and thus the ratios are consistent with unity. Fiducial values adopted are
$k_{*}=0.125$ and $\alpha=2.5$ but the impact is similar for other values.
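A minimal sketch of the filter in Eq. 23 with the fiducial values (the function name is ours):

```python
import math

def lpt_filter(k, k_star=0.125, alpha=2.5):
    """Eq. 23: F(k) = (1/2) * [1 + tanh(alpha * (k - k_star) / k_star)].

    F -> 0 for k << k_star (large scales, where LPT growth is enforced) and
    F -> 1 for k >> k_star (small scales, where Gamma^XY is left untouched).
    """
    return 0.5 * (1.0 + math.tanh(alpha * (k - k_star) / k_star))
```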
Since the samples we use to test the emulator are also derived from boxes with
incorrect growth, for all figures in this paper we adopt a ‘fiducial model’
where we use the $\Gamma^{XY}$ with no corrections at large-scales. The
emulator then has large-scale growth compatible with the boxes.
If we perform a simulated likelihood analysis with the other variants that
enforce LPT at large scales we see small shifts in some cosmological
parameters away from their true values. The shifts in parameters are all less
than one $\sigma$, and one must keep in mind that the noise levels in our
analysis are quite stringent (for example, we have no shape noise in the
simulated lensing constraint). When phrased in terms of the uncertainties in
parameters obtained by recent analyses (Heymans et al., 2020), these shifts
are less than $(1/4)\,\sigma$.
Figure 17: The same chains as Fig. 10 but showing all parameters varied.
# Accurate and Efficient Simulations of Hamiltonian Mechanical Systems with
Discontinuous Potentials
Molei Tao (School of Mathematics, Georgia Institute of Technology, Atlanta GA
30332, USA. Email: <EMAIL_ADDRESS>) and Shi Jin (School of Mathematical
Sciences, Institute of Natural Sciences and MOE-LSE, Shanghai Jiao Tong
University, Shanghai 200240, China. Email: <EMAIL_ADDRESS>)
###### Abstract
This article considers Hamiltonian mechanical systems with potential functions
admitting jump discontinuities. The focus is on accurate and efficient
numerical approximations of their solutions, which will be defined via the
laws of reflection and refraction. Despite the success of symplectic
integrators for smooth mechanical systems, their construction for
discontinuous ones is nontrivial, and the numerical convergence order can be
impaired as well. Several readily usable numerical methods are proposed, including:
a first-order symplectic integrator for general problems, a third-order
symplectic integrator for problems with only one linear interface, arbitrarily
high-order reversible integrators for general problems (no longer symplectic),
and an adaptive time-stepping version of the previous high-order method.
Interestingly, whether symplecticity leads to favorable long time performance
is no longer clear due to discontinuity, as traditional Hamiltonian backward
error analysis does not apply any more. Therefore, at this stage, our
recommended default method is the last one. Numerical evidence on the order of
convergence, long-time performance, momentum map conservation, and consistency
with the computationally expensive penalty method is supplied. A
complex problem, namely the Sauteed Mushroom, is also proposed and numerically
investigated, for which multiple bifurcations between trapped and ergodic
dynamics are observed.
Dedicated to the centenary of the birth of Kang Feng
## 1 Introduction
The developments of accurate and efficient integrators for simulating smooth
Hamiltonian mechanical systems, as well as the associated theoretical
analysis, have been a major triumph of contemporary numerical analysis (see
e.g., [34, 72, 23, 56, 4]). Symplectic integrators (e.g., [22, 73]), for
instance, are a celebrated class of numerical methods suitable for such
systems. For example, explicit symplectic integrators have been constructed,
with arbitrarily high-order versions for both separable Hamiltonians (e.g.,
[15, 26, 80, 87]) and general, non-separable Hamiltonians [82]. Their
explicitness and the ability to use relatively large timesteps lead to
computationally efficient simulations (for stiff/multiscale problems, see
also, e.g., [28, 74, 85, 84]), and the high-orderness, which has mostly been
achieved by a powerful technique known as the splitting method (e.g., [63] for
a review), yields high accuracy at least for short-time simulations. Moreover,
favorable long-time properties of symplectic integrators have also been
proved, including linear growth of error (for integrable systems, e.g., [10,
71]), near preservation of energy (e.g., [3]), and conservation of momentum
maps associated with symmetries (e.g., [62]). Central to many of these
beautiful analyses is what is nowadays known as (Hamiltonian) backward error
analysis (e.g., [34]). It views the iterations of a symplectic integrator as
stroboscopic samples of the solution of a near-by Hamiltonian system, which is
to be found and hopefully close to the original Hamiltonian. In addition, also
worth noting is another class of useful methods for structured continuous
systems, namely reversible integrators. This class often overlaps with
symplectic integrators, although they are not always the same, and for them
one can also establish long term accuracy via (reversible) backward error
analysis under reasonable assumptions (see Chap.XI of [34]).
On the other hand, if the potential of a mechanical system has discontinuity,
each corresponding to a potential barrier111Here we assume one is simply
interested in a Newtonian problem (i.e., $H(q,p)=\|p\|^{2}/2+V(q)$), which is
a special case of separable Hamiltonian problems (defined via
$H(q,p)=K(p)+V(q)$, where $K$ and $V$ are respectively referred to as the
kinetic and the potential energy)., most aforementioned results no longer
hold. Even the sense in which one discusses the solution has to be defined,
because the standard equations of motion, known as the (canonical) Hamilton’s
equations (i.e., $\dot{q}=\partial H/\partial p,\dot{p}=-\partial H/\partial q$),
are ill-defined due to the non-differentiability of $V(q)$. Following [45, 40] (and
also its higher-dimensional extension to curved interfaces [41], for
Hamiltonian systems with discontinuous Hamiltonians [48], as well as an
earlier such approach for well-balanced schemes for the shallow-water
equations [70]), we will define a solution based on physical principle, or
more precisely, using the classical idea of particle refraction and reflection
(used in optics and derived rigorously from Maxwell’s equation [39]). See Fig.
1 for a simplified illustration and Fig. 2 for a more general case. This
definition will be numerically shown (Sec.4.1 and 4.3) to be consistent with
an alternative treatment termed the penalty method (see Sec.1.1 for a brief
discussion of the penalty method), which was also observed in [47], but with
improved efficiency and accuracy.
Figure 1: Toy 1D illustration of the dichotomy of _reflection_ and
_refraction_ : whether the particle goes through a discontinuous barrier
depends on if it has enough kinetic energy to overcome it. In 1D, this is a
consequence of energy conservation. Arrow size indicates the magnitude of
momentum.
In the previous works [45, 48], the proposed schemes were in general neither
symplectic nor of high order accuracy. In fact, since the so-defined solution
will exhibit discontinuity, in particular in the momentum variable, as a
response to the discontinuity in the potential, these two discontinuities make
the construction of a symplectic integrator, or even just a reversible
integrator, nontrivial. Designing high-order methods becomes even more
challenging, as the splitting approach for boosting the convergence order will
be shown no longer effective due to the discontinuity. Moreover, backward
error analysis, either the one for symplectic integrators or the one for
reversible integrators, fails as well, and long time performance guarantees
are not proved any more.
In order to improve the numerical simulation for such singular Hamiltonian
systems, we propose four numerical methods, each specializing in certain
tasks. See Table 1. Properties of these methods are numerically studied in
Sec. 4, such as convergence order (Sec. 4.1&4.2), long time accuracy (Sec.
4.1,4.2,4.3), and the conservation of momentum map due to symmetry (Sec. 4.3).
section | symplectic? | reversible?# | global error∗ | other feature(s)
---|---|---|---|---
3.1 | Yes | Yes | 1st-order | general
3.3 | Yes | Yes | 3rd-order | 1 linear interface† only
3.4 | No | Yes | arbitrary | general
3.5 | No | Yes | arbitrary | general; adaptive time-stepping
#: the more precise question is whether the integrator can be made reversible.
*: global error considered here is that of position, away from interface interceptions.
$\dagger$: a linear interface is a co-dimension 1 hyperplane of
discontinuities in the potential.
Table 1: A brief summary of numerical methods proposed in this article.
### 1.1 Related work
##### ‘Nonsmooth mechanics’.
A rich field termed ‘nonsmooth mechanics’ / ‘nonsmooth Hamiltonian systems’ /
‘mechanical systems with hard constraints’ / ‘unilaterally constrained
systems’ / ‘contact integrators’ / ‘collision integrators’ already exists
(e.g., [78, 64, 36, 79, 54, 50, 38, 76, 55, 68, 24, 13, 7, 16, 52, 21, 69, 53,
51, 17, 58]). Problems considered there correspond to a special case of this
study.
More precisely, to the best of our knowledge, this literature mainly
considers, in the language of this article, a discontinuous potential barrier
of height $+\infty$, so that trajectories can only stay within the finite
potential region222Note this literature equivalently formulates the problem
via a collection of unilateral holonomic constraints $f_{i}(q)\geq
0$.. This setup is already very important in engineering and science
applications; for example, in robotics, the interested mechanical object is
often interacting with hard surfaces – think about a bipedal robot that walks
by frequently impacting the ground (e.g., [31]), and in molecular and polymer
dynamics molecules are sometimes viewed as hard spheres (so that they remain
at least a certain distance from each other; e.g., [19]). However, in these
cases, the boundary manifold (i.e., the interface(s)) cannot be crossed, and
one will only have _reflections_ but never _refractions_. Therefore, those
interactions with the interface were commonly called in the literature
‘contacts’ and ‘collisions’.
On the contrary, the setup in [45, 48], which is adopted in this article,
allows finite discontinuous jumps, and therefore both _reflection_ and
_refraction_ can manifest in the dynamics.
In addition to this main difference, here are some more details on other
aspects in which this research compares with the existing works in nonsmooth
mechanics (all for $+\infty$ jump only): One objective of this article is to
develop explicit symplectic integrators. Some pioneering breakthroughs
developed symplectic integrators which are however implicit (e.g., [24, 58]).
An intriguing paper [51] also noted that if one relaxes the symplecticity
requirement and instead only requires symplecticity over smooth trajectories
intervals, then it is possible to obtain better energy behaviors. This also
reminds us of an inspiring and clever earlier work [7], which uses backward
error analysis for continuous systems to design stabilized event-driven
collision integrator. In addition, there is a collection of substantial works
based on finite element methods (e.g., [50, 52, 13, 21]). Although we cannot
discuss all results in the rich ‘$+\infty$ jump’ field, we
also mention the relevant paper [17] in the context of symplectic integrators.
Its main goal is to simulate smooth potentials, but it approximates the smooth
potential by a piecewise constant or quadratic function and uses analytically
obtained solution of the nonsmooth approximation as a numerical solution. What
is new in this paper, in comparison, is that we allow a continuous part of the
potential in addition to the piecewise-defined discontinuous part.
##### Penalty method / regularization.
A popular idea commonly known as regularization/regularisation corresponds to
modifying the discontinuous vector field of a differential equation and
replacing its discontinuities by steep but continuous transitions. After the
regularization both the equations of motion and the solution become well-
defined. The hope is to recover the original dynamics as the steepness goes to
infinity. This idea was proved to be very useful, for example, in engineering
applications (e.g., [30]). It should be pointed out, however, that
regularization generally creates artificial numerical stiffness (see two
paragraphs below), and caution should be exercised even without considering
numerics, as regularization doesn’t always guarantee a good, or even correct,
approximation. In fact, general discontinuous problems may not even have
unique solutions (e.g., [25, 18]), and [67], for example, provides a
mathematical discussion on when regularization actually removes this ambiguity
(we also refer to the notion of renormalized solution [20, 2], which is
another way of removing ambiguity, not via regularization though). Also,
regularization isn’t always possible (e.g., [61]). Furthermore, in the case of
geometric optics through an interface, an arbitrary smoothing of the interface
could lead to incorrect (partial) transmission and reflection rates [46].
Profound analyses exist and provide sufficient conditions for effective
regularization (e.g., [27, 60, 59, 86, 61]), however only for several
subclasses of problems.
The setup in this article is also just a subclass, because we only consider
Newtonian mechanical systems and only the potential is discontinuous. In some
sense this produces a higher-order singularity in the problem than what the
previous paragraph discussed, as the forcing term in the ‘vector field’ will
become not just discontinuous but Dirac. It is natural to interpret
regularization in this case to be the (sufficiently differentiable)
regularization of the potential function instead of the vector field. This
idea appeared, for example, in the ‘softening’ of gravitational potential
(e.g., [1]). Again, such regularization can work very well for specific
scientific investigations, but its general validity is not warranted (e.g.,
[29] for an empirical example).
Focusing directly on the discontinuous problem, this article bears no ambition
of investigating the general validity of regularized potentials, but only uses
their numerical simulations as one of the few available methods to compare to. We
will simulate the regularized Hamiltonians using classical smooth symplectic
integrators and use the numerical solutions as approximations of the
discontinuous solutions. This approach will be called _the penalty method_
thereafter. The specific form of regularization used in our experiments is
based on sigmoid function (e.g., eq.12,13), and for these examples, the
regularized dynamics do appear to be a good approximation of our definition of
the exact (discontinuous) solution. We conjecture the accuracy of this
approximation to be $\mathcal{O}(1/\alpha)+\mathcal{O}((\alpha h)^{p})$, away
from time points of interface crossing, if generic integrators are used (for
some non-generic stiff integrators, see e.g, [8, 83], but their error bounds
in this case are unclear yet). Here $\alpha$ is the steepest parameter that
should go to $\infty$, and $h$ and $p$ are respectively the step size and
order of the smooth integrator used in the penalty method. One can see that
the artificial stiffness created by the regularization poses a strong
constraint on the step size, which renders the penalty method computationally
inefficient; such severe time-step constraints were already known (e.g.,
[47]). Our proposed discontinuous integrators, on the other hand, do not have
this restriction.
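As a concrete sketch of such a regularization (our own illustrative form, not necessarily the one used in eq.12,13), a jump of size $\Delta V$ at $q_{\text{jump}}$ can be smoothed with a sigmoid:

```python
import math

def regularized_jump(q, q_jump, dV, alpha):
    """Sigmoid smoothing of a potential jump:
    V_alpha(q) = dV * sigmoid(alpha * (q - q_jump)).

    As the steepness alpha -> infinity this approaches the discontinuous step,
    but the force it generates is O(alpha * dV) near the interface, which is
    exactly the artificial stiffness that constrains the penalty method's
    step size.
    """
    return dV / (1.0 + math.exp(-alpha * (q - q_jump)))
```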
## 2 The definition of an exact solution
### 2.1 The setup of the problem
Consider a Newtonian mechanical system in a $2d$-dimensional Euclidean phase
space, whose potential function has jump discontinuities across interfaces.
Assuming the location and size of each discontinuity is known, then the
potential can be decomposed as the sum of a continuous part and a piecewise
constant part. Assume also that the continuous part is twice continuously
differentiable. That is, formally, the system is governed by a
Hamiltonian333Note the mass matrix has been assumed to be $I$, because other
mass matrices can be equivalently turned into the identity via coordinate
changes, which simply correspond to alternative $V$ and $U$; see, e.g., [75]
for a summary of how to transform the potentials.
$H(q,p)=\frac{1}{2}p^{T}p+V(q)+U(q),$ (1)
where $U(\cdot)$ is a $\mathcal{C}^{2}$ function, and $V(q)=V_{i}$ for some
constant $V_{i}$ when $q\in D_{i}$. $D_{i}$’s for $i=1,\cdots,M$ are open sets
whose closures form a partition of the configuration space $\mathbb{R}^{d}$.
Let
$B_{ij}=\begin{cases}\overline{D_{i}}\cap\overline{D_{j}},&\qquad i\neq j\\\
\emptyset,&\qquad i=j\end{cases}$
denote the discontinuity interfaces and assume they are either empty or
1-codimensional $\mathcal{C}^{1}$ submanifolds.
### 2.2 Exact solution via physical laws of reflection and refraction
Due to the non-differentiability of $V(\cdot)$, Hamilton’s equation can no
longer be used to describe the (meta)particle’s global motion. Nevertheless,
one can turn to mechanical behavior of particles at the interface to define
the solution, as proposed in [45, 48, 40] (for curved interface in high
dimension see [41]): basically, in order for the solution to make sense
physically, a corresponding particle should simply evolve locally according to
the smooth Hamiltonian dynamics given by $\hat{H}=\frac{1}{2}p^{T}p+U(q)$,
until it hits an interface, and then the particle will either reflect or
refract instantaneously, depending on the normal momentum magnitude and
whether the jump in $V$ corresponds to a potential barrier or dip across the
interface. Then the particle evolves again locally in some $D_{i}$ according
to $\hat{H}$, until the next interface hitting.
More precisely, under nontrivial but not too restrictive assumptions (see
Conditions 1 and 2), the solution will be well-defined as an alternation
between two phases, _flow_ and _impact_ , which will now be detailed. To do
so, denote by $\phi^{t}$ the time-$t$-flow map of $\hat{H}$, and by
$\mathcal{Q}$ the operator that projects $[q,p]$ to the $q$-component. Let
$t_{0}$ be the initial time of evolution, and let $i_{t_{0}}$ be the integer
such that the initial condition satisfies
$q(t_{0})\in\overline{D_{i_{t_{0}}}}$ (note: if the initial condition is on an
interface, $i_{t_{0}}$ is not unique, and its choice needs to be specified as
part of the initial condition). The next hitting time is defined to be
$t_{k+1}=t_{k}+\min_{j=1,\cdots,M}\inf\left\\{\delta\,\Big{\rvert}\,\delta>0,\mathcal{Q}\circ\phi^{\delta}[q(t_{k}),p(t_{k})]\in
B_{i_{t_{k}}j}\right\\},$
let
$i_{t_{k+1}}=\text{argmin}_{j=1,\cdots,M}\inf\left\\{\delta\,\Big{\rvert}\,\delta>0,\mathcal{Q}\circ\phi^{\delta}[q(t_{k}),p(t_{k})]\in
B_{i_{t_{k}}j}\right\\},$
and let the solution on time interval $[t_{k},t_{k+1}]$ be defined by
$[q(t_{k}+\delta),p(t_{k}+\delta)]=\phi^{\delta}[q(t_{k}),p(t_{k})],\qquad
0\leq\delta\leq t_{k+1}-t_{k}.$
This gives one of the two phases of the solution, which shall be called
_flow_.
###### Remark 2.1.
It is easy to see flow preserves both (i) the differentiable part of the total
energy, $\hat{H}$, and (ii) the total energy $H$ as long as time points
$t_{k}$ and $t_{k+1}$ are not considered (on which $V$ is ill-defined).
After _flow_ , the other phase, called _impact_ , will take place (unless
$t_{k+1}=\infty$).
To define _impact_ , let $\hat{n}$ be the unit normal vector of the interface
$B_{i_{t_{k}}i_{t_{k+1}}}$ at $q(t_{k+1})$, in the direction of from
$D_{i_{t_{k}}}$ to $D_{i_{t_{k+1}}}$. Decompose the pre-impact momentum as
$p(t_{k+1})=p^{t}_{k+1}+p^{n}_{k+1}$, where
$p^{n}_{k+1}=(p(t_{k+1})\cdot\hat{n})\hat{n}$ is the projection onto the
normal direction. Let $\Delta V=V_{i_{t_{k+1}}}-V_{i_{t_{k}}}$.
If the particle has enough normal momentum to transmit through the interface,
i.e.,
$\|p^{n}_{k+1}\|^{2}/2\geq\Delta V,$
a _refraction_ will happen in the sense that the post-impact normal momentum
will be reduced, and the value of $p(t_{k+1})$ will be overwritten by
conservation of energy and tangential momentum:
$p(t_{k+1})=p^{t}_{k+1}+\sqrt{\|p_{k+1}^{n}\|^{2}-2\Delta V}\hat{n}.$ (2)
If there is not enough normal momentum on the other hand, i.e.,
$\|p^{n}_{k+1}\|^{2}/2<\Delta V,$
a _reflection_ will take place in the sense that the post-impact normal
momentum will simply change its sign, and the value of $p(t_{k+1})$ will be
overwritten by
$p(t_{k+1})=p^{t}_{k+1}-p^{n}_{k+1}.$ (3)
However, in both cases, the value of the position, $q(t_{k+1})$, will remain
unchanged. An illustration of _refraction_ and _reflection_ is provided in
Figure 2.
Figure 2: Two ways in which an _impact_ can change the momentum, 2D
illustration.
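The impact rule of Eqs. (2)-(3) can be transcribed directly; a minimal sketch, assuming the momentum $p$ and unit normal $\hat{n}$ are numpy arrays:

```python
import numpy as np

def impact(p, n_hat, dV):
    """Post-impact momentum per Eqs. (2)-(3).

    p:     pre-impact momentum.
    n_hat: unit normal of the interface, pointing from D_{i_{t_k}} to D_{i_{t_{k+1}}}.
    dV:    potential jump Delta V across the interface (in the direction of n_hat).
    """
    p_n = np.dot(p, n_hat) * n_hat       # normal component p^n
    p_t = p - p_n                        # tangential component p^t
    if np.dot(p_n, p_n) / 2.0 >= dV:     # enough normal kinetic energy: refraction, Eq. (2)
        return p_t + np.sqrt(np.dot(p_n, p_n) - 2.0 * dV) * n_hat
    return p_t - p_n                     # otherwise: reflection, Eq. (3)
```

The refracted momentum satisfies $\|p^{\prime}\|^{2}/2+\Delta V=\|p\|^{2}/2$, so energy is conserved across a refraction, and the tangential component is untouched in both cases.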
After _impact_ , which is instantaneous in time (at $t_{k+1}$), the solution
will be continued by _flow_ again, and these two types of behaviors alternate.
###### Remark 2.2.
Unlike _flow_ which produces continuous trajectories, _impact_ creates
discontinuities in $p$ ($q$ is still continuous).
###### Remark 2.3.
It is not very meaningful to state if _impact_ conserves the total energy,
because it is an instantaneous change of momentum, and at the impact, the
position $q(t_{k+1})$ is on an interface where $V(\cdot)$ is undefined.
However, if one considers the composition of an infinitesimal-time pre-impact
flow, the impact, and another infinitesimal-time post-impact flow, then the
composed map conserves the total energy $H$.
Importantly, the above definition of the exact solution requires two
conditions, namely:
###### Condition 1.
In the interested time horizon, the interface hitting position of the
solution, $q(t_{k+1})$, only belongs to one interface
$B_{i_{t_{k}}i_{t_{k+1}}}$ for each $k$. For example, the rare situation of an
_impact_ at the intersection of three pieces illustrated in Fig. 3 is assumed
to never happen, because in this case there is no unique way of defining the
post-impact momentum.
Figure 3: Ambiguity in post-impact momentum due to multiple-interface
intersection.
###### Condition 2.
For any $t$ in the interested time horizon such that the aforedefined $q(t)$
belongs to some interface $B_{ij}$, we have $p(t)\notin T_{q(t)}B_{ij}$. That
is, sliding along an interface for a nonzero amount of time is assumed to
never happen.
###### Remark 2.4.
Because the discontinuity in our problem is in the scalar-valued potential
$V(\cdot)$ rather than in the vector field, we do not face challenges such as
sliding motion in Filippov systems (e.g., [25, 57, 18]), and defining a
solution is easier.
If Cond.2 is not satisfied, i.e., sliding along an interface occurs, one needs
to use Geometrical Theory of Diffraction; see [49].
It is unclear, however, how to relax Condition 1 in a deterministic way. We
feel that how to define a unique deterministic solution when Condition 1 fails
will be problem dependent and requiring additional information about how the
problem is set up. On the other hand, it is possible to define a stochastic
solution by mimicking quantum mechanics; this is beyond the scope of the
current work.
#### 2.2.1 Analytical solution for the quadratic case
When the local Hamiltonian $\hat{H}=\frac{1}{2}p^{T}p+U(q)$ is integrable, the
exact flow of the full problem $H=\hat{H}+V(q)$ is obtainable. As an example,
this subsection will consider the case where $U$ is quadratic, and its exact
solution will be used later for two purposes: (i) as part of a numerical
algorithm (Section 3.3), and (ii) as a benchmark for assessing numerical
accuracy (Section 4.1).
For simplicity, consider one degree-of-freedom problems. Assume without loss
of generality that $V$ corresponds to only 1 interface, i.e.,
$U(q)=\omega^{2}(q-q_{\text{off}})^{2}/2,\qquad V(q)=\begin{cases}\Delta
V,\qquad&q>q_{\text{jump}}\\\ 0,\qquad&q<q_{\text{jump}}\\\
\text{undefined},\qquad&q=q_{\text{jump}}\end{cases}$ (4)
where $\omega,q_{\text{off}},q_{\text{jump}},\Delta V$ are constant scalar
parameters.
Let $q,p$ denote the current position and momentum, and let $Q,P$ denote those
after time $h$. Assume $h$ is small enough such that the interface is
encountered at most once in time $h$ (if $h$ is large, break it into smaller
time lapses and iterate the following). $Q,P$ can be obtained in the following
way:
compute a position proposal $\tilde{q}=q_{\text{off}}+\cos(\omega h)(q-q_{\text{off}})+\sin(\omega h)p/\omega$, which is the new position when no interface crossing happens;
if $(q-q_{\text{jump}})(\tilde{q}-q_{\text{jump}})<0$, i.e., the interface is crossed, then:
* compute the pre-_impact_ momentum $\hat{p}=\sigma\sqrt{\omega^{2}(q-q_{\text{off}})^{2}+p^{2}-\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}}$, where $\sigma=-1$ if $q-q_{\text{jump}}>0$ and $\sigma=1$ if $q-q_{\text{jump}}<0$;
* compute the time to _impact_ $t$: let $t_{1}=\text{atan2}(\hat{p}/\omega,q_{\text{jump}}-q_{\text{off}})-\text{atan2}(p/\omega,q-q_{\text{off}})$, let $t_{2}=t_{1}$ if $t_{1}\geq 0$ and $t_{2}=t_{1}+2\pi$ if $t_{1}<0$, and set $t=(2\pi-t_{2})/\omega$;
* set the post-_impact_ position $\bar{q}=q_{\text{jump}}$ and compute the post-_impact_ momentum $\bar{p}$: if $q-q_{\text{jump}}<0$ (crossing from left to right), then $\bar{p}=\sqrt{\omega^{2}(q-q_{\text{off}})^{2}+p^{2}-2\Delta V-\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}}$ (_refraction_) when $\omega^{2}(q-q_{\text{off}})^{2}+p^{2}>2\Delta V+\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}$, and $\bar{p}=-\sqrt{\omega^{2}(q-q_{\text{off}})^{2}+p^{2}-\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}}$ (_reflection_) otherwise; if instead $q-q_{\text{jump}}>0$ (crossing from right to left), then $\bar{p}=-\sqrt{\omega^{2}(q-q_{\text{off}})^{2}+p^{2}+2\Delta V-\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}}$ (_refraction_) when $\omega^{2}(q-q_{\text{off}})^{2}+p^{2}>-2\Delta V+\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}$, and $\bar{p}=\sqrt{\omega^{2}(q-q_{\text{off}})^{2}+p^{2}-\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}}$ (_reflection_) otherwise;
* _flow_ for the remaining time $h-t$ after the _impact_: $Q=q_{\text{off}}+\cos(\omega(h-t))(\bar{q}-q_{\text{off}})+\sin(\omega(h-t))\bar{p}/\omega$, $P=-\omega\sin(\omega(h-t))(\bar{q}-q_{\text{off}})+\cos(\omega(h-t))\bar{p}$;
otherwise (no _impact_), _flow_ for the full time $h$: $Q=q_{\text{off}}+\cos(\omega h)(q-q_{\text{off}})+\sin(\omega h)p/\omega$, $P=-\omega\sin(\omega h)(q-q_{\text{off}})+\cos(\omega h)p$.
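Collecting the branches above, one exact step can be sketched in a short routine. The following Python sketch is our own (function and variable names are ours, not from any library); it assumes, as in the text, at most one crossing per step:

```python
import math

def flow_quadratic_jump(q, p, h, omega, q_off, q_jump, dV):
    """One exact h-step for H = p^2/2 + omega^2 (q - q_off)^2/2 + V(q),
    where V jumps by dV at q_jump; assumes at most one crossing per step."""
    # position proposal: harmonic flow ignoring the interface
    qt = q_off + math.cos(omega*h)*(q - q_off) + math.sin(omega*h)*p/omega
    if (q - q_jump)*(qt - q_jump) < 0:            # interface is crossed
        sigma = -1.0 if q > q_jump else 1.0       # sign of pre-impact momentum
        E2 = omega**2*(q - q_off)**2 + p**2       # conserved along the flow
        phat = sigma*math.sqrt(E2 - omega**2*(q_jump - q_off)**2)
        # time to impact, from the phase of the (clockwise) rotation
        t1 = (math.atan2(phat/omega, q_jump - q_off)
              - math.atan2(p/omega, q - q_off))
        t2 = t1 if t1 >= 0 else t1 + 2*math.pi
        t = (2*math.pi - t2)/omega
        # impact: refraction if kinetic energy suffices, reflection otherwise
        if q < q_jump:                            # left to right: climbs +dV
            if E2 > 2*dV + omega**2*(q_jump - q_off)**2:
                pbar = math.sqrt(E2 - 2*dV - omega**2*(q_jump - q_off)**2)
            else:
                pbar = -phat                      # reflection reverses momentum
        else:                                     # right to left: drops by dV
            if E2 > -2*dV + omega**2*(q_jump - q_off)**2:
                pbar = -math.sqrt(E2 + 2*dV - omega**2*(q_jump - q_off)**2)
            else:
                pbar = -phat
        s = h - t                                 # flow the remaining time
        Q = q_off + math.cos(omega*s)*(q_jump - q_off) + math.sin(omega*s)*pbar/omega
        P = -omega*math.sin(omega*s)*(q_jump - q_off) + math.cos(omega*s)*pbar
    else:                                         # no impact: full harmonic flow
        Q = qt
        P = -omega*math.sin(omega*h)*(q - q_off) + math.cos(omega*h)*p
    return Q, P
```

Since the step is the exact flow, the total energy $p^{2}/2+U(q)+V(q)$ is conserved up to rounding across both refraction and reflection events.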
## 3 The numerical methods
### 3.1 A first-order in position time-reversible symplectic integrator
A symplectic integrator for (1) can be constructed via the approach of
Hamiltonian splitting and composition [34].
More precisely, denote by $\phi_{1}^{\delta}$ the $\delta$-time flow of the
Hamiltonian $H_{1}=U(q)$, and by $\phi_{2}^{\delta}$ the $\delta$-time flow of
$H_{2}=\frac{1}{2}p^{T}p+V(q)$. Although the exact flow of $H$,
$\phi^{\delta}$, is generally not numerically obtainable, the actions of
$\phi_{1}^{\delta}$ and $\phi_{2}^{\delta}$ can be exactly obtained in
explicit forms. Then appropriate compositions of $\phi_{1}$ and $\phi_{2}$
(e.g., Integrator 1) will provide symplectic approximations of $\phi$ with
vanishing error as $\delta\rightarrow 0$.
More specifically, $\phi_{1}$ is easy to evaluate:
$\phi_{1}^{\delta}:[q,p]\mapsto[q,p-\delta\nabla U(q)]$ (5)
On the other hand, $\phi_{2}^{\delta}[q,p]$ can be obtained by evolving the
exact flow of $H_{2}$ for $\delta$-time. Section 2.2 described how to do so by
alternating _flow_ and _impact_ phases. In fact, $\phi_{2}$ is analytically
computable, because $H_{2}$ does not contain the nonlinear potential
$U(\cdot)$, and each _flow_ phase simply corresponds to free drift in a
straight line.
Therefore, the only nontrivial parts of evaluating $\phi_{2}$ are (i) computing, given vectors $q,p$, how long it takes a particle at position $q$ with momentum (here equal to velocity) $p$ to first hit one of the known interfaces, and (ii) altering the momentum afterwards. How to compute this first hitting time depends on how the interfaces are provided in the problem setup.
##### The demonstrative case in which the interface geometry is analytically
known.
In this case, we can always find an affine transformation of $q$ and an associated linear transformation of $p$ (so that the transformation of both $q$ and $p$ is canonical), such that $p$ is rotated to align with the $x$-axis, $q$ is on the $x$-axis, and the relevant interface (recall: when $\delta$ is small enough, at most one interface is encountered) passes through the origin; denote the unit tangent vector of the interface at the origin by $\hat{t}$.
Then $[Q,P]=\phi_{2}^{\delta}[q,p]$ will be given by the following steps:
first, let
$\tau=-q/p,$ (6)
where the division is understood as a ratio between their $x$-components;
$\tau$ is the time to hit the interface. If $\tau>\delta$ or $\tau<0$, i.e.,
no _impact_ within this step, then let
$Q=q+\delta p,\qquad P=p$ (7)
and $\phi_{2}$ computation is completed; otherwise, let
$q_{a}=0,\qquad p_{a}=p$ (8)
be the result of the pre-impact _flow_, let
$p_{at}=(p_{a}\cdot\hat{t})\hat{t},\quad\text{and}\quad p_{an}=p_{a}-p_{at}$
be the tangential and normal components of the incident momentum at the
interface, denote by $\Delta V=(V(0^{-})-V(0^{+}))\text{sgn}(\hat{x}\cdot q)$
the potential jump across the interface, let
$q_{b}=0,\qquad p_{b}=\begin{cases}p_{at}-p_{an}&\qquad\|p_{an}\|^{2}/2<\Delta
V\\\ p_{at}+\frac{p_{an}}{\|p_{an}\|}\sqrt{\|p_{an}\|^{2}-2\Delta
V}&\qquad\|p_{an}\|^{2}/2>\Delta V\end{cases}$ (9)
be the result of _impact_ (the first case is _reflection_ and the second
_refraction_), and let
$Q=(\delta-\tau)p_{b},\qquad P=p_{b}$ (10)
be the result of the post-impact _flow_. The result of $\phi_{2}^{\delta}$
will be $Q,P$.
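In the transformed frame, the drift–impact–drift computation (6)–(10) is compact enough to write out directly. Below is a minimal Python sketch (names are ours; the affine change of frame is assumed already applied by the caller, and `dV` must be precomputed as $(V(0^{-})-V(0^{+}))\,\text{sgn}(\hat{x}\cdot q)$ as in the text):

```python
import numpy as np

def phi2_affine(q, p, delta, t_hat, dV):
    """phi_2^delta in the transformed frame of Sec. 3.1: q lies on the x-axis,
    p is aligned with the x-axis (nonzero x-component), the interface passes
    through the origin with unit tangent t_hat."""
    tau = -q[0]/p[0]                       # time to hit interface (x-components)
    if tau > delta or tau < 0:             # no impact within this step: (7)
        return q + delta*p, p
    p_at = np.dot(p, t_hat)*t_hat          # tangential component at impact
    p_an = p - p_at                        # normal component
    k = np.dot(p_an, p_an)                 # |p_an|^2
    if k/2 < dV:
        p_b = p_at - p_an                  # reflection: flip normal component
    else:
        p_b = p_at + p_an/np.sqrt(k)*np.sqrt(k - 2*dV)   # refraction
    return (delta - tau)*p_b, p_b          # post-impact free drift, (10)
```

In the refraction case the normal kinetic energy drops by exactly $\Delta V$, while reflection preserves $\|p\|$; both are visible in the assertions below.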
It is not difficult to see that the same calculation works in general
coordinate systems as long as the interface geometries are simple enough such
that the first hitting time $\tau$ is a computable function of $q$ and $p$.
When the interface geometry is too complex to be explicitly and analytically
characterized, $\tau$ can be numerically computed to machine precision
rapidly. See Section 3.2 for details when interfaces are provided by either
(i) level sets of $\mathcal{C}^{1}$ functions, or (ii) discontinuous
$V(\cdot)$ values.
##### Two additional comments.
We further clarify that, to guarantee the accuracy of our numerical methods, $\delta$ needs to be sufficiently small, not only to control the error of the composition, but also to ensure there is at most one _impact_ per $\delta$-sized time step. We will also describe a numerical method that can account for multiple _impacts_ per step; however, whether all _impacts_ within a step can be detected depends on how the interfaces are provided. If the numerical algorithms in Section 3.2 need to be used, there is no guarantee that the earliest _impact_ is captured if $\delta$ is too large.
It is also notable that although the above description of $\phi_{2}^{\delta}$
evaluation might appear ‘discrete’, it is exact. In practice, of course, its
accuracy will be limited by the machine precision. It is possible to
algorithmically alleviate some limitation of machine precision and we refer to
the innovative idea in [37], but in this article we will equate ‘accurate to
machine precision’ with ‘exact’.
With the explicit evaluations of $\phi_{1}$ and $\phi_{2}$ at hand, a numerical integrator can be constructed by composing these maps. The method proposed in this section is a one-step method that uses a constant step size of $\delta$ and a one-step update given by
###### Integrator 1 (1st-order in position, time-reversible, symplectic).
Use one step update
$[q_{n+1},p_{n+1}]=\phi_{1}^{\delta/2}\circ\phi_{2}^{\delta}\circ\phi_{1}^{\delta/2}[q_{n},p_{n}],$
where $\phi_{1}$ is given by (5) and $\phi_{2}$ is given by (6–10).
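The one-step update is a plain composition; a minimal Python sketch (our own names; `grad_U` and `phi2` are user-supplied callables, with `phi2(q, p, delta)` implementing the exact flow (6)–(10)):

```python
def integrator1_step(q, p, delta, grad_U, phi2):
    """One step of Integrator 1: phi1^{delta/2} o phi2^{delta} o phi1^{delta/2}."""
    p = p - 0.5*delta*grad_U(q)      # phi1^{delta/2}: half momentum kick, (5)
    q, p = phi2(q, p, delta)         # phi2^{delta}: exact drift with impacts
    p = p - 0.5*delta*grad_U(q)      # phi1^{delta/2}: half momentum kick
    return q, p
```

When no interface is present, this reduces to the standard kick–drift–kick (Strang) scheme, whose bounded energy error can be checked on a harmonic oscillator.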
##### Symplecticity.
Obviously $\phi_{1}^{\delta}$ defined in (5) is a symplectic map. If
$\phi_{2}^{\delta}$ is also symplectic as it should be (since it is the exact
flow of some Hamiltonian system), then Integrator 1 is symplectic because it
is the composition of symplectic maps. And indeed $\phi_{2}^{\delta}$ is
symplectic. In fact, both the ‘no impact’ case (7) and the _reflection_
/_refraction_ case correspond to symplectic maps. The former is obvious, and
for the latter we have:
###### Theorem 3.1.
Under Conditions 1 and 2, the _reflection_ /_refraction_ case corresponds to
symplectic $\phi_{2}^{\delta}$.
###### Proof.
Substituting (6), (8), (9) into (10) and computing the Jacobian
$J=d[Q,P]/d[q,p]$, one can verify that $J^{T}\Omega J=\Omega$ where
$\Omega=\begin{bmatrix}0&I\\\ -I&0\end{bmatrix}$ for each case of (9). This
shows the preservation of the canonical symplectic 2-form in vector space. ∎
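The condition $J^{T}\Omega J=\Omega$ used in the proof can also be spot-checked numerically with finite differences. The sketch below is our own helper (one degree of freedom, so $\Omega$ is $2\times 2$); it confirms the condition for the exact harmonic flow and shows that rescaling one momentum component alone, as the _impact_ submap (9) does, fails it:

```python
import numpy as np

def jacobian(F, x, eps=1e-6):
    """Central-difference Jacobian of a phase-space map F: R^2 -> R^2."""
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = eps
        J[:, j] = (np.asarray(F(x + e)) - np.asarray(F(x - e)))/(2*eps)
    return J

def is_symplectic(F, x, tol=1e-5):
    """Check J^T Omega J = Omega at the point x."""
    Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
    J = jacobian(F, x)
    return bool(np.allclose(J.T @ Omega @ J, Omega, atol=tol))

omega, h = 2.0, 0.3
def harmonic_flow(x):
    """Exact flow of p^2/2 + omega^2 q^2/2 for time h: a symplectic rotation."""
    q, p = x
    return (np.cos(omega*h)*q + np.sin(omega*h)*p/omega,
            -omega*np.sin(omega*h)*q + np.cos(omega*h)*p)
```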
###### Remark 3.1.
One may worry that when transforming an infinitesimal phase space volume by
$\phi_{2}^{\delta}$, part of it undergoes _reflection_ and another part
undergoes _refraction_ , which would challenge the above case-by-case
demonstration of symplecticity. However, this possibility is ruled out by
Condition 2, because the transition between _reflection_ and _refraction_
corresponds to sliding along the interface.
Worth commenting, however, is that none of the three submaps of $\phi_{2}$, namely pre-impact _flow_ (8), _impact_ (9), or post-impact _flow_ (10), is symplectic (simple algebra shows $J^{T}\Omega J\neq\Omega$). The intuition is: (8) and (10) are drifts, but unlike simple drifts over constant time, which are symplectic, they have additional $q,p$ dependence through $\tau$ (see (6)), which breaks symplecticity; (9), in both the reflection and refraction cases, rescales one component of the momentum without changing anything else, and thus cannot be symplectic. It is therefore a nontrivial fact that their composition, $\phi_{2}^{\delta}$, is nevertheless symplectic.
What will the symplecticity of the proposed integrator imply about its
accuracy in numerical simulations? We do not yet have a theory. Traditionally,
the favorable long time performances of symplectic integrators are supported
by elegant theoretical guarantees such as backward error analysis (e.g., [65,
32, 3, 34]), linear error growth for integrable systems (e.g., [11, 10, 71,
34]), and near-preservation of adiabatic invariants [33, 14, 34]. However,
none of them can be directly applied to the discontinuous Hamiltonian problem, mainly due to the lack of differentiability of both the Hamiltonian and the $p$ trajectory. Nevertheless, the symplectic integrators proposed in this article were still observed to exhibit pleasant long-time accuracy in numerical experiments (see Section 4). This is intuitive because symplecticity would still be desired at least in the _flow_ phases, in which the numerical solution only makes the energy fluctuate rather than drift, as many nonsymplectic methods do, and for most of the time the particle is in a _flow_ phase.
We imagine that a proof of long time accuracy might require a discontinuous
generalization of canonical perturbation theory, which is beyond the scope of
this article.
##### Order of accuracy, time-reversibility, and higher-order splitting
schemes
Since $H=H_{1}+H_{2}$, Integrator 1 can be recognized as a Strang splitting
method [77]. In the traditional (smooth) theory this would suggest that
$\phi^{\delta}=\phi_{1}^{\delta/2}\circ\phi_{2}^{\delta}\circ\phi_{1}^{\delta/2}+\mathcal{O}(\delta^{3})$,
i.e., Integrator 1 is 2nd-order due to a 3rd-order truncation error. However,
that is no longer the case due to discontinuities in $p$ (momentum), and the
method actually only has 2nd-order local truncation error in position (see
Appendix 7.1 for a detailed demonstration).
More generally, a common technique for turning a 1st-order method into 2nd-
order is to compose it with its adjoint (see e.g., Chap II.4 in [34]). It is
true that $\phi_{1}$ and $\phi_{2}$ are self-adjoint (a.k.a. symmetric or
time-reversible) and they respectively form semigroups, and Integrator 1 thus
may be seen as a 1st-order method
$\psi^{\delta/2}:=\phi_{1}^{\delta/2}\circ\phi_{2}^{\delta/2}$ composed with
its adjoint
$\left(\psi^{\delta/2}\right)^{*}=\phi_{2}^{\delta/2}\circ\phi_{1}^{\delta/2}$.
However, the traditional proof of the increased order of
$\psi^{\delta/2}\circ\left(\psi^{\delta/2}\right)^{*}$ relies on Taylor
expansion, which is no longer applicable due to the discontinuity produced by
$\phi_{2}$. Because of this, Integrator 1 loses some order of convergence,
although the symmetric nature of its composition makes the method time-
reversible.
One may also wonder if higher-order splitting schemes will work as designed,
such as triple jump (whose one step update is given by
$\phi_{1}^{\delta\gamma/2}\circ\phi_{2}^{\delta\gamma}\circ\phi_{1}^{\delta(1-\gamma)/2}\circ\phi_{2}^{\delta(1-2\gamma)}\circ\phi_{1}^{\delta(1-\gamma)/2}\circ\phi_{2}^{\delta\gamma}\circ\phi_{1}^{\delta\gamma/2}$,
with $\gamma=1/(2-2^{1/3})$ [15, 26, 80, 87], which is normally 4th-order, or more versions given in, for instance, [63, 12, 35, 5, 6]. Unfortunately, they cannot produce anything beyond 1st-order (in position), again due to the loss of smoothness; the constant in the error bound may be improved, though. A detailed proof by counterexample is analogous to that in Appendix 7.1 and is omitted.
It is also important to note that Integrator 1 only has a 1st-order local truncation error in momentum if the step includes a discontinuous momentum change. Without considering the problem's specific structure, this would imply a 0th-order global error, not only in momentum but also in position, as the two are coupled. Such a ‘bad accuracy’ is actually expected to some extent,
because the momentum exhibits a jump discontinuity across each interface. To
better explain this, suppose the actual time of an interface crossing is
different from the crossing time of a numerical simulation, then no matter how
close these two time points are and how accurate the numerical momentum is
outside the interval limited by these two times, inside this time interval the
numerical error in momentum is $\mathcal{O}(1)$, because there either the
numerical solution or the exact solution has already completed an
$\mathcal{O}(1)$ jump in momentum, but not both.
However, for our specific problem and specific integrator, the ‘bad accuracy’ is actually localized. In all numerical experiments (Section 4), the method still exhibited 1st-order global error in position, and while the momentum did not have 1st-order global error uniformly in time, it was still 1st-order accurate away from interface crossing events. The intuition is that the numerical momentum will still catch up with the exact momentum after both solutions complete the interface crossing, and the effect of this lag will not be much amplified, because we assumed there are no immediately consecutive interface crossings, and _impact_ and _flow_ have to alternate.
### 3.2 Computing the time to _impact_ in complex situations
Given $q,p$ and a generic explicit flow map $\Phi^{h}[q,p]$, which is either provided by an exact solution (the case of (7), used for example in the 1st-order method in Section 3.1) or by a numerical integrator (the case for the high-order construction in Section 3.4), we can accurately and efficiently compute the time $\tau$ at which the position component of $\Phi^{\tau}[q,p]$ exactly hits an interface of interest $B_{ij}$. In [48] a linear interpolation was used to approximate $\tau$. Here we will compute it more accurately. Two cases will be discussed.
To simplify notation, $Q(h)$ will denote the position component of $\Phi^{h}[q,p]$.
* •
When the interface is given by a level set. If $B_{ij}=f^{-1}(\\{0\\})$ for
some known $\mathcal{C}^{1}$ function $f(\cdot)$, and $Q(h)$ is a
$\mathcal{C}^{1}$ function (any exact flow of a continuous system satisfies
this property, and so should any reasonable approximation of an exact flow),
we use Newton’s method to solve for $\tau$ in
$f(Q(\tau))=0.$
The solution is given by the simple iteration
$\tau_{k+1}=\tau_{k}-\frac{f(Q(\tau_{k}))}{f^{\prime}(Q(\tau_{k}))Q^{\prime}(\tau_{k})}.$
Under standard (reasonable) assumptions this iteration converges quadratically, so $\tau$ can be computed to machine precision in a small number of iterations. Since $\tau$ is just 1-dimensional, these iterations are computationally cheap. Initialization can be made efficient too, for instance via the secant approximation
$\tau_{0}=-\delta f(Q(0))/\big{(}f(Q(\delta))-f(Q(0))\big{)},$
derived from $0=f(Q(\tau))\approx f(Q(0))+\frac{\tau}{\delta}\big{(}f(Q(\delta))-f(Q(0))\big{)}$.
This case is demonstrated by the numerical experiment in Section 4.3.
* •
When the interface is only known indirectly through $V(\cdot)$ values, we use the bisection method: initially, let $t_{l}=0$ and $t_{r}=\delta$. At each iteration, let $t_{m}=(t_{l}+t_{r})/2$ and compare $V(Q(t_{m}))$ with $V(Q(t_{l}))$ and $V(Q(t_{r}))$; if it agrees with the left value, the crossing lies in $[t_{m},t_{r}]$, so update $t_{l}\leftarrow t_{m}$; otherwise update $t_{r}\leftarrow t_{m}$. Terminate the iteration when $t_{r}-t_{l}<\epsilon$ for some preset small $\epsilon$, and let $\tau=t_{m}$.
This way, no derivative information is needed, and the estimated $\tau$ values converge linearly, i.e., the error decays exponentially in the number of iterations. This convergence is slower than Newton's, but it is difficult to improve in this setup, because one can only evaluate $V$ at chosen locations and $V$ is piecewise constant. On the other hand, to reach any prefixed accuracy one still only needs logarithmically many iterations. Therefore, machine precision can again be achieved with a small computational budget.
This case is demonstrated by the numerical experiment in Section 4.3.
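Both root-finding variants above fit in a few lines of Python. In the sketch below (our own names; the callables `f`, `df`, `Q`, `dQ`, `V` are user-supplied placeholders for the quantities in the text), the Newton solver uses the secant initialization and the bisection solver assumes exactly one crossing in $(0,\delta)$:

```python
def newton_hit_time(f, df, Q, dQ, delta, tol=1e-14, maxit=50):
    """Solve f(Q(tau)) = 0 for the impact time via Newton's method.
    f, df: the C^1 level-set function and its derivative;
    Q, dQ: the position along the flow and its time derivative."""
    tau = -delta*f(Q(0.0))/(f(Q(delta)) - f(Q(0.0)))   # secant initialization
    for _ in range(maxit):
        step = f(Q(tau))/(df(Q(tau))*dQ(tau))          # Newton step
        tau -= step
        if abs(step) < tol:
            break
    return tau

def bisect_hit_time(V, Q, delta, eps=1e-14):
    """Locate the impact time using only values of the piecewise constant
    potential V; assumes exactly one crossing in (0, delta)."""
    tl, tr = 0.0, delta
    while tr - tl > eps:
        tm = 0.5*(tl + tr)
        if V(Q(tm)) == V(Q(tl)):   # still on the left side: crossing is later
            tl = tm
        else:
            tr = tm
    return 0.5*(tl + tr)
```

Both are one-dimensional root searches, so each call costs only a handful of flow evaluations.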
### 3.3 A third-order in position time-reversible symplectic integrator, when
there is only one interface and one degree of freedom
When $U$ is at least a $\mathcal{C}^{3}$ function, position is one-dimensional
and there is only one interface, we are able to increase the order of
convergence while maintaining the symplecticity. To do so, denote the
interface location by $q_{\text{jump}}$, and we first decompose the smooth
nonlinear potential $U$ into a quadratic approximation centered at
$q_{\text{jump}}$ and a nonlinear correction: let
$\displaystyle U_{\text{quad}}(q)$
$\displaystyle:=U(q_{\text{jump}})+\frac{dU}{dq}(q_{\text{jump}})\big{(}q-q_{\text{jump}}\big{)}+\frac{1}{2}\frac{d^{2}U}{dq^{2}}(q_{\text{jump}})\big{(}q-q_{\text{jump}}\big{)}^{2},$
$\displaystyle U_{\text{corr}}(q)$ $\displaystyle:=U(q)-U_{\text{quad}}(q).$
Then we split the original Hamiltonian as the sum of a discontinuous quadratic
problem and a correction: let
$H_{1}=\frac{1}{2}p^{T}p+V(q)+U_{\text{quad}}(q)\quad\text{and}\quad
H_{2}=U_{\text{corr}}(q)$ (11)
and denote by $\phi_{1}^{\delta}$ and $\phi_{2}^{\delta}$ respectively their
exact solution flows. Then both are symplectic maps and explicitly obtainable:
the latter is a simple drift in the momentum, and the former is given in
Section 2.2.1. More precisely, to convert to notations in Section 2.2.1, it is
easy to see
$\displaystyle U_{\text{quad}}=\frac{1}{2}\omega^{2}(q-q_{\text{off}})^{2}+\text{(an unimportant constant)},$
where $\displaystyle\omega^{2}=U^{\prime\prime}(q_{\text{jump}}),\qquad q_{\text{off}}=q_{\text{jump}}-U^{\prime}(q_{\text{jump}})/U^{\prime\prime}(q_{\text{jump}}).$
Finally, we combine this splitting and the classical idea of triple-jump to
obtain a method whose one step update is given by:
###### Integrator 2 (3rd-order in position, time-reversible, symplectic).
Use one step update
$[q_{n+1},p_{n+1}]=\phi_{2}^{\delta\gamma/2}\circ\phi_{1}^{\delta\gamma}\circ\phi_{2}^{\delta(1-\gamma)/2}\circ\phi_{1}^{\delta(1-2\gamma)}\circ\phi_{2}^{\delta(1-\gamma)/2}\circ\phi_{1}^{\delta\gamma}\circ\phi_{2}^{\delta\gamma/2}[q_{n},p_{n}],$
where $\gamma=1/(2-2^{1/3})$.
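The extraction of $\omega$ and $q_{\text{off}}$ from $U$, and the smallness of $U_{\text{corr}}$ near the interface, can be illustrated directly. A Python sketch (our own names; the derivative callables are assumed supplied by the user):

```python
import math

def quad_split(U, dU, d2U, q_jump):
    """Split U into its quadratic Taylor model at q_jump and a correction;
    returns (omega, q_off, U_corr). Requires d2U(q_jump) > 0."""
    w2 = d2U(q_jump)
    omega = math.sqrt(w2)
    q_off = q_jump - dU(q_jump)/w2            # vertex of the quadratic model
    def U_quad(q):
        return U(q_jump) + dU(q_jump)*(q - q_jump) + 0.5*w2*(q - q_jump)**2
    def U_corr(q):
        return U(q) - U_quad(q)               # O(|q - q_jump|^3) near q_jump
    return omega, q_off, U_corr
```

For $U(q)=q^{4}/4$ and $q_{\text{jump}}=1$ this gives $\omega=\sqrt{3}$ and $q_{\text{off}}=2/3$, and $U_{\text{corr}}$ is third-order small at the interface, which is exactly the scaling invoked in the order discussion below.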
###### Remark 3.2 (order reduction).
This method does not have the 4th-order global error that a continuous analogue would have; it is, however, numerically observed to have 3rd-order global accuracy in terms of $q$, and the global error of $p$ is also 3rd-order at any time point sufficiently far from an _impact_.
###### Remark 3.3 (an alternative approach).
Triple-jump is a classical way of obtaining a 4th-order smooth integrator, but
it is not the only approach. Another classical example is Suzuki’s fractal
[80], which in our case is
$\phi_{2}^{\delta\gamma/2}\circ\phi_{1}^{\delta\gamma}\circ\phi_{2}^{\delta\gamma}\circ\phi_{1}^{\delta\gamma}\circ\phi_{2}^{\delta(1-3\gamma)/2}\circ\phi_{1}^{\delta(1-4\gamma)}\circ\phi_{2}^{\delta(1-3\gamma)/2}\circ\phi_{1}^{\delta\gamma}\circ\phi_{2}^{\delta\gamma}\circ\phi_{1}^{\delta\gamma}\circ\phi_{2}^{\delta\gamma/2}$
where $\gamma=1/(4-4^{1/3})$. It often leads to errors of the same order but with a smaller prefactor, at the expense of using more stages.
Suzuki's fractal does not directly transfer to our non-smooth setup either: if one uses the $\phi_{1}$ and $\phi_{2}$ constructed in Section 3.1 for generic problems, the resulting method remains only 1st-order; with the specialized $\phi_{1}$ and $\phi_{2}$ of this section, the result is 3rd-order (in position, away from _impact_), the same as triple jump (note the order reduction). Symplecticity and reversibility, however, are retained.
###### Remark 3.4 (2nd-order in position, time-reversible, symplectic
integrator).
If a one-step update of
$\phi_{2}^{\delta/2}\circ\phi_{1}^{\delta}\circ\phi_{2}^{\delta/2}$ is used
instead (i.e., Strang splitting), numerical experiments suggest that the
method has a 2nd-order global error; that is, no order was lost. This is an
improvement of the generic method in Section 3.1.
Our intuition behind the reduced order (4$\to$3) is the following: away from interfaces, triple jump and Strang splitting have their regular 5th-order and 3rd-order local truncation errors, and these degrade to lower order only when the current step involves an _impact_. However, to encounter an _impact_, the current position must be $\mathcal{O}(h)$ away from the interface, which means, by Taylor expansion, that $U_{\text{corr}}$ is $\mathcal{O}(h^{3})$ and $U^{\prime}_{\text{corr}}$ is $\mathcal{O}(h^{2})$. This exact scaling allows the splitting (global) accuracy to increase from 1st-order (see Section 3.1) to 3rd-order near the _impact_. If a splitting scheme of order higher than 3 in the smooth setup is used (e.g., triple jump), 3rd-order is retained; Strang splitting is itself only 2nd-order, and therefore its discontinuous version based on (11) is still just 2nd-order, due to the limitation of the non-_impact_ steps.
The aforementioned methods based on triple jump and Strang splitting, or any
symmetric composition of $\phi_{1}$ and $\phi_{2}$, are reversible.
###### Remark 3.5.
The method in this section generalizes to only a small subset of multi-degree-of-freedom problems: if there is only one interface and it is linear, then we can similarly construct $U_{\text{quad}}$ by expanding $U$ in the direction normal to the interface; if there are multiple interfaces and all of them are linear, a Voronoi decomposition may help construct a continuous, piecewise quadratic $U_{\text{quad}}$; however, it is unclear whether the idea in this subsection can work for nonlinear interface(s).
### 3.4 High-order time-reversible but nonsymplectic integrators
Although it is difficult to obtain a high-order symplectic method due to the discontinuity in momentum (see the discussions in Sections 3.1 and 3.3), an event-driven approach can be adopted to construct arbitrarily high-order time-reversible methods. It is not yet clear what the disadvantage of losing symplecticity would be, as numerical experiments will also demonstrate pleasant long-time properties, similar to those given by the long-time error analysis of smooth reversible integrators for reversible integrable systems [34]; note, however, that a theoretical explanation is lacking.
The one-step update of the new method, with step size denoted by $\delta$,
will be constructed based on an $l$-th order symplectic integrator for the
smooth Hamiltonian $\hat{H}=\frac{1}{2}p^{T}p+U(q)$. The construction of such
integrators is well known (see e.g., [34], or [81] for a recap). Denote the
one-step update of this integrator by $\psi^{\delta}$, where $\delta$ is the
step size. We will also assume the explicit availability of a function $I(q)$ that returns $i$ if $q\in D_{i}$, the $i$-th component of the partition of the position space into smooth regions (see its definition below (1)). Then a
high-order integrator for the discontinuous problem $H$ can be constructed
through the following stages:
###### Integrator 3 ($l$-th order in position, time-reversible (if the legacy
method $\psi$ is reversible)).
1. 1.
Given the current d-dimensional position $q$ and momentum $p$, check if an _impact_ will occur in $\delta$ time. To do so, compute $[Q,P]=\psi^{\delta}[q,p]$. If $I(q)\neq I(Q)$, at least one _impact_ occurred.
If no impact occurred, update $[q,p]$ by $[Q,P]$, and go back to Stage 1 for the next step. Otherwise, continue and denote by $B_{ij}$ the relevant interface $B_{I(q)I(Q)}$.
2. 2.
Numerically approximate the time $\tau$ (under the flow of
$\hat{H}$) to _impact_. Depending on how the interface $B$ is provided, this
first hitting time $\tau$ may be explicitly computable, or it needs to be
numerically estimated. For the latter case, Section 3.2 provides details about
$\tau$’s rapid computation to machine precision, when the interface is
provided (i) as a level set of a $\mathcal{C}^{1}$ function, or (ii) in the
general case, purely through values of the discontinuous potential $V(\cdot)$.
3. 3.
Update $q$,$p$ using one step of the continuous symplectic integrator with step size $\tau$; i.e.,
$[q,p]\leftarrow\psi^{\tau}[q,p]$
4. 4.
Compute the action of an _impact_. That is, based on (i) the jump size of $V$
given by the interface $B_{ij}$, and (ii) the normal vector of $B_{ij}$ at
position $q$ from $D_{i}$ to $D_{j}$, perform an instantaneous update of $p$
either via _refraction_ (eq. (2)) or _reflection_ (eq. (3)).
Note: if $B_{ij}$ is only implicitly described by $V$ values, the normal
vector has to be numerically searched, but this search can again be done
efficiently and to machine precision using the bisection method, which will
take poly-log time.
5. 5.
Update $q$,$p$ again using one step of the continuous symplectic integrator, this time with step size $\delta-\tau$; i.e.,
$[q,p]\leftarrow\psi^{\delta-\tau}[q,p]$
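The five stages above can be sketched generically in Python. In this sketch (our own names; `psi`, `region`, `hit_time`, and `impact` are user-supplied placeholders for the legacy integrator $\psi$, the region index $I(\cdot)$, the hitting-time solver of Section 3.2, and the momentum update (2)–(3)):

```python
def integrator3_step(q, p, delta, psi, region, hit_time, impact):
    """One step of the event-driven scheme: detect, flow to impact,
    update momentum, flow for the remaining time."""
    Q, P = psi(q, p, delta)                 # Stage 1: trial step
    if region(Q) == region(q):              # no impact in this step
        return Q, P
    tau = hit_time(q, p, delta)             # Stage 2: time to the interface
    q, p = psi(q, p, tau)                   # Stage 3: flow up to the interface
    p = impact(q, p)                        # Stage 4: refraction/reflection
    return psi(q, p, delta - tau)           # Stage 5: flow the remaining time
```

A one-dimensional free particle crossing a single potential step at the origin already exercises all five stages.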
###### Remark 3.6 (Applicability).
As long as $\delta$ is small enough, there is at most one _impact_ per step
under Conditions 1, 2 and the assumption that $D_{i}$’s are open sets.
###### Remark 3.7 (Non-symplecticity).
(i) If $\tau$ were a constant (independent of $q,p$), then Stages 3 and 5 would both be symplectic updates; however, Stage 4 is not symplectic. (ii) If $\tau$ were the exact hitting time ($q,p$ dependent) and $\psi$ were the exact flow of $\hat{H}$, the new method would be symplectic, but this is unlikely to be attainable because the new method would then be exact. In this case, none of Stages 3, 4, and 5 is symplectic, but their composition is. (iii) For the new method, Stage 4 remains non-symplectic, neither Stage 3 nor Stage 5 is symplectic because $\tau$ depends on $q,p$, and the composition of the stages is generally not symplectic.
###### Remark 3.8 (Order of accuracy).
The accuracy of the new method is numerically observed in experiments to be
$l$-th order (same as the legacy smooth integrator $\psi$), however in a
modified sense that the global error of position is $\mathcal{O}(\delta^{l})$,
and that of momentum is $\mathcal{O}(\delta^{l})$ at time points away from
interface crossings (the accuracy in $p$ is 0th-order near interface
crossings). The intuition is that, during a simulation up to a fixed $\mathcal{O}(1)$ time, there are only finitely many interface crossing events, which means the $\mathcal{O}(\delta^{l})$ error of $\psi$ will be amplified by at most a constant factor.
###### Remark 3.9 (Reversibility).
The new method can be easily checked to be reversible if (i) $\psi$ is
reversible, (ii) the hitting time is solved for exactly, (iii) there is at
most one interface crossing per step.
### 3.5 The adaptive version for safer usages of large timesteps
Depending on the problem, there might be a regime of $\delta$ values that are
small enough for resolving the dynamics generated by the smooth Hamiltonian
$\hat{H}$, however too large in the sense that multiple interface crossings
can be encountered within one step. This can happen, for instance, when an
interface has a large curvature, which may result in consecutive _impact_ s
that are close in time. In this case, the methods proposed above cannot be accurate when the step size $\delta$ is large.
To avoid this deterioration of accuracy, one could of course simply reduce
$\delta$, so that at most one interface crossing occurs per step. However,
this is not the most computationally efficient solution. Instead, we propose
to employ an adaptive time-stepping approach. Our way of introducing
adaptivity will destroy symplecticity, and therefore it will be demonstrated
on the high-order nonsymplectic method in Section 3.4. Most parts will be the
same as before, except for a handful of modifications:
###### Integrator 4 ($l$-th order in position, adaptive).
At the beginning of each step, let $\hat{\delta}=\delta$. Then,
1. 1.
Given the current d-dimensional position $q$ and momentum $p$, check if an _impact_ will occur in $\hat{\delta}$ time. To do so, compute $[Q,P]=\psi^{\hat{\delta}}[q,p]$. If $I(q)\neq I(Q)$, at least one _impact_ occurred.
If no impact occurred, update $[q,p]$ by $[Q,P]$, and the current step is
completed. Otherwise, continue.
2. 2.
Numerically approximate the time $\tau$ (under the flow of $\hat{H}$) to the
first _impact_. Depending on how the interface $B$ is provided, this first
hitting time $\tau$ may be explicitly computable, or it needs to be
numerically estimated. For the latter case, Section 3.2 provides details about
$\tau$’s rapid computation to machine precision, when the interface is
provided (i) as a level set of a $\mathcal{C}^{1}$ function, or (ii) in the
general case, purely through values of the discontinuous potential $V(\cdot)$.
Note that there is no guarantee those numerical estimations really correspond
to the first hitting time, if $\hat{\delta}$ is too large.
3. 3.
Update $q$,$p$ using one step of the continuous symplectic integrator with
step size $\tau$; i.e.,
$[q,p]\leftarrow\psi^{\tau}[q,p]$
4. 4.
Compute the action of an _impact_. That is, based on (i) the jump size of $V$
given by the interface $B_{ij}$, and (ii) the normal vector of $B_{ij}$ at
position $q$ from $D_{i}$ to $D_{j}$, perform an instantaneous update of $p$
either via _refraction_ (eq. (2)) or _reflection_ (eq. (3)).
Note: if $B_{ij}$ is only implicitly described by $V$ values, the normal
vector has to be numerically searched, but this search can again be done
efficiently and to machine precision using the bisection method, which will
take poly-log time.
5. 5.
Update the remaining time using $\hat{\delta}\leftarrow\hat{\delta}-\tau$, and
return to Stage 1.
###### Remark 3.10.
This method no longer requires at most one _impact_ per step, and it is
effective as long as the computed hitting time corresponds to the first
hitting. It is not symplectic. Order of accuracy is numerically observed to be
$l$ in the same sense as before. Reversibility is achieved if (i) $\psi$ is
reversible, and (ii) every time the first (backward) hitting time is exactly
solved for.
The example in Section 4.4 showcases the efficacy of this adaptive method.
There the interface actually has kinks (corresponding to corners). Numerically
these corners are never exactly reached, but _impacts_ near them can be
clustered in time.
## 4 Numerical experiments
### 4.1 Quantification via a quadratic benchmark problem
##### Setup.
Consider a one degree-of-freedom problem where the smooth part of the
potential is a quadratic function and the jump part corresponds to only one
interface, i.e., (4). This is an analytically solvable case (see Sec.2.2.1),
and we chose it so that numerical and exact solutions can be compared to
precisely quantify long time accuracy and numerical order of convergence.
Here we use parameters $\omega=2,q_{\text{off}}=1,\Delta
V=3,q_{\text{jump}}=2$.
Figure 4: A benchmark problem: potential = quadratic + step function. First 4
rows: respectively, 1st-order symplectic, 1st-order non-symplectic, 4th-order
reversible, 4th-order _ir_ reversible (Runge-Kutta based); left: $h=0.01$,
right: $h=0.1$. Last row: penalty method, i.e., 4th-order symplectic
integration of a regularized Hamiltonian; left: $\alpha=10^{4}$, $h=10^{-5}$,
right: $\alpha=10^{3}$, $h=10^{-5}$.
Fig. 4 compares the performances of our 1st-order symplectic Integrator 1
(Sec. 3.1), a non-symplectic version of this 1st-order method, our 4th-order
reversible Integrator 3 (based on a 4th-order reversible smooth integrator)
(Sec. 3.4), an irreversible version of this 4th-order method (based on a
smooth integrator of 4th-order Runge-Kutta), and the penalty method (based on
a regularized Hamiltonian and a 4th-order reversible symplectic integration of
this smooth Hamiltonian). The 3rd-order symplectic integrator in Sec. 3.3 is
excluded because it gives the exact solution for this example. The exact
solution is periodic with period $\approx 3$, and the comparison is over
$T=10^{3}$ which should be considered as a long time.
The penalty method simulates a regularized smooth penalty Hamiltonian
$H(q,p)=\|p\|_{2}^{2}/2+U(q)+\Delta
V\frac{1}{1+\exp\big{(}-\alpha(q-q_{jump})\big{)}}.$ (12)
The simulation uses a 4th-order symplectic integrator based on triple jump
(see, e.g., [34]).
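A minimal sketch of the penalty method: integrate the sigmoid-regularized Hamiltonian (12) with a 4th-order triple-jump composition of Strang steps. The quadratic smooth part, the small illustrative $\alpha$, and the initial condition below are assumptions of this sketch, not the paper's exact setup:

```python
import math

OMEGA, Q_OFF, DV, Q_JUMP = 2.0, 1.0, 3.0, 2.0
ALPHA = 10.0  # illustrative penalty parameter (the paper uses up to 1e4)

def grad_U(q):
    """Gradient of the regularized potential in (12):
    quadratic smooth part plus a sigmoid approximation of the jump."""
    s = 1.0 / (1.0 + math.exp(-ALPHA * (q - Q_JUMP)))
    return OMEGA**2 * (q - Q_OFF) + DV * ALPHA * s * (1.0 - s)

def strang(q, p, h):
    """2nd-order Strang (leapfrog) step: half kick, drift, half kick."""
    p -= 0.5 * h * grad_U(q)
    q += h * p
    p -= 0.5 * h * grad_U(q)
    return q, p

# Triple jump: composing Strang steps with these coefficients gives 4th order.
G = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
COEFFS = (G, 1.0 - 2.0 * G, G)

def triple_jump(q, p, h):
    for c in COEFFS:
        q, p = strang(q, p, c * h)
    return q, p

def energy(q, p):
    s = 1.0 / (1.0 + math.exp(-ALPHA * (q - Q_JUMP)))
    return 0.5 * p**2 + 0.5 * OMEGA**2 * (q - Q_OFF)**2 + DV * s

q, p = 0.5, 0.5        # stays mostly in the smooth region for this sketch
e0 = energy(q, p)
drift = 0.0
for _ in range(2000):  # T = 20 with h = 0.01
    q, p = triple_jump(q, p, 0.01)
    drift = max(drift, abs(energy(q, p) - e0))
```

Note the middle triple-jump coefficient is negative, i.e., the scheme takes a backward substep; this is what the cited construction requires.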
The non-symplectic 1st-order integrator used is simply of forward Euler type,
with one $h$-step update given by
$[q,p]\mapsto[q,p]+(\phi_{1}^{h/2}-id)[q,p]+(\phi_{2}^{h}-id)[q,p]+(\phi_{1}^{h/2}-id)[q,p],$
where $\phi_{1}$ is given by (5), $\phi_{2}$ is given by (6–10), and $id$ is
the identity map. As a reminder and a comparison, the symplectic 1st-order
integrator used here is
$[q,p]\mapsto\phi_{1}^{h/2}\circ\phi_{2}^{h}\circ\phi_{1}^{h/2}[q,p]$.
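The qualitative gap between the additive (forward-Euler-type) and compositional (Strang) updates can already be seen on a plain harmonic model. The drift/kick maps below are stand-ins for the paper's $\phi_{1},\phi_{2}$ (which additionally handle the jump), so this is only an illustrative sketch:

```python
import math

W = 2.0  # frequency of the smooth quadratic part (illustrative)

def phi1(q, p, h):
    """Exact flow of the kinetic part p^2/2: a drift."""
    return q + h * p, p

def phi2(q, p, h):
    """Exact flow of the potential part W^2 q^2 / 2: a kick."""
    return q, p - h * W**2 * q

def euler_type(q, p, h):
    """Additive non-symplectic update:
    [q,p] + (phi1^{h/2}-id) + (phi2^h-id) + (phi1^{h/2}-id), all at [q,p]."""
    q_half, _ = phi1(q, p, h / 2)
    _, p_kick = phi2(q, p, h)
    return q + 2 * (q_half - q), p + (p_kick - p)

def strang_type(q, p, h):
    """Symplectic compositional update phi1^{h/2} o phi2^h o phi1^{h/2}."""
    q, p = phi1(q, p, h / 2)
    q, p = phi2(q, p, h)
    return phi1(q, p, h / 2)

def energy(q, p):
    return 0.5 * p**2 + 0.5 * W**2 * q**2

e0 = energy(1.0, 0.0)
qe, pe = 1.0, 0.0
qs, ps = 1.0, 0.0
for _ in range(10_000):
    qe, pe = euler_type(qe, pe, 0.01)
    qs, ps = strang_type(qs, ps, 0.01)
err_euler = abs(energy(qe, pe) - e0)   # grows geometrically
err_strang = abs(energy(qs, ps) - e0)  # stays a bounded fluctuation
```

The additive update collapses to explicit Euler here, whose energy grows by a factor $(1+h^{2}W^{2})$ per step, while the composition keeps the energy error bounded.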
##### Results.
The left half of Fig. 4 shows that the 1st-order symplectic method and the
4th-order reversible method exhibit linear growth of the solution error and
almost no drift in energy, only fluctuations. Both behaviors are similar to
those of symplectic/reversible integration of smooth integrable systems. In
contrast, the 1st-order non-symplectic method injects too much artificial
energy, and the 4th-order _ir_ reversible method (Runge-Kutta based)
artificially dissipates energy, although the amount is small due to the small
$h$, the high order, and $T$ not being too large. More on long time
performance will follow.
Comparing the first 4 rows of the left and right halves of Fig. 4, which
differ only in step size, one sees consistency with the claimed order of each
method. More on convergence order will follow.
The 5th row shows that the penalty method, even when used with a sufficiently
small step size $h$, such as $h=o(1/\alpha)$, has an error that is only 1st
order in $1/\alpha$.
Figure 5: Super long time performance of the proposed methods. 3 rows:
respectively, 1st-order symplectic, 4th-order reversible, 4th-order
irreversible. The penalty method is computationally unaffordable over this
time span.
To further study these observations, performance over an even longer time span
($T=2\times 10^{5}$) is investigated in Fig. 5. We see a rather irregular, yet
bounded, global error for the 1st-order symplectic method. This should not be
surprising, as classical Hamiltonian backward error analysis no longer applies
due to the discontinuity. The 4th-order irreversible (Runge-Kutta based)
method artificially dissipates energy, and its solution error grows in the way
typical of non-symplectic methods for smooth problems, i.e., exponentially.
The 4th-order reversible (symplectic-integrator based) method seems to exhibit
linear error growth; however, we note a small but definite drift in its energy
error, which is not solely oscillatory. We hope that a symplectic counterpart
would not have this drift, but its design remains an open problem.
Figure 6: Verifications of orders of 1st-order symplectic Integrator 1 and
4th-order reversible Integrator 3 (based on a symplectic reversible smooth
integrator). Root-mean-square errors are computed as
$\sqrt{1/(T/h+1)\sum_{i=0}^{T/h}\left(q^{num}_{i}-q^{exact}(ih)\right)^{2}}$,
$T=1000$.
Fig. 6 confirms that our 1st-order symplectic integrator and 4th-order
reversible integrator are indeed of 1st and 4th order (note that the time
points of _impact_ are few and therefore negligible after the averaging).
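The averaged error in the caption of Fig. 6 is a plain root-mean-square over all sample times; a minimal sketch:

```python
import math

def rms_error(q_num, q_exact):
    """Root-mean-square error over a whole trajectory:
    sqrt( 1/(N+1) * sum_{i=0}^{N} (q_num_i - q_exact_i)^2 ), N = T/h,
    so the lists hold the N+1 samples at times 0, h, ..., T."""
    assert len(q_num) == len(q_exact)
    return math.sqrt(
        sum((a - b) ** 2 for a, b in zip(q_num, q_exact)) / len(q_num)
    )

# Sanity check: a numerical solution off by a constant 1e-3 everywhere
# has RMS error exactly 1e-3.
exact = [math.sin(0.01 * i) for i in range(1001)]
num = [x + 1e-3 for x in exact]
err = rms_error(num, exact)
```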
### 4.2 Improved accuracy when there is only one linear interface: a
nonlinear example
Let’s now consider a problem with smooth and jump potentials, respectively,
$U(q)=(q-q_{c})^{4}/12,\qquad V(q)=\begin{cases}\Delta V,\qquad&q>q_{jump}\\\
0,\qquad&q<q_{jump}\\\ \text{undefined},\qquad&q=q_{jump}\end{cases}.$
Due to the nonlinearity created by $U$, no exact solution is available as a
benchmark to compare against. However, as there is only one linear interface,
the high-order symplectic method in Sec. 3.3 applies.
Figure 7: Long time performances of 1st-, 2nd-, and 3rd-order symplectic
integrators for a nonlinear problem with one linear interface.
Fig. 7 compares the performance of a 3rd-order Integrator 2, a 2nd-order
integrator (Remark 3.4), and a 1st-order Integrator 1, all symplectic. The
results are consistent with the claimed orders of the methods (Fig. 8 further
confirms this), and the 2nd- and 3rd-order versions have much more regular
long time energy behaviors (note: all three are reversible!).
Figure 8: Verifications of the orders of 1st-, 2nd-, and 3rd-order symplectic
methods (Integrator 1, that in Rmk. 3.4, and Integrator 2). Averaged global
errors are root-mean-square errors, computed as
$\sqrt{1/(T/h+1)\sum_{i=0}^{T/h}\left(q^{num}_{i}-q^{benchmark}(ih)\right)^{2}}$,
$T=100$.
In these experiments, $\Delta V=2$, $q_{jump}=0$, $q_{c}=1$. Trajectory errors
are computed against a 3rd-order (Integrator 2) simulation with a tiny step
size ($h=10^{-5}$).
### 4.3 An example in which the interface is given by a level set;
conservation of momentum map
##### Setup.
We now consider an example in which the discontinuous Hamiltonian has a
symmetry. For continuous Hamiltonians, this would imply the conservation of a
corresponding momentum map, due to Noether’s theorem [66]. Moreover, a
symplectic discretization of the continuous Hamiltonian system can inherit
this conservation under nontrivial but reasonable conditions (see [62] Chap.
1.3.3 and 1.4.2 for details). Unfortunately, the analogous results for
discontinuous Hamiltonians are currently unknown. For our specific example,
however, the exact solution would still have a symmetry-based conservation law
in addition to energy conservation, and this section investigates whether
symplectic Integrator 1 numerically captures this conservation too.
This example has 2 degrees of freedom and significant nonlinearity. The smooth
potential is gravitational, $U(q)=-1/\|q\|_{2}$. However, the solution does
not follow the classical 1-body dynamics which corresponds to Keplerian
orbits, as there is an additional nonsmooth potential
$V(q)=\begin{cases}0,\quad&\|q\|<r_{jump}\\\ \Delta
V,\quad&\|q\|>r_{jump}\end{cases}.$
Obviously the discontinuous interface is nonlinear here, and representable as
the zero level set of the function $\|q\|-r_{jump}$.
The Hamiltonian $\|p\|_{2}^{2}/2+U(q)+V(q)$ is invariant under rotations in
the plane, and it is not hard to see that its exact solution conserves the
angular momentum $L:=p\times q=p_{1}q_{2}-p_{2}q_{1}$, as an _impact_ only
changes the radial component of $p$.
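That an _impact_ preserves $L$ can be checked directly: a refraction at the circular interface rescales only the radial component of $p$. The energy-jump relation $p_r'^{2}=p_r^{2}-2\Delta V$ used below is an illustrative assumption for an outgoing crossing:

```python
import math

DV, R_JUMP = 0.125, 1.2

def refract(q, p):
    """Rescale only the radial momentum at the interface ||q|| = R_JUMP,
    per the illustrative energy-jump relation p_r'^2 = p_r^2 - 2*DV."""
    r = math.hypot(*q)
    n = (q[0] / r, q[1] / r)                    # radial unit vector
    p_r = p[0] * n[0] + p[1] * n[1]             # radial component
    t = (p[0] - p_r * n[0], p[1] - p_r * n[1])  # tangential part (unchanged)
    p_r_new = math.copysign(math.sqrt(p_r**2 - 2 * DV), p_r)
    return (t[0] + p_r_new * n[0], t[1] + p_r_new * n[1])

def ang_mom(q, p):
    return p[0] * q[1] - p[1] * q[0]

# A crossing state on the interface with outward radial momentum.
theta = 0.7
q = (R_JUMP * math.cos(theta), R_JUMP * math.sin(theta))
p = (1.0, 0.6)
p_new = refract(q, p)
```

Since $q$ is parallel to the radial direction, the radial part of $p$ contributes nothing to $L$, so $L$ is untouched while the kinetic energy drops by exactly $\Delta V$.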
Meanwhile, note that although the Hamiltonian is invariant under rotations,
its trajectory is not. Even without the jump discontinuity, the solution, as a
Keplerian orbit, is not a circle unless its initial condition is special
enough to lead to zero eccentricity.
(a) orbit projected to the $[q_{1},q_{2}]$ plane
(b) energy error
(c) angular momentum error
(d) errors in angular momentum $L$ and energy $E$ over a super long time; note
these are unavailable for the benchmark method and blow up for the
nonsymplectic method
Figure 9: Conservation of angular momentum in the presence of rotational
symmetry
Fig. 9 compares our 1st-order symplectic simulation (by Integrator 1) with a
benchmark solution that uses $\sim 100\times$ the computational cost, as well
as with a nonsymplectic version of Integrator 1. Parameters: $\Delta V=0.125$,
$r_{jump}=1.2$, $q(0)=[1,0]$, $p(0)=[0,1.4]$.
The nonsymplectic version used is simply of forward Euler type, with one
$h$-step update given by
$[q,p]\mapsto[q,p]+(\phi_{1}^{h}-id)[q,p]+(\phi_{2}^{h}-id)[q,p],$
where $\phi_{1}$ is given by (5), $\phi_{2}$ is given by (6–10), and $id$ is
the identity map. For a fair comparison, the symplectic version used here is
an irreversible variation (based on Lie-Trotter splitting instead of Strang
splitting), with one $h$-step update given by
$[q,p]\mapsto\phi_{1}^{h}\circ\phi_{2}^{h}[q,p].$
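The order gap between the Lie-Trotter and Strang compositions can be verified empirically on a smooth harmonic model (again with stand-in drift/kick maps rather than the paper's $\phi_{1},\phi_{2}$): halving $h$ should roughly halve the Lie-Trotter error but quarter the Strang error.

```python
import math

W = 1.0  # harmonic frequency (illustrative)

def drift(q, p, h):  # exact flow of the kinetic part p^2/2
    return q + h * p, p

def kick(q, p, h):   # exact flow of the potential part W^2 q^2 / 2
    return q, p - h * W**2 * q

def lie_trotter(q, p, h):
    """1st-order, symplectic but irreversible: phi1^h o phi2^h
    (apply the kick, then the drift)."""
    q, p = kick(q, p, h)
    return drift(q, p, h)

def strang(q, p, h):
    """2nd-order, symplectic and reversible Strang composition."""
    q, p = drift(q, p, h / 2)
    q, p = kick(q, p, h)
    return drift(q, p, h / 2)

def final_error(stepper, h, T=1.0):
    q, p = 1.0, 0.0
    for _ in range(round(T / h)):
        q, p = stepper(q, p, h)
    return abs(q - math.cos(W * T))  # exact solution: q(t) = cos(W t)

# Error ratios under h -> h/2: ~2 for Lie-Trotter, ~4 for Strang.
r_lt = final_error(lie_trotter, 0.01) / final_error(lie_trotter, 0.005)
r_st = final_error(strang, 0.01) / final_error(strang, 0.005)
```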
The benchmark solution was generated by a fine symplectic simulation of a
regularized smooth penalty Hamiltonian
$H(q,p)=\|p\|_{2}^{2}/2+U(q)+\Delta
V\frac{1}{1+\exp\big{(}-\alpha(\|q\|-r_{jump})\big{)}}.$ (13)
This simulation uses a 4th-order symplectic integrator based on triple jump
(see, e.g., [34]) with $h=10^{-5}$ and penalty parameter $\alpha=10^{5}$. Both
$\alpha$ and $h$ were tuned to ensure high precision at the lowest possible
computational cost.
##### Results.
Fig. 9(a) illustrates the orbits: our method agrees well with the benchmark
while using a $100\times$ larger step size, and even with a $1000\times$
larger step size its long time error is still moderate (mainly due to
accumulated phase error). For the purpose of visualizing the orbit, the
simulation time is chosen to be $T=500$, which is relatively long: one can see
that the nonsymplectic method gradually loses energy and the particle
eventually drops to an orbit with a large semi-major axis which no longer
crosses the interface.
Fig. 9(b) and 9(c) plot, respectively, how the energy and the angular
momentum computed from the numerical solutions deviate from their true values
over time. As expected, (i) the 1st-order symplectic method exhibits
$\mathcal{O}(h)$ fluctuation in energy, while the nonsymplectic version
accumulates energy error; (ii) angular momentum is numerically conserved; the
small error is due to limited machine precision, and $h=0.001$ gives more
error than $h=0.01$ because it uses $10\times$ more steps, each of which
induces a small arithmetic error. Fig. 9(d) confirms that these deviations
remain bounded over a super long time ($T=100{,}000$).
### 4.4 Sauteed Mushroom: irregular interface geometry and complex dynamics
(trapped or ergodic?)
Finally, we demonstrate the capability of the proposed approach on an example
where both the interface and the corresponding dynamics are complicated. Among
all methods mentioned in this paper, only the adaptive Integrator 4 suits the
investigation of the ergodic aspect of the dynamics, which requires accurate
and affordable long-time simulation, because it can be $\geq 4$th-order and
can capture multiple _impact_ s in a short duration while still using a large
step size. Note that the purpose of this section is slightly different: we
shift from demonstrating the correctness of the proposed method to using it as
a tool that, for the first time, allows us to probe some hard problems and
make conjectures.
More precisely, let’s study a system that complicates the Bunimovich Mushroom,
which is a classical example of Hamiltonian systems in divided phase space
that ‘demonstrates a continuous transition from a completely chaotic system
(stadium) to a completely integrable one (circle)’ [9]. The specific mushroom
we consider is a subset of $\mathbb{R}^{2}$, defined as
$\mathcal{M}=\\{(x,y)\mid x^{2}+y^{2}\leq 2,y\geq 0\\}\cup\\{(x,y)\mid|x|\leq
1,|y|\leq 1,y\leq 0\\}$
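The set $\mathcal{M}$ translates directly into a membership test, which is what an implementation of $V$ needs; a minimal sketch:

```python
def in_mushroom(x, y):
    """Point test for M: a half-disk cap of radius sqrt(2) (y >= 0)
    glued to a square stem [-1, 1] x [-1, 0]."""
    cap = x * x + y * y <= 2.0 and y >= 0.0
    stem = abs(x) <= 1.0 and -1.0 <= y <= 0.0
    return cap or stem
```

For instance, `(1.3, 0.1)` lies in the cap but not above the stem, while `(1.2, -0.5)` falls just outside the stem.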
In the language of this paper, the classical Bunimovich Mushroom considers the
discontinuous Hamiltonian dynamics of a particle with initial condition inside
$\mathcal{M}$, without any smooth potential (i.e., $U(q)=0$), and with an
infinite potential barrier at the mushroom boundary (i.e., $V(q)=0$ if
$q\in\mathcal{M}^{\circ}$, $V(q)=+\infty$ if $q\in\mathcal{M}^{c}$, and
undefined otherwise). The particle travels in a straight line at constant
speed until it hits the boundary, is then reflected, and travels as a free
particle again until the next reflection, with the whole procedure repeating.
Note that the reflections can be arbitrarily frequent due to the sharp corners
(hence our choice of an adaptive integrator). Among many beautiful results,
one was the demonstration that the phase space splits into an integrable
island and a chaotic sea [9], and that initial conditions in one region cannot
percolate into the other.
New to this paper is the addition of a nontrivial smooth potential. We are
interested in how it could change the global dynamics. Specifically, consider
the aforementioned jump potential $V$ and a smooth potential
$U(q)=a((q_{1}-q^{s}_{1})^{4}+(q_{2}-q^{s}_{2})^{4})/4$, where $a$ and vector
$q^{s}$ are constant parameters, corresponding to a vectorial anharmonic
attraction to $q^{s}$.
In all experiments presented here, the sautee source is fixed at
$q^{s}=[-0.5;-2]$, i.e., at the bottom left of the mushroom. The initial
condition is fixed as $q(0)=[1.5;0.2]$, $p(0)=[0;1]$, which lies in the
regular island of the classical mushroom (i.e., $a=0$).
(a) $a=0$, medium time
(b) $a=0.008$, medium time
(c) $a=0.08$, medium time
(d) $a=0$, long time
(e) $a=0.008$, long time
(f) $a=0.08$, long time
Figure 10: Sauteed mushroom: switching between trapped and ergodic dynamics
controlled by the sautee parameter $a$.
Fig. 10 shows distinct dynamics for different values of $a$ (medium time
simulations are provided in addition to long time ones to help visualize the
dynamics). The same initial condition is used for all $a$ values. Although
this initial condition corresponds to a regular island in the classical
Bunimovich mushroom (Fig. 10(a)), when $a=0.008$ the dynamics appears to be
chaotic and ergodic on the entire mushroom. When $a$ takes the larger value of
$0.08$, however, the dynamics no longer seems ergodic, although still possibly
chaotic, and the trajectory remains trapped in part of the mushroom.
Of course, these observations depend on the choice of the sautee source
$q^{s}$ too.
Here, both the legacy code $\psi$ used by Integrator 4 and the integrator used
in the bisection method (Sec. 3.2) for estimating the time to _impact_ are the
4th-order symplectic integrator based on triple jump.
## 5 Discussion and conclusion
The accurate and efficient simulation of Hamiltonian mechanical systems with
discontinuous potentials is an important problem. In fact, a special case,
namely ‘impact/collision/contact integrators’ for potentials with infinitely-
high discontinuous barrier(s), has been extensively studied due to its wide
applications in engineering and the sciences.
The general case where jumps can be finite, however, appears to be
insufficiently studied. To that end, this article, along the lines of [45, 40,
48] in which particle reflection and refraction at the interface are built
into the dynamics, proposes four numerical methods, each with distinct
applicability. For general problems, the first method we recommend trying
(among the four, plus the penalty method) is the adaptive high-order
Integrator 4. This is because of its robustness to complex interface geometry,
together with the fact that it is not yet clear whether or how symplecticity
benefits long time accuracy in the discontinuous setting. This integrator
already has, at least empirically, pleasant long time behaviors, and is
computationally rather efficient too.
Several questions remain open. For example: (i) how can high-order symplectic
integrators be constructed for general discontinuous potentials? Although we
did obtain a 1st-order version for general problems, it suffers severe order
reduction relative to the classical continuous theory, and it is unclear
whether there is an order barrier or simply that a higher-order explicit
version remains to be developed. (ii) What would be the advantage(s) of having
a symplectic method? Backward error analysis, if still applicable, needs to be
completely revamped, and this includes both the modified
equation/Hamiltonian theory and the error propagation analysis.
Moreover, the rich field of ‘impact/collision/contact integrators’ has already
developed a number of brilliant ideas, and we think many of them can be
extrapolated to the more general setting in this article. For example,
stabilization techniques may lead to further improved long term behaviors.
Such (and more) explorations will be left as future work.
Other applications and extensions of these methods include geometrical optics,
where waves can be partially transmitted and reflected [46] at interfaces,
high frequency elastic waves through interfaces [41], surface hopping problems
[44] in computational chemistry, and quantum-classical coupling algorithms
[42, 43].
A side remark is that the sauteed mushroom (Sec. 4.4) is certainly under-
investigated in this article from a dynamical systems perspective, but we hope
it demonstrates the applicability of our numerical integrator and provokes
thinking about its global dynamics and bifurcations in the future.
## 6 Acknowledgment
MT is thankful for the partial support by NSF DMS-1847802 and ECCS-1936776. SJ
was supported by NSFC grant No. 12031013 and by the Strategic Priority
Research Program of Chinese Academy of Sciences Grant No. XDA25010404.
## 7 Appendix
### 7.1 Why does Strang splitting no longer produce a 2nd-order method?
This section gives an example for which Integrator 1 does not have a 3rd-
order local truncation error, even though it is a time-reversible method
constructed via symmetric Strang splitting (which guarantees a 3rd-order
local truncation error in the smooth case).
Consider the quadratic problem given in Section 2.2.1 and denote by $q,p$ the
current position and momentum. Assume $q=q_{\text{jump}}-Ch$ for some bounded
constant $C>0$, and that $p>0$ is sufficiently large, so that an _impact_ will
happen within $h$-time and the interface crossing will be a refraction. In
this case, the exact solution after $h$-time, $Q,P$, is given by
$\displaystyle\hat{p}$
$\displaystyle=\sqrt{\omega^{2}(q-q_{\text{off}})^{2}+p^{2}-\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}}$
$\displaystyle t$
$\displaystyle=\big{(}2\pi-\text{atan2}(\hat{p}/\omega,q_{\text{jump}}-q_{\text{off}})+\text{atan2}(p/\omega,q-q_{\text{off}})\big{)}/\omega$
$\displaystyle\bar{p}$
$\displaystyle=\sqrt{\omega^{2}(q-q_{\text{off}})^{2}+p^{2}-2\Delta
V-\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}}$ $\displaystyle Q$
$\displaystyle=q_{\text{off}}+\cos(\omega(h-t))(q_{\text{jump}}-q_{\text{off}})+\sin(\omega(h-t))\bar{p}/\omega$
$\displaystyle P$
$\displaystyle=-\omega\sin(\omega(h-t))(q_{\text{jump}}-q_{\text{off}})+\cos(\omega(h-t))\bar{p}.$
The numerical solution produced by Integrator 1, denoted by $Q_{1},P_{1}$, is
given by
$\displaystyle\hat{p}_{1}$ $\displaystyle=p-h\omega^{2}/2(q-q_{\text{off}})$
$\displaystyle\tau$ $\displaystyle=(q_{\text{jump}}-q)/\hat{p}_{1}$
$\displaystyle\hat{p}_{2}$ $\displaystyle=\sqrt{\hat{p}_{1}^{2}-2\Delta V}$
$\displaystyle Q_{1}$ $\displaystyle=q_{\text{jump}}+(h-\tau)\hat{p}_{2}$
$\displaystyle P_{1}$
$\displaystyle=\hat{p}_{2}-h\omega^{2}/2(Q_{1}-q_{\text{off}})$
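The numerical update above is straightforward to evaluate; the sketch below (with the Sec. 4.1 parameters and an illustrative $p$) checks the built-in momentum-jump relation $\hat{p}_{2}^{2}=\hat{p}_{1}^{2}-2\Delta V$ and that the step indeed crosses the interface:

```python
import math

OMEGA, Q_OFF, DV, Q_JUMP = 2.0, 1.0, 3.0, 2.0

def integrator1_refraction_step(q, p, h):
    """Integrator 1's h-step in the refracting case, per the formulas above:
    half kick, free drift to the interface, momentum jump losing DV of
    kinetic energy, remaining drift, half kick."""
    p1 = p - 0.5 * h * OMEGA**2 * (q - Q_OFF)    # half kick
    tau = (Q_JUMP - q) / p1                      # time to reach the interface
    p2 = math.sqrt(p1**2 - 2 * DV)               # refraction at the interface
    Q1 = Q_JUMP + (h - tau) * p2                 # drift past the interface
    P1 = p2 - 0.5 * h * OMEGA**2 * (Q1 - Q_OFF)  # half kick
    return Q1, P1, p1, p2, tau

h, C, p = 0.01, 1.0, 5.0  # illustrative momentum, large enough to refract
q = Q_JUMP - C * h
Q1, P1, p1, p2, tau = integrator1_refraction_step(q, p, h)
```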
##### Position.
Its truncation error is only 2nd-order. More precisely, we check how well
$Q_{1}$ approximates $Q$ by letting
$a_{0}=\lim_{h\rightarrow 0}(Q-Q_{1}),\quad a_{1}=\lim_{h\rightarrow
0}\frac{Q-Q_{1}-a_{0}}{h},\quad a_{2}=\lim_{h\rightarrow
0}\frac{Q-Q_{1}-a_{0}-a_{1}h}{h^{2}}.$
Laborious algebra shows that
$a_{0}=0,\quad a_{1}=0,\quad a_{2}=\frac{2C\Delta
V+(Cp-p^{2})\left(p-\sqrt{p^{2}-2\Delta V}\right)}{2p^{3}\sqrt{p^{2}-2\Delta
V}}(C-p)(q_{\text{off}}-q_{\text{jump}})\omega^{2},$
which means $Q=Q_{1}+\mathcal{O}(h^{2})$. However, if $\Delta V=0$, one can
check that $a_{2}=0$, so the truncation error reverts to 3rd order, consistent
with the fact that the integrator is 2nd-order in the smooth case.
##### Momentum.
Its truncation error is only 1st-order. More precisely, we check how well
$P_{1}$ approximates $P$ by letting
$b_{0}=\lim_{h\rightarrow 0}(P-P_{1}),\quad b_{1}=\lim_{h\rightarrow
0}\frac{P-P_{1}-b_{0}}{h},\quad b_{2}=\lim_{h\rightarrow
0}\frac{P-P_{1}-b_{0}-b_{1}h}{h^{2}}.$
Laborious algebra shows that
$\displaystyle b_{0}=0,\quad b_{1}=\frac{p-\sqrt{p^{2}-2\Delta
V}}{2p\sqrt{p^{2}-2\Delta
V}}(2C-p)(q_{\text{off}}-q_{\text{jump}})\omega^{2},$ $\displaystyle
b_{2}=\frac{\omega^{2}}{4p^{3}\alpha^{4}}\left(-2C^{2}\left(4\Delta
V^{2}\left(p\alpha-\beta\right)-2\Delta
Vp^{2}\left(p\alpha-2\beta\right)+p^{3}\left(\alpha-p\right)\beta\right)-4C\Delta
Vp^{2}\alpha^{3}+\Delta Vp^{3}\alpha\beta\right),$
where $\alpha=\sqrt{p^{2}-2\Delta V}$ and
$\beta=\omega^{2}(q_{\text{jump}}-q_{\text{off}})^{2}$.
This means $P=P_{1}+\mathcal{O}(h)$. However, if $\Delta V=0$, one can check
that $b_{1}=0$ and $b_{2}=0$, so the truncation error reverts to 3rd order,
again consistent with the 2nd-order behavior in the smooth case.
## References
* [1] S. J. Aarseth and F. Hoyle, Dynamical evolution of clusters of galaxies, i, Monthly Notices of the Royal Astronomical Society, 126 (1963), pp. 223–255.
* [2] L. Ambrosio, Transport equation and cauchy problem for bv vector fields, Inventiones mathematicae, 158 (2004), pp. 227–260.
* [3] G. Benettin and A. Giorgilli, On the hamiltonian interpolation of near-to-the identity symplectic mappings with application to symplectic integration algorithms, Journal of Statistical Physics, 74 (1994), pp. 1117–1143.
* [4] S. Blanes and F. Casas, A concise introduction to geometric numerical integration, CRC press, 2017.
* [5] S. Blanes, F. Casas, P. Chartier, and A. Murua, Optimized high-order splitting methods for some classes of parabolic equations, Mathematics of Computation, 82 (2013), pp. 1559–1576.
* [6] S. Blanes, F. Casas, A. Farres, J. Laskar, J. Makazaga, and A. Murua, New families of symplectic splitting methods for numerical integration in dynamical astronomy, Applied Numerical Mathematics, 68 (2013), pp. 58–72.
* [7] S. D. Bond and B. J. Leimkuhler, Stabilized integration of hamiltonian systems with hard-sphere inequality constraints, SIAM Journal on Scientific Computing, 30 (2008), pp. 134–147.
* [8] F. A. Bornemann and C. Schütte, Homogenization of Hamiltonian systems with a strong constraining potential, Phys. D, 102 (1997), pp. 57–77.
* [9] L. A. Bunimovich, Mushrooms and other billiards with divided phase space, Chaos: An Interdisciplinary Journal of Nonlinear Science, 11 (2001), pp. 802–808.
* [10] M.-P. Calvo and E. Hairer, Accurate long-term integration of dynamical systems, Appl. Numer. Math., 18 (1995), pp. 95–105.
* [11] M. P. Calvo and J. Sanz-Serna, The development of variable-step symplectic integrators, with application to the two-body problem, SIAM Journal on Scientific Computing, 14 (1993), pp. 936–952.
* [12] F. Castella, P. Chartier, S. Descombes, and G. Vilmart, Splitting methods with complex times for parabolic equations, BIT Numerical Mathematics, 49 (2009), pp. 487–508.
* [13] F. Cirak and M. West, Decomposition contact response (dcr) for explicit finite element dynamics, International Journal for Numerical Methods in Engineering, 64 (2005), pp. 1078–1110.
* [14] D. Cohen, T. Jahnke, K. Lorenz, and C. Lubich, Numerical integrators for highly oscillatory Hamiltonian systems: a review, in Analysis, modeling and simulation of multiscale problems, Springer, Berlin, 2006, pp. 553–576.
* [15] M. Creutz and A. Gocksch, Higher-order hybrid Monte Carlo algorithms, Physical Review Letters, 63 (1989), p. 9.
* [16] P. Deuflhard, R. Krause, and S. Ertel, A contact-stabilized newmark method for dynamical contact problems, International Journal for Numerical Methods in Engineering, 73 (2008), pp. 1274–1290.
* [17] S. Dharmaraja, H. Kesari, E. Darve, and A. J. Lew, Time integrators based on approximate discontinuous hamiltonians, International journal for numerical methods in engineering, 89 (2012), pp. 71–104.
* [18] L. Dieci and L. Lopez, Sliding motion in filippov differential systems: theoretical results and a computational approach, SIAM Journal on Numerical Analysis, 47 (2009), pp. 2023–2051.
* [19] M. Dijkstra, Phase behavior of hard spheres with a short-range yukawa attraction, Physical Review E, 66 (2002), p. 021402.
* [20] R. J. DiPerna and P.-L. Lions, Ordinary differential equations, transport theory and sobolev spaces, Inventiones mathematicae, 98 (1989), pp. 511–547.
* [21] D. Doyen, A. Ern, and S. Piperno, Time-integration schemes for the finite element dynamic Signorini problem, SIAM Journal on Scientific Computing, 33 (2011), pp. 223–249.
* [22] K. Feng, Difference schemes for Hamiltonian formalism and symplectic geometry, J. Comput. Math., 4 (1986), pp. 279–289.
* [23] K. Feng and M. Qin, Symplectic geometric algorithms for Hamiltonian systems, Springer, 2010.
* [24] R. C. Fetecau, J. E. Marsden, M. Ortiz, and M. West, Nonsmooth lagrangian mechanics and variational collision integrators, SIAM Journal on Applied Dynamical Systems, 2 (2003), pp. 381–416.
* [25] A. F. Filippov, Differential equations with discontinuous righthand sides: control systems, vol. 18, Springer Science & Business Media, 2013.
* [26] E. Forest, Canonical integrators as tracking codes, in AIP Conf. Proc., vol. 184, AIP Publishing, 1989, pp. 1106–1136.
* [27] L. M. Fridman, Slow periodic motions with internal sliding modes in variable structure systems, International Journal of Control, 75 (2002), pp. 524–537.
* [28] B. García-Archilla, J. M. Sanz-Serna, and R. D. Skeel, Long-time-step methods for oscillatory differential equations, SIAM J. Sci. Comput., 20 (1999), pp. 930–963.
* [29] R. A. Gerber, Global effects of softening n-body galaxies, The Astrophysical Journal, 466 (1996), p. 724.
* [30] Y. Gonthier, J. McPhee, C. Lange, and J.-C. Piedboeuf, A regularized contact model with asymmetric damping and dwell-time dependent friction, Multibody System Dynamics, 11 (2004), pp. 209–233.
* [31] J. W. Grizzle, C. Chevallereau, R. W. Sinnet, and A. D. Ames, Models, feedback control, and open problems of 3d bipedal robotic walking, Automatica, 50 (2014), pp. 1955–1988.
* [32] E. Hairer, Backward analysis of numerical integrators and symplectic methods, Annals of Numerical Mathematics, 1 (1994), pp. 107–132.
* [33] E. Hairer and C. Lubich, Long-time energy conservation of numerical methods for oscillatory differential equations, SIAM journal on numerical analysis, 38 (2000), pp. 414–441.
* [34] E. Hairer, C. Lubich, and G. Wanner, Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations, Springer, Berlin Heidelberg New York, second ed., 2006.
* [35] E. Hansen and A. Ostermann, High order splitting methods for analytic semigroups exist, BIT Numerical Mathematics, 49 (2009), pp. 527–542.
* [36] D. Heyes, Molecular dynamics simulations of restricted primitive model 1: 1 electrolytes, Chemical Physics, 69 (1982), pp. 155–163.
* [37] N. J. Higham, The accuracy of floating point summation, SIAM Journal on Scientific Computing, 14 (1993), pp. 783–799.
* [38] Y. A. Houndonougbo, B. B. Laird, and B. J. Leimkuhler, A molecular dynamics algorithm for mixed hard-core/continuous potentials, Molecular Physics, 98 (2000), pp. 309–316.
* [39] J. D. Jackson, Classical electrodynamics, John Wiley & Sons, 2007.
* [40] S. Jin, Numerical methods for hyperbolic systems with singular coefficients: well-balanced scheme, hamiltonian preservation, and beyond, Hyperbolic Problems: Theory, Numerics and Applications, 67 (2009), p. 93.
* [41] S. Jin and X. Liao, A hamiltonian-preserving scheme for high frequency elastic waves in heterogeneous media, Journal of Hyperbolic Differential Equations, 3 (2006), pp. 741–777.
* [42] S. Jin and K. A. Novak, A semiclassical transport model for thin quantum barriers, Multiscale Modeling & Simulation, 5 (2006), pp. 1063–1086.
* [43] S. Jin and K. A. Novak, A semiclassical transport model for two-dimensional thin quantum barriers, Journal of Computational Physics, 226 (2007), pp. 1623–1644.
* [44] S. Jin, P. Qi, and Z. Zhang, An eulerian surface hopping method for the schrödinger equation with conical crossings, Multiscale Modeling & Simulation, 9 (2011), pp. 258–281.
* [45] S. Jin and X. Wen, Hamiltonian-preserving schemes for the liouville equation with discontinuous potentials, Communications in Mathematical Sciences, 3 (2005), pp. 285–315.
* [46] S. Jin and X. Wen, A hamiltonian-preserving scheme for the liouville equation of geometrical optics with partial transmissions and reflections, SIAM Journal on Numerical Analysis, 44 (2006), pp. 1801–1828.
* [47] S. Jin and X. Wen, Hamiltonian-preserving schemes for the liouville equation of geometrical optics with discontinuous local wave speeds, Journal of Computational Physics, 214 (2006), pp. 672–697.
* [48] S. Jin, H. Wu, and Z. Huang, A hybrid phase-flow method for hamiltonian systems with discontinuous hamiltonians, SIAM Journal on Scientific Computing, 31 (2009), pp. 1303–1321.
* [49] S. Jin and D. Yin, Computational high frequency waves through curved interfaces via the liouville equation and geometric theory of diffraction, Journal of Computational Physics, 227 (2008), pp. 6106–6139.
* [50] C. Kane, E. A. Repetto, M. Ortiz, and J. E. Marsden, Finite element analysis of nonsmooth contact, Computer methods in applied mechanics and engineering, 180 (1999), pp. 1–26.
* [51] D. M. Kaufman and D. K. Pai, Geometric numerical integration of inequality constrained, nonsmooth hamiltonian systems, SIAM Journal on Scientific Computing, 34 (2012), pp. A2670–A2703.
* [52] H. B. Khenous, P. Laborde, and Y. Renard, Mass redistribution method for finite element contact problems in elastodynamics, European Journal of Mechanics-A/Solids, 27 (2008), pp. 918–932.
* [53] R. Krause and M. Walloth, Presentation and comparison of selected algorithms for dynamic contact based on the newmark scheme, Applied Numerical Mathematics, 62 (2012), pp. 1393–1410.
* [54] T. Laursen and V. Chawla, Design of energy conserving algorithms for frictionless dynamic contact problems, International Journal for Numerical Methods in Engineering, 40 (1997), pp. 863–886.
* [55] T. Laursen and G. Love, Improved implicit integrators for transient impact problems—geometric admissibility within the conserving framework, International Journal for Numerical Methods in Engineering, 53 (2002), pp. 245–274.
* [56] B. Leimkuhler and S. Reich, Simulating Hamiltonian dynamics, vol. 14, Cambridge University Press, 2004.
* [57] R. I. Leine and H. Nijmeijer, Dynamics and bifurcations of non-smooth mechanical systems, vol. 18, Springer Science & Business Media, 2013.
* [58] S. Leyendecker, C. Hartmann, and M. Koch, Variational collision integrator for polymer chains, Journal of Computational Physics, 231 (2012), pp. 3896–3911.
# Supervised quantum machine learning models are kernel methods
Maria Schuld Xanadu, Toronto, ON, M5G 2C8, Canada
###### Abstract
With near-term quantum devices available and the race for fault-tolerant
quantum computers in full swing, researchers have become interested in the question
of what happens if we replace a supervised machine learning model with a
quantum circuit. While such “quantum models” are sometimes called “quantum
neural networks”, it has been repeatedly noted that their mathematical
structure is actually much more closely related to kernel methods: they
analyse data in high-dimensional Hilbert spaces to which we only have access
through inner products revealed by measurements. This technical manuscript
summarises and extends the idea of systematically rephrasing supervised
quantum models as kernel methods. With this, many near-term and fault-
tolerant quantum models can be replaced by a general support vector machine
whose kernel computes distances between data-encoding quantum states. Kernel-
based training is then guaranteed to find better or equally good quantum
models than variational circuit training. Overall, the kernel perspective of
quantum machine learning tells us that the way that data is encoded into
quantum states is the main ingredient that can potentially set quantum models
apart from classical machine learning models.
## I Motivation
Figure 1: Quantum computing and kernel methods are based on a similar
principle. Both have mathematical frameworks in which information is mapped
into and then processed in high-dimensional spaces to which we have only
limited access. In kernel methods, the access to the feature space is
facilitated through kernels or inner products of feature vectors. In quantum
computing, access to the Hilbert space of quantum states is given by
measurements, which can also be expressed by inner products of quantum states.
The mathematical frameworks of quantum computing and kernel methods are
strikingly similar: both describe how information is processed by mapping it
to vectors that live in potentially inaccessibly large spaces, without the
need of ever computing an explicit numerical representation of these vectors
(Figure 1). This similarity is particularly obvious – and as we will see,
useful – in quantum machine learning, an emerging research field that
investigates how quantum computers can learn from data [1, 2, 3]. If the data
is “classical” as in standard machine learning problems, quantum machine
learning algorithms have to encode it into the physical states of quantum
systems. This process is formally equivalent to a feature map that assigns
data to quantum states (see [4, 5] but also earlier notions in [6, 7, 8]).
Inner products of such data-encoding quantum states then give rise to a
kernel, a kind of similarity measure that forms the core concept of kernel
theory.
The natural shape of this analogy sparked more research in the past years, for
example on training generative quantum models [9], constructing kernelised
machine learning models [10], understanding the separation between the
computational complexity of quantum and classical machine learning [5, 11, 12]
or revealing links between quantum machine learning and maximum mean
embeddings [13] as well as metric learning [14]. But despite the growing
amount of literature, a comprehensive review of the link between quantum
computation and kernel theory, as well as its theoretical consequences, is
still lacking. This technical manuscript aims at filling the gap by
summarising, formalising and extending insights scattered across existing
literature and “quantum community folklore”. The central statement of this
line of work is that quantum algorithms optimised with data can fundamentally
be formulated as a classical kernel method whose kernel is computed by a
quantum computer. This statement holds both for the popular class of
classically trained variational near-term algorithms (e.g., [15]) as well as
for more sophisticated fault-tolerant algorithms trained by a quantum computer
(e.g., [6]). It will be apparent that once the right “spaces” for the analysis
are defined (as first proposed in [5]), the theory falls into place itself.
This is in stark contrast to the more popular, but much less natural, attempt
to force quantum theory into the shape of neural networks.111In some sense,
many near-term approaches to quantum machine learning can be understood as a
kernel method with a special kind of kernel, where the model (and possibly
even the kernel [14]) are trained like neural networks. This mix of both
worlds makes quantum machine learning an interesting mathematical playground
beyond the questions of asymptotic speedups that quantum computing researchers
tend to ask by default.
A lot of the results presented here are of theoretical nature, but have
important practical implications. Understanding quantum models as kernel
methods means that the expressivity, optimisation and generalisation behaviour
of quantum models is largely defined by the data-encoding strategy or quantum
embedding which fixes the kernel. Furthermore, it means that while the kernel
itself may explore high-dimensional state spaces of the quantum system,
quantum models can be trained and operated in a low-dimensional subspace. In
contrast to the popular strategy of variational models (where a quantum
algorithm depends on a tractable number of classical parameters that are
optimised), we do not have to worry about finding the right variational
circuit ansatz, or about how to avoid the barren plateau problem [16, 17] – but
pay the price of having to compute pairwise distances between data points.
For classical machine learning research, the kernel perspective can help to
demystify quantum machine learning. A medium-term benefit may also derive from
quantum computing’s extensive tools that describe information in high-
dimensional spaces, and possibly from interesting new kinds of kernels derived
from physics. In the longer term, quantum computers promise access to fast
linear algebra processing capabilities which are in principle able to deliver
the polynomial speed-up that allows kernel methods to process big data without
relying on approximations and heuristics.
The manuscript is aimed at readers coming from either a machine learning or
quantum computing background, but assumes an advanced level of mathematical
knowledge of Hilbert spaces and the like (and there will be a lot of Hilbert
spaces). Instead of giving lengthy introductions to both fields at the
beginning, I will try to explain relevant concepts such as quantum states,
measurements, or kernels as they are needed. Since neither kernel methods nor
quantum theory are easy to digest, the next section will summarise all the
main insights from a high-level point of view to connect the dots right from
the start.
A final side note may be useful: quantum computing researchers love to precede
any concept with the word “quantum”. In a young and explorative discipline
like quantum machine learning (there we go!), this leads to very different
ideas being labeled as “quantum kernels”, “quantum support vector machines”,
“quantum classifiers” or even “quantum neural networks”. To not add to this
state of confusion I will – besides standard technical terms – only use the
“quantum” prefix if a quantity is explicitly computed by a quantum algorithm
(instead of being a mathematical construction in quantum theory). I will
therefore speak of “quantum models” and “quantum kernels”, but try to avoid
constructions like “quantum feature maps” and “quantum reproducing kernel
Hilbert space”.
## II Summary of results
Figure 2: Interpreting a quantum circuit as a machine learning model. After
encoding the data with the routine $S_{x}$, a quantum circuit “processes” the
embedded input, followed by a measurement (left). The processing circuit may
depend on classically trainable parameters, as investigated in near-term
quantum machine learning with variational circuits, or it may consist of
standard quantum routines such as amplitude amplification or quantum Fourier
transforms. The expected outcome of the measurement $\mathcal{M}$ is
interpreted as the model’s prediction, which is deterministic (generative
models, which would consider the measurement samples as outputs, are not
considered here). Since the processing circuit only changes the basis in which
the measurement is taken, it can conceptually be understood as part of the
measurement procedure (right). In this sense, quantum models consist of two
parts, the data encoding/embedding and the measurement. Training a quantum
model is the problem of finding the measurement that minimises a data-
dependent cost function. Note that while the measurement could depend on
trainable parameters I will not consider trainable embedding circuits here.
First, a quick overview of the scope. Quantum algorithms have been proposed
for many jobs in supervised machine learning, but the majority of them replace
the model, such as a classifier or generator, with an algorithm that runs on a
quantum computer. These algorithms – I will call them quantum models – usually
consist of two parts: the data encoding, which maps data inputs $x$ to quantum
states $\left|\phi(x)\right\rangle$ (effectively embedding them into the space
of quantum states), and a measurement $\mathcal{M}$. Statistical properties of
the measurement are then interpreted as the output of the model. Training a
quantum model means finding the measurement that minimises a cost function
that depends on training data. This overall definition is fairly general, and
it includes most near-term supervised quantum machine learning algorithms as
well as many more complex, fault-tolerant quantum algorithms (see Figure 2).
Throughout this manuscript I will interpret the expected measurement – or in
practice, the average over measurement outcomes – as a prediction, but the
results may carry over to other settings, such as generative quantum models
(e.g., [18]). I will also consider the embedding fixed and not trainable as
proposed in [19, 14].
The bridge between quantum machine learning and kernel methods is formed by
the observation that quantum models map data into a high-dimensional feature
space, in which the measurement defines a linear decision boundary as shown in
Figure 3. Note that for this to hold we need to define the data-encoding
density matrices
$\rho(x)=\left|\phi(x)\right\rangle\\!\left\langle\phi(x)\right|$ as the
feature “vectors”222The term feature vectors derives from the fact that they
are elements of a vector space, not that they are vectors in the sense of the
space $\mathbb{C}^{N}$ or $\mathbb{R}^{N}$. instead of the Dirac vectors
$\left|\phi(x)\right\rangle$ (see Section V.1). This was first proposed in
Ref. [5]. Density matrices are alternative descriptions of quantum states as
Hermitian operators which are handy because they can also express probability
distributions over quantum states (in which case they are describing so-called
mixed instead of pure states). We can therefore consider the space of complex
matrices enriched with the Hilbert-Schmidt inner product as the feature space
of a quantum model and state:
> 1\. Quantum models are linear models in the “feature vectors” $\rho(x)$.
As famously known from support vector machines [20], linear models in feature
spaces can be efficiently evaluated and trained if we have access to inner
products of feature vectors, which is a function $\kappa$ in two data points
$x,x^{\prime}$ called the kernel. Kernel theory essentially uses linear
algebra and functional analysis to derive statements about the expressivity,
trainability and generalisation power of linear models in feature spaces
directly from the kernel. For us this means that we can learn a lot about the
properties of quantum models if we study inner products
$\kappa(x,x^{\prime})=\mathrm{tr}\left[\rho(x^{\prime})\rho(x)\right]$, or,
for pure states,
$\kappa(x,x^{\prime})=|\left\langle\phi(x^{\prime})\left|\phi(x)\right.\right\rangle|^{2}$
(see in particular Ref. [12]). I will call these functions “quantum kernels”.
Figure 3: Quantum models as linear models in a feature space. A quantum model
can be understood as a model that maps data into a feature space in which the
measurement defines a linear decision boundary. This feature space is not
identical to the Hilbert space of the quantum system. Instead we can define it
as the space of complex matrices enriched with the Hilbert-Schmidt inner
product – which is the space where density matrices live in.
To understand what kernels can tell us about quantum machine learning, we need
another important concept from kernel theory: the reproducing kernel Hilbert
space (RKHS). An RKHS is an alternative feature space of a kernel – and
therefore reproduces all “observable” behaviour of the machine learning model.
More precisely, it is a feature space of functions $x\to
g_{x}(\cdot)=\kappa(x,\cdot)$, which are constructed from the kernel. The RKHS
contains one such function for every input $x$, as well as their linear
combinations (for example, for the popular Gaussian kernel these linear
combinations are sums of Gaussians centered in the individual data points). In
an interesting – and by no means trivial – twist, these functions happen to be
identical to the linear models in feature space. For quantum machine learning
this means that the space of quantum models and the RKHS of the quantum kernel
contain exactly the same functions (see Section V.2). What we gain is an
alternative representation of quantum models, one that only depends on the
quantity $\mathrm{tr}\left[\rho(x^{\prime})\rho(x)\right]$ (see Figure 4).
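The Gaussian-kernel example above can be made concrete in a few lines. This is a minimal sketch (the kernel, centres and coefficients are illustrative choices of mine, not taken from the manuscript): an RKHS function is a linear combination of kernel functions centred at the individual data points.

```python
import numpy as np

# Illustrative sketch (my own choices): for a Gaussian kernel, the RKHS
# functions g_x(.) = kappa(x, .) are Gaussians centred at the inputs x,
# and the RKHS also contains their linear combinations.

def gaussian_kernel(x, y, gamma=1.0):
    return np.exp(-gamma * (x - y) ** 2)

centres = np.array([-1.0, 0.0, 2.0])  # inputs x_m (illustrative)
alphas = np.array([0.5, -1.0, 0.3])   # expansion coefficients alpha_m

def f(x):
    # a model in the RKHS: f(.) = sum_m alpha_m * kappa(x_m, .)
    return sum(a * gaussian_kernel(c, x) for a, c in zip(alphas, centres))

print(f(0.0))  # a sum of three Gaussians evaluated at x = 0
```

Every function of this form corresponds to one candidate model; which linear combinations minimise a cost is the subject of the next sections.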
Figure 4: Overview of the link between quantum models and kernel methods. The
strategy with which data is encoded into quantum states is a feature map from
the space of data to the feature space $\mathcal{F}$ “of density matrices”
$\rho$. In this space, quantum models can be expressed as a linear model whose
decision boundary is defined by the measurement. According to kernel theory,
an alternative feature space with the same kernel is the RKHS $F$, whose
vectors are functions arising from fixing one entry of the kernel (i.e., the
inner product of data-encoding density matrices). The RKHS is equivalent to
the space of quantum models, which are linear models in the data-encoding
feature space. These connections can be used to study the properties of
quantum models as learners, which turn out to be largely determined by the
kernel, and therefore by the data-encoding strategy.
This alternative representation can be very useful for all sorts of things.
For example, it allows us to study the universality of quantum models as
function approximators by investigating the universality of the RKHS, which in
turn is a property of the quantum kernel. But probably the most important use
is to study optimisation: minimising typical cost functions over the space of
quantum models is equivalent to minimising the same cost over the RKHS of the
quantum kernel (see Section VI.1). The famous representer theorem uses this to
show that “optimal models” (i.e., those that minimise the cost) can be written
in terms of the quantum kernel as
$f_{\rm
opt}(x)=\sum_{m=1}^{M}\alpha_{m}\mathrm{tr}\left[\rho(x^{m})\rho(x)\right]=\mathrm{tr}\left[\left(\sum_{m=1}^{M}\alpha_{m}\rho(x^{m})\right)\rho(x)\right],$
(1)
where $x^{m},m=1,\dots,M$ is the training data and $\alpha_{m}\in\mathbb{R}$
(see Section VI.2). Looking at the expression in the round brackets, this
enables us to say something about optimal measurements for quantum models:
> 2\. Quantum models that minimise typical machine learning cost functions
> have measurements that can be written as “kernel expansions in the data”,
> $\mathcal{M}=\sum_{m}\alpha_{m}\rho(x^{m})$.
In other words, we are guaranteed that the best measurements for machine
learning tasks only have $M$ degrees of freedom $\\{\alpha_{m}\\}$, rather
than the $\mathcal{O}(2^{2n})$ degrees of freedom needed to express a general
measurement on a standard $n$-qubit quantum computer. Moreover, if we include
a regularisation term in the cost function, the kernel entirely determines
which models are penalised or preferred by regularisation. Since the
kernel only depends on the way in which data is encoded into quantum states,
one can conclude that data encoding fully defines the minima of a given cost
function used to train quantum models (see Section VI.3).
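As a numerical illustration of Eq. (1), here is a minimal sketch with a toy single-qubit angle encoding $\left|\phi(x)\right\rangle=(\cos x,\sin x)^{T}$ (an assumption of mine, not the paper's): evaluating the kernel expansion term by term gives the same value as applying the single measurement $\mathcal{M}=\sum_{m}\alpha_{m}\rho(x^{m})$, by linearity of the trace.

```python
import numpy as np

# Sketch with a toy single-qubit angle encoding (my own choice, not the
# paper's): |phi(x)> = (cos x, sin x)^T, rho(x) = |phi(x)><phi(x)|.
# Eq. (1) is then checked numerically: the kernel expansion equals
# tr[M rho(x)] for the single measurement M = sum_m alpha_m rho(x^m).

def phi(x):
    return np.array([np.cos(x), np.sin(x)])

def rho(x):
    v = phi(x)
    return np.outer(v, v)  # pure-state density matrix

x_train = np.array([0.1, 0.7, 1.3])   # toy training inputs x^m
alphas = np.array([1.0, -0.5, 0.25])  # coefficients alpha_m

def f_kernel(x):
    # left-hand side of Eq. (1): sum_m alpha_m tr[rho(x^m) rho(x)]
    return sum(a * np.trace(rho(xm) @ rho(x)) for a, xm in zip(alphas, x_train))

# right-hand side: one fixed "measurement" applied to rho(x)
M = sum(a * rho(xm) for a, xm in zip(alphas, x_train))

def f_measurement(x):
    return np.trace(M @ rho(x))

assert np.isclose(f_kernel(0.4), f_measurement(0.4))  # equal by linearity
```

For this encoding the quantum kernel evaluates to $\cos^{2}(x-x^{\prime})$, so the whole model depends on the data only through these kernel values.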
But how can we find the optimal model in Eq. (1)? We could use the near-term
approach to quantum machine learning and simply train an ansatz, hoping that
it learns the right measurement. But as illustrated in Figure 5, variational
training typically only searches through a small subspace of all possible
quantum models/measurements. This has a good reason: to train a circuit that
can express any quantum model (and is hence guaranteed to find the optimal
one) would require parameters for all $\mathcal{O}(2^{2n})$ degrees of
freedom, which is intractable for all but toy models. However, also here
kernel theory can help: not only is the optimal measurement defined by $M\ll
2^{2n}$ degrees of freedom, finding the optimal measurement has the same
favourable scaling (see Section VI.4) if we switch to a kernel-based training
approach.
> 3\. The problem of finding the optimal measurement for typical machine
> learning cost functions trained with $M$ data samples can be formulated as
> an $M$-dimensional optimisation problem.
If the loss is convex, as is common in machine learning, the optimisation
problem is guaranteed to be convex as well. Hence, under rather general
assumptions, we are guaranteed that the “hard” problem of picking the best
quantum model shown in Eq. (1) is tractable and of a simple structure, even
without reverting to variational heuristics. In addition, convexity – the
property that there is only one global minimum – may help with trainability
problems like the notorious “barren plateaus” [16] in variational circuit
training. If the loss function is the hinge loss, things reduce to a standard
support vector machine with a quantum kernel, which is one of the algorithms
proposed in [4] and [5].
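To make the $M$-dimensional convex problem concrete, here is a sketch with a deliberately simplified setting of mine (a squared loss rather than the hinge loss discussed above, and an illustrative single-qubit kernel): kernel-based training then has the closed-form solution $\alpha=(K+\lambda\mathbb{1})^{-1}y$, where $K$ is the $M\times M$ Gram matrix of quantum-kernel values.

```python
import numpy as np

# Sketch (deliberately simplified: squared loss instead of the hinge loss
# discussed above): kernel-based training is an M-dimensional convex
# problem with the closed-form solution alpha = (K + lambda*1)^(-1) y,
# where K is the M x M Gram matrix of quantum-kernel values.

def quantum_kernel(x, y):
    # a toy single-qubit angle encoding gives kappa(x, y) = cos^2(x - y)
    return np.cos(x - y) ** 2

x_train = np.array([0.0, 0.5, 1.0, 1.5])    # M = 4 training inputs
y_train = np.array([1.0, 0.5, -0.5, -1.0])  # target labels
lam = 0.1                                   # regularisation strength

K = quantum_kernel(x_train[:, None], x_train[None, :])  # Gram matrix
alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

def model(x):
    # f(x) = sum_m alpha_m kappa(x^m, x), cf. Eq. (1)
    return quantum_kernel(x_train, x) @ alpha

print(model(0.25))  # prediction between the first two training inputs
```

On a quantum computer, only the entries of $K$ (and, at prediction time, the kernel values between $x$ and the training inputs) would be estimated from measurements; the $M$-dimensional optimisation itself is classical.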
Figure 5: Kernel-based training vs. variational training. Training a quantum
model as defined here tries to find the optimal measurement $\mathcal{M}_{\rm
opt}$ over all possible quantum measurements. Kernel theory guarantees that in
most cases this optimal measurement will have a representation that is a
linear combination in the training data with coefficients
$\alpha=(\alpha_{1},\dots,\alpha_{M})$. Kernel-based training therefore
optimises over the parameters $\alpha$ directly, effectively searching for the
best model in an $M$-dimensional subspace spanned by the training data (blue).
We are guaranteed that $\mathcal{M}_{\alpha}^{\rm opt}=\mathcal{M}_{\rm opt}$,
and if the loss is convex this is the only minimum, which means that kernel-
based training will find the best measurement out of all measurements.
Variational training parametrises the measurement instead by a general ansatz
that depends on $K$ parameters $\theta=(\theta_{1},\dots,\theta_{K})$, and
tries to find the optimal measurement $\mathcal{M}_{\theta}^{\rm opt}$ in the
subspace explored by the ansatz. This $\theta$-subspace is not guaranteed to
contain the globally optimal measurement $\mathcal{M}_{\rm opt}$, and
optimisation is usually non-convex. We are therefore guaranteed that kernel-
based training finds minima at least as good as those of variational training, but at
the expense of having to compute pairwise distances of data points for
training and classification.
Altogether, approaching quantum machine learning from a kernel perspective can
have profound implications for the way we think about it. Firstly, most
quantum models can be formulated as general support vector machines (in the
sense of [20]) with a kernel evaluated on a quantum computer. As a corollary,
we know that the measurements of optimal quantum models live in a low-
dimensional subspace spanned by the training data, and that we can train in
that space. Kernel-based training is guaranteed to find better minima – or as
phrased here, measurements – than variational circuit training, at the expense
of having to evaluate pair-wise distances of data points in feature space. (In
the conclusion I will discuss how larger fault-tolerant quantum computers
could potentially help with this as well!). Secondly, if the kernel defines
the model, and the data encoding defines the kernel, we have to be very aware
of the data encoding strategy we use in quantum machine learning – a step that
has often taken the backseat over other parts of quantum models. Thirdly,
since quantum models can always be rewritten as a classical model plus quantum
kernel, the separation between classical and quantum machine learning lies
only in the ability of quantum computers to implement classically hard
kernels. The first steps into investigating such separations have been made in
papers like [11, 12], but it is still unclear whether any useful applications
turn out to be enabled solely by quantum computers.
The remainder of the paper will essentially follow the structure of this
synopsis to discuss every statement in more mathematical detail.
## III Quantum computing, feature maps and kernels
Let us start by laying the ground work for the kernel perspective on quantum
machine learning. First I review the link between the process of encoding data
into quantum states and feature maps, and construct the “quantum kernel” that
we will use throughout the manuscript. I will then give some examples of data-
encoding feature maps and quantum kernels, including a general description
that allows us to understand these kernels via Fourier series.
### III.1 Encoding data into quantum states is a feature map
First, a few important concepts from quantum computing, which can be safely
skipped by readers with a background in the field. Those who deem the
explanations to be too casual shall be referred to the wonderful script by
Michael Wolf [21].
> Quantum state. According to quantum theory, the state of a quantum system is
> fully described by a length-1 vector $\left|\psi\right\rangle$ (or, more
> precisely, a ray represented by this vector) in a complex Hilbert space
> $\mathcal{H}$. The notation $\left|\cdot\right\rangle$ can be intimidating,
> but simply reminds of the fact that the Hilbert space has an inner product
> $\langle\cdot,\cdot\rangle$, which for Hilbert spaces describing quantum
> systems is denoted as $\left\langle\cdot\left|\cdot\right.\right\rangle$,
> and that its vectors constitute “the right side” of the inner product.
> Quantum theory textbooks then introduce the left side of the inner product
> as a functional $\left\langle\varphi\right|$ from a dual space
> $\mathcal{H}^{*}$ acting on elements of the original Hilbert space.
> Mainstream quantum computing considers rather simple quantum systems of $n$
> binary subsystems called “qubits”, whose Hilbert space is
> $\mathbb{C}^{2^{n}}$. The dual space $\mathcal{H}^{*}$ can then be thought
> of as the space of complex $2^{n}$-dimensional “row vectors”. A joint
> description of two quantum systems $\left|\psi\right\rangle$ and
> $\left|\varphi\right\rangle$ is expressed by the tensor product
> $\left|\psi\right\rangle\otimes\left|\varphi\right\rangle$.
>
>
> Density matrix. There is an alternative representation of a quantum state as
> a Hermitian operator called a density matrix. The density matrix
> corresponding to a state vector $\left|\psi\right\rangle$ reads
>
> $\rho=\left|\psi\right\rangle\\!\left\langle\psi\right|.$ (2)
>
> If we represent quantum states as vectors in $\mathbb{C}^{2^{n}}$, then the
> corresponding density matrix is given by the outer product of a vector with
> itself – resulting in a matrix (and hence the name). The density matrix
> contains all observable information of $\left|\psi\right\rangle$, but is
> useful to model probability distributions $\\{p_{k}\\}$ over multiple
> quantum states
> $\\{\left|\psi_{k}\right\rangle\\!\left\langle\psi_{k}\right|\\}$ as so-
> called mixed states
>
>
> $\rho=\sum_{k}p_{k}\left|\psi_{k}\right\rangle\\!\left\langle\psi_{k}\right|,$
> (3)
>
> without changing the equations of quantum theory. For simplicity I will
> assume that we are dealing with pure states in the following, but as far as
> I know everything should hold for mixed states as well.
>
>
> Quantum computations. A quantum computation applies physical operations to
> quantum states, which – in analogy to classical circuits – are known as
> “quantum gates”. The gates are applied to a small amount of qubits at a
> time. A collection of quantum gates (possibly followed by a measurement,
> which will be explained below) is called a quantum circuit. Any physical
> operation acting on the quantum system maps from a density matrix $\rho$ to
> another density matrix $\rho^{\prime}$. In the most basic setting, such a
> transformation is described by a unitary operator $U$, with
> $\rho^{\prime}=U\rho U^{\dagger}$, or
> $\left|\psi^{\prime}\right\rangle=U\left|\psi\right\rangle$.333The unitary
> operator is the quantum equivalent of a stochastic matrix which acts on
> vectors that represent discrete probability distributions. Unitary
> operations are length-preserving linear transformations, which is why we
> often say that a unitary “rotates” the quantum state. In the finite-
> dimensional case, a unitary operator can conveniently be represented by a
> unitary matrix, and the evolution of a quantum state becomes a matrix
> multiplication.
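A quick numerical check of the last point (my own, using the standard Hadamard gate): the state-vector and density-matrix descriptions evolve consistently, with $\rho^{\prime}=U\rho U^{\dagger}$ matching $\left|\psi^{\prime}\right\rangle=U\left|\psi\right\rangle$.

```python
import numpy as np

# Minimal numerical check (my own): evolving the state vector with a
# unitary U and then forming the density matrix agrees with conjugating
# the density matrix directly, rho' = U rho U^dagger.

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate
psi = np.array([1.0, 0.0])                            # qubit state |0>

psi_out = H @ psi                # |psi'> = U|psi>
rho = np.outer(psi, psi.conj())  # rho = |psi><psi|
rho_out = H @ rho @ H.conj().T   # rho' = U rho U^dagger

assert np.allclose(rho_out, np.outer(psi_out, psi_out.conj()))
print(rho_out)  # all entries 0.5: the state (|0>+|1>)/sqrt(2)
```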
Consider a physical operation or quantum circuit $U(x)$ that depends on data
$x\in\mathcal{X}$ from some data domain $\mathcal{X}$. For example, if the
domain is the set of all bit strings of length $n$, the quantum circuit may
apply specific operations only if bits are $1$ and do nothing if they are $0$.
After the operation, the quantum state
$\left|\phi(x)\right\rangle=U(x)\left|\psi\right\rangle$ depends on $x$. In
other words, the data-dependent operation “encodes” or “embeds” $x$ into a
vector $\left|\phi(x)\right\rangle$ from a Hilbert space (and I will use both
terms interchangeably). This is a common definition of a feature map in
machine learning, and we can say that any data-dependent quantum computation
implements a feature map.
While from a quantum physics perspective it seems natural – and has been done
predominantly in the early literature – to think of
$x\rightarrow\left|\phi(x)\right\rangle$ as the feature map that links quantum
computing to kernel methods, we will see below that quantum models are not
linear in the Hilbert space of the quantum system [5], which means that the
apparatus of kernel theory does not apply elegantly. Instead, I will define
$x\to\rho(x)$ as the feature map and call it the data-encoding feature map.
Note that consistent with the proposed naming scheme, the term “quantum
feature map” would be misleading, since the result of the feature map is a
state, which without measurement is just a mathematical concept.
###### Definition 1 (Data-encoding feature map).
Consider an $n$-qubit quantum system with states $\left|\psi\right\rangle$, and
let $\mathcal{F}$ be the space of complex-valued $2^{n}\times
2^{n}$-dimensional matrices equipped with the Hilbert-Schmidt inner product
$\langle\rho,\sigma\rangle_{\mathcal{F}}=\mathrm{tr}\\{\rho^{\dagger}\sigma\\}$
for $\rho,\sigma\in\mathcal{F}$. The data-encoding feature map is defined as
the transformation
$\phi:\mathcal{X}\rightarrow\mathcal{F},$ (4)
$\phi(x)=\left|\phi(x)\right\rangle\\!\left\langle\phi(x)\right|=\rho(x),$ (5)
and can be implemented by a data-encoding quantum circuit $U(x)$.
While density matrices of qubit systems live in a subspace of $\mathcal{F}$
(i.e., the space of positive semi-definite trace-class operators), it will be
useful to formally define the data-encoding feature space as above. Firstly,
it makes sure that the feature space is a Hilbert space, and secondly, it
allows measurements to live in the same space [21], which we will need to
define linear models in $\mathcal{F}$. Section III.3 will discuss that this
definition of the feature space is equivalent to the tensor product space of
complex vectors $\left|\psi\right\rangle\otimes\left|\psi^{*}\right\rangle$
used in [12].
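The equivalence with the tensor-product space just mentioned can be checked numerically. This is a sketch with an illustrative single-qubit encoding of mine (not from the manuscript): flattening $\rho(x)$ row by row yields exactly $\left|\phi(x)\right\rangle\otimes\left|\phi(x)^{*}\right\rangle$, so both feature spaces carry the same inner products.

```python
import numpy as np

# Sketch with an illustrative single-qubit encoding (mine, not from the
# manuscript): the data-encoding feature map of Definition 1, plus a check
# that flattening rho(x) gives |phi(x)> tensor |phi(x)*>, so both feature
# spaces carry the same inner products.

def phi(x):
    # a normalised complex amplitude vector
    return np.array([np.cos(x), np.exp(1j * x) * np.sin(x)])

def feature_map(x):
    v = phi(x)
    return np.outer(v, v.conj())  # rho(x) = |phi(x)><phi(x)|

x, y = 0.3, 1.1
rho_x, rho_y = feature_map(x), feature_map(y)

# row-major flattening of rho equals the tensor product phi (x) phi*
vx = np.kron(phi(x), phi(x).conj())
vy = np.kron(phi(y), phi(y).conj())
assert np.allclose(rho_x.reshape(-1), vx)

# Hilbert-Schmidt inner product tr[rho(y)^dagger rho(x)] ...
hs = np.trace(rho_y.conj().T @ rho_x)
# ... equals the ordinary inner product of the tensor-product vectors
assert np.isclose(hs, np.vdot(vy, vx))
```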
### III.2 The data-encoding feature map gives rise to a kernel
Let us turn to kernels.
> Kernels. Unsurprisingly, the central concept of kernel theory is the kernel,
> which in the context of machine learning is defined as a real or complex-
> valued positive definite function in two data points,
> $\kappa:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{K}$, where
> $\mathbb{K}$ can be $\mathbb{C}\text{ or }\mathbb{R}$. For every such
> function we are guaranteed that there exists at least one feature map such
> that inner products of feature vectors $\phi(x)$ from the feature Hilbert
> space $\mathcal{F}$ form the kernel,
> $\kappa(x,x^{\prime})=\langle\phi(x^{\prime}),\phi(x)\rangle_{\mathcal{F}}$.
> Vice versa, every feature map gives rise to a kernel. The importance of
> kernels for machine learning is that they are a means of “computing” in
> feature space without ever accessing or numerically processing the vectors
> $\phi(x)$: everything we need to do in machine learning can be expressed by
> inner products of feature vectors, instead of the feature vectors
> themselves. In the cases that are practically useful, these inner products
> can be computed by a comparably simple function. This makes the computations
> in intractably large spaces tractable.
With the Hilbert-Schmidt inner product from Definition 1 we can immediately
write down the kernel induced by the data-encoding feature map, which we will
call the “quantum kernel” (since it is a function computed by a quantum
computer):
###### Definition 2 (Quantum kernel).
Let $\phi$ be a data-encoding feature map over domain $\mathcal{X}$. A quantum
kernel is the inner product between two data-encoding feature vectors
$\rho(x),\rho(x^{\prime})$ with $x,x^{\prime}\in\mathcal{X}$,
$\kappa(x,x^{\prime})=\mathrm{tr}\left[\rho(x^{\prime})\rho(x)\right]=|\left\langle\phi(x^{\prime})\left|\phi(x)\right.\right\rangle|^{2}.$
(6)
To justify the term “kernel” we need to show that the quantum kernel is indeed
a positive definite function. The quantum kernel is a product of the complex-
valued kernel
$\kappa_{c}(x,x^{\prime})=\left\langle\phi(x^{\prime})\left|\phi(x)\right.\right\rangle$
and its complex conjugate
$\kappa_{c}(x,x^{\prime})^{*}=\left\langle\phi(x)\left|\phi(x^{\prime})\right.\right\rangle=\left\langle\phi(x^{\prime})\left|\phi(x)\right.\right\rangle^{*}$.
Since products of two kernels are known to be kernels themselves, we only have
to show that the complex conjugate of a kernel is also a kernel. For any
$x^{m}\in\mathcal{X},m=1\dots M$, and for any $c_{m}\in\mathbb{C}$, we have
$\displaystyle\sum_{m,m^{\prime}}c_{m}c^{*}_{m^{\prime}}\left(\kappa_{c}(x^{m},x^{m^{\prime}})\right)^{*}$
$\displaystyle=\sum_{m,m^{\prime}}c_{m}c^{*}_{m^{\prime}}\left\langle\phi(x^{m})\left|\phi(x^{m^{\prime}})\right.\right\rangle$
$\displaystyle=\left(\sum_{m}c_{m}\left\langle\phi(x^{m})\right|\right)\left(\sum_{m^{\prime}}c^{*}_{m^{\prime}}\left|\phi(x^{m^{\prime}})\right\rangle\right)$
$\displaystyle=\|\sum_{m}c^{*}_{m}\left|\phi(x^{m})\right\rangle\|^{2}$
$\displaystyle\geq 0,$
which means that the complex conjugate of a kernel is also positive definite.
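This argument can be checked numerically. The following sketch (my own code, not from the text; variable names are made up) draws random complex feature vectors and verifies that the quadratic form above is real and non-negative:

```python
import numpy as np

# Numerical sketch of the argument above: for random complex feature
# vectors, the quadratic form built from the conjugated kernel
# kappa_c(x, x')* = <phi(x)|phi(x')> is real and non-negative.
rng = np.random.default_rng(0)
M, d = 5, 4
phis = rng.normal(size=(M, d)) + 1j * rng.normal(size=(M, d))

# gram_conj[m, m'] = <phi(x^m)|phi(x^m')>
gram_conj = phis.conj() @ phis.T

# sum_{m,m'} c_m c*_{m'} kappa_c(x^m, x^m')* = || sum_m c*_m |phi(x^m)> ||^2
c = rng.normal(size=M) + 1j * rng.normal(size=M)
quad = np.einsum("m,n,mn->", c, c.conj(), gram_conj)
assert abs(quad.imag) < 1e-8 and quad.real >= -1e-12
```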
###### Example III.1.
Figure 6: Example of a data-encoding feature map and quantum kernel. A scalar
input is encoded into a single-qubit quantum state, which is represented as a
point on a Bloch sphere. The embedding uses a feature map facilitated by a
Pauli-X rotation. As can be seen when plotting the quantum states encoding
equidistant points on an interval, the embedding preserves the structure of
the data rather well, but is periodic. The embedding gives rise to a quantum
kernel $\kappa$. When we fix the first input at zero, we can visualise the
distance measure, which is a squared cosine function.
Consider an embedding that encodes a scalar input $x\in\mathbb{R}$ into the
quantum state of a single qubit. The embedding is implemented by the Pauli-X
rotation gate $R_{X}(x)=e^{-i\frac{x}{2}\sigma_{x}}$, where $\sigma_{x}$ is
the Pauli-X operator. The data-encoding feature map is then given by
$\phi:x\to\rho(x)$ with
$\rho(x)=\cos^{2}\left(\frac{x}{2}\right)\left|0\right\rangle\\!\left\langle
0\right|+i\cos\left(\frac{x}{2}\right)\sin\left(\frac{x}{2}\right)\left|0\right\rangle\\!\left\langle
1\right|-i\cos\left(\frac{x}{2}\right)\sin\left(\frac{x}{2}\right)\left|1\right\rangle\\!\left\langle
0\right|+\sin^{2}\left(\frac{x}{2}\right)\left|1\right\rangle\\!\left\langle
1\right|,$ (7)
and the quantum kernel becomes
$\kappa(x,x^{\prime})=\left|\cos\left(\frac{x}{2}\right)\cos\left(\frac{x^{\prime}}{2}\right)+\sin\left(\frac{x}{2}\right)\sin\left(\frac{x^{\prime}}{2}\right)\right|^{2}=\cos\left(\frac{x-x^{\prime}}{2}\right)^{2},$
(8)
which is a translation invariant squared cosine kernel. We will stick with
this simple example throughout the following sections. It is illustrated in
Figure 6.
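A minimal numerical sketch of this example (my own code; the helper names are not from the text) confirms that the Pauli-X embedding reproduces the squared cosine kernel of Eq. (8):

```python
import numpy as np

# Sketch of Example III.1 (my own code): encode a scalar via
# R_X(x) = exp(-i x sigma_x / 2) and check that the quantum kernel
# tr[rho(x') rho(x)] equals the squared cosine kernel.
def R_X(x):
    # R_X(x) written out explicitly in the computational basis
    c, s = np.cos(x / 2), np.sin(x / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def rho(x):
    ket = R_X(x) @ np.array([1.0, 0.0], dtype=complex)  # |phi(x)> = R_X(x)|0>
    return np.outer(ket, ket.conj())

def kappa(x, xp):
    # quantum kernel tr[rho(x') rho(x)] of Eq. (6)
    return np.trace(rho(xp) @ rho(x)).real

for x, xp in [(0.3, 1.7), (2.0, -0.5), (0.0, np.pi)]:
    assert np.isclose(kappa(x, xp), np.cos((x - xp) / 2) ** 2)
```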
### III.3 Making sense of matrix-valued feature vectors
For readers who struggle to think of density matrices as feature vectors, the
data-encoding feature map (and, further below, linear models) may be hard to
visualise. I therefore want to insert a brief comment on an alternative
version of the data-encoding feature map.
For all intents and purposes, the data-encoding feature map can be replaced by
an alternative formulation
$\phi_{v}:\mathcal{X}\rightarrow\mathcal{F}_{v}\subset\mathcal{H}\otimes\mathcal{H}^{*},$
(9)
$\phi_{v}(x)=\left|\phi(x)\right\rangle\otimes\left|\phi^{*}(x)\right\rangle,$
(10)
where $\left|\phi^{*}(x)\right\rangle$ denotes the quantum state created from
applying the complex conjugated (but not transposed) unitary
$\left|\phi^{*}(x)\right\rangle=U^{*}(x)\left|0\right\rangle$ instead of
$\left|\phi(x)\right\rangle=U(x)\left|0\right\rangle$, and $\mathcal{F}_{v}$
is the space of tensor products of a data-encoding Dirac vector with its
complex conjugate. Note that since the complex conjugate of a unitary is a
unitary, the unusual notation $\left|\phi^{*}(x)\right\rangle$ describes a
valid quantum state which can be prepared by a physical circuit. The
alternative feature space $\mathcal{F}_{v}$ is a subspace of the Hilbert space
$\mathcal{H}\otimes\mathcal{H}^{*}$ with the property that inner products are
real. One can show (but I won’t do it here) that $\mathcal{F}_{v}$ is indeed a
Hilbert space.
The inner product in this alternative feature space $\mathcal{F}_{v}$ is the
absolute square of the inner product in the Hilbert space $\mathcal{H}$ of
quantum states,
$\langle\psi|\varphi\rangle_{\mathcal{F}_{v}}=|\left\langle\psi\left|\varphi\right.\right\rangle_{\mathcal{H}}|^{2},$
(11)
and is therefore equivalent to the inner product in $\mathcal{F}$. This
guarantees that it leads to the same quantum kernel.
The subscript $v$ refers to the fact that
$\left|\phi(x)\right\rangle\otimes\left|\phi^{*}(x)\right\rangle$ is a
vectorisation of $\rho(x)$, which reorders the $2^{n}\times 2^{n}$ matrix
elements as a vector in $\mathbb{C}^{4^{n}}$. To see this, let us revisit
Example III.1 from above.
###### Example III.2.
Consider the embedding from Example III.1. The vectorised version of the data-
encoding feature map is given by
$\displaystyle\phi_{v}:x\to\left|\phi(x)\right\rangle\otimes\left|\phi^{*}(x)\right\rangle$
$\displaystyle=\left(\cos\left(\frac{x}{2}\right)\left|0\right\rangle-i\sin\left(\frac{x}{2}\right)\left|1\right\rangle\right)\otimes\left(\cos\left(\frac{x}{2}\right)\left|0\right\rangle+i\sin\left(\frac{x}{2}\right)\left|1\right\rangle\right)$
(12) $\displaystyle=\begin{pmatrix}\cos^{2}\left(\frac{x}{2}\right)\\\
i\cos\left(\frac{x}{2}\right)\sin\left(\frac{x}{2}\right)\\\
-i\cos\left(\frac{x}{2}\right)\sin\left(\frac{x}{2}\right)\\\
\sin^{2}\left(\frac{x}{2}\right)\end{pmatrix},$ (13)
and one can verify easily that the inner product of two such vectors leads to
the same kernel.
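The verification can be done in a few lines; the following sketch (my own code) builds the vectorised feature vectors of Eq. (13) and checks that their ordinary Euclidean inner product reproduces the squared cosine kernel:

```python
import numpy as np

# Sketch of Example III.2 (my own code): the vectorised feature vector
# |phi(x)> (x) |phi*(x)> reproduces the quantum kernel via an ordinary
# Euclidean inner product.
def ket(x):
    return np.array([np.cos(x / 2), -1j * np.sin(x / 2)])  # R_X(x)|0>

def phi_v(x):
    return np.kron(ket(x), ket(x).conj())  # vectorisation of rho(x), Eq. (13)

x, xp = 0.4, 1.9
inner = np.vdot(phi_v(xp), phi_v(x))  # <phi_v(x'), phi_v(x)>
assert np.isclose(inner.imag, 0)      # inner products in F_v are real
assert np.isclose(inner.real, np.cos((x - xp) / 2) ** 2)
```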
Vectorised density matrices are common in the theory of open quantum systems
[22], where they are written as $\left|\rho\right\rrangle$ (see also the Choi-
Jamiolkowski isomorphism). I will adopt this notation in Section VI.2 below to
replace the Hilbert-Schmidt inner product
$\mathrm{tr}\left[\rho^{\dagger}\sigma\right]$ with
$\left\llangle\rho\left|\sigma\right.\right\rrangle$, which can be more
illustrative at times. Note that the vectorised feature map, as opposed to
Definition 1, cannot capture mixed quantum states and is therefore less
powerful.
## IV Examples of quantum kernels
encoding | kernel $\kappa(x,x^{\prime})$
---|---
basis encoding | $\delta_{x,x^{\prime}}$
amplitude encoding | $|\mathbf{x}^{\dagger}\mathbf{x}^{\prime}|^{2}$
repeated amplitude encoding | $(|\mathbf{x}^{\dagger}\mathbf{x}^{\prime}|^{2})^{r}$
rotation encoding | $\prod_{k=1}^{N}|\cos(x^{\prime}_{k}-x_{k})|^{2}$
coherent state encoding | $e^{-|\mathbf{x}-\mathbf{x}^{\prime}|^{2}}$
general near-term encoding | $\sum_{s,t\in\Omega}e^{i\mathbf{s}\mathbf{x}}e^{i\mathbf{t}\mathbf{x}^{\prime}}c_{\mathbf{s},\mathbf{t}}$
Table 1: Overview of data encoding strategies used in the literature and their
quantum kernels. Where bold notation is used, the input domain is assumed to
be $\mathcal{X}\subseteq\mathbb{R}^{N}$.
To fill the definition of the quantum kernel with life, let us have a look at
typical information encoding strategies or data embeddings in quantum machine
learning, and the kernels they give rise to (following [4], and see Table 1).
Note that it has been shown that there are kernels that cannot be efficiently
computed on classical computers [11] (the argument essentially defines a
feature map based on a computation that is conjectured by quantum computing
research to be classically hard). As important as such results are, the
question of quantum kernels that are actually useful for every-day problems is
still wide open.
### IV.1 Data encoding that relates to classical kernels
The following strategies to encode data all have resemblance to kernels from
the classical machine learning literature. This means that, sometimes up to an
absolute square value, we can identify them with standard kernels such as the
polynomial or Gaussian kernel. These kernels are plotted in Figure 7 using
simulations of quantum computations implemented in the quantum machine
learning software library PennyLane [23]. Note that I switch to bold notation
when the input space is $\mathbb{C}^{N}$ or $\mathbb{R}^{N}$.
Basis encoding. Basis encoding is possibly the most common information
encoding strategy in qubit-based quantum computing. Inputs $x\in\mathcal{X}$
are assumed to be binary strings of length $n$, and
$\mathcal{X}=\\{0,1\\}^{n}$. Every binary string has a unique integer
representation $i_{x}=\sum_{k=0}^{n-1}2^{k}x_{k}$. The data-encoding feature
map maps the binary string to a computational basis state,
$\phi:x\rightarrow\left|i_{x}\right\rangle\\!\left\langle i_{x}\right|.$ (14)
The quantum kernel is given by the Kronecker delta
$\kappa(x,x^{\prime})=|\left\langle
i_{x^{\prime}}\left|i_{x}\right.\right\rangle|^{2}=\delta_{x,x^{\prime}},$
(15)
which is of course a very strict similarity measure on input space, and
arguably not the best choice of data encoding for quantum machine learning
tasks. Basis encoding requires $\mathcal{O}(n)$ qubits.
Amplitude encoding. Amplitude encoding assumes that
$\mathcal{X}=\mathbb{C}^{2^{n}}$, and that the inputs are normalised as
$\|\mathbf{x}\|^{2}=\sum_{i}|x_{i}|^{2}=1$. The data-encoding feature map
associates each input with a quantum state whose amplitudes in the
computational basis are the elements in the input vector,
$\phi:\mathbf{x}\rightarrow\left|\mathbf{x}\right\rangle\\!\left\langle\mathbf{x}\right|=\sum_{i,j=1}^{N}x_{i}x^{*}_{j}\left|i\right\rangle\\!\left\langle
j\right|.$ (16)
This data-encoding strategy leads to an identity feature map, which can be
implemented by a non-trivial quantum circuit (for obvious reasons also known
as “arbitrary state preparation”), which takes time $\mathcal{O}(2^{n})$ [24].
The quantum kernel is the absolute square of the linear kernel
$\kappa(\mathbf{x},\mathbf{x}^{\prime})=|\left\langle\mathbf{x}^{\prime}\left|\mathbf{x}\right.\right\rangle|^{2}=|\mathbf{x}^{\dagger}\mathbf{x}^{\prime}|^{2}.$
(17)
It is obvious that this quantum kernel does not add much power to a linear
model in the original feature space, and it is more of interest for
theoretical investigations that want to eliminate the effect of the feature
map. Amplitude encoding requires $\mathcal{O}(n)$ qubits.
Repeated amplitude encoding. Amplitude encoding can be repeated $r$ times,
$\phi:\mathbf{x}\rightarrow\left|\mathbf{x}\right\rangle\\!\left\langle\mathbf{x}\right|\otimes\cdots\otimes\left|\mathbf{x}\right\rangle\\!\left\langle\mathbf{x}\right|$
(18)
to get powers of the quantum kernel in amplitude encoding
$\kappa(\mathbf{x},\mathbf{x}^{\prime})=(|\left\langle\mathbf{x}^{\prime}\left|\mathbf{x}\right.\right\rangle|^{2})^{r}=(|(\mathbf{x}^{\prime})^{\dagger}\mathbf{x}|^{2})^{r}.$
(19)
A constant non-homogeneity can be added by extending the original input with
constant dummy features. Repeated amplitude encoding requires
$\mathcal{O}(rn)$ qubits.
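A quick numerical sketch (my own code, with random hypothetical inputs) confirms that $r$ tensor copies of an amplitude-encoded state raise the kernel to the $r$-th power:

```python
import numpy as np

# Sketch (my own code, random hypothetical inputs): r tensor copies of an
# amplitude-encoded state raise the quantum kernel to the r-th power.
rng = np.random.default_rng(1)

def normalise(v):
    return v / np.linalg.norm(v)

x = normalise(rng.normal(size=4) + 1j * rng.normal(size=4))
xp = normalise(rng.normal(size=4) + 1j * rng.normal(size=4))

base = np.abs(np.vdot(x, xp)) ** 2  # amplitude-encoding kernel |x^dag x'|^2

def repeat(v, r):
    out = v
    for _ in range(r - 1):
        out = np.kron(out, v)  # |x> (x) ... (x) |x>, r copies
    return out

r = 3
repeated = np.abs(np.vdot(repeat(x, r), repeat(xp, r))) ** 2
assert np.isclose(repeated, base ** r)
```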
Rotation encoding. Rotation encoding is a qubit-based embedding that assumes
$\mathcal{X}=\mathbb{R}^{n}$ (where $n$ is again the number of qubits) without
any normalisation condition. Since it is $2\pi$-periodic one may want to limit
$\mathbb{R}^{n}$ to the hypercube $[0,2\pi]^{n}$. The $i$th feature
$x_{i}$ is encoded into the $i$th qubit via a Pauli rotation. For example, a
Pauli-Y rotation puts the qubit into state
$\left|q_{i}(x_{i})\right\rangle=\cos(x_{i})\left|0\right\rangle+\sin(x_{i})\left|1\right\rangle$.
The data-encoding feature map is therefore given by
$\phi:\mathbf{x}\rightarrow\left|\phi(\mathbf{x})\right\rangle\\!\left\langle\phi(\mathbf{x})\right|\text{
with
}\left|\phi(\mathbf{x})\right\rangle=\sum_{q_{1},\dots,q_{n}=0}^{1}\prod_{k=1}^{n}\cos(x_{k})^{1-q_{k}}\sin(x_{k})^{q_{k}}\left|q_{1},\dots,q_{n}\right\rangle,$
(20)
and the corresponding quantum kernel is related to the cosine kernel:
$\kappa(\mathbf{x},\mathbf{x}^{\prime})=\prod_{k=1}^{n}|\sin x_{k}\sin
x^{\prime}_{k}+\cos x_{k}\cos
x^{\prime}_{k}|^{2}=\prod_{k=1}^{n}|\cos(x_{k}-x^{\prime}_{k})|^{2}.$ (21)
Rotation encoding requires $\mathcal{O}(n)$ qubits.
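The product structure of Eq. (21) can be checked directly (my own sketch; the per-qubit states are built as $\cos(x_{k})\left|0\right\rangle+\sin(x_{k})\left|1\right\rangle$):

```python
import numpy as np

# Sketch (my own code): rotation encoding with per-qubit states
# cos(x_k)|0> + sin(x_k)|1> gives the product-of-cosines kernel of Eq. (21).
def feature(xs):
    state = np.array([1.0])
    for xk in xs:
        state = np.kron(state, np.array([np.cos(xk), np.sin(xk)]))
    return state

x = np.array([0.3, 1.2, -0.7])
xp = np.array([0.9, 0.1, 2.4])
kernel = np.vdot(feature(x), feature(xp)) ** 2
assert np.isclose(kernel, np.prod(np.cos(x - xp) ** 2))
```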
Coherent state encoding. Coherent states are known in the field of quantum
optics as a description of light modes. Formally, they are superpositions of
so called Fock states, which are basis states from an infinite-dimensional
discrete basis
$\\{\left|0\right\rangle,\left|1\right\rangle,\left|2\right\rangle,...\\}$,
instead of the binary basis of qubits. A coherent state has the form
$\left|\alpha\right\rangle=e^{-\frac{|\alpha|^{2}}{2}}\sum\limits_{k=0}^{\infty}\frac{\alpha^{k}}{\sqrt{k!}}\left|k\right\rangle,$
(22)
for $\alpha\in\mathbb{C}$. Encoding a real scalar input $x_{i}\in\mathbb{R}$
into a coherent state $\left|\alpha_{x_{i}}\right\rangle$, corresponds to a
data-encoding feature map with an infinite-dimensional feature space,
$\phi:x_{i}\rightarrow\left|\alpha_{x_{i}}\right\rangle\\!\left\langle\alpha_{x_{i}}\right|,\text{
with
}\left|\alpha_{x_{i}}\right\rangle=e^{-\frac{|x_{i}|^{2}}{2}}\sum\limits_{k=0}^{\infty}\frac{x_{i}^{k}}{\sqrt{k!}}\left|k\right\rangle.$
(23)
We can encode a real vector $\mathbf{x}=(x_{1},...,x_{n})$ into $n$ joint
coherent states,
$\left|\alpha_{\mathbf{x}}\right\rangle\\!\left\langle\alpha_{\mathbf{x}}\right|=\left|\alpha_{x_{1}}\right\rangle\\!\left\langle\alpha_{x_{1}}\right|\otimes\dots\otimes\left|\alpha_{x_{n}}\right\rangle\\!\left\langle\alpha_{x_{n}}\right|.$
(24)
The quantum kernel is a Gaussian kernel [7]:
$\kappa(\mathbf{x},\mathbf{x}^{\prime})=\left|e^{-\left(\frac{|\mathbf{x}|^{2}}{2}+\frac{|\mathbf{x}^{\prime}|^{2}}{2}-\mathbf{x}^{T}\mathbf{x}^{\prime}\right)}\right|^{2}=e^{-|\mathbf{x}-\mathbf{x}^{\prime}|^{2}}.$
(25)
Preparing coherent states can be done with displacement operations in quantum
photonics.
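For scalar inputs this can be verified numerically; the following sketch (my own code, truncating the Fock basis at a hypothetical cutoff of 40) recovers the Gaussian kernel of Eq. (25):

```python
import numpy as np
from math import factorial

# Sketch (my own code): a Fock basis truncated at a hypothetical cutoff of
# 40 reproduces the Gaussian kernel e^{-|x - x'|^2} of Eq. (25) for scalars.
def coherent(alpha, cutoff=40):
    ks = np.arange(cutoff)
    amps = alpha ** ks / np.sqrt([float(factorial(k)) for k in ks])
    return np.exp(-abs(alpha) ** 2 / 2) * amps

x, xp = 0.6, -0.3
kernel = np.abs(np.vdot(coherent(x), coherent(xp))) ** 2
assert np.isclose(kernel, np.exp(-(x - xp) ** 2))
```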
Figure 7: Quantum kernels of different data embeddings. Plots of some of the
functions $\kappa(\tilde{x},x)$ for the kernels introduced above, using
$\mathbf{x}=(x_{1},x_{2})\in\mathbb{R}^{2}$ for illustration purposes. The
first entry $\tilde{\mathbf{x}}$ is fixed at $\tilde{\mathbf{x}}=(0,0)$ for
basis and rotation embedding, and at
$\tilde{\mathbf{x}}=(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ for the
variations of amplitude embedding. The second value is depicted as the x-y
plane.
### IV.2 Fourier representation of the quantum kernel
It is suspicious that all embeddings plotted in Figure 7 have a periodic,
trigonometric structure. This is a fundamental characteristic of how physical
parameters enter quantum states. To see this we will define a general class of
embeddings (also called “time-evolution encoding”) that is used a lot in near-
term quantum machine learning, and which includes all examples above if we
allow for classical pre-processing of the features. This strategy assumes that
$\mathcal{X}=\mathbb{R}^{N}$ for some arbitrary $N$ (whose relation to the
number of qubits $n$ depends on the embedding), which means that I will stick
to the bold notation. The embedding of $x_{i}$ is executed by gates of the
form $e^{-ix_{i}G_{i}}$ where $G_{i}$ is a $d_{i}\leq 2^{n}$-dimensional
Hermitian operator called the generating Hamiltonian. For the popular choice
of Pauli rotations, $G_{i}=\frac{1}{2}\sigma$ with the Pauli operator
$\sigma\in\\{\sigma_{x},\sigma_{y},\sigma_{z}\\}$. The gates can be applied to
different qubits as in rotation encoding, or to the same qubits, and to be
general we allow for arbitrary quantum computations between each encoding
gate.
Refs. [25] and [26] showed that the Dirac vectors
$\left|\phi(\mathbf{x})\right\rangle$ can be represented in terms of periodic
functions of the form $e^{ix_{i}\omega}$, where $\omega\in\mathbb{R}$ can be
interpreted as a frequency. The frequencies involved in the construction of
the data-encoding feature vectors are solely determined by the generating
Hamiltonians $\\{G_{i}\\}$ of the gates that encode the data. For popular
choices of Hamiltonians, the frequencies $\omega$ are integer-valued, which
means that the feature space is constructed from Fourier basis functions
$e^{ix_{i}n},n\in\mathbb{Z}$. This allows us to describe and analyse the
quantum kernel with the tools of Fourier analysis.
Let me state the result for the simplified case that each input $x_{i}$ is
only encoded once, and that all the encoding Hamiltonians are the same
($G_{1}=\dots=G_{N}=G$). The proof is deferred to Appendix A, which also shows
how our example of Pauli-X encoding can be cast as a Fourier series.
###### Theorem 1 (Fourier representation of the quantum kernel).
Let $\mathcal{X}=\mathbb{R}^{N}$ and $S(\mathbf{x})$ be a quantum circuit that
encodes the data inputs $\mathbf{x}=(x_{1},\dots,x_{N})\in\mathcal{X}$ into an
$n$-qubit quantum state
$S(\mathbf{x})\left|0\right\rangle=\left|\phi(\mathbf{x})\right\rangle$ via
gates of the form $e^{-ix_{i}G}$ for $i=1,\dots,N$. Without loss of generality
$G$ is assumed to be a $d\leq 2^{n}$-dimensional diagonal operator with
spectrum $\lambda_{1},\dots,\lambda_{d}$. Between such data-encoding gates,
and before and after the entire encoding circuit, arbitrary unitary evolutions
$W^{(1)},\dots,W^{(N+1)}$ can be applied, so that
$S(\mathbf{x})=W^{(N+1)}e^{-ix_{N}G}W^{(N)}\dots W^{(2)}e^{-ix_{1}G}W^{(1)}.$
(26)
The quantum kernel $\kappa(\mathbf{x},\mathbf{x}^{\prime})$ can be written as
$\kappa(\mathbf{x},\mathbf{x}^{\prime})=\sum_{\mathbf{s},\mathbf{t}\in\Omega}e^{i\mathbf{s}\mathbf{x}}e^{i\mathbf{t}\mathbf{x}^{\prime}}c_{\mathbf{st}},$
(27)
where $\Omega\subseteq\mathbb{R}^{N}$, and $c_{\mathbf{st}}\in\mathbb{C}$. For
every $\mathbf{s},\mathbf{t}\in\Omega$ we have
$-\mathbf{s},-\mathbf{t}\in\Omega$ and
$c_{\mathbf{st}}=c^{*}_{-\mathbf{s}-\mathbf{t}}$, which guarantees that the
quantum kernel is real-valued.
While the conditions of this theorem may sound restrictive at first, it
includes a fairly general class of quantum models. The standard way to control
a quantum system is to apply an evolution of Hamiltonian $G$ for time $t$,
which is exactly described by the form $e^{-itG}$. The time $t$ is associated
with the input to the quantum computer (which may be the original input
$x\in\mathcal{X}$ or the result of some pre-processing, in which case we can
just redefine the dataset to be the pre-processed one). In short, most quantum
kernels will be of the form shown in Eq. (27).
Importantly, for the class of Pauli generators, the kernel becomes a Fourier
series:
###### Corollary 1.1 (Fourier series representation of the quantum kernel).
For the setting described in Theorem 1, if the eigenvalue spectrum of $G$ is
such that any difference $\lambda_{i}-\lambda_{j}$ for $i,j=1,\dots,d$ is in
$\mathbb{Z}$, then $\Omega$ becomes the set of $N$-dimensional integer-valued
vectors $\mathbf{n}=(n_{1},\dots,n_{N})$, $n_{1},\dots n_{N}\in\mathbb{Z}$. In
this case the quantum kernel is a multi-dimensional Fourier series,
$\kappa(\mathbf{x},\mathbf{x}^{\prime})=\sum_{\mathbf{n},\mathbf{n^{\prime}}\in\Omega}e^{i\mathbf{n}\mathbf{x}}e^{i\mathbf{n^{\prime}}\mathbf{x}^{\prime}}c_{\mathbf{n,n^{\prime}}}.$
(28)
Figure 8: Kernels generated by rotation embeddings. Plots of the quantum
kernel $\kappa(\tilde{\mathbf{x}},\mathbf{x})$ with $\tilde{\mathbf{x}}=(0,0)$
using a very general data encoding strategy that repeats the input encoding
into a single qubit one, two and three times. It is obvious that the
repetition decreases the smoothness of the kernel by increasing the number of
Fourier basis functions from which the kernel is inherently constructed.
Expressions (27) and (28) reveal a lot about the structure of quantum kernels,
for example that they are not necessarily translation invariant,
$\kappa(\mathbf{x},\mathbf{x}^{\prime})\neq
g(\mathbf{x}-\mathbf{x}^{\prime})$, unless the data-encoding strategy leads to
$c_{\mathbf{st}}=\tilde{c}_{\mathbf{s}}\delta_{\mathbf{t},-\mathbf{s}}$
and
$\kappa(\mathbf{x},\mathbf{x}^{\prime})=\sum_{\mathbf{s}\in\Omega}e^{i\mathbf{s}(\mathbf{x}-\mathbf{x}^{\prime})}\tilde{c}_{\mathbf{s}}.$
(29)
Since $e^{-ix_{i}G}e^{ix^{\prime}_{i}G}=e^{-i(x_{i}-x^{\prime}_{i})G}$, this
is true for all data embeddings that encode each original input into a
separate physical subsystem, like rotation encoding introduced above.
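To make this concrete for the running single-qubit example: the generator $G=\sigma_{x}/2$ has eigenvalues $\pm 1/2$, so all eigenvalue differences lie in $\mathbb{Z}$ and the squared cosine kernel is a translation-invariant Fourier series with frequencies $\\{-1,0,1\\}$, since $\cos^{2}((x-x^{\prime})/2)=\frac{1}{2}+\frac{1}{4}e^{i(x-x^{\prime})}+\frac{1}{4}e^{-i(x-x^{\prime})}$. A short sketch (my own code) verifies these coefficients:

```python
import numpy as np

# Sketch (my own check): the squared cosine kernel of Example III.1 as a
# Fourier series with integer frequencies {-1, 0, 1}, as predicted by
# Corollary 1.1 for the generator G = sigma_x / 2.
coeffs = {0: 0.5, 1: 0.25, -1: 0.25}
for delta in np.linspace(-np.pi, np.pi, 9):  # delta = x - x'
    series = sum(c * np.exp(1j * n * delta) for n, c in coeffs.items())
    assert np.isclose(series.imag, 0)
    assert np.isclose(series.real, np.cos(delta / 2) ** 2)
```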
It is an interesting question whether this link between data embedding and
Fourier basis functions given to us by physics can help design particularly
suitable kernels for applications, or be used to control smoothness properties
of the kernel in a useful manner.
## V Quantum models and reproducing kernel Hilbert spaces
I will now discuss the observation that quantum models are linear models in
the feature space $\mathcal{F}$ of the data-encoding feature map. This
automatically allows us to apply the results of kernel methods to quantum
machine learning. A beautiful summary of these results can be found in [20]
and [27], which serve as a basis for many of the following insights.
### V.1 Quantum models are linear models in feature space
First, let us define a quantum model. For this we need measurements.
> Measurements. In quantum computing, a measurement produces the observable
> result of a quantum circuit, and can therefore be seen as the final step of
> a quantum algorithm (an important exception is when the outcome of a
> measurement is used to influence the quantum circuit itself, but I do not
> consider those complications here). Mathematically speaking, a measurement
> corresponds to a Hermitian operator $\mathcal{M}$ acting on vectors in the
> Hilbert space of the quantum system $\mathcal{H}$. Just like density
> matrices, measurement operators can be represented as elements of the space
> of $2^{n}\times 2^{n}$-dimensional complex matrices [21], and therefore live
> in a subspace of the data-encoding feature space $\mathcal{F}$. This will
> become quite crucial below.
>
> A Hermitian operator can always be diagonalised and written as
>
>
> $\mathcal{M}=\sum_{i}\mu_{i}\left|\mu_{i}\right\rangle\\!\left\langle\mu_{i}\right|,$
> (30)
>
> where $\mu_{i}$ are the eigenvalues of $\mathcal{M}$ and
> $\\{\left|\mu_{i}\right\rangle\\}$ is an orthonormal basis in the Hilbert
> space $\mathcal{H}$ of the quantum system. Note that
> $\left|\mu_{i}\right\rangle\\!\left\langle\mu_{i}\right|$ is an outer
> product, and can be thought of as a (density) matrix.
>
> The apparatus of quantum theory allows us to compute expected outcomes or
> expectations of measurement results. Such expectations derive from
> expressing the quantum state in the eigenbasis of the measurement operator,
> $\left|\psi\right\rangle=\sum_{i}\left\langle\mu_{i}\left|\psi\right.\right\rangle\left|\mu_{i}\right\rangle$,
> and using the fact that
> $\mathcal{M}\left|\mu_{i}\right\rangle=\mu_{i}\left|\mu_{i}\right\rangle$
> and $\left\langle\mu_{i}\left|\mu_{i}\right.\right\rangle=1$:
>
>
> $\mathrm{tr}\left[\rho\mathcal{M}\right]=\langle\psi|\mathcal{M}|\psi\rangle=\sum_{i,j}\left\langle\psi\left|\mu_{j}\right.\right\rangle\left\langle\mu_{i}\left|\psi\right.\right\rangle\left\langle\mu_{j}\right|\mathcal{M}\left|\mu_{i}\right\rangle=\sum_{i}|\left\langle\psi\left|\mu_{i}\right.\right\rangle|^{2}\mu_{i}=\sum_{i}p(\mu_{i})\mu_{i}.$
> (31)
>
> The above used the “Born rule”, which states that the probability of
> measuring outcome $\mu_{i}$ is given by
>
> $p(\mu_{i})=|\langle\mu_{i}|\psi\rangle|^{2}.$ (32)
>
> It is clear that the right hand side of Eq. (31) is an expectation of a
> random variable in the classical sense of probability theory, but the
> probabilities themselves are computed by an unusual mathematical framework.
> Finally, it is good to know that the expectation of a measurement
> $\mathcal{M}_{\varphi}=\left|\varphi\right\rangle\\!\left\langle\varphi\right|$
> (where $\left|\varphi\right\rangle$ is an arbitrary quantum state) gives us
> the overlap of $\left|\varphi\right\rangle$ and $\left|\psi\right\rangle$,
>
>
> $\mathrm{tr}\left[\rho\mathcal{M}_{\varphi}\right]=\langle\psi|\mathcal{M}_{\varphi}|\psi\rangle=|\langle\varphi|\psi\rangle|^{2}.$
> (33)
>
> Note that just because we can write down a measurement mathematically does
> not mean that we can implement it efficiently on a quantum computer. However,
> for measurements of type $\mathcal{M}_{\varphi}$ there is a very efficient
> routine called the SWAP test to do so, if we can prepare the corresponding
> state efficiently. In practice, more complicated measurements are
> implemented by applying a circuit $W$ to the final quantum state, followed
> by a simple measurement (such as the well-known Pauli-Z measurement
> $\sigma_{z}$ that probes the state of qubits, which effectively implements
> $\mathcal{M}=W^{\dagger}\sigma_{z}W$).
>
> Of course, actual quantum computers can only ever produce an estimate of the
> above statistical properties, namely by repeating the entire computation $K$
> times and computing the empirical probability/frequency or the empirical
> expectation $\frac{1}{K}\sum_{i=1}^{K}\mu_{i}$. However, repeating a fixed
> computation tens of thousands of times can be done in a fraction of a second
> on most hardware platforms, and only leads to a small constant overhead.
We can define a quantum model as a measurement performed on a data-encoding
state:
###### Definition 3 (Quantum model).
Let $\rho(x)$ be a quantum state that encodes classical data $x\in\mathcal{X}$
and $\mathcal{M}$ a Hermitian operator representing a quantum measurement. A
quantum model is the expectation of the quantum measurement as a function of
the data input,
$f(x)=\mathrm{tr}\left[\rho(x)\mathcal{M}\right].$ (34)
The space of all quantum models contains functions
$f:\mathcal{X}\rightarrow\mathbb{R}$. For pure-state embeddings with
$\rho(x)=\left|\phi(x)\right\rangle\\!\left\langle\phi(x)\right|$, this
simplifies to
$f(x)=\left\langle\phi(x)\right|\mathcal{M}\left|\phi(x)\right\rangle.$ (35)
As mentioned above, this definition is very general, but does not consider the
important class of generative quantum models.
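As a minimal sketch of Definition 3 (my own code), take the Pauli-X embedding from Example III.1 together with a plain $\sigma_{z}$ measurement; the resulting quantum model reduces to $f(x)=\cos(x)$:

```python
import numpy as np

# Minimal sketch of Definition 3 (my own code): the Pauli-X embedding of
# Example III.1 with a plain sigma_z measurement, f(x) = tr[rho(x) M].
sigma_z = np.diag([1.0, -1.0]).astype(complex)

def rho(x):
    ket = np.array([np.cos(x / 2), -1j * np.sin(x / 2)])  # R_X(x)|0>
    return np.outer(ket, ket.conj())

def f(x):
    return np.trace(rho(x) @ sigma_z).real

# For this embedding and measurement the model reduces to f(x) = cos(x).
for x in [0.0, 0.5, np.pi]:
    assert np.isclose(f(x), np.cos(x))
```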
###### Example V.1.
Getting back to the standard example of the Pauli-X rotation encoding, we can
upgrade it to a full quantum model with parametrised measurement by applying
an additional arbitrary rotation $R(\theta_{1},\theta_{2},\theta_{3})$, which
is parametrised by three trainable angles and is expressive enough to
represent any single-qubit computation. After this, we measure in the Pauli-Z
basis, yielding the overall quantum model:
$f(x)=\mathrm{tr}\left[\rho(x)\mathcal{M}(\theta_{1},\theta_{2},\theta_{3})\right]=\left\langle\phi(x)\right|\mathcal{M}(\theta_{1},\theta_{2},\theta_{3})\left|\phi(x)\right\rangle,$
(36)
with measurement
$\mathcal{M}(\theta_{1},\theta_{2},\theta_{3})=R^{\dagger}(\theta_{1},\theta_{2},\theta_{3})\sigma_{z}R(\theta_{1},\theta_{2},\theta_{3})$,
$R(\theta_{1},\theta_{2},\theta_{3})=\begin{pmatrix}e^{i(-\frac{\theta_{1}}{2}-\frac{\theta_{3}}{2})}\cos(\frac{\theta_{2}}{2})&-e^{i(-\frac{\theta_{1}}{2}+\frac{\theta_{3}}{2})}\sin(\frac{\theta_{2}}{2})\\\
e^{i(\frac{\theta_{1}}{2}-\frac{\theta_{3}}{2})}\sin(\frac{\theta_{2}}{2})&e^{i(\frac{\theta_{1}}{2}+\frac{\theta_{3}}{2})}\cos(\frac{\theta_{2}}{2})\end{pmatrix}$
(37)
and $\left|\phi(x)\right\rangle=R_{X}(x)\left|0\right\rangle$. One can use a
computer-algebra system (or, for the patient among us, lengthy calculations)
to verify that the quantum model is equivalent to the function
$f(x)=\cos(\theta_{2})\cos(x)-\sin(\theta_{2})\sin(\theta_{3})\sin(x),$ (38)
and hence independent of the first parameter: the outermost $z$-rotation by
$\theta_{1}$ commutes with the $\sigma_{z}$ measurement.
Next, let us define what a linear (machine learning) model in feature space
is:
###### Definition 4 (Linear model).
Let $\mathcal{X}$ be a data domain and $\phi:\mathcal{X}\to\mathcal{F}$ a
feature map. We call any function
$f(x)=\langle\phi(x),w\rangle_{\mathcal{F}},$ (39)
with $w\in\mathcal{F}$ a linear model in $\mathcal{F}$.
From these two definitions we immediately see that:
###### Theorem 2 (Quantum models are linear models in data-encoding feature
space).
Let $f(x)=\mathrm{tr}\left[\rho(x)\mathcal{M}\right]$ be a quantum model with
feature map $\phi:x\in\mathcal{X}\to\rho(x)\in\mathcal{F}$ and data domain
$\mathcal{X}$. The quantum model $f$ is a linear model in $\mathcal{F}$.
It is interesting to note that the measurement $\mathcal{M}$ can always be
expressed as a linear combination $\sum_{k}\gamma_{k}\rho(x^{k})$ of data-
encoding states $\rho(x^{k})$ where $x^{k}\in\mathcal{X}$.
###### Theorem 3 (Quantum measurements are linear combinations of data-
encoding states).
Let $f_{\mathcal{M}}(x)=\mathrm{tr}\left[\rho(x)\mathcal{M}\right]$ be a quantum
model. There exists a measurement $\mathcal{M}_{\rm exp}\in\mathcal{F}$ of the
form
$\mathcal{M}_{\rm exp}=\sum_{k}\gamma_{k}\rho(x^{k})$ (40)
with $x^{k}\in\mathcal{X}$, such that $f_{\mathcal{M}}(x)=f_{\mathcal{M}_{\rm
exp}}(x)$ for all $x\in\mathcal{X}$.
###### Proof.
We can decompose $\mathcal{M}$ into a part $\mathcal{M}_{\rm exp}$ that lies
in the span of the data-encoding states $\\{\rho(x),x\in\mathcal{X}\\}$ and a
remainder $R$ in the orthogonal complement of that span,
$\mathcal{M}=\mathcal{M}_{\rm exp}+R.$ (41)
Since the trace is linear, we have:
$\mathrm{tr}\left[\rho(x)\mathcal{M}\right]=\mathrm{tr}\left[\rho(x)\mathcal{M}_{\rm
exp}\right]+\mathrm{tr}\left[\rho(x)R\right].$ (42)
The data-encoding state $\rho(x)$ lies in the span while $R$ is orthogonal to
it with respect to the Hilbert-Schmidt inner product, which means that the
inner product $\mathrm{tr}\left[\rho(x)R\right]$ is always zero. ∎
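The proof can be illustrated numerically (my own sketch, with hypothetical sample points): projecting a random Hermitian measurement onto the span of vectorised data-encoding states leaves the model outputs unchanged.

```python
import numpy as np

# Numerical sketch of the proof (my own construction): project a random
# Hermitian measurement onto the span of vectorised data-encoding states
# rho(x^k); the projection M_exp gives identical model outputs everywhere.
rng = np.random.default_rng(2)

def rho(x):
    ket = np.array([np.cos(x / 2), -1j * np.sin(x / 2)])  # Pauli-X embedding
    return np.outer(ket, ket.conj())

A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M = A + A.conj().T  # random Hermitian measurement

xs = np.linspace(0, 2, 8)  # hypothetical sample points x^k
basis = np.array([rho(xk).flatten() for xk in xs])  # vectorised rho(x^k)

# Least squares = orthogonal projection of vec(M) onto span{vec(rho(x^k))}
gamma, *_ = np.linalg.lstsq(basis.T, M.flatten(), rcond=None)
M_exp = (basis.T @ gamma).reshape(2, 2)

for x in np.linspace(-1, 3, 5):
    assert np.isclose(np.trace(rho(x) @ M).real, np.trace(rho(x) @ M_exp).real)
```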
Below we will see that optimal measurements with respect to typical machine
learning cost functions can be expanded in the training data only.
Note that the fact that a quantum model can be expressed as a linear model in
the feature space does not mean that it is linear in the Hilbert space of the
Dirac vectors $\left|\phi(x)\right\rangle$, nor is it linear in the data input
$x$. As mentioned before, in the context of variational circuits the
measurement usually depends on trainable parameters, which is realised by
applying a parametrised quantum operation or circuit that “rotates” the basis
of a fixed measurement. Variational quantum models are also not necessarily
linear in their actual trainable parameters.
As a last comment for readers who prefer the vectorised version of the data-
encoding feature map: by writing the measurement operator
$\mathcal{M}=\sum_{i}\mu_{i}\left|\mu_{i}\right\rangle\\!\left\langle\mu_{i}\right|$
in its eigenbasis, we can likewise write a quantum model as the inner product
of a vectorised feature vector
$\left|\phi(x)\right\rangle\otimes\left|\phi^{*}(x)\right\rangle\in\mathcal{F}_{v}$
with some other vector
$\sum_{i}\mu_{i}\left|\mu_{i}\right\rangle\otimes\left|\mu^{*}_{i}\right\rangle\in\mathcal{F}_{v}$:
$\displaystyle f(x)$
$\displaystyle=\left\langle\phi(x)\right|\mathcal{M}\left|\phi(x)\right\rangle$
(43)
$\displaystyle=\sum_{i}\mu_{i}|\left\langle\mu_{i}\left|\phi(x)\right.\right\rangle|^{2}$
(44)
$\displaystyle=\Big{(}\left\langle\phi(x)\right|\otimes\left\langle\phi^{*}(x)\right|\Big{)}\Big{(}\sum_{i}\mu_{i}\left|\mu_{i}\right\rangle\otimes\left|\mu^{*}_{i}\right\rangle\Big{)},$
(45)
or, using the vectorised density matrix notation introduced above,
$f(x)=\left\llangle\rho(x)\left|w\right.\right\rrangle,$ (46)
with $w=\left|\mathcal{M}\right\rrangle=\sum_{i}\mu_{i}\left|\mu_{i}\right\rangle\otimes\left|\mu^{*}_{i}\right\rangle$, the vectorisation of the measurement operator.
### V.2 The RKHS of the quantum kernel and the space of quantum models are
equivalent
So far we were dealing with two different kinds of Hilbert spaces: The Hilbert
space $\mathcal{H}$ of the quantum system, and the feature space $\mathcal{F}$
that contains the embedded data. I will now construct yet another feature
space for the quantum kernel, but one derived directly from the kernel and
with no further notion of a quantum model. This time the feature space is a
Hilbert space $F$ of functions, and due to its special construction it is
called the reproducing kernel Hilbert space (RKHS). The relevance of this
feature space is that the functions it contains turn out to be exactly the
quantum model functions $f$ (which is a bit surprising at first: this feature
space contains linear models defined in an equivalent feature space!).
The RKHS $F$ of the quantum kernel can be defined as follows, as per the
Moore–Aronszajn construction (see also
http://www.stats.ox.ac.uk/~sejdinov/teaching/atml14/Theory_2014.pdf for a
great overview):
###### Definition 5 (Reproducing kernel Hilbert space).
Let $\mathcal{X}\neq\emptyset$. The reproducing kernel Hilbert space of a
kernel $\kappa$ over $\mathcal{X}$ is the Hilbert space $F$ created by
completing the span of functions $f:\mathcal{X}\rightarrow\mathbb{R}$,
$f(\cdot)=\kappa(x,\cdot)$, $x\in\mathcal{X}$ (i.e., including the limits of
Cauchy sequences). For two functions
$f(\cdot)=\sum_{i}\alpha_{i}\kappa(x^{i},\cdot)$,
$g(\cdot)=\sum_{j}\beta_{j}\kappa(x^{j},\cdot)\in F$, the inner product is
defined as
$\langle f,g\rangle_{F}=\sum_{ij}\alpha_{i}\beta_{j}\kappa(x^{i},x^{j}),$ (47)
with $\alpha_{i},\beta_{j}\in\mathbb{R}$.
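A minimal numerical sketch of this construction, using a Gaussian kernel as a stand-in (the toy points and coefficients are arbitrary): it evaluates the inner product of Eq. (47) and checks the reproducing property of Eq. (49).

```python
import numpy as np

def kappa(x, y, sigma=1.0):
    # Gaussian kernel, matching the intuition of Figures 7-9
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

# f(.) = sum_i alpha_i kappa(x_i, .), with arbitrary toy points and weights
xs, alphas = np.array([0.0, 1.0, 2.5]), np.array([0.5, -1.0, 2.0])

def f(t):
    return float(sum(a * kappa(xi, t) for a, xi in zip(alphas, xs)))

def inner_F(a, xa, b, xb):
    # Eq. (47): <f, g>_F = sum_ij alpha_i beta_j kappa(x^i, x^j)
    return float(sum(ai * bj * kappa(xi, zj)
                     for ai, xi in zip(a, xa) for bj, zj in zip(b, xb)))
```

The reproducing property $\langle f,\kappa(x,\cdot)\rangle_{F}=f(x)$ then holds by construction: the "similarity functions" $\kappa(x,\cdot)$ both span the space and evaluate its members.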
Note that according to Theorem 1 the “size” of the space of common quantum
models, and likewise the RKHS of the quantum kernel, are fundamentally limited
by the generators of the data-encoding gates. If we consider $\kappa$ as the
quantum kernel, the definition of the inner product,
$\langle\kappa(x,\cdot),\kappa(x^{\prime},\cdot)\rangle_{F}=\kappa(x,x^{\prime}),$
(48)
reveals that $x\rightarrow\kappa(x,\cdot)$ is a feature map of this kernel (but one
mapping data to functions instead of matrices, which feels a bit odd at
first). In this sense, $F$ can be regarded as an alternative feature space to
$\mathcal{F}$. The name of this unique feature space comes from the
reproducing property
$\langle f,\kappa(x,\cdot)\rangle_{F}=f(x)\text{ for all }f\in F,$ (49)
which also shows that taking the inner product with $\kappa(x,\cdot)$ acts as
the evaluation functional $\delta_{x}$, which maps $f$ to $f(x)$. An alternative definition of the RKHS is the
space in which the evaluation functional is bounded, which gives the space a
lot of favourable properties from a mathematical perspective.
To most of us, the definition of an RKHS is terribly opaque when first
encountered, so a few words of explanation may help (see also Figure 9). One
can think of the RKHS as a space whose elementary functions $\kappa(x,\cdot)$
assign a distance measure to every data point. Functions of this form were
also plotted in Figures 7 and 8. By feeding another data point $x^{\prime}$
into this “similarity measure”, we get the distance between the two points. As
a vector space, $F$ also contains linear combinations of these building
blocks. The functions living in $F$ are therefore linear combinations of data
similarities, just like for example kernel density estimation constructs a
smooth function by adding Gaussians centered in the data. The kernel then
regulates the “resolution” of the distance measure, for example by changing
the variance of the Gaussian.
Figure 9: Intuition for the functions living in the reproducing kernel Hilbert
space (RKHS). The RKHS $F$ contains functions that are linear combinations of
kernel functions where one “slot” is fixed in a possible data sample
$x^{k}\in\mathcal{X}$. This illustration of one such function $f\in F$, using
a Gaussian kernel, shows how the kernel regulates the “smoothness” of the
functions in $F$, as a wider kernel will simplify $f$. Since the RKHS is
equivalent to the space of linear models that it has been derived from, the
kernel fundamentally defines the class of functions that the linear model can
express.
Once one gets used to this definition, it is immediately apparent that the
functions living in the RKHS of the quantum kernel are what we defined as
quantum models:
###### Theorem 4.
Functions in the RKHS $F$ of the quantum kernel are linear models in the data-
encoding feature space $\mathcal{F}$ and vice versa.
###### Proof.
The functions in the RKHS of the quantum kernel are of the form
$f(\cdot)=\sum_{k}\gamma_{k}\kappa(x^{k},\cdot)$, with $x^{k}\in\mathcal{X}$.
We get
$\displaystyle f(x)$ $\displaystyle=\sum_{k}\gamma_{k}\kappa(x^{k},x)$ (50)
$\displaystyle=\sum_{k}\gamma_{k}\mathrm{tr}\left[\rho(x^{k})\rho(x)\right]$
(51)
$\displaystyle=\mathrm{tr}\left[\sum_{k}\gamma_{k}\rho(x^{k})\rho(x)\right]$
(52) $\displaystyle=\mathrm{tr}\left[\mathcal{M}\rho(x)\right].$ (53)
Using Theorem 3 we know that all quantum models can be expressed by
measurements $\sum_{k}\gamma_{k}\rho(x^{k})$, and hence by functions in the
RKHS. ∎
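The equivalence in the proof can be checked numerically. The sketch below assumes a single-qubit RY encoding and arbitrary toy coefficients; it builds $\mathcal{M}=\sum_{k}\gamma_{k}\rho(x^{k})$ and confirms that the resulting quantum model agrees with the RKHS function $\sum_{k}\gamma_{k}\kappa(x^{k},\cdot)$.

```python
import numpy as np

def phi(x):
    # Illustrative single-qubit encoding |phi(x)> = RY(x)|0>
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def rho(x):
    return np.outer(phi(x), phi(x).conj())

def qkernel(x, y):
    # kappa(x, x') = tr[rho(x) rho(x')] = |<phi(x)|phi(x')>|^2
    return float(np.real(np.trace(rho(x) @ rho(y))))

xk = [0.1, 1.3, 2.2]       # hypothetical data points
gamma = [0.7, -0.4, 1.1]   # expansion coefficients

# Eqs. (50)-(53): measurement built from the expansion points
M = sum(g * rho(x) for g, x in zip(gamma, xk))

def f_model(x):
    return float(np.real(np.trace(M @ rho(x))))   # quantum model tr[M rho(x)]

def f_rkhs(x):
    return sum(g * qkernel(xi, x) for g, xi in zip(gamma, xk))  # RKHS function
```
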
In fact, the above observation applies to any linear model in a feature space
that gives rise to the quantum kernel (see Theorem 4.21 in [20]).
As a first taste of how the connection of quantum models and kernel theory can
be exploited for quantum machine learning, consider the question whether
quantum models are universal function approximators. If quantum models are
universal, the RKHS of the quantum kernel must be universal (or dense in the
space of functions we are interested in). This leads to the definition of a
universal kernel (see [20] Definition 4.52):
###### Definition 6 (Universal kernel).
A continuous kernel $\kappa$ on a compact metric space $\mathcal{X}$ is called
universal if the RKHS $F$ of $\kappa$ is dense in $C(\mathcal{X})$, i.e., for
every function $g$ in the set of functions $C(\mathcal{X})$ mapping from
elements in $\mathcal{X}$ to a scalar value, and for all $\epsilon>0$ there
exists an $f\in F$ such that
$\|f-g\|_{\infty}\leq\epsilon.$ (54)
The reason why this is useful is that there are a handful of known necessary
conditions for a kernel to be universal, such as having an injective feature
map (see [20] for more details). This immediately excludes quantum
models defined on the data domain $\mathcal{X}=\mathbb{R}$ which use single-
qubit Pauli rotation gates of the form $e^{ix\sigma}$ (with $\sigma$ a Pauli
matrix) to encode data: since such rotations are $2\pi$-periodic, two
different $x,x^{\prime}\in\mathcal{X}$ get mapped to the same data-encoding
state $\rho(x)$. In other words, and to some extent trivially so, on a data
domain that extends beyond the periodicity of a quantum model we never have a
chance for universal function approximation. Another example for universal
kernels are kernels of the form
$\kappa(x,x^{\prime})=\sum_{k=1}^{\infty}c_{k}\langle x^{\prime},x\rangle^{k}$
(see [20] Corollary 4.57). Vice versa, the universality proof for a type of
quantum model in [26] suggests that some quantum kernels of the form (1) are
universal in the asymptotic limit of exponentially large circuits.
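The non-injectivity argument can be made tangible with a short simulation. Assuming a single-qubit Pauli-Z rotation encoding acting on $\left|+\right\rangle$ (one instance of the $e^{ix\sigma}$-type encodings above), the states for $x$ and $x+2\pi$ coincide, so the kernel cannot distinguish them:

```python
import numpy as np

def phi(x):
    # e^{-i x Z / 2}|+>: a single Pauli-rotation encoding (assumed for illustration)
    return np.array([np.exp(-0.5j * x), np.exp(0.5j * x)]) / np.sqrt(2)

def rho(x):
    return np.outer(phi(x), phi(x).conj())

def qkernel(x, y):
    return float(np.abs(phi(x).conj() @ phi(y)) ** 2)

x = 0.3
# The feature map is not injective: x and x + 2*pi yield the same state ...
same_state = np.allclose(rho(x), rho(x + 2 * np.pi))
# ... so the kernel reports them as identical, kappa(x, x + 2*pi) = 1.
```
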
I want to finish with a final note about the relation between “wavefunctions”
and functions in the RKHS of quantum systems (see also the appendix of [4]).
Quantum states are sometimes called “wavefunctions”, since an alternative
definition of the Hilbert space of a quantum system is the space of functions
$f(\cdot)=\psi(\cdot)$ which map a measurement outcome $i$ corresponding to
basis state $\left|i\right\rangle$ to an “amplitude” $\psi(i)=\left\langle
i\left|\psi\right.\right\rangle$. (The dual basis vector $\left\langle
i\right|$ can here be understood as the evaluating functional $\delta_{i}$
which returns this amplitude.) Hence, the Hilbert space of a quantum system
can be written as a space of functions mapping from
$\\{i\\}\rightarrow\mathbbm{C}$. But the functions that we are interested in
for machine learning are functions in the data, not in the possible
measurement outcomes. This means that the Hilbert space of the quantum system
is only equivalent to the RKHS of a quantum machine learning model if we
associate data with the measurement outcomes. This is true for many proposals
of generative quantum machine learning models [18, 28], and it would be
interesting to transfer the results to this setting.
## VI Training quantum models
While the question of universality addresses the expressivity of quantum
models, the remaining sections will look at questions of trainability and
optimisation, for which the kernel perspective has the most important results
to offer. Notably, we will see that the optimal measurements of quantum models
for typical machine learning cost functions only have relatively few degrees
of freedom. Similarly, the process of finding these optimal models (i.e.,
training over the space of all possible quantum models) can be formulated as a
low-dimensional optimisation problem. Most of the results are based on the
fact that for kernel methods, the task of training a model is equivalent to
optimising over the model’s corresponding RKHS.
### VI.1 Optimising quantum models is equivalent to optimising over the RKHS
In machine learning we want to find optimal models, or those that minimise the
cost functions derived from learning problems. This process is called
training. From a learning theory perspective, training can be phrased as
regularised empirical risk minimisation, and the problem of training quantum
models can be cast as follows:
###### Definition 7 (Regularised empirical risk minimisation of quantum
models).
Let $\mathcal{X},\mathcal{Y}$ be data input and output domains, $p$ a
probability distribution on $\mathcal{X}$ from which data is drawn, and
$L:\mathcal{X}\times\mathcal{Y}\times\mathbb{R}\rightarrow[0,\infty)$ a loss
function that quantifies the quality of the prediction of a quantum model
$f(x)=\mathrm{tr}\left[\rho(x)\mathcal{M}\right]$. Let
$\mathcal{R}_{L}(f)=\int_{\mathcal{X}\times\mathcal{Y}}L(x,y,f(x))\,\mathrm{d}p(x,y)$
(55)
be the expected loss (or “risk”) of $f$ under $L$, where $L$ may depend
explicitly on $x$. Since $p$ is unknown, we approximate the risk by the
empirical risk
$\hat{\mathcal{R}}_{L}(f)=\frac{1}{M}\sum_{m=1}^{M}L(x^{m},y^{m},f(x^{m})).$ (56)
Regularised empirical risk minimisation of quantum models is the problem of
minimising the empirical risk over all possible quantum models while also
minimising the norm of the measurement $\mathcal{M}$,
$\inf_{\mathcal{M}\in\mathcal{F}}\lambda\|\mathcal{M}\|^{2}_{\mathcal{F}}+\hat{\mathcal{R}}_{L}(\mathrm{tr}\left[\rho(x)\mathcal{M}\right]),$
(57)
where $\lambda\in\mathbb{R}^{+}$ is a positive hyperparameter that controls
the strength of the regularisation term.
We saw in Section V that quantum models are equivalent to functions in the
RKHS of the quantum kernel, which allows us to replace the term
$\hat{\mathcal{R}}_{L}(\mathrm{tr}\left[\rho(x)\mathcal{M}\right])$ in the
empirical risk by $\hat{\mathcal{R}}_{L}(f)$, $f\in F$.
But what about the regularisation term? Since with Theorem 3 we can write
$\displaystyle\|\mathcal{M}\|^{2}_{\mathcal{F}}$
$\displaystyle=\mathrm{tr}\left[\mathcal{M}^{2}\right]$ (58)
$\displaystyle=\sum_{ij}\gamma_{i}\gamma_{j}\mathrm{tr}\left[\rho(x^{i})\rho(x^{j})\right]$
(59) $\displaystyle=\sum_{ij}\gamma_{i}\gamma_{j}\kappa(x^{i},x^{j})$ (60)
$\displaystyle=\langle\sum_{i}\gamma_{i}\kappa(x^{i},\cdot),\sum_{i}\gamma_{i}\kappa(x^{i},\cdot)\rangle_{F}$
(61) $\displaystyle=\langle f,f\rangle_{F},$ (62)
the norm of $\mathcal{M}\in\mathcal{F}$ is equivalent to the norm of a
corresponding $f\in{F}$. Hence, the regularised empirical risk minimisation
problem in Eq. (57) is equivalent to
$\inf_{f\in F}\lambda\|f\|^{2}_{F}+\hat{\mathcal{R}}_{L}(f),$ (63)
which minimises the regularised risk over the RKHS of the quantum kernel. We
will see in the remaining sections that this allows us to characterise the
problem of training and its solutions to a surprising degree.
### VI.2 The measurements of optimal quantum models are expansions in the
training data
The representer theorem, one of the main achievements of classical kernel
theory, prescribes that the function $f$ from the RKHS which minimises the
regularised empirical risk can always be expressed as a weighted sum of the
kernel between $x$ and the training data. Together with the connection between
quantum models and the RKHS of the quantum kernel, this fact will allow us to
write optimal quantum machine learning models in terms of the quantum kernel.
More precisely, the representer theorem can be stated as follows (for a more
general version, see [27], Theorem 5.1):
###### Theorem 5 (Representer theorem).
Let $\mathcal{X},\mathcal{Y}$ be an input and output domain,
$\kappa:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ a kernel with a
corresponding reproducing kernel Hilbert space $F$, and given training data
$\mathcal{D}=\\{(x^{1},y^{1}),\dotsc,(x^{M},y^{M})\in\mathcal{X}\times\mathcal{Y}\\}$.
Consider a strictly monotonic increasing regularisation function
$g\colon[0,\infty)\to\mathbb{R}$, and an arbitrary loss
$L\colon\mathcal{X}\times\mathcal{Y}\times\mathbb{R}\to\mathbb{R}\cup\\{\infty\\}$.
Any minimiser of the regularised empirical risk
$f_{\rm opt}=\underset{f\in
F}{\mathrm{argmin}}\left\\{\hat{\mathcal{R}}_{L}(f)+g\left(\lVert
f\rVert_{F}\right)\right\\},\quad$ (64)
admits a representation of the form:
$f_{\rm opt}(x)=\sum_{m=1}^{M}\alpha_{m}\;\kappa(x^{m},x),$ (65)
where $\alpha_{m}\in\mathbb{R}$ for all $1\leq m\leq M$.
Note that the crucial difference to the form in Theorem 3 is that $m$ does
not sum over arbitrary data from $\mathcal{X}$, but over a finite training
data set. For us this means that the optimal quantum model can be written as
$f_{\rm
opt}(x)=\sum_{m=1}^{M}\alpha_{m}\;\mathrm{tr}\left[\rho(x)\rho(x^{m})\right]=\sum_{m=1}^{M}\alpha_{m}\;|\langle\phi(x)|\phi(x^{m})\rangle|^{2}.$
(66)
This in turn defines the measurements $\mathcal{M}$ of optimal quantum models.
###### Theorem 6 (Optimal measurements).
For the settings described in Theorem 5, the measurement that minimises the
regularised empirical risk can be written as an expansion in the training data
$x^{m}$, $m=1\dots M$,
$\mathcal{M}_{\rm opt}=\sum_{m}\alpha_{m}\rho(x^{m}),$ (67)
with $\alpha_{m}\in\mathbb{R}$.
###### Proof.
This follows directly by noting that
$\displaystyle f_{\rm opt}(x)$
$\displaystyle=\sum_{m=1}^{M}\alpha_{m}\;\mathrm{tr}\left[\rho(x)\rho(x^{m})\right]$
(68)
$\displaystyle=\mathrm{tr}\left[\rho(x)\sum_{m=1}^{M}\alpha_{m}\rho(x^{m})\right]$
(69) $\displaystyle=\mathrm{tr}\left[\rho(x)\mathcal{M}_{\rm opt}\right]$ (70)
∎
As mentioned in the summary and Figure 5, in variational circuits we typically
only optimise over a subspace of the RKHS since the measurements $\mathcal{M}$
are constrained by a particular circuit ansatz. We can therefore not guarantee
that the optimal measurement can be expressed by the variational ansatz.
However, the above guarantees that there will always be a measurement of the
form of Eq. (67) for which the quantum model has a lower regularised empirical
risk than the best solution of the variational training.
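As a concrete instance of Eqs. (66) and (67), here is a kernel ridge regression sketch: for the square loss, the representer coefficients have the closed form $\bm{\alpha}=(\mathbf{K}+\lambda M\mathbb{1})^{-1}\mathbf{y}$ (a standard result, stated here without derivation). The single-qubit encoding, the toy data set and the value of $\lambda$ are illustrative assumptions.

```python
import numpy as np

def qkernel(x, y):
    # Quantum kernel of a single-qubit RY encoding: |<phi(x)|phi(y)>|^2
    return np.cos((x - y) / 2) ** 2

X_train = np.array([0.1, 0.8, 1.5, 2.2, 2.9])   # hypothetical inputs
y_train = np.cos(X_train)                        # toy regression targets
M, lam = len(X_train), 0.01

K = np.array([[qkernel(a, b) for b in X_train] for a in X_train])

# Representer coefficients for the square loss: alpha = (K + lam*M*I)^{-1} y
alpha = np.linalg.solve(K + lam * M * np.eye(M), y_train)

def f_opt(x):
    # Eq. (66): the optimal model is an expansion in the training data
    return sum(a * qkernel(xm, x) for a, xm in zip(alpha, X_train))
```

On hardware, the kernel evaluations in `K` and in `f_opt` would be estimated on the quantum device, while the linear solve stays classical.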
As an example, we can use the apparatus of linear regression to show that the
optimal measurement for a quantum model under least-squares loss can indeed be
written as claimed in Eq. (67). For this I will assume once more that
$\mathcal{X}=\mathbb{R}^{N}$ where $N=2^{n}$ and $n$ is the number of qubits,
and switch to bold notation. I will also use the (here much more intuitive)
vectorised notation in which the quantum model
$f(x)=\mathrm{tr}\left[\rho(x)\mathcal{M}\right]$ becomes
$f(x)=\left\llangle\mathcal{M}\left|\rho(x)\right.\right\rrangle$, with
the vectorised measurement
$\left|\mathcal{M}\right\rrangle=\sum_{k}\gamma_{k}\left|\rho(x^{k})\right\rrangle$.
A well-known result from linear regression states that the vector $\mathbf{w}$
that minimises the least-squares loss of a linear model
$f(\mathbf{x})=\mathbf{w}^{T}\mathbf{x}$ is given by
$\mathbf{w}=(\mathbf{X}^{\dagger}\mathbf{X})^{-1}\mathbf{X}^{\dagger}\mathbf{y},$
(71)
if the inverse of $\mathbf{X}^{\dagger}\mathbf{X}$ exists. Here, $\mathbf{X}$
is the matrix that contains the data vectors as rows,
$\mathbf{X}=\begin{pmatrix}x^{1}_{1}&\ldots&x^{1}_{N}\\\
\vdots&\ddots&\vdots\\\ x^{M}_{1}&\ldots&x^{M}_{N}\end{pmatrix},$ (72)
and $\mathbf{y}$ is an $M$-dimensional vector containing the target labels. A
little trick exposes that $\mathbf{w}$ can be written as a linear combination
of training inputs,
$\mathbf{w}=\mathbf{X}^{\dagger}\left(\mathbf{X}(\mathbf{X}^{\dagger}\mathbf{X})^{-2}\mathbf{X}^{\dagger}\mathbf{y}\right)=\mathbf{X}^{\dagger}\bm{\alpha}=\sum_{m}\alpha_{m}\mathbf{x}^{m},$
(73)
where $\bm{\alpha}=(\alpha_{1},\dots,\alpha_{M})$.
Since a quantum model is a linear model in feature space, we can associate the
vectors in linear regression with the vectorised measurement and density
matrix, and immediately derive
$\left|\mathcal{M}\right\rrangle=\sum_{m}y^{m}\left(\sum_{m^{\prime}}\left|\rho(\mathbf{x}^{m^{\prime}})\right\rrangle\\!\left\llangle\rho(\mathbf{x}^{m^{\prime}})\right|\right)^{-1}\left|\rho(\mathbf{x}^{m})\right\rrangle,$
(74)
by making use of the fact that in our notation
$\mathbf{X}^{\dagger}\mathbf{X}\Longleftrightarrow\sum_{m}\left|\rho(\mathbf{x}^{m})\right\rrangle\\!\left\llangle\rho(\mathbf{x}^{m})\right|,$
(75)
and
$\mathbf{X}^{\dagger}\mathbf{y}\Longleftrightarrow\sum_{m}y^{m}\left|\rho(\mathbf{x}^{m})\right\rrangle.$
(76)
Note that although this looks like an expansion in the feature states, the
“coefficient” of $\left|\rho(\mathbf{x}^{m})\right\rrangle$ still contains an
operator. However, with Eq. (73) and writing
$\sum_{m}\left|\rho(\mathbf{x}^{m})\right\rrangle\\!\left\llangle\rho(\mathbf{x}^{m})\right|$
in its diagonal form,
$\sum_{m}\left|\rho(\mathbf{x}^{m})\right\rrangle\\!\left\llangle\rho(\mathbf{x}^{m})\right|=\sum_{k}h_{k}\left|h_{k}\right\rrangle\\!\left\llangle
h_{k}\right|,$ (77)
we have
$\left|\mathcal{M}\right\rrangle=\sum_{m}\alpha_{m}\left|\rho(\mathbf{x}^{m})\right\rrangle,$
(78)
with
$\alpha_{m}=\sum_{k}h^{-2}_{k}{\left\llangle
h_{k}\left|\rho(\mathbf{x}^{m})\right.\right\rrangle}\sum_{m^{\prime}}y^{m^{\prime}}\left\llangle
h_{k}\left|\rho(\mathbf{x}^{m^{\prime}})\right.\right\rrangle.$ (79)
The optimal measurement in “matrix form” reads
$\mathcal{M}=\sum_{m}\alpha_{m}\rho(\mathbf{x}^{m})=\sum_{m}\alpha_{m}\left|\phi(\mathbf{x}^{m})\right\rangle\\!\left\langle\phi(\mathbf{x}^{m})\right|,$
(80)
as claimed by the representer theorem.
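The "little trick" of Eq. (73) is easy to verify numerically on random data (the dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 6, 4                  # more samples than features, so X^T X is invertible
X = rng.normal(size=(M, N))  # rows are training inputs, Eq. (72)
y = rng.normal(size=M)       # toy target labels

# Ordinary least squares, Eq. (71): w = (X^T X)^{-1} X^T y
w = np.linalg.solve(X.T @ X, X.T @ y)

# Eq. (73): the same w as a linear combination of the training inputs,
# alpha = X (X^T X)^{-2} X^T y  and  w = X^T alpha = sum_m alpha_m x^m
alpha = X @ np.linalg.solve(X.T @ X, np.linalg.solve(X.T @ X, X.T @ y))
w_expanded = X.T @ alpha
```
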
Of course, it may require a large routine to implement this measurement fully
quantumly, since it involves inverting operators acting on the feature space.
Alternatively one can compute the desired $\\{\alpha_{m}\\}$ classically and
use the quantum computer to just measure the kernel. In the last section we
will see ideas of how to use quantum algorithms to do the inversion, but these
quantum training algorithms are complex enough to require fault-tolerant
quantum computers which we do not have available today.
### VI.3 The kernel defines which models are punished by regularisation
In statistical learning theory, the role of the regulariser in the regularised
empirical risk minimisation problem is to “punish” some functions and favour
others. Above, we specifically looked at regularisers of the form
$\|f\|^{2}_{F}$, $f\in F$, which was shown to be equivalent to minimising the
norm of the measurement (or the length of the vectorised measurement) in
feature space. But what is it exactly that we are penalising here? It turns
out that the kernel does not only fix the space of quantum models themselves,
it also defines which functions are penalised in regularised empirical risk
minimisation problems. This is beautifully described in [27] Section 4.3, and
I will only give a quick overview here.
To understand regularisation, we need to have a closer look at the
regularising term $\|f\|^{2}_{F}=\langle f,f\rangle_{F}$. But with the
construction of the RKHS it actually remains very opaque what this inner
product actually computes. It turns out that for every RKHS $F$ there is a
transformation $\Upsilon:F\rightarrow L_{2}(\mathcal{X})$ that maps functions
in the RKHS to square integrable functions on $\mathcal{X}$. What we gain is a
more intuitive inner product formed by an integral,
$\langle f,f\rangle_{F}=\langle\Upsilon f,\Upsilon
f\rangle_{L_{2}}=\int_{\mathcal{X}}(\Upsilon f(x))^{2}dx.$ (81)
The operator $\Upsilon$ can be understood as extracting the information from
the model $f$ which gets integrated over in the usual $L_{2}$ norm, and hence
penalised during optimisation. For example, for some kernels this can be shown
to be the derivative of functions, and regularisation therefore provably
penalises models with “large” higher-order derivatives, which means it favours
smooth functions.
The important point is that every kernel defines a unique transformation
$\Upsilon$, and therefore a unique kind of regularisation. This is summarised
in Theorem 4.9 in [27], which I will reprint here without proof:
###### Theorem 7 (RKHS and Regularization Operators).
For every RKHS with reproducing kernel $\kappa$ there exists a corresponding
regularization operator $\Upsilon:F\to D$ (where $D$ is an inner product
space) such that for all $f\in F$,
$\langle\Upsilon\kappa(x,\cdot),\Upsilon f(\cdot)\rangle_{D}=f(x),$ (82)
and in particular
$\langle\Upsilon\kappa(x,\cdot),\Upsilon\kappa(x^{\prime},\cdot)\rangle_{D}=\kappa(x,x^{\prime}).$
(83)
Likewise, for every regularization operator $\Upsilon:F\to D$, where $F$ is
some function space equipped with a dot product, there exists a corresponding
RKHS $F$ with reproducing kernel $\kappa$ such that these two equations are
satisfied.
In short, the quantum kernel or data-encoding strategy does not only tell us
about universality and optimal measurements, it also fixes the regularisation
properties in empirical risk minimisation. Which data encoding actually leads
to which regularisation property is still an interesting open question for
research.
### VI.4 Picking the best quantum model is a low-dimensional (convex)
optimisation problem
Besides the representer theorem, a second main achievement of kernel theory is
to recognise that optimising the empirical risk of convex loss functions over
functions in an RKHS can be formulated as a finite-dimensional convex
optimisation problem (or in less cryptic language, optimising over extremely
large spaces is surprisingly easy when we use training data, something noted
in [12] before).
The fact that the optimisation problem is finite-dimensional – and we will see
the dimension is equal to the number of training data – is important, since
the feature spaces in which the model classifies the data are usually very
high-dimensional, and possibly even infinite-dimensional. This is obviously
true for the data-encoding feature space of quantum computations as well –
which is precisely why variational quantum machine learning parametrises
circuits with a small number of trainable parameters instead of optimising
over all unitaries/measurements. But even if we optimise over all quantum
models, the results of this section guarantee that the dimensionality of the
problem is limited by the size of the training data set.
The fact that optimisation is convex means that there is only one global
minimum, and that we have a lot of tools to find it [29], in particular
more tools than mere gradient descent. Convex optimisation problems can be
solved in roughly $\mathcal{O}(M^{2})$ time in the number of training data.
Although prohibitive for large datasets, it makes the optimisation guaranteed
to be tractable (and below we will see that quantum computers could in
principle help to train with a runtime of $\mathcal{O}(M)$).
Let me make the statement more precise. Again, it follows from the fact that
optimising over the RKHS of the quantum kernel is equivalent to optimising
over the space of quantum models.
###### Theorem 8 (Training quantum models can be formulated as a finite-
dimensional convex program).
Let $\mathcal{X}$ be a data domain and $\mathcal{Y}$ an output domain,
$L:\mathcal{X}\times\mathcal{Y}\times\mathbb{R}\rightarrow[0,\infty)$ be a
loss function, $F$ the RKHS of the quantum kernel over a non-empty convex set
$\mathcal{X}$ with the reproducing kernel $\kappa$. Furthermore, let
$\lambda\geq 0$ be a regularisation parameter and
$D=\\{(x^{m},y^{m}),m=1,\dots,M\\}\subset\mathcal{X}\times\mathcal{Y}$ a
training data set. The regularised empirical risk minimisation problem is
finite-dimensional, and if the loss is convex, it is also convex.
###### Proof.
Recall that according to the Representer Theorem 5, the solution to the
regularised empirical risk minimisation problem
$f_{\rm opt}=\underset{f\in F}{\mathrm{argmin}}\,\lambda\|f\|^{2}_{F}+\hat{\mathcal{R}}_{L}(f)$ (84)
has a representation of the form
$f_{\rm opt}(x)=\sum_{m}\alpha_{m}\mathrm{tr}\left[\rho(x^{m})\rho(x)\right].$
(85)
We can therefore write
$\hat{\mathcal{R}}_{L}(f)=\frac{1}{M}\sum_{m}L(x^{m},y^{m},\sum_{m^{\prime}}\alpha_{m^{\prime}}\kappa(x^{m},x^{m^{\prime}})).$
(86)
If the loss $L$ is convex, then this term is also convex, and it is
$M$-dimensional since it only involves the $M$ degrees of freedom
$\alpha_{m}$.
Now let us turn to the regularisation term and try to show the same. Consider
$\|f\|^{2}_{F}=\sum_{m,m^{\prime}}\alpha_{m}\alpha_{m^{\prime}}\mathrm{tr}\left[\rho(x^{m})\rho(x^{m^{\prime}})\right]=\sum_{m,m^{\prime}}\alpha_{m}\alpha_{m^{\prime}}\kappa(x^{m},x^{m^{\prime}})=\bm{\alpha}^{T}\mathbf{K}\bm{\alpha},$
(87)
where $\mathbf{K}\in\mathbb{R}^{M\times M}$ is the kernel matrix or Gram
matrix with entries $K_{m,m^{\prime}}=\kappa(x^{m},x^{m^{\prime}})$, and
$\bm{\alpha}=(\alpha_{1},\dots,\alpha_{M})$ is the vector of coefficients
$\alpha_{m}$. Since $\mathbf{K}$ is by definition of the kernel positive
semi-definite, this term is also convex. Both $\bm{\alpha}$ and $\mathbf{K}$ are
furthermore finite-dimensional.
Together, training a quantum model to find the optimal solution from Eq. (66)
can be done by solving the optimisation problem
$\displaystyle\inf_{\bm{\alpha}\in\mathbb{R}^{M}}\frac{1}{M}\sum_{m}L(x^{m},y^{m},\sum_{m^{\prime}}\alpha_{m^{\prime}}\kappa(x^{m},x^{m^{\prime}}))+\lambda\bm{\alpha}^{T}\mathbf{K}\bm{\alpha},$
(88)
which optimises over $M$ trainable parameters, and is convex for convex loss
functions. ∎
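The finite-dimensional program of Eq. (88) can be handed to any off-the-shelf optimiser. The sketch below uses the logistic loss as an example of a convex $L$ and SciPy's BFGS routine; the kernel, toy data and regularisation strength are hypothetical choices.

```python
import numpy as np
from scipy.optimize import minimize

def qkernel(x, y):
    # Quantum kernel of a single-qubit RY encoding: |<phi(x)|phi(y)>|^2
    return np.cos((x - y) / 2) ** 2

X_train = np.array([0.2, 0.9, 1.7, 2.4])   # hypothetical training inputs
y_train = np.array([1, 1, -1, -1])         # binary labels
M, lam = len(X_train), 0.01

K = np.array([[qkernel(a, b) for b in X_train] for a in X_train])

def objective(alpha):
    # Eq. (88) with the logistic loss as a convex example of L
    f_vals = K @ alpha
    risk = np.mean(np.log1p(np.exp(-y_train * f_vals)))
    return risk + lam * alpha @ K @ alpha

# Optimise over only M trainable parameters alpha_1, ..., alpha_M
res = minimize(objective, np.zeros(M), method="BFGS")
alpha_opt = res.x
```
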
A support vector machine is a special case of kernel-based training which uses
a special convex loss function, namely the hinge loss, for $L$:
$L(f(x),y)=\max(0,1-f(x)y),$ (89)
where one assumes that $y\in\\{-1,1\\}$. As derived in countless textbooks,
the resulting optimisation problem can be constructed from geometric arguments
as maximising the “soft” margin of the closest vectors to a decision boundary.
Under this loss, Eq. (88) reduces to
$\bm{\alpha}_{\rm
opt}=\underset{\bm{\alpha}}{\mathrm{argmax}}\;\sum_{m}\alpha_{m}-\frac{1}{2}\sum_{m,m^{\prime}}\alpha_{m}\alpha_{m^{\prime}}y^{m}y^{m^{\prime}}\kappa(x^{m},x^{m^{\prime}}).$
(90)
Training a support vector machine with hinge loss and a quantum kernel
$\kappa$ is equivalent to finding the general quantum model that minimises the
hinge loss. The “quantum support vector machine” in [4, 5] is therefore not
one of many ideas to build a hybrid classifier, it is a generic blueprint of
how to train quantum models in a kernel-based manner.
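In practice, this means a quantum model with hinge loss can be trained by handing the (device-estimated) Gram matrix to a classical SVM solver. The sketch below uses scikit-learn's `SVC` with a precomputed kernel; the toy data, the analytic stand-in for the quantum kernel, and the choice $C=10$ are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def gram(A, B):
    # Gram matrix of the assumed quantum kernel kappa(x, x') = cos^2((x-x')/2);
    # on hardware these entries would be estimated from measurements.
    return np.cos((A[:, None] - B[None, :]) / 2) ** 2

X_train = np.array([0.2, 0.9, 1.7, 2.4])
y_train = np.array([1, 1, -1, -1])

# Fitting on the precomputed Gram matrix solves the dual problem of Eq. (90)
clf = SVC(kernel="precomputed", C=10.0)
clf.fit(gram(X_train, X_train), y_train)

X_test = np.array([0.5, 2.0])
preds = clf.predict(gram(X_test, X_train))
```

Prediction likewise only needs the kernel between new inputs and the training data, as in Eq. (66).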
## VII Should we switch to kernel-based quantum machine learning?
The fact that quantum models can be formulated as kernel methods with a
quantum kernel raises an important question for current quantum machine
learning research: how do kernel-based models, i.e., solutions to the problem
in Eq. (88), compare to models whose measurements are trained variationally?
Let us revisit Figure 5 in light of the results of the previous section.
We saw in Section VI.4 how kernel-based training optimises the measurement
over a subspace spanned by $M$ encoded training inputs by finding the best
coefficients $\alpha_{m}$, $m=1\dots M$. We also saw in Section VI.2 that this
subspace contains the globally optimal measurement. Variational training
instead optimises over a subspace defined by the parametrised ansatz, which
may or may not overlap with the training-data subspace, and could therefore
not have access to the global optimum. The advantages of kernel-based training
are therefore that we are guaranteed to find the globally optimal measurement
over all possible quantum models. If the loss is convex, the optimisation
problem is furthermore of a favourable structure that comes with a lot of
guarantees about the performance and convergence of optimisation algorithms.
But besides these great properties, in classical machine learning with big
data, kernel methods were superseded by neural networks or approximate kernel
methods [30] because of their poor scaling. Training involves computing the
pair-wise distances between all training data in the Gram matrix of Eq. (88),
which has at least a runtime of $\mathcal{O}(M^{2})$ in the number of training
samples $M$. (Note that this is also true when using the trained model for
predictions, where we need to compute the distance between a new input and
every training input in feature space, as shown in Eq. (66). However, in
maximum margin classifiers, or support vector machines in the stricter sense,
most $\alpha_{m}$ coefficients are zero, and only the distances to a few
“support vectors” are needed.) In contrast, training neural networks takes time
$\mathcal{O}(M)$ that only depends linearly on the number of training samples.
Can the training of variational quantum circuits offer a similar advantage
over kernel-based training?
The answer is that it depends. So far, training variational circuits with
gradient-based methods on hardware is based on so-called parameter-shift rules
[31, 32] instead of backpropagation. This strategy introduces a linear scaling
with the number of parameters $|\theta|$, and the number of circuits that need
to be evaluated to train a variational quantum model therefore grows with
$\mathcal{O}(|\theta|M)$. If the number of parameters in an application grows
sufficiently slowly with the dataset size, variational circuits will almost be
able to match the good scaling behaviour of neural networks, which is an
important advantage over kernel-based training. But if, like in neural
networks, the number of parameters in a variational ansatz grows linearly with
the number of data, variational quantum models end up having the same
quadratic scaling as the kernel-based approach regarding the number of
circuits to evaluate. Practical experiments with $10-20$ parameters and about
$100$ data samples show that the constant overhead of gradient calculations on
hardware make kernel-based training in fact much faster for small-scale
applications (see
https://pennylane.ai/qml/demos/tutorial_kernel_based_training.html). In
addition, variational training offers no guarantee that the final measurement
is optimal, involves high-dimensional non-convex training landscapes, and
carries the additional burden of choosing a good variational ansatz. In
conclusion, the kernel perspective is
not only a powerful and theoretically appealing alternative to think about
quantum machine learning, but may also speed up current quantum machine
learning methods significantly.
As a beautiful example of the mutually beneficial relation of quantum
computing and kernel methods, the story does not end here. While all of the
above is based on models evaluated on a quantum computer but trained
classically, convex optimisation problems happen to be exactly the kind of
thing quantum computers are good at [33]. We can therefore ask whether quantum
models could not in principle be trained by quantum algorithms. “In principle”
alludes to the fact that such algorithms would likely be well beyond the reach
of near-term devices, since training is a more complex affair that requires
fully error-corrected quantum computers which we do not have yet.
The reasons why quantum training could help to lower this scaling are hidden
in results from the early days of quantum machine learning, when quantum-based
training was actively studied in the hope of finding exponential speedups for
classical machine learning [34, 6, 35]. While the exponential speedups only
hold under very strict assumptions about data-loading oracles, the same
results imply quadratic speedups in rather general settings (see also Appendix
B). They can be
summarised as follows: given a feature map implemented by a fault-tolerant
quantum computer, we can train kernel methods in time that grows linearly in
the data. If a kernel can be implemented as a quantum computation (like the
Gaussian kernel [7]), this speedup would also hold for “classical models” –
which are then merely run on a quantum computer.
Of course, fault-tolerant quantum computers may still take many years to
develop and are likely to have a large constant overhead due to the expensive
nature of quantum error correction. But in the longer term, this shows that
the use of quantum computing is not only to implement interesting kernels.
Quantum computers have the potential to become a game changer for kernel-based
machine learning in a similar way to how GPU-accelerated hardware enabled deep
learning.
## Acknowledgements
I want to thank Johannes Jakob Meyer, Nathan Killoran, Olivia di Matteo and
Filippo Miatto, Nicolas Quesada and Ilya Sinayskiy for their time and helpful
comments.
## References
* Wittek [2014] P. Wittek, _Quantum machine learning: what quantum computing means to data mining_ (Academic Press, 2014).
* Biamonte _et al._ [2017] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Quantum machine learning, Nature 549, 195 (2017).
* Schuld and Petruccione [2018] M. Schuld and F. Petruccione, _Supervised learning with quantum computers_ (Springer, 2018).
* Schuld and Killoran [2019] M. Schuld and N. Killoran, Quantum machine learning in feature Hilbert spaces, Physical Review Letters 122, 040504 (2019).
* Havlíček _et al._ [2019] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, Supervised learning with quantum-enhanced feature spaces, Nature 567, 209 (2019).
* Rebentrost _et al._ [2014] P. Rebentrost, M. Mohseni, and S. Lloyd, Quantum support vector machine for big data classification, Physical Review Letters 113, 130503 (2014).
* Chatterjee and Yu [2016] R. Chatterjee and T. Yu, Generalized coherent states, reproducing kernels, and quantum support vector machines, arXiv preprint arXiv:1612.03713 (2016).
* Schuld _et al._ [2018] M. Schuld, A. Bocharov, K. Svore, and N. Wiebe, Circuit-centric quantum classifiers, arXiv preprint arXiv:1804.00633 (2018).
* Liu and Wang [2018] J.-G. Liu and L. Wang, Differentiable learning of quantum circuit born machines, Physical Review A 98, 062324 (2018).
* Blank _et al._ [2020] C. Blank, D. K. Park, J.-K. K. Rhee, and F. Petruccione, Quantum classifier with tailored quantum kernel, npj Quantum Information 6, 1 (2020).
* Liu _et al._ [2020] Y. Liu, S. Arunachalam, and K. Temme, A rigorous and robust quantum speed-up in supervised machine learning, arXiv preprint arXiv:2010.02174 (2020).
* Huang _et al._ [2020] H.-Y. Huang, M. Broughton, M. Mohseni, R. Babbush, S. Boixo, H. Neven, and J. R. McClean, Power of data in quantum machine learning, arXiv preprint arXiv:2011.01938 (2020).
* Kübler _et al._ [2019] J. M. Kübler, K. Muandet, and B. Schölkopf, Quantum mean embedding of probability distributions, Physical Review Research 1, 033159 (2019).
* Lloyd _et al._ [2020] S. Lloyd, M. Schuld, A. Ijaz, J. Izaac, and N. Killoran, Quantum embeddings for machine learning, arXiv preprint arXiv:2001.03622 (2020).
* Benedetti _et al._ [2019a] M. Benedetti, E. Lloyd, S. Sack, and M. Fiorentini, Parameterized quantum circuits as machine learning models, Quantum Science and Technology 4, 043001 (2019a).
* McClean _et al._ [2018] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren plateaus in quantum neural network training landscapes, Nature communications 9, 1 (2018).
* Holmes _et al._ [2021] Z. Holmes, K. Sharma, M. Cerezo, and P. J. Coles, Connecting ansatz expressibility to gradient magnitudes and barren plateaus, arXiv preprint arXiv:2101.02138 (2021).
* Benedetti _et al._ [2019b] M. Benedetti, D. Garcia-Pintos, O. Perdomo, V. Leyton-Ortega, Y. Nam, and A. Perdomo-Ortiz, A generative modeling approach for benchmarking and training shallow quantum circuits, npj Quantum Information 5, 1 (2019b).
* Pérez-Salinas _et al._ [2020] A. Pérez-Salinas, A. Cervera-Lierta, E. Gil-Fuster, and J. I. Latorre, Data re-uploading for a universal quantum classifier, Quantum 4, 226 (2020).
* Steinwart and Christmann [2008] I. Steinwart and A. Christmann, _Support vector machines_ (Springer Science & Business Media, 2008).
* Wolf [2012] M. Wolf, Quantum channels and operations: Guided tour (2012).
* Jagadish and Petruccione [2019] V. Jagadish and F. Petruccione, An invitation to quantum channels, arXiv preprint arXiv:1902.00909 (2019).
* Bergholm _et al._ [2018] V. Bergholm, J. Izaac, M. Schuld, C. Gogolin, M. S. Alam, S. Ahmed, J. M. Arrazola, C. Blank, A. Delgado, S. Jahangiri, _et al._ , Pennylane: Automatic differentiation of hybrid quantum-classical computations, arXiv preprint arXiv:1811.04968 (2018).
* Iten _et al._ [2016] R. Iten, R. Colbeck, I. Kukuljan, J. Home, and M. Christandl, Quantum circuits for isometries, Physical Review A 93, 032318 (2016).
* Vidal and Theis [2019] J. G. Vidal and D. O. Theis, Input redundancy for parameterized quantum circuits, arXiv preprint arXiv:1901.11434 (2019).
* Schuld _et al._ [2020] M. Schuld, R. Sweke, and J. J. Meyer, The effect of data encoding on the expressive power of variational quantum machine learning models, arXiv preprint arXiv:2008.08605 (2020).
* Schölkopf _et al._ [2002] B. Schölkopf, A. J. Smola, F. Bach, _et al._ , _Learning with kernels: support vector machines, regularization, optimization, and beyond_ (MIT press, 2002).
* Cheng _et al._ [2018] S. Cheng, J. Chen, and L. Wang, Information perspective to probabilistic modeling: Boltzmann machines versus born machines, Entropy 20, 583 (2018).
* Boyd _et al._ [2004] S. Boyd, S. P. Boyd, and L. Vandenberghe, _Convex optimization_ (Cambridge university press, 2004).
* Rahimi and Recht [2007] A. Rahimi and B. Recht, Random features for large-scale kernel machines, Advances in neural information processing systems 20, 1177 (2007).
* Mitarai _et al._ [2018] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, Quantum circuit learning, Physical Review A 98, 032309 (2018).
* Schuld _et al._ [2019] M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, and N. Killoran, Evaluating analytic gradients on quantum hardware, Physical Review A 99, 032331 (2019).
* Harrow _et al._ [2009] A. W. Harrow, A. Hassidim, and S. Lloyd, Quantum algorithm for linear systems of equations, Physical Review Letters 103, 150502 (2009).
* Wiebe _et al._ [2012] N. Wiebe, D. Braun, and S. Lloyd, Quantum algorithm for data fitting, Physical Review Letters 109, 050505 (2012).
* Lloyd _et al._ [2014] S. Lloyd, M. Mohseni, and P. Rebentrost, Quantum principal component analysis, Nature Physics 10, 631 (2014).
## Appendix A Proof of Theorem 1
First, note that we can assume without loss of generality that the encoding
generator $G$ is diagonal: diagonalising the Hermitian operator as
$G=V\Sigma V^{\dagger}$ gives $e^{-ix_{i}G}=Ve^{-ix_{i}\Sigma}V^{\dagger}$ with
$e^{-ix_{i}\Sigma}=\begin{pmatrix}e^{-ix_{i}\lambda_{1}}&&0\\\
&\ddots&\\\ 0&&e^{-ix_{i}\lambda_{d}}\end{pmatrix}$ (91)
where $\{\lambda_{1},\dots,\lambda_{d}\}$ are the eigenvalues of $G$.
Formally one can “absorb” $V,V^{\dagger}$ into the arbitrary circuits $W$
before and after the encoding gate. The remainder is just a matter of writing
the matrix multiplications that represent the quantum circuit as a sum in the
computational basis, and trying to introduce notation that hides irrelevant
complexity:
$\displaystyle\kappa(\mathbf{x},\mathbf{x}^{\prime})$
$\displaystyle=|\left\langle\phi(\mathbf{x}^{\prime})\left|\phi(\mathbf{x})\right.\right\rangle|^{2}$
(92) $\displaystyle=\left|\left\langle
0\right|(W^{(1)})^{\dagger}(e^{-ix^{\prime}_{1}\Sigma})^{\dagger}\cdots(e^{-ix^{\prime}_{N}\Sigma})^{\dagger}\underbrace{(W^{(N+1)})^{\dagger}W^{(N+1)}}_{\mathbbm{1}}e^{-ix_{N}\Sigma}\cdots
e^{-ix_{1}\Sigma}W^{(1)}\left|0\right\rangle\right|^{2}$ (93)
$\displaystyle=\left|\left\langle
0\right|(W^{(1)})^{\dagger}(e^{-ix^{\prime}_{1}\Sigma})^{\dagger}\cdots(e^{-ix^{\prime}_{N}\Sigma})^{\dagger}e^{-ix_{N}\Sigma}\cdots
e^{-ix_{1}\Sigma}W^{(1)}\left|0\right\rangle\right|^{2}$ (94)
$\displaystyle=\left|\sum_{j_{1},\dots,j_{N}=1}^{d}\sum_{k_{1},\dots,k_{N}=1}^{d}e^{-i\left(\lambda_{j_{1}}x_{1}-\lambda_{k_{1}}x^{\prime}_{1}+\dots+\lambda_{j_{N}}x_{N}-\lambda_{k_{N}}x^{\prime}_{N}\right)}\left(W^{(1)}_{1k_{1}}\dots
W^{(N)}_{k_{N-1}k_{N}}\right)^{*}W^{(N)}_{j_{N}j_{N-1}}\dots
W^{(1)}_{j_{1}1}\right|^{2}$ (95)
$\displaystyle=\left|\sum_{\mathbf{j}}\sum_{\mathbf{k}}e^{-i\left(\Lambda_{\mathbf{j}}\mathbf{x}-\Lambda_{\mathbf{k}}\mathbf{x}^{\prime}\right)}(w_{\mathbf{k}})^{*}w_{\mathbf{j}}\right|^{2}$
(96)
$\displaystyle=\sum_{\mathbf{j}}\sum_{\mathbf{k}}\sum_{\mathbf{h}}\sum_{\mathbf{l}}e^{-i\left(\Lambda_{\mathbf{j}}-\Lambda_{\mathbf{l}}\right)\mathbf{x}}e^{i\left(\Lambda_{\mathbf{k}}-\Lambda_{\mathbf{h}}\right)\mathbf{x}^{\prime}}(w_{\mathbf{k}}w_{\mathbf{h}})^{*}w_{\mathbf{j}}w_{\mathbf{l}}$
(97)
Here, the scalars $W^{(i)}_{ab}$, $i=1,\dots,N$, refer to the element
$\left\langle a\right|W^{(i)}\left|b\right\rangle$ of the unitary operator
$W^{(i)}$, the bold multi-index $\mathbf{j}$ summarises the set
$(j_{1},\dots,j_{N})$ where $j_{i}\in\\{1,\dots,d\\}$ and
$\Lambda_{\mathbf{j}}$ is a vector containing the eigenvalues selected by the
multi-index (and similarly for $\mathbf{k},\mathbf{h},\mathbf{l}$).
We can now collect all terms for which
$\Lambda_{\mathbf{j}}-\Lambda_{\mathbf{l}}=\mathbf{s}$ and
$\Lambda_{\mathbf{k}}-\Lambda_{\mathbf{h}}=\mathbf{t}$, in other words, terms
where the differences of eigenvalues amount to the same vectors
$\mathbf{s},\mathbf{t}$. Then
$\displaystyle\kappa(\mathbf{x},\mathbf{x}^{\prime})$
$\displaystyle=\sum_{\mathbf{s},\mathbf{t}\in\Omega}e^{-i\mathbf{s}\mathbf{x}}e^{i\mathbf{t}\mathbf{x}^{\prime}}\sum_{\mathbf{j},\mathbf{l}|\Lambda_{\mathbf{j}}-\Lambda_{\mathbf{l}}=\mathbf{s}}\;\sum_{\mathbf{k},\mathbf{h}|\Lambda_{\mathbf{k}}-\Lambda_{\mathbf{h}}=\mathbf{t}}w_{\mathbf{j}}w_{\mathbf{l}}(w_{\mathbf{k}}w_{\mathbf{h}})^{*}$
(98)
$\displaystyle=\sum_{\mathbf{s},\mathbf{t}\in\Omega}e^{-i\mathbf{s}\mathbf{x}}e^{i\mathbf{t}\mathbf{x}^{\prime}}c_{\mathbf{st}}.$
(99)
The frequency set $\Omega$ contains all vectors
$\\{\Lambda_{\mathbf{j}}-\Lambda_{\mathbf{k}}\\}$ with
$\Lambda_{\mathbf{j}}=(\lambda_{j_{1}},\dots,\lambda_{j_{N}})$,
$j_{1},\dots,j_{N}\in\{1,\dots,d\}$. Let me illustrate this rather unwieldy
notation with our standard example of encoding a real scalar input $x$ via a
Pauli-X rotation.
###### Example A.1.
Consider the embedding from Example III.2. We have
$W^{(1)}=W^{(2)}=\mathbbm{1}$. With an eigendecomposition one can
write the rotation operator as
$R_{x}(x)=e^{-ix\frac{1}{2}\sigma_{x}}=V^{\dagger}e^{-ix\frac{1}{2}\Sigma}V,$
(100)
with
$V=\frac{1}{\sqrt{2}}\begin{pmatrix}1&-1\\\ 1&1\end{pmatrix}.$ (101)
The unitary operators $V,V^{\dagger}$ can be absorbed into the general
unitaries applied before and after the encoding, which sets
$W^{(1)}=V$ and $W^{(2)}=V^{\dagger}$. The remaining $\frac{1}{2}\Sigma$ is a
diagonal operator with eigenvalues
$\{\lambda_{1}=-\frac{1}{2},\lambda_{2}=\frac{1}{2}\}$. We get
$\kappa(x,x^{\prime})=\left|\sum_{j=1}^{2}\sum_{k=1}^{2}\sum_{i=1}^{2}e^{-i(\lambda_{j}x-\lambda_{k}x^{\prime})}\;(V_{k1})^{*}V_{ki}(V_{ji})^{*}V_{j1}\right|^{2}.$
(102)
Due to unitarity, inner products of different rows/columns of $V$,
$V^{\dagger}$ vanish, so $\sum_{i=1}^{2}V_{ki}(V_{ji})^{*}=\delta_{kj}$,
leading to
$\displaystyle\kappa(x,x^{\prime})$
$\displaystyle=\left|\sum_{j=1}^{2}e^{-i\lambda_{j}(x-x^{\prime})}\;(V_{j1})^{*}V_{j1}\right|^{2}$
(103)
$\displaystyle=\left|e^{-i\lambda_{1}(x-x^{\prime})}\;(V_{11})^{*}V_{11}+e^{-i\lambda_{2}(x-x^{\prime})}\;(V_{21})^{*}V_{21}\right|^{2}$
(104)
$\displaystyle=\left|\frac{1}{2}e^{i\frac{x-x^{\prime}}{2}}+\frac{1}{2}e^{-i\frac{x-x^{\prime}}{2}}\right|^{2}$
(105) $\displaystyle=|\cos\left(\frac{x-x^{\prime}}{2}\right)|^{2}$ (106)
$\displaystyle=\cos^{2}\left(\frac{x-x^{\prime}}{2}\right).$ (107)
This is the same result as in the “straight” computation from Eq. (38).
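The example can also be checked numerically. The sketch below builds $R_{x}$ directly from the identity $e^{-i\theta\sigma_{x}/2}=\cos(\theta/2)\mathbbm{1}-i\sin(\theta/2)\sigma_{x}$ and confirms that the overlap kernel reproduces $\cos^{2}((x-x^{\prime})/2)$; it is a verification aid, not part of the proof.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rx(theta):
    # e^{-i theta sigma_x / 2} = cos(theta/2) I - i sin(theta/2) sigma_x
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sx

def kernel(x, xp):
    # |<phi(x')|phi(x)>|^2 for the feature states phi(x) = R_x(x)|0>
    zero = np.array([1, 0], dtype=complex)
    return abs(np.vdot(rx(xp) @ zero, rx(x) @ zero)) ** 2

x, xp = 0.7, -1.3
assert np.isclose(kernel(x, xp), np.cos((x - xp) / 2) ** 2)
```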
## Appendix B Convex optimisation with quantum computers
The family of quantum algorithms for convex optimisation in machine learning
has many variations, but all of them build on results that establish fast
linear-algebra routines for quantum computers. These algorithms are very
technical in design, which is why they may not be easily accessible to many
machine learning researchers (or, in fact, to anyone who has not spent years
studying quantum computational complexity). This is why I will
only summarise the results from a high-level perspective here.
* •
Given access to a quantum algorithm that encodes data into quantum states, we
can prepare a mixed quantum state $\rho$ representing an $M\times M$ kernel
Gram matrix in time $\mathcal{O}(MN)$, where $N$ is the size of the inputs
$\mathbf{x}\in\mathbb{R}^{N}$ (see [6] or [3] Section 6.2.5),
* •
We can prepare a quantum state $\left|\mathbf{y}\right\rangle$ representing
$M$ binary labels as amplitudes in time $\mathcal{O}(M)$ (see for example
[24], or [3] Section 5.2.1).
* •
Given $\left|\mathbf{y}\right\rangle$, as well as
$k\in\mathcal{O}(\epsilon^{-1})$ “copies” of $\rho(\mathbf{x})$ (meaning that
we have to repeat the first step $k$ times), we can prepare
$\left|\bm{\alpha}\right\rangle=\rho^{-1}(\mathbf{x})\left|\mathbf{y}\right\rangle$,
a state whose amplitudes correspond to the coefficients $\bm{\alpha}$ in
Theorem (8), to precision $\epsilon$ in time $\mathcal{O}(k\log d)$, where $d$
is the rank of $\rho$ (see [35], where this quantum algorithm was called
“quantum principal component analysis”, or [3] Section 5.4.3).
* •
We can estimate the amplitudes of $\left|\bm{\alpha}\right\rangle$ in time
$\mathcal{O}(S/\tilde{\epsilon}^{2})$ to precision $\tilde{\epsilon}$, where
$S\leq M$ is the number of nonzero amplitudes (following from standard
probability theory applied to quantum measurements, or [3] Section 5.1.3).
Overall, this is a recipe to compute the $S$ coefficients of the support
vectors in time that is linear in the number of data points, a feat that is
unlikely to be possible with a classical computer, at least not without
imposing more structure on the problem, or allowing for heuristic results.
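To ground what these routines compute, here is the classical counterpart of the final two steps, written as a plain linear solve. The Gaussian kernel and the small ridge term are illustrative choices, not part of the quantum algorithms; classically this solve costs $\mathcal{O}(M^{3})$, which is the scaling the quantum routines aim to beat.

```python
import numpy as np

def gaussian_gram(X, gamma=1.0):
    # K_ij = exp(-gamma * ||x_i - x_j||^2), the M x M kernel Gram matrix
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))          # M = 8 inputs in R^2
y = np.sign(X[:, 0])                 # toy binary labels

K = gaussian_gram(X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)  # O(M^3) classically
# The quantum routine instead prepares |alpha> proportional to rho^{-1}|y>
# in time roughly linear in M, under the stated data-loading assumptions.
```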
# SDSS-IV MaNGA: the “G-dwarf problem” revisited
Michael J. Greener,1 Michael Merrifield,1 Alfonso Aragón-Salamanca,1 Thomas
Peterken,1 Brett Andrews,2 and Richard R. Lane3
1School of Physics & Astronomy, University of Nottingham, University Park,
Nottingham, NG7 2RD, UK
2Department of Physics and Astronomy, University of Pittsburgh, 3941 O’Hara
Street, Pittsburgh, Pennsylvania 15260, USA
3Instituto de Astronomía y Ciencias Planetarias de Atacama, Universidad de
Atacama, Copayapu 485, Copiapó, Chile
E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The levels of heavy elements in stars are the product of enhancement by
previous stellar generations, and the distribution of this metallicity among
the population contains clues to the process by which a galaxy formed. Most
famously, the “G-dwarf problem” highlighted the small number of low-
metallicity G-dwarf stars in the Milky Way, which is inconsistent with the
simplest picture of a galaxy formed from a “closed box” of gas. It can be
resolved by treating the Galaxy as an open system that accretes gas throughout
its life. This observation has classically only been made in the Milky Way,
but the availability of high-quality spectral data from SDSS-IV MaNGA and the
development of new analysis techniques mean that we can now make equivalent
measurements for a large sample of spiral galaxies. Our analysis shows that
high-mass spirals generically show a similar deficit of low-metallicity stars,
implying that the Milky Way’s history of gas accretion is common. By contrast,
low-mass spirals show little sign of a G-dwarf problem, presenting the
metallicity distribution that would be expected if such systems evolved as
pretty much closed boxes. This distinction can be understood from the
differing timescales for star formation in galaxies of differing masses.
###### keywords:
galaxies: spiral – galaxies: evolution – galaxies: abundances
pubyear: 2020
## 1 Introduction
Almost all of the elements heavier than helium that we find in our galaxy’s
stars, the “metals”, are there because these objects incorporate matter
recycled from previous stellar generations, with stars born early on
containing less of this enhanced material (Schmidt, 1963; Talbot & Arnett,
1971; Tinsley, 1980). There are thus clues to the star-formation history of
the Galaxy encoded in the distribution of the metallicity that we find in its
stars (Talbot & Arnett, 1971). This phenomenon can be most simply quantified
by the cumulative metallicity distribution function (CMDF), which is just the
total mass in stars in which the heavy element fraction is less than $Z$,
$M_{*}(<Z)$.
Such a simple distribution clearly does not contain the full life history of
the Galaxy’s star formation and gas recycling, but it is sufficiently robust
to make quite strong statements about its past history. For example, if the
Milky Way formed in isolation from a single initial gas cloud of mass
$M_{\rm{gas,}\>0}$, with enhanced material well mixed in as it is
recycled (throughout this work, we adopt the instantaneous recycling
approximation, which assumes that metals are expelled by a generation of stars
immediately after these stars form; see Binney & Merrifield, 1998 Section
5.3.1), a scenario termed the “closed box” model of chemical evolution
(Talbot & Arnett, 1971; Tinsley, 1974), then the CMDF takes the simple form
$M_{*}\left(<Z\right)=M_{\rm{gas,}\>0}\left[1-\exp{\left(-Z/p\right)}\right],$
(1)
where $p$ is a parameter that defines the yield of heavy elements created by
each generation of stars (see, for example, Binney & Merrifield, 1998 Section
5.3.1). An illustration of the resulting function is shown in Figure 1. A
conflict between this model and observation was first noted by van den Bergh
(1962), who pointed out that the Milky Way contains many fewer low-metallicity
G-dwarf stars than the steep initial rise in this function predicts. This
“G-dwarf problem” has subsequently been observed in populations of K dwarfs
(Casuso & Beckman, 2004) and M dwarfs (Mould, 1978; Woolf & West, 2012; Woolf
& Wallerstein, 2020), and seen both in the Solar neighbourhood (e.g. Rocha-
Pinto & Maciel, 1996; Gratton et al., 1996; Chiappini et al., 1996; Holmberg
et al., 2007) and throughout the Galaxy (e.g. Chiappini et al., 2001; Hayden
et al., 2015), so is clearly a substantive issue.
Figure 1: Simple model CMDFs showing the fractional mass of stars that have a
metallicity less than $Z$ for a closed box (blue) and an accreting box (red).
Characteristically, the yield $p$ of a generation of star formation is of
order the value of Solar metallicity ($Z_{\odot}=0.0148$; Lodders, 2019); for
these models, we adopt yields of one-third $Z_{\odot}$ for the closed box, and
three times $Z_{\odot}$ for the accreting box. These yield values are not
physically motivated, but have been selected simply for illustrative purposes.
In essence, the problem is that by the time a closed box has built up
sufficient heavy elements to make stars with high metallicity, there is very
little gas left to make new stars, so it will always produce the majority of
its stars at low metallicities. A variety of mechanisms have been invoked to
seek to resolve the G-dwarf problem (for an extensive list of proposed
solutions to the problem, see Pagel, 2009 Section 8.4). However, conceptually
the simplest solution – and the most widely accepted – is to introduce a
steady stream of pristine gas to the galaxy, the “accreting box” model
(Tinsley, 1974, 1980). In this case, the CMDF can be shown to be
$M_{*}\left(<Z\right)=-M_{\rm{gas}}\left[\ln{\left(1-Z/p\right)}\right],$ (2)
where $M_{\rm{gas}}$ is a constant (Binney & Merrifield, 1998 Section 5.3.3).
As can be seen from Figure 1, the constant addition of new gas provides the
raw material necessary for more star formation at later times, tipping the
balance in favour of high-metallicity stars. The resulting change in the shape
of the CMDF has been found to largely eliminate the G-dwarf problem both in
the Solar neighbourhood (Gratton et al., 1996; Chiappini et al., 1996) and
across the entire Galaxy (Chiappini et al., 2001; Hayden et al., 2015).
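The contrast between Eqs. (1) and (2) is easy to reproduce numerically. The short sketch below evaluates both CMDFs with the illustrative yields from the Figure 1 caption and compares the fraction of stellar mass formed below $Z_{\odot}/4$; the normalisation at $Z=Z_{\odot}$ is an arbitrary choice made only for this comparison.

```python
import numpy as np

Z_sun = 0.0148                       # Lodders (2019)

def closed_box(Z, p, M_gas0=1.0):
    # Eq. (1): M_*(<Z) = M_gas,0 [1 - exp(-Z/p)]
    return M_gas0 * (1.0 - np.exp(-Z / p))

def accreting_box(Z, p, M_gas=1.0):
    # Eq. (2): M_*(<Z) = -M_gas ln(1 - Z/p), valid for Z < p
    return -M_gas * np.log(1.0 - Z / p)

# Yields from the Figure 1 caption: Z_sun/3 (closed), 3 Z_sun (accreting).
Z_lo, Z_max = 0.25 * Z_sun, Z_sun
frac_closed = closed_box(Z_lo, Z_sun / 3) / closed_box(Z_max, Z_sun / 3)
frac_accreting = accreting_box(Z_lo, 3 * Z_sun) / accreting_box(Z_max, 3 * Z_sun)
# The closed box puts a much larger share of its stars at low metallicity.
```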
While such a scenario is reassuring for our understanding of the Milky Way, we
lack the context to know where our galaxy fits into the wider picture of
chemical enrichment. Although metallicity distribution functions can be
produced from analysis of resolved stellar populations in Local Group galaxies
(e.g. Escala et al., 2018; Manning & Cole, 2018; Gilbert et al., 2019), for
more distant unresolved galaxies all that we know for sure is that the average
stellar metallicities of less massive galaxies are lower (e.g. Gallazzi et
al., 2005; Panter et al., 2008). It therefore remains unclear where the Milky
Way lies relative to its spiral galaxy peers in terms of its CMDF.
Fortunately, as recent work by Mejía-Narváez et al. (2020) indicates, the
wealth of data obtained by integral field unit (IFU) surveys in the past few
years means that we are now in a position to address this question.
Observations from the Mapping Nearby Galaxies at Apache Point Observatory
(MaNGA) project (Bundy et al., 2015) have provided spectra right across the
faces of thousands of nearby galaxies. Spectral synthesis fitting with codes
such as STARLIGHT (Cid Fernandes et al., 2005) can then be used to decompose
such spectra into their component stellar populations of differing ages and
metallicities. By integrating across all ages and co-adding all the spatial
data for each galaxy, we can reconstruct the CMDFs of these spiral systems for
comparison with the Milky Way. Clearly, collapsing all this data into a single
one-dimensional function is not making full use of all of the information that
it contains, but it does offer a simple robust metric of the global metal
content of a spiral galaxy. While the quality of the reconstructed CMDFs may
not be as high as for our own galaxy, it should be more than adequate to
distinguish between the very different functions of Figure 1, providing an
overview of the metallicity evolution of a complete sample of spiral galaxies
in the local Universe.
## 2 Data and Analysis
### 2.1 The MaNGA Survey
MaNGA (Bundy et al., 2015) is part of the fourth generation of the Sloan
Digital Sky Survey (SDSS-IV; Blanton et al., 2017), and has recently completed
its mission to acquire spectroscopic observations for 10000 nearby galaxies
(Yan et al., 2016b; Wake et al., 2017). The MaNGA survey thus represents a
complete sample of these systems in the local Universe. Using hexagonal IFU
fibre bundles (Law et al., 2015) to feed into a spectrograph (Smee et al.,
2013; Drory et al., 2015) mounted on the $2.5\,{\rm m}$ telescope at Apache
Point Observatory (Gunn et al., 2006), spectra were obtained across the face
of each galaxy out to at least 1.5 effective radii, capturing most of the
light from each system. The raw data were reduced and calibrated (Yan et al.,
2016a) by the Data Reduction Pipeline (DRP; Law et al., 2016), before being
processed through the Data Analysis Pipeline (DAP; Westfall et al., 2019;
Belfiore et al., 2019) to create the data products employed here.
### 2.2 Sample Selection
Since the intent of this paper is to place the Milky Way metallicity data in
context, we need to select a sample of comparable spiral galaxies from the
full MaNGA data set. Fortunately, the citizen science project Galaxy Zoo 2
(GZ2; Willett et al., 2013) provides robust classifications of galaxies upon
which we can draw. The process that we follow is essentially identical to that
described in Peterken et al. (2020), except that we make use of the more
current ninth MaNGA Product Launch (MPL-9) data. The reasoning behind the
method adopted here is described in more detail by Willett et al. (2013) and
Hart et al. (2016).
GZ2 classifications are available for a total of 7330 MPL-9 galaxies. From
this sample, we first reject 58 galaxies which were flagged by GZ2 as obscured
by a star or other artifact. We then ensure each galaxy has a spiral
morphology: following the recommendations of Willett et al. (2013) we require
that $>43\%$ of $N\geq 20$ respondents observed either spiral features or a
disk in the galaxy. This requirement reduces the sample to 5255 potentially
spiral galaxies. Since we are seeking a clean sample of spiral systems, we
retain only those which are oriented reasonably face-on so that their spiral
structure is apparent. Again following Willett et al. (2013), we require that
$>80\%$ of $N\geq 20$ respondents determine that each galaxy is not edge-on,
and we also implement a cut based on the photometric axis ratios of the
galaxies such that $\frac{b}{a}\geq 0.5$, which is equivalent to an
inclination of $i\leq 60^{\circ}$. This constraint is slightly more stringent than that
suggested by Hart et al. (2017), as discussed by Peterken et al. (2020), and
leaves a sample of 1641 reasonably face-on spiral galaxies. Finally, we remove
a further 166 galaxies that were flagged for poor data quality by the DRP or
had for any reason failed to produce the necessary DAP data sets.
Collectively, these criteria produce the final clean sample of 1475 face-on
spiral galaxies that are analysed in this work. The galaxies in this final
sample have a median redshift of $z=0.037$. We also note that none of the
results depend at all sensitively on the exact sample selection criteria.
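The selection chain above can be summarised as a single boolean cut. The field names below are hypothetical stand-ins for the actual GZ2 and DRP catalogue columns, and the two-row catalogue exists only to exercise the cuts; the thresholds are the ones quoted in the text.

```python
# Illustrative catalogue rows; field names are hypothetical stand-ins
# for the actual GZ2/MaNGA catalogue columns.
catalogue = [
    {"artifact": False, "n_votes": 35, "f_spiral": 0.60,
     "f_not_edge_on": 0.90, "ba": 0.70, "bad_drp": False},   # passes all cuts
    {"artifact": False, "n_votes": 25, "f_spiral": 0.30,
     "f_not_edge_on": 0.95, "ba": 0.80, "bad_drp": False},   # fails spiral cut
]

def is_clean_spiral(g):
    return (not g["artifact"]                                 # no obscuration
            and g["n_votes"] >= 20 and g["f_spiral"] > 0.43   # spiral/disk votes
            and g["f_not_edge_on"] > 0.80 and g["ba"] >= 0.5  # face-on cuts
            and not g["bad_drp"])                             # DRP quality

clean = [g for g in catalogue if is_clean_spiral(g)]          # keeps one galaxy
```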
### 2.3 Spectral Fitting
The stellar evolution histories of the sample galaxies were determined using
the full-spectrum stellar population fitting code STARLIGHT (Cid Fernandes et
al., 2005). STARLIGHT essentially derives a best fit to each spectrum by
combining a set of templates of differing ages and metallicities; the process
is very similar to that employed by Greener et al. (2020), and is explained in
detail by Peterken et al. (2020). Here, we summarise the main steps relevant
to this work.
After removing any emission lines using the MaNGA DAP and shifting to zero
redshift, each spectrum is fitted using a linear combination of the single
stellar population (SSP) E-MILES templates of Vazdekis et al. (2016). The
E-MILES library of SSP templates is based on the earlier MILES library
(Vazdekis et al., 2010), and we adopt a Chabrier (2003) initial mass function
(IMF), the “Padova” isochrones of Girardi et al. (1999), and an appropriately
metallicity-scaled value for alpha-element enrichment. The E-MILES templates
incorporate nine ages $(\log(\rm
age/yr)=7.85,\allowbreak\>8.15,\>8.45,\>8.75,\>9.05,\>9.35,\>9.65,\>9.95,\>10.25)$
and six metallicities $([\rm
M/H]=-1.71,\>-1.31,\>-0.71,\>-0.40,\>+0.00,\>+0.22)$. Template logarithmic
values, $\rm[M/H]$, are then converted to metallicity $Z=Z_{\odot}\times
10^{\rm[M/H]}$. To reproduce younger stellar populations, we include an
additional six ages $(\log(\rm
age/yr)=6.8,\>6.9,\>7.0,\allowbreak\>7.2,\>7.4,\>7.6)$ and two metallicities
$([\rm M/H]=-0.41,\>+0.00)$ from the templates of Asa’d et al. (2017). Apart
from adopting the slightly different Bertelli et al. (1994) isochrones, these
younger templates were generated using exactly the same method as the E-MILES
templates. We use the STARLIGHT configuration settings which prioritise
robustness over computation times, following the recommendations of Ge et al.
(2018) and Cid Fernandes (2018), and as fully described and tested by Peterken
et al. (2020, including Appendix A).
The result of this fitting process for every spaxel across the face of a
spiral galaxy is a set of weights for the mass contribution made by each SSP
to the light seen in that spectrum. Co-adding the results from each spaxel
then gives a fit to the integrated light from the entire galaxy, with
contributions from SSPs spanning the two-dimensional parameter space of
metallicity and age. Adding the contributions from SSPs of different ages
reduces the data to a one-dimensional function of the contribution from stars
of different metallicities to the total mass of that galaxy. Finally, adding
together all the contributions from templates with metallicities less than $Z$
produces the required CMDF for the galaxy, $M_{*}(<Z)$.
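The reduction from fitted SSP weights to a CMDF is a marginalisation followed by a cumulative sum, sketched below. The weight array is random toy data standing in for STARLIGHT output; only the metallicity grid (the E-MILES values quoted above) and the $Z=Z_{\odot}\times 10^{[\rm M/H]}$ conversion are taken from the text.

```python
import numpy as np

Z_sun = 0.0148
MH_grid = np.array([-1.71, -1.31, -0.71, -0.40, 0.00, 0.22])  # E-MILES [M/H]
Z_grid = Z_sun * 10.0 ** MH_grid                              # Z = Z_sun 10^[M/H]

# Hypothetical STARLIGHT mass weights on a (9 ages) x (6 metallicities) grid,
# already co-added over all spaxels of one galaxy.
rng = np.random.default_rng(1)
weights = rng.random((9, MH_grid.size))

mass_per_Z = weights.sum(axis=0)      # marginalise over age
cmdf = np.cumsum(mass_per_Z)          # M_*(<Z), stepping up at each template Z
cmdf_norm = cmdf / cmdf[-1]           # normalised by total stellar mass
```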
## 3 Results and Discussion
The resulting CMDFs are presented in Figure 2. In order to investigate any
trend with galaxy mass, we have combined the galaxies into five
logarithmically-spaced mass bins, normalised each galaxy by its total stellar
mass, and calculated the median normalised CMDF within each bin. The step-like
nature of the resulting cumulative functions reflects the relatively small
number of template metallicities used in the fitting process, which, in turn,
is determined by the limited amount of information that can be derived when
decomposing such integrated spectral data.
Figure 2: CMDFs for the spiral galaxies in the MaNGA sample, binned by stellar
mass. The histograms show the median value for the CMDF within each mass bin,
normalised by the total mass of each galaxy.
It is immediately apparent from Figure 2 that the shape of a galaxy’s CMDF
depends strongly on stellar mass. Higher mass galaxies show a steepening CMDF,
indicating a relative paucity of low-metallicity stars. Like their kin the
Milky Way – a galaxy of stellar mass ${\sim}5\times 10^{10}\ \rm M_{\odot}$
(McMillan, 2017) – they show a G-dwarf problem, which, as comparison to Figure 1
confirms, is resolved if these systems are modelled as accreting boxes. By
contrast, spiral galaxies with stellar masses of less than $10^{10}\ \rm
M_{\odot}$ show a rapid initial rise in $M_{*}(<Z)$, reflecting their much
greater proportion of low-metallicity stars, and matching rather well to the
closed box model shown in Figure 1. This finding builds on the significance of
the much smaller sample studied by Mejía-Narváez et al. (2020), who found
evidence that the distribution of metallicities is broader in lower mass
spiral galaxies. It also fits with what has already been gleaned from the
other axis in this population decomposition of MaNGA data, the time evolution
of star formation, in which it was found that more massive spiral galaxies
formed most of their stars in a relatively short period of time, whereas the
less massive spiral systems have been slowly but steadily forming their
stellar content over most of the lifetime of the Universe (Peterken et al.
2021, _submitted_). It would appear that in the more massive galaxies, in
order to keep up with the demand for gas to make more stars, largely unmixed
pristine gas is pulled in to add to material enriched by the previous
generations, making them produce a much larger fraction of high-metallicity
stars in what is effectively an accreting box system. By contrast, the more
leisurely star formation rate of the lower mass spirals affords them the
opportunity to mix recycled gas thoroughly between stellar generations, making
them behave as close to closed boxes. While the Milky Way is entirely typical
of spiral galaxies of its size in displaying the G-dwarf problem caused by
such systems’ rush to make stars, lower-mass spiral galaxies avoid the issue
by taking their time.
## Data Availability
This publication uses the team-internal MPL-9 MaNGA science data products. The
full sample of data used here will be publicly released in 2021 as part of
SDSS DR17.
## Acknowledgements
We thank the anonymous referee for their very positive comments and
suggestions which have improved this manuscript.
This research was supported by funding from the Science and Technology
Facilities Council (STFC).
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P.
Sloan Foundation, the U.S. Department of Energy Office of Science, and the
Participating Institutions. SDSS acknowledges support and resources from the
Center for High-Performance Computing at the University of Utah. The SDSS
website is www.sdss.org.
SDSS is managed by the Astrophysical Research Consortium for the Participating
Institutions of the SDSS Collaboration including the Brazilian Participation
Group, the Carnegie Institution for Science, Carnegie Mellon University, the
Chilean Participation Group, the French Participation Group, Harvard-
Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The
Johns Hopkins University, Kavli Institute for the Physics and Mathematics of
the Universe (IPMU) / University of Tokyo, the Korean Participation Group,
Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik
Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-
Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für
Extraterrestrische Physik (MPE), National Astronomical Observatories of China,
New Mexico State University, New York University, University of Notre Dame,
Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State
University, Shanghai Astronomical Observatory, United Kingdom Participation
Group, Universidad Nacional Autónoma de México, University of Arizona,
University of Colorado Boulder, University of Oxford, University of
Portsmouth, University of Utah, University of Virginia, University of
Washington, University of Wisconsin, Vanderbilt University, and Yale
University.
This research made use of Astropy (Robitaille et al., 2013); Marvin (Cherinka
et al., 2018); Matplotlib (Hunter, 2007); NumPy (van der Walt et al., 2011);
SciPy (Virtanen et al., 2020); and TOPCAT (Taylor, 2005).
## References
* Asa’d et al. (2017) Asa’d R. S., Vazdekis A., Cervino M., Noel N. E. D., Beasley M. A., Kassab M., 2017, MNRAS, 471, 3599
* Belfiore et al. (2019) Belfiore F., et al., 2019, AJ, 158, 160
* Bertelli et al. (1994) Bertelli G., Bressan A., Chiosi C., Fagotto F., Nasi E., 1994, A&AS, 106, 275
* Binney & Merrifield (1998) Binney J., Merrifield M., 1998, Galactic Astronomy. Princeton University Press, Princeton
* Blanton et al. (2017) Blanton M. R., et al., 2017, AJ, 154, 28
* Bundy et al. (2015) Bundy K., et al., 2015, ApJ, 798, 7
* Casuso & Beckman (2004) Casuso E., Beckman J. E., 2004, A&A, 419, 181
* Chabrier (2003) Chabrier G., 2003, PASP, 115, 763
* Cherinka et al. (2018) Cherinka B., et al., 2018, AJ, 158, 74
* Chiappini et al. (1996) Chiappini C., Matteucci F., Gratton R., 1996, ApJ, 477, 765
* Chiappini et al. (2001) Chiappini C., Matteucci F., Romano D., 2001, ApJ, 554, 1044
* Cid Fernandes (2018) Cid Fernandes R., 2018, MNRAS, 480, 4480
* Cid Fernandes et al. (2005) Cid Fernandes R., Mateus A., Sodré L., Stasińska G., Gomes J. M., 2005, MNRAS, 358, 363
* Drory et al. (2015) Drory N., et al., 2015, AJ, 149, 77
* Escala et al. (2018) Escala I., et al., 2018, MNRAS, 474, 2194
* Gallazzi et al. (2005) Gallazzi A., Charlot S., Brinchmann J., White S. D., Tremonti C. A., 2005, MNRAS, 362, 41
* Ge et al. (2018) Ge J., Yan R., Cappellari M., Mao S., Li H., Lu Y., 2018, MNRAS, 478, 2633
* Gilbert et al. (2019) Gilbert K. M., Kirby E. N., Escala I., Wojno J., Kalirai J. S., Guhathakurta P., 2019, ApJ, 883, 128
* Girardi et al. (1999) Girardi L., Bressan A., Bertelli G., Chiosi C., 1999, A&AS, 141, 371
* Gratton et al. (1996) Gratton R., Carretta E., Matteucci F., Sneden C., 1996, ASPC, 92, 307
* Greener et al. (2020) Greener M. J., et al., 2020, MNRAS, 495, 2305
* Gunn et al. (2006) Gunn J. E., et al., 2006, AJ, 131, 2332
* Hart et al. (2016) Hart R. E., et al., 2016, MNRAS, 461, 3663
* Hart et al. (2017) Hart R. E., et al., 2017, MNRAS, 472, 2263
* Hayden et al. (2015) Hayden M. R., et al., 2015, ApJ, 808, 132
* Holmberg et al. (2007) Holmberg J., Nordstrom B., Andersen J., 2007, A&A, 475, 519
* Hunter (2007) Hunter J. D., 2007, Comput. Sci. Eng., 9, 90
* Law et al. (2015) Law D. R., et al., 2015, AJ, 150, 19
* Law et al. (2016) Law D. R., et al., 2016, AJ, 152, 83
* Lodders (2019) Lodders K., 2019, eprint (arXiv:1912.00844)
* Manning & Cole (2018) Manning E. M., Cole A. A., 2018, MNRAS, 471, 4194
* McMillan (2017) McMillan P. J., 2017, MNRAS, 465, 76
* Mejía-Narváez et al. (2020) Mejía-Narváez A., Sánchez S. F., Lacerda E. A. D., Carigi L., Galbany L., Husemann B., García-Benito R., 2020, MNRAS, 499, 4838
* Mould (1978) Mould J. R., 1978, ApJ, 226, 923
* Pagel (2009) Pagel B. E. J., 2009, Nucleosynthesis and Chemical Evolution of Galaxies, 2nd edn. Cambridge University Press, Cambridge
* Panter et al. (2008) Panter B., Jimenez R., Heavens A. F., Charlot S., 2008, MNRAS, 391, 1117
* Peterken et al. (2020) Peterken T., Merrifield M., Aragón-Salamanca A., Fraser-McKelvie A., Avila-Reese V., Riffel R., Knapen J., Drory N., 2020, MNRAS, 495, 3387
* Robitaille et al. (2013) Robitaille T. P., et al., 2013, A&A, 558
* Rocha-Pinto & Maciel (1996) Rocha-Pinto H. J., Maciel W. J., 1996, MNRAS, 279, 447
* Schmidt (1963) Schmidt M., 1963, ApJ, 137, 758
* Smee et al. (2013) Smee S. A., et al., 2013, AJ, 146, 32
* Talbot & Arnett (1971) Talbot R. J., Arnett W. D., 1971, ApJ, 170, 409
* Taylor (2005) Taylor M. B., 2005, Astronomical Data Analysis Software and Systems XIV - ASP Conference Series, 347, 29
* Tinsley (1974) Tinsley B. M., 1974, ApJ, 192, 629
* Tinsley (1980) Tinsley B. M., 1980, Fund. Cosmic Phys., 5, 287
* Vazdekis et al. (2010) Vazdekis A., Sánchez-Blázquez P., Falcón-Barroso J., Cenarro A. J., Beasley M. A., Cardiel N., Gorgas J., Peletier R. F., 2010, MNRAS, 404, 1639
* Vazdekis et al. (2016) Vazdekis A., Koleva M., Ricciardelli E., Röck B., Falcón-Barroso J., 2016, MNRAS, 463, 3409
* Virtanen et al. (2020) Virtanen P., et al., 2020, Nature Methods, 17, 261
* Wake et al. (2017) Wake D. A., et al., 2017, AJ, 154, 86
* Westfall et al. (2019) Westfall K. B., et al., 2019, AJ, 158, 231
* Willett et al. (2013) Willett K. W., et al., 2013, MNRAS, 435, 2835
* Woolf & Wallerstein (2020) Woolf V. M., Wallerstein G., 2020, MNRAS, 494, 2718
* Woolf & West (2012) Woolf V. M., West A. A., 2012, MNRAS, 422, 1489
* Yan et al. (2016a) Yan R., et al., 2016a, AJ, 151, 8
* Yan et al. (2016b) Yan R., et al., 2016b, AJ, 152, 197
* van den Bergh (1962) van den Bergh S., 1962, AJ, 67, 486
* van der Walt et al. (2011) van der Walt S., Colbert S. C., Varoquaux G., 2011, Comput. Sci. Eng., 13, 22
# On formal concepts of random formal contexts
Taro Sakurai Department of Mathematics and Informatics, Graduate School of
Science, Chiba University, 1-33, Yayoi-cho, Inage-ku, Chiba-shi, Chiba,
263-8522 Japan<EMAIL_ADDRESS>
###### Abstract.
In formal concept analysis, it is well-known that the number of formal
concepts can be exponential in the worst case. To analyze the average case, we
introduce a probabilistic model for random formal contexts and prove that the
average number of formal concepts has a superpolynomial asymptotic lower
bound.
###### Key words and phrases:
asymptotic lower bound, average case analysis, formal concept analysis, formal
concepts, random formal contexts
###### 2010 Mathematics Subject Classification:
68T30 (06B99, 05C80, 60C05)
###### Contents
1. Introduction
2. Preliminaries
3. Random contexts
4. Average number of concepts
5. Asymptotic lower bound
6. Conclusions
## 1\. Introduction
How many formal concepts does a formal context have? This is one of the
fundamental problems in the theory of _formal concept analysis_ —an
application area of lattice theory which originates from Wille [6] to support
_data analysis_ and _knowledge processing_. In the graph-theoretic language,
the problem asks the number of maximal bicliques of bipartite graphs. The
problem of determining the number of formal concepts is proved to be
#P-complete by Kuznetsov [5, Theorem 1]. Even though the counting problem is
hard in general, it is of interest to get a general idea of how large the
number is.
It is well-known that the number of formal concepts can be exponential in the
worst case, and it can be one in the best case. Such extremal formal contexts
are obtained from contranominal scales and formal contexts defined by the empty
relation. Since these examples appear to be highly atypical, it is natural to
study the number of formal concepts in the _average_ case. To this end, we
introduce random formal contexts (Definition 3.2) and present an exact formula
for the average number of formal concepts (Proposition 4.1). Lastly, we prove
that the average number of formal concepts has a _superpolynomial_ asymptotic
lower bound (Theorem 5.1), which is the main result of this article. Our
theorem and its proof help to understand why a “typical” formal context has
numerous formal concepts.
## 2\. Preliminaries
### 2.1. Formal concept analysis
We recall basic notions in formal concept analysis which can be found in the
textbook by Ganter and Wille [2, Chapter 1]. A _(formal) context_ is defined
to be a triple $K=(G,M,I)$ consisting of two sets $G$ and $M$, and a subset $I$ of
$G\times M$. An element $g$ of $G$ is called an _object_ , an element $m$ of
$M$ is called an _attribute_ , and $I$ is called the _incidence relation_ of
the context $K$. An object $g$ is said to _have_ an attribute $m$ if a pair
$(g,m)$ belongs to $I$. A context is often represented by a _cross table_
whose rows and columns are indexed by objects and attributes, and the
incidence relation is indicated by crosses as in Figure 1.
  | $m$
---|---
$g$ | $\times$

Figure 1. The cross table of a context.
Let $A$ be a set of objects and let $B$ be a set of attributes. The set of
attributes that all objects in $A$ have in common is denoted by
$\displaystyle A^{\prime}$ $\displaystyle=\bigcap_{g\in A}\\{\,m\in
M\mid(g,m)\in I\,\\}.$ Similarly, the set of objects that have all attributes
in $B$ is denoted by $\displaystyle B^{\prime}$ $\displaystyle=\bigcap_{m\in
B}\\{\,g\in G\mid(g,m)\in I\,\\}.$
A pair $(A,B)$ is defined to be a _(formal) concept_ if $A^{\prime}=B$ and
$B^{\prime}=A$; the first and second components are called the _extent_ and
_intent_ of the concept. The set of concepts of a context $K$ is denoted by
$\mathfrak{B}(K)$.
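To make the derivation operators concrete, the following Python sketch (our illustration, not part of the paper; the helper names are hypothetical) computes $A^{\prime}$ and $B^{\prime}$ and enumerates all concepts of a small context by brute force over candidate extents, using the fact that a concept is determined by its extent:

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as tuples, in order of increasing size."""
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def prime_attrs(A, M, I):
    """A': the attributes shared by every object in A (all of M when A is empty)."""
    return frozenset(m for m in M if all((g, m) in I for g in A))

def prime_objs(B, G, I):
    """B': the objects having every attribute in B (all of G when B is empty)."""
    return frozenset(g for g in G if all((g, m) in I for m in B))

def concepts(G, M, I):
    """All formal concepts (A, B) with A' = B and B' = A, found by checking
    every candidate extent A of the context (G, M, I)."""
    found = []
    for A in map(frozenset, powerset(G)):
        B = prime_attrs(A, M, I)
        if prime_objs(B, G, I) == A:
            found.append((A, B))
    return found
```

For the single-cross context of Figure 1 ($G=\\{g\\}$, $M=\\{m\\}$, $I=\\{(g,m)\\}$) this yields the single concept $(\\{g\\},\\{m\\})$.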
### 2.2. Asymptotic analysis
We recall two useful notations in asymptotic analysis: the _little-oh
notation_ and the _Vinogradov notation_. Let $(x_{n})$ and $(y_{n})$ be real
sequences. If, for every positive real number $\varepsilon$, we have
$\lvert{x_{n}}\rvert<\varepsilon\lvert{y_{n}}\rvert$ for all sufficiently large
$n$, then we write $x_{n}=o(y_{n})$. If there is some positive real number
$\gamma$ satisfying $\lvert{x_{n}}\rvert\leq\gamma\lvert{y_{n}}\rvert$ for
sufficiently large $n$, then we write $y_{n}\gg x_{n}$.
## 3\. Random contexts
In this section, we introduce a probabilistic model for random contexts.
Although we provide its measure-theoretic formalization later for
completeness, the randomness we consider is best described in the
following informal manner.
Let $n$ be a positive integer and take an $n$-set, say $U=\\{1,2,\dotsc,n\\}$.
For each element of $U$, we regard it as an _object_ with probability $p$ and
as an _attribute_ with probability $1-p$, independently. Subsequently, for
each pair $(g,m)$ of an object $g$ and an attribute $m$, we regard $g$ as
_having_ the attribute $m$ with probability $q$, independently. We add that
the probabilities $p$ and $q$ are not necessarily constants like $p=1/2$ and
may be functions of $n$ like $q=1-1/n$.
A similar probabilistic model with a fixed number of objects and attributes is
used by Kovács in [4, §2.1] to estimate the number of concepts. Those
familiar with random graph theory will instantly recognize that this is very
similar to the model for binomial random graphs [3, p. 2], which is also
known as the Erdős-Rényi model. In this article, we content ourselves with
this simplest model for random contexts. The readers may wish to skim through
the next notation and definition if they are comfortable with this informal
description of our probabilistic model.
Throughout this article, we use a convention to write random variables in
bold. For basic concepts of probability theory, we refer the readers to a work
by Bauer [1, Chapter I], for example.
###### Notation 3.1.
Let $n$ be a positive integer and let $p$ and $q$ be real numbers belonging to
the unit interval $[0,1]$. Set $U=\\{1,2,\dotsc,n\\}$. Write $\Omega$ for the
set of contexts $(G,M,I)$ with $G+M=U$ where $+$ denotes the disjoint union.
Define the probability measure $P=\kappa_{n,p,q}$ on the power set
$2^{\Omega}$ by
$P\\{(G,M,I)\\}=p^{\lvert{G}\rvert}(1-p)^{\lvert{M}\rvert}\,q^{\lvert{I}\rvert}(1-q)^{\lvert{G\times
M-I}\rvert}.$
The probability space $(\Omega,2^{\Omega},P)$ is our mathematical model for
random contexts.
###### Definition 3.2.
We call an $\Omega$-valued random variable $\bm{K}$ a _random context_ and
write $\bm{K}\sim\kappa_{n,p,q}$ if the distribution of $\bm{K}$ equals
$\kappa_{n,p,q}$.
For a real-valued function $f$ on $\Omega$ and a random context $\bm{K}$, we
write
$E(f\circ\bm{K})=\int f\,dP=\sum_{K\in\Omega}f(K)P\\{K\\}$
for the expectation.
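For intuition, sampling from $\kappa_{n,p,q}$ can be sketched in a few lines of Python (our illustration; the function name is hypothetical), following the informal two-stage description above: first split $U$ into objects and attributes, then fill the incidence relation:

```python
import random

def sample_context(n, p, q, rng=random):
    """Draw one context from kappa_{n,p,q}: each element of U = {1,...,n}
    becomes an object with probability p (otherwise an attribute), and each
    (object, attribute) pair joins the incidence relation with probability q,
    all independently."""
    G, M = set(), set()
    for u in range(1, n + 1):
        (G if rng.random() < p else M).add(u)
    I = {(g, m) for g in G for m in M if rng.random() < q}
    return G, M, I
```

By construction $G+M=U$ and $I\subseteq G\times M$, so every draw is a point of $\Omega$.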
## 4\. Average number of concepts
Based on the notion of random contexts that is introduced in the previous
section, we show an exact formula for the average number of concepts in this
section.
###### Proposition 4.1.
Let $\bm{K}$ be a random context with $\bm{K}\sim\kappa_{n,p,q}$. Then
(4.1)
$E(\lvert{\mathfrak{B}(\bm{K})}\rvert)=\sum_{(a,b,c,d)}\binom{n}{a\;b\;c\;d}\,p^{a+c}(1-p)^{b+d}\,q^{ab}(1-q^{a})^{d}(1-q^{b})^{c}$
where the sum is taken over all tuples $(a,b,c,d)$ of non-negative integers with $a+b+c+d=n$.
###### Proof.
Set $\bm{K}=(\bm{G},\bm{M},\bm{I})$. Let $A$ and $B$ be subsets of $U$. We
write $\bm{1}_{\\{(A,B)\in\mathfrak{B}(\bm{K})\\}}$ for the indicator variable
of an event that a pair $(A,B)$ is a concept of $\bm{K}$. By the linearity of
expectation and the law of total probability, we may reduce the problem as
$\displaystyle E(\lvert{\mathfrak{B}(\bm{K})}\rvert)$
$\displaystyle=\sum_{(A,B)}E(\bm{1}_{\\{(A,B)\in\mathfrak{B}(\bm{K})\\}})=\sum_{(A,B)}P\\{(A,B)\in\mathfrak{B}(\bm{K})\\}$
$\displaystyle=\sum_{(A,B,C,D)}P(\\{(A,B)\in\mathfrak{B}(\bm{K})\\}\cap\\{\bm{G}=A+C\\}\cap\\{\bm{M}=B+D\\})$
where the sums are taken over all pairs and all quadruples of subsets of $U$, respectively. Suppose that
$(A,B,C,D)$ is an ordered partition of the set $U$. From the reduction, it is
enough to show that
$P(\\{(A,B)\in\mathfrak{B}(\bm{K})\\}\cap\\{\bm{G}=A+C\\}\cap\\{\bm{M}=B+D\\})=p^{\lvert{A+C}\rvert}(1-p)^{\lvert{B+D}\rvert}\,q^{\lvert{A\times B}\rvert}(1-q^{\lvert{A}\rvert})^{\lvert{D}\rvert}(1-q^{\lvert{B}\rvert})^{\lvert{C}\rvert}.$
The cross table of a context in Figure 2 may help the readers to see why this
claim holds.
  | $B$ | $D$
---|---|---
$A$ | $\times$ | $\vdots$
$C$ | $\cdots$ | $\ast$

Figure 2. When
$\\{(A,B)\in\mathfrak{B}(\bm{K})\\}\cap\\{\bm{G}=A+C\\}\cap\\{\bm{M}=B+D\\}$
occurs.
First, every element of $A+C$ must belong to $\bm{G}$ with probability
$p^{\lvert{A+C}\rvert}$ (row header), and every element of $B+D$ must belong
to $\bm{M}$ with probability $(1-p)^{\lvert{B+D}\rvert}$ (column header).
Second, every pair of $A\times B$ must belong to $\bm{I}$ with probability
$q^{\lvert{A\times B}\rvert}$ (upper-left corner). Next, every attribute in
$D$ must not be shared by all objects in $A$ with probability
$(1-q^{\lvert{A}\rvert})^{\lvert{D}\rvert}$ (upper-right corner), and every
object in $C$ must not have all attributes in $B$ with probability
$(1-q^{\lvert{B}\rvert})^{\lvert{C}\rvert}$ (lower-left corner). Lastly, the
remaining entries (lower-right corner) do not affect the occurrence of the event.
The above argument establishes the claim and completes the proof. ∎
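As a sanity check (our own illustration, not part of the paper), the right-hand side of (4.1) can be evaluated numerically and compared with the expectation obtained by exhaustively enumerating every context on a tiny ground set, weighted by its $\kappa_{n,p,q}$-probability:

```python
from itertools import chain, combinations, product
from math import comb

def n_concepts(G, M, I):
    """Brute-force concept count: each concept is determined by its extent."""
    extents = chain.from_iterable(combinations(G, r) for r in range(len(G) + 1))
    count = 0
    for A in map(frozenset, extents):
        B = frozenset(m for m in M if all((g, m) in I for g in A))
        if frozenset(g for g in G if all((g, m) in I for m in B)) == A:
            count += 1
    return count

def formula(n, p, q):
    """Right-hand side of (4.1)."""
    total = 0.0
    for a in range(n + 1):
        for b in range(n - a + 1):
            for c in range(n - a - b + 1):
                d = n - a - b - c
                mult = comb(n, a) * comb(n - a, b) * comb(n - a - b, c)
                total += (mult * p**(a + c) * (1 - p)**(b + d)
                          * q**(a * b) * (1 - q**a)**d * (1 - q**b)**c)
    return total

def exact_average(n, p, q):
    """E(|B(K)|) by summing over all contexts on U = {1,...,n}; tiny n only."""
    U = list(range(1, n + 1))
    total = 0.0
    for bits in product((0, 1), repeat=n):
        G = [u for u, s in zip(U, bits) if s]
        M = [u for u, s in zip(U, bits) if not s]
        p_split = p**len(G) * (1 - p)**len(M)
        pairs = [(g, m) for g in G for m in M]
        for ibits in product((0, 1), repeat=len(pairs)):
            I = {pr for pr, s in zip(pairs, ibits) if s}
            p_inc = q**len(I) * (1 - q)**(len(pairs) - len(I))
            total += p_split * p_inc * n_concepts(G, M, I)
    return total
```

For small $n$ the two computations agree to machine precision, as Proposition 4.1 guarantees.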
## 5\. Asymptotic lower bound
In this section, we study random contexts with constant probabilities
$p=q=1/2$ in detail and prove that the average number of concepts has a
superpolynomial asymptotic lower bound. The following is the main result of
this article.
###### Theorem 5.1.
Let $(\bm{K}_{n})$ be a sequence of random contexts with
$\bm{K}_{n}\sim\kappa_{n,\frac{1}{2},\frac{1}{2}}$. Then
$E(\lvert{\mathfrak{B}(\bm{K}_{n})}\rvert)>n^{\log n}$
for sufficiently large $n$. In particular,
$E(\lvert{\mathfrak{B}(\bm{K}_{n})}\rvert)\gg n^{\log n}$.
For a real number $x$, the integer part and fractional part of $x$ are denoted
by $[{x}]$ and $\\{{x}\\}$, respectively. To obtain a lower bound for the average number of
concepts of $\bm{K}_{n}$, we _single out_ the specific term
(5.1) $\displaystyle t_{n}$
$\displaystyle=\binom{n}{a_{n}\;b_{n}\;c_{n}\;d_{n}}\,p^{a_{n}+c_{n}}(1-p)^{b_{n}+d_{n}}\,q^{a_{n}b_{n}}(1-q^{a_{n}})^{d_{n}}(1-q^{b_{n}})^{c_{n}}$
in (4.1) for constant probabilities $p=q=1/2$ where
(5.2) $\displaystyle\begin{split}a_{n}&=\bigg{[}{\frac{\log n}{\log
2}}\bigg{]},\qquad b_{n}=\bigg{[}{\frac{\log n}{\log
2}}\bigg{]}+2\bigg{\\{}{\frac{n}{2}}\bigg{\\}},\qquad\text{and}\\\
c_{n}&=d_{n}=\bigg{[}{\frac{n}{2}}\bigg{]}-\bigg{[}{\frac{\log n}{\log
2}}\bigg{]}.\end{split}$
Although this is just one term in the summation, it turns out to be large
enough for our purpose. The asymptotic behavior of $t_{n}$ is described as
follows.
###### Lemma 5.2.
With notation in (5.1),
$\log t_{n}=\frac{\log^{2}n}{\log 2}\left(1+o(1)\right).$
To prove this asymptotic equivalence, we need some lemmas.
###### Lemma 5.3.
With notation in (5.2),
$\log\binom{n}{a_{n}\;b_{n}\;c_{n}\;d_{n}}=n\log 2+2\frac{\log^{2}n}{\log
2}+o(\log^{2}n).$
###### Proof.
By the Stirling formula and the Taylor formula,
$\displaystyle\log n!$ $\displaystyle=n\log n-n+o(\log^{2}n),$
$\displaystyle\log a_{n}!$ $\displaystyle=\log\left(\frac{\log n}{\log
2}-\bigg{\\{}{\frac{\log n}{\log 2}}\bigg{\\}}\\!\right)!=o(\log^{2}n),$
$\displaystyle\log b_{n}!$ $\displaystyle=\log\left(\frac{\log n}{\log
2}-\bigg{\\{}{\frac{\log n}{\log
2}}\bigg{\\}}+2\bigg{\\{}{\frac{n}{2}}\bigg{\\}}\\!\right)!=o(\log^{2}n),\qquad\text{and}$
$\displaystyle\log c_{n}!$ $\displaystyle=\log
d_{n}!=\log\left(\frac{n}{2}-\bigg{\\{}{\frac{n}{2}}\bigg{\\}}-\frac{\log
n}{\log 2}+\bigg{\\{}{\frac{\log n}{\log 2}}\bigg{\\}}\\!\right)!$
$\displaystyle=\log\left(\frac{n}{2}-\frac{\log n}{\log 2}+o(\log n)\right)!$
$\displaystyle=\left(\frac{n}{2}-\frac{\log n}{\log 2}+o(\log
n)\right)\log\left(\frac{n}{2}-\frac{\log n}{\log 2}+o(\log n)\right)$
$\displaystyle\qquad-\left(\frac{n}{2}-\frac{\log n}{\log 2}+o(\log
n)\right)+o(\log^{2}n)$ $\displaystyle=\left(\frac{n}{2}-\frac{\log n}{\log
2}\right)\log\left(\frac{n}{2}-\frac{\log n}{\log 2}+o(\log
n)\right)-\frac{n}{2}+o(\log^{2}n)$
$\displaystyle=\left(\frac{n}{2}-\frac{\log n}{\log 2}\right)\\!\left(\log
n-\log 2-\frac{2}{n}\frac{\log n}{\log
2}+o\left(\frac{\log^{2}n}{n}\right)\\!\right)-\frac{n}{2}+o(\log^{2}n)$
$\displaystyle=\frac{1}{2}n\log n-\frac{1}{2}(1+\log 2)n-\frac{\log^{2}n}{\log
2}+o(\log^{2}n).$
Therefore
$\displaystyle\log\binom{n}{a_{n}\;b_{n}\;c_{n}\;d_{n}}$
$\displaystyle\qquad\quad=n\log n-n-2\left(\frac{1}{2}n\log
n-\frac{1}{2}(1+\log 2)n-\frac{\log^{2}n}{\log 2}\right)+o(\log^{2}n)$
$\displaystyle\qquad\quad=n\log 2+2\frac{\log^{2}n}{\log 2}+o(\log^{2}n).\qed$
###### Lemma 5.4.
With notation in (5.2),
$\big{\lvert}{\log(1-2^{-a_{n}})^{d_{n}}(1-2^{-b_{n}})^{c_{n}}}\big{\rvert}<2.$
###### Proof.
We may assume that $n>2$. Note that $c_{n}=d_{n}\leq n/2$ and
$\displaystyle 1-2^{-b_{n}}\geq 1-2^{-a_{n}}=1-2^{-\frac{\log n}{\log
2}+\\{{\frac{\log n}{\log 2}}\\}}=1-\frac{2^{\\{{\frac{\log n}{\log
2}}\\}}}{n}>1-\frac{2}{n}.$
Hence
$\displaystyle\big{\lvert}{\log(1-2^{-a_{n}})^{d_{n}}(1-2^{-b_{n}})^{c_{n}}}\big{\rvert}$
$\displaystyle\qquad\qquad=-d_{n}\log(1-2^{-a_{n}})-c_{n}\log(1-2^{-b_{n}})<-n\log\left(1-\frac{2}{n}\right)\leq
2.\qed$
###### Proof of Lemma 5.2.
By Lemmas 5.3 and 5.4,
$\displaystyle\log t_{n}$
$\displaystyle=\log\binom{n}{a_{n}\;b_{n}\;c_{n}\;d_{n}}-\left(\frac{n}{2}-\bigg{\\{}{\frac{n}{2}}\bigg{\\}}\\!\right)\log
2-\left(\frac{n}{2}+\bigg{\\{}{\frac{n}{2}}\bigg{\\}}\\!\right)\log 2$
$\displaystyle\qquad-\left(\frac{\log n}{\log 2}-\bigg{\\{}{\frac{\log n}{\log
2}}\bigg{\\}}\\!\right)\\!\left(\frac{\log n}{\log 2}-\bigg{\\{}{\frac{\log
n}{\log 2}}\bigg{\\}}+2\bigg{\\{}{\frac{n}{2}}\bigg{\\}}\\!\right)\log 2$
$\displaystyle\qquad+\log(1-2^{-a_{n}})^{d_{n}}(1-2^{-b_{n}})^{c_{n}}$
$\displaystyle=n\log 2+2\frac{\log^{2}n}{\log 2}-\frac{n}{2}\log
2-\frac{n}{2}\log 2-\frac{\log^{2}n}{\log 2}+o(\log^{2}n)$
$\displaystyle=\frac{\log^{2}n}{\log 2}+o(\log^{2}n)=\frac{\log^{2}n}{\log
2}\left(1+o(1)\right).\qed$
###### Proof of Theorem 5.1.
By Proposition 4.1, we have $E(\lvert{\mathfrak{B}(\bm{K}_{n})}\rvert)\geq
t_{n}$. Set $\varepsilon=1-\log 2=0.306\dotsm\,$. It follows from Lemma 5.2
that
$\log E(\lvert{\mathfrak{B}(\bm{K}_{n})}\rvert)\geq\log
t_{n}>\frac{\log^{2}n}{\log 2}(1-\varepsilon)=\log^{2}n$
for sufficiently large $n$, which proves the theorem. ∎
$n$ | $10^{1}$ | $10^{2}$ | $10^{3}$ | $10^{4}$ | $10^{5}$ | $10^{6}$ | $10^{7}$ | $10^{8}$ | $10^{9}$ | $10^{10}$
---|---|---|---|---|---|---|---|---|---|---
$\delta_{n}$ | $1.467$ | $0.860$ | $0.646$ | $0.566$ | $0.477$ | $0.416$ | $0.386$ | $0.347$ | $0.316$ | $0.299$
Table 1. How large $n$ should be for the theorem?
In the end, we make a short comment on how large $n$ should be for the
theorem. Table 1 shows the rounded values of
$\delta_{n}=\bigg{\lvert}{\frac{\log t_{n}}{\log^{2}n/\log 2}-1}\bigg{\rvert}$
for $n=10^{1},\dotsc,10^{10}$. The proof indicates that $n>10^{10}$ would be
sufficient for the theorem.
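The values of $\delta_{n}$ in Table 1 can be reproduced directly from (5.1) and (5.2); a Python sketch (ours, not part of the paper) is:

```python
from math import comb, log

def delta(n, p=0.5, q=0.5):
    """delta_n = |log t_n / (log^2 n / log 2) - 1| for the term t_n of (5.1),
    with the indices a_n, b_n, c_n, d_n of (5.2)."""
    a = int(log(n) / log(2))   # [log n / log 2]
    b = a + n % 2              # [log n / log 2] + 2{n/2}
    c = d = n // 2 - a         # [n/2] - [log n / log 2]
    assert a + b + c + d == n
    mult = comb(n, a) * comb(n - a, b) * comb(n - a - b, c)
    # log t_n, computed in logarithms to avoid overflow for large n
    log_t = (log(mult) + (a + c) * log(p) + (b + d) * log(1 - p)
             + a * b * log(q) + d * log(1 - q**a) + c * log(1 - q**b))
    return abs(log_t / (log(n)**2 / log(2)) - 1)
```

For instance, `delta(10)` evaluates to about $1.467$ and `delta(100)` to about $0.860$, matching the first two entries of Table 1.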
## 6\. Conclusions
In this article, we addressed the problem of how large the average number of
concepts is. To this end, we introduced the distribution $\kappa_{n,p,q}$ for
random contexts and presented an exact formula for the average number
$E(\lvert{\mathfrak{B}(\bm{K})}\rvert)$ of concepts of a random context
$\bm{K}\sim\kappa_{n,p,q}$. To establish a superpolynomial asymptotic lower
bound, random contexts with constant probabilities $p=q=1/2$ were studied in
detail. For a sequence of random contexts $(\bm{K}_{n})$ with
$\bm{K}_{n}\sim\kappa_{n,\frac{1}{2},\frac{1}{2}}$, we proved that
$E(\lvert{\mathfrak{B}(\bm{K}_{n})}\rvert)\gg n^{\log n}$.
## Acknowledgments
The author would like to thank Ken’ichi Kuga for his understanding of the
preparation of this article. The author would also like to thank Manabu
Hagiwara for conducting several seminars on FCA and thank the participants:
Yuki Kondo, Hokuto Takahashi, and Hayato Yamamura.
## References
* [1] H. Bauer, Probability Theory (de Gruyter, 1996, Berlin) MR 1385460, Zbl 0868.60001.
* [2] B. Ganter and R. Wille, Formal Concept Analysis (Springer, Berlin, 1999) MR 1707295, Zbl 0909.06001.
* [3] S. Janson, T. Łuczak, and A. Ruciński, Random Graphs (Wiley, New York, 2000) MR 1782847, Zbl 0968.05003.
* [4] L. Kovács, ‘Efficient approximation for counting of formal concepts generated from formal context’, Miskolc Math. Notes 19 (2018) 983–996, doi:10.18514/MMN.2018.2529, MR 3915517, Zbl 1425.68400.
* [5] S. O. Kuznetsov, ‘On computing the size of a lattice and related decision problems’, Order 18 (2001) 313–321, doi:10.1023/A:1013970520933, MR 1884424, Zbl 0991.06006.
* [6] R. Wille, ‘Restructuring lattice theory: An approach based on hierarchies of concepts’, Ordered sets, Proceedings of the NATO Advanced Study Institute held at Banff, Canada, August 28 to September 12, 1981 (ed. I. Rival; Reidel, Dordrecht, 1982) 445–470, doi:10.1007/978-94-009-7798-3_15, MR 661303, Zbl 0491.06008.
IRAP, Université de Toulouse, CNRS, CNES, UPS, Toulouse, France
# Probing core overshooting using subgiant asteroseismology: The case of
KIC10273246
A. Noll, S. Deheuvels, J. Ballot<EMAIL_ADDRESS>
###### Abstract
Context. The size of convective cores remains uncertain, despite their
substantial influence on stellar evolution, and thus on stellar ages. The
seismic modeling of young subgiants can be used to obtain indirect constraints
on the core structure during main sequence, thanks to the high probing
potential of mixed modes.
Aims. We selected the young subgiant KIC10273246, observed by Kepler, based on
its mixed-mode properties. We thoroughly modeled this star, with the aim of
placing constraints on the size of its main-sequence convective core. A
corollary goal of this study is to elaborate a modeling technique that is
suitable for subgiants and can later be applied to a larger number of targets.
Methods. We first extracted the parameters of the oscillation modes of the
star using the full Kepler data set. To overcome the challenges posed by the
seismic modeling of subgiants, we propose a method that is specifically
tailored to subgiants with mixed modes and uses nested optimization. We then
applied this method to perform a detailed seismic modeling of KIC10273246.
Results. We obtain models that show good statistical agreements with the
observations, both seismic and non-seismic. We show that including core
overshooting in the models significantly improves the quality of the seismic
fit, optimal models being found for $\alpha_{\mathrm{ov}}=0.15$. Higher
amounts of core overshooting strongly worsen the agreement with the
observations and are thus firmly ruled out. We also find that having access to
two g-dominated mixed modes in young subgiants allows us to place stronger
constraints on the gradient of molecular weight in the core and on the central
density.
Conclusions. This study confirms the high potential of young subgiants with
mixed modes to investigate the size of main-sequence convective cores. It
paves the way for a more general study including the subgiants observed with
Kepler, TESS, and eventually PLATO.
###### Key Words.:
Asteroseismology - Convection - Stars: evolution - Stars: interiors - Stars:
individual: KIC10273246
Offprint requests: A. Noll
## 1 Introduction
One of the most important current open questions in stellar physics is the
extent of convective cores. Several physical processes are known to extend the
convective core boundaries beyond the standard Schwarzschild limit. The most
frequently quoted are overshooting of ascending blobs of fluids due to their
inertia, rotational mixing or semi-convection. All these processes remain
poorly described by theory, and the way they interact is understood even less.
They are therefore generally modeled, in stellar evolution codes, as an
extension of the mixed core over a distance $d_{\mathrm{ov}}$, which is often
referred to as the distance of overshoot, even though other processes can
contribute as well. In practice, this can either be achieved by simply
extending the fully mixed central region, or by treating overshooting as a
diffusive process, following a behavior found in numerical simulations
(Freytag et al., 1996). In both cases, a free parameter controlling the extent
of the additional mixing is required. Observations are therefore necessary to
better constrain those processes.
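To make the two parametrizations above concrete, here is a minimal sketch (ours, not from the paper): a simple step extension of the fully mixed core by $d_{\mathrm{ov}}=\alpha_{\mathrm{ov}}H_{p}$, and the Freytag-style exponentially decaying mixing coefficient used in diffusive overshooting, where the decay parameter (here `f_ov`) plays a role analogous to $\alpha_{\mathrm{ov}}$:

```python
import math

def step_overshoot_extent(alpha_ov, h_p):
    """Step overshooting: the fully mixed region is extended beyond the
    Schwarzschild boundary by d_ov = alpha_ov * H_p."""
    return alpha_ov * h_p

def diffusive_overshoot(r, r_cc, d0, f_ov, h_p):
    """Diffusive overshooting: mixing coefficient decaying exponentially
    above the convective boundary r_cc,
    D(r) = D0 * exp(-2 (r - r_cc) / (f_ov * H_p))
    (following the behavior found by Freytag et al. 1996)."""
    z = max(r - r_cc, 0.0)
    return d0 * math.exp(-2.0 * z / (f_ov * h_p))
```

In both cases a single free parameter (`alpha_ov` or `f_ov`) controls the extent of the extra mixing, which is what observations must constrain.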
Initial constraints have been obtained thanks to the study of the HR diagram
of clusters (see e.g., Maeder & Mermilliod 1981, VandenBerg et al. 2006), and
the modeling of eclipsing binaries (Claret & Torres, 2016). Most of those
studies favor adding overshooting, to various extents. Typically,
$d_{\mathrm{ov}}$ is around $0.2\,H_{p}$, where $H_{p}$ is the pressure scale
height. Claret & Torres (2016) found that $\alpha_{\mathrm{ov}}$, the ratio
between $d_{\mathrm{ov}}$ and $H_{p}$, increases with mass for stars under 2
$M_{\odot}$ before reaching a plateau. However, this result is still debated
(Constantino & Baraffe 2018, Claret & Torres 2019).
Over the last decade, asteroseismology allowed us to probe the structure of
stellar cores. Thanks to the data of CoRoT (Baglin et al., 2006), Kepler
(Borucki et al., 2010) and now TESS (Ricker et al., 2014) missions, we have
been able to precisely measure the oscillation frequencies of numerous
pulsators. The study of pressure (p) modes in low-mass main sequence (MS)
stars, showed the need for core overshooting to correctly reproduce the
observed frequencies (Goupil et al. 2011, Deheuvels et al. 2010, Silva Aguirre
et al. 2013). Deheuvels et al. (2016), modeling several MS stars, found that
$\alpha_{\mathrm{ov}}$ increases with the mass. Moreover, gravity (g) mode
pulsators, like slowly-pulsating B (SPB) stars, are interesting targets to
constrain the additional mixing around convective cores. Indeed, gravity modes
probe the inner chemical structure of the star and allow detailed
investigation of the convective core extensions. Moravveji et al. (2015,
2016), when modeling SPB stars, found that overshoot was necessary, and they
favored diffusive overshooting over a simple extension of the central mixed
region.
Post-main-sequence stars are another way to put constraints on the amount of
overshooting. Once the central hydrogen is exhausted, nuclear energy
production stops, leaving an inert radiative helium core. This core then
contracts, heating the surrounding hydrogen layers of the star until shell
burning starts. At that moment, the star begins its subgiant phase, and
evolves on a nuclear timescale for masses below about $1.5$ solar masses
($M_{\odot}$). For stars that are close to the terminal-age main sequence
(TAMS), the structure and the evolution remain highly influenced by the
properties of the MS convective core. Interestingly, the star begins to
exhibit mixed modes at that moment. These modes behave like gravity modes in
the internal propagation cavity and pressure modes in the outer one. Thus,
they allow us to finely probe the deepest layers of the star, all the while
being observable. This and the proximity of the subgiant to the TAMS make the
mixed modes of young subgiants valuable data in studying the extension of
convective cores.
Another particularity of mixed modes is their very fast evolution, compared to
the nuclear evolution timescale of the subgiant phase. Indeed, mixed mode
frequencies change dramatically over the course of a few million years. This
makes their seismic modeling challenging. Recently, increasing efforts have
been made to model subgiants (Huber et al. 2019; Stokholm et al. 2019;
Metcalfe et al. 2020; Deheuvels et al. 2020; Li et al. 2019, 2020), driven by
both their great physical interest and the sudden increase of seismic data for
these stars. Most of those works focused on finding the optimal stellar
parameters for one or several subgiants. So far, few studies have used
subgiants as tools to test stellar physics, mainly due to the challenges of
their modeling, as mentioned above.
Deheuvels & Michel (2011) successfully constrained $\alpha_{\mathrm{ov}}$ from
a subgiant observed by CoRoT, HD 49385, which exhibits only one g-dominated
mode and is therefore very close to the TAMS. They found that either no
overshooting, or a model with $\alpha_{\mathrm{ov}}=0.19$ were giving equally
good results. In this work, we modeled a young subgiant, KIC10273246, which
was observed by Kepler over almost 1000 days. That star exhibits two
g-dominated modes, which allows us to better constrain its inner structure. We
performed a thorough seismic modeling of the star, in order to precisely
estimate its stellar parameters and to place constraints on the extension of
its MS convective core.
In Sect. 2, we show the utility of having access to two g-dominated mixed
modes in young subgiants. In Sect. 3, we present the surface observables of
KIC10273246 and perform a fresh analysis of its oscillation spectrum using the
full Kepler data set. We then describe, in Sect. 4, the modeling technique
that we adopted, which is an improved version of the method developed by
Deheuvels & Michel (2011). Sect. 5 presents our optimal stellar models and the
constraints that were obtained from the extent of the MS convective core for
KIC10273246. We discuss these results in Sect. 6, and Sect. 7 is dedicated to
our conclusions.
## 2 Probing potential of mixed modes
Just after the main sequence, the oscillation spectra of solar-like pulsators
show the presence of mixed modes, which are due to the coupling between the
observed p-modes and low radial-order g-modes ($n_{g}=1,2,3$, $n_{g}$ being
the number of nodes in the g-mode cavity). The frequency spacing between low-order g-modes is large (several times the large separation of p modes), so
that only a few are in the observable frequency window during the subgiant
phase. Moreover, with $n_{g}$ being low, the pure g modes that couple to p
modes do not follow an asymptotic behavior (as described in Shibahashi 1979,
Tassoul 1980). The oscillation spectra of subgiants therefore contrast with
those of more evolved stars, which typically have more g-dominated modes than
p-dominated modes, and for which $n_{g}$ is of the order of several tens (e.g.
Mosser et al. 2012).
Figure 1: Typical propagation diagram of a low-mass subgiant star. The Brunt-Väisälä frequency is represented in blue, delimiting the g-mode cavity (light blue). The Lamb frequency, in orange, delimits the p-mode cavity (light orange). Two g-dominated mixed mode angular frequencies, with $n_{g}=1,2$, are represented (solid lines in propagation zones, dotted lines in evanescent zones). The G cavity turning points are denoted as $r_{i1}$, $r_{i2}$ and $r_{o1}$, $r_{o2}$. Finally, the thermal and chemical contributions to the Brunt-Väisälä frequency are represented in green (dashed) and red (dot-dashed), respectively.
The frequencies of mixed modes are mostly determined by two structural
features of the star. The first is the g-mode (G) cavity, which is delimited
by the Brunt-Väisälä frequency $N$. The second is the evanescent zone between
the g-mode and p-mode (P) cavities, the latter being delimited by the Lamb
frequency $S_{l}$.
The G cavity affects the frequency of the g-mode that is involved in the mixed
mode frequency. G-mode frequencies, in the asymptotic theory, can be
approximated by
$\nu_{n,l}\approx\frac{\sqrt{l(l+1)}}{2\pi^{2}(n-1/2)}\int_{r_{1}}^{r_{2}}\frac{N}{r}\mathrm{d}r,$
(1)
$l$ being the degree of the mode, $r_{1}$ and $r_{2}$ the turning points in
the G cavity, and $r$ the local radius of the star. In our case, $n_{g}$ is
low for the observed modes, so the asymptotic expression given in Eq. 1 should
not apply. However, it has been shown that it can provide qualitative
information about the behavior of the mixed mode frequencies (Deheuvels et
al., 2010). It tells us that g-dominated modes should give strong constraints
on the Brunt-Väisälä frequency in the G cavity. One can write it in the
following form (e.g. Kippenhahn et al. 2012):
$N^{2}=\frac{g\delta}{H_{p}}\left(\nabla_{\mathrm{ad}}-\nabla+\frac{\phi}{\delta}\nabla_{\mu}\right),$
(2)
where $g$ is the local gravity, $\delta=-(\partial\ln\rho/\partial\ln
T)_{P,\mu}$, $\phi=(\partial\ln\rho/\partial\ln\mu)_{P,T}$,
$\nabla_{\mathrm{ad}}=(\partial\ln T/\partial\ln P)_{\rm ad}$,
$\nabla=\partial\ln T/\partial\ln P,$ and
$\nabla_{\mu}=\partial\ln\mu/\partial\ln P$. The Brunt-Väisälä frequency
consequently carries information about both the thermal structure (first two terms in parentheses) and the compositional structure (last term) of the star.
The evanescent zone affects the coupling between the two cavities, whose
strength is linked to the size of this region and the value of $N$ inside it
(e.g., Unno et al. 1989). Using a toy model, Deheuvels & Michel (2011) showed
that a strong coupling induces a shift of the $l\geq 1$ p-dominated
frequencies that are close to a g-mode. The p-dominated frequencies therefore
provide complementary information about the internal structure of the
subgiant.
In this work, we investigated whether having several g-dominated modes in the
observed oscillation spectrum could offer more information regarding the
extension of the MS core. From above, we know that the frequencies of the
g-dominated mixed modes are related to the $N/r$ integral between the turning
points of the G cavity. Fig. 1 shows the propagation diagram of a subgiant
close to the TAMS, highlighting the frequencies of the first two g-dominated
$l=1$ mixed modes, that is, those that arise due to the coupling of p modes
with g modes of radial orders $n_{g}=1$ and 2. We denote their turning points
in the G cavity as $r_{i1}$, $r_{o1}$ for the mode with $n_{g}=1$, and
$r_{i2}$, $r_{o2}$ for the mode with $n_{g}=2$. The difference between the two
frequencies is thus mainly related to the Brunt-Väisälä frequency value
between $r_{o1}$ and $r_{o2}$ (the contribution from the $[r_{i1},r_{i2}]$ region being negligible). This region, as can be seen in Fig. 1, is dominated by the
$\mu$-gradient contribution. This gradient is related to the characteristics
of the hydrogen-burning shell, especially the nuclear energy generation, and
thus its temperature and composition. It has been shown that a H-burning shell
structure depends on the MS core features, and especially on the amount of
core overshooting. One can see this in Fig. 5 of Deheuvels & Michel (2010),
which exhibits two Brunt-Väisälä profiles of stars having the same
evolutionary stage and position in the HR diagram, but computed with different
$\alpha_{\mathrm{ov}}$. The two profiles differ mainly in the peak caused by
the $\mu$-gradient, and these structural differences are large enough to have
a significant impact on the eigenfrequencies of the star.
For all these reasons, stars with two visible g-dominated modes are expected to be valuable targets for placing constraints on the efficiency of core overshooting. That criterion led us to the choice of
KIC10273246, a subgiant with two g-dominated modes and a mass of $1.26\pm
0.10\,M_{\odot}$ (Creevey et al., 2012), which places it safely in the mass
range where stars are expected to have a convective core during the MS.
## 3 Observational properties of KIC 10273246
### 3.1 Surface constraints
#### 3.1.1 Constraints from spectroscopy
Surface constraints were obtained for KIC10273246 by Creevey et al. (2012).
The authors used different algorithms on the same spectra obtained with the
FIES spectrograph. For our target, they found effective temperatures
($T_{\mathrm{eff}}$) ranging from 5933 to 6380 K. A weighted mean gives us a
value of $6150\pm 100$ K, which we have used to constrain our seismic
modeling. The star was also found to have a sub-solar metallicity,
with $\mathrm{[Fe/H]}=-0.13\pm 0.1\,\mathrm{dex}$.
#### 3.1.2 Constraints from broadband photometry
To obtain a reliable value of the luminosity of the star, we performed a
spectral energy distribution (SED) fit, following the procedure of Stassun &
Torres (2016). We extracted photometric values using the VizieR photometry
tool. Those data come from the NUV filter from _GALEX_ (Martin et al., 2005),
the $B_{T}$ and $V_{T}$ filters from _Tycho-2_ (Høg et al., 2000), the $J$,$H$
and $K_{s}$ filters from _2MASS_ (Skrutskie et al., 2006), the _gri_ filters
from _SDSS_ (Skrutskie et al., 2006), the W1-W4 filters from _WISE_ (Wright et
al., 2010), and the G magnitude from _Gaia_ (Evans et al., 2018). The
atmosphere model comes from the Kurucz atmosphere grid (Kurucz, 2005), with
the surface gravity ($\log g$) derived from the seismic relations (see Sect.
3.2.4), and the metallicity coming from spectroscopic measurements. We then fit the model spectrum to the photometric points, with the $T_{\mathrm{eff}}$ and extinction $A_{v}$ as free parameters. We also used the spectroscopic data
from Creevey et al. (2012) and the extinction from Green et al. (2019) as
priors. With a reduced $\chi^{2}$ of 0.7, we found $T_{\mathrm{eff}}=6000\pm
33\,\mathrm{K}$, and $A_{v}=0.00^{+0.037}_{-0.000}\,\mathrm{mag}$. The fit
spectrum and the photometric data are represented in Fig. 2. Finally, we
integrated the flux over all the wavelengths and used the distance from _Gaia_
to obtain the luminosity of the star. According to Zinn et al. (2019), a
parallax bias exists in the Kepler field, which depends on the G-band
magnitude and the pseudo-color $\nu_{\mathrm{eff}}$ (effective wavenumber of
the photon flux distribution in the Gaia band) of the star. We found
$\varpi-\varpi_{\mathrm{Gaia}}=39.15\pm 9.46\,\mu\mathrm{as}$, which gives
$L=5.74\pm 0.17\,L_{\odot}$. This result is, as expected, lower than the
_Gaia_ archive value ($5.92\pm 0.13\,L_{\odot}$) due to the parallax offset.
Figure 2: Best fit of the SED using Kurucz atmosphere models. The orange
points represent the observations with the corresponding error bars, and the
blue curve represents the best fit SED model.
### 3.2 Seismic constraints
#### 3.2.1 Preparation of _Kepler_ light curve
The subgiant KIC10273246 was observed with _Kepler_ between quarters Q0 and
Q11 (total duration of 978 days) in short cadence (58.85 s). An early seismic
analysis of the target was performed by Campante et al. (2011) using the first
four quarters of _Kepler_ observations (325 days of data). They estimated the
frequencies of oscillation modes of degrees $l=0,1,$ and 2 over eight
overtones. We revisited this analysis using the complete _Kepler_ data set.
The light curve of the star was processed using the _Kepler_ pipeline
developed by Jenkins et al. (2010). Corrections from outliers, occasional
drifts and jumps were performed following the method of García et al. (2011).
The power density spectrum (PSD) was then obtained by applying the Lomb-Scargle periodogram (Lomb 1976; Scargle 1982).
The PSD is shown in the form of an échelle diagram in Fig. 3. We recall that the échelle diagram is built by dividing the PSD into consecutive chunks whose length corresponds to the large separation of acoustic modes $\Delta\nu$, and piling them up. Here, we used the estimate of $\Delta\nu=48.2\,\mu$Hz
obtained by Campante et al. (2011). The main interest of échelle diagrams is
that acoustic modes of the same degree align in nearly straight ridges, which
eases mode identification. The neighboring $l=0$ and $l=2$ ridges are readily
identified on the left part of the échelle diagram (modes indicated by crosses
and triangles, respectively, in Fig. 3).
The ridge of $l=1$ modes (indicated by diamonds in Fig. 3) deviates from the
nearly-vertical line that is expected for purely acoustic modes. This behavior
is known to arise for dipolar modes in the presence of avoided crossings
between low-order g modes and p modes in subgiants. Each avoided crossing is
characterized by the presence of an additional mode, which lies away from the
ridge of theoretical purely acoustic $l=1$ modes (which would be a nearly
vertical line at an abscissa of about 35 $\mu$Hz in Fig. 3). This mode is most
strongly trapped in the core and is thus g-dominated. The neighboring $l=1$
modes are p-dominated, but their frequencies are nevertheless affected by the
presence of the g-dominated mode. The modes with frequencies larger than the
g-dominated mode are shifted to higher frequencies (to the right in the
échelle diagram) and those with frequencies below the g-dominated mode are
shifted to lower frequencies (to the left in the échelle diagram). These
features are clearly visible in Fig. 3, corresponding to two $l=1$ avoided
crossings. The $l=1$ g-dominated modes associated to these avoided crossings
are circled in red in Fig. 3.
Figure 3: Échelle diagram of KIC10273246, folded with $\Delta\nu=48.2\,\mu$Hz.
For clarity, the spectrum was smoothed over a 0.2-$\mu$Hz boxcar. The white
symbols indicate the frequencies that have been extracted for modes of degree
$l=0$ (crosses), $l=1$ (diamonds), and $l=2$ (triangles) in Sect. 3.2.2. The
two $l=1$ g-dominated mixed modes are circled in red.
#### 3.2.2 Extraction of oscillation mode parameters
To extract the parameters of the oscillation modes, we followed the method of
Appourchaux et al. (2008). Here, we briefly recall the main steps of the
procedure and refer the reader to that paper for more details.
Prior to fitting the individual oscillation modes, we modeled the background
of the PSD. The contribution from granulation was modeled by two Harvey-like
profiles, following the prescription of Karoff et al. (2013), and we added a
white noise component to account for photon noise. The overall contribution
from the oscillations was modeled as a Gaussian function. We fit this model to
the PSD using maximum-likelihood estimation (MLE). The central frequency of
the Gaussian function gives an estimate of the frequency of maximum power of
the oscillations $\nu_{\max}$. To determine the error on this quantity, we
subdivided the _Kepler_ light curve in ten chunks of equal duration, and fit
the background model on the PSD calculated with these time series. The error
on $\nu_{\rm max}$ was taken as the standard deviation of the measurements of
this quantity for each chunk. We thus obtained $\nu_{\rm max}=843\pm
20\,\mu$Hz. The PSD was then divided by the optimal background model.
We then performed a fit of the oscillation modes, which were modeled as
Lorentzian functions to account for their finite lifetimes. Each mode profile
of degree $l$, radial order $n$, and azimuthal order $m$ was characterized by
its central frequency $\nu_{n,l,m}$, its height $H_{n,l,m}$ and its line width
$\Gamma_{n,l,m}$. Since dipolar modes have a mixed character, it cannot be
assumed that they share similar heights and line widths with the neighboring
radial modes, as is often done for main sequence solar-like pulsators. Most
quadrupolar modes are expected to be p-dominated, owing to the weak coupling
between the P and G cavities for these modes. We therefore assumed that the
$l=2$ modes have the same heights and widths as their closest $l=0$ modes,
with the exception of one g-dominated $l=2$ mode, which is discussed below and
in Sect. 3.2.3. Non-radial modes are split into multiplets by rotation. Owing
to the slow rotation of subgiants, the effects of rotation on the mode
frequencies can be found by applying a first-order perturbation analysis. The
components of a rotational multiplet are thus expected to be equally spaced by
the rotational splittings $\delta\nu_{n,l}$. We also assumed that they share
similar line widths, and that their height ratios depend only on the
inclination angle $i$ of the star following the expressions given by Gizon &
Solanki (2003). In principle, mixed modes can have different rotational
splittings, because they probe the rotation at different depths in the star.
This has been used to probe the internal rotation of subgiants (e.g.,
Deheuvels et al. 2014).
To test whether individual rotational splittings can be measured in
KIC10273246, we first performed local fits of the non-radial modes. Following
the method described by Deheuvels et al. (2015), we fit each mode using two
different models: one under the $H_{0}$ hypothesis (no rotation, so that each
mode is modeled as a single Lorentzian), and one under the $H_{1}$ hypothesis
(rotation is considered and each mode is modeled as a set of $2l+1$
Lorentzians separated by the rotational splitting). It is clear that
hypothesis $H_{1}$ necessarily provides better fits to the data than
hypothesis $H_{0}$ since it involves two additional free parameters
(inclination angle and rotational splitting). The significance of hypothesis
$H_{1}$ can be tested using the likelihoods $\ell_{0}$ and $\ell_{1}$ of the
best fits obtained under the $H_{0}$ and $H_{1}$ hypotheses, respectively. As
shown by Wilks (1938), the quantity $\Delta\Lambda\equiv
2(\ln\ell_{1}-\ln\ell_{0})$ follows the distribution of a $\chi^{2}$ with
$\Delta n$ degrees of freedom, where $\Delta n$ is the difference between the
number of free parameters involved in hypotheses $H_{1}$ and $H_{0}$ (here,
$\Delta n=2$). (We note that the definition of $\Delta\Lambda$ in Sect. 3.1 of Deheuvels et al. (2015) contains an erroneous minus sign; this is just a typo, and the results presented in that paper use the correct expression for $\Delta\Lambda$.) For each multiplet, we thus obtained a value of
$\Delta\Lambda$. The false-alarm probability was then given by the $p$-value
$p=P(\chi^{2}(2\hbox{ dof})\geqslant\Delta\Lambda)$, which corresponds to the
probability that a mode under the null hypothesis can produce such a high
value of $\Delta\Lambda$.
For dipolar modes, the lowest $p$-value that we found is 0.08, which is too
high to consider the measurement as significant. This means that we cannot
reliably extract individual rotational splittings for dipolar modes in this
star. The most likely explanation is that the modes have large line widths
compared to the rotational splitting. For quadrupolar modes, only one mode
(the one with a frequency around 779.4 $\mu$Hz) was found to have a low
$p$-value, of about $4\times 10^{-5}$, which shows a very high significance
level. A rotational splitting of $0.53\pm 0.03\,\mu$Hz was obtained for this
mode (see Fig. 4). This mode is in fact a g-dominated mixed mode, as we show
in Sect. 3.2.3.
Figure 4: Oscillation spectrum of KIC10273246 in the vicinity of a
quadrupolar mode that was found to be significantly split by rotation (see
Sect. 3.2.2). The thick red curve corresponds to our best-fit model of the
spectrum. Two quadrupolar mixed modes are visible (around 779.4 $\mu$Hz and
783.9 $\mu$Hz) and one radial mode (around 785.6 $\mu$Hz).
We then performed a global fit of the modes (all the modes are fit
simultaneously). Since individual splittings cannot be measured, we assumed a
common rotational splitting for all $l=1$ and $l=2$ modes (except for the
aforementioned $l=2$ mode around 779.4 $\mu$Hz). Since most non-radial modes
are p-dominated, we expect the common rotational splitting to essentially
measure the rotation in the envelope. The best fit corresponds to a rotational
splitting of $\delta\nu=0.45\pm 0.02\,\mu$Hz for non-radial modes and an
inclination angle of $i=55\pm 6^{\circ}$. As was done for local fits, we also
performed an additional fit of the modes without including the effects of
rotation (null hypothesis). We could therefore estimate the $p$-value
corresponding to the measurement of a mean rotational splitting. We found
$p\sim 10^{-4}$, which indicates a high level of significance. Our results are
compatible with the estimates of Campante et al. (2011), who had found
$i\gtrsim 20^{\circ}$ for this star, and optimal values of the rotational
splitting slightly below 0.5 $\mu$Hz.
The best-fit parameters for the oscillation modes (frequencies, heights, and
line widths) are given in Table 3. The uncertainties of the fitted dipolar mode
frequencies range from 0.08 to 0.50 $\mu$Hz. The measured mode frequencies are
in quite good agreement with the ones found by Campante et al. (2011).
Discrepancies at the level of 3 $\sigma$ were found for only two modes (the
dipole mode around 1055 $\mu$Hz and the quadrupole mode around 880 $\mu$Hz).
Using the complete _Kepler_ data set enabled us to detect $l=0$ and $l=2$
modes over three additional radial overtones compared to Campante et al.
(2011). Our results are also in very good agreement with the recent
measurements of mode frequencies for KIC10273246 by Li et al. (2020) using the
complete _Kepler_ data set (agreement at the level of 2 $\sigma$ or better for
all oscillation modes).
#### 3.2.3 Detection of an $l=2$ mixed mode
We mentioned above that the $l=2$ mode with a frequency of about 779.4 $\mu$Hz
is the only mode for which an individual rotational splitting could be
measured. This mode also has other distinctive features. It is separated from
the closest radial mode by $6.1\pm 0.2\,\mu$Hz. By comparison, for the other
radial orders, the average separation between the $l=2$ mode and the
neighboring $l=0$ mode is 4.4 $\mu$Hz, with a standard deviation of 0.4
$\mu$Hz. This suggests that this mode might be an $l=2$ mixed mode, the
frequency of which is modified by the coupling between the p- and g-mode
cavities. This hypothesis is strengthened by the fact that it has a short line
width ($0.26\pm 0.08\,\mu$Hz) compared to the width of the neighboring $l=2$
modes (between 1.7 and 2.4 $\mu$Hz). Indeed, if the mode under study is a
g-dominated mixed mode, it should have a higher inertia than p-dominated $l=2$
modes, and therefore a shorter line width. Figure 4 shows the profile of the
radial mode that is closest to the $l=2$ mode under study. There appears to be
an additional mode in the left wing of the radial mode, at a frequency of
about 783.9 $\mu$Hz. To determine the significance of this mode, we performed
local fits assuming either its presence ($H_{1}$ hypothesis) or absence
($H_{0}$ hypothesis). We found a $p$-value of 0.01, indicating a reasonable
significance level. This also supports the identification of the $l=2$ mode at
779.4 $\mu$Hz as a mixed mode. In this case, the additional mode at 783.9
$\mu$Hz would also be an $l=2$ mixed mode undergoing an avoided crossing with
its close neighbor. As is shown in Sect. 5, the best-fit models for
KIC10273246 do show a pair of mixed modes in the vicinity of these two modes,
which confirms our identification.
#### 3.2.4 First estimates of stellar parameters using seismic scaling
relations
To obtain first estimates of the global stellar parameters of the star, we
used seismic scaling relations, which relate the global seismic parameters
$\Delta\nu$ and $\nu_{\max}$ to stellar properties such as the mass, radius
and surface gravity (Brown et al., 1991). These relations can be derived because, to a good approximation, $\nu_{\max}$ scales as the acoustic cut-off frequency (Brown et al. 1991; Stello et al. 2008; Belkacem et al. 2011).
To estimate the asymptotic large separation of acoustic modes, we followed the
prescription of Mosser et al. (2013). We fit an expression of the type
$\nu_{n,0}=\left[n+\frac{\alpha}{2}\left(n-n_{\rm
max}\right)^{2}+\varepsilon_{\rm p}\right]\Delta\nu_{\rm obs}$ (3)
to the observed radial modes, where $\Delta\nu_{\rm obs}$ is the observed large separation around $\nu_{\rm max}$, $\alpha$ measures the curvature corresponding to the second-order term in the asymptotic development, $\varepsilon_{\rm p}$ is an offset, and $n_{\rm max}=\nu_{\rm max}/\Delta\nu_{\rm obs}$. We thus obtained $\Delta\nu_{\rm obs}=48.47\pm
0.02\,\mu$Hz, which translates into an asymptotic large separation of
$\Delta\nu_{\rm as}=50.63\pm 0.02\,\mu$Hz, following Mosser et al. (2013).
Using our estimates of $\Delta\nu_{\rm as}$, $\nu_{\rm max}$ from Sect. 3.2.2,
and $T_{\rm eff}$ from Sect. 3.1, we could apply seismic scaling relations to
derive preliminary estimates of the star’s mass, radius, and surface gravity.
We obtained $M=1.24\pm 0.12\,M_{\odot}$, $R=2.10\pm 0.07\,R_{\odot}$, and
$\log g=3.88\pm 0.03$.
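The scaling-relation step can be sketched as below. The solar reference values used here are common choices in the literature, not necessarily those adopted in this work, so the outputs are indicative and differ slightly from the values quoted above.

```python
import math

# Commonly adopted solar reference values (they vary slightly between studies)
NU_MAX_SUN = 3050.0   # muHz
DNU_SUN = 135.1       # muHz
TEFF_SUN = 5777.0     # K
LOGG_SUN = 4.438      # cgs

def scaling_relations(nu_max, dnu, teff):
    """Seismic scaling relations: M, R (solar units) and log g from
    nu_max, Delta_nu and T_eff."""
    m = (nu_max/NU_MAX_SUN)**3 * (dnu/DNU_SUN)**-4 * (teff/TEFF_SUN)**1.5
    r = (nu_max/NU_MAX_SUN) * (dnu/DNU_SUN)**-2 * (teff/TEFF_SUN)**0.5
    logg = LOGG_SUN + math.log10((nu_max/NU_MAX_SUN) * (teff/TEFF_SUN)**0.5)
    return m, r, logg

# nu_max and Delta_nu_as from Sect. 3.2, T_eff from Sect. 3.1
M, R, logg = scaling_relations(843.0, 50.63, 6150.0)
```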
## 4 Seismic modeling method
### 4.1 Physics of the models
We used MESA v10108 (Paxton et al., 2015) evolution models, with the OPAL equation of state and opacity tables (Rogers et al. 1996; Iglesias & Rogers 1996), and the solar mixture from Asplund et al. (2009). The models were computed
with an Eddington-gray atmosphere. The convection regions were treated using
the standard mixing length theory (MLT) as prescribed in Cox & Giuli (1968),
with a free parameter $\alpha_{\mathrm{conv}}$ corresponding to the ratio
between the mixing length and the pressure scale height. Microscopic diffusion
was taken into account, unless otherwise specified, using the Burgers
formalism (Burgers, 1969) and diffusion coefficients from Stanton & Murillo
(2016). However, radiative accelerations have not been included in the computed models, as the increase in computational time could not be afforded in this study. The impact of these processes is discussed in Sect. 6.2.
As Gabriel et al. (2014) stated, for stars that have a growing convective
core, it is necessary to use the Ledoux criterion to determine the radius of
the convective core $R_{\mathrm{cc}}$. This way, we avoid the creation of
unphysical convective zones outside the core in strongly chemically stratified
regions, which may have an impact on the composition profile of the star and
thus on its evolution. Moreover, we used the predictive mixing scheme (Paxton
et al., 2018).
Core overshooting was modeled as a step extension of the convective core, over
a distance
$d_{\mathrm{ov}}=\alpha_{\mathrm{ov}}\min\left(H_{p},R_{\mathrm{cc}}/\alpha_{\mathrm{conv}}\right),$
(4)
where $d_{\mathrm{ov}}$ is the distance of instant mixing overshooting,
$H_{p}$ the pressure scale height, and $\alpha_{\mathrm{ov}}$ a free parameter
quantifying the phenomenon. Eq. 4 replaces the traditional expression
$d_{\mathrm{ov}}=\alpha_{\mathrm{ov}}H_{p}$ in order to prevent
$d_{\mathrm{ov}}$ from becoming unphysically large when the core is small
($H_{p}\to\infty$ when $r\to 0$). It is important to note that this definition
varies from one evolution code to another (see, e.g., Eq. 1 of Deheuvels &
Michel 2011 for Cesam2K). Low-mass stars have small convective cores,
therefore those differences must be kept in mind when comparing models coming
from different codes. Additionally, the impact on our results of using a
diffusive overshooting, as proposed by Freytag et al. (1996), is discussed in
Sect. 6.1.
The adiabatic oscillations of the models were computed using ADIPLS
(Christensen-Dalsgaard, 2008), and the surface effects were corrected for
using the cubic term of the prescription of Ball & Gizon (2014).
### 4.2 Why modeling subgiants is challenging
The frequencies of g-dominated mixed modes evolve over a very short time, with
a non-linear change of several $\mu$Hz per million years, which is much larger
than the usual uncertainties coming from Kepler data. As this timescale is
several orders of magnitude shorter than the typical nuclear evolution time of
low-mass subgiants, reproducing the mixed modes with a traditional grid
technique requires extremely small steps in mass and age. This makes this
method prohibitive when the number of free parameters is large, as is required
to test the model physics. Interpolation in age is possible (Li et al., 2020),
but somewhat difficult for $l=2$ g-dominated modes, which we observed in
KIC10273246. Interpolation across tracks (as used e.g., in AIMS, Rendle et al.
2019) could mitigate the need for extremely fine steps in mass, but needs to
be tested for subgiants, especially regarding the extreme sensitivity of the
g-dominated frequencies to the masses of the models. Additionally, an “on-the-fly” optimization technique may perform badly due to the highly non-linear behavior of the mixed modes, especially during the computation of the derivatives in the case of a gradient-descent kind of algorithm.
To overcome those difficulties, a new approach is necessary. We thus developed
a nested optimization, where we optimize the physical parameters of models
(e.g., metallicity, initial helium abundance etc.) that have been optimized in
mass and age beforehand. This way, we can handle those two sensitive
parameters using a dedicated procedure, separately from the other
ones for which a more traditional technique is possible. This modeling method
originates from Deheuvels & Michel (2011) and has been adapted to make it more
robust. In the following, we recall the basic principles of this method and
highlight the differences with the one used in the present study.
### 4.3 Optimization in mass and age
Figure 5: HR-diagram representing the evolution tracks of stellar models with
masses varying from $1.2\,M_{\odot}$ (lightest gray) to $1.3\,M_{\odot}$
(darkest gray) and otherwise identical physics. Each evolution is stopped when
$\nu_{\mathrm{g}}$ is correctly reproduced.
In that part of the optimization process, we compute models with only two free
parameters, the mass and the age of the star, the rest being fixed. The
optimization of those two parameters can be made easier thanks to the fact
that, if all the other physical parameters (such as metallicity, mixing-length
parameter…) are fixed, reproducing only $\Delta\nu$ and the frequency
$\nu_{\mathrm{g}}$ of a g-mode is enough to precisely constrain the mass and
the age.
A physical justification of this approach can be found in Deheuvels & Michel (2011). We remind the reader of it here using the HR diagram represented in Fig. 5. It shows the iso-$\Delta\nu$ line, as $L\propto T_{\mathrm{eff}}^{5}$ for
models with the same large separation, and the iso-$\nu_{\mathrm{g}}$ line,
computed from stellar evolution models. The two lines meet at a unique point,
that can be reached by tuning only the mass (i.e., choosing the “right”
evolution path) and age (i.e., stopping at the right moment on that path). In
concrete terms, our first step is, at a given mass, to find the age that
correctly reproduces the $\nu_{\mathrm{g}}$ frequency.
As we only observe mixed modes and not pure g modes, we cannot directly measure $\nu_{\mathrm{g}}$. A possible solution would be to use the frequency of a g-dominated mode (i.e., a non-radial mode lying far from its ridge). Unfortunately, such a frequency does not evolve monotonically with age, as can be seen in the bottom panel of Fig. 6. We thus preferred to use the distance between that g-dominated mode and its closest radial mode, which we call $\delta\nu$.
As we can see in the top panel of Fig. 6, this value always decreases with
age, but it also keeps the interesting properties of the mixed modes as it
evolves very quickly during an avoided crossing, allowing us to tightly
constrain the age.
Figure 6: Evolution of $\delta\nu$ (top panel) and $\nu_{1,11}$ (bottom panel)
with age for a $1.3\,M_{\odot}$ star, after the main sequence. Here, like in
our modeling of KIC10273246, $\delta\nu$ is defined as
$\nu_{1,11}-\nu_{0,13}$, and the observed value is represented by the dotted
line. The plot has been strongly magnified in order to see the 1-$\sigma$
uncertainties from the data.
This step would be equivalent to following a unique evolution path in Fig. 5
and stopping it when it crosses the iso-$\nu_{\mathrm{g}}$ line.
We then optimize on those “good-age” models in order to correctly reproduce
the large separation. In practice, to do this we minimize the $\chi^{2}$ of
only the radial modes, which we define as
$\chi^{2}_{\mathrm{rad}}=\sum_{\mathrm{n}}\frac{\left(\nu_{\mathrm{0,n}}^{\mathrm{mod}}-\nu_{\mathrm{0,n}}^{\mathrm{obs}}\right)^{2}}{\sigma_{0,n}^{2}}.$
(5)
We do not take the non-radial modes into account at this stage, in order to avoid the complex behavior of the mixed modes. This approach differs from the one followed by Deheuvels & Michel (2011), who at this stage searched for
models that minimized the difference between the observed average large
separation and the one coming from the models. By using all the radial modes
instead here, we found that the optimization process is more robust regarding
the correction of near-surface effects. It may be observed that the behavior
of $\Delta\nu$ (and, in turn, of the radial frequencies) is close to linear
when varying the mass. Then, a simple Newton-type algorithm (such as the Levenberg-Marquardt algorithm) is enough to quickly find the optimal mass. This step is then equivalent to selecting the right evolution path that leads to the meeting point of the two iso-lines in Fig. 5.
Figure 7 shows the échelle diagram of a model that we can obtain after that
first step, with arbitrary physical parameters: metallicity
$[\mathrm{Fe}/\mathrm{H}]=-0.2\,\mathrm{dex}$, mixing-length parameter
$\alpha_{\mathrm{conv}}=1.5$, initial helium abundance $Y_{0}=0.28$. We can
see that the radial modes and $\delta\nu=\nu_{1,11}-\nu_{0,13}$ (the proxy for
$\nu_{\mathrm{g}}$) are, by construction, correctly reproduced. However, the
other frequencies are far from the observed ones. In particular, the
g-dominated mode $\nu_{1,18}$ is about 10 $\mu$Hz away from the observations.
Thus, to find a better-matching model, we adjust the other parameters.
Figure 7: Échelle diagram of a model optimized in mass and age (open symbols)
and of the observed frequencies (full, with their 3-$\sigma$ error bars). The
radial and dipolar modes are represented by crosses and diamonds,
respectively, with their radial order indicated.
### 4.4 Optimizing the other parameters
Now that we have a method to correctly find the mass and the age of a star
for a given input physics, we must find the other parameters of the star,
this time taking into account all the observational constraints. Thus, we
define a new $\chi^{2}$ as
$\chi^{2}$ as
$\displaystyle\chi^{2}=\sum_{i=1}^{N_{\mathrm{obs}}}\frac{\left(x_{i}^{\mathrm{obs}}-x_{i}^{\mathrm{mod}}\right)^{2}}{\sigma_{i}^{2}}=\sum_{i=1}^{N_{\mathrm{obs}}}\Delta_{i},$
(6)
where $N_{\mathrm{obs}}$ is the total number of observational constraints,
both seismic and non-seismic, $x_{i}^{\mathrm{obs}},x_{i}^{\mathrm{mod}}$ the
values of those observed constraints or their computed equivalent, and
$\sigma_{i}$ their respective uncertainties. We also introduced the quantities
$\Delta_{i}\equiv(x_{i}^{\mathrm{obs}}-x_{i}^{\mathrm{mod}})^{2}/\sigma_{i}^{2}$,
which indicate the contributions of every observable to the $\chi^{2}$, to be
used later.
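Eq. (6) and the per-observable contributions $\Delta_{i}$ can be written as a short helper; this is a generic sketch, not code from the modeling pipeline.

```python
import numpy as np

def chi2_contributions(x_obs, x_mod, sigma):
    """Eq. (6): per-observable contributions Delta_i and the total chi^2,
    for any mix of seismic and non-seismic constraints."""
    delta = ((np.asarray(x_obs, float) - np.asarray(x_mod, float))
             / np.asarray(sigma, float)) ** 2
    return delta, float(delta.sum())
```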
As those parameters have a lower impact on the frequencies than the mass and
age, it is now possible to use more traditional approaches. One possibility is
to compute grids of models, where each model of the grid is optimized in mass
and age. Another option is to perform an optimization using an iterative
method, where again each iteration consists of an optimization of the mass and
age. To model KIC10273246, we opted for a hybrid method, which is described in
the following section.
### 4.5 Fitting procedure adopted for KIC10273246
For the modeling of KIC10273246, we left the initial metallicity $[Z/X]_{0}$,
the mixing-length parameter $\alpha_{\mathrm{conv}}$, the initial helium
abundance $Y_{0},$ and, of course, the overshoot parameter
$\alpha_{\mathrm{ov}}$ as free parameters. At first, to explore the global
behavior of the $\chi^{2}$, we computed a very loose grid
($[\mathrm{Fe}/\mathrm{H}]$ between -0.2 and 0.2, step 0.1; $Y_{0}$ between
0.24 and 0.28, step 0.02; $\alpha_{\mathrm{conv}}$ between 1.5 and 2.1, step
0.2 and $\alpha_{\mathrm{ov}}$ between 0.0 and 0.2, step 0.05). We recall that
each model of this grid is optimized in mass and age as explained in Sect.
4.3. The purpose of this loose grid was to investigate whether double
solutions or local minima exist. No such features were found. Moreover, this
grid allowed us to identify the region of the best parameters.
We thereafter refined those parameters. As mentioned in Sect. 4.4, the
optimization of [Z/X], $\alpha_{\mathrm{conv}}$, $Y_{0}$,
$\alpha_{\mathrm{ov}}$ can be performed either through a grid approach or an
iterative procedure. We therefore conducted several tests, using stellar
models as mock observations, to determine which method is preferable. We found
the best robustness when following a hybrid approach: for given values of
$Y_{0}$ (0.26 through 0.31, step 0.01) and $\alpha_{\mathrm{ov}}$ (0 through
0.25, step 0.05), we conducted iterative optimizations with the Levenberg-
Marquardt algorithm to find optimal values of $[\mathrm{Fe}/\mathrm{H}]$ and
$\alpha_{\mathrm{conv}}$. This method differs from the one used in Deheuvels &
Michel (2011) where a single grid was used for all the free parameters.
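The hybrid strategy, an outer grid over $(Y_{0}, \alpha_{\mathrm{ov}})$ with an inner Levenberg-Marquardt fit of $[\mathrm{Fe}/\mathrm{H}]$ and $\alpha_{\mathrm{conv}}$, can be sketched as below. The residual function is a placeholder for the real model evaluation, each call of which would itself include the mass-age optimization of Sect. 4.3; the minimal LM loop with a finite-difference Jacobian is an illustration, not the actual implementation.

```python
import numpy as np
from itertools import product

def lm_fit(residuals, p0, lam=1e-3, n_iter=50):
    """Minimal Levenberg-Marquardt loop with a finite-difference Jacobian."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        r = residuals(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (residuals(p + dp) - r) / 1e-6
        A = J.T @ J + lam * np.eye(p.size)     # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.dot(residuals(p + step), residuals(p + step)) < np.dot(r, r):
            p = p + step   # improvement: accept step, relax damping
            lam *= 0.5
        else:
            lam *= 10.0    # rejected: increase damping
    return p, float(np.dot(residuals(p), residuals(p)))

def hybrid_search(residuals_at, y0_grid, aov_grid, p0):
    """Outer grid over (Y0, alpha_ov); inner LM fit of the remaining
    free parameters ([Fe/H] and alpha_conv in the text)."""
    best = None
    for y0, aov in product(y0_grid, aov_grid):
        p, chi2 = lm_fit(lambda q: residuals_at(y0, aov, q), p0)
        if best is None or chi2 < best[3]:
            best = (y0, aov, p, chi2)
    return best
```

A toy residual function with a known minimum recovers the expected grid point and inner-fit parameters, which is the sanity check we used on stellar models as mock observations.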
Among those models, we considered only those that were compatible with the
observational estimates of the chemical enrichment law $\Delta Y_{0}/\Delta
Z_{0}$. Typical values quoted for $\Delta Y_{0}/\Delta Z_{0}$ range from 1.4
to 4 (e.g., Chiappini et al. 2002, Balser 2006, Casagrande et al. 2006). We
consequently adopted a conservative approach and took into account all models
with $\Delta Y_{0}/\Delta Z_{0}<5$.
## 5 Results
In this section, we describe the general characteristics of the best models,
before commenting on the constraints on $\alpha_{\mathrm{ov}}$. We finally
investigate the internal structures of the best models and the constraints
brought by the mixed modes.
### 5.1 General characteristics of the best models
$\alpha_{\mathrm{ov}}$ | Age (Gyr) | $M/M_{\odot}$ | $R/R_{\odot}$ | $T_{\mathrm{eff}}$ (K) | $L/L_{\odot}$ | $[Z/X]_{0}$ (dex) | $\alpha_{\mathrm{conv}}$ | $Y_{0}$ | $\chi^{2}$
---|---|---|---|---|---|---|---|---|---
Uncert. | 0.25 | 0.030 | 0.021 | 83 | 0.20 | 0.010 | 0.089 | 0.020 | –
0.00 | 4.08 | 1.21 | 2.11 | 6109 | 5.60 | 0.005 | 1.77 | 0.29 | 315
0.05 | 3.89 | 1.20 | 2.10 | 6187 | 5.85 | -0.034 | 1.81 | 0.29 | 255
0.10 | 4.03 | 1.22 | 2.11 | 6134 | 5.72 | -0.030 | 1.74 | 0.28 | 201
0.15 | 3.88 | 1.22 | 2.11 | 6192 | 5.89 | -0.073 | 1.74 | 0.28 | 127
0.20 | 3.96 | 1.27 | 2.12 | 6226 | 6.11 | -0.155 | 1.64 | 0.24 | 446
0.25 | 3.26 | 1.31 | 2.13 | 6537 | 7.50 | -0.184 | 2.06 | 0.26 | 3020
Table 1: Characteristics of the best models, for every value of
$\alpha_{\mathrm{ov}}$.
Following the method described in Sect. 4.5, we obtained optimal models for
each value of $\alpha_{\mathrm{ov}}$, whose characteristics are in Table 1.
The best model, with $\alpha_{\mathrm{ov}}=0.15$, has a reduced $\chi^{2}$ of
3.2. The échelle diagram of this model is represented in Fig. 8. Also, surface
observables are consistent with the ones found in the literature, or with the
SED fitting previously described: we find a less than 1-$\sigma$ difference
for the effective temperature $T_{\mathrm{eff}}$, the metallicity
$[\mathrm{Fe}/\mathrm{H}],$ and the luminosity $L$. We found a radius and a
mass that are consistent with the seismic scaling relations as well. We can
note that, as expected from the mass-based prediction, all good models had a
convective core during the MS. This supports our choice of using this star to
constrain $\alpha_{\mathrm{ov}}$.
We note that our best-fit models are significantly less massive and older
than those of Li et al. (2020), who also performed a seismic modeling of
KIC10273246 and found $M=1.49\pm 0.08\,M_{\odot}$ and an age of $2.84\pm
0.60\,\mathrm{Gyr}$. These discrepancies could be partially explained by the
different assumptions made on the input physics. For instance, Li et al.
(2020) considered a solar-calibrated mixing length parameter, while we left
this parameter free in our study. Also, contrary to us, Li et al. (2020)
neglected element diffusion and adopted the mixture of Grevesse & Sauval
(1998). Finally, we stress that the agreement with the observed dipole mode
frequencies, in particular for the g-dominated mode $\nu_{1,18}$, is
significantly better in the present study than it is for the best-fit models
of Li et al. (2020) (compare Fig. 10 of Li et al. 2020 to Fig. 8 of the
present paper). These mismatches between models and observations for dipole
modes are acknowledged by Li et al. (2020), and the authors attribute them to
an imprecise modeling of the core structure.
For each combination of ($\alpha_{\mathrm{ov}}$, $Y_{0}$), our minimization
using the LM algorithm can be used to derive uncertainties in the stellar
parameters. The error bars in the free parameters of the fit
($[\mathrm{Fe}/\mathrm{H}]$ and $\alpha_{\mathrm{conv}}$) are obtained as the
diagonal coefficients of the inverse of the Hessian matrix. The uncertainties
on the other parameters can then be obtained using Eq. 10 of Deheuvels et al.
(2016). We thus obtain very small error bars, of the order of 0.007 for
$[\mathrm{Fe}/\mathrm{H}]$ and 0.002 for $\alpha_{\mathrm{conv}}$, which
translates into uncertainties of approximately $0.004\,M_{\odot}$ for the
stellar mass and $0.04\,$Gyr for the age. This means that for a given
combination of ($\alpha_{\mathrm{ov}}$, $Y_{0}$), the available observables
provide very strong constraints on the stellar parameters. By comparison, we
find that optimal models with different $Y_{0}$ can yield similar agreement
with the observations (statistically equivalent $\chi^{2}$) but have quite
different stellar parameters. This degeneracy of stellar models with respect
to $Y_{0}$ is addressed in more detail in Sect. 6.3. It thus seems that the
uncertainties in the stellar parameters are dominated by the model degeneracy
in $Y_{0}$. We thus used the optimal models with different $Y_{0}$ to estimate
uncertainties in the stellar parameters. For a given $\alpha_{\mathrm{ov}}$,
we fit a second order polynomial to the $\chi^{2}$ curve and retrieved the
interval of values corresponding to $\chi^{2}_{\min}+1$. This gave us the
1-$\sigma$ uncertainties, which are reported in Table 1.
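The parabola-based interval extraction can be sketched as follows: fit a second-order polynomial to the $\chi^{2}(Y_{0})$ values of the optimal models and solve for the range where $\chi^{2}\leq\chi^{2}_{\min}+1$. This is a generic illustration of the procedure, with made-up numbers in the test.

```python
import numpy as np

def one_sigma_interval(y0_values, chi2_values):
    """Fit chi^2(Y0) with a parabola and return the parabola minimum and
    the interval where chi^2 <= chi2_min + 1 (1-sigma range)."""
    a, b, c = np.polyfit(y0_values, chi2_values, 2)
    y_min = -b / (2.0 * a)
    chi2_min = np.polyval([a, b, c], y_min)
    # Solve a*y^2 + b*y + c = chi2_min + 1 for the interval edges.
    roots = np.roots([a, b, c - (chi2_min + 1.0)])
    return y_min, tuple(sorted(roots.real))
```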
### 5.2 Constraints on core overshooting
Figure 8: Échelle diagram of the best model, with $\alpha_{\mathrm{ov}}=0.15$.
Symbols and colors are identical to those in Fig. 7.

Figure 9: $\chi^{2}$ of the best model for every value of overshooting. The
colored regions indicate the contributions to the $\chi^{2}$ (i.e., the sum of
$\Delta_{i}$) of the surface observables ($T_{\mathrm{eff}}$, $L$ and
$[\mathrm{Fe}/\mathrm{H}]$) and the frequencies according to their degrees.

Figure 10: Difference of the $\chi^{2}$ contributions of the different
observables, between the $\alpha_{\mathrm{ov}}=0.0$ and $0.15$ models. The two
dipolar g-dominated modes are represented by dotted vertical lines. (a), (b),
and (c) are $T_{\mathrm{eff}}$, $L$, and [Fe/H], respectively.
Figure 9 shows the variation in the $\chi^{2}$ of the optimal models with
$\alpha_{\mathrm{ov}}$. We can see that adding overshooting allows us to
reproduce the observed frequencies significantly better, with the $\chi^{2}$
of the models without overshoot and with $\alpha_{\mathrm{ov}}=0.15$ being 315
and 127, respectively.
To better understand which frequencies allow us to favor models with
overshoot, we investigated the contributions of the different observables to
the $\chi^{2}$ ($\Delta_{i}$ in Eq. 6). We denote the $\chi^{2}$ contributions
of the observables coming from the optimal models without overshoot as
$\Delta^{\mathrm{nov}}_{i}$ and the equivalent from models with
$\alpha_{\mathrm{ov}}=0.15$ as $\Delta^{\mathrm{ov}}_{i}$. Figure 10
represents the differences
$\Delta_{i}^{\mathrm{nov}}-\Delta^{\mathrm{ov}}_{i}$, with positive values
meaning that the observable is better reproduced by the $\alpha_{\mathrm{ov}}=0.15$
model. As expected, we can see that the main $\chi^{2}$ difference is due to
the dipolar modes, which have a mixed behavior. However, we observe that the
g-dominated modes (indicated by the dotted vertical lines) hardly contribute
to distinguishing the models with and without overshooting. Both types of
model fit the g-dominated frequencies well. The main contributors to the
$\chi^{2}$ differences are in fact the $l=1$ p-dominated modes in the
neighborhood of the g-dominated modes. As explained in Sect. 2, the
frequencies of these modes are mainly influenced by the coupling between the P
and the G cavities. The intensity of that coupling thus accounts for the main
part of the differences between the models. We note that all those models
correctly reproduce $\nu_{1,18}$, as the high sensitivity of this g-dominated
mode strongly constrains the region of parameters of the models with the
smallest $\chi^{2}$. The major role played by the dipolar modes in the
constraints on $\alpha_{\mathrm{ov}}$ is also illustrated in Fig. 9, where the
colored regions indicate the contributions to the $\chi^{2}$ of the surface
observables and modes depending on their degree.
Moreover, Fig. 9 indicates that the contribution to the $\chi^{2}$ of the
$l=2$ modes hardly changes with $\alpha_{\mathrm{ov}}$. This was partly
expected, because their evanescent zone is larger than that of the dipole
modes, making the coupling between the G and P cavities weaker. Most of the
detectable modes are therefore very strongly p-dominated and do not constrain
the deep structure of the star, hence $\alpha_{\mathrm{ov}}$. Yet, one
g-dominated $l=2$ mode was detected ($\nu_{2,10}$, see Sect. 3.2.3). It is
interesting to see that, in a similar way to the previous paragraph, its
frequency is equally well reproduced by models with and without overshooting.
One can see this in Fig. 10, where
$\Delta_{i}^{\mathrm{nov}}-\Delta^{\mathrm{ov}}_{i}$ of that mode is less than
1-$\sigma$. On the other hand, the (2,11) mode, whose frequency is close
enough to $\nu_{2,10}$ to be influenced by the coupling, varies substantially
with $\alpha_{\mathrm{ov}}$. Figure 10 shows a 3-$\sigma$ difference, despite
the high $0.65\,\mu$Hz observational uncertainty, confirming the key role of
the coupling in the constraint on $\alpha_{\mathrm{ov}}$. Interestingly,
however, while the $\alpha_{\mathrm{ov}}=0.15$ model better reproduces the
dipolar modes, the (2,11) mode is better fit in the model without
overshooting. Nevertheless, its large observational uncertainty prevents it
from being too constraining.
Finally, we notice that adding a larger amount of overshooting
($\alpha_{\mathrm{ov}}>0.15$) strongly worsens the quality of the fit, placing
a strict upper limit on the value of $\alpha_{\mathrm{ov}}$. To better
understand this behavior, we investigate the seismic constraints on the
internal structure of the models in the next section.
### 5.3 Constraints on the internal structure from mixed modes
#### 5.3.1 Constraints on central density
Figure 11: Evolution of $\rho_{c}$ with age, for models with different
$\alpha_{\mathrm{ov}}$ that have been optimized in mass and age in order to
reproduce $\Delta\nu$ and $\nu_{g}$.
A tight link is expected between the g-dominated frequencies and the central
density $\rho_{c}$. This comes from the relation between the g-mode
frequencies and the Brunt-Väisälä frequency, which approximately scales as
$\rho_{c}$ (see Eq. 15 from Deheuvels & Michel 2011). To verify this, we
investigated the constraints placed on $\rho_{c}$ by the frequency of a
g-dominated dipolar mode. For this purpose, we considered the values of
$\rho_{c}$ in the models computed in the loose grids defined in Sect. 4.5, in
which models are optimized in mass and age in order to reproduce $\Delta\nu$
and the frequency of a g-dominated mode (here $\nu_{1,11}$, as described in
Sect. 4.3). We observed that, despite the wide range of parameters considered,
the standard deviation of $\rho_{c}$ among those models is as low as
$32.4\,\mathrm{g.cm}^{-3}$, which represents around 1% of the mean central
density $\widetilde{\rho_{c}}=2100\,\mathrm{g.cm}^{-3}$. This standard
deviation even decreases to $10\,\mathrm{g.cm}^{-3}$ if we only keep the 200
best models, illustrating the tight relation between $\rho_{c}$ and the
frequency of a g-dominated mixed mode.
This plays a role in the increase of $\chi^{2}$ for
$\alpha_{\mathrm{ov}}>0.15$. To illustrate this point, we computed models with
the same values of [Z/X], $Y_{0}$, and $\alpha_{\mathrm{conv}}$, but with
different values of $\alpha_{\mathrm{ov}}$. Each of these models was optimized
in mass and age, as described above. Fig. 11 shows the evolution of $\rho_{c}$
with age for those models. One can see that they all reach approximately the
same value of central density, $\widetilde{\rho_{c}}$, in accordance with the
previous paragraph. Moreover, the intensity of the $\rho_{c}$ jump that is due
to the post-MS core contraction increases with $\alpha_{\mathrm{ov}}$. This
can be explained by the fact that, for larger cores, the layers where hydrogen
remains are more distant from the center. These layers are colder and require a
stronger core contraction, hence a stronger jump in $\rho_{c}$, to reach the
fusion temperature. Therefore, when $\alpha_{\mathrm{ov}}$ increases, models
that reach $\widetilde{\rho_{c}}$ are closer to the TAMS. When the model gets
too close to the TAMS, this impacts the whole stellar structure. Internally,
the $\mu$-gradient in the core is very much affected, because the nuclear
reactions in the H-burning shell did not have enough time to smooth its shape.
In the outer layers, the convective envelope, which expands during the
subgiant phase, has a different size. Those processes alter the frequencies
(the g- and p-dominated ones, respectively), which are thereby not compatible
with the observations.
#### 5.3.2 Constraints on the Brunt-Väisälä profile.
Figure 12: $N^{2}$ profiles of the loose grid models. The left and right panel
profiles are colored in relation to the difference between the models and the
observations of $\nu_{1,18}$ and $\delta$, respectively. Those differences are
normalized by the observational uncertainties. The blue horizontal lines
represent the observed g-dominated frequencies.
Based on Eq. 1, we expect the frequency of mixed modes to place constraints on
the integral of the Brunt-Väisälä frequency in the G cavity. To investigate
this, in Fig. 12 we plot the $N^{2}$ profiles for the models of the loose grid
defined in Sect. 4.5, which all reproduce the low-frequency g-dominated mode
($\nu_{1,11}$) and $\Delta\nu$. We observe that reproducing both already
strongly constrains the part of the Brunt-Väisälä frequency dominated by the
thermal term ($\nabla_{\mathrm{ad}}-\nabla$), which corresponds to the most
central layers ($r<0.05\,R_{\odot}$). This was expected because of the $1/r$
factor in the Eq. 1 integral. On the contrary, the part of $N^{2}$ that is
dominated by the $\mu$-gradient changes significantly within the grid.
We expect that part to be strongly determined by the dipolar modes. We
therefore investigated the constraints brought by the two most determining
seismic features (see Sect. 2): the coupling intensity and the frequency of
pure g-modes. As those two are not directly measurable, we used observational
proxies. The intensity of the coupling can be quantified by
$\delta\equiv\nu_{1,12}-\nu_{1,11}$, following Deheuvels & Michel (2011).
This value is the difference between the low-frequency g-dominated mode
($\nu_{1,11}$) and the following dipolar p-dominated mode ($\nu_{1,12}$);
$\delta$ thus increases with the coupling. The frequency of a pure g-mode is
measured through $\nu_{1,18}$, which is the high-frequency g-dominated mode.
For those two quantities, we color-coded, in Fig. 12, the profiles of the
Brunt-Väisälä and Lamb frequencies based on their agreement with the
observations (left panel for $\nu_{1,18}$ and right panel for $\delta$).
One can see on the right panel that models correctly reproducing the coupling
(i.e., dark profiles) have very similar H-burning shell positions ($N^{2}$
peak around $r=0.05\,R_{\odot}$). However, the Brunt-Väisälä profiles become
more degenerate for higher $r$: several different profiles can reproduce
$\delta$ within 1-$\sigma$. This degeneracy is lifted thanks to the high-
frequency g-dominated mode: on the left panel, models closely reproducing
$\nu_{1,18}$ all have similar Brunt-Väisälä profiles. This corroborates our
theoretical approach in Sect. 2: the high-frequency g-dominated mode adds
tight constraints on the shape of the $\mu$-gradient. Important gains in
structural constraints are therefore obtained from having a second detectable
g-dominated mode.
## 6 Discussion
### 6.1 Diffusive overshooting
During this study, overshooting was modeled as a step extension of the
convective core. However, Freytag et al. (1996) proposed another prescription,
based on results coming from 2D simulations of A-stars and white dwarfs.
Overshoot is then modeled as a diffusive process, with a coefficient
$D_{\mathrm{ov}}$ decaying exponentially from the boundary of the core, following
the expression
$D_{\mathrm{ov}}=D_{\mathrm{conv}}\exp\left[-\frac{2(r-R_{\mathrm{cc}})}{f_{\mathrm{ov}}H_{p}}\right],$
(7)
with $D_{\mathrm{conv}}$ being the MLT-derived coefficient taken just below
$R_{\mathrm{cc}}$, and $f_{\mathrm{ov}}$ a free parameter that tunes the
length scale of the overshooting region.
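Eq. (7) translates directly into a short function; the numerical values in the check below are illustrative, not taken from the models of this paper.

```python
import numpy as np

def d_overshoot(r, r_cc, d_conv, f_ov, h_p):
    """Eq. (7): diffusive overshoot coefficient decaying exponentially
    above the convective-core boundary r_cc (Freytag et al. 1996).
    r, r_cc, h_p share one length unit; d_conv sets the units of D_ov."""
    return d_conv * np.exp(-2.0 * (r - r_cc) / (f_ov * h_p))
```

At the core boundary the coefficient equals $D_{\mathrm{conv}}$, and it drops by a factor $e$ over a distance $f_{\mathrm{ov}}H_{p}/2$, which is what makes $f_{\mathrm{ov}}$ the length-scale parameter.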
In order to compare the results coming from the two types of overshooting, we
first determined the $f_{\mathrm{ov}}$ that is equivalent to a given value of
$\alpha_{\mathrm{ov}}$. To do this, we searched for the value of
$f_{\mathrm{ov}}$ for which models reach the TAMS at the same age as models
computed with a step overshoot and $\alpha_{\mathrm{ov}}=0.15$, and found
$f_{\mathrm{ov}}=0.01$. We then modeled the star with a method similar to the
one used in Sect. 5 and compared the best model with the one computed with a
step overshoot.
Figure 13: Differences in the $\chi^{2}$ contributions of the observables
coming from the best models with step and diffusive overshoot.
As we can see in Fig. 13, the differences between the frequencies and
observables of the best models with step and diffusive overshoot are mainly
less than 1-$\sigma$. We note the exception of the g-dominated $\nu_{2,10}$
mode, which is better reproduced by the diffusive overshoot model. However,
its impact on the global $\chi^{2}$ is counter-balanced by the generally
better reproduced dipolar frequencies of the step overshoot model. Moreover,
the differences between the characteristics of the two models are within the
uncertainties of Table 1. Therefore, we cannot discriminate between the two
kinds of modeling with the current set of data.
### 6.2 Effect of microscopic diffusion
Figure 14: Brunt-Väisälä profiles of the best models with gravitational
settling and chemical diffusion (blue) and without those two processes
(orange).
The models presented in Sect. 5 of our study include both gravitational
settling and chemical diffusion. Such processes, which are necessary to
correctly reproduce the helioseismic observations (Christensen-Dalsgaard et
al., 1993), are expected to have an impact on our results for two main
reasons. The first is the sinking of heavier elements during the main
sequence because of gravitational settling. This reduces the hydrogen
abundance in the core and shortens the main sequence, which eventually
decreases the age of models with the same mean density. The second is the
smoothing of the structure of the subgiant. High $\mu$-gradient peaks, like
the one produced by the withdrawal of the convective core, are strongly
smoothed out by the chemical diffusion, which impacts the mixed mode
frequencies. Thus, it is interesting to see how gravitational settling and
chemical diffusion change the characteristics and the quality of the fit. We
therefore modeled the star following a similar methodology, but without
including those two processes. We found that the best model also has
$\alpha_{\mathrm{ov}}=0.15$, but provides a significantly worse agreement with
the observations than the best diffusion model, with
$\chi^{2}_{\mathrm{nodiff}}-\chi^{2}_{\mathrm{diff}}=71$. It is more massive
($1.29\,M_{\odot}$) and older ($4.85\,$Gyr), as was expected from the fact
that heavy elements do not sink during the MS. We note that the surface
observables are less well reproduced, with a best model that is too cold
($5874\,$K, which represents $1.84$ times the observational uncertainty) and
has too low a luminosity ($5.0\,L_{\odot}$, which represents $4.3\,\sigma$).
Moreover, similarly to what we found in Sect. 5, the quality of the fit
improves as $\alpha_{\mathrm{ov}}$ increases for $\alpha_{\mathrm{ov}}\leq
0.15$. However, this is less significant
($\chi^{2}_{\alpha_{\mathrm{ov}}=0}-\chi^{2}_{\alpha_{\mathrm{ov}}=0.15}=24$).
For higher values, the quality of the fit strongly worsens, in a comparable
way to what has been found with gravitational settling and chemical diffusion.
Figure 14 illustrates the differences in the Brunt-Väisälä profiles between
the two best models, with and without both diffusive processes. One can see
that the high $\nabla_{\mu}$ peak (at $r=0.06\,R_{\odot}$) is, as expected,
smoothed out by the chemical diffusion. Otherwise, the two profiles are
remarkably similar, despite the different physics of the models, which
highlights the robustness of the constraints coming from the mixed modes.
Finally, the effects of gravitational settling are expected to be somewhat
counter-balanced in the envelope by radiative accelerations, which can have a
significant impact on stars with masses greater than $1.1\,M_{\odot}$ (Deal et
al., 2018). However, including the effects of radiative accelerations in the
models increases the computational time of stellar evolution calculations by
orders of magnitude, which could not be afforded in the present study. To
test the impact of this process on our modeling, we computed a model that
takes into account radiative accelerations, with the same parameters as the
best model for $\alpha_{\mathrm{ov}}=0.15$. We obtained slightly different
large separations ($\Delta\nu_{\rm rad}-\Delta\nu_{\rm
norad}=0.12\,\mu\mathrm{Hz}$) but very similar frequencies, once normalized by
the large separation. Radiative accelerations are therefore not expected to
change the conclusions of this study.
### 6.3 Helium degeneracy
Figure 15: Échelle diagrams of the best model (blue, $Y_{0}=0.28$) and the
best model for $Y_{0}=0.24$ (orange). 3-$\sigma$ uncertainties from the
observations are represented by black bars.

Param | $Y_{0}=0.24$ model | $Y_{0}=0.28$ model
---|---|---
$M$ ($M_{\odot}$) | 1.292 | 1.222
$R$ ($R_{\odot}$) | 2.152 | 2.109
Age (Gyr) | 4.13 | 3.88
$L$ ($L_{\odot}$) | 5.95 | 5.89
$T_{\mathrm{eff}}$ (K) | 6145 | 6192
$[\mathrm{Fe}/\mathrm{H}]$ (dex) | -0.098 | -0.072
$\alpha_{\mathrm{conv}}$ | 1.70 | 1.73
$Y_{0}$ | 0.24 | 0.28
$\chi^{2}$ | 159 | 127

Table 2: Characteristics of the two models of Fig. 15.
To model KIC10273246, we initially performed optimizations with fixed
$\alpha_{\mathrm{ov}}$ and considering $Y_{0}$, $\alpha_{\mathrm{conv}}$ and
$[\mathrm{Fe}/\mathrm{H}]$ as free parameters. In this case, we observed an
unwanted sensitivity of the fitted $Y_{0}$ to the initial guess of our
optimization process. This led us to the hybrid approach described in Sect.
4.5, using optimizations with fixed values of $Y_{0}$ and varying
$[\mathrm{Fe}/\mathrm{H}]$, $\alpha_{\mathrm{conv}}$. We found that optimal
models with different values of $Y_{0}$ indeed have surprisingly close
frequencies, despite their wide range of mass. This is illustrated in Fig. 15,
which shows the échelle diagrams of the best model with $Y_{0}=0.28$ (blue)
and the best model with $Y_{0}=0.24$ (orange), both of which have
$\alpha_{\mathrm{ov}}=0.15$. Those models have quite different
characteristics, as reported in Table 2. However, their frequencies are almost
indistinguishable, despite the very small uncertainties on the mode
frequencies from Kepler data. Only the g-dominated $l=2$ mode allows us to
slightly favor the $Y_{0}=0.28$ model. Such degeneracy is related to the
anti-correlation between mass and $Y_{0}$, which has been observed in MS
stars (see e.g., Lebreton & Goupil 2014) as well as in subgiant stars (Li et
al., 2020).
Additionally, we note that no monotonic behavior has been found between the
age and $Y_{0}$. We therefore conclude that the seismic modeling of subgiants,
despite bringing strong constraints on the deep structure of the star, does
not lift the degeneracy between $Y_{0}$ and the mass.
### 6.4 Internal rotation
We mentioned in Sect. 3.2.2 that a rotational splitting of $0.53\pm
0.03\,\mu$Hz could be measured with a high level of significance for the $l=2$
mode at 779.4 $\mu$Hz. This is obviously not enough to probe the internal
rotation in detail. However, since this mode is g-dominated, it can be used to
place approximate constraints on the rotation in the core of the star.
Using our best-fit model from Sect. 5, we were able to compute the rotational
kernel $K(r)$, which relates the splitting $\delta\nu_{\rm s}$ of this mode to
the rotation profile $\Omega(r):$
$\delta\nu_{\rm s}=\int_{0}^{R}K(r)\Omega(r)/(2\pi)\,\hbox{d}r.$ (8)
This can be re-written as $\delta\nu_{\rm s}=K_{\rm g}\langle\Omega_{\rm
g}\rangle+K_{\rm p}\langle\Omega_{\rm p}\rangle$, where $\langle\Omega_{\rm
g}\rangle$ and $\langle\Omega_{\rm p}\rangle$ are average rotation rates in
the g- and p-mode cavities, respectively, and $K_{\rm g}$ (resp. $K_{\rm p}$)
corresponds to the integral of $K(r)$ in the g-mode (resp. p-mode) cavities.
For the $l=2$ mode under study, using our best-fit stellar model we found that
84% of the kernel energy is enclosed in the g-mode cavity, which confirms that
the mode is indeed g-dominated.
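The two-zone decomposition of the splitting can be inverted for the core rotation rate once $K_{\rm g}$, $K_{\rm p}$, and the envelope rate are known. The sketch below assumes rates are expressed as cyclic frequencies in the same unit as the splitting; the numbers in the test are purely illustrative round-trip values, not the kernel integrals of our best-fit model.

```python
def core_rotation(delta_nu_s, k_g, k_p, omega_p):
    """Invert delta_nu_s = K_g*<Omega_g> + K_p*<Omega_p> (rates expressed
    as cyclic frequencies, e.g. in muHz) for the mean rotation rate in
    the g-mode cavity."""
    return (delta_nu_s - k_p * omega_p) / k_g
```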
Campante et al. (2011) found a clear rotational modulation in the _Kepler_
light curve of KIC10273246. They thus estimated the surface rotation rate of
the star to about 0.5 $\mu$Hz (rotation period of about 23 days). This value
is comparable to the average rotational splitting of $0.45\pm 0.02\,\mu$Hz
that we obtained in this study. As mentioned in Sect. 3.2.2, this average
splitting is dominated by the contribution of p-dominated modes, to the extent
that it essentially measures the envelope rotation rate. Taken together, these
two measurements suggest a low rotation contrast within the p-mode cavity,
which is in line with the conclusions of Benomar et al. (2015) and Nielsen et
al. (2015) for main-sequence solar-like pulsators.
The splitting measured for the $l=2$ g-dominated mode is close to the rotation
rate inferred for the envelope, which suggests a low amount of radial
differential rotation in the star. If we take $\langle\Omega_{\rm
p}\rangle/(2\pi)\approx 0.45\,\mu$Hz, we obtain a core rotation rate of about
0.65 $\mu$Hz (rotation period of about 18 days). Clearly, more rotational
splittings would be required to precisely measure the core rotation rate.
However, our results indicate that KIC10273246 could be rotating nearly as a
solid body, like the two _Kepler_ subgiants whose internal rotation profiles
were recently measured (Deheuvels et al. 2020).
## 7 Conclusion
In this study, we performed a seismic modeling of KIC10273246, a subgiant
observed by Kepler, and obtained strong constraints on the size of its MS
convective core. We chose this target because it exhibits two dipolar
g-dominated modes, which we expected to bring stronger constraints on the
internal structure. We extracted the mode parameters from the oscillation
spectrum of the star using the full Kepler data set and thus updated the mode
frequencies that were previously obtained by Campante et al. (2011).
The seismic modeling of subgiants is notoriously complex. We here elaborated
on the algorithm proposed by Deheuvels & Michel (2011). This method consists
of a two-step approach. The purpose of the first step is to find the mass and
age that match the large separation of p modes and the frequency of one
g-dominated mixed mode. The second step optimizes the other free parameters
($[\mathrm{Fe}/\mathrm{H}]$, $Y_{0}$, $\alpha_{\mathrm{conv}}$ and
$\alpha_{\mathrm{ov}}$) to reproduce the other frequencies as closely as
possible. In this work, we improved this algorithm to make it more robust.
This enabled us to perform a detailed seismic modeling of KIC10273246, with a
particular emphasis on the investigation of the size of the MS convective
core.
We found models in good agreement with the observations, with a reduced
$\chi^{2}$ of 3.2 for the best model, and with surface observables that are
reproduced to within less than 1 $\sigma$. One key result of this study is
that models with core overshooting during the MS reproduce the observations
significantly better, with an optimal value of $\alpha_{\mathrm{ov}}=0.15$.
For higher values of $\alpha_{\mathrm{ov}}$, the quality of the fit
significantly worsens. We found that such models are very close to the TAMS.
Their internal structure thus differs from that of the
lower-$\alpha_{\mathrm{ov}}$ solutions, and their seismic properties show a
strong mismatch with the observations. We tested the robustness of our
conclusions by considering other choices for the input physics. No significant
difference was found when modeling core overshooting as a diffusive process.
Models computed without microscopic diffusion also favor
$\alpha_{\mathrm{ov}}=0.15$, albeit less significantly, and show a strong
mismatch with the observations for higher values of $\alpha_{\mathrm{ov}}$.
However, they yield poorer agreement with the seismic and surface observables
than the models computed with microscopic diffusion. This study thus confirms
the high potential of young subgiants with
mixed modes to measure the extent of the MS convective cores.
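As an aside on the fit statistic quoted above: the reduced $\chi^{2}$ of 3.2 follows the standard definition, which can be sketched as below. The frequencies, uncertainties, and parameter count in the example are made-up placeholders, not the actual KIC10273246 data.

```python
import numpy as np

def reduced_chi2(obs, model, sigma, n_free_params):
    """Sum of squared, error-normalised residuals per degree of freedom."""
    obs, model, sigma = map(np.asarray, (obs, model, sigma))
    chi2 = np.sum(((obs - model) / sigma) ** 2)
    dof = obs.size - n_free_params          # degrees of freedom
    return chi2 / dof

# Placeholder example: four "observed" frequencies vs. model values.
# For these numbers the result is ~2.5.
value = reduced_chi2([1.0, 2.0, 3.0, 4.0],
                     [1.1, 1.9, 3.2, 3.8],
                     [0.1, 0.1, 0.1, 0.1],
                     n_free_params=0)
```

A value well above 1, as here, signals residuals larger than the quoted frequency uncertainties, which is why the surface observables (reproduced to within 1 $\sigma$) are reported separately from the seismic fit quality.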
We also investigated the information conveyed by the mixed modes about the
core structure. We showed that the combined knowledge of the large separation
$\Delta\nu$ and the frequency of one g-dominated mixed mode is enough to
estimate the central density $\rho_{\rm c}$ to a precision of about 1%. This
helps us understand why models with a greater amount of core overshooting
($\alpha_{\mathrm{ov}}>0.15$) are not compatible with the observations.
Because of their larger MS convective core, they have a higher $\rho_{\rm c}$
just after the end of the MS, and they thus reach the optimal central density
closer to the TAMS. We then studied the roles of the different mixed mode
frequencies in determining the profile of the Brunt-Väisälä frequency inside
the star. While the first g-dominated mixed mode strongly constrains the
thermal part, the second one helps constrain the part of the Brunt-Väisälä
frequency that is dominated by the $\mu$-gradient. We therefore confirm that
having access to two g-dominated mixed modes helps better characterize the
Brunt-Väisälä profile.
Also, despite the strong constraints that were obtained on the internal
structure, we noted the existence of a degeneracy between the stellar mass and
the initial helium abundance $Y_{0}$. This degeneracy, which is already well
known for MS stars (e.g., Lebreton & Goupil 2014), is not lifted by the mixed
modes. We find that it is in fact the main source of uncertainties in the
determination of the stellar parameters. This should be kept in mind when
modeling subgiants. Current modeling techniques, such as traditional
grid-based methods, tend to miss a significant fraction of the best-fit models
because the grid mesh is too coarse. Under such conditions, the degeneracy
between $Y_{0}$ and the mass can be explored only partially, causing the
uncertainties on the stellar parameters to be underestimated.
As a byproduct of this work, we obtained partial constraints on the internal
rotation of KIC10273246. We were not able to measure individual rotational
splittings for the dipolar mixed modes, but we obtained a splitting of
$0.53\pm 0.03\mu$Hz for the only g-dominated $l=2$ mixed mode in the spectrum
of the star. Interestingly, this value is close to the surface rotation rate
of $0.5\mu$Hz that was found for this star by Campante et al. (2011) using
photometric data from Kepler. This suggests that the star might be rotating
nearly as a solid body, similar to the two young subgiants recently studied
by Deheuvels et al. (2020).
This work highlights the large potential of the seismic modeling of young
subgiants to indirectly obtain constraints on the core structure of the star
during the MS. The next step will be to apply this method to a larger sample
of stars drawn from the targets observed with Kepler and TESS, and thereby
place quantitative constraints on the overshooting process in low-mass stars.
The data from the upcoming PLATO mission (Rauer et al., 2014) will add a large
number of potential targets for this type of analysis.
Moreover, we show in this study that detecting several g-dominated dipole
modes places stronger constraints on the shape of the Brunt-Väisälä profile,
and therefore on the $\mu$-gradient in the stellar core. It could thus be
anticipated that more evolved subgiants, which show a larger number of
g-dominated mixed modes, would be more favorable targets for our purpose.
However, these stars are also further from the end of the MS, and a concern is
that the chemical composition in their cores might be less dependent on the
properties of the MS core. We therefore plan to investigate this effect in a
subsequent study.
###### Acknowledgements.
We thank Morgan Deal for enlightening discussions about microscopic diffusion.
We also thank the anonymous referee for comments that improved the clarity of
this paper. We acknowledge support from the project BEAMING ANR-18-CE31-0001
of the French National Research Agency (ANR) and from the Centre National
d’Etudes Spatiales (CNES).
## References
* Appourchaux et al. (2008) Appourchaux, T., Michel, E., Auvergne, M., et al. 2008, A&A, 488, 705
* Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
* Baglin et al. (2006) Baglin, A., Auvergne, M., Barge, P., et al. 2006, in ESA Special Publication, Vol. 1306, The CoRoT Mission Pre-Launch Status - Stellar Seismology and Planet Finding, ed. M. Fridlund, A. Baglin, J. Lochard, & L. Conroy, 33
* Ball & Gizon (2014) Ball, W. H. & Gizon, L. 2014, A&A, 568, A123
* Balser (2006) Balser, D. S. 2006, AJ, 132, 2326
* Belkacem et al. (2011) Belkacem, K., Goupil, M. J., Dupret, M. A., et al. 2011, A&A, 530, A142
* Benomar et al. (2015) Benomar, O., Takata, M., Shibahashi, H., Ceillier, T., & García, R. A. 2015, MNRAS, 452, 2654
* Borucki et al. (2010) Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977
* Brown et al. (1991) Brown, T. M., Gilliland, R. L., Noyes, R. W., & Ramsey, L. W. 1991, ApJ, 368, 599
* Burgers (1969) Burgers, J. M. 1969, Flow Equations for Composite Gases
* Campante et al. (2011) Campante, T. L., Handberg, R., Mathur, S., et al. 2011, A&A, 534, A6
* Casagrande et al. (2006) Casagrande, L., Portinari, L., & Flynn, C. 2006, MNRAS, 373, 13
* Chiappini et al. (2002) Chiappini, C., Renda, A., & Matteucci, F. 2002, A&A, 395, 789
* Christensen-Dalsgaard (2008) Christensen-Dalsgaard, J. 2008, Ap&SS, 316, 113
* Christensen-Dalsgaard et al. (1993) Christensen-Dalsgaard, J., Proffitt, C. R., & Thompson, M. J. 1993, ApJ, 403, L75
* Claret & Torres (2016) Claret, A. & Torres, G. 2016, A&A, 592, A15
* Claret & Torres (2019) Claret, A. & Torres, G. 2019, ApJ, 876, 134
* Constantino & Baraffe (2018) Constantino, T. & Baraffe, I. 2018, A&A, 618, A177
* Cox & Giuli (1968) Cox, J. P. & Giuli, R. T. 1968, Principles of stellar structure
* Creevey et al. (2012) Creevey, O. L., Doǧan, G., Frasca, A., et al. 2012, A&A, 537, A111
* Deal et al. (2018) Deal, M., Alecian, G., Lebreton, Y., et al. 2018, A&A, 618, A10
* Deheuvels et al. (2015) Deheuvels, S., Ballot, J., Beck, P. G., et al. 2015, A&A, 580, A96
* Deheuvels et al. (2020) Deheuvels, S., Ballot, J., Eggenberger, P., et al. 2020, A&A, 641, A117
* Deheuvels et al. (2016) Deheuvels, S., Brandão, I., Silva Aguirre, V., et al. 2016, A&A, 589, A93
* Deheuvels et al. (2014) Deheuvels, S., Doğan, G., Goupil, M. J., et al. 2014, A&A, 564, A27
* Deheuvels & Michel (2010) Deheuvels, S. & Michel, E. 2010, Astronomische Nachrichten, 331, 929
* Deheuvels & Michel (2011) Deheuvels, S. & Michel, E. 2011, A&A, 535, A91
* Deheuvels et al. (2010) Deheuvels, S., Michel, E., Goupil, M. J., et al. 2010, A&A, 514, A31
* Evans et al. (2018) Evans, D. W., Riello, M., De Angeli, F., et al. 2018, A&A, 616, A4
* Freytag et al. (1996) Freytag, B., Ludwig, H. G., & Steffen, M. 1996, A&A, 313, 497
* Gabriel et al. (2014) Gabriel, M., Noels, A., Montalbán, J., & Miglio, A. 2014, A&A, 569, A63
* García et al. (2011) García, R. A., Hekker, S., Stello, D., et al. 2011, MNRAS, 414, L6
* Gizon & Solanki (2003) Gizon, L. & Solanki, S. K. 2003, ApJ, 589, 1009
* Goupil et al. (2011) Goupil, M. J., Lebreton, Y., Marques, J. P., et al. 2011, in Journal of Physics Conference Series, Vol. 271, GONG-SoHO 24: A New Era of Seismology of the Sun and Solar-Like Stars, 012032
* Green et al. (2019) Green, G. M., Schlafly, E., Zucker, C., Speagle, J. S., & Finkbeiner, D. 2019, ApJ, 887, 93
* Grevesse & Sauval (1998) Grevesse, N. & Sauval, A. J. 1998, Space Sci. Rev., 85, 161
* Høg et al. (2000) Høg, E., Fabricius, C., Makarov, V. V., et al. 2000, A&A, 355, L27
* Huber et al. (2019) Huber, D., Chaplin, W. J., Chontos, A., et al. 2019, AJ, 157, 245
* Iglesias & Rogers (1996) Iglesias, C. A. & Rogers, F. J. 1996, ApJ, 464, 943
* Jenkins et al. (2010) Jenkins, J. M., Caldwell, D. A., Chandrasekaran, H., et al. 2010, ApJ, 713, L120
* Karoff et al. (2013) Karoff, C., Campante, T. L., Ballot, J., et al. 2013, ApJ, 767, 34
* Kippenhahn et al. (2012) Kippenhahn, R., Weigert, A., & Weiss, A. 2012, Stellar Structure and Evolution
* Kurucz (2005) Kurucz, R. L. 2005, Memorie della Societa Astronomica Italiana Supplementi, 8, 14
* Lebreton & Goupil (2014) Lebreton, Y. & Goupil, M. J. 2014, A&A, 569, A21
* Li et al. (2020) Li, T., Bedding, T. R., Christensen-Dalsgaard, J., et al. 2020, MNRAS, 495, 3431
* Li et al. (2019) Li, T., Bedding, T. R., Kjeldsen, H., et al. 2019, MNRAS, 483, 780
* Lomb (1976) Lomb, N. R. 1976, Ap&SS, 39, 447
* Maeder & Mermilliod (1981) Maeder, A. & Mermilliod, J. C. 1981, A&A, 93, 136
* Martin et al. (2005) Martin, D. C., Fanson, J., Schiminovich, D., et al. 2005, ApJ, 619, L1
* Metcalfe et al. (2020) Metcalfe, T. S., van Saders, J. L., Basu, S., et al. 2020, arXiv e-prints, arXiv:2007.12755
* Moravveji et al. (2015) Moravveji, E., Aerts, C., Pápics, P. I., Triana, S. A., & Vandoren, B. 2015, A&A, 580, A27
* Moravveji et al. (2016) Moravveji, E., Townsend, R. H. D., Aerts, C., & Mathis, S. 2016, ApJ, 823, 130
* Mosser et al. (2012) Mosser, B., Goupil, M. J., Belkacem, K., et al. 2012, A&A, 540, A143
* Mosser et al. (2013) Mosser, B., Michel, E., Belkacem, K., et al. 2013, A&A, 550, A126
* Nielsen et al. (2015) Nielsen, M. B., Schunker, H., Gizon, L., & Ball, W. H. 2015, A&A, 582, A10
* Paxton et al. (2015) Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15
* Paxton et al. (2018) Paxton, B., Schwab, J., Bauer, E. B., et al. 2018, ApJS, 234, 34
* Rauer et al. (2014) Rauer, H., Catala, C., Aerts, C., et al. 2014, Experimental Astronomy, 38, 249
* Rendle et al. (2019) Rendle, B. M., Buldgen, G., Miglio, A., et al. 2019, MNRAS, 484, 771
* Ricker et al. (2014) Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9143, Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave, 914320
* Rogers et al. (1996) Rogers, F. J., Swenson, F. J., & Iglesias, C. A. 1996, ApJ, 456, 902
* Scargle (1982) Scargle, J. D. 1982, ApJ, 263, 835
* Shibahashi (1979) Shibahashi, H. 1979, PASJ, 31, 87
* Silva Aguirre et al. (2013) Silva Aguirre, V., Basu, S., Brandão, I. M., et al. 2013, ApJ, 769, 141
* Skrutskie et al. (2006) Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163
* Stanton & Murillo (2016) Stanton, L. G. & Murillo, M. S. 2016, Phys. Rev. E, 93, 043203
* Stassun & Torres (2016) Stassun, K. G. & Torres, G. 2016, AJ, 152, 180
* Stello et al. (2008) Stello, D., Bruntt, H., Preston, H., & Buzasi, D. 2008, ApJ, 674, L53
* Stokholm et al. (2019) Stokholm, A., Nissen, P. E., Silva Aguirre, V., et al. 2019, MNRAS, 489, 928
* Tassoul (1980) Tassoul, M. 1980, ApJS, 43, 469
* Unno et al. (1989) Unno, W., Osaki, Y., Ando, H., Saio, H., & Shibahashi, H. 1989, Nonradial oscillations of stars
* VandenBerg et al. (2006) VandenBerg, D. A., Bergbusch, P. A., & Dowler, P. D. 2006, ApJS, 162, 375
* Wilks (1938) Wilks, S. S. 1938, Ann. Math. Stat., 9, 60
* Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868
* Zinn et al. (2019) Zinn, J. C., Pinsonneault, M. H., Huber, D., & Stello, D. 2019, ApJ, 878, 136
## Appendix A Appendix A: Observed frequencies
Table 3: Estimated mode parameters for KIC10273246.
$l$ | $\nu$ ($\mu$Hz) | $H$ (ppm$^2$/$\mu$Hz) | $\Gamma$ ($\mu$Hz)
---|---|---|---
$0$ | $594.58\pm 0.13$ | $10.9^{+2.8}_{-2.2}$ | $1.4^{+0.4}_{-0.3}$
$0$ | $642.73\pm 0.11$ | $10.5^{+3.3}_{-2.5}$ | $1.2^{+0.5}_{-0.3}$
$0$ | $690.80\pm 0.23$ | $7.9^{+1.7}_{-1.4}$ | $3.5^{+1.0}_{-0.8}$
$0$ | $738.19\pm 0.10$ | $18.4^{+2.7}_{-2.3}$ | $1.7^{+0.2}_{-0.2}$
$0$ | $785.00\pm 0.13$ | $17.3^{+2.1}_{-1.8}$ | $2.2^{+0.2}_{-0.2}$
$0$ | $833.65\pm 0.13$ | $18.9^{+2.4}_{-2.1}$ | $2.4^{+0.3}_{-0.3}$
$0$ | $883.30\pm 0.12$ | $18.7^{+2.3}_{-2.1}$ | $2.2^{+0.2}_{-0.2}$
$0$ | $932.16\pm 0.17$ | $13.0^{+1.6}_{-1.5}$ | $2.8^{+0.3}_{-0.3}$
$0$ | $981.26\pm 0.30$ | $8.9^{+1.4}_{-1.2}$ | $4.9^{+1.0}_{-0.8}$
$0$ | $1030.30\pm 0.38$ | $4.4^{+0.9}_{-0.7}$ | $4.2^{+0.9}_{-0.8}$
$0$ | $1078.72\pm 0.38$ | $3.2^{+1.0}_{-0.8}$ | $3.0^{+1.2}_{-0.9}$
$1$ | $622.85\pm 0.16$ | $6.2^{+1.8}_{-1.4}$ | $1.6^{+0.5}_{-0.4}$
$1$ | $662.16\pm 0.17$ | $6.3^{+1.3}_{-1.0}$ | $2.4^{+0.5}_{-0.4}$
$1$ | $695.96\pm 0.10$ | $11.1^{+5.7}_{-3.8}$ | $0.7^{+0.5}_{-0.3}$
$1$ | $724.33\pm 0.10$ | $12.9^{+2.6}_{-2.1}$ | $1.5^{+0.3}_{-0.3}$
$1$ | $764.19\pm 0.12$ | $13.0^{+1.7}_{-1.5}$ | $2.6^{+0.3}_{-0.3}$
$1$ | $809.76\pm 0.08$ | $23.7^{+3.4}_{-3.0}$ | $1.7^{+0.2}_{-0.2}$
$1$ | $857.24\pm 0.09$ | $19.6^{+2.6}_{-2.3}$ | $2.0^{+0.2}_{-0.2}$
$1$ | $905.14\pm 0.10$ | $16.3^{+2.1}_{-1.8}$ | $2.3^{+0.3}_{-0.2}$
$1$ | $950.42\pm 0.15$ | $9.1^{+1.2}_{-1.1}$ | $3.0^{+0.4}_{-0.3}$
$1$ | $978.28\pm 0.10$ | $11.9^{+5.2}_{-3.6}$ | $0.5^{+0.3}_{-0.2}$
$1$ | $1008.23\pm 0.21$ | $5.8^{+0.8}_{-0.7}$ | $3.5^{+0.5}_{-0.5}$
$1$ | $1054.85\pm 0.48$ | $2.4^{+0.4}_{-0.4}$ | $6.2^{+1.3}_{-1.1}$
$1$ | $1103.65\pm 0.50$ | $2.0^{+0.5}_{-0.4}$ | $6.1^{+3.1}_{-2.0}$
$2$ | $590.15\pm 0.21$ | $5.4^{+1.4}_{-1.1}$ | $1.4^{+0.4}_{-0.3}$
$2$ | $638.42\pm 0.22$ | $5.2^{+1.6}_{-1.2}$ | $1.2^{+0.5}_{-0.3}$
$2$ | $685.80\pm 0.46$ | $4.0^{+0.8}_{-0.7}$ | $3.5^{+1.0}_{-0.8}$
$2$ | $733.83\pm 0.19$ | $9.2^{+1.3}_{-1.2}$ | $1.7^{+0.2}_{-0.2}$
$2$ | $779.53\pm 0.03$ | $54.2^{+19.0}_{-14.1}$ | $0.3^{+0.1}_{-0.1}$
$2$ | $784.03\pm 0.65$ | $12.5^{+3.2}_{-2.5}$ | $1.5^{+0.5}_{-0.4}$
$2$ | $829.97\pm 0.21$ | $9.4^{+1.2}_{-1.1}$ | $2.4^{+0.3}_{-0.3}$
$2$ | $878.97\pm 0.20$ | $9.3^{+1.2}_{-1.0}$ | $2.2^{+0.2}_{-0.2}$
$2$ | $927.47\pm 0.23$ | $6.5^{+0.8}_{-0.7}$ | $2.8^{+0.3}_{-0.3}$
$2$ | $976.97\pm 0.66$ | $4.4^{+0.7}_{-0.6}$ | $4.9^{+1.0}_{-0.8}$
$2$ | $1025.32\pm 0.65$ | $2.2^{+0.4}_{-0.4}$ | $4.2^{+0.9}_{-0.8}$
$2$ | $1074.32\pm 0.72$ | $1.6^{+0.5}_{-0.4}$ | $3.0^{+1.2}_{-0.9}$
# The Highest Energy HAWC Sources are Likely Leptonic and Powered by Pulsars
Takahiro Sudoh (corresponding author), Tim Linden, and Dan Hooper
###### Abstract
The HAWC Collaboration has observed gamma rays at energies above 56 TeV from a
collection of nine sources. It has been suggested that this emission could be
hadronic in nature, requiring that these systems accelerate cosmic-ray protons
or nuclei up to PeV-scale energies. In this paper, we instead show that the
spectra of these objects favor a leptonic (inverse Compton) origin for their
emission. More specifically, the gamma-ray emission from these objects can be
straightforwardly accommodated within a model in which $\sim\mathcal{O}(10\%)$
of the host pulsar’s spindown power is transferred into the acceleration of
electrons and positrons with a power-law spectrum that extends to several
hundred TeV or higher. The spectral break that is observed among these sources
is naturally explained within the context of this simple model, and occurs at
the energy where the timescale for energy losses matches the age of the
pulsar. In contrast, this spectral feature cannot be straightforwardly
accommodated in hadronic scenarios. Furthermore, hadronic models predict that
these sources should produce more emission at GeV-scale energies than is
observed. In light of these considerations, we conclude that HAWC’s highest
energy sources should be interpreted as TeV halos or pulsar wind nebulae,
which produce their emission through inverse Compton scattering, and are
powered by the rotational kinetic energy of their host pulsar.
## 1 Introduction
The cosmic-ray spectrum is thought to be dominated by Galactic sources up to
energies of $\sim$ 1 PeV, corresponding to the spectral feature known as the
“knee”. The nature of the Milky Way’s so-called “PeVatrons” remains an open
and widely debated question. Among the proposed candidates, supernova remnants
have long been the most popular, and gamma-ray measurements support the
conclusion that these objects produce high-energy protons [1, 2]. That being
said, it has also been argued that such sources may be unable to accelerate
protons beyond a few hundred TeV [3, 4]. Other PeVatron candidates include the
Milky Way’s supermassive black hole [5, 6, 7], and clusters of young and
massive stars [8]. The sources of the highest energy Galactic protons are
expected to generate gamma rays through the production and decay of neutral
pions, resulting in a power-law gamma-ray spectrum that extends to $\sim$100
TeV.
Pulsars can also accelerate electrons and positrons up to energies of at least
$\sim$100 TeV. Due to the Klein-Nishina suppression associated with inverse-
Compton scattering, electrons and positrons in this energy range lose much of
their energy to synchrotron emission, suppressing the leptonic production of
$\sim$100 TeV-scale gamma rays. Through this distinction, very high-energy
gamma-ray telescopes provide us with one of the most powerful ways to
discriminate between accelerators of hadronic and leptonic cosmic rays.
The High Altitude Water Cherenkov (HAWC) observatory has recently produced a
catalog of nine gamma-ray sources detected at energies above 56 TeV. Three of
these sources have been observed above 100 TeV, making this the highest energy
gamma-ray catalog reported to date [9]. (The Tibet air shower array has also
reported the detection of emission above 100 TeV from the Crab Nebula [10].)
Given that all nine of these sources are located within $0.5^{\circ}$ of a
known pulsar, it appears likely that they are associated with this class of
objects. Furthermore, eight of these nine pulsars are quite young
($t_{c}\equiv P/2\dot{P}\sim 1-50\,{\rm kyr}$), and have exceptionally high
spindown power ($\dot{E}>10^{36}\,{\rm erg/s}$). This information suggests two
possible interpretations. On the one hand, the gamma-ray emission from these
sources could be leptonic in nature, powered by the host pulsars’ rotational
kinetic energy. Alternatively, the observed emission could be hadronic,
revealing these systems’ supernova remnants to be among the Milky Way’s long-
sought-after PeVatrons.
In this paper, we examine the luminosity, spectrum, and morphology of the very
high-energy sources observed by HAWC in order to evaluate whether they are
more likely to be leptonic sources powered by the rotational kinetic energy of
the young pulsar, or hadronic PeVatrons powered by the supernova remnant. We
find that the former interpretation is favored by three factors. First, the
spectra of these sources can be easily accommodated by simple models in which
very high-energy electrons and positrons are accelerated with a power-law
spectrum. In contrast, hadronic models cannot straightforwardly account for
the spectra observed from several of HAWC’s highest energy sources. Second,
the gamma-ray luminosities observed from these sources are well-matched to the
past integrated spindown power of their host pulsars. And third, the spectral
break observed among these systems at $E_{\gamma}\sim\mathcal{O}(10\,{\rm
TeV})$ is naturally explained by the guaranteed suppression of the inverse
Compton scattering cross section by Klein-Nishina effects, and the energy
dependence of the electron/positron energy-loss time-scale, which is smaller
than the pulsar age for the highest-energy leptons.
In light of these considerations, we conclude that HAWC’s highest energy
sources are likely to be TeV halos and/or pulsar wind nebulae, with gamma-ray
emission that is 1) powered by the rotational kinetic energy of the host
pulsar, and 2) produced through inverse Compton scattering.
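The break argument in the previous paragraph can be put in rough numbers. In the Thomson limit, electrons losing energy to inverse Compton scattering and synchrotron radiation obey $dE/dt=-bE^{2}$, so the loss timescale is $\tau(E)=1/(bE)$, and the cooled part of the spectrum begins where $\tau$ equals the pulsar age. The sketch below assumes a combined radiation plus magnetic energy density of $1\,{\rm eV/cm^{3}}$, an illustrative value not taken from this paper, and ignores the Klein-Nishina corrections discussed in the text:

```python
SIGMA_T = 6.652e-25            # Thomson cross section [cm^2]
C = 2.998e10                   # speed of light [cm/s]
ME_C2 = 8.187e-7               # electron rest energy [erg]
ERG_PER_TEV = 1.602            # 1 TeV in erg
EV_CM3_TO_ERG_CM3 = 1.602e-12  # 1 eV/cm^3 in erg/cm^3

def loss_coefficient(u_tot_ev_cm3):
    """b in dE/dt = -b E^2 (E in erg), for total energy density u_tot."""
    u = u_tot_ev_cm3 * EV_CM3_TO_ERG_CM3
    return (4.0 / 3.0) * SIGMA_T * C * u / ME_C2**2

def break_energy_tev(age_kyr, u_tot_ev_cm3=1.0):
    """Electron energy where the loss time 1/(b E) equals the pulsar age."""
    age_s = age_kyr * 1e3 * 3.156e7
    return 1.0 / (loss_coefficient(u_tot_ev_cm3) * age_s) / ERG_PER_TEV

# For a ~10 kyr pulsar this lands at a few tens of TeV in electron energy,
# consistent with a gamma-ray break at E_gamma ~ O(10 TeV).
```

Since $\tau \propto 1/E$, older pulsars and denser radiation fields both push the break to lower energies, which is the sense of the energy-loss argument made above.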
## 2 TeV Halos and Pulsar Wind Nebulae
Observations by HAWC and Milagro have detected diffuse multi-TeV emission from
the regions surrounding the nearby Geminga and Monogem pulsars [11, 12, 13,
14]. The spectrum and intensity of this emission indicate that these sources
convert a significant fraction ($\sim$ $10\%$) of their total spindown power
into very high-energy electron-positron pairs. Furthermore, each of these TeV
halos exhibits an angular extension of $\sim$ $2^{\circ}$ (corresponding to
$\sim$ $25\,{\rm pc}$), indicating that cosmic-ray propagation in the vicinity
of these pulsars is much less efficient than is typically experienced
elsewhere in the interstellar medium [15, 16, 17, 18, 19, 20, 21, 22].
Looking beyond the specific examples of Geminga and Monogem, observations by
HAWC (and HESS [23, 24]) have led to the identification of a new class of
spatially extended, multi-TeV gamma-ray sources, powered by the rotational
kinetic energy of pulsars, and which produce their observed emission through
the inverse Compton scattering of very high-energy electrons and positrons on
the surrounding radiation field [25, 26]. A large fraction of the sources
detected by HAWC [11, 12, 27] have been shown to be spatially coincident with
a pulsar, and all indications suggest that TeV halos are a generic feature of
middle-aged pulsars (whether or not TeV halos also accompany millisecond
pulsars is an open question [28]). These observations suggest that nearby TeV
halos are likely responsible for the observed cosmic-ray positron excess [15,
29, 30, 31, 32] (for earlier work, see Refs. [33, 34, 35]), as well as the
diffuse TeV excess observed by Milagro [36], and could plausibly dominate the
TeV-scale emission observed from the Galactic Center by HESS [37] (as opposed
to the hypothesis that this emission is produced by a Galactic Center PeVatron
[7]). Extrapolating to the Milky Way’s larger pulsar population, we expect
HAWC and the Cherenkov Telescope Array (CTA) [38] to ultimately detect $\sim$
$50-240$ TeV halos [26], including many whose pulsed radio and gamma-ray
emission is not beamed in the direction of Earth [25].
When referring to TeV halos, we adopt a definition for this source class which
requires that the high-energy electrons and positrons responsible for the
observed gamma-ray emission propagate via diffusion, rather than convection or
advection. This distinguishes TeV halos from pulsar wind nebulae, for which
advection plays an important and often dominant role (for a review, see Ref.
[39]). TeV halos are also more spatially extended than typical pulsar wind
nebulae. Pulsar wind nebulae are created when the energetic outflow from a
pulsar collides with the ambient medium (supernova ejecta or interstellar
medium), resulting in a shockwave surrounding a diffuse plasma of electrons
and positrons. Like TeV halos, the emission from a pulsar wind nebula is
powered by its pulsar’s rotational kinetic energy, and is leptonic in nature.
We consider it to be plausible that HAWC’s highest energy sources could be a
combination of TeV halos, pulsar wind nebulae, and objects that are currently
in a transitional state between these two classifications. (An alternative
definition has been put forth by Giacinti et al. [40], which classifies a
region as a TeV halo if it contains an overdensity of relativistic electrons
and positrons around a pulsar, and if the pulsar and associated supernova
remnant do not dominate the dynamics or composition of the interstellar
medium in that region. Compared to our definition, this choice leads Giacinti
et al. to classify many objects that we would call TeV halos as pulsar wind
nebulae, despite the fact that the dynamics assumed by both groups are
similar.)
## 3 Associating Very High-Energy HAWC Sources With Known Pulsars
We begin by describing the known pulsars that could potentially be responsible
for powering the nine sources found in the eHWC ($>$ $56\,{\rm TeV}$) catalog
[9]. In Table 1, we list some of the selected characteristics of these
pulsars, as reported in the Australia Telescope National Facility (ATNF)
pulsar catalog [41]. This list of pulsars was identified in Ref. [9] based on
their locations (within $0.5^{\circ}$ of the corresponding HAWC sources), and
their high spindown power. In some cases, other nearby pulsars are not listed,
primarily when observations indicate that they have substantially lower values
of $\dot{E}$.
Comparing the values of the spindown luminosity of these pulsars,
$\dot{E}/4\pi d^{2}$, to their integrated gamma-ray flux, $F_{\gamma}$, it is
clear that their rotational kinetic energy is more than sufficient to produce
the very high-energy gamma-ray emission reported by HAWC. (Note that the
values of $F_{\gamma}$ given in Table 1 are based on an extrapolation to
energies lower than those measured by HAWC, and thus may somewhat overestimate
the total gamma-ray flux above 0.1 TeV.) More quantitatively, this comparison
suggests that between 0.5% and 20% of these pulsars’ spindown power goes into
the production of gamma rays above 0.1 TeV (consistent with the range of
values required to explain the TeV halos of Geminga and Monogem [15, 25, 26]).
The only exception to this is eHWC J0534+220, which would be far brighter if
the spindown power of its pulsar was transferred into gamma rays with similar
efficiency. Given that this source is associated with the Crab Nebula, we do
not find this result particularly surprising. In particular, the magnetic
field of the Crab pulsar wind nebula is significantly stronger than that found
among typical pulsar wind nebulae (or TeV halos), causing a large fraction of
its spindown power to be transferred into the production of synchrotron
emission [42, 43, 44, 45, 46].
HAWC Source | Pulsar Candidate | Distance | $\dot{E}$ | $\dot{E}/4\pi d^{2}$ | $F_{\gamma}$ | $F_{\gamma}$ / ($\dot{E}/4\pi d^{2}$) | $P$ | $\dot{P}$ | $t_{c}\equiv P/2\dot{P}$ | Radio
---|---|---|---|---|---|---|---|---|---|---
| | $({\rm kpc})$ | $({\rm erg/s})$ | $({\rm TeV/cm^{2}/s})$ | $({\rm TeV/cm^{2}/s})$ | | $({\rm s})$ | $\times 10^{14}$ | $({\rm kyr})$ |
eHWC J0534+220 | PSR J0534+2200 | 2.00 | $4.5\times 10^{38}$ | $5.9\times 10^{-7}$ | $1.8\times 10^{-10}$ | 0.0003 | 0.033 | 42.1 | 1.26 | ${\rm Yes}$
eHWC J1809-193 | PSR J1809-1917 | 3.27 | $1.8\times 10^{36}$ | $8.8\times 10^{-10}$ | $8.5\times 10^{-11}$ | 0.1 | 0.083 | 2.55 | 51.4 | ${\rm Yes}$
– | PSR J1811-1925 | 5.00 | $6.4\times 10^{36}$ | $1.3\times 10^{-9}$ | – | 0.07 | 0.065 | 4.40 | 23.3 | ${\rm No}$
eHWC J1825-134 | PSR J1826-1334 | 3.61 | $2.8\times 10^{36}$ | $1.1\times 10^{-9}$ | $2.3\times 10^{-10}$ | 0.2 | 0.101 | 7.53 | 21.4 | ${\rm Yes}$
– | PSR J1826-1256 | 1.55 | $3.6\times 10^{36}$ | $7.8\times 10^{-9}$ | – | 0.03 | $0.110$ | 12.1 | 14.4 | ${\rm No}$
eHWC J1839-057 | PSR J1838-0537 | 2.0 | $6.0\times 10^{36}$ | $7.8\times 10^{-9}$ | $4.1\times 10^{-10}$ | 0.05 | 0.146 | 47.2 | 4.89 | ${\rm No}$
eHWC J1842-035 | PSR J1844-0346 | 2.4 | $4.2\times 10^{36}$ | $3.8\times 10^{-9}$ | $7.6\times 10^{-11}$ | 0.02 | 0.113 | 15.5 | 11.6 | ${\rm No}$
eHWC J1850+001 | PSR J1849-0001 | 7.00 | $9.8\times 10^{36}$ | $1.0\times 10^{-9}$ | $4.5\times 10^{-11}$ | 0.05 | 0.039 | 1.42 | 43.1 | ${\rm No}$
eHWC J1907+063 | PSR J1907+0602 | 2.37 | $2.8\times 10^{36}$ | $2.6\times 10^{-9}$ | $4.6\times 10^{-11}$ | 0.02 | 0.107 | 8.68 | 19.5 | ${\rm Yes}$
eHWC J2019+368 | PSR J2021+3651 | 1.80 | $3.4\times 10^{36}$ | $5.5\times 10^{-9}$ | $2.7\times 10^{-11}$ | 0.005 | 0.104 | 9.57 | 17.2 | ${\rm Yes}$
eHWC J2030+412 | PSR J2032+4127 | 1.33 | $1.5\times 10^{35}$ | $4.4\times 10^{-10}$ | $5.1\times 10^{-11}$ | 0.1 | 0.143 | 1.13 | 201 | ${\rm Yes}$
Table 1: Properties of the pulsars potentially associated with the highest
energy HAWC sources [9], as reported in the ATNF Catalog [41]. Note that eHWC
J1809-193 and eHWC J1825-134 each have two possible pulsar associations. When
possible, we show the distance determinations as provided in the ATNF catalog;
for PSR J1838-0537, PSR J1844-0346, and PSR J1849-0001, we show those from
Refs. [47], [48], and [49], respectively. The quantity $F_{\gamma}$ is the
integrated gamma-ray flux between 0.1 and 100 TeV, adopting the best-fit power
law parameters as reported in Ref. [12]. In the rightmost column, we report
“Yes” if the ATNF catalog reports a detection of emission at any of 0.4, 1.2,
or 2 GHz.

Figure 1: The locations and spatial extent of the nine very high-energy HAWC
sources described in Ref. [9]. The black ‘$e$’ and the surrounding black
circle in each frame denote the best-fit location and 68% containment of the
source, as reported in the eHWC catalog (no circle is shown in the case of
eHWC J0534+220, as its morphology is consistent with that of a point source).
The symbol ‘2’ represents the best-fit center of the source as reported in the
previous 2HWC catalog. The symbol ‘P’ (and, in cases with multiple possible
associations, the symbol ‘S’) represents the location of the associated
pulsars (see Table 1). Also shown are the location and spatial extent of any
nearby TeV gamma-ray sources (red), as reported by HESS, VERITAS, and/or
MAGIC, as well as the GeV counterparts detected by Fermi (blue) [50, 51]. The
dotted blue circles represent the best-fit spatial extent of the GeV emission,
as reported in Ref. [52].

Figure 2: The gamma-ray spectra of the nine very high-energy HAWC sources
reported in the eHWC catalog [9] (black stars), as well as the best-fit power
law from the earlier 2HWC catalog [12] (gray dashed). Also shown are the
spectra of the potential counterparts measured by HESS, VERITAS, and Fermi
[53, 54, 50, 52, 51]. We note that the gamma-ray fluxes reported by different
telescopes (such as HAWC and HESS) can in some cases differ, likely as a
result of the larger (typically $\sim 2^{\circ}$) angular extension adopted in
the analysis of the HAWC Collaboration. Note that for eHWC sources for which
only an integrated flux above 56 TeV is reported, we have adopted a spectral
index of 2.5 in this figure.
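Two of the derived columns in Table 1 follow directly from the listed pulsar parameters and can be reproduced with their standard definitions: the spindown flux $\dot{E}/4\pi d^{2}$ and the characteristic age $t_{c}\equiv P/2\dot{P}$. The sketch below checks the Crab row; the unit-conversion constants are standard values, not quantities from the paper.

```python
import math

CM_PER_KPC = 3.086e21     # cm per kiloparsec
ERG_PER_TEV = 1.602       # 1 TeV in erg
SEC_PER_KYR = 3.156e10    # seconds per kiloyear

def spindown_flux_tev(edot_erg_s, d_kpc):
    """Edot / (4 pi d^2) in TeV/cm^2/s, for Edot in erg/s and d in kpc."""
    d_cm = d_kpc * CM_PER_KPC
    return edot_erg_s / (4.0 * math.pi * d_cm**2) / ERG_PER_TEV

def characteristic_age_kyr(p_s, pdot):
    """Characteristic age t_c = P / (2 Pdot), in kyr."""
    return p_s / (2.0 * pdot) / SEC_PER_KYR

# PSR J0534+2200 (Crab), values from Table 1:
# Edot = 4.5e38 erg/s at d = 2.00 kpc -> ~5.9e-7 TeV/cm^2/s,
# P = 0.033 s with Pdot = 42.1e-14   -> t_c ~ 1.2 kyr.
```

Dividing the tabulated $F_{\gamma}$ by the first quantity gives the 0.5%-20% efficiency range quoted in the text.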
In Fig. 1, we show the locations of the nine very-high-energy HAWC sources,
along with the positions of any pulsars and gamma-ray sources that are
potentially associated with them. In each frame, we show the location of the
HAWC source as reported in Ref. [9], as well as in the earlier 2HWC catalog
[12]. Following Ref. [12], we also show any nearby TeV gamma-ray sources, as
reported by HESS, VERITAS, and/or MAGIC. In addition, we show any GeV
counterparts that are associated with a TeV source according to Fermi’s 4FGL
catalog [50] (unless otherwise noted below), as well as those very recently
reported in Ref. [52]. (We disregard Fermi-LAT sources that are identified as
GeV pulsars, as the pulsed spectrum from these sources falls off rapidly above
10 GeV and does not appreciably contribute to the TeV emission.)
When there is no counterpart listed in the 4FGL catalog, we include any other
4FGL gamma-ray sources that lie within $0.5^{\circ}$ of a given eHWC source. Note that
the circles shown in this figure do not represent the uncertainties pertaining
to a given source’s location, but rather the best-fit angular extent as
reported by each collaboration. Also note that in the case of J0534+220 (the
Crab Nebula), the gamma-ray emission is observed to be point-like and
coincident with the radio pulsar PSR J0534+2200. While it is very likely that
most of these pulsars are authentically associated with the eHWC source in
question, it would not be surprising if one or two were the result of chance
alignment, and not physically related to the corresponding gamma-ray source.
We will now comment on these associations on a source-by-source basis:
* •
eHWC J0534+220 is associated with the Crab Nebula [12]. This source has been
detected over a wide range of wavelengths [42, 43, 44, 45], and is a point-
like and well-localized gamma-ray source, coincident with the radio pulsar PSR
J0534+2200.
* •
eHWC J1809-193 is associated with the unidentified TeV gamma-ray source HESS
J1809-193 [12], and with the extended GeV source 4FGL J1810.3-1925e [50].
* •
eHWC J1825-134 is associated with HESS J1825-137 [12], which is known to be a
spatially extended pulsar wind nebula. This source is also close to another
source (HESS J1826-130). However, this second HESS source is dimmer than the
2HWC source by nearly an order of magnitude (see Fig. 2), suggesting that the
association is unlikely. This HAWC source is also associated with the extended
Fermi source 4FGL J1824.5-1351e [50]. We additionally show the extent of this
source as reported in Ref. [52], using their “IEM-4FGL” interstellar emission
model and the 30-100 GeV energy bin.
* •
eHWC J1839-057 is associated with HESS J1841-055 [12], which is a complex
region containing two supernova remnants, three bright pulsars, and one X-ray
binary. While HESS J1837-069 is also considered as a potential counterpart to
the 2HWC source [12], this HESS source is located relatively far ($\sim$
$1^{\circ}$) from the best-fit position of the eHWC source. Since this
separation is larger than the HAWC angular resolution and the extension of the
eHWC source
($\sim$ $0.3^{\circ}$), this association seems unlikely. HESS J1841-055 is
also associated with 4FGL J1840.9-0532e [50].
* •
eHWC J1842-035 is associated with the unidentified source HESS J1843-033 [12].
Although the 2HWC paper also considers HESS J1844-030 as a potential
association [12], the flux of this HESS source is dimmer than HAWC’s
measurement by nearly an order of magnitude (see Fig. 2), making this
association unlikely. eHWC J1842-035 is located within $0.5^{\circ}$ of three
Fermi sources: 4FGL J1842.5-0359c, 4FGL J1842.2-0328, and 4FGL J1843.7-0326.
* •
eHWC J1850+001 is associated with the pulsar wind nebula HESS J1849-000 [12]
and is located within $0.5^{\circ}$ of the Fermi source 4FGL J1851.8-0007c.
* •
eHWC J1907+063 is associated with the pulsar wind nebula MGRO J1908+06 [12],
which is also known as HESS J1908+063, and is also associated with 4FGL
J1906.2+0631 [50]. We additionally show the extent of this source as reported
in Ref. [52], using their “IEM-4FGL” interstellar emission model. The flux
reported in Ref. [52] for this source is very different from that listed in
the 4FGL catalog, due to this source’s significant spatial extension. In
presenting our results, we will use the spectrum for this source as reported
in Ref. [52].
* •
eHWC J2019+368 is associated with the source VER J2019+368 [12], an extended
source that covers two pulsars and one star-forming region. We present the
spectra of this source as reported in Ref. [54]. There are no sources in the
4FGL catalog located near eHWC J2019+368, nor is there any extended emission
reported in Ref. [52].
* •
eHWC J2030+412 is associated with the pulsar wind nebula TeV J2031+4130 [12].
We present the spectra reported by the MAGIC Collaboration in Ref. [53]. The
flux measured by HAWC comes from a larger angular region and is much brighter
than that measured by VERITAS and MAGIC, suggesting a contribution from
additional components. Although there are no sources near eHWC J2030+412 in the
4FGL catalog, a potential GeV counterpart (identified using Fermi data) has
been reported in Ref. [51].
In Fig. 2, we show the spectra of these nine very-high-energy HAWC sources as
reported in the eHWC catalog [9], as well as the best-fit power law from the
earlier 2HWC catalog [12]. Also shown in these frames are the spectra of the
potential counterparts measured by HESS, VERITAS, and Fermi. In most cases,
these measurements lead to a consistent picture across a wide range of
energies. We note that measurements by different telescopes (such as HAWC and
HESS) can in some cases be different, due to the treatment of these sources’
spatial extension.
## 4 Pulsars and Inverse Compton Emission
In this section, we describe our calculation of the gamma-ray spectrum
produced through the inverse Compton scattering of a population of very high-
energy electrons and positrons, injected with a given spectrum and over a
given time profile.
Very high-energy electrons and positrons lose energy through a combination of
inverse Compton scattering and synchrotron processes, leading to the following
energy loss rate [55]:
$-\frac{dE_{e}}{dt}=\sum_{i}\frac{4}{3}\sigma_{T}u_{i}S_{i}(E_{e})\left(\frac{E_{e}}{m_{e}}\right)^{2}+\frac{4}{3}\sigma_{T}u_{\rm mag}\left(\frac{E_{e}}{m_{e}}\right)^{2}\equiv b(E_{e})\left(\frac{E_{e}}{{\rm TeV}}\right)^{2},$ (4.1)
where $\sigma_{T}$ is the Thomson cross section and
$b\approx 1.02\times 10^{-13}\,{\rm TeV/s}\times\left[\sum_{i}\frac{u_{i}}{{\rm eV/cm^{3}}}\,S_{i}(E_{e})+\frac{u_{\rm mag}}{{\rm eV/cm^{3}}}\right].$ (4.2)
The sum in this expression is carried out over the various components of the
radiation backgrounds, consisting of the cosmic microwave background (CMB),
infrared emission (IR), and starlight (star). We take each of these radiation
components to have a blackbody spectrum and adopt the following values for
their energy densities and temperatures: $u_{\rm CMB}=0.260$ eV/cm3, $u_{\rm
IR}=0.30$ eV/cm3, $u_{\rm star}=0.3$ eV/cm3, $T_{\rm CMB}=2.7$ K, $T_{\rm
IR}=20$ K, and $T_{\rm star}=5000$ K [56]. For the energy density of the
magnetic field, we adopt as our default value $u_{\rm mag}=0.224$ eV/cm3,
corresponding to $B\simeq 3\,\mu$G. At relatively low electron energies, these
parameters correspond to a value of $b\simeq 1.2\times 10^{-13}$ TeV/s
($S_{i}\approx 1$). At very high energies ($E_{e}\gtrsim m^{2}_{e}/2T$),
however, the inverse Compton scattering will be well outside of the Thomson
regime, and Klein-Nishina suppression will play an important role. For our
calculations, we utilize the full Klein-Nishina cross-section formula, as
calculated in Ref. [55] and as implemented in the publicly available code
naima [57] (see also Refs. [58, 59]). For illustrative purposes, however, the
key features of Klein-Nishina suppression can be seen more clearly using the
following approximate expression [60]:
$S_{i}(E_{e})\approx\frac{45\,m^{2}_{e}/64\pi^{2}T^{2}_{i}}{(45\,m^{2}_{e}/64\pi^{2}T^{2}_{i})+(E^{2}_{e}/m^{2}_{e})}.$
(4.3)
For electrons of a given energy, the effects of Klein-Nishina suppression are
most pronounced for the highest-energy target photons. For the very-high-
energy ($E_{e}\gtrsim{\rm TeV}$) electrons/positrons of most interest to
this study, energy losses from inverse Compton scattering are dominated by
scattering with the IR background, as well as the CMB. At energies greater
than $\sim$ $50\,{\rm TeV}$, the CMB alone dominates this process.
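As a concrete numerical sketch of Eqs. (4.1)-(4.3) (our own illustration, not the naima-based pipeline used for the actual results), the loss coefficient can be evaluated with the field parameters quoted above:

```python
import math

M_E = 0.511e-6               # electron mass [TeV]
K_B = 8.617333e-5 * 1e-12    # Boltzmann constant [TeV/K]

# Radiation fields: energy density [eV/cm^3] and temperature [K], as in the text
FIELDS = {"CMB": (0.260, 2.7), "IR": (0.30, 20.0), "star": (0.3, 5000.0)}
U_MAG = 0.224                # magnetic energy density [eV/cm^3] (B ~ 3 muG)

def suppression(E_e, T):
    """Approximate Klein-Nishina suppression factor S_i(E_e) of Eq. (4.3)."""
    crit = 45.0 * M_E**2 / (64.0 * math.pi**2 * (K_B * T)**2)
    return crit / (crit + E_e**2 / M_E**2)

def b_of_E(E_e):
    """Loss coefficient b(E_e) [TeV/s] of Eq. (4.2); E_e in TeV."""
    total = sum(u * suppression(E_e, T) for u, T in FIELDS.values()) + U_MAG
    return 1.02e-13 * total

def loss_rate(E_e):
    """-dE_e/dt [TeV/s] of Eq. (4.1); E_e in TeV."""
    return b_of_E(E_e) * E_e**2
```

At $E_{e}=10$ GeV this gives $b\approx 1.1\times 10^{-13}$ TeV/s (all $S_{i}\approx 1$), consistent with the low-energy value quoted above; by 100 TeV, suppression of the IR and starlight terms has roughly halved it, leaving the CMB as the dominant inverse Compton target.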
Over a period of time in which an electron or positron of energy $E_{e}$ loses
a small quantity of energy, $\Delta E_{e}$, that particle will generate the
following spectrum of inverse Compton emission:
$\frac{dN_{\gamma}}{dE_{\gamma}}(E_{\gamma},E_{e})=A(E_{e},\Delta E_{e})\,f_{\rm ICS}(E_{e})\,l_{e}\int\frac{dn}{d\epsilon}(\epsilon)\,\frac{d\sigma_{ICS}}{dE_{\gamma}}(\epsilon,E_{\gamma},E_{e})\,d\epsilon,$ (4.4)
where $dn/d\epsilon$ is the spectrum of target radiation, which we take to be
the sum of the blackbody distributions described above. The quantity $A$ is set
by the requirement that $\Delta E_{e}=\int
dE_{\gamma}\,E_{\gamma}\,dN_{\gamma}/dE_{\gamma}$, and $f_{\rm ICS}(E_{e})$ is
the fraction of the electron or positron’s energy losses that are from inverse
Compton scattering (as opposed to synchrotron). The differential cross section
for inverse Compton scattering is given by [61]:
$\frac{d\sigma_{ICS}}{dE_{\gamma}}(\epsilon,E_{\gamma},E_{e})=\frac{3\sigma_{T}m_{e}^{2}}{4\epsilon E_{e}^{2}}\left[1+\frac{z^{2}}{2(1-z)}+\frac{z}{\beta(1-z)}-\frac{2z^{2}}{\beta^{2}(1-z)}-\frac{z^{3}}{2\beta(1-z)^{2}}-\frac{2z}{\beta(1-z)}\ln\left(\frac{\beta(1-z)}{z}\right)\right],$ (4.5)
where $z\equiv E_{\gamma}/E_{e}$ and $\beta\equiv 4\epsilon E_{e}/m_{e}^{2}$.
At energies within the range measured by HAWC, inverse Compton scattering
generally yields photons with energies not very far below that of the incident
electrons and positrons, $E_{\gamma}\sim E_{e}$.
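Equation (4.5) translates directly into code. The sketch below works in TeV units and returns zero outside the kinematic range $0<z\le\beta/(1+\beta)$, a standard bound we impose by hand; the target photon energy in the usage is an arbitrary CMB-like value.

```python
import math

M_E = 0.511e-6        # electron mass [TeV]
SIGMA_T = 6.6524e-25  # Thomson cross section [cm^2]

def dsigma_ics(eps, E_gamma, E_e):
    """Differential inverse Compton cross section of Eq. (4.5) [cm^2/TeV].
    eps: target photon energy, E_gamma: scattered photon energy,
    E_e: electron energy (all in TeV)."""
    z = E_gamma / E_e
    beta = 4.0 * eps * E_e / M_E**2
    if not (0.0 < z <= beta / (1.0 + beta)):   # kinematic limit
        return 0.0
    bracket = (1.0
               + z**2 / (2.0 * (1.0 - z))
               + z / (beta * (1.0 - z))
               - 2.0 * z**2 / (beta**2 * (1.0 - z))
               - z**3 / (2.0 * beta * (1.0 - z)**2)
               - (2.0 * z / (beta * (1.0 - z))) * math.log(beta * (1.0 - z) / z))
    return 3.0 * SIGMA_T * M_E**2 / (4.0 * eps * E_e**2) * bracket
```

For a 100 TeV electron scattering a CMB-like photon ($\epsilon\sim 0.6$ meV), $\beta\approx 0.92$, so the scattered photon can carry up to roughly half the electron energy, consistent with the $E_{\gamma}\sim E_{e}$ behavior noted above.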
As time passes, pulsars slow down and lose rotational kinetic energy,
transferring much of this energy into the acceleration of particles which
produce the radio, gamma-ray, and other non-thermal emission that is observed
from these objects. From the measured quantities $P$ and $\dot{P}$, we can
define the pulsar’s characteristic age, $t_{c}$:
$t_{c}\equiv\frac{P}{2\dot{P}}=\frac{n-1}{2}(t_{\rm age}+\tau),$ (4.6)
where $n$ is the braking index, $t_{\rm age}$ is the age of the pulsar, and
$\tau$ is its spindown timescale. From the spindown equations, we can write
the spindown timescale as
$\tau=\frac{2t_{c}}{n-1}\left(\frac{P_{0}}{P}\right)^{n-1},$ (4.7)
where $P_{0}$ is the initial period of the pulsar. For a given set of $P_{0}$
and $n$, these equations determine $\tau$ and $t_{\rm age}$. The spindown
power of a pulsar evolves as follows:
$\dot{E}(t)=4\pi^{2}I\frac{\dot{P}}{P^{3}}=\dot{E}_{0}\left(1+\frac{t}{\tau}\right)^{-\frac{n+1}{n-1}},$
(4.8)
where $\dot{E}_{0}$ is the initial spindown power, given by
$\dot{E}_{0}=4\pi^{2}I\frac{\dot{P}}{P^{3}}\left(1+\frac{t_{\rm
age}}{\tau}\right)^{\frac{n+1}{n-1}}.$ (4.9)
These equations leave us with $P_{0}$, $n$, and $I$ as free parameters. Unless
otherwise stated, we will adopt $I=10^{45}$ g cm2 and $n=3$ throughout this
study.
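The spindown relations (4.6)-(4.9) can be sketched as follows; the $P$ and $\dot{P}$ values in the usage are illustrative, not taken from Table 1.

```python
import math

def spindown(P, Pdot, P0, n=3.0, I=1e45):
    """Spindown quantities from Eqs. (4.6)-(4.9).
    P, P0 [s]; Pdot [s/s]; I [g cm^2]. Returns times in s, powers in erg/s."""
    t_c = P / (2.0 * Pdot)                              # characteristic age, Eq. (4.6)
    tau = 2.0 * t_c / (n - 1.0) * (P0 / P)**(n - 1.0)   # spindown timescale, Eq. (4.7)
    t_age = 2.0 * t_c / (n - 1.0) - tau                 # true age, from Eq. (4.6)
    Edot_now = 4.0 * math.pi**2 * I * Pdot / P**3       # current spindown power
    Edot_0 = Edot_now * (1.0 + t_age / tau)**((n + 1.0) / (n - 1.0))  # Eq. (4.9)
    return t_c, tau, t_age, Edot_now, Edot_0

def Edot(t, Edot_0, tau, n=3.0):
    """Spindown power at age t, Eq. (4.8)."""
    return Edot_0 * (1.0 + t / tau)**(-(n + 1.0) / (n - 1.0))
```

For example, $P=0.1$ s, $\dot{P}=10^{-14}$, $P_{0}=30$ ms and $n=3$ give $t_{c}\approx 158$ kyr but a true age $t_{\rm age}\approx 144$ kyr, illustrating how a non-negligible $P_{0}$ makes the characteristic age overestimate the true age.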
## 5 Results
Figure 3: The spectrum of inverse Compton emission predicted from the pulsars
PSR J1826-1256, PSR J1826-1334, PSR J1907+0602, and PSR J2021+3651, compared
to the measured gamma-ray spectra from the associated HAWC sources (see Fig.
2). We have parameterized the injected electron/positron spectrum as
$dN_{e}/dE_{e}\propto E_{e}^{-\gamma}\exp(-E_{e}/E_{\rm cut})$ and show
results for three values of $E_{\rm cut}$. We adopt a spectral index of
$\gamma=2.1$ in this figure, except for in the bottom left frame, where we
have used $\gamma=2.0$. For each pulsar, we have selected values for the
electron/positron efficiency ($\eta_{e}$) and initial period ($P_{0}$) which
lead to reasonable agreement with the observed spectrum and intensity of each
source. We also show the value of each pulsar’s age, as calculated from the
value of $P_{0}$. We emphasize that the cutoff observed in these spectra above
$\sim$ $1-10\,{\rm TeV}$ is an unavoidable consequence of the model, and
occurs when the age of a pulsar exceeds the timescale for electron/positron
energy losses.
In Fig. 3, we show the spectra of inverse Compton emission predicted from the
pulsars PSR J1826-1256, PSR J1826-1334, PSR J1907+0602, and PSR J2021+3651,
comparing our results with the gamma-ray observations of each associated HAWC
source. In each case, we have parameterized the injected electron/positron
spectrum as a power-law with an exponential cutoff, $dN_{e}/dE_{e}\propto
E_{e}^{-\gamma}\,\exp(-E_{e}/E_{\rm cut})$. Along with $\gamma$ and $E_{\rm
cut}$, we treat as free parameters each pulsar’s initial period, and the
fraction of its spindown power that goes into the production of electrons and
positrons integrated above 10 GeV, $\eta_{e}$. For each pulsar’s distance,
period, and rate of change of its period, we adopt the values reported in the
Australia Telescope National Facility (ATNF) pulsar catalog [41] (as shown in
Table 1). We adopt $\gamma=2.0$ in the bottom left frame of Fig. 3, and 2.1 in
the other three frames. In each frame, we show results for three choices of
$E_{\rm cut}$, and have selected values of $\eta_{e}$ and $P_{0}$ (obtaining
the corresponding value of $t_{\rm age}$) which, when possible, lead to
reasonable agreement with the observed spectral shape and intensity of each
source.
Figure 4: As in Fig. 3, but for selected parameter variations, and focusing on
the case of eHWC J1825-134/PSR J1826-1334. In the upper left frame, we
consider three different combinations for the values of the initial period and
pulsar age, while in the lower left frame we show results for three choices of
the energy density of the magnetic field. In the upper right frame, we
consider scenarios in which the electrons and positrons are able to escape the
emitting region on a timescale given by $t_{\rm esc}=t_{\rm
diff,K}\times(E_{e}/{\rm TeV})^{-1/3}$, corresponding to the energy dependence
predicted for Kolmogorov diffusion. In the lower right frame, we show results
for four choices of the pulsar’s braking index, $n$. In each panel, unless
stated otherwise, we have adopted $P_{0}=30\,{\rm ms}$ ($t_{\rm age}=19\,{\rm
kyr}$), and $\eta_{e}=0.7$.
As seen in Fig. 3, the gamma-ray spectrum that is produced through inverse
Compton scattering is automatically suppressed at energies above $\sim$
$10\,{\rm TeV}$, for which the age of these pulsars exceeds the timescale for
electron/positron energy losses, $t_{\rm age}\gtrsim(bE_{e})^{-1}$.
Around this energy, the spectrum of the ambient electrons and positrons
transitions from the injected index ($\gamma$) to a significantly softened
index ($\gamma+1$). Note that this suppression occurs even if the injected
spectrum does not have a cutoff in the relevant energy range ($E_{\rm cut}\gg
100\,{\rm TeV}$). Klein-Nishina effects also influence the exact shape of the
high-energy spectrum. In these results, there is no indication of a cutoff in
the injected spectrum of electrons and positrons, suggesting that these
sources accelerate such particles to at least several hundred TeV. At lower
energies, $E_{e}\ll(b\,t_{\rm age})^{-1}\sim 20-50\,{\rm TeV}$, the electrons
and positrons that have been injected from the pulsar over its history have
not lost much of their initial energy. In this limit, the normalization of the
gamma-ray spectrum is set by the total integrated energy that has been
injected from the pulsar in the form of electrons and positrons, which is
proportional to $\eta_{e}\,t_{\rm age}/P^{2}_{0}$.
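In the Thomson limit, the break energy follows from setting the cooling time $t_{\rm cool}=E_{e}/[b\,(E_{e}/{\rm TeV})^{2}]$ equal to the pulsar age. The rough estimate below uses the low-energy value of $b$; Klein-Nishina corrections reduce $b$ at high energies and push the break somewhat higher.

```python
YEAR = 3.156e7   # seconds per year

def cooling_time(E_e, b=1.2e-13):
    """Cooling time [s] for losses dE/dt = b (E/TeV)^2; E_e in TeV, b in TeV/s."""
    return 1.0 / (b * E_e)

def break_energy(t_age_yr, b=1.2e-13):
    """Energy [TeV] at which the cooling time equals the pulsar age."""
    return 1.0 / (b * t_age_yr * YEAR)
```

For the 19 kyr age adopted above for PSR J1826-1334, this gives a break near 14 TeV, in line with the observed $\sim$1-10 TeV feature; TeV-scale electrons cool on $\sim$$10^{5}$ yr timescales, matching the numbers quoted in the text.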
From the lower frames of Fig. 3, we see that the pulsars PSR J1907+0602 and
PSR J2021+3651 can produce the emission observed by HAWC and Fermi, in each
case requiring an efficiency similar to that of Geminga or Monogem,
$\eta_{e}\sim 0.1$. In the upper frames, we see that either PSR J1826-1256 or
PSR J1826-1334 (or some combination thereof) could be responsible for the
gamma-ray emission attributed to eHWC J1825-134, although the latter would
require a high value of $\eta_{e}\sim 0.7$, and neither of these pulsars
provides a particularly good fit in the $\sim 1-10$ TeV range.
In Fig. 4, we consider some variations regarding our parameter choices,
focusing on the case of eHWC J1825-134 and its corresponding PSR J1826-1334.
In the upper left frame, we consider three different combinations for the
values of the initial period and pulsar age. As described above, this does not
impact the spectrum at high energies, where only the current power of the
injected electrons/positrons determines the normalization. At lower energies,
however, the normalization scales as $\eta_{e}t_{\rm age}/P^{2}_{0}$,
corresponding to the total energy injected into high-energy electrons and
positrons over the life of the pulsar. In the lower-left frame, we consider
variations to the energy density of the magnetic field, showing results for
$u_{\rm mag}=0.224$ eV/cm3 (our default value), 0.5 eV/cm3, 5.0 eV/cm3 and 20
eV/cm3, corresponding to $B=3.0$ $\mu$G, 4.5 $\mu$G, 14.2 $\mu$G and 28.3
$\mu$G, respectively. By increasing the energy density of the magnetic field,
a larger fraction of the energy in electrons and positrons is lost to
synchrotron, suppressing the gamma-ray emission that is produced through
inverse Compton scattering.
Thus far in our calculations, we have assumed that the electrons and positrons
remain within the TeV halo or pulsar wind nebula, and do not escape via
diffusion. This corresponds to one or both of the following conditions being
satisfied: $t_{\rm esc}\gg t_{\rm age}$ or $t_{\rm esc}\gg(bE_{e})^{-1}$,
where $t_{\rm esc}$ is the timescale for particles to escape the TeV halo via
diffusion. In the upper right frame of Fig. 4, we consider a class of
scenarios in which the electrons/positrons instead escape on a timescale given
by $t_{\rm esc}=t_{\rm diff,K}\times(E_{e}/{\rm TeV})^{-1/3}$, corresponding
to the energy dependence predicted for Kolmogorov diffusion. More
quantitatively, we reduce the number of electrons and positrons within the
emission region by a factor of $e^{-\delta t/t_{\rm esc}}$ in each timestep of
length $\delta t$. The impact of diffusion is significant only when $t_{\rm
esc}$ is smaller than both the age of the pulsar (which, in this case, is 19
kyr), and the timescale for energy losses (which is $\sim$ $10^{3}\,{\rm yr}$
at the highest energies shown, and $\sim$ $10^{5}\,{\rm yr}$ at TeV-scale
energies). This could, in principle, significantly suppress the predicted
gamma-ray emission, but only in scenarios with very rapid diffusion (much
faster than favored by the spectra of Geminga and Monogem [15]). We do not
expect diffusion to play an important role in most of the sources under
consideration in this study.
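The per-timestep escape suppression described above can be sketched as follows; for particles of fixed energy it reproduces $e^{-t/t_{\rm esc}}$ exactly, and the normalization `t_diff_K` (the escape time at 1 TeV) is a free parameter of the scenario.

```python
import math

def surviving_fraction(E_e, t_total, t_diff_K, n_steps=1000):
    """Fraction of electrons of energy E_e [TeV] remaining in the emission
    region after a time t_total, for t_esc = t_diff_K * (E_e/TeV)^(-1/3)
    (Kolmogorov scaling). Each step of length dt applies exp(-dt/t_esc)."""
    t_esc = t_diff_K * E_e**(-1.0 / 3.0)
    dt = t_total / n_steps
    frac = 1.0
    for _ in range(n_steps):
        frac *= math.exp(-dt / t_esc)
    return frac
```

As noted above, this suppression only matters when `t_esc` is shorter than both the pulsar age and the energy-loss timescale.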
Lastly, in the lower right frame of Fig. 4, we show results for four choices
of the pulsar’s braking index, $n$. The spectrum of this particular source is
somewhat better fit for lower values of the braking index.
Figure 5: The fraction of each pulsar’s spindown power that must go into the
production of electrons and positrons, $\eta_{e}$, in order to explain the
intensity of the gamma-ray emission observed by HAWC for each of the pulsars
potentially associated with a source in the eHWC catalog. These results are
shown for two choices of the injected spectral index, and as a function of
either the energy density of the magnetic field, or the timescale for
diffusion. In each case, we have adopted $P_{0}=30$ ms, and $E_{\rm cut}=1$
PeV. For reference, we show as vertical lines the values of these quantities
as measured in the local interstellar medium, assuming a 10 pc emitting region
for our calculation of the diffusion timescale. From these results, it is
clear that the intensity of the observed gamma-ray emission can be produced
for reasonable efficiencies, $\eta_{e}\sim\mathcal{O}(0.1)$, so long as 1) the
magnetic field is not much stronger than in the local interstellar medium
($u_{\rm mag}\lesssim 1$ eV/cm3), 2) diffusion is highly suppressed in the
region of inverse Compton scattering ($t_{\rm diff}\gtrsim 10$ kyr), and 3)
the injected spectral index is somewhat hard ($\gamma\sim 2$). These
characteristics are consistent with those observed from the Geminga and
Monogem TeV halos.
In Fig. 5, we show the values of $\eta_{e}$ that are required to explain the
intensity of the gamma-ray emission observed by HAWC from each of the sources
in the eHWC catalog, for each of the potentially associated pulsars listed in
Table 1 (with the exception of the Crab Pulsar, which requires a significantly
smaller value of $\eta_{e}$ for a given value of $u_{\rm mag}$). We show
results for two choices of the injected spectral index ($\gamma=2.0$, 2.3),
and present these results as a function of either the energy density of the
magnetic field, or the timescale for diffusion. In each case, we have adopted
$P_{0}=30$ ms, and $E_{\rm cut}=1$ PeV. For reference, we show as vertical
lines the values of these quantities as measured in the local interstellar
medium.
From Fig. 5, it is clear that in the case of $\gamma=2$, the intensity of the
observed gamma-ray emission can be produced for reasonable efficiencies,
$\eta_{e}\sim\mathcal{O}(0.1)$, so long as 1) the magnetic field is not much
stronger than in the local interstellar medium ($u_{\rm mag}\lesssim 1$
eV/cm3), and 2) diffusion is highly suppressed in the region of inverse Compton
scattering ($t_{\rm diff}\gtrsim 10$ kyr), as is known to be the case for both the
Geminga and Monogem TeV halos. Comparing this to the results found in the
$\gamma=2.3$ case, it is clear that somewhat hard spectral indices are also
required to produce the observed emission, again consistent with that observed
from Geminga and Monogem. Note that in calculating the values of $\eta_{e}$,
we have adopted a power-law injected spectrum of electrons and positrons,
integrated to a minimum energy of 10 GeV. Multiwavelength studies of pulsar
wind nebulae often require the electrons/positrons to be injected with a
broken power-law spectrum, with $E_{\rm br}\sim$ 0.1 TeV (see, for example,
Ref. [62]). Adopting such a function can reduce the required efficiency by a
factor of approximately $(E_{\rm br}/10\,{\rm GeV})^{\gamma-2}$.
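This reduction factor is straightforward to evaluate; the helper name and the 10 GeV default below are ours, following the text.

```python
def efficiency_reduction(E_br_GeV, gamma, E_min_GeV=10.0):
    """Approximate factor by which the required eta_e drops when the injected
    spectrum flattens below a break at E_br rather than extending as a single
    power law down to E_min."""
    return (E_br_GeV / E_min_GeV)**(gamma - 2.0)
```

For $E_{\rm br}=0.1$ TeV and $\gamma=2.3$, the required efficiency drops by a factor of about 2; for $\gamma=2$ the minimum energy is irrelevant and the factor is 1.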
## 6 Comparison of Hadronic and Leptonic Models
Figure 6: A comparison of the gamma-ray spectra observed from four of the
sources in the eHWC catalog to that predicted from both leptonic and hadronic
models. For three of these four sources, the hadronic emission that is
predicted at GeV-scale energies significantly exceeds that observed by Fermi.
We have adopted a spectrum of protons that is described by a power-law with an
exponential cutoff, $dN/dE\propto E^{-p}\exp(-E/E_{\rm cut})$, with $E_{\rm
cut}=1$ PeV and extending to a minimum energy of 10 GeV. In each frame, we
have adopted a value of $p$ which best accommodates the spectra reported by
HAWC.
The spectral feature that is observed from these sources around $\sim$1-10 TeV
is a natural consequence of our leptonic model, and occurs at the energy where
the timescale for energy losses matches the age of the pulsar. In hadronic
models, no such feature is expected. Furthermore, hadronic models that can
explain the spectrum observed from these sources at very high energies
generally predict far more emission than is observed in the GeV range [50, 52,
51]. This disfavors models in which these sources produce their gamma-ray
emission primarily through hadronic processes.
In Fig. 6, we compare the gamma-ray spectra observed from four of the sources
in the eHWC catalog to that predicted in both leptonic and hadronic models. As
we did to calculate the emission from inverse Compton scattering, we made use
of the publicly available code naima to determine the spectra of hadronic
gamma-ray emission [57] (see also Ref. [63]). For three of these four
sources, the hadronic emission that is predicted at GeV-scale energies
significantly exceeds that observed by Fermi. In this figure, we have adopted
a spectrum of protons that is described by a power-law with an exponential
cutoff, $dN/dE\propto E^{-p}\exp(-E/E_{\rm cut})$, extending from a minimum
energy of 10 GeV, and with $E_{\rm cut}=1$ PeV. In each frame, we have adopted
values of $p$ which best accommodate the spectrum reported by HAWC ($p=2.65$
in the upper left, 2.15 in the upper right, 2.2 in the bottom left, and 2.0 in
the bottom right). In the case of the Crab Nebula (upper left), we adopt a
braking index of $n=2.5$ in the leptonic model, and adopt a large value for
the strength of the magnetic field, $B=90~{}\mu$G (in order to be compatible
with the emission observed in the $\sim\mathcal{O}(0.1\,{\rm GeV})$ range,
which is attributed to synchrotron). For the Crab Nebula, we have also included
synchrotron photons as targets of inverse Compton scattering, within a region
taken to be 2 parsecs in radius. For PSR J1826-1334 we have
adopted $n=1.5$, while we have retained our default choice of $n=3$ for the
three remaining pulsars. For each curve, we adopt a normalization to match the
HAWC data.
Since each eHWC source is located in a region containing young pulsars, and
hence recent star formation and supernova explosions, there may also be
contributions from hadronic processes to the gamma-ray flux. Comparing our
model curves with the GeV data indicates that a hadronic component could also
produce a significant fraction of the observed flux for eHWC J2019+368, while
likely contributing at most at the $\sim$10$\%$ level for the other sources. We note that our
hadronic models assume that protons are injected with a single power law. If
we assume a broken power law to reduce the energy injected into GeV-scale
protons, more contributions from hadronic processes could be allowed without
violating the Fermi data. Such a hard spectrum could be realized in a scenario
where very-high-energy protons that escape early from the SNR travel into
massive gas clouds, producing gamma rays there, while lower-energy protons
remain confined in the accelerator (e.g., [64]). However, the eHWC sources
shown in Figure 6, except for eHWC J1825-134, have not been reported to have a
clear spatial correlation with gas, which challenges this scenario. For eHWC
J1825-134, a mixed hadronic/leptonic contribution is a plausible scenario [65]
(see also Sec. 7).
## 7 Discussion and Summary
The nature of the highest energy HAWC sources is a subject of considerable
interest, which has recently been discussed by a number of authors and
collaborations. In particular, the HAWC Collaboration has used multiwavelength
data to argue that the gamma-ray emission from eHWC J2019+368 is leptonic in
origin [66], in agreement with our assessment of this source. More recently,
HAWC has performed a stacking analysis of ten pulsars that are not associated
with any eHWC sources, identifying evidence of gamma-ray emission at energies
above 56 TeV [67]. More broadly speaking, they conclude from this information
that high-spindown-power pulsars universally produce extremely high-energy
photons. In Ref. [65], members of the HAWC Collaboration argued that eHWC
J1825-134 can be separated into four components: diffuse Galactic emission,
HAWC J1826-128 (the counterpart to HESS J1826-130), HAWC J1825-138 (the
counterpart to HESS J1825-137), and the newly discovered source HAWC
J1825-134. The spectrum of the emission associated with HAWC J1825-134, and
its spatial correlation with dense gas, favors a hadronic interpretation for
this emission. In contrast, the other two HAWC sources that contribute to eHWC
J1825-134 are likely leptonic in origin.
Beyond the HAWC Collaboration, Di Mauro et al. [52] have shown that the
spectra of three eHWC sources (eHWC J1825-134, J1907+063, and J2019+368) can
be well fit by leptonic models, in concordance with our conclusions (see also
Ref. [68]). On similar grounds, Fang et al. [69] have argued that eHWC
J2019+368 is likely leptonic in nature. In contrast, the authors of Ref. [70]
have claimed that HESS J1809-193 (associated with eHWC J1809-193) is likely to
be a hadronic source.
In this paper, we have studied each of the nine gamma-ray sources contained in
the eHWC catalog, expanding on the previous work described above, and
identifying significant evidence that their emission is likely leptonic in
origin. In particular, the gamma-ray emission from these sources can be
straightforwardly accommodated within a model in which $\sim\mathcal{O}(10\%)$
of the host pulsar’s spindown power is transferred into the acceleration of
electrons and positrons with a simple power-law spectrum. The spectral break
that is observed among these sources is an unavoidable consequence of this
model.
In contrast, the spectral feature that is observed from these sources is not
expected in hadronic scenarios, which also predict far more emission at GeV-
scale energies than is observed. For the three eHWC sources with detailed
spectral information, we can rule out scenarios in which a significant
fraction of their observed emission is hadronic in origin. While it remains
possible that one or more of the other six eHWC sources could produce hadronic
emission (see, for example, Ref. [70]), we stress that nothing in our analysis
differentiates any of these sources from those that are clearly leptonic in
nature. This disfavors an interpretation of these sources as the long-sought-
after Galactic PeVatrons.
Furthermore, all nine sources in the eHWC catalog can be powered by the
rotational kinetic energy of their host pulsar, requiring efficiencies that
are similar to those of the Geminga and Monogem TeV halos. Also like Geminga
and Monogem, diffusion appears to be suppressed within the emission regions of
these sources, and electrons and positrons are injected into these regions
with a relatively hard spectral index, $\gamma\sim 2$.
In light of the considerations described in the paragraphs above, we conclude
that HAWC’s highest energy sources are likely to be TeV halos or pulsar wind
nebulae, which produce their gamma-ray emission through inverse Compton
scattering, and which are powered by the rotational kinetic energy of their
host pulsar. We find no evidence that this class of sources produces
significant gamma-ray emission through hadronic processes, or accelerates
protons to PeV-scale energies.
Acknowledgments. We would like to thank Mattia Di Mauro for providing us with
the data from Ref. [52]. TS is supported by a Research Fellowship of Japan
Society for the Promotion of Science (JSPS) and by JSPS KAKENHI Grant No. JP
18J20943. TL is partially supported by the Swedish Research Council under
contract 2019-05135, the Swedish National Space Agency under contract 117/19
and the European Research Council under grant 742104. DH is supported by the
Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the
U.S. Department of Energy, Office of High Energy Physics. In this work, we
have made use of naima [57], gammapy (https://www.gammapy.org) [71, 72],
astropy (http://www.astropy.org) [73, 74], matplotlib [75], numpy [76], and
scipy [77].
## References
* [1] M. Tavani et al., _Direct Evidence for Hadronic Cosmic-Ray Acceleration in the Supernova Remnant IC 443_ , _The Astrophysical Journal Letters_ 710 (2010) L151 [1001.5150].
* [2] Fermi-LAT collaboration, _Detection of the Characteristic Pion-Decay Signature in Supernova Remnants_ , _Science_ 339 (2013) 807 [1302.3307].
* [3] A. Bell, K. Schure, B. Reville and G. Giacinti, _Cosmic ray acceleration and escape from supernova remnants_ , _Mon. Not. Roy. Astron. Soc._ 431 (2013) 415 [1301.7264].
* [4] S. Gabici, _Gamma-Ray Emission from Supernova Remnants and Surrounding Molecular Clouds_ , _AIP Conf. Proc._ 1792 (2017) 020002 [1610.06234].
* [5] Y. Fujita, K. Murase and S.S. Kimura, _Sagittarius A* as an Origin of the Galactic PeV Cosmic Rays?_ , _JCAP_ 04 (2017) 037 [1604.00003].
* [6] Y.-Q. Guo, Z. Tian, Z. Wang, H.-J. Li and T.-L. Chen, _The Galactic Center: A Petaelectronvolt Cosmic-ray Acceleration Factory_ , _Astrophys. J._ 836 (2017) 233 [1604.08301].
* [7] H.E.S.S. collaboration, _Acceleration of petaelectronvolt protons in the Galactic Centre_ , _Nature_ 531 (2016) 476 [1603.07730].
* [8] F. Aharonian, R. Yang and E. de Oña Wilhelmi, _Massive Stars as Major Factories of Galactic Cosmic Rays_ , _Nature Astron._ 3 (2019) 561 [1804.02331].
* [9] HAWC collaboration, _Multiple Galactic Sources with Emission Above 56 TeV Detected by HAWC_ , _Phys. Rev. Lett._ 124 (2020) 021102 [1909.08609].
* [10] M. Amenomori et al., _First Detection of Photons with Energy beyond 100 TeV from an Astrophysical Source_ , _Phys. Rev. Lett._ 123 (2019) 051101 [1906.05521].
* [11] A. Albert et al., _3HWC: The Third HAWC Catalog of Very-High-Energy Gamma-ray Sources_ , 2007.08582.
* [12] A. Abeysekara et al., _The 2HWC HAWC Observatory Gamma Ray Catalog_ , _Astrophys. J._ 843 (2017) 40 [1702.02992].
* [13] HAWC collaboration, _Extended gamma-ray sources around pulsars constrain the origin of the positron flux at Earth_ , _Science_ 358 (2017) 911 [1711.06223].
* [14] A.A. Abdo et al., _Milagro Observations of Multi-TeV Emission from Galactic Sources in the Fermi Bright Source List_ , _The Astrophysical Journal Letters_ 700 (2009) L127 [0904.1018].
* [15] D. Hooper, I. Cholis, T. Linden and K. Fang, _HAWC Observations Strongly Favor Pulsar Interpretations of the Cosmic-Ray Positron Excess_ , _Phys. Rev. D_ 96 (2017) 103013 [1702.08436].
* [16] D. Hooper and T. Linden, _Measuring the Local Diffusion Coefficient with H.E.S.S. Observations of Very High-Energy Electrons_ , _Phys. Rev. D_ 98 (2018) 083009 [1711.07482].
* [17] R. López-Coto and G. Giacinti, _Constraining the properties of the magnetic turbulence in the Geminga region using HAWC $\gamma$-ray data_, _Mon. Not. Roy. Astron. Soc._ 479 (2018) 4526 [1712.04373].
* [18] G. Johannesson, T.A. Porter and I.V. Moskalenko, _Cosmic-Ray Propagation in Light of the Recent Observation of Geminga_ , _Astrophys. J._ 879 (2019) 91 [1903.05509].
* [19] M. Di Mauro, S. Manconi and F. Donato, _Evidences of low-diffusion bubbles around Galactic pulsars_ , _Phys. Rev. D_ 101 (2020) 103035 [1908.03216].
* [20] R.-Y. Liu, H. Yan and H. Zhang, _Understanding the Multiwavelength Observation of Geminga’s Tev Halo: The Role of Anisotropic Diffusion of Particles_ , _Phys. Rev. Lett._ 123 (2019) 221103 [1904.11536].
* [21] C. Evoli, T. Linden and G. Morlino, _Self-generated cosmic-ray confinement in TeV halos: Implications for TeV $\gamma$-ray emission and the positron excess_, _Phys. Rev. D_ 98 (2018) 063017 [1807.09263].
* [22] K. Fang, X.-J. Bi and P.-F. Yin, _Possible origin of the slow-diffusion region around Geminga_ , _Mon. Not. Roy. Astron. Soc._ 488 (2019) 4074 [1903.06421].
* [23] HESS collaboration, _The H.E.S.S. Galactic plane survey_ , _Astron. Astrophys._ 612 (2018) A1 [1804.02432].
* [24] HESS collaboration, _The population of TeV pulsar wind nebulae in the H.E.S.S. Galactic Plane Survey_ , _Astron. Astrophys._ 612 (2018) A2 [1702.08280].
* [25] T. Linden, K. Auchettl, J. Bramante, I. Cholis, K. Fang, D. Hooper et al., _Using HAWC to discover invisible pulsars_ , _Phys. Rev. D_ 96 (2017) 103016 [1703.09704].
* [26] T. Sudoh, T. Linden and J.F. Beacom, _TeV Halos are Everywhere: Prospects for New Discoveries_ , _Phys. Rev. D_ 100 (2019) 043016 [1902.08203].
* [27] HAWC collaboration, _A Systematic Search for TeV Halos associated with known pulsars_ , _PoS_ ICRC2019 (2020) 797.
* [28] D. Hooper and T. Linden, _Millisecond Pulsars, TeV Halos, and Implications For The Galactic Center Gamma-Ray Excess_ , _Phys. Rev. D_ 98 (2018) 043005 [1803.08046].
* [29] K. Fang, X.-J. Bi, P.-F. Yin and Q. Yuan, _Two-zone diffusion of electrons and positrons from Geminga explains the positron anomaly_ , _Astrophys. J._ 863 (2018) 30 [1803.02640].
* [30] S. Profumo, J. Reynoso-Cordova, N. Kaaz and M. Silverman, _Lessons from HAWC pulsar wind nebulae observations: The diffusion constant is not a constant; pulsars remain the likeliest sources of the anomalous positron fraction; cosmic rays are trapped for long periods of time in pockets of inefficient diffusion_ , _Phys. Rev. D_ 97 (2018) 123008 [1803.09731].
* [31] X. Tang and T. Piran, _Positron flux and $\gamma$-ray emission from Geminga pulsar and pulsar wind nebula_, _Mon. Not. Roy. Astron. Soc._ 484 (2019) 3491 [1808.02445].
* [32] S. Manconi, M. Di Mauro and F. Donato, _Contribution of pulsars to cosmic-ray positrons in light of recent observation of inverse-Compton halos_ , _Phys. Rev. D_ 102 (2020) 023015 [2001.09985].
* [33] H. Yüksel, M.D. Kistler and T. Stanev, _TeV Gamma Rays from Geminga and the Origin of the GeV Positron Excess_ , _Phys. Rev. Lett._ 103 (2009) 051101 [0810.2784].
* [34] F.A. Aharonian, A.M. Atoyan and H.J. Voelk, _High energy electrons and positrons in cosmic rays as an indicator of the existence of a nearby cosmic tevatron_ , _Astronomy and Astrophysics_ 294 (1995) L41.
* [35] F.A. Aharonian, _Very high energy gamma-ray astronomy and the origin of cosmic rays._ , _Nuclear Physics B Proceedings Supplements_ 39 (1995) 193.
* [36] T. Linden and B.J. Buckman, _Pulsar TeV Halos Explain the Diffuse TeV Excess Observed by Milagro_ , _Phys. Rev. Lett._ 120 (2018) 121101 [1707.01905].
* [37] D. Hooper, I. Cholis and T. Linden, _TeV Gamma Rays From Galactic Center Pulsars_ , _Phys. Dark Univ._ 21 (2018) 40 [1705.09293].
* [38] CTA Consortium collaboration, B. Acharya et al., _Science with the Cherenkov Telescope Array_ , WSP (11, 2018), 10.1142/10986, [1709.07997].
* [39] B.M. Gaensler and P.O. Slane, _The evolution and structure of pulsar wind nebulae_ , _Ann. Rev. Astron. Astrophys._ 44 (2006) 17 [astro-ph/0601081].
* [40] G. Giacinti, A. Mitchell, R. López-Coto, V. Joshi, R. Parsons and J. Hinton, _Halo fraction in TeV-bright pulsar wind nebulae_ , _Astron. Astrophys._ 636 (2020) A113 [1907.12121].
* [41] R.N. Manchester, G.B. Hobbs, A. Teoh and M. Hobbs, _The Australia Telescope National Facility pulsar catalogue_ , _Astron. J._ 129 (2005) 1993 [astro-ph/0412641].
* [42] M. Lyutikov, T. Temim, S. Komissarov, P. Slane, L. Sironi and L. Comisso, _Interpreting Crab Nebula’s synchrotron spectrum: two acceleration mechanisms_ , _Mon. Not. Roy. Astron. Soc._ 489 (2019) 2403 [1811.01767].
* [43] E. Amato, D. Guetta and P. Blasi, _Signatures of high energy protons in pulsar winds_ , _Astron. Astrophys._ 402 (2003) 827 [astro-ph/0302121].
* [44] M. Meyer, D. Horns and H.-S. Zechlin, _The Crab Nebula as a standard candle in very high-energy astrophysics_ , _Astron. Astrophys._ 523 (2010) A2 [1008.4524].
* [45] H.E.S.S. collaboration, _Resolving the Crab pulsar wind nebula at teraelectronvolt energies_ , _Nature Astron._ 4 (2019) 167 [1909.09494].
* [46] D. Khangulyan, M. Arakawa and F. Aharonian, _Detection of ultra-high-energy gamma rays from the Crab Nebula: physical implications_ , _Mon. Not. Roy. Astron. Soc._ 491 (2020) 3217 [1911.07438].
* [47] H. Pletsch et al., _PSR J1838-0537: Discovery of a young, energetic gamma-ray pulsar_ , _Astrophys. J. Lett._ 755 (2012) L20 [1207.5333].
* [48] J. Wu et al., _The Einstein@Home Gamma-ray Pulsar Survey. II. Source Selection, Spectral Analysis, and Multiwavelength Follow-up_ , _Astrophys. J._ 854 (2018) 99 [1712.05395].
* [49] E. Gotthelf, J. Halpern, R. Terrier and F. Mattana, _Discovery of an Energetic 38.5 ms Pulsar Powering the Gamma-ray Source IGR J18490-0000/HESS J1849-000_ , _Astrophys. J. Lett._ 729 (2011) L16 [1012.2121].
* [50] Fermi-LAT collaboration, _$Fermi$ Large Area Telescope Fourth Source Catalog_, _Astrophys. J. Suppl._ 247 (2020) 33 [1902.10045].
* [51] A.U. Abeysekara et al., _A Very High Energy $\gamma$-Ray Survey toward the Cygnus Region of the Galaxy_, _ApJ_ 861 (2018) 134 [1805.05989].
* [52] M. Di Mauro, S. Manconi, M. Negro and F. Donato, _Investigating $\gamma$-ray halos around three HAWC bright sources in Fermi-LAT data_, _arXiv e-prints_ (2020) arXiv:2012.05932 [2012.05932].
* [53] VERITAS, MAGIC collaboration, _Periastron Observations of TeV Gamma-Ray Emission from a Binary System with a 50-year Period_ , _Astrophys. J. Lett._ 867 (2018) L19 [1810.05271].
* [54] E. Aliu et al., _Investigating the TeV Morphology of MGRO J1908+06 with VERITAS_ , _Astrophys. J._ 787 (2014) 166 [1404.7185].
* [55] G.R. Blumenthal and R.J. Gould, _Bremsstrahlung, synchrotron radiation, and compton scattering of high-energy electrons traversing dilute gases_ , _Rev. Mod. Phys._ 42 (1970) 237.
* [56] T.A. Porter, G. Johannesson and I.V. Moskalenko, _High-Energy Gamma Rays from the Milky Way: Three-Dimensional Spatial Models for the Cosmic-Ray and Radiation Field Densities in the Interstellar Medium_ , _Astrophys. J._ 846 (2017) 67 [1708.00816].
* [57] V. Zabalza, _naima: a python package for inference of relativistic particle energy distributions from observed nonthermal spectra_ , _Proc. of International Cosmic Ray Conference 2015_ (2015) 922 [1509.03319].
* [58] F.A. Aharonian, S.R. Kelner and A.Y. Prosekin, _Angular, spectral, and time distributions of highest energy protons and associated secondary gamma rays and neutrinos propagating through extragalactic magnetic and radiation fields_ , _Phys. Rev. D_ 82 (2010) 043002 [1006.1045].
* [59] D. Khangulyan, F.A. Aharonian and S.R. Kelner, _Simple Analytical Approximations for Treatment of Inverse Compton Scattering of Relativistic Electrons in the Blackbody Radiation Field_ , _ApJ_ 783 (2014) 100 [1310.7971].
* [60] R. Schlickeiser and J. Ruppel, _Klein–nishina steps in the energy spectrum of galactic cosmic-ray electrons_ , _New Journal of Physics_ 12 (2010) 033044.
* [61] F. Aharonian and A. Atoyan, _Compton scattering of relativistic electrons in compact x-ray sources_ , _Astrophysics and Space Science_ 79 (1981) 321.
* [62] D.F. Torres, A. Cillis, J. Martín and E. de Oña Wilhelmi, _Time-dependent modeling of TeV-detected, young pulsar wind nebulae_ , _JHEAp_ 1-2 (2014) 31 [1402.5485].
* [63] E. Kafexhiu, F. Aharonian, A.M. Taylor and G.S. Vila, _Parametrization of gamma-ray production cross sections for p p interactions in a broad proton energy range from the kinematic threshold to PeV energies_ , _Phys. Rev. D_ 90 (2014) 123014 [1406.7369].
* [64] S. Gabici and F.A. Aharonian, _Searching for Galactic Cosmic-Ray Pevatrons with Multi-TeV Gamma Rays and Neutrinos_ , _ApJ_ 665 (2007) L131 [0705.3011].
* [65] A. Albert et al., _Evidence of 200 TeV photons from HAWC J1825-134_ , 2012.15275.
* [66] HAWC collaboration, _Spectrum and Morphology of the Very-High-Energy Source HAWC J2019+368_ , 2101.01649.
* [67] HAWC collaboration, _Evidence that Ultra-High-Energy Gamma Rays are a Universal Feature Near Powerful Pulsars_ , 2101.07895.
* [68] M. Breuhaus, J. Hahn, C. Romoli, B. Reville, G. Giacinti, R. Tuffs et al., _Ultra-high energy Inverse Compton emission from Galactic electron accelerators_ , 2010.13960.
* [69] J. Fang, L. Wen, H. Yu and S. Chen, _Investigating the multiband non-thermal emission of the 100 TeV source eHWC J2019+368 with a pulsar wind nebula scenario_ , _Mon. Not. Roy. Astron. Soc._ 498 (2020) 4901 [2007.13943].
* [70] M. Araya, _GeV Emission in the Region of HESS J1809 $-$193 and HESS J1813$-$178: Is HESS J1809$-$193 a Proton Pevatron?_, _Astrophys. J._ 859 (2018) 69 [1804.03325].
* [71] C. Deil et al., _Gammapy - A prototype for the CTA science tools_ , in _35th International Cosmic Ray Conference (ICRC2017)_ , vol. 301 of _International Cosmic Ray Conference_ , p. 766, Jan., 2017 [1709.01751].
* [72] C. Nigro et al., _Towards open and reproducible multi-instrument analysis in gamma-ray astronomy_ , _Astronomy & Astrophysics_ 625 (2019) A10 [1903.06621].
* [73] Astropy Collaboration, _Astropy: A community Python package for astronomy_ , _Astronomy & Astrophysics_ 558 (2013) A33 [1307.6212].
* [74] Astropy Collaboration, _The Astropy Project: Building an Open-science Project and Status of the v2.0 Core Package_ , _The Astronomical Journal_ 156 (2018) 123 [1801.02634].
* [75] J.D. Hunter, _Matplotlib: A 2d graphics environment_ , _Computing in Science & Engineering_ 9 (2007) 90.
* [76] S. van der Walt, S.C. Colbert and G. Varoquaux, _The numpy array: A structure for efficient numerical computation_ , _Computing in Science Engineering_ 13 (2011) 22.
* [77] E. Jones, T. Oliphant, P. Peterson et al., _SciPy: Open source scientific tools for Python_ (2001–).
# Adding eccentricity to quasicircular binary-black-hole waveform models
Yoshinta Setyawati Max Planck Institute for Gravitational Physics (Albert
Einstein Institute), Callinstraße 38, 30167 Hannover, Germany Leibniz
Universität Hannover, 30167 Hannover, Germany Institute for Gravitational and
Subatomic Physics (GRASP) Department of Physics, Utrecht University,
Princetonplein 1, 3584 CC Utrecht, The Netherlands Frank Ohme Max Planck
Institute for Gravitational Physics (Albert Einstein Institute), Callinstraße
38, 30167 Hannover, Germany Leibniz Universität Hannover, 30167 Hannover,
Germany
###### Abstract
The detection of gravitational-wave signals from coalescing eccentric binary
black holes would yield unprecedented information about the formation and
evolution of compact binaries in specific scenarios, such as dynamical
formation in dense stellar clusters and three-body interactions. The
gravitational-wave searches by the ground-based interferometers, LIGO and
Virgo, rely on analytical waveform models for binaries on quasicircular
orbits. Eccentric merger waveform models are less developed, and only a few
numerical simulations of eccentric mergers are publicly available, but several
eccentric inspiral models have been developed from the Post-Newtonian
expansion. Here we present a novel method to convert the dominant quadrupolar
mode of any circular analytical binary-black-hole model into an eccentric
model. First, using numerical simulations, we examine the additional amplitude
and frequency modulations of eccentric signals that are not present in their
circular counterparts. Subsequently, we identify suitable analytical
descriptions of those modulations and interpolate key parameters from twelve
numerical simulations designated as our training dataset. This allows us to
reconstruct the modulated amplitude and phase of any waveform up to mass ratio
3 and eccentricity 0.2. We find that the minimum overlap of the new model with
numerical simulations is around 0.98 across our entire test dataset, scaled to
a 50M⊙ black-hole binary starting at 35 Hz with aLIGO A+ design sensitivity.
The accompanying Python package pyrex implements this method.
## I Introduction
Coalescing stellar-mass black-hole binaries are one of the primary sources of
gravitational-wave (GW) signals detected by the ground-based interferometers,
the advanced Laser Interferometer Gravitational-wave Observatory (aLIGO)
advancedLIGO , Virgo advanceVirgo , and KAGRA kagra . In the first three
observing runs (O1–O3), detection pipelines assumed binary-black-hole (BBH)
mergers to have negligible eccentricity when entering the orbital frequencies
to which aLIGO, Virgo, and KAGRA are sensitive PhysRevX.9.031040 ; LIGOeccen ;
GWTC2 . BBHs formed in an isolated environment through a massive stellar
evolution are expected to circularize and therefore have undetectable
eccentricity by the time they enter the LIGO band PhysRev.136.B1224 . However,
BBHs with a detectable eccentricity can form in a dense stellar cluster
through dynamical capture Samsing_2014 ; PhysRevD.98.083028 .
A possible scenario is that the binary gains eccentricity due to gravitational
torques exchanged with a circumbinary disk refId0 . Eccentric BBHs can also
form from three-body interactions PhysRevD.98.083028 , where the BBH behaves
as the inner binary. In this system, the Kozai-Lidov kozai ; lidov mechanism
triggers the oscillation that boosts the eccentricity.
Interactions of BBHs in a typical globular cluster suggest a significant
eccentric BBH merger rate. As many as $\sim 5\%$ of binaries may enter the
LIGO detector band ($f\geq$ 10 Hz) with eccentricities $e>0.1$ gloclus ;
PhysRevD.98.123005 ; PhysRevD.97.103014 . A confident measurement of
significant eccentricity in a BBH system would be strong evidence for the
dynamical formation scenarios in dense stellar clusters and would boost our
understanding of the dynamical evolution of compact objects.
The impact of eccentricity is more substantial during the early inspiral and
therefore plays a vital role in the space-based detector era lisa . In the
LIGO band, the detection of GWs from an eccentric orbit would suggest that the
binary was formed with a small initial separation and did not have time to
circularize, or the binary evolved through an unknown dynamical process.
Incorporating eccentric BBH simulations may also lead to an increase in the
LIGO/Virgo/KAGRA detection rate PhysRevD.98.123005 . Moreover, the detection of
eccentric BBH mergers could capture effects from the extreme-gravity regime
and therefore can be used for testing the general theory of relativity
PhysRevD.100.124032 ; YunesLiv .
We highlight the significance of detecting GWs from eccentric BBHs.
Constructing template models for eccentric waveforms is challenging, and we
aim to make progress towards this goal especially for the late inspiral and
merger regimes that are most accessible with today’s observations. One of the
main difficulties in developing an eccentric waveform model is that only a few
numerical relativity (NR) simulations with higher eccentricity are available.
Thus, many studies focus on developing eccentric models from the post-
Newtonian (PN) expansion. The development of full inspiral-merger-ringdown
(IMR) eccentric waveform models is currently an actively researched area
PhysRevD.97.024031 ; PhysRevD.96.044028 ; PhysRevD.98.044015 .
Huerta et al. PhysRevD.97.024031 construct a time-domain eccentric
nonspinning waveform model ($e_{0}<0.2$) up to mass ratio 5.5, where $e_{0}$
is the eccentricity 10 cycles before the merger. Their model is called ENIGMA,
a hybrid waveform that has been calibrated using a set of numerical
simulations and trained using Gaussian process regression (GPR). Reference
PhysRevD.96.044028 presents a low-eccentricity model ($e_{0}<0.2$) called
SEOBNRE using the expansion of the effective one-body (EOB) waveform family. A
more up-to-date EOB formalism is demonstrated in Refs. PhysRevD.101.101501 ;
Nagar:2021gss . Hinder et al. PhysRevD.98.044015 present a time-domain,
nonspinning eccentric waveform model up to mass ratio $q=m_{1}/m_{2}=3$ from
23 NR simulations that are publicly available in the SXS catalog. The
referenced eccentricity is $e_{\textrm{ref}}\leq 0.08$ starting at seven
cycles before the merger. Like Ref. PhysRevD.97.024031 , the early inspiral of
this model is hybridized with a PN expansion to produce a full IMR model in a
Mathematica package mathhinder . In addition, Ref. PhysRevD.103.064022
recently developed an eccentric model NRSur2dq1Ecc for nonspinning waveforms
and eccentricities up to 0.2 from 47 NR simulations. Although the model was
trained for $q=1$, it can be extended to mass ratio $q\approx 3$. Apart from
the studies above, nonspinning, low-eccentricity frequency-domain models from
the PN expansion are publicly available in the LIGO algorithm library (LAL)
PhysRevD.93.064031 ; PhysRevD.90.084016 ; PhysRevD.93.124061 .
The prospect of detecting an eccentric BBH has motivated several recent
analyses. References 10.1093/mnras/stz2996 ; 10.1093/mnrasl/slaa084 ;
Romero_Shaw_2020 recently developed an analysis to find the signature of an
eccentric BBH in the O1 and O2 data and in several O3 events using the
SEOBNRE model. Additionally, Ref. gayathri2020gw190521 analyzed the heaviest
BBH system during O1–O3, GW190521 PhysRevLett.125.101102 with 325 NR
simulations. They found that this event is consistent with a highly
precessing, eccentric model with $e\approx 0.7$.
We present a promising method to add eccentricity to quasicircular systems
independent of the PN expansion. We apply this method to nonspinning, time-
domain waveforms, although in principle it can be used in more general
settings. Our technique focuses on a fast reconstruction of the near-merger
eccentric BBH waveform and can be applied to any analytical circular
nonspinning model. We build our model from 12 NR simulations and test against
further 8 NR simulations from the open SXS catalog SXS . Our method is very
simple and can be applied to any circular time-domain model obtained from,
e.g., the phenomenological PhysRevD.93.044007 ; Hannam:2013oca ;
PhysRevD.82.064016 or EOB PhysRevD.59.084006 ; PhysRevD.81.084041 families.
We model the deviation from circularity visible in the amplitude and phase of
eccentric GW signals. This deviation is modeled across the parameter space and
can be simply added to any quasicircular model, which elevates that model to
include eccentric effects. This approach is inspired by the "twisting"
technique that is applied for reconstructing precessing spins from an aligned-
spin model to build, e.g., the IMRPhenomP family Hannam:2013oca ;
2020PhRvD.101b4056K ; Khan:2018fmp ; Pratten:2020ceb ; Estelles:2020osj . The
dynamic calibration of the waveform model is motivated by our previous study
PhysRevD.99.024010 and the regression techniques tested in detail in Ref.
Setyawati_2020 .
We calibrate our model for mass ratios $q\leq 3$ and eccentricity $e\leq 0.2$,
and provide it as a Python package called pyrex pyrexzen . Our model has been
constructed for a fiducial 50 $M_{\odot}$ BBH and can then be rescaled for
other total masses $M$. We find that the overlap of all our test data against
NR is above 98%. Moreover, we expand the construction to earlier regimes than
the calibrated time span. Although we do not calibrate for higher mass ratios,
the early inspiral, or higher orbital eccentricity, we allow the building of
waveforms beyond the parameter boundaries used for calibration.
The organization of this manuscript is as follows: In Sec. II, we present the
methodology to construct this model. Section III discusses the primary outcome
and the faithfulness of our model. Finally, Sec. IV summarizes and concludes
the prospect of our studies. Throughout this article, we use geometric units
in which $G=c=1$.
## II Method
Using NR simulations, we investigate the frequency and amplitude modulations
in eccentric BBH signals and implement them in analytical waveforms to develop
our model. As described by Peters PhysRev.136.B1224 , the orbital eccentricity
in binary systems decreases over time due to energy loss through GW radiation.
Pfeiffer et al. Pfeiffer_2007 investigated this in numerical simulations of
the SXS catalog. The authors point out that one of the main differences in the
evolution of low-eccentricity initial data compared to quasicircular binaries
is an overall time and phase shift, where the quasicircular data represent the
binary at a point close to merger. Following these studies, Hinder et al.
PhysRevD.98.044015 showed that the GW emissions from low-eccentric binaries
and circular binaries are indistinguishable near the merger stage.
Specifically, Hinder et al. suggest that one loses only 4% of the signal when
substituting the GW emission from low-eccentricity binaries with circular
orbits 30$M$ before the peak of the amplitude ($t=0$). They use this fact to
build an eccentric IMR model by replacing the late inspiral eccentric model
with a circular waveform. Combining the finding above, we model the decaying
eccentricity as amplitude and phase modulation up to $t=-29M$. We then
substitute the GW strain at $t>-29M$ with the circular model for the same
binary masses.
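The late-time substitution described above amounts to a simple piecewise join on a common time grid. A minimal sketch, with illustrative array names and toy data (the actual implementation lives in the pyrex package):

```python
import numpy as np

def stitch_waveforms(t, h_ecc_mod, h_circ, t_switch=-29.0):
    """Use the eccentricity-modulated strain up to t_switch (in units of M)
    and the circular strain afterwards, on a common time grid t."""
    return np.where(t <= t_switch, h_ecc_mod, h_circ)

# Toy example on a shared grid; stand-in data, not real waveforms.
t = np.linspace(-1500.0, 0.0, 4096)
h_ecc_mod = np.cos(0.1 * t) * (1.0 + 0.05 * np.cos(0.01 * t))
h_circ = np.cos(0.1 * t)
h = stitch_waveforms(t, h_ecc_mod, h_circ)
```

In practice both inputs must first be interpolated onto the same time grid and phase-aligned at the switching time.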
### II.1 Data preparation
We use 20 nonspinning NR simulations from the SXS catalog up to mass ratio 3
and eccentricity 0.2 to build our model (see Table 1). We follow the
definition of eccentricity $e_{\textrm{comm}}$ in Ref. PhysRevD.98.044015 as
the eccentricity measured at the referenced frequency,
$x=(M\omega)^{2/3}=0.075$. These simulations are divided into a training data
set of 12 simulations and the test datasets of 8 simulations, as shown in Fig.
1. Binaries of the test dataset fall within the training data’s parameter
boundaries. Hence, we do not perform extrapolation with the test data.
We combine the “+” and “$\times$” polarization using the spin-weighted
spherical harmonics with the following expression formatNR :
$h_{+}-ih_{\times}=\frac{M}{r}\sum_{\ell=2}^{\infty}\sum_{m=-\ell}^{\ell}h_{\ell m}(t)\;^{-2}Y_{\ell m}(\iota,\phi),$ (1)
where $M$ and $r$ are the total mass of the system and the distance from the
observer, respectively; ${}^{-2}Y_{\ell m}$ are the spin-weighted spherical
harmonics that depend on the inclination angle $\iota$ and the phase angle
$\phi$; and $h_{\ell m}(t)$ can be extracted from the NR data in the
corresponding catalog. We construct our model for $h_{2\pm 2}$, the leading
contribution of spherical harmonic modes with $\ell=2$, $m=\pm 2$. Reference
PhysRevD.98.044015 suggests that other, subdominant modes are less
significant for nearly equal-mass systems with low eccentricity. Here we
consider only moderately small eccentricities; therefore we only model the
dominant mode. For future studies, subdominant harmonics will be important to
model high-eccentricity signals accurately.
Table 1: NR simulations from the SXS catalog used in this study, with mass ratio $q=m_{1}/m_{2}$, eccentricity $e_{\textrm{comm}}$ measured at the reference frequency $(M\omega)^{2/3}=0.075$ as described in Ref. PhysRevD.98.044015 , and the number of orbits $N_{\textrm{orbs}}$ before the maximum amplitude of $\|h_{\textrm{22}}\|$. The quasicircular waveforms ($e_{\textrm{comm}}=0.000$) have eccentricities below $10^{-5}$ at the reference frequency.

Case | Simulations | Training/test | $q$ | $e_{\textrm{comm}}$ | $N_{\textrm{orbs}}$
---|---|---|---|---|---
1 | SXS:BBH:0180 | Training | 1 | 0.000 | 26.7
2 | SXS:BBH:1355 | Training | 1 | 0.053 | 11.9
3 | SXS:BBH:1357 | Training | 1 | 0.097 | 12.8
4 | SXS:BBH:1358 | Test | 1 | 0.099 | 12.1
5 | SXS:BBH:1359 | Test | 1 | 0.100 | 11.7
6 | SXS:BBH:1360 | Test | 1 | 0.142 | 11.1
7 | SXS:BBH:1361 | Test | 1 | 0.144 | 10.9
8 | SXS:BBH:1362 | Training | 1 | 0.189 | 10.2
9 | SXS:BBH:1363 | Training | 1 | 0.192 | 10.1
10 | SXS:BBH:0184 | Training | 2 | 0.000 | 13.7
11 | SXS:BBH:1364 | Training | 2 | 0.044 | 14.2
12 | SXS:BBH:1365 | Test | 2 | 0.060 | 14.1
13 | SXS:BBH:1366 | Test | 2 | 0.095 | 13.6
14 | SXS:BBH:1367 | Test | 2 | 0.096 | 13.6
15 | SXS:BBH:1368 | Training | 2 | 0.097 | 13.6
16 | SXS:BBH:1369 | Training | 2 | 0.185 | 13.6
17 | SXS:BBH:0183 | Training | 3 | 0.000 | 13.5
18 | SXS:BBH:1372 | Test | 3 | 0.092 | 15.6
19 | SXS:BBH:1373 | Training | 3 | 0.093 | 15.3
20 | SXS:BBH:1374 | Training | 3 | 0.180 | 13.5
Figure 1: The training and test data, shown by the red circles and the blue
plus signs, are located in the parameter space of mass ratio and eccentricity.
We use 20 NR simulations from the SXS catalog and divide them into 12 NR
training datasets and 8 test datasets.
We prepare the data as follows. First, we align all the waveforms in the time
domain such that the peak amplitude is at $t=0$. Subsequently, we remove the
first 250$M$ from the start of the waveforms due to the junk radiation, and
the last 29$M$ before $t=0$ due to circularization (see Fig. 2). Later, we use
a circular waveform for $t>-29M$. We then decompose $h_{2\pm 2}$ into
amplitude ($\mathcal{A}$), phase ($\Psi$), and the phase derivative,
$\omega=\frac{d\Psi}{dt}$, where the reference frequency follows Ref.
PhysRevD.98.044015 .
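The preparation steps above (peak alignment, trimming, and decomposition into amplitude, phase, and frequency) can be sketched as follows; variable names are illustrative, and times are assumed to be in units of the total mass $M$:

```python
import numpy as np

def prepare_h22(t, h22):
    """Align the peak of |h22| to t = 0, drop the first 250M (junk
    radiation) and everything after t = -29M (circularized regime), then
    decompose into amplitude, phase, and frequency."""
    t = t - t[np.argmax(np.abs(h22))]          # peak amplitude at t = 0
    keep = (t >= t[0] + 250.0) & (t <= -29.0)  # trim both ends
    t, h22 = t[keep], h22[keep]
    amp = np.abs(h22)
    phase = np.unwrap(np.angle(h22))
    omega = np.gradient(phase, t)              # omega_22 = dPsi_22/dt
    return t, amp, phase, omega

# Toy complex signal: Gaussian envelope peaking at t = -50, constant frequency.
t0 = np.linspace(-2000.0, 100.0, 21001)
h = np.exp(-((t0 + 50.0) ** 2) / 1.0e4) * np.exp(1j * 0.1 * t0)
t1, amp, phase, omega = prepare_h22(t0, h)
```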
Figure 2: The full and the chopped waveform of the SXS:BBH:1364 simulation
($q=2,e_{\textrm{comm}}=0.044$). The blue line shows the full NR
$h_{\textrm{22}}$ mode, and the orange line presents the time range used in
this study. We remove the first $250M$ due to the junk radiation and modulate
the residual oscillation at $-1500M\leq t\leq-29M$.
We model amplitude $\mathcal{A}_{\textrm{22}}$ and frequency
($\omega_{\textrm{22}}$) as a simple quasicircular piece plus an oscillatory
function. The final model then yields the phase ($\Psi_{\textrm{22}}$) by
integrating the frequency.
Figure 3: The top-left panel shows the amplitude, the top-right panel shows
the time derivative of the phase
$\omega_{\textrm{22}}=d\Psi_{\textrm{22}}/dt$, and the bottom panel shows the
phase of $h_{\textrm{22}}$. We present the key parameters from the training
dataset for $q=2$ ($\ell=2,m=2$). The numbers in the legend correspond to the
case numbers of the simulations shown in Table 1. Although higher-eccentricity
waveforms produce more oscillations than the lower-eccentricity waveforms, all
data appear identical at $t>-30M$ due to circularization as shown in the top
panels. We employ the residual amplitude $\mathcal{A}_{\textrm{22}}$ and
frequency $\omega_{\textrm{22}}$ to develop our model in the late inspiral
regime.
### II.2 Eccentricity estimator
In numerical simulations, eccentricity is often discussed as a consequence of
imperfections in the initial data Ramos-Buades:2018azo . It manifests itself
as small oscillations on top of the gradual binary evolution, where the
oscillation’s amplitude is proportional to the eccentricity (see
$\mathcal{A}_{\textrm{22}}$ and $\omega_{\textrm{22}}$ plots in Figs. 2 and
3). We use this residual oscillation as a key to estimating the eccentricity
evolution.
Mroué et al. PhysRevD.82.124016 compare various methods to estimate
eccentricity using $e_{\textrm{X}}(t)$. The orbital eccentricity is
proportional to the amplitude of a sinusoidal function, $e_{\textrm{X}}(t)$,
expressed by
$e_{X}(t)=\frac{X_{\textrm{NR}}(t)-X_{\textrm{c}}(t)}{2X_{\textrm{c}}(t)}\quad\Leftrightarrow\quad e_{X}(X_{c})=\frac{X_{\textrm{NR}}(X_{c})-X_{\textrm{c}}}{2X_{\textrm{c}}},$ (2)
where $X$ is either $\omega_{\textrm{22}}$ or $\mathcal{A}_{\textrm{22}}$, and
$X_{\textrm{c}}(t)$ is the corresponding quantity for a circular binary, which
we use in place of the low-order polynomial fits often employed in the literature. We
reverse this relation to convert a circular model [with given $X_{c}(t)$] to
an eccentric model using an analytical description of the oscillatory function
$e_{X}(X_{c})$. We apply the Savitzky-Golay filter savgol to smooth the
$e_{X}(t)$ curves, removing noise caused by numerical artifacts. The
Savitzky-Golay filter smooths the selected data points without distorting the
underlying trend by fitting the adjacent data with a low-degree polynomial.
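A minimal sketch of the estimator of Eq. (2) with Savitzky-Golay smoothing, assuming the NR and circular series have already been interpolated onto a common grid (the filter settings here are illustrative, not those used in the paper):

```python
import numpy as np
from scipy.signal import savgol_filter

def eccentricity_estimator(x_nr, x_circ, window=101, polyorder=3):
    """Eq. (2): residual oscillation of an NR quantity X (amplitude or
    frequency) relative to its circular counterpart, smoothed with a
    Savitzky-Golay filter to suppress numerical artifacts."""
    e_x = (x_nr - x_circ) / (2.0 * x_circ)
    return savgol_filter(e_x, window_length=window, polyorder=polyorder)

# Toy data: slowly drifting circular quantity plus an eccentric oscillation
# of relative amplitude 2e = 0.1, i.e. e_X should peak near 0.05.
t = np.linspace(0.0, 10.0, 2001)
x_circ = 1.0 + 0.1 * t
x_nr = x_circ * (1.0 + 0.1 * np.cos(3.0 * t))
e_x = eccentricity_estimator(x_nr, x_circ)
```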
We stress that the definition of the orbital eccentricity is not unique. Thus,
one could use different definitions of eccentricity. In principle, any
definition can be accepted if consistently applied to the study in question.
The NR data we use are labeled with a value for the initial eccentricity that
is based on PN initial data PhysRevD.98.044015 . As we shall discuss below,
these labels are similar to what we estimate for the eccentricity using Eq.
(2), but not identical. However, we refrain from redefinition of the initial
eccentricity of the NR data and instead identify each NR simulation with the
value of eccentricity at the reference frequency $(M\omega)^{2/3}=0.075$
determined by the original Ref. PhysRevD.98.044015 . We do this because (i) we
want to avoid any confusion as to what NR data we are using and what their
properties are, and (ii) by making the amplitude of $e_{X}$ a function of the
eccentricity label imposed by Ref. PhysRevD.98.044015 , we introduce an extra
uncertainty that may be seen as representing the ambiguity in determining the
initial eccentricity of the respective NR simulations. Thus, we present a
conservative estimate of the approach’s accuracy.
As a check, we compute the orbital eccentricity using the eccentricity
estimator ($e_{\textrm{X}}$) and find that the results agree with a maximum
relative error of roughly 10% against $e_{\mathrm{comm}}$ quoted in the SXS
catalog and given in Table 1. In Fig. 4, we present the eccentricity
estimator $e_{\textrm{X}}(X_{\textrm{c}})$ as a function of its circular
amplitude and frequency, $\mathcal{A}_{\textrm{c}}$ and $\omega_{\textrm{c}}$,
respectively.
Figure 4: The eccentricity estimator from $\mathcal{A}_{\textrm{22}}$ plotted
against the circular amplitude $\mathcal{A}_{\textrm{c}}$ (left), and the
eccentricity estimator from $\omega_{\textrm{22}}$ plotted against the
circular frequency $\omega_{\textrm{c}}$ (right), both for the same mass
ratio. Different colors show different cases of training data for mass ratio
$q=2$. We smooth numerical artifacts out of the data using the Savitzky-Golay
filter (see text).
### II.3 Fitting $e_{\textrm{X}}$
Our main goal is to model an eccentric waveform by modulating the amplitude
and phase of a circular model. To construct the model, we interpolate the
additional oscillation of an eccentric waveform depending on its eccentricity
and mass ratio, where the relationship between the circular and the eccentric
model is expressed in Eq. (2). Accordingly, we look for a fitting function
that models $e_{\textrm{X}}(X_{\textrm{c}})$ in terms of the desired
parameters ($q$, $e$), and we invert Eq. (2) to obtain the eccentric
amplitude and frequency. We then integrate the frequency to obtain the
eccentric phase and construct the eccentric $h_{\mathrm{22}}$.
We note that alternatives to fitting the amplitude and frequency modulations
have been studied in Ref. [25]. In particular, the authors investigated using
the phase residual instead of the frequency, as well as fitting the eccentric
amplitude and phase (or frequency) directly instead of recasting the problem
in terms of differences to noneccentric signals. Here we find that the
most suitable strategy for our approach is to fit the residual amplitude and
frequency oscillation defined as the eccentricity estimator ($e_{\textrm{X}}$)
that comes from {$\mathcal{A}_{\textrm{22}}$, $\omega_{\textrm{22}}$} and
integrate $\omega_{\textrm{22}}$ to obtain the phase ($\Psi_{\textrm{22}}$).
In a suitable parametrization, the eccentricity estimator $e_{\textrm{X}}$ is
a decaying sinusoidal function (see Fig. 4) with its amplitude set by the
orbital eccentricity $e$ [50]. To model $e_{\textrm{X}}$ for various
eccentricities and mass ratios, we fit $e_{\textrm{X}}$ with a damped
sinusoidal function with a set of free parameters: two amplitude quantities
($A$ and $B$), a frequency ($f$), and a phase ($\varphi$), related as

$e_{\textrm{X}}(X_{\textrm{c}})=Ae^{B\,X_{c}^{\kappa}}\sin(f\,X_{c}^{\kappa}+\varphi).$ (3)
$A,B,f$, and $\varphi$ are standard damped sinusoidal parameters obtained from
the optimized curve fitting.
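A minimal sketch of this fitting step is shown below, using SciPy's `curve_fit` on synthetic data. The parameter values, the $X_{c}$ range, the noise level, and the initial guess are all assumptions made for illustration; only the functional form of Eq. (3) and the $\kappa=-59/24$ exponent quoted later in the text come from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

KAPPA = -59 / 24  # power-law exponent quoted in the text for omega_c

def ecc_estimator(X_c, A, B, f, phi):
    """Damped sinusoid of Eq. (3) in the warped variable u = X_c**kappa."""
    u = X_c ** KAPPA
    return A * np.exp(B * u) * np.sin(f * u + phi)

# Synthetic stand-in for an NR residual: evaluate the model at assumed
# parameters, add noise, then check that curve_fit recovers the curve.
X_c = np.linspace(0.3, 1.0, 400)
true = (0.08, -0.05, 1.0, 0.7)           # (A, B, f, phi), made-up values
rng = np.random.default_rng(1)
data = ecc_estimator(X_c, *true) + 1e-4 * rng.standard_normal(X_c.size)

# As for any nonlinear least-squares fit, a reasonable initial guess helps.
popt, pcov = curve_fit(ecc_estimator, X_c, data, p0=(0.07, -0.045, 1.02, 0.6))
```

In practice the sign/phase degeneracy of a sinusoid means several parameter sets describe the same curve, which is one reason the paper treats $\varphi$ as a nuisance parameter.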
We use $X^{\kappa}_{c}$ instead of $X_{\textrm{c}}$ to describe the evolution
of the residual oscillations of the amplitude and frequency mainly for the
following reason: $X_{\textrm{c}}$ is a rapidly evolving function, which
makes it difficult to model $e_{X}$ with a standard sinusoid of constant
frequency. Although it is in principle possible to use $X_{\textrm{c}}$
directly in the model, we would have to slice the data into multiple small,
overlapping time windows, and the result would be less smooth: one would have
to blend all those individual functions, each defined on a small interval,
into one global function. Moreover, we could not guarantee our result beyond
the calibration range, especially for the early inspiral. Using a power law
allows us to fit the entire region with one set of free parameters. However,
we note that the power law of $X_{\textrm{c}}$ diverges toward the very early
inspiral, producing unphysically large eccentricities there; this is a
consequence of the assumed exponential decay combined with the negative
exponent of the power law we use.
We fit our model $e_{\textrm{X}}$ from the starting frequency
$f_{\textrm{low}}=25\,\mathrm{Hz}$ for a circular BBH with a total mass
$M=50\,M_{\odot}$. The power law for $\omega_{c}$ is $\kappa=-59/24$, and for
$\mathcal{A}_{c}$ it is $\kappa=-83/24$. We emphasize that these values are
customized, i.e., we expect that one might need different values to calibrate
to higher eccentricities, higher mass ratios, or a different starting
frequency.
By optimizing the curve fit between $e_{\mathrm{X}}$ and Eq. (3), we obtain
the four quantities ($A,B,f,\varphi$) for all training data. The relation
between the mass ratio ($q$), the eccentricity ($e$), and the three
parameters $A$, $B$, and $f$ is shown in Fig. 5. The amplitude components $A$
and $B$ are strongly correlated with the eccentricity, whereas the mass ratio
determines the squared frequency. Hence, we perform one-dimensional linear
interpolation across eccentricity to obtain the values of $A$ and $B$.
Similarly, we linearly interpolate $f^{2}$ across mass ratios. We choose
$f^{2}$ instead of $f$ because the data are smoother for interpolation.
Taking the square root of $f^{2}$ leaves a sign ambiguity, but this ambiguity
can be absorbed by the phase parameter $\varphi$.
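The interpolation step can be sketched as follows. The tabulated $(e,A)$, $(e,B)$, and $(q,f^{2})$ values below are hypothetical placeholders, not the fit results of the paper; only the interpolation scheme (1-D linear, $A$ and $B$ over $e$, $f^{2}$ over $q$) follows the text.

```python
import numpy as np

# Hypothetical tables of best-fit parameters from a training set.
# The numbers are placeholders, not the values obtained in the paper.
e_train = np.array([0.05, 0.08, 0.11, 0.14])
A_train = np.array([0.020, 0.033, 0.046, 0.060])     # A grows with e
B_train = np.array([-0.030, -0.041, -0.052, -0.064])
q_train = np.array([1.0, 2.0, 3.0])
f2_train = np.array([0.95, 1.10, 1.27])              # f^2 vs mass ratio

def interp_parameters(q, e):
    """1-D linear interpolation: A, B over eccentricity; f^2 over mass ratio.
    The sign ambiguity of f = sqrt(f^2) is absorbed by the phase phi."""
    A = np.interp(e, e_train, A_train)
    B = np.interp(e, e_train, B_train)
    f = np.sqrt(np.interp(q, q_train, f2_train))
    return A, B, f

A, B, f = interp_parameters(q=1.5, e=0.095)
```

Interpolating $f^{2}$ rather than $f$ sidesteps the sign ambiguity of the square root, as noted in the text.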
The phase parameter $\varphi$ is an additional degree of freedom that we
cannot explore sufficiently with the available NR data. For small sets of NR
simulations with nearly constant values of $q$ and $e$, but varying $\ell$, we
find that the best-fit $\varphi$ mirrors changes in $\ell$. Thus, we expect
that it may correlate strongly with the mean anomaly. Because the orientation
of the ellipse is astrophysically less interesting than the value of the
eccentricity, we do not attempt to model the effect of varying the mean
anomaly other than introducing the phenomenological nuisance parameter
$\varphi$. We interpolate the other parameters when generating a new waveform
model with different mass ratios and reference eccentricities.
Figure 5: Key quantities of $\mathcal{A}_{\textrm{22}}$ (left) and
$\omega_{\textrm{22}}$ (right) of a damped sinusoidal function obtained from
the curve fitting [see Eq. 3]. The amplitude parameters ($A$ and $B$) depend
strongly on the eccentricity ($e$), whereas the square of the frequency
($f^{2}$) is correlated to the mass ratio ($q$). We leave $\varphi$ as a free
nuisance parameter that we maximize over when comparing to the test data. The
left color bar corresponds to the bottom panel, and the right color bar to the
top panel.
We apply a one-dimensional interpolation for each key quantity shown in Fig.
5. $A$ and $B$ are interpolated over different eccentricities $e$, $f^{2}$ is
interpolated over the mass ratio $q$, and the phase of the oscillation
$\varphi$ can be chosen arbitrarily.
Once we obtain the eccentricity estimators $e_{\textrm{X}}$ using the
interpolated quantities, we substitute the results to reconstruct
$\mathcal{A}_{\textrm{22}}$ and $\omega_{\textrm{22}}$ via Eq. (2). To
construct $\Psi_{\textrm{22}}$, we integrate $\omega_{\textrm{22}}$
numerically using the trapezoidal rule. We truncate the waveform at $t=-50M$
and join it with the nonspinning circular model. We then smooth the
transition with the Savitzky-Golay filter over $-46M<t<-25M$.
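The integration and smoothing steps described above can be sketched as follows. The time grid, the toy frequency series, and the transition window below are assumptions for illustration; the method (trapezoidal integration of $\omega_{22}$, Savitzky-Golay smoothing of a join window) follows the text.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.signal import savgol_filter

# Illustrative time grid (units of M) and a toy eccentric frequency series:
# a slowly chirping frequency with a small oscillatory modulation.
t = np.linspace(-3000.0, -50.0, 5000)
omega22 = 0.06 * (1.0 + 0.02 * np.sin(0.05 * t)) * (-t) ** -0.375

# Psi_22 by trapezoidal integration of omega_22, with Psi(t[0]) = 0.
psi22 = cumulative_trapezoid(omega22, t, initial=0.0)

# Smooth a transition window (stand-in for the -46M < t < -25M join with
# the circular model); the window below is illustrative only.
window = (t > -200.0) & (t < -100.0)
omega_sm = omega22.copy()
omega_sm[window] = savgol_filter(omega22, 31, 3)[window]
```

Since the frequency is strictly positive, the integrated phase is monotonically increasing, as a GW phase must be.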
We then build $h_{2\pm 2}$ as the combination of the amplitude and phase as
follows:
$h_{\ell m}=\mathcal{A}_{\ell m}\;e^{-i\Psi_{\ell m}}.$ (4)
To reconstruct the gravitational-wave strain $h=h_{+}-ih_{\times}$, we compute
the spin-weighted spherical harmonics $Y_{\ell m}(\iota,\phi)$ and employ
Eq. (1).
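A sketch of this mode-summation step is given below. The closed-form $s=-2$ spin-weighted harmonics for $(\ell,m)=(2,\pm 2)$ are standard; the amplitude and phase series are toy inputs standing in for the reconstructed $\mathcal{A}_{22}$ and $\Psi_{22}$.

```python
import numpy as np

def sYlm_2(m, iota, phi):
    """Closed-form spin-weight s = -2 spherical harmonics for (l, m) = (2, +/-2)."""
    pref = np.sqrt(5.0 / (64.0 * np.pi))
    if m == 2:
        return pref * (1.0 + np.cos(iota)) ** 2 * np.exp(2j * phi)
    if m == -2:
        return pref * (1.0 - np.cos(iota)) ** 2 * np.exp(-2j * phi)
    raise ValueError("only m = +/-2 implemented")

# Toy amplitude/phase series standing in for the reconstructed A_22, Psi_22.
t = np.linspace(-1000.0, -50.0, 2000)
A22 = 0.1 * np.exp(0.0005 * t + 0.5)
Psi22 = 0.05 * t + 1.0e-5 * t ** 2
h22 = A22 * np.exp(-1j * Psi22)      # Eq. (4)
h2m2 = np.conj(h22)                  # (2,-2) from equatorial symmetry (nonspinning)

iota, phi_c = 0.0, 0.0               # face-on observer, as in Fig. 6
h = h22 * sYlm_2(2, iota, phi_c) + h2m2 * sYlm_2(-2, iota, phi_c)
hp, hc = h.real, -h.imag             # complex strain h = h_+ - i h_x
```

For a face-on observer the $(2,-2)$ harmonic vanishes, so the strain is just the $(2,2)$ mode rescaled by a constant.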
## III Results
We built a new nonspinning eccentric model by modulating the residual
amplitude and phase oscillations of the circular analytical models
IMRPhenomD [35] and SEOBNRv4 [52]. IMRPhenomD is an aligned-spin IMR model
that was originally built in the frequency domain and calibrated to numerical
simulations for mass ratios $q\leq 18$. SEOBNRv4 is an aligned-spin
time-domain IMR model [52, 53] that has been calibrated to 140 NR waveforms
produced with the SpEC code, up to mass ratio 8, and to extreme-mass-ratio
signals.
As described in Sec. II, we interpolate the residual amplitude and phase
oscillations of the training dataset for the given mass ratio and
eccentricity. To construct a new, eccentric waveform for the intermediate to
near-merger regime, we then use one of the nonspinning circular models with
the desired mass ratio, compute the eccentricity estimators ($e_{X}$) from the
analytical description given in Eq. (3), and reconstruct the desired eccentric
waveform model for each test dataset. We develop a map from circular nonspinning
waveforms to eccentric waveforms that can be applied to any analytical model
with a relatively simple and fast function using only 20 NR simulations.
We evaluate the results by computing the overlap between the new model and the
NR test data. The overlap is maximized over a time and phase shift, as well as
the free phase offsets of the residual oscillations. Mathematically, we define
the overlap $\mathcal{O}$ based on an inner product between two waveforms:
$\displaystyle\langle h_{1},h_{2}\rangle=4\operatorname{Re}\int_{f_{1}}^{f_{2}}\frac{\tilde{h}_{1}(f)\,\tilde{h}_{2}^{*}(f)}{\mathrm{S_{n}}(f)}\,\mathrm{d}f,$ (5)
$\displaystyle\mathcal{O}=\max_{\{t_{0},\Psi_{0},\varphi_{\mathcal{A}},\varphi_{\omega}\}}\frac{\langle h_{1},h_{2}\rangle}{\|h_{1}\|\,\|h_{2}\|},$ (6)
where $\mathrm{S_{n}}$ is the sensitivity curve of the corresponding GW
interferometer, $\tilde{h}(f)$ is the Fourier transform of $h(t)$, ∗ denotes
complex conjugation and $\|h\|=\sqrt{\langle h,h\rangle}$. The mismatch or
unfaithfulness is defined by
$\mathcal{M}=1-\mathcal{O}.$ (7)
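The overlap maximization over a relative time shift can be done efficiently with an inverse FFT, and taking the modulus of the resulting complex series also maximizes over a constant phase. The sketch below implements Eqs. (5)-(7) for real discrete time series with a flat (white) PSD; a real analysis would insert the detector noise curve $S_{n}(f)$ and restrict the frequency band.

```python
import numpy as np

def match(h1, h2, dt, psd=None):
    """Overlap of Eqs. (5)-(6) for real time series of equal length,
    maximized over relative time and phase shifts (flat PSD by default)."""
    n = len(h1)
    hf1 = np.fft.rfft(h1) * dt
    hf2 = np.fft.rfft(h2) * dt
    if psd is None:
        psd = np.ones(len(hf1))
    df = 1.0 / (n * dt)

    def norm(hf):
        # <h, h> = 4 * sum |h~|^2 / S_n * df  (one-sided)
        return np.sqrt(4.0 * df * np.sum(np.abs(hf) ** 2 / psd))

    # Complex overlap as a function of relative time shift; |.| also
    # maximizes over a constant phase rotation between the waveforms.
    integrand = np.zeros(n, dtype=complex)
    integrand[: len(hf1)] = 4.0 * hf1 * np.conj(hf2) / psd
    z = np.fft.ifft(integrand) * n * df
    return np.max(np.abs(z)) / (norm(hf1) * norm(hf2))

def mismatch(h1, h2, dt, psd=None):
    """Unfaithfulness of Eq. (7)."""
    return 1.0 - match(h1, h2, dt, psd)
```

By construction, a waveform matched against itself (or a time-shifted copy of itself) yields an overlap of 1, while signals at well-separated frequencies yield a small overlap.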
We investigate sensitivity curves for three future GW detectors: aLIGO A+,
the Einstein Telescope (ET), and Cosmic Explorer (CE). LIGO A+ is a future GW
interferometer with 1.7 times better sensitivity than the current detector,
expected to start observing in mid-2024 at the earliest [54]. The ET is a
10 km GW observatory planned to be built on the border between Belgium,
Germany, and the Netherlands, which could be operating in the mid-2030s [55];
it is expected to have higher sensitivity in the low-frequency range. CE is a
40 km third-generation GW detector, sensitive to sources out to high redshift
($z>10$), that is planned to start observing in the 2030s [56]. Since our
model focuses on the late inspiral, and because the unfaithfulness is
insensitive to a change in the overall signal-to-noise ratio, the values
obtained for the future third-generation detectors show similar behavior
[57]. Hence, we only show the overlap results for the LIGO A+ design
sensitivity. A possible caveat is that our model might not fill the LIGO A+
band down to 10 Hz, so some inspiral power is missing from the signal.
Figure 6 visually compares the strain $h_{\mathrm{2\pm 2}}$ of each NR test
dataset with the new eccentric nonspinning signal built from analytical
models, IMRPhenomD and SEOBNRv4 for a $50\,M_{\odot}$ BBH with inclination
angle $\iota$=0 (face-on) and phase of coalescence, $\phi_{c}$=0. Using our
method, we find that the minimum overlap between the new model and NR is
$\approx 0.98$ ($\log_{\textrm{10}}\mathcal{M}\,=\,-1.8$) over all of our test
datasets. The minimum overlap occurs at the highest eccentricity in the test
dataset.
Figure 6: IMRPhenomD (orange) and SEOBNRv4 (green) circular waveforms twisted
into eccentric models. $\log_{10}\mathcal{M}_{1}$ is the log mismatch of
IMRPhenomD against the NR waveform (shown in blue), and
$\log_{10}\mathcal{M}_{2}$ is the log mismatch of SEOBNRv4 against the NR
waveform with the same mass ratio and eccentricity. The total mass of the
system is $M\,=\,50M_{\odot}$, and the mass ratio ($q$) and eccentricity ($e$)
are shown in the title of each plot. We employ the A+ design sensitivity curve
starting at $f\,=\,35\,\textrm{Hz}$ (see text) to compute the match. The black
vertical lines mark the range in which we perform the interpolation and
compute the match.
Although we calibrated the new model for limited ranges in mass ratio,
eccentricity, and time, we also evaluate it beyond the calibration range. In
Fig. 7, we show the unfaithfulness of the new model
against the NR test data for various total masses with the aLIGO A+ design
sensitivity curve. The left panel shows the unfaithfulness within the
calibrated frequency range, between 25 Hz and the ISCO frequency scaled over
the total mass. Similarly, the right panel presents the unfaithfulness beyond
the calibrated frequency range, between 20 Hz and the ringdown frequency. We
use the definitions of the ISCO and ringdown frequencies as follows:
$f_{\textrm{ISCO}}=1/(6^{3/2}\pi M),$ (8)
and
$f_{\textrm{RD}}=0.1/M.$ (9)
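Equations (8) and (9) are written in geometric units ($G=c=1$); converting to Hz requires the solar-mass time $GM_{\odot}/c^{3}\approx 4.9255\times 10^{-6}\,\mathrm{s}$. A minimal sketch:

```python
import numpy as np

T_SUN = 4.925490947e-6  # G*M_sun/c^3 in seconds (standard solar-mass time)

def f_isco_hz(M_solar):
    """Schwarzschild ISCO frequency of Eq. (8), converted to Hz."""
    return 1.0 / (6.0 ** 1.5 * np.pi * M_solar * T_SUN)

def f_ringdown_hz(M_solar):
    """Ringdown frequency definition of Eq. (9), converted to Hz."""
    return 0.1 / (M_solar * T_SUN)
```

For the $50\,M_{\odot}$ systems considered in the text, this gives $f_{\mathrm{ISCO}}\approx 88\,\mathrm{Hz}$ and $f_{\mathrm{RD}}\approx 406\,\mathrm{Hz}$; both frequencies scale inversely with the total mass, which is why the overlap window shrinks toward merger for heavier systems.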
Figure 7 shows that the mismatches decrease toward higher-total-mass systems.
As the total mass increases, the overlap computation covers a smaller waveform
regime towards merger in the frequency space. Since the eccentricity decreases
over time, the near-merger regime has lower eccentricities. Thus, the overlap
between the model and the corresponding NR simulation is better for the
higher-mass systems compared to the lower-mass ones. For comparison, we find
that mismatches between circular analytical models and the eccentric NR test
data are at least 1 order of magnitude worse than the results we find for our
eccentric model.
The unfaithfulness between eccentric waveforms is lower for the {25 Hz, ISCO}
range than for the {20 Hz, ringdown} range. We investigate the relative
contributions of the early inspiral and the ringdown to the unfaithfulness by
also comparing over the {25 Hz, ringdown} and {20 Hz, ISCO} ranges. We argue
that the mismatches for low masses are dominated by the inspiral, whereas for
high masses they are dominated by the merger and ringdown. In the mismatch
computation, we add padding in the ringdown region, whereas the early
inspiral comes purely from the fitting data.
Figure 7: Mismatch results of eccentric variants of IMRPhenomD and SEOBNRv4
against the NR test data for different total masses assuming aLIGO A+ design
sensitivity. Left: $25\,\mathrm{Hz}$ to the ISCO frequency (within the calibration range).
Right: from 20 Hz to ringdown frequency (beyond the calibration range), where
we define the ringdown frequency as $f_{\textrm{RD}}$=$0.1/M$.
Furthermore, we test how well one can extract the parameters of an eccentric
signal $h(q,e)$ by comparing with various waveforms with different
eccentricities $e$ and mass ratios $q$. We generate a pyrex waveform ($q=1$,
$e=0.144$) and compare it with various other signal parameters ($q,e$) using
the same analytical waveform model. The results are shown in Fig. 8. We
emphasize that in this study, we did not run a standard parameter estimation
(PE) pipeline that stochastically explores a much greater parameter space. In
particular, we do not consider varying the total mass or spin. Hence, our
results are only a first indication of potential parameter ambiguities. Our
results in Fig. 8 show that the mismatch between the generated waveform and
other waveforms having similar mass ratios but different eccentricities is
relatively low, suggesting that an accurate measurement of the eccentricity is
challenging for high-mass BBH systems where only the late inspiral and merger
are accessible through the GW detection.
Figure 8: Comparison with the highest eccentricity in the test dataset,
$e=0.144$, $q=2$. We generate an eccentric waveform model derived from a
nonspinning circular model, IMRPhenomD or SEOBNRv4, and compare the signal
with models for different mass ratios and eccentricities. Waveforms with
higher parameter distance have lower overlap. The color bar shows the
$\log_{\rm{10}}$ mismatch.
## IV Conclusion and future perspectives
The detection of GWs from an eccentric BBH merger would be a crucial step
towards understanding the physical evolution of compact binary coalescences
and the nature of BBHs in globular clusters. Due to limitations in waveform
modeling, the current search and parameter estimation pipelines in the
LIGO/Virgo data analysis rely on analytical waveform models for circular
binaries. One of the limitations to developing eccentric BBH models is the
small number of eccentric NR simulations. The publicly available NR
simulations have low eccentricities ($e\leq 0.2$) at $(M\omega)^{2/3}=0.075$. We
use 20 NR simulations from the open SXS catalog and split them into 12
training datasets and 8 test datasets to develop our method.
We presented a novel method to convert any circular nonspinning waveform model
into a low-eccentricity nonspinning waveform. To develop our method, we
analyzed the residual modulations in the amplitude and frequency of eccentric
waveforms compared to the circular signals with the same mass ratio in the 12
NR simulations of the training dataset. We modeled these residual
modulations, and the decrease of eccentricity over time they encode, through
the eccentricity estimators $e_{X}$, using a damped sinusoidal fit whose
fitting function is built upon four key parameters. We then performed a
one-dimensional interpolation for each key
parameter ($A$, $B$, and $f$) to build the eccentric waveform with the desired
mass ratio and eccentricity. One of our model parameters, $\varphi$, shows no
clear correlation with the physical parameters we explore. However, the small
number of NR simulations used here did not allow us to model the effect of
varying the mean anomaly in detail, and we expect $\varphi$ to represent this
degree of freedom. When quantifying the agreement between our model and the
test data, we maximize over this nuisance parameter.
We then build a new model using the fitting values of $e_{X}$ and the
amplitude and frequency of the circular model which here we take from
IMRPhenomD and SEOBNRv4. Our new model has an overlap
$0.98\lesssim\mathcal{O}\lesssim 0.999$ over all NR simulations in our test
dataset with the LIGO A+ design sensitivity curve. We note that more training
and test data are needed to develop this model further beyond the current
parameter boundaries. Our method is implemented in, and can be run easily and
quickly with, the Python package pyrex [46].
Although we calibrate our model to a 50 $M_{\odot}$ BBH ($q\leq 3$, $e\leq
0.2$) starting at frequency $f_{\textrm{low}}=25$ Hz, we let the computation
go slightly beyond the calibrated range. The calibrated time range of the
waveform is from the late inspiral up to the near-merger phase, but we can
extend the model through merger and ringdown by using the circular data. For
the early inspiral, an analytical PN model could be used to complete the
description of the entire coalescence. This way, our approach can be adapted
to develop a complete IMR eccentric model. This would be especially important
for future generations of GW interferometers, given their improved
sensitivity in the low-frequency range. Careful studies of eccentric search and
parameter estimation are needed to detect eccentric compact binary
coalescences and their origin.
###### Acknowledgements.
The authors would like to thank David Yeeles, Maria Haney, and Sebastian Khan
for useful discussions, and the anonymous referee for insightful comments on
the manuscript. Computations were carried out on the Holodeck cluster of the
Max Planck Independent Research Group “Binary Merger Observations and
Numerical Relativity” and the LIGO Laboratory computing cluster at California
Institute of Technology. This work was supported by the Max Planck Society’s
Research Group Grant.
## References
* (1) Aasi J, et al. Characterization of the LIGO detectors during their sixth science run. Classical and Quantum Gravity. 2015 may;32(11):115012. Available from: https://doi.org/10.1088/0264-9381/32/11/115012.
* (2) Acernese F, et al. Advanced Virgo: a second-generation interferometric gravitational wave detector. Classical and Quantum Gravity. 2015 December;32:024001. Available from: http://stacks.iop.org/0264-9381/32/i=2/a=024001.
* (3) Akutsu T, Ando M, et al. KAGRA: 2.5 generation interferometric gravitational wave detector. Nat Astron. 2019 Jan;3:35–40. Available from: https://www.nature.com/articles/s41550-018-0658-y.
* (4) Abbott BP, et al. GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs. Phys Rev X. 2019 Sep;9:031040. Available from: https://link.aps.org/doi/10.1103/PhysRevX.9.031040.
* (5) Abbott BP, et al. Search for Eccentric Binary Black Hole Mergers with Advanced LIGO and Advanced Virgo during Their First and Second Observing Runs. The Astrophysical Journal. 2019 sep;883(2):149. Available from: https://doi.org/10.3847/1538-4357/ab3c2d.
* (6) Abbott BP, et al.. GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run; 2020. arXiv:2010.14527.
* (7) Peters PC. Gravitational Radiation and the Motion of Two Point Masses. Phys Rev. 1964 Nov;136:B1224–B1232. Available from: https://link.aps.org/doi/10.1103/PhysRev.136.B1224.
* (8) Samsing J, MacLeod M, Ramirez-Ruiz E. The formation of eccenric compact binary inspirals and the role of gravitational wave emission in binary-single stellar encounters. The Astrophysical Journal. 2014 mar;784(1):71. Available from: https://doi.org/10.1088%2F0004-637x%2F784%2F1%2F71.
* (9) Lower ME, Thrane E, Lasky PD, Smith R. Measuring eccentricity in binary black hole inspirals with gravitational waves. Phys Rev D. 2018 Oct;98:083028. Available from: https://link.aps.org/doi/10.1103/PhysRevD.98.083028.
* (10) Papaloizou JCB, Nelson RP, Masset F. Orbital eccentricity growth through disc-companion tidal interaction. A&A. 2001;366(1):263–275. Available from: https://doi.org/10.1051/0004-6361:20000011.
* (11) Kozai Y. Asteroids with large secular orbital variations. Icarus. 1980;41(1):89 – 95. Available from: http://www.sciencedirect.com/science/article/pii/001910358090161X.
* (12) Lidov ML. The evolution of orbits of artificial satellites of planets under the action of gravitational perturbations of external bodies. Planetary and Space Science. 1962;9(10):719 – 759. Available from: http://www.sciencedirect.com/science/article/pii/0032063362901290.
* (13) Gultekin K, Miller MC, Hamilton DP. Three-Body Dynamics with Gravitational Wave Emission. American Astronomical Society; 2006. Available from: https://doi.org/10.1086/499917.
* (14) Rodriguez CL, Amaro-Seoane P, Chatterjee S, Kremer K, Rasio FA, Samsing J, et al. Post-Newtonian dynamics in dense star clusters: Formation, masses, and merger rates of highly-eccentric black hole binaries. Phys Rev D. 2018 Dec;98:123005. Available from: https://link.aps.org/doi/10.1103/PhysRevD.98.123005.
* (15) Samsing J. Eccentric black hole mergers forming in globular clusters. Phys Rev D. 2018 May;97:103014. Available from: https://link.aps.org/doi/10.1103/PhysRevD.97.103014.
* (16) Amaro-Seoane P, et al.. Laser Interferometer Space Antenna; 2017. arXiv:1702.00786.
* (17) Ma S, Yunes N. Improved constraints on modified gravity with eccentric gravitational waves. Phys Rev D. 2019 Dec;100:124032. Available from: https://link.aps.org/doi/10.1103/PhysRevD.100.124032.
* (18) Yunes N, Siemens X. Gravitational-Wave Tests of General Relativity with Ground-Based Detectors and Pulsar-Timing Arrays. Living Rev Relativ. 2013 Nov;16:9. Available from: https://link.springer.com/article/10.12942/lrr-2013-9.
* (19) Huerta EA, Moore CJ, Kumar P, George D, Chua AJK, Haas R, et al. Eccentric, nonspinning, inspiral, Gaussian-process merger approximant for the detection and characterization of eccentric binary black hole mergers. Phys Rev D. 2018 Jan;97:024031. Available from: https://link.aps.org/doi/10.1103/PhysRevD.97.024031.
* (20) Cao Z, Han WB. Waveform model for an eccentric binary black hole based on the effective-one-body-numerical-relativity formalism. Phys Rev D. 2017 Aug;96:044028. Available from: https://link.aps.org/doi/10.1103/PhysRevD.96.044028.
* (21) Hinder I, Kidder LE, Pfeiffer HP. Eccentric binary black hole inspiral-merger-ringdown gravitational waveform model from numerical relativity and post-Newtonian theory. Phys Rev D. 2018 Aug;98:044015. Available from: https://link.aps.org/doi/10.1103/PhysRevD.98.044015.
* (22) Chiaramello D, Nagar A. Faithful analytical effective-one-body waveform model for spin-aligned, moderately eccentric, coalescing black hole binaries. Phys Rev D. 2020 May;101:101501. Available from: https://link.aps.org/doi/10.1103/PhysRevD.101.101501.
* (23) Nagar A, Bonino A, Rettegno P. All in one: effective one body multipolar waveform model for spin-aligned, quasi-circular, eccentric, hyperbolic black hole binaries; 2021. https://arxiv.org/abs/2101.08624.
* (24) Hinder I. ”EccentricIMR”; 2018. Available from: https://github.com/ianhinder/EccentricIMR.
* (25) Islam T, Varma V, Lodman J, Field SE, Khanna G, Scheel MA, et al. Eccentric binary black hole surrogate models for the gravitational waveform and remnant properties: Comparable mass, nonspinning case. Phys Rev D. 2021 Mar;103:064022. Available from: https://link.aps.org/doi/10.1103/PhysRevD.103.064022.
* (26) Tanay S, Haney M, Gopakumar A. Frequency and time-domain inspiral templates for comparable mass compact binaries in eccentric orbits. Phys Rev D. 2016 Mar;93:064031. Available from: https://link.aps.org/doi/10.1103/PhysRevD.93.064031.
* (27) Huerta EA, Kumar P, McWilliams ST, O’Shaughnessy R, Yunes N. Accurate and efficient waveforms for compact binaries on eccentric orbits. Phys Rev D. 2014 Oct;90:084016. Available from: https://link.aps.org/doi/10.1103/PhysRevD.90.084016.
* (28) Moore B, Favata M, Arun KG, Mishra CK. Gravitational-wave phasing for low-eccentricity inspiralling compact binaries to 3PN order. Phys Rev D. 2016 Jun;93:124061. Available from: https://link.aps.org/doi/10.1103/PhysRevD.93.124061.
* (29) Romero-Shaw IM, Lasky PD, Thrane E. Searching for eccentricity: signatures of dynamical formation in the first gravitational-wave transient catalogue of LIGO and Virgo. Monthly Notices of the Royal Astronomical Society. 2019 10;490(4):5210–5216. Available from: https://doi.org/10.1093/mnras/stz2996.
* (30) Romero-Shaw IM, Farrow N, Stevenson S, Thrane E, Zhu XJ. On the origin of GW190425. Monthly Notices of the Royal Astronomical Society: Letters. 2020 05;496(1):L64–L69. Available from: https://doi.org/10.1093/mnrasl/slaa084.
* (31) Romero-Shaw I, Lasky PD, Thrane E, Bustillo JC. GW190521: Orbital Eccentricity and Signatures of Dynamical Formation in a Binary Black Hole Merger Signal. The Astrophysical Journal. 2020 oct;903(1):L5. Available from: https://doi.org/10.3847/2041-8213/abbe26.
* (32) Gayathri V, Healy J, Lange J, O’Brien B, Szczepanczyk M, Bartos I, et al.. GW190521 as a Highly Eccentric Black Hole Merger; 2020. arXiv:2009.05461.
* (33) Abbott R, et al. GW190521: A Binary Black Hole Merger with a Total Mass of $150\text{ }\text{ }{M}_{\bigodot}$. Phys Rev Lett. 2020 Sep;125:101102. Available from: https://link.aps.org/doi/10.1103/PhysRevLett.125.101102.
* (34) ”SXS catalog”; 2020. Available from: http://www.black-holes.org/waveforms.
* (35) Khan S, Husa S, Hannam M, Ohme F, Pürrer M, Forteza XJ, et al. Frequency-domain gravitational waves from nonprecessing black-hole binaries. II. A phenomenological model for the advanced detector era. Phys Rev D. 2016 Feb;93:044007. Available from: https://link.aps.org/doi/10.1103/PhysRevD.93.044007.
* (36) Hannam M, Schmidt P, Bohé A, Haegel L, Husa S, Ohme F, et al. Simple Model of Complete Precessing Black-Hole-Binary Gravitational Waveforms. Phys Rev Lett. 2014;113(15):151101.
* (37) Santamaría L, Ohme F, Ajith P, Brügmann B, Dorband N, Hannam M, et al. Matching post-Newtonian and numerical relativity waveforms: Systematic errors and a new phenomenological model for nonprecessing black hole binaries. Phys Rev D. 2010 Sep;82:064016. Available from: https://link.aps.org/doi/10.1103/PhysRevD.82.064016.
* (38) Buonanno A, Damour T. Effective one-body approach to general relativistic two-body dynamics. Phys Rev D. 1999 Mar;59:084006. Available from: https://link.aps.org/doi/10.1103/PhysRevD.59.084006.
* (39) Pan Y, Buonanno A, Buchman LT, Chu T, Kidder LE, Pfeiffer HP, et al. Effective-one-body waveforms calibrated to numerical relativity simulations: Coalescence of nonprecessing, spinning, equal-mass black holes. Phys Rev D. 2010 Apr;81:084041. Available from: https://link.aps.org/doi/10.1103/PhysRevD.81.084041.
* (40) Khan S, Ohme F, Chatziioannou K, Hannam M. Including higher order multipoles in gravitational-wave models for precessing binary black holes. Phys Rev D. 2020 Jan;101(2):024056.
* (41) Khan S, Chatziioannou K, Hannam M, Ohme F. Phenomenological model for the gravitational-wave signal from precessing binary black holes with two-spin effects. Phys Rev D. 2019;100(2):024059.
* (42) Pratten G, García-Quirós C, Colleoni M, Ramos-Buades A, Estellés H, Mateu-Lucena M, et al. Computationally efficient models for the dominant and subdominant harmonic modes of precessing binary black holes. Phys Rev D. 2021 May;103:104056. Available from: https://link.aps.org/doi/10.1103/PhysRevD.103.104056.
* (43) Estellés H, Ramos-Buades A, Husa S, García-Quirós C, Colleoni M, Haegel L, et al.. IMRPhenomTP: A phenomenological time domain model for dominant quadrupole gravitational wave signal of coalescing binary black holes; 2020. arXiv:2004.08302.
* (44) Setyawati Y, Ohme F, Khan S. Enhancing gravitational waveform models through dynamic calibration. Phys Rev D. 2019 Jan;99:024010. Available from: https://link.aps.org/doi/10.1103/PhysRevD.99.024010.
* (45) Setyawati Y, Pürrer M, Ohme F. Regression methods in waveform modeling: a comparative study. Classical and Quantum Gravity. 2020 mar;37(7):075012. Available from: https://doi.org/10.1088%2F1361-6382%2Fab693b.
* (46) Setyawati Y, Ohme F. Yoshinta/pyrex: public release of pyrex. Zenodo; 2021. Available from: https://doi.org/10.5281/zenodo.4818195.
* (47) Pfeiffer HP, Brown DA, Kidder LE, Lindblom L, Lovelace G, Scheel MA. Reducing orbital eccentricity in binary black hole simulations. Classical and Quantum Gravity. 2007 may;24(12):S59–S81. Available from: https://doi.org/10.1088%2F0264-9381%2F24%2F12%2Fs06.
* (48) Ajith P, et al.. Data formats for numerical relativity waves; 2011. arXiv:0709.0093.
* (49) Ramos-Buades A, Husa S, Pratten G. Simple procedures to reduce eccentricity of binary black hole simulations. Phys Rev D. 2019;99(2):023003.
* (50) Mroué AH, Pfeiffer HP, Kidder LE, Teukolsky SA. Measuring orbital eccentricity and periastron advance in quasicircular black hole simulations. Phys Rev D. 2010 Dec;82:124016. Available from: https://link.aps.org/doi/10.1103/PhysRevD.82.124016.
* (51) Savitzky A, Golay MJE. Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Analytical Chemistry. 1964 July;36:8. Available from: https://doi.org/10.1021/ac60214a047.
* (52) Bohé A, Shao L, Taracchini A, Buonanno A, Babak S, Harry IW, et al. Improved effective-one-body model of spinning, nonprecessing binary black holes for the era of gravitational-wave astrophysics with advanced detectors. Phys Rev D. 2017 Feb;95:044028. Available from: https://link.aps.org/doi/10.1103/PhysRevD.95.044028.
* (53) Taracchini A, Buonanno A, Pan Y, Hinderer T, Boyle M, Hemberger DA, et al. Effective-one-body model for black-hole binaries with generic mass ratios and spins. Phys Rev D. 2014 Mar;89:061502. Available from: https://link.aps.org/doi/10.1103/PhysRevD.89.061502.
* (54) Upgraded LIGO to search for universe’s most extreme events; 2019. [Online; accessed 20-Jan-2021]. https://www.nsf.gov/news/news_summ.jsp?cntn_id=297414.
* (55) Maggiore M, et al. Science case for the Einstein telescope. Journal of Cosmology and Astroparticle Physics. 2020 mar;2020(03):050–050. Available from: https://doi.org/10.1088/1475-7516/2020/03/050.
* (56) Reitze D, et al. Cosmic Explorer: The U.S. Contribution to Gravitational-Wave Astronomy beyond LIGO. Bulletin of the AAS. 2019 9;51(7). Available from: https://baas.aas.org/pub/2020n7i035.
* (57) ”Exploring the Sensitivity of Next Generation Gravitational Wave Detectors”; 2016. Available from: https://dcc.ligo.org/LIGO-P1600143/public.
# Muppet: Massive Multi-task Representations with Pre-Finetuning
Armen Aghajanyan
Facebook
<EMAIL_ADDRESS>
Anchit Gupta
Facebook
<EMAIL_ADDRESS>
Akshat Shrivastava
Facebook
<EMAIL_ADDRESS>
Xilun Chen
Facebook
<EMAIL_ADDRESS>
Luke Zettlemoyer
Facebook
<EMAIL_ADDRESS>
Sonal Gupta
Facebook
<EMAIL_ADDRESS>
###### Abstract
We propose pre-finetuning, an additional large-scale learning stage between
language model pre-training and fine-tuning. Pre-finetuning is massively
multi-task learning (around 50 datasets, over 4.8 million total labeled
examples), and is designed to encourage learning of representations that
generalize better to many different tasks. We show that pre-finetuning
consistently improves performance for pretrained discriminators (e.g. RoBERTa)
and generation models (e.g. BART) on a wide range of tasks (sentence
prediction, commonsense reasoning, MRC, etc.), while also significantly
improving sample efficiency during fine-tuning. We also show that large-scale
multi-tasking is crucial; pre-finetuning can hurt performance when few tasks
are used, up to a critical point (usually above 15), after which performance
improves linearly in the number of tasks.
## 1 Introduction
The recent success of language model pre-training Devlin et al. (2018); Liu et
al. (2019b); Lewis et al. (2019); Raffel et al. (2019); Radford et al. (2019)
is remarkable, at least in part, due to the exclusive use of self-supervision,
without any manually labeled data. For many tasks, however, we already have
training examples for related problems, which we should be able to leverage.
Recent work has shown gains from fine-tuning schemes that are multi-task
Raffel et al. (2019); Khashabi et al. (2020) and multi-stage Liu et al.
(2019a), but it can be difficult to know which intermediate tasks will best
transfer Raffel et al. (2019). In this paper, we show that multi-task
supervised tuning, if done at a sufficiently large scale with many different
tasks, can be an effective second stage of task-agnostic pre-training,
removing the need to pre-select the best intermediate tasks.
More specifically, in addition to the standard pre-training/fine-tuning
methodology of learning language tasks, we introduce a new intermediate stage,
pre-finetuning. Pre-finetuning involves a massive multi-task learning step
(4.8 million total training examples) performed on around 50 classification,
summarization, question answering, and common sense reasoning tasks. We
believe we are the first to investigate multi-task learning at this scale in
terms of both number and types of tasks. We show, in particular, that standard
multi-tasking schemes can be unstable and often fail to learn high quality
representations. However, we introduce a new training scheme which uses loss
scaling and task-heterogeneous batches so that gradient steps are more evenly
balanced across multiple different competing tasks, greatly improving training
stability and overall performance. We call our pre-finetuned models MUPPET;
Massive Multi-task RePresentation with PrE-fineTuning.
Through extensive experiments, we show that incorporating pre-finetuning to
RoBERTa Liu et al. (2019b) and BART Lewis et al. (2019) models yields
consistent improvements, including new state-of-the-art performance for RTE
Bentivogli et al. (2009) and HellaSWAG Zellers et al. (2019), without having
to specify specific intermediate transfer tasks. These gains are particularly
strong in the low resource regime, where there is relatively little labeled
data for fine-tuning. We also study why pre-finetuning outperforms previous
multi-tasking schemes. We first compare different optimization techniques to
stabilize training, and find it important to use task-heterogeneous batches
with task-rebalancing loss scaling. We also show that scale is crucial for
effective multi-task learning. We empirically see a critical point in terms of
the number of tasks (usually over 15); having fewer tasks degrades
representations, while having more seems to improve performance linearly as
far as we were able to scale.
To summarize, our contributions include:
* •
We show that we can further improve pre-trained representations with an
additional stage we call pre-finetuning, which utilizes massively multi-task
learning. We show standard pre-trained representations, when further refined
with pre-finetuning consistently improve performance on downstream tasks.
* •
We introduce a new multi-task training scheme for effective learning at scale,
which uses loss scaling and task-heterogeneous batches.
* •
We explore the effects of scale on multi-task learning and show the existence
of critical points in multi-task training, beyond which increasing the number
of tasks improves generalizable representations.
* •
We conduct a study surrounding the data efficiency of standard pre-trained
representations and their respective pre-finetuned counterparts. We show that
the pre-finetuned models consistently require less data for fine-tuning.
## 2 Related Work
Multi-task learning has been an increasingly active topic in recent
literature. Recent advances such as MT-DNN show that by leveraging multi-task
learning, we can further improve performance on several language benchmarks on
top of traditional pre-training (Liu et al., 2019a). However, T5 (Raffel et
al., 2019) shows that incorporating multi-task learning on top of larger models
does not improve upon the standardized pre-training / finetuning. Thus the
effect of multi-task learning across different pre-training methods is not
fully understood.
Recently Khashabi et al. (2020) showed how doing MTL training on a range of QA
tasks can improve the performance of T5 by taking advantage of cross dataset
transfer. Unlike our approach, they convert all the data to a seq2seq format,
operate on a smaller MTL scale, have a different batching strategy, and focus
solely on improving QA tasks. Our work shows how even seemingly very different
datasets, for example, summarization and extractive QA, can help each other by
improving the model’s representations.
Our work aims to explore multi-task learning at a much larger scale; by
incorporating a larger number of tasks, we show that we can consistently
improve several language benchmarks from several domains. Contrary to T5, we
show that incorporating a secondary stage of multi-task learning does lead to
better representations. In §5 we demonstrate that the effectiveness of
multi-task learning comes from the large scale of our MTL setup.
## 3 Pre-Finetuning Through Massive Multitask Learning
Previous work has reported mixed results from experiments on multi-task
learning Liu et al. (2019a); Raffel et al. (2019). In general, it can be
challenging to balance the losses from different tasks; upsampling can lead to
overfitting low resource tasks, and downsampling can lead to improper learning
of specific tasks. This difficulty is particularly pronounced when operating
at the scale of experiments we show in Section 5.1, where there are more
diverse tasks than previously considered. This section presents our pre-
finetuning approach that leads to more stable and accurate multi-task training
by introducing new optimization, loss scaling, and task sampling schemes to
balance each minibatch’s updates better.
### 3.1 Tasks and Losses
#### Diverse Tasks
To learn general language representations, we include a variety of tasks
across many domains. We select language tasks across four different domains:
classification, commonsense reasoning, machine reading comprehension, and
summarization. In Table 1, we show the breakdown of each of the task types
along with the number of samples used from each during pre-finetuning. In
total, our multi-task setup learns over 4.8 million supervised samples across 4
families of tasks.
Task Type | # Datasets | # Train | # Eval
---|---|---|---
Classification | 26 | 2.9M | 188K
Summarization | 4 | 524K | 30K
MRC | 6 | 1.05M | 123K
CommonSense | 10 | 360K | 49K
Total | 46 | 4.8M | 390K
Table 1: Breakdown of MTL pre-finetuning datasets. The table shows the number
of datasets we used per task type and the number of samples in training and
evaluation sets.
A full list of all of the datasets we leverage for pre-finetuning is described
in appendix §A.1.
#### Standard Losses
To train on several datasets, our model contains task-specific heads, each
optimizing for a task-specific loss. The loss functions are summarized in
table 2. Each loss is scaled with loss scaling described in §3.3. After loss
scaling, the gradients from each task are averaged before doing the model
update step.
Task Type | Loss Function
---|---
Classification | Cross Entropy (CE)
Summarization | Label Smoothed CE Szegedy et al. (2015)
MRC | Span Prediction Seo et al. (2016)
Commonsense | Sentence Ranking Loss Liu et al. (2019b)
Table 2: Description of loss functions for each task type. Note for
summarization the label smoothed cross entropy loss is averaged across tokens.
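The per-task loss dispatch in Table 2 can be sketched as follows. The function names and the toy label-smoothing formula are illustrative assumptions, not the authors' implementation:

```python
import math

# Illustrative sketch of routing each data-point to a head-specific loss by
# task type, mirroring Table 2. Names and formulas here are assumptions.

def cross_entropy(probs, target):
    """Standard CE for classification, given predicted class probabilities."""
    return -math.log(probs[target])

def label_smoothed_ce(probs, target, eps=0.1):
    """Mix the one-hot CE with a uniform-target CE (averaged per token in the paper)."""
    n = len(probs)
    uniform = sum(-math.log(p) for p in probs) / n
    return (1.0 - eps) * cross_entropy(probs, target) + eps * uniform

LOSS_BY_TASK = {
    "classification": cross_entropy,
    "summarization": label_smoothed_ce,
}

def task_loss(task_type, probs, target):
    """Dispatch a data-point to the loss for its task type."""
    return LOSS_BY_TASK[task_type](probs, target)
```

After this dispatch, each loss would be scaled as in §3.3 before the gradients are averaged.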
### 3.2 Optimization
We show two strategies to learn multi-task representations at scale:
Accumulating Gradients Across Tasks (Heterogeneous Batches) and Leveraging
Better Finetuning.
#### Accumulating Gradients Across Tasks
Our model is trying to optimize not a single objective but several potentially
competing objectives to create a unified representation across several tasks
during model training. During gradient descent, moving along the gradient of a
single task may not be the optimal direction for the model to move to learn a
single unified representation across tasks. To overcome this, we ensure each
batch our model optimizes consists of several tasks. Each worker samples a
random batch from our set of tasks and computes a gradient, accumulated for
the final update. Empirically we use 64 GPUs for pre-finetuning, resulting in
each batch consisting of gradients across 64 sampled tasks. In §5.2 we show
how such a strategy allows for our model to arrive at a better representation
for end task finetuning.
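A minimal sketch of this accumulation scheme, where `worker_gradient` is a toy stand-in (an assumption) for the real per-task backward pass on one GPU:

```python
import random

# Minimal sketch (not the authors' distributed code) of heterogeneous batches:
# each of n_workers samples a batch from a random task, computes a gradient,
# and the per-worker gradients are averaged into one update spanning many tasks.

def worker_gradient(task_id, params):
    # Stand-in gradient: in practice, the backward pass for one
    # task-specific batch on one GPU.
    return [task_id * 0.01 + 0.0 * p for p in params]

def heterogeneous_step(params, tasks, n_workers=64, seed=0):
    rng = random.Random(seed)
    grads = [worker_gradient(rng.choice(tasks), params) for _ in range(n_workers)]
    # Average across workers, so one update mixes gradients from many tasks.
    return [sum(g[i] for g in grads) / n_workers for i in range(len(params))]
```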
#### Better Finetuning
Instead of starting from scratch, we initialize our model with representations
learned from self-supervised pre-training in pre-finetuning. This can inherit
the knowledge captured in the pre-trained representations and speed up
training. Mosbach et al. (2020) show that standard fine-tuning of pre-trained
models can be unstable, which may be aggravated in our case as we are training
on a diverse set of tasks simultaneously. Therefore, we employ the R3F/R4F
methods Aghajanyan et al. (2020) to combat this issue. In particular, R3F/R4F
consists of an additional loss term, ensuring that small perturbations to the
input space result in similar representations, which can be used to learn more
robust representations during pre-finetuning.
In early experimentation, we found that R3F was pivotal in getting MUPPET to
work for BART. All other fine-tuning and pre-finetuning was done using
standard SGD.
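As a rough illustration of the idea behind R3F (a sketch, not the exact formulation of Aghajanyan et al., 2020), one can penalize the symmetric KL divergence between the model's output distribution on a clean input and on a noise-perturbed input; the tiny `logits_fn` model and noise scheme below are assumptions:

```python
import math
import random

# Hedged sketch of an R3F-style consistency term: small input perturbations
# should leave the output distribution (nearly) unchanged.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sym_kl(p, q):
    kl = lambda a, b: sum(ai * math.log(ai / bi) for ai, bi in zip(a, b))
    return kl(p, q) + kl(q, p)

def r3f_term(logits_fn, x, noise_std=0.1, seed=0):
    rng = random.Random(seed)
    x_noisy = [xi + rng.gauss(0.0, noise_std) for xi in x]
    return sym_kl(softmax(logits_fn(x)), softmax(logits_fn(x_noisy)))
```

This term would be added to the task loss during pre-finetuning, encouraging representations that are robust to input noise.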
### 3.3 Loss Scaling
Loss scaling methods introduce a multiplicative reweighting of individual
losses per data-point. Various loss scaling techniques have been proposed,
from dynamic scaling by inverse training loss to simple scaling by the number
of data-points in respective datasets (Chen et al., 2018).
As pre-finetuning optimizes several different types of tasks and datasets,
each having its own output spaces, loss scaling becomes essential to ensure
stable training. We attempted various forms of loss-scaling throughout initial
experimentation, but the most effective was the novel method we describe
below.
Let us denote $\mathcal{L}_{i}(x_{i},y_{i};\theta)$ as the loss for datapoint
$i$ for a model parameterized by $\theta$. Remember that the loss depends on
the type of task (commonsense loss is different from binary classification).
Furthermore let $n:\mathbb{N}\rightarrow\mathbb{N}$ be a function which for
each data-point returns the number of predictions $\mathcal{L}$ operates over.
For example, for binary classification, $n$ would return two, while for
generation, $n$ would return the size of the vocabulary (since we average
across loss per token generated). We scale data-point loss so that, if the
class distribution were uniformly distributed along with our model's
predictions, all of our losses would have equivalent values.
$\mathcal{L}^{scaled}_{i}(x_{i},y_{i};\theta)=\frac{\mathcal{L}_{i}(x_{i},y_{i};\theta)}{\log{n(i)}}$ (1)
We found that this static scaling worked incredibly well, outperforming other
loss scaling methods in early experimentation.
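Eq. (1) can be checked in a few lines: under uniform predictions, cross entropy equals log(n), so every task type scales to the same value. The 50,000-word vocabulary below is an illustrative assumption:

```python
import math

# Quick check of Eq. (1): dividing each data-point loss by log n(i) makes a
# uniform-prediction loss equal 1.0 regardless of task type, whether n is 2
# (binary classification) or a 50,000-word vocabulary (generation).

def scaled_loss(loss, n_predictions):
    return loss / math.log(n_predictions)

# Under uniform predictions, cross entropy equals log(n), so both scale to 1.0.
binary_uniform = scaled_loss(math.log(2), 2)         # -> 1.0
vocab_uniform = scaled_loss(math.log(50000), 50000)  # -> 1.0
```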
### 3.4 Sampling
Another approach to balancing various tasks in a multi-task set up is to up-
sample smaller datasets and down-sample larger ones to achieve more uniformity
between dataset sizes.
Existing results for dataset sampling methods in multi-task learning are
conflicting, but recent work has shown that it does not work well for multi-
task learning of pre-trained representations. For example, T5 showed that
various forms of sampling did not improve over using the natural sizes of
the datasets (Raffel et al., 2019).
We also found that sampling datasets was consistently detrimental for multi-
task learning over pre-trained representations during initial experimentation.
Specifically, we saw unmanageable over-fitting and stability issues. Therefore
we opt for maintaining the natural distribution of the datasets throughout all
of our experiments.
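A sketch of sampling from the natural distribution, i.e. drawing examples in proportion to each dataset's size with no up- or down-sampling (the dataset names and sizes here are illustrative):

```python
import random
from collections import Counter

# Sketch of keeping the natural distribution: draw training examples in
# proportion to each dataset's size instead of rebalancing dataset sizes.

def natural_sampler(dataset_sizes, n_draws, seed=0):
    rng = random.Random(seed)
    names = list(dataset_sizes)
    weights = [dataset_sizes[name] for name in names]
    return Counter(rng.choices(names, weights=weights, k=n_draws))
```

A dataset nine times larger is drawn roughly nine times as often, matching its natural share of the training data.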
### 3.5 Experimental Setup
We selected RoBERTa (Liu et al., 2019b) and BART (Lewis et al., 2019) as our
initial pre-trained models to further pre-finetune. For each task type we use
a different prediction scheme. Every Sentence Prediction dataset gets a
separate classification head; for Commonsense and MRC we utilize a separate
unified head for each task. For Summarization, we do not add any parameters
and use the BART decoder and output layer as is. Experimentally, we saw that
using a different head per individual Commonsense and MRC dataset led to severe
overfitting.
For both models, we do the pre-finetuning procedure for both the Base and
Large models. We trained each model configuration with 64 GPUs until
convergence. Dependent on configuration, this ranged from a day to 4 days. We
include the hyper-parameters used per pre-finetuning run in the Appendix in
Section §A.2.
## 4 Empirical Results
We first show that pre-finetuning improves the representations of pre-training
models. To do so, we fine-tune our pre-finetuned models on a large set of
tasks.
For each of the individual downstream tasks, we use a fixed hyper-parameter
search to optimize over simple hyperparameters such as learning rate, Adam
$\epsilon$ (Kingma and Ba, 2014) and dropout (Srivastava et al., 2014). We
present our results in two tables. Table 3 shows our results on the GLUE
benchmark (Wang et al., 2018) as well as two MRC tasks; SQuAD (Rajpurkar et
al., 2016a) and ReCoRD (Zhang et al., 2018). Table 4 reports results on other
Sentence Prediction tasks as well as Commonsense tasks. We also include
results from MT-DNN Liu et al. (2019a), ELECTRA Clark et al. (2020) (for
ELECTRA we leverage the results presented in the ELECTRA GitHub:
https://github.com/google-research/electra#expected-results), and RoBERTa Liu et
al. (2019b) models. For Summarization tasks we show that our pre-finetuned
BART model outperforms all other summarization baselines. Both of these tables
report over data-sets available during the pre-finetuning stage.
Given that our pre-finetuned models now have an understanding of the task at
hand through the use of classification heads, we have a choice during
finetuning on whether or not to use these heads. In general we found re-using
heads to be beneficial for MRC, Commonsense and Sentence Prediction tasks with
small dataset size.
| GLUE | MRC |
---|---|---|---
| MNLI | QQP | RTE | QNLI | MRPC | SST-2 | SQuAD |
RoBERTa-B | 87.6 | 91.9 | 78.7 | 92.8 | 90.2 | 94.8 | 82.6 |
\+ MUPPET | 88.1 | 91.9 | 87.8 | 93.3 | 91.7 | 96.7 | 86.6 |
RoBERTa-L | 90.2 | 92.2 | 88.1 | 94.7 | 90.9 | 96.4 | 88.7 |
\+ MUPPET | 90.8 | 92.2 | 92.8 | 94.9 | 91.4 | 97.4 | 89.4 |
BART | 89.9 | 92.5 | 87.0 | 94.9 | 90.4 | 96.6 | |
\+ MUPPET | 89.9 | 92.7 | 92.4 | 94.6 | 92.2 | 96.9 | |
ELECTRA-B | 88.8 | 91.5 | 82.7 | 93.2 | 89.5 | 95 | 80.5 |
ELECTRA-L | 90.9 | 92.4 | 88.0 | 95.0 | 90.8 | 96.9 | 88.1 |
MT-DNN | 87.1 | 91.9/89.2 | 83.4 | 92.9 | 91.0/87.5 | 94.3 | - |
Table 3: We present results for the GLUE benchmark task and an MRC dataset. Bolded numbers show the MUPPET vs. base model, underline marks the best number. If not explicitly stated, the results are showing the accuracy of the evaluation set. For the MRC tasks, we report both exact match (EM) and F1 as is standard in the literature. For SQuAD, we reused the task head from pre-finetuning.
| SP | Commonsense | Summarization
---|---|---|---
| BoolQ | CQA | HellaSwag | OpenQA | CNN/DailyMail | Gigaword | Reddit TIFU
RoBERTa-B | 82.0 | 66.2 | 65.1 | 63.8 | - | - | -
\+ MUPPET | 83.8 | 69.4 | 69.0 | 64.6 | - | - | -
RoBERTa-L | 86.4 | 78.1 | 83.4 | 73.6 | - | - | -
\+ MUPPET | 87.5 | 79.2 | 86.4 | 74.4 | - | - | -
BART | 86.2 | 78.1 | 84.1 | 71.4 | 44.16/21.28/40.90 | 39.29/20.09/35.65 | 24.19/8.12/21.31
\+ MUPPET | 86.9 | 74.8 | 75.9 | 70.8 | 44.45/21.25/41.4 | 40.40/20.54/36.21 | 30.30/11.25/24.92
T5-L | 86.2 | 75.6 | 83.9 | 70.4 | 42.50/20.68/39.75 | - | -
T5-11B | 86.8 | 78.9 | 85.8 | 75.4 | 43.52/21.55/40.69 | - | -
PEGASUS | - | - | - | - | 44.17/21.47/41.11 | 39.12/19.86/36.24 | 26.63/9.01/21.60
ERNIE-GEN | - | - | - | - | 44.02/21.17/41.26 | 39.25/ 20.25/36.53 | -
ProphetNet | - | - | - | - | 44.20/21.17/41.30 | 39.51/20.42/36.69 | -
Table 4: We present results for the non-GLUE Sentence Prediction tasks as well as a set of standard Commonsense tasks. Bolded numbers signify MUPPET vs. base model, while an underline signifies the best number. If not explicitly stated, the results are showing the accuracy of the evaluation set. For commonsense tasks, we re-use the task head from pre-finetuning.
| SP | Structured Prediction (Penn) | Summarization
---|---|---|---
| Hyperpartisan | Chunking | Parsing | POS | Arxiv | PubMed | BigPatent
RoBERTa-B | 84.2 | 93.4 | 95.1 | 93.7 | - | - | -
\+ MUPPET | 85.8 | 95.5 | 94.5 | 93.2 | - | - | -
RoBERTa-L | 90.4 | 95.1 | 94.5 | 93.4 | - | - | -
\+ MUPPET | 92.5 | 96.9 | 95.7 | 97.9 | - | - | -
BART | 85.1 | 92.1 | 91.1 | 91.8 | 41.20/9.20/32.45 | 39.87/16.43/35.56 | 48.54/29.35/39.42
\+ MUPPET | 87.2 | 96.1 | 94.5 | 97.2 | 43.90/14.50/40.10 | 45.13/19.80/39.90 | 52.34/33.50/42.80
Pegasus | - | - | - | - | 43.85/16.83/39.17 | 44.53/19.30/40.70 | 52.25/33.04/41.80
Table 5: We present results on a large set of different tasks across datasets that are not available to the model during the pre-finetuning stage. Bolded numbers signify MUPPET vs. base model, while an underline signifies the best number. For Chunking/Parsing, we use F1, while for Part-Of-Speech tagging, we use accuracy.
Model | Training Data | A1 | A2 | A3 | ANLI
---|---|---|---|---|---
RoBERTa | S,M | 47.6 | 25.4 | 22.1 | 31.1
| +F | 54.0 | 24.2 | 22.4 | 32.8
| +F+A1⋆2 | 68.7 | 19.3 | 22.0 | 35.8
| +F+A1+A2⋆3 | 71.2 | 44.3 | 20.4 | 43.7
| S,M,F,ANLI | 73.8 | 48.9 | 44.4 | 53.7
RoBERTa-MUPPET | S,M | 49.9 | 28.2 | 24.2 | 33.3
| +F | 55.2 | 26.8 | 24.6 | 33.9
| +F+A1⋆2 | 70.9 | 22.5 | 25.1 | 36.7
| +F+A1+A2⋆3 | 74.3 | 48.2 | 22.8 | 45.9
| S,M,F,ANLI | 76.9 | 52.3 | 44.2 | 56.9
InfoBERT Wang et al. (2021) | S,M,F,ANLI | 76.4 | 51.6 | 48.6 | 58.3
ALUM Liu et al. (2020) | S,M,F,ANLI | 73.3 | 53.4 | 48.2 | 57.7
XL-NET Yang et al. (2019) | S,M,F,ANLI | 67.6 | 50.7 | 48.3 | 55.1
Table 6: We show the performance of the RoBERTa model and the pre-finetuned
RoBERTa-MUPPET model on the ANLI benchmark. Bolded numbers signify MUPPET vs
base model, underline signifies best number. ‘S’ refers to SNLI, ‘M’ to MNLI
dev (-m=matched, -mm=mismatched), and ‘F’ to FEVER; ‘A1–A3’ refer to the
rounds respectively and ‘ANLI’ refers to A1+A2+A3.
Across the board, pre-trained representations that were further refined with
pre-finetuning outperformed standard pre-trained representations. We see more
modest gains on larger datasets, most likely because we do not need to refine
representations beforehand if the fine-tuning dataset is large. On smaller
datasets, we see substantial gains. For example, the pre-finetuned RoBERTa-
BASE model on RTE improves by close to 9 points, rivaling the RoBERTa-Large
accuracy, while the pre-finetuned RoBERTa-Large model gets new state-of-the-
art on RTE rivaling models an order of magnitude larger than it.
We do not improve just over sentence prediction tasks but on every set of
tasks that we measured. For example, we reach a new state of the art on the
HellaSwag dataset previously achieved by utilizing a new fine-tuning approach.
Our methods do not increase parameter count or any complexity measures but are
quite successful at refining features and preparing them for downstream fine-
tuning.
### 4.1 Finetuning Outside of Pre-Finetuning Domain
We also report the performance on tasks not included in the pre-finetuning
data. To do so, we finetune our models on a set of tasks including (1) ANLI
Nie et al. (2019) and Hyperpartisan Kiesel et al. (2019) for classification,
(2) Arxiv He et al. (2019), PubMed Cohan et al. (2018), Sharma et al. (2019)
for summarization, and (3) Chunking, Constituency Parsing and Part-Of-Speech
tagging for structured prediction from the Penn Treebank dataset Marcus et al.
(1993). We present these results in Table 5 and Table 6.
We see that the MUPPET variants of our models out-perform the baselines
consistently across task type and dataset. As a special case we do an in depth
analysis of the MUPPET variant of RoBERTa on the notoriously tough ANLI
dataset and see the same pattern. Pre-finetuned models consistently outperform
their base counterparts.
## 5 Understanding Multi-Task at Scale
### 5.1 Importance of Scale
The first axis we would like to explore is the scale on which multi-task
learning is done. Previous work, such as T5 and MT-DNN, focused on the MTL
scale of around a dozen datasets. To the best of our knowledge, our paper has
the largest MTL set up to date. Accordingly, we are interested in empirically
exploring the effects of scaling up the number of datasets to the
representations learned during MTL.
We pre-finetune a collection of RoBERTa-Base models with varying numbers of
datasets. We train seven models, six uniformly chosen between 10 and 40,
ensuring that at each point, the selected datasets are a superset of the
datasets from prior points. The last model is fully trained on all datasets.
Concretely given two models trained with a different number of datasets
$a,b:a>b$, model $a$ will contain all datasets used to train model $b$ and
more.
For each version of the model, we fine-tune five datasets and plot the results
in Figure 1. Specifically, we finetune STS-B (Cer et al., 2017), BoolQ (Clark
et al., 2019), RACE (Lai et al., 2017), SQuAD (Rajpurkar et al., 2016a), and MNLI
(Williams et al., 2018a). We include these five datasets in the first MTL run
(10 datasets) to remove any bias from adding them in a later stage.
Figure 1: We plot the RoBERTa evaluation accuracy of five datasets: RTE,
BoolQ, RACE, SQuAD, and MNLI, across various scales of multi-task learning
measured in the number of datasets. We notice that performance initially
degrades until a critical point is reached regarding the number of the
datasets used by the MTL framework for all but one dataset. Past this critical
point, our representations improve over the original RoBERTa model.
We see a couple of interesting patterns. First, for individual tasks such as
RTE (Bentivogli et al., 2009), increasing the pre-finetuning scale
monotonically improves performance. This is aligned with other papers that
have seen benefits from first training on MNLI (Williams et al., 2018a) and
then fine-tuning on RTE (Liu et al., 2019b). For other datasets, we see that
doing MTL in the $<15$ datasets regime is detrimental for end-task fine-
tuning. This is also aligned with other empirical observations, i.e., T5
reported that doing MTL did not improve over only fine-tuning. Nevertheless,
it seems that as we increase the number of tasks past some critical point, our
pre-trained representations become more generalizable. Furthermore, although
dependent on the dataset, this critical point is roughly between 10 and 25
tasks.
This suggests that previously observed MTL limitations were not fundamental
and can instead be attributed to the lack of sufficient scale.
### 5.2 Importance of Heterogeneous Batches
Another critical factor to getting MTL to learn generalizable representations
is the method through which MTL is implemented, specifically the selection of
batches. To better quantify this trend, we experimented with three balancing
schemes: dataset homogeneous, batch homogeneous, and batch heterogeneous.
We refer to dataset homogeneous as selecting batches from datasets
sequentially. So we first train on dataset $A$, then train on dataset $B$,
etc. On the other hand, batch homogeneous refers to selecting batches
containing only data from the same task; therefore, all gradients are from the
same dataset. This is implemented by selecting all datasets, batching on a
dataset level, and selecting those same batches randomly during training.
Finally, batch heterogeneous refers to a single update containing a batch from
multiple different datasets spanning different tasks. We implemented this by
first creating homogeneous sub-batches, calculating loss per sub-batch per GPU,
and then aggregating across GPUs manifesting in a gradient update that
contains various datasets and, therefore, tasks.
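The three strategies can be sketched as a toy illustration (not the distributed GPU implementation described above); `data` maps a dataset name to its examples, and each batch is a list of (dataset, example) pairs:

```python
import random

# Toy illustration of the three batching strategies compared in §5.2.

def dataset_homogeneous(data):
    # Train on datasets sequentially: all of A, then all of B, ...
    return [[(name, ex) for ex in exs] for name, exs in data.items()]

def batch_homogeneous(data, batch_size, seed=0):
    # Every batch comes from a single dataset; batch order is random.
    rng = random.Random(seed)
    batches = []
    for name, exs in data.items():
        for i in range(0, len(exs), batch_size):
            batches.append([(name, ex) for ex in exs[i:i + batch_size]])
    rng.shuffle(batches)
    return batches

def batch_heterogeneous(data, batch_size, seed=0):
    # A single batch may mix examples from several datasets and tasks.
    rng = random.Random(seed)
    pool = [(name, ex) for name, exs in data.items() for ex in exs]
    rng.shuffle(pool)
    return [pool[i:i + batch_size] for i in range(0, len(pool), batch_size)]
```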
To dissect the importance of heterogeneous batches, we train a RoBERTa-Base
model on 35 randomly selected tasks using the three data selection
methodologies outlined above. We then fine-tune these three models on the same
five data-sets mentioned in the previous section.
Figure 2: We plot the evaluation accuracy of RoBERTa across five datasets:
RTE, BoolQ, RACE, SQuAD, and MNLI, using our three batching strategies for
multi-task: Dataset Homogeneous, Batch Homogeneous, Batch Heterogeneous. The
use of heterogenous batches outperforms other batching strategies by a
significant margin and highlights the importance of implementing MTL with the
correct batching strategy.
We present our results in Figure 2. We see the importance of properly defining
a batching strategy for effective multi-task learning. Our findings are also
consistent with Aghajanyan et al. (2020) which saw that sequential training of
data-sets degrades generalizable representations.
### 5.3 Low Resource Experiments
We noticed in Section §4 that data-sets with smaller data-set sizes tended to
improve more from MTL training. To strengthen this hypothesis, we look at two
factors: the scale of pre-finetuning and the scale of fine-tuning (size of
fine-tuning data-set).
We select three data-sets that were not used in pre-finetuning in Section
§5.1. We also select nine partitions per fine-tuning data-set, which is
sampled uniformly between 10% of the data-set and 100% of the data-set.
Selecting the low-resource splits was done through random sampling.
We then fine-tune every low-resource split with every pre-finetuning
checkpoint from Section §5.1. We plot the heatmaps generated from these runs
in Figure 3.
Figure 3: We fine-tune every low-resource split with every pre-finetuning
checkpoint from Section §5.1 for two datasets not available in any of the pre-
finetuning MTL datasets; QNLI (Rajpurkar et al., 2016b) and CoLA (Warstadt et
al., 2018). The pre-finetuning scale is reported in terms of the number of
datasets.
Multiple patterns emerge. First, we see a clear visualization of the critical
point mentioned when doing pre-finetuning. As we increase the scale of MTL,
better representations are available for downstream finetuning. Furthermore,
we see that pre-finetuned models at a larger scale are much more data-
efficient than standard pre-trained models.
Specifically looking at the 34/40 pre-finetuning scale on Figure 3 we see that
we reach higher evaluation accuracies much sooner than the base RoBERTa model
(row 0).
## 6 Conclusion
In this work, we propose pre-finetuning, a stage after pre-training to further
refine representations before end-task finetuning. We show that we can
effectively learn more robust representations through multi-task learning
(MTL) at scale. Our MTL models outperform their vanilla pre-trained
counterparts across several tasks. Our analysis shows that properly scaling
MTL with heterogeneous batches and loss scaling is critical to leveraging
better representations. We also show a critical point regarding the number of
tasks when doing multi-task learning, where fewer tasks degrade
representations compared to the pre-trained model, but more tasks than this
point improve representations.
We discussed a practical setting in which doing this massive multi-task
learning is stable and effective through simple loss scaling and heterogeneous
batches. With our method, we improve upon prior state of the art methods for
RTE Bentivogli et al. (2009) and HellaSWAG Zellers et al. (2019), as well as
improve upon vanilla pre-trained representations for MNLI Williams et al.
(2018a), SQuAD Rajpurkar et al. (2016a), BoolQ Clark et al. (2019), and Common
Sense QA Talmor et al. (2018). We also examine our MTL model's performance with
low resource experiments. We show that on held-out datasets, leveraging
representations from our pre-finetuned models with 34-40 tasks, we reach
higher evaluation accuracies with much less data than the RoBERTa model.
## References
* Aghajanyan et al. (2020) Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. 2020. Better fine-tuning by reducing representational collapse. _arXiv preprint arXiv:2008.03156_.
* Amini et al. (2019) Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math word problem solving with operation-based formalisms. _arXiv preprint arXiv:1905.13319_.
* Bentivogli et al. (2009) Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In _TAC_.
* Bowman et al. (2015) Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher Manning. 2015. A large annotated corpus for learning natural language inference. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)_. Association for Computational Linguistics.
* Cer et al. (2017) Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017\. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. _arXiv preprint arXiv:1708.00055_.
* Chen et al. (2018) Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. 2018. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In _International Conference on Machine Learning_ , pages 794–803. PMLR.
* Clark et al. (2019) Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In _Proceedings of NAACL-HLT 2019_.
* Clark et al. (2020) Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. _arXiv preprint arXiv:2003.10555_.
* Clark et al. (2018) Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. _arXiv preprint arXiv:1803.05457_.
* Cohan et al. (2018) Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. _arXiv preprint arXiv:1804.05685_.
* De Marneffe et al. (2019) Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The CommitmentBank: Investigating projection in naturally occurring discourse. To appear in proceedings of Sinn und Bedeutung 23. Data can be found at https://github.com/mcdm/CommitmentBank/.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* (13) Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. ERASER: A benchmark to evaluate rationalized NLP models.
* Dolan and Brockett (2005) William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In _Proceedings of the Third International Workshop on Paraphrasing (IWP2005)_.
* Dua et al. (2019) Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In _Proc. of NAACL_.
* Eidelman (2019) Vladimir Eidelman. 2019. Billsum: A corpus for automatic summarization of us legislation. In _Proceedings of the 2nd Workshop on New Frontiers in Summarization_ , pages 48–56.
* Fabbri et al. (2019) Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. _arXiv preprint arXiv:1906.01749_.
* He et al. (2019) Jun He, Liqun Wang, Liu Liu, Jiao Feng, and Hao Wu. 2019. Long document classification from local word glimpses via recurrent attention learning. _IEEE Access_ , 7:40707–40718.
* Hermann et al. (2015) Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In _Advances in neural information processing systems_ , pages 1693–1701.
* Hovy et al. (2001) Eduard Hovy, Laurie Gerber, Ulf Hermjakob, Chin-Yew Lin, and Deepak Ravichandran. 2001. Toward semantics-based answer pinpointing. In _Proceedings of the First International Conference on Human Language Technology Research_.
* Huang et al. (2019) Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning. _arXiv preprint arXiv:1909.00277_.
* Iyer et al. (2017) Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. 2017. First quora dataset release: Question pairs.
* Joshi et al. (2017) Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. _arXiv e-prints_ , page arXiv:1705.03551.
* Khashabi et al. (2018) Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 252–262.
* Khashabi et al. (2020) Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format boundaries with a single qa system.
* Khot et al. (2018) Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In _AAAI_.
* Kiesel et al. (2019) Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. Semeval-2019 task 4: Hyperpartisan news detection. In _Proceedings of the 13th International Workshop on Semantic Evaluation_ , pages 829–839.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_.
* Kwiatkowski et al. (2019) Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. _Transactions of the Association of Computational Linguistics_.
* Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. _arXiv preprint arXiv:1704.04683_.
* Levesque et al. (2012) Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In _Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning_. Citeseer.
* Levesque et al. (2011) Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The Winograd schema challenge. In _AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning_ , volume 46, page 47.
* Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. _arXiv preprint arXiv:1910.13461_.
* Li and Roth (2002) Xin Li and Dan Roth. 2002. Learning question classifiers. In _COLING 2002: The 19th International Conference on Computational Linguistics_.
* Liu et al. (2020) X. Liu, Hao Cheng, Pengcheng He, W. Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020. Adversarial training for large neural language models. _ArXiv_ , abs/2004.08994.
* Liu et al. (2019a) Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. _arXiv preprint arXiv:1901.11504_.
* Liu et al. (2019b) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_.
* Maas et al. (2011) Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_ , pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
* Marcus et al. (1993) Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank.
* McCoy et al. (2019) R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. _CoRR_ , abs/1902.01007.
* Mihaylov et al. (2018) Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. _arXiv preprint arXiv:1809.02789_.
* Mosbach et al. (2020) Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. _arXiv preprint arXiv:2006.04884_.
* Narayan et al. (2018) Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. _arXiv preprint arXiv:1808.08745_.
* Nie et al. (2019) Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial nli: A new benchmark for natural language understanding. _arXiv preprint arXiv:1910.14599_.
* Pang and Lee (2005) Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In _Proceedings of the ACL_.
* Pilehvar and Camacho-Collados (2019) Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: The word-in-context dataset for evaluating context-sensitive meaning representations. In _Proceedings of NAACL-HLT_.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. _OpenAI Blog_ , 1(8):9.
* Raffel et al. (2019) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. _arXiv preprint arXiv:1910.10683_.
* Rajpurkar et al. (2016a) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016a. Squad: 100,000+ questions for machine comprehension of text. _arXiv preprint arXiv:1606.05250_.
* Rajpurkar et al. (2016b) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016b. Squad: 100,000+ questions for machine comprehension of text. _arXiv preprint arXiv:1606.05250_.
* Roemmele et al. (2011) Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In _2011 AAAI Spring Symposium Series_.
* Seo et al. (2016) Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. _arXiv preprint arXiv:1611.01603_.
* Sharma et al. (2019) Eva Sharma, Chen Li, and Lu Wang. 2019. Bigpatent: A large-scale dataset for abstractive and coherent summarization. _arXiv preprint arXiv:1906.03741_.
* Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 conference on empirical methods in natural language processing_ , pages 1631–1642.
* Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. _The journal of machine learning research_ , 15(1):1929–1958.
* Szegedy et al. (2015) Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Rethinking the inception architecture for computer vision. _CoRR_ , abs/1512.00567.
* Talmor et al. (2018) Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. _arXiv preprint arXiv:1811.00937_.
* Wang et al. (2019) Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. _arXiv preprint 1905.00537_.
* Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
* Wang et al. (2021) Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu. 2021. Info{bert}: Improving robustness of language models from an information theoretic perspective. In _International Conference on Learning Representations_.
* Warstadt et al. (2018) Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2018. Neural network acceptability judgments. _arXiv preprint arXiv:1805.12471_.
* Welbl et al. (2017) Johannes Welbl, Nelson F Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. _arXiv preprint arXiv:1707.06209_.
* Williams et al. (2018a) Adina Williams, Nikita Nangia, and Samuel Bowman. 2018a. A broad-coverage challenge corpus for sentence understanding through inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1112–1122. Association for Computational Linguistics.
* Williams et al. (2018b) Adina Williams, Nikita Nangia, and Samuel Bowman. 2018b. A broad-coverage challenge corpus for sentence understanding through inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1112–1122. Association for Computational Linguistics.
* Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In _Advances in neural information processing systems_ , pages 5753–5763.
* Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In _Conference on Empirical Methods in Natural Language Processing (EMNLP)_.
* Yi et al. (2015) Yang Yi, Yih Wen-tau, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. pages 2013–2018.
* Zellers et al. (2018) Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. _arXiv preprint arXiv:1808.05326_.
* Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? _arXiv preprint arXiv:1905.07830_.
* Zhang and Tetreault (2019) Rui Zhang and Joel Tetreault. 2019. This email could save your life: Introducing the task of email subject line generation. In _Proceedings of The 57th Annual Meeting of the Association for Computational Linguistics_ , Florence, Italy.
* Zhang et al. (2018) Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. Record: Bridging the gap between human and machine commonsense reading comprehension. _arXiv preprint arXiv:1810.12885_.
* Zhang et al. (2015a) Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015a. Character-level Convolutional Networks for Text Classification. _arXiv:1509.01626 [cs]_.
* Zhang et al. (2015b) Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015b. Character-level convolutional networks for text classification. In _NIPS_.
## Appendix A Appendices
### A.1 Datasets Used
1. CoLA Warstadt et al. (2018)
2. SST-2 Socher et al. (2013)
3. MRPC Dolan and Brockett (2005)
4. QQP Iyer et al. (2017)
5. MNLI Williams et al. (2018a)
6. QNLI Rajpurkar et al. (2016b)
7. RTE Bentivogli et al. (2009)
8. WNLI Levesque et al. (2012)
9. SuperGLUE Wang et al. (2019)
10. BoolQ Clark et al. (2019)
11. MultiRC Khashabi et al. (2018)
12. WiC Pilehvar and Camacho-Collados (2019)
13. WSC Levesque et al. (2011)
14. CB De Marneffe et al. (2019)
15. COPA Roemmele et al. (2011)
16. AG News Zhang et al. (2015b)
17. IMDB Maas et al. (2011)
18. MultiNLI Williams et al. (2018b)
19. SNLI Bowman et al. (2015)
20. HANS McCoy et al. (2019)
21. Rotten Tomatoes Pang and Lee (2005)
22. Yelp Polarity Zhang et al. (2015a)
23. ERASER MultiRC DeYoung et al.
24. WikiQA Yi et al. (2015)
25. TREC Li and Roth (2002); Hovy et al. (2001)
26. SciTail Khot et al. (2018)
27. CNN Daily Mail Hermann et al. (2015)
28. BillSum Eidelman (2019)
29. XSUM Narayan et al. (2018)
30. AESLC Zhang and Tetreault (2019)
31. Multi-News Fabbri et al. (2019)
32. MathQA Amini et al. (2019)
33. OpenBookQA Mihaylov et al. (2018)
34. SWAG Zellers et al. (2018)
35. HellaSWAG Zellers et al. (2019)
36. RACE Lai et al. (2017)
37. CommonsenseQA Talmor et al. (2018)
38. Cosmos QA Huang et al. (2019)
39. AI2 ARC - Easy Clark et al. (2018)
40. AI2 ARC - Challenge Clark et al. (2018)
41. SciQ Welbl et al. (2017)
42. SQuAD Rajpurkar et al. (2016a)
43. NQ Kwiatkowski et al. (2019)
44. DROP Dua et al. (2019)
45. ReCoRD Zhang et al. (2018)
46. HotpotQA Yang et al. (2018)
47. TriviaQA Joshi et al. (2017)
### A.2 Hyperparameters
# The $(l,r)$-Stirling numbers: a combinatorial approach.
Belbachir Hacène USTHB, Faculty of Mathematics, RECITS Laboratory
P.O. Box 32
El Alia, 16111, Bab Ezzouar, Algiers
Algeria<EMAIL_ADDRESS><EMAIL_ADDRESS>and Djemmada Yahia
USTHB, Faculty of Mathematics, RECITS Laboratory
P.O. Box 32
El Alia, 16111, Bab Ezzouar, Algiers
Algeria<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
This work deals with a new generalization of $r$-Stirling numbers using
$l$-tuple of permutations and partitions called $(l,r)$-Stirling numbers of
both kinds. We study various properties of these numbers using combinatorial
interpretations and symmetric functions. Also, we give a limit representation
of the multiple zeta function using $(l,r)$-Stirling of the first kind.
###### Key words and phrases:
Permutations, Set partitions, Stirling numbers, Symmetric functions,
$r$-Stirling numbers.
###### 2010 Mathematics Subject Classification:
Primary 11B73, 11B83; Secondary 05A05, 05A18, 05E05.
## 1\. Introduction
Let $\sigma$ be a permutation of the set $[n]=\\{1,2,\dots,n\\}$ having $k$
cycles $c_{1},c_{2},\dots,c_{k}$. The cycle leaders set of $\sigma$, denoted
$cl(\sigma)$, is the set of the smallest elements of the cycles, i.e.
$cl(\sigma)=\\{\min c_{1},\min c_{2},\dots,\min c_{k}\\}.$
In the same way, let $\pi$ be a partition of the set $[n]=\\{1,2,\dots,n\\}$
into $k$ blocks $b_{1},b_{2},\dots,b_{k}$. The block leaders set of $\pi$,
denoted $bl(\pi)$, is the set of the smallest elements of the blocks, i.e.
$bl(\pi)=\\{\min b_{1},\min b_{2},\dots,\min b_{k}\\}.$
###### Example.
* •
For $n=6$, the permutation $\sigma=(13)(245)(6)$ has the set of cycle leaders
$cl(\sigma)=\\{1,2,6\\}$.
* •
For $n=7$, the partition $\pi=1,2,4|3,5,7|6$ has the set of block leaders
$bl(\pi)=\\{1,3,6\\}$.
It is well known that the Stirling numbers of the first kind, denoted
${n\brack k}$, count the number of all permutations of $[n]$ having exactly
$k$ cycles, and Stirling numbers of the second kind, denoted ${n\brace k}$,
count the number of all partitions of $[n]$ having exactly $k$ blocks.
One of the most interesting generalizations of Stirling numbers is the
$r$-Stirling numbers of both kinds introduced by Broder [6]. Analogously to the
classical Stirling numbers of both kinds, the author showed that the
$r$-Stirling number of the first kind ${n\brack k}_{r}$ (resp. the second
kind ${n\brace k}_{r}$) counts the number of permutations $\sigma$ (resp.
partitions $\pi$) having exactly $k$ cycles (resp. $k$ blocks) such that the
first $r$ elements $1,2,\dots,r$ lead.
Dumont, in [8], gave the first interpretation of the “central factorial”
numbers of the second kind $U(n,k)$, defined by the recurrence
(1.1) $U(n,k)=U(n-1,k-1)+k^{2}U(n-1,k),\qquad\text{for }0<k\leq n;$
where $U(n,k)=T(2n,2k)$.
Then, using the notion of quasi-permutations, Foata and Han [9] showed that
$U(n,k)$ counts the number of pairs $(\pi_{1},\pi_{2})$ of partitions of $[n]$
into $k$ blocks such that $bl(\pi_{1})=bl(\pi_{2})$.
In this work, we give an extension of the $r$-Stirling numbers of both kinds
by considering $l$-tuples of partitions (and permutations) in Dumont’s
partition model [8, 9].
This paper is organized as follows. In Section 2 and Section 4, we introduce
the $(l,r)$-Stirling numbers of both kinds. Some properties are given, such as
recurrences, orthogonality, generating functions, a relation between
$(l,r)$-Stirling numbers and Bernoulli polynomials via Faulhaber sums, and
symmetric functions. In Section 7, we show the relations between the multiple
zeta function and the $(l,r)$-Stirling numbers. Finally, in Section 8, we
discuss some remarks which connect these numbers to rook polynomials [3].
## 2\. The $(l,r)$-Stirling numbers of both kinds
Let us consider the following generalization:
###### Definition 2.1.
The $(l,r)$-Stirling number of the first kind ${n\brack k}_{r}^{(l)}$ counts
the number of $l$-tuples of permutations
$(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$ of $[n]$ having exactly $k$ cycles
such that the first elements $1,2,\dots,r$ lead, and
$cl(\sigma_{1})=cl(\sigma_{2})=\dots=cl(\sigma_{l}).$
###### Definition 2.2.
The $(l,r)$-Stirling number of the second kind ${n\brace k}_{r}^{(l)}$ counts
the number of $l$-tuples of partitions $(\pi_{1},\pi_{2},\dots,\pi_{l})$ of
$[n]$ having exactly $k$ blocks such that the first elements $1,2,\dots,r$
lead, and
$bl(\pi_{1})=bl(\pi_{2})=\dots=bl(\pi_{l}).$
###### Theorem 2.3.
The $(l,r)$-Stirling numbers of the first kind satisfy the following recurrences
(2.1) ${n\brack k}_{r}^{(l)}={n-1\brack k-1}_{r}^{(l)}+(n-1)^{l}{n-1\brack
k}_{r}^{(l)},\qquad\text{for }n>r$
and
(2.2) ${n\brack k}_{r}^{(l)}=\frac{1}{(r-1)^{l}}\left({n\brack
k-1}_{r-1}^{(l)}-{n\brack k-1}_{r}^{(l)}\right),\qquad\text{for }n\geq r>1.$
with boundary conditions ${n\brack k}_{r}^{(l)}=0,$ for $n<r$; and ${n\brack
k}_{r}^{(l)}=\delta_{k,r}$, for $n=r$.
###### Proof.
An $l$-tuple of permutations $(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$ of the
set $[n]$ having $k$ cycles such that the first elements $1,2,\dots,r$ are in
distinct cycles and $cl(\sigma_{1})=cl(\sigma_{2})=\dots=cl(\sigma_{l})$ is
obtained either by:
* •
Inserting the $n$th element after any element in each permutation of an
$l$-tuple $(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$ of permutations of the set
$[n-1]$ having $k$ cycles such that the first elements $1,2,\dots,r$ are in
distinct cycles and $cl(\sigma_{1})=cl(\sigma_{2})=\dots=cl(\sigma_{l})$; hence
there are $(n-1)^{l}{n-1\brack k}_{r}^{(l)}$ choices.
* •
Letting the $n$th element form its own cycle in each permutation of
$(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$; the remaining $n-1$ elements must
then be $(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$-permuted into $(k-1)$
cycles under the preceding conditions, hence there are ${n-1\brack
k-1}_{r}^{(l)}$ choices.
This correspondence yields the first recurrence.
For the second recurrence, we use the double counting principle. Let us count
the number of $(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$-permutations of the
set $[n]$ having $(k-1)$ cycles such that $1,\dots,r-1$ are cycle leaders but
$r$ is not, with $cl(\sigma_{1})=cl(\sigma_{2})=\dots=cl(\sigma_{l})$; this
count can be obtained in two ways:
* •
We count the $(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$-permutations of the
set $[n]$ having $(k-1)$ cycles such that $1,\dots,r-1$ are cycle leaders then
we exclude from them the
$(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$-permutations having $r$ as cycle
leader. That gives
${n\brack k-1}_{r-1}^{(l)}-{n\brack k-1}_{r}^{(l)},$
* •
Or we count the $(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$-permutations of the
set $[n]$ having $k$ cycles such that $1,\dots,r$ are cycle leaders and then
append the cycle having $r$ as leader at the end of a cycle having a
smaller leader. We have $(r-1)$ choices in each permutation. That gives
$(r-1)^{l}{n\brack k}_{r}^{(l)},$
from the two ways of counting we get the result. ∎
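Recurrence (2.1), together with the boundary conditions, determines the numbers completely and is easy to iterate. A minimal sketch in Python (the function name is ours, not part of the paper); for $l=1$ and $r=1$ it reproduces the classical unsigned Stirling numbers of the first kind:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1_lr(n, k, r, l):
    """(l,r)-Stirling number of the first kind via recurrence (2.1)."""
    if n < r:                    # boundary: 0 for n < r
        return 0
    if n == r:                   # boundary: delta_{k,r} for n = r
        return 1 if k == r else 0
    return stirling1_lr(n - 1, k - 1, r, l) + (n - 1) ** l * stirling1_lr(n - 1, k, r, l)

# l = 1, r = 1: classical unsigned Stirling numbers of the first kind
print(stirling1_lr(4, 2, 1, 1))   # 11
```

One can also check the special case ${n\brack r}_{r}^{(l)}=(r(r+1)\cdots(n-1))^{l}$ numerically, e.g. `stirling1_lr(5, 2, 2, 2) == (2*3*4)**2`.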
###### Theorem 2.4.
The $(l,r)$-Stirling numbers of the second kind satisfy the following recurrences
(2.3) ${n\brace k}_{r}^{(l)}={n-1\brace k-1}_{r}^{(l)}+k^{l}{n-1\brace
k}_{r}^{(l)},\qquad\text{for }n>r$
and
(2.4) ${n\brace k}_{r}^{(l)}={n\brace k}_{r-1}^{(l)}-(r-1)^{l}{n-1\brace
k}_{r-1}^{(l)},\qquad\text{for }n\geq r>1.$
with boundary conditions ${n\brace k}_{r}^{(l)}=0$, for $n<r$; and ${n\brace
k}_{r}^{(l)}=\delta_{k,r}$, for $n=r$.
###### Proof.
As in Theorem 2.3, the $(\pi_{1},\pi_{2},\dots,\pi_{l})$-partitions of the set
$[n]$ into $k$ blocks such that $1,2,\dots,r$ first elements are in distinct
blocks and $bl(\pi_{1})=bl(\pi_{2})=\dots=bl(\pi_{l})$ is either obtained
from:
* •
Inserting the $n$th element into a block of each partition of an $l$-tuple
$(\pi_{1},\pi_{2},\dots,\pi_{l})$ of partitions of the set $[n-1]$ into $k$
blocks such that the first elements $1,2,\dots,r$ are in distinct blocks and
$bl(\pi_{1})=bl(\pi_{2})=\dots=bl(\pi_{l})$; hence there are $k^{l}{n-1\brace
k}_{r}^{(l)}$ choices (the position of the $n$th element within a block does
not matter).
* •
Letting the $n$th element form its own block in each partition of
$(\pi_{1},\pi_{2},\dots,\pi_{l})$; the remaining $n-1$ elements must then be
$(\pi_{1},\pi_{2},\dots,\pi_{l})$-partitioned into $(k-1)$ blocks under the
preceding conditions, hence there are ${n-1\brace k-1}_{r}^{(l)}$ choices.
For Identity (2.4), we use the double counting principle to count the number of
$(\pi_{1},\pi_{2},\dots,\pi_{l})$-partitions of $[n]$ into $k$ blocks such that
$1,2,\dots,r-1$ are block leaders but $r$ is not, with
$bl(\pi_{1})=bl(\pi_{2})=\dots=bl(\pi_{l})$; this count can be obtained in two
ways:
* •
We count the $(\pi_{1},\pi_{2},\dots,\pi_{l})$-partitions of the set $[n]$
into $k$ blocks such that $1,\dots,r-1$ are block leaders then we exclude from
them the $(\pi_{1},\pi_{2},\dots,\pi_{l})$-partitions having $r$ as block
leader, with $bl(\pi_{1})=bl(\pi_{2})=\dots=bl(\pi_{l})$. That gives
${n\brace k}_{r-1}^{(l)}-{n\brace k}_{r}^{(l)},$
* •
Or we count the $(\pi_{1},\pi_{2},\dots,\pi_{l})$-partitions of the set
$[n]\setminus\\{r\\}$ into $k$ blocks such that $1,\dots,r-1$ are block leaders
and then include the element $r$ in any block having a leader smaller than
$r$. We have $(r-1)$ choices in each partition of
$(\pi_{1},\pi_{2},\dots,\pi_{l})$-partitions; that gives
$(r-1)^{l}{n-1\brace k}_{r-1}^{(l)},$
from the two ways of counting we get the result.
∎
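Recurrence (2.3) can be iterated in the same way (again, the function name is ours); for $l=1$ and $r=1$ it reproduces the classical Stirling numbers of the second kind:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2_lr(n, k, r, l):
    """(l,r)-Stirling number of the second kind via recurrence (2.3)."""
    if n < r:                    # boundary: 0 for n < r
        return 0
    if n == r:                   # boundary: delta_{k,r} for n = r
        return 1 if k == r else 0
    return stirling2_lr(n - 1, k - 1, r, l) + k ** l * stirling2_lr(n - 1, k, r, l)

# l = 1, r = 1: classical Stirling numbers of the second kind
print(stirling2_lr(4, 2, 1, 1))   # 7
```

The special case ${n\brace r}_{r}^{(l)}=r^{l(n-r)}$ can also be checked, e.g. `stirling2_lr(5, 2, 2, 3) == 2**9`.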
###### Remark 2.5.
Using the previous recurrences it is easy to get the following special cases
(2.5) $\displaystyle{n\brack
r}_{r}^{(l)}=r^{l}(r+1)^{l}\cdots(n-2)^{l}(n-1)^{l}=(r^{\overline{n-r}})^{l},\qquad\text{for
}n\geq r$ and (2.6) $\displaystyle{n\brace
r}_{r}^{(l)}=r^{l(n-r)},\qquad\text{for }n\geq r.$
## 3\. Orthogonality of $(l,r)$-Stirling numbers pair
###### Theorem 3.1.
For $n\geq k\geq 0$ and all positive integers $l$, we have the two
orthogonality relations below
(3.1) $\sum_{j}{n\brack j}_{r}^{(l)}{j\brace k}_{r}^{(l)}(-1)^{j}=\begin{cases}(-1)^{n}\delta_{n,k},&\text{for }n\geq r;\\\ 0,&\text{for }n<r\end{cases}$
and
(3.2) $\sum_{j}{j\brack n}_{r}^{(l)}{k\brace j}_{r}^{(l)}(-1)^{j}=\begin{cases}(-1)^{n}\delta_{n,k},&\text{for }n\geq r;\\\ 0,&\text{for }n<r.\end{cases}$
###### Proof.
Let us start with Identity (3.1). The proof goes by induction on $n$:
* •
For $n<r$ the assertion is obvious.
* •
For $n=r$,
$\begin{split}\sum_{j}{r\brack j}_{r}^{(l)}{j\brace
k}_{r}^{(l)}(-1)^{j}&={r\brace
k}_{r}^{(l)}(-1)^{r}=(-1)^{r}\delta_{k,r}.\end{split}$
* •
For $n>r$, Theorem 2.3 and the induction hypothesis implies that
$\begin{split}\sum_{j}{n\brack j}_{r}^{(l)}{j\brace
k}_{r}^{(l)}(-1)^{j}&=\sum_{j}\left({n-1\brack
j-1}_{r}^{(l)}+(n-1)^{l}{n-1\brack j}_{r}^{(l)}\right){j\brace
k}_{r}^{(l)}(-1)^{j}\\\ &=(n-1)^{l}(-1)^{n-1}\delta_{n-1,k}+\sum_{j}{n-1\brack
j-1}_{r}^{(l)}{j\brace k}_{r}^{(l)}(-1)^{j},\\\ \end{split}$
and from Theorem 2.4, we get
$\begin{split}\sum_{j}{n\brack j}_{r}^{(l)}{j\brace
k}_{r}^{(l)}(-1)^{j}&=(n-1)^{l}(-1)^{n-1}\delta_{n-1,k}-(-1)^{n-1}\delta_{n-1,k-1}-(k)^{l}(-1)^{n-1}\delta_{n-1,k}\\\
&=(-1)^{n}\delta_{n,k}.\end{split}$
For Identity (3.2), we proceed by induction on $k$ in the same way as in the
previous proof. ∎
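The orthogonality relation (3.1) is easy to check numerically by iterating the recurrences of Theorems 2.3 and 2.4. A small sketch (function names are ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s1(n, k, r, l):  # first kind, recurrence (2.1)
    if n < r:
        return 0
    if n == r:
        return 1 if k == r else 0
    return s1(n - 1, k - 1, r, l) + (n - 1) ** l * s1(n - 1, k, r, l)

@lru_cache(maxsize=None)
def s2(n, k, r, l):  # second kind, recurrence (2.3)
    if n < r:
        return 0
    if n == r:
        return 1 if k == r else 0
    return s2(n - 1, k - 1, r, l) + k ** l * s2(n - 1, k, r, l)

def lhs_31(n, k, r, l):
    """Left-hand side of (3.1): sum over j of [n j] {j k} (-1)^j."""
    return sum((-1) ** j * s1(n, j, r, l) * s2(j, k, r, l) for j in range(r, n + 1))

# (3.1): equals (-1)^n * delta_{n,k} for n >= r
print(lhs_31(5, 5, 2, 2), lhs_31(5, 3, 2, 2))   # -1 0
```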
## 4\. Properties via symmetric functions
Let $x_{1},x_{2},\dots,x_{n}$ be $n$ variables. We denote, respectively, by
$e_{k}(x_{1},x_{2},\dots,x_{n})$ and $h_{k}(x_{1},x_{2},\dots,x_{n})$ the
elementary symmetric function and the complete homogeneous symmetric function
of degree $k$ in $n$-variables given for $n\geq k\geq 1$, by
(4.1) $e_{k}(x_{1},x_{2},\dots,x_{n})=\sum_{1\leq i_{1}<i_{2}<\cdots<i_{k}\leq
n}x_{i_{1}}\cdots x_{i_{k}}$
and
(4.2) $h_{k}(x_{1},x_{2},\dots,x_{n})=\sum_{1\leq i_{1}\leq
i_{2}\leq\cdots\leq i_{k}\leq n}x_{i_{1}}\cdots x_{i_{k}}.$
In particular,
$e_{0}(x_{1},x_{2},\dots,x_{n})=h_{0}(x_{1},x_{2},\dots,x_{n})=1$.
The generating functions of the symmetric functions are given by
(4.3) $E(t)=\sum_{k\geq
0}e_{k}(x_{1},x_{2},\cdots,x_{n})t^{k}=\prod_{i=1}^{n}(1+x_{i}t)$
and
(4.4) $H(t)=\sum_{k\geq
0}h_{k}(x_{1},x_{2},\cdots,x_{n})t^{k}=\prod_{i=1}^{n}(1-x_{i}t)^{-1}.$
For more details about symmetric functions we refer readers to [2, 13, 15] and
the references therein.
Let us now give some results linked to the symmetric functions and their
generating functions.
###### Theorem 4.1.
The $(l,r)$-Stirling numbers of the first kind and the elementary symmetric
functions are linked by
(4.5) ${n+1\brack n+1-k}_{r}^{(l)}=e_{k}(r^{l},\dots,n^{l}),$
equivalently
(4.6) ${n\brack k}_{r}^{(l)}=e_{n-k}(r^{l},\dots,(n-1)^{l}).$
###### Proof.
In each $(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$-permutation having $(n-k)$
cycles in which $\\{1,\dots,r\\}$ lead, the elements
$\\{1,2,\dots,r,y_{r+1},\dots,y_{n-k}\\}$ each lead a cycle and the elements
$\\{x_{1},x_{2},\dots,x_{k}\\}$ do not lead, where
$r<y_{r+1}<\dots<y_{n-k}\leq n$ and $r<x_{1}<x_{2}<\dots<x_{k}\leq n$.
To construct all $(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$-permutations
having $(n-k)$ cycles where $\\{1,\dots,r\\}$ lead, we proceed as follows
* •
Construct $(n-k)$ cycles having only one element from $\\{1,2,\dots,r,$
$y_{r+1},\dots,y_{n-k}\\}$, i.e.
* •
Insert $x_{1}$ after an element smaller than $x_{1}$ in some cycle; we have
$(x_{1}-1)$ ways of inserting $x_{1}$. Then insert $x_{2}$ after an element
smaller than $x_{2}$: $(x_{2}-1)$ choices, and so on. In total, we have
$(x_{1}-1)(x_{2}-1)\cdots(x_{k}-1)$ ways to construct a permutation.
* •
Repeat the process with each permutation
$\sigma\in\\{\sigma_{1},\dots,\sigma_{l}\\}$, so we have
$(x_{1}-1)^{l}(x_{2}-1)^{l}\cdots(x_{k}-1)^{l}$ ways of construction.
* •
Summing over all possible sets of numbers $\\{x_{1},x_{2},\dots,x_{k}\\}$,
the total number of ways to construct
$(\sigma_{1},\sigma_{2},\dots,\sigma_{l})$-permutations having $(n-k)$ cycles
with $\\{1,\dots,r\\}$ leading is
$\begin{split}{n\brack n-k}_{r}^{(l)}&=\sum_{r<x_{1}<x_{2}<\cdots<x_{k}\leq
n}(x_{1}-1)^{l}(x_{2}-1)^{l}\cdots(x_{k}-1)^{l}\\\ &=\sum_{r\leq
x_{1}<x_{2}<\cdots<x_{k}\leq n-1}x_{1}^{l}x_{2}^{l}\cdots x_{k}^{l}\\\
&=e_{k}(r^{l},\dots,(n-1)^{l}).\end{split}$
∎
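Identity (4.6) can be tested numerically against recurrence (2.1). A self-contained sketch (names are ours):

```python
from functools import lru_cache
from itertools import combinations
from math import prod

@lru_cache(maxsize=None)
def s1(n, k, r, l):  # (l,r)-Stirling first kind, recurrence (2.1)
    if n < r:
        return 0
    if n == r:
        return 1 if k == r else 0
    return s1(n - 1, k - 1, r, l) + (n - 1) ** l * s1(n - 1, k, r, l)

def e(k, xs):  # elementary symmetric function (4.1)
    return sum(prod(c) for c in combinations(xs, k))

n, r, l = 6, 2, 2
for k in range(r, n + 1):
    # (4.6): [n k]_r^(l) = e_{n-k}(r^l, ..., (n-1)^l)
    assert s1(n, k, r, l) == e(n - k, tuple(i ** l for i in range(r, n)))
```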
###### Theorem 4.2.
The $(l,r)$-Stirling numbers of the second kind and the complete homogeneous
symmetric functions are linked by
(4.7) ${n+k\brace n}_{r}^{(l)}=h_{k}(r^{l},\dots,n^{l}).$
###### Proof.
Let us count the number of $(\pi_{1},\pi_{2},\dots,\pi_{l})$-partitions of
$[n+k]$ into $n$ blocks in which $\\{1,2,\dots,r\\}$ are leaders. First, we
denote by $\\{y_{1},y_{2},\dots,y_{k}\\}$ the elements that are not leaders,
where $y_{1}<y_{2}<\dots<y_{k}$. Let $x_{i}$ be the number of leaders smaller
than $y_{i}$, $i\in\\{1,\dots,k\\}$; it is clear that $r\leq x_{1}\leq
x_{2}\leq\cdots\leq x_{k}\leq n$.
The construction of such partition goes as follows
* •
Construct a partition into $n$ blocks of
$[n+k]\setminus\\{y_{1},y_{2},\dots,y_{k}\\}$ in which $\\{1,2,\dots,r\\}$ are
leaders, i.e.
$\\{1\\}\\{2\\}\dots\\{r\\}\\{z_{r+1}\\}\dots\\{z_{n}\\}.$
* •
Insert the elements $\\{y_{1},y_{2},\dots,y_{k}\\}$ into the $n$ blocks. It is
clear that $y_{i}$ can belong only to a block having a leader smaller than
$y_{i}$; we have $x_{1}\cdot x_{2}\cdots x_{k}$ ways to do so.
* •
Repeat the process with each partition $\pi\in\\{\pi_{1},\dots,\pi_{l}\\}$, so
we have $(x_{1})^{l}(x_{2})^{l}\dots(x_{k})^{l}$ ways of construction.
* •
Summing over all possible sets of numbers $\\{x_{1},x_{2},\dots,x_{k}\\}$,
the total number of ways to construct
$(\pi_{1},\pi_{2},\dots,\pi_{l})$-partitions of $[n+k]$ having $n$ blocks with
$\\{1,\dots,r\\}$ leading is
$\begin{split}{n+k\brace n}_{r}^{(l)}&=\sum_{r\leq x_{1}\leq
x_{2}\leq\cdots\leq n}x_{1}^{l}x_{2}^{l}\cdots x_{k}^{l}\\\
&=h_{k}(r^{l},\dots,n^{l}).\end{split}$
∎
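The classical special case $l=r=1$ of Theorem 4.2 is the known identity ${n+k\brace n}=h_{k}(1,2,\dots,n)$ for ordinary Stirling numbers of the second kind, which can be verified independently against their standard recurrence. A minimal Python check (helper names ours):

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k evaluated at xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def stirling2(n, k):
    """Ordinary Stirling number of the second kind {n, k}, via the
    standard recurrence {n,k} = {n-1,k-1} + k*{n-1,k}."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

# Classical case of Theorem 4.2 (l = r = 1): {n+k, n} = h_k(1, 2, ..., n)
for n in range(1, 7):
    for k in range(5):
        assert stirling2(n + k, n) == h(k, range(1, n + 1))
```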
## 5\. Generating functions
Now, we can use the symmetric functions to construct the generating functions
for the $(l,r)$-Stirling of both kinds.
###### Theorem 5.1.
The generating function for the $(l,r)$-Stirling numbers of the first kind is
(5.1) $\sum_{k}{n\brack
k}_{r}^{(l)}z^{k}=z^{r}\prod_{i=r}^{n-1}\left(z+i^{l}\right)=z^{r}\left(z+r^{l}\right)\left(z+(r+1)^{l}\right)\cdots\left(z+(n-1)^{l}\right).$
###### Proof.
From Theorem 4.1 and the generating function (4.3) we obtain
(5.2) $\begin{split}\sum_{k}{n\brack
k}_{r}^{(l)}z^{k}&=z^{n}\sum_{k}e_{k}(r^{l},\dots,(n-1)^{l})(z^{-1})^{k}\\\
&=z^{n}\prod_{i=r}^{n-1}\left(1+\frac{i^{l}}{z}\right)\\\
&=z^{r}\prod_{i=r}^{n-1}(z+i^{l}).\end{split}$
∎
###### Theorem 5.2.
The generating function for the $(l,r)$-Stirling numbers of the second kind is
(5.3) $\sum_{n\geq k}{n\brace
k}_{r}^{(l)}z^{n}=z^{k}\left(\prod_{i=r}^{k}(1-zi^{l})\right)^{-1}=\frac{z^{k}}{(1-zr^{l})(1-z(r+1)^{l})\cdots(1-zk^{l})}.$
###### Proof.
From Theorem 4.2 and the generating function of homogeneous symmetric function
(4.4), we obtain
$\begin{split}\sum_{n\geq k}{n\brace k}_{r}^{(l)}z^{n}&=\sum_{j\geq
0}{k+j\brace k}_{r}^{(l)}z^{k+j}=z^{k}\sum_{j\geq
0}h_{j}(r^{l},\dots,k^{l})z^{j}=z^{k}\left(\prod_{i=r}^{k}(1-zi^{l})\right)^{-1}.\end{split}$
∎
In the following theorem we use symmetric functions to obtain a convolution
formula for the $(l,r)$-Stirling numbers of both kinds.
###### Theorem 5.3.
For all positive integers $l$, $n$, $k$ and $r$ with $(n\geq k\geq r)$, we
have
(5.4) $\sum_{\begin{array}[]{c}i_{0}+2i_{1}+\cdots+2^{l}i_{l}=k\\\
i_{0},\cdots,i_{l}\geq 0\end{array}}{n+i_{l}\brace
n}_{r}^{(2^{l})}\prod_{s=0}^{l-1}{n+1\brack
n+1-i_{s}}_{r}^{(2^{s})}={n+k\brace n}_{r}.$
###### Proof.
Let us consider the generating function of the complete homogeneous symmetric
function (4.4). From that we have
$\begin{split}\sum_{k\geq
0}h_{k}(x_{1},\dots,x_{n})z^{k}&=\prod_{i=1}^{n}\frac{1}{(1-x_{i}z)}\\\
&=\prod_{i=1}^{n}\frac{1}{(1-x_{i}z)}\prod_{s=0}^{l-1}\left(\frac{1+x_{i}^{2^{s}}z^{2^{s}}}{1+x_{i}^{2^{s}}z^{2^{s}}}\right)\\\
&=\prod_{i=1}^{n}\frac{1}{(1-x_{i}^{2^{l}}z^{2^{l}})}\prod_{s=0}^{l-1}\left(1+x_{i}^{2^{s}}z^{2^{s}}\right)\\\
&=\sum_{k\geq
0}h_{k}(x_{1}^{2^{l}},\dots,x_{n}^{2^{l}})z^{2^{l}k}\prod_{s=0}^{l-1}\sum_{k\geq
0}e_{k}(x_{1}^{2^{s}},\dots,x_{n}^{2^{s}})z^{2^{s}k}\\\ &=\sum_{k\geq
0}h_{k}(x_{1}^{2^{l}},\dots,x_{n}^{2^{l}})z^{2^{l}k}\sum_{k\geq
0}e_{k}(x_{1},\dots,x_{n})z^{k}\sum_{k\geq
0}e_{k}(x_{1}^{2},\dots,x_{n}^{2})z^{2k}\cdots\sum_{k\geq
0}e_{k}(x_{1}^{2^{l-1}},\dots,x_{n}^{2^{l-1}})z^{2^{l-1}k}\\\ &=\sum_{k\geq
0}\left(\sum_{\begin{array}[]{c}i_{0}+2i_{1}+\dots+2^{l}i_{l}=k;\\\
i_{0},\dots,i_{l}\geq
0.\end{array}}h_{i_{l}}(x_{1}^{2^{l}},\dots,x_{n}^{2^{l}})\prod_{s=0}^{l-1}e_{i_{s}}(x_{1}^{2^{s}},\dots,x_{n}^{2^{s}})\right)z^{k}.\end{split}$
From Theorem 4.1 and Theorem 4.2 and by comparing the coefficients of $z^{k}$
of the two sides the result holds true. ∎
The simplest case of the previous theorem is the corollary below, which
generalizes a result of Broder [6].
###### Corollary 5.4.
For $l=1$, we have
(5.5) $\sum_{i=0}^{\lfloor k/2\rfloor}{n+i\brace n}_{r}^{(2)}{n+1\brack
n+1+2i-k}_{r}={n+k\brace n}_{r}.$
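In symmetric-function form (via Theorems 4.1 and 4.2), Eq. (5.5) reads $\sum_{i}h_{i}(x_{1}^{2},\dots,x_{m}^{2})\,e_{k-2i}(x_{1},\dots,x_{m})=h_{k}(x_{1},\dots,x_{m})$ with $x=(r,r+1,\dots,n)$, encoding the factorization $\frac{1}{1-xt}=\frac{1+xt}{1-x^{2}t^{2}}$. A short numerical check in Python (helper names ours):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k at xs."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k at xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# Symmetric-function form of Corollary 5.4:
#   sum_i h_i(x_1^2,...,x_m^2) * e_{k-2i}(x_1,...,x_m) = h_k(x_1,...,x_m)
for r, n in [(1, 5), (2, 6)]:
    xs = list(range(r, n + 1))
    sq = [x * x for x in xs]
    for k in range(7):
        lhs = sum(h(i, sq) * e(k - 2 * i, xs) for i in range(k // 2 + 1))
        assert lhs == h(k, xs)
```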
## 6\. The $(l,r)$-Stirling numbers, the sum powers and Bernoulli polynomials
Recall that, for every integer $n\geq 0$, the Bernoulli polynomials, denoted
$B_{n}(x)$, are defined by
(6.1) $\sum_{n=0}^{\infty}B_{n}(x)\frac{t^{n}}{n!}=\frac{te^{xt}}{e^{t}-1}.$
The sum of the powers of natural numbers is closely related to the Bernoulli
polynomials $B_{n}(x)$. Jacobi [12, 16] gives the following identity relating
the sum of powers and the Bernoulli polynomials
(6.2) $\sum_{j=1}^{n}j^{m}=\frac{B_{m+1}(n+1)-B_{m+1}(0)}{m+1}.$
The following theorem gives the relation between $(l,r)$-Stirling of both
kinds and Bernoulli polynomials.
###### Theorem 6.1.
For all positive integers $n$, $k$ and $l$, we have
(6.3) $\sum_{j=0}^{k}(-1)^{j}(j+1){n+1\brack n-j}^{(l)}{n+k-j\brace
n}^{(l)}=\frac{B_{lk+l+1}(n+1)-B_{lk+l+1}(0)}{lk+l+1}.$
###### Proof.
On the one hand, Jacobi's identity (6.2) with $m=lk$ gives
(6.4) $\sum_{j=1}^{n}(j^{l})^{k}=\frac{B_{lk+1}(n+1)-B_{lk+1}(0)}{lk+1};$
on the other hand, we have
$H(t)=\sum_{k\geq
0}h_{k}(1^{l},2^{l},\dots,n^{l})t^{k}=\prod_{j=1}^{n}\frac{1}{(1-j^{l}t)}$
and
$E(t)=\sum_{k\geq
0}e_{k}(1^{l},2^{l},\dots,n^{l})t^{k}=\prod_{j=1}^{n}(1+j^{l}t).$
From the obvious observation that $H(t)=1/E(-t)$, we obtain
(6.5) $\frac{d}{dt}\ln{H(t)}=\frac{H^{\prime}(t)}{H(t)}=H(t)E^{\prime}(-t),$
but
(6.6)
$\frac{d}{dt}\ln{H(t)}=\sum_{j=1}^{n}\frac{j^{l}}{(1-j^{l}t)}=\sum_{k\geq
0}\sum_{j=1}^{n}j^{l(k+1)}t^{k}.$
Then from equations (6.5) and (6.6), we get
(6.7) $\begin{split}\sum_{k\geq
0}\sum_{j=1}^{n}j^{l(k+1)}t^{k}&=H(t)E^{\prime}(-t)\\\ &=\left(\sum_{k\geq
0}h_{k}(1^{l},\dots,n^{l})t^{k}\right)\left(\sum_{k\geq
1}k(-1)^{k-1}e_{k}(1^{l},\dots,n^{l})t^{k-1}\right).\\\ \end{split}$
Taking the Cauchy product and equating the coefficients of $t^{k}$ gives
(6.8) $\begin{split}\sum_{j=1}^{n}j^{l(k+1)}&=\sum_{j=0}^{k}(j+1)(-1)^{j}e_{j+1}(1^{l},\dots,n^{l})h_{k-j}(1^{l},\dots,n^{l}).\end{split}$
Replacing the symmetric functions by Stirling numbers via Theorem 4.1 and
Theorem 4.2, and comparing with Equation (6.4) (with $k$ replaced by $k+1$),
we get the result.
∎
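The core of the proof, Eq. (6.8), is a Newton-type power-sum identity and can be checked directly, without Bernoulli polynomials: $\sum_{j=1}^{n}j^{l(k+1)}=\sum_{j=0}^{k}(-1)^{j}(j+1)\,e_{j+1}(X)\,h_{k-j}(X)$ with $X=(1^{l},2^{l},\dots,n^{l})$. A short exact-arithmetic check in Python (helper names ours):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k at xs."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k at xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# Eq. (6.8): sum_{j<=n} j^{l(k+1)} = sum_{j=0}^k (-1)^j (j+1) e_{j+1}(X) h_{k-j}(X)
for n, l in [(5, 1), (4, 2), (3, 3)]:
    X = [i**l for i in range(1, n + 1)]
    for k in range(5):
        lhs = sum(j**(l * (k + 1)) for j in range(1, n + 1))
        rhs = sum((-1)**j * (j + 1) * e(j + 1, X) * h(k - j, X)
                  for j in range(k + 1))
        assert lhs == rhs
```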
## 7\. Multiple zeta function and $(l,r)$-Stirling numbers of the first kind
For any ordered sequence of positive integers $i_{1},i_{2},\dots,i_{k}$, the
multiple zeta function was introduced by Hoffman [11] and, independently, by
Zagier [17] via the infinite sum
(7.1)
$\zeta(i_{1},i_{2},\dots,i_{k})=\sum_{0<j_{1}<j_{2}<\cdots<j_{k}}\frac{1}{j_{1}^{i_{1}}j_{2}^{i_{2}}\cdots
j_{k}^{i_{k}}}.$
Recently, the multiple zeta function has been studied quite intensively by
many authors in various fields of mathematics and physics (see [4, 5, 7, 11,
17, 19]). Here we give a relation between $(l,r)$-Stirling numbers of the
first kind and the multiple zeta function.
###### Theorem 7.1.
For all positive integers $n$, $k$, $l$ and $r$ with $(n\geq k\geq r)$, we
have
(7.2) $\begin{split}{n+1\brack
k+1}_{r}^{(l)}&=\left(\frac{n!}{(r-1)!}\right)^{l}\sum_{j_{k}=k}^{n}\sum_{j_{k-1}=k-1}^{j_{k}-1}\cdots\sum_{j_{r}=r}^{j_{r+1}-1}\frac{1}{\left(j_{r}j_{r+1}\cdots
j_{k}\right)^{l}}\\\
&=\left(\frac{n!}{(r-1)!}\right)^{l}\sum_{r-1<j_{r}<j_{r+1}<\cdots<j_{k}\leq
n}\frac{1}{(j_{r}j_{r+1}\cdots j_{k})^{l}}.\end{split}$
###### Proof.
By Theorem 2.3 we have ${n\brack k}_{r}^{(l)}={n-1\brack
k-1}_{r}^{(l)}+(n-1)^{l}{n-1\brack k}_{r}^{(l)}$. Iterating this recurrence,
we obtain
(7.3) ${n\brack
k}_{r}^{(l)}=\left((n-1)!\right)^{l}\sum_{j=k-1}^{n-1}\frac{1}{(j!)^{l}}{j\brack
k-1}_{r}^{(l)}.$
For $k={r}$, from (2.5) and (7.3) we obtain
(7.4) ${n\brack
r}_{r}^{(l)}=(r^{\overline{n-r}})^{l}=\left(\frac{(n-1)!}{(r-1)!}\right)^{l}.$
For $k={r+1}$, from (7.3) and (7.4) we obtain
(7.5) $\begin{split}{n\brack
r+1}_{r}^{(l)}&=\left((n-1)!\right)^{l}\sum_{j=r}^{n-1}\frac{1}{(j!)^{l}}{j\brack
r}_{r}^{(l)}\\\
&=\left(\frac{(n-1)!}{(r-1)!}\right)^{l}\sum_{j=r}^{n-1}\left(\frac{(j-1)!}{j!}\right)^{l}\\\
&=\left(\frac{(n-1)!}{(r-1)!}\right)^{l}\sum_{j=r}^{n-1}\frac{1}{j^{l}}.\end{split}$
For $k=r+2$, from (7.4) and (7.5) we obtain
(7.6) ${n\brack
r+2}_{r}^{(l)}=\left(\frac{(n-1)!}{(r-1)!}\right)^{l}\sum_{j=r+1}^{n-1}\sum_{i=r}^{j-1}\frac{1}{(ij)^{l}},$
and iterating the process for $k\in\\{r+3,r+4,\dots\\}$ yields the result. ∎
###### Proposition 7.2.
For $r=1$, we have
(7.7) $\lim_{n\to\infty}\frac{1}{\left(n!\right)^{l}}{\displaystyle{n+1\brack
k+1}}^{(l)}=\zeta(\\{l\\}_{k}),$
where $\\{l\\}_{n}=(\underbrace{l,l,\dots,l}_{n\text{ times}}).$
###### Proof.
The proposition follows immediately from the definition (7.1) of the multiple
zeta function as an infinite sum and from Theorem 7.1 with $r=1$. ∎
###### Corollary 7.3.
For $k\geq 1$, we have
* •
For $l=2$
(7.8) $\lim_{n\to\infty}\frac{1}{\left(n!\right)^{2}}\displaystyle{n+1\brack
k+1}^{(2)}=\frac{\pi^{2k}}{(2k+1)!}.$
* •
For $l=4$
(7.9) $\lim_{n\to\infty}\frac{1}{\left(n!\right)^{4}}\displaystyle{n+1\brack
k+1}^{(4)}=\frac{4(2\pi)^{4k}}{(4k+2)!}\left(\frac{1}{2}\right)^{2k+1}.$
* •
For $l=6$
(7.10) $\lim_{n\to\infty}\frac{1}{\left(n!\right)^{6}}\displaystyle{n+1\brack
k+1}^{(6)}=\frac{6(2\pi)^{6k}}{(6k+3)!}.$
* •
For $l=8$
(7.11) $\lim_{n\to\infty}\frac{1}{\left(n!\right)^{8}}\displaystyle{n+1\brack
k+1}^{(8)}=\frac{\pi^{8k}}{(8k+4)!}2^{8k+3}\left(\left(1+\frac{1}{\sqrt{2}}\right)^{4k+2}+\left(1-\frac{1}{\sqrt{2}}\right)^{4k+2}\right).$
###### Proof.
Authors in [7] give the following special values of multiple zeta function
$\displaystyle\zeta(\\{2\\}_{n})$ $\displaystyle=$
$\displaystyle\frac{\pi^{2n}}{(2n+1)!},$ $\displaystyle\zeta(\\{4\\}_{n})$
$\displaystyle=$
$\displaystyle\frac{4(2\pi)^{4n}}{(4n+2)!}\left(\frac{1}{2}\right)^{2n+1},$
$\displaystyle\zeta(\\{6\\}_{n})$ $\displaystyle=$
$\displaystyle\frac{6(2\pi)^{6n}}{(6n+3)!},$ $\displaystyle\zeta(\\{8\\}_{n})$
$\displaystyle=$
$\displaystyle\frac{\pi^{8n}}{(8n+4)!}2^{8n+3}\left(\left(1+\frac{1}{\sqrt{2}}\right)^{4n+2}+\left(1-\frac{1}{\sqrt{2}}\right)^{4n+2}\right),$
the corollary is a consequence of the previous special cases and Proposition
7.2. ∎
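Proposition 7.2 can be illustrated numerically for $l=2$: by Theorem 7.1 with $r=1$, the normalized Stirling number ${n+1\brack k+1}^{(2)}/(n!)^{2}$ equals $e_{k}(1/1^{2},\dots,1/n^{2})$, which converges to $\zeta(\\{2\\}_{k})=\pi^{2k}/(2k+1)!$. The Python sketch below (our computation) checks $k=1$ (Euler's identity) and $k=2$, using Newton's identity $2e_{2}=e_{1}^{2}-p_{2}$ to evaluate $e_{2}$ of the reciprocals.

```python
from math import pi

# Partial sums of e_1 and e_2 at (1/1^2, ..., 1/n^2):
#   e_1 -> zeta(2) = pi^2/6   (Eq. (7.8), k = 1)
#   e_2 -> zeta(2,2) = pi^4/120   (Eq. (7.8), k = 2)
n = 200_000
e1 = sum(1 / j**2 for j in range(1, n + 1))   # power sum p_1 = e_1 here
p2 = sum(1 / j**4 for j in range(1, n + 1))   # power sum p_2
e2 = (e1 * e1 - p2) / 2                        # Newton: 2 e_2 = e_1^2 - p_2

assert abs(e1 - pi**2 / 6) < 1e-4
assert abs(e2 - pi**4 / 120) < 1e-4
```

The partial-sum error decays like $1/n$, so $n=2\cdot 10^{5}$ comfortably meets the tolerance.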
## 8\. Remarks
* •
The $(l,r)$-Stirling numbers give another combinatorial view of rook
polynomials of higher dimensions on triangular boards [3, 18] via set
partitions.
* •
In this work we give a limit representation of the multiple zeta function
using $(l,r)$-Stirling numbers.
* •
We can obtain the well-known Euler identity $\zeta(2)=\frac{\pi^{2}}{6}$ from
Equation (7.8) for $k=1$.
### Acknowledgment
We would like to thank the anonymous reviewers for their suggestions and
comments which improved the quality of the present paper. The paper was
partially supported by the DGRSDT grant №:C0656701.
## References
* [1] Ablinger, J., and Blümlein, J. (2013). Harmonic sums, polylogarithms, special numbers, and their generalizations. In Computer Algebra in Quantum Field Theory (pp. 1-32). Springer, Vienna.
* [2] Abramowitz, M., and Stegun, I. A. (1964). Handbook of mathematical functions with formulas, graphs, and mathematical tables (Vol. 55). US Government printing office.
* [3] Alayont, F., and Krzywonos, N. (2013). Rook polynomials in three and higher dimensions. Involve, a Journal of Mathematics, 6(1), 35-52.
* [4] Arakawa, T., and Kaneko, M. (1999). Multiple zeta values, poly-Bernoulli numbers, and related zeta functions. Nagoya Mathematical Journal, 153, 189-209.
* [5] Bradley, D. M. (2005). Partition identities for the multiple zeta function. In Zeta functions, topology and quantum physics (pp. 19-29). Springer, Boston, MA.
* [6] Broder, A. Z. (1984). The r-Stirling numbers. Discrete Mathematics, 49(3), 241-259.
* [7] Borwein, J. M., and Bradley, D. M. (1997). Evaluations of k-fold Euler/Zagier sums: a compendium of results for arbitrary k. The Electronic Journal of Combinatorics, 4(2), R5.
* [8] Dumont, D. (1974). Interprétations combinatoires des nombres de Genocchi. Duke Mathematical Journal, 41(2), 305-318.
* [9] Foata, D., and Han, G. N. (2000). Principes de combinatoire classique. Lecture notes, Strasbourg.
* [10] Gelineau, Y., and Zeng, J. (2010). Combinatorial interpretations of the Jacobi-Stirling numbers. The Electronic Journal of Combinatorics, 17(R70), 1.
* [11] Hoffman, M. (1992). Multiple harmonic series. Pacific Journal of Mathematics, 152(2), 275-290.
* [12] Jacobi, C. G. J. (1834). De usu legitimo formulae summatoriae Maclaurinianae. Journal für die reine und angewandte Mathematik, 1834(12), 263-272.
* [13] Macdonald, I. G. (1998). Symmetric functions and Hall polynomials. Oxford university press.
* [14] Merca, M., and Cuza, A. I. (2012). A special case of the generalized Girard-Waring formula. Journal of Integer Sequences, 15(2), 3.
* [15] Merca, M. (2016). New convolutions for complete and elementary symmetric functions. Integral Transforms and Special Functions, 27(12), 965-973.
* [16] Srivastava, H. M. (2000, July). Some formulas for the Bernoulli and Euler polynomials at rational arguments. In Mathematical Proceedings of the Cambridge Philosophical Society (Vol. 129, No. 1, pp. 77-84).
* [17] Zagier, D. (1994). Values of zeta functions and their applications. In First European Congress of Mathematics Paris, July 6–10, 1992 (pp. 497-512). Birkhäuser Basel.
* [18] Zindle, B. (2007). Rook polynomials for chessboards of two and three dimensions. Master thesis.
* [19] Zudilin, W. W. (2003). Algebraic relations for multiple zeta values. Russian Mathematical Surveys, 58(1), 1-29.
# Complementary Composite Minimization, Small Gradients in General Norms, and
Applications to Regression Problems
Jelena Diakonikolas
University of Wisconsin-Madison
<EMAIL_ADDRESS>. Supported by the NSF grant CCF-2007757 and by the Office of
the Vice Chancellor for Research and Graduate Education at the University of
Wisconsin–Madison with funding from the Wisconsin Alumni Research Foundation.
Cristóbal Guzmán
Pontificia Universidad Católica de Chile
<EMAIL_ADDRESS>. Partially supported by INRIA through the INRIA Associate
Teams project, CORFO through the Clover 2030 Engineering Strategy -
14ENI-26862, and ANID – Millennium Science Initiative Program – NCN17_059.
###### Abstract
Composite minimization is a powerful framework in large-scale convex
optimization, based on decoupling of the objective function into terms with
structurally different properties and allowing for more flexible algorithmic
design. In this work, we introduce a new algorithmic framework for
complementary composite minimization, where the objective function decouples
into a (weakly) smooth and a uniformly convex term. This particular form of
decoupling is pervasive in statistics and machine learning, due to its link to
regularization.
The main contributions of our work are summarized as follows. First, we
introduce the problem of complementary composite minimization in general
normed spaces; second, we provide a unified accelerated algorithmic framework
to address broad classes of complementary composite minimization problems; and
third, we prove that the algorithms resulting from our framework are near-
optimal in most of the standard optimization settings. Additionally, we show
that our algorithmic framework can be used to address the problem of making
the gradients small in general normed spaces. As a concrete example, we obtain
a nearly-optimal method for the standard $\ell_{1}$ setup (small gradients in
the $\ell_{\infty}$ norm), essentially matching the bound of Nesterov (2012)
that was previously known only for the Euclidean setup. Finally, we show that
our composite methods are broadly applicable to a number of regression
problems, leading to complexity bounds that are either new or match the best
existing ones.
## 1 Introduction
_No function can simultaneously be both smooth and strongly convex with
respect to an $\ell_{p}$ norm
and have a dimension-independent condition number, unless $p=2$._
This is a basic fact from convex analysis111More generally, it is known that
the existence of a continuous uniformly convex function with growth bounded by
the squared norm implies that the space has an equivalent $2$-uniformly convex
norm (Borwein et al., 2009); furthermore, using duality (Zalinescu, 1983), we
conclude that the existence of a smooth and strongly convex function implies
that the space has equivalent $2$-uniformly convex and $2$-uniformly smooth
norms, a rare property for a normed space (the most notable examples of spaces
that are simultaneously $2$-uniformly convex and $2$-uniformly smooth are
Hilbert spaces; see e.g., Ball et al. (1994) for related definitions and more
details). and the primary reason why in the existing literature smooth and
strongly convex optimization is normally considered only for Euclidean (or,
slightly more generally, Hilbert) spaces. In fact, it is not only that moving
away from $p=2$ the condition number becomes dimension-dependent, but that the
dependence on the dimension is polynomial for all examples of functions we
know of, unless $p$ is trivially close to two. Thus, it is tempting to assert
that dimension-independent linear convergence (i.e., with logarithmic
dependence on the inverse accuracy $1/\epsilon$) is reserved for Euclidean
spaces, which has long been common wisdom among optimization researchers.
Contrary to this wisdom, we show that it is in fact possible to attain linear
convergence even in $\ell_{p}$ (or, more generally, in normed vector) spaces,
as long as the objective function can be decomposed into two functions with
complementary properties. In particular, we show that if the objective
function can be written in the following _complementary composite_ form
$\bar{f}(\mathbf{x})=f(\mathbf{x})+\psi(\mathbf{x}),$ (1)
where $f$ is convex and $L$-smooth w.r.t. a (not necessarily Euclidean) norm
$\|\cdot\|$ and $\psi$ is $m$-strongly convex w.r.t. the same norm and
“simple,” meaning that the optimization problems of the form
$\min_{\mathbf{x}}\left\langle\mathbf{z},\mathbf{x}\right\rangle+\psi(\mathbf{x})$
(2)
can be solved efficiently for any linear functional $\mathbf{z},$ then
$\bar{f}(\mathbf{x})$ can be minimized to accuracy $\epsilon>0$ in
$O\Big{(}\sqrt{\frac{L}{m}}\log(\frac{L\phi(\bar{\mathbf{x}}^{*})}{\epsilon})\Big{)}$
iterations, where
$\bar{\mathbf{x}}^{*}=\operatorname*{argmin}_{\mathbf{x}}\bar{f}(\mathbf{x})$.
As in other standard first-order iterative methods, each iteration requires
one call to the gradient oracle of $f$ and one call to a solver for the
problem from Eq. (2). To the best of our knowledge, such a result was
previously known only for Euclidean spaces (Nesterov, 2013).
This is the basic variant of our result. We also consider more general setups
in which $f$ is only weakly smooth (with Hölder-continuous gradients) and
$\psi$ is uniformly convex (see Section 1.2 for specific definitions and
useful properties). We refer to the resulting objective functions $\bar{f}$ as
_complementary composite objective functions_ (as functions $f$ and $\psi$
that constitute $\bar{f}$ have complementary properties) and to the resulting
optimization problems as _complementary composite optimization problems._ The
algorithmic framework that we consider for complementary composite
optimization in Section 2 is near-optimal (optimal up to logarithmic or poly-
logarithmic factors) in terms of iteration complexity in most of the standard
optimization settings, which we certify by providing near-matching oracle
complexity lower bounds in Section 4. We now summarize some further
implications of our results.
#### Small gradients in $\ell_{p}$ and $\mathscr{S}_{p}$ norms.
The original motivation for complementary composite optimization in our work
comes from making the gradients of smooth functions small in non-Euclidean
norms. This is a fundamental optimization question, whose study was initiated
by Nesterov (2012) and that is still far from being well-understood. Prior to
this work, (near)-optimal algorithms were known only for the Euclidean
($\ell_{2}$) and $\ell_{\infty}$ setups.222In the $\ell_{\infty}$ setup, a
non-Euclidean variant of gradient descent is optimal in terms of iteration
complexity.
For the Euclidean setup, there are two main results: due to Kim and Fessler
(2020) and due to Nesterov (2012). The algorithm of Kim and Fessler (2020) is
iteration-complexity-optimal; however, the methodology by which this algorithm
was obtained is crucially Euclidean, as it relies on numerical solutions to
semidefinite programs, whose formulation is made possible by assuming that the
norm of the space is inner-product-induced. An alternative approach, due to
Nesterov (2012), is to apply the fast gradient method to a regularized
function
$\bar{f}(\mathbf{x})=f(\mathbf{x})+\frac{\lambda}{2}\|\mathbf{x}-\mathbf{x}_{0}\|_{2}^{2}$
for a sufficiently small $\lambda>0,$ where $f$ is the smooth function whose
gradient we hope to minimize. Under the appropriate choice of $\lambda>0,$ the
resulting algorithm is near-optimal (optimal up to a logarithmic factor).
As discussed earlier, applying the fast gradient method directly to a
regularized function as in Nesterov (2012) is out of the question for $p\neq 2,$ as the
resulting regularized objective function cannot simultaneously be smooth and
strongly convex w.r.t. $\|\cdot\|_{p}$ without its condition number growing
with the problem dimension. This is where the framework of complementary
composite optimization proposed in our work comes into play. Our result also
generalizes to normed matrix spaces endowed with $\mathscr{S}_{p}$
(Schatten-$p$) norms.333$\mathscr{S}_{p}$ norm of a matrix $\mathbf{A}$ is
defined as the $\ell_{p}$ norm of $\mathbf{A}$’s singular values. As a
concrete example, our approach leads to near-optimal complexity results in the
$\ell_{1}$ and $\mathscr{S}_{1}$ setups, where the gradient is minimized in
the $\ell_{\infty}$, respectively, $\mathscr{S}_{\infty}$, norm.
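As a concrete illustration of the Schatten-norm definition in footnote 3, the sketch below (helper names ours) computes $\mathscr{S}_{p}$ norms of a $2\times 2$ real matrix from the closed form of its singular values, obtained from $\operatorname{tr}(\mathbf{A}^{\top}\mathbf{A})$ and $|\det\mathbf{A}|$.

```python
from math import sqrt, inf

def schatten_2x2(A, p):
    """Schatten-p norm of a 2x2 real matrix A = [[a, b], [c, d]]: the l_p
    norm of its two singular values, computed in closed form from
    trace(A^T A) = s1^2 + s2^2 and |det(A)| = s1 * s2."""
    (a, b), (c, d) = A
    t = a*a + b*b + c*c + d*d
    det = abs(a*d - b*c)
    gap = sqrt(max(t*t - 4*det*det, 0.0))
    s1 = sqrt((t + gap) / 2)
    s2 = sqrt(max((t - gap) / 2, 0.0))
    if p == inf:
        return s1                              # spectral norm
    return (s1**p + s2**p) ** (1 / p)

A = [[3.0, 0.0], [0.0, -4.0]]                  # singular values {4, 3}
assert abs(schatten_2x2(A, 2) - 5.0) < 1e-12   # S_2 = Frobenius norm
assert abs(schatten_2x2(A, inf) - 4.0) < 1e-12 # S_inf = spectral norm
assert abs(schatten_2x2(A, 1) - 7.0) < 1e-12   # S_1 = nuclear norm
```

For a diagonal matrix, the Schatten-$p$ norm is simply the $\ell_{p}$ norm of the diagonal entries, as the example confirms.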
It is important to note here why strongly convex regularizers are not
sufficient in general and what motivated us to consider the more general
uniformly convex functions $\psi$. While for $p\in(1,2]$ choosing
$\psi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|_{p}^{2}$ (which is $(p-1)$-strongly
convex w.r.t. $\|\cdot\|_{p};$ see Nemirovskii and Yudin (1983); Juditsky and
Nemirovski (2008)) is sufficient, when $p>2$ the strong convexity parameter of
$\frac{1}{2}\|\cdot\|_{p}^{2}$ w.r.t. $\|\cdot\|_{p}$ is bounded above by
$1/d^{1-\frac{2}{p}}$. This is not only true for
$\frac{1}{2}\|\cdot\|_{p}^{2}$, but for any convex function bounded above by a
constant on a unit $\ell_{p}$-ball; see e.g., (d’Aspremont et al., 2018,
Example 5.1). Thus, in this case, we work with
$\psi(\mathbf{x})=\frac{1}{p}\|\mathbf{x}\|_{p}^{p}$, which is only uniformly
convex.
#### Lower complexity bounds.
We complement the development of algorithms for complementary composite
minimization and minimizing the norm of the gradient with lower bounds for the
oracle complexity of these problems. Our lower bounds leverage recent lower
bounds for weakly smooth convex optimization from Guzmán and Nemirovski
(2015); Diakonikolas and Guzmán (2020). These existing results suffice for
proving lower bounds for minimizing the norm of the gradient, and certify the
near-optimality of our approach for the smooth (i.e., with Lipschitz
continuous gradient) setting, when $1\leq p\leq 2$. On the other hand, proving
lower bounds for complementary composite optimization requires the design of an
appropriate oracle model; namely, one that takes into account that our
algorithm accesses the gradient oracle of $f$ and solves subroutines of type
(2) w.r.t. $\psi$. With this model in place, we combine constructions from
uniformly convex nonsmooth lower bounds (Srebro and Sridharan, 2012; Juditsky
and Nesterov, 2014) with local smoothing (Guzmán and Nemirovski, 2015;
Diakonikolas and Guzmán, 2020) to provide novel lower bounds for complementary
composite minimization. The resulting bounds show that our algorithmic
framework is nearly optimal (up to poly-logarithmic factors w.r.t. dimension,
target accuracy, regularity constants of the objective, and initial distance
to optimum) for all interesting regimes of parameters.
#### Applications to regression problems.
The importance of complementary composite optimization and making the
gradients small in $\ell_{p}$ and $\mathscr{S}_{p}$ norms is perhaps best
exhibited by considering some of the classical regression problems that are
frequently used in statistics and machine learning. It turns out that
considering these regression problems in the appropriate complementary
composite form not only leads to faster algorithms in general, but also
reveals some interesting properties of the solutions. For example,
applications of our framework to the complementary composite form of bridge
regression (a generalization of lasso and ridge regression; see Section 5)
leads to an interesting and well-characterized trade-off between the “goodness
of fit” of the model and the $\ell_{p}$ norm of the regressor.
Section 5 highlights a number of interesting regression problems that can be
addressed using our framework, including lasso, elastic net, (b)ridge
regression, Dantzig selector, $\ell_{p}$ regression (with standard and
correlated errors), and related spectral variants. It is important to note
that a single algorithmic framework suffices for addressing all of these
problems. Most of the results we obtain in this way are either conjectured or
known to be unimprovable.
### 1.1 Further Related Work
Nonsmooth convex optimization problems with the composite structure of the
objective function $\bar{f}(\mathbf{x})=f(\mathbf{x})+\psi(\mathbf{x})$, where
$f$ is smooth and convex, but $\psi$ is nonsmooth, convex, and “simple,” are
well-studied in the optimization literature (Beck and Teboulle, 2009;
Nesterov, 2013; Scheinberg et al., 2014; He et al., 2015; Gasnikov and
Nesterov, 2018, and references therein). The main benefit of exploiting the
composite structure lies in the ability to recover accelerated rates for
nonsmooth problems. Among the most celebrated results in this domain are the
FISTA algorithm of Beck and Teboulle (2009) and a method based on composite
gradient mapping due to Nesterov (2013), which demonstrated that accelerated
convergence (with rate $1/k^{2}$) is possible for this class of problems.
By comparison, the literature on complementary composite minimization is
scarce. For example, Nesterov (2013) proved that in a Euclidean space
complementary composite optimization attains a linear convergence rate. The
algorithm proposed there is different from ours, as it relies on the use of
composite gradient mapping, for which the proximal operator of $\psi$
(solution to problems of the form
$\min_{\mathbf{x}}\\{\psi(\mathbf{x})+\frac{1}{2}\|\mathbf{x}-\mathbf{z}\|_{2}^{2}\\}$
for all $\mathbf{z};$ compare to Eq. (2)) is assumed to be efficiently
computable. In addition to being primarily applicable to Euclidean spaces,
this assumption further restricts the class of functions that can be
efficiently optimized compared to our approach (see Section 2.2 for a further
discussion). Another composite algorithm where linear convergence has been
proved is the celebrated method by Chambolle and Pock (2011), where proximal
steps are taken w.r.t. both terms in the composite model ($f$ and $\psi$). In
the case where both $f$ and $\psi$ are strongly convex, a linear convergence
rate can be established. Notice that this assumption is quite different from
our setting, and that this method was only investigated for the Euclidean
setup.
Beyond the realm of Euclidean norms, linear convergence results have been
established for functions that are _relatively smooth and relatively strongly
convex_ (Bauschke et al., 2017, 2019; Lu et al., 2018). The class of
complementary composite functions does not fall into this category. Further,
while we show accelerated rates (with square-root dependence on the
appropriate notion of the condition number) for complementary composite
optimization, such results are likely not attainable for relatively smooth
relatively strongly convex optimization (Dragomir et al., 2019).444Lower
bounds from (Dragomir et al., 2019) show the impossibility of acceleration for
the relatively smooth setting. This is strong evidence of the impossibility of
acceleration in the relatively smooth and relatively strongly convex setting.
The problem of minimizing the norm of the gradient has become a central
question in optimization and its applications in machine learning, mainly
motivated by nonconvex settings, where the norm of the gradient is useful as a
stopping criterion. However, the norm of the gradient is also useful in
linearly constrained convex optimization problems, where the norm of the
gradient of a Fenchel dual serves to control the feasibility violation
in the primal (Nesterov, 2012). Our approach for minimizing the norm of the
gradient is inspired by the regularization approach proposed by Nesterov
(2012). As discussed earlier, this regularization approach is not directly
applicable to non-Euclidean settings, which is where our complementary composite
framework becomes crucial.
Finally, our work is inspired by and uses fundamental results about the
geometry of high-dimensional normed spaces; in particular, the fact that for
$\ell_{p}$ and $\mathscr{S}_{p}$ spaces the optimal constants of uniform
convexity are known (Ball et al., 1994). These results imply that powers of
the respective norm are uniformly convex, which suffices for our
regularization. Moreover, those functions have explicitly computable convex
conjugates (problems as in Eq. (2) can be solved in closed form), which is
crucial for our algorithms to work.
### 1.2 Notation and Preliminaries
Throughout the paper, we use boldface letters to denote vectors and italic
letters to denote scalars.
We consider real finite-dimensional normed vector spaces $\mathbf{E},$ endowed
with a norm $\|\cdot\|,$ and denoted by $(\mathbf{E},\|\cdot\|).$ The space
dual to $(\mathbf{E},\|\cdot\|)$ is denoted by
$(\mathbf{E}^{*},\|\cdot\|_{*}),$ where $\|\cdot\|_{*}$ is the norm dual to
$\|\cdot\|,$ defined in the usual way by
$\|\mathbf{z}\|_{*}=\sup_{\mathbf{x}\in\mathbf{E}:\|\mathbf{x}\|\leq
1}\left\langle\mathbf{z},\mathbf{x}\right\rangle,$ where
$\left\langle\mathbf{z},\mathbf{x}\right\rangle$ denotes the evaluation of a
linear functional $\mathbf{z}$ on a point $\mathbf{x}\in\mathbf{E}.$ As a
concrete example, we may consider the $\ell_{p}$ space
$(\mathbb{R}^{d},\|\cdot\|_{p}),$ where
$\|\mathbf{x}\|_{p}=\big{(}\sum_{i=1}^{d}|x_{i}|^{p}\big{)}^{1/p},$ $1\leq
p\leq\infty.$ The space dual to $(\mathbb{R}^{d},\|\cdot\|_{p})$ is
isometrically isomorphic to the space $(\mathbb{R}^{d},\|\cdot\|_{p_{\ast}}),$
where $\frac{1}{p}+\frac{1}{p_{\ast}}=1.$ Throughout, given $1\leq
p\leq\infty$, we will call $p_{\ast}=\frac{p}{p-1}$ the conjugate exponent to
$p$ (notice that $1\leq p_{\ast}\leq\infty$, and
$\frac{1}{p}+\frac{1}{p_{\ast}}=1$). The (closed) $\|\cdot\|$-norm ball
centered at $\mathbf{x}$ with radius $R>0$ will be denoted by ${\cal
B}_{\|\cdot\|}(\mathbf{x},R)$.
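The dual-norm definition above can be made concrete for $\ell_{p}$ spaces: the supremum $\sup_{\|\mathbf{x}\|_{p}\leq 1}\left\langle\mathbf{z},\mathbf{x}\right\rangle$ is attained in the equality case of Hölder's inequality, at $x_{i}$ proportional to $\mathrm{sign}(z_{i})|z_{i}|^{p_{\ast}-1}$. A short numerical check in Python (helper names ours):

```python
from math import copysign

def lp_norm(x, p):
    """The l_p norm of a vector given as a list of floats."""
    return sum(abs(v)**p for v in x) ** (1 / p)

def dual_maximizer(z, p):
    """The point x with ||x||_p = 1 attaining sup <z, x> = ||z||_{p*},
    via the Hoelder equality case: x_i ~ sign(z_i) |z_i|^{p*-1}."""
    ps = p / (p - 1)                        # conjugate exponent p*
    x = [copysign(abs(v)**(ps - 1), v) for v in z]
    s = lp_norm(x, p)
    return [v / s for v in x]

z, p = [3.0, -1.0, 2.0], 1.5
ps = p / (p - 1)
x = dual_maximizer(z, p)
inner = sum(zi * xi for zi, xi in zip(z, x))
assert abs(lp_norm(x, p) - 1.0) < 1e-12     # feasible: ||x||_p = 1
assert abs(inner - lp_norm(z, ps)) < 1e-12  # <z, x> = ||z||_{p*}
```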
We start by recalling some standard definitions from convex analysis.
###### Definition 1.1.
A function $f:\mathbf{E}\to\mathbb{R}$ is said to be $(L,\kappa)$-weakly
smooth w.r.t. a norm $\|\cdot\|$, where $L>0$ and $\kappa\in[1,2],$ if its
gradients are $(L,\kappa-1)$ Hölder continuous, i.e., if
$(\forall\mathbf{x},\mathbf{y}\in\mathbf{E}):\quad\|\nabla
f(\mathbf{x})-\nabla f(\mathbf{y})\|_{*}\leq
L\|\mathbf{x}-\mathbf{y}\|^{\kappa-1}.$
We denote the class of $(L,\kappa)$-weakly smooth functions w.r.t. $\|\cdot\|$
by ${\cal F}_{\|\cdot\|}(L,\kappa)$.
Note that when $\kappa=1,$ the function may not be differentiable. Since we
will only be working with functions that are proper, convex, and lower
semicontinuous, we will still have that $f$ is subdifferentiable on the
interior of its domain (Rockafellar, 1970, Theorem 23.4). The definition of
$(L,\kappa)$-weakly smooth functions then boils down to the bounded variation
of the subgradients.
###### Definition 1.2.
A function $\psi:\mathbf{E}\to\mathbb{R}$ is said to be $q$-uniformly convex
w.r.t. a norm $\|\cdot\|$ and with constant $\lambda$ (and we refer to such
functions as $(\lambda,q)$-uniformly convex), where $\lambda\geq 0$ and $q\geq
2$, if $\forall\alpha\in(0,1):$
$(\forall\mathbf{x},\mathbf{y}\in\mathbf{E}):\quad\psi((1-\alpha)\mathbf{x}+\alpha\mathbf{y})\leq(1-\alpha)\psi(\mathbf{x})+\alpha\psi(\mathbf{y})-\frac{\lambda}{q}\alpha(1-\alpha)\|\mathbf{y}-\mathbf{x}\|^{q}.$
We denote the class of $(\lambda,q)$-uniformly convex functions w.r.t.
$\|\cdot\|$ by ${\cal U}_{\|\cdot\|}(\lambda,q)$.
With a slight abuse of notation, we will often use $\nabla\psi(\mathbf{x})$ to
denote an arbitrary but fixed element of $\partial\psi(\mathbf{x}).$ We also
make a mild assumption that the subgradient oracle of $\psi$ is consistent,
i.e., that it returns the same element of $\partial\psi(\mathbf{x})$ whenever
queried at the same point $\mathbf{x}.$
Observe that when $\lambda=0,$ uniform convexity reduces to standard
convexity, while for $\lambda>0$ and $q=2$ we recover the definition of strong
convexity. We will only be considering functions that are lower
semicontinuous, convex, and proper. These properties suffice for a function to
be subdifferentiable on the interior of its domain. It is then not hard to
show that if $\psi$ is $(\lambda,q)$-uniformly convex w.r.t. a norm
$\|\cdot\|$ and $\mathbf{g}_{\mathbf{x}}\in\partial\psi(\mathbf{x})$ is its
subgradient at a point $\mathbf{x},$ we have
$(\forall\mathbf{y}\in\mathbf{E}):\quad\psi(\mathbf{y})\geq\psi(\mathbf{x})+\left\langle\mathbf{g}_{\mathbf{x}},\mathbf{y}-\mathbf{x}\right\rangle+\frac{\lambda}{q}\|\mathbf{y}-\mathbf{x}\|^{q}.$
(3)
###### Definition 1.3.
Let $\psi:\mathbf{E}\to\mathbb{R}\cup\\{+\infty\\}.$ The convex conjugate of
$\psi,$ denoted by $\psi^{*}$, is defined by
$(\forall\mathbf{z}\in\mathbf{E}^{*}):\quad\psi^{*}(\mathbf{z})=\sup_{\mathbf{x}\in\mathbf{E}}\\{\left\langle\mathbf{z},\mathbf{x}\right\rangle-\psi(\mathbf{x})\\}.$
Recall that the convex conjugate of any function is convex. Some simple
examples of conjugate pairs of functions that will be useful for our analysis
are: (i) univariate functions $\frac{1}{p}|\cdot|^{p}$ and
$\frac{1}{p_{\ast}}|\cdot|^{p_{\ast}},$ where $1<p<\infty$ (see, e.g., Borwein
and Zhu (2004, Exercise 4.4.2)) and (ii) functions $\frac{1}{2}\|\cdot\|^{2}$
and $\frac{1}{2}\|\cdot\|_{*}^{2},$ where norms $\|\cdot\|$ and
$\|\cdot\|_{*}$ are dual to each other (see, e.g., Boyd and Vandenberghe
(2004, Example 3.27)). The latter example can be easily adapted to prove that
the functions $\frac{1}{p}\|\cdot\|^{p}$ and
$\frac{1}{p_{\ast}}\|\cdot\|_{\ast}^{p_{\ast}}$ are conjugates of each other,
for $1<p<\infty$.
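The conjugacy of $\frac{1}{p}\|\cdot\|_{p}^{p}$ and $\frac{1}{p_{\ast}}\|\cdot\|_{p_{\ast}}^{p_{\ast}}$ can be sanity-checked numerically through the Fenchel–Young inequality $\left\langle\mathbf{z},\mathbf{x}\right\rangle\leq\psi(\mathbf{x})+\psi^{*}(\mathbf{z})$, which holds with equality at $\mathbf{z}=\nabla\psi(\mathbf{x})$. The sketch below is an added illustration, not part of the original analysis; it assumes NumPy, and the instance ($p=3$, hand-picked vectors) is arbitrary.

```python
import numpy as np

# Fenchel-Young check for the conjugate pair (1/p)||.||_p^p and
# (1/p_*)||.||_{p_*}^{p_*}:  <z, x> <= psi(x) + psi*(z) for all x, z,
# with equality at z = grad psi(x) = sign(x) * |x|^{p-1}.
# Illustrative instance only (p = 3, hand-picked vectors).

p = 3.0
p_star = p / (p - 1.0)

def psi(x):       # (1/p) ||x||_p^p
    return np.sum(np.abs(x) ** p) / p

def psi_conj(z):  # (1/p_*) ||z||_{p_*}^{p_*}
    return np.sum(np.abs(z) ** p_star) / p_star

x = np.array([0.6, -1.1, 0.3])
z = np.array([1.2, 0.4, -0.7])

assert z @ x <= psi(x) + psi_conj(z) + 1e-12               # inequality
z_opt = np.sign(x) * np.abs(x) ** (p - 1.0)                # z = grad psi(x)
assert abs(z_opt @ x - (psi(x) + psi_conj(z_opt))) < 1e-10  # equality
```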
The following auxiliary facts will be useful for our analysis.
###### Fact 1.4.
Let $\psi:\mathbf{E}\to\mathbb{R}\cup\\{+\infty\\}$ be proper, convex, and
lower semicontinuous, and let $\psi^{*}$ be its convex conjugate. Then
$\psi^{*}$ is proper, convex, and lower semicontinuous (and thus
subdifferentiable on the interior of its domain) and
$\forall\mathbf{z}\in\mathrm{int\,dom}(\psi^{*})$:
$\mathbf{g}\in\partial\psi^{*}(\mathbf{z})$ if and only if
$\mathbf{g}\in\operatorname*{argsup}_{\mathbf{x}\in\mathbb{R}^{d}}\\{\left\langle\mathbf{z},\mathbf{x}\right\rangle-\psi(\mathbf{x})\\}.$
The following proposition will be repeatedly used in our analysis, and we
prove it here for completeness.
###### Proposition 1.5.
Let $(\mathbf{E},\|\cdot\|)$ be a normed space with
$\|\cdot\|^{2}:\mathbf{E}\rightarrow\mathbb{R}$ differentiable, and let
$1<q<\infty$. Then
$\Big{\|}\nabla\Big{(}\frac{1}{q}\|\mathbf{x}\|^{q}\Big{)}\Big{\|}_{*}=\|\mathbf{x}\|^{q-1}=\|\mathbf{x}\|^{q/q_{\ast}},$
where $q_{\ast}=\frac{q}{q-1}$ is the exponent conjugate to $q$.
###### Proof.
We notice that $\|\cdot\|^{2}$ is differentiable if and only if
$\|\cdot\|^{q}$ is differentiable (Zalinescu, 2002, Thm. 3.7.2). Since the
statement clearly holds for $\mathbf{x}=\textbf{0},$ in the following we
assume that $\mathbf{x}\neq\textbf{0}.$ Next, write $\frac{1}{q}\|\cdot\|^{q}$
as a composition of functions $\frac{1}{q}|\cdot|^{q/2}$ and $\|\cdot\|^{2}.$
Applying the chain rule of differentiation, we now have:
$\displaystyle\nabla\Big{(}\frac{1}{q}\|\mathbf{x}\|^{q}\Big{)}=\frac{1}{2}\Big{(}\|\mathbf{x}\|^{2}\Big{)}^{\frac{q}{2}-1}\nabla\big{(}\|\mathbf{x}\|^{2}\big{)}=\|\mathbf{x}\|^{q-2}\nabla\Big{(}\frac{1}{2}\|\mathbf{x}\|^{2}\Big{)}.$
It remains to argue that
$\Big{\|}\nabla\Big{(}\frac{1}{2}\|\mathbf{x}\|^{2}\Big{)}\Big{\|}_{*}=\|\mathbf{x}\|.$
This immediately follows by Fact 1.4, as $\frac{1}{2}\|\cdot\|^{2}$ and
$\frac{1}{2}\|\cdot\|_{*}^{2}$ are convex conjugates of each other. ∎
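Proposition 1.5 can be made concrete in the $\ell_{p}$ case, where the gradient is available in closed form: for $\mathbf{x}\neq\textbf{0}$, $\nabla\big{(}\frac{1}{q}\|\mathbf{x}\|_{p}^{q}\big{)}=\|\mathbf{x}\|_{p}^{q-p}\,\mathrm{sign}(\mathbf{x})\,|\mathbf{x}|^{p-1}$ (coordinatewise). The sketch below is an added numerical illustration (assuming NumPy; $p=1.5$, $q=3$, and the test vector are arbitrary choices): it cross-checks the closed-form gradient against finite differences and verifies the dual-norm identity from the proposition.

```python
import numpy as np

# Illustration of Proposition 1.5 for the l_p norm on R^d: for x != 0,
#   grad( (1/q)||x||_p^q ) = ||x||_p^{q-p} * sign(x) * |x|^{p-1},
# and its dual (l_{p_*}) norm equals ||x||_p^{q-1}.
# Arbitrary instance: p = 1.5, q = 3, hand-picked x.

p, q = 1.5, 3.0
p_star = p / (p - 1.0)

x = np.array([0.7, -1.2, 0.4, 2.0])
norm_x = np.sum(np.abs(x) ** p) ** (1.0 / p)
grad = norm_x ** (q - p) * np.sign(x) * np.abs(x) ** (p - 1.0)

# finite-difference cross-check of the closed-form gradient
eps = 1e-6
F = lambda v: np.sum(np.abs(v) ** p) ** (q / p) / q   # (1/q)||v||_p^q
fd = np.array([(F(x + eps * e) - F(x - eps * e)) / (2 * eps) for e in np.eye(4)])
assert np.allclose(grad, fd, atol=1e-5)

# dual-norm identity ||grad||_{p_*} = ||x||_p^{q-1}
dual_norm = np.sum(np.abs(grad) ** p_star) ** (1.0 / p_star)
assert abs(dual_norm - norm_x ** (q - 1.0)) < 1e-8
```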
We also state here a lemma that allows approximating weakly smooth functions
by weakly smooth functions of a different order. A variant of this lemma (for
$p=2$) first appeared in (Devolder et al., 2014), while the more general
version stated here is from d’Aspremont et al. (2018).
###### Lemma 1.6.
Let $f:\mathbf{E}\to\mathbb{R}$ be a function that is $(L,\kappa)$-weakly
smooth w.r.t. some norm $\|\cdot\|$. Then for any $\delta>0$ and
$M\geq\Big{[}\frac{2(p-\kappa)}{p\kappa\delta}\Big{]}^{\frac{p-\kappa}{\kappa}}L^{\frac{p}{\kappa}}$
(4)
we have
$(\forall\mathbf{x},\mathbf{y}\in\mathbf{E}):\quad f(\mathbf{y})\leq
f(\mathbf{x})+\left\langle\nabla
f(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle+\frac{M}{p}\|\mathbf{y}-\mathbf{x}\|^{p}+\frac{\delta}{2}.$
Finally, the following lemma will be useful when bounding the gradient norm in
Section 3 (see also (Zalinescu, 2002, Section 3.5)).
###### Lemma 1.7.
Let $f:\mathbf{E}\to\mathbb{R}$ be a function that is convex and
$(L,\kappa)$-weakly smooth w.r.t. some norm $\|\cdot\|$. Then:
$(\forall\mathbf{x},\mathbf{y}\in\mathbf{E}):\quad\frac{\kappa-1}{L^{\frac{1}{\kappa-1}}\kappa}\|\nabla f(\mathbf{y})-\nabla f(\mathbf{x})\|_{*}^{\frac{\kappa}{\kappa-1}}\leq f(\mathbf{y})-f(\mathbf{x})-\left\langle\nabla f(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle.$
###### Proof.
Let $h(\mathbf{x})$ be any $(L,\kappa)$-weakly smooth function and let
$\mathbf{x}^{*}\in\operatorname*{argmin}_{\mathbf{x}\in\mathbb{R}^{d}}h(\mathbf{x}).$
As $h$ is $(L,\kappa)$-weakly smooth, we have for all
$\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}:$
$h(\mathbf{y})\leq h(\mathbf{x})+\left\langle\nabla
h(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle+\frac{L}{\kappa}\|\mathbf{y}-\mathbf{x}\|^{\kappa}.$
Fixing $\mathbf{x}\in\mathbb{R}^{d}$ and minimizing both sides of the last
inequality w.r.t. $\mathbf{y}\in\mathbb{R}^{d}$, it follows that
$h(\mathbf{x}^{*})\leq
h(\mathbf{x})-\frac{L^{1-\kappa_{\ast}}}{\kappa_{\ast}}\|\nabla
h(\mathbf{x})\|_{*}^{\kappa_{\ast}},$ (5)
where we have used that the functions $\frac{1}{\kappa}\|\cdot\|^{\kappa}$ and
$\frac{1}{\kappa_{\ast}}\|\cdot\|_{*}^{\kappa_{\ast}}$ are convex conjugates
of each other.
To complete the proof, it remains to apply Eq. (5) to function
$h_{\mathbf{x}}(\mathbf{y})=f(\mathbf{y})-\left\langle\nabla
f(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle$ for any fixed
$\mathbf{x}\in\mathbb{R}^{d},$ and observe that $h_{\mathbf{x}}(\mathbf{y})$
is convex, $(L,\kappa)$-weakly smooth, and minimized at
$\mathbf{y}=\mathbf{x}.$ ∎
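For the familiar smooth case $\kappa=2$ with the (self-dual) Euclidean norm, Lemma 1.7 reduces to the classical inequality $\frac{1}{2L}\|\nabla f(\mathbf{y})-\nabla f(\mathbf{x})\|_{2}^{2}\leq f(\mathbf{y})-f(\mathbf{x})-\left\langle\nabla f(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle$. The sketch below is an added numerical spot-check (assuming NumPy) on a random convex quadratic, for which $L$ is the largest eigenvalue of the Hessian.

```python
import numpy as np

# Spot-check of Lemma 1.7 in the smooth Euclidean case (kappa = 2):
#   ||grad f(y) - grad f(x)||_2^2 / (2L)  <=  Bregman gap of f.
# f(x) = (1/2) x^T A x with A PSD is convex and L-smooth, L = lambda_max(A),
# and the l_2 norm is self-dual. Illustrative instance, not from the paper.

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B.T @ B                       # symmetric PSD -> f convex
L = np.linalg.eigvalsh(A).max()   # smoothness constant

f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = np.linalg.norm(grad(y) - grad(x)) ** 2 / (2 * L)
    rhs = f(y) - f(x) - grad(x) @ (y - x)
    assert lhs <= rhs + 1e-10
```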
## 2 Complementary Composite Minimization
In this section, we consider minimizing complementary composite functions,
which are of the form
$\bar{f}(\mathbf{x})=f(\mathbf{x})+\psi(\mathbf{x}),$ (6)
where $f$ is $(L,\kappa)$-weakly smooth w.r.t. some norm $\|\cdot\|$,
$\kappa\in(1,2],$ and $\psi$ is $(\lambda,q)$-uniformly convex w.r.t. the same
norm, for some $q\geq 2$, $\lambda\geq 0$. We assume that the feasible set
$\mathcal{X}\subseteq\mathbf{E}$ is closed, convex, and nonempty.
### 2.1 Algorithmic Framework and Convergence Analysis
The algorithmic framework we consider is a generalization of AGD+ from Cohen
et al. (2018), stated as follows:
Generalized AGD+:
$\displaystyle\mathbf{x}_{k}=\frac{A_{k-1}}{A_{k}}\mathbf{y}_{k-1}+\frac{a_{k}}{A_{k}}\mathbf{v}_{k-1}$
$\displaystyle\mathbf{v}_{k}=\operatorname*{argmin}_{\mathbf{u}\in\mathcal{X}}\Big{\\{}\sum_{i=0}^{k}a_{i}\left\langle\nabla f(\mathbf{x}_{i}),\mathbf{u}-\mathbf{x}_{i}\right\rangle+A_{k}\psi(\mathbf{u})+m_{0}\phi(\mathbf{u})\Big{\\}}$
$\displaystyle\mathbf{y}_{k}=\frac{A_{k-1}}{A_{k}}\mathbf{y}_{k-1}+\frac{a_{k}}{A_{k}}\mathbf{v}_{k},$
$\displaystyle\mathbf{y}_{0}=\mathbf{v}_{0},\;\mathbf{x}_{0}\in\mathcal{X},$
(7)
where $m_{0}$ and the sequence of positive numbers $\\{a_{k}\\}_{k\geq 0}$ are
parameters of the algorithm specified in the convergence analysis below,
$A_{k}=\sum_{i=0}^{k}a_{i},$ and we take $\phi(\mathbf{u})$ to be a function
that satisfies
$\phi(\mathbf{u})\geq\frac{1}{q}\|\mathbf{u}-\mathbf{x}_{0}\|^{q}.$ For
example, if $\lambda>0,$ we can take
$\phi(\mathbf{u})=\frac{1}{\lambda}D_{\psi}(\mathbf{u},\mathbf{x}_{0}).$ When
$\lambda=0$, we take $\phi$ to be $(1,q)$-uniformly convex.
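To make the template concrete, the following sketch (an added illustration, not the general method) instantiates Generalized AGD+ in the simplest setting: Euclidean norm, $\kappa=q=2$ (so $f$ is $L$-smooth, $\psi(\mathbf{x})=\frac{\lambda}{2}\|\mathbf{x}\|_{2}^{2}$ is $\lambda$-strongly convex, and one may take $M_{k}=L$, $\delta_{k}=0$), $\mathcal{X}=\mathbb{R}^{d}$, and $\phi(\mathbf{u})=\frac{1}{\lambda}D_{\psi}(\mathbf{u},\mathbf{x}_{0})=\frac{1}{2}\|\mathbf{u}-\mathbf{x}_{0}\|_{2}^{2}$. The choices $m_{0}=A_{0}M_{0}=L$ and $a_{k}=A_{k-1}\sqrt{\lambda/L}$ are taken from the convergence analysis below, and the subproblem defining $\mathbf{v}_{k}$ is solved in closed form.

```python
import numpy as np

# Minimal sketch of Generalized AGD+ (Eq. (7)): Euclidean norm, f L-smooth,
# psi(x) = (lam/2)||x||^2, X = R^d, phi(u) = ||u - x0||^2 / 2.
# With kappa = q = 2 we may take M_k = L, delta_k = 0, m0 = A0 * M0 = L, and
# a_k = A_{k-1} * sqrt(lam/L) satisfies a_k^2 <= lam * A_{k-1}^2 / M_k.
# The v_k-subproblem  min_u <s, u> + A lam/2 ||u||^2 + m0/2 ||u - x0||^2
# has the closed-form solution  u = (m0 * x0 - s) / (A * lam + m0).

def generalized_agd_plus(grad_f, lam, L, x0, iters):
    m0 = L                             # m0 = A0 * M0 with A0 = a0 = 1, M0 = L
    A = 1.0
    s = grad_f(x0)                     # running sum of a_i * grad f(x_i)
    v = (m0 * x0 - s) / (A * lam + m0)
    y = v.copy()                       # y_0 = v_0
    for _ in range(iters):
        a = A * np.sqrt(lam / L)
        A_new = A + a
        x = (A / A_new) * y + (a / A_new) * v
        s = s + a * grad_f(x)
        v = (m0 * x0 - s) / (A_new * lam + m0)
        y = (A / A_new) * y + (a / A_new) * v
        A = A_new
    return y

# usage: ridge regression  min (1/2)||Bx - c||^2 + (lam/2)||x||^2
B = np.diag([2.0, 1.0])
c = np.array([1.0, 1.0])
lam, L = 1.0, 4.0                      # L = largest eigenvalue of B^T B
y = generalized_agd_plus(lambda x: B.T @ (B @ x - c), lam, L, np.zeros(2), 100)
x_bar = np.linalg.solve(B.T @ B + lam * np.eye(2), B.T @ c)
assert np.linalg.norm(y - x_bar) < 1e-6
```

On this small ridge-regression instance the iterate matches the exact regularized minimizer to high accuracy, as the geometric growth of $A_{k}$ predicts.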
The convergence analysis relies on the approximate duality gap technique
(ADGT) of Diakonikolas and Orecchia (2019). The main idea is to construct an
upper estimate
$G_{k}\geq\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})$ of the true
optimality gap, where
$\bar{\mathbf{x}}^{*}=\operatorname*{argmin}_{\mathbf{x}\in\mathcal{X}}\bar{f}(\mathbf{x}),$
and then argue that $A_{k}G_{k}\leq A_{k-1}G_{k-1}+E_{k}$, which in turn
implies:
$\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})\leq\frac{A_{0}G_{0}}{A_{k}}+\frac{\sum_{i=1}^{k}E_{i}}{A_{k}}.$
That is, as long as $A_{0}G_{0}$ is bounded and the cumulative error
$\sum_{i=1}^{k}E_{i}$ is either bounded or increasing slowly compared to
$A_{k}$, the optimality gap of the sequence $\mathbf{y}_{k}$ converges to the
optimum at rate $(1+\sum_{i=1}^{k}E_{i})/A_{k}.$ The goal is, of course, to
make $A_{k}$ as fast-growing as possible, but that turns out to be limited by
the requirement that $A_{k}G_{k}$ be non-increasing or slowly increasing
compared to $A_{k}$.
The gap $G_{k}$ is constructed as the difference $U_{k}-L_{k},$ where
$U_{k}\geq\bar{f}(\mathbf{y}_{k})$ is an upper bound on
$\bar{f}(\mathbf{y}_{k})$ and $L_{k}\leq\bar{f}(\bar{\mathbf{x}}^{*})$ is a
lower bound on $\bar{f}(\bar{\mathbf{x}}^{*}).$ In this particular case, we
make the following choices:
$U_{k}=f(\mathbf{y}_{k})+\frac{1}{A_{k}}\sum_{i=0}^{k}a_{i}\psi(\mathbf{v}_{i}).$
As $\mathbf{y}_{k}=\frac{1}{A_{k}}\sum_{i=0}^{k}a_{i}\mathbf{v}_{i},$ we have,
by Jensen’s inequality: $U_{k}\geq
f(\mathbf{y}_{k})+\psi(\mathbf{y}_{k})=\bar{f}(\mathbf{y}_{k}),$ i.e., $U_{k}$
is a valid upper bound on $\bar{f}(\mathbf{y}_{k}).$ For the lower bound, we
use the following inequalities:
$\displaystyle\bar{f}(\bar{\mathbf{x}}^{*})$
$\displaystyle\geq\frac{1}{A_{k}}\sum_{i=0}^{k}a_{i}f(\mathbf{x}_{i})+\frac{1}{A_{k}}\sum_{i=0}^{k}a_{i}\left\langle\nabla
f(\mathbf{x}_{i}),\bar{\mathbf{x}}^{*}-\mathbf{x}_{i}\right\rangle+\psi(\bar{\mathbf{x}}^{*})+\frac{m_{0}}{A_{k}}\phi(\bar{\mathbf{x}}^{*})-\frac{m_{0}}{A_{k}}\phi(\bar{\mathbf{x}}^{*})$
$\displaystyle\geq\frac{1}{A_{k}}\sum_{i=0}^{k}a_{i}f(\mathbf{x}_{i})+\frac{1}{A_{k}}\min_{\mathbf{u}\in\mathcal{X}}\Big{\\{}\sum_{i=0}^{k}a_{i}\left\langle\nabla
f(\mathbf{x}_{i}),\mathbf{u}-\mathbf{x}_{i}\right\rangle+A_{k}\psi(\mathbf{u})+m_{0}\phi(\mathbf{u})\Big{\\}}-\frac{m_{0}}{A_{k}}\phi(\bar{\mathbf{x}}^{*})$
$\displaystyle=:L_{k},$
where the first inequality uses
$f(\bar{\mathbf{x}}^{*})\geq\frac{1}{A_{k}}\sum_{i=0}^{k}a_{i}f(\mathbf{x}_{i})+\frac{1}{A_{k}}\sum_{i=0}^{k}a_{i}\left\langle\nabla
f(\mathbf{x}_{i}),\bar{\mathbf{x}}^{*}-\mathbf{x}_{i}\right\rangle,$ by
convexity of $f.$
We start by bounding the initial (scaled) gap $A_{0}G_{0}.$
###### Lemma 2.1 (Initial Gap).
For any $\delta_{0}>0$ and
$M_{0}=\Big{[}\frac{2(q-\kappa)}{q\kappa\delta_{0}}\Big{]}^{\frac{q-\kappa}{\kappa}}L^{\frac{q}{\kappa}},$
if $A_{0}M_{0}=m_{0},$ then
$A_{0}G_{0}\leq m_{0}\phi(\bar{\mathbf{x}}^{*})+\frac{A_{0}\delta_{0}}{2}.$
###### Proof.
By definition, and using that $a_{0}=A_{0}$,
$\displaystyle A_{0}G_{0}=$
$\displaystyle\;A_{0}\Big{(}f(\mathbf{y}_{0})+\psi(\mathbf{v}_{0})-f(\mathbf{x}_{0})-\left\langle\nabla
f(\mathbf{x}_{0}),\mathbf{v}_{0}-\mathbf{x}_{0}\right\rangle-\psi(\mathbf{v}_{0})-\frac{m_{0}}{A_{0}}\phi(\mathbf{v}_{0})\Big{)}+m_{0}\phi(\bar{\mathbf{x}}^{*})$
$\displaystyle=$
$\displaystyle\;A_{0}(f(\mathbf{y}_{0})-f(\mathbf{x}_{0})-\left\langle\nabla
f(\mathbf{x}_{0}),\mathbf{y}_{0}-\mathbf{x}_{0}\right\rangle)-m_{0}\phi(\mathbf{y}_{0})+m_{0}\phi(\bar{\mathbf{x}}^{*})$
where the second line is by $\mathbf{y}_{0}=\mathbf{v}_{0}$.
By assumption,
$\phi(\mathbf{u})\geq\frac{1}{q}\|\mathbf{u}-\mathbf{x}_{0}\|^{q},$ for all
$\mathbf{u},$ and, in particular,
$\phi(\mathbf{y}_{0})\geq\frac{1}{q}\|\mathbf{y}_{0}-\mathbf{x}_{0}\|^{q}.$
On the other hand, by $(L,\kappa)$-weak smoothness of $f$ and using Lemma 1.6,
we have that (below
$M_{0}=\big{[}\frac{2(q-\kappa)}{q\kappa\delta_{0}}\big{]}^{\frac{q-\kappa}{\kappa}}L^{\frac{q}{\kappa}}$):
$f(\mathbf{y}_{0})-f(\mathbf{x}_{0})-\left\langle\nabla f(\mathbf{x}_{0}),\mathbf{y}_{0}-\mathbf{x}_{0}\right\rangle\leq\frac{M_{0}}{q}\|\mathbf{y}_{0}-\mathbf{x}_{0}\|^{q}+\frac{\delta_{0}}{2}.$
Therefore:
$A_{0}G_{0}\leq\big{(}A_{0}M_{0}-m_{0}\big{)}\frac{\|\mathbf{y}_{0}-\mathbf{x}_{0}\|^{q}}{q}+m_{0}\phi(\bar{\mathbf{x}}^{*})+\frac{A_{0}\delta_{0}}{2}=m_{0}\phi(\bar{\mathbf{x}}^{*})+\frac{A_{0}\delta_{0}}{2},$
(8)
as $m_{0}=A_{0}M_{0}$. ∎
The next step is to bound $A_{k}G_{k}-A_{k-1}G_{k-1},$ as in the following
lemma.
###### Lemma 2.2 (Gap Evolution).
Given arbitrary $\delta_{k}>0$ and
$M_{k}=\Big{[}\frac{2(q-\kappa)}{q\kappa\delta_{k}}\Big{]}^{\frac{q-\kappa}{\kappa}}L^{\frac{q}{\kappa}}$,
if $\frac{{a_{k}}^{q}}{{A_{k}}^{q-1}}\leq\frac{\max\\{\lambda
A_{k-1},m_{0}\\}}{M_{k}}$ then
$A_{k}G_{k}-A_{k-1}G_{k-1}\leq\frac{A_{k}\delta_{k}}{2}.$
###### Proof.
To bound $A_{k}G_{k}-A_{k-1}G_{k-1}$, we first bound
$A_{k}U_{k}-A_{k-1}U_{k-1}$ and $A_{k}L_{k}-A_{k-1}L_{k-1}.$ By definition of
$U_{k},$
$\displaystyle A_{k}U_{k}-A_{k-1}U_{k-1}$
$\displaystyle=A_{k}f(\mathbf{y}_{k})-A_{k-1}f(\mathbf{y}_{k-1})+a_{k}\psi(\mathbf{v}_{k})$
(9)
$\displaystyle=A_{k}(f(\mathbf{y}_{k})-f(\mathbf{x}_{k}))+A_{k-1}(f(\mathbf{x}_{k})-f(\mathbf{y}_{k-1}))+a_{k}f(\mathbf{x}_{k})+a_{k}\psi(\mathbf{v}_{k}).$
For the lower bound, define the function under the minimum in the definition
of the lower bound as
$h_{k}(\mathbf{u}):=\sum_{i=0}^{k}a_{i}\left\langle\nabla
f(\mathbf{x}_{i}),\mathbf{u}-\mathbf{x}_{i}\right\rangle+A_{k}\psi(\mathbf{u})+m_{0}\phi(\mathbf{u}),$
so that we have:
$A_{k}L_{k}-A_{k-1}L_{k-1}=a_{k}f(\mathbf{x}_{k})+h_{k}(\mathbf{v}_{k})-h_{k-1}(\mathbf{v}_{k-1}).$
(10)
Observe first that
$h_{k}(\mathbf{v}_{k})-h_{k-1}(\mathbf{v}_{k})=a_{k}\left\langle\nabla
f(\mathbf{x}_{k}),\mathbf{v}_{k}-\mathbf{x}_{k}\right\rangle+a_{k}\psi(\mathbf{v}_{k}).$
(11)
On the other hand, using the definition of Bregman divergence and the fact
that Bregman divergence is blind to constant and linear terms, we can bound
$h_{k-1}(\mathbf{v}_{k})-h_{k-1}(\mathbf{v}_{k-1})$ as
$\displaystyle h_{k-1}(\mathbf{v}_{k})-h_{k-1}(\mathbf{v}_{k-1})$
$\displaystyle=\left\langle\nabla
h_{k-1}(\mathbf{v}_{k-1}),\mathbf{v}_{k}-\mathbf{v}_{k-1}\right\rangle+D_{h_{k-1}}(\mathbf{v}_{k},\mathbf{v}_{k-1})$
$\displaystyle\geq
A_{k-1}D_{\psi}(\mathbf{v}_{k},\mathbf{v}_{k-1})+m_{0}D_{\phi}(\mathbf{v}_{k},\mathbf{v}_{k-1}),$
where the second line is by $\mathbf{v}_{k-1}$ being the minimizer of
$h_{k-1}$. Combining with Eqs. (10) and (11), we have:
$\displaystyle A_{k}L_{k}-A_{k-1}L_{k-1}\geq a_{k}f(\mathbf{x}_{k})+a_{k}\psi(\mathbf{v}_{k})+a_{k}\left\langle\nabla f(\mathbf{x}_{k}),\mathbf{v}_{k}-\mathbf{x}_{k}\right\rangle+A_{k-1}D_{\psi}(\mathbf{v}_{k},\mathbf{v}_{k-1})+m_{0}D_{\phi}(\mathbf{v}_{k},\mathbf{v}_{k-1}).$
(12)
Combining Eqs. (9) and (12), we can now bound $A_{k}G_{k}-A_{k-1}G_{k-1}$ as
$\displaystyle A_{k}G_{k}-A_{k-1}G_{k-1}\leq$
$\displaystyle\;A_{k}(f(\mathbf{y}_{k})-f(\mathbf{x}_{k}))+A_{k-1}(f(\mathbf{x}_{k})-f(\mathbf{y}_{k-1}))$
$\displaystyle-a_{k}\left\langle\nabla
f(\mathbf{x}_{k}),\mathbf{v}_{k}-\mathbf{x}_{k}\right\rangle-
A_{k-1}D_{\psi}(\mathbf{v}_{k},\mathbf{v}_{k-1})-m_{0}D_{\phi}(\mathbf{v}_{k},\mathbf{v}_{k-1})$
$\displaystyle\leq$
$\displaystyle\;A_{k}(f(\mathbf{y}_{k})-f(\mathbf{x}_{k})-\left\langle\nabla
f(\mathbf{x}_{k}),\mathbf{y}_{k}-\mathbf{x}_{k}\right\rangle)-A_{k-1}D_{\psi}(\mathbf{v}_{k},\mathbf{v}_{k-1})-m_{0}D_{\phi}(\mathbf{v}_{k},\mathbf{v}_{k-1}),$
where we have used
$f(\mathbf{x}_{k})-f(\mathbf{y}_{k-1})\leq\left\langle\nabla
f(\mathbf{x}_{k}),\mathbf{x}_{k}-\mathbf{y}_{k-1}\right\rangle$ (by convexity
of $f$) and the definition of $\mathbf{y}_{k}$ from Eq. (7). As in the bound on the initial gap, we now use the weak smoothness of $f$ and Lemma 1.6 to write:
$\displaystyle f(\mathbf{y}_{k})-f(\mathbf{x}_{k})-\left\langle\nabla
f(\mathbf{x}_{k}),\mathbf{y}_{k}-\mathbf{x}_{k}\right\rangle$
$\displaystyle\leq\frac{M_{k}}{q}\|\mathbf{y}_{k}-\mathbf{x}_{k}\|^{q}+\frac{\delta_{k}}{2}$
$\displaystyle=\frac{M_{k}}{q}\frac{{a_{k}}^{q}}{{A_{k}}^{q}}\|\mathbf{v}_{k}-\mathbf{v}_{k-1}\|^{q}+\frac{\delta_{k}}{2},$
where
$M_{k}=\Big{[}\frac{2(q-\kappa)}{q\kappa\delta_{k}}\Big{]}^{\frac{q-\kappa}{\kappa}}L^{\frac{q}{\kappa}}$
and the equality is by
$\mathbf{y}_{k}-\mathbf{x}_{k}=\frac{a_{k}}{A_{k}}(\mathbf{v}_{k}-\mathbf{v}_{k-1})$,
which follows by the definition of algorithm steps from Eq. (7).
On the other hand, as $\psi$ is $(\lambda,q)$-uniformly convex, we have that
$D_{\psi}(\mathbf{v}_{k},\mathbf{v}_{k-1})\geq\frac{\lambda}{q}\|\mathbf{v}_{k}-\mathbf{v}_{k-1}\|^{q}$.
Further, if $\lambda=0,$ we have that
$D_{\phi}(\mathbf{v}_{k},\mathbf{v}_{k-1})\geq\frac{1}{q}\|\mathbf{v}_{k}-\mathbf{v}_{k-1}\|^{q}$.
Thus:
$\displaystyle A_{k}G_{k}-A_{k-1}G_{k-1}$
$\displaystyle\leq\Big{(}M_{k}\frac{{a_{k}}^{q}}{{A_{k}}^{q-1}}-\max\\{\lambda
A_{k-1},m_{0}\\}\Big{)}\frac{\|\mathbf{v}_{k}-\mathbf{v}_{k-1}\|^{q}}{q}+\frac{A_{k}\delta_{k}}{2}$
$\displaystyle\leq\frac{A_{k}\delta_{k}}{2},$
as $\frac{{a_{k}}^{q}}{{A_{k}}^{q-1}}\leq\frac{\max\\{\lambda
A_{k-1},m_{0}\\}}{M_{k}}.$ ∎
We are now ready to state and prove the main result from this section.
###### Theorem 2.3.
Let $\bar{f}(\mathbf{x})=f(\mathbf{x})+\psi(\mathbf{x}),$ where $f$ is convex
and $(L,\kappa)$-weakly smooth w.r.t. a norm $\|\cdot\|$, $\kappa\in(1,2],$
and $\psi$ is $q$-uniformly convex with constant $\lambda\geq 0$ w.r.t. the
same norm for some $q\geq 2$. Let $\bar{\mathbf{x}}^{*}$ be the minimizer of
$\bar{f}.$ Let $\mathbf{x}_{k},\mathbf{v}_{k},\mathbf{y}_{k}$ evolve according
to Eq. (7) for an arbitrary initial point $\mathbf{x}_{0}\in\mathcal{X},$
where $A_{0}M_{0}=m_{0}$,
${{a_{k}}^{q}}\leq\frac{\max\\{\lambda{A_{k-1}}^{q},m_{0}{A_{k}}^{q-1}\\}}{M_{k}}$
for $k\geq 1,$ and
$M_{k}=\Big{[}\frac{2(q-\kappa)}{q\kappa\delta_{k}}\Big{]}^{\frac{q-\kappa}{\kappa}}L^{\frac{q}{\kappa}}$,
for $\delta_{k}>0$ and $k\geq 0.$ Then, $\forall k\geq 1:$
$\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})\leq\frac{2A_{0}M_{0}\phi(\bar{\mathbf{x}}^{*})+\sum_{i=0}^{k}A_{i}\delta_{i}}{2A_{k}}.$
In particular, for any $\epsilon>0,$ setting
$\delta_{k}=\frac{a_{k}}{A_{k}}\epsilon$, for $k\geq 0,$ and $a_{0}=A_{0}=1,$
and
${{a_{k}}^{q}}=\frac{\max\\{\lambda{A_{k-1}}^{q},m_{0}{A_{k}}^{q-1}\\}}{M_{k}}$
for $k\geq 1,$ we have that
$\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})\leq\epsilon$ after at
most
$\displaystyle k=O\bigg{(}\min\bigg{\\{}$
$\displaystyle\Big{(}\frac{1}{\epsilon}\Big{)}^{\frac{q-\kappa}{q\kappa-q+\kappa}}\Big{(}\max\Big{\\{}\frac{L^{\frac{q}{\kappa}}}{\lambda},1\Big{\\}}\Big{)}^{\frac{\kappa}{q\kappa-q+\kappa}}\log\Big{(}\frac{L\phi(\bar{\mathbf{x}}^{*})}{\epsilon}\Big{)},$
$\displaystyle\Big{(}\frac{L}{\epsilon}\Big{)}^{\frac{q}{q\kappa-q+\kappa}}\big{(}\phi(\bar{\mathbf{x}}^{*})\big{)}^{\frac{\kappa}{q\kappa-q+\kappa}}\bigg{\\}}\bigg{)}$
iterations.
###### Proof.
The first part of the theorem follows immediately by combining Lemma 2.1 and
Lemma 2.2.
For the second part, we have
$\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})\leq\frac{A_{0}M_{0}\phi(\bar{\mathbf{x}}^{*})}{A_{k}}+\frac{\epsilon}{2},$
so all we need to show is that, under the step size choice from the theorem
statement, we have
$\frac{A_{0}M_{0}\phi(\bar{\mathbf{x}}^{*})}{A_{k}}\leq\frac{\epsilon}{2}.$
As $A_{0}=a_{0}=1,$ we have that $\delta_{0}=\epsilon$ and
$M_{0}=\Big{[}\frac{2(q-\kappa)}{q\kappa\epsilon}\Big{]}^{\frac{q-\kappa}{\kappa}}L^{\frac{q}{\kappa}}.$
(13)
It remains to bound the growth of $A_{k}.$ In this case, by theorem
assumption, we have
${{a_{k}}^{q}}=\frac{\max\\{\lambda{A_{k-1}}^{q},m_{0}{A_{k}}^{q-1}\\}}{M_{k}}$.
Thus, (i) $\frac{{a_{k}}^{q}}{{A_{k-1}}^{q}}\geq\frac{\lambda}{M_{k}}$ and
(ii) $\frac{{a_{k}}^{q}}{{A_{k}}^{q-1}}\geq\frac{m_{0}}{M_{k}}$, and the
growth of $A_{k}$ can be bounded below as the maximum of growths determined by
these two cases.
Consider first the case $\frac{{a_{k}}^{q}}{{A_{k}}^{q-1}A_{k-1}}\geq\frac{\lambda}{M_{k}}$.
As $\delta_{k}=\frac{a_{k}}{A_{k}}\epsilon$ and
$M_{k}=\Big{[}\frac{2(q-\kappa)}{q\kappa\delta_{k}}\Big{]}^{\frac{q-\kappa}{\kappa}}L^{\frac{q}{\kappa}}$,
the condition
$\frac{{a_{k}}^{q}}{{A_{k}}^{q-1}A_{k-1}}\geq\frac{\lambda}{M_{k}}$ can be
equivalently written as:
$\frac{{a_{k}}^{q-\frac{q}{\kappa}+1}}{{A_{k-1}}^{q-\frac{q}{\kappa}+1}}\geq\Big{[}\frac{2(q-\kappa)}{q\kappa\epsilon}\Big{]}^{-\frac{q-\kappa}{\kappa}}\frac{\lambda}{L^{\frac{q}{\kappa}}}.$
Hence,
$\frac{{a_{k}}}{A_{k-1}}\geq\Big{[}\frac{2(q-\kappa)}{q\kappa\epsilon}\Big{]}^{-\frac{q-\kappa}{q\kappa-q+\kappa}}\Big{(}\frac{\lambda}{L^{\frac{q}{\kappa}}}\Big{)}^{\frac{\kappa}{q\kappa-q+\kappa}}.$
As $a_{k}=A_{k}-A_{k-1},$ it follows that $\frac{A_{k}}{A_{k-1}}\geq
1+\Big{[}\frac{2(q-\kappa)}{q\kappa\epsilon}\Big{]}^{-\frac{q-\kappa}{q\kappa-q+\kappa}}\Big{(}\frac{\lambda}{L^{\frac{q}{\kappa}}}\Big{)}^{\frac{\kappa}{q\kappa-q+\kappa}},$
further leading to
$\displaystyle
A_{k}\geq\bigg{(}1+\Big{[}\frac{2(q-\kappa)}{q\kappa\epsilon}\Big{]}^{-\frac{q-\kappa}{q\kappa-q+\kappa}}\Big{(}\frac{\lambda}{L^{\frac{q}{\kappa}}}\Big{)}^{\frac{\kappa}{q\kappa-q+\kappa}}\bigg{)}^{k}.$
On the other hand, the condition
$\frac{{a_{k}}^{q}}{{A_{k}}^{q-1}}\geq\frac{m_{0}}{M_{k}}$ can be equivalently
written as:
$\frac{{a_{k}}^{\frac{q\kappa-q}{\kappa}+1}}{A_{k}^{\frac{q\kappa-q}{\kappa}}}\geq\frac{m_{0}}{L^{\frac{q}{\kappa}}}\Big{[}\frac{q\kappa\epsilon}{2(q-\kappa)}\Big{]}^{\frac{q-\kappa}{\kappa}}=1,$
where we have used the definition of $m_{0},$ which implies
$A_{k}=\Omega\Big{(}k^{\frac{q\kappa-q+\kappa}{\kappa}}\Big{)},$ (14)
and further leads to the claimed bound on the number of iterations. ∎
Let us point out some special cases of the bound from Theorem 2.3. When $f$ is
smooth ($\kappa=2$) and $\psi$ is $q$-uniformly convex, assuming
$L^{q/2}\geq\lambda,$ the bound simplifies to
$k=O\bigg{(}\min\bigg{\\{}\Big{(}\frac{1}{\epsilon}\Big{)}^{\frac{q-2}{q+2}}\Big{(}\frac{L^{\frac{q}{2}}}{\lambda}\Big{)}^{\frac{2}{q+2}}\log\Big{(}\frac{L\phi(\bar{\mathbf{x}}^{*})}{\epsilon}\Big{)},\;\Big{(}\frac{L}{\epsilon}\Big{)}^{\frac{q}{q+2}}\big{(}\phi(\bar{\mathbf{x}}^{*})\big{)}^{\frac{2}{q+2}}\bigg{\\}}\bigg{)}.$
(15)
In particular, if $\psi$ is strongly convex ($q=2$), we recover the same bound
as in the Euclidean case:
$k=O\bigg{(}\min\bigg{\\{}\sqrt{\frac{L}{\lambda}}\log\Big{(}\frac{L\phi(\bar{\mathbf{x}}^{*})}{\epsilon}\Big{)},\;\sqrt{\frac{L\phi(\bar{\mathbf{x}}^{*})}{\epsilon}}\bigg{\\}}\bigg{)}.$
(16)
Note that this result uses smoothness of $f$ and strong convexity of $\psi$
with respect to the same but arbitrary norm $\|\cdot\|$. Because we do not
require the same function to be simultaneously smooth and strongly convex
w.r.t. $\|\cdot\|$, the resulting “condition number” $\frac{L}{\lambda}$ can
be dimension-independent even for non-Euclidean norms (in particular, this
will be possible for any $\ell_{p}$ norm with $p\in(1,2]$).
Because $\bar{f}$ is $q$-uniformly convex, Theorem 2.3 also implies a bound on
$\|\mathbf{y}_{k}-\bar{\mathbf{x}}^{*}\|$ whenever $\lambda>0,$ as follows.
###### Corollary 2.4.
Under the same assumptions as in Theorem 2.3, and assuming, in addition, that
$\lambda>0,$ we have that
$\|\mathbf{y}_{k}-\bar{\mathbf{x}}^{*}\|\leq\bar{\epsilon}$ after at most
$k=O\bigg{(}\Big{(}\frac{q}{\lambda\bar{\epsilon}^{q}}\Big{)}^{\frac{q-\kappa}{q\kappa-q+\kappa}}\Big{(}\frac{L^{\frac{q}{\kappa}}}{\lambda}\Big{)}^{\frac{\kappa}{q\kappa-q+\kappa}}\log\Big{(}\frac{qL\phi(\bar{\mathbf{x}}^{*})}{\bar{\epsilon}^{q}\lambda}\Big{)}\bigg{)}$
iterations.
###### Proof.
By $q$-uniform convexity of $\bar{f}$ and $\textbf{0}\in\partial\bar{f}(\bar{\mathbf{x}}^{*})$ (as $\bar{\mathbf{x}}^{*}$ minimizes $\bar{f}$), we have
$\|\mathbf{y}_{k}-\bar{\mathbf{x}}^{*}\|^{q}\leq\frac{q}{\lambda}(\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})).$
Thus, it suffices to apply the bound from Theorem 2.3 with the accuracy
parameter ${\epsilon}=\frac{\lambda\bar{\epsilon}^{q}}{q}.$ ∎
### 2.2 Computational Considerations
At first glance, the result from Theorem 2.3 may seem of limited
applicability, as there are potentially four different parameters
($L,\kappa,\lambda,q$) that one would need to tune. However, we now argue that
this is not a constraining factor. First, for most of the applications in
which one would be interested in using this framework, function $\psi$ is a
regularizing function with known uniform convexity parameters $\lambda$ and
$q$ (see Section 5 for several interesting examples). Second, the knowledge of
parameters $L$ and $\kappa$ is not necessary for our results; we presented the analysis assuming knowledge of these parameters so as not to overcomplicate the exposition.
In particular, the only place in the analysis where the $(L,\kappa)$
smoothness of $f$ is used is in the inequality
$f(\mathbf{y}_{k})\leq f(\mathbf{x}_{k})+\left\langle\nabla
f(\mathbf{x}_{k}),\mathbf{y}_{k}-\mathbf{x}_{k}\right\rangle+\frac{M_{k}}{q}\|\mathbf{y}_{k}-\mathbf{x}_{k}\|^{q}+\frac{\delta_{k}}{2}.$
(17)
But instead of explicitly computing the value of $M_{k}$ based on $L,\kappa,$
one could maintain an estimate of $M_{k}$, double it whenever the inequality
from Eq. (17) is not satisfied, and recompute all iteration-$k$ variables.
This is a standard trick employed in optimization, due to Nesterov (2015).
Observe that, due to $(L,\kappa)$-weak smoothness of $f$ and Lemma 1.6, there
exists a sufficiently large $M_{k}$ for any value of $\delta_{k}$. In
particular, under the choice $\delta_{k}=\frac{a_{k}}{A_{k}}\epsilon$ from Theorem 2.3, the total number of times that $M_{k}$ can get doubled is
logarithmic in all of the problem parameters, which means that it can be
absorbed in the overall convergence bound from Theorem 2.3.
Finally, the described algorithm (Generalized AGD+ from Eq. (7)) can be
efficiently implemented only if the minimization problems defining
$\mathbf{v}_{k}$ can be solved efficiently (preferably in closed form, or with
$\tilde{O}(d)$ arithmetic operations). This is indeed the case for most
problems of interest. In particular, when $\psi$ is uniformly convex, we will
typically take $\phi(\mathbf{u})$ to be the Bregman divergence
$D_{\psi}(\mathbf{u},\mathbf{x}_{0}).$ Then, the computation of
$\mathbf{v}_{k}$ boils down to solving problems of the form (2), i.e.,
$\min_{\mathbf{x}\in\mathcal{X}}\\{\left\langle\mathbf{z},\mathbf{x}\right\rangle+\psi(\mathbf{x})\\},$
for a given $\mathbf{z}.$ Such problems are efficiently solvable whenever the
convex conjugate of $\psi+I_{\mathcal{X}}$, where $I_{\mathcal{X}}$ is the
indicator function of the closed convex set $\mathcal{X},$ is efficiently
computable, in which case the minimizer is
$\nabla(\psi+I_{\mathcal{X}})^{*}(\mathbf{z})$. In particular, for
$\mathcal{X}=\mathbf{E}$ and $\psi(\mathbf{x})=\frac{1}{q}\|\mathbf{x}\|^{q}$,
$q>1,$ (a common choice for our applications of interest; see Section 5), the
minimizer is computable in closed form as
$\nabla\big{(}\frac{1}{q_{\ast}}\|\mathbf{z}\|_{*}^{q_{\ast}}\big{)},$ where
$q_{\ast}=\frac{q}{q-1}$ is the exponent dual to $q.$ This should be compared
to the computation of proximal maps needed in Nesterov (2013), where the
minimizer would be the gradient of the infimal convolution of $\psi$ and the squared Euclidean norm, for which there are far fewer efficiently computable examples. Note that such an assumption would be sufficient for our algorithm
to work in the Euclidean case (by taking
$\phi(\mathbf{u})=\frac{1}{2}\|\mathbf{u}-\mathbf{x}_{0}\|_{2}^{2}$); however,
it is not necessary.
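The closed-form solution of the unconstrained subproblem can be checked directly in the $\ell_{p}$ case. The sketch below is an added illustration (assuming NumPy; the instance $p=1.5$, $q=2.5$ and the vector $\mathbf{z}$ are arbitrary): it computes the minimizer of $\left\langle\mathbf{z},\mathbf{x}\right\rangle+\frac{1}{q}\|\mathbf{x}\|_{p}^{q}$ over $\mathbb{R}^{d}$ as $\nabla\big{(}\frac{1}{q_{\ast}}\|\cdot\|_{p_{\ast}}^{q_{\ast}}\big{)}$ evaluated at $-\mathbf{z}$, and verifies first-order optimality.

```python
import numpy as np

# Closed-form minimizer of  <z, x> + (1/q)||x||_p^q  over R^d:
#   x* = grad( (1/q_*)||w||_{p_*}^{q_*} )  at  w = -z,
# which in coordinates reads
#   x* = ||w||_{p_*}^{q_* - p_*} * sign(w) * |w|^{p_* - 1}.
# We verify first-order optimality: z + grad psi(x*) = 0, where
#   grad psi(x) = ||x||_p^{q-p} * sign(x) * |x|^{p-1}.
# Arbitrary instance: p = 1.5, q = 2.5, hand-picked z.

p, q = 1.5, 2.5
p_star, q_star = p / (p - 1.0), q / (q - 1.0)

z = np.array([0.8, -0.5, 1.3, -2.0])
w = -z
norm_w = np.sum(np.abs(w) ** p_star) ** (1.0 / p_star)
x_opt = norm_w ** (q_star - p_star) * np.sign(w) * np.abs(w) ** (p_star - 1.0)

norm_x = np.sum(np.abs(x_opt) ** p) ** (1.0 / p)
grad_psi = norm_x ** (q - p) * np.sign(x_opt) * np.abs(x_opt) ** (p - 1.0)
assert np.allclose(z + grad_psi, 0.0, atol=1e-8)
```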
## 3 Minimizing the Gradient Norm in $\ell_{p}$ and $\mathrm{Sch}_{p}$ Spaces
We now show how to use the result from Theorem 2.3 to obtain near-optimal
convergence bounds for minimizing the norm of the gradient. In particular,
assuming that $f$ is $(L,\kappa)$-weakly smooth w.r.t. $\|\cdot\|_{p},$ to
obtain the desired results, we apply Theorem 2.3 to function
$\bar{f}(\cdot)=f(\cdot)+\lambda\psi_{p}(\cdot),$ where
$\psi_{p}(\mathbf{x})=\begin{cases}\frac{1}{2(p-1)}\|\mathbf{x}-\mathbf{x}_{0}\|_{p}^{2},&\text{
if }p\in(1,2],\\\ \frac{1}{p}\|\mathbf{x}-\mathbf{x}_{0}\|_{p}^{p},&\text{ if
}p\in(2,+\infty).\end{cases}$ (18)
Function $\psi_{p}$ is then $(1,\max\\{2,p\\})$-uniformly convex. The proof of
strong convexity of $\psi_{p}$ when $1<p\leq 2$ can be found, e.g., in Beck
(2017, Example 5.28). For $2<p<+\infty$, $\psi_{p}$ is a separable function,
hence its $p$-uniform convexity can be proved from the duality between uniform
convexity and uniform smoothness (Zalinescu, 1983) and direct computation.
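A quick numerical spot-check of the strong convexity claim for $p\in(1,2]$ (random instances, not a proof; NumPy is assumed) verifies the analogue of inequality (3) with $\lambda=1$, $q=2$ for $\psi_{p}$ at $p=1.5$:

```python
import numpy as np

# Spot-check that psi_p(x) = ||x - x0||_p^2 / (2(p-1)), p in (1, 2],
# satisfies 1-strong convexity w.r.t. ||.||_p (inequality (3) with
# lambda = 1, q = 2):
#   psi_p(y) >= psi_p(x) + <grad psi_p(x), y - x> + (1/2)||y - x||_p^2.
# Random instances with p = 1.5; illustrative only.

rng = np.random.default_rng(4)
p = 1.5
x0 = rng.standard_normal(4)

norm_p = lambda u: np.sum(np.abs(u) ** p) ** (1.0 / p)
psi = lambda x: norm_p(x - x0) ** 2 / (2 * (p - 1.0))

def grad_psi(x):
    u = x - x0
    return norm_p(u) ** (2.0 - p) * np.sign(u) * np.abs(u) ** (p - 1.0) / (p - 1.0)

for _ in range(200):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lower = psi(x) + grad_psi(x) @ (y - x) + 0.5 * norm_p(y - x) ** 2
    assert psi(y) >= lower - 1e-10
```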
These $\ell_{p}$ results also have spectral analogues, given by the Schatten
spaces $\mathscr{S}_{p}=(\mathbb{R}^{d\times d},\|\cdot\|_{\mathscr{S},p})$.
Here, the functions below can be proved to be $(1,\max\\{2,p\\})$-uniformly
convex, which is a consequence of sharp estimates of uniform convexity for
Schatten spaces (Ball et al., 1994; Juditsky and Nemirovski, 2008)
$\Psi_{\mathscr{S},p}(\mathbf{x})=\begin{cases}\frac{1}{2(p-1)}\|\mathbf{x}-\mathbf{x}_{0}\|_{\mathscr{S},p}^{2},&\text{
if }p\in(1,2],\\\
\frac{1}{p}\|\mathbf{x}-\mathbf{x}_{0}\|_{\mathscr{S},p}^{p},&\text{ if
}p\in(2,+\infty).\end{cases}$ (19)
Finally, both for $\ell_{1}$ and $\mathscr{S}_{1}$ spaces, our algorithms can
work on the equivalent norm with power $p=\ln d/(\ln d-1).$ The cost of this
change of norm is at most logarithmic in $d$ for the diameter and strong
convexity constants. Similarly, our results also extend to the case
$p=\infty$, by similar considerations (here, using exponent $p=\ln d$).
To obtain the results for the norm of the gradient in $\ell_{p}$ spaces, we
can apply Theorem 2.3 with $\phi(\mathbf{x})=\psi_{p}(\mathbf{x}),$ where
$\psi_{p}$ is specified in Eq. (18). The result is summarized in the following
theorem. The same result can be obtained for $\mathscr{S}_{p}$ spaces, by
following the same argument as in Theorem 3.1 below, which we omit for
brevity.
###### Theorem 3.1.
Let $f$ be a convex, $(L,\kappa)$-weakly smooth function w.r.t. a norm
$\|\cdot\|_{p}$, where $p\in(1,\infty).$ Then, for any $\epsilon>0,$
Generalized AGD+ from Eq. (7), initialized at some point
$\mathbf{x}_{0}\in\mathbb{R}^{d}$ and applied to $\bar{f}=f+\lambda\psi_{p},$
where $\psi_{p}$ is specified in Eq. (18),
$\lambda=\begin{cases}\frac{\epsilon(p-1)}{2\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}},&\text{
if }p\in(1,2],\\\
\frac{\epsilon}{2\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}^{p-1}},&\text{ if
}p\in(2,\infty),\end{cases}$
and with the choice $\phi=\psi_{p},$ constructs a point $\mathbf{y}_{k}$ with
$\|\nabla f(\mathbf{y}_{k})\|_{p_{\ast}}\leq\epsilon$ in at most
$k=\begin{cases}O\bigg{(}\Big{(}\frac{2L}{\epsilon}\Big{)}^{\frac{\kappa}{(\kappa-1)(3\kappa-2)}}\Big{(}\frac{\kappa^{2\kappa}}{(\kappa-1)^{2\kappa}}\cdot\frac{\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}^{2}}{(p-1)^{{\kappa}}}\Big{)}^{\frac{1}{3\kappa-2}}\log\Big{(}\frac{L\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}}{(p-1)\epsilon}\Big{)}\bigg{)},&\text{
if }p\in(1,2],\\\
O\bigg{(}\Big{(}\frac{2L\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}}{\epsilon}\Big{)}^{\frac{\kappa(p-1)}{p\kappa-p+\kappa}}\Big{(}\frac{\kappa}{\kappa-1}\Big{)}^{\frac{p}{p\kappa-p+\kappa}}\log\Big{(}\frac{L\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}^{p}}{\epsilon}\Big{)}\bigg{)},&\text{
if }p\in(2,\infty),\end{cases}$
iterations. In particular, when $\kappa=2$ (i.e., when $f$ is $L$-smooth):
$k=\begin{cases}\widetilde{O}\bigg{(}\sqrt{\frac{L\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}}{\epsilon}}\bigg{)},&\text{
if }p\in(1,2],\\\
\widetilde{O}\bigg{(}\Big{(}\frac{L\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}}{\epsilon}\Big{)}^{\frac{2(p-1)}{p+2}}\bigg{)},&\text{
if }p\in(2,\infty),\end{cases}$
where $\widetilde{O}$ hides logarithmic factors in $L$,
$\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}$, $\frac{1}{p-1}$ and $1/\epsilon$.
###### Proof.
Let us first relate $\|\bar{\mathbf{x}}^{*}-\mathbf{x}_{0}\|_{p}$ to
$\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p},$ where
$\bar{\mathbf{x}}^{*}=\operatorname*{argmin}_{\mathbf{x}\in\mathbb{R}^{d}}\bar{f}(\mathbf{x}),$
$\mathbf{x}^{*}\in\operatorname*{argmin}_{\mathbf{x}\in\mathbb{R}^{d}}{f}(\mathbf{x}).$
By the definition of $\bar{f}$:
$\displaystyle 0$
$\displaystyle\leq\bar{f}(\mathbf{x}^{*})-\bar{f}(\bar{\mathbf{x}}^{*})$
$\displaystyle=f(\mathbf{x}^{*})-f(\bar{\mathbf{x}}^{*})+\lambda\psi_{p}(\mathbf{x}^{*})-\lambda\psi_{p}(\bar{\mathbf{x}}^{*})$
$\displaystyle\leq\lambda\psi_{p}(\mathbf{x}^{*})-\lambda\psi_{p}(\bar{\mathbf{x}}^{*}).$
It follows that
$\psi_{p}(\bar{\mathbf{x}}^{*})\leq\psi_{p}(\mathbf{x}^{*}).$
Thus, using the definition of $\psi_{p},$
$\|\bar{\mathbf{x}}^{*}-\mathbf{x}_{0}\|_{p}\leq\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}.$
(20)
By the triangle inequality and
$\bar{\mathbf{x}}^{*}=\operatorname*{argmin}_{\mathbf{x}\in\mathbb{R}^{d}}\bar{f}(\mathbf{x})$
(which implies $\nabla\bar{f}(\bar{\mathbf{x}}^{*})=\mathbf{0}$),
$\displaystyle\|\nabla f(\mathbf{y}_{k})\|_{p_{\ast}}$
$\displaystyle\leq\|\nabla f(\mathbf{y}_{k})-\nabla
f(\bar{\mathbf{x}}^{*})\|_{p_{\ast}}+\|\nabla
f(\bar{\mathbf{x}}^{*})\|_{p_{\ast}}$ $\displaystyle=\|\nabla
f(\mathbf{y}_{k})-\nabla
f(\bar{\mathbf{x}}^{*})\|_{p_{\ast}}+\|\nabla\bar{f}(\bar{\mathbf{x}}^{*})-\lambda\nabla\psi_{p}(\bar{\mathbf{x}}^{*})\|_{p_{\ast}}$
$\displaystyle=\|\nabla f(\mathbf{y}_{k})-\nabla
f(\bar{\mathbf{x}}^{*})\|_{p_{\ast}}+\lambda\|\nabla\psi_{p}(\bar{\mathbf{x}}^{*})\|_{p_{\ast}}.$
(21)
As $f$ is convex and $(L,\kappa)$-weakly smooth, using Lemma 1.7, we also
have:
$\displaystyle\frac{\kappa-1}{L^{\frac{1}{\kappa-1}}\kappa}\|\nabla
f(\mathbf{y}_{k})-\nabla
f(\bar{\mathbf{x}}^{*})\|_{p_{\ast}}^{\frac{\kappa}{\kappa-1}}$
$\displaystyle\leq
f(\mathbf{y}_{k})-f(\bar{\mathbf{x}}^{*})-\left\langle\nabla
f(\bar{\mathbf{x}}^{*}),\mathbf{y}_{k}-\bar{\mathbf{x}}^{*}\right\rangle$
$\displaystyle=\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})-\lambda\psi_{p}(\mathbf{y}_{k})+\lambda\psi_{p}(\bar{\mathbf{x}}^{*})-\left\langle\nabla\bar{f}(\bar{\mathbf{x}}^{*})-\lambda\nabla\psi_{p}(\bar{\mathbf{x}}^{*}),\mathbf{y}_{k}-\bar{\mathbf{x}}^{*}\right\rangle$
$\displaystyle=\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})-\lambda\big{(}\psi_{p}(\mathbf{y}_{k})-\psi_{p}(\bar{\mathbf{x}}^{*})-\left\langle\nabla\psi_{p}(\bar{\mathbf{x}}^{*}),\mathbf{y}_{k}-\bar{\mathbf{x}}^{*}\right\rangle\big{)}$
$\displaystyle\leq\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*}),$ (22)
where the second line uses $\bar{f}=f+\lambda\psi_{p}$, the third line follows by
$\nabla\bar{f}(\bar{\mathbf{x}}^{*})=0$ (as
$\bar{\mathbf{x}}^{*}=\operatorname*{argmin}_{\mathbf{x}\in\mathbb{R}^{d}}\bar{f}(\mathbf{x})$),
and the last inequality is by convexity of $\psi_{p}.$
From Eqs. (21) and (22), to obtain $\|\nabla
f(\mathbf{y}_{k})\|_{p_{\ast}}\leq\epsilon,$ it suffices that
$\lambda\|\nabla\psi_{p}(\bar{\mathbf{x}}^{*})\|_{p_{\ast}}\leq\frac{\epsilon}{2}$
and
$\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})\leq\big{(}\frac{\epsilon}{2}\big{)}^{\frac{\kappa}{\kappa-1}}\frac{\kappa-1}{L^{\frac{1}{\kappa-1}}\kappa}.$
The first condition determines the value of $\lambda.$ Using Proposition 1.5,
$\lambda\|\nabla\psi_{p}(\bar{\mathbf{x}}^{*})\|_{p_{\ast}}\leq\frac{\epsilon}{2}$
is equivalent to
$\begin{cases}\frac{\lambda}{p-1}\|\bar{\mathbf{x}}^{*}-\mathbf{x}_{0}\|_{p}\leq\frac{\epsilon}{2},&\text{
if }p\in(1,2]\\\
{\lambda}\|\bar{\mathbf{x}}^{*}-\mathbf{x}_{0}\|_{p}^{p-1}\leq\frac{\epsilon}{2},&\text{
if }p\in(2,\infty).\end{cases}$
Using Eq. (20), it suffices that:
$\lambda=\begin{cases}\frac{\epsilon(p-1)}{2\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}},&\text{
if }p\in(1,2],\\\
\frac{\epsilon}{2\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}^{p-1}},&\text{ if
}p\in(2,\infty).\end{cases}$ (23)
Using the choice of $\lambda$ from Eq. (23), it remains to apply Theorem 2.3
to bound the number of iterations until
$\bar{f}(\mathbf{y}_{k})-\bar{f}(\bar{\mathbf{x}}^{*})\leq\big{(}\frac{\epsilon}{2}\big{)}^{\frac{\kappa}{\kappa-1}}\frac{\kappa-1}{L^{\frac{1}{\kappa-1}}\kappa}.$
Applying Theorem 2.3, we have:
$k=O\bigg{(}\Big{(}\frac{2^{\frac{\kappa}{\kappa-1}}L^{\frac{1}{\kappa-1}}\kappa}{\epsilon^{\frac{\kappa}{\kappa-1}}(\kappa-1)}\Big{)}^{\frac{q-\kappa}{q\kappa-q+\kappa}}\Big{(}\frac{L^{\frac{q}{\kappa}}}{\lambda}\Big{)}^{\frac{\kappa}{q\kappa-q+\kappa}}\log\Big{(}\frac{2^{\frac{\kappa}{\kappa-1}}L^{2}\kappa\psi_{p}(\bar{\mathbf{x}}^{*})}{\epsilon^{\frac{\kappa}{\kappa-1}}(\kappa-1)}\Big{)}\bigg{)}.$
It remains to plug in the choice of $\lambda$ from Eq. (23),
$q=\max\\{p,2\\},$ and simplify. ∎
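To make the $p=\kappa=2$ case concrete, the following sketch solves a hypothetical least-squares instance (the matrix `A`, vector `b`, and the closed-form solve are illustrative stand-ins for running Generalized AGD+ on $\bar{f}$) with the $\lambda$ of Eq. (23), and checks that the resulting unregularized gradient norm is at most $\epsilon$.

```python
import numpy as np

# Sketch of the p = kappa = 2 case: regularize f with (lam/2)*||x - x0||_2^2
# using the lambda from Eq. (23), and check that the minimizer of the
# regularized objective has a small *unregularized* gradient.
# A, b define a hypothetical objective f(x) = 0.5*||Ax - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

def grad_f(x):
    return A.T @ (A @ x - b)

x0 = np.zeros(10)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized minimizer
eps = 1e-3
lam = eps / (2 * np.linalg.norm(x_star - x0))   # Eq. (23) with p = 2

# Exact minimizer of f_bar = f + (lam/2)*||x - x0||^2 (closed form here,
# standing in for a run of Generalized AGD+).
x_bar = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b + lam * x0)

# grad f(x_bar) = -lam*(x_bar - x0); by Eq. (20) its norm is at most eps/2.
assert np.linalg.norm(grad_f(x_bar)) <= eps
```

Since the exact minimizer of $\bar{f}$ is available in closed form here, the assertion follows directly from $\nabla f(\bar{\mathbf{x}}^{*})=-\lambda(\bar{\mathbf{x}}^{*}-\mathbf{x}_{0})$ together with Eq. (20).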
###### Remark 3.2.
Observe that, as the gradient norm minimization relies on the application of
Theorem 2.3, the knowledge of parameters $L$ and $\kappa$ is not needed, as
discussed in Section 2.2. The only parameter that needs to be determined is
$\lambda,$ which cannot be known in advance, as it would require knowing the
initial distance to the optimum, $\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}.$ However,
tuning $\lambda$ can be done at the cost of an additional
$\log(\frac{\lambda_{0}}{\lambda})$ multiplicative factor in the convergence
bound. In particular, one could start with a large estimate of $\lambda$ (say,
$\lambda=\lambda_{0}=1$), run the algorithm, and halt and restart with
$\lambda\leftarrow\lambda/2$ each time
$\|\nabla\bar{f}(\mathbf{y}_{k})\|_{*}\leq 2\epsilon$ but $\|\nabla
f(\mathbf{y}_{k})\|_{*}>\epsilon.$ This condition is sufficient because, when
$\lambda$ is of the correct order,
$\lambda\|\nabla\psi(\mathbf{y}_{k})\|_{*}=O(\lambda\|\nabla\psi(\bar{\mathbf{x}}^{*})\|_{*})=O(\epsilon)$,
$\|\nabla f(\mathbf{y}_{k})\|_{*}\leq\epsilon,$ and
$\|\nabla\bar{f}(\mathbf{y}_{k})\|_{*}\leq\|\nabla
f(\mathbf{y}_{k})\|_{*}+\lambda\|\nabla\psi(\mathbf{y}_{k})\|_{*}\leq
O(\epsilon).$
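A minimal sketch of this halving scheme, under the simplifying assumption that the inner solver returns the exact minimizer of the regularized objective (a hypothetical least-squares instance stands in for $f$, and the closed-form solve stands in for a run of the actual algorithm):

```python
import numpy as np

# Restart scheme from Remark 3.2: start with lambda = lambda_0 = 1 and halve
# lambda each time the regularized gradient is small but the true one is not.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
x0 = np.zeros(8)
eps = 1e-4

def grad_f(x):                               # f(x) = 0.5*||Ax - b||^2
    return A.T @ (A @ x - b)

lam = 1.0                                    # lambda_0: large initial estimate
while True:
    # minimizer of f + (lam/2)*||x - x0||^2 (exact solve, for illustration)
    y = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b + lam * x0)
    grad_bar = grad_f(y) + lam * (y - x0)    # gradient of regularized objective
    if np.linalg.norm(grad_bar) <= 2 * eps and np.linalg.norm(grad_f(y)) > eps:
        lam /= 2                             # lambda too large: halve, restart
    else:
        break

assert np.linalg.norm(grad_f(y)) <= eps
```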
## 4 Lower Bounds
In this section, we address the question of the optimality of our algorithmic
framework, in a formal oracle model of computation. We first study the
question of minimizing the norm of the gradient, which follows from a simple
reduction to the complexity of minimizing the objective function and for which
nearly tight lower bounds are known. In this case, the lower bounds show that
our resulting algorithms are nearly optimal when $q=\kappa=2$. In cases where
either we have weaker smoothness ($\kappa<2$) or larger uniform convexity
exponent ($q>2$), we observe the presence of polynomial gaps in the complexity
w.r.t. $1/\epsilon$.
One natural question regarding the aforementioned gaps is whether they are due
to the suboptimality of the complementary composite minimization algorithm
used, or to the reduction that converts the solution obtained by this method
into a point with small gradient norm. In this respect, we rule out the first possibility,
showing sharp lower bounds for complementary composite optimization in a new
composite oracle model. Our lower bounds show that the complementary composite
minimization algorithms are optimal up to factors which depend at most
logarithmically on the initial distance to the optimal solution, the target
accuracy, and dimension.
Before proceeding to the specific results, we provide a short summary of the
classical oracle complexity in convex optimization and some techniques that
will be necessary for our results. For more detailed information on the
subject, we refer the reader to the thorough monograph of Nemirovskii and
Yudin (1983). In the oracle model of convex optimization, we consider a class
of objectives ${\cal F}$, comprised of functions $f:\mathbf{E}\to\mathbb{R}$;
an oracle ${\cal O}:{\cal F}\times\mathbf{E}\to\mathbf{F}$ (where $\mathbf{F}$
is a vector space); and a target accuracy, $\epsilon>0$. An algorithm ${\cal
A}$ can be described by a sequence of functions $({\cal
A}_{k})_{k\in\mathbb{N}}$, where ${\cal
A}_{k+1}:(\mathbf{E}\times\mathbf{F})^{k+1}\to\mathbf{E}$, so that the
algorithm sequentially interacts with the oracle querying points
$\mathbf{x}^{k+1}={\cal A}_{k+1}(\mathbf{x}^{0},{\cal
O}(f,\mathbf{x}^{0}),\ldots,\mathbf{x}^{k},{\cal O}(f,\mathbf{x}^{k})).$
The running time of algorithm ${\cal A}$ is given by the minimum number of
queries needed to achieve some measure of accuracy up to a given target
$\epsilon>0$, and will be denoted by $T({\cal A},f,\epsilon)$. The most
classical example in optimization is achieving additive optimality gap bounded
by $\epsilon$:
$T({\cal A},f,\epsilon)=\inf\\{k\geq 0:f(\mathbf{x}^{k})\leq
f^{\ast}+\epsilon\\},$
but another goal relevant to our work is achieving a (dual) norm of the
gradient bounded above by $\epsilon$:
$T({\cal A},f,\epsilon)=\inf\\{k\geq 0:\|\nabla
f(\mathbf{x}^{k})\|_{\ast}\leq\epsilon\\}.$
Given a measure of efficiency $T$, the worst-case oracle complexity for a
problem class ${\cal F}$ endowed with oracle ${\cal O}$, is given by
$\mbox{Compl}({\cal F},{\cal O},\epsilon)=\inf_{\cal A}\sup_{f\in{\cal
F}}T(\mathcal{A},f,\epsilon).$
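The interaction protocol above can be sketched as follows; the quadratic objective and the fixed-step update rule are illustrative choices, not part of the formal model.

```python
import numpy as np

# Sketch of the oracle model: the algorithm sees only oracle answers
# O(f, x^k), and T(A, f, eps) counts queries until the accuracy criterion
# holds. Here f(x) = 0.5*||x||^2, so the gradient oracle returns x itself.
def oracle(x):                      # O(f, x) = grad f(x) = x for this f
    return x

def algorithm_step(history):        # A_{k+1}: maps past (query, answer) pairs
    x_last, g_last = history[-1]    # to the next query point
    return x_last - 0.5 * g_last

eps = 1e-6
x = np.ones(4)                      # x^0
history = [(x, oracle(x))]
T = 0
while np.linalg.norm(history[-1][1]) > eps:   # stop when ||grad f(x^k)|| <= eps
    x = algorithm_step(history)
    history.append((x, oracle(x)))
    T += 1

assert np.linalg.norm(oracle(x)) <= eps and T > 0
```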
### 4.1 Lower Complexity Bounds for Minimizing the Norm of the Gradient
We provide lower complexity bounds for minimizing the norm of the gradient.
For the sake of simplicity, we can think of these lower bounds for the oracle
${\cal O}(f,x)=\nabla f(\mathbf{x})$, but we point out they work more
generally for arbitrary local oracles (more on this in the next section).
In short, we reduce the problem of making the gradient small to that of
approximately minimizing the objective.
###### Proposition 4.1.
Let $f:\mathbf{E}\to\mathbb{R}$ be a convex and differentiable function, with
a global minimizer $\mathbf{x}^{\ast}$. Then, if $\|\nabla
f(\mathbf{x})\|_{\ast}\leq\epsilon$ and $\|\mathbf{x}-\mathbf{x}^{\ast}\|\leq
R$, then $f(\mathbf{x})-f(\mathbf{x}^{\ast})\leq\epsilon R$.
###### Proof.
By convexity of $f$,
$f(\mathbf{x})-f(\mathbf{x}^{\ast})\leq\langle\nabla
f(\mathbf{x}),\mathbf{x}-\mathbf{x}^{\ast}\rangle\leq\|\nabla
f(\mathbf{x})\|_{\ast}\|\mathbf{x}-\mathbf{x}^{\ast}\|\leq\epsilon R,$
where the second inequality is by duality of norms $\|\cdot\|$ and
$\|\cdot\|_{*}.$ ∎
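A quick numerical sanity check of Proposition 4.1 on a hypothetical convex quadratic (with the Euclidean norm, which is its own dual):

```python
import numpy as np

# Check Proposition 4.1 for f(x) = 0.5*||Ax - b||^2: if ||grad f(x)||_* <= eps
# and ||x - x*|| <= R, then f(x) - f(x*) <= eps * R.
rng = np.random.default_rng(2)
A = rng.standard_normal((15, 6))
b = rng.standard_normal(15)

def f(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2

def grad_f(x):
    return A.T @ (A @ x - b)

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
for _ in range(100):
    x = x_star + 1e-4 * rng.standard_normal(6)   # random points near x*
    eps = np.linalg.norm(grad_f(x))              # take eps, R tight at x
    R = np.linalg.norm(x - x_star)
    assert f(x) - f(x_star) <= eps * R + 1e-12   # tolerance for round-off
```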
For the classical problem of minimizing the objective function value, lower
complexity bounds for $\ell_{p}$-setups have been previously studied in both
constrained (Guzmán and Nemirovski, 2015) and unconstrained (Diakonikolas and
Guzmán, 2020) settings. Here we summarize those results. (More precisely, to
obtain this result one can use the $p$-norm smoothing construction from Guzmán
and Nemirovski (2015, Section 2.3), in combination with the norm term used in
Diakonikolas and Guzmán (2020, Eq. (3)). This would lead to a smooth objective
over an unconstrained domain that provides a hard function class.)
###### Theorem 4.2 ((Guzmán and Nemirovski, 2015; Diakonikolas and Guzmán,
2020)).
Let $1\leq p\leq\infty$, and consider the problem class of unconstrained
minimization with objectives in the class ${\cal
F}_{\mathbb{R}^{d},\|\cdot\|_{p}}({\kappa},L)$, whose minima are attained in
${\cal B}_{\|\cdot\|_{p}}(0,R)$. Then, the complexity of achieving additive
optimality gap $\epsilon$, for any local oracle, is bounded below by:
* •
$\Omega\Big{(}\Big{(}\frac{LR^{\kappa}}{\epsilon[\ln
d]^{\kappa-1}}\Big{)}^{\frac{2}{3\kappa-2}}\Big{)}$ if $1\leq p<2$;
* •
$\Omega\Big{(}\Big{(}\frac{LR^{\kappa}}{\epsilon\min\\{p,\ln
d\\}^{\kappa-1}}\Big{)}^{\frac{p}{\kappa p+\kappa-p}}\Big{)}$, if $2\leq
p<\infty$; and,
* •
$\Omega\Big{(}\Big{(}\frac{LR^{\kappa}}{\epsilon[\ln
d]^{\kappa-1}}\Big{)}^{\frac{1}{\kappa-1}}\Big{)}$, if $p=\infty$.
For the lower bound to hold, the dimension $d$ must be at least as large as
the lower bound itself.
By combining the reduction from Proposition 4.1 with the lower bounds for
function minimization from Theorem 4.2, we can now immediately obtain lower
bounds for minimizing the $\ell_{p}$ norm of the gradient, as follows.
###### Corollary 4.3.
Let $1\leq p\leq\infty$, and consider the problem class with objectives in
${\cal F}_{\mathbb{R}^{d},\|\cdot\|_{p}}({\kappa},L)$, whose minima are
attained in ${\cal B}_{\|\cdot\|_{p}}(0,R)$. Then, the complexity of achieving
the dual norm of the gradient bounded by $\epsilon$, for any local oracle, is
bounded below by:
* •
$\Omega\Big{(}\Big{(}\frac{LR^{\kappa-1}}{\epsilon[\ln
d]^{\kappa-1}}\Big{)}^{\frac{2}{3\kappa-2}}\Big{)}$ if $1\leq p<2$;
* •
$\Omega\Big{(}\Big{(}\frac{LR^{\kappa-1}}{\epsilon\min\\{p,\ln
d\\}^{\kappa-1}}\Big{)}^{\frac{p}{\kappa p+\kappa-p}}\Big{)}$, if $2\leq
p<\infty$; and,
* •
$\Omega\Big{(}\Big{(}\frac{LR^{\kappa-1}}{\epsilon[\ln
d]^{\kappa-1}}\Big{)}^{\frac{1}{\kappa-1}}\Big{)}$, if $p=\infty$.
For the lower bound to hold, the dimension $d$ must be at least as large as
the lower bound itself.
Comparing to the upper bounds from Theorem 3.1, it follows that for
$p\in(1,2]$ and $\kappa=2$, our bound is optimal up to a
$\log(d)\log(\frac{LR}{(p-1)\epsilon})$ factor; i.e., it is near-optimal.
Recall that the upper bound for $p=1$ can be obtained by applying the result
from Theorem 3.1 with $p=\log(d)/[\log d-1].$ When $p>2$ and $\kappa=2$, our
upper bound is larger than the lower bound by a factor
$\Big{(}\frac{LR}{\epsilon}\Big{)}^{\frac{p-2}{p+2}}\log(\frac{LR}{\epsilon})(\min\\{p,\log(d)\\})^{\frac{p}{p+2}}$.
The reason for the suboptimality in the $p>2$ regime comes from the polynomial
in $1/\epsilon$ factors in the upper bound for complementary composite
minimization from Section 2, and it is a limitation of the regularization
approach used in this work to obtain bounds for the norm of the gradient. In
particular, we believe that it is not possible to obtain tighter bounds via an
alternative analysis by using the same regularization approach. Thus, it is an
interesting open problem to obtain tight bounds for $p>2$, and it may require
developing completely new techniques. Similar complexity gaps are encountered
when $\kappa<2;$ however, it is reasonable to suspect that here the lower
bounds are not sharp. In particular, when $\kappa=1$ points with small
subgradients may not even exist, which is not at all reflected in the lower
bound. Therefore, it is an interesting open problem to investigate how to
strengthen these lower bounds for weakly smooth function classes.
### 4.2 Lower Complexity Bounds for Complementary Composite Minimization
We investigate the (sub)optimality of the composite minimization algorithm in
an oracle complexity model. To accurately reflect how our algorithms work
(namely, using gradient information on the smooth term and regularized
proximal subproblems w.r.t. the uniformly convex term), we introduce a new
problem class and oracle for the complementary composite problem. We observe
that existing constructions in the literature of lower bounds for nonsmooth
uniformly convex optimization (e.g., Juditsky and Nesterov (2014); Srebro and
Sridharan (2012)) apply to our composite setting for $\kappa=1$. The main idea
of the lower bounds in this section is to combine these constructions with
local smoothing, to obtain composite functions that match our assumptions.
###### Assumptions 4.4.
Consider the problem class ${\cal P}({\cal F}_{\|\cdot\|}(L,\kappa),{\cal
U}_{\|\cdot\|}(\lambda,q),R)$, given by composite objective functions
$(P_{f,\psi})~{}~{}~{}\min_{\mathbf{x}\in\mathbf{E}}[\bar{f}(\mathbf{x})=f(\mathbf{x})+\psi(\mathbf{x})],$
with the following assumptions:
1. (A.1)
$f\in{\cal F}_{\|\cdot\|}(L,\kappa)$;
2. (A.2)
$\psi\in{\cal U}_{\|\cdot\|}(\lambda,q)$; and,
3. (A.3)
the optimal solution of $(P_{f,\psi})$ is attained within ${\cal
B}_{\|\cdot\|}(0,R)$.
The problem class is additionally endowed with oracles ${\cal O}_{\cal F}$ and
${\cal O}_{\cal U}$, for function classes ${\cal F}_{\|\cdot\|}(L,\kappa)$ and
${\cal U}_{\|\cdot\|}(\lambda,q)$, respectively; which satisfy
1. (O.1)
${\cal O}_{\cal F}$ is a local oracle: if $f,g\in{\cal
F}_{\|\cdot\|}(L,\kappa)$ are such that there exists $r>0$ such that they
coincide in a neighborhood ${\cal B}_{\|\cdot\|}(\mathbf{x},r)$, then ${\cal
O}_{\cal F}(\mathbf{x},f)={\cal O}_{\cal F}(\mathbf{x},g)$; and,
2. (O.2)
${\cal O}_{\cal U}$ is an arbitrary oracle (not necessarily local).
In brief, we are interested in the oracle complexity of achieving
$\epsilon$-optimality gap for the family of problems $(P_{f,\psi})$, where
$f\in{\cal F}_{\|\cdot\|}(L,\kappa)$ is endowed with a local oracle,
$\psi\in{\cal U}_{\|\cdot\|}(\lambda,q)$ is endowed with any oracle, and the
optimal solution of problem $(P_{f,\psi})$ lies in ${\cal
B}_{\|\cdot\|}(0,R)$. A simple observation is that in the case $\lambda=0$,
our model coincides with the classical oracle model, which was discussed in the
previous section. The goal now is to prove a more general lower complexity
bound for the composite model.
Before proving the theorem, we first provide some building blocks in this
construction, borrowed from past work of Guzmán and Nemirovski (2015);
Diakonikolas and Guzmán (2020). In particular, our lower bound works generally
for $q$-uniformly convex and locally smoothable spaces.
###### Assumptions 4.5.
Given the normed space $(\mathbf{E},\|\cdot\|)$, we consider the following
properties:
1. 1.
$\psi(\mathbf{x})=\frac{1}{q}\|\mathbf{x}\|^{q}$ is $q$-uniformly convex with
constant $\bar{\lambda}$ w.r.t. $\|\cdot\|$.
2. 2.
The space $(\mathbf{E},\|\cdot\|)$ is $(\kappa,\eta,\eta,\bar{\mu})$-locally
smoothable. That is, there exists a mapping ${\cal S}:{\cal
F}_{(\mathbf{E},\|\cdot\|)}(0,1)\to{\cal
F}_{(\mathbf{E},\|\cdot\|)}(\kappa,\overline{\mu})$ (denoted as the smoothing
operator in (Diakonikolas and Guzmán, 2020, Definition 2)), such that $\|{\cal
S}f-f\|_{\infty}\leq\eta$, and this operator preserves the equality of
functions when they coincide in a ball of radius $2\eta$; i.e., if $f|_{{\cal
B}_{\|\cdot\|}(0,2\eta)}=g|_{{\cal B}_{\|\cdot\|}(0,2\eta)}$ then ${\cal
S}f|_{{\cal B}_{\|\cdot\|}(0,\eta)}={\cal S}g|_{{\cal
B}_{\|\cdot\|}(0,\eta)}.$
3. 3.
There exists $\Delta>0$ and vectors
$\mathbf{z}^{1},\ldots,\mathbf{z}^{M}\in\mathbf{E}$ with
$\|\mathbf{z}^{i}\|_{\ast}\leq 1$, such that for all
$s_{1},\ldots,s_{M}\in\\{-1,+1\\}^{M}$
$\inf_{\bm{\alpha}\in\bm{\Delta}_{M}}\Big{\|}\sum_{i\in[M]}\alpha_{i}s_{i}\mathbf{z}^{i}\Big{\|}_{\ast}\geq\Delta,$
(24)
where
$\bm{\Delta}_{M}=\\{\bm{\alpha}\in\mathbb{R}_{+}^{M}:\sum_{i}\alpha_{i}=1\\}$
is the discrete probability simplex in $M$-dimensions.
The three assumptions in Assumption 4.5 are common in the literature, and can
be intuitively understood as follows. The first is the existence of a simple
function that we can use as the uniformly convex term in the composite model.
The second appeared in (Guzmán and Nemirovski, 2015), and provides a simple
way to reduce the complexity of smooth convex optimization to its nonsmooth
counterpart. We emphasize there is a canonical way to construct smoothing
operators, which is stated in Observation 4.6 below. Finally, the third
assumption comes from the hardness constructions in nonsmooth convex
optimization in Nemirovskii and Yudin (1983), which are given by piecewise
linear objectives that are learned one by one by an adversarial argument. The
fact that the resulting piecewise linear function has a sufficiently negative
optimal value (for any adversarial choice of signs) can be directly obtained
by minimax duality from Eq. (24).
We point out that $\ell_{p}^{d}$ satisfies the assumptions above when $2\leq
p<\infty$.
###### Observation 4.6 ((Guzmán and Nemirovski, 2015)).
Let $2\leq p<\infty$ and $\eta>0$, and consider the space
$\ell_{p}^{d}=(\mathbb{R}^{d},\|\cdot\|_{p})$. We now verify Assumption
4.5 for $q=p$, $\bar{\lambda}=1$, $\bar{\mu}=2^{2-\kappa}(\min\\{p,\ln
d\\}/\eta)^{\kappa-1}$ and $\Delta=1/M^{1/p}$. Indeed,
1. 1.
The $p$-uniform convexity of $\psi$ was discussed after Eq. (18).
2. 2.
The smoothing operator can be obtained by infimal convolution, with kernel
function $\phi(\mathbf{x})=2\|\mathbf{x}\|_{r}^{2}$ (with $r=\min\\{p,3\ln
d\\}$). We recall that the infimal convolution of two functions $f$ and $\phi$
is given by
$(f\square\phi)(\mathbf{x})=\inf_{\mathbf{h}\in{\cal
B}_{p}(0,1)}[f(\mathbf{x}+\mathbf{h})+\phi(\mathbf{h})].$
The infimal convolution above can be adapted to obtain arbitrary uniform
approximation to $f$ and the preservation of equality of functions (see
(Guzmán and Nemirovski, 2015, Section 2.2) for details).
3. 3.
Letting $\mathbf{z}^{i}=\mathbf{e}_{i}$, $i\in[M]$ be the first $M$ canonical
vectors, we have
$\Big{\|}\sum_{i\in[M]}\alpha_{i}s_{i}\mathbf{z}^{i}\Big{\|}_{p_{\ast}}=\|\bm{\alpha}\|_{p_{\ast}}\geq
M^{1/p_{\ast}-1}\|\bm{\alpha}\|_{1}=M^{-1/p}.$
This bound is achieved when $\alpha_{i}=1/M$, for all $i$.
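This computation can be verified numerically; the parameters below ($p=4$, $M=8$) are arbitrary illustrative choices, and $p_{\ast}=p/(p-1)$ denotes the dual exponent.

```python
import numpy as np

# Verify item 3: with z^i = e_i and arbitrary signs s_i, the dual norm of
# sum_i alpha_i * s_i * z^i over the simplex is minimized at alpha_i = 1/M,
# where it equals Delta = M^{-1/p}.
p, M = 4, 8
p_star = p / (p - 1)
rng = np.random.default_rng(3)
s = rng.choice([-1.0, 1.0], size=M)

def dual_norm(alpha):
    return np.linalg.norm(alpha * s, ord=p_star)  # signs do not change norms

Delta = M ** (-1.0 / p)
uniform = np.full(M, 1.0 / M)
assert abs(dual_norm(uniform) - Delta) < 1e-12    # minimum attained at 1/M

for _ in range(200):                 # random points of the simplex Delta_M
    a = rng.random(M)
    a /= a.sum()
    assert dual_norm(a) >= Delta - 1e-12
```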
Before proving the result for $\ell_{p}$-spaces, we provide a general lower
complexity bound for the composite setting, which we will later apply to
derive the lower bounds for $\ell_{p}$ setups.
###### Lemma 4.7.
Let $(\mathbf{E},\|\cdot\|)$ be a normed space that satisfies Assumption 4.5
and let ${\cal P}({\cal F}_{\|\cdot\|}(L,\kappa),{\cal
U}_{\|\cdot\|}(\lambda,q),R)$ be a class of complementary composite problems
that satisfies Assumption 4.4. Suppose the following relations between
parameters are satisfied:
1. (a)
$2qL\bar{\lambda}/[\lambda\bar{\mu}]\leq R^{q-1}$.
2. (b)
$(M+3)\eta\leq 4R$.
3. (c)
$\frac{L}{4\bar{\mu}}(M+7)\eta\leq\frac{1}{2q_{\ast}}\big{(}\frac{L\Delta}{\bar{\mu}}\big{)}^{q_{\ast}}\big{(}\frac{\bar{\lambda}}{\lambda}\big{)}^{\frac{1}{q-1}}$.
Then, the worst-case optimality gap for the problem class is bounded below by
$\frac{1}{2q_{\ast}}\Big{(}\frac{L\Delta}{\bar{\mu}}\Big{)}^{q_{\ast}}\Big{(}\frac{\bar{\lambda}}{\lambda}\Big{)}^{\frac{1}{q-1}}.$
###### Proof.
Given $M\in\mathbb{N}$, scalars $\delta_{1},\ldots,\delta_{M}>0$, and
$s_{1},\ldots,s_{M}\in\\{-1,+1\\}$, we consider the functions
$f_{s}(\mathbf{x})=\frac{L}{\bar{\mu}}{\cal S}\Big{(}\max_{i\in[M]}[\langle
s_{i}\mathbf{z}^{i},\cdot\rangle-\delta_{i}]\Big{)}(\mathbf{x}),$
and
$\bar{f}_{s}(\mathbf{x})=f_{s}(\mathbf{x})+(\lambda/\bar{\lambda})\psi(\mathbf{x})$,
where $\psi$ is given by Assumption 4.5.
We now show the composite objective $\bar{f}_{s}$ satisfies Assumption 4.4.
Properties (A.1) and (A.2) are clearly satisfied. Regarding (A.3), we prove
next that the optimum of these functions lies in ${\cal B}_{\|\cdot\|}(0,R)$.
For this, notice that by Assumption 4.5, Property 2:
$\displaystyle\bar{f}_{s}(\mathbf{x})$ $\displaystyle\geq$
$\displaystyle\frac{L}{\bar{\mu}}\max_{i\in[M]}[\langle
s_{i}\mathbf{z}^{i},\mathbf{x}\rangle-\delta_{i}]-\frac{L\eta}{\bar{\mu}}+\frac{\lambda}{q\bar{\lambda}}\|\mathbf{x}\|^{q}$
$\displaystyle\geq$
$\displaystyle\|\mathbf{x}\|\Big{[}\frac{\lambda}{\bar{\lambda}q}\|\mathbf{x}\|^{q-1}-\frac{L}{\bar{\mu}}\Big{]}-\frac{L}{\bar{\mu}}(\eta+\max_{i}\delta_{i}).$
We will later show that $\eta+\max_{i}\delta_{i}\leq(M+3)\eta/4\leq R$ (the
last inequality by (b)), hence for $\|\mathbf{x}\|\geq R$
$\bar{f}_{s}(\mathbf{x})\geq\Big{(}\frac{\lambda}{\bar{\lambda}q}\|\mathbf{x}\|^{q-1}-\frac{2L}{\bar{\mu}}\Big{)}\|\mathbf{x}\|\geq
0,$
where the last inequality follows from (a). To conclude the verification of
Assumption (A.3), we now prove that
$\min_{\mathbf{x}\in\mathbf{E}}\bar{f}_{s}(\mathbf{x})<0$. Again, by
Assumption 4.5, Property 2:
$\displaystyle\inf_{\mathbf{x}\in\mathbf{E}}\bar{f}_{s}(\mathbf{x})$
$\displaystyle\leq$
$\displaystyle\inf_{\mathbf{x}\in\mathbf{E}}\Big{(}\frac{L}{\bar{\mu}}\max_{i\in[M]}[\langle
s_{i}\mathbf{z}^{i},x\rangle-\delta_{i}]+\frac{L}{\bar{\mu}}\eta+\frac{\lambda}{q\bar{\lambda}}\|\mathbf{x}\|^{q}\Big{)}$
$\displaystyle=$
$\displaystyle\max_{\bm{\alpha}\in\bm{\Delta}_{M}}\inf_{x\in\mathbf{E}}\Big{(}\Big{\langle}\frac{L}{\overline{\mu}}\sum_{i\in[M]}\alpha_{i}s_{i}\mathbf{z}^{i},x\Big{\rangle}+\frac{\lambda}{q\bar{\lambda}}\|\mathbf{x}\|^{q}-\frac{L}{\overline{\mu}}\sum_{i\in[M]}\alpha_{i}\delta_{i}+\frac{L}{\overline{\mu}}\eta\Big{)}$
$\displaystyle=$
$\displaystyle\max_{\bm{\alpha}\in\bm{\Delta}_{M}}-\frac{1}{q_{\ast}}\Big{(}\frac{L}{\bar{\mu}}\Big{)}^{q_{\ast}}\Big{(}\frac{\bar{\lambda}}{\lambda}\Big{)}^{\frac{1}{q-1}}\Big{\|}\sum_{i\in[M]}\alpha_{i}s_{i}\mathbf{z}^{i}\Big{\|}_{\ast}^{q_{\ast}}-\frac{L}{\overline{\mu}}\sum_{i\in[M]}\alpha_{i}\delta_{i}+\frac{L}{\overline{\mu}}\eta$
$\displaystyle=$
$\displaystyle-\frac{1}{q_{\ast}}\Big{(}\frac{L}{\bar{\mu}}\Big{)}^{q_{\ast}}\Big{(}\frac{\bar{\lambda}}{\lambda}\Big{)}^{\frac{1}{q-1}}\Delta^{q_{\ast}}+\frac{L}{\overline{\mu}}\eta.$
Notice that the second step above follows from the Sion Minimax Theorem (Sion,
1958). We conclude that the optimal value of $(P_{f,\psi})$ is negative by
(c).
Following the arguments provided in Guzmán and Nemirovski (2015, Proposition
2), one can prove that for any algorithm interacting with oracle ${\cal
O}_{\cal F}$, after $M$ steps there exists a choice of
$s_{1},\ldots,s_{M}\in\\{-1,+1\\}^{M}$ such that
$\min_{t\in[M]}f_{s}(\mathbf{x}^{t})\geq\frac{L}{\bar{\mu}}[-\eta-\max_{i\in[M]}\delta_{i}];$
further, for this adversarial argument it suffices that
$\min_{i\in[M]}\delta_{i}=0$, and $\max_{i\in[M]}\delta_{i}\geq(M-1)\eta/4$.
We conclude that the optimality gap after $M$ steps is bounded below by
$\min_{t\in[M]}\bar{f}_{s}(\mathbf{x}^{t})-\min_{\mathbf{x}\in\mathbf{E}}\bar{f}_{s}(\mathbf{x})\geq-\frac{L}{4\bar{\mu}}(M+7)\eta+\frac{1}{q_{\ast}}\Big{(}\frac{L}{\bar{\mu}}\Big{)}^{q_{\ast}}\Big{(}\frac{\bar{\lambda}}{\lambda}\Big{)}^{\frac{1}{q-1}}\Delta^{q_{\ast}}\geq\frac{1}{2q_{\ast}}\Big{(}\frac{L\Delta}{\bar{\mu}}\Big{)}^{q_{\ast}}\Big{(}\frac{\bar{\lambda}}{\lambda}\Big{)}^{\frac{1}{q-1}},$
where we used the third bound from the statement. ∎
We now proceed to the lower bounds for $\ell_{p}$-setups, with $2\leq
p\leq\infty$.
###### Theorem 4.8.
Consider the space $\ell_{p}^{d}=(\mathbb{R}^{d},\|\cdot\|_{p})$, where $2\leq
p<\infty$. Then, the oracle complexity of problem class ${\cal P}:={\cal
P}({\cal F}_{\|\cdot\|}(L,\kappa),{\cal U}_{\|\cdot\|}(\lambda,p),R)$,
comprised of composite problems in the form $(P_{f,\psi})$ under Assumptions
4.4, is bounded below by
$\mathrm{Compl}({\cal P},({\cal O}_{\cal F},{\cal
O}_{\psi}),\epsilon)\geq\left\\{\begin{array}[]{ll}\Big{\lfloor}\sqrt{\frac{L}{2\lambda}}-7\Big{\rfloor}&\mbox{
if }p=\kappa=2,\,\epsilon<2\sqrt{2\lambda
L}R^{2}\min\\{\frac{2\lambda}{L},1\\}\\\ \frac{C(p,\kappa)}{\min\\{p,\ln
d\\}^{2(\kappa-1)}}\left(\frac{L^{p}}{\lambda^{\kappa}\epsilon^{p-\kappa}}\right)^{\frac{1}{\kappa
p+\kappa-p}}&\mbox{ if }1\leq\kappa<p,\,p\in[2,\infty],\,\mbox{and
}\lambda\geq\tilde{\lambda}.\end{array}\right.$
where
$C(p,\kappa):=\left(\Big{(}\frac{p-1}{p}\Big{)}^{\kappa(p-1)}2^{\frac{(p-\kappa)(1-2p)+(\kappa-1)p(2p-3)}{(p-1)}}\right)^{\frac{1}{\kappa
p+\kappa-p}}$ is bounded below by an absolute constant, and
$\tilde{\lambda}:=C\max\left\\{\min\\{p,\ln
d\\}^{3}\Big{(}\dfrac{\epsilon^{\kappa}}{LR}\Big{)}^{\frac{1}{\kappa-1}},\min\\{p,\ln
d\\}^{5}\left(\dfrac{\epsilon^{p}}{L^{(p+1)}R^{\frac{(p-1)(\kappa
p+\kappa-p)}{(\kappa-1)}}}\right)^{\frac{\kappa-1}{\kappa p+1-p}}\right\\},$
(25)
with $C>0$ is a universal constant.
In particular, our lower bounds show that the algorithm presented in the
previous section, and specifically the rates stated in Theorem 2.3, is nearly
optimal. In the case $p=\kappa=2$, the gap between upper and lower bounds is
only given by a factor which grows at most logarithmically in
$L\phi(\bar{\mathbf{x}}^{\ast})/\epsilon$, and in the case $\kappa<p$, the gap
is $O\big{(}\log(L\phi(\bar{\mathbf{x}}^{\ast})/\epsilon)/\min\\{p,\ln
d\\}^{\Theta(1)}\big{)}$. In both cases, the gaps are quite moderate, so the
proposed algorithm is proved to be nearly optimal. Finally, we would also like
to emphasize that the constant $C(p,\kappa)=\Theta(1)$, as a function of
$1<\kappa\leq 2$ and $2\leq p\leq\infty$. Therefore, the lower bounds also
apply to the case $p=\infty$.
###### Proof of Theorem 4.8.
By Observation 4.6, in the case of $\ell_{p}^{d}$, with $2\leq p<\infty$,
Assumption 4.5 is satisfied if $q=p$, $\Delta=1/M^{1/p}$,
$\overline{\lambda}=1$, and $\bar{\mu}=2^{2-\kappa}(\min\\{p,\ln
d\\}/\eta)^{\kappa-1}$ (for given $\eta>0$). This way, hypotheses (a), (b),
(c) in Lemma 4.7 become
1. (a)
$\eta\leq\frac{\min\\{p,\ln d\\}}{2}\big{(}\frac{\lambda
R^{p-1}}{pL}\big{)}^{\frac{1}{\kappa-1}}$.
2. (b)
$(M+3)\eta\leq 4R$.
3. (c)
$\eta^{p-\kappa}\leq\frac{2^{p+\kappa-3}L}{p_{\ast}^{(p-1)}\min\\{p,\ln
d\\}^{(\kappa-1)}\lambda M(M+7)^{(p-1)}}$.
Case 1: $p=\kappa=2$. In order to satisfy (c), it suffices to choose
$M=\Big{\lfloor}\sqrt{\frac{L}{2\lambda}}-7\Big{\rfloor}.$ Given such choice,
to satisfy (a), (b) of the lemma, we can choose
$\eta=\min\Big{\\{}\frac{\lambda R}{2L},\frac{4R}{M+3}\Big{\\}}\geq
R\sqrt{\frac{2\lambda}{L}}\min\Big{\\{}\frac{1}{4}\sqrt{\frac{2\lambda}{L}},4\Big{\\}}.$
Now, under the conditions imposed above, the lemma provides an optimality gap
lower bound of
$\displaystyle\frac{1}{4\lambda}\Big{(}\frac{L\eta}{2\sqrt{M}}\Big{)}^{2}\geq
2\sqrt{2\lambda L}R^{2}\min\Big{\\{}\frac{2\lambda}{L},1\Big{\\}}.$
In conclusion, if $\epsilon<2\sqrt{2\lambda L}R^{2}\min\\{2\lambda/L,1\\}$,
then
$\mathrm{Compl}({\cal P},({\cal O}_{\cal F},{\cal
O}_{\psi}),\epsilon)\geq\Big{\lfloor}\sqrt{\frac{L}{2\lambda}}\Big{\rfloor}-1.$
Case 2: $p>\kappa$ (where $1<\kappa\leq 2,$ $2\leq p<\infty$). Here, to ensure
(a), (b) it suffices that
$\eta\leq\min\Big{\\{}\frac{4R}{M+3},\frac{\min\\{p,\ln
d\\}}{2}\Big{(}\frac{\lambda
R^{p-1}}{p}\Big{)}^{\frac{1}{\kappa-1}}\Big{\\}}.$ (26)
We will later certify these conditions hold. On the other hand, for (c) it
suffices to let
$\eta=\Big{[}\Big{(}\frac{p-1}{p}\Big{)}^{p-1}\frac{2^{p+\kappa-3}L}{\lambda\min\\{p,\ln
d\\}^{\kappa-1}M(M+7)^{p-1}}\Big{]}^{\frac{1}{p-\kappa}}.$
Then by Lemma 4.7 the optimality gap is bounded below as
$\displaystyle\frac{1}{2p_{\ast}}\Big{(}\frac{L^{p}\eta^{p(\kappa-1)}}{2^{p(2-\kappa)}\lambda
M\min\\{p,\ln d\\}^{p(\kappa-1)}}\Big{)}^{\frac{1}{p-1}}$ $\displaystyle=$
$\displaystyle\left[\Big{(}\frac{p-1}{p}\Big{)}^{\kappa(p-1)}2^{\frac{(p-\kappa)(1-2p)+(\kappa-1)p(2p-3)}{(p-1)}}\cdot\frac{L^{p}}{\min\\{p,\ln
d\\}^{\frac{p(\kappa-1)(\kappa
p-2\kappa+1)}{p-1}}\lambda^{\kappa}(M+7)^{\kappa
p+\kappa-p}}\right]^{\frac{1}{p-\kappa}}.$
Let
$C(p,\kappa):=\left(\Big{(}\frac{p-1}{p}\Big{)}^{\kappa(p-1)}2^{\frac{(p-\kappa)(1-2p)+(\kappa-1)p(2p-3)}{(p-1)}}\right)^{\frac{1}{\kappa
p+\kappa-p}}$. In particular, if $\epsilon$ is smaller than the gap above,
resolving for $M$ gives
$\mathrm{Compl}({\cal P},({\cal O}_{\cal F},{\cal O}_{\psi}),\epsilon)\geq
M=\frac{C(p,\kappa)}{\min\\{p,\ln
d\\}^{2(\kappa-1)}}\left(\frac{L^{p}}{\lambda^{\kappa}\epsilon^{p-\kappa}}\right)^{\frac{1}{\kappa
p+\kappa-p}},$ (27)
where we further simplified the bound, noting that $\frac{p(\kappa-1)(\kappa
p-2\kappa+1)}{(p-1)(p\kappa+\kappa-p)}\leq 2(\kappa-1)$.
Now, given the chosen value of $M$, we will verify that (26) holds. For this,
we note that (26) is implied by the following pair of inequalities
$\displaystyle\lambda$ $\displaystyle\geq$ $\displaystyle
C^{\prime}(p,\kappa)\min\\{p,\ln
d\\}^{(\kappa-1)(2\kappa-1)}\Big{(}\dfrac{\epsilon^{\kappa}}{LR}\Big{)}^{\frac{1}{\kappa-1}}$
(28) $\displaystyle\lambda$ $\displaystyle\geq$ $\displaystyle
C^{\prime\prime}(p,\kappa)\min\\{p,\ln
d\\}^{5}\left(\dfrac{\epsilon^{p}}{L^{(p+1)}R^{\frac{(p-1)(\kappa
p+\kappa-p)}{(\kappa-1)}}}\right)^{\frac{\kappa-1}{\kappa p+1-p}}$ (29)
where $C^{\prime}(p,\kappa)$ and $C^{\prime\prime}(p,\kappa)$ are bounded
below by a universal positive constant $C>0$. Therefore, there exists a
universal constant $C>0$ such that if $\lambda$ satisfies Eqs. (28) and (29)
with $C^{\prime}(p,\kappa)$ and $C^{\prime\prime}(p,\kappa)$ replaced by $C$,
then the lower complexity bound from Eq. (27) holds. ∎
###### Remark 4.9.
Observe that the lower bounds from Theorem 4.8 apply only when $\lambda$ is
sufficiently large. This is consistent with the behavior of our algorithm,
which for small values of $\lambda$ attains an iteration complexity matching
the classical smooth setting (as if the uniform convexity of the objective
were ignored).
## 5 Applications
We now provide some interesting applications of the results from Sections 2
and 3 to different regression problems. In typical applications, the data
matrix $\mathbf{A}$ is assumed to have fewer rows than columns, so that the
system $\mathbf{A}\mathbf{x}=\mathbf{b}$, where $\mathbf{b}$ is the vector of
labels, is underdetermined, and one seeks a sparse solution $\mathbf{x}^{*}$
that provides a good linear fit between the data and the labels.
### 5.1 Elastic Net
One of the simplest applications of our framework is to the elastic net
regularization, introduced by Zou and Hastie (2005). Elastic net regularized
problems are of the form:
$\min_{\mathbf{x}\in\mathbb{R}^{d}}f(\mathbf{x})+\frac{\lambda_{2}}{2}\|\mathbf{x}\|_{2}^{2}+\lambda_{1}\|\mathbf{x}\|_{1},$
i.e., the elastic net regularization combines the lasso and ridge
regularizers. The function $f$ is assumed to be $(L,2)$-weakly smooth (i.e.,
$L$-smooth) w.r.t. the Euclidean norm $\|\cdot\|_{2}$. It is typically chosen
as either the linear least squares or the logistic loss.
We can apply results from Section 2 to this problem for $q=\kappa=2,$ choosing
$\psi(\mathbf{x})=\frac{\lambda}{2}\|\mathbf{x}\|_{2}^{2}$ and
$\phi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}-\mathbf{x}_{0}\|_{2}^{2}.$ Observe
that our algorithm only needs to solve subproblems of the form
$\min_{\mathbf{x}\in\mathbb{R}^{d}}\Big{\\{}\left\langle\mathbf{z},\mathbf{x}\right\rangle+\frac{\lambda^{\prime\prime}}{2}\|\mathbf{x}\|_{2}^{2}+\lambda^{\prime}\|\mathbf{x}\|_{1}\Big{\\}},$
for fixed vectors $\mathbf{z}\in\mathbb{R}^{d}$ and fixed parameters
$\lambda^{\prime},\lambda^{\prime\prime}$, which is computationally
inexpensive, as the problem under the min is separable.
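For intuition, this separable subproblem has a closed-form solution by coordinate-wise soft-thresholding; a minimal sketch (the function and variable names are ours, not from the paper):

```python
import numpy as np

def elastic_net_subproblem(z, lam1, lam2):
    """Minimize <z, x> + (lam2/2)*||x||_2^2 + lam1*||x||_1 over x.

    The objective is separable; per-coordinate optimality conditions give
    x_i = -S(z_i, lam1) / lam2, where S is the soft-thresholding operator."""
    return -np.sign(z) * np.maximum(np.abs(z) - lam1, 0.0) / lam2

z = np.array([3.0, -0.5, -2.0])
x = elastic_net_subproblem(z, lam1=1.0, lam2=2.0)
# Coordinates with |z_i| <= lam1 are set exactly to zero.
print(x)
```

Each coordinate is solved independently, which is what makes the per-iteration cost of the method negligible for this regularizer.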
Applying Theorem 2.3, the elastic net regularized problems can be solved to
any accuracy $\epsilon>0$ using
$k=O\bigg{(}\min\bigg{\\{}\sqrt{\frac{L}{\lambda_{2}}}\log\bigg{(}\frac{L\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{2}}{\epsilon}\Big{)},\;\sqrt{\frac{L\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{2}^{2}}{\epsilon}}\bigg{\\}}\bigg{)}$
iterations, where $\mathbf{x}^{*}\in\mathbb{R}^{d}$ is the problem minimizer.
### 5.2 Bridge Regression
Bridge regression problems were originally introduced by Frank and Friedman
(1993), and are defined by
$\displaystyle\min_{\begin{subarray}{c}\mathbf{x}\in\mathbb{R}^{d}:\\\
\|\mathbf{x}\|_{p}\leq t\end{subarray}}$
$\displaystyle\;\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}$ (30)
where $t$ is a positive scalar, $p\in[1,2],$ $\mathbf{A}$ is the matrix of
observations, and $\mathbf{b}$ is the vector of labels. In particular, for
$p=1,$ the problem reduces to lasso, while for $p=2$ we recover ridge
regression.
Bridge regression has traditionally been used either as an interpolation
between lasso and ridge regression, or to model Bayesian priors with the
exponential power distribution (see Park and Casella (2008) and Hastie et al.
(2009, Section 3.4.3)). The problem is often posed in the equivalent (due to
Lagrangian duality) penalized (or regularized) form:
$\min_{\mathbf{x}\in\mathbb{R}^{d}}\Big{\\{}\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}+\frac{\lambda}{p}\|\mathbf{x}\|_{p}^{p}\Big{\\}}.$
The regularizer is typically written as $\frac{1}{p}\|\mathbf{x}\|_{p}^{p}$
due to its separable form. However, using a different parametrization,
the problem from Eq. (30) is also equivalent to
$\min_{\mathbf{x}\in\mathbb{R}^{d}}\Big{\\{}\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}+\frac{\lambda}{2}\|\mathbf{x}\|_{p}^{2}\Big{\\}},$
(31)
which is more convenient for the application of our results, as
$\frac{1}{2}\|\mathbf{x}\|_{p}^{2}$ is $(p-1)$-strongly convex w.r.t.
$\|\cdot\|_{p}$.
Further, looking at the gradient $\nabla
f(\mathbf{x})=\mathbf{A}^{T}\mathbf{A}\mathbf{x}-\mathbf{A}^{T}\mathbf{b}$ of
$f(\mathbf{x})=\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}$, it is
not hard to argue that $f(\mathbf{x})$ is $L_{p}$-smooth w.r.t.
$\|\cdot\|_{p},$ where $L_{p}=\|\mathbf{A}^{T}\mathbf{A}\|_{p\to
p_{\ast}}=\sup_{\mathbf{x}\in\mathbb{R}^{d}:\|\mathbf{x}\|_{p}\neq
0}\frac{\|\mathbf{A}^{T}\mathbf{A}\mathbf{x}\|_{p_{\ast}}}{\|\mathbf{x}\|_{p}}$.
Namely, this follows as
$\|\nabla f(\mathbf{x})-\nabla
f(\mathbf{y})\|_{p_{\ast}}=\|\mathbf{A}^{T}\mathbf{A}(\mathbf{x}-\mathbf{y})\|_{p_{\ast}}\leq\|\mathbf{A}^{T}\mathbf{A}\|_{p\to
p_{\ast}}\|\mathbf{x}-\mathbf{y}\|_{p}.$
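For $p=2$ the constant reduces to the spectral norm of $\mathbf{A}^{T}\mathbf{A}$, and the Lipschitz bound on the gradient can be checked numerically on a random instance; an illustrative sketch (not part of the original analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))
b = rng.standard_normal(20)

grad = lambda v: A.T @ (A @ v - b)   # gradient of f(x) = (1/2)||Ax - b||_2^2
L2 = np.linalg.norm(A.T @ A, 2)      # ||A^T A||_{2->2}, i.e. sigma_max(A)^2

x, y = rng.standard_normal(8), rng.standard_normal(8)
lhs = np.linalg.norm(grad(x) - grad(y))
rhs = L2 * np.linalg.norm(x - y)
print(lhs <= rhs)  # True
```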
An interesting feature of the formulation in Eq. (31) is that it implies a
certain trade-off between the $p_{\ast}$-fit of the data and the $p$-norm of
the regressor. Namely, if $\bar{\mathbf{x}}^{*}$ solves the problem from Eq.
(31), then
$\|\mathbf{A}^{T}(\mathbf{A}\bar{\mathbf{x}}^{*}-\mathbf{b})\|_{p_{\ast}}=\lambda\|\bar{\mathbf{x}}^{*}\|_{p}.$
(32)
This simply follows by setting the gradient of
$\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}+\frac{\lambda}{2}\|\mathbf{x}\|_{p}^{2}$
to zero, and using that
$\big{\|}\nabla\big{(}\frac{1}{2}\|\mathbf{x}\|_{p}^{2}\big{)}\big{\|}_{p_{\ast}}=\|\mathbf{x}\|_{p},$
$\forall\mathbf{x}\in\mathbb{R}^{d}$ (see Proposition 1.5).
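For the special case $p=2$ (ridge regression), the identity in Eq. (32) can be verified directly from the closed-form minimizer; a quick numerical sanity check (the instance is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.7

# Closed-form ridge minimizer of (1/2)||Ax - b||_2^2 + (lam/2)||x||_2^2.
x_star = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)

# Eq. (32) with p = p_* = 2: ||A^T (A x* - b)||_2 equals lam * ||x*||_2,
# since the stationarity condition gives A^T (A x* - b) = -lam * x*.
lhs = np.linalg.norm(A.T @ (A @ x_star - b))
rhs = lam * np.linalg.norm(x_star)
print(np.isclose(lhs, rhs))  # True
```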
More recently, related problems of the form
$\min_{\mathbf{x}\in\mathbb{R}^{d}}\Big{\\{}\sqrt{\ell(\mathbf{x},\mathbf{A},\mathbf{b})}+\lambda^{\prime}\|\mathbf{x}\|_{p}\Big{\\}},$
where $\ell(\mathbf{x},\mathbf{A},\mathbf{b})$ is a more general loss
function, have been used in distributionally robust optimization (see Blanchet
et al. (2019)). Again, a different parametrization of the same problem leads
to the equivalent form
$\min_{\mathbf{x}\in\mathbb{R}^{d}}\Big{\\{}{\ell(\mathbf{x},\mathbf{A},\mathbf{b})}+\frac{\lambda}{2}\|\mathbf{x}\|_{p}^{2}\Big{\\}},$
(33)
and our results can be applied as long as
$\ell(\mathbf{x},\mathbf{A},\mathbf{b})$ is $L_{p}$-smooth w.r.t.
$\|\cdot\|_{p}$. Note that, by the inequalities relating $\ell_{p}$-norms,
any function that is $L$-smooth w.r.t. $\|\cdot\|_{2}$, is also $L$-smooth
w.r.t. $\|\cdot\|_{p}$ for $p\in[1,2]$. That is, for $p\in[1,2],$ the
smoothness parameter w.r.t. $\|\cdot\|_{p}$ can only be lower than the
smoothness parameter w.r.t. $\|\cdot\|_{2}$, often being significantly lower.
A direct application of our result from Theorem 2.3 tells us that we can
approximate the problem from Eq. (31) with accuracy $\epsilon>0$ using
$k=O\bigg{(}\min\bigg{\\{}\sqrt{\frac{L_{p}}{\lambda(p-1)}}\log\Big{(}\frac{L_{p}\|\bar{\mathbf{x}}^{*}-\mathbf{x}_{0}\|_{p}}{\epsilon}\Big{)},\;\sqrt{\frac{L_{p}\|\bar{\mathbf{x}}^{*}-\mathbf{x}_{0}\|_{p}^{2}}{\epsilon}}\bigg{\\}}\bigg{)}$
(34)
iterations of Generalized AGD+ from Eq. (7).
Further, using Corollary 2.4, we get that within the same number of iterations
the output point $\mathbf{y}_{k}$ of the algorithm satisfies
$\|\mathbf{y}_{k}-\bar{\mathbf{x}}^{*}\|_{p}\leq\sqrt{\frac{2\epsilon}{\lambda(p-1)}}.$
Additionally, for quadratic losses, using triangle inequality and Eq. (32), we
have the following “goodness of fit” guarantee
$\displaystyle\|\mathbf{A}^{T}(\mathbf{A}\mathbf{y}_{k}-\mathbf{b})\|_{p_{\ast}}\leq\|\mathbf{A}^{T}\mathbf{A}(\mathbf{y}_{k}-\bar{\mathbf{x}}^{*})\|_{p_{\ast}}+\lambda\|\bar{\mathbf{x}}^{*}\|_{p}\leq
L_{p}\sqrt{\frac{2\epsilon}{\lambda(p-1)}}+\lambda\|\bar{\mathbf{x}}^{*}\|_{p}.$
Finally, note that it is possible to apply our algorithm to $\ell_{1}$
regularized problems (lasso), applying results from Theorem 2.3 with
$\psi(\mathbf{x})=\lambda\|\mathbf{x}\|_{1}$ and
$\phi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}-\mathbf{x}_{0}\|_{2}^{2}.$ In this
case, as $\psi$ is not strongly convex, the resulting bound is
$k=O\Big{(}\sqrt{\frac{L_{2}\|\bar{\mathbf{x}}^{*}-\mathbf{x}_{0}\|_{2}^{2}}{\epsilon}}\Big{)}$,
which matches the iteration complexity of FISTA (Beck and Teboulle, 2009).
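For context, the FISTA baseline mentioned above can be sketched in a few lines; this is the standard Beck and Teboulle (2009) scheme, not the Generalized AGD+ algorithm of this paper:

```python
import numpy as np

def fista_lasso(A, b, lam, iters=500):
    """FISTA for min_x (1/2)||Ax - b||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # smoothness constant of the quadratic part
    soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        x_next = soft(y - A.T @ (A @ y - b) / L, lam / L)   # prox-gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + (t - 1.0) / t_next * (x_next - x)      # momentum extrapolation
        x, t = x_next, t_next
    return x

# Usage on a small synthetic sparse-recovery instance.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 15))
x_true = np.zeros(15)
x_true[:3] = [2.0, -1.5, 1.0]
x_hat = fista_lasso(A, A @ x_true, lam=0.1, iters=2000)
```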
### 5.3 Dantzig Selector Problem
The Dantzig selector problem, introduced by Candés and Tao (2007), consists of
solving problems of the form
$\min_{\begin{subarray}{c}\mathbf{x}\in\mathbb{R}^{d}:\\\
\|\mathbf{x}\|_{1}\leq
t\end{subarray}}\|\mathbf{A}^{T}(\mathbf{A}\mathbf{x}-\mathbf{b})\|_{\infty},\quad\text{
or, equivalently
}\quad\min_{\begin{subarray}{c}\mathbf{x}\in\mathbb{R}^{d}:\\\
\|\mathbf{A}^{T}(\mathbf{A}\mathbf{x}-\mathbf{b})\|_{\infty}\leq
t\end{subarray}}\|\mathbf{x}\|_{1},$
where $t$ is some positive parameter.
Similar to other regression problems described in this section, the Dantzig
selector problem can be considered in its unconstrained, regularized form. One
variant of the problem that can be addressed with our algorithm is
$\min_{\mathbf{x}\in\mathbb{R}^{d}}\frac{1}{2}\|\mathbf{A}^{T}(\mathbf{A}\mathbf{x}-\mathbf{b})\|_{p_{\ast}}^{2}+\frac{\lambda}{2}\|\mathbf{x}\|_{p}^{2},$
(35)
where $p$ is chosen sufficiently close to one so that $\|\cdot\|_{p}$ closely
approximates $\|\cdot\|_{1}$ and $\|\cdot\|_{p_{\ast}}$ closely approximates
$\|\cdot\|_{\infty}$, where $\frac{1}{p}+\frac{1}{p_{\ast}}=1$. In particular,
when $p_{\ast}=[\log d]/\ln(1+\epsilon)$, we have that
$(1-\epsilon)\|\mathbf{x}\|_{1}\leq\|\mathbf{x}\|_{p}\leq\|\mathbf{x}\|_{1}$
and
$\|\mathbf{x}\|_{\infty}\leq\|\mathbf{x}\|_{p_{\ast}}\leq(1+\epsilon)\|\mathbf{x}\|_{\infty},$
$\forall\mathbf{x}\in\mathbb{R}^{d}.$
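Taking natural logarithms, these approximation guarantees can be verified numerically for a concrete dimension (an illustrative check; the second chain is applied to the conjugate norm $\|\cdot\|_{p_{\ast}}$):

```python
import numpy as np

d, eps = 1000, 0.1
p_star = np.log(d) / np.log(1 + eps)   # conjugate exponent, ~72.5 here
p = p_star / (p_star - 1)              # p close to 1, with 1/p + 1/p_star = 1

rng = np.random.default_rng(4)
x = rng.standard_normal(d)
lp = lambda v, q: np.sum(np.abs(v) ** q) ** (1.0 / q)

# ||x||_p sandwiches ||x||_1 up to a (1 - eps) factor ...
ok_l1 = (1 - eps) * lp(x, 1) <= lp(x, p) <= lp(x, 1)
# ... and ||x||_{p_star} sandwiches ||x||_inf up to a (1 + eps) factor.
ok_linf = np.max(np.abs(x)) <= lp(x, p_star) <= (1 + eps) * np.max(np.abs(x))
print(ok_l1, ok_linf)  # True True
```

The choice $p_{\ast}=\ln(d)/\ln(1+\epsilon)$ makes $d^{1/p_{\ast}}$ exactly $1+\epsilon$, which is where both factors come from.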
As discussed at the beginning of Section 3, in this case,
$\psi(\mathbf{x})=\frac{\lambda}{2}\|\mathbf{x}\|_{p}^{2}$ is
$\lambda(p-1)=\Theta(\frac{\lambda\epsilon}{\log(d)})$-strongly convex w.r.t.
$\|\cdot\|_{p}$ and, by the relationship between norms, is also strongly
convex w.r.t. $\|\cdot\|_{1}$ with the strong convexity constant of the same
order. Further,
$f(\mathbf{x})=\frac{1}{2}\|\mathbf{A}^{T}(\mathbf{A}\mathbf{x}-\mathbf{b})\|_{p_{\ast}}^{2}$
can be shown to be $L_{1}$-smooth w.r.t. $\|\cdot\|_{1}$, for
$L_{1}=(1+\epsilon)(p_{\ast}-1)A_{\max}=\Theta(\frac{\log{d}}{\epsilon}A_{\max}),$
where $A_{\max}=\max_{1\leq i,j\leq d}|(\mathbf{A}^{T}\mathbf{A})_{ij}|.$ This
can be done as follows. Using that $\frac{1}{2}\|\cdot\|_{p_{\ast}}^{2}$ is
$(p_{\ast}-1)$-smooth w.r.t. $\|\cdot\|_{p_{\ast}}$ (as $p_{\ast}>2$), we have,
$\forall\mathbf{x},\mathbf{y}\in\mathbb{R}^{d},$
$\displaystyle\|\nabla f(\mathbf{x})-\nabla f(\mathbf{y})\|_{\infty}$
$\displaystyle\leq\|\nabla f(\mathbf{x})-\nabla f(\mathbf{y})\|_{p}$
$\displaystyle\leq(p_{\ast}-1)\|(\mathbf{A}^{T}\mathbf{A})(\mathbf{x}-\mathbf{y})\|_{p_{\ast}}$
$\displaystyle\leq(p_{\ast}-1)\|\mathbf{A}^{T}\mathbf{A}\|_{1\to
p_{\ast}}\|\mathbf{x}-\mathbf{y}\|_{1}$
$\displaystyle\leq(p_{\ast}-1)(1+\epsilon)\|\mathbf{A}^{T}\mathbf{A}\|_{1\to\infty}\|\mathbf{x}-\mathbf{y}\|_{1}$
$\displaystyle=(1+\epsilon)(p_{\ast}-1)\max_{1\leq i,j\leq
d}|(\mathbf{A}^{T}\mathbf{A})_{ij}|\cdot\|\mathbf{x}-\mathbf{y}\|_{1}.$
Hence, applying Theorem 2.3, we have that the problem from Eq. (35) can be
approximated to arbitrary additive error $\bar{\epsilon}$ with
$k=O\Big{(}\sqrt{\frac{A_{\max}}{\lambda}}\frac{\log(d)}{\bar{\epsilon}}\log\Big{(}\frac{\log(d)A_{\max}\|\bar{\mathbf{x}}^{*}-\mathbf{x}_{0}\|}{\bar{\epsilon}}\Big{)}\Big{)}$
iterations of Generalized AGD+ from Section 2.
Similar to bridge regression, there is an interesting trade-off between the
$\ell_{1}$ norm of the regressor and goodness of fit revealed by the
formulation we consider (Eq. (35)). In particular, using that at an optimal
solution $\bar{\mathbf{x}}^{*}$ the gradient of the objective from Eq. (35) is
zero and using Proposition 1.5,
$\displaystyle({1-\epsilon})\lambda\|\bar{\mathbf{x}}^{*}\|_{1}\leq{\lambda}\|\bar{\mathbf{x}}^{*}\|_{p}$
$\displaystyle={\lambda}\Big{\|}\nabla\Big{(}\frac{1}{2}\|\bar{\mathbf{x}}^{*}\|_{p}^{2}\Big{)}\Big{\|}_{p_{\ast}}$
$\displaystyle=\Big{\|}\nabla\Big{(}\frac{1}{2}\|\mathbf{A}^{T}(\mathbf{A}\bar{\mathbf{x}}^{*}-\mathbf{b})\|_{p_{\ast}}^{2}\Big{)}\Big{\|}_{p_{\ast}}$
$\displaystyle\leq\|\mathbf{A}^{T}\mathbf{A}\|_{p\to
p_{\ast}}\|\mathbf{A}^{T}(\mathbf{A}\bar{\mathbf{x}}^{*}-\mathbf{b})\|_{p_{\ast}}$
$\displaystyle\leq\frac{1+\epsilon}{1-\epsilon}A_{\max}\|\mathbf{A}^{T}(\mathbf{A}\bar{\mathbf{x}}^{*}-\mathbf{b})\|_{\infty}.$
Hence,
$\lambda\|\bar{\mathbf{x}}^{*}\|_{1}\leq(1+O(\epsilon))A_{\max}\|\mathbf{A}^{T}(\mathbf{A}\bar{\mathbf{x}}^{*}-\mathbf{b})\|_{\infty}.$
As the $\ell_{1}$ norm of the regressor is considered a proxy for sparsity,
this bound provides a trade-off between the parsimony of the model and the
goodness of fit, as a function of the regularization parameter $\lambda$.
### 5.4 $\ell_{p}$ Regression
Standard $\ell_{p}$-regression problems have as their goal finding a vector
$\mathbf{x}^{*}$ that minimizes $\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{p},$
where $p\geq 1.$ When $p=1$ or $p=\infty,$ this problem can be solved using
linear programming. More generally, when $p\notin\\{1,\infty\\},$ the problem
is nonlinear, and multiple approaches have been developed for solving it,
including, e.g., a homotopy-based solver (Bubeck et al., 2018), solvers based
on iterative refinement (Adil et al., 2019a; Adil and Sachdeva, 2020), and
solvers based on the classical method of iteratively reweighted least squares
(Ene and Vladu, 2019; Adil et al., 2019b). Such solvers typically rely on fast
linear system solves and attain logarithmic dependence on the inverse accuracy
$1/\epsilon,$ at the cost of iteration count scaling polynomially with one of
the dimensions of $\mathbf{A}$ (typically the lower dimension, which is equal
to the number of rows $m$), each iteration requiring a constant number of
linear system solves.
Here, we consider algorithmic setups in which the iteration count is
dimension-independent and no linear system solves are required, but the
dependence on $1/\epsilon$ is polynomial. First, for standard
$\ell_{p}$-regression problems, we can use a non-composite variant of the
algorithm (with $\psi(\cdot)=0$), while relying on the fact that the function
$\frac{1}{q}\|\cdot\|_{p}^{q}$ with $q=\min\\{2,p\\}$ is $(1,p)$-weakly smooth
for $p\in(1,2)$ and $(p-1,2)$-weakly smooth for $p\geq 2.$ Using this fact, it
follows that the function
$f_{p}(\mathbf{x})=\frac{1}{q}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{p}^{q}$
is $(L_{p},q)$-weakly smooth w.r.t. $\|\cdot\|_{p}$, with
$L_{p}=\max\\{p-1,1\\}\|\mathbf{A}\|_{p\to p_{\ast}}^{q-1}.$ On the other
hand, function
$\phi(\mathbf{x})=\frac{1}{\bar{q}\min\\{p-1,1\\}}\|\mathbf{x}-\mathbf{x}_{0}\|_{p}^{\bar{q}}$,
where $\bar{q}=\max\\{2,p\\}$ is $(1,\bar{q})$-uniformly convex w.r.t.
$\|\cdot\|_{p}.$ Thus, applying Theorem 2.3, we find that we can construct a
point $\mathbf{y}_{k}\in\mathbb{R}^{d}$ such that
$f_{p}(\mathbf{y}_{k})-f_{p}(\mathbf{x}^{*})\leq\epsilon,$ where
$\mathbf{x}^{*}\in\operatorname*{argmin}_{\mathbf{x}\in\mathbb{R}^{d}}f_{p}(\mathbf{x}),$
with at most
$k=\begin{cases}O\Big{(}\Big{(}\frac{\|\mathbf{A}\|_{p\to
p_{\ast}}^{p-1}}{\epsilon}\Big{)}^{\frac{2}{3p-2}}\Big{(}\frac{\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}^{2}}{p-1}\Big{)}^{\frac{p}{3p-2}}\Big{)},&\text{
if }p\in(1,2)\\\ O\Big{(}\Big{(}\frac{(p-1)\|\mathbf{A}\|_{p\to
p_{\ast}}}{\epsilon}\Big{)}^{\frac{p}{p+2}}\Big{(}\frac{\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p}^{p}}{p}\Big{)}^{\frac{2}{p+2}}\Big{)},&\text{
if }p\geq 2\end{cases}$
iterations of Generalized AGD+. The same result can be obtained by applying
the iteration complexity-optimal algorithms for smooth minimization over
$\ell_{p}$-spaces (Nemirovskii and Nesterov, 1985; d’Aspremont et al., 2018).
More interesting for our framework is the $\ell_{p}$ regression on correlated
errors, described in the following.
#### $\ell_{p}$-regression on correlated errors.
As argued in Candés and Tao (2007), there are multiple reasons why minimizing
the correlated errors $\mathbf{A}^{T}(\mathbf{A}\mathbf{x}-\mathbf{b})$ in
place of the standard errors $\mathbf{A}\mathbf{x}-\mathbf{b}$ is more
meaningful for many applications. First, unlike standard errors, correlated
errors are invariant to orthonormal transformations of the data. Indeed, if
$\mathbf{U}$ is a matrix with orthonormal columns, then
$(\mathbf{U}\mathbf{A})^{T}(\mathbf{U}\mathbf{A}\mathbf{x}-\mathbf{U}\mathbf{b})=\mathbf{A}^{T}(\mathbf{A}\mathbf{x}-\mathbf{b})$,
but the same cannot be established for the standard error
$\mathbf{A}\mathbf{x}-\mathbf{b}$. Other reasons involve ensuring that the
model includes explanatory variables that are highly correlated with the data,
which is only possible to argue when working with correlated errors (see
Candés and Tao (2007) for more information).
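The invariance claim is easy to confirm numerically; a small check on a synthetic instance (the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((12, 5))
b = rng.standard_normal(12)
x = rng.standard_normal(5)

# U with orthonormal columns (U^T U = I), obtained from a QR factorization.
U, _ = np.linalg.qr(rng.standard_normal((20, 12)))

# Correlated errors before and after the transformation coincide,
# since (UA)^T (UAx - Ub) = A^T U^T U (Ax - b) = A^T (Ax - b).
corr_err = lambda M, v: M.T @ (M @ x - v)
invariant = np.allclose(corr_err(U @ A, U @ b), corr_err(A, b))
print(invariant)  # True
```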
Within our framework, minimization of correlated errors in $\ell_{p}$-norms
can be reduced to making the gradient small in the $\ell_{p}$-norm; i.e., to
applying results from Section 3. In particular, consider the function:
$f(\mathbf{x})=\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}.$
The gradient of this function is precisely the vector of correlated errors,
i.e., $\nabla f(\mathbf{x})=\mathbf{A}^{T}(\mathbf{A}\mathbf{x}-\mathbf{b}).$
Further, function $f$ is $L_{p_{\ast}}$-smooth w.r.t. $\|\cdot\|_{p_{\ast}}$,
where $L_{p_{\ast}}=\|\mathbf{A}^{T}\mathbf{A}\|_{p_{\ast}\to p}.$
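As a sanity check that the correlated errors are exactly $\nabla f$, one can compare against central finite differences; this is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)
x = rng.standard_normal(4)

f = lambda v: 0.5 * np.linalg.norm(A @ v - b) ** 2
grad = A.T @ (A @ x - b)            # claimed gradient: the correlated errors

# Central finite-difference approximation of the gradient, coordinate by coordinate.
h = 1e-6
fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(4)])
print(np.allclose(grad, fd, atol=1e-4))  # True
```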
Applying the results from Theorem 3.1, it follows that, for any $\epsilon>0,$
we can construct a vector $\mathbf{y}_{k}\in\mathbb{R}^{d}$ with
$\|\mathbf{A}^{T}(\mathbf{A}\mathbf{y}_{k}-\mathbf{b})\|_{p}\leq\epsilon,$
where $\frac{1}{p}+\frac{1}{p_{\ast}}=1,$ with at most
$k=\begin{cases}\widetilde{O}\bigg{(}\Big{(}\frac{\|\mathbf{A}^{T}\mathbf{A}\|_{p_{\ast}\to
p}\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p_{\ast}}}{\epsilon}\Big{)}^{\frac{2}{3p-2}}\bigg{)},&\text{
if }p\in(1,2)\\\
\widetilde{O}\bigg{(}\sqrt{\frac{\|\mathbf{A}^{T}\mathbf{A}\|_{p_{\ast}\to
p}\|\mathbf{x}^{*}-\mathbf{x}_{0}\|_{p_{\ast}}}{\epsilon}}\bigg{)},&\text{ if
}p>2\end{cases}$
iterations of generalized AGD+, where $\widetilde{O}$ hides a factor that is
logarithmic in $1/\epsilon$ and where each iteration takes time linear in the
number of non-zeros of $\mathbf{A}$.
### 5.5 Spectral Variants of Regression Problems
The algorithms we propose in this work are not limited to $\ell_{p}$ settings,
but apply more generally to uniformly convex spaces. A notable example of such
spaces are the Schatten spaces, $\mathscr{S}_{p}:=(\mathbb{R}^{d\times
d},\|\cdot\|_{\mathscr{S},p}),$ where
$\|\mathbf{X}\|_{\mathscr{S},p}=(\sum_{j\in[d]}\sigma_{j}(\mathbf{X})^{p})^{1/p}$
and $\sigma_{1}(\mathbf{X}),\ldots,\sigma_{d}(\mathbf{X})$ are the singular
values of $\mathbf{X}$. In particular, the aforementioned
$\ell_{p}$-regression problems have their natural spectral counterparts, e.g.,
given a linear operator ${\cal A}:\mathbb{R}^{d\times d}\to\mathbb{R}^{k}$,
and $\mathbf{b}\in\mathbb{R}^{k}$,
$\min_{\mathbf{X}\in\mathbb{R}^{d\times d}}\frac{1}{l}\|{\cal
A}\mathbf{X}-\mathbf{b}\|_{q}^{l}+\frac{\lambda}{r}\|\mathbf{X}\|_{\mathscr{S},p}^{r}.$
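For concreteness, the Schatten norm is directly computable from a singular value decomposition; a short helper (ours, for illustration):

```python
import numpy as np

def schatten_norm(X, p):
    """Schatten p-norm: the l_p norm of the vector of singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

X = np.diag([3.0, 4.0])
print(schatten_norm(X, 1))  # nuclear norm: 3 + 4
print(schatten_norm(X, 2))  # Frobenius norm: sqrt(9 + 16)
```

Note that $p=1$ gives the nuclear norm and $p=2$ the Frobenius norm, the two most common special cases.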
The most popular example of such a formulation comes from the nuclear norm
relaxation for low-rank matrix completion (Recht et al., 2010; Chandrasekaran
et al., 2012; Nesterov and Nemirovski, 2013). We observe that the exact
formulation of the problem may vary, but by virtue of Lagrangian relaxation we
can interchangeably consider these different formulations as equivalent
(modulo an appropriate choice of the regularization/constraint parameter).
To apply our algorithms to Schatten norm settings, we observe that the
functions below are $(1,r)$-uniformly convex, with $r=\max\\{2,p\\}$:
$\Psi_{\mathscr{S},p}(\mathbf{X})=\begin{cases}\frac{1}{2(p-1)}\|\mathbf{X}\|_{\mathscr{S},p}^{2},&\text{
if }p\in(1,2],\\\ \frac{1}{p}\|\mathbf{X}\|_{\mathscr{S},p}^{p},&\text{ if
}p\in(2,+\infty).\end{cases}$
On the other hand, notice that more generally than regression problems, for
composite objectives
$f(\mathbf{X})+\lambda\Psi_{\mathscr{S},p}(\mathbf{X}-\mathbf{X}_{0}),$
if the function $f$ is unitarily invariant and convex, there is a well-known
formula for its subdifferential, based on the subdifferential of its vector
counterpart (there is a one-to-one correspondence between unitarily invariant
functions on $\mathbb{R}^{d\times d}$ and absolutely symmetric functions on
$\mathbb{R}^{d}$) (Lewis, 1995). Even if $f$ is not unitarily invariant, in
the case of regression problems the gradients can be computed explicitly. On
the other hand, the regularizer $\Psi_{\mathscr{S},p}$ admits efficiently
computable solutions to problems from Eq. (2), given its unitary invariance
(see, e.g., Beck (2017, Section 7.3.2)).
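For instance, for the nuclear norm ($p=1$) the proximal step reduces to soft-thresholding the singular values; a hedged sketch (the function name is ours):

```python
import numpy as np

def prox_nuclear(X, tau):
    """Proximal operator of tau * ||.||_{S,1}: soft-threshold singular values.

    By unitary invariance, the prox acts only on the spectrum, keeping the
    singular vectors of X fixed (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

X = np.diag([3.0, 0.5])
Y = prox_nuclear(X, 1.0)  # singular values 3 and 0.5 shrink to 2 and 0
```

Because the second singular value falls below the threshold, the output `Y` has rank one, which is exactly the low-rank-inducing behavior that motivates the nuclear norm relaxation.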
Iteration complexity bounds obtained with these regularizers are analogous to
those obtained in the $\ell_{p}$ setting. On the other hand, the lower
complexity bounds proved in Section 4 also apply to Schatten spaces by
diagonal embedding from $\ell_{p}^{d}$, hence all the optimality/suboptimality
results established for $\ell_{p}$ carry over into $\mathscr{S}_{p}$.
## 6 Conclusion and Future Work
We presented a general algorithmic framework for _complementary composite
optimization_ , where the objective function is the sum of two functions with
complementary properties – (weak) smoothness and uniform/strong convexity. The
framework has a number of interesting applications, including in making the
gradient of a smooth function small in general norms and in different
regression problems that frequently arise in machine learning. We also
provided lower bounds that certify near-optimality of our algorithmic
framework for the majority of standard $\ell_{p}$ and $\mathscr{S}_{p}$
setups.
Some interesting questions for future work remain. For example, the
regularization-based approach that we employed for gradient norm minimization
leads to near-optimal oracle complexity bounds only when the objective
function is smooth and the norm of the space is strongly convex (i.e., when
the $p_{\ast}$-norm of the gradient is sought for $p_{\ast}\geq 2$). The
primary reason for this result is that these are the only settings in which
the complementary composite minimization leads to linear convergence. As the
bounds we obtain for complementary composite minimization are near-tight, this
represents a fundamental limitation of the direct regularization-based approach.
It is an open question whether the non-tight bounds for gradient norm
minimization can be improved using some type of recursive regularization, as
in Allen-Zhu (2018). Of course, there are clear challenges in trying to
generalize such an approach to non-Euclidean norms, caused by the fundamental
limitation that non-Euclidean norms cannot be simultaneously smooth and
strongly convex, as discussed at the beginning of the paper. Another
interesting question is whether there exist direct (not regularization-based)
algorithms for minimizing general gradient norms that converge with
(near-)optimal oracle complexity.
## References
* Adil and Sachdeva (2020) Deeksha Adil and Sushant Sachdeva. Faster $p$-norm minimizing flows, via smoothed $q$-norm problems. In _Proc. ACM-SIAM SODA’20_ , 2020.
* Adil et al. (2019a) Deeksha Adil, Rasmus Kyng, Richard Peng, and Sushant Sachdeva. Iterative refinement for $\ell_{p}$-norm regression. In _Proc. ACM-SIAM SODA’19_ , 2019a.
* Adil et al. (2019b) Deeksha Adil, Richard Peng, and Sushant Sachdeva. Fast, provably convergent IRLS algorithm for $p$-norm linear regression. In _Proc. NeurIPS’19_ , 2019b.
* Allen-Zhu (2018) Zeyuan Allen-Zhu. How to make the gradients small stochastically: Even faster convex and nonconvex SGD. In _Proc. NeurIPS’18_ , 2018.
* Ball et al. (1994) Keith Ball, Eric A Carlen, and Elliott H Lieb. Sharp uniform convexity and smoothness inequalities for trace norms. _Inventiones mathematicae_ , 115(1):463–482, 1994\.
* Bauschke et al. (2017) Heinz H Bauschke, Jérôme Bolte, and Marc Teboulle. A descent lemma beyond Lipschitz gradient continuity: First-order methods revisited and applications. _Mathematics of Operations Research_ , 42(2):330–348, 2017.
* Bauschke et al. (2019) Heinz H Bauschke, Jérôme Bolte, Jiawei Chen, Marc Teboulle, and Xianfu Wang. On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity. _Journal of Optimization Theory and Applications_ , 182(3):1068–1087, 2019.
* Beck (2017) Amir Beck. _First-Order Methods in Optimization_. MOS-SIAM Series on Optimization, 2017.
* Beck and Teboulle (2009) Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. _SIAM journal on imaging sciences_ , 2(1):183–202, 2009.
* Blanchet et al. (2019) Jose Blanchet, Yang Kang, and Karthyek Murthy. Robust Wasserstein profile inference and applications to machine learning. _Journal of Applied Probability_ , 56(3):830–857, 2019.
* Borwein et al. (2009) J. Borwein, A. J. Guirao, P. Hájek, and J. Vanderwerff. Uniformly convex functions on Banach spaces. _Proceedings of the AMS_ , 137(3):1081–1091, 2009\.
* Borwein and Zhu (2004) Jonathan M Borwein and Qiji J Zhu. _Techniques of Variational Analysis_. Springer, 2004.
* Boyd and Vandenberghe (2004) Stephen Boyd and Lieven Vandenberghe. _Convex optimization_. Cambridge university press, 2004.
* Bubeck et al. (2018) Sébastien Bubeck, Michael B Cohen, Yin Tat Lee, and Yuanzhi Li. An homotopy method for lp regression provably beyond self-concordance and in input-sparsity time. In _Proc. ACM STOC’18_ , 2018.
* Candés and Tao (2007) Emmanuel Candés and Terence Tao. The dantzig selector: Statistical estimation when $p$ is much larger than $n$. _The annals of Statistics_ , 35(6):2313–2351, 2007.
* Chambolle and Pock (2011) Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. _Journal of Mathematical Imaging and Vision_ , 40(1):120–145, 2011.
* Chandrasekaran et al. (2012) Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky. The convex geometry of linear inverse problems. _Found. Comput. Math._ , 12(6):805–849, 2012\.
* Cohen et al. (2018) Michael Cohen, Jelena Diakonikolas, and Lorenzo Orecchia. On acceleration with noise-corrupted gradients. In _Proc. ICML’18_ , pages 1019–1028, 2018.
* d’Aspremont et al. (2018) Alexandre d’Aspremont, Cristóbal Guzmán, and Martin Jaggi. Optimal affine-invariant smooth minimization algorithms. _SIAM Journal on Optimization_ , 28(3):2384–2405, 2018.
* Devolder et al. (2014) Olivier Devolder, François Glineur, and Yurii Nesterov. First-order methods of smooth convex optimization with inexact oracle. _Mathematical Programming_ , 146(1-2):37–75, 2014\.
* Diakonikolas and Guzmán (2020) Jelena Diakonikolas and Cristóbal Guzmán. Lower bounds for parallel and randomized convex optimization. _Journal of Machine Learning Research_ , 21(5):1–31, 2020.
* Diakonikolas and Orecchia (2019) Jelena Diakonikolas and Lorenzo Orecchia. The approximate duality gap technique: A unified theory of first-order methods. _SIAM Journal on Optimization_ , 29(1):660–689, 2019.
* Dragomir et al. (2019) Radu-Alexandru Dragomir, Adrien Taylor, Alexandre d’Aspremont, and Jérôme Bolte. Optimal complexity and certification of Bregman first-order methods. _arXiv preprint arXiv:1911.08510_ , 2019.
* Ene and Vladu (2019) Alina Ene and Adrian Vladu. Improved convergence for $\ell_{1}$ and $\ell_{\infty}$ regression via iteratively reweighted least squares. In _Proc. ICML’19_ , 2019.
* Frank and Friedman (1993) Ildiko E. Frank and Jerome H. Friedman. A statistical view of some chemometrics regression tools. _Technometrics_ , 35(2):109–135, 1993.
* Gasnikov and Nesterov (2018) Alexander Vladimirovich Gasnikov and Yu E Nesterov. Universal method for stochastic composite optimization problems. _Computational Mathematics and Mathematical Physics_ , 58(1):48–64, 2018.
* Guzmán and Nemirovski (2015) Cristóbal Guzmán and Arkadi Nemirovski. On lower complexity bounds for large-scale smooth convex optimization. _Journal of Complexity_ , 31(1):1–14, 2015.
* Hastie et al. (2009) Trevor Hastie, Robert Tibshirani, and Jerome Friedman. _The elements of statistical learning: data mining, inference, and prediction_. Springer Science & Business Media, 2009.
* He et al. (2015) Niao He, Anatoli B. Juditsky, and Arkadi Nemirovski. Mirror prox algorithm for multi-term composite minimization and semi-separable problems. _Comput. Optim. Appl._ , 61(2):275–319, 2015\.
* Juditsky and Nemirovski (2008) Anatoli Juditsky and Arkadii S Nemirovski. Large deviations of vector-valued martingales in 2-smooth normed spaces. _arXiv preprint arXiv:0809.0813_ , 2008.
* Juditsky and Nesterov (2014) Anatoli Juditsky and Yuri Nesterov. Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization. _Stoch. Syst._ , 4(1):44–80, 2014.
* Kim and Fessler (2020) Donghwan Kim and Jeffrey A Fessler. Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions. _Journal of Optimization Theory and Applications_ , pages 1–28, 2020\.
* Lewis (1995) A.S. Lewis. The convex analysis of unitarily invariant matrix functions. _Journal of Convex Analysis_ , 2(1/2):173–183, 1995.
* Lu et al. (2018) Haihao Lu, Robert M Freund, and Yurii Nesterov. Relatively smooth convex optimization by first-order methods, and applications. _SIAM Journal on Optimization_ , 28(1):333–354, 2018.
* Nemirovskii and Nesterov (1985) Arkadi S Nemirovskii and Yu E Nesterov. Optimal methods of smooth convex minimization. _USSR Computational Mathematics and Mathematical Physics_ , 25(2):21–30, 1985.
* Nemirovskii and Yudin (1983) A. S. Nemirovskii and D. B. Yudin. _Problem Complexity and Method Efficiency in Optimization_. Wiley, 1983.
* Nesterov (2013) Yu Nesterov. Gradient methods for minimizing composite functions. _Mathematical Programming_ , 140(1):125–161, 2013\.
* Nesterov (2015) Yu Nesterov. Universal gradient methods for convex optimization problems. _Mathematical Programming_ , 152(1-2):381–404, 2015.
* Nesterov (2012) Yurii Nesterov. How to make the gradients small. _Optima. Mathematical Optimization Society Newsletter_ , (88):10–11, 2012.
* Nesterov and Nemirovski (2013) Yurii Nesterov and Arkadi Nemirovski. On first-order algorithms for $\ell_{1}$/nuclear norm minimization. _Acta Numerica_ , 22:509–575, 2013.
* Park and Casella (2008) Trevor Park and George Casella. The Bayesian lasso. _Journal of the American Statistical Association_ , 103(482):681–686, 2008.
* Recht et al. (2010) Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. _SIAM review_ , 52(3):471–501, 2010.
* Rockafellar (1970) R. Tyrrell Rockafellar. _Convex analysis_. Princeton Mathematical Series. Princeton University Press, Princeton, N. J., 1970.
* Scheinberg et al. (2014) Katya Scheinberg, Donald Goldfarb, and Xi Bai. Fast first-order methods for composite convex optimization with backtracking. _Foundations of Computational Mathematics_ , 14(3):389–417, 2014.
* Sion (1958) Maurice Sion. On general minimax theorems. _Pacific Journal of Mathematics_ , 8(1):171–176, 1958.
* Srebro and Sridharan (2012) Nathan Srebro and Karthik Sridharan. On convex optimization, fat shattering and learning. unpublished note, 2012.
* Zalinescu (1983) C. Zalinescu. On uniformly convex functions. _Journal of Mathematical Analysis and Applications_ , 95:344–374, 1983.
* Zalinescu (2002) Constantin Zalinescu. _Convex analysis in general vector spaces_. World scientific, 2002.
* Zou and Hastie (2005) Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. _Journal of the royal statistical society: series B (statistical methodology)_ , 67(2):301–320, 2005.
# On the Support of the Wiener Measure for a hypoelliptic diffusion
Marco Carfagnini† Department of Mathematics
University of Connecticut
Storrs, CT 06269, U.S.A<EMAIL_ADDRESS>
###### Abstract.
A support theorem for the law of a hypoelliptic Brownian motion on the
Heisenberg group $\mathbb{H}$ is proven. We consider a control norm associated
to left-invariant vector fields on $\mathbb{H}$, and describe the support in
terms of the space of finite energy horizontal curves.
###### Key words and phrases:
Diffusion processes, Wiener measure, Heisenberg group, hypoelliptic operator
###### 1991 Mathematics Subject Classification:
Primary 58J65, 60H10; Secondary 60J60, 60H05
${\dagger}$ Research was supported in part by NSF Grants
DMS-1712427 and DMS-1954264.
###### Contents
1. 1 Introduction
2. 2 The setting and the main result
1. 2.1 Heisenberg group as Lie group
2. 2.2 Heisenberg group as a sub-Riemannian manifold
3. 2.3 The Wiener measure
4. 2.4 Main result
3. 3 Proof of Theorem 2.14
1. 3.1 Approximation of the hypoelliptic Brownian motion
2. 3.2 Support of the Wiener measure
## 1\. Introduction
The purpose of this paper is to describe the support of the law of a
hypoelliptic diffusion on the Heisenberg group $\mathbb{H}$. The group
$\mathbb{H}$ is the simplest example of a sub-Riemannian manifold, and it
comes with a natural left-invariant distance, the Carnot-Carathéodory distance
$d_{cc}$. This distance is the control distance associated to left-invariant
vector fields on $\mathbb{H}$. We then consider the uniform norm
$\|g\|_{W_{0}\left(\mathbb{H}\right)}:=\max_{0\leqslant t\leqslant 1}|g_{t}|,$
on the path space $W_{0}\left(\mathbb{H}\right)$ of $\mathbb{H}$-valued
continuous curves starting at the identity, where $|\cdot|$ is the homogeneous
norm on $\mathbb{H}$ equivalent to the Carnot-Carathéodory distance $d_{cc}$.
The norm $|\cdot|$ is more explicit and easier to handle than the distance
$d_{cc}$. We refer to Section 2, and to Remark 2.15 for details about the use
of different norms on $W_{0}\left(\mathbb{H}\right)$.
We consider a hypoelliptic Brownian motion $g_{t}$ on $\mathbb{H}$ starting at
the identity, and we describe the support of its law. The support of a general
diffusion was first studied by Stroock and Varadhan in [17], which we will
describe briefly. Suppose $X_{t}$ is an $\mathbb{R}^{d}$-valued diffusion
which is a solution to the stochastic differential equation
(1.1) $dX_{t}=\sigma\left(t,X_{t}\right)\circ
dW_{t}+b\left(t,X_{t}\right)dt,\quad X_{0}=0,$
where $\sigma=\sigma(t,x)$ is a $d\times\ell$ matrix whose entries are
functions of $(t,x)\in[0,1]\times\mathbb{R}^{d}$, and $b=b(t,x)$ is a vector
in $\mathbb{R}^{d}$, and $W_{t}$ is an $\ell$-dimensional Brownian motion. By
$\circ$ we denote the stochastic differential in Stratonovich’s form. We can
view $X_{t}$ as a $W_{0}^{d}$-valued random variable, where $W_{0}^{d}$ is the
set of $\mathbb{R}^{d}$-valued continuous paths $t\rightarrow\gamma(t)$ for
$0\leqslant t\leqslant 1$ with $\gamma(0)=0$. The space $W_{0}^{d}$ is a
Banach space with the uniform topology $\|\gamma\|:=\max_{0\leqslant
t\leqslant 1}|\gamma(t)|_{\mathbb{R}^{d}}$, and the law of $X_{t}$, denoted by
$\mu$, is a probability measure on $W_{0}^{d}$. Let $\mathcal{S}_{\mu}$ be the
support of $\mu$. If we denote by $H$ the subset of $W_{0}^{\ell}$ consisting
of absolutely continuous paths starting at zero, then to any $\phi\in H$ we
can associate a deterministic path $x_{\phi}$ as being the solution to the
ordinary differential equation
(1.2) $\displaystyle
x^{\prime}_{\phi}(t)=\sigma\left(t,x_{\phi}(t)\right)\phi^{\prime}(t)+b\left(t,x_{\phi}(t)\right),$
$\displaystyle x_{\phi}(0)=0.$
We follow [13] and refer to a solution to (1.2) as a controlled system. Then
the set of controlled systems $\left\\{x_{\phi},\;\phi\in H\right\\}$ is dense
in the support of $\mu$, that is,
(1.3) $\mathcal{S}_{\mu}=\overline{\left\\{x_{\phi},\;\phi\in
H\right\\}}^{\infty},$
where the closure is taken in the uniform topology of $W_{0}^{d}$. The
Stroock-Varadhan support theorem (1.3) was first proven in [17] under the
assumption that $\sigma$ is of class $\mathcal{C}^{2}$ in space and
$\mathcal{C}^{1}$ in time, bounded together with its partial derivatives of
order one and two, and that $b$ is globally Lipschitz and bounded. A different
proof, under the same assumptions on $\sigma$ and $b$, can also be found in
[15]. In a series of
papers by Gyöngy [4, 5, 7], and by Gyöngy-Pröhle [8] a stochastic differential
equation driven by a continuous semimartingale is considered, and a support
theorem like (1.3) is proven under the assumption that $\sigma$ and $b$ have
linear growth, and the derivatives of $\sigma$ are bounded. A support theorem
is proven for diffusion processes on Hilbert spaces in [1] and [6]. We also
mention that in [14] a rough paths approach is used, and a support theorem in
the $p$-variational topology is proven.
One can ask under what conditions the closure in (1.3) coincides with the whole
path space $W_{0}^{d}$. This question has been addressed in [13], where the
author gives nearly necessary and sufficient conditions for
(1.4) $W_{0}^{d}=\overline{\left\\{x_{\phi},\;\phi\in H\right\\}}^{\infty}$
to hold.
Our main result is Theorem 2.14 where we prove analogues of (1.3) and (1.4) for
the law of the stochastic process $g_{t}$. Let $H\left(\mathbb{H}\right)$ be
the set of finite energy horizontal curves. Then the support
$\mathcal{S}_{\mu}$ of the law of $g_{t}$ is the closure of
$H\left(\mathbb{H}\right)$ in the
$\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}$-topology. Alternatively, we can
describe $H\left(\mathbb{H}\right)$ as the set of controlled systems.
Note that the hypoelliptic Brownian motion $g_{t}$ can be viewed as an
$\mathbb{R}^{3}$-valued stochastic process, and it satisfies a stochastic
differential equation similar to (1.1). One can then use [8, Theorem 3.1] to
prove a support theorem of the form (1.3) for the law of $g_{t}$. We emphasize that the norm
considered in [8] is the uniform norm $\|\gamma\|:=\max_{0\leqslant t\leqslant
1}|\gamma(t)|_{\mathbb{R}^{3}}$. In the current paper we replace the Euclidean
norm by a control norm associated to left-invariant vector fields on
$\mathbb{H}$, which is more natural and consistent with the underlying
sub-Riemannian structure.
In addition to Theorem 2.14, in Proposition 3.10 we prove an analogue of (1.4),
that is, that
(1.5)
$W_{0}\left(\mathbb{H}\right)=\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}.$
Our proof is explicit and relies on the group structure of $\mathbb{H}$.
The paper is organized as follows. In Section 2 we describe the Heisenberg
group $\mathbb{H}$ and the corresponding sub-Laplacian and hypoelliptic
Brownian motion. We then state the main result of this paper, Theorem 2.14,
where we describe the support of the law of $g_{t}$ in terms of the set of
finite energy horizontal curves. Section 3 contains the proof of Theorem 2.14,
which is divided into two steps. First we construct a family of stochastic
processes $g_{\delta}(t)$ that approximates $g_{t}$ in the sense that the law
$\mu_{\delta}$ of $g_{\delta}(t)$ weakly converges in
$W_{0}\left(\mathbb{H}\right)$ to the law $\mu$ of $g_{t}$. This approximation
is used to prove that
$\mathcal{S}_{\mu}\subset\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}$.
We further study relations between the measures $\mu_{\delta}$ and $\mu$. We
prove that the set $H\left(\mathbb{H}\right)$ of finite energy horizontal
curves has $\mu_{\delta}$-measure one and $\mu$-measure zero, and hence for
each fixed $\delta$ the measures $\mu_{\delta}$ and $\mu$ are singular.
To show the reverse inclusion
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}\subset\mathcal{S}_{\mu}$
we use Theorem 3.6 and an explicit form of the process $g_{t}$. Namely,
$g_{t}=\left(B_{t},A_{t}\right)$, where $B_{t}$ is a two-dimensional standard
Brownian motion and $A_{t}$ is the corresponding Lévy’s stochastic area. Our
proof relies on the classical identity $A_{t}=b_{\tau(t)}$, where $b_{t}$ is a
one-dimensional Brownian motion independent of $B_{t}$, and $\tau(t)$ is a
stopping time. This observation in the proof of Theorem 3.6 is specific to the
Heisenberg group, and we do not expect a similar argument to hold for a more
general Carnot group.
We conclude with the proof of Proposition 3.10 where we show that the set of
horizontal curves is dense in the space of $\mathbb{H}$-valued continuous
curves.
## 2\. The setting and the main result
### 2.1. Heisenberg group as Lie group
The Heisenberg group $\mathbb{H}$ as a set is
$\mathbb{R}^{3}\cong\mathbb{R}^{2}\times\mathbb{R}$ with the group
multiplication given by
$\displaystyle\left(\mathbf{v}_{1},z_{1}\right)\cdot\left(\mathbf{v}_{2},z_{2}\right):=\left(x_{1}+x_{2},y_{1}+y_{2},z_{1}+z_{2}+\frac{1}{2}\omega\left(\mathbf{v}_{1},\mathbf{v}_{2}\right)\right),$
$\displaystyle\text{ where
}\mathbf{v}_{1}=\left(x_{1},y_{1}\right),\mathbf{v}_{2}=\left(x_{2},y_{2}\right)\in\mathbb{R}^{2},$
$\displaystyle\omega:\mathbb{R}^{2}\times\mathbb{R}^{2}\longrightarrow\mathbb{R},$
$\displaystyle\omega\left(\mathbf{v}_{1},\mathbf{v}_{2}\right):=x_{1}y_{2}-x_{2}y_{1}$
is the standard symplectic form on $\mathbb{R}^{2}$. The identity in
$\mathbb{H}$ is $e=(0,0,0)$ and the inverse is given by
$\left(\mathbf{v},z\right)^{-1}=(-\mathbf{v},-z)$.
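These group operations can be checked numerically. The following sketch (Python, illustrative only and not part of the original text; the numerical values are arbitrary) implements the group law above and verifies the group axioms, as well as the failure of commutativity, whose defect lands in the center and equals $\omega(\mathbf{v}_{1},\mathbf{v}_{2})$.

```python
def omega(v1, v2):
    """Standard symplectic form on R^2: omega(v1, v2) = x1*y2 - x2*y1."""
    return v1[0] * v2[1] - v2[0] * v1[1]

def mult(g1, g2):
    """Heisenberg group law (v1, z1) . (v2, z2)."""
    (v1, z1), (v2, z2) = g1, g2
    return ((v1[0] + v2[0], v1[1] + v2[1]), z1 + z2 + 0.5 * omega(v1, v2))

def inverse(g):
    """Inverse (v, z)^{-1} = (-v, -z)."""
    (v, z) = g
    return ((-v[0], -v[1]), -z)

E = ((0.0, 0.0), 0.0)  # identity element e = (0, 0, 0)

g = ((1.0, 2.0), 3.0)
h = ((-0.5, 4.0), 1.0)
k = ((2.0, -1.0), 0.5)

assert mult(mult(g, h), k) == mult(g, mult(h, k))   # associativity
assert mult(g, E) == g and mult(E, g) == g          # identity
assert mult(g, inverse(g)) == E                     # inverse
# non-commutativity shows up in the center: the z-components of g.h and
# h.g differ exactly by omega(v_g, v_h)
assert mult(g, h)[1] - mult(h, g)[1] == omega(g[0], h[0])
```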
The Lie algebra of $\mathbb{H}$ can be identified with the space
$\mathbb{R}^{3}\cong\mathbb{R}^{2}\times\mathbb{R}$ with the Lie bracket
defined by
$\left[\left(\mathbf{a}_{1},c_{1}\right),\left(\mathbf{a}_{2},c_{2}\right)\right]=\left(0,\omega\left(\mathbf{a}_{1},\mathbf{a}_{2}\right)\right).$
The set $\mathbb{R}^{3}\cong\mathbb{R}^{2}\times\mathbb{R}$ with this Lie
algebra structure will be denoted by $\mathfrak{h}$.
Let us now recall some basic notation for Lie groups. Suppose $G$ is a Lie
group, then the left and right multiplication by an element $k\in G$ are
denoted by
$\displaystyle L_{k}:G\longrightarrow G,$ $\displaystyle g\longmapsto
k^{-1}g,$ $\displaystyle R_{k}:G\longrightarrow G,$ $\displaystyle
g\longmapsto gk.$
Recall that the tangent space $T_{e}G$ can be identified with the Lie algebra
$\mathfrak{g}$ of left-invariant vector fields on $G$, that is, vector fields
$X$ on $G$ such that $dL_{k}\circ X=X\circ L_{k}$, where $dL_{k}$ is the
differential of $L_{k}$. More precisely, if $A$ is a vector in $T_{e}G$, then
we denote by $\tilde{A}\in\mathfrak{g}$ the (unique) left-invariant vector
field such that $\tilde{A}(e)=A$. A left-invariant vector field is determined
by its value at the identity, namely,
$\tilde{A}\left(k\right)=dL_{k}\circ\tilde{A}\left(e\right)$.
For the Heisenberg group the differential of left and right multiplication can
be described explicitly as follows.
###### Proposition 2.1.
Let $k=(k_{1},k_{2},k_{3})=(\mathbf{k},k_{3})$ and
$g=(g_{1},g_{2},g_{3})=(\mathbf{g},g_{3})$ be two elements in $\mathbb{H}$.
Then, for every $v=\left(v_{1},v_{2},v_{3}\right)=(\mathbf{v},v_{3})$ in
$T_{g}\mathbb{H}$, the differentials of the left and right multiplication are
given by
$\displaystyle dL_{k}:T_{g}\mathbb{H}\longrightarrow T_{k^{-1}g}\mathbb{H},$
$\displaystyle dR_{k}:T_{g}\mathbb{H}\longrightarrow T_{gk}\mathbb{H},$
$\displaystyle
dL_{k}(v)=\left(v_{1},v_{2},v_{3}+\frac{1}{2}\omega(\mathbf{v},\mathbf{k})\right),$
(2.1) $\displaystyle
dR_{k}(v)=\left(v_{1},v_{2},v_{3}+\frac{1}{2}\omega(\mathbf{v},\mathbf{k})\right).$
### 2.2. Heisenberg group as a sub-Riemannian manifold
The Heisenberg group $\mathbb{H}$ is the simplest non-trivial example of a
sub-Riemannian manifold. We define $X$, $Y$ and $Z$ as the unique
left-invariant vector fields satisfying $X_{e}=\partial_{x}$, $Y_{e}=\partial_{y}$
and $Z_{e}=\partial_{z}$, that is,
$\displaystyle X=\partial_{x}-\frac{1}{2}y\partial_{z},$ $\displaystyle
Y=\partial_{y}+\frac{1}{2}x\partial_{z},$ $\displaystyle Z=\partial_{z}.$
Note that the only non-zero Lie bracket for these left-invariant vector fields
is $[X,Y]=Z$, so the vector fields $\left\\{X,Y\right\\}$ satisfy Hörmander’s
condition. We define the _horizontal distribution_ as
$\mathcal{H}:=\operatorname{span}\left\\{X,Y\right\\}$ fiberwise, thus making
$\mathcal{H}$ a sub-bundle in the tangent bundle $T\mathbb{H}$. To finish the
description of the Heisenberg group as a sub-Riemannian manifold we need to
equip the horizontal distribution $\mathcal{H}$ with an inner product. For any
$p\in\mathbb{H}$ we define the inner product
$\langle\cdot,\cdot\rangle_{\mathcal{H}_{p}}$ on $\mathcal{H}_{p}$ so that
$\left\\{X\left(p\right),Y\left(p\right)\right\\}$ is an orthonormal
(horizontal) frame at any $p\in\mathbb{H}$. Vectors in $\mathcal{H}_{p}$ will
be called _horizontal_ , and the corresponding norm will be denoted by
$\|\cdot\|_{\mathcal{H}_{p}}$.
In addition, Hörmander’s condition ensures that a natural sub-Laplacian on the
Heisenberg group
(2.2) $\Delta_{\mathcal{H}}=X^{2}+Y^{2}$
is a hypoelliptic operator by [9].
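The bracket relation $[X,Y]=Z$ stated above can be verified numerically. The sketch below (Python, illustrative only and not from the paper) encodes each vector field by its coefficient functions with respect to $\partial_{x},\partial_{y},\partial_{z}$ and evaluates the coordinate formula $[V,W]^{i}=\sum_{j}\left(V^{j}\partial_{j}W^{i}-W^{j}\partial_{j}V^{i}\right)$, approximating the first derivatives of the coefficients by central finite differences (exact here up to rounding, since the coefficients are affine).

```python
def X(p):  # X = d/dx - (y/2) d/dz
    x, y, z = p
    return (1.0, 0.0, -0.5 * y)

def Y(p):  # Y = d/dy + (x/2) d/dz
    x, y, z = p
    return (0.0, 1.0, 0.5 * x)

def Z(p):  # Z = d/dz
    return (0.0, 0.0, 1.0)

def bracket(V, W, p, h=1e-6):
    """Lie bracket [V, W] at p via finite differences of the coefficients."""
    out = []
    for i in range(3):
        acc = 0.0
        for j in range(3):
            e = [0.0, 0.0, 0.0]
            e[j] = h
            pp = tuple(a + b for a, b in zip(p, e))
            pm = tuple(a - b for a, b in zip(p, e))
            dW = (W(pp)[i] - W(pm)[i]) / (2 * h)  # d_j W^i
            dV = (V(pp)[i] - V(pm)[i]) / (2 * h)  # d_j V^i
            acc += V(p)[j] * dW - W(p)[j] * dV
        out.append(acc)
    return tuple(out)

p = (0.3, -1.2, 2.0)
b = bracket(X, Y, p)
assert all(abs(b[i] - Z(p)[i]) < 1e-8 for i in range(3))  # [X, Y] = Z
# [X, Z] = 0, since the coefficients of X do not depend on z
assert all(abs(c) < 1e-8 for c in bracket(X, Z, p))
```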
We recall now another important object in sub-Riemannian geometry, namely,
horizontal curves.
###### Notation 2.2.
A curve
$\gamma(t)=\left(x\left(t\right),y\left(t\right),z\left(t\right)\right)$ in
$\mathbb{H}$ will be denoted by
$\left(\mathbf{x}\left(t\right),z\left(t\right)\right)$, and its corresponding
tangent vector $\gamma^{\prime}(t)$ in $T_{\gamma(t)}\mathbb{H}$ will be
denoted by
$\gamma^{\prime}(t)=\left(x^{\prime}\left(t\right),y^{\prime}\left(t\right),z^{\prime}\left(t\right)\right)=\left(\bm{x}^{\prime}\left(t\right),z^{\prime}\left(t\right)\right).$
###### Definition 2.3.
An absolutely continuous path $t\longmapsto\gamma(t)\in\mathbb{H},t\in[0,1]$
is said to be horizontal if $\gamma^{\prime}(t)\in\mathcal{H}_{\gamma(t)}$ for
all $t$, that is, the tangent vector to $\gamma\left(t\right)$ at every point
$\gamma\left(t\right)$ is horizontal. Equivalently we can say that $\gamma$ is
horizontal if
$c\left(t\right):=dL_{\gamma\left(t\right)}\left(\gamma^{\prime}(t)\right)\in\mathcal{H}_{e}$
for a.e. $t$.
Note that for
$\gamma(t)=\left(\mathbf{x}\left(t\right),z\left(t\right)\right)$ we have
(2.3) $\displaystyle
c_{\gamma}\left(t\right):=c\left(t\right)=dL_{\gamma\left(t\right)}\left(\gamma^{\prime}(t)\right)$
$\displaystyle=\left(\mathbf{x}^{\prime}\left(t\right),z^{\prime}\left(t\right)-\frac{1}{2}\omega(\mathbf{x}\left(t\right),\mathbf{x}^{\prime}\left(t\right))\right),$
where we used Proposition 2.1. Equation (2.3) can be used to characterize
horizontal curves in terms of the components as follows. The curve $\gamma$ is
horizontal if and only if
(2.4)
$z^{\prime}(t)-\frac{1}{2}\omega(\mathbf{x}\left(t\right),\mathbf{x}^{\prime}\left(t\right))=0.$
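Condition (2.4) is easy to check in examples. The sketch below (Python, illustrative only) takes the planar curve $\mathbf{x}(t)=(t,t^{2})$ and its horizontal lift, for which the hand-derived closed form is $z(t)=t^{3}/6$ (not taken from the text), verifies (2.4) on a grid, and evaluates the energy defined in (2.5), which for a horizontal curve reduces to $\int_{0}^{1}|\mathbf{x}^{\prime}(s)|^{2}\,ds=7/3$ here, so this lift lies in $H\left(\mathbb{H}\right)$.

```python
def x(t):  return (t, t * t)       # planar curve x(t) = (t, t^2)
def dx(t): return (1.0, 2.0 * t)
def z(t):  return t ** 3 / 6.0     # hand-derived lift: z' = (1/2) omega(x, x')
def dz(t): return t * t / 2.0

def omega(v1, v2):
    return v1[0] * v2[1] - v2[0] * v1[1]

# horizontality (2.4): z'(t) - (1/2) omega(x(t), x'(t)) = 0 on a grid
for k in range(101):
    t = k / 100.0
    assert abs(dz(t) - 0.5 * omega(x(t), dx(t))) < 1e-12

# energy (2.5): for a horizontal curve |c(t)|^2 = |x'(t)|^2 = 1 + 4 t^2,
# and int_0^1 (1 + 4 t^2) dt = 7/3 (midpoint rule below)
n = 100_000
energy = sum(1.0 + 4.0 * ((k + 0.5) / n) ** 2 for k in range(n)) / n
assert abs(energy - 7.0 / 3.0) < 1e-6
```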
###### Definition 2.4.
We say that a horizontal curve
$t\longmapsto\gamma(t)\in\mathbb{H},\,t\in[0,1]$ has finite energy if
(2.5)
$\|\gamma\|_{H\left(\mathbb{H}\right)}^{2}:=\int_{0}^{1}|c_{\gamma}\left(s\right)|^{2}_{\mathcal{H}_{e}}ds=\int_{0}^{1}|dL_{\gamma(s)}\left(\gamma^{\prime}(s)\right)|^{2}_{\mathcal{H}_{e}}ds<\infty.$
We denote by $H\left(\mathbb{H}\right)$ the space of finite energy horizontal
curves starting at the identity. The inner product corresponding to the norm
$\|\cdot\|_{H\left(\mathbb{H}\right)}$ is denoted by
$\langle\cdot,\cdot\rangle_{H\left(\mathbb{H}\right)}$.
Note that the Heisenberg group as a sub-Riemannian manifold comes with a
natural left-invariant distance.
###### Definition 2.5.
For any $g_{1},g_{2}\in\mathbb{H}$ the Carnot-Carathéodory distance is defined
as
$\displaystyle d_{cc}(g_{1},g_{2}):=$
$\displaystyle\inf\left\\{\int_{0}^{1}|c_{\gamma}\left(s\right)|_{\mathcal{H}_{e}}\,ds,\right.$
$\displaystyle\left.\gamma:[0,1]\longrightarrow\mathbb{H},\gamma(0)=g_{1},\gamma(1)=g_{2},\gamma\text{
is horizontal}\right\\}.$
Another consequence of Hörmander’s condition for left-invariant vector fields
$X$, $Y$ and $Z$ is that we can apply the Chow–Rashevskii theorem. As a
result, given two points in $\mathbb{H}$ there exists a horizontal curve
connecting them, and therefore the Carnot-Carathéodory distance is finite on
$\mathbb{H}$. The Carnot-Carathéodory distance defined in Definition 2.5 is an
example of a control distance related to the left-invariant vector fields $X$,
$Y$ and $Z$. We refer for more details to [2, Definition 5.2.2].
In addition to the Carnot-Carathéodory distance on the Heisenberg group, we
will use the following homogeneous distance
(2.6)
$\rho(g_{1},g_{2}):=\left(\|\mathbf{x}_{1}-\mathbf{x}_{2}\|^{4}_{\mathbb{R}^{2}}+|z_{1}-z_{2}+\omega(\mathbf{x}_{1},\mathbf{x}_{2})|^{2}\right)^{\frac{1}{4}},$
which is equivalent to the Carnot-Carathéodory distance, that is, there exist
two positive constants $c$ and $C$ such that
(2.7) $c\rho(g_{1},g_{2})\leqslant d_{cc}(g_{1},g_{2})\leqslant
C\rho(g_{1},g_{2})$
for all $g_{1},g_{2}\in\mathbb{H}$. For more details about control theory,
Carnot-Carathéodory distance, and control distances we refer to [2, Section
5.1].
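A small numerical sketch (Python, illustrative only; assumptions noted in the comments) of the homogeneous distance (2.6): it is symmetric, by the antisymmetry of $\omega$, and it scales linearly under the anisotropic dilations $\delta_{\lambda}(x,y,z)=(\lambda x,\lambda y,\lambda^{2}z)$, which are standard on $\mathbb{H}$ although not introduced in the text above.

```python
def omega(v1, v2):
    return v1[0] * v2[1] - v2[0] * v1[1]

def rho(g1, g2):
    """Homogeneous distance (2.6), implemented exactly as stated."""
    (x1, z1), (x2, z2) = g1, g2
    dx = (x1[0] - x2[0], x1[1] - x2[1])
    return ((dx[0] ** 2 + dx[1] ** 2) ** 2
            + (z1 - z2 + omega(x1, x2)) ** 2) ** 0.25

def dilate(lam, g):
    """Heisenberg dilation delta_lambda (assumed, not defined in the text)."""
    (x, z) = g
    return ((lam * x[0], lam * x[1]), lam * lam * z)

g1 = ((1.0, -2.0), 0.7)
g2 = ((0.3, 0.4), -1.1)

# symmetry follows from the antisymmetry of omega
assert abs(rho(g1, g2) - rho(g2, g1)) < 1e-12

# 1-homogeneity: rho(delta_lam g1, delta_lam g2) = lam * rho(g1, g2)
for lam in (0.5, 2.0, 7.0):
    assert abs(rho(dilate(lam, g1), dilate(lam, g2)) - lam * rho(g1, g2)) < 1e-9
```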
Finally, we need to describe a hypoelliptic Brownian motion with values in
$\mathbb{H}$. This is a stochastic process whose generator is one half of the
sub-Laplacian $\Delta_{\mathcal{H}}$ defined by Equation (2.2).
###### Notation 2.6.
Throughout the paper we use the following notation. Let
$\left(\Omega,\mathcal{F},\mathcal{F}_{t},\mathbb{P}\right)$ be a filtered
probability space. We denote the expectation under $\mathbb{P}$ by
$\mathbb{E}$.
By a standard Brownian motion $\left\\{B_{t}\right\\}_{t\geqslant 0}$ we mean
a continuous adapted $\mathbb{R}$-valued stochastic process defined on
$\left(\Omega,\mathcal{F},\mathcal{F}_{t},\mathbb{P}\right)$ such that for all
$0\leqslant s\leqslant t$, we have that $B_{t}-B_{s}$ is independent of
$\mathcal{F}_{s}$ and has a normal distribution with mean $0$ and the variance
$t-s$.
###### Definition 2.7.
Let $W_{t}=\left(W_{1}(t),W_{2}(t),0\right)$ be an $\mathfrak{h}$-valued
stochastic process, where $\bm{W}_{t}:=\left(W_{1}(t),W_{2}(t)\right)$ is a
standard two-dimensional Brownian motion. A hypoelliptic Brownian motion
$g_{t}=\left(x_{t},y_{t},z_{t}\right)$ on $\mathbb{H}$ is the continuous
$\mathbb{H}$-valued process defined by
(2.8) $g_{t}:=\left(\bm{W}_{t},A_{t}\right),$
where
$A_{t}:=\frac{1}{2}\int_{0}^{t}\omega\left(\bm{W}_{s},d\bm{W}_{s}\right)$ is
Lévy’s stochastic area.
Note that we used the Itô integral in the definition rather than the
Stratonovich integral. However, these two integrals are equal since the
symplectic form $\omega$ is skew-symmetric, and therefore Lévy’s stochastic
area functional is the same for both integrals.
One can also write a stochastic differential equation for
$g_{t}=\left(x_{t},y_{t},z_{t}\right)$,
$g_{0}=\left(0,0,0\right)=e\in\mathbb{H}$. This form is the standard
stochastic differential equation for a Lie group-valued Brownian motion,
namely,
$\displaystyle dL_{g_{t}}\left(dg_{t}\right)=dW_{t},$ $\displaystyle g_{0}=e.$
Equation (2.8) gives an explicit solution to this stochastic differential
equation.
### 2.3. The Wiener measure
We recall here the definition of Wiener measure, and collect some notations
that will be used throughout the paper.
###### Notation 2.8 (Topology on $\mathbb{H}$).
Let $\rho$ be the homogeneous distance defined in (2.6), and for any
$g\in\mathbb{H}$ we denote by $|g|:=\rho(g,e)$ the corresponding norm, which
is a control norm according to [2, Definition 5.1.1]. We
consider the topology on $\mathbb{H}$ whose open balls centered at the
identity are $\left\\{g\in\mathbb{H},\,\>|g|<r\right\\}$.
###### Notation 2.9 (Standard Wiener space).
We denote by
$W_{0}\left(\mathbb{R}^{n}\right)=W_{0}\left([0,1],\mathbb{R}^{n}\right)$ the
space of $\mathbb{R}^{n}$-valued continuous functions starting at $0$. This
space comes with the norm
$\|h\|_{W_{0}\left(\mathbb{R}^{n}\right)}:=\max_{0\leqslant t\leqslant
1}|h(t)|_{\mathbb{R}^{n}},\quad h\in W_{0}\left(\mathbb{R}^{n}\right),$
and the associated distance
$d_{W_{0}\left(\mathbb{R}^{n}\right)}(h,k)=\max_{0\leqslant t\leqslant
1}|h(t)-k(t)|_{\mathbb{R}^{n}}$, where $|\cdot|_{\mathbb{R}^{n}}$ is the
Euclidean norm. If $B_{t}$ is an $n$-dimensional Brownian motion then its law
is a probability measure on $W_{0}\left(\mathbb{R}^{n}\right)$, which we denote by $\nu$.
###### Definition 2.10 (Wiener space over $\mathbb{H}$).
The Wiener space over $\mathbb{H}$, denoted by $W_{0}\left(\mathbb{H}\right)$,
is the space of $\mathbb{H}$-valued continuous functions starting at identity
in $\mathbb{H}$.
Once a norm on $\mathbb{H}$ is fixed, one can introduce a topology on
$W_{0}\left(\mathbb{H}\right)$ in the following way. We endow
$W_{0}\left(\mathbb{H}\right)$ with the following norm
$\|\eta\|_{W_{0}\left(\mathbb{H}\right)}:=\max_{0\leqslant t\leqslant
1}|\eta(t)|,\quad\eta\in W_{0}\left(\mathbb{H}\right),$
and the associated distance is
$d_{W_{0}\left(\mathbb{H}\right)}(\eta,\gamma)=\|\eta^{-1}\gamma\|=\max_{0\leqslant
t\leqslant 1}|\eta(t)^{-1}\gamma(t)|$ for any $\eta,\gamma\in
W_{0}\left(\mathbb{H}\right)$.
###### Definition 2.11.
Let $W_{0}\left(\mathbb{H}\right)$ be the Wiener space over $\mathbb{H}$, and
$g_{t}$ be the hypoelliptic Brownian motion defined by equation (2.8). We call
its law the Wiener measure and we denote it by $\mu$.
The process $g_{t}$ can be viewed as a $W_{0}\left(\mathbb{H}\right)$-valued
random variable, that is,
$\displaystyle g\,:\Omega\longrightarrow
W_{0}\left(\mathbb{H}\right),\qquad\omega\longrightarrow\left\\{t\rightarrow
g_{t}(\omega)\right\\}.$
The measure $\mu$ is then given by
$\mu(E)=\mathbb{P}\left(g^{-1}(E)\right)=\mathbb{P}\left(g\in E\right)$ for
any Borel set $E$ in $W_{0}\left(\mathbb{H}\right)$. We denote the support of
$\mu$ by $\mathcal{S}_{\mu}$, that is, $\mathcal{S}_{\mu}$ is the smallest
closed subset of $W_{0}\left(\mathbb{H}\right)$ having $\mu$-measure one.
###### Remark 2.12.
First observe that even though the hypoelliptic Brownian motion $g_{t}$ is an
$\mathbb{R}^{3}$-valued stochastic process, it is not a Gaussian process, and
its law $\mu$ is not a Gaussian measure on $W_{0}\left(\mathbb{H}\right)$.
Moreover, contrary to the Euclidean case, the space
$W_{0}\left(\mathbb{H}\right)$ is not a Banach space. It is easy to see that
the space $W_{0}\left(\mathbb{H}\right)$ is closed under the norm
$\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}$ but it is not a linear space.
###### Remark 2.13.
Let
$\phi=\left(\phi_{1},\phi_{2},\phi_{3}\right)=\left(\bm{\phi},\phi_{3}\right)\in
H\left(\mathbb{H}\right)$ be a finite energy horizontal curve as in Definition
2.4. Then $\bm{\phi}(t)$ is in the Cameron-Martin space on $\mathbb{R}^{2}$,
that is, $\bm{\phi}(t)$ is an absolutely continuous $\mathbb{R}^{2}$-valued
curve starting at zero such that
$\int_{0}^{1}|\bm{\phi}^{\prime}(s)|_{\mathbb{R}^{2}}^{2}ds<\infty.$
### 2.4. Main result
Now we have all the ingredients needed to state the main result of this paper,
that is, we describe the support of the Wiener measure for the hypoelliptic
Brownian motion $g_{t}$ in terms of horizontal paths.
###### Theorem 2.14.
Let $W_{0}\left(\mathbb{H}\right)$ be the Wiener space over $\mathbb{H}$, and
$\mu$ be the Wiener measure on $W_{0}\left(\mathbb{H}\right)$, and
$H\left(\mathbb{H}\right)$ be the space of horizontal curves with finite
energy. Then
$\mathcal{S}_{\mu}=\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}=W_{0}\left(\mathbb{H}\right),$
that is, the support of $\mu$ coincides with the closure of the set of finite
energy horizontal curves on $\mathbb{H}$ in the
$\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}$-topology, which is the whole
$W_{0}\left(\mathbb{H}\right)$.
###### Remark 2.15.
Theorem 2.14 holds if we consider an equivalent norm on the underlying space
$\mathbb{H}$. Let $\|\cdot\|^{\prime}$ be a norm on
$W_{0}\left(\mathbb{H}\right)$ of the form
$\|\eta\|^{\prime}:=\max_{0\leqslant t\leqslant
1}|\eta(t)|^{\prime},\quad\eta\in W_{0}\left(\mathbb{H}\right),$
for some norm $|\cdot|^{\prime}$ on $\mathbb{H}$. If $|\cdot|^{\prime}$ is
equivalent to $|\cdot|:=\rho(\cdot,e)$ then $\|\cdot\|^{\prime}$ and
$\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}$ are equivalent as well, and hence
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|^{\prime}}=\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}$.
All control norms are equivalent to the norm induced by the Carnot-
Carathéodory distance, see [2, Proposition 5.1.4]. Therefore Theorem 2.14
holds if one considers the norm
$\|\eta\|_{W_{0}\left(\mathbb{H}\right),cc}:=\max_{0\leqslant t\leqslant
1}d_{cc}(\eta(t),e),\quad\forall\eta\in W_{0}\left(\mathbb{H}\right),$
where $d_{cc}$ is the Carnot-Carathéodory distance.
## 3\. Proof of Theorem 2.14
We will divide the proof of Theorem 2.14 in two steps. First, we introduce a
family of processes that approximates $g_{t}$. This is used in Corollary 3.4
to show that the support $\mathcal{S}_{\mu}$ is contained in
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}$.
The reverse inclusion is proven in Corollary 3.7 which follows from Theorem
3.6. In Proposition 3.10 we prove that
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}=W_{0}\left(\mathbb{H}\right)$,
which concludes the proof of Theorem 2.14.
### 3.1. Approximation of the hypoelliptic Brownian motion
The aim of this step is to show that the support $\mathcal{S}_{\mu}$ of the
law of $g_{t}$ is contained in
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}$,
the closure of the set of finite energy horizontal curves on $\mathbb{H}$ in
the $\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}$-topology. This will be
accomplished by constructing a horizontal piecewise approximation
$g_{\delta}(t)$ of $g_{t}$ such that $\mu_{\delta}\rightarrow\mu$ weakly,
where $\mu_{\delta}$ is the law of $g_{\delta}(t)$. Different approximations
of a Brownian motion have been extensively studied over the decades, see for
example Wong-Zakai [18], Kunita [12], Nakao-Yamamoto [16], Ikeda-Nakao-Yamato
[10], and Ikeda-Watanabe [11, Chapter 6, Section 7] for more details. We are
not able to refer to all the vast literature on the subject, but we mention
some results which are closest and most relevant to the techniques we use in
this paper.
Let $B_{t}$ be a two-dimensional Brownian motion, and let $f_{i}$, $i=1,\,2$, be
continuously differentiable functions on $[0,1]$ such that $f_{i}(0)=0$ and
$f_{i}(1)=1$. Set
(3.1)
$B_{i,\delta}(t):=B_{i}(k\delta)+f_{i}\left(\frac{t-k\delta}{\delta}\right)\left(B_{i}(k\delta+\delta)-B_{i}(k\delta)\right)\quad
t\in\left[k\delta,(k+1)\delta\right),$
and
(3.2)
$A_{\delta}(t):=\frac{1}{2}\int_{0}^{t}\left(B_{1,\delta}(s)B_{2,\delta}^{\prime}(s)-B_{2,\delta}(s)B_{1,\delta}^{\prime}(s)\right)ds.$
It then follows from [11, Theorem 7.1] that
(3.3)
$\mathbb{E}\left[\|B_{\delta}-B\|_{W_{0}\left(\mathbb{R}^{2}\right)}^{2}\right]\longrightarrow
0,\quad\text{as}\;\,\delta\rightarrow 0,\;\;\text{and}$ (3.4)
$\mathbb{E}\left[\|A_{\delta}-A\|_{W_{0}\left(\mathbb{R}\right)}^{2}\right]\longrightarrow
0\quad\text{as}\;\,\delta\rightarrow 0.$
Let us define now a sequence of processes $g_{\delta}(t)$ on $\mathbb{H}$.
###### Definition 3.1.
Let $B_{\delta}(t)$ be an approximation of a two-dimensional Brownian motion
as in (3.1). For each $\delta$, $t$, and $\omega$ we set
$g_{\delta}(t)=\left(g_{1,\delta}(t),g_{2,\delta}(t),g_{3,\delta}(t)\right)$
where
$\displaystyle g_{1,\delta}(t)=B_{1,\delta}(t)$ (3.5) $\displaystyle
g_{2,\delta}(t)=B_{2,\delta}(t)$ $\displaystyle
g_{3,\delta}(t)=A_{\delta}(t).$
Let $C^{2}_{p}\left(\mathbb{R}^{2}\right)$ be the space of piecewise twice
continuously differentiable curves in $\mathbb{R}^{2}$ starting at zero,
and set
$H_{p}\left(\mathbb{H}\right):=\left\\{\gamma:[0,1]\longrightarrow\mathbb{H},\,\bm{\gamma}\in
C^{2}_{p}(\mathbb{R}^{2}),\,\gamma_{3}(t)=\frac{1}{2}\int_{0}^{t}\omega\left(\bm{\gamma}(s),\bm{\gamma}^{\prime}(s)\right)ds\right\\},$
where
$\gamma=\left(\gamma_{1},\gamma_{2},\gamma_{3}\right)=\left(\bm{\gamma},\gamma_{3}\right)$,
that is, $H_{p}\left(\mathbb{H}\right)$ is the set of piecewise twice
continuously differentiable horizontal curves. Clearly we have that
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}=\overline{H_{p}\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}.$
We can view $g_{\delta}$ as a $H_{p}\left(\mathbb{H}\right)$-valued random
variable, that is,
(3.6) $\displaystyle g_{\delta}:\Omega\longrightarrow
H_{p}\left(\mathbb{H}\right),$
$\displaystyle\quad\omega\longrightarrow\left\\{t\rightarrow
g_{\delta}(t,\omega)\right\\},$
and hence we can induce a probability measure $\mu_{\delta}$ on
$W_{0}\left(\mathbb{H}\right)$ by
$\mu_{\delta}(E):=\mathbb{P}\left(g_{\delta}^{-1}\left(E\cap
H_{p}\left(\mathbb{H}\right)\right)\right)$
for any Borel set $E$ in $W_{0}\left(\mathbb{H}\right)$.
###### Proposition 3.2.
Let $\mathcal{S}_{\mu_{\delta}}$ be the support of the measure $\mu_{\delta}$.
Then
$\mathcal{S}_{\mu_{\delta}}\subset\overline{H_{p}\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}=\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}.$
###### Proof.
By (3.6) we have that $g_{\delta}(\Omega)\subset H_{p}\left(\mathbb{H}\right)$
and hence
$\Omega\subset g_{\delta}^{-1}g_{\delta}\left(\Omega\right)\subset
g_{\delta}^{-1}\left(H_{p}\left(\mathbb{H}\right)\right)\subset\Omega.$
Therefore by the definition of $\mu_{\delta}$ it follows that
$1=\mathbb{P}\left(g_{\delta}^{-1}\left(H_{p}\left(\mathbb{H}\right)\right)\right)=\mu_{\delta}\left(H_{p}\left(\mathbb{H}\right)\right)\leqslant\mu_{\delta}\left(\overline{H_{p}\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}\right)\leqslant
1,$
and the proof is complete since $S_{\mu_{\delta}}$ is the smallest closed
subset of $W_{0}\left(\mathbb{H}\right)$ having $\mu_{\delta}$-measure one. ∎
We can now state and prove the main result of this section, that is, that the
family $\left\\{g_{\delta}\right\\}_{\delta>0}$ is an approximation of the
hypoelliptic Brownian motion $g_{t}$ in the sense that
$\mathbb{E}\left[d_{W_{0}\left(\mathbb{H}\right)}(g_{\delta},g)^{2}\right]\longrightarrow
0$ as $\delta$ goes to zero. As a consequence, the support of the measure
$\mu$ is contained in
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}$.
###### Theorem 3.3.
Let $\left\\{g_{\delta}\right\\}_{\delta>0}$ be the family of processes from Definition 3.1.
Then
(3.7) $\lim_{\delta\rightarrow
0}\mathbb{E}\left[d_{W_{0}\left(\mathbb{H}\right)}(g_{\delta},g)^{2}\right]=0.$
###### Proof.
By definition of $d_{W_{0}\left(\mathbb{H}\right)}$ we have that
$\displaystyle
d_{W_{0}\left(\mathbb{H}\right)}\left(g_{\delta},g\right)^{4}=\|g_{\delta}^{-1}g\|_{W_{0}\left(\mathbb{H}\right)}^{4}\leqslant\|B-B_{\delta}\|_{W_{0}\left(\mathbb{R}^{2}\right)}^{4}+\|A-A_{\delta}-\frac{1}{2}\omega\left(B_{\delta},B\right)\|_{W_{0}\left(\mathbb{R}\right)}^{2}$
$\displaystyle\leqslant\left(\|B-B_{\delta}\|_{W_{0}\left(\mathbb{R}^{2}\right)}^{2}+\|A-A_{\delta}-\frac{1}{2}\omega\left(B_{\delta},B\right)\|_{W_{0}\left(\mathbb{R}\right)}\right)^{2},$
and hence
$\displaystyle\mathbb{E}\left[d_{W_{0}\left(\mathbb{H}\right)}\left(g_{\delta},g\right)^{2}\right]\leqslant\mathbb{E}\left[\|B-B_{\delta}\|_{W_{0}\left(\mathbb{R}^{2}\right)}^{2}\right]+\mathbb{E}\left[\|A-A_{\delta}\|_{W_{0}\left(\mathbb{R}\right)}\right]+\mathbb{E}\left[\|\frac{1}{2}\omega\left(B_{\delta},B\right)\|_{W_{0}\left(\mathbb{R}\right)}\right]$
$\displaystyle\leqslant\mathbb{E}\left[\|B-B_{\delta}\|_{W_{0}\left(\mathbb{R}^{2}\right)}^{2}\right]+\mathbb{E}\left[\|A-A_{\delta}\|_{W_{0}\left(\mathbb{R}\right)}^{2}\right]^{\frac{1}{2}}+\mathbb{E}\left[\|\frac{1}{2}\omega\left(B_{\delta},B\right)\|_{W_{0}\left(\mathbb{R}\right)}\right],$
where by definition
$\|\frac{1}{2}\omega\left(B_{\delta},B\right)\|_{W_{0}\left(\mathbb{R}\right)}=\frac{1}{2}\max_{0\leqslant
t\leqslant 1}\left|B_{1,\delta}(t)B_{2}(t)-B_{2,\delta}(t)B_{1}(t)\right|$. By
(3.3) and (3.4), we only need to show that
$\mathbb{E}\left[\|\frac{1}{2}\omega\left(B_{\delta},B\right)\|_{W_{0}\left(\mathbb{R}\right)}\right]\longrightarrow
0,\text{ as $\delta\rightarrow 0$. }$
Since $B_{i}$ is independent of $B_{j,\delta}-B_{j}$ when $i\neq j$, and
$\displaystyle\|\frac{1}{2}\omega\left(B_{\delta},B\right)\|_{W_{0}\left(\mathbb{R}\right)}\leqslant\frac{1}{2}\|B_{1}\|_{W_{0}\left(\mathbb{R}\right)}\|B_{2,\delta}-B_{2}\|_{W_{0}\left(\mathbb{R}\right)}$
$\displaystyle+\frac{1}{2}\|B_{2}\|_{W_{0}\left(\mathbb{R}\right)}\|B_{1,\delta}-B_{1}\|_{W_{0}\left(\mathbb{R}\right)},$
the proof is complete. ∎
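Theorem 3.3 can be illustrated numerically: the Lévy area of a piecewise-linear interpolation of a sampled planar Brownian path approaches the (Stratonovich) area of the path as the interpolation mesh shrinks. The following is a minimal sketch of this convergence; the dyadic interpolation and the midpoint-rule area below are our own discretization, not a construction taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2 ** 12  # fine time grid on [0, 1]

def levy_area(path):
    """(1/2) * int_0^1 omega(path, dpath) with omega(x, y) = x1*y2 - x2*y1,
    discretized with the midpoint (Stratonovich) rule."""
    mid = 0.5 * (path[:-1] + path[1:])
    d = np.diff(path, axis=0)
    return 0.5 * np.sum(mid[:, 0] * d[:, 1] - mid[:, 1] * d[:, 0])

def interpolate(path, k):
    """Piecewise-linear interpolation of `path` on a dyadic grid with 2**k cells,
    resampled on the fine grid (a stand-in for the approximation B_delta)."""
    n = len(path) - 1
    idx = np.linspace(0, n, 2 ** k + 1).astype(int)
    fine_t = np.arange(n + 1) / n
    return np.column_stack(
        [np.interp(fine_t, idx / n, path[idx, j]) for j in range(2)]
    )

def mean_area_error(k, trials=50):
    """Average |area(B_delta) - area(B)| over independent Brownian paths."""
    errs = []
    for _ in range(trials):
        dB = rng.normal(0.0, np.sqrt(1.0 / N), size=(N, 2))
        B = np.vstack([np.zeros(2), np.cumsum(dB, axis=0)])
        errs.append(abs(levy_area(interpolate(B, k)) - levy_area(B)))
    return float(np.mean(errs))

err_coarse, err_fine = mean_area_error(4), mean_area_error(8)
```

Refining the interpolation mesh (larger `k`) visibly shrinks the area error, in line with the $L^{2}$ convergence established above.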
###### Corollary 3.4.
We have that $\mu_{\delta}\rightarrow\mu$ weakly. In particular
(3.8)
$\mathcal{S}_{\mu}\subset\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}.$
###### Proof.
Let us first show that $g_{\delta}$ converges to $g$ in probability in
$W_{0}\left(\mathbb{H}\right)$. For any fixed $\varepsilon>0$ we have that
$\mathbb{P}\left(d_{W_{0}\left(\mathbb{H}\right)}(g_{\delta},g)>\varepsilon\right)\leqslant\frac{1}{\varepsilon^{2}}\mathbb{E}\left[d_{W_{0}\left(\mathbb{H}\right)}(g_{\delta},g)^{2}\right]$
which goes to zero by Theorem 3.3. Therefore $g_{\delta}$ converges to
$g$ in distribution, and hence $\mu_{\delta}$ converges weakly to $\mu$ in
$W_{0}\left(\mathbb{H}\right)$. Thus, by the portmanteau theorem, for any closed set $F$ in
$W_{0}\left(\mathbb{H}\right)$ we have that
$\mu(F)\geqslant\limsup_{\delta\rightarrow 0}\mu_{\delta}(F).$
In particular, for
$F=\overline{H_{p}\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}$
and by Proposition 3.2 it follows that
$\mu\left(\overline{H_{p}\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}\right)\geqslant\limsup_{\delta\rightarrow
0}\mu_{\delta}(\overline{H_{p}\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}})=1.$
Since $\mathcal{S}_{\mu}$ is the smallest closed subset having $\mu$-measure
one, we have that
$\mathcal{S}_{\mu}\subset\overline{H_{p}\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}=\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}.$
∎
We conclude this section by showing that for each fixed $\delta$, the measures
$\mu$ and $\mu_{\delta}$ are singular.
###### Proposition 3.5.
For each $\delta$ the measures $\mu$ and $\mu_{\delta}$ are singular.
###### Proof.
From the proof of Proposition 3.2 we know that
$\mu_{\delta}\left(H\left(\mathbb{H}\right)\right)=1$. It is then enough to
show that $\mu\left(H\left(\mathbb{H}\right)\right)=0$. Let $B_{t}$ be a
two-dimensional Brownian motion and let $\nu$ be its law, that is, a probability
measure on $W_{0}^{2}:=W_{0}\left(\mathbb{R}^{2}\right)$. Let us also consider
the map $\pi\,:W_{0}\left(\mathbb{H}\right)\rightarrow W_{0}^{2}$ defined by
$\pi\left(\gamma\right):=\bm{\gamma}$ for any
$\gamma=\left(\gamma_{1},\gamma_{2},\gamma_{3}\right)=\left(\bm{\gamma},\gamma_{3}\right)\in
W_{0}\left(\mathbb{H}\right)$. By definition of $g_{t}$, the following diagram
commutes
${\Omega}$${W_{0}\left(\mathbb{H}\right)}$${W_{0}\left(\mathbb{R}^{2}\right),}$$\scriptstyle{g}$$\scriptstyle{B}$$\scriptstyle{\pi}$
and for any Borel set $E$ in $W_{0}\left(\mathbb{R}^{2}\right)$ we have that
$\nu(E):=\mathbb{P}\left(B^{-1}(E)\right)=\mathbb{P}\left(g^{-1}\circ\pi^{-1}(E)\right)=\mu\left(\pi^{-1}(E)\right).$
Moreover, from Remark 2.13 we know that
$\pi\left(H\left(\mathbb{H}\right)\right)$ is contained in the Cameron-Martin space
over $\mathbb{R}^{2}$, which is known to have $\nu$-measure zero, see [3] for
more details. Therefore we can conclude that
$\mu\left(H\left(\mathbb{H}\right)\right)\leqslant\mu\left(\pi^{-1}\pi\left(H\left(\mathbb{H}\right)\right)\right)=\nu\left(\pi\left(H\left(\mathbb{H}\right)\right)\right)=0.$
∎
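The proof above rests on the fact that Brownian paths almost surely fall outside the Cameron-Martin space, i.e., they have infinite energy $\int_{0}^{1}|\gamma^{\prime}(s)|^{2}ds$. A quick numerical illustration of this fact (the discretization below is our own, not taken from the text): the discrete energy of a sampled planar Brownian path grows linearly in the number of mesh points, whereas for a finite-energy path it would stay bounded.

```python
import numpy as np

rng = np.random.default_rng(1)

def discrete_energy(n):
    """Discrete Cameron-Martin energy sum |dB|^2 / dt of a planar Brownian
    path on n time steps.  For a finite-energy path this stays bounded as n
    grows; for Brownian motion its expectation equals 2n (two components,
    each increment has variance dt)."""
    dt = 1.0 / n
    dB = rng.normal(0.0, np.sqrt(dt), size=(n, 2))
    return float(np.sum(dB ** 2) / dt)

e1, e2 = discrete_energy(1000), discrete_energy(4000)
```

Quadrupling the number of mesh points roughly quadruples the discrete energy, consistent with $\nu$-almost every path having infinite Cameron-Martin norm.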
### 3.2. Support of the Wiener measure
The goal of this section is to prove that
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}\subset\mathcal{S}_{\mu}$
which will follow from Theorem 3.6. Moreover, in Proposition 3.10 we show that
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}=W_{0}\left(\mathbb{H}\right)$.
###### Theorem 3.6.
Let
$\phi=\left(\phi_{1},\phi_{2},\phi_{3}\right)=\left(\bm{\phi},\phi_{3}\right)\in
H_{p}\left(\mathbb{H}\right)$. For $\delta>0$ let us denote by
$E_{\delta,\phi}$ the event
$E_{\delta,\phi}:=\left\\{\sup_{0\leqslant t\leqslant
1}|B_{t}-\bm{\phi}(t)|_{\mathbb{R}^{2}}<\delta\right\\}.$
Then for any $\varepsilon>0$
$\lim_{\delta\rightarrow
0}\mathbb{P}\left(d_{W_{0}\left(\mathbb{H}\right)}\left(g,\phi\right)>\varepsilon\,|\,E_{\delta,\phi}\right)=0.$
###### Proof.
For $\phi\in H_{p}\left(\mathbb{H}\right)$ we have that
$\displaystyle
d_{W_{0}\left(\mathbb{H}\right)}\left(g,\phi\right)^{4}:=\sup_{0\leqslant
t\leqslant 1}|\phi(t)^{-1}g_{t}|^{4}\leqslant\sup_{0\leqslant t\leqslant
1}|B_{t}-\bm{\phi}(t)|^{4}_{\mathbb{R}^{2}}$ $\displaystyle+\sup_{0\leqslant
t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),dB_{s}-\bm{\phi}^{\prime}(s)ds\right)+\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),\bm{\phi}^{\prime}(s)\right)ds\right|^{2}$
$\displaystyle\leqslant\left(\sup_{0\leqslant t\leqslant
1}|B_{t}-\bm{\phi}(t)|^{2}_{\mathbb{R}^{2}}\right.$
$\displaystyle\left.+\sup_{0\leqslant t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),dB_{s}-\bm{\phi}^{\prime}(s)ds\right)+\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),\bm{\phi}^{\prime}(s)\right)ds\right|\right)^{2}.$
Therefore on the event $E_{\delta,\phi}$ we have that
$\displaystyle
d_{W_{0}\left(\mathbb{H}\right)}\left(g,\phi\right)^{2}\leqslant\sup_{0\leqslant
t\leqslant 1}|B_{t}-\bm{\phi}(t)|^{2}_{\mathbb{R}^{2}}$
$\displaystyle+\sup_{0\leqslant t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),dB_{s}-\bm{\phi}^{\prime}(s)ds\right)+\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),\bm{\phi}^{\prime}(s)\right)ds\right|$
$\displaystyle\leqslant\delta^{2}+\sup_{0\leqslant t\leqslant
1}\left|\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),\bm{\phi}^{\prime}(s)\right)ds\right|+\sup_{0\leqslant
t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),dB_{s}-\bm{\phi}^{\prime}(s)ds\right)\right|$
$\displaystyle\leqslant\delta^{2}+\delta C_{\phi}+\sup_{0\leqslant t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),dB_{s}-\bm{\phi}^{\prime}(s)ds\right)\right|,$
where $C_{\phi}:=\int_{0}^{1}\left(|\phi_{1}^{\prime}(s)|+|\phi_{2}^{\prime}(s)|\right)ds$.
It then follows that
$\displaystyle\mathbb{P}\left(d_{W_{0}\left(\mathbb{H}\right)}\left(g,\phi\right)>\varepsilon\,|E_{\delta,\phi}\right)$
$\displaystyle\leqslant\mathbb{P}\left(\sup_{0\leqslant t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),dB_{s}-\bm{\phi}^{\prime}(s)ds\right)\right|>\varepsilon^{2}-\delta
C_{\phi}-\delta^{2}\;\;|\;\;E_{\delta,\phi}\right).$
Note that this last expression only depends on the process
$B_{t}-\bm{\phi}(t)$. Since $\phi=\left(\bm{\phi},\phi_{3}\right)\in
H_{p}\left(\mathbb{H}\right)$, by Remark 2.13 we know that $\bm{\phi}$ belongs
to the Cameron-Martin space over $\mathbb{R}^{2}$. Therefore from the Cameron-
Martin-Girsanov Theorem there exists a probability measure $\mathbb{Q}^{\phi}$
such that the process $B^{\phi}_{t}:=B_{t}+\bm{\phi}(t)$ is a Brownian motion
under $\mathbb{Q}^{\phi}$. More precisely there exists an exponential
martingale $\mathcal{E}^{\phi}$ such that
$\mathbb{Q}^{\phi}(A)=\mathbb{E}\left[\mathcal{E}^{\phi}\mathbbm{1}_{A}\right]\quad\forall
A\in\mathcal{F},$
where
$\mathcal{E}^{\phi}=\exp\left(-\int_{0}^{1}\langle\bm{\phi}^{\prime}(s),dB_{s}\rangle_{\mathbb{R}^{2}}-\frac{1}{2}\int_{0}^{1}|\bm{\phi}^{\prime}(s)|^{2}_{\mathbb{R}^{2}}ds\right)$.
Note that
$\displaystyle
d\left(B_{t}-\bm{\phi}(t)\right)=dB_{t}-\bm{\phi}^{\prime}(t)dt,\;\text{and}$
$\displaystyle dB_{t}=dB^{\phi}_{t}-\bm{\phi}^{\prime}(t)dt,$
that is, the law of $B_{t}-\bm{\phi}(t)$ under $\mathbb{P}$ is the same as the
law of $B_{t}$ under $\mathbb{Q}^{\phi}$. Therefore we can write
$\displaystyle\mathbb{P}\left(E_{\delta,\phi}\right)=\mathbb{P}\left(\sup_{0\leqslant
t\leqslant 1}|B_{t}-\bm{\phi}(t)|_{\mathbb{R}^{2}}<\delta\right)$
$\displaystyle=\mathbb{Q}^{\phi}\left(\sup_{0\leqslant t\leqslant
1}|B_{t}|_{\mathbb{R}^{2}}<\delta\right)=\mathbb{E}\left[\mathcal{E}^{\phi}\mathbbm{1}_{E_{\delta}}\right]=\mathbb{E}\left[\mathcal{E}^{\phi}|E_{\delta}\right]\mathbb{P}\left(E_{\delta}\right),$
where we set $E_{\delta}:=\left\\{\sup_{0\leqslant t\leqslant
1}|B_{t}|_{\mathbb{R}^{2}}<\delta\right\\}$. Similarly we have that
$\displaystyle\mathbb{P}\left(\sup_{0\leqslant t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),dB_{s}-\bm{\phi}^{\prime}(s)ds\right)\right|>\varepsilon^{2}-\delta
C_{\phi}-\delta^{2}\,,\;E_{\delta,\phi}\right)$
$\displaystyle=\mathbb{E}\left[\mathcal{E}^{\phi}|F_{\delta,\phi}^{\varepsilon}\cap
E_{\delta}\right]\mathbb{P}\left(F_{\delta,\phi}^{\varepsilon}\cap
E_{\delta}\right),$
where $F_{\delta,\phi}^{\varepsilon}:=\left\\{\sup_{0\leqslant t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s},dB_{s}\right)\right|>\varepsilon^{2}-\delta
C_{\phi}-\delta^{2}\right\\}$. Therefore it follows that
$\displaystyle\mathbb{P}\left(d_{W_{0}\left(\mathbb{H}\right)}\left(g,\phi\right)>\varepsilon\,|E_{\delta,\phi}\right)$
$\displaystyle\leqslant\mathbb{P}\left(\sup_{0\leqslant t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s}-\bm{\phi}(s),dB_{s}-\bm{\phi}^{\prime}(s)ds\right)\right|>\varepsilon^{2}-\delta
C_{\phi}-\delta^{2}\,|E_{\delta,\phi}\right)$ (3.9)
$\displaystyle=\frac{\mathbb{P}\left(F_{\delta,\phi}^{\varepsilon}\cap
E_{\delta}\right)\mathbb{E}\left[\mathcal{E}^{\phi}|F_{\delta,\phi}^{\varepsilon}\cap
E_{\delta}\right]}{\mathbb{P}\left(E_{\delta}\right)\mathbb{E}\left[\mathcal{E}^{\phi}|E_{\delta}\right]}=\mathbb{P}\left(F_{\delta,\phi}^{\varepsilon}\,|\,E_{\delta}\right)\times\frac{\mathbb{E}\left[\mathcal{E}^{\phi}|F_{\delta,\phi}^{\varepsilon}\cap
E_{\delta}\right]}{\mathbb{E}\left[\mathcal{E}^{\phi}|E_{\delta}\right]}.$
We will show later in the paper, see Lemma 3.9, that for any $\varepsilon>0$
and any $\phi\in H_{p}\left(\mathbb{H}\right)$ we have that
(3.10) $\lim_{\delta\rightarrow
0}\frac{\mathbb{E}\left[\mathcal{E}^{\phi}\,|\,F_{\delta,\phi}^{\varepsilon}\cap
E_{\delta}\right]}{\mathbb{E}\left[\mathcal{E}^{\phi}\,|\,E_{\delta}\right]}=1.$
In light of (3.9) and (3.10), the proof will be complete once we show that
$\displaystyle\lim_{\delta\rightarrow
0}\mathbb{P}\left(F_{\delta,\phi}^{\varepsilon}\,|\,E_{\delta}\right)=\lim_{\delta\rightarrow
0}\mathbb{P}\left(\sup_{0\leqslant t\leqslant
1}\left|\frac{1}{2}\int_{0}^{t}\omega\left(B_{s},dB_{s}\right)\right|>\varepsilon^{2}-\delta
C_{\phi}-\delta^{2}\;\middle|\;\sup_{0\leqslant t\leqslant
1}|B_{t}|_{\mathbb{R}^{2}}<\delta\right)=0.$
The process $A_{t}:=\frac{1}{2}\int_{0}^{t}\omega\left(B_{s},dB_{s}\right)$ is
a square integrable martingale with zero mean, and therefore there exists a
one dimensional Brownian motion $b_{t}$ such that
$b_{\tau(t)}=\frac{1}{2}\int_{0}^{t}\omega\left(B_{s},dB_{s}\right),$
where $\tau(t)=\frac{1}{4}\int_{0}^{t}\left(B_{1}(s)^{2}+B_{2}(s)^{2}\right)ds$. Moreover
it is known that $b_{t}$ is independent of $B_{t}$, see for example [11,
Chapter 6 p. 470]. Hence we have that
$\displaystyle\mathbb{P}\left(\sup_{0\leqslant t\leqslant
1}|b_{\tau(t)}|>\varepsilon^{2}-\delta
C_{\phi}-\delta^{2}\,|\,\sup_{0\leqslant t\leqslant
1}|B_{t}|_{\mathbb{R}^{2}}<\delta\right)$
$\displaystyle\leqslant\mathbb{P}\left(\sup_{0\leqslant
t\leqslant\frac{1}{4}\delta^{2}}|b_{t}|>\varepsilon^{2}-\delta
C_{\phi}-\delta^{2}\,|\sup_{0\leqslant t\leqslant
1}|B_{t}|_{\mathbb{R}^{2}}<\delta\right)$
$\displaystyle=\mathbb{P}\left(\sup_{0\leqslant
t\leqslant\frac{1}{4}\delta^{2}}|b_{t}|>\varepsilon^{2}-\delta
C_{\phi}-\delta^{2}\right)=\mathbb{P}\left(\sup_{0\leqslant t\leqslant
1}|b_{t}|>2(\frac{\varepsilon^{2}}{\delta}-C_{\phi}-\delta)\right),$
which goes to zero as $\delta$ goes to zero. ∎
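The Dambis-Dubins-Schwarz time change used in the proof can be checked numerically: the realized quadratic variation of $A_{t}=\frac{1}{2}\int_{0}^{t}\omega(B_{s},dB_{s})$ matches $\tau(t)=\frac{1}{4}\int_{0}^{t}(B_{1}(s)^{2}+B_{2}(s)^{2})ds$. The sketch below is our own discretization; for this antisymmetric integrand the Itô and Stratonovich integrals coincide, so left-endpoint increments suffice.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2 ** 14
dt = 1.0 / N
dB = rng.normal(0.0, np.sqrt(dt), size=(N, 2))
B = np.vstack([np.zeros(2), np.cumsum(dB, axis=0)])

# Increments of A_t = (1/2) int omega(B, dB), left-endpoint (Ito) rule.
dA = 0.5 * (B[:-1, 0] * dB[:, 1] - B[:-1, 1] * dB[:, 0])

qv = float(np.sum(dA ** 2))  # realized quadratic variation <A>_1
tau1 = float(0.25 * np.sum((B[:-1, 0] ** 2 + B[:-1, 1] ** 2) * dt))  # tau(1)
rel_err = abs(qv - tau1) / tau1
```

On a fine grid the two quantities agree to within a few percent, which is the identity $\langle A\rangle_{t}=\tau(t)$ underlying the representation $A_{t}=b_{\tau(t)}$.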
###### Corollary 3.7.
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}\subset\mathcal{S}_{\mu}.$
###### Proof.
Let us first prove that for any $\phi\in H_{p}\left(\mathbb{H}\right)$ and
$\varepsilon>0$ we have that $\mu\left(B_{\varepsilon}(\phi)\right)>0$, where
$B_{\varepsilon}(\phi)$ is the ball of radius $\varepsilon$ in the
$W_{0}\left(\mathbb{H}\right)$-norm centered at $\phi$. Indeed, for any
$\phi\in H_{p}\left(\mathbb{H}\right)$ and $\varepsilon>0$ we have that
$\displaystyle\mu\left(B_{\varepsilon}(\phi)\right):=\mathbb{P}\left(g\in
B_{\varepsilon}(\phi)\right)=\mathbb{P}\left(d_{W_{0}\left(\mathbb{H}\right)}\left(g,\phi\right)<\varepsilon\right)$
$\displaystyle\geqslant\mathbb{P}\left(d_{W_{0}\left(\mathbb{H}\right)}\left(g,\phi\right)<\varepsilon\,|\,E_{\delta,\phi}\right)\mathbb{P}\left(E_{\delta,\phi}\right),$
where $E_{\delta,\phi}:=\left\\{\sup_{0\leqslant t\leqslant
1}|B_{t}-\bm{\phi}(t)|_{\mathbb{R}^{2}}<\delta\right\\}.$ From Theorem 3.6, there
exists $\delta_{0}=\delta_{0}(\varepsilon)>0$ such that for every $\delta\in(0,\delta_{0})$
$\mathbb{P}\left(d_{W_{0}\left(\mathbb{H}\right)}\left(g,\phi\right)<\varepsilon\,|\,E_{\delta,\phi}\right)\geqslant\frac{1}{2}.$
Combining everything together we have that
$\mu\left(B_{\varepsilon}(\phi)\right)\geqslant\frac{1}{2}\mathbb{P}\left(\sup_{0\leqslant
t\leqslant
1}|B_{t}-\bm{\phi}(t)|_{\mathbb{R}^{2}}<\frac{\delta_{0}}{2}\right),$
and the latter is positive since $\bm{\phi}$ is in the Cameron-Martin space
over $\mathbb{R}^{2}$. Therefore, if $O$ is any open set in
$W_{0}\left(\mathbb{H}\right)$ with $\mu(O)=0$ then $O\subset
H_{p}\left(\mathbb{H}\right)^{c}$, and hence
$\displaystyle\bigcup_{\mathclap{\begin{subarray}{c}O\,\text{open}\\\
\mu(O)=0\end{subarray}}}O\subset
H_{p}\left(\mathbb{H}\right)^{c},\;\;\text{that is,
}\;\;\mathcal{S}_{\mu}:=\bigcap_{\mathclap{\begin{subarray}{c}F\,\text{closed}\\\
\mu(F)=1\end{subarray}}}F\supset H_{p}\left(\mathbb{H}\right),$
and since $\mathcal{S}_{\mu}$ is closed, we have that
$\mathcal{S}_{\mu}\supset\overline{H_{p}\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}=\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}$.
∎
The proof of Theorem 3.6 will be completed once we show (3.10). Before
proceeding to the proof of (3.10), we need the following lemma whose proof can
be found in [11, pp. 536-537].
###### Lemma 3.8 (pp. 536-537 in [11]).
Let $I_{1},\ldots,I_{n}$ be $n$ random variables on a probability space
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$. Let
$\left\\{A_{\delta}\right\\}_{0<\delta<1}$ be a family of events in
$\mathcal{F}$ and $a_{1},\ldots,a_{n}$ be $n$ numbers. If for every real
number $c$ and every $1\leqslant i\leqslant n$
$\limsup_{\delta\rightarrow
0}\mathbb{E}\left[\exp(c\,I_{i})\,|A_{\delta}\right]\leqslant\exp(c\,a_{i}),$
then
$\lim_{\delta\rightarrow
0}\mathbb{E}\left[\exp\left(\sum_{i=1}^{n}I_{i}\right)|A_{\delta}\right]=\exp\left(\sum_{i=1}^{n}a_{i}\right).$
###### Lemma 3.9.
Let $E_{\delta}$ and $F^{\varepsilon}_{\delta,\phi}$ be given as in the proof
of Theorem 3.6. Then
$\lim_{\delta\rightarrow
0}\frac{\mathbb{E}\left[\mathcal{E}^{\phi}\,|\,F_{\delta,\phi}^{\varepsilon}\cap
E_{\delta}\right]}{\mathbb{E}\left[\mathcal{E}^{\phi}\,|\,E_{\delta}\right]}=1.$
###### Proof.
Let us first prove that
(3.11) $\lim_{\delta\rightarrow
0}\mathbb{E}\left[\mathcal{E}^{\phi}\,|\,E_{\delta}\right]=\exp\left(-\frac{1}{2}\int_{0}^{1}|\bm{\phi}^{\prime}(s)|^{2}_{\mathbb{R}^{2}}ds\right).$
Since
$\mathcal{E}^{\phi}=\exp\left(-\int_{0}^{1}\langle\bm{\phi}^{\prime}(s),dB_{s}\rangle_{\mathbb{R}^{2}}-\frac{1}{2}\int_{0}^{1}|\bm{\phi}^{\prime}(s)|^{2}_{\mathbb{R}^{2}}ds\right)$,
by Lemma 3.8 and the definition of $E_{\delta}$, it is enough to show that for
any real number $c$ and $i=1,2$
$\limsup_{\delta\rightarrow
0}\mathbb{E}\left[\exp\left(-c\int_{0}^{1}\phi^{\prime}_{i}(s)dB_{i}(s)\right)\,\middle|\,\max_{0\leqslant
t\leqslant 1}|B_{t}|_{\mathbb{R}^{2}}<\delta\right]\leqslant 1.$
For $\phi\in H_{p}\left(\mathbb{H}\right)$ we can write
$\int_{0}^{1}\phi^{\prime}_{i}(s)dB_{i}(s)=\phi^{\prime}_{i}(1)B_{i}(1)-\int_{0}^{1}\phi^{\prime\prime}_{i}(s)B_{i}(s)ds$,
and hence on the event $E_{\delta}$ we have that
$\displaystyle\exp\left(-c\int_{0}^{1}\phi^{\prime}_{i}(s)dB_{i}(s)\right)\leqslant\exp\left(|c|k_{\phi}\delta\right),$
for some finite constant $k_{\phi}$ depending only on $\phi$. Therefore we
have that
$\displaystyle\limsup_{\delta\rightarrow
0}\mathbb{E}\left[\exp\left(-c\int_{0}^{1}\phi^{\prime}_{i}(s)dB_{i}(s)\right)\,\middle|\,\max_{0\leqslant
t\leqslant
1}|B_{t}|_{\mathbb{R}^{2}}<\delta\right]\leqslant\limsup_{\delta\rightarrow
0}e^{|c|k_{\phi}\delta}=1.$
In a similar way it can be shown that $\lim_{\delta\rightarrow
0}\mathbb{E}\left[\mathcal{E}^{\phi}\,|\,F_{\delta,\phi}^{\varepsilon}\cap
E_{\delta}\right]=\exp\left(-\frac{1}{2}\int_{0}^{1}|\bm{\phi}^{\prime}(s)|^{2}_{\mathbb{R}^{2}}ds\right)$,
and the proof is completed. ∎
We now conclude the proof of Theorem 2.14, that is, we show that
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}=W_{0}\left(\mathbb{H}\right)$.
###### Proposition 3.10.
We have that
$\overline{H\left(\mathbb{H}\right)}^{\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}}=W_{0}\left(\mathbb{H}\right).$
###### Proof.
Any element in $W_{0}\left(\mathbb{H}\right)$ can be approximated with
piecewise linear curves in the
$\|\cdot\|_{W_{0}\left(\mathbb{H}\right)}$-topology. It is then enough to
prove that for any piecewise linear curve $\xi$ there exists a sequence of
horizontal finite energy curves
$\left\\{\phi^{\xi}_{n}\right\\}_{n\in\mathbb{N}}$ such that
$d_{W_{0}\left(\mathbb{H}\right)}\left(\phi^{\xi}_{n},\xi\right)\rightarrow
0$. Let us first explain the geometric construction through the following
example. Consider the curve $t\rightarrow\xi(t)=(0,0,t)\in\mathbb{H}$ for
$t\in[0,1]$, which is the prototype of a non-horizontal curve. Let us define a
family of finite energy horizontal curves $\phi_{n}$ by
$\phi_{n}(s):=\left(\frac{2}{n}\cos\left(n^{2}s\right),\frac{1}{n}\sin\left(n^{2}s\right),s\right).$
Geometrically, the curves $\phi_{n}$ are helices that shrink around $\xi$
as $n$ goes to infinity. Indeed,
$\displaystyle
d_{W_{0}\left(\mathbb{H}\right)}\left(\phi_{n},\xi\right)^{4}=\max_{0\leqslant
s\leqslant
1}\left[\left(\left(\frac{2}{n}\cos\left(n^{2}s\right)\right)^{2}+\left(\frac{1}{n}\sin\left(n^{2}s\right)\right)^{2}\right)^{2}\right.$
$\displaystyle\left.+\left(s-\frac{1}{2}\int_{0}^{s}\omega\left(\bm{\phi}_{n}(u),\bm{\phi}^{\prime}_{n}(u)\right)du\right)^{2}\right]$
$\displaystyle=\max_{0\leqslant s\leqslant
1}\left(\left(\frac{2}{n}\cos\left(n^{2}s\right)\right)^{2}+\left(\frac{1}{n}\sin\left(n^{2}s\right)\right)^{2}\right)^{2}\longrightarrow
0,$
as $n$ goes to infinity.
Now, let $\xi(t)=(a_{1}t,a_{2}t,a_{3}t)$ be a linear curve in $\mathbb{H}$,
where $a_{1},\,a_{2},\,a_{3}\in\mathbb{R}$. Then set
$\displaystyle\phi_{n}(s):=\left(a_{1}s+\frac{2}{n}\cos\left(n^{2}a_{3}s\right),a_{2}s+\frac{1}{n}\sin\left(n^{2}a_{3}s\right),\right.$
$\displaystyle\left.a_{3}s-\frac{a_{2}s}{n}\cos\left(n^{2}a_{3}s\right)+\frac{a_{1}s}{2n}\sin\left(n^{2}a_{3}s\right)+\frac{1}{n}\int_{0}^{s}2a_{2}\cos\left(n^{2}a_{3}u\right)-a_{1}\sin\left(n^{2}a_{3}u\right)du\right).$
Geometrically, the $\phi_{n}$ are helices that shrink around the curve $\xi$ as
$n$ goes to infinity. It is easy to check that for any $n\in\mathbb{N}$,
$\phi_{n}$ is a finite energy horizontal curve such that
$\displaystyle(\phi_{n}^{-1}\xi)(s)=\left(-\frac{2}{n}\cos\left(n^{2}a_{3}s\right),-\frac{1}{n}\sin\left(n^{2}a_{3}s\right),\right.$
$\displaystyle\left.\frac{1}{n}\int_{0}^{s}a_{1}\sin\left(n^{2}a_{3}u\right)-2a_{2}\cos\left(n^{2}a_{3}u\right)du\right),$
which implies that
$d_{W_{0}\left(\mathbb{H}\right)}\left(\phi_{n},\xi\right)\rightarrow 0$ as
$n$ goes to infinity. ∎
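The construction in the proof admits a short numerical check (our own sketch, not from the text): for the prototype curve $\xi(t)=(0,0,t)$, the maximal planar gap between $\phi_{n}$ and $\xi$ equals $4/n^{2}$, and this is the only surviving term in $d_{W_{0}(\mathbb{H})}(\phi_{n},\xi)^{4}$ by horizontality of the helix.

```python
import numpy as np

s = np.linspace(0.0, 1.0, 20001)

def planar_gap(n):
    """max_s (phi_{n,1}(s)^2 + phi_{n,2}(s)^2) for the helix
    phi_n(s) = ((2/n) cos(n^2 s), (1/n) sin(n^2 s), s) from the proof;
    the maximum 4/n^2 is attained at s = 0."""
    x = 2.0 / n * np.cos(n ** 2 * s)
    y = 1.0 / n * np.sin(n ** 2 * s)
    return float(np.max(x ** 2 + y ** 2))

gaps = [planar_gap(n) for n in (5, 50, 500)]
```

The computed gaps decay like $4/n^{2}$, confirming that $d_{W_{0}(\mathbb{H})}(\phi_{n},\xi)\rightarrow 0$.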
###### Acknowledgement.
The author wishes to thank Professor M. Gordina for carefully reading the
manuscript and suggesting significant improvements to the text.
## References
* [1] Shigeki Aida, _Support theorem for diffusion processes on Hilbert spaces_ , Publ. Res. Inst. Math. Sci. 26 (1990), no. 6, 947–965. MR 1079903
* [2] A. Bonfiglioli, E. Lanconelli, and F. Uguzzoni, _Stratified Lie groups and potential theory for their sub-Laplacians_ , Springer Monographs in Mathematics, Springer, Berlin, 2007. MR 2363343
* [3] Leonard Gross, _Abstract Wiener spaces_ , Proc. Fifth Berkeley Sympos. Math. Statist. and Probability (Berkeley, Calif., 1965/66), Vol. II: Contributions to Probability Theory, Part 1, Univ. California Press, Berkeley, Calif., 1967, pp. 31–42. MR 0212152
* [4] I. Gyöngy, _On the approximation of stochastic differential equations_ , Stochastics 23 (1988), no. 3, 331–352. MR 959118
* [5] by same author, _On the approximation of stochastic partial differential equations. I_ , Stochastics 25 (1988), no. 2, 59–85. MR 999363
* [6] by same author, _The stability of stochastic partial differential equations and applications. Theorems on supports_ , Stochastic partial differential equations and applications, II (Trento, 1988), Lecture Notes in Math., vol. 1390, Springer, Berlin, 1989, pp. 91–118. MR 1019596
* [7] by same author, _On the support of the solutions of stochastic differential equations_ , Teor. Veroyatnost. i Primenen. 39 (1994), no. 3, 649–653. MR 1347193
* [8] I. Gyöngy and T. Pröhle, _On the approximation of stochastic differential equation and on Stroock-Varadhan’s support theorem_ , Comput. Math. Appl. 19 (1990), no. 1, 65–70. MR 1026782
* [9] Lars Hörmander, _Hypoelliptic second order differential equations_ , Acta Math. 119 (1967), 147–171. MR 0222474 (36 #5526)
* [10] Nobuyuki Ikeda, Shintaro Nakao, and Yuiti Yamato, _A class of approximations of Brownian motion_ , Publ. Res. Inst. Math. Sci. 13 (1977/78), no. 1, 285–300. MR 0458587
* [11] Nobuyuki Ikeda and Shinzo Watanabe, _Stochastic differential equations and diffusion processes_ , second ed., North-Holland Mathematical Library, vol. 24, North-Holland Publishing Co., Amsterdam, 1989. MR MR1011252 (90m:60069)
* [12] Hiroshi Kunita, _Diffusion processes and control systems_ , Course at University of Paris VI (1974).
* [13] by same author, _Supports of diffusion processes and controllability problems_ , Proceedings of the International Symposium on Stochastic Differential Equations (Res. Inst. Math. Sci., Kyoto Univ., Kyoto, 1976), Wiley, New York-Chichester-Brisbane, 1978, pp. 163–185. MR 536011
* [14] M. Ledoux, Z. Qian, and T. Zhang, _Large deviations and support theorem for diffusion processes via rough paths_ , Stochastic Process. Appl. 102 (2002), no. 2, 265–283. MR 1935127
* [15] Annie Millet and Marta Sanz-Solé, _A simple proof of the support theorem for diffusion processes_ , Séminaire de Probabilités, XXVIII, Lecture Notes in Math., vol. 1583, Springer, Berlin, 1994, pp. 36–48. MR 1329099
* [16] Shintaro Nakao and Yuiti Yamato, _Approximation theorem on stochastic differential equations_ , Proceedings of the International Symposium on Stochastic Differential Equations (Res. Inst. Math. Sci., Kyoto Univ., Kyoto, 1976), Wiley, New York-Chichester-Brisbane, 1978, pp. 283–296. MR 536015
* [17] Daniel W. Stroock and S. R. S. Varadhan, _On the support of diffusion processes with applications to the strong maximum principle_ , Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. III: Probability theory, 1972, pp. 333–359. MR 0400425
* [18] Eugene Wong and Moshe Zakai, _On the relation between ordinary and stochastic differential equations_ , Internat. J. Engrg. Sci. 3 (1965), 213–229. MR 0183023
RCR: Reduced Compact Representation
J. J. Brust, R. F. Marcia, C. G. Petra, and M. A. Saunders
ARGONNE NATIONAL LABORATORY
9700 South Cass Avenue
Argonne, Illinois 60439
Large-scale optimization with linear equality constraints using reduced
compact representation
J. J. Brust, R. F. Marcia, C. G. Petra and M. A. Saunders
Mathematics and Computer Science Division
Preprint ANL/MCS-P9279-0120
August 2021
This work was supported by the U.S. Department of Energy,
Office of Science, Advanced Scientific Computing Research, under Contract DE-
AC02-06CH11357 at Argonne National Laboratory, through the Project
“Multifaceted Mathematics for Complex Energy Systems.” This work was also
performed under the auspices of the U.S. Department of Energy by Lawrence
Livermore National Laboratory under Contract DE-AC52-07NA27344.
The submitted manuscript has been created by UChicago Argonne, LLC, Operator
of Argonne National Laboratory (“Argonne”). Argonne, a U.S. Department of
Energy Office of Science laboratory, is operated under Contract No. DE-
AC02-06CH11357. The U.S. Government retains for itself, and others acting on
its behalf, a paid-up nonexclusive, irrevocable worldwide license in said
article to reproduce, prepare derivative works, distribute copies to the
public, and perform publicly and display publicly, by or on behalf of the
Government. The Department of Energy will provide public access to these
results of federally sponsored research in accordance with the DOE Public
Access Plan. http://energy.gov/downloads/doe-public-accessplan
# Large-scale optimization with linear equality constraints using reduced
compact representation††thanks: Dedicated to Dr Oleg Burdakov, 1953–2021.
Version of . Submitted to SISC 2021.
Johannes J. Brust, Department of Mathematics, University of California San
Diego, San Diego, CA (formerly Argonne National Laboratory),
<EMAIL_ADDRESS>Roummel F. Marcia, Department of Applied Mathematics,
University of California Merced, Merced, CA,<EMAIL_ADDRESS>Cosmin G.
Petra, Center for Applied Scientific Computing, Lawrence Livermore National
Laboratory, Livermore, CA,<EMAIL_ADDRESS>Michael A. Saunders, Department
of Management Science and Engineering, Stanford University, Stanford, CA,
<EMAIL_ADDRESS>
###### Abstract
For optimization problems with linear equality constraints, we prove that the
(1,1) block of the inverse KKT matrix remains unchanged when projected onto
the nullspace of the constraint matrix. We develop _reduced compact
representations_ of the limited-memory inverse BFGS Hessian to compute search
directions efficiently when the constraint Jacobian is sparse. Orthogonal
projections are implemented by a sparse QR factorization or a preconditioned
LSQR iteration. In numerical experiments, two proposed trust-region algorithms
improve computation times, often significantly, compared to previous
implementations of related algorithms and to IPOPT.
###### keywords:
Large-scale optimization, compact representation, trust-region method, limited
memory, LSQR, sparse QR
LLNL Release Number: LLNL-JRNL-818401
68Q25, 68R10, 68U05
## 1 Introduction
Linear equality constrained minimization problems are formulated as
(1) $\underset{x\in\mathbb{R}^{n}}{\text{ minimize }}f(x)\quad\text{subject
to}\quad Ax=b,$
where $f:\mathbb{R}^{n}\to\mathbb{R}$ and $A\in\mathbb{R}^{m\times n}$. We
assume that the number of variables $n$ is large, $g(x)=\nabla f(x)$ is
available, $A$ is sparse, and that the initial guess $x_{0}$ is feasible:
$Ax_{0}=b$. If $A$ is rank-deficient, one can obtain a full-row-rank matrix by
deleting rows of $A$ that correspond to small diagonals of the triangular
matrix in a sparse QR factorization of $A^{\top}$. Our methods here use the
rank information contained in sparse QR factors, and thus we assume that $A$
has full rank until implementation details are described in Appendix
B. For large problems, computing the Hessian
$\nabla^{2}f(x)\in\mathbb{R}^{n\times n}$ is often not practical, and we
approximate this matrix using a limited-memory BFGS (Broyden-Fletcher-
Goldfarb-Shanno, [2, 16, 20, 29]) quasi-Newton matrix
${B}_{k}\approx\nabla^{2}f({x}_{k})$. Starting from $x_{0}$, we update
iterates according to ${x}_{k+1}={x}_{k}+{s}_{k}$. The step ${s}_{k}$ is
computed as the solution of a quadratic trust-region subproblem, in which the
quadratic objective is defined as $q(s)\equiv
s^{\top}{g}_{k}+\frac{1}{2}s^{\top}{B}_{k}s$ with ${g}_{k}\equiv g({x}_{k})$.
For a given trust-region radius $\Delta>0$ and norm $\|\cdot\|$, the trust-
region subproblem is
(2) $\underset{\|s\|\leq\Delta}{\text{ minimize }}q(s)\quad\text{subject
to}\quad As=0,$
which ensures that each search direction is in the nullspace of $A$, and thus
each iterate $x_{k}$ is feasible.
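The constraint $As=0$ in (2) restricts every step to the nullspace of $A$, so feasibility of $x_{k}$ is preserved exactly. A minimal dense illustration of the relevant projection (our own sketch using numpy's dense QR, whereas the paper works with sparse QR or a preconditioned LSQR iteration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 8
A = rng.normal(size=(m, n))  # Gaussian A has full row rank almost surely

# Orthogonal projector onto null(A): P v = v - Q Q^T v,
# where A^T = Q R is a thin QR factorization (Q spans range(A^T)).
Q, _ = np.linalg.qr(A.T)

def project(v):
    return v - Q @ (Q.T @ v)

g = rng.normal(size=n)
s = -project(g)  # projected steepest-descent direction, satisfies A s = 0
```

Any step built from projected quantities automatically satisfies $As=0$, and the projector is idempotent, so re-projecting changes nothing.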
### 1.1 Background
Large problems of the form (1) are the focus of recent research because large
statistical- and machine-learning problems can be cast in this way. As such,
(1) constitutes the backbone of the Alternating Direction Method of
Multipliers (ADMM) [1], with applications to optimal exchange problems,
consensus and sharing problems, support-vector machines, and more. Recent work
[18] emphasizes methods that use gradients of $f$ and
suggests accelerations via quasi-Newton approximations. Quasi-Newton methods
estimate Hessian matrices using low-rank updates at each iteration (typically
rank-1 or rank-2). Starting from an initial matrix, the so-called _compact
representation_ of quasi-Newton matrices [8] is a matrix representation of the
recursive low-rank updates. Because the compact representation enables
effective limited memory implementations, which update a small number of
previously stored vectors, these methods are well suited to large problems.
Trust-region and line-search methods are standard for determining search
directions for smooth problems, and each approach has its own merits.
Combinations of trust-region methods and quasi-Newton compact representations
have been developed in [3, 4, 5, 7]. Widely used quasi-Newton line-search
methods are [9, 24, 31, 32]. The main ideas in this article are applicable to
both trust-region and line-search methods.
### 1.2 Compact representation
A storage-efficient approach to quasi-Newton matrices is the compact
representation of Byrd et al. [8], which represents the BFGS matrices in the
form
(3) ${B}_{k}=\gamma_{k}I+{J}_{k}{M}_{k}{J}_{k}^{\top},$
with scalar $\gamma_{k}>0$. The history of vectors
$\\{s_{k}\\}=\\{x_{k+1}-x_{k}\\}$
and
$\\{y_{k}\\}=\\{g_{k+1}-g_{k}\\}$
is stored in the rectangular matrices
${S}_{k}\equiv\begin{bmatrix}s_{0},\dots,s_{k-1}\end{bmatrix}\in\mathbb{R}^{n\times
k}$ and
${Y}_{k}\equiv\begin{bmatrix}y_{0},\dots,y_{k-1}\end{bmatrix}\in\mathbb{R}^{n\times
k}$. The matrices
(4) $\displaystyle{J}_{k}$
$\displaystyle\equiv\begin{bmatrix}{S}_{k}&{Y}_{k}\end{bmatrix},$ (5)
$\displaystyle{S}_{k}^{\top}{Y}_{k}$
$\displaystyle\equiv{L}_{k}+{D}_{k}+\bar{T}_{k},$ (6) $\displaystyle{M}_{k}$
$\displaystyle\equiv-\begin{bmatrix}\delta_{k}{S}_{k}^{\top}{S}_{k}&\,\delta_{k}{L}_{k}\\\
\delta_{k}{L}_{k}^{\top}&\,-{D}_{k}\end{bmatrix}^{-1}$
are defined with $\delta_{k}=1/\gamma_{k}$, where ${L}_{k}$ and $\bar{T}_{k}$
are the strictly lower and upper triangular parts of ${S}_{k}^{\top}{Y}_{k}$
and ${D}_{k}$ is the diagonal. For large problems, limited-memory versions
store only a small subset of recent pairs $\\{s_{i},y_{i}\\}_{i=k-l}^{k-1}$,
resulting in storage-efficient matrices ${J}_{k}\in\mathbb{R}^{n\times 2l}$
and ${M}_{k}\in\mathbb{R}^{2l\times 2l}$ where $l\ll n$. Following Byrd et
al. [8, Theorem 2.2], the inverse BFGS matrix has the form
(7) ${B}^{-1}_{k}=\delta_{k}I+{J}_{k}{W}_{k}{J}_{k}^{\top},$
where ${W}_{k}\in\mathbb{R}^{2l\times 2l}$ is given by
(8)
${W}_{k}=\begin{bmatrix}{T}_{k}^{-\top}({D}_{k}+\delta_{k}{Y}_{k}^{\top}{Y}_{k}){T}_{k}^{-1}&-\delta_{k}{T}_{k}^{-\top}\\\
-\delta_{k}{T}^{-1}_{k}&0_{l\times l}\end{bmatrix}.$
The diagonal matrix ${D}_{k}$ (and hence the upper triangular matrix
${T}_{k}\equiv{D}_{k}+\bar{T}_{k}$) is nonsingular as long as ${B}_{k}$ is.
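Formulas (3)-(8) can be verified directly on synthetic data. In the sketch below (our own check, not code from the paper) the pairs are generated as $Y=HS$ for a random symmetric positive definite $H$, which guarantees the curvature condition $s_{i}^{\top}y_{i}>0$; the representation (7)-(8) then inverts (3)-(6) to machine precision.

```python
import numpy as np

rng = np.random.default_rng(0)
n, l = 12, 4
S = rng.normal(size=(n, l))
C = rng.normal(size=(n, n))
H = C @ C.T + n * np.eye(n)  # SPD, so s_i^T y_i > 0 below (our assumption)
Y = H @ S

gamma = float(Y[:, -1] @ Y[:, -1] / (S[:, -1] @ Y[:, -1]))  # common gamma_k
delta = 1.0 / gamma

SY = S.T @ Y
D = np.diag(np.diag(SY))     # diagonal part D_k
L = np.tril(SY, -1)          # strictly lower triangular part L_k
T = SY - L                   # upper triangular T_k = D_k + Tbar_k

J = np.hstack([S, Y])        # J_k = [S_k  Y_k], as in (4)
M = -np.linalg.inv(np.block([[delta * S.T @ S, delta * L],
                             [delta * L.T, -D]]))
B = gamma * np.eye(n) + J @ M @ J.T          # compact BFGS matrix, (3)

Tinv = np.linalg.inv(T)
W = np.block([[Tinv.T @ (D + delta * Y.T @ Y) @ Tinv, -delta * Tinv.T],
              [-delta * Tinv, np.zeros((l, l))]])
Binv = delta * np.eye(n) + J @ W @ J.T       # inverse compact BFGS, (7)-(8)
```

Multiplying the two representations recovers the identity, which is the content of [8, Theorem 2.2] specialized to $B_{0}=\gamma I$ and $B_{0}^{-1}=\delta I$.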
### 1.3 Outline
Section 2 describes our contributions in the context of large problems, while
section 3 motivates our proposed representations. Section 4 develops the
reduced compact representation and updating techniques that enable efficient
implementations. Section 5 describes computations of orthogonal projections,
and the trust-region strategy for optimization. Section 6 gives an efficient
method when an $\ell_{2}$-norm trust-region subproblem is used. Sections 7 and
8 develop an effective factorization, and a method that uses a shape-changing
norm in the trust-region subproblem. Numerical experiments are reported in
section 9, and conclusions are drawn in section 10.
## 2 Contributions
The first-order necessary conditions for the solution of problem (2) without
the norm constraint are characterized by the linear system
(9) $\begin{bmatrix}B_{k}&A^{\top}\\\
A&0_{m\times m}\end{bmatrix}\begin{bmatrix}s_{E}\\\
\lambda_{E}\end{bmatrix}=\begin{bmatrix}-g_{k}\\\
0_{m}\end{bmatrix},$
where $\lambda_{E}\in\mathbb{R}^{m}$ is a vector of Lagrange multipliers and
$s_{E}$ denotes the “equality” constrained minimizer of (2). Adopting the
naming convention of [27, Sec. 16.1, p. 451], we refer to (9) as the KKT
system (a slight misnomer, as use of the system for the equality constrained
setting predates the work of Karush, Kuhn, and Tucker).
For large $n$, compact representations of the (1,1) block in the inverse KKT
matrix were recently proposed by Brust et al. [6]. Two limited-memory trust-
region algorithms, LTRL2-LEC and LTRSC-LEC (which we refer to as TR1 and TR2
in the numerical experiments in Sec. 9), use these representations to compute
search directions efficiently when $A$ has relatively few rows. This article
develops efficient algorithms when the number of equality constraints is large
and the constraint matrix is sparse. In particular, by exploiting the
property that part of the solution to the KKT system is unaltered when it is
projected onto the nullspace of $A$, we develop _reduced compact
representations (RCR)_ , which need a small amount of memory and lead to
efficient methods for solving problems with many constraints (large $m$ and
$n$) and possibly many degrees of freedom (large $n-m$). In numerical
experiments when solving large problems, the proposed methods are often
significantly more efficient than both our previous implementations and IPOPT
[30].
## 3 Motivation
The solution $s_{E}$ in (9) can be computed from only the
(1,1) block of the inverse KKT matrix, as opposed to both the (1,1) and (1,2)
blocks, because of the zeros in the right-hand side. Let ${V}_{k}$ be the
(1,1) block of the inverse KKT matrix (obtained for example from a block LDU
factorization). It is given by
(10)
${V}_{k}\equiv({B}^{-1}_{k}-{B}^{-1}_{k}A^{\top}(A{B}^{-1}_{k}A^{\top})^{-1}A{B}^{-1}_{k}),$
and then $s_{E}=-{V}_{k}{g}_{k}$. At first sight the
expression in (10) appears to be expensive to compute because of the multiple
inverse operations and matrix-vector products. However, as
${B}^{-1}_{k}=\delta_{k}I+{J}_{k}{W}_{k}{J}_{k}^{\top}$, we can exploit
computationally useful structures. Specifically, with
${G}_{k}\equiv(A{B}^{-1}_{k}A^{\top})^{-1}$ and ${C}_{k}\equiv
A{J}_{k}{W}_{k},$ [6, Lemma 1] describes the expression
(11)
${V}_{k}=\delta_{k}I+\begin{bmatrix}A^{\top}&{J}_{k}\end{bmatrix}\begin{bmatrix}-\delta_{k}^{2}{G}_{k}&-\delta_{k}{G}_{k}{C}_{k}\\\
-\delta_{k}{C}_{k}^{\top}{G}_{k}&{W}_{k}-{C}_{k}^{\top}{G}_{k}{C}_{k}\end{bmatrix}\begin{bmatrix}A\\\
{J}_{k}^{\top}\end{bmatrix}.$
For large $n$, once the components of the middle matrix in (11) are available,
this compact representation of ${V}_{k}$ enables efficient computation of a
matrix-vector product ${V}_{k}{g}_{k}$, hence the solution of (9), and an
economical eigendecomposition ${V}_{k}=U\Lambda U^{\top}$. However, unless $m$
is small (there are few rows in $A$), multiplying with the
$(m+2l)\times(m+2l)$ middle matrix is not practical.
With large $n$ and $m$ in mind, we note that the solution $s_{E}$ is
unchanged if instead of ${g}_{k}$ a projection of this vector onto the
nullspace of $A$ is used, or if $s_{E}$ is projected onto
the nullspace of $A$. This is a consequence of the properties of ${V}_{k}$. To
formalize these statements, let the orthogonal projection matrix onto
$\text{null}(A)$ be $P=I_{n}-A^{\top}(AA^{\top})^{-1}A.$ Since the columns of
the (1,1) block of the inverse from (9) (namely columns of $V_{k}$) are in
the nullspace of $A$, the orthogonal projection onto $\text{null}(A)$ acts as
an identity operator on the vector space spanned by $V_{k}$:
(12) ${V}_{k}={V}_{k}P=P^{\top}{V}_{k}=P^{\top}{V}_{k}P.$
Relation (12) can equivalently be derived from (10), the expression for $P$,
and the equality $V_{k}A^{\top}=0$. The methods in this article are based on
representations of projected matrices $P^{\top}{V}_{k}P$
$\in\mathbb{R}^{n\times n}$, whose properties enable desirable numerical
advantages for large $n$ and $m$. Instead of multiplying with the possibly
large ${G}_{k}\in\mathbb{R}^{m\times m}$ and ${C}_{k}\in\mathbb{R}^{m\times
2l}$ in (11), we store the matrices ${S}_{k}\in\mathbb{R}^{n\times l}$ and
${Z}_{k}\equiv P{Y}_{k}\in\mathbb{R}^{n\times l}$ and small square matrices
that depend on the memory parameter $l$ but not on $m$. The columns of
${Z}_{k}$ are defined as ${z}_{k}=P{y}_{k}=P({g}_{k+1}-{g}_{k}),$ and they are
contained in the nullspace of $A$.
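These identities admit a simple numerical illustration. In the sketch below (our own; random data stand in for $B_{k}$, $A$, and $g_{k}$), we verify that $s_{E}=-{V}_{k}{g}_{k}$ solves (9) and that the projection identities (12) hold:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)  # SPD stand-in for B_k
A = rng.standard_normal((m, n))                               # full row rank (generically)
g = rng.standard_normal(n)

Binv = np.linalg.inv(B)
G = np.linalg.inv(A @ Binv @ A.T)
V = Binv - Binv @ A.T @ G @ A @ Binv   # (1,1) block of the inverse KKT matrix, eq. (10)
s_E = -V @ g
lam = -G @ A @ Binv @ g                # corresponding Lagrange multipliers

# s_E and lam solve the KKT system (9), and s_E is feasible
assert np.allclose(B @ s_E + A.T @ lam, -g)
assert np.allclose(A @ s_E, 0)

# projection identities (12): columns of V lie in null(A)
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
assert np.allclose(V @ A.T, 0)
assert np.allclose(V, P.T @ V @ P)
```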
With (10) and (11) we motivated the solution of (2) without the norm
constraint (giving the equality-constrained step $s_{E}$). Computing $s_{E}$
is important for the implementation of practical algorithms, but it is even
more important to solve (2) efficiently with the norm constraint. In Sec. 6,
using the $\ell_{2}$ norm, we develop a modified version of ${V}_{k}$ as a
function of a scalar parameter $\sigma>0$, i.e., ${V}_{k}(\sigma)$. In Secs. 7
and 8, we describe how the structure of ${V}_{k}$ can be exploited to compute
an inexpensive eigendecomposition that, when combined with a judiciously
chosen norm (the shape-changing infinity norm from [7, Sec. 4.2.1]), provides
a search direction by an analytic formula. Note that the representation of
${V}_{k}$ is not specific to the L-BFGS matrix, and other compact quasi-Newton
matrices could be used (Byrd et al. [8], DeGuchy et al. [14]).
## 4 Reduced compact representation (RCR)
This section describes a computationally effective representation of (12),
which we call the _reduced compact representation_ (RCR). In section 4.1, the
RCR is placed into historical context with reduced Hessian methods.
Subsequently, sections 4.2–4.4 develop the specific formulas that enable
effective computations.
### 4.1 Reduced Hessian
The name _reduced compact representation_ is related to the term _reduced
Hessian_ [19], where $\hat{Z}\in\mathbb{R}^{n\times(n-m)}$
denotes a basis for the nullspace of $A$ (satisfying
$A\hat{Z}=0$). In turn, $\hat{Z}$
defines the so-called reduced Hessian matrix as
$\hat{Z}^{\top}\nabla^{2}f_{k}\hat{Z}$
or $\hat{Z}^{\top}{B}_{k}\hat{Z}$. In
order to compute an equality-constrained step $s_{E}$, a
reduced Hessian method solves
$(\hat{Z}^{\top}{B}_{k}\hat{Z})\hat{s}_{E}=-\hat{Z}^{\top}{g}_{k}$
and computes
$s_{E}=\hat{Z}\hat{s}_{E}$.
Known computational challenges with reduced Hessian methods are that a
desirable basis ${\color[rgb]{0,0,0}\hat{Z}}$ may be expensive to compute, the
condition number of the reduced linear system may be larger than the original
one, and the product
${\color[rgb]{0,0,0}\hat{Z}}^{\top}{B}_{k}{\color[rgb]{0,0,0}\hat{Z}}$ is not
necessarily sparse even if the matrices themselves are. For large-scale
problems, these challenges can result in significant computational
bottlenecks. In the sequel we refer to $P^{\top}{V}_{k}P$ as a _reduced
compact representation_ because it has a reduced memory footprint compared to
${V}_{k}$ in (11) (although the matrices have the same dimensions). We also
note that ${V}_{k}$ and $P^{\top}{V}_{k}P$ have the same condition number, and
$P^{\top}{V}_{k}P$ has structure that enables efficient implementations.
### 4.2 Reduced compact representation
To simplify (11), we note that ${V}_{k}=P^{\top}{V}_{k}P$, that
$P^{\top}\\!A^{\top}=0$, and
$P^{\top}\\!{J}_{k}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}$ (where
$P^{\top}{Y}_{k}\equiv{Z}_{k}$ by definition), so that
$P^{\top}{V}_{k}P=\delta_{k}P+\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}({W}_{k}-{C}_{k}^{\top}{G}_{k}{C}_{k})\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}\\!.$
In Appendix A we show that ${C}_{k}^{\top}{G}_{k}{C}_{k}$ simplifies to
${C}_{k}^{\top}{G}_{k}{C}_{k}=\big{[}\begin{smallmatrix}({C}_{k}^{\top}{G}_{k}{C}_{k})_{11}&0\\\
0&0\end{smallmatrix}\big{]}$ with
$({C}_{k}^{\top}{G}_{k}{C}_{k})_{11}=\delta_{k}{T}_{k}^{-\top}{Y}_{k}^{\top}A^{\top}(AA^{\top})^{-1}A{Y}_{k}{T}^{-1}_{k}$.
Based on this, we derive a _reduced compact representation_ of ${V}_{k}$.
Lemma 1: The _RCR of ${V}_{k}$ in (11) for the L-BFGS matrix is given by_
(13)
${V}_{k}=\delta_{k}I+\begin{bmatrix}A^{\top}&{S}_{k}&{Z}_{k}\end{bmatrix}\begin{bmatrix}-\delta_{k}(AA^{\top})^{-1}&\\\
&{N}_{k}\end{bmatrix}\begin{bmatrix}A\\\
{S}_{k}^{\top}\\\
{Z}_{k}^{\top}\end{bmatrix},$
_where_
${N}_{k}=\begin{bmatrix}{T}_{k}^{-\top}({D}_{k}+\delta_{k}{Z}_{k}^{\top}{Z}_{k}){T}_{k}^{-1}&-\delta_{k}{T}_{k}^{-\top}\\\
-\delta_{k}{T}^{-1}_{k}&0_{l\times l}\end{bmatrix}.$
###### Proof.
Multiplying ${V}_{k}$ in (11) from the left and right by $P^{\top}$ and $P$
yields
${V}_{k}=\delta_{k}P+\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}({W}_{k}-{C}_{k}^{\top}{G}_{k}{C}_{k})\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}$.
Since only the (1,1) block in ${C}_{k}^{\top}{G}_{k}{C}_{k}$ is nonzero, we
consider only the (1,1) blocks, namely
$({W}_{k})_{11}-({C}_{k}^{\top}{G}_{k}{C}_{k})_{11}={T}_{k}^{-\top}({D}_{k}+\delta_{k}({Y}_{k}^{\top}{Y}_{k}-{Y}_{k}^{\top}A^{\top}(AA^{\top})^{-1}A{Y}_{k})){T}_{k}^{-1}.$
Since
${Y}_{k}^{\top}P^{\top}{Y}_{k}={Y}_{k}^{\top}P^{\top}P{Y}_{k}={Z}_{k}^{\top}{Z}_{k}$,
we obtain the (1,1) block in ${N}_{k}$. Subsequently, by writing $\delta_{k}P$ as
$\delta_{k}P=\delta_{k}I+\begin{bmatrix}A^{\top}&{S}_{k}&{Z}_{k}\end{bmatrix}\begin{bmatrix}-\delta_{k}(AA^{\top})^{-1}&\\\
&0_{2l\times 2l}\end{bmatrix}\begin{bmatrix}A\\\
{S}_{k}^{\top}\\\
{Z}_{k}^{\top}\end{bmatrix},$
we see that
$P^{\top}{V}_{k}P=\delta_{k}I+\begin{bmatrix}A^{\top}&{S}_{k}&{Z}_{k}\end{bmatrix}\begin{bmatrix}-\delta_{k}(AA^{\top})^{-1}&\\\
&{W}_{k}-{C}_{k}^{\top}{G}_{k}{C}_{k}\end{bmatrix}\begin{bmatrix}A\\\
{S}_{k}^{\top}\\\
{Z}_{k}^{\top}\end{bmatrix}.$
Because all blocks of ${W}_{k}-{C}_{k}^{\top}{G}_{k}{C}_{k}$ except for the
(1,1) block are equal to those in ${W}_{k}$, all blocks in ${N}_{k}$ are fully
specified and representation (13) is complete.
Note that
${S}_{k}^{\top}{Y}_{k}={D}_{k}+{L}_{k}+\bar{T}_{k}={S}_{k}^{\top}{Z}_{k}$,
which means that ${D}_{k}$ and ${T}_{k}={D}_{k}+\bar{T}_{k}$ can be computed
from ${S}_{k}$ and ${Z}_{k}$ alone, and that ${G}_{k}$ and ${C}_{k}$ need not
be explicitly computed. Therefore, for the RCR, only ${S}_{k}$, ${Z}_{k}$,
${T}_{k}$ and ${D}_{k}$ are stored. In addition, the scalar $\delta_{k}$ is
stored; it is typically set to
$\delta_{k}={s}_{k}^{\top}{y}_{k}\big{/}{y}_{k}^{\top}{y}_{k}={s}_{k}^{\top}{z}_{k}\big{/}{y}_{k}^{\top}{y}_{k}$
and may depend on the most recent ${y}_{k}$. As
$P{J}_{k}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}$, we note a key
advantage of the RCR: that (13) can be written as
(14)
${V}_{k}=\delta_{k}P+P{J}_{k}{N}_{k}{J}_{k}^{\top}P^{\top}=\delta_{k}P+\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}\begin{bmatrix}{S}_{k}^{\top}\\\\[4.0pt]
{Z}_{k}^{\top}\end{bmatrix}.$
By storing a few columns of
$\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}\in\mathbb{R}^{n\times 2l}$ (as
described in section 4.4), which in turn define a small matrix
${N}_{k}\in\mathbb{R}^{2l\times 2l}$ (cf. Lemma 1), we can separate the solves
with $AA^{\top}$ from other calculations. Concretely, note that solves with
$AA^{\top}$ only occur as part of the orthogonal projection $P$, which can be
represented as a linear operator and does not need to be explicitly formed.
Also note that (7) and (14) are related, with the difference being that
${Y}_{k}$ and $\delta_{k}I$ in (7) are replaced by ${Z}_{k}$ and $\delta_{k}P$
in (14). Hence for large $n$ and $m$, computation with (14) is efficient and
requires little memory, provided orthogonal projections with $P$ are handled
effectively (as described in section 5). On the other hand, the compact
representation in (11) does not neatly decouple solves with $AA^{\top}$, and
results in perhaps prohibitively expensive computations for large $m$. In
particular, ${G}_{k}$ in the middle matrix of (11) is defined by
${G}_{k}\equiv(A{B}^{-1}_{k}A^{\top})^{-1}\in\mathbb{R}^{m\times m}$, which
interleaves solves with $AA^{\top}$ and other terms. Therefore, the RCR in
(13)–(14) is considerably more practical for large $n$ and $m$ than (11). We
apply ${V}_{k}$ from (14) to a vector $g$ as
(15) $h=\begin{bmatrix}{S}_{k}^{\top}\\\\[4.0pt]
{Z}_{k}^{\top}\end{bmatrix}g,\qquad{V}_{k}g=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}h+\delta_{k}Pg.$
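Lemma 1 and the product (15) can be checked numerically. The sketch below (our own; random data, a dense projector, and steps generated inside $\text{null}(A)$ — which the algorithm guarantees for its iterates — are assumptions) compares the RCR form (14) against ${V}_{k}$ computed densely from (10):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, l = 10, 4, 3
A = rng.standard_normal((m, n))
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)  # orthogonal projector onto null(A)

H = rng.standard_normal((n, n)); H = H @ H.T + n * np.eye(n)
S = P @ rng.standard_normal((n, l))  # feasible steps: A s_i = 0
Y = H @ S                            # curvature pairs with s_i^T y_i > 0
Z = P @ Y

gamma = 2.0; delta = 1.0 / gamma
SY = S.T @ Y
Lk = np.tril(SY, -1); Dk = np.diag(np.diag(SY)); Tk = np.triu(SY)

# dense reference: B_k from (3)-(6), then V_k from (10)
J = np.hstack([S, Y])
M = -np.linalg.inv(np.block([[delta * S.T @ S, delta * Lk],
                             [delta * Lk.T,    -Dk        ]]))
B = gamma * np.eye(n) + J @ M @ J.T
Binv = np.linalg.inv(B)
G = np.linalg.inv(A @ Binv @ A.T)
V = Binv - Binv @ A.T @ G @ A @ Binv

# RCR (13)-(14): only S_k, Z_k, T_k, D_k and delta_k are needed
Tinv = np.linalg.inv(Tk)
N = np.block([[Tinv.T @ (Dk + delta * Z.T @ Z) @ Tinv, -delta * Tinv.T],
              [-delta * Tinv,                           np.zeros((l, l))]])
SZ = np.hstack([S, Z])
V_rcr = delta * P + SZ @ N @ SZ.T
assert np.allclose(V, V_rcr)
```

Note that the RCR side never forms ${G}_{k}$ or ${C}_{k}$; the only interaction with $A$ is through the projector.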
### 4.3 Computational complexity
With adequate precomputation and storage, the matrix-vector
product (15) is often inexpensive. If the columns of ${Z}_{k}$ are stored,
updating the small $2l\times 2l$ matrix ${N}_{k}$ does not depend on solves
with $AA^{\top}$. Moreover, factors of $P$ can be precomputed once at $k=0$
and reused. In particular, suppose that a (sparse) QR factorization
$A^{\top}=\big{[}\begin{smallmatrix}Q_{1}&Q_{2}\end{smallmatrix}\big{]}\big{[}\begin{smallmatrix}R\\\
0\end{smallmatrix}\big{]}$ is obtained once, with
$Q=\big{[}\begin{smallmatrix}Q_{1}&Q_{2}\end{smallmatrix}\big{]}$ being
sparse, such that the product $Q^{\top}\\!g$ takes $\mathcal{O}(rn)$
multiplications, where $r$ is constant. Subsequently, the projection
$Pg=g-Q_{1}Q_{1}^{\top}g$ can be computed in $\mathcal{O}(n+2rn)$
multiplications (or $Pg=Q_{2}Q_{2}^{\top}g$ in $\mathcal{O}(2rn)$
multiplications). Thus, we summarize the multiplications in (15) as: $h$ with
$2nl$, ${N}_{k}h$ with negligible $(2l)^{2}$,
$\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}h$ with $2nl$, and $Pg$
with, say, $2nr$. The total, without negligible terms, is
$\mathcal{O}(2n(2l+r))$. The multiplications scale linearly with $n$, are
related to the sparsity in $A$, and are thus suited for large problems.
### 4.4 Updating
We store and update the columns of
${Z}_{k}=\begin{bmatrix}z_{k-l}&\cdots&z_{k-1}\end{bmatrix}$ one at a time and
recall that $z_{k}=Pg_{k+1}-P{g}_{k}$. Based on this, no additional solves
with $AA^{\top}$ are required to represent the matrix $V_{k+1}$.
Specifically, suppose that we computed and stored $P{g}_{k}$ at the end of the
previous iteration, and that we compute $Pg_{k+1}$ at the end of the current
iteration. We can use this vector in two places: first to represent $Z_{k+1}$
with ${z}_{k}=Pg_{k+1}-P{g}_{k}$ and hence ${V}_{k+1}$, and secondly in the
computation of ${V}_{k+1}{g}_{k+1}$. Thus only one solve with $AA^{\top}$ per
iteration is necessary to update ${V}_{k+1}$ and to compute a step of the form
$s=-{V}_{k+1}{g}_{k+1}$.
For large problems, the limited-memory representation in (13) is obtained by
storing only the last $l$ columns of ${S}_{k}$ and ${Z}_{k}$. With $1\leq l\ll
n$, limited-memory strategies enable computational efficiencies and lower
storage requirements [26]. Updating ${S}_{k}$ and ${Z}_{k}$ requires replacing
or inserting one column at each iteration. Let an underline below a matrix
represent the matrix with its first column removed. That is,
$\underline{Z}_{k}$ represents ${Z}_{k}$ without its first column. With this
notation, a column update of a matrix ${Z}_{k}$ by a vector ${z}_{k}$ is
defined as
$\text{colUpdate}\left({Z}_{k},{z}_{k}\right)\equiv\begin{cases}[\>{Z}_{k}\>{z}_{k}\>]&\text{
if }k<l,\\\ [\>\underline{Z}_{k}\>{z}_{k}\>]&\text{ if }k\geq l.\\\
\end{cases}$
Such a column update either directly appends a column to a matrix or first
removes a column and then appends one. This column update will be used, for
instance, to obtain ${Z}_{k+1}$ from ${Z}_{k}$ and ${z}_{k}$, i.e.,
${Z}_{k+1}=\text{colUpdate}({Z}_{k},{z}_{k})$. Next, let an overline above a
matrix represent the matrix with its first row removed. That is,
$\overline{S^{\top}_{k}Z_{k}}$ represents $S^{\top}_{k}{Z}_{k}$ without its
first row. With this notation, a product update of ${S}_{k}^{\top}{Z}_{k}$ by
matrices ${S}_{k}$ and ${Z}_{k}$ and vectors ${s}_{k}$ and ${z}_{k}$ is
defined as
$\text{prodUpdate}\left({S}_{k}^{\top}{Z}_{k},{S}_{k},{Z}_{k},{s}_{k},{z}_{k}\right)\equiv\begin{cases}\left[\begin{array}[]{
c c }{S}_{k}^{\top}{Z}_{k}&{S}_{k}^{\top}{z}_{k}\\\
{s}_{k}^{\top}{Z}_{k}&{s}_{k}^{\top}{z}_{k}\end{array}\right]&\text{ if
}k<l,\vspace{0.1cm}\\\ \left[\begin{array}[]{ c c
}\left(\underline{\overline{S^{\top}_{k}Z_{k}}}\right)&\underline{S}_{k}^{\top}{z}_{k}\\\
{s}_{k}^{\top}\underline{Z}_{k}&{s}_{k}^{\top}{z}_{k}\end{array}\right]&\text{
if }k\geq l.\\\ \end{cases}$
This product update is used to compute matrix products such as
${S}_{k+1}^{\top}{Z}_{k+1}$ with $\mathcal{O}(2ln)$ multiplications, instead
of $\mathcal{O}(l^{2}n)$ when the product ${S}_{k}^{\top}{Z}_{k}$ is stored
and the vectors ${s}_{k}$ and ${z}_{k}$ have been computed. Note that a
diagonal matrix can be updated in this way by setting the rectangular matrices
${S}_{k}$ and ${Z}_{k}$ to zero and
${D}_{k+1}=\text{prodUpdate}({D}_{k},0,0,{s}_{k},{z}_{k})$. An upper
triangular matrix can be updated in a similar way, e.g.,
${T}_{k+1}=\text{prodUpdate}({T}_{k},{S}_{k},0,{s}_{k},{z}_{k})$. To save
computation, products with zero matrices are never formed explicitly.
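The two updates can be sketched as follows (our own illustration; the loop simply checks the $\mathcal{O}(2ln)$ product update against direct recomputation, both before and after the memory limit $l$ is reached):

```python
import numpy as np

def col_update(Zk, zk, l):
    """colUpdate: append zk; once l columns are stored, drop the oldest first."""
    if Zk.shape[1] < l:
        return np.hstack([Zk, zk[:, None]])
    return np.hstack([Zk[:, 1:], zk[:, None]])

def prod_update(SZ, Sk, Zk, sk, zk, l):
    """prodUpdate: refresh S^T Z from the stored product and the new vectors."""
    if SZ.shape[0] < l:          # still filling the memory
        top = np.hstack([SZ, Sk.T @ zk[:, None]])
        bottom = np.hstack([sk[None, :] @ Zk, [[sk @ zk]]])
    else:                        # full memory: drop the first row and column
        top = np.hstack([SZ[1:, 1:], Sk[:, 1:].T @ zk[:, None]])
        bottom = np.hstack([sk[None, :] @ Zk[:, 1:], [[sk @ zk]]])
    return np.vstack([top, bottom])

rng = np.random.default_rng(3)
n, l = 12, 3
S = np.zeros((n, 0)); Z = np.zeros((n, 0)); SZ = np.zeros((0, 0))
for _ in range(6):               # run past the memory limit l
    s, z = rng.standard_normal(n), rng.standard_normal(n)
    SZ = prod_update(SZ, S, Z, s, z, l)   # uses the OLD S and Z
    S = col_update(S, s, l); Z = col_update(Z, z, l)
    assert np.allclose(SZ, S.T @ Z)       # matches direct recomputation
```

The product update must be applied before the column updates, since it uses the previous $S_{k}$ and $Z_{k}$.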
## 5 Computing projections
With $P=I_{n}-A^{\top}(AA^{\top})^{-1}A$, projections
$z=Py$ can be computed by direct or iterative methods. Their efficiency
depends on the sparsity of $A$.
### 5.1 QR factorization
When $A$ has full row-rank and the QR factorization
(16) $A^{\top}=Q\begin{bmatrix}R\\\
0\end{bmatrix}=\begin{bmatrix}Q_{1}&Q_{2}\end{bmatrix}\begin{bmatrix}R\\\
0\end{bmatrix}=Q_{1}R$
is available, the projection operator becomes
$P=I-Q_{1}Q_{1}^{\top}=Q_{2}Q_{2}^{\top}$. Thus, $z=Py$ can be computed stably
as $z=Q_{2}(Q_{2}^{\top}y)$. With $m<n$, the QR factors are best obtained
using a product of Householder transformations [21]:
(17) $Q^{\top}A^{\top}=H_{m}\dots H_{3}H_{2}H_{1}A^{\top}=\begin{bmatrix}R\\\
0\end{bmatrix}=\begin{bmatrix}Q_{1}^{\top}\\\
Q_{2}^{\top}\end{bmatrix}A^{\top}.$
Thus $Q=H_{1}H_{2}H_{3}\dots H_{m}$ and the operators $Q_{1}$ and $Q_{2}$ are
available from
(18) $\displaystyle Q_{1}=Q\begin{bmatrix}I\\\
0\end{bmatrix}\qquad\textnormal{and}\qquad Q_{2}=Q\begin{bmatrix}0\\\
I\end{bmatrix}.$
When $A$ is sparse, the SuiteSparseQR software [11] permutes the columns of
$A^{\top}$ in (17) to retain sparsity in $H_{k}$ and $R$. The projection
$z=Py=Q_{2}(Q_{2}^{\top}y)$ can then be computed efficiently.
One can avoid storage of $Q_{1}$ by noting that $Q_{1}=A^{\top}R^{-1}$. The
projection can be computed as
$z=(I-Q_{1}Q_{1}^{\top})y=y-A^{\top}R^{-1}R^{-\top}Ay$, though with lower
precision than $z=Q_{2}(Q_{2}^{\top}y)$.
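Both variants can be sketched with a dense QR factorization (our own illustration; `numpy.linalg.qr` with `mode='complete'` stands in for the sparse Householder QR, and the random data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 9, 4
A = rng.standard_normal((m, n))
y = rng.standard_normal(n)

Q, R = np.linalg.qr(A.T, mode='complete')   # A^T = Q [R; 0], Q = [Q1 Q2]
Q1, Q2 = Q[:, :m], Q[:, m:]
R1 = R[:m, :]

z_qr = Q2 @ (Q2.T @ y)                      # stable form z = Q2 Q2^T y
z_ref = y - A.T @ np.linalg.solve(A @ A.T, A @ y)   # definition z = P y
assert np.allclose(z_qr, z_ref)

# Q1-free variant via Q1 = A^T R^{-1}: z = y - A^T R^{-1} R^{-T} A y
w = np.linalg.solve(R1.T, A @ y)
z_alt = y - A.T @ np.linalg.solve(R1, w)
assert np.allclose(z_alt, z_ref)
```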
### 5.2 Iterative computation of $z$
Computing QR factors is sometimes not practical because $A$ contains one or
more relatively dense columns. (In the numerical experiments of section 9,
this occurred with only 2 out of 142 sparse constrained problems.) The
multifrontal QR solver SuiteSparseQR [11] then has to handle dense factors,
slowing computing times. For problems with thousands of constraints we regard
column $j$ as relatively dense if $\text{nnz}(A_{:j})/m>0.1$. When one expects
the QR factorization to be slow because of dense columns, an alternative is to
solve the least-squares problem
(19) $\min_{w}\|A^{\top}w-y\|$
and compute the residual $z=Py=y-A^{\top}w$. Suitable iterative solvers for
(19) are CGLS [23], LSQR [28], and LSMR [17]. If $\tilde{A}$ is the same as
$A$ with any relatively dense columns deleted, the factor $\tilde{R}$ from a
sparse QR factorization of ${\tilde{A}}^{\top}$ (again with suitable column
permutation) should be a good right-preconditioner to accelerate the iterative
solvers. If $\tilde{A}$ does not have full row-rank, the zero or small
diagonals of $\tilde{R}$ can be changed to 1 before $\tilde{R}$ is used as a
preconditioner.
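The iterative alternative can be sketched with CGLS, the simplest of the listed solvers (our own unpreconditioned illustration with random dense data; LSQR and LSMR have the same access pattern, requiring only products with $A^{\top}$ and $A$):

```python
import numpy as np

def cgls(C, y, iters=200, tol=1e-12):
    """CGLS for min_w ||C w - y||_2 (here C = A^T), using only C and C^T products."""
    w = np.zeros(C.shape[1])
    r = y - C @ w
    p = g = C.T @ r
    gamma = g @ g
    for _ in range(iters):
        q = C @ p
        alpha = gamma / (q @ q)
        w += alpha * p
        r -= alpha * q
        g = C.T @ r
        gamma_new = g @ g
        if np.sqrt(gamma_new) < tol:
            break
        p = g + (gamma_new / gamma) * p
        gamma = gamma_new
    return w

rng = np.random.default_rng(5)
n, m = 50, 10
A = rng.standard_normal((m, n))
y = rng.standard_normal(n)

w = cgls(A.T, y)
z = y - A.T @ w                                     # residual = P y
z_ref = y - A.T @ np.linalg.solve(A @ A.T, A @ y)
assert np.allclose(z, z_ref, atol=1e-8)
assert np.allclose(A @ z, 0, atol=1e-8)             # z lies in null(A)
```

In practice the right-preconditioned form replaces $C$ with $C\tilde{R}^{-1}$, which this sketch omits.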
### 5.3 Implementation
Appendix B describes the implementation of the two preceding projections. We
refer to these operations through the definition
$z\equiv\text{compProj}(A,y,\texttt{P})\equiv\begin{cases}\text{Householder
QR}&\text{if }\texttt{P}=1,\\\ \text{Preconditioned LSQR}&\text{if
}\texttt{P}=2.\end{cases}$
Note that the implementations do not require $A$ to have full row rank.
### 5.4 Trust-region algorithm
To solve (1) we use the trust-region strategy, which is regarded as a robust
minimization method [10]. At each iteration, the method measures progress
using the ratio of actual over predicted reductions:
$\rho_{k}=\frac{f({x}_{k})-f({x}_{k}+{s}_{k})}{q(0)-q({s}_{k})},$
where ${s}_{k}$ is an intermediate search direction, in the sense that
${s}_{k}$ will ultimately be used as an update only if $\rho_{k}$ is greater
than a threshold. By accepting steps that fulfill the so-called sufficient
decrease condition $\rho>c_{1}$ (suppressing the subscript $k$ on $\rho_{k}$)
for a constant $c_{1}>0$, the method successively moves towards a local
minimizer (though there is no guarantee that a minimizer will be reached). The
trust-region radius $\Delta>0$ controls the norm of the search direction by
means of the constraint $\|s\|_{2}\leq\Delta$. There are two possible cases
for the solution of the TR subproblem: either the search direction is in the
interior of the constraint ($\|s\|<\Delta$) or it is on the boundary
($\|s\|=\Delta$). Since the L-BFGS matrix ${B}_{k}$ is positive definite, the
solution of (2) is given by the unconstrained minimizer
$s=s_{E}$ from (9) if
$\|s_{E}\|\leq\Delta$. Otherwise, if
$\|s_{E}\|>\Delta$, then (2) is solved with the active norm
constraint $\|s\|=\Delta$. Note that even if
$\|s_{E}\|\leq\Delta$, the condition $\rho>c_{1}$ might
not hold. In this situation, or in any case when $\rho\leq c_{1}$, the radius
$\Delta$ is reduced and a new problem (2) (with smaller $\Delta$) and
constraint $\|s\|=\Delta$ is solved. The overall trust-region strategy for one
iteration is given next, with radius $\Delta>0$ and $c_{1}>0$ and iteration
counter suppressed.
Trust-Region Strategy:
---
1. | Compute the unconstrained step $s\leftarrow s_{E}$ from (9) (using (15))
2. | While ($\|s\|_{2}>\Delta$ or $\rho\leq c_{1}$)
| 2.1. Solve (2) with $\|s\|=\Delta$
| 2.2. Reduce $\Delta$
| end
3. | Increase (or at least do not decrease) $\Delta$
4. | Update iterate $x\leftarrow x+s$
Practical aspects of an implementation include the setting of constants and
starting the method. Detailed procedures are described in sections 6, 7, 8 and
9.
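A minimal sketch of this strategy is given below (our own; a toy quadratic objective, dense KKT solves in place of (15), and a simple rescaling of $s_{E}$ in place of the boundary solver of section 6 are all illustrative simplifications — for a quadratic, $\rho=1$ and every trial step is accepted):

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 6, 2
H = rng.standard_normal((n, n)); H = H @ H.T + n * np.eye(n)  # stand-in for B_k
A = rng.standard_normal((m, n))
b = rng.standard_normal(n)
f = lambda x: 0.5 * x @ H @ x + b @ x
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)

x = np.zeros(n)                  # feasible start: A x = 0
Delta, c1 = 1.0, 1e-4
for _ in range(30):
    g = H @ x + b
    if np.linalg.norm(P @ g) < 1e-8:
        break                    # projected gradient vanishes: done
    # step 1: equality-constrained model minimizer from the KKT system (9)
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    s = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))[:n]
    # step 2: shrink until the step fits the region and decreases sufficiently
    for _ in range(30):
        if np.linalg.norm(s) > Delta:
            s = s * (Delta / np.linalg.norm(s))  # crude boundary step (cf. Sec. 6)
        rho = (f(x) - f(x + s)) / -(g @ s + 0.5 * s @ H @ s)
        if rho > c1:
            break
        Delta *= 0.5
    x = x + s                    # step 4: accept
    Delta *= 2.0                 # step 3: expand the radius

assert np.allclose(A @ x, 0, atol=1e-8)          # iterates stay feasible
assert np.linalg.norm(P @ (H @ x + b)) < 1e-6    # constrained stationarity
```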
## 6 $\ell_{2}$-norm trust-region constraint
With an $\ell_{2}$-norm trust-region constraint in (2), the search direction
is given by
$s_{L2}=\underset{\|s\|_{2}\leq\Delta_{k}}{\text{ arg min
}}q(s)\quad\text{subject to}\quad As=0.$
With $\sigma\geq 0$ denoting a scalar Lagrange multiplier, the search
direction is a feasible solution to a shifted KKT system including the norm
constraint:
(20) $\begin{bmatrix}{B}_{k}+\sigma I&A^{\top}\\\
A&0\end{bmatrix}\begin{bmatrix}s_{L2}\\\
\lambda_{L2}\end{bmatrix}=\begin{bmatrix}-{g}_{k}\\\
0\end{bmatrix},\qquad\|s_{L2}\|_{2}\leq\Delta_{k}.$
By computing the (1,1) block of the shifted inverse KKT matrix, we note that
a necessary condition for the solution is
$s_{L2}(\sigma)=-{V}_{k}(\sigma){g}_{k}$, where
${V}_{k}(\sigma)=({B}_{k}+\sigma I)^{-1}-({B}_{k}+\sigma
I)^{-1}A^{\top}(A({B}_{k}+\sigma I)^{-1}A^{\top})^{-1}A({B}_{k}+\sigma
I)^{-1}.$
For the L-BFGS matrix, with $\tau_{k}=\tau_{k}(\sigma)=(1/\delta_{k}+\sigma)$
we have $({B}_{k}+\sigma
I)^{-1}=\tau_{k}^{-1}I+{J}_{k}{W}_{k}(\sigma){J}_{k}^{\top},$ where the small
$2l\times 2l$ matrix is
${W}_{k}(\sigma)=-\begin{bmatrix}\theta_{k}{S}_{k}^{\top}{S}_{k}&\theta_{k}{L}_{k}+\tau_{k}{T}_{k}\\\
\theta_{k}{L}_{k}^{\top}+\tau_{k}{T}_{k}^{\top}&\tau_{k}(\tau_{k}{D}_{k}+{Y}_{k}^{\top}{Y}_{k})\end{bmatrix}^{-1}$
with $\theta_{k}=\tau_{k}(1-\delta_{k}\tau_{k})$. In terms of
${C}_{k}(\sigma)\equiv A{J}_{k}{W}_{k}(\sigma)$ and
${G}_{k}(\sigma)\equiv(A({B}_{k}+\sigma I)^{-1}A^{\top})^{-1}$, the compact
representation of ${V}_{k}(\sigma)$ [6, Corollary 1] is
(21) $\displaystyle{V}_{k}(\sigma)=$
$\displaystyle\frac{1}{\tau_{k}}I+\begin{bmatrix}A^{\top}&{J}_{k}\end{bmatrix}\begin{bmatrix}-\frac{1}{\tau_{k}^{2}}{G}_{k}(\sigma)&-\frac{1}{\tau_{k}}{G}_{k}(\sigma){C}_{k}(\sigma)\\\
-\frac{1}{\tau_{k}}{C}_{k}(\sigma)^{\top}{G}_{k}(\sigma)&{W}_{k}(\sigma)-{C}_{k}(\sigma)^{\top}{G}_{k}(\sigma){C}_{k}(\sigma)\end{bmatrix}\begin{bmatrix}A\\\\[4.0pt]
{J}_{k}^{\top}\end{bmatrix}.$
Once the middle matrix in (21) is formed, the compact representation can be
used to compute matrix-vector products efficiently. However, when $m$ is large
(many equality constraints), computing terms such as ${G}_{k}(\sigma)$ becomes
expensive. Therefore, we describe a reduced representation similar to (13),
based on the property that $P^{\top}{V}_{k}(\sigma)P={V}_{k}(\sigma)$ and by
storing ${S}_{k}$ and ${Z}_{k}$. Lemma 2 summarizes the outcome.
Lemma 2: _The RCR of ${V}_{k}(\sigma)$ in (21) for the L-BFGS matrix is given
by_
(22)
${V}_{k}(\sigma)=\frac{1}{\tau_{k}}I+\begin{bmatrix}A^{\top}&{S}_{k}&{Z}_{k}\end{bmatrix}\begin{bmatrix}-\frac{1}{\tau_{k}}(AA^{\top})^{-1}&\\\
&{N}_{k}(\sigma)\end{bmatrix}\begin{bmatrix}A\\\\[2.0pt]
{S}_{k}^{\top}\\\\[2.0pt] {Z}_{k}^{\top}\end{bmatrix},$
_where_ $\tau_{k}=\tau_{k}(\sigma)=(1/\delta_{k}+\sigma)$,
$\theta_{k}=\theta_{k}(\sigma)=\tau_{k}(\sigma)(1-\delta_{k}\tau_{k}(\sigma))$,
_and_
${N}_{k}(\sigma)=-\begin{bmatrix}\theta_{k}(\sigma){S}_{k}^{\top}{S}_{k}&\theta_{k}(\sigma){L}_{k}+\tau_{k}(\sigma){T}_{k}\\\
\theta_{k}(\sigma){L}_{k}^{\top}+\tau_{k}(\sigma){T}_{k}^{\top}&\tau_{k}(\sigma)(\tau_{k}(\sigma){D}_{k}+{Z}_{k}^{\top}{Z}_{k})\end{bmatrix}^{-1}.$
###### Proof.
To simplify notation, we suppress the explicit dependence on $\sigma$ in this
proof, so that ${V}_{k}\equiv{V}_{k}(\sigma)$, ${C}_{k}\equiv{C}_{k}(\sigma)$,
and ${W}_{k}\equiv{W}_{k}(\sigma)$. Multiplying ${V}_{k}$ in (21) from the
left and right by $P^{\top}$ and $P$ yields
${V}_{k}=\frac{1}{\tau_{k}}P+\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}({W}_{k}-{C}_{k}^{\top}{G}_{k}{C}_{k})\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}.$
Observe that
${C}_{k}=A{J}_{k}{W}_{k}=\begin{bmatrix}0&A{Y}_{k}\end{bmatrix}{W}_{k}$ is
block-rectangular and that
${G}_{k}=(A(\frac{1}{\tau_{k}}I+{J}_{k}{W}_{k}{J}_{k}^{\top})A^{\top})^{-1}$
depends on ${W}_{k}$. Defining ${F}_{k}\equiv\tau_{k}(AA^{\top})^{-1}$, we
show that the Sherman-Morrison-Woodbury (SMW) inverse gives the simplification
$\displaystyle{W}_{k}-{C}_{k}^{\top}{G}_{k}{C}_{k}$
$\displaystyle={W}_{k}-{W}_{k}\begin{bmatrix}0\\\
{Y}_{k}^{\top}A^{\top}\end{bmatrix}{G}_{k}\begin{bmatrix}0&A{Y}_{k}\end{bmatrix}{W}_{k}$
$\displaystyle={W}_{k}-{W}_{k}\begin{bmatrix}0\\\
{Y}_{k}^{\top}A^{\top}\end{bmatrix}\big{(}I+\begin{bmatrix}0&{F}_{k}A{Y}_{k}\end{bmatrix}{W}_{k}\begin{bmatrix}0\\\
{Y}_{k}^{\top}A^{\top}\end{bmatrix}\big{)}^{-1}\begin{bmatrix}0&{F}_{k}A{Y}_{k}\end{bmatrix}{W}_{k}$
$\displaystyle=\left({W}_{k}^{-1}+\begin{bmatrix}0\\\
{Y}_{k}^{\top}A^{\top}\end{bmatrix}\begin{bmatrix}0&{F}_{k}A{Y}_{k}\end{bmatrix}\right)^{-1},$
where the third equality is obtained by applying the SMW formula in reverse.
Since only the (2,2) block in the low-rank matrix of the third equality is
nonzero, and since ${F}_{k}=\tau_{k}(AA^{\top})^{-1}$, note that
$({W}_{k}^{-1})_{22}+{Y}_{k}^{\top}A^{\top}{F}_{k}A{Y}_{k}=-(\tau_{k}(\tau_{k}{D}_{k}+{Y}_{k}^{\top}{Y}_{k}-{Y}_{k}^{\top}A^{\top}(AA^{\top})^{-1}A{Y}_{k})),$
which corresponds to the $(2,2)$ block ${N}_{k}(\sigma)$ in (22). Because all
other blocks are unaffected, it holds that
${W}_{k}-{C}_{k}^{\top}{G}_{k}{C}_{k}={N}_{k}(\sigma)$. Subsequently, by
factoring $P=I-A^{\top}(AA^{\top})^{-1}A$ we deduce the compact representation
(22).
Note that
${S}_{k}^{\top}{Z}_{k}={S}_{k}^{\top}{Y}_{k}={L}_{k}+{D}_{k}+\bar{T}_{k}$,
with ${T}_{k}={D}_{k}+\bar{T}_{k}$, means that the RCR for ${V}_{k}(\sigma)$
is fully specified by storing ${S}_{k}$ and ${Z}_{k}$. An exception is the
scalar $\delta_{k}$, which may depend on the most recent ${y}_{k}$. Also when
$\sigma=0$, the representations (13) and (22) coincide. We apply
${V}_{k}(\sigma)$ to a vector $g$ as
$h=\begin{bmatrix}{S}_{k}^{\top}\\\
{Z}_{k}^{\top}\end{bmatrix}g,\qquad{V}_{k}(\sigma)g=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}(\sigma)h+\frac{1}{\tau_{k}}Pg.$
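Lemma 2 can be checked numerically in the same way as Lemma 1. In the sketch below (our own; random data, a fixed shift $\sigma=0.7$, and feasible steps $A{s}_{i}=0$ are assumptions), the RCR of ${V}_{k}(\sigma)$ matches the dense definition:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, l, sigma = 10, 4, 3, 0.7
A = rng.standard_normal((m, n))
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
H = rng.standard_normal((n, n)); H = H @ H.T + n * np.eye(n)
S = P @ rng.standard_normal((n, l))   # feasible steps: A s_i = 0
Y = H @ S
Z = P @ Y

gamma = 2.0; delta = 1.0 / gamma
SY = S.T @ Y
Lk = np.tril(SY, -1); Dk = np.diag(np.diag(SY)); Tk = np.triu(SY)

# dense reference: B_k from (3)-(6), then V_k(sigma) from its definition
J = np.hstack([S, Y])
M = -np.linalg.inv(np.block([[delta * S.T @ S, delta * Lk],
                             [delta * Lk.T,    -Dk        ]]))
B = gamma * np.eye(n) + J @ M @ J.T
Minv = np.linalg.inv(B + sigma * np.eye(n))
V_ref = Minv - Minv @ A.T @ np.linalg.solve(A @ Minv @ A.T, A @ Minv)

# RCR of V_k(sigma) from Lemma 2
tau = 1.0 / delta + sigma
theta = tau * (1.0 - delta * tau)
N = -np.linalg.inv(np.block([
    [theta * S.T @ S,           theta * Lk + tau * Tk],
    [theta * Lk.T + tau * Tk.T, tau * (tau * Dk + Z.T @ Z)]]))
SZ = np.hstack([S, Z])
V_rcr = (1.0 / tau) * P + SZ @ N @ SZ.T
assert np.allclose(V_ref, V_rcr)
```

Setting $\sigma=0$ reduces the check to Lemma 1, as noted above.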
### 6.1 $\ell_{2}$-norm search direction
To compute the $\ell_{2}$ TR minimizer we first set $\sigma=0$ and
$s_{L2}(0)=-{V}_{k}(0){g}_{k}$. If $\|s_{L2}(0)\|_{2}\leq\Delta_{k}$, the
minimizer with the $\ell_{2}$-norm is given by $s_{L2}(0)$. Otherwise
($\|s_{L2}(0)\|_{2}>\Delta_{k}$) we define the so-called secular equation [10]
as
$\phi(\sigma)\equiv\frac{1}{\|s_{L2}(\sigma)\|_{2}}-\frac{1}{\Delta_{k}}.$
To solve the secular equation we apply the 1D Newton iteration
$\sigma_{j+1}=\sigma_{j}-\frac{\phi(\sigma_{j})}{\phi^{\prime}(\sigma_{j})},$
where
$\phi^{\prime}(\sigma_{j})=-(s_{L2}(\sigma_{j})^{\top}s_{L2}(\sigma_{j})^{\prime})/\|s_{L2}(\sigma_{j})\|^{3}_{2}$
and $s_{L2}(\sigma_{j})^{\prime}=-{V}_{k}(\sigma_{j})s_{L2}(\sigma_{j})$ (with
a prime denoting the derivative with respect to $\sigma$). Note that $s_{L2}(\sigma_{j})^{\prime}$
can be derived from the shifted system (20) by differentiation with respect to
$\sigma$. Applying the product rule in (20) and regarding the solutions as
functions of $\sigma$, i.e., $s_{L2}^{\prime}\equiv s_{L2}(\sigma)^{\prime}$
and $\lambda_{L2}^{\prime}\equiv\lambda_{L2}(\sigma)^{\prime}$, one obtains
the differentiated system
$\begin{bmatrix}{B}_{k}+\sigma I&A^{\top}\\\
A&0\end{bmatrix}\begin{bmatrix}s_{L2}^{\prime}\\\
\lambda_{L2}^{\prime}\end{bmatrix}=\begin{bmatrix}-s_{L2}\\\ 0\end{bmatrix}.$
Since the system matrix is the same as in (20) (only the right-hand side
differs), $s_{L2}(\sigma_{j})^{\prime}$ is fully determined by
${V}_{k}(\sigma_{j})$ and $s_{L2}(\sigma_{j})$. Starting from $\sigma_{0}=0$,
we terminate the Newton iteration if $|\phi(\sigma_{j+1})|\leq\varepsilon$ or
an iteration limit is reached. The search direction is then computed as
$s_{L2}(\sigma_{j+1})=-{V}_{k}(\sigma_{j+1}){g}_{k}$.
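The Newton iteration on the secular equation can be sketched as follows (our own illustration; a dense SPD stand-in for ${B}_{k}$, direct inverses in place of the compact machinery, and a deliberately small radius $\Delta$ are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 8, 3
B = rng.standard_normal((n, n)); B = B @ B.T + np.eye(n)   # SPD stand-in for B_k
A = rng.standard_normal((m, n))
g = rng.standard_normal(n)
Delta = 0.1

def V(sigma):
    """(1,1) block of the inverse of the shifted KKT matrix in (20)."""
    Minv = np.linalg.inv(B + sigma * np.eye(n))
    return Minv - Minv @ A.T @ np.linalg.solve(A @ Minv @ A.T, A @ Minv)

sigma = 0.0
s = -V(sigma) @ g
if np.linalg.norm(s) > Delta:          # boundary case: solve the secular equation
    for _ in range(50):
        s = -V(sigma) @ g
        phi = 1.0 / np.linalg.norm(s) - 1.0 / Delta
        if abs(phi) <= 1e-10:
            break
        s_prime = -V(sigma) @ s        # s'(sigma) from the differentiated system
        phi_prime = -(s @ s_prime) / np.linalg.norm(s) ** 3
        sigma -= phi / phi_prime       # 1D Newton step
    s = -V(sigma) @ g

assert np.linalg.norm(s) <= Delta * (1 + 1e-6)   # step respects the radius
assert np.allclose(A @ s, 0, atol=1e-8)          # and stays feasible
```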
Our approach with the $\ell_{2}$ norm is summarized in Algorithm 1. This
algorithm is based on storing and updating ${S}_{k},{Z}_{k}$, and the small
blocks of ${N}_{k}(\sigma)$ in (22). Suppose that $s_{0}$ and $z_{0}$ are
obtained by an initialization procedure (for instance, Init. 1 from section
9). With $k=0$, the initial matrices that define ${V}_{k}(\sigma)$ are given
as
(23) $\displaystyle{S}_{k}=\begin{bmatrix}{s}_{k}\end{bmatrix},\quad{Z}_{k}=\begin{bmatrix}{z}_{k}\end{bmatrix},$
(24) $\displaystyle{D}_{k}=\begin{bmatrix}{s}_{k}^{\top}{z}_{k}\end{bmatrix},\quad{T}_{k}=\begin{bmatrix}{s}_{k}^{\top}{z}_{k}\end{bmatrix},\quad{Z}_{k}^{\top}{Z}_{k}=\begin{bmatrix}{z}_{k}^{\top}{z}_{k}\end{bmatrix},\quad{L}_{k}=\begin{bmatrix}0\end{bmatrix}.$
Once the iteration starts, we update
(25) ${S}_{k+1}=\text{colUpdate}({S}_{k},{s}_{k}),\quad{Z}_{k+1}=\text{colUpdate}({Z}_{k},{z}_{k}),$
(26) $\displaystyle{D}_{k+1}=\text{prodUpdate}({D}_{k},0,0,{s}_{k},{z}_{k}),\quad{T}_{k+1}=\text{prodUpdate}({T}_{k},{S}_{k},0,{s}_{k},{z}_{k}),$
$\displaystyle{Z}_{k+1}^{\top}{Z}_{k+1}=\text{prodUpdate}({Z}_{k}^{\top}{Z}_{k},{Z}_{k},{Z}_{k},{z}_{k},{z}_{k}),\quad\text{and}\quad{L}_{k+1}=\text{prodUpdate}({L}_{k},0,{Z}_{k},{s}_{k},0).$
Note that we store and update small matrices such as
${Z}_{k}^{\top}{Z}_{k}\in\mathbb{R}^{l\times l}$ instead of recomputing them.
With the limited-memory technique (typically $3\leq l\leq 7$ [8]), such
matrices are tiny compared to $n$. Consequently,
${N}_{k}(\sigma)\in\mathbb{R}^{2l\times 2l}$, defined by the blocks in (26),
also remains very small compared to $n$.
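The routine $\text{colUpdate}$ is used as a black box above; a minimal Python sketch of the intended behavior (append the newest column and retain at most the $l$ most recent ones) might look as follows, with names chosen for illustration rather than taken from the implementation.

```python
import numpy as np

def col_update(Sk, s_new, l=5):
    """Hypothetical colUpdate: append the newest column s_new to Sk
    and keep only the most recent l columns (limited memory)."""
    Sk = np.column_stack([Sk, s_new])
    if Sk.shape[1] > l:
        Sk = Sk[:, 1:]          # drop the oldest column
    return Sk
```

The companion $\text{prodUpdate}$ routines maintain the corresponding small $l\times l$ products in the same sliding-window fashion.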
Algorithm 1 LTRL2-SLEC (Limited-Memory Trust-Region 2-norm for Sparse Linear
Equality Constraints)
0: $0<c_{1}$, $0<c_{2},c_{3},c_{4},c_{5},c_{6}<1<c_{7}$,
$0<\varepsilon_{1},\varepsilon_{2}$, $0<i_{\text{max}}$, $k=0$,
${\color[rgb]{0,0,0}0<l}$, $\Delta_{k}=\|{x}_{k}\|_{2}$, ${g}_{k}=\nabla
f({x}_{k})$, $\texttt{P}\in[0,1]$,
${g}_{k}^{P}=\text{compProj}(A,{g}_{k},\texttt{P})$,
${g}_{k+1}^{P},{s}_{k},{z}_{k},{y}_{k}$ (from initialization),
${S}_{k},{Z}_{k},{D}_{k},{T}_{k},{L}_{k},{Z}_{k}^{\top}{Z}_{k}$ from (23) and
(24), $\delta_{k}={s}_{k}^{\top}{z}_{k}/{y}_{k}^{\top}{y}_{k}$, $\sigma=0$,
$\tau_{k}=(1/\delta_{k}+\sigma)$, $\theta_{k}=\tau_{k}(1-\delta_{k}\tau_{k})$,
${N}_{k}(\sigma)$ from (22), $k=k+1$
1: while $(\varepsilon_{1}\leq\|g^{P}_{k}\|_{\infty})$ do
2: $h=-\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}{g}_{k}$
3:
${s}_{k}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}(0)h-\delta_{k}g^{P}_{k}$;
$\rho_{k}=0$ {Equality constrained step}
4: if $\|{s}_{k}\|_{2}\leq\Delta_{k}$ then
5: $\rho_{k}=(f({x}_{k})-f({x}_{k}+{s}_{k}))/(q(0)-q({s}_{k}))$
6: end if
7: while $\rho_{k}\leq c_{1}$ do
8: $\sigma=0$, $i=0$; $\tau_{k}=(1/\delta_{k}+\sigma)$,
$\theta_{k}=\tau_{k}(1-\delta_{k}\tau_{k})$
9: $h^{\prime}=-\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}{s}_{k}$
10:
${s}_{k}^{\prime}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}(\sigma)h^{\prime}-\delta_{k}{s}_{k}$;
11: while $\varepsilon_{2}<|\phi(\sigma)|$ and $i<i_{\max}$ do
12: $\sigma=\sigma-\phi(\sigma)/\phi^{\prime}(\sigma)$
13: $\tau_{k}=(1/\delta_{k}+\sigma)$,
$\theta_{k}=\tau_{k}(1-\delta_{k}\tau_{k})$
14: $h=-\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}{g}_{k}$;
${s}_{k}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}(\sigma)h-\frac{1}{\tau_{k}}g^{P}_{k}$
15: $h^{\prime}=-\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}{s}_{k}$;
${s}_{k}^{\prime}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}(\sigma)h^{\prime}-\frac{1}{\tau_{k}}{s}_{k}$;
16: $i=i+1$
17: end while{Newton’s method}
18: $\rho_{k}=0$
19: if $0<(f({x}_{k})-f({x}_{k}+{s}_{k}))$ then
20: $\rho_{k}=(f({x}_{k})-f({x}_{k}+{s}_{k}))/(q(0)-q({s}_{k}))$
21: end if
22: if $\rho_{k}\leq c_{2}$ then
23: $\Delta_{k}=\min(c_{3}\|{s}_{k}\|_{2},c_{4}\Delta_{k})$
24: end if
25: end while
26: $x_{k+1}={x}_{k}+{s}_{k}$ {Accept step}
27: if $c_{5}\Delta_{k}\leq\|{s}_{k}\|_{2}$ and $c_{6}\leq\rho_{k}$ then
28: $\Delta_{k}=c_{7}\Delta_{k}$
29: end if
30: ${g}_{k+1}=\nabla f({x}_{k+1})$,
${g}_{k+1}^{P}=\text{compProj}(A,{g}_{k+1},\texttt{P})$,
${z}_{k}={g}_{k+1}^{P}-{g}_{k}^{P}$, ${y}_{k}={g}_{k+1}-{g}_{k}$,
${S}_{k+1},{Z}_{k+1},{D}_{k+1},{T}_{k+1},{L}_{k+1},{Z}_{k+1}^{\top}{Z}_{k+1}$
from (25) and (26) $\delta_{k+1}={z}_{k}^{\top}{s}_{k}/{y}_{k}^{\top}{y}_{k}$,
$\sigma=0$, $\tau_{k}=(1/\delta_{k}+\sigma)$,
$\theta_{k}=\tau_{k}(1-\delta_{k}\tau_{k})$
31: Update ${N}_{k}(\sigma)$ from (22), $k=k+1$
32: end while
## 7 Eigendecomposition of ${V}_{k}$
We describe how to exploit the structure of the RCR (13) to compute an
implicit eigendecomposition of ${V}_{k}$, and how to combine this with a
shape-changing norm. The effect is that the trust-region subproblem solution
is given by an analytic formula. Since the RCR is equivalent to representation
(11), we can apply previous results. However, using representation (13) is
computationally more efficient. First, note that
${N}_{k}\in\mathbb{R}^{2l\times 2l}$ is a small symmetric square matrix.
Therefore, computing the nonzero eigenvalues and corresponding eigenvectors of
the matrix
$\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}=U_{2}\Lambda_{2}U_{2}^{\top}$
is inexpensive. In particular, we compute the thin QR factorization
$\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}=\widehat{Q}_{2}\widehat{R}_{2}$
and the small eigendecomposition
$\widehat{R}_{2}{N}_{k}\widehat{R}_{2}^{\top}=\widehat{P}_{2}\Lambda_{2}\widehat{P}_{2}^{\top}.$
The small factorization is then
$\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}=\widehat{Q}_{2}(\widehat{R}_{2}{N}_{k}\widehat{R}_{2}^{\top})\widehat{Q}_{2}^{\top}=\widehat{Q}_{2}(\widehat{P}_{2}\Lambda_{2}\widehat{P}_{2}^{\top})\widehat{Q}_{2}^{\top}\equiv
U_{2}\Lambda_{2}U_{2}^{\top},$
where the orthonormal matrix on the right-hand side is defined as
$U_{2}\equiv\widehat{Q}_{2}\widehat{P}_{2}$. Since
$A^{\top}(AA^{\top})^{-1}A=Q_{1}Q_{1}^{\top}$ from (16), we express ${V}_{k}$
as
${V}_{k}=\delta_{k}I+\begin{bmatrix}Q_{1}&U_{2}\end{bmatrix}\begin{bmatrix}-\delta_{k}I_{m}&\\\
&\Lambda_{2}\end{bmatrix}\begin{bmatrix}Q_{1}^{\top}\\\
U_{2}^{\top}\end{bmatrix},$
where $Q_{1}\in\mathbb{R}^{n\times m}$ and $U_{2}\in\mathbb{R}^{n\times 2l}$
are orthonormal, while $\Lambda_{2}\in\mathbb{R}^{2l\times 2l}$ is diagonal.
Defining the orthogonal matrix
$U\equiv\begin{bmatrix}Q_{1}&U_{2}&U_{3}\end{bmatrix},$ where
$U_{3}\in\mathbb{R}^{n\times n-(m+2l)}$ represents the orthogonal complement
of $\begin{bmatrix}Q_{1}&U_{2}\end{bmatrix}$, we obtain the implicit
eigendecomposition of ${V}_{k}$ as
(27)
${V}_{k}=\begin{bmatrix}Q_{1}&U_{2}&U_{3}\end{bmatrix}\begin{bmatrix}0_{m}&&\\\
&\delta_{k}I_{2l}+\Lambda_{2}&\\\
&&\delta_{k}I_{n-(m+2l)}\end{bmatrix}\begin{bmatrix}Q_{1}^{\top}\\\
U_{2}^{\top}\\\ U_{3}^{\top}\end{bmatrix}\equiv U\Lambda U^{\top}.$
Note that we do not explicitly form the potentially expensive-to-compute
orthonormal matrix $U_{3}$, as only scaled projections
$\delta_{k}U_{3}U_{3}^{\top}$ are needed. We therefore refer to factorization
(27) as being implicit. In particular, from the identity $UU^{\top}=I$, we
obtain that
$U_{3}U_{3}^{\top}=I-Q_{1}Q_{1}^{\top}-U_{2}U_{2}^{\top}=P-U_{2}U_{2}^{\top}.$
Note here and above that $U_{2}$ is a thin rectangular matrix with only $2l$
columns.
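The factorization above can be checked numerically. The following Python/NumPy sketch (with random stand-ins for $\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}$ and ${N}_{k}$) forms $U_{2}=\widehat{Q}_{2}\widehat{P}_{2}$ from a thin QR and a small eigendecomposition, and verifies that $U_{2}\Lambda_{2}U_{2}^{\top}$ reproduces $\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, l = 20, 3
SZ = rng.standard_normal((n, 2 * l))        # plays [S_k  Z_k]
N = rng.standard_normal((2 * l, 2 * l))
N = 0.5 * (N + N.T)                         # small symmetric middle matrix

Q2, R2 = np.linalg.qr(SZ)                   # thin QR: [S Z] = Q2 R2
lam2, P2 = np.linalg.eigh(R2 @ N @ R2.T)    # small eigendecomposition
U2 = Q2 @ P2                                # orthonormal, n x 2l

# U2 Lambda2 U2^T reproduces [S Z] N [S Z]^T
err = np.linalg.norm(SZ @ N @ SZ.T - U2 @ np.diag(lam2) @ U2.T)
```

Only the $2l\times 2l$ eigenproblem is solved explicitly; all large objects enter through products with the thin factors.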
## 8 Shape-changing-norm trust-region constraint
To make use of the implicit eigensystem (27), we apply the so-called shape-
changing infinity norm introduced in [7]:
$\|s\|_{U}\equiv\max\left\{\left\|\begin{bmatrix}Q_{1}&U_{2}\end{bmatrix}^{\top}s\right\|_{\infty},\left\|U_{3}^{\top}s\right\|_{2}\right\}.$
With this norm, the trust-region subproblem has a computationally efficient
solution that can be obtained from
$s_{SC}=\operatorname*{argmin}_{\|s\|_{U}\leq\Delta_{k}}q(s)\quad\text{subject to}\quad As=0.$
Since the RCR is equivalent to (11), we invoke [6, section 5.5] to obtain a
direct formula for the search direction:
$s_{SC}=U_{2}(v_{2}-\beta U_{2}^{\top}{g}_{k})+\beta P{g}_{k},$
where
$U_{2}^{\top}{g}_{k}=\widehat{P}_{2}^{\top}\widehat{R}_{2}^{-\top}\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}{g}_{k}\equiv{u}_{k}$
and $\mu_{i}=(\delta_{k}+(\Lambda_{2})_{ii})^{-1}$,
(28) $\displaystyle(v_{2})_{i}=\begin{cases}\frac{-({u}_{k})_{i}}{\mu_{i}}&\text{ if }\left|\frac{({u}_{k})_{i}}{\mu_{i}}\right|\leq\Delta_{k},\\ \frac{-\Delta_{k}({u}_{k})_{i}}{|({u}_{k})_{i}|}&\text{ otherwise,}\end{cases}$
(29) $\displaystyle\beta=\begin{cases}-\delta_{k}&\text{ if }\|\delta_{k}U_{3}^{\top}{g}_{k}\|_{2}\leq\Delta_{k},\\ \frac{-\Delta_{k}}{\|U_{3}^{\top}{g}_{k}\|_{2}}&\text{ otherwise,}\end{cases}$
for $1\leq i\leq 2l$. More details on the computation of $s_{SC}$ are given in
Appendix C. Note that the norm $\|U_{3}^{\top}{g}_{k}\|_{2}$ can be computed
without explicitly forming $U_{3}$, since
$\|U_{3}^{\top}{g}_{k}\|^{2}_{2}={g}_{k}^{\top}(P-U_{2}U_{2}^{\top}){g}_{k}=\|P{g}_{k}\|_{2}^{2}-\|U_{2}^{\top}{g}_{k}\|_{2}^{2}.$
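For illustration, formulas (28)–(29) can be evaluated directly once ${u}_{k}$, $\Lambda_{2}$, and $\|P{g}_{k}\|_{2}$ are available. The following Python sketch (names are illustrative) also uses the identity above to obtain $\|U_{3}^{\top}{g}_{k}\|_{2}$ without forming $U_{3}$.

```python
import numpy as np

def sc_step_components(u, lam2, g_proj_norm, delta_k, Delta):
    """Analytic pieces (28)-(29) of the shape-changing-norm subproblem.
    u = U2^T g_k, lam2 = diag(Lambda_2), g_proj_norm = ||P g_k||_2,
    delta_k the quasi-Newton scaling, Delta the trust-region radius."""
    mu = 1.0 / (delta_k + lam2)             # mu_i = (delta_k + (Lambda_2)_ii)^{-1}
    v2 = np.where(np.abs(u / mu) <= Delta,
                  -u / mu,                  # interior: unconstrained minimizer
                  -Delta * u / np.abs(u))   # boundary: clip to |v2_i| <= Delta
    # ||U3^T g_k||_2^2 = ||P g_k||_2^2 - ||U2^T g_k||_2^2  (no explicit U3)
    xi = np.sqrt(max(g_proj_norm ** 2 - u @ u, 0.0))
    beta = -delta_k if delta_k * xi <= Delta else -Delta / xi
    return v2, beta
```

The step then follows as $s_{SC}=U_{2}(v_{2}-\beta{u}_{k})+\beta P{g}_{k}$.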
The trust-region algorithm using the RCR and the shape-changing norm is
summarized in Algorithm 2 below. Like Algorithm 1, this algorithm is based on
storing and updating ${S}_{k},{Z}_{k}$ and the small blocks of ${N}_{k}$ in
(13). Therefore, the initializations (23)–(24) and updates (25)–(26) can be
used. In addition, since in the thin QR factorization
$\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}=\hat{Q}_{2}\hat{R}_{2}$ the
triangular $\hat{R}_{2}$ is computed from a Cholesky factorization of
$\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}$,
we initialize the matrices
(30)
${S}_{k}^{\top}{S}_{k}=\begin{bmatrix}{s}_{k}^{\top}{s}_{k}\end{bmatrix},\quad{S}_{k}^{\top}{Z}_{k}=\begin{bmatrix}{s}_{k}^{\top}{z}_{k}\end{bmatrix},$
with corresponding updates
(31) $\displaystyle{S}_{k+1}^{\top}{S}_{k+1}$
$\displaystyle=\text{prodUpdate}({S}_{k}^{\top}{S}_{k},{S}_{k},{S}_{k},{s}_{k},{s}_{k}),\textnormal{
{\color[rgb]{0,0,0} and} }$ $\displaystyle{S}_{k+1}^{\top}{Z}_{k+1}$
$\displaystyle=\text{prodUpdate}({S}_{k}^{\top}{Z}_{k},{S}_{k},{Z}_{k},{s}_{k},{z}_{k}).$
As before, with a small memory parameter $l$, these matrices are very small
compared to large $n$, and computations with them are inexpensive.
Algorithm 2 LTRSC-SLEC (Limited-Memory Trust-Region Shape-Changing Norm for
Sparse Linear Equality Constraints)
0: $0<c_{1}$, $0<c_{2},c_{3},c_{4},c_{5},c_{6}<1<c_{7}$, $0<\varepsilon_{1}$,
${\color[rgb]{0,0,0}0<l}$, $k=0$, $\Delta_{k}=\|{x}_{k}\|_{2}$,
${g}_{k}=\nabla f({x}_{k})$, $\texttt{P}\in[0,1]$,
${g}_{k}^{P}=\text{compProj}(A,{g}_{k},\texttt{P})$,
${g}_{k+1}^{P},{s}_{k},{z}_{k},{y}_{k}$ (from initialization),
${S}_{k},{Z}_{k},{D}_{k},{T}_{k},{Z}_{k}^{\top}{Z}_{k},{S}_{k}^{\top}{S}_{k},{S}_{k}^{\top}{Z}_{k}$
from (23), (24) and (30),
$\delta_{k}={s}_{k}^{\top}{z}_{k}/{y}_{k}^{\top}{y}_{k}$, ${N}_{k}$ from (13),
$k=k+1$
1: while $(\varepsilon_{1}\leq\|g^{P}_{k}\|_{\infty})$ do
2: $h=-\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}{g}_{k}$
3:
${s}_{k}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}{N}_{k}h-\delta_{k}g^{P}_{k}$;
$\rho_{k}=0$ {Equality constrained step}
4: if $\|{s}_{k}\|_{2}\leq\Delta_{k}$ then
5: $\rho_{k}=(f({x}_{k})-f({x}_{k}+{s}_{k}))/(q(0)-q({s}_{k}))$;
$\|{s}_{k}\|=\|{s}_{k}\|_{2}$
6: end if
7: if $\rho_{k}\leq c_{1}$ then
8:
$\hat{R}_{2}^{\top}\hat{R}_{2}=\bigg{[}\begin{smallmatrix}{S}_{k}^{\top}{S}_{k}&{S}_{k}^{\top}{Z}_{k}\\\
{Z}_{k}^{\top}{S}_{k}&{Z}_{k}^{\top}{Z}_{k}\end{smallmatrix}\bigg{]}$
{Cholesky factorization}
9:
$\hat{P}_{2}\Lambda_{2}\hat{P}_{2}^{\top}=\hat{R}_{2}{N}_{k}\hat{R}_{2}^{\top}$
{Eigendecomposition}
10:
${u}_{k}=\hat{P}_{2}^{\top}\hat{R}_{2}^{-\top}\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}^{\top}{g}_{k}$
11: $\xi_{k}=(\|{g}_{k}^{P}\|_{2}^{2}-\|{u}_{k}\|_{2}^{2})^{\frac{1}{2}}$
12: while $\rho_{k}\leq c_{1}$ do
13: Set $v_{2}$ from (28) using ${u}_{k}$, $\Lambda_{2}$
14: Set $\beta$ from (29) using $\xi_{k}=\|U_{3}^{\top}{g}_{k}\|_{2}$
15:
${s}_{k}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}\hat{R}_{2}^{-1}\hat{P}_{2}(v_{2}-\beta{u}_{k})+\beta{g}_{k}^{P}$;
$\rho_{k}=0$
16: if $0<(f({x}_{k})-f({x}_{k}+{s}_{k}))$ then
17: $\rho_{k}=(f({x}_{k})-f({x}_{k}+{s}_{k}))/(q(0)-q({s}_{k}))$
18: end if
19: if $\rho_{k}\leq c_{2}$ then
20: $\Delta_{k}=\min(c_{3}\|{s}_{k}\|_{U},c_{4}\Delta_{k})$
21: end if
22: end while
23: $\|{s}_{k}\|=\|{s}_{k}\|_{U}$
24: end if
25: $x_{k+1}={x}_{k}+{s}_{k}$ {Accept step}
26: if $c_{5}\Delta_{k}\leq\|{s}_{k}\|$ and $c_{6}\leq\rho_{k}$ then
27: $\Delta_{k}=c_{7}\Delta_{k}$
28: end if
29: ${g}_{k+1}=\nabla f({x}_{k+1})$,
${g}_{k+1}^{P}=\text{compProj}(A,{g}_{k+1},\texttt{P})$,
${z}_{k}={g}_{k+1}^{P}-{g}_{k}^{P}$, ${y}_{k}={g}_{k+1}-{g}_{k}$,
${S}_{k+1},{Z}_{k+1},{D}_{k+1},{T}_{k+1},{Z}_{k+1}^{\top}{Z}_{k+1},{S}_{k+1}^{\top}{S}_{k+1},{S}_{k+1}^{\top}{Z}_{k+1}$
from (25), (26) and (31);
$\delta_{k+1}={z}_{k}^{\top}{s}_{k}/{y}_{k}^{\top}{y}_{k}$
30: Update ${N}_{k}$ from (13); $k=k+1$
31: end while
## 9 Numerical experiments
The numerical experiments are carried out in MATLAB 2016a on a MacBook Pro
with a 2.6 GHz Intel Core i7 and 32 GB of memory. For comparisons, we use the
implementations of Algorithms 1 and 2 from [6], which we label TR1 and TR2.
All codes are publicly available at:
https://github.com/johannesbrust/LTR_LECx
For TR1, TR2 we use the modified stopping criterion
$\|P{g}_{k}\|_{\infty}\leq\epsilon$ in place of
$\|P{g}_{k}\|_{2}/\text{max}(1,x_{k})\leq\epsilon$ in order to compare
consistently across solvers. Unless otherwise specified, the default
parameters of these two algorithms are used. We use the following names for
our proposed algorithms:
TR1H: Alg. 1 with representation (22) and Householder QR
TR1L: Alg. 1 with representation (22) and preconditioned LSQR
TR2H: Alg. 2 with representation (13) and Householder QR
TR2L: Alg. 2 with representation (13) and preconditioned LSQR
Note that TR1 and TR2 were developed for low-dimensional linear equality
constraints. In addition, we include IPOPT [30] with an L-BFGS quasi-Newton
matrix (we use a precompiled Mex file with IPOPT 3.12.12 that includes MUMPS
and MA57 libraries). We note that a commercial state-of-the-art quasi-Newton
trust-region solver that uses a projected conjugate gradient solver is
implemented in the KNITRO-INTERIOR/CG [9, Algorithm 3.2]. For the freely
available IPOPT we specify the limited-memory BFGS option
hessian_approximation='limited-memory' with tol=1e-5. (The parameter tol is
used by IPOPT to ensure that the (scaled) projected gradient in the infinity
norm and the constraint violation are below the specified threshold. The
default value is tol=1e-8.) All other parameters in IPOPT are at their
default values unless otherwise specified. The parameters in TR1{H,L} and
TR2{H,L} are set to $c_{1}$ (as machine epsilon), $c_{2}=0.75$, $c_{3}=0.5$,
$c_{4}=0.25$, $c_{5}=0.8$, $c_{6}=0.25$, $c_{7}=2$, and $i_{\text{max}}=10$.
The limited-memory parameter of all compared TR solvers is set to $l=5$
(IPOPT's default is $l=6$). Because the proposed methods are applicable to problems
with a large number of constraints, problems with large dimensions such as
$m\geq 10^{4}$, $n\geq 10^{5}$ are included. Throughout the experiments,
$A\in\mathbb{R}^{m\times n}$ with $m<n$.
To initialize the algorithm, we distinguish two main cases. If $x_{0}$ is not
available, it is computed as the minimum-norm solution
$x_{0}=\operatorname{argmin}_{x}\|x\|_{2}\ \text{s.t.}\ Ax=b$
(e.g., $x_{0}=A^{\top}(AA^{\top})^{-1}b$ when $A$ has full row rank). If $\hat{x}_{0}$ is
provided but is infeasible, the initial vector can be computed from
$p_{0}=\operatorname{argmin}_{p}\|p\|_{2}\ \text{s.t.}\ Ap=b-A\hat{x}_{0}$ and
$x_{0}=\hat{x}_{0}+p_{0}$. To compute the initial vectors $s_{0}=x_{1}-x_{0}$,
$z_{0}=Pg_{1}-Pg_{0}$, and $y_{0}=g_{1}-g_{0}$ we determine an initial $x_{1}$
value also. Suppose that at $k=0$, all of ${x}_{k}$, ${g}_{k}=\nabla
f({x}_{k})$ and ${g}_{k}^{P}=P{g}_{k}$ are known. An initialization for
${s}_{k}$, ${z}_{k}$ and ${y}_{k}$ at $k=0$ is the following:
Init. 1:
---
1. | Backtracking line-search: ${x}_{k+1}={x}_{k}-\alpha{g}_{k}^{P}/\|{g}_{k}^{P}\|_{2}$ (cf. [27, Alg. 3.1])
2. | ${g}_{k+1}=\nabla f({x}_{k+1})$, ${g}_{k+1}^{P}=\text{compProj}(A,{g}_{k+1},\texttt{P})$
3. | ${s}_{k}={x}_{k+1}-{x}_{k}$
4. | ${z}_{k}={g}_{k+1}^{P}-{g}_{k}^{P}$
5. | ${y}_{k}={g}_{k+1}-{g}_{k}$
Once $s_{0}$, $z_{0}$ and $y_{0}$ have been initialized (with initial radius
$\Delta_{0}=\|s_{0}\|_{2}$), all other updates are done automatically within
the trust-region strategy.
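As a sketch of the two feasibility formulas above, the following Python/NumPy snippet computes the minimum-norm feasible point and the nullspace projection for a full-row-rank $A$; in the implementations these operations are instead carried out with SPQR or preconditioned LSQR (see Appendix B).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 10
A = rng.standard_normal((m, n))     # full row rank with high probability
b = rng.standard_normal(m)

# Minimum-norm feasible point: x0 = A^T (A A^T)^{-1} b
x0 = A.T @ np.linalg.solve(A @ A.T, b)

# Nullspace projection: P g = g - A^T (A A^T)^{-1} A g
g = rng.standard_normal(n)
Pg = g - A.T @ np.linalg.solve(A @ A.T, A @ g)
```

Both computations share the same $AA^{\top}$ solve, which is the dominant cost when $A$ is large.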
The outcomes from the subsequent Experiments I–III are summarized in Figures
1–3 as performance profiles (Dolan and Moré [15], extended in [25] and often
used to compare the effectiveness of various solvers). Detailed information
for each problem instance is in Tables 3–5. Relative performances are
displayed in terms of iterations and computation times. The performance metric
$\rho_{s}(\tau)$ on $n_{p}$ test problems is given by
$\rho_{s}(\tau)=\frac{\text{card}\left\{p:\pi_{p,s}\leq\tau\right\}}{n_{p}}\quad\text{and}\quad\pi_{p,s}=\frac{t_{p,s}}{\min_{1\leq i\leq S,\ i\neq s}t_{p,i}},$
where $t_{p,s}$ is the “output” (i.e., iterations or time) of “solver” $s$ on
problem $p$, and $S$ denotes the total number of solvers in a given
comparison. This metric measures the fraction of problems on which a given
solver is within a factor $\tau$ of the best competing result. Extended
performance profiles are the same as the
classical ones but include the part of the domain where $\tau\leq 1$. In the
profiles we include a dashed vertical grey line to indicate $\tau=1$. We note
that although the iteration numbers are recorded differently for each solver,
they correspond approximately to the number of KKT systems solved.
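A compact Python sketch of this metric, assuming a timing table with one row per problem and one column per solver (and with np.inf marking a failure), is:

```python
import numpy as np

def perf_profile(T, taus):
    """Performance-profile curves rho_s(tau) from a table T
    (rows = problems, columns = solvers; np.inf marks a failure).
    The ratio pi_{p,s} uses the best *other* solver, as in the
    extended profiles quoted in the text."""
    n_p, S = T.shape
    rho = np.zeros((len(taus), S))
    for s in range(S):
        others = np.delete(T, s, axis=1)
        best_other = others.min(axis=1)         # min over i != s
        pi = T[:, s] / best_other               # pi_{p,s}
        for k, tau in enumerate(taus):
            rho[k, s] = np.count_nonzero(pi <= tau) / n_p
    return rho
```

Each curve is then plotted against $\tau$ (including $\tau\leq 1$ for the extended profiles).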
Overall, we observe that the number of iterations used by the respective
solvers is relatively similar across different problems. However, the
differences in computation times are large. In particular, the RCR
implementations use the least time in almost all problem instances. This is
possible because RCR enables an efficient decoupling of computations with the
constraint matrix $A$ and remaining small terms.
### 9.1 Experiment I
This experiment uses problems with sparse and possibly low-rank
$A\in\mathbb{R}^{m\times n}$. The objective is the Rosenbrock function
$f(x)=\sum_{i=1}^{n/2}(x_{2i}-x_{2i-1})^{2}+(1-x_{2i-1})^{2},$
where $n$ is an even integer. The matrices $A\in\mathbb{R}^{m\times n}$ are
obtained from the SuiteSparse Matrix Collection [13]. Because TR1 and TR2 were
not developed for problems with a large number of constraints, these solvers
are only applied to problems for which $m\leq 2500$. All other solvers were
run on all test problems. Convergence of an algorithm is determined when two
conditions are satisfied:
(32)
$\|P{g}_{k}\|_{\infty}<10^{-5}\quad\text{and}\quad\|A{x}_{k}-b\|_{2}<10^{-7}.$
We summarize the outcomes in Figure 1 and Table 3.
Figure 1: Comparison of the 7 solvers from Experiment I using performance
profiles [15] on 50 test problems from [12]. TR2H and TR1H converge on all
problem instances (100%). TR2L, TR1L and IPOPT converge on 47 problems (94%).
TR2 and TR1 are not applied to 9 large problems. In the right plot, TR2L and
TR1L are the fastest (as seen from their curves being above others), while
TR2H and TR1H are the most robust (as seen from their curves ultimately
reaching the top of the plot). Overall, TR2{H,L} and TR1{H,L} are faster than
the other solvers.
In this experiment we observe that our proposed algorithms (any of TR1{H,L},
TR2{H,L}) perform well in terms of computing time. Both “H” versions of the
proposed algorithms converged to the prescribed tolerances on all problems. On
the other hand, the “L” versions are often the overall fastest, yet they did
not converge on 3 problem instances (beacxc, lp_cre_d, fit2d).
After rerunning the 3 problems for which IPOPT did not converge, we note that
IPOPT did converge to its own (scaled) tolerances on one of these problems
(beacxc), yet the computed solution did not satisfy (32). On the other two
problems (lp_cre_d, fit2d), IPOPT returned a message such as
info.status=$-2$, which is caused by an abort when the “restoration phase” is
called at an almost feasible point.
### 9.2 Experiment II
In a second experiment, we compare the 7 solvers on large problems from the
CUTEst collection [22]. The dimension $n$ is determined by the size of the
corresponding CUTEst problem, while we set $m$ to be about $25\%$ of $n$,
i.e., m=ceil(0.25n). The matrices $A$ are formed as A=sprand(m,n,0.1), with
rng(090317). Convergence is determined by each algorithm internally. For TR1,
TR1H, TR1L, TR2, TR2H, TR2L the conditions $\|P{g}_{k}\|_{\infty}<1\times
10^{-5}$ and $\|A{x}_{k}-b\|_{2}<5\times 10^{-8}$ are explicitly enforced,
while for IPOPT we set options_ipopt.ipopt.tol=1e-5. We use the iteration
limit of $100,000$ for all solvers. The limited-memory parameter is $l=5$ for
all TR solvers and $l=6$ (default) for IPOPT . We summarize the outcomes in
Figure 2 and Table 4.
Figure 2: Comparison of the 7 solvers from Experiment II using performance
profiles on 62 test problems from [22]. TR1L converged on 58 problems. All
other solvers except IPOPT converged on 57 problems. In the left plot, the
iteration numbers for TR1, TR1{H,L}, TR2 and TR2{H,L} are similar, as seen by
the tight clustering of the lines. However, the computational times of TR1 and
TR2 are markedly higher than those of TR1{H,L} and TR2{H,L}, as seen from the
widening gap in the right plot.
### 9.3 Experiment III
In a third experiment we compare the 7 solvers on 31 linear equality
constrained problems from CUTEst. Four of these problems (AUG2D, AUG2DC,
AUG3D, AUG3DC) directly correspond to the problem formulation (1). The
remaining problems have additional bound constraints, which are relaxed in
this experiment. Problems 1–19 in Table 5 are convex and can immediately be
attempted by the solvers (with bounds released). Problems 20–31 are not convex
when the bounds are relaxed, but adding the term
$\frac{\delta}{2}\|x\|_{2}^{2}$ with $\delta=10$ to the objective functions
produced finite solutions for these problems. As in the previous experiment,
convergence is determined by each algorithm internally. For TR1, TR1H, TR1L,
TR2, TR2H, TR2L the conditions $\|P{g}_{k}\|_{\infty}<1\times 10^{-5}$ and
$\|A{x}_{k}-b\|_{2}<5\times 10^{-8}$ are explicitly enforced, while for IPOPT
we set options_ipopt.ipopt.tol=1e-5. We use the iteration limit of $100,000$
for all solvers. The limited-memory parameter is $l=5$ for all TR solvers and
$l=6$ (default) for IPOPT. Since TR1 and TR2 are not designed for large $m$,
they are applied to problems with $m<2500$, with the exception of 3 problems
(BLOWEYA, BLOWEYB, BLOWEYC) that did not terminate within hours using TR1 and
TR2. All other solvers are applied to all problems. The results are in Figure
3 and Table 5.
Figure 3: Comparison of the 7 solvers from Experiment III using performance
profiles on 31 large linear equality constrained test problems from [22]. TR1
and TR2 are applied to 6 problems (they are not practical on the remaining
problems because of their size). TR2H (also TR1H) converged on all 31
instances. TR1L, TR2L, and IPOPT converged on 30 problems. In the ITER plot
the number of iterations is relatively similar across the solvers that
converged. In the TIME plot there is a gap between TR1{H,L},TR2{H,L} and
IPOPT. TR2L can have computational advantages, but appears slightly less
robust than TR2H, as seen from the final staircase in the TIME plot.
## 10 Conclusion
For subproblem (2), this article develops the reduced compact representation
(RCR) of the (1,1) block in the inverse KKT matrix, when the objective Hessian
is approximated by a compact quasi-Newton matrix. The representation is based
on the fact that part of the solution to the KKT system is unaffected when it
is projected onto the nullspace of the constraints. An advantage of the RCR is
that it enables a decoupling of solves with the constraint matrix and
remaining small terms. Moreover, a projected gradient can be used in two
places: once as part of the matrix update, and second as part of the new step.
By effectively handling orthogonal projections, in combination with limited
memory techniques, we can compute search directions efficiently. We apply
the orthogonal projections with a sparse QR factorization or a preconditioned
LSQR iteration, including large and potentially rank-deficient constraints.
The RCRs are implemented in two trust-region algorithms, one of which exploits
the underlying matrix structures in order to compute the search direction by
an analytic formula. The other is based on an $\ell_{2}$ norm and uses the
RCR within a 1D Newton iteration to determine the optimal scalar shift. In
numerical experiments on large problems, our implementations of the RCR often
yield significant improvements in computation time, as a result of the
advantageous structure of the proposed matrices.
Applications of problem (1) often include bounds $\ell\leq x\leq u$. When
second derivatives of the objective function are available, the problem is
best handled by an interior method. Otherwise, a barrier function could be
added to the objective, and the methods here may sometimes be effective on a
sequence of large equality-constrained subproblems.
## Appendix A
Here we describe a simplified expression for the matrix
${C}_{k}^{\top}{G}_{k}{C}_{k}$ from section 4.2. Recall that the L-BFGS
inverse ${B}^{-1}_{k}=\delta_{k}I+{J}_{k}{W}_{k}{J}_{k}^{\top}$ is defined by
${J}_{k}=\begin{bmatrix}{S}_{k}&{Y}_{k}\end{bmatrix},\quad{W}_{k}=\begin{bmatrix}{T}_{k}^{-\top}({D}_{k}+\delta_{k}{Y}_{k}^{\top}{Y}_{k}){T}^{-1}_{k}&-\delta_{k}{T}_{k}^{-\top}\\\
-\delta_{k}{T}^{-1}_{k}&0_{l\times l}\end{bmatrix}.$
First, note that
${C}_{k}\equiv
A{J}_{k}{W}_{k}=\begin{bmatrix}0&A{Y}_{k}\end{bmatrix}{W}_{k}=\begin{bmatrix}-\delta_{k}A{Y}_{k}{T}^{-1}_{k}&0\end{bmatrix}.$
Second, it holds that
${G}^{-1}_{k}\equiv
A{B}^{-1}_{k}A^{\top}=\delta_{k}AA^{\top}+A{J}_{k}{W}_{k}{J}_{k}^{\top}A^{\top}=\delta_{k}AA^{\top}+{C}_{k}\begin{bmatrix}0\\\
(A{Y}_{k})^{\top}\end{bmatrix},$
so that ${G}^{-1}_{k}=\delta_{k}AA^{\top}$, because the last term in the above
expression for ${G}^{-1}_{k}$ vanishes. Multiplying out
${C}_{k}^{\top}{G}_{k}{C}_{k}$, we obtain
${C}_{k}^{\top}{G}_{k}{C}_{k}=\begin{bmatrix}\delta_{k}{T}_{k}^{-\top}{Y}_{k}^{\top}A^{\top}(AA^{\top})^{-1}A{Y}_{k}{T}^{-1}_{k}&0_{l\times
l}\\\ 0_{l\times l}&0_{l\times l}\end{bmatrix}.$
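The simplification ${G}^{-1}_{k}=\delta_{k}AA^{\top}$ can be verified numerically. The following Python/NumPy sketch builds the blocks of ${W}_{k}$ as above, taking ${T}_{k}=\operatorname{triu}({S}_{k}^{\top}{Y}_{k})$ and ${D}_{k}$ its diagonal (the standard compact L-BFGS choice; only $A{S}_{k}=0$ and the zero $(2,2)$ block of ${W}_{k}$ matter for the identity).

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, l = 3, 12, 2
A = rng.standard_normal((m, n))
Y = rng.standard_normal((n, l))
delta = 0.7

# Feasible steps satisfy A S_k = 0: project random columns onto null(A)
S = rng.standard_normal((n, l))
S -= A.T @ np.linalg.solve(A @ A.T, A @ S)

# Compact L-BFGS blocks (T upper triangular, D diagonal of S^T Y)
StY = S.T @ Y
T = np.triu(StY)
D = np.diag(np.diag(StY))
J = np.hstack([S, Y])
Ti = np.linalg.inv(T)
W = np.block([[Ti.T @ (D + delta * Y.T @ Y) @ Ti, -delta * Ti.T],
              [-delta * Ti, np.zeros((l, l))]])

# G^{-1} = A B^{-1} A^T collapses to delta * A A^T because A S_k = 0
Ginv = delta * A @ A.T + A @ J @ W @ J.T @ A.T
err = np.linalg.norm(Ginv - delta * A @ A.T)
```

The residual is at the level of round-off, confirming that the correction term $A{J}_{k}{W}_{k}{J}_{k}^{\top}A^{\top}$ vanishes.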
## Appendix B
This appendix describes how we apply the functions from the SuiteSparse
library [12] in our implementations. We use SuiteSparse version 5.8.1 from
https://github.com/DrTimothyAldenDavis/SuiteSparse/releases.
### B.1: Householder QR projection
The Matlab commands to compute the projection $P{g}_{k}$ using a Householder
QR factorization are listed in Table 1.
Table 1: Matlab commands to use SparseSuite functions for computing
projections $z=Py$ using a Householder QR factorization. % Options
---
opts.Q = 'Householder';
opts.permutation = 'vector';
% QR factorization using SPQR
[Q,~,~,info] = spqr(A’,opts);
rankA = info.rank_A_estimate;
% Projection
ztmp = spqr_qmult(Q,y,0);
zrkA = zeros(rankA,1);
z = [zrkA;ztmp(rankA+1:end)];
z = spqr_qmult(Q,z,1);
### B.2: Preconditioned LSQR projection
The Matlab commands to compute the projection $P{g}_{k}$ using preconditioned
LSQR [28] are listed in Table 2.
Table 2: Matlab commands for computing projections $z=Py$ using preconditioned
LSQR (where $P=I-A^{\top}(AA^{\top})^{-1}A$). If $A$ has full row rank
($\texttt{rankA}=m$), LSQR should need only 1 iteration. Notes: SPQR uses all
of $A^{\top}$ in the QR factorization $A^{\top}P_{\text{msk}}=QR$, where
$P_{\text{msk}}$ is a column permutation of $A^{\top}$ and $R$ is upper
trapezoidal. We store the permutation in the vector maskA. If $A^{\top}$ does
not have full row rank, we use the first rankA columns of
$A^{\top}P_{\text{msk}}$ (the command A(maskA(1:rankA),:)’). If $A$ contains
some relatively dense columns, we should partition
$AP_{\text{prt}}=[\>A_{S}\>A_{D}\>]$ into sparse and dense columns, then use
$A_{S}$ in place of $A$ in the call to spqr. % Options
---
opts.econ = 0;
opts.Q = 'Householder';
opts.permutation = 'vector';
tol = 1e-15;
maxit = m;
% Preconditioner using a triangular
% factor from SPQR
[~,R,maskA,info] = spqr(A’,opts);
rankA = info.rank_A_estimate;
% Projection
x = lsqr(A(maskA(1:rankA),:)',y,...
tol,maxit,R(1:rankA,1:rankA));
z = y - A(maskA(1:rankA),:)'*x(1:rankA,1);
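For reference, this projection can be phrased as a least-squares problem, which is what LSQR solves: $Py=y-A^{\top}x^{*}$ with $x^{*}=\operatorname{argmin}_{x}\|A^{\top}x-y\|_{2}$. A dense NumPy sketch of the same idea (using lstsq in place of preconditioned LSQR; the function name is illustrative) is:

```python
import numpy as np

def proj_lsq(A, y):
    """P y = y - A^T x*, where x* = argmin_x ||A^T x - y||_2.
    This is the least-squares view behind the LSQR-based projection;
    lstsq also handles rank-deficient A via its minimum-norm solution."""
    x, *_ = np.linalg.lstsq(A.T, y, rcond=None)
    return y - A.T @ x
```

The least-squares residual $y-A^{\top}x^{*}$ is orthogonal to the rows of $A$, so the result lies in the nullspace of $A$ and the map is idempotent.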
## Appendix C
This appendix overviews the subproblem solution with the shape-changing norm.
Note that
$U=\begin{bmatrix}Q_{1}&U_{2}&U_{3}\end{bmatrix}\in\mathbb{R}^{n\times n}$
(from section 7) represents an orthogonal matrix, and that the quadratic
function is
$q(s)=s^{\top}{g}_{k}+\frac{1}{2}s^{\top}{B}_{k}s=s^{\top}UU^{\top}{g}_{k}+\frac{1}{2}s^{\top}UU^{\top}{B}_{k}UU^{\top}s.$
We introduce the change of variables
$v^{\top}=\begin{bmatrix}v_{1}^{\top}&v_{2}^{\top}&v_{3}^{\top}\end{bmatrix}\equiv
s^{\top}U$. Moreover, it holds that
$U^{\top}{B}_{k}U=\begin{bmatrix}Q_{1}^{\top}{B}_{k}Q_{1}&Q_{1}^{\top}{B}_{k}U_{2}&Q_{1}^{\top}{B}_{k}U_{3}\\\
U_{2}^{\top}{B}_{k}Q_{1}&(\delta_{k}I+\Lambda_{2})^{-1}&\\\
U_{3}^{\top}{B}_{k}Q_{1}&&\delta_{k}^{-1}I\end{bmatrix}$
(cf. [6, Lemma 2]), and that
$AUU^{\top}s=AUv=\begin{bmatrix}R&0&0\end{bmatrix}\begin{bmatrix}v_{1}\\\
v_{2}\\\ v_{3}\end{bmatrix}=Rv_{1}.$
With the constraint $As=0=AUv$, this implies $v_{1}=0$ (for $R$ nonsingular).
Therefore, the trust-region subproblem defined by the shape-changing norm
decouples into a problem with $v_{2}$ and $v_{3}$ only (once $v_{1}=0$ is
fixed):
$\displaystyle\underset{\tiny\begin{array}[]{c}\|s\|_{U}\leq\Delta_{k}\\\
As=0\end{array}}{\text{ minimize }}q(s)=\bigg{\\{}$
$\displaystyle\underset{\|v_{2}\|_{\infty}\leq\Delta_{k}}{\text{ minimize
}}v_{2}^{\top}U_{2}^{\top}{g}_{k}+\frac{1}{2}v_{2}^{\top}(\delta_{k}I+\Lambda_{2})^{-1}v_{2}$
$\displaystyle+\underset{\|v_{3}\|_{2}\leq\Delta_{k}}{\text{ minimize
}}v_{3}^{\top}U_{3}^{\top}{g}_{k}+\frac{\|v_{3}\|^{2}_{2}}{2\delta_{k}}\bigg{\\}}.$
This reformulated subproblem can be solved analytically, and the
componentwise solution for $v_{2}$ is given in (28). The analytic solution for
$v_{3}$ is $v_{3}=\beta U_{3}^{\top}{g}_{k}$ with $\beta$ from (29). Subsequently, $s$ is
obtained by transforming variables as $s=Uv=U_{2}v_{2}+U_{3}v_{3}$. The
orthonormal matrix $U_{2}$ is computed as
$U_{2}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}\hat{R}_{2}^{-1}\hat{P}_{2}$,
and since $U_{3}U_{3}^{\top}=P-U_{2}U_{2}^{\top}$, the optimal step with the
shape-changing norm is as in (8):
$s_{SC}=U_{2}(v_{2}-\beta U_{2}^{\top}{g}_{k})+\beta P{g}_{k}.$
With ${u}_{k}\equiv U_{2}^{\top}{g}_{k}$, the step is then computed as in
Algorithm 2 (line 15):
$s_{SC}=\begin{bmatrix}{S}_{k}&{Z}_{k}\end{bmatrix}\hat{R}_{2}^{-1}\hat{P}_{2}(v_{2}-\beta{u}_{k})+\beta
P{g}_{k}.$
## Appendix D
### D.1: Detailed Table for Experiment I
Table 3: Experiment I compares 7 solvers on problems from the SuiteSparse
Matrix Collection [13]. Entries with $\texttt{N/A}^{*}$ denote problems to
which TR1 and TR2 were not applied, because they are too large.
$\texttt{NC}^{\dagger}$ means the solver did not converge to tolerances. TR2H
and TR1H converged on all problem instances. Overall, the computational times
of TR2{H,L} and TR1{H,L} were lower by a significant factor compared to the
times of TR1, TR2, and IPOPT. The number of iterations for each solver is
similar across all problems.
Problem $m$/$n$ $\text{rank}(A)$ TR2 TR2H TR2L TR1 TR1H TR1L IPOPT It Sec It
Sec It Sec It Sec It Sec It Sec It Sec beacxc 497/506 449/0.2 73 0.52 25
_0.044_ 25 0.15 419 3.8 25 0.041 25 0.15 $\texttt{NC}^{\dagger}$ NC lp_25fv47
821/1876 820/0.007 60 0.82 60 0.21 60 0.14 62 0.85 62 0.22 62 _0.14_ 61 0.73
lp_agg2 516/758 516/0.01 40 0.21 40 _0.054_ 40 0.052 42 0.21 42 0.056 42 0.055
41 0.22 lp_agg3 516/758 516/0.01 39 0.21 39 0.051 39 _0.051_ 39 0.2 39 0.052
39 0.051 44 0.24 lp_bnl1 643/1586 642/0.005 70 0.57 70 0.14 70 _0.079_ 67 0.6
67 0.14 67 0.078 62 0.59 lp_bnl2 2324/4486 2324/0.001 69 11 69 0.62 69 _0.28_
69 11 69 0.52 69 0.27 67 2.2 lp_cre_a 3516/7248 3428/0.0007 $\texttt{N/A}^{*}$
N/A 83 0.65 83 0.37 $\texttt{N/A}^{*}$ N/A 88 0.71 88 _0.38_ 87 3.3 lp_cre_d
8926/73948 6476/0.0004 $\texttt{N/A}^{*}$ N/A 556 1.2e+02 510 24
$\texttt{N/A}^{*}$ N/A 503 1e+02 552 _25_ $\texttt{NC}^{\dagger}$ NC lp_czprob
929/3562 929/0.003 17 0.27 17 0.059 17 _0.032_ 17 0.25 17 0.049 17 0.028 18
0.34 lp_d6cube 415/6184 404/0.01 35 0.22 35 0.4 35 0.17 36 0.21 36 0.44 36
_0.18_ 38 1.1 lp_degen3 1503/2604 1503/0.006 39 2.2 39 0.25 39 _0.25_ 39 2.3
39 0.27 39 0.24 40 1.5 lp_dfl001 6071/12230 6071/0.0005 $\texttt{N/A}^{*}$ N/A
226 16 231 19 $\texttt{N/A}^{*}$ N/A 226 _16_ 238 20 207 1.2e+02 lp_etamacro
400/816 400/0.008 78 0.27 78 0.12 78 0.085 86 0.29 86 0.13 86 _0.09_ 68 0.44
lp_fffff800 524/1028 524/0.01 $\texttt{NC}^{\dagger}$ NC 61 0.095
$\texttt{NC}^{\dagger}$ NC $\texttt{NC}^{\dagger}$ NC 59 _0.097_
$\texttt{NC}^{\dagger}$ NC 57 0.45 lp_finnis 497/1064 497/0.005 150 0.72 151
0.2 156 0.13 159 0.69 155 0.2 155 _0.14_ 167 1.2 lp_fit2d 25/10524 25/0.5 266
1.3 266 _0.9_ 258 0.88 247 1.4 261 0.91 279 0.99 $\texttt{NC}^{\dagger}$ NC
lp_ganges 1309/1706 1309/0.003 41 1.3 41 0.11 41 0.067 41 1.4 41 0.12 41
_0.073_ 37 0.43 lp_gfrd_pnc 616/1160 616/0.003 $\texttt{NC}^{\dagger}$ NC 54
0.054 54 _0.043_ $\texttt{NC}^{\dagger}$ NC 54 0.052 54 0.042 48 0.36
lp_greenbea 2392/5598 2389/0.002 149 47 149 1.2 149 0.63 157 33 153 1.3 150
_0.72_ 181 6.8 lp_greenbeb 2392/5598 2389/0.002 149 45 149 1.2 149 _0.65_ 157
31 153 1.3 150 0.65 181 6.5 lp_grow22 440/946 440/0.02 79 0.24 79 0.079 79
_0.071_ 79 0.24 79 0.08 79 0.069 65 0.36 lp_ken_07 2426/3602 2426/0.001 34 12
34 0.091 34 0.067 34 7.4 34 0.093 34 _0.07_ 31 0.85 lp_maros 846/1966
846/0.006 74 0.87 74 _0.21_ $\texttt{NC}^{\dagger}$ NC 74 0.86 74 0.21
$\texttt{NC}^{\dagger}$ NC 71 0.92 lp_maros_r7 3136/9408 3136/0.005
$\texttt{N/A}^{*}$ N/A 57 _2.1_ 57 2.2 $\texttt{N/A}^{*}$ N/A 57 2.1 57 2.3 51
25 lp_modszk1 687/1620 686/0.003 71 0.51 71 0.13 71 0.07 71 0.51 71 0.14 71
_0.071_ 70 0.53 lp_osa_30 4350/104374 4350/0.001 $\texttt{N/A}^{*}$ N/A 46 8.5
46 1.9 $\texttt{N/A}^{*}$ N/A 45 8.8 45 _2_ 43 30 lp_osa_60 10280/243246
10280/0.0006 $\texttt{N/A}^{*}$ N/A 47 24 47 5.9 $\texttt{N/A}^{*}$ N/A 44 23
44 _5.9_ 42 1.1e+02 lp_pds_02 2953/7716 2953/0.0007 $\texttt{N/A}^{*}$ N/A 25
0.25 25 _0.14_ $\texttt{N/A}^{*}$ N/A 25 0.24 25 0.099 26 1.3 lp_pds_10
16558/49932 16558/0.0001 $\texttt{N/A}^{*}$ N/A 61 13 61 _8.1_
$\texttt{N/A}^{*}$ N/A 60 13 60 7.9 59 62 lp_perold 625/1506 625/0.007 58 0.39
58 0.16 58 0.084 58 0.38 58 0.16 58 _0.087_ 57 0.59 lp_pilot 1441/4860
1441/0.006 105 13 105 1.3 105 0.83 109 6.2 109 1.3 109 _0.97_ 117 5.8
lp_pilot87 2030/6680 2030/0.006 102 17 102 2.6 102 1.9 104 15 104 2.7 104
_1.9_ 110 15 lp_pilot_we 722/2928 722/0.004 73 0.69 73 0.24 73 _0.11_ 73 0.65
73 0.21 73 0.11 81 1.4 lp_pilotnov 975/2446 975/0.006 77 1.3 77 0.35
$\texttt{NC}^{\dagger}$ NC 77 1.3 77 _0.35_ $\texttt{NC}^{\dagger}$ NC 78 1.3
lp_qap12 3192/8856 3192/0.001 $\texttt{N/A}^{*}$ N/A 27 _3.3_ 27 3.7
$\texttt{N/A}^{*}$ N/A 26 3.1 26 3.6 25 1.4e+02 lp_qap8 912/1632 912/0.005 20
0.42 20 0.15 20 _0.11_ 22 0.31 22 0.15 22 0.1 21 1.9 lp_scfxm1 330/600
330/0.01 45 0.1 45 0.042 45 _0.036_ 44 0.098 44 0.041 44 0.036 44 0.19
lp_scfxm2 660/1200 660/0.007 52 0.42 52 0.079 52 0.056 57 0.43 57 0.094 57
_0.06_ 55 0.46 lp_scfxm3 990/1800 990/0.005 45 0.8 45 0.096 45 0.061 45 0.76
45 0.097 45 _0.062_ 48 0.54 lp_scsd1 77/760 77/0.04 74 0.035 74 0.035 74 0.044
74 0.033 74 _0.034_ 74 0.043 $\texttt{NC}^{\dagger}$ NC lp_scsd6 147/1350
147/0.02 84 0.077 84 0.054 84 0.06 92 0.084 92 _0.06_ 92 0.065 75 0.34
lp_scsd8 397/2750 397/0.008 66 0.21 66 0.07 66 0.066 65 0.21 65 0.069 65
_0.066_ 66 0.54 lp_sctap1 300/660 300/0.009 107 0.25 107 0.088 107 _0.075_ 102
0.24 102 0.081 102 0.07 100 0.45 lp_sctap2 1090/2500 1090/0.003 145 5.5 146
0.43 146 0.18 145 3.6 143 0.46 146 _0.19_ 157 2.3 lp_sctap3 1480/3340
1480/0.002 204 27 205 0.75 201 _0.39_ 199 13 197 0.74 202 0.39 220 4.3
lp_ship04l 402/2166 360/0.007 84 0.25 84 0.12 84 0.077 84 0.27 84 0.12 84
_0.08_ 92 0.84 lp_ship04s 402/1506 360/0.007 74 0.17 74 0.063 74 0.053 74 0.16
74 0.065 74 _0.055_ 71 0.48 lp_stair 356/614 356/0.02 47 0.11 47 0.047 47
_0.046_ 47 0.11 47 0.047 47 0.045 47 0.23 lp_standata 359/1274 359/0.007 78
0.22 78 0.072 78 _0.058_ 79 0.21 79 0.067 79 0.057 80 0.65 lp_standmps
467/1274 467/0.007 52 0.21 52 0.06 52 0.042 52 0.21 52 0.065 52 _0.043_ 58
0.48
In this experiment the degree of difficulty of a problem depends largely on
handling $A$, because the structure of the objective function is the same for
all instances. We observe that our proposed algorithms (any of TR1{H,L},
TR2{H,L}) always use less computation time, often significantly so, except on
two problem instances. On problem lp_d6cube, TR2 used less time than TR2H, as
did TR1 over TR1H; however, the “L” versions were fastest overall on this
problem. On problem lp_scsd1, TR1 used the least time. In these two problems
the number of constraints is not large, so TR1 and TR2 can be expected to do
comparatively well. For all other 48 problems the new methods used the least
time. Both “H” versions converged to the prescribed tolerances on all
problems. The “L” versions, while often the overall fastest, did not converge
on 3 problem instances (beacxc, lp_cre_d, lp_fit2d).
### D.2: Detailed Table for Experiment II
Table 4: Experiment II compares 7 solvers on 61 large problems from the CUTEst
collection [22]. $\texttt{NC}^{\dagger}$ means the solver did not converge to
tolerances. $\texttt{MX}^{\dagger}$ means the iteration limit was reached.
TR1L converged on 58 problems, the largest number of problems amongst the
solvers. TR2H was faster than TR2 on 51 problems, and TR2L was faster than TR2
on 46 problems (the differences are often significant). TR1H was faster than
TR1 on 49 problems and TR1L was faster than TR1 on 41 problems (often
significantly). All of TR1{H,L} and TR2{H,L} were faster than IPOPT.
Problem $m$/$n$ TR2 TR2H TR2L TR1 TR1H TR1L IPOPT It Sec It Sec It Sec It Sec
It Sec It Sec It Sec ARWHEAD 1250/5000 343 1.7e+02 349 19 372 19 264 72 304 16
315 _16_ $\texttt{NC}^{\dagger}$ NC BDQRTIC 1250/5000 181 50 174 8.1 187 9.9
174 31 186 8.9 160 _8.4_ 78 1.2e+02 BOX 2500/10000 240 1.5e+03 280 63 281 79
218 2.1e+02 258 54 208 _58_ $\texttt{NC}^{\dagger}$ NC BROYDN7D 1250/5000 355
20 370 _18_ 367 18 355 20 370 17 381 19 432 6.5e+02 BRYBND 1250/5000 897
1.5e+02 883 45 1273 64 1396 1.2e+02 1177 _60_ 1421 70 1027 1.7e+03 COSINE
2500/10000 $\texttt{NC}^{\dagger}$ NC 5028 1e+03 4527 1.2e+03 4755 2e+03 7318
1.6e+03 3292 _910_ $\texttt{NC}^{\dagger}$ NC CRAGGLVY 1250/5000 373 63 371 18
369 _19_ 400 45 390 20 397 21 205 3.4e+02 CURLY10 2500/10000 1563 7.2e+02 2498
5.3e+02 1496 _429_ 1512 4.5e+02 1549 347 1759 4.9e+02 1775 3e+04 CURLY20
2500/10000 1951 9.5e+02 2015 455 1993 _552_ 3149 9.5e+02 4110 8.7e+02 3836
1.1e+03 $\texttt{NC}^{\dagger}$ NC CURLY30 2500/10000 4457 2.8e+03 4210 _952_
3669 1e+03 2744 783 6940 1.6e+03 6145 1.7e+03 $\texttt{NC}^{\dagger}$ NC
DIXMAANA 750/3000 10 0.53 10 0.43 10 0.51 10 0.5 10 0.47 10 _0.46_ 13 8.3
DIXMAANB 750/3000 9 0.55 9 0.5 9 _0.5_ 9 0.59 9 0.5 9 0.55 11 8.1 DIXMAANC
750/3000 12 0.73 12 0.67 12 0.72 12 _0.65_ 12 0.63 12 0.69 14 10 DIXMAAND
750/3000 23 1.7 23 1.1 23 1.2 22 _0.93_ 22 0.83 22 1 27 16 DIXMAANE 750/3000
35 1.1 35 1 35 1.1 35 0.83 35 _0.88_ 35 1.1 41 18 DIXMAANF 750/3000 183 5.2
194 3.9 194 5.7 194 6.6 195 _4.9_ 203 6.7 297 1.3e+02 DIXMAANG 750/3000 434 19
397 8.3 439 12 435 13 408 _9.8_ 404 11 $\texttt{NC}^{\dagger}$ NC DIXMAANH
750/3000 433 14 470 11 454 13 459 _11_ 421 9.3 443 12 422 1.8e+02 DIXMAANI
750/3000 82 2 82 _1.8_ 82 2.4 82 1.6 82 1.8 82 2.5 103 46 DIXMAANJ 750/3000
1054 41 1506 35 1023 _27_ 1415 42 1490 34 944 24 $\texttt{NC}^{\dagger}$ NC
DIXMAANK 750/3000 2971 1e+02 3026 65 3082 71 2831 80 2870 61 2691 _62_
$\texttt{NC}^{\dagger}$ NC DIXMAANL 750/3000 1461 38 3198 69 2609 60 2690 66
2728 _58_ 2597 59 $\texttt{NC}^{\dagger}$ NC DIXON3DQ 2500/10000 51 17 51 _12_
51 17 51 17 51 12 51 16 56 6.7e+02 DQDRTIC 1250/5000 13 1.7 7 0.85 7 0.77 13
1.5 7 0.75 7 _0.75_ 7 13 DQRTIC 1250/5000 63 _4.6_ 107 6.7 107 7 63 4.5 107
5.7 107 6.1 93 1.5e+02 EDENSCH 500/2000 32 _0.33_ 32 0.4 32 0.38 32 0.32 32
0.39 32 0.36 34 5 EG2 250/1000 423 2.2 504 _1.3_ 439 1.3 514 5.2 624 2 502 1.9
908 23 ENGVAL1 1250/5000 31 2.7 31 1.8 31 2 31 2.6 31 _1.9_ 31 2 38 61
EXTROSNB 250/1000 148 _0.44_ 148 0.45 148 0.49 145 0.53 145 0.46 145 0.39 129
3 FLETCHCR 250/1000 150 _0.4_ 150 0.46 150 0.51 150 0.37 150 0.42 150 0.41 137
3.1 FMINSRF2 1407/5625 122 10 122 _9.3_ 122 10 122 10 122 7.8 122 9.6 167
4e+02 FREUROTH 1250/5000 287 1e+02 247 12 235 _13_ 274 37 255 13 234 13 202
3.2e+02 GENHUMPS 1250/5000 2215 1.2e+02 1762 99 1829 93 2215 1.3e+02 1762 98
1829 _95_ $\texttt{NC}^{\dagger}$ NC LIARWHD 1250/5000 3854 1.6e+03 3998
4.4e+02 2726 _196_ 2638 1.2e+03 2408 2.6e+02 1591 128 $\texttt{NC}^{\dagger}$
NC MOREBV 1250/5000 151 23 151 22 151 20 151 19 151 _16_ 151 16
$\texttt{NC}^{\dagger}$ NC MSQRTALS 256/1024 $\texttt{MX}^{\dagger}$ MX
$\texttt{MX}^{\dagger}$ MX $\texttt{MX}^{\dagger}$ MX $\texttt{MX}^{\dagger}$
MX 78461 6.6e+02 99724 _620_ $\texttt{NC}^{\dagger}$ NC MSQRTBLS 256/1024
$\texttt{MX}^{\dagger}$ MX $\texttt{MX}^{\dagger}$ MX $\texttt{MX}^{\dagger}$
MX $\texttt{MX}^{\dagger}$ MX $\texttt{MX}^{\dagger}$ MX
$\texttt{MX}^{\dagger}$ MX $\texttt{NC}^{\dagger}$ NC NCB20 1253/5010 345 47
348 18 349 18 314 33 317 16 307 _16_ 252 3.9e+02 NONCVXU2 1250/5000 185 20 185
9 185 9.5 186 14 187 _9.2_ 186 9.4 120 1.9e+02 NONCVXUN 1250/5000 282 33 283
14 282 _14_ 360 31 354 17 370 19 199 3.1e+02 NONDIA 1250/5000 1612 6.9e+02
1600 88 1734 _88_ 2764 7.3e+02 1407 78 1907 98 $\texttt{NC}^{\dagger}$ NC
NONDQUAR 1250/5000 897 4.3e+02 865 47 811 42 816 2.1e+02 876 47 857 _44_ 332
8.1e+02 PENALTY1 250/1000 8 0.051 2 0.018 2 _0.017_ 8 0.056 2 0.019 2 0.016 1
0.043 POWELLSG 1250/5000 88 6.1 88 4.4 88 4.6 88 5.6 88 _4.4_ 88 4.6 99
1.5e+02 POWER 2500/10000 51 17 $\texttt{MX}^{\dagger}$ MX
$\texttt{MX}^{\dagger}$ MX 51 _17_ $\texttt{MX}^{\dagger}$ MX
$\texttt{MX}^{\dagger}$ MX 62 6.9e+02 QUARTC 1250/5000 70 _4.9_ 104 5.3 104
5.4 70 4.5 104 5.1 104 5.6 89 1.4e+02 SCHMVETT 1250/5000
$\texttt{MX}^{\dagger}$ MX 70882 3.9e+03 $\texttt{MX}^{\dagger}$ MX
$\texttt{NC}^{\dagger}$ NC $\texttt{MX}^{\dagger}$ MX 96572 5.1e+03
$\texttt{NC}^{\dagger}$ NC SINQUAD 1250/5000 236 56 282 15 214 11 247 32 216
_12_ 277 14 116 1.8e+02 SPARSQUR 2500/10000 35 13 43 _10_ 43 14 35 13 43 9.9
43 14 31 3.5e+02 SPMSRTLS 1250/4999 2222 2.7e+02 1791 95 2377 1.2e+02 2792
2e+02 2475 1.3e+02 1834 _98_ $\texttt{NC}^{\dagger}$ NC SROSENBR 1250/5000
5561 4.1e+02 8211 4.3e+02 4814 235 6400 4.3e+02 6747 3.6e+02 5280 _270_
$\texttt{NC}^{\dagger}$ NC TOINTGSS 1250/5000 39 3.1 39 2.2 39 2.3 39 3 39 2.3
39 _2.3_ 49 76 TQUARTIC 1250/5000 2069 8.7e+02 1155 64 1508 _78_ 1494 3.7e+02
1867 1e+02 1871 98 $\texttt{NC}^{\dagger}$ NC TRIDIA 1250/5000 147 9 82 4.2 82
4.3 147 9.1 82 _4.2_ 82 4.4 66 1e+02 WOODS 1000/4000 1192 45 1157 38 1077 27
1236 44 1167 37 1132 _29_ 971 1.3e+03 SPARSINE 1250/5000 1504 1.7e+02 1476 79
1464 74 2188 1.6e+02 1407 _74_ 3999 2e+02 2294 5.6e+03 TESTQUAD 1250/5000
10988 623 14186 7.3e+02 13357 6.5e+02 10988 _643_ 14186 7.3e+02 13357 6.6e+02
$\texttt{NC}^{\dagger}$ NC JIMACK 888/3549 $\texttt{NC}^{\dagger}$ NC
$\texttt{NC}^{\dagger}$ NC $\texttt{NC}^{\dagger}$ NC $\texttt{NC}^{\dagger}$
NC $\texttt{NC}^{\dagger}$ NC $\texttt{NC}^{\dagger}$ NC
$\texttt{NC}^{\dagger}$ NC NCB20B 1250/5000 57 4.1 56 3.2 56 3.2 57 4.2 56 3.1
56 _3.2_ 47 73 EIGENALS 638/2550 202 _3.2_ 204 3.7 203 4.1 202 3 204 3.6 203 4
161 43 EIGENBLS 638/2550 28 0.59 28 0.65 28 0.6 28 0.51 28 _0.52_ 28 0.62 28
7.7
In Experiment II, the objective function of each problem is defined by a
large CUTEst problem, whereas the corresponding $A$ matrices are not meant to
be overly challenging. We observe that the proposed algorithms (those with
“{H,L}”) improve the computation times on the majority of problems. For the
10 instances in which TR2 used less time than TR2H, the differences are
relatively small; an exception is DIXMAANL, where the difference amounts to
31s. For the other 51 problems, however, TR2H yielded often significant
improvements in computation time; for instance, on LIARWHD the difference
amounts to 1182s (more than 19 minutes). These observations carry over to the
comparison of TR1 with TR1H. The “L” versions exhibit similar outcomes to the
“H” ones, with occasional increases in computation times. Overall, TR1L
converged to the specified tolerances on the largest number of problems. The
problems reported as “NC” in IPOPT’s column correspond to status flags other
than “0, 1, 2” $\equiv$ “solved, solved to acceptable level, infeasible
problem detected”.
### D.3: Detailed Table for Experiment III
In Experiment III, TR2H and TR1H converged on all 31 problems, while all other
solvers (besides TR1 and TR2) converged on all problems except one: CVXQP2.
TR2H was the fastest on 10 problems (the best outcome among the solvers),
while TR1L was the fastest on 9 problems (the second best outcome). Problems
A0ESDNDL and A0ESINDL appear noteworthy: they contain dense columns
(satisfying the condition $\textnormal{nnz}(A_{:,j})\big{/}m>0.1$). Sparse QR
factorization is expensive because of fill-in. However, the iterative method
LSQR (with the preconditioning technique from section 5.2) can overcome these
difficulties.
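The point about dense columns can be illustrated with SciPy's LSQR: an iterative solver touches $A$ only through matrix–vector products, so a dense column causes none of the fill-in that hurts sparse QR. The sketch below uses a synthetic matrix with one fully dense column and plain LSQR, without the section 5.2 preconditioner:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
m, n = 500, 40

# Sparse block (identity on top keeps the system well conditioned) plus one
# appended fully dense column, so nnz(A[:, -1])/m = 1 > 0.1.
top = sp.eye(n - 1, format="csc")
bottom = sp.random(m - (n - 1), n - 1, density=0.05, random_state=1, format="csc")
dense_col = sp.csc_matrix(rng.standard_normal((m, 1)))
A = sp.hstack([sp.vstack([top, bottom], format="csc"), dense_col], format="csc")

b = rng.standard_normal(m)

# LSQR solves min ||Ax - b||_2 via products with A and A^T only; the dense
# column therefore causes no fill-in.
x = lsqr(A, b, atol=1e-10, btol=1e-10, iter_lim=5000)[0]

# Least-squares optimality condition: A^T (Ax - b) ~ 0.
grad = A.T @ (A @ x - b)
assert np.linalg.norm(grad) < 1e-4
```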
Table 5: Experiment III compares 7 solvers on 31 linear equality constrained
problems from the CUTEst collection [22]. $\texttt{NC}^{\dagger}$ means the
solver did not converge to tolerances. N/A means that TR1 and TR2 were not
applied because the problem size rendered them not practical. TR2H and TR1H
converged on all 31 problems. TR2L, TR1L, and IPOPT converged on 30 problems
(the exception is CVXQP2). The fastest and second fastest solvers for each
problem are highlighted in bold and italic fonts, respectively. Overall, TR2H
was fastest on 12 problems (the best outcome on this experiment), while TR1L
was fastest on 11 problems (the second best outcome). Problems A0ESDNDL and
A0ESINDL contain dense columns in $A$, and the sparse QR factorization takes
additional time as seen from the entries of TR2H and TR1H. However,
preconditioned LSQR can overcome this difficulty, as observed in the entries
for TR2L and TR1L for these problem instances.
| Problem | $m$/$n$ | TR2 (It / Sec) | TR2H (It / Sec) | TR2L (It / Sec) | TR1 (It / Sec) | TR1H (It / Sec) | TR1L (It / Sec) | IPOPT (It / Sec) |
|---|---|---|---|---|---|---|---|---|
| AUG2D | 10000/20200 | $\texttt{N/A}^{*}$ | 7 / 0.26 | 7 / _0.15_ | $\texttt{N/A}^{*}$ | 7 / 0.24 | 7 / 0.13 | 12 / 1.4 |
| AUG2DC | 10000/20200 | $\texttt{N/A}^{*}$ | 2 / 0.11 | 2 / _0.067_ | $\texttt{N/A}^{*}$ | 2 / 0.1 | 2 / 0.067 | 1 / 0.15 |
| AUG2DCQP | 10000/20200 | $\texttt{N/A}^{*}$ | 2 / 0.11 | 2 / _0.072_ | $\texttt{N/A}^{*}$ | 2 / 0.11 | 2 / 0.07 | 1 / 0.16 |
| AUG2DQP | 10000/20200 | $\texttt{N/A}^{*}$ | 7 / 0.23 | 7 / 0.13 | $\texttt{N/A}^{*}$ | 7 / 0.24 | 7 / _0.13_ | 12 / 1.4 |
| AUG3D | 8000/27543 | $\texttt{N/A}^{*}$ | 10 / 0.68 | 10 / _0.52_ | $\texttt{N/A}^{*}$ | 10 / 0.6 | 10 / 0.51 | 11 / 2.6 |
| AUG3DC | 8000/27543 | $\texttt{N/A}^{*}$ | 2 / 0.3 | 2 / _0.28_ | $\texttt{N/A}^{*}$ | 2 / 0.3 | 2 / 0.26 | 1 / 0.31 |
| AUG3DCQP | 8000/27543 | $\texttt{N/A}^{*}$ | 2 / 0.3 | 2 / _0.27_ | $\texttt{N/A}^{*}$ | 2 / 0.33 | 2 / 0.26 | 1 / 0.33 |
| AUG3DQP | 8000/27543 | $\texttt{N/A}^{*}$ | 10 / 0.74 | 10 / _0.55_ | $\texttt{N/A}^{*}$ | 10 / 0.64 | 10 / 0.5 | 11 / 2.6 |
| CVXQP1 | 5000/10000 | $\texttt{N/A}^{*}$ | 827 / 7.8 | 805 / 3.8 | $\texttt{N/A}^{*}$ | 827 / 7.3 | 805 / _3.8_ | 740 / 51 |
| CVXQP2 | 2500/10000 | $\texttt{N/A}^{*}$ | 39596 / 1.5e+02 | $\texttt{NC}^{\dagger}$ | $\texttt{N/A}^{*}$ | 47572 / 1.8e+02 | $\texttt{NC}^{\dagger}$ | $\texttt{NC}^{\dagger}$ |
| CVXQP3 | 7500/10000 | $\texttt{N/A}^{*}$ | 169 / 2.8 | 169 / _1.4_ | $\texttt{N/A}^{*}$ | 169 / 2.4 | 169 / 1.4 | 118 / 8.9 |
| STCQP1 | 4095/8193 | $\texttt{N/A}^{*}$ | 88 / 0.15 | 88 / 0.42 | $\texttt{N/A}^{*}$ | 88 / _0.18_ | 88 / 0.36 | 75 / 6.8e+02 |
| STCQP2 | 4095/8193 | $\texttt{N/A}^{*}$ | 142 / 0.25 | 142 / 0.8 | $\texttt{N/A}^{*}$ | 144 / _0.28_ | 144 / 0.72 | 136 / 4.8 |
| DTOC1L | 3996/5998 | $\texttt{N/A}^{*}$ | 13 / 0.073 | 13 / 0.13 | $\texttt{N/A}^{*}$ | 13 / _0.075_ | 13 / 0.14 | 16 / 0.41 |
| DTOC3 | 2998/4499 | $\texttt{N/A}^{*}$ | 5 / 0.025 | 5 / 0.059 | $\texttt{N/A}^{*}$ | 5 / _0.03_ | 5 / 0.033 | 4 / 0.09 |
| PORTSQP | 1/100000 | 2 / 0.09 | 2 / 0.064 | 2 / 0.067 | 2 / _0.062_ | 2 / 0.059 | 2 / 0.062 | 1 / 0.42 |
| HUES-MOD | 2/5000 | 1 / 0.0028 | 1 / 0.0018 | 1 / 0.0027 | 1 / 0.0026 | 1 / _0.0018_ | 1 / 0.0026 | 1 / 0.024 |
| HUESTIS | 2/5000 | 2 / 0.0073 | 2 / 0.0042 | 2 / 0.011 | 2 / 0.0061 | 2 / _0.0047_ | 2 / 0.0094 | 2 / 0.072 |
| A0ESDNDL | 15002/45006 | $\texttt{N/A}^{*}$ | 5 / 69 | 5 / _0.13_ | $\texttt{N/A}^{*}$ | 5 / 71 | 5 / 0.12 | 6 / 1.8 |
| A0ESINDL | 15002/45006 | $\texttt{N/A}^{*}$ | 5 / 73 | 5 / _0.12_ | $\texttt{N/A}^{*}$ | 5 / 70 | 5 / 0.11 | 6 / 1.8 |
| PORTSNQP | 2/100000 | $\texttt{NC}^{\dagger}$ | 2 / 0.092 | 2 / 0.11 | 14 / 0.47 | 2 / _0.095_ | 2 / 0.1 | 2 / 0.88 |
| BLOWEYA | 2002/4002 | $\texttt{N/A}^{*}$ | 2 / 0.011 | 2 / 0.031 | $\texttt{N/A}^{*}$ | 2 / _0.015_ | 2 / 0.021 | 2 / 0.082 |
| BLOWEYB | 2002/4002 | $\texttt{N/A}^{*}$ | 2 / 0.015 | 2 / 0.019 | $\texttt{N/A}^{*}$ | 2 / _0.016_ | 2 / 0.019 | 2 / 0.082 |
| BLOWEYC | 2002/4002 | $\texttt{N/A}^{*}$ | 2 / 0.015 | 2 / 0.017 | $\texttt{N/A}^{*}$ | 2 / _0.015_ | 2 / 0.021 | 2 / 0.15 |
| CONT5-QP | 40200/40601 | $\texttt{N/A}^{*}$ | 2 / _0.51_ | 2 / 0.79 | $\texttt{N/A}^{*}$ | 2 / 0.49 | 2 / 0.8 | 2 / 1.3 |
| DTOC1L | 3996/5998 | $\texttt{N/A}^{*}$ | 5 / 0.03 | 5 / 0.09 | $\texttt{N/A}^{*}$ | 5 / _0.043_ | 5 / 0.045 | 4 / 0.12 |
| FERRISDC | 210/2200 | 2 / 0.084 | 2 / 0.083 | 2 / 0.077 | 2 / _0.076_ | 2 / 0.078 | 2 / 0.079 | 0 / 0.021 |
| GOULDQP2 | 9999/19999 | $\texttt{N/A}^{*}$ | 2 / 0.038 | 2 / 0.025 | $\texttt{N/A}^{*}$ | 2 / 0.038 | 2 / _0.026_ | 2 / 0.2 |
| GOULDQP3 | 9999/19999 | $\texttt{N/A}^{*}$ | 6 / 0.076 | 6 / _0.054_ | $\texttt{N/A}^{*}$ | 6 / 0.077 | 6 / 0.053 | 7 / 0.69 |
| LINCONT | 419/1257 | 5 / 0.058 | 5 / _0.02_ | 5 / 0.031 | 5 / 0.05 | 5 / 0.019 | 5 / 0.03 | 5 / 0.055 |
| SOSQP2 | 2501/5000 | $\texttt{N/A}^{*}$ | 3 / 0.017 | 3 / 0.04 | $\texttt{N/A}^{*}$ | 3 / 0.022 | 3 / _0.019_ | 4 / 0.11 |
## Acknowledgments
We would like to acknowledge the valuable discussions initiated by Ariadna
Cairo Baza and spurred by the 9th ICIAM conference at the Universidad de
Valencia. R. Marcia’s research was partially supported by NSF Grant IIS
1741490. We thank two referees for their extremely detailed and helpful
comments.
## References
* [1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning, 3 (2011), pp. 1–122, https://doi.org/10.1561/2200000016.
* [2] C. G. Broyden, The convergence of a class of double-rank minimization algorithms 1. General considerations, IMA J. Applied Mathematics, 6 (1970), pp. 76–90, https://doi.org/10.1093/imamat/6.1.76.
* [3] J. Brust, O. Burdakov, J. Erway, and R. Marcia, A dense initialization for limited-memory quasi-Newton methods, Comput. Optim. Appl., 74 (2019), pp. 121–142.
* [4] J. J. Brust, Large-Scale Quasi-Newton Trust-Region Methods: High-Accuracy Solvers, Dense Initializations, and Extensions, PhD thesis, University of California, Merced, 2018. https://escholarship.org/uc/item/2bv922qk.
* [5] J. J. Brust, J. B. Erway, and R. F. Marcia, On solving L-SR1 trust-region subproblems, Comput. Optim. Appl., 66 (2017), pp. 245–266.
* [6] J. J. Brust, R. F. Marcia, and C. G. Petra, Large-scale quasi-Newton trust-region methods with low-dimensional linear equality constraints, Comput. Optim. Appl., (2019), https://doi.org/10.1007/s10589-019-00127-4.
* [7] O. Burdakov, L. Gong, Y.-X. Yuan, and S. Zikrin, On efficiently combining limited memory and trust-region techniques, Mathematical Programming Computation, 9 (2016), pp. 101–134.
* [8] R. H. Byrd, J. Nocedal, and R. B. Schnabel, Representations of quasi-Newton matrices and their use in limited-memory methods, Math. Program., 63 (1994), pp. 129–156.
* [9] R. H. Byrd, J. Nocedal, and R. A. Waltz, Knitro: An Integrated Package for Nonlinear Optimization, Springer US, Boston, MA, 2006, pp. 35–59, https://doi.org/10.1007/0-387-30065-1_4.
* [10] A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust-Region Methods, SIAM, Philadelphia, PA, 2000.
* [11] T. A. Davis, Algorithm 915, SuiteSparseQR: Multifrontal multithreaded rank-revealing sparse QR factorization, ACM Trans. Math. Softw., 38 (2011), pp. 8:1–22.
* [12] T. A. Davis and Y. Hu, The University of Florida sparse matrix collection, ACM Trans. Math. Softw., 38 (2011), p. 25.
* [13] T. A. Davis, Y. Hu, and S. Kolodziej, SuiteSparse Matrix Collection. https://sparse.tamu.edu/, 2015–present.
* [14] O. DeGuchy, J. B. Erway, and R. F. Marcia, Compact representation of the full Broyden class of quasi-Newton updates, Numer. Linear Algebra Appl., 25 (2018), p. e2186.
* [15] E. Dolan and J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), pp. 201–213.
* [16] R. Fletcher, A new approach to variable metric algorithms, The Computer Journal, 13 (1970), pp. 317–322, https://doi.org/10.1093/comjnl/13.3.317.
* [17] D. C.-L. Fong and M. Saunders, LSMR: An iterative algorithm for least-squares problems, SIAM J. Sci. Comput., 33 (2011), pp. 2950–2971, https://doi.org/10.1137/10079687X.
* [18] A. Fu, J. Zhang, and S. Boyd, Anderson accelerated Douglas–Rachford splitting, SIAM J. Sci. Comput., 42 (2020), pp. A3560–A3583, https://doi.org/10.1137/19M1290097.
* [19] P. E. Gill and W. Murray, Numerical Methods for Constrained Optimization, Academic Press, London, 1974.
* [20] D. Goldfarb, A family of variable-metric methods derived by variational means, Math. Comp., 24 (1970), pp. 23–26, https://doi.org/10.1090/S0025-5718-1970-0258249-6.
* [21] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, The Johns Hopkins University Press, Baltimore, 4th ed., 2013.
* [22] N. I. M. Gould, D. Orban, and P. L. Toint, CUTEr and SifDec: A constrained and unconstrained testing environment, revisited, ACM Trans. Math. Softw., 29 (2003), pp. 373–394.
* [23] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Research Nat. Bur. Standards, 49 (1952), pp. 409–436.
* [24] D. C. Liu and J. Nocedal, On the limited memory BFGS method for large scale optimization, Mathematical Programming, 45 (1989), pp. 503–528.
* [25] A. Mahajan, S. Leyffer, and C. Kirches, Solving mixed-integer nonlinear programs by QP diving, Technical Report ANL/MCS-P2071-0312, Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL, 2012.
* [26] J. Nocedal, Updating quasi-Newton matrices with limited storage, Math. Comput., 35 (1980), pp. 773–782.
* [27] J. Nocedal and S. J. Wright, Numerical Optimization, Springer-Verlag, New York, 2nd ed., 2006.
* [28] C. C. Paige and M. A. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Softw., 8 (1982), pp. 43–71, https://doi.org/10.1145/355984.355989.
* [29] D. F. Shanno, Conditioning of quasi-Newton methods for function minimization, Math. Comp., 24 (1970), pp. 647–656, https://doi.org/10.1090/S0025-5718-1970-0274029-X.
* [30] A. Wächter and L. T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Math. Program., 106 (2006), pp. 25–57.
* [31] H. Zhang and W. W. Hager, A nonmonotone line search technique and its application to unconstrained optimization, SIAM Journal on Optimization, 14 (2004), pp. 1043–1056, https://doi.org/10.1137/S1052623403428208.
* [32] C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization, ACM Trans. Math. Softw., 23 (1997), pp. 550–560, https://doi.org/10.1145/279232.279236.
The submitted manuscript has been created by UChicago Argonne, LLC, Operator
of Argonne National Laboratory (“Argonne”). Argonne, a U.S. Department of
Energy Office of Science laboratory, is operated under Contract No. DE-
AC02-06CH11357. The U.S. Government retains for itself, and others acting on
its behalf, a paid-up nonexclusive, irrevocable worldwide license in said
article to reproduce, prepare derivative works, distribute copies to the
public, and perform publicly and display publicly, by or on behalf of the
Government. The Department of Energy will provide public access to these
results of federally sponsored research in accordance with the DOE Public
Access Plan. http://energy.gov/downloads/doe-public-access-plan
# C-for-Metal: High Performance SIMD Programming on Intel GPUs
Guei-Yuan Lueh, Kaiyu Chen, Gang Chen, Joel Fuentes, Wei-Yu Chen, Fangwen Fu,
Hong Jiang, Hongzheng Li, and Daniel Rhee Intel Corporation
Santa Clara, CA, USA
{guei-yuan.lueh, kai.yu.chen, gang.y.chen, joel.fuentes, weiyu.chen,
fangwen.fu,
hong.h.jiang, hongzheng.li<EMAIL_ADDRESS>
###### Abstract
The SIMT execution model is commonly used for general GPU development. CUDA
and OpenCL developers write scalar code that is implicitly parallelized by the
compiler and hardware. On Intel GPUs, however, this abstraction has profound
performance implications as the underlying ISA is SIMD and important hardware
capabilities cannot be fully utilized. To close this performance gap we
introduce C-For-Metal (CM), an explicit SIMD programming framework designed to
deliver close-to-the-metal performance on Intel GPUs. The CM programming
language and its vector/matrix types provide an intuitive interface to exploit
the underlying hardware features, allowing fine-grained register management,
SIMD size control and cross-lane data sharing. Experimental results show that
CM applications from different domains outperform the best-known SIMT-based
OpenCL implementations, achieving up to 2.7x speedup on the latest Intel GPU.
###### Index Terms:
SIMD, SIMT, GPU programming
## I Introduction
Mainstream GPU programming, as exemplified by CUDA [1] and OpenCL [2], employs a
“Single Instruction Multiple Threads” (SIMT) programming model. The CPU host
code in an OpenCL application defines an N-dimensional computation grid where
each index represents an element of execution called a “work-item”. An OpenCL
kernel describes the algorithm that will be executed on GPU for one work-item.
Work-items are grouped together into independent “work-groups” that execute
concurrently. Work-items inside one work-group may communicate through fast
on-chip shared local memory (SLM) and barrier synchronization.
OpenCL’s programming model is a powerful paradigm to express data parallelism,
as developers can write purely scalar code for their kernels without knowing
the details of how the work-items are mapped to the hardware execution units.
This abstraction has profound performance implications, however, as the Intel
GPU architecture (also called Gen) and the underlying instruction set
architecture (ISA) is “Single Instruction Multiple Data” (SIMD). Intel GPUs
feature an expressive instruction set that supports variable SIMD-sizes as
well as powerful regioning capabilities that allow for fast cross-lane data
sharing. An execution unit (EU) on Gen has a fixed number of hardware threads,
and each thread executes SIMD instructions on its dedicated 4KB byte-
addressable register file. The OpenCL compiler is responsible for vectorizing
the kernel into one of the three SIMD sizes (8, 16, 32) for thread dispatch,
and work-items execute the same instructions on one thread in lock-step. SIMD
size selection is thus the most important optimization decision for the
compiler, as it affects thread occupancy, instruction-level parallelism (ILP),
SIMD-lane utilization due to divergence, and register spill.
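The divergence part of this trade-off can be modeled in a few lines of Python. This is a toy cost model, not Gen hardware behavior: with masked execution, a chunk of lanes pays for every branch path that any of its lanes takes, so a wider SIMD size can lower lane utilization. The branch pattern below, uniform within hypothetical 16-item tiles, is chosen purely for illustration:

```python
def simd_utilization(predicates, width):
    """Toy cost model: with masked SIMD execution, a chunk of `width` lanes
    pays full issue cost for every branch path that any of its lanes takes."""
    issued = useful = 0
    for i in range(0, len(predicates), width):
        chunk = predicates[i:i + width]
        for path in (True, False):
            active = chunk.count(path)
            if active:               # this path is executed for the whole chunk
                issued += width      # lane-cycles issued
                useful += active     # lane-cycles doing useful work
    return useful / issued

# Hypothetical branch pattern that is uniform within 16-item tiles.
preds = [(i // 16) % 2 == 0 for i in range(4096)]

assert simd_utilization(preds, 8) == 1.0    # chunks never straddle a tile
assert simd_utilization(preds, 16) == 1.0   # chunks align with tiles exactly
assert simd_utilization(preds, 32) == 0.5   # every chunk runs both paths
```

Under this model, SIMD-32 halves lane utilization on the tiled branch while SIMD-8 and SIMD-16 keep all lanes busy, which is one reason SIMD size selection matters.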
A high-performance program on Gen needs to exploit a thread’s dedicated
register file to cut down memory traffic while avoiding register spill, which
is often fatal for performance. This can be surprisingly difficult to achieve
for OpenCL programs, however, as in order to stay portable the language offers
no mechanism for direct register file control. Register pressure estimate at
the source level is often wildly inaccurate due to the various compiler
optimizations and transformations that must happen to lower OpenCL C into Gen
ISA.
Since under the SIMT model each work-item executes independently, OpenCL
programs also lose control of data sharing among the cooperative items in the
same thread. Furthermore, the SIMT model prevents OpenCL programs from
directly accessing Gen ISA’s powerful regioning mechanisms, which allows one
SIMD lane to access another lane’s data at no additional cost. The
introduction of subgroups in OpenCL 2.0 partially alleviates this gap by
exposing some of the underlying hardware capabilities through builtin
functions, but getting close to the metal performance with OpenCL on Intel
GPUs remains challenging.
This paper presents the C-for-Metal (CM) development framework, an explicit
SIMD programming model designed specifically for coding to the metal on Intel
GPUs. The CM language is an extension to C/C++ that provides an intuitive
interface to express explicit data-parallelism at a high level of abstraction.
At the core of the language are two special vector and matrix types that form
the foundation of its programming model. Vector and matrix variables are to be
allocated in registers, which makes it much easier to control register usage
at the source level. A CM kernel describes the algorithm for an entire
hardware thread instead of a single work-item through builtin operations on
vectors and matrices; of particular importance is the select operator that
supports efficient register-gather of elements in a variable and is mapped
directly to the Gen ISA regions. Programmers explicitly control an
instruction’s SIMD size by varying the number of elements returned in a select
operation, and different SIMD sizes may be used based on considerations such
as register demand and divergence.
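As a rough analogy in ordinary NumPy (not CM code, and without CM's register-residency guarantees), select behaves like strided slicing of a vector: it picks a given number of elements starting at an offset, a fixed stride apart, and the number of elements picked sets the SIMD width of the consuming instruction:

```python
import numpy as np

def select(v, vsize, stride, offset=0):
    """Toy analogue of CM's select<vsize, stride>(offset): a strided view of
    the data. (Real CM select operates on register-resident vector/matrix
    types and maps to a Gen ISA region, not to a memory gather.)"""
    return v[offset : offset + vsize * stride : stride]

v = np.arange(16, dtype=np.int32)

evens = select(v, 8, 2, 0)   # lanes 0, 2, 4, ..., 14
odds  = select(v, 8, 2, 1)   # lanes 1, 3, 5, ..., 15

# A SIMD-8 add over cross-lane data; on Gen the regioning would make the
# cross-lane access free, with no shuffle or extra move instructions.
pairsums = evens + odds      # [1, 5, 9, 13, 17, 21, 25, 29]
```

Varying `vsize` in the analogy corresponds to varying the instruction's SIMD size in CM.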
The CM compiler (CMC) is based on the LLVM infrastructure [3] and is
responsible for generating Gen ISA SIMD instructions from the high-level
vector and matrix operations. A number of CM-specific intrinsics are
introduced to effectively represent such operations in the LLVM intermediate
representation (IR). A sequence of CM-specific optimizations and
transformations are developed around those intrinsics. One unique challenge in
developing this compiler is that we need to strike a careful balance between
compiler optimizations and What-You-Write-is-What-You-Get. CM kernels are
fully compatible with the Intel GPU OpenCL runtime [4] and oneAPI Level Zero
[5] and can be launched directly as if they are written in OpenCL. While Gen
is CM’s native architecture, CM kernels may also be executed on CPU for
debugging purposes. The CM development framework is open source and can be
found in [6].
We present a comprehensive experimental evaluation of representative
applications from different domains implemented in CM and OpenCL. For each
workload we provide an implementation sketch on how to code to the metal on
Gen using CM. We show that CM kernels achieve up to 2.7x speedup compared to
the best-known OpenCL implementations that use available Intel-specific GPU
extensions [7]. The speedup offered by CM does not mean a sacrifice to
productivity; while OpenCL may allow for rapid prototyping of sequential code,
this advantage is often negated by the subsequent tuning efforts required to
obtain good performance on GPUs. Results from the development process of
several compute kernels indicate that CM provides 2-3x more productivity in
terms of the development effort than OpenCL.
The rest of the paper is organized as follows: Section II briefly covers the
related work; Section III discusses the main motivations of CM as an efficient
SIMD programming model; Section IV describes the CM programming language;
Section V describes the CM compiler; Section VI presents several applications
implemented in CM and their experimental evaluation; and finally Section VII
concludes this paper.
## II Related Work
SIMT and SIMD are two dominant programming models that express data
parallelism. CUDA [1] and OpenCL [2] are two representative SIMT programming
languages. In addition to SIMT execution, OpenCL also supports a task parallel
programming model in which a work-group contains a single work-item and
parallelism is expressed via vector data types and multiple task enqueues.
However, SIMT remains the dominant choice by far for OpenCL GPU
implementations.
As OpenCL is designed to be cross-platform, it does not reflect the full
architectural features for any specific hardware implementations. As a result,
OpenCL is generally acknowledged to suffer from poor performance portability
[8, 9, 10, 11], and time-consuming tuning efforts including the use of non-
portable vendor extensions are often mandatory to obtain good performance.
Auto-tuning [12] has long been suggested as a method to improve OpenCL’s
performance portability, but given the wide disparities among the underlying
hardware architectures, it is unclear whether such techniques are generally
applicable.
[13] presented a comprehensive performance comparison of CUDA and OpenCL and
concluded that OpenCL programs can achieve similar performance to CUDA “under
a fair comparison” once differences in optimization strategies and compilers
are accounted for. Their study is performed on NVIDIA GPUs which employ a SIMT
architecture that naturally matches both CUDA and OpenCL’s execution model. In
contrast, CM is designed specifically for Intel GPUs and adopts an explicit
SIMD programming model to fully exploit the Gen architecture. Most
implementation techniques used in our CM workloads are simply not available in
the OpenCL language.
SIMD programming on the CPU is conventionally done via C-style intrinsics [14],
but such an assembly-like interface demands significant coding effort. As a
result, many high-level SIMD programming models for C++ have been proposed.
Together they cover a wide design spectrum, from implicit vectorization (e.g.,
OpenMP) akin to OpenCL to explicit vectorization (e.g.,
std::experimental::simd in C++ [15]) similar to CM. [16] provides an evaluation
of several SIMD programming models against intrinsic programming. None of
these SIMD programming models are natively designed for Gen, although a few
such as OpenMP have been ported. More recently, Intel has announced oneAPI Data
Parallel C++ [17], which provides a unified, standards-based programming model
for Intel architectures including CPU, GPU, FPGA, and AI accelerators. We
choose OpenCL for performance comparison as it is the most common language for
general-purpose GPU programming on Gen and has very mature toolchain support.
CM is inspired by C* [18] and VecImp [19]. In VecImp, every statement,
including control-flow branches, is executed explicitly in a scalar or vector
context. C* declares parallel variables with a shape that contains many data
elements; arithmetic operators on a parallel variable operate on all of its
elements at the same time.
In terms of compiler infrastructure, such as LLVM, vector representations and
transformations that we have explored for implementing CM are ongoing research
topics. Recently, the authors of [20] introduced MLIR, an extensible
multi-level intermediate representation that aims to “improve compilation for
heterogeneous hardware, reducing the cost of building domain specific
compilers”. The MLIR community is actively working on a vector dialect. One
rationale given in [21] for developing this vector dialect is that
“higher-dimensional vectors are ubiquitous in modern HPC hardware”.
CM can also serve as a back-end compiler for other domain-specific languages
aimed at tackling computationally expensive problems. Recent proposals for
neural networks [22, 23] and image analysis [24] provide a high level of
abstraction into which the CM back-end compiler naturally fits to target Intel
GPUs.
The CM language was invented more than ten years ago, and hundreds of CM
applications have been developed inside and outside Intel. As an example, in
[25] and [26] the authors study the extension of linearization properties to
SIMD programming using CM, including the implementation of a concurrent data
structure using atomic operations.
## III Motivations for a New Programming Model on Gen
Here we describe three main challenges faced by SIMT models as represented by
OpenCL on Intel GPUs to formally motivate the need for CM.
1.
Register file control: Effective use of the register file to reduce
unnecessary memory traffic is perhaps the most important optimization strategy
for Intel GPUs [27]. Careful management of register pressure is difficult to
achieve in OpenCL, as the language leaves register-allocation decisions
entirely in the compiler’s hands. Hundreds of compiler transformation and
optimization passes take place for an OpenCL kernel to be compiled into Gen
assembly; most of them can have a significant impact on register pressure, yet
their behavior is opaque and usually not controllable by the programmer.
For example, divergence analysis [28] is a critical analysis for SIMT GPU
compilers; its results may be used to reduce register usage by allocating a
scalar register for a variable if the compiler can prove that all lanes hold
identical values. The analysis results are often overly conservative in the
presence of complex data and control dependencies, and the programmer has no
mechanism to assist the analysis. By contrast, CM variables are
register-allocated by default, and vectors and matrices can have arbitrary
sizes within hardware limits. CM developers can thus directly allocate their
uniform variables in one
register, and they may also coalesce variables into large matrices for
explicit lifetime management.
2.
Cross-lane data sharing: A well-known limitation of the SIMT execution model
is the lack of data sharing among the work-items in a hardware thread. Even
though SIMD lanes in a thread share the register file, the SIMT abstraction
prevents one lane from accessing another lane’s register data, and this
invariably leads to redundant computation and memory operations. Both CUDA and
OpenCL have introduced explicit SIMD primitives to facilitate cross-lane
communications, and functionalities provided include shuffle, reduction, and
barrier operations [29, 30]. These extensions help bridge the gap between the
SIMT model and the underlying SIMD hardware, but they do not represent actual
hardware capabilities. By contrast, CM’s select operation directly maps to
hardware regioning and may be used directly in compute instructions, thus
eliminating unnecessary shuffle moves.
3.
Vector length control: Each Gen ISA instruction has its own execution size,
and per-instruction SIMD size can be an important optimization technique. One
immediate use of varying vector size is register pressure control. Most
applications go through phases of high and low register demand, and a kernel
should mix its SIMD size to avoid spills in high-pressure regions while
achieving maximum bandwidth for vector memory gather/scatter operations.
Similarly, branch divergence can significantly reduce a program’s
efficiency [31, 32]; in the absence of hardware mechanisms, the inactive
channels will not execute until control flow re-converges. By running with a
lower SIMD size inside divergent regions, a kernel could reduce the amount of
wasted work. Because of CM’s explicit SIMD model, programmers can easily
control each instruction’s SIMD size through the size of vector and matrix
selects. The SIMT model offers no such capabilities, however, as OpenCL GPU
compilers perform implicit vectorization on the kernel. An OpenCL kernel may
specify its dispatch size, but all non-uniform instructions will have that
size by default.
We use a simple 3 by 3 box blur filter (aka linear filter) to compare and
contrast CM and OpenCL’s programming models. We first show a straightforward
OpenCL implementation and point out its inefficiencies on Intel GPUs. In Section
IV we present the CM implementation to showcase the language’s key features,
while Section V explains how the CM kernel is compiled into the base ISA. In
Section VI, we evaluate the performance of our CM kernel against an optimized
OpenCL kernel that uses Intel-specific extensions, and show that even this
optimized version can only reach less than 50% of CM’s performance.
Algorithm 1 Linear filter in OpenCL with SIMT model
1:kernel linear(image2d src, image2d dst, int width, int height)
2: int x = get_global_id(0);
3: int y = get_global_id(1);
4: float4 pixel1 = 0.0f;
5: float4 pixel = 0.0f;
6: int tempx, tempy;
7: #pragma unroll
8: for i = -1; i <= 1; i++ do
9: #pragma unroll
10: for j = -1; j <= 1; j++ do
11: tempx = min(width-1, max(0, x+j));
12: tempy = min(height-1, max(0, y+i));
13: pixel1 = read(src,sampler,(int2)(tempx,tempy));
14: pixel.z += pixel1.z;
15: pixel.y += pixel1.y;
16: pixel.x += pixel1.x;
17: end for
18: end for
19: uint4 p = convert_uint4(pixel*0.1111f);
20: write(dst, (int2)(x,y), p);
21:end kernel
In Algorithm 1, every work-item computes the result of one pixel, whose
position is indicated by the work-item’s $x$ and $y$ global id, by taking the
average value of its neighbors in the input image. Intel’s OpenCL compiler
vectorizes this kernel into SIMD16 instructions where each lane corresponds to
one pixel in the input and output image. Both images are in 3-channel RGB
format, and the hardware image read unit converts the 8-bit integer in each
channel into normalized floating-point values in structure-of-array (SoA)
format. The image write performs the format conversion in reverse. The
generated assembly consists of 9 image-gather loads (line 13), 27 floating-
point additions (lines 14-16), and one image-scatter write (line 20).
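As a reference for what each work-item computes, the per-pixel work of Algorithm 1 (a clamped 3x3 average with the kernel's 0.1111f scale factor) can be transcribed into plain scalar C++. This is only a single-channel, host-side sketch for validation purposes, not CM or OpenCL code:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Scalar, single-channel reference of Algorithm 1's per-pixel work:
// average the 3x3 neighborhood, clamping coordinates at the image border.
unsigned char blur3x3(const std::vector<unsigned char>& src,
                      int width, int height, int x, int y) {
    float sum = 0.0f;
    for (int i = -1; i <= 1; ++i) {
        for (int j = -1; j <= 1; ++j) {
            int tx = std::min(width - 1, std::max(0, x + j));   // clamp x
            int ty = std::min(height - 1, std::max(0, y + i));  // clamp y
            sum += static_cast<float>(src[ty * width + tx]);
        }
    }
    return static_cast<unsigned char>(sum * 0.1111f);  // ~1/9, as in the kernel
}
```

A host-side reference like this makes it straightforward to check the GPU output pixel by pixel.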
This simple implementation suffers from severe redundant loads in each
hardware thread, as in one iteration each work-item is reading pixel values
that were already loaded in previous iterations by its adjacent lanes. A more
efficient method is to have the work-items in a thread cooperatively load a 2D
block of the image in raw format (i.e., the pixels are loaded into registers
without format conversion), then convert each channel into floating-point
values for subsequent computation. This special 2D block read/write
functionality is provided by Intel’s cl_intel_media_block_io extension.
The effectiveness of this approach is still limited by the SIMT model,
however, as the builtin function’s return data must be evenly distributed
among the work-items in a subgroup. Thus, a subgroup shuffle operation is
required to read the neighbor lanes’ pixels and convert them from array-of-
structure (AoS) into SoA layout. The OpenCL compiler is generally not able to
optimize away these costly moves, as to satisfy the SIMT model it must
maintain the values being computed in SoA format. As a last resort one could
avoid the shuffle moves by transposing the input image in host code, but this
increases CPU overhead and real-world applications do not necessarily have
control over their input layout.
As we will show in the next section, these issues can be easily addressed in
CM. Since a CM kernel describes the algorithm for one thread, it can naturally
store the data for the 2D block read/write in a matrix, and it can also choose
the best matrix size without being constrained by the dispatch size. Explicit
vectorization means CM developers can structure their code to accommodate the
block load’s layout, and the select operations efficiently extract the sub-
elements for computation. The CM compiler’s ability to break up matrix
operations into variable-size Gen instructions simplifies programming efforts
while maintaining high performance.
## IV CM Programming Language
The CM programming language is implemented using Clang and supports a subset
of standard C++ with some restrictions (more details in section 2.6 of the
CM language specification [6]). Two container types, `vector` and `matrix`,
are added to the Clang base type system. These new base types form the
foundation for the CM explicit SIMD programming model. On top of these two
types, we add operations and builtin functions that closely resemble the Gen
instruction set. These new types and functions together form the abstract
interface for close-to-the-metal programming on Gen. The following subsections
illustrate the major features of the language. For all the details needed to
write CM code, refer to the CM language specification [6].
### IV-A Vector and Matrix Types
These types are defined using syntax similar to C++ template classes. The
parameters are the element type and the dimensions of the vector/matrix. The
element type must be one of the basic types supported by CM, and the sizes
must be positive integers and compile-time constants.
vector<short, 8> v;   // A vector of 8 shorts
matrix<int, 4, 8> m;  // A 4x8 integer matrix
Additionally, CM provides two reference component data types: `vector_ref` and
`matrix_ref`. They define references to basic vector or matrix objects. No
extra memory space is allocated to reference variables. For example, the
second row of matrix $m$ could be defined as a reference variable as:
vector_ref<int, 8> vref(m.row(2));
Vector or matrix variables map to a sequence of consecutive elements residing
in the general register file (GRF) of the Gen hardware. A vector or matrix
variable may not have its address taken; indirect access is performed via the
reference types instead. Reference variables are usually constructed from
operations on base variables which provide alternative views to the base
objects. Reading a reference variable is mapped directly to Gen’s region based
addressing scheme, which provides zero-overhead data pack, unpack, and
shuffling within two registers.
For vectors, matrices, and their corresponding reference variables, CM
supports member functions and operations including constructor and assignment;
arithmetic, shift, logic and comparison; and row, column and element accesses.
The main operations unique to CM vector and matrix types are:
* •
select: a set of select functions for referencing a subset of vector/matrix
elements are supported. Each select operation returns a reference to the
elements of the base object, and they can be used as l-value expressions.
Select operations are of the form (with v being a vector and m a matrix):
v.select<size, stride>(i)
m.select<vsize, vstride, hsize, hstride>(i, j)
In the second case, it returns a reference to the sub-matrix starting from the
(i, j)-th element. vsize indicates the number of selected rows; vstride
indicates the distance between two adjacent selected rows; hsize indicates the
number of selected columns; and hstride indicates the distance between two
adjacent selected columns. As Figure 1 shows, v.select<4, 2>(1) is an l-value
expression of type vector_ref<float, 4>, which refers to odd elements in the
8-float vector v. In the case of matrix m, the example shows that the
operation selects 4 elements (vsize=2, hsize=2) with vstride and hstride of 2
and 4 respectively. The initial offset is m[1, 2].
Figure 1: Examples of select operation
Nested vector or matrix select operations are efficiently mapped into direct
register addressing operations on Gen.
* •
iselect: CM allows the user to perform indexed access into another vector.
Indirect selects are always r-value expressions. For example, consider a base
variable v of 16 floats, and let idx be a vector of 4 elements
{0, 1, 2, 2}. Then the expression v.iselect(idx) can be used to create a
new vector with elements {v[0], v[1], v[2], v[2]}. This function
exposes Gen’s register-indirect addressing capability.
* •
merge: two forms of merge operations are provided to support conditional
updates: v.merge(x, mask) and v.merge(x, y, mask). The former copies elements
from x to v when the corresponding mask bit is true. The latter copies
elements to v from x when the corresponding mask bit is true; otherwise, it
copies elements to v from y. The first merge is mapped to Gen’s predicated mov
instructions, while the second merge is mapped to sel instructions.
* •
format: this operation allows reinterpreting the element type of a
matrix/vector variable and changing its shape. As an example, on a vector v of
8 floats, the expression v.format<char, 4, 8>() has type matrix_ref<char, 4,
8>, meaning v is reinterpreted to a matrix of type char with 4 rows and 8
columns.
* •
replicate: this operation provides generic regioning operations to gather
elements from a vector or matrix. The expression v.replicate<K, VS, W, HS>(i)
gathers K blocks from the input vector v starting from position i, and each
block has W elements. VS and HS are the vertical and horizontal stride. For
example, v.replicate<2, 4, 4, 0>(2) on vector v from Figure 1 will gather the
elements {v[2], v[2], v[2], v[2], v[6], v[6], v[6], v[6]}.
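The gather-and-update semantics of these operations can be modeled with a few lines of ordinary C++ (a host-side sketch, not the CM implementation; the flattened row-major layout and the helper names select2d, iselect_ref, merge_ref, and replicate_ref are our own):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// m.select<vsize, vstride, hsize, hstride>(i, j) on a row-major matrix
// with `cols` columns: gather vsize x hsize elements starting at (i, j).
std::vector<int> select2d(const std::vector<int>& m, std::size_t cols,
                          std::size_t vsize, std::size_t vstride,
                          std::size_t hsize, std::size_t hstride,
                          std::size_t i, std::size_t j) {
    std::vector<int> out;
    for (std::size_t r = 0; r < vsize; ++r)
        for (std::size_t c = 0; c < hsize; ++c)
            out.push_back(m[(i + r * vstride) * cols + (j + c * hstride)]);
    return out;
}

// v.iselect(idx): indexed (register-indirect) gather.
std::vector<int> iselect_ref(const std::vector<int>& v,
                             const std::vector<std::size_t>& idx) {
    std::vector<int> out;
    for (std::size_t k : idx) out.push_back(v[k]);
    return out;
}

// v.merge(x, mask): copy x's element where the mask bit is true.
void merge_ref(std::vector<int>& v, const std::vector<int>& x,
               const std::vector<bool>& mask) {
    for (std::size_t k = 0; k < v.size(); ++k)
        if (mask[k]) v[k] = x[k];
}

// v.replicate<K, VS, W, HS>(i): K blocks of W elements; block b starts at
// element i + b*VS, and elements within a block step by HS.
std::vector<int> replicate_ref(const std::vector<int>& v, std::size_t K,
                               std::size_t VS, std::size_t W, std::size_t HS,
                               std::size_t i) {
    std::vector<int> out;
    for (std::size_t b = 0; b < K; ++b)
        for (std::size_t e = 0; e < W; ++e)
            out.push_back(v[i + b * VS + e * HS]);
    return out;
}
```

With v[k] = k for an 8-element vector, v.select<4, 2>(1) corresponds to select2d(v, 8, 1, 1, 4, 2, 0, 1) and yields {1, 3, 5, 7}, while replicate_ref(v, 2, 4, 4, 0, 2) reproduces the {v[2], v[2], v[2], v[2], v[6], v[6], v[6], v[6]} gather described above.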
CM also supports mixed operations on vector and matrix objects of different
shapes as long as each operand has the same number of elements. The operand
shape conformance is checked at compile time using template specialization
rules for vector/matrix classes. The CM compiler determines the element type
for the destination operand based on the source operand data types following
standard C++ rules for type promotion (using template specialization
mechanisms). Just like in standard C++, users may want to add explicit type
conversions to change the default type promotion and conversion rules. A
simple example of an implicit and explicit conversion can be:
vector<float, 8> f;
vector<int, 8> i;
f = i;                    // Implicit conversion
f = vector<short, 8>(i);  // Explicit conversion
CM allows vector and matrix to be declared as file-scope variables, which are
treated as thread private variables. They can be used to facilitate data
sharing among the main function and its callee functions in the same thread.
Optionally, CM supports two variants of global variable usage. The first
variant, denoted by the _GENX_VOLATILE_ qualifier, informs the compiler to
perform conservative optimizations on these variables in order to decrease
register pressure and improve code quality. The second variant, denoted by the
_GENX_VOLATILE_BINDING_(Offset) qualifier, indicates that the global variable
should be mapped to a GRF block starting from the specified byte offset. This
register-binding feature enables the programmer to achieve fine-grained
register-allocation control and to effectively tackle challenges such as bank
conflicts in performance-critical applications.
### IV-B Memory Intrinsics
CM provides a set of memory-access functions that resemble the underlying Gen
hardware operations. By default, a buffer-index-based addressing mode is
used. A kernel includes a number of SurfaceIndex arguments, each of which
represents a handle to the underlying memory object. A read or write intrinsic
takes one surface index and accesses its elements specified by the offsets.
Application host code is responsible for binding each kernel argument to a
memory object through runtime API calls. The most useful intrinsics include:
* •
2D-block read/write: For an image identified by its SurfaceIndex, a block-read
loads a block of pixels at the given x/y location into a matrix. A 2D-block
write stores a matrix into a block of pixels in an image at the given x/y
location. The following intrinsic definition is for 2D-block read.
template <typename T, int N, int M>
void read(SurfaceIndex index, CmBufferAttrib attr,
          int X, int Y, matrix_ref<T, N, M> output)
* •
Oword-block read/write: For a linearly-addressed buffer, a block-read reads a
consecutive sequence of owords (16 bytes per oword) at a given offset into a
vector. A block-write writes a vector into a consecutive sequence of owords at
the given offset into the buffer. The following intrinsic definition is for
Oword-block read.
template <typename T, int N>
void read(SurfaceIndex idx, CmBufferAttrib attr,
          int offset, vector_ref<T, N> output)
* •
Scattered read/write: Vector gather and scatter of various granularity are
also supported. Zero-based offsets of each element (relative to a global
offset) to be read/written are specified in a vector. For scattered read and
write functions, the address, source payload, and return data must be vector
type of the same size. The following intrinsic definition is for scattered
read.
template <typename T, int N>
void read(SurfaceIndex index, uint globalOffset,
          vector<uint, N> elementOffset, vector_ref<T, N> ret)
* •
Atomics: CM supports all native atomic operations on Gen including and, add,
max, inc, compxchg, etc. Like scattered read/write, atomic functions must also
have vector type. The following is the intrinsic definition for atomic inc.
template <CmAtomicOp Op, typename T, int N>
void write_atomic(vector<ushort, N> mask, SurfaceIndex index,
                  vector<uint, N> element_offset)
In addition to SurfaceIndex, CM also supports a flat addressing model where a
kernel argument is a pointer that may be directly used for memory access. This
allows host and kernel code to share data structures and concurrently access
them.
### IV-C Boolean Reductions
To facilitate boolean reductions on mask vectors, CM provides two predefined
boolean functions:
ushort vector<ushort, size>::any(void)
ushort vector<ushort, size>::all(void)
any() returns 1 if any value in the mask is non-zero; it returns 0
otherwise. all() returns 1 if all values in the mask are non-zero; it
returns 0 otherwise. Notice that the same functions are also available for
matrix types. The result of either function can be used as a scalar value and
be used in the standard C++ control-flow constructs. Reduction functions are
efficiently mapped to Gen’s compare instructions.
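A host-side equivalent of these reductions (a sketch with our own helper names, not the CM library code) is a single pass over the mask:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

using ushort = unsigned short;

// Model of vector<ushort, size>::any(): 1 if any mask element is non-zero.
ushort any_ref(const std::vector<ushort>& mask) {
    return std::any_of(mask.begin(), mask.end(),
                       [](ushort v) { return v != 0; }) ? 1 : 0;
}

// Model of vector<ushort, size>::all(): 1 if all mask elements are non-zero.
ushort all_ref(const std::vector<ushort>& mask) {
    return std::all_of(mask.begin(), mask.end(),
                       [](ushort v) { return v != 0; }) ? 1 : 0;
}
```

The scalar result can then steer ordinary C++ control flow, as in `if (any_ref(mask)) { ... }`.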
### IV-D SIMD Control Flow
In CM, the default control-flow statements are the C++ scalar control-flow
statements: conditional statements (if-else/switch), loop statements
(for/while/do-while), jump statements (break/continue/goto/return), and
function calls. For these statements, the conditions must be scalars, and all
SIMD lanes branch uniformly.
Beyond that, CM also provides per-lane SIMD control-flow mechanisms utilizing
the Gen `simd-goto` and `simd-join` instructions that support divergent
control-flow under SIMD execution [33]. This feature provides an alternative
to predicating long sequences of instructions, as inactive channels do not
execute inside SIMD control-flow regions.
SIMD control flow in CM is expressed by predefined C++ macros. For instance, a
divergent if is represented by the macros SIMD_IF_BEGIN and SIMD_IF_END, which
are used as follows:
vector<uint, 16> v(0);
vector<ushort, 8> cond = ...
SIMD_IF_BEGIN(cond > 0) {
  // ...
  v.select<8, 2>(0) = 1;
} SIMD_ELSE {
  // ...
  v.select<8, 2>(1) = 1;
} SIMD_IF_END;
The comparison cond > 0 produces a vector mask that determines whether a lane
is active. Both the then statement and the else statement may get executed for
their active lanes. A SIMD control flow block is skipped if none of the lanes
are active. Notice that the size of SIMD operations within a SIMD control-flow
must be either the same size as the mask or scalar.
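The per-lane behavior of this divergent if can be modeled in scalar C++ (an illustrative sketch of the masking semantics only, not how the hardware executes it):

```cpp
#include <cassert>
#include <vector>

// Models the SIMD_IF/SIMD_ELSE example: each of the 8 mask lanes executes
// either the then-branch or the else-branch; v.select<8, 2>(0) touches the
// even elements of the 16-element vector and v.select<8, 2>(1) the odd ones.
std::vector<unsigned> simd_if_ref(const std::vector<unsigned short>& cond) {
    std::vector<unsigned> v(16, 0);
    for (int lane = 0; lane < 8; ++lane) {
        if (cond[lane] > 0)
            v[2 * lane] = 1;      // then-branch, active lanes only
        else
            v[2 * lane + 1] = 1;  // else-branch, the remaining lanes
    }
    return v;
}
```

Both branches may execute when the mask is mixed, and a branch is skipped entirely when none of its lanes are active, matching the description above.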
### IV-E Linear Filter in CM
We now describe how the linear filter can be implemented in CM (Algorithm 2).
Each thread in the CM kernel reads an 8x32-byte matrix and outputs a 6x24-byte
matrix corresponding to 6x8 pixels. Although we only need 8x30 bytes for the
8x10 input pixels, adding two bytes of padding to each row gives a good
register-file layout for the computation. The select operation acts as
follows: after the input pixels are loaded into the 8x32-byte matrix `in`, at
each step we extract a 6x24-byte sub-matrix through a select operation,
convert all elements to float, and add them to the running total, which is a
6x24 float matrix. Figure 2 shows the first 6x24-byte sub-matrix select
operation performed in Algorithm 2.
Algorithm 2 Linear filter written in CM
1:kernel linear(Surface inBuf, Surface outBuf, uint hpos, uint vpos)
2: matrix<uchar, 8, 32> in; //8x32 input matrix
3: matrix<uchar, 6, 24> out; //6x24 output matrix
4: matrix<float, 6, 24> m;
5: read(inBuf, hpos*24, vpos*6, in);
6: //Compute sums of neighbor elements
7: m = in.select<6, 1, 24, 1>(1, 3);
8: m += in.select<6, 1, 24, 1>(0, 0);
9: m += in.select<6, 1, 24, 1>(0, 3);
10: m += in.select<6, 1, 24, 1>(0, 6);
11: m += in.select<6, 1, 24, 1>(1, 0);
12: m += in.select<6, 1, 24, 1>(1, 6);
13: m += in.select<6, 1, 24, 1>(2, 0);
14: m += in.select<6, 1, 24, 1>(2, 3);
15: m += in.select<6, 1, 24, 1>(2, 6);
16: //Compute average (implicit type conversion)
17: out = m*0.1111f;
18: write(outBuf, hpos*24, vpos*6, out);
19:end kernel
Figure 2: Select a 6x24 sub-matrix from an 8x32 matrix
The 2D-block read/write functions are used to perform the load and store on
line 5 and line 18. As mentioned in Section III, for this filter the
specialized 2D block messages are much more efficient than the image
gather/scatter operations in the vanilla OpenCL implementation (Algorithm 1)
due to the elimination of redundant memory traffic.
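To see why the nine byte-shifted selects in Algorithm 2 implement a per-channel 3x3 sum, the two formulations can be compared in plain C++ (a host-side sketch over the 8x32-byte layout described above; not CM code, and the helper names are ours):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sum the nine shifted 6x24 windows of an 8x32 byte block, as the selects
// on lines 7-15 of Algorithm 2 do: row shifts 0..2 and byte shifts 0/3/6
// (one RGB pixel occupies 3 bytes).
std::vector<float> sum_by_selects(const std::vector<unsigned char>& in) {
    std::vector<float> m(6 * 24, 0.0f);
    const int shift[3] = {0, 3, 6};
    for (int dr = 0; dr < 3; ++dr)
        for (int k = 0; k < 3; ++k)
            for (int r = 0; r < 6; ++r)
                for (int c = 0; c < 24; ++c)
                    m[r * 24 + c] += in[(r + dr) * 32 + c + shift[k]];
    return m;
}

// Direct per-pixel formulation: 3x3 neighborhood sum of a single channel
// ch (0..2) of the output pixel at (pr, pc).
float channel_sum(const std::vector<unsigned char>& in,
                  int pr, int pc, int ch) {
    float s = 0.0f;
    for (int dr = 0; dr < 3; ++dr)
        for (int dc = 0; dc < 3; ++dc)
            s += in[(pr + dr) * 32 + (pc + dc) * 3 + ch];
    return s;
}
```

Element m[r*24 + 3*pc + ch] equals channel_sum(in, r, pc, ch), so one matrix expression covers all 6x8 output pixels and all three channels at once; this is the redundancy-free data reuse the SIMT version cannot express.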
## V CM Compiler
Like the Intel Graphics Compiler (IGC) [33], the CM compiler consists of three
layers:
* •
Front-end: The clang front-end compiler [34] converts CM source code into LLVM
intermediate representation (IR) [3].
* •
Middle-end: The middle-end performs generic and CM-specific optimizations and
transformations before converting the LLVM IR into the virtual-ISA (vISA)
assembly language. vISA is very close to the Gen ISA but is more convenient as
a compilation target, as it has unlimited virtual registers and hides various
hardware-specific restrictions.
* •
Finalizer: The vISA finalizer [27] is a code generator for Intel GPUs. Taking
vISA assembly as input, it performs local optimizations, register allocation
and scheduling to generate the final instructions for the target Intel GPU.
The general flow of the CM custom optimizations is illustrated in Figure 3
(inside middle-end module). The input corresponds to LLVM IR generated by LLVM
generic optimizations. The lowering pass gradually converts the high-level CM
language constructs to code sequences that are closer to the target Gen ISA.
Afterwards, several optimizations are performed at each IR level to improve
the code quality. Two of these optimization passes are highlighted in the
remainder of this section: baling and legalization, and vector optimization.
Figure 3: CM compilation flow
Gen ISA has distinct features such as varying execution size, mixed data
types, flexible register regioning, and modifier support [33]. Vector and
matrix data types and their region-select operations need to be carefully
modeled so that they can be directly mapped to those distinct features without
extra move instructions. Since LLVM is based on Static Single Assignment (SSA)
form, where each value is defined exactly once, we extend its IR with the
following two intrinsics to model partial reads and writes of vector/matrix
variables in SSA form, so that they can benefit from common LLVM optimizations.
* •
Read region (rdregion): extract selected elements from a vector to make a new
smaller vector.
* •
Write region (wrregion): insert elements into selected positions and return a
new value for the whole vector.
The following is a simplified example to illustrate the design. The original
vector `a` is defined as an `8 x i32` value `%a0`. The rdregion intrinsic
extracts `4 x i32` elements from `%a0` based on the given parameters: vertical
stride = 0, width = 4, horizontal stride = 2, starting byte offset = 4. The
wrregion intrinsic inserts the elements of `%b` to the old value of `a`
(`%a0`) based on the other given parameters: vertical stride = 0, width = 4,
horizontal stride = 2, starting byte offset = 0. The SSA property is
maintained as the wrregion intrinsic returns a different `%a1` to represent
the new value of vector `a`.
vector<int, 8> a(init_v);
vector<int, 4> b;
b = a.select<4, 2>(1);
a.select<4, 2>(0) = b;

%a0 = <8 x i32> ...
%b  = call <4 x i32> @llvm.genx.rdregioni... (<8 x i32> %a0, i32 0, i32 4, i32 2, i16 4)
%a1 = call <8 x i32> @llvm.genx.wrregioni... (<8 x i32> %a0, <4 x i32> %b, i32 0, i32 4, i32 2, i16 0)
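The single-row case of these intrinsics can be modeled in ordinary C++ to make the SSA behavior concrete (an illustrative sketch with our own helper names; byte offsets are divided by 4 because the elements here are i32):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Single-row rdregion: read `width` i32 elements starting at byte_off,
// stepping `stride` elements (vstride is unused when only one row is read).
std::vector<int> rdregion_ref(const std::vector<int>& a,
                              int width, int stride, int byte_off) {
    std::vector<int> out;
    int start = byte_off / 4;  // i32 elements are 4 bytes
    for (int k = 0; k < width; ++k)
        out.push_back(a[start + k * stride]);
    return out;
}

// Single-row wrregion: returns a NEW vector value with b's elements
// inserted; the input value is left untouched, preserving SSA form.
std::vector<int> wrregion_ref(std::vector<int> a, const std::vector<int>& b,
                              int stride, int byte_off) {
    int start = byte_off / 4;
    for (std::size_t k = 0; k < b.size(); ++k)
        a[start + k * stride] = b[k];
    return a;
}
```

With %a0 = {10..17}, rdregion_ref(a0, 4, 2, 4) yields {11, 13, 15, 17}, and wrregion_ref(a0, b, 2, 0) produces the new value %a1 while %a0 stays intact, mirroring the IR above.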
Due to its expressiveness one vISA instruction may be represented in the LLVM
IR by multiple instructions. Baling is the process of determining which group
of LLVM instructions can be combined (baled) together and efficiently mapped
to vISA. A bale has a root instruction as well as optional modifiers and
region instructions on the source and destination operands. The baling
analysis pass constructs a map to mark which IR instructions are selected and
what roles they play in their resulting bales. The root of a bale is the last
instruction in the program order of all instructions in the bale, which is
also the only instruction whose value is used outside the bale. Since the
baling pass may decide to bale in an instruction with multiple uses as a non-
root instruction, the instruction is cloned to ensure it has only a single use
inside the bale.
vISA is designed to be close to Gen ISA and inherits similar restrictions
(e.g., the size of an operand may not exceed two GRFs). After the initial
baling analysis, the legalization pass may split up one bale into multiple
instructions to conform to vISA restrictions. In general, the splitting must
be done carefully to take advantage of the maximum SIMD width allowed by the
target platform. Other examples of transformations performed here include un-
baling an instruction due to conflicting legalization requirements, aligning
operands for memory access operations, and promoting byte type operations into
equivalent short ones to work around hardware restrictions.
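The core of the splitting step can be illustrated with a toy routine that breaks an n-element operation into power-of-two chunks no wider than a platform maximum (our own simplification; real legalization also considers operand alignment, data types, and GRF-crossing rules):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Split an n-element vector operation into power-of-two execution sizes,
// each at most max_width, preferring the widest legal chunk first.
std::vector<int> split_exec_sizes(int n, int max_width) {
    std::vector<int> out;
    while (n > 0) {
        int w = std::min(n, max_width);
        while (w & (w - 1)) w &= w - 1;  // round down to a power of two
        out.push_back(w);
        n -= w;
    }
    return out;
}
```

For instance, a 24-element row with a SIMD16 maximum splits into a SIMD16 and a SIMD8 instruction, which is consistent with how the 6x24 selects of Algorithm 2 end up as SIMD16 moves plus narrower remainders.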
The vector optimization pass performs optimizations based on rdregion and
wrregion tailored for vector and matrix. The following are a few examples:
* •
Constant folding: We have extended LLVM constant folding so that it can fold
and propagate vector constants through rdregions and wrregions.
* •
Promoting C-array into LLVM vector: Although it is not recommended, users can
use a C-array in CM instead of a CM vector. The CM compiler can replace
C-array loads and stores with rdregions and wrregions.
* •
Region collapsing: This can be viewed as instruction-combining transformation
specific to rdregions and wrregions.
* •
Dead vector removal: This is a more general form of dead-code elimination on
vector values. The uses of every vector element are tracked to determine if
the whole vector is dead.
* •
Vector decomposition: Given a large vector, if the compiler can show that it
can be divided into multiple segments where the rdregions and wrregions on
those segments are disjoint, then the large vector can be converted into
multiple smaller ones, which increases the flexibility for the register
allocator.
As an example of the compiler code generation, consider again the linear CM
implementation presented in Algorithm 2. Figure 4 illustrates how a 6x24 sub-
matrix char-to-float conversion is done through a select operation (line 7 in
Algorithm 2).
Figure 4: Sub-matrix layout of a 6x24 char-to-float select operation.
This select operation is compiled into 9 SIMD16 instructions as shown below:
1) mov (16|M0) r11.0<1>:f r4.3<8;8,1>:ub
2) mov (16|M0) r13.0<1>:f r4.19<16;8,1>:ub
3) mov (16|M0) r15.0<1>:f r5.11<8;8,1>:ub
4) mov (16|M0) r17.0<1>:f r6.3<8;8,1>:ub
5) mov (16|M0) r19.0<1>:f r6.19<16;8,1>:ub
6) mov (16|M0) r21.0<1>:f r7.11<8;8,1>:ub
7) mov (16|M0) r23.0<1>:f r8.3<8;8,1>:ub
8) mov (16|M0) r25.0<1>:f r8.19<16;8,1>:ub
9) mov (16|M0) r27.0<1>:f r9.11<8;8,1>:ub
In Gen ISA, a source operand’s region is a 2D-array in row-major order with
the format $<$V;W,H$>$, where W (width) is the number of elements in a row, H
(horizontal stride) is the step size between two elements in a row, and V
(vertical stride) is the step size between two rows. This example shows the
power of CM programming on Gen: programmers express their algorithms using
high-level matrix operations, and the compiler translates them into multiple
SIMD instructions that take advantage of the region-based addressing scheme to
access register data efficiently.
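The $<$V;W,H$>$ addressing rule just described can be written down directly: element $k$ of a region anchored at element `origin` lives at `origin + (k/W)*V + (k%W)*H`. A minimal sketch (not part of the CM toolchain, just an illustration of the rule):

```cpp
#include <cstddef>

// Register index (in elements) of the k-th element of a <V;W,H>
// region anchored at `origin`: rows hold W elements spaced H apart,
// and consecutive rows start V elements apart.
inline std::size_t region_elem(std::size_t origin, std::size_t V,
                               std::size_t W, std::size_t H, std::size_t k) {
    return origin + (k / W) * V + (k % W) * H;
}
```

For example, `r4.3<8;8,1>` reads two back-to-back rows of 8 bytes starting at element 3 (elements 3..10, then 11..18), whereas with `<16;8,1>` the second row starts 16 elements after the first, skipping 8 elements between rows.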
## VI Experimental Evaluation
This section presents a set of applications from different domains implemented
in CM and OpenCL with their experimental evaluation on an Intel GPU. We also
analyze results in terms of the productivity and development effort from the
development process of several compute kernels.
### VI-A Applications
We briefly highlight the implementation strategy of each CM kernel that
enables it to achieve close-to-the-metal performance. The source code and
description of the applications benchmarked can be found in [6] and in the
appendix of this paper. The OpenCL kernels are from the Intel OpenCL SDK [35]
except for histogram and k-means which were developed internally by expert
OpenCL programmers. All of them have been tuned and represent state-of-the-art
OpenCL implementations for Intel GPUs. As baseline, all kernels were compiled
with -O2 for the optimization level.
Typical input parameters were used for benchmarking the applications and their
specification is described in every subsection; a detailed study of
application behavior with varying input sizes is beyond the scope of this
paper.
Figure 5: Speedup of CM versus OpenCL kernels. Speedup is computed as
$\frac{OpenCL\\_exec\\_time}{CM\\_exec\\_time}$.
The Intel IceLake (ICL) processor was used to run the workloads. The ICL
system includes an Intel Core i7 with 4 CPU cores, 16GB of system memory and a
Gen11 integrated GPU with 64 EUs. Performance comparison is done by measuring
the total execution time.
1. 1.
Bitonic Sort: it is a classic parallel algorithm for sorting elements [36].
Given $2^{n}$ input elements, the bitonic network takes $n$ stages to sort,
producing chunks of sorted elements in ascending and descending order in every
stage. At every stage there is a split procedure that cuts one bitonic
sequence into two smaller ones. The SIMT bitonic sort implementation benefits
from using vector data types (e.g. int4) available in OpenCL, however, it
involves global memory access within every stage. To avoid excessive global
memory access and global synchronizations, our CM kernel takes advantage of
the large register space to hold 256 data elements in registers, processing
several split steps locally. Experimental results show that our CM
implementation outperforms the OpenCL version by 1.6x to 2.3x as shown in
Figure 5. The higher speedup with larger input sizes is due to additional
savings from memory accesses and global synchronizations.
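The split procedure described above can be sketched as a recursive compare-exchange over data held in a local array (standing in for the register-resident elements; this is the standard bitonic merge, not the CM kernel itself):

```cpp
#include <algorithm>
#include <cstddef>

// Bitonic merge: one split step compare-exchanges element i with
// element i + n/2, cutting the bitonic sequence into two smaller
// ones, then recurses on both halves. With the data in registers,
// every step runs without global memory traffic.
inline void bitonic_merge(int* a, std::size_t n, bool ascending) {
    if (n < 2) return;
    for (std::size_t i = 0; i < n / 2; ++i)
        if ((a[i] > a[i + n / 2]) == ascending)
            std::swap(a[i], a[i + n / 2]);
    bitonic_merge(a, n / 2, ascending);
    bitonic_merge(a + n / 2, n / 2, ascending);
}
```

Applied to a bitonic input such as {1,4,6,8,7,5,3,2}, the merge produces a fully sorted sequence.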
2. 2.
Histogram: it is a common statistical tool used in image processing
applications. It collects the distribution of pixel intensities from an image.
Both CM and OpenCL are based on local and global histograms to perform the
parallel computation. However, while in the OpenCL implementation each
thread’s local histogram is stored in the SLM, in the CM kernel it is
efficiently stored in registers. Also, in the OpenCL kernel one additional
step is needed: after the local histogram computation the first thread in a
work-group atomically updates the global histogram with local results. Figure
5 shows that CM significantly outperforms OpenCL, achieving up to 2.7x
speedup. Furthermore, OpenCL’s performance is very sensitive to different
input patterns. The performance gap is narrower for randomly-generated input,
where the OpenCL kernel is unlikely to incur SLM bank conflicts and serialized
atomic increments. For real-world images with homogeneous background (e.g.,
earth), however, OpenCL’s performance degrades significantly due to contention
among atomic operations.
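The register-resident strategy can be sketched as follows (a local array standing in for the registers; the merge into the global histogram is omitted for brevity):

```cpp
#include <array>
#include <cstddef>

// Per-thread local histogram kept entirely in "registers" (a local
// array here): no SLM, hence no bank conflicts and no serialized
// atomic increments until the final merge into the global histogram.
inline std::array<unsigned, 256> local_histogram(const unsigned char* pixels,
                                                 std::size_t n) {
    std::array<unsigned, 256> h{};
    for (std::size_t i = 0; i < n; ++i)
        ++h[pixels[i]];
    return h;
}
```

Note that a homogeneous input (many identical pixel values) costs the same as a random one here, which is exactly why the CM kernel's performance is insensitive to the input pattern.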
3. 3.
K-means Clustering: it is a popular clustering algorithm used in data mining
and machine learning [37]. K-means stores $k$ centroids that it uses to define
clusters. A point is considered to be in a particular cluster if it is closer
to that cluster’s centroid than any other centroid. The CM k-means kernel is
divided into two phases that alternate until the centroids converge. The
first phase divides the input data into chunks of elements. Each hardware
thread performs the clustering for one chunk, computing the minimum distance
to determine which cluster (centroid) each point belongs to. The
second phase sums up the accumulated coordinates and the number of points in
each cluster and computes the new centroid positions. In a final step,
coordinates of the thread’s cluster are produced. Compared to the OpenCL
implementation, in Figure 5 it can be seen that the CM k-means is 30% to 50%
faster with three different data sets. This performance difference is mainly
because the CM k-means efficiently shares centroids and other auxiliary data
structures in the register file instead of using SLM and thread barriers. The
CM kernel also benefits from efficient scattered memory reads, which are
overlapped by the CM compiler for latency hiding.
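The per-point work of phase one reduces to a nearest-centroid search; a minimal 1-D sketch (centroids assumed register-resident, as in the CM kernel; the real kernel works on multi-dimensional points):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Phase-1 core: assign a point to the cluster whose centroid is
// closest. Because the centroid table stays in registers, this loop
// performs no SLM traffic and needs no thread barriers.
inline std::size_t nearest_centroid(float x, const std::vector<float>& c) {
    std::size_t best = 0;
    for (std::size_t k = 1; k < c.size(); ++k)
        if (std::fabs(x - c[k]) < std::fabs(x - c[best]))
            best = k;
    return best;
}
```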
4. 4.
Sparse Matrix-Vector Multiplication (SpMV): for a sparse matrix $A$, SpMV
computes the result of $Y=AX$, where $Y$ and $X$ are two dense vectors. It is
widely used in many graph algorithms and scientific applications. The SIMT
OpenCL implementation uses the cl_intel_subgroup extension and SLM
efficiently, however, the presence of irregular memory accesses due to the
nature of the input limits its performance. The CM implementation tackles this
issue by dynamically varying the instruction SIMD width.
Since issuing wider vector loads than necessary wastes memory bandwidth and
increases contention, we use dynamic branches to check different block sizes
and select the best execution size accordingly. This capability of varying
SIMD size to improve both memory and compute efficiency is an important CM
advantage over OpenCL. Another advantage is the use of boolean reductions that
are applied to detect if all input rows are zero and skip the entire
computation. This also improves both memory and compute efficiency for sparse
matrices. Experimental results in Figure 5 show that the CM kernel outperforms
the OpenCL implementation by 10% and 25% for the Protein and Nd24k matrices
which have the highest number of non-zero elements per row (around 200). For
Webbase, which has low density and high variance of non-zero elements (3 non-
zeros/row), varying the SIMD width is effective in achieving high memory
efficiency, and the CM kernel performs 160% better than OpenCL.
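The two CM advantages described above can be sketched as host-side logic (an illustrative width ladder and row-block test; the real kernel's thresholds and branches are tuned per platform and run on the GPU):

```cpp
#include <cstddef>
#include <initializer_list>
#include <vector>

// Dynamic-SIMD idea: pick the narrowest hardware-friendly execution
// size that covers a row's non-zeros, instead of always issuing the
// widest vector load. (Illustrative width ladder.)
inline int pick_simd_width(std::size_t nnz_in_row) {
    for (int w : {4, 8, 16, 32})
        if (nnz_in_row <= static_cast<std::size_t>(w))
            return w;
    return 32;  // long rows are processed 32 elements at a time
}

// Boolean-reduction analogue: skip a whole row block when every row
// in it is empty, saving both memory and compute.
inline bool all_rows_empty(const std::vector<std::size_t>& nnz) {
    for (std::size_t n : nnz)
        if (n != 0) return false;
    return true;
}
```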
5. 5.
Matrix Transpose: it is a fundamental linear algebra operation that is heavily
used in machine learning workloads. An optimized SIMT GPU implementation [38]
typically utilizes the SLM to avoid uncoalesced global memory access. For an
out-of-place matrix transpose, threads within a thread group cooperatively
copy a tile of the matrix from global memory into SLM, perform barrier
synchronization, then copy SLM data using transposed array indices to the
global output buffer. The CM implementation can completely bypass SLM and
avoid synchronization overhead by directly performing the transpose on
registers. The transpose is performed using a combination of CM’s select and
merge operations to shuffle each element to its transposed position. For example,
the following CM code sequence transposes a $2\times 2$ matrix
$m=\begin{bmatrix}a&b\\\ c&d\end{bmatrix}$:
v0 = v.replicate<2,1,2,0>(0); // [a,a,b,b]
v1 = v.replicate<2,1,2,0>(2); // [c,c,d,d]
v2 = merge(v0, v1, 0b0101);   // [a,c,b,d]
We view $m$ as a vector $v=[a,b,c,d]$ and $v_{2}$ as the transpose of the
original input matrix. Transpose of bigger matrices can be solved by
recursively applying the above steps to each sub-matrix.
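The replicate/merge sequence can be modeled directly (a behavioral sketch of the two operations as used in this example, not their full CM semantics; in particular, merge is modeled here as taking the first operand where the mask bit is set, which is the convention that reproduces the example):

```cpp
#include <array>
#include <cstddef>

// Model of v.replicate<REP,VS,W,HS>(offset): REP blocks of W
// elements; block r starts at offset + r*VS, elements HS apart.
template <int REP, int VS, int W, int HS, std::size_t N>
std::array<char, REP * W> replicate(const std::array<char, N>& v, int offset) {
    std::array<char, REP * W> r{};
    for (int i = 0; i < REP; ++i)
        for (int j = 0; j < W; ++j)
            r[i * W + j] = v[offset + i * VS + j * HS];
    return r;
}

// Model of merge: take the element from `a` where the mask bit is
// set, otherwise from `b`.
template <std::size_t N>
std::array<char, N> merge(const std::array<char, N>& a,
                          const std::array<char, N>& b, unsigned mask) {
    std::array<char, N> r{};
    for (std::size_t i = 0; i < N; ++i)
        r[i] = ((mask >> i) & 1u) ? a[i] : b[i];
    return r;
}
```

Applied to $v=[a,b,c,d]$, this yields $v_0=[a,a,b,b]$, $v_1=[c,c,d,d]$, and $v_2=[a,c,b,d]$, the row-major layout of the transposed matrix.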
Experimental results on different matrix sizes, as illustrated in Figure 5,
show that this CM implementation achieves a speedup of up to 2.2x compared to
the SLM-based OpenCL implementation. OpenCL’s subgroup shuffle functions do
not help here since they are not expressive enough to exploit Gen’s operand
regioning.
6. 6.
SGEMM and DGEMM: General Matrix-to-Matrix Multiplication (GEMM) is a function
that performs matrix multiplication of the form $C=\alpha AB+\beta C$, where
$A$, $B$ and $C$ are dense matrices and $\alpha$ and $\beta$ are scalar
coefficients. It is at the heart of many scientific applications and achieving
peak theoretical performance is critical for every architecture. Here we focus
on single precision floating-point (SGEMM) and double precision floating-point
(DGEMM). Even though the OpenCL and CM GEMM kernels employ a similar register-
blocking strategy (OpenCL achieves this by using the cl_intel_subgroup
extension [39] and mimicking the CM implementation), the CM kernel is able to
process more data per thread thanks to more efficient management of the
register file. As a result, CM outperforms OpenCL by 8.5% in DGEMM and around
10% in SGEMM for different input sizes, as illustrated in Figure 5.
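Register blocking can be sketched with a tiny micro-kernel in which an output tile stays in locals (standing in for registers) across the entire k-loop; the CM kernel applies the same idea with much larger tiles (a simplified sketch, not the tuned kernel):

```cpp
#include <cstddef>

// 2x2 register-blocked micro-kernel for C += A*B (row-major, n x n):
// the four accumulators live in locals for the whole k loop, so each
// loaded element of A and B is reused twice before being discarded.
inline void gemm_tile_2x2(const float* A, const float* B, float* C,
                          std::size_t n, std::size_t i, std::size_t j) {
    float c00 = 0, c01 = 0, c10 = 0, c11 = 0;
    for (std::size_t k = 0; k < n; ++k) {
        float a0 = A[i * n + k], a1 = A[(i + 1) * n + k];
        float b0 = B[k * n + j], b1 = B[k * n + j + 1];
        c00 += a0 * b0; c01 += a0 * b1;
        c10 += a1 * b0; c11 += a1 * b1;
    }
    C[i * n + j] += c00;           C[i * n + j + 1] += c01;
    C[(i + 1) * n + j] += c10;     C[(i + 1) * n + j + 1] += c11;
}
```

Larger tiles raise the reuse factor further, which is why being able to hold more accumulators per thread in the register file translates directly into performance.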
7. 7.
Prefix Sum: it is the cumulative sum of a sequence of numbers and plays an
important role in many algorithms, e.g., stream compaction, radix sort, etc.
The OpenCL implementation is based on Blelloch’s algorithm [40] and uses a
tree-traversal approach to build the prefix sum with parallel reductions and
partial sums. It exploits the SLM but incurs several data movements between
local and global memory, plus multiple barriers. Our CM implementation uses a
similar approach but threads perform the parallel reduction and partial sums
entirely in registers, updating their results in place on the input array
through scattered writes. Figure 5 depicts that the CM implementation achieves
1.6x speedup compared to the OpenCL kernel for different input sizes.
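The two-phase structure can be sketched as follows (local scans standing in for the in-register reductions; a structural sketch, not the CM kernel, and phase 2 is written sequentially for clarity):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Two-phase inclusive prefix sum: each "thread" scans its chunk
// locally (in registers, in the CM kernel), then every chunk adds
// the running total of all preceding chunks.
inline std::vector<int> prefix_sum(const std::vector<int>& in,
                                   std::size_t chunk) {
    std::vector<int> out(in.size());
    // Phase 1: independent local inclusive scans.
    for (std::size_t base = 0; base < in.size(); base += chunk) {
        int acc = 0;
        std::size_t end = std::min(base + chunk, in.size());
        for (std::size_t i = base; i < end; ++i)
            out[i] = (acc += in[i]);
    }
    // Phase 2: propagate chunk totals left to right; out[base-1] is
    // already the global total through the previous chunk.
    for (std::size_t base = chunk; base < in.size(); base += chunk) {
        int offset = out[base - 1];
        std::size_t end = std::min(base + chunk, in.size());
        for (std::size_t i = base; i < end; ++i)
            out[i] += offset;
    }
    return out;
}
```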
### VI-B Productivity
Programmability is a common concern for the adoption of close-to-the-metal
programming models, as one must carefully weigh their performance advantages
against the potential developer productivity loss due to the ramp-up overhead
and a lower level of abstraction. CM has been extensively used for high-
performance library development inside Intel, however, and user experiences
overwhelmingly suggest that programmers are much more productive using CM once
performance tuning efforts are considered. During the early stages of kernel
development for Intel’s deep learning neural network libraries, there was an
intense debate on the choice of programming model. To ensure a fair
comparison, a team of GPU compute architects implemented several key kernels
in both OpenCL and CM. The architects in the study have years of experience
developing workloads in both models for Intel GPUs. Table I details the
development efforts as well as the performance achieved by both programming
models. Development effort is measured as the amount of work performed to
implement each kernel from scratch and meet the minimal performance
requirement. Performance data are collected on a simulator for a future GPU
platform and thus not included in the evaluation earlier in this section.
Performance speedup is calculated as
$\frac{OpenCL\\_exec\\_time}{CM\\_exec\\_time}$.
TABLE I: Development effort and performance comparison. Kernel | OCL effort (person-week) | CM effort (person-week) | Performance (OCL/CM)
---|---|---|---
Systolic GEMM | 8 | 3 | 1.09x
DGEMM and SGEMM | 12 | 4 | 1.06$\sim$1.09x
Conv. 1x1 | 4 | 4 | 1.08x
Conv. 3x3 | 15 | 4 | 1.3x
Stencil2D | 2$\sim$3 | 1 | 2.2x
Table I shows that for these deep learning kernels CM yields 2-3x higher
productivity than OpenCL on average while achieving better performance. The
study found that developers could deliver functional OpenCL kernels quickly,
but the initial versions’ performance was often far below the desired targets.
During the subsequent performance tuning, they had to spend considerable
effort fighting the programming model and the compiler to obtain the
desired assembly code. To achieve the best performance, developers need to
control multiple aspects of kernel behavior including register usage, data
sharing, latency hiding, copy coalescing, and bank conflict avoidance. The
SIMT abstraction makes it difficult for even expert GPU programmers to control
a kernel’s full optimization needs, and their OpenCL implementation suffers
from poor performance predictability; an innocuous one-line change could
result in significant variation in generated code if it causes the kernel to
spill or copy moves to not be coalesced. On the contrary, CM allows users to
manage critical machine resource explicitly to instruct the compiler to
generate expected code sequence. The first working CM version is frequently
able to approach or sometimes even exceed the performance target, thus greatly
reducing the need for intensive tuning and rewrites later.
## VII Conclusions
This paper presents C-for-Metal, a high-level yet close-to-the-metal
programming language for Intel GPUs. Major features are illustrated for how to
expose underlying hardware capabilities: vector/matrix variables represent
registers and express SIMD parallelism, select operation maps to register
regioning, block read/write enables efficient memory access, and divergent
control flow constructs allow for mixing SIMT and SIMD models. We evaluate
several applications and their experimental results show that the performance
gap between CM and OpenCL can be significant, ranging from 20% to over 100%.
This paper is not meant to be an attack on SIMT programming models; they are
popular on GPUs for a reason and several of the authors are active
contributors to Intel’s OpenCL compiler. Rather, we have shown that the
convenience of the SIMT abstraction carries a performance cost that can be
difficult to overcome even with expert programming. A programming model that
is natively designed to harvest hardware capabilities fully thus fills an
essential void, and this metal-level expressiveness is especially important
for performance-critical applications.
CM is positioned as a low-level programming tool for Intel GPUs. Different
languages’ front ends have started using CM as their back end. For instance,
DPC++-ESIMD [41] integrates some CM language features into DPC++, and ISPC
[42] also generates CM vector intrinsics and relies on CM optimizations and
code generation. Moreover, given the rising importance of vector and matrix
data types for neural-network programming, we foresee that IR extensions
similar to our rdregion and wrregion may be added into LLVM for other target
machines.
## Acknowledgment
We thank many colleagues who supported the CM compiler project and contributed
to its development over the past years, including Tim Corringham, Zhenying
Liu, Wei Pan, Tim Renouf, David Stuttard, and Stephen Thomas. We also thank
the anonymous reviewers for their suggestions and comments.
## Appendix A Artifact Appendix
### A-A Abstract
Our artifact contains the implementation of the CM compiler (CMC) as well as
the applications and benchmarks used in the experimental evaluation section.
We provide the required scripts to compile and execute the benchmarks, which
allows the reproducibility of our results on any system with Intel Gen9
(Skylake) GPU or above.
### A-B Artifact Meta-Information
* •
Program: The CM compiler implemented in C++; CM applications; OpenCL
applications (all sources and binaries included).
* •
Compilation: With provided scripts via gcc/g++.
* •
Data set: Applications use input data sets included either as separated files
or generated at runtime. For the former case, they are located in each
application directory.
* •
Run-time environment: Linux Ubuntu 18.04 or above, CM runtime and OpenCL
runtime.
* •
Hardware: Intel Gen9 GPU or above.
* •
Output: Performance results in text files for every application evaluated with
CM and OpenCL.
* •
Publicly available: The CM compiler as well as all the CM and OpenCL examples
are publicly available except from those listed in the productivity section
(section 6.1).
* •
Code license: The Intel(R) CM compiler and examples are distributed under the
MIT license.
### A-C Description
#### A-C1 How Delivered
The CM compiler is available on Github: https://github.com/intel/cm-compiler.
The CM and OpenCL examples, as well as scripts to build and run all the
benchmarks are available on https://github.com/jfuentes/C-for-Metal_CGO2021.
Binaries of the CM compiler and benchmarks are also included in the artifact
repository.
#### A-C2 Hardware Dependencies
We recommend running the benchmarks on an Intel Gen11 GPU (Icelake); however,
any other Intel GPU at Gen9 (Skylake) or above should give similar results. Notice
that due to hardware configuration differences, further application-specific
tuning may be required to achieve peak performance on different Gen platforms.
#### A-C3 Software Dependencies
This artifact was prepared using Ubuntu 18.04. Similar Linux distributions
should also work. The artifact repository contains the CM compiler build and
its dependencies to compile all the benchmarks. To build the CM and IGC
compilers from sources, specific details about dependencies and how to build
them can be found in their repositories:
* •
CMC: https://github.com/intel/cm-compiler
* •
IGC: https://github.com/intel/intel-graphics-compiler
To run the benchmarks the CM runtime and OpenCL runtime are required, which
can be found in their repositories:
* •
CM runtime: https://github.com/intel/media-driver
* •
OpenCL oneAPI Level Zero Runtime: https://github.com/intel/compute-runtime
### A-D Installation
First, install the basic dependencies for this artifact: g++, git, make, cmake
and jansson.
$ sudo apt install g++ git git-lfs make cmake libjansson-dev
#### A-D1 CM Compiler, Runtime and Benchmarks
Download the artifact repository. It contains a build of the CM compiler and
all the benchmarks. If building the CM compiler from sources is preferred,
visit the CM compiler repository for more details
(https://github.com/intel/cm-compiler). Also, notice that some application
files are stored via Git LFS, so make sure they are downloaded properly.
$ git clone https://github.com/jfuentes/C-for-Metal_CGO2021
$ cd C-for-Metal_CGO2021
$ git lfs pull
Now, we need to build and install the media driver which contains the CM
runtime needed to run CM applications. Install prerequisites:
$ sudo apt install autoconf libtool libdrm-dev \
    xorg-dev openbox libx11-dev libgl1-mesa-glx \
    libgl1-mesa-dev xutils-dev
Build and install libva:
$ git clone https://github.com/intel/libva.git
$ cd libva
$ ./autogen.sh --prefix=/usr --libdir=/usr/lib/x86_64-linux-gnu
$ make
$ sudo make install
Finally, build the media driver:
$ git clone https://github.com/intel/media-driver.git
$ git clone https://github.com/intel/gmmlib.git
$ mkdir build_media && cd build_media
$ cmake ../media-driver/
$ make -j8
$ sudo make install
Notice that at this point you might need to set the path of the driver and
make sure the path for dynamic libraries is set:
$ export LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri
$ export LIBVA_DRIVER_NAME=iHD
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
#### A-D2 OpenCL Compiler (IGC) and Runtime for Intel GPU
To install IGC and NEO runtime download the packages and follow the
instructions from the compute runtime repository at
https://github.com/intel/compute-runtime/releases.
Then, install OpenCL headers:
$ git clone https://github.com/KhronosGroup/OpenCL-Headers.git
$ cd OpenCL-Headers
$ sudo mv CL/ /usr/include/
Additionally, you need to install the OpenCL C++ headers. Follow the
installation steps from https://github.com/KhronosGroup/OpenCL-CLHPP.
Finally, install the OpenCL Installable Client Driver (ICD):
$ git clone https://github.com/KhronosGroup/OpenCL-ICD-Loader.git
$ cd OpenCL-ICD-Loader
$ mkdir build && cd build
$ cmake ..
$ make
$ sudo make install
### A-E Experiment Workflow
Once the above packages are installed, all the CM and OCL benchmarks can be
built. From the artifact repository directory, simply run:
$ cd benchmarks
$ sh build_CM_all.sh
$ sh build_OCL_all.sh
The above commands will generate both the kernel binaries and host executables
for every benchmark. Notice that since CM compilation is performed offline,
the build script will ask for the GPU platform you are compiling for (SKL,
ICL, etc.). Then, run the benchmarks:
$ sh run_CM_all.sh
$ sh run_OCL_all.sh
### A-F Evaluation and Expected Result
Once the benchmarks are finished, performance results are reported to the
standard output as well as text files located in the results directory. For
each benchmark the kernel execution time and total execution time are
reported. Performance results are in milliseconds and organized by input data.
## References
* [1] J. Nickolls, I. Buck, M. Garland, and K. Skadron, “Scalable parallel programming with CUDA,” _Queue_ , vol. 6, no. 2, pp. 40–53, 2008.
* [2] A. Munshi, “The OpenCL specification,” in _2009 IEEE Hot Chips 21 Symposium (HCS)_. IEEE, 2009, pp. 1–314.
* [3] C. Lattner and V. Adve, “LLVM: A compilation framework for lifelong program analysis & transformation,” in _International Symposium on Code Generation and Optimization, 2004. CGO 2004._ IEEE, 2004, pp. 75–86.
* [4] Intel Corporation, “Intel(R) Graphics Compute Runtime for oneAPI Level Zero and OpenCL(TM) Driver,” https://github.com/intel/compute-runtime, 2020\.
* [5] ——, _oneAPI Level Zero Specification_ , 2020. [Online]. Available: https://spec.oneapi.com/level-zero/latest/index.html
* [6] ——, “C-for-Metal Compiler,” https://github.com/intel/cm-compiler, 2019\.
* [7] ——, _Intel Subgroup Extension Specification_ , 2016. [Online]. Available: https://www.khronos.org/registry/OpenCL/extensions/intel/cl_intel_subgroups.html
* [8] S. Rul, H. Vandierendonck, J. D’Haene, and K. De Bosschere, “An experimental study on performance portability of OpenCL kernels,” in _Application Accelerators in High Performance Computing, 2010 Symposium, Papers_ , 2010, p. 3. [Online]. Available: http://saahpc.ncsa.illinois.edu/papers/paper_2.pdf
* [9] P. Du, R. Weber, P. Luszczek, S. Tomov, G. Peterson, and J. Dongarra, “From CUDA to OpenCL: Towards a performance-portable solution for multi-platform GPU programming,” _Parallel Computing_ , vol. 38, no. 8, pp. 391–407, 2012\.
* [10] S. J. Pennycook, S. D. Hammond, S. A. Wright, J. Herdman, I. Miller, and S. A. Jarvis, “An investigation of the performance portability of OpenCL,” _Journal of Parallel and Distributed Computing_ , vol. 73, no. 11, pp. 1439–1450, 2013.
* [11] Y. Zhang, M. Sinclair, and A. A. Chien, “Improving performance portability in opencl programs,” in _Supercomputing_ , J. M. Kunkel, T. Ludwig, and H. W. Meuer, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 136–150.
* [12] T. L. Falch and A. C. Elster, “Machine learning based auto-tuning for enhanced opencl performance portability,” in _2015 IEEE International Parallel and Distributed Processing Symposium Workshop_ , 2015, pp. 1231–1240.
* [13] J. Fang, A. L. Varbanescu, and H. Sips, “A comprehensive performance comparison of cuda and opencl,” in _Proceedings of the 2011 International Conference on Parallel Processing_ , ser. ICPP ’11. USA: IEEE Computer Society, 2011, p. 216–225. [Online]. Available: https://doi.org/10.1109/ICPP.2011.45
* [14] Intel Corporation, _Intel Intrinsics Guide_ , 2020. [Online]. Available: https://software.intel.com/sites/landingpage/IntrinsicsGuide/
* [15] C++ Standards Committee, _Data-parallel vector library_ , 2020. [Online]. Available: https://en.cppreference.com/w/cpp/experimental/simd
* [16] A. Pohl, B. Cosenza, M. A. Mesa, C. C. Chi, and B. Juurlink, “An Evaluation of Current SIMD Programming Models for C++,” in _Proceedings of the 3rd Workshop on Programming Models for SIMD/Vector Processing_ , ser. WPMVP ’16. New York, NY, USA: Association for Computing Machinery, 2016. [Online]. Available: https://doi.org/10.1145/2870650.2870653
* [17] Intel Corporation, _Intel oneAPI Data Parallel C++_ , 2020. [Online]. Available: https://software.intel.com/en-us/oneapi/dpc-compiler
* [18] J. Rose, “C*: An extended c language for data parallel programming,” in _Proceedings of the Second International Conference on Supercomputing_ , 1987\.
* [19] R. Leißa, S. Hack, and I. Wald, “Extending a C-like language for portable SIMD programming,” _ACM SIGPLAN Notices_ , vol. 47, no. 8, pp. 65–74, 2012\.
* [20] C. Lattner, M. Amini, U. Bondhugula, A. Cohen, A. Davis, J. Pienaar, R. Riddle, T. Shpeisman, N. Vasilache, and O. Zinenko, “MLIR: A Compiler Infrastructure for the End of Moore’s Law,” 2020.
* [21] LLVM Community, _Multi-Level IR Compiler Framework - Vector Dialect_ , 2020\. [Online]. Available: https://mlir.llvm.org/docs/Dialects/Vector
* [22] L. Truong, R. Barik, E. Totoni, H. Liu, C. Markley, A. Fox, and T. Shpeisman, “Latte: a language, compiler, and runtime for elegant and efficient deep neural networks,” in _Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation_ , 2016, pp. 209–223.
* [23] N. Rotem, J. Fix, S. Abdulrasool, G. Catron, S. Deng, R. Dzhabarov, N. Gibson, J. Hegeman, M. Lele, R. Levenstein _et al._ , “Glow: Graph lowering compiler techniques for neural networks,” _arXiv preprint arXiv:1805.00907_ , 2018.
* [24] C. Chiw, G. Kindlmann, J. Reppy, L. Samuels, and N. Seltzer, “Diderot: a parallel DSL for image analysis and visualization,” in _Proceedings of the 33rd ACM SIGPLAN conference on Programming Language Design and Implementation_ , 2012, pp. 111–120.
* [25] J. Fuentes, W.-Y. Chen, G.-Y. Lueh, and I. D. Scherson, “A lock-free skiplist for integrated graphics processing units,” in _2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)_. IEEE, 2019, pp. 36–46.
* [26] J. Fuentes, W.-y. Chen, G.-y. Lueh, A. Garza, and I. D. Scherson, “SIMD-node Transformations for Non-blocking Data Structures,” in _Parallel Processing and Applied Mathematics_. Cham: Springer International Publishing, 2020, pp. 385–395.
* [27] W.-Y. Chen, G.-Y. Lueh, P. Ashar, K. Chen, and B. Cheng, “Register allocation for Intel processor graphics,” in _Proceedings of the 2018 International Symposium on Code Generation and Optimization_ , 2018, pp. 352–364.
* [28] B. Coutinho, D. Sampaio, F. M. Q. Pereira, and W. Meira Jr, “Divergence analysis and optimizations,” in _2011 International Conference on Parallel Architectures and Compilation Techniques_. IEEE, 2011, pp. 320–329.
* [29] Lin, Yuan and Grover, Vinod, _Using CUDA Warp-Level Primitives_ , 2018. [Online]. Available: https://devblogs.nvidia.com/using-cuda-warp-level-primitives/
* [30] Khronos OpenCL Working Group, _The OpenCL Extension Specification_ , 2018\. [Online]. Available: https://www.khronos.org/registry/OpenCL/sdk/2.0/docs/man/xhtml/cl_khr_subgroups.html
* [31] J. Anantpur and G. R., “Taming control divergence in gpus through control flow linearization,” in _Compiler Construction_ , A. Cohen, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014, pp. 133–153.
* [32] T. D. Han and T. S. Abdelrahman, “Reducing branch divergence in gpu programs,” in _Proceedings of the Fourth Workshop on General Purpose Processing on Graphics Processing Units_ , 2011, pp. 1–8.
* [33] A. Chandrasekhar, G. Chen, P.-Y. Chen, W.-Y. Chen, J. Gu, P. Guo, S. H. P. Kumar, G.-Y. Lueh, P. Mistry, W. Pan _et al._ , “IGC: the open source Intel Graphics Compiler,” in _2019 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)_. IEEE, 2019, pp. 254–265.
* [34] C. Lattner, “LLVM and Clang: Next generation compiler technology,” in _The BSD conference_ , vol. 5, 2008.
* [35] Intel Corporation, _Intel SDK for OpenCL Applications_ , 2019. [Online]. Available: https://software.intel.com/en-us/opencl-sdk/training#codesamples
* [36] J.-D. Lee and K. E. Batcher, “A bitonic sorting network with simpler flip interconnections,” in _Proceedings Second International Symposium on Parallel Architectures, Algorithms, and Networks (I-SPAN’96)_. IEEE, 1996, pp. 104–109.
* [37] K. Alsabti, S. Ranka, and V. Singh, “An efficient k-means clustering algorithm,” 1997.
* [38] M. Harris, _An efficient matrix transpose in CUDA C/C++_ , 2013. [Online]. Available: https://devblogs.nvidia.com/efficient-matrix-transpose-cuda-cc
* [39] L. Kong and R. Ioffe, _SGEMM for Intel® Processor Graphics_ , 2015. [Online]. Available: https://software.intel.com/en-us/articles/sgemm-for-intel-processor-graphics
* [40] G. E. Blelloch, “Scans as primitive parallel operations,” _IEEE Transactions on computers_ , vol. 38, no. 11, pp. 1526–1538, 1989.
* [41] Intel Corporation, “Explicit SIMD Programming Extension for DPC++,” https://github.com/intel/llvm/blob/sycl/sycl/doc/extensions/ExplicitSIMD/dpcpp-explicit-simd.md, 2020\.
* [42] ——, “ISPC for Gen,” https://ispc.github.io/ispc_for_gen.html, 2020\.
# Non-tautological Hurwitz cycles
Carl Lian Institut für Mathematik, Humboldt-Universität zu Berlin, 12489
Berlin, Germany<EMAIL_ADDRESS>https://sites.google.com/view/carllian
###### Abstract.
We show that various loci of stable curves of sufficiently large genus
admitting degree $d$ covers of positive genus curves define non-tautological
algebraic cycles on $\overline{\mathcal{M}}_{g,N}$, assuming the non-vanishing
of the $d$-th Fourier coefficient of a certain modular form. Our results build
on those of Graber-Pandharipande and van Zelm for degree 2 covers of elliptic
curves; the main new ingredient is a method to intersect the cycles in
question with boundary strata, as developed recently by Schmitt-van Zelm and
the author.
## 1\. Introduction
### 1.1. Tautological classes on moduli spaces of curves
The Chow $A^{*}(\overline{\mathcal{M}}_{g,n})$ and cohomology
$H^{*}(\overline{\mathcal{M}}_{g,n})$ rings of moduli spaces of stable pointed
curves are central objects of enumerative geometry. While both objects are
extremely complicated and likely impossible to understand completely, Mumford
[Mum83] initiated a study of certain tautological classes on
$\overline{\mathcal{M}}_{g,n}$ that appear in many natural geometric
situations and are largely computable in practice.
By definition, the tautological rings
$R^{*}(\overline{\mathcal{M}}_{g,n})\subset
A^{*}(\overline{\mathcal{M}}_{g,n})$ form the smallest system of subrings
containing the $\psi$ and $\kappa$ classes and closed under all pushforwards
by forgetful morphisms
$\pi:\overline{\mathcal{M}}_{g,n+1}\to\overline{\mathcal{M}}_{g,n}$ and
boundary morphisms
$\xi_{\Gamma}:\overline{\mathcal{M}}_{\Gamma}\to\overline{\mathcal{M}}_{g,n}$.
Moreover, additive generators for the tautological ring and formulas for their
intersections may be given combinatorially, see [GP03, Appendix A]. A
conjecturally complete set of relations is given by Pixton’s relations, see
[PPZ15].
Many cohomology classes on moduli spaces of curves arising in geometry turn
out to be tautological. For example, using techniques of Gromov-Witten theory,
Faber-Pandharipande [FP05] show that loci of curves admitting maps to
$\mathbb{P}^{1}$ with prescribed ramification profiles are tautological.
We review the theory of tautological classes in §2.1.
### 1.2. Non-tautological classes from Hurwitz cycles
In contrast to the result of [FP05], it was first shown by Graber-
Pandharipande [GP03] that certain loci of curves admitting double covers of
positive genus curves are non-tautological. For example:
###### Theorem 1.1.
[GP03, Theorem 2] The locus of pointed curves
$[X,x_{1},\ldots,x_{20}]\in\overline{\mathcal{M}}_{2,20}$ such that there
exists a 2-to-1 cover $f:X\to E$, where $E$ is a genus 1 curve, and
$f(x_{2i-1})=f(x_{2i})$ for $i=1,\ldots,10$, is non-tautological.
More recently, this result was extended by van Zelm:
###### Theorem 1.2.
[vZ18, Theorem 1] Suppose that $g\geq 2$ and $g+m\geq 12$. Then, the locus of
pointed curves in $\overline{\mathcal{M}}_{g,2m}$ admitting a double cover of
an elliptic curve with $m$ pairs of conjugate points is non-tautological.
In particular, when $g\geq 12$, one obtains non-tautological classes on
$\overline{\mathcal{M}}_{g}$.
The method is as follows: suppose first that $g+m=12$. Then, consider the
boundary stratum
$\xi:\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}\to\overline{\mathcal{M}}_{g,2m}$
gluing together $g-1$ pairs of points on opposite components. By [GP03,
Proposition 1] (see also Proposition 2.2), the pullback of any tautological
class to a boundary stratum has Künneth decomposition (in cohomology) into
tautological classes. However, a combinatorial calculation shows that the
pullback of the pointed bielliptic class is a non-zero multiple of the class
of the diagonal
$\overline{\mathcal{M}}_{1,11}\to\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$,
which cannot have tautological Künneth decomposition owing to the existence of
odd cohomology on $\overline{\mathcal{M}}_{1,11}$, see §2.2.
When $g+m>12$, one can induct on $g$ and use the same criterion with different
boundary strata to conclude, see [vZ18, Lemma 12].
### 1.3. New results
The goal of this paper is to extend these results further to loci of curves
(Hurwitz cycles) admitting branched covers of arbitrary degree and arbitrary
(positive) target genus. More precisely, let $\overline{\mathcal{H}}_{g/h,d}$
denote the moduli space (Hurwitz space) of Harris-Mumford admissible covers
$f:X\to Y$ of degree $d$, where $X,Y$ have genus $g,h$, respectively, and let
$\phi:\overline{\mathcal{H}}_{g/h,d}\to\overline{\mathcal{M}}_{g}$ be the map
remembering the curve $X$ (possibly with non-stable components contracted). We
review the theory of admissible covers in §2.3 and §2.4.
We expect the following:
###### Conjecture 1.
Suppose $h\geq 1$ and $d\geq 2$. Then, for all sufficiently large $g$
depending on $h$ and $d$, the class
$\phi_{*}([\overline{\mathcal{H}}_{g/h,d}])\in
H^{*}(\overline{\mathcal{M}}_{g})$ is non-tautological.
Our methods ultimately fall short of proving Conjecture 1 in full in the
following two ways. First, we require a mild condition on $d$ (independent of
$g,h$) given by the non-vanishing of the $d$-th Fourier coefficient of a
certain modular form. Second, in order for the admissible covers appearing in
the pullbacks of our Hurwitz cycles by boundary strata to have the desired
topological types, we will need to add additional marked points on our covers
satisfying the condition that their images are equal. This is analogous to the
situation of [GP03, vZ18], but in contrast we are not able in general to
remove all of the marked points for sufficiently large $g$.
Let $\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d},n}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+n})$ be the locus of genus $g$
curves admitting a degree $d$ cover of a genus $h$ curve, with $m_{2}$ marked
pairs and $m_{d}$ marked $d$-tuples of points with equal image, along with $n$
marked ramification points. More precisely, let
$\overline{\mathcal{H}}_{g/h,d,m_{2}+m_{d}}$ be the Harris-Mumford space
parametrizing covers $f:X\to Y$ as in $\overline{\mathcal{H}}_{g/h,d}$, with
the data of $m_{2}+m_{d}$ additional marked points on $Y$ and their pre-images
on $X$, and let $\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d},n}$ be
the class obtained by pushing forward the fundamental class by the map
remembering $X$ with the desired marked points. (See also §2.3.) We then have
the following.
###### Theorem 1.3.
Consider the modular form of weight 24
$\eta(q)^{48}=q^{2}\prod_{\ell\geq 1}(1-q^{\ell})^{48}=\sum_{d\geq
2}a_{d}q^{d}$
and fix $d$ such that $a_{d}\neq 0$.
Then, the class $\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d},n}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+n})$ is non-tautological in the
following cases.
* •
$h=1$: $g\geq 2$ and $g+m_{2}\geq 12$
* •
$h>1$, $d=2$: $g\geq 2h$, $g+m_{2}\geq 2h+10$, and $m_{2}\geq 1$
* •
$h>1$, $d>2$: $g\geq d(h-1)+2$, $g+m_{2}+m_{d}\geq(2d-3)(h-1)+12$, and
$m_{d}\geq(d-3)(h-1)+1$
The question of non-vanishing of the Fourier coefficients of $\eta(q)^{48}$
appears to be difficult. As to the related question of the non-vanishing of
the Ramanujan tau function $\tau(d)$, that is, the Fourier coefficients of
$\eta(q)^{24}$, an old conjecture of Lehmer [Leh47] predicts that $\tau(d)\neq
0$; it is known that Lehmer’s conjecture holds for $d\lesssim 8\cdot 10^{23}$
[DvHZ13].
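Though not part of any argument in this paper, the coefficients $a_{d}$ are easy to compute numerically by truncating the product expansion. The sketch below (our own helper, not taken from the literature) recovers both $\tau(d)$ and $a_{d}$, and checks the convolution identity $a_{d}=\sum_{i}\tau(i)\tau(d-i)$ coming from $\eta^{48}=(\eta^{24})^{2}$.

```python
def eta_power_coeffs(power, n_max):
    """Fourier coefficients of eta(q)^power = q^(power/24) * prod_{l>=1} (1-q^l)^power,
    returned as {d: coefficient} for d <= n_max. Assumes 24 divides power."""
    shift = power // 24              # eta^24 starts at q^1, eta^48 at q^2
    deg = n_max - shift
    c = [0] * (deg + 1)
    c[0] = 1
    for l in range(1, deg + 1):
        for _ in range(power):       # multiply by (1 - q^l), truncated at degree deg
            for d in range(deg, l - 1, -1):
                c[d] -= c[d - l]
    return {d + shift: c[d] for d in range(deg + 1)}

tau = eta_power_coeffs(24, 9)        # Ramanujan tau: 1, -24, 252, -1472, ...
a = eta_power_coeffs(48, 9)          # coefficients a_d of eta^48
```

For instance, one finds $a_{2}=1$, $a_{3}=-48$, $a_{4}=1080$.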
Because, by definition, tautological classes in singular cohomology are the
images of tautological classes in Chow, the corresponding Chow classes are
also non-tautological. We also obtain immediately a generalization of [vZ18,
Theorem 2] and [GP03, Theorem 3] to the open loci on
$\overline{\mathcal{M}}_{g,2m_{2}}$ of $d$-elliptic curves for $d$ arbitrary
when $g+m_{2}=12$, see Corollary 5.6.
In order to apply the criterion of Graber-Pandharipande to prove Theorem 1.3,
one needs a sufficiently robust way to compute pullbacks of Hurwitz cycles on
$\overline{\mathcal{M}}_{g,N}$ to boundary strata. The development of this
method was initiated in the work of Schmitt-van Zelm [SvZ18] (see also §3.1)
for Galois Hurwitz cycles, and was incorporated into a theory of
H-tautological classes on moduli spaces of admissible Galois covers in [L20b].
In particular,
this new framework allows for the intersection of arbitrary (Harris-Mumford)
admissible cover cycles with boundary strata, see [L20b, §6], which we review
in §3.2.
### 1.4. Summary of proof
The proof of Theorem 1.3 proceeds by induction on $g$ and $h$ in three main
steps. We may reduce to the case $n=0$ (in all three steps, see Lemma 4.1) and
$m_{d}=0$ (in steps 1 and 2, see Lemma 4.2).
* •
Step 1 ($h=1,g+m_{2}=12$): We pull back the cycle
$\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}})$ to the boundary stratum
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$ obtained by
gluing $g-1$ pairs of nodes on the two elliptic components together. We find
in Lemmas 5.2 and 5.3 that the only possibly non-tautological contributions in
the pullback come from pairs of isogenies $X_{1}\to Y_{1}$,
$X^{\prime}_{1}\to Y_{1}$ of total degree $d$ over a common target.
Then, the contribution from odd classes on $\overline{\mathcal{M}}_{1,11}$ is
governed by a Hecke-type operator on $H^{11}(\overline{\mathcal{M}}_{1,11})$,
which we compute using the description of the non-trivial classes in
$H^{11}(\overline{\mathcal{M}}_{1,11})$ in terms of the weight 12 cusp form
$\eta(q)^{24}$, see Lemma 5.4. Therefore, this contribution is non-zero if and
only if $a_{d}\neq 0$, and we conclude that
$\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}})$ is non-tautological for such $d$.
* •
Step 2 ($h=1,g+m_{2}>12$): We induct on $m_{2}$ and $g$ by pulling back
$\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}})$ to boundary divisors. The induction
on $m_{2}$ is addressed in Lemma 4.3 by pulling back to a divisor on
$\overline{\mathcal{M}}_{g,2(m_{2}+1)}$ of curves with a 2-pointed rational
tail, and the induction on $g$ is addressed in §5.2 by pulling back to a
divisor on $\overline{\mathcal{M}}_{g,2m_{2}}$ of curves with an elliptic
tail.
* •
Step 3 ($h>1$): We induct on $h$ by pulling back
$\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}})$ to a boundary stratum of
curves with $d$ elliptic tails attached to a spine of genus $g-d$; this is
carried out in §6. Here, we require the condition that the $d$ attachment
nodes appear in the same fiber of an admissible cover, which forces us to
introduce the parameter $m_{d}$.
### 1.5. Conventions
We work exclusively over $\mathbb{C}$. Cohomology groups are taken with
rational coefficients except when otherwise noted; we will also need to pass
to complex coefficients to study the odd cohomology class $\omega$ on
$\overline{\mathcal{M}}_{1,11}$ coming from the weight 12 modular form
$\eta(q)^{24}$. We will frequently identify homology and cohomology classes in
complementary degrees via Poincaré duality without mention. All curves are
assumed projective and connected with only nodes as singularities, except when
otherwise noted, and the genus of a curve refers to its arithmetic genus. All
moduli spaces are understood to be stacks, rather than coarse spaces.
### 1.6. Acknowledgments
We thank Alessio Cela, Gavril Farkas, Johan de Jong, Dan Petersen, Johannes
Schmitt, and Jason van Zelm for useful discussions related to this paper. This
project was completed with the support of an NSF Postdoctoral Fellowship,
grant DMS-2001976.
## 2\. Preliminaries
### 2.1. Tautological classes
We recall the standard definition:
###### Definition 2.1.
The tautological ring is the smallest system of subrings
$R^{*}(\overline{\mathcal{M}}_{g,n})\subset
A^{*}(\overline{\mathcal{M}}_{g,n})$ containing all $\psi$ and $\kappa$
classes and closed under pushforwards by all boundary morphisms
$\xi_{\Gamma}:\overline{\mathcal{M}}_{\Gamma}\to\overline{\mathcal{M}}_{g,n}$
(indexed by stable graphs $\Gamma$) and all forgetful morphisms
$\pi:\overline{\mathcal{M}}_{g,n+1}\to\overline{\mathcal{M}}_{g,n}$.
We also denote the image of the tautological ring in singular cohomology under
the cycle class map by $RH^{*}(\overline{\mathcal{M}}_{g,n})\subset
H^{*}(\overline{\mathcal{M}}_{g,n})$.
We will work primarily in singular cohomology. However, the Hurwitz classes we
will consider are all algebraic, and after we have proven that they are non-
tautological in cohomology, it is immediate by definition that they are also
non-tautological in Chow.
Additive generators for the tautological ring may be given in terms of
decorated boundary classes, which can be intersected explicitly in terms of
the combinatorics of dual graphs, see [GP03, Appendix A]. In particular, one
obtains the following criterion, which will be our primary tool for detecting
non-tautological classes.
###### Proposition 2.2.
[GP03, Proposition 1] Suppose that $\alpha\in
RH^{*}(\overline{\mathcal{M}}_{g,n})$ is a tautological class, and let
$\xi_{\Gamma}:\overline{\mathcal{M}}_{\Gamma}\to\overline{\mathcal{M}}_{g,n}$
be a boundary class. Then, on the space
$\overline{\mathcal{M}}_{\Gamma}=\prod_{v\in
V(\Gamma)}\overline{\mathcal{M}}_{g_{v},n_{v}}$, the pullback
$\xi_{\Gamma}^{*}\alpha$ has tautological Künneth decomposition (TKD), that
is,
$\xi_{\Gamma}^{*}\alpha\in\bigotimes_{v\in
V(\Gamma)}RH^{*}(\overline{\mathcal{M}}_{g_{v},n_{v}})\subset
H^{*}\left(\prod_{v\in V(\Gamma)}\overline{\mathcal{M}}_{g_{v},n_{v}}\right)$
In particular, when the Künneth decomposition of $\xi_{\Gamma}^{*}\alpha$
includes non-trivial contributions from odd cohomology, we may immediately
conclude that $\alpha$ is non-tautological, as the tautological ring lives in
even degree.
### 2.2. Cohomology of $\overline{\mathcal{M}}_{1,11}$
Following [GP03, vZ18], we will detect classes without TKD via the existence
of odd cohomology on $\overline{\mathcal{M}}_{1,11}$. Here, we collect the
facts that we will need.
###### Lemma 2.3.
[Pet14, Corollary 1.2] All even-degree classes (and hence, all algebraic
classes) on $\overline{\mathcal{M}}_{1,11}$ are tautological, that is,
$RH^{*}(\overline{\mathcal{M}}_{1,11})=H^{2*}(\overline{\mathcal{M}}_{1,11})$.
###### Lemma 2.4.
[vZ18, Lemma 8(i)] All algebraic classes on
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$ supported
on the boundary have TKD.
###### Lemma 2.5.
[Get98] The odd cohomology of $\overline{\mathcal{M}}_{1,11}$ is two-
dimensional, generated by the class of a holomorphic 11-form $\omega\in
H^{0}(\overline{\mathcal{M}}_{1,11},\Omega^{11}_{\overline{\mathcal{M}}_{1,11}})\subset
H^{11}(\overline{\mathcal{M}}_{1,11})$ and its conjugate.
One can write down the form $\omega$ explicitly, see [FP13, §2.3]. Complex-
analytically, the open locus $\mathcal{M}_{1,11}$ may be regarded as the open
subset of
$(\mathbb{H}\times\mathbb{C}^{10})/(\operatorname{SL}_{2}(\mathbb{Z})\ltimes(\mathbb{Z}^{2})^{10})$
obtained by removing the diagonals and zero-sections. In the semi-direct
product, $\operatorname{SL}_{2}(\mathbb{Z})$ acts on the factors
$\mathbb{Z}^{2}$ via the conjugate representation
$\begin{bmatrix}a&b\\\ c&d\end{bmatrix}\cdot(x,y)=\begin{bmatrix}a&-b\\\
-c&d\end{bmatrix}\begin{bmatrix}x\\\ y\end{bmatrix}.$
The group action is given by the formula
$\left(\begin{bmatrix}a&b\\\
c&d\end{bmatrix},\\{(x_{i},y_{i})\\}\right)\cdot(z,\\{\zeta_{i}\\})=\left(\frac{az+b}{cz+d},\left\\{\frac{\zeta_{i}}{cz+d}+x_{i}+y_{i}\cdot\frac{az+b}{cz+d}\right\\}\right)$
From here, one checks that
$\omega=\eta(e^{2\pi iz})^{24}dz\wedge d\zeta_{1}\wedge\cdots\wedge
d\zeta_{10},$
where $\eta(e^{2\pi iz})^{24}$ is the normalized discriminant cusp form of
weight 12 with Fourier expansion
$\eta(q)^{24}=q\prod_{\ell\geq 1}(1-q^{\ell})^{24},$
is a non-zero holomorphic 11-form on $\mathcal{M}_{1,11}$, and moreover
extends to $\overline{\mathcal{M}}_{1,11}$.
###### Remark 2.6.
The discriminant form $\eta(q)^{24}$ is often denoted simply $\Delta(q)$, but
we will avoid this notation, reserving the letter $\Delta$ for the diagonal
$\Delta:\overline{\mathcal{M}}_{1,11}\to\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$.
### 2.3. Hurwitz spaces and admissible covers
Let $\mathcal{H}_{g/h,d}$ denote the moduli space (Hurwitz space) of degree
$d$ simply ramified covers $f:X\to Y$, where $X,Y$ are smooth, connected, and
proper curves of genus $g,h$, and let $\overline{\mathcal{H}}_{g/h,d}$ be its
compactification by Harris-Mumford admissible covers, see [HM82].
Recall that, by definition, the branch points of $Y$ are also marked, and
the resulting marked curve is required to be stable. In addition, all pre-
images on $X$ of the marked points (including those that are not ramification
points) are marked, and the resulting curve is automatically stable.
Therefore, we get maps
$\textstyle{\overline{\mathcal{H}}_{g/h,d}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\phi}$$\scriptstyle{\delta}$$\textstyle{\overline{\mathcal{M}}_{g,N}}$$\textstyle{\overline{\mathcal{M}}_{h,b}}$
where $b=(2g-2)-d(2h-2)$ and $N=(d-1)b$. We refer to $\phi,\delta$ as the
source and target maps, respectively.
The target map $\delta$ is quasi-finite and surjective, with degree given by a
Hurwitz number, counting monodromy actions on the fibers of a degree $d$
cover. In particular, $\overline{\mathcal{H}}_{g/h,d}$ has dimension $3h-3+b$.
In addition, the map $\delta$ is unramified over $\mathcal{M}_{h,b}$, so that
$\mathcal{H}_{g/h,d}$ is smooth; in general, however, $\delta$ is ramified at
any cover ramified over at least one node.
More precisely, let $f:X\to Y$ be an admissible cover, and let
$\mathbb{C}[[t_{1},\ldots,t_{3h-3+b}]]$ be the complete local ring of the
marked target curve. Suppose further that $t_{1},\ldots,t_{n}$ are smoothing
parameters for the nodes $y_{1},\ldots,y_{n}$ of $Y$, and for $i=1,\ldots,n$,
let $t_{i,1},\ldots,t_{i,r_{i}}$ be smoothing parameters for the corresponding
nodes of $X$ above $y_{i}$, with ramification indices
$a_{i,1},\ldots,a_{i,r_{i}}$. Then, the complete local ring of
$\overline{\mathcal{H}}_{g/h,d}$ at $[f]$ is
$\mathbb{C}\left[\left[t_{1},\ldots,t_{3h-3+b},\\{t_{i,j}\\}^{1\leq i\leq
n}_{1\leq j\leq
r_{i}}\right]\right]/\left(t_{1}=t_{1,1}^{a_{1,1}}=\cdots=t_{1,r_{1}}^{a_{1,r_{1}}},\ldots,t_{n}=t_{n,1}^{a_{n,1}}=\cdots=t_{n,r_{n}}^{a_{n,r_{n}}}\right).$
In particular, $\overline{\mathcal{H}}_{g/h,d}$ is Cohen-Macaulay, but
singular at any cover with at least one nodal fiber containing more than one
ramification point.
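To illustrate the local structure in a toy case of ours (not drawn from [HM82]): suppose a single node $y_{1}$ of $Y$ has exactly two preimage nodes of ramification index $2$ (any unramified preimage nodes merely identify their smoothing parameter with $t_{1}$). Writing $x=t_{1,1}$ and $y=t_{1,2}$, eliminating $t_{1}$ from $t_{1}=x^{2}=y^{2}$ contributes a factor
$\mathbb{C}[[x,y]]/(x^{2}-y^{2}),$
an ordinary double point. By contrast, if the fiber over $y_{1}$ contains a single ramification point, the relation $t_{1}=t_{1,1}^{a_{1,1}}$ simply re-parametrizes a smooth branch, consistent with the singularity criterion above.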
#### 2.3.1. $\psi$ classes
###### Definition 2.7.
For any marked point $x_{i}$ on a source curve parametrized by
$\overline{\mathcal{H}}_{g/h,d}$, we define the corresponding $\psi$ class
$\psi_{i}\in A^{1}(\overline{\mathcal{H}}_{g/h,d})$ simply by the pullback of
the corresponding $\psi$ class from $\overline{\mathcal{M}}_{g,N}$.
In fact, the $\psi$ classes of points living in the same fiber are all
constant multiples of each other; more precisely, the $\psi$ class at $x\in X$
is equal to the pullback of the $\psi$ class at $f(x)\in Y$ by $\delta$,
divided by the ramification index at $x$, cf. [SvZ18, Lemma 3.9].
#### 2.3.2. Additional marked points
We will also need the following variant:
###### Definition 2.8.
For any $m\geq 0$, let $\overline{\mathcal{H}}_{g/h,d,m}$ be the space of
admissible covers where we mark $m$ points on the target curve $Y$ in addition
to the branch points (and still require that the resulting curve be stable),
along with their $md$ unramified pre-images.
#### 2.3.3. Hurwitz cycles
The cohomology classes we will be interested in come from pushing forward the
fundamental class of $\overline{\mathcal{H}}_{g/h,d,m}$ by the source map to
$\overline{\mathcal{M}}_{g,N}$, then forgetting marked points (and
stabilizing) to get a class on $\overline{\mathcal{M}}_{g,n}$, for $n\leq N$.
We will refer to such classes collectively as Hurwitz cycles. More precisely,
we have the following.
###### Definition 2.9.
Suppose that $m=m_{2}+m_{d}$ for $m_{2},m_{d}\geq 0$ and let $n$ be an integer
satisfying $0\leq n\leq b=(2g-2)-d(2h-2)$. Let
$\phi^{\prime}:\overline{\mathcal{H}}_{g/h,d,m}\to\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+n}$
be the map obtained by post-composing the usual source map $\phi$ with the map
forgetting all marked points except:
* •
2 points in each of $m_{2}$ of the marked unramified fibers,
* •
all $d$ points in the other $m_{d}$ marked fibers, and
* •
$n$ simple ramification points.
We then (abusing notation) define the Hurwitz cycle
$\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d},n}$ in
$A^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+n})$ or
$H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+n})$ by the pushforward of the
fundamental class of $\overline{\mathcal{H}}_{g/h,d,m}$ by $\phi^{\prime}$.
Similarly, we may define the open cycles
$\mathcal{H}_{g/h,d,(m_{2})^{2}(m_{d})^{d},n}$ by pullback of the Hurwitz
cycles, as defined above, to $\mathcal{M}_{g,2m_{2}+dm_{d}+n}$. (Note that the
source maps $\phi:\mathcal{H}_{g/h,d}\to\mathcal{M}_{g,N}$ are in general not
proper, so one cannot take this pushforward directly.)
Strictly speaking, one gets different cycles from different choices of points
to forget, but these cycles are related by automorphisms permuting the labels
of the marked points. In particular, the property of being tautological is
agnostic to these choices, so we suppress them. When any of $m_{2},m_{d},n$
are equal to zero, we may also suppress them from the notation when there is
no risk of confusion.
We will refer in the rest of this section, for notational convenience, to
spaces $\overline{\mathcal{H}}_{g/h,d}$ of admissible covers, but the
discussion carries over immediately to the setting where additional marked
points are added, or more generally where source curves are allowed to be
disconnected and/or with higher ramification.
#### 2.3.4. Boundary strata
We now describe boundary strata on $\overline{\mathcal{H}}_{g/h,d}$. Suppose
that $\Gamma,\Gamma^{\prime}$ are stable graphs parametrizing boundary strata
on $\overline{\mathcal{M}}_{g,N},\overline{\mathcal{M}}_{h,b}$, respectively.
###### Definition 2.10.
An admissible cover of stable graphs $\gamma:\Gamma\to\Gamma^{\prime}$ of
degree $d$ is given by a collection of maps on vertices, half-edges, and legs,
respectively:
$\displaystyle\gamma_{V}:V(\Gamma)$ $\displaystyle\to V(\Gamma^{\prime})$
$\displaystyle\gamma_{H}:H(\Gamma)$ $\displaystyle\to H(\Gamma^{\prime})$
$\displaystyle\gamma_{L}:L(\Gamma)$ $\displaystyle\to L(\Gamma^{\prime})$
compatible (in the obvious sense) with all of the attachment data, in addition
to the data of a degree $d_{v}$ at each $v\in V(\Gamma)$, and a (common)
ramification index $d_{e}$ at each $e\in E(\Gamma)$, such that:
* •
if $v\in V(\Gamma)$ and $h^{\prime}\in H(\Gamma^{\prime})$ is a half-edge
attached to $\gamma_{V}(v)$, then the sum of the ramification indices at the
half-edges attached to $v$ living over $h^{\prime}$ is equal to $d_{v}$, and
* •
if $v^{\prime}\in V(\Gamma^{\prime})$, then the sum of the degrees at vertices
living over $v^{\prime}$ is equal to $d$.
###### Remark 2.11.
The notion of an admissible cover of stable graphs is different from the
notion of an $A$-structure $A\to\Gamma$ on a stable graph, see [GP03, §A.2] or
[SvZ18, Definition 2.5], which captures the phenomenon of a stable curve with
dual graph $A$ degenerating to one with dual graph $\Gamma$. Either notion
could sensibly be referred to as a morphism of stable graphs; we avoid doing
so as not to cause confusion.
Let $\gamma:\Gamma\to\Gamma^{\prime}$ be a degree $d$ admissible cover of
stable graphs. Then, for each $v^{\prime}\in V(\Gamma^{\prime})$, let
$\overline{\mathcal{H}}_{v^{\prime}}$ be the moduli space of admissible covers
of the topological type given by the pre-image of $v^{\prime}$ in $V(\Gamma)$,
along with the data of the attached half-edges and legs. Note that such covers
will in general have disconnected targets and arbitrary ramification, but the
discussion above applies in this more general setting. We then get a boundary
stratum
$\xi_{(\Gamma,\Gamma^{\prime})}:\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}\to\overline{\mathcal{H}}_{g/h,d}$
by gluing the constituent admissible covers over each component of $Y$
according to the data of $\Gamma$ and $\Gamma^{\prime}$ (we have suppressed
the map $\gamma$ from the notation).
It is clear that the maps $\xi_{(\Gamma,\Gamma^{\prime})}$ are quasi-finite
and that their images cover the boundary of $\overline{\mathcal{H}}_{g/h,d}$.
The codimension of a boundary stratum is equal to the number of edges of
$\Gamma^{\prime}$, and their specializations to one another can be described
in terms of the combinatorics of the admissible covers of graphs (we will not
need such an explicit description).
#### 2.3.5. Separating nodes
We record here the following straightforward lemma.
###### Lemma 2.12.
Let $f:X\to Y$ be an admissible cover, and suppose that $x\in X$ is a
separating node. Then, $f(x)\in Y$ is a separating node.
### 2.4. Admissible Galois covers
As we have already noted, $\overline{\mathcal{H}}_{g/h,d}$ is in general
singular at the boundary. This will pose only minor problems for our purposes;
in some instances, however, we will need to pass, at least implicitly, to its
normalization.
Let $G$ be a finite group. Let $\overline{\mathcal{H}}_{g,G,\xi}$ be the
moduli space of admissible $G$-covers $f:X\to Y$ with monodromy $\xi$, where
$X$ is a stable curve of genus $g$ with a generically free $G$-action, and $f$
identifies $Y$ with the scheme-theoretic quotient $X/G$. (See [SvZ18, §3] for
detailed definitions.) Recall that we also have source and target maps
$\phi:\overline{\mathcal{H}}_{g,G,\xi}\to\overline{\mathcal{M}}_{g,N},\qquad\delta:\overline{\mathcal{H}}_{g,G,\xi}\to\overline{\mathcal{M}}_{h,b},$
where $h$ is the genus of $X/G$.
As in the Harris-Mumford setting, $\delta$ is quasi-finite, and is ramified at
$G$-covers with ramification at nodes. However,
$\overline{\mathcal{H}}_{g,G,\xi}$ is smooth of dimension $3h-3+b$, and the
map $\phi$ is in addition finite and unramified, see [SvZ18, Theorem 3.7].
As in the Harris-Mumford setting, we may define $\psi$ classes on
$\overline{\mathcal{H}}_{g,G,\xi}$ by pullback by $\phi$ (or, with a
correction factor, by $\delta$); here, any two $\psi$ classes at marked points
in the same $G$-orbit are equal.
#### 2.4.1. Boundary strata
Boundary strata
$\xi_{(\Gamma,G)}:\overline{\mathcal{H}}_{(\Gamma,G)}\to\overline{\mathcal{H}}_{g,G,\xi}$
on $\overline{\mathcal{H}}_{g,G,\xi}$ are indexed by admissible $G$-graphs
$(\Gamma,G)$, see [SvZ18, §3.4]. The space
$\overline{\mathcal{H}}_{(\Gamma,G)}$ is a product, indexed by the vertices
$v$ of the quotient graph $\Gamma/G$, of moduli spaces of admissible
$G_{v}$-covers, where $G_{v}\subset G$ is the stabilizer of any lift of $v$ to
$\Gamma$. However, it will later be convenient to regard these factors
equivalently as spaces of disconnected admissible $G$-covers whose components
are indexed by left cosets of $G_{v}$ in $G$.
#### 2.4.2. Normalization of the Harris-Mumford space
We now explain how $\overline{\mathcal{H}}_{g/h,d}$ may be normalized via
moduli of admissible Galois covers, see also [L20b, §1.3, §6.1]. Let $f:X\to
Y$ be a degree $d$ cover of smooth curves, and let $f_{0}:X_{0}\to Y_{0}$ be
the étale locus. Define
$\widetilde{f}_{0}:\widetilde{X}_{0}:=(X_{0}\times_{Y_{0}}\cdots\times_{Y_{0}}X_{0})-\Delta\to
Y_{0}$
given by taking the $d$-fold fiber product over $Y_{0}$ and removing all
diagonals,
and define $\widetilde{f}:\widetilde{X}_{0}\to Y$ to be the unique extension
of $\widetilde{f}_{0}$ to a map of smooth and proper curves. Then,
$\widetilde{f}$ is an $S_{d}$-Galois cover of smooth curves, and the data of
$f$ can be recovered from an $S_{d}$-cover $\widetilde{X}\to Y$ by defining
$X=\widetilde{X}/S_{d-1}$. Note, however, that $\widetilde{X}$ may not be
connected.
If, on the other hand, $f$ is an admissible cover, this construction in
general does not yield a map of stable curves. It may instead be carried out
over the components of $Y$ separately, but there will in general be multiple
ways to glue together the resulting maps to form an admissible $S_{d}$-cover
with the property that $X=\widetilde{X}/S_{d-1}$.
In any case, we obtain a map
$\nu:\widetilde{H}_{g/h,d}:=\overline{\mathcal{H}}_{\widetilde{g},S_{d},\xi}\to\overline{\mathcal{H}}_{g/h,d}$,
for appropriately chosen $\widetilde{g},\xi$ (note here that $\widetilde{g}$
will be a vector of integers, corresponding to the fact that the curves
$\widetilde{X}$ may be disconnected), defined by
$\nu([\widetilde{f}:\widetilde{X}\to Y])=[f:X\to Y].$
Then, $\nu$ is a normalization: indeed, one may identify
$\overline{\mathcal{H}}_{\widetilde{g},S_{d},\xi}$ with appropriate components
of the Abramovich-Corti-Vistoli space of twisted $G$-covers, see [SvZ18,
Remark 3.6], which normalizes the Harris-Mumford space, see [ACV03,
Proposition 4.2.2].
## 3\. Intersections of Hurwitz cycles with boundary strata
### 3.1. The Galois case
We first recall the main result of Schmitt-van Zelm concerning the
intersection of Galois Hurwitz loci with boundary strata on
$\overline{\mathcal{M}}_{g,N}$. We consider the pullback of
$\phi:\overline{\mathcal{H}}_{g,G,\xi}\to\overline{\mathcal{M}}_{g,N}$
by the boundary class
$\xi_{A}:\overline{\mathcal{M}}_{A}\to\overline{\mathcal{M}}_{g,N}$.
We have a Cartesian diagram [SvZ18, Proposition 4.3]
$\begin{array}{ccc}\displaystyle\coprod\overline{\mathcal{H}}_{(\Gamma,G)}&\xrightarrow{\;\xi_{(\Gamma,G)}\;}&\overline{\mathcal{H}}_{g,G,\xi}\\ {\scriptstyle\coprod\phi_{\alpha}}\big\downarrow&&\big\downarrow{\scriptstyle\phi}\\ \overline{\mathcal{M}}_{A}&\xrightarrow{\;\xi_{A}\;}&\overline{\mathcal{M}}_{g,N}\end{array}$
where the coproduct is over admissible $G$-graphs $(\Gamma,G)$ equipped with
an $A$-structure $\alpha:\Gamma\to A$ satisfying the genericity condition that
the induced map
$\alpha_{E}:E(A)\to E(\Gamma)/G$
from edges of $A$ to $G$-orbits of edges of $\Gamma$ is surjective. The
$A$-structures $\alpha:\Gamma\to A$ then naturally induce the maps
$\phi_{\alpha}$ on the left.
The normal bundle of $\overline{\mathcal{M}}_{A}$ in
$\overline{\mathcal{M}}_{g,N}$ is the direct sum of line bundle contributions
from the edges of $A$, and the normal bundle of
$\overline{\mathcal{H}}_{(\Gamma,G)}$ in $\overline{\mathcal{H}}_{g,G,\xi}$ is
the direct sum of line bundle contributions of $G$-orbits of edges of
$\Gamma$. By the excess intersection formula, we conclude:
###### Theorem 3.1.
[SvZ18, Theorem 4.9] With notation as above, we have
$\xi^{*}_{A}(\phi_{*}([\overline{\mathcal{H}}_{g,G,\xi}]))=\sum_{(\Gamma,G)}\phi_{\alpha*}\left(\prod_{(\ell,\ell^{\prime})}(-\psi_{\ell}-\psi_{\ell^{\prime}})\right),$
where $(\ell,\ell^{\prime})$ is a pair of half-edges comprising an edge, and
we range over edges of $\Gamma$ in the image of $E(A)\to E(\Gamma)$, excluding
a choice
of contributions from $G$-orbit representatives of $E(\Gamma)$.
More generally, if $G_{1}\subset G$ is any subgroup, one can compute the
pullback of the restriction map
$\operatorname{res}_{G_{1}}^{G}:\overline{\mathcal{H}}_{g,G,\xi}\to\overline{\mathcal{H}}_{g,G_{1},\xi^{\prime}}$
by any boundary class
$\xi_{(A,G_{1})}:\overline{\mathcal{H}}_{(A,G_{1})}\to\overline{\mathcal{H}}_{g,G_{1},\xi^{\prime}}$,
see [L20b, Proposition 4.13].
### 3.2. The Harris-Mumford case
Now, we consider the analogous question on the Harris-Mumford space
$\overline{\mathcal{H}}_{g/h,d}$ (or any of its variants). Let
$\xi_{A}:\overline{\mathcal{M}}_{A}\to\overline{\mathcal{M}}_{g,N}$ be a
boundary class as before.
###### Proposition 3.2.
We have a commutative diagram
$\begin{array}{ccc}\displaystyle\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}&\xrightarrow{\;\xi_{(\Gamma,\Gamma^{\prime})}\;}&\overline{\mathcal{H}}_{g/h,d}\\ {\scriptstyle\coprod\phi_{(\Gamma,\Gamma^{\prime})}}\big\downarrow&&\big\downarrow{\scriptstyle\phi}\\ \overline{\mathcal{M}}_{A}&\xrightarrow{\;\xi_{A}\;}&\overline{\mathcal{M}}_{g,N}\end{array}$
where the disjoint union is over boundary strata along with an $A$-structure
on $\Gamma$, with the genericity condition that the composite map
$E(A)\subset E(\Gamma)\to E(\Gamma^{\prime})$
is surjective.
Furthermore, the diagram is Cartesian on the level of closed points.
###### Proof.
The commutativity is clear. We construct the inverse map
$\left(\overline{\mathcal{H}}_{g/h,d}\times_{\overline{\mathcal{M}}_{g,N}}\overline{\mathcal{M}}_{A}\right)(\operatorname{Spec}(\mathbb{C}))\to\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}(\operatorname{Spec}(\mathbb{C})).$
Let $[f:X\to Y]$ be a point of $\overline{\mathcal{H}}_{g/h,d}$ with an
$A$-structure on the dual graph of $X$. Then, we get a natural stable graph
$\Gamma^{\prime}$ as follows. Let $E(\Gamma^{\prime})$ be the set of nodes to
which the nodes of $X$ corresponding to the edges of $A$ map. Let
$V(\Gamma^{\prime})$ be the set of connected components of the curve obtained
by deleting the nodes of $E(\Gamma^{\prime})$ from $Y$, and let
$L(\Gamma^{\prime})$ be the set of marked points of $Y$; together these define
a natural stable graph $\Gamma^{\prime}$ and a $\Gamma^{\prime}$-structure on
the dual graph of $Y$.
Now, let $E(\Gamma)$ be the set of nodes of $X$ living over
$E(\Gamma^{\prime})$, $V(\Gamma)$ be the set of components of the curve
obtained by deleting these nodes from $X$, and $L(\Gamma)$ be the set of
marked points of $X$. As before, we get a stable graph $\Gamma$, along with a
natural $\Gamma$-structure on the dual graph of $X$ and an $A$-structure on
$\Gamma$. The topology of $f$ also induces an admissible cover
$\gamma:\Gamma\to\Gamma^{\prime}$. The genericity condition above is visibly
satisfied, and we obtain from $f$ a point of
$\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$; it is
straightforward to check that we get the desired inverse. ∎
In general, the diagram in Proposition 3.2 will fail to be a functorial fiber
diagram on the level of stacks owing to non-reduced structure on the
intersection of $\phi$ and $\xi_{A}$. To see this, we compute the local
picture.
Let $t_{1},\ldots,t_{k}$ be deformation parameters corresponding to the edges
of $\Gamma^{\prime}$ (which in turn correspond to smoothing parameters of
nodes of $Y$), and let $t_{i,j}$ be deformation parameters for the edges of
$\Gamma$ living above $t_{i}$ (which in turn correspond to smoothing
parameters of nodes of $X$). Let $t_{k+1},\ldots,t_{3h-3+b}$ be deformation
parameters for $Y$ away from the chosen nodes. Then, recall from §2.3 that the
complete local ring of $\overline{\mathcal{H}}_{g/h,d}$ at $[f]$ may be
written as
$\mathbb{C}[\{t_{i}\},\{t_{ij}\}]/(t_{i}-t_{ij}^{a_{ij}})\otimes S,$
where $a_{ij}$ are the associated ramification indices and $S$ is the complete
local ring of $\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ at
the point obtained by deleting all of the nodes of $X$ and $Y$ corresponding
to the edges of $\Gamma$ and $\Gamma^{\prime}$.
The effect of pulling back by $\xi_{A}$, on the level of complete local rings,
kills all smoothing parameters corresponding to the nodes of $A$. In
particular, all of the variables $t_{i}$ are killed, and we are left with the
ring
$R_{(\Gamma,\Gamma^{\prime})}:=\mathbb{C}[\{t_{ij}\}]/(t_{ij}^{a_{ij}})\otimes S,$
where we now range over all $(i,j)$ not corresponding to an edge of $A$.
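In particular, the length of the Artinian factor $\mathbb{C}[\{t_{ij}\}]/(t_{ij}^{a_{ij}})$ is simply $\prod a_{ij}$, the product of the ramification indices. The following sketch (illustrative only; the function name is ours, not from the text) computes this length by enumerating the monomial basis $\{t^{e}:0\leq e_{ij}<a_{ij}\}$ of the quotient.

```python
from itertools import product

def quotient_length(exponents):
    """Length (= C-dimension) of C[t_1,...,t_r]/(t_1^{a_1},...,t_r^{a_r}),
    computed by listing the monomial basis {t^e : 0 <= e_i < a_i}."""
    basis = list(product(*(range(a) for a in exponents)))
    return len(basis)

# e.g. two ramification indices a_1 = 2, a_2 = 3 give length 6 = 2 * 3
print(quotient_length([2, 3]))  # 6
```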
In general, the complete local ring $R_{(\Gamma,\Gamma^{\prime})}$ is non-reduced, in which
case the functorial intersection of $\phi$ and $\xi_{A}$ is non-reduced with
underlying reduced space
$\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$. Each boundary
stratum $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ contributes
separately to the class
$\xi_{A}^{*}\phi_{*}([\overline{\mathcal{H}}_{g/h,d}])$; we now explain how to
compute this contribution.
###### Proposition 3.3.
With notation as above, consider the contribution of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ to the class
$\xi_{A}^{*}\phi_{*}([\overline{\mathcal{H}}_{g/h,d}])$:
1. (a)
Suppose that $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ has the
expected dimension. Then, its contribution to
$\xi_{A}^{*}\phi_{*}([\overline{\mathcal{H}}_{g/h,d}])$ is a non-zero multiple
of $\phi_{*}([\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}])$ on the
boundary stratum $\overline{\mathcal{M}}_{A}$.
2. (b)
If $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ is arbitrary, its
contribution to $\xi_{A}^{*}\phi_{*}([\overline{\mathcal{H}}_{g/h,d}])$ is the
pushforward by $\phi_{(\Gamma,\Gamma^{\prime})}$ of a polynomial in $\psi$
classes on $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ at half-edges
of $\Gamma$ (capped against the fundamental class of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$).
###### Proof.
If $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ has the expected
dimension, then its contribution to the intersection is the fundamental class
of a union of components with underlying reduced space
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ and multiplicity equal to
the length of $R_{(\Gamma,\Gamma^{\prime})}$, by the above discussion. This
gives part (a).
For part (b), we apply the excess intersection formula. Morally, the situation
is analogous to the Galois case, but because the spaces in question are
singular, we will need to pass to their normalizations as in §2.4.2.
Recall that the composite map
$\widetilde{\phi}:\widetilde{\mathcal{H}}_{g/h,d}\to\overline{\mathcal{M}}_{g,N}$ may be
factored as the composition of the restriction map
$\operatorname{res}^{S_{d}}_{S_{d-1}}:\overline{\mathcal{H}}_{\widetilde{g},S_{d},\xi}\to\overline{\mathcal{H}}_{\widetilde{g},S_{d-1},\xi^{\prime}}$
and the target map
$\delta:\overline{\mathcal{H}}_{\widetilde{g},S_{d-1},\xi^{\prime}}\to\overline{\mathcal{M}}_{g,N}$.
Consider the pullback of $\xi_{A}$ first by $\delta$. By [L20b, §4.3.2], the
result is a disjoint union of boundary strata on
$\overline{\mathcal{H}}_{\widetilde{g},S_{d-1},\xi^{\prime}}$, all of the
expected dimension (equal to that of $\overline{\mathcal{M}}_{A}$), each
appearing with multiplicity given in terms of the ramification indices
appearing, see [L20b, Lemma 4.15]. We may then pull back the underlying
reduced spaces (the boundary strata themselves) by the restriction map, to
obtain a disjoint union of boundary strata
$\overline{\mathcal{H}}_{(\widetilde{\Gamma},S_{d})}$ on
$\overline{\mathcal{H}}_{\widetilde{g},S_{d},\xi}$, as in [L20b, Lemma 4.11].
By [L20b, Proposition 4.13] (that is, the analogue of Theorem 3.1 in the
H-tautological setting), the class
$\xi_{A}^{*}\phi_{*}([\overline{\mathcal{H}}_{g/h,d}])$ is then computed in
terms of $\psi$ classes on
$\overline{\mathcal{H}}_{(\widetilde{\Gamma},S_{d})}$ associated to the half-
edge (orbits) of $\widetilde{\Gamma}$, after re-introducing the correction
factors of the multiplicities of the boundary classes on
$\overline{\mathcal{H}}_{\widetilde{g},S_{d-1},\xi^{\prime}}$.
On the other hand, the union of the
$\overline{\mathcal{H}}_{(\widetilde{\Gamma},S_{d})}$ is the underlying
reduced space of the pullback of $\xi_{A}$ by $\widetilde{\phi}$, so admits a
natural map (compatible with the maps to $\overline{\mathcal{M}}_{A}$)
$\nu_{(\Gamma,\Gamma^{\prime})}:\coprod\overline{\mathcal{H}}_{(\widetilde{\Gamma},S_{d})}\to\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}.$
In fact, one can easily make this map explicit: we have
$(\Gamma,\Gamma^{\prime})=(\widetilde{\Gamma}/S_{d-1},\widetilde{\Gamma}/S_{d}),$
the admissible cover $\gamma:\Gamma\to\Gamma^{\prime}$ is the natural
quotient map, and $\nu$ sends $\widetilde{X}^{\prime}\to Y^{\prime}$ to
$\widetilde{X}^{\prime}/S_{d-1}\to Y^{\prime}$ over each component
$Y^{\prime}\subset Y$. In particular, above each
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$, the map
$\nu_{(\Gamma,\Gamma^{\prime})}$ is a union of copies of the normalizations of
the constituent spaces. These copies are indexed by possible ways of gluing
the Galois closures of the individual components of the covers appearing in
$\mathcal{H}_{(\Gamma,\Gamma^{\prime})}$, or equivalently, by branches of the
image of $\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ in
$\overline{\mathcal{H}}_{g/h,d}$ before normalization, cf. [L20b, §6.2, step
(ii)].
Finally, the $\psi$ classes occurring on
$\overline{\mathcal{H}}_{(\widetilde{\Gamma},S_{d})}$ may be identified (up to
appropriate constant factors) with those on
$\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ via the
normalization map, so we may express the contribution from
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ to
$\xi_{A}^{*}\phi_{*}([\overline{\mathcal{H}}_{g/h,d}])$ in the desired way. ∎
### 3.3. Hurwitz cycles with rational target
We will later need the following statements which identify, in contrast with
our main results, tautological contributions to pullbacks of Hurwitz cycles by
boundary strata. As usual, both results hold true for all of our variants of
$\overline{\mathcal{H}}_{g/h,d}$ (allowing any combination of additional
marked points, higher ramification, or disconnected source curves).
###### Lemma 3.4.
Consider $\overline{\mathcal{H}}_{g/h,d}$ and $\xi_{A}$ as above, and suppose
further that $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ is a boundary
stratum appearing in the fiber product for which all vertices of
$\Gamma^{\prime}$ have genus 0. Then, the contribution of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ to
$\xi_{A}^{*}\phi_{*}([\overline{\mathcal{H}}_{g/h,d}])$ has TKD on
$\overline{\mathcal{M}}_{A}$.
###### Proof.
By Proposition 3.3(b), this contribution is a polynomial in $\psi$ classes on
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ at half-edges of $\Gamma$,
pushed forward to $\overline{\mathcal{M}}_{A}$. However, because the target
genera are all 0, we may identify the components of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ with spaces of relative
stable maps to $\mathbb{P}^{1}$ and the $\psi$ classes on them with Gromov-
Witten classes, see [FP05, §0.2.3, §1.2.2]. The claim is then immediate from
[FP05, Theorem 2]. ∎
###### Lemma 3.5.
Let
$\delta^{\prime}:\overline{\mathcal{H}}_{g/h,d}\to\overline{\mathcal{M}}_{h,k}$
be the composition of a target map $\delta$ with a map forgetting any number
of marked points. Let $[Y]$ be a point of $\overline{\mathcal{M}}_{h,k}$ and
$\overline{\mathcal{H}}_{g/h,d}(Y)=\delta^{\prime*}([Y])$. Then, the class of
the pushforward of $\overline{\mathcal{H}}_{g/h,d}(Y)$ to
$\prod_{i}\overline{\mathcal{M}}_{g_{i},n_{i}}$ has TKD.
###### Proof.
Points of $\overline{\mathcal{M}}_{h,k}$ are homologous to each other, so we
may assume that $Y$ is a stable marked curve with only rational components.
Then, $\overline{\mathcal{H}}_{g/h,d}(Y)$ may be expressed as a disjoint union
of boundary strata (of the correct dimension)
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ appearing with
multiplicity, for example, by a straightforward analogue of [L20b, §4.3.2] for
Harris-Mumford spaces. The claim then follows from [FP05, Theorem 2]. ∎
### 3.4. Post-composing with forgetful maps
The results above concern classes coming from source maps
$\phi:\overline{\mathcal{H}}_{g/h,d}\to\overline{\mathcal{M}}_{g,N}$, but we
will be primarily concerned with classes obtained by post-composing with
forgetful maps
$\pi:\overline{\mathcal{M}}_{g,N}\to\overline{\mathcal{M}}_{g,r}$. The
situation here is similar: we need only note that we have a Cartesian diagram
(with the intersection occurring in the correct dimension)
$\begin{array}{ccc}\coprod\overline{\mathcal{M}}_{g,A^{\prime}}&\xrightarrow{\coprod\xi_{A^{\prime}}}&\overline{\mathcal{M}}_{g,N}\\ \downarrow{\scriptstyle\coprod\pi}&&\downarrow{\scriptstyle\pi}\\ \overline{\mathcal{M}}_{g,A}&\xrightarrow{\ \xi_{A}\ }&\overline{\mathcal{M}}_{g,r}\end{array}$
where the coproduct is over stable graphs $A^{\prime}$ obtained from $A$ by
distributing the remaining points over its vertices in all possible ways.
## 4\. Reductions
###### Lemma 4.1 (cf. [vZ18, Lemma 10]).
Suppose that $\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}})$ is non-tautological. Then,
$\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d},n}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+n})$ is non-tautological for all
$n\geq 0$.
###### Proof.
Up to a non-zero constant, the class
$\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d},n}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+n})$ pushes forward to
$\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}})$ upon forgetting the
ramification points, so the result is immediate from the fact that
tautological classes are closed under forgetful pushforwards. ∎
Lemma 4.1 immediately reduces Theorem 1.3 to the case $n=0$; we assume this
henceforth unless otherwise mentioned.
###### Lemma 4.2.
Suppose that $m_{2}\geq 1$, and that
$\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}})$ is non-tautological. Then,
$\overline{\mathcal{H}}_{g/h,d,(m_{2}-1)^{2}(m_{d}+1)^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2(m_{2}-1)+d(m_{d}+1)})$ is non-tautological.
###### Proof.
Up to a non-zero constant, the class
$\overline{\mathcal{H}}_{g/h,d,(m_{2}-1)^{2}(m_{d}+1)^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2(m_{2}-1)+d(m_{d}+1)})$ pushes forward to
$\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}})$, so we conclude as in Lemma
4.1. ∎
###### Lemma 4.3 (cf. [vZ18, Lemma 11]).
Suppose that $\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}})$ is non-tautological. Then,
$\overline{\mathcal{H}}_{g/h,d,(m_{2}+1)^{2}(m_{d})^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2(m_{2}+1)+dm_{d}})$ is non-tautological.
###### Proof.
We pull back to the boundary divisor
$\xi:\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+1}\cong\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+1}\times\overline{M}_{0,3}\to\overline{\mathcal{M}}_{g,2(m_{2}+1)+dm_{d}}$
parametrizing marked curves with a rational tail marked by a pair of
unramified points in the same fiber, and apply Proposition 2.2. Let
$m=(m_{2}+1)+m_{d}$. We have a diagram
$\begin{array}{ccc}\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}&\longrightarrow&\overline{\mathcal{H}}_{g/h,d,m}\\ \downarrow&&\downarrow{\scriptstyle\phi}\\ \coprod\overline{\mathcal{M}}_{A}&\longrightarrow&\overline{\mathcal{M}}_{g,r}\\ \downarrow&&\downarrow{\scriptstyle\pi}\\ \overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+1}\times\overline{M}_{0,3}&\xrightarrow{\ \xi\ }&\overline{\mathcal{M}}_{g,2(m_{2}+1)+dm_{d}}\end{array}$
where $r=(2g-2)-d(2h-2)+2(m_{2}+1)+dm_{d}$, and the stable graphs $A$ arise
from all possible ways to distribute the remaining marked points on the two
components parametrized by
$\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+1}\times\overline{M}_{0,3}$ as in
§3.4. The bottom square is Cartesian, and the top square is Cartesian at least
on the level of closed points as in Proposition 3.2; it will turn out that the
only strata $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ contributing to the
class in question will, in fact, be reduced.
Consider a general point $[f:X\to Y]$ on some
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ in the fiber product.
Because the graphs $A$ have only one edge, by the genericity condition from
Proposition 3.2, $Y$ may only have one node, which must be separating by Lemma
2.12.
Therefore, $Y$ is the union of two components $Y_{0},Y_{h}$ of genus $0,h$,
respectively. Moreover, if $X$ is the union $X_{0}\cup X_{g}$, with the two
pieces corresponding to the vertices of $\Gamma$, then $X_{0}$ must be an
irreducible (and smooth) rational curve living entirely over $Y_{0}$. Note, in
addition, that $X_{0}$ must have degree at least 2 over $Y_{0}$, in order to
contain two marked points in the same fiber.
Let $b=(2g-2)-d(2h-2)+(m_{2}+1)+m_{d}$ be the total number of marked points on
$Y$, and let $B=b+(3h-3)$ be the dimension of
$\overline{\mathcal{H}}_{g/h,d,(m_{2}+1)^{2}(m_{d})^{d}}$. In order for
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ to give a non-zero
contribution to $H_{2(B-1)}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+1})$, we
need the image of $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ in
$\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+1}$ to be supported in dimension (at
least) $B-1$.
Let $b_{0}$ be the number of marked points of $Y$ whose marked pre-images,
only including those not forgotten by $\pi$, lie entirely on $X_{0}$, and let
$b_{g}=b-b_{0}$ be the number that have at least one marked pre-image on
$X_{g}$. Note that $b_{0}\geq 2$, with at least one point coming from the
unramified pair, and at least one more coming from a ramification point, as
the degree of $X_{0}$ over $Y_{0}$ is at least 2. Therefore, $b_{g}\leq b-2$.
Then, by the quasi-finiteness of the target maps $\delta$, the dimension of
the image of $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ in
$\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+1}$ is at most $b_{g}+1+(3h-3)\leq
B-1$; moreover, this number decreases if one or more of the $b_{g}$ marked
points with a pre-image on $X_{g}$ lies on $Y_{0}$.
Therefore, we must have equality everywhere. In particular, $X_{0}$ has degree
2 over $Y_{0}$ and is ramified over the node of $Y$, and $X_{g}$ consists of a
smooth component of genus $g$ mapping with degree $d$ to $Y_{h}$, ramified at
one point over the node of $Y$, along with $d-2$ rational tails mapping
isomorphically to $Y_{0}$. In addition, all marked unramified fibers must lie
on $X_{g}$. The contributing covers are shown in Figure 1.
Figure 1. The only possible contribution to
$\xi^{*}\overline{\mathcal{H}}_{g/h,d,(m_{2}+1)^{2}(m_{d})^{d}}$. All other
marked points lie on $X_{g}$.
Now, we see that the pullback of
$\overline{\mathcal{H}}_{g/h,d,(m_{2}+1)^{2}(m_{d})^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2(m_{2}+1)+dm_{d}})$ by $\xi$, after
forgetting the factor $\overline{M}_{0,3}$, gives, up to a non-zero constant,
the class $\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d},1}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}+1})$. The proof is now complete
by Lemma 4.1. ∎
## 5\. $d$-elliptic loci
In this section, we prove the first part of Theorem 1.3, in the case $h=1$. We
follow the approach of [vZ18]: we first handle the case $g+m_{2}=12$ by
finding a non-zero contribution from odd cohomology on
$\overline{\mathcal{M}}_{1,11}$ upon a boundary pullback, then use a different
boundary pullback to induct on $g$.
Recall that we define the integers $a_{d}$, $d\geq 2$ by
$\eta(q)^{48}=q^{2}\prod_{\ell\geq 1}(1-q^{\ell})^{48}=\sum_{d\geq 2}a_{d}q^{d}.$
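The coefficients $a_{d}$ are readily computed by truncated multiplication of the Euler product; since $\eta(q)^{48}=(\eta(q)^{24})^{2}$, they are also convolutions $a_{d}=\sum_{i+j=d}\tau(i)\tau(j)$ of Ramanujan tau values. The sketch below (illustrative only, not part of the argument) computes them directly; one finds, e.g., $a_{2}=1$ and $a_{3}=-48$.

```python
def eta_power_coeffs(power, nmax):
    """Coefficients of prod_{l >= 1} (1 - q^l)^power, up to q^nmax."""
    coeffs = [0] * (nmax + 1)
    coeffs[0] = 1
    for l in range(1, nmax + 1):
        for _ in range(power):
            # multiply the truncated series by (1 - q^l)
            for n in range(nmax, l - 1, -1):
                coeffs[n] -= coeffs[n - l]
    return coeffs

c = eta_power_coeffs(48, 10)
# a_d is the coefficient of q^d in q^2 * prod (1 - q^l)^48, i.e. c[d - 2]
a = {d: c[d - 2] for d in range(2, 13)}
print(a[2], a[3])  # 1 -48
```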
### 5.1. The case $g+m_{2}=12$
###### Proposition 5.1.
Suppose that $d\geq 2$, $g\geq 2$, $g+m_{2}=12$, and $a_{d}\neq 0$. Then,
$\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}\in
H^{22}(\overline{\mathcal{M}}_{g,2m_{2}})$ is non-tautological.
We will prove Proposition 5.1 by pullback to the boundary stratum
$\xi:\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}\to\overline{\mathcal{M}}_{g,2m_{2}}$
defined by gluing $g-1$ pairs of marked points on the two elliptic components
together. We have a diagram
$\begin{array}{ccc}\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}&\longrightarrow&\overline{\mathcal{H}}_{g/1,d,m_{2}}\\ \downarrow&&\downarrow{\scriptstyle\phi}\\ \coprod\overline{\mathcal{M}}_{A}&\longrightarrow&\overline{\mathcal{M}}_{g,N}\\ \downarrow&&\downarrow{\scriptstyle\pi}\\ \overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}&\xrightarrow{\ \xi\ }&\overline{\mathcal{M}}_{g,2m_{2}}\end{array}$
where $N=(d-1)(2g-2)+dm_{2}$, and the stable graphs $A$ arise from all
possible ways to distribute the remaining marked points on the two components
parametrized by
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$. The bottom
square is Cartesian, and the top square is Cartesian on the level of closed
points.
We consider the contributions to the intersection of $\xi$ and
$\phi^{\prime}:\overline{\mathcal{H}}_{g/1,d,m_{2}}\to\overline{\mathcal{M}}_{g,2m_{2}}$
from each $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ separately.
First, note that if the image of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ in
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$ is
supported on the boundary, then the corresponding contribution to
$\xi^{*}\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$ automatically has TKD by
Lemma 2.4.
Therefore, we can assume in particular that the generic cover $[X\to Y]$ of
$\mathcal{H}_{(\Gamma,\Gamma^{\prime})}$ has the property that the source
curve $X$ has two smooth genus 1 components $X_{1},X^{\prime}_{1}$. Let
$Y_{1},Y^{\prime}_{1}\subset Y$ be the image components of
$X_{1},X^{\prime}_{1}$, respectively. We then have three cases:
1. (i)
$Y_{1}\neq Y^{\prime}_{1}$,
2. (ii)
$Y_{1}=Y^{\prime}_{1}$ is a smooth rational curve, and
3. (iii)
$Y_{1}=Y^{\prime}_{1}$ is a smooth genus 1 curve.
###### Lemma 5.2.
The contributions to $\xi^{*}\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$ from
strata $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ whose general point
satisfies either (i) or (ii) have TKD.
###### Proof.
First, consider case (i). The component
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ in question has the
property that $\Gamma$ has two vertices of genus 1, which map to different
vertices $v_{1},v^{\prime}_{1}\in\Gamma^{\prime}$, and the rest of the
vertices of $\Gamma$ have genus 0. We may decompose
$\overline{\mathcal{M}}_{A}=\overline{\mathcal{M}}_{v_{1}}\times\overline{\mathcal{M}}_{v^{\prime}_{1}}\times\overline{\mathcal{M}}_{w},$
where $\overline{\mathcal{M}}_{v_{1}},\overline{\mathcal{M}}_{v^{\prime}_{1}}$
parametrize the components of $X$ mapping to $Y_{1},Y^{\prime}_{1}$,
respectively, and $\overline{\mathcal{M}}_{w}$ parametrizes all other
components. The spaces
$\overline{\mathcal{M}}_{v_{1}},\overline{\mathcal{M}}_{v^{\prime}_{1}}$ are
products of a single moduli space of pointed genus 1 curves with a collection
of moduli spaces of pointed rational curves, whereas
$\overline{\mathcal{M}}_{w}$ is a product of moduli spaces of pointed rational
curves.
The pushforward of $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ to
$\overline{\mathcal{M}}_{A}$ decomposes into a product of algebraic classes on
the components
$\overline{\mathcal{M}}_{v_{1}},\overline{\mathcal{M}}_{v^{\prime}_{1}},\overline{\mathcal{M}}_{w}$,
and therefore has TKD, by Lemma 2.3 and the fact that all cohomology on moduli
spaces of pointed rational curves is tautological [Kee92]. In particular, the
further pushforward to
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$ also has
TKD.
Now, consider case (ii). Note that all components of $Y$ must be rational,
because the only two components of $X$ which can map to a higher genus curve,
namely $X_{1}$ and $X^{\prime}_{1}$, both map to a rational component.
Therefore, the resulting contribution of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ to
$\xi^{*}\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$ has TKD by Lemma 3.4. ∎
###### Lemma 5.3.
All strata $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ whose general
point satisfies (iii) and which give non-zero contributions to
$\xi^{*}\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$ have general point
$[f:X\to Y]$ of the following form, also depicted in Figure 2.
$Y$ consists of an elliptic component $Y_{1}$ with $m_{2}$ marked points and
$g-1$ rational tails, each of which contains two branch points. $X$ contains
two elliptic components $X_{1},X^{\prime}_{1}$ over $Y_{1}$, connected by
$g-1$ rational bridges, mapping to the rational tails of $Y$ with degree 2,
and all other components living over the rational tails have degree 1.
Finally, the unramified pairs of marked points of $X$ live over those of $Y$,
with one point of each pair on $X_{1}$ and $X^{\prime}_{1}$.
Figure 2. The only possible non-tautological contributions to
$\xi^{*}\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$. Here, $(g,m_{2})=(4,8)$.
The rational tails of $X$ mapping isomorphically to those of $Y$ are not
shown.
###### Proof.
In order for $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ to give a
non-zero contribution to $\xi^{*}\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$,
its image in
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$ must have
dimension at least 11. By assumption, the image of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ is not supported in the
boundary of
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$, so all of
the moduli must live over $Y_{1}=Y^{\prime}_{1}$. Thus, the total number of
nodes and marked points on $Y_{1}$ must be at least 11.
The pre-image of $Y_{1}$ must consist exactly of the two components
$X_{1},X^{\prime}_{1}$, covering $Y_{1}$ via isogenies of degrees
$d_{1},d^{\prime}_{1}$, with $d_{1}+d^{\prime}_{1}=d$. In particular, if
$Y_{1}$ carries $s$ marked points, then each corresponds to one of the $m_{2}$
unramified fibers. On the other hand, if there are $t$ nodes on $Y_{1}$, at which trees
of rational components are attached, each such node contributes at least 2
branch points to $Y_{1}$. Therefore, we have
$11\leq s+t\leq m_{2}+g-1=11,$
meaning we have equality everywhere. The conclusion then follows easily. ∎
To show that, in total, such
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ give a contribution to
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$ without
TKD, we need the following lemma.
###### Lemma 5.4.
Let $\overline{\mathcal{H}}_{1/1,k,11}^{\circ}$ be the space of 11-pointed
admissible degree $k$ covers $f:X\to Y$, where $X,Y$ have genus 1, and 11
marked points of $X$ are chosen over those of $Y$. (Note that this differs
from the usual space $\overline{\mathcal{H}}_{1/1,k,11}$ in that here we only
mark one point in each fiber.)
Consider the operator $T_{k}=\phi_{*}\circ\delta^{*}$ acting on
$H^{11}(\overline{\mathcal{M}}_{1,11})$, induced by the correspondence
$\overline{\mathcal{M}}_{1,11}\xleftarrow{\ \phi\ }\overline{\mathcal{H}}_{1/1,k,11}^{\circ}\xrightarrow{\ \delta\ }\overline{\mathcal{M}}_{1,11}.$
Then, $T_{k}$ acts on the two-dimensional vector space
$H^{11}(\overline{\mathcal{M}}_{1,11},\mathbb{Q})$ by multiplication by
$\tau(k)$, the $q^{k}$-coefficient of the normalized weight 12 cusp form
$\eta(q)^{24}$.
###### Proof.
In fact, it suffices to consider the action of $T_{k}$ on the class of the
discriminant form $[\omega]\in H^{11,0}(\mathcal{M}_{1,11},\mathbb{C})$, see
§2.2. Indeed, $T_{k}$ necessarily acts by the same constant on both
$H^{11}(\overline{\mathcal{M}}_{1,11})$ and $H^{11}(\mathcal{M}_{1,11})$.
We give complex-analytic descriptions of the spaces involved. First, we have
$\mathcal{M}_{1,1}=\mathbb{H}/\operatorname{SL}_{2}(\mathbb{Z})$. Now,
consider $\mathcal{H}_{1/1,k}$. If $E=\mathbb{C}/\Lambda$ is an elliptic
curve, then isogenies $E^{\prime}\to E$ are in bijection with index $k$
sublattices $\Lambda\subset\Lambda^{\prime}$, which in turn are in bijection
with the right orbit space $SL_{2}(\mathbb{Z})\backslash M_{k}$, where $M_{k}$
is the set of integer matrices of determinant $k$. In addition, we have a
monodromy action of $SL_{2}(\mathbb{Z})$ on such lattices on the left, and
components of $\mathcal{H}_{1/1,k}$ are indexed by the double orbit space
$\operatorname{SL}_{2}(\mathbb{Z})\backslash
M_{k}/\operatorname{SL}_{2}(\mathbb{Z})$.
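A standard fact (used here only as an illustrative aside, not stated in the text) is that the cosets $\operatorname{SL}_{2}(\mathbb{Z})\backslash M_{k}$ are represented by Hermite normal forms $\begin{bmatrix}a&b\\ 0&d\end{bmatrix}$ with $ad=k$ and $0\leq b<d$, so that there are exactly $\sigma_{1}(k)=\sum_{d\mid k}d$ index-$k$ lattices, and hence $\sigma_{1}(k)$ isogenies in each fiber of the source map. A sketch enumerating the representatives:

```python
def hnf_reps(k):
    """Hermite normal form representatives [[a, b], [0, d]] of the coset
    space SL_2(Z) \\ M_k: here a * d = k and 0 <= b < d."""
    reps = []
    for d in range(1, k + 1):
        if k % d == 0:
            a = k // d
            for b in range(d):
                reps.append(((a, b), (0, d)))
    return reps

def sigma1(k):
    """Sum-of-divisors function sigma_1(k)."""
    return sum(d for d in range(1, k + 1) if k % d == 0)

# number of index-k lattices (= degree of the source map phi)
print(len(hnf_reps(4)), sigma1(4))  # 7 7
```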
Now, for any orbit representative
$A\in\operatorname{SL}_{2}(\mathbb{Z})\backslash
M_{k}/\operatorname{SL}_{2}(\mathbb{Z})$, define the congruence subgroup
$\Gamma_{A}\subset\operatorname{SL}_{2}(\mathbb{Z})$ to be the kernel of the
left action of $SL_{2}(\mathbb{Z})$ on the lattice corresponding to $A\cdot
SL_{2}(\mathbb{Z})$. We have that $\mathcal{H}_{1/1,k}$ is the union of
modular curves
$\coprod_{A\in\operatorname{SL}_{2}(\mathbb{Z})\backslash
M_{k}/\operatorname{SL}_{2}(\mathbb{Z})}\mathbb{H}/\Gamma_{A}$
where the index set is over a choice of double coset representatives. If
$A=\begin{bmatrix}a&b\\\ c&d\end{bmatrix}\in M_{k}$, then the point
$z\in\mathbb{H}/\Gamma_{A}$ corresponds to the isogeny
$\mathbb{C}/\langle 1,z\rangle\to\mathbb{C}/\langle
cz+d,az+b\rangle\to\mathbb{C}/\left\langle 1,\begin{bmatrix}a&b\\\
c&d\end{bmatrix}z\right\rangle$
where the first map is multiplication by $k$, and the second is the
isomorphism induced by multiplication by $\frac{1}{cz+d}$.
In particular, the source map $\phi:\mathcal{H}_{1/1,k}\to\mathcal{M}_{1,1}$
is induced by the inclusions $\Gamma_{A}\subset\Gamma$, so that $\phi(z)=z$,
and the target map is defined by $\delta(z)=\begin{bmatrix}a&b\\\
c&d\end{bmatrix}z$.
Now, we add marked points: recall from §2.2 that
$\mathcal{M}_{1,11}\subset(\mathbb{H}\times\mathbb{C}^{10})/(\operatorname{SL}_{2}(\mathbb{Z})\ltimes(\mathbb{Z}^{2})^{10}),$
where the 10 copies of $\mathbb{C}/\mathbb{Z}^{2}$ correspond to the marked
points, and the open subset is given by removing the diagonals and zero
sections. In a similar way, we have
$\overline{\mathcal{H}}_{1/1,k,11}^{\circ}\subset\coprod_{A\in\operatorname{SL}_{2}(\mathbb{Z})\backslash
M_{k}/\operatorname{SL}_{2}(\mathbb{Z})}(\mathbb{H}\times\mathbb{C}^{10})/(\Gamma_{A}\ltimes(\mathbb{Z}^{2})^{10}),$
with source and target maps given by
$\displaystyle\phi((z,\zeta_{1},\ldots,\zeta_{10}))$
$\displaystyle=(z,\zeta_{1},\ldots,\zeta_{10})$
$\displaystyle\delta((z,\zeta_{1},\ldots,\zeta_{10}))$
$\displaystyle=\left(\begin{bmatrix}a&b\\\
c&d\end{bmatrix}z,\frac{k\zeta_{1}}{cz+d},\ldots,\frac{k\zeta_{10}}{cz+d}\right)$
We now compute the action of $T_{k}$ on
$\omega=\eta(z)^{24}dz\wedge d\zeta_{1}\wedge\cdots\wedge d\zeta_{10}.$
On $\mathbb{H}/\Gamma_{A}$, we have
$\displaystyle\delta^{*}\omega$
$\displaystyle=\eta\left(\frac{az+b}{cz+d}\right)^{24}d\left(\frac{az+b}{cz+d}\right)\wedge
d\left(\frac{k\zeta_{1}}{cz+d}\right)\wedge\cdots\wedge
d\left(\frac{k\zeta_{10}}{cz+d}\right)$
$\displaystyle=\eta\left(\frac{az+b}{cz+d}\right)^{24}\left(\frac{k}{(cz+d)^{2}}dz\right)\wedge\left(\frac{k}{cz+d}d\zeta_{1}\right)\wedge\cdots\wedge\left(\frac{k}{cz+d}d\zeta_{10}\right)$
$\displaystyle=k^{11}(cz+d)^{-12}\cdot\eta\left(\frac{az+b}{cz+d}\right)^{24}dz\wedge d\zeta_{1}\wedge\cdots\wedge d\zeta_{10}.$
To compute the pushforward by $\phi$, recall that the pre-images of a point of
$\overline{\mathcal{M}}_{1,1}$ are indexed by orbit representatives
$A\in\operatorname{SL}_{2}(\mathbb{Z})\backslash M_{k}$; for each
corresponding point of $\overline{\mathcal{H}}^{\circ}_{1/1,k}$, we may
compute $\delta^{*}(\omega)$ at that point in terms of the chosen matrix $A$.
Thus, summing over all pre-images amounts to summing the above formula for
$\delta^{*}\omega$ over all choices of
$A\in\operatorname{SL}_{2}(\mathbb{Z})\backslash M_{k}$, and we obtain
$\phi_{*}\delta^{*}\omega=k^{11}\sum_{A\in\operatorname{SL}_{2}(\mathbb{Z})\backslash
M_{k}}(cz+d)^{-12}\eta\left(\frac{az+b}{cz+d}\right)^{24}dz\wedge
d\zeta_{1}\wedge\cdots\wedge d\zeta_{10}.$
This identifies $T_{k}$ with the $k$-th Hecke operator on the space of weight
12 cusp forms, which is 1-dimensional, and thus acts by the $k$-th Fourier
coefficient of $\eta(q)^{24}$. ∎
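As a numerical sanity check of this identification (illustrative only, not part of the proof), one can expand $\eta(q)^{24}$ and verify Hecke-eigenform identities such as $\tau(mn)=\tau(m)\tau(n)$ for $\gcd(m,n)=1$ and $\tau(p^{2})=\tau(p)^{2}-p^{11}$:

```python
def tau_coeffs(nmax):
    """Ramanujan tau(n) for 1 <= n <= nmax, from the q-expansion
    eta(q)^24 = q * prod_{l >= 1} (1 - q^l)^24."""
    c = [0] * nmax
    c[0] = 1
    for l in range(1, nmax):
        for _ in range(24):
            # multiply the truncated series by (1 - q^l)
            for n in range(nmax - 1, l - 1, -1):
                c[n] -= c[n - l]
    # tau(n) is the coefficient of q^n, i.e. of q^(n-1) in the product
    return {n: c[n - 1] for n in range(1, nmax + 1)}

tau = tau_coeffs(30)
print(tau[2], tau[3])                   # -24 252
print(tau[6] == tau[2] * tau[3])        # multiplicativity: True
print(tau[4] == tau[2] ** 2 - 2 ** 11)  # tau(p^2) identity: True
```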
###### Proof of Proposition 5.1.
We wish to show that $\xi^{*}\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$ fails
to have TKD on
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$. By Lemmas
5.2 and 5.3, we need only consider the contributions as described in Lemma
5.3. Note, in this case, that the strata
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ have the expected
dimension.
Up to a constant factor (depending on $g$ and $d$ but not
$(d_{1},d^{\prime}_{1})$), the relevant contribution to
$\xi^{*}\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$ may be expressed as the
pushforward of the fundamental class by the source map
$\phi:\coprod_{d_{1}+d^{\prime}_{1}=11}\overline{\mathcal{H}}_{(1,1)/1,(d_{1},d^{\prime}_{1}),11}^{\circ}\to\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11},$
where $\overline{\mathcal{H}}_{(1,1)/1,(d_{1},d^{\prime}_{1}),11}^{\circ}$
denotes the space of disconnected covers $X_{1}\coprod X^{\prime}_{1}\to
Y_{1}$, consisting of isogenies of degrees $d_{1},d^{\prime}_{1}$ and 11 pairs
of points on $X_{1},X^{\prime}_{1}$ with equal image. Note, as in Lemma 5.4,
that we do not label here the other $d-2$ points in each of these 11 special
fibers.
We have a Cartesian diagram
$\textstyle{\overline{\mathcal{H}}_{(1,1)/1,(d_{1},d^{\prime}_{1}),11}^{\circ}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\delta}$$\textstyle{\overline{\mathcal{H}}_{1/1,d_{1},11}^{\circ}\times\overline{\mathcal{H}}_{1/1,d^{\prime}_{1},11}^{\circ}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{(\delta,\delta)}$$\textstyle{\overline{\mathcal{M}}_{1,11}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\Delta}$$\textstyle{\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}}$
That is, $\overline{\mathcal{H}}_{(1,1)/1,(d_{1},d^{\prime}_{1}),11}^{\circ}$
parametrizes pairs of 11-pointed isogenies, with an isomorphism between the
targets. In particular, we have
$\phi_{*}([\overline{\mathcal{H}}_{(1,1)/1,(d_{1},d^{\prime}_{1}),11}^{\circ}])=(\phi,\phi)_{*}(\delta,\delta)^{*}([\Delta]),$
where the maps on the right hand side come from the correspondence
$\textstyle{\overline{\mathcal{H}}_{1/1,d_{1},11}^{\circ}\times\overline{\mathcal{H}}_{1/1,d^{\prime}_{1},11}^{\circ}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{(\delta,\delta)}$$\scriptstyle{(\phi,\phi)}$$\textstyle{\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}}$$\textstyle{\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}}$
arising as the product of correspondences from Lemma 5.4.
Finally, consider the Künneth decomposition of the diagonal class $[\Delta]$.
The terms consisting of pairs of even-dimensional classes have TKD both before
and after applying the correspondence by Lemma 2.3. By Lemma 2.5, the
remaining terms are, up to a non-zero constant multiple,
$-\omega\otimes\overline{\omega}-\overline{\omega}\otimes\omega.$
By Lemma 5.4, the correspondence acts by $\tau(d_{1})\tau(d^{\prime}_{1})$ on
this piece, and summing over all pairs $(d_{1},d^{\prime}_{1})$, we find that
the resulting class has non-zero odd contributions whenever the $d$-th
coefficient $a_{d}$ of $\eta(q)^{48}$ is non-zero. In particular, it fails to
have TKD, completing the proof. ∎
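Concretely (a standard expansion), the coefficient $a_{d}$ arises because $\eta(q)^{48}=\Delta(q)^{2}$, so that
$\eta(q)^{48}=\left(\sum_{n\geq 1}\tau(n)q^{n}\right)^{2}=\sum_{d\geq 2}\left(\sum_{d_{1}+d^{\prime}_{1}=d}\tau(d_{1})\tau(d^{\prime}_{1})\right)q^{d};$
summing the factors $\tau(d_{1})\tau(d^{\prime}_{1})$ over all pairs thus produces exactly the $d$-th Fourier coefficient of $\eta(q)^{48}$.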
###### Remark 5.5.
The modularity of the non-tautological contribution of the intersection of the
$d$-elliptic cycle $\overline{\mathcal{H}}_{g/h,d,m_{2}}$ with the boundary stratum $\xi$ is
consistent with the main conjecture of [L20a], which predicts that the classes
$\overline{\mathcal{H}}_{g/h,d,m_{2}}$ themselves are quasi-modular in $d$.
###### Corollary 5.6.
Suppose that $d\geq 2$, $g\geq 2$, $g+m_{2}=12$, and $a_{d}\neq 0$. Then,
$\mathcal{H}_{g/1,d,(m_{2})^{2}}\in H^{22}(\mathcal{M}_{g,2m_{2}})$ is non-
tautological.
###### Proof.
The proof is identical to that of [vZ18, Theorem 2]: pullbacks of boundary cycles of
(complex) codimension 11 have TKD on
$\overline{\mathcal{M}}_{1,11}\times\overline{\mathcal{M}}_{1,11}$, so the
failure of $\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$ to have TKD upon this
pullback persists after adding any combination of boundary cycles. ∎
### 5.2. Induction on genus
###### Theorem 5.7.
Suppose that $d\geq 2$, $g\geq 2$, $g+m_{2}\geq 12$, and furthermore that
$a_{d}\neq 0$. Then, $\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}})$ is non-tautological.
We prove Theorem 5.7 by induction on $g$. When $2\leq g\leq 12-m_{2}$, the
result follows by Proposition 5.1 and Lemma 4.3.
Now, suppose $g>12-m_{2}$, so that in particular $(g-1)+m_{2}\geq 12$. We consider
the pullback of $\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$ to the boundary
divisor
$\xi:\overline{\mathcal{M}}_{g-1,2m_{2}+1}\times\overline{\mathcal{M}}_{1,1}\to\overline{\mathcal{M}}_{g,2m_{2}}$.
More precisely, let $b=2g-2+m_{2}$ be the dimension of
$\overline{\mathcal{H}}_{g/1,d,m_{2}}$, also equal to the number of marked
points on the target curve. Then, we consider the projection
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}$ of
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})$ to the factor
$H_{2(b-2)}(\overline{\mathcal{M}}_{g-1,2m_{2}+1})\otimes
H_{2}(\overline{\mathcal{M}}_{1,1})\subset
H_{2(b-1)}(\overline{\mathcal{M}}_{g-1,2m_{2}+1}\times\overline{\mathcal{M}}_{1,1}),$
of the Künneth decomposition. The factor $H_{2}(\overline{\mathcal{M}}_{1,1})$
is spanned by the fundamental class; we show by induction that the resulting
class on $H_{2(b-2)}(\overline{\mathcal{M}}_{g-1,2m_{2}+1})$ is non-
tautological, so that $\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})$
fails to have TKD.
Consider the usual diagram
$\textstyle{\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\phi}$$\textstyle{\coprod\overline{\mathcal{M}}_{A}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\overline{\mathcal{M}}_{g,N}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\pi}$$\textstyle{\overline{\mathcal{M}}_{g-1,2m_{2}+1}\times\overline{\mathcal{M}}_{1,1}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\xi}$$\textstyle{\overline{\mathcal{M}}_{g,2m_{2}}}$
Let $[f:X\to Y]$ be a general point of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$. Because the graphs $A$
have only one edge, by the genericity condition from Proposition 3.2, $Y$ may
only have one node, which must be separating by Lemma 2.12. Thus, $Y$ is the
union of a smooth genus 1 component $Y_{1}$ and a smooth rational component
$Y_{0}$. In addition, $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ is
pure of codimension 1 in $\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$, so in
particular the intersection in the upper square occurs in the expected
dimension.
Let $X_{1},X_{g-1}$ be the subcurves of $X$ of genus $1,g-1$, respectively,
corresponding to the pieces parametrized by the factors of
$\overline{\mathcal{M}}_{g-1,2m_{2}+1}\times\overline{\mathcal{M}}_{1,1}$.
We consider two cases:
1. (i)
At least one component of $X_{1}$ maps to $Y_{1}$, and
2. (ii)
$X_{1}$ maps entirely to $Y_{0}$.
###### Lemma 5.8.
The contributions to
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}$ from strata
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ whose general point
satisfies (i) have TKD.
###### Proof.
As its genus is 1, the subcurve $X_{1}$ can contain only one component over
$Y_{1}$, an elliptic component mapping via an isogeny of degree
$d^{\prime}\leq d$. One of the pre-images of the nodes of $Y$ is chosen as the
separating node parametrizing
$\overline{\mathcal{M}}_{g-1,2m_{2}+1}\times\overline{\mathcal{M}}_{1,1}$, and
at the others, we must attach rational tails, in order for the genus of
$X_{1}$ to be equal to 1. The curve $X_{g-1}$ then has degree $d-d^{\prime}$
over $Y_{1}$ and $d-d^{\prime}+1$ over $Y_{0}$.
Recall that we are interested in the contribution
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}\in
H_{2(b-2)}(\overline{\mathcal{M}}_{g-1,2m_{2}+1})\otimes
H_{2}(\overline{\mathcal{M}}_{1,1})\cong
H_{2(b-2)}(\overline{\mathcal{M}}_{g-1,2m_{2}+1})$
The resulting class in $H_{2(b-2)}(\overline{\mathcal{M}}_{g-1,2m_{2}+1})$ may
be computed by intersecting
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}$ with
$[\overline{\mathcal{M}}_{g-1,2m_{2}+1}]\times[\operatorname{Spec}(\mathbb{C})]$,
which amounts in this case to imposing the condition that the elliptic
component $X_{1}$ have fixed $j$-invariant.
This, in turn, gives a discrete set of choices for the isomorphism class of
the target component $Y_{1}$. For each possible $Y_{1}$, and each possible
generic topological type of a cover $X\to Y$, we get a contribution to
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}\in
H_{2(b-2)}(\overline{\mathcal{M}}_{g-1,2m_{2}+1})$
given by the product of a Hurwitz locus for the fixed target $Y_{1}$ and
a Hurwitz locus for the rational target $Y_{0}$, pushed forward by a boundary
morphism. In particular, by Lemmas 3.4 and 3.5, all such contributions are
tautological. ∎
###### Lemma 5.9.
All strata $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ whose general
point satisfies (ii) and which give non-zero contributions to
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}$ have general
point $[f:X\to Y]$ of the following form, also depicted in Figure 3.
$X_{g-1}$ consists of a smooth genus $g-1$ component mapping to $Y_{1}$ with
degree $d$, along with $d-2$ rational tails; at a ramification point, a smooth
genus 1 curve $X_{1}$ is attached, and $X_{1}$ maps to $Y_{0}$ with degree 2.
Figure 3. The only possible contribution of type (ii) to
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}$. All marked
points on the source lie on $X_{g-1}$.
###### Proof.
If $X_{1}$ maps entirely to $Y_{0}$, then $X_{1}$ must be a smooth genus 1
curve. In order for the node at which $X_{1},X_{g-1}$ meet to be separating,
we need $X_{1}$ to be totally ramified over $Y_{0}$.
All $2m_{2}$ of the (unforgotten) unramified marked points of $X$ are
constrained to lie on $X_{g-1}$, so these points, as well as the $2g-2$
ramification points, can be associated in a well-defined way to one of $X_{1}$
and $X_{g-1}$. Let $b_{1},b_{g-1}$, respectively, be the number of marked
points appearing on these components, so that $b_{1}+b_{g-1}=b$. We have
$b_{1}\geq 3$, so $b_{g-1}\leq b-3$.
On the other hand, let $b_{g-1,0},b_{g-1,1}$ be the number of marked points on
$X_{g-1}$ mapping to $Y_{0},Y_{1}$, respectively. Suppose that $b_{g-1,0}>0$.
Then, a dimension count shows that the dimension of the image of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ upon projection to
$\overline{\mathcal{M}}_{g-1,2m_{2}+1}$ is less than $b-2$. In particular, the
contribution to $\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}$
is zero.
Thus, we find $b_{g-1,0}=0$, $b_{g-1,1}=b-3$, and $b_{1}=3$, from which we may
conclude immediately. ∎
###### Proof of Theorem 5.7.
By the previous two lemmas, all contributions to
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}$ have TKD except
possibly those coming from strata
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ as described in Lemma 5.9,
for which we get a positive multiple of
$\overline{\mathcal{H}}_{(g-1)/1,(m_{2})^{2},1}$. By Lemma 4.1 and the
inductive hypothesis, this class is non-tautological on
$\overline{\mathcal{M}}_{g-1,2m_{2}+1}$, so
$\xi^{*}(\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}})_{b-2,1}$ fails to have
TKD. In particular, $\overline{\mathcal{H}}_{g/1,d,(m_{2})^{2}}$ is non-
tautological. ∎
## 6\. Higher genus targets
In this section, we complete the proof of Theorem 1.3, by induction on $h$,
with the base case given by Theorem 5.7. As $d$ is fixed throughout, we will
eventually require the same non-vanishing condition $a_{d}\neq 0$.
###### Proposition 6.1.
Suppose that $h\geq 2$, $d\geq 2$, $g\geq d$, $m_{2}\geq 0$,
$s\geq\max\{2,d-1\}$, and $m_{d}\geq s-1$. Suppose further that
$\overline{\mathcal{H}}_{(g-d)/(h-1),d,(m_{2})^{2}(m_{d}-s+2)^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g-d,2m_{2}+d(m_{d}-s+2)})$ is non-tautological
(and in particular, that the cohomology group in question is non-zero and the
Hurwitz locus is non-empty).
Then, $\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}\in
H^{*}(\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}})$ is non-tautological.
Consider the codimension $d$ stratum
$\xi:\overline{\mathcal{M}}_{g-d,2m_{2}+d(m_{d}-s+2)}\times(\overline{\mathcal{M}}_{1,s})^{d}\to\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}}$
parametrizing “comb” curves, that is, curves formed by attaching $d$ elliptic
tails to a “spine” of genus $g-d$. We require that $s-1$ of the $d$-tuples of
unramified points lie on the elliptic tails, with one point of each $d$-tuple
distributed to each tail. The remaining $d$-tuples are constrained to lie on
the spine, as are all $m_{2}$ pairs of unramified points.
We will prove Proposition 6.1 by showing that
$\xi^{*}\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}$ fails to have
TKD.
Let $b=(2g-2)-d(2h-2)+(m_{2}+m_{d})$ be the number of marked points on the
target of a cover parametrized by
$\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}$, and let $B=(3h-3)+b$
be the dimension of $\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}$.
We will consider the projection
$\xi^{*}(\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}})_{B-s-1,s-d+1}$
of $\xi^{*}(\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}})$ to
$H_{2(B-s-1)}(\overline{\mathcal{M}}_{g-d,2m_{2}+d(m_{d}-s+1)})\otimes
H_{2(s-d+1)}((\overline{\mathcal{M}}_{1,s})^{d})\subset
H_{2(B-d)}(\overline{\mathcal{M}}_{g-d,2m_{2}+d(m_{d}-s+1)}\times(\overline{\mathcal{M}}_{1,s})^{d})$
Note, in particular, that the condition $s\geq d-1$ ensures that
$H_{2(s-d+1)}((\overline{\mathcal{M}}_{1,s})^{d})$ is non-trivial.
As usual, consider the diagram
$\textstyle{\coprod\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\phi}$$\textstyle{\coprod\overline{\mathcal{M}}_{A}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\overline{\mathcal{M}}_{g,r}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\pi}$$\textstyle{\overline{\mathcal{M}}_{g-d,2m_{2}+d(m_{d}-s+1)}\times(\overline{\mathcal{M}}_{1,s})^{d}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\xi}$$\textstyle{\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}}}$
###### Lemma 6.2.
The $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ which give non-zero
contributions to
$\xi^{*}(\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}})_{B-s-1,s-d+1}$
have general point $[f:X\to Y]$ of the following form, also depicted in Figure
4.
$Y$ consists of two smooth components $Y_{1},Y_{h-1}$ of genus $1,h-1$,
respectively. Over $Y_{h-1}$, $X$ contains a single smooth connected component
$X_{g-d}$ of genus $g-d$, and over $Y_{1}$, $X$ contains $d$ elliptic
components mapping isomorphically to $Y_{1}$ (and attached at unramified
points to $X_{g-d}$).
Figure 4. The only possible contribution to
$\xi^{*}(\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}})_{B-s-1,s-d+1}$.
Here, $s=4$, and all other marked points on the source lie on $X_{g-d}$.
###### Proof.
Suppose $f:X\to Y$ is a general cover in a stratum
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$. Consider a marked point
$y\in Y$ with marked $d$-tuple $x_{1},\ldots,x_{d}$ of pre-images lying on the
elliptic tails of $X$. Because $s\geq 2$, at least one such marked fiber
exists.
The points $x_{1},\ldots,x_{d}$ must all lie on different components
$X_{1},\ldots,X_{d}$ of $X$, which must therefore map isomorphically to a
component $Y_{1}\subset Y$. Note that all of these components must be tails,
or else the valence of one of the elliptic vertices of $\Gamma$ would be
greater than 1. Above the node of $Y_{1}$ corresponding to a half-edge of
$\Gamma^{\prime}$, at least one node must be chosen to correspond to a half-
edge of $A$. In particular, $g(Y_{1})=g(X_{1})=\cdots=g(X_{d})=1$.
Furthermore, all $s-1$ of the marked points of $Y$ corresponding to marked
$d$-tuples on elliptic tails must lie on $Y_{1}$, and the resulting $s$-marked
elliptic curves $X_{1},\ldots,X_{d}$ are all isomorphic.
Let $Y_{h-1}$ be the closure of $Y-Y_{1}$, over which the spine of $X$ lives.
If the contribution of $\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ to
$\xi^{*}(\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}})_{B-s-1,s-d+1}$
is non-zero, then the image of
$\overline{\mathcal{H}}_{(\Gamma,\Gamma^{\prime})}$ in
$\overline{\mathcal{M}}_{g-d,2m_{2}+d(m_{d}-s+1)}$ must have dimension at
least $B-s-1$. A parameter counting argument as we have carried out in the
proofs of Lemmas 4.1, 5.3, and 5.9 shows that the $(b-(s-1))$-pointed curve
$Y_{h-1}$ must be smooth of genus $h-1$. In particular, the pre-image must be
a smooth and connected curve of genus $g-d$, completing the proof. ∎
Lemma 6.2 shows that the only non-zero contributions to
$\xi^{*}(\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}})_{B-s-1,s-d+1}$
come from the diagram
$\textstyle{\overline{\mathcal{H}}_{(g-d)/(h-1),d,m_{2}+m_{d}-s+2}\times\overline{\mathcal{M}}_{1,s}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{(\phi,\Delta)}$$\textstyle{\overline{\mathcal{H}}_{g/h,d,m_{2}+m_{d}}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\phi}$$\textstyle{\overline{\mathcal{M}}_{g-d,N-(s-2)d}\times(\overline{\mathcal{M}}_{1,s})^{d}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\overline{\mathcal{M}}_{g,N}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\overline{\mathcal{M}}_{g-d,2m_{2}+d(m_{d}-s+2)}\times(\overline{\mathcal{M}}_{1,s})^{d}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\xi}$$\textstyle{\overline{\mathcal{M}}_{g,2m_{2}+dm_{d}}}$
###### Proof of Proposition 6.1.
We apply the excess intersection formula in the top square; note that in the
functorial fiber product,
$\overline{\mathcal{H}}_{(g-d)/(h-1),d,m_{2}+m_{d}-s+2}\times\overline{\mathcal{M}}_{1,s}$
appears without non-reducedness, as the generic covers appearing in Lemma 6.2
are unramified at the nodes. Recall from §3.2 that we need to pass to the
normalization of $\overline{\mathcal{H}}_{(g-d)/(h-1),d,m_{2}+m_{d}-s+2}$. The
dimensions of $\overline{\mathcal{H}}_{(g-d)/(h-1),d,m_{2}+m_{d}-s+2}$ and
$\overline{\mathcal{M}}_{1,s}$ are $B-s-1,s$, respectively, and we are looking
for the contribution in homological dimension $(B-s-1,s-d+1)$ on
$\overline{\mathcal{M}}_{g-d,2m_{2}+d(m_{d}-s+1)}\times(\overline{\mathcal{M}}_{1,s})^{d}$.
On the other hand, the intersection in the top square occurs in dimension
$d-1$ greater than the expected.
Therefore, after applying the excess intersection formula, the piece of the
resulting class on
$\overline{\mathcal{M}}_{g-d,2m_{2}+d(m_{d}-s+1)}\times(\overline{\mathcal{M}}_{1,s})^{d}$
appearing in the desired pair of dimensions is a non-zero multiple of the
pushforward of
$\overline{\mathcal{H}}_{(g-d)/(h-1),d,(m_{2})^{2}(m_{d}-s)^{d}}\times\psi^{s-d+1},$
where the $\psi$ class on $\overline{\mathcal{M}}_{1,s}$ is taken at the
marked point to which the spine of $X$ is attached.
Therefore, $\xi^{*}(\overline{\mathcal{H}}_{g/h,d,(m_{2})^{2}(m_{d})^{d}})$
fails to have TKD, provided that $\psi^{s-d+1}\neq 0$. However, note that, by
the string equation,
$\int_{\overline{\mathcal{M}}_{1,s}}\psi_{i}^{s}=\int_{\overline{\mathcal{M}}_{1,1}}\psi\neq
0,$
where we have pushed forward by the map forgetting all but the $i$-th marked
point. In particular, all smaller powers of $\psi$ are also non-zero. ∎
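The chain of string-equation pushforwards can be made explicit (a standard computation; note that $\dim\overline{\mathcal{M}}_{1,s}=s$, so $\psi_{i}^{s}$ is a top-degree class):
$\int_{\overline{\mathcal{M}}_{1,s}}\psi_{i}^{s}=\int_{\overline{\mathcal{M}}_{1,s-1}}\psi_{i}^{s-1}=\cdots=\int_{\overline{\mathcal{M}}_{1,1}}\psi_{1}=\frac{1}{24}\neq 0,$
where each equality forgets one of the marked points other than the $i$-th.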
###### Proof of Theorem 1.3.
The first claim, for $d$-elliptic loci, is Theorem 5.7.
Now, suppose $h>1$ and $d=2$. Note in this case that $m_{2}=m_{d}\geq 1$ by
assumption. We prove the desired claim by induction on $h$ by applying
Proposition 6.1 with $s=2$. For the base case $h=1$, we already have the same
bounds, though there the condition $m_{2}\geq 1$ is superfluous. Now, we have
$g\geq 2h$ and $g+m_{2}\geq 2h+10$, so $(g-2)\geq 2(h-1)$ and $(g-2)+m_{2}\geq
2(h-1)+10$, and we may apply the inductive hypothesis.
Finally, suppose $h>1$ and $d>2$; again, when $h=1$, we have stronger bounds
after applying Lemma 4.2, so we may use this as the base case for induction on
$h$, applying Proposition 6.1 with $s=d-1$. The conditions $g\geq d$ and
$m_{d}\geq s-1=d-2$ are easily checked to be satisfied given the hypothesis of
the theorem, and the needed inequalities are still satisfied when
$(g,h,d,m_{2},m_{d})$ are replaced by $(g-d,h-1,d,m_{2},m_{d}-d+3)$, so the
proof is complete. ∎
## References
* [ACV03] Dan Abramovich, Alessio Corti, Angelo Vistoli, _Twisted bundles and admissible covers_ , Commun. Algebra 8 (2003), 3547-3618
* [DvHZ13] Maarten Derickx, Mark van Hoeij, Jinxiang Zeng, _Computing Galois representations and equations for modular curves $X_{H}(\ell)$_, arXiv 1312.6819
* [FP05] Carel Faber and Rahul Pandharipande, _Relative maps and tautological classes_ , J. Eur. Math. Soc. 7 (2005), 13-49
* [FP13] Carel Faber and Rahul Pandharipande, _Tautological and non-tautological cohomology of the moduli space of curves_ , in “Handbook of Moduli”, Vol. II, G. Farkas and I. Morrison, eds., Adv. Lect. Math. (ALM) 74, International Press, Boston (2013), 293-330
* [Get98] Ezra Getzler, _The semi-classical approximation for modular operads_ , Comm. Math. Phys. 194 (1998), 481-492
* [GP03] Thomas Graber and Rahul Pandharipande, _Constructions of nontautological classes on moduli spaces of curves_ , Michigan Math. J. 51 (2003), 93-109
* [HM82] Joe Harris and David Mumford, _On the Kodaira dimension of the moduli space of curves_. Invent. Math. 67 (1982), 23-86
* [Kee92] Sean Keel, _Intersection theory of moduli space of stable $N$-pointed curves of genus zero_. Trans. Amer. Math. Soc. 330 (1992), 545-574.
* [Leh47] Derrick H. Lehmer, _The vanishing of Ramanujan’s function $\tau(n)$_. Duke Math. J. 14 (1947), 429–433
* [L20a] Carl Lian, _$d$ -elliptic loci in genus 2 and 3_, Int. Math. Res. Not. IMRN, to appear.
* [L20b] Carl Lian, _The $\mathcal{H}$-tautological ring_, arXiv 2011.11565.
* [Mum83] David Mumford, _Towards an enumerative geometry of the moduli spaces of curves_ , Arith. Geom. 2 (1983), 271-328
* [PPZ15] Rahul Pandharipande, Aaron Pixton, and Dmitri Zvonkine, _Relations on $\overline{\mathcal{M}}_{g,n}$ via 3-spin structures_, J. Am. Math. Soc. 28 (2015), 279-309
* [Pet14] Dan Petersen, _The structure of the tautological ring in genus one_ , Duke Math. J. 163 (2014), 777-793
* [SvZ18] Johannes Schmitt and Jason van Zelm, _Intersection of loci of admissible covers with tautological classes_ , Selecta Math. (N.S.) 26, Article Nr. 79 (2020)
* [vZ18] Jason van Zelm, _Nontautological bielliptic cycles_ , Pacific J. Math. 294 (2018), 495-504
# Response-Time Analysis and Optimization for Probabilistic Conditional
Parallel DAG Tasks
Niklas Ueter, TU Dortmund University, Dortmund, Germany,
<EMAIL_ADDRESS>Mario Günzel, TU Dortmund University, Dortmund, Germany,
<EMAIL_ADDRESS>Jian-Jia Chen, TU Dortmund University, Dortmund, Germany,
<EMAIL_ADDRESS>
###### Abstract
Real-time systems increasingly use multicore processors in order to satisfy
thermal, power, and computational requirements. To exploit the architectural
parallelism offered by the multicore processors, parallel task models,
scheduling algorithms and response-time analyses with respect to real-time
constraints have to be provided. In this paper, we propose a reservation-based
scheduling algorithm for sporadic constrained-deadline parallel conditional
DAG tasks with probabilistic execution behaviour, targeting applications that
can tolerate a bounded number of deadline misses and bounded tardiness. We
devise design rules and analyses that guarantee bounded tardiness and a
specified bound on the probability of $k$ consecutive deadline misses, without
forcing late jobs to be aborted immediately.
###### Index Terms:
Real-Time Scheduling, Distributed Computing, Parallel Task Models
## I Introduction
A real-time system is a system in which missing a deadline may lead to a
catastrophe, so its temporal behaviour must be formally verified to ensure
safety. In the last decade, real-time systems have shifted
from uniprocessor to multiprocessor systems in order to deal with the
computational, thermal and energy constraints of modern complex applications.
To that end, a lot of research has been conducted with regards to the
challenge of how to make use of the parallelism provided by multiprocessors
for task sets with inter- and intra-task parallelism whilst satisfying
deadline constraints. Inter-task parallelism refers to the potential
concurrent execution of distinct tasks that execute sequentially, whereas
intra-task parallelism refers to tasks that allow for parallel execution.
Fork/join models [18], synchronous parallel task models, real-time scheduling
algorithms and response-time analyses thereof have been published, e.g., [29],
and DAG (directed-acyclic graph) based task models [14, 15, 2, 6, 25]. These
models enable tasks with higher execution demands and inherent parallelism
such as computer vision, radar tracking or video applications to be scheduled
with tighter deadlines.
Besides the different approaches and justifications to represent intra-task
parallelism using the above models, parallel applications in the domain of
autonomous driving and image processing are subject to multiple conditional
branches and control flow instructions, as stated by Melani et al. [25].
Moreover, the execution times of the subjobs of parallel algorithms in these
domains are highly varying due to varying sensor inputs, e.g., images for
object detection in autonomous vehicles. Beyond that, it was shown that the
multicore architecture complicates the worst-case timing analysis. This is due
to interference effects from contention on shared resources, e.g., caches,
memory etc. The authors in [13] argue that the _arbitration delay_ and _state
perturbation_ caused by resource sharing must be captured in the worst-case
bounds. All these uncertainties eventually lead to pessimistic response-time
analyses in real-time systems and thus lead to resource underutilization.
These architectural impacts on worst-case execution time analysis have been
addressed by, e.g., cache partitioning [1] and bandwidth-sharing mechanisms
for memory accesses [34].
Another approach to this problem is to _accept_ the uncertain execution
behaviour of the parallel tasks and to focus on the probabilistic response-
time characteristics. For many applications, e.g., closed-loop feedback
controllers, hard real-time system engineering (with a safe but very
pessimistic upper bound) is not required due to the inherent controller
robustness towards timing non-idealities like jitter and deadline misses. In
fact, if only a limited number of deadlines of a control application are
missed, the required quality of control can still be satisfied.
Recently, many research efforts have been focused on formalizing and analyzing
relaxations of deadline constraints [28], e.g., weakly hard systems where $m$
out of $k$ task instances must meet the deadlines. Moreover, Maggio et al.
[22] investigate the closed-loop control system stability under consecutive
deadline-miss constraints, which further motivates the need for scheduling
algorithms that can guarantee probabilistic bounds on consecutive deadline
misses to the application.
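As an illustration of the weakly hard $(m,k)$ constraint mentioned above, a small checker (a sketch; the function name and trace encoding are our own, not from the cited work) can verify a trace of deadline outcomes:

```python
def satisfies_m_out_of_k(misses, m, k):
    """Weakly hard (m, k) constraint: in every window of k consecutive
    jobs, at least m deadlines must be met.
    `misses` is a sequence of booleans, True meaning a deadline miss."""
    if len(misses) < k:
        return True  # no complete window of k jobs exists yet
    return all(
        k - sum(misses[i:i + k]) >= m  # deadlines met in this window
        for i in range(len(misses) - k + 1)
    )
```

For example, the trace miss/hit/miss/hit/hit satisfies a $(3,5)$ constraint, while three consecutive misses violate any constraint with $m\geq 1$ and $k=3$.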
In order to formally describe and verify quantitative guarantees on deadline
misses, several quantities are of importance for soft real-time systems: the
probability of a deadline miss, the probability of $k$ consecutive deadline
misses, and the maximum tardiness of a job. Although the guarantees are soft,
precisely quantifying such deadline misses is hard even for ordinary
sequential real-time task models scheduled on a uniprocessor system; a summary
of the literature in this research direction is provided in Section II. In the
state-of-the-art analyses, these quantities can only be derived under strict
model assumptions, e.g., that a job is aborted as soon as it exceeds its
deadline. This complexity is partly due to inter-task interference, i.e., the
preemption and interference patterns caused by higher-priority jobs, which
result in a large number of system states that must be considered in a
response-time analysis.
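To make one of these quantities concrete: under the strong, purely illustrative assumption that per-job deadline misses are i.i.d. with probability $p$ (the actual analyses in this paper do not require independence), the probability of ever observing $k$ consecutive misses among $n$ jobs can be computed by a small dynamic program over the length of the trailing miss run. The function below is our own sketch, not taken from the cited analyses:

```python
def prob_k_consecutive_misses(p, k, n):
    """Probability of observing at least k consecutive deadline misses
    among n jobs, assuming i.i.d. per-job miss probability p.
    state[r] = probability that the trailing run of misses has length r < k
    and that no run of length k has occurred so far."""
    state = [1.0] + [0.0] * (k - 1)
    absorbed = 0.0  # mass that has already seen k consecutive misses
    for _ in range(n):
        new = [0.0] * k
        for r, pr in enumerate(state):
            if pr == 0.0:
                continue
            new[0] += pr * (1.0 - p)    # deadline met: run resets to 0
            if r + 1 == k:
                absorbed += pr * p      # run reaches length k: absorbed
            else:
                new[r + 1] += pr * p    # run grows by one
        state = new
    return absorbed
```

For instance, with $p=0.5$, $k=2$, $n=3$ the result is $3/8$, matching direct enumeration of the three qualifying traces (miss-miss-miss, miss-miss-hit, hit-miss-miss).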
We aim to analyze, optimize and verify the schedulability of probabilistic
conditional parallel DAG tasks on identical multi-processors with respect to
quantities such as deadline-miss probabilities, consecutive deadline-miss
probabilities and tardiness constraints. When considering the scheduling and
analysis of probabilistic parallel DAG tasks, not only inter-task, but also
intra-task interference, and multiprocessor scheduling anomaly effects (the
early completion of jobs may lead to longer response-times) must be
considered, which complicate the analyses for the above mentioned quantities.
Contributions: We propose scheduling algorithms based on reservations, i.e.,
service provisioning, for the probabilistic analysis of parallel DAG tasks.
This avoids the complexities and anomaly effects induced by inter-task
interference and thus allows us, for the first time, to solve the stated
objective. More precisely, we make the following contributions:
* •
We propose a probabilistic version and formal description of the widely used
conditional parallel DAG task model in Section III.
* •
We contribute scheduling algorithms and response-time analyses for
probabilistic conditional parallel DAG tasks based on resource reservation.
The reservations can be scheduled alongside real-time workloads using any
existing scheduling paradigm. In addition, we provide design rules to devise
reservations that guarantee probabilistic characteristics such as bounded
tardiness, stability, and probabilistic upper-bounds for $k$-consecutive
deadline misses. Our approach is anomaly-free because any early completions
due to scheduling or dynamic DAG structures are handled by the adoption of
resource reservation and the abstraction of the workload model.
To the best of our knowledge, this is the first paper that addresses the
analysis and optimization for probabilistic conditional parallel DAG task sets
with quantitative guarantees.
## II Related Work
The scheduling of parallel real-time tasks with worst-case parameters, e.g.,
worst-case execution times, upon multiprocessor systems has been extensively
studied for different parallel task models. An early classification of
parallel tasks with real-time constraints into _rigid_ , _moldable_ or
_malleable_ has been described by Goossens et al. [16]. Early work concerning
parallel task models focuses on synchronous parallel task models, e.g., [23,
29, 11]. Synchronous models are an extension of the fork-join model [12] in
the sense that they allow different numbers of subtasks in each (synchronized)
segment, and that this number can be greater than the number of processors.
Many of the proposed scheduling algorithms and analyses are based on
decomposition, i.e., the decomposition of the parallel task into a set of
sequential tasks and the scheduling thereof.
Recently, the directed-acyclic graph (DAG) task model has been proposed and
been subject to scheduling algorithm design and analysis. The DAG task is a
more general parallel structure where each task is described by a set of
subtasks and their precedence constraints that are represented by a directed-
acyclic graph. This parallel model has been shown to correspond to models in
parallel computing APIs such as OpenMP by Melani et al. [31] or Sun et al.
[32]. This model has been studied in the case of global scheduling in e.g.,
[6, 26] or partitioned scheduling algorithms [15, 8]. There has also been
research regarding approaches of synchronous and general DAG tasks that are
not decomposition based, e.g., federated scheduling as proposed by Li et al.
[21] that avoids inter-task interference for parallel tasks. In federated
scheduling, the set of DAG tasks is partitioned into tasks that can be
executed sequentially on a single processor whilst meeting their deadline
requirements, and tasks that need to execute in parallel in order to meet
their deadlines. The latter tasks are then assigned to execute on a set of
processors exclusively.
Motivated by the conditional execution behaviour of modern parallel
applications, e.g., autonomous driving or computer vision, the conditional DAG
task model has been proposed. A plethora of research concerning the real-time
schedulability of this model has been conducted by e.g., [25, 3, 10]. Most
recently, the computational complexity of the scheduling of conditional DAG
with real-time constraints has been investigated by Marchetti et al. [24].
However, due to the worst-case parameters and the worst-case conditional
structure that has to be considered during real-time verification of the
scheduling algorithms, resource over-provisioning is inevitable.
For soft real-time applications that can tolerate a bounded number of
deadline misses, probabilistic task models and response-time analyses for
these kinds of parallel tasks are of interest. Moreover, the worst-case
parameter inference is increasingly complex and pessimistic for parallel
architectures further bolstering the importance of probabilistic models and
analyses. For sequential stochastic tasks a plethora of prior work concerning
probabilistic analyses exists, e.g., [30, 17]. Recent work focused on the
improvements of efficiency in convolution-based probabilistic deadline-miss
analysis approaches. In Brüggen et al. [7], the authors propose efficient
convolutions over multinomial distributions by exploiting several state space
reduction techniques and approximations using Hoeffding’s and Bernstein’s
inequality and unifying equivalence classes. Chen et al. [9] propose the
efficient calculation of consecutive deadline-misses using Chebyshev’s
inequality and moment-generating functions and optimizations thereof. There
has also been efforts to use reservation servers to schedule probabilistic
sequential tasks. For example, Palopoli et al. [27] have shown how to
calculate the probability of a deadline miss for periodic real-time tasks
scheduled using the constant bandwidth server (CBS). The authors reduced the
problem to the computation of a steady-state probability of an infinite-state
discrete-time Markov chain with periodic structure. In the
context of parallel DAG tasks Ueter et al. proposed a reservation scheme to
schedule sporadic arbitrary-deadline DAG tasks [33] with real-time
constraints. Another approach to the probabilistic analysis of real-time
tasks is real-time queuing theory by Lehoczky et al. [19], which is an
extension of classical queuing theory to systems with deadlines. An initial
work that analyzed the probabilistic response-times of parallel DAG tasks was
proposed by Li [20]. Li extended prior work on federated scheduling [21] by
facilitating queuing theory to devise federated scheduling parameters such
that each task’s tardiness is bounded and soft real-time requirements are met.
A more recent work on the probabilistic response-time analysis of parallel DAG
tasks is by Ben-Amor et al. [4, 5]. The authors have studied the probabilistic
response-time analysis of parallel DAG tasks upon multiprocessor systems using
partitioned fixed-priority scheduling at the subtask level. In their model
each subtask is described by a probabilistic worst-case execution time and
static precedence constraints between them. Based on the above, the authors
derive probabilities for subtask response-times using convolution-based
approaches and compose an overall response-time.
## III Task and Problem Model
$3$$v_{1}$$1$$v_{2}$$2$$v_{3}$$1$$v_{4}$$2$$v_{5}$$5$$v_{6}$$3$$v_{7}$$0.4$$0.6$$0.7$$0.3$
Figure 1: An exemplary probabilistic conditional DAG task in which each
conditional node (diamond) denotes that only one of its adjacent subjobs is
released (with the annotated probability) during runtime. In this specific
example, four different DAG structures can be instantiated during runtime.
We consider a given set $\mathbb{T}$ of probabilistic sporadic constrained-
deadline conditional parallel directed-acyclic graph (DAG) tasks in a
multiprocessor system that is comprised of $M$ identical (homogeneous)
processors. Each task releases an infinite sequence of task instances, namely
jobs. Each conditional parallel DAG task $\tau_{i}\in\mathbb{T}$ is defined by
a conditional DAG structure $G_{i}$ (to be defined later), a relative deadline
$D_{i}$ and a minimal inter-arrival time $T_{i}$, which denotes the minimal
distance between two job releases. In this paper we only consider constrained-
deadline tasks, i.e., $D_{i}\leq T_{i}$ for every task $\tau_{i}$. An
exemplary probabilistic conditional DAG is illustrated in Figure 1. A
probabilistic conditional directed-acyclic graph is composed of nodes $V$ and
edges $E$ that denote precedence and control flow constraints. Each node is
either a _subjob_ node with an associated execution time or a _condition_ node
that denotes probabilistic conditional branching to subjobs. In the
illustrated example, two decision nodes with two possible branching options
each are given. The given structure yields four different enumerable DAG
realizations whose probability of realization is given by the probability of
traversing a specific path of condition nodes. A conditional DAG is composed
of finitely many DAGs, each of which consist of a tuple $(V,E)$, where $V$
denotes the finite set of subjobs and the relation $E\subseteq V\times V$
denotes the precedence constraints of these subjobs such that there are no
directed circles in the underlying graph. For each of these DAGs the _volume_
and _length_ parameters are calculated as follows. We use
$pre(v_{i}):=\\{v_{j}\in V~{}|~{}(v_{j},v_{i})\in E\\}\text{\;\>and\;\>}v_{j}\prec v_{i}\text{~{}if~{}}v_{j}\in pre(v_{i})$.
Conversely, we use $succ(v_{i}):=\\{v_{j}\in V~{}|~{}(v_{i},v_{j})\in E\\}\text{\;\>and\;\>}v_{j}\succ v_{i}\text{~{}if~{}}v_{j}\in succ(v_{i})$.
###### Definition 1 (Path).
A path $\pi$ in a directed-acyclic graph $G$ is any sequence of subjobs
$v_{i_{1}}\prec v_{i_{2}}\prec\ldots\prec v_{i_{k}}$ for $v_{i_{j}}\in V$ such
that $pre(v_{i_{1}})=\emptyset$ and $succ(v_{i_{k}})=\emptyset$. ∎
###### Definition 2 (Length).
Let a path $\pi$ be a sequence of subjobs such that each subjob in the
sequence is an immediate successor of the previous subjob in terms of
precedence constraints. Then the length of a path is given by
$\ell en(\pi):=\sum_{v_{i}\in\pi}\ell en(v_{i})$
where the length of a subjob denotes its execution time. Subsequently, the
length of DAG $G$ is given by
$\ell en(G):=\max\\{\ell en(\pi)~{}|~{}\pi\text{ is a path in }G\\}.$
∎
###### Definition 3 (Volume).
The volume of DAG $G$ is given by the graph’s cumulative execution time, i.e.,
$vol(G):=\sum_{v_{i}\in V}\ell en(v_{i}).$
∎
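The two parameters can be computed in one backward pass over the precedence relation. Below is a minimal sketch in Python, using a small hypothetical fork-join DAG (not the one from Figure 1); all node names and execution times are illustrative.

```python
from functools import lru_cache

def vol(exec_time):
    """Volume: cumulative execution time of all subjobs (Definition 3)."""
    return sum(exec_time.values())

def length(exec_time, pre):
    """Length: maximal cumulative execution time over any precedence
    path (Definition 2), computed by memoized recursion."""
    @lru_cache(maxsize=None)
    def longest_ending_at(v):
        # Longest path ending at v: v's own time plus the best predecessor.
        best = max((longest_ending_at(u) for u in pre[v]), default=0)
        return exec_time[v] + best
    return max(longest_ending_at(v) for v in exec_time)

# Hypothetical example DAG: a -> {b, c} -> d.
exec_time = {"a": 3, "b": 1, "c": 2, "d": 5}
pre = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
```

For this example, the volume is $3+1+2+5=11$ and the length is the longest chain $a\to c\to d$ with $3+2+5=10$.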
### III-A Probabilistic Parametric Description
probability | length | volume
---|---|---
0.42 | 12 | 13
0.18 | 13 | 14
0.28 | 9 | 10
0.12 | 11 | 11
TABLE I: Tabular representation of the probabilities of the parameters volume
and length for the probabilistic conditional DAG task illustrated in Figure 1.
Each probabilistic conditional DAG task is described by the tuple
$\tau_{i}=(G_{i},D_{i},T_{i})$ where $G_{i}$ denotes a probabilistic
conditional DAG structure, $D_{i}$ denotes the relative deadline and $T_{i}$
denotes the minimal inter-arrival time between two job releases. For each task
$\tau_{i}\in\mathbb{T}$ a cumulative distribution function (CDF) is inferred
from the conditional DAG structure, where $F_{i}(u,v)$ describes the
probabilistic behaviour of the _volume_ and _length_ of a DAG instance. That
is, each task $\tau_{i}$ releases an infinite number of jobs
$\tau_{i,\ell},~{}\ell=0,1,2,\dots$ and each job is associated with a DAG
instance $G_{i,\ell}$ such that the parameters _volume_ and _length_ of
$G_{i,\ell}$ are realizations according to the probabilistic
characterization of the distribution function.
For instance, the distribution function of the conditional DAG illustrated in
Figure 1 is devised by calculating the probability of each of the DAG's
realizations and its respective parameter values. The instance illustrated in
Figure 2 represents the graph where both upper edges are chosen, for which
the probability is $0.7\cdot 0.6=0.42$. The associated length is $12$ and the
associated volume is $13$. By similar reasoning, choosing the edges with
probability $0.7\cdot 0.4$, $0.3\cdot 0.6$, and $0.3\cdot 0.4$ yields
realization probabilities of $0.28$, $0.18$, and $0.12$ for the associated
DAG structures, respectively. Calculating the volume and length of each of
these realizations yields the data listed in Table I. Consequently, we derive
$F_{i}(u,v)=\mathbb{P}(vol(G_{i})\leq u,\ell en(G_{i})\leq v)$ as follows:
$\mathds{1}(u-13)\cdot\mathds{1}(v-12)\cdot
0.42+\mathds{1}(u-14)\cdot\mathds{1}(v-13)\cdot
0.18+\mathds{1}(u-10)\cdot\mathds{1}(v-9)\cdot
0.28+\mathds{1}(u-11)\cdot\mathds{1}(v-11)\cdot 0.12$
where $\mathds{1}$ denotes the step function, i.e., $\mathds{1}(x)$ is $1$ if
$x\geq 0$ and $0$ otherwise. We note that for probabilistic conditional DAG
tasks as presented, the CDF is a step function with finitely many steps.
Moreover, we assume that the probabilities of DAG instances are independent.
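Since the CDF is a finite sum of indicator terms, it can be evaluated directly from the enumerated realizations. A minimal sketch, using the (probability, length, volume) rows of Table I:

```python
# Rows of Table I: (probability, length, volume) of each DAG realization.
realizations = [(0.42, 12, 13), (0.18, 13, 14), (0.28, 9, 10), (0.12, 11, 11)]

def F(u, v):
    """F_i(u, v) = P(vol(G_i) <= u, len(G_i) <= v): sum the
    probabilities of all realizations dominated by (u, v)."""
    return sum(p for (p, length, volume) in realizations
               if volume <= u and length <= v)
```

For instance, $F_{i}(13,12)$ collects the rows with volume at most $13$ and length at most $12$, i.e., $0.42+0.28+0.12=0.82$.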
### III-B Tardiness
Every job that misses its deadline must be handled by the system, i.e., a
mechanism must be devised that decides the actions taken upon such events. A
common mechanism is the immediate abortion of every job which exceeds its
deadline in order to avoid any interference of subsequent jobs. This approach
is inefficient in the sense that all computation results and state changes are
discarded and may even have to be revoked for consistency reasons, which holds
especially true if the amount of time by which the deadline is exceeded is
rather small. Informally speaking, the tardiness of a job measures the delay
of the job with respect to its deadline.
###### Definition 4 (Tardiness).
Let $\delta_{i}(\ell)$ denote the tardiness of the $\ell$-th job of task
$\tau_{i}$, i.e., the amount of time that the $\ell$-th job exceeds the task’s
deadline under the consideration of possibly pending workload from prior jobs.
The tardiness can be recursively stated as
$\delta_{i}(\ell)=\max\\{\delta_{i}(\ell-1)+(R_{i,\ell}-D_{i}),0\\}$, where
$R_{i,\ell}$ denotes the response time of the $\ell$-th job of task
$\tau_{i}$. Furthermore, $\delta_{i}(0)=0$ by definition. ∎
We note that due to this definition, the $\ell$-th job of task $\tau_{i}$
meets its deadline if $\delta_{i}(\ell)=0$, and it misses its deadline if
$\delta_{i}(\ell)>0$. To mitigate this problem, we intend to bound the
tardiness of each job of a task by a tardiness bound.
###### Definition 5 (Tardiness Bound).
A task $\tau_{i}$ is said to have a tardiness bound $\rho_{i}>0$ if any job of
that task will be aborted if the job’s tardiness exceeds $\rho_{i}$, i.e., we
have $0\leq\delta_{i}(\ell)\leq\rho_{i}$ for all $\ell\geq 0$. ∎
The tardiness bound is user-specified and refines the formal description of a
probabilistic sporadic constrained-deadline parallel DAG task to the tuple
$(F_{i},D_{i},T_{i},\rho_{i})$.
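The recursion of Definition 4 together with the bound of Definition 5 can be sketched as follows. The response times are assumed inputs (e.g., sampled or bounded), and we assume that aborting a job simply caps the carried-over tardiness at $\rho_{i}$:

```python
def tardiness_sequence(response_times, D, rho):
    """delta_i(l) = max(delta_i(l-1) + (R_{i,l} - D_i), 0), with the
    carried-over tardiness capped at rho (an assumption on how the
    abortion mechanism affects pending work)."""
    delta, trace = 0, []
    for R in response_times:
        delta = min(max(delta + (R - D), 0), rho)
        trace.append(delta)
    return trace
```

For example, with $D_{i}=10$, $\rho_{i}=4$, and response times $12,9,13,8$, the tardiness evolves as $2,1,4,2$; the third job is a deadline miss whose tardiness hits the bound.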
### III-C Deadline Misses
We aim to design reservation systems that provide sufficient service to
each task $\tau_{i}$ in the task set
$\mathbb{T}=\\{\tau_{1},\tau_{2},\ldots,\tau_{n}\\}$ such that the probability
of $k$ consecutive deadline misses is bounded.
###### Definition 6 (Consecutive Deadline Misses).
Any sequence of $k$ consecutive job releases
$\tau_{i,\ell},\tau_{i,\ell+1},\ldots,\tau_{i,\ell+k-1}$ for $\ell\geq 0$ is
subject to $k$-consecutive deadline misses if the following conditions hold:
* •
All jobs in the sequence miss their deadline
* •
Either $\ell=0$ or the previous job $\tau_{i,\ell-1}$ does not miss its
deadline. ∎
For each task we define a function $\theta_{i}:\mathbb{N}\to[0,1]$ to specify
that we tolerate $k$ consecutive deadline misses for a given probability of at
most $\theta_{i}(k)$.
###### Definition 7 ($k$ Consecutive Deadline Miss Constraint).
Let
$\phi_{i}(j,k):=\mathbb{P}(\delta_{i}(j)>0,\dots,\delta_{i}(j+k-1)>0~{}|~{}j=0\text{
or }\delta_{i}(j-1)=0)$ denote the probability that the sequence
$\tau_{i,j},\tau_{i,j+1},\ldots,\tau_{i,j+k-1}$ suffers from $k$-consecutive
deadline misses. Then a probabilistic conditional DAG task $\tau_{i}$ is said
to satisfy the deadline constraint $\theta_{i}(k)$ if
$\sup_{j\geq 0}\\{\phi_{i}(j,k)\\}=\phi_{i}(0,k)\leq\theta_{i}(k),$ (1)
i.e., at each position $j$ the probability $\phi_{i}(j,k)$ does not exceed the
threshold $\theta_{i}(k)$. ∎
We note that the equality in Eq. (1) is due to the lack of pending workload
prior to the release of job $\tau_{i,j}$.
$3$$v_{1}$$1$$v_{2}$$1$$v_{4}$$5$$v_{6}$$3$$v_{7}$ Figure 2: DAG instance of
the exemplary conditional DAG task shown in Figure 1 where the conditional
branches with probability $0.7$ and $0.6$ are chosen.
## IV Scheduling Problem
We use a reservation system to handle the scheduling of the DAG tasks and use
any partitioned scheduling algorithm to schedule the reservation system and
other tasks in the system.
### IV-A Reservations
In a _reservation system_, service is reserved for each probabilistic parallel
DAG task $\tau_{i}$ according to some regulation. Within those reservations,
the task instances of $\tau_{i}$ can be processed. The reservation system is
$m_{i}$-_in-parallel_ if there are at most $m_{i}\in\mathbb{N}$ reservations
at the same time. In this work we consider a simplified version of an
in-parallel reservation system:
###### Definition 8 (Our Reservation System).
A _reservation system_ consists of $m_{i}$ reservation servers that each
provide $E_{i}$ amount of service and that are replenished every $P_{i}>0$
time units. More specifically, to provide the service, every $P_{i}$ time
units a multiset of $m_{i}\in\mathbb{N}$ distinct reservations is activated,
each of which guarantees a service of $E_{i}$ time units over an interval of
length $P_{i}$.
The instances of a task are assigned to the provided service in a first-in-
first-out (FIFO) manner. Furthermore, we assume that at each time all assigned
reservations only serve the subjobs of a single DAG job by the FIFO policy.
The reservation system is scheduled upon $M$ identical multiprocessors
according to any scheduling paradigm and provides service to the DAG jobs
whenever they are scheduled as follows.
###### Definition 9 (List-Scheduling).
In a list schedule on $m_{i}$ in-parallel reservation servers, a subjob of a
given DAG job $G=(V,E)$ is executed on any reservation server that is idle and
scheduled for execution, as soon as all preceding subjobs have executed
until completion. More formally, the starting time $s_{i}$ for each subjob
$v_{i}$ is given by $\min\\{t~{}|~{}\text{some scheduled reservation server
idles at }t,~{}t\geq\max\\{f_{j}~{}|~{}v_{j}\in pre(v_{i})\\}\\}$. ∎
For the remainder of this section, we assume the existence of a _feasible_
schedule $S$ upon $M$ identical multiprocessors, meaning that all reservations
will provide the promised service.
###### Definition 10 (Work).
Let $work_{i}^{S}(t_{1},t_{2})$ denote the amount of workload from DAG jobs
derived by task $\tau_{i}$ that was _worked_ during the time interval $t_{1}$
to $t_{2}$ given the schedule $S$. ∎
Based on this definition, the worst-case response time of a job
$\tau_{i,\ell}$ of a DAG task $\tau_{i}$ that was released at $t_{i,\ell}$ is
given by the smallest $t^{\prime}\geq t_{i,\ell}$ such that
$work_{i}^{S}(t_{i,\ell},t^{\prime})\geq
vol(G_{i}^{\ell})+backlog_{i}^{S}(t_{i,\ell})$, where
$backlog_{i}^{S}(t_{i,\ell})$ is the amount of unfinished work at time
$t_{i,\ell}$ of jobs of $\tau_{i}$ released before $t_{i,\ell}$. Note that
$backlog_{i}^{S}(t_{i,\ell})=0$ if there are no previous deadline misses since
we assume $D_{i}\leq P_{i}$ in our system model. In the following we express
the processed work in terms of provided service and develop a response-time
bound as stated in Theorem 1.
For sake of argument, let $S$ denote a _feasible_ schedule of a reservation
system that works a job of a DAG task $\tau_{i}$ until completion. Furthermore
let $serv_{i}^{S}(t_{1},t_{2})$ denote the service that is provided to the DAG
job during the time interval from $t_{1}$ to $t_{2}$ in the schedule $S$.
###### Definition 11 (Envelope).
Let $S$ be a concrete schedule of $\mathbb{T}$. Consider a given DAG job
instance $G$ of some task in $\mathbb{T}$ with subjobs
$V=\left\\{{v_{1},\dots,v_{\ell}}\right\\}$. Let each subjob $v_{k}$ have the
starting time $s_{k}$ and finishing time $f_{k}$ in $S$. We define the
envelope $s_{k_{1}},f_{k_{1}},s_{k_{2}},f_{k_{2}},\dots,s_{k_{p}},f_{k_{p}}$
of $G$, with $p\in\\{1,\dots,\ell\\}$, recursively by the following
properties:
1. 1.
$k_{i}\neq k_{j}\in\left\\{{1,\dots,\ell}\right\\}$ for all $i\neq j$
2. 2.
$v_{k_{p}}$ is the subjob of $V$ with maximal finishing time
3. 3.
$v_{k_{i-1}}$ is the subjob in $pre(v_{k_{i}})$ with maximal finishing time,
for all $i\in\left\\{{p,p-1,\dots,2}\right\\}$
4. 4.
$pre(v_{k_{1}})=\emptyset$
We note that the definition of an envelope for a DAG job instance may not be
unique if there are subjobs with equal finishing times. In this case we choose
one among them arbitrarily. ∎
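The envelope of Definition 11 can be constructed backwards, starting from the subjob with maximal finishing time. A minimal sketch, assuming per-subjob start and finishing times from a concrete schedule are given (ties are broken arbitrarily, consistent with the definition); the example data is hypothetical:

```python
def envelope(pre, start, finish):
    """Construct an envelope per Definition 11: begin at the subjob
    with maximal finishing time (property 2), repeatedly move to the
    predecessor with maximal finishing time (property 3), and stop at
    a subjob without predecessors (property 4)."""
    v = max(finish, key=finish.get)
    chain = [v]
    while pre[v]:
        v = max(pre[v], key=finish.get)
        chain.append(v)
    chain.reverse()
    return [(start[v], finish[v]) for v in chain]

# Hypothetical schedule of a fork-join job a -> {b, c} -> d.
pre = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
start = {"a": 0, "b": 3, "c": 3, "d": 8}
finish = {"a": 3, "b": 4, "c": 8, "d": 13}
```

Here the envelope is $a, c, d$, since $c$ finishes later than $b$ among the predecessors of $d$.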
Based on the definition of an envelope, we are able to formally state the
following lemma.
###### Lemma 1.
Given a schedule $S$ of $\mathbb{T}$. We consider a task
$\tau_{i}\in\mathbb{T}$ with an $m_{i}$-in-parallel reservation system. Let
$G=\tau_{i,j}$ be one DAG job instance of $\tau_{i}$ with envelope
$s_{k_{1}},f_{k_{1}},\dots,s_{k_{p}},f_{k_{p}}$. Then the amount of work that
is finished during the interval from $f_{k_{q-1}}$ to $f_{k_{q}}$ for
$q\in\\{2,\dots,p\\}$ is lower bounded by
$\displaystyle work_{i}^{S}(f_{k_{q-1}},f_{k_{q}})\geq$ $\displaystyle
serv_{i}^{S}(f_{k_{q-1}},s_{k_{q}})+serv_{i}^{S}(s_{k_{q}},f_{k_{q}})$
$\displaystyle-(m_{i}-1)\ell en(v_{k_{q}})$
where $v_{k_{q}}$ is the subjob from the envelope starting at time $s_{k_{q}}$
and finishing at $f_{k_{q}}$.
###### Proof:
In the proof we split the work at time $s_{k_{q}}$ and estimate each summand
of
$work_{i}^{S}(f_{k_{q-1}},f_{k_{q}})=work_{i}^{S}(f_{k_{q-1}},s_{k_{q}})+work_{i}^{S}(s_{k_{q}},f_{k_{q}})$
on its own. Combining both estimations yields the desired result.
In a first step we will prove that between finish and start of two consecutive
subjobs in the envelope, the provided service is fully utilized by the DAG
instance, i.e.,
$work_{i}^{S}(f_{k_{q-1}},s_{k_{q}})=serv_{i}^{S}(f_{k_{q-1}},s_{k_{q}})$
holds for all $q\in\\{2,\dots,p\\}$. Given the work-conserving property of
list-scheduling used to dispatch subjobs to the service, an eligible subjob
is scheduled whenever service is available. Since by definition $s_{k_{q}}$ is
the earliest time that $v_{k_{q}}$ is able to execute, all service during
$f_{k_{q-1}}$ to $s_{k_{q}}$ must have been used to _work_ on other
(non-envelope) subjobs.
Secondly, we show that the workload $work_{i}^{S}(s_{k_{q}},f_{k_{q}})$ from
start to finish of a subjob in the envelope can be estimated by
$\max\\{serv_{i}^{S}(s_{k_{q}},f_{k_{q}})-(m_{i}-1)\cdot\ell
en(v_{k_{q}}),\ell en(v_{k_{q}})\\}.$
Clearly, between the starting time and finishing time of $v_{k_{q}}$ at least
$\ell en(v_{k_{q}})$ will be worked. Additionally, given the provided service
$serv_{i}^{S}(s_{k_{q}},f_{k_{q}})$, due to the sequential execution of
$v_{k_{q}}$, at most $m_{i}-1$ reservations of duration $\ell en(v_{k_{q}})$
may be unused. Therefore
$work_{i}^{S}(s_{k_{q}},f_{k_{q}})\geq\max\\{serv_{i}^{S}(s_{k_{q}},f_{k_{q}})-(m_{i}-1)\cdot\ell
en(v_{k_{q}}),\ell en(v_{k_{q}})\\}$. ∎
Based on this lemma, we can calculate the response time of a DAG job. To do
this, we first extend the lemma.
###### Lemma 2.
Under the conditions of Lemma 1, we have that
$work_{i}^{S}(r_{G},r_{G}+t)\geq serv_{i}^{S}(r_{G},r_{G}+t)-(m_{i}-1)\ell
en(G)$ (2)
holds, where $r_{G}$ is the release of job $G$ and $0\leq t\leq f_{k_{p}}$.
###### Proof:
The main part to prove this lemma is already done in Lemma 1. We just have to
be careful about the scenarios where $t$ is not a time instant of the
envelope.
Similarly to the proof of Lemma 1 we can show that
$work_{i}^{S}(f_{k_{q-1}},t)=serv_{i}^{S}(f_{k_{q-1}},t)$ for all
$t\in[f_{k_{q-1}},s_{k_{q}}]$ and that $work_{i}^{S}(s_{k_{q}},t)\geq
serv_{i}^{S}(s_{k_{q}},t)-(m_{i}-1)\ell en(v_{k_{q}})$ for all
$t\in[s_{k_{q}},f_{k_{q}}]$. Furthermore, by the same reasoning
$work_{i}^{S}(r_{G},t)=serv_{i}^{S}(r_{G},t)$ holds for all
$t\in[r_{G},s_{k_{1}}]$.
We obtain the desired result by splitting the interval $[r_{G},t]$ into parts
already described above and estimating all of them at the same time. To
formalize this, we define
$\mu:=(r_{G},s_{k_{1}},f_{k_{1}},\dots,s_{k_{p}},f_{k_{p}}).$
For $q\in\\{1,\dots,2p+1\\}$ we denote by $\mu(q)$ the $q$-th entry of $\mu$
and by $\mu^{t}(q):=\min\\{\mu(q),t\\}$ the $q$-th entry bounded by $t$.
By decomposing $work_{i}^{S}(r_{G},r_{G}+t)$, we obtain that it can be written
as the sum of $\sum_{q=1}^{p}work_{i}^{S}(\mu^{t}(2q-1),\mu^{t}(2q))$ and of
$\sum_{q=1}^{p}work_{i}^{S}(\mu^{t}(2q),\mu^{t}(2q+1))$. The first summand is
lower bounded by the sum of the corresponding service values
$\sum_{q=1}^{p}serv_{i}^{S}(\mu^{t}(2q-1),\mu^{t}(2q))$, and the second
summand from above is lower bounded by
$\sum_{q=1}^{p}\left(serv_{i}^{S}(\mu^{t}(2q),\mu^{t}(2q+1))-(m_{i}-1)\ell
en(v_{k_{q}})\right)$. By combining both of the results, we obtain the lower
bound
$\sum_{q=1}^{2p}serv_{i}^{S}(\mu^{t}(q),\mu^{t}(q+1))-(m_{i}-1)\bigg{(}\sum_{q=1}^{p}\ell
en(v_{k_{q}})\bigg{)},$
which is again bounded from below by $serv_{i}^{S}(r_{G},r_{G}+t)-(m_{i}-1)\ell en(G)$. We
conclude that $work_{i}^{S}(r_{G},r_{G}+t)\geq
serv_{i}^{S}(r_{G},r_{G}+t)-(m_{i}-1)\ell en(G)$. ∎
###### Definition 12 (Service Bound Function).
For a task $\tau_{i}\in\mathbb{T}$ the minimal service that is provided by the
reservation system during an interval of length $t\geq 0$ is denoted by
$sbf_{i}(t)$. We call $sbf_{i}$ the _service bound function_ of $\tau_{i}$. ∎
We use the service bound function to provide a lower bound
$serv_{i}^{S}(r_{G},r_{G}+t)\geq sbf_{i}(t)$ for all schedules $S$. This leads
us to the following theorem.
###### Theorem 1 (Response-Time Bound).
We consider a task $\tau_{i}\in\mathbb{T}$. Assume that the reservation system
of $\tau_{i}$ is $m_{i}$-in-parallel and its minimal service is described by
$sbf_{i}$. Let $G$ be the DAG which describes the task instance $\tau_{i,j}$
of $\tau_{i}$. Then the response time of $G$ is upper-bounded by
$\min\\{t>0~{}|~{}sbf_{i}(t)\geq W_{i}^{G}\\},$ (3)
where $W_{i}^{G}:=vol(G)+(m_{i}-1)\cdot\ell en(G)+backlog_{i}^{S}(r_{G})$ for
notational brevity.
###### Proof:
Let $t^{\prime}:=\min\\{t>0~{}|~{}sbf_{i}(t)\geq W_{i}^{G}\\}$. We do the
proof by contraposition: If we assume that $t^{\prime}$ does not bound the
response time, then $t^{\prime}<f_{k_{p}}$, where $f_{k_{p}}$ is the last
entry in the envelope of $G$. In this case Lemma 2 yields:
$\displaystyle work_{i}^{S}(r_{G},r_{G}+t^{\prime})$ $\displaystyle\geq
serv_{i}^{S}(r_{G},r_{G}+t^{\prime})-(m_{i}-1)\ell en(G)$ $\displaystyle\geq
sbf_{i}(t^{\prime})-(m_{i}-1)\ell en(G)$
By the definition of $t^{\prime}$ we have $sbf_{i}(t^{\prime})\geq
$vol(G)+(m_{i}-1)\cdot\ell en(G)+backlog_{i}^{S}(r_{G})$. Hence,
$work_{i}^{S}(r_{G},r_{G}+t^{\prime})\geq vol(G)+backlog_{i}^{S}(r_{G})$,
i.e., the job $G$ is finished at time $t^{\prime}$. This contradicts the
assumption, so $t^{\prime}\geq f_{k_{p}}$. ∎
Figure 3: Supply bound function $sbf(t)$ of the reservation system
(worst-case schedule of provided service).
We emphasize that our analysis does not enforce any specific kind of
reservation scheme or supply-bound function; the complexity of the
response-time calculation depends only on the supply bound function. For
instance, Figure 3 shows the supply-bound function of _our reservation
system_ from Definition 8. As depicted, there may be no
service provided to the task for up to $2(P_{i}-E_{i})$ time units in the
worst case. We note that the first activation of reservations has to occur no
later than at the release of the first job of $\tau_{i}$. Otherwise our
analysis becomes invalid. However, the reservation system can stop assigning
new reservation servers if there is no pending or unfinished job of
$\tau_{i}$, as long as it starts assigning new reservations if new jobs arise
in the ready queue.
If we assume a reservation server as in Definition 8, then the response-time
or service-time of a DAG job $G$ is described by the following theorem.
###### Theorem 2 (Service Time).
Let $G=\tau_{i,j}$ be a task instance of $\tau_{i}$. We assume that for
$\tau_{i}$ we have a reservation system as in Definition 8 with $m_{i}$
equal-sized in-parallel services $E_{i}\leq P_{i}$. We can give an upper bound
$R_{G}$ on the response time of $G$ by
$R_{G}=\left(\left\lceil\frac{W_{i}^{G}}{m_{i}E_{i}}\right\rceil+1\right)(P_{i}-E_{i})+\frac{W_{i}^{G}}{m_{i}}$
(4)
where $W_{i}^{G}:=vol(G)+(m_{i}-1)\ell en(G)+backlog_{i}^{S}(r_{G})$ for
notational brevity.
###### Proof:
For the proof we assume that $vol(G)>0$ since otherwise no work has to be done
and $R_{G}=0$ is already a trivial response-time bound. We aim to utilize
Theorem 1. Therefore, we have to find the minimal $t>0$ such that
$sbf_{i}(t)\geq W_{i}^{G}$. In the following we give one illustrative and one
formal argument to justify that this minimal $t$ is in fact $R_{G}$ from Eq. (4):
We assume the worst-case service as depicted in Figure 3. We can see in the
figure that every time service is provided, it is provided on $m_{i}$
resources simultaneously. Hence, the total time for which $\tau_{i}$ has to be
served until $G$ is finished is $\frac{W_{i}^{G}}{m_{i}}$. This happens
during $\left\lceil\frac{W_{i}^{G}}{m_{i}\cdot E_{i}}\right\rceil+1$ service
cycles. Therefore, we have to add this many times the amount of each service
cycle during which $\tau_{i}$ is not served, i.e., $(P_{i}-E_{i})$. In total,
the response time is $\left(\left\lceil\frac{W^{G}_{i}}{m_{i}\cdot
E_{i}}\right\rceil+1\right)(P_{i}-E_{i})+\frac{W^{G}_{i}}{m_{i}}.$
For the more formal proof, we also assume the worst-case service from Figure
3. For the function $g:\mathbb{R}_{>0}\to\mathbb{R}_{>0}$ with
$g(t):=\left(\left\lceil\frac{t}{m_{i}E_{i}}\right\rceil+1\right)(P_{i}-E_{i})+\frac{t}{m_{i}}$
the composition $sbf_{i}\circ g$ is the identity and the function $g$ picks
the minimal value of the inverse image under $sbf_{i}$, i.e.,
$g(t)=\min(sbf_{i}^{-1}(t))$ holds. Hence, we obtain
$g(W_{i}^{G})=\min\\{t>0~{}|~{}sbf_{i}(t)\geq W_{i}^{G}\\}$. ∎
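The pseudo-inverse relation between $sbf_{i}$ and $g$ can be checked numerically. Below is a sketch, assuming the worst-case pattern of Figure 3, i.e., the $j$-th service window starts at $2(P_{i}-E_{i})+(j-1)P_{i}$, lasts $E_{i}$ time units, and serves on $m_{i}$ reservations in parallel (the parameter values used are illustrative):

```python
import math

def sbf(t, m, E, P):
    """Worst-case service of Figure 3: the j-th window of length E
    starts at 2*(P - E) + (j - 1)*P and provides service on m
    reservations in parallel."""
    total, j = 0.0, 1
    while True:
        s = 2 * (P - E) + (j - 1) * P  # start of the j-th window
        if s >= t:
            return total
        total += m * min(t - s, E)     # full or partial window before t
        j += 1

def g(W, m, E, P):
    """Minimal t with sbf(t) >= W, per Eq. (4)."""
    return (math.ceil(W / (m * E)) + 1) * (P - E) + W / m
```

For example, with $m_{i}=2$, $E_{i}=3$, $P_{i}=10$, and $W=6=m_{i}E_{i}$, we get $g(6)=2\cdot 7+3=17$, and indeed $sbf(17)=6$ while $sbf(t)<6$ for any $t<17$.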
In general, if we know an upper bound $b$ on the backlog of the previous job,
we can state the response time bound from Eq. (4) independent from the
previous schedule, by
$R^{\prime}_{G}(b)=\left(\left\lceil\frac{V_{i}^{G}(b)}{m_{i}E_{i}}\right\rceil+1\right)(P_{i}-E_{i})+\frac{V_{i}^{G}(b)}{m_{i}}$
(5)
where $V_{i}^{G}(b):=vol(G)+(m_{i}-1)\ell en(G)+b$. Based on Eq. (5), we bound
the response time for the case that the preceding job has a deadline miss and
for the case that the preceding job has _no_ deadline miss.
###### Corollary 1.
Under the assumptions of Theorem 2, $R^{\prime}_{G}(\rho_{i}\cdot m_{i})$ is
an upper bound on the response time of $G$ if the preceding job has a deadline
miss, and $R^{\prime}_{G}(0)$ is an upper bound if the preceding job has no
deadline miss.
###### Proof:
This follows directly from Theorem 2 by using either
$backlog_{i}^{S}(r_{G})\leq\rho_{i}\cdot m_{i}$ (in case of a deadline miss)
or $backlog_{i}^{S}(r_{G})=0$ (in case of _no_ deadline miss). ∎
## V Reservation Analysis and Optimization
In this section we devise the analysis and optimization algorithm to generate
reservation systems that provably respect the upper-bounds for $k$ consecutive
deadline misses in a probabilistic sense. We emphasize that, in order to
co-design the $k$-consecutive deadline-miss constraints with the reservation
configurations, time-efficient algorithms are required to calculate the
probabilities of $k$ consecutive deadline misses for any given reservation
configuration.
### V-A Analysis of Reservation Systems
Based on the finite sample space of DAG structures $G$ of the probabilistic
conditional DAG tasks $\tau_{i}$ we define the random variables
$R_{i}^{1}:=(G\mapsto R^{\prime}_{G}(\rho_{i}m_{i}))$ and
$R_{i}^{0}:=(G\mapsto R^{\prime}_{G}(0))$, which yield for each DAG job the
response time bounds from Corollary 1 with and without a previous deadline
miss. According to Definition 7, the constraint for $k$ consecutive deadline
misses is fulfilled if
$\phi_{i}(0,k)\leq\theta_{i}(k),$ (6)
where $\phi_{i}(0,k)$ is the probability that the first $k$ jobs of $\tau_{i}$
miss their deadline, and $\theta_{i}(k)$ is some predefined value.
Since
$\phi_{i}(0,k)=\mathbb{P}\left(\delta_{i}(k)>0,\delta_{i}(k-1)>0,\ldots,\delta_{i}(1)>0\right)$,
we can use the multiplication rule for conditional probabilities to
reformulate $\phi_{i}(0,k)$ as
$\mathbb{P}\left(\delta_{i}(k)>0~{}|~{}\delta_{i}(k-1)>0,\ldots,\delta_{i}(1)>0\right)\cdot\phi_{i}(0,k-1).$
The probability that $\tau_{i,k}$ does not meet its deadline does not decrease
if the tardiness of the preceding job is increased. Therefore, if
$\delta_{i}(k-1)=\rho_{i}$, then the probability for a deadline miss of
$\tau_{i,k}$ is maximal. In this case, the amount of tardiness of the other
jobs $\delta_{i}(k-2),\dots,\delta_{i}(1)$ is irrelevant for the tardiness of
$\tau_{i,k}$. More specifically,
$\begin{split}&\mathbb{P}\left(\delta_{i}(k)>0~{}|~{}\delta_{i}(k-1)>0,\ldots,\delta_{i}(1)>0\right)\\\
&\qquad\leq\mathbb{P}\left(\delta_{i}(k)>0~{}|~{}\delta_{i}(k-1)=\rho_{i}\right)\end{split}$
holds and we can thus bound the probability for $k$ consecutive deadline
misses by
$\phi_{i}(0,k)\leq\mathbb{P}\left(\delta_{i}(k)>0~{}|~{}\delta_{i}(k-1)=\rho_{i}\right)\cdot\phi_{i}(0,k-1).$
(7)
Then by Corollary 1 we know that
$\displaystyle\mathbb{P}\left(\delta_{i}(k)>0~{}|~{}\delta_{i}(k-1)=\rho_{i}\right)\leq\mathbb{P}\left(R^{1}_{i}>D_{i}\right)$
and for the probability of the first job (without previous deadline miss)
$\displaystyle\phi_{i}(0,1)=\mathbb{P}\left(\delta_{i}(1)>0\right)\leq\mathbb{P}\left(R^{0}_{i}>D_{i}\right).$
Combining the results yields a bound on the probability of $k$ consecutive
deadline misses:
$\displaystyle\phi_{i}(0,k)$
$\displaystyle\leq\mathbb{P}\left(R^{1}_{i}>D_{i}\right)\cdot\phi_{i}(0,k-1)$
$\displaystyle\leq\dots\leq\mathbb{P}\left(R^{1}_{i}>D_{i}\right)^{k-1}\cdot\phi_{i}(0,1)$
$\displaystyle\leq\mathbb{P}\left(R^{1}_{i}>D_{i}\right)^{k-1}\cdot\mathbb{P}\left(R^{0}_{i}>D_{i}\right)$
Since
$\mathbb{P}\left(R^{0}_{i}>D_{i}\right)\leq\mathbb{P}\left(R^{1}_{i}>D_{i}\right)$,
we also derive a simplified bound for the probability of $k$ consecutive
deadline misses of task $\tau_{i}$ by
$\phi_{i}(0,k)\leq\mathbb{P}\left(R^{1}_{i}>D_{i}\right)^{k}.$ (8)
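The derived bounds are straightforward to evaluate once $\mathbb{P}(R^{0}_{i}>D_{i})$ and $\mathbb{P}(R^{1}_{i}>D_{i})$ are known. A minimal sketch with illustrative (hypothetical) miss probabilities:

```python
def k_miss_bound(p0, p1, k):
    """Upper bound on phi_i(0, k): the first miss occurs without prior
    backlog (probability p0 = P(R0 > D)), each further miss is bounded
    by the maximal-tardiness case (probability p1 = P(R1 > D))."""
    return p1 ** (k - 1) * p0

# Illustrative (hypothetical) values for P(R0 > D) and P(R1 > D):
p0, p1 = 0.05, 0.2
bound = k_miss_bound(p0, p1, 3)
```

Since $p_{0}\leq p_{1}$, this bound never exceeds the simplified bound $p_{1}^{k}$ of Eq. (8); with the values above, it is $0.2^{2}\cdot 0.05=0.002$ versus $0.2^{3}=0.008$.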
As a prerequisite to derive upper bounds on response times for queuing
systems, it must be shown that the system is stable. Informally speaking,
this means that all backlog of the reservation system will have been worked
off at some point in time. We first give a formal definition of stability
and then show that our devised reservation-based queuing system is stable by
construction.
###### Definition 13 (Stability).
A reservation system $\mathcal{R}_{i}$ is considered _stable_ if for all
$\ell\geq 0$ with $\delta_{i}(\ell)=0$ it is almost certain that there exists
$k>0$ such that $\delta_{i}(k+\ell)=0$. More formally,
$\lim_{k\to\infty}\phi(0,k)=0,$ (9)
i.e., the probability for $k$ consecutive deadline misses approaches $0$ for
$k\to\infty$. ∎
###### Theorem 3 (Stability).
A reservation system $\mathcal{R}_{i}$ is stable if
$\mathbb{P}(R^{1}_{i}>D_{i})<1$.
###### Proof:
The probability for $k$ consecutive deadline misses is bounded by
$\phi_{i}(0,k)\leq\mathbb{P}\left(R^{1}_{i}>D_{i}\right)^{k}$ according to Eq.
(8). If $\mathbb{P}\left(R^{1}_{i}>D_{i}\right)<1$, then
$\mathbb{P}\left(R^{1}_{i}>D_{i}\right)^{k}\to 0$ for $k\to\infty$. This
concludes the proof. ∎
Consequently, we do not need to consider stability separately in the design
of the reservation systems beyond the $k$-consecutive deadline-miss
constraints.
### V-B Distribution Function Calculation
In this section, we show how to practically calculate the response-time upper
bounds. First, we define the auxiliary random variable
$X_{i}:=\frac{vol(G)+(m_{i}-1)\cdot len(G)+\rho_{i}\cdot m_{i}}{m_{i}\cdot
E_{i}}=\frac{V_{i}^{G}}{m_{i}E_{i}}$
for which the distribution function $\mathbb{P}(X_{i}\leq u)$ can be directly
computed from the probabilistic DAG task model, i.e., by enumerating over all
possible DAG job structures weighted by their realization probabilities as
previously described. With reference to Corollary 1, the distribution function
of $R^{1}_{i}$ can be written as follows:
$\mathbb{P}(R^{1}_{i}\leq u)=\mathbb{P}\left((P_{i}-E_{i})\cdot(\left\lceil
X_{i}\right\rceil+1)+E_{i}\cdot X_{i}\leq u\right)$
Let $dom(X_{i})$ denote the set of values that $X_{i}$ can take; we then
define the set of integer indices
$I_{i}:=\\{\ell\in\mathbb{N}~{}|~{}\left\lfloor{\inf(dom(X_{i}))}\right\rfloor\leq\ell\leq\left\lceil\sup(dom(X_{i}))\right\rceil\\}$.
Moreover, given $I_{i}$, the domain of
$\psi(X_{i})=(P_{i}-E_{i})\cdot(\left\lceil X_{i}\right\rceil+1)+E_{i}\cdot
X_{i}$ can be partitioned as follows:
$\bigcup_{\ell\in I_{i}}\\{(P_{i}-E_{i})\cdot(\ell+2)+E_{i}\cdot
X_{i}~{}|~{}\ell<X_{i}\leq\ell+1\\}$
since $\left\lceil X_{i}\right\rceil=\ell+1$ for every
$X_{i}\in(\ell,\ell+1]$. By the $\sigma$-additivity of distribution
functions and rearranging, we obtain
$\mathbb{P}(R^{1}_{i}\leq u)=\sum_{\ell\in
I_{i}}\mathbb{P}\Big(X_{i}\leq\frac{u-(P_{i}-E_{i})\cdot(\ell+2)}{E_{i}}~{},~{}\ell<X_{i}\leq\ell+1\Big)$
(10)
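For a finite domain of $X_{i}$, Eq. (10) can equivalently be evaluated by direct enumeration. The sketch below assumes a hypothetical discrete pmf for $X_{i}$, given as a value-to-probability mapping, and computes $\mathbb{P}(R^{1}_{i}\leq u)$ term by term; it is an illustration of the calculation, not the paper's implementation.

```python
import math

def response_time_cdf(pmf_x, P_i, E_i, u):
    """P(R^1_i <= u) for a discrete pmf of X_i, by enumerating
    R^1_i = (P_i - E_i) * (ceil(X_i) + 1) + E_i * X_i."""
    total = 0.0
    for x, prob in pmf_x.items():
        r = (P_i - E_i) * (math.ceil(x) + 1) + E_i * x
        if r <= u:
            total += prob
    return total

# Illustrative pmf: X_i is 0.5 or 1.5 with equal probability.
pmf = {0.5: 0.5, 1.5: 0.5}
```

For finite domains this direct enumeration agrees with the partitioned sum over $\ell\in I_{i}$, since each value of $X_{i}$ falls in exactly one interval $(\ell,\ell+1]$.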
### V-C Optimization of Reservation Systems
Algorithm 1 Calculation of Reservation Systems
1:$\mathbb{T},~{}\theta_{1}(k_{1}),\theta_{2}(k_{2}),\ldots,\theta_{n}(k_{n}),~{}\Omega_{1},\Omega_{2},\ldots,\Omega_{n}$;
2:$\mathcal{R}_{1},\mathcal{R}_{2},\ldots,\mathcal{R}_{n}$ that satisfy the
above requirements;
3:Initialize reservations $\mathcal{R}\leftarrow\\{\\}$;
4:for each task $\tau_{i}$ in $\\{\tau_{1},\tau_{2},\ldots,\tau_{n}\\}$ do
5: for $m_{i}$ in $\\{1,2,\ldots,\Omega_{i}\\}$ do
6:
$E_{i}\leftarrow\min\\{E_{i}~{}|~{}(\Phi^{m_{i}}_{i}(E_{i}))^{k_{i}}\leq\theta_{i}(k_{i})\\}$;
7: if $E_{i}$ could not be found then
8: continue;
9: else
10: $\mathcal{R}_{i}\leftarrow\mathcal{R}_{i}\cup\\{m_{i}$ reservations with service $E_{i}\\}$;
11: return $\mathcal{R}$;
In this section we present Algorithm 1 to calculate reservation systems for
the scheduling of probabilistic constrained-deadline conditional DAG tasks.
Given probability thresholds for the maximal number $k_{i}$ of tolerable
consecutive deadline misses and given tardiness bounds, the objective is to
find the minimal number of in-parallel reservations $m_{i}$ and the
associated minimal service time $E_{i}$. For each probabilistic
constrained-deadline conditional DAG task, the algorithm determines all
feasible configurations $(m_{i},E_{i})$ by iterating over the number of
in-parallel reservations $m_{i}\in[1,\Omega_{i}]$ and searching for the
smallest reservation service that still complies with the consecutive
deadline-miss constraints.
###### Theorem 4 (Monotonicity).
The functions
$\Phi^{n}_{i}:\mathbb{R}_{>0}\rightarrow\mathbb{R}_{>0},~{}E_{i}\mapsto\mathbb{P}(R^{1}_{i}>D_{i})_{|m_{i}=n}$
that map the service time $E_{i}$ to the deadline-miss probability bound for
a fixed number $n$ of in-parallel reservations are monotonically decreasing.
###### Proof:
For easier readability let
$Y_{i}:=\frac{vol(G_{i})+(m_{i}-1)\cdot\ell en(G_{i})+\rho_{i}\cdot
m_{i}}{m_{i}}$
for which the distribution function is independent of $E_{i}$ for every fixed
$m_{i}$. According to the definition of $\mathbb{P}(R^{1}_{i}>D_{i})$ in the
beginning of this section, we have to prove that
$\displaystyle\mathbb{P}\bigg{(}\Big{(}\left\lceil\frac{Y_{i}}{E_{i}}\right\rceil+1\Big{)}\cdot(P_{i}-E_{i})+Y_{i}>D_{i}\bigg{)}$
$\displaystyle\geq\mathbb{P}\bigg{(}\Big{(}\left\lceil\frac{Y_{i}}{E_{i}+\delta}\right\rceil+1\Big{)}(P_{i}-(E_{i}+\delta))+Y_{i}>D_{i}\bigg{)}$
for any increment $\delta\geq 0$ and any realization $Y_{i}\geq 0$. Let an
arbitrary realization $Y_{i}\geq 0$ satisfy
$(\left\lceil\frac{Y_{i}}{E_{i}+\delta}\right\rceil+1)\cdot(P_{i}-(E_{i}+\delta))+Y_{i}>D_{i}.$
Then $Y_{i}$ also satisfies
$(\left\lceil\frac{Y_{i}}{E_{i}}\right\rceil+1)\cdot(P_{i}-E_{i})+Y_{i}>D_{i},$
which yields the claim by monotonicity of the probability measure.
∎
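The per-realization inequality behind Theorem 4 can be spot-checked numerically. The values below are illustrative only: for a fixed realization $Y_{i}$, the response bound $(\lceil Y_{i}/E\rceil+1)(P_{i}-E)+Y_{i}$ should not increase as the service $E$ grows on $(0,P_{i}]$.

```python
import math

def response_bound(Y, P, E):
    """Per-realization response bound (ceil(Y/E) + 1) * (P - E) + Y."""
    return (math.ceil(Y / E) + 1) * (P - E) + Y

# Illustrative check with Y_i = 10, P_i = 5 and increasing service E:
Y, P = 10.0, 5.0
samples = [response_bound(Y, P, E) for E in (1.0, 2.0, 2.5, 4.0, 5.0)]
assert all(a >= b for a, b in zip(samples, samples[1:]))  # non-increasing
```

Since each realization's bound only shrinks as $E$ grows, the exceedance probability $\mathbb{P}(R^{1}_{i}>D_{i})$ shrinks as well, which is the monotonicity claim.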
Due to the monotonicity of the functions $\Phi^{n}_{i}$ shown in Theorem 4,
it is possible to find the minimal amount of reservation service to guarantee
compliance with the consecutive deadline-miss constraints by using binary
search in the interval $(0,D_{i}]$. We emphasize that $\Omega_{i}$ is an
upper-bound specified by the user that can be set to an arbitrary fixed number
that is larger than the number of available processors or determined as the
point where an increase in the number of in-parallel reservations does not
yield a _significant_ decrease in the amount of required service to satisfy
the deadline-miss probability constraints.
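The binary search enabled by this monotonicity can be sketched as follows. Here `miss_prob` stands for the monotonically decreasing map $E_{i}\mapsto\mathbb{P}(R^{1}_{i}>D_{i})$ for a fixed $m_{i}$; the callback name and the tolerance are illustrative assumptions, not part of the original algorithm.

```python
def minimal_service(miss_prob, D_i, k_i, theta_i, tol=1e-6):
    """Smallest service E_i in (0, D_i] whose k_i-consecutive-miss
    bound miss_prob(E_i)**k_i meets the threshold theta_i, found by
    binary search; returns None if even E_i = D_i is infeasible."""
    lo, hi = 0.0, D_i
    if miss_prob(hi) ** k_i > theta_i:
        return None  # no feasible service for this m_i; try a larger m_i
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if miss_prob(mid) ** k_i <= theta_i:
            hi = mid  # feasible: try a smaller service
        else:
            lo = mid
    return hi
```

Monotonicity guarantees that the feasible services form an interval $[E^{*},D_{i}]$, so the search converges to the minimal feasible service up to the chosen tolerance.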
## VI Conclusion and Future Work
In this paper, we proposed a probabilistic version and formal description of
the widely used conditional parallel DAG task model, together with a resource
reservation system that enables _scheduling-anomaly-free_ scheduling while
provably guaranteeing probabilistic properties such as bounded tardiness,
stability, and upper bounds on the probability of $k$ consecutive deadline
misses.
In addition, we provided an algorithm to optimize the reservation systems
with respect to the above quantities and showed that probabilistic
conditional DAG tasks with a high degree of parallelism can improve a
system’s resource usage if deadline misses are allowed. In the future, we
intend to improve the tightness of our proposed bounds and to evaluate the
effectiveness of the approach by implementing a prototype system.
## Acknowledgments
This work has been supported by European Research Council (ERC) Consolidator
Award 2019, as part of PropRT (Number 865170), and by Deutsche
Forschungsgemeinschaft (DFG), as part of Sus-Aware (Project no. 398602212).
## References
* [1] S. Altmeyer, R. Douma, W. Lunniss, and R. I. Davis. Outstanding paper: Evaluation of cache partitioning for hard real-time systems. In 2014 26th Euromicro Conference on Real-Time Systems, pages 15–26, 2014.
* [2] S. Baruah. Federated scheduling of sporadic DAG task systems. In IEEE International Parallel and Distributed Processing Symposium, IPDPS, pages 179–186, 2015.
* [3] S. Baruah. The federated scheduling of systems of conditional sporadic DAG tasks. In Proceedings of the 15th International Conference on Embedded Software (EMSOFT), 2015.
* [4] S. Ben-Amor, L. Cucu-Crosjean, and D. Maxim. Worst-case Response Time Analysis for Partitioned Fixed-Priority DAG tasks on identical processors. In IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), pages 1423–1426, 2019.
* [5] S. Ben-Amor, L. Cucu-Grosjean, M. Mezouak, and Y. Sorel. Probabilistic Schedulability Analysis for Real-time Tasks with Precedence Constraints on Partitioned Multi-core. In IEEE International Symposium on Real-Time Distributed Computing (ISORC), pages 142–143, 2020.
* [6] V. Bonifaci, A. Marchetti-Spaccamela, S. Stiller, and A. Wiese. Feasibility analysis in the sporadic dag task model. In ECRTS, pages 225–233, 2013.
* [7] G. v. d. Brüggen, N. Piatkowski, K.-H. Chen, J.-J. Chen, and K. Morik. Efficiently Approximating the Probability of Deadline Misses in Real-Time Systems. In 30th Euromicro Conference on Real-Time Systems (ECRTS), 2018\.
* [8] D. Casini, A. Biondi, G. Nelissen, and G. Buttazzo. Partitioned Fixed-Priority Scheduling of Parallel Tasks Without Preemptions. In IEEE Real-Time Systems Symposium (RTSS), pages 421–433, 2018\.
* [9] K.-H. Chen, G. v. d. Brüggen, and J.-J. Chen. Analysis of Deadline Miss Rates for Uniprocessor Fixed-Priority Scheduling. In The 24th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), 2018.
* [10] P. Chen, W. Liu, X. Jiang, Q. He, and N. Guan. Timing-anomaly free dynamic scheduling of conditional dag tasks on multi-core systems. ACM Trans. Embed. Comput. Syst., 18(5s), Oct. 2019.
* [11] H. S. Chwa, J. Lee, K. Phan, A. Easwaran, and I. Shin. Global EDF Schedulability Analysis for Synchronous Parallel Tasks on Multicore Platforms. In Euromicro Conference on Real-Time Systems, ECRTS, pages 25–34, 2013.
* [12] M. E. Conway. A Multiprocessor System Design. In Proceedings of the November 12-14, 1963, Fall Joint Computer Conference, AFIPS ’63 (Fall), page 139–146. Association for Computing Machinery, 1963.
* [13] G. Fernandez, J. Abella, E. Quiñones, C. Rochange, T. Vardanega, and F. J. Cazorla. Contention in Multicore Hardware Shared Resources: Understanding of the State of the Art. In 14th International Workshop on Worst-Case Execution Time Analysis, WCET, volume 39, 2014.
* [14] J. Fonseca, G. Nelissen, and V. Nélis. Improved Response Time Analysis of Sporadic DAG Tasks for Global FP Scheduling. In Proceedings of the 25th International Conference on Real-Time Networks and Systems, 2017.
* [15] J. C. Fonseca, G. Nelissen, V. Nélis, and L. M. Pinho. Response time analysis of sporadic DAG tasks under partitioned scheduling. In 11th IEEE Symposium on Industrial Embedded Systems, SIES, pages 290–299.
* [16] J. Goossens and V. Berten. Gang FTP scheduling of periodic and parallel rigid real-time tasks. CoRR, abs/1006.2617, 2010.
* [17] C. Hobbs, Z. Tong, and J. H. Anderson. Optimal soft real-time semi-partitioned scheduling made simple (and dynamic). In Proceedings of the 27th International Conference on Real-Time Networks and Systems, RTNS, pages 112–122, 2019.
* [18] K. Lakshmanan, S. Kato, and R. R. Rajkumar. Scheduling parallel real-time tasks on multi-core processors. In Proceedings of the 31st IEEE Real-Time Systems Symposium, pages 259–268, 2010.
* [19] J. P. Lehoczky. Real-time queueing theory. In 17th IEEE Real-Time Systems Symposium, pages 186–195, 1996.
* [20] J. Li, K. Agrawal, C. Gill, and C. Lu. Federated scheduling for stochastic parallel real-time tasks. In IEEE 20th International Conference on Embedded and Real-Time Computing Systems and Applications, pages 1–10, 2014.
* [21] J. Li, J.-J. Chen, K. Agrawal, C. Lu, C. D. Gill, and A. Saifullah. Analysis of federated and global scheduling for parallel real-time tasks. In 26th Euromicro Conference on Real-Time Systems, ECRTS, pages 85–96, 2014.
* [22] M. Maggio, A. Hamann, E. Mayer-John, and D. Ziegenbein. Control-System Stability Under Consecutive Deadline Misses Constraints. In 32nd Euromicro Conference on Real-Time Systems (ECRTS), volume 165, pages 21:1–21:24, 2020.
* [23] C. Maia, M. Bertogna, L. Nogueira, and L. M. Pinho. Response-Time Analysis of Synchronous Parallel Tasks in Multiprocessor Systems. In M. Jan, B. B. Hedia, J. Goossens, and C. Maiza, editors, 22nd International Conference on Real-Time Networks and Systems, RTNS, page 3, 2014\.
* [24] A. Marchetti-Spaccamela, N. Megow, J. Schlöter, M. Skutella, and L. Stougie. On the Complexity of Conditional DAG Scheduling in Multiprocessor Systems. In IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 1061–1070. IEEE, 2020.
* [25] A. Melani, M. Bertogna, V. Bonifaci, A. Marchetti-Spaccamela, and G. C. Buttazzo. Response-Time Analysis of Conditional DAG Tasks in Multiprocessor Systems. In Proceedings of the 2015 27th Euromicro Conference on Real-Time Systems, 2015.
* [26] M. Nasri, G. Nelissen, and B. B. Brandenburg. Response-Time Analysis of Limited-Preemptive Parallel DAG Tasks Under Global Scheduling. In 31st Euromicro Conference on Real-Time Systems (ECRTS), pages 21:1–21:23, 2019.
* [27] L. Palopoli, D. Fontanelli, L. Abeni, and B. V. Frias. An analytical solution for probabilistic guarantees of reservation based soft real-time systems. IEEE Trans. Parallel Distrib. Syst., 27(3):640–653, 2016.
* [28] P. Pazzaglia, C. Mandrioli, M. Maggio, and A. Cervin. DMAC: Deadline-Miss-Aware Control. In 31st Euromicro Conference on Real-Time Systems, ECRTS, volume 133, pages 1:1–1:24, 2019.
* [29] A. Saifullah, K. Agrawal, C. Lu, and C. Gill. Multi-Core Real-Time Scheduling for Generalized Parallel Task Models. In Proceedings of the 32nd IEEE Real-Time Systems Symposium, 2011\.
* [30] L. Santinelli, P. M. Yomsi, D. Maxim, and L. Cucu-Grosjean. A component-based framework for modeling and analyzing probabilistic real-time systems. In IEEE 16th Conference on Emerging Technologies & Factory Automation, ETFA, pages 1–8, 2011.
* [31] M. A. Serrano, A. Melani, R. Vargas, A. Marongiu, M. Bertogna, and E. Quiñones. Timing characterization of OpenMP4 tasking model. In International Conference on Compilers, Architecture and Synthesis for Embedded Systems, CASES, pages 157–166, 2015.
* [32] J. Sun, N. Guan, Y. Wang, Q. He, and W. Yi. Real-time scheduling and analysis of OpenMP task systems with tied tasks. In IEEE Real-Time Systems Symposium, RTSS, pages 92–103, 2017\.
* [33] N. Ueter, G. von der Brüggen, J. Chen, J. Li, and K. Agrawal. Reservation-based federated scheduling for parallel real-time tasks. In IEEE Real-Time Systems Symposium (RTSS), pages 482–494, 2018\.
* [34] H. Yun, G. Yao, R. Pellizzoni, M. Caccamo, and L. Sha. MemGuard: Memory bandwidth reservation system for efficient performance isolation in multi-core platforms. In 2013 IEEE 19th Real-Time and Embedded Technology and Applications Symposium (RTAS), pages 55–64, 2013.
# Remote Learners, Home Makers: How Digital Fabrication Was Taught Online
During a Pandemic
Gabrielle Benabdallah University of Washington , Samuelle Bourgault
University of California, Santa Barbara , Nadya Peek University of
Washington and Jennifer Jacobs University of California, Santa Barbara
(2021)
###### Abstract.
Digital fabrication courses that relied on physical makerspaces were severely
disrupted by COVID-19. As universities shut down in Spring 2020, instructors
developed new models for digital fabrication at a distance. Through interviews
with faculty and students and examination of course materials, we recount the
experiences of eight remote digital fabrication courses. We found that
learning with hobbyist equipment and online social networks could emulate
using industrial equipment in shared workshops. Furthermore, at-home digital
fabrication offered unique learning opportunities including more iteration,
machine tuning, and maintenance. These opportunities depended on new forms of
labor and varied based on student living situations. Our findings have
implications for remote and in-person digital fabrication instruction. They
indicate how access to tools was important, but not as critical as providing
opportunities for iteration; they show how remote fabrication exacerbated
student inequities; and they suggest strategies for evaluating trade-offs in
remote fabrication models with respect to learning objectives.
Digital Fabrication, Remote Learning, Pandemic
Journal year: 2021. Copyright: rights retained. Conference: CHI Conference on
Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama,
Japan. DOI: 10.1145/3411764.3445450. ISBN: 978-1-4503-8096-6/21/05. CCS
concepts: Applied computing, Education; Human-centered computing, HCI theory,
concepts and models.
## 1\. Introduction
The COVID-19 pandemic created drastic societal changes at a global scale. In
the United States, a public health emergency was declared in early March 2020.
In response to stay-at-home orders and social-distancing restrictions, higher
education pivoted to online instruction. This change posed challenges for all
types of learning. Educators had to adopt new forms of remote instruction with
limited time to plan or share approaches. Classes that were centered around
physical making required particularly radical changes because universities
were forced to shut down physical workshops, labs, and studios. Educators
across art, design, and engineering had to rapidly develop new strategies to
compensate for the loss of these spaces.
In this paper, we examine the impacts of remote instruction for a particular
form of physical making: digital fabrication. Physical making offers unique
learning opportunities (Martinez and Stager, 2016). Digital fabrication
extends these opportunities by enabling students to design and manufacture
custom physical objects through a combination of computer-aided design and
machining (CAD and CAM) and physical computer-numerical-control (CNC) machines
(Eriksson et al., 2019). Digital fabrication technologies are often a central
component of makerspaces—shared workshops with access to tools and materials
that support physical making. Makerspaces are increasingly prevalent in
universities (Rosenbaum and Hartmann, 2017), providing students with access to
shared digital fabrication tools and software such as 3D printers, laser
cutters, and CAD/CAM software, as well as opportunities for community support
through fostered cultures of making and tinkering (Martin, 2015).
Despite losing access to digital fabrication equipment and in-person
communities, many educators still held their digital fabrication classes in
the Spring of 2020 (Jacobs and Peek, 2020). Given the unique challenges of
teaching digital fabrication without a makerspace, we sought to understand
what happened during those classes. Our work is guided by two research
questions. First, how did people teach digital fabrication remotely during the
pandemic? In particular, we wanted to examine how instructors remotely taught
computer-aided design and computer-controlled fabrication, how they provided
and organized community, and what trade-offs they had to consider in the
process. Second, how can we learn from instructors’ efforts to teach digital
fabrication in a crisis to improve remote instruction of digital fabrication
in the future? The pivot to remote instruction was the result of a terrible
crisis; however, it also created a unique opportunity to examine new
strategies for learning through digital fabrication. We sought to understand
what elements of these strategies were effective and how they could be
improved in the future.
As digital fabrication researchers, as well as educators and students who
taught or took remote digital fabrication courses in the spring of 2020, the
authors of this paper were both observers and subjects of the phenomena we
examined. As a result, our research is structured around analysis of remote
fabrication instruction in both our own courses and in the courses of others.
We used a preliminary analysis of our course outcomes to guide a formal set of
interviews with instructors and students in six remote fabrication courses
from different universities. These interviews examined peoples’ experiences
planning, teaching, and participating in remote digital fabrication courses,
as well as the challenges and opportunities that emerged from the remote
format.
Our paper makes the following contributions. First, drawing from both our
classes and the classes of others, we define and document five models of
remote fabrication instruction that were used over the spring of 2020. Second,
through a recounting of our course outcomes and a thematic analysis of our
interviews, we surface themes on shifts in labor caused by remote fabrication
access, learning opportunities of remote fabrication instruction, approaches
to gaining tacit knowledge remotely, and remote collaborative practices for
physical making. These themes highlight assumptions about home-work and
remote-work, as well as the tensions that arise from different ways of
combining them. Third, we discuss what was lost and what was gained in the
remote format, what was crucial about work performed by instructors and
students, and what factors contributed to equity in outcomes. Combined, these
contributions have implications for human-computer interaction (HCI)
researchers studying digital fabrication and learning. Moreover, as the risks
of the novel coronavirus persist and the future of in-person instruction
remains uncertain, our work provides practical details on viable approaches
for remote instruction for physical making in the future.
## 2\. Background
Digital fabrication encompasses a wide range of practices. At a high level, it
can describe any form of computer-controlled fabrication. This means digital
fabrication contains many different elements, including computer-aided design
(CAD), robotic path planning and computer-aided manufacturing (CAM), computer-
numerically controlled (CNC) processes, and (robotic) placement or assembly.
These processes happen across length scales, ranging from the fabrication and
assembly of Frank Gehry architecture (Coleman and Cole, 2017) to the nanometer
scale fabrication of micro-electromechanical systems such as the
accelerometers in a game controller (Walker et al., 1996). HCI contributes to
digital fabrication research in many ways, including through novel tools for
computational design (Jacobs et al., 2017; Schmidt and Ratto, 2013), materials
(Wang et al., 2018; Ion et al., 2016), fabrication (Tian et al., 2018;
Lafreniere et al., 2016), and collaboration (Gantt and Nardi, 1992; Yildirim
et al., 2020).
Teaching digital fabrication might take place anywhere from a cleanroom
(Moore and Williams, 2004), to a mechanical engineering shop (Lamancusa,
2006), to an architecture studio (Eversmann, 2017). University digital
fabrication research labs may have equipment that rivals industrial digital
fabrication production factories, featuring large scale 6-axis robotic arms,
milling machines, water jet cutters, and other pieces of $100k+ equipment (for
Computational Design and Stuttgart, 2020; of Michigan, 2020; of Architecture,
2020). The courses we studied were slated to be taught in university spaces
that ranged in tool sophistication and application from large robot arms to
programmable embroidery machines.
Beyond differences in equipment, courses incorporating digital fabrication can
also differ in their learning goals. While some courses focus on developing
particular skills such as designing in 3D or fabricating with a CNC mill
(Whalen, 2013), others might emphasize more abstract learning goals, such as
providing students with the environment in which they will conduct self-
directed projects while managing resources such as materials, shared
equipment, and time (Wilczynski et al., 2017), or using making for critical
inquiry (Nieusma and Malazita, 2016). Managing spaces with diverging goals has
unique challenges, including equipment cost, staffing, hours of operation,
rent, community organization, maintenance, and safety.
The rise of the maker movement (Dougherty et al., 2016) and increased demand
for makerspaces (academic and otherwise) has led to the development of maker-
oriented, lower-cost digital fabrication equipment. These more affordable
machines have increased access to digital fabrication tools and reduced cost
of managing spaces with digital fabrication capabilities. The growth of
makerspaces has also led to research on the efficacy of makerspaces as a
learning environment (Ames et al., 2014). Early advocates of makerspaces in
formal education include Mike and Ann Eisenberg, who argued that hands-on
interacting with materials can offer a tangible way of thinking through
important and expressive ideas (Eisenberg and Eisenberg, 1998), and Paulo
Blikstein, who stated that digital fabrication and maker culture could be
considered the “the ultimate construction kit” with significant advantages for
interdisciplinary and contextualized learning, powerful experiences, and team
building (Blikstein, 2013). Scholarship in digital fabrication and learning is
now extensive and appears in venues including the FabLearn conference
(FabLearn, 2020), first held in 2011, which focuses on hands-on learning and
the role of digital fabrication in education, and International Symposium of
Academic Makerspaces (ISAM) (on Academic Makerspaces, 2020), first held in
2016, which focuses on starting and running academic makerspaces.
Makerspaces are more than just tools in a space. They are also places for
gathering, peer-learning environments, and an attitude (Martin, 2015). The
makerspace environment shapes the ways students learn, therefore integrating
makerspaces into formal education has not been without growing pains.
Researchers have found that social interaction and discourse, especially as
means to build community and maker attitudes, are crucial for learning in K-12
(Campos et al., 2019) and other (Martin, 2015) makerspaces. Makerspaces
located at universities increasingly show a diversity of implementation, from
large digital fabrication research labs to small student groups focused on
making. We refer to all spaces where digital fabrication was taught on campus
as makerspaces, despite their breadth. Regardless, none of the courses we
surveyed were able to work on campus. Our study is unique, as it was held at a
time of unprecedented changes to higher education.
Well before the pandemic, websites such as Instructables, Thingiverse, and
YouTube provided online community gathering spaces for making and sharing
designs. These online spaces have their own online-specific challenges in
terms of onboarding newcomers, welcoming diversity, and encouraging sharing
and remixing (Sherrill, 2017; Alcock et al., 2016; Oehlberg et al., 2015).
Nonetheless, online maker sites demonstrate a thriving practice of online
documentation, sharing experiences, and encouraging participation. Many of the
instructors we surveyed drew from such sites when restructuring their courses.
HCI contributes crucial analysis of the promises and practices of maker
culture by engaging with and unpacking the complex social, cultural, and
economic conditions that makers operate within (Lindtner et al., 2014; Roedl
et al., 2015). These critical analyses improve the culture and spaces in which
we teach and learn, and we aim for the work in this paper to contribute to
this discussion.
## 3\. Methods
Our research is centered on two datasets: 1) autobiographical from our own
remote fabrication courses from the spring of 2020, and 2) interviews with
instructors and students who taught or attended remote fabrication courses at
other universities. In this section we outline our methodology for assembling
and analyzing these two datasets to provide context to the claims we make in
following sections. To contextualize our methods and analysis, we also provide
background on the research team.
### 3.1. Author Background
Nadya and Jennifer are professors at public universities who collaborate in
research on digital fabrication. They both taught remote graduate-level
digital fabrication courses in the spring of 2020. Gabrielle and Samuelle are PhD
students in interdisciplinary art, design, and engineering departments who
research design and making. Gabrielle and Samuelle were students in Nadya and
Jennifer’s courses, respectively.
### 3.2. Preliminary Analysis of Author Courses
Following the conclusion of the Spring 2020 academic quarter, Nadya and
Jennifer theorized that deeper examination of approaches to remote digital
fabrication could inform instruction efforts in the future. They initiated
their research efforts by analyzing the outcomes of their own courses. They
collected public online posting of student projects and written student
reflections. They met regularly for three weeks to review this data and
discuss their experiences as instructors. They extracted preliminary themes
from their course data through the collaborative writing and editing of a
written reflection. Their writing process was organized around 1) examining
the effects of the remote format on learning outcomes and 2) evaluating the
impacts of at-home fabrication equipment (Jacobs and Peek, 2020).
### 3.3. Interview and Analysis Methods for External Instructors
Following the analysis of Nadya and Jennifer’s courses, Samuelle and Gabrielle
were brought on as collaborators. Together, we used the preliminary themes
from Nadya and Jennifer’s course analysis to determine selection criteria and
interview structure for instructors and students in remote fabrication courses
at other universities. We identified potential interview candidates through a
short online survey that collected information on the general approaches
university educators used to teach digital fabrication remotely. We received
23 survey responses over a period of one week. We selected eight individuals
representing six different courses for interviews—five instructors via the
survey and three additional co-instructors of the same course who were
recommended by a colleague. We selected instructors who represented a range of
models of remote fabrication instruction to study how different people
compensated for the loss of in-person makerspaces. Instructors were our
primary focus, however we also conducted interviews with three students in
three of the external courses to contrast instructor and student experiences.
Interviews were conducted remotely over video conference and lasted one hour.
Interviews with instructors and students focused on their experiences
planning, teaching, and participating in remote digital fabrication courses,
challenges and opportunities that emerged from the remote format, and how the
experience impacted their perspective on teaching or participating in digital
fabrication in the future. All interviews were audio recorded and transcribed.
To analyze the data we conducted a reflexive thematic analysis (Braun and
Clarke, 2006, 2019) focusing on latent themes. Following each interview, the
authors met and discussed initial aspects of the data. After all interviews
were complete, each author open-coded a subset of interview transcripts.
Gabrielle performed an initial conceptualization of the codes into preliminary
themes and all authors discussed these initial themes. Based on the outcomes
of this discussion, Jennifer performed a secondary review and refinement of
the themes identified by Gabrielle. The themes were further refined in a final
group discussion. Out of a list of eleven themes, we selected a subset of four
that we believed were the most important due to their consistent presence in
all the interviews and the amount of data we compiled on them.
### 3.4. Limitations
We relied, in part, on autobiographical data. The shutdown offered a unique
opportunity to study the uncommon practice of remote digital fabrication
instruction in its early stages. We incorporated autobiographical data in this
research because we used an instruction model that was not present in our
external data. Furthermore, by including an analysis of our own experiences,
we provide context for the motivation of this research and the conclusions we
made. We compared courses in different departments and subjects; however,
instructors apply digital fabrication technologies for different learning
objectives. This factor was evident in our data and impacted the approaches
individual instructors took when selecting models for remote instruction. We
saw value in surveying the ways remote digital fabrication supports learning
across domains; however, future studies that examine remote fabrication in a
specific area will likely provide domain-specific insights. We discussed how
the remote learning format exacerbated uneven access to resources for
students. We believed this was a point of particular importance for current
and future remote digital fabrication instruction. Our data did not allow us
to provide a more detailed picture of how discrepancies among students’ living
situations impacted their learning during the pandemic. Further research is
needed to understand and address this key factor in successful and equitable
remote teaching of digital fabrication.
## 4\. Course Summaries
Figure 1. A wide range of student work was produced in remote digital
fabrication classes in Spring 2020. A) Yanrong Chen’s sculpture with many
interlocking parts iteratively printed in HCDE598 B) Design of a space frame
from a single node to robotic assembly in R.A.W. C) A marble maze
collaborative CAD project in ME102 D) Samuelle’s bioplastic cast in 3D printed
molds in MAT594X E) Conductive silicone mixed with a fork-drill in kitchen
containers by Pippa Kelmenson in ITP-Tangible Interaction F) Vinyl lamp
iterations by Aidan Lincoln in ITP-Subtraction G) Pen holder designs for a
robotic arm by Samuel Rushenberg in Arch438X H) Jaideep Cherukuri, Scout
Handford, Jahangir Abbas Mohammed, Abrar Syed & Miyuki Weldon combining
electronics and 3D prints in DESINV190/290-9 I) Kat Sung using found and
recycled objects in DESMA160-4.
Figure 1: 9 fabrication projects including physical objects and CAD
simulations organized in a grid. Figure 1A: On the left half, a 3D printer in
the process of printing the chain part of a 3D printed birdcage with a black
filament. On the right, three similar 3D printed birdcages hanging from a
small wooden structure. Figure 1B: A visualization of 5 steps to create an
architectural space frame including 1) the design of a single node, 2) the CAD
simulation of the space frame, 3) the generation of two-dimensional production
file for fabrication, 4) the paper model fabricated and assembled and 5) the
simulated human/robotic assembly. Figure 1C: A complex CAD figure representing
a multi-part marble run structure. Figure 1D: On the top third, a hand holding
a 3D printed texture square designed to fit in a square mold. On the middle
third, the bottom of the mold is covered with texture squares and cast with
bioplastic. On the bottom third, 8 parts of dry bioplastic made with this mold
and aligned on a table next to each other. Figure 1E: A bowl with a purple
substance of non-cured conductive silicone in it next to a take-out container,
chopsticks, plastic cups, a fork in a drill that served to mix the silicone on
an apartment floor. Figure 1F: On the top half, hand holding a semi-
transparent vinyl cone attached on one side with a thread. On the bottom, a
lamp made of three bulbs with colorful vinyl cones used as lampshades. Figure
1G: 4 CAD figures of the same pencil holder with different objects in it to
attach to a robot arm: the first one holds a pencil, the second a magic wand,
the third a large marker, and the last one a rubber hand. Figure 1H: A white
3D printed device with visible distance sensors, a red button, a small
speaker, and electronics, that can be fixed on a white cane to help visually
impaired people to detect obstacles. Figure 1I: A mask with a cyberpunk
aesthetic made of found and recycled objects and colored construction foam.
In total, we analyzed the outcomes of eight remote courses involving digital
fabrication (Table 1). In this section, we summarize the structure of each
course, focusing on the models instructors used to retain access to digital
fabrication technologies and hands-on making.
### 4.1. Author Course Summaries
Nadya and Jennifer’s courses used the same model for remote digital
fabrication instruction: students were shipped hobbyist 3D printers, and all
digital fabrication instruction was oriented around these machines (at-home
machines).
#### 4.1.1. HCDE598 - Digital Fabrication
HCDE598 was a course developed by Nadya in Human-Centered Design and
Engineering, an interdisciplinary department at the University of Washington.
Twenty students enrolled in this quarter-long course in Spring 2020, supported
by two TAs. The course introduced students to CAD and prototyping tools for
making physical artifacts. For remote instruction, students were asked to
purchase a $250 3D printer alongside hand tools (e.g., calipers and Exacto
knives) and materials (e.g., 3D printing filament, silicone, plaster, and
cardboard). The total cost per student was approximately $350.
#### 4.1.2. MAT594X - Computational Fabrication
MAT594X was a course developed by Jennifer in Media Arts and Technology, an
interdisciplinary graduate department at the University of California, Santa
Barbara. The course included twelve students from Media Arts and Technology
and Computer Science. The course emphasized computational fabrication;
students used programming languages to design for and control digital
fabrication machines. For the Spring 2020 quarter, Jennifer used a combination
of research funds and departmental resources to purchase low-cost 3D printers,
PLA filaments and additional supplies, such as specialty filament, casting
materials, and electronic and lighting components, to send to students. The
total cost per student ranged from $250-350.
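To illustrate the kind of programmatic design-to-machine pipeline that computational fabrication involves, the following is a minimal, hypothetical sketch (not from the actual course materials): a Python function that generates a G-code toolpath for a parametric cylindrical vessel, with the radius, height, and layer height exposed as parameters.

```python
import math

def vase_gcode(radius_mm=30.0, height_mm=40.0, layer_mm=0.2, segments=64):
    """Generate a minimal G-code toolpath for a cylindrical vase:
    one circular perimeter per layer, approximated by straight segments.
    Extrusion and temperature commands are omitted for brevity."""
    lines = ["G21 ; units: mm", "G90 ; absolute positioning"]
    z = layer_mm
    while z <= height_mm + 1e-9:
        # Travel move to the start of this layer's perimeter.
        lines.append(f"G0 X{radius_mm:.3f} Y0.000 Z{z:.3f}")
        for i in range(1, segments + 1):
            a = 2 * math.pi * i / segments
            x = radius_mm * math.cos(a)
            y = radius_mm * math.sin(a)
            # Linear move along the circle approximation.
            lines.append(f"G1 X{x:.3f} Y{y:.3f}")
        z += layer_mm
    return "\n".join(lines)
```

Because the design is a function of its parameters, a student can regenerate and reprint variations simply by changing the arguments, rather than remodeling the part in CAD.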
### 4.2. External Course Summaries
We identified four additional models for remote digital fabrication across the
six external courses we surveyed: simulation of fabrication with CAD/CAM
(simulation), ordering from online fabrication vendors (online-vendors),
converting the university makerspace to a service (makerspace-to-jobshop), and
having students or instructors fabricate parts for other students with at-home
equipment (instructor/student-technicians). In addition to these models of
digital fabrication access, we observed three supplemental strategies for
retaining hands-on making: shipping materials and hand tools directly to
students (material shipping), requiring students to independently source their
own materials and hand tools (student sourcing), and having students rely on
materials and tools already in their homes (home materials).
| Course | Instructor Interviewees | Student Interviewees | Fabrication Access Models | Field | School |
|---|---|---|---|---|---|
| HCDE598 | Nadya Peek | N/A | at-home machines, material shipping | HCI/Engineering/Design | University of Washington |
| MAT594X | Jennifer Jacobs | N/A | at-home machines | HCI/CS/New Media Art | University of California Santa Barbara |
| ME102 | Mark Cutkosky | S1 | simulation, online-vendor, student sourcing, home materials | Engineering | Stanford |
| Arch438X | Shelby Doyle | S2 | simulation, makerspace-to-jobshop | Architecture | Iowa State |
| DESMA160-4 | Paul Esposito | N/A | instructor-technician, material shipping | Fine Arts | UCLA |
| R.A.W. | James Coleman | S4 | student-technician | Architecture | Princeton |
| ITP-Subtraction | Ben Light | N/A | at-home machines, student-technician, student sourcing | Fine Arts | NYU |
| DESINV190/290-9 | Vivek Rao, Adam Patrick Hutz, George Moore | N/A | online-vendor | Engineering/Design | Berkeley |

Table 1. Summary of Surveyed Courses. The instructor and student columns list interview subjects.
#### 4.2.1. ME102 - Foundations of Product Realization
ME102 was a quarter-long Mechanical Engineering course taught by Mark Cutkosky
at Stanford University. Sixty engineering undergraduate students enrolled.
Approximately 10 TAs were also assigned to this course. The course objective
was to engage students with a design-to-fabrication process through the making
of iterative prototypes using digital fabrication machines in a shared
workshop. To adapt to remote learning, the focus of the course shifted to
emphasize online collaboration in CAD. Students constructed physical projects
as low-fidelity prototypes using materials at home (e.g., cardboard, foam
core, Exacto knives, and glue). In one assignment, instructors used on-demand
fabrication services to 3D print students’ designs. The total cost was less
than $100 per student and was covered by the department.
#### 4.2.2. Arch438X - Architectural Robotics
Arch438X was developed and taught for the first time in Spring 2020 by Shelby
Doyle in the Architecture Department at Iowa State University. This semester-
long course became remote mid-semester. Twenty-four undergraduate students
enrolled. Arch438X acted as an introduction to robotics and aimed to expand
students’ perception of the role of robots in architecture. The course was
designed to give hands-on experience in making small robots and in using a
KUKA industrial robot recently acquired by the ISU Computation+Construction
Lab (CCL). The CCL lab also included digital fabrication equipment, robotic
devices, hand tools and power tools. In the shutdown, the robots became
unavailable and the goal of the course shifted to focus on simulation and
speculative design.
#### 4.2.3. DESMA160-4 - Survival Tools in Weird Times
This quarter-long course was a variation of DESMA22 - Form, and was created
specifically for remote instruction in Spring 2020. It was taught by Paul
Esposito in the Department of Design and Media Arts (DMA) at the University of
California, Los Angeles. Twenty-one undergraduate art students with different
levels of experience with fabrication enrolled. To adapt to the lack of
fabrication lab equipment and materials, DESMA160-4 focused on the theme of
survivalism and its intersection with maker culture. Paul had two 3D printers
and a sewing machine at home and offered to print and sew the designs of his
students and mail them the results. Students who wanted to hand sew their own
designs were also shipped a sewing kit. The course budget included $12 kits
for each student and an additional $500 materials budget which Paul used for
3D printing filament.
#### 4.2.4. R.A.W. - Robotic Architecture Workshop
R.A.W. was a remote workshop that was taught at Princeton University during
Summer 2020 by James Coleman through the Black Box Research Group in the
Architecture department. The workshop was two weeks long and James and two
graduate TAs met with students six times over this period. Six students
enrolled, including civil engineering Ph.D. students, architecture graduate
students, and undergraduate students from the Engineering and Architecture
departments. The objective was to familiarize participants with the design-to-
fabrication-to-assembly workflow required to make space frames using sheet
metal. The in-person format would have involved making metal parts in an
industrial shop then assembling them robotically. This experience was replaced
with robotic simulation and paper prototyping using a Silhouette Cameo 4 vinyl
cutter. Some of the students received a $280 vinyl cutter and all students had
a materials stipend of $90.
#### 4.2.5. ITP-Subtraction & ITP-Tangible Interaction
ITP-Subtraction & ITP-Tangible Interaction were two semester-long graduate
classes taught and co-taught by Ben Light at the New York University Tisch
School of the Arts within the Interactive Telecommunications Program (ITP).
Fifteen students enrolled in ITP-Subtraction and fourteen enrolled in ITP-
Tangible Interaction. ITP-Subtraction was intended to be an introduction to
subtractive fabrication techniques with hands-on experience with machines and
ITP-Tangible Interaction was intended to focus on making physical interfaces.
For in-person courses, each student paid $300 in lab fees and could either buy
their own material or use lab scrap for free. Both courses moved online
halfway through the semester. To adapt, Ben shipped a Silhouette Cameo vinyl
cutter to each ITP-Subtraction student and focused on two-dimensional
fabrication techniques for the rest of the course. The machines cost $200 each
and were covered by the department. Ben removed the digital fabrication aspect
of ITP-Tangible Interaction during the second half of the semester to focus
mainly on physical computing.
#### 4.2.6. DESINV190/290-9 - Technology Design Foundations
DESINV190/290-9 was taught by Vivek Rao, Adam Patrick Hutz, and George Moore
within the Jacobs Institute for Design Innovation in the College of
Engineering at the University of California, Berkeley. The course was
originally designed for graduate students but most of the twenty students
enrolled during the Spring 2020 semester were undergraduates. DESINV190/290-9
was developed to familiarize students with a human-centered design process.
This process included sketching ideas, conducting interviews and analyzing
data in order to validate a design, prototyping at different levels of
fidelity, using digital fabrication machines, and integrating interactive
digital systems to fabricate objects. When the university makerspace closed
midway through the curriculum, the instructors decided to use on-demand
fabrication services for the rest of the semester. The students received a
budget of $250 per team to order parts from several online fabrication
vendors.
## 5\. Remote Instruction With At Home 3D Printers
In this section, we describe the themes that emerged from analysis of author-
led courses, HCDE598 and MAT594X, in response to our first research question
(How did people teach digital fabrication remotely during the pandemic?). We
focused on: 1) the impacts of at-home 3D printers on student workflows and
domestic activities, 2) the unique learning opportunities of hobbyist machines
in comparison to workshop equipment, and 3) the ways students developed tacit
knowledge while engaged in remote instruction.
### 5.1. Impacts of At-Home 3D Printers
Shipping printers to students’ homes created a situation where students were
simultaneously living with printers and creating objects for personal use with
them. Several students, on their own initiative or prompted by assignments,
used the printers to design home goods or to repair or augment existing
objects in their homes. One MAT594X student created a program that generated designs
for a customizable self-watering planter, and one student in HCDE598 created a
modular lamp integrated with internal lighting components to create different
patterns of light diffusion.
Projects such as these conformed to many aspects of personal fabrication; the
objects were custom-designed and fabricated by their maker as opposed to mass-
manufactured and purchased. At-home access to the machines did not simplify or
accelerate production, or lead to fundamentally new design and manufacturing
workflows. Instead, producing these products required students to engage in
design workflows that reflected elements of real-world design, manufacturing,
and craft. Students in both HCDE598 and MAT594X engaged in learning, design,
testing, and iteration; and required peer support when fabricating personal
objects for home use. These processes were at odds with product-focused visions
of personal fabrication where consumers create custom objects with minimal
effort and knowledge.
The presence of the printers in students’ homes also resulted in changes to
students’ routines and daily activities. Because students often lived with
roommates or in small studio apartments, many kept their printers
in their bedrooms. This, coupled with long print times and heat, smells, and
machine noises generated by printers, resulted in students coordinating their
schedules around their printers. These factors also created additional stress
when prints failed. The students managed to accommodate the requirements of
the printer, but it was not difficult to envision scenarios where such
constraints would be infeasible. There were also elements of at-home 3D
printing that provided important forms of stress relief and pleasure. Students
in both courses repeatedly expressed their delight at being able to make
physical objects and seeing the products made by their classmates.
### 5.2. Learning Opportunities of Hobbyist 3D Printers
The use of at-home printers offered unique learning opportunities compared
with how students accessed machines in a workshop. Unlike staff-managed workshop
equipment, individual printers required students to learn about machine
maintenance. The Ender 3 Pro required assembly, and fine-tuning the printer
could greatly improve printing outcomes. Nadya built this opportunity directly
into her curriculum by making the printer’s assembly and initial calibration
one of the first assignments in HCDE598. By the end of the spring quarter,
students in both courses had tuned and modified their machines to a degree
that went significantly beyond the manufacturer documentation. Several
students in HCDE598 upgraded components (such as the fans or power supplies)
or 3D printed components to improve performance (such as clips for wire
management, holders for work surface illumination, or filament guides). These
activities enabled students to familiarize themselves with the machine’s
implementation details and performance possibilities in a form that would not
have been feasible in a shared-use setting.
The at-home setup also allowed for constant access to the printers, which in
turn allowed students to iterate extensively on their designs. Repeated design
iterations were common in both courses and went beyond simple optimizations.
For example, one student in MAT594X went through multiple iterations to find a
successful printing strategy for sculptures generated from complex
photogrammetry data that she had previously only used for digital designs.
Students were also able to create different kinds of artifacts by developing
custom fabrication processes for their machines. In some cases this involved
close integration of manual manipulation and machine fabrication. One student
in HCDE598 created a complex sculpture of interlocking chains and birdhouses,
which were printed as interlocking structures by pausing the printer at key
moments and inserting previously-completed parts (shown in Figure 1A).
Completing the sculpture involved many tens of hours of print time that were
interspersed with regular adjustments or actions made by the student.
The quality and sophistication of many student projects in both classes
suggested that at-home 3D printers provided unique learning opportunities for
machine maintenance and modification, while supporting increased design
iteration. Such opportunities are often obstructed when machines are shared
and maintained by others.
### 5.3. Gaining Tacit Knowledge Remotely
The use of at-home 3D printers enabled students to work across CAD, CAM, and
CNC throughout the courses. We observed students iterating in CAD based on
initial machine prints, learning to modify CAM settings based on model
geometry, and iterating on machine and material settings based on reference
values they looked up and then tuned. These outcomes demonstrate how learning
opportunities in integrating CAD, CAM, and machine operation remained present
in the remote format, with some key differences. Students were limited to
learning these concepts through a single form of additive fabrication. In a
makerspace, they would also have had the opportunity to learn CAD-CAM-CNC
design workflows for subtractive fabrication. This limitation was highlighted
in a subtractive CNC/CAM assignment in MAT594X, where several students ran
into errors of scale: attempting to fabricate parts that were much too large
or small for the (simulated) target machine, or selecting tooling that was
much too small. Direct exposure to subtractive milling hardware and tooling
would likely have informed this process in a way that simulation could not.
All digital fabrication machines place constraints on what can be fabricated.
Producing successful products requires learning how to design for these
constraints in CAD, how to engage in incremental testing when working with new
equipment and materials, and how to systematically adjust machine and CAM
parameters to optimize for different geometries. The hobbyist printers imposed
more severe constraints than machines we had used in prior courses, but they
still enabled students to develop these forms of knowledge in the domain of
additive manufacturing.
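The kind of pre-flight reasoning described above, checking a design against machine constraints before committing to fabrication, can be sketched as a short, hypothetical Python helper (the function and parameter names are ours, not drawn from any of the surveyed courses):

```python
def check_job(part_mm, build_mm, min_feature_mm, tool_diam_mm):
    """Flag two common scale errors before fabrication:
    a part that does not fit the machine's build volume, and
    tooling too wide for the part's smallest feature.
    part_mm and build_mm are (x, y, z) bounding boxes in millimeters."""
    issues = []
    # Bounding-box check: every part dimension must fit the build volume.
    if any(p > b for p, b in zip(part_mm, build_mm)):
        issues.append("part exceeds build volume")
    # Tooling check: the cutter must be no wider than the smallest feature.
    if tool_diam_mm > min_feature_mm:
        issues.append("tool larger than smallest feature")
    return issues
```

A checklist like this cannot replace hands-on experience with a machine, but it makes explicit the constraint-matching students would otherwise internalize through failed prints and cuts.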
### 5.4. Summary
The combined outcomes from two author-led classes that used the at-home
machine model suggested that remote instruction with distributed hobbyist 3D
printers is a viable method for graduate-level digital fabrication courses.
Shifting the workshop to the home led to complex forms of personal fabrication
while creating a mix of positive and negative lifestyle changes. This model
offered learning opportunities that were less feasible in shared makerspaces,
such as maintenance and increased iteration. It also enabled students to
develop tacit knowledge associated with the constraints of the 3D printers.
## 6\. Remote Instruction With Other Fabrication Models
In this section we describe the themes that emerged from our analysis of
external remote fabrication courses. We conceptualized themes across three
dimensions that built on the analysis of the author-led courses. 1) We further
examine how home life was impacted by remote fabrication by highlighting how
other models of instruction introduced new forms of at-home labor for
instructors and students. 2) We examine how simulation, online-vendor, and
makerspace-to-jobshop models of machine access shaped learning outcomes in
comparison to the at-home machine model of our courses. 3) We contrast the
ways instructors in different disciplines valued tacit knowledge and attempted
to preserve it in a remote learning environment. We also explore a fourth
theme (4) unique to the external data. We compare strategies for collaboration
in remote fabrication courses through the experience of student teams.
### 6.1. New Home Labor Through Remote Fabrication Access
Instructors in the majority of the courses we surveyed worked hard to create
some form of remote digital fabrication access. Each model of access created
new forms of labor for instructors. In cases where instructors and students
fabricated parts for other students with machines in their homes, they took on
the role of shop technicians. In R.A.W., one student and two TAs acted as
vinyl cutter operators for the other five participants. In ITP-Subtraction
several students took initiative to create their own job shop. As Ben
described:
> One person bought an Othermill or a Bantam mill and someone bought 3D
> printer. [The students] were all sort of like, “I’ve got this. If you need a
> part, I’ll run one.”
When students or instructors worked as fabrication technicians, they took on
non-trivial tasks of monitoring production and delivering parts. Mark raised
the concern of relying on TAs as technicians rather than educators.
> I need to be a little careful…TAs didn’t sign up to become a printing
> service. They signed up to become teaching assistants and that’s what they
> want to do.
James described how R.A.W. fabricators were not able to mail the parts in time
and resorted to photographing the pieces, assembling them and sending the
photos to the students. These delivery issues were similar to the challenges
instructors encountered when using professional online fabrication services.
ME102 and DESINV190/290-9 used online fabrication vendors, and in both cases
students experienced delays in receiving the parts. The logistics of using
online vendors disrupted students’ ability to personally test and revise their
parts. Mark described how ME102 TAs tested on-demand printed parts for
students in the lab and Adam in DESINV190/290-9 explained how shipping delays
constrained students to “only one iteration on the timeline.”
Different models of fabrication access resulted in different degrees of use,
depending on how they were implemented. Arch438X had the option of using the
university makerspace as a jobshop, however students could only receive their
parts by picking them up directly. S2 pointed out that the makerspace-to-
jobshop model was “really only an option for a few people,” adding that “a lot
of people that I know moved home, which could be a couple hours away…a couple
of states away.” The student/instructor-technician model also resulted in
limited use in the courses we surveyed. For DESMA160-4, only two of the 21
students had their parts printed by Paul. He described how the students who
used his home-printing service were those most motivated to develop their
existing CAD and digital fabrication skills.
In comparison to the makerspace-to-jobshop and instructor/student-technician
models, there was evidence that the at-home machines model led to higher rates
of machine access and use. Ben described how students who received machines
were able to use them at greater rates, and at irregular hours:
> The thing I loved about the vinyl cutters more than anything that actually
> came out of it, is that students got to live with the machines. And I think
> that’s really the only way to get good at it, right?…You get a crazy idea
> and then you immediately make it. …you somehow learn [the machine] inside
> and out. It starts having quirks that you know. …That’s something that never
> happened in the past because no one had the machines. They did it here [on
> campus] and then they left and there were 10 people behind them waiting for
> their turn. I think it just may not have been the machine they wanted, but
> having total access to something and the time …being trapped indoors with
> nothing but your vinyl cutter… you know, they learned it.
When considering the limited use of the instructor/student-technician model in
comparison to at-home machines it is important to note that instructors made
using this model optional. If it were required, the use rates would likely
have been different.
Similar to student experiences in our courses, the presence of machines at
home was also disruptive to home life for instructors and students to a
certain extent. For example, Shelby described noise interference from her 3D
printers when she was on Zoom calls. She could also hear the printers at night
when going to bed. Machines were not the only source of at-home disruptions.
The expectation to do any physical prototyping could also be a burden for some
students. S1 in ME102 described being unable to prototype effectively in his
home, saying “There’s not really a lot of places in my house where I can do
that kind of work.”
Overall, instructors relied on a wide range of strategies to compensate for
the absence of traditional fabrication spaces. Whether they chose at-home
fabrication, a student/instructor-technician model, or an online-vendor
model, the choices they made were closely tied to the learning objectives of
their course. No model was clearly superior or inferior; rather, each
emphasized different aspects of digital fabrication practice and each surfaced
new forms of labor.
### 6.2. Learning Opportunities of Remote Fabrication Instruction
Remote instruction required instructors to make major changes to curricula in
a short period. Similar to the experience of the authors, these changes
created new learning opportunities, which were often the result of how
instructors responded to the constraints of their chosen model of fabrication
access.
Mark altered ME102 to focus on collaborative CAD with minimal elements of
hands-on making. Paul created an entirely new course (DESMA160-4) because his
department determined it was infeasible to teach the original digital
fabrication course in a remote format with limited preparation time. Creating
a new course gave Paul freedom to experiment with new forms of hands-on making
including manual sewing and knot tying.
Instructors also changed how they interacted with students. Four instructors
said they increased the amount of pedagogical support they provided to
individual students. Paul had weekly progress check-ins with each of his 23
students, ranging from five to twenty minutes. Mark and his TAs swapped longer
lectures for more targeted sessions so that the students could “have more
detailed coaching on the projects they’re working on.” Mark’s student, S1,
felt this form of coaching was very effective in comparison to his experience
in some in-person Mechanical Engineering courses.
The online-vendor and student-technician models created conditions where only
some students had access to machines or physical parts. Instructors found they
could use this structure to better simulate the multi-party design workflows
of industry. As James described:
> I think it’s artificial to say that the designer is the fabricator, and is
> also the erector, [and] is also the project manager. And so I think there’s
> actually something interesting about the fact that we were forced to be
> separate. That made it easier to show the tensions between these groups. As
> opposed to me simulating that in a workshop environment where I would
> separate teams into different groups to force the sort of miscommunications
> that typically happen.
The teaching staff of DESINV190/290-9 also found that the online-vendor model
aligned better with some students’ learning objectives. Adam described how
some students were more interested in learning how to fabricate and prototype
on their own, whereas other students were more interested in the design process
and “don’t really care about the actual product.”
While learning opportunities in machine use were reduced in classes that
relied on simulation, online-vendors and student/instructor-technicians,
instructors created new opportunities in response to these constraints.
Because these models reflected the realities of distributed expertise and
resources in industrial design and manufacturing, they offered the chance for
students to learn about supply chains and division of labor. It is important to
note that exposing these new opportunities required substantial additional
instructor and TA labor.
### 6.3. Gaining Tacit Knowledge Remotely
In all the surveyed courses, instructors shared the perception that physical
making was a critical component of the learning objectives. Describing the
ethos of his department, Ben mentioned that the “first ugly cardboard
prototype” is “like a rite of passage, I think for every student.” He added
that learning CAD is only one component of the fabrication pipeline and that
physical making is required to understand materiality.
> I think material is something that is rarely thought of in the CAD stage
> and, or the CAM stage, even, other than speeds and feeds. I think they learn
> that not all material is equal. I think they learned that how more, you
> know, like be prepared for it’ll work 20 times in the 21st time it won’t
> work or, you know, that there’s a reality to these things and it’s not
> magic. I think that translates no matter what machine or whatever you’re
> doing.
This sentiment was echoed by students and instructors in other courses.
Overall, interview subjects felt that CAD and simulation alone could not teach
students the critical material elements of digital fabrication including fit,
surface finish, and tolerance.
Shelby also described the technical understanding that results from physical
making. Before the pandemic, in one of her regular first assignments she
required students to create a “cast without undercuts.” She described how
students often did not initially grasp the concept of designing for undercuts.
Only when “they pour the plaster” do they “understand it.” Shelby believed
that this physical experimentation was important for students because it
simultaneously helped them learn techniques and develop confidence when using
the machines.
As universities shut down, the instructors we surveyed felt the need to
emphasize the important learning factors for physical making, sometimes
pushing back against detractors in the process. For Shelby, moving online
reinforced the importance of in-person teaching of physical making, especially
in a context where she was “constantly having to kind of defend the value of
that kind of teaching” in her own institution:
> I do think moving online made it me more aware of how valuable that in-
> person teaching was, if that makes sense. I’ve never been very good at
> explaining it…it matters that we stand in a space together and we make
> things and there’s a sense of community and shared intelligence that comes
> out of that.
Shelby was hopeful that the shift to remote instruction would underscore the
critical importance of in-person fabrication courses in the future. For S2,
one of Shelby’s students, the complications in reviewing physical objects
remotely made the full-online format difficult to adhere to. She described the
awkwardness of having to showcase a physical project through video calls in
comparison to walking around, touching, or otherwise interacting with such a
project in an in-person studio critique.
For courses that required only some students to engage in fabrication, like
R.A.W., or courses where fabrication was optional, like DESMA160-4, students’
motivation to pursue learning elements of physical making was sometimes
reduced. In the R.A.W., S4, who was already highly skilled in digital
fabrication, expressed ambivalence about her role as a student-technician for
the class:
> It wasn’t a waste of time, but it would have been easier if someone else had
> done it, but I still think it was useful to me to like actually do it
> myself, but I still feel like the other participants still learn equally.
All the instructors we interviewed stressed the importance of hands-on making
to acquire the tacit knowledge required for digital fabrication. Not all
curricular changes reflected this concern. Instead, instructors made decisions
about hands-on fabrication in relation to the specific aspects of the larger
fabrication ecosystem they originally sought to target in their course. In
cases where instructors chose to preserve tacit learning opportunities,
instructors and TAs undertook additional labor in the form of acquiring and
distributing equipment and materials.
### 6.4. Remote Collaboration for Physical Making
Pre-pandemic, in-person collaboration was often a central component of both
professional and student digital fabrication practices. The instructors and
students we interviewed worked to maintain elements of collaborative design
and construction of physical objects despite being unable to meet in person.
Student collaboration was built into the structure of three of the classes we surveyed.
In ME102, DESINV190/290-9 and Arch438X, students were assigned a team for the
duration of the class. Initially, remote collaboration was demotivating for
students accustomed to collaborative physical construction in makerspaces. S2
in Arch438X described how “asynchronous collaboration” was frustrating when
“you’re so used to liking touching things and working together.” In addition
to collaborative construction, students and instructors valued the peer
learning, motivation and support opportunities of physical makerspace
communities. As S1 put it:
> There’s something really, really fun about biking across campus to the
> [workshop] late at night and seeing all the other people working on their
> projects, bouncing ideas off each other, asking TAs that are there for help.
Instructors relied primarily on online communication technologies and
collaborative CAD tools to retain collaborative workflows in the remote
format. Students also developed new organizational strategies to coordinate at
a distance. In ME102, S1 and his team established a workflow in order to
optimize synchronous collaborative CAD development over Zoom, where they would
alternate between brainstorming, prototyping, and assembling 3D models
collaboratively using Onshape, and working individually on their respective
parts of the design. According to S1, it was actually easier to meet over Zoom
than in person for CAD-based issues; they could simply “get on zoom and fix
it” quickly. When it came to manufacturing and building physical objects,
remote collaboration often involved asynchronous assembly or division of
labor. Student teams in DESINV190/290-9 assigned one member—usually the one
with the most prior digital fabrication experience—to receive and assemble all
parts from an on-demand fabrication service. A limited number of teams sent
duplicate parts to other members to enhance their understanding of the parts’
physicality.
Remote CAD collaboration also required divisions of labor and advanced
planning. A team-based assignment in ME102 required each student to design a
system in CAD that interfaced with their teammates’ systems to generate a
continuous marble run (see Figure 1C). Working remotely required teams to
define the spatial placement of each 3D model in relation to the others in
advance and create a modular design with different components assigned to each
team member.
In addition to the frustration students experienced transitioning from in-
person to remote collaboration, later issues arose with teamwork and
communication. Mark found that creating team cohesion over online social
networks was more difficult, especially if students were new to the subject or
did not know each other in person. These tensions were exacerbated when team
members were unable or unwilling to use the same software tools in
collaborative CAD, which produced dissatisfaction among students and
discrepancies in the outcome.
In spite of these tensions, one student and one instructor saw potential
benefits to the logistical challenges imposed by remote collaboration.
Interviewees described how the remote format mirrored field-specific workflows.
S1 pointed out that “a lot of what you do now with CAD is collaborative…So
[ME102] was the most perfect training for that.” In R.A.W., James felt that
the remote setting enabled participants to select roles in line with their
interests.
> This separation of roles I think is really interesting…People who are
> interested in the digital workflow and the file prep in the parametric
> design jumped into that in a physical workshop. It would have been excellent
> to have the people who want to be the “hands-on folks” designing the jigs
> and doing the assembly.
He described further how the remote setting could simulate the division of
expertise that is common in professional architecture and manufacturing
practice. The absence of a makerspace created significant shifts in patterns
of collaboration. Students were assigned explicit roles and labor was divided
based on interest and expertise. The pleasurable collaboration of in-person
makerspaces was absent; however, some students and instructors saw alternative
learning opportunities that reflected professional design and fabrication
practice.
### 6.5. Summary
The fabrication access models in the six courses that we surveyed were chosen
by the instructors to align with specific course objectives. These models
created different learning opportunities depending on their implementation and
often created additional labor for instructors and TAs. The use of simulation,
online vendors, makerspace-as-jobshop, and student/instructor-technicians
reduced the amount of tacit knowledge students could gain from operating
machines but still allowed students to engage in workflows, collaborative
practices, and division of tasks reflecting industrial realities. Similar to
the authors’ courses, students with home access to machines and
physical materials were able to develop greater levels of machine familiarity
and physical construction experience while undergoing disruptions and new
forms of labor in their daily routines.
## 7\. Discussion
The COVID-19 pandemic called attention to implicit elements of digital
fabrication instruction which, as soon as they became absent or more difficult
to access, required more labor to maintain: the tacit elements of physical
making; the facilitation of collaboration in the classroom; and providing
equal access to resources. In this section, we discuss three main takeaways
from the analysis of our data:
1. (1)
The courses’ learning objectives had a great impact on which tacit elements of
digital fabrication were transmitted to students. This stresses the importance
of articulating course objectives and structure over access to fabrication
spaces when teaching digital fabrication, especially remotely.
2. (2)
Proper scaffolding, providing students with opportunities for exploration and
iteration, and facilitating peer collaboration yielded stronger learning
outcomes, according to our data, than a focus on access to tools and materials
alone.
3. (3)
Uneven access to both material and human resources among students was
exacerbated in a remote context. Clearly defining learning objectives became
critical for instructors so that they could make more informed decisions about
what material resources to incorporate in their curriculum and how to manage
them.
Each of these takeaways provides insights for our second research question
(how can we learn from instructors’ efforts to teach digital fabrication in a
crisis to improve remote instruction of digital fabrication in the future?).
### 7.1. What Do We Lose When We Lose the Makerspace?
There are many definitions of what a successful digital fabrication course
looks like. This reality was brought into sharp relief during the pandemic, as
instructors needed to make quick decisions on what to preserve and what to
change when transitioning their course online. This is in part because digital
fabrication encompasses many forms of practice. There are workflows that are
directly relevant to industry, such as the production of architectural
elements or medical devices. There are specific workflows developed by artists
for their unique work. Individuals may practice digital fabrication as a form
of craft. There are many workflows which combine elements of digital
fabrication alongside elements of traditional manufacturing or craft. Each of
these forms of practice corresponds to distinct categories of artifacts that
can be made. The shape of what is possible in turn shapes attitudes about
digital fabrication.
Because of this, the tacit learning components of digital fabrication are
difficult to situate. While all instructors agreed that these tacit components
are tied to the experiential nature of digital fabrication, how this
experiential component is conveyed varied widely. Nadya and Jennifer opted for
an at-home fabrication model, where students acquired hobbyist machines and
lived with them. The instructors we interviewed described a range of
strategies, which we can divide into two main categories: at-home fabrication
(Ben) and diffused fabrication, where the whole group relied on one or a few
fabricators, whether they were the instructor (as was the case in Paul’s
course), the TAs (in Mark’s class), other students (in James’), the makerspace
staff (in Shelby’s) or an external fabrication service (Vivek/Adam/George,
Mark).
Both types of approaches had pros and cons. Nadya and Jennifer observed that
living with machines was not without challenges for their students—with issues
of noise, fumes, and space management—but when properly accommodated, provided
many learning opportunities for machine maintenance, modification, and design
iteration. For instructors who consider a similar fabrication model, paying
particular attention to how hobbyist machines fit into the students’ living
context can smooth potential frustrations and hindrances to learning.
The instructors we interviewed who chose an at-home fabrication model also
reported gains and trade-offs to this approach. In Ben’s class, there were
issues distributing vinyl cutters to students, resulting in two students not
receiving equipment at all. Students who did get access to equipment, however,
gradually became used to their vinyl cutter, exploring and trying different
approaches, ultimately settling on uses that suited their interests and
learning goals. The fact that the students “got to live with the machines”
meant that they not only developed a deeper knowledge of their tool but also
that they could expand their fabrication practice.
In the diffused fabrication model, the fabrication process was shared between
several parties and usually circulated from students (who designed the part)
to technicians (either other students, a TA, the instructor or a professional
service) and back to students (either in physical or virtual form). For this
model, the external data showed that particular attention needed to be paid to
both the course logistics—planning timelines to receive files, debug them,
print parts and ship them to students—and the course scaffolding so that
students could take advantage of these resources. The experience of Paul
showed that only students who were ready in terms of skills and vision took
advantage of his fabrication setup. Without proper scaffolding, students were
not always motivated or comfortable using the services made available to them.
A diffused fabrication model, however, provided the opportunity to learn
another type of tacit knowledge in digital fabrication, that is the ebb and
flow of collaborating on larger projects, where fabricators, designers, and
project managers are often separated. In this scenario, the tacit learning
component was not conveyed to students through access to tools or parts but
through access to a collaborative fabrication workflow.
What do we lose, then, when we lose the makerspace? We might think that with
the loss of physical fabrication spaces, the tacit learning components of
digital fabrication disappear. Instead, our data shows that these tacit
components resurfaced in students’ homes, in collaborative processes and in
virtual environments, and that these manifestations are intrinsically linked
to the course’s learning objectives. For instance, the experiential aspect of
digital fabrication in James’s course was tied to the level of the course (the
students had experience in fabrication) and its topic (architectural robotics,
which often involves multi-party workflows). For Ben’s class, which focused on
expression, getting students access to machines so that they could explore and
create was critical.
There is not, therefore, one set of tools or materials that will guarantee
successful learning of digital fabrication. Rather, different learning
objectives will result in different decisions for choosing what material
resources are most appropriate for a given class. These decisions are tied to
the class’ level, the field of study, and the students’ backgrounds.
The challenges of teaching the tacit elements of digital fabrication were
exacerbated in remote formats but also presented an opportunity to better
articulate them. Making learning objectives explicit is crucial, as is
understanding how they are tied to certain digital fabrication practices, how
they lead to specific choices in material sourcing and distribution, and how
they are sometimes at odds with other curricular goals. This is an
occasion to reconsider the locus of experiential learning in digital
fabrication not in the makerspace, but in the practices each instructor
facilitates.
### 7.2. “What Works is to Teach a Process”: Exploration, Iteration,
Contextualization
When analyzing our data, we found that students having the ability to explore
and iterate was more important for successful learning outcomes than what
means of fabrication they had access to—whether it was students 3D printing on
inexpensive printers at home, or sending parts out to be fabricated. Iteration
happened especially when the instructors gave assignments that encouraged
exploration and experimentation, as was the case in Shelby’s class where the
students had to come up with several versions of a KUKA robotic arm end-
effector. Creating a space for exploration and expression for students goes
hand in hand with a proper contextualization of how the approaches they learn
fit into a larger landscape of computation and fabrication. For example, James
spent a significant amount of time explaining exactly how the problems they
were going to solve with paper craft corresponded to problems they would have
encountered had they been using sheet metal. In Vivek, Adam, and George’s
class, the workflow established for students via an on-demand fabrication
service recreated workflows they were likely to encounter in the workplace,
according to the instructors.
Another important factor for successful learning outcomes we observed was
individual or targeted support for students. Working with a small group of
students and a mentor relieved some of the anxiety of being in a large class.
As remote learning lingers on the horizon, increasing the role of Teaching
Assistants in mentoring might prove beneficial to students, especially as it
recreates the more targeted assistance that can be found in makerspaces.
The data showed no indication that some minimal amount of equipment would be
sufficient to catalyze learning. Rather, we observed that learning outcomes
were more strongly tied to instructors’ ability to contextualize the learning
environment, challenge students, and support community and iteration. This
happened in each of the courses we analyzed, but with emphasis on different
aspects and practices of digital fabrication.
### 7.3. Inequities in Distribution of Machines, Materials, and Labor
The pandemic is calling attention to many existing issues, among them unequal
student access to both human and material resources. These inequities became
particularly prevalent in the context of digital fabrication learning, which
is resource-intensive. During remote instruction, access to tools and a peer
community strongly depended on individual student situations. These can vary
widely, with some students having ample space to accommodate tools in their
living environment as well as established rapport with peers, while others
faced isolation and challenging home situations. These inequalities can lead
to inequities if instructors and institutions do not work to provide and
facilitate equal access of human and material resources to their entire
student body. Instructors play a crucial role in how access to resources is
managed. By being specific about what the learning objectives are, they can
make better decisions about what material resources are needed and how best to
distribute them.
There were many ways in which access to equipment ended up being uneven. For
example, not all students had space for machines in their living quarters.
These students performed additional work of packing machines when not in use,
then taking them out again when working. When given credits to use towards
fabrication services, some students delayed the fabrication of parts in favor
of more CAD revisions. This delayed the learning of tacit elements of digital
fabrication such as the unintended effects of computational design decisions
on production. Shared living spaces are also not immune to unfortunate
accidents, such as when the roommate of one student in Vivek, Adam, and
George’s class stepped on the assembled model for the course’s final showcase
and entirely broke it. One student reported that “the bigger discrepancy
between students is internet connection.” Material sent out to students in
countries other than the US was often more expensive to buy and ship and
sometimes impossible to get to the students.
Access to human resources is as critical as access to material ones. Open and
welcoming communities for peer-learning contributed extensively to positive
outcomes in the remote digital fabrication classes we surveyed. In some cases,
these communities pre-dated the classes and the pandemic but in others they
were scaffolded during the class. Instructors established and organized online
communities. Paul reported initiating a Discord group for students to ask
questions to each other. Nadya observed the evolution of her class’ online
community, which remained active after the course ended. Ensuring students had
access to one or several people—whether the instructor, a TA, or another
student—created positive learning outcomes. Such access mattered before the
pandemic as well, but it required more labor to create in a distance learning
format.
Instructors also reported inequities in labor among students. For instance,
Vivek explained that despite the instructors’ efforts to provide a
collaborative video editing platform, students still relied on the most
experienced editor. While not unique to remote instruction, these inequities
were exacerbated in a remote context where asynchronous collaborative
processes can be difficult and where in-person accountability mechanisms are
absent. More importantly, fabrication models influenced how work was shared
between peers. On-demand fabrication services meant that iteration and
exploration were often not possible for students, which pushed them to rely
more heavily on their more experienced peers.
Having more materials to iterate and experiment with helped students
understand possibilities and trade-offs in fabrication. Overall, the classes
we surveyed did not find good ways of providing students with a centralized
repository for materials, access to which is nonetheless critical to
experimentation. For Jennifer, ensuring consistent material access to her
students is crucial for the next iterations of her computational fabrication
course online. Nadya considers institutional support as key to managing equal
distribution of resources for students.
As we are writing this paper, our departments are communicating that remote
instruction of digital fabrication courses in the Spring of 2021 is not
unlikely. Instructors and institutions can work towards developing approaches
to remote instruction of digital fabrication that are not provisional but
cohesive and integrated into the students’ living situations. What was evident
from the courses we surveyed is that with greater possibilities for planning,
remote instruction of digital fabrication could, if not completely address
inequities in access to resources, work towards not aggravating them and even
creating new opportunities for students. For Shelby, teaching her studio class
this Fall meant coming up with other assignments that engage her students’
creativity and surface the opportunities hidden in their living spaces, such
as “conceptual robots that [the students] build at home out of things that
they have. They won’t necessarily need to be mechanized.” She added:
> I think if we were teaching online on purpose rather than kind of as an
> emergency, I could be really, I could feel more creative about it, you know?
> Like, it would be fun.
## 8\. Conclusion
The COVID-19 pandemic has endured in the US substantially beyond the day in
spring 2020 when campuses shut down. This research shows the work of students
and instructors teaching and learning digital fabrication in a crisis. We
examined how students were provided with remote access to digital fabrication,
whether through at-home fabrication or diffused fabrication, and what their
respective challenges were. We identified unique learning opportunities of
remote instruction of digital fabrication, including increased opportunities
to iterate with at-home equipment and increased opportunities for
collaboration, documentation, and engagement through remote learning
technologies. We recounted different approaches instructors took in teaching
important tacit elements of digital fabrication remotely. We found that,
overall, no minimum amount of equipment was required for students to learn
important elements of digital fabrication. Rather, it was more important that
instructors framed the work, established buy-in, and supported students’
iteration. Furthermore, we called attention to the ways inequities persist
across education, including remote digital fabrication education, and
reiterated that it is of paramount importance for instructors and institutions
to work together towards more just student experiences. We are now in a
protracted crisis, or “the new normal.” While the future remains uncertain, we
hope that it will hold sustainable and equitable opportunities for students to
have hands-on learning experiences, even if those learning opportunities need
to happen at a safe distance.
## 9\. Acknowledgments
We are grateful to all of the instructors and students who shared their
experiences with us, including Mark Cutkosky, Shelby Doyle, Paul Esposito,
James Coleman, Ben Light, Vivek Rao, Adam Patrick Hutz, and George Moore. We
also greatly appreciate the guidance and input from Madeline Gannon, Daniela
Rosner, and Audrey Desjardins. This research was funded in part by the NSF IIS
Human-Centered Computing program (#2007045) and the UCSB Academic Senate
Faculty Research Grant program.
## References
* Alcock et al. (2016) Celena Alcock, Nathaniel Hudson, and Parmit K. Chilana. 2016\. Barriers to Using, Customizing, and Printing 3D Designs on Thingiverse. In _Proceedings of the 19th International Conference on Supporting Group Work_ (Sanibel Island, Florida, USA) _(GROUP ’16)_. Association for Computing Machinery, New York, NY, USA, 195–199. https://doi.org/10.1145/2957276.2957301
* Ames et al. (2014) Morgan G. Ames, Jeffrey Bardzell, Shaowen Bardzell, Silvia Lindtner, David A. Mellis, and Daniela K. Rosner. 2014. Making Cultures: Empowerment, Participation, and Democracy - or Not?. In _CHI ’14 Extended Abstracts on Human Factors in Computing Systems_ (Toronto, Ontario, Canada) _(CHI EA ’14)_. Association for Computing Machinery, New York, NY, USA, 1087–1092. https://doi.org/10.1145/2559206.2579405
* Blikstein (2013) Paulo Blikstein. 2013\. Digital fabrication and ‘making’ in education: The democratization of invention. In _FabLabs: Of machines, makers and inventors_. Vol. 4. Bielefeld: Transcript Publishers, 1–21.
* Braun and Clarke (2006) Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. _Qualitative Research in Psychology_ 3, 2 (2006), 77–101. https://doi.org/10.1191/1478088706qp063oa arXiv:https://www.tandfonline.com/doi/pdf/10.1191/1478088706qp063oa
* Braun and Clarke (2019) Virginia Braun and Victoria Clarke. 2019. Reflecting on reflexive thematic analysis. _Qualitative Research in Sport, Exercise and Health_ 11, 4 (2019), 589–597.
* Campos et al. (2019) Fabio Campos, Tatiana Soster, and Paulo Blikstein. 2019\. Sorry, I Was in Teacher Mode Today: Pivotal Tensions and Contradictory Discourses in Real-World Implementations of School Makerspaces. In _Proceedings of FabLearn 2019_ (New York, NY, USA) _(FL2019)_. Association for Computing Machinery, New York, NY, USA, 96–103. https://doi.org/10.1145/3311890.3311903
* Coleman and Cole (2017) James Coleman and Shannon Cole. 2017. By Any Means Necessary: Digitally Fabricating Architecture at Scale. _Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA)_ (2017).
* Dougherty et al. (2016) D. Dougherty, A. Conrad, and T. O’Reilly. 2016. _Free to Make: How the Maker Movement is Changing Our Schools, Our Jobs, and Our Minds_. North Atlantic Books.
* Eisenberg and Eisenberg (1998) Mike Eisenberg and Ann Eisenberg. 1998. Middle tech: blurring the division between high and low tech in education. In _The design of children’s technology_. Morgan Kaufmann, 244–273.
* Eriksson et al. (2019) Eva Eriksson, Ole Sejer Iversen, Gökçe Elif Baykal, Maarten Van Mechelen, Rachel Smith, Marie-Louise Wagner, Bjarke Vognstrup Fog, Clemens Klokmose, Bronwyn Cumbo, Arthur Hjorth, Line Have Musaeus, Marianne Graves Petersen, and Niels Olof Bouvin. 2019. Widening the Scope of FabLearn Research: Integrating Computational Thinking, Design and Making. In _Proceedings of the FabLearn Europe 2019 Conference_ (Oulu, Finland) _(FabLearn Europe ’19)_. Association for Computing Machinery, New York, NY, USA, Article 15, 9 pages. https://doi.org/10.1145/3335055.3335070
* Eversmann (2017) Philipp Eversmann. 2017\. Digital Fabrication in Education-Strategies and Concepts for Large-Scale Projects. _Proceedings of the 35th eCAADe Conference_ 1, 333–342.
* FabLearn (2020) FabLearn. 2020. The Annual FabLearn Conference. https://fablearn.org/, accessed Aug 2020.
* Institute for Computational Design and Construction Stuttgart (2020) Institute for Computational Design and Construction Stuttgart. 2020. Computational Construction Laboratory. https://www.icd.uni-stuttgart.de/research/research-infrastructure/, accessed Aug 2020.
* Gantt and Nardi (1992) Michelle Gantt and Bonnie A. Nardi. 1992. Gardeners and Gurus: Patterns of Cooperation among CAD Users. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Monterey, California, USA) _(CHI ’92)_. Association for Computing Machinery, New York, NY, USA, 107–117. https://doi.org/10.1145/142750.142767
* Ion et al. (2016) Alexandra Ion, Johannes Frohnhofen, Ludwig Wall, Robert Kovacs, Mirela Alistar, Jack Lindsay, Pedro Lopes, Hsiang-Ting Chen, and Patrick Baudisch. 2016. Metamaterial Mechanisms. In _Proceedings of the 29th Annual Symposium on User Interface Software and Technology_ (Tokyo, Japan) _(UIST ’16)_. Association for Computing Machinery, New York, NY, USA, 529–539. https://doi.org/10.1145/2984511.2984540
* Jacobs et al. (2017) Jennifer Jacobs, Sumit Gogia, Radomír Měch, and Joel R. Brandt. 2017. Supporting Expressive Procedural Art Creation through Direct Manipulation. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_ (Denver, Colorado, USA) _(CHI ’17)_. Association for Computing Machinery, New York, NY, USA, 6330–6341. https://doi.org/10.1145/3025453.3025927
* Jacobs and Peek (2020) Jennifer Jacobs and Nadya Peek. 2020. Learning remotely, making locally: Remote digital fabrication instruction during a pandemic. _Interactions_ (2020). http://interactions.acm.org/blog/view/learning-remotely-making-locally-remote-digital-fabrication-instruction-dur.
# LDLE: Low Distortion Local Eigenmaps
Dhruv Kohli
<EMAIL_ADDRESS>
Department of Mathematics
University of California San Diego
CA 92093, USA Alexander Cloninger
<EMAIL_ADDRESS>
Department of Mathematics
University of California San Diego
CA 92093, USA Gal Mishne
<EMAIL_ADDRESS>
Halicioğlu Data Science Institute
University of California San Diego
CA 92093, USA
###### Abstract
We present Low Distortion Local Eigenmaps (LDLE), a manifold learning
technique which constructs a set of low distortion local views of a dataset in
lower dimension and registers them to obtain a global embedding. The local
views are constructed using the global eigenvectors of the graph Laplacian and
are registered using Procrustes analysis. The choice of these eigenvectors may
vary across the regions. In contrast to existing techniques, LDLE can embed
closed and non-orientable manifolds into their intrinsic dimension by tearing
them apart. It also provides gluing instructions on the boundary of the torn
embedding to help identify the topology of the original manifold. Our
experimental results show that LDLE largely preserves distances up to a
constant scale while other techniques produce higher distortion. We also
demonstrate that LDLE produces high quality embeddings even when the data is
noisy or sparse.
Keywords: Manifold learning, graph Laplacian, local parameterization,
Procrustes analysis, closed manifold, non-orientable manifold
## 1 Introduction
Manifold learning techniques such as Local Linear Embedding [37], Diffusion
maps [17], Laplacian eigenmaps [3], t-SNE [30] and UMAP [32], aim at
preserving local information as they map a manifold embedded in higher
dimension into lower (possibly intrinsic) dimension. In particular, UMAP and
t-SNE follow a top-down approach as they start with an initial low-dimensional
global embedding and then refine it by minimizing a local distortion measure
on it. In contrast, similar to LTSA [49] and [40], a bottom-up approach for
manifold learning can be imagined to consist of two steps, first obtaining low
distortion local views of the manifold in lower dimension and then registering
them to obtain a global embedding of the manifold. In this paper, we take this
bottom-up perspective to embed a manifold in low dimension, where the local
views are obtained by constructing coordinate charts for the manifold which
incur low distortion.
### 1.1 Local Distortion
Let $(\mathcal{M},g)$ be a $d$-dimensional Riemannian manifold with finite
volume. By definition, for every $x_{k}$ in $\mathcal{M}$, there exists a
coordinate chart $(\mathcal{U}_{k},\Phi_{k})$ such that
$x_{k}\in\mathcal{U}_{k}$, $\mathcal{U}_{k}\subset\mathcal{M}$ and $\Phi_{k}$ maps
$\mathcal{U}_{k}$ into $\mathbb{R}^{d}$. One can imagine $\mathcal{U}_{k}$ to
be a local view of $\mathcal{M}$ in the ambient space. Using rigid
transformations, these local views can be registered to recover $\mathcal{M}$.
Similarly, $\Phi_{k}(\mathcal{U}_{k})$ can be imagined to be a local view of
$\mathcal{M}$ in the $d$-dimensional embedding space $\mathbb{R}^{d}$. Again,
using rigid transformations, these local views can be registered to obtain the
$d$-dimensional embedding of $\mathcal{M}$.
As there may exist multiple mappings which map $\mathcal{U}_{k}$ into
$\mathbb{R}^{d}$, a natural strategy would be to choose a mapping with low
distortion. Multiple measures of distortion exist in the literature [14]. The
measure of distortion used in this work is as follows. Let $d_{g}(x,y)$ denote
the shortest geodesic distance between $x,y\in\mathcal{M}$. The distortion of
$\Phi_{k}$ on $\mathcal{U}_{k}$ as defined in [25] is given by
$\displaystyle\text{Distortion}(\Phi_{k},\mathcal{U}_{k})=\left\|\Phi_{k}\right\|_{\text{Lip}}\left\|\Phi_{k}^{-1}\right\|_{\text{Lip}}$
(1)
where $\left\|\Phi_{k}\right\|_{\text{Lip}}$ is the Lipschitz norm of
$\Phi_{k}$ given by
$\displaystyle\left\|\Phi_{k}\right\|_{\text{Lip}}$
$\displaystyle=\sup_{\begin{subarray}{c}x,y\in\mathcal{U}_{k}\\\ x\neq
y\end{subarray}}\frac{\left\|\Phi_{k}(x)-\Phi_{k}(y)\right\|_{2}}{d_{g}(x,y)},$
(2)
and similarly,
$\displaystyle\left\|\Phi^{-1}_{k}\right\|_{\text{Lip}}$
$\displaystyle=\sup_{\begin{subarray}{c}x,y\in\mathcal{U}_{k}\\\ x\neq
y\end{subarray}}\frac{d_{g}(x,y)}{\left\|\Phi_{k}(x)-\Phi_{k}(y)\right\|_{2}}.$
(3)
Note that $\text{Distortion}(\Phi_{k},\mathcal{U}_{k})$ is always greater than
or equal to $1$. If $\text{Distortion}(\Phi_{k},\mathcal{U}_{k})=1$, then
$\Phi_{k}$ is said to have no distortion on $\mathcal{U}_{k}$. This is
achieved when the mapping $\Phi_{k}$ preserves distances between points in
$\mathcal{U}_{k}$ up to a constant scale, that is, when $\Phi_{k}$ is a
similarity on $\mathcal{U}_{k}$. It is not always possible to obtain a mapping
with no distortion. For example, there does not exist a similarity which maps
a locally curved region on a surface into a Euclidean plane. This follows from
the fact that the sign of the Gaussian curvature is preserved under a similarity
transformation, which in turn follows from Gauss's Theorema Egregium.
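In the discrete setting, the distortion in Eq. (1) can be estimated directly from sample points. The sketch below (illustrative, not part of any algorithm described later; the function and argument names are ours) computes the product of the two empirical Lipschitz norms in Eqs. (2) and (3) for a map evaluated on points of a local view, given their pairwise geodesic distances.

```python
import numpy as np

def distortion(phi, X, geodesic_dist):
    """Empirical distortion of a map phi on sample points X (Eq. 1).

    phi           : callable mapping a point of X into R^d
    X             : (n, D) array of points sampled from a local view U_k
    geodesic_dist : (n, n) array of pairwise geodesic distances d_g(x, y)
    """
    Y = np.apply_along_axis(phi, 1, X)           # images phi(x) in R^d
    emb = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)            # distinct pairs x != y
    ratio = emb[iu] / geodesic_dist[iu]          # ||phi(x)-phi(y)|| / d_g(x, y)
    return ratio.max() / ratio.min()             # ||phi||_Lip * ||phi^{-1}||_Lip
```

A similarity (e.g. `phi = lambda p: 2*p` on a flat patch) yields distortion $1$, while an anisotropic linear map yields its axis-stretch ratio.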
### 1.2 Our Contributions
This paper takes motivation from the work in [25] where the authors provide
guarantees on the distortion of the coordinate charts of the manifold
constructed using carefully chosen eigenfunctions of the Laplacian. However,
this only applies to the charts for small neighborhoods on the manifold and
does not provide a global embedding. In this paper, we present an approach to
realize their work in the discrete setting and obtain low-dimensional low
distortion local views of the given dataset using the eigenvectors of the
graph Laplacian. Moreover, we piece together these local views to obtain a
global embedding of the manifold. The main contributions of our work are as
follows:
1. 1.
We present an algorithmic realization of the construction procedure in [25]
that applies to the discrete setting and yields low-dimensional low distortion
views of small metric balls on the given discretized manifold (See Section 2
for a summary of their procedure).
2. 2.
We present an algorithm to obtain a global embedding of the manifold by
registering its local views. The algorithm is designed so as to embed closed
as well as non-orientable manifolds into their intrinsic dimension by tearing
them apart. It also provides gluing instructions for the boundary of the
embedding by coloring it such that the points on the boundary which are
adjacent on the manifold have the same color (see Figure 2).
LDLE consists of three main steps. In the first step, we estimate the inner
product of the Laplacian eigenfunctions’ gradients using the local correlation
between them. These estimates are used to choose eigenfunctions which are in
turn used to construct low-dimensional low distortion parameterizations
$\Phi_{k}$ of the small balls $U_{k}$ on the manifold. The choice of the
eigenfunctions depends on the underlying ball. A natural next step is to align
these local views $\Phi_{k}(U_{k})$ in the embedding space, to obtain a global
embedding. One way to align them is to use Generalized Procrustes Analysis
(GPA) [18, 20, 43]. However, we empirically observed that GPA is less
efficient and prone to errors due to the large number of local views with small
overlaps between them. Therefore, motivated from our experimental observations
and computational necessity, in the second step, we develop a clustering
algorithm to obtain a small number of intermediate views
$\widetilde{\Phi}_{m}(\widetilde{U}_{m})$ with low distortion, from the large
number of smaller local views $\Phi_{k}(U_{k})$. This makes the subsequent GPA
based registration procedure faster and less prone to errors.
Finally, in the third step, we register intermediate views
$\widetilde{\Phi}_{m}(\widetilde{U}_{m})$ using an adaptation of GPA which
enables tearing of closed and non-orientable manifolds so as to embed them
into their intrinsic dimension. The results on a 2D rectangular strip and a 3D
sphere are presented in Figures 1 and 2 to motivate our approach.
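As a point of reference for the registration step, a single pairwise alignment of two overlapping views can be sketched with the classical orthogonal Procrustes solution. This is only the basic building block, not LDLE's GPA adaptation with tearing; `align_view` and its arguments are illustrative names.

```python
import numpy as np

def align_view(src, dst):
    """Rigidly align src onto dst (both (m, d) arrays of the same m
    overlap points) via the orthogonal Procrustes solution."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, _, Vt = np.linalg.svd(B.T @ A)          # SVD of the cross-covariance
    R = U @ Vt                                 # optimal orthogonal transform
    return lambda P: (P - mu_s) @ R.T + mu_d   # map points from src's frame

# Sanity check: an exact rigid motion is recovered exactly.
rng = np.random.default_rng(0)
src = rng.standard_normal((10, 2))
theta = 0.7
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
dst = src @ Rot.T + np.array([1.0, -2.0])
f = align_view(src, dst)                       # f(src) reproduces dst
```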
The paper is organized as follows. Section 2 provides relevant background
and motivation. In Section 3 we present the construction of low-dimensional
low distortion local parameterizations. Section 4 presents our clustering
algorithm to obtain intermediate views. Section 5 registers the intermediate
views to a global embedding. In Section 6 we compare the embeddings produced
by our algorithm with existing techniques on multiple datasets. Section 7
concludes our work and discusses future directions.
### 1.3 Related Work
Laplacian eigenfunctions are ubiquitous in manifold learning. A large
proportion of the existing manifold learning techniques rely on a fixed set of
Laplacian eigenfunctions, specifically, on the first few non-trivial low
frequency eigenfunctions, to construct a low-dimensional embedding of a
manifold in high dimensional ambient space. These low frequency eigenfunctions
not only carry information about the global structure of the manifold but they
also exhibit robustness to the noise in the data [17]. Laplacian eigenmaps
[3], Diffusion maps [17] and UMAP [32] are examples of such top-down manifold
learning techniques. While there are limited bottom-up manifold learning
techniques in the literature, to the best of our knowledge, none of them makes
use of Laplacian eigenfunctions to construct local views of the manifold in
lower dimension.
##### LTSA
is an example of a bottom-up approach for manifold learning whose local
mappings project local neighborhoods onto the respective tangential spaces. A
local mapping in LTSA is a linear transformation whose columns are the
principal directions obtained by applying PCA on the underlying neighborhood.
These directions form an estimate of the basis for the tangential space.
Having constructed low-dimensional local views for each neighborhood, LTSA
then aligns all the local views to obtain a global embedding. As discussed in
their work and as we will show in our experimental results, LTSA lacks
robustness to the noise in the data. This further motivates our approach of
using robust low-frequency Laplacian eigenfunctions for the construction of
local views. Moreover, due to the specific constraints used in their
alignment, LTSA embeddings fail to capture the aspect ratio of the underlying
manifold (see Appendix F for details).
##### Laplacian eigenmaps
uses the eigenvectors corresponding to the $d$ smallest eigenvalues (excluding
zero) of the normalized graph Laplacian to embed the manifold in
$\mathbb{R}^{d}$. It can also be perceived as a top-down approach which
directly obtains a global embedding that minimizes Dirichlet energy under some
constraints. For manifolds with high aspect ratio, in the context of Section
1.1, the distortion of the local parameterizations based on the restriction of
these eigenvectors on local neighborhoods, could become extremely high. For
example, as shown in Figure 1, the Laplacian eigenmaps embedding of a
rectangle with an aspect ratio of $16$ looks like a parabola. This issue is
explained in detail in [38, 8, 19, 6].
##### UMAP,
to a large extent, resolves this issue by first computing an embedding based
on the $d$ non-trivial low-frequency eigenvectors of a symmetric normalized
Laplacian and then “sprinkling” white noise in it. It then refines the noisy
embedding by minimizing a local distortion measure based on fuzzy set cross
entropy. Although UMAP embeddings seem to be topologically correct, they
occasionally tend to have twists and sharp turns which may be unwanted (see
Figure 1).
##### t-SNE
takes a different approach of randomly initializing the global embedding,
defining a local t-distribution in the embedding space and local Gaussian
distribution in the high dimensional ambient space, and finally refining the
embedding by minimizing the Kullback–Leibler divergence between the two sets
of distributions. As shown in Figure 1, t-SNE tends to output a dissected
embedding even when the manifold is connected. Note that the recent work by
[26] showed that t-SNE with spectral initialization results in a similar
embedding as that of UMAP. Therefore, in this work, we display the output of
the classic t-SNE construction, with random initialization only.
Figure 1: Embeddings of a rectangle ($4\times 0.25$) with high aspect ratio in
$\mathbb{R}^{2}$ into $\mathbb{R}^{2}$. Panels (left to right): Input, LDLE,
LDLE with $\partial\mathcal{M}$ known a priori, LTSA, UMAP, t-SNE, Laplacian
Eigenmaps.
A feature missing from existing manifold learning techniques is the ability to
embed closed manifolds into their intrinsic dimensions. For example, a sphere
in $\mathbb{R}^{3}$ is a $2$-dimensional manifold which can be represented by
a connected domain in $\mathbb{R}^{2}$ with boundary gluing instructions
provided in the form of colors. We address this limitation in this paper (see Figure
2).
Figure 2: Embeddings of a sphere in $\mathbb{R}^{3}$ into $\mathbb{R}^{2}$.
Panels in each row (left to right): Input, LDLE, LTSA, UMAP, t-SNE, Laplacian
Eigenmaps.
The top and bottom row contain the same plots colored by the height and the
azimuthal angle of the sphere ($0-2\pi$), respectively. LDLE automatically
colors the boundary so that the points on the boundary which are adjacent on
the sphere have the same color. The arrows are manually drawn to help the
reader identify the two pieces of the boundary which are to be stitched
together to recover the original sphere. LTSA, UMAP and Laplacian eigenmaps
squeezed the sphere into different viewpoints of $\mathbb{R}^{2}$ (side or top
view of the sphere). t-SNE also tore apart the sphere but the embedding lacks
interpretability as it is “unaware” of the boundary.
## 2 Background and Motivation
Due to their global nature and robustness to noise, in our bottom-up approach
for manifold learning, we propose to construct low distortion (see Eq. (1))
local mappings using low frequency Laplacian eigenfunctions. A natural way to
achieve this is to restrict the eigenfunctions on local neighborhoods.
Unfortunately, the common trend of using first $d$ non-trivial low frequency
eigenfunctions to construct these local mappings fails to produce low
distortion on all neighborhoods. This directly follows from the Laplacian
Eigenmaps embedding of a high aspect-ratio rectangle shown in Figure 1. The
following example shows that even in the case of unit aspect ratio, a local
mapping based on the same set of eigenfunctions would not incur low distortion
on each neighborhood, while mappings based on different sets of eigenfunctions
may achieve that.
Figure 3: (Left) Distortion of $\Phi_{1}^{*}$ (top) and $\Phi_{2}^{*}$
(bottom) on discs of radius $0.01$ centered at $(x,y)$ for all
$x,y\in[0,1]\times[0,1]$. $\Phi_{2}^{*}$ produces close to infinite distortion
on the discs located in the white region. (Right) Mapping of the discs at
various locations in the square using $\Phi_{1}^{*}$ (top) and $\Phi_{2}^{*}$
(bottom).
Consider a unit square $[0,1]\times[0,1]$ such that for every point $x_{k}$ in
the square, $\mathcal{U}_{k}$ is the disc of radius $0.01$ centered at
$x_{k}$. Consider a mapping $\Phi_{1}^{*}$ based on the first two non-trivial
eigenfunctions $\cos(\pi x)$ and $\cos(\pi y)$ of the Laplace-Beltrami
operator on the square with Neumann boundary conditions, that is,
$\displaystyle\Phi_{1}^{*}(x,y)=(\cos(\pi x),\cos(\pi y)).$ (4)
As shown in Figure 3, $\Phi_{1}^{*}$ maps the discs along the diagonals to
other discs. The discs along the horizontal and vertical lines through the
center are mapped to ellipses. The skewness of these ellipses increases as we
move closer to the middle of the edges of the unit square. Thus, the
distortion of $\Phi_{1}^{*}$ is low on the discs along the diagonals and high
on the discs close to the middle of the edges of the square.
Now, consider a different mapping based on another set of eigenfunctions,
$\displaystyle\Phi_{2}^{*}(x,y)$ $\displaystyle=(\cos(5\pi x),\cos(5\pi y)).$
(5)
Compared to $\Phi_{1}^{*}$, $\Phi_{2}^{*}$ produces almost no distortion on
the discs of radius $0.01$ centered at $(0.1,0.5)$ and $(0.9,0.5)$ (see Figure
3). Therefore, in order to achieve low distortion, it seems sensible to
construct local mappings for different regions based on different sets of
eigenfunctions.
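The contrast between the two regions can be checked numerically. The sketch below (illustrative names; not the algorithm of Section 3) samples a radius-$0.01$ disc, applies $\Phi_{1}^{*}$ from Eq. (4), and estimates the distortion of Eq. (1) empirically: it is close to $1$ for a disc centered on the diagonal and substantially larger for a disc near the middle of an edge.

```python
import numpy as np

def disc_distortion(center, phi, r=0.01, n=200, seed=0):
    """Empirical distortion (Eq. 1) of phi on a radius-r disc around
    `center`, using the flat metric of the square as geodesic distance."""
    rng = np.random.default_rng(seed)
    ang = rng.uniform(0.0, 2*np.pi, n)
    rad = r * np.sqrt(rng.uniform(0.0, 1.0, n))       # uniform on the disc
    X = center + np.c_[rad*np.cos(ang), rad*np.sin(ang)]
    Y = phi(X)
    iu = np.triu_indices(n, k=1)
    dX = np.linalg.norm(X[:, None] - X[None, :], axis=-1)[iu]
    dY = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)[iu]
    ratio = dY / dX
    return ratio.max() / ratio.min()

phi1 = lambda X: np.c_[np.cos(np.pi*X[:, 0]), np.cos(np.pi*X[:, 1])]  # Eq. (4)
d_diag = disc_distortion(np.array([0.3, 0.3]), phi1)   # disc on the diagonal
d_edge = disc_distortion(np.array([0.5, 0.02]), phi1)  # near the middle of an edge
```

Here `d_diag` stays close to $1$ while `d_edge` is much larger, matching the behavior shown in Figure 3.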
The following result from [25] substantiates the above claim, as it shows that, for
a given small neighborhood on a Riemannian manifold, there always exists a
subset of Laplacian eigenfunctions such that a local parameterization based on
this subset is bilipschitz and has bounded distortion. A more precise
statement follows.
###### Theorem 1 ([25], Theorem 2.2.1).
Let $(\mathcal{M},g)$ be a $d$-dimensional Riemannian manifold. Let
$\Delta_{g}$ be the Laplace-Beltrami operator on it with Dirichlet or Neumann
boundary conditions and let $\phi_{i}$ be an eigenfunction of $\Delta_{g}$
with eigenvalue $\lambda_{i}$. Assume that $|\mathcal{M}|=1$ where
$|\mathcal{M}|$ is the volume of $\mathcal{M}$ and the uniform ellipticity
conditions for $\Delta_{g}$ are satisfied. Let $x_{k}\in\mathcal{M}$ and
$r_{k}$ be less than the injectivity radius at $x_{k}$ (the maximum radius
where the exponential map is a diffeomorphism). Then, there exists a
constant $\kappa>1$ which depends on $d$ and the metric tensor $g$ such that
the following hold. Let $\rho\leq r_{k}$ and $B_{k}\equiv
B_{\kappa^{-1}\rho}(x_{k})$ where
$\displaystyle B_{\epsilon}(x)$ $\displaystyle=\\{y\in\mathcal{M}\ |\
d_{g}(x,y)<\epsilon\\}.$ (6)
Then there exist $i_{1},i_{2},\ldots,i_{d}$ such that, if we let
$\displaystyle\gamma_{ki}=\left(\frac{\int_{B_{k}}\phi_{i}^{2}(y)dy}{|B_{k}|}\right)^{-1/2}$
(7)
then the map
$\displaystyle\Phi_{k}:B_{k}$ $\displaystyle\rightarrow\mathbb{R}^{d}$
$\displaystyle x$
$\displaystyle\rightarrow(\gamma_{ki_{1}}\phi_{i_{1}}(x),\ldots,\gamma_{ki_{d}}\phi_{i_{d}}(x))$
(8)
is bilipschitz such that for any $y_{1},y_{2}\in B_{k}$ it satisfies
$\displaystyle\frac{\kappa^{-1}}{\rho}d_{g}(y_{1},y_{2})\leq\left\|\Phi_{k}(y_{1})-\Phi_{k}(y_{2})\right\|\leq\frac{\kappa}{\rho}d_{g}(y_{1},y_{2}),$
(9)
where the associated eigenvalues satisfy
$\displaystyle\kappa^{-1}\rho^{-2}\leq\lambda_{i_{1}},\ldots,\lambda_{i_{d}}\leq\kappa\rho^{-2},$
(10)
and the distortion is bounded from above by $\kappa^{2}$ i.e.
$\displaystyle\sup_{\begin{subarray}{c}y_{1},y_{2}\in B_{k}\\\ y_{1}\neq
y_{2}\end{subarray}}\frac{\left\|\Phi_{k}(y_{1})-\Phi_{k}(y_{2})\right\|}{d_{g}(y_{1},y_{2})}\sup_{\begin{subarray}{c}y_{1},y_{2}\in
B_{k}\\\ y_{1}\neq
y_{2}\end{subarray}}\frac{d_{g}(y_{1},y_{2})}{\left\|\Phi_{k}(y_{1})-\Phi_{k}(y_{2})\right\|}\leq\frac{\kappa}{\rho}\frac{\rho}{\kappa^{-1}}=\kappa^{2}.$
(11)
Motivated by the above result, we adopt the form of the local parameterizations
$\Phi_{k}$ in Eq. (8) as local mappings in our work. The main challenge then
is to identify the set of eigenfunctions for a given neighborhood such that
the resulting parameterization produces low distortion on it. The existence
proof of the above theorem by the authors of [25] suggests a procedure to
identify this set in the continuous setting. Below, we provide a sketch of
their procedure and in Section 3 we describe our discrete realization of it.
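In the discrete setting, Eqs. (7) and (8) translate naturally: the eigenfunctions become eigenvectors of the graph Laplacian and the integral over $B_{k}$ becomes a mean over the points falling in the ball. A minimal sketch of this translation (the variable names are ours; how to choose the eigenvectors is the subject of Section 3):

```python
import numpy as np

def local_param(Phi, ball, idx):
    """Discrete analogue of Eq. (8): embed the points of a ball B_k using
    chosen eigenvectors, each scaled by gamma_ki as in Eq. (7).

    Phi  : (n, N) matrix whose columns are graph-Laplacian eigenvectors
    ball : indices of the points in B_k
    idx  : the d chosen eigenvector indices (i_1, ..., i_d)
    """
    local = Phi[np.asarray(ball)][:, np.asarray(idx)]   # (|B_k|, d)
    gamma = np.mean(local**2, axis=0) ** -0.5           # Eq. (7), discrete
    return local * gamma                                # (|B_k|, d) local view
```

By construction each output coordinate has unit mean square over the ball, which is exactly the normalization Eq. (7) enforces.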
### 2.1 Eigenfunction Selection in the Continuous Setting
Before describing the procedure used in [25] to choose the eigenfunctions, we
first provide some intuition about the desired properties for the chosen
eigenfunctions $\phi_{i_{1}},\ldots,\phi_{i_{d}}$ so that the resulting
parameterization $\Phi_{k}$ has low distortion on $B_{k}$.
Consider the simple case of $B_{k}$ representing a small open ball of radius
$\kappa^{-1}\rho$ around $x_{k}$ in $\mathbb{R}^{d}$ equipped with the
standard Euclidean metric. Then the first-order Taylor approximation of
$\Phi_{k}(x)$, $x\in B_{k}$, about $x_{k}$ is given by
$\displaystyle\Phi_{k}(x)$
$\displaystyle\approx\Phi_{k}(x_{k})+J(x-x_{k})\text{ where
}J=[\gamma_{ki_{1}}\nabla\phi_{i_{1}}(x_{k})\ldots\gamma_{ki_{d}}\nabla\phi_{i_{d}}(x_{k})]^{T}.$
(12)
Note that $\gamma_{ki_{s}}$ are positive scalars constant with respect to $x$.
Now, $\text{Distortion}(\Phi_{k},B_{k})=1$ if and only if $\Phi_{k}$ preserves
distances between points in $B_{k}$ up to a constant scale (see Eq. (1)). That
is,
$\displaystyle\left\|\Phi_{k}(x)-\Phi_{k}(y)\right\|_{2}=c\left\|x-y\right\|_{2}\
\forall x,y\in B_{k}\text{ and for some constant }c>0.$ (13)
Using the first-order approximation of $\Phi_{k}$ we get,
$\displaystyle\left\|J(x-y)\right\|_{2}\approx c\left\|x-y\right\|_{2}\
\forall x,y\in B_{k}\text{ and for some constant }c>0.$ (14)
Therefore, for $\Phi_{k}$ to have low distortion, $J$ must approximately behave like a
similarity transformation; that is, $J$ needs to be approximately
orthogonal up to a constant scale. In other words, the chosen eigenfunctions
should be such that $\gamma_{ki_{1}}\nabla\phi_{i_{1}}(x_{k}),$ $\ldots,$
$\gamma_{ki_{d}}\nabla\phi_{i_{d}}(x_{k})$ are close to being orthogonal and
have similar lengths.
The same intuition holds in the manifold setting too. The construction
procedure described in [25] aims to choose eigenfunctions such that
1. (a)
they are close to being locally orthogonal, that is,
$\nabla\phi_{i_{1}}(x_{k}),\ldots,\nabla\phi_{i_{d}}(x_{k})$ are approximately
orthogonal, and
2. (b)
that their local scaling factors
$\gamma_{ki_{s}}\left\|\nabla\phi_{i_{s}}(x_{k})\right\|_{2}$ are close to
each other.
Note. Throughout this paper, we use the convention
$\nabla\phi_{i}(x_{k})=\nabla(\phi_{i}\ \circ\ \exp_{x_{k}})(0)$ where
$\exp_{x_{k}}$ is the exponential map at $x_{k}$. Therefore,
$\nabla\phi_{i}(x_{k})$ can be represented by a $d$-dimensional vector in a
given $d$-dimensional orthonormal basis of $T_{x_{k}}\mathcal{M}$. Even though
the representations of these vectors depend on the choice of the orthonormal
basis, the value of the canonical inner product between these vectors, and
therefore the $2$-norm of the vectors, are the same across different bases.
This follows from the fact that an orthogonal transformation preserves the
inner product.
###### Remark 1.
Based on the above first order approximation, one may take our local mappings
$\Phi_{k}$ to also be projections onto the tangential spaces. However, unlike
LTSA [49] where the basis of the tangential space is estimated by the local
principal directions, in our case it is estimated by the locally orthogonal
gradients of the global eigenfunctions of the Laplacian. Therefore, LTSA
relies only on the local structure to estimate the tangential space while, in
a sense, our method makes use of both local and global structure of the
manifold.
A high level overview of the procedure presented in [25] to choose
eigenfunctions which satisfy the properties in (a) and (b) follows.
1. 1.
A set $S_{k}$ of the indices of candidate eigenfunctions is chosen such that
$i\in S_{k}$ if the length of $\gamma_{ki}\nabla\phi_{i}(x_{k})$ is bounded
from above by a constant, say $C$.
2. 2.
A direction $p_{1}\in T_{x_{k}}\mathcal{M}$ is selected at random.
3. 3.
Subsequently $i_{1}\in S_{k}$ is selected so that
$\gamma_{ki_{1}}|\nabla\phi_{i_{1}}(x_{k})^{T}p_{1}|$ is sufficiently large.
This motivates $\gamma_{ki_{1}}\nabla\phi_{i_{1}}(x_{k})$ to be approximately
in the same direction as $p_{1}$ and the length of it to be close to the upper
bound $C$.
4. 4.
Then, a recursive strategy follows. To find the $s$-th eigenfunction for
$s\in\\{2,\ldots,d\\}$, a direction $p_{s}\in T_{x_{k}}\mathcal{M}$ is chosen
such that it is orthogonal to
$\nabla\phi_{i_{1}}(x_{k}),\ldots,\nabla\phi_{i_{s-1}}(x_{k})$.
5. 5.
Subsequently, $i_{s}\in S_{k}$ is chosen so that
$\gamma_{ki_{s}}|\nabla\phi_{i_{s}}(x_{k})^{T}p_{s}|$ is sufficiently large.
Again, this motivates $\gamma_{ki_{s}}\nabla\phi_{i_{s}}(x_{k})$ to be
approximately in the same direction as $p_{s}$ and the length of it to be
close to the upper bound $C$.
Since $p_{s}$ is orthogonal to
$\nabla\phi_{i_{1}}(x_{k}),\ldots,\nabla\phi_{i_{s-1}}(x_{k})$ and the
direction of $\gamma_{ki_{s}}\nabla\phi_{i_{s}}$ is approximately the same as
$p_{s}$, property $(a)$ is satisfied. Since for all $s\in\\{1,\ldots,d\\}$,
$\gamma_{ki_{s}}\nabla\phi_{i_{s}}(x_{k})$ has a length close to the upper
bound $C$, property $(b)$ is also satisfied. The core of their work lies in
proving that these $\phi_{i_{1}},\ldots,\phi_{i_{d}}$ always exist under the
assumptions of the theorem such that the resulting parameterization $\Phi_{k}$
has bounded distortion (see Eq. (11)). This bound depends on the intrinsic
dimension $d$ and the natural geometric properties of the manifold. The main
challenge in practically realizing the above procedure lies in the estimation
of $\nabla\phi_{i_{s}}(x_{k})^{T}p_{s}$. In Section 3, we overcome this
challenge.
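Assuming the gradient inner products $\nabla\phi_{i}(x_{k})^{T}\nabla\phi_{j}(x_{k})$ are available as a Gram matrix (their estimation is the subject of Section 3), steps 1-5 above can be sketched as a greedy loop. This is a simplified illustration rather than the exact procedure of [25]; gradient coordinates are recovered from the Gram matrix up to a rotation, which is all the procedure needs.

```python
import numpy as np

def greedy_select(G, gamma, d, C=np.inf, seed=0):
    """Greedy sketch of steps 1-5: G[i, j] ~ grad phi_i(xk)^T grad phi_j(xk),
    gamma[i] ~ gamma_ki, d = intrinsic dimension, C = length bound (step 1)."""
    rng = np.random.default_rng(seed)
    lam, V = np.linalg.eigh(G)                    # symmetric PSD square root of G
    grads = (V * np.sqrt(np.clip(lam, 0, None))) @ V.T  # row i ~ grad phi_i(xk)
    norms = np.linalg.norm(grads, axis=1)
    S = np.where(gamma * norms <= C)[0]           # step 1: candidate set S_k
    p = rng.standard_normal(G.shape[0])           # step 2: random direction
    chosen = []
    for _ in range(d):
        score = gamma[S] * np.abs(grads[S] @ p)   # steps 3/5: large projection
        chosen.append(S[np.argmax(score)])
        # step 4: next direction orthogonal to the already-chosen gradients
        Q, _ = np.linalg.qr(grads[chosen].T)
        p = p - Q @ (Q.T @ p)
    return chosen
```

Once a gradient is chosen, its projection score in later iterations vanishes, so the selected gradients come out approximately mutually orthogonal.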
## 3 Low-dimensional Low Distortion Local Parameterization
In the procedure to choose $\phi_{i_{1}},\ldots,\phi_{i_{d}}$ to construct
$\Phi_{k}$ as described above, the selection of the first eigenfunction
$\phi_{i_{1}}$ relies on the derivative of the eigenfunctions at $x_{k}$ along
an arbitrary direction $p_{1}\in T_{x_{k}}\mathcal{M}$, that is, on
$\nabla\phi_{i}(x_{k})^{T}p_{1}$. In our algorithmic realization of the
construction procedure, we take $p_{1}$ to be the gradient of an eigenfunction
at $x_{k}$ itself (say $\nabla\phi_{j}(x_{k})$). We relax the unit norm
constraint on $p_{1}$; note that this will neither affect the math nor the
output of our algorithm. Then the selection of $\phi_{i_{1}}$ would depend on
the inner products $\nabla\phi_{i}(x_{k})^{T}\nabla\phi_{j}(x_{k})$. The value
of this inner product does not depend on the choice of the orthonormal basis
for $T_{x_{k}}\mathcal{M}$. We discuss several ways to obtain a numerical
estimate of this inner product by making use of the local correlation between
the eigenfunctions [42, 16]. These estimates are used to select the subsequent
eigenfunctions too.
In Section 3.1, we first review the local correlation between the
eigenfunctions of the Laplacian. In Theorem 2 we show that the limiting value
of the scaled local correlation between two eigenfunctions equals the inner
product of their gradients. We provide two proofs of the theorem where each
proof leads to a numerical procedure described in Section 3.2, followed by
examples to empirically compare the estimates. Finally, in Section 3.3, we use
these estimates to obtain low distortion local parameterizations of the
underlying manifold.
### 3.1 Inner Product of Eigenfunction Gradients using Local Correlation
Let $(\mathcal{M},g)$ be a $d$-dimensional Riemannian manifold with or without
boundary, rescaled so that $|\mathcal{M}|\leq 1$. Denote the volume element at
$y$ by $\omega_{g}(y)$. Let $\phi_{i}$ and $\phi_{j}$ be the eigenfunctions of
the Laplacian operator $\Delta_{g}$ (see statement of Theorem 1) with
eigenvalues $\lambda_{i}$ and $\lambda_{j}$. Let $x_{k}\in\mathcal{M}$ and
define
$\displaystyle\Psi_{kij}(y)=(\phi_{i}(y)-\phi_{i}(x_{k}))(\phi_{j}(y)-\phi_{j}(x_{k})).$
(15)
Then the local correlation between the two eigenfunctions $\phi_{i}$ and
$\phi_{j}$ at the point $x_{k}$ at scale $t_{k}^{1/2}$, as defined in [42, 16],
is given by
$\displaystyle
A_{kij}=\int_{\mathcal{M}}p(t_{k},x_{k},y)\Psi_{kij}(y)\omega_{g}(y),$ (16)
where $p(t,x,y)$ is the fundamental solution of the heat equation on
$(\mathcal{M},g)$. As noted in [42], for $(t_{k},x_{k})\in\mathbb{R}_{\geq
0}\times\mathcal{M}$ fixed, we have
$\displaystyle
p(t_{k},x_{k},y)\sim\begin{cases}t_{k}^{-d/2}&d_{g}(x_{k},y)\leq
t_{k}^{1/2}\\\
0&\text{otherwise}\end{cases}\qquad\text{and}\qquad\int_{\mathcal{M}}p(t_{k},x_{k},y)\omega_{g}(y)=1.$
(17)
Therefore, $p(t_{k},x_{k},\cdot)$ acts as a local probability measure centered
at $x_{k}$ with scale $t_{k}^{1/2}$ (see Eq. (67) in Appendix A for a precise
form of $p$). We define the scaled local correlation to be the ratio of the
local correlation $A_{kij}$ and a factor of $2t_{k}$.
###### Theorem 2.
Denote the limiting value of the scaled local correlation by
$\widetilde{A}_{kij}$,
$\displaystyle\widetilde{A}_{kij}=\lim_{t_{k}\rightarrow
0}\frac{A_{kij}}{2t_{k}}$ (18)
Then $\widetilde{A}_{kij}$ equals the inner product of the gradients of the
eigenfunctions $\phi_{i}$ and $\phi_{j}$ at $x_{k}$, that is,
$\displaystyle\widetilde{A}_{kij}=\nabla\phi_{i}(x_{k})^{T}\nabla\phi_{j}(x_{k}).$
(19)
Two proofs are provided in Appendix A and B. A brief summary is provided
below.
Proof 1. In the first proof we choose a sufficiently small $\epsilon_{k}$ and
show that
$\displaystyle\lim_{t_{k}\rightarrow 0}A_{kij}$
$\displaystyle=\lim_{t_{k}\rightarrow
0}\int_{B_{\epsilon_{k}}(x_{k})}G(t_{k},x_{k},y)\Psi_{kij}(y)\omega_{g}(y)$
(20)
where $B_{\epsilon}(x)$ is defined in Eq. (6) and
$\displaystyle G(t,x,y)$ $\displaystyle=\frac{e^{-d_{g}(x,y)^{2}/4t}}{(4\pi
t)^{d/2}}.$ (21)
Then, by using the properties of the exponential map at $x_{k}$ and applying
basic techniques in calculus, we show that $\lim_{t_{k}\rightarrow
0}A_{kij}/2t_{k}$ evaluates to
$\nabla\phi_{i}(x_{k})^{T}\nabla\phi_{j}(x_{k})$.
Proof 2. In the second proof, as in [41, 42], we use the Feynman-Kac formula,
$\displaystyle A_{kij}$
$\displaystyle=[e^{-t_{k}\Delta_{g}}((\phi_{i}-\phi_{i}(x_{k}))(\phi_{j}-\phi_{j}(x_{k})))](x_{k})$
(22)
and note that
$\displaystyle\lim_{t_{k}\rightarrow
0}\frac{A_{kij}}{2t_{k}}=\left.\frac{1}{2}\frac{\partial A_{kij}}{\partial
t_{k}}\right|_{t_{k}=0}=\frac{-1}{2}\left\\{\Delta_{g}[(\phi_{i}-\phi_{i}(x_{k}))(\phi_{j}-\phi_{j}(x_{k}))](x_{k})\right\\}.$
(23)
Then, by applying the formula of the Laplacian of the product of two
functions, we show that the above equation equals
$\nabla\phi_{i}(x_{k})^{T}\nabla\phi_{j}(x_{k})$.
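The identity behind Proof 2 can be checked symbolically in one dimension. The sketch below is ours, not part of the paper's pipeline: it takes two sine modes as stand-in eigenfunctions, uses the geometer's sign convention $\Delta_{g}=-d^{2}/dy^{2}$, and verifies that $-\frac{1}{2}\Delta_{g}\Psi_{kij}$ evaluated at $y=x$ equals the product of the derivatives, as in Eq. (19) and Eq. (23).

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Stand-in 1-D "eigenfunctions"; any C^2 functions work for this identity
i, j = 3, 8
phi_i = sp.sin(i * sp.pi * y)
phi_j = sp.sin(j * sp.pi * y)

# Psi_kij(y) = (phi_i(y) - phi_i(x)) * (phi_j(y) - phi_j(x))
Psi = (phi_i - phi_i.subs(y, x)) * (phi_j - phi_j.subs(y, x))

# Eq. (23): -1/2 * Delta_g Psi at y = x, with Delta_g = -d^2/dy^2
lhs = sp.Rational(-1, 2) * (-sp.diff(Psi, y, 2)).subs(y, x)

# Eq. (19): inner product of the gradients at x
rhs = sp.diff(phi_i, y).subs(y, x) * sp.diff(phi_j, y).subs(y, x)

assert sp.simplify(lhs - rhs) == 0
```

The check works because both factors of $\Psi_{kij}$ vanish at $y=x$, so only the cross term $2\phi_{i}^{\prime}\phi_{j}^{\prime}$ of the product rule survives.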
### 3.2 Estimate of $\widetilde{A}_{kij}$ in the Discrete Setting
To apply Theorems 1 and 2 in practice on data, we need an estimate of
$\widetilde{A}_{kij}$ in the discrete setting. There are several ways to
obtain this estimate. A generic way is by using the algorithms [9, 1] based on
Local Linear Regression (LLR) to estimate the gradient vector
$\nabla\phi_{i}(x_{k})$ itself from the values of $\phi_{i}$ in a neighborhood of
$x_{k}$. An alternative approach is to use a finite sum approximation of Eq.
(20) combined with Eq. (18). A third approach is based on the Feynman-Kac
formula where we make use of Eq. (23) in the discrete setting. In the
following we explain the latter two approaches.
#### 3.2.1 Finite sum approximation
Let $(x_{k})_{k=1}^{n}$ be uniformly distributed points on $(\mathcal{M},g)$.
Let $d_{e}(x_{k},x_{k^{\prime}})$ be the distance between $x_{k}$ and
$x_{k^{\prime}}$. The accuracy with which $\widetilde{A}_{kij}$ can be
estimated mainly depends on how well $d_{e}(\cdot\ ,\ \cdot)$ approximates the
local geodesic distances. For simplicity, we take $d_{e}(x_{k},x_{k^{\prime}})$
to be the Euclidean distance $\left\|x_{k}-x_{k^{\prime}}\right\|_{2}$. A more
accurate estimate of the local geodesic distances can be computed using the
method described in [29].
We construct a sparse unnormalized graph Laplacian $L$ using Algo. 1, where
the weight matrix $K$ of the graph edges is defined using the Gaussian kernel.
The bandwidth of the Gaussian kernel is set using the local scale of the
neighborhoods around each point as in self-tuning spectral clustering [47].
Let $\bm{\phi}_{i}$ be the $i$th non-trivial eigenvector of $L$ and denote
$\phi_{i}(x_{j})$ by $\bm{\phi}_{ij}$.
Input:
$d_{e}(x_{k},x_{k^{\prime}})_{k,k^{\prime}=1}^{n},k_{\textrm{nn}},k_{\textrm{tune}}\text{
where }k_{\textrm{tune}}\leq k_{\textrm{nn}}$
Output: $L$
1 $\mathcal{N}_{k}\leftarrow$ set of indices of $k_{\textrm{nn}}$ nearest
neighbours of $x_{k}$ based on $d_{e}(x_{k},\cdot)$;
2 $\sigma_{k}\leftarrow d_{e}(x_{k},x_{k^{*}})$ where $x_{k^{*}}$ is the
$k_{\textrm{tune}}$th nearest neighbor of $x_{k}$;
3 $K_{kk}\leftarrow 0,K_{kk^{\prime}}\leftarrow
e^{-d_{e}(x_{k},x_{k^{\prime}})^{2}/\sigma_{k}\sigma_{k^{\prime}}},k^{\prime}\in\mathcal{N}_{k}$;
4 $D_{kk}\leftarrow\sum_{k^{\prime}}K_{kk^{\prime}},\
D_{kk^{\prime}}\leftarrow 0,k\neq k^{\prime}$;
5 $L\leftarrow D-K$;
Algorithm 1 Sparse Unnormalized Graph Laplacian based on [47]
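A minimal dense NumPy sketch of Algo. 1; the function and variable names are ours, and we additionally symmetrize $K$ by averaging (the kNN mask on its own can make $K$ asymmetric, and a symmetric $L$ has real eigenpairs):

```python
import numpy as np

def graph_laplacian(D_e, k_nn, k_tune):
    """Unnormalized graph Laplacian L = D - K (a sketch of Algo. 1).

    D_e : (n, n) pairwise distance matrix d_e(x_k, x_k').
    """
    n = D_e.shape[0]
    # Lines 1-2: k_nn nearest neighbours and self-tuning bandwidths sigma_k
    order = np.argsort(D_e, axis=1)
    nbrs = order[:, 1:k_nn + 1]                  # exclude the point itself
    sigma = D_e[np.arange(n), order[:, k_tune]]  # distance to k_tune-th neighbour
    # Line 3: Gaussian kernel on neighbour pairs, zero diagonal
    K = np.zeros((n, n))
    for k in range(n):
        for kp in nbrs[k]:
            K[k, kp] = np.exp(-D_e[k, kp] ** 2 / (sigma[k] * sigma[kp]))
    K = (K + K.T) / 2  # symmetrize (our choice, not spelled out in Algo. 1)
    # Lines 4-5: degree matrix and Laplacian
    D = np.diag(K.sum(axis=1))
    return D - K
```

By construction every row of $L$ sums to zero, so the constant vector is the trivial eigenvector with eigenvalue zero, and all other eigenvalues are nonnegative.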
We estimate $\widetilde{A}_{kij}$ by evaluating the scaled local correlation
$A_{kij}/2t_{k}$ at a small value of $t_{k}$. The limiting value of $A_{kij}$
is estimated by substituting a small $t_{k}$ in the finite sum approximation
of the integral in Eq. (20). The sum is taken on a discrete ball of a small
radius $\epsilon_{k}$ around $x_{k}$ and is divided by $2t_{k}$ to obtain an
estimate of $\widetilde{A}_{kij}$.
We start by choosing $\epsilon_{k}$ to be the distance to the $k_{\text{lv}}$th
nearest neighbor of $x_{k}$, where $k_{\text{lv}}$ is a hyperparameter with a
small integer value (the subscript lv stands for local view). Thus,
$\displaystyle\epsilon_{k}=\text{distance to the }k_{\text{lv}}\text{th
nearest neighbor of }x_{k}.$ (24)
Then $t_{k}$ is obtained from $\epsilon_{k}$ via
$\displaystyle\sqrt{\text{chi2inv}(p,d)}\sqrt{2t_{k}}=\epsilon_{k}\implies
t_{k}=\frac{1}{2}\frac{\epsilon_{k}^{2}}{\text{chi2inv}(p,d)},$ (25)
where chi2inv is the inverse cdf of the chi-squared distribution with $d$
degrees of freedom evaluated at $p$. We take $p$ to be $0.99$ in our
experiments. The rationale behind the above choice of $t_{k}$ is described in
Appendix C.
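In SciPy terms, Eq. (24)-(25) amount to the following sketch (`local_scale` is our name; `chi2.ppf` is SciPy's inverse cdf of the chi-squared distribution):

```python
import numpy as np
from scipy.stats import chi2

def local_scale(D_e, k_lv, d, p=0.99):
    """epsilon_k (Eq. 24) and t_k (Eq. 25) for every point.

    D_e : (n, n) pairwise distances, k_lv : local-view neighbour count,
    d : intrinsic dimension, p : coverage probability (0.99 in the paper).
    """
    n = D_e.shape[0]
    order = np.argsort(D_e, axis=1)
    eps = D_e[np.arange(n), order[:, k_lv]]  # distance to k_lv-th neighbour
    t = 0.5 * eps ** 2 / chi2.ppf(p, d)      # Eq. (25)
    return eps, t
```

For $d=2$ the denominator has the closed form $\text{chi2inv}(p,2)=-2\ln(1-p)$, so $t_{k}\approx\epsilon_{k}^{2}/18.42$ at $p=0.99$.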
Now define the discrete ball around $x_{k}$ as
$\displaystyle U_{k}$ $\displaystyle=\\{x_{k^{\prime}}\ |\
d_{e}(x_{k},x_{k^{\prime}})\leq\epsilon_{k}\\}.$ (26)
We call $U_{k}$ the $k$th local view of the data in the high dimensional
ambient space. For convenience, denote the estimate of
$G(t_{k},x_{k},x_{k^{\prime}})$ by $G_{kk^{\prime}}$ where $G$ is as in Eq.
(21). Then
$\displaystyle G_{kk^{\prime}}$
$\displaystyle=\begin{cases}\frac{\exp(-d_{e}(x_{k},x_{k^{\prime}})^{2}/4t_{k})}{\sum_{x\in
U_{k}}\exp(-d_{e}(x_{k},x)^{2}/4t_{k})}&\text{if }x_{k^{\prime}}\in
U_{k}-\\{x_{k}\\}\\\ 0&\text{otherwise}.\end{cases}$ (27)
Finally, the estimate of $\widetilde{A}_{kij}$ is given by
$\displaystyle\widetilde{A}_{kij}$
$\displaystyle=\frac{1}{2t_{k}}G_{k}^{T}((\bm{\phi}_{i}-\bm{\phi}_{ik})\odot(\bm{\phi}_{j}-\bm{\phi}_{jk}))$
(28)
where $G_{k}$ is a column vector containing the $k$th row of the matrix $G$
and $\odot$ represents the Hadamard product.
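Putting Eq. (26)-(28) together, a dense NumPy sketch of the finite-sum estimator (the function name and loop structure are ours):

```python
import numpy as np

def local_correlation_estimate(D_e, phi, eps, t):
    """Finite-sum estimate of A_tilde (Eq. 26-28): A[k, i, j] approximates
    grad phi_i(x_k)^T grad phi_j(x_k).

    D_e : (n, n) distances, phi : (n, N) eigenvectors as columns,
    eps, t : per-point scales from Eq. (24)-(25).
    """
    n, N = phi.shape
    A = np.empty((n, N, N))
    for k in range(n):
        in_ball = D_e[k] <= eps[k]                       # U_k, Eq. (26)
        denom = np.exp(-D_e[k, in_ball] ** 2 / (4 * t[k])).sum()
        w = np.zeros(n)
        w[in_ball] = np.exp(-D_e[k, in_ball] ** 2 / (4 * t[k])) / denom
        w[k] = 0.0                                       # G_kk = 0, Eq. (27)
        dphi = phi - phi[k]                              # phi_i - phi_i(x_k), all i
        A[k] = (dphi * w[:, None]).T @ dphi / (2 * t[k])  # Eq. (28)
    return A
```

Computing all $N\times N$ entries at once reuses the kernel weights $G_{k}$, and each slice $A[k]$ is symmetric by construction, mirroring the symmetry of the gradient inner products.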
#### 3.2.2 Estimation based on Feynman-Kac formula
This approach to estimate $\widetilde{A}_{kij}$ is simply the discrete analog
of Eq. (23),
$\displaystyle\widetilde{A}_{kij}=\frac{-1}{2}L_{k}^{T}((\bm{\phi}_{i}-\bm{\phi}_{ik})\odot(\bm{\phi}_{j}-\bm{\phi}_{jk}))$
(29)
where $L_{k}$ is a column vector containing the $k$th row of $L$. A variant of
this approach which results in better estimates in the noisy case uses a low
rank approximation of $L$ using its first few eigenvectors (see Appendix H).
###### Remark 2.
It is not a coincidence that Eq. (28) and Eq. (29) look quite similar. In
fact, if we take $T$ to be a diagonal matrix with $(t_{k})_{k=1}^{n}$ as the
diagonal, then the matrix $T^{-1}(I-G)$ approximates $\Delta_{g}$ in the limit
of $(t_{k})_{k=1}^{n}$ tending to zero. Replacing $L$ with $T^{-1}(I-G)$ and
therefore $L_{k}$ with $(e_{k}-G_{k})/t_{k}$ reduces Eq. (29) to Eq. (28).
Here $e_{k}$ is a column vector with $k$th entry as $1$ and rest zeros.
Therefore the two approaches are the same in the limit.
###### Remark 3.
The above two approaches can also be generalized to compute $\nabla
f_{i}(x_{k})^{T}\nabla f_{j}(x_{k})$ for arbitrary $\mathcal{C}^{2}$ mappings
$f_{i}$ and $f_{j}$ from $\mathcal{M}$ to $\mathbb{R}$ ( $\nabla
f_{i}(x_{k})=\nabla(f_{i}\ \circ\ \text{exp}_{x_{k}})(0)$ as per our
convention). To achieve this, simply replace $\phi_{i}$ and $\phi_{j}$ with
$f_{i}$ and $f_{j}$ in Eq. (28) and Eq. (29).
Figure 4: Comparison of different approaches to estimate $\widetilde{A}_{kij}$
in the discrete setting.
##### Example.
This example will follow us throughout the paper. Consider a square grid
$[0,1]\times[0,1]$ with a spacing of $0.01$ in both $x$ and $y$ direction.
With $k_{\textrm{nn}}=49$, $k_{\textrm{tune}}=7$ and
$d_{e}(x_{k},x_{k^{\prime}})=\left\|x_{k}-x_{k^{\prime}}\right\|_{2}$ as input
to the Algo. 1, we construct the graph Laplacian $L$. Using
$k_{\text{lv}}=25$, $d=2$ and $p=0.99$, we obtain the discrete balls $U_{k}$
and $t_{k}$. The $3$rd and $8$th eigenvectors of $L$ and the corresponding
analytical eigenfunctions are then obtained. The analytical value of
$\widetilde{A}_{k38}$ is displayed in Figure 4, followed by its estimate using
LLR [9], finite sum approximation and Feynman-Kac formula based approaches.
The analytical and the estimated values are normalized by
$\max_{k}\widetilde{A}_{kij}$ to bring them to the same scale. The absolute
error due to these approaches are shown below the estimates.
Even though, in this example, the Feynman-Kac formulation seems to have a
larger error, in our experiments no single approach seems to be a clear winner
across all the examples. This becomes clear in Appendix H, where we provide a
comparison of these approaches on a noiseless and a noisy Swiss Roll. The
results shown in this paper are based on finite sum approximation to estimate
$\widetilde{A}_{kij}$.
### 3.3 Low Distortion Local Parameterization from Laplacian Eigenvectors
We use $\nabla\phi_{i}\equiv\nabla\phi_{i}(x_{k})$ for brevity. Using the
estimates of $\widetilde{A}_{kij}$, we now present an algorithmic construction
of low distortion local parameterization $\Phi_{k}$ which maps $U_{k}$ into
$\mathbb{R}^{d}$. The pseudocode is provided below followed by a full
explanation of the steps and a note on the hyperparameters. Before moving
forward, it would be helpful for the reader to review the construction
procedure in the continuous setting in Section 2.1.
Input: $L,N,k_{\text{lv}},d,p,(\tau_{s},\delta_{s})_{s=1}^{d}$
Output: $(\Phi_{k},U_{k},\zeta_{kk})_{k=1}^{n}$
1 Compute $(\bm{\phi}_{i})_{i=1}^{N},\lambda_{1}\leq\ldots\leq\lambda_{N}$ by
eigendecomposition of $L$;
2 for $k\leftarrow 1$ to $n$ do
3 Compute $U_{k},(\widetilde{A}_{kij})_{i,j=1}^{N}$ (Eq. (26, 28));
4 Compute $(\gamma_{ki})_{i=1}^{N}$ (Eq. (30));
5 $\theta_{1}\leftarrow\tau_{1}$-percentile of
$(\widetilde{A}_{kii})_{i=1}^{N}$;
6 Compute $S_{k}$ (Eq. (31));
7 Compute $i_{1}$ (Eq. (35));
8 for $s\leftarrow 2$ to $d$ do
9 Compute $H^{s}_{kij}$ (Eq. (37));
10 $\theta_{s}\leftarrow\tau_{s}$-percentile of $(H^{s}_{kii})_{i\in S_{k}}$;
11 Compute $i_{s}$ (Eq. (42));
12
13 end for
14
$\Phi_{k}\leftarrow(\gamma_{ki_{1}}\bm{\phi}_{i_{1}},\ldots,\gamma_{ki_{d}}\bm{\phi}_{i_{d}})$
(Eq. (43));
15 Compute $\zeta_{kk}$ (Eq. (45));
16
17 end for
Algorithm 2 BiLipschitz-Local-Parameterization
An estimate of $\gamma_{ki}$ is obtained by the discrete analog of Eq. (7) and
is given by
$\displaystyle\gamma_{ki}=\text{Root-Mean-Square}(\\{\bm{\phi}_{ij}\ |\
x_{j}\in U_{k}\\})^{-1}.$ (30)
##### Step 1. Compute a set $S_{k}$ of candidate eigenvectors for $\Phi_{k}$.
Based on the construction procedure following Theorem 1, we start by computing
a set $S_{k}$ of candidate eigenvectors to construct $\Phi_{k}$ of $U_{k}$.
There is no direct way to retrieve the set $S_{k}$ of the continuous
construction in the discrete setting. Therefore, we make the natural choice of
taking $S_{k}$ to be those of the first $N$
nontrivial eigenvectors $(\bm{\phi}_{i})_{i=1}^{N}$ of $L$, corresponding to
the $N$ smallest eigenvalues $(\lambda_{i})_{i=1}^{N}$, that have a sufficiently
large gradient at $x_{k}$. The large gradient constraint
is required for the numerical stability of our algorithm. Therefore, we set
$S_{k}$ to be,
$\displaystyle S_{k}$ $\displaystyle=\\{i\in\\{1,\ldots,N\\}\ |\
\left\|\nabla\phi_{i}\right\|^{2}\geq\theta_{1}\\}=\\{i\in\\{1,\ldots,N\\}|\
\widetilde{A}_{kii}\geq\theta_{1}\\},$ (31)
where $\theta_{1}$ is $\tau_{1}$-percentile of the set
$(\widetilde{A}_{kii})_{i=1}^{N}$ and the second equality follows from Eq.
(19). Here $N$ and $\tau_{1}\in(0,100)$ are hyperparameters.
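For a fixed $k$, Eq. (30)-(31) reduce to a few array operations (a sketch; `candidate_set` and its argument names are ours, and `A_k` is the $N\times N$ estimate from Eq. (28)):

```python
import numpy as np

def candidate_set(A_k, phi, U_k_mask, tau1=50):
    """gamma_ki (Eq. 30) and the candidate set S_k (Eq. 31) for one point.

    A_k : (N, N) local correlation estimate, phi : (n, N) eigenvectors,
    U_k_mask : boolean mask of the points in U_k, tau1 : percentile in (0, 100).
    """
    # Eq. (30): inverse root-mean-square of each eigenvector over U_k
    gamma = 1.0 / np.sqrt((phi[U_k_mask] ** 2).mean(axis=0))
    # Eq. (31): keep eigenvectors whose squared gradient A_kii is large
    theta1 = np.percentile(np.diag(A_k), tau1)
    S_k = np.flatnonzero(np.diag(A_k) >= theta1)
    return gamma, S_k
```

With $\tau_{1}=50$ this keeps roughly the upper half of the candidates by gradient norm at $x_{k}$.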
##### Step 2. Choose a direction $p_{1}\in T_{x_{k}}\mathcal{M}$.
The unit norm constraint on $p_{1}$ is relaxed. This will neither affect the
math nor the output of our algorithm. Since $p_{1}$ can be arbitrary we take
$p_{1}$ to be the gradient of an eigenvector $r_{1}$, that is
$\nabla\phi_{r_{1}}$. The choice of $r_{1}$ will determine
$\bm{\phi}_{i_{1}}$. To obtain a low frequency eigenvector, $r_{1}$ is chosen
so that the eigenvalue $\lambda_{r_{1}}$ is minimal, therefore
$\displaystyle r_{1}$ $\displaystyle=\mathop{\mathrm{argmin}}\limits_{j\in
S_{k}}\lambda_{j}.$ (32)
##### Step 3. Find $i_{1}\in S_{k}$ such that
$\gamma_{ki_{1}}|\nabla\phi_{i_{1}}^{T}p_{1}|$ is sufficiently large.
Since $p_{1}=\nabla\phi_{r_{1}}$, using Eq. (19), the formula for
$\nabla\phi_{i}^{T}p_{1}$ becomes
$\displaystyle\nabla\phi_{i}^{T}p_{1}$
$\displaystyle=\nabla\phi_{i}^{T}\nabla\phi_{r_{1}}=\widetilde{A}_{kir_{1}}.$
(33)
Then we obtain the eigenvector $\bm{\phi}_{i_{1}}$ so that
$\gamma_{ki_{1}}|\nabla\phi_{i_{1}}^{T}p_{1}|$ is larger than a certain
threshold. We do not know what the value of this threshold would be in the
discrete setting. Therefore, we first define the maximum possible value of
$\gamma_{ki}|\nabla\phi_{i}^{T}p_{1}|$ using Eq. (33) as
$\displaystyle\alpha_{1}=\underset{i\in S_{k}}{\max}\
\gamma_{ki}|\nabla\phi_{i}^{T}p_{1}|=\underset{i\in S_{k}}{\max}\
\gamma_{ki}|\widetilde{A}_{kir_{1}}|.$ (34)
Then we take the threshold to be $\delta_{1}\alpha_{1}$ where
$\delta_{1}\in(0,1]$ is a hyperparameter. Finally, to obtain a low frequency
eigenvector $\bm{\phi}_{i_{1}}$, we choose $i_{1}$ such that
$\displaystyle i_{1}$ $\displaystyle=\mathop{\mathrm{argmin}}\limits_{i\in
S_{k}}\\{\lambda_{i}:\gamma_{ki}|\nabla\phi_{i}^{T}p_{1}|\geq\delta_{1}\alpha_{1}\\}=\mathop{\mathrm{argmin}}\limits_{i\in
S_{k}}\\{\lambda_{i}:\gamma_{ki}|\widetilde{A}_{kir_{1}}|\geq\delta_{1}\alpha_{1}\\}.$
(35)
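Steps 2-3 (Eq. (32)-(35)) for a fixed $k$, as a NumPy sketch with our names:

```python
import numpy as np

def first_coordinate(A_k, gamma, lam, S_k, delta1=0.9):
    """Pick r_1 (Eq. 32) and i_1 (Eq. 35).

    A_k : (N, N) local correlation estimate, gamma : (N,) scales (Eq. 30),
    lam : (N,) eigenvalues, S_k : candidate indices (Eq. 31).
    """
    r1 = S_k[np.argmin(lam[S_k])]               # Eq. (32): lowest-frequency candidate
    score = gamma[S_k] * np.abs(A_k[S_k, r1])   # gamma_ki |A_ki r1| (Eq. 33)
    alpha1 = score.max()                        # Eq. (34)
    ok = S_k[score >= delta1 * alpha1]          # feasible set of Eq. (35)
    return r1, ok[np.argmin(lam[ok])]           # lowest-frequency feasible index
```

Note that $i_{1}=r_{1}$ whenever $\gamma_{kr_{1}}\widetilde{A}_{kr_{1}r_{1}}$ already clears the threshold; the toy case below is rigged so that a different, higher-frequency eigenvector wins.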
After obtaining $\bm{\phi}_{i_{1}}$, we use a recursive procedure to obtain
the $s$-th eigenvector $\bm{\phi}_{i_{s}}$ where $s\in\\{2,\ldots,d\\}$ in
order.
##### Step 4. Choose a direction $p_{s}\in T_{x_{k}}\mathcal{M}$ orthogonal to
$\nabla\phi_{i_{1}},\ldots,\nabla\phi_{i_{s-1}}$.
Again the unit norm constraint will be relaxed with no change in the output.
We are going to take $p_{s}$ to be the component of $\nabla\phi_{r_{s}}$
orthogonal to $\nabla\phi_{i_{1}},\ldots,\nabla\phi_{i_{s-1}}$ for a carefully
chosen $r_{s}$. For convenience, denote by $V_{s}$ the matrix with
$\nabla\phi_{i_{1}},\ldots,\nabla\phi_{i_{s-1}}$ as columns and let
$\mathcal{R}(V_{s})$ be the range of $V_{s}$. Let $\phi_{r_{s}}$ be an
eigenvector such that $\nabla\phi_{r_{s}}\not\in\mathcal{R}(V_{s})$. To find
such an $r_{s}$, we define
$\displaystyle H^{s}_{kij}$
$\displaystyle=\nabla\phi_{i}^{T}(I-V_{s}(V_{s}^{T}V_{s})^{-1}V_{s}^{T})\nabla\phi_{j}$
(36)
$\displaystyle=\widetilde{A}_{kij}-\begin{bmatrix}\widetilde{A}_{kii_{1}}\ldots\widetilde{A}_{kii_{s-1}}\end{bmatrix}\begin{bmatrix}\widetilde{A}_{ki_{1}i_{1}}&\widetilde{A}_{ki_{1}i_{2}}&\ldots&\widetilde{A}_{ki_{1}i_{s-1}}\\\
\widetilde{A}_{ki_{2}i_{1}}&\widetilde{A}_{ki_{2}i_{2}}&\ldots&\widetilde{A}_{ki_{2}i_{s-1}}\\\
\vdots&\vdots&\ddots&\vdots\\\
\widetilde{A}_{ki_{s-1}i_{1}}&\widetilde{A}_{ki_{s-1}i_{2}}&\ldots&\widetilde{A}_{ki_{s-1}i_{s-1}}\end{bmatrix}^{-1}\begin{bmatrix}\widetilde{A}_{ki_{1}j}\\\
\widetilde{A}_{ki_{2}j}\\\ \vdots\\\ \widetilde{A}_{ki_{s-1}j}\end{bmatrix}$
(37)
Note that $H^{s}_{kii}$ is the squared norm of the projection of
$\nabla\phi_{i}$ onto the vector space orthogonal to $\mathcal{R}(V_{s})$.
Clearly $\nabla\phi_{i}\not\in\mathcal{R}(V_{s})$ if and only if
$H^{s}_{kii}>0$. To obtain a low frequency eigenvector $\bm{\phi}_{r_{s}}$
such that $H^{s}_{kr_{s}r_{s}}>0$ we choose
$\displaystyle r_{s}=\mathop{\mathrm{argmin}}\limits_{i\in
S_{k}}\\{\lambda_{i}:H^{s}_{kii}\geq\theta_{s}\\}$ (38)
where $\theta_{s}$ is the $\tau_{s}$-percentile of the set
$\\{H^{s}_{kii}:i\in S_{k}\\}$ and $\tau_{s}\in(0,100)$ is a hyperparameter.
Then we take $p_{s}$ to be the component of $\nabla\phi_{r_{s}}$ which is
orthogonal to $\mathcal{R}(V_{s})$,
$\displaystyle
p_{s}=(I-V_{s}(V_{s}^{T}V_{s})^{-1}V_{s}^{T})\nabla\phi_{r_{s}}.$ (39)
##### Step 5. Find $i_{s}\in S_{k}$ such that
$\gamma_{ki_{s}}|\nabla\phi_{i_{s}}^{T}p_{s}|$ is sufficiently large.
Using Eq. (36, 39), we note that
$\displaystyle\nabla\phi_{i}^{T}p_{s}=H^{s}_{kir_{s}}.$ (40)
To obtain $\bm{\phi}_{i_{s}}$ such that
$\gamma_{ki_{s}}|\nabla\phi_{i_{s}}^{T}p_{s}|$ is greater than a certain
threshold, as in step $3$, we first define the maximum possible value of
$\gamma_{ki}|\nabla\phi_{i}^{T}p_{s}|$ using Eq. (40) as,
$\displaystyle\alpha_{s}=\max_{i\in
S_{k}}\gamma_{ki}|\nabla\phi_{i}^{T}p_{s}|=\max_{i\in
S_{k}}\gamma_{ki}|H^{s}_{kir_{s}}|.$ (41)
Then we take the threshold to be $\delta_{s}\alpha_{s}$ where
$\delta_{s}\in(0,1]$ is a hyperparameter. Finally, to obtain a low frequency
eigenvector $\bm{\phi}_{i_{s}}$ we choose $i_{s}$ such that
$\displaystyle i_{s}$ $\displaystyle=\mathop{\mathrm{argmin}}\limits_{i\in
S_{k}}\\{\lambda_{i}:\gamma_{ki}|\nabla\phi_{i}^{T}p_{s}|\geq\delta_{s}\alpha_{s}\\}=\mathop{\mathrm{argmin}}\limits_{i\in
S_{k}}\\{\lambda_{i}:\gamma_{ki}|H^{s}_{kir_{s}}|\geq\delta_{s}\alpha_{s}\\}.$
(42)
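A useful observation for implementing Eq. (36)-(37) is that $H^{s}_{k}$ never needs the gradients themselves: it is the Schur complement of $\widetilde{A}_{k}$ with respect to the already-chosen indices. A sketch for a fixed $k$ (names ours; the chosen gradients are assumed linearly independent so the inner block is invertible):

```python
import numpy as np

def projected_gram(A_k, chosen):
    """H^s_kij = grad phi_i^T (I - P) grad phi_j, where P projects onto
    span{grad phi_i1, ..., grad phi_i(s-1)}; computed from A_k alone (Eq. 37).

    A_k : (N, N) local correlation estimate, chosen : indices i_1..i_{s-1}.
    """
    B = A_k[np.ix_(chosen, chosen)]        # Gram matrix of the chosen gradients
    C = A_k[chosen, :]                     # cross terms A_k i_m j
    return A_k - C.T @ np.linalg.solve(B, C)
```

In particular $H^{s}_{kii}=0$ exactly for the chosen indices, so previously selected eigenvectors can never be selected again; selecting $r_{s}$ and $i_{s}$ then reuses the same thresholding pattern as Steps 2-3 with $H^{s}_{k}$ in place of $\widetilde{A}_{k}$.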
In the end we obtain a $d$-dimensional parameterization $\Phi_{k}$ of $U_{k}$
given by
$\displaystyle\Phi_{k}$
$\displaystyle\equiv(\gamma_{ki_{1}}\bm{\phi}_{i_{1}},\ldots,\gamma_{ki_{d}}\bm{\phi}_{i_{d}})\
\text{where}$ $\displaystyle\Phi_{k}(x_{k^{\prime}})$
$\displaystyle=(\gamma_{k{i_{1}}}\bm{\phi}_{i_{1}k^{\prime}},\ldots,\gamma_{k{i_{d}}}\bm{\phi}_{i_{d}k^{\prime}})\
\text{and}$ (43) $\displaystyle\Phi_{k}(U_{k})$
$\displaystyle=(\Phi_{k}(x_{k^{\prime}}))_{x_{k^{\prime}}\in U_{k}}.$
We call $\Phi_{k}(U_{k})$ the $k$th local view of the data in the
$d$-dimensional embedding space. It is a matrix with $|U_{k}|$ rows and $d$
columns. Denote the distortion of $\Phi_{k^{\prime}}$ on $U_{k}$ by
$\zeta_{kk^{\prime}}$. Using Eq. (1) we obtain
$\displaystyle\zeta_{kk^{\prime}}$
$\displaystyle=\text{Distortion}(\Phi_{k^{\prime}},U_{k})$ (44)
$\displaystyle=\sup_{\begin{subarray}{c}x_{l},x_{l^{\prime}}\in U_{k}\\\
x_{l}\neq
x_{l^{\prime}}\end{subarray}}\frac{\left\|\Phi_{k^{\prime}}(x_{l})-\Phi_{k^{\prime}}(x_{l^{\prime}})\right\|}{d_{e}(x_{l},x_{l^{\prime}})}\sup_{\begin{subarray}{c}x_{l},x_{l^{\prime}}\in
U_{k}\\\ x_{l}\neq
x_{l^{\prime}}\end{subarray}}\frac{d_{e}(x_{l},x_{l^{\prime}})}{\left\|\Phi_{k^{\prime}}(x_{l})-\Phi_{k^{\prime}}(x_{l^{\prime}})\right\|}.$
(45)
##### Postprocessing.
The obtained local parameterizations are post-processed so as to remove the
anomalous parameterizations having unusually high distortion. We replace the
local parameterization $\Phi_{k}$ of $U_{k}$ by that of a neighbor,
$\Phi_{k^{\prime}}$ where $x_{k^{\prime}}\in U_{k}$, if the distortion
$\zeta_{kk^{\prime}}$ produced by $\Phi_{k^{\prime}}$ on $U_{k}$ is smaller
than the distortion $\zeta_{kk}$ produced by $\Phi_{k}$ on $U_{k}$. If
$\zeta_{kk^{\prime}}<\zeta_{kk}$ for multiple $k^{\prime}$ then we choose the
parameterization which produces the least distortion on $U_{k}$. This
procedure is repeated until no replacement is possible. The pseudocode is
provided below.
Input:
$d_{e}(x_{k},x_{k^{\prime}})_{k,k^{\prime}=1}^{n},(I_{k},\Phi_{k},\zeta_{kk})_{k=1}^{n}$
Output: $(\Phi_{k},\zeta_{kk})_{k=1}^{n}$
1 $N_{\text{replaced}}\leftarrow 1$;
2 while $N_{\text{replaced}}>0$ do
3 $N_{\text{replaced}}\leftarrow 0$;
4 $\Phi^{\text{old}}_{k}\leftarrow\Phi_{k}$ for all $k\in\\{1,\ldots,n\\}$;
5 for $k\leftarrow 1$ to $n$ do
6 Compute $(\zeta_{kk^{\prime}})_{x_{k^{\prime}}\in U_{k}}$ (Eq. (45));
7 $k^{*}\leftarrow\mathop{\mathrm{argmin}}\limits_{x_{k^{\prime}}\in
U_{k}}\zeta_{kk^{\prime}}$;
8 if $k^{*}\neq k$ then
9 $\ \Phi_{k}\leftarrow\Phi_{k^{*}}^{\text{old}};\ \
\zeta_{kk}\leftarrow\zeta_{kk^{*}};\ \ N_{\text{replaced}}\leftarrow
N_{\text{replaced}}+1$;
10
11 end if
12
13 end for
14
15 end while
Algorithm 3 Postprocess-Local-Parameterization
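Algo. 3 is a simple fixed-point loop. The sketch below is ours and abstracts the distortion computation behind a callable `zeta(k, phi)` implementing Eq. (45); we also replace only on strict improvement, a small guard (our choice) against cycling on ties:

```python
def postprocess(Phi, U_idx, zeta):
    """Replace each local parameterization by the least-distorting one among
    its neighbours until nothing improves (a sketch of Algo. 3).

    Phi : list of parameterizations, U_idx[k] : indices of points in U_k,
    zeta(k, phi) : distortion of parameterization phi on U_k (Eq. 45).
    """
    n = len(Phi)
    n_replaced = 1
    while n_replaced > 0:
        n_replaced = 0
        Phi_old = list(Phi)                       # line 4: snapshot Phi^old
        for k in range(n):
            # lines 6-7: best neighbouring parameterization for U_k
            k_star = min(U_idx[k], key=lambda kp: zeta(k, Phi_old[kp]))
            # lines 8-9: replace on strict improvement only
            if zeta(k, Phi_old[k_star]) < zeta(k, Phi[k]):
                Phi[k] = Phi_old[k_star]
                n_replaced += 1
    return Phi
```

Each pass can only lower the multiset of distortions, so the loop terminates; in our experiments it converged in well under the $50$ iterations reported in Section 3.4.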
A note on hyperparameters $N,(\tau_{s},\delta_{s})_{s=1}^{d}$. Generally, $N$
should be small so that the low frequency eigenvectors form the set of
candidate eigenvectors. In almost all of our experiments we take $N$ to be
$100$. The set of $(\tau_{s},\delta_{s})_{s=1}^{d}$ is reduced to two
hyperparameters, one for all $\tau_{s}$’s and one for all $\delta_{s}$’s. As
explained above, $\tau_{s}$ enforces certain vectors to be non-zero and
$\delta_{s}$ enforces certain directional derivatives to be large enough.
Therefore, a small value of $\tau_{s}$ in $(0,100)$ and a large value of
$\delta_{s}$ in $(0,1]$ is suitable. In most of our experiments, we used a
value of $50$ for all $\tau_{s}$ and a value of $0.9$ for all $\delta_{s}$.
Our algorithm is not too sensitive to the values of these hyperparameters.
Other values of $N$, $\tau_{s}$ and $\delta_{s}$ would also result in
embeddings of high visual quality.
##### Example.
We now build upon the example of the square grid at the end of Section 3.2.
The values of the additional inputs are $N=100$, $\tau_{s}=50$ and
$\delta_{s}=0.9$ for all $s\in\\{1,\ldots,d\\}$. Using Algo. 2 and 3 we obtain
$10^{4}$ local views $U_{k}$ and $\Phi_{k}(U_{k})$ where $|U_{k}|=25$ for all
$k$. In the left image of Figure 5, we colored each point $x_{k}$ with the
distortion $\zeta_{kk}$ of the local parameterization $\Phi_{k}$ on $U_{k}$.
The mapped discrete balls $\Phi_{k}(U_{k})$ for some values of $k$ are also
shown in Figure 30 in Appendix H.
Figure 5: Distortion of the obtained local parameterizations when the points
on the boundary are not known (left) versus when they are known a priori
(right). Each point $x_{k}$ is colored by $\zeta_{kk}$ (see Eq. (45)).
###### Remark 4.
Note that the parameterizations of the discrete balls close to the boundary
have higher distortion. This is because the injectivity radius at the points
close to the boundary is low and precisely zero at the points on the boundary.
As a result, the size of the balls around these points exceeds the limit
beyond which Theorem 1 is applicable.
At this point we note the following remark in [25].
###### Remark 5.
As was noted by L. Guibas, when M has a boundary, in the case of Neumann
boundary values, one may consider the “doubled” manifold, and may apply the
result in Theorem 1 for a possibly larger $r_{k}$.
Due to the above remark, assuming that the points on the boundary are known,
we computed the distance matrix for the doubled manifold using the method
described in [27]. Then we recomputed the local parameterizations $\Phi_{k}$
keeping all other hyperparameters the same as before. In the right image of
Figure 5, we colored each point $x_{k}$ with the distortion of the updated
parameterization $\Phi_{k}$ on $U_{k}$. Note the reduction in the distortion
of the parameterizations for the neighborhoods close to the boundary. The
distortion is still high near the corners.
### 3.4 Time Complexity
The combined worst case time complexity of Algo. 1, 2 and 3 is
$O(n(N^{2}(k_{\text{lv}}+d)+k_{\text{lv}}^{3}N_{\text{post}}d))$ where
$N_{\text{post}}$ is the number of iterations it takes to converge in Algo. 3
which was observed to be less than $50$ for all the examples in this paper. It
took about a minute (machine specification: MacOS version 11.4, Apple M1
Chip, $16$GB RAM) to construct the local views in the above example as well as
in all the examples in Section 6.
## 4 Clustering for Intermediate Views
Recall that the discrete balls $U_{k}$ are the local views of the data in the
high dimensional ambient space. In the previous section, we obtained the
mappings $\Phi_{k}$ to construct the local views $\Phi_{k}(U_{k})$ of the data
in the $d$-dimensional embedding space. As discussed in Section 1.2, one can
use the GPA [18, 20, 43] to register these local views to recover a global
embedding. In practice, too many small local views (high $n$ and small
$|U_{k}|$) result in extremely high computational complexity. Moreover, small
overlaps between the local views make their registration susceptible to
errors. Therefore, we perform clustering to obtain $M\ll n$ intermediate
views, $\widetilde{U}_{m}$ and $\widetilde{\Phi}_{m}(\widetilde{U}_{m})$, of
the data in the ambient space and the embedding space, respectively. This
reduces the time complexity and increases the overlaps between the views,
leading to their quick and robust registration.
### 4.1 Notation
Our clustering algorithm is designed so as to ensure low distortion of the
parameterizations $\widetilde{\Phi}_{m}$ on $\widetilde{U}_{m}$. We first
describe the notation used and then present the pseudocode followed by a full
explanation of the steps. Let $c_{k}$ be the index of the cluster $x_{k}$
belongs to. Then the set of points which belong to cluster $m$ is given by
$\displaystyle\mathcal{C}_{m}=\\{x_{k}\ |\ c_{k}=m\\}.$ (46)
Denote by $c_{U_{k}}$ the set of indices of the neighboring clusters of
$x_{k}$. The neighboring points of $x_{k}$ lie in these clusters, that is,
$\displaystyle c_{U_{k}}=\\{c_{k^{\prime}}\ |\ x_{k^{\prime}}\in U_{k}\\}.$
(47)
We say that a point $x_{k}$ lies in the vicinity of a cluster $m$ if $m\in
c_{U_{k}}$. Let $\widetilde{U}_{m}$ denote the $m$th intermediate view of the
data in the ambient space. This constitutes the union of the local views
associated with all the points belonging to cluster $m$, that is,
$\displaystyle\widetilde{U}_{m}=\bigcup_{k:\ x_{k}\in\mathcal{C}_{m}}U_{k}.$
(48)
Clearly, a larger cluster means a larger intermediate view. In particular,
addition of $x_{k}$ to $\mathcal{C}_{m}$ grows the intermediate view
$\widetilde{U}_{m}$ to $\widetilde{U}_{m}\cup U_{k}$,
$\displaystyle\mathcal{C}_{m}\rightarrow\mathcal{C}_{m}\cup\\{x_{k}\\}\implies\widetilde{U}_{m}\rightarrow\widetilde{U}_{m}\cup
U_{k}$ (49)
Let $\widetilde{\Phi}_{m}$ be the $d$-dimensional parameterization associated
with the $m$th cluster. This parameterization maps $\widetilde{U}_{m}$ to
$\widetilde{\Phi}_{m}(\widetilde{U}_{m})$, the $m$th intermediate view of the
data in the embedding space. Note that a point $x_{k}$ generates the local
view $U_{k}$ (see Eq. (26)) which acts as the domain of the parameterization
$\Phi_{k}$. Similarly, a cluster $\mathcal{C}_{m}$ obtained through our
procedure, generates an intermediate view $\widetilde{U}_{m}$ (see Eq. (48))
which acts as the domain of the parameterization $\widetilde{\Phi}_{m}$.
Overall, our clustering procedure replaces the notion of a local view per
individual point with that of an intermediate view per cluster of points.
Input: $(U_{k},\Phi_{k})_{k=1}^{n},\eta_{\text{min}}$
Output:
$(\mathcal{C}_{m},\widetilde{U}_{m},\widetilde{\Phi}_{m})_{m=1}^{M},(c_{k})_{k=1}^{n}$
1 Initialize $c_{k}\leftarrow k$, $\mathcal{C}_{m}\leftarrow\\{x_{m}\\}$,
$\widetilde{\Phi}_{m}\leftarrow\Phi_{m}$ for all $k,m\in\\{1,\ldots,n\\}$;
2 for $\eta\leftarrow 2$ to $\eta_{\text{min}}$ do
3 Compute $b_{m\leftarrow x_{k}}$ for all $m,k\in\\{1,\ldots,n\\}$ (Eq. (47,
48, 50));
4
$m,k\leftarrow\mathop{\mathrm{argmax}}\limits_{m^{\prime},k^{\prime}}b_{m^{\prime}\leftarrow
x_{k^{\prime}}};\ \text{bid}^{*}\leftarrow b_{m\leftarrow x_{k}}$;
5 while $\text{bid}^{*}>0$ do
6 $s\leftarrow c_{k};\ \mathcal{C}_{s}\leftarrow\mathcal{C}_{s}-x_{k};\
c_{k}\leftarrow m;\ \mathcal{C}_{m}\leftarrow\mathcal{C}_{m}\cup x_{k}$;
7 Recompute $b_{m^{\prime}\leftarrow x_{k^{\prime}}}$ for all
$(m^{\prime},k^{\prime})\in\mathcal{S}$ (Eq. (51));
8
$m,k\leftarrow\mathop{\mathrm{argmax}}\limits_{m^{\prime},k^{\prime}}b_{m^{\prime}\leftarrow
x_{k^{\prime}}};\ \text{bid}^{*}\leftarrow b_{m\leftarrow x_{k}}$;
9
10 end while
11
12 end for
13$M\leftarrow$ the number of non-empty clusters;
14 Remove $\mathcal{C}_{m}$, $\widetilde{\Phi}_{m}$ when
$|\mathcal{C}_{m}|=0$, relabel clusters from $1$ to $M$ and update $c_{k}$
with new labels;
15 Compute $(\widetilde{U}_{m})_{m=1}^{M}$ (Eq. (48));
Algorithm 4 Clustering
### 4.2 Low Distortion Clustering
Initially, we start with $n$ singleton clusters where the point $x_{k}$
belongs to the $k$th cluster and the parameterization associated with the
$k$th cluster is $\Phi_{k}$. Thus, $c_{k}=k$, $\mathcal{C}_{m}=\\{x_{m}\\}$
and $\widetilde{\Phi}_{m}=\Phi_{m}$ for all $k,m\in\\{1,\ldots,n\\}$. This
automatically implies that initially $\widetilde{U}_{m}=U_{m}$. The
parameterizations associated with the clusters remain the same throughout the
procedure. During the procedure, each cluster $\mathcal{C}_{m}$ is perceived
as an entity which wants to grow the domain $\widetilde{U}_{m}$ of the
associated parameterization $\widetilde{\Phi}_{m}$ by growing itself (see Eq.
49), while simultaneously keeping the distortion of $\widetilde{\Phi}_{m}$ on
$\widetilde{U}_{m}$ low (see Eq. 45). To achieve that, each cluster
$\mathcal{C}_{m}$ places a careful bid $b_{m\leftarrow x_{k}}$ for each point
$x_{k}$. The global maximum bid is identified and the underlying point $x_{k}$
is relabelled to the bidding cluster, hence updating $c_{k}$. With this
relabelling, the bidding cluster grows and the source cluster shrinks. This
procedure of shrinking and growing clusters is repeated until all non-empty
clusters are large enough, i.e. have a size at least $\eta_{\text{min}}$, a
hyperparameter. In our experiments, we choose $\eta_{\text{min}}$ from
$\\{5,10,15,20,25\\}$. We iterate over $\eta$ which varies from $2$ to
$\eta_{\text{min}}$. In the $\eta$-th iteration, we say that the $m$th cluster
is small if it is non-empty and has a size less than $\eta$, that is, when
$|\mathcal{C}_{m}|\in(0,\eta)$. During the iteration, the clusters either
shrink or grow until no small clusters remain. Therefore, at the end of the
$\eta$-th iteration the non-empty clusters are of size at least $\eta$. After
the last ($\eta_{\text{min}}$th) iteration, each non-empty cluster will have
at least $\eta_{\text{min}}$ points and the empty clusters are pruned away.
##### Bid by cluster $m$ for $x_{k}$.
In the $\eta$-th iteration, we start by computing the bid $b_{m\leftarrow
x_{k}}$ by each cluster $m$ for each point $x_{k}$. The bid function is
designed so as to satisfy the following conditions. The first two conditions
are there to halt the procedure while the last two conditions follow
naturally. These conditions are also depicted in Figure 6.
1. 1.
No cluster bids for the points in large clusters. Since $x_{k}$ belongs to
cluster $c_{k}$, if $|\mathcal{C}_{c_{k}}|\geq\eta$ then
$b_{m\leftarrow x_{k}}$ is zero for all $m$.
2. 2.
No cluster bids for a point in another cluster whose size is bigger than its
own size. Therefore, if $|\mathcal{C}_{m}|<|\mathcal{C}_{c_{k}}|$ then again
$b_{m\leftarrow x_{k}}$ is zero.
3. 3.
A cluster bids for the points in its own vicinity. Therefore, if $m\not\in
c_{U_{k}}$ (see Eq. 47) then $b_{m\leftarrow x_{k}}$ is zero.
4. 4.
Recall that a cluster $m$ aims to grow while keeping the distortion of
associated parameterization $\widetilde{\Phi}_{m}$ low on its domain
$\widetilde{U}_{m}$. If the $m$th cluster acquires the point $x_{k}$,
$\widetilde{U}_{m}$ grows due to the addition of $U_{k}$ to it (see Eq. (48)),
and so does the distortion of $\widetilde{\Phi}_{m}$ on it. Therefore, to
ensure low distortion, the natural bid by $\mathcal{C}_{m}$ for the point
$x_{k}$, $b_{m\leftarrow x_{k}}$, is
$\text{Distortion}(\widetilde{\Phi}_{m},U_{k}\cup\widetilde{U}_{m})^{-1}$ (see
Eq. 45).
Combining the above conditions, we can write the bid by cluster $m$ for the
point $x_{k}$ as,
$\displaystyle b_{m\leftarrow
x_{k}}=\begin{cases}\text{Distortion}(\widetilde{\Phi}_{m},U_{k}\cup\widetilde{U}_{m})^{-1}&\text{if
}|\mathcal{C}_{c_{k}}|\in(0,\eta)\land m\in
c_{U_{k}}\land|\mathcal{C}_{m}|\geq|\mathcal{C}_{c_{k}}|\\\
0&\text{otherwise}.\end{cases}$ (50)
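Eq. (50) translates directly into code once the distortion of Eq. (45) is abstracted behind a callable (a sketch; the names and the data layout are ours):

```python
def bid(m, k, c, C, c_U, distortion_if_added, eta):
    """Bid by cluster m for point x_k (Eq. 50).

    c[k] : cluster of x_k, C[m] : set of points in cluster m,
    c_U[k] : clusters in the vicinity of x_k (Eq. 47),
    distortion_if_added(m, k) : Distortion(Phi_m, U_k ∪ U_m) (Eq. 45).
    """
    if not (0 < len(C[c[k]]) < eta):   # condition 1: x_k's cluster is big enough
        return 0.0
    if m not in c_U[k]:                # condition 3: m not in the vicinity of x_k
        return 0.0
    if len(C[m]) < len(C[c[k]]):       # condition 2: m is smaller than x_k's cluster
        return 0.0
    return 1.0 / distortion_if_added(m, k)   # condition 4: inverse distortion
```

Since the distortion is at least $1$, every nonzero bid lies in $(0,1]$, and higher bids correspond to cluster growth that distorts $\widetilde{\Phi}_{m}$ less.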
In the practical implementation of above equation, $c_{U_{k}}$ and
$\widetilde{U}_{m}$ are computed on the fly using Eq. (47, 48).
Figure 6: Computation of the bid for a point in a small cluster by the
neighboring clusters in $\eta$-th iteration. (left) $x_{k}$ is a point
represented by a small red disc, in a small cluster $c_{k}$ enclosed by solid
red line. The dashed red line encloses $U_{k}$. Assume that the cluster $c_{k}$
is small so that $|\mathcal{C}_{c_{k}}|\in(0,\eta)$. Clusters $m_{1}$,
$m_{2}$, $m_{3}$ and $m_{4}$ are enclosed by solid colored lines too. Note
that $m_{1}$, $m_{2}$ and $m_{3}$ lie in $c_{U_{k}}$ (the nonempty overlap
between these clusters and $U_{k}$ indicates this), while $m_{4}\not\in
c_{U_{k}}$. Thus, the bid by $m_{4}$ for $x_{k}$ is zero. Since the size of
cluster $m_{3}$ is less than the size of cluster $c_{k}$ i.e.
$|\mathcal{C}_{m_{3}}|<|\mathcal{C}_{c_{k}}|$, the bid by $m_{3}$ for $x_{k}$
is also zero. Since clusters $m_{1}$ and $m_{2}$ satisfy all the conditions,
the bids by $m_{1}$ and $m_{2}$ for $x_{k}$ are to be computed. (right) The
bid $b_{m_{1}\leftarrow x_{k}}$, is given by the inverse of the distortion of
$\widetilde{\Phi}_{m_{1}}$ on $U_{k}\cup\widetilde{U}_{m_{1}}$, where the
dashed blue line encloses $\widetilde{U}_{m_{1}}$. If the bid
$b_{m_{1}\leftarrow x_{k}}$ is greater (less) than the bid $b_{m_{2}\leftarrow
x_{k}}$, then the clustering procedure would favor relabelling of $x_{k}$ to
$m_{1}$ ($m_{2}$).
##### Greedy procedure to grow and shrink clusters.
Given the bids by all the clusters for all the points, we grow and shrink the
clusters so that at the end of the current iteration $\eta$, each non-empty
cluster has a size at least $\eta$. We start by picking the global maximum
bid, say $b_{m\leftarrow x_{k}}$. Let $x_{k}$ be in the cluster $s$ (note that
$c_{k}$, the cluster of $x_{k}$, is $s$ before $x_{k}$ is relabelled). We
relabel $c_{k}$ to $m$, and update the set of points in clusters $s$ and $m$,
$\mathcal{C}_{s}$ and $\mathcal{C}_{m}$, using Eq. (46). This implicitly
shrinks $\widetilde{U}_{s}$ and grows $\widetilde{U}_{m}$ (see Eq. 48) and
affects the bids by clusters $m$ and $s$ or the bids for the points in these
clusters. Denote the set of pairs of the indices of all such clusters and the
points by
$\displaystyle\mathcal{S}=\\{(m^{\prime},k^{\prime})\in\\{1,\ldots,n\\}^{2}\
|\ m^{\prime}\in\\{m,s\\}\text{ or
}x_{k^{\prime}}\in\mathcal{C}_{s}\cup\mathcal{C}_{m}\\}.$ (51)
Then the bids $b_{m^{\prime}\leftarrow x_{k^{\prime}}}$ are recomputed for all
$(m^{\prime},k^{\prime})\in\mathcal{S}$. It is easy to verify that for all
other pairs, neither the conditions nor the distortion in Eq. (50) is
affected. After this computation, we again pick the global maximum bid and
repeat the procedure until the maximum bid becomes zero indicating that no
non-empty small cluster remains. This marks the end of the $\eta$-th
iteration.
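The greedy loop above can be sketched as follows; the `relabel` and `recompute` callables are hypothetical stand-ins for the updates of Eqs. (46)-(51).

```python
def greedy_grow_shrink(bids, relabel, recompute):
    """One eta-iteration of the greedy procedure (illustrative sketch).

    bids maps (m, k) -> current bid value; relabel(m, k) moves x_k to
    cluster m and returns the set S of (m', k') pairs (Eq. (51)) whose
    bids must be refreshed; recompute(m, k) re-evaluates Eq. (50).
    All three arguments are assumptions, not the paper's data structures.
    """
    while bids:
        # pick the global maximum bid
        (m, k), b = max(bids.items(), key=lambda item: item[1])
        if b <= 0:   # no non-empty small cluster remains: iteration ends
            break
        affected = relabel(m, k)
        for pair in affected:        # refresh only the affected bids
            bids[pair] = recompute(*pair)
```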
##### Final intermediate views in the ambient and the embedding space.
At the end of the last iteration, all non-empty clusters have at least
$\eta_{\text{min}}$ points. Let $M$ be the number of non-empty clusters. Using
the pigeonhole principle, one can show that $M$ is at most
$n/\eta_{\text{min}}$. We prune away the empty clusters and relabel the non-
empty ones from $1$ to $M$ while updating $c_{k}$ accordingly. With this, we
obtain the clusters $(\mathcal{C}_{m})_{m=1}^{M}$ with associated
parameterizations $(\widetilde{\Phi}_{m})_{m=1}^{M}$. Finally, using Eq. (48),
we obtain the $M$ intermediate views $(\widetilde{U}_{m})_{m=1}^{M}$ of the
data in the ambient space. Then, the intermediate views of the data in the
embedding space are given by
$(\widetilde{\Phi}_{m}(\widetilde{U}_{m}))_{m=1}^{M}$. Note that
$\widetilde{\Phi}_{m}(\widetilde{U}_{m})$ is a matrix with
$|\widetilde{U}_{m}|$ rows and $d$ columns (see Eq. (43)).
##### Example.
We continue with our example of the square grid which originally contained
about $10^{4}$ points. Therefore, before clustering we had about $10^{4}$
small local views $U_{k}$ and $\Phi_{k}(U_{k})$, each containing $25$ points.
After clustering with $\eta_{\text{min}}=10$, we obtained $635$ clusters and
therefore that many intermediate views $\widetilde{U}_{m}$ and
$\widetilde{\Phi}_{m}(\widetilde{U}_{m})$ with an average size of $79$. When
the points on the boundary are known, we obtained $562$ intermediate views
with an average size of $90$. Note that there is a trade-off between the size
of the intermediate views and the distortion of the parameterizations used to
obtain them. For convenience, define $\tilde{\zeta}_{mm}$ to be the distortion
of $\widetilde{\Phi}_{m}$ on $\widetilde{U}_{m}$ using Eq. (45). Then, as the
size of the views is increased (by increasing $\eta_{\text{min}}$), the value of
$\tilde{\zeta}_{mm}$ would also increase. In Figure 7 we colored the points in
cluster $m$, $\mathcal{C}_{m}$, with $\tilde{\zeta}_{mm}$. In other words,
$x_{k}$ is colored by $\tilde{\zeta}_{c_{k}c_{k}}$. Note the increased
distortion in comparison to Figure 5.
Figure 7: Each point $x_{k}$ colored by $\tilde{\zeta}_{c_{k}c_{k}}$ when the
points on the boundary of the square grid are unknown (left) versus when they
are known a priori (right).
### 4.3 Time Complexity
Our practical implementation of Algo. 4 uses memoization for speed-up. It took
about a minute to construct the intermediate views in the above example with
$n=10^{4}$, $k_{\text{lv}}=25$, $d=2$ and $\eta_{\text{min}}=10$, and it took
less than $2$ minutes for all the examples in Section 6. It was empirically
observed that the time for clustering is linear in $n$, $\eta_{\text{min}}$
and $d$ while it is cubic in $k_{\text{lv}}$.
## 5 Global Embedding using Procrustes Analysis
In this section, we present an algorithm based on Procrustes analysis to align
the intermediate views $\widetilde{\Phi}_{m}(\widetilde{U}_{m})$ and obtain a
global embedding. The $M$ views $\widetilde{\Phi}_{m}(\widetilde{U}_{m})$ are
transformed by an orthogonal matrix $T_{m}$ of size $d\times d$, a
$d$-dimensional translation vector $v_{m}$ and a positive scalar $b_{m}$ as a
scaling component. The transformed views are given by
$\widetilde{\Phi}^{g}_{m}(\widetilde{U}_{m})$ such that
$\displaystyle\widetilde{\Phi}^{g}_{m}(x_{k})=b_{m}\widetilde{\Phi}_{m}(x_{k})T_{m}+v_{m}\quad\textrm{for
all }x_{k}\in\widetilde{U}_{m}.$ (52)
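In code, Eq. (52) is a single similarity transform per view; a minimal numpy sketch, assuming the rows of `Phi_m` are the embedded points of $\widetilde{U}_{m}$:

```python
import numpy as np

def transform_view(Phi_m, b_m, T_m, v_m):
    """Apply the similarity transform of Eq. (52) to one intermediate view.

    Phi_m: |U_m| x d matrix of embedded points (one point per row);
    T_m: d x d orthogonal matrix; v_m: length-d translation; b_m: positive
    scale. Returns the transformed view b_m * Phi_m @ T_m + v_m.
    """
    return b_m * Phi_m @ T_m + v_m
```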
First we state a general approach to estimate these parameters, and its
limitations in Section 5.1. Then we present an algorithm in Section 5.2 which
computes these parameters and a global embedding of the data while addressing
the limitations of the general procedure. In Section 5.3 we describe a simple
modification to our algorithm to tear apart closed manifolds. In Appendix F,
we contrast our global alignment procedure with that of LTSA.
Figure 8: (left) The intermediate views $\widetilde{U}_{m}$ and
$\widetilde{U}_{m^{\prime}}$ of a $2$d manifold in a possibly high dimensional
ambient space. These views trivially align with each other. The red star in
blue circles represents their overlap $\widetilde{U}_{mm^{\prime}}$. (middle)
The $m$th and $m^{\prime}$th intermediate views in the $2$d embedding space.
(right) Transformed views after aligning
$\widetilde{\Phi}_{m}(\widetilde{U}_{mm^{\prime}})$ with
$\widetilde{\Phi}_{m^{\prime}}(\widetilde{U}_{mm^{\prime}})$.
### 5.1 General Approach for Alignment
In general, the parameters $(T_{m},v_{m},b_{m})_{m=1}^{M}$ are estimated so
that for all $m$ and $m^{\prime}$, the two transformed views of the overlap
between $\widetilde{U}_{m}$ and $\widetilde{U}_{m^{\prime}}$, obtained using
the parameterizations $\widetilde{\Phi}^{g}_{m}$ and
$\widetilde{\Phi}^{g}_{m^{\prime}}$, align with each other. To be more
precise, define the overlap between the $m$th and the $m^{\prime}$th
intermediate views in the ambient space as the set of points which lie in both
the views,
$\displaystyle\widetilde{U}_{mm^{\prime}}=\widetilde{U}_{m}\cap\widetilde{U}_{m^{\prime}}.$
(53)
In the ambient space, the $m$th and the $m^{\prime}$th views are neighbors if
$\widetilde{U}_{mm^{\prime}}$ is non-empty. As shown in Figure 8 (left), these
neighboring views trivially align on the overlap between them. It is natural
to ask for a low distortion global embedding of the data. Therefore, we must
ensure that the embeddings of $\widetilde{U}_{mm^{\prime}}$ due to the $m$th
and the $m^{\prime}$th view in the embedding space, also align with each
other. Thus, the parameters $(T_{m},v_{m},b_{m})_{m=1}^{M}$ are estimated so
that $\widetilde{\Phi}^{g}_{m}(\widetilde{U}_{mm^{\prime}})$ aligns with
$\widetilde{\Phi}^{g}_{m^{\prime}}(\widetilde{U}_{mm^{\prime}})$ for all $m$
and $m^{\prime}$. However, due to the distortion of the parameterizations it
is usually not possible to perfectly align the two embeddings (see Figure 8).
We can represent both embeddings of the overlap as matrices with
$|\widetilde{U}_{mm^{\prime}}|$ rows and $d$ columns. Then we choose the
measure of the alignment error to be the squared Frobenius norm of the
difference of the two matrices. The error is trivially zero if
$\widetilde{U}_{mm^{\prime}}$ is empty. Overall, the parameters are estimated
so as to minimize the following alignment error
$\displaystyle\mathcal{L}((T_{m},v_{m},b_{m})_{m=1}^{M})=\frac{1}{2M}\sum_{m=1}^{M}\sum_{m^{\prime}=1}^{M}\left\|\widetilde{\Phi}^{g}_{m}(\widetilde{U}_{mm^{\prime}})-\widetilde{\Phi}^{g}_{m^{\prime}}(\widetilde{U}_{mm^{\prime}})\right\|^{2}_{F}.$
(54)
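A direct numerical sketch of Eq. (54); the dictionary-based containers below (`views_g`, `overlaps`) are illustrative assumptions, not the paper's data structures:

```python
import numpy as np

def alignment_error(views_g, overlaps):
    """Alignment error of Eq. (54) (illustrative sketch).

    views_g[m] maps a point index k to its row of Phi^g_m(U_m);
    overlaps[(m, m')] lists the indices in U_{mm'} (Eq. (53)).
    Empty overlaps contribute zero, as in the text.
    """
    M = len(views_g)
    total = 0.0
    for m in range(M):
        for mp in range(M):
            for k in overlaps.get((m, mp), []):
                diff = np.asarray(views_g[m][k]) - np.asarray(views_g[mp][k])
                total += np.sum(diff ** 2)   # squared Frobenius norm, row by row
    return total / (2 * M)
```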
In theory, one can start with a trivial initialization of $T_{m}$, $v_{m}$ and
$b_{m}$ as $I_{d}$, $\mathbf{0}$ and $1$, and directly use GPA [18, 20, 43] to
obtain a local minimum of the above alignment error. This approach has two
issues.
1. Like most optimization algorithms, the rate of convergence to a local minimum
and the quality of the minimum depend on the initialization of the parameters. We
empirically observed that with a trivial initialization of the parameters, GPA
may take a great amount of time to converge and may also converge to an
inferior local minimum.
2.
Using GPA to align a view with all of its adjacent views would prevent us from
tearing apart closed manifolds; as an example see Figure 11.
These issues are addressed in subsequent Sections 5.2 and 5.3, respectively.
### 5.2 GPA Adaptation for Global Alignment
Input:
$(x_{k},c_{k},w)_{k=1}^{n},(\mathcal{C}_{m},\widetilde{\Phi}_{m},\widetilde{U}_{m})_{m=1}^{M}$,
to_tear, $\nu$, $N_{r}$
Output: $(T_{m},b_{m},v_{m})_{m=1}^{M}$
for $\text{Iter}\leftarrow 1$ to $N_{r}+1$ do
  if $\text{Iter}=1$ then
    Initialize $T_{m}\leftarrow I,v_{m}\leftarrow 0$;
    Compute $b_{m}$ (Eq. (55));
    Compute $(s_{m},p_{s_{m}})_{m=1}^{M}$ (Eq. (98, 100) in Appendix D);
    $\mathcal{A}\leftarrow\\{s_{1}\\}$ %The set of already transformed views;
  else
    $(s_{m})_{m=2}^{M}\leftarrow$ random permutation of $(1,\ldots,M)$ excluding $s_{1}$;
  end if
  for $m\leftarrow 2$ to $M$ do
    $s\leftarrow s_{m}$, $p\leftarrow p_{s_{m}}$;
    (Step R1) $T_{s},v_{s}\leftarrow$ Procrustes ($\widetilde{\Phi}^{g}_{p}(\widetilde{U}_{sp})$, $\widetilde{\Phi}^{g}_{s}(\widetilde{U}_{sp})$, No scaling);
    if $\text{to\\_tear}=$ False then
      (Step R2) Compute $\mathcal{Z}_{s}$ (Eq. (56));
    else
      (Step R2) Compute $\mathcal{Z}_{s}$ (Eq. (58));
    end if
    (Step R3) $\mu_{s}\leftarrow$ Centroid of $(\widetilde{\Phi}^{g}_{m^{\prime}}(\widetilde{U}_{sm^{\prime}}))_{m^{\prime}\in\mathcal{Z}_{s}}$;
    (Step R4) $T_{s},v_{s}\leftarrow$ Procrustes ($\mu_{s}$, $\widetilde{\Phi}^{g}_{s}(\cup_{m^{\prime}\in\mathcal{Z}_{s}}\widetilde{U}_{sm^{\prime}})$, No scaling);
    (Step R5) $\mathcal{A}\leftarrow\mathcal{A}\cup\\{s\\}$;
  end for
end for
Compute $(y_{k})_{k=1}^{n}$ (Eq. (57)).
Algorithm 5 Calculate-Global-Embedding
First we look for a better than trivial initialization of the parameters so
that the views are approximately aligned. The idea is to build a rooted tree
where nodes represent the intermediate views. This tree is then traversed in a
breadth first order starting from the root. As we traverse the tree, the
intermediate view associated with a node is aligned with the intermediate view
associated with its parent node (and with a few more views), thus giving a
better initialization of the parameters. Subsequently, we refine these
parameters using a similar procedure involving random order traversal over the
intermediate views.
(a.1) Nine intermediate views $(\widetilde{U}_{s_{m}})_{m=1}^{9}$ of a $2$d manifold with boundary are shown. $\widetilde{U}_{s_{9}}$ has $\widetilde{U}_{s_{7}}$ and $\widetilde{U}_{s_{8}}$ as the neighboring views.
(a.2) In combination with (a.1), nine intermediate views $(\widetilde{U}_{s_{m}})_{m=1}^{9}$ of a closed $2$d manifold are shown. In addition to $\widetilde{U}_{s_{7}}$ and $\widetilde{U}_{s_{8}}$, $\widetilde{U}_{s_{9}}$ also has $\widetilde{U}_{s_{1}}$ as a neighboring view.
(b) The intermediate views $(\widetilde{\Phi}_{s_{m}}(\widetilde{U}_{s_{m}}))_{m=1}^{9}$ in the $2$d embedding space, as they were passed as input to Algo. 5. These views are scrambled in the embedding space and Algo. 5 will move them to the right location.
(c) The transformed views after scaling them using $b_{m}$ as in Eq. (55).
Figure 9: An illustration of the intermediate views in the ambient and the
embedding space as they are passed as input to Algo. 5 and are scaled using
Eq. (55).
##### Initialization ($\text{Iter}=1$, $\text{to\\_tear}=$ False).
In the first outer loop of Algo. 5, we start with $T_{m}=I_{d}$, $v_{m}$ as
the zero vector and compute $b_{m}$ so as to bring the intermediate views
$\widetilde{\Phi}_{m}(\widetilde{U}_{m})$ to the same scale as their
counterpart $\widetilde{U}_{m}$ in the ambient space. In turn this brings all
the views to similar scale (see Figure 9 (c)). We compute the scaling
component $b_{m}$ to be the ratio of the median distance between unique points
in $\widetilde{U}_{m}$ and in $\widetilde{\Phi}_{m}(\widetilde{U}_{m})$, that
is,
$\displaystyle b_{m}=\frac{\text{median}\left\\{d_{e}(x_{k},x_{k^{\prime}})\
|\ x_{k},x_{k^{\prime}}\in\widetilde{U}_{m},x_{k}\neq
x_{k^{\prime}}\right\\}}{\text{median}\left\\{\left\|\widetilde{\Phi}_{m}(x_{k})-\widetilde{\Phi}_{m}(x_{k^{\prime}})\right\|_{2}\
|\ x_{k},x_{k^{\prime}}\in\widetilde{U}_{m},x_{k}\neq
x_{k^{\prime}}\right\\}}.$ (55)
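Eq. (55) reduces to a ratio of median pairwise distances; a minimal sketch, assuming the rows of `X_m` and `Phi_m` hold the distinct points of $\widetilde{U}_{m}$ and their images under the local parameterization:

```python
import numpy as np

def init_scale(X_m, Phi_m):
    """Initial scale b_m of Eq. (55) (sketch).

    X_m: points of the view U_m in the ambient space (one per row);
    Phi_m: their images under the local parameterization (one per row).
    Returns the ratio of the median pairwise distances; rows are assumed
    to be distinct points, matching x_k != x_k' in Eq. (55).
    """
    def median_pdist(A):
        n = len(A)
        d = [np.linalg.norm(A[i] - A[j]) for i in range(n) for j in range(i + 1, n)]
        return np.median(d)
    return median_pdist(X_m) / median_pdist(Phi_m)
```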
Then we transform the views in a sequence $(s_{m})_{m=1}^{M}$. This
sequence corresponds to the breadth first ordering of a tree starting from its
root node (which represents $s_{1}$th view). Let the $p_{s_{m}}$th view be the
parent of the $s_{m}$th view. Here $p_{s_{m}}$ lies in
$\\{s_{1},\ldots,s_{m-1}\\}$ and it is a neighboring view of the $s_{m}$th
view in the ambient space, i.e. $\widetilde{U}_{s_{m}p_{s_{m}}}$ is non-empty.
Details about the computation of these sequences are provided in Appendix D.
Note that $p_{s_{1}}$ is not defined and consequently, the first view in the
sequence ($s_{1}$th view) is not transformed, therefore $T_{s_{1}}$ and
$v_{s_{1}}$ are not updated. We also define $\mathcal{A}$, initialized with
$s_{1}$, to keep track of visited nodes which also represent the already
transformed views. Then we iterate over $m$ which varies from $2$ to $M$. For
convenience, denote the current ($m$th) node $s_{m}$ by $s$ and its parent
$p_{s_{m}}$ by $p$. The following procedure updates $T_{s}$ and $v_{s}$ (refer
to Figure 9 and 10 for an illustration of this procedure).
Step R1 ($m=9$):
(d) The transformed intermediate views $(\widetilde{\Phi}^{g}_{s_{m}}(\widetilde{U}_{s_{m}}))_{m=1}^{9}$ before the start of the iteration $m=9$. The first eight views are approximately aligned and the ninth view is to be aligned. Inaccuracies occur due to distortion.
(e) Assuming $p_{s_{9}}=s_{7}$, step R1 computed $T_{s_{9}}$ and $v_{s_{9}}$ so that $\widetilde{\Phi}^{g}_{s_{9}}(\widetilde{U}_{s_{9}s_{7}})$ aligns with $\widetilde{\Phi}^{g}_{s_{7}}(\widetilde{U}_{s_{9}s_{7}})$. The transformed view $\widetilde{\Phi}^{g}_{s_{9}}(\widetilde{U}_{s_{9}})$ is shown. Note that step R1 results in the same output for both cases in Fig. 9 (a.1, a.2).
Step R2 and R3 ($m=9$):
(f.1) For a manifold with boundary, $\widetilde{U}_{s_{9}}$ has non-empty overlaps with $\widetilde{U}_{s_{7}}$ and $\widetilde{U}_{s_{8}}$ only. Therefore, step R2 computed $\mathcal{Z}_{s_{9}}=\\{s_{7},s_{8}\\}$. The obtained $\mu_{s_{9}}$ in step R3 is also shown in black.
(f.2) For a closed manifold, $\widetilde{U}_{s_{9}}$ has non-empty overlaps with $\widetilde{U}_{s_{1}}$, $\widetilde{U}_{s_{7}}$ and $\widetilde{U}_{s_{8}}$. Therefore, step R2 computed $\mathcal{Z}_{s_{9}}=\\{s_{1},s_{7},s_{8}\\}$. The obtained $\mu_{s_{9}}$ in step R3 is also shown in black.
Step R4 ($m=9$):
(g.1) For a manifold with boundary, step R4 updated $T_{s_{9}}$ and $v_{s_{9}}$ so that the view $\widetilde{\Phi}_{s_{9}}^{g}(\widetilde{U}_{s_{9}s_{7}}\cup\widetilde{U}_{s_{9}s_{8}})$ aligns with $\mu_{s_{9}}$ in (f.1). The resulting view $\widetilde{\Phi}_{s_{9}}^{g}(\widetilde{U}_{s_{9}})$ is shown.
(g.2) For a closed manifold, step R4 updated $T_{s_{9}}$ and $v_{s_{9}}$ so that the view $\widetilde{\Phi}_{s_{9}}^{g}(\widetilde{U}_{s_{9}s_{1}}\cup\widetilde{U}_{s_{9}s_{7}}\cup\widetilde{U}_{s_{9}s_{8}})$ aligns with $\mu_{s_{9}}$ in (f.2). The resulting view $\widetilde{\Phi}_{s_{9}}^{g}(\widetilde{U}_{s_{9}})$ is shown. This is not a desired output as it distorts the global embedding. We resolve this issue in Section 5.3.
Figure 10: An illustration of steps R1 to R4 in Algo. 5, in continuation of
Figure 9.
##### Step R1.
We compute a temporary value of $T_{s}$ and $v_{s}$ by aligning the views
$\widetilde{\Phi}^{g}_{s}(\widetilde{U}_{sp})$ and
$\widetilde{\Phi}^{g}_{p}(\widetilde{U}_{sp})$ of the overlap
$\widetilde{U}_{sp}$, using Procrustes analysis [21] without modifying
$b_{s}$.
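The Procrustes call in steps R1 and R4 can be sketched via the standard SVD solution to the orthogonal Procrustes problem (rotation/reflection plus translation, no scaling); the function below assumes the two matrices have corresponding rows:

```python
import numpy as np

def procrustes_no_scaling(A, B):
    """Orthogonal Procrustes step (sketch of the calls in R1 and R4).

    Finds an orthogonal T (d x d) and translation v minimizing
    ||A - (B @ T + v)||_F, i.e. aligning the view B onto the target A;
    the scale b is left untouched, matching "No scaling" in Algo. 5.
    A and B are |U| x d matrices with corresponding rows.
    """
    mu_A, mu_B = A.mean(axis=0), B.mean(axis=0)
    C = (B - mu_B).T @ (A - mu_A)   # d x d cross-covariance of centered data
    U, _, Vt = np.linalg.svd(C)
    T = U @ Vt                      # optimal rotation/reflection
    v = mu_A - mu_B @ T             # translation aligning the centroids
    return T, v
```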
##### Step R2.
Then we identify more views to align the $s$th view with. We compute a subset
$\mathcal{Z}_{s}$ of the set of already visited nodes $\mathcal{A}$ such that
$m^{\prime}\in\mathcal{Z}_{s}$ if the $s$th view and the $m^{\prime}$th view
are neighbors in the ambient space. Note that, at this stage, $\mathcal{A}$ is
the same as the set $\\{s_{1},\ldots,s_{m-1}\\}$, the indices of the first
$m-1$ views. Therefore,
$\displaystyle\mathcal{Z}_{s}=\\{m^{\prime}|\
\widetilde{U}_{sm^{\prime}}\neq\emptyset\\}\cap\mathcal{A}.$ (56)
##### Step R3.
We then compute the centroid $\mu_{s}$ of the views
$(\widetilde{\Phi}^{g}_{m^{\prime}}(\widetilde{U}_{sm^{\prime}}))_{m^{\prime}\in\mathcal{Z}_{s}}$.
Here $\mu_{s}$ is a matrix with $d$ columns and the number of rows given by
the size of the set
$\cup_{m^{\prime}\in\mathcal{Z}_{s}}\widetilde{U}_{sm^{\prime}}$. A point in
this set can have multiple embeddings due to multiple parameterizations
$(\widetilde{\Phi}^{g}_{m^{\prime}})_{m^{\prime}\in\mathcal{Z}_{s}}$ depending
on the overlaps $(\widetilde{U}_{sm^{\prime}})_{m^{\prime}\in\mathcal{Z}_{s}}$
it lies in. The mean of these embeddings forms a row in $\mu_{s}$.
##### Step R4.
Finally, we update $T_{s}$ and $v_{s}$ by aligning the view
$\widetilde{\Phi}^{g}_{s}(\widetilde{U}_{sm^{\prime}})$ with
$\widetilde{\Phi}^{g}_{m^{\prime}}(\widetilde{U}_{sm^{\prime}})$ for all
$m^{\prime}\in\mathcal{Z}_{s}$. This alignment is based on the approach in
[18, 20] where, using the Procrustes analysis [21, 31], the view
$\widetilde{\Phi}^{g}_{s}(\cup_{m^{\prime}\in\mathcal{Z}_{s}}\widetilde{U}_{sm^{\prime}})$
is aligned with the centroid $\mu_{s}$, without modifying $b_{s}$.
##### Step R5.
After the $s$th view is transformed, we add it to the set of transformed views
$\mathcal{A}$.
##### Parameter Refinement ($\text{Iter}\geq 2$, $\text{to\\_tear}=$ False).
At the end of the first iteration of the outer loop in Algo. 5, we have an
initialization of $(T_{m},b_{m},v_{m})_{m=1}^{M}$ such that transformed
intermediate views are approximately aligned. To further refine these
parameters, we iterate over $(s_{m})_{m=2}^{M}$ in random order and perform
the same five step procedure as above, $N_{r}$ times. Besides the random-order
traversal, the other difference in a refinement iteration is that the set of
already visited nodes $\mathcal{A}$ contains all the nodes instead of just
the first $m-1$ nodes. This affects the computation of $\mathcal{Z}_{s}$ (see
Eq. (56)) in step R2 so that the $s$th intermediate view is now aligned with
all those views which are its neighbors in the ambient space. Note that the
step R5 is redundant during refinement.
In the end, we compute the global embedding $y_{k}$ of $x_{k}$ by mapping
$x_{k}$ using the transformed parameterization associated with the cluster
$c_{k}$ it belongs to,
$\displaystyle y_{k}=\widetilde{\Phi}^{g}_{c_{k}}(x_{k}).$ (57)
An illustration of the global embedding at various stages of Algo. 5 is
provided in Figure 11.
[Image grid for Figure 11: rows Square and Sphere; columns show the input, the embedding before, half-way through and at the end of the first iteration of the outer loop, and at the end of the outer loop.]
Figure 11: $2$d embeddings of a square and a sphere at different stages of
Algo. 5. For illustration purposes, in the plots in the $2$nd and $3$rd columns
the translation parameter $v_{m}$ was manually set for those views which do
not lie in the set $\mathcal{A}$. Note that the embedding of the sphere is
fallacious. The reason and the resolution are provided in Section 5.3.
### 5.3 Tearing Closed Manifolds
When the manifold has no boundary, step R2 in the above section may
result in a set $\mathcal{Z}_{s}$ containing the indices of the views which
are neighbors of the $s$th view in the ambient space but are far apart from
the transformed $s$th view in the embedding space, obtained right after step
R1. For example, as shown in Figure 10 (f.2), $s_{1}\in\mathcal{Z}_{s_{9}}$
because the $s_{9}$th view and the $s_{1}$th view are neighbors in the ambient
space (see Figure 9 (a.1, a.2)) but in the embedding space, they are far
apart. Due to such indices in $\mathcal{Z}_{s_{9}}$, the step R3 results in a
centroid, which when used in step R4, results in a fallacious estimation of
the parameters $T_{s}$ and $v_{s}$, giving rise to a high distortion
embedding. By trying to align with all its neighbors in the ambient space, the
$s_{9}$th view is misaligned with respect to all of them (see Figure 10
(g.2)).
##### Resolution ($\text{to\\_tear}=$ True).
We modify the step R2 so as to introduce a discontinuity by including the
indices of only those views in the set $\mathcal{Z}_{s}$ which are neighbors
of the $s$th view in both the ambient space as well as in the embedding space.
We denote the overlap between the $m$th and $m^{\prime}$th view in the
embedding space by $\widetilde{U}^{g}_{mm^{\prime}}$. There may be multiple
heuristics for computing $\widetilde{U}^{g}_{mm^{\prime}}$ which could work.
In Appendix E, we describe a simple approach based on the already
developed machinery in this paper, which uses the hyperparameter $\nu$
provided as input to Algo. 5. Having obtained
$\widetilde{U}^{g}_{mm^{\prime}}$, we say that the $m$th and the
$m^{\prime}$th intermediate views in the embedding space are neighbors if
$\widetilde{U}^{g}_{mm^{\prime}}$ is non-empty.
##### Step R2.
Finally, we compute $\mathcal{Z}_{s}$ as,
$\displaystyle\mathcal{Z}_{s}=\\{m^{\prime}\ |\
\widetilde{U}_{sm^{\prime}}\neq\emptyset,\;\widetilde{U}^{g}_{sm^{\prime}}\neq\emptyset\\}\cap\mathcal{A}.$
(58)
Note that if it is known a priori that the manifold can be embedded in a lower
dimension without tearing it apart, then we do not require the above
modification. In all of our experiments except the one in Section 6.5, we do
not assume that this information is available.
With this modification, the set $\mathcal{Z}_{s_{9}}$ in Figure 10 (f.2) will
not include $s_{1}$ and therefore the resulting centroid in the step R3 would
be the same as the one in Figure 10 (f.1). Subsequently, the transformed
$s_{9}$th view would be the one in Figure 10 (g.1) rather than Figure 10
(g.2).
##### Gluing instruction for the boundary of the embedding.
Having knowingly torn the manifold apart, we provide, at the output,
information on the points belonging to the tear and their neighboring points
in the ambient space. To encode the “gluing” instructions along the tear in
the form of colors at the output of our algorithm, we recompute
$\widetilde{U}^{g}_{mm^{\prime}}$. If $\widetilde{U}_{mm^{\prime}}$ is non-
empty but $\widetilde{U}^{g}_{mm^{\prime}}$ is empty, then this means that the
$m$th and $m^{\prime}$th views are neighbors in the ambient space but are torn
apart in the embedding space. Therefore, we color the global embedding of the
points on the overlap $\widetilde{U}_{mm^{\prime}}$ which belong to clusters
$\mathcal{C}_{m}$ and $\mathcal{C}_{m^{\prime}}$ with the same color to
indicate that although these points are separated in the embedding space, they
are adjacent in the ambient space (see Figures 19, 20 and 31).
An illustration of the global embedding at various stages of Algo. 5 with
modified step R2, is provided in Figure 12.
[Image row for Figure 12: the Sphere's input, and its embedding before, half-way through and at the end of the first iteration of the outer loop, and at the end of the outer loop.]
Figure 12: $2$d embedding of a sphere at different stages of Algo. 5. For
illustration purposes, in the plots in the $2$nd and $3$rd columns the
translation parameter $v_{m}$ was manually set for those views which do not
lie in the set $\mathcal{A}$.
##### Example.
The obtained global embeddings of our square grid with
$\text{to\\_tear}=\text{True}$ and $\nu=3$, are shown in Figure 13. Note that
the boundary of the obtained embedding is more distorted when the points on
the boundary are unknown than when they are known a priori. This is because the
intermediate views near the boundary have higher distortion in the former case
than in the latter case (see Figure 7).
Figure 13: Global embedding of the square grid when the points on the boundary
are unknown (left) versus when they are known a priori (right).
### 5.4 Time Complexity
The worst case time complexity of Algo. 5 is
$O(N_{r}nk_{\text{lv}}^{2}d^{2}/\eta_{\text{min}})$ when to_tear is false. It
costs an additional time of $O(N_{r}n^{2}\max(d,k_{\text{lv}}\log
n,n/\eta_{\text{min}}^{2}))$ when to_tear is true. In practice, one
refinement step took about $15$ seconds in the above example and between
$15$-$20$ seconds for all the examples in Section 6.
## 6 Experimental Results
We present experiments to compare LDLE (the Python code is available at
https://github.com/chiggum/pyLDLE) with LTSA [49], UMAP [32], t-SNE [30] and
Laplacian eigenmaps [3] on several datasets. First, we compare the embeddings
of discretized $2$d manifolds embedded in $\mathbb{R}^{2}$, $\mathbb{R}^{3}$
or $\mathbb{R}^{4}$, containing about $10^{4}$ points. These manifolds are
grouped based on the presence of the boundary and their orientability as in
Sections 6.2, 6.3 and 6.4. The inputs are shown in the figures themselves
except for the flat torus and the Klein bottle, as their $4$D
parameterizations cannot be plotted. Therefore, we describe their construction
below. A quantitative comparison of the algorithms is provided in Section
6.2.1. In Section 6.2.2 we assess the robustness of these algorithms to the
noise in the data. In Section 6.2.3 we assess the performance of these
algorithms on sparse data. Finally, in Section 6.5 we compare the embeddings
of some high dimensional datasets.
Flat Torus. A flat torus is a parallelogram whose opposite sides are
identified. In our case, we construct a discrete flat torus using a rectangle
with sides $2$ and $0.5$ and embed it in four dimensions as follows,
$\displaystyle X(\theta_{i},\phi_{j})$
$\displaystyle=\frac{1}{4\pi}(4\cos(\theta_{i}),4\sin(\theta_{i}),\cos(\phi_{j}),\sin(\phi_{j}))$
(59)
where $\theta_{i}=0.01i\pi$, $\phi_{j}=0.04j\pi$, $i\in\\{0,\ldots,199\\}$ and
$j\in\\{0,\ldots,49\\}$.
Klein bottle. A Klein bottle is a non-orientable two-dimensional manifold
without boundary. We construct a discrete Klein bottle using its $4$D Möbius
tube representation as follows,
$\displaystyle X(\theta_{i},\phi_{j})$
$\displaystyle=(R(\phi_{j})\cos\theta_{i},R(\phi_{j})\sin\theta_{i},r\sin\phi_{j}\cos\frac{\theta_{i}}{2},r\sin\phi_{j}\sin\frac{\theta_{i}}{2})$
(60) $\displaystyle R(\phi_{j})$ $\displaystyle=R+r\cos\phi_{j}$ (61)
where $\theta_{i}=i\pi/100$, $\phi_{j}=j\pi/25$, $i\in\\{0,\ldots,199\\}$ and
$j\in\\{0,\ldots,49\\}$.
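The two constructions in Eqs. (59)-(61) can be generated directly; note that the Klein-bottle radii $R$ and $r$ are not specified in the text, so the defaults below are arbitrary choices with $R>r$:

```python
import numpy as np

def flat_torus():
    """4D flat-torus point cloud of Eq. (59): rectangle with sides 2 and 0.5."""
    theta = 0.01 * np.pi * np.arange(200)   # theta_i = 0.01*i*pi
    phi = 0.04 * np.pi * np.arange(50)      # phi_j = 0.04*j*pi
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    th, ph = th.ravel(), ph.ravel()
    return np.column_stack([4 * np.cos(th), 4 * np.sin(th),
                            np.cos(ph), np.sin(ph)]) / (4 * np.pi)

def klein_bottle(R=0.7, r=0.25):
    """4D Mobius-tube Klein bottle of Eqs. (60)-(61).

    The radii R and r are not given in the text; the defaults here are
    arbitrary choices with R > r.
    """
    theta = np.pi * np.arange(200) / 100    # theta_i = i*pi/100
    phi = np.pi * np.arange(50) / 25        # phi_j = j*pi/25
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    th, ph = th.ravel(), ph.ravel()
    Rp = R + r * np.cos(ph)                 # Eq. (61)
    return np.column_stack([Rp * np.cos(th), Rp * np.sin(th),
                            r * np.sin(ph) * np.cos(th / 2),
                            r * np.sin(ph) * np.sin(th / 2)])
```

Both clouds contain $200\times 50=10^{4}$ points, matching the grid sizes stated above.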
### 6.1 Hyperparameters
To embed using LDLE, we use the Euclidean metric and the default values of the
hyperparameters; their descriptions and default values are provided in Table 1. Only the value
of $\eta_{\text{min}}$ is tuned across all the examples in Sections 6.2, 6.3
and 6.4 (except for Section 6.2.3), and is provided in Appendix G. For high
dimensional datasets in Section 6.5, values of the hyperparameters which
differ from the default values are again provided in Appendix G.
Hyperparameter | Description | Default value
---|---|---
$k_{\text{nn}}$ | No. of nearest neighbors used to construct the graph Laplacian | $49$
$k_{\text{tune}}$ | The nearest neighbor, distance to which is used as a local scaling factor in the construction of graph Laplacian | 7
$N$ | No. of nontrivial low frequency Laplacian eigenvectors to consider for the construction of local views in the embedding space | 100
$d$ | Intrinsic dimension of the underlying manifold | 2
$p$ | Probability mass for computing the bandwidth $t_{k}$ of the heat kernel | 0.99
$k_{\text{lv}}$ | The nearest neighbor, distance to which is used to construct local views in the ambient space | 25
$(\tau_{s})_{s=1}^{d}$ | Percentiles used to restrict the choice of candidate eigenfunctions | 50
$(\delta_{s})_{s=1}^{d}$ | Fractions used to restrict the choice of candidate eigenfunctions | 0.9
$\eta_{\text{min}}$ | Desired minimum number of points in a cluster | 5
to_tear | A boolean for whether to tear the manifold or not | True
$\nu$ | A relaxation factor to compute the neighborhood graph of the intermediate views in the embedding space | 3
$N_{r}$ | No. of iterations to refine the global embedding | 100
Table 1: Default values of LDLE hyperparameters.
For UMAP, LTSA, t-SNE and Laplacian eigenmaps, we use the Euclidean metric and
select the hyperparameters by grid search, choosing the values which result in
best visualization quality. For LTSA, we search for optimal n_neighbors in
$\\{5,10,25,50,75,100\\}$. For UMAP, we use $500$ epochs and search for
optimal n_neighbors in $\\{25,50,100,200\\}$ and min_dist in
$\\{0.01,0.1,0.25,0.5\\}$. For t-SNE, we use $1000$ iterations and search for
optimal perplexity in $\\{30,40,50,60\\}$ and early exaggeration in
$\\{2,4,6\\}$. For Laplacian eigenmaps, we search for $k_{\text{nn}}$ in
$\\{16,25,36,49\\}$ and $k_{\text{tune}}$ in $\\{3,7,11\\}$. The chosen values
of the hyperparameters are provided in Appendix G. We note that Laplacian
eigenmaps fails to correctly embed most of the examples regardless of the
choice of the hyperparameters.
### 6.2 Manifolds with Boundary
In Figure 14, we show the $2$d embeddings of $2$d manifolds with boundary, in
$\mathbb{R}^{2}$ or $\mathbb{R}^{3}$, three of which have holes. To a large
extent, LDLE preserved the shape of the holes. LTSA perfectly preserved the
shape of the holes in the square but deforms it in the Swiss Roll. This is
because LTSA embedding does not capture the aspect ratio of the underlying
manifold as discussed in Section F. UMAP and Laplacian eigenmaps distorted the
shape of the holes and the region around them, while t-SNE produced dissected
embeddings. For the sphere with a hole which is a curved $2$d manifold with
boundary, LTSA, UMAP and Laplacian eigenmaps squeezed it into $\mathbb{R}^{2}$
while LDLE and t-SNE tore it apart. The correctness of the LDLE embedding is
demonstrated in Figure 31. In the case of the noisy Swiss Roll, LDLE and UMAP produced
visually better embeddings in comparison to the other methods.
We note that the boundaries of the LDLE embeddings in Figure 14 are usually
distorted. The cause of this is explained in Remark 4. When the points in the
input which lie on the boundary are known a priori, the distortion near the
boundary can be reduced using the double manifold, as discussed in Remark 5 and
shown in Figure 4. The LDLE embeddings obtained when the points on the
boundary are known are shown in Figure 15.
| Barbell | Square with two holes | Sphere with a hole | Swiss Roll with a hole | Noisy Swiss Roll
---|---|---|---|---|---
Input | | | | |
LDLE | | | | |
LTSA | | | | |
UMAP | | | | |
t-SNE | | | | |
Laplacian Eigenmaps | | | | |
Figure 14: Embeddings of $2$d manifolds with boundary into $\mathbb{R}^{2}$. The noisy Swiss Roll is constructed by adding uniform noise in all three dimensions, with support on $[0,0.05]$.
| Barbell | Square with two holes | Swiss Roll with a hole
---|---|---|---
LDLE with $\partial\mathcal{M}$ known a priori | | |
Figure 15: LDLE embeddings when the points on the boundary are known a priori.
#### 6.2.1 Quantitative comparison
To compare LDLE with other techniques in a quantitative manner, we compute the
distortion $\mathcal{D}_{k}$ of the embeddings of the geodesics originating
from $x_{k}$ and then plot the distribution of $\mathcal{D}_{k}$ (see Figure
16). The procedure to compute $\mathcal{D}_{k}$ follows. In the discrete
setting, we first define the geodesic between two given points as the shortest
path between them, which in turn is computed by running Dijkstra's algorithm on
the graph of $5$ nearest neighbors. Here, the distances are measured using the
Euclidean metric $d_{e}$. Denote the number of nodes on the geodesic between
$x_{k}$ and $x_{k^{\prime}}$ by $n_{kk^{\prime}}$ and the sequence of nodes by
$(x_{s})_{s=1}^{n_{kk^{\prime}}}$ where $x_{1}=x_{k}$ and
$x_{n_{kk^{\prime}}}=x_{k^{\prime}}$. Denote the embedding of $x_{k}$ by
$y_{k}$. Then the length of the geodesic between $x_{k}$ and $x_{k^{\prime}}$
in the input space, and the length of the embedding of the geodesic between
$y_{k}$ and $y_{k^{\prime}}$, are given by
$\displaystyle L_{kk^{\prime}}$
$\displaystyle=\sum_{s=2}^{n_{kk^{\prime}}}d_{e}(x_{s},x_{s-1}).$ (62)
$\displaystyle L^{g}_{kk^{\prime}}$
$\displaystyle=\sum_{s=2}^{n_{kk^{\prime}}}d_{e}(y_{s},y_{s-1}).$ (63)
Finally, the distortion $\mathcal{D}_{k}$ of the embeddings of the geodesics
originating from $x_{k}$ is given by the ratio of maximum expansion and
minimum contraction, that is,
$\displaystyle\mathcal{D}_{k}$
$\displaystyle=\sup_{k^{\prime}}\frac{L^{g}_{kk^{\prime}}}{L_{kk^{\prime}}}/\inf_{k^{\prime}}\frac{L^{g}_{kk^{\prime}}}{L_{kk^{\prime}}}=\sup_{k^{\prime}}\frac{L^{g}_{kk^{\prime}}}{L_{kk^{\prime}}}\sup_{k^{\prime}}\frac{L_{kk^{\prime}}}{L^{g}_{kk^{\prime}}}.$
(64)
A value of $1$ for $\mathcal{D}_{k}$ means the geodesics originating from
$x_{k}$ have the same length in the input and in the embedding space. If
$\mathcal{D}_{k}=1$ for all $k$ then the embedding is geometrically, and
therefore topologically as well, the same as the input up to scale. Figure 16
shows the distribution of $\mathcal{D}_{k}$ due to LDLE and other algorithms
for various examples. Except for the noisy Swiss Roll, LTSA produced the least
maximum distortion. Specifically, for the square with two holes, LTSA produced
a distortion of $1$ suggesting its strength on manifolds with unit aspect
ratio. In all other examples, LDLE produced the least distortion except for a
few outliers. When the boundary is unknown, the points which result in high
$\mathcal{D}_{k}$ are the ones which lie on and near the boundary. When the
boundary is known, these are the points which lie on or near the corners (see
Figures 4 and 5). We aim to fix this issue in future work.
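The procedure above can be sketched as follows, under the stated setup of shortest paths on the Euclidean 5-NN graph; the function name `distortion` and the use of scikit-learn/SciPy are our own choices, not the paper's implementation:

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import kneighbors_graph

# A sketch of the distortion D_k in Eq. (64): geodesics are shortest paths
# on the Euclidean 5-NN graph of the input X, and L^g is measured along the
# same node sequences in the embedding Y.
def distortion(X, Y, n_neighbors=5):
    G = kneighbors_graph(X, n_neighbors, mode="distance")
    # L[k, k'] is the geodesic length in the input; pred recovers the path
    L, pred = dijkstra(G, directed=False, return_predecessors=True)
    n = len(X)
    D = np.empty(n)
    for k in range(n):
        Lg = np.zeros(n)                      # embedded geodesic lengths
        for kp in range(n):
            s = kp
            while pred[k, s] >= 0:            # walk the path back towards k
                Lg[kp] += np.linalg.norm(Y[s] - Y[pred[k, s]])
                s = pred[k, s]
        mask = np.isfinite(L[k]) & (L[k] > 0)
        ratio = Lg[mask] / L[k][mask]
        D[k] = ratio.max() / ratio.min()      # max expansion / min contraction
    return D
```

For an embedding that is a similarity transform of the input, every ratio is the same and $\mathcal{D}_{k}=1$ for all $k$.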
#### 6.2.2 Robustness to noise
To further analyze the robustness of LDLE under noise we compare the
embeddings of the Swiss Roll with Gaussian noise of increasing variance. The
resulting embeddings are shown in Figure 17. Note that certain points in the
LDLE embeddings have a different colormap than the one used for the input. As
explained in Section 5.3, the points which have the same color under this
colormap are adjacent on the manifold but far apart in the embedding. To be
precise, these points lie close to the middle of the gap in the Swiss Roll,
creating a bridge between points which would otherwise be far apart on a
noiseless Swiss Roll. In a sense, these points cause maximum corruption to the
geometry of the underlying noiseless manifold. One can say that these points
carry adversarial noise, and the LDLE embedding can automatically recognize
such points. We will further explore this in future work. LTSA, t-SNE and
Laplacian Eigenmaps fail to produce correct embeddings, while the UMAP
embeddings remain of high quality.
#### 6.2.3 Sparsity
A comparison of the embeddings of the Swiss Roll with decreasing resolution
and increasing sparsity is provided in Figure 18. Unlike LTSA and Laplacian
Eigenmaps, the embeddings produced by LDLE, UMAP and t-SNE are of high
quality. Note that when the resolution is $10$, the LDLE embeddings of some
points have a different colormap. Due to sparsity, certain points on the
opposite sides of the gap in the Swiss Roll are neighbors in the ambient space,
as shown in Figure 32 in Appendix I. LDLE automatically tore apart these
erroneous connections and marked them at the output using a different
colormap. A discussion on the sample size requirement for LDLE follows.
The distortion of the LDLE embeddings directly depends on the distortion of the
constructed local parameterizations, which in turn depends on reliable
estimates of the graph Laplacian and its eigenvectors. The work in [4, 22, 45,
13] provided conditions on the sample size and the hyperparameters such as the
kernel bandwidth, under which the graph Laplacian and its eigenvectors would
converge to their continuous counterparts. A similar analysis in the setting
of self-tuned kernels used in our approach (see Algo. 1) is also provided in
[12]. These imply that, for a faithful estimation of the graph Laplacian and its
eigenvectors, the hyperparameter $k_{\text{tune}}$ (see Table 1) should be
small enough so that the local scaling factors $\sigma_{k}$ (see Algo. 1) are
also small, while the size of the data $n$ should be large enough so that
$n\sigma_{k}^{d+2}/\log(n)$ is sufficiently large for all
$k\in\\{1,\ldots,n\\}$. This suggests that $n$ needs to be exponential in $d$
and inversely related to $\sigma_{k}$. However, in practice, the data is
usually given and therefore $n$ is fixed. So the above mainly states that to
obtain accurate estimates, the hyperparameter $k_{\text{tune}}$ must be
decreased. This indeed holds as we had to decrease $k_{\text{tune}}$ from $7$
to $2$ (see Appendix G) to produce LDLE embeddings of high quality for
increasingly sparse Swiss Roll in Figure 18.
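The heuristic above can be made concrete with a back-of-the-envelope search for the smallest $n$ making $n\sigma_{k}^{d+2}/\log(n)$ exceed a threshold; the doubling search and the threshold value of $1$ are illustrative assumptions, not constants from the cited works:

```python
import numpy as np

# Back-of-the-envelope check of the heuristic above: find the smallest n
# (by doubling) with n * sigma**(d+2) / log(n) >= threshold. The threshold
# of 1.0 is an arbitrary illustration, not a constant from the cited works.
def min_samples(sigma, d, threshold=1.0):
    n = 2
    while n * sigma ** (d + 2) / np.log(n) < threshold:
        n *= 2
    return n
```

Fixing $\sigma$, the required $n$ grows rapidly with $d$, which is the exponential dependence mentioned above.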
### 6.3 Closed Manifolds
In Figure 19, we show the $2$d embeddings of $2$d manifolds without a
boundary, a curved torus in $\mathbb{R}^{3}$ and a flat torus in
$\mathbb{R}^{4}$. LDLE produced similar representations for both inputs; none
of the other methods did. The main difference between the LDLE embeddings of
the two inputs lies in the boundary of the embedding: it is composed of many
small line segments for the flat torus, and many small curved segments for the
curved torus. This is clearly because of the difference in the curvature of the
two inputs, zero everywhere for the flat torus and non-zero almost everywhere
on the curved torus. The mathematical correctness of the LDLE embeddings, using
a cut-and-paste argument, is shown in Figure 31. LTSA, UMAP and Laplacian
eigenmaps squeezed both manifolds into $\mathbb{R}^{2}$, while the t-SNE
embedding is non-interpretable.
### 6.4 Non-Orientable Manifolds
In Figure 20, we show the $2$d embeddings of non-orientable $2$d manifolds, a
Möbius strip in $\mathbb{R}^{3}$ and a Klein bottle in $\mathbb{R}^{4}$.
Laplacian eigenmaps produced incorrect embeddings, t-SNE produced dissected
and non-interpretable embeddings and LTSA and UMAP squeezed the inputs into
$\mathbb{R}^{2}$. LDLE produced mathematically correct embeddings by tearing
apart both inputs to embed them into $\mathbb{R}^{2}$ (see Figure 31).
### 6.5 High Dimensional Data
#### 6.5.1 Synthetic sensor data
In Figure 21, motivated from [36], we embed a $42$ dimensional synthetic data
set representing the signal strength of $42$ transmitters at about $n=6000$
receiving locations on a toy floor plan. The transmitters and the receivers
are distributed uniformly across the floor. Let $(t_{r_{k}})_{k=1}^{42}$ be
the transmitter locations and $r_{i}$ be the $i$th receiver location. Then the
$i$th data point $x_{i}$ is given by
$(e^{-\left\|r_{i}-t_{r_{k}}\right\|_{2}^{2}})_{k=1}^{42}$. The resulting data
set is embedded using LDLE and other algorithms into $\mathbb{R}^{2}$. The
hyperparameters resulting in the most visually appealing embeddings were
identified for each algorithm and are provided in Table 2. The obtained
embeddings are shown in Figure 21. The shapes of the holes are best preserved
by LTSA, then LDLE, followed by the other algorithms. The corners of the LDLE
embedding are more distorted; the reason for the distorted corners is given in
Remark 4.
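The construction of this data set can be sketched as follows; the uniform unit square below is a simplification of the toy floor plan (which has holes), and the RNG seed is arbitrary:

```python
import numpy as np

# Sketch of the synthetic sensor data: 42 transmitters and n = 6000 receivers
# placed uniformly at random. The unit square below is a simplification of
# the toy floor plan in the paper (which has holes); the seed is arbitrary.
rng = np.random.default_rng(0)
n_tx, n_rx = 42, 6000
tx = rng.uniform(size=(n_tx, 2))   # transmitter locations t_{r_k}
rx = rng.uniform(size=(n_rx, 2))   # receiver locations r_i
# i-th data point: x_i = (exp(-||r_i - t_{r_k}||_2^2))_{k=1}^{42}
sq_dist = ((rx[:, None, :] - tx[None, :, :]) ** 2).sum(axis=-1)
X = np.exp(-sq_dist)               # data matrix, shape (6000, 42)
```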
#### 6.5.2 Face image data
In Figure 22, we show the embedding obtained by applying LDLE on the face
image data [44] which consists of a sequence of $698$ $64$-by-$64$ pixel
images of a face rendered under various pose and lighting conditions. These
images are converted to $4096$ dimensional vectors, then projected to $100$
dimensions through PCA while retaining about $98\%$ of the variance. These are
then embedded using LDLE and other algorithms into $\mathbb{R}^{2}$. The
hyperparameters resulting in the most visually appealing embeddings were
identified for each algorithm and are provided in Table 5. The resulting
embeddings are shown in Figure 22, colored by the pose and lighting of the
face. Note that the values of the pose and lighting variables for all the
images are provided in the dataset itself. We have displayed face images
corresponding to a few points of the LDLE embeddings as well. Embeddings due to
all the techniques except LTSA reasonably capture both the pose and lighting
conditions.
#### 6.5.3 Rotating Yoda-Bulldog dataset
In Figure 23, we show the $2$d embeddings of the rotating figures dataset
presented in [28]. It consists of $8100$ snapshots, taken by a camera, of a
platform with two objects, Yoda and a bulldog, rotating at different
frequencies. Therefore, the underlying $2$d parameterization of the data
should render a torus. The original images have a dimension of $320\times
240\times 3$. In our experiment, we first resize the images to half the
original size and then project them to $100$ dimensions through PCA [24] while
retaining about $98\%$ variance. These are then embedded using LDLE and other
algorithms into $\mathbb{R}^{2}$. The hyperparameters resulting in the most
visually appealing embeddings were identified for each algorithm and are
provided in Table 5. The resulting embeddings are shown in Figure 23 colored
by the first dimension of the embedding itself. LTSA and UMAP resulted in a
squeezed torus. LDLE tore apart the underlying torus and automatically colored
the boundary of the embedding to suggest the gluing instructions. By tracing
the color on the boundary, we have manually drawn the arrows. Putting these
arrows on a piece of paper and using a cut-and-paste argument, one can
establish that the embedding represents a torus (see Figure 31). The images
corresponding to a few points on the boundary are shown. Pairs of images with
the same labels represent the two sides of the curve along which LDLE tore
apart the torus, and as is evident these pairs are similar.
## 7 Conclusion and Future Work
We have presented a new bottom-up approach (LDLE) for manifold learning which
constructs low-dimensional low distortion local views of the data using the
low frequency global eigenvectors of the graph Laplacian, and registers them
to obtain a global embedding. Through various examples we demonstrated that
LDLE competes with the other methods in terms of visualization quality. In
particular, the embeddings produced by LDLE preserved distances up to a
constant scale better than those produced by UMAP, t-SNE, Laplacian Eigenmaps
and, for the most part, LTSA too. We also demonstrated that LDLE is robust to
the noise in the data and produces fine embeddings even when the data is
sparse. We also showed that LDLE can embed closed as well as non-orientable
manifolds into their intrinsic dimension, a feature that is missing from the
existing techniques. Some of the future directions of our work are as follows.
* •
It is only natural to expect real-world datasets to have boundaries and many
corners. As observed in the experimental results, when the boundary of
the manifold is unknown, the LDLE embedding tends to have a distorted
boundary. Even when the boundary is known, the embedding has distorted
corners. This is caused by high-distortion views near the boundary (see
Figures 4 and 5). We aim to fix this issue in our future work. One possible
resolution could be based on [5] which presented a method to approximately
calculate the distance of the points from the boundary.
* •
When the data represents a mixture of manifolds, for example, a pair of
possibly intersecting spheres or even manifolds of different intrinsic
dimensions, it is also natural to expect a manifold learning technique to
recover a separate parameterization for each manifold and provide gluing
instructions at the output. One way is to perform manifold factorization [48]
or multi-manifold clustering [46] on the data to recover sets of points
representing individual manifolds and then use manifold learning on these
separately. We aim to adapt LDLE to achieve this.
* •
The spectrum of the Laplacian has been used in prior work for anomaly
detection [15, 33, 11, 10, 35]. Similar to our approach of using a subset of
Laplacian eigenvectors to construct low distortion local views in lower
dimension, in [34, 10], subsets of Laplacian eigenvectors were identified so
as to separate small clusters from a large background component. As shown in
Figures 4 and 5, LDLE produced high distortion local views near the boundary
and the corners, though these are not outliers. However, if we consider a
sphere with outliers (imagine a sphere with noise only at the north pole as in
Figure 24), then the distortion of the local views containing the outliers is
higher than the rest of the views. Therefore, the distortion of the local
views can help find anomalies in the data. We aim to further investigate this
direction to develop an anomaly detection technique.
* •
Similar to the approach of denoising a signal by retaining low frequency
components, our approach uses low frequency Laplacian eigenvectors to estimate
local views. These eigenvectors implicitly capture the global structure of the
manifold. Therefore, to construct local views, unlike LTSA which directly
relies on the local configuration of the data, which may be noisy, LDLE relies
on the local elements of the low frequency global eigenvectors of the
Laplacian, which are expected to be robust to noise. A practical implication of
this is shown in Figure 17 to some extent, while we aim to further investigate
the theoretical implications.
Rectangle ($4\times 0.25$) | Barbell | Square with two holes | Swiss Roll with a hole | Noisy Swiss Roll
---|---|---|---|---
| | | |
Figure 16: Violin plots [23, 2] for the distribution of $\mathcal{D}_{k}$ (see Eq. (64)). LDLE $\partial M$ means LDLE with the boundary known a priori. The white point inside the violin represents the median. The straight line above the end of the violin represents the outliers.
| $\sigma=0.01$ | $\sigma=0.015$ | $\sigma=0.02$
---|---|---|---
Side view of Swiss Roll | | |
LDLE | | |
LTSA | | |
UMAP | | |
t-SNE | | |
Laplacian Eigenmaps | | |
Figure 17: Embeddings of the Swiss Roll with additive noise sampled from the Gaussian distribution of zero mean and variance $\sigma^{2}$ (see Section 6.2.2 for details).
| RES$=30$ ($n=990$) | RES$=15$ ($n=280$) | RES$=12$ ($n=184$) | RES$=10$ ($n=133$)
---|---|---|---|---
Input | | | |
LDLE | | | |
LTSA | | | |
UMAP | | | |
t-SNE | | | |
Laplacian Eigenmaps | | | |
Figure 18: Embeddings of the Swiss Roll with decreasing resolution and increasing sparsity (see Section 6.2.3 for details). Note that when RES$=7$ ($n=70$) none of the above methods produced a correct embedding.
| Curved torus | Flat torus
---|---|---
Input | | | See Eq. (59)
LDLE | | | |
LTSA | | | |
UMAP | | | |
t-SNE | | | |
Laplacian Eigenmaps | | | |
Figure 19: Embeddings of $2$d manifolds without boundary into $\mathbb{R}^{2}$. For each manifold, the left and right columns contain the same plots colored by the two parameters of the manifold. A proof of the mathematical correctness of the LDLE embeddings is provided in Figure 31.
| Möbius strip | Klein bottle
---|---|---
Input | | | See Eq. (61)
LDLE | | | |
LTSA | | | |
UMAP | | | |
t-SNE | | | |
Laplacian Eigenmaps | | | |
Figure 20: Embeddings of $2$d non-orientable manifolds into $\mathbb{R}^{2}$.
For each manifold, the left and right columns contain the same plots colored
by the two parameters of the manifold. A proof of the mathematical correctness
of the LDLE embeddings is provided in Figure 31.
True floor plan | LDLE | LTSA | UMAP | t-SNE | Laplacian eigenmaps
---|---|---|---|---|---
| | | | |
Figure 21: Embedding of the synthetic sensor data into $\mathbb{R}^{2}$ (see Section 6.5 for details).
| LDLE
---|---
|
| LDLE | LTSA | UMAP | t-SNE
Pose | | | |
Lighting | | | |
Figure 22: Embedding of the face image data set [44] into $\mathbb{R}^{2}$
colored by the pose and lighting conditions (see Section 6.5 for details).
LDLE
---
LTSA | UMAP | t-SNE
| |
Figure 23: Embeddings of snapshots of a platform with two objects, Yoda and a bulldog, each rotating at a different frequency, such that the underlying topology is a torus (see Section 6.5 for details).
| |
---|---|---
Figure 24: Local views containing outliers exhibit high distortion. (left)
Input data $(x_{k})_{k=1}^{n}$. (middle) $x_{k}$ colored by the distortion
$\zeta_{kk}$ of $\Phi_{k}$ on $U_{k}$. (right) $y_{k}$ colored by
$\zeta_{kk}$.
## Appendix A First Proof of Theorem 2
Choose $\epsilon>0$ so that the exponential map
$\exp_{x}:T_{x}\mathcal{M}\rightarrow\mathcal{M}$ is a well defined
diffeomorphism on $\mathcal{B}_{2\epsilon}\subset T_{x}\mathcal{M}$ where
$T_{x}\mathcal{M}$ is the tangent space to $\mathcal{M}$ at $x$,
$\exp_{x}(0)=x$ and
$\displaystyle\mathcal{B}_{\epsilon}=\\{v\in T_{x}\mathcal{M}\ |\
\left\|v\right\|_{2}<\epsilon\\}.$ (65)
Then using [7, lem. 48, prop. 50, th. 51], for all $y\in B_{\epsilon}(x)$ such
that
$\displaystyle B_{\epsilon}(x)=\\{y\in\mathcal{M}\ |\ d_{g}(x,y)<\epsilon\\}$
(66)
we have,
$\displaystyle p(t,x,y)$
$\displaystyle=G(t,x,y)(u_{0}(x,y)+tu_{1}(x,y)+O(t^{2})),$ (67)
where
$\displaystyle G(t,x,y)$ $\displaystyle=\frac{e^{-d_{g}(x,y)^{2}/4t}}{(4\pi
t)^{d/2}},$ (68) $\displaystyle u_{0}(x,y)$
$\displaystyle=1+O(\left\|v\right\|^{2}),\ y=\exp_{x}(v),v\in
T_{x}\mathcal{M},$ (69)
and for $f\in C(\mathcal{M})$, the following hold
$\displaystyle f(x)$ $\displaystyle=\lim_{t\rightarrow
0}\int_{M}p(t,x,y)f(y)\omega_{g}(y)$ (70) $\displaystyle=\lim_{t\rightarrow
0}\int_{B_{\epsilon}(x)}p(t,x,y)f(y)\omega_{g}(y),$ (71) $\displaystyle f(x)$
$\displaystyle=\lim_{t\rightarrow
0}\int_{B_{\epsilon}(x)}G(t,x,y)f(y)\omega_{g}(y),$ (72) $\displaystyle
u_{1}(x,x)f(x)$ $\displaystyle=\lim_{t\rightarrow
0}\int_{B_{\epsilon}(x)}G(t,x,y)u_{1}(x,y)f(y)\omega_{g}(y).$ (73)
Using the above equations and the definition of $\Psi_{kij}(y)$ in Eq. (15)
and $A_{kij}$ in Eq. (16) we compute the limiting value of the scaled local
correlation (see Eq. (19)),
$\displaystyle\widetilde{A}_{kij}$ $\displaystyle=\lim_{t\rightarrow
0}\frac{A_{kij}}{2t}$ (74) $\displaystyle=\lim_{t\rightarrow
0}\frac{1}{2t}\int_{M}p(t,x_{k},y)\Psi_{kij}(y)\omega_{g}(y).$ (75)
This limit will turn out to be the inner product between the gradients of the
eigenfunctions $\bm{\phi}_{i}$ and $\bm{\phi}_{j}$ at $x_{k}$. We start by
choosing an $\epsilon_{k}>0$ so that $\exp_{x_{k}}$ is a well defined
diffeomorphism on $\mathcal{B}_{2\epsilon_{k}}\subset T_{x_{k}}\mathcal{M}$.
Using Eq. (71) we change the region of integration from $\mathcal{M}$ to
$B_{\epsilon_{k}}(x_{k})$,
$\displaystyle\widetilde{A}_{kij}$ $\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{B_{\epsilon_{k}}(x_{k})}p(t_{k},x_{k},y)\Psi_{kij}(y)\omega_{g}(y).$
(76)
Substitute $p(t_{k},x_{k},y)$ from Eq. (67) and simplify using Eq. (72, 73)
and the fact that $\Psi_{kij}(x_{k})=0$ to get
$\displaystyle\widetilde{A}_{kij}$ $\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{B_{\epsilon_{k}}(x_{k})}G(t_{k},x_{k},y)(u_{0}(x_{k},y)+t_{k}u_{1}(x_{k},y)+O(t_{k}^{2}))\Psi_{kij}(y)\omega_{g}(y).$
$\displaystyle=\lim_{t_{k}\rightarrow
0}\left(\frac{1}{2t_{k}}\int_{B_{\epsilon_{k}}(x_{k})}G(t_{k},x_{k},y)u_{0}(x_{k},y)\Psi_{kij}(y)\omega_{g}(y)+\right.$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.\frac{t_{k}u_{1}(x_{k},x_{k})\Psi_{kij}(x_{k})+O(t_{k}^{2})\Psi_{kij}(x_{k})}{2t_{k}}\right)$
$\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{B_{\epsilon_{k}}(x_{k})}G(t_{k},x_{k},y)u_{0}(x_{k},y)\Psi_{kij}(y)\omega_{g}(y).$
(77)
Replace $y\in B_{\epsilon_{k}}(x_{k})$ by $\exp_{x_{k}}(v)$ where
$v\in\mathcal{B}_{\epsilon_{k}}\subset T_{x_{k}}\mathcal{M}$ and
$\left\|v\right\|=d_{g}(x_{k},y)$. Denote the Jacobian for the change of
variable by $J(v)$ i.e. $J(v)=\frac{d}{dv}\exp_{x_{k}}(v)$. Note that
$\exp_{x_{k}}(0)=x_{k}$ and $J(0)=I$. Using the Taylor expansion of
$\bm{\phi}_{i}$ and $\bm{\phi}_{j}$ about $0$ we obtain
$\displaystyle\phi_{s}(y)=\phi_{s}(\exp_{x_{k}}(v))$
$\displaystyle=\phi_{s}(\exp_{x_{k}}(0))+\nabla\phi_{s}(\exp_{x_{k}}(0))^{T}J(0)v+O(\left\|v\right\|^{2})$
$\displaystyle=\phi_{s}(x_{k})+\nabla\phi_{s}(x_{k})^{T}v+O(\left\|v\right\|^{2}),\
s=i,j.$ (78)
Substituting the above equation in the definition of $\Psi_{kij}(y)$ (see Eq.
(15)) we get
$\displaystyle\Psi_{kij}(y)$ $\displaystyle=\Psi_{kij}(\exp_{x_{k}}(v))$
$\displaystyle=v^{T}\nabla\phi_{i}\nabla\phi_{j}^{T}v+(\nabla\phi_{i}^{T}v+\nabla\phi_{j}^{T}v)O(\left\|v\right\|^{2})+O(\left\|v\right\|^{4}),$
(79)
where $\nabla\phi_{s}\equiv\nabla\phi_{s}(x_{k}),s=i,j$. Now we substitute Eq.
(79, 68, 69) in Eq. (77) while replacing variable $y$ with $\exp_{x_{k}}(v)$
where $J(v)$ is the Jacobian for the change of variable as before, to get
$\displaystyle\widetilde{A}_{kij}$ $\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{\mathcal{B}_{\epsilon_{k}}}\frac{e^{-\left\|v\right\|^{2}/4t_{k}}}{(4\pi
t_{k})^{d/2}}(1+O(\left\|v\right\|^{2}))\Psi_{kij}(\exp_{x_{k}}(v))J(v)dv$
$\displaystyle=L_{1}+L_{2},$ (80)
where $L_{1}$ and $L_{2}$ are the terms obtained by expanding
$1+O(\left\|v\right\|^{2})$ in the integrand. We will show that $L_{2}=0$ and
$\widetilde{A}_{kij}=L_{1}=\nabla\phi_{i}^{T}\nabla\phi_{j}$.
$\displaystyle L_{2}$ $\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{\mathcal{B}_{\epsilon_{k}}}\frac{e^{-\left\|v\right\|^{2}/4t_{k}}}{(4\pi
t_{k})^{d/2}}O(\left\|v\right\|^{2})(\operatorname{tr}(\nabla\phi_{i}\nabla\phi_{j}^{T}vv^{T})+$
$\displaystyle\qquad\qquad\qquad\qquad\qquad(\nabla\phi_{i}^{T}v+\nabla\phi_{j}^{T}v)O(\left\|v\right\|^{2})+O(\left\|v\right\|^{4}))J(v)dv$
$\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}(O(t_{k}^{2})+0+0+O(t_{k}^{4}))$ $\displaystyle=0.$ (81)
Therefore,
$\displaystyle\widetilde{A}_{kij}$ $\displaystyle=L_{1}$
$\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{\mathcal{B}_{\epsilon_{k}}}\frac{e^{-\left\|v\right\|^{2}/4t_{k}}}{(4\pi
t_{k})^{d/2}}\Psi_{kij}(\exp_{x_{k}}(v))J(v)dv$ (82)
$\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{\mathcal{B}_{\epsilon_{k}}}\frac{e^{-\left\|v\right\|^{2}/4t_{k}}}{(4\pi
t_{k})^{d/2}}(v^{T}\nabla\phi_{i}\nabla\phi_{j}^{T}v+$
$\displaystyle\qquad\qquad\qquad\qquad(\nabla\phi_{i}^{T}v+\nabla\phi_{j}^{T}v)O(\left\|v\right\|^{2})+O(\left\|v\right\|^{4}))J(v)dv$
$\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{\mathcal{B}_{\epsilon_{k}}}\frac{e^{-\left\|v\right\|^{2}/4t_{k}}}{(4\pi
t_{k})^{d/2}}v^{T}\nabla\phi_{i}\nabla\phi_{j}^{T}vJ(v)dv+\frac{0+0+O(t_{k}^{2})}{2t_{k}}$
$\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{\mathcal{B}_{\epsilon_{k}}}\frac{e^{-\left\|v\right\|^{2}/4t_{k}}}{(4\pi
t_{k})^{d/2}}v^{T}\nabla\phi_{i}\nabla\phi_{j}^{T}vJ(v)dv.$ (83)
Substitution of $t_{k}=0$ leads to the indeterminate form $\frac{0}{0}$.
Therefore, we apply L’Hospital’s rule and then Leibniz integral rule to get,
$\displaystyle\widetilde{A}_{kij}$ $\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2}\int_{\mathcal{B}_{\epsilon_{k}}}\left(\frac{\left\|v\right\|^{2}}{4t_{k}^{2}}-\frac{d}{2t_{k}}\right)\frac{e^{-\left\|v\right\|^{2}/4t_{k}}}{(4\pi
t_{k})^{d/2}}v^{T}\nabla\phi_{i}\nabla\phi_{j}^{T}vJ(v)dv$
$\displaystyle=\operatorname{tr}\left(\frac{1}{2}\nabla\phi_{i}\nabla\phi_{j}^{T}\lim_{t_{k}\rightarrow
0}\int_{\mathcal{B}_{\epsilon_{k}}}\left(\frac{\left\|v\right\|^{2}}{4t_{k}^{2}}-\frac{d}{2t_{k}}\right)\frac{e^{-\left\|v\right\|^{2}/4t_{k}}}{(4\pi
t_{k})^{d/2}}vv^{T}J(v)dv\right)$
$\displaystyle=\operatorname{tr}\left(\frac{1}{2}\nabla\phi_{i}\nabla\phi_{j}^{T}\left(\lim_{t_{k}\rightarrow
0}\left(\frac{(12+4(d-1))t_{k}^{2}}{4t_{k}^{2}}-\frac{2t_{k}d}{2t_{k}}\right)I+O(t_{k})I\right)\right)$
$\displaystyle=\nabla\phi_{i}^{T}\nabla\phi_{j}.$ (84)
Finally, note that Eq. (82) is the same as the following equation with $y$
replaced by $\exp_{x_{k}}(v)$,
$\displaystyle\widetilde{A}_{kij}$ $\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{1}{2t_{k}}\int_{B_{\epsilon_{k}}(x_{k})}G(t_{k},x_{k},y)\Psi_{kij}(y)\omega_{g}(y).$
(85)
We used the above equation to estimate $\widetilde{A}_{kij}$ in Section 3.1. ∎
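The final limit in Eq. (84) can be sanity-checked numerically: in normal coordinates $G(t,x_{k},\cdot)$ reduces to a Gaussian with covariance $2tI$ and $J(v)\to I$, so $\frac{1}{2t}\,\mathbb{E}\big[(v^{T}\nabla\phi_{i})(v^{T}\nabla\phi_{j})\big]=\nabla\phi_{i}^{T}\nabla\phi_{j}$. A Monte Carlo illustration, with arbitrary stand-ins for the gradients:

```python
import numpy as np

# Monte Carlo sanity check of the limit in Eq. (84): sample v from the
# Gaussian with covariance 2tI that G(t, x_k, .) reduces to in normal
# coordinates, and compare (1/2t) E[(v.grad_i)(v.grad_j)] to grad_i.grad_j.
rng = np.random.default_rng(1)
d, t = 3, 1e-3
grad_i, grad_j = rng.normal(size=d), rng.normal(size=d)  # arbitrary stand-ins
v = rng.normal(scale=np.sqrt(2 * t), size=(1_000_000, d))
approx = ((v @ grad_i) * (v @ grad_j)).mean() / (2 * t)
exact = grad_i @ grad_j
```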
## Appendix B Second Proof of Theorem 2
An alternative proof is based on the Feynman-Kac formula [41, 42],
$\displaystyle A_{kij}$
$\displaystyle=[e^{-t_{k}\Delta_{g}}((\phi_{i}-\phi_{i}(x_{k}))(\phi_{j}-\phi_{j}(x_{k})))](x_{k}).$
(86)
where
$\displaystyle[e^{-t\Delta_{g}}f](x)=\sum_{i}e^{-\lambda_{i}t}\langle\phi_{i},f\rangle\phi_{i}(x)$
(87)
and therefore,
$\displaystyle\widetilde{A}_{kij}$ $\displaystyle=\lim_{t_{k}\rightarrow
0}\frac{A_{kij}}{2t_{k}}=\left.\frac{1}{2}\frac{\partial A_{kij}}{\partial
t_{k}}\right|_{t_{k}=0}$ (88)
$\displaystyle=\frac{-1}{2}\left\\{\Delta_{g}[(\phi_{i}-\phi_{i}(x_{k}))(\phi_{j}-\phi_{j}(x_{k}))](x_{k})\right\\}$
(89)
$\displaystyle=\frac{-1}{2}\left\\{0+0-2\nabla\phi_{i}(x_{k})^{T}\nabla\phi_{j}(x_{k})\right\\}$
(90) $\displaystyle=\nabla\phi_{i}(x_{k})^{T}\nabla\phi_{j}(x_{k})$ (91)
where we used the fact
$\Delta_{g}(f_{i}f_{j})=f_{j}\Delta_{g}f_{i}+f_{i}\Delta_{g}f_{j}-2\langle\nabla_{g}f_{i}(x),\nabla_{g}f_{j}(x)\rangle_{g}$.
Note that as per our convention $\nabla\phi_{i}(x_{k})=\nabla(\phi_{i}\ \circ\
\text{exp}_{x_{k}})(0)$ and therefore
$\langle\nabla_{g}\phi_{i}(x),\nabla_{g}\phi_{j}(x)\rangle_{g}=\nabla\phi_{i}(x_{k})^{T}\nabla\phi_{j}(x_{k})$.
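The product rule used above can be verified symbolically in flat $\mathbb{R}^{2}$, with the sign convention $\Delta=-\sum_{i}\partial_{i}^{2}$ implied by the semigroup $e^{-t\Delta_{g}}$; the test functions below are arbitrary:

```python
import sympy as sp

# Symbolic check of Delta(fg) = g*Delta(f) + f*Delta(g) - 2<grad f, grad g>
# in flat R^2, with Delta = -(d^2/dx^2 + d^2/dy^2) to match the convention
# of the semigroup e^{-t*Delta} above. f and g are arbitrary smooth functions.
x, y = sp.symbols("x y")
f = sp.sin(x) * sp.exp(y)
g = x**2 + x * y
lap = lambda h: -(sp.diff(h, x, 2) + sp.diff(h, y, 2))
grad_dot = sp.diff(f, x) * sp.diff(g, x) + sp.diff(f, y) * sp.diff(g, y)
residual = sp.simplify(lap(f * g) - (g * lap(f) + f * lap(g) - 2 * grad_dot))
```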
## Appendix C Rationale Behind the Choice of $t_{k}$ in Eq. (25)
Since $|\mathcal{M}|\leq 1$, we note that
$\displaystyle\epsilon_{k}\leq\Gamma(d/2+1)^{1/d}/\sqrt{\pi}$ (92)
where the maximum can be achieved when $\mathcal{M}$ is a $d$-dimensional ball
of unit volume. Then we take the limiting value of $t_{k}$ as in Eq. (25)
where chi2inv is the inverse cdf of the chi-squared distribution with $d$
degrees of freedom evaluated at $p$. Since the covariance matrix of
$G(t_{k},x,y)$ is $2t_{k}I$ (see Eq. (21)), the above value of $t_{k}$
ensures that a probability mass of $p$ lies in $B_{\epsilon_{k}}(x_{k})$. We take $p$
to be $0.99$ in our experiments. Also, using Eq. (92) and Eq. (25) we have
$\displaystyle
t_{k}\leq\frac{1}{2\pi}\frac{\Gamma(d/2+1)^{2/d}}{\text{chi2inv}(p,d)}\ll 1,\
\text{when }p=0.99.$ (93)
Using the above inequality with $p=0.99$, for $d=2,10,100$ and $1000$, the
upper bound on $t_{k}$ is $0.0172,0.018,0.0228$ and $0.0268$, respectively. Thus,
$t_{k}$ is indeed a small value close to $0$.
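Since $\left\|v\right\|^{2}/2t_{k}$ is $\chi^{2}_{d}$-distributed under a Gaussian with covariance $2t_{k}I$, the bound of Eq. (93) can be reproduced as follows; this is a sketch reconstructed from Eqs. (92) and (93), as the exact form of Eq. (25) is not shown in this excerpt:

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import chi2

# Upper bound on t_k from Eq. (93): epsilon_k is at most the radius of the
# d-dimensional ball of unit volume, Gamma(d/2+1)**(1/d)/sqrt(pi), and putting
# p probability mass in the epsilon_k-ball gives t_k = eps_k**2/(2*chi2inv(p,d)).
def t_k_upper_bound(d, p=0.99):
    return gamma(d / 2 + 1) ** (2 / d) / (2 * np.pi * chi2.ppf(p, d))
```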
## Appendix D Computation of $(s_{m},p_{s_{m}})_{m=1}^{M}$ in Algo. 5
Algo. 5 aligns the intermediate views in a sequence. The computation of the
sequences $(s_{m},p_{s_{m}})_{m=1}^{M}$ is motivated by the necessary and
sufficient conditions for a unique solution to the standard orthogonal
Procrustes problem [39]. We start by a brief review of a variant of the
orthogonal Procrustes problem and then explain how these sequences are
computed.
### D.1 A Variant of Orthogonal Procrustes Problem
Given two matrices $A$ and $B$ of the same size with $d$ columns, one asks for
an orthogonal matrix $T$ of size $d\times d$ and a $d$-dimensional column
vector $v$ which most closely align $A$ to $B$, that is,
$\displaystyle
T,v=\mathop{\mathrm{argmin}}\limits_{\Omega,\omega}\left\|A\Omega+\mathbf{1}_{n}\omega^{T}-B\right\|^{2}_{F}\text{
such that }\Omega^{T}\Omega=I.$ (94)
Here $\mathbf{1}_{n}$ is the $n$-dimensional column vector containing ones.
Equating the derivative of the objective with respect to $\omega$ to zero, we
obtain the following condition for $\omega$,
$\displaystyle\omega=\frac{1}{n}(B-A\Omega)^{T}\mathbf{1}_{n}.$ (95)
Substituting this back in Eq. (94), we reduce the above problem to the
standard orthogonal Procrustes problem,
$\displaystyle
T=\mathop{\mathrm{argmin}}\limits_{\Omega}\left\|\overline{A}\Omega-\overline{B}\right\|_{F}^{2}$
(96)
where
$\displaystyle\overline{X}=\left(I-\frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{T}\right)X$
(97)
for any matrix $X$. This is equivalent to subtracting the mean of the rows in
$X$ from each row of $X$.
As proved in [39], the above problem, and therefore the variant, has a unique
solution if and only if the square matrix $\overline{A}^{T}\overline{B}$ has
full rank $d$. Denote by $\sigma_{d}(X)$ the smallest singular value of the
$d\times d$ matrix $X$. Then $\overline{A}^{T}\overline{B}$ has full rank if
and only if $\sigma_{d}(\overline{A}^{T}\overline{B})$ is non-zero; otherwise
there exist multiple $T$ which minimize Eq. (94).
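The variant admits the classical closed-form solution via the SVD of $\overline{A}^{T}\overline{B}$; a sketch (the function name is ours):

```python
import numpy as np

# Solve Eq. (94): center A and B, solve the standard orthogonal Procrustes
# problem T = U V^T where U S V^T = Abar^T Bbar, then recover the optimal
# translation as the row-mean of B - A T (the stationarity condition for
# omega). The smallest singular value s[-1] is the ambiguity measure
# discussed above: a unique solution requires s[-1] > 0.
def procrustes(A, B):
    Abar = A - A.mean(axis=0)
    Bbar = B - B.mean(axis=0)
    U, s, Vt = np.linalg.svd(Abar.T @ Bbar)
    T = U @ Vt
    v = (B - A @ T).mean(axis=0)
    return T, v, s[-1]
```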
### D.2 Computation of $(s_{m},p_{s_{m}})_{m=1}^{M}$
Here, $s_{m}$ corresponds to the $s_{m}$th intermediate view and $p_{s_{m}}$
corresponds to its parent view. The first view in the sequence corresponds to
the largest cluster and it has no parent, that is,
$\displaystyle
s_{1}=\mathop{\mathrm{argmax}}\limits_{m=1}^{M}|\mathcal{C}_{m}|\text{ and
}p_{s_{1}}=\text{none}.$ (98)
For convenience, denote $s_{m}$ by $s$, $p_{s_{m}}$ by $p$, and
$\widetilde{\Phi}^{g}_{m}(\widetilde{U}_{mm^{\prime}})$ by $V_{mm^{\prime}}$.
We choose $s$ and $p$ so that the view $V_{sp}$ can be aligned with the view
$V_{ps}$ without any ambiguity. In other words, $s$ and $p$ are chosen so that
there is a unique solution to the above variant of the orthogonal Procrustes
problem (see Eq. (94)) with $A$ and $B$ replaced by $V_{sp}$ and $V_{ps}$,
respectively. Therefore, an ambiguity (non-uniqueness) would arise when
$\sigma_{d}(\overline{V}_{sp}^{T}\overline{V}_{ps})$ is zero. We quantify the
ambiguity in aligning arbitrary $m$th and the $m^{\prime}$th intermediate
views on their overlap, that is, $V_{mm^{\prime}}$ and $V_{m^{\prime}m}$, by
$\displaystyle
W_{mm^{\prime}}=\sigma_{d}(\overline{V}_{mm^{\prime}}^{T}\overline{V}_{m^{\prime}m}).$
(99)
Note that $W_{mm^{\prime}}=W_{m^{\prime}m}$. A value of $W_{mm^{\prime}}$
close to zero means high ambiguity in the alignment of the $m$th and
$m^{\prime}$th views. By default, if there is no overlap between the $m$th and
$m^{\prime}$th views then $W_{mm^{\prime}}=W_{m^{\prime}m}=0$.
Finally, we compute the sequences $(s_{m},p_{s_{m}})_{m=2}^{M}$ so that
$\sum_{m=2}^{M}W_{s_{m}p_{s_{m}}}$ is maximized and therefore the net
ambiguity is minimized. This is equivalent to obtaining a maximum spanning
tree $T$ rooted at $s_{1}$, of the graph with $M$ nodes and $W$ as the
adjacency matrix. Then $(s_{m})_{m=2}^{M}$ is the sequence in which a breadth-first
search starting from $s_{1}$ visits the nodes in $T$, and $p_{s_{m}}$ is
the parent of the $s_{m}$th node in $T$. Thus,
$\displaystyle(s_{m})_{m=2}^{M}=\text{Breadth-First-Search}(T,s_{1})\text{ and
}p_{s_{m}}=\text{parent of }s_{m}\text{ in }T.$ (100)
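The procedure of Eqs. (98) and (100) can be sketched as follows. This is an illustrative implementation (Prim's algorithm for the maximum spanning tree, then a breadth-first traversal), not the authors' code; it assumes the ambiguity graph $W$ is connected, with zero entries meaning weight-zero edges.

```python
import numpy as np

def view_sequence(W, cluster_sizes):
    """Order the M intermediate views for alignment (Eqs. 98 and 100):
    root the sequence at the largest cluster, grow a maximum spanning tree
    of the ambiguity graph W (Prim's algorithm), and read off BFS order
    and parents. Returns (order, parent) with parent[root] = None."""
    M = len(cluster_sizes)
    s1 = int(np.argmax(cluster_sizes))        # Eq. (98)
    in_tree = {s1}
    children = {m: [] for m in range(M)}
    parent = {s1: None}
    # Prim's algorithm: repeatedly add the heaviest edge crossing the cut,
    # which yields a *maximum* spanning tree of W.
    while len(in_tree) < M:
        _, i, j = max(((W[i][j], i, j) for i in in_tree for j in range(M)
                       if j not in in_tree), key=lambda t: t[0])
        in_tree.add(j)
        parent[j] = i
        children[i].append(j)
    # Breadth-first traversal of the tree from s1 (Eq. 100).
    order, frontier = [s1], [s1]
    while frontier:
        nxt = [c for f in frontier for c in sorted(children[f])]
        order += nxt
        frontier = nxt
    return order, parent
```

Since Prim's algorithm is started from $s_{1}$, the recorded attachment edges already give the parent of each node in the tree rooted at $s_{1}$.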
## Appendix E Computation of $\widetilde{U}^{g}_{mm^{\prime}}$ in Eq. (58)
Recall that $\widetilde{U}^{g}_{mm^{\prime}}$ is the overlap between the $m$th
and $m^{\prime}$th intermediate views in the embedding space. The idea behind
its computation is as follows. We first compute the discrete balls $U^{g}_{k}$
around each point $y_{k}$ in the embedding space. These are the analogs of
$U_{k}$ around $x_{k}$ (see Eq. (26)) but in the embedding space, and are given
by
$\displaystyle U^{g}_{k}=\\{y_{k^{\prime}}\ |\
d_{e}(y_{k},y_{k^{\prime}})<\epsilon^{g}_{k}\\}.$ (101)
An important point to note here is that while in the ambient space, we used
$\epsilon_{k}$, the distance to the $k_{\text{lv}}$th nearest neighbor, to
define a discrete ball around $x_{k}$, in the embedding space, we must relax
$\epsilon_{k}$ to account for a possibly increased separation between the
embedded points. This increase in separation is caused by the distorted
parameterizations. Therefore, to compute discrete balls in the embedding
space, we use $\epsilon^{g}_{k}$ in Eq. (101), which is the distance to the
$\nu k_{\text{lv}}$th nearest neighbor of $y_{k}$. In all of our experiments,
we take $\nu$ to be $3$.
Recall that $c_{k}$ is the cluster label for the point $x_{k}$. Using the same
label $c_{k}$ for the point $y_{k}$, we construct secondary intermediate views
$\widetilde{U}^{g}_{m}$ in the embedding space,
$\displaystyle\widetilde{U}^{g}_{m}=\cup_{c_{k}=m}U^{g}_{k}.$ (102)
Finally, as in the computation of $\widetilde{U}_{mm^{\prime}}$ in Eq. (53),
we compute $\widetilde{U}^{g}_{mm^{\prime}}$ as the intersection of
$\widetilde{U}^{g}_{m}$ and $\widetilde{U}^{g}_{m^{\prime}}$,
$\displaystyle\widetilde{U}^{g}_{mm^{\prime}}=\widetilde{U}^{g}_{m}\cap\widetilde{U}^{g}_{m^{\prime}}.$
(103)
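A minimal sketch of Eqs. (101)–(103), using brute-force dense pairwise distances (so it assumes a small point set); the sorted-distance column indexing assumes distinct points, so column $0$ is the point itself and column $j$ is the $j$th nearest neighbor.

```python
import numpy as np

def embedding_overlaps(Y, labels, k_lv, nu=3):
    """Discrete balls around each embedded point y_k with radius equal to the
    distance to its (nu * k_lv)-th nearest neighbor (Eq. 101), secondary views
    as unions of balls per cluster (Eq. 102), and pairwise overlaps as
    intersections of views (Eq. 103)."""
    n = Y.shape[0]
    D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    # eps_g[k] = distance from y_k to its (nu * k_lv)-th nearest neighbor;
    # sorted column 0 is the point itself.
    eps_g = np.sort(D, axis=1)[:, nu * k_lv]
    balls = [set(np.flatnonzero(D[k] < eps_g[k])) for k in range(n)]
    M = labels.max() + 1
    views = [set().union(*(balls[k] for k in np.flatnonzero(labels == m)))
             for m in range(M)]
    overlaps = {(m, mp): views[m] & views[mp]
                for m in range(M) for mp in range(M) if m != mp}
    return views, overlaps
```

On a toy 1-D example with two clusters, the overlap consists of the points reachable from balls of both clusters.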
## Appendix F Comparison with the Alignment Procedure in LTSA
In the following we use the notation developed in this work. LTSA [49]
computes the global embedding $Y_{m}$ of the $m$th intermediate view
$\widetilde{U}_{m}$ so that it respects the local geometry determined by
$\widetilde{\Phi}_{m}(\widetilde{U}_{m})$. That is,
$\displaystyle
Y_{m}=\widetilde{\Phi}_{m}(\widetilde{U}_{m})L_{m}+e_{m}v_{m}^{T}+E_{m}.$
(104)
Here, $Y=[y_{1},y_{2},\ldots,y_{n}]^{T}$ where $y_{i}$ is a column vector of
length $d$ representing the global embedding of $x_{i}$, $Y_{m}$ is a
submatrix of $Y$ of size $|\widetilde{U}_{m}|\times d$ representing the global
embeddings of the points in $\widetilde{U}_{m}$, and
$\widetilde{\Phi}_{m}(\widetilde{U}_{m})$ is a matrix of size
$|\widetilde{U}_{m}|\times d$ representing the $m$th intermediate view in the
embedding space (or in the notation of LTSA, the local embedding of
$\widetilde{U}_{m}$). $e_{m}$ is a column vector of length
$|\widetilde{U}_{m}|$ containing $1$s. The intermediate view
$\widetilde{\Phi}_{m}(\widetilde{U}_{m})$ is transformed into the final
embedding $Y_{m}$ through an affine matrix $L_{m}$ of size $d\times d$ and a
translation vector $v_{m}$ of length $d$. The reconstruction error is captured
in the matrix $E_{m}$. The total reconstruction error is given by,
$\displaystyle\mathcal{L}^{\prime}(Y,(L_{m},v_{m})_{m=1}^{M})$
$\displaystyle=\sum_{m=1}^{M}\left\|Y_{m}-(\widetilde{\Phi}_{m}(\widetilde{U}_{m})L_{m}+e_{m}v_{m}^{T})\right\|^{2}_{F}.$
(105)
LTSA estimates $Y$ and $(L_{m},v_{m})_{m=1}^{M}$ by minimizing the above
objective with the constraint $Y^{T}Y=I$. This constraint is the mathematical
realization of their assumption that the points are uniformly distributed in
the embedding space. Due to this, the obtained global embedding $Y$ does not
capture the aspect ratio of the underlying manifold. Also note that due to the
overlapping nature of the views $\widetilde{U}_{m}$, the terms in the above
summation are dependent through $Y_{m}$’s.
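To make the per-view term of Eq. (105) concrete, the following sketch fits the best unconstrained affine map $(L_{m},v_{m})$ for a single view by least squares and returns the residual $\|E_{m}\|_{F}^{2}$. This only illustrates one summand of the objective, not LTSA's global eigenvector solution; the function name is illustrative.

```python
import numpy as np

def ltsa_view_error(Y_m, Phi_m):
    """Residual ||E_m||_F^2 of Eq. (104) for one view: best affine map
    (L_m, v_m) from the intermediate view Phi_m to its global embedding Y_m.
    Column-centering both matrices eliminates the translation v_m, leaving a
    plain least-squares problem for L_m."""
    Yc = Y_m - Y_m.mean(axis=0)
    Pc = Phi_m - Phi_m.mean(axis=0)
    L_m, _, *_ = np.linalg.lstsq(Pc, Yc, rcond=None)
    E = Yc - Pc @ L_m
    return L_m, float((E ** 2).sum())
```

If the view is an exact affine image of the intermediate view, the residual vanishes and the fitted $L_{m}$ recovers the true map.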
Setting aside our adaptation of GPA to tear closed and non-orientable
manifolds, our alignment procedure minimizes the error $\mathcal{L}$ in Eq.
(54). By introducing the variables $Y$ and $E_{m}$ as in Eq. (104), one can
deduce that $\mathcal{L}$ is a lower bound of $\mathcal{L}^{\prime}$ in Eq.
(105). The main difference in the two alignment procedures is that, while in
LTSA, $Y$ is constrained and the transformations are not, in our approach, we
restrict the transformations to be rigid. That is, we constrained $L_{m}$ to
be $b_{m}T_{m}$ where $b_{m}$ is a fixed positive scalar as computed in Eq.
(55) and $T_{m}$ is restricted to be an orthogonal matrix, while there is no
constraint on $Y$.
From a practical standpoint, when the tearing of manifolds is not needed, one
can use either procedure to align the intermediate views and obtain a global
embedding. However, as shown in Figure 25, the embeddings produced by
aligning our intermediate views using the alignment procedure in LTSA are
visually incorrect. The high-distortion views near the boundary are the likely
cause (see Figure 7). Since our alignment procedure works well on the same
views, as shown in Section 6.2, this suggests that, compared to LTSA, our
alignment procedure is more robust to high-distortion views. For similar
reasons, one would expect LTSA to be less robust to noisy data. This is
indeed true, as depicted in Figure 17.
Rectangle | Barbell | Square with two holes | Sphere with a hole | Swiss Roll with a hole | Noisy Swiss Roll
---|---|---|---|---|---
Figure 25: Embeddings obtained by using the global alignment procedure in LTSA
to align the intermediate views in the embedding space. These views are the
result of the clustering step in our algorithm.
One advantage of LTSA is its efficiency: it reduces the optimal $Y$ to the
eigenvectors of a certain matrix, leading to a fast algorithm. Our constraint
does not allow such a simplification, and we therefore developed an iterative
procedure by adapting GPA [18, 20, 43]. This procedure is slower than that of
LTSA. We aim to improve the run-time in subsequent versions of our code.
## Appendix G Hyperparameters
Input Algorithm | Hyperparameters | Rectangle | Barbell | Square with two holes | Sphere with a hole | Swissroll with a hole | Noisy swissroll | Sphere | Curved torus | Flat torus | Möbius strip | Klein Bottle | 42-dim signal strength data
---|---|---|---|---|---|---|---|---|---|---|---|---|---
LDLE | $\eta_{\text{min}}$ | 5 | 5 | 10 | 5 | 20 | 15 | 5 | 18 | 10 | 10 | 5 | 5
LTSA | n_neighbors | 75 | 25 | 10 | 5 | 5 | 50 | 5 | 25 | 25 | 75 | 25 | 50
UMAP | n_neighbors | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 50
min_dist | 0.1 | 0.05 | 0.5 | 0.5 | 0.25 | 0.05 | 0.5 | 0.25 | 0.5 | 0.05 | 0.5 | 0.25
t-SNE | perplexity | 50 | 40 | 50 | 50 | 50 | 60 | 60 | 60 | 60 | 60 | 50 | 60
exaggeration | 4 | 6 | 6 | 4 | 4 | 4 | 4 | 4 | 6 | 4 | 6 | 4
Laplacian Eigenmaps | $k_{\text{nn}}$ | - | - | 16 | - | - | - | - | - | - | - | - | 16
$k_{\text{tune}}$ | - | - | 7 | - | - | - | - | - | - | - | - | 7
Table 2: Hyperparameters used in the algorithms for the examples in Sections 6.2, 6.3, 6.4 and 6.5.1. For Laplacian eigenmaps, in all the examples except for the square with two holes, all the searched values of the hyperparameters result in similar plots.

Algorithm | Hyperparameters | $\sigma=0.01$ | $\sigma=0.015$ | $\sigma=0.02$
---|---|---|---|---
LDLE | $\eta_{\text{min}}$ | 5 | 15 | 10
LTSA | n_neighbors | 50 | 75 | 100
UMAP | n_neighbors | 50 | 50 | 100
min_dist | 0.5 | 0.25 | 0.5
t-SNE | perplexity | 60 | 50 | 60
exaggeration | 6 | 6 | 6
Table 3: Hyperparameters used in the algorithms for the Swiss Roll with increasing Gaussian noise (see Figure 17).

Algorithm | Hyperparameters | RES $=30$ | RES $=15$ | RES $=12$ | RES $=10$
---|---|---|---|---|---
LDLE | $\eta_{\text{min}}$ | 3 | 3 | 3 | 3
$k_{\text{tune}}$ | 7 | 2 | 2 | 2
$N$ | 100 | 25 | 25 | 25
$k_{\text{lv}}$ | 7 | 4 | 4 | 4
LTSA | n_neighbors | 5 | 4 | 5 | 10
UMAP | n_neighbors | 25 | 25 | 10 | 5
min_dist | 0.01 | 0.01 | 0.5 | 0.5
t-SNE | perplexity | 10 | 5 | 5 | 5
exaggeration | 4 | 2 | 4 | 2
Table 4: Hyperparameters used in the algorithms for the Swiss Roll with increasing sparsity (see Figure 18).

Method | Hyperparameters
---|---
| face image data | Yoda-bulldog data
LDLE | $N=25$, $k_{\text{lv}}=12$, $\tau_{s}=5$, $\delta_{s}=0.25$ for all $s\in\\{1,2\\}$, $\eta_{\text{min}}=4$, $\text{to\\_tear}=$ False | $N=25$, $\tau_{s}=10$, $\delta_{s}=0.5$ for all $s\in\\{1,2\\}$, $\eta_{\text{min}}=10$
LTSA | $\text{n\\_neighbors}=10$ | $\text{n\\_neighbors}=10$
UMAP | $\text{n\\_neighbors}=50$, $\text{min\\_dist}=0.01$ | $\text{n\\_neighbors}=50$, $\text{min\\_dist}=0.01$
t-SNE | $\text{perplexity}=60$, $\text{early\\_exaggeration}=2$ | $\text{perplexity}=60$, $\text{early\\_exaggeration}=2$
Table 5: Hyperparameters used in the algorithms for the face image data [44]
(see Figure 22) and the Yoda-bulldog dataset [28] (see Figure 23).
## Appendix H Supplementary Figures
Figure 26: Comparison of different techniques to estimate
$\widetilde{A}_{kij}$ on a Swiss Roll with no noise, where $i=5$ and $j=7$.
(first row) Analytical eigenfunctions and the obtained discrete eigenvectors
are shown. (second row) Analytical value of $|\widetilde{A}_{kij}|$ is shown.
Note that LDLE depends on the absolute values of $\widetilde{A}_{kij}$. (third
row) Estimates of $|\widetilde{A}_{kij}|$ obtained using the Local Linear
Regression based approach [9], the finite sum approximation and Feynman-Kac
formula based approaches described in Section 3.2, and a variant of the
latter which uses a rank-$100$ approximation of the graph Laplacian in
Eq. (29). (fourth row) Absolute difference between the estimates and the
analytical value. The LLR, finite sum approximation, and Feynman-Kac formula
based approaches seem to perform slightly better.
Figure 27: Comparison of different techniques to estimate
$\widetilde{A}_{kij}$ on a Swiss Roll with no noise, where $i=5$ and $j=23$.
(first row) Analytical eigenfunctions and the obtained discrete eigenvectors
are shown. (second row) Analytical value of $|\widetilde{A}_{kij}|$ is shown.
Note that LDLE depends on the absolute values of $\widetilde{A}_{kij}$. (third
row) Estimates of $|\widetilde{A}_{kij}|$ obtained using the Local Linear
Regression based approach [9], the finite sum approximation and Feynman-Kac
formula based approaches described in Section 3.2, and a variant of the
latter which uses a rank-$100$ approximation of the graph Laplacian in
Eq. (29). (fourth row) Absolute difference between the estimates and the
analytical value. The LLR, finite sum approximation, and Feynman-Kac formula
based approaches seem to perform slightly better.
Figure 28: Comparison of different techniques to estimate
$\widetilde{A}_{kij}$ on a Swiss Roll with Gaussian noise of variance
$10^{-4}$, where $i=5$ and $j=7$. (first row) Analytical eigenfunctions
obtained for the noiseless version of the Swiss Roll, and the obtained
discrete eigenvectors are shown. (second row) Analytical value of
$|\widetilde{A}_{kij}|$ is shown. Note that LDLE depends on the absolute
values of $\widetilde{A}_{kij}$. (third row) Estimates of
$|\widetilde{A}_{kij}|$ obtained using the Local Linear Regression based
approach [9], the finite sum approximation and Feynman-Kac formula based
approaches described in Section 3.2, and a variant of the latter which uses a
rank-$100$ approximation of the graph Laplacian in Eq. (29). (fourth
row) Absolute difference between the estimates and the analytical value. The
Feynman-Kac formula based approach which uses a low-rank approximation of $L$
seems to perform the best, while the LLR based approach produces high error.
Figure 29: Comparison of different techniques to estimate
$\widetilde{A}_{kij}$ on a Swiss Roll with Gaussian noise of variance
$10^{-4}$, where $i=5$ and $j=23$. (first row) Analytical eigenfunctions
obtained for the noiseless version of the Swiss Roll, and the obtained
discrete eigenvectors are shown. (second row) Analytical value of
$|\widetilde{A}_{kij}|$ is shown. Note that LDLE depends on the absolute
values of $\widetilde{A}_{kij}$. (third row) Estimates of
$|\widetilde{A}_{kij}|$ obtained using the Local Linear Regression based
approach [9], the finite sum approximation and Feynman-Kac formula based
approaches described in Section 3.2, and a variant of the latter which uses a
rank-$100$ approximation of the graph Laplacian in Eq. (29). (fourth
row) Absolute difference between the estimates and the analytical value. The
Feynman-Kac formula based approach which uses a low-rank approximation of $L$
seems to perform the best, while the errors of the other three approaches are
somewhat similar.
Figure 30: (first column) Input square grid is shown. The points $x_{k}$ are colored by the distortion $\zeta_{kk}$ of the obtained local parameterizations $\Phi_{k}$ on the neighborhood $U_{k}$ surrounding them. A local view $U_{k_{0}}$ around $x_{k_{0}}$ for a fixed $k_{0}$ is also shown in black. (second column) The corresponding local view in the embedding space $\Phi_{k_{0}}(U_{k_{0}})$ is shown in black. Although of no significance to our algorithm, for visualization purposes, the embedding of the square due to $\Phi_{k_{0}}$, $\Phi_{k_{0}}(M)$, is shown in red. (third and fourth columns) The eigenvectors $\bm{\phi}_{i_{1}}$ and $\bm{\phi}_{i_{2}}$ chosen for the construction of $\Phi_{k_{0}}$ are shown. Points in $U_{k_{0}}$ are again colored in black. Note that the gradients of these eigenvectors are close to being orthogonal in the vicinity of $U_{k_{0}}$ and, in particular, at $x_{k_{0}}$.

| LDLE with arrows | Derived cut and paste diagrams
---|---|---
Sphere | |
Sphere with a hole | |
Curved torus | |
Flat torus | |
Möbius strip | |
Klein bottle | |
Figure 31: (Left) LDLE embedding with arrows drawn by tracing the colored
boundary. (Right) Derived cut and paste diagrams to prove the correctness of
the embedding. Pieces of the boundary represented by filled arrows of the same
color are to be stitched together. Pieces of the boundary represented by black
dashed lines are not to be stitched. Dotted lines and shallow arrows represent
cut and paste instructions, respectively.

Figure 32: In Figure 18, for the case when RES $=10$, certain points on
opposite sides of the gap in the Swiss Roll are neighbors in the ambient
space. These points are shown in red.
## References
* [1] Anil Aswani, Peter Bickel, Claire Tomlin, et al. Regression on manifolds: Estimation of the exterior derivative. The Annals of Statistics, 39(1):48–81, 2011.
* [2] Bastian Bechtold, Patrick Fletcher, Seamus Holden, and Srinivas Gorur-Shandilya. bastibe Violinplot-Matlab: A Good Starting Point. github, 2021.
* [3] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373–1396, 2003.
* [4] Mikhail Belkin and Partha Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 74(8):1289–1308, 2008. Learning Theory 2005.
* [5] Tyrus Berry and Timothy Sauer. Density estimation on manifolds with boundary. Computational Statistics & Data Analysis, 107:1–17, 2017.
* [6] Yochai Blau and Tomer Michaeli. Non-Redundant Spectral Dimensionality Reduction. CoRR, abs/1612.03412, 2016.
* [7] Yaiza Canzani. Analysis on manifolds via the Laplacian. Lecture notes, available at http://www.math.harvard.edu/canzani/docs/Laplacian.pdf, 2013.
* [8] Yu-Chia Chen and Marina Meila. Selecting the independent coordinates of manifolds with large aspect ratios. In Advances in Neural Information Processing Systems, volume 32, pages 1088–1097. Curran Associates, Inc., 2019.
* [9] Ming-yen Cheng and Hau-tieng Wu. Local Linear Regression on Manifolds and Its Geometric Interpretation. Journal of the American Statistical Association, 108(504):1421–1434, 2013.
* [10] Xiuyuan Cheng and Gal Mishne. Spectral embedding norm: Looking deep into the spectrum of the graph laplacian. SIAM Journal on Imaging Sciences, 13(2):1015–1048, 2020.
* [11] Xiuyuan Cheng, Gal Mishne, and Stefan Steinerberger. The geometry of nodal sets and outlier detection. Journal of Number Theory, 185:48–64, 2018.
* [12] Xiuyuan Cheng and Hau-Tieng Wu. Convergence of graph laplacian with knn self-tuned kernels. arXiv preprint arXiv:2011.01479, 2020.
* [13] Xiuyuan Cheng and Nan Wu. Eigen-convergence of Gaussian kernelized graph Laplacian by manifold heat interpolation. arXiv preprint arXiv:2101.09875, 2021.
* [14] Leena Chennuru Vankadara and Ulrike von Luxburg. Measures of distortion for machine learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
* [15] A. Cloninger and W. Czaja. Eigenvector localization on data-dependent graphs. In 2015 International Conference on Sampling Theory and Applications (SampTA), pages 608–612, 2015.
* [16] Alexander Cloninger and Stefan Steinerberger. On the Dual Geometry of Laplacian Eigenfunctions. Experimental Mathematics, 0(0):1–11, 2018.
* [17] Ronald R Coifman and Stéphane Lafon. Diffusion maps. Applied and computational harmonic analysis, 21(1):5–30, 2006.
* [18] Fabio Crosilla and Alberto Beinat. Use of generalised Procrustes analysis for the photogrammetric block adjustment by independent models. ISPRS Journal of Photogrammetry and Remote Sensing, 56:195–209, 04 2002.
* [19] Carmeline J. Dsilva, Ronen Talmon, Ronald R. Coifman, and Ioannis G. Kevrekidis. Parsimonious representation of nonlinear dynamical systems through manifold learning: A chemotaxis case study. Applied and Computational Harmonic Analysis, 44(3):759–773, 2018.
* [20] John C Gower. Generalized procrustes analysis. Psychometrika, 40(1):33–51, 1975.
* [21] John C Gower, Garmt B Dijksterhuis, et al. Procrustes problems, volume 30. Oxford University Press on Demand, 2004.
* [22] Matthias Hein, Jean-Yves Audibert, and Ulrike von Luxburg. Graph laplacians and their convergence on random neighborhood graphs. Journal of Machine Learning Research, 8(6), 2007.
* [23] Jerry L Hintze and Ray D Nelson. Violin plots: a box plot-density trace synergism. The American Statistician, 52(2):181–184, 1998.
* [24] Ian T Jolliffe and Jorge Cadima. Principal component analysis: a review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2065):20150202, 2016.
* [25] Peter W Jones, Mauro Maggioni, and Raanan Schul. Universal local parametrizations via heat kernels and eigenfunctions of the Laplacian. arXiv preprint arXiv:0709.1975, 2007.
* [26] Dmitry Kobak and George C Linderman. Initialization is critical for preserving global data structure in both t-SNE and UMAP. Nature biotechnology, 39(2):156–157, 2021.
* [27] S.S. Lafon. Diffusion Maps and Geometric Harmonics. PhD Thesis, page 45, 2004.
* [28] Roy R Lederman and Ronen Talmon. Learning the geometry of common latent variables using alternating-diffusion. Applied and Computational Harmonic Analysis, 44(3):509–536, 2018.
* [29] Didong Li and David B Dunson. Geodesic distance estimation with spherelets. arXiv preprint arXiv:1907.00296, 2019.
* [30] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of machine learning research, 9(Nov):2579–2605, 2008.
* [31] MATLAB. Procrustes analysis, Statistics and Machine Learning Toolbox. The MathWorks, Natick, MA, USA, 2018.
* [32] Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.
* [33] G. Mishne and I. Cohen. Multiscale anomaly detection using diffusion maps. IEEE Journal of Selected Topics in Signal Processing, 7(1):111–123, 2013.
* [34] Gal Mishne, Ronald R. Coifman, Maria Lavzin, and Jackie Schiller. Automated cellular structure extraction in biological images with applications to calcium imaging data. bioRxiv, 2018.
* [35] Gal Mishne, Uri Shaham, Alexander Cloninger, and Israel Cohen. Diffusion nets. Applied and Computational Harmonic Analysis, 47(2):259 – 285, 2019\.
* [36] Erez Peterfreund, Ofir Lindenbaum, Felix Dietrich, Tom Bertalan, Matan Gavish, Ioannis G Kevrekidis, and Ronald R Coifman. LOCA: LOcal Conformal Autoencoder for standardized data coordinates. arXiv preprint arXiv:2004.07234, 2020.
* [37] Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. science, 290(5500):2323–2326, 2000.
* [38] N. Saito. How Can We Naturally Order and Organize Graph Laplacian Eigenvectors? In 2018 IEEE Statistical Signal Processing Workshop (SSP), pages 483–487, 2018.
* [39] Peter H Schönemann. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1–10, 1966.
* [40] Amit Singer and Hau-tieng Wu. Orientability and diffusion maps. Applied and Computational Harmonic Analysis, 31(1):44–58, 2011.
* [41] Stefan Steinerberger. Lower Bounds on Nodal Sets of Eigenfunctions via the Heat Flow. Communications in Partial Differential Equations, 39(12):2240–2261, 2014.
* [42] Stefan Steinerberger. On the spectral resolution of products of Laplacian eigenfunctions. arXiv preprint arXiv:1711.09826, 2017.
* [43] Jos MF Ten Berge. Orthogonal Procrustes rotation for two or more matrices. Psychometrika, 42(2):267–276, 1977.
* [44] Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. science, 290(5500):2319–2323, 2000.
* [45] Nicolás García Trillos, Moritz Gerlach, Matthias Hein, and Dejan Slepčev. Error estimates for spectral convergence of the graph Laplacian on random geometric graphs toward the Laplace–Beltrami operator. Foundations of Computational Mathematics, 20(4):827–887, 2020.
* [46] Nicolas Garcia Trillos, Pengfei He, and Chenghui Li. Large sample spectral analysis of graph-based multi-manifold clustering, 2021.
* [47] Lihi Zelnik-Manor and Pietro Perona. Self-tuning spectral clustering. In Advances in neural information processing systems, pages 1601–1608, 2005.
* [48] Sharon Zhang, Amit Moscovich, and Amit Singer. Product Manifold Learning. In Arindam Banerjee and Kenji Fukumizu, editors, Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, pages 3241–3249. PMLR, 13–15 Apr 2021.
* [49] Zhenyue Zhang and Hongyuan Zha. Nonlinear dimension reduction via local tangent space alignment. In International Conference on Intelligent Data Engineering and Automated Learning, pages 477–481. Springer, 2003.
# Supervised Momentum Contrastive Learning for Few-Shot Classification
Orchid Majumder correspondence to<EMAIL_ADDRESS>Amazon Web Services
Avinash Ravichandran Amazon Web Services Subhransu Maji UMass Amherst
Alessandro Achille Amazon Web Services Marzia Polito Amazon Web Services
Stefano Soatto Amazon Web Services UCLA
###### Abstract
Few-shot learning aims to transfer information from one task to enable
generalization on novel tasks given a few examples. This information is
present both in the domain and the class labels. In this work we investigate
the complementary roles of these two sources of information by combining
instance-discriminative contrastive learning and supervised learning in a
single framework called Supervised Momentum Contrastive learning (SupMoCo).
Our approach avoids a problem observed in supervised learning where
information in images not relevant to the task is discarded, which hampers
their generalization to novel tasks. We show that (self-supervised)
contrastive learning and supervised learning are mutually beneficial, leading
to a new state-of-the-art on the Meta-Dataset [47] — a recently introduced
benchmark for few-shot learning. Our method is based on a simple modification
of MoCo [19] and scales better than prior work on combining supervised and
self-supervised learning. This allows us to easily combine data from multiple
domains leading to further improvements.
## 1 Introduction
A few-shot learning system should learn a representation of the data that is
invariant to common factors of variations of objects (e.g., change of pose,
deformations, color) while still representing features that allow to
discriminate between different classes. Factoring out all the nuisance factors
reduces the effective dimensionality of the hypothesis space and allows to
learn good classifiers using only a few samples. For this reason, much of the
few-shot literature hinges on the intrinsic ability of deep neural networks
(DNNs) to learn invariant representations when trained in a supervised manner.
However, DNNs are often too eager to learn invariances. In what is known as
“supervision collapse” [11], a DNN can learn to encode only the features that
are useful to discriminate between the training classes, and as a result it is
not sufficiently expressive to discriminate between new, unseen classes, which
is what eventually matters in few-shot learning. The question is then: How can
we learn a representation that is invariant to common factors while
maintaining discriminativeness for unseen classes?
In this paper we introduce Supervised Momentum Contrastive learning (SupMoCo).
SupMoCo (Fig. 1) augments standard self-supervised learning to account for
class labels, so that the network learns the intra-class variability through
supervision while at the same time retaining distinctive features of the
individual images through the self-supervised components, thus avoiding
supervision collapse. On the algorithmic side, SupMoCo makes extensive use of
the efficient queue based architecture of MoCo, which avoids memory bottleneck
and leads to a greater diversity of classes in the contrastive objective. We
found this to be critical for good performance, and it allows SupMoCo to
achieve a significantly better performance than other comparable algorithms
([24, 11]) in the literature. On the popular Meta-Dataset [47] few-shot
benchmark, SupMoCo achieves a new state-of-the-art (Tab. 1, 2) and we observe
an average $4\%$ accuracy increase (Tab. 4, 5) over the closest comparison
(SupCon [24]).
SupMoCo allows us to easily combine data from different domains during
training in a multi-domain setup. Compared to training on a single domain
(ImageNet), training on a combination of domains leads to a large improvement
in performance on novel tasks where the domain difference from ImageNet is
large (_e.g_., Quickdraw, Aircraft, and Fungi) as seen in Tab. 1, 2. In a
partially labeled setup where we provide all labeled samples from ImageNet and
only $10\%$ of labeled data from the remaining ones with the rest provided as
unlabeled, SupMoCo only suffers an average $2\%$ performance drop (Tab. 3) and
beats several recently proposed few-shot learning algorithms using all
supervision.
We perform an ablation study to investigate the complementary roles of
supervised and self-supervised learning by analyzing the degree of
generalization and supervision collapse (Fig. 3). We visualize the
distribution of nearest neighbors obtained through representations trained
using supervision and with SupMoCo using the empirical framework presented in
[11]. We find that SupMoCo avoids supervision collapse better than the
supervised method in this experiment.
## 2 Related Work
Figure 1: High-level illustration of SupMoCo (Sec. 4). During training, for
each image, we collect $P$ additional images (referred as positives, $P=1$ in
the figure) out of which one is the augmented view of the image and the rest
are random augmented samples from the same class. The original image is fed
through the query-encoder ($f_{q}$), while the other images go through the
key-encoder ($f_{k}$) for feature extraction. Once feature extraction is
complete, we use an instance-discriminative contrastive loss to maximize
similarity between the features of the image and its positives. Apart from
these $P$ positives, we also identify entries belonging to the same class from
the queue ($\mathcal{Q}$) (used to store features of past samples) and
maximize similarity with those as well for every image. Features corresponding
to the data augmented view of each sample are inserted into the queue and the
oldest entries are removed.
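The mechanism described above can be sketched as a per-query supervised contrastive loss over the keys and the queue. This is an illustrative numpy sketch, not the authors' implementation: the function name is hypothetical, and the temperature value is an assumed hyperparameter. As in the figure, every key that shares the query's class label (whether a fresh positive or a queue entry) is treated as a positive.

```python
import numpy as np

def supmoco_loss(q, keys, key_labels, q_label, tau=0.07):
    """Per-query supervised contrastive loss (sketch): `keys` holds the P
    positive features concatenated with the queue, `key_labels` their class
    labels. Features are L2-normalized; the loss is the mean negative
    log-softmax probability assigned to the keys of the query's class."""
    q = q / np.linalg.norm(q)
    keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = keys @ q / tau                       # cosine similarities / tau
    log_probs = logits - np.log(np.exp(logits).sum())
    pos = key_labels == q_label                   # same-class keys = positives
    return -log_probs[pos].mean()
```

The loss is small when the query is close to its same-class keys and large when it is close to keys of other classes, which is what drives the representation toward class-consistent yet instance-aware features.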
Few-shot learning methods. Meta-learning and standard supervised learning have
been the two most common approaches for pre-training a representation for few-
shot classification. Meta-learning methods can be broadly classified into
metric learning based and optimization based techniques. Metric learning [42,
25, 35, 49, 37, 44, 52] methods learn a feature representation such that
similar images are close in the embedding space relative to dissimilar images.
Optimization based methods [14, 27, 4] learn representations that lead to
generalizable models measured using pre-defined classification model,
objective, or a training procedure. On the other hand, [7, 10, 52, 8, 45]
showed that competitive performance can be obtained using standard cross-
entropy based training with a few modifications, suggesting the need to
understand the conditions under which models are transferable. This is the
focus of a broader class of meta-learning approaches that aim to improve few-
shot transfer through techniques for model and dataset selection, designing
task representations, and modeling transferability (_e.g_., [1, 51, 54, 46,
56]).
Few-shot learning benchmarks. Popular few-shot benchmarks such as miniImageNet
[49] and tieredImageNet [35] divide the ImageNet dataset [39] into a disjoint
train, validation, and test set of classes. The train set of classes are used
for pre-training and few-shot evaluation is performed on tasks sampled from
the test set varying the number of classes and labeled examples (_e.g_.,
$5$-way-$5$-shot). These benchmarks exhibit relatively small domain shifts. As
an alternate [47] proposed the Meta-Dataset benchmark, which consists of $10$
datasets from diverse domains. Two settings are used for reporting in general
— one where representations are learned on the train split of “ImageNet only”,
and another where train sets from “all datasets” except two are combined (the
two remaining are used for testing only). After the training, few-shot
evaluation is performed on the test split across all $10$ domains using many
tasks by varying the ways and shots per task.
Instance-discriminative contrastive learning. Among various self-supervised
tasks, contrastive learning with instance discrimination as a pretext task has
emerged as the leading approach for unsupervised representation learning for
visual domains [12, 53, 6, 19, 34]. These methods employ various forms of data
augmentations and optimize a contrastive loss that forces augmentations of the
same instance to be similar in the embedding space relative to other images.
Much prior work has focused on the use of contrastive learning for pre-
training, where the learned representations are evaluated on downstream tasks.
However, sequential training may be sub-optimal due to the dynamics of
training deep networks during transfer across tasks [2], and introducing
supervision early might lead to better representations.
Combining supervised and self-supervised learning. The complementary roles of
supervision and self-supervision have been explored in a number of prior works.
Some methods [15, 43] use self-supervised losses (_e.g_., jigsaw [33] and
rotation tasks [17]) as auxiliary losses during supervised training. These
methods require calibrating the two losses and are not robust when combining
data across domains. Alternative approaches use self-supervised pre-
training followed by supervised finetuning or adaptation [11]. We compare
against these approaches. The work most closely related to ours is SupCon [24]
which uses instance discrimination in a supervised setup using a modification
of SimCLR [6]. Similar to our approach, they use the class labels to generate different views of the data and show superior results on ImageNet compared to standard supervised training methods. Our work is based on MoCo. While the difference between SimCLR and MoCo is negligible in the self-supervised setting, it is significant in the supervised setting. In particular, the queue-based architecture of MoCo allows larger effective batch sizes, enabling contrastive losses over a diverse set of classes. Empirically, we find this to be crucial for good performance. We find that SupMoCo provides a 4% improvement over SupCon
on both settings on Meta-Dataset. These results echo years of work in the area of metric learning that has focused on mining triplets, hard negatives, and other sampling schemes [21, 41, 18] to improve learning.
Baselines on Meta-Dataset. Along with the experimental setup, [47] includes
results with several meta-learning methods including Prototypical Networks
(PN) [42] and MAML [14] which serve as additional baselines. For the ImageNet-
only setup, [45, 10] showed that a softmax classifier based supervised pre-
training performs better than the meta-learning baselines. CrossTransformers
[11], the current state-of-the-art on the ImageNet-only setup, uses self-supervised pre-training and a Transformers [48] based classifier. In the all-datasets setup, current state-of-the-art methods [13, 29] use multi-task learning where a shared backbone is trained on samples from all datasets. Some domain-specific parameters such as FiLM layers [36] are used, which increase performance but add complexity at training time. At test-time, these methods
use a model selection mechanism to pick the right representation to adapt to a
given few-shot task. In contrast our model trains a single network on a
unified dataset created by simply aggregating images and labels from all
datasets.
## 3 Background
We start by briefly describing two popular instance-discriminative contrastive
learning algorithms – MoCo [19] and SimCLR [6], followed by a description of how SupCon [24] adds supervision to the SimCLR formulation.
MoCo [19] is based on a contrastive loss estimated using samples in a batch
$x_{i},i\in\\{1\dots n\\}$ and a queue ($\mathcal{Q}$) of size $K$. It trains
two feature extractors: a query-encoder $f_{q}(\cdot)$ and key-encoder
$f_{k}(\cdot)$. Each image in the batch is transformed in two different ways, $(x_{i},\bar{x}_{i})$, and processed through $f_{q}$ and $f_{k}$ respectively. The contrastive loss is defined as:
$\displaystyle\begin{split}\mathcal{L}&=-\log\frac{\exp(\mathcal{S}(f_{q}(x_{i}),f_{k}(\bar{x}_{i})))}{\mathcal{D}}\end{split}$
(1) $\displaystyle\mathcal{D}$
$\displaystyle=\exp(\mathcal{S}(f_{q}(x_{i}),f_{k}(\bar{x}_{i})))+\sum_{h=1}^{K}\exp(\mathcal{S}(f_{q}(x_{i}),\mathcal{Q}_{h}))$
where $\mathcal{Q}_{h}$ is the $h^{th}$ entry of the $\mathcal{Q}$ (of size
$K$) and $\mathcal{S}$ is a similarity function such as the scaled cosine
similarity. The main difference between the two encoders is that $f_{q}$ is
updated using the gradient of the objective, while $f_{k}$ is updated using
momentum. The encoded keys are then added to the $\mathcal{Q}$ and the oldest
keys are discarded. A large value of momentum is used to ensure consistency of
the keys in $\mathcal{Q}$ across batches.
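As a concrete sketch of this objective, the following NumPy snippet (a simplified version with hypothetical names; the encoders and the momentum update are omitted) computes Eq. (1), averaged over a batch of already-encoded, L2-normalized embeddings:

```python
import numpy as np

def moco_loss(q, k, queue, tau=0.1):
    """Eq. (1) averaged over a batch (simplified sketch).
    q, k  : (n, d) L2-normalized query/key embeddings of the two views.
    queue : (K, d) L2-normalized features stored in the queue Q.
    tau   : temperature of the scaled cosine similarity S.
    """
    pos = np.sum(q * k, axis=1) / tau          # S(f_q(x_i), f_k(x_bar_i))
    neg = (q @ queue.T) / tau                  # S(f_q(x_i), Q_h) for all h
    logits = np.concatenate([pos[:, None], neg], axis=1)
    # log D, computed stably with the log-sum-exp trick
    m = logits.max(axis=1, keepdims=True)
    log_denom = (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).ravel()
    return float(np.mean(log_denom - pos))     # mean of -log(exp(pos)/D)
```

Matching query/key pairs yield a low loss; in the full method the keys would come from the momentum-updated encoder.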
SimCLR [6] does not use any queue and estimates a contrastive loss among
examples within the batch. In particular, during training each batch contains
$2n$ samples corresponding to two augmentations $(x_{i},x_{j})$ of $n$ images.
The objective between a positive pair $(x_{i},x_{j})$ is defined as
$\mathcal{L}_{ij}=-\log\frac{\exp(\mathcal{S}(f(x_{i}),f(x_{j})))}{\sum_{k=1}^{2n}\mathbf{1}_{[k\neq
i]}\exp(\mathcal{S}(f(x_{i}),f(x_{k})))}$ (2)
where $f(\cdot)$ is a feature extractor with $f(x)=h(g(x))$ consisting of the
backbone $g(\cdot)$ and a multi-layer projection head $h(\cdot)$. The overall
objective considers all positive pairs in the batch. After training, $h$ is discarded and $g$ is used as the feature extractor for downstream tasks.
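The per-pair loss of Eq. (2) can be sketched as follows (hypothetical function name; embeddings are assumed already L2-normalized outputs of $f$):

```python
import numpy as np

def simclr_pair_loss(feats, i, j, tau=0.1):
    """Eq. (2): loss for the positive pair (i, j) among 2n batch embeddings.
    feats : (2n, d) L2-normalized outputs of f = h(g(.)).
    """
    sims = (feats @ feats[i]) / tau      # S(f(x_i), f(x_k)) for every k
    mask = np.arange(len(feats)) != i    # the indicator 1_[k != i]
    log_denom = np.log(np.exp(sims[mask]).sum())
    return float(log_denom - sims[j])    # -log(exp(sims[j]) / denominator)
```

The overall batch loss would sum this over all positive pairs $(i, j)$.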
SupCon [24] modifies the above algorithm to take into account class labels by
simply considering all images from the same class along with their
augmentations to be positives with respect to each other. All other images and
their augmentations are considered negative. If a mini-batch contains $2n$
samples ($n$ images with one augmented view each) with $P$ unique images per
class, then the loss for each $x_{i}$ is:
$\mathcal{L}=\frac{-1}{2P-1}\sum_{r=1}^{2P-1}\log\frac{\exp(\mathcal{S}(f(x_{i}),f(x_{r})))}{\sum_{k=1}^{2n}\mathbf{1}_{[k\neq
i]}\exp(\mathcal{S}(f(x_{i}),f(x_{k})))}$ (3)
Here $x_{r},r\in\\{1\dots 2P-1\\}$ are the $2P-1$ positive samples for $x_{i}$, of which one is the augmented view of $x_{i}$ and the other $2P-2$ are the $P-1$ different images from the same class along with their augmented views. SupCon
was shown to improve over standard cross-entropy based training on ImageNet as
measured in terms of generalization on downstream tasks. We compare to SupCon
in this work.
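A minimal sketch of Eq. (3) (hypothetical name; class labels and L2-normalized embeddings assumed given) makes the label-driven choice of positives explicit:

```python
import numpy as np

def supcon_loss_i(feats, labels, i, tau=0.1):
    """Eq. (3): average over the 2P-1 positives of x_i (same label, k != i).
    feats  : (2n, d) L2-normalized embeddings.
    labels : (2n,) integer class ids.
    """
    sims = (feats @ feats[i]) / tau
    not_i = np.arange(len(feats)) != i
    log_denom = np.log(np.exp(sims[not_i]).sum())
    pos = not_i & (labels == labels[i])          # the 2P-1 positives of x_i
    return float(np.mean(log_denom - sims[pos]))
```

Setting every label to a unique value recovers the SimCLR-style objective with a single positive per anchor.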
## 4 Supervised Momentum Contrast
SupMoCo uses the same idea as SupCon where labels (when available) are used to
define positive pairs in the MoCo objective. The main difference is that we
need to keep track of the labels in both the keys and the $\mathcal{Q}$ and
consider choices of how to sample batches and update the queue. Suppose we have sampled $B_{i}$ images (positives) for image $x_{i}$, of which one is an augmented view of $x_{i}$ itself and the others are random augmented images from the same class. There are $Q_{i}$ other samples belonging to the same class as $x_{i}$ in $\mathcal{Q}$, and we write $P_{i}=B_{i}+Q_{i}$.
The loss for the sample $x_{i}$ is:
$\displaystyle\begin{split}\mathcal{L}=\frac{-1}{P_{i}}\Bigg{[}&\sum_{j=1}^{B_{i}}\log\frac{\exp(\mathcal{S}(f_{q}(x_{i}),f_{k}(x_{b^{i}(j)})))}{\mathcal{D}}\\\
&+\sum_{j=1}^{Q_{i}}\log\frac{\exp(\mathcal{S}(f_{q}(x_{i}),\mathcal{Q}_{q^{i}(j)}))}{\mathcal{D}}\Bigg{]}\\\
\end{split}$ (4) $\displaystyle\mathcal{D}$
$\displaystyle=\sum_{j=1}^{B_{i}}\exp(\mathcal{S}(f_{q}(x_{i}),f_{k}(x_{b^{i}(j)})))+\sum_{h=1}^{K}\exp(\mathcal{S}(f_{q}(x_{i}),\mathcal{Q}_{h}))$
where $b^{i}(j)$ and $q^{i}(j)$ denote the indices of the positive samples for
$x_{i}$ and other images belonging to the same class as $x_{i}$ in the queue
respectively. During training, only the gradient of the loss with respect to the query-encoder $f_{q}$ is used for updates, while $f_{k}$ is updated using momentum instead of gradients. We describe the details of how we sample the data and update the queue next. (PyTorch code is provided in the supplementary materials.)
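The per-query loss can be sketched in NumPy as follows (hypothetical names; consistent with $\mathcal{D}$ and Eq. (1), the similarities enter through exponentials):

```python
import numpy as np

def supmoco_loss_i(q_i, batch_keys, queue, queue_labels, y_i, tau=0.1):
    """Eq. (4) for one query x_i (simplified sketch).
    q_i          : (d,) L2-normalized query embedding f_q(x_i).
    batch_keys   : (B_i, d) the positive keys f_k(x_{b^i(j)}) from the batch.
    queue        : (K, d) stored key features, queue_labels : (K,) their labels.
    y_i          : class label of x_i.
    """
    s_batch = (batch_keys @ q_i) / tau               # B_i batch positives
    s_queue = (queue @ q_i) / tau                    # all K queue entries
    log_denom = np.log(np.exp(s_batch).sum() + np.exp(s_queue).sum())
    # positives: all batch keys plus the Q_i queue entries with the same label
    positives = np.concatenate([s_batch, s_queue[queue_labels == y_i]])
    return float(np.mean(log_denom - positives))     # -1/P_i times sum of logs
```

Note that the batch positives and the queue positives share a single denominator, so every class stored in the queue acts as a negative for every query.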
Sample Selection: While selecting keys (positives) for a given query image,
instead of only selecting the augmented view of each image, we also sample
$P-1$ additional images (data-augmented) from the training set. This allows the learned representations to capture class-specific invariances. We discuss the impact of the choice of $P$ in Tab. 6.
Queue Architecture: Apart from storing the keys, the above algorithm requires us to store the class labels as well. Another choice is what to add to the queue after each batch update. We found that, instead of adding all the samples in the batch to the queue, it was effective to add just one per image (the data-augmented view of each $x_{i}$). This increases the diversity of data-points in the queue.
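Such a labeled queue can be sketched as a simple ring buffer (a hypothetical class; the real implementation stores GPU tensors and enqueues per training step):

```python
import numpy as np

class LabeledQueue:
    """Fixed-size FIFO holding key features and their class labels."""
    def __init__(self, size, dim):
        self.feats = np.zeros((size, dim))
        self.labels = np.full(size, -1)   # -1 marks an empty slot
        self.size = size
        self.ptr = 0

    def enqueue(self, feats, labels):
        # add the new keys (one per image); the oldest entries are overwritten
        for f, y in zip(feats, labels):
            self.feats[self.ptr] = f
            self.labels[self.ptr] = y
            self.ptr = (self.ptr + 1) % self.size
```

Storing labels alongside features is what lets the loss above select its $Q_{i}$ queue positives by label match.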
Discussion. Contrasting between instances within a batch leads to a tradeoff: a large number of positive samples results in a poor estimate of the denominator due to a potential lack of hard negatives. Decoupling the sampling strategies of the batch and the queue provides greater flexibility and larger effective batch sizes under the same GPU memory constraint.
Empirically, we find that the performance of SupCon with a batch-size of
$1024$ (maximum that fits on 8 V100 GPUs) lags behind SupMoCo with a batch-
size of $512$. We present the details in Sec. 5.3.
## 5 Experiments
### 5.1 Experimental Setup
We describe our experimental setup below, including the dataset, details
regarding SupMoCo training and how we perform few-shot evaluation.
#### Dataset
We use Meta-Dataset [47] to evaluate few-shot classification performance.
Meta-Dataset consists of $10$ datasets from different domains: ImageNet/ILSVRC-2012 [39], Aircraft [31], Omniglot [26], Textures [9],
QuickDraw [23], Fungi [40], VGG-Flower [32], Birds [50], MSCOCO [28] and
Traffic-Sign [22]. Most of these are fine-grained datasets (_e.g_. VGG-Flower,
Aircraft, Textures, Birds, Fungi). Out of these $10$ datasets, either only the
ImageNet or the first $8$ datasets can be used for training. Traffic-Sign and
MSCOCO are reserved for testing only to evaluate out-of-domain generalization
in case all $8$ datasets are used for training. We refer to the first setup as
“ImageNet-only” and the second setup as “all-datasets”. The first $8$ datasets
are split into train, validation and test segments where the classes present
in each segment are disjoint from each other. MSCOCO and Traffic-Sign do not have any classes belonging to the train split and therefore cannot be used
for training. We provide details about the datasets in the supplementary
materials.
#### Contrastive Training Details
We use a ResNet-18 backbone [20] with $224\times 224$ images and train using
$8$ V100 GPUs (AWS P3.16XL). For SupMoCo, we use $3$ positive samples for
every image, with one of them being the augmented view of the same image. For
SupCon, we use $4$ images per class in a mini-batch which means each image
gets $3$ different images and their augmentations plus its own augmentation as
positives. We train for $250$ epochs when training with ImageNet-only and
$300$ epochs when using all datasets. For SupCon, we train with a batch-size
of $1024$ whereas for SupMoCo, we use a smaller batch-size of $512$. We use a
linear warm-up for the learning-rate during the first $10$ epochs (starting from $0.1$ to a peak of $0.8$ for SupCon and $0.4$ for SupMoCo) and train with SGD +
momentum ($0.9$) with LARS [55] along with a weight-decay of $1e^{-4}$ and
cosine-annealing learning-rate scheduling [30]. We use $5$ data-augmentations
during contrastive training: RandomResizedCrop, ColorJitter,
RandomHorizontalFlip, RandomGrayscale and GaussianBlur. We construct the
projection head with one hidden layer (with ReLU activation) of dimension
$512$ and the output dimension is kept at $128$. We set the softmax-
temperature $\tau$ to be $0.1$. For SupMoCo, we use a queue of length $16384$
and a momentum of $0.999$. When training using all the $8$ datasets, we
concatenate all the training data and train using the combined dataset. While
training using the combined dataset, we randomly sample images from all
datasets in every mini-batch rather than ensuring that a mini-batch contains
data only from a particular dataset. We provide additional results in the
supplementary materials on training in this way versus keeping each mini-batch
pure and show that our setup yields much better performance.
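The warm-up plus cosine-annealing schedule described above can be sketched per epoch (a simplified hypothetical helper using the SupMoCo ImageNet-only values; the actual implementation may step per iteration):

```python
import math

def lr_at_epoch(epoch, total_epochs=250, warmup_epochs=10,
                start_lr=0.1, peak_lr=0.4):
    """Linear warm-up from start_lr to peak_lr, then cosine annealing to 0."""
    if epoch < warmup_epochs:
        # linear ramp over the warm-up epochs
        return start_lr + (peak_lr - start_lr) * epoch / warmup_epochs
    # cosine decay over the remaining epochs
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * t))
```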
#### Few-Shot Evaluation
Following the protocol suggested in Meta-Dataset, we sample $600$ tasks from
each of the $10$ datasets where each task contains a variable number of ways
(classes) & shots (samples) and report the average accuracy across these tasks
along with the average rank across all $10$ datasets. To solve each individual
task, we use a finetuning procedure similar to [10] where we use a cosine
classifier [16] with the weights of the classifier initialized using the
prototypes for each class (computed using the support/train set) and then
finetune both the classifier and the backbone jointly. However, we do not use
transductive finetuning [10] to ensure a fair comparison with other methods.
We use a batch-size of $64$, learning-rate of $0.001$, SGD + momentum ($0.9$)
with $1e^{-4}$ weight-decay and finetune for $50$ epochs.
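The prototype-based initialization of the cosine classifier can be sketched as follows (hypothetical helper names; the joint finetuning of backbone and classifier is omitted):

```python
import numpy as np

def prototype_weights(support_feats, support_labels, n_classes):
    """Initialize cosine-classifier weights with per-class support prototypes."""
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def cosine_logits(feats, weights, scale=10.0):
    """Scaled cosine similarity between query features and class weights."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return scale * (feats @ weights.T)
```

Before any finetuning step, this is equivalent to a nearest-prototype classifier under cosine distance, which gives the finetuning a strong starting point.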
### 5.2 Experimental Results
In this section, we report and analyze the performance of SupMoCo on both the
ImageNet-only and all-datasets setup. For each segment, we provide additional
details regarding existing methods and differentiate our approach against
these.
Table 1: Performance when trained using ImageNet-only. We use the following
methods from the baselines: FS-Baseline : Transductive finetuning; Sup.
Embeddings : lr-distill; CrossTransformers : CTX+SimCLR +Aug. SupMoCo
outperforms all prior methods on the average rank metric and performs better
on $6/10$ tasks compared to the state-of-the-art [11].
Algorithms | Backbone | Avg. Rank | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi
---|---|---|---|---|---|---|---|---|---|---|---|---
ProtoNets [47] | ResNet-18 | 5.75 | 50.50 | 53.10 | 68.79 | 59.98 | 66.56 | 41.00 | 48.96 | 47.12 | 85.27 | 39.71
Proto-MAML [47] | ResNet-18 | 5.15 | 49.53 | 55.95 | 68.66 | 63.37 | 66.49 | 43.74 | 51.52 | 48.83 | 87.15 | 39.96
Sup. Embedding [45] | ResNet-18 | 3.30 | 61.48 | 62.32 | 79.47 | 64.31 | 79.28 | 59.28 | 60.83 | 76.33 | 91.00 | 48.53
FS-Baseline [10] | WRN-28-10 | 3.25 | 60.53 | 72.40 | 82.05 | 82.07 | 80.47 | 42.86 | 57.36 | 64.37 | 92.01 | 47.72
CrossTransformers [11] | ResNet-34 | 1.90 | 62.76 | 79.49 | 80.63 | 82.21 | 75.57 | 59.90 | 72.68 | 82.65 | 95.34 | 51.58
SupMoCo | ResNet-18 | 1.65 | 62.96 | 81.48 | 84.89 | 78.42 | 88.59 | 52.18 | 68.42 | 84.69 | 93.56 | 55.39
Table 2: Performance when trained using all $8$ datasets of Meta-Dataset.
SupMoCo outperforms all methods on the average rank metric and performs equal
or better on $8/10$ tasks compared to the state-of-the-art [29].
Algorithms | Backbone | Avg. Rank | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi
---|---|---|---|---|---|---|---|---|---|---|---|---
ProtoNets [47] | ResNet-18 | 6.60 | 44.50 | 71.14 | 67.01 | 79.56 | 65.18 | 39.87 | 64.88 | 46.48 | 86.85 | 40.26
Proto-MAML [47] | ResNet-18 | 5.90 | 46.52 | 75.23 | 69.88 | 82.69 | 68.25 | 41.74 | 66.84 | 52.42 | 88.72 | 41.99
CNAPs [38] | ResNet-18 | 4.90 | 52.30 | 80.50 | 72.20 | 88.40 | 58.30 | 42.60 | 72.50 | 60.20 | 86.00 | 47.40
SimpleCNAPs [3] | ResNet-18 | 3.45 | 58.60 | 82.40 | 74.90 | 91.70 | 67.80 | 46.20 | 77.70 | 73.50 | 90.70 | 46.90
SUR [13] | ResNet-18 | 3.25 | 56.30 | 85.40 | 71.40 | 93.10 | 71.50 | 52.40 | 81.30 | 70.40 | 82.80 | 63.10
URT [29] | N/A | 2.35 | 55.70 | 85.80 | 76.30 | 94.40 | 71.80 | 52.20 | 82.50 | 69.40 | 88.20 | 63.50
SupMoCo | ResNet-18 | 1.55 | 61.94 | 86.61 | 86.93 | 91.61 | 87.64 | 51.34 | 82.44 | 84.31 | 92.62 | 63.68
#### Training Using ImageNet-Only
We report the performance metrics on ImageNet-only in Tab. 1 and compare
against the following baselines:
* •
ProtoNets/Proto-MAML : ProtoNets (PN) trains a Prototypical Network [42] on
the training set using episodic sampling whereas Proto-MAML uses a first-order
approximation of MAML [14] where the inner (linear) classifier weights are
initialized using the prototypes of every class. Both of these baselines
suffer from supervision collapse. Using episodic sampling also brings an
additional problem where data-points are not compared against representatives
from all other classes at every training step which further affects
representation quality.
* •
Supervised Embeddings/Few-Shot Baseline : These two algorithms train an embedding with a standard supervised loss on the entire dataset, without any form of episodic sampling. Supervised Embeddings [45] keeps the backbone fixed at test-time and learns a Logistic Regression classifier while Few-Shot Baseline [10] uses transductive finetuning. Although these methods suffer from supervision collapse, we see better performance compared to meta-learning methods because the $N$-way softmax classifier ensures that each image is compared against all class representatives, creating more discriminative features.
* •
CrossTransformers : To avoid supervision collapse, CrossTransformers [11]
proposes a self-supervised instance-discriminative pre-training phase followed
by training using a Transformer based architecture which builds upon the
nearest-mean classifier of PN but learns to retain the location of image
features by combining features with an attention based mechanism rather than
using an averaged-out feature-vector.
Though CrossTransformers and SupMoCo both use the supervised datasets to train
the embedding, the instance-discriminative training embedded in both the
algorithms (as an initial training phase in CrossTransformers and in the
single-stage training of SupMoCo) helps to avoid supervision collapse to a
large extent and results in superior performance compared to the other
baselines.
From the experimental results reported in Tab. 1, we can see that SupMoCo
outperforms all other algorithms on the average rank metric. In particular, it
outperforms the current state-of-the-art CrossTransformers despite using a
smaller backbone (ResNet-18 vs ResNet-34) and no additional parameters like
the Transformers while also having a shorter training time due to the single-
stage training mechanism.
One of the baselines that we do not compare against is using an instance-
discriminative contrastive loss as an auxiliary loss similar to [15]. This
involves tuning a crucial hyperparameter to determine the relative importance
of the standard cross-entropy loss and the self-supervised loss which requires
an extensive hyperparameter search. We executed a single training run using an auxiliary instance-discriminative contrastive loss with equal weight given to the contrastive and supervised losses and observed that it underperformed standard supervised training [45, 10].
#### Training Using All-Datasets
Figure 2: t-SNE plot for visualizing the features (computed using SupMoCo)
corresponding to the images from the validation set of all training datasets
of Meta-Dataset. A single embedding is able to decipher the characteristics of
individual datasets and project them onto different subspaces. This
qualitatively shows that our SupMoCo embedding can preserve the identities of
each dataset without requiring any domain specific parameters.
In this experimental setup, the algorithms can use images belonging to the
train classes from all the $8$ datasets. Baseline methods can be broadly divided into three categories here: 1) concatenate all data (and labels) and train on it; 2) train a common backbone and one additional set of parameters adapted based on the domain; 3) train a common backbone with a set of additional parameters per domain and use a model selection mechanism at test-time. Previously discussed baselines (PN/Proto-MAML) use the first approach.
In SupMoCo as well, we take the first approach of training a single model (no
domain specific parameters) by concatenating data from all the classes across
$8$ domains.
* •
CNAPs/SimpleCNAPs : CNAPs and SimpleCNAPs are few-shot classifiers based on conditional neural processes (CNP). Both use a shared feature extractor with one set of FiLM [36] layers that is adapted using meta-learning. CNAPs uses a linear classifier while SimpleCNAPs uses a Mahalanobis distance classifier to solve each task.
* •
SUR/URT : SUR [13] and URT [29] use the idea of universal representations [5]
where a shared backbone along with domain specific parameters (implemented via
FiLM layers) is used for training. The idea is to share information across
domains while also retaining individual properties of each domain via a few
domain specific parameters. At test-time, SUR uses a probabilistic model to
find out how individual domain representations should be combined given a
target task. On the other hand, URT meta-trains a Transformer layer (after the
universal backbone is trained) for learning-to-learn such a combination. Both
of these methods can be considered a form of “soft” model selection, as opposed to a “hard” selection where the features corresponding to one particular domain are picked.
From the experimental results reported in Tab. 2, we can see that SupMoCo
outperforms all other algorithms on the average rank metric. It may seem
surprising that SupMoCo can outperform other methods, especially the ones
which use domain-specific parameters. However, looking at the SupMoCo embedding space (Fig. 2), we can see that it preserves the individuality of each domain without requiring any domain-specific parameters. Using domain-specific parameters comes with the additional downside of having to use model selection
at test time. Given the limited amount of labeled data available during few-
shot testing, the selection process may get biased and assign more importance
to the parameters corresponding to an unrelated domain. Having a single
embedding alleviates that problem as the embedding itself possesses all the
information across domains and can be adapted to the target task as required.
### 5.3 Additional Experiments
Figure 3: On the left side, we show the quantitative results of the supervision-collapse experiment (Sec. 5.3). In the leftmost plot, the X-axis shows the number of retrievals from the same (test) class ($0$ means none from the same test class). In the middle plot, the X-axis indicates the number of retrievals from train classes ($0$ indicates all from test). In the rightmost plot, the X-axis denotes the maximum frequency of the same train class, _e.g_. a value of $3$ means at most $3$ members were from the same train class ($0$ means all from test). The Y-axis denotes the number of queries in each bin. In the leftmost plot, SupMoCo shifts the mass to the right, meaning it finds more samples of the same class as the test image in the retrieval space. In the middle and rightmost plots, SupMoCo shifts the mass to the left, meaning it matches the test image less with images from any train class and from a particular train class respectively, generating more differentiating features for unseen images. On the right side, we show the nearest neighbors of a particular test image retrieved using the two algorithms. We see the representation collapse with ProtoNets, where the features of the query image end up being similar to the “buckeye” class from training because the network associates the red circle of the query image with the train class and ignores other contextual information. In comparison, SupMoCo understands the visual semantics of the image better and finds images predominantly from the test set which are very similar to the query image (“screws” and “nails” in the wild).
#### Analyzing Supervision Collapse
Standard supervised training methods suffer from supervision collapse where
they discard information which is not relevant for the immediate training
task. In this experiment, we use the experimental setup provided by [11] to
analyze “supervision collapse” both qualitatively and quantitatively between a
supervised meta-learning algorithm (ProtoNets [42]) and SupMoCo. The
experimental setup is based on performing nearest-neighbor (NN) retrievals in
the joint embedding space (train + test) of the ImageNet dataset. The
retrieval set is constructed by sampling $130$ random images from each of the $712$ train classes and $130$ test classes. The task is to find the top-$9$
nearest neighbors for $1000$ randomly sampled images in this joint embedding
space and evaluate :
* •
Number of NNs that come from the same (test) class as the test/query image.
* •
Number of NNs that come from the train set.
* •
Among NNs that come from train classes, the maximum number retrieved from any single train class.
The first metric analyzes how differentiated the representation of each test
class is while the second metric measures how much the representation of an
individual test image collapses to some image from the train set. The third
metric evaluates collapse onto one train class – if a majority of the retrievals come from one particular train class, it indicates that the representation of that image has predominantly coincided with that train class's representation.
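These three per-query counts can be sketched as follows (NumPy, hypothetical names; features are assumed L2-normalized so dot products rank the neighbors):

```python
import numpy as np

def collapse_metrics(query_feat, query_label, gallery_feats,
                     gallery_labels, gallery_is_train, k=9):
    """Return (NNs from the query's own test class, NNs from the train set,
    max frequency of any single train class among the NNs)."""
    sims = gallery_feats @ query_feat
    nn = np.argsort(-sims)[:k]                      # top-k nearest neighbors
    same_class = int((gallery_labels[nn] == query_label).sum())
    from_train = int(gallery_is_train[nn].sum())
    train_nn_labels = gallery_labels[nn][gallery_is_train[nn]]
    max_train_freq = int(np.bincount(train_nn_labels).max()) if train_nn_labels.size else 0
    return same_class, from_train, max_train_freq
```

Aggregating these counts over the $1000$ query images gives the histograms shown in Figure 3.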
The empirical analysis is reported on the left side of Figure 3. We observe that SupMoCo has at least one neighbor from the same test class in $60\%$ of the cases, while it is $34.1\%$ for PN (as reported in [11]; higher is better). When it comes to evaluating collapse onto the same (train) class, SupMoCo has $39\%$ of cases where two or more neighbors are from the same train class, while it is $55.3\%$ for PN (lower is better). This indicates that SupMoCo
prevents collapse better than PN and generates more unique representations for
unseen images. Qualitative analysis (Fig. 3 right) shows similar findings as
the quantitative one.
Table 3: Performance comparison among SupMoCo trained with ImageNet-only
(SupMoCo-IM), with ImageNet and $10\%$ of the labeled data (rest provided as
unlabeled) from other domains (SupMoCo-SSL) and with all datasets (SupMoCo).
While using only $10\%$ of the labeled data from the other domains, SupMoCo-SSL has an average performance gap of just $2\%$ compared to the fully supervised model. When compared with the model trained on ImageNet alone, SupMoCo-SSL achieves a $4\%$ gain on domains distant from ImageNet (_e.g_. Fungi and Omniglot, indicated in blue).
Data | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi
---|---|---|---|---|---|---|---|---|---|---
SupMoCo-IM | 62.96 | 81.48 | 84.89 | 78.42 | 88.59 | 52.18 | 68.42 | 84.69 | 93.56 | 55.39
SupMoCo-SSL | 61.92 | 83.41 | 85.09 | 86.17 | 86.97 | 51.21 | 79.93 | 84.35 | 92.43 | 59.67
SupMoCo | 61.94 | 86.61 | 86.93 | 91.61 | 87.64 | 51.34 | 82.44 | 84.31 | 92.62 | 63.68
Table 4: Performance comparison between SupCon and SupMoCo on ImageNet-only.
SupMoCo outperforms SupCon by $3.6\%$ on average across all the tasks.
Algorithms | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi
---|---|---|---|---|---|---|---|---|---|---
SupCon | 59.30 | 78.39 | 81.86 | 74.60 | 84.88 | 48.36 | 64.31 | 81.23 | 90.16 | 51.41
SupMoCo | 62.96 | 81.48 | 84.89 | 78.42 | 88.59 | 52.18 | 68.42 | 84.69 | 93.56 | 55.39
Table 5: Performance comparison between SupCon and SupMoCo on all-datasets.
SupMoCo outperforms SupCon by $4.1\%$ on average across all the tasks.
Algorithms | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi
---|---|---|---|---|---|---|---|---|---|---
SupCon | 56.50 | 83.20 | 83.70 | 86.80 | 82.02 | 47.89 | 78.09 | 81.23 | 89.09 | 59.57
SupMoCo | 61.94 | 86.61 | 86.93 | 91.61 | 87.64 | 51.34 | 82.44 | 84.31 | 92.62 | 63.68
Table 6: Comparison between $1$ positive per image (augmented view of the same
image) and $3$ positives per image (1 augmented view + 2 random augmented
images from the same class) when training SupMoCo using ImageNet-only. Using
additional positives beyond the augmented view of the image itself provides an additional performance gain.
Positives ($P$) | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi
---|---|---|---|---|---|---|---|---|---|---
$P=1$ | 60.77 | 81.34 | 80.03 | 77.16 | 86.74 | 47.19 | 64.43 | 82.35 | 91.95 | 53.56
$P=3$ | 62.96 | 81.48 | 84.89 | 78.42 | 88.59 | 52.18 | 68.42 | 84.69 | 93.56 | 55.39
#### Comparing SupMoCo and SupCon
The performance of SupCon improves with a larger number of positive samples per class within every mini-batch [24]. However, increasing the number of samples per class reduces the number of unique classes within a batch. This is less of a problem for self-supervised SimCLR, because each image is considered its own class, but SupCon uses the true class labels, so the number of distinct classes is reduced by a factor of $P$ when there are $P$ samples per class. This problem does not exist for SupMoCo because it uses a
separate queue to store features corresponding to the negative samples. This
ensures that representations from all classes are available to compare against
at every step even with a small batch-size. For example, in SupCon with a batch-size of $1024$ and $4$ samples per class, we can only have $1024/4=256$ unique classes to compare against, whereas in SupMoCo, irrespective of the batch-size, a queue of moderate size (_e.g_. $8192$) can store enough samples from all classes in the larger all-datasets training setup. The queue in
SupMoCo only has to store low dimensional feature vectors ($128$) rather than
the image itself (and its features) and therefore has negligible GPU memory
overhead if queue size increases. The ability to easily compare against
representations from all classes at every training step helps SupMoCo to
produce more discriminative features. From the results in Tab. 4 & 5, we observe that SupMoCo outperforms SupCon by $3.6\%$ on ImageNet-only and $4.1\%$ on all-datasets (on average), while achieving a maximum gain of $5.6\%$.
#### Using Partially Labeled Data
In this experiment, we measure the performance of SupMoCo in a partially labeled (semi-supervised) setup to evaluate 1) its flexibility to work with
both fully and partially labeled data and 2) its ability to use only a limited
number of labeled samples to learn class semantics while predominantly using
unlabeled data to learn individual characteristics of data from different
domains. In this setup, we provide all labeled images from ImageNet and only $10\%$ labeled images from the other $7$ datasets, while the rest are provided without labels. Depending on the dataset, this can provide
as few as $1$ sample for certain classes. The goal here is to see how much
performance gap there is between SupMoCo with partially labeled against all
labeled data and also how much its performance improves compared to training
using ImageNet-only. From the results in Tab 3, we can see that the
performance using $10\%$ of labeled data from the $7$ domains has only $2\%$
performance gap on average compared to the fully-supervised model. When
compared to ImageNet-only, we see $4\%$ performance gain (on average) in
domains which are further from ImageNet (_e.g_. Omniglot, Aircraft, QuickDraw
and Fungi), while on other domains performance stays largely the same.
#### Choosing Additional Positives in SupMoCo
In SupMoCo, for every image we sample $P$ positive samples, including one augmented view of the image itself. However, each sample also has some positive samples from the queue. In this experiment, we empirically evaluate adding extra positive samples from the same class. From Tab. 6, we can see that these additional samples provide some performance benefit. We hypothesize that this is because the additional positives are encoded using the latest version of the key encoder and thus provide a more accurate estimate of the features to compute similarity against, unlike the ones coming from the queue, which carry slightly outdated features.
## 6 Conclusion
In this work, we show that combining self-supervised instance-discriminative
contrastive training with supervision can perform favorably on cross-domain
few-shot recognition tasks. Our proposed algorithm SupMoCo can outperform
prior methods on Meta-Dataset and also performs better than a similar method
called SupCon. SupMoCo also offers additional flexibility to use partially
labeled datasets because of how it incorporates supervision and self-
supervision into the algorithm. Our approach provides a new direction for improving few-shot classification by leveraging instance-discriminative contrastive learning in both supervised and semi-supervised meta-learning setups, and we hope future work will explore this further.
## References
* [1] Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless C Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task embedding for meta-learning. In International Conference on Computer Vision, 2019.
* [2] Alessandro Achille, Matteo Rovere, and Stefano Soatto. Critical learning periods in deep neural networks. arXiv preprint arXiv:1711.08856, 2017.
* [3] Peyman Bateni, Raghav Goyal, Vaden Masrani, Frank Wood, and Leonid Sigal. Improved few-shot visual classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14493–14502, 2020.
* [4] Luca Bertinetto, Joao F Henriques, Philip Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. In International Conference on Learning Representations, 2018.
* [5] Hakan Bilen and Andrea Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275, 2017.
* [6] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.
* [7] Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. arXiv preprint arXiv:1904.04232, 2019.
* [8] Yinbo Chen, Xiaolong Wang, Zhuang Liu, Huijuan Xu, and Trevor Darrell. A new meta-baseline for few-shot learning. arXiv preprint arXiv:2003.04390, 2020.
* [9] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3606–3613, 2014.
* [10] Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. In International Conference on Learning Representations, 2019.
* [11] Carl Doersch, Ankush Gupta, and Andrew Zisserman. Crosstransformers: spatially-aware few-shot transfer. Advances in Neural Information Processing Systems, 33, 2020.
* [12] Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence, 38(9):1734–1747, 2015.
* [13] Nikita Dvornik, Cordelia Schmid, and Julien Mairal. Selecting relevant features from a universal representation for few-shot classification. arXiv preprint arXiv:2003.09338, 2020.
* [14] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR. org, 2017.
* [15] Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, and Matthieu Cord. Boosting few-shot visual learning with self-supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8059–8068, 2019.
* [16] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4367–4375, 2018.
* [17] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.
* [18] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), volume 2, pages 1735–1742. IEEE, 2006.
* [19] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.
* [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [21] Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pages 84–92. Springer, 2015.
* [22] Sebastian Houben, Johannes Stallkamp, Jan Salmen, Marc Schlipsing, and Christian Igel. Detection of traffic signs in real-world images: The german traffic sign detection benchmark. In The 2013 international joint conference on neural networks (IJCNN), pages 1–8. IEEE, 2013.
* [23] Jonas Jongejan, Henry Rowley, Takashi Kawashima, Jongmin Kim, and Nick Fox-Gieg. The Quick, Draw! AI experiment, 2016.
* [24] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv preprint arXiv:2004.11362, 2020.
* [25] Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, 2015.
* [26] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
* [27] Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10657–10665, 2019.
* [28] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
* [29] Lu Liu, William Hamilton, Guodong Long, Jing Jiang, and Hugo Larochelle. A universal representation transformer layer for few-shot image classification. arXiv preprint arXiv:2006.11702, 2020.
* [30] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
* [31] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
* [32] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pages 722–729. IEEE, 2008.
* [33] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European conference on computer vision, 2016.
* [34] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
* [35] Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. Tadam: Task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems, pages 721–731, 2018.
* [36] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. arXiv preprint arXiv:1709.07871, 2017.
* [37] Avinash Ravichandran, Rahul Bhotika, and Stefano Soatto. Few-shot learning with embedded class models and shot-free meta training. In International Conference on Computer Vision, 2019.
* [38] James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, and Richard E Turner. Fast and flexible multi-task classification using conditional neural adaptive processes. In Advances in Neural Information Processing Systems, pages 7959–7970, 2019.
* [39] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
* [40] B Schroeder and Y Cui. Fgvcx fungi classification challenge 2018, 2018.
* [41] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815–823, 2015.
* [42] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077–4087, 2017.
* [43] Jong-Chyi Su, Subhransu Maji, and Bharath Hariharan. When does self-supervision improve few-shot learning? arXiv preprint arXiv:1910.03560, 2019.
* [44] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1199–1208, 2018.
* [45] Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B Tenenbaum, and Phillip Isola. Rethinking few-shot image classification: a good embedding is all you need? arXiv preprint arXiv:2003.11539, 2020.
* [46] Anh T Tran, Cuong V Nguyen, and Tal Hassner. Transferability and hardness of supervised classification tasks. In International Conference on Computer Vision, 2019.
* [47] Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al. Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096, 2019.
* [48] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
* [49] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638, 2016.
* [50] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
* [51] Xin Wang, Fisher Yu, Ruth Wang, Trevor Darrell, and Joseph E. Gonzalez. Tafe-net: Task-aware feature embeddings for low shot learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
* [52] Yan Wang, Wei-Lun Chao, Kilian Q Weinberger, and Laurens van der Maaten. Simpleshot: Revisiting nearest-neighbor classification for few-shot learning. arXiv preprint arXiv:1911.04623, 2019.
* [53] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733–3742, 2018.
* [54] Xi Yan, David Acuna, and Sanja Fidler. Neural data server: A large-scale search engine for transfer learning data. In Conference on Computer Vision and Pattern Recognition, 2020.
* [55] Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017.
* [56] Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Conference on computer vision and pattern recognition, 2018.
* [57] Nanxuan Zhao, Zhirong Wu, Rynson WH Lau, and Stephen Lin. What makes instance discrimination good for transfer learning? arXiv preprint arXiv:2006.06606, 2020.
## Supplementary Materials
## Appendix A SupMoCo vs MoCo
While our few-shot evaluation setup provides a large labeled dataset for pre-
training, we still want to investigate the usefulness of labels by comparing
against a model trained using only the images (without labels). We train a
self-supervised MoCo [19] model on both the ImageNet-only setup and all-
datasets setup in an unsupervised fashion to compare performance against a
SupMoCo model. From Tab. 7 and 8, we can observe that using labels indeed
helps to boost performance on few-shot tasks across all domains, with gains as
large as $14.5\%$. We argue this happens because the self-supervised
representation predominantly learns mid and low-level features [57] and does
not capture enough high-level semantics. Such an embedding would be adequate
for transferring to a downstream task which has a moderate number of labeled
samples because it can learn the (missing) high-level representations using
the available supervision. However, in a few-shot setup, label information is
limited and there are not enough opportunities to learn the high-level
features that are required to distinguish one class from another. By
performing this experiment, we show that using supervision with the instance-
discriminative learning paradigm is more helpful in a few-shot classification
setup and can outperform a self-supervised model significantly.
## Appendix B Maintaining Data Purity within a Batch
When training a single SupMoCo model on the combined dataset (_e.g_. training
on all $8$ datasets of Meta-Dataset), there are two ways to construct a mini-
batch: keep each batch pure by making it contain images only from a
particular dataset, or make it impure by dropping any dataset-specific
delineation and letting every batch contain random samples from all the
datasets. In our experimental section, we mentioned that in such a
multi-domain training scenario, we use the impure batch approach because it
performs better. In Tab. 9, we compare the performance of pure vs. impure
batches in detail and show that impure batches outperform pure batches across
tasks from all domains. We hypothesize that this happens because, in the
impure batch setup, batch-norm parameters face less interference and fewer
sudden changes; with pure batches, every batch would present a drastically
different set of images and cause large parameter updates, thereby making the
training process sub-optimal.
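To make the distinction concrete, here is a small, illustrative sketch of the two batching strategies (dataset names and sizes are invented for the example; the real training uses the Meta-Dataset splits):

```python
import random

def make_batches(datasets, batch_size, pure=True, seed=0):
    """datasets: dict name -> list of samples. Pure: each batch drawn
    from a single dataset; impure: each batch mixes samples drawn at
    random from the pooled datasets."""
    rng = random.Random(seed)
    if pure:
        batches = []
        for name, samples in datasets.items():
            pool = samples[:]
            rng.shuffle(pool)
            for i in range(0, len(pool) - batch_size + 1, batch_size):
                batches.append(pool[i:i + batch_size])
        rng.shuffle(batches)   # interleave datasets across steps
        return batches
    pool = [s for samples in datasets.values() for s in samples]
    rng.shuffle(pool)
    return [pool[i:i + batch_size]
            for i in range(0, len(pool) - batch_size + 1, batch_size)]

datasets = {"imagenet": [("in", i) for i in range(8)],
            "aircraft": [("ac", i) for i in range(8)]}
pure = make_batches(datasets, 4, pure=True)
impure = make_batches(datasets, 4, pure=False)
```

Every pure batch contains samples from exactly one source, while impure batches draw from the shuffled pool of all sources.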
## Appendix C Confidence Interval Results
In Tab. 10, we provide the confidence intervals when the SupMoCo models were
evaluated on $600$ randomly sampled few-shot tasks from each domain in both
the ImageNet-only and all-datasets setups. Because there is inherent
randomness in task sampling, this helps to make a fair comparison across
methods while calculating the average rank metric.
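The intervals in Tab. 10 are consistent with the usual 95% normal-approximation confidence interval over per-task accuracies; a minimal sketch (the per-task accuracies below are synthetic, used only to show the shape of the computation):

```python
import math

def confidence_interval_95(task_accuracies):
    """Return (mean, half-width) of a 95% normal-approximation
    confidence interval over per-task accuracies."""
    n = len(task_accuracies)
    mean = sum(task_accuracies) / n
    var = sum((a - mean) ** 2 for a in task_accuracies) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)
    return mean, half_width

# Synthetic per-task accuracies for 600 tasks, alternating 60 and 66
accs = [60.0 if i % 2 == 0 else 66.0 for i in range(600)]
mean, hw = confidence_interval_95(accs)
# mean is 63.0; the half-width shrinks as 1/sqrt(600)
```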
## Appendix D Dataset Details
In this section, we provide a detailed description of Meta-Dataset [47]. It
consists of $10$ datasets from different domains which we will describe next.
Each dataset is divided into a set of disjoint training, validation and test
classes and we are only allowed to train using images corresponding to the
training splits from $8$ of these datasets. The other $2$ datasets are
reserved for testing only.
* •
ImageNet/ILSVRC-2012 [39] : ImageNet is a dataset of $1000$ classes containing
natural images which are split into $712-158-130$ for training-validation-
test.
* •
Aircraft [31] : Aircraft is a fine-grained dataset of aircraft images which
are split into $70-15-15$ for training-validation-test. All images are cropped
using the bounding box information associated with each image.
* •
Omniglot [26] : Omniglot is a dataset of images of handwritten characters
divided into $1623$ classes from $50$ different alphabets. The $1623$ classes
are split into $883-81-659$ for training-validation-test.
* •
Textures [9] : It is a collection of texture images in the wild and the
dataset is split into $33-7-7$ classes for training-validation-test.
* •
QuickDraw [23] : QuickDraw is a dataset of $50$ million doodle drawings across
$345$ categories which is divided into $241-52-52$ categories for training-
validation-test. For this dataset, we only use $2000$ samples per class to
speed up training time.
* •
Fungi [40] : It is a fine-grained dataset containing over $100000$ fungi
images and classes are split into $994-200-200$ for training-validation-test.
* •
VGG-Flower [32] : It is a dataset of natural images of flowers and split into
$71-15-16$ for training-validation-test.
* •
Birds [50] : A dataset for fine-grained classification of $200$ bird species
and the classes are split into $140-30-30$ for training-validation-test.
* •
MSCOCO [28] : MSCOCO is a popular object detection dataset containing 1.5
million objects across $80$ classes. For this task, individual images are
extracted by cropping using the bounding box associated with each object. This
dataset does not allow any images to be used for training and $80$ classes are
split into $40-40$ for validation and testing.
* •
Traffic-Sign [22] : It is a dataset of $50000$ images of traffic signs across
$43$ classes and the entire dataset is reserved for testing only.
Table 7: Performance comparison between MoCo and SupMoCo when trained using
ImageNet-only. SupMoCo clearly outperforms MoCo on tasks from all domains,
with an average difference of $7.5\%$.
| Method | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi |
|---|---|---|---|---|---|---|---|---|---|---|
| MoCo | 55.35 | 78.09 | 70.31 | 74.51 | 82.51 | 44.20 | 58.56 | 80.22 | 90.02 | 50.23 |
| SupMoCo | 62.96 | 81.48 | 84.89 | 78.42 | 88.59 | 52.18 | 68.42 | 84.69 | 93.56 | 55.39 |
Table 8: Performance comparison between MoCo and SupMoCo when trained using
all-datasets. SupMoCo does better than MoCo on all domains here as well, with
an average performance gap of $6.5\%$.
| Method | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi |
|---|---|---|---|---|---|---|---|---|---|---|
| MoCo | 53.96 | 79.48 | 69.61 | 83.71 | 83.93 | 43.03 | 70.02 | 80.20 | 91.29 | 53.89 |
| SupMoCo | 61.94 | 86.61 | 86.93 | 91.61 | 87.64 | 51.34 | 82.44 | 84.31 | 92.62 | 63.68 |
Table 9: Performance comparison when a batch contains samples from all
datasets (Impure Batch) vs. only from a particular dataset (Pure Batch)
during SupMoCo training.
| Batch Type | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi |
|---|---|---|---|---|---|---|---|---|---|---|
| Pure Batch | 50.60 | 76.39 | 69.81 | 78.24 | 77.01 | 43.36 | 75.78 | 85.73 | 85.98 | 48.41 |
| Impure Batch | 61.94 | 86.61 | 86.93 | 91.61 | 87.64 | 51.34 | 82.44 | 84.31 | 92.62 | 63.68 |
Table 10: Confidence intervals when the SupMoCo models trained using
ImageNet-only and all-datasets, respectively, were evaluated on $600$
few-shot tasks from each domain.
| Dataset | ImageNet | Aircraft | Birds | Omniglot | Textures | MSCOCO | QuickDraw | Traffic-Sign | VGG-Flower | Fungi |
|---|---|---|---|---|---|---|---|---|---|---|
| ImageNet-only | $62.96\pm 1.09$ | $81.48\pm 1.42$ | $84.89\pm 0.84$ | $78.42\pm 1.40$ | $88.59\pm 0.82$ | $52.18\pm 1.03$ | $68.42\pm 1.12$ | $84.69\pm 1.35$ | $93.56\pm 0.62$ | $55.39\pm 1.32$ |
| All-Datasets | $61.94\pm 1.04$ | $86.61\pm 0.83$ | $86.93\pm 0.74$ | $91.61\pm 0.65$ | $87.64\pm 0.93$ | $51.34\pm 1.02$ | $82.44\pm 0.58$ | $84.31\pm 0.98$ | $92.62\pm 0.76$ | $63.68\pm 1.12$ |
## Appendix E PyTorch Code
In Alg. 1, we provide a PyTorch implementation sketch of the SupMoCo algorithm
that was used for the fully-supervised setup. For the semi-supervised setup,
the code was similar with the major difference being — we only find positive
entries from the queue corresponding to those images for which we have label
information available. For others, we treat all queue elements as negative.
    # Additional parameters compared to MoCo
    # queue_y: a new queue to store labels (K,)
    # y: labels for query images
    # P: number of positives per class
    # T: total number of classes (0 .. T-1)

    # initialize
    f_k.params = f_q.params
    queue_y.fill_(T)

    for x in loader:  # load a minibatch x with N samples
        x_q = aug(x)  # a randomly augmented version
        x_k = aug(x)  # P positives per image
        q = f_q.forward(x_q)  # queries: NxC
        k = f_k.forward(x_k)  # keys: (N*P)xC
        k = k.detach()  # no gradient to keys
        # positive logits from batch: N x P
        l_pos = torch.mul(q.unsqueeze(1), k.reshape(N, P, C))
        l_pos = l_pos.sum(dim=2) / t
        # labels from queue: N x K,
        # each of the K values indicates positive or not
        yb = torch.nn.functional.one_hot(y, T + 1)
        yq = torch.nn.functional.one_hot(queue_y, T + 1)
        pos_y_q = torch.matmul(yb, yq.t())
        # sum of all positive features from queue: N x C
        pos_f_q = torch.matmul(pos_y_q, queue.t())
        # compute cosine similarity with q: N x 1
        pos_q = (torch.mul(q, pos_f_q) / t).sum(dim=1)
        # number of positives for each x_q: N x 1
        num_positives = P + pos_y_q.sum(dim=1)
        # combine batch and queue positives: N x 1
        l_pos = l_pos.sum(dim=1) + pos_q
        # divide by the number of positives per class
        l_pos /= num_positives
        # negative logits computation stays the same
        l_neg = torch.matmul(q, queue) / t
        # compute contrastive loss (Eq. 3) and update parameters
        # enqueue and dequeue images and labels, 1 per P positives
Algorithm 1 SupMoCo (PyTorch skeleton code)
Drexel University, Philadelphia, PA, USA
<EMAIL_ADDRESS>, <EMAIL_ADDRESS>
# Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers
Xinwei Zhao (0000-0002-4328-4846) and Matthew C. Stamm (0000-0002-3986-4039)
###### Abstract
Recently, physical domain adversarial attacks have drawn significant attention
from the machine learning community. One important attack proposed by Eykholt
et al. can fool a classifier by placing black and white stickers on an object
such as a road sign. While this attack may pose a significant threat to visual
classifiers, there are currently no defenses designed to protect against this
attack. In this paper, we propose new defenses that can protect against multi-
sticker attacks. We present defensive strategies capable of operating when the
defender has full, partial, and no prior information about the attack. By
conducting extensive experiments, we show that our proposed defenses can
outperform existing defenses against physical attacks when presented with a
multi-sticker attack.
###### Keywords:
Real-world adversarial attacks, Defenses, Classifiers, Deep learning
## 1 Introduction
Deep neural networks have been widely used for many visual classification
systems, such as autonomous vehicles [13, 35] and robots [38]. However, deep
neural networks are vulnerable to adversarial attacks [5, 6, 14, 17, 18, 21,
22, 24, 25, 27, 28, 33, 34]. By modifying the pixel values of an image, many
classifiers can be fooled.
Recently, attacks that can operate in the physical world have started to
attract increasing attention [1, 4, 11]. While some physical domain attacks
require crafting a new object [1, 11, 19], other attacks can fool the
classifiers by adding one or a few physical perturbations, such as printable
patches [4, 11] on or next to an object. The adversarial patch attack creates
one universal patch that can be used to attack an arbitrary object once it is
trained, regardless of scale, location and orientation [4]. The camouflage art
attack uses black and white stickers that are applied to an object such as a
traffic sign to make a classifier believe it is a different object [11].
Since these physical perturbations are very concentrated and confined to
small regions, it is easy for attackers to craft them and put the attack into
practice in the real world.
Previous research shows that defenses against digital domain attacks [2, 3, 7,
8, 9, 10, 12, 15, 16, 20, 22, 26, 29, 30, 32, 36] may not be able to defend
against physical domain attacks, such as the camouflage art attack, because
physical perturbations are usually stronger than those produced by digital
domain attacks. Some recent research has been done to defend against physical
domain attacks [7, 16, 20, 26, 37].
Figure 1: Attacked signs (a) & (d) as well as their Grad-CAM activation maps
before attack (b) & (e) and after attack (c) & (f).
Existing research, however, focuses on defending against adversarial patches
and does not translate to defending against other physical attacks like the
camouflage art attack (i.e., the black and white sticker attack). For
example, one approach to defending against the adversarial patch attack is to
first locate the perturbed area using an attention-based or gradient-based
model, and then remove or diminish these areas [16, 26, 37]. The
perturbations produced by multi-sticker attacks like the camouflage art
attack, however, cannot be detected the same way for several reasons. First,
the black and white stickers produced by the camouflage art attack are not
highly textured, and hence are unlikely to be detected via gradient-based
methods. Second, the camouflage art attack works in conjunction with the
scene content to redirect the classifier's decision instead of hijacking its
attention like the adversarial patch does. As a result, multi-sticker attacks
are unlikely to be identified using attention-based models.
An example of this phenomenon can be seen in Figure 1, which shows activation
maps produced by Grad-CAM [31] when presented with images before and after a
multi-sticker attack. When examining the activation maps of the pedestrian
crossing sign before an attack shown in Figure 1(b) and after the attack shown
in Figure 1(c), we can see that the attack has shifted the classifier’s
attention off of attacked sign. Defenses that operate by removing or altering
these regions will have no effect on the attack. Alternatively, from examining
the activation maps of an unattacked speed limit sign in Figure 1(e) and it’s
attacked counterpart in Figure 1(f), the classifier is paying attention to
nearly the entire sign. Defenses that operate by removing or distorting these
regions will degrade the image so severely that the classifier will be unable
to operate.
Furthermore, it is important for defenses against physical domain attacks to
be evaluated on real images of physically attacked objects. Digital
simulations of physical attacks are sometimes used for evaluation due to the
ease of creating a dataset, for example, digitally adding perturbations that
simulate a physical attack into an image. However, these digital simulations
do not capture many effects that occur during imaging, such as lighting
conditions, the curvature of surfaces, focus blur, sampling effects, etc. In
practice, phenomena such as these can impact how a camera captures physical
domain perturbations, and can potentially affect the success of defenses.
Defenses that are highly tuned to features of “pristine” digital simulations
of attacks may be less successful when confronted with real images of
physically attacked objects or scenes.
In this paper, we propose a new defense strategy that does not rely on
attention models to identify attacked image regions and can successfully
defend against multi-sticker attacks, like the camouflage art attack. Our
proposed defense operates by first creating defensive masks that maximize the
likelihood of covering the locations of the perturbations, then mitigating
the effect of the perturbations through targeted modifications, and finally
making a decision based on the defended images.
Our Contributions:
* •
We propose a set of new defenses that can protect against multi-sticker
physical domain attacks such as the camouflage art attack by Eykholt et al.
[11]. To the best of our knowledge, no existing defenses are designed to
defend against such attacks.
* •
We present practical defenses that can be utilized depending on whether the
defender has full knowledge of the attack (non-blind), partial information
about the attack (semi-blind), or no information regarding the attack (blind).
* •
We create a new database of front-facing photos of 90 physically attacked
signs using the camouflage art attack and use this database to assess our
defenses.
* •
We demonstrate that our proposed defenses outperform other state-of-the-art
defenses against physical attacks, such as the digital watermark defense [16],
when presented with multi-sticker attacks.
## 2 Additive Physical Domain Attacks
Adversarial attacks pose an important threat against deep neural networks [1,
4, 5, 6, 11, 14, 17, 18, 19, 21, 22, 24, 25, 27, 28, 33, 34]. Some physical
domain attacks, like the adversarial patch [4] and the camouflage art attack
[11], have shown that adding perceptible but localized patches to an object
can make a classifier identify it as a different object. We now briefly
describe how these two physical domain attacks are launched at a classifier
$C$ using attack target class $t^{\prime}$.
Adversarial patch: To generate an adversarial patch $A^{\prime}$, the authors
of [4] use an operator $O(I,A,\theta_{l},\theta_{t})$ that applies a
transformation $\theta_{t}$ to a given patch $A$ and pastes it into an image
$I$ at location $\theta_{l}$. Similar to an Expectation over Transformation
(EoT) attack [1], the adversarial patch can be obtained by optimizing over
sampled transformations and locations,
$A^{\prime}=\max_{A}\mathbb{E}_{I\sim\mathcal{I},\theta_{l}\sim\Theta_{L},\theta_{t}\sim\Theta_{T}}C(t^{\prime}|O(I,A,\theta_{l},\theta_{t}))$
(1)
where $\mathcal{I}$ denotes the training image dataset, $\Theta_{T}$ denotes
the distribution of transformations and $\Theta_{L}$ denotes the distribution
of locations. Once the patch is trained, it can universally attack any
object.
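As an illustration of the operator $O$, the following NumPy sketch pastes an (assumed already-trained) patch into an image at a sampled location; rotation and scaling from $\Theta_{T}$ are omitted, and all names here are ours:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Simplified O(I, A, theta_l): paste patch A into image I at
    location theta_l = (top, left). Transformations are omitted."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))       # stand-in for a real image
patch = np.ones((50, 50, 3))            # a trained patch would go here
top, left = rng.integers(0, 224 - 50, size=2)  # sampled location
attacked = apply_patch(image, patch, top, left)
```

In the real attack, this paste operation sits inside the expectation of (1) and is optimized end-to-end rather than applied once.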
Camouflage art attack: Launching the camouflage art attack involves finding a
single set of perturbations $P$ that are capable of fooling a classifier under
different physical conditions. This attack, which produces perturbations for a
given pairing of source and target class, was demonstrated by using it to fool
a classifier trained to distinguish between different US traffic signs. Let
$H^{v}$ denote the distribution of the image of an object under both digital
and physical transformations, and $h_{i}$ denote each sample from this
distribution. The attack perturbations can be obtained via optimizing,
$\operatorname*{argmin}_{P}\lambda||M_{h}\cdot P||_{p}+\mathbb{E}_{h_{i}\sim
H^{v}}J(C(h_{i}+G(M_{h},P)),t^{\prime})$ (2)
where $M_{h}$ is the mask that applies spatial constraints to the perturbation
(i.e., ensures the perturbation is within the surface area of the object),
$\lambda$ is a hyper-parameter that regularizes the distortion, $J(\cdot)$ is
the loss function that measures the difference between the classifier's
prediction of the attacked object and the target class, $G(\cdot)$ is the
alignment function that maps transformations on the object to transformations
on the perturbation, and $||\cdot||_{p}$ denotes the $\ell_{p}$ norm.
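To make the structure of (2) concrete, the following sketch evaluates the objective for a candidate perturbation, folding the classifier $C$ and target $t^{\prime}$ into a single supplied loss callable; all stand-ins here are toy choices, not the differentiable pipeline of [11]:

```python
import numpy as np

def rp2_objective(P, M, samples, J, G, lam=0.5, p=1):
    """Value of the objective in Eq. (2): the regularizer
    lambda * ||M . P||_p plus the expected loss over sampled views
    h_i. J and G are supplied callables standing in for the
    classifier loss and the alignment function."""
    reg = lam * np.linalg.norm((M * P).ravel(), ord=p)
    exp_loss = np.mean([J(h + G(M, P)) for h in samples])
    return reg + exp_loss

# Toy stand-ins: G simply masks the perturbation, J sums pixel values.
M = np.zeros((4, 4)); M[:2, :2] = 1.0    # sticker region
P = np.full((4, 4), 2.0)                 # candidate perturbation
samples = [np.zeros((4, 4)), np.ones((4, 4))]
G = lambda M, P: M * P
J = lambda x: float(x.sum())
val = rp2_objective(P, M, samples, J, G, lam=0.5, p=1)
# reg = 0.5 * 8 = 4.0; expected loss = (8 + 24) / 2 = 16.0; val = 20.0
```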
## 3 Problem Formulation
We assume that the system under attack wishes to analyze some scene $S(x,y)$
containing an object to be classified. To do this, the system will capture a
digital image $I(x,y)$ of the scene, which will then be provided to a pre-
trained classifier $C(\cdot)$ which maps the image into one of $N$ classes
$t\in\mathcal{T}$. For the purposes of this work, we assume that if no
adversarial attack is launched, then the image provided to the classifier is
$I=S$.
An attacker may attempt to fool the classifier by launching a physical domain
attack $\alpha(\cdot)$. This corresponds to physically modifying an object
within the scene by adding adversarial perturbations $P$ to it. Since these
perturbations must be physically added to the scene, we assume that they will
be spatially localized to one or more regions of the object under attack.
These regions can be specified by a spatial mask $M$, where $M(x,y)=1$
corresponds to a perturbation being present at spatial location $(x,y)$ and
$M(x,y)=0$ corresponds to no perturbation occurring at $(x,y)$. As a result,
we can express a physically attacked scene $\alpha(S)$ as
$\alpha(S(x,y))=(1-M(x,y))S(x,y)+M(x,y)P(x,y).$ (3)
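Equation (3) maps directly to code; the sketch below applies a white-sticker perturbation to a toy grayscale scene (values and shapes are illustrative):

```python
import numpy as np

def attack_scene(S, M, P):
    """alpha(S) = (1 - M) * S + M * P, per Eq. (3)."""
    return (1 - M) * S + M * P

S = np.full((8, 8), 0.5)                 # grayscale scene
M = np.zeros((8, 8)); M[2:4, 2:4] = 1.0  # sticker region
P = np.ones((8, 8))                      # a white sticker
attacked = attack_scene(S, M, P)
```

Outside the support of $M$, the scene is untouched; inside it, the sticker values replace the scene entirely.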
In this paper, we assume that the adversarial perturbations will take the form
of black and white stickers added to an object as proposed by Eykholt et al.
[11], i.e. $P(x,y)=\\{black,white\\}$. Other physical domain attacks, such as
the adversarial patch [4] can still be modeled using (3) by allowing $P(x,y)$
to correspond to the full range of color values. Since the majority of the
defenses proposed in this paper do not rely on knowledge of the color values
of $P$, it is likely that these defenses can be used against other physical
domain attacks such as the adversarial patch. We note that this work only
addresses physical domain attacks that involve modifying an existing physical
object, and not attacks that involve the creation of a new physical object
such as synthesized 3D objects [1] and printed photos or posters [11, 19].
## 4 Knowledge Scenarios
To defend a classifier, we first assume that the defender has full access to
the classifier and implicitly knows the $N$ classes that it is trained to
distinguish between. We examine three scenarios corresponding to different
levels of knowledge available to the defender.
Non-blind: We assume that the defender knows whether an object is attacked or
not, the perturbation masks $M$ that indicate the perturbed areas, and the
perturbations $P$. The locations of the perturbations can therefore be
identified directly.
Semi-blind: We assume that the defender does not know if the object is
attacked or not. We also assume that if the object was attacked, the defender
does not know the perturbation masks $M$. However, the defender knows the
attack method $\alpha(\cdot)$. Therefore, for any source A and target B
pairing, the defender can obtain a perturbation mask $M_{\text{A, B}}$ by
launching the attack themselves.
Blind: We assume that the defender has zero knowledge. Specifically, the defender
does not know whether an object is attacked or not. We also assume that if the
object was attacked, the defender does not know the perturbation regions.
Additionally, the defender does not know the attack method.
## 5 Proposed Defenses
To defend against a physical domain attack, we propose a set of defenses based
on the amount of knowledge available to the defender. These defenses attempt
to interfere with or remove adversarial multi-sticker perturbations to
mitigate their effects. If the defender is able to leverage information about
the potential locations of these perturbations, defenses are guided to these
regions. Otherwise, our defenses are designed with the intuition that
adversarial perturbations are more fragile to distortions than the underlying
object that they are attacking.
Our defensive strategy is composed of three major steps. First, we obtain a
defensive mask $R$ or set of defensive masks $\mathcal{R}$ indicating
candidate areas to apply defenses. Second, we launch a local defense in
regions indicated by a defensive mask to produce a defended image $\delta$.
When our first step results in a set of defensive masks, local defenses can
either be sequentially applied in conjunction with each mask to produce a
single defended image, or they can be applied in parallel to produce a set of
defended images. In the third step, the defended image or images are provided
to the classifier. If a set of defended images are produced by the second
step, a fusion strategy is employed to produce a single classification
decision. In what follows, we discuss each step of our proposed defenses in
detail.
### 5.1 Defensive Mask Selection
The goal of each defensive mask is to ensure that defensive distortions are
only applied to small regions of the image, since each perturbation produced
by the multi-sticker attack is still confined to a small region. We do not
want to change the ground truth object. Let $R(x,y)\in\\{0,1\\}$ denote a
defensive mask, where 1 indicates an area to be defended and 0 indicates
ground truth content to be left untouched. We now discuss how defensive masks
are obtained.
Oracle Approach: If the perturbation mask $M$ is known, such as in the non-
blind scenario, we simply let $R=M$.
Estimated Defensive Mask Sets: In semi-blind scenarios, the defender may know
the attack method $\alpha$, but not the perturbation masks, nor whether an
attack was launched at all. The defender can, however, leverage knowledge of
$\alpha$ to create a set of estimated defensive masks.
To do this, first we assume that $I$ is an image of an attacked scene. The
attack’s target class $\hat{t}$ can be inferred by using $C$ to classify the
image such that $\hat{t}=C(I)$. Next, the defender can create their own
implementation of $\alpha$ and use it to recreate an attack aimed at moving
true class $j$ to target class $\hat{t}$. The attack’s perturbation mask can then
be used as the estimated defensive mask $R_{j,\hat{t}}$ for source $j$ and
target $\hat{t}$. This process can be repeated for all $j\in\mathcal{T}$ such that
$j\neq\hat{t}$ to produce the set of estimated masks
${\mathcal{R}_{\hat{t}}=\\{R_{1,\hat{t}},\ldots,R_{\hat{t}-1,\hat{t}},R_{\hat{t}+1,\hat{t}},\ldots,R_{N,\hat{t}}\\}}$.
To reduce computational costs while launching the defense, the set
$\mathcal{R}_{\hat{t}}$ can be precomputed for each target class. As the
number of classes grows, however, the cost of constructing the estimated mask
sets and launching the defense may become high. To address this, the defender
can use a subset of the defensive masks instead of every single mask. We
propose two methods to form these subsets.
Ranked Selection: The defender can utilize class activations to guide the
selection of the subset of defensive masks to use. Since physical attacks
operate by constraining perturbations to small areas to avoid suspicion, it is
reasonable to assume these perturbations push the object just across the
boundary of its true class. Therefore, the true class of an attacked image
most likely shows up in the top few activated classes. To guide the selection
of defensive masks, first we assume that a scene is always under attack
(regardless of whether this is true or not) and treat the class with the
highest activation as the target class. The true class then lies among the
remaining classes, which are ranked according to their activation scores. The
subset of $k$ defensive masks is then chosen as the set of masks created using
the assumed target class (i.e. the class with the highest activation) and the
$k$ top candidates for the true source class (i.e. the classes with the second
highest through $k+1$ highest activations). By doing this, the defender can
control the computation cost of the defense while increasing the chance that
the most useful defensive masks are utilized.
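A minimal sketch of the ranked selection step, assuming the estimated masks have been precomputed into a lookup structure `mask_bank` (a name we introduce here for illustration), where `mask_bank[t_hat][j]` holds $R_{j,\hat{t}}$:

```python
import numpy as np

def ranked_mask_selection(activations, mask_bank, k):
    """Treat the most-activated class as the target t_hat, rank the
    remaining classes by activation, and return the k masks for the
    top-k candidate true-source classes."""
    order = np.argsort(activations)[::-1]  # classes, highest score first
    t_hat = order[0]                       # assumed attack target
    sources = order[1:k + 1]               # top-k candidate true classes
    return [mask_bank[t_hat][j] for j in sources]
```

Only the class activations from a single forward pass are needed, so the selection itself adds negligible cost.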
Random Selection: A heuristic way to form a subset of defensive masks is
through random selection. Since each selected mask is tied to the target
class, each can defend a portion of the image; grouping several defensive
masks may therefore increase the chance of a successful defense.
Randomly Chosen Regions: In blind scenarios, the defender cannot leverage any
prior information about the attack or possible perturbation locations. In
these situations, we create a set of defensive masks made by randomly choosing
defensive regions. Our intuition is that if we use many random defensive
masks, several of them will interfere with the adversarial perturbations. Each
mask is made by randomly selecting $m$ different $w\times w$ windows to apply
localized defenses. We use two different approaches for randomly choosing
these regions:
Overlapping: The locations of each window are chosen uniformly at random from
throughout the image area. As a result, some windows may overlap with one
another.
Non-overlapping: In this approach, we ensure that defensive regions are spread
throughout the image by disallowing overlaps. We do this by first dividing
the defensive mask into non-overlapping $w\times w$ blocks, then randomly
choosing $m$ of these blocks as defensive regions.
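The two random-mask strategies above can be sketched as follows (the function name and parameter layout are ours; the actual choices of $w$ and $m$ are examined experimentally in Section 7):

```python
import numpy as np

def random_defensive_mask(h, w_img, m, w, overlap=False, rng=None):
    """Build one binary defensive mask with m defended w-by-w windows.

    overlap=True : window corners drawn uniformly at random (may overlap).
    overlap=False: the image is tiled into w-by-w blocks and m distinct
                   blocks are chosen, so the defended area is exactly m*w*w.
    """
    if rng is None:
        rng = np.random.default_rng()
    mask = np.zeros((h, w_img), dtype=np.uint8)
    if overlap:
        for _ in range(m):
            y = rng.integers(0, h - w + 1)
            x = rng.integers(0, w_img - w + 1)
            mask[y:y + w, x:x + w] = 1
    else:
        blocks = [(by, bx)
                  for by in range(0, h - w + 1, w)
                  for bx in range(0, w_img - w + 1, w)]
        for i in rng.choice(len(blocks), size=m, replace=False):
            by, bx = blocks[i]
            mask[by:by + w, bx:bx + w] = 1
    return mask
```

Calling this repeatedly yields a set of independent random masks, as used in the blind-scenario defenses.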
### 5.2 Local Defense Strategies
After the defensive masks are obtained, we can apply local defenses to image
regions specified by these masks. For clarity, we first show how to obtain the
defended image using a single defensive mask, then adapt the proposed defenses
to accommodate multiple defensive masks.
Given one defensive mask, we propose two methods to defend against the attack.
Targeted perturbation remapping: The idea is to interfere with the
perturbations rather than remove them. Specifically, we use remapping
functions to destroy the spatial correlation between perturbed regions. Let $\phi(\cdot)$
be the remapping function, then a single defended image can be expressed as,
$\delta(x,y)=\begin{cases}I(x,y)&R(x,y)=0\\\ \phi(I(x,y))&R(x,y)=1\end{cases}$ (4)
In this work, we consider three mapping functions:
RemapW: Change pixels to white.
RemapB: Change pixels to black.
RemapT: Pick a threshold $\tau$, change pixels to black if the luminance value
is above the threshold and to white if below the threshold.
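A minimal sketch of the three remapping functions applied through Eq. (4), assuming a grayscale image with luminance values in $[0,1]$ (the function name and `mode` flags are ours):

```python
import numpy as np

def remap_defense(image, mask, mode="T", tau=0.5):
    """Eq. (4): keep pixels where R = 0; remap pixels where R = 1.

    mode "W": remap to white (1.0)        -- RemapW
    mode "B": remap to black (0.0)        -- RemapB
    mode "T": threshold on luminance tau  -- RemapT (black above tau,
              white below, as described in the text)
    """
    out = image.copy()
    sel = mask == 1
    if mode == "W":
        out[sel] = 1.0
    elif mode == "B":
        out[sel] = 0.0
    else:
        out[sel] = np.where(image[sel] > tau, 0.0, 1.0)
    return out
```

Pixels outside the defensive mask pass through unchanged, which is what keeps the classification drop low on unattacked content.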
Localized region reconstruction: The idea is to diminish or remove the effect
of the perturbation by reconstructing the perturbed local regions of the input
image from the other parts of the image. Since the perturbations are confined
to small regions, an inpainting algorithm can be used to reconstruct the
image.
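In practice an off-the-shelf inpainting routine would be used here; as a dependency-free stand-in, the sketch below (our own construction, not the paper's implementation) fills defended pixels by iteratively averaging their neighbours:

```python
import numpy as np

def naive_inpaint(image, mask, iters=50):
    """Crude stand-in for localized region reconstruction: initialize
    defended pixels with the mean of the clean pixels, then repeatedly
    replace each defended pixel with the mean of its 4-neighbours."""
    out = image.astype(float).copy()
    sel = mask == 1
    out[sel] = out[~sel].mean()          # initial guess from clean pixels
    for _ in range(iters):
        padded = np.pad(out, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[sel] = neigh[sel]            # only defended pixels are updated
    return out
```

A real inpainting method would additionally propagate edges and texture, but the principle is the same: the defended region is resynthesized from surrounding content rather than remapped.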
The defenses discussed above can be easily adapted for multiple defensive
masks. Let $\psi(\cdot)$ denote the defense. For a set of defensive masks
$\mathcal{R}$ that comprises $k$ masks,
$\mathcal{R}=\\{R_{1},R_{2},...,R_{k}\\}$, we can either obtain one single
final defended image via sequential defense, or obtain a sequence of
individually defended images via parallel defense and then fuse the results.
Now we discuss sequential and parallel defense individually.
Sequential defense: We attempt to make the defense stronger by recursively
applying it to obtain a single defended image. At iteration $\ell$, the
defense is applied to the output of the previous step using the $\ell^{th}$
defensive mask, i.e. $\psi_{\ell}(\cdot)=\psi(\cdot\,,R_{\ell})$ and
$\delta_{\ell}=\psi_{\ell}(\delta_{\ell-1})$ with $\delta_{0}=I$. The final
defended image is obtained by sequentially applying the defense with each of
the $k$ individual defensive masks via
$\delta=(\psi_{k}\circ\psi_{k-1}\circ\ldots\circ\psi_{1})(I)$ (5)
Parallel defense: The idea is to generate many copies of the defended image,
with each copy defending one part of the input image. Using the $\ell^{th}$
defensive mask, we define the $\ell^{th}$ defended image as
$\delta_{\ell}=\psi(I,R_{\ell})$; using $k$ defensive masks we thus get $k$
individually defended images, $\\{\delta_{1},\delta_{2},\ldots,\delta_{k}\\}$.
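The sequential composition of Eq. (5) and its parallel counterpart can be sketched as follows, where `psi` is any local defense taking an image and a mask:

```python
def sequential_defense(image, masks, psi):
    """Eq. (5): apply the defense once per mask, each time to the
    previous step's output, yielding a single defended image."""
    delta = image
    for R in masks:
        delta = psi(delta, R)
    return delta

def parallel_defense(image, masks, psi):
    """Defend the original image once per mask, yielding k
    individually defended images to be fused later."""
    return [psi(image, R) for R in masks]
```

The sequential form feeds each step the previous output, so defenses can compound; the parallel form keeps each defended copy independent so a fusion rule can arbitrate among them.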
### 5.3 Defensive Classification
After applying local defenses, we need to use the classifier to make a final
decision on the defended image or images. We propose two decision making
strategies.
Single defended image: After the sequential defense, the defender will obtain
a single defended image. We simply use the classifier to classify the defended
image, $t=C(\delta)$.
Multiple defended images: The parallel defense will result in a sequence of
defended images. The defender can use the classifier to get a fused decision
by combining the decisions of the individually defended images. We propose two
fusion strategies.
Majority vote (MV): Use the classifier to make a decision with each individual
defended image, $t_{\ell}=C(\delta_{\ell})$, then take a majority vote over
all decisions,
$t=\operatorname*{argmax}_{n\in N}\sum_{\ell=1}^{k}\mathbbm{1}(C(\delta_{\ell})=t_{n})$ (6)
where $\mathbbm{1}$ is the indicator function.
Softmax fusion (SF): Let $\mathbf{v^{(\ell)}}$ denote the softmax output of
the classifier for the $\ell^{th}$ defended image,
$\mathbf{v}^{(\ell)}=C_{softmax}(\delta_{\ell})$, next add the softmax output
of each of the $k$ defended images to form a single vector $\mathbf{v}$,
$\mathbf{v}=\sum_{\ell=1}^{k}\mathbf{v^{(\ell)}}$ (7)
then take the class corresponding to the largest value in $\mathbf{v}$ as the
final decision,
$t=\operatorname*{argmax}_{n\in N}v_{n}$ (8)
where $v_{n}$ is the $n^{th}$ element in the vector $\mathbf{v}$.
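Both fusion rules, Eq. (6) and Eqs. (7)-(8), can be sketched directly:

```python
import numpy as np
from collections import Counter

def majority_vote(decisions):
    """Eq. (6): return the most frequent class among the k decisions."""
    return Counter(decisions).most_common(1)[0][0]

def softmax_fusion(softmax_outputs):
    """Eqs. (7)-(8): sum the k softmax vectors, pick the argmax class."""
    v = np.sum(softmax_outputs, axis=0)
    return int(np.argmax(v))
```

Softmax fusion retains each image's confidence information, while majority vote discards it; this difference is visible in the experimental tables, where SF often edges out MV.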
## 6 Evaluation Metrics
When formulating our evaluation metrics, we let $t^{*}$ denote the ground
truth class of a scene. Additionally, we let $\pi_{A}$ denote the a priori
probability that an attack is launched against a scene.
Classifier: To evaluate the baseline performance of the classifier $C(\cdot)$,
we calculate the classification accuracy as the probability that the image of
a scene is correctly classified as its ground truth class,
$\text{CA}=Pr(C(I)=t^{*}|I=S)$ (9)
Attack: To evaluate the baseline performance of the attack, we calculate the
targeted attack success rate (T-ASR) and the untargeted attack success rate
(U-ASR).
T-ASR is defined as the probability that the image of an attacked scene is
classified as the target class,
$\text{T-ASR}=Pr(C(I)=t^{\prime}|I=\alpha(S))$ (10)
U-ASR is defined as the probability that the image of an attacked scene is
classified as any other class than the true class,
$\text{U-ASR}=Pr(C(I)\neq t^{*}|I=\alpha(S))$ (11)
Defense: To evaluate the performance of our proposed defenses, we calculate
the Defense Rate (DR) for an attacked scene, the Classification Drop (CD) for
an unattacked scene, and the Post-Defense Accuracy (PDA) for any scene.
DR is defined as the probability that the defended image of a scene is
classified as the true class, given that it is an attacked scene and its image
was not classified as the true class before the defense,
$\text{DR}=Pr(C(D(I))=t^{*}|I=\alpha(S),C(I)\neq t^{*})$ (12)
CD is defined as the drop in the probability that the image of an unattacked
scene is correctly classified, caused by applying the defense,
$\text{CD}=\text{CA}-Pr(C(D(I))=t^{*}|I=S)$ (13)
PDA is defined as the probability that the image of any scene is correctly
classified as the true class after the defense,
PDA $\displaystyle=(1-\pi_{A})Pr(C(D(I))=t^{*}|I=S)$
$\displaystyle+\pi_{A}Pr(C(D(I))=t^{*}|I=\alpha(S))$ (14)
When $\text{U-ASR}=1$, using equations (11), (12) and (13), equation (14) can be expressed as,
$\text{PDA}=(1-\pi_{A})(\text{CA}-\text{CD})+\pi_{A}\text{DR}$ (15)
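As a quick sanity check of Eq. (15): with $\pi_{A}=0.5$, $\text{CA}=1.0$, zero classification drop, and a defense rate of 0.9222 (the non-blind RemapT values reported later in Table 2), the formula reproduces the reported post-defense accuracy:

```python
def post_defense_accuracy(pi_a, ca, cd, dr):
    """Eq. (15), valid when U-ASR = 1:
    PDA = (1 - pi_A) * (CA - CD) + pi_A * DR."""
    return (1 - pi_a) * (ca - cd) + pi_a * dr

# Non-blind RemapT values (Table 2), with pi_A = 0.5 and CA = 1.0:
pda = post_defense_accuracy(pi_a=0.5, ca=1.0, cd=0.0, dr=0.9222)  # 0.9611
```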
## 7 Experimental Results
To evaluate the performance of our proposed defenses, we conducted a series of
experiments. The physical attack we attempt to defend against is the
camouflage art attack proposed by Eykholt et al. [11]. The classifier we used
to evaluate the proposed defense was trained to differentiate 17 common US
traffic signs using the LISA traffic sign database [23], and was reported to
achieve 91% classification accuracy in the original paper. We started by
making a dataset composed of photos of
unattacked ground truth source signs and physical attacked signs. Then we
demonstrated the effectiveness of the proposed defense method under the three
scenarios we discussed in Section 4. We assume $\pi_{A}=0.5$ in all scenarios.
### 7.1 Dataset
To the best of our knowledge, no existing database is made specifically for
physical attacks, particularly the camouflage art attack. A physical attack
database should be constructed from photos of the physically attacked objects,
because we empirically found that defending against physical perturbations
differs substantially from defending against digital simulations. One reason
is that the many effects introduced while capturing images of physically
attacked objects, such as surface curvature, focus blur, sampling effects, and
sensor noise, result in significant discrepancies between physical
perturbations and their digital approximations. It is therefore important
to create a new database to fill this gap and benefit future research in the
community.
To make the database, we first purchased six US road signs from among the
classes listed in Table 1, all of which the LISA-CNN is trained to distinguish
between. These six signs are indicated in Table 1 as ‘source’ signs.
To create training data for the attack, and assess the baseline performance of
the LISA-CNN, we first captured a set of images of the six unattacked signs in
our possession. This was done by photographing each sign at angles running
from $-50$ to $+50$ degrees in increments of $10$ degrees, creating a set of
$66$ images of unattacked signs.
Next, we launched a series of multi-sticker attacks against the six signs in
our possession, using each of the 15 remaining classes listed in Table 1 as
the attack’s target. This was done by following the attack protocol described
in [11]. For each pair of source and target signs, we first created a digital
copy of the attacked sign. This digital copy was projected onto the
corresponding physical copy of the source sign, then black and white stickers
were placed on the sign in regions indicated by the digitally attacked
version. Front facing images of all of the attacked signs were captured, then
cropped to approximately $340\times 340$ pixels and saved as PNG files. This
resulted in a set of 90 images of physically attacked signs, each with a
different source-target class pairing. The database is publicly available at
https://drive.google.com/drive/folders/1qOmSubSOVY8JzB3KfXhDQ38ihoY5GExK?usp=sharing.
### 7.2 Baseline Evaluation of the Classifier and Attack
To assess the baseline classification accuracy of the LISA-CNN classifier
trained by Eykholt et al., we evaluated its performance on the unattacked
signs captured as part of our database. In this evaluation, the LISA-CNN
achieved $100\%$ classification accuracy. We note that Eykholt et al. reported
a $91\%$ classification accuracy during their evaluation of this trained
classifier. In this paper, when reporting metrics that depend on
classification accuracy, we use the value that we obtained, since it is
measured on the same set of road signs used in the attack set. Furthermore,
this corresponds to more challenging test conditions for our defense, since
perfect performance requires bringing the defense rate up to this higher
classification accuracy.
Next, we measured the baseline performance of the attack by using the LISA-CNN
to classify the images of physically attacked signs in our database. Our
implementation of the camouflage art attack achieved a $0.9556$ targeted
attack success rate (T-ASR) and a $1.0000$ untargeted attack success rate
(U-ASR). This result verifies that we were able to reproduce the attack, and
that this attack can successfully fool the classifier.
Table 1: Source and Target Traffic signs. S denotes “source” and T denotes
“target”.
Category | Sign Name | Category | Sign Name
---|---|---|---
S & T | crossing | T | added lane
S & T | stop | T | keep right
S & T | yield | T | lane ends
S & T | signal ahead | T | stop ahead
S & T | speed limit 25 | T | turn right
S & T | speed limit 45 | T | school / limit 25
T | merge | T | speed limit 30
T | school | T | speed limit 35
Table 2: Non-blind evaluation of our proposed defenses.
Proposed defense | DR | CD | PDA
---|---|---|---
RemapW | 0.4339 | 0.0000 | 0.7170
RemapB | 0.4556 | 0.0000 | 0.7283
RemapT | 0.9222 | 0.0000 | 0.9611
Reconst | 0.6778 | 0.0000 | 0.8389
### 7.3 Non-Blind
In our first set of experiments, we evaluated our defenses’ performance in the
non-blind scenario. We used the digital versions of the perturbation masks
obtained while training the attack as the oracle defensive masks known to the
defender. While these digital masks are not perfect ground-truth locations of
the actual perturbations, they are sufficiently close for the purposes of our
experiment.
Using these oracle masks, we evaluated the three perturbation remapping
defenses, remap-to-white (RemapW), remap-to-black (RemapB), and thresholded
remapping (RemapT), as well as the targeted region reconstruction (Reconst)
defense. We note that the
classification drop is always zero in this experiment because the defender
always knows if an attack is present and can choose when not to apply the
defense.
Table 2 shows the performance of our defenses in the non-blind scenario.
Thresholded perturbation remapping achieved the strongest performance, with
the highest defense rate of 0.9222 and post-defense accuracy of 0.9611. Since both the
remap-to-white and remap-to-black strategies will only affect approximately
half of the stickers added to an object, it is reasonable to expect that the
thresholded perturbation remapping approach outperforms these approaches.
The reconstruction approach achieved the second-highest performance. We
believe its lower defense rate is predominantly due to the slight misalignment between the
ideal digital perturbation masks and the true locations of the physical
perturbations in the attacked images.
Table 3: Evaluation of proposed defenses in semi-blind scenario
Defense Strategies | DR | CD | PDA | Defense Strategies | DR | CD | PDA
---|---|---|---|---|---|---|---
RemapW-Par(6) + MV | 0.3989 | 0.3333 | 0.5328 | RemapW-Par(6) + SF | 0.4186 | 0.1667 | 0.6260
RemapB-Par(6) + MV | 0.0794 | 0.1667 | 0.4563 | RemapB-Par(6) + SF | 0.0690 | 0.0000 | 0.5348
RemapT-Par(6) + MV | 0.5174 | 0.3333 | 0.5921 | RemapT-Par(6) + SF | 0.6453 | 0.1667 | 0.7393
Reconst-Par(6) + MV | 0.3560 | 0.0000 | 0.6780 | Reconst-Par(6) + SF | 0.3514 | 0.0000 | 0.6757
Reconst-Seq-Rand(1) | 0.2815 | 0.0556 | 0.6130 | Reconst-Seq-Rand(4) | 0.6200 | 0.1112 | 0.7544
Reconst-Seq-Rand(2) | 0.4237 | 0.0556 | 0.6840 | Reconst-Seq-Rand(5) | 0.6648 | 0.1389 | 0.7630
Reconst-Seq-Rand(3) | 0.5350 | 0.0834 | 0.7250 | Reconst-Seq(6) | 0.7000 | 0.1667 | 0.7667
Reconst-Seq-Rank(1) | 0.3780 | 0.0000 | 0.6890 | Reconst-Seq-Gtd(1) | 0.6778 | 0.0000 | 0.8389
Reconst-Seq-Rank(2) | 0.6336 | 0.0000 | 0.8168 | Reconst-Seq-Gtd(2) | 0.6623 | 0.0333 | 0.8145
Reconst-Seq-Rank(3) | 0.7001 | 0.0000 | 0.8501 | Reconst-Seq-Gtd(3) | 0.6855 | 0.0667 | 0.8094
Reconst-Seq-Rank(4) | 0.6667 | 0.0000 | 0.8333 | Reconst-Seq-Gtd(4) | 0.7022 | 0.1000 | 0.8011
Reconst-Seq-Rank(5) | 0.7000 | 0.0000 | 0.8500 | Reconst-Seq-Gtd(5) | 0.7044 | 0.1333 | 0.7856
Other Methods | DR | CD | PDA | Other Methods | DR | CD | PDA
DW [16] | 0.2222 | 0.0000 | 0.6111 | Median Filter (kernel=7) [36] | 0.3777 | 0.3333 | 0.5222
JPEG (QF=10) [10] | 0.1333 | 0.0000 | 0.5667 | Local Smooth [26] | 0.0000 | 0.0000 | 0.5000
### 7.4 Semi-blind
To evaluate our defenses in the semi-blind scenario, we created a set of
estimated defensive masks for each of the 15 possible target classes. Each set
of defensive masks contained six source-target pairings, i.e. one for each
source sign in our database that an attack could be launched against.
Next, we used these sets of defensive masks to evaluate our relevant defensive
strategies. The results of these experiments are shown in Table 3. We adopt
the notation Par and Seq to denote that a defense was applied either in
parallel or sequentially, and ($k$) to denote the number of defensive masks
used for defense. When defenses were applied in parallel, we use the notation
MV to denote majority vote fusion and SF to denote softmax fusion. We use Rand
and Rank to denote the random or ranked mask selection strategy. Additionally,
we use Gtd to denote a special “Guaranteed scenario” in which the defensive
mask with the correct source-target pair was always included and the remaining
masks were randomly chosen.
Results in Table 3 show that for any mask selection strategy, sequential
reconstruction outperforms both parallel reconstruction and perturbation
remapping. The defense using three defensive masks selected using ranked
activation (Reconst-Seq-Rank(3)) outperforms all other strategies and achieved
the highest defense rate of 0.7001, highest post-defense accuracy of 0.8501,
and zero classification drop on unattacked images. We note that Reconst-Seq-
Rank(3) achieves statistically the same performance as Reconst-Seq-Rank(5),
but is more computationally efficient since it uses fewer masks.
Comparisons of Defensive Mask Selection Strategies: For all values of $k$, the
ranked selection strategy achieved a higher defense rate and post-defense
accuracy than the random selection strategy. This shows that using a well
chosen subset of defensive masks improves our system’s performance.
Additionally, it reinforces our observation that important information about
the true source class and attack target class can be observed in the top few
class activations.
To explore impacts from the correct source and target defensive mask, we ran
another set of experiments for the “Guaranteed scenario”. Compared to the
Reconst-Seq-Rand($k$) strategy, the Reconst-Seq-Gtd($k$) strategy always
achieved higher defense rate, post-defense accuracy, and lower classification
drop for the same $k$. These results imply that the inclusion of the estimated
mask for the correct source-target class pair can significantly improve the
performance of defenses.
Comparing the Ranked strategy with the “Guaranteed scenario”, Ranked results
in a higher post-defense accuracy for $k\geq 2$. The main reason is that
Ranked produces a significantly lower classification drop. These results
suggest that using the ranked selection strategy not only can pick out the
“best” subset of defensive masks to use, but can also exclude those that
deteriorate the classification accuracy of unattacked images.
This is reinforced by examining the classification drop as $k$ increases. For
both Reconst-Seq-Gtd($k$) and Reconst-Seq-Rand($k$), the classification drop
increases as $k$ increases, thus hurting the overall post-defense accuracy.
This is likely because some masks that negatively affect the overall
performance are included. By contrast, Reconst-Seq-Rank($k$) does not suffer
from the same increase in classification drop, because defensive masks that
are unlikely to be correct and may hurt performance are excluded.
Comparisons with Related Defenses: We compared the performance of our proposed
defenses with several existing defenses against physical domain attacks. These
include distortions that are universally applied to an image such as JPEG
compression and median filtering, as well as the more sophisticated digital
watermarking (DW) defensive method and the local smooth approach. While we
evaluated the performance of the JPEG defense using multiple quality factors
and the median filtering defense using multiple kernel sizes, we report only
the strongest results in the interest of space.
The results in Table 3 show that all of our proposed strategies with the
reconstruction defense can significantly outperform each of these existing
defenses. The digital watermarking defense proved to be the strongest
performing existing defense, with a defense rate of 0.2222 and a post-defense
accuracy of 0.6111. However, even when only one randomly chosen estimated
defensive mask is used, our region reconstruction defense outperforms this
approach. Our best-performing strategy achieved more than three times the
defense rate of this approach and roughly 40% higher post-defense accuracy.
The relatively poor performance of these existing defenses likely occurs
because they are targeted to defend against the adversarial patch attack.
Since the multi-sticker camouflage art attack works in a different manner and
exhibits different visual properties, these defenses are not as well suited to
protect against this and similar attacks.
### 7.5 Blind
To evaluate our defenses in the blind scenario, we created randomly chosen
defensive masks using both the overlapping (OL) and non-overlapping (NOL)
strategies.
Table 4 shows the results of these experiments. In each experiment, we
identified the optimal window size and number of windows for use in these
masks through a grid search. We varied the window size $w$ over 2, 4, 8, and
16 pixels. We then controlled the number of windows $m$ by randomly selecting
a fraction of the total number of windows for the given window size. The
results reported in Table 4 correspond to the pairing of $w$ and ratio that
achieved the highest post-defense accuracy. A detailed examination of the
choice of $w$ and ratio is provided later in this section.
From Table 4, we can see the strongest performance in terms of all evaluation
metrics was achieved using targeted region reconstruction applied in parallel
using 100 random masks with non-overlapping windows in conjunction with
softmax decision fusion (NOL-Reconst-Par(100) + SF). Even though no
information regarding the attack could be leveraged, this defense was still
able to achieve a defense rate of 0.4102 with a corresponding classification
drop of 0.0017 and a post-defense accuracy of 0.7043. Though performance is
worse than in the semi-blind scenario, we are still able to outperform
existing defenses in all evaluation metrics. We note that the local region
reconstruction defense uniformly outperformed the targeted perturbation
remapping strategy, and for targeted region reconstruction, applying the
defense in parallel outperformed sequential application of the defense.
Creating defensive masks using the non-overlapping strategy significantly
improves our defense’s performance over the overlapping approach (i.e.
choosing window locations uniformly at random). Furthermore, we note that
performance increases as the number of randomly chosen masks increases. While
this comes at the price of additional computation costs, in practice we found
that our proposed defense takes 0.4 seconds on average using 100 masks without
any attempt at execution time optimization.
Effect of Size and Number of Windows: To understand the effect that the window
size and number of windows (or ratio) in each randomly chosen defensive mask
has on our defense, we provide detailed results of our search over these
parameters in Table 5. The symbol $\ast$ indicates that for window size 16 and
ratio 0.625, the computed number of windows is not an integer: rounding down
gives the same count as for ratio 0.5, while rounding up gives the same count
as for ratio 0.75.
The results show that the defense rate increases as the ratio (i.e., the
number of windows) increases. Past a certain point, however, the
classification drop also
increases, resulting in a negative effect on the post-defense accuracy. We
also find that increasing the window size increases the defense rate up to a
certain point, after which the defense rate begins to decrease. Additionally,
after a certain point, increasing the window size also leads to an increase in
the classification drop and a decrease in the post-defense accuracy. In our
experiments, we found that the optimal window size was 8 pixels and ratio was
0.625. More importantly, when choosing the window size and the ratio (i.e.,
the number of windows), the defender must balance the trade-off between
interfering with the attack and interfering with the unattacked scene content
used by the classifier.
Comparisons with Related Defenses: From Table 4, we can see that applying
region reconstruction in parallel using non-overlapping masks outperforms the
existing defenses that were also considered in the semi-blind scenario (the
performance of these defenses do not change in the blind scenario). This
result holds true even when only six randomly generated non-overlapping masks
are used.
Table 4: Defense performance in the blind scenario.
Defense Strategies | DR | CD | PDA | Defense Strategies | DR | CD | PDA
---|---|---|---|---|---|---|---
OL-RemapT-Par(6) + MV | 0.1210 | 0.2367 | 0.4422 | NOL-RemapT-Par(6) + MV | 0.1236 | 0.1283 | 0.4976
OL-RemapT-Par(6) + SF | 0.1271 | 0.1650 | 0.4811 | NOL-RemapT-Par(6) + SF | 0.1255 | 0.1267 | 0.4994
OL-RemapT-Par(100) + MV | 0.0713 | 0.0333 | 0.5190 | NOL-RemapT-Par(100) + MV | 0.0482 | 0.0500 | 0.4991
OL-RemapT-Par(100) + SF | 0.0778 | 0.0333 | 0.5222 | NOL-RemapT-Par(100) + SF | 0.0737 | 0.0667 | 0.5035
OL-Reconst-Seq(6) | 0.2352 | 0.1782 | 0.5286 | NOL-Reconst-Seq(6) | 0.1942 | 0.0775 | 0.5584
OL-Reconst-Seq(100) | 0.1588 | 0.8450 | 0.1569 | NOL-Reconst-Seq(100) | 0.2553 | 0.6300 | 0.3027
OL-Reconst-Par(6) + MV | 0.1836 | 0.0717 | 0.5560 | NOL-Reconst-Par(6) + MV | 0.3444 | 0.0467 | 0.6488
OL-Reconst-Par(6) + SF | 0.1762 | 0.0350 | 0.5706 | NOL-Reconst-Par(6) + SF | 0.3501 | 0.0317 | 0.6593
OL-Reconst-Par(100) + MV | 0.1362 | 0.0000 | 0.5681 | NOL-Reconst-Par(100) + MV | 0.4129 | 0.0067 | 0.7031
OL-Reconst-Par(100) + SF | 0.2415 | 0.0167 | 0.6124 | NOL-Reconst-Par(100) + SF | 0.4102 | 0.0017 | 0.7043
Other Methods | DR | CD | PDA | Other Methods | DR | CD | PDA
DW [16] | 0.2222 | 0.0000 | 0.6111 | Median Filter (kernel=7) [36] | 0.3777 | 0.3333 | 0.5222
JPEG (QF=10) [10] | 0.1333 | 0.0000 | 0.5667 | Local Smooth [26] | 0.0000 | 0.0000 | 0.5000
Table 5: Local region reconstruction using 100 parallel masks with non-
overlapping windows and softmax fusion. * For window size 16, ratio 0.625
results in a non-integer number of windows.
| Ratio $=0.25$ | Ratio $=0.5$ | Ratio $=0.625$ | Ratio $=0.75$
---|---|---|---|---
| DR CD PDA | DR CD PDA | DR CD PDA | DR CD PDA
$w=2$ | 0.0546 0.0000 0.5273 | 0.1528 0.0000 0.5764 | 0.2027 0.0033 0.5997 | 0.2770 0.1283 0.5744
$w=4$ | 0.0537 0.0000 0.5269 | 0.1818 0.0000 0.5909 | 0.2395 0.0000 0.6198 | 0.3153 0.0233 0.6460
$w=8$ | 0.1648 0.0000 0.5824 | 0.3268 0.0000 0.6634 | 0.4102 0.0017 0.7043 | 0.4701 0.2500 0.6101
$w=16$ | 0.0183 0.0000 0.5092 | 0.1073 0.1400 0.4837 | $\ast$ | 0.1353 0.3333 0.401
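A note on the footnote above: the number of windows a mask must cover is the ratio times the number of non-overlapping windows, and this count must be an integer. A minimal sketch of that check, assuming for illustration a 32×32 input (the actual input size is not stated in this excerpt, and the helper name is ours):

```python
def masked_window_count(side, w, ratio):
    """Number of w x w non-overlapping windows a mask should cover."""
    n_windows = (side // w) ** 2  # windows tiling a side x side input
    return ratio * n_windows

print(masked_window_count(32, 8, 0.625))   # 10.0 windows: valid configuration
print(masked_window_count(32, 16, 0.625))  # 2.5 windows: no valid configuration
```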
## 8 Conclusions
In this paper, we proposed new defense strategies against physical-domain
attacks, with a special focus on multi-sticker attacks such as the camouflage
art attack. Our proposed methods attempt to maximize the likelihood of
diminishing the effect of the physical perturbation, given the defender's
different levels of knowledge. We conducted extensive experiments showing that
our proposed defenses can successfully defend against the camouflage art attack
under many scenarios, with small classification drops on unattacked objects.
Additionally, we built a new database using the camouflage art attack that
contains photos of 90 physically attacked traffic signs and six source signs.
This database may benefit future research in the community.
## References
* [1] Athalye, A., Engstrom, L., Ilyas, A., Kwok, K.: Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397 (2017)
* [2] Bastani, O., Ioannou, Y., Lampropoulos, L., Vytiniotis, D., Nori, A., Criminisi, A.: Measuring neural net robustness with constraints. In: Advances in neural information processing systems. pp. 2613–2621 (2016)
* [3] Bhagoji, A.N., Cullina, D., Mittal, P.: Dimensionality reduction as a defense against evasion attacks on machine learning classifiers. arXiv preprint arXiv:1704.02654 2 (2017)
* [4] Brown, T.B., Mane, D., Roy, A., Abadi, M., Gilmer, J.: Adversarial patch (2017)
* [5] Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 ieee symposium on security and privacy (sp). pp. 39–57. IEEE (2017)
* [6] Chen, P.Y., Sharma, Y., Zhang, H., Yi, J., Hsieh, C.J.: Ead: elastic-net attacks to deep neural networks via adversarial examples. In: Thirty-second AAAI conference on artificial intelligence (2018)
* [7] Chiang, P.Y., Ni, R., Abdelkader, A., Zhu, C., Studor, C., Goldstein, T.: Certified defenses for adversarial patches. arXiv preprint arXiv:2003.06693 (2020)
* [8] Das, N., Shanbhogue, M., Chen, S.T., Hohman, F., Li, S., Chen, L., Kounavis, M.E., Chau, D.H.: Shield: Fast, practical defense and vaccination for deep learning using jpeg compression. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 196–204 (2018)
* [9] Dhillon, G.S., Azizzadenesheli, K., Lipton, Z.C., Bernstein, J., Kossaifi, J., Khanna, A., Anandkumar, A.: Stochastic activation pruning for robust adversarial defense. arXiv preprint arXiv:1803.01442 (2018)
* [10] Dziugaite, G.K., Ghahramani, Z., Roy, D.M.: A study of the effect of jpg compression on adversarial images. arXiv preprint arXiv:1608.00853 (2016)
* [11] Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.: Robust physical-world attacks on deep learning visual classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1625–1634 (2018)
* [12] Feinman, R., Curtin, R.R., Shintre, S., Gardner, A.B.: Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410 (2017)
* [13] Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3354–3361. IEEE (2012)
* [14] Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
* [15] Guo, C., Rana, M., Cisse, M., Van Der Maaten, L.: Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117 (2017)
* [16] Hayes, J.: On visible adversarial perturbations & digital watermarking. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp. 1597–1604 (2018)
* [17] Karmon, D., Zoran, D., Goldberg, Y.: Lavan: Localized and visible adversarial noise. arXiv preprint arXiv:1801.02608 (2018)
* [18] Kos, J., Fischer, I., Song, D.: Adversarial examples for generative models. In: 2018 IEEE Security and Privacy Workshops (SPW). pp. 36–42. IEEE (2018)
* [19] Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016)
* [20] Levine, A., Feizi, S.: (de) randomized smoothing for certifiable defense against patch attacks. arXiv preprint arXiv:2002.10733 (2020)
* [21] Liu, Y., Chen, X., Liu, C., Song, D.: Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770 (2016)
* [22] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)
* [23] Mogelmose, A., Trivedi, M.M., Moeslund, T.B.: Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey. IEEE Transactions on Intelligent Transportation Systems 13(4), 1484–1497 (2012)
* [24] Moosavi-Dezfooli, S.M., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1765–1773 (2017)
* [25] Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: Deepfool: A simple and accurate method to fool deep neural networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (June 2016)
* [26] Naseer, M., Khan, S., Porikli, F.: Local gradients smoothing: Defense against localized adversarial attacks. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1300–1307. IEEE (2019)
* [27] Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 427–436 (2015)
* [28] Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., Swami, A.: The limitations of deep learning in adversarial settings. In: 2016 IEEE European Symposium on Security and Privacy (EuroS&P). pp. 372–387. IEEE (2016)
* [29] Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks. In: 2016 IEEE Symposium on Security and Privacy (SP). pp. 582–597. IEEE (2016)
* [30] Raghunathan, A., Steinhardt, J., Liang, P.: Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344 (2018)
* [31] Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: Visual explanations from deep networks via gradient-based localization. In: The IEEE International Conference on Computer Vision (ICCV) (Oct 2017)
* [32] Shaham, U., Garritano, J., Yamada, Y., Weinberger, E., Cloninger, A., Cheng, X., Stanton, K., Kluger, Y.: Defending against adversarial images using basis functions transformations. arXiv preprint arXiv:1803.10840 (2018)
* [33] Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation 23(5), 828–841 (2019)
* [34] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks (2013)
* [35] Urmson, C., Anhalt, J., Bagnell, D., Baker, C., Bittner, R., Clark, M., Dolan, J., Duggins, D., Galatali, T., Geyer, C., et al.: Autonomous driving in urban environments: Boss and the urban challenge. Journal of Field Robotics 25(8), 425–466 (2008)
* [36] Xu, W., Evans, D., Qi, Y.: Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155 (2017)
* [37] Xu, Z., Yu, F., Chen, X.: Lance: A comprehensive and lightweight cnn defense methodology against physical adversarial attacks on embedded multimedia applications (2019)
* [38] Zhang, F., Leitner, J., Milford, M., Upcroft, B., Corke, P.: Towards vision-based deep reinforcement learning for robotic motion control. arXiv preprint arXiv:1511.03791 (2015)
# Strong shock in the uniformly expanding universe with a spherical void
G.S. Bisnovatyi-Kogan1,2,3, S.A. Panafidina1,3
1Space Research Institute RAS, Moscow, Russia;
2National Research Nuclear University MEPhI, Moscow, Russia;
3Moscow Institute of Physics and Technology MIPT, Moscow reg., Russia Email:
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
The propagation of a strong shock wave in the expanding universe is studied
using an approximate analytic solution and an exact numerical solution of the
self-similar equations. Both solutions have similar properties, which change
qualitatively depending on the adiabatic power $\gamma$. In the interval
$1<\gamma<\gamma_{cr}\sim 1.16$ the analytic and numerical solutions fill all
the space without any voids and are rather close to each other. At larger
$\gamma>\gamma_{cr}$ the pressure becomes zero at a finite radius, and a
spherical void appears around the origin in both solutions. All matter is
collected in a thin layer behind the shock front. The structure of this layer
depends qualitatively on $\gamma$. At the inner edge of the layer the pressure
is always zero, but the density at this edge jumps from zero to infinity at
$\gamma\approx 1.4$ in both solutions.
Keywords: cosmology, strong shock wave, self-similar solution
## 1 Introduction
Strong explosions could happen at the stages of star and galaxy formation, and
at the last stages of evolution of very massive primordial stars. Observations
of GRB optical afterglows have shown the existence of heavy elements in the universe at
red shifts up to $z\sim 10$, like in GRB090423 at $z\approx 8.2$, GRB120923A
at $z\approx 8.5$, GRB090429B with a photo-$z\approx 9.4$ [1]. The heavy
elements should be formed in the explosions at earlier stages, at larger red
shifts. Strong explosions are accompanied by formation of a strong shock wave,
propagating in the expanding universe. For a static medium, the propagation of
strong shocks was studied by many authors, see e.g. [2], [3]. An exact
analytic solution of the self-similar equations describing strong shock
propagation was obtained by L.I. Sedov [4, 5]. A similar analytic solution was
obtained in [6] for a strong explosion in the expanding medium of a flat
Friedman dust universe [7]. Contrary to the static medium, which has a real
zero energy density in the undisturbed state, the zero energy density in the
flat Friedman dust universe, in the Newtonian approximation, is the result of a
sum of the positive kinetic and negative gravitational energies. This balance
could be lost behind the shock; therefore the analytic solution, obtained using
an integral of motion similar to that of [4], is an approximate one. Here we
obtain approximate analytic and exact numerical solutions for strong shock
propagation in a gas at different adiabatic powers $\gamma$.
We find that numerical solutions in which matter fills the whole space exist
only at $\gamma<\gamma_{cr}=\gamma_{**}\approx 1.155$. The approximate
analytic solution shows similar properties, with
$\gamma_{cr}=\gamma_{*}\approx 1.178$.
The problem of strong shock propagation in an expanding medium was considered
earlier, in different approximations, in [8]–[14]. A review of papers on this
topic is given in [15]. Propagation of a detonation wave in the flat expanding
universe was studied in [16, 17]. Shock propagation in the outflowing stellar
wind was considered in [18].
Detailed analysis of solutions with $\gamma>\gamma_{cr}$ revealed a
fundamental difference in the structure of the thin layer near the shock. The
pressure at the inner edge of the layer is zero, but the density jumps from
zero to infinity when $\gamma$ crosses the value $\gamma=\gamma_{cr1}\approx
1.4$. This value is the same, within numerical errors, in the numerical and
analytical solutions, while the density inside the layer behaves quite
differently in the two solutions.
## 2 Self-similar equations for a strong shock in a uniform expanding medium
The equations describing, in the Newtonian approximation, a uniformly
expanding ($v=H(t)r$), self-gravitating medium with a density $\rho(t)$
depending only on time, corresponding to the Friedman model of the universe,
are written in spherical coordinates as [7]
$\frac{\partial v}{\partial t}+v\frac{\partial v}{\partial
r}=-\frac{1}{\rho}\frac{\partial p}{\partial
r}-\frac{G_{g}m}{r^{2}},\quad\frac{\partial\rho}{\partial
t}+\frac{\partial\rho v}{\partial r}+\frac{2\rho v}{r}=0,$ (1)
$\quad\left(\frac{\partial}{\partial t}+v\frac{\partial}{\partial
r}\right)\ln{\frac{p}{\rho^{\gamma}}}=0,\quad\frac{\partial m}{\partial
r}={4\pi}\rho r^{2},$
where $G_{g}$ is the gravitational constant. We consider a flat dust model
with zero velocity at time infinity, having a density $\rho_{1}(t)$ and
expansion velocity $v_{1}=H_{1}(t)r$. The solution of the system (1) under
these conditions is written as
$\displaystyle\rho_{1}=\delta/t^{2},\quad\delta=\frac{1}{6\pi
G_{g}},\quad\rho_{1}=\frac{1}{6\pi G_{g}t^{2}};\qquad H_{1}=\frac{2}{3t},\quad
v_{1}=2r/3t;$ $\displaystyle m=\frac{4\pi}{3}\rho
r^{3}=\frac{2r^{3}}{9G_{g}t^{2}},\quad\frac{G_{g}m}{r^{2}}=\frac{2}{9}\frac{r}{t^{2}}.$
(2)
The Newtonian solution is physically relevant in the region where $v_{1}\ll
c_{\rm light}$, $c\ll c_{\rm light}$. In the case of a point explosion with
energy $E$ at $t=0$, the number of parameters is the same as in the static
medium ($\delta,\,E$); therefore we may look for a self-similar solution of
the problem of strong shock propagation. The non-dimensional combination in
this case is written as $r(\delta/Et^{4})^{1/5}$. The position of the shock in
the self-similar solution corresponds to a fixed value of the self-similar
coordinate. The distance $R$ of the shock from the center is written as
$R=\beta\left(\frac{Et^{4}}{\delta}\right)^{1/5},$ (3)
where $\beta$ is a parameter depending only on the adiabatic power $\gamma$.
The velocity of the shock $u$ in the static laboratory frame is written as
$u=\frac{dR}{dt}=\frac{4R}{5t}=\frac{4\beta E^{1/5}}{5\delta^{1/5}t^{1/5}}.$
(4)
The shock propagation velocity $u$ and the velocity of the matter behind the
shock $v_{2}$, in the uniformly expanding medium (2), decrease with time as
$\sim t^{-1/5}$; the pressure behind the shock $p_{2}$ decreases as $\sim
t^{-2/5}$, which is slower than in the case of a constant-density medium. This
occurs because the background density decreases with time, so the resistance
to the shock propagation decreases as well.
The conditions on the strong shock discontinuity (Hugoniot relations) have the
following form
$v_{2}=\frac{2}{\gamma+1}u+\frac{\gamma-1}{\gamma+1}v^{sh}_{1},\,\,\rho_{2}=\frac{\gamma+1}{\gamma-1}\rho_{1},\,\,$
(5)
$p_{2}=\frac{2}{\gamma+1}\rho_{1}(u-v^{sh}_{1})^{2},\,\,c_{2}^{2}=\frac{2\gamma(\gamma-1)}{(\gamma+1)^{2}}(u-v^{sh}_{1})^{2},$
where $v_{1}^{sh}=\frac{2R}{3t}$ is the unperturbed expansion velocity at the
shock position. The subscript "2" refers to values behind the shock.
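The jump conditions (5) can be coded directly; a minimal sketch (plain Python; the helper name and the unit choices $u=1$, $\rho_1=1$ are ours) using the self-similar relation $v_1^{sh}=5u/6$, which follows from $v_1^{sh}=2R/3t$ and $u=4R/5t$:

```python
def hugoniot(gamma, u, v1_sh, rho1):
    """Strong-shock jump conditions, Eq. (5): state behind the shock."""
    v2 = 2.0 / (gamma + 1.0) * u + (gamma - 1.0) / (gamma + 1.0) * v1_sh
    rho2 = (gamma + 1.0) / (gamma - 1.0) * rho1
    p2 = 2.0 / (gamma + 1.0) * rho1 * (u - v1_sh) ** 2
    c2_sq = 2.0 * gamma * (gamma - 1.0) / (gamma + 1.0) ** 2 * (u - v1_sh) ** 2
    return v2, rho2, p2, c2_sq

# Illustrative units: u = 1, rho1 = 1; v1_sh = 5u/6 from Eqs. (3),(4)
v2, rho2, p2, c2_sq = hugoniot(5.0 / 3.0, 1.0, 5.0 / 6.0, 1.0)
print(rho2)  # compression ratio (gamma+1)/(gamma-1) = 4 for gamma = 5/3
```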
Introduce non-dimensional variables behind the shock as
$v=\frac{4r}{5t}V,\,\,\,\rho=\frac{\delta}{t^{2}}G,\,\,\,c^{2}=\frac{16r^{2}}{25t^{2}}Z,\,\,\,m=\frac{4\pi}{3}\rho_{1}r^{3}M=\frac{4\pi}{3}\frac{r^{3}}{t^{2}}\delta
M,$ (6)
depending on the self-similar variable $\xi$, written as
$\xi=\frac{r}{R(t)}=\frac{r}{\beta}\left(\frac{\delta}{Et^{4}}\right)^{1/5}.$
(7)
In non-dimensional variables (6), the conditions (5) on the strong shock at
$r=R$, $\xi=1$, are written as
$V(1)=\frac{5\gamma+7}{6(\gamma+1)},\,\,\,G(1)=\frac{\gamma+1}{\gamma-1},\,\,\,Z(1)=\frac{\gamma(\gamma-1)}{18(\gamma+1)^{2}},\,\,\,M(1)=1,$
(8)
and the system (1) is written as
$Z\left(\frac{d\ln Z}{d\ln\xi}+\frac{d\ln
G}{d\ln\xi}+2\right)+\gamma(V-1)\frac{dV}{d\ln\xi}=\gamma
V(\frac{5}{4}-V)-\frac{25}{72}\gamma M,$ (9)
$\frac{dV}{d\ln\xi}-(1-V)\frac{d\ln G}{d\ln\xi}=-3V+\frac{5}{2},$ (10)
$\frac{d\ln Z}{d\ln\xi}-(\gamma-1)\frac{d\ln
G}{d\ln\xi}=-\frac{5-2V-\frac{5}{2}\gamma}{1-V},$ (11)
$\xi\,\frac{dM}{d\xi}=3(G-M).$ (12)
The relations used here are
$\frac{\partial\xi}{\partial
t}\bigg{|}_{r}=-\frac{4\xi}{5t},\quad\frac{\partial\xi}{\partial
r}\bigg{|}_{t}=\frac{\xi}{r}.$ (13)
The constant $\beta$ in the definition of the non-dimensional radius $\xi$ in
(7) is obtained from the explosion-energy integral $E$. Due to the zero total
(kinetic + gravitational) energy of the non-perturbed solution, the conserved
value of the explosion energy behind the shock, in the uniformly expanding
medium with the velocity and density distributions (2), with account of the
gravitational energy, is determined as
$E=\int_{0}^{R(t)}\rho\left[\frac{v^{2}}{2}+\frac{c^{2}}{\gamma(\gamma-1)}\right]4\pi
r^{2}dr-\int_{0}^{R(t)}\frac{G_{g}mdm}{r}.$ (14)
In non-dimensional variables (6) this relation reduces to the equation for the
constant $\beta$
$\beta^{-5}=\frac{64\pi}{25}\int_{0}^{1}G\left[\frac{V^{2}}{2}+\frac{Z}{\gamma(\gamma-1)}\right]\xi^{4}d\xi-\frac{8}{3}\int_{0}^{1}G\xi\left(\int_{0}^{\xi}G\eta^{2}d\eta\right)d\xi.$
(15)
## 3 Approximate analytic solution
### 3.1 Approximate first integral
Using the procedure described in [19] for the case of a shock in a static
medium, it is possible to obtain an approximate energy-conservation integral
in the expanding medium of the universe [6], in the form
$Z=\frac{(\gamma-1)(1-V)(V-\frac{5}{6})^{2}}{2(V-\frac{5}{6}-\frac{1}{6\gamma})}.$
(16)
At the shock $r=R$, $\xi=1$, using $Z(1)$ and $V(1)$ from (8), the approximate
first integral reduces to an identity. Using (16) we may consider only the two
differential equations (10) and (11) to find an analytic solution of the
problem, similar to the classical Sedov case. The relation (16) may be
interpreted as a fortunate choice of the profiling function for the
temperature distribution behind the shock.
### 3.2 Approximate analytic solution for expanding medium
Excluding $Z$ from equations (10),(11) with the help of (16), the analytic
solution of self-similar system of equations (9)-(12) was obtained in [6, 20]
in the form
$\left[(\gamma+1)(3V-\frac{5}{2})\right]^{\mu_{1}}\left[\frac{\gamma+1}{\gamma-1}(6\gamma
V-5\gamma-1)\right]^{\mu_{2}}\left[6(\gamma+1)\frac{3\gamma
V-V-\frac{5}{2}}{15\gamma^{2}+\gamma-22}\right]^{\mu_{3}}=\xi,$ (17)
with
$\mu_{1}=\frac{2}{15\gamma-20},\,\,\,\mu_{2}=\frac{\gamma-1}{17\gamma-15\gamma^{2}+1},$
(18)
$\mu_{3}=-\frac{\gamma+1}{3\gamma-1}-\frac{\gamma-1}{17\gamma-15\gamma^{2}+1}+\frac{2}{20-15\gamma}.$
$G(V)=\frac{\gamma+1}{\gamma-1}\left[6\frac{(\gamma+1)(1-V)}{\gamma-1}\right]^{\kappa_{1}}\left[\frac{\gamma+1}{\gamma-1}(6\gamma
V-5\gamma-1)\right]^{\kappa_{2}}$ (19)
$\times\left[\frac{3(\gamma+1)}{15\gamma^{2}+\gamma-22}[(6\gamma-2)V-5]\right]^{\kappa_{3}}.$
Here
$\kappa_{1}=\frac{7}{3\gamma-1}-\frac{2}{6\gamma-7}+\frac{(15\gamma-20)(\gamma-1)}{(6\gamma-7)(15\gamma^{2}-17\gamma-1)}$
$-\frac{3\gamma(15\gamma-20)}{(3\gamma-1)(15\gamma^{2}-17\gamma-1)}-\frac{15\gamma-20}{3\gamma-1}\,\frac{\gamma+1}{6\gamma-7},$
(20)
$\kappa_{2}=-\frac{3}{3\gamma-1}+\frac{3\gamma(15\gamma-20)}{(3\gamma-1)(15\gamma^{2}-17\gamma-1)},$
$\kappa_{3}=\frac{2}{6\gamma-7}-\frac{(15\gamma-20)(\gamma-1)}{(6\gamma-7)(15\gamma^{2}-17\gamma-1)}+\frac{15\gamma-20}{3\gamma-1}\,\frac{\gamma+1}{6\gamma-7}.$
The function $Z(V)$ is determined by the integral (16); the boundary
conditions (8) at $\xi=1$ have been used here. The mass function is obtained
by integrating (12):
$M(\xi)=3\,\xi^{-3}\,\int_{0}^{\xi}G(\eta)\eta^{2}d\eta.$ (21)
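As a numerical consistency check (ours, not in the paper): substituting the shock value $V(1)$ from (8) into the implicit relation (17) must return $\xi=1$, since each bracket then equals unity; for an interior $V$ inside the admissible window (22), the returned $\xi$ lies in $(0,1)$. A sketch with the exponents (18) coded directly (the function name is ours):

```python
def xi_of_V(V, g):
    """Implicit analytic solution (17) with exponents (18): xi as a function of V."""
    mu1 = 2.0 / (15.0 * g - 20.0)
    mu2 = (g - 1.0) / (17.0 * g - 15.0 * g * g + 1.0)
    mu3 = -(g + 1.0) / (3.0 * g - 1.0) - mu2 - mu1   # since 2/(20-15g) = -mu1
    b1 = (g + 1.0) * (3.0 * V - 2.5)
    b2 = (g + 1.0) / (g - 1.0) * (6.0 * g * V - 5.0 * g - 1.0)
    b3 = 6.0 * (g + 1.0) * ((3.0 * g - 1.0) * V - 2.5) / (15.0 * g * g + g - 22.0)
    return b1 ** mu1 * b2 ** mu2 * b3 ** mu3

for g in (1.05, 1.10, 1.15):                      # below gamma_* ~ 1.1782
    V1 = (5.0 * g + 7.0) / (6.0 * (g + 1.0))      # shock value V(1), Eq. (8)
    assert abs(xi_of_V(V1, g) - 1.0) < 1e-9
```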
## 4 Main properties of the approximate analytic solution
### 4.1 Approximate analytic solution at $\gamma$ less than critical value
The analytic solution (17),(19),(16),(21) has a complicated dependence on
$\gamma$, and a physically relevant solution exists only for a limited range
of $\gamma$. In order to have positive values in the brackets of (17), and to
satisfy the condition for $V$ at the shock (8), we obtain the restrictions on
$V$ as
$V>\frac{5}{6},\quad V>\frac{1+5\gamma}{6\gamma},\quad
V<V(1)=\frac{5\gamma+7}{6(\gamma+1)}.$ (22)
To satisfy all these conditions we obtain the restriction for $\gamma$ as
$1<\gamma<\gamma_{*}$, where $\gamma_{*}$ is defined by equation
$15\gamma^{2}+\gamma-22=0,\qquad\gamma_{*}=-\frac{1}{30}+\sqrt{\frac{1}{900}+\frac{22}{15}},\qquad\gamma_{*}\approx
1.1782.$ (23)
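The root $\gamma_{*}$ of (23) is immediate to evaluate:

```python
import math

# Positive root of 15*gamma^2 + gamma - 22 = 0, Eq. (23)
gamma_star = -1.0 / 30.0 + math.sqrt(1.0 / 900.0 + 22.0 / 15.0)
print(round(gamma_star, 4))  # 1.1782
```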
Numerical solutions of the self-similar equations (9)-(12), presented below,
have similar restrictions on $\gamma$. We may conclude, therefore, that for
$\gamma\gtrsim\gamma_{*}$ there are no smooth self-similar solutions filling
the whole space. The figures show, for different $\gamma<\gamma_{*}$, the
functions of the analytic solution: $V(\xi)$ from (17) in Fig.1; $G(\xi)$
from (19) in Fig.2; $Z(\xi)$ from (16) in Fig.3; and $M(\xi)$ from (21) in
Fig.4.
Figure 1: Approximate analytic solution without voids for $V(\xi)$.
Figure 2: Approximate analytic solution without voids for $G(\xi)$.
Figure 3: Approximate analytic solution without voids for $Z(\xi)$.
Figure 4: Solution without voids for $M(\xi)$ from (21), based on the approximate analytic equations.
Introduce the notation
$V^{\prime}=\frac{d\,V}{d\,\xi},\quad G^{\prime}=\frac{d\,G}{d\,\xi},\quad
Z^{\prime}=\frac{d\,Z}{d\,\xi}.$ (24)
At the shock $\xi=1$ the derivatives of the self-similar functions are found
from the analytic solution (17)-(19) in the form [20]
$\displaystyle
V^{\prime}(1)=\frac{-15\gamma^{2}-\gamma+22}{6(\gamma+1)^{2}};\quad
G^{\prime}(1)=\frac{-15\gamma^{2}+5\gamma+28}{(\gamma-1)^{2}};$
$\displaystyle\quad
Z^{\prime}(1)=\frac{(15\gamma^{2}+\gamma-22)\gamma}{9(\gamma+1)^{3}}.\qquad\qquad$
(25)
It follows from (23),(25) that for $\gamma<\gamma_{*}$ the derivatives have
the following signs:
$V^{\prime}(1)>0;\quad G^{\prime}(1)>0;\quad Z^{\prime}(1)<0$ (26)
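The signs (26) can be checked directly from (25); a small sketch (the function name is ours):

```python
def analytic_derivs_at_shock(g):
    """Derivatives of the analytic solution at the shock xi = 1, Eq. (25)."""
    Vp = (-15.0 * g * g - g + 22.0) / (6.0 * (g + 1.0) ** 2)
    Gp = (-15.0 * g * g + 5.0 * g + 28.0) / (g - 1.0) ** 2
    Zp = (15.0 * g * g + g - 22.0) * g / (9.0 * (g + 1.0) ** 3)
    return Vp, Gp, Zp

for g in (1.05, 1.10, 1.15):              # all below gamma_* ~ 1.1782
    Vp, Gp, Zp = analytic_derivs_at_shock(g)
    assert Vp > 0 and Gp > 0 and Zp < 0   # the signs (26)
```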
### 4.2 Approximate analytic solution at $\gamma$ larger than critical value
Consider the approximate analytic solution at $\gamma\geq\gamma_{*}\approx
1.1782$. In contrast to the case $\gamma<\gamma_{*}$, the function $V(\xi)$
now increases without bound as $\xi\rightarrow 0$.
Figure 5: Approximate analytic solution for $V(\xi)$ at $\gamma>\gamma_{*}$, plotted according to Eq.(17). Non-physical parts of curves at $V\geq 1$ are given by dashed lines.
Figure 6: Approximate analytic solution for $V(\xi)$ at $\gamma>\gamma_{*}$, plotted according to Eq.(17), in the vicinity of the shock. Non-physical parts of curves at $V\geq 1$ are given by dashed lines.
Figure 7: Approximate analytic solution for $G(\xi)$ at $\gamma>\gamma_{*}$, plotted according to Eqs.(17),(19).
Figure 8: Approximate analytic solution for $G(\xi)$ at $\gamma\approx 1.1543$, in the vicinity of the shock.
Figure 9: Approximate analytic solution for $Z(\xi)$ at $\gamma>\gamma_{*}$, plotted according to Eqs.(17),(16), in the vicinity of the shock.
Figure 10: Approximate analytic solution for $M(\xi)$ at $\gamma>\gamma_{*}$, plotted by integration in Eq.(21), in the vicinity of the shock.
Figure 11: Approximate analytic solution for $G(\xi)\,Z(\xi)$ at large $\gamma$, in the vicinity of the shock.
It follows from (19) that $G(\xi)$ has a physical sense only when $V(\xi)<1$,
because $V(\xi)=1$ is the point where $G(\xi)=0$. This means that there is a
point where the density of matter becomes zero and a spherical void appears.
The dependence of the radius $\xi$ of such a spherical void on $\gamma$ can be
written in the form
$\left[\frac{\gamma+1}{2}\right]^{\mu_{1}}\bigg{[}\gamma+1\bigg{]}^{\mu_{2}}\left[3(\gamma+1)\frac{6\gamma-7}{15\gamma^{2}+\gamma-22}\right]^{\mu_{3}}=\xi,$
(27)
with $\mu_{1},\,\,\mu_{2},\,\,\mu_{3}$ from Eq.(18).
Calculation of the self-similar variables using Eqs. (17),(19) shows that, at
the point with $V=1$, the density goes to zero for $\gamma<\gamma_{cr1}=1.4$,
while for larger $\gamma$ the density tends to infinity at this point.
Nevertheless, the temperature goes to zero there, so that the pressure,
represented by the function $GZ$, goes to zero at the inner edge of the layer
at $V=1$, and we obtain a self-consistent solution with a spherical void. The
following figures represent the behaviour of the functions at different
$\gamma>\gamma_{*}$: $V(\xi)$ in Figs.(5),(6); $G(\xi)$ in Figs.(7),(8);
$Z(\xi)$ in Fig.(9); $M(\xi)$ in Fig.(10); $G(\xi)\,Z(\xi)$ in Fig.(11).
We obtain from (25) that $G^{\prime}(\xi)|_{\xi=1}>0$ at
$\gamma<\frac{5+\sqrt{1705}}{30}\approx 1.54305$ and
$G^{\prime}(\xi)|_{\xi=1}<0$ at $\gamma>\frac{5+\sqrt{1705}}{30}$. Thus the
density first falls and then rises to infinity for
$1.4<\gamma<\frac{5+\sqrt{1705}}{30}$. When
$\gamma>\gamma_{2}=\frac{5+\sqrt{1705}}{30}$, the density starts to grow
inward from the shock and continues rising to infinity.
## 5 Numerical solution of self-similar equations
### 5.1 Numerical solution at $\gamma$ less than critical value
The system of equations (9)-(12), written explicitly for the derivatives, has
the form:
$\begin{cases}\frac{d\ln G}{d\ln\xi}=\frac{\frac{3-\frac{5}{2}\gamma}{1-V}Z-\frac{25}{72}\gamma M+\gamma(2V^{2}-\frac{17}{4}V+\frac{5}{2})}{\gamma[Z-(1-V)^{2}]};\\\ \frac{dV}{d\ln\xi}=(1-V)\frac{d\ln G}{d\ln\xi}-3V+\frac{5}{2};\\\ \frac{d\ln Z}{d\ln\xi}=(\gamma-1)\frac{d\ln G}{d\ln\xi}-\frac{5-2V-\frac{5}{2}\gamma}{1-V};\\\ \frac{dM}{d\ln\xi}=3(G-M).\end{cases}$
This reduces to:
$\xi\frac{dG}{d\xi}=G\frac{\frac{3Z}{\gamma}\frac{1-\frac{5\gamma}{6}}{1-V}-\frac{17}{4}V+\frac{5}{2}+2\,V^{2}-\frac{25}{72}M}{Z-(1-V)^{2}},\quad\xi\,\frac{dM}{d\xi}=3(G-M),$
(28)
$\xi\frac{dV}{d\xi}=\xi\frac{1-V}{G}\frac{dG}{d\xi}-3(V-\frac{5}{6}),\quad\frac{\xi}{Z}\frac{dZ}{d\xi}=\xi\frac{\gamma-1}{G}\frac{dG}{d\xi}-\frac{5-2V-\frac{5}{2}\gamma}{1-V}.$
Let us note that the expression (21) for $M(\xi)$ is also valid for the exact
numerical solution. This system is solved numerically, starting from the point
$\xi=1$, where the variables are given by the shock conditions (8); the
derivatives at $\xi=1$ are
$\quad\frac{dV}{d\xi}\bigg{|}_{\xi=1}=\frac{-30\gamma^{2}-11\gamma+27}{6(\gamma+1)^{2}};\quad\frac{dG}{d\xi}\bigg{|}_{\xi=1}=\frac{-30\gamma^{2}-5\gamma+33}{(\gamma-1)^{2}};$
(29)
$\frac{dZ}{d\xi}\bigg{|}_{\xi=1}=-\frac{\gamma(15\gamma^{3}-35\gamma^{2}-17\gamma+49)}{18(\gamma+1)^{3}};\quad\frac{dM}{d\xi}\bigg{|}_{\xi=1}=\frac{6}{\gamma-1}$
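The system (28), started from the shock conditions (8), is straightforward to integrate inward with any standard ODE method. The sketch below (plain Python, fixed-step RK4 in $s=\ln\xi$; the step size, the stopping threshold $V=0.999$ and the function names are our own choices, not from the paper) reproduces the formation of the void edge for $\gamma=1.2$:

```python
import math

def rhs(y, g):
    """Right-hand side of system (28) in s = ln(xi); y = (V, lnG, lnZ, M)."""
    V, lnG, lnZ, M = y
    G, Z = math.exp(lnG), math.exp(lnZ)
    dlnG = ((3.0 - 2.5 * g) * Z / (1.0 - V) - 25.0 / 72.0 * g * M
            + g * (2.0 * V * V - 4.25 * V + 2.5)) / (g * (Z - (1.0 - V) ** 2))
    dV = (1.0 - V) * dlnG - 3.0 * V + 2.5
    dlnZ = (g - 1.0) * dlnG - (5.0 - 2.0 * V - 2.5 * g) / (1.0 - V)
    dM = 3.0 * (G - M)
    return (dV, dlnG, dlnZ, dM)

def integrate_inward(g, h=-1e-4, v_stop=0.999):
    """Fixed-step RK4 from the shock conditions (8) at xi = 1, moving inward."""
    y = [(5.0 * g + 7.0) / (6.0 * (g + 1.0)),                 # V(1)
         math.log((g + 1.0) / (g - 1.0)),                     # ln G(1)
         math.log(g * (g - 1.0) / (18.0 * (g + 1.0) ** 2)),   # ln Z(1)
         1.0]                                                 # M(1)
    s = 0.0
    while y[0] < v_stop and s > -1.0:
        k1 = rhs(y, g)
        k2 = rhs([y[i] + 0.5 * h * k1[i] for i in range(4)], g)
        k3 = rhs([y[i] + 0.5 * h * k2[i] for i in range(4)], g)
        k4 = rhs([y[i] + h * k3[i] for i in range(4)], g)
        y = [y[i] + h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
             for i in range(4)]
        s += h
    return math.exp(s), y

# gamma = 1.2 > gamma_** ~ 1.155: V reaches 1 at a finite radius (the void edge)
xi_void, _ = integrate_inward(1.2)
print(xi_void)  # should be close to xi_*^num ~ 0.92 from Table 1
```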
The signs of the derivatives $V^{\prime}$, $G^{\prime}$ and $Z^{\prime}$ are
negative at $\xi=1$, which differs from the signs of some derivatives of the
approximate analytic solution in (26). It follows from the numerical
integration of the system (28) that close to the shock boundary the values of
$G(\xi)$ and $V(\xi)$ reach their maxima, and then decrease monotonically
toward the origin $\xi=0$, see Figs.(12)-(14). Numerical solutions for $Z(\xi)$
and $M(\xi)$ for different $\gamma$ are given in Figs.(15)-(16), respectively.
The solutions of the self-similar equations without empty voids exist only in
the interval $1<\gamma<\gamma_{**}$, where $\gamma_{**}=1.155$. At
$\gamma>\gamma_{**}=1.155$ an empty spherical void is formed around the
center, at a finite distance from the shock. Similar voids are formed in the
Sedov solution for a shock in a static uniform gas at $\gamma>7$ [19].
Figure 12: Numerical solution for $V(\xi)$.
Figure 13: Numerical solution for $V(\xi)$ at $\xi$ from 0.8 to 1.0.
Figure 14: Numerical solution for $G(\xi)$ at $\xi$ from 0.9 to 1.0.
Figure 15: Numerical solution for $Z(\xi)$.
Figure 16: Numerical solution for $M(\xi)$ at $\xi$ from 0.9 to 1.0.
### 5.2 Numerical solution at $\gamma$ bigger than critical value
Consider the numerical solution at $\gamma\geq\gamma_{**}\approx 1.155$. As in
the approximate analytic solution, we define the radius of the spherical void
as the point where the velocity $V=1$. At this point the numerical solution
ceases to exist.
Figure 17: Numerical solution for $V(\xi)$ at large $\gamma$, at $\xi$ from 0.91 to 1.0.
Figure 18: Numerical solution for $G(\xi)$ at large $\gamma$, at $\xi$ from 0.88 to 1.0.
Figure 19: Numerical solution for $Z(\xi)$ at large $\gamma$, at $\xi$ from 0.88 to 1.0.
Figure 20: Numerical solution for $M(\xi)$ at large $\gamma$, at $\xi$ from 0.9 to 1.0.
The important parameter is the value of the pressure $P\sim\rho c^{2}\sim
G(\xi)Z(\xi)$ at the point where $V(\xi)=1$.
Calculations show that the pressure equals $0$ at $V=1$, but the behaviour of
the density $G(\xi)$ at $V=1$ depends on $\gamma$. As in the approximate
analytic solution, at the point with $V=1$ the density goes to zero for
$\gamma<\gamma_{cr1}=1.4$, while for larger $\gamma$ the density tends to
infinity at this point. Nevertheless, the temperature goes to zero there, so
that the pressure, represented by the function $GZ$, goes to zero at the inner
edge of the layer at $V=1$. Thus we obtain a self-consistent solution with
continuous pressure and a spherical void, with either zero or infinite density
on its inner zero-pressure boundary. The following figures represent the
behaviour of the functions at different $\gamma>\gamma_{**}$: $V(\xi)$ in
Fig.(17); $G(\xi)$ in Fig.(18); $Z(\xi)$ in Fig.(19); $M(\xi)$ in Fig.(20);
$G(\xi)\,Z(\xi)$ in Fig.(21).
It is clear from Fig.(21) that on the inner boundary of the layer $P=0$ due
to zero temperature; inside there is an empty hole. The density at the inner
boundary becomes infinite at $\gamma>1.4$, instead of going to zero as at
smaller $\gamma$.
Figure 21: Numerical solution for $G(\xi)\,Z(\xi)$ at large $\gamma$, at $\xi$
from 0.9 to 1.0.
## 6 Comparison of approximate analytic and numerical solutions. Discussion
Let us compare the radii of the spherical void in the analytic
$(\xi_{*}^{an})$ and numerical $(\xi_{*}^{num})$ solutions in Table 1.
Table 1: The values $\xi_{*}(\gamma)$ for approximate analytic and numerical solutions $\gamma$ | $\xi_{*}^{an}$ | $\xi_{*}^{num}$
---|---|---
1.18 | 0.2498 | 0.8462
1.20 | 0.7364 | 0.92018
1.50 | 0.938 | 0.9672
2.00 | 0.94898 | 0.9664
5.00 | 0.95084 | 0.9581
10.00 | 0.94866 | 0.9527
The analytic formula for $\xi_{*}^{an}$ is obtained from (17) at $V=1$. We
have
$\xi_{*}^{an}=\frac{(\gamma+1)^{\mu_{1}+\mu_{2}+\mu_{3}}}{2^{\mu_{1}}}\left(\frac{18\gamma-21}{15\gamma^{2}+\gamma-22}\right)^{\mu_{3}},$
(30)
with powers from (18) as
$\displaystyle\mu_{1}=\frac{2}{15\gamma-20},\qquad\mu_{1}+\mu_{2}+\mu_{3}=-\frac{\gamma+1}{3\gamma-1},$
(31)
$\displaystyle\mu_{3}=-\frac{\gamma+1}{3\gamma-1}-\frac{\gamma-1}{17\gamma-15\gamma^{2}+1}+\frac{2}{20-15\gamma}.$
Taking formally the limit $\gamma\rightarrow\infty$, we obtain from (30),(31)
the value
$\xi_{*}^{an}(\infty)=\left(\frac{5}{6}\right)^{1/3}=0.941.$ (32)
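Formula (30) with the exponents (31) is easy to tabulate; the sketch below reproduces, for example, the Table 1 values at $\gamma=2$ and $\gamma=5$ and the limit (32) (the function name is ours):

```python
import math

def xi_star_an(g):
    """Analytic void radius, Eqs. (30)-(31); assumes g > gamma_* ~ 1.1782."""
    mu1 = 2.0 / (15.0 * g - 20.0)
    mu_sum = -(g + 1.0) / (3.0 * g - 1.0)          # mu1 + mu2 + mu3, Eq. (31)
    mu2 = (g - 1.0) / (17.0 * g - 15.0 * g * g + 1.0)
    mu3 = mu_sum - mu1 - mu2
    ratio = (18.0 * g - 21.0) / (15.0 * g * g + g - 22.0)
    return (g + 1.0) ** mu_sum / 2.0 ** mu1 * ratio ** mu3

print(xi_star_an(2.0))    # ~0.949, cf. Table 1
print(xi_star_an(1.0e4))  # tends to (5/6)**(1/3) ~ 0.941 as gamma -> infinity
```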
We see from Table 1 that $\xi_{*}$ attains a maximum in both the analytic and
numerical models, indicating that the thickness of the layer passes through a
minimum. For $\gamma=10$ the value of $\xi_{*}^{an}$ is close to its limiting
value in (32). Actually, the results for large $\gamma\gtrsim 5$ obtained from
the self-similar solution are not reliable. At large $\gamma$ the
compressibility of the matter decreases, and the shock becomes weaker. The
Hugoniot relations in the form (5), describing a strong shock, are then no
longer valid, and with the general Hugoniot adiabatic relations [19] we cannot
construct a self-similar solution. Therefore the results for large $\gamma$
should be considered only as rough order-of-magnitude estimates. The maximum
value of $\xi_{*}^{num}$ in Table 1 corresponds to the minimal thickness of
the layer at large $\gamma$.
It may be seen from Fig. 22 that the approximate analytic solution for
$G(\xi)$ reproduces all the principal features of the layer behaviour, so the
approximate solution can be used for various estimates.
We have performed high-precision calculations; the results are shown in
Figs. 23, 24. As can be seen, the density at the inner edge of the layer jumps
from zero to infinity. Comparing these figures, we conclude that the
transition value $\gamma_{cr1}$ equals 1.4 to within the precision of the
calculations.
a) $\gamma=1.10$
b) $\gamma=1.20$
c) $\gamma=1.42$
d) $\gamma=2.00$
Figure 22: Comparison of analytic and numerical curves for $G(\xi)$ at different $\gamma$, in the vicinity of the shock. a. Example of the case without a void, at $1<\gamma<1.1782$ (analytic); $1<\gamma<1.155$ (numerical). b. Example of the case with a void, at $1.1782<\gamma<1.4$ (analytic); $1.155<\gamma<1.4$ (numerical), when the density at the edge of the void $G(\xi_{*})=0$ in both solutions. c. Example of the case with a void, at $1.4<\gamma<1.543$ (analytic); $\gamma>1.4$ (numerical), when the density at the edge of the void $G(\xi_{*})=\infty$ in both solutions and there is a minimum in the analytic curve. d. Example of the case with a void, at $\gamma>1.543$ (analytic); $\gamma>1.543$ (numerical), when the density at the edge of the void $G(\xi_{*})=\infty$ in both solutions and the analytic curve does not have a minimum.
Figure 23: Approximate analytic solution for $G(\xi)$ at $\gamma\approx 1.4$.
Figure 24: Numerical solution for $G(\xi)$ at $\gamma\approx 1.4$.
The constant $\beta$ in the definition of the non-dimensional radius $\xi$ in (6) is obtained from the explosion energy integral $E$. Since the total (kinetic + gravitational) energy of the non-perturbed solution is zero, the conserved explosion energy is given by the integral behind the shock in the uniformly expanding medium with the velocity and density distributions (2), with the gravitational energy taken into account as determined in (14).
In the non-dimensional variables (6) this relation reduces, for solutions with a hollow center, to the following equation for the constant $\beta$:
$\beta^{-5}=\frac{64\pi}{25}\int_{\xi_{*}}^{1}G\left[\frac{V^{2}}{2}+\frac{Z}{\gamma(\gamma-1)}\right]\xi^{4}d\xi-\frac{8}{3}\int_{\xi_{*}}^{1}G\xi\left(\int_{0}^{\xi}G\eta^{2}d\eta\right)d\xi.$
(33)
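As an illustration of how (33) can be evaluated in practice, the sketch below (our own helper; function and variable names are ours) integrates tabulated profiles $G(\xi)$, $V(\xi)$, $Z(\xi)$ on a grid from $\xi_{*}$ to 1. The inner integral effectively starts at $\xi_{*}$, since $G=0$ inside the void:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal quadrature of y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of y(x), starting from x[0]."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

def beta_from_profiles(xi, G, V, Z, gamma):
    """Solve Eq. (33) for beta, given profiles on a grid xi in [xi_*, 1]."""
    inner = cumtrapz(G * xi**2, xi)  # int_{xi_*}^{xi} G eta^2 d eta (G = 0 in the void)
    term1 = (64.0 * np.pi / 25.0) * trapz(
        G * (V**2 / 2.0 + Z / (gamma * (gamma - 1.0))) * xi**4, xi)
    term2 = (8.0 / 3.0) * trapz(G * xi * inner, xi)
    return (term1 - term2)**(-0.2)  # beta = (beta^{-5})^{-1/5}
```

With constant test profiles both integrals have closed forms against which the quadrature can be checked.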
Table 2: The values $\beta(\gamma)$ for the analytic and numerical solutions $\gamma$ | $\beta_{an}$ | $\beta_{num}$
---|---|---
1.05 | 3.2910 | 3.3512
1.10 | 2.2268 | 2.5003
1.12 | 2.0423 | 2.3713
1.15 | 1.8522 | 2.2416
1.17 | 1.7631 | 2.1785
1.20 | 1.6667 | 2.1041
1.35 | 1.4604 | 1.8897
1.45 | 1.4048 | 1.8050
1.60 | 1.3554 | 1.6709
2.00 | 1.2814 | 1.1298
The values of $\beta(\gamma)$ for the analytic and numerical solutions are given in Table 2.
## Acknowledgments
This work was partially supported by RFBR grants 18-02-00619, 18-29-21021 and
20-02-00455.
## References
* [1] N. Tanvir (2013); arXiv:1307.6156v1.
* [2] K.P. Stanyukovich, Nonstationary motion of continuous media. Gostekhizdat. Moscow, (1955) (in Russian).
* [3] G.I. Taylor, Proc. Roy. Soc. A201, 175 (1950).
* [4] L.I. Sedov, Doklady Acad. USSR 52, No.1 (1946).
* [5] L.I. Sedov, Metody podobiya i razmernostei v mekhanike. Nauka, Moscow, (1977) (in Russian).
* [6] G.S. Bisnovatyi-Kogan, Gravitation and Cosmology 21, 236 (2015); arXiv:1408.1981v2.
* [7] Ya.B. Zeldovich, I.D. Novikov, Relativistic astrophysics. Volume 2. The structure and evolution of the universe. Chicago, IL, University of Chicago Press (1983).
* [8] E. Bertschinger, Astrophys. J. 268, 17 (1983).
* [9] I.G. Kovalenko, P.A. Sokolov, Astron. Astrophys. 270, 1 (1993).
* [10] M.A. Eremin, I.G. Kovalenko, Astron. Astrophys. 335, 370 (1998).
* [11] S. Ikeuchi, K. Tomisaka, J.P. Ostriker, Astrophys. J. 265, 583 (1983).
* [12] L.M. Ozernoi, V.V. Chernomordik, Soviet Astronomy, 22, 141 (1978).
* [13] J. Shwarz, J.P. Ostriker, A. Yahil, Astrophys. J., 202, 1 (1975).
* [14] E.T. Vishniac, J.P. Ostriker, E. Bertschinger, Astrophys. J. 291, 399 (1985).
* [15] J.P. Ostriker, C.F. McKee, Rev. Mod. Phys. 60, 1 (1988).
* [16] E. Bertschinger, Astrophys. J. 295, 1 (1985).
* [17] Ya.M. Kazhdan, Sov. Astron. 30, 261 (1986).
* [18] L. Ciotti, A. D’Ercole, Astron. Astrophys. 215, 347 (1989).
* [19] L.D. Landau, E.M. Lifshitz, Hydrodynamics. Nauka, Moscow, (1988) (in Russian).
* [20] G.S. Bisnovatyi-Kogan, S.A. Panafidina, Astron. Reports 63, 263 (2019).
# The Neupert Effect of Flare UltraViolet and Soft X-ray Emissions
Jiong Qiu Department of Physics, Montana State University, Bozeman, MT, USA
###### Abstract
We model the Neupert effect that relates flare heating energy with the
observed SXR emission. The traditional form of the Neupert effect refers to
the correlation between the time-integrated HXR or microwave light curve and
the SXR light curve. In this paper, instead, we use as the proxy for heating
energy the ultraviolet (UV) emission at the foot-points of flare loops, and
modify the model of the Neupert effect by taking into account the discrete
nature of flare heating as well as cooling. In the modified empirical model,
spatially resolved UV lightcurves from the transition region or upper
chromosphere are each convolved with a kernel function characterizing decay of
the flare loop emission. Contributions by all loops are summed to compare with
the observed total SXR emission. The model has successfully reproduced the
observed SXR emission from its rise to decay. To estimate heating energies in
flare loops, we also employ the UV Foot-point Calorimeter (UFC) method that
infers heating rates in flare loops from these UV light curves and models
evolution of flare loops with a zero-dimensional hydrodynamic code. The
experiments show that a multitude of impulsive heating events do not well
reproduce the observed flare SXR light curve, but a two-phase heating model
leads to better agreement with observations. Comparison of the two models of
the Neupert effect further allows us to calibrate the UFC method, and improve
the estimate of heating rates in flare loops continuously formed by magnetic
reconnection throughout the flare evolution.
Sun: activities – Sun: flares – Sun: UV radiation – Sun: X-rays
## 1 INTRODUCTION
Neupert (1968) discovered that the time integral of the microwave light curve
of a flare is correlated with the flare soft X-ray (SXR) light curve during
its rise. Subsequently, the Neupert effect has been confirmed in generations
of flare observations. Dennis & Zarro (1993) studied 66 flares observed in
1980 by the Hard X-ray Burst Spectrometer (HXRBS) on the Solar Maximum Mission
(SMM; Orwig et al., 1980) and the Geostationary Operational Environmental
Satellite (GOES), finding that 80% of large flares exhibit good correlations
between the hard X-ray (HXR) light curve and the time derivative of the GOES
SXR light curve in the 1-8 Å passband. Applying the time-correlation analysis
to more than one thousand flares observed between 1997 January and 2000 June
by GOES and the Burst and Transient Source Experiment (BATSE) on-board the
Compton Gamma-Ray Observatory (Schwartz et al., 1992), Veronig et al. (2002)
confirmed that the timing behaviour of the HXR and SXR emissions in large
flares is consistent with the Neupert effect. McTiernan et al. (1999) examined
flare SXR and HXR observations by the Soft X-ray Telescope (SXT; Tsuneta et
al., 1991), the Bragg Crystal Spectrometer (BCS; Culhane et al., 1991), and the
Hard X-ray Telescope (HXT; Kosugi et al., 1991) on Yohkoh, finding the Neupert
effect more prominently demonstrated in high temperature SXR light curves.
Effenberger et al. (2017) further confirmed the Neupert effect exploiting
flare observations by the Reuven Ramaty High Energy Solar Spectroscopic Imager
(RHESSI; Lin et al., 2002) for the past two solar cycles. The Neupert effect
has also been found in small flares. Qiu et al. (2004a) studied the Neupert
effect in more than 100 microflares (of GOES class A to C1) with significant
HXR emissions observed by RHESSI, finding that the time derivative of the GOES
SXR emission is best correlated with the HXR emission at the photon energy 14
– 20 keV. Glesener et al. (2020) have recently detected non-thermal HXR
emission in an A5.7 microflare observed by the Nuclear Spectroscopic Telescope
Array (NuSTAR; Grefenstette et al., 2016), which also exhibits the Neupert
effect.
The Neupert effect is interpreted to mean that flare plasmas in the corona are heated by non-thermal electrons. These electrons precipitate into the lower atmosphere and lose their energy almost instantaneously through collisions with ions. In this process, thick-target HXR emissions are generated, and chromosphere evaporation is driven, which both heats the corona and increases its density, leading to the enhanced SXR emission (e.g., Antonucci et al., 1982; Fisher et al., 1985; Li et al., 1993; Lee et al., 1995). Therefore, the
HXR light curve of a flare can serve as the proxy of the electron energy flux,
and its time integral is equivalent to the maximum thermal energy of the
subsequently heated flare plasmas in the corona, achieved at the time when the
flare SXR emission peaks. Analyzing spectroscopic observations of flares, a
number of studies have then estimated this maximum flare thermal energy as
well as the total energy in non-thermal electrons, suggesting that these two
energies are indeed comparable in large flares (see Emslie et al., 2012;
Aschwanden et al., 2017, and references therein), and sometimes in small
flares as well (e.g., Glesener et al., 2020). With this notion, generations of
hydrodynamic models have been developed to study evolution of flare corona
with non-thermal electron beams as the primary source of heating (Somov et
al., 1981; Nagai & Emslie, 1984; Mariska et al., 1989; Emslie et al., 1992;
Warren & Antiochos, 2004; Reep, 2014). Specifically, effort has been made to
model evolution of the flare corona (and chromosphere), using observed HXR
light curves or the time derivative of SXR light curves to infer time-
dependent heating rates in flares, and reproduce observed thermodynamic
properties of flare plasmas in the corona and the chromosphere (Fisher &
Hawley, 1990; Rubio da Costa et al., 2016).
Despite the prevailing evidence in support of the Neupert effect, there are
several caveats in the traditional form of the Neupert effect. It only
addresses the rise phase of the flare SXR emission and only considers non-
thermal electrons as the primary carrier of corona heating energy. As has been
noted for decades, energy release and flare heating often continue into the
decay phase of the flare SXR emission, when the HXR emission has usually
diminished, and the amount of heating energy deposited in the decay phase can
be significant (Withbroe, 1978; Dere & Cook, 1979; Ryan et al., 2013). Whereas
prior studies have confirmed the Neupert effect in a large number of flares,
these same studies have also revealed that, in a significant fraction of
flares, the SXR emission continues to rise after the HXR emission has ended
(Veronig et al., 2002), and in some flares, the SXR emission rises before the
HXR emission (Effenberger et al., 2017). These observations indicate that
other sources of energy are needed to heat the flare corona (Veronig et al.,
2005). Furthermore, flare heating takes place in many flare loops that are
generated continuously into the decay phase. These loops are heated, by
chromosphere evaporation driven by either non-thermal beams or else, such as
thermal conduction (Gan et al., 1991; Longcope, 2014) or Alfvén waves
(Fletcher & Hudson, 2008; Kerr et al., 2016), and then cool, and the total SXR
emission at any given time is the sum of the emissions from all these loops at
their different evolution stages (e.g., Aschwanden & Alexander, 2001). The
continuous heating and cooling of multiple flare loops cannot be well
described by the Neupert effect applied to the total HXR and SXR emissions
that are not spatially resolved.
These questions motivate the thinking to extend the Neupert effect to a
broader context that addresses the nature of flare heating on elementary
scales and perhaps beyond non-thermal electrons. Apart from microwave and HXR
light curves, which are indicative of non-thermal electrons, flare emission in
the lower-atmosphere observed in the optical, ultraviolet, and extreme
ultraviolet wavelengths generally exhibits an impulsive behavior before the
more gradual rise of the SXR emission (see the review by Fletcher et al.,
2011). In large flares, enhanced UV and EUV emissions have often been found to
trace HXR emissions temporally and/or spatially (Kane & Donnelly, 1971;
McClymont & Canfield, 1986; Cheng et al., 1988; Cheng, 1990; Fletcher &
Hudson, 2001; Warren & Warshall, 2001; Qiu et al., 2010; Cheng et al., 2012),
supporting the scenario of heating by non-thermal electrons. But observations
have also shown impulsive UV emissions at the flare foot-points not associated
with thick-target HXR signatures (Warren & Warshall, 2001; Alexander & Coyner,
2006; Coyner & Alexander, 2009; Cheng et al., 2012), and in these cases, it is
likely that the temperature of the corona is rapidly raised, and thermal
conduction would deposit energy in the chromosphere, causing enhanced
optical, UV, and EUV emissions, and driving chromosphere evaporation as well.
Most recently, spectroscopic observations in these wavelengths with high
spatial resolutions have revealed downflows (chromosphere condensation) and
upflows (chromosphere evaporation) in a large number of flare kernels at
unprecedented small scales, illustrative of prototypical, elementary energy
release events in the flare (Graham & Cauzzi, 2015). These state-of-the-art
observations clearly demonstrate the critical role of chromosphere evaporation
in energizing the flare corona regardless of heating mechanisms.
The advanced flare observations in the lower atmosphere provide us with the
opportunity to better characterize heating rates in flare loops. In this
spirit, we analyze the ultraviolet emission from the transition region and
upper chromosphere at the foot-points of flare loops. The transition region
and upper chromosphere respond promptly to energy release in the corona, and
the resultant UV emission can be used as a proxy for heating. This approach is
free from the assumption that heating is primarily by non-thermal electrons.
Furthermore, high-resolution UV images allow us to track flare loops that are
formed and heated at different times and evolve independently throughout the
flare, assuming that these loops are anchored at brightened UV pixels. This
paper presents a thought experiment on the Neupert effect using spatially
resolved UV light curves instead of HXR light curves, and with two models, a
modified empirical model of the Neupert effect, and the UV Footpoint
Calorimeter (UFC) method that infers heating rates from UV light curves and
models evolution of the flare corona in a multitude of loops (Qiu et al.,
2012; Liu et al., 2013). Both models take into account heating as well as
cooling of flare loops formed at different times during the flare, which
contribute to the observed total SXR emission. The first model examines the
temporal relationship between the SXR and spatially resolved UV 1600 Å light
curves but cannot return the heating energy, whereas the UFC method will be
able to infer the heating rates in flare loops. In this study, we analyze 16
flares observed by GOES and the Atmospheric Imaging Assembly (AIA; Lemen et
al., 2012) (Section 2), apply the empirical model (Section 3) and UFC method
(Section 4) to these flares to reproduce the GOES SXR light curves, and
improve the estimate of flare heating energies by comparing these two models
(Section 5). Conclusions and discussions are given in the last section.
## 2 FLARE LIGHT CURVES
We have analyzed 16 flares listed in Table 1. The flare SXR emissions were obtained by GOES, and imaging observations of the flares in the UV 1600 Å passband were obtained by AIA on board the Solar Dynamics Observatory (SDO; Pesnell et al., 2012). (In the table, the magnitude of the flare is based on the GOES flux in the 1 – 8 Å passband, which has historically been scaled to match the flux measured by GOES satellites 1 – 7. As of October 28, 2020, the SXR flux obtained by GOES satellites 8 – 15 is reported as the “true” flux, which is equivalent to the “scaled” flux divided by 0.7 for the long channel (1 – 8 Å) and by 0.85 for the short channel (0.5 – 4 Å), respectively (https://hesperia.gsfc.nasa.gov/rhessidatacenter/complementary_data/goes.html). The flares analyzed in this paper were observed by GOES satellites 10 – 15, and the analysis uses the “true” flux in units of W m-2; yet, to be consistent with the past literature, the flare magnitude reported in Table 1 is still derived using the “scaled” flux.) Except for one event, SOL2011-12-26 (#3), these flares were also observed by RHESSI. Table 1
presents the information of the source region and position of each flare, the
duration of the flare $\tau_{d}$ derived from the flare light curves, and the
median half-length of flare loops estimated from the separation of the flare
ribbons observed in the AIA 1600Å images. The magnetic flux enclosed in the
total area of the flare ribbons gives the measurement of the total
reconnection flux $\Phi_{rec}$ (e.g. Qiu et al., 2004b; Saba et al., 2006),
and the uncertainty in $\Phi_{rec}$ is characterized by the difference in the
magnetic flux measured in positive and negative magnetic fields, respectively.
The total heating energy and its uncertainty in each flare are derived in the
following text (Sections 3, 4 and 5.1).
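The GOES flux conventions summarized above can be captured in a small helper (a sketch of our own; the 0.7 and 0.85 conversion factors are those quoted above, and the class thresholds are the standard GOES flare-class definitions):

```python
def scaled_to_true(flux_scaled, channel="long"):
    """Convert 'scaled' GOES 8-15 flux to 'true' flux (W m^-2)."""
    factor = {"long": 0.7, "short": 0.85}[channel]  # 1-8 A and 0.5-4 A channels
    return flux_scaled / factor

def goes_class(flux_scaled_long):
    """GOES class letter + multiplier from the scaled 1-8 A peak flux."""
    for letter, base in [("X", 1e-4), ("M", 1e-5), ("C", 1e-6),
                         ("B", 1e-7), ("A", 1e-8)]:
        if flux_scaled_long >= base:
            return f"{letter}{flux_scaled_long / base:.1f}"
    return "sub-A"
```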
Figure 1 shows the light curves of each of the 16 flares, including the GOES
SXR light curve at 1-8 Å (denoted as $\mathcal{F}_{{\rm sxr}}$, in units of W
m-2, in the following text), its time derivative ($\dot{\mathcal{F}}_{{\rm
sxr}}$), the total count-rate light curve in the UV 1600 Å passband integrated over the flare region ($\mathcal{F}_{{\rm uv}}$, in units of DN s-1), and the HXR count-rate light curve at photon energy 12 - 25 keV by RHESSI. Following
the convention, here we refer to the time period before the peak of the SXR 1
– 8 Å light curve as the rise phase or the impulsive phase of a flare,
followed by the gradual phase, or the decay phase.
Most of these flares exhibit the well-known Neupert effect, namely the flare
HXR light curve is temporally correlated with the time derivative of the 1 – 8
Å SXR light curve $\dot{\mathcal{F}}_{{\rm sxr}}$ during the rise of the SXR
emission. To examine the degree to which the Neupert effect applies, we
conduct a time-lagged cross-correlation between $\dot{\mathcal{F}}_{{\rm
sxr}}$ and the HXR light curves at 12 - 25 keV and 25 - 50 keV, respectively,
and the derived maximum cross-correlation coefficients and time lags are given
in Table 1. In a few flares, the HXR emission in 12 - 25 keV lags
$\dot{\mathcal{F}}_{{\rm sxr}}$ by less than a minute, likely due to the mixture
of thermal emission in this channel (e.g., Veronig et al., 2005; McAteer &
Bloomfield, 2013). In comparison, the HXR emission in 25 - 50 keV (not shown in
the figure) does not lag $\dot{\mathcal{F}}_{{\rm sxr}}$. Since most of these
flares do not exhibit significant HXR emissions beyond 25 keV, here we do not
conduct a comprehensive energy-dependent analysis (e.g. McAteer & Bloomfield,
2013); instead, this study focuses on flare UV light curves in the AIA 1600Å
passband.
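The time-lagged cross-correlation used above can be sketched as follows (a minimal version of our own, assuming both light curves are sampled on the same uniform time grid; a positive lag means the second series lags the first):

```python
import numpy as np

def max_crosscorr(a, b, dt, max_lag):
    """Maximum Pearson correlation over time shifts up to +/- max_lag.
    Returns (coefficient, lag); a positive lag means b lags a."""
    best_r, best_lag = -np.inf, 0.0
    nmax = int(round(max_lag / dt))
    for k in range(-nmax, nmax + 1):
        if k >= 0:
            x, y = a[:len(a) - k], b[k:]
        else:
            x, y = a[-k:], b[:len(b) + k]
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best_r, best_lag = r, k * dt
    return best_r, best_lag
```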
Readers are reminded that, throughout the following text, the flare UV light
curve, $\mathcal{F}_{{\rm uv}}$, specifically refers to emission in the AIA
1600 Å passband. The flare emission in this passband is dominated by C iv, Si
ii, C i, and He ii lines formed in the transition region and the upper
chromosphere in the temperature range $4.2<{\rm log}T<5.1$ (Simões et al.,
2019). Using high-resolution spectral observations by the Skylab during the
decay phase of a flare, Simões et al. (2019) found that the most notable line,
the C iv line (100,000 K) in this passband, contributes to 26% of the AIA 1600
Å flare emission. Figure 1 shows that $\mathcal{F}_{{\rm uv}}$ matches very
well $\dot{\mathcal{F}}_{{\rm sxr}}$ during the rise phase, and the
coefficients of the cross-correlation and time lags between the two are
similar to those between $\dot{\mathcal{F}}_{{\rm sxr}}$ and the HXR 12 - 25
keV emission, suggesting a close relation between the HXR emission and the
transition-region and upper-chromosphere line emission (e.g., Cheng et al.,
1984), such as the emission in the AIA 1600Å passband analyzed in this study.
On the other hand, it is noted that the flare UV emission at this passband
proceeds for a longer time than both the $\dot{\mathcal{F}}_{{\rm sxr}}$ and
HXR light curves.
The flare emission in the AIA 1600Å passband is produced by heating of the
transition region or upper chromosphere with reconnection-released energy
carried along newly formed flare loops into the lower atmosphere at their
feet. Figure 2 shows, as examples, two flares SOL2014-04-18 (event # 7) and
SOL2013-08-12 (event # 4), respectively. The left panels show the evolution of
flare ribbons in the UV 1600Å passband mapped on a line-of-sight magnetogram
obtained from the Helioseismic and Magnetic Imager (HMI; Schou et al., 2012).
The color code indicates the earliest time a pixel is brightened, or its
activation time, defined as the time when its brightness reaches 4 times the
pre-flare quiescent background (Qiu et al., 2010). The right panels show the
UV 1600Å light curves from a few brightened pixels during the flare. From
these figures, it is evident that, after the impulsive phase of a flare,
reconnection continues to form flare loops and releases energy in them, and
the continuous reconnection into the decay phase contributes to the prolonged
total UV emission. These observations suggest that spatially resolved flare
light curves of UV or optical emission in the lower atmosphere provide a
comprehensive temporal coverage and spatial mapping of reconnection energy
release events in a flare. Therefore, in this study, we use the flare UV 1600Å
emission as the proxy for flare heating regardless of the heating mechanism.
We examine the Neupert effect that relates spatially resolved UV light curves
with the total SXR light curve, and estimate heating energies in flare loops
assumed to be anchored at the UV-brightened pixels.
For this purpose, we obtain spatially resolved UV 1600Å light curves in
flaring pixels whose brightness is increased to at least 4 times the quiescent
background and stays bright for at least 4 minutes. The first criterion is
used to distinguish flaring pixels from plages, whose brightness distribution
peaks at 3.5 times the quiescent background. The second criterion helps to
pick out pixels at the feet of closed loops, different from the feet of open
field lines, or ejecta, which are brightened only briefly. For each of the
flares in Table 1, a few thousand flaring pixels are identified. We assume that anchored to each UV-bright pixel is a flaring half-loop, and that the UV brightness of the pixel scales with the heating flux in the half-loop. In the following text, each of these half-loops is called a loop event
or a heating event. We then use two methods, an empirical formula of Neupert
effect and a zero-dimensional hydrodynamic code, to model these heating events
and reproduce the synthetic SXR light curve $\mathcal{F}_{{\rm sxr}}$
comparable with GOES observations.
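The two pixel-selection criteria can be expressed as a mask over a (pixel, time) brightness array; the sketch below is our own schematic (array and function names are invented), with the 4x-background threshold and 4-minute persistence taken from the text:

```python
import numpy as np

def select_flaring_pixels(brightness, background, dt_min,
                          min_duration=4.0, threshold=4.0):
    """brightness: (npix, ntime) light curves; background: (npix,) quiescent level.
    A pixel qualifies if it exceeds threshold*background and stays above that
    level for at least min_duration minutes (dt_min = cadence in minutes)."""
    bright = brightness >= threshold * background[:, None]
    need = int(np.ceil(min_duration / dt_min))  # required consecutive samples
    keep = np.zeros(brightness.shape[0], dtype=bool)
    for i, row in enumerate(bright):
        run = 0
        for flag in row:
            run = run + 1 if flag else 0
            if run >= need:
                keep[i] = True
                break
    return keep
```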
We specify the time range for the analysis of the UV 1600Å and SXR light
curves. The start time $t_{s}$ of a flare is defined as when
$\mathcal{F}_{{\rm sxr}}$ rises to $e^{-4}$ of its peak emission. The end time
of the flare $t_{e}$ is defined by the $\mathcal{F}_{{\rm uv}}$, instead, as
when $\mathcal{F}_{{\rm uv}}$ decays to $e^{-2}$ of its maximum. The duration
of the flare is $\tau_{d}=t_{e}-t_{s}$, and is reported in Table 1.
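The start and end times defined above can be computed directly from the two light curves; a minimal sketch (our own, assuming a uniform time grid):

```python
import numpy as np

def flare_time_range(t, f_sxr, f_uv):
    """t_s: first time F_sxr rises to e^-4 of its peak;
    t_e: first time after its maximum that F_uv decays to e^-2 of its maximum."""
    i_s = np.argmax(f_sxr >= np.max(f_sxr) * np.exp(-4.0))
    i_peak_uv = np.argmax(f_uv)
    decayed = f_uv[i_peak_uv:] <= np.max(f_uv) * np.exp(-2.0)
    i_e = i_peak_uv + (np.argmax(decayed) if decayed.any() else len(decayed) - 1)
    return t[i_s], t[i_e]
```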
## 3 NEUPERT EFFECT: AN EMPIRICAL MODEL
The Neupert effect refers to the observation that the time-integrated HXR or
microwave light curve matches the SXR light curve from its rise to peak. The
SXR emission then decays because of the reduced emissivity in the passband due
to decreased temperature (cooling) and/or density, which is not addressed by
the Neupert effect in its original form. Furthermore, during a flare, numerous
flare loops are formed and heated, and then cool, at different times. The
total SXR emission at any given time is the sum of the emissions from these
loops, each at its own distinct evolution stage; earlier formed flare loops
may be cooling during the rise of $\mathcal{F}_{\rm sxr}$, whereas new heating
events may still take place when $\mathcal{F}_{\rm sxr}$ appears to decay.
To model the Neupert effect in its complete form, we take into consideration
the discrete nature of flare heating as well as cooling in individual flare
loops, and compare the sum of the flare emission from multiple loops with the
observed total SXR emission. We assume that each newly brightened UV pixel is
the foot of a newly formed flare half-loop, and the UV light curve of the
pixel is simply scaled to the heating rate in the loop event. We then convolve
the UV light curve of each loop event with a kernel function $\mathcal{K}$
that represents the decay of the flare emission in the loop. The modeled total
SXR emission is therefore given by
$\mathcal{F}_{{\rm sxr}}(t)=c_{0}\sum_{i=1}^{N}\int_{0}^{t}\mathcal{F}_{{\rm
uv},i}(t^{\prime})\mathcal{K}_{i}(t,t^{\prime})dt^{\prime},$ (1)
where subscript $i$ indicates the contribution from the $i$th loop event,
assumed to be anchored to the $i$th UV brightened pixel. $c_{0}$ is a scaling
constant relating SXR and UV emissions. We have experimented with several
forms of the kernel function, and found that the function of a half-Gaussian
provides the best model:
$\mathcal{K}_{i}(t,t^{\prime})={\rm
exp}\left[\frac{-(t-t^{\prime})^{2}}{2\tau_{i}^{2}}\right](t>t^{\prime}),$ (2)
where $\tau_{i}$ is the decay timescale of the emission of the $i$th loop
event. When $\tau_{i}\rightarrow\infty$, Equation 1 gives the traditional
description of the Neupert effect, that $\mathcal{F}_{{\rm sxr}}$ is the time
integral of $\mathcal{F}_{{\rm uv}}$ without taking into account cooling.
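Equations 1-2 translate into a short numerical routine. The discretized sketch below (our own illustration, on a uniform time grid) convolves each per-pixel UV light curve with its half-Gaussian kernel and sums the contributions:

```python
import numpy as np

def model_sxr(t, uv_curves, taus, c0=1.0):
    """Discretized Eq. (1): F_sxr(t) = c0 * sum_i int_0^t F_uv,i(t') K_i(t,t') dt',
    with the half-Gaussian kernel of Eq. (2): K_i = exp(-(t-t')^2 / (2 tau_i^2))."""
    dt = t[1] - t[0]
    f_sxr = np.zeros_like(t)
    for uv, tau in zip(uv_curves, taus):
        for k in range(len(t)):
            kern = np.exp(-(t[k] - t[:k + 1])**2 / (2.0 * tau**2))
            g = uv[:k + 1] * kern
            f_sxr[k] += np.sum(0.5 * (g[1:] + g[:-1])) * dt  # trapezoidal rule
    return c0 * f_sxr
```

In the limit $\tau_{i}\rightarrow\infty$ the kernel is identically 1 and the routine reduces to the running time integral of the UV emission, i.e. the traditional form of the Neupert effect.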
An automated routine is run to search for the optimal decay timescale
$\tau_{i}$ so that the model light curve $\mathcal{F}_{{\rm sxr}}$ matches the
observed light curve. Our experiments suggest that Equation 1 with the same
constant $\tau_{i}$ for all loop events cannot reproduce the observed
$\mathcal{F}_{{\rm sxr}}$ from rise to decay. We then allow the decay time
$\tau_{i}$ to be time-dependent, considering that, as the flare evolves,
reconnection takes place at higher altitudes producing longer loops, which
take a longer time to cool. For a given flare, we use the following trial
function to determine $\tau_{i}$
$\tau_{i}=\tau_{0}{\rm exp}\left[\frac{t_{i}-t_{s}}{f\tau_{d}}\right].$ (3)
Here $t_{i}$ is the peak time of $\mathcal{F}_{{\rm uv},i}$ for the $i$th loop
event, $t_{s}$ and $t_{e}$ are the start and end times of the flare previously
defined, and $\tau_{d}\equiv t_{e}-t_{s}$ is the duration of the flare. For
each loop event, $\tau_{i}$ is constant. For each flare, $\tau_{0}$ and $f$
are constant, which give the decay time at the start of the flare and the
growth rate of the decay time as the flare evolves. For each flare, the
automated routine searches for the optimal set of $\tau_{0}$, $f$, and $c_{0}$
that produce the best overall correlation and smallest deviations between the
model and observed $\mathcal{F}_{{\rm sxr}}$ during the time period from
$t_{s}$ to $t_{e}$.
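The trial function of Equation 3 is straightforward to evaluate per loop event (a sketch of our own; the parameter values in the test are made up for illustration):

```python
import numpy as np

def decay_times(t_peaks, t_s, tau0, f, tau_d):
    """Eq. (3): tau_i = tau0 * exp[(t_i - t_s) / (f * tau_d)],
    where t_i is the peak time of the i-th UV light curve."""
    return tau0 * np.exp((np.asarray(t_peaks, dtype=float) - t_s) / (f * tau_d))
```

For example, with $f=0.5$ a loop event peaking at the end of the flare has a decay time $\tau_{0}e^{2}\approx 7.4\,\tau_{0}$.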
Figure 3 shows the comparison of the model (thick solid pink) and observed
(thick solid black) $\mathcal{F}_{{\rm sxr}}$ for the 16 flares analyzed in
this paper. Also shown in thin solid lines are the total light curve in the
AIA 1600Å passband $\mathcal{F}_{{\rm uv}}$ (pink) and the time derivative of
$\mathcal{F}_{{\rm sxr}}$ (black). Seen from the figures, the majority of the
flares are very well modeled by Equation 1, and the mean difference between
the model and observation normalized to the peak of $\mathcal{F}_{{\rm sxr}}$
is within 10%. Events #14 and #15 are the least successful, suggesting that
the flare evolution in these two events may deviate from the general
description by Equation 1, particularly in the decay phase. The overall
success of this simple model in the majority of the flares suggests that
hydrodynamic evolution of flare loops, which contribute to the GOES 1 – 8 Å SXR
emission, may be governed by some general rules (Warren & Antiochos, 2004).
Also shown in Figure 3 is the variation of $\tau_{i}$ (green) as the flare
evolves. Except for event # 4, a growing decay timescale is required to
reproduce both the rise and decay of the total SXR emission. Qualitatively
this is consistent with the general observation that, as the flare evolves,
reconnection takes place at higher altitudes, forming longer loops, which cool
more slowly. Observations show the growing separation of the two ribbons (e.g.
Figure 2a), evidence for growing loops. However, in a few flares (e.g. #
5), during the decay of the SXR emission, $\tau_{i}$ becomes much longer than
expected cooling timescales based on observed flare lengthscales and typical
thermodynamic properties of flare loops. Therefore, the empirical decay
timescale found here to match the observation is not necessarily the same as
the cooling timescale.
We note that the empirical model (Equation 1) has also been applied to
HXR light curves (in which case $N=1$), or the impulsive component of
$\mathcal{F}_{uv,i}$ with its slow-decay component truncated, but cannot
produce a good agreement with observed $\mathcal{F}_{sxr}$. These experiments
indicate that continuous heating in the gradual phase seems essential in
individual loop events and throughout the flare evolution (Qiu & Longcope,
2016; Zhu et al., 2018). The empirical model supports the scenario requiring
the gradual phase heating in individual loop events, but the model itself is
not physical and cannot return the heating rates. To find the amount of energy
used in heating the flare corona, we then employ the UFC method to model
evolution of flare loops.
## 4 NEUPERT EFFECT: THE UFC METHOD
The encouraging result from the modified empirical model of the Neupert effect
indicates that spatially resolved UV emission may be used as a proxy for
heating rates in flare loops. Qiu et al. (2012); Liu et al. (2013) have
implemented this idea, and developed the UFC method to model flare heating.
The method infers heating rates in loop events from the UV lightcurves at the
foot-points and models plasma evolution in these loop events with a zero-
dimensional hydrodynamic code, the Enthalpy-based Thermal Evolution of Loops
model (EBTEL; Klimchuk et al., 2008; Cargill et al., 2012). The UFC method has
been applied to analyze and model several flares with varying degrees of
success (Qiu et al., 2012; Liu et al., 2013; Zeng et al., 2014; Qiu &
Longcope, 2016; Zhu et al., 2018). The latest effort by Qiu & Longcope (2016)
and Zhu et al. (2018) has suggested that, even in one loop event, heating
takes place in two phases, an intense impulsive heating phase lasting for a
few minutes followed by a gradual heating phase lasting for up to a few tens
of minutes yet at a much lower rate. These two phases of heating are reflected
in the UV light curve of a single pixel (see Figure 2b), usually exhibiting a
sharp impulsive rise followed by a long decay. Therefore, in the latest
experiment, the UV light curve has been used to infer the heating rate in both
the impulsive and gradual phases of heating, with which Zhu et al. (2018)
have successfully modeled a two-ribbon flare with the model synthetic
emissions in agreement with the observed emissions in 15 passbands by GOES,
AIA, the Extreme-ultraviolet Variability Experiment (EVE; Woods et al., 2012),
and the X-ray Telescope (XRT; Golub et al., 2007).
In this paper, we use the UFC method to model the 16 flares with a specific
focus on understanding the relationship between UV light curves in AIA 1600Å
passband and GOES SXR lightcurves. The details of the method are given in Qiu
et al. (2012); Liu et al. (2013), with the most recent update by Zhu et al.
(2018), which takes into account the two-phase heating as well as an empirical
treatment of thermal conduction suppression (Jiang et al., 2006). In this
study, we apply this updated model with the empirical term of turbulent
suppression of thermal conduction, which gives rise to higher plasma
temperature at the peak of the flare heating. For simplicity, we do not aim at
a full-scale comparison of the model results with multi-passband observations
as done before, but focus on the GOES SXR light curves at 1 – 8 Å and 0.5 – 4
Å. In addition, we also constrain the cooling rates by comparing the model
results with the light curves from the AIA 211Å passband, which captures flare
emission at 2 MK as plasma cools down. For each flare, we use a scaling
constant $\lambda$ to convert observed data counts of the UV 1600Å light curve
of a brightened pixel to energy flux in the corresponding loop event:
$\mathcal{Q}_{i}(t)L_{i}=\lambda\mathcal{F}_{uv,i}(t)$, where
$\mathcal{F}_{uv,i}(t)$ is the UV 1600Å light curve (in units of DN s-1
pxl-1), $\mathcal{Q}_{i}(t)$ is the volumetric heating rate (in units of erg
cm-3 s-1), and $L_{i}$ is the length of the half-loop. The length of a given
half-loop is $L_{i}=L_{0}+v(t_{i}-t_{s})$, where $t_{i}$ is the time when
$\mathcal{F}_{uv,i}(t)$ peaks; $L_{0}$ and the growth rate $v$ are
estimated from the time-dependent separation of newly brightened two flare
ribbons in the positive and negative magnetic fields, assuming that the half-
loop is a quarter of a circle whose diameter is the mean distance between two
flare ribbons. With these heating rates as input, and another free parameter
$\eta$ that describes the radiative loss from the transition region as scaled
to the mean pressure in the flare loop (Qiu et al., 2013), the model computes
the mean temperature and density of thousands of loop events that evolve with
time, and the resultant time-dependent differential emission measure is
convolved with the emissivity and instrument response functions. (The GOES
response function is derived with the SSWIDL code goes_fluxes.pro, and the
response functions for the AIA EUV passbands are derived with
aia_get_response.pro. These response functions are provided by the instrument
teams using the latest calibration, as of 2020 October, with the CHIANTI 9.0.1
atomic database and coronal abundance. The AIA response functions are also
calibrated with EVE.) For a given flare, $\lambda$ and $\eta$ are constant for
all the identified loop events; for different flares, $\lambda$ and $\eta$ may
be different. We model each flare with varying $\lambda$ and $\eta$ and find
the optimal values that give the best comparison between the observed and
synthetic GOES SXR fluxes at two channels and EUV flux at the AIA 211 Å
passband.
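The conversion from a pixel's UV light curve to a heating rate described above can be sketched in a few lines; this is a minimal illustration with hypothetical variable names and parameter values, not the authors' code:

```python
import numpy as np

def heating_rate(f_uv, t, lam, L0, v, t_s):
    """Convert a pixel's UV 1600 A light curve to a volumetric heating rate.

    Implements Q_i(t) * L_i = lambda * F_uv,i(t), with the half-loop length
    L_i = L0 + v * (t_i - t_s), t_i being the peak time of the light curve.

    f_uv : UV 1600 A light curve (DN/s/pixel); lam : scaling constant;
    L0 : initial half-loop length (cm); v : ribbon growth rate (cm/s);
    t_s : ribbon start time (s).
    """
    t_i = t[np.argmax(f_uv)]          # time of the UV peak
    L_i = L0 + v * (t_i - t_s)        # half-loop length for this loop event
    Q = lam * f_uv / L_i              # volumetric heating rate (erg cm^-3 s^-1)
    return Q, L_i
```

One heating-rate profile of this kind is produced per UV-brightened pixel, i.e. per loop event, and fed to the EBTEL model.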
Figure 4 shows the comparison of the observed and model synthetic SXR and EUV
fluxes for the 16 flares. In each panel, the synthetic SXR light curves in 1 –
8 Å (thick solid pink) and 0.5 – 4 Å (thin solid pink), and EUV 211 Å light
curve (dashed green) are averaged over two model runs conducted with different
$\lambda$ and $\eta$ values that produce the optimal comparison with observed
SXRs (solid black) and EUV 211 Å flux (solid green). The total heating rate
(blue) is also the average of the two runs. For clarity of the display, the
synthetic and observed GOES SXR flux in 0.5 – 4 Å is multiplied by a factor of
two, and uncertainties, which are small fractions of the mean fluxes, are not
plotted in the figure. As seen in the figure, in the majority of the flares, the
synthetic SXR and EUV fluxes are in reasonable agreement with the observed
fluxes.
Note that the zero-dimensional model is not capable of accurately calculating
plasma properties out of equilibrium during the very dynamic heating phase in
the first few minutes; therefore, the model cannot produce sufficient SXR 0.5
– 4 Å emission at very high temperatures, which is likely the case in a few
flares, like event #11. Nevertheless, the total SXR 1 – 8 Å and EUV emissions
summed over all loops during the flare timescale are mostly produced at lower
temperatures, and they depend mostly on the total energy deposit in the loops
and are less subject to the details of heating and plasma evolution in non-
equilibrium in the short impulsive heating phase (see discussions by
Winebarger & Warren, 2004). Therefore, the overall agreement between the
synthetic and observed total fluxes suggests that the heating rates
inferred from the flare foot-point UV 1600Å emissions are reasonable first-
order estimates. It is noted, though, that in the decay phase of a number of
flares, the model does not produce sufficient $\mathcal{F}_{sxr}$ emission as
observed. This will be further discussed in the next section, in conjunction
with the result of the empirical Neupert model.
Recall that the profile of the heating rate for each loop event used in the
model resembles the time profile of the UV light curve at the foot, which
generally consists of an impulsive component followed by a gradual component
(see Figure 2). As a comparison, the thick dashed pink curves in Figure 4 show
the synthetic $\mathcal{F}_{sxr}$ in 1 – 8 Å with the impulsive heating model.
For the impulsive model, the heating rate of a loop event is derived by
fitting the rise of the UV light curve to a half gaussian, and the impulsive
heating rate is a full gaussian (Qiu et al., 2012; Liu et al., 2013). All
other properties, such as the lengths of the loop events, are the same in the
impulsive heating model and two-phase heating model. The figure shows that, in
the majority of the flares, the two-phase heating model produces synthetic SXR
emissions in much better agreement with the observed SXR emission than the
model only using impulsive heating rates. The necessity of two-phase heating
requires a greater amount of flare heating energy than the impulsive heating.
In different flares, the fraction of impulsive heating energy out of the total
varies from 40% to 85%; on average, the amount of heating energy in the
impulsive components takes up about two thirds of the total heating energy,
and the remaining one third of heating energy is distributed in the gradual
components of the heating events.
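The construction of the impulsive heating profile described above (a half-Gaussian fit to the rise of the UV light curve, mirrored into a full Gaussian) can be sketched as follows; estimating the Gaussian width from the half-maximum crossing is our simplification of the actual fitting procedure:

```python
import numpy as np

def impulsive_profile(t, f_uv):
    """Fit the rise of the UV light curve with a half Gaussian (amplitude at
    the observed peak, width from the half-maximum crossing on the rise) and
    mirror it into the full Gaussian used as the impulsive heating profile."""
    i_pk = int(np.argmax(f_uv))
    a, t_pk = f_uv[i_pk], t[i_pk]
    # first rise sample above half maximum -> Gaussian sigma
    i_half = int(np.argmax(f_uv[:i_pk + 1] >= a / 2))
    sigma = max(t_pk - t[i_half], t[1] - t[0]) / np.sqrt(2 * np.log(2))
    return a * np.exp(-(t - t_pk) ** 2 / (2 * sigma ** 2))
```

By construction the full Gaussian decays symmetrically after the peak, which is why a model driven only by these impulsive profiles lacks the gradual-phase heating carried by the slow decay of the observed UV light curve.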
## 5 ENERGETICS OF FLARE HEATING
### 5.1 Estimate of Flare Heating Energy
The UFC method allows us to estimate the total energy deposit in the flare
corona. However, in a number of flares, the model still does not produce
sufficient SXR emission in the decay phase; therefore, the total heating
energy derived directly from the UFC method is likely the lower-limit of the
corona heating energy. On the other hand, $\mathcal{F}_{sxr}$ produced by the
empirical model compares better with the observation in the decay phase; yet
the empirical model only relates the time evolution of flare SXR and UV 1600Å
emissions, and cannot return the heating rates. To help improve estimates of
heating energies, we may use the results from the empirical model to calibrate
heating energies derived from the UFC method.
To understand the difference in the total SXR flux produced by the two models,
we compare the synthetic SXR flux in individual loop events. Figure 5 shows
the synthetic SXR light curves in ten randomly sampled loop events generated
by the empirical model (solid) and the UFC method (dashed), respectively, for
the two flares displayed in Figure 2. It is seen that the SXR fluxes generated
by the two models have very similar time profiles, yet for weak events, the
magnitude of the SXR flux by the UFC method is lower than that by the
empirical model. This comparison may explain the insufficient SXR emission by
the UFC method during the decay of the flare, when flare heating and the SXR
flux in individual loop events become smaller. Since the empirical model is
able to produce the total SXR flux which compares better with the observation,
we will assume that the SXR emission in each loop event generated by the
empirical model represents the ground truth, and use it to make new estimates
of heating energies in flare loops.
For this purpose, we first establish the relation between the heating energy
and the synthetic GOES SXR emission by the UFC-EBTEL model. The left panel of
Figure 6 shows a scatter plot of the time integrated heating energy in the
loops, denoted as $\mathcal{E}_{ufc}$ (in units of erg), versus the time
integrated synthetic GOES SXR flux generated by the EBTEL model, denoted as
$\mathcal{G}_{ufc}$ (in units of J m-2). The $\mathcal{E}-\mathcal{G}$ scatter
plot is quite tight for each flare, and can be described by a power-law
$\mathcal{E}\approx 10^{\beta}\mathcal{G}^{\alpha}$. For the flares modeled in
this study, $\alpha$ ranges between 0.45 and 0.67, and $\beta$ ranges from
29.52 to 30.56. In fact, the $\mathcal{E}-\mathcal{G}$ relation for all loop
events in all 16 flares can be fitted to one power-law, as shown in the figure
(solid black line), yielding $\langle\alpha\rangle=0.535\pm 0.001$ and
$\langle\beta\rangle=29.990\pm 0.004$. This scaling law allows us, without
running the hydrodynamic model, to estimate the total SXR emission in a loop
event given the amount of the heating energy, and vice versa.
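The power law $\mathcal{E}\approx 10^{\beta}\mathcal{G}^{\alpha}$ amounts to a straight line in log-log space, so a straightforward way to obtain $\alpha$ and $\beta$ (a sketch, not the fitting code used in the paper) is an ordinary least-squares fit:

```python
import numpy as np

def fit_power_law(E, G):
    """Least-squares fit of log10(E) = beta + alpha * log10(G), i.e. the
    E ~ 10^beta * G^alpha scaling between the time-integrated heating energy
    E (erg) and the time-integrated synthetic GOES SXR flux G (J m^-2)."""
    alpha, beta = np.polyfit(np.log10(G), np.log10(E), 1)
    return alpha, beta
```

With `alpha` and `beta` in hand, the scaling can be evaluated in either direction, `E = 10**beta * G**alpha` or `G = (E / 10**beta)**(1/alpha)`, without running the hydrodynamic model.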
In comparison, the right panel of Figure 6 shows the time integrated synthetic
SXR emission generated by the empirical model $\mathcal{G}_{emp}$ – for a
better comparison, we exclude events #14 and #15, which are not well modeled with
the empirical formula. As expected, $\mathcal{G}_{ufc}$ becomes increasingly
under-estimated for smaller $\mathcal{G}_{emp}$. Based on these analyses, we
make a new estimate of flare heating energy, denoted as $\mathcal{E}_{emp}$,
using $\mathcal{G}_{emp}$ as the ground truth for the SXR emission by each
loop event to replace $\mathcal{G}_{ufc}$ in the $\mathcal{E}-\mathcal{G}$
scaling, namely, $\mathcal{E}_{emp}\approx
10^{\beta}\mathcal{G}_{emp}^{\alpha}$. The estimate can be made using $\alpha$
and $\beta$ derived for each flare, or $\langle\alpha\rangle$ and
$\langle\beta\rangle$ derived for all flares, and the difference in the
estimate is not found significant. We take the average $\mathcal{E}_{emp}$
from these two estimates as a plausible upper-limit of the heating energy in
each loop event, whereas the heating energy $\mathcal{E}_{ufc}$ derived from
the original UFC method is taken as the lower limit.
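Applying the scaling in reverse, the upper-limit estimate described above can be sketched as below; the function name is ours, and the default coefficients are the all-flare values quoted in the text:

```python
import numpy as np

def energy_from_sxr(G_emp, alpha, beta, alpha_all=0.535, beta_all=29.990):
    """Upper-limit heating energy of a loop event from the empirical-model
    SXR fluence G_emp (J m^-2), averaging the per-flare (alpha, beta) and
    all-flare (alpha_all, beta_all) E-G scalings."""
    E_flare = 10.0 ** beta * G_emp ** alpha          # per-flare coefficients
    E_all = 10.0 ** beta_all * G_emp ** alpha_all    # all-flare coefficients
    return 0.5 * (E_flare + E_all)                   # erg
```

The lower limit remains the heating energy $\mathcal{E}_{ufc}$ returned directly by the UFC method for the same loop event.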
Figure 7a shows the distribution of heating energies $\mathcal{E}$, the mean
of $\mathcal{E}_{emp}$ and $\mathcal{E}_{ufc}$. The new estimate changes the
distribution of heating energies in the loop events, which becomes tighter
toward higher energies, and raises the total flare heating energy by one third
on average. Panel (b) shows the total energy
$\mathcal{E}_{tot}=\sum\mathcal{E}$ (in ergs) that is used to heat the flare
corona for each of the 14 flares (i.e., excluding events #14 and #15), plotted
against the flare magnitude defined by the peak SXR flux in the GOES 1 – 8 Å
channel. (Here, to be consistent with prior literature, the flare magnitude is
derived with the “scaled” GOES SXR flux, not the “true” flux. The “true” flux
in this channel, as released in October 2020, is equivalent to the “scaled”
flux divided by 0.7; see
https://hesperia.gsfc.nasa.gov/rhessidatacenter/complementary_data/goes.html.)
Each vertical bar indicates the range of the total heating energy, the lower
limit being the sum of $\mathcal{E}_{ufc}$ and the upper limit the sum of
$\mathcal{E}_{emp}$, and the symbols indicate $\mathcal{E}_{tot}$, the mean of
$\sum\mathcal{E}_{ufc}$ and $\sum\mathcal{E}_{emp}$. Overplotted is the
scaling law by Warmuth & Mann (2016, WM16 scaling law hereafter) that relates
the total (bolometric) radiation energy of flares observed between 1996 and
2007 (Kretzschmar, 2011; Emslie et al., 2012) to their GOES magnitude:
$\mathcal{E}_{bol}\approx 10^{34.49\pm 0.44}\mathcal{F}_{sxr}^{0.79\pm 0.10}$.
The total heating energy derived in this study scatters around the WM16
scaling law (the energy-magnitude scaling found here is
$\mathcal{E}_{tot}\approx 10^{34.33\pm 0.81}\mathcal{F}_{sxr}^{0.72\pm
0.17}$), suggesting that this study has achieved a close estimate of the total
heating
energy in flares. Warmuth & Mann (2016) also derived the maximum thermal
energy $\mathcal{E}_{th}$ and non-thermal electron energy $\mathcal{E}_{nth}$
of 24 flares observed by GOES and RHESSI between 2002 and 2003, which scale
with the flare magnitude as $\mathcal{E}_{th}\approx 10^{33.67\pm
0.26}\mathcal{F}_{sxr}^{0.88\pm 0.06}$, and $\mathcal{E}_{nth}\approx
10^{35.07\pm 0.38}\mathcal{F}_{sxr}^{1.08\pm 0.09}$, respectively. The heating
energy estimated here is nearly an order of magnitude larger than the maximum
thermal energy, and is also greater than the non-thermal electron energy,
particularly in small flares. Therefore, flare heating is not entirely due to
non-thermal electrons, and the foot-point UV emission signatures more
comprehensively capture heating events during the flare regardless of heating
mechanisms.
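As a numeric illustration (our arithmetic, using only the central values of the WM16 scaling laws quoted above, with the uncertainties omitted), evaluating them at an M1-class magnitude, $\mathcal{F}_{sxr}=10^{-5}$ W m-2, reproduces the ordering stated here: the bolometric energy exceeds the non-thermal electron energy, which in turn exceeds the maximum thermal energy.

```python
import math

# Central values of the WM16 scaling laws at an M1 magnitude (F = 1e-5 W/m^2).
F = 1e-5
E_bol = 10 ** 34.49 * F ** 0.79   # bolometric energy, ~3.5e30 erg
E_th  = 10 ** 33.67 * F ** 0.88   # maximum thermal energy, ~1.9e29 erg
E_nth = 10 ** 35.07 * F ** 1.08   # non-thermal electron energy, ~4.7e29 erg
print(E_bol, E_nth, E_th)
```

For this magnitude, $\mathcal{E}_{bol}$ is indeed about an order of magnitude above $\mathcal{E}_{th}$, consistent with the comparison made in the text.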
### 5.2 Reconnection and Energetics
Magnetic reconnection forms flare loops and releases energy that is used to
heat flare loops. The amount of magnetic flux $\Phi_{rec}$ participating in
reconnection is measured by summing up the magnetic flux in the pixels (see
Figure 2a, c) whose brightness in the 1600 Å passband is increased to be more
than 4 times the quiescent brightness and for at least 4 minutes. Flares in
this study take place near the disk center, and we integrate the HMI measured
longitudinal photospheric magnetic flux density $B$ (in units of Gauss, or Mx
cm-2) in flaring pixels, without correcting the projection effect and without
extrapolating $B$ to the upper-chromosphere or transition region, since these
two effects partly cancel each other. Finally, the measurement assumes that
each patch of magnetic flux anchored at a UV-brightened pixel participates in
magnetic reconnection only once to form a flare loop containing this flux. The
uncertainty is estimated from $\Phi_{rec}$ measured in the positive and
negative magnetic fields, which, on average, is about 20% of $\Phi_{rec}$
(also see Qiu & Yurchyshyn, 2005; Qiu et al., 2007).
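The $\Phi_{rec}$ measurement described in this paragraph (thresholding the UV brightness at 4 times the quiescent level for at least 4 minutes, then summing the longitudinal flux in the flaring pixels of each polarity) can be sketched as below; for simplicity, the sketch counts the total time above threshold rather than requiring it to be consecutive, and all names and default values are illustrative:

```python
import numpy as np

def reconnection_flux(uv_stack, B_los, quiet, pixel_area, dt=24.0):
    """Sum |B| over pixels whose UV 1600 A brightness exceeds 4x the
    quiescent level for at least 4 minutes.

    uv_stack   : (n_time, ny, nx) UV light curves (DN/s/pixel)
    B_los      : (ny, nx) longitudinal field (G)
    quiet      : (ny, nx) quiescent UV brightness
    pixel_area : pixel area (cm^2)
    dt         : cadence of uv_stack (s)
    """
    bright = uv_stack > 4.0 * quiet[None, :, :]   # 4x-quiescent threshold
    duration = bright.sum(axis=0) * dt            # seconds above threshold
    flaring = duration >= 4 * 60                  # at least 4 minutes
    # flux in each polarity; their difference gauges the uncertainty
    phi_pos = (B_los[flaring & (B_los > 0)]).sum() * pixel_area
    phi_neg = (-B_los[flaring & (B_los < 0)]).sum() * pixel_area
    return 0.5 * (phi_pos + phi_neg), abs(phi_pos - phi_neg)
```

Each flaring pixel contributes its flux once, matching the assumption that a patch of flux reconnects only once to form a single flare loop.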
Figure 1 shows the reconnection rate $\dot{\Phi}_{rec}$, the time derivative
of the time-dependent reconnection flux, which varies in the range of
$10^{17-19}$ Mx s-1 from flare to flare. The figure shows that
$\dot{\Phi}_{rec}$ is more impulsive and usually precedes the total heating
rate $\dot{\mathcal{E}}_{tot}$. In most flares, $\dot{\Phi}_{rec}$ does not
diminish to zero after the peak of the SXR emission, indicating that
reconnection and formation of new flare loops continue into the decay phase,
although at a much smaller rate, with the flux reconnected in this phase
making up only a small fraction of the total reconnection flux. On the other
hand, the analysis of energetics in Section 5.1 suggests that the total
heating energy in the decay phase of the flare is non-negligible, amounting to
27% on average.
These observations imply that the heating energy $\mathcal{E}$ in individual
loop events is not a simple linear function of the magnetic flux in the loop,
and loop events in the early phase of the flare have less energy per unit flux
compared with loop events in the later phase of the flare. A regression
analysis yields a very weak dependence of the heating energy $\mathcal{E}$ on
either the magnetic flux or the length of the loop events. On the other hand,
the integrated total energy of the flare exhibits a much stronger dependence
on the reconnection flux, $\mathcal{E}_{tot}\sim\Phi_{rec}^{1.1\pm
0.2}L^{0.6\pm 0.1}$, as shown in Figure 7c, where $L$ is the median length of
the loop events in units of Mm. (This scaling law is derived for the 13 flares
all observed by AIA and HMI, excluding events #1, 14, and 15. Scaling laws
involving magnetic field measurements change significantly when the first
event is included: with it, the energy-flux relation becomes
$\mathcal{E}_{tot}\sim\Phi_{rec}^{0.8\pm 0.1}L^{0.6\pm 0.2}$. In addition, the
scaling of the flare magnitude and reconnection flux is found to be
$\mathcal{F}_{sxr}\sim\Phi_{rec}^{1.6\pm 0.2}$ for the 13 flares, similar to
that in Kazachenko et al. (2017), who analyzed more than 3000 flares observed
by AIA and HMI and found $\mathcal{F}_{sxr}\sim\Phi_{rec}^{1.5}$; with the
first event included, the magnitude-flux scaling in this study becomes
$\mathcal{F}_{sxr}\sim\Phi_{rec}^{1.1\pm 0.2}$. The first event was observed
by TRACE and MDI, so the discrepancy might be due to different calibrations of
the two generations of instruments.) The energy dependence on $\Phi_{rec}$ is
very close to that found by Reep & Knizhnik (2019) and Zhu et al. (2018). Reep &
Knizhnik (2019) analyzed a few thousand flares, and the energy in the scaling
law refers to the flare thermal energy at the time of peak temperature,
deduced from GOES SXR observations. Zhu et al. (2018) analyzed only one
event, SOL2011-12-16 (#3), grouping the flux-energy patches into a few tens of
magnetic cells to construct the scaling law; they did not reveal
a dependence on the loop length, which does not vary significantly during this
flare. In a somewhat different context, Schrijver et al. (2004) found a
similar scaling law $\mathcal{F}_{H}\sim\langle B\rangle^{1.0\pm
0.3}L^{-1.0\pm 0.5}$ that relates the heating flux $\mathcal{F}_{H}$ (in units
of erg cm-2 s-1) of active regions to the mean magnetic field strength
$\langle B\rangle$ at the base (the chromosphere) and the scale size $L$ of
the active region loops. We may re-write their scaling law as
$\mathcal{E}_{tot}\sim\mathcal{F}_{H}A\tau\sim\Phi^{1.0}L^{1.0}$, considering
that the magnetic flux is given by $\Phi=\langle B\rangle A$, $A$ being the
total cross-sectional area of active region loops, and, in equilibrium, the
heating timescale is roughly the same as the thermal conduction timescale
$\tau\sim L^{2}$. On global scales ranging from active regions to magnetic
cells in a given active region, and within uncertainties, these scaling laws
are very similar; in particular, the energy dependence on the magnetic field
is the same, indicating the similar nature of energy release in these systems
(Schrijver et al., 2004).
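The rewrite of the Schrijver et al. (2004) scaling law quoted above follows by direct substitution:

```latex
\mathcal{E}_{tot} \sim \mathcal{F}_{H}\,A\,\tau
  \sim \left(\langle B\rangle^{1.0} L^{-1.0}\right) A\,L^{2}
  = \left(\langle B\rangle A\right)\,L^{1.0}
  = \Phi^{1.0} L^{1.0},
```

using $\Phi=\langle B\rangle A$ for the magnetic flux and $\tau\sim L^{2}$ for the conductive heating timescale.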
## 6 CONCLUSIONS AND DISCUSSIONS
### 6.1 Summary
In this study, we estimate the total energy that is used to heat flare plasmas
in the corona using two simplified models, an empirical model of the Neupert
effect and a zero-dimensional hydrodynamic model (the UFC method). The purpose
of the study is to derive a first-order estimate of flare energies in a
multitude of flare loops. Although these models are incapable of precisely
describing thermodynamic properties during the initial impulsive heating phase
of a flare loop when non-equilibrium physics governs the loop evolution, they
are suitable for the thought experiment, as conducted in this paper, on the
longstanding perception that energy release and flare heating take place in
numerous patches over an extended time period. The experiment takes advantage
of spatially resolved UV emission from the foot-points of flare loops at the
transition region or upper chromosphere, assuming that each UV-brightened
pixel represents a single patch of energy release, denoted as a loop event or
heating event in this study. The experiment extends the traditional concept of
the Neupert effect to spatially resolved UV light curves.
We have conducted the experiment on 16 flares ranging from C to M class. The
study confirms that a multitude of impulsive heating events alone cannot
reproduce the observed flare SXR light curve, but the two-phase heating model
produces the synthetic SXR emission in better agreement with observations.
This is consistent with the recent finding by Kerr et al. (2020), who have
conducted one-dimensional loop simulations with impulsive heating at fine
scales, and found that the modeled thermodynamic properties decay
faster than observed by IRIS. Furthermore, by comparing the empirical model of
the UV Neupert effect with the UFC method (the former produces the decay-phase
SXR emission in better agreement with observations than the latter), we have
improved the estimate of the flare heating energy
particularly in the decay phase of the flare; on average, the amount of the
heating energy in the decay phase of the flare (i.e., after the peak of the
total SXR emission in 1 – 8 Å) makes 27% of the total heating energy during
the flare.
The estimated energies used to heat the flare corona are comparable with the
bolometric radiation energy measured in flares of similar magnitudes (Warmuth
& Mann, 2016). Therefore, the UV emission signatures at the foot-points of
flare loops well capture heating events during the flare regardless of heating
mechanisms. The flare heating energy $\mathcal{E}_{tot}$ is also shown to
scale with the total reconnection flux $\Phi_{rec}$ and the median length of
the flare half-loops $L$ by $\mathcal{E}_{tot}\sim\Phi_{rec}^{1.1\pm
0.2}L^{0.6\pm 0.1}$; the dependence of the heating energy on the magnetic
field is similar to scaling laws found in some studies, though with various
contexts (Schrijver et al., 2004; Zhu et al., 2018; Reep & Knizhnik, 2019),
but different from some other studies such as by Aschwanden (2020a, c, and
references therein). On the other hand, we do not find a strong dependence of
the heating energy on the magnetic field (flux) and/or the loop length for
individual loop events down to the pixel scale ($\sim$ 0.6″).
### 6.2 Discussions
Numerous prior studies have examined scaling laws that relate the flare
magnitude, namely the peak GOES SXR flux in 1 – 8 Å, to flare energies of
various kinds. Some of these studies also take into account the lengthscale of
flare loops. Based on the RTV scaling law, Warren & Antiochos (2004) found the
flux-energy relation to be super-linear
$\mathcal{F}_{sxr}\sim\mathcal{E}_{tot}^{1.75}L^{-1}$ (here
$\mathcal{F}_{sxr}$ refers to the peak SXR flux in units of W m-2), which was
confirmed with a one-dimensional hydrodynamic model of loop heating by a beam
of non-thermal electrons. One-dimensional loop simulations by Reep et al.
(2013) yielded a similar scaling law
$\mathcal{F}_{sxr}\sim\mathcal{E}_{tot}^{1.7}$. However, analyzing a few
thousand flares using the database by Kazachenko et al. (2017), Reep &
Knizhnik (2019) found sub-linear scaling laws
$\mathcal{F}_{sxr}\sim\mathcal{E}_{th}^{0.85}$, and
$\mathcal{F}_{sxr}\sim\mathcal{E}_{tot}^{0.85}$, the former referring to the
thermal energy of the flare (at the time of peak temperature) derived from the
GOES SXR analysis, and the latter referring to the flare heating energy
deduced from the traditional Neupert effect, i.e., $\mathcal{E}_{tot}$ being
the non-thermal electron energy. Similarly, Aschwanden (2020b) found
$\mathcal{F}_{sxr}\sim\mathcal{E}_{diss}^{0.7}$ where $\mathcal{E}_{diss}$
refers to energy dissipated in flares. Finally, the scaling laws by Warmuth &
Mann (2016) would suggest $\mathcal{F}_{sxr}\sim\mathcal{E}_{bol}^{1.3}$,
$\mathcal{F}_{sxr}\sim\mathcal{E}_{nth}^{0.9}$, and
$\mathcal{F}_{sxr}\sim\mathcal{E}_{th}^{1.1}$.
From this study, we find a super-linear flux-energy relation,
$\mathcal{F}_{sxr}\sim\mathcal{E}_{tot}^{1.4\pm 0.2}L^{-1.1\pm 0.2}$ for 14
flares (excluding #14 and #15 that are not well modeled); again, the flux-
energy dependence is closest to the WM16 scaling law of the bolometric energy.
The difference from the other scaling laws by, e.g., Warren & Antiochos
(2004); Reep et al. (2013); Reep & Knizhnik (2019); Aschwanden (2020b) may be
due to the fact that flare heating takes place over an extended time period
beyond the impulsive phase, and is not provided only by non-thermal electrons.
The modified empirical model of the UV Neupert effect is able to produce SXR
light curves in very good agreement with observations, which is used, in this
study, to return an improved estimate of flare energetics, particularly in the
decay phase. However, we do not fully understand the implication of the
convolution in the form of a gaussian (Equation 1), with the decay timescale
which becomes very large at times. Guided by this thought experiment, in the
future work, we will investigate the physical reason for the discrepancy
between the two models, and then conduct a full-scale modeling of flare
evolution with the improved UFC method employing multiple-wavelength
observations in a larger number of flares (Zhu et al., in preparation). This
study may also serve as a prior experiment for more comprehensive and physics
based models, which can unravel physics of heating mechanisms (Longcope &
Klimchuk, 2015; Reep et al., 2019; Kowalski et al., 2019; Graham et al., 2020;
Kerr et al., 2020), and also help address production of flare UV emissions in
the transition region and upper chromosphere (e.g., McClymont & Canfield,
1986; Milligan, 2015; Simões et al., 2019), used in this study as a proxy for
heating.
The author thanks the referee for constructive comments that help improve the
analysis and the clarity of the manuscript. The author thanks Lilly Bralts-
Kelly and Jianxia Cheng for helping prepare the AIA data. This work has been
supported by the NASA grants NNX14AC06G and 80NSSC19K0269. The work also
benefits from the ISSI/ISSI-BJ team collaboration “Diagnosing Heating
Mechanisms in Solar Flares”. SDO is a mission of NASA’s Living With a Star
Program.
## References
* Alexander & Coyner (2006) Alexander, D., & Coyner, A. J. 2006, ApJ, 640, 505
* Antonucci et al. (1982) Antonucci, E., Gabriel, A. H., Acton, L. W., et al. 1982, Sol. Phys., 78, 107
* Aschwanden (2020a) Aschwanden, M. J. 2020a, ApJ, 895, 134
* Aschwanden (2020b) —. 2020b, ApJ, 897, 16
* Aschwanden (2020c) —. 2020c, arXiv e-prints, arXiv:2007.04419
* Aschwanden & Alexander (2001) Aschwanden, M. J., & Alexander, D. 2001, Sol. Phys., 204, 91
* Aschwanden et al. (2017) Aschwanden, M. J., Caspi, A., Cohen, C. M. S., et al. 2017, ApJ, 836, 17
* Cargill et al. (2012) Cargill, P. J., Vlahos, L., Baumann, G., Drake, J. F., & Nordlund, Å. 2012, Space Sci. Rev., 173, 223
* Cheng (1990) Cheng, C.-C. 1990, ApJ, 349, 362
* Cheng et al. (1984) Cheng, C. C., Tandberg-Hanssen, E., & Orwig, L. E. 1984, ApJ, 278, 853
* Cheng et al. (1988) Cheng, C.-C., Vanderveen, K., Orwig, L. E., & Tand berg-Hanssen, E. 1988, ApJ, 330, 480
* Cheng et al. (2012) Cheng, J. X., Kerr, G., & Qiu, J. 2012, ApJ, 744, 48
* Coyner & Alexander (2009) Coyner, A. J., & Alexander, D. 2009, ApJ, 705, 554
* Culhane et al. (1991) Culhane, J. L., Hiei, E., Doschek, G. A., et al. 1991, Sol. Phys., 136, 89
* Dennis & Zarro (1993) Dennis, B. R., & Zarro, D. M. 1993, Sol. Phys., 146, 177
* Dere & Cook (1979) Dere, K. P., & Cook, J. W. 1979, ApJ, 229, 772
* Effenberger et al. (2017) Effenberger, F., Rubio da Costa, F., Oka, M., et al. 2017, ApJ, 835, 124
* Emslie et al. (1992) Emslie, A. G., Li, P., & Mariska, J. T. 1992, ApJ, 399, 714
* Emslie et al. (2012) Emslie, A. G., Dennis, B. R., Shih, A. Y., et al. 2012, ApJ, 759, 71
* Fisher et al. (1985) Fisher, G. H., Canfield, R. C., & McClymont, A. N. 1985, ApJ, 289, 425
* Fisher & Hawley (1990) Fisher, G. H., & Hawley, S. L. 1990, ApJ, 357, 243
* Fletcher & Hudson (2001) Fletcher, L., & Hudson, H. 2001, Sol. Phys., 204, 69
* Fletcher & Hudson (2008) Fletcher, L., & Hudson, H. S. 2008, ApJ, 675, 1645
* Fletcher et al. (2011) Fletcher, L., Dennis, B. R., Hudson, H. S., et al. 2011, Space Sci. Rev., 159, 19
* Gan et al. (1991) Gan, W. Q., Zhang, H. Q., & Fang, C. 1991, A&A, 241, 618
* Glesener et al. (2020) Glesener, L., Krucker, S., Duncan, J., et al. 2020, ApJ, 891, L34
* Golub et al. (2007) Golub, L., Deluca, E., Austin, G., et al. 2007, Sol. Phys., 243, 63
* Graham & Cauzzi (2015) Graham, D. R., & Cauzzi, G. 2015, ApJ, 807, L22
* Graham et al. (2020) Graham, D. R., Cauzzi, G., Zangrilli, L., et al. 2020, ApJ, 895, 6
* Grefenstette et al. (2016) Grefenstette, B. W., Glesener, L., Krucker, S., et al. 2016, ApJ, 826, 20
* Jiang et al. (2006) Jiang, Y. W., Liu, S., Liu, W., & Petrosian, V. 2006, ApJ, 638, 1140
* Kane & Donnelly (1971) Kane, S. R., & Donnelly, R. F. 1971, ApJ, 164, 151
* Kazachenko et al. (2017) Kazachenko, M. D., Lynch, B. J., Welsch, B. T., & Sun, X. 2017, ApJ, 845, 49
* Kerr et al. (2020) Kerr, G. S., Allred, J. C., & Polito, V. 2020, arXiv e-prints, arXiv:2007.13856
* Kerr et al. (2016) Kerr, G. S., Fletcher, L., Russell, A. e. J. B., & Allred, J. C. 2016, ApJ, 827, 101
* Klimchuk et al. (2008) Klimchuk, J. A., Patsourakos, S., & Cargill, P. J. 2008, ApJ, 682, 1351
* Kosugi et al. (1991) Kosugi, T., Makishima, K., Murakami, T., et al. 1991, Sol. Phys., 136, 17
* Kowalski et al. (2019) Kowalski, A. F., Butler, E., Daw, A. N., et al. 2019, ApJ, 878, 135
* Kretzschmar (2011) Kretzschmar, M. 2011, A&A, 530, A84
* Lee et al. (1995) Lee, T. T., Petrosian, V., & McTiernan, J. M. 1995, ApJ, 448, 915
* Lemen et al. (2012) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17
* Li et al. (1993) Li, P., Emslie, A. G., & Mariska, J. T. 1993, ApJ, 417, 313
* Lin et al. (2002) Lin, R. P., Dennis, B. R., Hurford, G. J., et al. 2002, Sol. Phys., 210, 3
* Liu et al. (2013) Liu, W.-J., Qiu, J., Longcope, D. W., & Caspi, A. 2013, ApJ, 770, 111
* Longcope (2014) Longcope, D. W. 2014, ApJ, 795, 10
* Longcope & Klimchuk (2015) Longcope, D. W., & Klimchuk, J. A. 2015, ApJ, 813, 131
* Mariska et al. (1989) Mariska, J. T., Emslie, A. G., & Li, P. 1989, ApJ, 341, 1067
* McAteer & Bloomfield (2013) McAteer, R. T. J., & Bloomfield, D. S. 2013, ApJ, 776, 66
* McClymont & Canfield (1986) McClymont, A. N., & Canfield, R. C. 1986, ApJ, 305, 936
* McTiernan et al. (1999) McTiernan, J. M., Fisher, G. H., & Li, P. 1999, ApJ, 514, 472
* Milligan (2015) Milligan, R. O. 2015, Sol. Phys., 290, 3399
* Nagai & Emslie (1984) Nagai, F., & Emslie, A. G. 1984, ApJ, 279, 896
* Neupert (1968) Neupert, W. M. 1968, ApJ, 153, L59
* Orwig et al. (1980) Orwig, L. E., Frost, K. J., & Dennis, B. R. 1980, Sol. Phys., 65, 25
* Pesnell et al. (2012) Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, Sol. Phys., 275, 3
* Qiu et al. (2007) Qiu, J., Hu, Q., Howard, T. A., & Yurchyshyn, V. B. 2007, ApJ, 659, 758
* Qiu et al. (2004a) Qiu, J., Liu, C., Gary, D. E., Nita, G. M., & Wang, H. 2004a, ApJ, 612, 530
* Qiu et al. (2010) Qiu, J., Liu, W., Hill, N., & Kazachenko, M. 2010, ApJ, 725, 319
* Qiu et al. (2012) Qiu, J., Liu, W.-J., & Longcope, D. W. 2012, ApJ, 752, 124
* Qiu & Longcope (2016) Qiu, J., & Longcope, D. W. 2016, ApJ, 820, 14
* Qiu et al. (2013) Qiu, J., Sturrock, Z., Longcope, D. W., Klimchuk, J. A., & Liu, W.-J. 2013, ApJ, 774, 14
* Qiu et al. (2004b) Qiu, J., Wang, H., Cheng, C. Z., & Gary, D. E. 2004b, ApJ, 604, 900
* Qiu & Yurchyshyn (2005) Qiu, J., & Yurchyshyn, V. B. 2005, ApJ, 634, L121
* Reep (2014) Reep, J. 2014, PhD thesis, Rice University
* Reep et al. (2019) Reep, J. W., Bradshaw, S. J., Crump, N. A., & Warren, H. P. 2019, ApJ, 871, 18
* Reep et al. (2013) Reep, J. W., Bradshaw, S. J., & McAteer, R. T. J. 2013, ApJ, 778, 76
* Reep & Knizhnik (2019) Reep, J. W., & Knizhnik, K. J. 2019, ApJ, 874, 157
* Rubio da Costa et al. (2016) Rubio da Costa, F., Kleint, L., Petrosian, V., Liu, W., & Allred, J. C. 2016, ApJ, 827, 38
* Ryan et al. (2013) Ryan, D. F., Chamberlin, P. C., Milligan, R. O., & Gallagher, P. T. 2013, ApJ, 778, 68
* Saba et al. (2006) Saba, J. L. R., Gaeng, T., & Tarbell, T. D. 2006, ApJ, 641, 1197
* Schou et al. (2012) Schou, J., Scherrer, P. H., Bush, R. I., et al. 2012, Sol. Phys., 275, 229
* Schrijver et al. (2004) Schrijver, C. J., Sandman, A. W., Aschwanden, M. J., & De Rosa, M. L. 2004, ApJ, 615, 512
* Schwartz et al. (1992) Schwartz, R. A., Dennis, B. R., Fishman, G. J., et al. 1992, in NASA Conference Publication, Vol. 3137, NASA Conference Publication, 457–468
* Simões et al. (2019) Simões, P. J. A., Reid, H. A. S., Milligan, R. O., & Fletcher, L. 2019, ApJ, 870, 114
* Somov et al. (1981) Somov, B. V., Syrovatskii, S. I., & Spektor, A. R. 1981, Sol. Phys., 73, 145
* Tsuneta et al. (1991) Tsuneta, S., Acton, L., Bruner, M., et al. 1991, Sol. Phys., 136, 37
* Veronig et al. (2002) Veronig, A., Vršnak, B., Dennis, B. R., et al. 2002, A&A, 392, 699
* Veronig et al. (2005) Veronig, A. M., Brown, J. C., Dennis, B. R., et al. 2005, ApJ, 621, 482
* Warmuth & Mann (2016) Warmuth, A., & Mann, G. 2016, A&A, 588, A116
* Warren & Antiochos (2004) Warren, H. P., & Antiochos, S. K. 2004, ApJ, 611, L49
* Warren & Warshall (2001) Warren, H. P., & Warshall, A. D. 2001, ApJ, 560, L87
* Winebarger & Warren (2004) Winebarger, A. R., & Warren, H. P. 2004, ApJ, 610, L129
* Withbroe (1978) Withbroe, G. L. 1978, ApJ, 225, 641
* Woods et al. (2012) Woods, T. N., Eparvier, F. G., Hock, R., et al. 2012, Sol. Phys., 275, 115
* Zeng et al. (2014) Zeng, Z., Qiu, J., Cao, W., & Judge, P. G. 2014, ApJ, 793, 87
* Zhu et al. (2018) Zhu, C., Qiu, J., & Longcope, D. W. 2018, ApJ, 856, 27
Table 1: Properties of Flares and Model Parameters. The last three columns give the maximum cross-correlation coefficient and time lag (sec).

# | start time, magnitude^a | position | $\tau_{d}$ (min)^b | $L$ (Mm)^c | $\Phi_{rec}$ ($10^{20}$ Mx)^d | $\mathcal{E}_{tot}$ ($10^{30}$ erg)^e | 12-25 keV^f | 25-50 keV^f | UV 1600 Å^f
---|---|---|---|---|---|---|---|---|---
1 | 2005-05-13 16:33 M8.0 | NOAA10759 N12E05 | 62 | 43 (23) | 76.2 (5.5) | 34.6 (7.8) | 0.41 (0) | 0.88 (20) | 0.79 (0)
2 | 2011-04-22 04:26 M1.8 | NOAA11195 S17E29 | 64 | 29 (6) | 15.2 (6.3) | 11.3 (4.8) | 0.71 (-33) | 0.64 (8) | 0.72 (0)
3 | 2011-12-26 11:16 C5.1 | NOAA11384 N13W14 | 160 | 35 (5) | 5.8 (0.1) | 7.5 (1.5) | - | - | 0.65 (-100)
4 | 2013-08-12 10:25 M1.5 | NOAA11817 S22E10 | 35 | 9 (0) | 8.7 (3.5) | 3.6 (0.9) | 0.91 (-20) | 0.91 (14) | 0.89 (-40)
5 | 2013-08-30 01:58 C8.0 | NOAA11836 N12E28 | 150 | 76 (53) | 8.9 (0.9) | 11.9 (0.6) | 0.76 (0) | 0.53 (208) | 0.74 (-120)
6 | 2014-02-05 18:33 C7.1 | NOAA11967 S12W36 | 34 | 14 (3) | 5.2 (0.6) | 2.3 (0.8) | 0.93 (0) | 0.44 (137) | 0.81 (0)
7 | 2014-04-18 12:38 M7.2 | NOAA12036 S15W42 | 55 | 31 (13) | 20.6 (3.0) | 26.8 (5.9) | - | - | 0.70 (-100)
8 | 2014-05-10 06:52 C7.7 | NOAA12056 N04E17 | 24 | 18 (6) | 8.5 (0.4) | 3.6 (0.9) | 0.92 (0) | 0.88 (0) | 0.82 (-60)
9 | 2014-06-15 23:30 C9.0 | NOAA12087 S18W11 | 66 | 15 (6) | 6.5 (0.9) | 5.4 (0.5) | 0.90 (-41) | 0.85 (127) | 0.93 (-20)
10 | 2014-09-28 02:41 M5.0 | NOAA12173 S21W24 | 52 | 28 (3) | 15.9 (0.6) | 17.6 (5.0) | 0.72 (-39) | 0.67 (98) | 0.72 (0)
11 | 2014-11-09 15:26 M2.3 | NOAA12205 N15E05 | 16 | 7 (35) | 9.3 (1.5) | 3.9 (1.0) | - | - | 0.77 (-20)
12 | 2014-12-01 06:28 M1.8 | NOAA12222 S20E04 | 32 | 25 (6) | 9.5 (1.1) | 5.4 (1.4) | 0.85 (-8) | 0.77 (49) | 0.91 (0)
13 | 2014-12-04 18:02 M6.2 | NOAA12222 S20W35 | 45 | 28 (15) | 26.4 (4.0) | 25.4 (7.2) | 0.79 (-155) | 0.85 (0) | 0.89 (-20)
14 | 2014-12-17 14:42 C9.3 | NOAA12242 S19W02 | 25 | 7 (9) | 4.7 (0.1) | 1.1 (0.1) | 0.95 (-24) | 0.98 (10) | 0.96 (-20)
15 | 2014-12-17 18:56 M1.4 | NOAA12241 S10E17 | 14 | 9 (10) | 8.1 (4.1) | 3.0 (0.8) | 0.37 (-94) | 0.89 (0) | 0.73 (-20)
16 | 2014-12-19 09:33 M1.2 | NOAA12237 S13W40 | 30 | 16 (5) | 6.9 (2.7) | 5.0 (1.8) | 0.69 (0) | 0.35 (12) | 0.71 (0)
^a Flare magnitude is based on the “scaled” GOES SXR flux in 1 – 8 Å, not the “true” flux released in October 2020. Determination of the start time $t_{s}$ is described in the text (Section 2).

^b The duration of the flare, $\tau_{d}=t_{e}-t_{s}$, where $t_{s}$ and $t_{e}$ are the start and end times defined in the text (Section 2).

^c The median length of flare half-loops. Also given in parentheses is the standard deviation of the length of the loop events, which grows as the flare evolves (see text in Section 4).

^d The total reconnection flux measured from flare ribbon pixels with brightness at least 4 times the quiescent background for at least 4 minutes; given in parentheses is the difference between the magnetic flux measured in positive and negative magnetic fields, respectively (Section 5.2).

^e Total heating energy of the flare corona, which is the mean of $\sum\mathcal{E}_{ufc}$ and $\sum\mathcal{E}_{emp}$; the difference between $\sum\mathcal{E}_{ufc}$ and $\sum\mathcal{E}_{emp}$ is given in parentheses (see Section 5.1).

^f The maximum coefficient of the time-lagged cross-correlation between two light curves, one being the time derivative of the GOES SXR 1 – 8 Å light curve and the other being the HXR count-rate light curve in 12 – 25 keV or 25 – 50 keV from RHESSI, or the total UV 1600 Å count flux from AIA. A positive time lag indicates that the time derivative of the SXR light curve lags the other light curve. The correlation with HXR light curves is not available for events #3, 7, and 11, due to lack of RHESSI observations from the start of the flare.
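The time-lagged cross-correlation described in footnote f can be sketched as follows. This is a minimal numpy illustration of the procedure, not the authors' analysis pipeline; the toy light curves and the lag range are invented for the example.

```python
import numpy as np

def lagged_xcorr(a, b, max_lag):
    """Pearson correlation of a vs. b shifted by each lag in [-max_lag, max_lag].

    Returns (best_coefficient, best_lag); a positive lag means `a` lags `b`.
    """
    best_r, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag

# Two toy "light curves": b (a Gaussian burst) leads a by 5 samples.
t = np.arange(200)
b = np.exp(-((t - 80) / 20.0) ** 2)
a = np.roll(b, 5)
r, lag = lagged_xcorr(a, b, max_lag=20)
```

The maximum coefficient is found at the lag that best aligns the two curves, which is how the coefficient/lag pairs in the last three table columns would be obtained for each event.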
Figure 1: Light curves of the flares analyzed and modeled in this paper. These include the GOES SXR “true” flux in 1 – 8 Å in units of W m$^{-2}$ and its time derivative (black), the total UV count-rate light curve (pink), integrated over the flare region, in the 1600 Å passband from AIA/SDO, and the HXR count-rate light curve (green) at the photon energy 12 - 25 keV observed by RHESSI. Also plotted is the time profile of the reconnection rate in units of $10^{18}$ Mx s$^{-1}$ (blue), with the peak reconnection rate marked in each panel. Except for the SXR light curve, all other light curves are arbitrarily scaled. For clarity of the display, the uncertainties in the reconnection rates are not plotted, but they are described in the text (Section 5.2).

Figure 2: Left: evolution of flare ribbon brightening in the UV 1600 Å passband superimposed on a line-of-sight magnetogram, obtained by HMI, for the flare #7 SOL2014-04-18 (a), and the flare #4 SOL2013-08-12 (c). For display, the magnetogram is saturated at $\pm$300 G. The color code indicates the time of the start of the flare brightening, defined as when the brightness is 4 times the brightness of the pre-flare quiescent background. Right: UV 1600 Å light curves in a few brightened pixels, showing that flare energy release takes place in different places (loops) at different times and proceeds into the decay phase of the flare SXR emission.

Figure 3: Comparison of the observed SXR “true” flux light curve in 1 – 8 Å (thick black) with the SXR light curve generated by the empirical model of the Neupert effect (thick pink). Thin curves show the time derivative of the observed SXR light curve (black) and the observed total UV light curve in AIA 1600 Å (pink), both arbitrarily scaled. The green curve shows the time-dependent decay timescale $\tau_{i}$ in minutes (see text). Also marked are the variance (normalized to the observed peak SXR emission) and the coefficient of the cross-correlation between the model and observed SXR light curves.

Figure 4: Comparison of the GOES observed SXR light curves in 1 – 8 Å (thick black), 0.5 – 4 Å (thin black), and the AIA observed EUV flux in the 211 Å passband (solid green), with the synthetic SXR (thick and thin solid pink) and EUV (dashed green) light curves by the UFC method that includes gradual heating. For comparison, the SXR 1 – 8 Å light curve by the UFC method using only impulsive heating is shown in thick dashed pink. Also plotted in each panel is the total heating rate (blue) derived from the UFC method. The AIA 211 Å light curves are arbitrarily scaled. For clarity of the display, uncertainties in the synthetic SXR and EUV light curves and in the heating rates are not plotted, but they are described in the text (Section 4).

Figure 5: Left: synthetic SXR light curves in 1 – 8 Å with the empirical model (solid) and UFC method (dashed), respectively, in 10 randomly sampled loop events for the flare SOL2014-04-18. Right: same as the left, but for the flare SOL2013-08-12. Marked in each panel is the peak flux of the SXR light curve by the UFC method.

Figure 6: Scatter plot of the time-integrated SXR flux in 1 – 8 Å, $\mathcal{G}$, generated by the UFC method (a) or the empirical Neupert model (b), against the total heating energy $\mathcal{E}_{ufc}$ in individual loops. Each color shows a few thousand loop events for a given flare, and the solid line of the same color illustrates the $\mathcal{E}_{ufc}-\mathcal{G}_{ufc}$ fit to a power law for the same flare. The black solid line shows the $\mathcal{E}_{ufc}-\mathcal{G}_{ufc}$ fit to a power law for all loop events in all 16 flares. Note that the solid color lines in (b) are the same as in (a), for comparison of the synthetic SXR emissions generated by the two models.

Figure 7: (a) Histograms of the heating energies in the loop events for each of the 16 flares analyzed in the paper. Here the heating energy in each loop event is the average of $\mathcal{E}_{ufc}$ and $\mathcal{E}_{emp}$. (b) Scatter plot of the total heating energy against the magnitude of the flare (based on the “scaled” flux). Vertical bars indicate the range of the total heating energy, with $\sum\mathcal{E}_{ufc}$ being the lower limit and $\sum\mathcal{E}_{emp}$ being the upper limit. The solid guide line shows the power-law scaling of the observed bolometric radiation energy to the flare magnitude given by Warmuth & Mann (2016). (c) The total heating energy against the reconnection flux $\Phi_{rec}$ (black; see text) and median length $L$ of the flare loop events (blue). Vertical bars indicate the ranges of the flare heating energy as in (b); horizontal bars indicate the uncertainties of the $\Phi_{rec}$ measurements (black) or the standard deviations of the estimated lengths (blue) of the loop events that are subsequently formed during the flare evolution from rise to decay.
# NeurIPS 2020 Competition: The MineRL Competition on Sample Efficient
Reinforcement Learning using Human Priors
William H. Guss (lead organizer; Carnegie Mellon University and OpenAI;<EMAIL_ADDRESS>Mario Ynocente Castro (Preferred Networks), Sam Devlin (Microsoft Research), Brandon Houghton, Noboru Sean Kuno, Crissman Loomis, Stephanie Milani, Sharada Mohanty (AIcrowd SA), Keisuke Nakata, Ruslan Salakhutdinov, John Schulman, Shinya Shiroshita, Nicholay Topin, Avinash Ummadisingu, Oriol Vinyals (DeepMind)

Organizer names are ordered alphabetically, with the exception of the lead organizer. Competitions are extremely complicated endeavors, involving a huge amount of organizational overhead from the development of complicated software packages to event logistics and evaluation; it is impossible to estimate the total contributions of all involved at the onset.
## Competition Overview
Although deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of environment samples. Because state-of-the-art reinforcement learning (RL) systems demand so many samples, their development is restricted to a continually shrinking segment of the AI community. Likewise, many of these
systems cannot be applied to real-world problems, where environment samples
are expensive. Resolution of these limitations requires new, sample-efficient
methods. To facilitate research in this direction, we propose the _MineRL 2020
Competition on Sample Efficient Reinforcement Learning using Human Priors_
(https://www.aicrowd.com/challenges/neurips-2020-minerl-competition).
The primary goal of the competition is to foster the development of algorithms
which can efficiently leverage human demonstrations to drastically reduce the
number of samples needed to solve complex, hierarchical, and sparse
environments. To that end, participants will compete under a limited
environment sample-complexity budget to develop systems which solve the MineRL
ObtainDiamond task, a sequential decision making environment requiring long-
term planning, hierarchical control, and efficient exploration methods.
Participants will be provided the _MineRL-v0_ dataset [13], a large-scale
collection of over 60 million state-action pairs of human demonstrations that
can be resimulated into embodied agent trajectories with arbitrary
modifications to game state and visuals.
The competition is structured into two rounds in which competitors are
provided several paired versions of the dataset and environment with different
game textures and shaders. At the end of each round, competitors will submit
containerized versions of their learning algorithms to the AIcrowd platform
where they will then be trained from scratch on a hold-out dataset-environment
pair for a total of 4-days on a pre-specified hardware platform. Each
submission will then be automatically ranked according to the final
performance of the trained agent.
This challenge is a follow-up to our NeurIPS 2019 MineRL competition [12], which attracted over 1000 registered participants and yielded a total of 662 full submissions.
The competition benchmark, RL environment, and dataset framework were
downloaded over 52,000 times in 26+ countries [21]. In this iteration, we will
implement new features to expand the scale and reach of the competition. In
response to the feedback of the previous participants, we are introducing a
second minor track focusing on solutions without access to environment
interactions of any kind except during test-time. Both tracks will follow the
same two-round schedule. Last year’s top submissions developed novel methods
advancing inverse reinforcement learning, hierarchical imitation learning, and
more. In the forthcoming competition, we anticipate an even larger research
impact. With the addition of action-space randomization and desemantization of
observations and actions, we believe that the most successful competition
submissions will be highly task and domain agnostic.
### Keywords
Reinforcement Learning, Imitation Learning, Sample Efficiency, Games, MineRL,
Minecraft.
### Competition Type
Regular.
## 1 Competition Description
### 1.1 Background and Impact
Many of the recent, most celebrated successes of artificial intelligence (AI),
such as AlphaStar [43], AlphaGo [36], OpenAI Five [3], and their derivative
systems [37], utilize deep reinforcement learning to achieve human or super-
human level performance in sequential decision-making tasks. These
improvements to the state-of-the-art have thus far required exponentially
increasing computational power to achieve such performance [1]. In part, this
is due to an increase in the computation required per environment-sample;
however, the most significant change is the number of environment-samples
required for training. For example, DQN [22], A3C [23], and Rainbow DQN [14]
have been applied to ATARI 2600 games [2] and require from 44 to over 200
million frames (200 to over 900 hours) to achieve human-level performance. On
more complex domains: OpenAI Five utilizes 11,000+ years of Dota 2 gameplay
[26], AlphaGoZero uses 4.9 million games of self-play in Go [36], and
AlphaStar uses 200 years of StarCraft II gameplay [7]. Due to the growing
computational requirements, a shrinking portion of the AI community has the
resources to improve these systems and reproduce state-of-the-art results.
Additionally, the application of many reinforcement learning techniques to
real-world challenges, such as self-driving vehicles, is hindered by the raw
number of required samples. In these real-world domains, policy roll-outs can
be costly and simulators are not yet accurate enough to yield policies robust
to real-world conditions.
One well-known way to reduce the environment sample-complexity of the
aforementioned methods is to leverage human priors and demonstrations of the
desired behavior. Techniques utilizing trajectory examples, such as imitation
learning and Bayesian reinforcement learning, have been successfully applied
to older benchmarks and real-world problems where samples from the environment
are costly. In many simple games with singular tasks, such as the Atari 2600
[2], OpenAI Gym [5], and TORCS (https://github.com/ugo-nama-kun/gym_torcs) environments, imitation learning can drastically reduce the number of
environment samples needed through pretraining and hybrid RL techniques [6,
11, 15, 27]. Further, in some real-world tasks, such as robotic manipulation
[8, 9] and self-driving [4], in which it is expensive to gather a large number
of samples from the environment, imitation-based methods are often the only
means of generating solutions using few samples. Despite their success, these
techniques are still not sufficiently sample-efficient for application to many
real-world domains.
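As a toy illustration of the pretraining idea above, the following sketch fits a policy to expert demonstrations by behavioral cloning before any environment interaction would occur. The demonstration data, the logistic policy class, and the hyperparameters are invented for the example and are not part of the competition codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy demonstration set: 2-D states; the expert picks action 1 iff s[0] + s[1] > 0.
states = rng.normal(size=(500, 2))
actions = (states.sum(axis=1) > 0).astype(int)

# Behavioral cloning: fit a logistic policy pi(a=1|s) to the demonstrations
# with plain gradient descent on the log-loss.
w, b = np.zeros(2), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(states @ w + b)))
    grad_w = states.T @ (p - actions) / len(actions)
    grad_b = (p - actions).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The cloned policy matches the expert on held-out states, giving an RL
# learner a strong initialization before it draws a single environment sample.
test_states = rng.normal(size=(200, 2))
expert = (test_states.sum(axis=1) > 0).astype(int)
cloned = (1.0 / (1.0 + np.exp(-(test_states @ w + b))) > 0.5).astype(int)
accuracy = (cloned == expert).mean()
```

In hybrid schemes, the cloned weights would seed the RL policy, so the remaining environment budget is spent refining rather than discovering behavior.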
Figure 1: The top agent from the MineRL 2019 competition mining the first item
required to eventually obtain a diamond.
##### Impact.
To that end, the central aim of our proposed competition is the advancement
and development of novel, sample-efficient methods which leverage human priors
for sequential decision-making problems. Due to the competition’s design,
organizational team, and support, we are confident that the competition will
catalyze research towards the deployment of reinforcement learning in the real
world, democratized access to AI/ML, and reproducibility. By enforcing
constraints on the computation and sample budgets of the considered
techniques, we believe that the methods developed during the competition will
broaden participation in deep RL research by lowering the computational
barrier to entry.
While computational resources inherently have a cost barrier, large-scale,
open-access datasets can be widely used. To that end, we center our proposed
competition around techniques which leverage the MineRL dataset [13]. To
maximize the development of domain-agnostic techniques that enable the
application of deep reinforcement learning to sample-limited, real-world
domains, such as robotics, we carefully developed a novel data-pipeline and
hold-out environment evaluation scheme with AIcrowd to prevent the over-
engineering of submissions to the competition task.
Crucially, the competition will stimulate a broad set of new techniques in
reinforcement and imitation learning. In the previous NeurIPS 2019 iteration
of the competition, competitors developed several new algorithms and
approaches to tackle the challenge in spite of the difficult sample-complexity
limitations [12]. Ranging from hierarchical imitation methods to novel inverse
reinforcement learning techniques, the research impact of the competition was
broad in scope, yielding a diverse set of solutions [21]. With the addition of
new competition features and refined submission and evaluation pipelines (see
Section 1.2), we anticipate this year’s competition to garner further research
progress of relevance to the NeurIPS community.
Our competition will further attract a large number of participants from
within and outside of the NeurIPS community. Given the broad interest and
participation in the previous year (attracting over 1000 registered
participants with a total of 662 full submissions; see https://www.aicrowd.com/challenges/neurips-2019-minerl-competition), our extensive media coverage [18, 34, 38, 42], and improvements
to user experience, we expect the number of participants to grow to 1300 users and the number of successful submissions to increase to over 1000 agents.
effectuate this growth, we will deliver several improvements over prior years.
First, we plan to drastically simplify the submission process and provide
thorough multi-media documentation to increase the conversion-rate from
registration to submission. Further, we intend on providing more compelling
visualizations for the competitors’ submissions, generating external interest
from outside of the research community. Expanding on media coverage and
outreach channels from last year, we will utilize mailing lists and social
media announcements to retain the previous competitor pool and expand our
user-base to new demographics. Moreover, the expansion of our competition to
multiple tracks supporting pure imitation learning and hybridized imitation
and reinforcement learning submissions will broaden the appeal of our
competition as a vehicle for researching and developing new methods.
The proposed competition is ambitious, so we have taken meaningful steps to
ensure its smooth execution. Specifically, we are currently securing several
crucial partnerships with organizations and individuals. During the MineRL
2019 competition, our primary partner, Microsoft Research, provided
significant computational resources to enable direct, fair evaluation of the
participants’ training procedures. We developed a relationship with AIcrowd to
provide the submission orchestration platform for our competition, as well as
continued support throughout the competition to ensure that participants can
easily submit their algorithms. Additionally, we partnered with Preferred
Networks in the previous iteration of this competition to provide a set of
standard baseline implementations, which include many state of the art
reinforcement learning and imitation learning techniques. By leveraging our
previous partnerships and developing new ones, we expect to largely increase
the scale, success, and impact of the competition.
#### 1.1.1 Domain Interest
Figure 2: A subset of the Minecraft item hierarchy (totaling 371 unique
items). Each node is a unique Minecraft item, block, or non-player character,
and a directed edge between two nodes denotes that one is a prerequisite for
another. Each item presents its own unique set of challenges, so coverage of
the full hierarchy by one player takes several hundred hours.
Minecraft is a compelling domain for the development of reinforcement and
imitation learning methods because of the unique challenges it presents:
Minecraft is a 3D, first-person, open-world game centered around the gathering
of resources and creation of structures and items. Notably, the procedurally-
generated world is composed of discrete blocks that allow modification; over
the course of gameplay, players change their surroundings by gathering
resources (such as wood from trees) and constructing structures (such as
shelter and storage). Since Minecraft is an embodied domain and the agent’s
surroundings are varied and dynamic, it presents many of the same challenges
as real-world robotics domains. Therefore, solutions created for this
competition are a step toward applying these same methods to real-world
problems.
Furthermore, there is existing research interest in Minecraft. With the
development of Malmo [19], a simulator for Minecraft, the environment has
garnered great research interest: many researchers [25, 35, 39] have leveraged
Minecraft’s massive hierarchical structure and expressive power as a simulator to make
great strides in language-grounded, interpretable multi-task option-
extraction, hierarchical lifelong learning, and active perception. However,
much of the existing research utilizes toy tasks in Minecraft, often
restricted to 2D movement, discrete positions, or artificially confined maps
unrepresentative of the intrinsic complexity that human players typically
face. These restrictions reflect the difficulty of the domain, the challenge
of coping with fully-embodied human state- and action-spaces, and the
complexity exhibited in optimal human policies.
Our competition and the utilization of the large-scale MineRL-v0 dataset of
human demonstrations will serve to catalyze research on this domain in two
ways: (1) our preliminary results indicate that through imitation learning,
basic reinforcement learning approaches can finally deal directly with the
full, unrestricted state- and action-space of Minecraft; and (2) due to the
difficult and crucial research challenges exhibited on the primary competition
task, ObtainDiamond, we believe that the competition will bring work on the
Minecraft domain to the fore of sample-efficient reinforcement learning
research.
### 1.2 Novelty
This year’s MineRL Competition is a follow-up to the first MineRL competition
held at NeurIPS 2019. We continue to encourage the development of general
learning algorithms which must perform well within a _strict_ computation and
environment-sample budget. Based on community feedback and our retrospection,
we are making the following improvements to this year’s competition:
* To further encourage competitors to develop generalizable methods, we are
updating the rules on manually specified policies and pre-processing of the
action space. In particular, we are improving the clarity of the rules, and we
are no longer allowing submissions to manually specify action choices. In the
previous MineRL Competition, actions could be specified by the competitors as
long as the specification did not depend on any aspect of the state.
* To ensure that competitors do not exploit the semantic meanings attached to
the action or observation labels, we embed both the action and non-POV
observations individually into latent spaces using auto-encoders. Because competitors only receive these obfuscated vectors, it is difficult to manually specify meaningful actions or to hard-code behaviors based on observations. The networks trained to embed and recover actions and observations ensure that the original actions and observations are recoverable from the embedded space, and also that the entire embedded domain maps onto the original space. Additionally, this embedding is changed in subsequent rounds
to ensure generalizability. Previously, labels were modified during
evaluation, but they still carried semantic meaning and were not fully
obfuscated.
* To further encourage the use of methods that learn from demonstrations, we are
adding a second track to the competition. This track will follow the same
restrictions as the original track, but competitors will not be permitted to
use the environment during training. By adding this track, competitors
interested in learning from demonstrations can compete without being
disadvantaged compared to those who also use reinforcement learning.
Additionally, this track will help quantify the performance attainable using
only demonstrations.
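The recoverability property of the obfuscation scheme in the second bullet can be illustrated with a stand-in for the trained auto-encoder. Here a random invertible linear map plays the role of the encoder/decoder pair; this is an assumption for illustration only, since the competition uses trained neural auto-encoders, but the property shown is the same: actions round-trip exactly through the obfuscated space while the latent coordinates carry no semantics.

```python
import numpy as np

rng = np.random.default_rng(42)

ACTION_DIM = 8   # hypothetical size of the flattened action vector
LATENT_DIM = 8   # equal dimensions keep the toy map exactly invertible

# Stand-in "auto-encoder": a random invertible linear encoder and its
# exact inverse as the decoder.
encoder = rng.normal(size=(LATENT_DIM, ACTION_DIM))
decoder = np.linalg.inv(encoder)

def obfuscate(action):
    """Map a semantic action vector into the obfuscated latent space."""
    return encoder @ action

def recover(latent):
    """Map an obfuscated latent vector back to the original action space."""
    return decoder @ latent

action = rng.normal(size=ACTION_DIM)
latent = obfuscate(action)
round_trip = recover(latent)
max_err = np.abs(round_trip - action).max()
```

Because the map is swapped out between rounds, any behavior hard-coded against particular latent coordinates stops working, while learning algorithms that treat the vectors as opaque features are unaffected.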
Our competition focuses on the application of reinforcement learning and
imitation learning to a domain in Minecraft. As a result, it is related to
competitions which focus on these three aspects. We briefly identify related
competitions and describe the key differences between our proposed competition
and the other competitions.
##### Reinforcement Learning.
Prior to our competition series, reinforcement learning competitions have
focused on the development of policies or meta-policies that perform well on
complex domains or generalize across a distribution of tasks [20, 24, 29].
However, the winning submissions of these competitions are often the result of
massive amounts of computational resources or highly specific, hand-engineered
features. In contrast, our competition directly considers the efficiency of
the training procedures of learning algorithms.
We evaluate submissions solely on their ability to perform well within a
_strict_ computation and environment-sample budget. Moreover, we are uniquely
positioned to propose such a competition due to the nature of our human
demonstration dataset and environment: our dataset is constructed by directly
recording the game-state as human experts play, so we are able to later make
multiple renders of both the environment and data with varied lighting,
geometry, textures, and game-state dynamics, thus yielding development,
validation, and hold-out evaluation dataset/environment pairs. As a result,
competitors are naturally prohibited from hand-engineering or warm-starting
their learning algorithms and winning solely due to resource advantages.
##### Imitation Learning.
To our knowledge, no competitions have explicitly focused on the use of
imitation learning alongside reinforcement learning. This is in large part due
to a lack of large-scale, publicly available datasets of human or expert
demonstrations. Our competition is the first to explicitly involve and
encourage the use of imitation learning to solve the given task, and in that
capacity, we release the largest-ever dataset of human demonstrations on an
embodied domain. The large number of trajectories and rich demonstration-
performance annotations enable the application of many standard imitation
learning techniques and encourage further development of new ones that use
hierarchical labels, varying agent performance levels, and auxiliary state
information.
##### Minecraft.
A few competitions have previously used Minecraft due to its expressive power
as a domain. The first one was The Malmö Collaborative AI Challenge (https://www.microsoft.com/en-us/research/academic-program/collaborative-ai-challenge), in which agents worked in pairs to solve a
collaborative task in a decentralized manner. Later, C. Salge et al. [31]
organized the Generative Design in Minecraft (GDMC): Settlement Generation
Competition, in which participants were asked to implement methods that would
procedurally build complete cities in any given, unknown landscape. These two
contests highlight the versatility of this framework as a benchmark for
different AI tasks.
In 2018, Perez-Liebana et al. [29] organized the Multi-Agent Reinforcement
Learning in Malmö (MARLÖ) competition. This competition pitted groups of agents against each other in three different games. Each of the
games was parameterizable to prevent the agents from overfitting to specific
visuals and layouts. The objective of the competition was to build an agent
that would learn, in a cooperative or competitive multi-agent task, to play
the games in the presence of other agents. The MARLÖ competition successfully
attracted a large number of entries from both existing research institutions
and the general public, indicating a broad level of accessibility and
excitement for the Minecraft domain within and outside of the existing
research community.
In comparison with previous contests, the MineRL series of competitions
tackles one main task and provides a massive number of hierarchical subtasks
and demonstrations (see Section 1.3). The main task and its subtasks are not
trivial; however, agent progress can be easily measured, which allows for a
clear comparison between submitted methods. Further, the target of the
competition series is to promote research on efficient learning, focusing
directly on the sample- and computational-efficiency of the submitted
algorithms [17].
### 1.3 Data
For this competition, we utilize two main components: a set of sequential
decision making environments in Minecraft and a corresponding public large-
scale dataset of human demonstrations. Through an online server which
replicates these environments, we continue to engage the Minecraft community
to add additional demonstrations to this dataset.
#### 1.3.1 Environment
We define _one primary competition environment_, ObtainDiamond, and six other
auxiliary environments that encompass a significant portion of human Minecraft
play. We select these environment domains to highlight many of the hardest
challenges in reinforcement learning, such as sparse rewards, long reward
horizons, and efficient hierarchical planning.
##### Primary Environment.
As with last year’s competition, the main task of this year’s competition is
solving the ObtainDiamond environment. In this environment, the agent begins
in a random starting location without any items, and is tasked with obtaining
a diamond. The agent receives a high reward for obtaining a diamond and
smaller, auxiliary rewards for obtaining prerequisite items. Episodes end due
to the agent dying, successfully obtaining a diamond, or reaching the maximum
step count of 18000 frames (15 minutes).
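The termination rule above can be sketched in a few lines (illustrative names, not the official MineRL implementation); note that 18000 frames at 20 game ticks per second is exactly 15 minutes:

```python
# Sketch of the ObtainDiamond episode termination rule described above.
# Function and variable names are illustrative, not the MineRL source.

MAX_STEPS = 18000  # 15 minutes at 20 game ticks per second

def episode_done(step, agent_dead, obtained_diamond):
    """An episode ends on death, on obtaining a diamond, or at the step cap."""
    return agent_dead or obtained_diamond or step >= MAX_STEPS
```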
##### Auxiliary Environments.
Figure 3: Images of various stages of six of seven total environments.
The ObtainDiamond environment is a difficult environment; diamonds only exist
in a small portion of the world and are 2-10 times rarer than other ores in
Minecraft. Furthermore, obtaining a diamond requires many prerequisite items.
It is practically impossible for an agent to obtain a diamond via naive random
exploration.
We provide six auxiliary environments (in four families), which we believe
will be useful for solving ObtainDiamond:
1. 1.
Navigate: In this environment, the agent must move to a goal location, which
represents a basic primitive used in many tasks in Minecraft. In addition to
standard observations, the agent has access to a “compass” observation, which
points to a set location, 64 meters from the start location. The agent is
given a sparse reward (+100 upon reaching the goal, at which point the episode
terminates). We also support a dense, reward-shaped version of Navigate, in
which the agent receives reward every tick corresponding to the change in
distance between the agent and the goal.
2. 2.
Treechop: In this environment, the agent must collect wood, a key resource in
Minecraft and the first prerequisite item for diamonds. The agent begins in a
forest biome (near many trees) with an iron axe for cutting trees. The agent
is given +1 reward for obtaining each unit of wood, and the episode terminates
once the agent obtains 64 units or the step limit is reached.
3. 3.
Obtain<Item>: We include three additional obtain environments, similar to that
of ObtainDiamond, but with different goal items to obtain. They are:
1. (a)
CookedMeat: cooked meat of a cow, chicken, sheep, or pig, which is necessary
for survival in Minecraft. In this environment, the agent is given a specific
kind of meat to obtain.
2. (b)
Bed: an item made out of dye, wool, and wood that is also vital to survival in
Minecraft. In this environment, the agent is given a specific color of bed to
create.
3. (c)
IronPickaxe: the final prerequisite item in obtaining a diamond. It is
significantly easier to solve than ObtainDiamond: iron is 20 times more common
in the Minecraft world than diamonds, and this environment is typically solved
by humans in less than 10 minutes.
4. 4.
Survival: This environment is the standard, open-ended game mode used by most
human players when playing the game casually. There is no specified reward
function, but data from this environment can be used to help train agents in
more structured tasks, such as ObtainDiamond.
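As a concrete illustration of the dense, reward-shaped Navigate variant described above, the per-tick reward can be sketched as the change in distance to the goal (a hypothetical helper, not the MineRL reward code):

```python
# Sketch of the dense (reward-shaped) Navigate reward: each tick the agent
# receives the change in its distance to the goal. Helper name is illustrative.

def dense_navigate_reward(prev_distance, curr_distance):
    # Positive when moving toward the goal, negative when moving away.
    # Summed over an episode, these per-tick rewards telescope to the total
    # progress made from the start distance to the final distance.
    return prev_distance - curr_distance
```

Because the per-tick rewards telescope, their sum over an episode equals the net progress toward the goal regardless of the path taken.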
#### 1.3.2 Dataset
Figure 4: A diagram of the MineRL data collection platform. Our system renders
demonstrations from packet-level data, so we can easily rerender our data with
different parameters.
The MineRL-v0 dataset consists of over 60 million state-action-(reward) tuples
of recorded human demonstrations over the seven environments mentioned above
[13]. In addition, we are actively working with the community to record
additional human demonstrations. Trajectories are contiguously sampled every
Minecraft game tick (at 20 game ticks per second). Each state is comprised of
an RGB video frame of the player’s point-of-view and a comprehensive set of
features from the game-state at that tick: player inventory, item collection
events, distances to objectives, player attributes (health, level,
achievements), and details about the current GUI the player has open. The
action recorded at each tick consists of: all the keyboard presses, the change
in view pitch and yaw (mouse movements), player GUI interactions, and
agglomerative actions such as item crafting.
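To make the record layout concrete, one state-action pair might look roughly as follows; the field names and values here are illustrative only, not the exact MineRL-v0 schema:

```python
# Hypothetical rendering of one recorded state-action pair, following the
# description above. Field names are illustrative, not the MineRL-v0 schema.

state = {
    "pov": "64x64 RGB frame",                  # player's point-of-view video
    "inventory": {"log": 3, "planks": 0},      # per-item counts
    "attributes": {"health": 20, "level": 0},  # player attributes
}
action = {
    "keyboard": {"forward": 1, "jump": 0},     # key presses this tick
    "camera": (0.0, 12.5),                     # change in (pitch, yaw)
    "craft": "none",                           # agglomerative crafting action
}
```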
Accompanying the human trajectories are a large set of automatically generated
annotations. For all of the environments, we include metrics which indicate
the quality of the demonstration, such as timestamped rewards, number of no-
ops, number of deaths, and total score. Additionally, trajectory meta-data
includes timestamped markers for hierarchical labelings; e.g. when a house-
like structure is built or certain objectives such as chopping down a tree are
met. Data is made available both in the competition materials as well as
through a standalone website (http://minerl.io).
#### 1.3.3 Data Collection
In the previous MineRL competition, we used our novel platform for the
collection of player trajectories in Minecraft, enabling the construction of
the MineRL-v0 dataset. In this second iteration of the competition, we will
continue to utilize the platform with the hope of drastically expanding the
existing dataset. As shown in Figure 4, our platform consists of (1) _a public
game server and website_ , where we obtain permission to record trajectories
of Minecraft players in natural gameplay; (2) _a custom Minecraft client
plugin_ , which records all packet level communication between the client and
the server, so we can re-simulate and re-render human demonstrations with
modifications to the game state and graphics; and (3) _a data processing
pipeline_ , which enables us to produce automatically annotated datasets of
task demonstrations.
##### Data Acquisition.
Minecraft players find the MineRL server on standard Minecraft server lists.
Players first use our webpage to provide IRB consent to have their gameplay
anonymously recorded (the data collection study was approved by Carnegie
Mellon University’s institutional review board as STUDY2018_00000364). Then,
they download a plugin for their Minecraft client, which records and streams
users’ client-server game packets to the MineRL data repository. When playing
on our server, users select an environment to solve and receive in-game
currency proportional to the amount of reward obtained. For the Survival
environment (where there is no known reward function), players receive rewards
only for duration of gameplay, so as not to impose an artificial reward
function.
##### Data Pipeline.
Our data pipeline allows us to resimulate recorded trajectories into several
algorithmically consumable formats. The pipeline serves as an extension to the
core Minecraft game code and synchronously sends each recorded packet from the
MineRL data repository to a Minecraft client using our custom API for
automatic annotation and game-state modification. This API allows us to add
annotations based on any aspect of the game state accessible from existing
Minecraft simulators. Notably, it allows us to rerender the same data with
different textures, shaders, and lighting-conditions which we use to create
test and validation environment-dataset pairs for this competition.
#### 1.3.4 Data Usefulness
Figure 5: Normalized histograms of the lengths of human demonstration on
various MineRL tasks. The red E denotes the upper threshold for expert play
on each task.
##### Human Performance.
A majority of the human demonstrations in the dataset fall within the range of
expert level play. Figure 5 shows the distribution over trajectory length for
each environment. The red region in each histogram denotes the range of times
which correspond to play at an expert level, computed as the average time
required for task completion by players with at least five years of Minecraft
experience. The large number of expert samples and rich labelings of
demonstration performance enable application of many standard imitation
learning techniques which assume optimality of the base policy. In addition,
beginner and intermediate level trajectories allow for the further development
of techniques that leverage imperfect demonstrations.
Figure 6: Item precedence frequency graphs for ObtainDiamond (left),
ObtainCookedMeat (middle), and ObtainIronPickaxe (right). The thickness of
each line indicates the number of times a player collected item $A$ and
subsequently item $B$.
##### Hierarchality.
As shown in Figure 2, Minecraft is deeply hierarchical, and the MineRL data
collection platform is designed to capture these hierarchies both explicitly
and implicitly. Due to the subtask labelings provided in MineRL-v0, we can
inspect and quantify the extent to which these environments overlap. Figure 6
shows precedence frequency graphs constructed from MineRL trajectories on the
ObtainDiamond, ObtainCookedMeat, and ObtainIronPickaxe tasks. In order to
complete the ObtainDiamond task, an agent must complete the sub-goals of
obtaining wood and stone, as well as constructing crafting tables and
furnaces. These subtasks also appear in ObtainIronPickaxe and
ObtainCookedMeat. There is even greater overlap between ObtainDiamond and
ObtainIronPickaxe: most of the item hierarchy for ObtainDiamond consists of
the hierarchy for ObtainIronPickaxe.
##### Interface
Interacting with the environment and our data is as simple as a few lines of
code. Participants will be provided with an OpenAI Gym [5] wrapper for the
environment and a simple interface for loading demonstrations from the
MineRL-v0 dataset as illustrated in Figure 7. Our data will be released in the
form of Numpy .npz files composed of state-action-reward tuples in vector
form, and can be found along with accompanying documentation on the
competition website.
(a) Running a single episode of a random agent in ObtainDiamond.
(b) Utilizing individual trajectories of the MineRL dataset.
(c) Using the MineRL wrapper to filter demonstrations based on metadata.
Figure 7: Example code showing how to interact with MineRL data and
environment.
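The vector form described above can be consumed without any special tooling. A minimal, library-free sketch of stepping through aligned state-action-reward arrays (the actual competition loader is the provided MineRL data interface; this only illustrates the layout):

```python
# Minimal sketch of iterating state-action-reward tuples in vector form, as
# the npz files described above are organized. The real loader is the MineRL
# data interface; the function name here is illustrative.

def iter_transitions(states, actions, rewards):
    """Yield (s_t, a_t, r_t, s_{t+1}) transitions from aligned arrays."""
    for t in range(len(rewards) - 1):
        yield states[t], actions[t], rewards[t], states[t + 1]
```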
### 1.4 Tasks and application scenarios
#### 1.4.1 Task
The primary task of the competition is solving the ObtainDiamond environment.
As previously described (see Section 1.3), agents begin at a random position
on a randomly generated Minecraft survival map with no items in their
inventory. The task consists of controlling an embodied agent to obtain a
single diamond. This task can only be accomplished by navigating the complex
item hierarchy of Minecraft. The learning algorithm will have direct access to
a $64\times 64$ pixel point-of-view observation from the perspective of the
embodied Minecraft agent, as well as a set of discrete observations of the
agent’s inventory for every item required for obtaining a diamond (see Figure
6). The action space of the agent is the Cartesian product of continuous view
adjustment (turning and pitching), binary movement commands (left/right,
forward/backward), and discrete actions for placing blocks, crafting items,
smelting items, and mining/hitting enemies. The agent is rewarded for
completing the full task. Due to the difficulty of the task, the agent is also
rewarded for reaching a set of milestones of increasing difficulty that form a
set of prerequisites for the full task (see Section 1.5).
The competition task embodies two crucial challenges in reinforcement
learning: sparse rewards and long time horizons. The sparsity of the posed
task (in both its time structure and long time horizon) necessitates the use
of efficient exploration techniques, human priors for policy bootstrapping, or
reward shaping via inverse reinforcement learning techniques. Although this
task is challenging, preliminary results indicate the potential of existing
and new methods utilizing human demonstrations to make progress in solving it
(see Section 1.6).
Progress towards solving the ObtainDiamond environment under strict sample
complexity constraints lends itself to the development of sample-efficient
(and therefore more computationally accessible) sequential decision making
algorithms. In particular, because we maintain multiple versions of the
dataset and environment for development, validation, and evaluation, it is
difficult to engineer domain-specific solutions to the competition challenge.
The best performing techniques must explicitly implement strategies that
efficiently leverage human priors across general domains. In this sense, the
application scenarios of the competition are those which stand to benefit from
the development of such algorithms; to that end, we believe that this
competition is a step towards democratizing access to deep reinforcement
learning based techniques and enabling their application to real-world
problems.
##### Previous Year’s Task
Stability in metrics across years is crucial for tracking and assessing long-
term impact and progress. In the MineRL 2019 competition, no team obtained a
diamond; however, many teams made great progress toward solving this task. In
fact, the top team was able to obtain the penultimate item on the path to the
goal. For
this reason, we elected to keep the same task from last year.
### 1.5 Metrics
Milestone | Reward | Milestone | Reward
---|---|---|---
log | 1 | furnace | 32
planks | 2 | stone_pickaxe | 32
stick | 4 | iron_ore | 64
crafting_table | 4 | iron_ingot | 128
wooden_pickaxe | 8 | iron_pickaxe | 256
stone | 16 | diamond | 1024
Table 1: Rewards for sub-goals and the main goal (diamond) in ObtainDiamond.
Following training, participants will be evaluated on the average score of
their model over 500 episodes. Scores are computed as the sum of the milestone
rewards achieved by the agent in a given episode as outlined in Table 1. A
milestone is reached when an agent obtains the first instance of the specified
item. Ties are broken by the number of episodes required to achieve the last
milestone. An automatic evaluation script will be included with starter code.
For official evaluation and validation, a fixed map seed will be selected for
each episode. These seeds will not be available to participants during the
competition.
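The scoring rule in Table 1 can be sketched directly: an episode's score is the sum of milestone rewards, each milestone counted once on the first instance of the corresponding item (an illustrative helper, not the official evaluation script):

```python
# Sketch of the Table 1 scoring rule. Duplicate items add nothing, since each
# milestone is awarded only for the first instance of the item.

MILESTONE_REWARDS = {
    "log": 1, "planks": 2, "stick": 4, "crafting_table": 4,
    "wooden_pickaxe": 8, "stone": 16, "furnace": 32, "stone_pickaxe": 32,
    "iron_ore": 64, "iron_ingot": 128, "iron_pickaxe": 256, "diamond": 1024,
}

def episode_score(items_obtained):
    """items_obtained: the sequence of item names collected in one episode."""
    reached = set(items_obtained) & set(MILESTONE_REWARDS)
    return sum(MILESTONE_REWARDS[item] for item in reached)
```

Under this rule, an episode that reaches every milestone scores 1571, the sum of all rewards in Table 1.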
### 1.6 Baselines, Code, and Material Provided
##### Preliminary Baselines
Figure 8: Performance graphs over time with DQN and PreDQN on Navigate(Dense)
We present preliminary results showing the usefulness of the data for
improving sample efficiency and overall performance. We compare algorithms by
the highest average reward obtained over a 100-episode window during training.
We also report the performance of random policies and 50th percentile human
performance. The results are summarized in Table 2.
In the presented comparison, DQN is an implementation of Double Dueling DQN
[40] and Behavioral Cloning is a supervised learning method trained on expert
trajectories. PreDQN denotes a version of DQN pretrained on the MineRL-v0
data: specifically, PreDQN is trained by performing Bellman updates on
minibatches drawn from expert trajectories with accompanying reward labels.
Before training, we initialize the replay buffer with expert demonstrations.
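The pretraining step described above can be illustrated with a toy tabular stand-in: sample demonstration tuples and apply the Bellman update before any environment interaction. The real PreDQN baseline uses a deep Q-network; everything here is an illustrative simplification.

```python
# Toy tabular rendering of pretraining on demonstrations via Bellman updates.
# The actual baseline trains a deep Q-network; names here are illustrative.
import random
from collections import defaultdict

def pretrain_q(demos, actions, gamma=0.99, lr=0.1, steps=200, seed=0):
    """Bellman updates on (s, a, r, s') tuples sampled from demonstrations."""
    rng = random.Random(seed)
    q = defaultdict(float)                       # q[(state, action)]
    for _ in range(steps):
        s, a, r, s_next = rng.choice(demos)      # a "minibatch" of size one
        target = r + gamma * max(q[(s_next, b)] for b in actions)
        q[(s, a)] += lr * (target - q[(s, a)])   # standard Bellman update
    return q
```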
In all environments, the learned agents perform significantly worse than
humans. Treechop exhibits the largest difference: on average, humans achieve a
score of 64, but reinforcement agents achieve scores of less than 4. These
results suggest that our environments are quite challenging, especially given
that the Obtain<Item> environments build upon the Treechop environment by
requiring the completion of several additional sub-goals. We hypothesize that
a large source of difficulty stems from the environment’s inherent long-
horizon credit assignment problems. For example, it is hard for agents to
learn to navigate through water because it takes many transitions before the
agent dies by drowning.
In light of these difficulties, our data is useful in improving performance
and sample efficiency: in all environments, methods that leverage human data
perform better. As seen in Figure 8, the expert demonstrations were able to
achieve higher reward per episode and attain high performance using fewer
samples. Expert demonstrations are particularly helpful in environments where
random exploration is unlikely to yield any reward, like Navigate (Sparse).
These preliminary results indicate that human demonstrations will be crucial
in solving the main competition environment.
| Treechop | Navigate (S) | Navigate (D)
---|---|---|---
DQN [22] | 3.73 $\pm$ 0.61 | 0.00 $\pm$ 0.00 | 55.59 $\pm$ 11.38
A2C [23] | 2.61 $\pm$ 0.50 | 0.00 $\pm$ 0.00 | -0.97 $\pm$ 3.23
Behavioral Cloning | 43.9 $\pm$ 31.46 | 4.23 $\pm$ 4.15 | 5.57 $\pm$ 6.00
PreDQN | 4.16 $\pm$ 0.82 | 6.00 $\pm$ 4.65 | 94.96 $\pm$ 13.42
Human | 64.00 $\pm$ 0.00 | 100.00 $\pm$ 0.00 | 164.00 $\pm$ 0.00
Random | 3.81 $\pm$ 0.57 | 1.00 $\pm$ 1.95 | -4.37 $\pm$ 5.10
Table 2: Results in Treechop, Navigate (S)parse, and Navigate (D)ense, over
the best 100 contiguous episodes. $\pm$ denotes standard deviation. Note:
humans achieve the maximum score for all environments shown.
##### 2019 Baselines
For the 2019 MineRL Competition, Preferred Networks
(https://preferred.jp/en/) provided extensive baselines
(https://github.com/minerllabs/baselines), including behavioral cloning, deep
Q-learning from demonstrations (DQfD) [15], Rainbow [14], generative
adversarial inverse RL (GAIL) [16], and proximal policy optimization (PPO)
[33]. These baselines are implemented using ChainerRL [10], and MineRL 2019
participants found them to be incredibly helpful for developing their
algorithms. These baselines, together with Preferred Networks’ writeup of
their experiments using them on the MineRL environments, are available to
participants to freely use in this iteration of the competition.
##### 2020 Baselines
We have again partnered with Preferred Networks to produce high-quality
baselines. This year, the baselines are implemented using PyTorch [28]. These
baselines consist of state-of-the-art RL and imitation learning algorithms,
including Rainbow, SQIL [30], prioritized dueling double DQN (PDD DQN) [32,
41, 44], and DQfD. These baselines fully comply with the rules of this year’s
competition.
In addition to these baselines, we provide the code from the 2019 baselines
and from the top teams of the MineRL 2019 competition. However, these
solutions do not conform to this year’s rules. We hope competitors will be able to
take inspiration from these methods.
##### Starting Code and Documentation.
We released an open-source Github repository with starting code including the
baselines mentioned above, an OpenAI Gym interface for the Minecraft
simulator, and a data-loader to accompany the data. Additionally, we released
a public Docker container for ease of use. We also provide participants with
the code for the solutions from last year’s top participants.
### 1.7 Tutorial and documentation
We have a competition page that contains instructions, documentation
(http://minerl.io/docs/), and updates to the competition. For
this competition, we plan to include a step-by-step demonstration showing
participants how to submit their learning procedures. Although top
participants in MineRL 2019 stated that they found the documentation to be
helpful, we plan to extend the documentation in the hopes that even more
people can participate this year.
## 2 Organizational Aspects
### 2.1 Protocol
#### 2.1.1 Submission Protocol
The evaluation of the submissions will be managed by AIcrowd, an open-source
platform for organizing machine learning competitions. Throughout the
competition, participants will work on their code bases as git repositories
(https://gitlab.aicrowd.com). Participants must package their
intended runtime in their repositories to ensure that the AIcrowd evaluators
can automatically build relevant Docker images from their repositories and
orchestrate them as needed. This approach also ensures that all successfully-
evaluated, user-submitted code is both versioned across time and completely
reproducible.
##### Software Runtime Packaging.
Packaging and specifying the software runtime is among the most time-consuming
(and frustrating) tasks for many participants. To simplify this step, we will
support numerous approaches to packaging the software runtime with the help of
aicrowd-repo2docker (https://pypi.org/project/aicrowd-repo2docker/), a tool
which lets participants specify their runtime using Anaconda environment
exports, requirements.txt, or
a traditional Dockerfile. This significantly decreases the barrier to entry
for less technically-inclined participants by transforming an irritating debug
cycle to a deterministic one-liner that performs the work behind the scenes.
##### Submission Mechanism.
Participants will collaborate on their git repository throughout the
competition. Whenever they are ready to make a submission, they will create
and push a git tag to trigger the evaluation pipeline.
##### Orchestration of the Submissions.
The ability to reliably orchestrate user submissions over large periods of
time is a key determining feature of the success of the proposed competition.
We will use the evaluators of AIcrowd, which use custom Kubernetes clusters to
orchestrate the submissions against pre-agreed resource usage constraints. The
same setup has previously been successfully used in numerous other machine
learning competitions, such as NeurIPS 2017: Learning to Run Challenge,
NeurIPS 2018: AI for Prosthetics Challenge, NeurIPS 2018: Adversarial Vision
Challenge, and the 2018 MarLO challenge.
#### 2.1.2 General Competition Structure
##### Round 1: General Entry.
In this round, participants will register on the competition website, and
receive the following materials:
* •
Starter code for running the MineRL environments for the competition task.
* •
Basic baseline implementations provided by Preferred Networks, the competition
organizers, and the top teams from the MineRL 2019 competition (see Section
1.6).
* •
Two different renders of the human demonstration dataset (one for methods
development, the other for validation) with modified textures, lighting
conditions, and minor game state changes.
* •
The Docker Images and quick-start template that the competition organizers
will use to validate the training performance of the competitor’s models.
Competitors may submit solutions to two tracks. The main track will provide
access to both the simulator and paired demonstrations during training, while
the alternate, demonstrations only track, will only provide agents with the
MineRL-v0 dataset during training. Both tracks will be evaluated by measuring
average performance over 100 episodes on the ObtainDiamond task. Competitors
may submit to both tracks.
When satisfied with their models, participants will follow the submission
protocols (described in Section 2.1.1) to submit their code for evaluation,
specifying either the main track or alternate track. The automated evaluation
setup will evaluate the submissions against the validation environment, to
compute and report the metrics (described in Section 1.5) to the respective
public leaderboard on the AIcrowd website. Because the full training phase is
quite resource intensive, it is not possible to run the training for all
the submissions in this round; however, the evaluator will ensure that the
submitted code includes the relevant subroutines for the training of the
models by running a short integration test on the training code before doing
the actual evaluation on the validation environment.
Once Round 1 is complete, the organizers will examine the code repositories of
the top submissions from each track to ensure compliance with the competition
rules. For the main track, the top 15 verified teams will be invited to the
second round. For the alternate demonstrations-only track, 5 teams will move
on to Round 2. To verify the top submissions comply with the competition
rules, they will be automatically trained on the validation dataset and
environment by the competition orchestration platform. The code repositories
associated with the corresponding submissions will be forked, and scrubbed of
large files to ensure that participants are not using any pretrained models in
the subsequent round. The resulting trained models will then be evaluated over
several hundred episodes. Their performance will be compared with the
submission’s final model performance during Round 1 to ensure that no
warm-starting or adversarial modification of the evaluation harness was made.
In the case of the demonstrations-only track, we additionally verify that no
environment interactions were used in the development of the model. The teams
whose submissions have conflicting end-of-round and organizer-run performance
distributions will be contacted for appeal. Unless a successful appeal is made,
the organizers will remove those submissions from the competition and then
evaluate additional submissions until each track is at capacity: 15 teams for
the main track, and 5 teams for the alternate track. Teams may qualify for the
second round in both tracks; therefore, fewer than 20 teams may qualify for
Round 2 among the two tracks.
##### Round 2: Finals.
In this round, the top performing teams will continue to develop their
algorithms. Their work will be evaluated against a confidential, held-out test
environment and test dataset, to which they will not have access. This
environment includes perturbations to both the action-space as well as the
observation space.
Specifically, participants in each track will be able to make a submission to
that track (as described in Section 2.1.1) twice during Round 2. The automated
evaluator will execute their algorithms on the test dataset and simulator, and
report their score and metrics back to the participants. This is done to
prevent competitors from over-fitting to the training and validation
datasets/simulators.
Again, all submitted code repositories will be scrubbed to remove any files
larger than 15MB to ensure participants are not including any model weights
pre-trained on the previously released training dataset. While the container
running the submitted code will not have external network access, relevant
exceptions are added to ensure participants can download and use popular
frameworks like PyTorch (https://pytorch.org) and TensorFlow
(http://tensorflow.org). Participants can request to add network exceptions
for any other publicly available resource, which will be validated by AIcrowd
on a case-by-case basis.
Further, participants will submit a written report of their technical approach
to the problem; this report will be used to bolster the impact of this
competition on sample-efficient reinforcement learning research. They will
also be encouraged to submit their papers to relevant workshops at NeurIPS in
order to increase interest in their work.
At the end of the second period, the competition organizers will execute a
final run of the participants’ algorithms and the winners will be selected for
each of the competition tracks.
##### User Submitted Code.
If a team requires an exception to the open source policy, then the team has a
time window of 3 weeks after the competition ends to request an appeal by
contacting the organizers. We will communicate with the team and potentially
grant an exception. For example, a submission may be open sourced at a later
date if the team is preparing a research publication based on new techniques
used within their submission. By default, all of the associated code
repositories will be made public and available (https://gitlab.aicrowd.com)
after the 3-week window at the end of the competition.
##### NeurIPS Workshop.
After winners have been selected, there will be a NeurIPS workshop to exhibit
the technical approaches developed during the competition. We plan to invite
teams from Round 2 to attend and present their results at the workshop. Due to
COVID-19, this workshop will be completely online.
### 2.2 Rules
The aim of the competition is to develop sample-efficient training algorithms.
Therefore, we discourage the use of environment-specific, hand-engineered
features because they do not demonstrate fundamental algorithmic improvements.
The following rules attempt to capture the spirit of the competition and any
submissions found to be violating the rules may be deemed ineligible for
participation by the organizers.
* •
Entries to the MineRL competition must be “open”. Teams will be expected to
reveal most details of their method including source-code (special exceptions
may be made for pending publications).
* •
For a team to be eligible to move to Round 2, each member must satisfy the
following conditions:
* –
be at least 18 years old and at least the age of majority in their place of residence;
* –
not reside in any region or country subject to U.S. Export Regulations; and
* –
not be an organizer of this competition nor a family member of a competition
organizer.
* •
To receive any awards from our sponsors, competition winners must attend the
NeurIPS workshop.
* •
The submission must train a machine learning model without relying on human
domain knowledge.
* –
The reward function may not be changed (shaped) based on manually engineered,
hard-coded functions of the state. For example, additional rewards for
approaching tree-like objects are not permitted, but rewards for encountering
novel states (“curiosity rewards”) are permitted.
* –
Actions/meta-actions/sub-actions/sub-policies may not be manually specified in
any way. For example, though a learned hierarchical controller is permitted,
meta-controllers may not choose between two policies based on a manually
specified condition, such as whether the agent has a certain item in its
inventory. This restriction includes the composition of actions (e.g., adding
an additional action which is equivalent to performing “walk forward for 2
seconds” or “break a log and then place a crafting table”).
* –
State processing/pre-processing cannot be hard-coded with the exception of
frame-stacking. For example, the agent can act every even-numbered timestep
based on the last two observations, but a manually specified edge detector may
not be applied to the observation. As another example, the agent’s
observations may be normalized to be “zero-mean, variance one” based on an
observation history or the dataset.
* –
To ensure that the semantic meaning attached to action and observation labels
are not exploited, the labels assigned to actions and observations have been
obfuscated (in both the dataset and the environment). Actions and observations
(with the exception of POV observations) have been embedded into a different
space. Furthermore, during Round 2 submissions, the actions will be re-
embedded. Any attempt to bypass these obfuscations will constitute a violation
of the rules.
* –
Models may only be trained against the competition environments (MineRL
environments ending with “VectorOb(f)”). All of the MineRL environments have
specific competition versions which incorporate action and observation space
obfuscation. They all share a similar observation and action space embedding
which is changed in Round 2 as with the texture pack of the environment.
* •
There are two tracks, each with a different sample budget:
* –
The primary track is “Demonstrations and Environment.” Eight million
(8,000,000) interactions with the environment may be used in addition to the
provided dataset. If observations are stacked or actions repeated, each
skipped frame still counts against this budget.
* –
The secondary track is “Demonstrations Only.” No environment interactions may
be used in addition to the provided dataset. Competitors interested in
learning solely from demonstrations can compete in this track without being
disadvantaged compared to those who also use reinforcement learning.
* –
A team can submit separate entries to both tracks; performance in the tracks
will be evaluated separately (i.e., submissions between the two tracks are not
linked in any way).
* •
Participants may only use the provided dataset; no additional datasets may be
included in the source-file submissions or downloaded during training or
evaluation. However, pre-trained models that are publicly available by June
5th are permitted.
* –
During the evaluation of submitted code, the individual containers will not
have access to any external network in order to avoid any information leak.
Relevant exceptions are added to ensure participants can download and use the
pre-trained models included in popular frameworks like PyTorch and TensorFlow.
Participants can request to add network exceptions for any other publicly
available pre-trained models, which will be validated by AICrowd on a case-by-
case basis.
* –
All submitted code repositories will be scrubbed to remove files larger than
30MB to ensure participants are not checking in any model weights pretrained
on the released training dataset.
* –
Pretrained models may not have been trained on MineRL or on any other
Minecraft data, related or unrelated. The intent of this rule is to allow
participants to use models which are, for example, trained on ImageNet or
similar datasets. Don’t abuse this.
* •
The procedure for Round 1 is as follows:
* –
During Round 1, teams submit their trained models for evaluation at most twice
a week and receive the performance of their models.
* –
At the end of Round 1, teams must submit source code to train their models.
This code must terminate within four days on the specified platform.
* –
For teams with the highest evaluation scores, this code will be inspected for
rule compliance and used to re-train the models with the validation dataset
and environment.
* –
For those submissions whose end-of-round and organizer-run performance
distributions disagree, the offending teams will be contacted for appeal.
Unless a successful appeal is made, the organizers will remove those
submissions from the competition and then evaluate additional submissions
until each track is at capacity.
* –
The top 15 teams in the main (RL+Demonstration) track and the top 5 teams in
the secondary (Demonstration Only) track will progress to Round 2.
* •
The procedure for Round 2 is as follows:
* –
During Round 2, teams will submit their source code at most once every two
weeks.
* –
After each submission, the model will be trained for four days on a re-
rendered, private dataset and domain, and the teams will receive the final
performance of their model. The dataset and domain will contain matching
perturbations to the action space and the observation space.
* –
At the end of the round, final standings are based on the best-performing
submission of each team during Round 2.
* •
Official rule clarifications will be made in the FAQ on the AIcrowd website.
* –
The FAQ is available at
https://www.aicrowd.com/challenges/neurips-2020-minerl-competition#faq.
* –
Answers within the FAQ are official answers to questions. Any informal answers
to questions (e.g., via email) are superseded by answers added to the FAQ.
See the rules page at
https://www.aicrowd.com/challenges/neurips-2020-minerl-competition/challenge_rules
(an AIcrowd account is needed to view this page) for any updates.
##### Cheating.
The competition is designed to prevent rule breaking and to discourage
submissions that circumvent the competition goals. Submissions will be tested
on variants of the environment/data with different textures and lighting,
discouraging the use of any priors that are not trained from scratch.
Inherent stochasticity in the environment, such as different world and spawn
locations, as well as the desemantization and isomorphic embedding of state-
and action-space components, directly discourages the use of hard-coded
policies.
Furthermore, we will use automatic evaluation scripts to verify the
participants’ submitted scores in the first round and perform a manual code
review of the finalists of each round in the competition. We highlight that
the evaluation dataset/environment pair on which participants will be
evaluated is _completely inaccessible_ to competitors, and measures are taken
to prevent information leak.
### 2.3 Schedule and Readiness
#### 2.3.1 Schedule
Given the difficulty of the problem posed, ample time shall be given to allow
participants to fully realize their solutions. Our proposed timeline gives
competitors over 80 days to prepare, evaluate, and receive feedback on their
solutions before the end of the first round.
* April 13
Competition Accepted.
* May
Pre-Release: Submission framework finalized.
* June
First Round Begins: Participants invited to download starting materials and
baselines and to begin developing their submission.
* September
End of First Round: Submissions close. Models evaluated by organizers and
partners.
* September
First Round Results Posted: Official results posted notifying finalists.
* September
Final Round Begins: Finalists invited to submit their models against the held
out validation texture pack.
* November
End of Final Round: Submissions close. Organizers train finalists’ latest
submissions for evaluation.
* November
Final Results Posted: Official results of model training and evaluation
posted.
* December 6
NeurIPS 2020: Winning teams invited to the conference to present their
results. Awards announced at conference.
#### 2.3.2 Readiness
At the time of writing this proposal the following key milestones are
complete:
* •
The dataset is fully collected, cleaned, and automatically annotated;
* •
The competition environments have been finalized and implemented;
* •
The advisory committee is fully established;
* •
The partnership with AIcrowd has been confirmed, and we are in discussion with
last year’s sponsors;
* •
A specific plan for attracting underrepresented groups is finalized;
* •
The competition infrastructure has been developed, including the submission
harness.
If accepted to the NeurIPS competition track, there are no major roadblocks
preventing the execution of the competition.
### 2.4 Competition promotion
##### Partnership with Affinity Groups
We hope to partner with affinity groups to promote the participation of groups
who are traditionally underrepresented at NeurIPS. We plan to reach out to
organizers of Women in Machine Learning (WiML, https://wimlworkshop.org/),
LatinX in AI (LXAI, https://www.latinxinai.org/), Black in AI (BAI,
https://blackinai.github.io/), and Queer in AI
(https://sites.google.com/view/queer-in-ai/). We will also reach out to
organizations, such as Deep Learning Indaba
(http://www.deeplearningindaba.com/) and Data Science Africa
(http://www.datascienceafrica.org/), to determine how to increase
the participation of more diverse teams. Specifically, we hope to form a
selection committee for the Inclusion@NeurIPS scholarships consisting of some
of our organizers and members from those groups. We also plan to encourage
competition participants to submit write-ups of their solutions to relevant
affinity group workshops at NeurIPS.
##### Promotion through General Mailing Lists
To promote participation in the competition, we plan to distribute the call to
general technical mailing lists, such as Robotics Worldwide and Machine
Learning News; company mailing lists, such as DeepMind’s internal mailing
list; and institutional mailing lists. We plan to promote participation of
underrepresented groups in the competition by distributing the call to
affinity group mailing lists, including, but not limited to Women in Machine
Learning, LatinX in AI, Black in AI, and Queer in AI. Furthermore, we will
reach out to individuals at historically black or all-female universities and
colleges to encourage the participation of these students and/or researchers
in the competition. By doing so, we will promote the competition to
individuals who are not on any of the aforementioned mailing lists, but are
still members of underrepresented groups.
##### Media Coverage
To increase general interest and excitement surrounding the competition, we
will reach out to the media coordinator at Carnegie Mellon University. By
doing so, our competition will be promoted by popular online magazines and
websites, such as Wired. We will also post about the competition on relevant
popular subreddits, such as r/machinelearning and r/datascience, and promote
it through social media. We will utilize our industry and academic partners to
post on their various social media platforms, such as the OpenAI Blog, the
Carnegie Mellon University Twitter, and the Microsoft Facebook page.
The previous iteration of the MineRL competition was featured by several
notable news outlets including Nature News [18], BBC [34], The Verge [42], and
Synced [38]. This widespread publication and coverage of the competition led
to a drastic influx of new users and spectators from outside of the NeurIPS
community. We intend on further leveraging these media connections to increase
the reach of our call for competitors.
## 3 Resources
### 3.1 Organizing team
#### 3.1.1 Organizers
##### William H. Guss.
William Guss is a research scientist at OpenAI and Ph.D. student in the
Machine Learning Department at CMU. William co-created the MineRL dataset and
led the MineRL competition at NeurIPS 2019. He is advised by Dr. Ruslan
Salakhutdinov and his research spans sample-efficient reinforcement learning
and deep learning theory. William completed his bachelors in Pure Mathematics
at UC Berkeley where he was awarded the Regents’ and Chancellor’s Scholarship,
the highest honor awarded to incoming undergraduates. During his time at
Berkeley, William received the Amazon Alexa Prize Grant for the development of
conversational AI and co-founded Machine Learning at Berkeley. William is from
Salt Lake City, Utah and grew up in an economically impacted, low-income
neighborhood without basic access to computational resources. As a result,
William is committed to working towards developing research and initiatives
which promote socioeconomically-equal access to AI/ML systems and their
development.
##### Mario Ynocente Castro.
Mario is an Engineer at Preferred Networks. In 2017, he received a Masters in
Applied Mathematics at École polytechnique and a Masters in Machine Learning
at École Normale Supérieure Paris-Saclay. His current work focuses on
applications of Reinforcement Learning and Imitation Learning.
##### Sam Devlin.
Sam Devlin is a Senior Researcher in the Game Intelligence and Reinforcement
Learning research groups at Microsoft Research, Cambridge (UK). He received
his PhD on multi-agent reinforcement learning in 2013 from the University of
York. Sam has previously co-organised the Text-Based Adventure AI Competition
in 2016 & 2017 and the Multi-Agent Reinforcement Learning in Minecraft (MARLO)
Competition in 2018.
##### Brandon Houghton.
Brandon Houghton is a Machine Learning Engineer at OpenAI and co-creator of
the MineRL dataset. Graduating from the School of Computer Science at Carnegie
Mellon University, Brandon’s work focuses on developing techniques to enable
agents to interact with the real world through virtual sandbox worlds such as
Minecraft. He has worked on many machine learning projects, such as
discovering model invariants in physical systems as well as learning lane
boundaries for autonomous driving.
##### Noboru Sean Kuno.
Noboru Sean Kuno is a Senior Research Program Manager at Microsoft Research in
Redmond, USA. He is a member of Artificial Intelligence Engaged team of
Microsoft Research Outreach. He leads the design, launch and development of
research programs for AI projects such as Project Malmo, working in
partnership with research communities and universities worldwide.
##### Crissman Loomis.
Crissman works for Preferred Networks, a Japanese AI startup that applies the
latest deep machine learning algorithms to industrial applications, like self-
driving cars, factory automation, or medicine development. At Preferred
Networks, he has supported the development and adoption of open source
frameworks, including the deep learning framework Chainer and more recently
the hyperparameter optimization library Optuna.
##### Stephanie Milani.
Stephanie Milani is a Ph.D. student in the Machine Learning Department at
Carnegie Mellon University. She is advised by Dr. Fei Fang and her research
interests include sequential decision-making problems, with an emphasis on
reinforcement learning. In 2019, she completed her B.S. in Computer Science
and her B.A. in Psychology at the University of Maryland, Baltimore County,
and she co-organized the 2019 MineRL competition. Since 2016, she has worked
to increase the participation of underrepresented groups in CS and AI at the
local and state level. For these efforts, she has been nationally recognized
through a Newman Civic Fellowship.
##### Sharada Mohanty.
Sharada Mohanty is the CEO and Co-founder of AIcrowd, an open-source platform
encouraging reproducible artificial intelligence research. He was the co-
organizer of many large-scale machine learning competitions, such as NeurIPS
2017: Learning to Run Challenge, NeurIPS 2018: AI for Prosthetics Challenge,
NeurIPS 2018: Adversarial Vision Challenge, NeurIPS 2019: MineRL Competition,
NeurIPS 2019: Disentanglement Challenge, NeurIPS 2019: REAL Robots Challenge.
During his Ph.D. at EPFL, he worked on numerous problems at the intersection
of AI and health, with a strong interest in reinforcement learning. In his
current role, he focuses on building better engineering tools for AI
researchers and making research in AI accessible to a larger community of
engineers.
##### Keisuke Nakata.
Keisuke Nakata is a machine learning engineer at Preferred Networks, Inc. He
mainly works on machine learning applications in real-world industry settings.
Particularly, his interests lie in creating reinforcement learning algorithms
and frameworks.
##### Ruslan Salakhutdinov.
Ruslan Salakhutdinov received his Ph.D. in machine learning (computer science)
from the University of Toronto in 2009. After spending two post-doctoral years
at the Massachusetts Institute of Technology Artificial Intelligence Lab, he
joined the University of Toronto as an Assistant Professor in the Department
of Computer Science and Department of Statistics. In February of 2016, he
joined the Machine Learning Department at Carnegie Mellon University as an
Associate Professor. Ruslan’s primary interests lie in deep learning, machine
learning, and large-scale optimization. His main research goal is to
understand the computational and statistical principles required for
discovering structure in large amounts of data. He is an action editor of the
Journal of Machine Learning Research and served on the senior programme
committee of several learning conferences including NeurIPS and ICML. He is an
Alfred P. Sloan Research Fellow, Microsoft Research Faculty Fellow, Canada
Research Chair in Statistical Machine Learning, a recipient of the Early
Researcher Award, Connaught New Researcher Award, Google Faculty Award,
Nvidia’s Pioneers of AI award, and is a Senior Fellow of the Canadian
Institute for Advanced Research.
##### John Schulman.
John Schulman is a researcher and founding member of OpenAI, where he leads
the reinforcement learning team. He received a PhD from UC Berkeley in 2016,
advised by Pieter Abbeel. He was named one of MIT Tech Review’s 35 Innovators
Under 35 in 2016.
##### Shinya Shiroshita.
Shinya Shiroshita works for Preferred Networks as an engineer. He graduated
from the University of Tokyo, where he majored in computer science. His
hobbies are competitive programming and playing board games. In Minecraft, he
likes exploring interesting structures and biomes.
##### Nicholay Topin.
Nicholay Topin is a Machine Learning Ph.D. student advised by Dr. Manuela
Veloso at Carnegie Mellon University. His current research focus is
explainable deep reinforcement learning systems. Previously, he has worked on
knowledge transfer for reinforcement learning and learning acceleration for
deep learning architectures.
##### Avinash Ummadisingu.
Avinash Ummadisingu works at Preferred Networks on Deep Reinforcement Learning
for Robotic Manipulation and the open-source library PFRL (formerly
ChainerRL). His areas of interests include building sample efficient
reinforcement learning systems and multi-task learning. Prior to that, he was
a student at USI, Lugano under the supervision of Prof. Jürgen Schmidhuber and
Dr. Paulo E. Rauber of the Swiss AI Lab IDSIA.
##### Oriol Vinyals.
Oriol Vinyals is a Principal Scientist at Google DeepMind, and a team lead of
the Deep Learning group. His work focuses on Deep Learning and Artificial
Intelligence. Prior to joining DeepMind, Oriol was part of the Google Brain
team. He holds a Ph.D. in EECS from the University of California, Berkeley and
is a recipient of the 2016 MIT TR35 innovator award. His research has been
featured multiple times at the New York Times, Financial Times, WIRED, BBC,
etc., and his articles have been cited over 65000 times. His academic
involvement includes program chair for the International Conference on
Learning Representations (ICLR) of 2017, and 2018. He has also been an area
chair for many editions of the NIPS and ICML conferences. Some of his
contributions such as seq2seq, knowledge distillation, or TensorFlow are used
in Google Translate, Text-To-Speech, and Speech recognition, serving billions
of queries every day, and he was the lead researcher of the AlphaStar project,
creating an agent that defeated a top professional at the game of StarCraft,
achieving Grandmaster level, also featured as the cover of Nature. At DeepMind
he continues working on his areas of interest, which include artificial
intelligence, with particular emphasis on machine learning, deep learning and
reinforcement learning.
#### 3.1.2 Advisors
##### Anca Dragan.
Anca Dragan is an Assistant Professor in the EECS Department at UC Berkeley.
Her goal is to enable robots to work with, around, and in support of people.
She runs the InterACT Lab, where the focus is on algorithms for human-robot
interaction – algorithms that move beyond the robot’s function in isolation,
and generate robot behavior that also accounts for interaction and
coordination with end-users. The lab works across different applications, from
assistive robots, to manufacturing, to autonomous cars, and draws from optimal
control, planning, estimation, learning, and cognitive science. She also
helped found and serves on the steering committee for the Berkeley AI Research
(BAIR) Lab, and is a co-PI of the Center for Human-Compatible AI. She was also
honored by the Sloan Fellowship, MIT TR35, the Okawa award, and an NSF CAREER
award.
##### Fei Fang.
Fei Fang is an Assistant Professor at the Institute for Software Research in
the School of Computer Science at Carnegie Mellon University. Before joining
CMU, she was a Postdoctoral Fellow at the Center for Research on Computation
and Society (CRCS) at Harvard University. She received her Ph.D. from the
Department of Computer Science at the University of Southern California in
June 2016. Her research lies in the field of artificial intelligence and
multi-agent systems, focusing on integrating machine learning with game
theory. Her work has been motivated by and applied to security,
sustainability, and mobility domains, contributing to the theme of AI for
Social Good.
##### Chelsea Finn.
Chelsea Finn is an Assistant Professor in Computer Science and Electrical
Engineering at Stanford University. Finn’s research interests lie in the
capability of robots and other agents to develop broadly intelligent behavior
through learning and interaction. To this end, her work has included deep
learning algorithms for concurrently learning visual perception and control in
robotic manipulation skills, inverse reinforcement methods for scalable
acquisition of nonlinear reward functions, and meta-learning algorithms that
can enable fast, few-shot adaptation in both visual perception and deep
reinforcement learning. Finn received her Bachelor’s degree in Electrical
Engineering and Computer Science at MIT and her PhD in Computer Science at UC
Berkeley. Her research has been recognized through the ACM doctoral
dissertation award, an NSF graduate fellowship, a Facebook fellowship, the
C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review
35 under 35 Award, and her work has been covered by various media outlets,
including the New York Times, Wired, and Bloomberg. Throughout her career, she
has sought to increase the representation of underrepresented minorities
within CS and AI by developing an AI outreach camp at Berkeley for
underprivileged high school students, a mentoring program for underrepresented
undergraduates across four universities, and leading efforts within the WiML
and Berkeley WiCSE communities of women researchers.
##### David Ha.
David Ha is a Research Scientist at Google Brain. His research interests
include Recurrent Neural Networks, Creative AI, and Evolutionary Computing.
Prior to joining Google, he worked at Goldman Sachs as a Managing Director,
where he co-ran the fixed-income trading business in Japan. He obtained
undergraduate and graduate degrees in Engineering Science and Applied Math
from the University of Toronto.
##### Sergey Levine.
Sergey Levine received a BS and MS in Computer Science from Stanford
University in 2009, and a Ph.D. in Computer Science from Stanford University
in 2014. He joined the faculty of the Department of Electrical Engineering and
Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine
learning for decision making and control, with an emphasis on deep learning
and reinforcement learning algorithms. Applications of his work include
autonomous robots and vehicles, as well as computer vision and graphics. He
has previously served as the general chair for the Conference on Robot
Learning, program co-chair for the International Conference on Learning
Representations, and organizer for numerous workshops at ICML, NeurIPS, and
RSS. He has also served as co-organizer on the _Learning to Run_ and _AI for
Prosthetics_ NeurIPS competitions.
##### Zachary Chase Lipton.
Zachary Chase Lipton is an assistant professor of Operations Research and
Machine Learning at Carnegie Mellon University. His research spans core
machine learning methods and their social impact and addresses diverse
application areas, including clinical medicine and natural language
processing. Current research focuses include robustness under distribution
shift, breast cancer screening, the effective and equitable allocation of
organs, and the intersection of causal thinking and the messy high-dimensional
data that characterizes modern deep learning applications. He is the founder
of the Approximately Correct blog (approximatelycorrect.com) and a founder and
co-author of Dive Into Deep Learning, an interactive open-source book drafted
entirely through Jupyter notebooks.
##### Manuela Veloso.
Manuela Veloso is a Herbert A. Simon University Professor at Carnegie Mellon
University and the head of AI research at JPMorgan Chase. She received her
Ph.D. in computer science from Carnegie Mellon University in 1992. Since then,
she has been a faculty member at the Carnegie Mellon School of Computer
Science. Her research focuses on artificial intelligence and robotics, across
a range of planning, execution, and learning algorithms. She cofounded the
RoboCup Federation and served as president of AAAI from 2011 to 2016. She is
an AAAI, IEEE, AAAS, and ACM fellow.
#### 3.1.3 Partners and Sponsors
We are currently in conversation with potential partners for this year’s
competition. Last year, we partnered with and/or received support from
Microsoft Research, Preferred Networks, NVIDIA, and Artificial Intelligence
Journal (AIJ).
### 3.2 Resources provided by organizers, including prizes
##### Mentorship.
We will facilitate a community forum through our publicly available Discord
server to enable participants to ask questions, provide feedback, and engage
meaningfully with our organizers and advisory board. We hope to foster an
active community to collaborate on these hard problems.
##### Computing Resources.
In concert with our efforts to provide open, democratized access to AI, we are
in conversation with potential sponsors to provide compute grants for teams
that self identify as lacking access to the necessary compute power to
participate in the competition, as we did in the last iteration of the
competition. We will also provide groups with the evaluation resources for
their experiments in Round 2, as we did in the last iteration of the
competition.
##### Travel Grants and Scholarships.
The competition organizers are committed to increasing the participation of
groups traditionally underrepresented in reinforcement learning and, more
generally, in machine learning (including, but not limited to: women, LGBTQ
individuals, underrepresented racial and ethnic groups, and individuals with
disabilities). To that end, we will offer Inclusion@NeurIPS
scholarships/travel grants for Round 1 participants who are traditionally
underrepresented at NeurIPS to attend the conference. These individuals will
be able to apply online for these grants; their applications will be evaluated
by the competition organizers and partner affinity groups. We also plan to
provide travel grants to enable all of the top participants from Round 2 to
attend our NeurIPS workshop. We are in conversation with potential sponsors
about providing funding for these travel grants.
##### Prizes.
We are currently in discussion about prizes with potential sponsors and/or
partners. In the previous competition, we offered 5 NVIDIA GPUs and 10 NVIDIA
Jetsons to the top teams. In addition, we provided two prizes for notable
research contributions.
### 3.3 Support and facilities requested
Due to the quality of sponsorships and industry partnerships secured last
year, we only request facility resources and ticket reservations. We aim to
present at the NeurIPS 2020 Competition Workshop. We will invite guest
speakers, organizers, Round 2 participants, and some Round 1 participants. To
allow these people to attend NeurIPS, we request 30 reservations for NeurIPS.
We plan to provide funding for teams to travel to the competition. The
organizers will be present at their own expense.
## References
* Amodei and Hernandez [2018] Dario Amodei and Danny Hernandez. AI and compute, May 2018. URL https://blog.openai.com/ai-and-compute/.
* Bellemare et al. [2013] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. _Journal of Artificial Intelligence Research_ , 47:253–279, 2013.
* Berner et al. [2019] Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. _arXiv preprint arXiv:1912.06680_ , 2019.
* Bojarski et al. [2016] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. _arXiv preprint arXiv:1604.07316_ , 2016.
* Brockman et al. [2016] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI gym. _arXiv preprint arXiv:1606.01540_ , 2016.
* Cruz Jr et al. [2017] Gabriel V Cruz Jr, Yunshu Du, and Matthew E Taylor. Pre-training neural networks with human demonstrations for deep reinforcement learning. _arXiv preprint arXiv:1709.04083_ , 2017.
* DeepMind [2018] DeepMind. Alphastar: Mastering the real-time strategy game starcraft ii, 2018. URL https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/.
* Finn et al. [2016] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In _The 33rd International Conference on Machine Learning_ , pages 49–58, 2016.
* Finn et al. [2017] Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imitation learning via meta-learning. _arXiv preprint arXiv:1709.04905_ , 2017.
* Fujita et al. [2019] Yasuhiro Fujita, Toshiki Kataoka, Prabhat Nagarajan, and Takahiro Ishikawa. ChainerRL: A deep reinforcement learning library. In _The 23rd Conference on Neural Information Processing Systems, Deep Reinforcement Learning Workshop_ , 2019.
* Gao et al. [2018] Yang Gao, Ji Lin, Fisher Yu, Sergey Levine, Trevor Darrell, et al. Reinforcement learning from imperfect demonstrations. _arXiv preprint arXiv:1802.05313_ , 2018.
* Guss et al. [2019] William H. Guss, Cayden Codel*, Katja Hofmann*, Brandon Houghton*, Noboru Kuno*, Stephanie Milani*, Sharada Mohanty*, Diego Perez Liebana*, Ruslan Salakhutdinov*, Nicholay Topin*, Manuela Veloso*, and Phillip Wang*. The MineRL competition on sample efficient reinforcement learning using human priors. In _The 33rd Conference on Neural Information Processing Systems Competition Track_ , 2019.
* Guss* et al. [2019] William H. Guss*, Brandon Houghton*, Nicholay Topin, Phillip Wang, Cayden Codel, Manuela Veloso, and Ruslan Salakhutdinov. MineRL: A large-scale dataset of Minecraft demonstrations. In _The 28th International Joint Conference on Artificial Intelligence_ , 2019.
* Hessel et al. [2018] Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In _The 32nd AAAI Conference on Artificial Intelligence_ , 2018.
* Hester et al. [2018] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. Deep q-learning from demonstrations. In _The 32nd AAAI Conference on Artificial Intelligence_ , 2018.
* Ho and Ermon [2016] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In _Advances in Neural Information Processing Systems_ , 2016.
* Houghton et al. [2020] Brandon Houghton, Stephanie Milani, Nicholay Topin, William Guss, Katja Hofmann, Diego Perez-Liebana, Manuela Veloso, and Ruslan Salakhutdinov. Guaranteeing reproducibility in deep learning competitions. In _The 23rd Conference on Neural Information Processing Systems, Challenges in Machine Learning (CiML) Workshop_ , 2020.
* Hsu [2019] Jeremy Hsu. AI takes on popular Minecraft game in machine-learning contest, Nov 2019. URL https://www.nature.com/articles/d41586-019-03630-0.
* Johnson et al. [2016] Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The malmo platform for artificial intelligence experimentation. In _The 25th International Joint Conference on Artificial Intelligence_ , pages 4246–4247, 2016.
* Kidziński et al. [2018] Łukasz Kidziński, Sharada P Mohanty, Carmichael F Ong, Jennifer L Hicks, Sean F Carroll, Sergey Levine, Marcel Salathé, and Scott L Delp. Learning to run challenge: Synthesizing physiologically accurate motion using deep reinforcement learning. In _The NIPS’17 Competition: Building Intelligent Systems_ , pages 101–120. Springer, 2018.
* Milani et al. [2020] Stephanie Milani, Nicholay Topin, Brandon Houghton, William H. Guss, Sharada P. Mohanty, Keisuke Nakata, Oriol Vinyals, and Noboru Sean Kuno. Retrospective analysis of the 2019 MineRL competition on sample efficient reinforcement learning. _Proceedings of Machine Learning Research: NeurIPS 2019 Competition and Demonstration Track_ , 2020.
* Mnih et al. [2015] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. _Nature_ , 518(7540):529, 2015.
* Mnih et al. [2016] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In _The 33rd International Conference on Machine Learning_ , pages 1928–1937, 2016.
* Nichol et al. [2018] Alex Nichol, Vicki Pfau, Christopher Hesse, Oleg Klimov, and John Schulman. Gotta learn fast: A new benchmark for generalization in rl. _arXiv preprint arXiv:1804.03720_ , 2018.
* Oh et al. [2016] Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. _arXiv preprint arXiv:1605.09128_ , 2016.
* OpenAI [2018] OpenAI. Openai five, Sep 2018. URL https://blog.openai.com/openai-five/.
* Panse et al. [2018] Ameya Panse, Tushar Madheshia, Anand Sriraman, and Shirish Karande. Imitation learning on atari using non-expert human annotations. 2018\.
* Paszke et al. [2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. 2019.
* Perez-Liebana et al. [2019] Diego Perez-Liebana, Katja Hofmann, Sharada Prasanna Mohanty, Noburu Kuno, Andre Kramer, Sam Devlin, Raluca D Gaina, and Daniel Ionita. The Multi-Agent Reinforcement Learning in MalmÖ (MARLÖ) Competition. _arXiv preprint arXiv:1901.08129_ , 2019.
* Reddy et al. [2019] Siddharth Reddy, Anca D Dragan, and Sergey Levine. Sqil: Imitation learning via reinforcement learning with sparse rewards. _arXiv preprint arXiv:1905.11108_ , 2019.
* Salge et al. [2018] Christoph Salge, Michael Cerny Green, Rodgrigo Canaan, and Julian Togelius. Generative Design in Minecraft (GDMC): Settlement Generation Competition. In _The 13th International Conference on the Foundations of Digital Games_ , page 49. ACM, 2018.
* Schaul et al. [2015] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. In _Proceedings of the International Conference on Learning Representations_ , 2015.
* Schulman et al. [2017] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. _arXiv preprint arXiv:1707.06347_ , 2017.
* Shead [2019] Sam Shead. Minecraft diamond challenge leaves ai creators stumped, Dec 2019. URL https://www.bbc.com/news/technology-50720823.
* Shu et al. [2017] Tianmin Shu, Caiming Xiong, and Richard Socher. Hierarchical and interpretable skill acquisition in multi-task reinforcement learning. _arXiv preprint arXiv:1712.07294_ , 2017.
* Silver et al. [2017] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. _Nature_ , 550(7676):354–359, 2017.
* Silver et al. [2018] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. _Science_ , 362, 2018.
* Synced [2019] Synced. Neurips 2019 will host minecraft reinforcement learning competition, May 2019. URL https://medium.com/syncedreview/neurips-2019-will-host-minecraft-reinforcement-learning-competition-146e8bc8da1.
* Tessler et al. [2017] Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. In _The 31st AAAI Conference on Artificial Intelligence_ , 2017.
* van Hasselt et al. [2016] Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In _The 30th AAAI Conference on Artificial Intelligence_ , 2016.
* Van Hasselt et al. [2016] Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. 2016\.
* Vincent [2019] James Vincent. Ai has bested chess and go, but it struggles to find a diamond in minecraft, Dec 2019. URL https://www.theverge.com/2019/12/13/21020230/ai-minecraft-minerl-diamond-challenge-microsoft-reinforcement-learning.
* Vinyals et al. [2019] Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michael Mathieu, Andrew Dudzik, Junyong Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. _Nature_ , 2019.
* Wang et al. [2016] Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas. Dueling network architectures for deep reinforcement learning. In _Proceedings of the International Conference on Machine Learning_ , 2016.
|
# Adaptivity without Compromise: A Momentumized, Adaptive, Dual Averaged
Gradient Method for Stochastic Optimization
Aaron Defazio
Facebook AI Research, New York
Samy Jelassi
Princeton University, Princeton
###### Abstract
We introduce MADGRAD, a novel optimization method in the family of AdaGrad
adaptive gradient methods. MADGRAD shows excellent performance on deep
learning optimization problems from multiple fields, including classification
and image-to-image tasks in vision, and recurrent and bidirectionally-masked
models in natural language processing. For each of these tasks, MADGRAD
matches or outperforms both SGD and ADAM in test set performance, even on
problems for which adaptive methods normally perform poorly.
## 1 Introduction
Optimization for deep learning forms a relatively new and growing sub-field in
the optimization community. Compared to classical first order optimization,
deep learning problems introduce additional concerns which require new tools
to overcome. Deep learning problems are characterized by very large parameter
vector sizes $D$, making it computationally infeasible to store matrices of
size $D\times D$, and even “limited memory” approaches can be impractical for
problems such as the 100+ billion parameter models currently being explored
(Rajbhandari et al., 2019; Brown et al., 2020). The practical limit on these
problems is storage that is fixed at a small multiple of the parameter vector
size.
For this reason, diagonal scaling approaches have become the industry standard
for deep learning. In this class of methods, adaptivity is performed
independently for each coordinate, so that memory usage scales as $O(D)$. We
consider Adam (Kingma and Ba, 2014) the benchmark method in this class; it has
seen widespread adoption, and there are no alternative adaptive methods that
consistently out-perform it (Choi et al., 2020; Schmidt et al., 2020).
Adam builds upon a rich history of diagonal adaptive methods. The AdaGrad
method (Duchi et al., 2011) introduced a principled approach to diagonal
adaptivity that arises naturally as a simplification of a full-matrix
adaptivity scheme. This approach is clearly motivated and yields natural
convergence rate bounds for convex losses. Also within this family, the RMSProp method (Tieleman and Hinton, 2012) arose as a well-performing empirical method, albeit with little theoretical motivation. The
development of the Adam method can be seen as a natural extension of the
scaling used in RMSProp to include a form of momentum, as well as a
stabilizing “bias-correction” that significantly dampens the adaptivity and
step-size during the early stages of optimization.
Despite its widespread success, Adam is far from a panacea for deep learning
optimization. Wilson et al. (2017) show that Adam as well as other common
adaptive optimizers converge to bad local minima on some important problems,
such as the widely studied problem of image classification. This has led to
the general claim that adaptive methods generalize poorly. As we will show,
this is not necessarily the case. The method we develop in this work combines
adaptivity with strong generalization performance.
Our MADGRAD (Momentumized, Adaptive, Dual averaged GRADient) method performs
consistently at a state-of-the-art level across a varied set of realistic
large-scale deep learning problems, without requiring any more tuning than
Adam. MADGRAD is constructed from the lesser-used dual averaging form of
AdaGrad, through a series of direct and systematic changes that adapt the
method to deep learning optimization.
## 2 Problem Setup
We consider the stochastic optimization framework, where the goal is to
minimize a parameterized function
$f(x)=\mathbb{E}_{\xi}\left[f(x,\xi)\right],$
where $x\in\mathbb{R}^{D}$, and each $\xi$ is a random variable drawn from a
fixed known distribution. In the case of empirical risk minimization, $\xi$ is
a data-point drawn from the data distribution, typically further processed by
a stochastic data-augmentation procedure. At each step $k$, a stochastic
optimization algorithm is given $\xi_{k}$ and has access to $f(x_{k},\xi_{k})$
and $\nabla f(x_{k},\xi_{k})$ for a pre-specified iterate $x_{k}$.
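As a concrete illustration of this oracle model, the sketch below instantiates $f(x,\xi)$ as a per-data-point least-squares loss; the sampling distribution, dimension, and function names are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: f(x, xi) = 0.5 * (a^T x - b)^2 for a data point
# xi = (a, b) drawn from a fixed distribution (illustrative choice).
def sample_xi(d=5):
    a = rng.standard_normal(d)
    b = a @ np.ones(d) + 0.1 * rng.standard_normal()
    return a, b

def f(x, xi):
    a, b = xi
    return 0.5 * (a @ x - b) ** 2

def grad_f(x, xi):
    a, b = xi
    return (a @ x - b) * a  # gradient of f(x, xi) with respect to x

# At step k the algorithm observes xi_k and may query f and its gradient at x_k.
x_k = np.zeros(5)
xi_k = sample_xi()
g_k = grad_f(x_k, xi_k)
```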
## 3 Related Work
The theory of adaptive methods for non-convex optimization is still in its
infancy. The current best known convergence theory for Adam due to Défossez et
al. (2020) greatly improves over earlier theory (Zou et al., 2019b), but has
the important caveat that it requires momentum values of the order
$\beta=1-1/N$ for $N$ iterations, which is far from the values used in
practice, which are of the order $\beta=0.9$ to $\beta=0.99$. Results for
these settings may not be possible, as Reddi et al. (2018) show via a counter-
example that Adam may fail to converge under common parameter settings, even
in the convex case. When $\beta_{1}$ and $\beta_{2}$ are small, the Adam update is close to sign-SGD (i.e. $x_{k+1}=x_{k}-\gamma\,\mathrm{sign}(\nabla f(x_{k},\xi_{k}))$), a method that also fails to converge in the general
stochastic case (Balles and Hennig, 2018), although some theory is possible
under a large-batch assumption (Bernstein et al., 2018), where the behavior is
closer to the non-stochastic case.
AdaGrad’s convergence in the non-convex case has also been studied. Ward et
al. (2019) establish convergence for a restricted variant where only a global
step size is adaptively updated. Li and Orabona (2019) establish almost sure
convergence for a variant of AdaGrad where the most recently seen gradient is
omitted from the denominator. Convergence with high probability is also
established for a variant with global rather than coordinate-wise step size.
More recently Zhou et al. (2020) and Zou et al. (2019a) establish convergence
of non-momentum and momentum variants respectively, although with bounds that
are much worse than established by Défossez et al. (2020), who also cover
AdaGrad in their analysis.
Weighted AdaGrad as we use in this work has been explored to varying degrees
before, including the non-convex case in the aforementioned work by Zou et al.
(2019a), and the convex case by Levy et al. (2018). Weighting is particularly
interesting in the strongly convex case, where weights such as
$\lambda_{k}\propto k^{2}$ can be used to achieve accelerated convergence.
Neither of these works cover the dual averaged form of AdaGrad which we
explore.
## 4 Adaptivity in deep learning beyond Adam
To understand the motivation and design of the MADGRAD method, a clear
understanding of the short-comings of existing methods is needed. Consider
Adam, the most heavily used adaptive method in practice. Although it works
remarkably well on some important problems, it also suffers from the following
issues:
* •
It greatly under-performs the non-adaptive SGD-M method in a number of
important situations including the widely studied ImageNet training problem.
* •
Problems can be constructed on which it will fail to converge entirely, even
in the convex setting.
* •
The exponential moving average updates used are non-sparse when given sparse
gradients, which makes the method poorly suited to sparse problems.
Due to these issues, Adam doesn’t quite reach the goal of being a general-
purpose deep learning optimizer. The MADGRAD method is directly designed to
address these issues. MADGRAD:
* •
Achieves state-of-the-art performance across problems traditionally tackled by
Adam, while simultaneously achieving state-of-the-art on problems where Adam
normally under-performs.
* •
Has provable and strong convergence theory on convex problems.
* •
Is directly applicable to sparse problems when momentum is not used.
## 5 Design
The MADGRAD method is the combination of a number of techniques that
individually address separate short-comings in the AdaGrad method when applied
to deep learning optimization problems. By building upon a method with known
convergence theory, we are able to construct a method that is still provably
convergent (under convexity assumptions) without sacrificing the practical
performance characteristics of Adam. We will detail each of these techniques
in turn, to build up MADGRAD from its foundations.
### 5.1 Dual averaging for deep learning
MADGRAD is based upon the dual averaging formulation of AdaGrad, rather than
the mirror descent formulation. Although the original seminal work on AdaGrad
(Duchi et al., 2011) presents the dual averaging formulation with equal weight
as the mirror descent form, the dual averaging form has seen virtually no use
for deep learning optimization. The AdaGrad implementations available in major
deep learning frameworks (PyTorch, Tensorflow) contain the mirror descent form
only. This is despite the theory presented for the dual averaging formulation
being arguably more elegant than the mirror descent theory. The dual averaging
form of AdaGrad satisfies the following bound:
$\sum_{i=1}^{k}f(x_{i})-f(x_{*})\leq\frac{1}{\gamma}\psi_{k}(x_{*})+\frac{\gamma}{2}\sum_{i=1}^{k}\left\|\nabla
f_{i}(x_{i})\right\|_{\psi^{*}_{i-1}}^{2}$
Whereas the mirror descent form satisfies the following more complex bound,
involving the Bregman divergence of $\psi$:
$\displaystyle\sum_{i=1}^{k}f(x_{i})-f(x_{*})\leq$
$\displaystyle\frac{1}{\gamma}B_{\psi_{1}}(x_{*},x_{1})+\frac{1}{\gamma}\sum_{i=1}^{k-1}\left[B_{\psi_{i+1}}(x_{*},x_{i+1})-B_{\psi_{i}}(x_{*},x_{i+1})\right]+\frac{\gamma}{2}\sum_{i=1}^{k}\left\|\nabla
f_{i}(x_{i})\right\|_{\psi_{i}^{*}}^{2}.$
Given the clear advantage in terms of theoretical simplicity, why then are
dual averaging approaches not used more widely? We believe this is due to a
number of misconceptions. The first misconception is that dual averaging is
only interesting in the composite optimization setting, where sophisticated
regularizers are used to encourage sparsity or induce other properties of the
solution. It is true that for smooth non-stochastic optimization, gradient
descent and mirror descent coincide (under optimal hyper-parameters). However,
when the objective is stochastic or non-smooth, the methods become distinct,
and actually behave quite differently.
Dual averaging has the general form, given a proximal function $\psi$:
$\displaystyle g_{k}$ $\displaystyle=\nabla f\left(x_{k},\xi_{k}\right),$
$\displaystyle s_{k+1}$ $\displaystyle=s_{k}+\lambda_{k}g_{k},$ $\displaystyle
x_{k+1}$ $\displaystyle=\arg\min_{x}\left\\{\left\langle
s_{k+1},x\right\rangle+\beta_{k+1}\psi(x)\right\\}.$ (1)
The gradient buffer $s_{0}$ is initialized as the zero vector. The simplest
form of dual averaging occurs when the standard Euclidean squared norm is
used: $\psi(x)=\frac{1}{2}\left\|x-x_{0}\right\|^{2}$, and $\lambda_{k}=1$ in
which case the method takes the form:
$x_{k+1}=x_{0}-\frac{1}{\beta_{k+1}}\sum_{i=0}^{k}g_{i}.$ (2)
If the objective is either non-smooth or stochastic (or both), $\beta$
sequences of the form $\beta_{k+1}=\sqrt{k+1}$ give a convergent method.
Although Equation 2 has little resemblance to SGD as written, SGD’s update:
$x_{k+1}=x_{k}-\gamma_{k}\nabla f\left(x_{k},\xi_{k}\right),$
can be written in the more comparable form:
$x_{k+1}=x_{0}-\sum_{i=0}^{k}\gamma_{i}g_{i}.$ (3)
where to achieve convergence without a fixed stopping time, a step size of the
form $\gamma_{i}\propto 1/\sqrt{i+1}$ is standard. Comparing SGD and DA at a
step $k$, it’s clear that the weighting sequence used by SGD places a smaller
weight on newer $g_{i}$ in the summation compared to earlier $g_{i}$, whereas
the sequence used by DA places equal weight on all $g_{i}$. This difference is
key to understanding why methods in the DA family behave differently from SGD
in practice, even without additional regularization or non-Euclidean proximal
functions.
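The contrast between Equations 2 and 3 can be made concrete with a short sketch; the function names are illustrative, with $\lambda_k = 1$ and $\beta_{k+1}=\sqrt{k+1}$ for DA and $\gamma_i = 1/\sqrt{i+1}$ for SGD, as above.

```python
import numpy as np

def dual_averaging(grads, x0):
    """Simple Euclidean DA (Equation 2): lambda_k = 1, beta_{k+1} = sqrt(k+1)."""
    s = np.zeros_like(x0)
    x = x0
    for k, g in enumerate(grads):
        s = s + g                    # equal weight on every gradient
        x = x0 - s / np.sqrt(k + 1)  # the whole sum divided by beta_{k+1}
    return x

def sgd(grads, x0):
    """SGD (Equation 3) with gamma_i = 1 / sqrt(i + 1)."""
    x = x0
    for i, g in enumerate(grads):
        x = x - g / np.sqrt(i + 1)   # newer gradients receive smaller weight
    return x
```

With a constant gradient $g$, after $k$ steps DA sits at $x_0 - \sqrt{k}\,g$ while SGD sits at $x_0 - \big(\sum_{i=0}^{k-1} 1/\sqrt{i+1}\big)\,g$: the two weightings genuinely differ.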
The second misconception arises from implementing the dual averaging form of
AdaGrad without considering what modifications need to be made for the deep
learning setting. The algorithm, as originally stated, uses an initial point at
the origin $x_{0}=0$, and a proximity function
$\psi_{t}(x)=\frac{1}{2}\left\langle x,H_{t}x\right\rangle$ that is quadratic,
but centered around the origin. It is well known that neural network training
exhibits pathological behavior when initialized at the origin, and so naive
use of this algorithm does not perform well. When centering around 0, we have
observed severely degraded empirical performance and a high risk of
divergence. Instead, a proximity function centered about $x_{0}$ needs to be
used:
$\psi_{t}(x)=\frac{1}{2}\left\langle
x-x_{0},H_{t}\left(x-x_{0}\right)\right\rangle,$
with initialization of $x_{0}$ following standard conventions for the network
being trained.
### 5.2 Dual averaging generalizes well
In addition to the theoretical advantages of dual averaging methods, we have
also observed that they enjoy a strong practical advantage in the form of
better generalization performance. Dual averaging based methods include a form
of implicit regularization, which we believe is a crucial factor contributing
to their good generalization performance. To see this, consider the classical
dual averaging update:
$x_{k+1}=x_{0}-\frac{1}{\sqrt{k+1}}\sum_{i=0}^{k}g_{i},$
This update can be written in a form closer to the SGD update by substituting
for $x_{0}$:
$\displaystyle x_{k+1}$
$\displaystyle=\left(x_{k}+\frac{1}{\sqrt{k}}\sum_{i=0}^{k-1}g_{i}\right)-\frac{1}{\sqrt{k+1}}\sum_{i=0}^{k}g_{i},$
$\displaystyle=x_{k}-\frac{1}{\sqrt{k+1}}\left[g_{k}-\left(\frac{\sqrt{k+1}}{\sqrt{k}}-1\right)\sum_{i=0}^{k-1}g_{i}\right],$
$\displaystyle=x_{k}-\frac{1}{\sqrt{k+1}}\left[g_{k}+\left(\sqrt{k+1}-\sqrt{k}\right)\left(x_{k}-x_{0}\right)\right].$
Since $\sqrt{k+1}-\sqrt{k}\approx 1/(2\sqrt{k+1})$, the behavior of dual
averaging resembles an SGD step with a step-dependent regularizer:
$\frac{1}{4\sqrt{k}}\left\|x_{k}-x_{0}\right\|^{2},$
which decays in strength during the course of optimization. We speculate that
the indirect decaying regularization inherent in dual averaging methods may
explain why MADGRAD also requires less decay than other methods to match their
performance. The strong initial regularization may have a positive effect
during early iterations, while not negatively affecting the ability of the
model to fit to the data during the later "fine-tuning" epochs. Given the
practical advantages we observe in our experiments, we believe further
research into the effect of using stronger regularization at the early stages
of optimization may be interesting more generally.
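The rewriting above is an exact algebraic identity, which can be checked numerically; the sketch below verifies that the direct DA iterate agrees with the SGD-plus-shrinkage form, using random gradients as an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.standard_normal(3)
grads = [rng.standard_normal(3) for _ in range(8)]

def x_da(k):
    # Direct DA iterate: x_{k+1} = x0 - (1/sqrt(k+1)) * sum_{i<=k} g_i
    return x0 - sum(grads[: k + 1]) / np.sqrt(k + 1)

k = 5
x_next = x_da(k)       # x_{k+1} computed directly
x_k = x_da(k - 1)      # x_k from the same recursion
# Rewritten form: an SGD step plus a shrinkage of x_k toward x0
rewritten = x_k - (grads[k]
                   + (np.sqrt(k + 1) - np.sqrt(k)) * (x_k - x0)) / np.sqrt(k + 1)
```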
### 5.3 $\lambda$ sequences for deep learning
Figure 1: Comparison of SGD without momentum to DA, DA-AdaGrad, and AdaGrad
on CIFAR-10. Left column is test classification performance, right column is
training loss. The "stage" learning rate scheme involves a 10 fold decrease in
the learning rate at epochs 150 and 225. See Section 3 for a full description
of the experimental setup.
Even with this modification, dual averaging both with and without adaptivity
is not competitive with SGD on standard benchmark problems such as CIFAR10, as
shown in Figure 1. The top row shows AdaGrad and DA methods using a flat
learning rate schedule, and the bottom row shows a stage-wise schedule. SGD is
shown as a baseline. For the DA family methods, $\lambda_{k}$ is decreased for
the stage-wise schedules. Both AdaGrad, DA and AdaGrad-DA under-perform SGD
with either learning rate schedule. Part of this performance gap can be
attributed to the fact that each of these methods either implicitly or
explicitly use a $1/\sqrt{i+1}$ learning rate sequence. This sequence is
actually harmful, as we can confirm by testing SGD using a schedule of the
form:
$\gamma_{i}=\frac{a}{\sqrt{i+b}}.$
Figure 2 illustrates the learning curves achievable for varying $b$ values on
CIFAR-10. Full description of our experimental setup is in Section 3. We
performed a hyper-parameter search over $a$ separately for each $b$, with test
accuracy as the target quantity. All sqrt-decay sequences are significantly
worse than the baseline stage-wise schedule, where the learning rate is
decreased 10 fold at epochs 150 and 225. We speculate that the sqrt-decay
sequences result in convergence that is too rapid, skipping over the initial
annealing stage of learning, resulting in convergence to a poor local minimum.
Figure 2: Sqrt-decay learning rate schedules under-perform stage-wise
schedules. With batch-size 128 on CIFAR-10. No momentum is used in this
comparison. A range of offsets $b$ in the rate $a/\sqrt{i+b}$ were tried with
values up to 10,000 shown. Larger values of $b$ up to 100,000 were also
tested, they also failed to match the performance of the stage-wise schedule.
Left column is test classification performance, right column is training loss.
The AdaGrad and AdaGrad-DA methods also use an implicitly decreasing sequence,
although the rate of decrease depends on the magnitude of the gradients, which
is very problem dependent. If gradients stay of similar magnitude over a
particular time-scale, then the rate of decrease will also be a $1/\sqrt{k}$
rate for step $k$. This step size scheme is also undesirable, as it prevents the use of standard SGD and Adam step size sequences for choosing the explicit step size constants $\lambda_{i}$.
Since in practice the same learning rate scheme is commonly used when
comparing different optimization methods, this schedule contributes to the
commonly held perception that AdaGrad is not as effective as other adaptive
methods such as Adam.
For the DA method, we propose to remedy this issue by introducing a scaling of
the $\lambda$ values to counteract the step size sequence. In particular, we propose the choice:
$\lambda_{i}=\left(i+1\right)^{1/2}\gamma_{i},$
where $\gamma$ is a conventional (SGD/Adam) step size sequence. The advantage
of this choice is that the leading term in the sum in Equation 2 has constant
weight across $k$:
$\displaystyle x_{k+1}$
$\displaystyle=x_{0}-\frac{1}{\sqrt{k+1}}\sum_{i=0}^{k}\lambda_{i}g_{i},$
$\displaystyle=x_{0}-\gamma_{k}g_{k}-\frac{1}{\sqrt{k+1}}\sum_{i=0}^{k-1}\lambda_{i}g_{i},$
mirroring the behavior of SGD during a constant step size phase, but retaining
the $\sqrt{k+1}$ decay of past gradients. This simple change is sufficient to
greatly improve the test-set performance of DA when using the same learning
rate schedule as SGD.
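To see the constant effective step size concretely: the weight that the DA iterate $x_{k+1}$ places on each past gradient $g_i$ is $\lambda_i/\sqrt{k+1}$, and with $\lambda_i=(i+1)^{1/2}\gamma$ the newest gradient always carries weight exactly $\gamma$. A quick numeric check, with an arbitrary illustrative $\gamma$:

```python
import numpy as np

gamma = 0.1  # constant step size phase (arbitrary illustrative value)
k = 9
i = np.arange(k + 1)
# Weight on each g_i inside x_{k+1} = x0 - (1/sqrt(k+1)) * sum_i lambda_i g_i:
w = np.sqrt(i + 1) * gamma / np.sqrt(k + 1)
# The newest gradient (i = k) has weight exactly gamma;
# older gradients decay as sqrt((i+1)/(k+1)).
```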
Another advantage of this sequence is that it will place higher weights on
later gradients in the final convergence rate bound. This makes no difference
if we expect gradients to be of similar magnitude at all stages of
optimization (which can happen for non-smooth problems in the worst case), but
in practice even for non-smooth objectives the gradient typically shrinks to
some degree during optimization, leading to tighter bounds when using a
forward weighted lambda sequence. We discuss this difference further in
Section 1.
### 5.4 Momentum
The use of momentum on top of SGD is known to be highly beneficial, if not
crucial, for deep learning optimization across a wide variety of architectures
and problem settings (Sutskever et al., 2013). Given how crucial it can be to
maintaining competitive performance, we now examine how we can add a form of
momentum to the dual averaging updates, and later to the AdaGrad updates.
We will consider an update of the following form, which was first explored in
this general form by Nesterov and Shikhman (2015) under the name Dual
Averaging with Double Averaging:
$\displaystyle g_{k}$ $\displaystyle=\nabla f\left(x_{k},\xi_{k}\right),$
$\displaystyle s_{k+1}$ $\displaystyle=s_{k}+\lambda_{k}g_{k},$ $\displaystyle
z_{k+1}$ $\displaystyle=\arg\min_{x}\left\\{\left\langle
s_{k+1},x\right\rangle+\beta_{k+1}\psi(x)\right\\},$ (4) $\displaystyle
x_{k+1}$ $\displaystyle=\left(1-c_{k+1}\right)x_{k}+c_{k+1}z_{k+1}.$
The essential idea behind this algorithm is simple. Instead of evaluating the gradient at each step at the value of the argmin operation, as with regular DA, it is evaluated at a moving-average point. This serves to
smooth the iterate sequence. This technique has the advantage in the convex
setting of making it possible to prove convergence properties of the last
iterate $x_{k+1}$ rather than the average iterate
$\bar{x}_{k+1}=\frac{1}{k+1}\sum_{i=0}^{k}x_{i}$. Essentially the averaging
operation is incorporated into the algorithm itself.
Momentum is normally thought of as performing more than just a smoothing of
the iterate sequence, although a line of recent research has shown that inline
averaging of the above form is actually exactly equivalent to momentum
(Sebbouh et al., 2020; Defazio, 2020). This is clearly illustrated when
momentum is added on top of SGD, where inline averaging:
$\displaystyle z_{k+1}$ $\displaystyle=z_{k}-\eta_{k}\nabla f(x_{k},\xi_{k}),$
$\displaystyle x_{k+1}$
$\displaystyle=\left(1-c_{k+1}\right)x_{k}+c_{k+1}z_{k+1},$
is actually exactly equivalent to more common equational forms of momentum:
$\displaystyle m_{k+1}$ $\displaystyle=\beta_{k}m_{k}+\nabla
f(x_{k},\xi_{k}),$ $\displaystyle x_{k+1}$
$\displaystyle=x_{k}-\alpha_{k}m_{k+1},$
for appropriate choices of the hyper-parameters. In the convex setting the
advantage of this form arises when $c_{k+1}=\frac{1}{k+1}$, which corresponds
to an equal weighted moving average
$x_{k+1}=\frac{1}{k+1}\sum_{i=0}^{k}z_{i}$. Under this setting convergence of
the last iterate can be shown just as when this kind of averaging is used with
dual averaging (Defazio and Gower, 2020). In the non-convex setting, constant
$c_{k+1}$ values, which correspond to an exponential moving average, appear to
be the best choice (Defazio, 2020).
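This equivalence can be verified directly by feeding both forms the same gradient stream under constant hyper-parameters. The mapping $\beta = 1-c$, $\alpha = c\eta$, with $z_0 = x_0$ and $m_0 = 0$, is our working assumption here, consistent with the cited equivalence results.

```python
import numpy as np

rng = np.random.default_rng(2)
grads = [rng.standard_normal(4) for _ in range(30)]
x0 = rng.standard_normal(4)
c, eta = 0.1, 0.5  # constant averaging weight and step size (illustrative)

# Inline-averaging form
z, x = x0.copy(), x0.copy()
avg_iterates = []
for g in grads:
    z = z - eta * g
    x = (1 - c) * x + c * z
    avg_iterates.append(x.copy())

# Classical momentum form under the mapping beta = 1 - c, alpha = c * eta
beta, alpha = 1 - c, c * eta
m, y = np.zeros(4), x0.copy()
mom_iterates = []
for g in grads:
    m = beta * m + g
    y = y - alpha * m
    mom_iterates.append(y.copy())
```

The two iterate sequences coincide step for step, since $x_{k+1}-x_k = (1-c)(x_k-x_{k-1}) - c\eta\,g_k$ in both parameterizations.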
### 5.5 Adaptivity
Our goal is to combine these ideas together with the adaptivity technique from
the AdaGrad method. The dual averaging form of coordinate-wise AdaGrad has the
following form:
$x_{k+1}=x_{0}-\frac{1}{\sqrt{\sum_{i=0}^{k}\gamma_{i}g_{i}^{2}}}\circ\sum_{i=0}^{k}\gamma_{i}g_{i},$
where $\circ$ represents the element-wise (Hadamard) product, and $\gamma$ is
a fixed step size hyper-parameter. There are many different ways of combining
this kind of coordinate-wise adaptivity with the weighted gradient sequence
$\lambda_{i}=\sqrt{i+1}$ that we have proposed. Due to the flexibility of the
dual averaging framework, it’s possible to prove a convergence rate of some
form for practically any choice of denominator sequence. However, we must take
into consideration that we also want to maintain the magnitude of the
“effective” step size, as discussed in Section 5.3.
We also need to ensure that the weighted denominator includes $\gamma_{i}$, not
just $\sqrt{i+1}$, as this mitigates a problem illustrated for DA in Figure 1:
when $\lambda$ is decreased 10 fold at epoch 150, the method starts to
diverge. At this point, the $\beta$ sequence continues to decrease at a
square-root rate, while the sum-of-gradients starts growing ten times slower.
This results in the method shrinking the iterates towards $x_{0}$ far too strongly.
We review a number of possible alternatives below and discuss their
practicality.
#### 5.5.1 Unweighted denominator
One possibility is to keep the denominator the same but just weight the gradients
in the sum:
$x_{k+1}=x_{0}-\frac{1}{\sqrt{\sum_{i=0}^{k}\gamma_{i}g_{i}^{2}}}\circ\sum_{i=0}^{k}\left(i+1\right)^{1/2}\gamma_{i}g_{i},$
This is appealing as it maintains the constant effective step size property,
however the resulting convergence rate bound derivable from this form depends
on $\sqrt{\sum_{i=0}^{k}\gamma_{i}g_{i}^{2}}$ rather than
$\sqrt{\sum_{i=0}^{k}\left(i+1\right)^{1/2}\gamma_{i}g_{i}^{2}}$, which
defeats the purpose of using a front-weighted gradient sequence.
#### 5.5.2 Weighted denominator
We can weight the gradient sequence in the denominator by $\lambda$ also:
$x_{k+1}=x_{0}-\frac{1}{\sqrt{\sum_{i=0}^{k}\left(i+1\right)^{1/2}\gamma_{i}g_{i}^{2}}}\circ\sum_{i=0}^{k}\left(i+1\right)^{1/2}\gamma_{i}g_{i}.$
This form does not maintain a constant effective step size, which results in
poor empirical performance. We experimented with mitigations such as adding
additional terms to the numerator that would counteract this growth, however
this still resulted in unsatisfactory empirical results.
#### 5.5.3 Weighted numerator
The AdaGrad variant proposed by Zou et al. (2019a) uses a weighting scheme
where the weights $\lambda_{k}$ are included in the numerator as well as the
denominator:
$x_{k+1}=x_{k}-\frac{\gamma_{k}}{\sqrt{k+1}}\frac{\sqrt{\sum_{i=0}^{k}\lambda_{i}}}{\sqrt{\sum_{i=0}^{k}\lambda_{i}g_{i}^{2}}}\circ g_{k}=x_{k}-\frac{\gamma_{k}}{\sqrt{k+1}}\frac{\sqrt{\sum_{i=0}^{k}\left(i+1\right)^{1/2}}}{\sqrt{\sum_{i=0}^{k}\left(i+1\right)^{1/2}g_{i}^{2}}}\circ g_{k}.$
This numerator is proportional to $k^{1/4}$, since $\sqrt{\sum_{i=0}^{k}(i+1)^{1/2}}\propto k^{3/4}$. To adapt this sequence to dual
averaging, we must include a step size parameter in the weights. It’s unclear
exactly how to do this in a way that maintains the effective step size
property, since if $\lambda_{i}\propto\gamma_{i}$ then the step size will
cancel between the numerator and denominator.
#### 5.5.4 MADGRAD’s Cube-root denominator
To maintain the correct effective step size we propose the use of a cube root
instead:
$x_{k+1}=x_{0}-\frac{1}{\sqrt[3]{\sum_{i=0}^{k}\left(i+1\right)^{1/2}\gamma_{i}g_{i}^{2}}}\circ\sum_{i=0}^{k}\left(i+1\right)^{1/2}\gamma_{i}g_{i}.$
(5)
Although this modification appears ad-hoc, the use of a cube root here can
actually be motivated by a similar argument used to motivate the standard
square-root formulation. Duchi et al. (2011) consider the following
minimization problem over a $D$ dimensional vector $s$:
$\min_{s}\sum_{i=0}^{k}\sum_{d=0}^{D}\frac{g_{id}^{2}}{s_{d}},\;\left\langle
1,s\right\rangle\leq c,\;\forall d:\,s_{d}>0,$
which is solved by $s_{d}\propto\sqrt{\sum_{i=0}^{k}g_{id}^{2}}$. The
motivation for this surrogate problem is to minimize weighted square norm of
the gradients in hind-sight. Rather than a linear penalty on the size of $s$,
which when combined with the positivity constraint is just a L1 norm penalty
$\left\|s\right\|_{1}\leq c$, if we instead use a L2 norm penalty:
$\min_{s}\sum_{i=0}^{k}\sum_{d=0}^{D}\frac{g_{id}^{2}}{s_{d}},\;\left\|s\right\|_{2}^{2}\leq
c,\;\forall d:\,s_{d}>0$
then we recover a cube-root solution
$s_{d}\propto\sqrt[3]{\sum_{i=0}^{k}g_{id}^{2}}$. We show this in the
Appendix. The cube root maintains the effective step size, as can be seen by
considering that
$\sum_{i=0}^{k}\left(i+1\right)^{1/2}\propto\left(k+1\right)^{3/2}$, which after
the cube root operation gives the necessary $\sqrt{k+1}$ scaled denominator
required to cancel against $\lambda$’s square-root growth.
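The scaling claim is easy to check numerically. With constant $\gamma$ and unit-magnitude gradients (both assumptions made only for this check), the coefficient that Equation 5 places on the newest gradient, $\lambda_k/\sqrt[3]{\sum_i \lambda_i g_i^2}$, tends to a constant as $k$ grows:

```python
import numpy as np

gamma = 0.1  # constant step size (illustrative)

def effective_step(k):
    """Coefficient on g_k in Equation 5, assuming g_i^2 = 1 for all i."""
    lam = gamma * np.sqrt(np.arange(k + 1) + 1)  # lambda_i = gamma * sqrt(i+1)
    return lam[-1] / np.cbrt(lam.sum())

# sum_i sqrt(i+1) ~ (2/3)(k+1)^{3/2}, so the cube root grows like sqrt(k+1),
# cancelling lambda_k's sqrt(k+1) growth; the coefficient approaches
# gamma^{2/3} * (3/2)^{1/3}, independent of k.
```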
One disadvantage of this weighting is that it results in a final convergence
rate bound that is not fully adaptive in the sense that the choices of global
step size will depend on an expression involving the gradient norms. We don't
believe this is a significant problem: even when using a fully adaptive
sequence, the choice of step size still depends on other unknown quantities,
such as the function sub-optimality gap and the gradient bound $G$.
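As a quick sanity check of the cube-root solution of the L2-penalized surrogate problem, one can compare the candidate $s_{d}\propto\sqrt[3]{\sum_{i}g_{id}^{2}}$, rescaled onto the constraint boundary, against random feasible alternatives (a numeric spot-check under assumed data, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
g2 = rng.uniform(0.1, 4.0, size=(50, 8)).sum(axis=0)  # per-coordinate sum_i g_id^2
c = 10.0  # L2 budget ||s||_2^2 <= c

def objective(s):
    # Surrogate objective: sum_d (sum_i g_id^2) / s_d
    return np.sum(g2 / s)

def project(s):
    # Rescale a positive vector onto the boundary ||s||_2^2 = c
    # (the objective is decreasing in each s_d, so the optimum lies there).
    return s * np.sqrt(c) / np.linalg.norm(s)

candidate = project(np.cbrt(g2))  # claimed minimizer shape: s_d ∝ cbrt(sum_i g_id^2)
best_random = min(objective(project(rng.uniform(0.1, 1.0, size=8)))
                  for _ in range(1000))
```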
## 6 Convergence Theory
1:$\gamma_{k}$ stepsize sequence, $c_{k}$ momentum sequence, initial point
$x_{0}$, epsilon $\epsilon$
2:$s_{0}=0$, $\nu_{0}=0$
3:for $k=0,\dots,T$ do
4: Sample $\xi_{k}$ and set $g_{k}=\nabla f(x_{k},\xi_{k})$
5: $\lambda_{k}=\gamma_{k}\sqrt{k+1}$
6: $s_{k+1}=s_{k}+\lambda_{k}g_{k}$
7: $\nu_{k+1}=\nu_{k}+\lambda_{k}\left(g_{k}\circ g_{k}\right)$
8: $z_{k+1}=x_{0}-\frac{1}{\sqrt[3]{\nu_{k+1}}+\epsilon}\circ s_{k+1}$
9: $x_{k+1}=\left(1-c_{k+1}\right)x_{k}+c_{k+1}z_{k+1}.$
10:end for
11:return $x_{T}$
Algorithm 1 MADGRAD
The MADGRAD algorithm, combining the discussed ideas, is listed in Algorithm
1. In order to establish convergence results for potentially non-smooth
functions, we rely on a bounded gradient assumption:
$\left\|\nabla f(x,\xi)\right\|_{\infty}\leq G\;\text{for all }x,\xi.$
We also assume each $f(\cdot,\cdot)$ is proper and convex in $x$ over all
$\mathbb{R}^{D}$. Our analysis uses a slight variant of Algorithm 1, where the
denominator includes an extra term $\lambda_{k+1}G^{2}$:
$z_{k+1}=x_{0}-\frac{1}{\sqrt[3]{\lambda_{k+1}G^{2}+\nu_{k+1}}}\circ s_{k+1}.$
(6)
A similar term is also needed by the original DA-AdaGrad method in Duchi et
al. (2011), and appears necessary for bounding the accumulated error. We don’t
believe this term plays an important role in practice as its magnitude quickly
diminishes, and so we have not included this term in Algorithm 1. A per-
coordinate upper bound $G_{d}$ may be used instead of $G$ to further tighten
the theory.
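A minimal NumPy sketch of Algorithm 1, under simplifying assumptions of our own (deterministic gradients, fixed $\gamma$, flat momentum $c$), applied to a toy quadratic:

```python
import numpy as np

def madgrad_sketch(grad_fn, x0, gamma, c=0.1, eps=1e-6, steps=2000):
    """Sketch of Algorithm 1 with deterministic gradients and flat momentum c."""
    x = x0.copy()
    s = np.zeros_like(x0)   # weighted gradient sum s_k
    nu = np.zeros_like(x0)  # weighted squared-gradient sum nu_k
    for k in range(steps):
        g = grad_fn(x)
        lam = gamma * np.sqrt(k + 1.0)  # lambda_k = gamma_k * sqrt(k+1)
        s += lam * g
        nu += lam * g * g
        z = x0 - s / (np.cbrt(nu) + eps)
        x = (1.0 - c) * x + c * z
    return x

# Toy quadratic f(x) = 0.5 ||x - 1||^2, with gradient x - 1 and minimizer at 1.
x_final = madgrad_sketch(lambda x: x - 1.0, np.zeros(4), gamma=0.5)
```

The iterates oscillate around the minimizer with decaying amplitude, consistent with the convex convergence guarantee below.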
###### Theorem 1.
After $k$ steps of MADGRAD using the update in Equation 6,
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{6}{k^{1/2}}\left\|x_{0}-x_{*}\right\|_{2}GD^{1/2},$
if $c_{k}=\frac{3/2}{k+3/2}$ and
$\gamma=\frac{1}{k^{3/4}D^{3/4}G^{1/2}}\left\|x_{0}-x_{*}\right\|_{2}^{3/2}.$
This bound is very loose. It results from applying $\left|\nabla
f(x,\xi)_{d}\right|\leq G$ to bound each coordinate of the gradient at each
time-step separately, which does not capture any of the adaptivity of the
convergence
rate. We discuss more precise bounds below. Note that
$\left\|g\right\|_{2}\leq D^{1/2}\left\|g\right\|_{\infty}=GD^{1/2}$, so the
dependence on dimensionality here is comparable to bounds established for non-
adaptive stochastic methods which have bounds on the 2-norm of the gradient on
the right instead. Note also that we recommend using a flat $c_{k}=c$ momentum
for non-convex problems; the decaying rate above is only optimal in the convex
case. A value of $c=0.1$ corresponds to the $\beta=0.9$ momentum commonly used
with SGD and Adam.
### 6.1 Adaptivity
To understand the adaptivity of the method at a more granular level, we can
express the convergence rate as:
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{3}{\gamma}\frac{1}{\left(k+1\right)^{3/2}}\sum_{d=0}^{D}\left(\mathbb{E}\left[\lambda_{k}\left(\sum_{i=0}^{k}\lambda_{i}g_{id}^{2}\right)^{2/3}\right]\right)$
$\displaystyle+\frac{3}{\gamma}\frac{1}{\left(k+1\right)^{3/2}}\sum_{d=0}^{D}\left(x_{0d}-x_{*d}\right)^{2}\mathbb{E}\left(\lambda_{k+1}G^{2}+\sum_{i=0}^{k}\lambda_{i}g_{id}^{2}\right)^{1/3}$
The convergence rate depends heavily on the weighted sum:
$\sum_{d=0}^{D}\sum_{i=0}^{k}\lambda_{i}g_{id}^{2}=\gamma\sum_{d=0}^{D}\sum_{i=0}^{k}\left(i+1\right)^{1/2}g_{id}^{2},$
rather than an unweighted sum $\sum_{d=0}^{D}\sum_{i=0}^{k}g_{id}^{2}$ used in
AdaGrad. This is key to understanding the performance characteristics of
MADGRAD over traditional AdaGrad. In particular, large gradients at the early
stages have a smaller effect on the overall bound than they do for AdaGrad.
This can be quantified by considering the behavior when the gradient norm
bound is time dependent, i.e. $\left\|\nabla f(x_{i},\xi)\right\|_{\infty}\leq
G_{i}$. Then as we show in the appendix, for MADGRAD, when using optimal step-
sizes:
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{6}{\left(k+1\right)^{5/4}}\left\|x_{0}-x_{*}\right\|_{2}D^{1/2}\left(\sum_{i=0}^{k}\left(i+1\right)^{1/2}G_{i}^{2}\right)^{1/2},$
whereas for AdaGrad with the use of momentum:
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{6}{\left(k+1\right)}\left\|x_{0}-x_{*}\right\|_{2}D^{1/2}\left(\sum_{i=0}^{k}G_{i}^{2}\right)^{1/2}.$
In MADGRAD the effect of an “outlier” $G_{i}$ that is particularly large at
time-step $i$ decays at a faster rate, with a power $5/4$ compared to linearly
for AdaGrad. Using $\lambda_{i}$ with a larger power than $1/2$ is also possible
within our momentumized dual averaged gradient framework, which would result
in a faster decay. We have found that the $1/2$ power is a sweet spot, as
larger values result in empirically slower convergence. Similar convergence
rate bounds can be derived using the same proof technique, although they
carry progressively larger constants (growing factorially in the power)
as the power used is increased. In general, the advantage of MADGRAD over
AdaGrad manifests in the common situation where the gradients are largest at
the early stages of optimization.
### 6.2 Comparison to Adam
Although Adam is known to potentially diverge, we can consider the theoretical
properties of the AMSGrad variant of Adam, which is perhaps the smallest
modification to Adam that results in provable convergence. For AMSGrad,
parameterized by momentum $\beta_{1}\lambda^{i-1}$ at step $i$, assuming a
bounded domain with $R=\max_{x,y}\left\|x-y\right\|_{\infty}^{2}$, defining
$\gamma=\beta_{1}/\sqrt{\beta_{2}}$, and using step size
$\alpha_{i}=\alpha/\sqrt{i}$ (Reddi et al., 2018):
$\displaystyle\mathbb{E}\sum_{i=1}^{k}f(x_{i})-f(x_{*})$
$\displaystyle\leq\frac{\beta_{1}RG}{\left(1-\beta_{1}\right)^{2}\left(1-\lambda\right)^{2}}+\frac{R\sqrt{T}}{\alpha\left(1-\beta_{1}\right)}\sum_{d=1}^{D}\left(\hat{v}_{k,d}\right)^{1/2}$
$\displaystyle+\frac{\alpha\sqrt{1+\log
k}}{\left(1-\beta_{1}\right)^{2}\left(1-\gamma\right)\sqrt{1-\beta_{2}}}\sum_{d=1}^{D}\left(\sum_{i=1}^{k}g_{id}^{2}\right)^{1/2}$
Here $\hat{v}$ is the maximum of the exponential moving average of the squared
gradients; see Reddi et al. (2018) for further details. This result has a
number of shortcomings compared to MADGRAD. Firstly, note that the
momentum term $1-\beta_{1}$, comparable to $c$ in MADGRAD, divides each term in
the bound. This means that momentum hurts rather than improves performance.
The dependence on a bounded domain is also an undesirable property compared to
MADGRAD, and the convergence theory of MADGRAD avoids log factors.
## 7 Experimental Results
Figure 3: Experimental results for the CIFAR-10, ImageNet and fastMRI Knee
problems. Left column shows test set performance and the right column shows
training set performance.
In our experiments we compared MADGRAD against SGD, Adam and AdaGrad. SGD is
known to perform well on computer vision classification problems due to its
ability to produce solutions that generalize better than adaptive methods. In
contrast, Adam is the method of choice in other domains with structured output
where overfitting is less of an issue. We present results across a large
number of problems across both categories to validate the general purpose
utility of the MADGRAD approach.
In our experiments we use the most common step-size reduction scheme used in
the literature for each respective problem. For all algorithms, we performed a
learning rate and decay sweep over the grid $\{1\times 10^{i},2.5\times
10^{i},5\times 10^{i}\}$ for a range of $i$ large enough to
ensure the best parameters for each problem and method were considered. We
present the results from the best learning rate and decay for each method when
considering test set performance. For other hyper-parameters, we used commonly
accepted defaults for each respective problem. Full parameter settings used
for each method are listed in the appendix. All presented results are averaged
over a number of seeds with error bars indicating 2 standard errors. Ten seeds
were used for CIFAR-10 and IWSLT14, whereas only five seeds were used for the
remaining larger scale problems.
### 7.1 CIFAR10 image classification
CIFAR10 (Krizhevsky, 2009) is an established baseline within the deep
learning community due to its manageable size and representative nature
within the class of data-limited supervised image classification problems. It
is particularly notable for showing clear differences between adaptive and
non-adaptive methods, as the former tend to overfit considerably on this
problem. Following standard practice, we apply a data-augmentation step
consisting of random horizontal flipping, 4px padding followed by random
cropping to 32px at training time only. We used a high-performance pre-
activation ResNet architecture (He et al., 2016b) which is known to work well
on this problem, consisting of 58,144,842 parameters across 152 layers. The
depth of this network is representative of the typical point of diminishing
returns for network depth on computer vision problems. As this network is
greatly over-parameterized, each method can be expected to fit the training
data exactly, achieving near zero loss, even with this data augmentation. For
this reason, this task is particularly sensitive to differences in the
generalization performance of each method.
As illustrated in Figure 3, both Adam and AdaGrad perform poorly on this
problem in terms of test accuracy. The under-performance of Adam on this
problem is well known (Wilson et al., 2017), and is typically attributed to
convergence to poor local minima, as the training set convergence is very
rapid initially.
MADGRAD exhibits excellent test accuracy results on this problem, achieving
the highest test accuracy among the methods considered. This demonstrates that
unlike Adam and AdaGrad, MADGRAD’s adaptivity does not come at the cost of
inferior generalization performance.
### 7.2 ILSVRC 2012 ImageNet image classification
The ImageNet problem (Krizhevsky et al., 2012) is a larger problem more
representative of image classification problems encountered in industrial
applications where a large number of classes and higher resolution input
images are encountered. Like CIFAR10, overfitting can be an issue on this
problem for adaptive methods. We ran experiments using the ResNet-50
architecture, which is considered the standard baseline for this problem. This
combination of data set and architecture is one of the most studied in all of
machine learning, which makes it an ideal testing ground for optimization
algorithms.
Our setup used data preprocessing consisting of a mean [0.485, 0.456, 0.406]
and std [0.229, 0.224, 0.225] normalization of the three respective color
channels, followed by a RandomResizedCrop PyTorch operation to reduce the
resolution to 224 pixels followed by a random 50% chance of horizontal
flipping. For test set evaluation a resize to 256 pixels followed by a center
crop to 224 pixels is used instead. This setup was used as it is standard
within the PyTorch community; however, it differs from the setup in He et al.
(2016a), meaning that test accuracy is close but not directly comparable.
On this problem both Adam and AdaGrad show similar convergence properties as
were seen on the CIFAR-10 problem. They both greatly under-perform SGD with
momentum. MADGRAD shows strong performance here as well, achieving higher test
accuracy than any other method for the majority of the training time, and
yielding the best final test accuracy. The accuracy of MADGRAD at epoch 70 is
75.87, a level only reached by SGD+M after the learning rate reduction at
epoch 90, more than 28% longer. MADGRAD also performs the best on training
loss on this problem.
### 7.3 fastMRI challenge MRI reconstruction
The fastMRI Knee challenge (Zbontar et al., 2018) is a recently proposed
large-scale image-to-image problem. Unlike the previously explored
classification problems, the scale of this problem makes overfitting a
non-concern for the largest models currently trainable on contemporary
hardware, so adaptive methods are not at a disadvantage here. This problem is
also particularly notable for being poorly
conditioned among image processing problems. Part of the reason for the poor
conditioning is the high depth of current SOTA models, such as the VarNet 2.0
model (Sriram et al., 2020) that we used. This model has 12,931,532 parameters
over 273 layers. Our implementation uses 16 auto-calibration lines and an
offset equispaced sampling pattern (Defazio, 2019), which is much closer to a
realistic clinical configuration than the challenge’s random sampling mask.
Figure 3 shows a number of interesting properties of the methods. SGD+M
exhibits extremely variable performance on this problem, and under-performs
other methods by a large margin. AdaGrad also has a clear performance gap
compared to the top performing methods, MADGRAD and Adam. MADGRAD is the best
performer, with a small but statistically significant improvement over Adam,
which is the standard method for this problem. Training set performance shows
a much higher degree of variability, making comparisons difficult; however,
MADGRAD appears to be the best performing method on training loss as well.
### 7.4 Machine translation with a recurrent neural network
Figure 4: Experimental results for the IWSLT14 and BookWiki problems. Left
column shows test set performance and the right column shows training set
performance.
For a machine translation baseline we trained our model on the IWSLT14
German-to-English dataset (Cettolo et al., 2014), using a popular LSTM
variant introduced by Wiseman and Rush (2016).
Figure 4 shows that all of the adaptive methods out-perform SGD on this
problem by a significant margin. The results are close but MADGRAD has a small
performance lead, yielding 4.33 test loss compared to 4.38 for AdaGrad and
4.35 for Adam. In training loss, AdaGrad’s lead over the other methods can be
attributed to a slight degree of overfitting; there is a slight increase in
test loss near the end of optimization for AdaGrad which indicates this.
### 7.5 Masked language modeling with a Transformer
Bidirectional training objectives, as used in the BERT approach (Devlin et
al., 2019), have quickly established themselves as the new standard for large-
scale pre-training of natural language models. We performed our experiments
using the RoBERTa variant of BERT_BASE (Liu et al., 2019), a 110M parameter
transformer model. This model is large enough to provide a realistic
optimization test-bed for large-scale Transformer models while still being
trainable in time comparable to a ResNet-50 model on ImageNet.
Similar to the LSTM problem, SGD+M performs poorly here. It exhibits some
spikes where the training loss rapidly degrades then quickly recovers. Both
Adam and MADGRAD perform well; however, MADGRAD is significantly faster
initially,
and also achieves a better final test loss of 2.07 compared to 2.09 achieved
by Adam.
## 8 Discussion
### 8.1 Hyper-parameter settings
We have made the following observations during our experimentation:
* •
Typically, using the default weight decay from previous SGD/Adam training runs
will result in poor generalization performance. Weight decay will need to be
much smaller, potentially even 0, for good performance. We recommend reducing
the weight decay before any learning rate tuning.
* •
Learning rate values are not directly comparable to SGD/Adam, a full learning
rate sweep is necessary to find the optimal value. In the appendix we list the
best LR values for each of our test problems, which should form a good
starting point. Sweeping across a power-of-2 grid is recommended, as the
optimal value may be several orders of magnitude different from SGD/Adam.
* •
Momentum values used for SGD/Adam should work without issue, by setting
$c=1-\beta$ for momentum $\beta$.
### 8.2 Empirical results in deep learning
We believe our experimental validation is one of the most comprehensive
performed for any newly proposed deep learning optimization method. More than
20,000 hours of GPU time were needed to perform the grid search and final
evaluation mentioned above, as we performed the search for each of the methods
considered, rather than just the MADGRAD method. This prevents our method
from looking better due to more thorough hyper-parameter optimization rather
than an actual performance advantage. Our comparison also includes a number of
large and realistic problems, which are more representative of modern deep
learning than small-scale problems.
Finally, our final results are averaged over a sufficiently large number of
seeds for each problem to ensure that run-to-run variation is not mistaken for
actual performance differences. This is particularly a problem with CIFAR-10,
yet many published results still use only a single seed for comparisons on
that problem. For these reasons, we believe our experimental results for
MADGRAD are representative of the performance of the method across modern
large-scale empirical risk minimization problems.
### 8.3 Sparsity
The reliance on a slowly updating moving average for the squared gradient
within the Adam method greatly hinders its application to sparse models. In
contrast, MADGRAD maintains a simple sum of the squared gradient entries which
may be updated in a sparse fashion. One potential problem in the sparse case
is that the buffer of iterates (rather than gradients) is maintained with a
moving average. To support sparse applications, the iterate buffer may be
removed, effectively equivalent to setting $c=1$.
## 9 Conclusion
We have proposed the MADGRAD (Momentumized, Adaptive, Dual averaged GRADient)
method as a general purpose optimizer for deep learning. Given MADGRAD’s
state-of-the-art empirical performance, together with its strong theoretical
foundations, it is an excellent first choice of optimizer across many sub-
fields of machine learning.
## References
* Balles and Hennig [2018] Lukas Balles and Philipp Hennig. Dissecting adam: The sign, magnitude and variance of stochastic gradients. In _Proceedings of the 35th International Conference on Machine Learning (ICML 2018)_, 2018.
* Bernstein et al. [2018] Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar. signSGD: Compressed optimisation for non-convex problems. In _Proceedings of the 35th International Conference on Machine Learning_ , 2018.
* Brown et al. [2020] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. _Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020)_ , 2020.
* Cettolo et al. [2014] Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th IWSLT evaluation campaign, IWSLT 2014. 2014.
* Choi et al. [2020] Dami Choi, Christopher J. Shallue, Zachary Nado, Jaehoon Lee, Chris J. Maddison, and George E. Dahl. On empirical comparisons of optimizers for deep learning, 2020.
* Defazio [2019] Aaron Defazio. Offset sampling improves deep learning based accelerated mri reconstructions by exploiting symmetry. _arXiv preprint arXiv:1912.01101_ , 2019.
* Defazio [2020] Aaron Defazio. Understanding the role of momentum in non-convex optimization: Practical insights from a lyapunov analysis. _arXiv preprint arXiv:2010.00406_ , 2020.
* Defazio and Gower [2020] Aaron Defazio and Robert M. Gower. The power of factorial powers: New parameter settings for (stochastic) optimization, 2020.
* Défossez et al. [2020] Alexandre Défossez, Léon Bottou, Francis Bach, and Nicolas Usunier. A simple convergence proof of adam and adagrad, 2020.
* Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics_, 2019.
* Duchi et al. [2011] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. _Journal of machine learning research_ , 12(7), 2011.
* He et al. [2016a] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016a.
* He et al. [2016b] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In _Computer Vision – ECCV 2016_ , 2016b.
* Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Krizhevsky [2009] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
* Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In _Advances in neural information processing systems_ , pages 1097–1105, 2012.
* Levy et al. [2018] Kfir Y. Levy, Alp Yurtsever, and Volkan Cevher. Online adaptive methods, universality and acceleration. In _Advances in Neural Information Processing Systems_ , 2018.
* Li and Orabona [2019] Xiaoyu Li and Francesco Orabona. On the convergence of stochastic gradient descent with adaptive stepsizes. In Kamalika Chaudhuri and Masashi Sugiyama, editors, _Proceedings of Machine Learning Research_ , volume 89 of _Proceedings of Machine Learning Research_ , pages 983–992. PMLR, 16–18 Apr 2019. URL http://proceedings.mlr.press/v89/li19c.html.
* Liu et al. [2019] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_ , 2019.
* Nesterov and Shikhman [2015] Yu Nesterov and Vladimir Shikhman. Quasi-monotone subgradient methods for nonsmooth convex minimization. _Journal of Optimization Theory and Applications_ , 165(3):917–940, 2015.
* Nesterov [2009] Yurii Nesterov. Primal-dual subgradient methods for convex problems. _Mathematical programming_, 120(1):221–259, 2009.
* Rajbhandari et al. [2019] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. ArXiv, 2019.
* Reddi et al. [2018] Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In _International Conference on Learning Representations_ , 2018.
* Schmidt et al. [2020] Robin M. Schmidt, Frank Schneider, and Philipp Hennig. Descending through a crowded valley – benchmarking deep learning optimizers, 2020.
* Sebbouh et al. [2020] Othmane Sebbouh, Robert M Gower, and Aaron Defazio. On the convergence of the stochastic heavy ball method. _arXiv preprint arXiv:2006.07867_ , 2020.
* Sriram et al. [2020] Anuroop Sriram, Jure Zbontar, Tullie Murrell, Aaron Defazio, C Lawrence Zitnick, Nafissa Yakubova, Florian Knoll, and Patricia Johnson. End-to-end variational networks for accelerated mri reconstruction. _arXiv preprint arXiv:2004.06688_ , 2020.
* Sutskever et al. [2013] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In _International conference on machine learning_ , pages 1139–1147, 2013.
* Tieleman and Hinton [2012] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 - rmsprop, coursera: Neural networks for machine learning. technical report, 2012. URL https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf.
* Ward et al. [2019] Rachel Ward, Xiaoxia Wu, and Leon Bottou. AdaGrad stepsizes: Sharp convergence over nonconvex landscapes. In _Proceedings of the 36th International Conference on Machine Learning_ , 2019.
* Wilson et al. [2017] Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. In _Advances in Neural Information Processing Systems_ , 2017.
* Wiseman and Rush [2016] Sam Wiseman and Alexander M. Rush. Sequence-to-sequence learning as beam-search optimization. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, 2016\.
* Zbontar et al. [2018] Jure Zbontar, Florian Knoll, Anuroop Sriram, Matthew J Muckley, Mary Bruno, Aaron Defazio, Marc Parente, Krzysztof J Geras, Joe Katsnelson, Hersh Chandarana, et al. fastMRI: An open dataset and benchmarks for accelerated MRI. _arXiv preprint arXiv:1811.08839_ , 2018.
* Zhou et al. [2020] Dongruo Zhou, Jinghui Chen, Yuan Cao, Yiqi Tang, Ziyan Yang, and Quanquan Gu. On the convergence of adaptive gradient methods for nonconvex optimization, 2020.
* Zou et al. [2019a] Fangyu Zou, Li Shen, Zequn Jie, Ju Sun, and Wei Liu. Weighted adagrad with unified momentum, 2019a.
* Zou et al. [2019b] Fangyu Zou, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu. A sufficient condition for convergences of adam and rmsprop. _CVPR_ , 2019b.
## A Parameter settings
### CIFAR10
Our data augmentation pipeline followed standard practice: random horizontal
flipping, then random cropping to 32x32, then normalization by centering
around (0.5, 0.5, 0.5).
Hyper-parameter | Value
---|---
Architecture | PreAct ResNet152
Epochs | 300
GPUs | 1xV100
Batch Size per GPU | 128
LR schedule | 150-225 tenthing
Seeds | 10
Method | LR | Decay
---|---|---
MADGRAD | 2.5e-4 | 0.0001
AdaGrad | 0.01 | 0.0001
Adam | 0.00025 | 0.0001
SGD | 0.1 | 0.0001
### ImageNet
A standard LR schedule was used, where the learning rate is decreased 10 fold
every 30 epochs. Interestingly, for this problem, a smaller decay constant
improved the performance of MADGRAD, but didn’t yield any improvement to the
other methods considered.
Hyper-parameter | Value
---|---
Architecture | ResNet50
Epochs | 90
GPUs | 8xV100
Batch size per GPU | 32
LR schedule | 30-60-90 tenthing
Seeds | 5
Method | LR | Decay
---|---|---
MADGRAD | 0.001 | 2.5e-5
AdaGrad | 0.01 | 0.0001
Adam | 0.00025 | 0.0001
SGD | 0.1 | 0.0001
### fastMRI
For this task, the best learning rate schedule is a flat schedule, with a
small number of fine-tuning epochs at the end to stabilize. To this end, we
decreased the learning rate 10 fold at epoch 40.
Hyper-parameter | Value
---|---
Architecture | 12 layer VarNet 2
Epochs | 50
GPUs | 8xV100
Batch size per GPU | 1
Acceleration factor | 4
Low frequency lines | 16
Mask type | Offset-1
LR schedule | 40 tenthing
Seeds | 5
Method | LR | Decay
---|---|---
MADGRAD | 0.01 | 0.0
AdaGrad | 0.25 | 0.0
Adam | 0.00025 | 0.0
SGD | 0.01 | 0.0
### IWSLT14
Our implementation used FairSeq defaults except for the parameters listed
below.
Hyper-parameter | Value
---|---
Architecture | lstm_wiseman_iwslt_de_en
Max updates | 60,000
GPUs | 1xV100
Max tokens per batch | 4096
Warmup steps | 4000
Dropout | 0.3
Label smoothing | 0.1
Share decoder/input/output embed | True
Float16 | True
Update Frequency | 1
LR schedule | Inverse square-root
Seeds | 10
Method | LR | Decay
---|---|---
MADGRAD | 0.025 | 5e-6
AdaGrad | 0.25 | 1e-5
Adam | 0.01 | 0.05
SGD | 1.0 | 1e-5
### BookWiki
Our implementation used FairSeq defaults except for the parameters listed
below.
Hyper-parameter | Value
---|---
Architecture | roberta_base
Task | masked_lm
Max updates | 20,000
GPUs | 8xV100
Max tokens per sample | 512
Dropout | 0.1
Attention Dropout | 0.1
Max sentences | 16
Warmup | 10,000
Sample Break Mode | Complete
Share decoder/input/output embed | True
Float16 | True
Update Frequency | 16
LR schedule | Polynomial decay
Seeds | 5
Gradient clipping | 0.5
Method | LR | Decay
---|---|---
MADGRAD | 0.005 | 0.0
AdaGrad | 0.01 | 0.0
Adam | 0.001 | 0.0
SGD | 1.0 | 0.0
## B Theory
### B.1 Theoretical variant
We analyze a variant of the MADGRAD algorithm, using fixed step size $\gamma$,
and $\lambda_{k}=\gamma\sqrt{k+1}$:
$\displaystyle s_{k+1}$ $\displaystyle=s_{k}+\lambda_{k}g_{k},$ $\displaystyle
v_{k+1}$ $\displaystyle=v_{k}+\lambda_{k}g_{k}^{2},$ $\displaystyle z_{k+1}$
$\displaystyle=x_{0}-\frac{1}{\sqrt[3]{\lambda_{k+1}G^{2}+v_{k+1}}}\circ s_{k+1},$
$\displaystyle x_{k+1}$
$\displaystyle=\left(1-c_{k+1}\right)x_{k}+c_{k+1}z_{k+1}.$ (7)
This variant differs from Algorithm 1 only in the addition of
$\lambda_{k+1}G^{2}$ in the denominator, which is necessitated by our analysis
method. Note that the AdaGrad DA formulation originally proposed by Duchi et
al. [2011] also requires this extra term.
### B.2 Support function
We define a matrix analogue of the support function from Nesterov [2009]:
$V_{A_{k}}(-s_{k})=\max_{x}\left\\{-\left\langle
s_{k},x-x_{0}\right\rangle-\frac{1}{2}\left\|x-x_{0}\right\|_{A_{k}}^{2}\right\\}.$
(8)
In this work we only consider diagonal $A_{k}$, represented by a vector
$a_{k}:$
$A_{k}=\text{diag}(a_{k}).$
In this notation, we have $a_{k}=\sqrt[3]{\lambda_{k}G^{2}+v_{k}}$. The
maximizer of expression 8 is (using component-wise division):
$z_{k}=x_{0}-\frac{s_{k}}{a_{k}}.$
Since $v_{k+1}$ is non-decreasing, it’s clear that:
$V_{A_{k+1}}\left(-s_{k}\right)\leq V_{A_{k}}\left(-s_{k}\right).$ (9)
We will also use the following properties, which follow directly by modifying
the argument in Nesterov [2009] to handle scaling matrices instead of
constants:
$\nabla V_{A_{k}}(-s_{k})=z_{k}-x_{0},$ (10) $V_{A_{k}}(s+\delta)\leq
V_{A_{k}}(s)+\left\langle\delta,\nabla
V_{A_{k}}(s)\right\rangle+\frac{1}{2}\left\|\delta\right\|_{A_{k}^{-1}}^{2}.$
(11)
### B.3 Lemmas
###### Lemma 2.
For all natural $k$, assuming $\lambda_{k+1}\geq\lambda_{k}$:
$\sum_{t=0}^{k}\frac{\lambda_{t}^{2}g_{t}^{2}}{\left(\lambda_{t}G^{2}+\sum_{i=0}^{t-1}\lambda_{i}g_{i}^{2}\right)^{1/3}}\leq\frac{3}{2}\lambda_{k}\left(\sum_{i=0}^{k}\lambda_{i}g_{i}^{2}\right)^{2/3}.$
###### Proof.
We prove the result by induction. For the base case $k=0$, canceling the
common factor $\lambda_{0}^{5/3}$ from both sides reduces the claim to (using
$g_{0}^{2}\leq G^{2}$):
$\frac{g_{0}^{2}}{\left(G^{2}\right)^{1/3}}\leq
g_{0}^{2(1-1/3)}=\left(g_{0}^{2}\right)^{2/3}\leq\frac{3}{2}\left(g_{0}^{2}\right)^{2/3}.$
Now assume the lemma holds for $k-1$. Then, using the inductive hypothesis and $\lambda_{k-1}\leq\lambda_{k}$:
$\displaystyle\sum_{t=0}^{k}\frac{\lambda_{t}^{2}g_{t}^{2}}{\left(\lambda_{t}G^{2}+\sum_{i=0}^{t-1}\lambda_{i}g_{i}^{2}\right)^{1/3}}$
$\displaystyle\leq\frac{\lambda_{k}^{2}g_{k}^{2}}{\left(\lambda_{k}G^{2}+\sum_{i=0}^{k-1}\lambda_{i}g_{i}^{2}\right)^{1/3}}+\frac{3}{2}\lambda_{k-1}\left(\sum_{i=0}^{k-1}\lambda_{i}g_{i}^{2}\right)^{2/3},$
$\displaystyle\leq\frac{\lambda_{k}^{2}g_{k}^{2}}{\left(\lambda_{k}G^{2}+\sum_{i=0}^{k-1}\lambda_{i}g_{i}^{2}\right)^{1/3}}+\frac{3}{2}\lambda_{k}\left(\sum_{i=0}^{k-1}\lambda_{i}g_{i}^{2}\right)^{2/3}.$
Define $b_{k}=\sum_{i=0}^{k}\lambda_{i}g_{i}^{2}$ and $a_{k}=g_{k}^{2}$ then
we have:
$\sum_{t=0}^{k}\frac{\lambda_{t}^{2}g_{t}^{2}}{\left(\lambda_{t}G^{2}+\sum_{i=0}^{t-1}\lambda_{i}g_{i}^{2}\right)^{1/3}}\leq\lambda_{k}^{2}a_{k}\left(\lambda_{k}G^{2}+b_{k}-\lambda_{k}a_{k}\right)^{-1/3}+\frac{3}{2}\lambda_{k}\left(b_{k}-\lambda_{k}a_{k}\right)^{2/3}.$
We have two terms on the right to consider. For the first term, note that
since $a_{k}\leq G^{2}$,
$\lambda_{k}^{2}a_{k}\left(\lambda_{k}G^{2}+b_{k}-\lambda_{k}a_{k}\right)^{-1/3}\leq\lambda_{k}^{2}a_{k}\left(b_{k}\right)^{-1/3}.$
For the second term, we can use concavity to get:
$\frac{3}{2}\lambda_{k}\left(b_{k}-\lambda_{k}a_{k}\right)^{2/3}\leq\frac{3}{2}\lambda_{k}\left(b_{k}\right)^{2/3}-\lambda_{k}^{2}a_{k}\left(b_{k}\right)^{-1/3}.$
Combining gives:
$\sum_{t=0}^{k}\frac{\lambda_{t}^{2}g_{t}^{2}}{\left(\lambda_{t}G^{2}+\sum_{i=0}^{t-1}\lambda_{i}g_{i}^{2}\right)^{1/3}}\leq\frac{3}{2}\lambda_{k}\left(b_{k}\right)^{2/3},$
and so the inductive case is proven. ∎
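The inequality in Lemma 2 can also be checked numerically. The sketch below (plain NumPy; the bound $G$ and the sequences are illustrative choices) verifies it for a random nondecreasing $\lambda$ sequence and gradient norms bounded by $G$:

```python
import numpy as np

rng = np.random.default_rng(0)
G = 2.0
K = 40
lam = np.sort(rng.uniform(0.1, 1.0, K + 1))   # nondecreasing lambda_k, as assumed
g = rng.uniform(0.0, G, K + 1)                # gradient norms bounded by G

lemma_holds = True
for k in range(K + 1):
    # left-hand side of Lemma 2 for this k
    lhs = sum(
        lam[t] ** 2 * g[t] ** 2
        / (lam[t] * G ** 2 + sum(lam[i] * g[i] ** 2 for i in range(t))) ** (1 / 3)
        for t in range(k + 1)
    )
    # right-hand side: (3/2) * lambda_k * (sum lambda_i g_i^2)^{2/3}
    rhs = 1.5 * lam[k] * sum(lam[i] * g[i] ** 2 for i in range(k + 1)) ** (2 / 3)
    lemma_holds = lemma_holds and lhs <= rhs + 1e-12
```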
###### Lemma 3.
Let $0<r<1$ and $j\geq 0$, and define
$c_{k}=\frac{r+1}{k+j+r}.$
Then, for all $k\geq 1$, it holds that:
$\frac{1-c_{k}}{c_{k}}(k+j)^{r}\leq\frac{1}{c_{k-1}}(k+j-1)^{r}.$
###### Proof.
We start by simplifying:
$\displaystyle\frac{1-c_{k}}{c_{k}}(k+j)^{r}$
$\displaystyle=\frac{1-\frac{r+1}{k+j+r}}{\frac{r+1}{k+j+r}}(k+j)^{r},$
$\displaystyle=\frac{k+j-1}{r+1}(k+j)^{r},$
$\displaystyle=\frac{k+j+r-1}{r+1}\frac{k+j-1}{k+j+r-1}(k+j)^{r},$
$\displaystyle=\frac{1}{c_{k-1}}\frac{k+j-1}{k+j+r-1}(k+j)^{r}.$
So we need:
$(k+j)^{r}\leq\frac{k+j+r-1}{k+j-1}\left(k+j-1\right)^{r}.$
Recall the concavity upper bound:
$f(x)\leq f(y)+\left\langle\nabla f(y),x-y\right\rangle,$
using $f(x)=x^{r}$, which is concave for $r\in(0,1)$, with
$x=k+j,\,y=k+j-1,$ we have:
$\displaystyle\left(k+j\right)^{r}$
$\displaystyle\leq\left(k+j-1\right)^{r}+r\left(k+j-1\right)^{r-1},$
$\displaystyle=\left(k+j-1\right)^{r}+\frac{r}{k+j-1}\left(k+j-1\right)^{r},$
$\displaystyle=\frac{k+j-1+r}{k+j-1}\left(k+j-1\right)^{r}.$
This establishes the result. ∎
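Lemma 3 can be spot-checked numerically over a grid of $r$, $j$, and $k$ values (the ranges below are illustrative):

```python
def c(k, r, j):
    # c_k = (r + 1) / (k + j + r), as defined in Lemma 3
    return (r + 1) / (k + j + r)

lemma3_holds = all(
    (1 - c(k, r, j)) / c(k, r, j) * (k + j) ** r
    <= (k + j - 1) ** r / c(k - 1, r, j) + 1e-12
    for r in (0.1, 0.25, 0.5, 0.75, 0.9)
    for j in (0, 1, 2, 5, 10)
    for k in range(1, 200)
)
```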
###### Lemma 4.
The dual averaging iterates obey:
$z_{k}=x_{k}-\frac{1-c_{k}}{c_{k}}\left(x_{k-1}-x_{k}\right).$ (12)
###### Proof.
We rearrange the $x$ update $x_{k+1}=\left(1-c_{k+1}\right)x_{k}+c_{k+1}z_{k+1}$, shifted to index $k$:
$x_{k}=\left(1-c_{k}\right)x_{k-1}+c_{k}z_{k},$ $\therefore
c_{k}z_{k}=x_{k}-(1-c_{k})x_{k-1},$ $\therefore
z_{k}=\frac{1}{c_{k}}x_{k}-\frac{1-c_{k}}{c_{k}}x_{k-1}=x_{k}-\frac{1-c_{k}}{c_{k}}\left(x_{k-1}-x_{k}\right).$
∎
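As a sanity check of Eq. (12), the sketch below runs the averaging recursion forward from arbitrary $z_{k}$ points (using the $c_{k}=\frac{3/2}{k+3/2}$ schedule from the convergence analysis, an illustrative choice for which $c_{0}=1$, so $x_{0}=z_{0}$) and recovers each $z_{k}$ from consecutive $x$ iterates:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 25
z = rng.standard_normal((K + 1, 3))           # arbitrary dual-averaging points z_k
c = [1.5 / (k + 1.5) for k in range(K + 1)]   # c_0 = 1, so x_0 = z_0

x = [z[0]]
for k in range(1, K + 1):
    x.append((1 - c[k]) * x[k - 1] + c[k] * z[k])

# Eq. (12): z_k = x_k - (1 - c_k)/c_k * (x_{k-1} - x_k)
max_err = max(
    float(np.abs(x[k] - (1 - c[k]) / c[k] * (x[k - 1] - x[k]) - z[k]).max())
    for k in range(1, K + 1)
)
```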
###### Theorem 5.
Consider the MADGRAD method. We upper bound the quantity
$V_{A_{k+1}}\left(-s_{k+1}\right)$ as follows: For the first step $k=0$:
$V_{A_{1}}\left(-s_{1}\right)\leq\frac{\lambda_{0}^{2}}{2}\left\|\nabla
f\left(x_{0},\xi_{0}\right)\right\|_{A_{0}^{-1}}^{2}.$
For subsequent steps $k\geq 1$:
$\displaystyle V_{A_{k+1}}\left(-s_{k+1}\right)$ $\displaystyle\leq
V_{A_{k}}\left(-s_{k}\right)+\frac{\lambda_{k}^{2}}{2}\left\|\nabla
f\left(x_{k},\xi_{k}\right)\right\|_{A_{k}^{-1}}^{2}+\lambda_{k}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),x_{0}-x_{*}\right\rangle$
$\displaystyle-\frac{1}{c_{k}}\lambda_{k}\left[f(x_{k},\xi_{k})-f(x_{*},\xi_{k})\right]+\frac{1-c_{k}}{c_{k}}\lambda_{k}\left[f(x_{k-1},\xi_{k})-f(x_{*},\xi_{k})\right].$
###### Proof.
Base case:
$\displaystyle V_{A_{1}}\left(-s_{1}\right)$
$\displaystyle\leq-\lambda_{0}\left\langle\nabla
f\left(x_{0},\xi_{0}\right),\nabla
V_{A_{0}}\left(-s_{0}\right)\right\rangle+\frac{\lambda_{0}^{2}}{2}\left\|\nabla
f\left(x_{0},\xi_{0}\right)\right\|_{A_{0}^{-1}}^{2}\quad\text{(Eq.
\ref{eq:v-l-smooth})},$ $\displaystyle=\lambda_{0}\left\langle\nabla
f\left(x_{0},\xi_{0}\right),x_{0}-x_{0}\right\rangle+\frac{\lambda_{0}^{2}}{2}\left\|\nabla
f\left(x_{0},\xi_{0}\right)\right\|_{A_{0}^{-1}}^{2},\quad\text{(Eq.
\ref{eq:v-grad})}$ $\displaystyle=\frac{\lambda_{0}^{2}}{2}\left\|\nabla
f\left(x_{0},\xi_{0}\right)\right\|_{A_{0}^{-1}}^{2}.$ (13)
Inductive case:
$\displaystyle V_{A_{k+1}}\left(-s_{k+1}\right)$ $\displaystyle\leq
V_{A_{k}}\left(-s_{k+1}\right)$ $\displaystyle\leq
V_{A_{k}}\left(-s_{k}\right)-\lambda_{k}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),\nabla
V_{A_{k}}\left(-s_{k}\right)\right\rangle+\frac{\lambda_{k}^{2}}{2}\left\|\nabla
f\left(x_{k},\xi_{k}\right)\right\|_{A_{k}^{-1}}^{2},\quad\text{(Eq.
\ref{eq:v-l-smooth})}$
$\displaystyle=V_{A_{k}}\left(-s_{k}\right)+\lambda_{k}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),x_{0}-z_{k}\right\rangle+\frac{\lambda_{k}^{2}}{2}\left\|\nabla
f\left(x_{k},\xi_{k}\right)\right\|_{A_{k}^{-1}}^{2},\quad\text{(Eq.
\ref{eq:v-grad})}$
$\displaystyle=V_{A_{k}}\left(-s_{k}\right)+\frac{\lambda_{k}^{2}}{2}\left\|\nabla
f\left(x_{k},\xi_{k}\right)\right\|_{A_{k}^{-1}}^{2}$
$\displaystyle+\lambda_{k}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),x_{0}-x_{k}+\left(\frac{1-c_{k}}{c_{k}}\right)\left(x_{k-1}-x_{k}\right)\right\rangle,\quad\text{(Eq.
\ref{eq:x-diff})}$
$\displaystyle=V_{A_{k}}\left(-s_{k}\right)+\frac{\lambda_{k}^{2}}{2}\left\|\nabla
f\left(x_{k},\xi_{k}\right)\right\|_{A_{k}^{-1}}^{2}$
$\displaystyle+\lambda_{k}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),x_{0}-x_{k}\right\rangle+\lambda_{k}\frac{1-c_{k}}{c_{k}}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),x_{k-1}-x_{k}\right\rangle,$
$\displaystyle=V_{A_{k}}\left(-s_{k}\right)+\frac{\lambda_{k}^{2}}{2}\left\|\nabla
f\left(x_{k},\xi_{k}\right)\right\|_{A_{k}^{-1}}^{2}$
$\displaystyle+\lambda_{k}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),x_{0}-x_{*}\right\rangle+\lambda_{k}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),x_{*}-x_{k}\right\rangle$
$\displaystyle+\lambda_{k}\frac{1-c_{k}}{c_{k}}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),x_{k-1}-x_{k}\right\rangle.$
Now we use:
$\left\langle\nabla f\left(x_{k},\xi_{k}\right),x_{*}-x_{k}\right\rangle\leq
f(x_{*},\xi_{k})-f(x_{k},\xi_{k}),$
and:
$\left\langle\nabla f\left(x_{k},\xi_{k}\right),x_{k-1}-x_{k}\right\rangle\leq
f(x_{k-1},\xi_{k})-f(x_{k},\xi_{k}),$
to give:
$\displaystyle V_{A_{k+1}}\left(-s_{k+1}\right)$ $\displaystyle\leq
V_{A_{k}}\left(-s_{k}\right)+\frac{\lambda_{k}^{2}}{2}\left\|\nabla
f\left(x_{k},\xi_{k}\right)\right\|_{A_{k}^{-1}}^{2}$
$\displaystyle+\lambda_{k}\left\langle\nabla
f\left(x_{k},\xi_{k}\right),x_{0}-x_{*}\right\rangle$
$\displaystyle+\lambda_{k}\left[f(x_{*},\xi_{k})-f(x_{k},\xi_{k})\right]+\lambda_{k}\frac{1-c_{k}}{c_{k}}\left[f(x_{k-1},\xi_{k})-f(x_{k},\xi_{k})\right],$
grouping function value terms gives the result. ∎
### B.4 Convergence rate
###### Theorem 6.
After $k$ steps of MADGRAD,
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{6}{k^{1/2}}\left\|x_{0}-x_{*}\right\|GD^{1/2},$
if $c_{k}=\frac{3/2}{k+3/2}$ and
$\gamma=\frac{1}{\left(k+1\right)^{3/4}D^{3/4}G^{1/2}}\left\|x_{0}-x_{*}\right\|^{3/2}.$
###### Proof.
We assume that $\gamma_{k}=\gamma$ is a constant. First note that for our
choice of $\lambda_{k}=\gamma\left(k+1\right)^{1/2}$ and:
$c_{k}=\frac{3/2}{k+3/2},$
applying Lemma 3 gives that:
$\frac{1-c_{k}}{c_{k}}\lambda_{k}\leq\frac{1}{c_{k-1}}\lambda_{k-1}.$
Using this bound we can telescope the bound from Theorem 5 after taking
expectations:
$\displaystyle\frac{1}{c_{k}}\lambda_{k}\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq-\mathbb{E}\left[V_{A_{k+1}}\left(-s_{k+1}\right)\right]+\frac{1}{2}\mathbb{E}\left[\sum_{t=0}^{k}\lambda_{t}^{2}\left\|\nabla
f\left(x_{t},\xi_{t}\right)\right\|_{A_{t}^{-1}}^{2}\right]$
$\displaystyle+\mathbb{E}\left\langle\sum_{i=0}^{k}\lambda_{i}\nabla
f\left(x_{i},\xi_{i}\right),x_{0}-x_{*}\right\rangle.$
Now note that $s_{k+1}=\sum_{i=0}^{k}\lambda_{i}\nabla
f\left(x_{i},\xi_{i}\right)$, so:
$\displaystyle\mathbb{E}\left[V_{A_{k+1}}\left(-s_{k+1}\right)\right]$
$\displaystyle=\mathbb{E}\left[\max_{x}\left\\{\left\langle-
s_{k+1},x-x_{0}\right\rangle-\frac{1}{2}\left\|x-x_{0}\right\|_{A_{k+1}}^{2}\right\\}\right],$
$\displaystyle\geq\mathbb{E}\left[\left\langle-
s_{k+1},x_{*}-x_{0}\right\rangle-\frac{1}{2}\left\|x_{*}-x_{0}\right\|_{A_{k+1}}^{2}\right],$
$\displaystyle=\mathbb{E}\left\langle\sum_{i=0}^{k}\lambda_{i}\nabla
f\left(x_{i},\xi_{i}\right),x_{0}-x_{*}\right\rangle-\frac{1}{2}\mathbb{E}\left\|x_{*}-x_{0}\right\|_{A_{k+1}}^{2}.$
So combining this bound and further using the definition of $c_{k}$ and
$\lambda_{k}$:
$\displaystyle\frac{k+3/2}{3/2}\gamma\left(k+1\right)^{1/2}\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{1}{2}\mathbb{E}\left[\sum_{t=0}^{k}\lambda_{t}^{2}\left\|\nabla
f\left(x_{t},\xi_{t}\right)\right\|_{A_{t}^{-1}}^{2}\right]+\frac{1}{2}\mathbb{E}\left\|x_{*}-x_{0}\right\|_{A_{k+1}}^{2}.$
To simplify further, we work coordinate-wise. Let
$D$ be the number of dimensions in $x$; then we can write the above bound,
using Lemma 2 applied coordinate-wise, as:
$\displaystyle\frac{k+3/2}{3/2}\gamma\left(k+1\right)^{1/2}\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{1}{2}\sum_{d=0}^{D}\left(\mathbb{E}\left[\frac{3}{2}\lambda_{k}\left(\sum_{i=0}^{k}\lambda_{i}g_{id}^{2}\right)^{2/3}\right]\right)$
$\displaystyle+\frac{1}{2}\sum_{d=0}^{D}\left(x_{0d}-x_{*d}\right)^{2}\mathbb{E}\left(\lambda_{k+1}G^{2}+\sum_{i=0}^{k}\lambda_{i}g_{id}^{2}\right)^{1/3}.$
We now apply the bound $g_{id}\leq G$:
$\displaystyle\frac{k+3/2}{3/2}\gamma\left(k+1\right)^{1/2}\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{3}{4}\sum_{d=0}^{D}\left(\lambda_{k}\left(\sum_{i=0}^{k}\lambda_{i}G^{2}\right)^{2/3}\right)$
$\displaystyle+\frac{1}{2}\sum_{d=0}^{D}\left(x_{0d}-x_{*d}\right)^{2}\left(\sum_{i=0}^{k+1}\lambda_{i}G^{2}\right)^{1/3}.$
Since $\lambda_{k}=\gamma\left(k+1\right)^{1/2}$, we can further simplify
using the summation property:
$\sum_{i=0}^{k}\left(i+1\right)^{1/2}\leq\frac{2}{3}\left(k+2\right)^{3/2},$
which we apply at the two locations on the right-hand side to give:
$\displaystyle\frac{k+3/2}{3/2}\gamma\left(k+1\right)^{1/2}\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{1}{2}\gamma^{5/3}\sum_{d=0}^{D}\left(k+1\right)^{1/2}\left(k+2\right)G^{4/3}$
$\displaystyle+\frac{1}{3}\gamma^{1/3}\sum_{d=0}^{D}\left(x_{0d}-x_{*d}\right)^{2}\left(k+3\right)^{1/2}G^{2/3}.$
Note that:
$\displaystyle\frac{\left(k+3\right)^{1/2}}{(k+3/2)(k+1)}$
$\displaystyle\leq\frac{\left(k+3/2\right)^{1/2}+\left(3/2\right)^{1/2}}{(k+3/2)(k+1)}$
$\displaystyle\leq\frac{1}{k+1}+\frac{1}{k+1}$
$\displaystyle=\frac{2}{k+1},$
and likewise:
$\frac{k+2}{k+3/2}\leq 2,$
so after rearranging:
$\displaystyle\frac{2}{3}\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq 2\gamma^{2/3}G^{4/3}D$
$\displaystyle+\gamma^{-2/3}G^{2/3}\frac{2}{k+1}\sum_{d=0}^{D}\left(x_{0d}-x_{*d}\right)^{2},$
$\mathbb{E}\left[f(x_{k})-f(x_{*})\right]\leq
3\gamma^{2/3}G^{4/3}D+\frac{3}{k+1}\gamma^{-2/3}G^{2/3}\left\|x_{0}-x_{*}\right\|^{2}.$
Setting the derivative of the right-hand side with respect to $\gamma$ to zero gives
$0=2\gamma^{-1/3}G^{4/3}D-\frac{2}{k+1}\gamma^{-5/3}G^{2/3}\left\|x_{0}-x_{*}\right\|^{2},$
$\therefore\gamma^{-1}G^{4}D^{3}=\frac{1}{\left(k+1\right)^{3}}\gamma^{-5}G^{2}\left\|x_{0}-x_{*}\right\|^{6},$
$\therefore\gamma^{4}=\frac{1}{\left(k+1\right)^{3}D^{3}G^{2}}\left\|x_{0}-x_{*}\right\|^{6},$
$\therefore\gamma=\frac{1}{\left(k+1\right)^{3/4}D^{3/4}G^{1/2}}\left\|x_{0}-x_{*}\right\|^{3/2}.$
Using this optimal $\gamma$ gives:
$\gamma^{2/3}=\frac{1}{\left(k+1\right)^{1/2}D^{1/2}G^{1/3}}\left\|x_{0}-x_{*}\right\|,$
and so, using $\left(k+1\right)^{-1/2}\leq k^{-1/2}$:
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{6}{\left(k+1\right)^{1/2}}\left\|x_{0}-x_{*}\right\|GD^{1/2}\leq\frac{6}{k^{1/2}}\left\|x_{0}-x_{*}\right\|GD^{1/2}.$
Note that $\left\|g\right\|_{2}\leq
D^{1/2}\left\|g\right\|_{\infty}=D^{1/2}G$, so the dependence on
dimensionality here is comparable to standard stochastic method proofs which
have $\left\|g\right\|_{2}$ on the right instead.
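The iterates analyzed above can be made concrete with a small sketch. This is only a sketch, not the full MADGRAD implementation: the averaging is realized through the $c_{k}$ schedule exactly as in the proofs, but the $\lambda_{k}G^{2}$ term inside $A_{k}$ is replaced by a small constant `eps`, and the quadratic test objective and all constants are illustrative choices.

```python
import numpy as np

def madgrad_sketch(grad, x0, steps, gamma, eps=1e-6):
    """Diagonal dual-averaging loop matching the iterates analyzed above:
    s accumulates lambda-weighted gradients, nu accumulates lambda-weighted
    squared gradients (cube-rooted in the denominator), and x is the
    c_k-weighted average of the dual-averaging points z_k."""
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    s = np.zeros_like(x0)
    nu = np.zeros_like(x0)
    for k in range(steps):
        lam = gamma * np.sqrt(k + 1)          # lambda_k = gamma (k+1)^{1/2}
        g = grad(x)
        s = s + lam * g
        nu = nu + lam * g ** 2
        z = x0 - s / (np.cbrt(nu) + eps)      # dual-averaging point z_{k+1}
        c = 1.5 / ((k + 1) + 1.5)             # c_{k+1} = (3/2)/((k+1) + 3/2)
        x = (1 - c) * x + c * z
    return x

# convex quadratic with minimizer at 3.0
x_final = madgrad_sketch(lambda x: 2.0 * (x - 3.0), [0.0], 5000, gamma=0.1)
```

On this quadratic the iterates drift toward the minimizer, consistent with the $O(1/\sqrt{k})$ rate above.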
### B.5 Time varying case
Consider the situation where the bound on the gradient potentially varies over
time:
$\left\|\nabla f(x_{i},\xi_{i})\right\|_{\infty}\leq G_{i}\;\text{for all }i.$
Then using the same argument as in the previous section we arrive at:
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$ $\displaystyle\leq
3\gamma^{2/3}\frac{1}{\left(k+1\right)}D\left(\sum_{i=0}^{k+1}\left(i+1\right)^{1/2}G_{i}^{2}\right)^{2/3}$
$\displaystyle+3\gamma^{-2/3}\frac{1}{\left(k+1\right)^{3/2}}\left\|x_{0}-x_{*}\right\|_{2}^{2}\left(\sum_{i=0}^{k+1}\left(i+1\right)^{1/2}G_{i}^{2}\right)^{1/3}.$
We may solve for the optimal step size, giving:
$\gamma^{4/3}=\frac{1}{\left(k+1\right)^{1/2}}\frac{\left\|x_{0}-x_{*}\right\|_{2}^{2}\left(\sum_{i=0}^{k+1}\left(i+1\right)^{1/2}G_{i}^{2}\right)^{1/3}}{D\left(\sum_{i=0}^{k+1}\left(i+1\right)^{1/2}G_{i}^{2}\right)^{2/3}},$
$\therefore\gamma^{4/3}=\frac{1}{\left(k+1\right)^{1/2}}\frac{\left\|x_{0}-x_{*}\right\|_{2}^{2}}{D\left(\sum_{i=0}^{k+1}\left(i+1\right)^{1/2}G_{i}^{2}\right)^{1/3}},$
$\therefore\gamma^{2/3}=\frac{1}{\left(k+1\right)^{1/4}}\frac{\left\|x_{0}-x_{*}\right\|_{2}}{D^{1/2}\left(\sum_{i=0}^{k+1}\left(i+1\right)^{1/2}G_{i}^{2}\right)^{1/6}}.$
Then substituting this in gives:
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$ $\displaystyle\leq
6\frac{1}{\left(k+1\right)^{5/4}}D^{1/2}\left\|x_{0}-x_{*}\right\|_{2}\left(\sum_{i=0}^{k+1}\left(i+1\right)^{1/2}G_{i}^{2}\right)^{1/2}.$
When applying $\lambda_{i}=\gamma$, as in AdaGrad, we instead get:
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$ $\displaystyle\leq
3\frac{\gamma^{1/2}}{\left(k+1\right)}D\left(\sum_{i=0}^{k+1}G_{i}^{2}\right)^{1/2}$
$\displaystyle+3\frac{1}{\left(k+1\right)\gamma^{1/2}}\left\|x_{0}-x_{*}\right\|_{2}^{2}\left(\sum_{i=0}^{k+1}G_{i}^{2}\right)^{1/2},$
solving for the optimal step size:
$\frac{\gamma^{1/2}}{\left(k+1\right)}D\left(\sum_{i=0}^{k}G_{i}^{2}\right)^{1/2}=\frac{1}{\left(k+1\right)\gamma^{1/2}}\left\|x_{0}-x_{*}\right\|_{2}^{2}\left(\sum_{i=0}^{k}G_{i}^{2}\right)^{1/2},$
$\therefore\gamma=\frac{\left\|x_{0}-x_{*}\right\|_{2}^{2}}{D}.$
So:
$\displaystyle\mathbb{E}\left[f(x_{k})-f(x_{*})\right]$
$\displaystyle\leq\frac{6}{\left(k+1\right)}\left\|x_{0}-x_{*}\right\|_{2}D^{1/2}\left(\sum_{i=0}^{k}G_{i}^{2}\right)^{1/2}.$
## C Cube root formulation
Consider the minimization problem parameterized by $g:k\times D$ and a single
vector $s:D$, where
$\min_{s}\sum_{i=0}^{k}\sum_{d=0}^{D}\frac{g_{id}^{2}}{s_{d}},\quad\left\|s\right\|_{2}^{2}\leq
c,\quad\forall d:\,s_{d}>0.$
In this section we show that $s_{d}\propto\sqrt[3]{\sum_{i=0}^{k}g_{id}^{2}}$
is a solution. Without loss of generality we disregard the inequality
constraint on $s_{d}$ and consider only positive solutions to the equality
constrained problem. We will apply the method of Lagrange multipliers.
Firstly we form the Lagrangian with multiplier $\mu$:
$L(s,\mu)=\sum_{i=0}^{k}\sum_{d=0}^{D}\frac{g_{id}^{2}}{s_{d}}+\frac{\mu}{2}\left(\sum_{d=0}^{D}s_{d}^{2}-c\right).$
Saddle-points of the Lagrangian can be found by equating the gradients to zero
and solving.
$\frac{\partial
L}{\partial s_{d}}=-\frac{1}{s_{d}^{2}}\sum_{i=0}^{k}g_{id}^{2}+\mu s_{d},$
$\frac{\partial
L}{\partial\mu}=\frac{1}{2}\left(\sum_{d=0}^{D}s_{d}^{2}-c\right).$
From the first equation:
$\frac{1}{s_{d}^{2}}\sum_{i=0}^{k}g_{id}^{2}=\mu s_{d},$ $\therefore
s_{d}^{3}=\frac{1}{\mu}\sum_{i=0}^{k}g_{id}^{2}.$
Therefore $s_{d}=\mu^{-1/3}\left(\sum_{i=0}^{k}g_{id}^{2}\right)^{1/3}$; since
$s_{d}$ is positive, we take the positive root. The Lagrange multiplier $\mu$
is determined by the requirement that
$\sum_{d=0}^{D}s_{d}^{2}=c,$
$\therefore\mu^{2/3}=c^{-1}\sum_{d=0}^{D}\left(\sum_{i=0}^{k}g_{id}^{2}\right)^{2/3},$
$\therefore\mu^{1/3}=\sqrt{c^{-1}\sum_{d=0}^{D}\left(\sum_{i=0}^{k}g_{id}^{2}\right)^{2/3}}.$
So
$s_{d}=\frac{1}{\sqrt{c^{-1}\sum_{d=0}^{D}\left(\sum_{i=0}^{k}g_{id}^{2}\right)^{2/3}}}\left(\sum_{i=0}^{k}g_{id}^{2}\right)^{1/3}.$
We can verify that this is an extreme point of the original problem by noting
that the linear independence constraint qualification (LICQ) condition
trivially holds when using one equality constraint. Since the objective is
convex for $s_{d}>0$, this point must be a minimizer.
# New Findings on GLRT Radar Detection of Nonfluctuating Targets via Phased
Arrays
Fernando Darío Almeida García, Marco Antonio Miguel Miranda and José Cândido
Silveira Santos Filho F. D. A. García and J. C. S. Santos Filho are with the
Wireless Technology Laboratory, Department of Communications, School of
Electrical and Computer Engineering, University of Campinas, 13083-852
Campinas, SP, Brazil, Tel.: +55 (19) 3788-5106, E-mails:
<EMAIL_ADDRESS>M. A. M. Miranda is with
EMBRAER, Campinas, Brazil, Tel.: +55 19 2101-8800, E-mail:
<EMAIL_ADDRESS>This work was supported by Coordenação de
Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Brazil, and by
Secretaría de Educación Superior, Ciencia, Tecnología e Innovación (SENESCYT),
Ecuador.
###### Abstract
This paper addresses the standard generalized likelihood ratio test (GLRT)
detection problem of weak signals in background noise. In so doing, we
consider a nonfluctuating target embedded in complex white Gaussian noise
(CWGN), in which the amplitude of the target echo and the noise power are
assumed to be unknown. Important works have analyzed the performance for this
scenario and proposed GLRT-based detectors. Such detectors are
projected at an early stage (i.e., prior to the formation of a post-
beamforming scalar waveform), thereby imposing high demands on hardware,
processing, and data storage. From a hardware perspective, most radar systems
fail to meet these strong requirements. In fact, due to hardware and
computational constraints, most radars use a combination of analog and digital
beamformers (sums) before any estimation or further pre-processing. The
rationale behind this study is to derive a GLRT detector that meets the
hardware and system requirements. In this work, we design and analyze a more
practical and easy-to-implement GLRT detector, which is projected after the
analog beamforming. The performance of the proposed detector is analyzed and
the probabilities of detection (PD) and false alarm (PFA) are derived in
closed form. An alternative fast-converging series for the PD is also derived.
This series proves to be very efficient and computationally tractable, saving
both computation time and computational load. Moreover, we show that in the
low signal-to-noise ratio (SNR) regime, the post-beamforming GLRT detector
performs better than both the classic pre-beamforming GLRT detector and the
square-law detector. This finding suggests that if the signals are weak,
instead of processing the signals separately, we must first reinforce the
overall signal and then assemble the system’s detection statistic. We also
show that the PFA of the post-beamforming GLRT detector is independent of
the number of antennas. This property allows us to improve the PD (by
increasing the number of antennas) while maintaining a fixed PFA. Finally, the
SNR losses are quantified, evidencing the superiority of the post-beamforming
GLRT detector as the number of antennas and samples increases.
###### Index Terms:
Generalized likelihood ratio test, nonfluctuating targets, complex white
Gaussian noise, phased array radar, probability of detection.
## I Introduction
Before performing any task (i.e., searching, tracking or imaging), the radar
must decide whether the target of interest is present or absent in a certain
range, angle or Doppler bin [1]. Unfortunately, the presence of unwanted
signals such as thermal noise, clutter, and jamming, ubiquitous in practice,
often render this decision very complicated. The optimal decision is achieved
by applying the likelihood ratio test (LRT) [2]. This decision is based on the
Neyman-Pearson (NP) criterion, which maximizes the probability of detection
(PD) for a given probability of false alarm (PFA) [3]. The LRT provides an
optimal decision if the probability density functions (PDFs) of the received
samples are fully known. Of course, this requirement does not fit most
practical problems. In view of this, a more general decision rule arose to
deal with these types of scenarios, the so-called generalized likelihood ratio
test (GLRT) [4]. In the GLRT, all unknown PDF parameters are replaced by their
maximum likelihood estimates (MLEs). This structure allows the GLRT to work
over a wide range of scenarios. Although there is no optimality guarantee
associated with the GLRT, in practice it appears to work quite well.
Important GLRT-based detectors, considering phased array radars,
nonfluctuating targets, and complex white Gaussian noise (CWGN), have been
derived and rigorously analyzed in the literature (cf. [5, 6, 7, 8, 9] for more discussion
on this). These works assumed a partial or a complete lack of knowledge about
the target and noise statistics. More complex detectors that rely on the use
of secondary data can be found in [9, 10, 11, 12, 13, 14, 15]. In these works,
secondary data were assumed to be free of target signal components; that
is, only noise is present. In particular, in [10], the so-called Kelly’s
detector was derived, which considered that the primary and secondary data
vectors share the same unknown noise covariance matrix. In [13], the authors
extended the analysis by considering that the target amplitude follows a
Gaussian distribution.
All referred works formulate the detection problem at an early stage (i.e.,
prior to the formation of a post-beamforming scalar waveform), thereby
imposing high demands on hardware, processing and data storage. In fact, due
to hardware and computational constraints, most radars and mobile applications
use a combination of analog and digital beamformers (sums) before any
estimation or further pre-processing [16, 17, 18, 19]. Furthermore, since the
use of GLRT involves a high degree of mathematical complexity, theoretical
performance analysis can be hampered in most situations. Indeed, this was the
case for the aforementioned studies in which their performance metrics –
probability of detection (PD) and probability of false alarm (PFA) – were
computed through numerical integration, estimated via Monte-Carlo simulations,
expressed in integral form, or required iterative solutions. In this context,
we also dedicate our efforts to easing the computation of the performance
metrics.
Scanning the technical literature, we realize that no study has been devoted
to the development of GLRT radar detectors using a post-beamforming approach.
In this paper, we design and evaluate a new GLRT-based detector which is
projected after the analog beamforming operation. Moreover, we provide the
analytical tools to properly determine the performance of this detector.
Specifically, we derive the PD and PFA in closed form. An alternative
fast-converging series for the PD is also derived. For the analysis, we consider a
nonfluctuating target embedded in CWGN, in which the amplitude of the target
echo and the noise power are assumed to be unknown. The use of secondary data
is not considered. From a mathematical point of view, one could envisage that
our detector will somehow provide poorer performance since we are reducing the
detection problem dimensionality by means of a sum operation (beamformer). In
this paper, we claim that this is not always the case if the signals are weak.
In fact, we show that in the low SNR regime, the post-beamforming GLRT
detector performs better than the classic GLRT detector (called here as pre-
beamforming GLRT detector) [7, Eq. (6.20)] and than the square-law detector
[20, Eq. (15.57)], widely used in non-coherent radars [21, 22, 23]. This
assertion suggests that, instead of processing the signals separately, it is
better to add them up before building the system’s detection statistic.
Other attractive features about our detector will be discussed throughout this
work.
The key contributions of this work may now be summarized as follows:
1. 1.
Firstly, we design and evaluate a new GLRT detector projected after the analog
beamforming operation. From the practical point of view, this detector meets
the hardware and systems requirements of most radar systems.
2. 2.
Secondly, we obtain closed-form expressions for the corresponding PD and PFA.
In particular, the PD is given in terms of the bivariate Fox’s $H$-function,
for which we also provide a portable and efficient MATHEMATICA routine.
3. 3.
Thirdly, we derive an alternative series representation for the PD, obtained
by exploring the orthogonal selection of poles in the Cauchy’s residue
theorem. This series enjoys a low computational burden and can be quickly
executed in any ordinary desktop computer.111Section VI illustrates the
efficiency of this series and compares it with MATHEMATICA’s built-in
numerical integration.
4. 4.
Finally, we provide some insightful and concluding remarks on the GLRT-based
detection for nonfluctuating targets. To do so, we compare the performance of
our derived detector with the pre-beamforming GLRT detector.
The remainder of this paper is organized as follows. Section II describes the
operation mode of our phased array radar. Section III presents the detection
design via the post-beamforming GLRT. Section IV characterizes the detection
statistics and analyzes the corresponding performance metrics. Section V
introduces the multivariate Fox’s $H$-function and derives both a closed-form
solution and a series representation for the PD. Section VI discusses
representative numerical results. Finally, Section VII draws the main
conclusions.
In what follows, $f_{(\cdot)}(\cdot)$ denotes PDF; $\left(\cdot\right)^{T}$,
transposition; $\left|\cdot\right|$, modulus; $\mathbf{Re}\left[\cdot\right]$,
real argument; $\mathbf{Im}\left[\cdot\right]$, imaginary argument;
$\left\|\cdot\right\|$, Euclidean norm; $\mathbb{E}\left[\cdot\right]$,
expectation; $\mathbb{COV}\left[\cdot\right]$, covariance;
$\text{rank}(\cdot)$, rank of a matrix; and $\left(\cdot\right)^{-1}$, matrix
inversion.
## II Receiver’s Front–End: Phased Array
In this work, we consider a linear phased array radar composed of $N$ antennas
equally separated in the azimuth direction, as shown in Fig. 1. The
transmission and reception processes are carried out as follows. A single
antenna transmits a linear frequency-modulated pulse, whereas all antennas
receive the echo signals. Furthermore, an amplification block and a phased
shifter are installed after each antenna element, and all outputs are added
together (i.e., the analog beamforming operation is applied).
Thus, the in-phase and quadrature signals can be written in matrix form,
respectively, as
$\displaystyle\textbf{X}\triangleq$
$\displaystyle\left(\begin{array}[]{cccc}X_{1,1}&X_{2,1}&\cdots&X_{N,1}\\\
X_{1,2}&X_{2,2}&\cdots&X_{N,2}\\\ \vdots&\vdots&\ddots&\vdots\\\
X_{1,M}&X_{2,M}&\cdots&X_{N,M}\\\ \end{array}\right)$ (5)
$\displaystyle\textbf{Y}\triangleq$
$\displaystyle\left(\begin{array}[]{cccc}Y_{1,1}&\ Y_{2,1}&\cdots&\ Y_{N,1}\\\
Y_{1,2}&\ Y_{2,2}&\cdots&\ Y_{N,2}\\\ \vdots&\ \vdots&\ddots&\ \vdots\\\
Y_{1,M}&\ Y_{2,M}&\cdots&\ Y_{N,M}\\\ \end{array}\right),$ (10)
where $X_{n,m}$ and $Y_{n,m}$ represent the in-phase and quadrature received
signals, respectively. In addition, $m\in\left\\{1,2,\ldots,M\right\\}$ is a
discrete-time index, and $n\in\left\\{1,2,\ldots,N\right\\}$ is a spatial
index that denotes the association to the $n$-th antenna.
For simplicity and without loss of generality, we assume a unity gain and a
null phase shift for all antenna elements. In addition, we consider a
collection of $M$ signal samples for each of the $N$ antennas. Then, the
overall received signal can be written, in vector form, as
$\displaystyle\underline{R}=\left[R_{1},R_{2},\cdots,R_{M}\right]^{T},$ (11)
where
$R_{m}=\sum_{n=1}^{N}\left(X_{n,m}+jY_{n,m}\right).$ (12)
Note that $\underline{R}$ is a complex-valued random vector, in which each
component is formed by the sum of the received signals coming from all the
antennas at a certain time.
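A minimal numerical sketch of the beamforming sum in Eq. (12), assuming the sample matrices are stored with time index $m$ along rows and antenna index $n$ along columns, as in (5) and (10) (the dimensions and random samples are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 64, 8                                  # samples per antenna, antennas
X = rng.standard_normal((M, N))               # in-phase samples X_{n,m}
Y = rng.standard_normal((M, N))               # quadrature samples Y_{n,m}

# Eq. (12), with unity gains and zero phase shifts:
# R_m = sum_n (X_{n,m} + j Y_{n,m})
R = (X + 1j * Y).sum(axis=1)                  # complex vector of length M
```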
As will be shown in Section III, adding up the target echoes will
drastically change the hardware design, detection statistic, and performance
of the post-beamforming GLRT detector compared to previous detectors (cf. [7,
9, 10, 12, 13]). Since our detector is projected after the analog beamforming
operation, one could argue that its performance would be somehow suboptimum,
as compared to the pre-beamforming GLRT detector. In this work, we show that
this conclusion does not always hold. Indeed, in some cases the post-beamforming
GLRT detector outperforms the pre-beamforming GLRT detector. This assertion
heavily relies on the SNR of the incoming signals.
## III Detection Design Via Post–Beamforming GLRT
Figure 1: Top view of the phased array radar.
In this section, we present the detection scheme for the post-beamforming GLRT
detector.
Herein, the presence or absence of the target is posed as the following
binary hypothesis test.222A binary hypothesis test refers to the choice that a
radar makes between two hypotheses: signal plus interference or only
interference. This choice is made throughout all resolution cells [24].
### III-A Hypothesis Test
* •
Hypothesis $\mathcal{H}_{0}$: target is absent. In this case, from the radar
model described in the previous section, each $X_{n,m}$ and $Y_{n,m}$ are
formed by mutually independent Gaussian components with zero mean and unknown
variance $\sigma^{2}$. (Due to the presence of CWGN alone.)
* •
Hypothesis $\mathcal{H}_{1}$: target is present. In this case, each $X_{n,m}$
and $Y_{n,m}$ are formed by mutually independent Gaussian components with
unknown non-zero means and unknown variance $\sigma^{2}$. (Due to the
nonfluctuating target and noise.)
According to the stochastic model described in Section II, the PDF of
$\underline{R}$ under $\mathcal{H}_{0}$ is given by
$\displaystyle\mathit{f}_{\underline{R}}\left(\underline{r}|\sigma^{2};\mathcal{H}_{0}\right)=\frac{1}{\left(2\pi\sigma^{2}N\right)^{M}}\exp\left[-\frac{\sum_{m=1}^{M}\left|r_{m}\right|^{2}}{2\sigma^{2}N}\right],$
(13)
whereas the PDF of $\underline{R}$ under $\mathcal{H}_{1}$ is given by (14),
displayed at the top of the next page, where $\mu_{X}=\sum_{n=1}^{N}\mu_{X,n}$
and $\mu_{Y}=\sum_{n=1}^{N}\mu_{Y,n}$ represent the total sum of target echoes
for the in-phase and quadrature components, respectively. Note that after the
analog beamforming operation, we no longer have access to the specific value
of target echo received by a particular antenna, which is what actually occurs
in practice.
$\displaystyle\mathit{f}_{\underline{R}}\left(\underline{r}|\sigma^{2};\mu_{X};\mu_{Y};\mathcal{H}_{1}\right)=\frac{1}{\left(2\pi\sigma^{2}N\right)^{M}}\exp\left[-\frac{\sum_{m=1}^{M}\left\\{\left(\mathbf{Re}\left[r_{m}\right]-\mu_{X}\right)^{2}+\left(\mathbf{Im}\left[r_{m}\right]-\mu_{Y}\right)^{2}\right\\}}{2\sigma^{2}N}\right]$
(14)
### III-B Detection Rule
The system’s detection statistic can be defined through GLRT as [7]
$\frac{f_{\underline{R}}\left(\underline{r}|\hat{\sigma}_{1}^{2};\hat{\mu}_{X};\hat{\mu}_{Y};\mathcal{H}_{1}\right)}{f_{\underline{R}}\left(\underline{r}|\hat{\sigma}_{0}^{2};\mathcal{H}_{0}\right)}\begin{array}[]{c}\mathcal{H}_{1}\\\
\gtrless\\\ \mathcal{H}_{0}\end{array}T,$ (15)
where $T$ is an arbitrary threshold and the ratio on the left-hand side of
(15) is called the generalized likelihood ratio. In addition,
$\hat{\sigma}_{0}^{2}$ is the MLE for $\sigma^{2}$, to be obtained from (13),
and $\hat{\sigma}_{1}^{2}$, $\hat{\mu}_{X}$ and $\hat{\mu}_{Y}$ are the MLEs
for $\sigma^{2}$, $\mu_{X}$ and $\mu_{Y}$, respectively, to be obtained from
(14). Eq.(15) implies that the system will decide for $\mathcal{H}_{1}$
whenever the generalized likelihood ratio exceeds the threshold $T$, and will
decide for $\mathcal{H}_{0}$ otherwise. Since the logarithmic function is a
monotonically increasing function, we can rewrite the GLRT as
$\ln\left[\frac{f_{\underline{R}}\left(\underline{r}|\hat{\sigma}_{1}^{2};\hat{\mu}_{X};\hat{\mu}_{Y};\mathcal{H}_{1}\right)}{f_{\underline{R}}\left(\underline{r}|\hat{\sigma}_{0}^{2};\mathcal{H}_{0}\right)}\right]\begin{array}[]{c}\mathcal{H}_{1}\\\
\gtrless\\\ \mathcal{H}_{0}\end{array}\ln\left[T\right].$ (16)
Note in (13) and (14) that all unknown parameters $\left(\sigma^{2},\mu_{X}\
\text{and}\ \mu_{Y}\right)$ are scalars quantities. Hence, the corresponding
MLEs can be obtained easily. For example, $\hat{\sigma}_{0}^{2}$ can be found
by taking the natural logarithm of (13), and then taking the derivative with
respect to $\sigma^{2}$, i.e.,
$\displaystyle\frac{\partial\ln\left[\mathit{f}_{\underline{R}}\left(\underline{r}|\sigma^{2};\mathcal{H}_{0}\right)\right]}{\partial\sigma^{2}}=-\frac{M}{\sigma^{2}}+\frac{1}{2N\sigma^{4}}\sum_{m=1}^{M}\left|r_{m}\right|^{2}.$
(17)
Then, we set (17) equal to zero and solve the equation for $\sigma^{2}$, which
yields to
$\displaystyle\hat{\sigma_{0}}^{2}=$
$\displaystyle\frac{1}{2MN}\sum_{m=1}^{M}\left|r_{m}\right|^{2}.$ (18)
Using (14) and following the same approach as in (18), the MLEs for $\mu_{X}$
and $\mu_{Y}$ can be calculated, respectively, as
$\displaystyle\hat{\mu}_{X}=$
$\displaystyle\frac{1}{M}\sum_{m=1}^{M}\mathbf{Re}\left[r_{m}\right]$ (19)
$\displaystyle\hat{\mu}_{Y}=$
$\displaystyle\frac{1}{M}\sum_{m=1}^{M}\mathbf{Im}\left[r_{m}\right],$ (20)
whereas the MLE for $\sigma^{2}$ can be computed as follows:
$\displaystyle\hat{\sigma_{1}}^{2}=$
$\displaystyle\frac{1}{2NM}\sum_{m=1}^{M}\left\\{\left(\mathbf{Re}\left[r_{m}\right]-\hat{\mu}_{X}\right)^{2}\right.$
$\displaystyle+\left.\left(\mathbf{Im}\left[r_{m}\right]-\hat{\mu}_{Y}\right)^{2}\right\\}.$
(21)
(For brevity, we have omitted the derivation steps.)
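As a quick numerical sanity check, the closed-form MLEs (18)–(21) can be evaluated directly from a batch of samples. The sketch below (with arbitrary illustrative values for $M$, $N$, and the data) also verifies that (18) indeed maximizes the $\mathcal{H}_{0}$ log-likelihood implied by (17), namely $-M\ln\sigma^{2}-\frac{1}{2N\sigma^{2}}\sum_{m}\left|r_{m}\right|^{2}$ up to an additive constant:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 50, 8                          # number of samples and antennas (illustrative)
r = rng.normal(1.0, 2.0, M) + 1j * rng.normal(-0.5, 2.0, M)  # arbitrary complex data

# MLEs from (18)-(21)
sigma0_sq = np.sum(np.abs(r) ** 2) / (2 * M * N)
mu_X = np.mean(r.real)
mu_Y = np.mean(r.imag)
sigma1_sq = np.sum((r.real - mu_X) ** 2 + (r.imag - mu_Y) ** 2) / (2 * M * N)

# H0 log-likelihood as a function of sigma^2, up to an additive constant (from (17))
def loglik_H0(v):
    return -M * np.log(v) - np.sum(np.abs(r) ** 2) / (2 * N * v)

# (18) should maximize it: perturbing sigma0_sq in either direction lowers the value
for eps in (0.99, 1.01):
    assert loglik_H0(sigma0_sq) > loglik_H0(eps * sigma0_sq)
```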
Substituting (18)–(21) in (16) and after simple simplifications, we have
$\displaystyle
M\ln\left[\left(\frac{\hat{\sigma_{0}}^{2}}{\hat{\sigma_{1}}^{2}}\right)\right]\begin{array}[]{c}\mathcal{H}_{1}\\\
\gtrless\\\ \mathcal{H}_{0}\end{array}\ln\left[T\right].$ (25)
Expanding (21) and performing some minor manipulations, we can
rewrite $\hat{\sigma_{1}}^{2}$ as
$\displaystyle\hat{\sigma_{1}}^{2}$
$\displaystyle=\frac{1}{2MN}\sum_{m=1}^{M}\left\\{\hat{\mu}_{X}^{2}+\hat{\mu}_{Y}^{2}\right\\}$
$\displaystyle+\underbrace{\frac{1}{2MN}\sum_{m=1}^{M}\left\\{\left(\mathbf{Re}\left[r_{m}\right]\right)^{2}+\left(\mathbf{Im}\left[r_{m}\right]\right)^{2}\right\\}}_{\hat{\sigma_{0}}^{2}}$
$\displaystyle-\left(\frac{\hat{\mu}_{X}}{N}\right)\underbrace{\frac{1}{M}\sum_{m=1}^{M}\mathbf{Re}\left[r_{m}\right]}_{\hat{\mu}_{X}}-\left(\frac{\hat{\mu}_{Y}}{N}\right)\underbrace{\frac{1}{M}\sum_{m=1}^{M}\mathbf{Im}\left[r_{m}\right]}_{\hat{\mu}_{Y}}$
$\displaystyle\overset{(a)}{=}\hat{\sigma_{0}}^{2}-\frac{1}{2N}\left(\hat{\mu}_{X}^{2}+\hat{\mu}_{Y}^{2}\right),$
(26)
where in step (a) we have used (18), (19), and (20), along with some
simplifications.
Isolating $\hat{\sigma}_{0}^{2}$ from (26), we obtain
$\displaystyle\hat{\sigma}_{0}^{2}=\hat{\sigma}_{1}^{2}+\frac{1}{2N}\left(\hat{\mu}_{X}^{2}+\hat{\mu}_{Y}^{2}\right).$
(27)
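Note that (27) is a purely algebraic identity between the estimators, so it must hold for any data set whatsoever; a short numerical check (with arbitrary samples) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 40, 6                                   # illustrative sizes
r = rng.normal(size=M) + 1j * rng.normal(size=M)

mu_X, mu_Y = np.mean(r.real), np.mean(r.imag)                    # (19), (20)
sigma0_sq = np.sum(np.abs(r) ** 2) / (2 * M * N)                 # (18)
sigma1_sq = np.sum((r.real - mu_X) ** 2
                   + (r.imag - mu_Y) ** 2) / (2 * M * N)         # (21)

# (27): sigma0^2 = sigma1^2 + (mu_X^2 + mu_Y^2) / (2N), for ANY samples r
assert np.isclose(sigma0_sq, sigma1_sq + (mu_X ** 2 + mu_Y ** 2) / (2 * N))
```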
Replacing (27) in (25) yields
$\displaystyle
M\ln\left[1+\frac{\left(\hat{\mu}_{X}^{2}+\hat{\mu}_{Y}^{2}\right)}{2N\hat{\sigma_{1}}{}^{2}}\right]\begin{array}[]{c}\mathcal{H}_{1}\\\
\gtrless\\\ \mathcal{H}_{0}\end{array}\ln\left[T\right].$ (31)
Now, since $M$ and $N$ are positive numbers, we obtain the same decision as
in (31) by simply comparing
$\left(\hat{\mu}_{X}^{2}+\hat{\mu}_{Y}^{2}\right)/\hat{\sigma}_{1}^{2}$ with a
modified threshold, $\gamma^{\prime}$, that is,
$\frac{\hat{\mu}_{X}^{2}+\hat{\mu}_{Y}^{2}}{\hat{\sigma_{1}}^{2}}\begin{array}[]{c}\mathcal{H}_{1}\\\
\gtrless\\\ \mathcal{H}_{0}\end{array}\gamma^{\prime}.$ (32)
For convenience, and without loss of generality, we define an equivalent
decision rule (the constant $\Psi$ is introduced because it allows us to model
$Z$ as a random variable with known PDF, as will become apparent soon) as
$\displaystyle
Z\triangleq\Psi\left(\frac{\hat{\mu}_{X}^{2}+\hat{\mu}_{Y}^{2}}{\hat{\sigma_{1}}^{2}}\right)\begin{array}[]{c}\mathcal{H}_{1}\\\
\gtrless\\\ \mathcal{H}_{0}\end{array}\gamma,$ (36)
where $Z$ is the system’s detection statistic, $\Psi=(M-1)/2N$ is a positive
constant, and $\gamma$ is a new modified threshold.
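Putting the pieces together, the decision rule (36) amounts to a few array operations on the post-beamforming samples. A minimal sketch follows; the samples and the threshold value below are hypothetical:

```python
import numpy as np

def glrt_statistic(r, N):
    """Post-beamforming GLRT statistic Z of (36) from complex samples r."""
    M = len(r)
    mu_X, mu_Y = np.mean(r.real), np.mean(r.imag)
    sigma1_sq = np.sum((r.real - mu_X) ** 2
                       + (r.imag - mu_Y) ** 2) / (2 * M * N)
    Psi = (M - 1) / (2 * N)
    return Psi * (mu_X ** 2 + mu_Y ** 2) / sigma1_sq

rng = np.random.default_rng(2)
r = rng.normal(size=100) + 1j * rng.normal(size=100)   # hypothetical received samples
Z = glrt_statistic(r, N=8)
gamma = 3.0                                            # hypothetical threshold
decide_H1 = Z > gamma                                  # decision rule (36)
```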
Fig. 2 illustrates how the pre-beamforming GLRT, the post-beamforming GLRT,
and the square-law detectors are constructed. More specifically, Fig. 2-(a)
depicts the pre-beamforming GLRT detector architecture. In this case, all
received signals are processed separately to form the system’s detection
statistic [7]. Certainly, this type of processing is more difficult to
implement due to hardware constraints. Fig. 2-(b) illustrates the post-
beamforming GLRT detector architecture. This detector provides a less
restrictive hardware implementation, as well as a simpler detection statistic
that results from adding the received signals. Finally, Fig. 2-(c) illustrates
the square-law detector architecture. Here, after the analog beamforming, the
square magnitude of the signal samples is taken and then they are added up
together. It is important to emphasize that in order to analytically calculate
the performance metrics of the square law detector, we do need the information
about the noise power. That is, for a given PFA, the detection threshold is
given as a function of the noise power [20].
---
(a) Pre-beamforming GLRT detector [7].
---
(b) Post-beamforming GLRT detector.
---
(c) Square-law detector [20].
Figure 2: Detection Schemes.
## IV Detection Performance
In this section, we characterize and analyze the performance of the post-
beamforming GLRT detector. To do so, we start by finding the PDFs of $Z$ under
$\mathcal{H}_{0}$ and $\mathcal{H}_{1}$.
### IV-A Detection Statistics
First, we rewrite (36) as follows:
$\displaystyle Z$
$\displaystyle=\frac{(M-1)\left(\hat{\mu}_{X}^{2}+\hat{\mu}_{Y}^{2}\right)}{2N\hat{\sigma_{1}}^{2}}$
$\displaystyle\overset{(a)}{=}(M-1)\frac{\overbrace{\left(\hat{\mu}_{X}^{2}+\hat{\mu}_{Y}^{2}\right)M/N\sigma^{2}}^{\triangleq\
\mathcal{I}_{1}}}{\underbrace{2\hat{\sigma_{1}}^{2}M/\sigma^{2}}_{\triangleq\
\mathcal{I}_{2}}},$ (37)
where in step (a), without affecting the detection performance, we have
multiplied the numerator and denominator of $Z$ by $M/\sigma^{2}$.
Note that, to fully characterize $Z$, it is imperative to find the PDFs of
$\mathcal{I}_{1}$ and $\mathcal{I}_{2}$ under $\mathcal{H}_{0}$ and
$\mathcal{H}_{1}$.
Substituting (19) and (20) in $\mathcal{I}_{1}$ yields
$\displaystyle\mathcal{I}_{1}=$
$\displaystyle\underbrace{\left(\frac{1}{\sqrt{MN}\sigma}\sum_{k=1}^{M}\mathbf{Re}\left[r_{k}\right]\right)^{2}}_{\triangleq\
U}$
$\displaystyle+\underbrace{\left(\frac{1}{\sqrt{MN}\sigma}\sum_{k=1}^{M}\mathbf{Im}\left[r_{k}\right]\right)^{2}}_{\triangleq\
V}.$ (38)
Hereinafter, the detector in [7, Eq. (6.20)] will be called Fox’s $H$-function
GLRT phased array detector. Observe that $U$ is the square of a Gaussian
random variable (RV) with mean
$\sqrt{M}\mathbb{E}\left[X_{l,k}\right]/\sigma\sqrt{N}$ and unit variance. In
a similar way, $V$ is the square of a Gaussian RV with mean
$\sqrt{M}\mathbb{E}\left[Y_{l,k}\right]/\sigma\sqrt{N}$ and unit variance.
Therefore, depending on the hypothesis, $\mathcal{I}_{1}$ can match one of the
following conditions:
1. 1.
Given $\mathcal{H}_{0}$: $\mathcal{I}_{1}$ follows a central chi-squared (CCS)
distribution [25] with $\nu_{1}=2$ degrees of freedom.
2. 2.
Given $\mathcal{H}_{1}$: $\mathcal{I}_{1}$ follows a noncentral chi-squared
(NCCS) distribution [26] with noncentral parameter
$\lambda_{1}=M\left(\mu_{X}^{2}+\mu_{Y}^{2}\right)/N\sigma^{2}$ and
$\alpha_{1}=2$ degrees of freedom.
Inserting (21) in $\mathcal{I}_{2}$, we obtain
$\displaystyle\mathcal{I}_{2}=\frac{1}{N\sigma^{2}}$
$\displaystyle\sum_{m=1}^{M}\left\\{\left(\mathbf{Re}\left[r_{m}\right]-\hat{\mu}_{X}\right)^{2}\right.$
$\displaystyle+\left.\left(\mathbf{Im}\left[r_{m}\right]-\hat{\mu}_{Y}\right)^{2}\right\\}$
(39)
Here, the analysis is a bit more cumbersome; therefore, we establish the
following two lemmas:
Lemma 1: $\mathcal{I}_{2}$ matches the following conditions:
1. 1.
Given $\mathcal{H}_{0}$: $\mathcal{I}_{2}$ follows a CCS distribution with
$\nu_{2}=2(M-1)$ degrees of freedom.
2. 2.
Given $\mathcal{H}_{1}$: $\mathcal{I}_{2}$ also follows a CCS distribution
with $2(M-1)$ degrees of freedom. In this case, for convenience, we model
$\mathcal{I}_{2}$ by a NCCS distribution with noncentral parameter
$\lambda_{2}=0$ and $\alpha_{2}=2(M-1)$ degrees of freedom.
Proof: See Appendix A. $\blacksquare$
Lemma 2: $\mathcal{I}_{1}$ and $\mathcal{I}_{2}$ are mutually independent RVs.
Proof: See Appendix B. $\blacksquare$
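Lemmas 1 and 2 are easy to probe by simulation. The sketch below assumes, for illustration, that under $\mathcal{H}_{0}$ each quadrature of $r_{m}$ is i.i.d. zero-mean Gaussian with variance $N\sigma^{2}$ (consistent with the moments stated above), and checks the means of $\mathcal{I}_{1}\sim\chi^{2}_{2}$ and $\mathcal{I}_{2}\sim\chi^{2}_{2(M-1)}$ as well as their lack of correlation:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, sigma2 = 8, 4, 1.5
trials = 200_000

# samples under H0: each quadrature ~ N(0, N*sigma^2), per the model above
re = rng.normal(0.0, np.sqrt(N * sigma2), (trials, M))
im = rng.normal(0.0, np.sqrt(N * sigma2), (trials, M))

mu_X, mu_Y = re.mean(axis=1), im.mean(axis=1)
I1 = (mu_X ** 2 + mu_Y ** 2) * M / (N * sigma2)
I2 = ((re - mu_X[:, None]) ** 2 + (im - mu_Y[:, None]) ** 2).sum(axis=1) / (N * sigma2)

assert abs(I1.mean() - 2) < 0.05                 # E[chi2_2] = 2       (Lemma 1)
assert abs(I2.mean() - 2 * (M - 1)) < 0.2        # E[chi2_{2(M-1)}] = 2(M-1)
assert abs(np.corrcoef(I1, I2)[0, 1]) < 0.02     # uncorrelated        (Lemma 2)
```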
Then, using Lemmas 1 and 2, we can define $\mathcal{I}_{1}/\mathcal{I}_{2}$ as
the ratio of either two independent CCS RVs or two independent NCCS RVs,
depending on the hypothesis. The factor $(M-1)$ in (37) allows us to model
$Z$ by a RV with known PDF.
Given $\mathcal{H}_{0}$, it can be shown that $Z$ follows a central
F-distribution [27] with PDF given by
$\displaystyle\mathit{f}_{Z}\left(z|\mathcal{H}_{0}\right)$
$\displaystyle=\frac{(M-1)^{M-1}(M+z-1)^{-M}}{B(1,M-1)},$ (40)
where $B(\cdot,\cdot)$ is the Beta function [28, Eq. (5.12.3)]. Using [28, Eq.
(5.12.1)], we can rewrite (40) in compact form as
$\displaystyle\mathit{f}_{Z}\left(z|\mathcal{H}_{0}\right)=\left(\frac{M-1}{M+z-1}\right)^{M}.$
(41)
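Since $Z=(M-1)\mathcal{I}_{1}/\mathcal{I}_{2}$ with $\mathcal{I}_{1}\sim\chi^{2}_{2}$ and $\mathcal{I}_{2}\sim\chi^{2}_{2(M-1)}$ under $\mathcal{H}_{0}$, (41) is exactly the density of a central $F$ variate with $(2,2(M-1))$ degrees of freedom, which can be cross-checked against a standard statistics library. The snippet below (with an illustrative $M$) verifies (41) pointwise and confirms that the tail probability beyond a threshold $\gamma$ matches the closed form $\left((M-1)/(\gamma+M-1)\right)^{M-1}$ used later for the PFA:

```python
import numpy as np
from scipy import stats

M = 12                                              # illustrative number of samples
z = np.linspace(0.01, 20.0, 200)

pdf_41 = ((M - 1) / (M + z - 1)) ** M               # Eq. (41)
pdf_f = stats.f.pdf(z, 2, 2 * (M - 1))              # central F(2, 2(M-1)) density
assert np.allclose(pdf_41, pdf_f)

# tail probability beyond a threshold gamma
gamma = 4.0
pfa_closed = ((M - 1) / (gamma + M - 1)) ** (M - 1)
assert np.isclose(stats.f.sf(gamma, 2, 2 * (M - 1)), pfa_closed)
```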
For the case of $\mathcal{H}_{1}$, $Z$ can be modeled by a doubly noncentral
F-distribution [29], with PDF given by
$\displaystyle\mathit{f}_{Z}\left(z|\mathcal{H}_{1}\right)=$
$\displaystyle\exp\left[-\Upsilon\ M\right]\left(\frac{M-1}{M+z-1}\right)^{M}$
$\displaystyle\times\,_{1}F_{1}\left(M;1;\frac{\Upsilon\ z\ M}{M+z-1}\right),$
(42)
where $\Upsilon=(\mu_{X}^{2}+\mu_{Y}^{2})/2N\sigma^{2}$, and
${}_{1}F_{1}\left(\cdot;\cdot;\cdot\right)$ is the Kummer confluent
hypergeometric function [28, Eq. (13.1.2)]. The equality $\Upsilon=N\
\text{SNR}_{n}$ holds if $\text{SNR}_{n}=\text{SNR}_{p}\ \forall\ (n,p)$, with
$\text{SNR}_{n}=\left(\mu_{X,n}^{2}+\mu_{Y,n}^{2}\right)/2\sigma^{2}$ being
the signal-to-noise ratio present at the $n$-th antenna. The derivation of
(42) is shown in Appendix C.
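Analogously, because $\mathcal{I}_{1}$ is noncentral chi-squared with noncentrality $\lambda_{1}=2M\Upsilon$ and $\mathcal{I}_{2}$ is central, (42) coincides with the density of a (singly) noncentral $F$ variate with $(2,2(M-1))$ degrees of freedom and noncentrality $2M\Upsilon$, and can be cross-checked pointwise against `scipy.stats.ncf` (parameter values below are illustrative):

```python
import numpy as np
from scipy import stats
from scipy.special import hyp1f1

M, Upsilon = 10, 0.3                     # illustrative values
z = np.linspace(0.01, 30.0, 300)

# Eq. (42)
pdf_42 = (np.exp(-Upsilon * M) * ((M - 1) / (M + z - 1)) ** M
          * hyp1f1(M, 1, Upsilon * z * M / (M + z - 1)))

# noncentral F with d1 = 2, d2 = 2(M-1), noncentrality 2*M*Upsilon
pdf_ncf = stats.ncf.pdf(z, 2, 2 * (M - 1), 2 * M * Upsilon)
assert np.allclose(pdf_42, pdf_ncf, rtol=1e-5, atol=1e-10)
```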
### IV-B False Alarm and Detection Probabilities
It is well known that the performance of any radar system is governed by the
PFA and PD. These probabilities can be computed, respectively, as [24]
$\displaystyle P_{\text{FA}}$
$\displaystyle\triangleq\int_{\gamma}^{\infty}\mathit{f}_{Z}\left(z|\mathcal{H}_{0}\right)\,\text{d}z$
(43) $\displaystyle P_{\text{D}}$
$\displaystyle\triangleq\int_{\gamma}^{\infty}\mathit{f}_{Z}\left(z|\mathcal{H}_{1}\right)\,\text{d}z.$
(44)
Replacing (41) in (43) yields
$\displaystyle P_{\text{FA}}=\left(\frac{M-1}{\gamma+M-1}\right)^{M-1}.$ (45)
Now, isolating $\gamma$ from (45) we can find a threshold so as to meet a
desired PFA, i.e.,
$\displaystyle\gamma=1-M+\left(M-1\right){P_{\text{FA}}}^{1/(1-M)}.$ (46)
It can be noticed in (46) that knowledge of neither the noise power nor the
number of antennas is needed to set the detection threshold; that is, the
detection threshold $\gamma$ is independent of both $\sigma^{2}$ and $N$. This
important feature allows us to maintain a given PFA for an arbitrary
number of antennas. More precisely, with the objective of increasing the PD, we
can increase $N$ without worrying about an increase in the PFA.
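This CFAR behavior of (46) is straightforward to verify numerically: for any target PFA, substituting the resulting threshold back into (45) recovers that PFA exactly, irrespective of $\sigma^{2}$ and $N$ (the value of $M$ and the target PFAs below are illustrative):

```python
M = 20                                                      # illustrative sample count
for pfa_target in (1e-2, 1e-4, 1e-6, 1e-8):
    gamma = 1 - M + (M - 1) * pfa_target ** (1 / (1 - M))   # threshold from (46)
    pfa_back = ((M - 1) / (gamma + M - 1)) ** (M - 1)       # PFA from (45)
    assert abs(pfa_back - pfa_target) <= 1e-9 * pfa_target  # round-trip is exact
```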
On the other hand, after substituting (42) in (44), the PD can be obtained
in single-integral form as
$\displaystyle P_{\text{D}}=$ $\displaystyle\exp\left[-\Upsilon\
M\right]\int_{\gamma}^{\infty}\left(\frac{M-1}{M+z-1}\right)^{M}$
$\displaystyle\times\,_{1}F_{1}\left(M;1;\frac{\Upsilon\ z\
M}{M+z-1}\right)\,\text{d}z.$ (47)
Certainly, (47) can be evaluated by means of numerical integration.
Nonetheless, to further facilitate the computation of the PD, we provide
alternative, faster, and more tractable solutions in the next section.
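For reference, a direct numerical evaluation of (47) only needs a quadrature routine and the confluent hypergeometric function. A minimal sketch (illustrative parameters), which also checks the two limiting cases $\Upsilon=0$ (no target, so $P_{\text{D}}$ collapses to $P_{\text{FA}}$) and $\gamma=0$ (always decide $\mathcal{H}_{1}$, so $P_{\text{D}}=1$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp1f1

def pd_numeric(M, Upsilon, gamma):
    """P_D of (47): numerical integration of f_Z(z|H1) over [gamma, inf)."""
    def integrand(z):
        w = (M - 1) / (M + z - 1)
        return np.exp(-Upsilon * M) * w ** M * hyp1f1(M, 1, Upsilon * z * M / (M + z - 1))
    return quad(integrand, gamma, np.inf)[0]

M, Upsilon, gamma = 15, 0.05, 6.0                    # illustrative values
pfa = ((M - 1) / (gamma + M - 1)) ** (M - 1)         # closed-form PFA, Eq. (45)
assert 0.0 <= pd_numeric(M, Upsilon, gamma) <= 1.0
assert np.isclose(pd_numeric(M, 0.0, gamma), pfa)    # no target: P_D = P_FA
assert np.isclose(pd_numeric(M, Upsilon, 0.0), 1.0)  # gamma = 0: density integrates to 1
```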
## V Alternative Expressions for the Probability of Detection
In this section, we provide both a closed-form solution and a fast-converging
series for the PD. To this end, we make use of complex analysis and a thorough
calculus of residues.
### V-A The Multivariate Fox’s $H$-function
We begin by introducing the Fox's $H$-function, as it will be used
throughout this section.
The Fox’s $H$-function has been used in a wide variety of recent applications,
including mobile communications and radar systems (cf. [30, 31, 32, 33, 34]
for more discussion on this). In [35], the authors considered the most general
case of the Fox’s $H$-function for several variables, defined as
$\mathbf{H}\left[\textbf{x};\left(\delta,\textbf{D}\right);\left(\beta,\textbf{B}\right);\mathcal{L}_{\textbf{s}}\right]\triangleq\left(\frac{1}{2\pi
j}\right)^{L}\oint_{\mathcal{L}_{\textbf{s}}}\Theta\left(\textbf{s}\right)\textbf{x}^{-\textbf{s}}\text{d}\textbf{s},$
(48)
in which $j=\sqrt{-1}$ is the imaginary unit,
$\textbf{s}\triangleq\left[s_{1},\cdots,s_{L}\right]$,
$\textbf{x}\triangleq\left[x_{1},\cdots,x_{L}\right]$,
$\beta\triangleq\left[\beta_{1},\cdots,\beta_{L}\right]$, and
$\delta\triangleq\left[\delta_{1},\cdots,\delta_{L}\right]$ denote vectors of
complex numbers, and $\textbf{B}\triangleq\left(b_{i,j}\right)_{n\times L}$
and $\textbf{D}\triangleq\left(d_{i,j}\right)_{m\times L}$ are matrices of
real numbers. Also,
$\textbf{x}^{-\textbf{s}}\triangleq\prod_{i=1}^{L}x_{i}^{-s_{i}}$,
$\text{d}\textbf{s}\triangleq\prod_{i=1}^{L}\text{d}s_{i}$,
$\mathcal{L}_{\textbf{s}}\triangleq\mathcal{L}_{\textbf{s},1}\times\cdots\times\mathcal{L}_{\textbf{s},L}$,
$\mathcal{L}_{\textbf{s},k}$ is an appropriate contour on the complex plane
$s_{k}$, and
$\Theta\left(\textbf{s}\right)\triangleq\frac{\prod_{i=1}^{m}\Gamma\left(\delta_{i}+\sum_{k=1}^{L}d_{i,k}s_{k}\right)}{\prod_{i=1}^{n}\Gamma\left(\beta_{i}+\sum_{k=1}^{L}b_{i,k}s_{k}\right)},$
(49)
in which $\Gamma(\cdot)$ is the gamma function [36, Eq. (6.1.1)].
### V-B Fox’s H-Function-Based Representation
Here, we obtain an alternative closed-form solution for (47), expressed in
terms of the Fox's $H$-function.
To do so, we first perform some mathematical manipulations in (47),
resulting in
$\displaystyle P_{\text{D}}=$ $\displaystyle\frac{\exp\left[-\Upsilon\
M\right](M-1)^{M}}{\Gamma(M)}\int_{\gamma}^{\infty}\left(\frac{1}{M+z-1}\right)^{M}$
$\displaystyle\times G_{1,2}^{1,1}\left[\left.\begin{array}[]{c}1-M\\\ 0,0\\\
\end{array}\right|-\frac{\Upsilon\ z\ M}{M+z-1}\right]\text{d}z,$ (52)
where $G_{m,n}^{p,q}\left[\cdot\right]$ is the Meijer’s G-function [37, Eq.
(8.2.1.1)].
Now, using the contour integral representation of the Meijer's G-function, we
can express (52) as follows:
$\displaystyle P_{\text{D}}=$ $\displaystyle\frac{\exp\left[-\Upsilon\
M\right](M-1)^{M}}{\Gamma(M)}\int_{\gamma}^{\infty}\left(\frac{1}{M+z-1}\right)^{M}$
$\displaystyle\times\left(\frac{1}{2\pi
j}\right)\oint_{\mathcal{L}^{**}_{\textbf{s},1}}\frac{\Gamma(s_{1})\Gamma(M-s_{1})}{\Gamma(1-s_{1})}$
$\displaystyle\times\left(-\frac{\Upsilon\ z\
M}{M+z-1}\right)^{-s_{1}}\text{d}s_{1}\ \text{d}z,$ (53)
in which $\mathcal{L}^{**}_{\textbf{s},1}$ is a closed complex contour that
separates the poles of the gamma function $\Gamma(s_{1})$ from the poles of
$\Gamma(M-s_{1})$. Since
$\int_{\gamma}^{\infty}\left|\mathit{f}_{Z}\left(z|\mathcal{H}_{1}\right)\right|\text{d}z<\infty$,
we can interchange the order of integration[38], i.e.,
$\displaystyle P_{\text{D}}=$ $\displaystyle\frac{\exp\left[-\Upsilon\
M\right](M-1)^{M}}{\Gamma(M)}\left(\frac{1}{2\pi j}\right)$
$\displaystyle\times\oint_{\mathcal{L}^{**}_{\textbf{s},1}}\frac{\Gamma(s_{1})\Gamma(M-s_{1})\left(-\Upsilon\
M\right)^{-s_{1}}}{\Gamma(1-s_{1})}$
$\displaystyle\times\int_{\gamma}^{\infty}\left(\frac{1}{M+z-1}\right)^{M}\left(\frac{z}{M+z-1}\right)^{-s_{1}}\text{d}z\
\text{d}s_{1}.$ (54)
Developing the inner real integral, we obtain
$\displaystyle P_{\text{D}}=$ $\displaystyle\frac{\exp\left[-\Upsilon\
M\right](M-1)^{M}\Gamma(M-1)}{\Gamma(M)\ \gamma^{M-1}}\left(\frac{1}{2\pi
j}\right)$
$\displaystyle\times\oint_{\mathcal{L}^{*}_{\textbf{s},1}}\frac{\Gamma(s_{1})\Gamma(M-s_{1})\left(-\Upsilon\
M\right)^{-s_{1}}}{\Gamma(1-s_{1})}$
$\displaystyle\times\,_{2}\tilde{F}_{1}\left(M-1,M-s_{1};M;\frac{1-M}{\gamma}\right)\text{d}s_{1},$
(55)
where $\,{}_{2}\tilde{F}_{1}(a,b;c;x)=\,_{2}F_{1}(a,b;c;x)/\Gamma(c)$ is the
regularized Gauss hypergeometric function, and
$\,{}_{2}F_{1}(\cdot,\cdot;\cdot;\cdot)$ is the Gauss hypergeometric function
[28, Eq. (15.1.1)]. Note that we have used a new complex contour,
$\mathcal{L}^{*}_{\textbf{s},1}$. This is because the inner integration
changed the integration path in the complex plane. Here,
$\mathcal{L}^{*}_{\textbf{s},1}$ is a closed contour that separates the poles
of $\Gamma(s_{1})$ from those of $\Gamma(M-s_{1})$.
Figure 3: Integration path for $\mathcal{L}_{\textbf{s},1}$.
Figure 4: Integration path for $\mathcal{L}_{\textbf{s},2}$.
Finally, replacing (46) in (55) and after using the complex integral
representation of the regularized Gauss hypergeometric function [39, Eq.
(07.24.26.0004.01)], we can express PD in closed form as in (62), shown at the
top of the next page, where
$\mathcal{L}_{\textbf{s}}=\mathcal{L}_{\textbf{s}_{1}}\times\mathcal{L}_{\textbf{s}_{2}}$,
and
$\displaystyle\Phi$ $\displaystyle=\frac{\Omega^{M-1}\exp\left[-\Upsilon\
M\right]}{\Gamma(M-1)}$ (56) $\displaystyle\Omega$
$\displaystyle=\frac{M-1}{1-M+\left(M-1\right){P_{\text{FA}}}^{1/(1-M)}}.$
(57)
Observe that (62) has two new closed contours, $\mathcal{L}_{\textbf{s},1}$
and $\mathcal{L}_{\textbf{s},2}$. $\mathcal{L}_{\textbf{s},1}$ is an adjusted
contour that appears due to the presence of the new gamma functions, whereas
$\mathcal{L}_{\textbf{s},2}$ is the contour corresponding to the complex
representation of the regularized Gauss hypergeometric function. The
integration paths for $\mathcal{L}_{\textbf{s},1}$ and
$\mathcal{L}_{\textbf{s},2}$ are described in Section VI.
$\displaystyle P_{\text{D}}=\Phi\ \mathbf{H}\left[\left[\Omega,-\Upsilon\
M\right];\left(\left[0,0,M-1,M\right],\left(\begin{array}[]{c c c
c}1&0&-1&-1\\\ 0&1&0&-1\\\
\end{array}\right)^{T}\right);\left(\left[M,1\right],\left(\begin{array}[]{cc}-1&0\\\
0&-1\\\ \end{array}\right)\right);\mathcal{L}_{\textbf{s}}\right]$ (62)
A general implementation of the multivariate Fox's $H$-function is not yet
available in mathematical packages such as MATHEMATICA, MATLAB, or MAPLE. Some
efforts have been made to alleviate this problem [40, 41, 42]. Specifically, in
[40], the Fox's $H$-function was implemented for one up to four variables. In
this work, we provide an accurate and portable implementation in MATHEMATICA
for the bivariate Fox’s $H$-function. The code used to compute (62) is
presented in Appendix D. It is important to mention that such implementation
is specific for our system model. Moreover, an equivalent series
representation for (62) is also provided to facilitate the use of our results.
This series representation is presented in the subsequent subsection.
### V-C Infinite-Series Representation
Here, we provide a series representation for (62). To achieve this, we exploit
the orthogonal selection of poles in Cauchy’s residue theorem.
First, let us consider the following suitable closed contours for (62): (i)
$\mathcal{L}_{\textbf{s},1}=\text{L}_{0,1}+\text{L}_{-\infty,1}$, and (ii)
$\mathcal{L}_{\textbf{s},2}=\text{L}_{0,2}+\text{L}_{-\infty,2}$. Both
contours are shown in Figs. 3 and 4, where $\xi_{1}\in\mathbb{R}^{+}$ must be
chosen so that all the poles of $\Gamma(s_{1})$ are separated from those of
$\Gamma(M-1-s_{1})$ and $\Gamma(M-s_{1}-s_{2})$, and
$\xi_{2}\in\mathbb{R}^{+}$ must be chosen so that all the poles of
$\Gamma(s_{2})$ are separated from those of $\Gamma(M-s_{1}-s_{2})$.
Additionally, $\rho_{1}$ and $\rho_{2}$ are the radii of the arcs
$\text{L}_{-\infty,1}$ and $\text{L}_{-\infty,2}$, respectively.
It is easy to prove that any complex integration along the paths
$\text{L}_{-\infty,1}$ and $\text{L}_{-\infty,2}$ vanishes as $\rho_{1}$
and $\rho_{2}$ go to infinity, respectively. ($\rho_{1}$ and $\rho_{2}$ must
tend to infinity so as to enclose all the poles, since the gamma functions
$\Gamma(s_{1})$ and $\Gamma(s_{2})$ generate simple poles at all non-positive
integers [28, Eq. (5.2.1)].)
Therefore, the final integration path for $\mathcal{L}_{\textbf{s},1}$ starts
at $\xi_{1}-j\infty$ and goes to $\xi_{1}+j\infty$, whereas the final
integration path for $\mathcal{L}_{\textbf{s},2}$ starts at $\xi_{2}-j\infty$
and goes to $\xi_{2}+j\infty$.
Now, we can rewrite (62) through the sum of residues as [43]
$\displaystyle
P_{\text{D}}=\Phi\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\text{Res}\left[\Xi\left(s_{1},s_{2}\right);s_{1}=-k,s_{2}=-l\right],$
(63)
where $\text{Res}\left[\Xi\left(s_{1},s_{2}\right);s_{1}=-k,s_{2}=-l\right]$
represents the residue of $\Xi\left(s_{1},s_{2}\right)$ at the poles
$s_{1}=-k$, $s_{2}=-l$, and
$\displaystyle\Xi\left(s_{1},s_{2}\right)=$
$\displaystyle\frac{\Gamma(s_{1})\Gamma(s_{2})\Gamma(M-s_{1}-1)\Gamma(-s_{1}+M-s_{2})}{\Gamma(1-s_{2})\Gamma(-(s_{1}-M))}$
$\displaystyle\times\Omega^{-s_{1}}\left(-\Upsilon\ M\right)^{-s_{2}}.$ (64)
This function is the integration kernel of (62).
Accordingly, after applying the residue operation [43, Eq. (16.3.5)], (63)
reduces to
$\displaystyle P_{\text{D}}=$
$\displaystyle\Phi\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\left\\{\frac{\Gamma(k+M-1)\Gamma(k+l+M)\left(-\Omega\right)^{k}}{k!\Gamma(l+1)^{2}\Gamma(k+M)}\right.$
$\displaystyle\times\left.\left(\Upsilon\ M\right)^{l}\right\\}.$ (65)
Finally, with the aid of [28, Eq. (15.2.1)] and after some mathematical
manipulations, we obtain
$\displaystyle P_{\text{D}}=$ $\displaystyle\exp\left[-\Upsilon\
M\right]\Omega^{M-1}\sum_{k=0}^{\infty}\left\\{\frac{\Gamma(k+M)\left(\Upsilon\
M\right)^{k}}{\Gamma(k+1)^{2}}\right.$ $\displaystyle\times\left.\
{}_{2}\tilde{F}_{1}\left(M-1,k+M;M;-\Omega\right)\right\\}.$ (66)
It is worth mentioning that (66) is also an original contribution of this
work, proving to be very efficient and computationally tractable, as will be
shown in the next section.
Generally, when radar designers need to compute the PD over a certain volume
(i.e., range, azimuth and elevation), the calculation of the PD has to be
performed for all the point scatterers within the entire coverage volume, thus
increasing the computational load and simulation time. Eq. (66) can be
executed quickly on an ordinary desktop computer, serving as a useful tool for
radar designers.
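A reference implementation of the truncated series (66), cross-checked against direct numerical integration of (47), can be sketched as follows (parameter values are illustrative; `gammaln` keeps the factorially growing coefficients in log-space):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp1f1, hyp2f1, gammaln

def pd_series(M, Upsilon, P_FA, terms=80):
    """P_D via the truncated series of Eq. (66)."""
    gamma_th = 1 - M + (M - 1) * P_FA ** (1 / (1 - M))      # Eq. (46)
    Omega = (M - 1) / gamma_th                              # Eq. (57)
    total = 0.0
    for k in range(terms):
        # Gamma(k+M) (M*Upsilon)^k exp(-M*Upsilon) / Gamma(k+1)^2, in log-space
        coeff = np.exp(gammaln(k + M) - 2 * gammaln(k + 1)
                       + k * np.log(Upsilon * M) - Upsilon * M)
        # regularized Gauss hypergeometric: 2F1 / Gamma(M)
        f21_reg = hyp2f1(M - 1, k + M, M, -Omega) / np.exp(gammaln(M))
        total += coeff * f21_reg
    return Omega ** (M - 1) * total

def pd_integral(M, Upsilon, P_FA):
    """P_D via direct numerical integration of Eq. (47)."""
    gamma_th = 1 - M + (M - 1) * P_FA ** (1 / (1 - M))
    def integrand(z):
        w = (M - 1) / (M + z - 1)
        return np.exp(-Upsilon * M) * w ** M * hyp1f1(M, 1, Upsilon * z * M / (M + z - 1))
    return quad(integrand, gamma_th, np.inf)[0]

M, Upsilon, P_FA = 12, 0.08, 1e-4        # illustrative values
assert abs(pd_series(M, Upsilon, P_FA) - pd_integral(M, Upsilon, P_FA)) < 1e-8
```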
Moreover, if $T_{0}-1$ terms are used in (66), we can define the
truncation error as
$\displaystyle\mathcal{T}=$
$\displaystyle\frac{1}{\Gamma(M)}\sum_{k=T_{0}}^{\infty}\frac{\Omega^{M-1}\exp\left[-M\Upsilon\right](M\Upsilon)^{k}}{\Gamma(k+1)^{2}}$
$\displaystyle\times\Gamma(k+M)\,_{2}F_{1}(M-1,k+M;M;-\Omega).$ (67)
Since the Gauss hypergeometric function in (67) is monotonically decreasing
with respect to $k$, $\mathcal{T}$ can be bounded as
$\displaystyle\mathcal{T}\leq$
$\,{}_{2}F_{1}\left(M-1,M+T_{0};M;-\Omega\right)$
$\displaystyle\times\sum_{k=T_{0}}^{\infty}\frac{\Omega^{M-1}\exp\left[-M\Upsilon\right](M\Upsilon)^{k}\Gamma(k+M)}{\Gamma(k+1)^{2}\Gamma(M)}.$
(68)
Since we add up strictly positive terms, we have
$\displaystyle\sum_{k=T_{0}}^{\infty}\frac{\Omega^{M-1}\exp\left[-M\Upsilon\right](M\Upsilon)^{k}\Gamma(k+M)}{\Gamma(k+1)^{2}\Gamma(M)}$
$\displaystyle\ \ \
\leq\sum_{k=0}^{\infty}\frac{\Omega^{M-1}\exp\left[-M\Upsilon\right](M\Upsilon)^{k}\Gamma(k+M)}{\Gamma(k+1)^{2}\Gamma(M)}$
$\displaystyle\ \ \ \overset{(a)}{=}\Omega^{M-1}L_{M-1}(-M\Upsilon),$ (69)
where in step (a), we have used [39, Eq. (05.02.02.0001.01)] and some minor
simplifications. Then, from (68) and (69), the truncation error (67) can be bounded as
$\displaystyle\mathcal{T}\leq\frac{L_{M-1}(-M\Upsilon)\,_{2}F_{1}\left(M-1,M+T_{0};M;-\Omega\right)}{\Omega^{1-M}},$
(70)
where $L_{\left(\cdot\right)}(\cdot)$ is the Laguerre polynomial [39, Eq.
(05.02.02.0001.01)].
## VI Numerical Results and Discussions
Figure 5: PDF of $Z$ under $\mathcal{H}_{0}$ for different values of $M$.
Figure 6: PDF of $Z$ under $\mathcal{H}_{1}$ for different values of $M$ and $N$.
Figure 7: $P_{\text{D}}$ vs $P_{\text{FA}}$ with $M=22$, $N=3$, and different values of $\text{SNR}_{n}$.
Figure 8: $P_{\text{D}}$ vs $\text{SNR}_{n}$ with $M=15$, $P_{\text{FA}}=10^{-6}$ and different values of $N$.
Figure 9: $P_{\text{D}}$ vs $\text{SNR}_{n}$ with $N=11$, $P_{\text{FA}}=10^{-6}$ and different values of $M$.
Figure 10: $P_{\text{D}}$ vs $\text{SNR}_{n}$ with $M=10$, $N=15$ and different values of $P_{\text{FA}}$.
TABLE I: Efficiency of (66) as compared to (47).
$P_{\text{D}}$ Parameters | $P_{\text{D}}$ Value | Absolute Error, $\epsilon$ | Number of terms | Computation Time for Eq. (47) | Computation Time for Eq. (66) | Time Reduction
$M=50$, $P_{FA}=10^{-8}$, $\Upsilon=-10\ \text{dB}$ | $0.106$ % | $5.471\times 10^{-10}$ | 23 | $92.725\times 10^{-3}\ \text{(s)}$ | $1.923\times 10^{-3}\ \text{(s)}$ | $97.92\ \%$
$M=80$, $P_{FA}=10^{-8}$, $\Upsilon=-10\ \text{dB}$ | $1.416$ % | $5.248\times 10^{-10}$ | 30 | $197.044\times 10^{-3}\ \text{(s)}$ | $2.464\times 10^{-3}\ \text{(s)}$ | $98.74\ \%$
$M=100$, $P_{FA}=10^{-8}$, $\Upsilon=-10\ \text{dB}$ | $4.423$ % | $6.032\times 10^{-10}$ | 34 | $294.950\times 10^{-3}\ \text{(s)}$ | $3.415\times 10^{-3}\ \text{(s)}$ | $98.84\ \%$
$M=50$, $P_{FA}=10^{-8}$, $\Upsilon=-5\ \text{dB}$ | $19.224$ % | $5.261\times 10^{-10}$ | 45 | $96.370\times 10^{-3}\ \text{(s)}$ | $4.625\times 10^{-3}\ \text{(s)}$ | $95.20\ \%$
$M=50$, $P_{FA}=10^{-6}$, $\Upsilon=-5\ \text{dB}$ | $52.886$ % | $5.341\times 10^{-10}$ | 45 | $95.769\times 10^{-3}\ \text{(s)}$ | $4.663\times 10^{-3}\ \text{(s)}$ | $95.13\ \%$
$M=50$, $P_{FA}=10^{-4}$, $\Upsilon=-5\ \text{dB}$ | $87.958$ % | $5.361\times 10^{-10}$ | 45 | $92.911\times 10^{-3}\ \text{(s)}$ | $4.54\times 10^{-3}\ \text{(s)}$ | $95.11\ \%$
$M=50$, $P_{FA}=10^{-6}$, $\Upsilon=-3\ \text{dB}$ | $92.089$ % | $9.339\times 10^{-10}$ | 60 | $99.896\times 10^{-3}\ \text{(s)}$ | $7.043\times 10^{-3}\ \text{(s)}$ | $92.94\ \%$
$M=50$, $P_{FA}=10^{-6}$, $\Upsilon=-2\ \text{dB}$ | $98.621$ % | $4.790\times 10^{-10}$ | 71 | $95.124\times 10^{-3}\ \text{(s)}$ | $9.238\times 10^{-3}\ \text{(s)}$ | $90.28\ \%$
$M=50$, $P_{FA}=10^{-6}$, $\Upsilon=-1\ \text{dB}$ | $99.902$ % | $6.522\times 10^{-10}$ | 83 | $98.728\times 10^{-3}\ \text{(s)}$ | $11.418\times 10^{-3}\ \text{(s)}$ | $88.43\ \%$
In this section, we validate our derived expressions and discuss the
representative results. To do so, we make use of receiver operating
characteristic (ROC) curves and Monte-Carlo simulations (the number of
realizations was set to $1\times 10^{7}$). For comparison purposes, besides the
pre-beamforming GLRT and square-law detectors, we also include the (optimum)
LRT detector [7] so as to quantify the SNR losses (herein, the SNR loss is
defined as the extra SNR required to achieve the same performance as the LRT
detector [7, Eq. (4.3)], for a given PD).
Figs. 5 and 6 show the PDF of $Z$ (analytical and simulated) given the
hypotheses $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$, respectively. The
distribution parameters have been selected to show the broad range of shapes
that the PDFs can exhibit. Observe the perfect match between Monte-Carlo
simulations and our derived expressions [refer to (41) and (42)].
Fig. 7 shows $P_{\text{D}}$ as a function of $P_{\text{FA}}$ (analytical and
simulated) for different values of $\text{SNR}_{n}$. Observe that for low
$\text{SNR}_{n}$, the post-beamforming GLRT detector is superior to both the
pre-beamforming GLRT detector and the square-law detector. That is, the weaker
the signals, the better the performance of our proposed detector. For example,
given $P_{\text{FA}}=10^{-4}$, the post-beamforming GLRT detector, the pre-
beamforming GLRT detector, and the square-law detector provide, respectively,
the following probabilities of detection: $0.53$, $0.38$ and $0.47$ for
$\text{SNR}_{n}=-7.9$ dB; $0.78$, $0.66$ and $0.75$ for $\text{SNR}_{n}=-6.5$
dB; and finally, $0.94$, $0.90$ and $0.95$ for $\text{SNR}_{n}=-5.1$ dB. The
following figures illustrate the impact on the PD as the SNR is reduced.
Fig. 8 shows $P_{\text{D}}$ as a function of $\text{SNR}_{n}$ (analytical and
simulated) for different values of $N$. Note that all detectors improve as the
number of antennas increases, requiring a lower SNR for a certain PD. Also,
note how the post-beamforming GLRT detector overcomes the pre-beamforming GLRT
detector and the square-law detector as the SNR decreases. For example, given
$\text{SNR}_{n}=-8$ dB, the post-beamforming GLRT detector, the pre-
beamforming GLRT detector, and the square-law detector provide, respectively,
the following probabilities of detection: $0.55$, $0.40$ and $0.54$ for
$N=10$; $0.79$, $0.64$ and $0.75$ for $N=14$; and finally, $0.94$, $0.80$ and
$0.86$ for $N=18$. Additionally, observe how the SNR loss is reduced as $N$
increases. In particular, for a fixed $P_{\text{D}}=0.8$, the post-beamforming
GLRT detector is superior to both the pre-beamforming GLRT detector and the
square-law detector deliver, respectively, the following SNR losses: $3.8$ dB,
$4.2$ dB and $2.8$ dB for $N=10$; $2.9$ dB, $3.6$ dB and $3.1$ dB for $N=14$;
and finally, $2.8$ dB, $3.9$ dB and $3.5$ dB for $N=18$.
Fig. 9 shows $P_{\text{D}}$ as a function of $\text{SNR}_{n}$ (analytical and
simulated) for different values of $M$. Observe that all detectors improve as
the number of samples increases. This occurs because we “average down” the
noise power by increasing $M$. Once again, the post-beamforming GLRT detector
performs better than the pre-beamforming GLRT detector and the square-law
detector in the low SNR regime. More specifically, given $\text{SNR}_{n}=-8$
dB, the post-beamforming GLRT detector, the pre-beamforming GLRT detector and
the square-law detector provide, respectively, the following probabilities of
detection: $0.30$, $0.21$ and $0.35$ for $M=10$; $0.53$, $0.40$ and $0.53$ for
$M=14$; and finally, $0.87$, $0.73$ and $0.82$ for $M=18$. Moreover, observe
how the SNR loss is reduced as $M$ increases. In particular, for a fixed
$P_{\text{D}}=0.8$, the post-beamforming GLRT detector, the pre-beamforming
GLRT detector and the square-law detector deliver, respectively, the following
SNR losses: $3.6$ dB, $3.4$ dB and $3.2$ dB for $M=10$; $3.4$ dB, $3.5$ dB and
$3.1$ dB for $M=14$; and finally, $2.8$ dB, $3.6$ dB and $3.1$ dB for $M=18$.
Fig. 10 shows $P_{\text{D}}$ as a function of $\text{SNR}_{n}$ (analytical and
simulated) for different values of $P_{\text{FA}}$. Note that all detectors
improve as $P_{\text{FA}}$ is increased. This fundamental trade-off means that
if the PFA is reduced, the PD decreases as well. Observe that for low SNR, the
superiority of our detector still remains. For example, given
$\text{SNR}_{n}=-8$ dB, the post-beamforming GLRT detector, the pre-
beamforming GLRT detector and the square-law detector provide, respectively,
the following probabilities of detection: $0.93$, $0.76$ and $0.84$ for
$P_{\text{FA}}=10^{-6}$; $0.80$, $0.57$ and $0.70$ for
$P_{\text{FA}}=10^{-5}$; and finally, $0.55$, $0.40$ and $0.54$ for
$P_{\text{FA}}=10^{-4}$. Additionally, observe how the SNR loss is reduced as
$P_{\text{FA}}$ decreases. In particular, for a fixed $P_{\text{D}}=0.8$, the post-
beamforming GLRT detector, the pre-beamforming GLRT detector and the square-
law detector deliver, respectively, the following SNR losses: $2.4$ dB, $3.6$
dB and $3.2$ dB for $P_{\text{FA}}=10^{-6}$; $2.6$ dB, $3.4$ dB and $3.0$ dB
for $P_{\text{FA}}=10^{-5}$; and finally, $2.9$ dB, $3.2$ dB and $2.8$ dB for
$P_{\text{FA}}=10^{-4}$.
An important remark is in order. The results presented herein show that if
the received signals are weak, instead of processing them separately, as
described in [7, Eq. (6.20)], it is better to sum up the signals and then
construct the system's detection statistic. Intuitively, this means that if
the signal received by each antenna is poorly estimated (due to low target
power or strong interference), then the system will also deliver a faulty
final estimate. Therefore, it is better to reinforce the overall signal
(i.e., apply the beamforming operation) before any further processing.
Moreover, the way we create the system's detection statistic enables us to
improve radar detection as we increase the number of antennas while
maintaining a fixed PFA.
Table I illustrates the efficiency of (66) by showing the absolute error,
computation time, required number of terms to guarantee a certain accuracy,
and time reduction [compared to (47)]. The absolute error can be expressed
as
$\displaystyle\epsilon=|P_{\text{D}}-\overline{P_{\text{D}}}|,$ (71)
where $\overline{P_{\text{D}}}$ is the probability of detection obtained via
MATHEMATICA's built-in numerical integration (Eq. (47) was evaluated using
MATHEMATICA's fastest integration method, “GlobalAdaptive”, with an
accuracy goal of $10^{-10}$). Observe that for 9 different parameter settings,
(66) converges rapidly, requiring between 23 and 83 terms to guarantee an
accuracy of $10^{-10}$. Moreover, the computation time dropped dramatically,
thereby providing time reductions above $88$%. This impressive reduction can
lead to major savings in computational load if one wants to evaluate the
detection performance over an entire area or volume covered by the radar
system.
## VII Conclusions
This paper proposed and analyzed a new GLRT phased array detector, which is
projected after the analog beamforming operation. For the analysis, a
nonfluctuating target embedded in CWGN was considered. From the practical
point of view, this detector fulfils the hardware and computational
constraints of most radar systems. The performance metrics – PD and PFA – were
derived in closed form assuming a total lack of knowledge about the target
echo and noise statistics. Moreover, a novel fast converging series for the PD
was also derived. This series representation proved to be very efficient and
computationally tractable, showing an outstanding accuracy and impressive
reductions in both computational load and computation time, compared to
MATHEMATICA’s built-in numerical integration. Numerical results showed that
when the incoming signals are weak, it is best to combine (sum) them before
any estimation or further processing. Indeed, this paper is conclusive in
indicating that for low SNR, the post-beamforming GLRT detector is superior
to the pre-beamforming GLRT detector and the square-law detector. Another
interesting feature of the post-beamforming GLRT detector is that
for a fixed PFA, the detection threshold is independent of the number of
antennas, which allows us to improve the PD (by increasing $N$) while
maintaining a fixed PFA. The SNR losses were also quantified and they
illustrated the superiority of the post-beamforming GLRT detector as $N$ and
$M$ increase.
## Appendix A: Proof of Lemma 1
Let us define the following RV
$\displaystyle\mathcal{I}_{3}\triangleq\frac{1}{N\sigma^{2}}\sum_{m=1}^{M}\left(\mathbf{Re}\left[r_{m}\right]-\mu_{X}\right)^{2},$
(72)
where $\mu_{X}$ is the total sum of the target echoes for the in-phase
components.
Rewriting (72), we have
$\displaystyle\mathcal{I}_{3}=\sum_{m=1}^{M}\left(\frac{\mathbf{Re}\left[r_{m}\right]-\mu_{X}}{\sqrt{N}\sigma}\right)^{2}.$
(73)
It can be noticed that $\mathcal{I}_{3}$ is a sum of the squares of $M$
standard Gaussian (zero mean and unit variance) RVs. Therefore,
$\mathcal{I}_{3}$ can be modeled by a CCS RV with $M$ degrees of freedom.
Now, after performing some manipulations, we can rewrite (73) as
$\displaystyle\mathcal{I}_{3}=$
$\displaystyle\sum_{m=1}^{M}\left(\frac{\mathbf{Re}\left[r_{m}\right]-\hat{\mu}_{X}}{\sqrt{N}\sigma}+\frac{\hat{\mu}_{X}-\mu_{X}}{\sqrt{N}\sigma}\right)^{2}$
$\displaystyle\overset{(a)}{=}$
$\displaystyle\sum_{m=1}^{M}\left(\frac{\mathbf{Re}\left[r_{m}\right]-\hat{\mu}_{X}}{\sqrt{N}\sigma}\right)^{2}+2\left(\frac{\hat{\mu}_{X}-\mu_{X}}{\sqrt{N}\sigma}\right)\times\left(\frac{\sum_{m=1}^{M}\mathbf{Re}\left[r_{m}\right]-M\hat{\mu}_{X}}{\sqrt{N}\sigma}\right)+\sum_{m=1}^{M}\left(\frac{\hat{\mu}_{X}-\mu_{X}}{\sqrt{N}\sigma}\right)^{2}$
$\displaystyle\overset{(b)}{=}$
$\displaystyle\underbrace{\sum_{m=1}^{M}\left(\frac{\mathbf{Re}\left[r_{m}\right]-\hat{\mu}_{X}}{\sqrt{N}\sigma}\right)^{2}}_{\triangleq\ \mathcal{I}_{4}}+\underbrace{\left(\frac{\hat{\mu}_{X}-\mu_{X}}{\sqrt{N}\sigma/\sqrt{M}}\right)^{2}}_{\triangleq\ \mathcal{I}_{5}},$ (74)
where in step (b) we use the fact that
$M\hat{\mu}_{X}=\sum^{M}_{m=1}\mathbf{Re}\left[r_{m}\right]$ and,
consequently, the second term in step (a) vanishes. Observe that
$\mathcal{I}_{5}$ represents the square of a standard Gaussian variable and,
therefore, can be modeled by a CCS distribution with one degree of freedom.
Employing the additivity property of the CCS distribution [25] and taking into
account the distributions of $\mathcal{I}_{3}$ and $\mathcal{I}_{5}$, we can
now describe $\mathcal{I}_{4}$ by a CCS RV with $M-1$ degrees of freedom.
Also, observe that $\mathcal{I}_{4}$ is just the first term of (IV-A).
Following the same approach, it can be proved that the second term in (IV-A)
also follows a CCS distribution with $M-1$ degrees of freedom. Since
$\mathcal{I}_{2}$ is formed by the sum of two CCS RVs, then its distribution
is governed by a CCS RV with $2(M-1)$ degrees of freedom, which completes the
proof. It is worth mentioning that this result remains true regardless of the
hypothesis, because any value of $\mu_{X}$ or $\mu_{Y}$ will not affect the
distribution of $\mathcal{I}_{2}$.
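The algebraic identity behind (74) can be checked numerically. The following is a minimal sketch (with hypothetical illustrative values for $M$, $N$, $\sigma$ and $\mu_{X}$, which are not taken from the paper) confirming that $\mathcal{I}_{3}=\mathcal{I}_{4}+\mathcal{I}_{5}$ holds exactly for any sample, with $\mathcal{I}_{5}$ the squared standardized sample-mean error.

```python
import random

# Hypothetical parameters: M pulses, N antennas, noise power sigma^2.
M, N, sigma, mu_X = 8, 4, 1.3, 0.7
random.seed(0)

# Simulated in-phase samples Re[r_m] with mean mu_X and variance N*sigma^2.
re_r = [random.gauss(mu_X, (N ** 0.5) * sigma) for _ in range(M)]
mu_hat = sum(re_r) / M  # sample mean

I3 = sum((x - mu_X) ** 2 for x in re_r) / (N * sigma ** 2)
I4 = sum((x - mu_hat) ** 2 for x in re_r) / (N * sigma ** 2)
I5 = M * (mu_hat - mu_X) ** 2 / (N * sigma ** 2)  # one chi-square degree of freedom

assert abs(I3 - (I4 + I5)) < 1e-12  # the decomposition (74) holds exactly
```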
## Appendix B: Proof of Lemma 2
Let
$\displaystyle P_{1}$
$\displaystyle=\mathbf{L}\left(\mathbf{L}^{T}\mathbf{L}\right)^{-1}\mathbf{L}^{T}=\frac{1}{M}\mathbf{L}\
\mathbf{L}^{T}$ (75) $\displaystyle P_{2}$
$\displaystyle=\mathbf{I}-P_{1}=\mathbf{I}-\frac{1}{M}\mathbf{L}\
\mathbf{L}^{T}$ (76)
be symmetric and idempotent matrices such that
$\text{rank}\left(P_{1}\right)=1$,
$\text{rank}\left(P_{2}\right)=M-1$ and $P_{1}+P_{2}=\mathbf{I}$, where
$\mathbf{I}\in\mathbb{R}^{M\times M}$ represents the identity matrix and
$\mathbf{L}=\left[1,1,\cdots,1\right]^{T}\in\mathbb{R}^{M}$ is the all-ones
vector. In addition, let
$\displaystyle\mathbf{Re}\left[\underline{r}\right]=\left[\mathbf{Re}\left[r_{1}\right],\mathbf{Re}\left[r_{2}\right],\cdots,\mathbf{Re}\left[r_{M}\right]\right]^{T}$
(77)
be a random vector with
$\mathbb{E}\left[\mathbf{Re}\left[\underline{r}\right]\right]=\mu_{X}\mathbf{L}$
and
$\mathbb{COV}\left[\mathbf{Re}\left[\underline{r}\right]\right]=N\sigma^{2}\mathbf{I}$.
Then, Cochran’s theorem [44] states that
$\displaystyle\omega_{1}=$
$\displaystyle\frac{\mathbf{Re}\left[\underline{r}\right]^{T}P_{1}\
\mathbf{Re}\left[\underline{r}\right]}{N\sigma^{2}}$ (78)
$\displaystyle\omega_{2}=$
$\displaystyle\frac{\mathbf{Re}\left[\underline{r}\right]^{T}P_{2}\
\mathbf{Re}\left[\underline{r}\right]}{N\sigma^{2}}$ (79)
are independently distributed.
Now, replacing (75) in (78), we have
$\displaystyle\omega_{1}$
$\displaystyle=\frac{1}{N\sigma^{2}}\mathbf{Re}\left[\underline{r}\right]^{T}\left(\frac{1}{M}\mathbf{L}\
\mathbf{L}^{T}\right)\mathbf{Re}\left[\underline{r}\right]$
$\displaystyle=\frac{1}{MN\sigma^{2}}\mathbf{Re}\left[\underline{r}\right]^{T}\mathbf{L}\
\mathbf{L}^{T}\mathbf{Re}\left[\underline{r}\right]$
$\displaystyle=\frac{1}{MN\sigma^{2}}\left(\sum_{k=1}^{M}\mathbf{Re}\left[r_{k}\right]\right)^{2}.$
(80)
Similarly, inserting (76) in (79), we have
$\displaystyle\omega_{2}$
$\displaystyle\overset{(a)}{=}\frac{1}{N\sigma^{2}}\mathbf{Re}\left[\underline{r}\right]^{T}P_{2}^{T}P_{2}\mathbf{Re}\left[\underline{r}\right]$
$\displaystyle=\frac{1}{N\sigma^{2}}\left\|P_{2}\mathbf{Re}\left[\underline{r}\right]\right\|^{2}$
$\displaystyle\overset{(b)}{=}\frac{1}{N\sigma^{2}}\left\|\left(\mathbf{I}-\frac{1}{M}\mathbf{L}\
\mathbf{L}^{T}\right)\mathbf{Re}\left[\underline{r}\right]\right\|^{2}$
$\displaystyle\overset{(c)}{=}\frac{1}{N\sigma^{2}}\left\|\mathbf{Re}\left[\underline{r}\right]-\mathbf{L}\hat{\mu}_{X}\right\|^{2}$
$\displaystyle\overset{(d)}{=}\frac{1}{N\sigma^{2}}\sum_{k=1}^{M}\left(\mathbf{Re}\left[r_{k}\right]-\hat{\mu}_{X}\right)^{2},$
(81)
where in step (a), we have used the definition of idempotent and symmetric
matrices [45], in step (b), we have used (76), in step (c), we have employed
(19), and in step (d), we have used (77) and applied the Euclidean norm.
Observe that $\omega_{1}$ and $\omega_{2}$ are the first terms of (IV-A) and
(IV-A), respectively. The same approach can also be applied to prove the
independence between the second terms. Finally, since
$\mathbf{Re}\left[r_{k}\right]$ and $\mathbf{Im}\left[r_{k}\right]$ are also
independent statistics (cf. Section III-A), then $\mathcal{I}_{1}$ and
$\mathcal{I}_{2}$ are mutually independent RVs, which completes the proof.
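The structure of $P_{1}$ and $P_{2}$ in (75)-(76) can be verified directly. The sketch below (for a hypothetical $M=5$ and an arbitrary sample vector, taking $N\sigma^{2}=1$ for simplicity) checks idempotency, the ranks (which equal the traces for projection matrices), and the quadratic-form identities (80)-(81).

```python
M = 5  # hypothetical number of pulses

# P1 = (1/M) L L^T and P2 = I - P1 as nested lists.
P1 = [[1 / M for _ in range(M)] for _ in range(M)]
P2 = [[(1 if i == j else 0) - 1 / M for j in range(M)] for i in range(M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(M)) for j in range(M)]
            for i in range(M)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(M) for j in range(M))

# Idempotency; for a projection matrix the rank equals the trace.
assert close(matmul(P1, P1), P1) and close(matmul(P2, P2), P2)
assert abs(sum(P1[i][i] for i in range(M)) - 1) < 1e-12        # rank(P1) = 1
assert abs(sum(P2[i][i] for i in range(M)) - (M - 1)) < 1e-12  # rank(P2) = M-1

# Quadratic forms (80)-(81) for an arbitrary sample vector.
r = [2.0, -1.0, 0.5, 3.0, 1.5]
w1 = sum(r[i] * sum(P1[i][j] * r[j] for j in range(M)) for i in range(M))
w2 = sum(r[i] * sum(P2[i][j] * r[j] for j in range(M)) for i in range(M))
mean = sum(r) / M
assert abs(w1 - sum(r) ** 2 / M) < 1e-12                        # Eq. (80)
assert abs(w2 - sum((x - mean) ** 2 for x in r)) < 1e-12        # Eq. (81)
```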
## Appendix C: Derivation of (IV-A)
To prove (IV-A), we make use of the doubly noncentral F-distribution, defined
as [29]
$\displaystyle\mathit{f}_{Z}\left(z|\mathcal{H}_{1}\right)=\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\left\\{\frac{z^{-1}\exp\left[\frac{-\lambda_{1}-\lambda_{2}}{2}\right]\left(\frac{\alpha_{1}z}{\alpha_{1}z+\alpha_{2}}\right){}^{\frac{\alpha_{1}}{2}}}{k!\
l!\ B\left(k+\frac{\alpha_{1}}{2},l+\frac{\alpha_{2}}{2}\right)}\right.$
$\displaystyle\left.\times\left(\frac{\alpha_{2}}{\alpha_{1}z+\alpha_{2}}\right)^{\frac{\alpha_{2}}{2}}\left(\frac{\lambda_{1}\alpha_{1}z}{2\left(\alpha_{1}z+\alpha_{2}\right)}\right)^{k}\left(\frac{\lambda_{2}\alpha_{2}}{2\left(\alpha_{1}z+\alpha_{2}\right)}\right)^{l}\right\\}$
(82)
Rearranging some terms, and after applying [39, Eq. (07.20.02.0001.01)],
(82) simplifies to
$\displaystyle\mathit{f}_{Z}$
$\displaystyle\left(z|\mathcal{H}_{1}\right)=z^{-1}\exp\left[\frac{-\lambda_{1}-\lambda_{2}}{2}\right]\left(\frac{\alpha_{1}z}{\alpha_{1}z+\alpha_{2}}\right)^{\frac{\alpha_{1}}{2}}$
$\displaystyle\times\left(\frac{\alpha_{2}}{\alpha_{1}z+\alpha_{2}}\right)^{\frac{\alpha_{2}}{2}}\sum_{k=0}^{\infty}\left\\{\left(\frac{\lambda_{1}\alpha_{1}z}{2\alpha_{1}z+2\alpha_{2}}\right)^{k}\right.$
$\displaystyle\times\left.\frac{{}_{1}F_{1}\left(\frac{1}{2}\left(2k+\alpha_{1}+\alpha_{2}\right);\frac{\alpha_{2}}{2};\frac{\alpha_{2}\lambda_{2}}{2\left(z\alpha_{1}+\alpha_{2}\right)}\right)}{k!\
B\left(k+\frac{\alpha_{1}}{2},\frac{\alpha_{2}}{2}\right)}\right\\}.$ (83)
Now, replacing $\alpha_{1}=2$, $\alpha_{2}=2(M-1)$,
$\lambda_{1}=M(\mu_{X}^{2}+\mu_{Y}^{2})/N\sigma^{2}$, and $\lambda_{2}=0$ (cf.
Section IV-A) in (83), and after applying [28, Eq. (15.2.1)] and [28, Eq.
(5.12.1)], we obtain
$\displaystyle\mathit{f}_{Z}\left(z|\mathcal{H}_{1}\right)=$
$\displaystyle\frac{\exp\left[-\frac{M\left(\mu_{X}^{2}+\mu_{Y}^{2}\right)}{2N\sigma^{2}}\right]}{\Gamma(M)}\left(\frac{M-1}{M+z-1}\right)^{M}$
$\displaystyle\times\sum_{k=0}^{\infty}\frac{\Gamma(k+M)}{\Gamma(k+1)^{2}}\left(\frac{Mz\left(\mu_{X}^{2}+\mu_{Y}^{2}\right)}{2N\sigma^{2}(M+z-1)}\right)^{k}.$
(84)
Finally, after using the definition of the Kummer confluent hypergeometric
function [39, Eq. (07.20.02.0001.01)], along with minor simplifications, we
obtain (IV-A), which completes the derivation.
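As a sanity check, the series (84) must integrate to one over $z\in(0,\infty)$. The minimal sketch below (with hypothetical values $M=4$ and noncentrality $\lambda_{1}=M(\mu_{X}^{2}+\mu_{Y}^{2})/N\sigma^{2}=3$, not taken from the paper) truncates the series and verifies this numerically via the substitution $z=u/(1-u)$ and Simpson's rule.

```python
import math

M = 4        # hypothetical number of pulses
lam = 3.0    # hypothetical noncentrality M*(mu_X**2 + mu_Y**2)/(N*sigma**2)

def pdf(z, terms=200):
    """Truncated series (84) for f_Z(z | H1)."""
    pref = math.exp(-lam / 2) / math.gamma(M) * ((M - 1) / (M + z - 1)) ** M
    x = lam * z / (2 * (M + z - 1))
    return pref * sum(
        math.exp(math.lgamma(k + M) - 2 * math.lgamma(k + 1)) * x ** k
        for k in range(terms))

# Integrate over (0, inf) via z = u/(1-u) and composite Simpson's rule.
def integrand(u):
    return 0.0 if u >= 1.0 else pdf(u / (1 - u)) / (1 - u) ** 2

n = 2000
h = 1.0 / n
total = (h / 3) * sum(
    (1 if i in (0, n) else 4 if i % 2 else 2) * integrand(i * h)
    for i in range(n + 1))

assert abs(total - 1) < 1e-4  # a valid density integrates to one
```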
## Appendix D: Mathematica’s implementation for the Bivariate Fox’s
$H$-function
ClearAll["Global`*"]; Remove[s];
H[x_, delta_, D_, beta_, B_] :=
 Module[{UpP, LoP, Theta, R1, T1, R2, T2, m, n},
  L = Length[Transpose[D]]; (*L represents the dimension of the Fox's H-function*)
  m = Length[D]; (*Number of Gamma functions in the numerator*)
  n = Length[B]; (*Number of Gamma functions in the denominator*)
  S = Table[Subscript[s, i], {i, 1, L}]; (*s is the vector containing the number of branches, in our case s=[s_1,s_2]*)
  UpP = Product[Gamma[delta[[1, j]] + Sum[D[[j, k]] S[[k]], {k, 1, L}]], {j, 1, m}];
  LoP = Product[Gamma[beta[[1, j]] + Sum[B[[j, k]] S[[k]], {k, 1, L}]], {j, 1, n}];
  Theta = UpP/LoP; (*Theta computes Eq. (2)*)
  T = Table[delta[[1, j]] + Sum[D[[j, k]] S[[k]], {k, 1, L}] > 0, {j, 1, m}]; (*Generates a restriction table*)
  R1 = Reduce[And @@ Flatten[{T[[1]], T[[3]]}]]; (*R1 computes the real interval that separates the poles of Gamma[s_1] from the poles of Gamma[M-1-s_1] and Gamma[M-s_1-s_2]*)
  T1 = Mean[{First@R1, Last@R1}];
  R2 = Reduce[And @@ Flatten[{T[[2]], T[[4]]}]]; (*R2 computes the real interval that separates the poles of Gamma[s_2] from the poles of Gamma[M-s_1-s_2]*)
  T2 = Mean[{First@R2, Last@R2}];
  W = 100; (*Limit for the integration along the imaginary axis*)
  kernel = Theta (x[[1, 1]])^(-S[[1]]) (x[[1, 2]])^(-S[[2]]) /. {S[[1]] -> s1, S[[2]] -> s2}; (*Prepare the kernel for Mathematica's integration*)
  N[1/(2*Pi*I)^2 NIntegrate[kernel, {s1, T1 - I W, T1 + I W}, {s2, T2 - I W, T2 + I W}], 20]]
## References
* [1] L. V. Blake, _Radar Range-performance Analysis_ , 1st ed. Norwood, MA, USA: Artech House, 1986.
* [2] A. Leon-Garcia, _Probability and Random Processes for Electrical Engineering_ , 3rd ed. New Jersey, NJ, USA: Pearson Prentice Hall, 1994.
* [3] H. Chernoff, “On the distribution of likelihood ratio,” _Ann. Math. Statist._ , vol. 25, no. 3, pp. 573–578, Sept. 1954.
* [4] S. M. Kay, _Fundamentals of Statistical Signal Processing: Estimation Theory_ , 1st ed. Upper Saddle River, NJ, USA: Prentice Hall PTR, 1993.
* [5] S. M. Kendall and A. Stuart, _The Advanced Theory of Statistics_ , 2nd ed. New York, NY, USA: Macmillan, 1979.
* [6] E. Conte, A. D. Maio, and C. Galdi, “Signal detection in compound-gaussian noise: Neyman-Pearson and CFAR detectors,” _IEEE Trans. Signal Process._ , vol. 48, no. 2, pp. 419–428, Feb. 2000.
* [7] S. M. Kay, _Fundamentals of Statistical Signal Processing: Detection Theory_ , 2nd ed. Upper Saddle River, NJ, USA: Prentice Hall PTR, 1998.
* [8] F. D. A. García, H. R. C. Mora, and N. V. O. Garzón, “GLRT detection of nonfluctuating targets in background noise using phased arrays,” in _Proc. 15th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WIMOB)_ , Barcelona, Spain, Oct. 2019, pp. 1–8.
* [9] S. S. Haykin and A. O. Steinhardt, _Adaptive Radar Detection and Estimation_ , 1st ed. New Jersey, NJ, USA: J. Wiley, 1992.
* [10] E. J. Kelly, “An adaptive detection algorithm,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. AES-22, no. 2, pp. 115–127, Mar. 1986.
* [11] I. S. Reed, J. D. Mallett, and L. E. Brennan, “Rapid convergence rate in adaptive arrays,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. AES-10, no. 6, pp. 853–863, Nov. 1974.
* [12] S. Bose and A. O. Steinhardt, “Optimum array detector for a weak signal in unknown noise,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 32, no. 3, pp. 911–922, Jul. 1996.
* [13] O. Besson, A. Coluccia, E. Chaumette, G. Ricci, and F. Vincent, “Generalized likelihood ratio test for detection of gaussian rank-one signals in gaussian noise with unknown statistics,” _IEEE Trans. Signal Process._ , vol. 65, no. 4, pp. 1082–1092, Feb. 2017.
* [14] N. B. Pulsone and C. M. Rader, “Adaptive beamformer orthogonal rejection test,” _IEEE Trans. Signal Process._ , vol. 49, no. 3, pp. 521–529, Mar. 2001.
* [15] F. C. Robey, D. R. Fuhrmann, E. J. Kelly, and R. Nitzberg, “A CFAR adaptive matched filter detector,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 28, no. 1, pp. 208–216, Jan. 1992.
* [16] S. Zhang, C. Guo, T. Wang, and W. Zhang, “ON–OFF analog beamforming for massive MIMO,” _IEEE Trans. Veh. Technol._ , vol. 67, no. 5, pp. 4113–4123, Jan. 2018.
* [17] S. Huber, M. Younis, A. Patyuchenko, G. Krieger, and A. Moreira, “Spaceborne reflector SAR systems with digital beamforming,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 48, no. 4, pp. 3473–3493, Oct. 2012.
* [18] S. R. J. Axelsson, “Noise radar for range/doppler processing and digital beamforming using low-bit ADC,” _IEEE Trans. Geosci. Remote Sens._ , vol. 41, no. 12, pp. 2703–2720, Dec. 2003.
* [19] D. Zhu, B. Li, and P. Liang, “A novel hybrid beamforming algorithm with unified analog beamforming by subspace construction based on partial CSI for massive MIMO-OFDM systems,” _IEEE Trans. Commun._ , vol. 65, no. 2, pp. 594–607, Nov. 2017.
* [20] M. A. Richards, J. Scheer, W. A. Holm, and W. L. Melvin, _Principles of Modern Radar: Basic Principles_ , 1st ed. West Perth, WA, Australia: SciTech, 2010.
* [21] G. V. Weinberg, “Noncoherent radar detection in correlated Pareto distributed clutter,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 53, no. 5, pp. 2628–2636, Oct. 2017.
* [22] G. V. Weinberg and C. Tran, “Noncoherent detector threshold determination in correlated Pareto distributed clutter,” _IEEE Geosci. Remote Sens. Lett._ , vol. 16, no. 3, pp. 372–376, Mar. 2019.
* [23] G. V. Weinberg, “Minimum-based sliding window detectors in correlated Pareto distributed clutter,” _IEEE Geosci. Remote Sens. Lett._ , vol. 14, no. 11, pp. 1958–1962, Nov. 2017.
* [24] M. I. Skolnik, _Introduction to Radar Systems_ , 3rd ed. New York, NY, USA: McGraw-Hill, 2001.
* [25] A. Papoulis, _Probability, Random Variables, and Stochastic Processes_ , 4th ed. New York, NY, USA: McGraw-Hill, 2002.
* [26] P. B. Patnaik, “The non-central $\chi^{2}$ and F-distributions and their applications,” _Biometrika_ , vol. 36, no. 1, pp. 202–232, Jun. 1949.
* [27] P. C. B. Phillips, “The true characteristic function of the F distribution,” _Biometrika_ , vol. 69, no. 1, p. 261–264, Apr. 1982.
* [28] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, _NIST Handbook of Mathematical Functions_ , 1st ed. Washington, DC: US Dept. of Commerce: National Institute of Standards and Technology (NIST), 2010.
* [29] W. G. Bulgren, “On representations of the doubly non-central F distribution,” _J. Amer. Statist._ , vol. 66, no. 333, pp. 184–186, Mar. 1971.
* [30] F. D. A. García, A. C. F. Rodriguez, G. Fraidenraich, and J. C. S. Santos Filho, “CA-CFAR detection performance in homogeneous Weibull clutter,” _IEEE Geosci. Remote Sens. Lett._ , vol. 16, no. 6, pp. 887–891, Jun. 2019.
* [31] Y. Abo Rahama, M. H. Ismail, and M. S. Hassan, “On the sum of independent Fox’s $H$ -function variates with applications,” _IEEE Trans. Veh. Technol._ , vol. 67, no. 8, pp. 6752–6760, Aug. 2018.
* [32] C. R. N. da Silva, E. J. Leonardo, and M. D. Yacoub, “Product of two envelopes taken from $\alpha-\mu$, $\kappa-\mu$ and $\eta-\mu$ distributions,” _IEEE Trans. Commun._ , vol. PP, no. 99, pp. 1–1, Mar. 2017.
* [33] C. H. M. de Lima, H. Alves, and P. H. J. Nardelli, “Fox $H$-function: A study case on variate modeling of dual-hop relay over Weibull fading channels,” in _2018 IEEE Wireless Communications and Networking Conference (WCNC)_ , Apr. 2018, pp. 1–5.
* [34] F. D. A. García, H. R. C. Mora, G. Fraidenraich, and J. C. S. Santos Filho, “Alternative representations for the probability of detection of non-fluctuating targets,” _Electron. Lett._ , vol. 56, no. 21, pp. 1136–1139, Oct. 2020.
* [35] N. T. Hai and H. M. Srivastava, “The convergence problem of certain multiple Mellin-Barnes contour integrals representing H-functions in several variables,” _Computers & Mathematics with Applications_, vol. 29, no. 6, pp. 17–25, 1995.
* [36] M. Abramowitz and I. A. Stegun, _Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables_ , 10th ed. Washington, DC: US Dept. of Commerce: National Bureau of Standards, 1972.
* [37] A. P. Prudnikov, Y. A. Bryčkov, and O. I. Maričev, _Integral and Series: Vol. 3_ , 2nd ed., Fizmatlit, Ed. Moscow, Russia: Fizmatlit, 2003.
* [38] G. Fubini, “Sugli integrali multipli.” _Rom. Acc. L. Rend. (5)_ , vol. 16, no. 1, pp. 608–614, 1907.
* [39] Wolfram Research, Inc. (2018), _Wolfram Research_ , Accessed: Sept. 19, 2020. [Online]. Available: http://functions.wolfram.com
* [40] H. R. Alhennawi, M. M. H. E. Ayadi, M. H. Ismail, and H. A. M. Mourad, “Closed-form exact and asymptotic expressions for the symbol error rate and capacity of the H-function fading channel,” _IEEE Trans. Veh. Technol._ , vol. 65, no. 4, pp. 1957–1974, Apr. 2016.
* [41] F. Yilmaz and M. S. Alouini, “Product of the powers of generalized nakagami-$m$ variates and performance of cascaded fading channels,” in _Proc. IEEE Global Telecommun. Conf. (GLOBECOM)_ , Abu Dhabi, UAE, Nov. 2009, pp. 1–8.
* [42] F. D. A. García, H. R. C. Mora, G. Fraidenraich, and J. C. S. Santos Filho, “Square-law detection of exponential targets in Weibull-distributed ground clutter,” _IEEE Geosci. Remote Sens. Lett._ , to be published, doi: 10.1109/LGRS.2020.3009304.
* [43] E. Kreyszig, _Advanced Engineering Mathematics_ , 10th ed. New Jersey, NJ, USA: John Wiley & Sons, 2010.
* [44] W. G. Cochran, “The distribution of quadratic forms in a normal system, with applications to the analysis of covariance,” _Proc. Camb. Phil. Soc._ , vol. 30, no. 2, p. 178–191, 1934.
* [45] M. D. Springer, _The Algebra of Random Variables_. New York, NY, USA: Wiley, 1979.
# A Generalization of the Greene-Kleitman Duality Theorem
Frank Y. Lu Department of Mathematics, Princeton University, Princeton, NJ
08544, USA<EMAIL_ADDRESS>
(Date: August 27, 2024)
Abstract: In this paper, we describe and prove a generalization of both the
classical Greene-Kleitman duality theorem for posets and the local version
proved recently by Lewis-Lyu-Pylyavskyy-Sen in studying discrete solitons,
using an approach more closely linked to that of the classical case.
## 1\. Introduction
The Greene-Kleitman duality theorem for finite posets, first described in
Greene’s paper, [Gre76] (see also [BF99], upon which the following exposition
is loosely based) states the following result. Given a poset $P,$ let $A_{k}$
be the maximal possible sum of the lengths of $k$ disjoint increasing
sequences of elements (chains), and $D_{k}$ is the maximal possible sum of the
lengths of $k$ disjoint sequences of elements where no two elements are
pairwise comparable (anti-chains). Then $A_{k},D_{k}$ are conjugate in the
following sense: $A_{1}+(A_{2}-A_{1})+\cdots$ and $D_{1}+(D_{2}-D_{1})+\cdots$
form conjugate partitions of $n.$ See [BF99][§8] for a description of one
proof of this result, attributed to A. Frank, using a graph theoretic
construction, which will be relevant to us. Here, we will refer to this as the
classical Greene-Kleitman duality theorem.
The duality of these partitions lends itself to applications, which [BF99]
discusses in detail. For instance, [BF99][§3-4] goes into how one can
interpret results about tableaux associated with permutations through this lens
of the duality. This is done by using the above theorem on the permutation
poset, or the poset associated with a given permutation $\sigma$ of $n$
elements by imposing the ordering, on the set of elements
$\\{(i,\sigma(i))|i=1,2,\ldots,n\\},$ that $(i,\sigma(i))<(j,\sigma(j))$ if
and only if $i<j$ and $\sigma(i)<\sigma(j).$
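The permutation poset just described is easy to build explicitly. A small sketch (using the permutation $2,4,3,1$, which reappears as an example later in this introduction) computes its order relation:

```python
from itertools import combinations

sigma = {1: 2, 2: 4, 3: 3, 4: 1}  # the permutation 2,4,3,1
elements = [(i, sigma[i]) for i in sigma]

# (i, sigma(i)) < (j, sigma(j)) iff i < j and sigma(i) < sigma(j)
less = {(p, q) for p, q in combinations(elements, 2)
        if p[0] < q[0] and p[1] < q[1]}

# Chains of the permutation poset correspond to increasing subsequences:
# (1,2) < (2,4) since 1 < 2 and 2 < 4, while (2,4) and (3,3) are incomparable.
assert ((1, 2), (2, 4)) in less
assert ((2, 4), (3, 3)) not in less and ((3, 3), (2, 4)) not in less
```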
Recently, another related duality result was published in [Lew+20], which
[Lew] (on which the exposition regarding this theorem is based) calls the
localized Greene’s theorem. Here, we start with a permutation $\sigma$ on a
set of elements $\\{1,2,\ldots,n\\},$ and consider the sequence
$\sigma(1),\sigma(2),\ldots,\sigma(n).$ From there, the same sort of duality
as in the original theorem was shown; however, instead of the classical
quantities being conjugate, it is the quantities
$A^{\prime}_{k}$ and $D^{\prime}_{k}$ that are conjugate, which are defined as
follows. For $A^{\prime}_{k},$ we consider, over all sets of $k$ disjoint
subsequences, the maximum of the sum of the ascents of each subsequence, where
the ascent of a subsequence $s_{1},s_{2},\ldots s_{m}$ is the number of
indices $i$ so that $s_{i}<s_{i+1},$ plus one (or $0$ if the sequence is
empty). For $D^{\prime}_{k},$ this is defined to be the maximum, over all sets
of $k$ consecutive subsequences, of the sum of the lengths of the maximal
descending sequences of each subsequence. Note the consecutive condition here,
in contrast with the classical theorem: for instance, if we have $2,4,3,1$ as
the sequence, we may not take the sequences to be $2,1$ and $4,3.$ The proof
in [Lew+20] of this result differs substantially from the classical proof,
utilizing the study of discrete solitons in the paper.
In this paper, we unite these two theorems with a generalization, Theorem 2.1,
which we detail in the next section, using an overall structure of the proof
similar to the proof provided by Frank. Here, we translate the problem into a
problem about direct graphs and flows on direct graphs; again, see [BF99][§8]
for one version of Frank’s proof, for instance, upon which the main ideas for
this proof are built. However, the structure of the graph is constructed in a
way where flows and potentials correspond more “naturally” to sequences.
Specifically, in Section 2, we introduce the generalized theorem and the
specific information necessary. From there, in section 3 we set up the
required graph theory to allow for the translation of the problem to this
graph theoretic construction, along the lines of the classical proof. From
here, we prove some basic properties of the graph construction which will be
useful in Section 4, before proceeding on to the core part of the proof. In
Section 5, we link the two desired poset-based quantities to the graph-
theoretic construction and use this to arrive at an inequality, which we
sharpen to the desired equality in Section 6. Finally, in Section 7, we show
that both versions of the Greene-Kleitman duality theorem follow as
corollaries of this general theorem, and provide another interesting special
case.
Thanks to Dr. Pavlo Pylyavskyy for introducing me to this problem, as well as
offering suggestions on drafts, including on the exposition in sections 1, 2,
and 3, and the abstract. Thanks as well to Dr. Emily Gunawan for suggestions
on the draft, especially with regards to the exposition of sections 2 and 3,
including the example, and thanks to Dr. Joel Lewis for comments on the
earlier version of the draft, in particular on the exposition in sections 2
and 3 as well.
## 2\. The Generalized Problem
The exposition here loosely adapts that of [Lew] in generalizing this problem,
in that the notation and exposition generalize those of the localized Greene’s
theorem given in [Lew].
Given a poset $P$ on elements $S_{P}=\\{e_{1},e_{2},\ldots,e_{n}\\}$ and a
bijection $h:S_{P}\rightarrow\\{1,2,\ldots,n\\},$ we pick a set $C_{P}$ of
pairs of distinct elements in $S_{P}$ with the following properties:
1. (1)
Given $x,y\in S_{P},$ if $x<y$ and $h(x)<h(y),$ then $(x,y)\in C_{P}.$
2. (2)
Given that $(x,y)\in C_{P},$ we have that $h(x)<h(y).$
3. (3)
Given that $(x,y)\in C_{P}$ and $(y,z)\in C_{P},$ we have that $(x,z)\in
C_{P}.$
In other words, $C_{P}$ is some binary transitive relation on $S_{P}$ that is
a subset of the strict total ordering given by $h,$ which also contains the
intersection of the relations given by $P$ and the relations given by $h.$
Given the set $C_{P}$ and bijection $h,$ we say that a sequence of distinct
elements $s_{1},s_{2},\ldots,s_{m}$ is adjacentable if for each $j,1\leq j\leq
m-1,$ $(s_{j},s_{j+1})\in C_{P}.$ In addition, we say that the sequence is
$h-$ordered if it satisfies that $h(s_{j})<h(s_{j+1})$ for each $j.$ Note that
adjacentable sequences are necessarily $h-$ordered, but the reverse isn’t true
if $C_{P}$ is a strictly smaller relation than $h.$
Now, let $S$ be an adjacentable sequence of distinct elements
$s_{1},s_{2},\ldots,s_{m}.$ Define $asc(S)$ to be the number of indices $j$ so
that $s_{j}<s_{j+1},$ plus one, or to equal $0$ if the sequence is empty.
In addition, for any $h-$ordered sequence $S$ of distinct elements, define
$desc(S)$ to be the length of the longest subsequence of $S,$ say
$s_{1},s_{2},\ldots,s_{r},$ so that $s_{i}\not<s_{j}$ for each $i<j.$
###### Example 2.1.
Suppose we have a poset $P$ on the set $\\{a,b,c,d,e\\},$ with the cover
relations $a<b,b<d,c<d,d<e,$ and the function $h$ that takes on the following
values:
$\displaystyle h(a)=1$ $\displaystyle h(b)=3$ $\displaystyle h(c)=5$
$\displaystyle h(d)=4$ $\displaystyle h(e)=2.$
Let $C_{P}$ be the set
$\\{(x,y)\in\\{a,b,d,e\\}\times\\{a,b,d,e\\}|h(x)<h(y)\\}\cup\\{(a,c)\\}.$ If
we take the sequence $S$ to be $(a,e,b,d),$ we have $asc(S)=3,$ as $a<e$ and
$b<d.$ Also $desc(S)=2,$ by taking the subsequence $e,d.$ Note that this will
naturally be $0$ if $S$ is empty.
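The quantities $asc(S)$ and $desc(S)$ of Example 2.1 can be computed mechanically. The sketch below encodes the poset by the transitive closure of its cover relations and reproduces $asc(S)=3$ and $desc(S)=2$ for $S=(a,e,b,d)$ (the brute-force $desc$ is only meant for tiny examples):

```python
from itertools import combinations

covers = {("a", "b"), ("b", "d"), ("c", "d"), ("d", "e")}

# Transitive closure: (x, y) in less means x < y in the poset P.
less = set(covers)
changed = True
while changed:
    new = {(x, z) for (x, y) in less for (y2, z) in less if y == y2}
    changed = not new <= less
    less |= new

def asc(S):
    """Number of indices j with s_j < s_{j+1} in P, plus one (0 if empty)."""
    if not S:
        return 0
    return 1 + sum((S[j], S[j + 1]) in less for j in range(len(S) - 1))

def desc(S):
    """Length of the longest subsequence with s_i not< s_j for all i < j."""
    best = 0
    for r in range(len(S) + 1):
        for sub in combinations(S, r):
            if all((sub[i], sub[j]) not in less
                   for i in range(r) for j in range(i + 1, r)):
                best = max(best, r)
    return best

S = ("a", "e", "b", "d")
assert asc(S) == 3   # ascents at a < e and b < d
assert desc(S) == 2  # e.g. the subsequence (e, d)
```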
We say that two disjoint $h-$ordered sequences $s_{1},s_{2},\ldots,s_{m}$ and
$t_{1},t_{2},\ldots,t_{l}$ of $P$ are semi-overlapping if and only if there
exist indices $i,j,k,r$ so that $(t_{j},s_{i})$ and $(s_{k},t_{r})$ lie in
$C_{P}.$ For instance, note that $a,e,d$ and $b,c$ are semi-overlapping (since
$(b,d)$ and $(a,b)$ lie in $C_{P}$), but $a,e$ and $b,c$ aren’t.
From here, define $A_{k}^{\prime}$ to be the maximum value, over all sets of
$k$ disjoint adjacentable sequences $\\{S_{1},S_{2},\ldots,S_{k}\\}$ of
$asc(S_{1})+asc(S_{2})+\cdots+asc(S_{k}).$ Similarly, define $D_{k}^{\prime}$
to be the maximum value, over all sets $\\{S_{1},S_{2},\ldots,S_{k}\\},$ of
$k$ disjoint $h-$ordered sequences where no two are semi-overlapping, of
$desc(S_{1})+desc(S_{2})+\cdots+desc(S_{k}).$
For Example 2.1, we compute that $A_{1}^{\prime}=3,$ using the sequence
$S=(a,e,b,d).$ Similarly, we see that $A_{2}^{\prime}=4,$ using
$S_{1}=(a,e,b,d)$ and $S_{2}=(c),$ and $A_{3}^{\prime}=5,$ using
$S_{1}=(a,e),$ $S_{2}=(b,d)$ and $S_{3}=(c).$ We compute also that
$D_{1}^{\prime}=3,$ using the sequence $(e,b,c),$ $D_{2}^{\prime}=4$ using the
sequences $(e,b,c)$ and $(d),$ and $D_{3}^{\prime}=5$ using the sequences
$(a),(e,b,c),$ and $(d).$
Given these quantities, we have the following theorem.
###### Theorem 2.1.
Let $\lambda_{1}=A_{1}^{\prime},$ and $\mu_{1}=D_{1}^{\prime},$ and for $k\geq
2,$ let
$\lambda_{k}=A_{k}^{\prime}-A_{k-1}^{\prime},\mu_{k}=D_{k}^{\prime}-D_{k-1}^{\prime}.$
Then, the sums $n=\lambda_{1}+\lambda_{2}+\cdots$ and
$n=\mu_{1}+\mu_{2}+\cdots$ are partitions; moreover, they are conjugate
partitions.
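For the data of Example 2.1, Theorem 2.1 can be checked directly. A minimal sketch, using the values $A_{k}^{\prime}$ and $D_{k}^{\prime}$ computed above for Example 2.1, recovers the partitions $\lambda$ and $\mu$ and verifies that they are conjugate:

```python
def conjugate(p):
    """Conjugate of a partition given as a weakly decreasing tuple."""
    return tuple(sum(1 for part in p if part > i) for i in range(max(p)))

# Values computed above for Example 2.1 (n = 5).
A = [3, 4, 5]  # A'_1, A'_2, A'_3
D = [3, 4, 5]  # D'_1, D'_2, D'_3

lam = tuple(a - b for a, b in zip(A, [0] + A))  # successive differences
mu = tuple(d - e for d, e in zip(D, [0] + D))

assert sum(lam) == sum(mu) == 5
assert conjugate(lam) == mu  # here lambda = (3,1,1) is self-conjugate
```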
For instance, as we’ll show in Section 7, if we let $P$ be the natural
ordering on the set of elements $\\{1,2,\ldots,n\\},$ if we take $C_{P}$ to be
the set $\\{(x,y)\in S_{P}\times S_{P}|h(x)<h(y)\\}$ and $h$ to be the
permutation, we will arrive at the localized Greene’s theorem for permutations
from [Lew+20]. Also, if we let $P$ be a poset, $h$ a linear extension,
and $C_{P}=\\{(x,y)|x<y\\},$ we will arrive at the Greene-Kleitman theorem
from [Gre76]. This latter result, however, will require a little bit more
work, as we will do in Section 7.
As mentioned before, the general method of proof is similar to [BF99] §7 and
§8, which was used to prove the classical Greene-Kleitman theorem.
## 3\. Setup
In this section, we establish a directed graph which reflects the structure of
the poset $P.$ The exposition in this section follows [BF99] §7, with
modifications to the theorems in the section, though we will also borrow some
exposition from [Wil19] when needed for modifications.
### 3.1. The Graph
Given a poset $P$ on set $S_{P}$ with $n$ elements, and bijection $h$ between
elements of $P$ and the set $\\{1,2,\ldots,n\\},$ we now construct a directed
graph $G_{P,h,C_{P}}=(V,E).$ Here, the set $V$ consists of $2n+2$ elements: a
source vertex $b_{0},$ a sink vertex $t_{n+1},$ and for each element $e\in P,$
we have a “top” vertex $t_{h(e)}$ and a “bottom” vertex $b_{h(e)}.$ The set of
edges $E$ is the union of the following four sets, where we have the ordered
pair $(v,w)$ represent a directed edge from vertex $v$ to vertex $w:$
1. (1)
The set $\\{(b_{0},t_{i})|1\leq i\leq n\\}$ of edges from $b_{0}$ to each of
the vertices $t_{i}.$
2. (2)
The set $\\{(b_{i},t_{n+1})|1\leq i\leq n\\}$ from each of the $b_{i}$ to
$t_{n+1}.$
3. (3)
The set $\\{(t_{i},b_{i})|1\leq i\leq n\\}$ from each $t_{i}$ to its
corresponding $b_{i}.$
4. (4)
The set $\\{(b_{i},t_{j})|(h^{-1}(i),h^{-1}(j))\in C_{P}\\}.$
Notice that these four sets of edges are pairwise disjoint. For Example 2.1,
we get the graph shown in Figure 1.
Figure 1. $G_{P,h,C_{P}}$ for Example 2.1
Here, green represents the first set, red represents the second set, blue
represents the third, and black represents the fourth set. Next to each pair of
vertices $t_{i},b_{i}$ for $i$ from $1$ to $5$ is the element in $P$ that it
corresponds to (namely, $h^{-1}(i)$).
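As a concrete check of the construction, the sketch below builds $G_{P,h,C_{P}}$ for Example 2.1 and counts its vertices and edges; here $|C_{P}|=7$, so the graph has $2n+2=12$ vertices and $3n+|C_{P}|=22$ edges.

```python
n = 5
h = {"a": 1, "b": 3, "c": 5, "d": 4, "e": 2}

# C_P of Example 2.1: h-increasing pairs within {a,b,d,e}, plus (a,c).
block = ["a", "b", "d", "e"]
CP = {(x, y) for x in block for y in block if h[x] < h[y]} | {("a", "c")}

V = ({("b", 0), ("t", n + 1)}
     | {("t", i) for i in range(1, n + 1)}
     | {("b", i) for i in range(1, n + 1)})
E = ({(("b", 0), ("t", i)) for i in range(1, n + 1)}        # set (1): source edges
     | {(("b", i), ("t", n + 1)) for i in range(1, n + 1)}  # set (2): sink edges
     | {(("t", i), ("b", i)) for i in range(1, n + 1)}      # set (3): internal edges
     | {(("b", h[x]), ("t", h[y])) for (x, y) in CP})       # set (4): C_P edges

assert len(V) == 2 * n + 2 == 12
assert len(CP) == 7 and len(E) == 3 * n + len(CP) == 22
```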
### 3.2. Minimal-Cost Flow
We now consider imposing a flow onto the graph, and finding, for a given flow
value $v$ (defined as in [BF99] as the sum of the flows assigned to each edge
going out of a source node), the minimal cost flow. As mentioned before, the
exposition we use here is similar to that of [BF99] §7, with some adaptations
from [Wil19].
We use the definition of flow used in [BF99] §7: a flow on a directed graph
with vertex set $V$ and edge set $E,$ with one source node and one sink node,
is a function $f:E\rightarrow\mathbb{R}_{\geq 0}$ so that, for each vertex $v$
that isn’t a source or a sink, $\sum\limits_{(w,v)\in
E}f((w,v))=\sum\limits_{(v,w)\in E}f((v,w)).$ This property is also known as
flow conservation. The value of a flow is then just $\sum\limits_{(s,w)\in
E}f((s,w)),$ where $s$ is the source node. Notice that this flow can be
restricted in value; the capacity of a given edge gives us the bounds for what
values $f$ can take on the edge. For this discussion, as $f$ is nonnegative,
we let the capacity function simply be the maximum value that $f$ can take on
each edge.
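As a concrete check of these definitions, here is a small sketch (our own illustration, with hypothetical function names) that verifies nonnegativity, the capacity bounds, and flow conservation, and computes the value of a flow:

```python
def is_flow(f, V, E, source, sink, capacity):
    """Check that f is a feasible flow: nonnegative, within capacity,
    and conserved at every vertex other than the source and sink."""
    for e in E:
        if not (0 <= f.get(e, 0) <= capacity(e)):
            return False
    for v in V:
        if v in (source, sink):
            continue
        inflow = sum(f.get((x, y), 0) for (x, y) in E if y == v)
        outflow = sum(f.get((x, y), 0) for (x, y) in E if x == v)
        if inflow != outflow:            # flow conservation fails at v
            return False
    return True


def flow_value(f, E, source):
    """The value of a flow: total flow on the edges leaving the source."""
    return sum(f.get((x, y), 0) for (x, y) in E if x == source)
```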
Now, for the costs of this graph, define the function
$c:E\rightarrow\mathbb{Z},$ so that an edge $e=(v,w)\in E$ has cost $-1$ if
$w=t_{n+1},$ or $v=b_{i},w=t_{j},$ where $h^{-1}(i)<h^{-1}(j)$ and $i<j,$ and
all other edges have cost $0.$ Define as well the capacity function
$u:E\rightarrow\mathbb{Z}$ that sets the capacity of all edges to be $1.$
The exposition from here follows that of [Wil19], as [BF99] doesn’t provide us
with a sufficiently general context, though we will return to the mechanics of
[BF99] afterwards.
Working more directly in the context of [Wil19], we have the following
definition:
###### Definition 1 (Definition 5.2 from [Wil19]).
Given a directed graph with vertices $V$ and edges $E,$ we add to the edges
the reverse of these edges (so if $(v,w)\in E,$ we add $(w,v)$), and we denote
the total set of edges as $E^{\prime}.$ Suppose we are also given a function
$u:E^{\prime}\rightarrow\mathbb{Z}$ with $u(e)\geq 0$ for all $e\in
E^{\prime},$ and cost function $c:E^{\prime}\rightarrow\mathbb{Z},$ where
$c((v,w))=-c((w,v)).$ Then, a circulation is a function
$g:E^{\prime}\rightarrow\mathbb{R}$ satisfying the
following properties:
* •
For all edges $e\in E^{\prime},$ we have that $g(e)\leq u(e).$
* •
For all vertices $i\in V,$ we have that $\sum\limits_{k\in V|(i,k)\in
E^{\prime}}g((i,k))=0.$
* •
For all vertices $v,w$ so that $(v,w)\in E^{\prime},$ $g((v,w))=-g((w,v)).$
We say that the cost of the circulation is $\frac{1}{2}\sum\limits_{e\in
E}c(e)g(e),$ which we denote as $c(g).$
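The three conditions and the cost $c(g)$ can be checked mechanically. The sketch below is our own illustration (hypothetical name; it returns `None` when $g$ is not a circulation), with $g,u,c$ given as dictionaries on the symmetrized edge set $E^{\prime}$:

```python
def is_circulation(g, u, c, E_prime):
    """Check the three circulation conditions of Definition 1 and, if
    they hold, return the cost (1/2) * sum_e c(e) g(e); else None."""
    vertices = {v for e in E_prime for v in e}
    for e in E_prime:
        if g[e] > u[e]:                   # capacity constraint violated
            return None
        if g[e] != -g[(e[1], e[0])]:      # antisymmetry violated
            return None
    for i in vertices:                    # conservation at every vertex
        if sum(g[(i, k)] for k in vertices if (i, k) in E_prime) != 0:
            return None
    return sum(c[e] * g[e] for e in E_prime) / 2
```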
We have the following theorem from [Wil19] which corresponds to [BF99][Theorem
7.1], giving us certain criteria for when we have the minimal cost flow. We
weaken the theorem to only the needed conditions.
###### Theorem 3.1 (part of Theorem $5.3$ from [Wil19]).
The following are equivalent for a circulation $g,$ given capacity and cost
functions $u,c$ respectively:
* •
$g$ is a minimum-cost circulation.
* •
There exists a potential function $p:V\rightarrow\mathbb{R}$ so that for all
vertices $v,w$ where $(v,w)\in E^{\prime}$ and $u((v,w))-g((v,w))>0,$ we have
$c((v,w))+p(v)-p(w)\geq 0.$
Using this theorem, we prove that a strengthened version of [BF99][Theorem
7.1] holds. Suppose that we have a flow $f$ and potential $p$ on
$G_{P,h,C_{P}},$ with the cost function $c$ and capacity function $u,$ so that
$f$ always lies between $0$ and $u$ for each edge in $E.$ We prove the
following theorem.
###### Theorem 3.2 (Modified Theorem 7.1 from [BF99]).
Let $G$ be a directed graph, with set of vertices $V$ and set of edges $E,$
with a single source and a single sink vertex. If we have a flow $f$ and
potential $p$ so that
$p(w)-p(v)<c((v,w))\implies f((v,w))=0,$
and
$p(w)-p(v)>c((v,w))\implies f((v,w))=u((v,w))$
for any $(v,w)\in E,$ then $f$ has minimal cost over all flows of the same
value; that is, the sum of $f(e)c(e)$ over all edges $e$ is minimal for this
flow.
###### Proof.
Suppose that the flow $f$ satisfies these conditions, with value $v.$ We’ll
show that it is minimal by comparison with [Wil19][Theorem 5.3]. Denote the
source node $a$ and the sink node $b.$
First, as in [Wil19][§5], given the graph $G=(V,E)$ and a desired flow value
$v,$ add to $G$ the vertex $s,$ and two edges, one from $s$ to $a,$ and one
from $b$ to $s,$ both with capacity $v$ and cost $0.$ Given $p$ as well,
extend the potential function so that $p(s)=p(a)$ as well.
In addition, once we’ve added these two edges, perform the modifications in
the beginning of Definition 1. Specifically, let $E^{\prime}$ be the new set
of edges. Extend the capacity function $u$ to
$u^{\prime}:E^{\prime}\rightarrow\mathbb{R}$ that is $v$ on the edges $(s,a)$
and $(b,s),$ $-v$ on their reverses, and $0$ on all the other edges not in
$E$. Furthermore, extend the cost function to equal $0$ on the new edges.
Now, [Wil19] notes that given this flow, there is a corresponding circulation
with the same cost. We show this more precisely.
To do this, given any flow $f^{\prime}$ of value $v,$ construct a function
$g^{\prime}$ for which the following hold:
$g^{\prime}((v,w))=f^{\prime}((v,w))$ for all $(v,w)$ in $E.$
$g^{\prime}((v,w))=-f^{\prime}((w,v))$ for all $(v,w)$ in $E^{\prime}-E.$
$g^{\prime}((s,a))=g^{\prime}((b,s))=v.$
$g^{\prime}((a,s))=g^{\prime}((s,b))=-v.$
It is not hard to check that this is a circulation.
By construction, the cost of $g^{\prime}$ and the cost of $f^{\prime}$ are the
same. Indeed, the two new edges and their respective “reversed” edges have cost
$0$ and so don’t contribute to the total cost. Also, observe that for every
edge $(v,w)\in E,$ the total contribution to the cost of the circulation from
$(v,w)$ and $(w,v)$ is
$\frac{1}{2}(c((v,w))g^{\prime}((v,w))+c((w,v))g^{\prime}((w,v)))=c((v,w))f^{\prime}((v,w)),$
and summing these up yields the same cost.
Now, let $g$ be the circulation constructed from the particular flow $f$
mentioned at the beginning of the proof. Notice that for all $(v,w)\in
E^{\prime},$ we have three cases to consider.
1. (1)
First, if $(v,w)\in E,$ then $u((v,w))>f((v,w))$ implies, by the contrapositive
of the second condition, that $p(w)-p(v)\leq c((v,w)),$ or that
$c((v,w))+p(v)-p(w)\geq 0.$
2. (2)
Next, if $(v,w)$ is such that $(w,v)\in E,$ then the residual capacity
$u^{\prime}((v,w))-g((v,w))$ is positive exactly when $f((w,v))>0,$ which by
the contrapositive of the first condition means that $p(v)-p(w)\geq
c((w,v)),$ or that $c((v,w))+p(v)-p(w)\geq 0.$
3. (3)
For the last four edges, notice that their circulation equals their capacity,
so there’s nothing that needs to be checked here.
Then, it follows that $g$ is a minimum-cost circulation, so the cost of $g$ is
at most the cost of $g^{\prime}.$ But then the cost of $f$ is at most the cost
of $f^{\prime},$ for any flow $f^{\prime}$ with value $v,$ as desired. ∎
In particular, notice that the first condition in [BF99], namely that the
potential is bounded between its values at the source and sink nodes, is not
necessary to maintain for minimality, thus allowing us more flexibility with
the potential function. We are now able to return back to the notation of
[BF99], but now with the possibility of negative potentials and costs.
### 3.3. Applying the Algorithm
We now apply [BF99][Algorithm 7.2] to the graph $G_{P,h,C_{P}},$ with the
$2n+2$ vertices $b_{0},t_{1},b_{1},\ldots,t_{n+1},$ but with a few
modifications (specifically to the initial conditions), which are produced
below. Let $V$ be the set of vertices and $E$ the set of edges in this graph.
###### Algorithm 1 (Modified Algorithm 7.2 from [BF99]).
The algorithm is as follows:
1. (1)
To initialize the flow and potential, set $f(e)=0$ for every edge $e\in E,$ and
set $p(b_{i})=p(t_{i})=-i$ for every index $i$ (so that $p(b_{0})=0$ and
$p(t_{n+1})=-(n+1)$).
2. (2)
Let $G^{\prime}$ be the modified graph with the same vertices and edges
$\bar{E}=\\{(v,w):(v,w)\in
E,p(w)-p(v)=c((v,w)),f((v,w))<u((v,w))\\}\cup\\{(w,v):(v,w)\in
E,p(w)-p(v)=c((v,w)),f((v,w))>0\\}.$ From here, let $X$ be the set of vertices
$v$ where a path exists from the source $b_{0}$ to $v$ using the edges in
$\bar{E}.$ If $t_{n+1}\in X,$ then go to step 3. Otherwise, go to step 4.
3. (3)
There exists a path through vertices $b_{0},v_{1},v_{2},\ldots,v_{k},t_{n+1},$
where all these vertices lie in $X$ and consecutive vertices are joined by
edges in $\bar{E}.$ Increase the flow of each forward edge along this path by
$1$ (decreasing it by $1$ on each reversed edge), then go to step 5.
4. (4)
Otherwise, for every vertex not in $X,$ increase the potential of that vertex
by $1.$ Go to step 5 next.
5. (5)
If we have maximal flow, stop. Otherwise, go to step 2 again.
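The steps above can be sketched directly in code. The following is our own illustration of Algorithm 1 (hypothetical names; vertices are encoded as `('b', i)` and `('t', i)`, all capacities are $1,$ and `cost` maps each edge to $0$ or $-1$). It repeats steps 2–5 until the flow value reaches $n,$ which, as shown later in the paper, is the maximal flow value for $G_{P,h,C_{P}}$:

```python
from collections import deque

def algorithm1(n, edges, cost):
    """Sketch of Algorithm 1 on G_{P,h,C_P}. Returns the final flow f
    and potential p. All edge capacities are 1."""
    source, sink = ('b', 0), ('t', n + 1)
    f = {e: 0 for e in edges}                    # step 1: zero flow ...
    p = {('b', i): -i for i in range(n + 1)}     # ... and p(b_i) = p(t_i) = -i
    p.update({('t', i): -i for i in range(1, n + 2)})
    value = 0
    while value < n:                             # maximal flow value is n
        # Step 2: build the admissible edge set E-bar.
        adj = {}
        for (v, w) in edges:
            if p[w] - p[v] == cost[(v, w)]:
                if f[(v, w)] < 1:                # unsaturated forward edge
                    adj.setdefault(v, []).append((w, (v, w), 1))
                if f[(v, w)] > 0:                # used edge, traversed backwards
                    adj.setdefault(w, []).append((v, (v, w), -1))
        # BFS from b_0; `parent` records the set X and the search tree.
        parent = {source: None}
        queue = deque([source])
        while queue:
            x = queue.popleft()
            for (y, e, d) in adj.get(x, []):
                if y not in parent:
                    parent[y] = (x, e, d)
                    queue.append(y)
        if sink in parent:
            # Step 3: augment along the discovered path by 1 unit.
            y = sink
            while parent[y] is not None:
                x, e, d = parent[y]
                f[e] += d
                y = x
            value += 1
        else:
            # Step 4: raise the potential of every vertex outside X.
            for x in p:
                if x not in parent:
                    p[x] += 1
    return f, p
```

On the chain poset with three elements (so that edge $(b_{i},t_{j})$ exists for $i<j$ and carries cost $-1$), the final flow has value $3$ and cost $-3,$ matching $-A^{\prime}_{3}$ as in Proposition 5.1 below.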
[BF99][Theorem 7.3] says that the above algorithm maintains a minimum cost
flow for each flow value at each step, comparing with the conditions in
[BF99][Theorem 7.1]. We will explicitly prove that this theorem holds even if
we strip the potential bounding condition, for the sake of completeness.
###### Theorem 3.3 (Modified Theorem 7.3, [BF99]).
The above algorithm produces, for each flow value, a minimal-cost flow, as the
two conditions described in Theorem 3.2 are preserved after each step.
Furthermore, the algorithm terminates when we reach a maximal flow value.
###### Proof.
We prove that the initial conditions have the desired properties in Theorem
3.2, and then that, after each step of the algorithm, the desired properties
continue to hold, assuming that they held beforehand. By induction, the
conditions of Theorem 3.2 always hold, which proves the claim. For ease of
notation, let the index of $v$ be the value $i$ so that either $v=t_{i}$ or
$v=b_{i};$ initially, the index of $v$ is just $-p(v)$ by construction.
First, for the initial conditions, notice that the flow is $0$ everywhere by
construction, so the first condition is vacuously true. As for the second, the
hypothesis
$p(w)-p(v)>c((v,w))$
would require the index of $w$ to be smaller than that of $v.$ But the graph
has no such edges, which means that the second condition holds vacuously for
all edges $(v,w)\in E.$
Now, for the algorithm. The only issues we need to check are for steps $3$ and
$4.$ Suppose $G_{P,h,C_{P}}$ initially satisfied the conditions in Theorem
3.2. If we reach step 3, then by the algorithm we have a sequence of vertices
$s,v_{1},\ldots,v_{k},t,$ where each consecutive pair of vertices in the
sequence has an edge in $\bar{E},$ and we’ve increased the flow along these
edges by $1.$
But notice that, by construction, the potential difference across every pair
of consecutive vertices on the path equals the cost of the edge between them.
Hence the conditions still hold: the only edges $(v,w)$ where the flow changes
are those where $p(w)-p(v)=c((v,w)).$
Now, suppose we reached step 4. Consider any edge $(v,w)\in E.$ If
$p(w)-p(v)<c((v,w)),$ notice then that, since $p,c$ are always integers,
$p(w)-p(v)$ remains at most $c((v,w)),$ and similarly for the $>$ symbol. The
only thing we need to check is when $p(w)-p(v)=c((v,w))$ initially, and where
exactly one of the potentials changes.
Suppose that $p(w)$ increases by $1.$ Then, it follows that $w$ is not in $X,$
but $v$ is in $X.$ But this means that, since we have a path from $s$ to $v$
along edges in $\bar{E},$ there is no edge between $v$ and $w$ in $\bar{E}.$
This means that, as $(v,w)\in E,$ we have that $f((v,w))=u((v,w)),$ since the
flow must remain at most the capacity. But then notice that this satisfies the
condition.
Similarly, if $p(v)$ increases by $1,$ this means that $w\in X,v\not\in X.$
But again, this means that we have no edge from $w$ to $v.$ But this means
that $f((v,w))=0,$ as $(v,w)\in E.$ This means that the condition is satisfied
for that edge too.
For maximality, we will prove this at the end of the next section. ∎
This allows us to notice that, at every stage of the algorithm, even with a
different potential function, we still output a minimal cost flow for a given
flow value $v.$
## 4\. Basic Properties
First, we prove some properties of the flow on $G_{P,h,C_{P}}$ that hold
throughout the algorithm. We say that a vertex is “reachable by $b_{0}$,” or
just “reachable,” if it lies in the set $X$ (as per the notation of
[BF99][§7], which we kept for Algorithm 1). We first make the following
observation.
It’s not hard to see that every vertex of the form $t_{i}$ or $b_{i},$ where
$1\leq i\leq n,$ can have at most one edge with nonzero flow going in, and at
most one edge with nonzero flow going out. To see this, notice that $t_{i}$
has only one outgoing edge, and $b_{i}$ has only one incoming edge, and all
edges have capacity $1.$ Since all flows are integral by the algorithm, it
follows that at most one edge on the other side can carry nonzero flow.
We now have two lemmas that we’d like to prove.
###### Lemma 4.1.
For any edge from $b_{i}$ to $t_{j},$ if there is a flow along that edge, then
$p(t_{j})-p(b_{i})$ equals the cost of the edge.
###### Proof.
Suppose for the sake of contradiction that this fails at some point during
Algorithm 1, and consider the first step at which it fails, after making the
change in flow or potential.
Note that this can’t be the step at which flow first appears on the edge,
since by construction we only add flow along an edge when the potential
difference equals the cost. So the failure must occur at a potential update;
in other words, one of $b_{i},t_{j}$ is reachable by $b_{0}$ (in the sense
that it lies in $X$) and the other isn’t.
Suppose that $t_{j}$ is reachable by $b_{0}.$ Then, since before the change in
potential the cost of the edge equals the difference in potentials and the
edge carries flow, its reversal lies in $\bar{E},$ so $b_{i}$ is also
reachable.
Similarly, if $b_{i}$ is reachable by $b_{0},$ then there had to exist some
vertex just before it on the path that reached it. This means that we reached
it either via an unused edge (going forwards), or via a used edge going
backwards. The former, however, is impossible: since there is flow out of
$b_{i},$ there is flow into $b_{i},$ and the only edge into $b_{i}$ is
therefore saturated.
This means that we had to have reached $t_{j}$ to get to $b_{i}.$ Hence, the
supposed situation is impossible, which proves that the condition in the lemma
always holds, as desired. ∎
In addition, we have the following property:
###### Lemma 4.2.
For any $i\in\\{1,2,\ldots,n\\},$ $p(t_{i})\geq p(b_{i})-1.$
###### Proof.
Again we proceed by contradiction. Suppose that at some point
$p(t_{i})-p(b_{i})$ was less than $-1,$ for some $i.$ Since
$p(t_{i})-p(b_{i})$ can only increase or decrease by $1$ at each step, at
some point we had $p(t_{i})-p(b_{i})=-1.$ Furthermore, at this point, of the
pair only $t_{i}$ was reachable by $b_{0},$ and $t_{n+1}$ wasn’t reachable,
in order for the potential difference to change.
By the conditions given in Theorem 3.2, there has to be a flow from $t_{i}$ to
$b_{i}.$ Then, note that, since there is flow out of $t_{i},$ there is flow
into $t_{i},$ so there has to be another vertex, $b_{k},$ with $k<i,$ where
there is nonzero flow along the edge from $b_{k}$ to $t_{i}.$ If $k=0,$ then
we can’t have reached $t_{i}$ directly from $b_{0}$ via an unused edge; this
means that there is some other vertex $b_{h}$ with $h<i$ and
$p(t_{i})-p(b_{h})=c((b_{h},t_{i}))$ through which $t_{i}$ was reached. We
take that vertex instead. Otherwise, if $k\neq 0,$ we just take $b_{k}.$ In
either case, we have that $p(t_{i})-p(b_{k})=c((b_{k},t_{i})),$ with the case
$k\neq 0$ following from Lemma 4.1.
In addition, consider the next vertex along the flow line, say $t_{j},$ $j>i,$
after $b_{i}.$ Notice that there can’t be any flow from $b_{k}$ to $t_{j},$ as
the edge from $b_{i}$ to $t_{j}$ has nonzero flow, and $k<i.$
We now do casework:
1. (1)
$p(b_{k})=p(t_{i}).$ Since the cost of an edge is either $-1$ or $0,$ we have
that, by Lemma 4.1, $p(t_{j})-p(b_{i})=c((b_{i},t_{j}))\geq-1,$ or that
$p(t_{j})\geq p(t_{i})=p(b_{k}).$ But this means that the cost of the edge
between $b_{k}$ and $t_{j}$ is at most $p(t_{j})-p(b_{k})$. Since there can’t
be any flow between them, the cost must be at least the potential difference,
so their potential difference is the same as the cost of the edge between
them. However, since $b_{k}$ is reachable, this means that $t_{j}$ is too,
which means that $b_{i}$ is reachable, contradiction.
2. (2)
$p(b_{k})=p(t_{i})+1.$ By a similar logic as above, we have that $p(t_{j})\geq
p(t_{i})=p(b_{k})-1.$ But also, since there can’t be nonzero flow in the edge
between $b_{k}$ and $t_{j},$ notice that $p(t_{j})-p(b_{k})\leq
c((b_{k},t_{j}))\leq 0.$ Hence, either $p(b_{k})=p(t_{j}),$ or
$p(b_{k})=p(t_{j})+1.$ The former gives us the same logic as the first case.
For the latter, note that for this to occur, $p(t_{j})=p(t_{i})=p(b_{i})-1,$
or that $p(t_{j})-p(b_{i})=-1.$ But by Lemma 4.1, as we have flow on the edge
from $b_{i}$ to $t_{j}$, this potential difference equals $c((b_{i},t_{j})).$
But this means that either $h^{-1}(k)<h^{-1}(i)<h^{-1}(j),$ or
$t_{j}=t_{n+1}.$ In either case, note that this means that the cost of the
edge between $b_{k}$ and $t_{j}$ is $-1$ and is equal to their potential
difference, meaning that $t_{j},$ and hence $b_{i},$ is reachable.
In either case, we run into a contradiction, which proves the lemma. ∎
From here, we can now prove that we eventually get maximality from Theorem
3.3.
###### Proof of Theorem 3.3, continued.
Suppose for the sake of contradiction that the algorithm never reaches maximal
flow. Then Algorithm 1 doesn’t terminate, and so eventually reaches a point
where step 4 is repeated forever, since step 3 strictly increases the flow
value and the maximal flow value is finite and well-defined; see the
Ford-Fulkerson theorem, which is, for instance, [Wil19][Theorem 2.6].
In fact, here we can be more precise: notice that the maximal flow value is
$n.$ To see this, notice that the value of the flow is the sum of the flows of
the edges coming out of $b_{0};$ with $n$ edges with capacity $1,$ this is at
most $n.$ But to see maximality, notice that taking the edges between $b_{0}$
and $t_{i},$ $t_{i}$ and $b_{i},$ and $b_{i}$ to $t_{n+1},$ for each
$i\in\\{1,2,\ldots,n\\},$ gives a flow with value $n.$ Hence, maximal flow is
$n.$ Therefore, the flow value at which the algorithm stalls is some $v<n.$
Now, notice that, in general, step 4 cannot make $|X|$ fall; indeed, notice
that step 4 alters potentials of vertices outside of $X,$ and doesn’t alter
flows, so every vertex in $X$ remains in $X.$
This means that, for $t_{n+1}$ never to lie in $X,$ eventually $X$ reaches
some maximal set $X^{\prime},$ since the number of vertices is at most $2n+2.$
Furthermore, beyond this point, all of the flows of edges in $G_{P,h,C_{P}}$
remain constant. Consider the elements that must lie in this set $X^{\prime}.$
Given that the only edges from $b_{0}$ are to vertices of the form $t_{i},$
and that furthermore by construction in Algorithm 1 flows for each edge are
integers (either $0$ or $1$), it follows that there is some $t_{i},$ with $i$
an integer between $1$ and $n$ inclusive, so that the edge from $b_{0}$ to
$t_{i}$ has flow $0$ (since we are assuming non-maximal flow). Since this edge
is below capacity, the second condition of Theorem 3.2 (maintained by Theorem
3.3) gives $p(t_{i})-p(b_{0})=p(t_{i})\leq c((b_{0},t_{i}))\leq 1.$ If
$t_{i}$ weren’t in $X^{\prime},$ its potential would repeatedly increase by
$1,$ contradicting this inequality.
From here, we have two cases. If the edge between $t_{i}$ and $b_{i}$ doesn’t
have a flow, then it follows that $p(b_{i})-p(t_{i})\leq 1,$ which using the
above means that $p(b_{i})-p(b_{0})=p(b_{i})-p(t_{i})+p(t_{i})-p(b_{0})\leq
2.$ But again, by the same argument above, $b_{i}$ must lie in $X^{\prime},$
as otherwise its potential will be unbounded as we continually repeat step 4
in Algorithm 1 (with $X$ never changing from $X^{\prime}$).
Now, notice that, since the only edge that points to $b_{i}$ is from $t_{i},$
by construction, and since we assumed that the flow on the edge was $0,$ the
edge between $b_{i}$ and $t_{n+1}$ has flow zero too. But the exact same
argument shows that $t_{n+1}\in X^{\prime},$ which contradicts the fact that
we did step 4.
Otherwise, there is a flow on the edge between $t_{i}$ and $b_{i}.$ But this
means that, by flow conservation, there exists an edge pointing into $t_{i}$
with flow, say from $b_{j}.$ But notice that all of the flow values are
integers, and since the capacities are $1,$ this edge has flow $1.$ But notice
then that the only edge pointing into $b_{j}$ is from $t_{j},$ and it has
capacity $1.$ This means that the edge from $b_{j}$ to $t_{n+1},$ by flow
conservation, has flow $0,$ meaning that $p(t_{n+1})-p(b_{j})\leq 1.$ However,
notice that, by Lemma 4.1, we have that
$p(b_{j})=p(t_{i})-c((b_{j},t_{i}))\leq p(t_{i})+1,$ the latter inequality by
construction of the costs.
This means, however, that
$p(t_{n+1})=(p(t_{n+1})-p(b_{j}))+p(b_{j})\leq 2+p(t_{i})\leq 3,$
which again means that $p(t_{n+1})$ is bounded, so $t_{n+1}$ has to lie in
$X^{\prime},$ a contradiction. Hence step 4 cannot repeat forever, proving
maximality, as desired. ∎
## 5\. Relating Graph and Poset Quantities
We now take $G_{P,h,C_{P}}$ and relate it back to $A^{\prime}_{k}$ and
$D^{\prime}_{k}.$ We begin by translating the poset quantities to the
quantities on the graph, specifically flows and potentials, which [BF99][§8]
also does. However, the way these quantities are related to the poset
quantities is somewhat different here compared to the corresponding version in
[BF99][§8].
###### Proposition 5.1.
In $G_{P,h,C_{P}}$ given a fixed flow volume $v,$ the minimal cost of the flow
is equal to $-A^{\prime}_{v}.$
###### Proof.
To see this, we will first show that this is attainable. To do this, suppose
that we have sequences $S_{1},S_{2},\ldots,S_{v}$ that give the value
$A^{\prime}_{v}.$ If one of the sequences consists of elements
$s_{1},s_{2},\ldots,s_{l}$ we add the flow line going from $b_{0}$ to
$t_{h(s_{1})},$ then $t_{h(s_{1})}$ to $b_{h(s_{1})},$ then $b_{h(s_{1})}$ to
$t_{h(s_{2})},$ and so forth, until $b_{h(s_{l})}$ to $t_{n+1}.$ By
construction, notice that we may do this, since the edges from $b_{h(s_{i})}$
to $t_{h(s_{i+1})}$ exist by construction, as we demanded $(s_{i},s_{i+1})$ to
lie in $C_{P}$ for the sequences.
Doing this for each sequence gives us the flow. Note that this satisfies the
flow requirements, since at each vertex, the in and out flows are the same for
all the vertices besides $b_{0},t_{n+1}.$ In addition, we only use each edge
once, since the vertices are all distinct in the sequences (from
construction).
As for the cost of this flow, note that along each flow line, if it
corresponds to a sequence $S$ of elements $s_{1},s_{2},\ldots,s_{l},$ all the
edges have cost $0$ except the edges from $b_{h(s_{i})}$ to $t_{h(s_{i+1})}$
where $s_{i}<s_{i+1},$ together with the final edge from $b_{h(s_{l})}$ to
$t_{n+1}.$ But this means that this flow line goes through edges whose total
cost is just $-asc(S),$ for this sequence $S.$ Adding this up over all flow
lines yields a flow with cost $-A^{\prime}_{v}$ and flow value $v.$
To show this is minimal, suppose we have some other flow with value $v.$ Given
an edge from $b_{0},$ we can “follow” this edge (since each vertex has either
only one edge going in or one edge going out, except for $t_{n+1}$ or $b_{0},$
and by conservation of flow there is exactly one for each) until we reach
$t_{n+1}.$ This gives us a sequence of vertices.
We can repeat this for all the other edges from $b_{0},$ yielding $v$
distinct sequences. The cost of this flow is then just the negative of the sum
of the ascents over these sequences, which is at least $-A^{\prime}_{v}$ by
maximality. Notice also that, since we followed edges of the graph, every pair
of adjacent elements in a given sequence lies in $C_{P},$ so these are indeed
adjacentable sequences.
We thus see that $-A^{\prime}_{v}$ is the minimum cost of a flow with flow
volume $v,$ as desired. ∎
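The path decomposition used in the second half of this proof can be sketched as follows (our own illustration, with hypothetical names; it assumes an integral $0/1$ flow given as a dictionary on the edges of $G_{P,h,C_{P}},$ with `h_inv` mapping each label back to its poset element):

```python
def flow_to_sequences(n, f, h_inv):
    """Recover the flow lines of an integral flow on G_{P,h,C_P} as
    sequences of poset elements, by following edges from b_0."""
    succ = {}
    for (v, w), val in f.items():
        if val == 1 and v[0] == 'b' and v[1] >= 1:
            succ[v[1]] = w[1]                 # the unique flow edge out of b_i
    sequences = []
    for i in range(1, n + 1):
        if f.get((('b', 0), ('t', i)), 0) == 1:   # a flow line starts at t_i
            seq, j = [], i
            while j != n + 1:
                seq.append(h_inv[j])          # element attached to t_j, b_j
                j = succ[j]                   # follow b_j's outgoing flow edge
            sequences.append(seq)
    return sequences
```

For instance, a flow consisting of the two lines $b_{0}\to t_{1}\to b_{1}\to t_{2}\to b_{2}\to t_{n+1}$ and $b_{0}\to t_{3}\to b_{3}\to t_{n+1}$ is decomposed into the sequences $(h^{-1}(1),h^{-1}(2))$ and $(h^{-1}(3)).$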
Now, we introduce another quantity. Let $p=|p(t_{n+1})|.$ We let
$P^{\prime}_{p}$ be the number of $i\in\\{1,2,\ldots,n\\}$ such that
$p(t_{i})=p(b_{i})\in\\{-p+1,-p+2,\ldots,0\\}.$
###### Proposition 5.2.
At any point along Algorithm 1, $P^{\prime}_{p}+A^{\prime}_{v}\geq n+vp.$
###### Proof.
This method is very similar to that of [BF99][§8], in that we argue that, at
every step along the algorithm, this inequality must continue to hold. We
cannot jump immediately to equality yet, however; this will be the subject of
Section 6.
Note that when the flow increases by $1,$ the only thing that changes is the
cost of the flow, which decreases (by construction of this new flow line from
the algorithm) by $p,$ since along this new flow-line, for each edge, the cost
is equal to the difference in potential.
If there are no flow lines, then note that if $t_{i}$ is reachable, so is
$b_{i},$ and if $t_{i}$ has potential $0,$ it is reachable; so again we have
no problems here (the potential increase of $t_{n+1}$ doesn’t change
$P_{p}^{\prime}$).
Now, suppose that the potential of $t_{n+1}$ increases by $1.$ Consider a
given flow-line, say reaching top and bottom pairs with indices
$i_{1},i_{2},\ldots,i_{k}.$ Then, note that if $t_{i_{j}}$ is reachable by
$b_{0},$ so is $b_{i_{j-1}}.$
We can consider consecutive blocks of vertices reachable by $b_{0}$ along this
flow line. Suppose we have a block running from $b_{i_{c}}$ to $t_{i_{d}}.$
Note that, among these, their potentials stay the same. Furthermore, note that
this sequence cannot cause $P_{p}^{\prime}$ to drop; the only place where one
is no longer counted was if initially $t_{i_{d}}$ and $b_{i_{d}}$ had the same
potentials. But note that, from un-reachability, $p(t_{i_{c}})$ increases by
$1,$ which means that it matches up with $p(b_{i_{c}})$ now.
Hence, the only way for there to be a drop would be either if a pair of
vertices had potential $0$ and went up to $1,$ or if there is a block that
starts directly at $t_{i_{1}}.$ But these are mutually exclusive events, for a
potential of $0$ going up to $1$ can only mean that $t_{i_{1}}$ and
$b_{i_{1}}$ had potential $0$ and weren’t reachable (if any other pair of
vertices had potential $0,$ the top would be reachable).
This means that, when $p$ drops by $1,$ $P_{p}^{\prime}$ drops by at most $v.$
But this establishes the inequality. ∎
Note that $D^{\prime}_{p}\geq P^{\prime}_{p}.$ To see this, we choose the
sequences so that the $i$th sequence consists of the elements whose top and
bottom vertices all have potential $-i+1.$
This is a valid choice for two reasons. First, for the actual non-increasing
part, note that between the bottom vertex of one element and the top vertex of
the next, the cost can’t be less than the potential difference, which is $0$
(if there is no flow this follows from the second condition in Theorem 3.2,
and if there is flow it follows from Lemma 4.1). Hence, each of these forms a
non-increasing sequence.
Now, we claim that if the potentials of $t_{a},b_{a}$ are both $i,$ and those
of $t_{c},b_{c}$ are both $i-1,$ then $(h^{-1}(c),h^{-1}(a))\not\in C_{P}.$
If not, we would have an edge from $b_{c}$ to $t_{a}.$ But then Lemma 4.1 and
the conditions from Theorem 3.2 require that
$p(t_{a})-p(b_{c})\leq c((b_{c},t_{a}))\leq 0,$ a contradiction. This means
that the sequences can’t be semi-overlapping, so this is a valid choice of
sequences, and the sum of $desc$ over these sequences is $P_{p}^{\prime}.$
The main result, that $A^{\prime}_{v}$ and $D^{\prime}_{p}$ are conjugate in
the sense we described, will follow in the next section.
## 6\. Establishing Equality
This section follows [BF99][§5] in concept, though the actual method of
calculation is slightly different, due to different conditions on the
ascending and non-ascending sequences.
We use the same idea of considering intersections, however. Suppose that we
are given sequences $d_{1},d_{2},\ldots,d_{p}$ as the non-increasing sequences
that meet the condition for $D^{\prime}_{p},$ and $a_{1},a_{2},\ldots,a_{v}$
for $A^{\prime}_{v}.$ Notice that if the $d_{i}$ are contained in sequences
that are not semi-overlapping, then the $d_{i}$ are not semi-overlapping
either.
Fixing some $a_{i},$ note that $a_{i}\cap d_{1},a_{i}\cap
d_{2},\ldots,a_{i}\cap d_{p}$ (the subsequences of $a_{i}$ that are also part
of $d_{1},d_{2},\ldots,d_{p},$ respectively) are also not pairwise semi-
overlapping, from construction.
In fact, notice that if element $x\in a_{i}\cap d_{m}$ and $y\in a_{i}\cap
d_{j}$ are so that $h(x)<h(y),$ then notice that, by construction, $(x,y)\in
C_{P}.$ But this means that all elements in $a_{i}\cap d_{m}$ occur before
those in $a_{i}\cap d_{j}.$
Now, notice that for each pair of consecutive elements within $a_{i}\cap
d_{j},$ say $x$ and $x^{\prime},$ there exists a non-ascent in-between $x$ and
$x^{\prime}$ in $a_{i},$ as otherwise $x<x^{\prime},$ a contradiction.
Furthermore, in-between these elements, by the argument above, no other
$a_{i}\cap d_{k}$ can have elements, meaning that each element of $a_{i}\cap
d_{j},$ letting $j$ vary, other than the last for each, corresponds uniquely
to a non-ascent.
This means that we have $\sum\limits_{j=1}^{p}|a_{i}\cap d_{j}|\leq
p+(des(a_{i})),$ where $des(a_{i})$ is the number of “non-ascents,” which by
definition we can see satisfies $des(a_{i})=|a_{i}|-asc(a_{i}).$ Note that
this isn’t $desc(a_{i}).$
This in turn yields that $\sum\limits_{j=1}^{p}|a_{i}\cap d_{j}|\leq
p+(des(a_{i}))\leq p+|a_{i}|-asc(a_{i}).$ But then we have that
$\sum\limits_{i=1}^{v}\sum\limits_{j=1}^{p}|a_{i}\cap d_{j}|\leq
vp+\sum\limits_{i=1}^{v}|a_{i}|-A^{\prime}_{v}.$
But by inclusion-exclusion, since the $a_{i}$ are disjoint and the $d_{j}$ are
disjoint, we have that
$D^{\prime}_{p}=|\bigcup_{j=1}^{p}d_{j}|=|\bigcup_{j=1}^{p}d_{j}\cup\bigcup_{i=1}^{v}a_{i}|-|\bigcup_{i=1}^{v}a_{i}|+\sum\limits_{i=1}^{v}\sum\limits_{j=1}^{p}|a_{i}\cap
d_{j}|\leq
n-|\bigcup_{i=1}^{v}a_{i}|+vp+\sum\limits_{i=1}^{v}|a_{i}|-A^{\prime}_{v}=n+vp-A^{\prime}_{v}.$
For equality, now note that for each pair $(p,v)$ of values attained by
$|p(t_{n+1})|$ and the flow value, respectively, we have that
$n+vp-A^{\prime}_{v}\geq D^{\prime}_{p}\geq P^{\prime}_{p}\geq
n+vp-A^{\prime}_{v}.$
To get the desired conjugacy, the exact argument at the end of [BF99][§8]
allows us to finish. Specifically, we know now that
$D^{\prime}_{p}+A^{\prime}_{v}=n+vp,$ where $p,v$ are values that are attained
for $p(t_{n+1})$ and flow value, respectively, during Algorithm 1. We just
need to check that we can apply the argument in that section here to all of
the indices.
Now, notice that, by Theorem 3.3, the algorithm terminates when flow is
maximal for the graph, which is when $v=n$ (taking, for each
$i\in\\{1,2,\ldots,n\\}$ a flow from $b_{0}$ to $t_{i}$ to $b_{i}$ to
$t_{n+1}$). Furthermore, note that $v$ starts at $0.$
Therefore, at this ending point, we have flow value $n$ and some potential
$p_{0}.$ When this occurs, notice that
$A^{\prime}_{n}+D^{\prime}_{p_{0}}=n+np_{0}.$ Since $A^{\prime}_{n}=n$ (the
$n$ singleton sequences each contribute one ascent), this gives
$D^{\prime}_{p_{0}}=np_{0}.$ But, by construction,
$D^{\prime}_{p_{0}}\leq D^{\prime}_{n}=n,$ meaning that $p_{0}=0$ or
$p_{0}=1;$ since $|p(t_{n+1})|$ starts at $n+1$ and changes by $1$ at a time,
it attains all values between $1$ and $n.$
Thus, for each $i\in\\{1,2,\ldots,n\\}$ when flow value increases from $i-1$
to $i$ in the algorithm, $\lambda_{i}=A_{i}^{\prime}-A_{i-1}^{\prime}=p.$
Notice that we can show that $\lambda_{i}$ and $\mu_{i}$ are partitions, from
the same logic as in [BF99, §8]. This is because, as we perform this process,
we have that $p$ is weakly decreasing, giving us that the $\lambda_{i}$ are
weakly decreasing. As for the $\mu_{i},$ notice that, by a similar logic, when
the potential goes from $p$ to $p-1$, we have that
$\mu_{p}=D_{p}^{\prime}-D_{p-1}^{\prime}=(n+vp-
A_{v}^{\prime})-(n+v(p-1)-A_{v}^{\prime})=v.$ But then, observe that,
throughout the process, $p$ falls and $v$ rises, so again the $\mu_{i}$ are
also weakly decreasing if we start from $i=1.$ This gives us that these are
partitions.
This yields us the desired conjugacy of
$\lambda_{i}=A^{\prime}_{i}-A^{\prime}_{i-1}$ and
$\mu_{i}=D^{\prime}_{i}-D^{\prime}_{i-1},$ as desired, which proves Theorem
2.1.
## 7\. Corollaries
Theorem 2.1 gives us both the localized Greene’s theorem for permutation
posets and the original Greene-Kleitman duality theorem. We prove each of
these results using Theorem 2.1 in this section.
###### Corollary 7.1 (Localized Greene’s Theorem, Lemma 2.1 [Lew+20]).
Let $\sigma$ be a permutation on the $n$ elements $\\{1,2,\ldots,n\\}.$ Then,
with $A_{k}^{*}$ as the maximal sum of the ascents of $k$ disjoint sequences,
and $D_{k}^{*}$ as the maximal sum of the longest descending subsequences in
$k$ consecutive sequences (as we noted in the introduction, Section 1, which
are defined as per [Lew]), if $\lambda_{k}=A_{k}^{*}-A_{k-1}^{*}$ and
$\mu_{k}=D_{k}^{*}-D_{k-1}^{*},$ then $(\lambda_{1},\lambda_{2},\ldots)$ and
$(\mu_{1},\mu_{2},\ldots)$ form conjugate partitions of $n.$
###### Proof.
Take the poset of $1,2,\ldots,n$ with the natural ordering, and suppose that
$h$ is the inverse of the permutation $\sigma,$ which is a bijection. Let
$C_{P}$ be the set $\\{(x,y)|1\leq x,y\leq n,h(x)<h(y)\\};$ in this
case, $h$-ordering and adjacentable are the same. Apply Theorem 2.1, obtaining
$A_{k}^{\prime}$ and $D_{k}^{\prime}.$
Then, notice that $A_{k}^{\prime}$ is the same as $A_{k}^{*}$ since $asc$ is
defined the same way. To see this, notice that any sequence $S,$ with elements
$s_{1},s_{2},\ldots,s_{l},$ where $\sigma(s_{j})<\sigma(s_{j+1})$ for each
index $j,$ can be thought of as a subsequence of elements from
$\sigma(1),\sigma(2),\ldots,\sigma(n),$ as the above tells us that
$\sigma^{-1}(s_{1}),\sigma^{-1}(s_{2}),\ldots,\sigma^{-1}(s_{l})$ is a
strictly increasing sequence. This means we may re-write the sequence as
$\sigma(x_{1}),\sigma(x_{2}),\ldots,\sigma(x_{l})$ for an increasing sequence
$x_{1},\ldots,x_{l}.$ But then $asc(S)$ is just the number of indices $j$
where $\sigma(x_{j})<\sigma(x_{j+1})$ plus one (or $0$ if $S$ is empty), which
matches. This means that $A_{k}^{\prime},$ as the maximum of the sum of $asc$
of $k$ disjoint sequences, is the same as $A_{k}^{*}.$
As for $D_{k}^{\prime},$ first notice that $desc$ is defined the same way as
well, since the condition that $s_{i}\not<s_{j}$ for each $i<j,$ with the
totally ordered set, just means that the sequence must be strictly decreasing.
Now, suppose that we have sequences $S_{1},S_{2},\ldots,S_{k}$ that give the
maximal value, such that no two are semi-overlapping.
Now, since $C_{P}$ is just the $h$-ordering, notice that for each pair of elements
$x,y\in\\{1,2,\ldots,n\\},$ either $(x,y)\in C_{P}$ or $(y,x)\in C_{P}.$ We
may thus re-index the sequences so that $\forall i<j,\forall a\in S_{i},b\in
S_{j},h(a)<h(b)$ (the semi-overlapping condition allows us to do this re-
indexing).
From here, suppose that some element $x\in\\{1,2,\ldots,n\\}$ is not in any of
the $S_{i}.$ Let $j$ be the largest index so that $\exists a\in S_{j}$ where
$h(a)<h(x),$ and suppose that $a$ is chosen so that $h(a)$ is the maximum
value of $\\{h(b)|b\in S_{j},h(b)<h(x)\\}.$ We may then add $x$ to $S_{j}$
right after $a;$ by construction, this preserves all of the
non-semi-overlapping conditions. Furthermore, the sum
$\sum_{i=1}^{k}desc(S_{i})$ cannot decrease; indeed, we may take the same
descending sequence within $S_{j}.$ By maximality, this value also cannot increase.
We may thus assume that the maximal $S_{1},S_{2},\ldots,S_{k}$ cover all of the
elements in $\\{1,2,\ldots,n\\}.$ But then, as required in [Lew],
$S_{1}|S_{2}|\ldots|S_{k}$ is the sequence
$h^{-1}(1),h^{-1}(2),\ldots,h^{-1}(n),$ or
$\sigma(1),\sigma(2),\ldots,\sigma(n).$ This means that the value of
$D_{k}^{\prime},$ as defined here, is the same as $D_{k}^{*}$. This proves the
desired. ∎
###### Corollary 7.2 (Classical Greene-Kleitman Duality Theorem, Theorem 1.6
[Gre76]).
Given a poset $P,$ let $A_{k}$ be the maximal number of elements within $k$
disjoint chains, and $D_{p}$ the maximal number of elements within $p$
disjoint anti-chains. Then, if $\lambda_{i}=A_{i}-A_{i-1}$ and
$\mu_{i}=D_{i}-D_{i-1}$ for $i\geq 1,$ with $A_{0}=D_{0}=0,$ then
$\lambda_{1}+\lambda_{2}+\ldots$ and $\mu_{1}+\mu_{2}+\cdots$ are conjugate
partitions of $n.$
###### Proof.
Let $P$ be the poset, and $h$ any linear extension of $P.$ From here, let
$C_{P}$ be just the set $\\{(x,y)|x<y\\};$ notice that this satisfies the
properties given.
Then, notice that any adjacentable sequence, by construction, must consist
solely of elements where any two adjacent are increasing; in other words, they
must be chains. Therefore, $A^{\prime}_{k}$ in Theorem 2.1 corresponds to the
maximal number of elements in $k$ disjoint chains, which is just $A_{k}.$
As for $D_{p}^{\prime},$ we need to do a little more work. Notice that
$D_{p}^{\prime}\leq D_{p}.$ To see this, suppose that sequences
$S_{1},\ldots,S_{p}$ had subsequences $d_{1},\ldots,d_{p},$ whose sum of
lengths was $D_{p}^{\prime}.$ By construction, for each sequence $d_{j},$ if
the elements in order were $s_{1,j},\ldots,s_{l_{j},j},$ then the
condition that $s_{a,j}\not<s_{b,j}$ for each $a<b,$ combined with the
ordering $h,$ demands that $s_{a,j}$ and $s_{b,j}$ are in fact not
comparable. This means that each of the $d_{i}$ is an anti-chain.
To show the other direction: suppose that we have $p$ anti-chains by
$d_{1},d_{2},\ldots,d_{p}$ so that their sum has maximal size. Consider the
ordered tuple obtained by taking the elements for $d_{1}$ in order, followed
by the elements for $d_{2}$ in order, and so forth, and order these
lexicographically using the linear extension. For instance, if we have the
poset on five elements $a,b,c,d,e,$ with relations $a<b,b<d,c<d,$ and $d<e,$
with $h(a)=1,h(b)=2,h(c)=3,h(d)=4,$ and $h(e)=5,$ taking $d_{1}$ to be the
sequence $a,c$ and $d_{2}$ to be $b$ yields the tuple $(a,c,b).$
Now, consider the following operation: given $d_{i}$ and $d_{j},$ where $i<j,$
let $A=\\{x\in d_{i}|\exists y\in d_{j}\text{ so that }y<x\\}.$ Similarly, let
$B=\\{y\in d_{j}|\exists x\in d_{i}\text{ so that }y<x\\}.$ Then, take the
elements from $A$ and move them to $d_{j},$ and take the elements from $B$
and move them to $d_{i}.$ Call these new anti-chains
$d_{i}^{\prime},d_{j}^{\prime}.$
First, note that the new $d_{i}$ and $d_{j}$ are both anti-chains. Suppose for
the sake of contradiction this wasn’t the case; then, since $d_{i},d_{j}$ were
anti-chains, the relations that occur afterwards must have one element in one
of the sets $A,B$ and the other not (since, by anti-chain, all the elements in
$A$ are pairwise incomparable, and similarly for $B$). This yields four cases:
1. (1)
If there exists an $a\in d_{i}^{\prime},b\in B$ so that $b<a,$ then $a\in
d_{i}^{\prime}$ means that $a\not\in A.$ But $a\not\in B,$ so $a\in d_{i},$
and $a\in A,$ contradiction.
2. (2)
If there exists an $a\in d_{i}^{\prime},b\in B$ so $a<b,$ then there exists an
element $x$ in $d_{i}$ so that $b<x,$ so then $a<x.$ But $a\not\in B,$ so
$a\in d_{i},$ contradicting anti-chain.
3. (3)
If there exists an $a\in A,b\in d_{j}^{\prime}$ so that $b<a,$ then notice
that $b\in d_{j}^{\prime}$ means that $b\not\in B.$ But $a\in A\subseteq
d_{i},$ meaning that $b\in B,$ contradiction.
4. (4)
If there exists an $a\in A,b\in d_{j}^{\prime}$ so that $a<b,$ then there
exists a $y\in d_{j}$ so that $y<a<b,$ hence $y<b.$ But $b\not\in A,$ so
$b\in d_{j},$ contradicting anti-chain.
Therefore, we end up still with anti-chains, the sum of whose lengths is the
same.
Furthermore, the resulting tuple is lexicographically earlier. Let $x$ be the
element for which $h(x)$ is minimal among all elements of $A$ and $B.$ By
construction, $x\in B$: otherwise there would be a $y\in d_{j}$ with
$h(y)<h(x)$ and $y\in B$ (as $x\in A\subseteq d_{i}$), contradicting
minimality. This element $x$ therefore moves from $d_{j}$ to $d_{i},$ and by
construction no other elements are moved except those in $A$ or $B.$ Since
$i<j,$ the new tuple is lexicographically earlier.
Since we only have a finite number of these tuples, we can only apply this
process a finite number of times before we end up with a result where, for any
$i,j,$ the resulting $A,B$ are empty. But if $A,B$ are empty, then these
anti-chains are all not semi-overlapping: the semi-overlapping condition for
$d_{i},d_{j}$ with $i<j$ would require some $x\in d_{i},y\in d_{j}$ with
$y<x,$ which would make the resulting $A,B$ nonempty.
Therefore, we see that we can re-arrange the anti-chains in a way so that they
are not semi-overlapping, so $D^{\prime}_{p}\geq D_{p}\geq D^{\prime}_{p},$
and these are equal.
But this means that the conjugate partitions in this theorem are precisely
those given in Theorem 2.1, as desired. ∎
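As a small computational sanity check of Corollary 7.2 (purely illustrative; the function names and brute-force search strategy are our own), we can compute $A_{k}$ and $D_{p}$ on the five-element example poset used above ($a<b,$ $b<d,$ $c<d,$ $d<e$) and confirm that the difference sequences are conjugate partitions of $5$:

```python
from itertools import combinations

# Order relations of the example poset a<b, b<d, c<d, d<e, closed transitively.
LESS = {('a', 'b'), ('b', 'd'), ('c', 'd'), ('d', 'e'),
        ('a', 'd'), ('a', 'e'), ('b', 'e'), ('c', 'e')}
ELEMS = 'abcde'

def comparable(x, y):
    return (x, y) in LESS or (y, x) in LESS

def max_cover(k, chains):
    """Most elements covered by k disjoint chains (chains=True) or anti-chains."""
    good = [set(s) for r in range(1, len(ELEMS) + 1)
            for s in combinations(ELEMS, r)
            if all(comparable(x, y) == chains for x, y in combinations(s, 2))]
    best = 0

    def rec(count, used, start):
        nonlocal best
        best = max(best, len(used))
        if count == k:
            return
        for i in range(start, len(good)):
            if not (good[i] & used):
                rec(count + 1, used | good[i], i + 1)

    rec(0, set(), 0)
    return best

A = [max_cover(k, True) for k in range(1, 6)]    # A_1..A_5 = 4, 5, 5, 5, 5
D = [max_cover(p, False) for p in range(1, 6)]   # D_1..D_5 = 2, 3, 4, 5, 5
lam = [A[0]] + [A[i] - A[i - 1] for i in range(1, 5)]   # (4, 1, 0, 0, 0)
mu = [D[0]] + [D[i] - D[i - 1] for i in range(1, 5)]    # (2, 1, 1, 1, 0)
```

Dropping trailing zeros, $\lambda=(4,1)$ and $\mu=(2,1,1,1)$ are indeed conjugate partitions of $5,$ as the corollary predicts.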
Note that Example 2.1 yields a case that doesn’t fall under either of these
corollaries. In particular, we can view Corollary 7.1, the localized Greene’s
theorem, as being the case when $C_{P}$ is as large as possible, and poset $P$
is just $\\{1,2,\ldots,n\\}.$ On the other hand, Corollary 7.2 occurs when
$C_{P}$ is as small as possible, and $h$ is a linear extension.
## References
* [Gre76] Curtis Greene “Some partitions associated with a partially ordered set” In _Journal of Combinatorial Theory, Series A_ 20.1, 1976, pp. 69–79 DOI: https://doi.org/10.1016/0097-3165(76)90078-9
* [BF99] Thomas Britz and Sergey Fomin “Finite Posets and Ferrers Shapes”, 1999 arXiv:math/9912126 [math.CO]
* [Wil19] David P. Williamson “Network Flow Algorithms” Cambridge University Press, 2019 DOI: 10.1017/9781316888568
* [Lew+20] Joel Lewis, Hanbaek Lyu, Pavlo Pylyavskyy and Arnab Sen “Scaling limit of soliton lengths in a multicolor box-ball system”, 2020 arXiv:1911.04458 [math.PR]
* [Lew] Joel Lewis “A localized version of Greene’s theorem” URL: https://realopacblog.wordpress.com/2019/11/24/a-localized-version-of-greenes-theorem/
# Blind Reconstruction of Multilayered Tissue Profiles with UWB Radar Under
Bayesian Setting
Burak Cevat Civek and Emre Ertin B. C. Civek and E. Ertin are with the
Department of Electrical and Computer Engineering, The Ohio State University,
Columbus, OH, 43210, USA. Contact e-mail<EMAIL_ADDRESS>
###### Abstract
In this paper, we investigate the problem of inverse electromagnetic
scattering to recover multilayer human tissue profiles using ultrawideband
radar systems in a Bayesian setting. We study the recovery problem in a blind
setting, in which we simultaneously estimate both the dielectric/geometric
properties of the one-dimensional target tissue profile and the transmitted
radar waveform. To perform Bayesian parameter estimation, we propose a hybrid
and adaptive Markov Chain Monte Carlo method, which combines the Slice
sampling and Hamiltonian Monte Carlo approaches. The introduced sampling
mechanism also incorporates the Parallel Tempering approach to escape from the
locally optimal regions of the complex posterior distribution. Through
various numerical simulations, we provide empirical support for the enhanced
sampling efficiency achieved compared to conventional sampling schemes. To
investigate the recovery performance, we work on synthetic measurements
simulating actual radar returns from multilayer tissue profiles. We derive
theoretical bounds for the best achievable estimation performance in terms of
normalized root mean square error and provide a comparison with the
performance of our estimator.
###### Index Terms:
Bayesian Inference, Adaptive Markov Chain Monte Carlo, Blind Recovery, UWB
Radar.
## I Introduction
Remote sensing of human physiology is of growing importance in medical
research for the diagnosis and treatment of chronic diseases [1, 2].
Monitoring the alterations in internal tissue composition provides valuable
information about the progression of life-threatening diseases, including but
not limited to, brain tumor, pulmonary edema, and cardiac disorders [3].
However, traditional imaging modalities, such as Magnetic Resonance Imaging
(MRI), Computed Tomography (CT), or Ultrasound, are not feasible for
monitoring variations regularly, e.g., on a daily basis, due to their high
cost and accessibility issues. Therefore, more efficient, low-cost, and
possibly mobile sensing schemes are needed for frequent and long-term
measurements on the human body.
Following the advancements in sensor technologies, reliable characterization
of tissue profiles is becoming viable for both clinic and home environments at
much lower costs with easy access [4]. Specifically, ultrawideband (UWB) radar
sensors emitting electromagnetic (EM) waves, which can penetrate through most
of the biological tissues including skin, fat, muscle, etc., provide a
promising alternative to those conventional sensing modalities [5, 6]. In
principle, a UWB radar system transmits a short duration pulse and records the
backscattered signal composed of reflections from the target object. In human
body, each tissue exhibits distinct dielectric properties, i.e., permittivity
and conductivity. This causes impedance mismatches at the interfaces and
creates multiple reflection points for the impinging transmitted pulse.
Therefore, a rich backscattered signal, which is strongly affected by the
dielectric properties, is observed and can be processed to make inferences
about the tissue composition underneath the skin.
The emergence of UWB radar as a medical sensing technology occurred when
McEwan described the physical principle of the UWB system which was able to
detect movements of the heart wall in the two patents awarded to him [7, 8].
Since then, detecting vital signs of the human body, such as respiration and
heart rate, has been one of the most widely studied problems in medical UWB
sensing [6, 9]. Many studies have successfully recovered vital signs in a
non-invasive manner
due to the sensitivity of the backscattered signal to movements of the inner
tissues, such as lungs or heart [10, 11]. In this work, however, instead of
measuring vital signs, we focus on extracting a complete reflectivity profile
for sub-skin tissue composition in terms of the dielectric and geometric
properties. Possible applications include detecting or monitoring the
evolution of breast cancer, brain tumor, water retention in lungs, or
pulmonary edema.
In general, the inference methods for detecting alterations in tissue
compositions focus on the explicit recovery of the dielectric properties, such
as permittivity and conductivity, as well as the geometrical properties, such
as thickness, of the target tissues based on the backscattered measurement. In
medical UWB sensing literature, a homogeneous multilayer planar model is a
reasonable and widely studied model to describe the anatomical structure of
the human body [12, 13, 14, 15]. One of the common techniques for inverse EM
scattering problems targeting multilayer homogeneous mediums is layer
stripping, which has been extensively studied in GPR systems using UWB pulses
to evaluate the physical and geometric properties of subsurface earth layers
[16, 17, 18, 19]. Layer stripping is a time domain approach that estimates the
constitutive parameters of each layer in a sequential manner, i.e., at each
iteration, the algorithm estimates the properties of the top-most layer and
removes its effect from the backscattered signal, progressively reconstructing
each layer until all layers are reconstructed. The estimation procedure is
usually based on the amplitude and time-of-arrival of the echoes reflected
from the layer interfaces. Therefore, the success of the technique is closely
related to accurate estimation of reflected pulse amplitudes and corresponding
time delays, which requires clearly separated echoes in the time domain [19, 20].
Although this requirement is satisfied for many geophysical applications due
to greater thicknesses of earth layers, such clear separation is usually not
possible for human tissues. Moreover, typical layer stripping techniques
assume the multiple reflections are negligible as in [16, 17, 21],
illustrating the validity of this assumption for geophysical applications such
as road pavement evaluation and ice sheet reconstruction. However, multiple
reflections have a dominating effect when the target medium is the human body
[12, 14]. Recently, Caorsi et al. [22] proposed a comprehensive layer stripping
technique which uses a binary decision tree approach [23] to detect and remove
the pulses caused by multiple reflections to eliminate ambiguities. The
proposed technique successfully classifies each echo as a direct or multiple
reflection in the case of well-separated pulses with loss-less mediums (zero
conductivities), but the performance significantly degrades if overlaps exist
or the mediums have non-zero conductivities. As a result, application of layer
stripping is limited for medical UWB sensing due to overlapping pulses,
multiple reflections, and non-negligible conductivity losses.
An alternative to the time-domain layer stripping approach is the EM
inversion, which constructs a least squares problem (usually in frequency
domain) to minimize the mean squared error between the actual and
reconstructed measurements. The reconstructed measurement is obtained through
a problem specific forward model governing the EM wave propagation in layered
media and antenna responses. The optimization is performed on the constitutive
parameters, i.e., permittivity, conductivity and thickness, to find the set of
parameters achieving the best fit to the actual measurement. In [24],
Spagnolini compared EM inversion with layer stripping and demonstrated its
promising capabilities in radar inverse problems. Unlike layer stripping,
which only concerns the time delay and amplitude information, EM inversion
completely utilizes the underlying physical interactions in EM wave
propagation. Therefore, it eliminates the need for the strong simplifying
assumptions and facilitates successful recovery even for the cases where there
exist overlapping pulses, multiple reflections and non-zero conductivities.
Even though the EM inversion approach has extensive practical applications in
the GPR literature, its use in medical sensing problems has not yet been
investigated. To close this gap, in this work, we employ the EM inversion
approach for estimating the parameters of multilayer targets composed of human
tissues. We restrict the scope of this work to a one-dimensional setting in
which plane waves propagate through non-dispersive homogeneous planar mediums.
Although this is a simplified version of the reality, it provides useful
insights to develop more sophisticated imaging systems.
The contributions of this work can be summarized as follows. Firstly, we pose
the problem as a blind deconvolution problem and simultaneously estimate both
the transmitted waveform and the reflectivity profile to achieve self-
calibration. In practice, the waveform generated within the radar circuitry is
distorted by the antenna transmitter/receiver responses, and hence, the actual
transmitted waveform is unknown without an appropriate calibration process.
Traditional approaches for UWB radar inverse problems, therefore, assume
calibrated antenna responses. Secondly, we study the problem in Bayesian
setting and present a comprehensive and efficient Markov Chain Monte Carlo
(MCMC) method to estimate the marginal posterior densities of the unknowns.
Unlike the widely employed deterministic least squares approach, this enables
us to perform additional posterior analyses, from which quantitative
uncertainty measures about the estimations can be obtained through credibility
intervals. Finally, we derive theoretical bounds on the estimation of
multilayer model parameters in blind setting, which signify the best
achievable error performance of any estimator. We note that even though the
presented MCMC methods are designed for one-dimensional wave propagation
model, they can be extended to the three-dimensional scenario.
The paper is organized as follows. We first introduce the wave propagation and
measurement models in Section II, followed by the description of the problem
formulation under Bayesian setting in Section III. Then, in Sections IV and V,
we present the proposed MCMC method for sampling from the highly complex
posterior distribution. We validate the proposed sampling schemes and provide
a comparison between the derived theoretical bounds and the performance of the
proposed estimator in Section VI. We finalize our discussion in Section VII
with concluding remarks and possible future research directions.
Figure 1: Illustration of reflection paths for an $M$-layer structure. Black
arrows represent the primary reflection paths associated with each interface.
Gray arrows represent the multiple bounces between the interfaces. Inclined
arrows are used only for illustration purposes.
## II Measurement Model for Multilayer Reflectivity Profile
### II-A Multilayer Reflection Model
We consider a UWB system where we transmit a short duration UWB pulse and
collect the backscattered signals which are reflections from an object
composed of multiple planar layers. The layers are assumed to be homogeneous
mediums and have distinct dielectric properties such that the interfaces
between them can be considered as reflective surfaces. The backscattered
signal can be expressed as a combination of scaled, shifted, and distorted
versions of the transmitted waveform, where the distortion arises from the
materials being dispersive or having non-zero conductivity. These scalings,
shifts, and distortions are completely determined by the reflectivity profile
of the target being
monitored. In general, for an $M$-layer structure with thicknesses $d_{i}$, as
illustrated in Fig. 1, where the last layer has infinite depth, the 1D
downward reflectivity profile $X_{i}(\omega)$ in frequency domain has the
following recursive form [25]
$X_{i}(\omega)=\dfrac{r_{i}+X_{i+1}(\omega)e^{-2\alpha_{i}d_{i}}e^{-j2\beta_{i}d_{i}}}{1+r_{i}X_{i+1}(\omega)e^{-2\alpha_{i}d_{i}}e^{-j2\beta_{i}d_{i}}},$
(1)
at each interface $I_{i}$ for $i=1,\ldots,M-1$, with $X_{M}(\omega)=r_{M}$ and
$\omega$ representing the angular frequency in rad/sec. The downward local
reflection coefficient at interface $I_{i}$ is given by
$r_{i}=(\eta_{i}-\eta_{i-1})/(\eta_{i}+\eta_{i-1})$, where
$\eta_{i}=\sqrt{(j\omega\mu_{o})/(\sigma_{i}+j\omega\varepsilon_{o}\varepsilon_{i})}$
is the complex valued intrinsic impedance defined in terms of the dielectric
constant $\varepsilon_{i}$ and conductivity $\sigma_{i}$ in S/m of the
mediums. Here, $\mu_{o}$ and $\varepsilon_{o}$ are constants representing the
vacuum permeability in H/m and vacuum permittivity in F/m respectively.
Lastly,
$\alpha_{i}=\omega[\mu_{o}\varepsilon_{o}\varepsilon_{i}(\zeta_{i}-1)/2]^{1/2}$
and
$\beta_{i}=\omega[\mu_{o}\varepsilon_{o}\varepsilon_{i}(\zeta_{i}+1)/2]^{1/2}$
represent the attenuation coefficients and the phase constants respectively,
where
$\zeta_{i}=\sqrt{1+(\sigma_{i}/\omega\varepsilon_{o}\varepsilon_{i})^{2}}$.
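For concreteness, the recursion (1) can be evaluated numerically as follows. This is our own sketch, not the authors' code: the layer indexing, array layout, and numerical constants are assumptions, and `d[0]` is a placeholder since the recursion only uses the thicknesses of layers $1,\ldots,M-1$.

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability, H/m
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def reflectivity(omega, eps, sigma, d):
    """X_1(omega) via the recursion in (1), for one angular frequency omega.

    eps[i], sigma[i]: dielectric constant / conductivity of medium i = 0..M
    (medium 0 hosts the source, medium M is semi-infinite);
    d[i]: thickness of layer i = 1..M-1 (d[0] is a placeholder, unused here).
    """
    M = len(eps) - 1
    eta = np.sqrt(1j * omega * MU0 / (sigma + 1j * omega * EPS0 * eps))
    r = (eta[1:] - eta[:-1]) / (eta[1:] + eta[:-1])        # r[i-1] = r_i
    zeta = np.sqrt(1.0 + (sigma / (omega * EPS0 * eps)) ** 2)
    alpha = omega * np.sqrt(MU0 * EPS0 * eps * (zeta - 1.0) / 2.0)
    beta = omega * np.sqrt(MU0 * EPS0 * eps * (zeta + 1.0) / 2.0)
    X = r[M - 1]                                           # X_M = r_M
    for i in range(M - 1, 0, -1):                          # i = M-1, ..., 1
        phase = np.exp(-2.0 * (alpha[i] + 1j * beta[i]) * d[i])
        X = (r[i - 1] + X * phase) / (1.0 + r[i - 1] * X * phase)
    return X
```

For a single lossless interface with $\varepsilon_{0}=1$ and $\varepsilon_{1}=4$ this returns $r_{1}=(\eta_{1}-\eta_{0})/(\eta_{1}+\eta_{0})=-1/3,$ matching the local reflection coefficient directly.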
### II-B Measurement Model
In this work, we consider the scenario in which the source of the transmitted
pulse is $d_{0}$ meters away from the interface $I_{1}$ with normal incidence.
Therefore, for a given frequency $\omega$, the corresponding frequency
component of the transmitted pulse, $H(\omega)$, is multiplied by
$X_{0}(\omega)=X_{1}(\omega)e^{-2\alpha_{0}d_{0}}e^{-j2\beta_{0}d_{0}}$,
yielding the following backscattering model
$Y(\omega)=H(\omega)X_{0}(\omega)$, where $Y(\omega)$ represents the frequency
domain representation of the backscattered signal. In practice, we observe the
measurement sampled at frequencies $\\{\omega_{n}\\}_{n=0}^{N-1}$, which can
be modeled as
$\mbox{\boldmath${y}$}=\text{diag}(\mbox{\boldmath${F}$}_{Q}\mbox{\boldmath${h}$})\mbox{\boldmath${x}$}+\mbox{\boldmath${v}$},$
(2)
where $\mbox{\boldmath${y}$},\mbox{\boldmath${x}$}\in\mathbbm{C}^{N}$ are
defined as $\mbox{\boldmath${y}$}=[Y(\omega_{0}),\ldots,Y(\omega_{N-1})]^{T}$
and
$\mbox{\boldmath${x}$}=[X_{0}(\omega_{0}),\ldots,X_{0}(\omega_{N-1})]^{T}$,
and the transmitted waveform is modeled in the time domain as
$\mbox{\boldmath${h}$}\in\mathbbm{R}^{Q}$ to limit its duration to $Q$
samples. The matrix
$\mbox{\boldmath${F}$}_{Q}\in\mathbbm{C}^{N\times Q}$ represents the
appropriately selected partial DFT matrix. We model the measurement noise by
including a complex valued additive noise term
$\mbox{\boldmath${v}$}\in\mathbbm{C}^{N}$.
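The model (2) is easy to simulate. The sketch below uses our own naming, and the choice of the rows of the partial DFT matrix via a list of FFT bin indices is an assumption, since the text only says the matrix is "appropriately selected":

```python
import numpy as np

def simulate_measurement(h, x, n_fft, bins, noise_std, rng):
    """y = diag(F_Q h) x + v, as in eq. (2).

    h: real transmitted pulse (Q samples); x: reflectivity samples (length N);
    bins: N FFT bin indices defining the rows of the partial DFT matrix F_Q
    (an assumed selection rule); v: circularly symmetric complex noise.
    """
    Q = len(h)
    F = np.exp(-2j * np.pi * np.outer(bins, np.arange(Q)) / n_fft)   # N x Q
    v = noise_std / np.sqrt(2) * (rng.standard_normal(len(x))
                                  + 1j * rng.standard_normal(len(x)))
    return (F @ h) * x + v
```

With an impulse pulse (flat spectrum) and zero noise, the output reduces to the reflectivity samples themselves, which makes a convenient unit check.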
Figure 2: An example cross section of high dimensional log-posterior
distribution $\log
p(\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2}|\mbox{\boldmath${y}$})$
for $d_{2}$-$\varepsilon_{2}$ plane at different temperature levels. Remaining
model parameters are fixed at their true values.
## III Problem Setting
Our goal is to estimate the multilayer model parameters
$\\{\varepsilon_{i}\\}_{i=1}^{M}$, $\\{\sigma_{i}\\}_{i=1}^{M}$, and
$\\{d_{i}\\}_{i=0}^{M-1}$ along with the transmitted pulse ${h}$ solely based
on the measurement vector ${y}$. We note that dielectric constant
$\varepsilon_{0}$ (not to be confused with vacuum permittivity
$\varepsilon_{o}$) and conductivity $\sigma_{0}$ of the first medium, where
the source is located, are assumed to be known, but the distance $d_{0}$
between the transmitter and the first interface is also unknown and to be
estimated. Following a Bayesian framework, we assign specific prior
distributions on the unknown variables reflecting our prior knowledge, which
are described in the subsequent sections.
#### III-1 Prior Distribution for Multilayer Model Parameters
We collect the multilayer model parameters in a single vector
$\mbox{\boldmath${\theta}$}=[\varepsilon_{1},\ldots,\varepsilon_{M},\sigma_{1},\ldots,\sigma_{M},d_{0},\ldots,d_{M-1}]^{T}$
for more compact notation. Assuming bounded parameter space
$\Lambda_{\theta}$, where the lower and upper bounds are given by
$\theta_{i,\text{min}}$ and $\theta_{i,\text{max}}$ for $i^{th}$ parameter,
and statistically independent parameters, the joint prior distribution of
${\theta}$ follows
$p(\mbox{\boldmath${\theta}$})=\prod_{i=1}^{3M}p(\theta_{i})=\prod_{i=1}^{3M}\mathcal{B}(\bar{\theta}_{i};\lambda_{i},\kappa_{i})$
where $\mathcal{B}(\cdot;\lambda,\kappa)$ denotes the Beta distribution with
mode $\lambda$ and concentration $\kappa$, and
$\bar{\theta}_{i}=(\theta_{i}-\theta_{i,\text{min}})/(\theta_{i,\text{max}}-\theta_{i,\text{min}})$.
The individual parameters $\lambda_{i}$ and $\kappa_{i}$ are selected to
reflect our prior knowledge.
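The paper does not spell out its mode–concentration convention; a common one (an assumption on our part) is $\alpha=1+\lambda(\kappa-2)$ and $\beta=1+(1-\lambda)(\kappa-2)$ for $\kappa>2$, which makes the Beta mode $(\alpha-1)/(\alpha+\beta-2)$ equal $\lambda$. A sketch of the resulting rescaled prior, with our own function names:

```python
import numpy as np

def beta_from_mode_conc(lam, kappa):
    """Beta shape parameters from mode lam in (0,1) and concentration
    kappa > 2 (a common convention; the paper does not state its mapping)."""
    return 1.0 + lam * (kappa - 2.0), 1.0 + (1.0 - lam) * (kappa - 2.0)

def prior_logpdf(theta, tmin, tmax, lam, kappa):
    """Unnormalized log prior p(theta_i) of one bounded, rescaled parameter.

    theta must lie strictly inside (tmin, tmax)."""
    a, b = beta_from_mode_conc(lam, kappa)
    t = (theta - tmin) / (tmax - tmin)          # theta-bar in (0, 1)
    return (a - 1.0) * np.log(t) + (b - 1.0) * np.log(1.0 - t)
```

Evaluating this density on a grid confirms that it peaks at the requested mode.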
#### III-2 Prior Distribution for Pulse Sequence
We represent the transmitted pulse $\mbox{\boldmath${h}$}\in\mathbbm{R}^{Q}$
using a subspace $\mbox{\boldmath${A}$}\in\mathbbm{R}^{Q\times L}$, i.e.,
$\mbox{\boldmath${h}$}=\mbox{\boldmath${A}$}\mbox{\boldmath${\gamma}$}$, where
$\mbox{\boldmath${\gamma}$}\in\mathbbm{R}^{L}$ represents the random
coefficient vector. Here, ${A}$ is selected to reflect the frequency domain
restrictions, i.e., it can be constructed by selecting the first $L$ sequence
of either Discrete Prolate Spheroidal (DPS) Sequences or Hermite Functions
[26]. Instead of directly solving for ${h}$, we solve for the coefficient
vector ${\gamma}$, which is assigned a zero-mean i.i.d. Gaussian distribution
with known diagonal covariance
$\mbox{\boldmath${\Sigma}$}_{\gamma}=\text{diag}(\sigma_{\gamma}^{2}\mbox{\boldmath${I}$})$,
i.e.,
$p(\mbox{\boldmath${\gamma}$})=\mathcal{N}(\mbox{\boldmath${\gamma}$};\mbox{\boldmath${0}$},\mbox{\boldmath${\Sigma}$}_{\gamma})$.
#### III-3 Prior Distribution for Noise Variance
We model the measurement noise ${v}$ with a circularly symmetric complex
Gaussian law,
$\mathcal{CN}(\mbox{\boldmath${v}$};\mbox{\boldmath${0}$},\sigma_{v}^{2}\mbox{\boldmath${I}$})$,
where its variance, $\sigma_{v}^{2}$, is another unknown to be estimated
along with the other model parameters. We assign an Inverse-Gamma distribution
with shape and scale parameters $\alpha_{v}$ and $\beta_{v}$ to the noise
variance, since it is the analytically tractable conjugate prior for the
unknown variance of a Gaussian distribution, i.e.,
$p(\sigma_{v}^{2})=\mathcal{IG}(\sigma_{v}^{2};\alpha_{v},\beta_{v})$.
Given the prior distributions for each of the variables, and assuming
${\theta}$, ${\gamma}$ and $\sigma_{v}^{2}$ are statistically independent, the
posterior distribution has the following expression
$p(\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2}|\mbox{\boldmath${y}$})\propto
p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})p(\mbox{\boldmath${\theta}$})p(\mbox{\boldmath${\gamma}$})p(\sigma_{v}^{2}),$
(3)
where we dropped the irrelevant scaling factor $p(\mbox{\boldmath${y}$})$. The
likelihood term has the form of circularly symmetric complex Gaussian
distribution
$p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})=\bigg{(}\dfrac{1}{\pi\sigma_{v}^{2}}\bigg{)}^{N}\exp\bigg{(}-\dfrac{\|\mbox{\boldmath${y}$}-\text{diag}(\mbox{\boldmath${B}$}\mbox{\boldmath${\gamma}$})\mbox{\boldmath${x}$}\|^{2}}{\sigma_{v}^{2}}\bigg{)}$
(4)
where $\mbox{\boldmath${B}$}=\mbox{\boldmath${F}$}_{Q}\mbox{\boldmath${A}$}$
and $\|\cdot\|$ represents the $\ell_{2}$-norm of a vector.
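In code, the log of the likelihood (4), which any sampler evaluates repeatedly, is short. This is our own sketch; note that the conjugating inner product gives the squared $\ell_{2}$-norm of the complex residual:

```python
import numpy as np

def loglike(y, B, gamma, x, sigma2):
    """Log of the circularly symmetric complex Gaussian likelihood in eq. (4)."""
    r = y - (B @ gamma) * x              # residual y - diag(B gamma) x
    N = len(y)
    # np.vdot conjugates its first argument, so vdot(r, r).real = ||r||^2.
    return -N * np.log(np.pi * sigma2) - np.vdot(r, r).real / sigma2
```

On noiseless data the residual vanishes and the value reduces to $-N\log(\pi\sigma_{v}^{2})$, which makes a simple consistency check.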
We consider the Minimum Mean Square Error (MMSE) estimator, given by
$(\mbox{\boldmath${\theta}$}^{*},\mbox{\boldmath${\gamma}$}^{*},\sigma_{v}^{2*})_{\text{MMSE}}=E[\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2}|\mbox{\boldmath${y}$}],$
(5)
for the estimation of the parameters. However, the posterior distribution
given in (3) is highly complex, possibly having multimodal structure with many
local maxima, as illustrated in Fig. 2. In such cases, the Maximum A
Posteriori (MAP) estimator could be a more favorable choice. Therefore, we
also consider the MAP estimator, given by
$(\mbox{\boldmath${\theta}$}^{*},\mbox{\boldmath${\gamma}$}^{*},\sigma_{v}^{2*})_{\text{MAP}}=\operatorname*{arg\,max}_{\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2}}p(\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2}|\mbox{\boldmath${y}$}).$
(6)
The MMSE estimator requires intractable integration of the posterior
distribution due to its complex structure. The MAP estimator, on the other
hand, can be obtained by employing off-the-shelf gradient ascent methods,
since the probability space is well-defined and has no discontinuities.
However, due to the existence of many local maxima, initialization plays a
critical role in finding the global maximum. Therefore,
we propose to employ MCMC simulations, which not only provide an approximate
MMSE solution through the sample mean, but also explore the high probability
regions of the parameter space, yielding a good initialization for achieving
the MAP solution. Moreover, besides the point estimates, this approach also
enables us to compute credibility intervals to represent uncertainty about
the estimates.
## IV Gibbs Sampler with Parallel Tempering
MCMC simulations are widely used in complex Bayesian inference problems to
obtain numerical solutions. At the core of MCMC methods are the samplers,
which draw samples from a target distribution, in our case the posterior
distribution given in (3). These samples can then be used to approximate the
statistics of the target distribution; for example, the MMSE estimate can be
approximated by the mean of the samples
posterior distribution significantly reduces the efficiency of the MCMC
samplers, i.e., although the probability of jumping from one mode to another
is not zero, it is generally small enough that the sampler gets stuck on one
mode of the distribution for a long time. In order to resolve this issue,
we adopt a tempering approach, i.e., Parallel Tempering, which substantially
improves the exploration power when combined with the standard MCMC samplers.
In this section, we first briefly discuss the general idea of tempering and
specifically the Parallel Tempering, followed by the description of our
proposed MCMC sampler.
### IV-A Tempering Approaches for Multimodal Distributions
Consider a high dimensional target probability distribution
$\pi(\mbox{\boldmath${z}$})$, from which we aim to draw samples. When the
target distribution $\pi(\mbox{\boldmath${z}$})$ is highly multimodal,
standard MCMC samplers such as Metropolis-Hastings (MH) and Gibbs, or even
more sophisticated methods like Hamiltonian Monte Carlo (HMC), fail to
explore the probability space efficiently, due to
the low probability regions acting like barriers in between the modes of the
distribution. The main idea of tempering is to augment the original target
distribution $\pi(\mbox{\boldmath${z}$})$ with an additional temperature
variable $T$ to create the tempered distribution
$\pi(\mbox{\boldmath${z}$};T)=K(T)\pi(\mbox{\boldmath${z}$})^{1/T}$, where
$K(T)$ denotes the normalization constant. As illustrated in Fig. 2,
tempering, when $T>1$, has a flattening effect on the original distribution,
which removes the low probability barriers between the modes. Therefore, jumps
between different modes become much more likely for the distributions with
high temperatures.
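The flattening effect of tempering can be made concrete with a small numerical sketch (the bimodal mixture target and function names below are illustrative, not from the paper): raising an unnormalized density to the power $1/T$ shrinks the ratio between a mode and the valley separating the modes, and the ratio tends to 1 as $T\rightarrow\infty$.

```python
import math

def mixture_density(z):
    """Unnormalized bimodal target: two unit-variance Gaussian bumps at -2 and +2."""
    return math.exp(-0.5 * (z + 2.0) ** 2) + math.exp(-0.5 * (z - 2.0) ** 2)

def tempered_density(z, T):
    """Unnormalized tempered density pi(z)^(1/T); T > 1 flattens the landscape."""
    return mixture_density(z) ** (1.0 / T)

def mode_to_valley_ratio(T):
    """Height of a mode (z = 2) relative to the barrier between the modes (z = 0)."""
    return tempered_density(2.0, T) / tempered_density(0.0, T)
```

Raising the temperature monotonically lowers the barrier a sampler must cross, which is exactly why the hot chains in PT mix across modes easily.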
The idea of Parallel Tempering (PT) is to run multiple MCMC chains independently and simultaneously, one at each temperature level, with stochastic temperature swaps between neighbouring levels [27]. The target distribution in PT is the joint distribution over all chains, $\prod_{\ell=1}^{L}\pi(\mbox{\boldmath${z}$}^{(\ell)};T_{\ell})$, where $\mbox{\boldmath${z}$}^{(\ell)}$ denotes the variables of the chain running at temperature level $T_{\ell}$. Assuming symmetric proposals, the acceptance probability $\alpha_{\ell,\ell+1}$ that maintains detailed balance for a temperature swap between the chains at $T_{\ell}$ and $T_{\ell+1}$ is given by
$\alpha_{\ell,\ell+1}=\min\bigg\{1,\dfrac{\pi(\mbox{\boldmath${z}$}^{(\ell)})^{1/T_{\ell+1}}\,\pi(\mbox{\boldmath${z}$}^{(\ell+1)})^{1/T_{\ell}}}{\pi(\mbox{\boldmath${z}$}^{(\ell+1)})^{1/T_{\ell+1}}\,\pi(\mbox{\boldmath${z}$}^{(\ell)})^{1/T_{\ell}}}\bigg\}.$
(7)
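In practice the swap rule (7) is evaluated in log space for numerical stability; the ratio in (7) collapses to a single factored exponent. A minimal sketch (the function name is illustrative):

```python
import math

def swap_log_alpha(log_pi_l, log_pi_lp1, T_l, T_lp1):
    """
    Log of the PT swap acceptance probability in (7). Factoring the ratio gives
    log alpha = min(0, (log pi(z_l) - log pi(z_{l+1})) * (1/T_{l+1} - 1/T_l)).
    """
    log_ratio = (log_pi_l - log_pi_lp1) * (1.0 / T_lp1 - 1.0 / T_l)
    return min(0.0, log_ratio)
```

A swap is then accepted when the log of a uniform draw falls below this value; when the two chains carry equally probable states the swap is always accepted.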
### IV-B Proposed Gibbs Sampler with Parallel Tempering
We begin by introducing the general structure of our proposed sampler and discussing its connection to the Parallel Tempering approach. We employ a Gibbs sampling scheme, a powerful MCMC tool for sampling from high-dimensional distributions, especially when the conditional posteriors are analytically tractable and straightforward to sample from [28]. Note that the multimodality of the posterior is mainly due to the likelihood function given in (4); the prior distributions assigned to the pulse shape and the noise variance do not contribute to it. Therefore, we follow an alternative tempering approach in which we partially temper the posterior distribution by applying tempering only to the likelihood. With this approach, the chains running at high temperatures sample from the prior distributions instead of a flat distribution over the parameter space. This is particularly useful when the prior distributions are unimodal, as is the case for the Gaussian and Inverse-Gamma distributions.
TABLE I: Proposed Gibbs sampler for the partially tempered posterior distribution $p(\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2}|\mbox{\boldmath${y}$};T)$ for a given temperature $T$.
Step 1. Draw $\sigma_{v}^{2}$ from $p(\sigma_{v}^{2}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$};T)\propto p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})^{1/T}p(\sigma_{v}^{2})$
Step 2. Draw $\mbox{\boldmath${\gamma}$}$ from $p(\mbox{\boldmath${\gamma}$}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$},\sigma_{v}^{2};T)\propto p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})^{1/T}p(\mbox{\boldmath${\gamma}$})$
Step 3. Draw $\mbox{\boldmath${\theta}$}$ from $p(\mbox{\boldmath${\theta}$}|\mbox{\boldmath${y}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2};T)\propto p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})^{1/T}p(\mbox{\boldmath${\theta}$})$
One iteration of the proposed Gibbs sampler for sampling from the partially
tempered posterior
$p(\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2}|\mbox{\boldmath${y}$};T)\propto
p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})^{1/T}p(\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})$
for a given temperature $T$ is given in Table I. This is a valid Gibbs
sampler, which samples each variable at least once within one iteration. The
validity of the sampler is established in Section I of the supplementary
material by showing that the MH acceptance probability is always 1 for each
step. Here, due to our selection of conjugate priors for $\sigma_{v}^{2}$ and
${\gamma}$, the partially tempered posterior conditionals
$p(\sigma_{v}^{2}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$};T)\propto
p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})^{1/T}p(\sigma_{v}^{2})$
and
$p(\mbox{\boldmath${\gamma}$}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$},\sigma_{v}^{2};T)\propto
p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})^{1/T}p(\mbox{\boldmath${\gamma}$})$
in Steps 1 and 2 have well-known forms in which the sampling is
straightforward. However, the posterior conditional of the multilayer model
parameters
$p(\mbox{\boldmath${\theta}$}|\mbox{\boldmath${y}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2};T)\propto
p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})^{1/T}p(\mbox{\boldmath${\theta}$})$,
given in Step 3, is highly complex and does not have a well-known form, which prevents direct sampling of $\mbox{\boldmath${\theta}$}$. Therefore, we construct a hierarchical sampling scheme and propose a hybrid sampling mechanism, combining the Slice Sampling and Hamiltonian Monte Carlo approaches, to draw samples from $p(\mbox{\boldmath${\theta}$}|\mbox{\boldmath${y}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2};T)$.
We present the details of the proposed hybrid sampling method in Section V. We
now describe how the Parallel Tempering approach is incorporated with the
proposed Gibbs sampler, followed by the derivation of sampling distributions
for Steps 1 and 2.
Considering a Parallel Tempering scheme with $L$ temperature levels, each MCMC
chain samples from a specific partially tempered version of the posterior
distribution, i.e., the chain at level $T_{\ell}$ samples from
$p(\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2}|\mbox{\boldmath${y}$};T_{\ell})\propto
p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})^{1/T_{\ell}}p(\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2})$
for $\ell=1,2,\ldots,L$. After one iteration of the Gibbs sampler is completed at all chains, a parameter exchange between two neighbouring levels, say $T_{\ell}$ and $T_{\ell+1}$, is proposed, where $\ell$ is drawn uniformly at random, i.e., with proposal probability $q_{\ell}=1/(L-1)$ for $\ell\in\{1,2,\ldots,L-1\}$. The proposal is accepted with acceptance probability
$\alpha_{\ell}=\min\bigg\{1,\dfrac{p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$}^{(\ell,j)},\mbox{\boldmath${\gamma}$}^{(\ell,j)},\sigma_{v}^{2(\ell,j)})^{1/T_{\ell+1}-1/T_{\ell}}}{p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$}^{(\ell+1,j)},\mbox{\boldmath${\gamma}$}^{(\ell+1,j)},\sigma_{v}^{2(\ell+1,j)})^{1/T_{\ell+1}-1/T_{\ell}}}\bigg\},$
(8)
where
$(\mbox{\boldmath${\theta}$}^{(\ell,j)},\mbox{\boldmath${\gamma}$}^{(\ell,j)},\sigma_{v}^{2(\ell,j)})$
and
$(\mbox{\boldmath${\theta}$}^{(\ell+1,j)},\mbox{\boldmath${\gamma}$}^{(\ell+1,j)},\sigma_{v}^{2(\ell+1,j)})$
represent the current parameter values at $j^{th}$ MCMC iteration which are to
be exchanged between the chains running at level $T_{\ell}$ and $T_{\ell+1}$
respectively (See Section II of the supplementary material for derivation of
the acceptance probability). Therefore, one complete MCMC cycle consists of
$L$ regular Gibbs sampling stages, followed by a single parameter exchange
step. Each cycle $j$ produces a new set of samples for each temperature level,
$\\{(\mbox{\boldmath${\theta}$}^{(\ell,j)},\mbox{\boldmath${\gamma}$}^{(\ell,j)},\sigma_{v}^{2(\ell,j)})\\}_{\ell=1}^{L}$,
but in the end, we are only interested in the samples generated at the first
level, $T_{1}=1$, which corresponds to the original posterior distribution. We
provide a more detailed description of the sampler in Algorithm 1. Next, we
present the sampling distributions for the first two steps of our sampler,
associated with each temperature level. The derivations are provided in
Section III of the supplementary material.
#### IV-B1 Sampling Distribution for Step 1
The partially tempered posterior conditional distribution for the noise
variance $\sigma_{v}^{2}$ for a given temperature level $T$ is given by
$p(\sigma_{v}^{2}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$};T)=\mathcal{IG}(\sigma_{v}^{2};\tilde{\alpha}_{v},\tilde{\beta}_{v})$
with $\tilde{\alpha}_{v}=\alpha_{v}+N/T$ and
$\tilde{\beta}_{v}=\beta_{v}+\|\mbox{\boldmath${y}$}-\text{diag}(\mbox{\boldmath${B}$}\mbox{\boldmath${\gamma}$})\mbox{\boldmath${x}$}\|^{2}/T$.
Sampling $\sigma_{v}^{2}$ is straightforward due to its well-known sampling
distribution. Note that as $T\rightarrow\infty$, we have
$\tilde{\alpha}_{v}\rightarrow\alpha_{v}$ and
$\tilde{\beta}_{v}\rightarrow\beta_{v}$, which corresponds to the prior
distribution $p(\sigma_{v}^{2})$.
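The tempered Inverse-Gamma draw of Step 1 can be sketched with Python's standard library (the function name and the scalar `residual_sq` argument standing in for $\|\mbox{\boldmath${y}$}-\text{diag}(\mbox{\boldmath${B}$}\mbox{\boldmath${\gamma}$})\mbox{\boldmath${x}$}\|^{2}$ are illustrative); it uses the identity that if $X\sim\text{Gamma}(a,\text{scale}=1/b)$ then $1/X\sim\mathcal{IG}(a,b)$:

```python
import random

def sample_tempered_noise_variance(alpha_v, beta_v, N, residual_sq, T, rng=random):
    """
    Draw sigma_v^2 from the tempered conditional IG(alpha_v + N/T,
    beta_v + residual_sq / T) given in Step 1. As T -> infinity both
    parameters revert to the prior's (alpha_v, beta_v).
    """
    a_t = alpha_v + N / T
    b_t = beta_v + residual_sq / T
    # 1/X with X ~ Gamma(shape a_t, scale 1/b_t) is Inverse-Gamma(a_t, b_t)
    return 1.0 / rng.gammavariate(a_t, 1.0 / b_t)
```

Note that Python's `gammavariate(alpha, beta)` takes a shape and a scale, hence the `1.0 / b_t` scale argument.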
#### IV-B2 Sampling Distribution for Step 2
This step requires the partially tempered posterior conditional of the pulse coefficient $\mbox{\boldmath${\gamma}$}$ for a given temperature level $T$, which is a multivariate Gaussian distribution:
$p(\mbox{\boldmath${\gamma}$}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$},\sigma_{v}^{2};T)=\mathcal{N}(\mbox{\boldmath${\gamma}$};\tilde{\mbox{\boldmath${\mu}$}}_{\gamma},\tilde{\mbox{\boldmath${\Sigma}$}}_{\gamma})$
with
$\tilde{\mbox{\boldmath${\mu}$}}_{\gamma}=\frac{2}{T\sigma_{v}^{2}}\tilde{\mbox{\boldmath${\Sigma}$}}_{\gamma}\Re\\{\mbox{\boldmath${C}$}^{H}\mbox{\boldmath${y}$}\\}$
and
$\tilde{\mbox{\boldmath${\Sigma}$}}_{\gamma}=\big{(}\frac{2}{T\sigma_{v}^{2}}\Re\\{\mbox{\boldmath${C}$}^{H}\mbox{\boldmath${C}$}\\}+\mbox{\boldmath${\Sigma}$}_{\gamma}^{-1}\big{)}^{-1}$
where
$\mbox{\boldmath${C}$}=\text{diag}(\mbox{\boldmath${x}$})\mbox{\boldmath${B}$}$
and $\Re\{\cdot\}$ denotes the real part of its argument. Hence, sampling $\mbox{\boldmath${\gamma}$}$ is also straightforward. Similar to Step 1, as $T\rightarrow\infty$ the distribution converges to the prior $p(\mbox{\boldmath${\gamma}$})$, since $\tilde{\mbox{\boldmath${\mu}$}}_{\gamma}\rightarrow\mbox{\boldmath${0}$}$ and $\tilde{\mbox{\boldmath${\Sigma}$}}_{\gamma}\rightarrow\mbox{\boldmath${\Sigma}$}_{\gamma}$.
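To make the $T\rightarrow\infty$ limit of Step 2 concrete, here is a scalar stand-in for the tempered Gaussian update, where real scalars `c` and `y` replace $\Re\{\mbox{\boldmath${C}$}^{H}\mbox{\boldmath${C}$}\}$ and $\Re\{\mbox{\boldmath${C}$}^{H}\mbox{\boldmath${y}$}\}$ (the function name is illustrative):

```python
def tempered_gaussian_posterior(c, y, sigma_v2, prior_var, T):
    """
    Scalar analogue of the tempered conditional for gamma:
      posterior variance  s = (2 c^2 / (T sigma_v^2) + 1 / prior_var)^(-1)
      posterior mean      m = (2 / (T sigma_v^2)) * s * c * y
    As T grows, s -> prior_var and m -> 0, i.e. the prior is recovered.
    """
    s = 1.0 / (2.0 * c * c / (T * sigma_v2) + 1.0 / prior_var)
    m = (2.0 / (T * sigma_v2)) * s * c * y
    return m, s
```

At $T=1$ this is the usual Gaussian conjugate update; the temperature simply down-weights the likelihood precision term relative to the prior precision.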
Initialize $\sigma_{v}^{2(\ell,0)}$, $\mbox{\boldmath${\gamma}$}^{(\ell,0)}$, and $\mbox{\boldmath${\theta}$}^{(\ell,0)}$ for $\ell=1,2,\ldots,L$
for $j=1$ to $J$ do
  for $\ell=1$ to $L$ do
    Draw $\sigma_{v}^{2(\ell,j)}$ from $p(\sigma_{v}^{2}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$}^{(\ell,j-1)},\mbox{\boldmath${\gamma}$}^{(\ell,j-1)};T_{\ell})$
    Draw $\mbox{\boldmath${\gamma}$}^{(\ell,j)}$ from $p(\mbox{\boldmath${\gamma}$}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$}^{(\ell,j-1)},\sigma_{v}^{2(\ell,j)};T_{\ell})$
    Draw $\mbox{\boldmath${\theta}$}^{(\ell,j)}$ from $p(\mbox{\boldmath${\theta}$}|\mbox{\boldmath${y}$},\mbox{\boldmath${\gamma}$}^{(\ell,j)},\sigma_{v}^{2(\ell,j)};T_{\ell})$
  end for
  Draw a level $\ell$ uniformly from $\{1,2,\ldots,L-1\}$
  Compute acceptance probability $\alpha_{\ell}$ using (8)
  if $U[0,1]<\alpha_{\ell}$ then
    Swap $\sigma_{v}^{2(\ell,j)}\rightleftharpoons\sigma_{v}^{2(\ell+1,j)}$, $\mbox{\boldmath${\gamma}$}^{(\ell,j)}\rightleftharpoons\mbox{\boldmath${\gamma}$}^{(\ell+1,j)}$, $\mbox{\boldmath${\theta}$}^{(\ell,j)}\rightleftharpoons\mbox{\boldmath${\theta}$}^{(\ell+1,j)}$
  end if
end for
Algorithm 1 Proposed Gibbs Sampler with PT
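The control flow of Algorithm 1 can be sketched as a minimal Python skeleton. Here `gibbs_sweep` and `log_lik` are placeholder callables standing in for Steps 1-3 of Table I and for the log-likelihood appearing in the swap rule (8); they are assumptions for illustration, not the paper's model.

```python
import math
import random

def run_pt_gibbs(temperatures, log_lik, gibbs_sweep, init, J, rng):
    """
    Skeleton of Algorithm 1: one Gibbs sweep per chain per cycle, then a
    single proposed swap between a uniformly chosen adjacent pair, accepted
    with log-probability (log_lik_l - log_lik_{l+1}) * (1/T_{l+1} - 1/T_l),
    matching (8). Only the T_1 = 1 chain's samples are retained.
    """
    L = len(temperatures)
    states = [dict(init) for _ in range(L)]
    cold_samples = []
    for _ in range(J):
        for l in range(L):                             # Gibbs stage at each level
            states[l] = gibbs_sweep(states[l], temperatures[l], rng)
        l = rng.randrange(L - 1)                       # uniform adjacent pair
        dinvT = 1.0 / temperatures[l + 1] - 1.0 / temperatures[l]
        log_alpha = (log_lik(states[l]) - log_lik(states[l + 1])) * dinvT
        if math.log(rng.random()) < log_alpha:         # accept the swap
            states[l], states[l + 1] = states[l + 1], states[l]
        cold_samples.append(dict(states[0]))
    return cold_samples
```

In the paper's setting `gibbs_sweep` would execute the three tempered conditional draws of Table I; here it can be any transition kernel that leaves the tempered posterior invariant.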
## V Proposed Hybrid Sampler for Sampling Multilayered Model Parameters
The multidimensional sampling distribution for the multilayer model parameters $\mbox{\boldmath${\theta}$}$ does not have a well-known form that would enable direct sampling. Therefore, we construct a hierarchical scheme that incorporates a different sampling approach for Step 3 in Table I. Although the PT approach helps resolve the multimodality (or local optimality) issue of the likelihood, the employed sampling scheme still plays an important role in the sampling efficiency. To this end, in this section we present a specific hybrid sampling mechanism that combines the Slice Sampling (SS) and Hamiltonian Monte Carlo (HMC) approaches. In the following sections, we first describe the principles of SS and HMC, and then present our hybrid sampling scheme.
### V-A Slice Sampling
SS is among the most widely used methods for within-Gibbs sampling schemes [29]. It is applicable in both univariate and multivariate settings whenever the target distribution can be evaluated up to a normalization constant. In this work, we employ the univariate setting and sample $\mbox{\boldmath${\theta}$}$ in $3M$ steps, where in each step we sample an element $\theta_{i}$ from its full conditional posterior distribution $p(\theta_{i}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$}_{-i},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2};T)$, associated with a given temperature level $T$. The first step of SS is to draw a density level $\eta_{i}$ from $U[0,p(\theta_{i}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$}_{-i},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2};T)]$. Then, a line segment (or a hyper-rectangle in the multivariate case) of predefined length $w_{i}$ is randomly positioned around $\theta_{i}$ and sequentially extended in both directions by multiples of $w_{i}$ until the density at both ends falls below $\eta_{i}$; this is known as the stepping-out procedure. Once the stepping-out procedure is completed, a point $\tilde{\theta}_{i}$ is drawn uniformly within the extended line segment. If the selected point does not satisfy $p(\tilde{\theta}_{i}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$}_{-i},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2};T)\geq\eta_{i}$, the line segment is shrunk by setting one end to $\tilde{\theta}_{i}$ such that $\theta_{i}$ still lies within the resulting segment, and a new point is drawn in the same manner. This shrinkage process, also known as the stepping-in procedure, continues until a point satisfying $p(\tilde{\theta}_{i}|\mbox{\boldmath${y}$},\mbox{\boldmath${\theta}$}_{-i},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2};T)\geq\eta_{i}$ is found; that point is taken as the next sample value. Throughout this work, we set the length of the line segment to the range of the corresponding parameter, i.e., $w_{i}=\theta_{i,\text{max}}-\theta_{i,\text{min}}$.
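The stepping-out and shrinkage loops described above can be sketched for the univariate case as follows, working with the log-density for numerical stability (the interface is illustrative; drawing the level $\eta_i$ becomes adding $\log u$ to the current log-density):

```python
import math
import random

def slice_sample_step(x0, log_f, w, rng):
    """
    One univariate slice-sampling update: draw a level under log_f(x0),
    step out with width w until both ends lie below the level, then
    shrink the interval until a point above the level is found.
    """
    log_eta = log_f(x0) + math.log(rng.random())     # slice level in log space
    lo = x0 - w * rng.random()                       # random placement around x0
    hi = lo + w
    while log_f(lo) > log_eta:                       # stepping-out procedure
        lo -= w
    while log_f(hi) > log_eta:
        hi += w
    while True:                                      # shrinkage (stepping-in)
        x1 = lo + (hi - lo) * rng.random()
        if log_f(x1) >= log_eta:
            return x1
        if x1 < x0:                                  # shrink so x0 stays inside
            lo = x1
        else:
            hi = x1
```

Each accepted point leaves the target invariant, and no step-size tuning beyond the initial width `w` is required, which is the property exploited in Stages I and II of the hybrid sampler.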
Figure 3: Illustration of reflective HMC for two-dimensional case when (left)
only one boundary is violated and (right) both boundaries are violated. Shaded
regions represent outside of the boundaries.
### V-B Hamiltonian Monte Carlo
The core idea of HMC is to use the geometry of the target distribution to eliminate the random-walk behaviour of the conventional Metropolis-Hastings (MH) method, enabling long jumps in parameter space with a high acceptance rate [30]. It is based on an analogy with physical systems, in which the target distribution is translated into a potential energy function and the parameters of interest, $\mbox{\boldmath${\theta}$}$, are regarded as position variables. An augmented state space is created by introducing momentum variables, denoted by $\mbox{\boldmath${p}$}$, representing the rate of change of the position variables. Defining the tempered potential energy function as $U(\mbox{\boldmath${\theta}$};T)=-\log p(\mbox{\boldmath${y}$}|\mbox{\boldmath${\theta}$},\mbox{\boldmath${\gamma}$},\sigma_{v}^{2};T)$ and the kinetic energy function as $K(\mbox{\boldmath${p}$})=\frac{1}{2}\mbox{\boldmath${p}$}^{T}\mbox{\boldmath${M}$}\mbox{\boldmath${p}$}$, where $\mbox{\boldmath${M}$}$ is a weighting matrix that adjusts the momentum distribution for more efficient sampling, the total energy of the system at a given state $(\mbox{\boldmath${\theta}$},\mbox{\boldmath${p}$})$ and temperature $T$ is given by the Hamiltonian $H(\mbox{\boldmath${\theta}$},\mbox{\boldmath${p}$};T)=U(\mbox{\boldmath${\theta}$};T)+K(\mbox{\boldmath${p}$})$.
HMC is used to sample $(\mbox{\boldmath${\theta}$},\mbox{\boldmath${p}$})$
pairs jointly from the canonical distribution
$P(\mbox{\boldmath${\theta}$},\mbox{\boldmath${p}$};T)\propto\exp\big{(}-H(\mbox{\boldmath${\theta}$},\mbox{\boldmath${p}$};T)\big{)}$
at a given temperature level $T$. The sampling is achieved by first sampling a
new momentum state from
$\mathcal{N}(\mbox{\boldmath${p}$};\mbox{\boldmath${0}$},\mbox{\boldmath${M}$}^{-1})$,
and then simulating the Hamiltonian dynamics, given by
$\dfrac{d\mbox{\boldmath${\theta}$}}{dt}=\nabla_{p}H(\mbox{\boldmath${\theta}$},\mbox{\boldmath${p}$};T),\qquad\dfrac{d\mbox{\boldmath${p}$}}{dt}=-\nabla_{\theta}H(\mbox{\boldmath${\theta}$},\mbox{\boldmath${p}$};T),$
(9)
to produce a new position state. However, exact simulation requires integrating (9), which is not feasible in practice. Hence, the dynamics are approximated by the leapfrog algorithm, a numerical integration scheme consisting of alternating discretized updates to $\mbox{\boldmath${\theta}$}$ and $\mbox{\boldmath${p}$}$: $i)$ $\mbox{\boldmath${p}$}_{\epsilon/2}=\mbox{\boldmath${p}$}_{0}-\frac{\epsilon}{2}\nabla_{\theta}U(\mbox{\boldmath${\theta}$}_{0};T)$, $ii)$ $\mbox{\boldmath${\theta}$}_{\epsilon}=\mbox{\boldmath${\theta}$}_{0}+\epsilon\mbox{\boldmath${M}$}\mbox{\boldmath${p}$}_{\epsilon/2}$, and $iii)$ $\mbox{\boldmath${p}$}_{\epsilon}=\mbox{\boldmath${p}$}_{\epsilon/2}-\frac{\epsilon}{2}\nabla_{\theta}U(\mbox{\boldmath${\theta}$}_{\epsilon};T)$. One iteration of the leapfrog algorithm simulates the dynamics over a time interval $\epsilon$, the predefined step size of the algorithm. To simulate for a duration $\tau$, the process is repeated $\Delta=\tau/\epsilon$ times. Although the leapfrog algorithm provides a quite accurate approximation of the continuous-time integration, some residual error remains due to discretization, which may alter the value of the Hamiltonian. To maintain detailed balance, the proposed state is accepted according to the MH acceptance criterion.
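A scalar sketch of the leapfrog updates $i)$-$iii)$, using the kinetic energy $K(p)=\frac{1}{2}Mp^{2}$ of the text so that $d\theta/dt=Mp$ (the function name and scalar form are illustrative):

```python
def leapfrog(theta, p, grad_U, eps, n_steps, M=1.0):
    """
    Leapfrog integration of the Hamiltonian dynamics (9) for a scalar
    parameter: half momentum kick, alternating full position/momentum
    updates, and a final half momentum kick.
    """
    p = p - 0.5 * eps * grad_U(theta)            # step i): half kick
    for i in range(n_steps):
        theta = theta + eps * M * p              # step ii): position drift
        if i < n_steps - 1:
            p = p - eps * grad_U(theta)          # merged consecutive half kicks
    p = p - 0.5 * eps * grad_U(theta)            # step iii): final half kick
    return theta, p
```

Because the scheme is symplectic, the Hamiltonian is nearly conserved over many steps, which is what keeps the MH acceptance rate high even for long trajectories.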
Figure 4: Proposed hybrid sampling mechanism with self-adaptation.
HMC is conventionally used for sampling from smooth and unbounded
distributions. For bounded parameter spaces, as we have with
$\Lambda_{\theta}$, a modified reflective HMC can be used, where the
trajectory on the parameter space is bounced back when it is blocked by a
boundary. Specifically, if
$\theta_{i}\notin[\theta_{i,\text{min}},\theta_{i,\text{max}}]$ after
completing one step of the leapfrog algorithm, we undo the previous step,
negate the $i^{th}$ momentum variable, i.e., $p_{i}^{\prime}=-p_{i}$, and then
complete the remaining steps using the updated momentum vector. If multiple
boundaries are violated simultaneously, all of the corresponding momentum
variables are negated. In Fig. 3, we demonstrate the employed reflection
method for a two-dimensional case. This method of reflection leaves the
Hamiltonian invariant, since negation does not change the value of kinetic
energy function, i.e.,
$K(\mbox{\boldmath${p}$}^{\prime})=K(\mbox{\boldmath${p}$})$. Moreover, the
same MH acceptance criterion remains valid, preserving the detailed balance.
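The boundary rule above, applied to the position update of a single leapfrog step, can be sketched in the scalar case (function name illustrative): if the proposed coordinate leaves $[\theta_{\text{min}},\theta_{\text{max}}]$, the move is undone and the corresponding momentum is negated.

```python
def bounded_leapfrog_position_step(theta, p, eps, lo, hi, M=1.0):
    """
    Position update of a leapfrog step with the reflection rule described
    in the text: undo a move that exits [lo, hi] and negate the momentum.
    Negation leaves K(p) unchanged, so the Hamiltonian is invariant.
    """
    theta_new = theta + eps * M * p
    if theta_new < lo or theta_new > hi:
        return theta, -p          # undo the step, bounce the momentum
    return theta_new, p
```

In the multivariate case, each violated coordinate's momentum is negated independently, as in Fig. 3.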
Figure 5: Evolution of MPSRF and log-posterior for different samplers with
$L=1$ (No Tempering) and $L=16$ (Parallel Tempering).
Note that the leapfrog algorithm still requires an analytic expression for the gradient of the potential energy function $U(\mbox{\boldmath${\theta}$};T)=\|\mbox{\boldmath${y}$}-\text{diag}(\mbox{\boldmath${B}$}\mbox{\boldmath${\gamma}$})\mbox{\boldmath${x}$}\|^{2}/(T\sigma_{v}^{2})$. Following the derivation provided in Section IV of the supplementary material, it is given by
$\nabla_{\theta}U(\mbox{\boldmath${\theta}$};T)=-\dfrac{2}{T\sigma_{v}^{2}}\Re\Big\{\big(\mbox{\boldmath${y}$}-\mbox{\boldmath${D}$}\mbox{\boldmath${x}$}\big)^{H}\mbox{\boldmath${D}$}\nabla_{\theta}\mbox{\boldmath${x}$}\Big\},$
(10)
where $\mbox{\boldmath${D}$}=\text{diag}(\mbox{\boldmath${B}$}\mbox{\boldmath${\gamma}$})$.
The gradient of ${x}$ is defined as
$\nabla_{\theta}\mbox{\boldmath${x}$}=[\nabla_{\theta}X_{0}(\omega_{0}),\nabla_{\theta}X_{0}(\omega_{1}),\ldots,\nabla_{\theta}X_{0}(\omega_{N-1})]^{T}$,
where the individual terms $\nabla_{\theta}X_{0}(\omega_{i})$ have the form of
$\nabla_{\theta}X_{0}(\omega_{i})=[\partial
X_{0}(\omega_{i})/\partial\theta_{1},\ldots,\partial
X_{0}(\omega_{i})/\partial\theta_{3M}]^{T}$ for $i=0,1,\ldots,N-1$. Exact
expression for each element of $\nabla_{\theta}X_{0}(\omega_{i})$ is also
provided in Section IV of the supplementary material.
### V-C Proposed Hybrid Sampler with Self-Adaptation
The parameters $\epsilon$, $\Delta$ and $\mbox{\boldmath${M}$}$ significantly affect the overall performance of HMC. In general, a higher $\epsilon$ causes a large residual error, leading to a low acceptance rate. On the other hand, selecting too small an $\epsilon$ requires a large number of steps $\Delta$ to achieve long jumps, which increases the computational load. Hence, both parameters need to be tuned for the best trade-off. Similarly, an appropriate selection of $\mbox{\boldmath${M}$}$ is crucial for sampling efficiency. Note that the residual error is the sum of the errors in each dimension. Therefore, the simple choice $\mbox{\boldmath${M}$}=\mbox{\boldmath${I}$}$, which assigns equal weights to all dimensions, forces the step size $\epsilon$ to be determined by the dimension with the smallest variance: a smaller variance in a given direction generally corresponds to a higher gradient in that direction, which increases the sensitivity to the value of the momentum. The performance can be significantly improved by adjusting the momentum variables so as to maintain a similar level of error in each dimension. This can be achieved by selecting $\mbox{\boldmath${M}$}$ as a diagonal matrix consisting of the inverses of the variances in each dimension. An even better strategy is to set $\mbox{\boldmath${M}$}$ to the inverse of the full covariance matrix $\mbox{\boldmath${\Sigma}$}$, which not only incorporates the variance information but also captures the linear correlations between the parameters. However, for complex distributions, analytical calculation of the covariance matrix is not possible, and hence an estimate is required. In addition to these issues, another essential but non-trivial problem is the selection of the temperature ladder for the PT scheme, as it has a substantial effect on the overall exploration power of the sampler. Since no unique set of temperature levels exists that works well for all distributions, the temperatures should be adjusted for improved sampling performance.
To address the issues described above, we designed an adaptive sampling
mechanism that consists of an initialization/adaptation stage as part of the
burn-in process for learning the temperature levels as well as the covariance
matrices and the step sizes (for fixed number of steps $\Delta$) associated
with each temperature level from the measurement. As illustrated in Fig. 4, we
initialize the sampling process using SS and iteratively learn the temperature
levels in Stage I through the mechanism described in Section V-C1. Once a
certain convergence criterion is satisfied, we fix the temperatures and start
generating samples for the covariance estimation in Stage II. After having the
covariance estimates for each temperature level, in Stage III, we switch to
HMC approach, set
$\mbox{\boldmath${M}$}=\mbox{\boldmath${\hat{\Sigma}}$}_{SS}^{-1}$ and learn
the step sizes associated with each temperature level in a sequential manner
as described in Section V-C2. After convergence, we fix the step sizes and
start the actual sampling process for inference.
The proposed sampling mechanism combines the SS and HMC approaches, yielding a hybrid model. Our main motivations for initializing the process with SS and then switching to HMC are as follows. First, we only need to set the widths of the hyper-rectangles for SS, which, as our experiments indicate, does not have a crucial effect on the sampling performance and can be fixed at initialization; this creates a controlled sampling period for more accurate temperature adjustment. Second, SS achieves the fastest convergence rate compared to conventional MH and to HMC with an identity weight matrix, as we illustrate in Section VI. Finally, HMC achieves outstanding sampling efficiency after convergence if the weighting matrix is well adjusted to capture the correlations between the different parameters. The idea is therefore to combine the convergence speed of SS with the sampling efficiency of HMC to create a more powerful sampling method. In the following sections, we describe the adaptive models employed in Stages I and III for temperature level and step size adjustments.
TABLE II: Autocorrelation time (ACT) of the samplers for each model parameter.
Lowest value in each column is represented in bold.
| Samplers | $\varepsilon_{1}$ | $\varepsilon_{2}$ | $\varepsilon_{3}$ | $\varepsilon_{4}$ | $\varepsilon_{5}$ | $\sigma_{1}$ | $\sigma_{2}$ | $\sigma_{3}$ | $\sigma_{4}$ | $\sigma_{5}$ | $d_{0}$ | $d_{1}$ | $d_{2}$ | $d_{3}$ | $d_{4}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MH | 1272 | 653 | 1092 | 1416 | 2702 | 2521 | 1628 | 1433 | 3250 | 284 | 310 | 1161 | 597 | 1363 | 2406 |
| SS | 1026 | 516 | 965 | 757 | 106 | 148 | 628 | 135 | 30 | 60 | 164 | 1018 | 484 | 957 | 691 |
| HMC-I | 510 | 409 | 750 | 1279 | 728 | 704 | 488 | 639 | 1007 | 1262 | 238 | 502 | 403 | 750 | 1295 |
| HMC-$\mbox{\boldmath${\hat{\Sigma}}$}_{SS}$ | **56** | **63** | **35** | **51** | **69** | **34** | **28** | **64** | **25** | **28** | **87** | **50** | **62** | **34** | **55** |
Figure 6: Autocorrelation functions (ACF) of the samplers for the first layer
parameters.
#### V-C1 Adaptive Temperature Selection
For PT, the selection of the temperature ladder $T_{1}<\ldots<T_{L}$ has a substantial effect on the overall sampling performance. The general practice is to set $T_{1}=1$, so as to sample from the original target distribution, and $T_{L}$ sufficiently high to explore all the modes. There exist different points of view on how to optimize the structure of the temperature ladder. In this work, we assume that the total number of temperature levels is fixed, determined by the available computational budget. It has been shown in the literature that a reasonable approach is to set the temperature spacing such that the swap ratios are approximately equal for adjacent levels [31]. Following this approach, we provide an adaptive temperature selection scheme that iteratively adjusts the temperature levels until a certain convergence criterion is met.
Consider an intermediate temperature ladder configuration $\{T_{\ell}^{(j)}\}_{\ell=1}^{L}$ at the $j^{th}$ MCMC iteration. The effect of any change to $\{T_{\ell}^{(j)}\}_{\ell=1}^{L}$ can only be observed in subsequent iterations. Therefore, we update the temperatures after every $J_{T}$ iterations based on the empirical swap ratio $s_{\ell}^{(j)}$, calculated as the ratio of the total accepted swaps to the total proposed swaps between chains $\ell$ and $\ell+1$ during iterations $(j-J_{T}+1)$ through $j$. In order to maintain the ordering $T_{1}<\ldots<T_{L}$ and level out scale differences, we perform the updates on the logarithm of the temperature gaps as
$T_{\Delta_{\ell}}^{(j+1)}=T_{\Delta_{\ell}}^{(j)}-e_{\ell}^{(j)}K_{T}\mathbbm{1}_{J_{T}}(j)$
(11)
where
$T_{\Delta_{\ell}}^{(j)}=\log\big{(}T_{\ell+1}^{(j)}-T_{\ell}^{(j)}\big{)}$,
$e_{\ell}^{(j)}=s_{\ell+1}^{(j)}-s_{\ell}^{(j)}$, $K_{T}$ is the controller
gain, and $\mathbbm{1}_{J_{T}}(j)$ refers to the indicator function defined as
$\mathbbm{1}_{J_{T}}(j)=1$ if $j\bmod J_{T}=0$ and $\mathbbm{1}_{J_{T}}(j)=0$
otherwise. The initial configuration is generally selected as $L$
geometrically spaced levels between $T_{1}$ and $T_{L}$. Here, we note that any adjustment of the temperature levels during the sampling process violates detailed balance. Therefore, we finalize the temperature updates when the variation within the last $N_{T}$ updates is less than 10% simultaneously for all levels:
$\dfrac{\sqrt{\frac{1}{N_{T}-1}\sum_{i=0}^{N_{T}-1}\big{(}T_{\ell}^{(j-iJ_{T})}-\bar{T}_{\ell}\big{)}^{2}}}{\bar{T}_{\ell}}\leq
0.1,$ (12)
where
$\bar{T}_{\ell}=\frac{1}{N_{T}}\sum_{i=0}^{N_{T}-1}T_{\ell}^{(j-iJ_{T})}$. We
then fix the temperatures and initiate Stage II for covariance estimation.
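One pass of the gap update (11) can be sketched as follows (the function name is illustrative; the pinning of $T_{1}$ and $T_{L}$ visible in Fig. 7 and the convergence check (12) are not shown here):

```python
import math

def update_temperatures(temps, swap_ratios, K_T):
    """
    One application of (11): work on log(T_{l+1} - T_l) so the ordering
    T_1 < ... < T_L is preserved. With e_l = s_{l+1} - s_l, a level whose
    swap ratio lags the one above gets its gap shrunk, raising its ratio.
    """
    log_gaps = [math.log(temps[l + 1] - temps[l]) for l in range(len(temps) - 1)]
    for l in range(len(temps) - 2):          # e_l needs both s_l and s_{l+1}
        e = swap_ratios[l + 1] - swap_ratios[l]
        log_gaps[l] -= e * K_T
    new = [temps[0]]                          # rebuild the ladder from T_1
    for g in log_gaps:
        new.append(new[-1] + math.exp(g))
    return new
```

Because the controller acts on log gaps, the updated ladder is always strictly increasing regardless of the gain $K_{T}$.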
#### V-C2 Adaptive Step Size Selection
In this section, we provide an adaptive model to be used in Stage III, by
which we periodically update the step sizes to achieve a predetermined
acceptance ratio $\xi$ based on the current empirical acceptance ratios.
Similar to temperature adjustments, we update the step sizes after every
$J_{\epsilon}$ iterations based on the empirical acceptance ratio
$\hat{\xi}_{\ell}^{(j)}$ measured by the ratio of the total accepted proposals
between iterations $(j-J_{\epsilon}+1)$ and $j$ to the duration
$J_{\epsilon}$. We employ a proportional controller approach and use the
difference between the target and empirically measured acceptance ratios,
i.e., $e_{\ell}^{(j)}=\xi-\hat{\xi}_{\ell}^{(j)}$, as the model feedback.
Hence, the adaptive model is described by
$\epsilon_{\ell}^{(j+1)}=\exp\big{(}\log(\epsilon_{\ell}^{(j)})-e_{\ell}^{(j)}K_{\epsilon}\mathbbm{1}_{J_{\epsilon}}(j)\big{)},$
(13)
where we perform the updates on the logarithm of parameters to level out scale
differences and use the same constant gain $K_{\epsilon}$ for all temperature
levels. We employ the same convergence criterion defined in (12) and fix the step sizes before initiating Stage IV. As a result, no adaptation is performed and all parameters are fixed during the actual sampling period, which maintains Markovianity and detailed balance.
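The proportional controller (13) reduces to a one-line update on the log step size (function name illustrative): an acceptance ratio below the target shrinks $\epsilon$, and one above the target grows it.

```python
import math

def update_step_size(eps, target_ratio, empirical_ratio, K_eps):
    """
    Update (13): with feedback e = xi - xi_hat, the new step size is
    exp(log(eps) - e * K_eps), so too few acceptances shrink eps and
    too many acceptances grow it.
    """
    e = target_ratio - empirical_ratio
    return math.exp(math.log(eps) - e * K_eps)
```

Operating on $\log\epsilon$ keeps the step size positive and makes the update scale-free across temperature levels.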
## VI Simulations
In the first part of this section, we justify the reasoning behind the construction of the proposed hybrid sampling mechanism and demonstrate its superior sampling efficiency. In the second part, we investigate the recovery of the multilayer model parameters from synthetic measurements simulating human tissues. Throughout this section, we use MH to denote the Metropolis-Hastings sampling scheme. Since the parameter space is bounded, we use independent Beta distributions in each dimension as the proposal distribution; we locate the mode at the current sample value and employ the same adaptation model given in (13) for the concentration of the proposal distributions, so as to achieve a predetermined acceptance rate. As before, SS and HMC represent the Slice Sampling and Hamiltonian Monte Carlo approaches described in Sections V-A and V-B. More specifically, we use HMC-I and HMC-$\mbox{\boldmath${\hat{\Sigma}}$}_{SS}$ to denote the $\mbox{\boldmath${M}$}=\mbox{\boldmath${I}$}$ and $\mbox{\boldmath${M}$}=\mbox{\boldmath${\hat{\Sigma}}$}_{SS}^{-1}$ cases, respectively. In other words, HMC-$\mbox{\boldmath${\hat{\Sigma}}$}_{SS}$ corresponds to Stages III and IV of the proposed hybrid sampler.
For the experiments, the parameters of prior distributions were selected as
$\sigma_{\gamma}^{2}=10$, $\alpha_{v}=10^{-3}$, and $\beta_{v}=10^{-3}$, which
constitute non-informative priors. The subspace matrix ${A}$ for the
transmitted waveform was constructed by the first 8 length-$23$ DPS sequences,
which span the frequency range of $0$ to $16$ GHz. The lower and upper bounds
of the multilayer model parameters were specified as
$\varepsilon_{\text{min}}=2$, $\varepsilon_{\text{max}}=100$,
$\sigma_{\text{min}}=5\times 10^{-3}$, $\sigma_{\text{max}}=3$,
$d_{\text{min}}=10^{-3}$ and $d_{\text{max}}=3\times 10^{-2}$. The associated
prior distributions were selected such that the mode $\lambda_{i}$ is located
at the normalized typical value of the corresponding model parameter with
concentration $\kappa_{i}=100$, except for the last layer parameters, which
were assigned flat priors with $\kappa_{i}=0$. For parallel tempering, a total
of $L=16$ temperature levels, initialized at geometrically spaced points
between $T_{1}=1$ and $T_{16}=10^{5}$, were employed. We performed
temperature updates after every $J_{T}=200$ iterations with $K_{T}=10$ and
$N_{T}=10$. For HMC, the step sizes were initialized at $10^{-2}$ with
$\xi=0.85$, $J_{\epsilon}=100$, and $K_{\varepsilon}=0.5$.
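The geometric ladder initialization and the standard swap acceptance rule for adjacent tempered chains can be sketched as follows (the swap rule is the usual Metropolis exchange criterion for parallel tempering, not quoted from the paper):

```python
import numpy as np

# Geometric initialization of the L = 16 temperature ladder.
L = 16
temps = np.geomspace(1.0, 1e5, num=L)   # T_1 = 1, ..., T_16 = 1e5

def swap_accept_prob(logp_i, logp_j, T_i, T_j):
    """Metropolis acceptance probability for swapping the states of two
    chains at temperatures T_i and T_j, given their (T = 1) unnormalized
    log-posterior values logp_i and logp_j."""
    log_alpha = (1.0 / T_i - 1.0 / T_j) * (logp_j - logp_i)
    return 1.0 if log_alpha >= 0.0 else float(np.exp(log_alpha))
```

A swap that moves the higher-posterior state to the colder chain is always accepted; the reverse move is accepted with exponentially decaying probability.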
Figure 7: Evolution of swap ratios (top left) and temperature levels (bottom
left) using the adaptive temperature adjustment model with $L=16$ levels. The
lowest and highest temperature levels are fixed at $T_{1}=1$ and
$T_{16}=10^{5}$. Evolution of acceptance ratios (top right) and step sizes
(bottom right) using the adaptive step size adjustment model with target
acceptance ratio $\xi=0.85$.
Figure 8: Trace plots for the parameters $\varepsilon_{1}$ (top), $\sigma_{1}$
(middle), and $d_{1}$ (bottom) at all stages of the proposed sampling
procedure.
### VI-A Convergence Rate Analysis
One of the main reasons for using SS within the first two stages of our
sampling mechanism is its faster convergence rate compared to MH and HMC-I. In
this section, we establish this by comparing the empirically measured
convergence rates. Since no covariance estimate is available initially, we do
not consider HMC-$\mbox{\boldmath${\hat{\Sigma}}$}_{SS}$ for comparison. In
order to do so, we first consider the iterative graphical monitoring approach
proposed by Brooks and Gelman in [32]. Convergence is measured via the
Multivariate Potential Scale Reduction Factor (MPSRF) as defined in [32],
which is calculated over multiple simulations running simultaneously and
independently. Convergence is declared when the MPSRF is close to 1, a typical
threshold being 1.2 as suggested in [32].
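A minimal numpy sketch of the MPSRF (our own implementation of the max-eigenvalue form from [32], $\hat{R}^{p}=\frac{n-1}{n}+\frac{m+1}{m}\lambda_{\max}(W^{-1}B/n)$; `chains` holds the retained draws of the $T_{1}=1$ chains from the independent simulations):

```python
import numpy as np

def mpsrf(chains):
    """Multivariate Potential Scale Reduction Factor (Brooks & Gelman).

    chains: array of shape (m, n, p) -- m independent chains,
    n retained draws each, p parameters.
    """
    m, n, p = chains.shape
    chain_means = chains.mean(axis=1)          # (m, p)
    grand_mean = chain_means.mean(axis=0)      # (p,)

    # Within-chain covariance W: average of per-chain sample covariances.
    W = np.zeros((p, p))
    for c in range(m):
        d = chains[c] - chain_means[c]
        W += d.T @ d / (n - 1)
    W /= m

    # Between-chain covariance B/n from the chain means.
    d = chain_means - grand_mean
    B_over_n = d.T @ d / (m - 1)

    # Largest eigenvalue of W^{-1} (B/n).
    lam = np.max(np.real(np.linalg.eigvals(np.linalg.solve(W, B_over_n))))
    return (n - 1) / n + (m + 1) / m * lam
```

Well-mixed chains give a value near 1; a chain stuck in a different region inflates the between-chain term and pushes the MPSRF well above the 1.2 threshold.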
To produce the MPSRF curves, we run 8 different simulations on the same
measurement and calculate the MPSRF value after every 100 iterations by using
only the second half of the generated samples, where the first half is
discarded as part of the burn-in process. Note that we employ a PT scheme and
have multiple chains associated with each of these 8 simulations. Since we are
only interested in the samples corresponding to $T_{1}=1$, we calculate the
MPSRF curves on the first chains. In order to demonstrate the effect of PT, we
also considered the scenario in which we do not employ any tempering and run a
single chain at $T=1$ for each simulation. The resulting curves are
illustrated in Fig. 5.
Our first observation is that PT significantly improves the convergence rates
for all samplers. Without PT, the samplers quickly get stuck in a locally
optimal region depending on their initialization, and the MPSRF fails to
decrease within the simulation duration. On the other hand, the MPSRF curves
for the samplers with PT quickly converge to 1 for both SS and HMC-I. Even
though convergence is not as fast for MH, a significant improvement still
exists. This deficiency mainly results from the random walk behaviour of MH,
which is inevitable in most complex multivariate distributions without
accurate estimation of the curvature information. Comparing SS and HMC-I,
although both get close to 1 very rapidly, it takes around 6000 and 19000
iterations, respectively, for the MPSRF to fall below the convergence
threshold of 1.2 for SS and HMC-I. This result provides empirical evidence for
the fast convergence rate of SS.
As an additional convergence analysis, we also compared the evolution of the
posterior value as the simulations progress. We present the average of the
logarithm of the unnormalized posterior value over 8 independent simulations
in Fig. 5. As before, the performance improvement obtained via the PT scheme
is clearly visible for all samplers. Although both SS and HMC-I reach the
stationary distribution within the first $2\times 10^{3}$ iterations, SS
considerably outperforms HMC-I in terms of the number of iterations needed for
convergence, providing further empirical support for selecting SS as the
sampling method employed within the first two stages of the proposed sampling
mechanism.
Figure 9: Recovery of relative permittivity profile (left), conductivity
profile (middle), and transmitted pulse (right) for deflated (top) and
inflated (bottom) lung scenarios at 40 dB SNR.
Figure 10: Actual conditional posterior densities and estimated marginal
posterior densities of $\varepsilon_{5}$ and $\sigma_{5}$ for deflated and
inflated lung scenarios.
### VI-B Comparison of Sampling Efficiency
After convergence to the stationary distribution, the efficiency of a sampler
can be measured based on the correlation of the generated samples. In general,
consecutive samples generated within an MCMC scheme are correlated. Obviously,
a lower correlation is more desirable since it increases the number of
effective samples, which is defined as the ratio of the total number of
generated samples to the autocorrelation time (ACT). Therefore, the ACT of a
sampler provides an objective metric for comparing the efficiency of different
samplers. It can be calculated by integrating the autocorrelation function
(ACF), which is estimated over the chain of generated samples. The details of
ACT and ACF calculations are provided in Section V of the supplemental
material.
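For concreteness, the ACT can be estimated as sketched below (our own implementation: an FFT-based ACF estimate plus an automatic truncation window with factor `c`; the paper's exact procedure is given in its supplemental material):

```python
import numpy as np

def autocorr_time(x, c=5.0):
    """Integrated autocorrelation time of a 1-D chain of samples.

    Computes the normalized ACF via FFT (Wiener-Khinchin) and integrates
    it up to a self-consistent window M >= c * tau(M) (Sokal's rule).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    # Zero-padded FFT gives the linear (non-circular) autocorrelation.
    f = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf /= acf[0]                              # acf[0] == 1
    # tau(M) = 1 + 2 * sum_{k=1}^{M} acf[k]
    taus = 1.0 + 2.0 * np.cumsum(acf[1:])
    for M in range(1, n):
        if M >= c * taus[M - 1]:
            return taus[M - 1]
    return taus[-1]
```

The effective sample size of a chain of length $n$ is then simply $n/\tau$.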
In Fig. 6, we illustrate the estimated ACFs over the chains with $T_{1}=1$ for
the first layer parameters $\varepsilon_{1}$, $\sigma_{1}$, and $d_{1}$. The
ACFs were calculated on the converged portion of the chains which corresponds
to Stage IV of our sampling scheme. The random walk behaviour of MH can be
clearly observed by noting the existence of significant correlations even
after long lags. Even though SS achieves better sampling performance for
$\sigma_{1}$ compared to HMC-I, the two perform very similarly for
$\varepsilon_{1}$ and $d_{1}$, and still exhibit considerable correlations. On
the other hand, HMC-$\mbox{\boldmath${\hat{\Sigma}}$}_{SS}$ dramatically
outperforms the others, with correlations vanishing rapidly. This indicates
that employing a warm-up stage for covariance estimation considerably improves
the sampling efficiency.
In order to have an analytical measure, we also compare the ACTs associated
with each model parameter in Table II. As can be observed,
HMC-$\mbox{\boldmath${\hat{\Sigma}}$}_{SS}$ dramatically reduces the number of
samples required to generate a new independent sample for all model
parameters. In addition, whereas the ACTs fluctuate strongly across parameters
for the other samplers, HMC-$\mbox{\boldmath${\hat{\Sigma}}$}_{SS}$ provides a
consistently low ACT for all parameters. This is a natural result, since the
weighting matrix ${M}$ successfully captures the linear correlations between
different parameters. Overall, the obtained results demonstrate the superior
sampling efficiency of HMC-$\mbox{\boldmath${\hat{\Sigma}}$}_{SS}$.
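The effect of the weighting matrix can be made concrete: choosing $\mbox{\boldmath${M}$}=\mbox{\boldmath${\hat{\Sigma}}$}_{SS}^{-1}$ means the HMC momenta are drawn from $\mathcal{N}(\mbox{\boldmath${0}$},\mbox{\boldmath${M}$})$, matching the kinetic-energy metric to the estimated posterior covariance (illustrative sketch, not the paper's implementation):

```python
import numpy as np

def draw_momentum(sigma_hat, rng):
    """Draw an HMC momentum p ~ N(0, M) with mass matrix M = sigma_hat^{-1}.

    Directions in which the posterior is wide (large variance in sigma_hat)
    get small momentum variance, which evens out the effective step size
    across correlated parameters.
    """
    M = np.linalg.inv(sigma_hat)
    return rng.multivariate_normal(np.zeros(M.shape[0]), M)
```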
Figure 11: Estimation of last layer relative permittivity and conductivity
along with $95\%$ credibility intervals corresponding to noise-free (first and
third from left) and noisy (second and fourth from left) measurements at 40dB
SNR.
### VI-C Validation of Self-Adaptation
The adaptive models for temperature level and step size selection enable us to
achieve improved sampling efficiency. To illustrate the adaptation process, in
Fig. 7 we present the evolution of the temperature levels and step sizes along
with the associated swap and acceptance ratios. The lowest and highest
temperature levels were fixed at $T_{1}=1$ and $T_{16}=10^{5}$, and the
remaining levels were initialized at geometrically spaced points between
$10^{2}$ and $10^{3}$ to better illustrate the evolution process. We
initialize the sampling process with temperature adaptation using the SS
approach in Stage I. After the convergence criterion is satisfied, which is
marked by the vertical dashed line located just after iteration 6000, we fix
the temperature levels and initiate Stage II. As can be seen from the top left
plot, the swap ratios between adjacent temperature levels successfully
converge to the same level, around 0.2. Once Stage II is finalized and an
estimate of the covariance matrix is obtained, we initiate the step size
adaptation with target acceptance ratio $\xi=0.85$ for all temperature levels,
as shown in the right plots of Fig. 7. The step sizes were all initialized at
$10^{-2}$, which is small enough to yield roughly 100% acceptance at each
temperature level. As the evolution of the acceptance ratios indicates, the
step sizes were successfully updated to achieve the desired acceptance ratio
at all temperature levels until the convergence criterion was satisfied just
before iteration 16000.
We also provide example trace plots for the parameters $\varepsilon_{1}$,
$\sigma_{1}$, and $d_{1}$, corresponding to each stage, in Fig. 8 to visually
demonstrate the effect of the adaptation stages on the sampling performance.
During the first half of Stage I, we observe a strong random walk behaviour,
especially for $\varepsilon_{1}$ and $d_{1}$, which is due to inadequate
initialization of the temperature levels. Once the temperatures are calibrated
and the sampler converges to the stationary distribution, the random walk
behaviour diminishes appreciably. Still, the generated sample traces exhibit
noticeable correlations in Stage II, even though the sampling performance is
visibly better compared to Stage I. In this stage, the sampling efficiency is
limited by the performance of the SS approach. After switching to HMC in Stage
III, we again observe random walk behaviour during the first few hundred
iterations due to inadequate selection of the step sizes. However, as the step
size adaptation progresses, HMC-$\mbox{\boldmath${\hat{\Sigma}}$}_{SS}$
rapidly improves the sampling efficiency and starts producing samples with
significantly reduced correlation.
### VI-D Recovery Results on Synthetic Measurements
In this part of the experiments, we assess the recovery performance of the
proposed methods on synthetic measurements. The measurement sequences are
created using the circular convolution model given in (2). The reflectivity
profiles are calculated using the 1D multilayer propagation model given in
(1). We considered a multilayer structure with the following 5 layers: skin
(0.3 cm), fat (1.25 cm), muscle (1 cm), bone (0.75 cm), and lung
(semi-infinite) to simulate human tissues in the thoracic cavity. The typical
permittivity and conductivity properties of each tissue were obtained from
[33]. The transmitted waveform used in the experiments is the first derivative
of a Gaussian pulse with center frequency $f_{c}=4$ GHz, which is nearly
bandlimited with a bandwidth of $4$ GHz.
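As an illustration, a first-derivative-of-Gaussian pulse with its spectral peak at $f_c$ can be generated as below; the width relation $\sigma=1/(2\pi f_c)$ follows from maximizing the derivative's magnitude spectrum $|f|\,e^{-2\pi^{2}\sigma^{2}f^{2}}$. The paper does not state its exact pulse parameterization, so treat this as a sketch (sampling rate `fs` and length `n` are our choices):

```python
import numpy as np

def gaussian_monocycle(fc, fs, n):
    """First derivative of a Gaussian pulse whose spectral peak sits at fc.

    fc: peak (center) frequency in Hz; fs: sampling rate in Hz;
    n: number of samples.  Returns the pulse with peak amplitude 1.
    """
    sigma = 1.0 / (2.0 * np.pi * fc)           # peak of |f| exp(-2 pi^2 s^2 f^2)
    t = (np.arange(n) - n // 2) / fs           # time axis centered at 0
    pulse = -t / sigma**2 * np.exp(-t**2 / (2.0 * sigma**2))
    return pulse / np.max(np.abs(pulse))

# e.g. fc = 4e9 (4 GHz), fs = 64e9, n = 1024
```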
As an illustrative example, in Fig. 9 we present the recovery results for the
relative permittivity and conductivity profiles as well as the transmitted
waveform using the measurement at $40$ dB SNR. We note that such a high SNR is
required for observing meaningful reflections from deeper tissues. We included
both deflated and inflated lung scenarios to investigate whether it is
possible to detect variations in the last layer parameters. We used the sample
mean of the generated samples as an approximation to the MMSE estimate, while
for the MAP estimate we employed gradient-based off-the-shelf local search
methods initialized at the sample that achieves the highest posterior value.
The recovered profiles indicate that estimating the relative permittivity is
easier than estimating the conductivity. Moreover, the thickness estimation is
almost perfect for all layers. This is mainly due to the shape of the
posterior distribution. In order to justify this, in Fig. 10 we illustrate the
true conditional 2D posterior distributions of $\varepsilon_{5}$ and
$\sigma_{5}$, where all other parameters are fixed at their true values, as
well as the corresponding estimated 2D marginal distributions. The results
point out that the variance along the $\sigma_{5}$ direction is considerably
higher, making successful recovery more difficult. Nevertheless, we also
observe from the conditional distributions that the modes of the posterior
distribution are clearly separated for the deflated and inflated lung
scenarios, which is successfully captured by the estimated marginal
distributions as well. This indicates the possibility of detecting variations
in deeper tissue layers in the blind setting, given sufficiently high SNR;
moreover, the transmitted waveform is almost perfectly recovered in both
cases. As a final note, we did not observe a remarkable difference between the
MMSE and MAP estimates, which can be explained by the nearly symmetric
structure of the estimated marginals.
Figure 12: Comparison of the CRLB and the MAP estimator variance. Flat priors
were used to mimic ML estimator. The estimator variance is empirically
calculated based on 100 noisy observations. Top figures illustrate the
Normalized RMSEs as a function of SNR for all model parameters. Bottom figures
represent the Normalized RMSEs as a function of the last layer parameter
values at 40 dB SNR.
### VI-E Estimation with Credibility Intervals
We now consider two different scenarios: in the first, we varied the last
layer relative permittivity between 5 and 70, and in the second, we varied the
last layer conductivity between 0.125 and 2, while keeping all other
parameters constant at their typical values. Our goal is to investigate the
tracking performance of the estimators. For these experiments, on top of the
MMSE and MAP point estimates, we also compute the 95% credibility intervals to
represent the uncertainty of the estimates. We considered both noise-free and
noisy (40 dB SNR) measurement cases to see how the estimates and the
associated credibility intervals change. The noise-free measurement was still
handled within the noisy model, i.e., it simply represents the fortunate case
in which the noise components happen to be zero at all indices.
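Both the point estimate and the interval come directly from the post-convergence samples; a minimal sketch (the function name is ours):

```python
import numpy as np

def summarize(samples):
    """MMSE point estimate (posterior mean) and central 95% credibility
    interval from the post-convergence samples of one parameter."""
    mmse = samples.mean()
    lo, hi = np.percentile(samples, [2.5, 97.5])
    return mmse, (lo, hi)
```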
We demonstrate the recovery results along with the credibility intervals in
Fig. 11. Considering the noise-free scenarios, the MAP estimate perfectly
recovers the actual parameter values. This is an expected result since the
prior distributions were selected in a way not to disturb the mode of the
posterior distribution. For noisy measurements, the MAP estimates fluctuate
around the true values due to the disturbed mode of the posterior. The MMSE
estimates appear to consistently overestimate in all cases, especially for
larger values of $\varepsilon_{5}$ and $\sigma_{5}$. This indicates that the
posterior distributions are skewed towards larger parameter values. Hence, the
MAP estimator might be a more favorable choice than MMSE. The credibility
intervals provide useful information about the shape of the distributions. It
can be observed that the posterior becomes more peaked around the true values
for smaller values of $\varepsilon_{5}$ and $\sigma_{5}$, which was also
observed in Fig. 10. Hence, one might argue that it is relatively easier to
estimate smaller parameter values, especially for the relative permittivity.
Our final observation is that, for noisy measurements, the actual parameter
values always lie within the 95% credibility intervals at 40 dB SNR.
### VI-F Theoretical Bounds on the Estimator Performance
In order to assess the estimation performance, in this section we derive the
Cramér-Rao Lower Bounds (CRLB) for unbiased estimators and present the best
achievable performance for estimating tissue properties in the blind setting.
We assume that non-informative flat prior distributions are employed for the
multilayer model parameters (with $\kappa_{i}=0$) and that the variance of the
pulse subspace $\sigma_{\gamma}^{2}$ is sufficiently high. In this setting,
the problem can be considered within the frequentist framework and the unknown
parameters can be treated as deterministic quantities. Let us collect all the
parameters except the noise variance in
$\boldsymbol{\phi}=(\boldsymbol{\theta},\boldsymbol{\gamma})$ and denote the
noise-free signal as
$\boldsymbol{s}=\text{diag}(\boldsymbol{F}_{Q}\boldsymbol{h})\boldsymbol{x}$,
which is then corrupted by white Gaussian noise $\boldsymbol{v}$. For a given
noise variance $\sigma_{v}^{2}$, the log-likelihood is expressed as
$\log p(\boldsymbol{y}|\boldsymbol{\phi})=-N\log(\pi\sigma_{v}^{2})-\dfrac{1}{\sigma_{v}^{2}}\sum_{n=0}^{N-1}|y_{n}-s_{n}|^{2},$
(14)
where the partial second derivatives are given by
$\dfrac{\partial^{2}\log p(\boldsymbol{y}|\boldsymbol{\phi})}{\partial\phi_{i}\partial\phi_{j}}=\dfrac{2}{\sigma_{v}^{2}}\sum_{n=0}^{N-1}\Re\bigg\{(y_{n}-s_{n})^{*}\dfrac{\partial^{2}s_{n}}{\partial\phi_{i}\partial\phi_{j}}-\dfrac{\partial s_{n}^{*}}{\partial\phi_{j}}\dfrac{\partial s_{n}}{\partial\phi_{i}}\bigg\}.$ (15)
For the multivariate case, the Fisher information matrix
$\mathcal{I}(\boldsymbol{\phi})$ has the following form:
$[\mathcal{I}(\boldsymbol{\phi})]_{i,j}=-E\bigg[\dfrac{\partial^{2}\log p(\boldsymbol{y}|\boldsymbol{\phi})}{\partial\phi_{i}\partial\phi_{j}}\bigg]=\dfrac{2}{\sigma_{v}^{2}}\sum_{n=0}^{N-1}\Re\bigg\{\dfrac{\partial s_{n}^{*}}{\partial\phi_{j}}\dfrac{\partial s_{n}}{\partial\phi_{i}}\bigg\},$ (16)
since $E[y_{n}]=s_{n}$. Here, $[\cdot]_{i,j}$ denotes the element at the
$i^{th}$ row and $j^{th}$ column. Therefore, the covariance matrix
$\boldsymbol{C}_{\hat{\phi}}$ of any unbiased estimator
$\hat{\boldsymbol{\phi}}(\boldsymbol{y})$ satisfies
$\boldsymbol{C}_{\hat{\phi}}-\mathcal{I}^{-1}(\boldsymbol{\phi})\succcurlyeq 0$, i.e.,
$\text{Var}(\hat{\phi}_{i})=[\boldsymbol{C}_{\hat{\phi}}]_{i,i}\geq[\mathcal{I}^{-1}(\boldsymbol{\phi})]_{i,i}.$
(17)
The derivations for partial derivatives in (16) are provided in Section IV of
the supplementary material. Although the recursive structure of the
derivatives prevents obtaining analytical expressions, we can still calculate
the CRLBs numerically.
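Since (16) only requires the first-order sensitivities $\partial s_{n}/\partial\phi_{i}$, the bounds in (17) can be evaluated numerically with a finite-difference Jacobian (illustrative sketch; the paper evaluates the derivatives recursively as detailed in its supplementary material):

```python
import numpy as np

def crlb(s_fn, phi, sigma_v2, eps=1e-6):
    """Numerical CRLBs per (16)-(17): finite-difference Jacobian of the
    noise-free signal s(phi), then I = (2 / sigma_v^2) Re{J^H J}.

    s_fn: callable mapping a parameter vector to the complex signal s.
    Returns the diagonal of I^{-1}, i.e. the per-parameter variance bounds.
    """
    s0 = np.asarray(s_fn(phi), dtype=complex)
    J = np.zeros((len(s0), len(phi)), dtype=complex)
    for i in range(len(phi)):
        dp = np.array(phi, dtype=float)
        dp[i] += eps
        J[:, i] = (np.asarray(s_fn(dp), dtype=complex) - s0) / eps
    fisher = (2.0 / sigma_v2) * np.real(np.conj(J).T @ J)
    return np.diag(np.linalg.inv(fisher))
```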
In the upper plots of Fig. 12, we present the minimum achievable Normalized
Root Mean Square Error (N-RMSE) as a function of SNR for each of the
multilayer model parameters. We also include the empirically estimated N-RMSE
of our MAP estimator, which uses flat priors to mimic the Maximum-Likelihood
(ML) estimator. The empirical error rates were estimated over 100 different
noisy measurements generated with the same model parameters. The first and
most essential observation is that the MAP estimator achieves the CRLB over
the given range of SNRs for all parameters. Secondly, the error rates are
consistently higher for deeper tissues, which is an expected result due to the
considerable signal attenuation. Comparing the estimation of different sets of
parameters, we observe that the lowest achievable error rates are for the
thicknesses, followed by the relative permittivities and the conductivities.
Therefore, the posterior is much more sensitive to changes in the layer
thicknesses than to the other properties. With these results, we are also able
to quantify the expected recovery performance. For example, even at 40 dB SNR,
the minimum achievable N-RMSE is around 15% for the lung permittivity and 36%
for the lung conductivity.
In the lower plots of Fig. 12, we present the lower bounds as well as the
empirical error rates of the MAP estimator for different values of
$\varepsilon_{5}$ and $\sigma_{5}$ at 40 dB SNR. The results show that the MAP
estimator achieves the lower bounds even for different parameter values. One
important observation is that when $\varepsilon_{5}\approx\varepsilon_{4}$,
the CRLB for $\varepsilon_{5}$ increases significantly. The main reason for
this phenomenon can be explained as follows. When the relative permittivities
of adjacent layers are indistinguishably close, the magnitude of the
reflection coefficient at that interface becomes very small, and hence the
actual 5-layer model behaves like a 4-layer structure, causing
overparametrization. This result indirectly informs us about the recovery
performance when using more layers than the underlying structure actually has.
Unlike the relative permittivity, we do not observe the same phenomenon for
the conductivities, which is most likely due to the fact that a conductivity
difference has only a minor effect on the magnitude of the reflection
coefficients.
## VII Concluding Remarks
In this paper, we studied the reconstruction of one-dimensional multilayer
tissue profiles from ultrawideband radar measurements. We assumed a blind
setting and jointly estimated both the transmitted radar waveform and the
multilayer model parameters. We approached the problem from a Bayesian
perspective and presented a comprehensive MCMC method to perform inference on
the highly complex posterior distribution. We employed parallel tempering to
resolve the local optimality issue, estimated covariance of the posterior to
capture linear correlations between model parameters, and incorporated
adaptation methods to adjust the sampler parameters. As a result, the proposed
sampling mechanism achieved superior sampling efficiency compared to
conventional sampling schemes. Simulations on the synthetic radar measurements
revealed successful recovery results. Comparisons with the derived theoretical
bounds showed that the proposed estimator achieves the minimum possible error
rate. More importantly, the estimated marginal posterior distributions
revealed promising results indicating the feasibility of tracking/detecting
variations in deeper tissue layers. Overall, although the one-dimensional
setting investigated in this work is a simplified version of reality, it
provides useful insights into the feasibility and challenges of the problem.
As future work, we aim to extend the presented recovery methods to a
three-dimensional wave propagation model, which has been rigorously studied in
[34, 35, 36] for ultrawideband radar systems. In addition, we aim to
incorporate the frequency dependence of the model parameters through Debye
relaxation models [37] to improve modelling accuracy.
## References
* [1] A. Pantelopoulos and N. G. Bourbakis, “A survey on wearable sensor-based systems for health monitoring and prognosis,” _IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)_ , vol. 40, no. 1, pp. 1–12, 2010.
* [2] S. Majumder, T. Mondal, and M. Deen, “Wearable sensors for remote health monitoring,” _Sensors_ , vol. 17, no. 12, p. 130, Jan 2012.
* [3] S. Patel, H. Park, P. Bonato, L. Chan, and M. Rodgers, “A review of wearable sensors and systems with application in rehabilitation,” _Journal of NeuroEngineering and Rehabilitation_ , vol. 9, no. 21, 2012.
* [4] J. Gao, S. Baskar, D. Teng, M. al’Absi, S. Kumar, and E. Ertin, “A new direction for biosensing: RF sensors for monitoring cardio-pulmonary function,” in _Mobile Health_ , J. Rehg, S. Murphy, and S. Kumar, Eds. Springer, 2017, p. 289–312.
* [5] C. Hsien-Chin, R. Chávez-Santiago, I. Balasingham, and J. Bergsland, “Ultrawideband technology in medicine: A survey,” _Journal of Electrical and Computer Engineering_ , 2012.
* [6] R. Zetik, J. Sachs, and R. S. Thoma, “UWB short-range radar sensing - The architecture of a baseband, pseudo-noise UWB radar sensor,” _IEEE Instrumentation Measurement Magazine_ , vol. 10, no. 2, pp. 39–45, 2007.
* [7] T. E. McEwan, “Body monitoring and imaging apparatus and method,” United States Patent 5,573,012, Nov. 12, 1996.
* [8] ——, “Body monitoring and imaging apparatus and method,” United States Patent 5,766,208, Jun. 16, 1998.
* [9] D. Dias and J. P. S. Cunha, “Wearable health devices-vital sign monitoring, systems and technologies,” _Sensors_ , vol. 18, no. 8, Aug 2018.
* [10] J. Gao, E. Ertin, S. Kumar, and M. al’Absi, “Contactless sensing of physiological signals using wideband RF probes,” in _2013 Asilomar Conference on Signals, Systems and Computers_ , 2013, pp. 86–90.
* [11] J. Gao, “Wearable sensing of cardio-pulmonary function: Non-invasive sensor design and statistical approaches to signal compression and analysis,” Ph.D. dissertation, The Ohio State University, 2018.
* [12] E. M. Staderini, “UWB radars in medicine,” _IEEE Aerospace and Electronic Systems Magazine_ , vol. 17, no. 1, pp. 13–18, 2002.
* [13] G. Varotto and E. M. Staderini, “A 2D simple attenuation model for EM waves in human tissues: Comparison with a FDTD 3D simulator for UWB medical radar,” in _2008 IEEE International Conference on Ultra-Wideband_ , vol. 3, 2008, pp. 1–4.
* [14] M. Cavagnaro, E. Pittella, and S. Pisa, “UWB pulse propagation into human tissues,” _Physics in Medicine and Biology_ , vol. 58, no. 24, pp. 8689–8707, Nov 2013.
* [15] M. Ketata, M. Dhieb, G. Ben Hmida, H. Ghariani, and M. Lahiani, “UWB pulse propagation in human tissue: Comparison between Gaussian and square waves shape,” in _16th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA)_ , 2015, pp. 158–162.
* [16] T. Saarenketo and T. Scullion, “Road evaluation with ground penetrating radar,” _Journal of Applied Geophysics_ , vol. 43, no. 2, pp. 119–138, 2000.
* [17] I. AL-Qadi and S. Lahouar, “Measuring layer thicknesses with GPR – Theory to practice,” _Construction and Building Materials_ , vol. 19, no. 10, pp. 763 – 772, 2005.
* [18] A. Loizos and C. Plati, “Accuracy of pavement thicknesses estimation using different ground penetrating radar analysis approaches,” _NDT & E International_, vol. 40, no. 2, pp. 147 – 157, 2007.
* [19] S. Lahouar and I. L. Al-Qadi, “Automatic detection of multiple pavement layers from GPR data,” _NDT & E International_, vol. 41, no. 2, pp. 69–81, 2008.
* [20] M. Africano, J. O. Vargas, R. Adriano, D. B. Oliveira, and A. C. Lisboa, “Ground-penetrating radar antenna design for homogeneous and low-loss dielectric multilayer media,” _Journal of Microwaves, Optoelectronics and Electromagnetic Applications_ , vol. 19, pp. 137 – 151, 06 2020.
* [21] J. Lee, C. Nguyen, and T. Scullion, “A novel, compact, low-cost, impulse ground-penetrating radar for nondestructive evaluation of pavements,” _IEEE Transactions on Instrumentation and Measurement_ , vol. 53, no. 6, pp. 1502–1509, 2004.
* [22] S. Caorsi and M. Stasolla, “A layer stripping approach for EM reconstruction of stratified media,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 52, no. 9, pp. 5855–5869, 2014.
* [23] S. Caorsi and M. Stasolla, “Towards the detection of multiple reflections in time-domain EM inverse scattering of multi-layered media,” _Progress in Electromagnetics Research B_ , vol. 38, pp. 351–365, 2012.
* [24] U. Spagnolini, “Permittivity measurements of multilayered media with monostatic pulse radar,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 35, no. 2, pp. 454–463, 1997.
* [25] W. C. Chew, _Waves and Fields in Inhomogeneous Media_. New York: IEEE Press, 1995.
* [26] F. Hlawatsch, _Time-Frequency Analysis and Synthesis of Linear Signal Spaces: Time-Frequency Filters, Signal Detection and Estimation, and Range-Doppler Estimation_. USA: Kluwer Academic Publishers, 1998.
* [27] C. J. Geyer, “Markov Chain Monte Carlo Maximum Likelihood,” in _Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface_ , Elaine M. K. and Selma M. K., Ed. American Statistical Association, New York, 1991, pp. 156–163.
* [28] S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 6, no. 6, pp. 721–741, 1984.
* [29] R. M. Neal, “Slice sampling,” _Annals of Statistics_ , vol. 31, no. 3, pp. 705–767, 2003.
* [30] ——, “MCMC using Hamiltonian dynamics,” _Handbook of Markov Chain Monte Carlo_ , vol. 54, pp. 113–162, 2010.
* [31] W. D. Vousden, W. M. Farr, and I. Mandel, “Dynamic temperature selection for parallel tempering in Markov Chain Monte Carlo simulations,” _Monthly Notices of the Royal Astronomical Society_ , vol. 455, no. 2, pp. 1919–1937, Nov 2015.
* [32] S. P. Brooks and A. Gelman, “General methods for monitoring convergence of iterative simulations,” _Journal of Computational and Graphical Statistics_ , vol. 7, no. 4, pp. 434–455, 1998.
* [33] S. Gabriel, R. W. Lau, and C. Gabriel, “The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz,” _Physics in Medicine and Biology_ , vol. 41, no. 11, pp. 2251–2269, Nov 1996.
* [34] S. Lambot, E. C. Slob, I. van den Bosch, B. Stockbroeckx, B. Scheers, and M. Vanclooster, “Estimating soil electric properties from monostatic ground-penetrating radar signal inversion in the frequency domain,” _Water Resources Research_ , vol. 40, no. 4, 2004.
* [35] S. Lambot, E. C. Slob, I. van den Bosch, B. Stockbroeckx, and M. Vanclooster, “Modeling of ground-penetrating radar for accurate characterization of subsurface electric properties,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 42, no. 11, pp. 2555–2568, 2004.
* [36] S. Lambot and F. André, “Full-wave modeling of near-field radar data for planar layered media reconstruction,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 52, no. 5, pp. 2295–2303, 2014.
* [37] P. Debye, _Polar Molecules_. New York: The Chemical Catalog Company, Inc., 1929.
# Deep Video Inpainting Detection
Peng Zhou1, Ning Yu1, Zuxuan Wu1, Larry S. Davis1, Abhinav Shrivastava1, and Ser-Nam Lim2
1University of Maryland, College Park 2Facebook AI
###### Abstract
This paper studies video inpainting detection, which localizes an inpainted
region in a video both spatially and temporally. In particular, we introduce
VIDNet, a Video Inpainting Detection Network, which contains a two-stream
encoder-decoder architecture with an attention module. To reveal artifacts
encoded in compression, VIDNet additionally takes in Error Level Analysis
frames to augment the RGB frames, producing multimodal features at different
levels with an encoder. To explore spatial and temporal relationships, these
features are further decoded by a Convolutional LSTM to predict masks of
inpainted regions. In addition, when detecting whether a pixel is inpainted or
not, we present a quad-directional local attention module that borrows
information from the surrounding pixels in four directions. Extensive
experiments are conducted to validate our approach. We demonstrate, among
other things, that VIDNet not only outperforms alternative inpainting
detection methods by clear margins but also generalizes well to novel videos
that are unseen during training.
## 1 Introduction
Video inpainting, which completes corrupted or missing regions in a video
sequence, has achieved impressive progress over the years [17, 15, 37, 24, 2,
11, 38, 36, 25]. The ability to produce realistic videos that can be used in
applications like video restoration and virtual reality, while appealing,
brings significant security concerns at the same time, since these techniques
can also be used maliciously. By removing objects that could serve as
evidence, malicious inpainting can result in serious legal and social
implications, including swaying a jury and accelerating the spread of
misinformation on social platforms. Our goal in this work is to develop a
framework for detecting inpainted videos constructed with state-of-the-art
methods (see Fig. 1 for a conceptual overview).
Figure 1: Problem introduction. Given an inpainted video (second column), we
localize the inpainted region both spatially and temporally.
Although there are recent studies on detecting tampered regions in images [12,
43, 35, 4], very limited effort has been devoted to video inpainting
detection. For image-based manipulation detection, existing approaches focus
either on spliced regions or on “deepfake”-style face replacement, rather than
on object removal based on inpainting. Additionally, most of them are designed
specifically for images [18, 34] and suffer from poor performance on videos.
Learning robust video representations can help mitigate the issues with
single-image detection.
In light of this, we introduce VIDNet, a video inpainting detection network,
which is an encoder-decoder architecture with a quad-directional local
attention module to predict inpainted regions in videos (as is shown in Fig.
2). In particular, at each time step, VIDNet’s encoder, truncated from a pretrained VGG network [29], takes the current RGB frame as input. Since videos
are compressed based on discrete cosine transforms (DCT) and frames extracted
are usually stored in JPEG format, we leverage ELA [33] images as an
additional input to the encoder to reveal artifacts like compression
inconsistency (as is shown in Fig. 3). We extract features from both ELA and
RGB images with the encoder, producing five different multimodal features at
different scales, that are further used jointly to train our inpainting
detector. In addition, given a missing region to fill in, inpainting methods
leverage information from the region's surrounding pixels to make it spatially
coherent. Motivated by this, for RGB features from the last layer of
the encoder, we introduce a quad-directional local attention module to attend
to the neighbors of a pixel, allowing us to explicitly model spatial
dependencies among different pixels during detection.
Finally, with multimodal features encoded at different scales, we leverage a
four-layer Convolutional LSTM, serving as a decoder for inpainting detection.
More specifically, the ConvLSTM at a certain layer not only takes in features
from a previous time step but also features upsampled from a coarser level (_e.g._, a
lower decoding layer). In this way, both spatial relationships across different
scales and temporal dynamics are leveraged to produce inpainted
masks over time. The framework is trained end-to-end with backpropagation. We
conduct experiments on the DAVIS 2016 [26] Dataset and the Free-form Video
Inpainting Dataset [2]. VIDNet successfully detects inpainted regions under
all different settings and outperforms by clear margins competing methods. We
also show that VIDNet can be generalized to detect out-of-domain inpainted
videos that are unseen during training.
Our contributions can be summarized as follows: 1) To the best of our
knowledge, we introduce the first learning based approach for video inpainting
detection. 2) We present an end-to-end framework for video inpainting
detection, which models spatial and temporal relationships in videos. 3) We
leverage multimodal features, _i.e._, RGB and ELA features, at different scales, for
video inpainting detection. 4) We introduce a quad-directional local attention
module to explicitly determine if a pixel is inpainted or not by attending to
its neighbours.
Figure 2: Framework overview. Given an RGB frame in a video, we first derive
its corresponding ELA frame and compute multimodal features at different
scales with both frames. We also introduce a quad-directional local attention
module (striped) to the last encoded RGB features (colored blue) to explore
spatial relationships among pixels from four directions. These encoded
features are further input into a multi-layer ConvLSTM (colored green) for
decoding, exploiting spatial and temporal relationships explicitly, to produce
masks of inpainted regions. See texts for more details.
## 2 Related Work
Video Inpainting. With the advance of recent image inpainting approaches [10,
9, 13, 21, 25, 38, 19, 36, 41], more recent studies have investigated video
inpainting. There are two lines of work — patch based and learning based
approaches. For patch based approaches, PatchMatch [1] is a prominent approach
which searches for similar patches in the surrounding region iteratively to
complete the inpainted region. To achieve better quality, Huang et al. [11]
explore an optimization based method to match patches and utilize information
including color and flow as regularization. On the other hand, learning based
approaches have been explored recently. Wang et al. [32] propose a 3D encoder-decoder
structure for video inpainting. Afterwards, Xu et al. [37] leverage optical
flow information to guide inpainting in videos in both forward and backward
passes. Similarly, Kim et al. [15] estimate flow as an additional
constraint while completing the missing regions. To maintain more frame
pixels, Oh et al. [24] use gated convolution to inpaint video frames gradually
from the reference frame. Lee et al. [17] copy and paste future frames to
complete missing details in the current frame. In contrast, our approach
detects regions inpainted by these approaches.
Manipulation Detection. There are also approaches focusing on manipulation
detection. Most mainly tackle splicing based manipulation and use clues
specific to it [7, 5, 39, 4]. In particular, Zhou et al. [43] use both RGB and
local noise to detect potential regions. Salloum et al. [28] rely on boundary
artifacts to reveal manipulated regions in a multi-task learning fashion and
Zhou et al. [42] improve its generalization ability with a generative model.
Huh et al. [12] use meta-data to find inconsistent patches and Wu et al. [35]
treat it as anomaly detection to learn features in a self-supervised manner.
More related to our work are methods for image inpainting detection. [34] is a
classical approach that searches for similar patches matched by zero-
connectivity. However, high false alarm rates limit its application in real
scenarios. More recently, Zhu et al. [44] use CNNs to localize inpainting
patches within images. Li et al. [18] explore High Pass Filtering (HPF) as the
initialization of CNNs for the purpose of distinguishing high frequency noise
of natural images from inpainted ones. However, the generalization and
robustness is limited as these HPFs are learned given specific inpainting
methods. In contrast, we combine both RGB information and ELA features as
inputs to VIDNet, and show that our approach generalizes to different
inpainting methods. In addition, without temporal guidance, the methods above
cannot guarantee temporally consistent prediction like our approach.
## 3 Approach
VIDNet, Video Inpainting Detection Network, is an encoder-decoder architecture
(See Fig. 2 for an overview of the framework) operating on multimodal features to
detect inpainted regions. In addition to RGB video frames, VIDNet utilizes
Error Level Analysis frames (Sec. 3.1) to identify artifacts incurred during
the inpainting process. Motivated by the fact that inpainting methods
typically borrow information from neighbouring pixels of the region to be
inpainted, we introduce a quad-directional local attention module (Sec. 3.2) which
uses adjacent pixels to discover inpainting traces. Finally, we model the
temporal relations among different frames with a ConvLSTM (Sec. 3.3). In the
following, we describe the components of the model.
### 3.1 Multimodal Features
Learning a mapping directly from an inpainted RGB frame to a mask that
encloses the removed object is challenging, since the RGB space is
intentionally modified by replacing regions with their surrounding pixels to
appear realistic. To mitigate this issue, we additionally augment RGB
information with error level analysis features [33] that are designed to
reveal regions with inconsistent compression artifacts in compressed JPEG
images. Note that although videos are usually compressed in MPEG formats, extracted
frames are often stored in JPEG format. More formally, an ELA
image is defined as:
$I_{ELA}=|I-I_{jpg}|,$ (1)
where $I_{ELA}$ is the ELA image, $I$ denotes the original image and $I_{jpg}$
denotes the recompressed JPEG image from the original image.
Fig. 3 illustrates the corresponding ELA images of sampled inpainted frames.
Although ELA images have been used in forensics applications [39, 40], they
tend to create false alarms when other artifacts, _e.g._, sharp boundaries, are
present in the images, which requires ad-hoc judgement to determine whether a
region is tampered. Therefore, instead of only using ELA frames, we augment them
with RGB frames as inputs to our encoder (see results in Sec. 4).
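As a minimal sketch of the ELA computation in Eq. (1): recompress the frame as JPEG and take the absolute per-pixel difference. The function name is ours; we use quality factor 50 since that is the recompression factor reported in Sec. 3.4, and Pillow stands in for whatever JPEG codec the authors used.

```python
from io import BytesIO

import numpy as np
from PIL import Image


def ela_image(frame: np.ndarray, quality: int = 50) -> np.ndarray:
    """I_ELA = |I - I_jpg| (Eq. 1): recompress the frame as JPEG and
    return the absolute per-pixel difference."""
    buf = BytesIO()
    Image.fromarray(frame).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = np.asarray(Image.open(buf), dtype=np.int16)
    return np.abs(frame.astype(np.int16) - recompressed).astype(np.uint8)
```

Regions inpainted from other content tend to leave a different compression history, so they stand out in this difference image.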
In particular, both the RGB and ELA frames are input to a two-stream encoder.
Each stream, based on a VGG encoder, transforms the input image to high-level
representations with five layers, yielding 5 feature representations at
different scales. At each scale, we normalize the corresponding RGB and ELA
features, respectively with $\ell_{2}$ normalization, and then apply one
convolutional layer to absorb both features into a unified representation:
$f_{l}=\sigma(F([\,f^{RGB}_{l}\,|\,f^{ELA}_{l}\,])),\quad l<5,$ (2)
where $[\,\cdot\,|\,\cdot\,]$ denotes feature concatenation and $f_{l}$ denotes
the feature at the $l$-th layer. $f^{RGB}_{l}$ and $f^{ELA}_{l}$ denote the
$\ell_{2}$-normalized RGB and ELA features at layer $l$, respectively. $F$ represents the convolutional
layer and $\sigma$ denotes the activation function. The fused representation
at each level is further used for decoding. For $l=5$, we simply use RGB
features as we find that high-level ELA features are not helpful.
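The fusion in Eq. (2) can be sketched in a few lines of numpy, with the 1×1 convolution written as a channel-wise matrix multiply. The weight shapes and the choice of ReLU for the activation $\sigma$ are illustrative assumptions, not specified in the text.

```python
import numpy as np


def fuse_features(f_rgb, f_ela, W, b):
    """Eq. (2): l2-normalize each modality over channels, concatenate,
    then apply a 1x1 convolution (a channel-wise matrix multiply here)
    followed by an activation (ReLU assumed).
    f_rgb, f_ela: (C, H, W) feature maps; W: (C_out, 2C); b: (C_out,)."""
    def l2norm(f, eps=1e-12):
        return f / (np.linalg.norm(f, axis=0, keepdims=True) + eps)

    cat = np.concatenate([l2norm(f_rgb), l2norm(f_ela)], axis=0)  # (2C, H, W)
    out = np.tensordot(W, cat, axes=([1], [0]))                   # (C_out, H, W)
    return np.maximum(out + b[:, None, None], 0.0)
```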
Figure 3: ELA frame example. From the top to the bottom: the inpainted RGB
frame, its corresponding ELA frame, and the ground-truth inpainting mask. The
inpainting artifacts, _e.g._, the dog, person and ship, stand out in the ELA space while
not easily seen in the RGB space.
### 3.2 Quad-Directional Local Attention
Inpainting methods aim to replace a region with pixels from its surrounding
areas for photorealistic visual effect. Therefore, when determining whether a
pixel is inpainted or not, it is important to examine its surrounding pixels.
Inspired by recursive filtering techniques that model pixel relations from
four directions for edge-preserving smoothing, we introduce a quad-directional
local attention module to explore spatial relations among adjacent pixels.
We learn four attention maps for four directions, left-to-right, right-to-
left, top-to-bottom, bottom-to-top, to determine how much information to
leverage from the pixels in the corresponding direction based on each map.
More specifically, we use $F_{\rightarrow}$, $F_{\leftarrow}$, $F_{\uparrow}$
and $F_{\downarrow}$ to denote functions that derive attention maps for the
left-to-right, right-to-left, top-to-bottom and bottom-to-top four directions.
In the following, we consider the left-to-right direction for simplicity.
Given features $f_{5}$ from the last layer of the RGB stream, we first
transform the features with $F_{\rightarrow}$ to have the same dimension as
$f_{5}$, and then compute an attention map $A_{\rightarrow}$:
$A_{\rightarrow}=\sigma(F_{\rightarrow}(f_{5};W_{\rightarrow})),$ (3)
where $W_{\rightarrow}$ denotes the weights for the convolutional kernel, and
$\sigma$ is the sigmoid function to ensure the attentional weights at each
pixel are in the range of $[0,1]$. Then, for each pixel in the feature map, we
obtain information from the surrounding pixels as:
$f_{5\rightarrow}[k]=(1-A_{\rightarrow}[k])f_{5}[k]+A_{\rightarrow}[k]f_{5}[k-1],$ (4)
where $k$ denotes the location of the pixel. Since we are considering
attention from the left-to-right direction, $k-1$ indicates the pixel to the
left of $k$. The current value of pixel $k$ is updated with information from
its neighboring pixel, and the weight to balance the contribution
$A_{\rightarrow}$ is derived with convolution, which aggregates information
from a small grid in the original features. As a result, we attend to a small
local region to compute the refined representation. We can derive
$f_{5\leftarrow}$, $f_{5\uparrow}$ and $f_{5\downarrow}$ similarly, and thus
we have four different refined representations.
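The left-to-right update in Eqs. (3)–(4) can be sketched as follows. Here `a_logits` stands in for the output of the learned transform $F_{\rightarrow}(f_{5};W_{\rightarrow})$, and leaving the leftmost column unchanged (it has no left neighbor) is our assumption about boundary handling.

```python
import numpy as np


def attend_left_to_right(f5, a_logits):
    """Eqs. (3)-(4): each pixel blends its own value with its left
    neighbor, weighted by a sigmoid attention map in [0, 1].
    f5, a_logits: (C, H, W) arrays."""
    A = 1.0 / (1.0 + np.exp(-a_logits))  # Eq. (3): A = sigmoid(F(f5; W))
    out = f5.copy()                      # leftmost column keeps its value
    out[:, :, 1:] = (1 - A[:, :, 1:]) * f5[:, :, 1:] + A[:, :, 1:] * f5[:, :, :-1]
    return out
```

The other three directions follow by flipping or transposing the spatial axes.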
Note that the quad-directional attention module is similar in spirit to
recursive filtering. However, in standard recursive filtering, a weight
matrix, in the form of an edge map [3] or a weighted map [20], is used for the
attention map $A$ to guide the filtering to restore images or smooth feature
maps. In contrast, our filtering can be considered as a form of self-
attention—we derive attention maps by modeling similarities in a local region
with convolutions conditioned on input features and the resulting maps are in
turn used to refine features, allowing each pixel to borrow information by
attending to its adjacent pixels. In addition, the motivation of our approach
can be seen as the “reverse” process of recursive filtering—in recursive
filtering, information from surrounding pixels is diffused to make local
regions coherent, whereas we wish to detect inconsistent pixels by attending
to a neighboring region.
Furthermore, we compute four refined feature maps for four directions in a
parallel way conditioned on the same feature map. An alternative is to
generate a single feature representation by sequentially performing attention
in four directions, _i.e._ , $f_{5\rightarrow}$ is used as inputs to generate
$f_{5\leftarrow}$, and so on and so forth, as in [3]. However, we find in Sec.
4 that the parallel multi-head approach offers better results, possibly due to
the disentanglement of different directions.
Figure 4: The quad-directional local attention module. Given RGB features from
the last layer of the encoder, we derive attention maps with a quad-
directional local attention module. To detect whether a pixel is inpainted or
not, the module attends to its neighbors from four directions.
### 3.3 ConvLSTM Decoder
Temporal information like inconsistency in the inpainted region over time is
an important cue for video inpainting detection. To explore temporal
relationships among adjacent frames, we use multiple ConvLSTM decoding layers
to take features from the encoders and produce predicted detection results,
which enables message passing from previous frames. More specifically, the
decoder contains four ConvLSTM layers to process features from different
spatial scales. At each time step, taking into account both spatial and
temporal information, we concatenate the skipped connected feature of the
current frame and the upsampled feature from a lower level, as the inputs to
the current ConvLSTM layer. More formally, for the $t$-th time step, the
$i$-th ($2\le i\le 4$) ConvLSTM computes the hidden states and cell states for
the $(t+1)$-th time step as:
$h^{t+1}_{i},\;c^{t+1}_{i}=\mathrm{ConvLSTM}_{i}(g_{i}^{t},h^{t}_{i},c^{t}_{i}),$ (5)
$g_{i}^{t}=[\,U(h_{i-1}^{t})\,|\,f_{6-i}^{t}\,],$ (6)
where $h^{t}_{i}$ and $c^{t}_{i}$ denote the hidden states and cell states for
the $i$-th ConvLSTM, respectively, and $U$ denotes the function for bilinearly
upsampling, which maps the outputs from a lower-level ConvLSTM with smaller
feature maps to have the same dimension as the current one. In addition,
$f_{6-i}^{t}$ is the skip connected feature of the frame $t$ from the encoder.
When $i=1$, the first layer of the ConvLSTM takes features from the last layer
of the encoder, _i.e._ $f_{5}$ as inputs. Recall that we obtain four refined
features based on $f_{5}$ with our quad-directional local attention module to
identify pixels that are inconsistent with their neighbours from four
directions. Thus, we use these refined features as inputs to ConvLSTM1. We
input them into the LSTM in the order of $f_{5\rightarrow}$,
$f_{5\leftarrow}$, $f_{5\uparrow}$ and $f_{5\downarrow}$ to obtain all the
four directional features.
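The input construction in Eq. (6) reduces to an upsample-and-concatenate step. In this sketch, nearest-neighbor repetition stands in for the paper's bilinear upsampling $U$, and the 2× scale factor between decoder levels is an assumption.

```python
import numpy as np


def decoder_input(h_lower, f_skip):
    """Eq. (6): upsample the lower-level hidden state h_{i-1}^t and
    concatenate it channel-wise with the skip feature f_{6-i}^t.
    h_lower: (C1, H, W); f_skip: (C2, 2H, 2W)."""
    up = np.repeat(np.repeat(h_lower, 2, axis=1), 2, axis=2)  # U(h): (C1, 2H, 2W)
    assert up.shape[1:] == f_skip.shape[1:], "spatial dims must match"
    return np.concatenate([up, f_skip], axis=0)               # (C1 + C2, 2H, 2W)
```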
At each time step, we compute $g_{5}^{t}$ with Eqn. 6 to produce a prediction
$p^{t}$ for each QDLA direction via one convolutional layer. Finally, to
explore non-linear relations among these four directional outputs, we fuse
them with one additional convolutional layer to form the final prediction.
During training, we divide each video into $N$ clips of equal length. To
encourage more overlap with the binary ground-truth mask, we use the IoU
score [27] as our loss function, which is formulated as:
$L(P,Y)=1-\frac{\sum P\cdot Y}{\sum(P+Y-P\cdot Y)+\epsilon},$ (7)
where $P$ and $Y$ denote the prediction and the binary ground truth mask,
respectively. $\epsilon$ denotes a small number to avoid zero division.
The loss is updated once the ConvLSTM decoder goes through a single video clip
to collect temporal information. By exploring spatial and temporal information
recurrently, predictions of inpainted regions become more accurate.
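Eq. (7) amounts to a soft intersection-over-union. A hedged numpy version, treating $P$ as soft probabilities and $Y$ as a binary mask (the value of $\epsilon$ is our arbitrary choice):

```python
import numpy as np


def iou_loss(P, Y, eps=1e-6):
    """Soft IoU loss, Eq. (7): 1 - intersection / (union + eps).
    P: predicted probabilities in [0, 1]; Y: binary ground-truth mask."""
    inter = np.sum(P * Y)
    union = np.sum(P + Y - P * Y)
    return 1.0 - inter / (union + eps)
```

The loss is 0 when the soft prediction matches the mask exactly and approaches 1 as the overlap vanishes.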
### 3.4 Implementation Details
We use PyTorch for implementation. Our model is trained on an NVIDIA GeForce
TITAN P6000. The input to the network is resized to $240\times 427$. The
length of our video clips is set to 3 frames during training. To extract ELA
frames, we recompress the corresponding RGB frames by quality factor 50 and
compute their difference. Our feature extraction backbone is VGG-16 [29] for
both RGB and ELA features. To increase the generalization ability, we add
instance normalization [31] layers to the backbone. The encoder is initialized
from VGG-16 model pretrained on ImageNet [6] and the decoder is initialized by
Xavier initialization [8]. We concatenate both RGB and ELA features up to the
penultimate encoding layer. Afterwards, the features are passed into one
convolutional and normalization layer to reduce the dimension by half to
reduce training parameters. The QDLA module is only added to the last encoder
layer to extract directional feature information based on ablation results in
Sec. 4. The decoder is a 4-layer ConvLSTM. We use Adam [16] optimizer with a
fixed learning rate of $1\times 10^{-4}$ for encoder and $1\times 10^{-3}$ for
decoder. The optimizers of the encoder and decoder networks are updated in an
alternating fashion. To avoid overfitting, weight decay with a factor of
$5\times 10^{-5}$ and $50\%$ dropout [30] are applied. Only random horizontal
flipping augmentation is applied during training. We train the whole network
end-to-end for 40 epochs with a batch size of 4.
## 4 Experiment
We compare VIDNet with approaches on manipulation/image inpainting detection
in this section to show the advantages of our approach on video inpainting
detection. We also analyze the robustness of our approach under different
perturbations and show both quantitative and qualitative results.
| VI* | OP* | CP | VI | OP* | CP* | VI* | OP | CP*
---|---|---|---|---|---|---|---|---|---
Methods | IoU/F1 | IoU/F1 | IoU/F1 | IoU/F1 | IoU/F1 | IoU/F1 | IoU/F1 | IoU/F1 | IoU/F1
NOI [23] | 0.08/0.14 | 0.09/0.14 | 0.07/ 0.13 | 0.08/0.14 | 0.09/0.14 | 0.07/0.13 | 0.08/0.14 | 0.09/0.14 | 0.07/ 0.13
CFA [7] | 0.10/0.14 | 0.08/0.14 | 0.08/0.12 | 0.10/0.14 | 0.08/0.14 | 0.08/0.12 | 0.10/0.14 | 0.08/0.14 | 0.08/0.12
COSNet [22] | 0.40/0.48 | 0.31/0.38 | 0.36/0.45 | 0.28/0.37 | 0.27/0.35 | 0.38/0.46 | 0.46/0.55 | 0.14/0.26 | 0.44/0.53
HPF [18] | 0.46/0.57 | 0.49/0.62 | 0.46/0.58 | 0.34/0.44 | 0.41 /0.51 | 0.68/0.77 | 0.55/0.67 | 0.19/ 0.29 | 0.69/0.80
GSR-Net [42] | 0.57/0.69 | 0.50/0.63 | 0.51/0.63 | 0.30 /0.43 | 0.74/0.82 | 0.80/0.85 | 0.59 /0.70 | 0.22/0.33 | 0.70/0.77
Ours RGB (baseline) | 0.55/0.67 | 0.46/0.58 | 0.49/0.63 | 0.31/0.42 | 0.71 /0.77 | 0.78/0.86 | 0.58/0.69 | 0.20/0.31 | 0.70/0.82
VIDNet-BN (ours) | 0.62/0.73 | 0.75/ 0.83 | 0.67/0.78 | 0.30/0.42 | 0.80/0.86 | 0.84/0.92 | 0.58 /0.70 | 0.23/0.32 | 0.75/0.85
VIDNet-IN (ours) | 0.59/0.70 | 0.59/ 0.71 | 0.57/0.69 | 0.39 /0.49 | 0.74/0.82 | 0.81/0.87 | 0.59/ 0.71 | 0.25/0.34 | 0.76/0.85
Table 1: mean $IoU$ and $F_{1}$ score comparison on inpainted DAVIS. The model
is trained on VI and OP inpainting, OP and CP inpainting, and VI and CP
inpainting respectively (denoted as ‘*’).
### 4.1 Experiment setup
Dataset and Evaluation Metrics. Since DAVIS 2016 [26] is the most common
benchmark for video inpainting, which consists of 30 videos for training and
20 videos for testing, we evaluate our approach on it for inpainting
detection. We generate inpainted videos using SOTA video inpainting approaches
— VI [15], OP [24] and CP [17], with the ground truth object mask as
reference. To show both the performance and generalization, we choose two out
of the three inpainted DAVIS for training and testing, leaving one for
additional testing. The training/testing split follows the default DAVIS
setting. We report the $F_{1}$ score and mean Intersection over Union (IoU)
with respect to the ground-truth mask as evaluation metrics.
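For reference, both metrics can be computed from binarized masks as follows; the edge-case handling for empty masks is our assumption, not specified in the paper.

```python
import numpy as np


def iou_f1(pred, gt):
    """Mean IoU and F1 between a binary prediction and ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 0.0
    prec = inter / pred.sum() if pred.sum() else 0.0
    rec = inter / gt.sum() if gt.sum() else 0.0
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return iou, f1
```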
We compare our method with both video segmentation methods COSNet [22] and
manipulation detection methods including NOI [23], CFA [7], HPF [18] and GSR-
Net [42]. Our baselines are described below; see our supplementary material for
details on the other approaches.
Ours RGB (baseline): Our baseline approach which feeds as input RGB frame
only. No QDLA module is applied.
VIDNet-BN (ours): Our batch normalization [14] version.
VIDNet-IN (ours): We report this as our main result, which replaces the batch
normalization in the encoder with instance normalization.
### 4.2 Sanity Check
Following [12], we first check the ability of our learned model to distinguish
between original and inpainted video frames. We compare models trained on VI
and OP for simplicity. We add the original uninpainted videos to the test sets
for evaluation, and average the prediction scores within every frame as the frame-level
score. Afterwards, we report the AUC classification performance in Tab. 2
(inpainted frames are labeled positive). Our model achieves better performance
for all the three algorithms compared to other methods, indicating the
advantages of our learned features to classify between inpainted and original
videos.
Methods | VI* | OP* | CP
---|---|---|---
HPF [18] | 0.718 | 0.640 | 0.845
GSR-Net [42] | 0.762 | 0.758 | 0.834
VIDNet-IN (ours) | 0.778 | 0.768 | 0.884
Table 2: Sanity check for inpainting classification AUC comparison. The
results are tested on the three inpainting algorithms, and all models are
trained on VI and OP inpainted DAVIS.
### 4.3 Main Results
Tab. 1 highlights our advantages over other methods. Video segmentation method
COSNet captures the flow difference between adjacent frames to segment
objects. In contrast, manipulation detection methods are trained to find
tampering artifacts and thus yield better performance. For all the three
settings, our IN version outperforms other approaches on both trained and
untrained inpainting algorithms, demonstrating the generalization of our approach.
Additionally, we show clear improvement over our baseline, indicating the
effectiveness of our proposed ELA feature and QDLA module. Comparing across
different inpainting algorithms, the performance degrades on the untrained
algorithms, indicating a domain shift between trained and untrained inpainting
algorithms. However, benefiting from diverse features and a stronger focus on
neighboring regions, our method still achieves better generalization than
other approaches. Finally, the results indicate that our BN version
generally has better performance on the in-domain training inpainting
algorithms while IN version shows better generalization on the cross-domain
one. Therefore, we provide both results as a trade-off between in-domain
performance and generalization.
| VI* | OP* | CP
---|---|---|---
Methods | IoU/F1 | IoU/F1 | IoU/F1
Ours ELA | 0.460/0.578 | 0.509/0.631 | 0.417/0.546
Ours RGB (baseline) | 0.552/0.671 | 0.456/0.580 | 0.493/0.625
Ours w/o QDLA | 0.559/0.682 | 0.557/0.681 | 0.512/0.644
Ours frame-by-frame | 0.558/0.683 | 0.566/0.688 | 0.532/0.664
Ours RF edge | 0.540/0.661 | 0.460/0.591 | 0.555/0.670
QDLA both features | 0.555/0.680 | 0.580/0.700 | 0.495/0.635
Ours w/o ELA | 0.568/0.691 | 0.465/0.595 | 0.560/0.678
QDLA all layers | 0.570/0.693 | 0.469/0.585 | 0.564/0.682
VIDNet-IN (ours) | 0.585/0.704 | 0.588/ 0.707 | 0.565/0.685
Table 3: Ablation analysis. The model is trained on VI and OP inpainting
algorithms (denoted as ‘*’).
(a) JPEG perturbation (VI*, OP*, CP) (b) Noise perturbation (VI*, OP*, CP)
Figure 5: Mean IoU comparison under different perturbations. Perturbation in
JPEG compression consists of the quality factor with 90 and 70; perturbation
in noise consists of SNR 30dB and 20dB. Column from left to right is the
result on VI, OP and CP inpainting. ‘*’ denotes that the model is trained on
these inpainting algorithms.
### 4.4 Ablation Analysis
We analyze the importance of each key component in our framework and the
details are as follows:
Ours ELA: The baseline architecture which only feeds ELA frame as input.
Ours w/o ELA: Our full model without the ELA features.
Ours w/o QDLA: Our full model without QDLA module.
Ours RF edge: Similar to Chen et al. [3], we add additional edge branch and
apply recursive filter to the final prediction. The output of edge branch is
used as the reference to recursive filter layer. The loss function of the edge
branch is a weighted binary cross entropy loss.
QDLA both features: Our full model except that the input to QDLA module is the
concatenation of both RGB and ELA feature from the $5$-th layer.
QDLA all layers: Applying QDLA module to all the 5 encoding feature layers.
Ours frame-by-frame: Instead of training with video clip length of 3, we train
our full model frame-by-frame.
Tab.3 displays the comparison results. Compared to baseline, the ELA feature
alone yields worse performance. This is perhaps because the ELA frame also
contains other artifacts like sharp boundaries, which lead to confusion without
proper guidance from RGB contents. Adding the QDLA module introduces feature
adjacency relationships and thus leads to improvement. Moreover, comparing with
QDLA all layers shows that higher-level features are more useful for our QDLA
than lower-level ones, and comparing with QDLA both features shows that
high-level ELA features are less helpful than lower-level ones. Compared to Ours RF edge, our QDLA module
(Ours w/o ELA) yields better performance, because boundary prediction
degrades in the video inpainting scenario and thus the edge map contains false
positives that misguide the segmentation branch. In addition, the comparison
between Ours frame-by-frame and our final model verifies the importance of
temporal information in video inpainting detection. Finally, combining the QDLA
module, ELA features, and temporal information boosts the performance further.
Figure 6: Qualitative visualization on DAVIS. The first row shows the
inpainted video frame. The second to fourth row indicates the final
predictions from different methods. The fifth row is the ground truth.
### 4.5 Robustness Analysis
To test the robustness of our approach under noise and JPEG perturbation, we
conduct experiments listed in Fig. 5. We add Gaussian noise to the input frame
with Signal-to-Noise Ratio (SNR) 30 and 20 dB and evaluate on these noisy
frames, or recompress test frame with JPEG quality 90 and 70 for perturbation.
Moreover, to study the effect of specific augmentation on performance, we
apply noise and JPEG augmentation to our approach and compare the results. The
details of our augmentations are as follows.
VID-Noise-Aug: Randomly apply Gaussian noise with SNR 20 dB to the input
frames during training.
VID-JPEG-Aug: Randomly apply JPEG compression with quality factor 90 to the
input frames during training.
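The SNR-controlled noise perturbation above can be sketched as follows; computing the signal power per frame and clipping back to [0, 255] are our assumptions.

```python
import numpy as np


def add_gaussian_noise(frame, snr_db, rng=None):
    """Add Gaussian noise at a target SNR in dB (Sec. 4.5 perturbation).
    frame: uint8 image; noise power = signal power / 10^(SNR/10)."""
    if rng is None:
        rng = np.random.default_rng()
    x = frame.astype(np.float64)
    noise_power = np.mean(x ** 2) / (10 ** (snr_db / 10))
    noisy = x + rng.normal(0.0, np.sqrt(noise_power), x.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Lower SNR means stronger noise, so the 20 dB setting is the harsher of the two perturbations evaluated.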
The robustness of our approach stands out under different perturbations.
Compared to other approaches, HPF suffers more from perturbation because
additional high-frequency noise is introduced. With generative models for
augmentation, GSR-Net shows good robustness. However, our approach outperforms
GSR-Net as more modalities of video inpainting clues have been considered.
Even though adding noise augmentation results in a small degradation on the
initial performance, the robustness to both noise and JPEG perturbation has
been improved. Similar observation is made on JPEG augmentation. See our
supplementary for analysis under video compression perturbation.
### 4.6 Results on Free-form Video Inpainting Dataset
To further test the performance on a different dataset, we provide additional
evaluation on the Free-form Video Inpainting (FVI) dataset. The FVI dataset [2] provides
100 test videos, which mostly targets multi-instance object removal. We
directly apply their approach, which leverages 3D gated convolution encoder-
decoder architecture for video inpainting, to generate the 100 inpainted
videos. To test the generalization of our approach, we directly test the
models trained on VI and OP inpainted DAVIS.
Tab. 4 displays the comparison results. Since both the dataset and inpainting
approach are different, the performance degrades due to the domain shift.
However, compared to other approaches, our method still achieves better
generalization by a large margin. Also, compared with our baseline model which
only uses RGB features, our approach shows clear improvement. This further
validates the effectiveness to combine both RGB and ELA features and introduce
spatial and temporal information for more evidence.
### 4.7 Qualitative Results
Fig. 6 illustrates the visualization of our predictions versus others under
the same setting. Thanks to our ELA and RGB features which provide spatial
clues, it is clear that our approach is able to obtain a closer prediction to
the ground truth than other methods. Specifically, HPF only transfers RGB into
the noise domain, making it prone to false alarms. GSR-Net makes decisions
frame-by-frame, making the results less temporally consistent. In contrast,
with the help of temporal information, our predictions maintain temporal
consistency.
| FVI
---|---
Methods | IoU/F1
NOI [23] | 0.062/0.107
CFA [7] | 0.073/0.122
HPF [18] | 0.205/0.285
GSR-Net [42] | 0.195/0.288
Ours RGB (baseline) | 0.156/0.223
VIDNet-IN (ours) | 0.257/0.367
Table 4: Mean IoU and F1 score comparison on FVI. The results are directly
tested on the FVI dataset, and all models are trained on VI and OP inpainted
DAVIS.
## 5 Conclusions
We introduce learning-based video inpainting detection in this paper. To
reveal more inpainting artifacts from different domains, we propose to extract
and concatenate both RGB and ELA features. Additionally, we encourage
learning from adjacent features in a self-attended manner by introducing the QDLA
module. With both adjacent spatial and temporal information, we make the
final prediction through a ConvLSTM based decoder. Our experiments validate
the effectiveness of our approach both in-domain and cross-domain. As shown in
the results, there still exists a clear gap in the generalization and
robustness, making the problem far from solved. Incorporating domain
adaptation strategies might be a remedy for this issue, which we leave for
future research.
## 6 Acknowledgments
We gratefully acknowledge support from Facebook AI and the DARPA MediFor
program under cooperative agreement FA87501620191, “Physical and Semantic
Integrity Measures for Media Forensics”.
# The Effect of Class Definitions on the Transferability of Adversarial
Attacks Against Forensic CNNs
Xinwei Zhao and Matthew C. Stamm; Drexel University; Philadelphia, PA,
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
In recent years, convolutional neural networks (CNNs) have been widely used by
researchers to perform forensic tasks such as image tampering detection. At
the same time, adversarial attacks have been developed that are capable of
fooling CNN-based classifiers. Understanding the transferability of
adversarial attacks, i.e. an attack’s ability to attack a different CNN than
the one it was trained against, has important implications for designing CNNs
that are resistant to attacks. While attacks on object recognition CNNs are
believed to be transferable, recent work by Barni et al. has shown that
attacks on forensic CNNs have difficulty transferring to other CNN
architectures or to CNNs trained using different datasets. In this paper, we
demonstrate that adversarial attacks on forensic CNNs are even less
transferable than previously thought, even between virtually identical CNN
architectures. We show that several common adversarial attacks against CNNs
trained to identify image manipulation fail to transfer to CNNs whose only
difference lies in the class definitions (i.e., the same CNN architectures
trained using the same data). We note that all formulations of class
definitions contain the “unaltered” class. This has important implications for
the future design of forensic CNNs that are robust to adversarial and
anti-forensic attacks.
## Introduction
The integrity and authenticity of multimedia content are top concerns in many
scenarios, such as criminal investigation and news reporting [1]. Research has
shown that many editing operations, such as resizing [2] or contrast
enhancement [3], leave unique traces behind. Many forensic algorithms have
been developed to detect or identify editing operations [4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14, 15]. In recent years, convolutional neural networks (CNNs)
have been widely used by researchers to perform forensic tasks such as image
tampering detection [16, 17, 18, 9] and source identification [19, 20, 21].
In some scenarios, an intelligent attacker may attempt to launch an
adversarial attack to fool forensic algorithms [22, 23, 24, 25, 26]. Many
adversarial attacks have been found capable of fooling deep-learning-based
algorithms [27, 28, 29, 30, 31, 32, 33, 34, 35]. Researchers have already
demonstrated that the fast gradient sign method (FGSM) [36] and generative
adversarial network (GAN) [37, 38] based attacks can be used to fool forensic
CNNs. Therefore, it is important to understand the capabilities and
limitations of these adversarial attacks.
Transferability is one of the well-known problems pertaining to adversarial
attacks [39, 40, 41, 42]. Transferability issues occur when the attacker
attempts to attack a different CNN than the one the attack was explicitly
trained against. Since many attacks operate by pushing adversarial examples
across the boundaries of the target class, it is important for an attack to be
able to observe the gradient of the target classifier with respect to the
input data. However, when the CNN used to train the attack cannot fully mimic
the boundaries of the target CNN, the resulting adversarial examples may fail
to transfer. Two common causes of transferability issues are discrepancies in
training data and in CNN architecture.
Understanding the transferability of adversarial attacks has important
security implications. If information can be discovered that negatively
affects an attack’s transferability, it can be used to defend CNNs against
attack. Additionally, knowledge of attack transferability helps researchers
understand how feasible real-world adversarial attacks could be. While
previous research has shown that attacks against object recognition CNNs can
transfer to CNNs with different architectures or trained using different data,
recent research in multimedia forensics shows the opposite. Specifically, work
by Barni et al. has shown that attacks on forensic CNNs have difficulty
transferring to other CNN architectures or to CNNs trained using different
datasets [43].
In this paper, we demonstrate that adversarial attacks on forensic CNNs are
even less transferable than previously thought, even between virtually
identical CNN architectures. In particular, we discover that several common
adversarial attacks against forensic CNNs fail to transfer between CNNs whose
only difference lies in the class definitions (i.e., the same CNN
architectures trained using the same data). We note that all formulations of
class definitions contain the “unaltered” class. To investigate the impact of
class definitions on forensic CNNs, we assume that the attacker knows every
detail of the forensic CNN, including the training data and the CNN
architecture; the only information missing to the attacker is the class
definition. Next, we define three typical class definitions for image
manipulation forensic CNNs by grouping individual manipulations or
parameterizations of individual manipulations. We then use attacked images
produced to fool one forensic CNN to attack the other CNNs whose only
difference is the class definition. We define the successful attack rate (SAR)
and the transferability score (T-score) to measure the success and
transferability of adversarial attacks. Through an extensive set of
experiments, we find that adversarial attacks are difficult to transfer to
other class definitions of the same CNN architecture. Moreover, a secondary
finding is that the binary formulation of forensic CNNs (i.e., grouping all
manipulations into one class) is slightly more robust than the other two class
definitions. This has important implications for the future design of forensic
CNNs that are robust to adversarial and anti-forensic attacks.
## Background
We assume that an attacker applies some editing operation to an image and then
launches an adversarial attack in an attempt to bypass detection. The
investigator then uses a forensic CNN to determine whether the presented image
is unaltered.
For a single forensic manipulation identification CNN, there exist different
ways to form class definitions: for instance, a binary decision of unaltered
vs. manipulated, a multi-class definition of unaltered vs. several individual
manipulations, or a multi-class definition of unaltered vs. several
parameterized versions of individual manipulations. Each of these class
definitions includes the “unaltered” class.
### Near-perfect knowledge scenario
Previous research has shown that the attacker’s knowledge of the target
investigator’s algorithm determines how easy and successful an attack can be
[37, 42]. Therefore, depending on the amount of knowledge accessible to
attackers, it is common to distinguish between the perfect knowledge scenario
and partial knowledge scenarios. In the perfect knowledge scenario, attackers
can observe every detail of the investigator’s algorithm or obtain an
identical copy of it. Under this scenario, attackers can directly integrate
the investigator’s CNN into their attack and train the attack to explicitly
bypass detection by the identification CNN. All other scenarios are
categorized as partial knowledge scenarios, in which attackers do not have
full access to the investigator’s CNN. As a result, attackers have to ensure
that their trained attack is capable of fooling CNNs other than the one it was
explicitly trained against. When an attack fails to fool these other CNNs, a
transferability issue occurs. Two common causes of transferability issues are
discrepancies in training data and in CNN architectures [43, 39].
To investigate the transferability of adversarial attacks induced by class
definition, we formulate a special partial knowledge scenario, the
near-perfect knowledge scenario. Under this scenario, the attacker knows every
detail of the investigator’s CNN architecture and uses training data identical
to the investigator’s. The only information missing to the attacker is the
class definition of the target CNN (i.e., the attacker does not know how the
investigator forms the output classes of the forensic identification CNN).
## Investigation procedure
To investigate the impact of class definition on the transferability of
adversarial attacks, we used the following procedure: 1) We categorized three
different class definitions that could be used by forensic CNNs attempting to
identify image editing. 2) We trained six different forensic CNNs to perform
editing detection and recorded their baseline performance under each class
definition. 3) We implemented two popular adversarial attacks and obtained
their successful attack rate (SAR) in the perfect knowledge scenario (without
attempting transfer). 4) We evaluated each attack’s ability to transfer to an
identical CNN whose only difference is the class definition, i.e. the
near-perfect knowledge scenario, and interpreted the results. A detailed
description of our experimental procedure, as well as the metrics used to
evaluate the attacks, is provided below.
### Class definitions
There are several ways to define the classes used by a forensic CNN created to
identify image manipulation. While all class definitions include the
“unaltered” class, other classes may differ depending on whether different
manipulations, as well as different parameterizations of manipulations, are
grouped together into one class. In this work, we consider the following three
CNN class definitions.
Manipulation detection: In this class definition, only two classes are used:
“manipulated” and “unaltered”. Any type of editing is grouped into the
“manipulated” class. This class definition would be used if the investigator
only wants to know whether an image has been modified in any way.
Manipulation classification: In this multi-class case, one class is assigned
to “unaltered” along with one class for each individual editing operation; all
parameterizations of an editing operation are grouped into a single class.
This class definition would be used if the investigator wants to know not only
whether the image has been modified, but also which individual manipulation
was applied to it.
Manipulation parameterization: In this multi-class case, one class is assigned
to “unaltered” and separate classes are assigned to each pair of manipulation
and parameterization (or range of parameterizations). For example, median
filtering with a 3x3 window would be a separate class from median filtering
with a 5x5 window. This class definition could be used if the investigator
wants very detailed information about a possible forger or wants to identify
inconsistencies in editing within an image.
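As a concrete sketch, the three class definitions above can be viewed as label
maps over (manipulation, parameter) pairs. The code below is our own
illustration, not the authors’ implementation; the manipulation names and
parameter lists only loosely follow Table 1.

```python
# Hypothetical label maps for the three class definitions discussed above.
# A patch is either unaltered (manip is None) or edited by one
# (manipulation, parameter) pair.

MANIPULATIONS = {
    "gaussian_blur": [1, 1.5, 2, 2.5, 3],
    "gaussian_noise": [0.5, 1, 1.5, 2, 2.5],
    "median_filter": [3, 5, 7, 9, 11],
}

def detection_label(manip):
    # Manipulation detection: 0 = unaltered, 1 = manipulated (any editing).
    return 0 if manip is None else 1

def classification_label(manip):
    # Manipulation classification: one class per editing operation,
    # all parameterizations grouped together.
    if manip is None:
        return 0
    return 1 + sorted(MANIPULATIONS).index(manip)

def parameterization_label(manip, param=None):
    # Manipulation parameterization: one class per (operation, parameter)
    # pair, i.e. 1 + 15 classes with the lists above.
    if manip is None:
        return 0
    label = 1
    for name in sorted(MANIPULATIONS):
        for p in MANIPULATIONS[name]:
            if name == manip and p == param:
                return label
            label += 1
    raise ValueError("unknown manipulation/parameter")
```

Note how the three maps partition the exact same data differently, which is
precisely the only degree of freedom varied in the near-perfect knowledge
scenario.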
### Image forensic CNNs
In this paper, we examined six well-known CNN architectures: MISLnet [9],
TransferNet [44], PHNet [45], SRNet [46], DenseNet [47] and VGG-19 [48]. While
some of these architectures were initially designed for computer vision or
steganalysis tasks, they can be adapted for image forensic tasks.
For each CNN architecture, we trained forensic CNNs using the above three
class definitions. All CNNs were trained using the same dataset created from
the Dresden Image Database (more detail is provided in the results section).
Furthermore, CNNs with the same architecture were trained using the same
hyperparameters for all class definitions.
### Adversarial attacks
To fool a forensic CNN, images modified by an attack should be classified as
“unaltered” by that (or another) CNN. As a result, the attacks used in our
work operate in a targeted fashion, where the “unaltered” class is always the
attack’s target.
We used two well-known adversarial attacks in our experiments: the iterative
targeted fast gradient sign method (I-FGSM) attack and the generative
adversarial network (GAN) based attack. These two attack methods are very
commonly used in anti-forensics (as well as the broader ML community), and are
described below.
Iterative targeted fast gradient sign method (targeted I-FGSM): This attack
operates by iteratively adding a small perturbation to the original image $I$
to push the adversarial example $I_{adv}$ toward the target class (i.e., the
unaltered class in this context). At each iteration, the gradient is computed
with respect to the attacked image produced at the previous iteration. The
targeted I-FGSM update is
$\displaystyle I_{adv}^{0}$ $\displaystyle=I$ (1) $\displaystyle
I_{adv}^{n+1}$ $\displaystyle=I_{adv}^{n}-\epsilon\times\operatorname{sign}\left(\nabla_{I_{adv}^{n}}J(I_{adv}^{n},y_{unaltered})\right)$ (2)
where $n$ denotes the iteration index, $\epsilon$ a small step size,
$J(\cdot)$ the loss function, and $y_{unaltered}$ the target class label.
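As a rough illustration of Eqs. (1)-(2), the sketch below runs targeted
I-FGSM against a tiny two-class linear classifier, for which the loss and its
gradient can be computed analytically. This is our own toy code, not the
paper’s implementation; in practice $J$ and its gradient come from the
attacked forensic CNN.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 16))  # stand-in "CNN": logits = W @ x
TARGET = 0                        # index of the "unaltered" class

def loss_and_grad(x):
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[TARGET] + 1e-12)      # cross-entropy to the target class
    grad = W.T @ (p - np.eye(2)[TARGET])   # dJ/dx for softmax + cross-entropy
    return loss, grad

def targeted_ifgsm(x, eps=0.01, n_iter=100):
    x_adv = x.copy()                       # Eq. (1): start from the input
    for _ in range(n_iter):
        _, g = loss_and_grad(x_adv)
        x_adv = x_adv - eps * np.sign(g)   # Eq. (2): step toward the target
    return x_adv

x = rng.standard_normal(16)
x_adv = targeted_ifgsm(x)
```

After the attack, the loss toward the target class is lower than for the
original input, which is exactly what pushes the example into the
“unaltered” region of the classifier.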
Generative Adversarial Network (GAN)-based attack: The GAN-based method
operates by training a GAN to obtain a generator, then uses the generator to
produce an image that mimics the statistics of unaltered images.
A traditional GAN [28] is trained using a min-max function,
$\min_{G}\max_{D}\operatorname{\mathbb{E}}_{I\sim p_{r}(I)}[\log
D(I)]+\operatorname{\mathbb{E}}_{I_{adv}\sim
p_{g}(I_{adv})}[\log(1-D(I_{adv}))]$ (3)
where $G$ denotes the generator, $D$ the discriminator, $p_{r}(I)$ the
distribution of unaltered images, $p_{g}(I_{adv})$ the distribution of
adversarial images, and $\operatorname{\mathbb{E}}$ the expectation operator.
We adopted the MISLGAN method, which was initially designed for fooling camera
model identification CNNs [34]. MISLGAN consists of three major components: a
generator, a discriminator, and a pre-trained forensic CNN. While the
generator and the discriminator are trained in the same fashion as a
traditional GAN, the pre-trained forensic CNN is introduced to force the
generator to produce an image that mimics the forensic traces of an
“unaltered” image. To attack the manipulation detection CNNs, we modified
MISLGAN by removing the synthetic CFA module in the generator. Due to the page
limit, we refer the reader to the original paper for details on the
architecture and loss formulation of MISLGAN.
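To give a sense of how the pre-trained forensic CNN enters the training, the
sketch below shows the general shape of a generator objective used by
GAN-based anti-forensic attacks: an adversarial term from the discriminator,
a classification term pushing the forensic CNN toward the “unaltered” class,
and a pixel-fidelity term. This is our own schematic, not the MISLGAN loss;
the weights `alpha`, `beta` and `gamma` are illustrative.

```python
import numpy as np

def generator_loss(d_fake, p_unaltered, img, img_adv,
                   alpha=1.0, beta=1.0, gamma=0.1):
    # d_fake: discriminator's probability that the attacked image is real.
    # p_unaltered: forensic CNN's probability of the "unaltered" class.
    adv_term = -np.log(d_fake + 1e-12)                # fool the discriminator
    cls_term = -np.log(p_unaltered + 1e-12)           # fool the forensic CNN
    fid_term = float(np.mean(np.abs(img - img_adv)))  # stay close to input
    return alpha * adv_term + beta * cls_term + gamma * fid_term
```

The classification term is what distinguishes an anti-forensic GAN from a
plain image-generation GAN: the generator is rewarded not only for realism
but for erasing the manipulation’s forensic traces.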
### Evaluation metrics
We define the successful attack rate and transferability score to evaluate the
performance and transferability of the attack against the classifiers.
Successful attack rate (SAR): To evaluate the performance of
anti-forensically crafted images against forensic CNNs, we calculate the
percentage of adversarial images that are classified as “unaltered” by each
CNN, and we define this percentage as the successful attack rate (SAR). CNNs
with stronger resistance to an anti-forensic attack have lower SARs.
Transferability score (T-Score): To evaluate an attack’s transferability, we
calculate the transferability score as the SAR on the unknown classifier
divided by the SAR on the known classifier. The known classifier is the one
used directly when launching the attack; the unknown classifier is used to
classify the adversarial images created by the attack. As a result, when an
attack transfers well, the transferability score is high; otherwise it is
low. For example, when all adversarial images produced to fool one forensic
CNN also fool other unseen CNNs, the transferability score equals 1. We note
that the transferability score is non-negative and can exceed 1; this happens
when the adversarial attack is more effective on the unknown classifier than
on the known one, typically when the known classifier is more resistant to
the attack.
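The two metrics above can be stated in a few lines; the function names below
are ours.

```python
def sar(predicted_labels, unaltered_label=0):
    """Successful attack rate: fraction of attacked images that the CNN
    classifies as 'unaltered'."""
    return sum(p == unaltered_label for p in predicted_labels) / len(predicted_labels)

def t_score(sar_unknown, sar_known):
    """Transferability score: SAR on the unknown classifier divided by the
    SAR on the known classifier. Can exceed 1 when the attack is more
    effective on the unknown classifier than on the known one."""
    return sar_unknown / sar_known
```

For instance, if an attack fools the known classifier on 52% of images but the
unknown classifier on only 26%, the T-score is 0.5.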
## Experiments
We conducted a series of experiments to evaluate the transferability of
multiple attacks against several forensic CNN architectures. Our database was
created using 84,810 full-size JPEG images taken by 27 camera models from the
Dresden Image Database [49] (the images come from 70 unique devices). We
randomly selected 80% of the images for training, 10% for validation and 10%
for testing. Next, we divided the full images into non-overlapping 256 by 256
image patches within each set. This ensures that patches from the same image
never appear in different sets and thus never share image-level statistics
across sets. To create the manipulated image patches, we selected three
manipulations and five parameters that span a reasonable range for each
manipulation. We then applied each of these to every image patch in the
database and obtained 15 unique sets of manipulated image patches. Along with
the unaltered class, this yields 16 classes corresponding to unaltered vs.
parameterized manipulations (manipulation parameterizer). These images were
also grouped into 4 classes of unaltered vs. individual manipulations
(manipulation classifier), and 2 classes of unaltered vs. manipulated
(manipulation detector). Table 1 shows the chosen manipulations and parameters
used to create the manipulated image classes. Due to computational cost
constraints, we limited ourselves to three manipulations with five
parameterizations each. With 5 parameters per manipulation, we created over
1,000,000 full-sized JPEG images in total, which is on par with current data
sizes for training CNNs.
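The image-level split described above can be sketched as follows: whole
images are assigned to train/validation/test before patch extraction, so that
patches cut from one image can never land in two different sets. This is our
own illustrative code with a toy list of image ids.

```python
import random

def split_by_image(image_ids, seed=0):
    """Shuffle image ids and split 80/10/10 at the image level."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(0.8 * len(ids))
    n_val = int(0.1 * len(ids))
    train = set(ids[:n_train])
    val = set(ids[n_train:n_train + n_val])
    test = set(ids[n_train + n_val:])
    return train, val, test

train, val, test = split_by_image(range(100))
```

Patch extraction then runs inside each set, guaranteeing the three sets are
disjoint at the source-image level.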
Table 1: Editing operations and their associated parameters.
Manipulations | Parameters
---|---
Additive Gaussian white noise | $\mu=0,\sigma=0.5,1,1.5,2,2.5$
Gaussian blurring | $\sigma=1,1.5,2,2.5,3,3.5,4,4.5$
Median filtering | window size$=3,5,7,9,11$
### Baseline performance of forensic CNNs
We started by training forensic CNNs using the six CNN architectures and
three class definitions. Each CNN was trained from scratch using the
stochastic gradient descent optimizer for up to 43 epochs, stopping early if
the validation accuracy started decreasing. The learning rate started at
0.0005 and was halved every 4 epochs. The batch size was 25. On average, we
achieved 99.29% accuracy with the manipulation detector, 98.52% with the
manipulation classifier, and 77.93% with the manipulation parameterizer.
These results are consistent with state-of-the-art performance for
manipulation detection. Table 2 shows the classification accuracies achieved
by the trained forensic CNNs; each entry corresponds to one pairing of CNN
architecture and class definition.
Table 2: Baseline classification accuracies achieved by six CNN architectures
and three class definitions.
CNN Architect. | Manip. Detector | Manip. Classifier | Manip. Parameterizer
---|---|---|---
MISLnet | 99.84% | 99.55% | 86.24%
TransferNet | 99.20% | 98.04% | 65.27%
PHNet | 99.58% | 98.94% | 86.58%
SRNet | 99.16% | 99.36% | 81.30%
DenseNet_BC | 98.13% | 95.66% | 65.50%
VGG-19 | 99.87% | 99.50% | 82.67%
Average | 99.29% | 98.51% | 77.93%
### Launching adversarial attacks
We started by creating the set of images used to evaluate the attacks. From
the testing set, we randomly selected 6,000 manipulated image patches, drawn
equally from the 15 manipulated image classes, to form the attack set. We then
used the two attack methods to attack each image patch in the attack set,
targeting the “unaltered” class. As a result, we obtained 216,000
anti-forensically attacked images.
For targeted I-FGSM, we set $\epsilon$ in Eq. (2) to $0.1$ and used 100
iterations per image. For the GAN-based attack, we first trained a generator
targeting the “unaltered” class of each forensic CNN, then used the trained
generator to attack each image patch in the attack set. To train each
generator, we randomly selected 360,000 manipulated image patches from the
training set, drawn equally from the 15 manipulated image classes. We trained
the GAN-based attack using the parameters in the original MISLGAN paper by
Chen et al. [34].
### Baseline performance of adversarial attacks
In this experiment, we show the performance of the adversarial attacks against
forensic CNNs when the attacks were trained to directly target the
“unaltered” class of each forensic CNN. This corresponds to the scenario in
which the attacker has perfect knowledge of the investigator’s training data
and full CNN (i.e., including the CNN architecture and the class definition).
Table 3: Baseline performance of targeted I-FGSM against forensic CNNs.
| Successful Attack Rate
---|---
CNN Architect. | Manip. Detector | Manip. Classifier | Manip. Parameterizer
MISLnet | 1.00 | 1.00 | 1.00
TransferNet | 0.99 | 1.00 | 1.00
PHNet | 0.87 | 0.96 | 1.00
SRNet | 0.88 | 0.78 | 1.00
DenseNet | 0.63 | 0.98 | 0.91
VGG-19 | 0.85 | 1.00 | 0.98
Average | 0.87 | 0.95 | 0.98
Table 4: Baseline performance of the GAN-based attack against forensic CNNs.
| Successful Attack Rate
---|---
CNN Architect. | Manip. Detector | Manip. Classifier | Manip. Parameterizer
MISLnet | 0.55 | 0.95 | 0.84
TransferNet | 0.81 | 0.84 | 0.98
PHNet | 0.90 | 0.97 | 0.94
SRNet | 0.88 | 0.90 | 0.82
DenseNet | 0.90 | 0.94 | 0.94
VGG-19 | 0.71 | 0.97 | 0.96
Average | 0.79 | 0.93 | 0.91
Table 5: Transferability of the targeted I-FGSM attack, trained on
manipulation detectors and re-targeted at manipulation classifiers and
parameterizers.
| Successful Attack Rate | Transferability Score
---|---|---
CNN Architect. | Manip. Classifier | Manip. Parameterizer | Manip. Classifier | Manip. Parameterizer
MISLnet | 0 | 0 | 0 | 0
TransferNet | 0 | 0 | 0 | 0
PHNet | 0 | 0 | 0 | 0
SRNet | 0 | 0 | 0 | 0
DenseNet | 0 | 0 | 0 | 0
VGG-19 | 0 | 0 | 0 | 0
Average | 0 | 0 | 0 | 0
Table 6: Transferability of the targeted I-FGSM attack, trained on
manipulation classifiers and re-targeted at manipulation detectors and
parameterizers.
| Successful Attack Rate | Transferability Score
---|---|---
CNN Architect. | Manip. Detector | Manip. Parameterizer | Manip. Detector | Manip. Parameterizer
MISLnet | 0 | 0 | 0 | 0
TransferNet | 0 | 0 | 0 | 0
PHNet | 0 | 0 | 0 | 0
SRNet | 0 | 0 | 0 | 0
DenseNet | 0 | 0 | 0 | 0
VGG-19 | 0 | 0 | 0 | 0
Average | 0 | 0 | 0 | 0
Table 7: Transferability of the targeted I-FGSM attack, trained on
manipulation parameterizers and re-targeted at manipulation detectors and
classifiers.
| Successful Attack Rate | Transferability Score
---|---|---
CNN Architect. | Manip. Detector | Manip. Classifier | Manip. Detector | Manip. Classifier
MISLnet | 0 | 0 | 0 | 0
TransferNet | 0 | 0 | 0 | 0
PHNet | 0 | 0 | 0 | 0
SRNet | 0 | 0 | 0 | 0
DenseNet | 0 | 0 | 0 | 0
VGG-19 | 0 | 0 | 0 | 0
Average | 0 | 0 | 0 | 0
Table 8: Transferability of the GAN-based attack, trained on manipulation
detectors and re-targeted at manipulation classifiers and parameterizers.
| Successful Attack Rate | Transferability Score
---|---|---
CNN Architect. | Manip. Classifier | Manip. Parameterizer | Manip. Classifier | Manip. Parameterizer
MISLnet | 0.004 | 0.045 | 0.007 | 0.082
TransferNet | 0.008 | 0.005 | 0.010 | 0.006
PHNet | 0.275 | 0.120 | 0.306 | 0.133
SRNet | 0.420 | 0.000 | 0.477 | 0.000
DenseNet | 0.005 | 0.010 | 0.008 | 0.016
VGG-19 | 0.020 | 0.090 | 0.024 | 0.106
Average | 0.122 | 0.045 | 0.139 | 0.057
Table 9: Transferability of the GAN-based attack, trained on manipulation
classifiers and re-targeted at manipulation detectors and parameterizers.
| Successful Attack Rate | Transferability Score
---|---|---
CNN Architect. | Manip. Detector | Manip. Parameterizer | Manip. Detector | Manip. Parameterizer
MISLnet | 0.090 | 0.035 | 0.095 | 0.037
TransferNet | 0.000 | 0.000 | 0.000 | 0.000
PHNet | 0.000 | 0.055 | 0.000 | 0.057
SRNet | 0.050 | 0.005 | 0.056 | 0.006
DenseNet | 0.000 | 0.000 | 0.000 | 0.000
VGG-19 | 0.525 | 0.260 | 0.541 | 0.268
Average | 0.111 | 0.059 | 0.115 | 0.060
Table 10: Transferability of the GAN-based attack, trained on manipulation
parameterizers and re-targeted at manipulation detectors and classifiers.
| Successful Attack Rate | Transferability Score
---|---|---
CNN Architecture | Manip. Detector | Manip. Classifier | Manip. Detector | Manip. Classifier
MISLnet | 0.365 | 0.035 | 0.435 | 0.042
TransferNet | 0.000 | 0.000 | 0.000 | 0.000
PHNet | 0.065 | 0.490 | 0.069 | 0.521
SRNet | 0.350 | 0.440 | 0.427 | 0.537
DenseNet | 0.535 | 0.135 | 0.588 | 0.148
VGG-19 | 0.235 | 0.185 | 0.240 | 0.189
Average | 0.259 | 0.214 | 0.290 | 0.240
Table 3 and Table 4 show the SARs obtained when fooling forensic CNNs using
the I-FGSM and GAN-based attacks. Each entry is the individual SAR when
targeting a particular pairing of CNN architecture and class definition. On
average, manipulation detectors can be fooled with a 0.87 SAR using I-FGSM and
0.79 using the GAN-based attack; manipulation classifiers with a 0.95 SAR
using I-FGSM and 0.93 using the GAN-based attack; and manipulation
parameterizers with a 0.98 SAR using I-FGSM and 0.91 using the GAN-based
attack. First, we note that under the perfect knowledge scenario, both attacks
fool forensic CNNs with high SARs. Second, for both attacks the SARs for
fooling manipulation detectors are consistently lower than for the other two
class definitions. For example, targeted I-FGSM achieved a 0.63 SAR on the
manipulation detector with the DenseNet architecture, compared to SARs over
0.90 for the other two class definitions; the GAN-based attack achieved a 0.55
SAR on the manipulation detector with the MISLnet architecture, compared to
SARs of at least 0.84 for the other two class definitions. These results
suggest that manipulation detectors are more robust to adversarial attacks
under the perfect knowledge scenario.
### Transferability of adversarial attacks
In this experiment, we evaluated the performance of the adversarial attacks
against forensic CNNs when only the class definition of the target CNN is
changed. For each CNN architecture, we used the forensic CNNs built with the
other class definitions to classify the adversarial images produced by each
attack. For example, if the adversarial images were produced to fool a MISLnet
manipulation detector, we used the MISLnet manipulation classifier and
parameterizer to classify these attacked images.
Tables 5 through 10 show the successful attack rates and transferability
scores achieved by the two adversarial attacks. The left side of each table
shows the SARs for fooling one particular pairing of CNN architecture and
class definition, and the right side shows the T-scores of each class
definition with respect to the class definition the attack was trained on.
Tables 5-7 show that for the targeted I-FGSM attack, both SARs and T-scores
are 0 when re-targeting different class definitions. This means the targeted
I-FGSM attack cannot transfer to other class definitions at all.
For the GAN-based attack, the average SARs are less than 26% and the average
T-scores are less than 0.30. As shown in Tables 8-10, the GAN-based attacks
can transfer slightly when trained on particular pairings of class definition
and CNN architecture. Among the 36 transferability cases we tested, only 4
achieved T-scores above 0.5 and 20 were below 0.1. The highest T-score was
achieved when the GAN-based attack was trained to fool the manipulation
parameterizer using the DenseNet architecture and then re-targeted at the
manipulation detector. Even in that case, the SAR drops by over 40%, although
the class definition was the only factor changed. These results demonstrate
that adversarial attacks cannot transfer well across class definitions, and
that changing the class definition can significantly mitigate the impact of
adversarial attacks.
## Conclusion
In this paper, we investigated the impact of class definitions on the
transferability of adversarial attacks. While previous research has shown that
adversarial attacks cannot transfer across different CNN architectures or
training data, we discovered that adversarial attacks are even less
transferable than previously thought. In particular, by changing only the
class definition of a forensic CNN, we can significantly decrease the
performance of adversarial attacks. This finding holds consistently when using
multiple adversarial attacks against several well-known CNN architectures. In
addition, a secondary finding shows that some class definitions may be more
robust to adversarial attacks than others: in particular, the SARs are lower
when fooling manipulation detectors under the perfect-knowledge scenario.
## Acknowledgment
This material is based upon work supported by the National Science Foundation
under Grant No. 1553610. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.
This material is based on research sponsored by DARPA and Air Force Research
Laboratory (AFRL) under agreement number PGSC-SC-111346-03. The U.S.
Government is authorized to reproduce and distribute reprints for Governmental
purposes notwithstanding any copyright notation thereon. The views and
conclusions contained herein are those of the authors and should not be
interpreted as necessarily representing the official policies or endorsements,
either expressed or implied, of DARPA and Air Force Research Laboratory (AFRL)
or the U.S. Government.
# The canonical ideal and the deformation theory of curves with automorphisms
Aristides Kontogeorgis Department of Mathematics, National and Kapodistrian
University of Athens Panepistimioupolis, 15784 Athens, Greece
<EMAIL_ADDRESS>and Alexios Terezakis Department of Mathematics,
National and Kapodistrian University of Athens
Panepistimioupolis, 15784 Athens, Greece<EMAIL_ADDRESS>
###### Abstract.
The deformation theory of curves is studied by using the canonical ideal. The
deformation problem of curves with automorphisms is reduced to a deformation
problem of linear representations.
###### Key words and phrases:
Automorphisms of Curves, Deformation theory, Petri’s theorem
###### 2020 Mathematics Subject Classification:
14H37,14D15,14H10,13D02
## 1\. Introduction
The deformation theory of curves with automorphisms is an important
generalization of the classical deformation theory of curves. This theory is
related to the lifting problem of curves with automorphisms since one can
consider liftings from characteristic $p>0$ to characteristic zero in terms of
a sequence of local Artin-rings.
J. Bertin and A. Mézard in [4], following Schlessinger's approach [32],
introduced a deformation functor $D_{\mathrm{gl}}$ and studied it in terms of
Grothendieck's equivariant cohomology theory [12]. In Schlessinger's approach
to deformation theory, we want to know the tangent space
$D_{\mathrm{gl}}(k[\epsilon])$ of the deformation functor and the possible
obstructions to lifting a deformation over an Artin local ring $\Gamma$ to a
small extension $\Gamma^{\prime}\rightarrow\Gamma$. The reader who is not
familiar with deformation theory is referred to section 2.1 for terminology
and references to the literature. The tangent space of the global deformation
functor ${D_{\rm gl}}(k[\epsilon])$ can be identified with Grothendieck's
equivariant cohomology group $H^{1}(G,X,\mathscr{T}_{X})$, which is known to
be equal to the invariant space $H^{1}(X,\mathscr{T}_{X})^{G}$. Moreover, a
local-global theorem is known, which can be expressed in terms of the short
exact sequence:
(1)
$0\rightarrow H^{1}\big{(}X/G,\pi_{*}^{G}(\mathscr{T}_{X})\big{)}\rightarrow H^{1}(G,X,\mathscr{T}_{X})\rightarrow H^{0}\big{(}X/G,R^{1}\pi_{*}^{G}(\mathscr{T}_{X})\big{)}\cong\bigoplus_{i=1}^{r}H^{1}\left(G_{x_{i}},\widehat{\mathscr{T}}_{X,x_{i}}\right)\rightarrow 0.$
The lifting obstruction can be seen as an element in
$H^{2}(G,X,\mathscr{T}_{X})\cong\bigoplus_{i=1}^{r}H^{2}\left(G_{x_{i}},\widehat{\mathscr{T}}_{X,x_{i}}\right).$
In the above equations $x_{1},\ldots,x_{r}\in X$ are the ramified points,
$G_{x_{i}}$ are the corresponding isotropy groups and
$\widehat{\mathscr{T}}_{X,x_{i}}$ are the completed local tangent spaces, that
is $\widehat{\mathscr{T}}_{X,x_{i}}=k[[t_{i}]]\frac{d}{dt_{i}}$, where $t_{i}$
is a local uniformizer at $x_{i}$. The space $k[[t_{i}]]\frac{d}{dt_{i}}$ is
seen as $G_{x_{i}}$-module by the adjoint action, see [7, 2.1], [22, 1.5].
Bertin and Mézard reduced the computation of the obstruction to the
infinitesimal lifting problem of representations of the isotropy groups
$G_{x_{i}}$ in the difficult group $\mathrm{Aut}k[[t]]$. In this article, for
a ring $\Gamma$, $\mathrm{Aut}\Gamma[[t]]$ denotes the group of continuous
automorphisms of $\Gamma[[t]]$.
This article aims to give a new approach to the deformation theory of curves
with automorphisms, based not on the deformation theory of representations in
the subtle object $\mathrm{Aut}k[[t]]$, but on the deformation theory of the
better understood general linear group. In order to do so, we will restrict
ourselves to curves that satisfy the mild assumptions of Petri's theorem.
###### Theorem 1 (Petri’s theorem).
For a non-singular non-hyperelliptic curve $X$ of genus $g\geq 3$ defined over
an algebraically closed field with sheaf of differentials $\Omega_{X}$ there
is the following short exact sequence:
$0\rightarrow
I_{X}\rightarrow\mathrm{Sym}H^{0}(X,\Omega_{X})\rightarrow\bigoplus_{n=0}^{\infty}H^{0}(X,\Omega_{X}^{\otimes
n})\rightarrow 0,$
where $I_{X}$ is generated by elements of degree $2$ and $3$. Moreover, if $X$
is neither a non-singular quintic of genus $6$ nor a trigonal curve, then
$I_{X}$ is generated by elements of degree $2$.
For a proof of this theorem we refer to [11], [31]. The ideal $I_{X}$ is
called the canonical ideal and it is the homogeneous ideal of the embedded
curve $X\rightarrow\mathbb{P}^{g-1}$.
For curves that satisfy the assumptions of Petri's theorem and whose canonical
ideal is generated by quadrics, we prove in section 3 the following relative
version of Petri's theorem.
###### Proposition 2.
Let $f_{1},\ldots,f_{r}\in
S:=\mathrm{Sym}H^{0}(X,\Omega_{X})=k[\omega_{1},\ldots,\omega_{g}]$ be
quadratic polynomials which generate the canonical ideal $I_{X}$ of a curve
$X$ defined over an algebraically closed field $k$. Any deformation
$\mathscr{X}_{A}$ is given by quadratic polynomials
$\tilde{f}_{1},\ldots,\tilde{f}_{r}\in\mathrm{Sym}H^{0}(\mathscr{X}_{A},\Omega_{\mathscr{X}_{A}/A})=A[W_{1},\ldots,W_{g}]$,
which reduce to $f_{1},\ldots,f_{r}$ modulo the maximal ideal
$\mathfrak{m}_{A}$ of $A$.
This approach allows us to replace several of Grothendieck's equivariant
cohomology constructions by linear algebra. Let us mention that, in general,
it is not so easy to perform explicit computations with equivariant
Grothendieck cohomology groups; usually spectral sequences or a complicated
equivariant Čech cohomology are used, see [3], [23, sec. 3].
Let $i:X\rightarrow\mathbb{P}^{g-1}$ be the canonical embedding. In
proposition 27 we prove that elements $[f]\in
H^{1}(X,\mathscr{T}_{X})^{G}=D_{\mathrm{gl}}k[\epsilon]$ correspond to
cohomology classes in $H^{1}(G,M_{g}(k)/\langle\mathbb{I}_{g}\rangle)$, where
$M_{g}(k)/\langle\mathbb{I}_{g}\rangle$ is the space of $g\times g$ matrices
with coefficients in $k$, modulo the vector subspace of scalar multiples of
the identity matrix.
Furthermore, in our setting the obstruction to liftings is reduced to an
obstruction to the lifting of the linear canonical representation
(2) $\rho:G\rightarrow\mathrm{GL}\big{(}H^{0}(X,\Omega_{X})\big{)}$
and a compatibility criterion involving the defining quadratic equations of
our canonically embedded curve. Namely, in section 4 we will prove the
following:
###### Theorem 3.
Consider an epimorphism $\Gamma^{\prime}\rightarrow\Gamma\rightarrow 0$ of
local Artin rings. A deformation $x\in D_{\mathrm{gl}}(\Gamma)$ can be lifted
to a deformation $x^{\prime}\in D_{\mathrm{gl}}(\Gamma^{\prime})$ if and only
if the representation $\rho_{\Gamma}:G\rightarrow\mathrm{GL}_{g}(\Gamma)$
lifts to a representation
$\rho_{\Gamma^{\prime}}:G\rightarrow\mathrm{GL}_{g}(\Gamma^{\prime})$ and
moreover there is a lifting $X_{\Gamma^{\prime}}$ of the embedded deformation
of $X_{\Gamma}$ which is invariant under the lifted action of
$\rho_{\Gamma^{\prime}}$.
###### Remark 4.
The liftability of the representation $\rho$ is a strong condition. In
proposition 30 we give an example of a representation
$\rho:G\rightarrow\mathrm{GL}_{2}(k)$, for a field $k$ of positive
characteristic $p$, which cannot be lifted to a representation
$\tilde{\rho}:G\rightarrow\mathrm{GL}_{2}(R)$ for $R=W(k)[\zeta_{p^{h}}]$,
meaning that a lifting in some small extension
$R/\mathfrak{m}_{R}^{i+1}\rightarrow R/\mathfrak{m}_{R}^{i}$ is obstructed.
Here $R$ denotes the Witt ring of $k$ with a primitive $p^{h}$ root of unity
added, which has characteristic zero. In our counterexample $G=C_{q}\rtimes
C_{m}$, $q=p^{h}$, $(m,p)=1$.
###### Remark 5.
One can always pass from the local lifting problem of $\rho:G\rightarrow{\rm
Aut}\Gamma[[t]]$ to a global lifting problem, by considering the Harbater-
Katz-Gabber (HKG for short) compactification $X$ of the local action. Then one
can consider the criterion involving the linear representation
$\rho:G\rightarrow\mathrm{GL}(H^{0}(X,\Omega_{X}))$. Notice that in [26] the
canonical ideal for HKG-curves is explicitly described.
###### Remark 6.
In 4.1 we will use the tools developed in this article to show that certain
automorphisms of the Hermitian curve do not lift in any possible
characteristic. This is expected, since the Hermitian curve is the unique
curve whose automorphism group is of extremal size, see [35].
###### Remark 7.
The invariance of the canonical ideal $I_{X_{\Gamma}}$ under the action of $G$
can be checked using Gaussian elimination and echelon normal forms, see [24,
sec. 2.2].
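A minimal numerical sketch of such an invariance check, assuming the quadrics are stored as symmetric $g\times g$ coefficient matrices and acted on by the $T$-action of definition 11. This is an illustration in Python, not the procedure of [24]; rank comparison over the reals stands in for explicit echelon normal forms.

```python
import numpy as np

def row_space_equal(A, B, tol=1e-9):
    """Two matrices span the same row space iff stacking either onto the
    other does not increase the rank."""
    rank = np.linalg.matrix_rank(A, tol=tol)
    return (rank == np.linalg.matrix_rank(B, tol=tol)
            == np.linalg.matrix_rank(np.vstack([A, B]), tol=tol))

def is_invariant(quadrics, rho_g):
    """quadrics: symmetric g x g coefficient matrices generating the ideal;
    rho_g: the matrix of rho(g).  Checks invariance of their span under the
    T-action  A -> rho(g^{-1})^T A rho(g^{-1})."""
    g_inv = np.linalg.inv(rho_g)
    before = np.array([A.flatten() for A in quadrics])
    after = np.array([(g_inv.T @ A @ g_inv).flatten() for A in quadrics])
    return row_space_equal(before, after)
```

As a toy check, the span of the diagonal quadrics $x_{1}^{2}$ and $x_{2}^{2}$ is invariant under the coordinate swap, while the single quadric $x_{1}^{2}$ is not invariant under a shear.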
###### Remark 8.
The canonical ideal $I_{X_{\Gamma}}$ is determined by $r$ quadratic
polynomials which form a $\Gamma[G]$-invariant $\Gamma$-submodule $V_{\Gamma}$
in the free $\Gamma$-module of symmetric $g\times g$ matrices with entries in
$\Gamma$. When we pass from a deformation $x\in D_{\mathrm{gl}}(\Gamma)$ to a
deformation $x^{\prime}\in D_{\mathrm{gl}}(\Gamma^{\prime})$ we ask that the
canonical ideal $I_{X_{\Gamma^{\prime}}}$ is invariant under the lifted
action, given by the representation
$\rho_{\Gamma^{\prime}}:G\rightarrow\mathrm{GL}_{g}(\Gamma^{\prime})$. In
definition 11.1 we introduce an action $T(g)$ on the vector space of symmetric
$g\times g$ matrices, and the invariance of the canonical ideal is equivalent
to the invariance under the $T$-action of the $\Gamma^{\prime}$-submodule
$V_{\Gamma^{\prime}}$ generated by the quadratic polynomials generating
$I_{X^{\prime}}$. Therefore, we can write one more representation
(3)
$\rho^{(1)}:G\rightarrow\mathrm{GL}\big{(}\mathrm{Tor}_{1}^{S}(k,I_{X})\big{)}.$
Set $r=\binom{g-2}{2}$. Liftings of the representations $\rho,\rho^{(1)}$
defined in eq. (2), (3) in $\mathrm{GL}_{g}(\Gamma)$ resp.
$\mathrm{GL}_{r}(\Gamma)$ will be denoted by $\rho_{\Gamma}$ resp.
$\rho^{(1)}_{\Gamma}$.
Notice that if the representation $\rho_{\Gamma}$ lifts to a representation
$\rho_{\Gamma^{\prime}}$ and moreover there is a lifting $X_{\Gamma^{\prime}}$
of the relative curve $X_{\Gamma}$ so that $X_{\Gamma^{\prime}}$ has an ideal
$I_{X_{\Gamma^{\prime}}}$ which is $\rho_{\Gamma^{\prime}}$ invariant, then
the representation $\rho^{(1)}_{\Gamma}$ also lifts to a representation
$\rho^{(1)}_{\Gamma^{\prime}}$, see also [24, prop. 5].
The deformation theory of linear representations $\rho,\rho^{(1)}$ gives rise
to cocycles $D_{\sigma}$, $D^{(1)}_{\sigma^{-1}}$ in $H^{1}(G,M_{g}(k))$,
$H^{1}(G,M_{\binom{g-2}{2}}(k))$, while the deformation theory of curves with
automorphisms introduces a cocycle $B_{\sigma}[f]$ corresponding to $[f]\in
H^{1}(X,\mathscr{T}_{X})^{G}$. We will introduce a compatibility condition in
section 4.2 among these cocycles, using the isomorphism
$\displaystyle\psi:M_{g}(k)/\langle\mathbb{I}_{g}\rangle$
$\displaystyle\stackrel{{\scriptstyle\cong}}{{\longrightarrow}}H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})\hookrightarrow\mathrm{Hom}_{S}(I_{X},S/I_{X})=H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})$
$\displaystyle B$ $\displaystyle\longmapsto\psi_{B}$
defined in proposition 24.
###### Proposition 9.
The following compatibility condition is satisfied
(4) $\psi_{D_{\sigma}}-\psi_{B_{\sigma}[f]}=D_{\sigma^{-1}}^{(1)}.$
The structure of the article is as follows. In section 2.2 we will present
together the deformation theory of linear representations
$\rho:G\rightarrow\mathrm{GL}(V)$ and the deformation theory of
representations of the form $\rho:G\rightarrow\mathrm{Aut}k[[t]]$. The
deformation theory of linear representations is a better-understood object of
study, see [28], which played an important role in topology [20] and also in
the proof of Fermat’s last theorem, see [29]. The deformation theory of
representations in $\mathrm{Aut}k[[t]]$ arises from the study of local
fields, and it is related to the deformation problem of curves with
automorphisms through the local-global theory of Bertin and Mézard. There is
also an increased interest in the study of Nottingham groups and
$\mathrm{Aut}k[[t]]$, see [5], [9], [25].
It seems that the similarities between these two deformation problems are
known to experts, see for example [30, prop. 3.13]. For the convenience of
the reader and in order to fix the notation, we also give a detailed
explanation and comparison of these two deformation problems.
In section 3 we review the theory of relative canonical ideals and the work
of the first author together with H. Charalambous and K. Karagiannis [6],
aimed at the deformation problem of curves with automorphisms. More precisely,
a relative version of Petri's theorem is proved, which implies that the
relative canonical ideal is generated by quadratic polynomials.
In section 4 we study both the obstruction and the tangent space problem of
the deformation theory of curves with automorphisms using the relative
canonical ideal point of view. In this section theorem 3 is proved.
Acknowledgement. Alexios Terezakis is financially supported by the Tsakyrakis
scholarship of the National and Kapodistrian University of Athens.
## 2\. Deformation theory of curves with automorphisms
### 2.1. Global deformation functor
Let $\Lambda$ be a complete local Noetherian ring with residue field $k$,
where $k$ is an algebraically closed field of characteristic $p\geq 0$. Let
$\mathscr{C}$ be the category of local Artin $\Lambda$-algebras with residue
field $k$ and homomorphisms the local $\Lambda$-algebra homomorphisms
$\phi:\Gamma^{\prime}\rightarrow\Gamma$ between them, that is
$\phi^{-1}(\mathfrak{m}_{\Gamma})=\mathfrak{m}_{\Gamma^{\prime}}$. The
deformation functor of curves with automorphisms is a functor ${D_{\rm gl}}$
from the category $\mathscr{C}$ to the category of sets
${D_{\rm gl}}:\mathscr{C}\rightarrow\mathrm{Sets},\qquad\Gamma\mapsto\left\\{\text{equivalence classes of deformations of couples }(X,G)\text{ over }\Gamma\right\\}$
defined as follows. For a subgroup $G$ of the group ${\rm Aut}(X)$, a
deformation of the couple $(X,G)$ over the local Artin ring $\Gamma$ is a
proper, smooth family of curves
$X_{\Gamma}\rightarrow\rm Spec(\Gamma)$
parametrized by the base scheme $\rm Spec(\Gamma)$, together with a group
homomorphism $G\rightarrow{\rm Aut}_{\Gamma}(X_{\Gamma})$, such that there is
a $G$-equivariant isomorphism $\phi$ from the fibre over the closed point of
$\Gamma$ to the original curve $X$:
$\phi:X_{\Gamma}\otimes_{\rm Spec(\Gamma)}\rm Spec(k)\rightarrow X.$
Two deformations $X_{\Gamma}^{1},X_{\Gamma}^{2}$ are considered to be
equivalent if there is a $G$-equivariant isomorphism
$\psi:X_{\Gamma}^{1}\rightarrow X_{\Gamma}^{2}$ over $\rm Spec\Gamma$ which
reduces to the identity on the special fibre.
Given a small extension of Artin local rings
(5) $0\rightarrow E\cdot
k\rightarrow\Gamma^{\prime}\rightarrow\Gamma\rightarrow 0$
and an element $x\in D_{\mathrm{gl}}(\Gamma)$, the set of lifts
$x^{\prime}\in D_{\mathrm{gl}}(\Gamma^{\prime})$ extending $x$ is a principal
homogeneous space under the action of $D_{\mathrm{gl}}(k[\epsilon])$, and such
an extension $x^{\prime}$ exists if a certain obstruction vanishes. It is well
known, see section 2.2, that the deformation functors of representations
exhibit a similar behavior.
### 2.2. Lifting of representations
Let $\mathscr{G}:\mathscr{C}\rightarrow\mathrm{Groups}$ be a group functor,
see [8, ch. 2]. In this article, we will mainly be interested in two group
functors. The first one, $\mathrm{GL}_{g}$, is represented by the affine
$\Lambda$-algebra $G_{g}=\Lambda[x_{11},\ldots,x_{gg},\det(x_{ij})^{-1}]$,
that is, $\mathrm{GL}_{g}(\Gamma)=\mathrm{Hom}_{\Lambda}(G_{g},\Gamma)$. The
second one
is the group functor from the category of rings to the category of groups
$\mathscr{N}:\Gamma\mapsto\mathrm{Aut}\Gamma[[t]]$.
We also assume that each group $\mathscr{G}(\Gamma)$ is embedded in the group
of units of some ring $\mathscr{R}(\Gamma)$ depending functorially on
$\Gamma$. This condition is required since our argument needs to be able to
add certain group elements. We also assume that the additive group of the ring
$\mathscr{R}(\Gamma)$ has the structure of direct product $\Gamma^{I}$, while
$\mathscr{R}(\Gamma)=\mathscr{R}(\Lambda)\otimes_{\Lambda}\Gamma$. Notice,
that $I$ might be an infinite set, but since all rings involved are Noetherian
$\Gamma^{I}$ is flat, see [27, 4F].
A representation of the finite group $G$ in $\mathscr{G}(\Gamma)$ is a group
homomorphism
$\rho:G\rightarrow\mathscr{G}(\Gamma),$
where $\Gamma$ is a commutative ring.
###### Remark 10.
Consider two sets $X,Y$ acted on by the group $G$. Then every function
$f:X\rightarrow Y$ is acted on by $G$, by defining ${}^{\sigma}f:X\rightarrow
Y$, sending $x\mapsto\sigma f\sigma^{-1}(x)$. This construction will be used
throughout this article.
More precisely, we will use the following actions:
###### Definition 11.
1. (1)
Let $M_{g}(\Gamma)$ denote the set of $g\times g$ matrices with entries in
ring $\Gamma$. An element $A\in M_{g}(\Gamma)$ will be acted on by $g\in G$ in
terms of the action
$T(g)A=\rho(g^{-1})^{t}A\rho(g^{-1}).$
This is the natural action coming from the action of $G$ on
$H^{0}(X,\Omega_{X/k})$ and on the quadratic forms $\omega^{t}A\omega$. We
raise the group element to the power $-1$ in order to have a left action,
that is,
$T(gh)A=T(g)T(h)A$. Notice also that $T(g)$ restricts to an action on the
space $\mathscr{S}_{g}(\Gamma)$ of symmetric $g\times g$ matrices with entries
in $\Gamma$.
2. (2)
The adjoint action on elements $A\in M_{g}(\Gamma)$ comes from the action on
the tangent space of the general linear group:
$\mathrm{Ad}(g)A=\rho(g)A\rho(g^{-1}).$
3. (3)
Actions on elements which can be seen as functions between $G$-spaces, as in
remark 10. This action will be denoted by $f\mapsto{}^{\sigma}\\!f$.
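As a quick sanity check (not part of the paper's argument), the left-action identity $T(gh)A=T(g)T(h)A$ can be verified numerically. The matrices below are arbitrary stand-ins for $\rho(g)$ and $\rho(h)$; the only fact used is that $\rho$ is a homomorphism, so $\rho((gh)^{-1})=\rho(h)^{-1}\rho(g)^{-1}$.

```python
import numpy as np

def T(rho_g, A):
    """T-action of definition 11: A -> rho(g^{-1})^T A rho(g^{-1})."""
    g_inv = np.linalg.inv(rho_g)
    return g_inv.T @ A @ g_inv

rng = np.random.default_rng(0)
# Stand-ins for rho(g), rho(h); the shift by 3*I keeps them invertible.
rho_g = rng.normal(size=(3, 3)) + 3 * np.eye(3)
rho_h = rng.normal(size=(3, 3)) + 3 * np.eye(3)
A = rng.normal(size=(3, 3))
A = A + A.T  # a symmetric matrix, as for quadratic forms

lhs = T(rho_g @ rho_h, A)      # T(gh)A, since rho(gh) = rho(g)rho(h)
rhs = T(rho_g, T(rho_h, A))    # T(g)T(h)A
assert np.allclose(lhs, rhs)   # left-action property
assert np.allclose(lhs, lhs.T) # T(g) preserves symmetry
```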
Examples
1. Consider the group $\mathrm{GL}_{g}(\Gamma)$ consisting of all invertible $g\times g$ matrices with coefficients in $\Gamma$. The group functor
$\Gamma\mapsto\mathrm{GL}_{g}(\Gamma)=\mathrm{Hom}(R,\Gamma),$
is representable by the affine $\Lambda$-algebra
$R=k[x_{11},\ldots,x_{gg},\det\big{(}(x_{ij})\big{)}^{-1}]$, see [36, 2.5]. In
this case the ring $\mathscr{R}(\Gamma)$ is equal to
$\mathrm{End}(\Gamma^{g})$, while $I=\\{(i,j)\in\mathbb{N}^{2}:1\leq i,j\leq g\\}$.
We can consider the subfunctor $\mathrm{GL}_{g,\mathbb{Id}_{g}}$ consisting of
all elements $f\in\mathrm{GL}_{g}(\Gamma)$, which reduce to the identity
modulo the maximal ideal $\mathfrak{m}_{\Gamma}$. The tangent space
$T_{\mathbb{I}_{g}}\mathrm{GL}_{g}$ of $\mathrm{GL}_{g}$ at the identity
element $\mathbb{I}_{g}$ is the space $\mathrm{Hom}(\mathrm{Spec}\,k[\epsilon],\mathrm{Spec}\,R)$,
or equivalently the set $\mathrm{GL}_{g,\mathbb{Id}_{g}}(k[\epsilon])$
consisting of the $f\in\mathrm{Hom}(R,k[\epsilon])$ such that
$f\equiv\mathbb{I}_{g}{\;\rm mod}\langle\epsilon\rangle$. This set is a vector
space according to the functorial construction given in [29, p. 272] and can
be identified with $\mathrm{End}(k^{g})=M_{g}(k)$ by the identification
$\mathrm{Hom}(R,k[\epsilon])\ni f\mapsto\mathbb{I}_{g}+\epsilon M,\qquad M\in
M_{g}(k).$
The latter space is usually considered as the tangent space of the algebraic
group $\mathrm{GL}_{g}(k)$ at the identity element, or equivalently as the Lie
algebra corresponding to $\mathrm{GL}_{g}(k)$.
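This identification can be checked over the dual numbers: since $\det(\mathbb{I}_{g}+\epsilon M)=1+\epsilon\,\mathrm{tr}(M)$ is a unit in $k[\epsilon]$, every $M\in M_{g}(k)$ indeed defines a $k[\epsilon]$-point of $\mathrm{GL}_{g}$ reducing to the identity. A small Python sketch (the matrix $M$ below is an arbitrary choice):

```python
from fractions import Fraction as F
from itertools import permutations

class Dual:
    """a + b*eps with eps**2 = 0, i.e. an element of k[eps]."""
    def __init__(self, a, b=0):
        self.a, self.b = F(a), F(b)
    def __add__(self, o): return Dual(self.a + o.a, self.b + o.b)
    def __mul__(self, o): return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    def __eq__(self, o): return (self.a, self.b) == (o.a, o.b)

def det(mat):
    """Leibniz expansion of the determinant; fine for small g."""
    g = len(mat)
    total = Dual(0)
    for p in permutations(range(g)):
        sign = 1
        for i in range(g):            # count inversions to get the sign
            for j in range(i + 1, g):
                if p[i] > p[j]:
                    sign = -sign
        term = Dual(sign)
        for i in range(g):
            term = term * mat[i][p[i]]
        total = total + term
    return total

M = [[2, 3, 1], [0, -1, 4], [5, 2, 7]]   # an arbitrary element of M_3(k)
IeM = [[Dual(1 if i == j else 0, M[i][j]) for j in range(3)] for i in range(3)]
trace = M[0][0] + M[1][1] + M[2][2]
assert det(IeM) == Dual(1, trace)        # det(I + eps*M) = 1 + eps*tr(M), a unit
print("det(I + eps*M) =", det(IeM).a, "+ eps *", det(IeM).b)
```

The $\epsilon$-part of the determinant is exactly the trace, the familiar statement that the derivative of $\det$ at the identity is $\mathrm{tr}$.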
The representation $\rho:G\rightarrow\mathrm{GL}_{g}(\Gamma)$ equips the space
$T_{\mathbb{I}_{g}}\mathrm{GL}_{g}=M_{g}(k)$ with the adjoint action, which is
the action described in remark 10, when the endomorphism $M$ is seen as an
operator $V\rightarrow V$, where $V$ is a $G$-module in terms of the
representation $\rho$:
$\displaystyle G\times M_{g}(k)$ $\displaystyle\longrightarrow M_{g}(k)$
$\displaystyle(g,M)$ $\displaystyle\longmapsto\mathrm{Ad}(g)(M)=\rho(g)M\rho(g)^{-1}.$
In order to make clear the relation with the local case below, where the main
object of study is the automorphism group of a complete local ring, we might
consider the completion $\hat{R}_{\mathbb{I}}$ of the localization of
$R=k[x_{11},\ldots,x_{gg},\det\big{(}(x_{ij})\big{)}^{-1}]$ at the identity
element. We can now form the group ${\rm Aut}\hat{R}_{\mathbb{I}}$ of
automorphisms of the ring $\hat{R}_{\mathbb{I}}$ which reduce to the identity
modulo $\mathfrak{m}_{\hat{R}_{\mathbb{I}}}$. The latter automorphism group is
huge, but it certainly contains the group $G$ acting on $\hat{R}_{\mathbb{I}}$
in terms of the adjoint representation. Elements $\sigma\in{\rm
Aut}\hat{R}_{\mathbb{I}}\otimes k[\epsilon]$ are of the form
$\sigma(x_{ij})=x_{ij}+\epsilon\beta(x_{ij}),\text{ where
}\beta(x_{ij})\in\hat{R}_{\mathbb{I}}.$
Moreover, the relation
$\sigma(f\cdot
g)=fg+\epsilon\beta(fg)=(f+\epsilon\beta(f))(g+\epsilon\beta(g)),$
implies that the map $\beta$ is a derivation and
$\beta(fg)=f\beta(g)+\beta(f)g.$
Therefore, $\beta$ is a linear combination of $\frac{\partial}{\partial
x_{ij}}$, with coefficients in $\hat{R}_{\mathbb{I}}$, that is
$\beta=\sum_{1\leq i,j\leq g}a_{i,j}\frac{\partial}{\partial x_{ij}}.$
###### Remark 12.
In the literature on Lie groups and algebras, the matrix notation $M_{g}(k)$
is frequently used for the Lie algebra, i.e. the tangent space at the
identity, instead of the latter vector field/differential operator
description, while in the next example the differential operator notation for
the tangent space is the usual one.
2. Consider now the group functor $\Gamma\mapsto\mathscr{N}(\Gamma)={\rm Aut}\Gamma[[t]]$. An element $\sigma\in{\rm Aut}\Gamma[[t]]$ is fully described by its action on $t$, which can be expressed as an element in $\Gamma[[t]]$. When $\Gamma$ is an Artin local algebra then an automorphism is given by
$\sigma(t)=\sum_{\nu=0}^{\infty}a_{\nu}t^{\nu},\text{ where
}a_{i}\in\Gamma,a_{0}\in\mathfrak{m}_{\Gamma}\text{ and }a_{1}\text{ is a unit
in }\Gamma.$
If $a_{1}$ is not a unit in $\Gamma$, or if $a_{0}\not\in\mathfrak{m}_{\Gamma}$,
then $\sigma$ is merely an endomorphism of $\Gamma[[t]]$. In this way ${\rm
Aut}\Gamma[[t]]$ can be seen as the group of invertible elements in
$\mathrm{End}\,\Gamma[[t]]=\mathscr{R}(\Gamma)$, where an endomorphism is
identified with its value on $t$, an element of $\Gamma[[t]]$. In this case the
set $I$ is equal to the set of natural numbers, and $\Gamma^{I}$ can be
identified with the set of coefficient sequences of power series.
$\displaystyle{\rm Aut}(k[\epsilon][[t]])$
$\displaystyle=\left\{t\mapsto\sigma(t)=\sum_{\nu=1}^{\infty}a_{\nu}t^{\nu}:a_{\nu}=\alpha_{\nu}+\epsilon\beta_{\nu},\
\alpha_{\nu},\beta_{\nu}\in k,\alpha_{1}\neq 0\right\}$
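The role of the condition that $a_{1}$ be a unit can be illustrated computationally: working in the truncation $k[t]/(t^{N})$ with $k=\mathbb{Q}$, Newton iteration produces a compositional inverse whose leading coefficient is $1/a_{1}$, so invertibility of $a_{1}$ is exactly what is needed. A Python sketch (the truncation order and the test series are arbitrary choices, not taken from the text):

```python
from fractions import Fraction as F

N = 8  # truncation order: we compute in k[t]/(t^N)

def trim(f): return (list(f) + [F(0)] * N)[:N]
def add(f, g): return [x + y for x, y in zip(trim(f), trim(g))]
def sub(f, g): return [x - y for x, y in zip(trim(f), trim(g))]
def mul(f, g):
    f, g, h = trim(f), trim(g), [F(0)] * N
    for i in range(N):
        for j in range(N - i):
            h[i + j] += f[i] * g[j]
    return h
def compose(f, g):              # f(g(t)); needs g(0) = 0
    f, g = trim(f), trim(g)
    assert g[0] == 0
    res = [F(0)] * N
    for c in reversed(f):       # Horner scheme on series
        res = add(mul(res, g), [c])
    return res
def deriv(f):
    f = trim(f)
    return trim([F(i) * f[i] for i in range(1, N)])
def mul_inverse(f):             # 1/f(t); needs f(0) to be a unit
    f = trim(f)
    assert f[0] != 0
    b = [F(1) / f[0]] + [F(0)] * (N - 1)
    for n in range(1, N):
        b[n] = -sum(f[k] * b[n - k] for k in range(1, n + 1)) / f[0]
    return b
def comp_inverse(g):            # h with g(h(t)) = t; needs g(0) = 0, g'(0) a unit
    g = trim(g)
    assert g[0] == 0 and g[1] != 0
    h = [F(0), F(1) / g[1]] + [F(0)] * (N - 2)
    for _ in range(5):          # Newton: doubles the t-adic precision each step
        h = sub(h, mul(sub(compose(g, h), [F(0), F(1)]),
                       mul_inverse(compose(deriv(g), h))))
    return h

t = [F(0), F(1)]
g = [F(0), F(1), F(1), F(5)]    # sigma(t) = t + t^2 + 5 t^3: here a_1 = 1 is a unit
h = comp_inverse(g)
assert compose(g, h) == trim(t) and compose(h, g) == trim(t)
print("sigma^{-1}(t) coefficients:", h[:4])   # begins t - t^2 - 3 t^3 + ...
```

The first Newton step already shows why $a_{1}$ must be invertible: the initial approximation is $t/a_{1}$.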
Exactly as we did in the general linear group case, let us consider the
subfunctor $\Gamma\mapsto\mathscr{N}_{\mathbb{I}}(\Gamma)$, where
$\mathscr{N}_{\mathbb{I}}(\Gamma)$ consists of all elements in ${\rm
Aut}\Gamma[[t]]$ which reduce to the identity mod $\mathfrak{m}_{\Gamma}$.
Such an element $\sigma\in\mathscr{N}_{\mathbb{I}}(k[\epsilon])$ transforms
$f\in k[[t]]$ into a formal power series of the form
$\sigma(f)=f+\epsilon F_{\sigma}(f),$
where $F_{\sigma}(f)$ is fully determined by the value of $\sigma(t)$. The
multiplication condition $\sigma(f_{1}f_{2})=\sigma(f_{1})\sigma(f_{2})$
implies that
$F_{\sigma}(f_{1}f_{2})=f_{1}F_{\sigma}(f_{2})+F_{\sigma}(f_{1})f_{2},$
that is $F_{\sigma}$ is a $k[[t]]$-derivation, hence an element in
$k[[t]]\frac{d}{dt}$.
The local tangent space of $\Gamma[[t]]$ is defined to be the space of
differential operators $f(t)\frac{d}{dt}$, see [4], [7], [22]. The $G$-action
on the element $\frac{d}{dt}$ is the adjoint action, defined by composition of
operators, and is again compatible with the action given in remark 10:
$\rho(\sigma)\circ\frac{d}{dt}\circ\rho(\sigma^{-1}):\qquad t\;\longmapsto\;\rho(\sigma^{-1})(t)\;\longmapsto\;\frac{d\rho(\sigma^{-1})(t)}{dt}\;\longmapsto\;\rho(\sigma)\left(\frac{d\rho(\sigma^{-1})(t)}{dt}\right).$
So the $G$-action on the local tangent space $k[[t]]\frac{d}{dt}$ is given by
$f(t)\frac{d}{dt}\longmapsto\mathrm{Ad}(\sigma)\left(f(t)\frac{d}{dt}\right)=\rho(\sigma)(f(t))\cdot\rho(\sigma)\left(\frac{d\rho(\sigma^{-1})(t)}{dt}\right)\frac{d}{dt},$
see also [22, lemma 1.10], for a special case.
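To make the adjoint formula concrete, one may compute in a truncation. The following is an illustrative Python sketch, adopting the convention that $\rho(\sigma)$ acts by substitution $f\mapsto f\circ\sigma$ (a choice of convention, not fixed by the text above). For $\sigma(t)=t/(1-t)$ one finds $\mathrm{Ad}(\sigma)\left(\frac{d}{dt}\right)=(\sigma^{-1})'(\sigma(t))\,\frac{d}{dt}=(1-t)^{2}\frac{d}{dt}$:

```python
from fractions import Fraction as F

N = 8  # truncation order: we work in k[t]/(t^N)

def trim(f): return (list(f) + [F(0)] * N)[:N]
def add(f, g): return [x + y for x, y in zip(trim(f), trim(g))]
def mul(f, g):
    f, g, h = trim(f), trim(g), [F(0)] * N
    for i in range(N):
        for j in range(N - i):
            h[i + j] += f[i] * g[j]
    return h
def compose(f, g):              # f(g(t)); needs g(0) = 0
    f, g = trim(f), trim(g)
    assert g[0] == 0
    res = [F(0)] * N
    for c in reversed(f):
        res = add(mul(res, g), [c])
    return res
def deriv(f):
    f = trim(f)
    return trim([F(i) * f[i] for i in range(1, N)])

sigma     = trim([F(0)] + [F(1)] * (N - 1))                  # t/(1-t) = t + t^2 + ...
sigma_inv = trim([F(0)] + [F((-1) ** (k - 1)) for k in range(1, N)])  # t/(1+t)
assert compose(sigma, sigma_inv) == trim([F(0), F(1)])       # mutually inverse

g = trim([F(0), F(1), F(2), F(3), F(4), F(5)])               # an arbitrary test series

# Ad(sigma)(d/dt) applied to g, as the operator rho(sigma) . d/dt . rho(sigma^{-1}),
# where rho(sigma) acts by substitution f |-> f o sigma:
lhs = compose(deriv(compose(g, sigma_inv)), sigma)

# The closed formula: ((sigma^{-1})' o sigma)(t) * g'(t); here the factor is (1-t)^2.
factor = compose(deriv(sigma_inv), sigma)
assert factor[:3] == [F(1), F(-2), F(1)]                     # = 1 - 2t + t^2
rhs = mul(factor, deriv(g))

assert lhs[:N - 2] == rhs[:N - 2]   # agree up to the truncation's accuracy
print("Ad(sigma)(d/dt) = (1-t)^2 d/dt, verified on a test series")
```

The computation is just the chain rule: differentiating $g\circ\sigma^{-1}$ and substituting $\sigma$ back leaves $g'$ multiplied by $(\sigma^{-1})'\circ\sigma$.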
| $\mathscr{G}(\Gamma)$ | $\mathscr{R}(\Gamma)$ | tangent space | action |
|---|---|---|---|
| $\mathrm{GL}_{g}(\Gamma)$ | $\mathrm{End}(\Gamma^{g})$ | $\mathrm{End}(k^{g})=M_{g}(k)$ | $M\mapsto\mathrm{Ad}(\sigma)(M)$ |
| ${\rm Aut}\Gamma[[t]]$ | $\mathrm{End}(\Gamma[[t]])$ | $k[[t]]\frac{d}{dt}$ | $f(t)\frac{d}{dt}\mapsto\mathrm{Ad}(\sigma)\left(f(t)\frac{d}{dt}\right)$ |

Table 1. Comparing the two group functors
Motivated by the above two examples we can define
###### Definition 13.
Let $\mathscr{G}_{\mathbb{I}}$ be the subfunctor of $\mathscr{G}$, defined by
$\mathscr{G}_{\mathbb{I}}(\Gamma)=\{f\in\mathscr{G}(\Gamma):f\equiv\mathbb{I}{\;\rm
mod}\,\mathfrak{m}_{\Gamma}\}.$
The tangent space to the functor $\mathscr{G}$ at the identity element is
defined as $\mathscr{G}_{\mathbb{I}}(k[\epsilon])$, see [29]. Notice that
$\mathscr{G}_{\mathbb{I}}(k[\epsilon])\cong\mathscr{R}(k)$ is a $k$-vector
space, acted on in terms of the adjoint representation, given by
$\displaystyle G\times\mathscr{G}_{\mathbb{I}}(\Gamma)$
$\displaystyle\longrightarrow\mathscr{G}_{\mathbb{I}}(\Gamma)$
$\displaystyle(\sigma,f)$ $\displaystyle\longmapsto\rho(\sigma)\cdot
f\cdot\rho(\sigma)^{-1}.$
If $\mathscr{R}(\Gamma)$ can be interpreted as an endomorphism ring, then the
above action can be interpreted in terms of the action on functions as
described in remark 10.
We will define the tangent space in our setting as
$\mathscr{T}=\mathscr{R}(k)$, which is equipped with the adjoint action.
### 2.3. Deforming representations
We can now define the deformation functor $F_{\rho}$ from $\mathscr{C}$ to the
category of sets: for any local Artin algebra $\Gamma$ with maximal ideal
$\mathfrak{m}_{\Gamma}$ in $\mathscr{C}$ we set
(6)
$F_{\rho}:\Gamma\in\mathrm{Ob}(\mathscr{C})\mapsto\left\{\begin{array}{l}\text{liftings of }\rho:G\rightarrow\mathscr{G}(k)\text{ to }\rho_{\Gamma}:G\rightarrow\mathscr{G}(\Gamma)\\ \text{modulo conjugation by an element of }\ker(\mathscr{G}(\Gamma)\rightarrow\mathscr{G}(k))\end{array}\right\}$
Let
(7)
$0\longrightarrow\langle E\rangle=E\cdot\Gamma^{\prime}=E\cdot k\stackrel{{\phi^{\prime}}}{{\longrightarrow}}\Gamma^{\prime}\underset{i}{\overset{\phi}{\rightleftarrows}}\Gamma\longrightarrow 0$
be a small extension in $\mathscr{C}$, that is, the kernel of the natural
surjection $\phi$ is a principal ideal generated by $E$, with
$E\cdot\mathfrak{m}_{\Gamma^{\prime}}=0$. In the above sequence
$i:\Gamma\rightarrow\Gamma^{\prime}$ denotes a section, which is not necessarily a
homomorphism. Since the kernel of $\phi$ is a principal ideal
$E\cdot\Gamma^{\prime}$ annihilated by $\mathfrak{m}_{\Gamma^{\prime}}$, it is
naturally a $k=\Gamma^{\prime}/\mathfrak{m}_{\Gamma^{\prime}}$-vector space,
which is one dimensional.
###### Lemma 14.
For a small extension as given in eq. (7) consider two liftings
$\rho^{1}_{\Gamma^{\prime}},\rho^{2}_{\Gamma^{\prime}}$ of the representation
$\rho_{\Gamma}$. The map
$\displaystyle d:G$ $\displaystyle\longrightarrow\mathscr{T}:=\mathscr{R}(k)$
$\displaystyle\sigma$ $\displaystyle\longmapsto
d(\sigma)=\frac{\rho^{1}_{\Gamma^{\prime}}(\sigma)\rho^{2}_{\Gamma^{\prime}}(\sigma)^{-1}-\mathbb{I}_{\Gamma^{\prime}}}{E}$
is a cocycle.
###### Proof.
We begin by observing that
$\phi\left(\rho^{1}_{\Gamma^{\prime}}(\sigma)\rho^{2}_{\Gamma^{\prime}}(\sigma)^{-1}-\mathbb{I}_{\Gamma^{\prime}}\right)=0,$
hence
$\rho^{1}_{\Gamma^{\prime}}(\sigma)\rho^{2}_{\Gamma^{\prime}}(\sigma)^{-1}=\mathbb{I}_{\Gamma^{\prime}}+E\cdot
d(\sigma),\text{ where }d(\sigma)\in\mathscr{T}.$
Also, we compute that
$\displaystyle\mathbb{I}_{\Gamma^{\prime}}+E\cdot d(\sigma\tau)$
$\displaystyle=\rho^{1}_{\Gamma^{\prime}}(\sigma\tau)\rho^{2}_{\Gamma^{\prime}}(\sigma\tau)^{-1}$
$\displaystyle=\rho^{1}_{\Gamma^{\prime}}(\sigma)\rho^{1}_{\Gamma^{\prime}}(\tau)\rho^{2}_{\Gamma^{\prime}}(\tau)^{-1}\rho^{2}_{\Gamma^{\prime}}(\sigma)^{-1}$
$\displaystyle=\rho^{1}_{\Gamma^{\prime}}(\sigma)\big{(}\mathbb{I}_{\Gamma^{\prime}}+Ed(\tau)\big{)}\rho^{2}_{\Gamma^{\prime}}(\sigma)^{-1}$
$\displaystyle=\rho^{1}_{\Gamma^{\prime}}(\sigma)\rho^{2}_{\Gamma^{\prime}}(\sigma)^{-1}+E\cdot\rho^{1}_{\Gamma^{\prime}}(\sigma)d(\tau)\rho^{2}_{\Gamma^{\prime}}(\sigma)^{-1}$
$\displaystyle=\mathbb{I}_{\Gamma^{\prime}}+E\cdot d(\sigma)+E\cdot\rho_{k}(\sigma)d(\tau)\rho_{k}(\sigma)^{-1},$
since $E$ annihilates $\mathfrak{m}_{\Gamma^{\prime}}$, so the values of both
$\rho^{1}_{\Gamma^{\prime}}(\sigma)$ and $\rho^{2}_{\Gamma^{\prime}}(\sigma)$,
when multiplied by $E$, are reduced modulo the maximal ideal
$\mathfrak{m}_{\Gamma^{\prime}}$. We therefore conclude that
$d(\sigma\tau)=d(\sigma)+\rho_{k}(\sigma)d(\tau)\rho_{k}(\sigma)^{-1}=d(\sigma)+\mathrm{Ad}(\sigma)d(\tau).$
∎
Similarly, if $\rho^{1}_{\Gamma^{\prime}},\rho^{2}_{\Gamma^{\prime}}$ are
equivalent liftings of $\rho_{\Gamma}$, that is
$\rho^{1}_{\Gamma^{\prime}}(\sigma)=\big{(}\mathbb{I}_{\Gamma^{\prime}}+EQ\big{)}\rho^{2}_{\Gamma^{\prime}}(\sigma)\big{(}\mathbb{I}_{\Gamma^{\prime}}+EQ\big{)}^{-1},$
then
$d(\sigma)=Q-\mathrm{Ad}(\sigma)Q,$
that is, $d(\sigma)$ is a coboundary. This proves that the set of liftings
$\rho_{\Gamma^{\prime}}$ of a representation $\rho_{\Gamma}$ is a
principal homogeneous space, provided it is non-empty.
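The coboundary computation can be verified numerically over the dual numbers $\Gamma^{\prime}=k[E]$, $E^{2}=0$: perturbing a matrix representation by $d(\sigma)=Q-\mathrm{Ad}(\sigma)Q$ yields a lift that is again multiplicative and is conjugate to the original by $\mathbb{I}+EQ$. A Python sketch with arbitrary rational matrices standing in for the values of $\rho$:

```python
from fractions import Fraction as F

# 2x2 matrices over the dual numbers k[E], E^2 = 0, encoded as pairs (A, B) = A + E*B
def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
def madd(A, B): return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
def msub(A, B): return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
def minv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]
def dmul(x, y):                     # (A + E B)(C + E D) = AC + E(AD + BC)
    (A, B), (C, D) = x, y
    return (mmul(A, C), madd(mmul(A, D), mmul(B, C)))

I2 = [[F(1), F(0)], [F(0), F(1)]]
Z  = [[F(0), F(0)], [F(0), F(0)]]
Q  = [[F(1), F(2)], [F(3), F(4)]]   # an arbitrary "direction" in the tangent space

def d(M):                           # the coboundary d = Q - Ad(M)Q
    return msub(Q, mmul(mmul(M, Q), minv(M)))

def lift(M):                        # rho^1(M) = (I + E d(M)) (M + E*0)
    return dmul((I2, d(M)), (M, Z))

g = [[F(2), F(1)], [F(1), F(1)]]    # two non-commuting invertible matrices,
h = [[F(1), F(1)], [F(0), F(1)]]    # standing in for rho(sigma), rho(tau)

# multiplicativity of the perturbed lift: rho^1(gh) = rho^1(g) rho^1(h)
assert lift(mmul(g, h)) == dmul(lift(g), lift(h))
# and rho^1 is conjugate to rho by I + E Q, whose inverse is I - E Q:
conj = dmul(dmul((I2, Q), (g, Z)), (I2, msub(Z, Q)))
assert conj == lift(g)
print("(I + E d) rho is multiplicative and equals (I+EQ) rho (I+EQ)^{-1}")
```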
The obstruction to lifting can be computed by considering a naive lift
$\rho_{\Gamma^{\prime}}$ of $\rho_{\Gamma}$ (that is, we do not assume that
$\rho_{\Gamma^{\prime}}$ is a representation) and by considering the element
$\alpha(\sigma,\tau)=\rho_{\Gamma^{\prime}}(\sigma)\circ\rho_{\Gamma^{\prime}}(\tau)\circ\rho_{\Gamma^{\prime}}(\sigma\tau)^{-1},\quad\text{
for }\sigma,\tau\in G,$
which defines a cohomology class in $H^{2}(G,\mathscr{T})$. Two
naive liftings $\rho^{1}_{\Gamma^{\prime}},\rho^{2}_{\Gamma^{\prime}}$ give
rise to cohomologous elements $\alpha^{1},\alpha^{2}$, since their difference
$\rho^{1}_{\Gamma^{\prime}}-\rho^{2}_{\Gamma^{\prime}}$ reduces to zero in
$\Gamma$. If this class is zero, then the representation
$\rho_{\Gamma}$ can be lifted to $\Gamma^{\prime}$.
Examples
Notice that in the theory of deformations of representations of the
general linear group, this is a classical result, see [29, prop. 1], [28,
p. 30], while for deformations of representations in ${\rm Aut}\Gamma[[t]]$
this is in [7], [4].
The functors in these cases are given by
(8)
$F:\mathrm{Ob}(\mathscr{C})\ni\Gamma\mapsto\left\{\begin{array}{l}\text{liftings of }\rho:G\rightarrow\mathrm{GL}_{n}(k)\text{ to }\rho_{\Gamma}:G\rightarrow\mathrm{GL}_{n}(\Gamma)\\ \text{modulo conjugation by an element of }\ker(\mathrm{GL}_{n}(\Gamma)\rightarrow\mathrm{GL}_{n}(k))\end{array}\right\}$
(9)
$D_{P}:\mathrm{Ob}(\mathscr{C})\ni\Gamma\mapsto\left\{\begin{array}{l}\text{lifts }G\rightarrow\mathrm{Aut}(\Gamma[[t]])\text{ of }\rho\\ \text{modulo conjugation by an element of }\ker(\mathrm{Aut}\Gamma[[t]]\rightarrow\mathrm{Aut}k[[t]])\end{array}\right\}$
Let $V$ be the $n$-dimensional vector space $k^{n}$, and let $\mathrm{End}_{A}(V)$
be the Lie algebra corresponding to the algebraic group $GL(V)$. The space
$\mathrm{End}_{A}(V)$ is equipped with the adjoint action of $G$, given by:
$\displaystyle\mathrm{End}_{A}(V)$
$\displaystyle\rightarrow\mathrm{End}_{A}(V)$ $\displaystyle e$
$\displaystyle\mapsto(g\cdot e)(v)=\rho(g)\big{(}e(\rho(g)^{-1}(v))\big{)}$
The tangent space of this deformation functor equals
$F(k[\epsilon])=H^{1}(G,\mathrm{End}_{A}(V)),$
where the latter is a group cohomology group and
$\mathrm{End}_{A}(V)$ is considered as a $G$-module with the adjoint action.
More precisely, if
$0\rightarrow\langle
E\rangle\rightarrow\Gamma^{\prime}\stackrel{{\scriptstyle\phi}}{{\longrightarrow}}\Gamma\rightarrow
0$
is a small extension of local Artin algebras, then we consider two liftings
$\rho^{1}_{\Gamma^{\prime}},\rho^{2}_{\Gamma^{\prime}}:G\rightarrow\mathrm{GL}_{n}(\Gamma^{\prime})$
of $\rho_{\Gamma}:G\rightarrow\mathrm{GL}_{n}(\Gamma)$ along the reduction map
$\phi:\mathrm{GL}_{n}(\Gamma^{\prime})\rightarrow\mathrm{GL}_{n}(\Gamma)$.
We have the cocycle
$d(\sigma):=\frac{1}{E}\left(\rho^{1}_{\Gamma^{\prime}}(\sigma)\rho^{2}_{\Gamma^{\prime}}(\sigma)^{-1}-\mathbb{I}_{n}\right),$
defining a class in $H^{1}(G,\mathrm{End}_{n}(k))$.
To a naive lift $\rho_{\Gamma^{\prime}}$ of $\rho_{\Gamma}$ we can attach the
2-cocycle
$\alpha(\sigma,\tau)=\rho_{\Gamma^{\prime}}(\sigma)\rho_{\Gamma^{\prime}}(\tau)\rho_{\Gamma^{\prime}}(\sigma\tau)^{-1}$
defining a cohomology class in $H^{2}(G,\mathrm{End}_{n}(k))$.
The following proposition shows us that a lifting is not always possible.
###### Proposition 15.
Let $k$ be an algebraically closed field of positive characteristic $p>0$, and
let $R=W(k)[\zeta_{q}]$ be the Witt ring of $k$ with a primitive $q=p^{h}$-th
root of unity adjoined. Consider the group $G=C_{q}\rtimes C_{m}$, where $C_{m}$ and
$C_{q}$ are cyclic groups of orders $m$ and $q$ respectively and $(m,p)=1$.
Assume that $\sigma$ and $\tau$ are generators for $C_{m}$ and $C_{q}$
respectively and moreover
$\sigma\tau\sigma^{-1}=\tau^{a}$
for some integer $a$ (which should satisfy $a^{m}\equiv 1{\;\rm mod}\,q$). There
is a linear representation $\rho:G\rightarrow\mathrm{GL}_{2}(k)$ which cannot
be lifted to a representation $\rho_{R}:G\rightarrow\mathrm{GL}_{2}(R)$.
###### Proof.
Consider the field $\mathbb{F}_{p}\subset k$ and let $a$ be a generator
of the cyclic group $\mathbb{F}_{p}^{*}$. The matrices
$\sigma=\begin{pmatrix}a&0\\ 0&1\end{pmatrix}\text{ and
}\tau=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}$
satisfy
$\sigma^{p-1}=1,\quad\tau^{q}=1,\quad\sigma\tau\sigma^{-1}=\begin{pmatrix}1&a\\
0&1\end{pmatrix}=\tau^{a},$
and generate a subgroup of $\mathrm{GL}_{2}(k)$ isomorphic to $C_{q}\rtimes
C_{m}$ for $q=p$ and $m=p-1$, giving a natural representation
$\rho:G\rightarrow\mathrm{GL}_{2}(\bar{\mathbb{F}}_{p})\subset\mathrm{GL}_{2}(k)$.
Suppose that there is a faithful representation
$\tilde{\rho}:G\rightarrow\mathrm{GL}_{n}(R)$; it gives rise to a faithful
representation
$\tilde{\rho}:G\rightarrow\mathrm{GL}_{n}(\mathrm{Quot}(R))$. Since
$\tilde{\rho}(\tau)$ is of finite order, after a $\mathrm{Quot}(R)$-linear
change of basis we may assume that $\tilde{\rho}(\tau)$ is diagonal with
$q$-th roots of unity on the diagonal (we have considered $R=W(k)[\zeta_{q}]$ so that
the necessary diagonal elements exist in $\mathrm{Quot}(R)$). We have
$\tilde{\rho}(\tau)=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{n}).$
At least one of the diagonal elements say $\lambda=\lambda_{i_{0}}$ in the
above expression is a primitive $q$-th root of unity. Let $E$ be an
eigenvector, that is
$\tilde{\rho}(\tau)E=\lambda E.$
The equality $\tau\sigma=\sigma\tau^{a}$ implies that $\sigma E$ is an
eigenvector for the eigenvalue $\lambda^{a}$. This means that $n$ should be at
least the order of $a{\;\rm mod}\,q$, since we have at least as many
different (and linearly independent) eigenvectors as the different values
$\lambda,\lambda^{a},\lambda^{a^{2}},\ldots$.
Since for large primes ($p>3$) we have $2=n<p-1$, the representation $\rho$
cannot be lifted to $R$. ∎
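The matrix relations used in the proof can be checked directly; a Python sketch over $\mathbb{F}_{p}$, with the arbitrary choices $p=7$ and $a=3$ (a generator of $\mathbb{F}_{7}^{*}$):

```python
p = 7
a = 3                      # a generator of F_p^*: 3 has order 6 = p - 1 in F_7

def mmul(A, B, p):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))
def mpow(A, n, p):
    R = ((1, 0), (0, 1))
    for _ in range(n):
        R = mmul(R, A, p)
    return R

sigma = ((a, 0), (0, 1))
tau   = ((1, 1), (0, 1))
I2    = ((1, 0), (0, 1))
sigma_inv = mpow(sigma, p - 2, p)           # sigma^{p-2} = sigma^{-1} in GL_2(F_p)

assert mpow(sigma, p - 1, p) == I2          # sigma^{p-1} = 1
assert mpow(tau, p, p) == I2                # tau^q = 1 with q = p here
conj = mmul(mmul(sigma, tau, p), sigma_inv, p)
assert conj == ((1, a), (0, 1)) == mpow(tau, a, p)   # sigma tau sigma^{-1} = tau^a
print("relations of C_q x| C_m hold for p =", p, ", a =", a)
```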
Local Actions
By the local-global theorems of J. Bertin and A. Mézard [4] and
the formal patching theorems of D. Harbater, K. Stevenson [14], [15], the
study of the functor ${D_{\rm gl}}$ can be reduced to the study of the
deformation functors $D_{P}$ attached to each wild ramification point $P$ of
the cover $X\rightarrow X/G$, as defined in eq. (9). The theory of
automorphisms of formal power series rings is not as well understood as the
theory of automorphisms of finite dimensional vector spaces, i.e. the theory
of general linear groups.
As in the theory of liftings for the general linear group, we consider small
extensions
$0\rightarrow\langle
E\rangle\rightarrow\Gamma^{\prime}\stackrel{{\scriptstyle\phi}}{{\longrightarrow}}\Gamma\rightarrow
0.$
An automorphism $\rho^{\Gamma}(\sigma)\in\mathrm{Aut}\Gamma[[t]]$ is
completely described by a power series
$\rho^{\Gamma}(\sigma)(t)=f_{\sigma}=\sum_{\nu=1}^{\infty}a_{\nu}^{\Gamma}(\sigma)t^{\nu},$
where $a_{\nu}^{\Gamma}(\sigma)\in\Gamma$. Given a naive lift
$\rho^{\Gamma^{\prime}}(\sigma)(t)=\sum_{\nu=1}^{\infty}a_{\nu}^{\Gamma^{\prime}}(\sigma)t^{\nu},$
where $a_{\nu}^{\Gamma^{\prime}}(\sigma)\in\Gamma^{\prime}$, we can again form
a 2-cocycle
$\alpha(\sigma,\tau)=\rho^{\Gamma^{\prime}}(\sigma)\circ\rho^{\Gamma^{\prime}}(\tau)\circ\rho^{\Gamma^{\prime}}(\sigma\tau)^{-1}(t),$
defining a cohomology class in $H^{2}(G,\mathscr{T}_{k[[t]]})$. The naive lift
$\rho^{\Gamma^{\prime}}(\sigma)$ is an element of
$\mathrm{Aut}\Gamma^{\prime}[[t]]$ if and only if $\alpha$ is cohomologous to
zero.
Suppose now that $\rho_{1}^{\Gamma^{\prime}},\rho_{2}^{\Gamma^{\prime}}$ are
two lifts in $\mathrm{Aut}\Gamma^{\prime}[[t]]$. We can now define
$d(\sigma):=\frac{1}{E}\left(\rho_{1}^{\Gamma^{\prime}}(\sigma)\rho_{2}^{\Gamma^{\prime}}(\sigma)^{-1}-\mathrm{Id}\right)\in
H^{1}(G,\mathscr{T}_{k[[t]]}).$
## 3\. Relative Petri’s theorem.
Recall that a functor $F:\mathscr{C}\rightarrow\mathrm{Sets}$ can be extended
to a functor $\hat{F}:\hat{\mathscr{C}}\rightarrow\mathrm{Sets}$ by letting
for every $R\in\mathrm{Ob}(\hat{\mathscr{C}})$,
$\displaystyle\hat{F}(R)=\lim_{\leftarrow}F(R/\mathfrak{m}_{R}^{n+1})$. An
element $\hat{u}\in\hat{F}(R)$ is called a formal element, and by definition
it can be represented as a system of elements $\{u_{n}\in
F(R/\mathfrak{m}_{R}^{n+1})\}_{n\geq 0}$, such that for each $n\geq 1$, the
map $F(R/\mathfrak{m}_{R}^{n+1})\rightarrow F(R/\mathfrak{m}_{R}^{n})$ induced
by $R/\mathfrak{m}_{R}^{n+1}\rightarrow R/\mathfrak{m}_{R}^{n}$ sends
$u_{n}\mapsto u_{n-1}$. For $R\in\mathrm{Ob}(\hat{\mathscr{C}})$ and a formal
element $\hat{u}\in\hat{F}(R)$, the couple $(R,\hat{u})$ is called a formal
couple. It is known that there is a 1-1 correspondence between $\hat{F}(R)$
and the set of morphisms of functors
$h_{R}:=\mathrm{Hom}_{\hat{\mathscr{C}}}(R,-)\rightarrow F$, see [34, lemma
2.2.2.]. The formal element $\hat{u}\in\hat{F}(R)$ will be called versal if
the corresponding morphism $h_{R}\rightarrow F$ is smooth. For the definition
of a smooth map between functors, see [34, def. 2.2.4]. The ring $R$ will be
called a versal deformation ring.
Schlessinger [32, 3.7] proved that the deformation functor $D$ for curves
without automorphisms, admits a ring $R$ as versal deformation ring.
Schlessinger calls the versal deformation ring the hull of the deformation
functor. Indeed, since there are no obstructions to liftings in small
extensions for curves, see [32, rem. 2.10], the hull $R$ of ${D_{\rm gl}}$ is a
power series ring over $\Lambda$, which can be taken as an algebraic extension
of $W(k)$. Moreover $R=\Lambda[[x_{1},\ldots,x_{3g-3}]]$, as we can see by
applying [3, cor. 3.3.5], when $G$ is the trivial subgroup of the automorphism
group. In this case the quotient map
$f:X\rightarrow\Sigma=X/\{\mathrm{Id}\}=X$ is the identity. Indeed, for the
equivariant deformation functor, in the case of the trivial group, there are
no ramified points and the short exact sequence in eq. (1) reduces to an
isomorphism of the first two spaces. We have
$\dim_{k}H^{1}(X/G,\pi_{*}^{G}(\mathscr{T}_{X}))=\dim_{k}H^{1}(X,\mathscr{T}_{X})=3g-3$.
The deformation $\mathscr{X}\rightarrow\mathrm{Specf}R$ can be extended to a
deformation $\mathscr{X}\rightarrow\mathrm{Spec}R$ by Grothendieck’s
effectivity theorem, see [34, th. 2.5.13], [13].
The versal element $\hat{u}$ corresponds to a deformation
$\mathscr{X}\rightarrow\mathrm{Spec}R$, with generic fibre
$\mathscr{X}_{\eta}$ and special fibre $\mathscr{X}_{0}$. The couple
$(R,\hat{u})$ is called the versal [34, def. 2.2.6] element of the deformation
functor $D$ of curves (without automorphisms). Moreover, the element $\hat{u}$
defines a map $h_{R/\Lambda}\rightarrow D$, which by definition of the hull is
smooth, so every deformation $X_{A}\rightarrow\mathrm{Spec}\,A$ defines a
homomorphism $R\rightarrow A$, which allows us to see $A$ as an $R$-algebra.
Indeed, for the Artin algebra $A\rightarrow A/\mathfrak{m}_{A}=k$ we consider
the smoothness condition, namely the surjectivity of the map
$h_{R/\Lambda}(A)=\mathrm{Hom}_{\widehat{\mathscr{C}}}(R,A)\longrightarrow
h_{R/\Lambda}(k)\times_{D(k)}D(A).$
This section aims to prove the following
###### Proposition 16.
Let $f_{1},\ldots,f_{r}\in k[\omega_{1},\ldots,\omega_{g}]$ be quadratic
polynomials which generate the canonical ideal of a curve $X$ defined over an
algebraically closed field $k$. Any deformation $\mathscr{X}_{A}$ is given by
quadratic polynomials $\tilde{f}_{1},\ldots,\tilde{f}_{r}\in
A[W_{1},\ldots,W_{g}]$, which reduce to $f_{1},\ldots,f_{r}$ modulo the
maximal ideal $\mathfrak{m}_{A}$ of $A$.
For $n\geq 1$, we write $\Omega_{\mathscr{X}/R}^{\otimes n}$ for the sheaf of
holomorphic polydifferentials on $\mathscr{X}$. By [17, lemma II.8.9] the
$R$-modules $H^{0}(\mathscr{X},\Omega^{\otimes n}_{\mathscr{X}/R})$ are free
of rank $d_{n,g}$ for all $n\geq 1$, with $d_{n,g}$ given by eq. (10):
(10) $d_{n,g}=\begin{cases}g,&\text{ if }n=1\\ (2n-1)(g-1),&\text{ if
}n>1.\end{cases}$
Indeed, by a standard argument using Nakayama’s lemma, see [17, lemma
II.8.9],[21] we have that the $R$-module $H^{0}(\mathscr{X},\Omega^{\otimes
n}_{\mathscr{X}/R})$ is free. Notice that to use Nakayama’s lemma we need the
deformation over $R$ to have both a special and generic fibre and this was the
reason we needed to consider a deformation over the spectrum of $R$ instead of
the formal spectrum.
###### Lemma 17.
For every Artin algebra $A$ the $A$-module
$H^{0}(X_{A},\Omega_{X_{A}/A}^{\otimes n})$ is free.
###### Proof.
This follows since $H^{0}(\mathscr{X},\Omega_{\mathscr{X}/R})$ is a free
$R$-module, together with [17, prop. II.8.10], which asserts that $\Omega_{X_{A}/A}\cong
g^{\prime*}(\Omega_{\mathscr{X}/R})$, where $g^{\prime}$ is shown in the next
commutative diagram:
$\begin{array}{ccc}X_{A}=\mathscr{X}\times_{\mathrm{Spec}\,R}\mathrm{Spec}\,A&\stackrel{{g^{\prime}}}{{\longrightarrow}}&\mathscr{X}\\ \downarrow&&\downarrow\\ \mathrm{Spec}\,A&\longrightarrow&\mathrm{Spec}\,R\end{array}$
We have by definition of the pullback
(11)
$g^{\prime*}(\Omega_{\mathscr{X}/R})(X_{A})=(g^{\prime})^{-1}\Omega_{\mathscr{X}/R}(X_{A})\otimes_{(g^{\prime})^{-1}\mathscr{O}_{\mathscr{X}}(X_{A})}\mathscr{O}_{X_{A}}(X_{A})$
and by definition of the fiber product
$\mathscr{O}_{X_{A}}=\mathscr{O}_{\mathscr{X}}\otimes_{R}A$. Observe also that
since $A$ is a local Artin algebra the schemes $X_{A}$ and $\mathscr{X}$ share
the same underlying topological space so
$g^{\prime-1}(\Omega_{\mathscr{X}/R}(X_{A}))=\Omega_{\mathscr{X}/R}(\mathscr{X})$
and
$g^{\prime-1}\mathscr{O}_{\mathscr{X}}(X_{A})=\mathscr{O}_{\mathscr{X}}(\mathscr{X})$.
So eq. (11) becomes
$\displaystyle H^{0}(X_{A},\Omega_{X_{A}/A})$
$\displaystyle=\Omega_{X_{A}/A}(X_{A})=g^{\prime*}(\Omega_{\mathscr{X}/R})(X_{A})$
$\displaystyle=\Omega_{\mathscr{X}/R}(\mathscr{X})\otimes_{\mathscr{O}_{\mathscr{X}}(\mathscr{X})}\mathscr{O}_{\mathscr{X}}(\mathscr{X})\otimes_{R}A$
$\displaystyle=H^{0}(\mathscr{X},\Omega_{\mathscr{X}/R})\otimes_{R}A.$
So $H^{0}(X_{A},\Omega_{X_{A}/A})$ is a free $A$-module of the same rank as
$H^{0}(\mathscr{X},\Omega_{\mathscr{X}/R})$.
The proof for $H^{0}(X_{A},\Omega_{X_{A}/A}^{\otimes n})$ follows in the same
way. ∎
We select generators $W_{1},\ldots,W_{g}$ for the symmetric algebra
$\mathrm{Sym}(H^{0}(\mathscr{X},\Omega_{\mathscr{X}/R}))=R[W_{1},\ldots,W_{g}].$
Similarly, we write
$\mathrm{Sym}(H^{0}(\mathscr{X}_{\eta},\Omega_{\mathscr{X}_{\eta}/L}))=L[\omega_{1},\ldots,\omega_{g}]\text{
and
}\mathrm{Sym}(H^{0}(\mathscr{X}_{0},\Omega_{\mathscr{X}_{0}/k}))=k[w_{1},\ldots,w_{g}],$
where
$\omega_{i}=W_{i}\otimes_{R}L\qquad w_{i}=W_{i}\otimes_{R}k\text{ for all
}1\leq i\leq g.$
We have the following diagram relating special and generic fibres.
(12)
$\begin{array}{ccccc}\mathscr{X}_{0}=\mathrm{Spec}(k)\times_{\mathrm{Spec}(R)}\mathscr{X}&\longrightarrow&\mathscr{X}&\longleftarrow&\mathscr{X}_{\eta}=\mathrm{Spec}(L)\times_{\mathrm{Spec}(R)}\mathscr{X}\\ \downarrow&&\downarrow&&\downarrow\\ \mathrm{Spec}(k)&\longrightarrow&\mathrm{Spec}(R)&\longleftarrow&\mathrm{Spec}(L)\end{array}$
Our article is based on the following relative version of Petri’s theorem.
###### Theorem 18.
Diagram (12) induces a deformation-theoretic diagram of canonical embeddings
(13)
$0\longrightarrow I_{\mathscr{X}_{\eta}}\longrightarrow S_{L}:=L[\omega_{1},\ldots,\omega_{g}]\stackrel{{\phi_{\eta}}}{{\longrightarrow}}\displaystyle\bigoplus_{n=0}^{\infty}H^{0}(\mathscr{X}_{\eta},\Omega_{\mathscr{X}_{\eta}/L}^{\otimes n})\longrightarrow 0$
$0\longrightarrow I_{\mathscr{X}}\longrightarrow S_{R}:=R[W_{1},\ldots,W_{g}]\stackrel{{\phi}}{{\longrightarrow}}\displaystyle\bigoplus_{n=0}^{\infty}H^{0}(\mathscr{X},\Omega_{\mathscr{X}/R}^{\otimes n})\longrightarrow 0$
$0\longrightarrow I_{\mathscr{X}_{0}}\longrightarrow S_{k}:=k[w_{1},\ldots,w_{g}]\stackrel{{\phi_{0}}}{{\longrightarrow}}\displaystyle\bigoplus_{n=0}^{\infty}H^{0}(\mathscr{X}_{0},\Omega_{\mathscr{X}_{0}/k}^{\otimes n})\longrightarrow 0$
with vertical maps $\otimes_{R}L$ from the middle row to the first and $\otimes_{R}R/\mathfrak{m}$ from the middle row to the last,
where
$I_{\mathscr{X}_{\eta}}=\ker\phi_{\eta},\;I_{\mathscr{X}}=\ker\phi,\;I_{\mathscr{X}_{0}}=\ker\phi_{0}$,
each row is exact and each square is commutative. Moreover, the ideal
$I_{\mathscr{X}}$ can be generated by elements of degree $2$ as an ideal of
$S_{R}$.
The commutativity of the above diagram was proved in [6] by H. Charalambous,
K. Karagiannis and the first author. To prove that $I_{\mathscr{X}}$ is
generated by elements of degree $2$, as in the special and generic fibers, we
argue as follows. Since $L$ is a field, it follows by Petri's Theorem that
there are elements $\tilde{f_{1}},\dots,\tilde{f_{r}}\in S_{L}$ of degree $2$
such that
$I_{\mathscr{X}_{\eta}}=\langle\tilde{f_{1}},\dots,\tilde{f_{r}}\rangle.$
Now we choose an element $c\in R$ such that
$f_{i}:=c\tilde{f}_{i}\in
S_{R}$ for all $i$, and notice that
$\mathrm{deg}(f_{i})=\mathrm{deg}(\tilde{f_{i}})=2$.
$\bullet$ Assume first that the element $c\in R$ is invertible in $R$.
Consider the ideal $I=\langle f_{1},\ldots,f_{r}\rangle$ of $S_{R}$. We will
prove that $I=I_{\mathscr{X}}$. Consider the multiplicative system
$R^{*}=R\smallsetminus\{0\}$. We will
prove first that $I\subset I_{\mathscr{X}}=\mathrm{ker}\,\phi$. Indeed, using
the commuting upper square, every element $a=\sum_{i=1}^{r}a_{i}f_{i}\in I$
maps to $\sum_{i=1}^{r}a_{i}f_{i}\otimes_{R}1$, which in turn maps to $0$ by
$\phi_{\eta}$. The same element maps to $\phi(a)$, and $\phi(a)\otimes_{R}1$
must be zero. Since all modules
$H^{0}(\mathscr{X},\Omega_{\mathscr{X}/R}^{\otimes n})$ are free, $\phi(a)=0$
and $a\in I_{\mathscr{X}}$.
Since the family $\mathscr{X}\rightarrow\mathrm{Spec}\,R$ is flat, we have that
$I_{\mathscr{X}}\otimes_{R}L=I_{\mathscr{X}_{\eta}}$; that is, we apply the
$\otimes_{R}L$ functor to the middle short exact sequence of eq. (13). The
ideal $I=I_{\mathscr{X}_{\eta}}\cap S_{R}=(I_{\mathscr{X}}\otimes_{R}L)\cap
S_{R}$. By [2, prop. 3.11ii] this gives that
$I=\cup_{s\in R^{*}}(I_{\mathscr{X}}:s)\supset I_{\mathscr{X}},$
so $I_{\mathscr{X}}=I$. In the above formula $(I_{\mathscr{X}}:s)=\{x\in
S_{R}:xs\in I_{\mathscr{X}}\}$.
$\bullet$ From now on we do not assume that the element $c$ is an invertible
element of $R$.
Let $\bar{g}$ be an element of degree $2$ in $I_{\mathscr{X}_{0}}$; we will
prove that we can select an element $g\in I_{\mathscr{X}}$ such that $g\otimes
1_{k}=\bar{g}$, so that $g$ has degree $2$.
Let us choose a lift $\tilde{g}\in S_{R}$ of degree $2$ by lifting each
coefficient of $\bar{g}$ from $k$ to $R$. This element is not necessarily in
$I_{\mathscr{X}}$. We have $\phi(\tilde{g})\otimes 1_{k}=\phi_{0}(\tilde{g}\otimes
1_{k})=\phi_{0}(\bar{g})=0$. Let $\bar{e}_{1},\ldots,\bar{e}_{3g-3}$ be
generators of the free $R$-module
$H^{0}(\mathscr{X},\Omega_{\mathscr{X}/R}^{\otimes 2})$ and choose
$e_{1},\ldots,e_{3g-3}\in S_{R}$ such that $\phi(e_{i})=\bar{e}_{i}$. Let us
write $\phi(\tilde{g})=\sum_{i=1}^{3g-3}\lambda_{i}\bar{e}_{i}$, with
$\lambda_{i}\in R$. Since $\phi_{0}(\bar{g})=0$, we have that
$\lambda_{i}\in\mathfrak{m}_{R}$ for all $1\leq i\leq{3g-3}$. This means that
the element $g=\tilde{g}-\sum_{i=1}^{3g-3}\lambda_{i}e_{i}\in S_{R}$ reduces
to $\bar{g}$ modulo $\mathfrak{m}_{R}$ and also
$\phi(g)=\phi(\tilde{g})-\sum_{i=1}^{3g-3}\lambda_{i}\bar{e}_{i}=0$, so $g\in
I_{\mathscr{X}}$.
Let $\bar{g}_{1},\dots,\bar{g}_{s}\in I_{\mathscr{X}_{0}}$ be elements of
degree $2$ such that
$I_{\mathscr{X}_{0}}=\langle\bar{g}_{1},\dots,\bar{g}_{s}\rangle$
and, using the previous construction, we take lifts $g_{i}$ in
$I_{\mathscr{X}}\lhd S_{R}$, i.e. such that $g_{i}\otimes 1_{k}=\bar{g}_{i}$,
and such that the elements $g_{i}$ also have degree $2$.
We will now prove that the elements
$g_{1}\otimes_{S_{R}}1_{L},\ldots,g_{s}\otimes_{S_{R}}1_{L}\in S_{L}$ generate
the ideal $I_{\mathscr{X}_{\eta}}$. By the commutativity of the diagram in eq.
(13) we have $\langle
g_{1}\otimes_{S_{R}}1_{L},\ldots,g_{s}\otimes_{S_{R}}1_{L}\rangle\subset
I_{\mathscr{X}_{\eta}}=\ker\phi_{\eta}$. Observe that any linear relation
$\sum_{\nu=1}^{s}(a_{\nu}g_{\nu}\otimes_{S_{R}}1_{L})=0,\text{ with
}a_{\nu}\in L,$
gives rise, after clearing denominators by some $c\in R$, to a relation
$\sum_{\nu=1}^{s}c\cdot a_{\nu}g_{\nu}=0,\qquad c\cdot a_{\nu}\in R,$
which implies that $c\cdot a_{\nu}\in\mathfrak{m}_{R}$.
We will prove that the elements $g_{i}\otimes_{S_{R}}1_{L}$ are linear
independent.
###### Lemma 19.
Let $\bar{v}_{1},\ldots,\bar{v}_{n}\in k^{m}$ be linearly independent elements
and $v_{1},\ldots,v_{n}$ be lifts in $R^{m}$. Then
$\sum_{\nu=1}^{n}a_{\nu}v_{\nu}=0\qquad a_{\nu}\in R,$
implies that $a_{1}=\cdots=a_{n}=0$.
###### Proof.
We have $n\leq m$. We write the elements $v_{1},\ldots,v_{n}$ (resp.
$\bar{v}_{1},\ldots,\bar{v}_{n}$) as columns and in this way we obtain an
$m\times n$ matrix $J$ (resp. $\bar{J}$). Since the elements are linearly
independent in $k^{m}$, the matrix $\bar{J}$ has an $n\times n$ submatrix with
nonzero determinant. Without loss of generality, we may assume that there is an
invertible $n\times n$ matrix $\bar{Q}$ with coefficients in $k$ such that
$\bar{Q}\cdot\bar{J}^{t}=\left(\begin{array}[]{l|l}\mathbb{I}_{n}&\bar{A}\end{array}\right)$,
where $\bar{A}$ is an $n\times(m-n)$ matrix. We now take lifts $Q,J$ and $A$
of $\bar{Q},\bar{J}$ and $\bar{A}$ respectively, with coefficients in $R$, i.e.
$Q\cdot
J^{t}\equiv\left(\begin{array}[]{l|l}\mathbb{I}_{n}&A\end{array}\right){\;\rm mod}\;\mathfrak{m}_{R}.$
The columns of $J$ are lifts of the elements $\bar{v}_{1},\ldots,\bar{v}_{n}$.
It follows that $Q\cdot
J^{t}=\left(\begin{array}[]{c|c}\mathbb{I}_{n}&A\end{array}\right)+\left(\begin{array}[]{c|c}C&D\end{array}\right),$
where $C,D$ are matrices with entries in $\mathfrak{m}_{R}$. The determinant
of $\mathbb{I}_{n}+C$ is $1+\mu$, for some element $\mu\in\mathfrak{m}_{R}$, and
this is an invertible element in the local ring $R$. Similarly, the matrix $Q$
is invertible. Therefore,
$J^{t}=\left(\begin{array}[]{l|l}Q^{-1}(\mathbb{I}_{n}+C)&Q^{-1}(A+D)\end{array}\right)$
has an invertible first $n\times n$ block; in particular $J$ admits an $n\times n$
submatrix which is invertible over $R$, so the relation
$\sum_{\nu=1}^{n}a_{\nu}v_{\nu}=0$ forces $a_{1}=\cdots=a_{n}=0$.
∎
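The lemma admits a simple numerical illustration, which is ours and not part of the argument above: take $R=\mathbb{Z}_{(5)}$ (the integers localized at $5$), $k=\mathbb{F}_{5}$, $m=3$, $n=2$, and check that some $2\times 2$ minor of a lifted matrix is a unit of $R$. The Python sketch below uses hypothetical sample vectors.

```python
from itertools import combinations

# Toy illustration of the lemma (hypothetical sample data): take
# R = Z localized at 5, k = F_5, m = 3, n = 2. The reductions of v1, v2
# modulo 5 are linearly independent over F_5, so some 2 x 2 minor of the
# lifted matrix must be a unit of R, i.e. have determinant prime to 5.
v1 = [6, 5, 7]    # reduces to (1, 0, 2) mod 5
v2 = [10, 11, 8]  # reduces to (0, 1, 3) mod 5

def det2(a, b, c, d):
    return a * d - b * c

minors = [det2(v1[i], v2[i], v1[j], v2[j])
          for i, j in combinations(range(3), 2)]
# a unit minor exists, so v1, v2 are linearly independent over R
assert any(d % 5 != 0 for d in minors)
```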
###### Remark 20.
It is clear that over a ring where $2$ is invertible there is a one-to-one
correspondence between symmetric $g\times g$ matrices and quadratic
polynomials. Indeed, a quadratic polynomial can be written as
$f(w_{1},\ldots,w_{g})=\sum_{1\leq i,j\leq g}a_{ij}w_{i}w_{j}=w^{t}Aw,$
where $A=(a_{ij})$. Even if the matrix $A$ is not symmetric, the matrix
$(A+A^{t})/2$ is, and it generates the same quadratic polynomial
$w^{t}Aw=w^{t}\left(\frac{A+A^{t}}{2}\right)w.$
Notice that the map
$A\mapsto\frac{A+A^{t}}{2}$
is onto the space of symmetric matrices and has as kernel the space of
antisymmetric matrices.
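The identity $w^{t}Aw=w^{t}\big((A+A^{t})/2\big)w$ can be checked mechanically; the following Python sketch (ours, with a hypothetical sample matrix) evaluates both quadratic forms on a grid of integer vectors for $g=3$ and also checks that the symmetrization map kills antisymmetric matrices.

```python
from fractions import Fraction
from itertools import product

# Check of the identity in remark 20 for g = 3 on a hypothetical,
# non-symmetric sample matrix A: the quadratic form of A coincides with
# that of its symmetric part (A + A^t)/2.
A = [[1, 2, 0], [4, 5, 6], [1, 0, 3]]
S = [[Fraction(A[i][j] + A[j][i], 2) for j in range(3)] for i in range(3)]

def quad(M, w):
    # evaluate w^t M w
    n = len(M)
    return sum(w[i] * M[i][j] * w[j] for i in range(n) for j in range(n))

for w in product(range(-2, 3), repeat=3):
    assert quad(A, w) == quad(S, w)

# the symmetrization map annihilates antisymmetric matrices
K = [[0, 1], [-1, 0]]
KS = [[Fraction(K[i][j] + K[j][i], 2) for j in range(2)] for i in range(2)]
assert KS == [[0, 0], [0, 0]]
```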
A minimal set of quadratic generators is given by a set of polynomials
$f_{1},\ldots,f_{r}$, with $f_{i}=w^{t}A_{i}w$, where the corresponding
symmetric matrices $A_{i}$ are linearly independent.
By the general theory of Betti tables we know that, in the cases where the
canonical ideal is generated by quadratic polynomials, the dimension of this
space of matrices equals $\binom{g-2}{2}$, see [10, prop. 9.5]. Therefore we
begin on the special fibre with the $s=\binom{g-2}{2}$ generators
$\bar{g}_{1},\ldots,\bar{g}_{s}$. As we have proved in theorem 18 we
can lift them to elements $g_{1},\ldots,g_{s}\in I_{\mathscr{X}}$ so that for
$J:=\langle g_{1},\dots,g_{s}\rangle$ we have
* $(i)$
$J\otimes_{R}L=I_{\mathscr{X}_{\eta}}$.
* $(ii)$
$J\otimes_{R}k=I_{\mathscr{X}_{0}}$.
In this way we obtain the linearly independent elements
$g_{1}\otimes_{S_{R}}1_{L},\ldots,g_{s}\otimes_{S_{R}}1_{L}$ in
$I_{\mathscr{X}_{\eta}}$. We have seen that these $s=\binom{g-2}{2}$ linearly
independent quadratic elements also generate $I_{\mathscr{X}_{\eta}}$.
Following Lemma 5 (ii) of [6] we have the next lemma.
###### Lemma 21.
Let $G$ be a set of polynomials in $S_{R}$ such that $\langle
G\rangle\otimes_{R}L=I_{\mathscr{X}_{\eta}}$ and $\langle
G\rangle\otimes_{R}k=I_{\mathscr{X}_{0}}$. Then $I_{\mathscr{X}}=\langle
G\rangle$.
Essential for the proof of lemma 21 was that the base $\mathrm{Spec}(R)$ has a
generic fibre. Deformation theory, however, is concerned with deformations over
local Artin algebras, which do not have generic fibres. But by tensoring the
middle sequence of eq. (13) with $A$ we obtain the following commutative diagram:
$\begin{array}{ccccccccc}
0&\rightarrow&I_{X_{A}}&\rightarrow&S_{A}:=A[W_{1},\ldots,W_{g}]&\stackrel{\phi}{\longrightarrow}&\displaystyle\bigoplus_{n=0}^{\infty}H^{0}(X_{A},\Omega_{X_{A}/A}^{\otimes n})&\rightarrow&0\\\ &&\big\downarrow&&\big\downarrow&&\big\downarrow&&\\\ 0&\rightarrow&I_{\mathscr{X}_{0}}&\rightarrow&S_{k}:=k[w_{1},\ldots,w_{g}]&\stackrel{\phi_{0}}{\longrightarrow}&\displaystyle\bigoplus_{n=0}^{\infty}H^{0}(\mathscr{X}_{0},\Omega_{\mathscr{X}_{0}/k}^{\otimes n})&\rightarrow&0
\end{array}$
where all vertical maps are given by $\otimes_{A}A/\mathfrak{m}_{A}$.
Indeed, since $H^{0}(X_{A},\Omega_{X_{A}/A}^{\otimes n})$ is free,
the top left arrow in the above diagram is injective. Moreover, the relative
canonical ideal $I_{X_{A}}$ is still generated by quadratic polynomials in
$S_{A}$.
### 3.1. Embedded deformations
Let $Z$ be a scheme over $k$ and let $X$ be a closed subscheme of $Z$. An
embedded deformation $X^{\prime}\rightarrow\mathrm{Spec}\,k[\epsilon]$ of $X$ over
$\mathrm{Spec}\,k[\epsilon]$ is a closed subscheme $X^{\prime}\subset
Z^{\prime}=Z\times\mathrm{Spec}\,k[\epsilon]$ fitting in the diagram:
$\begin{array}{ccc}
Z&\longrightarrow&Z\times\mathrm{Spec}\,k[\epsilon]\\\ \big\uparrow&&\big\uparrow\\\ X&\longrightarrow&X^{\prime}\\\ \big\downarrow&&\big\downarrow\\\ \mathrm{Spec}\,k&\longrightarrow&\mathrm{Spec}\,k[\epsilon]
\end{array}$
Let $\mathscr{I}$ be the ideal sheaf describing $X$ as a closed subscheme of
$Z$ and
(14)
$\mathscr{N}_{X/Z}=\mathscr{H}\\!\mathit{om}_{Z}(\mathscr{I},\mathscr{O}_{X})=\mathscr{H}\\!\mathit{om}_{X}(\mathscr{I}/\mathscr{I}^{2},\mathscr{O}_{X}),$
be the normal sheaf. In particular, for an affine open set $U$ of $X$ we set
$B^{\prime}=\mathscr{O}_{Z^{\prime}}(U)=B\oplus\epsilon B$, where
$B=\mathscr{O}_{Z}(U)$, and we observe that describing the ideal
$\mathscr{I}^{\prime}(U)\subset B^{\prime}$ is equivalent to giving
an element
$\phi_{U}\in\mathrm{Hom}_{\mathscr{O}_{Z}(U)}\big{(}\mathscr{I}(U),\mathscr{O}_{Z}(U)/\mathscr{I}(U)\big{)},$
see [18, prop. 2.3].
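All first-order computations below take place over the dual numbers $k[\epsilon]$, $\epsilon^{2}=0$. A minimal computational model of this ring (ours, with a hypothetical pair encoding $(a,b)=a+b\epsilon$) is sketched below; in particular $1+a\epsilon$ is always invertible, with inverse $1-a\epsilon$.

```python
from fractions import Fraction

# A minimal model of the dual numbers k[eps], eps^2 = 0, over the
# rationals; the pair (a, b) stands for a + b*eps.
def d_mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c, a * d + b * c)      # the eps^2 term is discarded

one = (Fraction(1), Fraction(0))
for a in range(-3, 4):
    u = (Fraction(1), Fraction(a))     # 1 + a*eps
    v = (Fraction(1), Fraction(-a))    # 1 - a*eps
    assert d_mul(u, v) == one          # u is invertible with inverse v
```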
In this article, we will take $Z=\mathbb{P}^{g-1}$ and consider the canonical
embedding $f:X\rightarrow\mathbb{P}^{g-1}$. We will denote by $N_{f}$ the
sheaf $\mathscr{N}_{X/\mathbb{P}^{g-1}}$. Let $\mathscr{I}_{X}$ be the sheaf
of ideals of the curve $X$ seen as a subscheme of $\mathbb{P}^{g-1}$. Since
the curve $X$ satisfies the conditions of Petri’s theorem, it is fully
described by certain quadratic polynomials
$f_{1}=\tilde{A}_{1},\ldots,f_{r}=\tilde{A}_{r}$, which correspond to a set of
$g\times g$ matrices $A_{1},\ldots,A_{r}$, see [24]. The elements
$f_{1},\ldots,f_{r}$ generate the ideal $I_{X}$ corresponding to the
affine cone $C(X)$ of $X$, $C(X)\subset\mathbb{A}^{g}$.
We have
$H^{0}(X,N_{f})=\mathrm{Hom}_{S}(I_{X},\mathscr{O}_{X}).$
Assume that $X$ is deformed to a curve $X_{\Gamma}\rightarrow\mathrm{Spec}\,\Gamma$,
where $\Gamma$ is a local Artin algebra,
$X_{\Gamma}\subset\mathbb{P}^{g-1}_{\Gamma}=\mathbb{P}^{g-1}\times
\mathrm{Spec}\,\Gamma$. Our initial curve $X$ is described in terms of the homogeneous
canonical ideal $I_{X}$, generated by the elements
$\\{w^{t}A_{1}w,\ldots,w^{t}A_{r}w\\}$. For a local Artin algebra $\Gamma$ let
$\mathscr{S}_{g}(\Gamma)$ denote the space of symmetric $g\times g$ matrices
with coefficients in $\Gamma$. The deformations $X_{\Gamma}$ are expressed in
terms of the ideals $I_{X_{\Gamma}}$, which by the relative Petri’s theorem
are also generated by elements
$w^{t}A_{1}^{\Gamma}w,\ldots,w^{t}A_{r}^{\Gamma}w$, where $A_{i}^{\Gamma}$ is
in $\mathscr{S}_{g}(\Gamma)$. This essentially fits with Schlessinger’s
observation in [33], where the deformations of the projective variety are
related to the deformations of the affine cone; notice that in our case all
relative projective curves are smooth and the assumptions of [33, th. 2] are
satisfied. We can thus replace the sheaf-theoretic description of eq. (14) and
work with the affine cone instead.
###### Remark 22.
A set of quadratic generators $\\{w^{t}A_{1}w,\ldots,w^{t}A_{r}w\\}$ is a
minimal set of generators if and only if the elements $A_{1},\ldots,A_{r}$ are
linearly independent in the free $\Gamma$-module $\mathscr{S}_{g}(\Gamma)$ of
rank $(g+1)g/2$.
#### 3.1.1. Embedded deformations and small extensions
Let
$0\rightarrow\langle
E\rangle\rightarrow\Gamma^{\prime}\stackrel{{\scriptstyle\pi}}{{\longrightarrow}}\Gamma\rightarrow
0$
be a small extension and let $\mathbb{P}^{g-1}_{\Gamma^{\prime}}\supset
X_{\Gamma^{\prime}}\rightarrow\mathrm{Spec}\,\Gamma^{\prime}$ be a deformation of
$X_{\Gamma}$ and $X$. The curve $X_{\Gamma^{\prime}}$ is described in terms of
quadratic polynomials $w^{t}A_{i}^{\Gamma^{\prime}}w$, where
$A_{i}^{\Gamma^{\prime}}\in\mathscr{S}_{g}(\Gamma^{\prime})$, which reduce to
$A_{i}^{\Gamma}$ modulo $\langle E\rangle$. This means that
(15) $A_{i}^{\Gamma^{\prime}}\equiv A_{i}^{\Gamma}{\;\rm
mod}\;\mathrm{ker}(\pi)\text{ for all }1\leq i\leq r$
and if we select a naive lift $i(A_{i}^{\Gamma})$ of $A_{i}^{\Gamma}$, then we
can write
$A_{i}^{\Gamma^{\prime}}=i(A_{i}^{\Gamma})+E\cdot B_{i},\text{ where
}B_{i}\in\mathscr{S}_{g}(k).$
The set of liftings $A_{i}^{\Gamma^{\prime}}$ of the elements
$A_{i}^{\Gamma}$, for $1\leq i\leq r$, is a principal homogeneous space under
the action of $H^{0}(X,N_{f})$, since two such liftings
$\\{A_{i}^{(1)}(\Gamma^{\prime}),1\leq i\leq r\\}$,
$\\{A_{i}^{(2)}(\Gamma^{\prime}),1\leq i\leq r\\}$ differ by a set of matrices
$\\{B_{i}(\Gamma^{\prime})=A_{i}^{(1)}(\Gamma^{\prime})-A_{i}^{(2)}(\Gamma^{\prime}),1\leq
i\leq r\\}$ with entries in $\langle E\rangle\cong k$, see also [18, thm.
6.2].
Define a map $\phi:\langle
A_{1},\ldots,A_{r}\rangle\rightarrow\mathscr{S}_{g}(k)$ by
$\phi(A_{i})=B_{i}(\Gamma^{\prime})$, and a corresponding
map on polynomials by $\tilde{\phi}(\tilde{A}_{i})=w^{t}\phi(A_{i})w$. In this
way we obtain a map
$\tilde{\phi}\in\mathrm{Hom}_{S}(I_{X},\mathscr{O}_{X})=H^{0}(X,N_{f})$,
see also [18, th. 6.2], where $S=S_{k}$. Obstructions to such liftings are
known to reside in
$H^{1}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}}\otimes_{k}\ker\pi)$, which we will
prove is zero, see remark 23.
#### 3.1.2. Embedded deformations and tangent spaces
Let us consider the $k[\epsilon]/k$ case. Since
$i:X\hookrightarrow\mathbb{P}^{g-1}$ is non-singular we have the following
exact sequence
$0\rightarrow\mathscr{T}_{X}\rightarrow
i^{*}\mathscr{T}_{\mathbb{P}^{g-1}}\rightarrow\mathscr{N}_{X/\mathbb{P}^{g-1}}\rightarrow
0$
which gives rise to
$0\rightarrow H^{0}(X,\mathscr{T}_{X})\rightarrow H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})\rightarrow H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})\stackrel{\delta}{\longrightarrow}H^{1}(X,\mathscr{T}_{X})\rightarrow H^{1}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})\rightarrow H^{1}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})\rightarrow 0$
###### Remark 23.
In the above sequence, the final $0$ appears because the next term would be
$H^{2}(X,\mathscr{T}_{X})$, a second cohomology group on a curve, which
vanishes. By the Riemann–Roch theorem
we have that $H^{0}(X,\mathscr{T}_{X})=0$ for $g\geq 2$. Also, the relative
Petri theorem implies that the map $\delta$ is onto. We will give an
alternative proof that $\delta$ is onto by proving that
$H^{1}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})=0$. This proves that
$H^{1}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})=0$ as well, so there is no
obstruction to lifting the embedded deformations.
Each of the above spaces has a deformation theoretic interpretation, see [16,
p.96]:
* •
The space $H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})$ is the space of
deformations of the map $i:X\hookrightarrow\mathbb{P}^{g-1}$, that is, both
$X$ and $\mathbb{P}^{g-1}$ are trivially deformed, see [34, p. 158, prop.
3.4.2.(ii)].
* •
The space $H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})$ is the space of embedded
deformations, where $\mathbb{P}^{g-1}$ is trivially deformed, see [18, p. 13,
th. 2.4].
* •
The space $H^{1}(X,\mathscr{T}_{X})$ is the space of all deformations of $X$.
The dimension of the space $H^{1}(X,\mathscr{T}_{X})$ can be computed using
the Riemann–Roch theorem on the dual space $H^{0}(X,\Omega_{X}^{\otimes 2})$ and
equals $3g-3$. In the next section we will give a linear algebra interpretation
of the spaces $H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})$,
$H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})$, allowing us to compute their
dimensions.
### 3.2. Some matrix computations
We begin with the Euler exact sequence (see [17, II.8.13], [37, p. 581] and
[19])
$0\rightarrow\mathscr{O}_{\mathbb{P}^{g-1}}\rightarrow\mathscr{O}_{\mathbb{P}^{g-1}}(1)^{\oplus
g}\rightarrow\mathscr{T}_{\mathbb{P}^{g-1}}\rightarrow 0.$
We restrict this sequence to the curve $X$:
$0\rightarrow\mathscr{O}_{X}\rightarrow
i^{*}\mathscr{O}_{\mathbb{P}^{g-1}}(1)^{\oplus g}=\omega_{X}^{\oplus
g}\rightarrow i^{*}\mathscr{T}_{\mathbb{P}^{g-1}}\rightarrow 0.$
We now take the long exact sequence in cohomology
(16)
$0\rightarrow k=H^{0}(X,\mathscr{O}_{X})\stackrel{f_{1}}{\longrightarrow}H^{0}(X,i^{*}\mathscr{O}_{\mathbb{P}^{g-1}}(1)^{\oplus g})\stackrel{f_{2}}{\longrightarrow}H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})\stackrel{f_{3}}{\longrightarrow}H^{1}(X,\mathscr{O}_{X})\stackrel{f_{4}}{\longrightarrow}H^{1}(X,i^{*}\mathscr{O}_{\mathbb{P}^{g-1}}(1)^{\oplus g})\stackrel{f_{5}}{\longrightarrow}H^{1}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})\rightarrow H^{2}(X,\mathscr{O}_{X})=0$
The spaces involved above have the following dimensions:
* •
$i^{*}\mathscr{O}_{\mathbb{P}^{g-1}}(1)=\Omega_{X}$ (canonical bundle)
* •
$\dim H^{0}(X,i^{*}\mathscr{O}_{\mathbb{P}^{g-1}}(1)^{\oplus g})=g\cdot\dim
H^{0}(X,\Omega_{X})=g^{2}$
* •
$\dim H^{1}(X,\mathscr{O}_{X})=\dim H^{1}(X,\Omega_{X})=g$
* •
$\dim H^{1}(X,i^{*}\mathscr{O}_{\mathbb{P}^{g-1}}(1)^{\oplus g})=g\cdot\dim
H^{0}(X,\mathscr{O}_{X})=g$
We will return to the exact sequence given in eq. (16) and the above dimension
computations in the next section.
#### 3.2.1. Study of $H^{0}(X,N_{f})$
By the relative Petri theorem the elements $\phi(A_{i})$ are quadratic
polynomials considered modulo $I_{X}$, that is, elements of a vector space of
dimension
$(g+1)g/2-\binom{g-2}{2}=3g-3$, where $(g+1)g/2$ is the dimension of the space
of symmetric $g\times g$ matrices and $\binom{g-2}{2}$ is the dimension of the
space spanned by the generators of the canonical ideal, see [10, prop. 9.5].
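The arithmetic identity $(g+1)g/2-\binom{g-2}{2}=3g-3$ is elementary; a quick mechanical verification over a range of genera:

```python
from math import comb

# Check of the dimension count: for every genus g >= 3,
# dim S_g(k) minus the number of quadric generators equals
# (g+1)g/2 - C(g-2, 2) = 3g - 3.
for g in range(3, 50):
    assert (g + 1) * g // 2 - comb(g - 2, 2) == 3 * g - 3
```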
The set of matrices $\\{A_{1},\ldots,A_{r}\\}$ can be assumed to be linearly
independent, but this does not mean that an arbitrary selection of quadratic
elements $\omega^{t}B_{i}\omega\in\mathscr{O}_{X}$ will lead to a homomorphism
of rings. Indeed, the linearly independent elements $A_{i}$ might satisfy some
syzygies; see the following example, where the linearly independent elements
$x^{2}=\begin{pmatrix}x&y\end{pmatrix}\begin{pmatrix}1&0\\\ 0&0\end{pmatrix}\begin{pmatrix}x\\\ y\end{pmatrix}\qquad
xy=\begin{pmatrix}x&y\end{pmatrix}\begin{pmatrix}0&1/2\\\ 1/2&0\end{pmatrix}\begin{pmatrix}x\\\ y\end{pmatrix}$
satisfy the syzygy
$y\cdot x^{2}-x\cdot xy=0.$
Therefore a map of modules $\phi$ should be compatible with the syzygies, that
is, the images $\phi(A_{i})$ must satisfy the same syzygies. This is known as
the fundamental Grothendieck flatness criterion, see [33, 1.1] and also [1,
lem. 5.1, p. 28].
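The syzygy above can be verified mechanically; the following Python sketch (ours) encodes the two displayed matrices and checks $y\cdot x^{2}-x\cdot xy=0$ on a grid of sample points.

```python
from fractions import Fraction
from itertools import product

# With f1 = x^2 and f2 = xy written as w^t A w for the two displayed
# matrices, y*f1 - x*f2 vanishes at every sample point, so any module
# map phi must respect this relation.
A1 = [[1, 0], [0, 0]]                                   # represents x^2
A2 = [[0, Fraction(1, 2)], [Fraction(1, 2), 0]]         # represents xy

def quad(M, w):
    # evaluate w^t M w
    return sum(w[i] * M[i][j] * w[j] for i in range(2) for j in range(2))

for x, y in product(range(-3, 4), repeat=2):
    f1, f2 = quad(A1, (x, y)), quad(A2, (x, y))
    assert f1 == x * x and f2 == x * y
    assert y * f1 - x * f2 == 0                         # the syzygy
```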
###### Proposition 24.
The map
$\displaystyle\psi:M_{g}(k)$
$\displaystyle\longrightarrow\mathrm{Hom}_{S}(I_{X},S/I_{X})=H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})$
$\displaystyle B$
$\displaystyle\longmapsto\psi_{B}:\omega^{t}A_{i}\omega\mapsto\omega^{t}(A_{i}B+B^{t}A_{i})\omega{\;\rm
mod}I_{X}$
identifies the vector space $M_{g}(k)/\langle\mathbb{I}_{g}\rangle$ with
$H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})\subset
H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})$. The map $\psi$ is equivariant,
where $M_{g}(k)$ is equipped with the adjoint action
$B\mapsto\rho(\sigma)B\rho(\sigma^{-1})=\mathrm{Ad}(\sigma)B,\qquad\sigma\in G,$
that is
${}^{\sigma}\psi_{B}=\psi_{\mathrm{Ad}(\sigma)B}.$
###### Proof.
Recall that the space $H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})$ can be
identified with the space of deformations of the map $f$, where $X$ and
$\mathbb{P}^{g-1}$ are both trivially deformed. By [33] a map
$\phi\in\mathrm{Hom}_{S}(I_{X},S/I_{X})=\mathrm{Hom}_{S}(I_{X},\mathscr{O}_{X})$
gives rise to a trivial deformation if there is a map
$w_{j}\mapsto w_{j}+\epsilon\delta_{j}(w),$
where $\delta_{j}(w)=\sum_{\nu=1}^{g}b_{j,\nu}w_{\nu}$. The map can be defined
in terms of the matrix $B=(b_{j,\nu})$,
$w\mapsto w+\epsilon Bw$
so that for all $\tilde{A}_{i}$, $1\leq i\leq r$
(17) $\nabla\tilde{A}_{i}\cdot Bw=\phi(\tilde{A}_{i})=\phi(w^{t}A_{i}w){\;\rm
mod}I_{X}.$
But for $\tilde{A}_{i}=w^{t}A_{i}w$ we compute
$\nabla\tilde{A}_{i}=2w^{t}A_{i}$ (the factor $2$ is harmless in what
follows), therefore eq. (17) is transformed to
(18) $w^{t}A_{i}Bw=w^{t}B_{i}w{\;\rm mod}\;I_{X},$
for a symmetric $g\times g$ matrix $B_{i}$ in $\mathscr{S}_{g}(k[\epsilon])$.
Therefore, if $2$ is invertible, according to remark 20 we may replace the
matrix $A_{i}B$ appearing in eq. (18) by the symmetric matrix $A_{i}B+B^{t}A_{i}$.
Since we are interested in the projective algebraic set defined by homogeneous
polynomials, the $1/2$ factor of remark 20 can be omitted.
For every $B\in M_{g}(k)$ we define the map
$\psi_{B}\in\mathrm{Hom}_{S}(I_{X},S/I_{X})=\mathrm{Hom}_{S}(I_{X},\mathscr{O}_{X})$
given by
$\tilde{A}_{i}=\omega^{t}A_{i}\omega\mapsto\omega^{t}(A_{i}B+B^{t}A_{i})\omega{\;\rm
mod}I_{X},$
and we have just proved that the functions $\psi_{B}$ are all elements in
$H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})$. The kernel of the map
$\psi:B\mapsto\psi_{B}$ consists of all matrices $B$ satisfying
(19) $A_{i}B+B^{t}A_{i}\equiv 0{\;\rm mod}\;I_{X}\text{ for all }1\leq
i\leq\binom{g-2}{2},$
that is, such that the symmetric matrix $A_{i}B+B^{t}A_{i}$ lies in
$\langle A_{1},\ldots,A_{r}\rangle$.
This kernel seems to depend on the selection of the elements $A_{i}$, but this
is not the case: we will prove that it consists of all multiples of the
identity matrix. Indeed,
$\dim H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})=g^{2}-\dim\ker\psi.$
Rewriting the spaces in eq. (16) by their dimensions we get
$(0)\rightarrow(1)\stackrel{f_{1}}{\longrightarrow}(g^{2})\stackrel{f_{2}}{\longrightarrow}(g^{2}-\dim\ker\psi)\stackrel{f_{3}}{\longrightarrow}(g)\rightarrow(g)\rightarrow(?)\rightarrow(0)$
So
* •
$\dim\ker f_{2}=\dim\operatorname{Im}f_{1}=1$
* •
$\dim\ker f_{3}=\dim\operatorname{Im}f_{2}=g^{2}-1$
* •
$\dim\operatorname{Im}f_{3}=(g^{2}-\dim\ker\psi)-(g^{2}-1)=1-\dim\ker\psi$
It is immediate that $\dim\ker\psi=0\text{ or }1$. But obviously
$\mathbb{I}_{g}\in\ker\psi$, since
$A_{i}\mathbb{I}_{g}+\mathbb{I}_{g}^{t}A_{i}=2A_{i}\in\langle A_{1},\ldots,A_{r}\rangle$,
and hence
$\dim\ker\psi=1.$
Finally $\dim\operatorname{Im}f_{3}=0$, i.e. $f_{3}$ is the zero map, and we
get the short exact sequence
$0\rightarrow k=H^{0}(X,\mathscr{O}_{X})\rightarrow H^{0}(X,i^{*}\mathscr{O}_{\mathbb{P}^{g-1}}(1)^{\oplus g})\rightarrow H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})\rightarrow 0$
It follows that
$\dim H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})=g^{2}-1.$
We have proved that $\psi:M_{g}(k)/\langle\mathbb{I}_{g}\rangle\rightarrow
H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})$ is an isomorphism of vector
spaces. We will now prove it is equivariant.
Using remark 10, the action of the group $G$ on the function
$\psi_{B}:A_{i}\mapsto A_{i}B+B^{t}A_{i},$
seen as an element in $H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})$, is given by:
$\displaystyle A_{i}$ $\displaystyle\mapsto
T(\sigma^{-1})A_{i}\stackrel{{\scriptstyle\psi_{B}}}{{\longmapsto}}T(\sigma)\left(\rho(\sigma)^{t}A_{i}\rho(\sigma)B+B^{t}\rho(\sigma)^{t}A_{i}\rho(\sigma)\right)$
$\displaystyle=A_{i}\rho(\sigma)B\rho(\sigma^{-1})+\left(\rho(\sigma)B\rho(\sigma^{-1})\right)^{t}A_{i}=\psi_{\mathrm{Ad}(\sigma)B}(A_{i}),$
which proves the equivariance.
∎
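Two computational sanity checks behind the proof, on hypothetical sample matrices of our choosing: the symmetrized matrix $A_{i}B+B^{t}A_{i}$ is indeed symmetric and defines twice the quadratic form of $A_{i}B$, and the alternating sum of the dimensions in eq. (16) vanishes, as exactness requires.

```python
from itertools import product

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(X):
    return [list(r) for r in zip(*X)]

def quad(M, w):
    n = len(M)
    return sum(w[i] * M[i][j] * w[j] for i in range(n) for j in range(n))

# (1) For symmetric A and arbitrary B, the matrix A*B + B^t*A is
#     symmetric and its quadratic form is twice that of A*B, which is
#     the replacement performed after eq. (18).
A = [[2, 1, 0], [1, 3, 5], [0, 5, 7]]      # symmetric sample
B = [[1, 4, 0], [2, 0, 1], [3, 1, 2]]      # arbitrary sample
AB = mul(A, B)
BtA = mul(tr(B), A)
S = [[AB[i][j] + BtA[i][j] for j in range(3)] for i in range(3)]
assert S == tr(S)
for w in product(range(-2, 3), repeat=3):
    assert quad(S, w) == 2 * quad(AB, w)

# (2) The dimension chase: the alternating sum of the dimensions
#     1, g^2, g^2 - 1, g, g, 0 in the exact sequence (16) vanishes.
for g in range(2, 30):
    dims = [1, g * g, g * g - 1, g, g, 0]
    assert sum((-1) ** i * d for i, d in enumerate(dims)) == 0
```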
###### Corollary 25.
The space $H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})^{G}$ is generated by
the elements $\psi_{B}$, with $B\notin\\{\lambda\mathbb{I}_{g}:\lambda\in k\\}$,
such that
$\rho(\sigma)B\rho(\sigma^{-1})B^{-1}=[\rho(\sigma),B]\in\langle
A_{1},\ldots,A_{r}\rangle\text{ for all }\sigma\in\mathrm{Aut}(X).$
###### Remark 26.
This construction allows us to compute the space
$H^{1}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})$. Indeed, since $f_{3}$ is the
zero map, $f_{4}$ is injective between spaces of the same dimension $g$, hence
an isomorphism; consequently $f_{5}$ is the zero map. On the other hand
$f_{5}$ is surjective, since $H^{2}(X,\mathscr{O}_{X})=0$; it follows that
$H^{1}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})=0$.
This provides us with another proof of the exactness of the sequence
(20)
$0\rightarrow H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})\rightarrow H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})\stackrel{\delta}{\longrightarrow}H^{1}(X,\mathscr{T}_{X})\rightarrow 0$
### 3.3. Invariant spaces
Let
$0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$
be a short exact sequence of $G$-modules. We have the following sequence of
$G$-invariant spaces
$0\rightarrow A^{G}\rightarrow B^{G}\rightarrow
C^{G}\stackrel{{\scriptstyle\delta_{G}}}{{\longrightarrow}}H^{1}(G,A)\rightarrow\cdots$
where the map $\delta_{G}$ is computed as follows: an element $c\in C^{G}$ is
given as a class $b{\;\rm mod}\;A$, and its invariance means that
$\sigma b-b=a_{\sigma}\in A$ for all $\sigma\in G$. The map
$G\ni\sigma\mapsto a_{\sigma}$ is the cocycle defining $\delta_{G}(c)\in
H^{1}(G,A)$.
Using this construction on the short exact sequence of eq. (20) we arrive at
$0\rightarrow H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})^{G}\rightarrow H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})^{G}\stackrel{\delta}{\longrightarrow}H^{1}(X,\mathscr{T}_{X})^{G}\stackrel{\delta_{G}}{\longrightarrow}H^{1}\big{(}G,H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})\big{)}\rightarrow\cdots$
We will use eq. (20) in order to represent elements in
$H^{1}(X,\mathscr{T}_{X})$ as elements $[f]\in
H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})/H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})=H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})/\mathrm{Im}\psi$.
###### Proposition 27.
Let $[f]\in H^{1}(X,\mathscr{T}_{X})^{G}$ be a class of a map
$f:I_{X}\rightarrow S/I_{X}$ modulo $\mathrm{Im}\psi$. For each element
$\sigma\in G$ there is a matrix $B_{\sigma}[f]$, depending on $f$, which
defines a class in $M_{g}(k)/\langle\mathbb{I}_{g}\rangle$ satisfying the
cocycle condition in eq. (22), such that
$\delta_{G}(f)(\sigma):A_{i}\mapsto
A_{i}\left(B_{\sigma}[f]\right)+\left(B_{\sigma}^{t}[f]\right)A_{i}{\;\rm
mod}\;\langle A_{1},\ldots,A_{r}\rangle.$
###### Proof.
Let $[f]\in H^{1}(X,\mathscr{T}_{X})^{G}$, where $f:I_{X}\rightarrow S/I_{X}$,
that is $f\in H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})$. The class
$\delta_{G}(f)$ is represented by a $1$-cocycle given by
$\delta_{G}(f)(\sigma)={}^{\sigma}\\!f-f$. Using the equivariant isomorphism
$\psi:M_{g}(k)/\langle\mathbb{I}_{g}\rangle\rightarrow
H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})$ of proposition 24 we arrive at
the diagram:
$\begin{array}{ccccc}G&\longrightarrow&H^{0}(X,i^{*}\mathscr{T}_{\mathbb{P}^{g-1}})&\stackrel{\psi^{-1}}{\longrightarrow}&M_{g}(k)/\langle\mathbb{I}_{g}\rangle\\\ \sigma&\longmapsto&\delta_{G}(f)(\sigma)&\longmapsto&B_{\sigma}[f]:=\psi^{-1}(\delta_{G}(f)(\sigma))\end{array}$
We will now compute
${}^{\sigma}\\!f:A_{i}\stackrel{T(\sigma^{-1})}{\longmapsto}T(\sigma^{-1})A_{i}\stackrel{f}{\longmapsto}f(T(\sigma^{-1})A_{i})\stackrel{T(\sigma)}{\longmapsto}T(\sigma)f(T(\sigma^{-1})A_{i}).$
We set
$T(\sigma^{-1})(A_{i})=\rho(\sigma)^{t}A_{i}\rho(\sigma)=\sum_{\nu=1}^{r}\displaystyle\lambda_{i,\nu}(\sigma)A_{\nu}$
so
(21) $\displaystyle\delta_{G}(f)(\sigma)(A_{i})$
$\displaystyle=\sum_{\nu=1}^{r}\displaystyle\lambda_{i,\nu}(\sigma)\cdot\rho(\sigma^{-1})^{t}f(A_{\nu})\rho(\sigma^{-1})-f(A_{i})$
$\displaystyle=A_{i}B_{\sigma}[f]+B_{\sigma}[f]^{t}A_{i}{\;\rm mod}I_{X}$
for some matrix $B_{\sigma}[f]\in M_{g}(k)$ such that for all $\sigma,\tau\in
G$ we have
(22) $\displaystyle B_{\sigma\tau}[f]$ $\displaystyle=B_{\sigma}[f]+\sigma
B_{\tau}[f]\sigma^{-1}+\lambda(\sigma,\tau)\mathbb{I}_{g}$
$\displaystyle=B_{\sigma}[f]+\mathrm{Ad}(\sigma)B_{\tau}[f]+\lambda(\sigma,\tau)\mathbb{I}_{g}.$
In the above equation we have used the fact that $\sigma\mapsto B_{\sigma}[f]$
is a $1$-cocycle in the quotient space $M_{g}(k)/\langle\mathbb{I}_{g}\rangle$;
therefore the cocycle condition holds up to an element of the form
$\lambda(\sigma,\tau)\mathbb{I}_{g}$. ∎
###### Remark 28.
Let
$\lambda(\sigma,\tau)\mathbb{I}_{g}=B_{\sigma\tau}[f]-B_{\sigma}[f]-\rm{Ad}(\sigma)B_{\tau}[f].$
The map $G\times G\rightarrow k$, $(\sigma,\tau)\mapsto\lambda(\sigma,\tau)$
is a normalized 2-cocycle (see [39, p. 184]), that is
$\lambda(\sigma,1)=\lambda(1,\sigma)=0\text{ for all }\sigma\in G$
and
${\rm Ad}(\sigma_{1})\lambda(\sigma_{2},\sigma_{3})-\lambda(\sigma_{1}\sigma_{2},\sigma_{3})+\lambda(\sigma_{1},\sigma_{2}\sigma_{3})-\lambda(\sigma_{1},\sigma_{2})=0\text{ for all }\sigma_{1},\sigma_{2},\sigma_{3}\in G,$
which, since the $\mathrm{Ad}$-action is trivial on scalar multiples of the
identity, reads
$\lambda(\sigma_{2},\sigma_{3})-\lambda(\sigma_{1}\sigma_{2},\sigma_{3})+\lambda(\sigma_{1},\sigma_{2}\sigma_{3})-\lambda(\sigma_{1},\sigma_{2})=0\text{ for all }\sigma_{1},\sigma_{2},\sigma_{3}\in G.$
###### Proof.
The first equation is clear. For the second one,
$\lambda(\sigma_{1}\sigma_{2},\sigma_{3})\mathbb{I}_{g}=B_{\sigma_{1}\sigma_{2}\sigma_{3}}[f]-B_{\sigma_{1}\sigma_{2}}[f]-\rm{Ad}(\sigma_{1}\sigma_{2})B_{\sigma_{3}}[f]$
and
$\lambda(\sigma_{1},\sigma_{2})\mathbb{I}_{g}=B_{\sigma_{1}\sigma_{2}}[f]-B_{\sigma_{1}}[f]-\rm{Ad}(\sigma_{1})B_{\sigma_{2}}[f].$
Hence
$\displaystyle\lambda(\sigma_{1}\sigma_{2},\sigma_{3})\mathbb{I}_{g}+\lambda(\sigma_{1},\sigma_{2})\mathbb{I}_{g}=$
$\displaystyle
B_{\sigma_{1}\sigma_{2}\sigma_{3}}[f]-\rm{Ad}(\sigma_{1}\sigma_{2})B_{\sigma_{3}}[f]-B_{\sigma_{1}}[f]-\rm{Ad}(\sigma_{1})B_{\sigma_{2}}[f]$
$\displaystyle=$ $\displaystyle
B_{\sigma_{1}\sigma_{2}\sigma_{3}}[f]-B_{\sigma_{1}}[f]-\rm{Ad}(\sigma_{1})B_{\sigma_{2}\sigma_{3}}[f]+$
$\displaystyle+\rm{Ad}(\sigma_{1})B_{\sigma_{2}\sigma_{3}}[f]-\rm{Ad}(\sigma_{1})B_{\sigma_{2}}[f]-\rm{Ad}(\sigma_{1}\sigma_{2})B_{\sigma_{3}}[f]$
$\displaystyle=$
$\displaystyle\lambda(\sigma_{1},\sigma_{2}\sigma_{3})\mathbb{I}_{g}+\rm{Ad}(\sigma_{1})\big{(}B_{\sigma_{2}\sigma_{3}}[f]-B_{\sigma_{2}}[f]-\rm{Ad}(\sigma_{2})B_{\sigma_{3}}[f]\big{)}$
$\displaystyle=$
$\displaystyle\rm{Ad}(\sigma_{1})\lambda(\sigma_{2},\sigma_{3})\mathbb{I}_{g}+\lambda(\sigma_{1},\sigma_{2}\sigma_{3})\mathbb{I}_{g}.$
∎
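The normalized-2-cocycle identities can be illustrated on a toy example of our choosing, unrelated to the group $G$ of the text: any coboundary $\lambda(\sigma,\tau)=\mu(\sigma)+\mu(\tau)-\mu(\sigma\tau)$ with trivial action and $\mu(1)=0$ satisfies them, here checked on $\mathbb{Z}/3\mathbb{Z}$.

```python
from itertools import product

# Toy check of the normalized 2-cocycle identities with trivial action
# on G = Z/3Z (0 plays the role of the identity element).
n = 3
mu = {0: 0, 1: 5, 2: -2}           # any function with mu(identity) = 0
lam = {(s, t): mu[s] + mu[t] - mu[(s + t) % n]
       for s, t in product(range(n), repeat=2)}

# normalization: lambda(s, 1) = lambda(1, s) = 0
assert all(lam[(s, 0)] == 0 and lam[(0, s)] == 0 for s in range(n))
# the 2-cocycle condition
assert all(
    lam[(t, u)] - lam[((s + t) % n, u)] + lam[(s, (t + u) % n)] - lam[(s, t)] == 0
    for s, t, u in product(range(n), repeat=3)
)
```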
###### Corollary 29.
If $f(\omega^{t}A_{i}\omega)=\omega^{t}B_{i}\omega$, where $B_{i}\in M_{g}(k)$
are the images of the elements defining the canonical ideal in the small
extension $\Gamma^{\prime}\rightarrow\Gamma$, then the symmetric matrices
defining the canonical ideal $I_{X}(\Gamma^{\prime})$ are given by
$A_{i}+E\cdot B_{i}$. Using proposition 27 we have
(23) $\displaystyle(^{\sigma}f-f)(A_{i})$
$\displaystyle=\sum_{\nu=1}^{r}\lambda_{i,\nu}(\sigma)T(\sigma)(B_{\nu})-B_{i}$
$\displaystyle=\left(A_{i}B_{\sigma}[f]+B_{\sigma}^{t}[f]A_{i}\right){\;\rm
mod}\langle A_{1},\ldots,A_{r}\rangle$
$\displaystyle=\psi_{B_{\sigma}[f]}(A_{i}).$
Therefore, using also eq. (21)
(24)
$\sum_{\nu=1}^{r}\lambda_{i,\nu}(\sigma)(B_{\nu})-T(\sigma^{-1})B_{i}=T(\sigma^{-1})\psi_{B_{\sigma}[f]}(A_{i}).$
## 4\. On the deformation theory of curves with automorphisms
Let $0\rightarrow\langle
E\rangle\rightarrow\Gamma^{\prime}\rightarrow\Gamma\rightarrow 0$ be a small
extension of Artin local algebras and consider the diagram
$\begin{array}{ccccc}X_{\Gamma}&\longrightarrow&X_{\Gamma^{\prime}}&\longrightarrow&\mathscr{X}\\\ \big\downarrow&&\big\downarrow&&\big\downarrow\\\ \mathrm{Spec}(\Gamma)&\longrightarrow&\mathrm{Spec}(\Gamma^{\prime})&\longrightarrow&\mathrm{Spec}(R)\end{array}$
Suppose that $G$ acts on $X_{\Gamma}$, that is every automorphism $\sigma\in
G$ satisfies $\sigma(I_{X_{\Gamma}})=I_{X_{\Gamma}}$. If the action of the
group $G$ is lifted to $X_{\Gamma^{\prime}}$ then we should have a lift of the
representations $\rho,\rho^{(1)}$ defined in eq. (2), (3) to $\Gamma^{\prime}$
as well. The set of all such liftings is a principal homogeneous space
parametrized by the spaces $H^{1}(G,M_{g}(k)),H^{1}(G,M_{r}(k))$, provided
that the corresponding lifting obstructions in
$H^{2}(G,M_{g}(k)),H^{2}(G,M_{r}(k))$ both vanish.
Assume that there is a lifting of the representation
(25)
$\textstyle{\mathrm{GL}_{g}(\Gamma^{\prime})\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{{\;\rm
mod}\langle
E\rangle}$$\textstyle{G\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\rho_{\Gamma}}$$\scriptstyle{\rho_{\Gamma^{\prime}}}$$\textstyle{\mathrm{GL}_{g}(\Gamma)}$
This lift gives rise to a lifting of the corresponding automorphism group to
the curve $X_{\Gamma^{\prime}}$ if
$\rho_{\Gamma^{\prime}}(\sigma)I_{X_{\Gamma^{\prime}}}=I_{X_{\Gamma^{\prime}}}\quad\text{
for all }\sigma\in G,$
that is if the relative canonical ideal is invariant under the action of the
lifted representation $\rho_{\Gamma^{\prime}}$. In this case the free
$\Gamma^{\prime}$-modules $V_{\Gamma^{\prime}}$, defined in remark 8, are
$G$-invariant and the $T$-action, as defined in definition 11.1, restricts to a
lift of the representation
(26)
$\textstyle{\mathrm{GL}_{r}(\Gamma^{\prime})\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{{\;\rm
mod}\langle
E\rangle}$$\textstyle{G\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\rho^{(1)}_{\Gamma}}$$\scriptstyle{\rho^{(1)}_{\Gamma^{\prime}}}$$\textstyle{\mathrm{GL}_{r}(\Gamma)}$
In [24, sec. 2.2] we gave an efficient way to check this compatibility in
terms of linear algebra:
Consider an ordered basis $\Sigma$ of the free $\Gamma$-module
$\mathscr{S}_{g}(\Gamma)$ generated by the matrices
$\Sigma(ij)=(\sigma(ij))_{\nu,\mu}$, $1\leq i\leq j\leq g$ ordered
lexicographically, with elements
$\sigma(ij)_{\nu,\mu}=\begin{cases}\delta_{i,\nu}\delta_{j,\mu}+\delta_{i,\mu}\delta_{j,\nu},&\text{
if }i\neq j\\\ \delta_{i,\nu}\delta_{i,\mu}&\text{ if }i=j.\end{cases}$
For example, for $g=2$ we have the elements
$\sigma(11)=\begin{pmatrix}1&0\\\
0&0\end{pmatrix}\quad\sigma(12)=\begin{pmatrix}0&1\\\
1&0\end{pmatrix}\quad\sigma(22)=\begin{pmatrix}0&0\\\ 0&1\end{pmatrix}.$
For every symmetric matrix $A$, let $F(A)$ be the column vector consisting of
the coordinates of $A$ in the basis $\Sigma$. Consider the symmetric matrices
$A_{1}^{\Gamma^{\prime}},\ldots,A_{r}^{\Gamma^{\prime}}$, which exist since at
the level of curves there is no obstruction to the embedded deformation. For
each $\sigma\in G$ the $(g+1)g/2\times 2r$ matrix
(27)
$F_{\Gamma^{\prime}}(\sigma)=\left[F\left(A_{1}^{\Gamma^{\prime}}\right),\ldots,F\left(A_{r}^{\Gamma^{\prime}}\right),F\left(\rho_{\Gamma^{\prime}}(\sigma)^{t}A_{1}^{\Gamma^{\prime}}\rho_{\Gamma^{\prime}}(\sigma)\right),\ldots,F\left(\rho_{\Gamma^{\prime}}(\sigma)^{t}A_{r}^{\Gamma^{\prime}}\rho_{\Gamma^{\prime}}(\sigma)\right)\right].$
The automorphism $\sigma$ acting on the relative curve $X_{\Gamma}$ is lifted
to an automorphism $\sigma$ of $X_{\Gamma^{\prime}}$ if and only if the matrix
given in eq. (27) has rank $r$.
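This rank test is straightforward to carry out numerically. The sketch below is purely illustrative (toy data; the names `sym_to_vec` and `automorphism_lifts` are ours, not from [24]): it vectorizes symmetric matrices in the basis $\Sigma$ and checks whether the matrix of eq. (27) has rank $r$.

```python
import numpy as np

def sym_to_vec(A):
    """Coordinates of a symmetric g x g matrix A in the ordered basis Sigma:
    for i <= j the sigma(ij)-coordinate of A is simply the entry A[i, j]."""
    g = A.shape[0]
    return np.array([A[i, j] for i in range(g) for j in range(i, g)])

def automorphism_lifts(A_list, rho):
    """Rank test of eq. (27): stack the columns F(A_k) and F(rho^t A_k rho)
    into a (g+1)g/2 x 2r matrix and compare its rank with r."""
    cols = [sym_to_vec(A) for A in A_list]
    cols += [sym_to_vec(rho.T @ A @ rho) for A in A_list]
    F = np.column_stack(cols)
    return np.linalg.matrix_rank(F) == len(A_list)

# Hypothetical toy data (g = 3, r = 1): the identity trivially lifts,
# while a generic diagonal rho moves the span of the A_i off itself.
A1 = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
print(automorphism_lifts([A1], np.eye(3)))                  # True
print(automorphism_lifts([A1], np.diag([1.0, 2.0, 3.0])))   # False
```

In the second call the two column blocks are linearly independent, so the stacked matrix has rank $2r$ and the condition fails.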
###### Proposition 30.
Lifting an automorphism of $X_{\Gamma}$ to $X_{\Gamma^{\prime}}$ involves a
global obstruction, given by the vanishing of the class of
$A(\sigma,\tau)=\rho_{\Gamma^{\prime}}(\sigma)\rho_{\Gamma^{\prime}}(\tau)\rho_{\Gamma^{\prime}}(\sigma\tau)^{-1}$
in $H^{2}(G,M_{g}(k))$, and a compatibility rank condition, requiring
that the rank of the matrix $F_{\Gamma^{\prime}}(\sigma)$ equals $r$ for all elements
$\sigma\in G$.
### 4.1. An example
Let $k$ be an algebraically closed field of positive characteristic $p>0$.
Consider the Hermitian curve, defined over $k$, given by the equation
(28) $H:y^{p}-y=\frac{1}{x^{p+1}},$
which has the group $\mathrm{PGU}(3,p^{2})$ as an automorphism group, [38, th.
7]. As an Artin-Schreier extension of the projective line, this curve fits
within the Bertin-Mézard model of curves, and the deformation functor with
respect to the subgroup
$\mathbb{Z}/p\mathbb{Z}\cong\mathrm{Gal}(H/\mathbb{P}^{1})=\\{y\mapsto y+1\\}$
has versal deformation ring $W(k)[\zeta][[x_{1}]]$, where $\zeta$ is a
primitive $p$-th root of unity which resides in an algebraic extension of
$\mathrm{Quot}(W(k))$ [4], [21]. Indeed, $m=p+1=2p-(p-1)=qp-l$, so in the
notation of [4] $q=2$ and $l=p-1$.
The reduction of the universal curve in the Bertin-Mézard model modulo
$\mathfrak{m}_{W(k)[\zeta]}$ is given by the Artin-Schreier equation:
(29) $X^{p}-X=\frac{x^{p-1}}{(x^{2}+x_{1}x)^{p}}$
whose special fibre at the specialization $x_{1}=0$ is the original Hermitian
curve given in eq. (28).
The initial Hermitian curve admits the automorphism $\sigma:y\mapsto
y,x\mapsto\zeta_{p+1}x$, where $\zeta_{p+1}$ is a primitive $(p+1)$-th root of
unity. We will use the tools developed in this article in order to show that
the automorphism $\sigma$ does not lift even in positive characteristic.
We set $a(x)=x^{2}+x_{1}x$ and $\lambda=\zeta-1\in W(k)[\zeta]$. In [21] the
first author together with S. Karanikolopoulos proved that the free $R$-module
$H^{0}(\mathscr{X},\Omega_{\mathscr{X}/R})$ has basis
$\mathbf{c}=\left\\{W_{N,\mu}=\frac{x^{N}a(x)^{p-1-\mu}X^{p-1-\mu}}{a(x)^{p-1}(\lambda
X+1)^{p-1}}dx:\left\lfloor\frac{\mu\ell}{p}\right\rfloor\leq N\leq\mu
q-2,\;1\leq\mu\leq p-1\right\\}.$
From the form of the holomorphic differentials it is clear that the
representation of $\langle\sigma\rangle$ on $H^{0}(H,\Omega_{H/k})$ is
diagonal, since $a(x)=x^{2}+x_{1}x$ reduces to $x^{2}$ for $x_{1}=0$. In our
example, we have $q=\deg a(x)=2$ so in the special fibre we have
$w_{N,\mu}=x^{N-2\mu}X^{p-1-\mu}dx\quad\text{ and }\quad\sigma(w_{N,\mu})=\zeta_{p+1}^{N-2\mu+1}w_{N,\mu}$
and
(30)
$\sigma(w_{N,\mu}w_{N^{\prime},\mu^{\prime}})=\zeta_{p+1}^{N+N^{\prime}-2(\mu+\mu^{\prime})+2}w_{N,\mu}w_{N^{\prime},\mu^{\prime}}.$
Thus, the action of $\sigma$ on holomorphic differentials on the special fibre
is given by a diagonal matrix.
To decide, using the tools developed in this article, whether the action lifts
to the Artin local ring $k[\epsilon]$, we have to see first whether the
diagonal representation can be lifted, that is whether we have the following
commutative diagram:
$\textstyle{\mathrm{GL}_{g}(k[\epsilon])\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\langle\sigma\rangle\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\rho}$$\scriptstyle{\tilde{\rho}}$$\textstyle{\mathrm{GL}_{g}(k)}$
Since $\rho(\sigma)=\mathrm{diag}(\delta_{1},\ldots,\delta_{g})=:\Delta$, a
possible lift will be given by $\tilde{\rho}(\sigma)=\Delta+\epsilon B$, for
some $g\times g$ matrix $B$ with entries in $k$. The latter element should have
order $p+1$, that is
$\mathbb{I}_{g}=(\Delta+\epsilon B)^{p+1}=\Delta^{p+1}+\epsilon\Delta^{p}B,$
which in turn implies that $\Delta^{p}B=0$ and since $\Delta$ is invertible
$B=0$. This means that the representation of the cyclic group generated by
$\sigma$ is trivially deformed to a representation into
$\mathrm{GL}_{g}(k[\epsilon])$.
The next step is to investigate whether the canonical ideal is kept invariant
under the action of $\sigma$ for $x_{1}\neq 0$. The canonical ideal for
Bertin-Mézard curves was recently studied by H. Charalambous, K. Karagiannis and
the first author [6]. Namely, using the notation of [6] we have
$\displaystyle a(x)^{p-i}$
$\displaystyle=(x^{2}+x_{1}x)^{p-i}=\sum_{j=j_{\min}}^{2(p-i)}c_{j,p-i}x^{j}$
$\displaystyle=\sum_{j=0}^{p-i}\binom{p-i}{j}x_{1}^{p-i-j}x^{j+p-i}$
so by setting $J=j+p-i$, $p-i\leq J\leq 2(p-i)$ we have
$c_{J,p-i}=\begin{cases}\binom{p-i}{J-(p-i)}x_{1}^{2(p-i)-J}&\text{ if }J\geq
p-i\\\ 0&\text{ if }J<p-i\end{cases}$
This means that $c_{2(p-i),p-i}=1$, $c_{2(p-i)-1,p-i}=(p-i)x_{1}$ and for all
other values of $J$, the quantity $c_{J,p-i}$ is either zero or a monomial in
$x_{1}$ of degree $\geq 2$.
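These coefficient claims can be sanity-checked with a short computation for small hypothetical values of $p$ and $i$ (the binomial expansion itself is characteristic-free at this level; the snippet is only an illustration):

```python
from math import comb

p, i = 5, 2                        # hypothetical small values
n = p - i                          # expand a(x)^n = (x^2 + x_1 x)^n

# (x^2 + x_1 x)^n = sum_j C(n, j) x_1^(n-j) x^(j+n), so the coefficient
# of x^J is recorded below as the pair (integer factor, power of x_1):
c = {j + n: (comb(n, j), n - j) for j in range(n + 1)}

assert c[2 * n] == (1, 0)          # c_{2(p-i), p-i} = 1
assert c[2 * n - 1] == (n, 1)      # c_{2(p-i)-1, p-i} = (p-i) * x_1
# every other nonzero coefficient is a monomial in x_1 of degree >= 2
assert all(e >= 2 for J, (_, e) in c.items() if J < 2 * n - 1)
```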
It is proved in [6] that the canonical ideal is generated by two sets of
generators $G_{1}^{\mathbf{c}}$ and $G_{2}^{\mathbf{c}}$ given by:
$G_{1}^{\mathbf{c}}=\\{W_{N_{1},\mu_{1}}W_{N^{\prime}_{1},\mu^{\prime}_{1}}-W_{N_{2},\mu_{2}}W_{N^{\prime}_{2},\mu^{\prime}_{2}}\in
S\;:\;W_{N_{1},\mu_{1}}W_{N^{\prime}_{1},\mu^{\prime}_{1}},W_{N_{2},\mu_{2}}W_{N^{\prime}_{2},\mu^{\prime}_{2}}\in\mathbb{T}^{2}\\\
\text{ and
}N_{1}+N_{1}^{\prime}=N_{2}+N_{2}^{\prime},\quad\mu_{1}+\mu_{1}^{\prime}=\mu_{2}+\mu_{2}^{\prime}\\}.$
$G_{2}^{\mathbf{c}}=\bigg{\\{}W_{N,\mu}W_{N^{\prime},\mu^{\prime}}-W_{N^{\prime\prime},\mu^{\prime\prime}}W_{N^{\prime\prime\prime},\mu^{\prime\prime\prime}}\\\
+\sum_{i=1}^{p-1}\sum_{j=j_{\min}(i)}^{(p-i)q}\lambda^{i-p}\binom{p}{i}c_{j,p-i}W_{N_{j},\mu_{i}}W_{N_{j}^{\prime},\mu_{i}^{\prime}}\in
S\;:\;\\\
N^{\prime\prime}+N^{\prime\prime\prime}=N+N^{\prime}+p-1,\quad\mu^{\prime\prime}+\mu^{\prime\prime\prime}=\mu+\mu^{\prime}+p,\\\
N_{j}+N_{j}^{\prime}=N+N^{\prime}+j,\quad\mu_{i}+\mu_{i}^{\prime}=\mu+\mu^{\prime}+p-i\\\
\text{ for }0\leq i\leq p,\;j_{\min}(i)\leq j\leq(p-i)q\bigg{\\}}.$
The reduction modulo $\mathfrak{m}_{W(k)[\zeta]}$ of the set
$G_{1}^{\mathbf{c}}$ is given by simply replacing each $W_{N,\mu}$ by
$w_{N,\mu}$ and does not depend on $x_{1}$. Therefore it does not give us any
condition on the deformation of $\sigma$.
The reduction of the set $G_{2}^{\mathbf{c}}$ modulo
$\mathfrak{m}_{W(k)[\zeta]}$ is given by
$G_{2}^{\mathbf{c}}\otimes_{R}k=\bigg{\\{}w_{N,\mu}w_{N^{\prime},\mu^{\prime}}-w_{N^{\prime\prime},\mu^{\prime\prime}}w_{N^{\prime\prime\prime},\mu^{\prime\prime\prime}}-\sum_{j=j_{\min}(1)}^{(p-1)q}c_{j,p-1}w_{N_{j},\mu_{j}}w_{N^{\prime}_{j},\mu^{\prime}_{j}}\in
S\;:\;\\\
N^{\prime\prime}+N^{\prime\prime\prime}=N+N^{\prime}+p-1,\quad\mu^{\prime\prime}+\mu^{\prime\prime\prime}=\mu+\mu^{\prime}+p,\\\
N_{j}+N_{j}^{\prime}=N+N^{\prime}+j,\quad\mu_{i}+\mu_{i}^{\prime}=\mu+\mu^{\prime}+p-i\\\
\text{ for }j_{\min}(1)\leq j\leq(p-1)q\bigg{\\}}.$
If we further consider this set modulo $\langle x_{1}^{2}\rangle$, that is, if
we consider the canonical curve as a family over first-order infinitesimals,
then only the terms $c_{2(p-1),p-1}=1$, $c_{2(p-1)-1,p-1}=(p-1)x_{1}$
survive.
Using eq. (30) and the definition of $G_{2}^{\mathbf{c}}$ we have that for
$W=w_{N,\mu}w_{N^{\prime},\mu^{\prime}}-w_{N^{\prime\prime},\mu^{\prime\prime}}w_{N^{\prime\prime\prime},\mu^{\prime\prime\prime}}-w_{N_{2(p-1)},\mu_{p-1}}w_{N^{\prime}_{2(p-1)},\mu^{\prime}_{p-1}}$
$\sigma(W)=\zeta_{p+1}^{N+N^{\prime}-2(\mu+\mu^{\prime})+2}W$
Set
$W^{\prime\prime}=w_{N_{2(p-1)-1},\mu_{p-1}}w_{N^{\prime}_{2(p-1)-1},\mu^{\prime}_{p-1}}.$
The automorphism lifts if and only if the element
$W^{\prime}=W+x_{1}W^{\prime\prime}$
satisfies
$\sigma(W^{\prime})=\chi(\sigma)\big{(}W^{\prime}\big{)}$
for some character $\chi$ of $\langle\sigma\rangle$.
But this is not possible, since
$\sigma(W^{\prime\prime})=\zeta_{p+1}^{N_{2(p-1)-1}+N^{\prime}_{2(p-1)-1}-2(\mu_{p-1}+\mu^{\prime}_{p-1})+2}W^{\prime\prime}$
and
$\displaystyle N_{2(p-1)-1}+N^{\prime}_{2(p-1)-1}-2(\mu_{p-1}+\mu^{\prime}_{p-1})+2$
$\displaystyle=N+N^{\prime}-2(\mu+\mu^{\prime})+2-1,$
so $W$ and $W^{\prime\prime}$ lie in different $\sigma$-eigenspaces.
### 4.2. A tangent space condition
All lifts of $X_{\Gamma}$ to $X_{\Gamma^{\prime}}$ form a principal
homogeneous space under the action of
$H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})$. This paragraph aims to prove
the compatibility relation given in eq. (4), relating the deformations of
the curve to those of the representations.
Let $\\{A_{1}^{\Gamma},\dots,A_{r}^{\Gamma}\\}$ be a basis of the canonical
ideal $I_{X_{\Gamma}}$, where $X_{\Gamma}$ is a canonical curve. Assume that
the special fibre is acted on by the group $G$ and that this action lifts to
the relative curve $X_{\Gamma}$. Since
$X_{\Gamma}$ is assumed to be acted on by $G$, we have the action
(31)
$T(\sigma^{-1})(A_{i}^{\Gamma})=\rho_{\Gamma}(\sigma)^{t}A_{i}^{\Gamma}\rho_{\Gamma}(\sigma)=\sum_{j}\lambda_{i,j}^{\Gamma}(\sigma)A_{j}^{\Gamma}\text{
for each }i=1,\dots,r,$
where $\rho_{\Gamma}$ is a lift of the representation $\rho$ induced by the
action of $G$ on $H^{0}(X_{\Gamma},\Omega_{X/\Gamma})$, and
$\lambda_{i,j}^{\Gamma}(\sigma)$ are the entries of the matrix of the lifted
representation $\rho^{(1)}_{\Gamma}$ induced by the action of $G$ on
$A_{1}^{\Gamma},\ldots,A_{r}^{\Gamma}$. Notice that the matrix
$\rho_{\Gamma}(\sigma)\in\mathrm{GL}_{g}(\Gamma)$. We will denote by
$A_{1}^{\Gamma^{\prime}},\ldots,A_{r}^{\Gamma^{\prime}}\in\mathscr{S}_{g}(\Gamma^{\prime})$
a set of liftings of the matrices $A_{1}^{\Gamma},\ldots,A_{r}^{\Gamma}$.
Since the couple $(X_{\Gamma},G)$ is lifted to $(X_{\Gamma^{\prime}},G)$,
there is an action
$T(\sigma^{-1})(A_{i}^{\Gamma^{\prime}})=\rho_{\Gamma^{\prime}}(\sigma)^{t}A_{i}^{\Gamma^{\prime}}\rho_{\Gamma^{\prime}}(\sigma)=\sum_{j}\lambda_{i,j}^{\Gamma^{\prime}}(\sigma)A_{j}^{\Gamma^{\prime}}\text{
for each }i=1,\dots,r,$
where $\lambda_{ij}^{\Gamma^{\prime}}(\sigma)\in\Gamma^{\prime}$. All other
liftings extending $X_{\Gamma}$ form a principal homogeneous space under the
action of $H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})$, that is, we can find
matrices $B_{1},\ldots,B_{r}\in\mathscr{S}_{g}(k)$, such that the set
$\\{A_{1}^{\Gamma^{\prime}}+E\cdot B_{1},\dots,A_{r}^{\Gamma^{\prime}}+E\cdot
B_{r}\\}$
forms a basis for another lift $I_{X^{1}_{\Gamma^{\prime}}}$ of the canonical
ideal $I_{X_{\Gamma}}$. That is, any two lifts of the canonical ideal
$I_{X_{\Gamma}}$ differ by an element
$f\in\mathrm{Hom}_{S}(I_{X},S/I_{X})=H^{0}(X,\mathscr{N}_{X/\mathbb{P}^{g-1}})$
so that $f(A_{i})=B_{i}$.
In the same manner, if $\rho_{\Gamma^{\prime}}$ is a lift of the
representation $\rho_{\Gamma}$, every other lift is given by
$\rho_{\Gamma^{\prime}}(\sigma)+E\cdot\tau(\sigma),$
where $\tau(\sigma)\in M_{g}(k)$.
We have to find out when $\rho_{\Gamma^{\prime}}(\sigma)+E\cdot\tau(\sigma)$
is an automorphism of the relative curve $X_{\Gamma^{\prime}}$, i.e. when
(32)
$T(\rho_{\Gamma^{\prime}}(\sigma^{-1})+E\cdot\tau(\sigma^{-1}))(A_{i}^{\Gamma^{\prime}}+E\cdot
B_{i})\in\mathrm{span}_{\Gamma^{\prime}}\\{A_{1}^{\Gamma^{\prime}}+E\cdot
B_{1},\dots,A_{r}^{\Gamma^{\prime}}+E\cdot B_{r}\\},$
that is
(33) $\displaystyle(\rho_{\Gamma^{\prime}}(\sigma)$
$\displaystyle+E\cdot\tau(\sigma))^{t}\left(A_{i}^{\Gamma^{\prime}}+E\cdot
B_{i}\right)(\rho_{\Gamma^{\prime}}(\sigma)+E\cdot\tau(\sigma))=\sum_{j=1}^{r}\tilde{\lambda}^{\Gamma^{\prime}}_{ij}(\sigma)\left(A_{j}^{\Gamma^{\prime}}+E\cdot
B_{j}\right),$
for some $\tilde{\lambda}^{\Gamma^{\prime}}_{ij}(\sigma)\in\Gamma^{\prime}$.
Since
$T_{\Gamma^{\prime}}(\sigma^{-1})A_{i}^{\Gamma^{\prime}}=\rho_{\Gamma}(\sigma)^{t}A_{i}^{\Gamma}\rho_{\Gamma}(\sigma){\;\rm
mod}\langle E\rangle$
we have that
$\tilde{\lambda}^{\Gamma^{\prime}}_{ij}(\sigma)=\lambda^{\Gamma}_{i,j}(\sigma){\;\rm
mod}E$, therefore we can write
(34)
$\tilde{\lambda}^{\Gamma^{\prime}}_{ij}(\sigma)=\lambda_{ij}^{\Gamma^{\prime}}(\sigma)+E\cdot\mu_{ij}(\sigma),$
for some $\mu_{ij}(\sigma)\in k$. We expand first the right-hand side of eq.
(33) using eq. (34). We have
(35)
$\displaystyle\sum_{j=1}^{r}\tilde{\lambda}^{\Gamma^{\prime}}_{ij}(\sigma)\left(A_{j}^{\Gamma^{\prime}}+E\cdot
B_{j}\right)$
$\displaystyle=\sum_{j=1}^{r}\left(\lambda_{ij}^{\Gamma^{\prime}}(\sigma)+E\cdot\mu_{ij}(\sigma)\right)\left(A_{j}^{\Gamma^{\prime}}+E\cdot
B_{j}\right)$ (36)
$\displaystyle=\sum_{j=1}^{r}\lambda_{ij}^{\Gamma^{\prime}}(\sigma)A_{j}^{\Gamma^{\prime}}+E\big{(}\mu_{ij}(\sigma)A_{j}+\lambda_{ij}(\sigma)B_{j}\big{)}.$
Here we have used the fact that
$E\mathfrak{m}_{\Gamma}=E\mathfrak{m}_{\Gamma^{\prime}}$ so $E\cdot
x=E\cdot(x{\;\rm mod}\mathfrak{m}_{\Gamma^{\prime}})$ for every
$x\in\Gamma^{\prime}$.
We now expand the left-hand side of eq. (33).
$\displaystyle(\rho_{\Gamma^{\prime}}(\sigma)$
$\displaystyle+E\cdot\tau(\sigma))^{t}\left(A_{i}^{\Gamma^{\prime}}+E\cdot
B_{i}\right)(\rho_{\Gamma^{\prime}}(\sigma)+E\cdot\tau(\sigma))=\rho_{\Gamma^{\prime}}(\sigma)^{t}A_{i}^{\Gamma^{\prime}}\rho_{\Gamma^{\prime}}(\sigma)$
$\displaystyle+E\cdot\left(\rho(\sigma)^{t}B_{i}\rho(\sigma)+\tau^{t}(\sigma)A_{i}\rho(\sigma)+\rho(\sigma)^{t}A_{i}\tau(\sigma)\right).$
Setting $D_{\sigma}=\tau(\sigma)\rho(\sigma)^{-1}=d(\sigma)$, in the
notation of lemma 14, we can write
(37)
$\begin{split}\tau(\sigma)^{t}A_{i}\rho(\sigma)&+\rho(\sigma)^{t}A_{i}\tau(\sigma)\\\
&=\rho(\sigma)^{t}\rho(\sigma^{-1})^{t}\tau(\sigma)^{t}A_{i}\rho(\sigma)+\rho(\sigma)^{t}A_{i}\tau(\sigma)\rho(\sigma)^{-1}\rho(\sigma)\\\
&=\rho(\sigma)^{t}(D_{\sigma}^{t}A_{i})\rho(\sigma)+\rho(\sigma)^{t}(A_{i}D_{\sigma})\rho(\sigma)\\\
&=T(\sigma^{-1})\psi_{D_{\sigma}}(A_{i}).\end{split}$
Meanwhile, eq. (24) implies that
(38)
$\rho(\sigma)^{t}B_{i}\rho(\sigma)-\sum_{j=1}^{r}\lambda_{ij}(\sigma^{-1})B_{j}=-T(\sigma^{-1})\psi_{B_{\sigma}[f]}(A_{i}).$
For the above computations recall that for a $g\times g$ matrix $B$, the map
$\psi_{B}$ is defined by
$\psi_{B}(A_{i})=A_{i}B+B^{t}A_{i}.$
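The matrix identity used in eq. (37), namely $\tau(\sigma)^{t}A_{i}\rho(\sigma)+\rho(\sigma)^{t}A_{i}\tau(\sigma)=T(\sigma^{-1})\psi_{D_{\sigma}}(A_{i})$ with $D_{\sigma}=\tau(\sigma)\rho(\sigma)^{-1}$, holds for arbitrary matrices and can be sanity-checked numerically. A purely illustrative sketch with random real matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
g = 4
rho = rng.standard_normal((g, g)) + g * np.eye(g)   # a generic invertible matrix
tau = rng.standard_normal((g, g))
A = rng.standard_normal((g, g))
A = A + A.T                                         # symmetric, as in S_g(k)

D = tau @ np.linalg.inv(rho)                        # D_sigma = tau(sigma) rho(sigma)^{-1}
lhs = tau.T @ A @ rho + rho.T @ A @ tau
rhs = rho.T @ (D.T @ A + A @ D) @ rho               # = T(sigma^{-1}) psi_D(A)
assert np.allclose(lhs, rhs)
```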
Combining now eq. (37) and (38) we have that eq. (33) is equivalent to
$\displaystyle
T(\sigma^{-1})\big{(}\psi_{D_{\sigma}}(A_{i})\big{)}-T(\sigma^{-1})\psi_{B_{\sigma}[f]}(A_{i})$
$\displaystyle=\sum_{j=1}^{r}\mu_{ij}(\sigma)A_{j}$ (39)
$\displaystyle\psi_{D_{\sigma}}(A_{i})-\psi_{B_{\sigma}[f]}(A_{i})$
$\displaystyle=\sum_{j=1}^{r}\mu_{ij}(\sigma)T(\sigma)(A_{j})$
$\displaystyle=\sum_{j=1}^{r}\sum_{\nu=1}^{r}\mu_{ij}(\sigma)\lambda_{j\nu}(\sigma^{-1})A_{\nu}.$
On the other hand, the action of $T$ on $A_{1},\ldots,A_{r}$ is given in terms of
the matrix $(\lambda_{i,j})$, while the right-hand side of eq. (39),
$\big{(}\mu_{i,j}(\sigma^{-1})\big{)}\big{(}\lambda_{ij}(\sigma)\big{)}$,
corresponds to the derivation $D^{(1)}(\sigma^{-1})$ of the
$\rho^{(1)}$-representation. Equation (4) is now proved.
## References
* [1] Enrico Arbarello, Maurizio Cornalba, and Phillip A. Griffiths. Geometry of algebraic curves. Volume II, volume 268 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, Heidelberg, 2011. With a contribution by Joseph Daniel Harris. doi:10.1007/978-3-540-69392-5.
* [2] M. F. Atiyah and I. G. Macdonald. Introduction to commutative algebra. Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1969.
* [3] José Bertin and Ariane Mézard. Induction and Restriction In Formal Deformation of Coverings. arXiv:math.AG/0205228.
* [4] José Bertin and Ariane Mézard. Déformations formelles des revêtements sauvagement ramifiés de courbes algébriques. Invent. Math., 141(1):195–238, 2000.
* [5] Rachel Camina. The Nottingham group. In New horizons in pro-$p$ groups, volume 184 of Progr. Math., pages 205–221. Birkhäuser Boston, Boston, MA, 2000.
* [6] Hara Charalambous, Kostas Karagiannis, and Aristides Kontogeorgis. The relative canonical ideal of the Artin-Schreier-Kummer-Witt family of curves. Annales de l’institut Fourier (Accepted), 2019. URL: arXiv:1905.05545.
* [7] Gunther Cornelissen and Fumiharu Kato. Equivariant deformation of Mumford curves and of ordinary curves in positive characteristic. Duke Math. J., 116(3):431–470, 2003.
* [8] Michel Demazure. Lectures on $p$-divisible groups, volume 302 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1986. Reprint of the 1972 original.
* [9] Marcus du Sautoy and Ivan Fesenko. Where the wild things are: ramification groups and the Nottingham group. In New horizons in pro-$p$ groups, volume 184 of Progr. Math., pages 287–328. Birkhäuser Boston, Boston, MA, 2000.
* [10] David Eisenbud. The geometry of syzygies, volume 229 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2005. A second course in commutative algebra and algebraic geometry.
* [11] Mark Green and Robert Lazarsfeld. A simple proof of Petri’s theorem on canonical curves. In Geometry today (Rome, 1984), volume 60 of Progr. Math., pages 129–142. Birkhäuser Boston, Boston, MA, 1985.
* [12] Alexander Grothendieck. Sur quelques points d’algèbre homologique. Tôhoku Math. J. (2), 9:119–221, 1957.
* [13] Alexander Grothendieck. Géométrie formelle et géométrie algébrique. In Séminaire Bourbaki, Vol. 5, pages Exp. No. 182, 193–220, errata p. 390. Soc. Math. France, Paris, 1995.
* [14] David Harbater. Patching and Galois theory. In Galois groups and fundamental groups, volume 41 of Math. Sci. Res. Inst. Publ., pages 313–424. Cambridge Univ. Press, Cambridge, 2003.
* [15] David Harbater and Katherine F. Stevenson. Patching and thickening problems. J. Algebra, 212(1):272–304, 1999.
* [16] Joe Harris and Ian Morrison. Moduli of curves, volume 187 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1998.
* [17] Robin Hartshorne. Algebraic Geometry. Springer-Verlag, New York, 1977. Graduate Texts in Mathematics, No. 52.
* [18] Robin Hartshorne. Deformation theory, volume 257 of Graduate Texts in Mathematics. Springer, New York, 2010. doi:10.1007/978-1-4419-1596-2.
* [19] Enrique Acosta (https://mathoverflow.net/users/1724/enrique acosta). Geometric meaning of the Euler sequence on $\mathbb{P}^{n}$ (example 8.20.1 in ch. II of Hartshorne). MathOverflow. URL: https://mathoverflow.net/q/5211 (version: 2016-12-11).
* [20] Michael Kapovich and John J. Millson. On the deformation theory of representations of fundamental groups of compact hyperbolic $3$-manifolds. Topology, 35(4):1085–1106, 1996. doi:10.1016/0040-9383(95)00060-7.
* [21] Sotiris Karanikolopoulos and Aristides Kontogeorgis. Integral representations of cyclic groups acting on relative holomorphic differentials of deformations of curves with automorphisms. Proc. Amer. Math. Soc., 142(7):2369–2383, 2014. URL: https://doi.org/10.1090/S0002-9939-2014-12010-7.
* [22] Aristides Kontogeorgis. On the tangent space of the deformation functor of curves with automorphisms. Algebra Number Theory, 1(2):119–161, 2007.
* [23] Aristides Kontogeorgis. Polydifferentials and the deformation functor of curves with automorphisms. Journal of Pure and Applied Algebra, 210(2):551–558, 2007.
* [24] Aristides Kontogeorgis, Alexios Terezakis, and Ioannis Tsouknidas. Automorphisms and the Canonical Ideal. Mediterr. J. Math., 18(6):Paper No. 261, 2021. doi:10.1007/s00009-021-01878-3.
* [25] Aristides Kontogeorgis and Ioannis Tsouknidas. A cohomological treatise of HKG-covers with applications to the Nottingham group. J. Algebra, 555:325–345, 2020. doi:10.1016/j.jalgebra.2020.02.037.
* [26] Aristides Kontogeorgis and Ioannis Tsouknidas. A generating set for the canonical ideal of HKG-curves. Res. Number Theory, 7(1):Paper No. 4, 16, 2021. doi:10.1007/s40993-020-00230-0.
* [27] T. Y. Lam. Lectures on modules and rings, volume 189 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1999. doi:10.1007/978-1-4612-0525-8.
* [28] Alexander Lubotzky and Andy R. Magid. Varieties of representations of finitely generated groups. Mem. Amer. Math. Soc., 58(336):xi+117, 1985. doi:10.1090/memo/0336.
* [29] B. Mazur. Deformation theory of Galois representations. In Modular forms and Fermat’s last theorem, pages xx+582, 1997. Papers from the Instructional Conference on Number Theory and Arithmetic Geometry held at Boston University, Boston, MA, August 9–18, 1995.
* [30] Martin Olsson. Tangent spaces and obstruction theories, 2019. URL: https://math.berkeley.edu/~molsson/MSRISummer07.pdf.
* [31] B. Saint-Donat. On Petri’s analysis of the linear system of quadrics through a canonical curve. Math. Ann., 206:157–175, 1973. URL: https://doi.org/10.1007/BF01430982.
* [32] Michael Schlessinger. Functors of Artin rings. Trans. Amer. Math. Soc., 130:208–222, 1968.
* [33] Michael Schlessinger. On rigid singularities. Rice Univ. Stud., 59(1):147–162, 1973.
* [34] Edoardo Sernesi. Deformations of algebraic schemes, volume 334 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 2006.
* [35] Henning Stichtenoth. Über die Automorphismengruppe eines algebraischen Funktionenkörpers von Primzahlcharakteristik. I. Eine Abschätzung der Ordnung der Automorphismengruppe. Arch. Math. (Basel), 24:527–544, 1973.
* [36] John Tate. Finite flat group schemes. In Modular forms and Fermat’s last theorem (Boston, MA, 1995), pages 121–154. Springer, New York, 1997.
* [37] Ravi Vakil. The rising sea. 2017.
* [38] Robert C. Valentini and Manohar L. Madan. A hauptsatz of L. E. Dickson and Artin-Schreier extensions. J. Reine Angew. Math., 318:156–177, 1980.
* [39] Charles A. Weibel. An Introduction to Homological Algebra. Cambridge University Press, Cambridge, 1994.
# EPIC-Survival: End-to-end Part Inferred Clustering for Survival Analysis,
Featuring Prognostic Stratification Boosting
Hassan Muhammad Department of Physiology, Biophysics, and Systems Biology -
Weill Cornell Medicine Chensu Xie Carlie S. Sigel Department of Surgery -
Memorial Sloan Kettering Cancer Center Amber Simpson Dept. of Biomedical and
Molecular Sciences - Queen’s University Michael Doukas Department of
Pathology - Erasmus Medical Center Rotterdam Lindsay Alpert Department of
Anatomic Pathology - University of Chicago William R. Jarnagin Department of
Surgery - Memorial Sloan Kettering Cancer Center Thomas J. Fuchs Department
of Physiology, Biophysics, and Systems Biology - Weill Cornell Medicine
Department of Pathology - Hasso Plattner Institute for Digital Health at Mount
Sinai
###### Abstract
Histopathology-based survival modelling has two major hurdles. Firstly, a
well-performing survival model has minimal clinical application if it does not
contribute to the stratification of a cancer patient cohort into different
risk groups, preferably driven by histologic morphologies. In the clinical
setting, individuals are not given specific prognostic predictions, but are
rather predicted to lie within a risk group which has a general survival
trend. Thus, it is imperative that a survival model produces well-stratified
risk groups. Secondly, until now, survival modelling was done in a two-stage
approach (encoding and aggregation). The massive number of pixels in digitized
whole slide images was never utilized to its fullest extent due to
technological constraints on data processing, forcing decoupled learning.
EPIC-Survival bridges encoding and aggregation into an end-to-end survival
modelling approach, while introducing stratification boosting to encourage the
model to not only optimize ranking, but also to discriminate between risk
groups. In this study we show that EPIC-Survival performs better than other
approaches in modelling intrahepatic cholangiocarcinoma, a historically
difficult cancer to model. Further, we show that stratification boosting
further improves model performance, resulting in a concordance-index
of 0.880 on a held-out test set. Finally, we were able to identify specific
histologic differences, not commonly sought out in ICC, between low and high
risk groups.
_Keywords_ computational pathology $\cdot$ deep learning $\cdot$ clustering
$\cdot$ disease staging $\cdot$ survival analysis
## 1 Introduction
One of the primary purposes of survival analysis in medicine is cancer
subtyping, an important tool used to help predict disease prognosis and direct
therapy—it is the functional application of a survival model in the clinical
setting. Though traditional methods used for discovering cancer subtypes are
extremely labor intensive and subjective, successful stratification of common
cancers, such as prostate, into effective subtypes has only been possible due
to the existence of large datasets. However, working with rare cancers poses
its own set of challenges. Under the current largely privatized global
medical infrastructure, access to large datasets is nearly impossible, wide
collaboration is limited, and the traditional human evaluation of recognizing
repeatable tissue morphologies is rendered difficult. Further, histologic
features are limited by the manual observer’s past experience and
subjectivity. EPIC-Survival offers a way forward for subtyping
rare cancers as a unique deep learning-based survival model which overcomes
two key barriers.
Firstly, it is difficult to computationally predict the specific outcome of a
patient. It is more reasonable to predict the subgroup of a cancer population
into which an individual patient falls. Further, without a robust
prognostic model which learns the relationships between histology and patient
outcome, survival models have minimal use. Thus, it is important that a
survival model produces stratified groups, preferably driven by histology,
rather than simply performing well at ranking patients by risk. Regardless,
survival modelling based on whole slide image (WSI) histopathology is a
difficult task which requires overcoming a second problem.
Because a single digitized WSI can span billions of pixels, it is impossible
to directly use WSIs in full to train survival models, given current
technological constraints. Thus, it is a common technique to sample tiles from
WSIs, often in creative ways, and then aggregating them to represent their
respective WSIs in the final step of training. We can simplify these stages as
the tile _encoding_ stage and the _aggregation_ stages. While the aggregation
stage of survival modelling has historically defaulted to the Cox-proportional
Hazard regression model, recent advancements have made survival modelling more
robust to complex data. We highlight some examples in the next section.
Nevertheless, creative ways to extract features from WSIs and more advanced
techniques to aggregate them still face the limits of operating in detached
two-stage frameworks, in which the information at slide level, e.g. the given
patient prognosis, is never taken into consideration while learning tile
encoding by proxy tasks (cf. Figure 1). This makes it difficult to
confidently identify specific and direct relationships between tissue
morphology and patient prognosis, even though prognostic performance may be
strong.
In this paper, we introduce a deep convolutional neural network which utilizes
end-to-end training to directly produce survival risk scores for a given WSI
without limitations on image size. We also contribute a new loss function
called _stratification boosting_ which strengthens risk group
separation and overall prognostic performance. Our introduction of
stratification boosting not only improves overall performance, but also forces
the model to identify risk groups. In contrast, other works attempt to find
groups in the distribution of ranking after modelling a dataset. We claim that
this model takes us one step closer to systematically mapping out the
relationships between tissue morphology and patient death or cancer recurrence
times. To challenge our method, we consider the difficult case of small
dataset rare cancers.
### 1.1 Intrahepatic Cholangiocarcinoma
Intrahepatic cholangiocarcinoma (ICC), a cancer of the bile duct, has an
incidence of approximately 1 in 160,000 in the United States [12]. In general,
the clinical standard for prognostic prediction and risk-based population
stratification relies on simple metrics which are not based on histopathology.
These methods have unreliable prognostic performance [3], even when studied
in relatively large cohorts (1000+ samples). Studies which have attempted to
stratify ICC into different risk groups based on histopathology have been
inconsistent and unsuccessful [2, 11, 13].
Figure 1: While other deep learning-based survival modelling approaches employ
a traditional "two-stage" approach, EPIC-Survival introduces end-to-end
learning for prognostic prediction, allowing for a more robust loss function
which encourages the model to learn subgroups within the patient population.
### 1.2 Related Works
Because survival analysis continues to operate in a two-stage approach as
outlined above, advancements in survival analysis largely lie in the feature
extraction front. Muhammad et al. introduced a deep unsupervised clustering
autoencoder which stratified a limited set of tiles randomly sampled from WSIs
into groups based on visual features at high resolution. These clusters were
then visualized and used as covariates to train simple univariate and
multivariate CPH models [10]. Similarly, in another study by Abbet et al.,
self-supervised clustering was used to produce subtypes based on histologic
features [1]. These were then visualized and used as covariates in survival
models to measure the significance of the clustered morphologies. Zhu et al.
take the clustering approach one step further by modeling local clusters for a
tile-level prediction before aggregating the results into slide-level survival
predictions [17]. These methods work to build visual dictionaries through
clustering without a direct association to survival data. Slightly
differently, Yao et al. developed a method [16] to build a visual dictionary
through multiple instance learning. Though not completely unsupervised, even
weak supervision can only operate with a decoupled survival regression. Other
studies such as [14, 4, 7] have used even simpler approaches, producing models
which learn to predict prognosis on tiles based on slide-level outcomes and
then aggregate them into a slide-level prediction. These models, however, do
utilize the DeepSurv [8] function, a neural-network based survival learning
loss robust to complex and non-linear data (discussed further in section 2.2).
Unfortunately, the simplified feature extraction methods of the works listed
do not allow the DeepSurv model to operate in its fullest potential—our method
overcomes this barrier.
Recently, Xie et al. bridged the gap of the two-stage problem in WSI
classification tasks with the introduction of End-to-end Part Learning (EPL)
[15]. EPL maps tiles of each WSI to $k$ feature groups defined as parts. The
tile encoding and aggregation are learned together against slide label in an
end-to-end manner. Refer to [15] for more details. Although the authors
suggested that EPL is theoretically applicable to survival regression,
treatment recommendation, or other learnable WSI label predictions, the effort
has been limited to testing the EPL framework with experiments benchmarking
against classification datasets. In this study, we introduce EPIC-Survival to
extend the EPL method to survival analysis by integrating the DeepSurv
survival function, unencumbered by the limitations of two-stage training.
Moreover, we contribute a new concept called stratification boosting, which
acts as a critical loss term to the learning of distinct risk groups among the
patient cohort. Most importantly, by applying EPIC-Survival, we show that it
is capable of discovering new relationships between histology and prognosis.
## 2 Methods
### 2.1 Survival Modelling
To review, survival modelling is used to predict ranking of censored time-
duration data. A sample is defined as censored when the end-point of its given
time duration, or time-to-event, is not directly associated to the study. For
example, in a dataset of time-to-death by cause of cancer, not all samples
will have end-points associated with a cancer-related death. In some cases, an
end-point may indicate a patient dropping out of the study or dying of other
causes. Rather than filtering out censored samples and regressing only on
uncensored time-to-events, Cox proportional hazards (CPH) models are used to
regress on a complete dataset and predict hazard, the instantaneous risk that
the event of interest occurs. CPH is defined as:
$H(t)=h_{o}e^{b_{i}x_{i}},$ (1)
where $H(t)$ is the hazard function dependent on time $t$, $h_{o}$ is a
baseline hazard, and covariate(s) $x_{i}$ are weighted by coefficient(s)
$b_{i}$.
In 2016, DeepSurv [8] made an advancement in survival modelling by using a
neural network to regress survival data based on theoretical work proposed in
1995 [5]. Their results showed better performance than the typical CPH model,
especially on more complex data. In the case of a neural network-based
survival function, $b_{i}$ is substituted for model parameters, $\theta_{i}$.
Traditionally, a negative log partial likelihood (NLPL) is used to optimize
the survival function. It is defined as:
$NLPL(\theta)=-\sum_{i:E_{i}=1}\Big(h_{\theta}(x_{i})-\log\sum_{j\in\Re(T_{i})}e^{h_{\theta}(x_{j})}\Big),$
(2)
where $h_{\theta}(x_{i})$ is the output risk score for sample $i$,
$h_{\theta}(x_{j})$ is the risk score of a sample from the risk set
$\Re(T_{i})=\{j:T_{j}\geq T_{i}\}$ of patients still at risk of failure at
time $T_{i}$, and $i:E_{i}=1$ indexes the set of samples with an observed
event (uncensored). The performance of a CPH
or CPH-based model can be tested using a concordance index (CI) which compares
the ranking of predicted risks to associated time-to-events. A CI of 0.5
indicates randomness and a CI of 1.0 indicates perfect prognostic predictions.
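As a concrete illustration, Eq. (2) and the concordance index can be sketched in a few lines of plain Python. This is an illustrative sketch, not the paper's implementation; the function names are ours:

```python
import math

def nlpl(risks, times, events):
    """Negative log partial likelihood (Eq. 2) for Cox-style ranking.

    risks  : predicted risk scores h_theta(x_i)
    times  : time-to-event for each sample
    events : 1 if the event was observed, 0 if censored
    """
    loss = 0.0
    for i, (t_i, e_i) in enumerate(zip(times, events)):
        if e_i != 1:
            continue  # only uncensored samples contribute terms
        # risk set R(T_i): all samples still at risk at time T_i
        log_sum = math.log(sum(math.exp(risks[j])
                               for j, t_j in enumerate(times) if t_j >= t_i))
        loss -= risks[i] - log_sum
    return loss

def concordance_index(risks, times, events):
    """Fraction of comparable pairs ranked concordantly
    (higher predicted risk should mean an earlier observed event)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue
        for j in range(n):
            if times[j] > times[i]:  # j outlived i: pair is comparable
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A perfectly anti-ranked cohort scores 0.0 and a perfectly ranked one 1.0, matching the interpretation of the CI given above.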
Further, the Kaplan-Meier (KM) method can be used to estimate a survival
function, the probability of survival past time $t$, allowing for an
illustrative way to see prognostic stratification between two or more groups.
The survival function is defined as:
$S(t)=\prod_{t_{i}<t}\frac{n_{i}-d_{i}}{n_{i}},$ (3)
where $d_{i}$ is the number of observed events at time $t_{i}$ and $n_{i}$ is
the number of subjects still at risk of death or recurrence just prior to time
$t_{i}$. The
Log-Rank Test (LRT) is used to measure significance of separation between two
survival functions modelled using KM. LRT is a special case of the chi-squared
test used to test the null hypothesis that there is no difference between the
$S(t)$ of two populations.
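The KM estimator above can be sketched in stdlib-only Python. This is an illustrative sketch with names of our choosing; tied event times are grouped, and censored samples leave the risk set without triggering a step:

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t) (Eq. 3): at each distinct event time,
    multiply the running survival probability by (n_i - d_i) / n_i.

    Returns a list of (event_time, survival_probability) steps.
    """
    deaths = Counter(t for t, e in zip(times, events) if e == 1)
    s, curve = 1.0, []
    for t in sorted(deaths):
        n_i = sum(1 for u in times if u >= t)  # subjects at risk just before t
        d_i = deaths[t]                        # observed events at t
        s *= (n_i - d_i) / n_i
        curve.append((t, s))
    return curve
```

Plotting one such step curve per risk group and comparing them with an LRT is exactly the stratification analysis described above.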
### 2.2 EPIC Survival
EPIC-Survival bridges the DeepSurv loss with the comprehensive framework of
EPL. In short, EPL assigns tiles to inferred histology parts and
backpropagates the loss against slide labels (time-to-event data) through the
integrated aggregation and encoding graph. For EPIC-Survival, the last fully
connected layer of the original EPL was replaced by a series of fully
connected layers and a single output node which functions as a risk score for
a given input WSI. Similar to the traditional EPL, NLPL is combined with a
clustering function based on minimizing distances between a sample embedding
and its assigned centroid:
$Loss=NLPL(\theta)+\lambda_{c}\sum_{i=1}^{N}||z_{i}-c_{i}||^{2},$ (4)
where $z_{i}$ is the embedding of randomly sampled tiles, $c_{i}$ is the
centroid assigned during previous training epoch to the WSI from which $z_{i}$
is sampled, and $\lambda_{c}$ is a weighting parameter. Figure 2 helps visualize
this combined loss function and the process of slide-level and global
clustering of visual morphology.
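A minimal sketch of the combined loss of Eq. (4), assuming the NLPL term has already been computed for the batch and that embeddings and centroids are plain coordinate lists (an illustration with names of our choosing, not the authors' training code):

```python
def epic_loss(nlpl_value, embeddings, centroids, lambda_c):
    """Eq. 4: survival ranking loss plus a local clustering penalty.

    nlpl_value : NLPL(theta) already computed on the batch
    embeddings : tile embeddings z_i (lists of floats)
    centroids  : centroid c_i assigned to each z_i in the previous epoch
    lambda_c   : weight of the clustering term
    """
    cluster = sum(sum((z - c) ** 2 for z, c in zip(z_i, c_i))
                  for z_i, c_i in zip(embeddings, centroids))
    return nlpl_value + lambda_c * cluster
```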
Figure 2: Diagram of the proposed EPIC-Survival approach for prognosis
prediction. Top: Whole slide images are tiled into small patches which pass
through an ImageNet pretrained ResNet-34 backbone, outputting a tile feature
vector. Each vector is assigned to a histology feature group defined by global
centroids. Next, local slide-level centroids are calculated and the nearest
tiles to $k$ local centroids are used as part representations of the slide.
This process is repeated for all slides. Bottom: Still within the same
training epoch, parts of all slides are concatenated and trained with survival
data, in conjunction with optimizing local clustering and overall risk group
separation. Note: Global centroids are randomly initialized before training
and updated between epochs, based on the optimization of the ResNet-34
backbone.
### 2.3 Stratification Boosting
While CPH and DeepSurv regressions serve to optimize the ranking of samples in
relation to time-to-event data, they do not actively form risk groups within a
dataset. In Mayr and Schmid’s work on CI-based learning, they conclude that
"specifically, prediction rules that are well calibrated do not necessarily
have a high discriminatory power (and vice versa)" [9]. One of the most
important applications of survival analysis is cancer subtyping, an important
tool used to help predict disease prognosis and direct therapy. Moreover,
subtyping based on survival analysis creates a functional use for the survival
model, especially if specific morphologies can be identified within each
prognostic group. The DeepSurv loss, which only optimizes ranking, does not
explicitly put a lower bound on the separation between the predicted risks. To
further improve prognostic separation between high and low risk groups in the
patient population, we extend the DeepSurv-EPL function with a stratification
loss term. During training, predicted risks are numerically ordered and
divided into two groups based on the median predicted risk. The mean is
calculated for each group of predicted risks ($R_{high}$ and $R_{low}$) and
the model is optimized to diverge the two values using Huber loss:
$smoothL_{1}\Big(\frac{1}{1+\lvert R_{high}-R_{low}\rvert},\,0\Big)$ (5)
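The stratification term can be sketched as follows. We assume the standard smooth-L1 (Huber) loss with threshold $\beta=1$ and implement the median split as two equal sorted halves; function names are ours:

```python
def smooth_l1(x, target=0.0, beta=1.0):
    """Huber / smooth-L1 loss: quadratic near the target, linear beyond beta."""
    d = abs(x - target)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

def stratification_loss(risks):
    """Eq. 5: push apart the mean predicted risks of the top and bottom halves.

    Shrinking 1 / (1 + |R_high - R_low|) toward 0 rewards a larger gap
    between the two group means.
    """
    r = sorted(risks)
    half = len(r) // 2
    r_low = sum(r[:half]) / half
    r_high = sum(r[half:]) / (len(r) - half)
    return smooth_l1(1.0 / (1.0 + abs(r_high - r_low)))
```

A collapsed cohort (all risks equal) incurs the maximal penalty, while well-separated groups drive the term toward zero.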
### 2.4 Dataset
WSIs of ICC cases were obtained from Memorial Sloan Kettering Cancer Center
(MSKCC), Erasmus Medical Center-Rotterdam (EMC), and University of Chicago
(UC) with approval from each respective Institutional Review Boards. In total,
265 patients with resected ICC without neoadjuvant chemotherapy were included
in the analysis. Up-to-date retrospective data for recurrence free survival
after resection was also obtained. A subset of samples (n=157) from MSKCC
were classified into their respective AJCC [6] TNM and P-Stage groups. 246
slides from MSKCC and EMC were used as training data, split into five folds
for cross validation. 19 slides from UC were set aside as an external held-out
test set. Using a web-based whole slide viewer developed by our group, areas
of tumor were manually annotated in each WSI. Using a touchscreen tablet and
desktop (Surface Pro 3, Surface Studio; Microsoft Inc.), a pathologist painted
over regions of tumor to identify where tiles should be extracted for
training. Tiles used in training were extracted from tumor-regions of tissue
and sampled at 224x224px, 10x resolution.
### 2.5 Architecture and Experiments
An ImageNet-pretrained ResNet-34 was used as the base feature extractor
($\theta_{e}$). A series of three wide fully connected layers (4096, 4096,
256) with dropout were implemented before the single risk output node. Model
hyperparameters $n$, $w$, $b$, $lr$, $d$, and $p$ (number of clusters, waist
size, part-batch size, learning rate, dropout rate, and top-k tiles
respectively) were optimized using random grid search and CI as a performance
metric at the end of each epoch. 16 clusters and a waist size of 16 produced
the best performance. The same 5-fold cross validation was implemented and
held throughout all experiments and models. Predicted risks of the validation
sets from each fold were concatenated for a complete performance analysis
using CI and LRT. Each model was subsequently trained using all training data,
tested on the held-out test set, and evaluated using CI and LRT.
As a baseline, Deep Clustering Convolutional Autoencoder [10] was implemented.
This model was chosen because, like EPIC-Survival, it uses clustering to
define morphological features. However, these features are learned based on
image reconstruction and then used as covariates in traditional CPH modelling,
as a representation for the classic two-stage approach. Further, the subset of
training data with AJCC staging, a clinical standard, was analyzed using a
4-fold cross validation and CPH.
## 3 Results
EPIC-Survival without and with stratification boosting performed similarly on
the 5-fold cross validation, producing CIs of 0.671 and 0.674, respectively. On
the held out test set, EPIC-survival with stratification boosting performed
significantly better with a CI of 0.880, compared to a CI of 0.652 without
stratification boosting. Unsupervised clustering with a traditional CPH
regression yielded a CI of 0.583 on 5-fold cross validation and 0.614 on the
test set. Table 1 summarizes these results. AJCC staging using the TNM and
P-Stage protocols on the subset of ICC produced CIs of 0.576 and 0.638,
respectively. While we recognize that a CI produced on a subset of data may
produce biases from batch effects, these results are not different from the
results of a study which tested multiple prognostic scores on a very large ICC
cohort (n=1054) [3].
Figure 3: Top Left: EPIC-Survival without stratification successfully stratifies (LRT: p $<$ 0.05) the patient population into high and low risks on 5-Fold Cross Validation but fails on the held out test set. Bottom Left: EPIC-Survival with stratification boosting produces strong patient population separation on both 5-Fold Cross Validation and the External Test Set. On the right, we visualize the distribution of time-to-events relative to predicted risk scores ordered from low to high. We find that EPIC-Survival, in general, does well at predicting early recurrence. In general, the inclusion of stratification boosting improves the correlation between predicted risk values and patient outcome. Top Right: EPIC-Survival without stratification boosting. Bottom Right: EPIC-Survival with stratification boosting.
Method | Cross Validation | Test
---|---|---
AJCC TNM | 0.576 (n=157) | -
AJCC P-Stage | 0.638 (n=157) | -
Muhammad et. al. | 0.583 (n=244) | 0.614 (n=19)
EPIC (DeepSurv) | 0.671 (n=244) | 0.652 (n=19)
EPIC (Stratification Boosting) | 0.674 (n=244) | 0.880 (n=19)
AJCC TNM | - | 0.582 (n=1054)
Wang Nomogram | - | 0.607 (n=1054)
LCSGj | - | 0.562 (n=1054)
Okabayashi | - | 0.557 (n=1054)
Nathan Staging | - | 0.581 (n=1054)
Hyder Nomogram | - | 0.521 (n=1054)
Table 1: EPIC-Survival with stratification boosting showed the best
concordance index-based performance. For reference, performance of various
clinical metrics on a very large ICC dataset (n=1054) are provided [3].
In a KM analysis (Figure 3), EPIC-Survival with stratification boosting showed
significant separation between high and low risk populations (p $<$ 0.05).
EPIC-Survival without stratification boosting failed on the held-out test set.
Although stratification on the 5-fold cross validation is assumed significant,
there remains a risk of crossing survival curves, breaking the assumption of
proportional hazard rates.
To further analyze results, we visualize the distribution of predicted risks
relative to the distribution of time-to-events (Figure 3). We found that EPIC-
Survival with and without stratification boosting performs well at predicting
early recurrence ($<$50 months). Correlation between predicted risks and time
durations of the external test set using EPIC-Survival with stratification
boosting is very strong, as further indicated by the strong CI of 0.880.
Figure 4: Rows: Slide parts, Columns: Patients with their predicted risk
scores highlighted above. Black tiles indicate that there was no assigned tile
to that part of a slide.
### 3.1 Prognosis Predictive Histology Discovery
In Figure 4, we visualize the part representation (rows) in each slide
(columns) from the test set. The slides are ordered by predicted risk scores.
A pathologist with a specialty in gastrointestinal pathology reviewed these
and discovered some general trends indicating that tiles with a low predicted
risk (later recurrence) tended to have loose, desmoplastic stroma
with haphazard, delicate collagen fibers, whereas high-risk tiles (earlier
recurrence) tended to have dense intratumoral stroma with thickened collagen
fibers. The quality of nuclear chromatin was vesicular more commonly in the
low risk tiles. The quality of the intratumoral stroma has never been a part
of tumor grading or observed as a prognostic marker. Further, there is no
grading scheme that involves assessment of nuclear features for ICC.
## 4 Discussion
### 4.1 On the Concordance Index
Our test results show a significantly higher CI than the cross validation
experiments. We found that the CI on smaller sets is often larger because
correctly ranking a smaller set of data is easier. During hyperparameter
optimization, this was also observed in the case of batch sizes. Smaller batch
sizes produced better CIs—in other words, optimizing the ranking of smaller
batches was easier than optimizing the ranking in larger batches.
### 4.2 Reflections on Stratification Boosting
Our work shows that EPIC-Survival has the capacity to identify specific risk
factors in histology, though these morphologies would need further testing on
a larger study. We hypothesise that altering the stratification boosting
component of the loss function to push separation between $>$2 groups would
further improve performance and has the potential to function as a general
subtyping model.
### 4.3 Conclusion
Our contributions are threefold: (1) we introduce the first end-to-end
survival model, allowing computational pathology to overcome the memory limits
introduced by two-stage approaches; (2) we contribute a new loss term to
strengthen the traditional hazard regression and encourage the learning of
stratified risk groups; (3) we show the power of EPIC-Survival by applying it
to the difficult test case of ICC, surpassing other metrics and providing
insight into new histologic features which may unlock new discoveries in ICC
subtyping.
## 5 Acknowledgements
This work was supported by Cycle for Survival, the generous computational
support given by the Warren Alpert foundation, and spectacular project
management from Christina Virgo.
This study was supported in part by National Institutes of Health/National
Cancer Institute (NIH/NCI) Cancer Center Support Grant P30 CA008748 and U01
CA238444-02.
Thomas J. Fuchs is a founder, equity owner, and Chief Scientific Officer of
Paige.AI.
## References
* [1] Christian Abbet, Inti Zlobec, Behzad Bozorgtabar, and Jean-Philippe Thiran. Divide-and-rule: Self-supervised learning for survival analysis in colorectal cancer. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 480–489. Springer, 2020.
* [2] Shinichi Aishima, Yousuke Kuroda, Yunosuke Nishihara, Tomohiro Iguchi, Kenichi Taguchi, Akinobu Taketomi, Yoshihiko Maehara, and Masazumi Tsuneyoshi. Proposal of progression model for intrahepatic cholangiocarcinoma: clinicopathologic differences between hilar type and peripheral type. The American Journal of Surgical Pathology, 31(7):1059–1067, 2007.
* [3] Stefan Buettner, Boris Galjart, Jeroen LA van Vugt, Fabio Bagante, Sorin Alexandrescu, Hugo P Marques, Jorge Lamelas, Luca Aldrighetti, T Clark Gamblin, Shishir K Maithel, et al. Performance of prognostic scores and staging systems in predicting long-term survival outcomes after surgery for intrahepatic cholangiocarcinoma. Journal of surgical oncology, 116(8):1085–1095, 2017.
* [4] Pierre Courtiol, Charles Maussion, Matahi Moarii, Elodie Pronier, Samuel Pilcer, Meriem Sefta, Pierre Manceron, Sylvain Toldo, Mikhail Zaslavskiy, Nolwenn Le Stang, et al. Deep learning-based classification of mesothelioma improves prediction of patient outcome. Nature medicine, 25(10):1519–1525, 2019.
* [5] David Faraggi and Richard Simon. A neural network model for survival data. Statistics in medicine, 14(1):73–82, 1995.
* [6] Olivier Farges, David Fuks, Yves-Patrice Le Treut, Daniel Azoulay, Alexis Laurent, Philippe Bachellier, Gennaro Nuzzo, Jacques Belghiti, François René Pruvot, and Jean Marc Regimbeau. Ajcc 7th edition of tnm staging accurately discriminates outcomes of patients with resectable intrahepatic cholangiocarcinoma: by the afc-ihcc-2009 study group. Cancer, 117(10):2170–2177, 2011.
* [7] Jakob Nikolas Kather, Johannes Krisam, Pornpimol Charoentong, Tom Luedde, Esther Herpel, Cleo-Aron Weis, Timo Gaiser, Alexander Marx, Nektarios A Valous, Dyke Ferber, et al. Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLoS Medicine, 16(1):e1002730, 2019.
* [8] Jared Katzman, Uri Shaham, Jonathan Bates, Alexander Cloninger, Tingting Jiang, and Yuval Kluger. Deepsurv: Personalized treatment recommender system using a cox proportional hazards deep neural network. arXiv, pages arXiv–1606, 2016.
* [9] Andreas Mayr and Matthias Schmid. Boosting the concordance index for survival data–a unified framework to derive and evaluate biomarker combinations. PloS one, 9(1):e84483, 2014.
* [10] Hassan Muhammad, Carlie S Sigel, Gabriele Campanella, Thomas Boerner, Linda M Pak, Stefan Büttner, Jan NM IJzermans, Bas Groot Koerkamp, Michael Doukas, William R Jarnagin, et al. Unsupervised subtyping of cholangiocarcinoma using a deep clustering convolutional autoencoder. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 604–612. Springer, 2019.
* [11] Tohru Nakajima, Yoichiro Kondo, Masaru Miyazaki, and Katsuji Okui. A histopathologic study of 102 cases of intrahepatic cholangiocarcinoma: histologic classification and modes of spreading. Human Pathology, 19(10):1228–1234, 1988.
* [12] Supriya K Saha, Andrew X Zhu, Charles S Fuchs, and Gabriel A Brooks. Forty-year trends in cholangiocarcinoma incidence in the us: intrahepatic disease on the rise. The Oncologist, 21(5):594–599, 2016.
* [13] Christine Sempoux, Ghalib Jibara, Stephen C Ward, Cathy Fan, Lihui Qin, Sasan Roayaie, M Isabel Fiel, Myron Schwartz, and Swan N Thung. Intrahepatic cholangiocarcinoma: new insights in pathology. In Seminars in Liver Disease, volume 31, pages 049–060. Thieme Medical Publishers, 2011.
* [14] Sairam Tabibu, PK Vinod, and CV Jawahar. Pan-renal cell carcinoma classification and survival prediction from histopathology images using deep learning. Scientific reports, 9(1):1–9, 2019.
* [15] Chensu Xie, Hassan Muhammad, Chad M. Vanderbilt, Raul Caso, Dig Vijay Kumar Yarlagadda, Gabriele Campanella, and Thomas J. Fuchs. Beyond classification: Whole slide tissue histopathology analysis by end-to-end part learning. In Medical Imaging with Deep Learning, 2020.
* [16] Jiawen Yao, Xinliang Zhu, and Junzhou Huang. Deep multi-instance learning for survival prediction from whole slide images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 496–504. Springer, 2019.
* [17] Xinliang Zhu, Jiawen Yao, Feiyun Zhu, and Junzhou Huang. Wsisa: Making survival prediction from whole slide histopathological images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7234–7242, 2017.
# Coulomb corrections to two-particle interaction in artificial traps
Peng Guo<EMAIL_ADDRESS>Department of Physics and Engineering, California
State University, Bakersfield, CA 93311, USA Kavli Institute for Theoretical
Physics, University of California, Santa Barbara, CA 93106, USA
###### Abstract
In the present work, we discuss the effect of the Coulomb interaction on the
dynamics of a two-particle system bound in various traps. The strategy for
including the Coulomb interaction in the quantization condition of the trapped
system is discussed in a general and non-perturbative manner. In most cases,
evaluating the Coulomb corrections to the quantization condition relies on a
numerical approach or a perturbation expansion. Only in some special cases,
such as the spherical hard-wall trap, can a closed form of the quantization
condition with Coulomb corrections to all orders be obtained.
## I Introduction
Recent advances in lattice quantum Chromodynamics (LQCD), ab initio nuclear
many-body theory and developments in computer technology have now made it
possible for the high precision computation of hadron and nuclei systems from
the first principle. However, most of these computations are performed in
various traps, for instance, harmonic oscillator trap in nuclear physics and
periodic cubic box in LQCD. The typical observables from these ab initio
computations are discrete energy spectrum of trapped systems. Therefore,
extracting particle interactions from the discrete energy spectrum in the trap
and building a connection between trapped dynamics and infinite-volume
dynamics have become an important subject in both the LQCD and nuclear physics
communities in recent years. In the elastic two-particle sector, such a
connection between the trapped system and the infinite-volume system can be
formulated in closed form, such as the Lüscher formula Lüscher (1991) in a
periodic cubic box in LQCD and the BERW formula Busch et al. (1998) in a
harmonic oscillator trap in the nuclear
physics community. Since then, the Lüscher and BERW formulas have been quickly
extended to both the coupled-channel and few-body sectors, see e.g. Refs.
Rummukainen and Gottlieb (1995); Christ et al. (2005); Bernard et al. (2008);
He et al. (2005); Lage et al. (2009); Döring et al. (2011); Guo et al. (2013);
Guo (2013); Kreuzer and Hammer (2009); Polejaeva and Rusetsky (2012); Hansen
and Sharpe (2014); Mai and Döring (2017, 2019); Döring et al. (2018); Guo
(2017); Guo and Gasparian (2017, 2018); Guo and Morris (2019); Mai et al.
(2019); Guo et al. (2018); Guo (2020a); Guo and Döring (2020); Guo (2020b);
Guo and Long (2020a); Guo (2020c); Guo and Long (2020b); Guo (2020d); Guo and
Gasparian (2021); Guo and Long (2021). Both the Lüscher and BERW formulas have
the form
$\det\left[\cot\delta(E)-\mathcal{M}(E)\right]=0\,,$ (1)
where $\delta(E)$ refers to the diagonal matrix of scattering phase shifts,
and the analytic matrix function $\mathcal{M}(E)$ is associated with the
geometry and dynamics of the trap itself. The Lüscher and BERW formulas are in
fact a consequence of the presence of two well-separated physical scales: (1)
the range of the short-range interaction between the two particles and (2) the
size of the trap. Hence the short-range dynamics, described by the scattering
phase shift, and the long-range correlation effect due to the trap can be
factorized.
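In a single channel, Eq. (1) reduces to the scalar condition $\cot\delta(E)=\mathcal{M}(E)$, whose roots are the allowed trap energies. The following toy sketch finds those roots by bracketing and bisection; the phase-shift and trap functions in the usage line are purely illustrative placeholders (a scattering-length-style $\cot\delta$ and a schematic $\mathcal{M}$), not any physical trap:

```python
import math

def quantization_roots(cot_delta, M, e_lo, e_hi, n_grid=2000):
    """Scan f(E) = cot_delta(E) - M(E) on [e_lo, e_hi] for sign changes
    and bisect each bracket; the zeros are the single-channel trap
    energies of Eq. (1)."""
    f = lambda e: cot_delta(e) - M(e)
    roots = []
    es = [e_lo + (e_hi - e_lo) * i / n_grid for i in range(n_grid + 1)]
    for a, b in zip(es, es[1:]):
        fa, fb = f(a), f(b)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0:
            while b - a > 1e-12:  # bisect the sign-change bracket
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

# Toy single-channel inputs (illustrative only): cot(delta) = 2/sqrt(E)
# mimicking a zero-range interaction, and a schematic trap M(E) = E - 1.
roots = quantization_roots(lambda e: 2.0 / math.sqrt(e),
                           lambda e: e - 1.0, 0.01, 10.0)
```

With these placeholders the condition has exactly one crossing in the scanned window, illustrating how a discrete spectrum emerges from the continuous scattering input.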
The aim of the present work is to extend such a relation to include the long-
range Coulomb interaction between charged particles. The Coulomb interaction
becomes dominant for charged-particle interactions at low energy Kong and
Ravndal (2000), so including it may be crucial for charged systems in a trap,
see e.g. charged hadron systems in LQCD Beane et al. (2020). In fact, some
early work on including Coulomb corrections in finite volume has already been
presented in Refs. Beane and Savage (2014); Stellin and Meißner (2021). The
discussion in Refs. Beane and Savage (2014); Stellin and Meißner (2021) was
primarily based on an effective-field-theory perturbation approach. It is a
well-known fact that both the incoming plane wave and the scattered spherical
wave are distorted by the long-range Coulomb interaction Messiah (1999),
$\displaystyle\psi^{(\infty)}_{l}(r,q)$ $\displaystyle\stackrel{{\scriptstyle
r\rightarrow\infty}}{{\sim}}\frac{\sin(qr-\frac{\pi}{2}l+\frac{Z\mu}{q}\ln
2qr)}{qr}$
$\displaystyle+t_{l}(q)\frac{e^{i(qr-\frac{\pi}{2}l+\frac{Z\mu}{q}\ln
2qr)}}{qr},$ (2)
where $Z=-Z_{1}Z_{2}e^{2}$ is the Coulomb interaction strength, and $\mu$ and
$q$ refer to the reduced mass and incoming momentum of the two-particle
system. $t_{l}(q)$ is the partial-wave scattering amplitude. Hence
perturbation theory breaks down and Coulomb corrections must be dealt with
non-perturbatively, see e.g. Kong and Ravndal (2000). When formulating the
Lüscher and BERW formulas in the presence of the long-range Coulomb force, the
Coulomb propagator must be used instead of the free-particle propagator. In
this work, we offer a general perspective on formulating the Lüscher and BERW
formulas in the presence of the long-range Coulomb force. All the discussion
is based on the Lippmann-Schwinger (LS) equation approach; hence it can be
made general and non-perturbative for various types of trap. However, except
for the hard-sphere wall trap, the analytic form of the Green's function in a
trap is usually not available, and the Dyson equation must be solved either
numerically or as a perturbation expansion.
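As a toy illustration of the numerical route, once the Dyson equation for the trapped Coulomb Green's function is discretized on a radial grid, $G^{(t)}$ and $V_C$ become matrices and the equation can be solved by fixed-point iteration. The sketch below is illustrative only (the matrices stand for grid-discretized operators with quadrature weights absorbed into $V_C$), not a production solver:

```python
def dyson_solve(G, VC, n_iter=200):
    """Fixed-point iteration for the discretized Dyson equation
    G_C = G + G * VC * G_C, where G (trap Green's function) and VC
    (diagonal Coulomb potential times grid weight) are n x n matrices.
    Converges when the kernel G*VC is a contraction; otherwise the
    linear system (I - G*VC) G_C = G must be solved directly."""
    n = len(G)
    # kernel K = G * VC, computed once
    K = [[sum(G[i][k] * VC[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    GC = [row[:] for row in G]  # start the iteration from G_C = G
    for _ in range(n_iter):
        GC = [[G[i][j] + sum(K[i][k] * GC[k][j] for k in range(n))
               for j in range(n)] for i in range(n)]
    return GC
```

For a kernel $G\,V_C$ with spectral radius below one the iteration converges geometrically; on realistic grids one would instead invert $(\mathbb{1}-G V_C)$ with a linear-algebra library.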
The paper is organized as follows. The derivation of the Coulomb corrections
to the quantization condition of the trapped system is presented in Sec. II. The
discussions and summary are given in Sec. III.
## II Connecting bound states in a trap to infinite volume scattering state
with Coulomb force
In this section, we present a general formalism bridging the discrete bound-
state energy spectrum in a trap and infinite-volume scattering dynamics in the
presence of the Coulomb force. The commonly used traps are the periodic finite
box in LQCD Beane et al. (2020), the harmonic potential in nuclear physics
Rotureau et al. (2010, 2012); Luu et al. (2010); Zhang et al. (2020), and the
spherical hard wall in some of the lattice implementations of chiral effective
field theory Elhatisari et al. (2016); Rokash et al. (2015). A brief
discussion of formal scattering in the presence of both a short-range
interaction and a long-range Coulomb interaction is given in Appendix A.
Before the detailed derivation of the quantization conditions, we establish
our notation for describing the dynamics of the two-particle interaction in a
trap and in infinite volume:
##### Dynamics in a trap:
the relative motion of two charged spinless particles interacting with both
Coulomb and short-range interactions in a trap is described by the Schrödinger
equation
$\left[\varepsilon-\hat{H}_{t}-V_{C}(r)\right]\psi^{(t)}_{\varepsilon}(\mathbf{r})=\int_{trap}d\mathbf{r}^{\prime}V_{S}(\mathbf{r},\mathbf{r}^{\prime})\psi^{(t)}_{\varepsilon}(\mathbf{r}^{\prime}),$
(3)
where $\int_{trap}d\mathbf{r}^{\prime}$ refers to the integral over space of
unit cell of the trap, e.g.
$\int_{trap}d\mathbf{r}^{\prime}=\int^{\frac{L}{2}}_{-\frac{L}{2}}dx^{\prime}dy^{\prime}dz^{\prime}$
in a periodic box with the size of $L$. The Hamiltonian operator of the trap
is given by
$\hat{H}_{t}=\hat{H}_{0}+V_{trap}(\mathbf{r}),$ (4)
with
$\hat{H}_{0}=-\frac{\nabla_{\mathbf{r}}^{2}}{2\mu}$ (5)
and $V_{trap}(\mathbf{r})$ representing the free particle Hamiltonian operator
and trap potential respectively. $\mu$ stands for the reduced mass of two
particles, and $\varepsilon$ is energy of trapped particles associated with
relative motion.
$V_{C}(r)=-\frac{Z}{r}$ (6)
and $V_{S}(\mathbf{r},\mathbf{r}^{\prime})$ denote the Coulomb and short-range
interactions between particles respectively.
##### Dynamics in infinite volume:
the dynamics of two charged interacting particles through the same short-range
interaction $V_{S}(\mathbf{r},\mathbf{r}^{\prime})$ in infinite volume is
given by
$\left[\varepsilon_{\infty}-\hat{H}_{0}-V_{C}(r)\right]\psi^{(\infty)}_{\varepsilon_{\infty}}(\mathbf{r})=\int_{-\infty}^{\infty}d\mathbf{r}^{\prime}V_{S}(\mathbf{r},\mathbf{r}^{\prime})\psi^{(\infty)}_{\varepsilon_{\infty}}(\mathbf{r}^{\prime}),$
(7)
where $\varepsilon_{\infty}$ stands for the relative motion energy of
particles in infinite volume. $\varepsilon_{\infty}$ is related to
$\varepsilon$ in the trap by total energy conservation,
$\varepsilon_{\infty}+\frac{\mathbf{P}^{2}}{2M}=\varepsilon+E^{(t)}_{CM}=E,$
(8)
where $\frac{\mathbf{P}^{2}}{2M}$ and $E^{(t)}_{CM}$ are the center of mass
(CM) energy of system in infinite volume and in the trap respectively.
##### Short-range interaction:
the separable zero-range potential is assumed in what follows for the
derivation of the quantization condition. In coordinate space, it has the
form of, see Refs. Guo
and Gasparian (2021); Guo and Long (2021),
$V_{S}(\mathbf{r},\mathbf{r}^{\prime})=\frac{\delta(r)\delta(r^{\prime})}{(rr^{\prime})^{2}}\sum_{lm}\frac{V^{(S)}_{l}}{(rr^{\prime})^{l}}Y_{lm}(\mathbf{\hat{r}})Y^{*}_{lm}(\mathbf{\hat{r}}^{\prime}).$
(9)
We emphasize that the assumption of a separable zero-range potential is not
essential for obtaining Lüscher- or BERW-type quantization conditions, since
such quantization conditions are model-independent asymptotic results valid
when the size of the trap is much larger than the range of the nuclear
interactions, see e.g. the discussion in Refs. Guo and Gasparian (2021); Guo
and Long (2021). However, the separable zero-range potential does serve as a
convenient tool for the derivation of the quantization condition.
Next, the dynamical equations for charged-particle interactions in a trap are
presented in Sec. II.1, where a quantization condition connecting the strength
of the short-range interaction and the Coulomb Green's function in the trap is
also derived. Then, under the same assumption of a separable short-range
interaction, the scattering solutions for charged particles in infinite volume
are given in detail in Sec. II.2, where a similar relation connecting the
strength of the short-range interaction, the infinite-volume Coulomb Green's
function, and the scattering phase shift is obtained. Finally, by combining
the dynamical equations in the trap and in infinite volume, the Lüscher- or
BERW-type quantization condition is obtained in Sec. II.3.
### II.1 Coulomb force modified dynamical equations in a trap
In the trap, the integral representation of Eq.(3) is given by
$\displaystyle\psi^{(t)}_{\varepsilon}(\mathbf{r})$
$\displaystyle=\int_{trap}d\mathbf{r}^{\prime\prime}G^{(C,t)}(\mathbf{r},\mathbf{r}^{\prime\prime};\varepsilon)$
$\displaystyle\times\int_{trap}d\mathbf{r}^{\prime}V_{S}(\mathbf{r}^{\prime\prime},\mathbf{r}^{\prime})\psi^{(t)}_{\varepsilon}(\mathbf{r}^{\prime}),$
(10)
where
$G^{(C,t)}(\mathbf{r},\mathbf{r}^{\prime\prime};\varepsilon)=\langle\mathbf{r}|\frac{1}{\varepsilon-\hat{H}_{t}-\hat{V}_{C}}|\mathbf{r}^{\prime\prime}\rangle$
(11)
stands for the Coulomb Green’s function in the trap. The Coulomb Green’s
function $G^{(C,t)}$ satisfies the Dyson equation,
$\displaystyle
G^{(C,t)}(\mathbf{r},\mathbf{r}^{\prime\prime};\varepsilon)=G^{(t)}(\mathbf{r},\mathbf{r}^{\prime\prime};\varepsilon)$
$\displaystyle+\int_{trap}d\mathbf{r}^{\prime}G^{(t)}(\mathbf{r},\mathbf{r}^{\prime};\varepsilon)V_{C}(r^{\prime})G^{(C,t)}(\mathbf{r}^{\prime},\mathbf{r}^{\prime\prime};\varepsilon),$
(12)
where
$G^{(t)}(\mathbf{r},\mathbf{r}^{\prime\prime};\varepsilon)=\langle\mathbf{r}|\frac{1}{\varepsilon-\hat{H}_{t}}|\mathbf{r}^{\prime\prime}\rangle$
(13)
is the particle Green’s function in the trap without the Coulomb interaction. The partial-wave expansions
$\psi^{(t)}_{\varepsilon}(\mathbf{r})=\sum_{lm}\psi^{(t)}_{lm}(r)Y_{lm}(\mathbf{\hat{r}})$
(14)
and
$\displaystyle G^{(C,t)}(\mathbf{r},\mathbf{r}^{\prime\prime};\varepsilon)$
$\displaystyle=\sum_{lm,l^{\prime\prime}m^{\prime\prime}}Y_{lm}(\mathbf{\hat{r}})G_{lm,l^{\prime\prime}m^{\prime\prime}}^{(C,t)}(r,r^{\prime\prime};\varepsilon)Y^{*}_{l^{\prime\prime}m^{\prime\prime}}(\mathbf{\hat{r}}^{\prime\prime})$
(15)
yield
$\displaystyle\psi^{(t)}_{lm}(r)$
$\displaystyle=\sum_{l^{\prime}m^{\prime}}\int_{trap}{r^{\prime\prime}}^{2}dr^{\prime\prime}G_{lm,l^{\prime}m^{\prime}}^{(C,t)}(r,r^{\prime\prime};\varepsilon)$
$\displaystyle\times\int_{trap}{r^{\prime}}^{2}dr^{\prime}V^{(S)}_{l^{\prime}}(r^{\prime\prime},r^{\prime})\psi^{(t)}_{l^{\prime}m^{\prime}}(r^{\prime}).$
(16)
With the separable potential of Eq.(9), the quantization condition that
determines the discrete bound-state energy spectrum of the trapped system is thus
given by
$\det\left[\delta_{lm,l^{\prime}m^{\prime}}\frac{1}{V^{(S)}_{l}}-\frac{G_{lm,l^{\prime}m^{\prime}}^{(C,t)}(r,r^{\prime};\varepsilon)}{r^{l}{r^{\prime}}^{l^{\prime}}}|_{r,r^{\prime}\rightarrow
0}\right]=0.$ (17)
The Coulomb Green’s function $G^{(C,t)}$, which describes the propagation of
charged particles in the trap, is an essential ingredient of the quantization
condition and must be solved for first. In this work, only the traps commonly
used in the lattice and nuclear physics communities are considered:
#### II.1.1 Harmonic oscillator trap
In the harmonic oscillator (HO) trap with trap potential:
$V_{trap}(r)=\frac{1}{2}\mu\omega^{2}r^{2},$ (18)
the rotational symmetry is preserved, so only the diagonal elements of the
partial-wave Green’s function contribute. The partial-wave Dyson equation for the
harmonic-trap Green’s function in the presence of the Coulomb force is
$\displaystyle
G_{l}^{(C,\omega)}(r,r^{\prime};\varepsilon)=G^{(\omega)}_{l}(r,r^{\prime};\varepsilon)$
$\displaystyle-\int_{0}^{\infty}{r^{\prime\prime}}^{2}dr^{\prime\prime}G_{l}^{(\omega)}(r,r^{\prime\prime};\varepsilon)\frac{Z}{r^{\prime\prime}}G_{l}^{(C,\omega)}(r^{\prime\prime},r^{\prime};\varepsilon),$
(19)
where $G^{(\omega)}_{l}$ is the partial-wave HO Green’s function and is given
in Refs. Blinder (1984); Guo and Long (2021) by
$\displaystyle
G^{(\omega)}_{l}(r,r^{\prime};\varepsilon)=-\frac{1}{\omega(rr^{\prime})^{\frac{3}{2}}}\frac{\Gamma(\frac{l}{2}+\frac{3}{4}-\frac{\varepsilon}{2\omega})}{\Gamma(l+\frac{3}{2})}$
$\displaystyle\times\mathcal{M}_{\frac{\varepsilon}{2\omega},\frac{l}{2}+\frac{1}{4}}(\mu\omega
r^{2}_{<})\mathcal{W}_{\frac{\varepsilon}{2\omega},\frac{l}{2}+\frac{1}{4}}(\mu\omega
r^{2}_{>}).$ (20)
$\mathcal{M}_{a,b}(z)$ and $\mathcal{W}_{a,b}(z)$ are the Whittaker functions
as defined in Ref. DLMF, and $r_{<}$ and $r_{>}$ denote the lesser and
greater of $(r,r^{\prime})$, respectively.
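As an illustration, the Green’s function of Eq.(20) can be evaluated directly with mpmath’s Whittaker functions (a sketch; mpmath is an assumed dependency, the units $\mu=\omega=1$ and the sample values of $r$, $r^{\prime}$, $\varepsilon$ are arbitrary). Two expected properties serve as sanity checks: symmetry under $r\leftrightarrow r^{\prime}$, and poles of the prefactor at the HO spectrum $\varepsilon=\omega(2n+l+3/2)$.

```python
from mpmath import mp, gamma, whitm, whitw

mp.dps = 30  # working precision

def G_omega_l(r, rp, eps, l=0, mu=1.0, omega=1.0):
    """Partial-wave HO Green's function of Eq.(20) (units mu = omega = 1 assumed)."""
    r_lt, r_gt = min(r, rp), max(r, rp)
    k = eps / (2 * omega)                 # first Whittaker index
    m = l / 2 + mp.mpf(1) / 4             # second Whittaker index
    pref = -gamma(l / 2 + mp.mpf(3) / 4 - k) / (
        omega * (r * rp) ** mp.mpf('1.5') * gamma(l + mp.mpf(3) / 2))
    return pref * whitm(k, m, mu * omega * r_lt**2) * whitw(k, m, mu * omega * r_gt**2)

g12 = G_omega_l(0.7, 1.3, 0.9)           # symmetric in (r, r') by construction
g21 = G_omega_l(1.3, 0.7, 0.9)
near = G_omega_l(0.7, 1.3, 1.5 + 1e-8)   # just above the l=0 ground state 3*omega/2
far  = G_omega_l(0.7, 1.3, 0.9)
print(g12 - g21, abs(near / far))
```

The ratio probes the pole at the $l=0$ ground-state energy $\varepsilon=3\omega/2$, where $\Gamma(\frac{l}{2}+\frac{3}{4}-\frac{\varepsilon}{2\omega})$ diverges.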
#### II.1.2 Periodic cubic box
In finite volume, the trap potential is replaced by periodic boundary
conditions. The rotational symmetry is broken, and angular orbital momenta are
no longer good quantum numbers. In addition, the periodic boundary condition
is not satisfied by the infinite-volume Coulomb potential,
$V_{C}(r)=-\frac{Z}{r}$; the latter is usually replaced by the
infrared-regularized periodic Coulomb potential, see
Refs. Beane and Savage (2014); Stellin and Meißner (2021),
$V_{C}^{(L)}(\mathbf{r})=-\frac{1}{L^{3}}\sum_{\mathbf{p}=\frac{2\pi\mathbf{n}}{L},\mathbf{n}\in\mathbb{Z}^{3},\mathbf{n}\neq\mathbf{0}}\frac{4\pi
Z}{|\mathbf{p}|^{2}}e^{i\mathbf{p}\cdot\mathbf{r}},$ (21)
where $L$ is the size of the cubic box, and
$V_{C}^{(L)}(\mathbf{r}+\mathbf{n}L)=V_{C}^{(L)}(\mathbf{r}),\ \ \ \
\mathbf{n}\in\mathbb{Z}^{3}.$ (22)
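A minimal sketch of the regularized potential in Eq.(21), truncating the momentum sum at $|n_i|\le n_{\max}$ (the cutoff $n_{\max}$ and the sample point are arbitrary choices here; the exact potential requires $n_{\max}\to\infty$). Because every retained mode has $\mathbf{p}=2\pi\mathbf{n}/L$, the periodicity of Eq.(22) holds term by term:

```python
import cmath
from itertools import product
from math import pi

def V_C_L(r, L=1.0, Z=1.0, nmax=8):
    """Truncated momentum sum for the periodic Coulomb potential of Eq.(21)."""
    total = 0.0 + 0.0j
    for n in product(range(-nmax, nmax + 1), repeat=3):
        if n == (0, 0, 0):
            continue  # dropping the zero mode regulates the IR singularity
        p = tuple(2 * pi * ni / L for ni in n)
        p2 = sum(c * c for c in p)
        phase = cmath.exp(1j * sum(c * x for c, x in zip(p, r)))
        total += (4 * pi * Z / p2) * phase
    return -total.real / L**3  # imaginary parts cancel in +/- n pairs

r = (0.21, -0.13, 0.34)
v  = V_C_L(r)
vp = V_C_L((r[0] + 1.0, r[1], r[2]))  # shifted by one lattice vector, Eq.(22)
print(v, vp)
```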
In momentum space, the finite-volume Dyson equation is
$\displaystyle\widetilde{G}^{(C,L)}(\mathbf{p},\mathbf{p}^{\prime};\varepsilon)=\frac{L^{3}\delta_{\mathbf{p},\mathbf{p}^{\prime}}}{\varepsilon-\frac{\mathbf{p}^{2}}{2\mu}}$
$\displaystyle-\frac{1}{\varepsilon-\frac{\mathbf{p}^{2}}{2\mu}}\frac{1}{L^{3}}\sum_{\mathbf{p}^{\prime\prime}=\frac{2\pi\mathbf{n}}{L},\mathbf{n}\in\mathbb{Z}^{3}}^{\mathbf{p}^{\prime\prime}\neq\mathbf{p}}\frac{4\pi
Z}{|\mathbf{p}-\mathbf{p}^{\prime\prime}|^{2}}\widetilde{G}^{(C,L)}(\mathbf{p}^{\prime\prime},\mathbf{p}^{\prime};\varepsilon).$
(23)
The finite-volume Coulomb-modified Green’s function in coordinate space
is then given by the finite-volume Fourier transform
$G^{(C,L)}(\mathbf{r},\mathbf{r}^{\prime};\varepsilon)=\frac{1}{L^{6}}\sum_{\mathbf{p},\mathbf{p}^{\prime}\in\frac{2\pi\mathbf{n}}{L}}^{\mathbf{n}\in\mathbb{Z}^{3}}e^{i\mathbf{p}\cdot\mathbf{r}}\widetilde{G}^{(C,L)}(\mathbf{p},\mathbf{p}^{\prime};\varepsilon)e^{-i\mathbf{p}^{\prime}\cdot\mathbf{r}^{\prime}}.$
(24)
#### II.1.3 Spherical hard wall
The hard-sphere boundary condition is accomplished by the trap potential
$V_{trap}(r)=\begin{cases}0,&r<R\,,\\\ \infty,&r>R\,,\end{cases}$ (25)
where $R$ is the radius of the sphere. Hence, inside the spherical hard wall,
$|\mathbf{r}|<R$, the Coulomb-modified Green’s function satisfies
$\left[\varepsilon-\hat{H}_{0}-\hat{V}_{C}\right]G^{(C,h.s.)}(\mathbf{r},\mathbf{r}^{\prime};\varepsilon)=\delta(\mathbf{r}-\mathbf{r}^{\prime}),$
(26)
which is just the regular differential equation for the Coulomb Green’s function,
except for the boundary condition.
### II.2 Coulomb force modified infinite volume dynamical equations
In infinite volume, the scattering solution of two interacting charged
particles in the presence of the Coulomb interaction is described by the
inhomogeneous LS equation,
$\displaystyle\psi^{(\infty)}_{\varepsilon_{\infty}}(\mathbf{r},\mathbf{q})=\psi^{(C,\infty)}_{\varepsilon_{\infty}}(\mathbf{r},\mathbf{q})$
$\displaystyle+\int_{-\infty}^{\infty}d\mathbf{r}^{\prime\prime}G^{(C,\infty)}(\mathbf{r},\mathbf{r}^{\prime\prime};q)\int_{-\infty}^{\infty}d\mathbf{r}^{\prime}V_{S}(\mathbf{r}^{\prime\prime},\mathbf{r}^{\prime})\psi^{(\infty)}_{\varepsilon_{\infty}}(\mathbf{r}^{\prime},\mathbf{q}),$
(27)
where $\mathbf{q}$ is the on-shell incoming momentum:
$q=\sqrt{2\mu\varepsilon_{\infty}}.$ (28)
$\psi^{(C,\infty)}_{\varepsilon_{\infty}}$ and $G^{(C,\infty)}$ are the Coulomb
wave function and the Coulomb Green’s function, respectively. The partial-wave
expansions
$\displaystyle\psi^{(\infty)}_{\varepsilon_{\infty}}(\mathbf{r},\mathbf{q})=\sum_{lm}Y^{*}_{lm}(\mathbf{\hat{q}})\psi^{(\infty)}_{l}(r,q)Y_{lm}(\mathbf{\hat{r}}),$
$\displaystyle
G^{(C,\infty)}(\mathbf{r},\mathbf{r}^{\prime\prime};q)=\sum_{lm}Y_{lm}(\mathbf{\hat{r}})G_{l}^{(C,\infty)}(r,r^{\prime\prime};q)Y^{*}_{lm}(\mathbf{\hat{r}}^{\prime\prime}),$
(29)
and the separable potential of Eq.(9) yield the algebraic equation
$\displaystyle\frac{\psi^{(\infty)}_{l}(r,q)}{r^{l}}=\frac{\psi^{(C,\infty)}_{l}(r,q)}{r^{l}}$
$\displaystyle+V^{(S)}_{l}\frac{G_{l}^{(C,\infty)}(r,r^{\prime\prime};q)}{(rr^{\prime\prime})^{l}}\frac{\psi^{(\infty)}_{l}(r^{\prime},q)}{{r^{\prime}}^{l}}|_{r^{\prime},r^{\prime\prime}\rightarrow
0}.$ (30)
#### II.2.1 Coulomb wave function and Coulomb Green’s function
The analytic expressions for $\psi^{(C,\infty)}_{l}$ and $G_{l}^{(C,\infty)}$
are given in Refs. Messiah (1999); Hostler (1964), respectively, as
$\displaystyle\psi^{(C,\infty)}_{l}(r,q)=4\pi\frac{\Gamma(l+1+i\gamma)}{(2l+1)!}e^{-\frac{\pi}{2}\gamma}$
$\displaystyle\times(2iqr)^{l}e^{iqr}M(l+1+i\gamma,2l+2,-2iqr),$ (31)
and
$\displaystyle
G_{l}^{(C,\infty)}(r,r^{\prime\prime};q)=2\mu(2iq)\frac{\Gamma(l+1+i\gamma)}{(2l+1)!}$
$\displaystyle\times(-2iqr_{<})^{l}e^{iqr_{<}}M(l+1+i\gamma,2l+2,-2iqr_{<})$
$\displaystyle\times(-2iqr_{>})^{l}e^{iqr_{>}}U(l+1+i\gamma,2l+2,-2iqr_{>}),$
(32)
where $M(a,b,z)$ and $U(a,b,z)$ are two linearly independent Kummer functions,
see Ref. DLMF, and
$\gamma=-\frac{Z\mu}{q}.$ (33)
For convenience, let us introduce two real functions:
$\displaystyle
j_{l}^{(C)}(\gamma,qr)=C_{l}(\gamma)(qr)^{l}e^{iqr}M(l+1+i\gamma,2l+2,-2iqr),$
(34)
and
$\displaystyle n_{l}^{(C)}(\gamma,qr)$
$\displaystyle=i(-2qr)^{l}e^{\frac{\pi}{2}\gamma}e^{iqr}U(l+1+i\gamma,2l+2,-2iqr)e^{i\delta_{l}^{(C)}}$
$\displaystyle-i(-2qr)^{l}e^{\frac{\pi}{2}\gamma}e^{-iqr}U(l+1-i\gamma,2l+2,2iqr)e^{-i\delta_{l}^{(C)}},$
(35)
where the Sommerfeld factor and the Coulomb phase shift are defined in
Ref. Messiah (1999) by
$C_{l}(\gamma)=2^{l}\frac{|\Gamma(l+1+i\gamma)|}{(2l+1)!}e^{-\frac{\pi}{2}\gamma},$
(36)
and
$e^{2i\delta_{l}^{(C)}}=\frac{\Gamma(l+1+i\gamma)}{\Gamma(l+1-i\gamma)}.$ (37)
In the limit $\gamma\rightarrow 0$, $j_{l}^{(C)}(\gamma,qr)$ and
$n_{l}^{(C)}(\gamma,qr)$ reduce to the regular spherical Bessel
functions,
$\left(j_{l}^{(C)}(\gamma,qr),n_{l}^{(C)}(\gamma,qr)\right)\stackrel{{\scriptstyle\gamma\rightarrow
0}}{{\rightarrow}}\left(j_{l}(qr),n_{l}(qr)\right).$ (38)
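As a numerical sanity check of the limit in Eq.(38), the sketch below builds $j_{l}^{(C)}$ and $n_{l}^{(C)}$ from Eqs.(34)-(37) using mpmath’s Kummer functions (mpmath is an assumed dependency; the sample point $x=qr$ is arbitrary) and compares them, at tiny $\gamma$, against $j_{0}(x)=\sin x/x$ and $n_{0}(x)=-\cos x/x$:

```python
from mpmath import mp, gamma, hyp1f1, hyperu, exp, sin, cos, factorial, arg

mp.dps = 25

def sommerfeld_C(l, g):
    """Sommerfeld factor C_l(gamma), Eq.(36)."""
    return 2**l * abs(gamma(l + 1 + 1j * g)) / factorial(2 * l + 1) * exp(-mp.pi * g / 2)

def coulomb_phase(l, g):
    """Coulomb phase shift of Eq.(37): delta_l^C = arg Gamma(l + 1 + i*gamma)."""
    return arg(gamma(l + 1 + 1j * g))

def j_C(l, g, x):
    """Regular Coulomb function j_l^(C)(gamma, qr), Eq.(34); real for real g, x."""
    val = sommerfeld_C(l, g) * x**l * exp(1j * x) * hyp1f1(l + 1 + 1j * g, 2 * l + 2, -2j * x)
    return val.real

def n_C(l, g, x):
    """Irregular Coulomb function n_l^(C)(gamma, qr), Eq.(35)."""
    t = 1j * (-2 * x)**l * exp(mp.pi * g / 2) * exp(1j * x) \
        * hyperu(l + 1 + 1j * g, 2 * l + 2, -2j * x) * exp(1j * coulomb_phase(l, g))
    return 2 * t.real  # second term of Eq.(35) is the complex conjugate of the first

x = 1.7
j0, n0 = j_C(0, 1e-12, x), n_C(0, 1e-12, x)  # gamma -> 0
ref_j, ref_n = sin(x) / x, -cos(x) / x       # spherical Bessel j_0, n_0
print(j0 - ref_j, n0 - ref_n)
```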
Also, using the identity
$\displaystyle M(l+1+i\gamma,2l+2,-2iqr)$
$\displaystyle=-(-1)^{l}\frac{(2l+1)!}{\Gamma(l+1-i\gamma)}e^{\pi\gamma}U(l+1+i\gamma,2l+2,-2iqr)$
$\displaystyle-(-1)^{l}\frac{(2l+1)!}{\Gamma(l+1+i\gamma)}e^{\pi\gamma}e^{-2iqr}U(l+1-i\gamma,2l+2,2iqr),$
(39)
the partial-wave Coulomb wave function and Coulomb Green’s function can thus
be rewritten as
$\psi^{(C,\infty)}_{l}(r,q)=4\pi
i^{l}j_{l}^{(C)}(\gamma,qr)e^{i\delta_{l}^{(C)}},$ (40)
and
$G_{l}^{(C,\infty)}(r,r^{\prime\prime};q)=-i2\mu
qj_{l}^{(C)}(\gamma,qr_{<})h_{l}^{(C,+)}(\gamma,qr_{>}),$ (41)
where
$\displaystyle h_{l}^{(C,\pm)}(\gamma,qr)=j_{l}^{(C)}(\gamma,qr)\pm
in_{l}^{(C)}(\gamma,qr)$
$\displaystyle=-2(-2qr)^{l}e^{\frac{\pi}{2}\gamma}e^{\pm iqr}U(l+1\pm
i\gamma,2l+2,\mp 2iqr)e^{\pm i\delta_{l}^{(C)}}.$ (42)
The Coulomb Green’s function in Eq.(41) thus resembles the free-particle
Green’s function,
$G_{l}^{(0,\infty)}(r,r^{\prime\prime};q)=-i2\mu
qj_{l}(qr_{<})h_{l}^{(+)}(qr_{>}),$ (43)
where $j_{l}$ and $h_{l}^{(+)}$ are the regular spherical Bessel and Hankel
functions.
#### II.2.2 Coulomb force modified scattering amplitudes
In the presence of the Coulomb force, the total scattering amplitude is composed
of two components: (1) the short-range scattering amplitude modified by the
Coulomb interaction, and (2) the pure Coulomb scattering amplitude.
##### Coulomb force modified short-range interaction scattering amplitude:
the short-range interaction scattering amplitude can be defined by the
solution of Eq.(30),
$\displaystyle\psi^{(\infty)}_{l}(r,q)$ $\displaystyle=4\pi
i^{l}\bigg{[}j^{(C)}_{l}(\gamma,qr)e^{i\delta_{l}^{(C)}}$
$\displaystyle+it^{(SC)}_{l}(q)h_{l}^{(C,+)}(\gamma,qr)e^{-i\delta_{l}^{(C)}}\bigg{]},$
(44)
where $t^{(SC)}_{l}(q)$ is the Coulomb-modified short-range scattering
amplitude, given by
$t^{(SC)}_{l}(q)=-\frac{2\mu
q\left(\frac{j_{l}^{(C)}(\gamma,qr)}{r^{l}}|_{r\rightarrow
0}\right)^{2}}{\frac{1}{V^{(S)}_{l}}-\frac{G_{l}^{(C,\infty)}(r^{\prime},r^{\prime\prime};q)}{(r^{\prime}r^{\prime\prime})^{l}}|_{r^{\prime},r^{\prime\prime}\rightarrow
0}}e^{2i\delta_{l}^{(C)}}.$ (45)
The amplitude $t^{(SC)}_{l}(q)$ is typically parameterized by both the Coulomb
phase shift $\delta_{l}^{(C)}$ and a short-range scattering phase shift
$\delta_{l}^{(S)}$,
$t^{(SC)}_{l}(q)=\frac{1}{\cot\delta_{l}^{(S)}-i}e^{2i\delta_{l}^{(C)}}.$ (46)
Using the asymptotic forms
$\displaystyle\frac{j_{l}^{(C)}(\gamma,qr)}{r^{l}}|_{r\rightarrow 0}$
$\displaystyle=q^{l}C_{l}(\gamma),$
$\displaystyle\Im\left[\frac{G_{l}^{(C,\infty)}(r^{\prime},r^{\prime\prime};q)}{(r^{\prime}r^{\prime\prime})^{l}}|_{r^{\prime},r^{\prime\prime}\rightarrow
0}\right]$ $\displaystyle=-2\mu q^{2l+1}C_{l}^{2}(\gamma),$ (47)
together with Eq.(45) and Eq.(46), one thus finds
$\displaystyle\frac{1}{V^{(S)}_{l}}$ $\displaystyle=-2\mu
q^{2l+1}C^{2}_{l}(\gamma)\cot\delta_{l}^{(S)}(q)$
$\displaystyle+\Re\left[\frac{G_{l}^{(C,\infty)}(r^{\prime},r^{\prime\prime};q)}{(r^{\prime}r^{\prime\prime})^{l}}|_{r^{\prime},r^{\prime\prime}\rightarrow
0}\right].$ (48)
##### Pure Coulomb scattering amplitude:
the pure Coulomb scattering amplitude is defined through the Coulomb wave
function. Introducing
$h_{l}^{(C,\pm)}(\gamma,qr)=e^{\pm
i\delta_{l}^{(C)}}H_{l}^{(C,\pm)}(\gamma,qr),$ (49)
where
$\displaystyle H_{l}^{(C,\pm)}(\gamma,qr)=J_{l}^{(C)}(\gamma,qr)\pm
iN_{l}^{(C)}(\gamma,qr)$
$\displaystyle=-2(-2qr)^{l}e^{\frac{\pi}{2}\gamma}e^{\pm iqr}U(l+1\pm
i\gamma,2l+2,\mp 2iqr),$ (50)
the Coulomb wave function in Eq.(40) can thus be rewritten as
$\psi^{(C,\infty)}_{l}(r,q)=4\pi
i^{l}\left[J_{l}^{(C)}(\gamma,qr)+it_{l}^{(C)}(q)H_{l}^{(C,+)}(\gamma,qr)\right],$
(51)
where $t_{l}^{(C)}(q)$ is the pure Coulomb scattering amplitude:
$t_{l}^{(C)}(q)=\frac{e^{2i\delta_{l}^{(C)}}-1}{2i}.$ (52)
##### Total scattering amplitude:
the total wave function in Eq.(44) can now also be written as
$\displaystyle\psi^{(\infty)}_{l}(r,q)$ $\displaystyle=4\pi
i^{l}\bigg{[}J^{(C)}_{l}(\gamma,qr)+it_{l}(q)H_{l}^{(C,+)}(\gamma,qr)\bigg{]},$
(53)
where
$\displaystyle
t_{l}(q)=t^{(C)}_{l}(q)+t^{(SC)}_{l}(q)=\frac{e^{2i\delta_{l}^{(C)}}e^{2i\delta_{l}^{(S)}}-1}{2i}.$
(54)
##### Asymptotic forms of wave functions:
using the asymptotic form of the $H_{l}^{(C,\pm)}$ functions,
$H_{l}^{(C,\pm)}(\gamma,qr)\stackrel{{\scriptstyle
r\rightarrow\infty}}{{\rightarrow}}h^{(\pm)}_{l}(qr)e^{\mp i\gamma\ln 2qr},$
(55)
one can easily show that
$\displaystyle\psi^{(C,\infty)}_{l}(r,q)$
$\displaystyle\stackrel{{\scriptstyle r\rightarrow\infty}}{{\rightarrow}}4\pi
i^{l}\bigg{[}\frac{\sin(qr-\frac{\pi}{2}l-\gamma\ln 2qr)}{qr}$
$\displaystyle+it_{l}^{(C)}(q)h^{(+)}_{l}(qr)e^{-i\gamma\ln 2qr}\bigg{]},$
(56)
and
$\displaystyle\psi^{(\infty)}_{l}(r,q)$ $\displaystyle\stackrel{{\scriptstyle
r\rightarrow\infty}}{{\rightarrow}}4\pi
i^{l}\bigg{[}\frac{\sin(qr-\frac{\pi}{2}l-\gamma\ln 2qr)}{qr}$
$\displaystyle+it_{l}(q)h^{(+)}_{l}(qr)e^{-i\gamma\ln 2qr}\bigg{]},$ (57)
where the factor $\gamma\ln 2qr$ represents the long-range Coulomb distortion
of the incoming plane wave and the outgoing spherical wave.
### II.3 Quantization condition in a trap in presence of Coulomb interaction
Combining Eq.(17) and Eq.(48) by eliminating $V^{(S)}_{l}$, one finds the
Coulomb-modified Lüscher-formula-like relation,
$\det\left[\delta_{lm,l^{\prime}m^{\prime}}\cot\delta^{(S)}_{l}(q)-\mathcal{M}^{(C,t)}_{lm,l^{\prime}m^{\prime}}(\varepsilon)\right]=0,$
(58)
where $\mathcal{M}^{(C,t)}$ is the generalized zeta function in the presence of
the Coulomb force,
$\displaystyle\mathcal{M}^{(C,t)}_{lm,l^{\prime}m^{\prime}}(\varepsilon)=-\frac{1}{2\mu
q^{2l+1}C^{2}_{l}(\gamma)}\frac{G_{lm,l^{\prime}m^{\prime}}^{(C,t)}(r,r^{\prime};\varepsilon)}{r^{l}{r^{\prime}}^{l^{\prime}}}|_{r,r^{\prime}\rightarrow
0}$ $\displaystyle+\delta_{lm,l^{\prime}m^{\prime}}\frac{1}{2\mu
q^{2l+1}C^{2}_{l}(\gamma)}\frac{\Re\left(G_{l}^{(C,\infty)}(r,r^{\prime};q)\right)}{(rr^{\prime})^{l}}|_{r,r^{\prime}\rightarrow
0}.$ (59)
Both $G_{lm,l^{\prime}m^{\prime}}^{(C,t)}$ and $G_{l}^{(C,\infty)}$ are
ultraviolet divergent; after cancellation between the two terms, the generalized
zeta function is finite and well-defined. In the limit $\gamma\rightarrow
0$,
$C_{l}(\gamma)\stackrel{{\scriptstyle\gamma\rightarrow
0}}{{\rightarrow}}C_{l}(0)=\frac{\sqrt{\pi}}{2^{l+1}\Gamma(l+\frac{3}{2})}=\frac{j_{l}(qr)}{(qr)^{l}}|_{r\rightarrow
0},$ (60)
and
$\frac{\Re\left(G_{l}^{(C,\infty)}(r,r^{\prime\prime};q)\right)}{(rr^{\prime\prime})^{l}}|_{r,r^{\prime\prime}\rightarrow
0}\stackrel{{\scriptstyle\gamma\rightarrow 0}}{{\rightarrow}}2\mu
q\frac{j_{l}(qr)n_{l}(qr)}{r^{2l}}|_{r\rightarrow 0},$ (61)
hence Eq.(59) reduces to Eq.(B32) in Ref. Guo and Gasparian (2021).
## III Discussion and summary
### III.1 Perturbation expansion
The key element of the generalized zeta function in Eq.(59) is the
Coulomb-modified Green’s function in the trap, which is given by the Dyson
equation, Eq.(12). Solving the Dyson equation is in most cases not an easy
task: great effort must be made to deal with both the ultraviolet (UV) and
infrared (IR) divergences caused by the Coulomb interaction. A perturbation
expansion may therefore be more practical in general, see the discussion in
Beane and Savage (2014); Stellin and Meißner (2021). Symbolically, the
Coulomb-modified zeta function is a real function given by
$\mathcal{\hat{M}}_{C,t}\sim\frac{1}{C^{2}(\gamma)}\left[\hat{G}_{C,t}-\Re\left(\hat{G}_{C,\infty}\right)\right],$
(62)
where the solution of $\hat{G}_{C,t}$ is given by
$\hat{G}_{C,t}=\frac{\hat{G}_{t}}{1-\hat{V}_{C}\hat{G}_{t}}=\sum_{n=0}^{\infty}\hat{G}_{t}\left(\hat{V}_{C}\hat{G}_{t}\right)^{n},$
(63)
and $\hat{G}_{t}$ denotes the Green’s function in the trap,
$\hat{G}_{t}(E)=\frac{1}{E-\hat{H}_{t}}.$ (64)
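The resummation in Eq.(63) is just an operator geometric series. A finite-dimensional sketch, with arbitrary $3\times 3$ matrices standing in for $\hat{G}_{t}$ and $\hat{V}_{C}$ (scaled so the series converges), shows the truncated sum approaching the closed form $(1-\hat{G}_{t}\hat{V}_{C})^{-1}\hat{G}_{t}$:

```python
import numpy as np

rng = np.random.default_rng(0)
G_t = 0.2 * rng.standard_normal((3, 3))   # stand-in for the trap Green's function at fixed E
V_C = 0.2 * rng.standard_normal((3, 3))   # stand-in for the Coulomb potential

# exact resummation of Eq.(63): G_Ct = (1 - G_t V_C)^{-1} G_t
exact = np.linalg.solve(np.eye(3) - G_t @ V_C, G_t)

# truncated series sum_{n=0}^{N} G_t (V_C G_t)^n
term, series = G_t.copy(), np.zeros((3, 3))
for _ in range(80):
    series += term
    term = term @ V_C @ G_t

err = np.max(np.abs(series - exact))
print(err)
```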
Although the analytic expression for the infinite-volume Coulomb Green’s
function $\hat{G}_{C,\infty}$ is already known, to ensure that the UV and IR
divergences cancel properly order by order, $\hat{G}_{C,\infty}$ can be
expanded perturbatively as well,
$\hat{G}_{C,\infty}=\frac{\hat{G}_{0,\infty}}{1-\hat{V}_{C}\hat{G}_{0,\infty}}=\sum_{n=0}^{\infty}\hat{G}_{0,\infty}\left(\hat{V}_{C}\hat{G}_{0,\infty}\right)^{n},$
(65)
where
$\hat{G}_{0,\infty}(E)=\frac{1}{E-\hat{H}_{0}}.$ (66)
Hence, the Coulomb-corrected zeta function may be computed systematically by
perturbation expansion,
$\displaystyle C^{2}(\gamma)\mathcal{\hat{M}}_{C}$
$\displaystyle\sim\sum_{n=0}^{\infty}\left[\hat{G}_{t}\left(\hat{V}_{C}\hat{G}_{t}\right)^{n}-\Re\left(\hat{G}_{0,\infty}\left(\hat{V}_{C}\hat{G}_{0,\infty}\right)^{n}\right)\right].$
(67)
The perturbation expansion applies to both the HO trap and the periodic cubic
box; see also Beane and Savage (2014); Stellin and Meißner (2021) for the
finite-volume discussion from an effective-field-theory perspective.
##### HO trap:
for the harmonic oscillator trap, iterating the Dyson equation in Eq.(19) once,
the leading-order and first-order perturbation results can be written
formally as
$\displaystyle\frac{C^{2}_{l}(\gamma)}{C^{2}_{l}(0)}\mathcal{M}^{(C,0th)}_{l}(\varepsilon)=(-1)^{l+1}\left(\frac{4\mu\omega}{q^{2}}\right)^{l+\frac{1}{2}}\frac{\Gamma(\frac{l}{2}+\frac{3}{4}-\frac{\varepsilon}{2\omega})}{\Gamma(\frac{1}{4}-\frac{l}{2}-\frac{\varepsilon}{2\omega})},$
(68)
and
$\displaystyle\frac{C^{2}_{l}(\gamma)}{C^{2}_{l}(0)}\mathcal{M}^{(C,1st)}_{l}(\varepsilon)$
$\displaystyle=-\frac{2^{2l+2}\Gamma^{2}(l+\frac{3}{2})}{2\mu
q^{2l+1}\pi}\frac{\triangle
G_{l}^{(C,1st)}(r,r^{\prime};\varepsilon)}{(rr^{\prime})^{l}}|_{r,r^{\prime}\rightarrow
0},$ (69)
where
$\displaystyle\triangle G_{l}^{(C,1st)}(r,r^{\prime};\varepsilon)$
$\displaystyle=-\int_{0}^{\infty}{r^{\prime\prime}}^{2}dr^{\prime\prime}G_{l}^{(\omega)}(r,r^{\prime\prime};\varepsilon)\frac{Z}{r^{\prime\prime}}G_{l}^{(\omega)}(r^{\prime\prime},r^{\prime};\varepsilon)$
$\displaystyle+\Re\int_{0}^{\infty}{r^{\prime\prime}}^{2}dr^{\prime\prime}G_{l}^{(0,\infty)}(r,r^{\prime\prime};q)\frac{Z}{r^{\prime\prime}}G_{l}^{(0,\infty)}(r^{\prime\prime},r^{\prime};q).$
(70)
##### Periodic cubic box:
similarly, in finite volume, the leading-order and first-order perturbation
results are given by
$\displaystyle\frac{C^{2}_{l}(\gamma)}{C^{2}_{l}(0)}\mathcal{M}^{(C,0th)}_{lm,l^{\prime}m^{\prime}}(\varepsilon)$
$\displaystyle=-\frac{1}{L^{3}}\sum_{\mathbf{p}\in\frac{2\pi\mathbf{n}}{L}}^{\mathbf{n}\in\mathbb{Z}^{3}}\frac{p^{l+l^{\prime}}}{q^{2l+1}}\frac{Y_{lm}(\mathbf{\hat{p}})Y^{*}_{l^{\prime}m^{\prime}}(\mathbf{\hat{p}})}{2\mu\varepsilon-\mathbf{p}^{2}}$
$\displaystyle-\delta_{lm,l^{\prime}m^{\prime}}\frac{2^{2l+1}\Gamma(l+\frac{1}{2})\Gamma(l+\frac{3}{2})}{\pi}\frac{1}{(qr)^{2l+1}}|_{r\rightarrow
0},$ (71)
and
$\displaystyle\frac{C^{2}_{l}(\gamma)}{C^{2}_{l}(0)}\mathcal{M}^{(C,1st)}_{lm,l^{\prime}m^{\prime}}(\varepsilon)$
$\displaystyle=\frac{1}{2\mu
q^{2l+1}}\frac{1}{L^{6}}\sum_{\mathbf{p},\mathbf{p}^{\prime}\in\frac{2\pi\mathbf{n}}{L}}^{\mathbf{n}\in\mathbb{Z}^{3}}\frac{p^{l}Y_{lm}(\mathbf{\hat{p}})}{\varepsilon-\frac{\mathbf{p}^{2}}{2\mu}}\frac{4\pi
Z}{|\mathbf{p}-\mathbf{p}^{\prime}|^{2}}\frac{{p^{\prime}}^{l^{\prime}}Y^{*}_{l^{\prime}m^{\prime}}(\mathbf{\hat{p}}^{\prime})}{\varepsilon-\frac{\mathbf{p^{\prime}}^{2}}{2\mu}}$
$\displaystyle-\frac{\delta_{lm,l^{\prime}m^{\prime}}}{2\mu
q^{2l+1}}\Re\int\frac{d\mathbf{p}d\mathbf{p}^{\prime}}{(2\pi)^{6}}\frac{p^{l}Y_{lm}(\mathbf{\hat{p}})}{\varepsilon-\frac{\mathbf{p}^{2}}{2\mu}}\frac{4\pi
Z}{|\mathbf{p}-\mathbf{p}^{\prime}|^{2}}\frac{{p^{\prime}}^{l}Y^{*}_{lm}(\mathbf{\hat{p}}^{\prime})}{\varepsilon-\frac{\mathbf{p^{\prime}}^{2}}{2\mu}}.$
(72)
### III.2 Analytic solutions in a spherical hard wall trap
The rotational symmetry inside a hard-sphere trap is also well preserved, so
the angular orbital momentum is still a good quantum number and only the diagonal
elements of the Coulomb Green’s function contribute. The partial-wave Coulomb
Green’s function inside the hard sphere must be a combination of the regular and
irregular solutions of the Coulomb differential equation, $j_{l}^{(C)}(\gamma,qr)$
and $n_{l}^{(C)}(\gamma,qr)$, defined in Eq.(34) and Eq.(35) respectively.
In analogy with the hard-sphere trap without Coulomb interaction, see Eq.(54) in
Ref. Guo and Long (2021), the closed form of the Coulomb-modified Green’s
function inside the hard sphere is given by
$\displaystyle G_{l}^{(C,h.s.)}$
$\displaystyle(r,r^{\prime\prime};\varepsilon)=-2\mu
qj_{l}^{(C)}(\gamma,qr_{<})j_{l}^{(C)}(\gamma,qr_{>})$
$\displaystyle\times\left[\frac{n_{l}^{(C)}(\gamma,qR)}{j_{l}^{(C)}(\gamma,qR)}-\frac{n_{l}^{(C)}(\gamma,qr_{>})}{j_{l}^{(C)}(\gamma,qr_{>})}\right].$
(73)
The real part of the Coulomb Green’s function in infinite volume is given by
$\Re\left(G_{l}^{(C,\infty)}(r,r^{\prime\prime};\varepsilon)\right)=2\mu
qj_{l}^{(C)}(\gamma,qr_{<})n_{l}^{(C)}(\gamma,qr_{>}).$ (74)
Hence, after the UV cancellation in Eq.(59), the analytic expression of the
Coulomb-modified generalized zeta function for the hard-sphere trap is obtained:
$\mathcal{M}^{(C,h.s.)}_{lm,l^{\prime}m^{\prime}}(\varepsilon)=\delta_{lm,l^{\prime}m^{\prime}}\frac{C^{2}_{l}(0)}{C^{2}_{l}(\gamma)}\frac{n_{l}^{(C)}(\gamma,qR)}{j_{l}^{(C)}(\gamma,qR)}.$
(75)
The quantization condition in a hard-sphere trap in the presence of the Coulomb
interaction is thus given in closed form:
$\cot\delta^{(S)}_{l}(q)=\frac{C^{2}_{l}(0)}{C^{2}_{l}(\gamma)}\frac{n_{l}^{(C)}(\gamma,qR)}{j_{l}^{(C)}(\gamma,qR)},$
(76)
where $j_{l}^{(C)}(\gamma,qr)$ and $n_{l}^{(C)}(\gamma,qr)$ are defined in
Eq.(34) and Eq.(35) respectively.
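In the neutral limit $\gamma\to 0$ and for $l=0$, Eq.(76) reduces to $\cot\delta^{(S)}_{0}(q)=n_{0}(qR)/j_{0}(qR)=-\cot(qR)$, i.e. $\sin\left(qR+\delta^{(S)}_{0}(q)\right)=0$. The sketch below solves this condition by bisection for the illustrative (hypothetical) input $\delta^{(S)}_{0}(q)=-qa$, the scattering-length approximation, for which the spectrum is known in closed form, $q_{n}=n\pi/(R-a)$:

```python
import math

R, a = 1.0, 0.1                    # sphere radius and assumed scattering length

def f(q):
    delta_S = -q * a               # assumed short-range phase shift
    return math.sin(q * R + delta_S)

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection; assumes f changes sign on [lo, hi]."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

q1 = bisect(f, 3.0, 4.0)           # first nonzero level
print(q1, math.pi / (R - a))       # closed-form value pi/(R - a)
```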
### III.3 Summary
In summary, we have presented a general discussion of formulating
quantization conditions for trapped systems including the long-range Coulomb
interaction. Although the entire discussion is based on the non-perturbative LS
equation approach, in most cases the Coulomb-modified Green’s function
in the trap must be obtained either numerically or by perturbation expansion. In
special cases, such as the spherical hard-wall trap, the quantization condition
is obtained in closed form, Eq.(76).
###### Acknowledgements.
We thank Bingwei Long for fruitful discussions. P.G. also acknowledges support
from the Department of Physics and Engineering, California State University,
Bakersfield, CA. The work was supported in part by the National Science
Foundation under Grant No. NSF PHY-1748958.
## Appendix A Formal scattering theory with short-range and Coulomb
interactions
In this section, the formal scattering theory in the presence of both a short-
range interaction and a long-range Coulomb interaction is briefly reviewed;
the complete discussion can be found in Refs. Mott and Massey (1985);
Goldberger and Watson (1964). The connection to trapped systems is also briefly
discussed at a symbolic level.
### A.1 Coulomb force modified scattering amplitude in infinite volume
The infinite volume scattering amplitude in the presence of both Coulomb and
short-range nuclear interactions is defined by
$T_{\infty}=-\langle\Psi_{0}|(\hat{V}_{C}+\hat{V}_{S})|\Psi^{(+)}\rangle,$
(77)
where $|\Psi_{0}\rangle$ stands for the plane wave. The state $|\Psi^{(+)}\rangle$
is defined by the LS equation,
$|\Psi^{(\pm)}\rangle=|\Psi_{0}\rangle+\hat{G}_{0}(E\pm
i0)(\hat{V}_{C}+\hat{V}_{S})|\Psi^{(\pm)}\rangle,$ (78)
where
$\hat{G}_{0}(E\pm i0)=\frac{1}{E-\hat{H}_{0}\pm i0}.$ (79)
Using Eq.(78) and also the LS equation for the pure Coulomb interaction,
$\langle\Psi_{0}|=\langle\Psi^{(-)}_{C}|-\langle\Psi^{(-)}_{C}|\hat{V}_{C}\hat{G}_{0}(E+i0),$
(80)
the total infinite-volume scattering amplitude $T_{\infty}$ can be rewritten
as, see also Refs. Mott and Massey (1985); Goldberger and Watson (1964),
$T_{\infty}=-\langle\Psi^{(-)}_{C}|\hat{V}_{C}|\Psi_{0}\rangle-\langle\Psi^{(-)}_{C}|\hat{V}_{S}|\Psi^{(+)}\rangle.$
(81)
The first term in Eq.(81) is identified as the pure Coulomb scattering
amplitude,
$T_{C,\infty}=-\langle\Psi^{(-)}_{C}|\hat{V}_{C}|\Psi_{0}\rangle=-\langle\Psi_{0}|\hat{V}_{C}|\Psi^{(+)}_{C}\rangle.$
(82)
The partial-wave Coulomb amplitude is parameterized by the Coulomb phase shifts,
$T^{(C,\infty)}_{l}\propto\frac{e^{2i\delta^{(C)}_{l}}-1}{2i},$ (83)
where the Coulomb phase shift $\delta_{l}^{(C)}$ is defined in Eq.(37).
The second term in Eq.(81) is the result of the short-range interaction in the
presence of the Coulomb interaction. Using the LS equation
$|\Psi^{(+)}\rangle=|\Psi^{(+)}_{C}\rangle+\hat{G}_{C}(E+i0)V_{S}|\Psi^{(+)}\rangle,$
(84)
where
$\hat{G}_{C}(E\pm i0)=\frac{1}{E-\hat{H}_{0}-\hat{V}_{C}\pm i0},$ (85)
it can be shown rather straightforwardly that the second term satisfies
$\displaystyle-\langle\Psi^{(-)}_{C}|\hat{V}_{S}|\Psi^{(+)}\rangle$
$\displaystyle=-\langle\Psi^{(-)}_{C}|\hat{V}_{S}|\Psi^{(+)}_{C}\rangle-\langle\Psi^{(-)}_{C}|\hat{V}_{S}\hat{G}_{C}(E+i0)\hat{V}_{S}|\Psi^{(+)}\rangle.$
(86)
Hence, it is useful and more convenient to define a Coulomb-modified
scattering operator
$\hat{T}_{SC,\infty}|\Psi_{C}^{(+)}\rangle=-\hat{V}_{S}|\Psi^{(+)}\rangle,$
(87)
thus, the second term in Eq.(81) can now be written as
$-\langle\Psi^{(-)}_{C}|\hat{V}_{S}|\Psi^{(+)}\rangle=\langle\Psi^{(-)}_{C}|\hat{T}_{SC,\infty}|\Psi^{(+)}_{C}\rangle.$
(88)
According to Eq.(86), $\hat{T}_{SC,\infty}$ satisfies the operator equation
$\hat{T}_{SC,\infty}=-\hat{V}_{S}+\hat{V}_{S}\hat{G}_{C}(E+i0)\hat{T}_{SC,\infty}.$
(89)
The total scattering amplitude is now given by
$T_{\infty}=T_{C,\infty}+\langle\Psi^{(-)}_{C}|\hat{T}_{SC,\infty}|\Psi^{(+)}_{C}\rangle.$
(90)
Given that the symbolic solution for the $\hat{T}_{SC,\infty}$ operator is
$\hat{T}^{-1}_{SC,\infty}=-\hat{V}_{S}^{-1}+\hat{G}_{C}(E+i0),$ (91)
and
$\langle\Psi^{(-)}_{C}|\Psi^{(+)}_{C}\rangle=1+2iT_{C,\infty}=S_{C,\infty}\propto
e^{2i\delta^{(C)}_{l}}$ (92)
is the pure Coulomb $S$-matrix, the partial-wave expansion of
$\langle\Psi^{(-)}_{C}|\hat{T}_{SC,\infty}|\Psi^{(+)}_{C}\rangle$ is
conventionally parameterized by both the Coulomb phase shift $\delta_{l}^{(C)}$
and the short-range phase shift $\delta^{(S)}_{l}$, see Refs.
Mott and Massey (1985); Goldberger and Watson (1964),
$\langle\Psi^{(-)}_{C}|\hat{T}_{SC,\infty}|\Psi^{(+)}_{C}\rangle\propto
e^{2i\delta^{(C)}_{l}}\frac{e^{2i\delta^{(S)}_{l}}-1}{2i}.$ (93)
Therefore, the partial-wave total infinite-volume scattering amplitude is
defined by a total phase shift,
$\delta_{l}=\delta^{(S)}_{l}+\delta^{(C)}_{l},$ (94)
and
$T^{(\infty)}_{l}\propto\frac{e^{2i\delta_{l}}-1}{2i}=\frac{e^{2i\delta^{(C)}_{l}}-1}{2i}+e^{2i\delta^{(C)}_{l}}\frac{e^{2i\delta^{(S)}_{l}}-1}{2i}.$
(95)
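The operator relations above can be checked in a scalar toy model: fixed-point iteration of Eq.(89), $\hat{T}_{SC,\infty}=-\hat{V}_{S}+\hat{V}_{S}\hat{G}_{C}\hat{T}_{SC,\infty}$, must converge to the closed form implied by Eq.(91), $\hat{T}_{SC,\infty}=(-\hat{V}_{S}^{-1}+\hat{G}_{C})^{-1}$, whenever $|V_{S}G_{C}|<1$. The numbers are arbitrary:

```python
V_S, G_C = 0.5, 0.8                   # arbitrary scalars with |V_S * G_C| < 1

T_closed = 1.0 / (-1.0 / V_S + G_C)   # scalar version of Eq.(91)

T = 0.0
for _ in range(200):                  # fixed-point iteration of Eq.(89)
    T = -V_S + V_S * G_C * T

print(T, T_closed)
```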
### A.2 Charged particles in a trap in presence of Coulomb force
In the trap, Eq.(89) is modified to
$\hat{T}_{SC,t}=-\hat{V}_{S}+\hat{V}_{S}\hat{G}_{C,t}(E+i0)\hat{T}_{SC,t}\,,$
(96)
where
$\hat{G}_{C,t}(E\pm i0)=\frac{1}{E-\hat{H}_{t}-\hat{V}_{C}\pm i0},$ (97)
is the Coulomb Green’s function in the trap, and
$\hat{H}_{t}=\hat{H}_{0}+\hat{V}_{t}$
is the trap Hamiltonian operator. The quantization condition including the
Coulomb interaction is thus given by
$\det\left[V_{S}^{-1}-G_{C,t}(E+i0)\right]=0.$ (98)
### A.3 Quantization condition including Coulomb interaction
Eliminating $V^{-1}_{S}$ between Eq.(91) and Eq.(98), the quantization condition
Eq.(98) can be rewritten as
$\det\left[T_{SC,\infty}^{-1}+G_{C,t}(E+i0)-G_{C}(E+i0)\right]=0.$ (99)
In general, the Coulomb Green’s function in the trap is obtained from the Dyson
equation,
$\hat{G}_{C,t}(E)=\hat{G}_{t}(E)+\hat{G}_{t}(E)\hat{V}_{C}\hat{G}_{C,t}(E),$
(100)
where
$\hat{G}_{t}(E\pm i0)=\frac{1}{E-\hat{H}_{t}\pm i0}.$ (101)
In practice, the Coulomb effect may be treated as a perturbation by summing all
the ladder diagrams generated by Coulomb exchanges,
$\hat{G}_{C,t}(E)=\hat{G}_{t}(E)\sum_{n=0}^{\infty}\left(\hat{V}_{C}\hat{G}_{t}(E)\right)^{n}.$
# Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents

*This work has been supported in part by ARO Grants W911NF1910069 and W911NF1910315, and Intel. Code and additional materials are available at: https://gamma.umd.edu/t2g.*

Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan, Aniket Bera, Dinesh Manocha
University of Maryland, College Park, MD 20742, USA
###### Abstract
We present Text2Gestures, a transformer-based learning method to interactively
generate emotive full-body gestures for virtual agents aligned with natural
language text inputs. Our method generates emotionally expressive gestures by
utilizing the relevant biomechanical features for body expressions, also known
as affective features. We also consider the intended task corresponding to the
text and the target virtual agents’ intended gender and handedness in our
generation pipeline. We train and evaluate our network on the MPI Emotional
Body Expressions Database and observe that our network produces state-of-the-
art performance in generating gestures for virtual agents aligned with the
text for narration or conversation. Our network can generate these gestures at
interactive rates on a commodity GPU. We conduct a web-based user study and
observe that around 91% of participants rated our generated gestures as at
least plausible on a five-point Likert scale. The emotions perceived by the
participants from the gestures are also strongly positively correlated with
the corresponding intended emotions, with a minimum Pearson coefficient of
0.77 in the valence dimension.
Computing methodologies → Virtual reality; Computing methodologies → Intelligent agents; Computer systems organization → Neural networks
## 1 Introduction
As the world increasingly uses digital and virtual platforms for everyday
communication and interactions, there is a heightened need to create highly
realistic virtual agents endowed with social and emotional intelligence.
Interactions between humans and virtual agents are being used to augment
traditional human-human interactions in different applications, including
online learning [37, 39, 59], virtual interviewing and counseling [6, 16],
virtual social interactions [56, 24, 35, 40], and large-scale virtual worlds
[50]. Human-human interactions rely heavily on a combination of verbal
communications (the text), inter-personal relationships between the people
involved (the context), and more subtle non-verbal face and body expressions
during communication (the subtext) [41, 32]. While context is often
established at the beginning of interactions, virtual agents in social VR
applications need to align their text with their subtext throughout the
interaction, thereby improving the human users’ sense of presence in the
virtual environment. Gesticulation is an integral component in subtext, where
humans use patterns of movement for hands, arms, heads, and torsos to convey a
wide range of intent, behaviors, and emotions [42]. In this work, we
investigate the problem of aligning emotionally expressive gestures with the
text to generate virtual agents’ actions that result in natural interactions
with human users.
Current game engines and animation engines can generate human-like movements
for virtual agents, including head poses, hand gestures, and torso movements
[61, 2]. However, aligning these movements with a virtual agent’s associated
speech or text transcript is more challenging. Traditional approaches such as
hand-crafting animations or collecting and transferring context-specific
gestures through rotoscoping or motion capture look natural [49, 66], but need
to be manually designed for every new gesture. However, virtual agents
performing live social interactions with humans in VR need to adapt their
gestures to their words and current social context in real-time. As a result,
prior approaches based on pre-generated animations or motion specifications
are limited, and we need interactive methods to generate plausible gestures.
Existing approaches for interactive speech-aligned gesture generation learn
mappings between speech signals and the generated gesture sequences [33, 2].
In contrast to these speech-based methods, our goal is to align the gestures
directly with the natural language text transcripts. This eliminates the need
to have speech pre-recorded by humans or synthesized by machines, which has a higher
production cost. Prior works on generating gestures aligned with text [69]
have leveraged the well-known sequence-to-sequence modeling network, which is
efficient at performing a variety of sequence-to-sequence prediction tasks.
These methods have only considered arms and head motions and are limited to
generating gestures with small variations in categorical emotions such as
happy, angry, and sad.
However, as evidenced by works on adding emotions through facial and vocal
expressions [18, 60, 67], emotional expressiveness adds to the realism of
virtual agents. Studies in psychology and affective computing show that body
expressions also contain useful cues for perceived emotions [3, 7, 9], and
often help disambiguate the emotions perceived from facial and vocal cues [4,
46, 47]. These body expressions are composed of biomechanical features known
as affective features. Common affective features include, among others, the
rate of arm swings, stride lengths, shoulder and spine postures, and head
jerks [28]. More recent approaches for generating virtual agents with gait-
based body expressions have leveraged the relevant gait-based affective
features to improve the perceived naturalness of the animations [55, 54, 7].
Following these works, we aim to generate body gestures for virtual agents in
social VR settings to either narrate text-based content to human participants
or continue a text-based conversation with human participants. We use
affective features to make the gestures emotionally expressive, such that the
human participants can perceive appropriate emotions from the virtual agents
based on the natural language text.
Main Results: We present an end-to-end trainable generative network that
produces emotive body gestures aligned with natural language text. We design
our method for interactive applications, where a virtual agent narrates lines
or takes part in a conversation. To this end, we make use of the transformer
network [64], and extend current approaches to work with gestures for virtual
agents in 3D. We also adapt the gestures based on narration or conversation
and the intended gender and handedness (dominance of left-hand or right-hand
in gesticulation) of the virtual agents. We also make the gestures emotionally
expressive by utilizing the relevant gesture-based affective features of the
virtual agents.
To summarize, our contributions are four-fold:
* •
A transformer-based network that interactively takes in text one sentence at a
time and generates 3D pose sequences for virtual agents corresponding to
gestures aligned with that text.
* •
Conditioning the generation process to follow the intended acting task of
narration or conversation and the virtual agents’ intended gender and
handedness.
* •
Considering the intended emotion in the text to generate emotionally
expressive gestures.
* •
A web study with 600 total responses to evaluate the quality of our generated
gestures compared to motion-captured sequences and the emotional
expressiveness of our generated gestures.
Based on our experiments, we find that our network has state-of-the-art
performance for generating gestures aligned with text compared to ground-truth
sequences in a large-scale motion capture database. We can generate these
gestures at an interactive rate of 312.5 fps using an Nvidia GeForce GTX
1080Ti GPU. Based on our user study, we also find that the emotions perceived
by the participants from the gestures are strongly positively correlated with
the corresponding intended emotions of the gestures, with a minimum Pearson
coefficient of 0.77 in the valence dimension. Moreover, around 91% of
participants found our generated gestures to be at least plausible on a
five-point Likert scale.
## 2 Related Work
This section summarizes studies exploring how different emotions are perceived
from body gestures and how they have been utilized to generate emotive virtual
agents.
We also review prior work on generating human body gestures in graphics and
VR, particularly those that align the gestures with speech and text content.
We focus mostly on data-driven approaches here because we base our work on a
similar foundation, and refer the interested reader to Wagner et al.’s
extensive survey [66] for the more classical rule-based approaches. The main
limitation of such rule-based approaches is that their range of gestures is
confined to the designed set of gestures. Hence, they require gestures to be
manually designed for every novel speech or text input.
### 2.1 Perceiving Emotions from Body Expressions
Studies in psychology show that body expressions, including gestures, are
better suited than facial and vocal cues to express and perceive emotions
varying in arousal and dominance, such as anger, relief, fear, and pride [15,
22]. Body expressions are also useful for disambiguating between pairs of
emotions such as fear or anger [43], and fear or happiness [63]. Follow-up
studies in affective computing [31, 28, 11, 5] have identified sets of
biomechanical features from body expressions, known as affective features, on
which human observers focus when perceiving these different emotions from
gestures. For example, rapid arm swings can indicate anger, an expanded upper
body can indicate pride, and slouching shoulders can indicate fear or sadness.
In our work, we use such affective features observable from gestures to emote
our generated virtual agents.
### 2.2 Generating Emotive Virtual Agents
Current approaches to endow virtual agents with emotional expressiveness make
use of a number of modalities, including verbal communication [13, 60], face
movements [29, 18], body gestures [27], and gaits [55]. In the context of
generating emotional expressions aligned with speech, Chuah et al. [14]
leveraged a dataset of words mapped to emotive facial expressions to generate
virtual agents with basic emotions automatically. DeVault et al. [16]
developed a full-fledged virtual human counselor, using a pre-built corpus of
mappings between mental states and body expressions to make their virtual
agent appropriately expressive. In contrast to these approaches, we build a
generalizable data-driven mapping to body gestures from a more diverse range
of intended emotions associated with text transcripts, such that we can
generate appropriately expressive gestures for out-of-dataset text sentences.
### 2.3 Generating Gestures Aligned with Speech and Text
There has been extensive deep-learning-based work on generating human body
gestures that align with speech content in the recent past [12]. Levine et al.
[36] used a hidden Markov model to learn latent mappings between speech and
gestures. Hasegawa et al. [23] used recurrent neural networks to predict 3D
pose sequences for gestures from input speech. More recently, Kucherenko et
al. [33] trained autoencoders to learn latent representations for the speech
and the gesture data and then learned mappings between the two to generate
gestures that are less sensitive to noise in the training data. By contrast,
Alexanderson et al. [2] learned invertible sub-transformations between speech
and gesture spaces to stochastically generate a set of best-fitting gestures
corresponding to the speech. Other approaches have also incorporated
individual styles into gestures [20], added multiple adversarial losses to
make the generated gestures look more realistic [19], and even added
prototypical rule-based behaviors such as head nods and hand waves based on
the discourse [58]. These have culminated into works such as generating
gestures for multiple speakers through style-transfer [1], and semantic-aware
gesture generation from speech [34].
Our approach is complementary to these approaches in that we learn mappings
from the text transcripts of speech to gestures. This eliminates the noise in
speech signals and helps us focus only on the relevant content and context.
Learning from the text also enables us to focus on a broader range of
gestures, including iconic, deictic, and metaphoric gestures [42]. Our work is
most closely related to that of Yoon et al. [69]. They learn upper body
gestures as PCA-based, low-dimensional pose features, corresponding to text
transcripts from a dataset of TED-talk videos, then heuristically map these 3D
gestures to an NAO robot. They have also followed up this work by generating
upper-body gestures aligned with the three modalities of speech, text
transcripts, and person identity [68]. On the other hand, we learn to map text
transcripts to 3D pose sequences corresponding to semantic-aware, full-body
gestures of more human-like virtual agents using an end-to-end trainable
transformer network and blend in emotional expressiveness.
### 2.4 Generating Stylistic Human Body Motions
Generating speech- or text-aligned gestures with emotional expressiveness can
be considered a sub-problem in generating stylistic human body motions,
including facial motions, head motions, and locomotion. Existing approaches on
face motions include generating lip movements and other face-muscle motions
aligned with speech, using either recurrent neural networks [62] or
convolutional networks [18]. Methods for generating head motions that convey
the pace and intensity of speech have explored neural network architectures
based on autoencoders [21] and generative adversarial networks [57]. Methods
to generate stylistic locomotion are based on convolutional networks [26],
parametric phase functions [25], and deeply learned phase functions [61] for
different styles of walking. Recent approaches have also incorporated gait-
based affective features to generate emotionally expressive walking [53, 7,
8]. Moreover, there has been considerable progress in generating images and
videos of body motions based on textual descriptions of moments and actions
[38, 70].
In contrast, we aim to generate emotionally expressive gestures at interactive
rates that correspond to text sentences. The space of gesture motions we
explore is also different from the space of motions corresponding to
locomotion, head motions, or facial muscle motions. Although there is some
overlap with the space of head motions [21, 57], the corresponding methods
have not been extended to deal with full-body motions.
## 3 Transforming Text to Gestures
Given a natural language text sentence associated with an acting task of
narration or conversation, an intended emotion, and attributes of the virtual
agent, including gender and handedness, our goal is to generate the virtual
agent’s corresponding body gestures. In other words, we aim to generate a
sequence of relative 3D joint rotations $\mathcal{Q}^{*}$ underlying the poses
of a virtual agent, corresponding to a sequence of input words $\mathcal{W}$,
and subject to the acting task $A$ and the intended emotion $E$ based on the
text, and the gender $G$ and the handedness $H$ of the virtual agent. We
therefore have
$\mathcal{Q}^{*}=\arg\max_{\mathcal{Q}}\textrm{Prob}\left[\mathcal{Q}|\mathcal{W};A,E,G,H\right].$
(1)
### 3.1 Representing Text
Following standard practices in NLP tasks, we represent the word at each
position $s$ in the input sentence
$\mathcal{W}=\begin{bmatrix}w_{1}&\dots&w_{s}&\dots&w_{T_{\textrm{sen}}}\end{bmatrix}$,
with $T_{\textrm{sen}}$ being the maximum sentence length, using word
embeddings $w_{s}\in\mathbb{R}^{300}$. We obtain the word embeddings using the
GloVe model pre-trained on the Common Crawl corpus [52]. We opt for GloVe
based on our preliminary experiments, where it marginally outperformed other
similar-dimensional embedding models such as Word2Vec [45] and FastText [10],
and had similar performance as much higher dimensional embedding models, e.g.,
BERT [17]. We demarcate the start and the end of sentences using special start
of sequence (SoS) and end of sequence (EoS) vectors that are pre-defined by
GloVe.
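The embedding step above can be sketched as follows. The toy GloVe table, the SoS/EoS vectors, and the maximum sentence length here are illustrative stand-ins; the actual model loads the pre-trained 300-dimensional Common Crawl vectors:

```python
import numpy as np

EMB_DIM = 300  # GloVe Common Crawl dimensionality used in the paper
T_SEN = 8      # illustrative maximum sentence length (hypothetical)

def embed_sentence(words, glove, sos, eos, t_sen=T_SEN):
    """Map a tokenized sentence to a (t_sen, EMB_DIM) matrix:
    SoS vector, word embeddings, EoS vector, then zero padding."""
    rows = [sos] + [glove.get(w.lower(), np.zeros(EMB_DIM)) for w in words] + [eos]
    rows = rows[:t_sen]                               # truncate overlong input
    pad = [np.zeros(EMB_DIM)] * (t_sen - len(rows))   # pad short input
    return np.stack(rows + pad)

# Toy vocabulary standing in for the pre-trained GloVe vectors.
rng = np.random.default_rng(0)
glove = {w: rng.normal(size=EMB_DIM) for w in ["the", "proud", "king"]}
sos, eos = rng.normal(size=EMB_DIM), rng.normal(size=EMB_DIM)

W = embed_sentence(["The", "proud", "king"], glove, sos, eos)
print(W.shape)  # (8, 300)
```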
Figure 1: Directed pose graph. Our pose graph is a directed tree consisting of
23 joints, with the root joint as the root node of the tree, and the end-
effector joints (head, wrists, toes) as the leaf nodes of the tree. We
manipulate the appropriate joints to generate emotive gestures.
### 3.2 Representing Gestures
Following prior works on human motion generation [51], we represent a gesture
as a sequence of poses or configurations of the 3D body joints. These include
body expressions as well as postures. We represent each pose with quaternions
denoting 3D rotations of each joint relative to its parent in the directed
pose graph (Fig. 1). Specifically, at each time step $t$ in the sequence
$\mathcal{Q}=\begin{bmatrix}q_{1}&\dots&q_{t}&\dots&q_{T_{\textrm{ges}}}\end{bmatrix}$,
with $T_{\textrm{ges}}$ being the maximum gesture length, we represent the
pose using flattened vectors of unit quaternions
$q_{t}=\begin{bmatrix}\dots&q_{j,t}^{\top}&\dots\end{bmatrix}^{\top}\in\mathbb{H}^{J}$.
Each set of $4$ entries in the flattened vector $q_{t}$, represented as
$q_{j,t}$, is the rotation on joint $j$ relative to its parent in the directed
pose graph, and $J$ is the total number of joints. We choose quaternions over
other representations to represent rotations as quaternions are free of the
gimbal lock problem [51]. To demarcate the start and the end of each gesture
sequence, we define our start-of-sequence (SoS) and end-of-sequence (EoS)
poses. Both of these are idle sitting poses with decorative changes in the
positions of the end-effector joints, the root, wrists and the toes.
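A minimal sketch of this pose representation, including the unit-norm projection that the paper later applies at the decoder output so that predictions are valid rotations (variable names are illustrative):

```python
import numpy as np

J = 23  # joints in the paper's directed pose graph

def normalize_pose(q_flat):
    """Project a raw (4*J,) vector onto valid unit quaternions,
    one 4-entry block per joint (mirrors the decoder-output step
    that makes predicted values valid rotations)."""
    q = q_flat.reshape(J, 4)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    return q.reshape(-1)

raw = np.random.default_rng(1).normal(size=4 * J)  # unnormalized prediction
pose = normalize_pose(raw)
norms = np.linalg.norm(pose.reshape(J, 4), axis=1)
print(np.allclose(norms, 1.0))  # True
```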
### 3.3 Representing the Agent Attributes
We categorize the agent attributes into two types: attributes depending on the
input text and attributes depending on the virtual agent.
#### 3.3.1 Attributes Depending on Text
In this work, we consider two attributes that depend on text, the acting task,
and the intended emotion.
##### Acting Task
We consider two acting tasks, narration and conversation. In narration, the
agent narrates lines from a story to a listener. The gestures, in this case,
are generally more exaggerated and theatrical. In conversation, the agent uses
body gestures to supplement the words spoken in conversation with another
agent or human. The gestures are subtler and more reserved. In our
formulation, we represent the acting task as a two-dimensional one-hot vector
$A\in\left\\{0,1\right\\}^{2}$, to denote either narration or conversation.
##### Intended Emotion
We consider each text sentence to be associated with an intended emotion,
given as a categorical emotion term such as joy, anger, sadness, pride, etc.
While the same text sentence can be associated with multiple emotions in
practice, in this work, we limit ourselves to sentences associated with only
one emotion, owing primarily to the limitations in the dataset available for
training. We use the NRC-VAD lexicon [48] to transform these categorical
emotions associated with the text to the VAD space. The VAD space [44] is a
well-known representation in affective computing to model emotions. It maps an
emotion as a point in a three-dimensional space spanned by valence (V),
arousal (A), and dominance (D). Valence is a measure of the pleasantness in
the emotion (e.g., happy vs. sad), arousal is a measure of how active or
excited the subject expressing the emotion is (e.g., angry vs. calm), and
dominance is a measure of how much the subject expressing the emotion feels
“in control” of their actions (e.g., proud vs. remorseful). Thus, in our
formulation, the intended emotion $E\in\left[0,1\right]^{3}$, where the values
are coordinates in the normalized VAD space.
#### 3.3.2 Attributes Depending on the Agent
We consider two attributes that depend on the agent to be animated, its gender
$G$, and handedness $H$. In our work, gender $G\in\left\\{0,1\right\\}^{2}$ is
limited to a one-hot representation denoting either female or male, and
handedness $H\in\left\\{0,1\right\\}^{2}$ is a one-hot representation
indicating whether the agent is left-hand dominant or right-hand dominant.
Male and female agents typically have differences in body structures (e.g.,
shoulder-to-waist ratio, waist-to-hip ratio). Handedness determines which hand
dominates, especially when gesticulating with one hand (e.g., beat gestures,
deictic gestures). Each agent has exactly one assigned gender and one assigned
handedness.
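The attribute encoding above can be sketched as below. The VAD triple is a placeholder for illustration, not an actual NRC-VAD lexicon entry, and the category orderings inside the one-hot vectors are assumptions:

```python
import numpy as np

def attribute_vector(task, vad, gender, handedness):
    """Build the conditioning vector [A; E; G; H]: one-hot acting task,
    VAD emotion coordinates in [0, 1]^3, one-hot gender and handedness."""
    A = np.eye(2)[["narration", "conversation"].index(task)]
    G = np.eye(2)[["female", "male"].index(gender)]
    H = np.eye(2)[["left", "right"].index(handedness)]
    return np.concatenate([A, np.asarray(vad, dtype=float), G, H])

# Placeholder VAD triple for "proud" (NOT actual NRC-VAD lexicon values).
v = attribute_vector("narration", (0.9, 0.6, 0.9), "female", "right")
print(v.shape)  # (9,)
```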
### 3.4 Using the Transformer Network
Modeling the input text and output gestures as sequences shown in Secs. 3.1
and 3.2, the optimization in Eq. 1 becomes a sequence transduction problem.
We, therefore, approach this problem using a transformer-based network. We
briefly revisit the transformer as originally introduced by Vaswani et al.
[64], and describe how we modify it for our transduction problem.
The transformer network follows the traditional encoder-decoder architecture
for sequence-to-sequence modeling. However, instead of using sequential chains
of recurrent memory networks, or the computationally expensive convolutional
networks, the transformer uses a multi-head self-attention mechanism to model
the dependencies between the elements at different temporal positions in the
input and target sequences.
Figure 2: Text2Gestures Network. Our network takes in sentences of natural
language text and transforms them to word embeddings using the pre-trained
GloVe model [52]. It then uses a transformer encoder to transform the word
embeddings to latent representations, appends the agent attributes to these
latent representations, and transforms the combined representations into
encoded features. The transformer decoder takes in these encoded features and
the past gesture history to predict gestures for the subsequent time steps. At
each time step, we represent the gesture by the set of rotations on all the
body joints relative to their respective parents in the pose graph at that
time step.
The attention mechanism is represented as a sum of values from a dictionary of
key-value pairs, where the weight or attention on each value is determined by
the relevance of the corresponding key to a given query. Thus, given a set of
$m$ queries $Q\in\mathbb{R}^{m\times k}$, a set of $n$ keys
$K\in\mathbb{R}^{n\times k}$, and the corresponding set of $n$ values
$V\in\mathbb{R}^{n\times v}$ (for some dimensions $k$ and $v$), and using the
scaled dot-product as a measure of relevance, we can write,
$\textrm{Att}\left(Q,K,V\right)=\textrm{softmax}\left(\frac{QK^{\top}}{k}\right)V,$
(2)
where the softmax is used to normalize the weights. In the case of self-
attention (SA) in the transformer, $Q$, $K$, and $V$ all come from the same
sequence. In the transformer encoder, the self-attention operates on the input
sequence $\mathcal{W}$. Since the attention mechanism does not respect the
relative positions of the elements in the sequence, the transformer network
uses a positional encoding scheme to signify the position of each element in
the sequence, prior to using the attention. Also, in order to differentiate
between the queries, keys, and values, it projects $\mathcal{W}$ into a common
space using three independent fully-connected layers consisting of trainable
parameters $W_{Q,enc}$, $W_{K,enc}$, and $W_{V,enc}$. Thus, we can write the
self-attention in the encoder, $\textrm{SA}_{enc}$, as
$\textrm{SA}_{enc}\left(\mathcal{W}\right)=\textrm{softmax}\left(\frac{\mathcal{W}W_{Q,enc}W_{K,enc}^{\top}\mathcal{W}^{\top}}{k}\right)\mathcal{W}W_{V,enc}.$
(3)
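The attention of Eq. (2) can be sketched directly in NumPy. Note that the paper scales the dot products by $k$ itself, whereas the original transformer divides by $\sqrt{k}$; this sketch follows the text as written:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Eq. (2): Att(Q, K, V) = softmax(Q K^T / k) V, with k the key
    dimension. The paper scales by k itself; the original transformer
    uses sqrt(k). We follow the text as written."""
    k = Q.shape[-1]
    return softmax(Q @ K.T / k) @ V

rng = np.random.default_rng(2)
Q = rng.normal(size=(5, 16))   # 5 queries of dimension k = 16
K = rng.normal(size=(7, 16))   # 7 keys
V = rng.normal(size=(7, 32))   # 7 values of dimension v = 32
out = attention(Q, K, V)
print(out.shape)  # (5, 32)
```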
The multi-head (MH) mechanism enables the network to jointly attend to
different projections for different parts in the sequence, i.e.,
$\textrm{MH}\left(\mathcal{W}\right)=\textrm{concat}\left(\textrm{SA}_{enc,1}\left(\mathcal{W}\right),\dots,\textrm{SA}_{enc,h}\left(\mathcal{W}\right)\right)W_{\textrm{concat}},$
(4)
where $h$ is the number of heads, $W_{\textrm{concat}}$ is the set of
trainable parameters associated with the concatenated representation, and each
self-attention $i$ in the concatenation consists of its own set of trainable
parameters $W_{Q,i}$, $W_{K,i}$, and $W_{V,i}$.
The transformer encoder then passes the MH output through two fully-connected
(FC) layers. It repeats the entire block consisting of (SA–MH–FC) $N$ times
and uses the residuals around each layer in the blocks during backpropagation.
We denote the final encoded representation of the input sequence $\mathcal{W}$
as $F_{\mathcal{W}}$.
To meet the given constraints on the acting task $A$, intended emotion $E$,
gender $G$, and handedness $H$ of the virtual agent, we append these variables
to $F_{\mathcal{W}}$ and pass the combined representation through two fully-
connected layers with trainable parameters $W_{FC}$ to obtain feature
representations
$\bar{F_{\mathcal{W}}}=FC\left(\begin{bmatrix}F_{\mathcal{W}}^{\top}&A^{\top}&E^{\top}&G^{\top}&H^{\top}\end{bmatrix}^{\top};W_{FC}\right).$
(5)
The transformer decoder operates similarly using the target sequence
$\mathcal{Q}$, but with some important differences. First, it uses a masked
multi-head (MMH) self-attention on the sequence, such that the attention for
each element covers only those elements appearing before it in the sequence,
i.e.,
$\textrm{MMH}\left(\mathcal{Q}\right)=\textrm{concat}\left(\textrm{SA}_{dec,1}\left(\mathcal{Q}\right),\dots,\textrm{SA}_{dec,h}\left(\mathcal{Q}\right)\right)W_{\textrm{concat}}.$
(6)
This ensures that the attention mechanism is causal and therefore usable at
test time, when the full target sequence is not known a priori. Second, it uses
the output of the MMH operation as the key and the value, and the encoded
representation $\bar{F_{\mathcal{W}}}$ as the query, in an additional multi-
head self-attention layer without any masking, i.e.,
$\textrm{MH}\left(\bar{F_{\mathcal{W}}},\mathcal{Q}\right)=\textrm{concat}\left(\underbrace{\textrm{Att}_{dec,1}\left(\bar{F_{\mathcal{W}}},\textrm{MMH}\left(\mathcal{Q}\right),\textrm{MMH}\left(\mathcal{Q}\right)\right),\dots}_{h\textrm{
entries}}\right)W_{\textrm{concat}}$.
(7)
It then passes the output of this multi-head self-attention through two fully-
connected layers to complete the block. Thus, one block of the decoder is
(SA–MMH–SA–MH–FC), and the transformer network uses $N$ such blocks. It also
uses positional encoding of the target sequence upfront and uses the residuals
around each layer in the blocks during backpropagation.
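The masking in the decoder's MMH layers amounts to applying a look-ahead mask to the attention scores before the softmax, as in this illustrative sketch (masked positions receive weight exactly zero):

```python
import numpy as np

def causal_mask(t):
    """Lower-triangular mask: position i may attend only to j <= i."""
    return np.tril(np.ones((t, t), dtype=bool))

def masked_self_attention(X):
    """Self-attention with the decoder's look-ahead mask: scores for
    future positions are set to -inf, so their softmax weight is 0."""
    k = X.shape[-1]
    scores = X @ X.T / k
    scores = np.where(causal_mask(len(X)), scores, -np.inf)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = e / e.sum(axis=-1, keepdims=True)
    return w @ X, w

X = np.random.default_rng(3).normal(size=(6, 16))
out, w = masked_self_attention(X)
print(np.allclose(np.triu(w, 1), 0.0))  # True: no attention to the future
```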
## 4 Training the Transformer-Based Network
Fig. 2 shows the overall architecture of our transformer-based network. The
word embedding layer transforms the words into feature vectors using the pre-
trained GloVe model. The encoder and the decoder respectively consist of $N=2$
blocks of (SA–MH–FC) and (SA–MMH–SA–MH–FC). We use $h=2$ heads in the multi-
head attention. The set of FC layers in each of the blocks maps to 200-dim
outputs. At the output of the decoder, we normalize the predicted values so
that they represent valid rotations. We train our network using the sum of
three losses: the angle loss, the pose loss, and the affective loss. We
compute these losses between the gesture sequences generated by our network
and the original motion-captured sequences available as ground-truth in the
training dataset.
### 4.1 Angle Loss for Smooth Motions
We denote the ground-truth relative rotation of each joint $j$ at time step
$t$ as the unit quaternion $q_{j,t}$, and the corresponding rotation predicted
by the network as $\hat{q}_{j,t}$. If needed, we correct $\hat{q}_{j,t}$ to
have the same orientation as $q_{j,t}$. Then we measure the angle loss between
each such pair of rotations as the squared difference of their Euler angle
representations, modulo $\pi$. We use Euler angles rather than the quaternions
in the loss function as it is straightforward to compute closeness between
Euler angles using Euclidean distances. To ensure that the motions look smooth
and natural, we also consider the squared difference between the derivatives
of the ground-truth and the predicted rotations, computed at successive time
steps. We write the net angle loss $\mathcal{L}_{\textrm{ang}}$ as
$\begin{split}\mathcal{L}_{\textrm{ang}}=&\sum_{t}\sum_{j}\left(\textrm{Eul}\left(q_{j,t}\right)-\textrm{Eul}\left(\hat{q}_{j,t}\right)\right)^{2}+\\ &\left(\textrm{Eul}\left(q_{j,t}\right)-\textrm{Eul}\left(q_{j,t-1}\right)-\textrm{Eul}\left(\hat{q}_{j,t}\right)+\textrm{Eul}\left(\hat{q}_{j,t-1}\right)\right)^{2}.\end{split}$
(8)
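Eq. (8) can be sketched as below. The Euler-angle convention is an assumption (the paper does not fix one), and the modulo-$\pi$ wrap is omitted for simplicity:

```python
import numpy as np

def quat_to_euler(q):
    """Unit quaternions (..., 4) in (w, x, y, z) order to roll-pitch-yaw
    Euler angles (aerospace convention; an assumption of this sketch)."""
    w, x, y, z = q[..., 0], q[..., 1], q[..., 2], q[..., 3]
    roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x**2 + y**2))
    pitch = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y**2 + z**2))
    return np.stack([roll, pitch, yaw], axis=-1)

def angle_loss(q_true, q_pred):
    """Eq. (8): squared Euler-angle error plus a squared error on the
    per-step angle differences; inputs have shape (T, J, 4).
    (The modulo-pi wrap of the paper is omitted in this sketch.)"""
    e_t, e_p = quat_to_euler(q_true), quat_to_euler(q_pred)
    pos = ((e_t - e_p) ** 2).sum()
    vel = ((np.diff(e_t, axis=0) - np.diff(e_p, axis=0)) ** 2).sum()
    return pos + vel

# Identity rotations everywhere: the loss must vanish.
q_id = np.zeros((10, 23, 4)); q_id[..., 0] = 1.0
print(angle_loss(q_id, q_id))  # 0.0
```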
### 4.2 Pose Loss for Joint Trajectories
The angle loss only penalizes the absolute differences between the ground-
truth and the predicted joint rotations and does not explicitly constrain the
resulting poses to follow the same trajectory as the ground-truth at all time
steps. To this end, we compute the squared norm difference between the ground-
truth and the predicted joint positions at all time steps. Given the relative
joint rotations and the offset $o_{j}$ of every joint $j$ from its parent, we
can easily compute all the joint positions using forward kinematics (FK).
Thus, we write the pose loss $\mathcal{L}_{\textrm{pose}}$ as
$\mathcal{L}_{\textrm{pose}}=\sum_{t}\sum_{j}\lVert\textrm{FK}\left(q_{j,t},o_{j}\right)-\textrm{FK}\left(\hat{q}_{j,t},o_{j}\right)\rVert^{2}.$
(9)
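Eq. 9 relies on forward kinematics. A minimal planar sketch (2D rotation angles in place of quaternions, a single chain instead of a full skeleton) conveys the idea; the function names are illustrative assumptions.

```python
import math

def fk_positions(angles, offsets):
    """Minimal planar FK sketch: each joint's world rotation is the sum of
    its ancestors' relative angles; positions accumulate rotated offsets.
    angles, offsets: per-joint relative angle (rad) and (x, y) bone offset
    along a single chain rooted at the origin."""
    pos, world = (0.0, 0.0), 0.0
    out = []
    for a, (ox, oy) in zip(angles, offsets):
        world += a
        c, s = math.cos(world), math.sin(world)
        pos = (pos[0] + c * ox - s * oy, pos[1] + s * ox + c * oy)
        out.append(pos)
    return out

def pose_loss(gt_angles, pred_angles, offsets):
    """Eq. 9 analogue: squared distance between FK joint positions."""
    loss = 0.0
    for p, q in zip(fk_positions(gt_angles, offsets),
                    fk_positions(pred_angles, offsets)):
        loss += (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return loss
```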
Figure 3: Variance in emotive gestures. Emotions with high arousal (e.g.,
amused) generally have rapid limb movements, while emotions with low arousal
(e.g., sad) generally have slow and subtle limb movements. Emotions with high
dominance (e.g., proud) generally have an expanded upper body and spread arms,
while emotions with low dominance (e.g., afraid) have a contracted upper body
and arms close to the body. Our algorithm uses these characteristics to
generate the appropriate gestures.
### 4.3 Affective Loss for Emotive Gestures
To ensure that the generated gestures are emotionally expressive, we also
penalize the loss between the gesture-based affective features of the ground-
truth and the predicted poses. Prior studies in affective computing [22, 28,
11] show that gesture-based affective features are good indicators of emotions
that vary in arousal and dominance. Emotions with high dominance, such as
pride, anger, and joy, tend to be expressed with an expanded upper body,
spread arms, and upright head positions. Conversely, emotions with low
dominance, such as fear and sadness, tend to be expressed with a contracted
upper body, arms close to the body, and collapsed head positions. Again,
emotions with high arousal, such as anger and amusement, tend to be expressed
with rapid arm swings and head movements. By contrast, emotions with low
arousal, such as relief and sadness, tend to be expressed with subtle, slow
movements. Different valence levels are not generally associated with
consistent differences in gestures, and humans often infer from other cues and
the context. Fig. 3 shows some gesture snapshots to visualize the variance of
these affective features for different levels of arousal and dominance.
We define scale-independent affective features using angles, distance ratios,
and area ratios for training our network, following the same rationale as in
[7]. Since, in our experiments, the virtual agent is sitting down, and only
the upper body is expressive during the gesture sequences, only the joints at
the root, neck, head, shoulders, elbows, and wrists move significantly.
Therefore, we use these joints to compute our affective features. We show the
complete list of affective features we use in Fig. 4. Denoting the set of
affective features computed from the ground-truth and the predicted poses at
time $t$ as $a_{t}$ and $\hat{a}_{t}$ respectively, we write the affective
loss $\mathcal{L}_{\textrm{aff}}$ as
$\mathcal{L}_{\textrm{aff}}=\sum_{t}\lVert a_{t}-\hat{a}_{t}\rVert^{2}.$ (10)
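The affective features themselves are simple geometric quantities computed from joint positions. The sketch below computes one angle feature and one hypothetical scale-independent distance ratio; the complete set of 15 features appears in Fig. 4, and these helper names are our own.

```python
import math

def _dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def joint_angle(a, b, c):
    """Angle at joint b formed by segments b->a and b->c, in radians."""
    u = [x - y for x, y in zip(a, b)]
    v = [x - y for x, y in zip(c, b)]
    dot = sum(x * y for x, y in zip(u, v))
    return math.acos(dot / (_dist(a, b) * _dist(c, b)))

def arm_spread_ratio(l_wrist, r_wrist, l_shoulder, r_shoulder):
    """Hypothetical scale-independent feature: wrist span relative to
    shoulder span; larger values suggest spread, expansive arms."""
    return _dist(l_wrist, r_wrist) / _dist(l_shoulder, r_shoulder)
```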
Combining all the individual loss terms, we write our training loss function
$\mathcal{L}$ as
$\mathcal{L}=\mathcal{L}_{\textrm{ang}}+\mathcal{L}_{\textrm{pose}}+\mathcal{L}_{\textrm{aff}}+\lambda\lVert W\rVert,$ (11)
where $W$ denotes the set of all trainable parameters in the full network, and
$\lambda$ is the regularization factor.
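A sketch of Eq. 11, with a hypothetical value for $\lambda$ and an L2 interpretation of $\lVert W\rVert$ (the paper does not specify the norm):

```python
import math

def total_loss(l_ang, l_pose, l_aff, weights, lam=1e-4):
    """Eq. 11 sketch: sum of the three losses plus a regularization term.
    lam is a hypothetical value; ||W|| is taken here as the L2 norm of the
    flattened trainable parameters, an assumption for this sketch."""
    reg = math.sqrt(sum(w * w for w in weights))  # ||W||
    return l_ang + l_pose + l_aff + lam * reg
```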
Figure 4: Gesture-based affective features. We use a total of 15 features: 7
angles, $A_{1}$ through $A_{7}$, 5 distance ratios, $\frac{D_{1}}{D_{4}}$,
$\frac{D_{2}}{D_{4}}$, $\frac{D_{8}}{D_{5}}$, $\frac{D_{7}}{D_{5}}$, and
$\frac{D_{3}}{D_{6}}$, and 3 area ratios, $\frac{R_{1}}{R_{2}}$,
$\frac{R_{3}}{R_{4}}$, and $\frac{R_{5}}{R_{6}}$.
## 5 Results
This section elaborates on the database we use to train, validate, and test
our method. We also report our training routine, the performance of our method
compared to the ground-truth, and the current state-of-the-art method for
generating gestures aligned with text input. We also perform ablation studies
to show the benefits of each of the components in our loss function: the angle
loss, the pose loss, and the affective loss.
### 5.1 Data for Training, Validation and Testing
We evaluate our method on the MPI emotional body expressions database [65].
This database consists of 1,447 motion-captured sequences of human
participants performing one of three acting tasks: narrating a sentence from a
story, gesticulating a scenario given as a sentence, or gesticulating while
speaking a line in a conversation. Each sequence corresponds to one text
sentence and the associated gestures. For each sequence, the following
annotations of the intended emotion $E$, gender $G$, and handedness $H$ are
available:
* •
$E$ is the VAD representation for one of “afraid”, “amused”, “angry”,
“ashamed”, “disgusted”, “joyous”, “neutral”, “proud”, “relieved”, “sad”, or
“surprised”,
* •
$G$ is either female or male, and
* •
$H$ is either left or right.
Each sequence is captured at 120 fps and is between 4 and 20 seconds long. We
pad all the sequences with our EoS pose (Sec. 3.2) so that all the sequences
are of equal length. Since the sequences freeze at the end of the
corresponding sentences, padding with the EoS pose often introduces small
jumps in the joint positions and the corresponding relative rotations when any
gesture sequence ends. To this end, we have designed our training loss
function (Eq. 11) to ensure smoothness and generate gestures that transition
smoothly to the EoS pose after the end of the sentence.
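The EoS padding step described above can be sketched as follows (the function name is ours):

```python
def pad_with_eos(sequences, eos_pose):
    """Pad every gesture sequence with the EoS pose so all sequences share
    the length of the longest one, as in Sec. 5.1."""
    max_len = max(len(s) for s in sequences)
    return [s + [eos_pose] * (max_len - len(s)) for s in sequences]
```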
### 5.2 Training and Evaluation Routines
We train our network using the Adam optimizer [30] with a learning rate of
0.001 and a weight decay of 0.999 at every epoch. We train our network for 600
epochs, with batches of 16 samples drawn without replacement in every
iteration. Our network has a total of 26,264,145 trainable parameters.
We use 80% of the data for training, validate the performance on 10% of the
data, and test on the remaining 10% of the data. The total training takes
around 8 hours using an Nvidia GeForce GTX 1080Ti GPU. At the time of
evaluation, we initialize the transformer decoder with $T=20$ (Fig. 2) time
steps of the SoS pose and keep using the past $T=20$ time steps to generate
the gesture at every time step.
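The evaluation-time decoding loop can be sketched as follows, with `decoder_step` standing in for one forward pass of the trained decoder (one pose out per call); this is an illustrative sketch, not the exact inference code.

```python
def generate_gestures(decoder_step, sos_pose, num_steps, window=20):
    """Sketch of the evaluation loop: seed the decoder with T=20 copies of
    the SoS pose, then autoregressively append predictions, always feeding
    back the most recent `window` poses."""
    history = [sos_pose] * window
    generated = []
    for _ in range(num_steps):
        next_pose = decoder_step(history[-window:])
        history.append(next_pose)
        generated.append(next_pose)
    return generated
```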
### 5.3 Comparative Performance
We compare the performance of our network with the transformer-based text-to-
gesture generation network of Yoon et al. [69] because this method is the
closest to our work. To make a fair comparison, we perform the following as
per their original paper:
* •
use the eight upper body joints (three each on the two arms, neck, and head)
for their method,
* •
use PCA to reduce the eight upper body joints to 10-dimensional features,
* •
retrain their network on the MPI emotional body expressions database [65],
using the same data split as in our method, and the hyperparameters provided
by the authors,
* •
compare the performances only on the eight upper body joints.
Table 1: Mean pose errors. For each listed method, this is the mean Euclidean distance of all the joints over all the time steps from all the ground-truth sequences over the entire test set. The mean error for each sequence is computed relative to the mean length of the longest diagonal of the 3D bounding box of the virtual agent in that sequence.

Method | Mean pose error
---|---
Yoon et al. [69] | 1.57
Our method, no angle loss | 0.07
Our method, no pose loss | 0.06
Our method, no affective loss | 0.06
Our method, all losses | 0.05
We report the mean pose error from the ground-truth sequences over the entire
held-out test set for both Yoon et al. [69] and our method in Table 1. For
each test sequence and each method, we compute the total pose error for all
the joints at each time step and calculate the mean of these errors across all
time steps. We then divide the mean error by the mean length of the longest
diagonal of the 3D bounding box of the virtual agent to get the normalized
mean error. To obtain the mean pose error for the entire test set, we compute
the mean of the normalized mean errors for all the test sequences. We also
plot the trajectories of the three end-effector joints in the upper body,
head, left wrist, and right wrist, independently in the three coordinate
directions, for two diverse sample sequences from the test set in Fig. 5. We
ensure diversity in the samples by choosing a different combination of the
gender, handedness, acting task, and intended emotion of the gesture for each
sample.
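The normalized mean pose error for a single sequence can be sketched as follows (function name ours):

```python
import math

def mean_pose_error(gt_seq, pred_seq, bbox_diag):
    """Sketch of the Table 1 metric for one sequence: average Euclidean
    distance over all joints and time steps, divided by the length of the
    longest diagonal of the agent's 3D bounding box."""
    total, count = 0.0, 0
    for gt_frame, pred_frame in zip(gt_seq, pred_seq):
        for p, q in zip(gt_frame, pred_frame):
            total += math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
            count += 1
    return (total / count) / bbox_diag
```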
Figure 5: End-effector trajectories. The trajectories in the three coordinate
directions for the head and two wrists. We show two sample sequences from the
test set, as generated by all the methods. Removing the angle loss makes the
trajectory heavily jerky. Removing the pose loss makes our method unable to
follow the desired trajectory. Removing the affective loss reduces the
variations corresponding to emotional expressiveness. Yoon et al.’s method
[69] is unable to generate large amplitude variations in the trajectories
because it works with a dimension-reduced representation of the sequences.
We observe from Table 1 that our method reduces the mean pose error by around
97% over Yoon et al. [69]. From the plots in Fig. 5, we can observe that
unlike our method, Yoon et al.’s method is unable to generate the high
amplitude oscillations in motion, leading to larger pose errors. This is
because their lower-dimensional representation of pose motions does not
sufficiently capture the oscillations. Moreover, the gestures generated by
Yoon et al.’s method did not produce any movements in the $z$-axis. Instead,
they confined the movements to a particular $z$-plane. The only $z$-axis step
in their method occurs when the gesture returns to the EoS rest pose,
which is in a different $z$-plane.
Figure 6: Ablation studies. Snapshots of gestures at five time steps from two
sample ground-truth sequences in the test set, and the gestures at the same
five time steps as generated by our method and its different ablated versions.
The full sequences of these gestures are available in our supplementary video.
### 5.4 Ablation Studies
We compare the performance between different ablated versions of our method.
We test the contribution of each of the three loss terms, angle loss, pose
loss, and affective loss, in Eq. 11 by removing them from the total loss one
at a time and training our network from scratch with the remaining losses.
Each of these ablated versions has a higher mean pose error over the entire
test set than our actual method, as we report in Table 1. To visualize the
performance differences, we show in Fig. 5 sample end-effector trajectories in
the same setup as described in Sec. 5.3. We also show snapshots from the two
sample gesture sequences generated by all the ablated versions in Fig. 6. We
show the full gesture sequences of these and other samples in our
supplementary video.
We can observe from Fig. 5 that the gestures become heavily jerky without the
angle loss. When we add in the angle loss but remove the pose loss, the
gestures become smoother but still have some jerkiness. This shows that the
pose loss also lends some robustness to the generation process. The other
major drawback in removing either the angle or the pose loss is that the
network can only change the gesture between time steps within some small
bounds, making the overall animation sequence appear rigid and constricted.
When we remove only the affective loss from Eq. 11, the network can generate a
wide range of gestures, leading to animations that appear fluid and plausible.
However, the emotional expressions in the gestures, such as spreading and
contracting the arms and shaking the head, are not consistent with the
intended emotions.
### 5.5 Interfacing the VR Environment
Given a sentence of text, we can generate the gesture animation files at an
interactive rate of 3.2 ms per frame, or 312.5 frames per second, on average
on an Nvidia GeForce GTX 1080Ti GPU.
We use gender and handedness to determine the virtual agent’s physical
attributes during the generation of gestures. Gender impacts the pose
structure. The handedness determines the hand for one-handed or longitudinally
asymmetrical gestures. To create the virtual agents, we use low-poly humanoid
meshes with no textures on the face. We use the pre-defined set of male and
female skeletons in the MPI emotional body motion database [65] for the
gesture animations. We assign a different model to each of these skeletons,
matching their genders. We manually correct any visual distortions caused by a
shape mismatch between the pre-defined skeletons and the low-poly meshes.
We use Blender 2.7 to rig the generated animations to the humanoid meshes. To
ensure a proper rig, we modify the rest pose of the humanoid meshes to match
the rest pose of our pre-defined skeletons. To make the meshes appear more
life-like, we add periodic blinking and breathing movements to the generated
animations using blendshapes in Blender.
We prepare our VR environment using Unreal 4.25. We place the virtual agents
on a chair in the center of the scene in full focus. The users can interact
with the agent in two ways. They can either select a story that the agent
narrates line by line using appropriate body gestures or send lines of text as
part of a conversation to which the agent responds using text and associated
body gestures. We show the full demos in our supplementary video. We use
synthetic, neutral-toned audio aligned with all our generated gestures to
understand the timing of the gestures with the text. However, we do not add
any facial features or emotions in the audio for the agents since they are
dominant modalities of emotional expression and make a fair evaluation of the
emotional expressiveness of the gestures difficult. For example, if the
intended emotion is happy, and the agent has a smiling face, observers are
more likely to respond favorably to any gesture with high valence or arousal.
## 6 User Study
We conduct a web-based user study to test two major aspects of our method: the
correlation between the intended and the perceived emotions of and from the
gestures, and the quality of the animations compared to the original motion-
captured sequences.
### 6.1 Procedure
The study consisted of two sections and was about ten minutes long. In the
first section, we showed the participant six clips of virtual agents sitting
on a chair and performing randomly selected gesture sequences generated by our
method, one after the other. We then asked the participant to report the
perceived emotion as one of multiple choices. Based on our pilot study, we
understood that asking participants to choose from one of 11 categorical
emotions in the EBEDB dataset [65] was overwhelming, especially since some of
the emotion terms were close to each other in the VAD space (e.g., joyous and
amused). Therefore, we opted for fewer choices to make it easier for the
participants and reduce the probability of having too many emotion terms with
similar VAD values in the choices. For each sequence, we therefore provided
the participant with four choices for the perceived emotion. One of the
choices was the intended emotion, and the remaining three were randomly
selected. For each animation, randomly choosing three choices can
unintentionally bias the participant’s response (for instance, if the intended
emotion is “sad” and the random options are “joyous”, “amused” and “proud”).
However, the probability of such a set of choices drops exponentially as we
consider multiple sequences for each participant and multiple participants in
the overall study.
Table 2: Likert scale markers to assess the quality of gestures. We use the
following markers in our five-point Likert scale.
Very Unnatural | e.g., broken arms or legs, torso at an impossible angle
---|---
Not Realistic | e.g., limbs going inside the body or through the chair
Looks OK | No serious problems, but does not look very appealing
Looks good | No problems and the gestures look natural
Looks great! | The gestures look like they could be from a real person
In the second section, we showed the participant three clips of virtual agents
sitting on a chair and performing a randomly selected original motion-captured
sequence and three clips of virtual agents performing a randomly selected
generated gesture sequence, one after the other. We showed the participant
these six sequences in random order. We did not tell the participant which
sequences were from the original motion-capture and which sequences were
generated by our method. We asked the participant to report the naturalness of
the gestures in each of these sequences on a five-point Likert scale,
consisting of the markers mentioned in Table 2.
We had a total of 145 clips of generated gestures and 145 clips of the
corresponding motion-captured gestures. For every participant, we chose all
the 12 random clips across the two sections without replacement. We did not
notify the participant a priori which clips had motion-captured gestures and
which clips had our generated gestures. Moreover, we ensured that in the
second section, none of the three selected generated gestures corresponded to
the three selected motion-captured gestures. Thus, all the clips each
participant looked at were distinct. However, we did repeat clips at random
across participants to get multiple responses for each clip.
### 6.2 Participants
Fifty participants, recruited via web advertisements, took part in our study.
To study the demographic diversity, we asked the participants
to report their gender and age group. Based on the statistics, we had 16 male
and 11 female participants in the age group of 18-24, 15 male and 7 female
participants in the age group of 25-34, and one participant older than 35 who
preferred not to disclose their gender. However, we did not observe any
particular pattern of responses based on the demographics.
### 6.3 Evaluation
We analyze the correlation between the intended and the perceived emotions
from the first section of the user study and the reported quality of the
animations from the second section. We also summarize miscellaneous user
feedback.
#### 6.3.1 Correlation between Intended and Perceived Emotions
Each participant responded to six random sequences in the first section of the
study, leading to a total of 300 responses. We convert the categorical emotion
terms from these responses to the VAD space using the mapping of NRC-VAD [48].
We show the distribution of the valence, arousal, and dominance values of the
intended and perceived emotions in Fig. 7.
We compute the Pearson correlation coefficient between the intended and
perceived values in each of the valence, arousal, and dominance dimensions. A
Pearson coefficient of 1 indicates maximum positive linear correlation, 0
indicates no correlation, and -1 indicates maximum negative linear
correlation. In practice, any coefficient larger than 0.5 indicates a strong
positive linear correlation. We hypothesize that intended and the perceived
values in all three dimensions have such a strong positive correlation.
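The Pearson coefficient for the paired intended and perceived values can be computed as follows (a standard formula, sketched in plain Python):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between paired samples: covariance
    of the two variables divided by the product of their standard
    deviations (here via centered sums)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)
```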
We observe a Pearson coefficient of 0.77, 0.95, and 0.82, respectively,
between the intended and the perceived values in the valence, arousal, and
dominance dimensions. Thus, the values in all three dimensions are strongly
positively correlated, satisfying our hypothesis. The values also indicate
that the correlation is stronger in the arousal and the dominance dimensions
and comparatively weaker in the valence dimension. This is in line with prior
studies in affective computing [22, 28], which show that humans can
consistently perceive arousal and dominance from gesture-based body
expressions.
Figure 7: Valence, arousal, and dominance distributions. Distribution of
values from the intended and perceived emotions in the valence, arousal, and
dominance dimensions for gestures in the study. All the distributions indicate
strong positive correlation between the intended and the perceived values,
with the highest correlation in arousal and the lowest in valence.
#### 6.3.2 Quality of Gesture Animations
Each participant responded to three random motion-captured and three randomly
generated sequences in the second section of the study. Therefore, we have a
total of 150 responses on both the motion-captured and the generated
sequences. We summarize the percentage of responses of each of the five points
in the Likert scale in Fig. 8. We consider a minimum score of 3 on our Likert
scale to indicate that the participant found the corresponding gesture
plausible. By this criterion, we observe that 86.67% of the responses
indicated that the virtual agents performing the motion-captured sequences
have plausible gestures, and 91.33% of the responses indicated that the
virtual agents performing the generated sequences have plausible gestures. In
fact, we observe that a
marginally higher percentage of responses scored the generated gestures 4 and
5 (2.00% and 3.33% respectively), compared to the percentage of responses with
the same score for the motion-captured gestures. This, coupled with the fact
that participants did not know a priori which sequences were motion-captured
and which were generated, indicates that our generated sequences were
perceived to be as
realistic as the original motion-captured sequences. One possible explanation
of participants rating our generated gestures marginally more plausible than
the motion-captured gestures is that our generated poses return smoothly to a
rest pose after the end of the sentence. The motion-captured gestures, on the
other hand, freeze at the end-of-the-sentence pose.
#### 6.3.3 Miscellaneous Feedback
Our virtual agents only express emotions through gestures and do not use any
other modalities such as faces or voices. Therefore, we expected some
participants taking the study to be distracted by the lack of emotions on the
face or to be unable to determine the emotions based only on the gestures,
without supporting cues from the other modalities. Indeed, 14% of the
participants reported they were distracted by the lack of facial emotions, 10%
were unable to determine the emotions based on only the gestures, and 8%
experienced both difficulties.
Figure 8: Responses on the Quality of Gestures. A small fraction of
participants responded to the few gesture sequences that had some stray self-
collisions, and therefore found these sequences to not be realistic. The vast
majority of the participants found both the motion-captured and generated
gestures to look OK (plausible) on the virtual agents. A marginally higher
percentage of participants reported that our generated gesture sequences
looked better on the virtual agents than the original motion-captured gesture
sequences.
## 7 Conclusion
We present a novel method that takes in natural language text one sentence at
a time and generates 3D pose sequences for virtual agents corresponding to
emotive gestures aligned with that text. Our generative method also considers
the intended acting task of narration or conversation, the intended emotion
based on the text and the context, and the intended gender and handedness of
the virtual agents to generate plausible gestures. We can generate these
gestures in a few milliseconds on an Nvidia GeForce GTX 1080Ti GPU. We also
conducted a web study to evaluate the naturalness and emotional expressiveness
of our generated gestures. Based on the 600 total responses from 50
participants, we found a strong positive correlation between the intended
emotions of the virtual agents’ gestures and the emotions perceived from them
by the respondents, with a minimum Pearson coefficient of 0.77 in the valence
dimension. Moreover, around 91% of the respondents found our generated
gestures to be at least plausible on a five-point Likert Scale.
## 8 Limitations and Future Work
Our work has some limitations. First, we train our network to learn mappings
from complete text sentences to gestures. We can improve this by exploring a
more granular phrase-level mapping from text to gestures to gain insights on
how gestures corresponding to parts of sentences can be combined to produce
gestures for full sentences. Second, our generated gestures return to the EoS
pose after gesticulating every sentence. This is because all the samples
in the EBEDB dataset [65] start from a rest pose. As a result, we cannot
exploit any information related to the continuity between gestures that
correspond to adjacent sentences. A simple method to extend this approach is
to use the last window of the current sentence as an initialization for the
next sentence. However, without any ground-truth information on the continuity
between gestures, it is difficult to train or evaluate the transitioning
gestures. As part of our future work, we plan to explore other techniques to
enforce such continuity. Third, we only consider the VAD representation for
the categorical emotion terms associated with the texts. This simplifies our
network design and the evaluation of the emotions perceived by the
participants in our study. In the future, we plan to explore the correlations
between the VAD representations of words in the text and the associated
categorical emotions. We also plan to study the interrelations of these VAD
representations with the gender, age, and ethnicity of the subjects, to build
more sophisticated maps from texts to a more diverse range of emotive
gestures. We would also like to integrate our emotive gesture generation
algorithm with social VR systems and use them for socially-aware
conversational agents.
Lastly, we only consider text-based emotive gesture generation, but no facial
expression or expressive voice tones. In real-world scenarios, facial
expressions and voice tones tend to play dominant roles in conveying the
emotions and may occupy the user’s focus. Consequently, in our current video
results and studies, we evaluated the effectiveness of our gesture generation
approach without using any facial or vocal expressions, which is similar to
other methods for evaluating gestures [68, 34]. This way, we ensure that the
user mainly focuses on emotive gestures. As part of future work, it would be
useful to combine our work with varying facial expressions corresponding to
different emotions and vary the emotional tone in voices. Furthermore, we
would like to evaluate the relative benefits of combining different
modalities, such as emotive gestures, facial expressions, and voice tones.
## References
* [1] C. Ahuja, D. W. Lee, Y. I. Nakano, and L.-P. Morency. Style transfer for co-speech gesture animation: A multi-speaker conditional-mixture approach. European Conference on Computer Vision, 2020.
* [2] S. Alexanderson, G. E. Henter, T. Kucherenko, and J. Beskow. Style-controllable speech-driven gesture synthesis using normalising flows. Computer Graphics Forum, 39(2):487–496, 2020. doi: 10.1111/cgf.13946
* [3] M. Argyle. Bodily communication. Routledge, 2013.
* [4] H. Aviezer, Y. Trope, and A. Todorov. Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science, 338(6111):1225–1229, 2012.
* [5] A. Banerjee, U. Bhattacharya, and A. Bera. Learning unseen emotions from gestures via semantically-conditioned zero-shot perception with adversarial autoencoders. arXiv preprint arXiv:2009.08906, 2020.
* [6] T. Baur, I. Damian, P. Gebhard, K. Porayska-Pomsta, and E. André. A job interview simulation: Social cue-based interaction with a virtual character. In 2013 International Conference on Social Computing, pp. 220–227, 2013.
* [7] U. Bhattacharya, T. Mittal, R. Chandra, T. Randhavane, A. Bera, and D. Manocha. Step: Spatial temporal graph convolutional networks for emotion perception from gaits. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI’20, p. 1342–1350. AAAI Press, 2020.
* [8] U. Bhattacharya, N. Rewkowski, P. Guhan, N. L. Williams, T. Mittal, A. Bera, and D. Manocha. Generating emotive gaits for virtual agents using affect-based autoregression. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 24–35, 2020. doi: 10.1109/ISMAR50242.2020.00020
* [9] U. Bhattacharya, C. Roncal, T. Mittal, R. Chandra, K. Kapsaskis, K. Gray, A. Bera, and D. Manocha. Take an emotion walk: Perceiving emotions from gaits using hierarchical attention pooling and affective mapping. In A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, eds., Computer Vision – ECCV 2020, pp. 145–163. Springer International Publishing, Cham, 2020.
* [10] P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146, 2017.
* [11] G. Castillo and M. Neff. What do we express without knowing? emotion in gesture. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’19, p. 702–710. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2019.
* [12] C.-C. Chiu, L.-P. Morency, and S. Marsella. Predicting co-verbal gestures: A deep and temporal modeling approach. In Intelligent Virtual Agents, pp. 152–166. Springer International Publishing, Cham, 2015.
* [13] A. Chowanda, P. Blanchfield, M. Flintham, and M. Valstar. Computational models of emotion, personality, and social relationships for interactions in games: (extended abstract). In Proceedings of the 2016 International Conference on Autonomous Agents and Multiagent Systems, AAMAS ’16, p. 1343–1344. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2016.
* [14] J. H. Chuah, B. Rossen, and B. Lok. Automated generation of emotive virtual humans. In Intelligent Virtual Agents, pp. 490–491. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009.
* [15] B. De Gelder. Towards the neurobiology of emotional body language. Nature Reviews Neuroscience, 7(3):242–249, 2006.
* [16] D. DeVault, R. Artstein, G. Benn, T. Dey, E. Fast, A. Gainer, K. Georgila, J. Gratch, A. Hartholt, M. Lhommet, et al. Simsensei kiosk: A virtual human interviewer for healthcare decision support. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, pp. 1061–1068, 2014.
* [17] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota, June 2019. doi: 10.18653/v1/N19-1423
* [18] Y. Ferstl and R. McDonnell. A perceptual study on the manipulation of facial features for trait portrayal in virtual agents. In Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA ’18, p. 281–288. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3267851.3267891
* [19] Y. Ferstl, M. Neff, and R. McDonnell. Multi-objective adversarial gesture generation. In Motion, Interaction and Games, MIG ’19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3359566.3360053
* [20] S. Ginosar, A. Bar, G. Kohavi, C. Chan, A. Owens, and J. Malik. Learning individual styles of conversational gesture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
* [21] D. Greenwood, S. Laycock, and I. Matthews. Predicting head pose from speech with a conditional variational autoencoder. ISCA, 2017.
* [22] M. M. Gross, E. A. Crane, and B. L. Fredrickson. Effort-shape and kinematic assessment of bodily expression of emotion during gait. Human movement science, 31(1):202–221, 2012.
* [23] D. Hasegawa, N. Kaneko, S. Shirakawa, H. Sakuta, and K. Sumi. Evaluation of speech-to-gesture generation using bi-directional lstm network. In Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA ’18, p. 79–86. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10 . 1145/3267851 . 3267878
* [24] P. Heidicker, E. Langbehn, and F. Steinicke. Influence of avatar appearance on presence in social vr. In 2017 IEEE Symposium on 3D User Interfaces (3DUI), pp. 233–234, 2017.
* [25] D. Holden, T. Komura, and J. Saito. Phase-functioned neural networks for character control. ACM Transactions on Graphics (TOG), 36(4):42, 2017.
* [26] D. Holden, J. Saito, and T. Komura. A deep learning framework for character motion synthesis and editing. ACM Trans. Graph., 35(4), July 2016. doi: 10 . 1145/2897824 . 2925975
* [27] N. Jaques, D. J. McDuff, Y. L. Kim, and R. W. Picard. Understanding and predicting bonding in conversations using thin slices of facial expressions and body language. In Intelligent Virtual Agents - 16th International Conference, IVA 2016, Los Angeles, CA, USA, September 20-23, 2016, Proceedings, vol. 10011 of Lecture Notes in Computer Science, pp. 64–74, 2016. doi: 10 . 1007/978-3-319-47665-0
* [28] M. Karg, A. Samadani, R. Gorbet, K. Kühnlenz, J. Hoey, and D. Kulić. Body movements for affective expression: A survey of automatic recognition and generation. IEEE Transactions on Affective Computing, 4(4):341–359, 2013.
* [29] T. Karras, T. Aila, S. Laine, A. Herva, and J. Lehtinen. Audio-driven facial animation by joint end-to-end learning of pose and emotion. ACM Trans. Graph., 36(4), July 2017. doi: 10 . 1145/3072959 . 3073658
* [30] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [31] A. Kleinsmith and N. Bianchi-Berthouze. Affective body expression perception and recognition: A survey. IEEE Transactions on Affective Computing, 4(1):15–33, 2013.
* [32] M. L. Knapp, J. A. Hall, and T. G. Horgan. Nonverbal communication in human interaction. Cengage Learning, 2013.
* [33] T. Kucherenko, D. Hasegawa, G. E. Henter, N. Kaneko, and H. Kjellström. Analyzing input and output representations for speech-driven gesture generation. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, IVA ’19, p. 97–104. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10 . 1145/3308532 . 3329472
* [34] T. Kucherenko, P. Jonell, S. van Waveren, G. E. Henter, S. Alexandersson, I. Leite, and H. Kjellström. Gesticulator: A framework for semantically-aware speech-driven gesture generation. ICMI ’20, p. 242–250. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10 . 1145/3382507 . 3418815
* [35] M. E. Latoschik, D. Roth, D. Gall, J. Achenbach, T. Waltemate, and M. Botsch. The effect of avatar realism in immersive social virtual realities. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, VRST ’17. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10 . 1145/3139131 . 3139156
* [36] S. Levine, P. Krähenbühl, S. Thrun, and V. Koltun. Gesture controllers. In ACM SIGGRAPH 2010 Papers, SIGGRAPH ’10. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10 . 1145/1833349 . 1778861
* [37] J. Li, R. Kizilcec, J. Bailenson, and W. Ju. Social robots and virtual agents as lecturers for video instruction. Computers in Human Behavior, 55:1222 – 1230, 2016. doi: 10 . 1016/j . chb . 2015 . 04 . 005
* [38] Y. Li, M. R. Min, D. Shen, D. E. Carlson, and L. Carin. Video generation from text. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI’18, pp. 7065–7072. AAAI Press.
* [39] M. Liao, C. Sung, H. Wang, and W. Lin. Virtual classmates: Embodying historical learners’ messages as learning companions in a vr classroom through comment mapping. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 163–171, 2019.
* [40] B. Liebold and P. Ohler. Multimodal emotion expressions of virtual agents, mimic and vocal emotion expressions and their effects on emotion recognition. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII ’13, p. 405–410. IEEE Computer Society, USA, 2013. doi: 10 . 1109/ACII . 2013 . 73
* [41] D. Matsumoto, M. G. Frank, and H. S. Hwang. Nonverbal communication: Science and applications. Sage Publications, 2012.
* [42] D. McNeill. Hand and mind: What gestures reveal about thought. University of Chicago press, 1992.
* [43] H. K. M. Meeren, C. C. R. J. van Heijnsbergen, and B. de Gelder. Rapid perceptual integration of facial expression and emotional body language. Proceedings of the National Academy of Sciences, 102(45):16518–16523, 2005. doi: 10 . 1073/pnas . 0507650102
* [44] A. Mehrabian and J. A. Russell. An approach to environmental psychology. the MIT Press, 1974.
* [45] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, eds., Advances in Neural Information Processing Systems, vol. 26, pp. 3111–3119. Curran Associates, Inc., 2013.
* [46] T. Mittal, U. Bhattacharya, R. Chandra, A. Bera, and D. Manocha. M3er: Multiplicative multimodal emotion recognition using facial, textual, and speech cues. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI’20, pp. 1359–1367. AAAI Press, 2020.
* [47] T. Mittal, P. Guhan, U. Bhattacharya, R. Chandra, A. Bera, and D. Manocha. Emoticon: Context-aware multimodal emotion recognition using frege’s principle. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14234–14243, 2020.
* [48] S. Mohammad. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 174–184. Association for Computational Linguistics, Melbourne, Australia, July 2018. doi: 10 . 18653/v1/P18-1017
* [49] M. Neff, M. Kipp, I. Albrecht, and H.-P. Seidel. Gesture modeling and animation based on a probabilistic re-creation of speaker style. ACM Trans. Graph., 27(1), Mar. 2008. doi: 10 . 1145/1330511 . 1330516
* [50] Oculus. Facebook Horizon, https://www.oculus.com/facebook-horizon/.
* [51] D. Pavllo, D. Grangier, and M. Auli. Quaternet: A quaternion-based recurrent model for human motion. In British Machine Vision Conference 2018, BMVC 2018, p. 299, 2018\.
* [52] J. Pennington, R. Socher, and C. Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar, Oct. 2014. doi: 10 . 3115/v1/D14-1162
* [53] T. Randhavane, A. Bera, K. Kapsaskis, U. Bhattacharya, K. Gray, and D. Manocha. Identifying emotions from walking using affective and deep features. arXiv preprint arXiv:1906.11884, 2019.
* [54] T. Randhavane, A. Bera, K. Kapsaskis, K. Gray, and D. Manocha. Fva: Modeling perceived friendliness of virtual agents using movement characteristics. IEEE transactions on visualization and computer graphics, 25(11):3135–3145, 2019.
* [55] T. Randhavane, A. Bera, K. Kapsaskis, R. Sheth, K. Gray, and D. Manocha. Eva: Generating emotional behavior of virtual agents using expressive features of gait and gaze. In ACM Symposium on Applied Perception 2019, p. 6. ACM, 2019.
* [56] D. Roth, J. Lugrin, D. Galakhov, A. Hofmann, G. Bente, M. E. Latoschik, and A. Fuhrmann. Avatar realism and social interaction quality in virtual reality. In 2016 IEEE Virtual Reality (VR), pp. 277–278, 2016.
* [57] N. Sadoughi and C. Busso. Novel realizations of speech-driven head movements with generative adversarial networks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6169–6173, 2018.
* [58] N. Sadoughi and C. Busso. Speech-driven animation with meaningful behaviors. Speech Communication, 110:90 – 100, 2019. doi: 10 . 1016/j . specom . 2019 . 04 . 005
* [59] A. L. Simeone, M. Speicher, A. Molnar, A. Wilde, and F. Daiber. Live: The human role in learning in immersive virtual environments. In Symposium on Spatial User Interaction, SUI ’19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10 . 1145/3357251 . 3357590
* [60] S. S. Sohn, X. Zhang, F. Geraci, and M. Kapadia. An emotionally aware embodied conversational agent. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’18, p. 2250–2252. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2018.
* [61] S. Starke, H. Zhang, T. Komura, and J. Saito. Neural state machine for character-scene interactions. ACM Transactions on Graphics (TOG), 38(6):209, 2019.
* [62] S. Suwajanakorn, S. M. Seitz, and I. Kemelmacher-Shlizerman. Synthesizing obama: Learning lip sync from audio. ACM Trans. Graph., 36(4), July 2017. doi: 10 . 1145/3072959 . 3073640
* [63] J. Van den Stock, R. Righart, and B. De Gelder. Body expressions influence recognition of emotions in the face and voice. Emotion, 7(3):487, 2007.
* [64] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30, pp. 5998–6008. Curran Associates, Inc., 2017.
* [65] E. Volkova, S. De La Rosa, H. H. Bülthoff, and B. Mohler. The mpi emotional body expressions database for narrative scenarios. PloS one, 9(12):e113647, 2014.
* [66] P. Wagner, Z. Malisz, and S. Kopp. Gesture and speech in interaction: An overview. Speech Communication, 57:209 – 232, 2014. doi: 10 . 1016/j . specom . 2013 . 09 . 008
* [67] J. Z. Wang, N. Badler, N. Berthouze, R. O. Gilmore, K. L. Johnson, A. Lapedriza, X. Lu, and N. Troje. Panel: Bodily expressed emotion understanding research: A multidisciplinary perspective. In A. Bartoli and A. Fusiello, eds., Computer Vision – ECCV 2020 Workshops, pp. 733–746. Springer International Publishing, Cham, 2020.
* [68] Y. Yoon, B. Cha, J.-H. Lee, M. Jang, J. Lee, J. Kim, and G. Lee. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Transactions on Graphics, 39(6), 2020.
* [69] Y. Yoon, W.-R. Ko, M. Jang, J. Lee, J. Kim, and G. Lee. Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots. In Proc. of The International Conference in Robotics and Automation (ICRA), 2019.
* [70] X. Zhou, S. Huang, B. Li, Y. Li, J. Li, and Z. Zhang. Text guided person image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
|
# Screen2Vec: Semantic Embedding of GUI Screens and GUI Components
Toby Jia-Jun Li (Carnegie Mellon University, Pittsburgh, PA), Lindsay Popowski (Harvey Mudd College, Claremont, CA), Tom M. Mitchell (Carnegie Mellon University, Pittsburgh, PA), and Brad A. Myers (Carnegie Mellon University, Pittsburgh, PA)
(2021)
###### Abstract.
Representing the semantics of GUI screens and components is crucial to data-
driven computational methods for modeling user-GUI interactions and mining GUI
designs. Existing GUI semantic representations are limited to encoding either
the textual content, the visual design and layout patterns, or the app
contexts. Many representation techniques also require significant manual data
annotation efforts. This paper presents Screen2Vec, a new self-supervised
technique for generating embedding-vector representations of GUI screens and
components that encode all of the above GUI features, using the context of
user interaction traces and requiring no manual annotation. Screen2Vec is
inspired by the word embedding method Word2Vec, but uses a new two-layer
pipeline informed by the structure of GUIs and interaction traces and
incorporates screen- and app-specific metadata. Through several sample
downstream tasks, we demonstrate Screen2Vec’s key useful properties:
representing between-screen similarity through nearest neighbors,
composability, and capability to represent user tasks.
GUI embedding, interaction mining, screen semantics
Journal year: 2021. Copyright: rights retained. Conference: CHI Conference on
Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan.
doi: 10.1145/3411764.3445049. ISBN: 978-1-4503-8096-6/21/05. CCS:
Human-centered computing (Smartphones; User interface design; Graphical user
interfaces); Computing methodologies (Neural networks).
## 1\. Introduction
With the rise of data-driven computational methods for modeling user
interactions with graphical user interfaces (GUIs), the GUI screens have
become not only interfaces for human users to interact with the underlying
computing services, but also valuable data sources that encode the underlying
task flow, the supported user interactions, and the design patterns of the
corresponding apps, which have proven useful for AI-powered applications. For
example, programming-by-demonstration (PBD) intelligent agents such as (Li et
al., 2017; Li et al., 2019; Sereshkeh et al., 2020) use task-relevant entities
and hierarchical structures extracted from GUIs to parameterize, disambiguate,
and handle errors in user-demonstrated task automation scripts. Erica (Deka et
al., 2016) mines a large repository of mobile app GUIs to enable user
interface (UI) designers to search for example design patterns to inform their
own design. Kite (Li and Riva, 2018) extracts task flows from mobile app GUIs
to bootstrap conversational agents.
Semantic representations of GUI screens and components, where each screen and
component is encoded as a vector (known as the embedding), are highly useful
in these applications. The representations of GUI screens and components can
be used to also represent other entities of interest. For example, a task in
an app can be modeled as a sequence of GUI actions, where each action can be
represented as a GUI screen, a type of interaction (e.g., click), and the
component that is interacted with on the screen. An app can be modeled as a
collection of all its screens, or a large collection of user interaction
traces of using the app. Voice shortcuts in mobile app deep links (Azim et
al., 2016) can be modeled as matching the user’s intent expressed in natural
language to the target GUI screens. The representation of the screen that the
user is viewing or has previously viewed can also be used as the context to
help infer the user’s intents and activities in predictive intelligent
interfaces. The semantic embedding approach represents GUI screens and
components in a distributed form (Bengio, 2009) (i.e., an item is represented
across multiple dimensions) as continuous-valued vectors, making it especially
suitable for use in popular machine learning models.
However, existing approaches of representing GUI screens and components are
limited. One type of approach solely focuses on capturing the text on the
screen, treating the screen as a bag of words or phrases. For example,
Sugilite (Li et al., 2017) uses exact matches of text labels on the screen to
generalize the user demonstrated tasks. Sovite (Li et al., 2020b) uses the
average of individual word embedding vectors for all the text labels on the
screen to represent the screen for retrieving relevant task intents. This
approach can capture the semantics of the screen’s textual content, but misses
out on using the information encoded in the layout and the design pattern of
the screen and the task context encoded in the interactivity and meta-data of
the screen components.
Another type of approach focuses on the visual design patterns and GUI
layouts. Erica (Deka et al., 2016) uses an unsupervised clustering method to
create semantic clusters of visually similar GUI components. Liu et al.’s
approach (Liu et al., 2018) leverages the hierarchical GUI structures, the
class names of GUI components, and the visual classifications of graphical
icons to annotate the design semantics of GUIs. This type of approach has been
shown to be able to determine the category of a GUI component (e.g., list
items, tab labels, navigation buttons), the “UX concept” semantics of buttons
(e.g., “back”, “delete”, “save”, and “share”), and the overall type of task
flow of screens (e.g., “searching”, “promoting”, and “onboarding”). However,
it does not capture the content in the GUIs—two structurally and visually
similar screens with different content (e.g., the search results screen in a
restaurant app and a hotel booking app) will yield similar results.
There have been prior approaches that combine the textual content and the
visual design patterns (Pasupat et al., 2018; Li et al., 2020c). However,
these approaches use supervised learning with large datasets for very specific
task objectives. Therefore they require significant task-specific manual data
labeling efforts, and their resulting models cannot be used in different
downstream tasks. For example, Pasupat et al. (Pasupat et al., 2018) create a
embedding-based model that can map the user’s natural language commands to web
GUI elements based on the text content, attributes, and spatial context of the
GUI elements. Li et al.’s work (Li et al., 2020c) describes a model that
predicts sequences of mobile GUI action sequences based on step-by-step
natural language descriptions of actions. Both models are trained using large
manually-annotated corpora of natural language utterances and the
corresponding GUI actions.
We present Screen2Vec, a new self-supervised technique (i.e., one that trains
a model without human-labeled data by withholding some part of the data and
tasking the network with predicting it) for
generating more comprehensive semantic representations of GUI screens and
components. Screen2Vec uses the screens’ textual content, visual design and
layout patterns, and app context meta-data. Screen2Vec’s approach is inspired
by the popular word embedding method Word2Vec (Mikolov et al., 2013b), where
the embedding vector representations of GUI screens and components are
generated through the process of training a prediction model. However, unlike
Word2Vec, Screen2Vec uses a two-layer pipeline informed by the structures of
GUIs and GUI interaction traces and incorporates screen- and app-specific
metadata.
The embedding vector representations produced by Screen2Vec can be used in a
variety of useful downstream tasks such as nearest neighbor retrieval,
composability-based retrieval, and representing mobile tasks. The self-
supervised nature of Screen2Vec allows its model to be trained without any
manual data labeling efforts—it can be trained with a large collection of GUI
screens and the user interaction traces on these screens such as the Rico
(Deka et al., 2017) dataset.
Along with this paper, we also release the open-source code of Screen2Vec (a
pre-trained model and the source code are available at:
https://github.com/tobyli/screen2vec) as well as a pre-
computed Screen2Vec model trained on the Rico dataset (Deka et al., 2017)
(more in Section 2.1). The pre-computed model can encode the GUI screens of
Android apps into embedding vectors off-the-shelf. The open-source code can be
used to train models for other platforms given the appropriate dataset of user
interaction traces.
Screen2Vec addresses an important gap in prior work about computational HCI
research. The lack of comprehensive semantic representations of GUI screens
and components has been identified as a major limitation in prior work in GUI-
based interactive task learning (e.g., (Li et al., 2019; Sereshkeh et al.,
2020)), intelligent suggestive interfaces (e.g., (Chen et al., 2019)),
assistive tools (e.g., (Bigham et al., 2009)), and GUI design aids (e.g.,
(Swearngin et al., 2018; Lee et al., 2020a)). Screen2Vec embeddings can encode
the semantics, contexts, layouts, and patterns of GUIs, providing
representations of these types of information in a form that can be easily and
effectively incorporated into popular modern machine learning models.
This paper makes the following contributions:
1. Screen2Vec: a new self-supervised technique for generating more comprehensive semantic embeddings of GUI screens and components using their textual content, visual design and layout patterns, and app meta-data.
2. An open-sourced GUI embedding model trained using the Screen2Vec technique on the Rico (Deka et al., 2017) dataset that can be used off-the-shelf.
3. Several sample downstream tasks that showcase the model’s usefulness.
## 2\. Our Approach
Figure 1. The two-level architecture of Screen2Vec for generating GUI
component and screen embeddings. The weights for the steps in teal color are
optimized during the training process.
Figure 1 illustrates the architecture of Screen2Vec. Overall, the pipeline of
Screen2Vec consists of two levels: the GUI component level (shown in the gray
shade) and the GUI screen level. We will first describe the approach at a
high-level here, and then explain the details in Section 2.2.
The GUI component level model encodes the textual content and the class type
of a GUI component into a 768-dimensional embedding vector (we chose 768
dimensions so that the vectors can be directly used with the 768-dimensional
vectors produced by the pre-trained Sentence-BERT model with its default
settings (Reimers and Gurevych, 2019)) to
represent the GUI component (e.g., a button, a textbox, a list entry etc.).
This GUI component embedding vector is computed with two inputs: (1) a
768-dimensional embedding vector of the text label of the GUI component,
encoded using a pre-trained Sentence-BERT (Reimers and Gurevych, 2019) model;
and (2) a 6-dimensional class embedding vector that represents the class type
of the GUI component, which we will discuss in detail later in Section 2.2.
The two embedding vectors are combined using a linear layer, resulting in the
768-dimensional GUI component embedding vector that represents the GUI
component. The class embeddings in the class type embedder and the weights in
the linear layer are optimized through training a Continuous Bag-of-Words
(CBOW) prediction task: for each GUI component on each screen, the task
predicts the current GUI component using its context (i.e., all the other GUI
components on the same screen). The training process optimizes the weights in
the class embeddings and the weights in the linear layer for combining the
text embedding and the class embedding.
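To make the combination step concrete, here is a minimal NumPy sketch of the component-level embedder, with random weights standing in for the trained class embedder and linear layer. All names and values here are illustrative, not taken from the released code, and the text vector would in practice come from Sentence-BERT:

```python
import numpy as np

rng = np.random.default_rng(0)

TEXT_DIM, CLASS_DIM, OUT_DIM, N_CLASSES = 768, 6, 768, 26

# Stand-ins for the trained parameters.
class_embeddings = rng.normal(size=(N_CLASSES, CLASS_DIM))   # torch.nn.Embedding analogue
W = rng.normal(size=(TEXT_DIM + CLASS_DIM, OUT_DIM)) * 0.01  # linear layer weights (774 x 768)
b = np.zeros(OUT_DIM)                                        # linear layer bias

def embed_component(text_vec, class_id):
    """Combine a 768-d text embedding and a 6-d class embedding into a
    768-d GUI component embedding via a single linear layer."""
    combined = np.concatenate([text_vec, class_embeddings[class_id]])  # 774-d
    return combined @ W + b                                            # 768-d

text_vec = rng.normal(size=TEXT_DIM)  # would come from Sentence-BERT
component_vec = embed_component(text_vec, class_id=3)
print(component_vec.shape)  # (768,)
```

During real training, `class_embeddings`, `W`, and `b` are the parameters optimized by the CBOW prediction loss.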
The GUI screen level model encodes the textual content, visual design and
layout patterns, and app context of a GUI screen into a 1536-dimensional
embedding vector. This GUI screen embedding vector is computed using three
inputs: (1) the collection of the GUI component embedding vectors for all the
GUI components on the screen (as described in the last paragraph), combined
into a 768-dimension vector using a recurrent neural network model (RNN),
which we will discuss more in Section 2.2; (2) a 64-dimensional layout
embedding vector that encodes the screen’s visual layout (details later in
Section 2.2); and (3) a 768-dimensional embedding vector of the textual App
Store description for the underlying app, encoded with a pre-trained Sentence-
BERT (Reimers and Gurevych, 2019) model. These GUI and layout vectors are
combined using a linear layer, resulting in a 768-dimensional vector. After
training, the description embedding vector is concatenated on, resulting in
the 1536-dimensional GUI screen embedding vector (the description is left out
of training because, if included, it would dominate the entire embedding,
overshadowing information specific to that screen within the app). The
weights in the RNN layer for
combining GUI component embeddings and the weights in the linear layer for
producing the final output vector are similarly trained on a CBOW prediction
task on a large number of interaction traces (each represented as a sequence
of screens). For each trace, a sliding window moves over the sequence of
screens. The model tries to use the representation of the context (the
surrounding screens) to predict the screen in the middle. See Section 2.2 for
more details.
However, unlike the GUI component level embedding model, the GUI screen level
model is trained on a screen prediction task over user interaction traces of
app usage: within each trace, the training task tries to predict the current
screen using the other screens in the same trace.
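The sliding-window CBOW setup over a trace can be sketched in plain Python; the window size and screen labels below are illustrative, not drawn from the Rico data:

```python
def cbow_pairs(trace, half_window=2):
    """Yield (context_screens, target_screen) pairs from one interaction trace.

    For each position, the target is the screen in the middle and the context
    is the surrounding screens inside the window, mirroring the CBOW setup
    used to train the screen-level model.
    """
    for i, target in enumerate(trace):
        lo, hi = max(0, i - half_window), min(len(trace), i + half_window + 1)
        context = trace[lo:i] + trace[i + 1:hi]
        if context:  # skip degenerate single-screen traces
            yield context, target

trace = ["login", "home", "search", "results", "detail"]
pairs = list(cbow_pairs(trace))
print(pairs[2])  # (['login', 'home', 'results', 'detail'], 'search')
```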
### 2.1. Dataset
We trained Screen2Vec on the open-sourced Rico dataset (Deka et al., 2017)
(available at: http://interactionmining.org/rico). The Rico
dataset contains interaction traces on 66,261 unique GUI screens from 9,384
free Android apps collected using a hybrid crowdsourcing plus automated
discovery approach. For each GUI screen, the Rico dataset includes a
screenshot image (that we did not use in Screen2Vec), and the screen’s “view
hierarchy” in a JSON file. The view hierarchy is structurally similar to a DOM
tree in HTML; it starts with a root view, and contains all its descents in a
tree. The node for each view includes the class type of this GUI component,
its textual content (if any), its location as the bounding box on the screen,
and various other properties such as whether it is clickable, focused, or
scrollable, etc. Each interaction trace is represented as a sequence of GUI
screens, as well as information about which (x, y) screen location was clicked
or swiped on to transit from the previous screen to the current screen.
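As an illustration, a Rico-style view hierarchy can be walked recursively to collect the leaf GUI components; the node shape used below (`class`, `text`, `bounds`, `children`) is a simplification of the actual Rico JSON schema:

```python
def collect_leaves(node):
    """Depth-first walk of a (simplified) view hierarchy, returning the
    class name, text, and bounding box of every leaf GUI component."""
    children = node.get("children") or []
    if not children:
        return [(node.get("class"), node.get("text"), node.get("bounds"))]
    leaves = []
    for child in children:
        leaves.extend(collect_leaves(child))
    return leaves

root = {
    "class": "LinearLayout",
    "children": [
        {"class": "TextView", "text": "Sign in", "bounds": [0, 0, 540, 120]},
        {"class": "EditText", "text": "", "bounds": [0, 140, 540, 260]},
    ],
}
print(collect_leaves(root))
```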
### 2.2. Models
This section explains the implementation details of each key step in the
pipeline shown in Figure 1.
#### GUI Class Type Embeddings
To represent the class types of GUI components, we trained a class embedder to
encode the class types into the vector space. We used a total of 26 class
categories: the 22 categories that were present in (Liu et al., 2018), one
layout category, list and drawer categories, and an “Other” category. We
classified the GUI component classes based on the classes of their className
properties and, sometimes, other simple heuristic rules (see Table 1). For
example, if a GUI component is an instance of EditText (i.e., its className
property is either EditText, or a class that inherits EditText), then it is
classified as an Input. There are two exceptions: the Drawer and the List Item
categories look at the className of the parent of the current GUI component
instead of the className of itself. A standard PyTorch embedder
(torch.nn.Embedding, https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html)
maps each of these 26 discrete categories into a continuous 6-dimensional
vector. The embedding vector value for each category is optimized during the
training process for the GUI component prediction tasks so that GUI components
categories that are semantically similar to each other are closer together in
the vector space.
GUI Component | Associated Class Type | GUI Component | Associated Class Type
---|---|---|---
Advertisement | AdView, HtmlBannerWebView, AdContainer | Layouts | LinearLayout, AppBarLayout, FrameLayout, RelativeLayout, TableLayout
Bottom Navigation | BottomTabGroupView, BottomBar | Button Bar | ButtonBar
Card | CardView | CheckBox | CheckBox, CheckedTextView
Drawer (Parent) | DrawerLayout | Date Picker | DatePicker
Image | ImageView | Image Button | ImageButton, GlyphView, AppCompatButton, AppCompatImageButton, ActionMenuItemView, ActionMenuItemPresenter
Input | EditText, SearchBoxView, AppCompatAutoCompleteTextView, TextView (a) | List Item (Parent) | ListView, RecyclerView, ListPopupWindow, TabItem, GridView
Map View | MapView | Multi-Tab | SlidingTab
Number Stepper | NumberPicker | On/Off Switch | Switch
Pager Indicator | ViewPagerIndicatorDots, PageIndicator, CircleIndicator, PagerIndicator | RadioButton | RadioButton, CheckedTextView
Slider | SeekBar | TextButton | Button (b), TextView (c)
Tool Bar | ToolBar, TitleBar, ActionBar | Video | VideoView
Web View | WebView | Drawer Item | Others category and ancestor is Drawer (Parent)
List Item | Others category and ancestor is List (Parent) | Others | ...
Notes: (a) the editable property needs to be TRUE; (b) the GUI component needs to have a non-empty text property; (c) the clickable property needs to be TRUE.
Table 1. The 26 categories (including the “Others” category) of GUI class
types we used in Screen2Vec and their associated base class names. Some
categories have additional heuristics, as shown in the notes. This
categorization is adapted from (Liu et al., 2018).
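The className-based rules can be approximated with suffix matching, as in the sketch below; it covers only a handful of Table 1 categories and omits the parent-based and property-based heuristics, so it is a simplified stand-in for the real classifier:

```python
# Partial mapping from base class names to Table 1 categories (illustrative subset).
# Order matters: more specific categories are checked first.
CLASS_RULES = [
    ("Input", ("EditText", "SearchBoxView")),
    ("Image Button", ("ImageButton", "GlyphView")),
    ("Image", ("ImageView",)),
    ("CheckBox", ("CheckBox", "CheckedTextView")),
    ("Layouts", ("LinearLayout", "FrameLayout", "RelativeLayout")),
]

def classify(class_name):
    """Return the Table 1 category for a className, or 'Others' if no rule
    matches. Suffix matching stands in for 'is an instance of / inherits'."""
    for category, bases in CLASS_RULES:
        if any(class_name.endswith(base) for base in bases):
            return category
    return "Others"

print(classify("AppCompatEditText"))     # Input
print(classify("AppCompatImageButton"))  # Image Button
print(classify("VideoView"))             # Others (not in this subset)
```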
#### GUI Component Context
As discussed earlier, Screen2Vec uses a Continuous Bag-of-Words (CBOW)
prediction task (Mikolov et al., 2013b) for training the weights in the model,
where for each GUI component, the model tries to predict it using its context.
In Screen2Vec, we define the context of a GUI component as its 16 nearest
components. The size 16 is chosen to balance the model performance and the
computational cost.
Inspired by prior work on the correlation between the semantic relatedness of
entities and the spatial distance between them (Li et al., 2014), we tried
using two different measures of screen distance for determining GUI component
context in our model: EUCLIDEAN, which is the straight-line minimal distance
on the screen (measured in pixels) between the bounding boxes of the two GUI
components; and HIERARCHICAL, which is the distance between the two GUI
components on the hierarchical GUI view tree. For example, a GUI component has
a distance of 1 to its parent and children and a distance of 2 to its direct
siblings.
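The EUCLIDEAN measure can be sketched as the standard minimal gap between two axis-aligned bounding boxes, from which the 16 nearest components form the context; the coordinates below are illustrative:

```python
import math

def rect_distance(a, b):
    """Minimal straight-line pixel distance between two bounding boxes,
    each given as (left, top, right, bottom); overlapping boxes give 0."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return math.hypot(dx, dy)

def context_of(target, components, k=16):
    """Return the k components nearest to `target`, excluding itself."""
    others = [c for c in components if c is not target]
    others.sort(key=lambda c: rect_distance(target["bounds"], c["bounds"]))
    return others[:k]

components = [
    {"id": "a", "bounds": (0, 0, 100, 50)},
    {"id": "b", "bounds": (0, 60, 100, 110)},  # 10 px below "a"
    {"id": "c", "bounds": (300, 0, 400, 50)},  # 200 px right of "a"
]
print([c["id"] for c in context_of(components[0], components, k=2)])  # ['b', 'c']
```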
#### Linear Layers
At the end of each of the two levels in the pipeline, a linear layer is used
to combine multiple vectors and shrink the combined vector into a lower-
dimension vector that contains the relevant semantic content of each input.
For example, in the GUI component embedding process, the model first
concatenates the 768-dimensional text embedding with the 6-dimensional class
embedding. The linear layer then shrinks the GUI component embedding back down
to 768 dimensions. The linear layer works by creating $774\times 768$ weights:
one per pair of input dimension and output dimension. These weights are
optimized along with other parameters during the training process, so as to
minimize the overall total loss (loss function detail in Section 2.3).
In the screen embedding process, a linear layer is similarly used for
combining the 64-dimensional layout embedding vector with the 768-dimensional
GUI content embedding vector to produce a new 768-dimensional embedding vector
that encodes both the screen content and the screen layout.
#### Text Embeddings
We use a pre-trained Sentence-BERT language model (Reimers and Gurevych, 2019)
to encode the text labels on each GUI component and the Google Play store
description for each app into 768-dimensional embedding vectors. This
Sentence-BERT model, which is a modified BERT network (Devlin et al., 2019),
was pre-trained on the SNLI (Bowman et al., 2015) dataset and the Multi-Genre
NLI (Williams et al., 2018) dataset with a mean-pooling strategy, as described
in (Reimers and Gurevych, 2019). This pre-trained model has been shown to
perform well in deriving semantically meaningful sentence and phrase
embeddings where semantically similar sentences and phrases are close to each
other in the vector space (Reimers and Gurevych, 2019).
Figure 2. Screen2Vec extracts the layout of a GUI screen as a bitmap, and
encodes this bitmap into a 64-dimensional vector using a standard autoencoder
architecture where the autoencoder is trained on the loss of the output of the
decoder (Deka et al., 2017).
#### Layout Embeddings
Another important step in the pipeline is to encode the visual layout pattern
of each screen. We use the layout embedding technique from (Deka et al.,
2017), where we first extract the layout of a screen from its screenshot using
the bounding boxes of all the leaf GUI components in the hierarchical GUI
tree, differentiating between text and non-text GUI components using different
colors (Figure 2). This layout image represents the layout of the GUI screen
while abstracting away its content and visual specifics. We then use an image
autoencoder to encode each image into a 64-dimensional embedding vector. The
autoencoder is trained using a typical encoder-decoder architecture, that is,
the weights of the network are optimized to produce the 64-dimensional vector
from the original input image that can produce the best reconstructed image
when decoded.
The encoder has an input dimension of 11,200, two hidden layers of sizes 2,048 and 256, and an output dimension of 64; that is, three linear layers of sizes $11{,}200\rightarrow 2{,}048$, $2{,}048\rightarrow 256$, and $256\rightarrow 64$. Each linear layer has the Rectified Linear Unit (ReLU) (Nair and Hinton, 2010) applied, i.e., its output is put through an activation function that maps any negative input to 0. The decoder has the reverse architecture: three linear layers with ReLU of sizes $64\rightarrow 256$, $256\rightarrow 2{,}048$, and $2{,}048\rightarrow 11{,}200$. The layout
autoencoder is trained on the process of reconstructing the input image when
it is run through the encoder and the decoder; the loss is determined by the
mean squared error (MSE) between the input of the encoder and the output of
the decoder.
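The encoder–decoder shape described above can be sketched in numpy as follows (the weights are random stand-ins for the learned parameters, and the forward pass is simplified to weight matrices without biases):

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# Encoder: 11,200 -> 2,048 -> 256 -> 64; the decoder mirrors it.
enc_dims = [11200, 2048, 256, 64]
dec_dims = enc_dims[::-1]

enc_W = [rng.standard_normal((i, o)) * 0.01 for i, o in zip(enc_dims, enc_dims[1:])]
dec_W = [rng.standard_normal((i, o)) * 0.01 for i, o in zip(dec_dims, dec_dims[1:])]

def encode(x):
    for W in enc_W:
        x = relu(x @ W)
    return x          # 64-dim layout embedding

def decode(z):
    for W in dec_W:
        z = relu(z @ W)
    return z          # reconstructed 11,200-pixel layout bitmap

x = rng.random(11200)              # flattened layout bitmap
z = encode(x)
x_hat = decode(z)
mse = np.mean((x - x_hat) ** 2)    # the training loss
print(z.shape, x_hat.shape)
```

Training minimizes `mse` over the dataset, so that the 64-dimensional bottleneck vector `z` retains enough information to reconstruct the layout bitmap.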
#### GUI Embedding Combining Layer
To combine the embedding vectors of multiple GUI components on a screen into a
single fixed-length embedding vector, we use a Recurrent Neural Network
(RNN): The RNN operates similarly to the linear layer mentioned earlier,
except it deals with sequential data (thus the “recurrent” in the name). The
RNN we used was a sequence of linear layers with the additional input of a
hidden state. The GUI component embeddings are fed into the RNN in the pre-
order traversal order of the GUI hierarchy tree. For the first input of GUI
component embedding, the hidden state was all zeros, but for the second input,
the output from the first serves as the hidden state, and so on, so that the
$n^{th}$ input is fed into a linear layer along with the $(n-1)^{th}$ output. The
overall output is the output for the final GUI component in the sequence,
which encodes parts of all of the GUI components, since the hidden states
could pass on that information. This allows screens with different numbers of
GUI components to have vector representations that both take all GUI
components into account _and_ are of the same size. This RNN is trained along
with all other parameters in the screen embedding model, optimizing for the
loss function (detail in Section 2.3) in the GUI screen prediction task.
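A minimal numpy sketch of this recurrent combination step (the tanh nonlinearity, the 768-dimensional hidden state, and the random weights are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 768  # GUI component embedding dimension (assumed)

# Each component embedding passes through a linear map together with the
# previous hidden state; the final output summarizes the whole screen.
W_in = rng.standard_normal((D, D)) * 0.01   # input weights (learned in reality)
W_h = rng.standard_normal((D, D)) * 0.01    # hidden-state weights (learned)

def combine_components(component_embs):
    h = np.zeros(D)               # hidden state starts at all zeros
    for x in component_embs:      # pre-order traversal of the GUI tree
        h = np.tanh(x @ W_in + h @ W_h)
    return h                      # same size regardless of component count

short_screen = rng.standard_normal((3, D))    # screen with 3 components
long_screen = rng.standard_normal((12, D))    # screen with 12 components
out_a = combine_components(short_screen)
out_b = combine_components(long_screen)
print(out_a.shape, out_b.shape)
```

Note how screens with different numbers of components both yield a fixed-size vector, which is the property the text emphasizes.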
### 2.3. Training Configurations
In the training process, we use 90% of the data for training and save the
other 10% for validation. The models are trained on a cross entropy loss
function with an Adam optimizer (Kingma and Ba, 2015), which is an adaptive
learning gradient-based optimization algorithm of stochastic objective
functions. For both stages, we use an initial learning rate of 0.001 and a
batch size of 256.
The GUI component embedding model takes about 120 epochs to train, while the
GUI screen embedding model takes 80–120 epochs depending on which version is being trained (the version without spatial information takes 80 epochs; the one with spatial information takes 120). A virtual machine with 2 NVIDIA
Tesla K80 GPUs can train the GUI component embedding model in about 72 hours,
and train the GUI screen embedding model in about 6-8 hours.
We used PyTorch’s implementation of the CrossEntropyLoss function (https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) to calculate the prediction loss. The CrossEntropyLoss function combines
negative log likelihood loss (NLL Loss) with the log softmax function:
$\displaystyle\mathrm{CrossEntropyLoss}(x,\mathit{class})$
$\displaystyle=\mathrm{NLL\_Loss}(\mathrm{logSoftmax}(x),\mathit{class})$
$\displaystyle=-\log\left(\frac{\exp(x[\mathit{class}])}{\sum\nolimits_{c}\exp(x[c])}\right)$
$\displaystyle=-x[\mathit{class}]+\log\sum\nolimits_{c}\exp(x[c])$
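The identity can be checked numerically with toy logits:

```python
import math

# Cross entropy of logits x for the correct class equals both the NLL of
# the log-softmax and the closed form -x[class] + log(sum_c exp(x[c])).
x = [2.0, 1.0, 0.1]   # toy logits over 3 classes
cls = 0               # index of the correct class

log_sum_exp = math.log(sum(math.exp(v) for v in x))
log_softmax = [v - log_sum_exp for v in x]
nll = -log_softmax[cls]
closed_form = -x[cls] + log_sum_exp

assert math.isclose(nll, closed_form)
print(round(nll, 3))
```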
In the case of the GUI component embedding model, the total loss is the sum of
the cross entropy loss for the text prediction and the cross entropy loss for
the class type prediction. In calculating the cross entropy loss, each text
prediction was compared to every possible text embedding in the vocabulary,
and each class prediction was compared to all possible class embeddings.
In the case of the GUI screen embedding model, the loss is exclusively for
screen predictions. However, the vector $x$ does not contain the similarity
between the correct prediction and every screen in the dataset. Instead we use
negative sampling (Mikolov et al., 2013b; Mikolov et al., 2013a) so that we do
not have to recalculate and update every screen’s embedding on every training
iteration, which is computationally expensive and prone to over-fitting. In
each iteration, the prediction is compared to the correct screen and to a sample of negative data that consists of: a random sampling of 128 other screens, the other screens in the batch, and the screens in the same trace as the correct screen. We specifically include the
screens in the same trace to promote screen-specific learning in this process:
This way, we can disincentivize screen embeddings that are based solely on the app (since the next screen is always within the same app and therefore shares an app description embedding, the prediction task otherwise favors having information about the specific app, i.e., the app store description embedding, dominate the embedding), and emphasize having the model learn to differentiate
the different screens within the same app.
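The assembly of the negative sample for one training iteration can be sketched as follows (IDs and set sizes are illustrative):

```python
import random

random.seed(0)

# One prediction is scored against the correct screen plus: 128 random
# screens, the other screens in the current batch, and the screens from
# the correct screen's own trace (to force within-app discrimination).
all_screen_ids = list(range(10_000))
correct = 42
batch = [correct, 7, 99, 512]        # screens in the current batch
same_trace = [40, 41, 42, 43]        # screens in the correct screen's trace

negatives = set(random.sample(all_screen_ids, 128))
negatives.update(batch)
negatives.update(same_trace)
negatives.discard(correct)           # the correct screen is the positive

candidates = [correct] + sorted(negatives)
# The model only scores the prediction against `candidates`, instead of
# against all 10,000 screens, in each iteration.
print(len(candidates))
```

This keeps each training step cheap: only the candidate screens' embeddings need to be compared and updated, not the whole dataset's.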
### 2.4. Baselines
We compared Screen2Vec to the following three baseline models:
#### Text Embedding Only
The TextOnly model replicates the screen embedding method used in Sovite (Li
et al., 2020b). It only looks at the textual content on the screen: the screen
embedding vector is computed by averaging the text embedding vectors for all
the text found on the screen. The pre-trained Sentence-BERT model (Reimers and
Gurevych, 2019) calculates the text embedding vector for each text. With the TextOnly model, screens with semantically similar textual contexts will
have similar embedding vectors.
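The averaging step is simple element-wise arithmetic; a toy sketch (4-dimensional vectors stand in for the real 768-dimensional Sentence-BERT outputs, and the texts are hypothetical):

```python
# TextOnly baseline: the screen embedding is the element-wise average of
# the embedding vectors of all texts found on the screen.
text_embs = [
    [0.1, 0.2, 0.3, 0.4],   # e.g., "Sign in"
    [0.5, 0.6, 0.7, 0.8],   # e.g., "Forgot password?"
    [0.9, 1.0, 1.1, 1.2],   # e.g., "Create account"
]
dim = len(text_embs[0])
screen_emb = [sum(v[i] for v in text_embs) / len(text_embs) for i in range(dim)]
print(len(screen_emb))
```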
#### Layout Embedding Only
The LayoutOnly model replicates the screen embedding method used in the
original Rico paper (Deka et al., 2017). It only looks at the visual layout of
the screen: It uses the layout embedding vector computed by the layout
autoencoder to represent the screen, as discussed in Section 2.2. With the
LayoutOnly model, screens with similar layouts will have similar embedding
vectors.
#### Visual Embedding Only
The VisualOnly model encodes the visual look of a screen by applying an
autoencoder (described in Section 2.2) directly on its screenshot image bitmap
instead of the layout bitmap. This baseline is inspired by the visual-based
approach used in GUI task automation systems such as VASTA (Sereshkeh et al.,
2020), Sikuli (Yeh et al., 2009), and HILC (Intharah et al., 2019). With the
VisualOnly model, screens that are visually similar will have similar
embedding vectors.
### 2.5. Prediction Task Results
We report the performance on the GUI component and GUI screen prediction tasks
of the Screen2Vec model, as well as the GUI screen prediction performance for
the baseline models described above.
Table 2 shows the top-1 accuracy (i.e., the top predicted GUI component
matches the correct one), the top-0.01% accuracy (i.e., the correct GUI
component is among the top 0.01% in the prediction result), the top-0.1%
accuracy, and the top-1% accuracy of the two variations of the Screen2Vec
model on the GUI component prediction task, where the model tries to predict
the text content for each GUI component in all the GUI screens in the Rico
dataset using its context (the other GUI components around it) among the
collection of all the GUI components in the Rico dataset.
Model | Top-1 Accuracy | Top 0.01% Accuracy | Top 0.1% Accuracy | Top 1% Accuracy | Top 5% Accuracy | Top 10% Accuracy
---|---|---|---|---|---|---
Screen2Vec-EUCLIDEAN-text | 0.443 | 0.619 | 0.783 | 0.856 | 0.885 | 0.901
Screen2Vec-HIERARCHICAL-text | 0.588 | 0.687 | 0.798 | 0.849 | 0.878 | 0.894
Table 2. The GUI component prediction performance of the two variations of the
Screen2Vec model with two different distance measures (EUCLIDEAN and
HIERARCHICAL).
Similarly, Table 3 reports the accuracy of the Screen2Vec model and the
baseline models (TextOnly, LayoutOnly, and VisualOnly) on the task of
predicting GUI screens, where each model tries to predict each GUI screen in
all the GUI interaction traces in the Rico dataset using its context (the
other GUI screens around it in the trace) among the collection of all the GUI
screens in the Rico dataset. For the Screen2Vec model, we compare three
versions: one that encodes the locations of GUI components and the screen
layouts and uses the EUCLIDEAN distance measure, one that uses such spatial
information and the HIERARCHICAL distance measure, and one that uses the
EUCLIDEAN distance measure without considering spatial information. A higher
accuracy indicates that the model is better at predicting the correct
screen.
We also report the normalized root mean square error (RMSE) of the predicted
screen embedding vector for each model, normalized by the mean length of the
actual screen embedding vectors. A smaller RMSE indicates that the top
prediction screen generated by the model is, on average, more similar to the
correct screen.
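The metric, as we read it from the description above, can be computed as follows (random vectors stand in for real screen embeddings):

```python
import numpy as np

rng = np.random.default_rng(4)

# Normalized RMSE: RMSE between predicted and actual screen embeddings,
# divided by the mean L2 length of the actual embeddings so that the
# error is independent of the embeddings' overall scale.
actual = rng.standard_normal((100, 768))
predicted = actual + 0.1 * rng.standard_normal((100, 768))  # small error

rmse = np.sqrt(np.mean((predicted - actual) ** 2))
normalized_rmse = rmse / np.mean(np.linalg.norm(actual, axis=1))
print(normalized_rmse < 1.0)
```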
From the results in Table 3, we can see that the Screen2Vec models perform
better than the baseline models in top-1 and top-k prediction accuracy. Among
the different versions of Screen2Vec, the versions that encode the locations of GUI components and the screen layouts perform better than the one without
spatial information, suggesting that such spatial information is useful. The
model that uses the HIERARCHICAL distance performs similarly to the one that
uses the EUCLIDEAN distance in GUI component prediction, but performs worse in
screen prediction. In the Sample Downstream Tasks section below, we will use
the Screen2Vec-EUCLIDEAN-spatial info version of the Screen2Vec model.
As we can see, adding spatial information dramatically improves the Top-1
accuracy and the Top-0.01% accuracy. However, the improvements in Top 0.1%
accuracy, Top 1% accuracy, and normalized RMSE are smaller. We think the main
reason is that aggregating the textual information, GUI class types, and app
descriptions is useful for representing the high-level “topic” of a screen
(e.g., a screen is about hotel booking because its text and app descriptions
talk about hotels, cities, dates, rooms etc.), hence the good top 0.1% and 1%
accuracy and normalized RMSE for the “no spatial info” model. But these types
of information are not sufficient for reliably differentiating the different
types of screens needed (e.g., search, room details, order confirmation) in
the hotel booking process because all these screens in the same app and task
domain would contain “semantically similar” text. This is why adding
spatial information is helpful in identifying the top-1 and top-0.01% results.
Interestingly, the baseline models beat the “no spatial info” version of
Screen2Vec in normalized RMSE: i.e., although the baseline models are less
likely to predict the correct screen, their predicted screens are, on average,
more similar to the correct screen. A likely explanation to this phenomenon is
that both baseline models use, by nature, similarity-based measures, while the
Screen2Vec model is trained on a prediction-focused loss function. Therefore
Screen2Vec does not emphasize making more similar predictions when the
prediction is incorrect. However, we can see that the spatial info versions of
Screen2Vec perform better than the baseline models on both the prediction
accuracy and the similarity measure.
Model | Top-1 Accuracy | Top 0.01% Accuracy | Top 0.1% Accuracy | Top 1% Accuracy | Top 5% Accuracy | Normalized RMSE
---|---|---|---|---|---|---
Screen2Vec-EUCLIDEAN-spatial info | 0.061 | 0.258 | 0.969 | 0.998 | 1.00 | 0.853
Screen2Vec-HIERARCHICAL-spatial info | 0.052 | 0.178 | 0.646 | 0.924 | 0.990 | 0.997
Screen2Vec-EUCLIDEAN-no spatial info | 0.0065 | 0.116 | 0.896 | 0.986 | 0.999 | 1.723
TextOnly | 0.012 | 0.055 | 0.196 | 0.439 | 0.643 | 1.241
LayoutOnly | 0.0041 | 0.024 | 0.091 | 0.222 | 0.395 | 1.135
VisualOnly | 0.0060 | 0.026 | 0.121 | 0.252 | 0.603 | 1.543
Table 3. The GUI screen prediction performance of the three variations of the
Screen2Vec model and the baseline models (TextOnly, LayoutOnly, and
VisualOnly).
## 3\. Sample Downstream Tasks
Note that while the accuracy measures are indicative of how much the model has
learned about GUI screens and components, the main purpose of the Screen2Vec
model is not to predict GUI components or screens, but to produce distributed
vector representations for them that encode useful semantic, layout, and
design properties. Therefore this section presents several sample downstream
tasks to illustrate important properties of the Screen2Vec representations and
the usefulness of our approach.
Figure 3. The interface shown to the Mechanical Turk workers for rating the
similarities for the nearest neighbor results generated by different models.
### 3.1. Nearest Neighbors
The nearest neighbor task is useful for data-driven design, where the
designers want to find examples for inspiration and for understanding the
possible design solutions (Deka et al., 2017). The task focuses on the
similarity between GUI screen embeddings: for a given screen, what are the
top-N most similar screens in the dataset? A similar technique can also be
used for unsupervised clustering in the dataset to infer different types of
GUI screens. In our context, this task also helps demonstrate the different
characteristics between Screen2Vec and the three baseline models.
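A top-N nearest neighbor query over screen embeddings can be sketched as below; cosine similarity is used here for illustration (Euclidean distance works the same way, and the random vectors stand in for real Screen2Vec embeddings):

```python
import numpy as np

rng = np.random.default_rng(5)

embeddings = rng.standard_normal((1000, 768))   # all screen embeddings
query_id = 0
query = embeddings[query_id]

# Cosine similarity: normalize, then take dot products with the query.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
sims = normed @ (query / np.linalg.norm(query))

ranked = np.argsort(-sims)          # most similar screens first
top5 = ranked[1:6]                  # skip the query screen itself
print(len(top5))
```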
We conducted a Mechanical Turk study to compare the similarity between the
nearest neighbor results generated by the different models. We selected 50
screens from apps and app domains that most users are familiar with. We did
not select random apps from the Rico dataset, as many apps in the dataset
would be obscure to Mechanical Turk workers so they might not understand them
and therefore might not be able to judge the similarity of the results. For
each screen, we retrieved the top-5 most similar screens using each of the 3
models. Therefore, each of the 50 screens had up to 3 (models) $\times$ 5
(screens each) = 15 similar screens, but many had fewer since different models
may select the same screens.
79 Mechanical Turk workers participated in this study (the protocol was approved by the IRB at our institution). In total, they labeled the similarity
between 5,608 pairs of screens. Each worker was paid $2 for each batch of 5
sets of source screens they labeled. A batch on average takes around 10
minutes to complete. In each batch, a worker went through a sample of 5 source
screens from the 50 source screens in random order, where for each source
screen, the worker saw the union of the top-5 most similar screens to the
source screen generated by the 3 models in random order. For each screen, we
also showed the worker the app it came from and a short description of the app
from the Google Play Store, but we did not show them which model produced the
screen. The worker was asked to rate the similarity of each screen to the
original source screen on a scale of 1 to 5 (Figure 3). We asked the workers
to consider 3 aspects in measuring similarity: (1) app similarity (how similar
are the two apps); (2) screen type similarity (how similar are the types of
the two screens e.g., if they are both sign up screens, search results,
settings menu etc.); and (3) content similarity (how similar are the content
on the two screens).
Table 4 shows the mean screen similarity rated by the Mechanical Turk workers
for the top-5 nearest neighbor results of the sample source screens generated
by the 3 models. The Mechanical Turk workers rated the nearest neighbor
screens generated by the Screen2Vec model to be, on average, more similar to
their source screens than the nearest neighbor screens generated by the
baseline TextOnly and LayoutOnly models. Tested with a non-parametric Mann-
Whitney U test (because the ratings are not normally distributed), the
differences between the mean ratings of the Screen2Vec model and both the
TextOnly model and the LayoutOnly model are significant ($p<0.0001$).
Screen2Vec | TextOnly | LayoutOnly
---|---|---
Mean Rating | Std. Dev. | Mean Rating | Std. Dev. | Mean Rating | Std. Dev.
3.295* | 1.238 | 3.014* | 1.321 | 2.410* | 1.360
Table 4. The mean screen similarity rated by the Mechanical Turk workers for
the top-5 nearest neighbor results of the sample source screens generated by
the 3 models: Screen2Vec, TextOnly, and LayoutOnly (*$p<0.0001$).
Figure 4. The example nearest neighbor results for the Lyft “request ride”
screen generated by the Screen2Vec, TextOnly, and LayoutOnly models.
Subjectively, when looking at the nearest neighbor results, we can see the
different aspects of the GUI screens that each different model captures.
Screen2Vec can create more comprehensive representations that encode the
textual content, visual design and layout patterns, and app contexts of the
screen compared with the baseline models, which only capture one or two
aspects. For example, Figure 4 shows the example nearest neighbor results for
the “request ride” screen in the Lyft app. The Screen2Vec model retrieves the “get direction” screen in the Uber Driver app, the “select navigation type” screen in
the Waze app, and “request ride” screen in the Free Now (My Taxi) app.
Considering the visual and component layout aspects, the result screens all
feature a menu/information card at the bottom 1/3 to 1/4 of the screen, with a
MapView taking the majority of the screen space. Considering the content and
app domain aspects, all of these screens are from transportation-related apps
that allow the user to configure a trip. In comparison, the TextOnly model
retrieves the “request ride” screen from the zTrip app, the “main menu” screen
from the Hailo app (both zTrip and Hailo are taxi hailing apps), and the home
screen of the Paytm app (a mobile payment app in India). The commonality of
these screens is that they all include text strings that are semantically
similar to “payment” (e.g., add payment type, wallet, pay, add money), and
strings that are semantically similar to “destination” and “trips” (e.g., drop
off location, trips, bus, flights). But the model did not consider the visual
layout and design patterns of the screens nor the app context. Therefore the
result contains the “main menu” (a quite different type of screen) in the
Hailo app and the “home screen” in the Paytm app (a quite different type of
screen in a different type of app). The LayoutOnly model, on the other hand,
retrieves the “exercise logging” screens from the Map My Walk app and the Map
My Ride app, and the tutorial screen from the Clever Dialer app. We can see
that the content and app-context similarity of the result of the LayoutOnly
model is considerably lower than those of the Screen2Vec and TextOnly models.
However, the result screens all share similar layout features as the source
screen, such as the menu/information card at the bottom of the screen and the
screen-wide button at the bottom of the menu.
Figure 5. An example showing the composability of Screen2Vec embeddings: running the nearest neighbor query on the composite embedding of the Marriott app’s hotel booking page $+$ (the Cheapoair app’s search result page $-$ the Cheapoair app’s hotel booking page) can match the Marriott app’s search result page and the similar pages of a few other travel apps.
### 3.2. Embedding Composability
A useful property of embeddings is that they are composable—meaning that we
can add, subtract, and average embeddings to form a meaningful new one. This
property is commonly used in word embeddings. For example, in Word2Vec,
analogies such as “man is to woman as brother is to sister” are reflected in the fact that the vector $(man-woman)$ is similar to the vector $(brother-sister)$.
Besides representing analogies, this embedding composability can also be
utilized for generative purposes—for example, $(brother-man+woman)$ results in
an embedding vector that represents “sister”.
This property is also useful in screen embeddings. For example, we can run a
nearest neighbor query on the composite embedding of (Marriott app’s “hotel
booking” screen $+$ (Cheapoair app’s “search result” screen $-$ Cheapoair
app’s “hotel booking” screen)). The top result is the “search result” screen
in the Marriott app (see Figure 5). When we filter the result to focus on
screens from apps other than Marriott, we get screens that show list results
of items from other travel-related mobile apps such as Booking, Last Minute
Travel, and Caesars Rewards.
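This composite query is plain vector arithmetic followed by a nearest neighbor search; a sketch with random stand-ins for the real screen embeddings:

```python
import numpy as np

rng = np.random.default_rng(6)

# 500 hypothetical screen embeddings; three of them play the roles of the
# screens named in the example above.
screens = rng.standard_normal((500, 768))
marriott_booking = screens[0]
cheapoair_search = screens[1]
cheapoair_booking = screens[2]

# "Marriott booking + (Cheapoair search - Cheapoair booking)"
composite = marriott_booking + (cheapoair_search - cheapoair_booking)

# Nearest neighbor of the composite vector (Euclidean distance here).
distances = np.linalg.norm(screens - composite, axis=1)
nearest = int(np.argmin(distances))
print(nearest)
```

With real Screen2Vec embeddings, `nearest` would be the Marriott “search result” screen, as described in the text.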
The composability can make Screen2Vec particularly useful for GUI design
purposes—the designer can leverage the composability to find inspiring
examples of GUI designs and layouts. We will discuss more about its potential
applications in Section 4.
### 3.3. Screen Embedding Sequences for Representing Mobile Tasks
GUI screens are not only useful data sources individually on their own, but
also as building blocks to represent a user’s task. A task in an app, or
across multiple apps, can be represented as a sequence of GUI screens that
makes up the user interaction trace for performing this task using app GUIs.
In this section, we conduct a preliminary evaluation on the effectiveness of
embedding mobile tasks as sequences of Screen2Vec screen embedding vectors.
Similar to GUI screens and components, the goal of embedding mobile tasks is
to represent them in a vector space where more similar tasks are closer to
each other. To test this, we recorded the scripts of completing 10 common
smartphone tasks, each with two variations that use different apps, using our
open-sourced Sugilite (Li et al., 2017) system on a Pixel 2 XL phone running
Android 8.0. Each script consists of a sequence of “perform action X (e.g.,
click, long click) on the GUI component Y in the GUI screen Z”. In this
preliminary evaluation, we only used the screen context: we represented each
task as the average of the Screen2Vec screen embedding vectors for all the
screens in the task sequence.
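The averaging scheme and the effect it should have can be sketched as follows (random vectors simulate the screen embeddings, and the second trace is a noisy copy of the first to mimic the same task in a different app):

```python
import numpy as np

rng = np.random.default_rng(7)

# Each task trace is the average of its screens' embedding vectors.
lyft_trace = rng.standard_normal((3, 768))
uber_trace = lyft_trace + 0.05 * rng.standard_normal((3, 768))  # same task
banking_trace = rng.standard_normal((4, 768))                   # other task

tasks = [trace.mean(axis=0) for trace in (lyft_trace, uber_trace, banking_trace)]
d_same = np.linalg.norm(tasks[0] - tasks[1])   # Lyft vs. Uber task
d_diff = np.linalg.norm(tasks[0] - tasks[2])   # Lyft vs. banking task
print(d_same < d_diff)
```

In the vector space, the two variations of the same task end up closer than unrelated tasks, which is exactly what the nearest-neighbor test below measures.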
Table 5 shows the 10 tasks we tested on, the two apps used for each task, and
the number of unique GUI screens in each trace used for task embedding. We
queried for the nearest neighbor within the 20 task variations for each task
variation, and checked if the model could correctly identify the similar task
that used a different app. The Screen2Vec model achieved an 18/20 (90%)
accuracy in this test. In comparison, when we used the TextOnly model for task
embedding, the accuracy was 14/20 (70%).
Task Description | App 1 | Screen Count | App 2 | Screen Count
---|---|---|---|---
Request a cab | Lyft | 3 | Uber | 2
Book a flight | Fly Delta | 4 | United Airlines | 4
Make a hotel reservation | Booking.com | 7 | Expedia | 7
Buy a movie ticket | AMC Theaters | 3 | Cinemark | 4
Check the account balance | Chase | 4 | American Express | 3
Check sports scores | ESPN | 4 | Yahoo! Sports | 4
Look up the hourly weather | AccuWeather | 3 | Yahoo! Weather | 3
Find a restaurant | Yelp | 3 | Zagat | 4
Order an iced coffee | Starbucks | 7 | Dunkin’ Donuts | 8
Order takeout food | GrubHub | 4 | Uber Eats | 3
Table 5. A list of 10 tasks we used for the preliminary evaluation of using
Screen2Vec for task embedding, along with the apps used and the count of
screens used in the task embedding for each variation.
While the task embedding method we explored in this section is quite
primitive, it illustrates that the Screen2Vec technique can be used to
effectively encode mobile tasks into the vector space where semantically
similar tasks are close to each other. For the next steps, we plan to further
explore this direction. For example, the current method of averaging all the
screen embedding vectors does not consider the order of the screens in the
sequence. In the future, we may collect a dataset of human annotations of task
similarity, and use techniques that can encode the sequences of items, such as
recurrent neural networks (RNN) and long short-term memory (LSTM) networks, to
create the task embeddings from sequences of screen embeddings. We may also
incorporate the Screen2Vec embeddings of the GUI components that were
interacted with (e.g., the button that was clicked on) to initiate the screen
change into the pipeline for embedding tasks.
## 4\. Potential Applications
This section describes several potential applications where the new Screen2Vec
technique can be useful based on the downstream tasks described in Section 3.
Screen2Vec can enable new GUI design aids that take advantage of the nearest
neighbor similarity and composability of Screen2Vec embeddings. Prior work
(Deka et al., 2017; Kumar et al., 2013; Huang et al., 2019) has shown that
data-driven tools that enable designers to curate design examples are useful
for interface designers. Unlike (Deka et al., 2017), which uses a content-
agnostic approach that focuses on the visual and layout similarities,
Screen2Vec considers the textual content and app meta-data in addition to the
visual and layout patterns, often leading to different nearest neighbor
results as discussed in Section 3.1. This new type of similarity result will
also be useful when focusing on interface design beyond just visual and layout
issues, as the results enable designers to query for example designs that
display similar content or screens that are used in apps in a similar domain.
The composability in Screen2Vec embeddings enables querying for design
examples at a finer granularity. For example, suppose a designer wishes to
find examples for inspiring the design of a new checkout page for app A. They
may query for the nearest neighbors of the synthesized embedding App A’s order
page $+$ (App B’s checkout page $-$ App B’s order page). Compared with only
querying for the nearest neighbors of App B’s checkout page, this synthesized
query encodes the interaction context (i.e., the desired page should be the
checkout page for App A’s order page) in addition to the “checkout” semantics.
The Screen2Vec embeddings can also be useful in generative GUI models. Recent
models such as the neural design network (NDN) (Lee et al., 2020b) and
LayoutGAN (Li et al., 2019) can generate realistic GUI layouts based on user-
specified constraints (e.g., alignments, relative positions between GUI
components). Screen2Vec can be used in these generative approaches to
incorporate the semantics of GUIs and the contexts of how each GUI screen and
component gets used in user interactions. For example, the GUI component
prediction model can estimate the likelihood of each GUI component given the
context of the other components in a generated screen, providing a heuristic
of how likely the GUI components would fit well with each other. Similarly,
the GUI screen prediction model may be used as a heuristic to synthesize GUI
screens that would better fit with the other screens in the planned user
interaction flows. Since Screen2Vec has been shown effective in representing
mobile tasks in Section 3.3, where similar tasks will yield similar
embeddings, one may also use the task embeddings of performing the same task
on an existing app to inform the generation of new screen designs. The
embedding vector form of Screen2Vec representations would make them
particularly suitable for use in the recent neural-network based generative
models.
Screen2Vec’s capability of embedding tasks can also enhance interactive task
learning systems. Specifically, Screen2Vec may be used to enable more powerful
procedure generalizations of the learned tasks. We have shown that the
Screen2Vec model can effectively predict screens in an interaction trace.
Results in Section 3.3 also indicated that Screen2Vec can embed mobile tasks
so that the interaction traces of completing the same task in different apps
will be similar to each other in the embedding vector space. Therefore, it is
quite promising that Screen2Vec may be used to generalize a task learned from
the user by demonstration in one app to another app in the same domain (e.g.,
generalizing the procedure of ordering coffee in the Starbucks app to the
Dunkin’ Donuts app). In the future, we plan to further explore this direction
by incorporating Screen2Vec into open-sourced mobile interactive task learning
agents such as our Sugilite system (Li et al., 2017).
## 5\. Limitations and Future Work
There are several limitations of our work in Screen2Vec. First, Screen2Vec has
only been trained and tested on Android app GUIs. However, the approach used
in Screen2Vec should apply to any GUI-based apps with hierarchical-based
structures (e.g., view hierarchies in iOS apps and hierarchical DOM structures
in web apps). We expect embedding desktop GUIs to be more difficult than
mobile ones, because individual screens in desktop GUIs are usually more
complex with more heterogeneous design and layout patterns.
Second, the Rico dataset we use only contains interaction traces within single
apps. The approach used in Screen2Vec should generalize to interaction traces
across multiple apps. We plan to evaluate its prediction performance on cross-
app traces in the future with an expanded dataset of GUI interaction traces.
The Rico dataset also does not contain screens from paid apps, screens that
require special accounts/privileges to access (screens that require free
accounts to access are included when the account registration is readily
available in the app), or screens that require special hardware (e.g., in the
companion apps for smart home devices) or specific context (e.g., pages that
are only shown during events) to access. This limitation of the Rico dataset
might affect the performance of the pre-trained Screen2Vec model on these
underrepresented types of app screens.
A third limitation is that the current version of Screen2Vec does not encode
the semantics of graphic icons that have no textual information.
Accessibility-compliant apps all have alternative texts for their graphic
icons, which Screen2Vec already encodes in its GUI screen and component
embeddings as a part of the text embedding. However, for non-accessible apps,
computer vision-based (e.g., (Chen et al., 2020; Liu et al., 2018)) or crowd-
based (e.g., (Zhang et al., 2017)) techniques can be helpful for generating
textual annotations for graphic icons so that their semantics can be
represented in Screen2Vec. Another potentially useful kind of information is
the rules and examples in GUI design systems (e.g., Android Material Design,
iOS Design Patterns). While Screen2Vec can, in some ways, “learn” these
patterns from the training data, it will be interesting to explore a hybrid
approach that can leverage their explicit notions. We will explore
incorporating these techniques into the Screen2Vec pipeline in the future.
## 6\. Related Work
### 6.1. Distributed Representations of Natural Language
The study of representing words, phrases, and documents as mathematical
objects, often vectors, is central to natural language processing (NLP)
research (Turian et al., 2010; Mikolov et al., 2013b). Conventional non-
distributed word embedding methods represent a word using a one-hot
representation where the vector length equals the size of the vocabulary, and
only one dimension (that corresponds to the word) is on (Turian et al., 2010).
This representation does not encode the semantics of the words, as the vector
for each word is perpendicular to the others. Documents represented using a
one-hot word representation also suffer from the curse of dimensionality
(Bellman, 1966) as a result of the extreme sparsity in the representation.
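The orthogonality and sparsity described above can be made concrete in a short sketch (the toy vocabulary is illustrative; real vocabularies have tens of thousands of entries):

```python
import numpy as np

# A tiny illustrative vocabulary.
vocab = ["bank", "river", "money", "withdraw"]

def one_hot(word, vocab):
    """Return the one-hot vector for `word`: all zeros except a single 1."""
    v = np.zeros(len(vocab))
    v[vocab.index(word)] = 1.0
    return v

# Every pair of distinct words is perpendicular (dot product 0), so this
# representation carries no notion of semantic similarity between words.
print(np.dot(one_hot("bank", vocab), one_hot("river", vocab)))  # 0.0
print(np.dot(one_hot("bank", vocab), one_hot("bank", vocab)))   # 1.0
```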
By contrast, a distributed representation of a word represents the word across
multiple dimensions in a continuous-valued vector (word embedding) (Bengio,
2009). Such distributed representations can capture useful syntactic and
semantic properties of the words, where syntactically and semantically related
words are similar in this vector space (Turian et al., 2010). Modern word
embedding approaches usually use the language modeling task. For example,
Word2Vec (Mikolov et al., 2013b) learns the embedding of a word by predicting
it based on its context (i.e., surrounding words), or predicting the context
of a word given the word itself. GloVe (Pennington et al., 2014) is similar to
Word2Vec on a high level, but focuses on the likelihood that each word appears
in the context of other words within the whole corpus of text, as opposed to
Word2Vec, which uses local contexts. More recent work such as ELMo (Peters et
al., 2018) and BERT (Devlin et al., 2019) allow contextualized embeddings.
That is, the representation of a word can vary depending on its context,
handling polysemy (i.e., the capacity for a word or phrase to have multiple
meanings). For example, the word “bank” has different meanings in “he
withdrew money from the bank” versus “the river bank”.
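The CBOW idea behind Word2Vec can be sketched with toy vectors: average the context embeddings and predict the word whose embedding is closest to that average. The hand-picked 3-dimensional embeddings below stand in for vectors a real model would learn from data:

```python
import numpy as np

# Toy embeddings; a trained Word2Vec model learns these from a corpus.
emb = {
    "withdrew": np.array([0.9, 0.1, 0.0]),
    "money":    np.array([0.8, 0.2, 0.1]),
    "bank":     np.array([0.7, 0.3, 0.0]),
    "river":    np.array([0.0, 0.2, 0.9]),
}

def cbow_predict(context, emb):
    """Predict the word whose embedding is most similar (cosine) to the
    average of the context embeddings -- the CBOW objective in miniature."""
    avg = np.mean([emb[w] for w in context], axis=0)
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max((w for w in emb if w not in context), key=lambda w: cos(avg, emb[w]))

print(cbow_predict(["withdrew", "money"], emb))  # -> bank
```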
While distributed representations are commonly used in natural language
processing, to our best knowledge, the Screen2Vec approach presented in this
paper is the first to seek to encode the semantics, the contexts, and the
design patterns of GUI screens and components using distributed
representations. The Screen2Vec approach is conceptually similar to Word2Vec
on a high level—like Word2Vec, Screen2Vec is trained using a predictive
modeling task where the context of a target entity (words in Word2Vec, GUI
components and screens in Screen2Vec) is used to predict the entity (known as
the continuous bag of words (CBOW) model in Word2Vec). There are also other
relevant Word2Vec-like approaches for embedding APIs based on their usage in
source code and software documentation (e.g., API2Vec (Nguyen et al., 2017)),
and modeling the relationships between user tasks, system commands, and
natural language descriptions in the same vector space (e.g., CommandSpace
(Adar et al., 2014)).
Besides the domain difference between our Screen2Vec model and Word2Vec and
its follow-up work, Screen2Vec uses both a (pre-trained) text embedding vector
and a class type vector, and combines them with a linear layer. It also
incorporates external app-specific meta-data such as the app store
description. The hierarchical approach allows Screen2Vec to compute a screen
embedding with the embeddings of the screen’s GUI components, as described in
Section 2. In comparison, Word2Vec only computes word embeddings using word
contexts without using any other meta-data (Mikolov et al., 2013b).
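The fusion step described above can be sketched as follows; the dimensions, the random weights, and the stand-in embeddings are illustrative, not the trained Screen2Vec parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

TEXT_DIM, CLASS_DIM, OUT_DIM = 8, 4, 6  # illustrative sizes, not the paper's

# Stand-ins for a pre-trained text embedding and a learned class-type embedding.
text_emb  = rng.normal(size=TEXT_DIM)   # e.g. sentence embedding of the label text
class_emb = rng.normal(size=CLASS_DIM)  # e.g. embedding of a "Button" class type

# The linear layer that fuses the two vectors; in Screen2Vec these weights
# are learned via the CBOW-style prediction task, here they are random.
W = rng.normal(size=(OUT_DIM, TEXT_DIM + CLASS_DIM))
b = rng.normal(size=OUT_DIM)

component_embedding = W @ np.concatenate([text_emb, class_emb]) + b
print(component_embedding.shape)  # (6,)
```

Screen embeddings are then built hierarchically from such component embeddings together with layout and app meta-data, as described in Section 2.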
### 6.2. Modeling GUI Interactions
Screen2Vec is related to prior research on computationally modeling app GUIs
and the GUI interactions of users. The interaction mining approach (Deka et
al., 2016) captures the static (UI layout, visual features) and dynamic (user
flows) parts of an app’s design from a large corpus of user interaction traces
with mobile apps, identifies 23 common flow types (e.g., adding, searching,
composing), and can classify the user’s GUI interactions into these flow
types. A similar approach was also used to learn the design semantics of
mobile apps, classifying GUI elements into 25 types of GUI components, 197
types of text buttons, and 135 types of icon classes (Liu et al., 2018).
Appstract (Fernandes et al., 2016) focused on the semantic entities (e.g.,
music, movie, places) instead, extracting entities, their properties, and
relevant actions from mobile app GUIs. These approaches use a smaller number
of discrete types of flows, GUI elements, and entities to represent GUI
screens and their components, while our Screen2Vec uses continuous embedding
in a vector space for screen representation.
Some prior techniques specifically focus on the visual aspect of GUIs. The
Rico dataset (Deka et al., 2017) shows that it is feasible to train a GUI
layout embedding with a large screen corpus, and retrieve screens with similar
layouts using such embeddings. Chen et al.’s work (Chen et al., 2020) and Li
et al.’s work (Li et al., 2020d) show that a model can predict semantically
meaningful alt-text labels for GUI components based on their visual icon.
Screen2Vec provides a more holistic representation of GUI screens by encoding
textual content, GUI component class types, and app-specific meta-data in
addition to the visual layout.
Another category of work in this area focuses on predicting GUI actions for
completing a task objective. Pasupat et al.’s work (Pasupat et al., 2018) maps
the user’s natural language commands to target elements on web GUIs. Li et
al.’s work (Li et al., 2020c) goes a step further by generating sequences of
actions based on natural language commands. These works use a supervised
approach that requires a large amount of manually-annotated training data,
which limits their applicability. In comparison, Screen2Vec uses a self-supervised
approach that does not require any manual data annotation of user intents and
tasks. Screen2Vec also does not require any annotations of the GUI screens
themselves, unlike (Zhang et al., 2018) which requires additional developer
annotations as meta-data for GUI components.
### 6.3. Interactive Task Learning
Understanding and representing GUIs is a central challenge in GUI-based
interactive task learning (ITL). When the user demonstrates a task in an app,
the system needs to understand the user’s action in the context of the
underlying app GUIs so that it can generalize what it has learned to future
task contexts (Li et al., 2018). For example, Sugilite represents each app
screen as a graph where each GUI component is an entity (Li et al., 2020e).
Properties of GUI components, their hierarchical relations, and the spatial
layouts are represented as edges in the graph. This graph representation
allows grounding natural language instructions to GUIs (Li et al., 2018; Li et
al., 2020e) with graph queries, allowing a more natural end user development
experience (Myers et al., 2017). It also supports personal information
anonymization on GUIs (Li et al., 2020a). However, this graph representation
is difficult to aggregate or compare across different screens or apps. Its
structure also does not easily fit into common machine learning techniques for
computationally modeling the GUI tasks. As a result, the procedure
generalization capability of systems like Sugilite is limited to parameters
within the same app and the same set of screens.
Some other interactive task learning systems such as Vasta (Sereshkeh et al.,
2020), Sikuli (Yeh et al., 2009), and Hilc (Intharah et al., 2019) represent
GUI screens visually. This approach performs segmentation and classification
on the video of the user performing GUI actions to extract visual
representations (e.g., screenshot segments/icons) of GUI components, allowing
replay of actions by identifying target GUI components using computer vision
object recognition techniques. This approach supports generalization based on
visual similarity (e.g., perform an action on all PDF files in a file explorer
because they all have visually similar icons). However, this visual approach
is limited by its lack of semantic understanding of the GUI components. For
example, the icon of a full trash bin looks quite different from that of an
empty one in terms of pixels, but they should have the same meaning when the
user intent is “open the trash bin”. The icon for a video file can be similar
to that of an audio file (with the only difference being the tiny “mp3” and
“mp4” in a corner), but the system should differentiate them for intents like
“select all the video files”.
The Screen2Vec representation presented in this paper encodes the textual
content, visual layout and design patterns, and app-specific context of GUI
screens in a distributed vector form that can be used across different apps
and task domains. We think this representation can be quite useful in
supplementing the existing graph and visual GUI representations in ITL
systems. For example, as shown in Section 3.3, sequences of Screen2Vec screen
embeddings can represent tasks in a way that allows the comparison and
retrieval of similar tasks among different apps. The results in Section 3.3
also suggest that the embedding can help an ITL agent transfer procedures
learned from one app to another.
## 7\. Conclusion
We have presented Screen2Vec, a new self-supervised technique for generating
distributed semantic representations of GUI screens and components using their
textual content, visual design and layout patterns, and app meta-data. This
new technique has been shown to be effective in downstream tasks such as
nearest neighbor retrieval, composability-based retrieval, and representing
mobile tasks. Screen2Vec addresses an important gap in computational HCI
research, and could be utilized for enabling and enhancing interactive systems
in task learning (e.g., (Li et al., 2019; Sereshkeh et al., 2020)),
intelligent suggestive interfaces (e.g., (Chen et al., 2019)), assistive tools
(e.g., (Bigham et al., 2009)), and GUI design aids (e.g., (Swearngin et al.,
2018; Lee et al., 2020a)).
###### Acknowledgements.
This research was supported in part by Verizon through the Yahoo! InMind
project, a J.P. Morgan Faculty Research Award, Google Cloud Research Credits,
NSF grant IIS-1814472, and AFOSR grant FA95501710218. Any opinions, findings
or recommendations expressed here are those of the authors and do not
necessarily reflect views of the sponsors. We would like to thank our
anonymous reviewers for their feedback and Ting-Hao (Kenneth) Huang, Monica
Lam, Vanessa Hu, Michael Xieyang Liu, Haojian Jin, and Franklin Mingzhe Li for
useful discussions.
## References
* Adar et al. (2014) Eytan Adar, Mira Dontcheva, and Gierad Laput. 2014\. CommandSpace: Modeling the Relationships Between Tasks, Descriptions and Features. In _Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology_ _(UIST ’14)_. ACM, New York, NY, USA, 167–176. https://doi.org/10.1145/2642918.2647395
* Azim et al. (2016) Tanzirul Azim, Oriana Riva, and Suman Nath. 2016. uLink: Enabling User-Defined Deep Linking to App Content. In _Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services_ _(MobiSys ’16)_. ACM, New York, NY, USA, 305–318. https://doi.org/10.1145/2906388.2906416
* Bellman (1966) Richard Bellman. 1966\. Dynamic Programming. _Science_ 153, 3731 (1966), 34–37. https://doi.org/10.1126/science.153.3731.34
* Bengio (2009) Yoshua Bengio. 2009\. _Learning deep architectures for AI_. Now Publishers Inc.
* Bigham et al. (2009) Jeffrey P. Bigham, Tessa Lau, and Jeffrey Nichols. 2009\. Trailblazer: Enabling Blind Users to Blaze Trails through the Web. In _Proceedings of the 14th International Conference on Intelligent User Interfaces_ (Sanibel Island, Florida, USA) _(IUI ’09)_. ACM, New York, NY, USA, 177–186. https://doi.org/10.1145/1502650.1502677
* Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_. ACL, Lisbon, Portugal, 632–642. https://doi.org/10.18653/v1/D15-1075
* Chen et al. (2019) Fanglin Chen, Kewei Xia, Karan Dhabalia, and Jason I. Hong. 2019\. MessageOnTap: A Suggestive Interface to Facilitate Messaging-Related Tasks. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. ACM, New York, NY, USA, Article 575, 14 pages. https://doi.org/10.1145/3290605.3300805
* Chen et al. (2020) Jieshan Chen, Chunyang Chen, Zhenchang Xing, Xiwei Xu, Liming Zhu, Guoqiang Li, and Jinshui Wang. 2020. Unblind Your Apps: Predicting Natural-Language Labels for Mobile GUI Components by Deep Learning. In _Proceedings of the 42nd International Conference on Software Engineering_ _(ICSE ’20)_.
* Deka et al. (2017) Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hibschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar. 2017\. Rico: A Mobile App Dataset for Building Data-Driven Design Applications. In _Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology_ _(UIST ’17)_. ACM, New York, NY, USA, 845–854. https://doi.org/10.1145/3126594.3126651
* Deka et al. (2016) Biplab Deka, Zifeng Huang, and Ranjitha Kumar. 2016\. ERICA: Interaction Mining Mobile Apps. In _Proceedings of the 29th Annual Symposium on User Interface Software and Technology_ _(UIST ’16)_. ACM, New York, NY, USA, 767–776. https://doi.org/10.1145/2984511.2984581
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. ACL, Minneapolis, Minnesota, 4171–4186. https://doi.org/10.18653/v1/N19-1423
* Fernandes et al. (2016) Earlence Fernandes, Oriana Riva, and Suman Nath. 2016. Appstract: On-the-fly App Content Semantics with Better Privacy. In _Proceedings of the 22Nd Annual International Conference on Mobile Computing and Networking_ _(MobiCom ’16)_. ACM, New York, NY, USA, 361–374. https://doi.org/10.1145/2973750.2973770
* Huang et al. (2019) Forrest Huang, John F. Canny, and Jeffrey Nichols. 2019\. Swire: Sketch-Based User Interface Retrieval. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. ACM, New York, NY, USA, 1–10. https://doi.org/10.1145/3290605.3300334
* Intharah et al. (2019) Thanapong Intharah, Daniyar Turmukhambetov, and Gabriel J. Brostow. 2019. HILC: Domain-Independent PbD System Via Computer Vision and Follow-Up Questions. _ACM Trans. Interact. Intell. Syst._ 9, 2-3, Article 16 (March 2019), 27 pages. https://doi.org/10.1145/3234508
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_ , Yoshua Bengio and Yann LeCun (Eds.). http://arxiv.org/abs/1412.6980
* Kumar et al. (2013) Ranjitha Kumar, Arvind Satyanarayan, Cesar Torres, Maxine Lim, Salman Ahmad, Scott R. Klemmer, and Jerry O. Talton. 2013. Webzeitgeist: Design Mining the Web. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Paris, France) _(CHI ’13)_. ACM, New York, NY, USA, 3083–3092. https://doi.org/10.1145/2470654.2466420
* Lee et al. (2020a) Chunggi Lee, Sanghoon Kim, Dongyun Han, Hongjun Yang, Young-Woo Park, Bum Chul Kwon, and Sungahn Ko. 2020a. GUIComp: A GUI Design Assistant with Real-Time, Multi-Faceted Feedback. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ (Honolulu, HI, USA) _(CHI ’20)_. ACM, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376327
* Lee et al. (2020b) Hsin-Ying Lee, Weilong Yang, Lu Jiang, Madison Le, Irfan Essa, Haifeng Gong, and Ming-Hsuan Yang. 2020b. Neural Design Network: Graphic Layout Generation with Constraints. _European Conference on Computer Vision (ECCV)_ (2020).
* Li et al. (2019) Jianan Li, Jimei Yang, Aaron Hertzmann, Jianming Zhang, and Tingfa Xu. 2019\. LayoutGAN: Synthesizing Graphic Layouts with Vector-Wireframe Adversarial Networks. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ (2019).
* Li et al. (2017) Toby Jia-Jun Li, Amos Azaria, and Brad A. Myers. 2017\. SUGILITE: Creating Multimodal Smartphone Automation by Demonstration. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_ _(CHI ’17)_. ACM, New York, NY, USA, 6038–6049. https://doi.org/10.1145/3025453.3025483
* Li et al. (2020a) Toby Jia-Jun Li, Jingya Chen, Brandon Canfield, and Brad A. Myers. 2020a. Privacy-Preserving Script Sharing in GUI-Based Programming-by-Demonstration Systems. _Proc. ACM Hum.-Comput. Interact._ 4, CSCW1, Article 060 (May 2020), 23 pages. https://doi.org/10.1145/3392869
* Li et al. (2020b) Toby Jia-Jun Li, Jingya Chen, Haijun Xia, Tom M. Mitchell, and Brad A. Myers. 2020b. Multi-Modal Repairs of Conversational Breakdowns in Task-Oriented Dialogs. In _Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology_ _(UIST 2020)_. ACM. https://doi.org/10.1145/3379337.3415820
* Li et al. (2018) Toby Jia-Jun Li, Igor Labutov, Xiaohan Nancy Li, Xiaoyi Zhang, Wenze Shi, Tom M. Mitchell, and Brad A. Myers. 2018. APPINITE: A Multi-Modal Interface for Specifying Data Descriptions in Programming by Demonstration Using Verbal Instructions. In _Proceedings of the 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2018)_. https://doi.org/10.1109/VLHCC.2018.8506506
* Li et al. (2020e) Toby Jia-Jun Li, Tom Mitchell, and Brad Myers. 2020e. Interactive Task Learning from GUI-Grounded Natural Language Instructions and Demonstrations. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations_. ACL, Online, 215–223. https://doi.org/10.18653/v1/2020.acl-demos.25
* Li et al. (2019) Toby Jia-Jun Li, Marissa Radensky, Justin Jia, Kirielle Singarajah, Tom M. Mitchell, and Brad A. Myers. 2019. PUMICE: A Multi-Modal Agent that Learns Concepts and Conditionals from Natural Language and Demonstrations. In _Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology_ _(UIST 2019)_. ACM. https://doi.org/10.1145/3332165.3347899
* Li and Riva (2018) Toby Jia-Jun Li and Oriana Riva. 2018. KITE: Building conversational bots from mobile apps. In _Proceedings of the 16th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2018)_. ACM. https://doi.org/10.1145/3210240.3210339
* Li et al. (2014) Toby Jia-Jun Li, Shilad Sen, and Brent Hecht. 2014. Leveraging Advances in Natural Language Processing to Better Understand Tobler’s First Law of Geography. In _Proceedings of the 22Nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems_ _(SIGSPATIAL ’14)_. ACM, New York, NY, USA, 513–516. https://doi.org/10.1145/2666310.2666493
* Li et al. (2020c) Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, and Jason Baldridge. 2020c. Mapping Natural Language Instructions to Mobile UI Action Sequences. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_. ACL, Online, 8198–8210. https://doi.org/10.18653/v1/2020.acl-main.729
* Li et al. (2020d) Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, and Zhiwei Guan. 2020d. Widget Captioning: Generating Natural Language Description for Mobile User Interface Elements. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_. ACL, Online, 5495–5510. https://doi.org/10.18653/v1/2020.emnlp-main.443
* Liu et al. (2018) Thomas F. Liu, Mark Craft, Jason Situ, Ersin Yumer, Radomir Mech, and Ranjitha Kumar. 2018\. Learning Design Semantics for Mobile Apps. In _Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology_ (Berlin, Germany) _(UIST ’18)_. ACM, New York, NY, USA, 569–579. https://doi.org/10.1145/3242587.3242650
* Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient Estimation of Word Representations in Vector Space. _arXiv:1301.3781 [cs]_ (Jan. 2013). http://arxiv.org/abs/1301.3781 arXiv: 1301.3781.
* Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In _Advances in neural information processing systems_. 3111–3119. http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality
* Myers et al. (2017) Brad A. Myers, Amy J. Ko, Chris Scaffidi, Stephen Oney, YoungSeok Yoon, Kerry Chang, Mary Beth Kery, and Toby Jia-Jun Li. 2017\. Making End User Development More Natural. In _New Perspectives in End-User Development_. Springer, Cham, 1–22. https://doi.org/10.1007/978-3-319-60291-2_1
* Nair and Hinton (2010) Vinod Nair and Geoffrey E. Hinton. 2010. Rectified Linear Units Improve Restricted Boltzmann Machines. In _Proceedings of the 27th International Conference on International Conference on Machine Learning_ (Haifa, Israel) _(ICML’10)_. Omnipress, Madison, WI, USA, 807–814.
* Nguyen et al. (2017) Trong Duc Nguyen, Anh Tuan Nguyen, Hung Dang Phan, and Tien N. Nguyen. 2017. Exploring API Embedding for API Usages and Applications. In _Proceedings of the 39th International Conference on Software Engineering_ (Buenos Aires, Argentina) _(ICSE ’17)_. IEEE, 438–449. https://doi.org/10.1109/ICSE.2017.47
* Pasupat et al. (2018) Panupong Pasupat, Tian-Shun Jiang, Evan Liu, Kelvin Guu, and Percy Liang. 2018\. Mapping natural language commands to web elements. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ _(EMNLP ’18)_. ACL, Brussels, Belgium, 4970–4976. https://doi.org/10.18653/v1/D18-1540
* Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing_ _(EMNLP ’14)_. ACL, Doha, Qatar, 1532–1543. https://doi.org/10.3115/v1/D14-1162
* Peters et al. (2018) Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ _(NAACL ’18)_. ACL, New Orleans, Louisiana, 2227–2237. https://doi.org/10.18653/v1/N18-1202
* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics. http://arxiv.org/abs/1908.10084
* Sereshkeh et al. (2020) Alborz Rezazadeh Sereshkeh, Gary Leung, Krish Perumal, Caleb Phillips, Minfan Zhang, Afsaneh Fazly, and Iqbal Mohomed. 2020\. VASTA: a vision and language-assisted smartphone task automation system. In _Proceedings of the 25th International Conference on Intelligent User Interfaces_ _(IUI ’20)_. 22–32.
* Swearngin et al. (2018) Amanda Swearngin, Mira Dontcheva, Wilmot Li, Joel Brandt, Morgan Dixon, and Amy J. Ko. 2018\. Rewire: Interface Design Assistance from Examples. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ (Montreal QC, Canada) _(CHI ’18)_. ACM, New York, NY, USA, 1–12. https://doi.org/10.1145/3173574.3174078
* Turian et al. (2010) Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010\. Word Representations: A Simple and General Method for Semi-Supervised Learning. In _Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics_ (Uppsala, Sweden) _(ACL ’10)_. ACL, USA, 384–394.
* Williams et al. (2018) Adina Williams, Nikita Nangia, and Samuel Bowman. 2018\. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_. ACL, New Orleans, Louisiana, 1112–1122. https://doi.org/10.18653/v1/N18-1101
* Yeh et al. (2009) Tom Yeh, Tsung-Hsiang Chang, and Robert C. Miller. 2009\. Sikuli: Using GUI Screenshots for Search and Automation. In _Proceedings of the 22Nd Annual ACM Symposium on User Interface Software and Technology_ _(UIST ’09)_. ACM, New York, NY, USA, 183–192. https://doi.org/10.1145/1622176.1622213
* Zhang et al. (2017) Xiaoyi Zhang, Anne Spencer Ross, Anat Caspi, James Fogarty, and Jacob O. Wobbrock. 2017. Interaction Proxies for Runtime Repair and Enhancement of Mobile Application Accessibility. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_ _(CHI ’17)_. ACM, New York, NY, USA, 6024–6037. https://doi.org/10.1145/3025453.3025846
* Zhang et al. (2018) Xiaoyi Zhang, Anne Spencer Ross, and James Fogarty. 2018\. Robust Annotation of Mobile Application Interfaces in Methods for Accessibility Repair and Enhancement. In _Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology_.
# First Align, then Predict:
Understanding the Cross-Lingual Ability of Multilingual BERT
Benjamin Muller1,2 Yanai Elazar3,4 Benoît Sagot1 Djamé Seddah1
1Inria, Paris, France 2Sorbonne Université, Paris, France
3Computer Science Department, Bar Ilan University
4Allen Institute for Artificial Intelligence
{benjamin.muller<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Multilingual pretrained language models have demonstrated remarkable zero-shot
cross-lingual transfer capabilities. Such transfer emerges by fine-tuning on a
task of interest in one language and evaluating on a distinct language, not
seen during the fine-tuning. Despite promising results, we still lack a proper
understanding of the source of this transfer. Using a novel layer ablation
technique and analyses of the model’s internal representations, we show that
multilingual BERT, a popular multilingual language model, can be viewed as the
stacking of two sub-networks: a multilingual encoder followed by a task-
specific language-agnostic predictor. While the encoder is crucial for cross-
lingual transfer and remains mostly unchanged during fine-tuning, the task
predictor has little importance for the transfer and can be reinitialized
during fine-tuning. We present extensive experiments with three distinct
tasks, seventeen typologically diverse languages and multiple domains to
support our hypothesis.
## 1 Introduction
Zero-shot Cross-Lingual transfer aims at building models for a target language
by reusing knowledge acquired from a source language. Historically, it has
been tackled with a two-step standard cross-lingual pipeline (Ruder et al.,
2019): (1) Building a shared multilingual representation of text, typically by
aligning textual representations across languages. This step can be done using
feature extraction (Aone and McKee, 1993; Schultz and Waibel, 2001) as with
the delexicalized approach Zeman and Resnik (2008); Søgaard (2011) or using
word embedding techniques (Mikolov et al., 2013; Smith et al., 2017) that
project monolingual embeddings onto a shared multilingual embedding space;
this step requires an explicit supervision signal in the target language in
the form of features or parallel data. (2) Training a task-specific model
using supervision on a source language on top of the shared representation.
Recently, the rise of multilingual language models entailed a paradigm shift
in this field. Multilingual pretrained language models Devlin et al. (2019);
Conneau and Lample (2019) have been shown to perform efficient zero-shot
cross-lingual transfer for many tasks and languages Pires et al. (2019); Wu
and Dredze (2019). Such transfer relies on three steps: (i) pretraining a
masked language model (e.g. Devlin et al. (2019)) on the concatenation of
monolingual corpora across multiple languages, (ii) fine-tuning the model on a
specific task in the source language, and (iii) using the fine-tuned model on
a target language. The success of this approach is remarkable, and in contrast
to the standard cross-lingual pipeline, the model sees neither aligned data
nor task-specific annotated data in the target language at any training stage.
The source of such a successful transfer is still largely unexplained. Pires
et al. (2019) hypothesize that these models learn shared multilingual
representations during pretraining. Focusing on syntax, Chi et al. (2020)
recently showed that the multilingual version of BERT (mBERT) (Devlin et al.,
2019), encodes linguistic properties in shared multilingual sub-spaces.
Recently, Gonen et al. (2020) suggested that mBERT learns a language-encoding
component and an abstract cross-lingual component. In this work, we are
interested in understanding the mechanism that leads mBERT to perform zero-
shot cross-lingual transfer. More specifically, we ask what parts of the model
and what mechanisms support cross-lingual transfer?
By combining behavioral and structural analyses (Belinkov et al., 2020), we
show that mBERT operates as the stacking of two modules: (1) A multilingual
encoder, located in the lower part of the model, critical for cross-lingual
transfer, is in charge of aligning multilingual representations; and (2) a
task-specific, language-agnostic predictor which has little importance for
cross-lingual transfer and is dedicated to performing the downstream task.
This mechanism, which emerges out of the box without any explicit supervision,
suggests that mBERT behaves like the standard cross-lingual pipeline. Our
contributions advance the understanding of multilingual language models and as
such have the potential to support the development of better pretraining
processes.
## 2 Analysis Techniques
We study mBERT with a novel behavioral test that disentangles the task fine-
tuning influence from the pretraining step (§2.1), and a structural analysis
on the intermediate representations (§2.2). Combining the results from these
analyses allows us to locate the cross-lingual transfer and gain insights into
the mechanisms that enable it.
### 2.1 Locating Transfer with Random-init
In order to disentangle the impact of the pretraining step from the fine-
tuning, we propose a new behavioral technique: Random-init. First, we randomly
initialize a set of parameters (e.g. all the parameters of a given layer)
instead of using the parameters learned during the pretraining step. Then, we
fine-tune the modified pretrained model and measure the downstream
performance.111Note that we perform the same optimization procedure for the
model with and without Random-init (optimal learning rate and batch size are
chosen with grid-search).
By replacing a given set of pretrained parameters and fine-tuning the model,
all other factors being equal, Random-init enables us to quantify the
contribution of a given set of pretrained parameters to downstream
performance, and therefore to locate which pretrained parameters contribute to
the cross-lingual transfer.
If the cross-language performance drops significantly more than the same-
language performance, we conclude that the randomly-initialized layers are
more important for cross-language performance than for same-language
performance. If the cross-language score does not change, cross-language
transfer does not rely on these layers.
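A minimal sketch of the Random-init operation itself, in pure NumPy with hypothetical parameter names that mimic mBERT's `encoder.layer.<i>.` prefixes (the actual experiments operate on the real pretrained checkpoint and then fine-tune it):

```python
import numpy as np

def random_init(params, layers, std=0.02, seed=0):
    """Return a copy of `params` in which every tensor belonging to the
    given layer indices is replaced by a fresh random draw, discarding
    the pretrained weights of those layers.

    `params` maps names like 'encoder.layer.3.attention.weight' to arrays.
    std=0.02 mirrors BERT's truncated-normal initializer (an assumption here).
    """
    rng = np.random.default_rng(seed)
    # Trailing dot avoids matching layer 1 against layers 10/11.
    prefixes = tuple(f"encoder.layer.{i}." for i in layers)
    out = {}
    for name, weight in params.items():
        if name.startswith(prefixes):
            out[name] = rng.normal(0.0, std, size=weight.shape)
        else:
            out[name] = weight.copy()
    return out
```

Fine-tuning then proceeds identically for the modified and unmodified model, so any performance gap is attributable to the discarded pretrained weights.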
This technique is reminiscent of the recent Amnesic Probing method (Elazar et
al., 2020), which removes a specific feature, e.g. part-of-speech, from the
representation and then measures the outcome on the downstream task. In
contrast, Random-init allows us to study a specific architecture component
instead of specific features.
### 2.2 Hidden State Similarities across Languages
To strengthen the behavioral evidence brought by Random-init, and to provide
finer analyses that focus on individual layers, we study how the textual
representations differ between parallel sentences in different languages. We
hypothesize that an efficient fine-tuned model should represent similar
sentences in the source and target languages similarly, even though it was
fine-tuned only on the source language.
To measure the similarity of the representations across languages, we use the
Centered Kernel Alignment (CKA) metric, introduced by Kornblith et al. (2019).
We follow Conneau et al. (2020), who use CKA as a similarity metric to
compare the representations of monolingual and bilingual pretrained models
across languages. In our work, we use CKA to study the representation
difference between source and target languages in pretrained and fine-tuned
multilingual models. For every layer, we average all contextualized tokens in
a sentence to get a single vector.222After removing the [CLS] and [SEP]
special tokens. Then we compute the similarity between target and source
representations and compare it across layers in the pretrained and fine-tuned
models. We call this metric the cross-lingual similarity.
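The linear variant of CKA from Kornblith et al. (2019) can be sketched as follows, applied to matrices whose rows are the mean-pooled sentence vectors of parallel sentences (the exact kernel used in the experiments is not specified here, so the linear form is an assumption):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape
    (n_sentences, dim), e.g. source- and target-language vectors for the
    same parallel sentences at a given layer. Returns a value in [0, 1];
    it is invariant to orthogonal transforms and isotropic scaling."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(numerator / denominator)

def sentence_vector(token_states):
    """Mean-pool contextualized token vectors (seq_len, dim) into one
    sentence vector, assuming [CLS]/[SEP] were already removed."""
    return token_states.mean(axis=0)
```

Computing `linear_cka` per layer on source vs. target representations yields the cross-lingual similarity curve discussed in §4.2.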
## 3 Experimental Setup
##### Tasks, Datasets and Evaluation
We consider three tasks covering both syntactic and semantic aspects of
language: Part-Of-Speech Tagging (POS), dependency parsing, and Named-Entity
Recognition (NER). For POS tagging and parsing we use the Universal Dependency
(Nivre et al., 2018) treebanks, and for NER, we use the WikiANN dataset (Pan
et al., 2017). We evaluate our systems with the standard metric for each task:
word-level accuracy for POS tagging, F1 for NER, and labeled attachment score
(LAS) for parsing. All the reported scores are computed on the test set of
each dataset.
We experiment with English, Russian and Arabic as source languages, and
fourteen typologically diverse target languages, including Chinese, Czech,
German and Hindi. The complete list can be found in Appendix A.1.2.
The results of a model that is fine-tuned and evaluated on the same language
are referred to as same-language and those evaluated on distinct languages are
referred to as cross-language.
##### Multilingual Model
We focus on mBERT (Devlin et al., 2019), a 12-layer model trained on the
concatenation of 104 monolingual Wikipedia corpora, including our languages of
study.
##### Fine-Tuning
We fine-tune the model for each task following the standard methodology of
Devlin et al. (2019). The exact details for reproducing our results can be
found in the Appendix. All reported scores are averaged on 5 runs with
different random seeds.
## 4 Results
### 4.1 Disentangling the Pretraining Effect
| Src-Trg | Ref | $\Delta$ 1-2 | $\Delta$ 3-4 | $\Delta$ 5-6 | $\Delta$ 7-8 | $\Delta$ 9-10 | $\Delta$ 11-12 |
|---|---|---|---|---|---|---|---|
| **Parsing** | | | | | | | |
| En - En | 88.98 | -0.96 | -0.66 | -0.93 | -0.55 | 0.04 | -0.09 |
| Ru - Ru | 85.15 | -0.82 | -1.38 | -1.51 | -0.86 | -0.29 | 0.18 |
| Ar - Ar | 59.54 | -0.78 | -2.14 | -1.20 | -0.67 | -0.27 | 0.08 |
| En - X | 53.23 | -15.77 | -6.51 | -3.39 | -1.47 | 0.29 | 1.00 |
| Ru - X | 55.41 | -7.69 | -3.71 | -3.13 | -1.70 | 0.92 | 0.94 |
| Ar - X | 27.97 | -4.91 | -3.17 | -1.48 | -1.68 | -0.36 | -0.14 |
| **POS** | | | | | | | |
| En - En | 96.51 | -0.30 | -0.25 | -0.40 | -0.00 | 0.05 | 0.02 |
| Ru - Ru | 96.90 | -0.52 | -0.55 | -0.40 | -0.07 | 0.02 | -0.03 |
| Ar - Ar | 79.28 | -0.35 | -0.49 | -0.36 | -0.19 | -0.05 | -0.00 |
| En - X | 79.37 | -8.94 | -2.49 | -1.66 | -0.88 | 0.20 | -0.14 |
| Ru - X | 79.25 | -10.08 | -2.83 | -1.65 | -2.74 | 0.01 | -0.45 |
| Ar - X | 64.81 | -6.73 | -3.50 | -1.63 | -1.56 | -0.73 | -1.29 |
| **NER** | | | | | | | |
| En - En | 83.30 | -2.66 | -2.14 | -1.43 | -0.63 | -0.23 | -0.12 |
| Ru - Ru | 88.20 | -2.08 | -2.13 | -1.52 | -0.64 | -0.33 | -0.13 |
| Ar - Ar | 87.97 | -2.37 | -2.11 | -0.96 | -0.39 | -0.15 | 0.21 |
| En - X | 64.17 | -8.28 | -5.09 | -3.07 | -0.79 | -0.47 | -0.13 |
| Ru - X | 62.13 | -15.85 | -9.36 | -5.50 | -2.44 | -1.16 | -0.06 |
| Ar - X | 65.59 | -16.10 | -8.42 | -3.73 | -1.40 | -0.25 | 0.67 |
Table 1: Relative zero-shot cross-lingual performance of mBERT with Random-
init (§2.1) on pairs of consecutive layers, compared to mBERT without any
random initialization (Ref). In Src-Trg, Src indicates the source language on
which we fine-tune mBERT, and Trg the target language on which we evaluate it.
Src-X is the average across all 17 target languages with X $\neq$ Src.
Detailed results per target language are reported in Tables 6, 7 and 8 in the
Appendix. Cells are colored according to how mBERT with Random-init performs
compared to the Ref model ($\geq$ Ref, $<$ Ref, $\leq$ -2 points, $\leq$ -5
points).
For each experiment, we measure the impact of randomly initializing specific
layers as the difference in performance between the model with random
initialization (Random-init) and without it (Ref). Results for pairs of
consecutive layers are shown in Table 1. The rest of the results, which
exhibit similar trends, can be found in the Appendix (Table 5).
For all tasks, we observe sharp drops in cross-language performance when
randomly initializing the lower layers of the model, but only moderate drops
in same-language performance. For instance, in the parsing experiment with
English as the source language, randomly initializing layers 1 and 2 results
in a performance drop of only 0.96 points on English (En-En), but an average
drop of 15.77 points on other languages (En-X).
Furthermore, we show that applying Random-init to the upper layers harms
neither same-language nor cross-language performance (e.g. when training on
parsing for English, performance decreases slightly, by 0.09 points, in the
same-language case while it increases by 1.00 point in the cross-language
case). This suggests that the upper layers are task-specific and language-
agnostic, since re-initializing them has minimal effect on performance. We
conclude that mBERT's upper layers do not contribute to cross-language
transfer.
##### Does the Target Domain Matter?
In order to test whether this behavior is specific to the cross-language
setting rather than general to out-of-distribution (OOD) transfer, we repeat
the same Random-init experiment in the same-language setting while varying the
evaluation domain.333Although other factors might play a part in out-of-
distribution transfer, we suspect that domain plays a crucial part; moreover,
it was shown that BERT encodes domain information out of the box (Aharoni and
Goldberg, 2020). If the drop is similar to the cross-language one, the lower
layers are important for out-of-distribution transfer in general. Otherwise,
it would confirm that these layers play a specific role in cross-language
transfer.
| Src - Trg | Ref | $\Delta$ 0-1 | $\Delta$ 2-3 | $\Delta$ 4-5 | $\Delta$ 6-7 | $\Delta$ 8-9 | $\Delta$ 10-11 |
|---|---|---|---|---|---|---|---|
| **Parsing — domain analyses** | | | | | | | |
| En - En | 90.40 | -1.41 | -2.33 | -1.57 | -1.43 | -0.60 | -0.46 |
| En - En Lit. | 77.91 | -0.91 | -1.38 | -1.85 | -0.83 | -0.23 | -0.17 |
| En - En Web | 75.77 | -2.14 | -2.42 | -2.54 | -1.42 | -0.71 | -0.69 |
| En - En UGC | 45.90 | -1.97 | -2.75 | -2.10 | -1.04 | -0.39 | -0.25 |
| **Parsing — cross-language** | | | | | | | |
| En - Fr Tran. | 83.25 | -5.82 | -2.69 | -2.42 | -0.44 | 0.25 | 0.94 |
| En - Fr Wiki | 71.29 | -7.86 | -4.33 | -4.64 | -0.92 | -0.11 | 0.33 |
| **POS — domain analyses** | | | | | | | |
| En - En | 96.83 | -1.35 | -0.98 | -0.70 | -0.40 | -0.28 | -0.24 |
| En - En Lit. | 93.09 | -0.58 | -0.65 | -0.28 | -0.04 | -0.06 | 0.12 |
| En - En Web | 89.67 | -1.07 | -1.21 | -0.41 | -0.10 | 0.03 | 0.21 |
| En - En UGC | 68.93 | -2.38 | -1.07 | -0.14 | 0.54 | -0.04 | 0.63 |
| **POS — cross-language** | | | | | | | |
| En - Fr Tran. | 93.43 | -3.59 | -0.88 | -1.31 | -0.56 | 0.46 | 0.25 |
| En - Fr | 91.13 | -5.10 | -0.93 | -1.16 | -0.74 | 0.15 | -0.07 |
| **NER — domain analyses** | | | | | | | |
| En - En | 83.22 | -2.45 | -2.15 | -1.28 | -0.49 | -0.15 | -0.06 |
| En - News | 51.72 | -1.32 | -1.05 | -0.80 | -0.14 | -0.31 | -0.33 |
| **NER — cross-language** | | | | | | | |
| En - Fr | 76.16 | -5.14 | -2.82 | -1.97 | -0.33 | 0.52 | 0.34 |
Table 2: Relative zero-shot cross-lingual performance of mBERT with Random-
init (§2.1) on pairs of consecutive layers, compared to mBERT without any
random initialization (Ref). We present experiments with English as the source
language and evaluate across various target domains in English, in comparison
with the cross-lingual setting where we evaluate on French. En-Lit. refers to
the literature domain, UGC to user-generated content, and Fr-Tran. to
sentences translated from the English in-domain test set, hence reducing the
domain gap to its minimum. Cells are colored according to how mBERT with
Random-init performs compared to the Ref model ($\geq$ Ref, $<$ Ref, $\leq$ -2
points, $\leq$ -5 points).
We report the results in Table 2. For all analyzed domains (Web, News,
Literature, etc.), applying Random-init to the first two layers of the model
leads to very moderate drops (e.g. -0.91 when the target domain is English
literature for parsing), while it leads to large drops when the evaluation is
done on a distinct language (e.g. -5.82 when evaluated on French). The trends
are similar for all the domains and tasks we tested. We conclude that the
pretrained parameters at the lower layers are consistently more critical for
cross-language transfer than for same-language transfer, and that this cannot
be explained by the possibly different domains of the evaluation datasets.
### 4.2 Cross-Lingual Similarity in mBERT
The results from the previous sections suggest that the lower layers of the
model are responsible for the cross-lingual transfer, whereas the upper layers
are language-agnostic. In this section, we assess the transfer by directly
analyzing the intermediate representations and measuring the similarity of
the hidden-state representations between source and target languages. We
compute the CKA metric (cf. §2.2) between the source and the target
representations for pretrained and fine-tuned models, using parallel sentences
from the PUD dataset (Zeman et al., 2017). In Figure 1, we present the
similarities between Russian and English with mBERT pretrained and fine-tuned
on the three tasks.444We report the comparisons for 5 other languages in the
Appendix.
The cross-lingual similarity between the representations increases steadily
up to layer 5 for all three tasks (reaching 78.1%, 78.1% and 78.2% for
parsing, POS tagging and NER respectively). From this layer onward, the
similarity decreases. We observe the same trends across all languages (cf. the
Appendix). This demonstrates that the fine-tuned model creates similar
representations regardless of the language and task, and hints at an alignment
that occurs in the lower part of the model. Interestingly, the same trend is
also observed in the pretrained model, suggesting that the fine-tuning step
preserves the multilingual alignment.
Figure 1: Cross-Lingual similarity (CKA) between representations of pretrained
and fine-tuned models on POS, NER and Parsing between English and Russian.
Figure 2: Cross-Lingual similarity (CKA) of the representations of a model
fine-tuned on NER, with and without Random-init, between English (source) and
Russian (target). The higher the score, the greater the similarity.
These results do not match the findings of Singh et al. (2019), who found no
language alignment across layers, although they inspected Natural Language
Inference, a more "high-level" task (Dagan et al., 2005; Bowman et al., 2015).
We leave the inspection of this mismatch to future work.
### 4.3 Better Alignment Leads to Better Cross-Lingual Transfer
In the previous section we showed that fine-tuned models align the
representations of parallel sentences across languages. Moreover, we
demonstrated that the lower part of the model is critical for cross-language
transfer but hardly impacts same-language performance. In this section, we
show that the measured alignment plays a critical role in cross-lingual
transfer.
As seen in Figure 2 for English to Russian (and in the Appendix for other
languages), when we randomly initialize the lower part of the model, there is
no alignment: the similarity between the source and target languages
decreases. We observe the same trend for all other languages and tasks and
report it in the Appendix. This result matches the drop in cross-lingual
performance that occurs when we apply Random-init to the lower part of the
model, while same-language performance is only moderately impacted.
For a more systematic view of the link between the cross-lingual similarities
and cross-language transfer, we measure the Spearman correlation between the
cross-lang gap (i.e., the difference between the same-language performance
and the cross-language performance) (Hu et al., 2020) and the cross-lingual
similarity averaged over all the layers. We measure it with the cross-lingual
similarity computed on the pretrained and fine-tuned models (without random
initialization) on all the languages. We find that the cross-lingual
similarity correlates significantly with the cross-lang gap for all three
tasks, both on the fine-tuned and pretrained models. The Spearman correlations
for the fine-tuned models are 0.76, 0.75 and 0.47 for parsing, POS and NER,
respectively.555Correlations for both the pretrained and the fine-tuned models
are reported in Table 4 in the Appendix. In summary, our results show that
cross-lingual alignment is highly correlated with cross-lingual transfer.
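The correlation analysis above can be sketched as follows, with a tie-free Spearman implementation in pure NumPy; the per-language numbers below are made up for illustration only and do not come from the paper:

```python
import numpy as np

def spearman(a, b):
    """Spearman correlation: the Pearson correlation of the ranks.
    Simplified version that assumes no tied values."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

# Hypothetical per-language values: cross-lingual similarity averaged over
# layers, and the cross-lang gap (same-language score minus cross-language
# score). With these made-up numbers, better alignment goes with a smaller
# gap, so the correlation comes out negative.
similarity = [0.78, 0.71, 0.65, 0.60, 0.52]
gap = [10.2, 14.5, 19.8, 25.1, 33.0]
rho = spearman(similarity, gap)
```

In the actual analysis, one such pair of values is collected per target language, and the correlation is computed separately per task and per model (pretrained vs. fine-tuned).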
## 5 Discussion
Understanding the behavior of pretrained language models is currently a
fundamental challenge in NLP. A popular approach consists of probing the
intermediate representations with external classifiers (Alain and Bengio,
2017; Adi et al., 2017; Conneau et al., 2018) to measure whether a specific
layer captures a given property. Using this technique, Tenney et al. (2019)
showed that BERT encodes linguistic properties in the same order as the
"classical NLP pipeline". However, probing techniques only indirectly explain
the behavior of a model and do not explain the relationship between the
information captured in the representations and its effect on the task (Elazar
et al., 2020). Moreover, recent works have questioned the usage of probing as
an interpretation tool (Hewitt and Liang, 2019; Ravichander et al., 2020).
This motivates our approach of combining a structural analysis based on
representation similarity with a behavioral analysis. In this regard, our
findings extend to the multilingual setting recent work by Merchant et al.
(2020), who show that fine-tuning impacts mainly the upper layers of the
model and preserves the linguistic features learned during pretraining. In our
case, we show that the lower layers are in charge of aligning representations
across languages and that this cross-lingual alignment, learned during
pretraining, is preserved after fine-tuning.
## 6 Conclusion
The remarkable performance of multilingual language models in zero-shot
cross-lingual transfer is still not well understood. In this work, we combine
a structural analysis of the similarities between hidden representations
across languages with a novel behavioral analysis that randomly initializes
parts of the model's parameters. By combining these experiments on 17
languages and 3 tasks, we show that mBERT is constructed from: (1) a
multilingual encoder in the lower layers, which aligns hidden representations
across languages and is critical for cross-language transfer, and (2) a task-
specific, language-agnostic predictor in the upper layers, which has little
effect on cross-language transfer. Additionally, we demonstrate that hidden
cross-lingual similarity strongly correlates with downstream cross-lingual
performance, suggesting that this alignment is at the root of these cross-
lingual transfer abilities. This shows that mBERT reproduces the standard
cross-lingual pipeline described by Ruder et al. (2019) without any explicit
supervision signal for it. Practically speaking, our findings provide a
concrete tool to measure cross-lingual representation similarity that could be
used to design better multilingual pretraining processes.
## Acknowledgments
We thank Hila Gonen, Shauli Ravfogel and Ganesh Jawahar for their careful
review and insightful comments. We also thank the anonymous reviewers for
their valuable suggestions. This work was partly funded by two French National
funded projects granted to Inria and other partners by the Agence Nationale de
la Recherche, namely projects PARSITI (ANR-16-CE33-0021) and SoSweet
(ANR-15-CE38-0011), as well as by the third author’s chair in the PRAIRIE
institute funded by the French national agency ANR as part of the
“Investissements d’avenir” programme under the reference ANR-19-P3IA-0001.
Yanai Elazar is grateful to be partially supported by the PBC fellowship for
outstanding Phd candidates in Data Science.
## References
* Adi et al. (2017) Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net.
* Aharoni and Goldberg (2020) Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7747–7763, Online. Association for Computational Linguistics.
* Alain and Bengio (2017) Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier probes. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings_. OpenReview.net.
* Aone and McKee (1993) Chinatsu Aone and Douglas McKee. 1993. A language-independent anaphora resolution system for understanding multilingual texts. In _31st Annual Meeting of the Association for Computational Linguistics_ , pages 156–163.
* Belinkov et al. (2020) Yonatan Belinkov, Sebastian Gehrmann, and Ellie Pavlick. 2020. Interpretability and analysis in neural nlp. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts_ , pages 1–5.
* Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
* Chi et al. (2020) Ethan A. Chi, John Hewitt, and Christopher D. Manning. 2020. Finding universal grammatical relations in multilingual BERT. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5564–5577, Online. Association for Computational Linguistics.
* Conneau et al. (2018) Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics.
* Conneau and Lample (2019) Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In _Advances in Neural Information Processing Systems_ , pages 7057–7067.
* Conneau et al. (2020) Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Emerging cross-lingual structure in pretrained language models. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 6022–6034, Online. Association for Computational Linguistics.
* Dagan et al. (2005) Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In _Machine Learning Challenges Workshop_ , pages 177–190. Springer.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Elazar et al. (2020) Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2020. Amnesic probing: Behavioral explanation with amnesic counterfactuals. _arXiv preprint_.
* Gonen et al. (2020) Hila Gonen, Shauli Ravfogel, Yanai Elazar, and Yoav Goldberg. 2020. It’s not greek to mbert: Inducing word-level translations from multilingual bert. In _Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP_ , pages 45–56.
* van der Goot and van Noord (2018) Rob van der Goot and Gertjan van Noord. 2018. Modeling input uncertainty in neural network dependency parsing. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 4984–4991.
* Hewitt and Liang (2019) John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2733–2743.
* Hu et al. (2020) Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 4411–4421. PMLR.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. _CoRR_ , abs/1412.6980.
* Kornblith et al. (2019) Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural network representations revisited. In _International Conference on Machine Learning_ , pages 3519–3529.
* McDonald et al. (2013) Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In _Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 92–97, Sofia, Bulgaria. Association for Computational Linguistics.
* Merchant et al. (2020) Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to BERT embeddings during fine-tuning? In _Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP_ , pages 33–44, Online. Association for Computational Linguistics.
* Mikolov et al. (2013) Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. _arXiv preprint arXiv:1309.4168_.
* Nivre et al. (2018) Joakim Nivre, Mitchell Abrams, Željko Agić, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, John Bauer, Sandra Bellato, Kepa Bengoetxea, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Rogier Blokland, Victoria Bobicev, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, Gülşen Cebiroğlu Eryiğit, Giuseppe G. A. Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Jayeol Chun, Silvie Cinková, Aurélie Collomb, Çağrı Çöltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Carly Dickerson, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Tomaž Erjavec, Aline Etienne, Richárd Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta Gonzáles Saavedra, Matias Grioni, Normunds Grūzītis, Bruno Guillaume, Céline Guillot-Barbance, Nizar Habash, Jan Hajič, Jan Hajič jr., Linh Hà Mỹ, Na-Rae Han, Kim Harris, Dag Haug, Barbora Hladká, Jaroslava Hlaváčová, Florinel Hociung, Petter Hohle, Jena Hwang, Radu Ion, Elena Irimia, Tomáš Jelínek, Anders Johannsen, Fredrik Jørgensen, Hüner Kaşıkara, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, Václava Kettnerová, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phương Lê Hồng, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, 
Keying Li, KyungTae Lim, Nikola Ljubešić, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Cătălina Mărănduc, David Mareček, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonça, Niko Miekka, Anna Missilä, Cătălin Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shinsuke Mori, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-Bērzkalne, Lương Nguyễn Thị, Huyền Nguyễn Thị Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Adédayọ̀ Olúòkun, Mai Omura, Petya Osenova, Robert Östling, Lilja Øvrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Siyao Peng, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Thierry Poibeau, Martin Popel, Lauma Pretkalniņa, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Andriela Rääbis, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Michael Rießler, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Valentin Roșca, Olga Rudina, Shoval Sadde, Shadi Saleh, Tanja Samardžić, Stephanie Samson, Manuela Sanguinetti, Baiba Saulīte, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Antonio Stella, Milan Straka, Jana Strnadová, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Yuta Takahashi, Takaaki Tanaka, Isabelle Tellier, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, 
Francis Tyers, Sumire Uematsu, Zdeňka Urešová, Larraitz Uria, Hans Uszkoreit, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Veronika Vincze, Lars Wallin, Jonathan North Washington, Seyi Williams, Mats Wirén, Tsegay Woldemariam, Tak-sum Wong, Chunxiao Yan, Marat M. Yavrumyan, Zhuoran Yu, Zdeněk Žabokrtský, Amir Zeldes, Daniel Zeman, Manying Zhang, and Hanzhi Zhu. 2018. Universal dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
* Pan et al. (2017) Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics.
* Pires et al. (2019) Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4996–5001, Florence, Italy. Association for Computational Linguistics.
* Ravichander et al. (2020) Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2020. Probing the probing paradigm: Does probing accuracy entail task relevance? _arXiv preprint arXiv:2005.00719_.
* Ruder et al. (2019) Sebastian Ruder, Ivan Vulić, and Anders Søgaard. 2019. A survey of cross-lingual word embedding models. _Journal of Artificial Intelligence Research_ , 65:569–631.
* Schultz and Waibel (2001) Tanja Schultz and Alex Waibel. 2001. Language-independent and language-adaptive acoustic modeling for speech recognition. _Speech Communication_ , 35(1-2):31–51.
* Singh et al. (2019) Jasdeep Singh, Bryan McCann, Richard Socher, and Caiming Xiong. 2019. Bert is not an interlingua and the bias of tokenization. In _Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)_ , pages 47–55.
* Smith et al. (2017) Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net.
* Søgaard (2011) Anders Søgaard. 2011. Data point selection for cross-language adaptation of dependency parsers. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2_ , pages 682–686. Association for Computational Linguistics.
* Svizzera (2014) Corso Svizzera. 2014. Converting the parallel treebank partut in universal stanford dependencies.
* Tenney et al. (2019) Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4593–4601, Florence, Italy. Association for Computational Linguistics.
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online. Association for Computational Linguistics.
* Wu and Dredze (2019) Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of bert. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 833–844.
* Zeman et al. (2017) Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajič jr., Jaroslava Hlaváčová, Václava Kettnerová, Zdeňka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to universal dependencies. In _Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies_ , pages 1–19, Vancouver, Canada. Association for Computational Linguistics.
* Zeman and Resnik (2008) Daniel Zeman and Philip Resnik. 2008. Cross-language parser adaptation between related languages. In _Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages_.
## Appendix A Appendices
### A.1 Reproducibility
#### A.1.1 Optimization
We fine-tune our models using the standard Adam optimizer (Kingma and Ba,
2015). We warm up the learning rate over the first 10% of the steps and decay
it linearly for the rest of training. Using the validation set of the source
language, we find the best combination of hyper-parameters with a grid search
over batch size in {16, 32} and initial learning rate in {1e-5, 2.5e-5,
5e-5}. We select the model with the highest validation performance out of 15
epochs for parsing and out of 6 epochs for POS tagging and NER.
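The selection procedure above amounts to a small exhaustive grid search. A minimal sketch, where `evaluate` is a hypothetical stand-in for fine-tuning a model with the given hyper-parameters and returning its validation score (not the actual training code):

```python
import itertools

# Grid mirroring the search described above.
BATCH_SIZES = [16, 32]
LEARNING_RATES = [1e-5, 2.5e-5, 5e-5]

def grid_search(evaluate):
    """Return the (batch_size, lr) pair maximizing the validation score.

    `evaluate(batch_size, lr)` is a placeholder for fine-tuning a model
    with these hyper-parameters and returning its validation performance.
    """
    best_config, best_score = None, float("-inf")
    for batch_size, lr in itertools.product(BATCH_SIZES, LEARNING_RATES):
        score = evaluate(batch_size, lr)
        if score > best_score:
            best_config, best_score = (batch_size, lr), score
    return best_config, best_score

# Toy scoring function standing in for an actual fine-tuning run.
config, score = grid_search(lambda bs, lr: lr * 1e5 - abs(bs - 32) * 0.01)
```

In practice each `evaluate` call is a full fine-tuning run, which is why the grid is kept small.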
##### Hyperparameters
In Table 3, we report the best hyper-parameter set for each task, the bounds
of each hyper-parameter, the estimated number of grid search trials for each
task, as well as the estimated run time.
#### A.1.2 Data
##### Data Sources
We base our experiments on data from two sources: the Universal
Dependencies project McDonald et al. (2013), downloadable at
https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-2988, and the
WikiNER dataset Pan et al. (2017). We also make use of the CoNLL-2003 shared
task NER English dataset (https://www.clips.uantwerpen.be/conll2003/).
##### Languages
For all our experiments, we use English, Russian and Arabic as source
languages, in addition to Chinese, Czech, Finnish, French, Indonesian, Italian,
Japanese, German, Hindi, Polish, Portuguese, Slovenian, Spanish, and Turkish
as target languages.
##### Fine-tuning Data
For all the cross-lingual experiments, we use English, Russian and Arabic as
source languages on which we fine-tune mBERT. For English, we fine-tune on the
English-EWT treebank; for Russian, on the Russian-GSD treebank; and for Arabic,
on the Arabic-PADT treebank.
##### Evaluation Data
For all our experiments, we perform the evaluation on all 17 languages.
For parsing and POS tagging, we use the test sets from the PUD treebanks
released for the CoNLL 2017 Shared Task (Zeman et al., 2017). For NER, we use
the corresponding annotated datasets from WikiNER.
##### Domain Analysis Datasets
We list here the datasets used in the domain analysis experiment of
Section 4.1 reported in Table 4.1. To have full control over the source
domains, we fine-tune on the English-ParTUT treebank for POS tagging
and parsing (Svizzera, 2014), a mix of legal, news and Wikipedia text.
For NER, we keep the WikiANN dataset (Pan et al., 2017). For the same-language
out-of-domain experiments, we use the English-EWT, English-Lines and
English-Lexnorm (van der Goot and van Noord, 2018) treebanks for web media
data, literature data and noisy tweets, respectively. For the cross-language
French evaluation, we use the translation of the English test set (we do so
by taking the French-ParTUT test set that overlaps with the English-ParTUT,
which includes 110 sentences), as well as the French-GSD treebank. For NER, we
take the CoNLL-2003 shared task English data as our out-of-domain evaluation,
extracted from the news domain. We note that the absolute performance on this
dataset is not directly comparable to the one on the source WikiNER, since
the CoNLL-2003 dataset uses an extra MISC class. In our work, we only
interpret the relative performance of different models on this test set.
##### Cross-Lingual Similarity Analysis
For a given source language $l$ and a target language $l^{\prime}$, we collect
1,000 pairs of aligned sentences from the UD-PUD treebanks (Zeman et al.,
2017). For a given model and for each layer, we obtain a single sentence
embedding by averaging token-level embeddings (after excluding special
tokens). We then concatenate the 1,000 sentence embedding vectors to get the
matrices $X_{l}$ and $X_{l^{\prime}}$. Based on these two matrices, the CKA
between the language $l$ and the language $l^{\prime}$ is defined as:
$CKA(X_{l},X_{l^{\prime}})=\frac{||X_{l}^{T}X_{l^{\prime}}||_{F}^{2}}{||X_{l}^{T}X_{l}||_{F}\,||X_{l^{\prime}}^{T}X_{l^{\prime}}||_{F}}$
where $||\cdot||_{F}$ denotes the Frobenius norm.
We do so for each source-target language pair, using the representations of
the pretrained mBERT model as well as of mBERT fine-tuned on each downstream
task.
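The CKA formula above can be implemented in a few lines. Below is a minimal NumPy sketch of the printed (uncentered) formula applied to two sentence-embedding matrices; it is an illustration, not the exact code used in the experiments:

```python
import numpy as np

def linear_cka(x_l, x_lp):
    """Linear CKA between two (n_sentences, hidden_dim) embedding matrices,
    following the formula above:
    ||X_l^T X_l'||_F^2 / (||X_l^T X_l||_F * ||X_l'^T X_l'||_F)."""
    num = np.linalg.norm(x_l.T @ x_lp, "fro") ** 2
    den = (np.linalg.norm(x_l.T @ x_l, "fro")
           * np.linalg.norm(x_lp.T @ x_lp, "fro"))
    return num / den
```

Note that common definitions of linear CKA first center each feature column; the sketch follows the formula exactly as printed. As a sanity check, CKA of a matrix with itself (or with any rescaled copy of itself) is 1.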
In addition to the results presented in §4.2, we report in Figure 4 a
comparison of the cross-lingual similarity per hidden layer of mBERT
fine-tuned on NER, across target languages. The trend is the same for all
languages.
#### A.1.3 Computation
##### Infrastructure
Our experiments were run on a shared cluster, on the equivalent of 15 Nvidia
Tesla T4 GPUs (https://www.nvidia.com/en-sg/data-center/tesla-t4/).
##### Codebase
All of our experiments are built using the Transformers library
(Wolf et al., 2020). We also provide code to reproduce our experiments at
https://github.com/benjamin-mlr/first-align-then-predict.git.
#### A.1.4 Preprocessing
Our experiments are run with word-level tokenization as provided in the
datasets. We then tokenize each sequence of words at the sub-word level using
the WordPiece algorithm of BERT, as provided by the Transformers library.
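As an illustration of the sub-word step, here is a minimal greedy longest-match-first sketch in the spirit of WordPiece. The toy vocabulary and the helper name are hypothetical; the real BERT tokenizer (via the Transformers library) additionally handles normalization, casing, and a vocabulary of roughly 30k entries:

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first sub-word tokenization, a simplified
    sketch of the WordPiece algorithm used by BERT tokenizers.
    Continuation pieces are prefixed with '##'."""
    pieces = []
    start = 0
    while start < len(word):
        end = len(word)
        piece = None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # mark non-initial pieces
            if candidate in vocab:
                piece = candidate
                break
            end -= 1  # shrink the candidate until it is in the vocabulary
        if piece is None:
            return [unk]  # no known piece covers this position
        pieces.append(piece)
        start = end
    return pieces

# Toy vocabulary; a real BERT vocabulary has ~30k entries.
vocab = {"token", "##ization", "##ize", "un", "##related"}
```

For example, `wordpiece_tokenize("tokenization", vocab)` splits the word into an initial piece and a `##`-prefixed continuation.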
| Params. | Parsing | NER | POS | Bounds |
|---|---|---|---|---|
| batch size | 32 | 16 | 16 | [1, 256] |
| learning rate | 5e-5 | 3.5e-5 | 5e-5 | [1e-6, 1e-3] |
| epochs (best) | 15 | 6 | 6 | [1, 50] |
| #grid | 60 | 60 | 180 | - |
| Run-time (min) | 32 | 24 | 75 | - |
Table 3: Best fine-tuning hyper-parameters for each task, as selected on the
validation set of the source language, with search bounds. #grid: number of
grid search trials. Run-time is the average time for training and evaluation.
### A.2 Cross-lingual transfer analyses
#### A.2.1 Correlation
We report in Table 4 the correlation between the cross-lingual similarity of
the hidden representations and the cross-lang gap between the source and the
target, averaged across all target languages and all layers. The correlation
is strong and significant for all tasks, for both the fine-tuned and the
pretrained models. This shows that the multilingual alignment occurring
within the models, learnt during pretraining, is strongly related to
cross-lingual transfer.
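The correlations reported here are Spearman rank correlations, i.e. the Pearson correlation of the ranks. A stdlib-only sketch (a hypothetical helper, assuming distinct values so that tie-averaged ranks are unnecessary):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Ties are not average-ranked in this sketch, which is fine when
    all values are distinct."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

A perfectly monotone increasing relation yields +1 and a decreasing one yields -1, regardless of the actual magnitudes.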
We report in Figure 3 the detail of this correlation per layer. For the
pretrained model, we observe the same distribution for each task, with layer 6
being the most correlated with cross-lingual transfer. We observe large
variations in the fine-tuned case, the most notable being NER. This
illustrates the task-specific aspect of the relation between cross-lingual
similarity and cross-lingual transfer. More precisely, in the case of NER, the
sharp increase and decrease in the upper part of the model provides new
evidence that, for this task, fine-tuning strongly impacts cross-lingual
similarity in the upper layers, which correlates with cross-language
transfer.
Figure 3: Spearman correlation between cross-lingual similarity (CKA between
English and the target representations) and the cross-lang gap, averaged over
all 17 target languages, for each layer.

Figure 4: Cross-lingual similarity (CKA) (§4.2) of hidden representations of
source language (English) sentences with target languages, for mBERT
fine-tuned on NER. The higher the CKA value, the greater the similarity.
| Task | X-Gap vs. X-Similarity (Fine-Tuned) | X-Gap vs. X-Similarity (Pretrained) |
|---|---|---|
| Parsing | 0.76 | 0.79 |
| POS | 0.74 | 0.82 |
| NER* | 0.47 | 0.43 |
Table 4: Spearman rank correlation between the cross-lingual gap (X-Lang Gap)
and the cross-lingual similarity between the source and the target languages,
for the fine-tuned models and the pretrained model, averaged over all the
hidden layers and all the 17 target languages (sample size per task: 17).
*For NER, the cross-lang gap is measured on WikiNER data and not on the
parallel data itself, in contrast with parsing and POS tagging. The complete
list of languages can be found in Appendix A.1.2.
Random-init of layers:

| Eval | Ref | All | 1 | 2 | 1-2 | 3-4 | 1-3 | 4-6 | 7-9 | 10-12 | 1-4 | 5-8 | 9-12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *Parsing* | | | | | | | | | | | | | |
English Dev | 88.52 | 74.66 | 87.77 | 88.03 | 87.28 | 86.81 | 83.77 | 85.86 | 87.53 | 88.78 | 84.30 | 85.41 | 88.35
English Test | 88.59 | 74.58 | 87.77 | 88.09 | 87.25 | 86.79 | 83.37 | 85.54 | 87.36 | 88.62 | 83.10 | 85.37 | 88.69
French | 68.94 | 3.70 | 65.73 | 65.21 | 55.31 | 61.31 | 43.81 | 61.77 | 67.03 | 69.36 | 37.29 | 61.82 | 69.26
German | 67.43 | 4.73 | 64.97 | 65.20 | 57.08 | 60.62 | 47.85 | 58.93 | 64.12 | 66.67 | 36.05 | 59.37 | 67.21
Turkish | 28.40 | 2.76 | 21.65 | 23.77 | 16.78 | 21.21 | 10.69 | 20.23 | 25.39 | 30.43 | 9.70 | 20.94 | 29.33
Indonesian | 45.13 | 4.99 | 43.33 | 43.48 | 39.83 | 39.09 | 33.06 | 40.65 | 44.42 | 46.96 | 30.35 | 40.85 | 47.53
Russian | 59.70 | 2.95 | 57.81 | 57.53 | 54.10 | 53.51 | 47.01 | 52.37 | 56.45 | 61.41 | 38.58 | 52.41 | 60.72
Arabic | 23.37 | 3.19 | 23.66 | 23.49 | 21.01 | 19.55 | 16.17 | 18.84 | 20.70 | 24.54 | 13.26 | 18.27 | 23.93
| *POS* | | | | | | | | | | | | | |
English Dev | 96.45 | 87.47 | 96.04 | 96.06 | 95.92 | 95.81 | 95.38 | 95.43 | 96.25 | 96.58 | 94.01 | 95.35 | 96.39
English Test | 96.53 | 87.71 | 96.08 | 96.24 | 95.94 | 95.72 | 95.40 | 95.59 | 96.34 | 96.74 | 94.05 | 95.45 | 96.51
French | 88.25 | 28.96 | 86.70 | 87.66 | 79.84 | 87.14 | 69.43 | 86.42 | 86.94 | 88.30 | 62.28 | 86.37 | 88.26
German | 90.63 | 28.93 | 88.26 | 89.53 | 82.26 | 88.39 | 71.63 | 88.30 | 90.26 | 90.83 | 59.16 | 89.12 | 90.64
Turkish | 72.65 | 32.23 | 62.17 | 66.17 | 54.50 | 63.22 | 47.77 | 66.37 | 70.91 | 72.92 | 44.16 | 69.30 | 73.08
Indonesian | 84.06 | 36.98 | 82.15 | 82.89 | 80.13 | 81.40 | 75.94 | 81.99 | 83.78 | 84.42 | 72.42 | 82.59 | 84.09
Russian | 82.97 | 32.63 | 83.14 | 83.63 | 81.95 | 82.26 | 77.93 | 81.69 | 82.98 | 81.76 | 70.33 | 82.56 | 83.19
Arabic | 56.66 | 19.61 | 58.10 | 58.06 | 57.89 | 55.62 | 57.93 | 54.69 | 56.04 | 55.97 | 52.28 | 53.60 | 58.84
| *NER* | | | | | | | | | | | | | |
English Dev | 83.29 | 56.99 | 82.04 | 82.26 | 79.52 | 80.36 | 76.22 | 79.53 | 82.18 | 82.53 | 69.31 | 80.05 | 82.47
English Test | 83.06 | 56.56 | 81.46 | 82.00 | 79.63 | 79.25 | 76.68 | 78.93 | 81.64 | 82.39 | 69.08 | 79.91 | 82.27
French | 76.76 | 35.35 | 75.46 | 77.57 | 69.94 | 72.83 | 65.14 | 70.34 | 75.42 | 75.90 | 55.79 | 73.12 | 75.77
German | 76.68 | 18.95 | 73.73 | 75.39 | 66.18 | 70.12 | 56.50 | 69.53 | 75.38 | 77.11 | 42.37 | 71.14 | 75.50
Turkish | 67.64 | 20.76 | 62.54 | 64.84 | 52.20 | 57.11 | 53.03 | 60.59 | 65.66 | 64.87 | 39.38 | 61.43 | 66.62
Indonesian | 53.47 | 21.20 | 49.19 | 49.27 | 46.50 | 46.87 | 43.75 | 47.83 | 54.39 | 48.71 | 36.11 | 46.06 | 48.23
Russian | 58.23 | 7.43 | 55.63 | 58.08 | 50.67 | 52.89 | 42.83 | 46.13 | 53.38 | 58.09 | 34.66 | 52.03 | 59.12
Arabic | 41.81 | 5.49 | 35.79 | 34.80 | 32.37 | 32.31 | 26.21 | 38.88 | 38.55 | 40.83 | 21.85 | 38.67 | 41.23
Table 5: Zero-shot cross-lingual performance when applying Random-init to
specific sets of consecutive layers, compared to the Ref model. The source
language is English. The baseline model All (all layers randomly initialized)
corresponds to a model trained from scratch on the task. For reproducibility
purposes, we report performance on the validation set (English Dev). For all
target languages, we report the scores on the test split of each dataset. Each
score is the average of 5 runs with different random seeds. For more insight
into the variability of our results, we report the min., median and max. value
of the standard deviations (std) across runs with different random seeds for
each task: Parsing: 0.02/0.34/1.48, POS: 0.01/0.5/2.38, NER: 0.0/0.47/2.62
(std min/median/max). (Legend: $\geq$ Ref; $<$ Ref; $\leq$ 5 points; $\leq$ 10
points.)
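The Random-init operation used in these tables (re-initializing a chosen set of layers while keeping the remaining pretrained weights) can be sketched abstractly. The dictionary-of-lists "model" below is a toy stand-in for a 12-layer encoder, not the actual mBERT implementation:

```python
import random

def random_init_layers(params, layers_to_reset, seed=0):
    """Replace the parameters of the given layer indices with fresh
    random values, leaving all other layers untouched.

    `params` is a toy stand-in for a 12-layer encoder: a dict mapping
    layer index to a list of floats."""
    rng = random.Random(seed)
    reset = dict(params)  # shallow copy; untouched layers keep their weights
    for layer in layers_to_reset:
        # Small Gaussian values mimic a typical transformer initializer.
        reset[layer] = [rng.gauss(0.0, 0.02) for _ in params[layer]]
    return reset

# Toy 12-layer "model"; Random-init on layers 0-1 only.
model = {i: [1.0, 1.0] for i in range(12)}
perturbed = random_init_layers(model, layers_to_reset=[0, 1])
```

In the actual experiments this operation is applied to mBERT's transformer layers before fine-tuning, so only the chosen layers lose their pretrained weights.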
Random-init of layers:

| Source - Target | Ref | $\Delta$ 0-1 | $\Delta$ 2-3 | $\Delta$ 4-5 | $\Delta$ 6-7 | $\Delta$ 8-9 | $\Delta$ 10-11 |
|---|---|---|---|---|---|---|---|
| *Parsing* | | | | | | | |
EN - English | 88.98 | -0.96 | -0.66 | -0.93 | -0.55 | 0.04 | -0.09 |
EN - Arabic | 35.88 | -4.05 | -2.38 | -3.16 | -0.78 | 1.74 | 1.68 |
EN - French | 74.04 | -21.30 | -6.84 | -2.93 | -0.69 | 0.03 | 0.76 |
EN - German | 70.34 | -15.06 | -9.26 | -4.75 | -1.54 | -0.29 | 1.82 |
EN - Turkish | 34.03 | -16.37 | -10.10 | -5.11 | -3.71 | 0.43 | 1.43 |
EN - Indo | 44.11 | -10.57 | -5.87 | -2.66 | -0.96 | -0.74 | 0.73 |
EN - Russian | 62.52 | -7.31 | -5.37 | -2.84 | -1.09 | 0.44 | 0.71 |
EN - Portuguese | 68.59 | -25.83 | -6.22 | -2.97 | -0.77 | 0.15 | 0.82 |
EN - Spanish | 69.96 | -18.05 | -5.74 | -2.78 | -0.96 | 0.13 | 0.72 |
EN - Finnish | 48.42 | -24.25 | -9.48 | -4.39 | -2.51 | -0.28 | 0.22 |
EN - Italian | 74.54 | -30.54 | -9.63 | -4.18 | -1.32 | -0.12 | 0.90 |
EN - Slovenian | 73.04 | -29.89 | -6.52 | -3.00 | -1.68 | -0.05 | 0.18 |
EN - Czech | 60.44 | -31.84 | -10.69 | -4.61 | -1.82 | 0.18 | 1.17 |
EN - Polish | 55.23 | -23.57 | -9.11 | -3.34 | -1.83 | 0.28 | 0.89 |
EN - Hindi | 28.86 | -9.13 | -7.58 | -5.84 | -2.50 | 1.35 | 1.49 |
EN - Chinese | 27.48 | -7.31 | -4.47 | -1.65 | -0.62 | 0.65 | 1.32 |
EN - Japanese | 11.99 | -4.36 | -2.76 | -1.91 | -1.19 | 0.47 | 1.12 |
EN - X (mean) | 53.23 | -15.77 | -6.51 | -3.39 | -1.47 | 0.29 | 1.00 |
Ru - Russian | 85.15 | -0.82 | -1.38 | -1.51 | -0.86 | -0.29 | 0.18 |
Ru - English | 61.40 | -8.37 | -3.55 | -3.90 | -0.72 | 1.77 | 1.14 |
Ru - Arabic | 59.41 | -5.65 | -5.26 | -5.15 | -1.47 | 0.24 | 0.16 |
Ru - French | 65.84 | -8.87 | -2.93 | -1.81 | -1.05 | 3.81 | 1.24 |
Ru - German | 65.90 | -7.02 | -4.19 | -1.97 | -1.45 | 2.58 | 2.05 |
Ru - Turkish | 32.20 | -13.13 | -7.18 | -6.82 | -3.77 | -0.85 | 1.21 |
Ru - Indo | 47.59 | -4.74 | -2.99 | -2.30 | -1.81 | 0.04 | 1.02 |
Ru - Portuguese | 66.41 | -11.17 | -1.61 | -1.09 | -1.25 | 4.16 | 1.94 |
Ru - Spanish | 66.74 | -4.52 | -1.38 | -0.69 | -0.97 | 2.95 | 1.37 |
Ru - Finnish | 52.92 | -15.43 | -6.59 | -4.09 | -1.35 | 0.12 | 0.77 |
Ru - Italian | 65.28 | -12.97 | -3.56 | -2.34 | -1.46 | 3.16 | 1.55 |
Ru - Slovenian | 62.91 | -16.67 | -2.71 | -3.18 | -1.03 | 0.31 | 1.08 |
Ru - Czech | 72.77 | -11.95 | -4.17 | -3.13 | -1.57 | -0.33 | 0.30 |
Ru - Polish | 66.07 | -5.70 | -3.22 | -2.57 | -1.54 | -0.12 | 0.54 |
Ru - Hindi | 28.67 | -6.02 | -5.77 | -5.27 | -3.75 | -0.06 | 0.99 |
Ru - Chinese | 28.77 | -4.66 | -4.38 | -3.22 | -1.80 | 0.15 | 1.12 |
Ru - Japanese | 15.10 | -4.89 | -3.56 | -3.95 | -3.11 | 0.68 | 0.73 |
Ru - X (Mean) | 55.41 | -7.69 | -3.71 | -3.13 | -1.70 | 0.92 | 0.94 |
Ar - Arabic | 59.54 | -0.78 | -2.14 | -1.20 | -0.67 | -0.27 | 0.08 |
Ar - English | 25.46 | -2.09 | -2.92 | -0.90 | -1.40 | -0.97 | -0.61 |
Ar - French | 28.92 | -4.85 | -1.45 | -0.25 | -2.72 | -1.60 | -0.88 |
Ar - German | 27.14 | -6.38 | -4.51 | -0.98 | -2.24 | 0.13 | 0.09 |
Ar - Turkish | 9.58 | -3.90 | -3.14 | -2.76 | -2.33 | 0.31 | 0.15 |
Ar - Indo | 36.16 | -5.85 | -4.86 | -1.71 | -0.68 | -0.17 | 0.58 |
Ar - Russian | 42.25 | -3.52 | -5.28 | -2.46 | -1.66 | -0.67 | -0.27 |
Ar - Portuguese | 34.71 | -4.80 | -1.22 | 0.10 | -2.98 | -0.33 | -0.24 |
Ar - Spanish | 31.95 | -4.02 | -0.15 | -0.44 | -1.46 | -0.77 | 0.38 |
Ar - Finnish | 28.18 | -9.89 | -7.03 | -3.17 | -1.81 | -0.58 | -0.42 |
Ar - Italian | 28.85 | -3.01 | 0.60 | 1.45 | -2.26 | -1.47 | -0.70 |
Ar - Slovenian | 35.78 | -9.73 | -4.97 | -2.21 | -1.43 | -0.41 | -0.56 |
Ar - Czech | 40.04 | -13.61 | -6.82 | -3.20 | -2.38 | -1.12 | -0.21 |
Ar - Polish | 41.16 | -8.46 | -5.52 | -2.48 | -1.48 | -0.47 | -0.55 |
Ar - Hindi | 10.24 | -2.46 | -2.86 | -2.57 | -1.55 | 1.00 | 0.14 |
Ar - Chinese | 11.46 | -2.42 | -2.43 | -1.26 | -0.82 | 0.23 | -0.05 |
Ar - Japanese | 6.66 | -1.28 | -0.79 | -1.20 | -1.04 | 0.74 | 0.30 |
Ar - X (Mean) | 27.97 | -4.91 | -3.17 | -1.48 | -1.68 | -0.36 | -0.14 |
Table 6: Parsing (LAS score) relative zero-shot cross-lingual performance of
mBERT with Random-init (Section 2.1) on pairs of consecutive layers, compared
to mBERT without any random initialization (Ref). In Src - Trg, Src indicates
the source language on which we fine-tune mBERT, and Trg the target language
on which we evaluate it. Src - X is the average across all 17 target languages
with X $\neq$ Src. (Legend: $\geq$ Ref; $<$ Ref; $\leq$ -2 points; $\leq$ -5
points.)
Random-init of layers:

| Source - Target | Ref | $\Delta$ 0-1 | $\Delta$ 2-3 | $\Delta$ 4-5 | $\Delta$ 6-7 | $\Delta$ 8-9 | $\Delta$ 10-11 |
|---|---|---|---|---|---|---|---|
| *POS* | | | | | | | |
En - English | 96.51 | -0.30 | -0.25 | -0.40 | -0.00 | 0.05 | 0.02 |
En - Arabic | 70.20 | -3.63 | -1.88 | -2.40 | -1.26 | -1.89 | -2.74 |
En - French | 89.16 | -9.68 | -2.09 | -1.49 | -1.03 | 0.29 | 0.59 |
En - German | 89.32 | -7.81 | -2.12 | -1.27 | -0.99 | -0.46 | -0.68 |
En - Turkish | 71.67 | -11.62 | -4.43 | -1.48 | -0.95 | 0.04 | -0.95 |
En - Indo | 71.44 | -6.39 | -2.80 | -1.74 | -0.59 | -0.41 | -1.10 |
En - Russian | 86.26 | -2.66 | -0.94 | -0.27 | 0.13 | 0.37 | 0.62 |
En - Portuguese | 86.51 | -10.84 | -1.83 | -1.44 | -0.81 | -0.01 | -0.14 |
En - Spanish | 87.26 | -8.09 | -1.30 | -1.36 | -1.13 | 0.20 | 0.17 |
En - Finnish | 84.85 | -20.00 | -8.09 | -2.77 | -0.97 | -0.06 | -0.86 |
En - Italian | 91.35 | -13.97 | -3.35 | -2.66 | -1.34 | -0.01 | 0.27 |
En - Slovenian | 89.64 | -16.46 | -2.41 | -1.09 | -0.18 | 0.34 | 0.19 |
En - Czech | 83.39 | -19.62 | -3.93 | -0.73 | -0.56 | 0.21 | 0.29 |
En - Polish | 81.45 | -13.33 | -3.52 | -1.19 | -1.22 | -0.50 | -0.16 |
En - Hindi | 65.43 | -10.04 | -2.70 | -2.89 | -3.25 | 3.00 | 0.28 |
En - Chinese | 67.89 | -3.04 | -2.82 | -3.59 | -0.29 | 0.66 | 0.29 |
En - Japanese | 48.86 | -2.19 | 1.52 | -1.51 | -1.13 | 1.42 | 1.79 |
En - X (Mean) | 79.37 | -8.94 | -2.49 | -1.66 | -0.88 | 0.20 | -0.14 |
Ru - Russian | 96.90 | -0.52 | -0.55 | -0.40 | -0.07 | 0.02 | -0.03 |
Ru - English | 82.55 | -20.72 | -7.06 | -5.01 | -3.93 | 0.74 | -1.57 |
Ru - Arabic | 79.30 | -4.04 | -1.48 | -2.06 | 0.64 | 0.01 | 0.47 |
Ru - French | 86.02 | -18.66 | -4.64 | -4.10 | -9.00 | -0.13 | -1.84 |
Ru - German | 84.90 | -12.50 | -4.80 | -2.79 | -3.90 | 0.47 | -1.82 |
Ru - Turkish | 69.92 | -15.20 | -2.06 | -0.55 | -1.41 | -0.11 | 0.68 |
Ru - Indo | 71.16 | -8.33 | -3.44 | -1.03 | -0.56 | -0.73 | 0.15 |
Ru - Portuguese | 84.24 | -19.56 | -7.15 | -3.00 | -7.78 | -0.15 | -2.08 |
Ru - Spanish | 84.84 | -13.64 | -4.09 | -2.66 | -7.67 | -0.35 | -2.48 |
Ru - Finnish | 81.08 | -18.55 | -5.42 | -1.37 | -1.00 | -0.16 | 0.02 |
Ru - Italian | 85.56 | -21.04 | -5.11 | -3.41 | -8.21 | -0.20 | -3.36 |
Ru - Slovenian | 85.37 | -14.65 | -3.53 | -1.72 | -2.00 | -0.15 | -0.15 |
Ru - Czech | 87.37 | -8.43 | -1.99 | -0.71 | -1.16 | -0.50 | -0.28 |
Ru - Polish | 86.42 | -4.41 | -1.89 | -0.64 | -0.44 | -0.21 | 0.09 |
Ru - Hindi | 65.49 | -1.16 | 0.41 | -1.49 | -2.17 | 1.13 | 3.20 |
Ru - Chinese | 65.85 | -5.12 | -1.43 | -0.32 | -0.74 | -0.13 | -0.47 |
Ru - Japanese | 46.91 | -0.72 | 2.16 | 0.00 | -1.30 | 1.15 | 1.12 |
Ru - X (Mean) | 79.25 | -10.08 | -2.83 | -1.65 | -2.74 | 0.01 | -0.45 |
Ar - Arabic | 79.28 | -0.35 | -0.49 | -0.36 | -0.19 | -0.05 | -0.00 |
Ar - English | 63.26 | -3.32 | -1.09 | -1.72 | -1.68 | -1.03 | -1.78 |
Ar - French | 63.33 | -4.41 | -1.53 | -1.14 | -1.30 | -0.44 | -0.92 |
Ar - German | 63.23 | -4.95 | -2.97 | -1.04 | -1.58 | -0.53 | -2.09 |
Ar - Turkish | 60.99 | -13.76 | -8.74 | -2.86 | -4.49 | -1.08 | -1.88 |
Ar - Indo | 64.24 | -5.11 | -3.43 | -1.87 | -0.58 | -0.28 | -0.63 |
Ar - Russian | 74.52 | -4.01 | -2.37 | -2.40 | -1.84 | -1.69 | -2.03 |
Ar - Portuguese | 67.28 | -6.51 | -2.84 | -1.30 | -1.23 | 0.04 | -0.96 |
Ar - Spanish | 64.84 | -3.08 | -0.51 | -0.74 | -0.48 | 0.02 | -0.14 |
Ar - Finnish | 64.28 | -19.72 | -8.32 | -3.72 | -2.56 | -1.64 | -3.03 |
Ar - Italian | 63.55 | -4.25 | -1.60 | -0.94 | -1.15 | 0.14 | -0.64 |
Ar - Slovenian | 68.06 | -12.21 | -4.31 | -2.17 | -1.85 | 0.68 | -1.81 |
Ar - Czech | 72.65 | -13.57 | -3.14 | -1.88 | -1.77 | -1.35 | -1.57 |
Ar - Polish | 75.00 | -8.87 | -2.94 | -1.46 | -0.62 | -1.00 | -1.37 |
Ar - Hindi | 62.29 | -7.31 | -6.07 | -2.42 | -1.26 | 0.19 | -1.72 |
Ar - Chinese | 56.51 | -5.02 | -4.94 | -2.10 | -1.35 | -1.02 | -1.77 |
Ar - Japanese | 47.06 | -3.34 | -3.34 | -0.65 | -0.89 | -1.54 | -0.35 |
Ar - X (Mean) | 64.81 | -6.73 | -3.50 | -1.63 | -1.56 | -0.73 | $-1.29$ |
Table 7: POS tagging relative zero-shot cross-lingual performance of mBERT
with Random-init (Section 2.1) on pairs of consecutive layers, compared to
mBERT without any random initialization (Ref). In Src - Trg, Src indicates the
source language on which we fine-tune mBERT, and Trg the target language on
which we evaluate it. Src - X is the average across all 17 target languages
with X $\neq$ Src. (Legend: $\geq$ Ref; $<$ Ref; $\leq$ -2 points; $\leq$ -5
points.)
Random-init of layers:

| Source - Target | Ref | $\Delta$ 0-1 | $\Delta$ 2-3 | $\Delta$ 4-5 | $\Delta$ 6-7 | $\Delta$ 8-9 | $\Delta$ 10-11 |
|---|---|---|---|---|---|---|---|
| *NER* | | | | | | | |
EN - English | 83.27 | -2.64 | -2.12 | -1.41 | -0.61 | -0.21 | -0.14 |
EN - French | 76.20 | -4.41 | -2.72 | -2.09 | -0.30 | 0.51 | 0.08 |
EN - German | 75.58 | -8.25 | -4.65 | -2.50 | -0.40 | 0.06 | 0.26 |
EN - Turkish | 66.23 | -8.71 | -6.57 | -2.16 | -1.01 | 0.51 | 0.51 |
EN - Indo | 50.24 | -2.94 | -1.43 | -2.54 | 2.49 | -0.70 | 0.82 |
EN - Portuguese | 76.09 | -4.66 | -0.88 | -1.16 | -0.57 | 0.62 | -0.70 |
EN - Spanish | 67.00 | -0.99 | 4.37 | 2.03 | -1.69 | 1.57 | -1.38 |
EN - Finnish | 75.61 | -11.89 | -4.47 | -2.29 | 0.63 | 0.54 | -0.37 |
EN - Italian | 78.48 | -6.65 | -3.64 | -3.08 | -1.32 | -0.30 | -0.28 |
EN - Slovenian | 72.80 | -10.37 | -2.96 | -3.11 | -0.36 | 0.10 | -0.72 |
EN - Czech | 76.90 | -8.02 | -6.81 | -3.17 | 0.09 | 1.00 | 0.39 |
EN - Russian | 60.20 | -5.87 | -6.65 | -5.71 | -2.82 | -0.82 | -0.37 |
EN - Arabic | 39.15 | -8.98 | -5.31 | -1.97 | 1.56 | 0.31 | -0.98 |
EN - Polish | 77.20 | -8.32 | -5.53 | -3.05 | -0.06 | 0.67 | 0.09 |
EN - Hindi | 60.61 | -12.08 | -13.88 | -9.23 | -0.91 | -1.25 | 2.08 |
EN - Chinese | 37.74 | -13.68 | -6.49 | -4.59 | -2.41 | -5.23 | -1.00 |
EN - Japanese | 25.19 | -11.40 | -7.54 | -4.67 | -2.53 | -3.45 | -0.23 |
EN - X (mean) | 64.17 | -8.28 | -5.09 | -3.07 | -0.79 | -0.47 | -0.13 |
Ru - Russian | 88.20 | -2.08 | -2.13 | -1.52 | -0.64 | -0.33 | -0.13 |
Ru - English | 56.62 | -13.83 | -8.52 | -4.70 | -1.50 | -0.76 | 1.38 |
Ru - French | 67.35 | -18.45 | -9.70 | -4.32 | -1.76 | -1.77 | 2.29 |
Ru - German | 69.23 | -13.94 | -9.01 | -5.80 | -2.98 | -1.65 | 0.40 |
Ru - Turkish | 63.64 | -18.52 | -10.06 | -6.01 | -4.16 | -0.67 | -0.27 |
Ru - Indo | 41.92 | -10.29 | -7.20 | -5.19 | -1.20 | -1.91 | 0.50 |
Ru - Portuguese | 67.33 | -21.23 | -8.27 | -8.84 | -2.83 | -1.83 | 1.51 |
Ru - Spanish | 69.15 | -16.74 | -10.00 | -8.16 | -5.80 | -1.66 | 0.26 |
Ru - Finnish | 73.03 | -17.17 | -8.70 | -5.88 | -2.12 | 0.86 | 1.48 |
Ru - Italian | 70.05 | -19.47 | -9.54 | -6.90 | -3.06 | 0.73 | 1.04 |
Ru - Slovenian | 71.18 | -12.02 | -9.48 | -3.61 | -0.70 | 1.16 | 2.14 |
Ru - Czech | 74.87 | -17.93 | -10.59 | -6.34 | -4.02 | 0.17 | -0.23 |
Ru - Arabic | 38.63 | -8.67 | -6.81 | -0.13 | -0.65 | -1.34 | -0.29 |
Ru - Polish | 75.16 | -15.38 | -7.97 | -6.33 | -3.07 | -0.63 | 1.34 |
Ru - Hindi | 58.01 | -19.60 | -12.36 | -6.18 | 0.93 | -1.64 | 1.17 |
Ru - Chinese | 43.86 | -23.73 | -11.68 | -6.80 | -4.27 | -4.13 | -6.01 |
Ru - Japanese | 30.79 | -16.80 | -11.29 | -5.26 | -2.77 | -3.99 | -6.91 |
Ru - X (Mean) | 62.13 | -15.85 | -9.36 | -5.50 | -2.44 | -1.16 | -0.06 |
Ar - Arabic | 87.97 | -2.37 | -2.11 | -0.96 | -0.39 | -0.15 | 0.21 |
Ar - French | 75.21 | -18.71 | -8.31 | -3.76 | -0.19 | 0.82 | 1.07 |
Ar - German | 74.24 | -15.25 | -7.19 | -3.72 | -1.38 | -0.04 | 0.27 |
Ar - Turkish | 68.45 | -14.89 | -8.65 | -2.78 | -0.30 | 0.98 | 1.90 |
Ar - Indo | 54.65 | -13.86 | -10.95 | -8.53 | -4.66 | -2.82 | 0.09 |
Ar - Portuguese | 74.67 | -20.42 | -10.54 | -3.17 | -1.59 | 0.10 | 1.28 |
Ar - Spanish | 74.88 | -18.16 | -12.18 | -3.06 | -1.95 | 0.52 | 0.63 |
Ar - Finnish | 78.01 | -18.79 | -8.84 | -4.30 | -2.03 | -0.30 | 0.19 |
Ar - Italian | 75.76 | -16.37 | -7.73 | -3.98 | -1.49 | -0.06 | 0.74 |
Ar - Slovenian | 63.08 | -11.13 | -5.49 | 4.79 | 0.88 | 2.17 | 0.79 |
Ar - Czech | 74.70 | -21.93 | -10.95 | -5.84 | -2.42 | -1.36 | 0.09 |
Ar - Russian | 45.51 | -7.59 | -5.81 | -2.63 | 0.15 | -0.22 | 0.47 |
Ar - English | 57.94 | -12.79 | -6.03 | -4.57 | -0.32 | 0.29 | 1.65 |
Ar - Polish | 77.29 | -20.61 | -9.47 | -5.93 | -2.64 | -1.09 | -0.19 |
Ar - Hindi | 65.31 | -14.95 | -9.12 | -3.84 | -1.48 | 0.72 | 0.98 |
Ar - Chinese | 45.88 | -25.72 | -10.67 | -3.99 | -1.41 | -2.72 | 0.57 |
Ar - Japanese | 24.75 | -14.66 | -5.19 | -3.82 | -0.99 | -1.17 | 1.50 |
Ar - X (Mean) | 65.59 | -16.10 | -8.42 | -3.73 | -1.40 | -0.25 | 0.67 |
Table 8: NER (F1 score) relative zero-shot cross-lingual performance of mBERT
with Random-init (Section 2.1) on pairs of consecutive layers, compared to
mBERT without any random initialization (Ref). In Src - Trg, Src indicates the
source language on which we fine-tune mBERT, and Trg the target language on
which we evaluate it. Src - X is the average across all 17 target languages
with X $\neq$ Src. (Legend: $\geq$ Ref; $<$ Ref; $\leq$ -2 points; $\leq$ -5
points.)
Figure 5: Cross-lingual similarity (CKA) (§4.2) of hidden representations of
source language (English) sentences with target language sentences, for
fine-tuned and pretrained mBERT. The higher the CKA value, the greater the
similarity.
Figure 6: Cross-lingual similarity (CKA) (§4.2) of hidden representations of
source language (English) sentences with target language sentences, for
fine-tuned parsing models with and without Random-init. The higher the CKA
value, the greater the similarity.
Figure 7: Cross-lingual similarity (CKA) (§4.2) of hidden representations of
source language (English) sentences with target language sentences, for
fine-tuned POS models with and without Random-init. The higher the CKA value,
the greater the similarity.
Figure 8: Cross-lingual similarity (CKA) (§4.2) of hidden representations of
source language (English) sentences with target language sentences, for
fine-tuned NER models with and without Random-init. The higher the CKA value,
the greater the similarity.